Thank you very much, Ben, and your patchset based on my patch looks good.
On 2021/2/9 12:50, Benjamin Marzinski wrote:
I've actually managed to reproduce this issue with the latest code. All
it takes is two machines in the same FC zone, with one of them having a
FC driver that supports target
On Mon, Feb 08 2021 at 10:17am -0500,
Mike Snitzer wrote:
> On Fri, Feb 05 2021 at 9:03pm -0500,
> JeffleXu wrote:
>
> >
> >
> > On 2/6/21 2:39 AM, Mike Snitzer wrote:
> > > On Mon, Feb 01 2021 at 10:35pm -0500,
> > > Jeffle Xu wrote:
> > >
> > >> According to the definition of
There are cases where the wwid of a path changes due to LUN remapping
without triggering uevent for the changed path. Multipathd has no method
for trying to catch these cases, and corruption has resulted because of
it.
In order to have a better chance at catching these cases, multipath now
has a
If ev_remove_path() returns success, the path has very likely been
deleted. However, if pathinfo() returned something besides PATHINFO_OK
but ev_remove_path() succeeded, uev_add_path() was still accessing the
path afterwards, which would likely cause a use-after-free error.
Instead,
This patchset adds a new config option, recheck_wwid_time, to help deal
with devices getting remapped. It's based on Chongyun's patch, but
instead of always checking if the LUN is remapped, users can set how
many seconds the LUN must be down before it gets rechecked, or disable
this checking
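Presumably the new option would live in the defaults section of multipath.conf; a sketch, with the option name taken from the patchset description and the value purely illustrative:

```
defaults {
        # recheck a down path's WWID once it has been down for at
        # least this many seconds (value here is just an example)
        recheck_wwid_time 60
}
```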
I've actually managed to reproduce this issue with the latest code. All
it takes is two machines in the same FC zone, with one of them having a
FC driver that supports target mode and LIO.
To do it:
You can grab TGT_WWPN and INIT_WWPN from
/sys/class/fc_host/host/port_name
On the target machine
On Mon, Feb 08, 2021 at 04:52:41PM +0800, Jeffle Xu wrote:
> DM will iterate and poll all polling hardware queues of all target mq
> devices when polling IO for dm device. To mitigate the race introduced
> by iterating all target hw queues, a per-hw-queue flag is maintained
What is the
>
> I still don't fully understand. Above you said "this coredump doesn't
> seem to appear any more". Am I understanding correctly that you
> observed *other* core dumps instead?
>
No, it is not "instead".
As shown in https://www.spinics.net/lists/dm-devel/msg45293.html,
there are some
Hi Tushar,
On Mon, 2021-02-08 at 15:22 -0500, Mimi Zohar wrote:
> On Fri, 2021-01-29 at 16:45 -0800, Tushar Sugandhi wrote:
> > IMA does not measure duplicate buffer data since TPM extend is a very
> > expensive operation. However, in some cases for integrity critical
> > data, the measurement
Hi Tushar,
On Fri, 2021-01-29 at 16:45 -0800, Tushar Sugandhi wrote:
> IMA needs to support duplicate measurements of integrity
> critical data to accurately determine the current state of that data
> on the system. Further, since measurement of duplicate data is not
> required for all the use
Hi Tushar,
On Fri, 2021-01-29 at 16:45 -0800, Tushar Sugandhi wrote:
> diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c
>
> index c096ef8945c7..fbf359495fa8 100644
> --- a/security/integrity/ima/ima_queue.c
> +++ b/security/integrity/ima/ima_queue.c
> @@
Hi Tushar,
On Fri, 2021-01-29 at 16:45 -0800, Tushar Sugandhi wrote:
> IMA does not measure duplicate buffer data since TPM extend is a very
> expensive operation. However, in some cases for integrity critical
> data, the measurement of duplicate data is necessary to accurately
> determine the
On Fri, Feb 05 2021 at 9:03pm -0500,
JeffleXu wrote:
>
>
> On 2/6/21 2:39 AM, Mike Snitzer wrote:
> > On Mon, Feb 01 2021 at 10:35pm -0500,
> > Jeffle Xu wrote:
> >
> >> According to the definition of dm_iterate_devices_fn:
> >> * This function must iterate through each section of device
On Mon, 2021-02-08 at 18:49 +0800, lixiaokeng wrote:
>
>
> On 2021/2/8 17:50, Martin Wilck wrote:
> > On Mon, 2021-02-08 at 15:41 +0800, lixiaokeng wrote:
> > >
> > > Hi Martin,
> > >
> > > There is a _cleanup_ in device_new_from_nulstr. If uevent_thr
> > > exits in device_new_from_nulstr
Hi,
We encountered the following warnings during cold reboot tests on RHEL
since 8.3 (i.e., 4.18.0-259). Is this a known bug on 8.3 onwards? We
could not figure out the actual cause. Any insight would be highly
appreciated. Thanks in advance.
The call trace follows:
==
On Mon, 2021-02-08 at 15:41 +0800, lixiaokeng wrote:
>
> Hi Martin,
>
> There is a _cleanup_ in device_new_from_nulstr. If uevent_thr exits in
> device_new_from_nulstr and some keys are not appended to sd_device,
> the _cleanup_ will be called, which leads to multipathd crashing with
> the following stack.
On 2021/2/8 17:50, Martin Wilck wrote:
> On Mon, 2021-02-08 at 15:41 +0800, lixiaokeng wrote:
>>
>> Hi Martin,
>>
>> There is a _cleanup_ in device_new_from_nulstr. If uevent_thr exits in
>> device_new_from_nulstr and some keys are not appended to sd_device,
>> the _cleanup_ will be called,