Paul, Ilya, others,

Any inputs on this?

Thanks,
Shridhar


On Thu, 9 Apr 2020 at 12:30, Void Star Nill <[email protected]>
wrote:

> Thanks Ilya, Paul.
>
> I don't have the panic traces, and they are probably not related to rbd.
> I was merely describing our use case.
>
> On the setup that we manage, we have a software layer similar to the
> Kubernetes CSI that orchestrates volume map/unmap on behalf of the
> users. We currently use volume locks to protect the volumes from
> inadvertent concurrent write mounts, which could lead to filesystem
> corruption since most of the volumes run ext3/4.
>
> So in our orchestration, we take a shared lock on volumes that are
> mounted read-only, which allows multiple concurrent read-only mounts,
> and we take an exclusive lock for read-write mounts so that we can
> reject other RO/RW mounts while the first RW mount is in use.
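>
> For illustration, this is roughly the shape of it (a minimal sketch
> using the librbd Python bindings; the pool/image names, cookie and tag
> are placeholders, not our real orchestration code):
>
>     import rados
>     import rbd
>
>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>     cluster.connect()
>     ioctx = cluster.open_ioctx('rbd')            # placeholder pool
>     image = rbd.Image(ioctx, 'volume1')          # placeholder image
>     try:
>         # read-only mount: shared advisory lock, many holders allowed
>         image.lock_shared('node1-cookie', 'ro')
>         # a read-write mount would instead take:
>         #     image.lock_exclusive('node1-cookie')
>     except (rbd.ImageBusy, rbd.ImageExists):
>         # a conflicting lock is already held; reject this mount request
>         raise
>     # ... map and mount the volume; later, on unmap:
>     image.unlock('node1-cookie')
>     image.close()
>     ioctx.close()
>     cluster.shutdown()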
>
> All this orchestration happens in a distributed manner across all our
> compute nodes, so it is not easy to determine when we should kick out
> dead clients and reclaim their locks. As of now we have to intervene
> manually to resolve such issues, so I am looking for a way to do this
> deterministically.
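>
> For example, something along these lines with the librbd Python
> bindings (again just a sketch; client_is_dead() is a hypothetical
> liveness check - deciding that reliably is exactly the hard part):
>
>     import rados
>     import rbd
>
>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>     cluster.connect()
>     ioctx = cluster.open_ioctx('rbd')              # placeholder pool
>     with rbd.Image(ioctx, 'volume1') as image:     # placeholder image
>         # list_lockers() returns an empty result when unlocked,
>         # otherwise a dict with 'tag', 'exclusive' and a 'lockers'
>         # list of (client, cookie, address) tuples
>         lockers = image.list_lockers()
>         for client, cookie, addr in (lockers or {}).get('lockers', []):
>             if client_is_dead(addr):               # hypothetical check
>                 # ideally also blocklist the dead client first, e.g.
>                 # "ceph osd blacklist add <addr>", so it cannot write
>                 # after its lock is broken
>                 image.break_lock(client, cookie)
>     ioctx.close()
>     cluster.shutdown()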
>
> Thanks,
> Shridhar
>
>
> On Wed, 8 Apr 2020 at 02:48, Ilya Dryomov <[email protected]> wrote:
>
>> On Tue, Apr 7, 2020 at 6:49 PM Void Star Nill <[email protected]>
>> wrote:
>> >
>> > Hello All,
>> >
>> > Is there a way to specify that a lock (shared or exclusive) on an rbd
>> > volume be released if the client machine becomes unreachable or
>> > unresponsive?
>> >
>> > In one of our clusters, we use rbd locks on volumes to provide a kind
>> > of shared or exclusive access - to make sure there are no writers when
>> > someone is reading and no readers when someone is writing.
>> >
>> > However, we often run into issues when one of the machines hits a
>> > kernel panic or similar and the whole pipeline stalls.
>>
>> What kind of kernel panics are you running into?  Do you have any panic
>> messages or stack traces captured?
>>
>> Thanks,
>>
>>                 Ilya
>>
>
