On Mon, Feb 01, 2021 at 07:18:24PM +0200, Nir Soffer wrote:
> Assuming we could use:
>
> io_timeout = 10
> renewal_retries = 8
>
> The worst case would be:
>
> 00 sanlock renewal succeeds
> 19 storage fails
> 20 sanlock try to renew lease 1/7 (timeout=10)
> 30 sanlock renewal timeout
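Spelled out, the worst case above is just renewal-interval-plus-retries
arithmetic. A minimal sketch in Python, assuming (as the timeline does) that
renewals run every 2 * io_timeout seconds and that each failed attempt blocks
for io_timeout seconds before the next retry; these assumptions are read off
the quoted example, not taken from sanlock's source:

    def worst_case_expiry(io_timeout, renewal_retries):
        renewal_interval = 2 * io_timeout   # assumed time between renewal attempts
        # worst case: storage fails just before the next attempt, and
        # every attempt then blocks for io_timeout before the next retry
        return renewal_interval + renewal_retries * io_timeout

    print(worst_case_expiry(10, 8))   # -> 100 seconds after the last good renewal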
On Sat, Sep 05, 2020 at 12:25:45AM +0300, Nir Soffer wrote:
> > > /var/log/sanlock.log contains a repeating:
> > > add_lockspace
> > >
> > > e1270474-108c-4cae-83d6-51698cffebbf:1:/dev/e1270474-108c-4cae-83d6-51698cffebbf/ids:0 conflicts with name of list1 s1
On Mon, Jun 10, 2019 at 10:59:43PM +0300, Nir Soffer wrote:
> > [root@uk1-ion-ovm-18 ~]# pvscan
> > /dev/mapper/36000d31005697814: Checksum error at offset 4397954425856
> > Couldn't read volume group metadata from /dev/mapper/36000d31005697814.
> > Metada
On Wed, Oct 17, 2018 at 11:37:33PM +0300, Nir Soffer wrote:
> - sanlock reads 1MiB from the logical volume "domain-uuid/ids" every 20
> seconds
Every 20 seconds sanlock reads 1MB and writes 512 bytes to monitor and
renew its leases in the lockspace.
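That works out to almost no bandwidth per host; trivial arithmetic for anyone
sizing storage:

    RENEWAL_INTERVAL = 20            # seconds between renewals
    READ_BYTES = 1024 * 1024         # the whole ids area is read each renewal
    WRITE_BYTES = 512                # one sector is rewritten to renew the lease

    print(READ_BYTES / RENEWAL_INTERVAL)    # ~52429 B/s (~51 KiB/s) read per host
    print(WRITE_BYTES / RENEWAL_INTERVAL)   # ~26 B/s written per host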
On Sat, Nov 11, 2017 at 12:24:25AM +0000, Nir Soffer wrote:
> David, do you know if 4k disks over NFS works for sanlock?
When using files, sanlock always does 512 byte i/o. This can be a problem
when there are 4k disks used under NFS. On disks, sanlock detects the
sector size (with libblkid) and
(I need to include this somewhere in the man page.)
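As an aside, the sector size libblkid reports for a block device can also be
read straight from the kernel with the BLKSSZGET ioctl; a small Linux-only
sketch (the device path is just an example). The ioctl has no meaning for a
file on NFS, which is part of why sanlock sticks to 512-byte i/o on files:

    import fcntl
    import struct

    BLKSSZGET = 0x1268   # <linux/fs.h>: logical sector size of a block device

    def logical_sector_size(path):
        with open(path, "rb") as dev:
            buf = fcntl.ioctl(dev, BLKSSZGET, struct.pack("I", 0))
        return struct.unpack("I", buf)[0]

    print(logical_sector_size("/dev/sda"))   # 512, or 4096 on 4k-native disks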
commit 6313c709722b3ba63234a75d1651a160bf1728ee
Author: David Teigland
Date:   Wed Mar 9 11:58:21 2016 -0600

    sanlock: renewal history

    Keep a history of read and write latencies for a lockspace.
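The commit message only says "keep a history"; as an illustration of the shape
of such a structure, a fixed-size ring of per-renewal latencies might look
like the sketch below (names and sizes are mine, not sanlock's):

    import time
    from collections import deque

    class RenewalHistory:
        def __init__(self, size=180):            # ~1 hour of samples at 20s renewals
            self.samples = deque(maxlen=size)    # oldest sample drops off automatically

        def record(self, read_ms, write_ms):
            self.samples.append((time.time(), read_ms, write_ms))

        def worst_read_ms(self):
            return max((s[1] for s in self.samples), default=0)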
On Mon, Jan 23, 2017 at 09:50:38PM +0200, Nir Soffer wrote:
> >> The major issue is sanlock, if it is maintaining a lease on storage,
> >> updating sanlock will cause the host to reboot. Sanlock is not
> >> petting the host watchdog because you killed sanlock during the
> >> upgrade, the watchdog
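That reboot is simply the watchdog contract playing out: some process must
keep writing to the watchdog device, and if it stops while leases are live,
the timer expires and the hardware resets the host. The generic /dev/watchdog
pattern, sketched below; this is not wdmd itself, and leases_held() is a
hypothetical stand-in:

    import time

    def leases_held():
        return True    # hypothetical: whether sanlock still holds leases

    # opening /dev/watchdog arms the timer; the host resets if no write
    # arrives before the timeout expires
    with open("/dev/watchdog", "wb", buffering=0) as wd:
        while leases_held():
            wd.write(b"\0")    # pet the watchdog
            time.sleep(10)
        wd.write(b"V")         # "magic close": disarm before a clean exit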
On Thu, Jun 02, 2016 at 06:47:37PM +0300, Nir Soffer wrote:
> > This is a mess that's been caused by improper use of storage, and various
> > sanity checks in sanlock have all reported errors for "impossible"
> > conditions indicating that something catastrophic has been done to the
> > storage it'
> verify_leader 2 wrong space name
> 4643f652-8014-4951-8a1a-02af41e67d08
> f757b127-a951-4fa9-bf90-81180c0702e6
> /dev/f757b127-a951-4fa9-bf90-81180c0702e6/ids
> leader1 delta_acquire_begin error -226 lockspace
> f757b127-a951-4fa9-bf90-81180c0702e6 host_id 2
VDSM has tried to join VG/lockspace/
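In other words, verify_leader compared the lockspace name it was asked to
join against the name found in the leader record it read back from the ids
LV, and the two differ. Schematically (the dict stands in for the on-disk
leader record; -226 is just the error number from the log, not a symbol from
sanlock's headers):

    def verify_leader(expected_space_name, leader):
        # leader is the record read back from the ids volume
        if leader["space_name"] != expected_space_name:
            return -226    # "wrong space name", as in the log above
        return 0

    leader = {"space_name": "4643f652-8014-4951-8a1a-02af41e67d08"}
    print(verify_leader("f757b127-a951-4fa9-bf90-81180c0702e6", leader))   # -226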
On Wed, Mar 02, 2016 at 12:15:17AM +0200, Nir Soffer wrote:
> 1. Stop engine, so it will not try to start vdsm
> 2. Stop vdsm on all hosts, so they do not try to acquire a host id with
> sanlock
> This does not affect running vms
> 3. Fix the permissions on the ids file, via glusterfs mount
>
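For step 3, "fix the permissions" concretely means putting the ids file back
to vdsm:kvm ownership with mode 0660; a sketch, assuming uid/gid 36:36 as on
stock oVirt hosts and an example mount path (not taken from this thread):

    import os

    ids = "/rhev/data-center/mnt/glusterSD/example:_vol/<sd-uuid>/dom_md/ids"  # example path
    os.chown(ids, 36, 36)    # vdsm:kvm
    os.chmod(ids, 0o660)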
On Fri, Feb 19, 2016 at 11:34:28PM +0200, Nir Soffer wrote:
> On Fri, Feb 19, 2016 at 10:58 PM, Cameron Christensen wrote:
> > Hello,
> >
> > I am using glusterfs storage and ran into a split-brain issue. One of the
> > files affected by split-brain was dom_md/ids. In attempts to fix the
> > spli
On Tue, Jun 19, 2012 at 01:29:30PM -0400, Daniel J Walsh wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 06/19/2012 12:13 PM, David Teigland wrote:
> > type=AVC msg=audit(1340053766.745:7): avc: denied { open } for
> > pid=1908 comm="wdmd"
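For anyone grepping audit logs for these, the interesting fields of an AVC
record can be pulled out with a few lines; a rough sketch matched against the
record quoted above:

    import re

    line = ('type=AVC msg=audit(1340053766.745:7): avc: denied { open } '
            'for pid=1908 comm="wdmd"')

    m = re.search(r'denied\s+\{ (\w+) \}.*?pid=(\d+)\s+comm="([^"]+)"', line)
    if m:
        perm, pid, comm = m.groups()
        print(perm, pid, comm)    # -> open 1908 wdmd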
On Tue, Jun 19, 2012 at 06:56:56PM +0300, Itamar Heim wrote:
> On 06/19/2012 12:50 AM, Trey Dockendorf wrote:
> >I don't know if this is the wrong place to ask this question, but I
> >just started seeing SELinux denials after adding an iSCSI storage
> >domain in oVirt using my first node. The node