On 09/20/2016 12:51 PM, Kristoffer Grönlund wrote:
> The force_unmount option is available in more recent versions of SLES as
> well, but not in SLES 11 SP4. You could try installing the upstream
> version of the Filesystem agent and see if that works for you.
Thanks Kristoffer for confirming.
Hi,
I have an issue while shutting down one of our clusters. The unmounting
of an OCFS2 filesystem (ocf:heartbeat:Filesystem) is triggering a node
fence (accordingly). This is because the script for stopping the
application is not killing all processes using the filesystem. Is there
a way to
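For reference (per the force_unmount follow-up above), on agent versions that support it the Filesystem agent can kill remaining users of the mountpoint during stop before unmounting. A sketch, where the resource ID, device, and mountpoint are placeholders, not from the original:

```shell
# Hypothetical primitive; force_unmount requires a newer Filesystem agent
# than the one shipped in SLES 11 SP4:
crm configure primitive p_fs_ocfs2 ocf:heartbeat:Filesystem \
    params device="/dev/sdb1" directory="/mnt/ocfs2" fstype="ocfs2" \
           force_unmount="true" \
    op stop timeout=60
# Roughly what force_unmount does during stop:
#   fuser -km /mnt/ocfs2   # kill processes still using the mountpoint
#   umount /mnt/ocfs2
```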
On 02/08/2016 11:30 AM, Kristoffer Grönlund wrote:
> It is my great pleasure to announce that Hawk 2.0.0 is released!
Hi Kristoffer,
Thanks for the announcement. It looks good!
Will this make it into SLES 12?
Thanks,
Jorge
On 01/13/2016 04:34 AM, Ulrich Windl wrote:
> Since an update of sbd in SLES11 SP4 (sbd-1.2.1-0.12.1), I see
> frequent syslog messages like these (grep "Watchdog enabled."
> /var/log/messages):
Hi,
This happened to me as well. It turned out I was running a monitor on
my SBD resource. I
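If the syslog messages track the monitor interval on the sbd stonith resource, disabling (or lengthening) that monitor is a quick check. A sketch; the resource ID is an assumption:

```shell
# Hypothetical primitive ID; interval=0 disables the recurring monitor,
# which on SLES 11 logs "Watchdog enabled." on every probe:
crm configure primitive stonith-sbd stonith:external/sbd \
    op monitor interval=0
```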
On 01/07/2016 05:28 AM, Nikhil Utane wrote:
> So my question is, how to pass the virtual IP to my UDPSend OCF agent so
> that it can then bind to the vip? This will ensure that all messages
> initiated by my UDPSend go from the vip.
Hi,
I don't know how ping -I does it (what system call it uses)
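One way to get the VIP into the agent: pass it as a resource parameter, which Pacemaker exports to the agent as an `OCF_RESKEY_<name>` environment variable. A minimal sketch; the parameter name `vip`, the primitive, and the agent name are assumptions:

```shell
# In the CIB (hypothetical primitive):
#   crm configure primitive p_udpsend ocf:local:UDPSend params vip=10.0.0.10
# Inside the agent script, read the parameter from the environment:
vip="${OCF_RESKEY_vip:-127.0.0.1}"   # default only for standalone testing
echo "binding UDPSend source address to ${vip}"
```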
On 12/11/2015 04:14 PM, Ulrich Windl wrote:
> The rest of the config is not displayed in XML, just the nodes.
Hi Ulrich,
I'm using SLES 11 SP4 and I see the node entries perfectly fine. I've
seen the behavior you're describing though, but can't remember what
triggered it (never found out). I
On 11/02/2015 12:59 PM, - - wrote:
> Is there a way to just start Website8086 or just reload it, without
> affecting the other resources.
Hi,
Try:
crm resource restart Website8086
or
crm resource stop Website8086
crm resource start Website8086
It works for me (without stopping all other
On 10/09/2015 09:06 AM, Ulrich Windl wrote:
> Did you try daemon_options="-d0"? (in clvmd resource)
I've just found this:
http://pacemaker.oss.clusterlabs.narkive.com/C5BaFych/ocf-lvm2-clvmd-resource-agent
...so apparently SUSE changed the resource agent's default of "-d0" to
"-d2" (from SP2 to
On 10/08/2015 06:04 AM, zulucloud wrote:
> are there any other ways?
Hi,
You might want to check external/vmware or external/vcenter. I've never
used them, but apparently one is used to fence via the hypervisor (ESXi
itself) and the other through vCenter.
--
Jorge
On 09/24/2015 08:32 AM, Sven Moeller wrote:
> Is it possible to get cLVM running with just one node? Is this possible by
> setting quorum policy to ignore?
Hi there,
Yes & Yes.
> Is it possible to get cLVM running without enabling Stonith?
I tried it now on SLES 11 SP4 (Pacemaker 1.1.12) and
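For completeness, the cluster properties involved in a single-node / no-fencing test setup look like this (test environments only; running cLVM without STONITH risks data corruption):

```shell
# Ignore loss of quorum so a lone node keeps running resources:
crm configure property no-quorum-policy=ignore
# Disable fencing entirely (unsafe outside of a lab):
crm configure property stonith-enabled=false
```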
On 09/15/2015 04:32 PM, Jorge Fábregas wrote:
> I have a situation where the watchdog provided by the hypervisor (z/VM)
> is not configurable (you can't change the heartbeat via the provided
> kernel module). SBD warns me about this and suggests the -T option (so
> it doesn't t
Hi,
I've finished my tests with SBD on x86 (using the emulated 6300esb
watchdog provided by qemu) but now I'm doing final tests on the target
platform (s390x).
I have a situation where the watchdog provided by the hypervisor (z/VM)
is not configurable (you can't change the heartbeat via the
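On SLES 11 the sbd options go in /etc/sysconfig/sbd; a sketch with the -T flag the warning suggests. The device path is a placeholder, not from the original:

```shell
# /etc/sysconfig/sbd
SBD_DEVICE="/dev/disk/by-id/<shared-disk>"   # placeholder path
# -W: use the hardware watchdog
# -T: do not try to change the watchdog timeout (needed when the driver,
#     like z/VM's vmwatchdog, does not allow reconfiguring it)
SBD_OPTS="-W -T"
```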
On 09/08/2015 09:29 AM, Jorge Fábregas wrote:
> Who's feeding the watchdog timer? What or where's the watchdog timer
> since there's none defined?
Arrgh, it was kdump. By running "chkconfig boot.kdump off" and
restarting, I got the expected behavior (permanent freeze w
Hi,
This is on SLES 11 SP4 (Pacemaker 1.1.12).
I recently found out, within the crm shell, that there's a reboot
option that you can use with standby. It will put the node in standby
mode until it gets rebooted (I like it). However, when I use it, there's
no indication (anywhere) that the
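The command in question, for reference (the node name is an assumption):

```shell
# Put the node in standby only until its next reboot (lifetime "reboot"),
# instead of the default "forever":
crm node standby sles11a reboot
```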
On 08/30/2015 01:15 PM, Jorge Fábregas wrote:
It's just that I expected it to appear as standby before the reboot
occurs.
Nevermind :) I just did a cibadmin -Q dump and saved both files
(before and after).
sles11a:~ # grep -i standby before.txt
nvpair id=nodes-sles11a-standby name
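Since reboot-lifetime attributes are transient, they land in the status section of the CIB rather than the nodes section, which may be why some views miss them; crm_attribute can query them directly (node name assumed):

```shell
# Query the transient (lifetime "reboot") standby attribute for a node:
crm_attribute -N sles11a -n standby -l reboot -G
```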
Hi,
For an active/passive cluster, using a non-cluster filesystem like ext3
over LVM (with cLVM and DLM), would you:
- include the VG activation in the same cloned group that hosts cLVM and
DLM? (top of screenshot) so that the VG is activated on both nodes (even
though this is not a
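A sketch of the alternative layout, with the VG activation in the failover group rather than in the clone; all resource IDs and the VG name are placeholders:

```shell
# Cloned infrastructure runs on every node:
#   crm configure group g_base p_dlm p_clvmd
#   crm configure clone cl_base g_base
# The VG activation travels with the application instead:
crm configure primitive p_vg ocf:heartbeat:LVM \
    params volgrpname="vg_data" exclusive="true"
crm configure group g_app p_vg p_fs p_ip
```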
On 08/29/2015 02:37 PM, Digimer wrote:
No need for clustered LVM; only the active node should see the PV. When
the passive node takes over, after connecting to the PV, it should run
pvscan, vgscan and lvscan before mounting the FS on the LV.
Keep your cluster simple; remove everything not needed.
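That takeover sequence, spelled out; the VG/LV names and mountpoint are placeholders:

```shell
# On the node taking over, once it can see the PV:
pvscan                                # rediscover physical volumes
vgscan                                # rediscover volume groups
lvscan                                # rediscover logical volumes
vgchange -ay vg_data                  # activate the volume group
mount /dev/vg_data/lv_data /srv/data  # then mount the filesystem
```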
On 08/28/2015 02:36 AM, Zhen Ren wrote:
Looks like colocation is what you want.
Please take a look at here:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-colocation.html
Hi Zhen,
I wasn't specific enough. As I just mentioned Ken, I was referring to
On 08/23/2015 03:35 PM, emmanuel segura wrote:
please, share your cluster logs config and so on.
Thanks Emmanuel. I wiped everything and reverted to a previous
known-good snapshot on my VMs :) But it does work indeed, as I just
mentioned to Digimer.
--
Jorge
Hi everyone,
I'm trying out SLES 11 SP4 with the High-Availability Extension on two
virtual machines. I want to keep things simple. I have a question
regarding the csync2 tool from SUSE. Considering that:
- I'll have just two nodes
- I'll be using corosync without encryption (no authkey file)
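For two nodes, the csync2 configuration stays small. A minimal /etc/csync2/csync2.cfg sketch; the hostnames, key path, and file list are assumptions:

```shell
# /etc/csync2/csync2.cfg
group ha_cluster
{
    host sles11a sles11b;
    key /etc/csync2/key_ha_cluster;
    include /etc/corosync/corosync.conf;
    include /etc/csync2/csync2.cfg;
}
```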