Maybe you need some mapping between the VirtualBox guest names and the
Pacemaker node names? (attribute pcmk_host_map)
That you write that you added the script as fence_virtual is probably
a typo in the mail ... it would probably produce a different error
message ...
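For reference, such a mapping on a fence_vbox stonith resource might look like the sketch below (pcs syntax; the host address, login, key path, and guest names are placeholders, and parameter names can vary with your fence-agents version):

```shell
# pcmk_host_map maps pacemaker node names to VirtualBox guest names:
#   <node name>:<vbox guest name>;<node name>:<vbox guest name>
pcs stonith create vbox-fence fence_vbox \
    ipaddr=192.168.56.1 login=vboxuser \
    identity_file=/root/.ssh/id_rsa \
    pcmk_host_map="node1:vm-node1;node2:vm-node2"
```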
On 02/06/2017 07:06 PM, Jihed M'selmi wrote:
>>> Ken Gaillot wrote on 06.02.2017 at 16:13 in
message
<40eba339-2f46-28b8-4605-c7047e0ee...@redhat.com>:
> On 02/06/2017 03:28 AM, Ulrich Windl wrote:
> RaSca wrote on 03.02.2017 at 14:00 in
>> message
>>
As of now, I have restarted the planning wiki used for the last summit;
http://plan.alteeve.ca/index.php/Main_Page
It's not the most professional, and the notes aren't as complete as I
would have liked (we didn't have anyone specifically taking notes, I'll
fix that this time). What there are,
Hi Kristoffer,
The meeting sounds very interesting.
Just one question: is there a website archiving the previous
topics/presentations/materials?
Thanks
Gang
>>>
> Hi everyone!
>
> The last time we had an HA summit was in 2015, and the intention then
> was to have SUSE
On 02/06/2017 09:00 AM, Scott Greenlese wrote:
> Further explanation for my concern about --disabled not taking effect
> until after the iface-bridge was configured ...
>
> The reason I wanted to create the iface-bridge resource "disabled", was
> to allow me the opportunity to impose
> a location
On 02/06/2017 03:28 AM, Ulrich Windl wrote:
RaSca wrote on 03.02.2017 at 14:00 in
> message
> <0de64981-904f-5bdb-c98f-9c59ee47b...@miamammausalinux.org>:
>
>> On 03/02/2017 11:06, Ferenc Wágner wrote:
>>> Ken Gaillot writes:
>>>
On
Yeah, I see your point. :)
On Mon, Feb 6, 2017, 3:40 PM Klaus Wenninger wrote:
> On 02/06/2017 03:35 PM, Jihed M'selmi wrote:
> >
> > So do you suggest to use sbd?
> > VirtualBox is installed on top of Windows.
> >
>
> Just wanted to give you an option if you don't
Further explanation for my concern about --disabled not taking effect until
after the iface-bridge was configured ...
The reason I wanted to create the iface-bridge resource "disabled" was to
allow me the opportunity to impose
a location constraint / rule on the resource to prevent it from
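That approach could be sketched roughly like this (pcs syntax; resource, node, and interface names are placeholders, and the iface-bridge parameters depend on your resource-agents version):

```shell
# Create the bridge resource in disabled state so it does not start yet
pcs resource create br0 ocf:heartbeat:iface-bridge \
    bridge_name=br0 bridge_slaves="eth1" --disabled
# Impose the location rule while the resource is still disabled
pcs constraint location br0 avoids node3
# Only now allow the resource to start
pcs resource enable br0
```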
On 02/06/2017 03:35 PM, Jihed M'selmi wrote:
>
> So do you suggest to use sbd?
> VirtualBox is installed on top of Windows.
>
Just wanted to give you an option if you don't have
a working fence-agent directly talking to the hypervisor -
which I would always prefer.
>
> On Mon, Feb 6,
So do you suggest to use sbd?
VirtualBox is installed on top of Windows.
On Mon, Feb 6, 2017, 3:20 PM Klaus Wenninger wrote:
> No experience with fencing vbox-VMs on my side either ...
> But as always when there is no physical fencing-device
> available sbd might be
No experience with fencing VirtualBox VMs on my side either ...
But as always when there is no physical fencing device
available, sbd might be a way to go - either with just
a watchdog (I guess VirtualBox offers a virtual watchdog that
is supported by the Linux kernel, or at least if you install
the guest support
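A watchdog-only (diskless) sbd setup can be sketched roughly as follows; the watchdog device and timeouts are placeholders, and exact steps depend on your distribution and sbd version:

```shell
# /etc/sysconfig/sbd for diskless mode (no SBD_DEVICE, watchdog only):
#   SBD_WATCHDOG_DEV=/dev/watchdog
#   SBD_WATCHDOG_TIMEOUT=5
systemctl enable sbd          # sbd starts/stops together with the cluster
# Tell pacemaker to rely on the watchdog for self-fencing
pcs property set stonith-watchdog-timeout=10s
```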
Not really, I found something in a Google group, but it's not documented
(if I'm not wrong).
On Mon, Feb 6, 2017, 2:21 PM Marek Grac wrote:
> Hi,
>
> I don't have one. But I see a lot of questions about fence_vbox in the
> last few days, is there any new material that references
Hi,
I don't have one. But I see a lot of questions about fence_vbox in the
last few days, is there any new material that references it?
m,
On Mon, Feb 6, 2017 at 1:56 PM, Jihed M'selmi
wrote:
> Hi,
>
> I want to set up a pcmk/corosync cluster using a couple of VirtualBox nodes.
>
> Anyone
Hi Florin,
I'm afraid I don't quite understand what it is that you are asking. You
can specify the resource ID when creating resources, and using resource
constraints, you can specify any order/colocation structure that you
need.
> 1. RG = rg1 + following resources: fs1, fs2,fs3,
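The group-like behaviour for the resources in that example could be sketched with explicit constraints like this (pcs syntax; resource names taken from the example above):

```shell
# Start order fs1 -> fs2 -> fs3 ...
pcs constraint order fs1 then fs2
pcs constraint order fs2 then fs3
# ... and keep them on the same node
pcs constraint colocation add fs2 with fs1
pcs constraint colocation add fs3 with fs2
```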
Hi,
I want to set up a pcmk/corosync cluster using a couple of VirtualBox nodes.
Could anyone share how to install/configure the fence agent fence_vbox?
Cheers
JM
--
J.M
___
Users mailing list: Users@clusterlabs.org
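As a rough starting point (the subpackage name varies between distributions), the agent can be installed and its parameters inspected like this:

```shell
# Install the agent (package name may differ per distribution)
yum install -y fence-agents-vbox
# List the parameters the agent supports
pcs stonith describe fence_vbox
# The agent can also print its own metadata
fence_vbox -o metadata
```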
Hi All.
Recently we got an lrmd core dump. It occurred only once and we don't know
how to reproduce it.
The version we use is pacemaker-1.1.15-11. The OS is CentOS 7.
Core was generated by `/usr/libexec/pacemaker/lrmd'.
Program terminated with signal 11, Segmentation fault.
#0 __strcasecmp_l_avx
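To get a more useful backtrace out of such a core, something along these lines usually helps (the core file path is a placeholder):

```shell
# Install debug symbols for pacemaker first
debuginfo-install -y pacemaker
# Open the core against the lrmd binary
gdb /usr/libexec/pacemaker/lrmd /var/lib/pacemaker/cores/core.lrmd
# Then at the (gdb) prompt:
#   bt full                 # crashing thread with local variables
#   thread apply all bt     # backtraces of all threads
```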
Ulrich Windl writes:
xin wrote on 06.02.2017 at 10:50 in message
> <65fbbdf9-f820-63e7-fe02-1d1acefc5...@suse.com>:
>> Hi Ulrich:
>>
>>"crm configure show" can display what you set for properties.
>>
>>Do you find
dur...@mgtsciences.com writes:
> Kristoffer Grönlund wrote on 02/01/2017 10:49:54 PM:
>
>>
>> Another possibility is that the command that fence_vbox tries to run
>> doesn't work for you for some reason. It will either call
>>
>> VBoxManage startvm --type headless
>>
>>
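One way to check that side is to run the same commands by hand, as the user the agent connects as (the guest name "vm-node1" is a placeholder):

```shell
# List the VMs the agent can see
VBoxManage list vms
VBoxManage list runningvms
# Power operations fence_vbox relies on
VBoxManage startvm "vm-node1" --type headless
VBoxManage controlvm "vm-node1" poweroff
```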
>>> "Ulrich Windl" wrote on 06.02.2017 at 11:25 in message <58985d1a02a100024...@gwsmtp1.uni-regensburg.de>:
xin wrote on 06.02.2017 at 10:50 in message
> <65fbbdf9-f820-63e7-fe02-1d1acefc5...@suse.com>:
>> Hi Ulrich:
>>
>>
>>> xin wrote on 06.02.2017 at 10:50 in message
<65fbbdf9-f820-63e7-fe02-1d1acefc5...@suse.com>:
> Hi Ulrich:
>
>"crm configure show" can display what you set for properties.
>
>Do you find another way?
Yes, but it shows the whole configuration. If your
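crmsh can also show just a part of the configuration (a sketch; the type: filter needs a reasonably recent crmsh version):

```shell
# Show only one object by its ID, e.g. the cluster property set
crm configure show cib-bootstrap-options
# Newer crmsh versions also accept type filters
crm configure show type:property
```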
Hi All,
I'm having issues with an ordering constraint with a clone resource in
pacemaker v1.1.14.
- I have a resourceA-clone (running on 2 nodes: node1 and node2).
- then I have 2 other resources: resourceB1 (allowed to run on node1 only)
and resourceB2 (allowed to run on node2 only).
- finally
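Assuming the intent is that each B resource waits for the clone on its node, the ordering could be sketched as (pcs syntax; names from the description above):

```shell
# Each B resource starts only after resourceA-clone has started;
# resourceB1/resourceB2 stay pinned to node1/node2 via their
# existing location constraints.
pcs constraint order start resourceA-clone then start resourceB1
pcs constraint order start resourceA-clone then start resourceB2
```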
On 3.2.2017 at 22:08, Scott Greenlese wrote:
Hi all..
Over the past few days, I noticed that the pcsd and ruby processes are
pegged at 99% CPU, and commands such as
"pcs status pcsd" take up to 5 minutes to complete. On all active cluster
nodes, top shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM
Hi Ulrich:
"crm configure show" can display what you set for properties.
Did you find another way?
On 2017-02-06 17:12, Ulrich Windl wrote:
Ken Gaillot wrote on 02.02.2017 at 21:19 in message
:
[...]
The
Thanks a lot!
2017-02-06 9:55 GMT+01:00 Ulrich Windl :
> >>> Oscar Segarra wrote on 02.02.2017 at
> 19:49 in
> message
>
>>> RaSca wrote on 03.02.2017 at 14:00 in
message
<0de64981-904f-5bdb-c98f-9c59ee47b...@miamammausalinux.org>:
> On 03/02/2017 11:06, Ferenc Wágner wrote:
>> Ken Gaillot writes:
>>
>>> On 01/10/2017 04:24 AM, Stefan Schloesser wrote:
>>>
>>> Ken Gaillot wrote on 02.02.2017 at 21:19 in
>>> message
:
[...]
> The files are not necessary for cluster operation, so you can clean them
> as desired. The cluster can clean them for you based on cluster options;
>
On 03/02/17 16:08 -0500, Scott Greenlese wrote:
> Over the past few days, I noticed that the pcsd and ruby processes are
> pegged at 99% CPU, and commands such as "pcs status pcsd" take up to 5
> minutes to complete.
> On all active cluster nodes, top shows:
>
> PID USER PR NI VIRT
>>> Oscar Segarra wrote on 02.02.2017 at 19:49 in
message
>>> Ken Gaillot wrote on 02.02.2017 at 19:33 in
>>> message
<91a83571-9930-94fd-e635-962830671...@redhat.com>:
> On 02/02/2017 12:23 PM, renayama19661...@ybb.ne.jp wrote:
>> Hi All,
>>
>> By the next correction, the user was not able to set a value except zero in
>
Hello,
It seems Pacemaker has a weird way to manage resources:
it seems to focus more on defining individual resources,
without much flexibility for creating resource groups.
Now, let's take this example: RG1 + deps [fs1, fs2, fs3] => all 3 file
systems must be mounted before
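That kind of ordered startup is what a resource group provides; a sketch in pcs syntax ("app1" is a placeholder for whatever depends on the file systems):

```shell
# Members of a group start in the listed order and stop in reverse,
# so fs1, fs2 and fs3 are mounted one after another
pcs resource group add rg1 fs1 fs2 fs3
# Anything that needs all three mounted starts after the group
pcs constraint order rg1 then app1
```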