Thanks.
A cheat sheet is a PDF file in which all the useful commands and parameters are listed.
On Friday, April 9, 2021, 01:12:16 PM GMT+4:30, Antony Stone wrote:
On Friday 09 April 2021 at 10:34:33, Jason Long wrote:
> Thanks.
> I meant was a Cheat sheet.
I don't understand that sentence.
Thanks.
I meant was a Cheat sheet.
Yes, something like rendering a 3D movie, or similar. Are Corosync and Pacemaker
not suitable for that? What kind of clustering is used for rendering? A Beowulf cluster?
On Friday, April 9, 2021, 12:55:27 PM GMT+4:30, Antony Stone wrote:
Thank you so much for your great answers.
As the final questions:
1- Which commands are useful for monitoring and managing my Pacemaker cluster?
2- I don't know if this is a right question or not. Consider 100 PCs, each with
an Intel Core 2 Duo processor (2 cores) and 4GB of RAM. How
On Friday 09 April 2021 at 10:34:33, Jason Long wrote:
> Thanks.
> I meant was a Cheat sheet.
I don't understand that sentence.
> Yes, something like rendering a 3D movie or... . The Corosync and Pacemaker
> are not OK for it? What kind of clustering using for rendering? Beowulf
> cluster?
On Friday 09 April 2021 at 08:58:39, Jason Long wrote:
> Thank you so much for your great answers.
> As the final questions:
Really :) ?
> 1- Which commands are useful to monitoring and managing my pacemaker
> cluster?
Some people prefer https://crmsh.github.io/documentation/ and some people
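The snippet above is cut off, but for a pcs-managed cluster the everyday monitoring commands look roughly like this (a sketch only; exact subcommands vary a little between pcs versions):

```shell
# One-shot overview of nodes, resources and recent failures:
pcs status

# Continuously updating view (Ctrl+C to exit); crm_mon is the
# lower-level Pacemaker tool that pcs status wraps:
crm_mon

# Corosync membership and quorum state:
pcs status corosync
corosync-quorumtool

# List the configured resources and any constraints:
pcs resource
pcs constraint
```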
Yes, I just wanted to know. In clustering, when a node goes down and comes online
again, the cluster will not use it again until another node fails. Am I
right?
On Thursday, April 8, 2021, 11:58:16 PM GMT+4:30, Antony Stone wrote:
On Thursday 08 April 2021 at 21:24:02, Jason Long wrote:
Thanks.
Thus, my cluster uses Node1 when Node2 is down?
On Thursday, April 8, 2021, 07:32:14 PM GMT+4:30, Antony Stone wrote:
On Thursday 08 April 2021 at 21:33:48, Jason Long wrote:
> Yes, I just wanted to know. In clustering, when a node is down and
> go online again, then the cluster will not use it again until another node
> fails. Am I right?
Think of it like this:
You can have as many nodes in your cluster as
In general, yes - unless you have specified a location constraint for
On Thursday 08 April 2021 at 21:24:02, Jason Long wrote:
> Thanks.
> Thus, my cluster uses Node1 when Node2 is down?
Judging from your previous emails, you have a two node cluster.
What else is it going to use?
Antony.
--
Anything that improbable is effectively impossible.
- Murray
Why, when node1 is back, is the web server still on node2? Why hasn't it switched back?
On Thursday, April 8, 2021, 06:49:38 PM GMT+4:30, Ken Gaillot wrote:
On Thu, 2021-04-08 at 14:14 +, Jason Long wrote:
> Hello,
> I stopped node1 manually as below:
>
> [root@node1 ~]# pcs cluster stop
Hello,
I stopped node1 manually as below:
[root@node1 ~]# pcs cluster stop node1
node1: Stopping Cluster (pacemaker)...
node1: Stopping Cluster (corosync)...
[root@node1 ~]#
[root@node1 ~]# pcs status
Error: error running crm_mon, is pacemaker running?
Could not connect to the CIB: Transport
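For what it's worth, the error above is expected: once `pcs cluster stop node1` has stopped pacemaker and corosync on node1, there is no local daemon for `pcs status` to talk to. A sketch of the usual follow-up, assuming the second node is reachable as `node2`:

```shell
# Pacemaker is down on node1, so query the surviving node instead:
ssh node2 pcs status       # node1 should be listed as OFFLINE

# When maintenance is done, rejoin node1 to the cluster:
pcs cluster start node1
pcs status                 # works locally on node1 again
```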
On Thursday 08 April 2021 at 16:55:47, Ken Gaillot wrote:
On Thu, 2021-04-08 at 14:32 +, Jason Long wrote:
> Why, when node1 is back, then web server still on node2? Why not
> switched?
By default, there are no preferences as to where a resource should run.
The cluster is free to move or leave resources as needed.
If you want a resource to prefer a
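The reply is truncated, but the mechanism Ken is describing is a location constraint. A sketch with pcs, assuming the Apache resource is named WebServer (substitute your own resource and node names):

```shell
# Make WebServer prefer node1 with score 50: it moves back when
# node1 returns, but can still run on node2 while node1 is down:
pcs constraint location WebServer prefers node1=50

# The opposite policy: discourage unnecessary moves by giving
# resources some stickiness, so they stay put after a failover:
pcs resource defaults resource-stickiness=100

# Review the configured constraints:
pcs constraint
```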