Hello, thank you for your reply.
I am not trying to set the host into maintenance because many services are
running on the host.
My question is: if I set the host into maintenance mode, will all of the servers
running on the host be shut down by themselves because there is no other host
available?
On Sat, Oct 3, 2020 at 9:42 PM Amit Bawer wrote:
>
>
> On Sat, Oct 3, 2020 at 10:24 PM Amit Bawer wrote:
>
>>
>>
>> For the gluster bricks being filtered out in 4.4.2, this seems like [1].
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1883805
>>
>
> Maybe remove the lvm filter from
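A minimal sketch for anyone retracing this (not from the original messages): the filter in question lives in /etc/lvm/lvm.conf, and before removing anything you can check what it currently lets through:
#grep -E '^\s*filter' /etc/lvm/lvm.conf
#pvs -o pv_name,pv_uuid,vg_name
The first command prints the active filter line; the second lists the PVs LVM still accepts with that filter in place, and the gluster brick PVs will be missing from it if the filter is what is hiding them.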
Sorry, I see that there was an error in the lsinitrd command in 4.4.2,
inverting the "-f" position.
Here is the screenshot, which shows that no filter is active anyway:
https://drive.google.com/file/d/19VmgvsHU2DhJCRzCbO9K_Xyr70x4BqXX/view?usp=sharing
Gianluca
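A sketch of the corrected invocation (not taken from the thread; the initramfs path is an assumption, adjust it to the kernel actually booted on the 4.4.2 node): the "-f" option tells lsinitrd which file to print from inside the image, so the filter baked into the initrd can be compared with the one in /etc/lvm/lvm.conf:
#lsinitrd -f etc/lvm/lvm.conf /boot/initramfs-$(uname -r).img | grep filter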
On Sat, Oct 3, 2020 at 6:26 PM Gianluca Cecchi wrote:
On Sat, Oct 3, 2020 at 4:06 PM Amit Bawer wrote:
> From the info it seems that startup panics because gluster bricks cannot
> be mounted.
>
>
Yes, it is so
This is a testbed NUC I use for testing.
It has 2 disks, the one named sdb is where ovirt node has been installed.
The one named sda is
From the info it seems that startup panics because the gluster bricks cannot be
mounted.
The filter that you do have in the 4.4.2 screenshot should correspond to
your root PV;
you can confirm that by running (replace the pv-uuid with the one from your
filter):
#udevadm info
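The command above is cut off in the archive; a sketch of the likely full form, with the pv-uuid as a placeholder rather than a value from the thread: the lvm-pv-uuid symlink named in the filter should resolve to the disk holding your root VG.
#udevadm info --name=/dev/disk/by-id/lvm-pv-uuid-<PV-UUID>
The DEVNAME and DEVLINKS lines in the output show which physical device that PV actually is.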
To get these two hosts into a cluster, would I need to castrate them down
to Nehalem, or would I be able to botch the DB for the 2nd host from
"EPYC-IBPB" to "Opteron_G5"?
I don't really want to drop them down to Nehalem, so either I can botch
the 2nd CPU so they are both on Opteron_G5, or I'll