On Wednesday, 28 April 2021 13:02:01 CEST Juhani Rautiainen wrote:

Hi!

I know something about multipathing and LVM, but I'm not quite familiar
with how this is supposed to look. I compared a 4.4 node to a 4.3 node
and they look totally different from the LVM perspective. I'm not sure
whether this is because of changes or because of problems on this node.
Multipath shows the same disks, but for example

On Wednesday, 28 April 2021 08:57:32 CEST Yedidyah Bar David wrote:
> On Wed, Apr 28, 2021 at 9:29 AM Juhani Rautiainen wrote:
> >
> > This is from the ansible logs:
> > 2021-04-27 22:20:38,286+0300 ERROR ansible failed {
On Wed, Apr 28, 2021 at 11:16 AM Juhani Rautiainen wrote:
I found these using dmsetup ls --tree:

6db20b74--512d--4a70--994e--8923d9e1e50b-master (253:21)
 └─36000d31005b4f629 (253:36)
    ├─ (65:64)
    ├─ (65:80)
    ├─ (65:32)
    └─ (65:48)
6db20b74--512d--4a70--994e--8923d9e1e50b-inbox (253:17)
 └─36000d31005b4f629 (253
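In device-mapper names a double dash is an escaped literal dash, so `6db20b74--512d--4a70--994e--8923d9e1e50b-master` is LV `master` in VG `6db20b74-512d-4a70-994e-8923d9e1e50b`, stacked on multipath device `36000d31005b4f629`, whose nameless leaves are the individual SCSI paths. As an illustrative sketch (the sample is re-typed from the fragment above, only the subtree the archive preserved), such output can be parsed to map each LV to the multipath device beneath it:

```python
import re

# Sample `dmsetup ls --tree` output, re-typed from the thread above
# (only the subtree the archive preserved).
SAMPLE = """\
6db20b74--512d--4a70--994e--8923d9e1e50b-master (253:21)
 └─36000d31005b4f629 (253:36)
    ├─ (65:64)
    ├─ (65:80)
    ├─ (65:32)
    └─ (65:48)
"""

def lv_to_multipath(tree_text):
    """Map each top-level device-mapper node (e.g. an LV) to the named
    child it sits on (here, the multipath device); the nameless leaves
    are the individual SCSI paths and are skipped."""
    mapping = {}
    current = None
    for line in tree_text.splitlines():
        top = re.match(r"(\S+) \((\d+:\d+)\)", line)  # unindented: top-level node
        if top:
            current = top.group(1)
            continue
        child = re.search(r"[├└]─(\S+) \((\d+:\d+)\)", line)  # named child only
        if child and current and current not in mapping:
            mapping[current] = child.group(1)
    return mapping

# lv_to_multipath(SAMPLE)
# → {'6db20b74--512d--4a70--994e--8923d9e1e50b-master': '36000d31005b4f629'}
```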
On Wed, Apr 28, 2021 at 9:29 AM Juhani Rautiainen wrote:
Hi!

This is from the ansible logs:

2021-04-27 22:20:38,286+0300 ERROR ansible failed {
    "ansible_host": "localhost",
    "ansible_playbook":
        "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
    "ansible_result": {
        "_ansible_no_log": false,
        "changed": false,
        "
On Tue, Apr 27, 2021 at 10:59 PM Juhani Rautiainen wrote:
The story of the problems continues. I finally shut everything down and
got the storage domains into maintenance, and then this happens:

ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"[Physical device initialization failed. Please check that the device
is empty and accessible by the host.]"

Hmm. Is it possible that, while the other node is still running v4.3,
this operation can't be completed because that node doesn't know how to
do it?

Thanks,
Juhani
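The fault asks to "check that the device is empty and accessible by the host". A rough, hedged proxy for "empty" is that the start of the device reads as all zeros; this is only a sketch and does not replace `wipefs -n` or `pvs` checks, and the device path shown is just an example from this thread:

```python
def looks_empty(path, nbytes=1024 * 1024):
    """Return True if the first `nbytes` of `path` read as all zeros --
    a rough proxy for 'the device is empty'. Requires read access
    (root, for a block device)."""
    with open(path, "rb") as f:
        return not any(f.read(nbytes))

# Hypothetical usage against the multipath device from this thread:
#   looks_empty("/dev/mapper/36000d31005b4f629")
```

If old LVM or filesystem signatures turn up, `wipefs` can list them (and, with great care, clear them) before retrying the deployment.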
On Tue, Apr 27, 2021 at 11:07 AM Juhani Rautiainen wrote:
It seems that this is not supported in oVirt yet? I got this response
when I tried to change the master with the storage domains that I have:

[Cannot switch master storage domain. Switch master
storage domain operation is not supported.]
Operation Failed

So is this really the only way to do
Thanks, this looks like what I'm looking for. I'm still wondering how
to use it. I have a LUN just for the new hosted storage. Ansible created
the storage domain on it correctly, but I just can't activate it. So is
the idea that I activate this unattached hosted_storage domain and try
to use the API to make it master
On Tue, Apr 27, 2021 at 10:15 AM Juhani Rautiainen wrote:
To continue: I noticed that another storage domain has taken the master
(data) role now. I saw one piece of advice that you can force the change
by putting the storage domain into maintenance mode. The problem is that
there are VMs running on these domains. How is this supposed to work
during the restore?

Thanks,
Juhani