Re: [ovirt-users] Change HostedEngine Storage

2017-06-23 Thread Matt .
Hi Guys,

Got this sorted out again. I just need to check what we can change
about the ca_subject CN that was previously used when setting it up.

If anyone has info on that, it would be nice.

Cheers,

Matt

2017-06-24 0:05 GMT+02:00 Matt . :
> Hi guys,
>
>
> When you want to move your hosted_engine storage you can copy the data
> over, but you still need to change the image which contains all the
> config files.
>
> Is there any documentation on how to do so?
>
> Thanks,
>
> Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Change HostedEngine Storage

2017-06-23 Thread Matt .
Hi guys,


When you want to move your hosted_engine storage you can copy the data
over, but you still need to change the image which contains all the
config files.

Is there any documentation on how to do so?

Thanks,

Matt
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Info on live snapshot and agent interaction

2017-06-23 Thread Nir Soffer
On Thu, May 4, 2017 at 11:09 PM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
> > On 4 May 2017, at 17:51, Gianluca Cecchi 
> wrote:
> >
> > Hello,
> > supposing to have a Linux VM with ovirt-guest-agent installed,


File system freezing is implemented using libvirt fsFreeze/fsThaw, and is
implemented in the guest by qemu-guest-agent, not ovirt-guest-agent.

You can find the info on using it here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-Using_the_QEMU_guest_virtual_machine_agent_protocol_CLI-libvirt_commands.html
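
For a quick manual check of the freeze path, a minimal sketch using virsh on
the host (the domain name is a placeholder; thaw promptly, since a frozen
guest blocks writes):

virsh domfsfreeze myvm   # freeze all mounted guest filesystems via qemu-guest-agent
virsh domfsthaw myvm     # thaw them again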


> > during a live snapshot operation there should be a freeze of filesystems.
> > Where can one find confirmation of correct/successful interaction?
>
> if it’s not successful there should be an event log message about that.
> And prior to taking the snapshot a warning in red at the bottom of the
> dialog (that check happens when you open the dialog, so it may not be 100%
> reliable)
>
> > /var/log/messages or agent log or other kind of files?
>
> if you want to double-check, then this is noticeable in vdsm.log. First we
> try to take the snapshot with fsfreeze, and only when it fails do we take it
> again without it.
>

Since 3.6 we use fsFreeze/fsThaw explicitly, so we don't try snapshot twice.

In vdsm log you will find an INFO log before/after fsFreeze/fsThaw, or WARN
log if they failed.
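
So after taking a live snapshot, a hedged way to confirm the interaction is
to search vdsm's log for those entries (exact message text may differ between
versions):

grep -iE 'freeze|thaw' /var/log/vdsm/vdsm.log | tail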


> > Are there any limitations on filesystems that support freeze? Is
> > fsfreeze the command executed at the VM OS level, or some other low-level
> > command?
>
> It’s a matter of Linux and Windows implementation, they both have an API
> supporting that at kernel level. I’m not aware of filesystem limitations.
>

You should check qemu-guest-agent documentation.
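
To verify the agent is reachable at all, a hedged sketch (guest-ping is a
standard qemu-guest-agent command; the domain name is a placeholder):

systemctl status qemu-guest-agent                          # inside the guest
virsh qemu-agent-command myvm '{"execute":"guest-ping"}'   # on the host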

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Yaniv Kaul
On Sat, Jun 24, 2017 at 12:23 AM, Vinícius Ferrão  wrote:

> Hello Adam and Karli,
>
> I will remap uid and gid of NFS to 36 and try again with NFS sharing.
>
> But this does not make much sense, because on iSCSI this should not
> happen. There are no permissions involved and when oVirt runs the
> hosted-engine setup it creates the ext3 filesystem on the iSCSI share
> without any issue. Here’s a photo of the network bandwidth during the OVF
> deployment: http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg
>
> So it appears to be working. Something happens after the deployment that
> breaks the connections and kills vdsm.
>

Indeed - may be two different issues. Let us know how the NFS works first,
then let's try with iSCSI.
Y.


>
> Thanks,
> V.
>
> On 23 Jun 2017, at 17:47, Adam Litke  wrote:
>
>
>
> On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg 
> wrote:
>
>>
>>
>> On 23 June 2017 at 21:08, Vinícius Ferrão wrote:
>>
>> Hello oVirt folks.
>>
>> I’m a traitor of the Xen movement and was looking for some good
>> alternatives for XenServer hypervisors. I was aware of KVM for a long time
>> but I was missing a more professional and appliance feeling of the product,
>> and oVirt appears to deliver exactly what I’m looking for.
>>
>> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
>> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>>
>> Firstly I tried to install oVirt Node on a VM in VMware Fusion on
>> my notebook; it was a no go. For reasons I don’t know,
>> vdsmd.service and libvirtd failed to start. I made sure that I was running
>> with EPT support enabled to achieve nested virtualization, but as I said:
>> it was a no go.
>>
>> So I’ve decommissioned a XenServer machine that was in production just to
>> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
>> E5506 with 48GB of system RAM, but I can’t get the hosted engine to work;
>> it always insults my hardware: --- Hosted Engine deployment failed: this
>> system is not reliable, please check the issue,fix and redeploy.
>>
>> It’s definitely a problem on the storage subsystem, the error is just
>> random, at this moment I’ve got:
>>
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>>
>> But on other tries it came up with something like this:
>>
>> No response for JSON-RPC Volume.getSize request.
>>
>> I was thinking that the problem was on the NFSv3 server on our FreeNAS
>> box, so I’ve changed to an iSCSI backend, but the problem continues.
>>
>>
>> Can't say anything about your issues without the logs but there's nothing
>> wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's
>> NFS since oVirt 3.2 so...
>>
>
> Could you share your relevant exports configuration to make sure he's
> using something that you know works?
>
>
>>
>> "You're holding it wrong":) Sorry, I know you're frustrated but that's
>> what I can add to the conversation.
>>
>> /K
>>
>> This happens at the very end of the ovirt-hosted-engine-setup command,
>> which leads me to believe that’s an oVirt issue. The OVA was already copied
>> and deployed to the storage:
>>
>> [ INFO ] Starting vdsmd
>> [ INFO ] Creating Volume Group
>> [ INFO ] Creating Storage Domain
>> [ INFO ] Creating Storage Pool
>> [ INFO ] Connecting Storage Pool
>> [ INFO ] Verifying sanlock lockspace initialization
>> [ INFO ] Creating Image for 'hosted-engine.lockspace' ...
>> [ INFO ] Image for 'hosted-engine.lockspace' created successfully
>> [ INFO ] Creating Image for 'hosted-engine.metadata' ...
>> [ INFO ] Image for 'hosted-engine.metadata' created successfully
>> [ INFO ] Creating VM Image
>> [ INFO ] Extracting disk image from OVF archive (could take a few minutes
>> depending on archive size)
>> [ INFO ] Validating pre-allocated volume size
>> [ INFO ] Uploading volume to data domain (could take a few minutes
>> depending on archive size)
>> [ INFO ] Image successfully imported from OVF
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>> [ INFO ] Yum Performing yum transaction rollback
>> [ INFO ] Stage: Clean up
>> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-
>> setup/answers/answers-20170623032541.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> please check the issue,fix and redeploy
>> Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>>
>> At this point I really don’t know what I should try. And the log file is
>> too verborragic (hoping this word exists) to look for errors.
>>
>> Any guidance?
>>
>> Thanks,
>> V.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>

Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Vinícius Ferrão
Hello Adam and Karli,

I will remap uid and gid of NFS to 36 and try again with NFS sharing.
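
For the record, a minimal sketch of such a remap on the NFS server side (the
export path and client IP are the ones used elsewhere in this thread; 36:36
is vdsm:kvm on oVirt hosts, and the mountd reload line assumes stock FreeBSD):

# /etc/exports on the NFS server
/mnt/pool/ovirt  -alldirs -mapall=36:36 146.164.36.137

# make the exported tree owned by vdsm:kvm, then reload the export list
chown -R 36:36 /mnt/pool/ovirt
service mountd reload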

But this does not make much sense, because on iSCSI this should not happen. 
There are no permissions involved and when oVirt runs the hosted-engine setup 
it creates the ext3 filesystem on the iSCSI share without any issue. Here’s a 
photo of the network bandwidth during the OVF deployment: 
http://www.if.ufrj.br/~ferrao/ovirt/bandwidth-iscsi-ovf.jpg

So it appears to be working. Something happens after the deployment that
breaks the connections and kills vdsm.

Thanks,
V.

On 23 Jun 2017, at 17:47, Adam Litke <ali...@redhat.com> wrote:



On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg <ka...@inparadise.se> wrote:


On 23 June 2017 at 21:08, Vinícius Ferrão <fer...@if.ufrj.br> wrote:
Hello oVirt folks.

I’m a traitor of the Xen movement and was looking for some good alternatives 
for XenServer hypervisors. I was aware of KVM for a long time but I was missing 
a more professional and appliance feeling of the product, and oVirt appears to 
deliver exactly what I’m looking for.

Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for equal 
or better alternatives, but I’m starting to get frustrated with oVirt.

Firstly I tried to install oVirt Node on a VM in VMware Fusion on my
notebook; it was a no go. For reasons I don’t know, vdsmd.service and
libvirtd failed to start. I made sure that I was running with EPT support
enabled to achieve nested virtualization, but as I said: it was a no go.

So I’ve decommissioned a XenServer machine that was in production just to try
oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with
48GB of system RAM, but I can’t get the hosted engine to work; it always
insults my hardware: --- Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy.

It’s definitely a problem on the storage subsystem, the error is just random, 
at this moment I’ve got:

[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.

But on other tries it came up with something like this:

No response for JSON-RPC Volume.getSize request.

I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so 
I’ve changed to an iSCSI backend, but the problem continues.

Can't say anything about your issues without the logs but there's nothing wrong 
with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's NFS since 
oVirt 3.2 so...

Could you share your relevant exports configuration to make sure he's using 
something that you know works?


"You're holding it wrong":) Sorry, I know you're frustrated but that's what I 
can add to the conversation.

/K

This happens at the very end of the ovirt-hosted-engine-setup command, which 
leads me to believe that’s an oVirt issue. The OVA was already copied and 
deployed to the storage:

[ INFO ] Starting vdsmd
[ INFO ] Creating Volume Group
[ INFO ] Creating Storage Domain
[ INFO ] Creating Storage Pool
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO ] Image for 'hosted-engine.lockspace' created successfully
[ INFO ] Creating Image for 'hosted-engine.metadata' ...
[ INFO ] Image for 'hosted-engine.metadata' created successfully
[ INFO ] Creating VM Image
[ INFO ] Extracting disk image from OVF archive (could take a few minutes 
depending on archive size)
[ INFO ] Validating pre-allocated volume size
[ INFO ] Uploading volume to data domain (could take a few minutes depending on 
archive size)
[ INFO ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please 
check the issue,fix and redeploy
Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log

At this point I really don’t know what I should try. And the log file is too 
verborragic (hoping this word exists) to look for errors.

Any guidance?

Thanks,
V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users




--
Adam Litke

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Karli Sjöberg
On 23 June 2017 at 22:47, Adam Litke wrote:

> On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg wrote:
>
>> On 23 June 2017 at 21:08, Vinícius Ferrão wrote:
>>
>> Hello oVirt folks.
>>
>> I’m a traitor of the Xen movement and was looking for some good
>> alternatives for XenServer hypervisors. I was aware of KVM for a long time
>> but I was missing a more professional and appliance feeling of the product,
>> and oVirt appears to deliver exactly what I’m looking for.
>>
>> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
>> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>>
>> Firstly I tried to install oVirt Node on a VM in VMware Fusion on my
>> notebook; it was a no go. For reasons I don’t know, vdsmd.service and
>> libvirtd failed to start. I made sure that I was running with EPT support
>> enabled to achieve nested virtualization, but as I said: it was a no go.
>>
>> So I’ve decommissioned a XenServer machine that was in production just to
>> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506
>> with 48GB of system RAM, but I can’t get the hosted engine to work; it
>> always insults my hardware: --- Hosted Engine deployment failed: this
>> system is not reliable, please check the issue,fix and redeploy.
>>
>> It’s definitely a problem on the storage subsystem, the error is just
>> random, at this moment I’ve got:
>>
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>>
>> But on other tries it came up with something like this:
>>
>> No response for JSON-RPC Volume.getSize request.
>>
>> I was thinking that the problem was on the NFSv3 server on our FreeNAS
>> box, so I’ve changed to an iSCSI backend, but the problem continues.
>>
>> Can't say anything about your issues without the logs but there's nothing
>> wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's
>> NFS since oVirt 3.2 so...
>
> Could you share your relevant exports configuration to make sure he's
> using something that you know works?

Good suggestion!

/etc/exports:
/foo/bar             -maproot=root X.X.X.X

Don't forget to chown the directory to 36:36

HTH
/K

>> "You're holding it wrong":) Sorry, I know you're frustrated but that's
>> what I can add to the conversation.
>>
>> /K
>>
>> This happens at the very end of the ovirt-hosted-engine-setup command,
>> which leads me to believe that’s an oVirt issue. The OVA was already
>> copied and deployed to the storage:
>>
>> [ INFO  ] Starting vdsmd
>> [ INFO  ] Creating Volume Group
>> [ INFO  ] Creating Storage Domain
>> [ INFO  ] Creating Storage Pool
>> [ INFO  ] Connecting Storage Pool
>> [ INFO  ] Verifying sanlock lockspace initialization
>> [ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
>> [ INFO  ] Image for 'hosted-engine.lockspace' created successfully
>> [ INFO  ] Creating Image for 'hosted-engine.metadata' ...
>> [ INFO  ] Image for 'hosted-engine.metadata' created successfully
>> [ INFO  ] Creating VM Image
>> [ INFO  ] Extracting disk image from OVF archive (could take a few minutes
>> depending on archive size)
>> [ INFO  ] Validating pre-allocated volume size
>> [ INFO  ] Uploading volume to data domain (could take a few minutes
>> depending on archive size)
>> [ INFO  ] Image successfully imported from OVF
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>> [ INFO  ] Yum Performing yum transaction rollback
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file
>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> please check the issue,fix and redeploy
>>   Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>>
>> At this point I really don’t know what I should try. And the log file is
>> too verborragic (hoping this word exists) to look for errors.
>>
>> Any guidance?
>>
>> Thanks,
>> V.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Adam Litke

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Vinícius Ferrão
Hello Adam,

Since the NFSv3 server is running on a FreeBSD based system I don’t want to 
mess with permissions (that installation was only a test). So I’ve mapped all 
users to root and all groups to wheel: everyone has permission to do anything 
on any file.

# cat /etc/exports
/mnt/pool/ovirt  -alldirs -mapall=root:wheel 146.164.36.137

The issue happens on the same point, either on NFS or on iSCSI.

And finally here are the files:
http://www.if.ufrj.br/~ferrao/ovirt/

Thanks,
V.

On 23 Jun 2017, at 17:16, Adam Litke <ali...@redhat.com> wrote:

Please provide links to the files /var/log/vdsm/vdsm.log and 
/var/log/sanlock.log on the host you are attempting to use.  Does the problem 
happen in the same step when using NFS vs. ISCSI?  How are you configuring the 
NFS export that you give to hosted-engine to use?

On Fri, Jun 23, 2017 at 2:53 PM, Vinícius Ferrão <fer...@if.ufrj.br> wrote:
Hello oVirt folks.

I’m a traitor of the Xen movement and was looking for some good alternatives 
for XenServer hypervisors. I was aware of KVM for a long time but I was missing 
a more professional and appliance feeling of the product, and oVirt appears to 
deliver exactly what I’m looking for.

Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for equal 
or better alternatives, but I’m starting to get frustrated with oVirt.

Firstly I tried to install oVirt Node on a VM in VMware Fusion on my
notebook; it was a no go. For reasons I don’t know, vdsmd.service and
libvirtd failed to start. I made sure that I was running with EPT support
enabled to achieve nested virtualization, but as I said: it was a no go.

So I’ve decommissioned a XenServer machine that was in production just to try
oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with
48GB of system RAM, but I can’t get the hosted engine to work; it always
insults my hardware: --- Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy.

It’s definitely a problem on the storage subsystem, the error is just random, 
at this moment I’ve got:

[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.

But on other tries it came up with something like this:

No response for JSON-RPC Volume.getSize request.

I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so 
I’ve changed to an iSCSI backend, but the problem continues. This happens at 
the very end of the ovirt-hosted-engine-setup command, which leads me to 
believe that’s an oVirt issue. The OVA was already copied and deployed to the 
storage:

[ INFO  ] Starting vdsmd
[ INFO  ] Creating Volume Group
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO  ] Image for 'hosted-engine.lockspace' created successfully
[ INFO  ] Creating Image for 'hosted-engine.metadata' ...
[ INFO  ] Image for 'hosted-engine.metadata' created successfully
[ INFO  ] Creating VM Image
[ INFO  ] Extracting disk image from OVF archive (could take a few minutes 
depending on archive size)
[ INFO  ] Validating pre-allocated volume size
[ INFO  ] Uploading volume to data domain (could take a few minutes depending 
on archive size)
[ INFO  ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please 
check the issue,fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log

At this point I really don’t know what I should try. And the log file is too 
verborragic (hoping this word exists) to look for errors.

Any guidance?

Thanks,
V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--
Adam Litke

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Adam Litke
On Fri, Jun 23, 2017 at 4:40 PM, Karli Sjöberg  wrote:

>
>
> On 23 June 2017 at 21:08, Vinícius Ferrão wrote:
>
> Hello oVirt folks.
>
> I’m a traitor of the Xen movement and was looking for some good
> alternatives for XenServer hypervisors. I was aware of KVM for a long time
> but I was missing a more professional and appliance feeling of the product,
> and oVirt appears to deliver exactly what I’m looking for.
>
> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>
> Firstly I tried to install oVirt Node on a VM in VMware Fusion on
> my notebook; it was a no go. For reasons I don’t know,
> vdsmd.service and libvirtd failed to start. I made sure that I was running
> with EPT support enabled to achieve nested virtualization, but as I said:
> it was a no go.
>
> So I’ve decommissioned a XenServer machine that was in production just to
> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
> E5506 with 48GB of system RAM, but I can’t get the hosted engine to work;
> it always insults my hardware: --- Hosted Engine deployment failed: this
> system is not reliable, please check the issue,fix and redeploy.
>
> It’s definitely a problem on the storage subsystem, the error is just
> random, at this moment I’ve got:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
>
> But on other tries it came up with something like this:
>
> No response for JSON-RPC Volume.getSize request.
>
> I was thinking that the problem was on the NFSv3 server on our FreeNAS
> box, so I’ve changed to an iSCSI backend, but the problem continues.
>
>
> Can't say anything about your issues without the logs but there's nothing
> wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's
> NFS since oVirt 3.2 so...
>

Could you share your relevant exports configuration to make sure he's using
something that you know works?


>
> "You're holding it wrong":) Sorry, I know you're frustrated but that's
> what I can add to the conversation.
>
> /K
>
> This happens at the very end of the ovirt-hosted-engine-setup command,
> which leads me to believe that’s an oVirt issue. The OVA was already copied
> and deployed to the storage:
>
> [ INFO ] Starting vdsmd
> [ INFO ] Creating Volume Group
> [ INFO ] Creating Storage Domain
> [ INFO ] Creating Storage Pool
> [ INFO ] Connecting Storage Pool
> [ INFO ] Verifying sanlock lockspace initialization
> [ INFO ] Creating Image for 'hosted-engine.lockspace' ...
> [ INFO ] Image for 'hosted-engine.lockspace' created successfully
> [ INFO ] Creating Image for 'hosted-engine.metadata' ...
> [ INFO ] Image for 'hosted-engine.metadata' created successfully
> [ INFO ] Creating VM Image
> [ INFO ] Extracting disk image from OVF archive (could take a few minutes
> depending on archive size)
> [ INFO ] Validating pre-allocated volume size
> [ INFO ] Uploading volume to data domain (could take a few minutes
> depending on archive size)
> [ INFO ] Image successfully imported from OVF
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
> [ INFO ] Yum Performing yum transaction rollback
> [ INFO ] Stage: Clean up
> [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-
> setup/answers/answers-20170623032541.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
> Log file is located at /var/log/ovirt-hosted-engine-
> setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>
> At this point I really don’t know what I should try. And the log file is
> too verborragic (hoping this word exists) to look for errors.
>
> Any guidance?
>
> Thanks,
> V.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


-- 
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Adam Litke
On Fri, Jun 23, 2017 at 4:24 PM, Vinícius Ferrão  wrote:

> Hello Adam,
>
> Since the NFSv3 server is running on a FreeBSD based system I don’t want
> to mess with permissions (that installation was only a test). So I’ve
> mapped all users to root and all groups to wheel: everyone has permission
> to do anything on any file.
>
> # cat /etc/exports
> /mnt/pool/ovirt  -alldirs -mapall=root:wheel 146.164.36.137
>

I think this might be part of your problem.  The host drops privileges to
vdsm:kvm and needs to work with files which may not be world writable.
Consider changing the above to:

/mnt/pool/ovirt  -alldirs -mapall=36:36 146.164.36.137

Since the vdsm user is always 36 and the kvm group is always 36.
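
To confirm the change took effect, a hedged check from the oVirt host side
(the server name and probe mount point are placeholders):

showmount -e freenas.example.com
mount -t nfs freenas.example.com:/mnt/pool/ovirt /mnt/probe
sudo -u vdsm touch /mnt/probe/write-probe && sudo -u vdsm rm /mnt/probe/write-probe
umount /mnt/probe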


> The issue happens on the same point, either on NFS or on iSCSI.
>
> And finally here are the files:
> http://www.if.ufrj.br/~ferrao/ovirt/
>
>
From the sanlock log I can see host id renewal failures which are resulting
in vdsm being killed.  That's why we are seeing the timeouts of the API
calls.  My first guess is this is a permissions error (although I am not
sure why it would also fail with iscsi).
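
That pattern can usually be confirmed on the host with something like this
(standard sanlock log location and client tool; exact output varies by
version):

grep -E 'renewal|error' /var/log/sanlock.log | tail -n 20
sanlock client status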


> Thanks,
> V.
>
> On 23 Jun 2017, at 17:16, Adam Litke  wrote:
>
> Please provide links to the files /var/log/vdsm/vdsm.log
> and /var/log/sanlock.log on the host you are attempting to use.  Does the
> problem happen in the same step when using NFS vs. ISCSI?  How are you
> configuring the NFS export that you give to hosted-engine to use?
>
> On Fri, Jun 23, 2017 at 2:53 PM, Vinícius Ferrão 
> wrote:
>
>> Hello oVirt folks.
>>
>> I’m a traitor of the Xen movement and was looking for some good
>> alternatives for XenServer hypervisors. I was aware of KVM for a long time
>> but I was missing a more professional and appliance feeling of the product,
>> and oVirt appears to deliver exactly what I’m looking for.
>>
>> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
>> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>>
>> Firstly I tried to install oVirt Node on a VM in VMware Fusion on
>> my notebook; it was a no go. For reasons I don’t know,
>> vdsmd.service and libvirtd failed to start. I made sure that I was running
>> with EPT support enabled to achieve nested virtualization, but as I said:
>> it was a no go.
>>
>> So I’ve decommissioned a XenServer machine that was in production just to
>> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
>> E5506 with 48GB of system RAM, but I can’t get the hosted engine to work;
>> it always insults my hardware: --- Hosted Engine deployment failed: this
>> system is not reliable, please check the issue,fix and redeploy.
>>
>> It’s definitely a problem on the storage subsystem, the error is just
>> random, at this moment I’ve got:
>>
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>>
>> But on other tries it came up with something like this:
>>
>> No response for JSON-RPC Volume.getSize request.
>>
>> I was thinking that the problem was on the NFSv3 server on our FreeNAS
>> box, so I’ve changed to an iSCSI backend, but the problem continues. This
>> happens at the very end of the ovirt-hosted-engine-setup command, which
>> leads me to believe that’s an oVirt issue. The OVA was already copied and
>> deployed to the storage:
>>
>> [ INFO  ] Starting vdsmd
>> [ INFO  ] Creating Volume Group
>> [ INFO  ] Creating Storage Domain
>> [ INFO  ] Creating Storage Pool
>> [ INFO  ] Connecting Storage Pool
>> [ INFO  ] Verifying sanlock lockspace initialization
>> [ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
>> [ INFO  ] Image for 'hosted-engine.lockspace' created successfully
>> [ INFO  ] Creating Image for 'hosted-engine.metadata' ...
>> [ INFO  ] Image for 'hosted-engine.metadata' created successfully
>> [ INFO  ] Creating VM Image
>> [ INFO  ] Extracting disk image from OVF archive (could take a few
>> minutes depending on archive size)
>> [ INFO  ] Validating pre-allocated volume size
>> [ INFO  ] Uploading volume to data domain (could take a few minutes
>> depending on archive size)
>> [ INFO  ] Image successfully imported from OVF
>> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
>> response for JSON-RPC StorageDomain.detach request.
>> [ INFO  ] Yum Performing yum transaction rollback
>> [ INFO  ] Stage: Clean up
>> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
>> setup/answers/answers-20170623032541.conf'
>> [ INFO  ] Stage: Pre-termination
>> [ INFO  ] Stage: Termination
>> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
>> please check the issue,fix and redeploy
>>   Log file is located at
>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>>
>> At this point I really don’t know what I should try. And the log file is
>> too verborragic (hoping this word exists) to look for errors.

Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Karli Sjöberg
On 23 June 2017 at 21:08, Vinícius Ferrão wrote:

> Hello oVirt folks.
>
> I’m a traitor of the Xen movement and was looking for some good
> alternatives for XenServer hypervisors. I was aware of KVM for a long time
> but I was missing a more professional and appliance feeling of the product,
> and oVirt appears to deliver exactly what I’m looking for.
>
> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>
> Firstly I tried to install oVirt Node on a VM in VMware Fusion on my
> notebook; it was a no go. For reasons I don’t know, vdsmd.service and
> libvirtd failed to start. I made sure that I was running with EPT support
> enabled to achieve nested virtualization, but as I said: it was a no go.
>
> So I’ve decommissioned a XenServer machine that was in production just to
> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506
> with 48GB of system RAM, but I can’t get the hosted engine to work; it
> always insults my hardware: --- Hosted Engine deployment failed: this
> system is not reliable, please check the issue,fix and redeploy.
>
> It’s definitely a problem on the storage subsystem, the error is just
> random, at this moment I’ve got:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
>
> But on other tries it came up with something like this:
>
> No response for JSON-RPC Volume.getSize request.
>
> I was thinking that the problem was on the NFSv3 server on our FreeNAS
> box, so I’ve changed to an iSCSI backend, but the problem continues.

Can't say anything about your issues without the logs but there's nothing
wrong with FreeNAS (FreeBSD) NFS, I've been running oVirt with FreeBSD's NFS
since oVirt 3.2 so...

"You're holding it wrong":) Sorry, I know you're frustrated but that's what I
can add to the conversation.

/K

> This happens at the very end of the ovirt-hosted-engine-setup command,
> which leads me to believe that’s an oVirt issue. The OVA was already copied
> and deployed to the storage:
>
> [ INFO  ] Starting vdsmd
> [ INFO  ] Creating Volume Group
> [ INFO  ] Creating Storage Domain
> [ INFO  ] Creating Storage Pool
> [ INFO  ] Connecting Storage Pool
> [ INFO  ] Verifying sanlock lockspace initialization
> [ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
> [ INFO  ] Image for 'hosted-engine.lockspace' created successfully
> [ INFO  ] Creating Image for 'hosted-engine.metadata' ...
> [ INFO  ] Image for 'hosted-engine.metadata' created successfully
> [ INFO  ] Creating VM Image
> [ INFO  ] Extracting disk image from OVF archive (could take a few minutes
> depending on archive size)
> [ INFO  ] Validating pre-allocated volume size
> [ INFO  ] Uploading volume to data domain (could take a few minutes
> depending on archive size)
> [ INFO  ] Image successfully imported from OVF
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file
> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
>   Log file is located at
> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>
> At this point I really don’t know what I should try. And the log file is
> too verborragic (hoping this word exists) to look for errors.
>
> Any guidance?
>
> Thanks,
> V.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Adam Litke
Please provide links to the files /var/log/vdsm/vdsm.log
and /var/log/sanlock.log on the host you are attempting to use.  Does the
problem happen in the same step when using NFS vs. ISCSI?  How are you
configuring the NFS export that you give to hosted-engine to use?

On Fri, Jun 23, 2017 at 2:53 PM, Vinícius Ferrão  wrote:

> Hello oVirt folks.
>
> I’m a traitor of the Xen movement and was looking for some good
> alternatives for XenServer hypervisors. I was aware of KVM for a long time
> but I was missing a more professional and appliance feeling of the product,
> and oVirt appears to deliver exactly what I’m looking for.
>
> Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for
> equal or better alternatives, but I’m starting to get frustrated with oVirt.
>
> Firstly I tried to install oVirt Node on a VM in VMware Fusion on
> my notebook; it was a no go. For reasons I don’t know,
> vdsmd.service and libvirtd failed to start. I made sure that I was running
> with EPT support enabled to achieve nested virtualization, but as I said:
> it was a no go.
>
> So I’ve decommissioned a XenServer machine that was in production just to
> try oVirt. The hardware is not new, but it’s very capable: Dual Xeon
> E5506 with 48GB of system RAM, but I can’t get the hosted engine to work;
> it always insults my hardware: --- Hosted Engine deployment failed: this
> system is not reliable, please check the issue,fix and redeploy.
>
> It’s definitely a problem on the storage subsystem, the error is just
> random, at this moment I’ve got:
>
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
>
> But on other tries it came up with something like this:
>
> No response for JSON-RPC Volume.getSize request.
>
> I was thinking that the problem was on the NFSv3 server on our FreeNAS
> box, so I’ve changed to an iSCSI backend, but the problem continues. This
> happens at the very end of the ovirt-hosted-engine-setup command, which
> leads me to believe that’s an oVirt issue. The OVA was already copied and
> deployed to the storage:
>
> [ INFO  ] Starting vdsmd
> [ INFO  ] Creating Volume Group
> [ INFO  ] Creating Storage Domain
> [ INFO  ] Creating Storage Pool
> [ INFO  ] Connecting Storage Pool
> [ INFO  ] Verifying sanlock lockspace initialization
> [ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
> [ INFO  ] Image for 'hosted-engine.lockspace' created successfully
> [ INFO  ] Creating Image for 'hosted-engine.metadata' ...
> [ INFO  ] Image for 'hosted-engine.metadata' created successfully
> [ INFO  ] Creating VM Image
> [ INFO  ] Extracting disk image from OVF archive (could take a few minutes
> depending on archive size)
> [ INFO  ] Validating pre-allocated volume size
> [ INFO  ] Uploading volume to data domain (could take a few minutes
> depending on archive size)
> [ INFO  ] Image successfully imported from OVF
> [ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No
> response for JSON-RPC StorageDomain.detach request.
> [ INFO  ] Yum Performing yum transaction rollback
> [ INFO  ] Stage: Clean up
> [ INFO  ] Generating answer file '/var/lib/ovirt-hosted-engine-
> setup/answers/answers-20170623032541.conf'
> [ INFO  ] Stage: Pre-termination
> [ INFO  ] Stage: Termination
> [ ERROR ] Hosted Engine deployment failed: this system is not reliable,
> please check the issue,fix and redeploy
>   Log file is located at /var/log/ovirt-hosted-engine-
> setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log
>
> At this point I really don’t know what I should try. And the log file is
> too verborragic (hoping this word exists) to look for errors.
>
> Any guidance?
>
> Thanks,
> V.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 
Adam Litke
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Frustration defines the deployment of Hosted Engine

2017-06-23 Thread Vinícius Ferrão
Hello oVirt folks.

I’m a traitor of the Xen movement and was looking for some good alternatives 
for XenServer hypervisors. I was aware of KVM for a long time but I was missing 
a more professional and appliance feeling of the product, and oVirt appears to 
deliver exactly what I’m looking for.

Don’t get me wrong, I’m not saying that Xen is not good, I’m looking for equal 
or better alternatives, but I’m starting to get frustrated with oVirt.

Firstly I tried to install oVirt Node on a VM in VMware Fusion on my
notebook; it was a no go. For reasons I don’t know, vdsmd.service and
libvirtd failed to start. I made sure that I was running with EPT support
enabled to achieve nested virtualization, but as I said: it was a no go.

So I’ve decommissioned a XenServer machine that was in production just to try
oVirt. The hardware is not new, but it’s very capable: Dual Xeon E5506 with
48GB of system RAM, but I can’t get the hosted engine to work; it always
insults my hardware: --- Hosted Engine deployment failed: this system is not
reliable, please check the issue,fix and redeploy.

It’s definitely a problem on the storage subsystem, the error is just random, 
at this moment I’ve got:

[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.

But on other tries it came up with something like this:

No response for JSON-RPC Volume.getSize request.

I was thinking that the problem was on the NFSv3 server on our FreeNAS box, so 
I’ve changed to an iSCSI backend, but the problem continues. This happens at 
the very end of the ovirt-hosted-engine-setup command, which leads me to 
believe that’s an oVirt issue. The OVA was already copied and deployed to the 
storage:

[ INFO  ] Starting vdsmd
[ INFO  ] Creating Volume Group
[ INFO  ] Creating Storage Domain
[ INFO  ] Creating Storage Pool
[ INFO  ] Connecting Storage Pool
[ INFO  ] Verifying sanlock lockspace initialization
[ INFO  ] Creating Image for 'hosted-engine.lockspace' ...
[ INFO  ] Image for 'hosted-engine.lockspace' created successfully
[ INFO  ] Creating Image for 'hosted-engine.metadata' ...
[ INFO  ] Image for 'hosted-engine.metadata' created successfully
[ INFO  ] Creating VM Image
[ INFO  ] Extracting disk image from OVF archive (could take a few minutes 
depending on archive size)
[ INFO  ] Validating pre-allocated volume size
[ INFO  ] Uploading volume to data domain (could take a few minutes depending 
on archive size)
[ INFO  ] Image successfully imported from OVF
[ ERROR ] Failed to execute stage 'Misc configuration': [-32605] No response 
for JSON-RPC StorageDomain.detach request.
[ INFO  ] Yum Performing yum transaction rollback
[ INFO  ] Stage: Clean up
[ INFO  ] Generating answer file 
'/var/lib/ovirt-hosted-engine-setup/answers/answers-20170623032541.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: this system is not reliable, please 
check the issue,fix and redeploy
  Log file is located at 
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20170623023424-o9rbt0.log

At this point I really don’t know what I should try. And the log file is too 
verborragic (hoping this word exists) to look for errors.

Any guidance?

Thanks,
V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt split brain resolution

2017-06-23 Thread Abi Askushi
Hi Denis,

I receive permission denied as below:

gluster volume heal engine split-brain latest-mtime
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
Healing /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent failed: Operation not
permitted.
Volume heal failed.
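
If the policy-based heal keeps being refused, gluster can also be told
explicitly which copy to keep; a hedged sketch using one of the bricks from
the 'heal info' output above (choosing the wrong source discards the other
replicas' version of the file, so pick carefully):

gluster volume heal engine split-brain source-brick \
    gluster0:/gluster/engine/brick \
    /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent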


When I shut down host3, no split brain is reported from the remaining
two hosts. When I power host3 up again, I receive the mentioned split brain
and host3 logs the following in ovirt-hosted-engine-ha/agent.log:

MainThread::INFO::2017-06-23
16:18:06,067::hosted_engine::594::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Failed set the storage domain: 'Failed to set storage domain VdsmBackend,
options {'hosted-engine.lockspace':
'7B22696D6167655F75756964223A202238323132626637382D66392D346465652D61672D346265633734353035366235222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202236323739303162652D666261332D346263342D393037632D393931356138333632633537227D',
'sp_uuid': '----', 'dom_type': 'glusterfs',
'hosted-engine.metadata':
'7B22696D6167655F75756964223A202263353930633034372D613462322D346539312D613832362D643438623961643537323330222C202270617468223A206E756C6C2C2022766F6C756D655F75756964223A202230353166653865612D39632D346134302D383438382D386335313138666438373238227D',
'sd_uuid': 'e1c80750-b880-495e-9609-b8bc7760d101'}: Request failed: '. Waiting '5's before the next attempt

and the following at /var/log/messages:
Jun 23 16:19:43 v2 journal: vdsm root ERROR failed to retrieve Hosted
Engine HA info#012Traceback (most recent call last):#012  File
"/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
_getHaInfo#012stats = instance.get_all_stats()#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 105, in get_all_stats#012stats =
broker.get_stats_from_storage(service)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 233, in get_stats_from_storage#012result =
self._checked_communicate(request)#012  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 261, in _checked_communicate#012.format(message or
response))#012RequestError: Request failed: failed to read metadata: [Errno
5] Input/output error: '/rhev/data-center/mnt/glusterSD/10.100.100.1:
_engine/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent/hosted-engine.metadata'

Thanx


On Fri, Jun 23, 2017 at 6:05 PM, Denis Chaplygin 
wrote:

> Hello Abi,
>
> On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi 
> wrote:
>
>> Hi All,
>>
>> I have a 3 node ovirt 4.1 setup. I lost one node due to raid controller
>> issues. Upon restoration I have the following split brain, although the
>> hosts have mounted the storage domains:
>>
>> gluster volume heal engine info split-brain
>> Brick gluster0:/gluster/engine/brick
>> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick gluster1:/gluster/engine/brick
>> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick gluster2:/gluster/engine/brick
>> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>>
>>
> It is definitely on the gluster side. You could try to use
>
> gluster volume heal engine split-brain latest-mtime /e1c80750-b880-
> 495e-9609-b8bc7760d101/ha_agent
>
>
> I also added gluster developers to this thread, so they may provide you
> with better advice.
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Dell firmware update problem

2017-06-23 Thread Pavel Gashev
This is a multipath issue. I’d suggest stopping the multipathd service before
applying Dell updates. This will not affect currently running multipath devices.
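
In practice that would look roughly like this (a sketch; the update package
name is illustrative):

systemctl stop multipathd
./BIOS_PER_1.2.3.BIN   # run the Dell update package here
systemctl start multipathd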

On 23/06/2017, 13:32, "users-boun...@ovirt.org on behalf of Davide Ferrari" 
 wrote:

Hello


I'm trying to update the BIOS firmware on a Dell PowerEdge with CentOS 
(just upgraded to 7.3) running oVirt 4.0 and it fails with an error

mount: /dev/sdb is already mounted or /tmp/SECUPD busy

The problem is acknowledged by Red Hat and there is a solution for it:

https://access.redhat.com/solutions/2671581

but well, I have no RH subscriptions :( Has anyone here encountered the 
same problem with CentOS and oVirt? Any hint?


Thanks!

-- 

Davide

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt split brain resolution

2017-06-23 Thread Denis Chaplygin
Hello Abi,

On Fri, Jun 23, 2017 at 4:47 PM, Abi Askushi 
wrote:

> Hi All,
>
> I have a 3 node ovirt 4.1 setup. I lost one node due to raid controller
> issues. Upon restoration I have the following split brain, although the
> hosts have mounted the storage domains:
>
> gluster volume heal engine info split-brain
> Brick gluster0:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster1:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
> Brick gluster2:/gluster/engine/brick
> /e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
> Status: Connected
> Number of entries in split-brain: 1
>
>
>
It is definitely on the gluster side. You could try to use

gluster volume heal engine split-brain latest-mtime
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent


I also added gluster developers to this thread, so they may provide you
with better advice.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt split brain resolution

2017-06-23 Thread Abi Askushi
Hi All,

I have a 3 node ovirt 4.1 setup. I lost one node due to raid controller
issues. Upon restoration I have the following split brain, although the
hosts have mounted the storage domains:

gluster volume heal engine info split-brain
Brick gluster0:/gluster/engine/brick
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
Status: Connected
Number of entries in split-brain: 1

Brick gluster1:/gluster/engine/brick
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
Status: Connected
Number of entries in split-brain: 1

Brick gluster2:/gluster/engine/brick
/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
Status: Connected
Number of entries in split-brain: 1


Hosted engine status gives the following:

hosted-engine --vm-status
Traceback (most recent call last):
  File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
  File "/usr/lib64/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 173, in <module>
if not status_checker.print_status():
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 103, in print_status
all_host_stats = self._get_all_host_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/vm_status.py",
line 73, in _get_all_host_stats
all_host_stats = ha_cli.get_all_host_stats()
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 160, in get_all_host_stats
return self.get_all_stats(self.StatModes.HOST)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
line 105, in get_all_stats
stats = broker.get_stats_from_storage(service)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 233, in get_stats_from_storage
result = self._checked_communicate(request)
  File
"/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
line 261, in _checked_communicate
.format(message or response))
ovirt_hosted_engine_ha.lib.exceptions.RequestError: Request failed: failed
to read metadata: [Errno 5] Input/output error:
'/rhev/data-center/mnt/glusterSD/10.100.100.1:
_engine/e1c80750-b880-495e-9609-b8bc7760d101/ha_agent/hosted-engine.metadata'

Any idea on how to resolve this split brain?

Thanx,
Alex
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] SPICE keymap ?

2017-06-23 Thread Matthias Leopold

On 2017-06-22 at 13:23, Matthias Leopold wrote:

hi,

I'm looking for a way to change the SPICE keymap for a VM. I couldn't
find it. I couldn't find a way to change it in the client either (the Linux
remote-viewer application). This is probably easy; thanks anyway...


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


As I said, it was easy. It's obviously a matter of guest OS keymap
configuration (localectl set-keymap in CentOS 7). I dared to ask because I
found configuration stanzas for qemu via google, like the one sketched below.
I don't know if this is obsolete; I was suspicious from the beginning because
there is no configuration option in oVirt...
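
The stanzas in question are presumably variants of the libvirt graphics
element, roughly like this (a sketch; the keymap value is illustrative, and
oVirt regenerates the domain XML, so a hand edit via 'virsh edit' would not
persist):

<graphics type='spice' autoport='yes' keymap='de'/>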


matthias
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] is arbiter configuration needed?

2017-06-23 Thread Erekle Magradze

Thanks a lot Kasturl


On 06/23/2017 03:25 PM, knarra wrote:

On 06/23/2017 03:38 PM, Erekle Magradze wrote:

Hello,
I am using glusterfs as the storage backend for the VM images; the
volumes for oVirt consist of three bricks. Is it still necessary to
configure an arbiter to be on the safe side, or, since the number of
bricks is odd, is it handled out of the box?

Thanks in advance
Cheers
Erekle
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

An arbiter volume is a special class of replica-3 volume. Arbiter is
special because the third brick of the replica set contains only directory
hierarchy information and metadata. Therefore, arbiter provides
split-brain protection with the equivalent consistency of a replica-3
volume without incurring the additional storage space overhead.


If you already have a replica volume in your config with three
bricks then that config should be good. You do not need to create an
arbiter.


Hope this helps !!

Thanks

kasturi



--
Recogizer Group GmbH

Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555

E-Mail erekle.magra...@recogizer.de
Web: www.recogizer.com
 
Recogizer on LinkedIn: https://www.linkedin.com/company-beta/10039182/

Follow us on Twitter: https://twitter.com/recogizer

-

Recogizer Group GmbH
Managing directors: Oliver Habisch, Carsten Kreutze
Commercial register: Amtsgericht Bonn, HRB 20724
Registered office: Bonn; VAT ID no.: DE294195993

This e-mail contains confidential and/or legally protected information.
If you are not the intended recipient or have received this e-mail in error,
please notify the sender immediately and delete this e-mail.
Unauthorized copying of this e-mail, as well as unauthorized disclosure of it
and the information contained therein, is not permitted.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] is arbiter configuration needed?

2017-06-23 Thread knarra

On 06/23/2017 03:38 PM, Erekle Magradze wrote:

Hello,
I am using glusterfs as the storage backend for the VM images; the volumes
for oVirt consist of three bricks. Is it still necessary to configure
an arbiter to be on the safe side, or, since the number of bricks is
odd, is it handled out of the box?

Thanks in advance
Cheers
Erekle
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

An arbiter volume is a special class of replica-3 volume. Arbiter is
special because the third brick of the replica set contains only directory
hierarchy information and metadata. Therefore, arbiter provides
split-brain protection with the equivalent consistency of a replica-3
volume without incurring the additional storage space overhead.


If you already have a replica volume in your config with three
bricks then that config should be good. You do not need to create an arbiter.
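
For completeness, a dedicated arbiter is declared at volume-creation time; a
hedged sketch with placeholder host and brick names:

gluster volume create engine replica 3 arbiter 1 \
    host1:/gluster/engine/brick \
    host2:/gluster/engine/brick \
    host3:/gluster/engine/arbiter-brick

Here the third brick would hold only the metadata described above.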


Hope this helps !!

Thanks

kasturi

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Dell firmware update problem

2017-06-23 Thread Davide Ferrari

Hello


I'm trying to update the BIOS firmware on a Dell PowerEdge with CentOS 
(just upgraded to 7.3) running oVirt 4.0 and it fails with an error


mount: /dev/sdb is already mounted or /tmp/SECUPD busy

The problem is acknowledged by Red Hat and there is a solution for it:

https://access.redhat.com/solutions/2671581

but well, I have no RH subscriptions :( Has anyone here encountered the 
same problem with CentOS and oVirt? Any hint?



Thanks!

--

Davide

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] is arbiter configuration needed?

2017-06-23 Thread Erekle Magradze

Hello,
I am using glusterfs as the storage backend for the VM images; the volumes
for oVirt consist of three bricks. Is it still necessary to configure
an arbiter to be on the safe side, or, since the number of bricks is odd,
is it handled out of the box?

Thanks in advance
Cheers
Erekle
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hot Memory add and Physical Memory guaranteed

2017-06-23 Thread Luca 'remix_tj' Lorenzetto
On Fri, Jun 23, 2017 at 11:16 AM, Milan Zamazal  wrote:
> "Luca 'remix_tj' Lorenzetto"  writes:
>
>> I just tested memory hot add on a VM. This VM had 2048 MB. I set
>> the new memory to 2662 MB.
>> I logged into the VM and saw that there hasn't been any memory change,
>> even though I told the manager to apply the memory expansion immediately.
>>
>> Memory shown by free -m is 1772 MB.
>
> [...]
>
>> Forgot to say that it is a RHEL 7 VM and has the memory balloon device enabled.
>
> This is normal with memory balloon enabled – memory balloon often
> "consumes" the hot plugged memory, so you can't see it.

Ok.

What exactly is the role of "guaranteed memory"? Is it only about ensuring
at startup time that there is at least X of free memory on the hosts, or
something more complex?

What's the best configuration? Keeping the balloon or not? Setting memory
and guaranteed memory to the same value?
If I have 1TB of RAM across the whole cluster, does "guaranteed memory"
prevent provisioning VMs whose cumulative guaranteed memory is greater
than 1TB?
Can KSM help with overprovisioning in this situation?

Does the memory balloon device have an impact on VM performance?

I need to understand this better in order to plan correctly all the
hardware resources I need for migrating to oVirt.

Luca

-- 
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hot Memory add and Physical Memory guaranteed

2017-06-23 Thread Martin Sivak
And I believe the virt team is working on "hotplug" support for
guaranteed memory, so MOM will start changing the balloon size
accordingly.

Michal: is there a bug tracking that?

Regards,

Martin Sivak

On Fri, Jun 23, 2017 at 11:16 AM, Milan Zamazal  wrote:
> "Luca 'remix_tj' Lorenzetto"  writes:
>
>> I just tested memory hot add on a VM. This VM had 2048 MB. I set
>> the new memory to 2662 MB.
>> I logged into the VM and saw that there hasn't been any memory change,
>> even though I told the manager to apply the memory expansion immediately.
>>
>> Memory shown by free -m is 1772 MB.
>
> [...]
>
>> Forgot to say that it is a RHEL 7 VM and has the memory balloon device enabled.
>
> This is normal with memory balloon enabled – memory balloon often
> "consumes" the hot plugged memory, so you can't see it.
>
> Regards,
> Milan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hot Memory add and Physical Memory guaranteed

2017-06-23 Thread Milan Zamazal
"Luca 'remix_tj' Lorenzetto"  writes:

> I just tested memory hot add on a VM. This VM had 2048 MB. I set
> the new memory to 2662 MB.
> I logged into the VM and saw that there hasn't been any memory change,
> even though I told the manager to apply the memory expansion immediately.
>
> Memory shown by free -m is 1772 MB.

[...]

> Forgot to say that it is a RHEL 7 VM and has the memory balloon device enabled.

This is normal with memory balloon enabled – memory balloon often
"consumes" the hot plugged memory, so you can't see it.

Regards,
Milan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] qemu-kvm-ev-2.6.0-28.el7_3.10.1 now available

2017-06-23 Thread Sandro Bonazzola
Hi,
qemu-kvm-ev-2.6.0-28.el7_3.10.1 has been tagged for release and will soon be
available on CentOS mirrors.

This release addresses a security issue (CVE-2017-7718) which has a
security impact rated important.
See https://www.redhat.com/archives/rhsa-announce/2017-June/msg00014.html
for more details on this update.
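
On a CentOS host the update would typically be applied with something like
this (a sketch, assuming the CentOS Virt SIG repository providing qemu-kvm-ev
is already enabled):

yum clean expire-cache
yum update qemu-kvm-ev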

Here's the Changelog:

* Fri Jun 23 2017 Sandro Bonazzola  -
ev-2.6.0-28.el7_3.10.1
- Removing RH branding from package name

* Mon May 22 2017 Miroslav Rezanina  -
rhev-2.6.0-28.el7_3.10
- kvm-virtio-rng-stop-virtqueue-while-the-CPU-is-stopped.patch [bz#1450375]
- Resolves: bz#1450375
  (Migration failed with postcopy enabled from rhel7.3.z host to rhel7.4
host "error while loading state for instance 0x0 of device 'pci)


-- 

SANDRO BONAZZOLA

ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D

Red Hat EMEA 

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] GDeploy, thin provisioning and pools

2017-06-23 Thread Endre Karlson
Hi, I'm trying to get gdeploy working for my servers (3 of them) using the
following config linked below:

https://gist.github.com/ekarlso/9bfa0e0560b84ec286ef34ab790d

But it seems I need to have one pool metadata LV per volume group?

Regards
Endre
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users