[ovirt-users] oVIRT 4.1 / iSCSI Multipathing / Dell Compellent

2017-07-11 Thread Devin Acosta
I just installed a brand new Dell Compellent SAN for use with our oVIRT
4.1.3 fresh installation. I presented a LUN of 30TB to the cluster over
iSCSI 10G. I went into the Storage domain, added a new storage mount
called “dell-storage”, and logged into each of the ports for the target. It
detects the targets correctly and the Dell SAN is happy, until a host is
rebooted, at which point iSCSI seems to log into only one of
the controllers and not all the paths that it originally logged into. At
this point the Dell SAN shows only half the paths connected, hence this e-mail.

When I looked at the original iscsiadm session information after initially
joining the domain, it showed correct connections to the (1f, 21, 1e, 20) ports:



tcp: [11] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [12] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe21
(non-flash)
tcp: [13] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)
tcp: [14] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe20
(non-flash)
tcp: [15] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)
tcp: [16] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)

After I reboot the hypervisor and it reconnects to the cluster, it shows:

tcp: [1] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)

tcp: [2] 10.4.78.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1e
(non-flash)

tcp: [3] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)

tcp: [4] 10.4.77.100:3260,0 iqn.2002-03.com.compellent:5000d310013dfe1f
(non-flash)

What is bizarre is that it shows multiple connections to the same IPs: two
connections to 1e and two connections to 1f. It seems to have selected
only the top controller on each fault domain and not the bottom controller
as well.

I did configure a “bond” inside iSCSI Multipathing, selecting only
the 2 VLANs together for iSCSI. I didn’t select a target with it
because I wasn’t sure of the proper configuration for this. If I select both
the virtual networks and the target ports, the cluster goes down hard.
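
Assuming the target records shown in the listings above are still present in the node database, the post-reboot state can be inspected and all recorded paths re-logged-in with something like the following sketch. Note that VDSM manages iSCSI logins itself, so manual changes may be overridden; this is mainly useful for diagnosing which paths are recorded versus logged in:

```shell
# Compare what is currently logged in against what is recorded
iscsiadm -m session -P 1                    # sessions currently logged in
iscsiadm -m node                            # every recorded portal/target pair

# Log in to every recorded target, and ask for automatic login at boot
iscsiadm -m node -L all
iscsiadm -m node -o update -n node.startup -v automatic
```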

Any ideas?

Devin Acosta
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Manually moving disks from FC to iSCSI

2017-07-11 Thread Gianluca Cecchi
On Tue, Jul 11, 2017 at 3:14 PM, Gianluca Cecchi 
wrote:

>
>
> On Tue, Jul 11, 2017 at 2:59 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I have a source oVirt environment with storage domain on FC
>> I have a destination oVirt environment with storage domain on iSCSI
>> The two environments can communicate only via the network of their
>> respective hypervisors.
>> The source environment, in particular, is almost isolated and I cannot
>> attach an export domain to it or something similar.
>> So I'm going to plan a direct move through dd of the disks of some VMs
>>
>> The workflow would be
>> On destination create a new VM with same config and same number of disks
>> of the same size of corresponding source ones.
>> Also I think same allocation policy (thin provision vs preallocated)
>> Using lvs -o+lv_tags I can detect the names of my origin and destination
>> LVs, corresponding to the disks
>> When a VM is powered down, the LV that maps the disk will not be open, so
>> I have to force its activation (both on source and on destination)
>>
>> lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname
>>
>> copy source disk with dd through network (I use gzip to limit network
>> usage basically...)
>> on src_host:
>> dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
>> bs=1024k of=/dev/dest_vg/dest_lv"
>>
>> deactivate LVs on source and dest
>>
>> lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname
>>
>> Try to power on the VM on destination
>>
>> Some questions:
>> - about overall workflow
>> - about dd flags, in particular if source disks are thin vs preallocated
>>
>> Thanks,
>> Gianluca
>>
>>
>
> Some further comments:
>
> - probably better/safer to use the SPM hosts for the lvchange commands both
> on source and target, as this implies metadata manipulation, correct?
> - when disks are preallocated there are no problems, but when they are thin
> I can be in this situation:
>
> the source disk is defined as 90 GB and over time it has expanded up to 50 GB;
> the dest disk, just after creation, will normally be only a few GB
> (e.g. 4 GB), so the dd command will fail when it gets full...
> Does this mean it would be better to create the dest disk as preallocated
> anyway, or is it safe to run
> lvextend -L+50G dest_vg/dest_lv
> from the command line?
> Will oVirt recognize its actual size?
>
>
>
So I've done it, both for thin provisioned disks (without having made
snapshots on them, see below) and for preallocated disks, and it seems to
work: at least an OS boot disk copied over this way boots.

I have one further doubt.

For a VM I have a disk defined as thin provisioned, 90 GB in size.
Some weeks ago I created a snapshot of it.
Now, before copying over the LV, I deleted this snapshot.
But I see that at the end of the process the size of the LV backing the VM
disk is actually 92 GB, so I presume that my dd over the network will
fail...
What could I do to cover this scenario?
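
One hedged way to cover the size mismatch (a sketch only, with hypothetical VG/LV names, run on the SPM host before the dd copy): read the real byte size of the source LV and grow the destination LV to at least that size, so a snapshot-merge-grown 92 GB LV no longer overflows a 90 GB target.

```shell
# Compare LV sizes in bytes and grow the destination if needed
src_b=$(lvs --noheadings --nosuffix --units b -o lv_size src_vg/src_lv | tr -d ' ')
dst_b=$(lvs --noheadings --nosuffix --units b -o lv_size dest_vg/dest_lv | tr -d ' ')
if [ "$src_b" -gt "$dst_b" ]; then
    lvextend --config 'global {use_lvmetad=0}' -L "${src_b}b" dest_vg/dest_lv
fi
```

Whether the engine will show the extra size is exactly the open question here: the engine metadata still records the size the disk was created with, so this is best tried on a test VM first.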

What would be the OS-level command used when I choose "move disk" in the web
admin GUI to move a disk from one storage domain to another?

Thanks,
Gianluca


Re: [ovirt-users] Detach disk from one VM, attach to another VM

2017-07-11 Thread Victor José Acosta Domínguez
Hello

Yes, it is possible: you must remove the disk from the first VM's
configuration (be careful, do not delete your virtual disk).

After that you can attach that disk to another VM.

The process should be:
- Detach disk from VM1
- Delete disk from VM1's configuration
- Attach disk to VM2
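
The same steps can be sketched against the oVirt REST API (hypothetical engine FQDN, password and ids; this is an illustration, not a tested recipe). In the v4 API, deleting a disk attachment detaches the disk without removing it, and posting the same disk id to another VM's diskattachments re-attaches it:

```shell
ENGINE='https://engine.example.com/ovirt-engine/api'
AUTH='admin@internal:password'

# Detach from VM1 (the disk itself survives)
curl -k -u "$AUTH" -X DELETE "$ENGINE/vms/{vm1-id}/diskattachments/{disk-id}"

# Attach to VM2
curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
    -d '<disk_attachment><disk id="{disk-id}"/><interface>virtio</interface></disk_attachment>' \
    "$ENGINE/vms/{vm2-id}/diskattachments"
```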

Victor Acosta


[ovirt-users] Detach disk from one VM, attach to another VM

2017-07-11 Thread Davide Ferrari

Hello

I'm using oVirt 4.1.3 and I was wondering if it's possible to detach a 
disk created specifically for a VM from that VM and attach it to another 
VM.


Reading this

https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/

it seems that's only possible with floating virtual disks, but I have a 
disk that was created for a specific VM.



Thanks!



Re: [ovirt-users] Manually moving disks from FC to iSCSI

2017-07-11 Thread Gianluca Cecchi
On Tue, Jul 11, 2017 at 3:33 PM, Elad Ben Aharon 
wrote:

> Hi,
>
> I suggest another solution for you (which wasn't tested as a whole but
> much simpler in my opinion).
>
>  On the source environment:
> - Create a local DC with a single local storage domain (let's call it
> sd1).
> - Deactivate and detach the FC domain from the existing shared DC and
> attach it to the local DC (From oVirt 4.1, attaching a shared storage
> domain to a local data center is possible).
> - Register FC domain VMs (from 'VM import' sub-tab under the FC domain).
> - Move the disks from the FC domain to the local domain.
> - Deactivate, detach and remove *without format* the local domain from
> the source environment.
> - Put the host in the local data center into maintenance (let's call it host1).
>
>  On the destination environment:
> - Add host1 to the destination environment and create a local data center
> with this host and a new local domain.
> - Import the existing local domain (sd1) to the local data center and
> register its VMs.
> - Deactivate and detach the iSCSI domain from the existing data center and
> attach it to the local data center.
> - Move all the disks from the local domain to the iSCSI domain.
> - Deactivate and detach the iSCSI domain from the local data center,
> attach and activate it in the shared data center and register its VMs.
>
>
> Thanks,
>
>
>
>
Unfortunately the source environment is located at a different site and I
cannot connect a "fast" local domain to it: no internal disks, and an old
blade system, so no fast USB channel either.
Also, the destination iSCSI domain has many VMs running and I cannot put
it down.
I have to transfer 2 VMs, for a total of about 500 GB of storage.


Re: [ovirt-users] Manually moving disks from FC to iSCSI

2017-07-11 Thread Elad Ben Aharon
Hi,

I suggest another solution for you (which wasn't tested as a whole but much
simpler in my opinion).

 On the source environment:
- Create a local DC with a single local storage domain (let's call it sd1).
- Deactivate and detach the FC domain from the existing shared DC and
attach it to the local DC (From oVirt 4.1, attaching a shared storage
domain to a local data center is possible).
- Register FC domain VMs (from 'VM import' sub-tab under the FC domain).
- Move the disks from the FC domain to the local domain.
- Deactivate, detach and remove *without format* the local domain from the
source environment.
- Put the host in the local data center into maintenance (let's call it host1).

 On the destination environment:
- Add host1 to the destination environment and create a local data center
with this host and a new local domain.
- Import the existing local domain (sd1) to the local data center and
register its VMs.
- Deactivate and detach the iSCSI domain from the existing data center and
attach it to the local data center.
- Move all the disks from the local domain to the iSCSI domain.
- Deactivate and detach the iSCSI domain from the local data center, attach
and activate it in the shared data center and register its VMs.


Thanks,

ELAD BEN AHARON

SENIOR QUALITY ENGINEER

Red Hat Israel Ltd. 

34 Jerusalem Road, Building A, 1st floor

Ra'anana, Israel 4350109

ebena...@redhat.com   T: +972-9-7692007/8272007
  TRIED. TESTED. TRUSTED. 




On Tue, Jul 11, 2017 at 4:14 PM, Gianluca Cecchi 
wrote:

>
>
> On Tue, Jul 11, 2017 at 2:59 PM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> Hello,
>> I have a source oVirt environment with storage domain on FC
>> I have a destination oVirt environment with storage domain on iSCSI
>> The two environments can communicate only via the network of their
>> respective hypervisors.
>> The source environment, in particular, is almost isolated and I cannot
>> attach an export domain to it or something similar.
>> So I'm going to plan a direct move through dd of the disks of some VMs
>>
>> The workflow would be
>> On destination create a new VM with same config and same number of disks
>> of the same size of corresponding source ones.
>> Also I think same allocation policy (thin provision vs preallocated)
>> Using lvs -o+lv_tags I can detect the names of my origin and destination
>> LVs, corresponding to the disks
>> When a VM is powered down, the LV that maps the disk will not be open, so
>> I have to force its activation (both on source and on destination)
>>
>> lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname
>>
>> copy source disk with dd through network (I use gzip to limit network
>> usage basically...)
>> on src_host:
>> dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
>> bs=1024k of=/dev/dest_vg/dest_lv"
>>
>> deactivate LVs on source and dest
>>
>> lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname
>>
>> Try to power on the VM on destination
>>
>> Some questions:
>> - about overall workflow
>> - about dd flags, in particular if source disks are thin vs preallocated
>>
>> Thanks,
>> Gianluca
>>
>>
>
> Some further comments:
>
> - probably better/safer to use the SPM hosts for the lvchange commands both
> on source and target, as this implies metadata manipulation, correct?
> - when disks are preallocated there are no problems, but when they are thin
> I can be in this situation:
>
> the source disk is defined as 90 GB and over time it has expanded up to 50 GB;
> the dest disk, just after creation, will normally be only a few GB
> (e.g. 4 GB), so the dd command will fail when it gets full...
> Does this mean it would be better to create the dest disk as preallocated
> anyway, or is it safe to run
> lvextend -L+50G dest_vg/dest_lv
> from the command line?
> Will oVirt recognize its actual size?
>
>
>


Re: [ovirt-users] Manually moving disks from FC to iSCSI

2017-07-11 Thread Gianluca Cecchi
On Tue, Jul 11, 2017 at 2:59 PM, Gianluca Cecchi 
wrote:

> Hello,
> I have a source oVirt environment with storage domain on FC
> I have a destination oVirt environment with storage domain on iSCSI
> The two environments can communicate only via the network of their
> respective hypervisors.
> The source environment, in particular, is almost isolated and I cannot
> attach an export domain to it or something similar.
> So I'm going to plan a direct move through dd of the disks of some VMs
>
> The workflow would be
> On destination create a new VM with same config and same number of disks
> of the same size of corresponding source ones.
> Also I think same allocation policy (thin provision vs preallocated)
> Using lvs -o+lv_tags I can detect the names of my origin and destination
> LVs, corresponding to the disks
> When a VM is powered down, the LV that maps the disk will not be open, so
> I have to force its activation (both on source and on destination)
>
> lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname
>
> copy source disk with dd through network (I use gzip to limit network
> usage basically...)
> on src_host:
> dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
> bs=1024k of=/dev/dest_vg/dest_lv"
>
> deactivate LVs on source and dest
>
> lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname
>
> Try to power on the VM on destination
>
> Some questions:
> - about overall workflow
> - about dd flags, in particular if source disks are thin vs preallocated
>
> Thanks,
> Gianluca
>
>

Some further comments:

- probably better/safer to use the SPM hosts for the lvchange commands both on
source and target, as this implies metadata manipulation, correct?
- when disks are preallocated there are no problems, but when they are thin I
can be in this situation:

the source disk is defined as 90 GB and over time it has expanded up to 50 GB;
the dest disk, just after creation, will normally be only a few GB
(e.g. 4 GB), so the dd command will fail when it gets full...
Does this mean it would be better to create the dest disk as preallocated
anyway, or is it safe to run
lvextend -L+50G dest_vg/dest_lv
from the command line?
Will oVirt recognize its actual size?


[ovirt-users] Manually moving disks from FC to iSCSI

2017-07-11 Thread Gianluca Cecchi
Hello,
I have a source oVirt environment with storage domain on FC
I have a destination oVirt environment with storage domain on iSCSI
The two environments can communicate only via the network of their
respective hypervisors.
The source environment, in particular, is almost isolated and I cannot
attach an export domain to it or something similar.
So I'm going to plan a direct move through dd of the disks of some VMs

The workflow would be
On destination create a new VM with same config and same number of disks of
the same size of corresponding source ones.
Also I think same allocation policy (thin provision vs preallocated)
Using lvs -o+lv_tags I can detect the names of my origin and destination
LVs, corresponding to the disks
When a VM is powered down, the LV that maps the disk will not be open, so I
have to force its activation (both on source and on destination)

lvchange --config 'global {use_lvmetad=0}' -ay vgname/lvname

copy source disk with dd through network (I use gzip to limit network usage
basically...)
on src_host:
dd if=/dev/src_vg/src_lv bs=1024k | gzip | ssh dest_host "gunzip | dd
bs=1024k of=/dev/dest_vg/dest_lv"

deactivate LVs on source and dest

lvchange --config 'global {use_lvmetad=0}' -an vgname/lvname

Try to power on the VM on destination
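
As a sanity check, the copy pipeline above can be dry-run locally and verified with a byte-for-byte comparison. This sketch uses temp files as stand-ins for the LVs (SRC/DST are assumptions); in the real run, SRC would be /dev/src_vg/src_lv and the "gunzip | dd" half would execute on dest_host via ssh, as in the command above.

```shell
# Dry-run of the dd|gzip copy pipeline with an integrity check
SRC="${SRC:-$(mktemp)}"; DST="${DST:-$(mktemp)}"
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null     # stand-in source LV
dd if="$SRC" bs=1M 2>/dev/null | gzip | gunzip | dd of="$DST" bs=1M 2>/dev/null
cmp -s "$SRC" "$DST" && echo "copy verified"               # contents match
```

For thin destination LVs, adding conv=sparse to the receiving dd may avoid physically writing zero blocks, though the destination LV must still be at least as large as the source.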

Some questions:
- about overall workflow
- about dd flags, in particular if source disks are thin vs preallocated

Thanks,
Gianluca


Re: [ovirt-users] Template for imported VMs?

2017-07-11 Thread Arik Hadas
On Tue, Jul 11, 2017 at 1:28 PM, Eduardo Mayoral  wrote:

> Hi,
>
> I am using oVirt 4.1 and I am importing a significant number of VMs
> from VMWare. So far, everything is fine.
>
> However, I find that after importing each VM I have to modify some
> parameters of the imported VM (Timezone, VM type from desktop to server,
> VNC console...).
>
> I have those parameters changed on the "blank" template, but it
> seems that the import process is not picking that template as the base
>

That's true, we don't take anything from the blank template on import at
the moment.
Although this approach makes sense to some degree, since the blank
template is used as a base for new VMs unless stated otherwise, it is
debatable whether it should also apply to imported VMs - some may
find such a relation between the (remote) imported VMs and the (local)
blank template odd/unexpected.


> for the imported VM. Where is the import process picking the defaults
> for the imported VM from? Can those be changed? I have failed to find
> that information on the docs.
>

Unfortunately, there is no way to set those defaults at the moment. You
will have to edit the VMs after the import process is done. There are some
plans to simplify that process though.
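
In the meantime, the per-VM edits can at least be scripted against the REST API rather than done one by one in the UI. A hedged sketch (hypothetical engine FQDN, password and VM id; fields shown are illustrative):

```shell
# Update an imported VM: switch type to server and set a timezone
curl -k -u 'admin@internal:password' -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<vm><type>server</type><time_zone><name>Etc/GMT</name></time_zone></vm>' \
    'https://engine.example.com/ovirt-engine/api/vms/{vm-id}'
```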


>
> Thanks in advance!
>
> --
> Eduardo Mayoral Jimeno (emayo...@arsys.es)
> Administrador de sistemas. Departamento de Plataformas. Arsys internet.
> +34 941 620 145 ext. 5153
>


[ovirt-users] Template for imported VMs?

2017-07-11 Thread Eduardo Mayoral
Hi,

I am using oVirt 4.1 and I am importing a significant number of VMs
from VMWare. So far, everything is fine.

However, I find that after importing each VM I have to modify some
parameters of the imported VM (Timezone, VM type from desktop to server,
VNC console...).

I have those parameters changed on the "blank" template, but it
seems that the import process is not picking that template as the base
for the imported VM. Where is the import process picking the defaults
for the imported VM from? Can those be changed? I have failed to find
that information on the docs.

Thanks in advance!

-- 
Eduardo Mayoral Jimeno (emayo...@arsys.es)
Administrador de sistemas. Departamento de Plataformas. Arsys internet.
+34 941 620 145 ext. 5153



Re: [ovirt-users] oVirt Metrics

2017-07-11 Thread Yedidyah Bar David
On Tue, Jul 11, 2017 at 1:13 PM, Arsène Gschwind
 wrote:
> Hi all,
>
> I'm trying to setup oVirt metrics as described at
> https://www.ovirt.org/develop/release-management/features/engine/metrics-store/
> using SSO.
> My oVirt installation is based on Version: 4.1.3.5-1.el7.centos.
>
> I'm missing the SSO tool called ovirt-register-sso-client, mentioned in the
> doc for registering a new SSO client. I couldn't figure out which package
> contains that tool; is it included in the latest distribution?

I do not think it's included in 4.1. I can only see it in the master
branch, see [1]. Adding Ravi. Ravi - is it planned to be in 4.1? If not,
perhaps the blog post should mention this.

You can try this by using the nightly snapshot [2]. Obviously do not do
this on a production setup.

Or you can use the version without sso.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1425935
[2] http://www.ovirt.org/develop/dev-process/install-nightly-snapshot/

>
> Thanks for any help.
>
> rgds,
> Arsène
>
> --
>
> Arsène Gschwind
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
> Tel. +41 79 449 25 63  |  http://its.unibas.ch
> ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11
>
>



-- 
Didi


[ovirt-users] oVirt Metrics

2017-07-11 Thread Arsène Gschwind

Hi all,

I'm trying to setup oVirt metrics as described at 
https://www.ovirt.org/develop/release-management/features/engine/metrics-store/ 
using SSO.

My oVirt installation is based on Version: 4.1.3.5-1.el7.centos.

I'm missing the SSO tool called ovirt-register-sso-client, mentioned in
the doc for registering a new SSO client. I couldn't figure out which
package contains that tool; is it included in the latest distribution?


Thanks for any help.

rgds,
Arsène

--

*Arsène Gschwind*
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 |  CH-4056 Basel  |  Switzerland
Tel. +41 79 449 25 63  | http://its.unibas.ch 
ITS-ServiceDesk: support-...@unibas.ch | +41 61 267 14 11



Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread knarra

On 07/11/2017 01:32 PM, Simone Marchioni wrote:

On 11/07/2017 07:59, knarra wrote:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I
updated the path in the gdeploy config file and ran Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error 
while evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device, due to
which the script fails and creating the physical volume also fails.
Can you try filling the start of the disk with zeros (512 MB or 1 GB) and
try again?


dd if=/dev/zero of=

Before running the script again, try a pvcreate and see if
that works. If it works, just do a pvdelete and run the script.
Everything should work fine.
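
The zero-and-verify step suggested above can be sketched like this. DEV defaults to a temp file here so the commands can be dry-run safely; on the real host it would be the failing device, e.g. /dev/md128 (and the zeroed region would be 512 MB to 1 GB rather than 4 MB):

```shell
# Zero out the start of the device, then confirm no signature remains
DEV="${DEV:-$(mktemp)}"
dd if=/dev/zero of="$DEV" bs=1M count=4 conv=notrunc 2>/dev/null
command -v wipefs >/dev/null && wipefs "$DEV"   # prints nothing when clean
od -An -tx1 -N16 "$DEV"                         # first bytes should be all zero
```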


Thanks
kasturi


Thanks for your time.
Simone


Hi,

removed the partition signatures with wipefs and ran deploy again: this
time the creation of the VG and LV worked correctly. The deployment
proceeded until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 

Re: [ovirt-users] [Ovirt 4.1] Hosted engine with glusterfs (2 node)

2017-07-11 Thread TranceWorldLogic .
Should I conclude that quorum is of no use on 2 nodes, and that I need to
set up an odd :) number of nodes only?

Please clarify.

Thanks,
~Rohit

On Tue, Jul 11, 2017 at 12:57 PM, Simone Tiraboschi 
wrote:

>
>
> On Tue, Jul 11, 2017 at 7:17 AM, TranceWorldLogic . <
> tranceworldlo...@gmail.com> wrote:
>
>> Hi,
>>
>> I was trying to set up the hosted engine on 2 host machines, but it is not
>> allowing me to set up the hosted engine on a 2-node gluster file system.
>>
>> I have also modified vdsm.conf to allow a replica count of 2.
>>
>> But later I found that in the code it is hard-coded to 3, as shown below:
>> --- ---
>> -- ---
>> src/plugins/gr-he-setup/storage/nfs.py
>>
>> if replicaCount != '3':
>> raise RuntimeError(
>> _(
>> 'GlusterFS Volume is not using replica 3'
>> )
>> )
>> --- ---
>> -- ---
>>
>> Can I get to know the reason it is hard-coded to 3?
>> From online sources I learned that the split-brain issue does not get
>> resolved automatically in a 2-node gluster setup. But please let me know
>> more (why 3? why not 2 or 4?)
>>
>
> Simply because two is even and three is odd.
>
> In a split condition over only two nodes, if node A says A and node B
> says B, you cannot decide whether the truth is A or B.
> With three nodes, if node A says A but nodes B and C say B, the truth
> is B by design.
> That's why you need three nodes, with at least 2 of them up and working.
>
> If you want to save something on storage side you could evaluate two
> regular nodes plus an arbiter one.
>
>
>>
>> If I am missing something please let me know.
>> Also, I want to understand: can quorum solve the 2-node gluster
>> (split-brain) issue?
>>
>> Thanks,
>> ~Rohit
>>
>>
>>
>>
>
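
The arbiter layout mentioned in the reply above can be sketched as follows (hypothetical host names and brick paths): two full data bricks plus one metadata-only arbiter brick, which provides quorum without a third full copy of the data.

```shell
# "replica 3 arbiter 1": the third brick stores only file metadata
gluster volume create engine replica 3 arbiter 1 \
    host1:/gluster/brick/engine \
    host2:/gluster/brick/engine \
    host3:/gluster/brick/engine
gluster volume start engine
```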


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread Gianluca Cecchi
On Tue, Jul 11, 2017 at 10:02 AM, Simone Marchioni 
wrote:

>
> Hi,
>
> removed the partition signatures with wipefs and ran deploy again: this time
> the creation of the VG and LV worked correctly. The deployment proceeded
> until some new errors... :-/
>
>
> PLAY [gluster_servers] **
> ***
>
> TASK [start/stop/restart/reload services] **
> 
> failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item":
> "glusterd", "msg": "Could not find the requested service glusterd: host"}
> to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry
>
>
>
[snip]


> In start/stop/restart/reload services it complains about "Could not find
> the requested service glusterd: host". Must GlusterFS be preinstalled or
> not? I simply installed the rpm packages manually BEFORE the deployment:
>
> yum install glusterfs glusterfs-cli glusterfs-libs
> glusterfs-client-xlators glusterfs-api glusterfs-fuse
>
> but never configured anything.
>
> For firewalld problem "ERROR: Exception caught:
> org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not
> among existing services Services are defined by port/tcp relationship and
> named as they are in /etc/services (on most systems)" I haven't touched
> anything... it's an "out of the box" installation of CentOS 7.3.
>
> Don't know if the following problems - "Run a shell script" and "usermod:
> group 'gluster' does not exist" - are related to these... maybe the usermod
> problem.
>
> Thank you again.
>
> Simone
>
>
You definitely have to install the glusterfs-server package as well; that is
what provides glusterd.
You were probably misled by the fact that the base CentOS packages
apparently do not provide it.
But if you have ovirt-4.1-dependencies.repo enabled, you should have the
gluster 3.8 packages, and you do have to install glusterfs-server:

[ovirt-4.1-centos-gluster38]
name=CentOS-7 - Gluster 3.8
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-3.8/
gpgcheck=1
enabled=1
gpgkey=
https://raw.githubusercontent.com/CentOS-Storage-SIG/centos-release-storage-common/master/RPM-GPG-KEY-CentOS-SIG-Storage

eg


[g.cecchi@ov300 ~]$ sudo yum install glusterfs-server
Loaded plugins: fastestmirror, langpacks
base
  | 3.6 kB  00:00:00
centos-opstools-release
 | 2.9 kB  00:00:00
epel-util/x86_64/metalink
 |  25 kB  00:00:00
extras
  | 3.4 kB  00:00:00
ovirt-4.1
 | 3.0 kB  00:00:00
ovirt-4.1-centos-gluster38
  | 2.9 kB  00:00:00
ovirt-4.1-epel/x86_64/metalink
  |  25 kB  00:00:00
ovirt-4.1-patternfly1-noarch-epel
 | 3.0 kB  00:00:00
ovirt-centos-ovirt41
  | 2.9 kB  00:00:00
rnachimu-gdeploy
  | 3.0 kB  00:00:00
updates
 | 3.4 kB  00:00:00
virtio-win-stable
 | 3.0 kB  00:00:00
Loading mirror speeds from cached hostfile
 * base: ba.mirror.garr.it
 * epel-util: epel.besthosting.ua
 * extras: ba.mirror.garr.it
 * ovirt-4.1: ftp.nluug.nl
 * ovirt-4.1-epel: epel.besthosting.ua
 * updates: ba.mirror.garr.it
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:3.8.13-1.el7 will be installed
--> Processing Dependency: glusterfs-libs(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-fuse(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-client-xlators(x86-64) = 3.8.13-1.el7
for package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-cli(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs-api(x86-64) = 3.8.13-1.el7 for
package: glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: glusterfs(x86-64) = 3.8.13-1.el7 for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-cds.so.1()(64bit) for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Processing Dependency: liburcu-bp.so.1()(64bit) for package:
glusterfs-server-3.8.13-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-api.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-api.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-cli.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-cli.x86_64 0:3.8.13-1.el7 will be an update
---> Package glusterfs-client-xlators.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-client-xlators.x86_64 0:3.8.13-1.el7 will be an
update
---> Package glusterfs-fuse.x86_64 0:3.8.10-1.el7 will be updated
---> Package glusterfs-fuse.x86_64 

Re: [ovirt-users] virt-v2v to glusterfs storage domain

2017-07-11 Thread Arik Hadas
On Mon, Jul 10, 2017 at 7:54 PM, Ramachandra Reddy Ankireddypalle <
rcreddy.ankireddypa...@gmail.com> wrote:

> Thanks for looking in to this. I am looking for a way to achieve this
> through a command/script. I tried using virt-v2v as below.
>
> [root@hcadev3 tmp]# virt-v2v -i ova 1Deepakvm2.ova -o rhev -of qcow2 -os
> hcadev3:/dav_vm_vol
> [   0.0] Opening the source -i ova 1Deepakvm2.ova
> [   1.2] Creating an overlay to protect the source from being modified
> [   2.3] Initializing the target -o rhev -os hcadev3:/dav_vm_vol
> mount.nfs: requested NFS version or transport protocol is not supported
>
> I am looking for a way to provide an option to virt-v2v to understand that
> dav_vm_vol is a glusterfs volume and that it can be accessed from node
> hcadev3.
>

So "-o rhev" should not be used in that case, because virt-v2v then
assumes the target is an export domain and tries to mount it as NFS.
If this has to be done via the command line, the flag to use is "-o vdsm",
but that requires you to create the images yourself, mount them on the host
the conversion runs on, and execute virt-v2v with vdsm's permissions - which
is complicated. That is why this option was intended to be invoked by vdsm
rather than by users from the command line; it is possible, though.
It would be simpler to use the REST-API based clients in that case.
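A hedged sketch of that route, with the hostname and credentials as
placeholders: listing storage domains over the REST API confirms the
gluster domain is visible before scripting the disk creation and import
against the same API.

```shell
# Hedged sketch: engine.example.com and the credentials are placeholders.
# /ovirt-engine/api is the standard oVirt 4 REST entry point; the disk
# creation and import steps would be driven through the same API.
curl -s -k -u 'admin@internal:PASSWORD' \
  'https://engine.example.com/ovirt-engine/api/storagedomains' \
  | grep '<name>'
```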


>
> On Mon, Jul 10, 2017 at 2:36 AM, Arik Hadas  wrote:
>
>>
>>
>> On Fri, Jul 7, 2017 at 9:38 PM, Ramachandra Reddy Ankireddypalle <
>> rcreddy.ankireddypa...@gmail.com> wrote:
>>
>>> Hi,
>>>  Does virt-v2v command work with glusterfs storage domain. I have an
>>> OVA image and that needs to be imported to glusterfs storage domain. Please
>>> provide some pointers to this.
>>>
>>
>>  I don't see a reason why it wouldn't work.
>> Assuming that the import operation is triggered from oVirt, the engine
>> then creates the target disk(s) and prepares the image(s) on the host that
>> the conversion will be executed on, as it regularly does. virt-v2v then
>> just needs to write to that "prepared" image.
>> In the import dialog you can select the target storage domain. I would
>> suggest that you just try to pick the glusterfs storage domain and see if
>> it works. It should work, if not - please let us know.
>>
>>
>>>
>>> Thanks and Regards,
>>> Ram
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Windows USB Redirection

2017-07-11 Thread Abi Askushi
Hi All,

Finally I was able to pinpoint the issue.
Just in case someone needs this.


oVirt enables a USB 1.0 controller by default.

To enable a USB 2.0 controller, edit the engine config file below and change
the controller from piix3-uhci to piix4-uhci:

/etc/ovirt-engine/osinfo.conf.d/00-defaults.properties


os.other.devices.usb.controller.value = piix4-uhci


then restart engine:
systemctl restart ovirt-engine


The USB device is then attached successfully to the Windows VM.
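The edit above can be scripted. A minimal sketch, demonstrated on a scratch
copy so it is safe to run anywhere; on a real engine host the target is
/etc/ovirt-engine/osinfo.conf.d/00-defaults.properties, followed by the
engine restart shown above.

```shell
# Demonstrate the controller switch on a scratch copy of the properties file.
conf=$(mktemp)
echo 'os.other.devices.usb.controller.value = piix3-uhci' > "$conf"

# Flip the default USB controller from piix3-uhci to piix4-uhci.
sed -i 's/= piix3-uhci$/= piix4-uhci/' "$conf"

grep '^os.other.devices.usb.controller.value' "$conf"
# prints: os.other.devices.usb.controller.value = piix4-uhci
rm -f "$conf"
```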

On Wed, Jun 28, 2017 at 12:44 PM, Abi Askushi 
wrote:

>
> Hi All,
>
> I have Ovirt 4.1 with 3 nodes on top glusterfs.
>
> I Have 2 VMs: Windows 2016 64bit and Windows 10 64bit
>
> When I attach a USB flash disk to a VM (from host devices), the VM cannot
> see the USB drive and reports a driver issue in Device Manager (see attached).
> This happens with both VMs when tested.
>
> When testing with Windows 7 or Windows XP, the USB device is attached and
> accessed normally.
> Have you encountered such an issue?
> I have installed latest guest tools on both VMs.
>
> Many thanx
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Installation of oVirt 4.1, Gluster Storage and Hosted Engine

2017-07-11 Thread Simone Marchioni

On 11/07/2017 07:59, knarra wrote:

On 07/10/2017 07:18 PM, Simone Marchioni wrote:

Hi Kasturi,

you're right: the file
/usr/share/gdeploy/scripts/grafton-sanity-check.sh is present. I
updated the path in the gdeploy config file and ran Deploy again.

The situation is much better but the Deployment failed again... :-(

Here are the errors:



PLAY [gluster_servers] 
*


TASK [Run a shell script] 
**
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The 
conditional check 'result.rc != 0' failed. The error was: error while 
evaluating conditional (result.rc != 0): 'dict object' has no 
attribute 'rc'"}

to retry, use: --limit @/tmp/tmpNn6XNG/run-script.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Clean up filesystem signature] 
***

skipping: [ha2.domain.it] => (item=/dev/md128)
skipping: [ha1.domain.it] => (item=/dev/md128)
skipping: [ha3.domain.it] => (item=/dev/md128)

TASK [Create Physical Volume] 
**
failed: [ha2.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha1.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}
failed: [ha3.domain.it] (item=/dev/md128) => {"failed": true, 
"failed_when_result": true, "item": "/dev/md128", "msg": "WARNING: 
xfs signature detected on /dev/md128 at offset 0. Wipe it? [y/n]: 
[n]\n  Aborted wiping of xfs.\n  1 existing signature left on the 
device.\n", "rc": 5}

to retry, use: --limit @/tmp/tmpNn6XNG/pvcreate.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1

Ignoring errors...



Any clue?

Hi,

I see that there are some signatures left on your device, due to
which the script fails and creating the physical volume also fails.
Can you try filling the first 512MB or 1GB of the disk with zeros and
try again?

dd if=/dev/zero of=

Before running the script again, try a pvcreate and see if that
works. If it works, just do a pvdelete and run the script. Everything
should work fine.
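Put together as a sketch (hedged: the device name is taken from the failing
task output above, and this is destructive, so double-check it; the actual
LVM command for the delete step is pvremove):

```shell
# Sketch of the suggestion above. /dev/md128 comes from the failing task
# output. DESTRUCTIVE: double-check the device before running.
DEV=/dev/md128

# Drop stale filesystem/RAID signatures, then zero the first 512 MB.
wipefs -a "$DEV"
dd if=/dev/zero of="$DEV" bs=1M count=512

# Verify LVM can now use the device, then release it again for gdeploy.
pvcreate "$DEV" && pvremove "$DEV"
```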


Thanks
kasturi


Thanks for your time.
Simone
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Hi,

removed partition signatures with wipefs and ran deploy again: this time 
the creation of VG and LV worked correctly. The deployment proceeded 
until some new errors... :-/



PLAY [gluster_servers] 
*


TASK [start/stop/restart/reload services] 
**
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": 
"glusterd", "msg": "Could not find the requested service glusterd: host"}

to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP 
*

ha1.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha2.domain.it: ok=0  changed=0  unreachable=0  failed=1
ha3.domain.it: ok=0  changed=0  unreachable=0  failed=1


PLAY [gluster_servers] 
*


TASK [Start 

Re: [ovirt-users] [Ovirt 4.1] Hosted engine with glusterfs (2 node)

2017-07-11 Thread Simone Tiraboschi
On Tue, Jul 11, 2017 at 7:17 AM, TranceWorldLogic . <
tranceworldlo...@gmail.com> wrote:

> Hi,
>
> I was trying to set up the hosted engine on 2 host machines, but it is not
> allowing me to set up the hosted engine on a 2-node gluster file system.
>
> I have also modified vdsm.conf to allow a replica count of 2.
>
> But later I found that in the code it is hard-coded to 3, as shown below:
> --- ---
> -- ---
> src/plugins/gr-he-setup/storage/nfs.py
>
> if replicaCount != '3':
> raise RuntimeError(
> _(
> 'GlusterFS Volume is not using replica 3'
> )
> )
> --- ---
> -- ---
>
> Can I get to know the reason to hard coded to 3 ?
> From online resources I learned that the split-brain issue does not get
> resolved automatically with a 2-node gluster setup. But please let me know
> more (why 3? why not 2 or 4?)
>

Simply because two is even and three is odd.

In a split condition over only two nodes, if node A says A and node B says
B, you cannot decide whether the truth is A or B.
With three nodes, if node A says A but nodes B and C both say B, the truth
is B by design.
That's why you need three nodes, with at least 2 of them up and working.

If you want to save something on storage side you could evaluate two
regular nodes plus an arbiter one.
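The majority argument can be sketched with a toy script (illustrative only,
not Gluster code): a value is accepted only when a strict majority of the
node votes agree on it.

```shell
# Toy quorum check (illustrative, not Gluster code): print the value that a
# strict majority of the given node votes agree on, or UNDECIDED otherwise.
quorum() {
  printf '%s\n' "$@" | sort | uniq -c | sort -rn |
    awk -v n=$# 'NR==1 { if ($1 * 2 > n) print $2; else print "UNDECIDED" }'
}

quorum A B      # two-node split-brain: prints UNDECIDED
quorum A B B    # three nodes, B has 2 of 3: prints B
```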


>
> If I am missing something please let me know.
> Also, I want to understand: can quorum solve the 2-node gluster
> (split-brain) issue?
>
> Thanks,
> ~Rohit
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.2 offering RC update?

2017-07-11 Thread Yuval Turgeman
Hi,

The GA ISO is 4.1-2017070915

Thanks,
Yuval.


On Tue, Jul 11, 2017 at 1:13 AM, Vinícius Ferrão  wrote:

> May I ask another question?
>
> I’ve noted two new ISO images of 4.1.3 Node version over here:
> http://resources.ovirt.org/pub/ovirt-4.1/iso/ovirt-
> node-ng-installer-ovirt/
>
> Which one is the stable version:
> 4.1-2017070915
> 4.1-2017070913
>
> Thanks,
> V.
>
> On 9 Jul 2017, at 13:26, Lev Veyde  wrote:
>
> Hi Vinicius,
>
> It's actually due to my mistake, and as a result the package got tagged with
> the RC version instead of the GA.
> The package itself was based on the 4.1.3 code, though.
> I rebuilt it and published a fixed package, so the issue should be
> resolved now.
>
> Thanks in advance,
>
> On Sat, Jul 8, 2017 at 5:12 AM, Vinícius Ferrão  wrote:
>
>> Hello,
>>
>> I’ve noted a strange thing on oVirt. On the Hosted Engine an update was
>> offered and I was a bit confused, since I’m running the latest oVirt Node
>> release.
>>
>> To check if 4.1.3 was already released I issued an “yum update” on the
>> command line and for my surprise an RC release was offered. This not seems
>> to be right:
>>
>> 
>> ==
>>  PackageArch   Version
>> Repository Size
>> 
>> ==
>> Installing:
>>  ovirt-node-ng-image-update noarch 
>> 4.1.3-0.3.rc3.20170622082156.git47b4302.el7.centos
>>ovirt-4.1 544 M
>>  replacing  ovirt-node-ng-image-update-placeholder.noarch
>> 4.1.2-1.el7.centos
>> Updating:
>>  ovirt-engine-appliance noarch 4.1-20170622.1.el7.centos
>> ovirt-4.1 967 M
>>
>> Transaction Summary
>> 
>> ==
>> Install  1 Package
>> Upgrade  1 Package
>>
>> Total download size: 1.5 G
>> Is this ok [y/d/N]: N
>>
>> Is this normal behavior? This isn’t really good, since it can lead to
>> stable-to-unstable moves in production. If this is normal, how can we
>> avoid it?
>>
>> Thanks,
>> V.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users