Re: [ovirt-users] [openstack-dev] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-09 Thread Steven Dake (stdake)
Leni,

Reading their website, they reference the “kollaglue” namespace.  That hasn’t 
been used for 2+ years (we switched to kolla).  They also said a “recent” 
change in the images was made to deploy via Ansible “and they weren’t ready for 
that yet”.  That was 3 years ago.  I think those docs are all pretty old and 
could use some validation from the oVirt folks.
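
If it helps while the docs get refreshed, current Kolla images live under the
"kolla" namespace on Docker Hub. A rough, untested sketch of finding and pulling
one (the image name and tag below are illustrative and depend on the Kolla
release and base distro you target):

    docker search kolla
    # Kolla images follow the kolla/<distro>-<install_type>-<service> pattern;
    # check the exact name and tag against the Kolla release you use.
    docker pull kolla/centos-binary-glance-api:4.0.0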

-Original Message-
From: Leni Kadali Mutungi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, July 8, 2017 at 11:03 AM
To: openstack-dev , "de...@ovirt.org" 

Cc: users 
Subject: [openstack-dev] [kolla] Looking for Docker images for Cinder,  Glance 
etc for oVirt

Hello all.

I am trying to use the Cinder and Glance Docker images you provide in
relation to the setup here:

http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/

I tried to run `sudo docker pull
kollaglue/centos-rdo-glance-registry:latest` and got a "not found" error.
I thought it might be possible to use a Dockerfile to spin up
an equivalent of it, so I would like some guidance on how to go about
doing that: best practices and so on. Alternatively, if it is
possible, could you point me in the direction of the equivalent images
mentioned in the guides if they have been superseded by something else?
Thanks.

CCing the oVirt users and devel lists to see if anyone has experienced
something similar.

--
- Warm regards
Leni Kadali Mutungi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt's VM backup

2017-07-09 Thread Fred Rolland
Thanks for sharing, it looks great!

One remark: you should choose a license for your code and add it to the
GitHub repo. [1]

[1] https://github.com/vacosta94/VirtBKP



On Sat, Jul 8, 2017 at 12:40 AM, Victor José Acosta Domínguez <
vic.a...@gmail.com> wrote:

> Hello everyone, I created a Python tool to back up and restore oVirt's VMs.
>
> I also created a little "how to" on my blog:
> http://blog.infratic.com/2017/07/create-ovirtrhevs-vm-backup/
>
> I hope it helps someone else.
>
> Regards
>
> Victor Acosta
>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt can't find user

2017-07-09 Thread Fabrice Bacchella
Done in: https://bugzilla.redhat.com/show_bug.cgi?id=1468878.

> On 7 Jul 2017, at 13:51, Ondra Machacek wrote:
> 
> On Tue, Jul 4, 2017 at 6:05 PM, Fabrice Bacchella
>  wrote:
>> 
>>> On 1 Jul 2017, at 09:09, Fabrice Bacchella wrote:
>>> 
>>> 
 On 30 Jun 2017, at 23:25, Ondra Machacek wrote:
 
 On Thu, Jun 29, 2017 at 5:16 PM, Fabrice Bacchella
  wrote:
> 
>> On 29 Jun 2017, at 14:42, Fabrice Bacchella wrote:
>> 
>> 
>>> On 29 Jun 2017, at 13:41, Ondra Machacek wrote:
>>> 
>>> How do you log in? Do you use webadmin or the API/SDK? If using the SDK,
>>> don't you use kerberos=True?
>> 
>> Ok, got it.
>> It's tested with the SDK, using Kerberos. But Kerberos authentication is
>> done in Apache and I configured a profile for that, so I needed to add
>> config.artifact.arg = X-Remote-User in my
>> /etc/ovirt-engine/extensions.d/MyProfile.authn.properties. But this is
>> missing from internal-authn.properties. So rexecutor@internal is
>> checked with my profile, and not found. But as the internal profile doesn't
>> know about X-Remote-User, it can't check the user and fails silently.
>> That's why I'm getting only one line. Perhaps the log line should have
>> said the name of the extension that was failing, not the generic "External
>> Authentication" that didn't catch my eye.
>> 
>> I will check that as soon as I have a few minutes to spare and let you know.
> 
> I'm starting to understand. I need two authn modules, both using
> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension but with a
> different authz.plugin. Is that possible? If I do that, in what order
> will the different authn modules be tried? Are they all tried until one
> succeeds at both authn and authz?
> 
 
 Yes, you can have multiple authn profiles and it tries to log in until
 one succeeds:
 
 https://github.com/oVirt/ovirt-engine/blob/de46aa78f3117cbe436ab10926ac0c23fcdd7cfc/backend/manager/modules/aaa/src/main/java/org/ovirt/engine/core/aaa/filters/NegotiationFilter.java#L125
 
 The order isn't guaranteed, but I think it's not important, or is it for 
 you?
>>> 
>>> I'm not sure. As I need two
>>> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension, the authentication
>>> will always succeed. It's the authz that fails, as the user is in either one
>>> backend or the other. So if ExtMap output = profile.getAuthn().invoke(..)
>>> calls the authz part I will be fine.
>>> 
>> 
>> I think it's not possible to have 2 
>> org.ovirt.engineextensions.aaa.misc.http.AuthnExtension with different authz.
>> 
>> The first authz LDAP-based backend is tried and returns:
>> 2017-07-04 17:50:25,711+02 DEBUG 
>> [org.ovirt.engineextensions.aaa.ldap.AuthzExtension] (default task-2) [] 
>> Exception: java.lang.RuntimeException: Cannot resolve principal 'rexecutor'
>>at 
>> org.ovirt.engineextensions.aaa.ldap.AuthzExtension.doFetchPrincipalRecord(AuthzExtension.java:579)
>>  [ovirt-engine-extension-aaa-ldap.jar:]
>>at 
>> org.ovirt.engineextensions.aaa.ldap.AuthzExtension.invoke(AuthzExtension.java:478)
>>  [ovirt-engine-extension-aaa-ldap.jar:]
>>at 
>> org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:49)
>>at 
>> org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:73)
>>at 
>> org.ovirt.engine.core.extensions.mgr.ExtensionProxy.invoke(ExtensionProxy.java:109)
>>at 
>> org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:122)
>>at 
>> org.ovirt.engine.core.sso.utils.NegotiateAuthUtils.doAuth(NegotiateAuthUtils.java:68)
>>at 
>> org.ovirt.engine.core.sso.utils.NonInteractiveAuth$2.doAuth(NonInteractiveAuth.java:51)
>>at 
>> org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.issueTokenUsingHttpHeaders(OAuthTokenServlet.java:183)
>>at 
>> org.ovirt.engine.core.sso.servlets.OAuthTokenServlet.service(OAuthTokenServlet.java:72)
>>at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>>at 
>> io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85)
>>at 
>> io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129)
>>at 
>> org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73)
>>at 
>> io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>at 
>> io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>at 
>> org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66)
>>at 
>> io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61)
>>at 
>> io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131)
>>at 
>> org.ovirt.engine.core.uti
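
For reference, a rough sketch of the kind of authn profile properties file being
discussed, modelled on the ovirt-engine-extension-aaa-misc examples (the profile
and authz names here are illustrative, not taken from this setup):

    # /etc/ovirt-engine/extensions.d/MyProfile-authn.properties (illustrative)
    ovirt.engine.extension.name = MyProfile-authn
    ovirt.engine.extension.bindings.method = jbossmodule
    ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc
    ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.http.AuthnExtension
    ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
    ovirt.engine.aaa.authn.profile.name = MyProfile
    ovirt.engine.aaa.authn.authz.plugin = MyProfile-authz
    config.artifact.name = HEADER
    config.artifact.arg = X-Remote-User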

Re: [ovirt-users] [kolla] Looking for Docker images for Cinder, Glance etc for oVirt

2017-07-09 Thread Konstantin Shalygin
If you just need Cinder (for example, to use Ceph with oVirt) and not a 
docker container, then try the RDO project.
A few months ago I started from these images, then switched to RDO and 
set up a VM on the host with the oVirt manager. It still works flawlessly.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-07-09 Thread Sven Achtelik
Hi All,

It's the same for me. I've updated all my hosts to the latest release and 
thought it would now use libgfapi, since BZ 
1022961 is listed in the release notes 
under enhancements. Are there any steps that need to be taken after upgrading 
for this to work?

Thank you,
Sven

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
Mahdi Adnan
Sent: Saturday, 8 July 2017 09:35
To: Ralf Schenk ; users@ovirt.org; yk...@redhat.com
Subject: Re: [ovirt-users] Very poor GlusterFS performance


So oVirt accesses Gluster via FUSE? I thought it was using libgfapi.

When can we expect it to work with libgfapi?

And what about the changelog of 4.1.3?
BZ 1022961: "Gluster: running a VM from a gluster domain should use gluster URI 
instead of a fuse mount"



--

Respectfully
Mahdi A. Mahdi

From: users-boun...@ovirt.org on behalf of Ralf Schenk 
Sent: Monday, June 19, 2017 7:32:45 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] Very poor GlusterFS performance


Hello,

Gluster performance is bad. That's why I asked for native qemu-libgfapi access 
to gluster volumes for oVirt VMs, which I thought had been possible since 3.6.x. 
The documentation is misleading, and still in 4.1.2 oVirt is using FUSE to mount 
gluster-based VM disks.

Bye

On 19 Jun 2017 at 17:23, Darrell Budic wrote:
Chris-

You probably need to head over to 
gluster-us...@gluster.org for help with 
performance issues.

That said, what kind of performance are you getting, via some form of testing 
like bonnie++ or even dd runs? Raw bricks vs gluster performance is useful to 
determine what kind of performance you're actually getting.

Beyond that, I'd recommend dropping the arbiter bricks and re-adding them as 
full replicas; they can't serve distributed data in this configuration and may 
be slowing things down on you. If you've got a storage network set up, make sure 
it's using the largest MTU it can, and consider adding/testing these settings 
that I use on my main storage volume:

performance.io-thread-count: 32
client.event-threads: 8
server.event-threads: 3
performance.stat-prefetch: on


Good luck,


  -Darrell
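
If it helps, a rough sketch of how those options can be applied with the gluster
CLI (untested here; "vmssd" is just the volume name from the configuration shown
below):

    # Apply the tuning options quoted above to an existing volume.
    gluster volume set vmssd performance.io-thread-count 32
    gluster volume set vmssd client.event-threads 8
    gluster volume set vmssd server.event-threads 3
    gluster volume set vmssd performance.stat-prefetch on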


On Jun 19, 2017, at 9:46 AM, Chris Boot  wrote:

Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Cheers,
Chris

--
Chris Boot
bo...@bootc.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users





___

Users mailing list

Users@ovirt.org

http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Very poor GlusterFS performance

2017-07-09 Thread Doron Fediuck
Hi Sven,
libgfapi is not fully operational yet.
There's some additional work which just got merged[1] in order to enable it.
Hopefully it'll be included in one of the next releases.

Doron

[1] https://gerrit.ovirt.org/#/c/78938/

On 9 July 2017 at 14:34, Sven Achtelik  wrote:

> Hi All,
>
>
>
> It’s the same for me. I’ve updated all my hosts to the latest release and
> thought it would now use libgfapi, since BZ 1022961
> is listed in the release notes
> under enhancements. Are there any steps that need to be taken after
> upgrading for this to work?
>
>
>
> Thank you,
>
> Sven
>
>
>
> *Von:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *Im
> Auftrag von *Mahdi Adnan
> *Gesendet:* Samstag, 8. Juli 2017 09:35
> *An:* Ralf Schenk ; users@ovirt.org; yk...@redhat.com
> *Betreff:* Re: [ovirt-users] Very poor GlusterFS performance
>
>
>
> So oVirt accesses Gluster via FUSE? I thought it was using libgfapi.
>
> When can we expect it to work with libgfapi?
>
> And what about the changelog of 4.1.3?
>
> BZ 1022961: "Gluster: running a VM from a gluster domain should use gluster
> URI instead of a fuse mount"
>
>
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
> --
>
> *From:* users-boun...@ovirt.org  on behalf of
> Ralf Schenk 
> *Sent:* Monday, June 19, 2017 7:32:45 PM
> *To:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Very poor GlusterFS performance
>
>
>
> Hello,
>
> Gluster performance is bad. That's why I asked for native qemu-libgfapi
> access to gluster volumes for oVirt VMs, which I thought had been possible
> since 3.6.x. The documentation is misleading, and still in 4.1.2 oVirt is
> using FUSE to mount gluster-based VM disks.
>
> Bye
>
>
>
> On 19 Jun 2017 at 17:23, Darrell Budic wrote:
>
> Chris-
>
>
>
> You probably need to head over to gluster-us...@gluster.org for help with
> performance issues.
>
>
>
> That said, what kind of performance are you getting, via some form of
> testing like bonnie++ or even dd runs? Raw bricks vs gluster performance is
> useful to determine what kind of performance you're actually getting.
>
>
>
> Beyond that, I'd recommend dropping the arbiter bricks and re-adding them
> as full replicas; they can't serve distributed data in this configuration
> and may be slowing things down on you. If you've got a storage network
> set up, make sure it's using the largest MTU it can, and consider
> adding/testing these settings that I use on my main storage volume:
>
>
>
> performance.io-thread-count: 32
>
> client.event-threads: 8
>
> server.event-threads: 3
>
> performance.stat-prefetch: on
>
>
>
> Good luck,
>
>
>
>   -Darrell
>
>
>
>
>
> On Jun 19, 2017, at 9:46 AM, Chris Boot  wrote:
>
>
>
> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>
> Cheers,
> Chri

Re: [ovirt-users] SQL : last time halted?

2017-07-09 Thread Eli Mesika
On Thu, Jul 6, 2017 at 6:48 PM, Juan Hernández  wrote:

> On 07/06/2017 02:07 PM, Nicolas Ecarnot wrote:
>
>> [For the record]
>>
>> Juan,
>>
>> Thanks to your hint, I eventually found it more convenient to use
>> a SQL query to find out which VMs were unused for months:
>>
>> SELECT
>>vm_static.vm_name,
>>vm_dynamic.status,
>>vm_dynamic.vm_ip,
>>vm_dynamic.vm_host,
>>vm_dynamic.last_start_time,
>>vm_dynamic.vm_guid,
>>vm_dynamic.last_stop_time
>> FROM
>>public.vm_dynamic,
>>public.vm_static
>> WHERE
>>vm_dynamic.vm_guid = vm_static.vm_guid AND
>>vm_dynamic.status = 0
>> ORDER BY
>>vm_dynamic.last_stop_time ASC;
>>
>> Thank you.
>>
>>
> That is nice. Just keep in mind that the database schema isn't kept
> backwards compatible. A minor change in the engine can make your query fail
> or return incorrect results.


I tend to agree with Juan: if you are doing that periodically, then using the
Python SDK will ensure that your script survives schema changes.
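
For example, a minimal sketch with ovirt-engine-sdk4 (untested here; the
connection details are placeholders, and the availability of stop_time on the
Vm type should be checked against your API/SDK version):

    # List VMs that are down, ordered by their last stop time.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )
    try:
        vms_service = connection.system_service().vms_service()
        down_vms = [vm for vm in vms_service.list(search='status=down')
                    if vm.stop_time is not None]
        for vm in sorted(down_vms, key=lambda v: v.stop_time):
            print(vm.name, vm.stop_time)
    finally:
        connection.close()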

>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.2 offering RC update?

2017-07-09 Thread Lev Veyde
Hi Vinicius,

It's actually due to a mistake on my part, and as a result the package got
tagged with the RC version instead of the GA.
The package itself was based on the 4.1.3 code, though.
I rebuilt it and published a fixed package, so the issue should be resolved
now.
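
In the meantime, if you want to be sure yum doesn't offer a pre-release build,
one rough option (assuming yum-plugin-versionlock is available on the node) is
to pin the package until the GA build shows up:

    # Pin the currently installed build so yum won't offer the mistagged RC,
    # then drop the lock once the fixed GA package is published.
    yum install -y yum-plugin-versionlock
    yum versionlock add ovirt-node-ng-image-update
    # later:
    yum versionlock delete ovirt-node-ng-image-update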

Thanks in advance,

On Sat, Jul 8, 2017 at 5:12 AM, Vinícius Ferrão  wrote:

> Hello,
>
> I’ve noted a strange thing on oVirt. On the Hosted Engine an update was
> offered and I was a bit confused, since I’m running the latest oVirt Node
> release.
>
> To check if 4.1.3 was already released I issued a “yum update” on the
> command line and to my surprise an RC release was offered. This does not
> seem to be right:
>
> 
> ==
>  PackageArch   Version
> Repository Size
> 
> ==
> Installing:
>  ovirt-node-ng-image-update noarch 
> 4.1.3-0.3.rc3.20170622082156.git47b4302.el7.centos
>ovirt-4.1 544 M
>  replacing  ovirt-node-ng-image-update-placeholder.noarch
> 4.1.2-1.el7.centos
> Updating:
>  ovirt-engine-appliance noarch 4.1-20170622.1.el7.centos
> ovirt-4.1 967 M
>
> Transaction Summary
> 
> ==
> Install  1 Package
> Upgrade  1 Package
>
> Total download size: 1.5 G
> Is this ok [y/d/N]: N
>
> Is this normal behavior? This isn’t really good, since it can lead to
> stable-to-unstable moves in production. If this is normal, how can we avoid
> it?
>
> Thanks,
> V.
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>



-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt Node 4.1.2 offering RC update?

2017-07-09 Thread Vinícius Ferrão
Thanks Lev!

It’s already showing up here:

Installing:
 ovirt-node-ng-image-updatenoarch
4.1.3-1.el7.centos  ovirt-4.1
550 M
 replacing  ovirt-node-ng-image-update-placeholder.noarch 4.1.2-1.el7.centos
Updating:
 ovirt-engine-appliancenoarch
4.1-20170622.1.el7.centos   ovirt-4.1
967 M

V.

On 9 Jul 2017, at 13:26, Lev Veyde  wrote:

Hi Vinicius,

It's actually due to a mistake on my part, and as a result the package got tagged 
with the RC version instead of the GA.
The package itself was based on the 4.1.3 code, though.
I rebuilt it and published a fixed package, so the issue should be resolved now.

Thanks in advance,

On Sat, Jul 8, 2017 at 5:12 AM, Vinícius Ferrão  wrote:
Hello,

I’ve noted a strange thing on oVirt. On the Hosted Engine an update was offered 
and I was a bit confused, since I’m running the latest oVirt Node release.

To check if 4.1.3 was already released I issued a “yum update” on the command 
line and to my surprise an RC release was offered. This does not seem to be right:

==
 PackageArch   Version  
  Repository Size
==
Installing:
 ovirt-node-ng-image-update noarch 
4.1.3-0.3.rc3.20170622082156.git47b4302.el7.centos ovirt-4.1 544 M
 replacing  ovirt-node-ng-image-update-placeholder.noarch 4.1.2-1.el7.centos
Updating:
 ovirt-engine-appliance noarch 4.1-20170622.1.el7.centos
  ovirt-4.1 967 M

Transaction Summary
==
Install  1 Package
Upgrade  1 Package

Total download size: 1.5 G
Is this ok [y/d/N]: N

Is this normal behavior? This isn’t really good, since it can lead to
stable-to-unstable moves in production. If this is normal, how can we avoid it?

Thanks,
V.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



--

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | 
lve...@redhat.com

TRIED. TESTED. TRUSTED.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] 2 hosts starting the engine at the same time?

2017-07-09 Thread Gianluca Cecchi
Hello.
I'm on 4.1.3 with self-hosted engine and glusterfs as storage.
I updated the kernel on the engine, so I executed these steps:

- enable global maintenance from the web admin gui
- wait some minutes
- shutdown the engine vm from inside its OS
- wait some minutes
- execute on one host
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none

I see that the qemu-kvm process for the engine starts on two hosts, and then
on one of them it gets a "kill -15" and stops.
Is this expected behaviour? It seems somewhat dangerous to me...

- when in maintenance

[root@ovirt02 ~]# hosted-engine --vm-status


!! Cluster is in GLOBAL MAINTENANCE mode !!


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt01.localdomain.local
Host ID: 1
Engine status  : {"health": "good", "vm": "up",
"detail": "up"}
Score  : 2597
stopped: False
Local maintenance  : False
crc32  : 7931c5c3
local_conf_timestamp   : 19811
Host timestamp : 19794
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=19794 (Sun Jul  9 21:31:50 2017)
host-id=1
score=2597
vm_conf_refresh_time=19811 (Sun Jul  9 21:32:06 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : 192.168.150.103
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 616ceb02
local_conf_timestamp   : 2829
Host timestamp : 2812
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2812 (Sun Jul  9 21:31:52 2017)
host-id=2
score=3400
vm_conf_refresh_time=2829 (Sun Jul  9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


--== Host 3 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt03.localdomain.local
Host ID: 3
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped: False
Local maintenance  : False
crc32  : 871204b2
local_conf_timestamp   : 24584
Host timestamp : 24567
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=24567 (Sun Jul  9 21:31:52 2017)
host-id=3
score=3400
vm_conf_refresh_time=24584 (Sun Jul  9 21:32:09 2017)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False


!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt02 ~]#


- then I exit global maintenance
[root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none


- While monitoring the status, at some point I see "EngineStart" on both
host2 and host3

[root@ovirt02 ~]# hosted-engine --vm-status


--== Host 1 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : ovirt01.localdomain.local
Host ID: 1
Engine status  : {"reason": "bad vm status", "health":
"bad", "vm": "down", "detail": "down"}
Score  : 3230
stopped: False
Local maintenance  : False
crc32  : 25cadbfb
local_conf_timestamp   : 20055
Host timestamp : 20040
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=20040 (Sun Jul  9 21:35:55 2017)
host-id=1
score=3230
vm_conf_refresh_time=20055 (Sun Jul  9 21:36:11 2017)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host 2 status ==--

conf_on_shared_storage : True
Status up-to-date  : True
Hostname   : 192.168.150.103
Host ID: 2
Engine status  : {"reason": "vm not running on this
host", "health": "bad", "vm": "down", "detail": "unknown"}
Score  : 3400
stopped  

Re: [ovirt-users] 2 hosts starting the engine at the same time?

2017-07-09 Thread Gianluca Cecchi
On Sun, Jul 9, 2017 at 9:54 PM, Gianluca Cecchi 
wrote:

> Hello.
> I'm on 4.1.3 with self hosted engine and glusterfs as storage.
> I updated the kernel on the engine, so I executed these steps:
>
> - enable global maintenance from the web admin gui
> - wait some minutes
> - shutdown the engine vm from inside its OS
> - wait some minutes
> - execute on one host
> [root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
>
> I see that the qemu-kvm process for the engine starts on two hosts and
> then on one of them it gets a "kill -15" and stops.
> Is this expected behaviour? It seems somewhat dangerous to me...
>

And I don't know how related this is, but the engine VM doesn't come up.
Connecting to its VNC console I see it "booting from hard disk":
https://drive.google.com/file/d/0BwoPbcrMv8mvOEJWeVRvNThmTWc/view?usp=sharing

Gluster volume for the engine vm storage domain seems ok...

[root@ovirt01 vdsm]# gluster volume heal engine info
Brick ovirt01.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt02.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

Brick ovirt03.localdomain.local:/gluster/brick1/engine
Status: Connected
Number of entries: 0

[root@ovirt01 vdsm]#


and in HostedEngine.log

2017-07-09 19:59:20.660+: starting up libvirt version: 2.0.0, package:
10.el7_3.9 (CentOS BuildSystem ,
2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version: 2.6.0
(qemu-kvm-ev-2.6.0-28.el7.10.1), hostname: ovirt01.localdomain.local
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
guest=HostedEngine,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-HostedEngine/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Broadwell,+rtm,+hle -m
6144 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid 87fd6bdb-535d-45b8-81d4-7e3101a6c364 -smbios
'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=564D777E-B638-E808-9044-680BA4957704,uuid=87fd6bdb-535d-45b8-81d4-7e3101a6c364'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-HostedEngine/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-07-09T19:59:20,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
file=/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6,format=raw,if=none,id=drive-virtio-disk0,serial=cf8b8f4e-fa01-457e-8a96-c5a27f8408f8,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:0a:e7:ba,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev
socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.org.ovirt.hosted-engine-setup.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
rng-random,id=objrng0,filename=/dev/urandom -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on
char device redirected to /dev/pts/1 (label charconsole0)
warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] [Libguestfs] virt-v2v import from KVM without storage-pool ?

2017-07-09 Thread Ming Xie
Hi Matthias,

   I have filed a bug for this problem; you can track it via 
https://bugzilla.redhat.com/show_bug.cgi?id=1468944. Thanks for finding this 
problem!


Hi rjones,
  
   I think bug 1468509 is a different problem from bug 1468944: bug 1468509 is 
due to vdsm not recognizing the volume disk, while 1468944 is caused by the 
guest's file disk not being listed in a storage pool.
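
As a possible interim workaround (untested here; the path is just the one from
the error below), the guest's disk directory can be wrapped in a libvirt dir
pool so that storageVolLookupByPath() resolves it:

    # Define a directory-backed pool covering the guest's disk location,
    # then start it and let libvirt scan it. Adjust the target to the real
    # disk directory.
    virsh pool-define-as guest-images dir --target /root
    virsh pool-start guest-images
    virsh pool-autostart guest-images
    virsh pool-refresh guest-images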


Regards
Ming Xie


- Original Message -
From: "Matthias Leopold" 
To: "Ming Xie" 
Cc: "Tomáš Golembiovský" , "Richard W.M. Jones" 
, "users" , libgues...@redhat.com
Sent: Friday, July 7, 2017 8:01:54 PM
Subject: Re: [Libguestfs] virt-v2v import from KVM without storage-pool ?

thanks for caring about this.

Ming Xie, are you opening this BZ bug?

thanks
matthias

On 2017-07-07 at 13:31, Tomáš Golembiovský wrote:
> Hi,
> 
> Yes, it is an issue in VDSM. We count on the disks being in a storage pool
> (except for block devices).
> 
> Can you open a BZ bug for that, please?
> 
> Thanks,
> 
>  Tomas
> 
> 
> On Fri, 7 Jul 2017 02:52:26 -0400 (EDT)
> Ming Xie  wrote:
> 
>> I could reproduce the customer's problem.
>>
>> Packages:
>> rhv:4.1.3-0.1.el7
>> vdsm-4.19.20-1.el7ev.x86_64
>> virt-v2v-1.36.3-6.el7.x86_64
>> libguestfs-1.36.3-6.el7.x86_64
>>
>> Steps:
>> 1. Prepare a guest whose disk is not listed in a storage pool:
>> # virsh dumpxml avocado-vt-vm1
>> 
>> 
>>
>>
>>
>>> function='0x0'/>
>>  
>> .
>> 2. Try to import this guest into RHV 4.1 from the KVM host. The import fails
>> (see screenshot) and the following error appears in vdsm.log:
>> 
>> 2017-07-07 14:41:22,176+0800 ERROR (jsonrpc/6) [root] Error getting disk 
>> size (v2v:1089)
>> Traceback (most recent call last):
>>File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in 
>> _get_disk_info
>>  vol = conn.storageVolLookupByPath(disk['alias'])
>>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4555, in 
>> storageVolLookupByPath
>>  if ret is None:raise libvirtError('virStorageVolLookupByPath() failed', 
>> conn=self)
>> libvirtError: Storage volume not found: no storage vol with matching path 
>> '/root/RHEL-7.3-x86_64-latest.qcow2'
>> 
>>
>>
>> 3. Try to convert this guest to RHV with virt-v2v on the v2v conversion
>> server; after the conversion finishes, the guest can be imported from the
>> export domain to the data domain on RHV 4.1:
>> # virt-v2v avocado-vt-vm1 -o rhv -os 10.73.131.93:/home/nfs_export
>> [   0.0] Opening the source -i libvirt avocado-vt-vm1
>> [   0.0] Creating an overlay to protect the source from being modified
>> [   0.4] Initializing the target -o rhv -os 10.73.131.93:/home/nfs_export
>> [   0.7] Opening the overlay
>> [   6.1] Inspecting the overlay
>> [  13.8] Checking for sufficient free disk space in the guest
>> [  13.8] Estimating space required on target for each disk
>> [  13.8] Converting Red Hat Enterprise Linux Server 7.3 (Maipo) to run on KVM
>> virt-v2v: This guest has virtio drivers installed.
>> [  52.2] Mapping filesystem data to avoid copying unused and blank areas
>> [  52.4] Closing the overlay
>> [  52.7] Checking if the guest needs BIOS or UEFI to boot
>> [  52.7] Assigning disks to buses
>> [  52.7] Copying disk 1/1 to 
>> /tmp/v2v.Zzc4KD/c9cfeba7-73f8-428a-aa77-9a2a1acf0063/images/c8eb039e-3007-4e08-9580-c49da8b73d55/f76d16ea-5e66-4987-a496-8f378b127986
>>  (qcow2)
>>  (100.00/100%)
>> [ 152.4] Creating output metadata
>> [ 152.6] Finishing off
>>
>>
>> Result:
>> So this problem is caused by vdsm or oVirt.
>>
>> Regards
>> Ming Xie
>>
>> - Original Message -
>> From: "Richard W.M. Jones" 
>> To: "Matthias Leopold" 
>> Cc: users@ovirt.org, libgues...@redhat.com
>> Sent: Wednesday, July 5, 2017 9:15:16 PM
>> Subject: Re: [Libguestfs] virt-v2v import from KVM without storage-pool ?
>>
>> On Wed, Jul 05, 2017 at 11:14:09AM +0200, Matthias Leopold wrote:
>>> hi,
>>>
>>> I'm trying to import a VM in oVirt from a KVM host that doesn't use
>>> storage pools. This fails with the following message in
>>> /var/log/vdsm/vdsm.log:
>>>
>>> 2017-07-05 09:34:20,513+0200 ERROR (jsonrpc/5) [root] Error getting
>>> disk size (v2v:1089)
>>> Traceback (most recent call last):
>>>File "/usr/lib/python2.7/site-packages/vdsm/v2v.py", line 1078, in
>>> _get_disk_info
>>>  vol = conn.storageVolLookupByPath(disk['alias'])
>>>File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4770,
>>> in storageVolLookupByPath
>>>  if ret is None:raise libvirtError('virStorageVolLookupByPath()
>>> failed', conn=self)
>>> libvirtError: Storage volume not found: no storage vol with matching path
>>>
>>> the disks in the origin VM are defined as
>>>
>>>  
>>>
>>>
>>>
>>>  
>>>
>>>
>>>
>>> is this a virt-v2v or oVirt problem?
>>
>> Well the stack trace is in the oVirt code, so I guess it's an oVirt
>> problem.  Adding ovirt-users mailing list.
>>
>> Rich.
>>
>> -- 
>> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/

Re: [ovirt-users] 2 hosts starting the engine at the same time?

2017-07-09 Thread Yedidyah Bar David
On Sun, Jul 9, 2017 at 11:12 PM, Gianluca Cecchi
 wrote:
>
>
> On Sun, Jul 9, 2017 at 9:54 PM, Gianluca Cecchi 
> wrote:
>>
>> Hello.
>> I'm on 4.1.3 with self hosted engine and glusterfs as storage.
>> I updated the kernel on the engine, so I executed these steps:
>>
>> - enable global maintenance from the web admin gui
>> - wait some minutes
>> - shutdown the engine vm from inside its OS
>> - wait some minutes
>> - execute on one host
>> [root@ovirt02 ~]# hosted-engine --set-maintenance --mode=none
>>
>> I see that the qemu-kvm process for the engine starts on two hosts and
>> then on one of them it gets a "kill -15" and stops.
>> Is this expected behaviour?

In the 'hosted-engine' script itself, in the function cmd_vm_start,
there is a comment:
# TODO: Check first the sanlock status, and if allows:

Perhaps ha-agent checks sanlock status before starting the VM?
Adding Martin.

Please also check/share agent.log.
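
A rough way to watch this from the hosts while it happens (read-only commands,
assuming sanlock and the HA tools are installed as usual on hosted-engine hosts):

    # Inspect the hosted-engine HA view and the sanlock lockspaces/resources
    # held on each host while the VM is being started.
    hosted-engine --vm-status
    sanlock client status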

>> It seems somewhat dangerous to me...
>
>
> And I don't know how related, but the engine vm doesn't come up.
> Connecting to its vnc console I get it "booting from hard disk" :
> https://drive.google.com/file/d/0BwoPbcrMv8mvOEJWeVRvNThmTWc/view?usp=sharing
>
> Gluster volume for the engine vm storage domain seems ok...
>
> [root@ovirt01 vdsm]# gluster volume heal engine info
> Brick ovirt01.localdomain.local:/gluster/brick1/engine
> Status: Connected
> Number of entries: 0
>
> Brick ovirt02.localdomain.local:/gluster/brick1/engine
> Status: Connected
> Number of entries: 0
>
> Brick ovirt03.localdomain.local:/gluster/brick1/engine
> Status: Connected
> Number of entries: 0
>
> [root@ovirt01 vdsm]#
>
>
> and in HostedEngine.log
>
> 2017-07-09 19:59:20.660+: starting up libvirt version: 2.0.0, package:
> 10.el7_3.9 (CentOS BuildSystem ,
> 2017-05-25-20:52:28, c1bm.rdu2.centos.org), qemu version: 2.6.0
> (qemu-kvm-ev-2.6.0-28.el7.10.1), hostname: ovirt01.localdomain.local
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name
> guest=HostedEngine,debug-threads=on -S -object
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-3-HostedEngine/master-key.aes
> -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Broadwell,+rtm,+hle -m
> 6144 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
> -uuid 87fd6bdb-535d-45b8-81d4-7e3101a6c364 -smbios
> 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-3.1611.el7.centos,serial=564D777E-B638-E808-9044-680BA4957704,uuid=87fd6bdb-535d-45b8-81d4-7e3101a6c364'
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-3-HostedEngine/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-07-09T19:59:20,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive
> file=/var/run/vdsm/storage/e9e4a478-f391-42e5-9bb8-ed22a33e5cab/cf8b8f4e-fa01-457e-8a96-c5a27f8408f8/94c46bac-0a9f-49e8-9188-627fa0caf2b6,format=raw,if=none,id=drive-virtio-disk0,serial=cf8b8f4e-fa01-457e-8a96-c5a27f8408f8,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
> tap,fd=30,id=hostnet0,vhost=on,vhostfd=32 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:0a:e7:ba,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev
> socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/87fd6bdb-535d-45b8-81d4-7e3101a6c364.org.ovirt.hosted-engine-setup.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0
> -chardev pty,id=charconsole0 -device
> virtconsole,chardev=charconsole0,id=console0 -vnc 0:0,password -device
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -object
> rng-random,id=objrng0,filename=/dev/urandom -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on
> char device redirected to /dev/pts/1 (label charconsole0)
> warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailma

Re: [ovirt-users] virt-v2v to glusterfs storage domain

2017-07-09 Thread Arik Hadas
On Fri, Jul 7, 2017 at 9:38 PM, Ramachandra Reddy Ankireddypalle <
rcreddy.ankireddypa...@gmail.com> wrote:

> Hi,
>  Does the virt-v2v command work with a glusterfs storage domain? I have an
> OVA image that needs to be imported to a glusterfs storage domain. Please
> provide some pointers on this.
>

 I don't see a reason why it wouldn't work.
Assuming that the import operation is triggered from oVirt, the engine
creates the target disk(s) and prepares the image(s) on the host that the
conversion will be executed on, as it regularly does. virt-v2v then just
writes to that "prepared" image.
In the import dialog you can select the target storage domain. I would
suggest that you simply try to pick the glusterfs storage domain and see if
it works. It should work; if not, please let us know.
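
If you prefer to drive it by hand, a rough sketch of the command-line route
(the OVA path and export domain below are placeholders; the guest is then
imported into the glusterfs data domain from the Administration Portal):

    # Convert the OVA and place the result in an export storage domain.
    virt-v2v -i ova /path/to/appliance.ova -o rhv -os nfs.example.com:/export/domain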


>
> Thanks and Regards,
> Ram
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users