Re: [ovirt-users] upgrade to ovirt 4 failed

2016-08-07 Thread Sandro Bonazzola
On Wed, Aug 3, 2016 at 12:19 PM, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> I'm running on CentOS 7; I just upgraded to oVirt 4.0.1 using the procedure
> given in the release notes.
>
> But now I'm getting that in /var/log/ovirt-engine/engine.log:
>
> 2016-08-03 12:04:39,751 ERROR [org.ovirt.engine.core.bll.Backend] (ServerService Thread Pool -- 54) [] Error during initialization:
> org.jboss.weld.exceptions.WeldException: WELD-49: Unable to invoke private void org.ovirt.engine.core.vdsbroker.ResourceManager.init() on org.ovirt.engine.core.vdsbroker.ResourceManager@28b87a8e
> at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:100) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:81) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:126) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:162) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:96) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:101) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.bean.ContextualInstanceStrategy$ApplicationScopedContextualInstanceStrategy.get(ContextualInstanceStrategy.java:141) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:742) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> ...
> Caused by: java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_92]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_92]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_92]
> at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_92]
> at org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:98) [weld-core-impl-2.3.2.Final.jar:2.3.2.Final]
> ... 82 more
> Caused by: java.lang.NullPointerException
> at org.postgresql.jdbc.TypeInfoCache.getSQLType(TypeInfoCache.java:182)
>


Can you please check the version of postgresql-jdbc you're using? It looks
like an issue while talking to the driver.
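A quick way to check the driver version on the engine host (a hedged sketch: the package name is the stock one on CentOS 7, and the fallback only keeps the snippet harmless on machines without rpm):

```shell
# Hedged sketch: report which postgresql-jdbc build is installed on the
# engine host (package name as shipped on CentOS 7). The fallback keeps the
# snippet harmless on machines without rpm.
ver=$(rpm -q postgresql-jdbc 2>/dev/null || echo "postgresql-jdbc not installed")
echo "driver package: $ver"
```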




> at org.postgresql.jdbc.TypeInfoCache.getSQLType(TypeInfoCache.java:178)
> at org.postgresql.jdbc.PgDatabaseMetaData.getProcedureColumns(PgDatabaseMetaData.java:1259)
> at org.postgresql.jdbc.PgDatabaseMetaData.getProcedureColumns(PgDatabaseMetaData.java:1040)
> at org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.processProcedureColumns(GenericCallMetaDataProvider.java:353) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.initializeWithProcedureColumnMetaData(GenericCallMetaDataProvider.java:112) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory$1.processMetaData(CallMetaDataProviderFactory.java:133) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:299) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:73) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:286) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:303) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:108) [dal.jar:]
> at org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:288) [spring-jdbc.jar:4.2.4.RELEASE]
> at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:169) [dal.jar:]
> at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:146) [da

[ovirt-users] ovirt 3.6.7 and gluster 3.7.14

2016-08-07 Thread Luiz Claudio Prazeres Goncalves
Hi, it seems the ovirt-3.6-dependencies.repo file points the yum repo to
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
and
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch;
however, "LATEST" now points to Gluster 3.8.x, not to 3.7.x anymore.

So, in order to fix it I manually adjusted the yum repo paths as you can
see below. Is this procedure correct? I would say yes, but it's always
good to double-check :)

Also, it's currently running CentOS 7.2 + oVirt 3.6.6 + Gluster 3.7.11
(client), and I'm planning to upgrade to oVirt 3.6.7 and Gluster 3.7.14
(client).

The oVirt cluster is running on top of an external Gluster replica-3
cluster, hosting the engine storage domain and the VM storage domain, and
running version 3.7.11, which I'm also planning to move to 3.7.14. I'm
using XFS, not ZFS, which seems to have issues with Gluster 3.7.13.

Is this upgrade safe and recommended?

Thanks
-Luiz

[ovirt-3.6-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
baseurl=https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.14/EPEL.repo/epel-7.2/x86_64/
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key


[ovirt-3.6-glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
baseurl=https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.14/EPEL.repo/epel-7.2/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=1
gpgkey=https://download.gluster.org/pub/gluster/glusterfs/LATEST/pub.key
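For reference, the core of the manual pinning above can be expressed as a one-line edit. This is a minimal sketch against a throwaway copy of the repo file (the path and one-line stanza are illustrative; Luiz's real change additionally switches to https and hard-codes epel-7.2/x86_64):

```shell
# Minimal sketch of the pinning described above: rewrite LATEST to the
# 3.7/3.7.14 tree in a throwaway copy of the repo file (illustrative path).
repo=$(mktemp)
cat > "$repo" <<'EOF'
[ovirt-3.6-glusterfs-epel]
baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/$basearch/
EOF
# Pin the repo to the 3.7.14 release instead of whatever LATEST points at.
sed -i 's|glusterfs/LATEST|glusterfs/3.7/3.7.14|' "$repo"
grep '^baseurl' "$repo"
```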
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate machines in unknown state?

2016-08-07 Thread Ekin Meroğlu
Hi,

Just a reminder: if you have power management configured, first turn it
off for the host. When you restart vdsmd with power management still
configured, the engine finds the host not responding and tries to fence
(i.e. reboot) it.

Other than that, restarting vdsmd has been safe in my experience...
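The restart Ekin describes can be sketched as follows (hedged: the service name is the standard oVirt one, the guard keeps the snippet inert on machines without it, and turning off power management itself happens in the webadmin, not here):

```shell
# Hedged sketch: restart vdsmd only where it actually exists. On a real
# host, disable power management in the webadmin first, as advised above.
if command -v systemctl >/dev/null 2>&1 && systemctl cat vdsmd >/dev/null 2>&1; then
  systemctl restart vdsmd
  status="vdsmd restarted"
else
  status="vdsmd not present on this machine; skipping"
fi
echo "$status"
```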

Regards,

On Thu, Aug 4, 2016 at 6:10 PM, Nicolás  wrote:

>
>
> On 04/08/16 at 15:25, Arik Hadas wrote:
>
>>
>> - Original Message -
>>
>>> On 2016-08-04 08:24, Arik Hadas wrote:
>>>
 - Original Message -

>
> On 04/08/16 at 07:18, Arik Hadas wrote:
>
>> - Original Message -
>>
>>> Hi,
>>>
>>> We're running oVirt 4.0.1 and today I found out that one of our hosts
>>> has all its VMs in an unknown state. I actually don't know how (and
>>> when) this happened, but I'd like to restore service, possibly without
>>> turning off these machines. The host is up, the VMs are up, 'qemu'
>>> process exists, no errors, it's just the VMs running on it that have
>>> a
>>> '?' where status is defined.
>>>
>>> Is it safe in this case to simply modify database and set those VM's
>>> status to 'up'? I remember having to do this a time ago when we faced
>>> storage issues, it didn't break anything back then. If not, is there
>>> a
>>> "safe" way to migrate those VMs to a different host and restart the
>>> host
>>> that marked them as unknown?
>>>
>> Hi Nicolás,
>>
>> I assume that the host these VMs are running on is empty in the
>> webadmin,
>> right? if that is the case then you've probably hit [1]. Changing
>> their
>> status to up is not the way to go since these VMs will not be
>> monitored.
>>
> Hi Arik,
>
> By "empty" you mean the webadmin reports the host being running 0 VMs?
> If so, that's not the case, actually the VM count seems to be correct
> in
> relation to "qemu-*" processes (about 32 VMs), I can even see the
> machines in the "Virtual machines" tab of the host, it's just they are
> all marked with the '?' mark.
>
 No, I meant the 'Host' column in the Virtual Machines tab but if you
 see
 the VMs in the "Virtual machines" sub-tab of the host then run_on_vds
 points to the right host..

 The host is up in the webadmin as well?
 Can you share the engine log?

 Yes, the host is up in the webadmin, there are no issues with it, just
>>> the VMs running on it have the '?' mark. I've made 3 tests:
>>>
>>> 1) Restart engine: did not help
>>> 2) Check firewall: seems to be OK.
>>> 3) PostgreSQL: UPDATE vm_dynamic SET status = 1 WHERE status = 8; :
>>> After a while, I see lots of entries like this:
>>>
>>>   2016-08-04 09:23:10,910 WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler4) [6ad135b8] Correlation ID: null, Call Stack:
>>> null, Custom Event ID: -1, Message: VM xxx is not responding.
>>>
>>> I'm attaching the engine log, but I don't know when did this happen for
>>> the first time, though. If there's a manual way/command to migrate VMs
>>> to a different host I'd appreciate a hint about it.
>>>
>>> Is it safe to restart vdsmd on this host?
>>>
>> The engine log looks fine - the VMs are reported as not-responding for
>> some reason. I would restart libvirtd and vdsmd then
>>
>
> Is restarting those two daemons safe? I mean, will that stop all qemu-*
> processes, so the VMs marked as unknown will stop?
>
>
> Thanks.
>>>
>>> Thanks.
>
>> Yes, there is no other way to resolve it other than changing the DB, but
>> the change should be to update the run_on_vds field of these VMs to the
>> host you know they are running on. Their status will then be updated
>> within 15 sec.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1354494
>>
>> Arik.
>>
>> Thanks.
>>>
>>> Nicolás
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
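The DB fix quoted above (pointing run_on_vds at the right host) can be sketched as follows. This is a hedged sketch only: the UUIDs are placeholders, the engine DB name and user are the defaults, and the engine DB should be backed up first.

```shell
# Hedged sketch of the quoted DB fix: build the UPDATE for run_on_vds with
# placeholder UUIDs (replace with the real host/VM IDs; back up the DB first).
host_uuid='00000000-0000-0000-0000-000000000000'  # placeholder host UUID
vm_guid='11111111-1111-1111-1111-111111111111'    # placeholder VM UUID
sql="UPDATE vm_dynamic SET run_on_vds = '${host_uuid}' WHERE vm_guid = '${vm_guid}';"
echo "$sql"
# On the engine host (default DB name/user), this would be applied with:
#   su - postgres -c "psql engine -c \"$sql\""
```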



-- 
*Ekin Meroğlu** Red Hat Certified Architect*

linuxera Özgür Yazılım Çözüm ve Hizmetleri
*T* +90 (850) 22 LINUX | *GSM* +90 (532) 137 77 04
www.linuxera.com | bi...@linuxera.com
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0 Hosted Engine

2016-08-07 Thread Arsène Gschwind

Hi,

Thanks for your help...

The scheduling policy is set to none.
I've set the "Enable HA Reservation" property, installed the macspoof
hook, and power management is configured correctly and seems to work, but
all that didn't help.


Regards,

Arsène


On 08/07/2016 04:50 PM, Yanir Quinn wrote:

Hi,

Under Clusters -> your cluster -> Scheduling policy:
1. What is your selected policy, and what properties does it contain?
2. Under Additional Properties, is "Enable HA Reservation" selected?

Also check whether your host has the necessary hooks (e.g. macspoof), as
on the first host you deployed the hosted engine on, and that it has
power management enabled.

Regards,
Yanir Quinn




On Sun, Aug 7, 2016 at 4:42 PM, Arsène Gschwind 
mailto:arsene.gschw...@unibas.ch>> wrote:


Hi,

I have an oVirt setup with 2 servers using hosted-engine; both
servers registered the hosted-engine properly using:
# hosted-engine --deploy

but for some reason the second isn't recognized as a host for
hosted-engine, and I'm not able to migrate the hosted-engine.
The error I get when trying to migrate:

  * Cannot migrate VM. There is no host that satisfies current
scheduling constraints. See below for details:
  * The host xx did not satisfy internal filter HA because it
is not a Hosted Engine host..

I've tried to redeploy the hosted engine, but this fails since
the host already exists in the management DB.
I've also tried to redeploy the host from the GUI, editing the
Host and setting DEPLOY under Hosted Engine, but in that case the
Event just says the configuration was updated and nothing happens.

Is there a way to check if the host is registered as hosted-engine
host?
How could I register it correctly?

Let me know if you need any logs.
Thanks for any hint.

Regards,
Arsène


___
Users mailing list
Users@ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] oVirt 4.0 Hosted Engine

2016-08-07 Thread Yanir Quinn
Hi,

Under Clusters -> your cluster -> Scheduling policy:
1. What is your selected policy, and what properties does it contain?
2. Under Additional Properties, is "Enable HA Reservation" selected?

Also check whether your host has the necessary hooks (e.g. macspoof), as
on the first host you deployed the hosted engine on, and that it has
power management enabled.

Regards,
Yanir Quinn




On Sun, Aug 7, 2016 at 4:42 PM, Arsène Gschwind 
wrote:

> Hi,
>
> I have an oVirt setup with 2 servers using hosted-engine; both servers
> registered the hosted-engine properly using:
> # hosted-engine --deploy
>
> but for some reason the second isn't recognized as a host for
> hosted-engine, and I'm not able to migrate the hosted-engine.
> The error I get when trying to migrate:
>
>
>- Cannot migrate VM. There is no host that satisfies current
>scheduling constraints. See below for details:
>- The host xx did not satisfy internal filter HA because it is not
>a Hosted Engine host..
>
> I've tried to redeploy the hosted engine, but this fails since the host
> already exists in the management DB.
> I've also tried to redeploy the host from the GUI, editing the Host and
> setting DEPLOY under Hosted Engine, but in that case the Event just says
> the configuration was updated and nothing happens.
>
> Is there a way to check if the host is registered as hosted-engine host?
> How could I register it correctly?
>
> Let me know if you need any logs.
> Thanks for any hint.
>
> Regards,
> Arsène
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Migrate machines in unknown state?

2016-08-07 Thread Yaniv Kaul
On Thu, Aug 4, 2016 at 6:10 PM, Nicolás  wrote:

>
>
> On 04/08/16 at 15:25, Arik Hadas wrote:
>
>>
>> - Original Message -
>>
>>> On 2016-08-04 08:24, Arik Hadas wrote:
>>>
 - Original Message -

>
> On 04/08/16 at 07:18, Arik Hadas wrote:
>
>> - Original Message -
>>
>>> Hi,
>>>
>>> We're running oVirt 4.0.1 and today I found out that one of our hosts
>>> has all its VMs in an unknown state. I actually don't know how (and
>>> when) this happened, but I'd like to restore service, possibly without
>>> turning off these machines. The host is up, the VMs are up, 'qemu'
>>> process exists, no errors, it's just the VMs running on it that have
>>> a
>>> '?' where status is defined.
>>>
>>> Is it safe in this case to simply modify database and set those VM's
>>> status to 'up'? I remember having to do this a time ago when we faced
>>> storage issues, it didn't break anything back then. If not, is there
>>> a
>>> "safe" way to migrate those VMs to a different host and restart the
>>> host
>>> that marked them as unknown?
>>>
>> Hi Nicolás,
>>
>> I assume that the host these VMs are running on is empty in the
>> webadmin,
>> right? if that is the case then you've probably hit [1]. Changing
>> their
>> status to up is not the way to go since these VMs will not be
>> monitored.
>>
> Hi Arik,
>
> By "empty" you mean the webadmin reports the host being running 0 VMs?
> If so, that's not the case, actually the VM count seems to be correct
> in
> relation to "qemu-*" processes (about 32 VMs), I can even see the
> machines in the "Virtual machines" tab of the host, it's just they are
> all marked with the '?' mark.
>
 No, I meant the 'Host' column in the Virtual Machines tab but if you
 see
 the VMs in the "Virtual machines" sub-tab of the host then run_on_vds
 points to the right host..

 The host is up in the webadmin as well?
 Can you share the engine log?

 Yes, the host is up in the webadmin, there are no issues with it, just
>>> the VMs running on it have the '?' mark. I've made 3 tests:
>>>
>>> 1) Restart engine: did not help
>>> 2) Check firewall: seems to be OK.
>>> 3) PostgreSQL: UPDATE vm_dynamic SET status = 1 WHERE status = 8; :
>>> After a while, I see lots of entries like this:
>>>
>>>   2016-08-04 09:23:10,910 WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler4) [6ad135b8] Correlation ID: null, Call Stack:
>>> null, Custom Event ID: -1, Message: VM xxx is not responding.
>>>
>>> I'm attaching the engine log, but I don't know when did this happen for
>>> the first time, though. If there's a manual way/command to migrate VMs
>>> to a different host I'd appreciate a hint about it.
>>>
>>> Is it safe to restart vdsmd on this host?
>>>
>> The engine log looks fine - the VMs are reported as not-responding for
>> some reason. I would restart libvirtd and vdsmd then
>>
>
> Is restarting those two daemons safe? I mean, will that stop all qemu-*
> processes, so the VMs marked as unknown will stop?


Neither should touch the qemu process, but re-connect to it as they
restart.
Y.


>
>
> Thanks.
>>>
>>> Thanks.
>
>> Yes, there is no other way to resolve it other than changing the DB, but
>> the change should be to update the run_on_vds field of these VMs to the
>> host you know they are running on. Their status will then be updated
>> within 15 sec.
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1354494
>>
>> Arik.
>>
>> Thanks.
>>>
>>> Nicolás
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 4.0 Hosted Engine

2016-08-07 Thread Arsène Gschwind

Hi,

I have an oVirt setup with 2 servers using hosted-engine; both servers
registered the hosted-engine properly using:

# hosted-engine --deploy

but for some reason the second isn't recognized as a host for
hosted-engine, and I'm not able to migrate the hosted-engine.

The error I get when trying to migrate:

 * Cannot migrate VM. There is no host that satisfies current
   scheduling constraints. See below for details:
 * The host xx did not satisfy internal filter HA because it is not
   a Hosted Engine host..

I've tried to redeploy the hosted engine, but this fails since the
host already exists in the management DB.
I've also tried to redeploy the host from the GUI, editing the Host and
setting DEPLOY under Hosted Engine, but in that case the Event just says
the configuration was updated and nothing happens.


Is there a way to check if the host is registered as hosted-engine host?
How could I register it correctly?
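One way to check (a hedged sketch: `hosted-engine --vm-status` is the standard ovirt-hosted-engine-setup CLI, and the guard only keeps the snippet harmless on machines without it). A host that is part of the hosted-engine cluster shows up in the status output with an HA score:

```shell
# Hedged sketch: a registered hosted-engine host appears in the
# --vm-status output with an HA score; guard for machines without the CLI.
he_cli=$(command -v hosted-engine || echo none)
if [ "$he_cli" != none ]; then
  hosted-engine --vm-status
else
  echo "hosted-engine CLI not installed on this machine"
fi
```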

Let me know if you need any logs.
Thanks for any hint.

Regards,
Arsène

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] preallocated storage issue?

2016-08-07 Thread Fred Rolland
Simon,

What is happening is that when we create a preallocated disk on NFS, we
fill the file with zeros in order to "allocate" the space.
However, while copying the disk we use qemu-img, which ignores the zeros.

Quick way to demonstrate:

[root@white-vdsd test]# dd if=/dev/zero of=myfile.txt bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00172084 s, 609 MB/s

[root@white-vdsd test]# du -h myfile.txt
1.0M    myfile.txt

[root@white-vdsd test]# ls -lh myfile.txt
-rw-r--r-- 1 root root 1.0M Aug  7 12:39 myfile.txt

[root@white-vdsd test]# qemu-img convert myfile.txt myfile2.txt

[root@white-vdsd test]# ls -lh myfile2.txt
-rw-r--r-- 1 root root 1.0M Aug  7 12:41 myfile2.txt

[root@white-vdsd test]# du -h myfile2.txt
0       myfile2.txt

[root@white-vdsd test]# qemu-img info myfile.txt
image: myfile.txt
file format: raw
virtual size: 1.0M (1048576 bytes)
disk size: 1.0M

[root@white-vdsd test]# qemu-img info myfile2.txt
image: myfile2.txt
file format: raw
virtual size: 1.0M (1048576 bytes)
disk size: 0

I hope this explains what you see.

Regards,
Fred
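A possible workaround sketch (an assumption on my part, not something prescribed above): re-inflate the sparse copy after the move with `fallocate`, mirroring the zero-fill behaviour Fred describes. Paths are illustrative only.

```shell
# Hedged sketch: recreate the sparse situation, then allocate the blocks
# for real with fallocate (util-linux). Paths are illustrative only.
truncate -s 1M myfile2.raw   # stand-in for the sparse copy qemu-img left
fallocate -l 1M myfile2.raw  # back the whole 1 MiB with real blocks
du -k myfile2.raw            # now reports ~1024 KiB on disk
```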


On Sun, Aug 7, 2016 at 11:13 AM, Simon Barrett <
simon.barr...@tradingscreen.com> wrote:

> Storage is NFS. What logs would you like to see?
>
> Many thanks.
>
> Simon
>
>
>
> On Sun, Aug 7, 2016 at 8:53 AM +0100, "Fred Rolland" 
> wrote:
>
> Simon hi,
>
> What storage type are you using in source and target storage domains ?
> (NFS, ISCSI)
>
> Can you share the logs?
>
> Thanks,
> Fred
>
> On Fri, Aug 5, 2016 at 6:37 PM, Simon Barrett <
> simon.barr...@tradingscreen.com> wrote:
>
>> Another example. This one was moved to a new storage domain
>>
>>
>>
>> root@ovirt_host1> qemu-img info dd6f25f6-7830-4024-915f-a20268797c34
>>
>> image: dd6f25f6-7830-4024-915f-a20268797c34
>>
>> file format: raw
>>
>> virtual size: 200G (214748364800 bytes)
>>
>> disk size: 5.0G
>>
>>
>>
>> root@ ovirt_host1> cat dd6f25f6-7830-4024-915f-a20268797c34.meta
>>
>> DOMAIN=53560d43-874a-49c5-9c5a-8b90487c79f8
>>
>> VOLTYPE=LEAF
>>
>> CTIME=1470305678
>>
>> FORMAT=RAW
>>
>> IMAGE=741976cd-d1cb-4031-bdbe-6a745dff16ef
>>
>> DISKTYPE=2
>>
>> PUUID=----
>>
>> LEGALITY=LEGAL
>>
>> MTIME=0
>>
>> POOL_UUID=
>>
>> SIZE=419430400
>>
>> TYPE=PREALLOCATED
>>
>> DESCRIPTION=
>>
>> EOF
>>
>>
>>
>>
>>
>> This one has not been moved:
>>
>>
>>
>> r...@ny2-lvb-066.mgt> qemu-img info 155f0d33-c280-4236-8d1e-fcb88f9a1242
>>
>> image: 155f0d33-c280-4236-8d1e-fcb88f9a1242
>>
>> file format: raw
>>
>> virtual size: 90G (96636764160 bytes)
>>
>> disk size: 90G
>>
>>
>>
>> r...@ny2-lvb-066.mgt> cat 155f0d33-c280-4236-8d1e-fcb88f9a1242.meta
>>
>> DOMAIN=59bde2ff-e10d-477e-91c1-6355abff0999
>>
>> CTIME=1464946639
>>
>> FORMAT=RAW
>>
>> DISKTYPE=2
>>
>> LEGALITY=LEGAL
>>
>> SIZE=188743680
>>
>> VOLTYPE=LEAF
>>
>> DESCRIPTION={"DiskAlias":"ny2-laa-010.prod_Disk1","DiskDescription":""}
>>
>> IMAGE=82158182-81a4-458e-a41b-663756962666
>>
>> PUUID=----
>>
>> MTIME=0
>>
>> POOL_UUID=
>>
>> TYPE=PREALLOCATED
>>
>> EOF
>>
>>
>>
>> See the mismatch between “virtual size” and “disk size” on the one that
>> was moved.
>>
>>
>>
>> TIA,
>>
>>
>>
>> Simon
>>
>>
>>
>> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
>> Behalf Of *Simon Barrett
>> *Sent:* Friday, 5 August, 2016 10:25
>> *To:* users@ovirt.org
>> *Subject:* [ovirt-users] preallocated storage issue?
>>
>>
>>
>> If I create a preallocated disk for a VM, I see the disk image file
>> listing as the size I requested (100G):
>>
>>
>>
>> cd /rhev/data-center/mnt/storage_host1:_vol_pa1__nas__01b__oVirt__prod__01/53560d43-874a-49c5-9c5a-8b90487c79f8/images/d97f7706-3662-40bf-9358-80e0dc51bff4
>>
>> root@ovirt_host> ls -l
>>
>> total 105064644
>>
>> -rw-rw 1 vdsm kvm 107374182400 Aug  5 10:57
>> 75c14559-e18f-4cc8-a3fe-bc0de507720b
>>
>> -rw-rw 1 vdsm kvm  1048576 Aug  5 10:57
>> 75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>>
>> -rw-r--r-- 1 vdsm kvm  313 Aug  5 10:57
>> 75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>>
>>
>>
>> and the corresponding space used on disk matches
>>
>>
>>
>> root@ ovirt_host > du -sh *
>>
>> 101G    75c14559-e18f-4cc8-a3fe-bc0de507720b
>>
>> 1.1M    75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>>
>> 4.0K    75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>>
>>
>>
>> If I then migrate that storage (while the VM is shutdown) to a new
>> storage domain, the size on disk does not match the allocated size. In this
>> case there is nothing in the disk yet so it shows as 0.
>>
>>
>>
>> cd /rhev/data-center/mnt/storage_host2:_vol_pa1__ovirt__uatprod/1f2c2b48-1e77-4c98-a6da-5dc09b78cead/images/d97f7706-3662-40bf-9358-80e0dc51bff4
>>
>> root@ ovirt_host> ls -l
>>
>> total 1032
>>
>> -rw-rw 1 vdsm kvm 107374182400 Aug  5 11:06
>> 75c14559-e18f-4cc8-a3fe-bc0de507720b
>>
>> -rw-rw 1 vdsm kvm  1048576 Aug  5 11:06
>> 75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>>
>> -rw-r--r-- 1 vdsm kvm  

Re: [ovirt-users] preallocated storage issue?

2016-08-07 Thread Simon Barrett
Storage is NFS. What logs would you like to see?

Many thanks.

Simon



On Sun, Aug 7, 2016 at 8:53 AM +0100, "Fred Rolland" 
mailto:froll...@redhat.com>> wrote:

Simon hi,

What storage type are you using in source and target storage domains ? (NFS, 
ISCSI)

Can you share the logs?

Thanks,
Fred

On Fri, Aug 5, 2016 at 6:37 PM, Simon Barrett 
mailto:simon.barr...@tradingscreen.com>> wrote:
Another example. This one was moved to a new storage domain

root@ovirt_host1> qemu-img info dd6f25f6-7830-4024-915f-a20268797c34
image: dd6f25f6-7830-4024-915f-a20268797c34
file format: raw
virtual size: 200G (214748364800 bytes)
disk size: 5.0G

root@ ovirt_host1> cat dd6f25f6-7830-4024-915f-a20268797c34.meta
DOMAIN=53560d43-874a-49c5-9c5a-8b90487c79f8
VOLTYPE=LEAF
CTIME=1470305678
FORMAT=RAW
IMAGE=741976cd-d1cb-4031-bdbe-6a745dff16ef
DISKTYPE=2
PUUID=----
LEGALITY=LEGAL
MTIME=0
POOL_UUID=
SIZE=419430400
TYPE=PREALLOCATED
DESCRIPTION=
EOF


This one has not been moved:

r...@ny2-lvb-066.mgt> qemu-img info 155f0d33-c280-4236-8d1e-fcb88f9a1242
image: 155f0d33-c280-4236-8d1e-fcb88f9a1242
file format: raw
virtual size: 90G (96636764160 bytes)
disk size: 90G

r...@ny2-lvb-066.mgt> cat 155f0d33-c280-4236-8d1e-fcb88f9a1242.meta
DOMAIN=59bde2ff-e10d-477e-91c1-6355abff0999
CTIME=1464946639
FORMAT=RAW
DISKTYPE=2
LEGALITY=LEGAL
SIZE=188743680
VOLTYPE=LEAF
DESCRIPTION={"DiskAlias":"ny2-laa-010.prod_Disk1","DiskDescription":""}
IMAGE=82158182-81a4-458e-a41b-663756962666
PUUID=----
MTIME=0
POOL_UUID=
TYPE=PREALLOCATED
EOF

See the mismatch between “virtual size” and “disk size” on the one that was 
moved.

TIA,

Simon

From: users-boun...@ovirt.org 
[mailto:users-boun...@ovirt.org] On Behalf Of 
Simon Barrett
Sent: Friday, 5 August, 2016 10:25
To: users@ovirt.org
Subject: [ovirt-users] preallocated storage issue?

If I create a preallocated disk for a VM, I see the disk image file listing as 
the size I requested (100G):

cd 
/rhev/data-center/mnt/storage_host1:_vol_pa1__nas__01b__oVirt__prod__01/53560d43-874a-49c5-9c5a-8b90487c79f8/images/d97f7706-3662-40bf-9358-80e0dc51bff4
root@ovirt_host> ls -l
total 105064644
-rw-rw 1 vdsm kvm 107374182400 Aug  5 10:57 
75c14559-e18f-4cc8-a3fe-bc0de507720b
-rw-rw 1 vdsm kvm  1048576 Aug  5 10:57 
75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
-rw-r--r-- 1 vdsm kvm  313 Aug  5 10:57 
75c14559-e18f-4cc8-a3fe-bc0de507720b.meta

and the corresponding space used on disk matches

root@ ovirt_host > du -sh *
101G    75c14559-e18f-4cc8-a3fe-bc0de507720b
1.1M    75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
4.0K    75c14559-e18f-4cc8-a3fe-bc0de507720b.meta

If I then migrate that storage (while the VM is shutdown) to a new storage 
domain, the size on disk does not match the allocated size. In this case there 
is nothing in the disk yet so it shows as 0.

cd 
/rhev/data-center/mnt/storage_host2:_vol_pa1__ovirt__uatprod/1f2c2b48-1e77-4c98-a6da-5dc09b78cead/images/d97f7706-3662-40bf-9358-80e0dc51bff4
root@ ovirt_host> ls -l
total 1032
-rw-rw 1 vdsm kvm 107374182400 Aug  5 11:06 
75c14559-e18f-4cc8-a3fe-bc0de507720b
-rw-rw 1 vdsm kvm  1048576 Aug  5 11:06 
75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
-rw-r--r-- 1 vdsm kvm  313 Aug  5 11:06 
75c14559-e18f-4cc8-a3fe-bc0de507720b.meta

root@ ovirt_host > du -sh *
0   75c14559-e18f-4cc8-a3fe-bc0de507720b
1.1M    75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
4.0K    75c14559-e18f-4cc8-a3fe-bc0de507720b.meta

oVirt still lists the disk as preallocated in the GUI but it is in fact thin 
provisioned.

I see the same issue if I clone a preallocated VM: the size on disk ends up
being the equivalent of a thin-provisioned disk. I also had the issue when
importing VMs from an export domain when I had selected preallocated in the
import dialog box.

Is this a known issue? Should preallocated not mean preallocated on physical 
disk?

Ovirt Engine is running 3.6.4.1-1.el6

The ovirt nodes are running:

OS Version:        RHEL - 7 - 2.1511.el7.centos.2.10
Kernel Version: 3.10.0 - 327.4.5.el7.x86_64
KVM Version:       2.3.0 - 31.el7_2.7.1
LIBVIRT Version:   libvirt-1.2.17-13.el7_2.2
VDSM Version: vdsm-4.17.23.2-0.el7.centos
SPICE Version:  0.12.4 - 15.el7
GlusterFS Version:   [N/A]
CEPH Version:   librbd1-0.80.7-3.el7

Many thanks,

Simon

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] preallocated storage issue?

2016-08-07 Thread Fred Rolland
Simon hi,

What storage type are you using in source and target storage domains ?
(NFS, ISCSI)

Can you share the logs?

Thanks,
Fred

On Fri, Aug 5, 2016 at 6:37 PM, Simon Barrett <
simon.barr...@tradingscreen.com> wrote:

> Another example. This one was moved to a new storage domain
>
>
>
> root@ovirt_host1> qemu-img info dd6f25f6-7830-4024-915f-a20268797c34
>
> image: dd6f25f6-7830-4024-915f-a20268797c34
>
> file format: raw
>
> virtual size: 200G (214748364800 bytes)
>
> disk size: 5.0G
>
>
>
> root@ ovirt_host1> cat dd6f25f6-7830-4024-915f-a20268797c34.meta
>
> DOMAIN=53560d43-874a-49c5-9c5a-8b90487c79f8
>
> VOLTYPE=LEAF
>
> CTIME=1470305678
>
> FORMAT=RAW
>
> IMAGE=741976cd-d1cb-4031-bdbe-6a745dff16ef
>
> DISKTYPE=2
>
> PUUID=----
>
> LEGALITY=LEGAL
>
> MTIME=0
>
> POOL_UUID=
>
> SIZE=419430400
>
> TYPE=PREALLOCATED
>
> DESCRIPTION=
>
> EOF
>
>
>
>
>
> This one has not been moved:
>
>
>
> r...@ny2-lvb-066.mgt> qemu-img info 155f0d33-c280-4236-8d1e-fcb88f9a1242
>
> image: 155f0d33-c280-4236-8d1e-fcb88f9a1242
>
> file format: raw
>
> virtual size: 90G (96636764160 bytes)
>
> disk size: 90G
>
>
>
> r...@ny2-lvb-066.mgt> cat 155f0d33-c280-4236-8d1e-fcb88f9a1242.meta
>
> DOMAIN=59bde2ff-e10d-477e-91c1-6355abff0999
>
> CTIME=1464946639
>
> FORMAT=RAW
>
> DISKTYPE=2
>
> LEGALITY=LEGAL
>
> SIZE=188743680
>
> VOLTYPE=LEAF
>
> DESCRIPTION={"DiskAlias":"ny2-laa-010.prod_Disk1","DiskDescription":""}
>
> IMAGE=82158182-81a4-458e-a41b-663756962666
>
> PUUID=----
>
> MTIME=0
>
> POOL_UUID=
>
> TYPE=PREALLOCATED
>
> EOF
>
>
>
> See the mismatch between “virtual size” and “disk size” on the one that
> was moved.
>
>
>
> TIA,
>
>
>
> Simon
>
>
>
> *From:* users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On
> Behalf Of *Simon Barrett
> *Sent:* Friday, 5 August, 2016 10:25
> *To:* users@ovirt.org
> *Subject:* [ovirt-users] preallocated storage issue?
>
>
>
> If I create a preallocated disk for a VM, I see the disk image file
> listing as the size I requested (100G):
>
>
>
> cd /rhev/data-center/mnt/storage_host1:_vol_pa1__nas__01b__oVirt__prod__01/53560d43-874a-49c5-9c5a-8b90487c79f8/images/d97f7706-3662-40bf-9358-80e0dc51bff4
>
> root@ovirt_host> ls -l
>
> total 105064644
>
> -rw-rw 1 vdsm kvm 107374182400 Aug  5 10:57 75c14559-e18f-4cc8-a3fe-bc0de507720b
>
> -rw-rw 1 vdsm kvm  1048576 Aug  5 10:57 75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>
> -rw-r--r-- 1 vdsm kvm  313 Aug  5 10:57 75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>
>
>
> and the corresponding space used on disk matches
>
>
>
> root@ ovirt_host > du -sh *
>
> 101G    75c14559-e18f-4cc8-a3fe-bc0de507720b
>
> 1.1M    75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>
> 4.0K    75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>
>
>
> If I then migrate that storage (while the VM is shutdown) to a new storage
> domain, the size on disk does not match the allocated size. In this case
> there is nothing in the disk yet so it shows as 0.
>
>
>
> cd /rhev/data-center/mnt/storage_host2:_vol_pa1__ovirt__uatprod/1f2c2b48-1e77-4c98-a6da-5dc09b78cead/images/d97f7706-3662-40bf-9358-80e0dc51bff4
>
> root@ ovirt_host> ls -l
>
> total 1032
>
> -rw-rw 1 vdsm kvm 107374182400 Aug  5 11:06 75c14559-e18f-4cc8-a3fe-bc0de507720b
>
> -rw-rw 1 vdsm kvm  1048576 Aug  5 11:06 75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>
> -rw-r--r-- 1 vdsm kvm  313 Aug  5 11:06 75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>
>
>
> root@ ovirt_host > du -sh *
>
> 0   75c14559-e18f-4cc8-a3fe-bc0de507720b
>
> 1.1M    75c14559-e18f-4cc8-a3fe-bc0de507720b.lease
>
> 4.0K    75c14559-e18f-4cc8-a3fe-bc0de507720b.meta
>
>
>
> oVirt still lists the disk as preallocated in the GUI but it is in fact
> thin provisioned.
>
>
>
> I see the same issue if I clone a preallocated VM. The size on disk ends
> up being the equivalent of a thin-provisioned disk. I also had the issue
> when importing VM’s from an export domain when I had selected preallocated
> in the import dialog box.
>
>
>
> Is this a known issue? Should preallocated not mean preallocated on
> physical disk?
>
>
>
> Ovirt Engine is running 3.6.4.1-1.el6
>
>
>
> The ovirt nodes are running:
>
>
>
> OS Version:        RHEL - 7 - 2.1511.el7.centos.2.10
>
> Kernel Version: 3.10.0 - 327.4.5.el7.x86_64
>
> KVM Version:       2.3.0 - 31.el7_2.7.1
>
> LIBVIRT Version:   libvirt-1.2.17-13.el7_2.2
>
> VDSM Version: vdsm-4.17.23.2-0.el7.centos
>
> SPICE Version:  0.12.4 - 15.el7
>
> GlusterFS Version:   [N/A]
>
> CEPH Version:   librbd1-0.80.7-3.el7
>
>
>
> Many thanks,
>
>
>
> Simon
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.