Re: [ovirt-users] Testing ovirt 3.6 beta 3: Gluster hyperconvergence

2015-08-28 Thread Simone Tiraboschi
On Thu, Aug 27, 2015 at 7:14 PM, wodel youchi wodel.you...@gmail.com
wrote:

 Hi,

 Is gluster hyperconvergence still part of ovirt 3.6?


No, unfortunately due to various issues it didn't make it in before the feature
freeze, so we had to postpone it to the next release.
It's still considered a really interesting feature and we are still working
on it, but it will not come in 3.6.

Last week we had a presentation about it at the KVM forum, here you can
find some updated info:
http://www.linux-kvm.org/images/5/51/03x04-Martin_Sivak-oVirt_and_gluster_hyperconverged.pdf


 I wanted to test the concept, but the deploy script doesn't give me the
 option to create a local brick for the engine VM after selecting glusterfs
 as storage.

 PS: vdsm-gluster and glusterfs-server are installed, vdsmd service was
 restarted

 Regards.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

2015-08-28 Thread VONDRA Alain
Hi Punit,
Thanks a lot for your answer; I feel more confident about migrating the engine
to a new host this afternoon.
Regards
Alain







Alain VONDRA
Chargé d'exploitation des Systèmes d'Information
Direction Administrative et Financière
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr

From: Punit Dambiwal [mailto:hypu...@gmail.com]
Sent: Friday, August 28, 2015 03:36
To: VONDRA Alain
Cc: Martin Perina; users@ovirt.org
Subject: Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

Hi Vondra,

Yes... I have been running a 10-server cluster farm on oVirt 3.5.3 with CentOS 7.1
everywhere (engine and hosts) for the last 6 months and haven't faced any critical
issues with the setup.

Thanks,
Punit

On Thu, Aug 27, 2015 at 11:35 PM, VONDRA Alain avon...@unicef.fr wrote:
Hi,
Thanks for your answer, but do you mean that oVirt 3.5.3 is fully supported on
CentOS 7.1? My environment is on this oVirt version.
Thanks
Alain



Alain VONDRA
Chargé d'exploitation des Systèmes d'Information
Direction Administrative et Financière
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr




-Original Message-
From: Martin Perina [mailto:mper...@redhat.com]
Sent: Thursday, August 27, 2015 12:12
To: VONDRA Alain
Cc: users@ovirt.org
Subject: Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

Hi,

yes, CentOS 7.1 is a fully supported OS in oVirt 3.6. I don't have any production
setup, but many developers use CentOS 7.1 and AFAIK there are no big issues
that prevent you from using CentOS 7.1 on the engine and hosts.

Martin Perina

- Original Message -
 From: VONDRA Alain avon...@unicef.fr
 To: users@ovirt.org
 Sent: Thursday, August 27, 2015 11:57:19 AM
 Subject: Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

 Hi,
 Is there anybody with experience of this operation?
 Thanks





 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information Direction
 Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr




 -Original Message-
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of VONDRA Alain
 Sent: Wednesday, August 26, 2015 15:17
 To: users@ovirt.org
 Subject: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

 Hello,
 Before performing a migration of my oVirt engine, I'd like to know whether I
 can install my manager directly as a host with CentOS 7.1 and oVirt
 3.5.3, without using hosted engine.
 I know that with older versions we had to use only a hosted-engine
 installation.
 Thanks for your advice.
 Alain



 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information Direction
 Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread Jan Siml

Hello,

if no one has an idea how to correct the Disk/Snapshot paths in the Engine
database, I see only one possible way to solve the issue:


Stop the VM and copy the image/meta files from the target storage to the source
storage (the one where the Engine thinks the files are located). Then start the VM.
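
A minimal sketch of that copy step on the host, assuming both are NFS domains
mounted under /rhev/data-center/mnt; every path/UUID below is a placeholder,
not taken from this thread:

  # target = where qemu actually wrote, source = where the Engine expects the files
  TARGET=/rhev/data-center/mnt/<target-server:_export>/<target-domain-uuid>/images/<image-group-uuid>
  SOURCE=/rhev/data-center/mnt/<source-server:_export>/<source-domain-uuid>/images/<image-group-uuid>
  # VM must be down; -a preserves the vdsm:kvm (36:36) ownership and timestamps
  cp -a "$TARGET"/. "$SOURCE"/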


Any concerns regarding this procedure? But I still hope that someone 
from oVirt team can give an advice how to correct the database entries. 
If necessary I would open a bug in Bugzilla.


Kind regards

Jan Siml


after a failed live storage migration (cause unknown) we have a
snapshot which is undeletable due to its status 'illegal' (as seen
in the storage/snapshot tab). I have already found some bugs [1],[2],[3]
regarding this issue, but no way to solve the issue within oVirt 3.5.3.


I have attached the relevant engine.log snippet. Is there any way to
do a live merge (and therefore delete the snapshot)?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links to [3]
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no access)


some additional information. I have checked the images on both storages
and verified the disk paths with virsh's dumpxml.

a) The images and snapshots are on both storages.
b) The images on source storage aren't used. (modification time)
c) The images on target storage are used. (modification time)
d) virsh -r dumpxml tells me disk images are located on _target_ storage.
e) The admin interface tells me that the images and snapshot are located on the
_source_ storage, which isn't true; see b), c) and d).

What can we do to solve this issue? Does this need to be corrected in the
database only?
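
For reference, the checks in b)-d) can be reproduced with something like the
following; the VM name and the path components are placeholders, not values
from this thread:

  # which image files the running qemu process actually uses
  virsh -r dumpxml <vm-name> | grep 'source file'
  # modification times of the image files on a given (NFS) storage domain
  ls -l --time-style=full-iso \
      /rhev/data-center/mnt/<server:_export>/<domain-uuid>/images/<image-group-uuid>/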

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3

2015-08-28 Thread Martin Perina
Hi,

sorry, I'm currently so focused on 3.6 that I made a mistake. oVirt 3.5.3
should work fine on CentOS 7.1.

Martin Perina


- Original Message -
 From: VONDRA Alain avon...@unicef.fr
 To: Martin Perina mper...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, August 27, 2015 5:35:34 PM
 Subject: RE: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3
 
 Hi,
 Thanks for your answer, but do you mean that oVirt 3.5.3 is fully supported
 on CentOS 7.1? My environment is on this oVirt version.
 Thanks
 Alain
 
 
 
 Alain VONDRA
 Chargé d'exploitation des Systèmes d'Information
 Direction Administrative et Financière
 +33 1 44 39 77 76
 UNICEF France
 3 rue Duguay Trouin  75006 PARIS
 www.unicef.fr
 
 
 
 
 -Original Message-
 From: Martin Perina [mailto:mper...@redhat.com]
 Sent: Thursday, August 27, 2015 12:12
 To: VONDRA Alain
 Cc: users@ovirt.org
 Subject: Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3
 
 Hi,
 
 yes, CentOS 7.1 is a fully supported OS in oVirt 3.6. I don't have any
 production setup, but many developers use CentOS 7.1 and AFAIK there are no
 big issues that prevent you from using CentOS 7.1 on the engine and hosts.
 
 Martin Perina
 
 - Original Message -
  From: VONDRA Alain avon...@unicef.fr
  To: users@ovirt.org
  Sent: Thursday, August 27, 2015 11:57:19 AM
  Subject: Re: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3
 
  Hi,
  Is there anybody with experience of this operation?
  Thanks
 
 
 
 
 
  Alain VONDRA
  Chargé d'exploitation des Systèmes d'Information Direction
  Administrative et Financière
  +33 1 44 39 77 76
  UNICEF France
  3 rue Duguay Trouin  75006 PARIS
  www.unicef.fr
 
 
 
 
  -Original Message-
  From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of VONDRA Alain
  Sent: Wednesday, August 26, 2015 15:17
  To: users@ovirt.org
  Subject: [ovirt-users] CentOS 7.1 compatibility with oVirt 3.5.3
 
  Hello,
  Before performing a migration of my oVirt engine, I'd like to know whether I
  can install my manager directly as a host with CentOS 7.1 and oVirt
  3.5.3, without using hosted engine.
  I know that with older versions we had to use only a hosted-engine
  installation.
  Thanks for your advice.
  Alain
 
 
 
  Alain VONDRA
  Chargé d'exploitation des Systèmes d'Information Direction
  Administrative et Financière
  +33 1 44 39 77 76
  UNICEF France
  3 rue Duguay Trouin  75006 PARIS
  www.unicef.fr
 
 
 
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.5.3.1 - Clone_VM Process deleted Source-VM and the Clone-VM does not contain any disk anymore

2015-08-28 Thread Christian Rebel
Dear all,

 

I started a Clone_VM via the GUI, but now the Source-VM has been
deleted and the Target-VM does not contain any disk!

The task is displaying that Copying Image and Finalize have failed;
I hope there is a way to restore the VM somehow - please help me.

 

From the Logfile:

 

2015-08-28 12:47:20,950 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-7) Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VM Katello is down. Exit message: User shut
down from within the guest

2015-08-28 12:47:20,955 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-7) VM Katello
(9013e3c2-3cd7-4eae-a3e6-f5e83a64db87) is running in db and not running in
VDS itsatltovirtaio.domain.local

2015-08-28 12:47:20,957 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(DefaultQuartzScheduler_Worker-7) START, FullListVdsCommand(HostName =
itsatltovirtaio.domain.local, HostId = b783a2ee-4a63-46ca-9afc-b3b74f0e10ce,
vds=Host[itsatltovirtaio.domain.local,b783a2ee-4a63-46ca-9afc-b3b74f0e10ce],
vmIds=[9013e3c2-3cd7-4eae-a3e6-f5e83a64db87]), log id: 39590448

2015-08-28 12:47:20,966 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(DefaultQuartzScheduler_Worker-7) FINISH, FullListVdsCommand, return: [],
log id: 39590448

2015-08-28 12:47:21,046 INFO
[org.ovirt.engine.core.bll.ProcessDownVmCommand]
(org.ovirt.thread.pool-8-thread-17) [82bee5d] Running command:
ProcessDownVmCommand internal: true.

2015-08-28 12:47:24,589 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-3) Polling and updating Async Tasks: 2 tasks,
2 tasks to poll now

2015-08-28 12:47:24,600 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-3) SPMAsyncTask::PollTask: Polling task
037b2c85-68d2-4159-8310-91c472038b5b (Parent Command
ProcessOvfUpdateForStorageDomain, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status
finished, result 'success'.

2015-08-28 12:47:24,603 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-3) BaseAsyncTask::onTaskEndSuccess: Task
037b2c85-68d2-4159-8310-91c472038b5b (Parent Command
ProcessOvfUpdateForStorageDomain, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
successfully.

2015-08-28 12:47:24,604 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(DefaultQuartzScheduler_Worker-3) Task with DB Task ID
0e6a6d72-0cea-41aa-8fe9-9262bc53d558 and VDSM Task ID
cd125365-3344-4f45-b67a-39c2fa5112ab is in state Polling. End action for
command e9edfed0-915a-4534-b774-c07682bafa59 will proceed when all the
entitys tasks are completed.

2015-08-28 12:47:24,605 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-3) SPMAsyncTask::PollTask: Polling task
cd125365-3344-4f45-b67a-39c2fa5112ab (Parent Command
ProcessOvfUpdateForStorageDomain, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status
finished, result 'success'.

2015-08-28 12:47:24,606 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-3) BaseAsyncTask::onTaskEndSuccess: Task
cd125365-3344-4f45-b67a-39c2fa5112ab (Parent Command
ProcessOvfUpdateForStorageDomain, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
successfully.

2015-08-28 12:47:24,606 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-3) CommandAsyncTask::endActionIfNecessary:
All tasks of command e9edfed0-915a-4534-b774-c07682bafa59 has ended -
executing endAction

2015-08-28 12:47:24,607 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-3) CommandAsyncTask::endAction: Ending action
for 2 tasks (command ID: e9edfed0-915a-4534-b774-c07682bafa59): calling
endAction .

2015-08-28 12:47:24,607 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-33) CommandAsyncTask::endCommandAction
[within thread] context: Attempting to endAction
ProcessOvfUpdateForStorageDomain, executionIndex: 0

2015-08-28 12:47:24,668 INFO
[org.ovirt.engine.core.bll.ProcessOvfUpdateForStorageDomainCommand]
(org.ovirt.thread.pool-8-thread-33) [484cded8] Ending command successfully:
org.ovirt.engine.core.bll.ProcessOvfUpdateForStorageDomainCommand

2015-08-28 12:47:24,669 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-33) [484cded8]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type ProcessOvfUpdateForStorageDomain completed, handling the result.

2015-08-28 12:47:24,670 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-33) [484cded8]
CommandAsyncTask::HandleEndActionResult [within thread]: endAction for
action type ProcessOvfUpdateForStorageDomain succeeded, 

Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread InterNetX - Juergen Gotteswinter
I got exactly the same issue, with all the nice side effects like performance
degradation. Until now I was not able to fix this, or to fool the engine somehow
so that it would show the image as OK again and give me a second chance to drop the
snapshot.
 
In some cases this procedure helped (it needs a second storage domain):
 
- image live migration to a different storage domain (check which combinations
are supported; iscsi -> nfs domains seem unsupported, iscsi -> iscsi works)
- the snapshot went into OK state, and in ~50% of cases I was then able to drop
the snapshot. Space had been reclaimed, so it seems like this worked
 
 
Another workaround is exporting the image onto an NFS export domain; there
you can tell the engine not to export snapshots. After re-importing, everything
is fine.
 
 
The snapshot feature (live snapshots at least) should currently be avoided
altogether; it is simply not reliable enough.
 
 
Your way works, too. I already did that, even though it was a pain to figure out
where to find what. This symlinking mess between /rhev, /dev and /var/lib/libvirt
is really awesome. Not.
 
 
 Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:


 Hello,

 if no one has an idea how to correct the Disk/Snapshot paths in Engine
 database, I see only one possible way to solve the issue:

 Stop the VM and copy image/meta files target storage to source storage
 (the one where Engine thinks the files are located). Start the VM.

 Any concerns regarding this procedure? But I still hope that someone
 from oVirt team can give an advice how to correct the database entries.
 If necessary I would open a bug in Bugzilla.

 Kind regards

 Jan Siml

  after a failed live storage migration (cause unknown) we have a
  snapshot which is undeletable due to its status 'illegal' (as seen
  in storage/snapshot tab). I have already found some bugs [1],[2],[3]
  regarding this issue, but no way how to solve the issue within oVirt
   3.5.3.
 
  I have attached the relevant engine.log snippet. Is there any way to
  do a live merge (and therefore delete the snapshot)?
 
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
  [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links to [3]
  [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no access)
 
  some additional informations. I have checked the images on both storages
  and verified the disk paths with virsh's dumpxml.
 
  a) The images and snapshots are on both storages.
  b) The images on source storage aren't used. (modification time)
  c) The images on target storage are used. (modification time)
  d) virsh -r dumpxml tells me disk images are located on _target_ storage.
  e) Admin interface tells me, that images and snapshot are located on
  _source_ storage, which isn't true, see b), c) and d).
 
  What can we do, to solve this issue? Is this to be corrected in database
  only?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread InterNetX - Juergen Gotteswinter

 Jan Siml js...@plusline.net wrote on 28 August 2015 at 15:15:


 Hello Juergen,

  got exactly the same issue, with all nice side effects like performance
  degradation. Until now i was not able to fix this, or to fool the engine
  somehow that it whould show the image as ok again and give me a 2nd
  chance to drop the snapshot.
  in some cases this procedure helped (needs 2nd storage domain)
  - image live migration to a different storage domain (check which
  combinations are supported, iscsi - nfs domain seems unsupported. iscsi
  - iscsi works)
  - snapshot went into ok state, and in ~50% i was able to drop the
  snapshot than. space had been reclaimed, so seems like this worked

 okay, seems interesting. But I'm afraid of not knowing which image files
 Engine uses when live migration is demanded. If Engine uses the ones
 which are actually used and updates the database afterwards -- fine. But
 if the images are used that are referenced in Engine database, we will
 take a journey into the past.
 
Knocking on wood: so far no problems, and I have used this approach for sure 50+ times.
 
In cases where the live merge failed, offline merging worked in another 50% of
cases. Those which failed offline too went back to the illegal snapshot state.


  other workaround is through exporting the image onto a nfs export
  domain, here you can tell the engine to not export snapshots. after
  re-importing everything is fine
  the snapshot feature (live at least) should be avoided at all
  currently simply not reliable enaugh.
  your way works, too. already did that, even it was a pita to figure out
  where to find what. this symlinking mess between /rhev /dev and
  /var/lib/libvirt is really awesome. not.
   Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:
  
  
   Hello,
  
   if no one has an idea how to correct the Disk/Snapshot paths in Engine
   database, I see only one possible way to solve the issue:
  
   Stop the VM and copy image/meta files target storage to source storage
   (the one where Engine thinks the files are located). Start the VM.
  
   Any concerns regarding this procedure? But I still hope that someone
   from oVirt team can give an advice how to correct the database entries.
   If necessary I would open a bug in Bugzilla.
  
   Kind regards
  
   Jan Siml
  
after a failed live storage migration (cause unknown) we have a
snapshot which is undeletable due to its status 'illegal' (as seen
in storage/snapshot tab). I have already found some bugs [1],[2],[3]
regarding this issue, but no way how to solve the issue within oVirt
 3.5.3.
   
I have attached the relevant engine.log snippet. Is there any way to
do a live merge (and therefore delete the snapshot)?
   
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links to [3]
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no access)
   
some additional informations. I have checked the images on both
  storages
and verified the disk paths with virsh's dumpxml.
   
a) The images and snapshots are on both storages.
b) The images on source storage aren't used. (modification time)
c) The images on target storage are used. (modification time)
d) virsh -r dumpxml tells me disk images are located on _target_
  storage.
e) Admin interface tells me, that images and snapshot are located on
_source_ storage, which isn't true, see b), c) and d).
   
What can we do, to solve this issue? Is this to be corrected in
  database
only?

 Kind regards

 Jan Siml
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] oVirt 3.5.3.1 - Snapshot Failure with Error creating a new volume, code = 205

2015-08-28 Thread Christian Rebel
Hi all,

 

I have a problem performing a snapshot on one of my important VMs; could
anyone please be so kind as to assist me?

 

 start of problematic vm snapshot 

2015-08-28 15:36:08,172 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(ajp--127.0.0.1-8702-6) [439efb74] Lock Acquired to object EngineLock
[exclusiveLocks= key: ee2ea036-2af3-4a18-9329-08a7b0e7ce7c value: VM

, sharedLocks= ]

2015-08-28 15:36:08,224 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Command
44173a42-970f-42b3-8d09-ca113c58b5df persisting async task placeholder for
child command be3c1922-aca6-4caf-a432-34fe95043446

2015-08-28 15:36:08,367 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Command
44173a42-970f-42b3-8d09-ca113c58b5df persisting async task placeholder for
child command 03a43c66-0fed-4a82-9139-7b89328f4ae4

2015-08-28 15:36:08,517 INFO
[org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
(org.ovirt.thread.pool-8-thread-45) Running command:
CreateAllSnapshotsFromVmCommand internal: false. Entities affected :  ID:
ee2ea036-2af3-4a18-9329-08a7b0e7ce7c Type: VMAction group
MANIPULATE_VM_SNAPSHOTS with role type USER

2015-08-28 15:36:08,550 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] Running command:
CreateSnapshotCommand internal: true. Entities affected :  ID:
---- Type: Storage

2015-08-28 15:36:08,560 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] START,
CreateSnapshotVDSCommand( storagePoolId =
0002-0002-0002-0002-0021, ignoreFailoverLimit = false,
storageDomainId = 937822d9-8a59-490f-95b7-48371ae32253, imageGroupId =
e7e99288-ad83-406e-9cb6-7a5aa443de9b, imageSizeInBytes = 21474836480,
volumeFormat = COW, newImageId = 2013aa82-6316-4b54-851b-88bf7f523b9c,
newImageDescription = , imageId = c5762dec-d9d1-4842-84d1-05896d4d27fb,
sourceImageGroupId = e7e99288-ad83-406e-9cb6-7a5aa443de9b), log id: d1ffce0

2015-08-28 15:36:08,567 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID

2015-08-28 15:36:08,655 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] FINISH,
CreateSnapshotVDSCommand, return: 2013aa82-6316-4b54-851b-88bf7f523b9c, log
id: d1ffce0

2015-08-28 15:36:08,668 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] CommandAsyncTask::Adding
CommandMultiAsyncTasks object for command
44173a42-970f-42b3-8d09-ca113c58b5df

2015-08-28 15:36:08,670 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-8-thread-45) [86e8aad]
CommandMultiAsyncTasks::AttachTask: Attaching task
0221e559-0eec-468b-bc4f-a7aaa487661a to command
44173a42-970f-42b3-8d09-ca113c58b5df.

2015-08-28 15:36:08,734 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(org.ovirt.thread.pool-8-thread-45) [86e8aad] Adding task
0221e559-0eec-468b-bc4f-a7aaa487661a (Parent Command
CreateAllSnapshotsFromVm, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't
started yet..

2015-08-28 15:36:08,793 INFO
[org.ovirt.engine.core.bll.CreateSnapshotCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] Running command:
CreateSnapshotCommand internal: true. Entities affected :  ID:
---- Type: Storage

2015-08-28 15:36:08,797 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] START,
CreateSnapshotVDSCommand( storagePoolId =
0002-0002-0002-0002-0021, ignoreFailoverLimit = false,
storageDomainId = 937822d9-8a59-490f-95b7-48371ae32253, imageGroupId =
6281b597-020d-4ea7-a954-bb798a0ca4f1, imageSizeInBytes = 161061273600,
volumeFormat = COW, newImageId = fd9c6e36-90ca-488a-8cbd-534a0caf6886,
newImageDescription = , imageId = 2a2015a1-f62c-4e32-8b04-77ece2ba4cc1,
sourceImageGroupId = 6281b597-020d-4ea7-a954-bb798a0ca4f1), log id: 4bc33f2e

2015-08-28 15:36:08,799 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] -- executeIrsBrokerCommand:
calling 'createVolume' with two new parameters: description and UUID

2015-08-28 15:36:09,000 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
(org.ovirt.thread.pool-8-thread-45) [53768935] FINISH,
CreateSnapshotVDSCommand, return: fd9c6e36-90ca-488a-8cbd-534a0caf6886, log
id: 4bc33f2e

2015-08-28 15:36:09,011 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(org.ovirt.thread.pool-8-thread-45) [53768935]
CommandMultiAsyncTasks::AttachTask: Attaching task

Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread Jan Siml

Hello Juergen,


got exactly the same issue, with all nice side effects like performance
degradation. Until now i was not able to fix this, or to fool the engine
somehow that it whould show the image as ok again and give me a 2nd
chance to drop the snapshot.
in some cases this procedure helped (needs 2nd storage domain)
- image live migration to a different storage domain (check which
combinations are supported, iscsi - nfs domain seems unsupported. iscsi
- iscsi works)
- snapshot went into ok state, and in ~50% i was able to drop the
snapshot than. space had been reclaimed, so seems like this worked


Okay, that seems interesting. But I'm afraid I don't know which image files
the Engine uses when live migration is requested. If the Engine uses the ones
which are actually in use and updates the database afterwards -- fine. But
if the images referenced in the Engine database are used, we will
take a journey into the past.



other workaround is through exporting the image onto a nfs export
domain, here you can tell the engine to not export snapshots. after
re-importing everything is fine
the snapshot feature (live at least) should be avoided at all
currently simply not reliable enaugh.
your way works, too. already did that, even it was a pita to figure out
where to find what. this symlinking mess between /rhev /dev and
/var/lib/libvirt is really awesome. not.
  Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:
 
 
  Hello,
 
  if no one has an idea how to correct the Disk/Snapshot paths in Engine
  database, I see only one possible way to solve the issue:
 
  Stop the VM and copy image/meta files target storage to source storage
  (the one where Engine thinks the files are located). Start the VM.
 
  Any concerns regarding this procedure? But I still hope that someone
  from oVirt team can give an advice how to correct the database entries.
  If necessary I would open a bug in Bugzilla.
 
  Kind regards
 
  Jan Siml
 
   after a failed live storage migration (cause unknown) we have a
   snapshot which is undeletable due to its status 'illegal' (as seen
   in storage/snapshot tab). I have already found some bugs [1],[2],[3]
   regarding this issue, but no way how to solve the issue within oVirt
3.5.3.
  
   I have attached the relevant engine.log snippet. Is there any way to
   do a live merge (and therefore delete the snapshot)?
  
   [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
   [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links to [3]
   [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no access)
  
   some additional informations. I have checked the images on both
storages
   and verified the disk paths with virsh's dumpxml.
  
   a) The images and snapshots are on both storages.
   b) The images on source storage aren't used. (modification time)
   c) The images on target storage are used. (modification time)
   d) virsh -r dumpxml tells me disk images are located on _target_
storage.
   e) Admin interface tells me, that images and snapshot are located on
   _source_ storage, which isn't true, see b), c) and d).
  
   What can we do, to solve this issue? Is this to be corrected in
database
   only?


Kind regards

Jan Siml
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread InterNetX - Juergen Gotteswinter

 Jan Siml js...@plusline.net wrote on 28 August 2015 at 16:47:


 Hello,

got exactly the same issue, with all nice side effects like performance
degradation. Until now i was not able to fix this, or to fool the
  engine
somehow that it whould show the image as ok again and give me a 2nd
chance to drop the snapshot.
in some cases this procedure helped (needs 2nd storage domain)
- image live migration to a different storage domain (check which
combinations are supported, iscsi - nfs domain seems unsupported.
  iscsi
- iscsi works)
- snapshot went into ok state, and in ~50% i was able to drop the
snapshot than. space had been reclaimed, so seems like this worked
  
   okay, seems interesting. But I'm afraid of not knowing which image files
   Engine uses when live migration is demanded. If Engine uses the ones
   which are actually used and updates the database afterwards -- fine. But
   if the images are used that are referenced in Engine database, we will
   take a journey into the past.
  knocking on wood. so far no problems, and i used this way for sure 50
  times +

 This doesn't work. Engine creates the snapshots on wrong storage (old)
 and this process fails, cause the VM (qemu process) uses the images on
 other storage (new).
 
Sounds like there are some other problems in your case; wrong DB entries for the
image <-> snapshot mapping? I didn't investigate further on the VM for which this
process failed; I went straight ahead and exported it.


  in cases where the live merge failed, offline merging worked in another
  50%. those which fail offline, too went back to illegal snap state

 I fear offline merge would cause data corruption. Because if I shut down
 the VM, the information in Engine database is still wrong. Engine thinks
 image files and snapshots are on old storage. But VM has written to the
 equal named image files on new storage. And offline merge might use the
 old files on old storage.
 
Then your initial plan is an alternative. Do you use thin or raw images, and on
what kind of storage domain? But as I said, manual processing is a pain due to the
symlink mess.


other workaround is through exporting the image onto a nfs export
domain, here you can tell the engine to not export snapshots. after
re-importing everything is fine

 Same issue as with offline merge.

 Meanwhile I think, we need to shut down the VM, copy the image files
 from one storage (qemu has used before) to the other storage (the one
 Engine expects) and pray while starting the VM again.

the snapshot feature (live at least) should be avoided at all
currently simply not reliable enaugh.
your way works, too. already did that, even it was a pita to figure out
where to find what. this symlinking mess between /rhev /dev and
/var/lib/libvirt is really awesome. not.
 Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:


 Hello,

 if no one has an idea how to correct the Disk/Snapshot paths in
  Engine
 database, I see only one possible way to solve the issue:

 Stop the VM and copy image/meta files target storage to source
  storage
 (the one where Engine thinks the files are located). Start the VM.

 Any concerns regarding this procedure? But I still hope that someone
 from oVirt team can give an advice how to correct the database
  entries.
 If necessary I would open a bug in Bugzilla.

 Kind regards

 Jan Siml

  after a failed live storage migration (cause unknown) we have a
  snapshot which is undeletable due to its status 'illegal' (as seen
  in storage/snapshot tab). I have already found some bugs
  [1],[2],[3]
  regarding this issue, but no way how to solve the issue within
  oVirt
   3.5.3.
 
  I have attached the relevant engine.log snippet. Is there any
  way to
  do a live merge (and therefore delete the snapshot)?
 
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
  [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links
  to [3]
  [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no
  access)
 
  some additional informations. I have checked the images on both
storages
  and verified the disk paths with virsh's dumpxml.
 
  a) The images and snapshots are on both storages.
  b) The images on source storage aren't used. (modification time)
  c) The images on target storage are used. (modification time)
  d) virsh -r dumpxml tells me disk images are located on _target_
storage.
  e) Admin interface tells me, that images and snapshot are
  located on
  _source_ storage, which isn't true, see b), c) and d).
 
  What can we do, to solve this issue? Is this to be corrected in
database
  only?

 Kind regards

 Jan Siml
___
Users mailing list
Users@ovirt.org

Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread Jan Siml

Hello,


   got exactly the same issue, with all nice side effects like performance
   degradation. Until now i was not able to fix this, or to fool the
engine
   somehow that it whould show the image as ok again and give me a 2nd
   chance to drop the snapshot.
   in some cases this procedure helped (needs 2nd storage domain)
   - image live migration to a different storage domain (check which
   combinations are supported, iscsi - nfs domain seems unsupported.
iscsi
   - iscsi works)
   - snapshot went into ok state, and in ~50% i was able to drop the
   snapshot than. space had been reclaimed, so seems like this worked
 
  okay, seems interesting. But I'm afraid of not knowing which image files
  Engine uses when live migration is demanded. If Engine uses the ones
  which are actually used and updates the database afterwards -- fine. But
  if the images are used that are referenced in Engine database, we will
  take a journey into the past.
knocking on wood. so far no problems, and i used this way for sure 50
times +


This doesn't work. The Engine creates the snapshots on the wrong (old) storage
and this process fails, because the VM (qemu process) uses the images on the
other (new) storage.



in cases where the live merge failed, offline merging worked in another
50%. those which fail offline, too went back to illegal snap state


I fear an offline merge would cause data corruption. Because if I shut down
the VM, the information in the Engine database is still wrong. The Engine thinks
the image files and snapshots are on the old storage. But the VM has written to the
identically named image files on the new storage. And an offline merge might use
the old files on the old storage.



   other workaround is through exporting the image onto a nfs export
   domain, here you can tell the engine to not export snapshots. after
   re-importing everything is fine


Same issue as with offline merge.

Meanwhile I think we need to shut down the VM, copy the image files
from the one storage (which qemu has used before) to the other storage (the one
the Engine expects) and pray while starting the VM again.



   the snapshot feature (live at least) should be avoided at all
   currently simply not reliable enaugh.
   your way works, too. already did that, even it was a pita to figure out
   where to find what. this symlinking mess between /rhev /dev and
   /var/lib/libvirt is really awesome. not.
Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:
   
   
Hello,
   
if no one has an idea how to correct the Disk/Snapshot paths in
Engine
database, I see only one possible way to solve the issue:
   
Stop the VM and copy image/meta files target storage to source
storage
(the one where Engine thinks the files are located). Start the VM.
   
Any concerns regarding this procedure? But I still hope that someone
from oVirt team can give an advice how to correct the database
entries.
If necessary I would open a bug in Bugzilla.
   
Kind regards
   
Jan Siml
   
 after a failed live storage migration (cause unknown) we have a
 snapshot which is undeletable due to its status 'illegal' (as seen
 in storage/snapshot tab). I have already found some bugs
[1],[2],[3]
 regarding this issue, but no way how to solve the issue within
oVirt
  3.5.3.

 I have attached the relevant engine.log snippet. Is there any
way to
 do a live merge (and therefore delete the snapshot)?

 [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links
to [3]
 [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no
access)

 some additional informations. I have checked the images on both
   storages
 and verified the disk paths with virsh's dumpxml.

 a) The images and snapshots are on both storages.
 b) The images on source storage aren't used. (modification time)
 c) The images on target storage are used. (modification time)
 d) virsh -r dumpxml tells me disk images are located on _target_
   storage.
 e) Admin interface tells me, that images and snapshot are
located on
 _source_ storage, which isn't true, see b), c) and d).

 What can we do, to solve this issue? Is this to be corrected in
   database
 only?


Kind regards

Jan Siml
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Trying hosted-engine on ovirt-3.6 beta

2015-08-28 Thread Joop
Hi All,

I have been trying the above and keep getting an error at the end about being
unable to write to HEConfImage; see the attached log.

The host is Fedora 22 (a clean system), the engine is CentOS 7.1. I followed the
readme from the 3.6 beta release notes; in short:
- setup a nfs server on the fedora22 host
- exported /nfs/ovirt-he/data
- installed yum, installed the 3.6 beta repo
- installed hosted engine
- ran setup
- installed centos7.1, ran engine-setup

Tried with and without selinux/iptables/firewalld.
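
For reference, a minimal NFS export setup of the kind described above usually
looks something like the following; this is a sketch based on the general oVirt
NFS storage requirements, not taken from the attached log, and the export
options are assumptions:

  mkdir -p /nfs/ovirt-he/data
  chown 36:36 /nfs/ovirt-he/data        # vdsm:kvm
  echo '/nfs/ovirt-he/data *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)' >> /etc/exports
  exportfs -ra
  systemctl enable nfs-server
  systemctl start nfs-server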

Regards,

Joop





ovirt-hosted-engine-setup-20150828162548-z5m9zc.log.gz
Description: application/gzip
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt/Gluster

2015-08-28 Thread Sander Hoentjen



On 08/21/2015 06:12 PM, Ravishankar N wrote:



On 08/21/2015 07:57 PM, Sander Hoentjen wrote:

Maybe I should formulate some clear questions:
1) Am I correct in assuming that an issue on one of 3 gluster nodes
should not cause downtime for VMs on other nodes?


From what I understand, yes. Maybe the ovirt folks can confirm. I can 
tell you this much for sure: If you create a replica 3 volume using 3 
nodes, mount the volume locally on each node, and bring down one node, 
the mounts from the other 2 nodes *must* have read+write access to the 
volume.
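
A minimal sketch of that check with the gluster CLI; host names and brick paths
are placeholders:

  # on one node: create and start a replica 3 volume across the three nodes
  gluster volume create testvol replica 3 \
      node1:/bricks/testvol node2:/bricks/testvol node3:/bricks/testvol
  gluster volume start testvol
  # on each node: mount the volume locally
  mount -t glusterfs localhost:/testvol /mnt/testvol
  # bring one node down, then on the two remaining nodes:
  touch /mnt/testvol/write-test   # should still succeed while quorum is intact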




2) What can I/we do to fix the issue I am seeing?
3) Can anybody else reproduce my issue?

I'll try and see if I can.


Hi Ravi,

Did you get around to this by any chance? This is a blocker issue for
us. Apart from that, has anybody else had any success using
gluster reliably as an oVirt storage solution?


Regards,
Sander
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread Jan Siml
Hello,

got exactly the same issue, with all nice side effects like
 performance
degradation. Until now i was not able to fix this, or to fool the
  engine
somehow that it whould show the image as ok again and give me a 2nd
chance to drop the snapshot.
in some cases this procedure helped (needs 2nd storage domain)
- image live migration to a different storage domain (check which
combinations are supported, iscsi - nfs domain seems unsupported.
  iscsi
- iscsi works)
- snapshot went into ok state, and in ~50% i was able to drop the
snapshot than. space had been reclaimed, so seems like this worked
  
   okay, seems interesting. But I'm afraid of not knowing which image
 files
   Engine uses when live migration is demanded. If Engine uses the ones
   which are actually used and updates the database afterwards --
 fine. But
   if the images are used that are referenced in Engine database, we will
   take a journey into the past.
  knocking on wood. so far no problems, and i used this way for sure 50
  times +

 This doesn't work. Engine creates the snapshots on wrong storage (old)
 and this process fails, cause the VM (qemu process) uses the images on
 other storage (new).
  
 sounds like there are some other problems in your case, wrong db entries
 image - snapshot? i didnt investigate further in the vm which failed
 this process, i directly went further and exported them

Yes, the engine thinks the image and snapshot are on storage a, but the qemu
process uses identically named images on storage b.

It seems to me that the first live storage migration was successful on the qemu
level, but the engine hasn't updated the database entries.

Correcting the database entries seems to be a possible solution, but I'm
not familiar with the oVirt schema and won't even try it without
advice from oVirt developers.
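
For anyone who wants to inspect (read-only) where the engine believes a disk's
images live, a query along the following lines is a starting point; the table
and column names are assumptions based on the 3.5-era engine schema and should
be verified against your own database before relying on them:

  # run as the postgres user on the engine host; read-only, placeholder UUID
  psql engine -c "SELECT i.image_guid, i.image_group_id, m.storage_domain_id
                    FROM images i
                    JOIN image_storage_domain_map m ON m.image_id = i.image_guid
                   WHERE i.image_group_id = '<disk-image-group-uuid>';"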

  in cases where the live merge failed, offline merging worked in another
  50%. those which fail offline, too went back to illegal snap state

 I fear offline merge would cause data corruption. Because if I shut down
 the VM, the information in Engine database is still wrong. Engine thinks
 image files and snapshots are on old storage. But VM has written to the
 equal named image files on new storage. And offline merge might use the
 old files on old storage.
  
 than your initial plan is an alternative. you use thin or raw on what
 kind of storage domain? but like said, manually processing is a pita due
 to the symlink mess.

We are using raw images which are thin provisioned on NFS-based storage
domains. On storage b I can see a qcow-formatted image file which qemu
uses, and the original (raw) image which is now its backing file.
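
A quick way to confirm such a chain on the NFS mount is qemu-img; the path below
is a placeholder, not a value from this thread:

  qemu-img info --backing-chain \
      /rhev/data-center/mnt/<server:_export>/<domain-uuid>/images/<image-group-uuid>/<image-uuid>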

other workaround is through exporting the image onto a nfs export
domain, here you can tell the engine to not export snapshots. after
re-importing everything is fine

 Same issue as with offline merge.

 Meanwhile I think, we need to shut down the VM, copy the image files
 from one storage (qemu has used before) to the other storage (the one
 Engine expects) and pray while starting the VM again.

the snapshot feature (live at least) should be avoided at all
currently simply not reliable enaugh.
your way works, too. already did that, even it was a pita to
 figure out
where to find what. this symlinking mess between /rhev /dev and
/var/lib/libvirt is really awesome. not.
 Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:


 Hello,

 if no one has an idea how to correct the Disk/Snapshot paths in
  Engine
 database, I see only one possible way to solve the issue:

 Stop the VM and copy image/meta files target storage to source
  storage
 (the one where Engine thinks the files are located). Start the VM.

 Any concerns regarding this procedure? But I still hope that
 someone
 from oVirt team can give an advice how to correct the database
  entries.
 If necessary I would open a bug in Bugzilla.

 Kind regards

 Jan Siml

  after a failed live storage migration (cause unknown) we have a
  snapshot which is undeletable due to its status 'illegal'
 (as seen
  in storage/snapshot tab). I have already found some bugs
  [1],[2],[3]
  regarding this issue, but no way how to solve the issue within
  oVirt
   3.5.3.
 
  I have attached the relevant engine.log snippet. Is there any
  way to
  do a live merge (and therefore delete the snapshot)?
 
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1213157
  [2] https://bugzilla.redhat.com/show_bug.cgi?id=1247377 links
  to [3]
  [3] https://bugzilla.redhat.com/show_bug.cgi?id=1247379 (no
  access)
 
  some additional informations. I have checked the images on both
storages
  and verified the disk paths with virsh's dumpxml.
 
  a) The images and snapshots are on both storages.
  b) The images on source storage aren't used. 

Re: [ovirt-users] Delete snapshot with status illegal - live merge not possible

2015-08-28 Thread InterNetX - Juergen Gotteswinter

 Jan Siml js...@plusline.net wrote on 28 August 2015 at 19:52:


 Hello,

 got exactly the same issue, with all nice side effects like
  performance
 degradation. Until now i was not able to fix this, or to fool the
   engine
 somehow that it whould show the image as ok again and give me a 2nd
 chance to drop the snapshot.
 in some cases this procedure helped (needs 2nd storage domain)
 - image live migration to a different storage domain (check which
 combinations are supported, iscsi - nfs domain seems unsupported.
   iscsi
 - iscsi works)
 - snapshot went into ok state, and in ~50% i was able to drop the
 snapshot than. space had been reclaimed, so seems like this worked
   
okay, seems interesting. But I'm afraid of not knowing which image
  files
Engine uses when live migration is demanded. If Engine uses the ones
which are actually used and updates the database afterwards --
  fine. But
if the images are used that are referenced in Engine database, we will
take a journey into the past.
   knocking on wood. so far no problems, and i used this way for sure 50
   times +
 
  This doesn't work. Engine creates the snapshots on wrong storage (old)
  and this process fails, cause the VM (qemu process) uses the images on
  other storage (new).
 
  sounds like there are some other problems in your case, wrong db entries
  image - snapshot? i didnt investigate further in the vm which failed
  this process, i directly went further and exported them

 Yes, engine thinks image and snapshot are on storage a, but qemu process
 uses equal named images on storage b.

 It seems to me, that first live storage migration was successful on qemu
 level, but engine hasn't updated the database entries.

 Seems to be a possible solution to correct the database entries, but I'm
 not familar with the oVirt schema and won't even try it without an
 advice from oVirt developers.
 
   in cases where the live merge failed, offline merging worked in another
   50%. those which fail offline, too went back to illegal snap state
 
  I fear offline merge would cause data corruption. Because if I shut down
  the VM, the information in Engine database is still wrong. Engine thinks
  image files and snapshots are on old storage. But VM has written to the
  equal named image files on new storage. And offline merge might use the
  old files on old storage.
 
  than your initial plan is an alternative. you use thin or raw on what
  kind of storage domain? but like said, manually processing is a pita due
  to the symlink mess.

 We are using raw images which are thin provisioned on NFS based storage
 domains. On storage b I can see an qcow formatted image file which qemu
 uses and the original (raw) image which is now backing file.

 
It might sound a little curious, but IMHO this is the best setup for your plan.
Thin on iSCSI is a totally different story... LVM volumes which get extended on
demand (which fails with default settings during heavy writes and causes the VM to
pause); additionally, oVirt writes qcow images raw onto those LVs. Since
you can get your hands directly on the image files, this would be my preferred
workaround. But maybe one of the oVirt devs has a better idea/solution?
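
As an aside, the on-demand LV extension mentioned above is governed by vdsm
settings that are commonly tuned when VMs pause during heavy writes; the option
names below are believed to be the relevant ones in /etc/vdsm/vdsm.conf, and the
values are purely illustrative assumptions:

  # /etc/vdsm/vdsm.conf (restart vdsmd afterwards)
  [irs]
  # extend the LV earlier and in bigger chunks than the defaults
  volume_utilization_percent = 25
  volume_utilization_chunk_mb = 2048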
 
 other workaround is through exporting the image onto a nfs export
 domain, here you can tell the engine to not export snapshots. after
 re-importing everything is fine
 
  Same issue as with offline merge.
 
  Meanwhile I think, we need to shut down the VM, copy the image files
  from one storage (qemu has used before) to the other storage (the one
  Engine expects) and pray while starting the VM again.

 the snapshot feature (live at least) should be avoided at all
 currently simply not reliable enaugh.
 your way works, too. already did that, even it was a pita to
  figure out
 where to find what. this symlinking mess between /rhev /dev and
 /var/lib/libvirt is really awesome. not.
  Jan Siml js...@plusline.net wrote on 28 August 2015 at 12:56:
 
 
  Hello,
 
  if no one has an idea how to correct the Disk/Snapshot paths in
   Engine
  database, I see only one possible way to solve the issue:
 
  Stop the VM and copy image/meta files target storage to source
   storage
  (the one where Engine thinks the files are located). Start the VM.
 
  Any concerns regarding this procedure? But I still hope that
  someone
  from oVirt team can give an advice how to correct the database
   entries.
  If necessary I would open a bug in Bugzilla.
 
  Kind regards
 
  Jan Siml
 
   after a failed live storage migration (cause unknown) we have a
   snapshot which is undeletable due to its status 'illegal'
  (as seen
   in storage/snapshot tab). I have already found some bugs
   [1],[2],[3]
   regarding this issue, but no way how to solve the issue within
  

[ovirt-users] New storage domain on nfs share

2015-08-28 Thread gregor
Hi,

what is the right way to create a storage domain on an NFS share to use as data
storage for virtual machines?

I am only able to create an ISO/NFS or Export/NFS storage domain, where
I cannot create disks for a virtual machine.

cheers
gregor
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users