Re: [Users] Problems when trying to delete a snapshot

2012-12-31 Thread Ricky Schneberger

Hi,
I recovered from this error by importing my base image into a new machine
and restoring from the backups.

But is it possible to merge the latest snapshot into the base image by
hand, to get a new VM up and running with the old disk image?

I have tried with qemu-img but had no luck with it.
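
What I am trying to do by hand is roughly this (a sketch only; BASE_UUID and
SNAP_UUID are placeholders for the actual volume files in the image directory,
and the commands have to be run from that directory so the backing-file
reference resolves):

  cd /path/to/domain/images/IMAGE_UUID   # placeholder path on the storage domain
  qemu-img info SNAP_UUID                # should list BASE_UUID as its backing file
  # collapse the whole chain into one standalone image
  qemu-img convert -O qcow2 SNAP_UUID /tmp/merged-disk.qcow2
  # or, alternatively, fold the snapshot back into the base image in place
  # qemu-img commit SNAP_UUID

Is that the right idea, or is there oVirt-side metadata that also needs fixing?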

Regards //Ricky


On 2012-12-30 16:57, Haim Ateya wrote:
 Hi Ricky,
 
 From going over your logs, it seems like the create-snapshot operation
 failed; it is logged clearly in both the engine and vdsm logs [1]. Did you
 try to delete this snapshot or was it a different one? If so, I'm not sure
 it's worth debugging.
 
 bee7-78e7d1cbc201, vmId=d41b4ebe-3631-4bc1-805c-d762c636ca5a), log id: 46d21393
 2012-12-13 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-50) [12561529] Failed in SnapshotVDS method
 2012-12-13 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-50) [12561529] Error code SNAPSHOT_FAILED and error message VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
 2012-12-13 10:40:24,372 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-50) [12561529] Command org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand return value
 Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
 mStatus   Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
 mCode     48
 mMessage  Snapshot failed

 enter/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/master/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed temp /rhev/data-center/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd/master/tasks/21cbcc25-7672-4704-a414-a44f5e9944ed.temp
 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14 10:48:41,189::volume::492::Storage.Volume::(create) Unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/volume.py", line 475, in create
     srcVolUUID, imgPath, volPath)
   File "/usr/share/vdsm/storage/fileVolume.py", line 138, in _create
     oop.getProcessPool(dom.sdUUID).createSparseFile(volPath, sizeBytes)
   File "/usr/share/vdsm/storage/remoteFileHandler.py", line 277, in callCrabRPCFunction
     *args, **kwargs)
   File "/usr/share/vdsm/storage/remoteFileHandler.py", line 195, in callCrabRPCFunction
     raise err
 IOError: [Errno 27] File too large
 
 2012-12-13 10:40:24,372 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-5-thread-50) [12561529] Vds: virthost01
 2012-12-13 10:40:24,372 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-5-thread-50) [12561529] Command SnapshotVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed
 2012-12-13 10:40:24,373 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SnapshotVDSCommand] (pool-5-thread-50) [12561529] FINISH, SnapshotVDSCommand, log id: 46d21393
 2012-12-13 10:40:24,373 ERROR [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-5-thread-50) [12561529] Wasnt able to live snpashot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed, rolling back.
 2012-12-13 10:40:24,376 ERROR [org.ovirt.engine.core.bll.CreateSnapshotCommand] (pool-5-thread-50) [4fd6c4e4] Ending command with failure: org.ovirt.engine.core.bll.CreateSnapshotCommand
 2012-12-13 1
 
 21cbcc25-7672-4704-a414-a44f5e9944ed::ERROR::2012-12-14 10:48:41,196::task::833::TaskManager.Task::(_setError) Task=`21cbcc25-7672-4704-a414-a44f5e9944ed`::Unexpected error
 Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 840, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/storage/task.py", line 307, in run
     return self.cmd(*self.argslist, **self.argsdict)
   File "/usr/share/vdsm/storage/securable.py", line 68, in wrapper
     return f(self, *args, **kwargs)
   File "/usr/share/vdsm/storage/sp.py", line 1903, in createVolume
     srcImgUUID=srcImgUUID, srcVolUUID=srcVolUUID)
   File "/usr/share/vdsm/storage/fileSD.py", line 258, in createVolume
     volUUID, desc, srcImgUUID, srcVolUUID)
   File "/usr/share/vdsm/storage/volume.py", line 494, in create
     (volUUID, e))
 VolumeCreationError: Error creating a new volume: ('Volume creation 6da02c1e-5ef5-4fab-9ab2-bb081b35e7b3 failed: [Errno 27] File too large',)
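
 For what it's worth, errno 27 is EFBIG ("File too large"), so the sparse
 file vdsm tried to create for the snapshot is most likely larger than what
 the filesystem backing the storage domain allows. A quick way to see which
 filesystem is behind that path (path taken from the log above):

   # show the filesystem type and size backing the master storage domain
   df -T /rhev/data-center/6d91788c-99d9-11e1-b913-78e7d1cbc201/mastersd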
 
 
 
 - Original Message -
 From: Ricky Schneberger ri...@schneberger.se
 To: Haim Ateya hat...@redhat.com
 Cc: users@ovirt.org
 Sent: Thursday, December 20, 2012 5:52:10 PM
 Subject: Re: [Users] Problems when trying to delete a snapshot
 
 Hi, the task did not finish, but it broke my VM. What I have right now is a
 VM with a base image and a snapshot that I need to merge together so I can
 import the disk into a new VM.
 
 I have attached the logs and 

[Users] ovirt 3.1 fails to attach gluster storage volume to data center

2012-12-31 Thread Jithin Raju
Hi,

I have a new setup of oVirt 3.1 installed on CentOS 6.3; the node is CentOS
6.3 with Gluster 3.3.1.
I was able to create a gluster volume on the node and activate it, but
attaching it to the data center fails.
As per my chat with hat...@redhat.com I am submitting some of the traces
below:
[root@hedge /]# grep mount /var/log/vdsm/vdsm.log
Thread-671::DEBUG::2012-08-23
16:44:32,344::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/bin/mount -t glusterfs -o vers=3 hedge:/vol2
/rhev/data-center/mnt/hedge:_vol2' (cwd None)

[root@hedge /]#  grep connectStorageServer /var/log/vdsm/vdsm.log
Thread-671::INFO::2012-08-23
16:44:32,340::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=6,
spUUID='----', conList=[{'port': '',
'connection': 'hedge:/vol2', 'mnt_options': 'vers=3', 'portal': '', 'user':
'', 'iqn': '', 'vfs_type': 'glusterfs', 'password': '**', 'id':
'----'}], options=None)
Thread-671::INFO::2012-08-23
16:44:36,567::logUtils::39::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id':
'----'}]}

I am new to gluster, so I have most likely made some mistake. Any help is
appreciated.
Thanks,
Jithin
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Manage start/stop of oVirt infrastructure

2012-12-31 Thread Doron Fediuck
- Original Message -

 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: users users@ovirt.org
 Sent: Monday, December 31, 2012 1:40:53 AM
 Subject: [Users] Manage start/stop of oVirt infrastructure

 Hello,
 I'm going to manage two kinds of environments.
 1) All-in-one installed as an F18 guest (f18aio) on a laptop
 2) Engine as an F18 guest of a dedicated server + node as a separate
 physical server with F18 installed on it

 In particular, for 1) I have to start/stop the oVirt infrastructure almost
 daily, since it is collapsed into a single engine+node environment on
 f18aio.
 The laptop is an Asus U36SD with an SSD disk, 8 GB of RAM, and F17 as the
 OS. I'm verifying that, unlike some months ago, nested virtualization now
 works well on this Intel CPU:
 (2 cores + HT, Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz)

 In fact I was able to install this f18aio F18 guest with the all-in-one
 oVirt 3.2 nightly and create and run a Windows XP VM inside it.

 After solving other problems I have what seems to be a stable environment.

 My first approach to stop/start seems to work from a functional point of
 view, but I would like confirmation of possible missing steps:

 The starting point is the oVirt infra running with all its defined VMs
 running on it (only one atm)

 a) shutdown all vms in oVirt (only one atm)

 b) logout from webadmin portal

 c) simply shut down the OS of the F18 guest (f18aio)

 (without putting the node into maintenance or doing anything else)

 d) shutdown the laptop

 Then

 e) power on the laptop

 f) start the f18vm

 g) wait about 4 minutes until the local_datacenter passes from the
 Contending state to Up

 h) start the desired oVirt vms (one atm)

 I would also like to have clear, correct procedures for case 2), which
 should reflect production-like environments, so I know how to behave in
 case of planned maintenance / outages.

 Is it perhaps better in any case in 1) and 2) to first put the host
 in maintenance mode?

 Thanks to all for your continuous help, and Happy New year to you and
 to the Maya people ;-)

 Gianluca

Gianluca,
Since you have everything contained in a VM, why not suspend or hibernate
(save) it?
You can use virsh save, for example, to completely hibernate it. See the
syntax here:

http://docs.fedoraproject.org/en-US/Fedora/13/html/Virtualization_Guide/chap-Virtualization-Managing_guests_with_virsh.html
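
For example, a rough sketch (assuming the all-in-one guest shows up in virsh
as a domain named f18aio and that there is enough disk space for the save
file):

  # save the guest's memory state to disk and stop it
  virsh save f18aio /var/lib/libvirt/images/f18aio.save
  # ... later, bring it back exactly where it left off
  virsh restore /var/lib/libvirt/images/f18aio.save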
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] ovirt 3.1 fails to attach gluster storage volume to data center

2012-12-31 Thread Balamurugan Arumugam

Hi Jithin,


- Original Message -
 From: Jithin Raju rajuj...@gmail.com
 To: users@ovirt.org
 Sent: Monday, December 31, 2012 3:04:54 PM
 Subject: [Users] ovirt 3.1 fails to attach gluster storage volume to data 
 center
 
 
 
 Hi,
 
 
 I have new setup for ovirt 3.1 installed on centos 6.3. Node centos
 6.3,gluster 3.3.1.
 I was able to create gluster volume on node and activate but
 attaching it to data center fails.

My understanding is that you are trying to use a gluster volume as a storage
domain (correct me if I am wrong).  Could you give the following details here?

1. Describe the flow you followed in the UI.  Adding a screenshot of the
   error would help.
2. Attach the vdsm log from all nodes and the engine log from the oVirt engine.
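3. If you can, also try the same mount by hand on the node to rule out a
   plain gluster mount problem, e.g. (the mount point below is just an
   example):

     mkdir -p /mnt/gluster-test
     mount -t glusterfs hedge:/vol2 /mnt/gluster-test

   Your vdsm log shows the mount was attempted with '-o vers=3'; it is worth
   checking whether the manual mount behaves any differently with and
   without that option.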


 As per my chat with hat...@redhat.com I am submitting some of the
 traces below:
 
 [root@hedge /]# grep mount /var/log/vdsm/vdsm.log
 Thread-671::DEBUG::2012-08-23
 16:44:32,344::__init__::1164::Storage.Misc.excCmd::(_log)
 '/usr/bin/sudo -n /bin/mount -t glusterfs -o vers=3 hedge:/vol2
 /rhev/data-center/mnt/hedge:_vol2' (cwd None)
 
 
 
 [root@hedge /]# grep connectStorageServer /var/log/vdsm/vdsm.log
 Thread-671::INFO::2012-08-23
 16:44:32,340::logUtils::37::dispatcher::(wrapper) Run and protect:
 connectStorageServer(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': 'hedge:/vol2', 'mnt_options': 'vers=3', 'portal': '',
 'user': '', 'iqn': '', 'vfs_type': 'glusterfs', 'password':
 '**', 'id': '----'}],
 options=None)
 Thread-671::INFO::2012-08-23
 16:44:36,567::logUtils::39::dispatcher::(wrapper) Run and protect:
 connectStorageServer, Return response: {'statuslist': [{'status': 0,
 'id': '----'}]}
 
 
 I am new to gluster so mostly I would have made some mistake.Any help
 is appreciated.

Sure.  Welcome to gluster!


Regards,
Bala
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] unable to refresh vds

2012-12-31 Thread Jonathan Horne
I get this repeatedly in my engine.log:

2012-12-31 12:16:27,241 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-29) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:29,251 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-75) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:31,262 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-12) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:33,272 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-66) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:

That's my 4th node, and it does this about every 15 minutes or so. Then a 
moment later it tries to fence that server (which also fails, but that is 
probably not related to the subject of this thread). Only this one node seems 
to be doing this.

Is this a problem I need to run down, or can it be ignored? The fact that it 
tries to (unsuccessfully) fence the node has me a little worried.

thanks,
jonathan




This is a PRIVATE message. If you are not the intended recipient, please delete 
without copying and kindly advise us by e-mail of the mistake in delivery. 
NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to 
any order or other contract unless pursuant to explicit written agreement or 
government initiative expressly permitting the use of e-mail for such purpose.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] unable to refresh vds

2012-12-31 Thread Jonathan Horne
Never mind… IP conflict. I fail system admin 101.

From: Jonathan Horne jho...@skopos.usmailto:jho...@skopos.us
Date: Monday, December 31, 2012 12:20 PM
To: users@ovirt.orgmailto:users@ovirt.org 
users@ovirt.orgmailto:users@ovirt.org
Subject: [Users] unable to refresh vds

i get this repeatedly in my engine.log:

2012-12-31 12:16:27,241 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-29) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:29,251 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-75) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:31,262 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-12) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:
2012-12-31 12:16:33,272 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(QuartzScheduler_Worker-66) ResourceManager::refreshVdsRunTimeInfo::Failed to 
refresh VDS , vds = f5f38576-5365-11e2-a443-0022191dcf23 : d0lppn034, VDS 
Network Error, continuing.
VDSNetworkException:

thats my 4th node, and it does it bout every 15 minutes or so.  and then a 
moment later it tries to fence that server (which also fails, but probably not 
related to the subject of this thread).  only this one node seems to be doing 
this.

is this a problem i need to run down, or can it be ignored?  that fact that it 
tries to (unsuccessfully) fence the node has me a little worried.

thanks,
jonathan





This is a PRIVATE message. If you are not the intended recipient, please delete 
without copying and kindly advise us by e-mail of the mistake in delivery. 
NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to 
any order or other contract unless pursuant to explicit written agreement or 
government initiative expressly permitting the use of e-mail for such purpose.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] cannot upload iso to storage domain

2012-12-31 Thread sirin
Hi, all

cat /etc/redhat-release 
Fedora release 17 (Beefy Miracle)

ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch


[root@vm-sm home]# rpm -qa | grep ovirt
ovirt-engine-backend-3.1.0-4.fc17.noarch
ovirt-engine-genericapi-3.1.0-4.fc17.noarch
ovirt-iso-uploader-3.1.0-0.git1841d9.fc17.noarch
ovirt-engine-setup-3.1.0-4.fc17.noarch
ovirt-log-collector-3.1.0-0.git10d719.fc17.noarch
ovirt-engine-notification-service-3.1.0-4.fc17.noarch
ovirt-engine-dbscripts-3.1.0-4.fc17.noarch
ovirt-engine-3.1.0-4.fc17.noarch
ovirt-release-fedora-5-2.noarch
ovirt-engine-config-3.1.0-4.fc17.noarch
ovirt-engine-restapi-3.1.0-4.fc17.noarch
ovirt-engine-tools-common-3.1.0-4.fc17.noarch
ovirt-engine-sdk-3.2.0.2-1.fc17.noarch
ovirt-engine-userportal-3.1.0-4.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-4.fc17.noarch
ovirt-image-uploader-3.1.0-0.git9c42c8.fc17.noarch
[root@vm-sm home]# rpm -qa | grep engine
ovirt-engine-backend-3.1.0-4.fc17.noarch
ovirt-engine-genericapi-3.1.0-4.fc17.noarch
ovirt-engine-setup-3.1.0-4.fc17.noarch
ovirt-engine-notification-service-3.1.0-4.fc17.noarch
ovirt-engine-dbscripts-3.1.0-4.fc17.noarch
ovirt-engine-3.1.0-4.fc17.noarch
ovirt-engine-config-3.1.0-4.fc17.noarch
ovirt-engine-restapi-3.1.0-4.fc17.noarch
ovirt-engine-tools-common-3.1.0-4.fc17.noarch
ovirt-engine-sdk-3.2.0.2-1.fc17.noarch
ovirt-engine-userportal-3.1.0-4.fc17.noarch
ovirt-engine-webadmin-portal-3.1.0-4.fc17.noarch

BUGS ?!

[root@vm-sm home]# engine-iso-uploader list -v
ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL connection.
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: list
DEBUG: Traceback (most recent call last):
DEBUG:   File "/usr/bin/engine-iso-uploader", line 931, in <module>
DEBUG:     isoup = ISOUploader(conf)
DEBUG:   File "/usr/bin/engine-iso-uploader", line 331, in __init__
DEBUG:     self.list_all_ISO_storage_domains()
DEBUG:   File "/usr/bin/engine-iso-uploader", line 381, in list_all_ISO_storage_domains
DEBUG:     if not self._initialize_api():
DEBUG:   File "/usr/bin/engine-iso-uploader", line 358, in _initialize_api
DEBUG:     password=self.configuration.get("passwd"))
DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/api.py", line 78, in __init__
DEBUG:     debug=debug
DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/infrastructure/connectionspool.py", line 47, in __init__
DEBUG:     debug=debug))
DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 38, in __init__
DEBUG:     timeout=timeout)
DEBUG:   File "/usr/lib/python2.7/site-packages/ovirtsdk/web/connection.py", line 102, in __createConnection
DEBUG:     raise NoCertificatesError
DEBUG: NoCertificatesError: [ERROR]::ca_file (CA certificate) must be specified for SSL connection.
[root@vm-sm home]# 
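
Shouldn't the uploader find the engine CA on its own? From the traceback it
just needs a ca_file handed to the SDK, so presumably pointing it at the
engine CA explicitly would get past this. A rough, unverified sketch (the key
name is taken from the error text, and the paths are the stock engine
defaults; check engine-iso-uploader -h and the conf file for the exact
spelling):

  # untested guess: tell the uploader where the engine CA certificate lives
  echo "ca_file=/etc/pki/ovirt-engine/ca.pem" >> /etc/ovirt-engine/isouploader.conf
  engine-iso-uploader list -v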

Artem
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Fwd: Power outage = storage outage = ovirt crash with mismatched master storage domain?

2012-12-31 Thread Ian Forde
Thanks Haim.  Sorry it took so long to get back to the list.  Firstly, I
ended up sleeping on the problem, then realizing that it boils down to this:

- All VMs are down at this point
- All storage domains in the DC were listed as down
- I've got 3 hypervisors (2 of which I can put into maintenance mode)
- The lone hypervisor has the right idea of what the master storage domain is
  supposed to be, and the database is wrong.  So I stopped ovirt-engine,
  backed up the database to a sql file, and found the section that started
  with:

-- TOC entry 3717 (class 0 OID 12577180)
-- Dependencies: 193
-- Data for Name: storage_domain_static; Type: TABLE DATA; Schema: public;
Owner: engine
--

COPY storage_domain_static (id, storage, storage_name, storage_domain_type,
storage_type, storage_domain_format_type, _create_date, _update_date,
recoverable, last_time_used_as_master) FROM stdin;

At that point I saw that BV4Pool1 had a storage_domain_type value of 0, and
BV4Data-Master had a value of 1.  I flipped them, restored the database
(dropping the existing db), and rebooted the lone non-maintenance
hypervisor.  Then I started ovirt-engine and everything started to look
rosy.
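
For the record, the same flip can be done in place with psql instead of a
full dump/edit/restore. Roughly (the domain names and the 0/1 values are the
ones from my own dump, and the engine database is simply called "engine"
here; stop the engine and verify your own values before touching anything):

  service ovirt-engine stop
  # make BV4Data-Master the master domain (0) and demote BV4Pool1 to a data domain (1)
  sudo -u postgres psql -d engine -c "UPDATE storage_domain_static SET storage_domain_type = 0 WHERE storage_name = 'BV4Data-Master';"
  sudo -u postgres psql -d engine -c "UPDATE storage_domain_static SET storage_domain_type = 1 WHERE storage_name = 'BV4Pool1';"
  service ovirt-engine start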

If you do still, however, want logs, I'd be happy to provide them!

  -I

On Sun, Dec 30, 2012 at 8:30 AM, Haim Ateya hat...@redhat.com wrote:

 Hi Ian,

 I would really like to understand how it happened. There are several ways
 to fix this issue; please send us the engine and vdsm logs for a start.
 As for fixing the issue, we will probably have to perform some manual
 intervention on either the engine database (preferable) or the storage
 metadata.
 It's a bit complex, and might cause other things to fail if not done
 carefully, so it's your call.


 - Original Message -
  From: Ian Forde ianfo...@gmail.com
  To: users@ovirt.org
  Sent: Sunday, December 30, 2012 12:31:31 PM
  Subject: [Users] Fwd: Power outage = storage outage = ovirt crash with
 mismatched master storage domain?
 
 
  (resend, as it might have tried to go through before my subscription
  to the list was active...)
 
 
  -- Forwarded message --
  From: Ian Forde  ianfo...@gmail.com 
  Date: Sun, Dec 30, 2012 at 1:57 AM
  Subject: Power outage = storage outage = ovirt crash with mismatched
  master storage domain?
  To: users@ovirt.org
 
 
  Hi all -
 
 
  I'm running Ovirt from the dreyou packages (ovirt-engine 3.1.0-3.26)
  on CentOS 6.3, with 3 hypervisors all running CentOS 6.3 with
  dreyou-packaged vdsm 4.10.1-0.77.20 running. Storage is a Synology
  1812+.
 
 
  And tonight I had a series of power events that affected the NAS.
  (So much for the small UPS. Storage has now been moved to a bigger
  UPS.) Anyway - it turns out that my storage went down 3 times while
  I was out. The NAS recovered. Ovirt didn't.
 
 
  I've tried putting boxes into maintenance mode. I've tried reboots
  and confirm reboot. I've even shut everything down and brought it
  back up from scratch. What I'm facing now is a message in the GUI
  that states:
 
 
  Sync Error on Master Domain between Host bv-hv01 and oVirt Engine.
  Domain BV4Pool1 is marked as Master in oVirt Engine database but not
  on the Storage side. Please consult with Support on how to fix this
  issue.
 
 
  Now, what was *supposed* to be my master storage domain is named
  BV4Data-Master, and BV4Pool1 is where the VM disks are stored.
  (I'm not crazy about putting VM disks into the master pool.)
 
 
  So why does the database have the wrong entry and how do I fix it?
  Currently everything is *down*, given that the last time I tried to
  mess with oVirt innards I had to recreate my DC and restore all of
  the VMs by hand. Not something I'd like to do again...
 
 
  (This, of course, also means that I need to get proper oVirt database
  backups going...)
 
 
  Any ideas?
 
 
  -I
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users