> 
> Hi Boudewijn,
> if you have a db backup and you won't lose any data by using it - that would
> be the simplest approach.
> 
> Please read the following options carefully, and keep a backup before
> attempting any of them -
> for vm's that you don't have a space issue with, you can try the previously
> suggested approach, but it'll obviously take longer as it requires copying
> the data.
> 
> Option A -
> *doesn't require copying the disks
> *if your vms had snapshots involving their disks - it currently won't work.
> 
> let's try to restore a specific vm and continue from there - i'm adding the
> info here - if needed i'll test it on my own deployment.
> First, let's find out which disks are attached to the vm. Some options to do
> that:
> A. under the webadmin ui, select a vm listed under the "export" domain -
> there should be a disks tab indicating which disks are attached to the vm;
> check if you can see the disk id's there.
> B. query the storage domain content using the rest-api - afaik we don't
> return that info from there, so let's skip that option.
> C. inspect the OVF files directly:
> 1. under the storage domain's storage directory, enter the /vms directory -
> you should see a bunch of OVF files there; each OVF file contains one vm's
> configuration.
> 2. open the ovf file of the specific vm that we'll attempt to restore.
> *within that ovf file, look for the string "diskId" and copy those ids
> aside - these should be the vm's attached disks (see the grep sketch after
> these steps).
> *copy the vm disks from the other storage domain, and edit the metadata
> accordingly so that the proper storage domain id is listed
> *try to import the disks using the method specified here:  
> https://bugzilla.redhat.com/show_bug.cgi?id=886133
> *after this, you should see the disks as "floating"; then you can add the
> vm using the OVF file from step 2, using the method specified here:
> http://gerrit.ovirt.org/#/c/15894/
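> 
> a minimal sketch of pulling those disk ids out in one go - the
> export-domain path below is an assumption, adjust it to wherever yours is
> mounted:
> 
>   # OVFs typically live under <export-mount>/<sd-uuid>/master/vms/<vm-uuid>/
>   # (the exact layout is an assumption - check your own mount)
>   cd /path/to/export-domain/master/vms
>   # print every diskId attribute from every vm configuration file
>   grep -o 'diskId="[^"]*"' */*.ovf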
> 
> 
> Option B -
> *Replace the images' data files with blank files (a sketch follows below).
> *Initiate an import of the vm - it should obviously be really quick.
> *As soon as the import starts, you can either:
> 1. let the import finish and then replace the data - note that in that case
> the info saved in the engine db won't be correct (for example, the actual
> image size, etc.)
> 2. after the tasks for importing are created (you'll see that in the engine
> log), shut the engine down immediately (immediately means within a few
> seconds), and after the copy tasks complete on the host, replace the data
> files and then start the engine - that way, when the engine starts, it'll
> update the db information according to the replaced data files.
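> 
> a minimal sketch of the blank-file replacement from the first bullet -
> IMAGES_DIR is a placeholder for one image group directory under the export
> domain, and the .meta/.lease handling is an assumption about an NFS layout:
> 
>   for f in "$IMAGES_DIR"/*; do
>     case "$f" in *.meta|*.lease) continue ;; esac  # leave metadata alone
>     size=$(stat -c%s "$f")       # apparent size of the original
>     mv "$f" "$f.orig"            # keep the real data aside for later
>     truncate -s "$size" "$f"     # sparse blank file of the same size
>   done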
> 


Hi Guys,

Thank you for the detailed information. I'll indeed try to restore the
DB and make sure all the mounts I had when creating the DB dump are
present again too.


I also had a look in my old DB, which I've just restored:

engine=# select vm_name from vm_static;
     vm_name
-----------------
 Blank
 template
 mail
 nagios
 bacula
 debian-template
 jabber
 downloadbak
 vpn
(9 rows)



That's looking great. The most important VMs to restore are the
following (the rest can be re-created in about 2-3 hours, so having to
re-create those instead of restoring them would be okay-ish):

- bacula
- downloadbak

The problem is that those two are the VMs with the most disks attached.
I just had a look in the database dump for the VM IDs and found this:


> COPY disks_vm_map (history_id, vm_disk_id, vm_id, attach_date, detach_date) FROM stdin;
> 2	b2c5d2d5-636c-408b-b52f-b7f5558c0f7f	a16e4354-0c32-47c1-a01b-7131da3dbb6b	2014-01-21 02:32:58+01	\N
> 1	4ef54bf7-525b-4a73-b071-c6750fc7c907	33f78ede-e885-4636-bb0b-1021c31d1cca	2014-01-21 02:32:58+01	2014-01-21 18:52:00+01
> 5	38eee7d5-9fd1-44b0-876c-b24e4bc0085b	0b062e65-7b0f-4177-9e08-cba48230f89a	2014-01-22 00:02:01+01	\N
> 4	988f90f6-a37d-4dfd-8477-70aa5d2db5b6	0b062e65-7b0f-4177-9e08-cba48230f89a	2014-01-21 22:57:01+01	2014-01-22 00:02:01+01
> 6	88a7d07b-b4a3-497d-b2e5-3e6ebc85d83e	a466a009-cde7-40db-b3db-712b737eb64a	2014-01-22 00:37:01+01	\N
> 7	2cd8d3dc-e92f-4be5-88fa-923076aba287	c040505a-da58-4ee1-8e17-8e32b9765608	2014-01-22 00:46:01+01	\N
> 8	5e56a396-8deb-4c04-9897-0e4f6582abcc	45434b2f-2a79-4a13-812e-a4fd2f563947	2014-01-22 01:45:01+01	\N
> 9	caecf666-302d-426c-8a32-65eda8d9e5df	0edd5aea-3425-4780-8f54-1c84f9a87765	2014-01-22 19:42:02+01	\N
> 10	8633fb9b-9c08-406b-925e-7d5955912165	f45a4a7c-5db5-40c2-af06-230aa5f2b090	2014-01-22 19:57:02+01	\N
> 11	81b71076-be95-436b-9657-61890e81cee9	c040505a-da58-4ee1-8e17-8e32b9765608	2014-01-22 23:22:02+01	2014-01-30 02:09:09+01
> 12	924e5ba6-913e-4591-a15f-3b61eb66a2e1	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-01 20:42:12+01	2014-02-03 18:00:14+01
> 14	f613aa23-4831-4aba-806e-fb7dcdcd704d	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:05:14+01	\N
> 15	182ce48c-59d0-4883-8265-0269247d22e0	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:13:14+01	\N
> 16	cadcce7f-53ff-4735-b5ff-4d8fd1991d51	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:13:14+01	\N
> 17	76749503-4a8b-4e8f-a2e4-9d89e0de0d71	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:13:14+01	\N
> 18	c46bb1c0-dad9-490c-95b4-b74b25b80129	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:13:14+01	\N
> 19	0ad131d7-2619-42a2-899f-d25c33969dc6	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:14:14+01	\N
> 20	e66b18a7-e2c5-4f6c-9884-03e5c7477e3d	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-03 18:14:14+01	\N
> 21	e1c098fe-4b5d-4728-81d0-7edfdd3d0ec8	a16e4354-0c32-47c1-a01b-7131da3dbb6b	2014-02-04 00:45:14+01	\N
> 22	8b511fc2-4ec5-4c82-9faf-93da8490adc9	b6cd8901-6832-4d95-935e-bb24d53f486d	2014-02-04 01:12:14+01	\N
> 13	c463c150-77df-496b-bebb-6c5fe090ddd8	1964733f-c562-49e1-86b5-c71b12e8c7e2	2014-02-02 18:49:13+01	2014-02-04 01:12:14+01
> 3	faac72a3-57af-4508-b844-37ee547f9bf3	a16e4354-0c32-47c1-a01b-7131da3dbb6b	2014-01-21 03:55:00+01	2014-02-04 21:55:15+01
> 23	aca392f5-8395-46fe-9111-8a3c4812ff72	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-05 00:59:16+01	\N
> 24	829348f3-0f63-4275-92e1-1e84681a422b	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-05 00:59:16+01	\N
> 25	1d304cb5-67bd-4e21-aa2c-2470c19af885	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-05 11:50:16+01	\N
> 26	179ad90d-ed46-467d-ad75-aea6e3ea115e	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-05 22:41:17+01	\N
> 27	4d583a7a-8399-4299-9799-dec33913c20a	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-05 22:41:17+01	\N
> 28	9e5be41b-c512-4f22-9d7c-81090d62dc31	c040505a-da58-4ee1-8e17-8e32b9765608	2014-02-25 23:17:21+01	\N
> \.

I created my dump using pg_dumpall > blah.sql and reimported it using
psql -f blah.sql.

Despite this, the table listing of the restored DB shows:

 Schema |               Name                | Type  | Owner
--------+-----------------------------------+-------+--------
 public | action_version_map                | table | engine
 public | ad_groups                         | table | engine
 public | async_tasks                       | table | engine
 public | async_tasks_entities              | table | engine
 public | audit_log                         | table | engine
 public | base_disks                        | table | engine
 public | bookmarks                         | table | engine
 public | business_entity_snapshot          | table | engine
 public | cluster_policies                  | table | engine
 public | cluster_policy_units              | table | engine
 public | custom_actions                    | table | engine
 public | disk_image_dynamic                | table | engine
 public | disk_lun_map                      | table | engine
 public | dwh_history_timekeeping           | table | engine
 public | dwh_osinfo                        | table | engine
 public | event_map                         | table | engine


The disks_vm_map table doesn't exist there at all... weird. I'm not a
Postgres guru, so maybe there's some foo involved that I haven't
seen :).
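
A quick way to double-check, both in the dump and in the restored DB
(dump file name as above, table name as in the COPY statement):

  # was the table definition in the dump at all?
  grep -n 'CREATE TABLE disks_vm_map' blah.sql
  # and is it present in the restored database?
  psql -h localhost -p 5432 -U engine engine -c '\dt disks_vm_map'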


The mountpoints I created in my initial setup:
> engine=# select id,connection from storage_server_connections;
>                   id                  |                  connection
> --------------------------------------+-----------------------------------------------
>  752c8d02-b8dd-46e3-9f51-395a7f3e246d | X.Y.nl:/var/lib/exports/iso
>  162603af-2d67-4cd5-902c-a8fc3e4cbf9b | 192.168.1.44:/raid/ovirt/data
>  d84f108d-86d0-42a4-9ee9-12e4506b434b | 192.168.1.44:/raid/ovirt/import_export
>  f60fb79b-5062-497d-9576-d27fdcbc70a0 | 192.168.1.44:/raid/ovirt/iso
> (4 rows)

That looks great too; those are indeed the connections I created.
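
To be sure those exports are still actually being served by the storage
host, showmount should tell (IP taken from the table above):

  # list the exports currently offered by the NFS server
  showmount -e 192.168.1.44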


When restarting ovirt-engine, I get this in my logs:
> 2014-03-17 01:50:06,417 ERROR [org.ovirt.engine.core.bll.Backend] 
> (ajp--127.0.0.1-8702-2) Error in getting DB connection. The database is 
> inaccessible. Original exception is: UncategorizedSQLException: 
> CallableStatementCallback; uncategorized SQLException for SQL [{call 
> checkdbconnection()}]; SQL state [25P02]; error code [0]; ERROR: current 
> transaction is aborted, commands ignored until end of transaction block; 
> nested exception is org.postgresql.util.PSQLException: ERROR: current 
> transaction is aborted, commands ignored until end of transaction block
The web interface also stays blank.
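
If I read SQL state 25P02 correctly, an earlier statement in the same
transaction already failed - my (unverified) guess is that the stored
procedures didn't survive the restore. The function named in the log
can be checked like this:

  # the startup check calls checkdbconnection() (see the log above);
  # an empty listing would suggest the procedures weren't restored
  psql -h localhost -p 5432 -U engine engine -c '\df checkdbconnection'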


Okay, that sounds like a database permissions problem:


[root@Xovirt-engine]# cat /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
ENGINE_DB_HOST="localhost"
ENGINE_DB_PORT="5432"
ENGINE_DB_USER="engine"
ENGINE_DB_PASSWORD="********"
ENGINE_DB_DATABASE="engine"
ENGINE_DB_SECURED="False"
ENGINE_DB_SECURED_VALIDATION="False"
ENGINE_DB_DRIVER="org.postgresql.Driver"
ENGINE_DB_URL="jdbc:postgresql://${ENGINE_DB_HOST}:${ENGINE_DB_PORT}/${ENGINE_DB_DATABASE}?sslfactory=org.postgresql.ssl.NonValidatingFactory"

I tried to reset the database's password using this in the psql shell
(with the same password as above):

ALTER USER engine WITH PASSWORD '********';

Authentication still fails, but when I do this:

 psql -h localhost -p 5432 -U engine engine

it works fine... Oh gosh, more debugging ;). Any clue where I should
look?
I just tried copying the old /etc/ovirt* stuff over the new /etc/ovirt*
files so the configs and the DB are in sync again. To no avail.
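
The next place I plan to look is pg_hba.conf, in case the engine's TCP
connection hits a different auth rule than my psql test did (the path
below is the Fedora/RHEL default - an assumption on my side):

  # show the active (non-comment) authentication rules; the engine
  # connects over TCP to localhost, so the "host ... 127.0.0.1/32"
  # and "::1/128" lines are the relevant ones
  grep -vE '^\s*(#|$)' /var/lib/pgsql/data/pg_hba.conf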

Thanks guys!


Boudewijn