[ovirt-users] FORCE DELETE DATACENTER

2017-08-15 Thread Erick Vogeler
Hello

I had a host that I no longer have access to: I can't put it into maintenance,
can't force-remove it, and can't remove its old VMs. How do I force-delete this
datacenter?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted-engine is not migrate to other hosts

2017-08-15 Thread Spickiy Nikita
Hi, thanks for the answer. I decided to fully reinstall oVirt, as I made errors
during the install. I had this error with the vdsmd service:

● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-08-14 16:12:04 +07; 55min ago
 Main PID: 3901 (vdsm)
   CGroup: /system.slice/vdsmd.service
   └─3901 /usr/bin/python2 /usr/share/vdsm/vdsm

Aug 14 16:12:03 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running syslog_available
Aug 14 16:12:03 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running nwfilter
Aug 14 16:12:04 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running dummybr
Aug 14 16:12:04 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running tune_system
Aug 14 16:12:04 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running test_space
Aug 14 16:12:04 vnode10.pi.local vdsmd_init_common.sh[3829]: vdsm: Running test_lo
Aug 14 16:12:04 vnode10.pi.local systemd[1]: Started Virtual Desktop Server Manager.
Aug 14 16:12:05 vnode10.pi.local vdsm[3901]: vdsm throttled WARN MOM not available.
Aug 14 16:12:05 vnode10.pi.local vdsm[3901]: vdsm throttled WARN MOM not available, KSM stats will be missing.
Aug 14 16:12:05 vnode10.pi.local vdsm[3901]: vdsm root ERROR failed to retrieve Hosted Engine HA info
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo…


But when I removed one host for reinstallation, the service status was OK:

● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-08-14 13:15:43 +07; 1 day 1h ago
 Main PID: 29945 (vdsm)
   CGroup: /system.slice/vdsmd.service
   ├─  647 /usr/bin/dd if=/rhev/data-center/mnt/10.10.20.25:_var_nfs-ovirt-iso-share_files/78c7fad8-4c1a-4f08-9b04-df660e1479c0/dom_md/metadata of=/dev/null bs=4096 count=1 iflag=direct
   ├─ 1776 /usr/libexec/ioprocess --read-pipe-fd 59 --write-pipe-fd 53 --max-threads 10 --max-queued-requests 10
   ├─ 7806 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 64 --max-threads 10 --max-queued-requests 10
   └─29945 /usr/bin/python2 /usr/share/vdsm/vdsm

Aug 15 13:26:05 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 13:26:05 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 13:28:00 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 13:28:00 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 13:28:00 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 13:28:00 vnode11.pi.local vdsm[29945]: vdsm vds.dispatcher WARN unhandled close event
Aug 15 14:46:42 vnode11.pi.local vdsm[29945]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/aa47c2e6-73ad-4c37-a641-adf2b127fd03.com.redhat.rhevm.vdsm already removed
Aug 15 14:46:42 vnode11.pi.local vdsm[29945]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/aa47c2e6-73ad-4c37-a641-adf2b127fd03.org.qemu.guest_agent.0 already removed
Aug 15 14:48:03 vnode11.pi.local vdsm[29945]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/aa47c2e6-73ad-4c37-a641-adf2b127fd03.com.redhat.rhevm.vdsm already removed
Aug 15 14:48:03 vnode11.pi.local vdsm[29945]: vdsm root WARN File: /var/lib/libvirt/qemu/channels/aa47c2e6-73ad-4c37-a641-adf2b127fd03.org.qemu.guest_agent.0 already removed

But when I shut down the host on which the hosted engine runs, it does not come
up on an available host. I waited 30 minutes, but it did not come up. I
reinstalled oVirt without errors and am watching its behavior; if the situation
repeats I will write about it. Thanks for the help!

On 15 Aug 2017, at 14:35, Yedidyah Bar David wrote:

On Mon, Aug 14, 2017 at 12:04 PM, Spickiy Nikita wrote:
Hi, apologies in advance, I am a beginner. I am trying to set up HA on oVirt. I
installed the hosted engine on one host and then added two more hosts in the web
portal. When installing the hosts I selected DEPLOY for hosted-engine. But when
I shut down the node on which the hosted engine runs, the hosted engine does not
start on the other hosts automatically.

How long did you wait? It does take up to something like 10 minutes
normally, IIRC.

I guess the trouble is in the host state (it is state=EngineDown),

No, this is ok.

but I have not found information on how to fix it. Maybe it happens because I
did not set up power management?

No. Power management is required for HA of other VMs inside the engine.
For hosted-engine it's not required.

If you still have problems, please check/share
/var/log/ovirt-hosted-engine-ha/agent.log
on all 3 hosts.

Best,


My configuration:
--== Host 1 status ==--


Re: [ovirt-users] oVirt and FreeNAS

2017-08-15 Thread Karli Sjöberg
On 15 Aug 2017, 10:57 AM, Latchezar Filtchev wrote:

> Dear oVirt-ers, just curious - did someone use FreeNAS as storage for
> oVirt? My staging environment is: two virtualization nodes, hosted
> engine, FreeNAS as storage (iSCSI hosted storage, iSCSI Data (Master)
> domain, and NFS shares as ISO and export domains). Thank you! Best,
> Latcho

Well, not FreeNAS exactly, but FreeBSD (the base system underneath
FreeNAS) in two production environments. Doing its thing for a lot of
years now :)

/K


Re: [ovirt-users] import qcow2 and sparse

2017-08-15 Thread Michal Skrivanek
> On 14 Aug 2017, at 13:19, Marcin Kruk wrote:
>
> After importing a machine from KVM whose qemu disk was 2 GB physical and
> 50 GB virtual in size, I got a disk that occupies 50 GB, and even the
> sparse option does not work. It still occupies 50 GB?

Depends on what kind of storage you have on the ovirt side. Is it file based?
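On file-based storage the question usually comes down to whether the image file on the domain is sparse. Independent of oVirt, the difference between virtual (apparent) size and actually allocated space can be demonstrated with plain Python; the file below is a made-up demo file, not a real image:

```python
import os
import tempfile

# Create a file with a 50 MiB apparent size but almost nothing allocated,
# which is how a sparse image behaves on a file-based storage domain.
path = os.path.join(tempfile.mkdtemp(), "sparse-demo.img")
with open(path, "wb") as f:
    f.seek(50 * 1024 * 1024 - 1)  # seek past a 50 MiB hole
    f.write(b"\0")                # write a single byte at the end

st = os.stat(path)
apparent = st.st_size             # what `ls -l` reports (virtual size)
allocated = st.st_blocks * 512    # what `du` reports (space actually used)

print("apparent:  %d bytes" % apparent)
print("allocated: %d bytes" % allocated)
# On a filesystem that supports sparse files, allocated stays far below apparent.
```

If `du` on the image in `/rhev/data-center/...` shows the full 50 GB, the import really did allocate everything, and sparseness was lost somewhere along the way.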



Re: [ovirt-users] hosted-engine restore

2017-08-15 Thread Yedidyah Bar David
On Thu, Aug 3, 2017 at 4:32 AM, Gary Pedretty wrote:
> When attempting to do the restore to a clean engine install before engine
> setup is run as per the documentation, the restore ends with an error about
> restoring the history.
>
>
>
>
> [root@fai-kvm-engine backup]# engine-backup --mode=restore
> --file=/backup/ovirt-backup07302017.bak
> --log=/usr/local/share/ovirtbackup.log --provision-db --provision-dwh-db
> --restore-permissions
> Preparing to restore:
> - Unpacking file '/backup/ovirt-backup07302017.bak'
> Restoring:
> - Files
> Provisioning PostgreSQL users/databases:
> - user 'engine', database 'engine'
> - user 'ovirt_engine_history', database 'ovirt_engine_history'
> Restoring:
> - Engine database 'engine'
>   - Cleaning up temporary tables in engine database 'engine'
>   - Resetting DwhCurrentlyRunning in dwh_history_timekeeping in engine
> database
> --
> Please note:
>
> The engine database was backed up at 2017-07-30 23:05:43.0 -0800 .
>
> Objects that were added, removed or changed after this date, such as virtual
> machines, disks, etc., are missing in the engine, and will probably require
> recovery or recreation.
> --
> - DWH database 'ovirt_engine_history'
> FATAL: Errors while restoring database ovirt_engine_history

Please check/share the restore log.
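As a quick first pass before sharing the whole log, the failing lines can be pulled out with a few lines of Python. A minimal sketch; the sample lines are invented stand-ins, not the real engine-backup log format:

```python
import re

# Invented sample lines standing in for an engine-backup restore log.
log_lines = [
    "2017-07-31 10:00:01 Restoring: Engine database 'engine'",
    "2017-07-31 10:02:12 Restoring: DWH database 'ovirt_engine_history'",
    "2017-07-31 10:02:13 FATAL: Errors while restoring database ovirt_engine_history",
    "2017-07-31 10:02:13 pg_restore: [archiver (db)] could not execute query",
]

# Keep anything that looks like a failure so it can be reported first.
pattern = re.compile(r"\b(FATAL|ERROR|could not)\b", re.IGNORECASE)
failures = [line for line in log_lines if pattern.search(line)]

for line in failures:
    print(line)
```

The lines around the first FATAL (usually a pg_restore error) are normally the ones that explain why the DWH restore failed.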

> [root@fai-kvm-engine backup]#
>
>
>
>
>
> If I continue and run engine-setup, then it fails with the following.
>
> It seems that you are running your engine inside of the hosted-engine VM and
> are not in "Global Maintenance" mode

Perhaps you ran engine-setup more than once?
The above error message should not happen on the first (successful)
run immediately after restore.

>
> 
> Gary Pedretty  g...@ravnalaska.net
> Systems Manager  www.flyravn.com
> Ravn Alaska   /\907-450-7251
> 5245 Airport Industrial Road /  \/\ 907-450-7238 fax
> Fairbanks, Alaska  99709/\  /\ \ Second greatest commandment
> Serving All of Alaska  /  \/  /\  \ \/\   “Love your neighbor as
> Green, green as far as the eyes can see yourself” Matt 22:39
> 
> On Aug 2, 2017, at 9:56 AM, Gary Pedretty wrote:
>
> When restoring a self hosted-engine from a backup, the documentation at
>
> https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Restoring_an_EL-Based_Self-Hosted_Environment/
>
> Shows creating a new host server and then doing the hosted-engine --deploy
> and manually installing the ovirt-engine instead of using the
> ovirt-engine-appliance.  If the hosted-engine deployment that was backed up
> and is being restored, was created using the appliance, should I still
> follow this documentation and manually install the engine, run the restore,
> then do the engine-setup?  Or should I use the ovirt-engine-appliance and
> just run the restore before the engine-setup?

There is no inherent difference between using the appliance
and not using it. Recent versions, though, only allow using
the appliance - so now you really have no option.

You do have to make sure you do not ask 'hosted-engine --deploy'
to automatically run engine-setup - you have to manually login,
restore, then run engine-setup.

Best,

>
>
> Gary



-- 
Didi


Re: [ovirt-users] oVirt and FreeNAS

2017-08-15 Thread Latchezar Filtchev
Thank you Uwe,

If it is not confidential, you could share this info offline:

1. Is it in production?
2. Can you share details about your FreeNAS installation: hardware used, RAM
installed, type of disks (SATA, SAS, SSD), network cards used? Do you have an
SSD for ZIL/L2ARC?
3. The size of your data domain? Number of virtual machines?

Thank you!
Best,
Latcho



-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of Uwe 
Laverenz
Sent: Tuesday, August 15, 2017 1:40 PM
To: users@ovirt.org
Subject: Re: [ovirt-users] oVirt and FreeNAS

Hi,

Am 15.08.2017 um 10:50 schrieb Latchezar Filtchev:

> Just curious - did someone use FreeNAS as storage for oVirt? My
> staging environment is: two virtualization nodes, hosted engine,
> FreeNAS as storage (iSCSI hosted storage, iSCSI Data (Master) domain,
> and NFS shares as ISO and export domains)

Yes, works very well (NFS and iSCSI).

cu,
Uwe


Re: [ovirt-users] oVirt and FreeNAS

2017-08-15 Thread Uwe Laverenz

Hi,

Am 15.08.2017 um 10:50 schrieb Latchezar Filtchev:

Just curious - did someone use FreeNAS as storage for oVirt? My
staging environment is: two virtualization nodes, hosted engine,
FreeNAS as storage (iSCSI hosted storage, iSCSI Data (Master) domain,
and NFS shares as ISO and export domains)


Yes, works very well (NFS and iSCSI).

cu,
Uwe


[ovirt-users] oVirt and FreeNAS

2017-08-15 Thread Latchezar Filtchev
Dear oVirt-ers,

Just curious - did someone use FreeNAS as storage for oVirt? My staging
environment is: two virtualization nodes, hosted engine, FreeNAS as storage
(iSCSI hosted storage, iSCSI Data (Master) domain, and NFS shares as ISO and
export domains).

Thank you!

Best,
Latcho



Re: [ovirt-users] Hosted-engine is not migrate to other hosts

2017-08-15 Thread Yedidyah Bar David
On Mon, Aug 14, 2017 at 12:04 PM, Spickiy Nikita wrote:
> Hi, apologies in advance, I am a beginner. I am trying to set up HA on oVirt.
> I installed the hosted engine on one host and then added two more hosts in the
> web portal. When installing the hosts I selected DEPLOY for hosted-engine. But
> when I shut down the node on which the hosted engine runs, the hosted engine
> does not start on the other hosts automatically.

How long did you wait? It does take up to something like 10 minutes
normally, IIRC.

> I guess the trouble is in the host state (it is state=EngineDown),

No, this is ok.

> but I have not found information on how to fix it. Maybe it happens because I
> did not set up power management?

No. Power management is required for HA of other VMs inside the engine.
For hosted-engine it's not required.

If you still have problems, please check/share
/var/log/ovirt-hosted-engine-ha/agent.log
on all 3 hosts.

Best,
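When comparing agent.log from all three hosts, it can help to reduce each file to its sequence of state transitions first. A minimal sketch in Python; the sample line format below is only a rough guess for illustration, not the exact /var/log/ovirt-hosted-engine-ha/agent.log layout:

```python
import re

# Invented sample lines resembling an HA agent log; the real format
# in /var/log/ovirt-hosted-engine-ha/agent.log may differ.
log_lines = [
    "MainThread::INFO::2017-08-14 15:22:49::Current state EngineUp (score: 3400)",
    "MainThread::INFO::2017-08-14 15:40:02::Current state EngineDown (score: 3400)",
    "MainThread::INFO::2017-08-14 15:41:10::Current state EngineStart (score: 3400)",
]

# Extract (timestamp, state) pairs so the three hosts can be compared side by side.
pattern = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*Current state (\w+)")
transitions = [m.groups() for m in map(pattern.search, log_lines) if m]

for ts, state in transitions:
    print(ts, state)
```

After shutting down the host running the engine, one of the surviving hosts should move from EngineDown toward an engine-start state; if none of them does, that host's log is the one to look at.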

>
> My configuration:
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : vnode10
> Host ID: 1
> Engine status  : {"health": "good", "vm": "up", "detail": 
> "up"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 68e523f8
> local_conf_timestamp   : 178768
> Host timestamp : 178753
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=178753 (Mon Aug 14 15:22:49 2017)
> host-id=1
> score=3400
> vm_conf_refresh_time=178768 (Mon Aug 14 15:23:04 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineUp
> stopped=False
>
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : vnode11
> Host ID: 2
> Engine status  : {"reason": "vm not running on this 
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 1204d5c0
> local_conf_timestamp   : 9290
> Host timestamp : 9274
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=9274 (Mon Aug 14 15:22:53 2017)
> host-id=2
> score=3400
> vm_conf_refresh_time=9290 (Mon Aug 14 15:23:10 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineDown
> stopped=False
>
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : True
> Hostname   : vnode13
> Host ID: 3
> Engine status  : {"reason": "vm not running on this 
> host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 14e97435
> local_conf_timestamp   : 188749
> Host timestamp : 188732
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=188732 (Mon Aug 14 15:22:55 2017)
> host-id=3
> score=3400
> vm_conf_refresh_time=188749 (Mon Aug 14 15:23:11 2017)
> conf_on_shared_storage=True
> maintenance=False
> state=EngineDown
> stopped=False
>
> Thank you in advance!



-- 
Didi
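The `--== Host N status ==--` output quoted above can be summarized per host with a small parser. A sketch, assuming the colon-separated field layout and the `key=value` metadata block shown in the message:

```python
# Sample trimmed from the kind of hosted-engine --vm-status output shown above.
status_text = """\
--== Host 1 status ==--

Hostname                           : vnode10
Score                              : 3400
Extra metadata (valid at timestamp):
    state=EngineUp

--== Host 2 status ==--

Hostname                           : vnode11
Score                              : 3400
Extra metadata (valid at timestamp):
    state=EngineDown
"""

hosts = []
current = None
for line in status_text.splitlines():
    line = line.strip()
    if line.startswith("--== Host"):
        current = {}            # start a new host record
        hosts.append(current)
    elif current is not None and " : " in line:
        key, _, value = line.partition(" : ")
        current[key.strip()] = value.strip()
    elif current is not None and line.startswith("state="):
        current["state"] = line.split("=", 1)[1]

for h in hosts:
    print(h["Hostname"], h["state"], h["Score"])
```

A one-line-per-host summary like this makes it easy to spot which hosts sit in EngineDown and whether any score has dropped below 3400.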