Re: [ovirt-users] Hung task finalizing live migration

2016-09-06 Thread Maton, Brett
Sorry, I just hit reply.

I'm seeing these errors in the logs which look related to the problem:


2016-09-07 06:46:35,123 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler6) [19c58c0d] Failed invoking callback end method
'onFailed' for command '07608003-ca05-4e2e-b917-85ce525c011b' with
exception 'null', the callback is marked for end method retries
2016-09-07 06:46:45,184 ERROR [org.ovirt.engine.core.bll.CommandsFactory]
(DefaultQuartzScheduler7) [19c58c0d] Error in invocating CTOR of command
'LiveMigrateDisk': null
2016-09-07 06:46:45,185 ERROR
[org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller]
(DefaultQuartzScheduler7) [19c58c0d] Failed invoking callback end method
'onFailed' for command '07608003-ca05-4e2e-b917-85ce525c011b' with
exception 'null', the callback is marked for end method retries
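For a task that is stuck in Finalizing while the callback poller keeps failing
like this, oVirt ships DB helper scripts with the engine. A minimal, hedged
triage sketch, assuming a standard engine install with the dbutils scripts
under /usr/share/ovirt-engine/setup/dbutils (check each script's -h first; the
snapshot ID below is a placeholder, not taken from this thread):

# Query-only: list entities/snapshots the engine still considers locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q

# Only if the merge is confirmed finished on the host: unlock the snapshot
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot <snapshot-uuid>

# Stale async tasks can be inspected (and, with care, cleaned) with
# the companion script; see its -h for the query/clean options
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh -h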

On 5 September 2016 at 06:46, Nir Soffer  wrote:

> Hi Maton,
>
> Please reply to the list, not to me directly.
>
> Ala, can you look at this? is this a known issue?
>
> Thanks,
> Nir
>
> On Mon, Sep 5, 2016 at 8:43 AM, Maton, Brett 
> wrote:
> > Log files as requested
> >
> > https://ufile.io/4fc35 vdsm log
> > https://ufile.io/e9836 engine 03-Sep
> > https://ufile.io/15f37 engine 04-Sep
> >
> > vdsm log stops on the 01-Sep...
> >
> > Couple of entries from the event log:
> >
> > Sep 3, 2016 7:31:07 PM  Snapshot 'Auto-generated for Live Storage
> > Migration' deletion for VM 'lv01' has been completed.
> > Sep 3, 2016 6:46:46 PM  Snapshot 'Auto-generated for Live Storage
> > Migration' deletion for VM 'lv01' was initiated by SYSTEM
> >
> > And the related tasks
> >
> > Removing Snapshot Auto-generated for Live Storage Migration of VM lv01
> > Sep 3, 2016 6:46:44 PM   N/A   29f45ca9
> > Validating   Sep 3, 2016 6:46:44 PM  until  Sep 3, 2016 6:46:44 PM
> > Executing    Sep 3, 2016 6:46:44 PM  until  Sep 3, 2016 7:31:06 PM
> >
> > Finalizing   Sep 3, 2016 7:31:06 PM   N/A
> >
> >
> >
> > On 4 September 2016 at 14:27, Nir Soffer  wrote:
> >>
> >> On Sun, Sep 4, 2016 at 12:40 PM, Maton, Brett
> >> wrote:
> >>>
> >>> How do I fix / kill a hung vdsm task?
> >>>
> >>> It seems to have completed the task but is stuck finalising.
> >>>
> >>> Removing Snapshot Auto-generated for Live Storage Migration
> >>> Validating
> >>> Executing
> >>> (hour glass) Finalizing
> >>>
> >>> Task has been 'stuck' finalising for over 13 hours
> >>
> >>
> >> Can you share engine and vdsm logs since the time the merge was started?
> >>
> >> Nir
> >
> >
>


Re: [ovirt-users] Cluster shows N/A in Dashboard

2016-09-06 Thread KY LO
Have you checked DNS? I once had this issue because DNS went down. Thanks.
@Philip 

On Wednesday, September 7, 2016 2:08 AM, Gervais de Montbrun 
 wrote:
 

 Thanks!

It looked like something was broken, so I am glad that this is normal.

Cheers,
Gervais



> On Sep 6, 2016, at 3:03 PM, Alexander Wels  wrote:
> 
> On Tuesday, September 6, 2016 3:00:37 PM EDT Gervais de Montbrun wrote:
>> Hey Folks,
>> 
>> Anyone know why my cluster might be showing N/A in the Dashboard?
>> 
>> 
>> feedback-on-oVirt-engine-4.0.3-1.el7.centos
>> 
>> Cheers,
>> Gervais
> 
> Because clusters don't have a 'status' to display. We wanted to display the 
> count but the widgets also want to display a status. Since there is no status 
> the widget says N/A.
> 





Re: [ovirt-users] How can I install ovirt engine with an older version like 4.0.2.2?

2016-09-06 Thread 转圈圈
I am not on the updated latest version; I want to install the previous version
by "yum install". How can I do that? Thank you.








If I'm not mistaken, you can set the engine to maintenance mode then upgrade it 
to the latest version.


@Philip Lo



On Tuesday, September 6, 2016 6:07 PM, 转圈圈 <313922...@qq.com> wrote:



Ovirt current version is 4.0.4. How can I install ovirt engine with an older
version like 4.0.2.2? Could I install it by "yum install"?

Thank you.




Re: [ovirt-users] No SPM and the master data domain won't come up

2016-09-06 Thread Colin Coe
IIRC, a previous (recent) post stated that going from OVS to legacy was not
supported.

CC

On Wed, Sep 7, 2016 at 3:49 AM, Logan Kuhn  wrote:

> During network testing last night I put one compute node into maintenance
> mode and changed its network from legacy to OVS; this caused issues, so I
> changed it back. When I changed it back, SPM contention started and neither
> host became SPM. The logs are filled with this error message:
>
> 2016-09-06 14:43:38,720 ERROR 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
> (DefaultQuartzScheduler5) [] Command 'SpmStatusVDSCommand(HostName =
> ovirttest1, SpmStatusVDSCommandParameters:{runAsync='true',
> hostId='d84ebe29-5acd-4e4b-9bee-041b27a2f9f9',
> storagePoolId='0001-0001-0001-0001-00d8'})' execution failed:
> VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error =
> (13, 'Sanlock resource read failure', 'Permission denied'), code = 100
>
> I've put all but the master data domain into maintenance mode and the
> permissions on it are vdsm/kvm.  The permissions on the nfs share have not
> been modified, but I re-exported the share anyway, the share can be seen on
> the compute node and I can even mount it on that compute node, as can vdsm:
>
> nfs-server:/rbd/it/ovirt-nfs 293G 111G 183G 38% /rhev/data-center/mnt/nfs-
> server:_rbd_it_ovirt-nfs
>
> I'm not really sure where to go beyond this point.
>
> Regards,
> Logan
>


[ovirt-users] mrtg/rrdtool graphs

2016-09-06 Thread Bill Bill
Is it possible to get basic graphs integrated into a tab on the VM details
without having to deal with the reports engine? Just basic graphs that show
perhaps CPU, bandwidth, etc.
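A hedged workaround sketch (not from the thread): the raw numbers behind such
graphs are exposed over the engine's REST API, so an external grapher
(mrtg/rrdtool, as in the subject) can poll them. The engine URL, credentials
and VM ID below are placeholders:

# Current VM statistics (cpu.current.*, network traffic, etc.) as XML
curl -k -u 'admin@internal:password' \
  -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/statistics'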



[ovirt-users] No SPM and the master data domain won't come up

2016-09-06 Thread Logan Kuhn
During network testing last night I put one compute node into maintenance mode
and changed its network from legacy to OVS; this caused issues, so I changed
it back. When I changed it back, SPM contention started and neither host became
SPM. The logs are filled with this error message:

2016-09-06 14:43:38,720 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] 
(DefaultQuartzScheduler5) [] Command 'SpmStatusVDSCommand(HostName = 
ovirttest1, SpmStatusVDSCommandParameters:{runAsync='true', 
hostId='d84ebe29-5acd-4e4b-9bee-041b27a2f9f9', 
storagePoolId='0001-0001-0001-0001-00d8'})' execution failed: 
VDSGenericException: VDSErrorException: Failed to SpmStatusVDS, error = (13, 
'Sanlock resource read failure', 'Permission denied'), code = 100 

I've put all but the master data domain into maintenance mode and the
permissions on it are vdsm/kvm. The permissions on the nfs share have not been 
modified, but I re-exported the share anyway, the share can be seen on the 
compute node and I can even mount it on that compute node, as can vdsm: 

nfs-server:/rbd/it/ovirt-nfs 293G 111G 183G 38% 
/rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs 

I'm not really sure where to go beyond this point.

Regards, 
Logan 
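A few read-only checks that often narrow down "(13, 'Sanlock resource read
failure', 'Permission denied')" — a hedged sketch, not from the thread; the
mount path is taken from the df output above, and the dom_md/ids layout
assumes a standard storage domain:

# What sanlock itself reports about its lockspaces
sanlock client status

# sanlock (not only vdsm) must be able to read the domain's ids file;
# on oVirt hosts the sanlock user has kvm as a supplementary group
ls -l /rhev/data-center/mnt/nfs-server:_rbd_it_ovirt-nfs/*/dom_md/ids
id sanlock

# Root squash or UID mapping on the NFS export is a common culprit;
# oVirt expects files to map to vdsm/kvm (uid/gid 36)
exportfs -v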


Re: [ovirt-users] Moving Hosted Engine Disk

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 7:40 PM, Charles Kozler  wrote:
> Hi Everyone - can anyone assist me? I'd really like to move HE off of the
> location it is right now without having to rebuild my entire datacenter. Is
> there anything I can do? Any documentation you can point me to? Thanks!

IIRC something similar was recently discussed on this list. Did you search
the archives? Also adding Simone.

If nothing else, you can try backup/restore.
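For the backup/restore route, a minimal hedged sketch with engine-backup (file
names are placeholders; flags as in oVirt 4.0 — verify with engine-backup --help):

# On the current engine VM: full backup of the database and configuration
engine-backup --mode=backup --scope=all \
  --file=engine-backup.tar.bz2 --log=engine-backup.log

# After redeploying hosted-engine on the new storage domain, restore the
# backup into the freshly installed engine before completing setup
engine-backup --mode=restore --file=engine-backup.tar.bz2 \
  --log=engine-restore.log --provision-db --restore-permissions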

>
> On Sun, Aug 14, 2016 at 2:51 PM, Charles Kozler 
> wrote:
>>
>> Hi - I followed this doc successfully
>> https://www.ovirt.org/documentation/how-to/hosted-engine/ with no real
>> issues
>>
>> In oVirt I have the hosted_storage storage domain and the one I created
>> (Datastore_VM_NFS) to be a master domain - this is all fine
>>
>> My hosted_storage domain is on a SAN that is being deprecated. I am
>> wondering what I need to do to **move** the disks for Hosted Engine VM off
>> of that storage domain and on to another one. I can create a new volume on
>> the new SAN where the VMs are stored but from all that I have found it
>> doesnt look like I can actually move the hosted engines disks with out
>> possibly rerunning the entire hosted-engine --deploy but I am not sure if
>> this will cause any other conflicting problems
>>
>> Thanks
>>
>
>



-- 
Didi


Re: [ovirt-users] Cluster shows N/A in Dashboard

2016-09-06 Thread Gervais de Montbrun
Thanks!

It looked like something was broken, so I am glad that this is normal.

Cheers,
Gervais



> On Sep 6, 2016, at 3:03 PM, Alexander Wels  wrote:
> 
> On Tuesday, September 6, 2016 3:00:37 PM EDT Gervais de Montbrun wrote:
>> Hey Folks,
>> 
>> Anyone know why my cluster might be showing N/A in the Dashboard?
>> 
>> 
>> feedback-on-oVirt-engine-4.0.3-1.el7.centos
>> 
>> Cheers,
>> Gervais
> 
> Because clusters don't have a 'status' to display. We wanted to display the 
> count but the widgets also want to display a status. Since there is no status 
> the widget says N/A.
> 



Re: [ovirt-users] Cluster shows N/A in Dashboard

2016-09-06 Thread Alexander Wels
On Tuesday, September 6, 2016 3:00:37 PM EDT Gervais de Montbrun wrote:
> Hey Folks,
> 
> Anyone know why my cluster might be showing N/A in the Dashboard?
> 
> 
> feedback-on-oVirt-engine-4.0.3-1.el7.centos
> 
> Cheers,
> Gervais

Because clusters don't have a 'status' to display. We wanted to display the 
count but the widgets also want to display a status. Since there is no status 
the widget says N/A.



[ovirt-users] Cluster shows N/A in Dashboard

2016-09-06 Thread Gervais de Montbrun
Hey Folks,

Anyone know why my cluster might be showing N/A in the Dashboard?


feedback-on-oVirt-engine-4.0.3-1.el7.centos

Cheers,
Gervais





Re: [ovirt-users] HELP Upgrade hypervisors from CentOS 6.8 to CentOS 7

2016-09-06 Thread Nir Soffer
On Tue, Sep 6, 2016 at 7:33 PM, VONDRA Alain  wrote:

> Great, I found it!
>
> It was an initiator issue caused by reinstalling the host. After a
> re-configuration of the SAN, everything works fine and I could bring up the
> brand new host with CentOS 7.2. I will reproduce the same plan on the other host.
>
> Thanks so much for your help
>

I think the root cause is that ovirt does not manage the iSCSI initiator on
the hosts.

If we had an option to set the initiator when adding a host, like the way
you set the host password, it would be harder to hit such bugs.

Can you file a bug about this?

Nir
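For reference, the per-host initiator name that a reinstall silently
regenerates lives in a single file; a hedged sketch (the IQN shown is an
example, not from this setup):

# The IQN this host presents to the SAN; a reinstall regenerates it,
# which invalidates SAN-side ACLs tied to the old name
cat /etc/iscsi/initiatorname.iscsi
# InitiatorName=iqn.1994-05.com.redhat:abcdef123456   (example)

# Either re-add the new IQN on the SAN, or write the old IQN back here
# and restart iscsid so new sessions use it
systemctl restart iscsid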


>
>
>
>
>
>
> --
>
> *Alain VONDRA*
> *Chargé d'exploitation des Systèmes d'Information*
> *Direction Administrative et Financière*
> *+33 1 44 39 77 76*
>
> *UNICEF France 3 rue Duguay Trouin  75006 PARIS*
> *www.unicef.fr*
>
>
> --
>
> *From:* Simone Tiraboschi [mailto:stira...@redhat.com]
> *Sent:* Tuesday, 6 September 2016 16:48
> *To:* VONDRA Alain
> *Cc:* Yedidyah Bar David; users
> *Subject:* Re: [ovirt-users] HELP Upgrade hypervisors from CentOS 6.8 to
> CentOS 7
>
>
>
>
>
>
>
> On Tue, Sep 6, 2016 at 4:25 PM, VONDRA Alain  wrote:
>
> I’ve just reinstalled the host and I have the same issue; here are the ERROR
> messages from the vdsm logs:
>
>
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,399::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,453::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,475::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-51::ERROR::2016-09-06 
> 16:05:38,319::hsm::2453::Storage.HSM::(connectStorageServer)
> Could not connect to storageServer
>
> Thread-52::ERROR::2016-09-06 16:05:38,636::sdc::137::
> Storage.StorageDomainCache::(_findDomain) looking for unfetched domain
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
>
> Thread-52::ERROR::2016-09-06 16:05:38,637::sdc::154::
> Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
>
> Thread-52::ERROR::2016-09-06 16:05:38,756::sdc::143::
> Storage.StorageDomainCache::(_findDomain) domain 
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
> not found
>
> Thread-52::ERROR::2016-09-06 16:05:38,769::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`1cfc5c30-fc82-44d6-a296-223fa9426417`::Unexpected
> error
>
> Thread-52::ERROR::2016-09-06 
> 16:05:38,788::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Cannot find master domain:
> u'spUUID=0002-0002-0002-0002-0193, 
> msdUUID=cc9ab4b2-9880-427b-8f3b-61f03e520cbc'",
> 'code': 304}}
>
>
>
> This SD is present on the host1 (CentOS 6.8) :
>
> [root@unc-srv-hyp1  ~]$ ll /rhev/data-center/
>
> 0002-0002-0002-0002-0193/ mnt/
>
>
>
> Not in the host2 (CentOS 7.2) :
>
> [root@unc-srv-hyp2 ~]# ll /rhev/data-center/
>
> total 4
>
> drwxr-xr-x. 5 vdsm kvm 4096  6 sept. 16:03 mnt
>
>
>
> The vdsm.log more complete :
>
> Thread-88::DEBUG::2016-09-06 16:22:03,057::iscsi::424::Storage.ISCSI::(rescan)
> Performing SCSI scan, this will take up to 30 seconds
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,057::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,078::misc::751::
> Storage.SamplingMethod::(__call__) Returning last result
>
> Thread-88::DEBUG::2016-09-06 16:22:03,079::misc::741::
> Storage.SamplingMethod::(__call__) Trying to enter sampling method
> (storage.hba.rescan)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,079::misc::743::
> Storage.SamplingMethod::(__call__) Got in to sampling method
>
> Thread-88::DEBUG::2016-09-06 16:22:03,080::hba::53::Storage.HBA::(rescan)
> Starting scan
>
> Thread-88::DEBUG::2016-09-06 16:22:03,080::utils::755::Storage.HBA::(execCmd)
> /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,134::hba::66::Storage.HBA::(rescan)
> Scan finished
>
> Thread-88::DEBUG::2016-09-06 16:22:03,134::misc::751::
> Storage.SamplingMethod::(__call__) Returning last result
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,135::multipath::131::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /sbin/multipath (cwd None)
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,201::multipath::131::Storage.Misc.excCmd::(rescan)
> SUCCESS: <err> = ''; <rc> = 0
>
> Thread-88::DEBUG::2016-09-06 16:22:03,202::utils::755::root::(execCmd)
> /sbin/udevadm settle --timeout=5 (cwd None)
>
> 

Re: [ovirt-users] ovirt hosted-engine deploy never works

2016-09-06 Thread Osvaldo ALVAREZ POZO

Hello, 

Thanks for the reply 

but I did not understand the following part: "Since 4.0 you can also choose to
have your new host participate in the hosted-engine pool from the engine
itself."

It's my first time using ovirt.

Could you explain?




De: "Simone Tiraboschi"  
À: "alvarez"  
Cc: "Artyom Lukianov" , "users"  
Envoyé: Mardi 6 Septembre 2016 18:41:33 
Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Tue, Sep 6, 2016 at 6:19 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:



Hello, 

Now I have 1 hosted_engine, 1 ovirt-node and 1 cluster.

How do I add a node to the cluster?

Do I go to ovirt-engine and add a node directly, or are there rpms to install
first?



You have just to install
http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm on your next
host.
Then you can go to the engine in the hosts section of that cluster and choose
New.
Since 4.0 you can also choose to have your new host participate in the
hosted-engine pool from the engine itself.



Thanks

Sincerely


De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Artyom Lukianov" < [ mailto:aluki...@redhat.com | aluki...@redhat.com ] >, 
"users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Envoyé: Vendredi 2 Septembre 2016 13:38:05 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Fri, Sep 2, 2016 at 12:39 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Ok, 

I have a LUN shown as unattached.

How do I attach it?

http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Create_an_iSCSI_Data_Domain


One other question:

do I have to add the iSCSI LUN where the hosted-engine was deployed?

No, the engine will do it automatically once the datacenter is up.


Sincerely


De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Artyom Lukianov" < [ mailto:aluki...@redhat.com | aluki...@redhat.com ] >, 
"users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Envoyé: Vendredi 2 Septembre 2016 12:09:32 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Fri, Sep 2, 2016 at 11:49 AM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:


Hello,

When I connect to the hosted_engine web interface I get "the hosted Engine Storage
Domain doesn't exist. It should be imported into the setup".

What does it mean?

It will be automatically imported once you add your first storage domain for 
regular VMs. 

The message is a bit confusing and indeed we have an open bug to change it:
https://bugzilla.redhat.com/show_bug.cgi?id=1358313

Sincerely,

Inle 

De: "Artyom Lukianov" < [ mailto:aluki...@redhat.com | aluki...@redhat.com ] > 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
>, "users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Envoyé: Jeudi 1 Septembre 2016 16:52:09 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 

Just run on your host hosted-engine --vm-status. 

On Thu, Sep 1, 2016 at 5:47 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello,

I managed to install hosted-engine,
I think.



yum localinstall http://dl.fedoraproject.org/pub/epel/7/x86_64/h/haveged-1.9.1-1.el7.x86_64.rpm

systemctl enable haveged

systemctl start haveged




cat /proc/sys/kernel/random/entropy_avail 



1888 
It was helpful.

I lost connection for a while to the ovirt-node.

But I think the hosted-engine vm is not in the iscsi LUN.
How can I see where the VM is located?

Thanks 


De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Gianluca Cecchi" < [ mailto:gianluca.cec...@gmail.com | 
gianluca.cec...@gmail.com ] >, "users" < [ mailto:users@ovirt.org | 
users@ovirt.org ] > 
Envoyé: Jeudi 1 Septembre 2016 12:12:39 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Thu, Sep 1, 2016 at 12:06 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello, 

but the problem 

Re: [ovirt-users] ovirt hosted-engine deploy never works

2016-09-06 Thread Simone Tiraboschi
On Tue, Sep 6, 2016 at 6:19 PM, Osvaldo ALVAREZ POZO 
wrote:

> Hello,
>
> Now I have 1 hosted_engine, 1 ovirt-node and 1 cluster.
>
> How do I add a node to the cluster?
>
> Do I go to ovirt-engine and add a node directly, or are there rpms to
> install first?
>

You have just to install
http://resources.ovirt.org/pub/yum-repo/ovirt-release40.rpm on your next
host.
Then you can go to the engine in the hosts section of that cluster and
choose New.
Since 4.0 you can also choose to have your new host participate in the
hosted-engine pool from the engine itself.


>
> thanks
>
> Sincerily
>
> --
> *De: *"Simone Tiraboschi" 
> *À: *"alvarez" 
> *Cc: *"Artyom Lukianov" , "users" 
> *Envoyé: *Vendredi 2 Septembre 2016 13:38:05
>
> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>
>
>
> On Fri, Sep 2, 2016 at 12:39 PM, Osvaldo ALVAREZ POZO <
> alva...@xtra-mail.fr> wrote:
>
>> Ok,
>>
>> I have a LUN shown as unattached.
>>
>> How do I attach it?
>>
>>
> http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Create_an_iSCSI_Data_Domain
>
>
>>
>> One other question:
>>
>> do I have to add the iSCSI LUN where the hosted-engine was deployed?
>>
>
> No, the engine will do it automatically once the datacenter is up.
>
>
>>
>> Sincerily
>>
>> --
>> *De: *"Simone Tiraboschi" 
>> *À: *"alvarez" 
>> *Cc: *"Artyom Lukianov" , "users" 
>> *Envoyé: *Vendredi 2 Septembre 2016 12:09:32
>>
>> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>>
>>
>>
>> On Fri, Sep 2, 2016 at 11:49 AM, Osvaldo ALVAREZ POZO <
>> alva...@xtra-mail.fr> wrote:
>>
>>>
>>> Hello,
>>>
>>> When I connect to the hosted_engine web interface I get "the hosted Engine
>>> Storage Domain doesn't exist. It should be imported into the setup".
>>>
>>> What does it mean?
>>>
>>>
>> It will be automatically imported once you add your first storage domain
>> for regular VMs.
>>
>> The message is a bit confusing and indeed we have an open bug to change
>> it:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1358313
>>
>>
>>> Sincerely,
>>>
>>> Inle
>>> --
>>> *De: *"Artyom Lukianov" 
>>> *À: *"alvarez" 
>>> *Cc: *"Simone Tiraboschi" , "users" <
>>> users@ovirt.org>
>>> *Envoyé: *Jeudi 1 Septembre 2016 16:52:09
>>>
>>> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>>>
>>> Just run on your host hosted-engine --vm-status.
>>>
>>> On Thu, Sep 1, 2016 at 5:47 PM, Osvaldo ALVAREZ POZO <
>>> alva...@xtra-mail.fr> wrote:
>>>
 Hello,

 I managed to install hosted-engine,
 I think.

 yum localinstall http://dl.fedoraproject.org/pub/epel/7/x86_64/h/haveged-1.9.1-1.el7.x86_64.rpm

 systemctl enable haveged

 systemctl start haveged


 cat /proc/sys/kernel/random/entropy_avail

 1888
 It was helpful.

 I lost connection for a while to the ovirt-node

 But I think the hosted-engine vm is not in the iscsi LUN.
 How can I see where the VM is located?

 Thanks

 --
 *De: *"Simone Tiraboschi" 
 *À: *"alvarez" 
 *Cc: *"Gianluca Cecchi" , "users" <
 users@ovirt.org>
 *Envoyé: *Jeudi 1 Septembre 2016 12:12:39

 *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works



 On Thu, Sep 1, 2016 at 12:06 PM, Osvaldo ALVAREZ POZO <
 alva...@xtra-mail.fr> wrote:

> Hello,
>
> but the problem is that I am not able to finish the hosted_engine
> deploy. So, I am not able to modify the VM, because the process stopped :-(
>
> thanks for the help
>
>
 Yes, adding entropy to the host should still help.


> --
> *De: *"Simone Tiraboschi" 
> *À: *"Gianluca Cecchi" 
> *Cc: *"alvarez" , "users" 
> *Envoyé: *Jeudi 1 Septembre 2016 12:00:22
> *Objet: *Re: [ovirt-users] ovirt hosted-engine deploy never works
>
>
>
> On Thu, Sep 1, 2016 at 11:48 AM, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
>
>> On Thu, Sep 1, 2016 at 11:39 AM, Osvaldo ALVAREZ POZO <
>> alva...@xtra-mail.fr> wrote:
>>
>>> hello,
>>>
>>> but normally it is not possible to install packages on ovirt node;
>>> that's why I wonder how to install haveged.
>>>
>>> Is "yum localinstall" the command?
>>>
>>> Thanks
>>>
>>>
>> Actually the haveged workaround is to be applied on the engine server,
>> typically when it is deployed as a VM, 

Re: [ovirt-users] Moving Hosted Engine Disk

2016-09-06 Thread Charles Kozler
Hi Everyone - can anyone assist me? I'd really like to move HE off of the
location it is right now without having to rebuild my entire datacenter. Is
there anything I can do? Any documentation you can point me to? Thanks!

On Sun, Aug 14, 2016 at 2:51 PM, Charles Kozler 
wrote:

> Hi - I followed this doc successfully
> https://www.ovirt.org/documentation/how-to/hosted-engine/ with no real issues
>
> In oVirt I have the hosted_storage storage domain and the one I created
> (Datastore_VM_NFS) to be a master domain - this is all fine
>
> My hosted_storage domain is on a SAN that is being deprecated. I am
> wondering what I need to do to **move** the disks for Hosted Engine VM off
> of that storage domain and on to another one. I can create a new volume on
> the new SAN where the VMs are stored, but from all that I have found it
> doesn't look like I can actually move the hosted engine's disks without
> possibly rerunning the entire hosted-engine --deploy, but I am not sure if
> this will cause any other conflicting problems
>
> Thanks
>
>


Re: [ovirt-users] ovirt hosted-engine deploy never works

2016-09-06 Thread Osvaldo ALVAREZ POZO
Hello, 

Now I have 1 hosted_engine, 1 ovirt-node and 1 cluster.

How do I add a node to the cluster?

Do I go to ovirt-engine and add a node directly, or are there rpms to install
first?

Thanks

Sincerely


De: "Simone Tiraboschi"  
À: "alvarez"  
Cc: "Artyom Lukianov" , "users"  
Envoyé: Vendredi 2 Septembre 2016 13:38:05 
Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Fri, Sep 2, 2016 at 12:39 PM, Osvaldo ALVAREZ POZO < [ 
mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > wrote: 



Ok, 

I have a LUN shown as unattached.

How do I attach it?




http://www.ovirt.org/documentation/quickstart/quickstart-guide/#Create_an_iSCSI_Data_Domain

One other question:

do I have to add the iSCSI LUN where the hosted-engine was deployed?

No, the engine will do it automatically once the datacenter is up.

Sincerely


De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Artyom Lukianov" < [ mailto:aluki...@redhat.com | aluki...@redhat.com ] >, 
"users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Envoyé: Vendredi 2 Septembre 2016 12:09:32 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Fri, Sep 2, 2016 at 11:49 AM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello,

When I connect to the hosted_engine web interface I get "the hosted Engine Storage
Domain doesn't exist. It should be imported into the setup".

What does it mean?


It will be automatically imported once you add your first storage domain for
regular VMs.

The message is a bit confusing and indeed we have an open bug to change it:
https://bugzilla.redhat.com/show_bug.cgi?id=1358313

Sincerely,

Inle

From: "Artyom Lukianov" <aluki...@redhat.com>
To: "alvarez" <alva...@xtra-mail.fr>
Cc: "Simone Tiraboschi" <stira...@redhat.com>, "users" <users@ovirt.org>
Sent: Thursday, 1 September 2016 16:52:09

Subject: Re: [ovirt-users] ovirt hosted-engine deploy never works

Just run on your host hosted-engine --vm-status. 

On Thu, Sep 1, 2016 at 5:47 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello,

I managed to install hosted-engine,
I think.



yum localinstall http://dl.fedoraproject.org/pub/epel/7/x86_64/h/haveged-1.9.1-1.el7.x86_64.rpm

systemctl enable haveged

systemctl start haveged




cat /proc/sys/kernel/random/entropy_avail 



1888 
It was helpful.

I lost connection for a while to the ovirt-node.

But I think the hosted-engine vm is not in the iscsi LUN.
How can I see where the VM is located?

Thanks 


De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] > 
Cc: "Gianluca Cecchi" < [ mailto:gianluca.cec...@gmail.com | 
gianluca.cec...@gmail.com ] >, "users" < [ mailto:users@ovirt.org | 
users@ovirt.org ] > 
Envoyé: Jeudi 1 Septembre 2016 12:12:39 

Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Thu, Sep 1, 2016 at 12:06 PM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello,

but the problem is that I am not able to finish the hosted_engine deploy, so I
am not able to modify the VM, because the process stopped :-(

Thanks for the help



Yes, adding entropy to the host should still help. 



De: "Simone Tiraboschi" < [ mailto:stira...@redhat.com | stira...@redhat.com ] 
> 
À: "Gianluca Cecchi" < [ mailto:gianluca.cec...@gmail.com | 
gianluca.cec...@gmail.com ] > 
Cc: "alvarez" < [ mailto:alva...@xtra-mail.fr | alva...@xtra-mail.fr ] >, 
"users" < [ mailto:users@ovirt.org | users@ovirt.org ] > 
Envoyé: Jeudi 1 Septembre 2016 12:00:22 
Objet: Re: [ovirt-users] ovirt hosted-engine deploy never works 



On Thu, Sep 1, 2016 at 11:48 AM, Gianluca Cecchi
<gianluca.cec...@gmail.com> wrote:

On Thu, Sep 1, 2016 at 11:39 AM, Osvaldo ALVAREZ POZO
<alva...@xtra-mail.fr> wrote:

Hello,

but normally it is not possible to install packages on ovirt node;
that's why I wonder how to install haveged.

Is "yum localinstall" the command?

Thanks

Actually the haveged workaround is to be applied on the engine server, typically
when it is deployed as a VM, not 

Re: [ovirt-users] How can I install ovirt engine with an older version like 4.0.2.2?

2016-09-06 Thread KY LO
If I'm not mistaken, you can set the engine to maintenance mode then upgrade it 
to the latest version.
@Philip Lo 
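For a hosted-engine setup, the maintenance-then-upgrade flow mentioned above
looks roughly like this (a hedged sketch; commands as in oVirt 4.0, verify
locally):

# On a hosted-engine host: keep the HA agents from acting during the upgrade
hosted-engine --set-maintenance --mode=global

# On the engine VM: update the setup packages and rerun setup
yum update 'ovirt-engine-setup*'
engine-setup

# Back on the host, once the engine is healthy again
hosted-engine --set-maintenance --mode=none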

On Tuesday, September 6, 2016 6:07 PM, 转圈圈 <313922...@qq.com> wrote:
 

 Ovirt current version is 4.0.4. How can I install ovirt engine with an older
version like 4.0.2.2? Could I install it by "yum install"?
Thank you.




Re: [ovirt-users] HELP Upgrade hypervisors from CentOS 6.8 to CentOS 7

2016-09-06 Thread Simone Tiraboschi
On Tue, Sep 6, 2016 at 4:25 PM, VONDRA Alain  wrote:

> I’ve just reinstalled the host and I have the same issue; here are the ERROR
> messages from the vdsm logs:
>
>
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,399::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,453::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-43::ERROR::2016-09-06 
> 16:02:54,475::hsm::2551::Storage.HSM::(disconnectStorageServer)
> Could not disconnect from storageServer
>
> Thread-51::ERROR::2016-09-06 
> 16:05:38,319::hsm::2453::Storage.HSM::(connectStorageServer)
> Could not connect to storageServer
>
> Thread-52::ERROR::2016-09-06 16:05:38,636::sdc::137::
> Storage.StorageDomainCache::(_findDomain) looking for unfetched domain
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
>
> Thread-52::ERROR::2016-09-06 16:05:38,637::sdc::154::
> Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
>
> Thread-52::ERROR::2016-09-06 16:05:38,756::sdc::143::
> Storage.StorageDomainCache::(_findDomain) domain 
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
> not found
>
> Thread-52::ERROR::2016-09-06 16:05:38,769::task::866::
> Storage.TaskManager.Task::(_setError) 
> Task=`1cfc5c30-fc82-44d6-a296-223fa9426417`::Unexpected
> error
>
> Thread-52::ERROR::2016-09-06 
> 16:05:38,788::dispatcher::76::Storage.Dispatcher::(wrapper)
> {'status': {'message': "Cannot find master domain:
> u'spUUID=0002-0002-0002-0002-0193, 
> msdUUID=cc9ab4b2-9880-427b-8f3b-61f03e520cbc'",
> 'code': 304}}
>
>
>
> This SD is present on the host1 (CentOS 6.8) :
>
> [root@unc-srv-hyp1  ~]$ ll /rhev/data-center/
>
> 0002-0002-0002-0002-0193/ mnt/
>
>
>
> Not in the host2 (CentOS 7.2) :
>
> [root@unc-srv-hyp2 ~]# ll /rhev/data-center/
>
> total 4
>
> drwxr-xr-x. 5 vdsm kvm 4096  6 sept. 16:03 mnt
>
>
>
> The vdsm.log more complete :
>
> Thread-88::DEBUG::2016-09-06 16:22:03,057::iscsi::424::Storage.ISCSI::(rescan)
> Performing SCSI scan, this will take up to 30 seconds
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,057::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,078::misc::751::
> Storage.SamplingMethod::(__call__) Returning last result
>
> Thread-88::DEBUG::2016-09-06 16:22:03,079::misc::741::
> Storage.SamplingMethod::(__call__) Trying to enter sampling method
> (storage.hba.rescan)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,079::misc::743::
> Storage.SamplingMethod::(__call__) Got in to sampling method
>
> Thread-88::DEBUG::2016-09-06 16:22:03,080::hba::53::Storage.HBA::(rescan)
> Starting scan
>
> Thread-88::DEBUG::2016-09-06 16:22:03,080::utils::755::Storage.HBA::(execCmd)
> /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,134::hba::66::Storage.HBA::(rescan)
> Scan finished
>
> Thread-88::DEBUG::2016-09-06 16:22:03,134::misc::751::
> Storage.SamplingMethod::(__call__) Returning last result
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,135::multipath::131::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /sbin/multipath (cwd None)
>
> Thread-88::DEBUG::2016-09-06 
> 16:22:03,201::multipath::131::Storage.Misc.excCmd::(rescan)
> SUCCESS: <err> = ''; <rc> = 0
>
> Thread-88::DEBUG::2016-09-06 16:22:03,202::utils::755::root::(execCmd)
> /sbin/udevadm settle --timeout=5 (cwd None)
>
> Thread-88::DEBUG::2016-09-06 16:22:03,227::utils::775::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
>
> Thread-88::DEBUG::2016-09-06 16:22:03,228::lvm::498::
> Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate
> operation' got the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,228::lvm::500::
> Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate
> operation' released the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,229::lvm::509::
> Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate
> operation' got the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,229::lvm::511::
> Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate
> operation' released the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,229::lvm::529::
> Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate
> operation' got the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,230::lvm::531::
> Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate
> operation' released the operation mutex
>
> Thread-88::DEBUG::2016-09-06 16:22:03,230::misc::751::
> Storage.SamplingMethod::(__call__) Returning last result
>
> Thread-88::ERROR::2016-09-06 16:22:03,230::sdc::137::
> Storage.StorageDomainCache::(_findDomain) looking for unfetched domain
> cc9ab4b2-9880-427b-8f3b-61f03e520cbc
>
> Thread-88::ERROR::2016-09-06 16:22:03,231::sdc::154::
> 

Re: [ovirt-users] Ovirt with bad IO performance

2016-09-06 Thread Gabriel Ozaki
Hi Yaniv

These results are average in sysbench; my machine, for example, gets
1.3905 MB/sec. I don't know how this test really works, and I will read
up on it.

So I tried a *bonnie++ test* (reference:
http://support.commgate.net/index.php?/Knowledgebase/Article/View/212):

Xenserver speeds:
Write speed: 91076 KB/sec
ReWrite speed: 57885 KB/sec
Read speed: 215457 KB/sec (Strange, too high)
Num of Blocks: 632.4

Ovirt Speeds:
Write speed: 111597 KB/sec (22% more than xenserver)
ReWrite speed: 73402 KB/sec (26% more than xenserver)
Read speed: 121537 KB/sec (44% less than xenserver)
Num of Blocks: 537.2 (15% less than xenserver)


result: a draw?


And a *dd test* (reference: https://romanrm.net/dd-benchmark):
[root@xenserver teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
[root@xenserver teste]# dd bs=1M count=256 if=/dev/zero of=test
conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.40111 s, 192 MB/s (Again, too high)

[root@ovirt teste]# echo 2 > /proc/sys/vm/drop_caches  && sync
[root@ovirt teste]# dd bs=1M count=256 if=/dev/zero of=test conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.31288 s, 116 MB/s (Really fair; the
host result is 124 MB/s)


*HDparm *(FAIL on xenserver)
[root@xenserver teste]# hdparm -Tt /dev/xvda1

/dev/xvda1:
 Timing cached reads:   25724 MB in  2.00 seconds = 12882.77 MB/sec
 Timing buffered disk reads: 2984 MB in  3.00 seconds = 994.43 MB/sec (8
times the expected value; something is very wrong)

[root@ovirt teste]# hdparm -Tt /dev/vda1

/dev/vda1:
 Timing cached reads:   25042 MB in  2.00 seconds = 12540.21 MB/sec
 Timing buffered disk reads: 306 MB in  3.01 seconds = 101.66 MB/sec (OK
result)


There is something strange in xenserver affecting the results; probably the
best choice is to close the thread and start studying benchmarks.

Thanks
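One way to take caching out of the comparison (as Yaniv suggests below) is
direct IO; a hedged fio sketch — the test path is a placeholder and the flags
are upstream fio's, so verify locally:

# Sequential write with the page cache bypassed (O_DIRECT), run in each guest
fio --name=seqwrite --filename=/tmp/fio.test --rw=write --bs=1M \
    --size=1G --direct=1 --ioengine=libaio --iodepth=4

# Same settings for reads
fio --name=seqread --filename=/tmp/fio.test --rw=read --bs=1M \
    --size=1G --direct=1 --ioengine=libaio --iodepth=4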






2016-09-05 12:01 GMT-03:00 Yaniv Kaul :

>
>
> On Mon, Sep 5, 2016 at 1:45 PM, Gabriel Ozaki 
> wrote:
>
>> Hi Yaniv and Sandro
>>
>> The disk is in the same machine then ovirt-engine
>>
>
> I'm looking back at your results, and something is terribly wrong there:
> For example, sysbench:
>
> Host result:2.9843Mb/sec
> Ovirt result:   1.1561Mb/sec
> Xenserver result:   2.9006Mb/sec
>
> This is slower than a USB1 disk on key performance. I don't know what to
> make of it, but it's completely bogus. Even plain QEMU can get better
> results than this.
> And the 2nd benchmark:
>
>
> *The novabench test:*
> Ovirt result:  79Mb/s
> Xenserver result:  101Mb/s
>
> This is better, but still very slow. If I translate it to MB/s, it's
> ~10-12MBs - still very very slow.
> If, however, this is MB/sec, then this makes sense - and is probably as
> much as you can get from a single spindle.
> The difference between XenServer and oVirt are more likely have to do with
> caching than anything else. I don't know what the caching settings of
> XenServer - can you ensure no caching ('direct IO') is used?
>
>
>
>> Thanks
>>
>>
>>
>>
>>
>> 2016-09-02 15:31 GMT-03:00 Yaniv Kaul :
>>
>>>
>>>
>>> On Fri, Sep 2, 2016 at 6:11 PM, Gabriel Ozaki wrote:
>>>
 Hi Yaniv

 Sorry guys, I didn't explain well in my first mail: I noticed bad IO
 performance in *disk* benchmarks; the network is working really fine.

>>>
>>> But where is the disk? If it's across the network, then network is
>>> involved and is certainly a bottleneck.
>>> Y.
>>>
>>>







 2016-09-02 12:04 GMT-03:00 Yaniv Kaul :

> On Fri, Sep 2, 2016 at 5:33 PM, Gabriel Ozaki <
> gabriel.oz...@kemi.com.br> wrote:
>
>> Hi Nir, thanks for the answer
>>
>>
>> *The nfs server is in the host?*
>> Yes, I chose NFS as the storage on the ovirt host
>>
>> *- Is this 2.9GiB/s or 2.9 MiB/s?*
>> It is MiB/s; I put the full test on pastebin:
>> centos guest on ovirt:
>> http://pastebin.com/d48qfvuf
>>
>> centos guest on xenserver:
>> http://pastebin.com/gqN3du29
>>
>> how the test works:
>> https://www.howtoforge.com/how-to-benchmark-your-system-cpu-
>> file-io-mysql-with-sysbench
>>
>> *- Are you testing using NFS in all versions?*
>> i am using the v3 version
>>
>> *- What is the disk format?*
>> partition      size          format
>> /              20 GB         xfs
>> swap           2 GB          xfs
>> /dados         rest of disk  xfs   (note: this is the partition where I save
>> the ISOs, exports and VM disks)
>>
>>
>> *- How do you test io on the host?*
>> I did a clean install of centos and ran the test before installing
>> ovirt.
>> the test:
>> http://pastebin.com/7RKU7778
>>
>> *- What kind of nic is used? (1G, 10G?)*
>> Is only a 100mbps :(
>>
>
> 100Mbps will not get you more than 

Re: [ovirt-users] /var/run/ovirt-hosted-engine-ha/vm.conf not found

2016-09-06 Thread Pat Riehecky

It is, thanks for asking!

Pat

On 09/05/2016 02:58 AM, Roy Golan wrote:

By the way, is the hosted engine VM listed in the web admin?

On 2 September 2016 at 16:40, Pat Riehecky wrote:


Hi Simone,

Thanks for the follow up!

I'll see about pulling out some log entries and getting a bugzilla
filed with them attached.

I was able to get it working again late last night by reinstalling
the ha engine rpms and rebooting.  Not sure why that fixed it, but
the engine fired right up shortly after and immediately saw all my
running VMs.

Pat


On 09/02/2016 02:52 AM, Simone Tiraboschi wrote:



On Thu, Sep 1, 2016 at 9:25 PM, Pat Riehecky wrote:

Any suggestions for how I generate the alternate config?


vm.conf will get generated when needed by converting the OVF
description in the OVF_STORE volume on the shared storage. The
OVF_STORE volume and its content are managed by the engine; in
this way you can edit the engine VM configuration from the engine
in a distributed environment.

Now the question is why your system is failing on this step.
Can you please attach your
/var/log/ovirt-hosted-engine-ha/agent.log ?

Pat


On 09/01/2016 12:30 PM, Raymond wrote:

Same issue here, got it running with an alternate config
hosted-engine  --vm-start
--vm-conf=/etc/ovirt-hosted-engine/vm.conf

Cheers
Raymond



- Original Message -
From: "Pat Riehecky" >
To: "users" >
Sent: Thursday, September 1, 2016 6:41:08 PM
Subject: [ovirt-users]
/var/run/ovirt-hosted-engine-ha/vm.conf not found

I seem to be unable to restart my ovirt hosted engine.

I'd swear I had this issue once before, but couldn't find
any notes on
my end.


# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=vmadmin.fnal.gov 
vm_disk_id=36a09f1d-0923-4b3b-87aa-4788ca64064e
vm_disk_vol_id=c4244bdd-80c5-4f68-83c2-9494d9d05723
vmid=823d3e5b-60c2-4e53-a9e8-313aedcaf808
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=4
console=qxl
domainType=fc
spUUID=----
sdUUID=81f19871-4d91-4698-a97d-36452bfae281
connectionUUID=73b61b0a-85f8-4fb7-8faf-c687ef7cc5d8
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=131.225.193.200
bridge=ovirtmgmt
metadata_volume_UUID=93cf12c5-d5e6-4ea6-bee7-de64ee52d7a5
metadata_image_UUID=22a9849b-c551-4783-ad0c-530464df47f3
lockspace_volume_UUID=0593a3b8-1d75-4be3-b65b-ce9a164d0309
lockspace_image_UUID=2c8c56f2-1711-4867-8ed0-3c502bb635ff
conf_volume_UUID=07c72aa5-7fd0-4159-b83e-4d078ae9c351
conf_image_UUID=f9e59ec5-6903-4b2d-8164-4fce3d901bdd

# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=

















[ovirt-users] How can I install ovirt engine with an older version like 4.0.2.2?

2016-09-06 Thread 转圈圈
Ovirt current version is 4.0.4. How can I install ovirt engine with an older
version like 4.0.2.2? Could I install it by "yum install"?

Thank you.
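To answer the question as asked: yum can install a specific older build as
long as a repository still carries it. A hedged sketch — the exact
version-release string must match what the repos report:

# List every ovirt-engine build the enabled repos still provide
yum list --showduplicates ovirt-engine

# Install a specific older build by full name-version-release
yum install ovirt-engine-4.0.2.2-1.el7.centos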


Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Yaniv Kaul
On Tue, Sep 6, 2016 at 12:17 PM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

> Hi,
>
> No, I move VMs around with an Export Storage domain.
>

If you have enough disk and bandwidth, perhaps it makes more sense to set
up Gluster as a shared storage?
And then just pin VMs to specific hosts, instead of separate DCs, etc.?
Y.


> All NFS is exported only to the local machine.
>
> Nothing is “shared” between hosts. But because I want to export VMs, we
> use “shared” storage in oVirt instead of “local”.
>
> Best,
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the
> original message and any copies.
> 
>
>
>
> > On 06 Sep 2016, at 10:06, Yedidyah Bar David  wrote:
> >
> > On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS
> >  wrote:
> >> Personally my use case is that I have 4 machines with different specs
> and storage sizing. So I set up four DCs with 1 host each. Then I have hosted
> engine on one of the hosts. Storage is local shared via NFS so that I can
> move VMs around.
> >
> > Not sure I fully understand.
> >
> > You use each of the 4 machines for both storage and running VMs?
> > And export nfs on each to all the others?
> >
> > So that if a VM needs more CPU/memory than disk IO, you can move
> > it to another machine and hopefully get better performance even
> > though the storage is not local?
> >
> > I admit that it sounds very reasonable, and agree that doing this
> > with nfs is easier than with iSCSI. If you don't mind the risk of
> > local-nfs-mount locks, fine. As others noted, seems like it's quite
> > a low risk.
> >
> >>
> >> At this point we are not interested necessarily in HA.
> >>
> >> Maybe for you that's the definition of a Dev environment as production
> has other attributes than just the type of storage?
> >
> > Dev or Prod is for you to define :-)
> >
> > How much time/money do you lose if a machine dies? If a machine
> > locks up until someone notices and handles?
> >
> >>
> >> Would be nice to hear your thoughts about this.
> >
> > As wrote above, sounds reasonable if you understand the risks
> > and can live with them.
> >
> > Looking at the future you might want to check HC:
> >
> > https://www.ovirt.org/develop/release-management/features/
> gluster/glusterfs-hyperconvergence/
> >
> >>
> >> Kind regards,
> >> Christophe
> >>
> >> Sent from my iPhone
> >>
> >>> On 06 Sep 2016, at 08:45, Yedidyah Bar David  wrote:
> >>>
> >>> On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
> >>>  wrote:
>  So basically we need at least 2 nodes to enter the realm of testing
> and maintained?
> >>>
> >>> I think some people occasionally use hosted-engine with local
> >>> iSCSI storage on a single machine. AFAIK it's not tested by CI
> >>> or often, but patches are welcome - e.g. using lago and
> >>> ovirt-system-tests.
> >>>
> >>> Can you please explain your intentions/requirements?
> >>>
> >>> Even if it works, oVirt is not designed for single-machine
> >>> _production_ use. For that, I think that most people agree that
> >>> virt-manager is more suitable. oVirt on a single machine is
> >>> usually for testing/demonstration/learning/etc.
> >>>
> 
>  If we’re talking pure oVirt here.
> 
>  --
> 
>  Dr Christophe Trefois, Dipl.-Ing.
>  Technical Specialist / Post-Doc
> 
>  UNIVERSITÉ DU LUXEMBOURG
> 
>  LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
>  Campus Belval | House of Biomedicine
>  6, avenue du Swing
>  L-4367 Belvaux
>  T: +352 46 66 44 6124
>  F: +352 46 66 44 6949
>  http://www.uni.lu/lcsb
> 
> 
> 
>  
>  This message is confidential and may contain privileged information.
>  It is intended for the named recipient only.
>  If you receive it in error please notify me and permanently delete
> the original message and any copies.
>  
> 
> 
> 
> > On 05 Sep 2016, at 16:31, Fernando Frediani <
> fernando.fredi...@upx.com.br> wrote:
> >
> > Adding Kimchi to oVirt node perhaps may be the easiest option. It
> can be pretty useful for many situations and doesn't need such thing like
> mounting NFS in localhost.
> >
> > It is not nice to not have an All-in-One stable solution anymore as
> this can help with its adoption for later growth.
> >
> > oVirt-Cockpit looks nice and interesting.
> >
> > Fernando
> >
> >
> >> On 05/09/2016 05:18, Barak Korren wrote:
> >>> On 4 September 

Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 12:17 PM, Christophe TREFOIS
 wrote:
> Hi,
>
> No, I move VMs around with an Export Storage domain.

OK.

>
> All NFS is exported only to the local machine.
>
> Nothing is “shared” between hosts. But because I want to export VMs, we use 
> “shared” storage in oVirt instead of “local”.

I think you can use nfs export storage domains also in a local-storage DC.

>
> Best,
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the 
> original message and any copies.
> 
>
>
>
>> On 06 Sep 2016, at 10:06, Yedidyah Bar David  wrote:
>>
>> On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS
>>  wrote:
>>> Personally my use case is that I have 4 machines with different specs and 
>>> storage sizing. So I set up four DCs with 1 host each. Then I have hosted 
>>> engine on one of the hosts. Storage is local shared via NFS so that I can 
>>> move VMs around.
>>
>> Not sure I fully understand.
>>
>> You use each of the 4 machines for both storage and running VMs?
>> And export nfs on each to all the others?
>>
>> So that if a VM needs more CPU/memory than disk IO, you can move
>> it to another machine and hopefully get better performance even
>> though the storage is not local?
>>
>> I admit that it sounds very reasonable, and agree that doing this
>> with nfs is easier than with iSCSI. If you don't mind the risk of
>> local-nfs-mount locks, fine. As others noted, seems like it's quite
>> a low risk.
>>
>>>
>>> At this point we are not interested necessarily in HA.
>>>
>>> Maybe for you that's the definition of a Dev environment as production has 
>>> other attributes than just the type of storage?
>>
>> Dev or Prod is for you to define :-)
>>
>> How much time/money do you lose if a machine dies? If a machine
>> locks up until someone notices and handles?
>>
>>>
>>> Would be nice to hear your thoughts about this.
>>
>> As wrote above, sounds reasonable if you understand the risks
>> and can live with them.
>>
>> Looking at the future you might want to check HC:
>>
>> https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-hyperconvergence/
>>
>>>
>>> Kind regards,
>>> Christophe
>>>
>>> Sent from my iPhone
>>>
 On 06 Sep 2016, at 08:45, Yedidyah Bar David  wrote:

 On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
  wrote:
> So basically we need at least 2 nodes to enter the realm of testing and 
> maintained?

 I think some people occasionally use hosted-engine with local
 iSCSI storage on a single machine. AFAIK it's not tested by CI
 or often, but patches are welcome - e.g. using lago and
 ovirt-system-tests.

 Can you please explain your intentions/requirements?

 Even if it works, oVirt is not designed for single-machine
 _production_ use. For that, I think that most people agree that
 virt-manager is more suitable. oVirt on a single machine is
 usually for testing/demonstration/learning/etc.

>
> If we’re talking pure oVirt here.
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> 
> This message is confidential and may contain privileged information.
> It is intended for the named recipient only.
> If you receive it in error please notify me and permanently delete the 
> original message and any copies.
> 
>
>
>
>> On 05 Sep 2016, at 16:31, Fernando Frediani 
>>  wrote:
>>
>> Adding Kimchi to oVirt node perhaps may be the easiest option. It can be 
>> pretty useful for many situations and doesn't need such thing like 
>> mounting NFS in localhost.
>>
>> It is not nice to not have an All-in-One stable solution anymore as this 
>> can help with its adoption for later growth.
>>
>> oVirt-Cockpit looks nice and interesting.
>>
>> Fernando
>>
>>
>>> On 05/09/2016 05:18, Barak Korren wrote:
 On 4 September 2016 at 23:45, zero four  wrote:
 ...
 I understand and acknowledge that oVirt is not targeted towards homelab
 setups, 

Re: [ovirt-users] oVirt fails to install on nodes after moving engine to C7 host

2016-09-06 Thread James
Worked perfectly, thanks so much!

On Tue, 6 Sep 2016, at 07:11 PM, Yedidyah Bar David wrote:
> On Tue, Sep 6, 2016 at 11:54 AM, James  wrote:
> > Thanks for the info. It looks like everything is tied together (as it
> > should be) dependency wise, so downgrading otopi isn't going to be that
> > simple.
> >
> > [root@engine ~]# yum downgrade otopi-1.4.2-1.el7.centos
> > otopi-java-1.4.2-1.el7.centos
> > 
> > Resolving Dependencies
> > --> Running transaction check
> > ---> Package otopi.noarch 0:1.4.2-1.el7.centos will be a downgrade
> > ---> Package otopi.noarch 0:1.5.2-1.el7.centos will be erased
> > ---> Package otopi-java.noarch 0:1.4.2-1.el7.centos will be a downgrade
> > ---> Package otopi-java.noarch 0:1.5.2-1.el7.centos will be erased
> > --> Finished Dependency Resolution
> > Error: Package: ovirt-engine-setup-base-4.0.2.7-1.el7.centos.noarch
> > (@ovirt-4.0)
> >Requires: otopi >= 1.5.0
> >Removing: otopi-1.5.2-1.el7.centos.noarch (@ovirt-4.0)
> >otopi = 1.5.2-1.el7.centos
> >Downgraded By: otopi-1.4.2-1.el7.centos.noarch (ovirt-3.6)
> >otopi = 1.4.2-1.el7.centos
> >Available: otopi-1.4.0-1.el7.noarch (centos-ovirt36)
> >otopi = 1.4.0-1.el7
> >Available: otopi-1.4.0-1.el7.centos.noarch (ovirt-3.6)
> >otopi = 1.4.0-1.el7.centos
> >Available: otopi-1.4.1-1.el7.noarch (centos-ovirt36)
> >otopi = 1.4.1-1.el7
> >Available: otopi-1.4.1-1.el7.centos.noarch (ovirt-3.6)
> >otopi = 1.4.1-1.el7.centos
> >Available: otopi-1.5.0-1.el7.centos.noarch (ovirt-4.0)
> >otopi = 1.5.0-1.el7.centos
> >Available: otopi-1.5.1-1.el7.noarch (centos-ovirt40-release)
> >otopi = 1.5.1-1.el7
> >Available: otopi-1.5.1-1.el7.centos.noarch (ovirt-4.0)
> >otopi = 1.5.1-1.el7.centos
> >  You could try using --skip-broken to work around the problem
> >  You could try running: rpm -Va --nofiles --nodigest
> > [root@engine ~]#
> >
> > Which just goes down a rabbit hole of dependency chasing for most ovirt*
> > packages. I can't think of a good way past this apart from a reinstall?
> 
> It's not really a rabbit hole - if you only upgraded the setup packages,
> it's just them.
> 
> You can downgrade the entire transaction with 'yum history'. Check
> 'yum history' output, 'yum history info ID' with a specific ID to see
> what happened there, then 'yum history undo ID'.
> 
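A hedged sketch of the 'yum history' flow described above (the transaction ID
is illustrative, not taken from this thread):

# Find the transaction that pulled in the unwanted packages
yum history list
yum history info 42    # 42 is an example transaction ID

# Roll the whole transaction back in one step
yum history undo 42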
> >
> > On Tue, 6 Sep 2016, at 06:19 PM, Yedidyah Bar David wrote:
> >> On Tue, Sep 6, 2016 at 10:43 AM, James  wrote:
> >> > Thanks for your help! I've pasted the log lines requested below. Also
> >> > worth noting that I tried upgrading it to 4.0, which I think has led to
> >> > some broken package versions. Everything seems to work, but there are
> >> > some 4.0 packages installed.
> >>
> >> Perhaps that's the problem [1]. Can you try downgrading otopi to 1.4 and
> >> removing /var/cache/ovirt-engine/ovirt-host-deploy.tar on the engine machine?
> >>
> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1348091
> >>
> >> Your package state is not very healthy, but if the above is enough to get you
> >> to install a host, and you later succeed in upgrading to 4.0, you should be
> >> ok.
> >>
> >> >
> >> > When trying to install 4.0 I followed the docs (yum update
> >> > engine-setup*) however when running engine-setup it informed me that it
> >> > couldn't take the install to 4.0 because of the 3.5 cluster that
> >> > existed. Currently installed packages:
> >> >
> >> > Installed Packages
> >> > ovirt-engine.noarch
> >> >3.6.7.5-1.el7.centos
> >> >  @ovirt-3.6
> >> > ovirt-engine-backend.noarch
> >> >3.6.7.5-1.el7.centos
> >> >  @ovirt-3.6
> >> > ovirt-engine-cli.noarch
> >> >3.6.8.0-1.el7.centos
> >> >  @ovirt-4.0
> >> > ovirt-engine-dbscripts.noarch
> >> >3.6.7.5-1.el7.centos
> >> >  @ovirt-3.6
> >> > ovirt-engine-dwh.noarch
> >> >3.6.6-1.el7.centos
> >> >  @ovirt-3.6
> >> > ovirt-engine-dwh-setup.noarch
> >> >4.0.1-1.el7.centos
> >> >  @ovirt-4.0
> >> > ovirt-engine-extension-aaa-jdbc.noarch
> >> >1.0.7-1.el7
> >> >  @ovirt-3.6
> >> > ovirt-engine-extension-aaa-ldap.noarch
> >> >1.2.1-1.el7
> >> >  @ovirt-4.0
> >> > ovirt-engine-extension-aaa-ldap-setup.noarch
> >> >1.2.1-1.el7
> >> >

Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Christophe TREFOIS
Hi,

No, I move VMs around with an Export Storage domain. 

All NFS is exported only to the local machine.

Nothing is “shared” between hosts. But because I want to export VMs, we use 
“shared” storage in oVirt instead of “local”.
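
To make that concrete, each host exports its local storage roughly like this 
(the path is a placeholder, not our real layout; oVirt expects the exported 
tree to be owned by vdsm:kvm, i.e. UID/GID 36:36):

# /etc/exports - illustrative entry, exported only to the host itself
/storage/ovirt-data  myhost.example.com(rw,sync,no_subtree_check,anonuid=36,anongid=36)

chown -R 36:36 /storage/ovirt-data   # ownership oVirt expects
exportfs -ra                         # publish the export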

Best,

-- 

Dr Christophe Trefois, Dipl.-Ing.  
Technical Specialist / Post-Doc

UNIVERSITÉ DU LUXEMBOURG

LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine  
6, avenue du Swing 
L-4367 Belvaux  
T: +352 46 66 44 6124 
F: +352 46 66 44 6949  
http://www.uni.lu/lcsb






  

> On 06 Sep 2016, at 10:06, Yedidyah Bar David  wrote:
> 
> On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS
>  wrote:
> Personally my use case is that I have 4 machines with different specs and 
> storage sizing, so I set up four DCs with one host each. Then I have the 
> hosted engine on one of the hosts. Storage is local, shared via NFS, so 
> that I can move VMs around.
> 
> Not sure I fully understand.
> 
> You use each of the 4 machines for both storage and running VMs?
> And export nfs on each to all the others?
> 
> So that if a VM needs more CPU/memory than disk I/O, you can move
> it to another machine and hopefully get better performance even
> though the storage is not local?
> 
> I admit that it sounds very reasonable, and agree that doing this
> with nfs is easier than with iSCSI. If you don't mind the risk of
> local-nfs-mount locks, fine. As others noted, it seems like quite
> a low risk.
> 
>> 
> At this point we are not necessarily interested in HA.
>> 
> Maybe for you that's the definition of a dev environment, as production has 
> other attributes besides just the type of storage?
> 
> Dev or Prod is for you to define :-)
> 
> How much time/money do you lose if a machine dies? If a machine
> locks up until someone notices and handles it?
> 
>> 
> It would be nice to hear your thoughts about this.
> 
> As I wrote above, it sounds reasonable if you understand the risks
> and can live with them.
> 
> Looking to the future, you might want to check HC:
> 
> https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-hyperconvergence/
> 
>> 
>> Kind regards,
>> Christophe
>> 
>> Sent from my iPhone
>> 
>>> On 06 Sep 2016, at 08:45, Yedidyah Bar David  wrote:
>>> 
>>> On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
>>>  wrote:
 So basically we need at least 2 nodes to enter the realm of tested and 
 maintained?
>>> 
>>> I think some people occasionally use hosted-engine with local
>>> iSCSI storage on a single machine. AFAIK it's not tested by CI,
>>> or often at all, but patches are welcome - e.g. using lago and
>>> ovirt-system-tests.
>>> 
>>> Can you please explain your intentions/requirements?
>>> 
>>> Even if it works, oVirt is not designed for single-machine
>>> _production_ use. For that, I think that most people agree that
>>> virt-manager is more suitable. oVirt on a single machine is
>>> usually for testing/demonstration/learning/etc.
>>> 
 
 If we’re talking pure oVirt here.
 
 --
 
 Dr Christophe Trefois, Dipl.-Ing.
 Technical Specialist / Post-Doc
 
 UNIVERSITÉ DU LUXEMBOURG
 
 LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
 Campus Belval | House of Biomedicine
 6, avenue du Swing
 L-4367 Belvaux
 T: +352 46 66 44 6124
 F: +352 46 66 44 6949
 http://www.uni.lu/lcsb
 
 
 
 
 
 
 
 
> On 05 Sep 2016, at 16:31, Fernando Frediani 
>  wrote:
> 
> Adding Kimchi to oVirt Node is perhaps the easiest option. It can be 
> pretty useful in many situations and doesn't need anything like 
> mounting NFS on localhost.
> 
> It is a shame to no longer have a stable All-in-One solution, as such 
> a setup helps with adoption of the project and later growth.
> 
> oVirt-Cockpit looks nice and interesting.
> 
> Fernando
> 
> 
>> On 05/09/2016 05:18, Barak Korren wrote:
>>> On 4 September 2016 at 23:45, zero four  wrote:
>>> ...
>>> I understand and acknowledge that oVirt is not targeted towards homelab
>>> setups, or at least small homelab setups.  However I believe that 
>>> having a
>>> solid configuration for such use cases would be a benefit to the 
>>> project as
>>> a whole.
>> As others have already mentioned, using the full oVirt with engine in
>> a single 

Re: [ovirt-users] oVirt fails to install on nodes after moving engine to C7 host

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 11:54 AM, James  wrote:
> Thanks for the info. It looks like everything is tied together (as it
> should be) dependency-wise, so downgrading otopi isn't going to be that
> simple.
>
> [root@engine ~]# yum downgrade otopi-1.4.2-1.el7.centos
> otopi-java-1.4.2-1.el7.centos
> 
> Resolving Dependencies
> --> Running transaction check
> ---> Package otopi.noarch 0:1.4.2-1.el7.centos will be a downgrade
> ---> Package otopi.noarch 0:1.5.2-1.el7.centos will be erased
> ---> Package otopi-java.noarch 0:1.4.2-1.el7.centos will be a downgrade
> ---> Package otopi-java.noarch 0:1.5.2-1.el7.centos will be erased
> --> Finished Dependency Resolution
> Error: Package: ovirt-engine-setup-base-4.0.2.7-1.el7.centos.noarch
> (@ovirt-4.0)
>Requires: otopi >= 1.5.0
>Removing: otopi-1.5.2-1.el7.centos.noarch (@ovirt-4.0)
>otopi = 1.5.2-1.el7.centos
>Downgraded By: otopi-1.4.2-1.el7.centos.noarch (ovirt-3.6)
>otopi = 1.4.2-1.el7.centos
>Available: otopi-1.4.0-1.el7.noarch (centos-ovirt36)
>otopi = 1.4.0-1.el7
>Available: otopi-1.4.0-1.el7.centos.noarch (ovirt-3.6)
>otopi = 1.4.0-1.el7.centos
>Available: otopi-1.4.1-1.el7.noarch (centos-ovirt36)
>otopi = 1.4.1-1.el7
>Available: otopi-1.4.1-1.el7.centos.noarch (ovirt-3.6)
>otopi = 1.4.1-1.el7.centos
>Available: otopi-1.5.0-1.el7.centos.noarch (ovirt-4.0)
>otopi = 1.5.0-1.el7.centos
>Available: otopi-1.5.1-1.el7.noarch (centos-ovirt40-release)
>otopi = 1.5.1-1.el7
>Available: otopi-1.5.1-1.el7.centos.noarch (ovirt-4.0)
>otopi = 1.5.1-1.el7.centos
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
> [root@engine ~]#
>
> Which just goes down a rabbit hole of dependency chasing for most ovirt*
> packages. I can't think of a good way past this apart from a reinstall?

It's not really a rabbit hole - if you only upgraded the setup packages,
it's just them.

You can downgrade the entire transaction with 'yum history'. Check
the 'yum history' output, run 'yum history info ID' with a specific ID to
see what happened there, then 'yum history undo ID'.
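
Something like this, assuming the upgrade shows up as transaction 42 (the ID
will be different on your system):

yum history list        # find the ID of the transaction that pulled in 4.0
yum history info 42     # review exactly what that transaction changed
yum history undo 42     # revert the whole transaction in one go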

>
> On Tue, 6 Sep 2016, at 06:19 PM, Yedidyah Bar David wrote:
>> On Tue, Sep 6, 2016 at 10:43 AM, James  wrote:
>> > Thanks for your help! I've pasted the log lines requested below. Also
>> > worth noting that I tried upgrading it to 4.0, which I think has led to
>> > some broken package versions. Everything seems to work, but there are
>> > some 4.0 packages installed.
>>
>> Perhaps that's the problem [1]. Can you try downgrading otopi to 1.4 and
>> removing /var/cache/ovirt-engine/ovirt-host-deploy.tar on the engine machine?
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1348091
>>
>> Your package state is not very healthy, but if the above is enough to get you
>> to install a host, and you later succeed in upgrading to 4.0, you should be
>> ok.
>>
>> >
>> > When trying to install 4.0 I followed the docs (yum update
>> > engine-setup*) however when running engine-setup it informed me that it
>> > couldn't take the install to 4.0 because of the 3.5 cluster that
>> > existed. Currently installed packages:
>> >
>> > Installed Packages
>> > ovirt-engine.noarch
>> >3.6.7.5-1.el7.centos
>> >  @ovirt-3.6
>> > ovirt-engine-backend.noarch
>> >3.6.7.5-1.el7.centos
>> >  @ovirt-3.6
>> > ovirt-engine-cli.noarch
>> >3.6.8.0-1.el7.centos
>> >  @ovirt-4.0
>> > ovirt-engine-dbscripts.noarch
>> >3.6.7.5-1.el7.centos
>> >  @ovirt-3.6
>> > ovirt-engine-dwh.noarch
>> >3.6.6-1.el7.centos
>> >  @ovirt-3.6
>> > ovirt-engine-dwh-setup.noarch
>> >4.0.1-1.el7.centos
>> >  @ovirt-4.0
>> > ovirt-engine-extension-aaa-jdbc.noarch
>> >1.0.7-1.el7
>> >  @ovirt-3.6
>> > ovirt-engine-extension-aaa-ldap.noarch
>> >1.2.1-1.el7
>> >  @ovirt-4.0
>> > ovirt-engine-extension-aaa-ldap-setup.noarch
>> >1.2.1-1.el7
>> >  @ovirt-4.0
>> > ovirt-engine-extensions-api-impl.noarch
>> >4.0.2.7-1.el7.centos
>> >  @ovirt-4.0
>> > ovirt-engine-lib.noarch
>> >4.0.2.7-1.el7.centos
>> >  

Re: [ovirt-users] oVirt fails to install on nodes after moving engine to C7 host

2016-09-06 Thread James
Thanks for the info. It looks like everything is tied together (as it
should be) dependency-wise, so downgrading otopi isn't going to be that
simple.

[root@engine ~]# yum downgrade otopi-1.4.2-1.el7.centos
otopi-java-1.4.2-1.el7.centos

Resolving Dependencies
--> Running transaction check
---> Package otopi.noarch 0:1.4.2-1.el7.centos will be a downgrade
---> Package otopi.noarch 0:1.5.2-1.el7.centos will be erased
---> Package otopi-java.noarch 0:1.4.2-1.el7.centos will be a downgrade
---> Package otopi-java.noarch 0:1.5.2-1.el7.centos will be erased
--> Finished Dependency Resolution
Error: Package: ovirt-engine-setup-base-4.0.2.7-1.el7.centos.noarch
(@ovirt-4.0)
   Requires: otopi >= 1.5.0
   Removing: otopi-1.5.2-1.el7.centos.noarch (@ovirt-4.0)
   otopi = 1.5.2-1.el7.centos
   Downgraded By: otopi-1.4.2-1.el7.centos.noarch (ovirt-3.6)
   otopi = 1.4.2-1.el7.centos
   Available: otopi-1.4.0-1.el7.noarch (centos-ovirt36)
   otopi = 1.4.0-1.el7
   Available: otopi-1.4.0-1.el7.centos.noarch (ovirt-3.6)
   otopi = 1.4.0-1.el7.centos
   Available: otopi-1.4.1-1.el7.noarch (centos-ovirt36)
   otopi = 1.4.1-1.el7
   Available: otopi-1.4.1-1.el7.centos.noarch (ovirt-3.6)
   otopi = 1.4.1-1.el7.centos
   Available: otopi-1.5.0-1.el7.centos.noarch (ovirt-4.0)
   otopi = 1.5.0-1.el7.centos
   Available: otopi-1.5.1-1.el7.noarch (centos-ovirt40-release)
   otopi = 1.5.1-1.el7
   Available: otopi-1.5.1-1.el7.centos.noarch (ovirt-4.0)
   otopi = 1.5.1-1.el7.centos
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
[root@engine ~]#

Which just goes down a rabbit hole of dependency chasing for most ovirt*
packages. I can't think of a good way past this apart from a reinstall?
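
Presumably any downgrade would have to pull the dependent setup packages
down in the same transaction - something like the sketch below, which is
illustrative only and not something I have actually run:

yum downgrade otopi otopi-java 'ovirt-engine-setup*'   # one transaction
rm -f /var/cache/ovirt-engine/ovirt-host-deploy.tar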

On Tue, 6 Sep 2016, at 06:19 PM, Yedidyah Bar David wrote:
> On Tue, Sep 6, 2016 at 10:43 AM, James  wrote:
> > Thanks for your help! I've pasted the log lines requested below. Also
> > worth noting that I tried upgrading it to 4.0, which I think has led to
> > some broken package versions. Everything seems to work, but there are
> > some 4.0 packages installed.
> 
> Perhaps that's the problem [1]. Can you try downgrading otopi to 1.4 and
> removing /var/cache/ovirt-engine/ovirt-host-deploy.tar on the engine machine?
> 
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1348091
> 
> Your package state is not very healthy, but if the above is enough to get you
> to install a host, and you later succeed in upgrading to 4.0, you should be
> ok.
> 
> >
> > When trying to install 4.0 I followed the docs (yum update
> > engine-setup*) however when running engine-setup it informed me that it
> > couldn't take the install to 4.0 because of the 3.5 cluster that
> > existed. Currently installed packages:
> >
> > Installed Packages
> > ovirt-engine.noarch
> >3.6.7.5-1.el7.centos
> >  @ovirt-3.6
> > ovirt-engine-backend.noarch
> >3.6.7.5-1.el7.centos
> >  @ovirt-3.6
> > ovirt-engine-cli.noarch
> >3.6.8.0-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-dbscripts.noarch
> >3.6.7.5-1.el7.centos
> >  @ovirt-3.6
> > ovirt-engine-dwh.noarch
> >3.6.6-1.el7.centos
> >  @ovirt-3.6
> > ovirt-engine-dwh-setup.noarch
> >4.0.1-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-extension-aaa-jdbc.noarch
> >1.0.7-1.el7
> >  @ovirt-3.6
> > ovirt-engine-extension-aaa-ldap.noarch
> >1.2.1-1.el7
> >  @ovirt-4.0
> > ovirt-engine-extension-aaa-ldap-setup.noarch
> >1.2.1-1.el7
> >  @ovirt-4.0
> > ovirt-engine-extensions-api-impl.noarch
> >4.0.2.7-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-lib.noarch
> >4.0.2.7-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-restapi.noarch
> >3.6.7.5-1.el7.centos
> >  @ovirt-3.6
> > ovirt-engine-sdk-python.noarch
> >3.6.8.0-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-setup.noarch
> >4.0.2.7-1.el7.centos
> >  @ovirt-4.0
> > ovirt-engine-setup-base.noarch
> >  

Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 9:53 AM, Christophe TREFOIS
 wrote:
> Personally my use case is that I have 4 machines with different specs and 
> storage sizing, so I set up four DCs with one host each. Then I have the 
> hosted engine on one of the hosts. Storage is local, shared via NFS, so 
> that I can move VMs around.

Not sure I fully understand.

You use each of the 4 machines for both storage and running VMs?
And export nfs on each to all the others?

So that if a VM needs more CPU/memory than disk I/O, you can move
it to another machine and hopefully get better performance even
though the storage is not local?

I admit that it sounds very reasonable, and agree that doing this
with nfs is easier than with iSCSI. If you don't mind the risk of
local-nfs-mount locks, fine. As others noted, it seems like quite
a low risk.

>
> At this point we are not necessarily interested in HA.
>
> Maybe for you that's the definition of a dev environment, as production has 
> other attributes besides just the type of storage?

Dev or Prod is for you to define :-)

How much time/money do you lose if a machine dies? If a machine
locks up until someone notices and handles it?

>
> It would be nice to hear your thoughts about this.

As I wrote above, it sounds reasonable if you understand the risks
and can live with them.

Looking to the future, you might want to check HC:

https://www.ovirt.org/develop/release-management/features/gluster/glusterfs-hyperconvergence/

>
> Kind regards,
> Christophe
>
> Sent from my iPhone
>
>> On 06 Sep 2016, at 08:45, Yedidyah Bar David  wrote:
>>
>> On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
>>  wrote:
>>> So basically we need at least 2 nodes to enter the realm of tested and 
>>> maintained?
>>
>> I think some people occasionally use hosted-engine with local
>> iSCSI storage on a single machine. AFAIK it's not tested by CI,
>> or often at all, but patches are welcome - e.g. using lago and
>> ovirt-system-tests.
>>
>> Can you please explain your intentions/requirements?
>>
>> Even if it works, oVirt is not designed for single-machine
>> _production_ use. For that, I think that most people agree that
>> virt-manager is more suitable. oVirt on a single machine is
>> usually for testing/demonstration/learning/etc.
>>
>>>
>>> If we’re talking pure oVirt here.
>>>
>>> --
>>>
>>> Dr Christophe Trefois, Dipl.-Ing.
>>> Technical Specialist / Post-Doc
>>>
>>> UNIVERSITÉ DU LUXEMBOURG
>>>
>>> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
>>> Campus Belval | House of Biomedicine
>>> 6, avenue du Swing
>>> L-4367 Belvaux
>>> T: +352 46 66 44 6124
>>> F: +352 46 66 44 6949
>>> http://www.uni.lu/lcsb
>>>
>>>
>>>
>>> 
>>> 
>>>
>>>
>>>
 On 05 Sep 2016, at 16:31, Fernando Frediani  
 wrote:

 Adding Kimchi to oVirt Node is perhaps the easiest option. It can be 
 pretty useful in many situations and doesn't need anything like 
 mounting NFS on localhost.
 
 It is a shame to no longer have a stable All-in-One solution, as such 
 a setup helps with adoption of the project and later growth.
 
 oVirt-Cockpit looks nice and interesting.

 Fernando


> On 05/09/2016 05:18, Barak Korren wrote:
>> On 4 September 2016 at 23:45, zero four  wrote:
>> ...
>> I understand and acknowledge that oVirt is not targeted towards homelab
>> setups, or at least small homelab setups.  However I believe that having 
>> a
>> solid configuration for such use cases would be a benefit to the project 
>> as
>> a whole.
> As others have already mentioned, using the full oVirt with engine in
> a single host scenario can work, but is not currently actively
> maintained or tested.
>
> There are other options originating from the oVirt community however.
>
> One notable option is to use the Cockpit-oVirt plugin [1] which can
> use VDSM to manage VMs on a single host.
>
> Another option is to use the Kimchi project [2], for which discussion
> about making it an oVirt project took place in the past [3]. It
> seems that some work for inclusion in oVirt Node was also planned
> at some point [4].
>
> [1]: http://www.ovirt.org/develop/release-management/features/cockpit/
> [2]: https://github.com/kimchi-project/kimchi
> [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
> [4]: 
> http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/

>>>
>>> 

Re: [ovirt-users] oVirt fails to install on nodes after moving engine to C7 host

2016-09-06 Thread James
Thanks for your help! I've pasted the log lines requested below. Also
worth noting that I tried upgrading it to 4.0, which I think has led to
some broken package versions. Everything seems to work, but there are
some 4.0 packages installed.

When trying to install 4.0 I followed the docs (yum update
engine-setup*); however, when running engine-setup it informed me that it
couldn't take the install to 4.0 because of the 3.5 cluster that
existed. Currently installed packages:

Installed Packages
ovirt-engine.noarch 
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-backend.noarch 
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-cli.noarch 
   3.6.8.0-1.el7.centos 
 @ovirt-4.0
ovirt-engine-dbscripts.noarch   
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-dwh.noarch 
   3.6.6-1.el7.centos   
 @ovirt-3.6
ovirt-engine-dwh-setup.noarch   
   4.0.1-1.el7.centos   
 @ovirt-4.0
ovirt-engine-extension-aaa-jdbc.noarch  
   1.0.7-1.el7  
 @ovirt-3.6
ovirt-engine-extension-aaa-ldap.noarch  
   1.2.1-1.el7  
 @ovirt-4.0
ovirt-engine-extension-aaa-ldap-setup.noarch
   1.2.1-1.el7  
 @ovirt-4.0
ovirt-engine-extensions-api-impl.noarch 
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-lib.noarch 
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-restapi.noarch 
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-sdk-python.noarch  
   3.6.8.0-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup.noarch   
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup-base.noarch  
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup-plugin-ovirt-engine.noarch   
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup-plugin-ovirt-engine-common.noarch
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch 
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-setup-plugin-websocket-proxy.noarch
   4.0.2.7-1.el7.centos 
 @ovirt-4.0
ovirt-engine-tools.noarch   
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-tools-backup.noarch
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-userportal.noarch  
   3.6.7.5-1.el7.centos 
 @ovirt-3.6
ovirt-engine-vmconsole-proxy-helper.noarch  
   4.0.2.7-1.el7.centos 

Re: [ovirt-users] ovirt3.5 certificate problem

2016-09-06 Thread Fedele Stabile
Can anyone help me?
I'm trying to upgrade my oVirt 3.5 to CentOS 7 and then to oVirt 3.6,
and I need to have a second node on which to execute hosted-engine.
  



Re: [ovirt-users] oVirt fails to install on nodes after moving engine to C7 host

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 6:08 AM, James  wrote:
> Hey,
>
> I moved our standalone engine from a C6 host to a C7 host and now it
> won't install/set up oVirt nodes. It just responds with messages like
> this:
>
> Host xxx.xxx.xxx installation failed. Unexpected connection termination.
> Failed to install Host xxx.xxx.xxx. Command returned failure code 1
> during SSH session 'r...@xxx.xx.xx.x'.
>
> And from engine.log
>
> 2016-09-06 12:20:08,011 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (org.ovirt.thread.pool-8-thread-41) [55397a53] SSH error running command
> r...@xxx.xx.xx.x:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp
> -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MY
> 2016-09-06 12:20:08,011 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (org.ovirt.thread.pool-8-thread-41) [55397a53] SSH error running command
> r...@xxx.xx.xx.x:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp
> -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MY
> TMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
> --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
> DIALOG/dialect=str:machine DIALOG/customization=bool:True': Command
> returned failure code 1 during SSH session 'r...@xxx.xx.xx.x'
> 2016-09-06 12:20:08,007 ERROR
> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy)
> [55397a53] Error during deploy dialog: java.io.IOException: Unexpected
> connection termination
> at
> 
> org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:387)
> [otopi.jar:]
> at
> 
> org.ovirt.otopi.dialog.MachineDialogParser.nextEvent(MachineDialogParser.java:404)
> [otopi.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase._threadMain(VdsDeployBase.java:304)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.access$800(VdsDeployBase.java:45)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase$12.run(VdsDeployBase.java:386)
> [bll.jar:]
> at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_101]
>
> 2016-09-06 12:20:08,013 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (org.ovirt.thread.pool-8-thread-41) [55397a53] Exception:
> java.io.IOException: Command returned failure code 1 during SSH session
> 'r...@xxx.xx.xx.x'
> at
> 
> org.ovirt.engine.core.uutils.ssh.SSHClient.executeCommand(SSHClient.java:527)
> [uutils.jar:]
> at
> 
> org.ovirt.engine.core.uutils.ssh.SSHDialog.executeCommand(SSHDialog.java:312)
> [uutils.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase.execute(VdsDeployBase.java:567)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.installHost(InstallVdsInternalCommand.java:189)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:93)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1215)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1359)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1982)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174)
> [utils.jar:]
> at
> 
> org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116)
> [utils.jar:]
> at
> org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1396)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:378)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.MultipleActionsRunner.executeValidatedCommand(MultipleActionsRunner.java:207)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.MultipleActionsRunner.runCommands(MultipleActionsRunner.java:172)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.bll.MultipleActionsRunner$2.run(MultipleActionsRunner.java:181)
> [bll.jar:]
> at
> 
> org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:89)
> [utils.jar:]
> at
> 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> [rt.jar:1.8.0_101]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [rt.jar:1.8.0_101]
> at
> 
> 

Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Barak Korren
On 6 September 2016 at 00:34, Christophe TREFOIS
 wrote:
> So basically we need at least 2 nodes to enter the realm of tested and 
> maintained?
>
> If we’re talking pure oVirt here.

The short answer is yes.

The longer answer is more complex, but first a disclaimer: I am
going to describe the situation as I am aware of it, from my point of
view as a Red Hat employee and a member of the oVirt infra team. I am
probably not knowledgeable about everything that goes on; for
example, there is a fairly large Chinese oVirt community that commits
various efforts of which I know very little.

When I'm talking about testing and maintenance, I think we can agree that for
something to be maintained it needs to meet the following criteria:
1. It needs to be tested at least once for every oVirt release
2. Results of that testing need to make their way to the hands of developers.
   Malfunctions should end up as bugs tracked in Bugzilla.

Probably the largest group that does regular testing for oVirt is the
quality engineering group in Red Hat. Red Hat puts a great deal of
resources into oVirt, but those resources are not infinite. And when
the time comes to schedule resources, the needs of paying Red Hat
customers typically come first. Those customers are probably more
likely to be running large data centers.

Another set of regular testing is being done automatically by the
oVirt CI systems. Those tests [1] use Lago [2] to run test suites
that simulate various situations for oVirt to run in. The smallest
configuration currently tested that way is a 2-node hosted engine
configuration. As all those tests have been written by Red Hat
employees, they tend to focus on what ends up going into RHEV.

It is important to note that not every oVirt feature ends up in RHEV,
but that does not mean that that feature never gets tested. There are
several oVirt features that are very useful for building oVirt-based
testing systems for oVirt itself and as a result get regular testing
as well. Notable examples are nested virtualization and the Glance
support.

The above being said, there is nothing preventing anyone in the
community from creating a test suite for single-host use that will get
run regularly by the oVirt CI system. That kind of effort will require
some degree of commitment to make it work, fix it when it inevitably
breaks, and report what it finds to the developers. There are already
existing tools in the oVirt repos that make building such a test suite
quite straightforward. I will be happy to guide anyone interested in
taking on such an effort.

[1]: https://gerrit.ovirt.org/#/admin/projects/ovirt-system-tests
[2]: http://lago.readthedocs.io/en/stable/
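
To give a feel for it, running one of the existing suites locally looks
roughly like this (suite names change over time, so treat this as a sketch
and check the repo for what currently exists):

git clone https://gerrit.ovirt.org/ovirt-system-tests
cd ovirt-system-tests
./run_suite.sh basic-suite-master   # spins up the Lago VMs and runs the tests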

-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team


Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Christophe TREFOIS
Personally my use case is that I have 4 machines with different specs and 
storage sizing, so I set up four DCs with one host each. Then I have the 
hosted engine on one of the hosts. Storage is local, shared via NFS, so that 
I can move VMs around.

At this point we are not necessarily interested in HA.

Maybe for you that's the definition of a dev environment, as production has 
other attributes besides just the type of storage?

It would be nice to hear your thoughts about this.

Kind regards,
Christophe 

Sent from my iPhone

> On 06 Sep 2016, at 08:45, Yedidyah Bar David  wrote:
> 
> On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
>  wrote:
>> So basically we need at least 2 nodes to enter the realm of tested and 
>> maintained?
> 
> I think some people occasionally use hosted-engine with local
> iSCSI storage on a single machine. AFAIK it's not tested by CI,
> or often at all, but patches are welcome - e.g. using lago and
> ovirt-system-tests.
> 
> Can you please explain your intentions/requirements?
> 
> Even if it works, oVirt is not designed for single-machine
> _production_ use. For that, I think that most people agree that
> virt-manager is more suitable. oVirt on a single machine is
> usually for testing/demonstration/learning/etc.
> 
>> 
>> If we’re talking pure oVirt here.
>> 
>> --
>> 
>> Dr Christophe Trefois, Dipl.-Ing.
>> Technical Specialist / Post-Doc
>> 
>> UNIVERSITÉ DU LUXEMBOURG
>> 
>> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
>> Campus Belval | House of Biomedicine
>> 6, avenue du Swing
>> L-4367 Belvaux
>> T: +352 46 66 44 6124
>> F: +352 46 66 44 6949
>> http://www.uni.lu/lcsb
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>> On 05 Sep 2016, at 16:31, Fernando Frediani  
>>> wrote:
>>> 
>>> Adding Kimchi to oVirt Node is perhaps the easiest option. It can be 
>>> pretty useful in many situations and doesn't need anything like 
>>> mounting NFS on localhost.
>>> 
>>> It is a shame to no longer have a stable All-in-One solution, as such 
>>> a setup helps with adoption of the project and later growth.
>>> 
>>> oVirt-Cockpit looks nice and interesting.
>>> 
>>> Fernando
>>> 
>>> 
 On 05/09/2016 05:18, Barak Korren wrote:
> On 4 September 2016 at 23:45, zero four  wrote:
> ...
> I understand and acknowledge that oVirt is not targeted towards homelab
> setups, or at least small homelab setups.  However I believe that having a
> solid configuration for such use cases would be a benefit to the project 
> as
> a whole.
 As others have already mentioned, using the full oVirt with engine in
 a single host scenario can work, but is not currently actively
 maintained or tested.
 
 There are other options originating from the oVirt community however.
 
 One notable option is to use the Cockpit-oVirt plugin [1] which can
 use VDSM to manage VMs on a single host.
 
 Another option is to use the Kimchi project [2], for which discussion
 about making it an oVirt project took place in the past [3]. It
 seems that some work for inclusion in oVirt Node was also planned
 at some point [4].
 
 [1]: http://www.ovirt.org/develop/release-management/features/cockpit/
 [2]: https://github.com/kimchi-project/kimchi
 [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
 [4]: 
 http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/
>>> 
>> 
> 
> 
> 
> -- 
> Didi




Re: [ovirt-users] oVirt on a single server

2016-09-06 Thread Yedidyah Bar David
On Tue, Sep 6, 2016 at 12:34 AM, Christophe TREFOIS
 wrote:
> So basically we need at least 2 nodes to enter the realm of tested and 
> maintained?

I think some people occasionally use hosted-engine with local
iSCSI storage on a single machine. AFAIK it's not tested by CI,
or often at all, but patches are welcome - e.g. using lago and
ovirt-system-tests.

Can you please explain your intentions/requirements?

Even if it works, oVirt is not designed for single-machine
_production_ use. For that, I think that most people agree that
virt-manager is more suitable. oVirt on a single machine is
usually for testing/demonstration/learning/etc.

>
> If we’re talking pure oVirt here.
>
> --
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> 
> 
>
>
>
>> On 05 Sep 2016, at 16:31, Fernando Frediani  
>> wrote:
>>
>> Adding Kimchi to oVirt Node is perhaps the easiest option. It can be 
>> pretty useful in many situations and doesn't need anything like 
>> mounting NFS on localhost.
>>
>> It is a shame to no longer have a stable All-in-One solution, as such 
>> a setup helps with adoption of the project and later growth.
>>
>> oVirt-Cockpit looks nice and interesting.
>>
>> Fernando
>>
>>
>> On 05/09/2016 05:18, Barak Korren wrote:
>>> On 4 September 2016 at 23:45, zero four  wrote:
>>> ...
 I understand and acknowledge that oVirt is not targeted towards homelab
 setups, or at least small homelab setups.  However I believe that having a
 solid configuration for such use cases would be a benefit to the project as
 a whole.
>>> As others have already mentioned, using the full oVirt with engine in
>>> a single host scenario can work, but is not currently actively
>>> maintained or tested.
>>>
>>> There are other options originating from the oVirt community however.
>>>
>>> One notable option is to use the Cockpit-oVirt plugin [1] which can
>>> use VDSM to manage VMs on a single host.
>>>
>>> Another option is to use the Kimchi project [2], for which discussion
>>> about making it an oVirt project took place in the past [3]. It
>>> seems that some work for inclusion in oVirt Node was also planned
>>> at some point [4].
>>>
>>> [1]: http://www.ovirt.org/develop/release-management/features/cockpit/
>>> [2]: https://github.com/kimchi-project/kimchi
>>> [3]: http://lists.ovirt.org/pipermail/board/2013-July/000921.html
>>> [4]: 
>>> http://www.ovirt.org/develop/release-management/features/node/kimchiplugin/
>>>
>>
>



-- 
Didi