Re: [ovirt-users] Storage migration: Preallocation forced on destination

2016-03-04 Thread Nicolás

El 04/03/16 a las 20:23, Nir Soffer escribió:

On Fri, Mar 4, 2016 at 7:07 PM, Nicolás  wrote:

Hi,

We're migrating an existing storage (glusterfs) to a new one (iSCSI). All
disks on glusterfs are thin provisioned, but when migrating to iSCSI the
following warning is shown:

 The following disks will become preallocated, and may consume
considerably more space on the target: local-disk

Why is that? Is there a way to migrate disks so they are thin provisioned on
iSCSI as well?

The issue is that we use raw sparse format for thin provisioned disks
on file based
storage. The file system provides the thin provisioning, maintaining holes in
the files.

When we create the destination lv, we use the disk virtual size, so you get
practically a preallocated volume.

I think we can do better - before copying the disk, we can check the actual used
space (e.g. what qemu-img info or stat report), and create the
destination lv using
the used size (plus additional space for qcow format).

I tested this by extending the destination lv manually and then
copying data manually
using qemu-img convert, and it works.
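
For reference, a rough sketch of that manual workaround (volume group, LV and
path names below are made-up placeholders):

  # grow the destination lv enough to hold the used data plus qcow2 overhead
  lvextend -L +50G /dev/<storage-domain-vg>/<destination-volume>
  # copy the data, converting raw-sparse on the file domain to qcow2 on the lv
  qemu-img convert -p -f raw -O qcow2 \
      /rhev/data-center/mnt/<gluster-mount>/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
      /dev/<storage-domain-vg>/<destination-volume>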

Can you file a bug for this, and explain the use case?

Nir


Done, for those interested in it: 
https://bugzilla.redhat.com/show_bug.cgi?id=1314959


Thanks.

Regards.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage migration: Preallocation forced on destination

2016-03-04 Thread Nir Soffer
On Sat, Mar 5, 2016 at 12:19 AM, Pavel Gashev  wrote:
> I think it's hard to calculate the additional space for cow format without 
> analysing raw image. It's better to allocate enough space, and then decrease 
> it after qemu-img convert.

We use 10% as a rough estimate for additional space when converting
from raw to qcow format. Sure it will waste some space, but it is good
enough.
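
For instance, a quick sketch of how that estimate could be taken from the
source image (file name is hypothetical):

  # used size of the source image in bytes, as reported by qemu-img
  used=$(qemu-img info --output=json src-disk.img | \
         python -c 'import sys, json; print(json.load(sys.stdin)["actual-size"])')
  # add ~10% headroom for qcow2 metadata when sizing the destination lv
  echo $(( used + used / 10 ))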

How do you plan to check the used size on the destination lv?

> Please note that while disk moving keeps disk format, disk copying changes 
> format. So when you copy a thin provisioned disk to iSCSI storage it's being 
> converted to cow. The issue is that size of converted lv still looks like 
> preallocated. You can decrease it manually via lvchange, or you can move it 
> to a file based storage and back. Moving disks keeps disk format, but fixes 
> its size.

Yes, this seems to be the way to work around this issue currently:
1. Copy the disk to block storage - will convert it to qcow format
on preallocated lv
2. Move disk from block storage to file storage
3. Move disk back to block storage

> Also please consider qcow compat=1.1 as default disk format both for file and 
> block storages.

This will make your disk incompatible with old oVirt versions on el6.
In storage domain format v3 we are using compat=0.10.

We plan to move to compat=1.1 in 4.0.
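
As a side note, the compat level is just a qcow2 image option; a hedged
example (file names hypothetical):

  qemu-img create -f qcow2 -o compat=0.10 disk1.qcow2 10G   # old-style header, readable by el6-era qemu
  qemu-img create -f qcow2 -o compat=1.1  disk2.qcow2 10G   # newer header with v3 features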

>
>
>
> On 04/03/16 23:23, "users-boun...@ovirt.org on behalf of Nir Soffer" 
>  wrote:
>
>>On Fri, Mar 4, 2016 at 7:07 PM, Nicolás  wrote:
>>> Hi,
>>>
>>> We're migrating an existing storage (glusterfs) to a new one (iSCSI). All
>>> disks on glusterfs are thin provisioned, but when migrating to iSCSI the
>>> following warning is shown:
>>>
>>> The following disks will become preallocated, and may consume
>>> considerably more space on the target: local-disk
>>>
>>> Why is that? Is there a way to migrate disks so they are thin provisioned on
>>> iSCSI as well?
>>
>>The issue is that we use raw sparse format for thin provisioned disks
>>on file based
>>storage. The file system provides the thin provisioning, maintaining holes in
>>the files.
>>
>>When we create the destination lv, we use the disk virtual size, so you get
>>practically a preallocated volume.
>>
>>I think we can do better - before copying the disk, we can check the actual 
>>used
>>space (e.g. what qemu-img info or stat report), and create the
>>destination lv using
>>the used size (plus additional space for qcow format).
>>
>>I tested this by extending the destination lv manually and then
>>copying data manually
>>using qemu-img convert, and it works.
>>
>>Can you file a bug for this, and explain the use case?
>>
>>Nir
>>___
>>Users mailing list
>>Users@ovirt.org
>>http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage migration: Preallocation forced on destination

2016-03-04 Thread Pavel Gashev
I think it's hard to calculate the additional space for the cow format without 
analysing the raw image. It's better to allocate enough space, and then decrease it 
after qemu-img convert.

Please note that while disk moving keeps the disk format, disk copying changes the 
format. So when you copy a thin provisioned disk to iSCSI storage it's being 
converted to cow. The issue is that the size of the converted lv still looks 
preallocated. You can decrease it manually via lvchange, or you can move it to 
a file based storage and back. Moving disks keeps the disk format, but fixes its 
size.

Also please consider qcow compat=1.1 as default disk format both for file and 
block storages. 



On 04/03/16 23:23, "users-boun...@ovirt.org on behalf of Nir Soffer" 
 wrote:

>On Fri, Mar 4, 2016 at 7:07 PM, Nicolás  wrote:
>> Hi,
>>
>> We're migrating an existing storage (glusterfs) to a new one (iSCSI). All
>> disks on glusterfs are thin provisioned, but when migrating to iSCSI the
>> following warning is shown:
>>
>> The following disks will become preallocated, and may consume
>> considerably more space on the target: local-disk
>>
>> Why is that? Is there a way to migrate disks so they are thin provisioned on
>> iSCSI as well?
>
>The issue is that we use raw sparse format for thin provisioned disks
>on file based
>storage. The file system provides the thin provisioning, maintaining holes in
>the files.
>
>When we create the destination lv, we use the disk virtual size, so you get
>practically a preallocated volume.
>
>I think we can do better - before copying the disk, we can check the actual 
>used
>space (e.g. what qemu-img info or stat report), and create the
>destination lv using
>the used size (plus additional space for qcow format).
>
>I tested this by extending the destination lv manually and then
>copying data manually
>using qemu-img convert, and it works.
>
>Can you file a bug for this, and explain the use case?
>
>Nir
>___
>Users mailing list
>Users@ovirt.org
>http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sample OVirt KS for HV Deployment

2016-03-04 Thread Dan Yasny
I install a bunch of packages too, but I use full RHEL hosts, so it's just
a matter of running yum.

On Fri, Mar 4, 2016 at 4:53 PM, Duckworth, Douglas C 
wrote:

> "All I use it for is to set one NIC up with an IP address, so the engine
> can reach it. The rest is done via the engine. I probably should just do
> this with DHCP and MAC reservations really"
>
> Ah thanks. So we can do that with kernel options too. I wasn't sure if
> we should even bother with bonds though since the manager will do the rest.
>
> Thanks for sharing your iscsid.conf.  We're using the copy module to
> place our modified one on the hypervisor, though your solution might be
> more elegant, along with others that modify rsyslog and cron, then
> persisting those files:
>
> - name: unpersist conf files on HV
>   tags: [conf]
>   command: "unpersist {{ item.destdir }}/{{ item.name }}"
>   with_items: hv_files
>
> - name: copy conf files to hv
>   tags: [conf]
>   copy: src="{{ item.name }}"
> dest="{{ item.destdir }}/{{ item.name }}"
> mode="{{ item.mode }}"
> owner=root group=root
>   with_items: hv_files
>
> - name: persist conf files on HV
>   tags: [conf]
>   command: "persist {{ item.destdir }}/{{ item.name }}"
>   with_items: hv_files
>   notify:
> - rsyslog
> - crond
>
> How do people handle installing additional packages [we add htop, vim,
> iftop, nethogs, diamond, check_mk] to the hypervisor after deployment?
>
> After remounting rw we use this task to copy rpms from Apache server:
>
> - name: install rhel7 packages
>   command: "rpm -i {{ item.destdir }}/{{ item.name }}"
>   with_items: rh7_rpm_files
>   register: rpm_result
>   when: ansible_distribution_major_version == "7"
>   failed_when: "rpm_result.rc == 69"
>
> --
> Thanks
>
> Douglas Duckworth, MSc, LFCS
> Unix Administrator
> Tulane University
> Technology Services
> 1555 Poydras Ave
> NOLA -- 70112
>
> E: du...@tulane.edu
> O: 504-988-9341
> F: 504-988-8505
>
> On 03/04/2016 03:41 PM, Dan Yasny wrote:
> > All I use it for is to set one NIC up with an IP address, so the engine
> > can reach it. The rest is done via the engine. I probably should just do
> > this with DHCP and MAC reservations really
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sample OVirt KS for HV Deployment

2016-03-04 Thread Duckworth, Douglas C
"All I use it for is to set one NIC up with an IP address, so the engine
can reach it. The rest is done via the engine. I probably should just do
this with DHCP and MAC reservations really"

Ah thanks. So we can do that with kernel options too. I wasn't sure if
we should even bother with bonds though since the manager will do the rest.

Thanks for sharing your iscsid.conf.  We're using the copy module to
place our modified one on the hypervisor, though your solution might be
more elegant, along with others that modify rsyslog and cron, then
persisting those files:

- name: unpersist conf files on HV
  tags: [conf]
  command: "unpersist {{ item.destdir }}/{{ item.name }}"
  with_items: hv_files

- name: copy conf files to hv
  tags: [conf]
  copy: src="{{ item.name }}"
        dest="{{ item.destdir }}/{{ item.name }}"
        mode="{{ item.mode }}"
        owner=root group=root
  with_items: hv_files

- name: persist conf files on HV
  tags: [conf]
  command: "persist {{ item.destdir }}/{{ item.name }}"
  with_items: hv_files
  notify:
- rsyslog
- crond

How do people handle installing additional packages [we add htop, vim,
iftop, nethogs, diamond, check_mk] to the hypervisor after deployment?

After remounting rw we use this task to copy rpms from Apache server:

- name: install rhel7 packages
  command: "rpm -i {{ item.destdir }}/{{ item.name }}"
  with_items: rh7_rpm_files
  register: rpm_result
  when: ansible_distribution_major_version == "7"
  failed_when: "rpm_result.rc == 69"

-- 
Thanks

Douglas Duckworth, MSc, LFCS
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112

E: du...@tulane.edu
O: 504-988-9341
F: 504-988-8505

On 03/04/2016 03:41 PM, Dan Yasny wrote:
> All I use it for is to set one NIC up with an IP address, so the engine
> can reach it. The rest is done via the engine. I probably should just do
> this with DHCP and MAC reservations really
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sample OVirt KS for HV Deployment

2016-03-04 Thread Dan Yasny
On Fri, Mar 4, 2016 at 4:33 PM, Duckworth, Douglas C 
wrote:

> We also use Ansible
>
> Could you share your role / playbook for networks?
>

All I use it for is to set one NIC up with an IP address, so the engine can
reach it. The rest is done via the engine. I probably should just do this
with DHCP and MAC reservations really

As for iscsi:

- name: iSCSI initiator node.session.cmds_max adjustment
  lineinfile: dest=/etc/iscsi/iscsid.conf
              regexp='^node.session.cmds_max'
              line='node.session.cmds_max = 1024'
              state=present

- name: iSCSI initiator node.session.queue_depth adjustment
  lineinfile: dest=/etc/iscsi/iscsid.conf
              regexp='^node.session.queue_depth'
              line='node.session.queue_depth = 128'
              state=present

- name: iSCSI initiatorname set
  lineinfile:
    dest: /etc/iscsi/initiatorname.iscsi
    regexp: '^InitiatorName=iqn'
    line: '{{ "InitiatorName=iqn.2010-12.com.maxbetgroup:" + inventory_hostname }}'
  notify:
    - restart iscsi
    - restart iscsid


  handlers:
    - name: restart iscsi
      service: name=iscsi state=restarted

    - name: restart iscsid
      service: name=iscsid state=restarted
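
A hypothetical invocation, assuming these tasks sit in a play targeting the
hypervisor group (inventory and playbook names are made up):

  ansible-playbook -i hosts.ini hypervisors.yml --check --diff   # dry run first
  ansible-playbook -i hosts.ini hypervisors.yml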




>
> Thanks
> Doug
>
> --
> Thanks
>
> Douglas Duckworth, MSc, LFCS
> Unix Administrator
> Tulane University
> Technology Services
> 1555 Poydras Ave
> NOLA -- 70112
>
> E: du...@tulane.edu
> O: 504-988-9341
> F: 504-988-8505
>
> On 03/02/2016 05:50 PM, Dan Yasny wrote:
> > I usually deploy with only a basic KS that places an ssh cert on the
> > host. Then ansible adds the repos and adjusts the network and iscsi
> > initiator , the ovirt bootstrap takes care of the rest
> >
> > On Mar 2, 2016 5:42 PM, "Duckworth, Douglas C"  > > wrote:
> >
> > Never mind.  This seems to be a place to start:
> >
> >
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Installation_Guide/sect-Automated_Installation.html
> >
> > https://access.redhat.com/solutions/41697
> >
> > --
> > Thanks
> >
> > Douglas Duckworth, MSc, LFCS
> > Unix Administrator
> > Tulane University
> > Technology Services
> > 1555 Poydras Ave
> > NOLA -- 70112
> >
> > E: du...@tulane.edu 
> > O: 504-988-9341 
> > F: 504-988-8505 
> >
> > On 03/02/2016 04:06 PM, Duckworth, Douglas C wrote:
> > > Does anyone have a kickstart available to share?  We are looking to
> > > automate hypervisor deployment with Cobbler.
> > >
> > > Thanks
> > > Doug
> > >
> > ___
> > Users mailing list
> > Users@ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sample OVirt KS for HV Deployment

2016-03-04 Thread Duckworth, Douglas C
We also use Ansible

Could you share your role / playbook for networks?

Thanks
Doug

-- 
Thanks

Douglas Duckworth, MSc, LFCS
Unix Administrator
Tulane University
Technology Services
1555 Poydras Ave
NOLA -- 70112

E: du...@tulane.edu
O: 504-988-9341
F: 504-988-8505

On 03/02/2016 05:50 PM, Dan Yasny wrote:
> I usually deploy with only a basic KS that places an ssh cert on the
> host. Then ansible adds the repos and adjusts the network and iscsi
> initiator , the ovirt bootstrap takes care of the rest
> 
> On Mar 2, 2016 5:42 PM, "Duckworth, Douglas C"  > wrote:
> 
> Never mind.  This seems to be a place to start:
> 
> 
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.6/html/Installation_Guide/sect-Automated_Installation.html
> 
> https://access.redhat.com/solutions/41697
> 
> --
> Thanks
> 
> Douglas Duckworth, MSc, LFCS
> Unix Administrator
> Tulane University
> Technology Services
> 1555 Poydras Ave
> NOLA -- 70112
> 
> E: du...@tulane.edu 
> O: 504-988-9341 
> F: 504-988-8505 
> 
> On 03/02/2016 04:06 PM, Duckworth, Douglas C wrote:
> > Does anyone have a kickstart available to share?  We are looking to
> > automate hypervisor deployment with Cobbler.
> >
> > Thanks
> > Doug
> >
> ___
> Users mailing list
> Users@ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] First oVirt community meetup in Boston MA area

2016-03-04 Thread Douglas Landgraf
Join us for an evening of knowledge-sharing, networking, and Q&A
about the oVirt open-source project.

All skill levels are welcome! In this first meetup we will introduce
oVirt, show a live demo, and answer any questions you might have about
the technology and the community.

The meetup will be hosted at the Red Hat office in Westford, drinks and
snacks will be provided. Please RSVP to the event so that we can make
sure everyone will be comfortable.

Looking forward to seeing you there!
http://www.meetup.com/Boston-oVirt-Community/

-- 
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] VM Migration Fails (Network thoughts)

2016-03-04 Thread Christopher Young
Annnddd here I am again being 'that guy'.  I apologize for all the
traffic, but I wanted to note that migrations appear to be working
even over the migration IP network now that short name resolution is
working. The first attempt did fail after the changes, but since
then they have been working fine, it seems.

So, I'm down to only a simple question now:

If I edit the resolv.conf on a RHEV-H/ovirt node (7.2) host, does it
REALLY get replaced on reboot?  And if so, how do I work around this
since I need those name resolutions to work?  Would /etc/hosts file
entries be sufficient for just the primary hostnames and IPs (or would
they also get over-written on reboot)?
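
For example, something along these lines in /etc/hosts on each node (addresses
are made up), plus persisting the file so RHEV-H keeps it across reboots:

  # /etc/hosts entries for the migration network (hypothetical addresses)
  10.26.5.11  vnode01-vmm
  10.26.5.12  vnode02-vmm
  10.26.5.13  vnode03-vmm

  # on RHEV-H/ovirt-node, persist the edited file so it survives a reboot
  persist /etc/hosts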

Again, sorry for unnecessary posts, but I'm just trying to understand
the pieces as much as possible since I'm going to be supporting this
for a very long time.

On Fri, Mar 4, 2016 at 3:42 PM, Christopher Young  wrote:
> So, I'm self-replying, but I was able to confirm that changing
> /etc/resolv.conf so that the short hostnames work allow migration, but
> ONLY when the migration network is set to the main management
> interface (and thus the primary hostname of the system).  I'm
> wondering how this could work with separate IP networks for VM
> Migration when hostname resolutions would generally always return the
> primary IP address of a system.
>
> I obviously still have reading to do, but any help and feedback would
> be greatly appreciated.
>
>
>
> On Fri, Mar 4, 2016 at 3:31 PM, Christopher Young  
> wrote:
>> So,  I'm attempting to understand something fully.  My setup might not
>> be ideal, so please provide me whatever education I may need.
>>
>> RE: https://access.redhat.com/solutions/202823
>>
>> Quick background:
>>
>> I have (3) RHEV-H (ovirt node) Hypervisors with a hosted-engine.  VM
>> Migrations currently are not working and I have the following errors
>> in the vdsm.log:
>>
>> ---
>> "[RHEV]Failed to migrate VM between hypervisor with error "gaierror:
>> [Errno -2] Name or service not known" in vdsm.log"
>> ---
>>
>> Reading the RedHat article leads me to believe that this is a name
>> resolution issue (for the FQDNs), but that leads to another question
>> which I'll get to.  First, a little more about the setup:
>>
>> I have separate IP networks for Management (ovirtmgmt), Storage, and
>> VM Migration.  These are all on unique IP ranges.  I'll make up some
>> for my purposes here:
>>
>> MGMT_VLAN: 10.25.250.x/24
>> STORAGE_VLAN: 10.26.3.x/24
>> MIGRATE_VLAN: 10.26.5.x/24
>>
>> In anticipation for setting all of this up, I assigned hostnames/IPs
>> for each role on each node/hypervisor, like so:
>>
>> vnode01
>> vnode01-sto (storage)
>> vnode01-vmm
>>
>> vnode02
>> vnode02-sto
>> vnode02-vmm
>>
>> etc., etc.
>>
>> So, I'm trying to understand what my best procedure for setting up a
>> separate migration network and what IP/hostname settings are necessary
>> to ensure that this works properly and wouldn't affect other things.
>>
>> I'm going to do some more reading right now, but at least I feel like
>> I'm on the right path to getting migrations working.
>>
>> Please help lol
>>
>> Thanks as always,
>>
>> -- Chris
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Command 'org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand' failed: Failed managing transaction

2016-03-04 Thread ovirt

Hi oVirt List,

I have a oVirt Setup which includes two host. The first host is also 
running the oVirt engine.
Today I installed a couple VMs on the second host from USB without any 
issue. Now I tried to perform this also on the first host but it looks 
like the USB is not accessible.


I'm able to add the device to the host but I can see the following 
message in the engine.log which does not look correct.


I'm running oVirt 3.6.3

Best regards
Christoph


2016-03-04 21:42:13,037 ERROR 
[org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand] 
(org.ovirt.thread.pool-8-thread-17) [4307f9bc] Command 
'org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand' failed: 
Failed managing transaction
2016-03-04 21:42:13,037 ERROR 
[org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand] 
(org.ovirt.thread.pool-8-thread-17) [4307f9bc] Exception: 
java.lang.RuntimeException: Failed managing transaction
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:232) 
[utils.jar:]
at 
org.ovirt.engine.core.bll.hostdev.RefreshHostDevicesCommand.executeCommand(RefreshHostDevicesCommand.java:121) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1215) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1359) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1982) 
[bll.jar:]
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:174) 
[utils.jar:]
at 
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:116) 
[utils.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1396) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:378) [bll.jar:]
at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:480) 
[bll.jar:]
at 
org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:462) [bll.jar:]
at 
org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:672) 
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor345.invoke(Unknown Source) 
[:1.8.0_71]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
[rt.jar:1.8.0_71]

at java.lang.reflect.Method.invoke(Method.java:497) [rt.jar:1.8.0_71]
at 
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
at 
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
at 
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) 
[wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
at 
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80) 
[wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
at 
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) 
[wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
at 
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
at 
org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) 
[wildfly-ejb3-8.2.1.Final.jar:8.2.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
at 
org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:46) 
[weld-core-impl-2.2.6.Final.jar:2014-10-03 10:05]
at 
org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) 
[wildfly-weld-8.2.1.Final.jar:8.2.1.Final]
at 
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
at 
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) 

Re: [ovirt-users] VM Migration Fails (Network thoughts)

2016-03-04 Thread Christopher Young
So, I'm self-replying, but I was able to confirm that changing
/etc/resolv.conf so that the short hostnames resolve allows migration, but
ONLY when the migration network is set to the main management
interface (and thus the primary hostname of the system).  I'm
wondering how this could work with separate IP networks for VM
Migration when hostname resolutions would generally always return the
primary IP address of a system.

I obviously still have reading to do, but any help and feedback would
be greatly appreciated.



On Fri, Mar 4, 2016 at 3:31 PM, Christopher Young  wrote:
> So,  I'm attempting to understand something fully.  My setup might not
> be ideal, so please provide me whatever education I may need.
>
> RE: https://access.redhat.com/solutions/202823
>
> Quick background:
>
> I have (3) RHEV-H (ovirt node) Hypervisors with a hosted-engine.  VM
> Migrations currently are not working and I have the following errors
> in the vdsm.log:
>
> ---
> "[RHEV]Failed to migrate VM between hypervisor with error "gaierror:
> [Errno -2] Name or service not known" in vdsm.log"
> ---
>
> Reading the RedHat article leads me to believe that this is a name
> resolution issue (for the FQDNs), but that leads to another question
> which I'll get to.  First, a little more about the setup:
>
> I have separate IP networks for Management (ovirtmgmt), Storage, and
> VM Migration.  These are all on unique IP ranges.  I'll make up some
> for my purposes here:
>
> MGMT_VLAN: 10.25.250.x/24
> STORAGE_VLAN: 10.26.3.x/24
> MIGRATE_VLAN: 10.26.5.x/24
>
> In anticipation for setting all of this up, I assigned hostnames/IPs
> for each role on each node/hypervisor, like so:
>
> vnode01
> vnode01-sto (storage)
> vnode01-vmm
>
> vnode02
> vnode02-sto
> vnode02-vmm
>
> etc., etc.
>
> So, I'm trying to understand what my best procedure for setting up a
> separate migration network and what IP/hostname settings are necessary
> to ensure that this works properly and wouldn't affect other things.
>
> I'm going to do some more reading right now, but at least I feel like
> I'm on the right path to getting migrations working.
>
> Please help lol
>
> Thanks as always,
>
> -- Chris
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] VM Migration Fails (Network thoughts)

2016-03-04 Thread Christopher Young
So,  I'm attempting to understand something fully.  My setup might not
be ideal, so please provide me whatever education I may need.

RE: https://access.redhat.com/solutions/202823

Quick background:

I have (3) RHEV-H (ovirt node) Hypervisors with a hosted-engine.  VM
Migrations currently are not working and I have the following errors
in the vdsm.log:

---
"[RHEV]Failed to migrate VM between hypervisor with error "gaierror:
[Errno -2] Name or service not known" in vdsm.log"
---

Reading the RedHat article leads me to believe that this is a name
resolution issue (for the FQDNs), but that leads to another question
which I'll get to.  First, a little more about the setup:

I have separate IP networks for Management (ovirtmgmt), Storage, and
VM Migration.  These are all on unique IP ranges.  I'll make up some
for my purposes here:

MGMT_VLAN: 10.25.250.x/24
STORAGE_VLAN: 10.26.3.x/24
MIGRATE_VLAN: 10.26.5.x/24

In anticipation for setting all of this up, I assigned hostnames/IPs
for each role on each node/hypervisor, like so:

vnode01
vnode01-sto (storage)
vnode01-vmm

vnode02
vnode02-sto
vnode02-vmm

etc., etc.

So, I'm trying to understand what my best procedure for setting up a
separate migration network and what IP/hostname settings are necessary
to ensure that this works properly and wouldn't affect other things.

I'm going to do some more reading right now, but at least I feel like
I'm on the right path to getting migrations working.

Please help lol

Thanks as always,

-- Chris
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Storage migration: Preallocation forced on destination

2016-03-04 Thread Nir Soffer
On Fri, Mar 4, 2016 at 7:07 PM, Nicolás  wrote:
> Hi,
>
> We're migrating an existing storage (glusterfs) to a new one (iSCSI). All
> disks on glusterfs are thin provisioned, but when migrating to iSCSI the
> following warning is shown:
>
> The following disks will become preallocated, and may consume
> considerably more space on the target: local-disk
>
> Why is that? Is there a way to migrate disks so they are thin provisioned on
> iSCSI as well?

The issue is that we use raw sparse format for thin provisioned disks
on file based
storage. The file system provides the thin provisioning, maintaining holes in
the files.

When we create the destination lv, we use the disk virtual size, so you get
practically a preallocated volume.

I think we can do better - before copying the disk, we can check the actual used
space (e.g. what qemu-img info or stat report), and create the
destination lv using
the used size (plus additional space for qcow format).

I tested this by extending the destination lv manually and then
copying data manually
using qemu-img convert, and it works.

Can you file a bug for this, and explain the use case?

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] regenerate libvirt-spice keys after libvirtd restart?

2016-03-04 Thread Bill James
I needed to bounce libvirtd after changing a config in libvirt/qemu.conf 
so that import-to-ovirt.pl would work,

but now my VMs with Spice console complain:

libvirtError: internal error: process exited while connecting to 
monitor: ((null):2791): Spice-Warning **: reds.c:3311:reds_init_ssl: 
Could not use private key file


What is the proper way to sync up the key after restarting libvirtd?
I even tried rebooting the host, restarting ovirt-engine and re-running 
ovirt-engine setup; it didn't help.


The workaround is to just use VNC consoles, but I'd like to get Spice working 
again.


centos 7.2
libvirt-client-1.2.17-13.el7_2.2.x86_64
ovirt-engine-3.6.2.6-1.el7.centos.noarch



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted engine disk migration

2016-03-04 Thread Pat Riehecky

I'm on oVirt 3.6

I'd like to migrate my hosted engine storage to another location and 
have a few questions:


(a) what is the right syntax for glusterfs in 
/etc/ovirt-hosted-engine/hosted-engine.conf? (I'm currently on nfs3)


(b) what is the right syntax for fibre channel?

(c) where are instructions for how to migrate the actual disk files? 
(google was little help)


(d) Can the hosted engine use the same (gluster/fibre) volume as my VM 
Images?


(e) I get various "Cannot edit Virtual Machine. This VM is not managed 
by the engine." in the console for manipulating the HostedEngine.  Is 
that expected?


Pat

--
Pat Riehecky
Scientific Linux developer

Fermi National Accelerator Laboratory
www.fnal.gov
www.scientificlinux.org

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Sample OVirt KS for HV Deployment

2016-03-04 Thread Oved Ourfali
Small correction, we have that since 3.5.

Regards,
Oved
On Mar 3, 2016 22:18, "Yaniv Kaul"  wrote:

> Note that in 3.6 we have great integration with Foreman, for bare-metal to
> fully functional host.
> I urge you to look at the slides and video of the relevant presentation
> we've had at Fosdem[1].
>
> Y.
>
> [1]
> https://fosdem.org/2016/schedule/event/virt_iaas_host_lifecycle_content_management_in_ovirt/
>
> On Thu, Mar 3, 2016 at 12:06 AM, Duckworth, Douglas C 
> wrote:
>
>> Does anyone have a kickstart available to share?  We are looking to
>> automate hypervisor deployment with Cobbler.
>>
>> Thanks
>> Doug
>>
>> --
>> Thanks
>>
>> Douglas Duckworth, MSc, LFCS
>> Unix Administrator
>> Tulane University
>> Technology Services
>> 1555 Poydras Ave
>> NOLA -- 70112
>>
>> E: du...@tulane.edu
>> O: 504-988-9341
>> F: 504-988-8505
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Storage migration: Preallocation forced on destination

2016-03-04 Thread Nicolás

Hi,

We're migrating an existing storage (glusterfs) to a new one (iSCSI). 
All disks on glusterfs are thin provisioned, but when migrating to iSCSI 
the following warning is shown:


The following disks will become preallocated, and may consume 
considerably more space on the target: local-disk


Why is that? Is there a way to migrate disks so they are thin 
provisioned on iSCSI as well?


Thanks.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Pavel Gashev

On 04/03/16 16:39, "Sahina Bose"  wrote:
>On 03/04/2016 05:30 PM, Pavel Gashev wrote:
>> On 04/03/16 13:50, "Sahina Bose"  wrote:
>>> Most of the problems that you outline here - related to healing and
>>> replacing are addressed with the sharding translator. Sharding breaks
>>> the large image file into smaller files, so that the entire file does
>>> not have to be copied. More details here -
>>> http://blog.gluster.org/2015/12/introducing-shard-translator/
>> Sure, I meant the same by mentioning distributed+replicated volumes. 
>> Actually, distributed+striped+replicated - 
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Distributed_Striped_Replicated.html
>
>Ok. Sharding is not the same as striped volumes in gluster. With 
>striping, like you mentioned, you would require more number of nodes to 
>form the striped set in addition to the replica set.( so 6 nodes since 
>you need replica 3 )
>Sharding can however work with 3 nodes - so on the replica 3 gluster 
>volume that you create, you can turn on the volume option 
>"features.shard on", to turn on this feature.

Great feature. I think it must be "on" in oVirt by default.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Sahina Bose



On 03/04/2016 05:30 PM, Pavel Gashev wrote:

On 04/03/16 13:50, "Sahina Bose"  wrote:

On 03/04/2016 04:13 PM, Pavel Gashev wrote:

On 04/03/16 12:22, "Sahina Bose"  wrote:

On 03/04/2016 02:14 AM, Pavel Gashev wrote:

Unfortunately, oVirt doesn't support multi-node local storage clusters.
And Gluster/CEPH doesn't work well over 1G network. It looks like that
the only way to use oVirt in a three-node cluster is to share local
storages over NFS. At least it makes possible to migrate VMs and move
disks among hardware nodes.

Do you know of reported problems with Gluster over 1Gb network? I think
10Gb is recommended, but 1Gb can also be used for gluster.
(We use it in our lab setup, and haven't encountered any issues so far
but of course, the workload may be different - hence the question)

Let's calculate. If I have a three node replicated gluster volume, each block 
writing on a node copies the block to the other two nodes. Thus, maximal write 
performance can't be above 50MB/s. Even it's acceptable for my workload, things 
get worse in failure recovering scenario. Gluster works with files. When a node 
fails and then recovers (even it's just a plain reboot), gluster copies the 
whole file over network if the file is changed during node outage. So if I have 
a 100GB VM disk, and guest system has written a 512-byte block to the disk, the 
whole 100GB will be copied during recovery. It might take 20 minutes for 100GB, 
and 3 hours for 1TB. And network will be 100% busy during recovery, so VMs on 
other nodes will wait for I/O most of time. In other words, a plain reboot of a 
node would result in datacenter out of service for several hours.

Things might be better if you have a distributed+replicated gluster volume. It 
requires at least six nodes. But things are still bad when you try to rebalance 
the volume after adding new bricks, or when a node has really failed and 
replaced.

Thus, 1GB network is ok for a lab, but it's not ok for production. IMHO.

Most of the problems that you outline here - related to healing and
replacing are addressed with the sharding translator. Sharding breaks
the large image file into smaller files, so that the entire file does
not have to be copied. More details here -
http://blog.gluster.org/2015/12/introducing-shard-translator/

Sure, I meant the same by mentioning distributed+replicated volumes. Actually, 
distributed+striped+replicated - 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Distributed_Striped_Replicated.html


Ok. Sharding is not the same as striped volumes in gluster. With 
striping, like you mentioned, you would require more number of nodes to 
form the striped set in addition to the replica set.( so 6 nodes since 
you need replica 3 )
Sharding can however work with 3 nodes - so on the replica 3 gluster 
volume that you create, you can turn on the volume option 
"features.shard on", to turn on this feature.











___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] OVirt SDK python ip problem

2016-03-04 Thread rein
Hi All,

I have an issue with the python ovirt sdk.
I made a script that creates a new host, hypervisor, in the ovirt environment.
The host is created and is put in maintenance state.
So far it works, but then i try to create the networks on the new host and 
there i got an error.
>  
> Creating the host:
>  
> try:
> if api.hosts.add(pHst):
> print("Host was added successfully")
> print("Waiting for host, {}, to reach the 'up' or 'non 
> operational' status").format(host)
> while api.hosts.get(name = host).status.state != 'up' and 
> api.hosts.get(name = host).status.state != 'non_operational':
> sleep(1)
> sleep(5)
> print("Host, {}, is '{}'").format(host)
>  
> except Exception as e:
> print("Failed to install Host: {}").format(str(e))
> return None
>  
> Host = api.hosts.get(name = host)
>  
> Host.deactivate()
> sleep(5)
> address, netmask = getDefaultAddress(api, oc, DC, host)
> if address is not None:

> So far all's well.
> Then the part for the network...
>  
> rc = setBondAndDefVLan(api, oc, DC, cluster, host, address, netmask)
> if rc is not None:
> Host.update()
> Host.activate()
>  
> ---
>  
> def setBondParams(api, oc, DC, host):
> nic0 = params.HostNIC(name  = oc.nic1,
>   network   = params.Network(),
>   boot_protocol = 'none',
>   ip= params.IP(
> address = '',
> netmask = '',
> gateway = '',
>),
>  )
> nic1 = params.HostNIC(name  = oc.nic2,
>   network   = params.Network(),
>   boot_protocol = 'none',
>   ip= params.IP(
> address = '',
> netmask = '',
> gateway = '',
>),
>  )
> 
> bond = params.Bonding(slaves  = params.Slaves(host_nic = [
>   nic0,
>   nic1,
>  ]),
>   options = params.Options(option = [
>  
> params.Option(name  = 'miimon',
>
> value = '100'),
>  
> params.Option(name  = 'mode',
>
> value = '4'),
> ])
>  )
>  
> team = params.HostNIC(name   = 'bond0',
>   boot_protocol  = 'none',
>   ip = params.IP(
>  address = '',
>  netmask = '',
>  gateway = '',
> ),
>   override_configuration = 1,
>   bonding= bond)
> return team
>  
>  
> def setDefVLanParams(api, oc, DC, cluster, host, address, netmask):
> defvlan = oc.defaultvlan
> clusterNW = api.clusters.get(cluster).networks.get(name = defvlan)
> vlan = params.HostNIC(name   = 
> "bond0.{}".format(clusterNW.vlan.id),
>   network= params.Network(name = 
> defvlan),
>   boot_protocol  = 'none',
>   ip = params.IP(
>  address = 
> address,
>  netmask = 
> netmask,
>  gateway = ''
> ),
>   override_configuration = 1,
>  )
> return vlan
>  
>  
> def setBondAndDefVLan(api, oc, DC, cluster, host, address, netmask):
> try:
> Host = api.hosts.get(name = host)
> except Exception as e:
> print("Could not find host {}, this is extremly 
> wrong!\n{}").format(host, str(e))
> return None
>  
> team = 

Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Pavel Gashev

On 04/03/16 13:50, "Sahina Bose"  wrote:
>On 03/04/2016 04:13 PM, Pavel Gashev wrote:
>> On 04/03/16 12:22, "Sahina Bose"  wrote:
>>> On 03/04/2016 02:14 AM, Pavel Gashev wrote:
 Unfortunately, oVirt doesn't support multi-node local storage clusters.
 And Gluster/CEPH doesn't work well over 1G network. It looks like that
 the only way to use oVirt in a three-node cluster is to share local
 storages over NFS. At least it makes possible to migrate VMs and move
 disks among hardware nodes.
>>>
>>> Do you know of reported problems with Gluster over 1Gb network? I think
>>> 10Gb is recommended, but 1Gb can also be used for gluster.
>>> (We use it in our lab setup, and haven't encountered any issues so far
>>> but of course, the workload may be different - hence the question)
>> Let's calculate. If I have a three node replicated gluster volume, each 
>> block writing on a node copies the block to the other two nodes. Thus, 
>> maximal write performance can't be above 50MB/s. Even it's acceptable for my 
>> workload, things get worse in failure recovering scenario. Gluster works 
>> with files. When a node fails and then recovers (even it's just a plain 
>> reboot), gluster copies the whole file over network if the file is changed 
>> during node outage. So if I have a 100GB VM disk, and guest system has 
>> written a 512-byte block to the disk, the whole 100GB will be copied during 
>> recovery. It might take 20 minutes for 100GB, and 3 hours for 1TB. And 
>> network will be 100% busy during recovery, so VMs on other nodes will wait 
>> for I/O most of time. In other words, a plain reboot of a node would result 
>> in datacenter out of service for several hours.
>>
>> Things might be better if you have a distributed+replicated gluster volume. 
>> It requires at least six nodes. But things are still bad when you try to 
>> rebalance the volume after adding new bricks, or when a node has really 
>> failed and replaced.
>>
>> Thus, 1GB network is ok for a lab, but it's not ok for production. IMHO.
>
>Most of the problems that you outline here - related to healing and 
>replacing are addressed with the sharding translator. Sharding breaks 
>the large image file into smaller files, so that the entire file does 
>not have to be copied. More details here - 
>http://blog.gluster.org/2015/12/introducing-shard-translator/

Sure, I meant the same by mentioning distributed+replicated volumes. Actually, 
distributed+striped+replicated - 
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/sect-User_Guide-Setting_Volumes-Distributed_Striped_Replicated.html






___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update Self-Hosted-Engine - Right Way?

2016-03-04 Thread Gianluca Cecchi
What I just tested on a lab environment of mine is this below

Starting situation is single host with CentOS 7.2 and SH Engine on CentOS
7.2 deployed from appliance
Version is 3.6.2.
Both are updated up to 26/01/16 and there are 3 VMs running

- shutdown Vms

- move host to global maintenance
On host:
hosted-engine --set-maintenance --mode=global

- upgrade engine VM both from 3.6.2 to 3.6.3 and update its CentOS packages
with:
On SH Engine:
yum update "ovirt-engine-setup*"
yum update
engine-setup

- reboot the engine VM and verify correct access to the web admin GUI
You can verify its console status at this time with something like this, run
on the host side:
hosted-engine --add-console-password --password=mypwd
/bin/remote-viewer vnc://localhost:5900 &

NOTE: it seems that the remote-viewer command doesn't survive the reboot...
you have to reconnect

- verify the oVirt version is right and all is ok, e.g. starting a VM and
accessing its console; then shut down the VM again

- move the host to local maintenance
On host:
hosted-engine --set-maintenance --mode=local

- update Host
On engine VM:
shutdown -h now
NOTE: I think this step about the engine VM shouldn't be necessary in a
multi-host environment. You would simply go host by host with these steps
until the upgrade of all your hosts is complete.

On host:
yum update
reboot

- Exit maintenance
On host:
hosted-engine --set-maintenance --mode=none

- Verify engine VM starts and access to web admin gui is ok

- Start your VMs and verify all is ok

I have just done this and all is ok: all storage domains (ISO, NFS and
hosted_storage) are up and able to start and manage VMs

The only "problem" I have seeing is that in Hosts tab, the host is still
marked for some minutes with a box and tip says "Update available".
But then it goes away (no event regarding this on events pane... just
wait)..

HIH,
Gianluca


On Fri, Mar 4, 2016 at 11:19 AM, Joop  wrote:

> On 4-3-2016 10:44, Taste-Of-IT wrote:
> > Hello Simon,
> >
> > i am new with oVirt, but are you sure, that this is the way for the
> > self-hosted-engine? Because i have no commands with hosted-***. As far
> > as i understand the engine runs in self-hosted-engine on same host and
> > therefore there is no way to enter the vm with engine and set it to
> > maintenance mode.?!
> If you have only one host and it sounds like it then upgrading is a bit
> different.
> hosted-engine command is available on the HOST.
> Just shutdown all VM's, either way will work if you have
> ovirt-guest-agent running inside the VM's
> place hosted-engine in maintenance mode (hosted-engine --set-maintenance
> --mode=global)
> ssh, use vnc, into engine
> yum upgrade if you want to upgrade the system too, else running
> engine-setup will update only ovirt components.
> shutdown the engine.
> yum upgrade your host, reboot your host, once back up, disable
> maintenance mode (replace global with none)
> ha-agent and ha-broker will start the engine for you.
>
> Joop
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move from Gluster to NFS

2016-03-04 Thread Sandro Bonazzola
On Fri, Feb 26, 2016 at 6:45 PM, Christophe TREFOIS <
christophe.tref...@uni.lu> wrote:

> Hi Sandro,
>
> How can I remove the host from the engine, if I’m re-deploying the engine?
>


deploy it on a second host, restore the backup, remove the first host from
the engine and then re-add it.
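
A minimal sketch of the backup/restore step with the standard engine-backup
tool (file names are hypothetical; check engine-backup --help on your version
for the exact restore options):

  # on the old engine VM
  engine-backup --mode=backup --file=engine.backup --log=engine-backup.log
  # on the freshly deployed engine VM
  engine-backup --mode=restore --file=engine.backup --log=engine-restore.log --provision-db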



>
> I don’t get it :)
>
> Best,
>
> Dr Christophe Trefois, Dipl.-Ing.
> Technical Specialist / Post-Doc
>
> UNIVERSITÉ DU LUXEMBOURG
>
> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
> Campus Belval | House of Biomedicine
> 6, avenue du Swing
> L-4367 Belvaux
> T: +352 46 66 44 6124
> F: +352 46 66 44 6949
> http://www.uni.lu/lcsb
>
>
>
> On 26 Feb 2016, at 09:29, Sandro Bonazzola  wrote:
>
>
>
> On Mon, Feb 15, 2016 at 4:46 PM, Yaniv Dary  wrote:
>
>> Adding Sandro and Didi might be able to give more detailed flow to do
>> this.
>>
>
> please add Simone, Martin and Roy too when it's HE related.
>
>
>
>>
>> Yaniv Dary
>> Technical Product Manager
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>> Tel : +972 (9) 7692306
>> 8272306
>> Email: yd...@redhat.com
>> IRC : ydary
>>
>>
>> On Sun, Feb 14, 2016 at 7:12 PM, Christophe TREFOIS <
>> christophe.tref...@uni.lu> wrote:
>>
>>> Is there a reason why this is not possible?
>>>
>>>
>>>
>>> Can I setup a second host in the engine cluster and move the engine with
>>> “storage” to that host?
>>>
>>>
>>>
>>> So, what you would recommend is:
>>>
>>>
>>>
>>> 1.   Move all VMs from engine host to another Host
>>>
>>> 2.   Setup NFS on the empty HE host
>>>
>>> 3.   Shutdown the HE, and disable HA-proxy and agent
>>>
>>> 4.   Re-deploy the engine and restore from backup the HE
>>>
>>> 5.   Enjoy ?
>>>
>>>
>>>
>>> Thank you for any help on this,
>>>
>>> I really don't want to end up with a broken environment :)
>>>
>>
> We're discussing this kind of migrations with storage guys. We haven't a
> supported procedure to do this yet.
> Above procedure should work except that if you're going to re-use the
> first host you'll need to remove it from the engine before trying to attach
> it again from he-setup.
>
>
>
>>
>>>
>>> Kind regards,
>>>
>>>
>>>
>>> --
>>>
>>> Christophe
>>>
>>>
>>>
>>> *From:* Yaniv Dary [mailto:yd...@redhat.com]
>>> *Sent:* dimanche 14 février 2016 16:32
>>> *To:* Christophe TREFOIS 
>>> *Cc:* users 
>>> *Subject:* Re: [ovirt-users] Move from Gluster to NFS
>>>
>>>
>>>
>>> You will not be able to move it between storage that is way I suggested
>>> the backup and restore path.
>>>
>>>
>>> Yaniv Dary
>>>
>>> Technical Product Manager
>>>
>>> Red Hat Israel Ltd.
>>>
>>> 34 Jerusalem Road
>>>
>>> Building A, 4th floor
>>>
>>> Ra'anana, Israel 4350109
>>>
>>>
>>>
>>> Tel : +972 (9) 7692306
>>>
>>> 8272306
>>>
>>> Email: yd...@redhat.com
>>>
>>> IRC : ydary
>>>
>>>
>>>
>>> On Sun, Feb 14, 2016 at 5:28 PM, Christophe TREFOIS <
>>> christophe.tref...@uni.lu> wrote:
>>>
>>> Hi Yaniv,
>>>
>>>
>>>
>>> Would you recommend doing a clean install or can I simply move the HE
>>> from the gluster mount point to NFS and tell HA agent to boot from there?
>>>
>>>
>>>
>>> What do you think?
>>>
>>>
>>>
>>> Thank you,
>>>
>>>
>>>
>>> --
>>>
>>> Christophe
>>>
>>> Sent from my iPhone
>>>
>>>
>>> On 14 Feb 2016, at 15:30, Yaniv Dary  wrote:
>>>
>>> We will probably need to backup and restore the HE vm after doing a
>>> clean install on NFS.
>>>
>>>
>>> Yaniv Dary
>>>
>>> Technical Product Manager
>>>
>>> Red Hat Israel Ltd.
>>>
>>> 34 Jerusalem Road
>>>
>>> Building A, 4th floor
>>>
>>> Ra'anana, Israel 4350109
>>>
>>>
>>>
>>> Tel : +972 (9) 7692306
>>>
>>> 8272306
>>>
>>> Email: yd...@redhat.com
>>>
>>> IRC : ydary
>>>
>>>
>>>
>>> On Sat, Feb 6, 2016 at 11:30 PM, Christophe TREFOIS <
>>> christophe.tref...@uni.lu> wrote:
>>>
>>> Dear all,
>>>
>>>
>>>
>>> I currently have a self-hosted setup with gluster on 1 node.
>>>
>>> I do have other data centers with 3 other hosts and local (sharable) NFS
>>> storage. Furthermore, I have 1 NFS export domain.
>>>
>>>
>>>
>>> We would like to move from Gluster to NFS only on the first host.
>>>
>>>
>>>
>>> Does anybody have any experience with this?
>>>
>>>
>>>
>>> Thank you,
>>>
>>>
>>>
>>> —
>>>
>>> Christophe
>>>
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> 

Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Sahina Bose



On 03/04/2016 04:13 PM, Pavel Gashev wrote:




On 04/03/16 12:22, "Sahina Bose"  wrote:

On 03/04/2016 02:14 AM, Pavel Gashev wrote:

Unfortunately, oVirt doesn't support multi-node local storage clusters.
And Gluster/CEPH doesn't work well over 1G network. It looks like that
the only way to use oVirt in a three-node cluster is to share local
storages over NFS. At least it makes possible to migrate VMs and move
disks among hardware nodes.


Do you know of reported problems with Gluster over 1Gb network? I think
10Gb is recommended, but 1Gb can also be used for gluster.
(We use it in our lab setup, and haven't encountered any issues so far
but of course, the workload may be different - hence the question)

Let's calculate. If I have a three node replicated gluster volume, each block 
writing on a node copies the block to the other two nodes. Thus, maximal write 
performance can't be above 50MB/s. Even it's acceptable for my workload, things 
get worse in failure recovering scenario. Gluster works with files. When a node 
fails and then recovers (even it's just a plain reboot), gluster copies the 
whole file over network if the file is changed during node outage. So if I have 
a 100GB VM disk, and guest system has written a 512-byte block to the disk, the 
whole 100GB will be copied during recovery. It might take 20 minutes for 100GB, 
and 3 hours for 1TB. And network will be 100% busy during recovery, so VMs on 
other nodes will wait for I/O most of time. In other words, a plain reboot of a 
node would result in datacenter out of service for several hours.

Things might be better if you have a distributed+replicated gluster volume. It 
requires at least six nodes. But things are still bad when you try to rebalance 
the volume after adding new bricks, or when a node has really failed and 
replaced.

Thus, 1GB network is ok for a lab, but it's not ok for production. IMHO.


Most of the problems that you outline here - related to healing and 
replacing are addressed with the sharding translator. Sharding breaks 
the large image file into smaller files, so that the entire file does 
not have to be copied. More details here - 
http://blog.gluster.org/2015/12/introducing-shard-translator/




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Pavel Gashev




On 04/03/16 12:22, "Sahina Bose"  wrote:
>
>On 03/04/2016 02:14 AM, Pavel Gashev wrote:
>>
>> Unfortunately, oVirt doesn't support multi-node local storage clusters.
>> And Gluster/CEPH doesn't work well over 1G network. It looks like that
>> the only way to use oVirt in a three-node cluster is to share local
>> storages over NFS. At least it makes possible to migrate VMs and move
>> disks among hardware nodes.
>
>
>Do you know of reported problems with Gluster over 1Gb network? I think 
>10Gb is recommended, but 1Gb can also be used for gluster.
>(We use it in our lab setup, and haven't encountered any issues so far 
>but of course, the workload may be different - hence the question)

Let's calculate. If I have a three node replicated gluster volume, each block 
writing on a node copies the block to the other two nodes. Thus, maximal write 
performance can't be above 50MB/s. Even if it's acceptable for my workload, things 
get worse in failure recovering scenario. Gluster works with files. When a node 
fails and then recovers (even it's just a plain reboot), gluster copies the 
whole file over network if the file is changed during node outage. So if I have 
a 100GB VM disk, and guest system has written a 512-byte block to the disk, the 
whole 100GB will be copied during recovery. It might take 20 minutes for 100GB, 
and 3 hours for 1TB. And network will be 100% busy during recovery, so VMs on 
other nodes will wait for I/O most of time. In other words, a plain reboot of a 
node would result in datacenter out of service for several hours.

Things might be better if you have a distributed+replicated gluster volume. It 
requires at least six nodes. But things are still bad when you try to rebalance 
the volume after adding new bricks, or when a node has really failed and 
replaced.

Thus, 1GB network is ok for a lab, but it's not ok for production. IMHO.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] [ovirt-cli] query snapshot in preview of a vm

2016-03-04 Thread Jiri Belka
I can't figure out how to write a nice ovirt-shell command to query the
current in-preview snapshot of a VM (to commit it later).

This works:

~~~
list snapshots --parent-vm-name jb-w2k8r2 --kwargs "description=Active VM 
before the preview" --show-all | egrep "^(id|description|type)"
id : 
08535a3e-dc9e-42c0-b611-6fea4a0318c9
description: Active VM before the preview
type   : preview
~~~

But why the following does not work?

~~~
list snapshots --parent-vm-name jb-w2k8r2 --kwargs "type=preview"
~~~

It returns nothing.

j.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Update Self-Hosted-Engine - Right Way?

2016-03-04 Thread Taste-Of-IT

Hello Simone,

I am new to oVirt, but are you sure that this is the way for the 
self-hosted engine? Because I have no hosted-* commands. As far 
as I understand, with a self-hosted engine the engine runs on the same host, 
and therefore there is no way to enter the engine VM and set it to 
maintenance mode?!


Regards
Taste

Am 2016-03-04 10:24, schrieb Simone Tiraboschi:

On Fri, Mar 4, 2016 at 9:57 AM, Taste-Of-IT 
wrote:


Hello,

i want to Update my oVirt as Self-Hosted-Engine on CentOS 7. The
way i whould do is as follow and i want to know if this is the right
recommended way.


Please follow
this: 
http://www.ovirt.org/documentation/how-to/hosted-engine/#Upgrade_Hosted_Engine
[2]

Take care to put hosted-engine into global maintenance mode
before upgrading the engine VM; otherwise the HA agent could reboot it
(since it detects that the engine is not responsive) in the middle of
the upgrade, with potentially really bad results.
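
For reference, on the host:

  hosted-engine --set-maintenance --mode=global   # before upgrading the engine VM
  hosted-engine --set-maintenance --mode=none     # once the upgrade is done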

 


1.  shutdown all virtual machines (can i do this from GUI, or
should it better be done inside the VMs?)
2.  yum update "ovirt-engine-setup*"
3.  yum update
4.  engine-setup
5.  follow questions and hope all is updated correct...

Is that all or should i do some other stepps before or between
this?

thx
Taste
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users [1]




Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users
[2]
http://www.ovirt.org/documentation/how-to/hosted-engine/#Upgrade_Hosted_Engine

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Importing VMs into Cinder backend

2016-03-04 Thread Nir Soffer
On Fri, Mar 4, 2016 at 1:34 AM, Bond, Darryl  wrote:

> Is there a recommended way to import a VM from Vmware into oVirt with
> Cinder back end?
>
> I have successfully created an image in the oVirt Import domain using
> virt-v2v. The image can be imported into NFS storage and works fine.
>
> I can create new VMs with disks in the Cinder storage.
>
>
> There does not seem to be any way of either:
>
> a) Import directly into the cinder storage (regardless if the import has
> raw or qcow disks
>

virt-v2v may support this, as it is using qemu-img under the hood, but I
guess we miss
the integration, passing the needed info from engine and accepting it in
virt-v2v.

(Adding Shahar)


>
> b) Move a disk from NFS storage into cinder
>

Correct, this is not supported yet with cinder/ceph disks.

Maybe you can do this manually:
- Import vm using v2v to nfs
- Create cinder/ceph volume in the correct size
- Copy the imported disk to the ceph volume using "qemu-img convert" (see the
  sketch below); ceph disks are addressed as "rbd:poolname/volumename", and
  you also have to specify cephx auth, or copy the ceph keyring to /etc/ceph/
- Detach the imported disk from the vm
- Attach the cinder/ceph disk to the vm
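
A hedged sketch of that convert step (pool, auth id and path names below are
made up):

  qemu-img convert -p -O raw \
      /rhev/data-center/mnt/<nfs-server:_export>/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
      rbd:volumes/volume-<cinder-volume-id>:id=cinder:conf=/etc/ceph/ceph.conf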

This work is planned for next version.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Multi-node cluster with local storage

2016-03-04 Thread Sahina Bose



On 03/04/2016 02:14 AM, Pavel Gashev wrote:

Hello,

I'd like to ask community, what is the best way to use oVirt in the
following hardware configuration:

Three servers connected 1GB network. Each server - 32 threads, 256GB
RAM, 4TB RAID.

Please note that a local storage and an 1GB network is a typical
hardware configuration for almost any dedicated hosting.

Unfortunately, oVirt doesn't support multi-node local storage clusters.
And Gluster/CEPH doesn't work well over 1G network. It looks like that
the only way to use oVirt in a three-node cluster is to share local
storages over NFS. At least it makes possible to migrate VMs and move
disks among hardware nodes.



Do you know of reported problems with Gluster over 1Gb network? I think 
10Gb is recommended, but 1Gb can also be used for gluster.
(We use it in our lab setup, and haven't encountered any issues so far 
but of course, the workload may be different - hence the question)





Does somebody have such setup?

Thanks
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Update Self-Hosted-Engine - Right Way?

2016-03-04 Thread Taste-Of-IT

Hello,

I want to update my oVirt self-hosted engine on CentOS 7. The way I would 
do it is as follows, and I want to know if this is the right, recommended 
way.


1.  shut down all virtual machines (can I do this from the GUI, or is it 
better done from inside the VMs?)

2.  yum update "ovirt-engine-setup*"
3.  yum update
4.  engine-setup
5.  follow the questions and hope all is updated correctly...

Is that all, or should I do some other steps before or in between?

thx
Taste
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users