Re: [ovirt-users] disk not bootable

2016-07-17 Thread Fernando Fuentes
Nir,

That's odd. gamma is my iscsi host, it's in up state and it has active
VMs.
What am I missing?

Regards,

-- 
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org

On Sun, Jul 17, 2016, at 07:24 PM, Nir Soffer wrote:
> On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes 
> wrote:
> > Nir,
> >
> > Ok I got the uuid but I am getting the same results as before.
> > Nothing comes up.
> >
> > [root@gamma ~]# pvscan --cache
> > [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
> > 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
> > [root@gamma ~]#
> >
> > Without the grep, all I get is:
> >
> > [root@gamma ~]# lvs -o vg_name,lv_name,tags
> >   VG   LV  LV Tags
> >   vg_gamma lv_home
> >   vg_gamma lv_root
> >   vg_gamma lv_swap
> 
> You are not connected to the iscsi storage domain.
> 
> Please try this from a host in up state in engine.
> 
> Nir
> 
> >
> > On the other hand an fdisk shows a bunch of disks and here is one
> > example:
> >
> > Disk /dev/mapper/36589cfc0050564002c7e51978316: 2199.0 GB,
> > 219902322 bytes
> > 255 heads, 63 sectors/track, 267349 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > Disk identifier: 0x
> >
> >
> > Disk /dev/mapper/36589cfc00881b9b93c2623780840: 2199.0 GB,
> > 219902322 bytes
> > 255 heads, 63 sectors/track, 267349 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > Disk identifier: 0x
> >
> >
> > Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536
> > MB, 536870912 bytes
> > 255 heads, 63 sectors/track, 65 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > Disk identifier: 0x
> >
> > Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073
> > MB, 1073741824 bytes
> > 255 heads, 63 sectors/track, 130 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > Sector size (logical/physical): 512 bytes / 32768 bytes
> > I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> > Disk identifier: 0x
> >
> > Regards,
> >
> > --
> > Fernando Fuentes
> > ffuen...@txweather.org
> > http://www.txweather.org
> >
> > On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
> >> Nir,
> >>
> >> Ok, I'll look for it here in a few.
> >> Thanks for your reply and help!
> >>
> >> --
> >> Fernando Fuentes
> >> ffuen...@txweather.org
> >> http://www.txweather.org
> >>
> >> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
> >> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes 
> >> > wrote:
> >> > > Nir,
> >> > >
> >> > > I tried to follow your steps but I can't seem to find the ID of the
> >> > > template.
> >> >
> >> > The image-uuid of the template is displayed in the Disks tab in engine.
> >> >
> >> > To find the volume-uuid on block storage, you can do:
> >> >
> >> > pvscan --cache
> >> > lvs -o vg_name,lv_name,tags | grep image-uuid
> >> >
> >> > >
> >> > > Regards,
> >> > >
> >> > > --
> >> > > Fernando Fuentes
> >> > > ffuen...@txweather.org
> >> > > http://www.txweather.org
> >> > >
> >> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
> >> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler 
> >> > >> wrote:
> >> > >> > All, I did a test for Fernando in our ovirt environment. I created
> >> > >> > a vm called win7melly in the nfs domain. I then migrated it to the
> >> > >> > iscsi domain. It booted without any issue. So it has to be something
> >> > >> > with the templates. I have attached the vdsm log for the host the vm
> >> > >> > resides on.
> >> > >>
> >> > >> The log shows a working vm, so it does not help much.
> >> > >>
> >> > >> I think that the template you copied from the nfs domain to the block
> >> > >> domain is corrupted, or the volume metadata are incorrect.
> >> > >>
> >> > >> If I understand this correctly, this started when Fernando could not
> >> > >> copy the vm disk to the block storage, and I guess the issue was that
> >> > >> the template was missing on that storage domain. I assume that he
> >> > >> copied the template to the block storage domain by opening the
> >> > >> templates tab, selecting the template, and choosing copy from the menu.
> >> > >>
> >> > >> Let's compare the template on the nfs and block storage domains.
> >> > >>
> >> > >> 1. Find the template on the nfs storage domain, using the image uuid
> >> > >> in engine.
> >> > >>
> >> > >> It should be at
> >> > >>
> >> > >> 
> >> > >> 

Re: [ovirt-users] disk not bootable

2016-07-17 Thread Nir Soffer
On Sun, Jul 17, 2016 at 1:24 AM, Fernando Fuentes  wrote:
> Nir,
>
> Ok I got the uuid but I am getting the same results as before.
> Nothing comes up.
>
> [root@gamma ~]# pvscan --cache
> [root@gamma ~]# lvs -o vg_name,lv_name,tags | grep
> 3b7d9349-9eb1-42f8-9e04-7bbb97c02b98
> [root@gamma ~]#
>
> Without the grep, all I get is:
>
> [root@gamma ~]# lvs -o vg_name,lv_name,tags
>   VG   LV  LV Tags
>   vg_gamma lv_home
>   vg_gamma lv_root
>   vg_gamma lv_swap

You are not connected to the iscsi storage domain.

Please try this from a host in up state in engine.
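
For example, a minimal check from such a host (a sketch; <image-uuid> is a
placeholder for the template's image uuid shown in engine, so substitute your
own value):

  iscsiadm -m session                               # the host should have an active session to the iscsi target
  pvscan --cache                                    # refresh lvm metadata
  lvs -o vg_name,lv_name,tags | grep <image-uuid>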

Nir

>
> On the other hand an fdisk shows a bunch of disks and here is one
> example:
>
> Disk /dev/mapper/36589cfc0050564002c7e51978316: 2199.0 GB,
> 219902322 bytes
> 255 heads, 63 sectors/track, 267349 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 32768 bytes
> I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> Disk identifier: 0x
>
>
> Disk /dev/mapper/36589cfc00881b9b93c2623780840: 2199.0 GB,
> 219902322 bytes
> 255 heads, 63 sectors/track, 267349 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 32768 bytes
> I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> Disk identifier: 0x
>
>
> Disk /dev/mapper/3ccb7b67--8067--4315--9656--d68ba10975ba-metadata: 536
> MB, 536870912 bytes
> 255 heads, 63 sectors/track, 65 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 32768 bytes
> I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> Disk identifier: 0x
>
> Disk /dev/mapper/4861322b--352f--41c6--890a--5cbf1c2c1f01-master: 1073
> MB, 1073741824 bytes
> 255 heads, 63 sectors/track, 130 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 32768 bytes
> I/O size (minimum/optimal): 32768 bytes / 1048576 bytes
> Disk identifier: 0x
>
> Regards,
>
> --
> Fernando Fuentes
> ffuen...@txweather.org
> http://www.txweather.org
>
> On Sat, Jul 16, 2016, at 04:25 PM, Fernando Fuentes wrote:
>> Nir,
>>
>> Ok, I'll look for it here in a few.
>> Thanks for your reply and help!
>>
>> --
>> Fernando Fuentes
>> ffuen...@txweather.org
>> http://www.txweather.org
>>
>> On Sat, Jul 16, 2016, at 04:16 PM, Nir Soffer wrote:
>> > On Fri, Jul 15, 2016 at 3:50 PM, Fernando Fuentes 
>> > wrote:
>> > > Nir,
>> > >
>> > > I tried to follow your steps but I can't seem to find the ID of the
>> > > template.
>> >
>> > The image-uuid of the template is displayed in the Disks tab in engine.
>> >
>> > To find the volume-uuid on block storage, you can do:
>> >
>> > pvscan --cache
>> > lvs -o vg_name,lv_name,tags | grep image-uuid
>> >
>> > >
>> > > Regards,
>> > >
>> > > --
>> > > Fernando Fuentes
>> > > ffuen...@txweather.org
>> > > http://www.txweather.org
>> > >
>> > > On Sun, Jul 10, 2016, at 02:15 PM, Nir Soffer wrote:
>> > >> On Thu, Jul 7, 2016 at 7:46 PM, Melissa Mesler 
>> > >> wrote:
>> > >> > All, I did a test for Fernando in our ovirt environment. I created
>> > >> > a vm called win7melly in the nfs domain. I then migrated it to the
>> > >> > iscsi domain. It booted without any issue. So it has to be something
>> > >> > with the templates. I have attached the vdsm log for the host the vm
>> > >> > resides on.
>> > >>
>> > >> The log shows a working vm, so it does not help much.
>> > >>
>> > >> I think that the template you copied from the nfs domain to the block
>> > >> domain is corrupted, or the volume metadata are incorrect.
>> > >>
>> > >> If I understand this correctly, this started when Fernando could not
>> > >> copy the vm disk to the block storage, and I guess the issue was that
>> > >> the template was missing on that storage domain. I assume that he
>> > >> copied the template to the block storage domain by opening the
>> > >> templates tab, selecting the template, and choosing copy from the menu.
>> > >>
>> > >> Let's compare the template on the nfs and block storage domains.
>> > >>
>> > >> 1. Find the template on the nfs storage domain, using the image uuid
>> > >> in engine.
>> > >>
>> > >> It should be at
>> > >>
>> > >> 
>> > >> /rhev/data-center/mnt/server:_path/domain-uuid/images/image-uuid/volume-uuid
>> > >>
>> > >> 2. Please share the output of:
>> > >>
>> > >> cat /path/to/volume.meta
>> > >> qemu-img info /path/to/volume
>> > >> qemu-img check /path/to/volume
>> > >>
>> > >> 4. Find the template on the block storage domain
>> > >>
>> > >> You should have an lv using the same volume uuid and the image-uuid
>> > >> should be in the lv tag.
>> > >>
>> > >> Find it using:
>> > >>
>> > >> lvs -o vg_name,lv_name,tags | grep volume-uuid
>> > >>
>> > >> 5. Activate the lv
>> > >>
>> > >> lvchange -ay 
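
For reference, a consolidated sketch of the sequence described above, with
placeholder uuids and paths - substitute the domain-uuid, image-uuid and
volume-uuid taken from engine:

  # nfs side: inspect the template volume and its metadata
  cat /rhev/data-center/mnt/server:_path/<domain-uuid>/images/<image-uuid>/<volume-uuid>.meta
  qemu-img info /rhev/data-center/mnt/server:_path/<domain-uuid>/images/<image-uuid>/<volume-uuid>
  qemu-img check /rhev/data-center/mnt/server:_path/<domain-uuid>/images/<image-uuid>/<volume-uuid>

  # block side: locate the matching lv, activate it, then inspect it the same way
  pvscan --cache
  lvs -o vg_name,lv_name,tags | grep <volume-uuid>
  lvchange -ay <vg-name>/<volume-uuid>
  qemu-img info /dev/<vg-name>/<volume-uuid>
  qemu-img check /dev/<vg-name>/<volume-uuid>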

Re: [ovirt-users] Move from Local SD to Shared

2016-07-17 Thread Yaniv Dary
I would recommend using an export domain to clear the SD, then reattaching
it to a shared storage cluster and importing.

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Jul 15, 2016 at 5:10 PM, Alexandr Krivulya 
wrote:

> Hi,
>
> I need to move my datacenter from a local storage domain to a shared one (nfs
> or posix) without destroying the storage. What is the best way to do it in
> oVirt 3.6?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ha-agent keeps quitting - 4.0.0

2016-07-17 Thread Yaniv Dary
The other issue will be fixed in 4.0.2:
https://bugzilla.redhat.com/show_bug.cgi?id=1348907

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Sun, Jul 17, 2016 at 1:04 PM, Artyom Lukianov 
wrote:

> We had a bug related to this issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1343005
> It should be fixed in recent versions.
> Best Regards
>
> On Thu, Jul 14, 2016 at 8:14 PM, Gervais de Montbrun <
> gerv...@demontbrun.com> wrote:
>
>> Hey Folks,
>>
>> I upgraded my oVirt cluster from 3.6.7 to 4.0.0 yesterday and am
>> experiencing a bunch of issues.
>>
>> 1) I can't update the Compatibility Version to 4.0 because it tells me
>> that all my VMs have to be off to do so, but I have a hosted engine. I
>> found some info online about how you plan to fix this. Do we know if the
>> fix will be in 4.0.1?
>>
>> 2) More alarming... the ovirt-ha-agent keeps quitting. The agent.log
>> shows:
>>
>> MainThread::ERROR::2016-07-13
>> 16:38:57,100::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 16:39:02,104::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
>> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
>> available [[Errno 24] Too many open files:
>> '/etc/ovirt-hosted-engine/hosted-engine.conf']
>> MainThread::ERROR::2016-07-13
>> 16:39:02,105::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 16:39:07,110::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Too many errors occurred, giving up. Please review the log and consider
>> filing a bug.
>> MainThread::ERROR::2016-07-13
>> 17:44:03,499::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Shutting down the agent because of 3 failures in a row!
>> MainThread::ERROR::2016-07-13
>> 17:44:03,515::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '(24, 'Sanlock lockspace remove failure', 'Too many open files')' -
>> trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:08,520::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
>> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
>> available [[Errno 24] Too many open files:
>> '/etc/ovirt-hosted-engine/hosted-engine.conf']
>> MainThread::ERROR::2016-07-13
>> 17:44:08,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:13,529::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:18,535::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:23,541::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:28,546::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:33,552::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:38,556::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:43,561::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:48,566::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Error: '[Errno 24] Too many open files' - trying to restart agent
>> MainThread::ERROR::2016-07-13
>> 17:44:53,571::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
>> Too many errors occurred, giving up. Please review the log and consider
>> filing a bug.
>> MainThread::ERROR::2016-07-13
>> 18:47:40,048::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Shutting down the agent because of 3 failures in a row!
>> MainThread::ERROR::2016-07-14
>> 10:32:29,184::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
>> Shutting down the agent because of 3 failures in a row!
>> 

Re: [ovirt-users] Importing QCOW2 into ovirt-3.6

2016-07-17 Thread Yaniv Dary
In 3.6 you can use the image upload utility.
In 4.0 we added a UI for uploading disk images.
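
For example, assuming the appliance file is named appliance.qcow2, a quick
sanity check before uploading:

  qemu-img info appliance.qcow2    # confirm the format, virtual size and any backing file
  qemu-img check appliance.qcow2   # verify the qcow2 image is not corrupted

  # if the hosts run an older qemu, converting to the 0.10 compat level may
  # help (an assumption, not required in general)
  qemu-img convert -O qcow2 -o compat=0.10 appliance.qcow2 appliance-compat.qcow2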

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Fri, Jul 15, 2016 at 6:05 PM, Alexis HAUSER <
alexis.hau...@telecom-bretagne.eu> wrote:

> Hi,
>
>
> I downloaded a Linux appliance "for KVM" with a .QCOW2 extension. How can I
> import it?
>
> I tried adding it manually to an NFS share but it doesn't seem to be detected
> by ovirt 3.6.
>
> Any ideas ?
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Move from Local SD to Shared

2016-07-17 Thread Maor Lipchuk
Hi Alexandr,

Does the storage domain's server support NFS or POSIX?
If so, you can create a new shared DC and destroy the old local DC (without
formatting the local SD), and then try to import this SD as a shared
storage domain.

Regards,
Maor


On Fri, Jul 15, 2016 at 5:10 PM, Alexandr Krivulya 
wrote:

> Hi,
>
> I need to move my datacenter from a local storage domain to a shared one (nfs
> or posix) without destroying the storage. What is the best way to do it in
> oVirt 3.6?
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt-ha-agent keeps quitting - 4.0.0

2016-07-17 Thread Artyom Lukianov
We had a bug related to this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1343005
It should be fixed in recent versions.
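
In the meantime, a quick way to confirm the agent is leaking file descriptors
(a sketch; it assumes the process can be found by the ovirt-ha-agent name):

  AGENT_PID=$(pgrep -f ovirt-ha-agent | head -1)
  ls /proc/$AGENT_PID/fd | wc -l              # number of open file descriptors
  grep 'open files' /proc/$AGENT_PID/limits   # per-process limit
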
Best Regards

On Thu, Jul 14, 2016 at 8:14 PM, Gervais de Montbrun  wrote:

> Hey Folks,
>
> I upgraded my oVirt cluster from 3.6.7 to 4.0.0 yesterday and am
> experiencing a bunch of issues.
>
> 1) I can't update the Compatibility Version to 4.0 because it tells me
> that all my VMs have to be off to do so, but I have a hosted engine. I
> found some info online about how you plan to fix this. Do we know if the
> fix will be in 4.0.1?
>
> 2) More alarming... the ovirt-ha-agent keeps quitting. The agent.log shows:
>
> MainThread::ERROR::2016-07-13
> 16:38:57,100::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 16:39:02,104::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
> available [[Errno 24] Too many open files:
> '/etc/ovirt-hosted-engine/hosted-engine.conf']
> MainThread::ERROR::2016-07-13
> 16:39:02,105::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 16:39:07,110::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Too many errors occurred, giving up. Please review the log and consider
> filing a bug.
> MainThread::ERROR::2016-07-13
> 17:44:03,499::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-13
> 17:44:03,515::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '(24, 'Sanlock lockspace remove failure', 'Too many open files')' -
> trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:08,520::config::122::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_load)
> Configuration file '/etc/ovirt-hosted-engine/hosted-engine.conf' not
> available [[Errno 24] Too many open files:
> '/etc/ovirt-hosted-engine/hosted-engine.conf']
> MainThread::ERROR::2016-07-13
> 17:44:08,523::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:13,529::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:18,535::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:23,541::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:28,546::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:33,552::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:38,556::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:43,561::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:48,566::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: '[Errno 24] Too many open files' - trying to restart agent
> MainThread::ERROR::2016-07-13
> 17:44:53,571::agent::210::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Too many errors occurred, giving up. Please review the log and consider
> filing a bug.
> MainThread::ERROR::2016-07-13
> 18:47:40,048::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-14
> 10:32:29,184::hosted_engine::493::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Shutting down the agent because of 3 failures in a row!
> MainThread::ERROR::2016-07-14
> 11:10:07,223::brokerlink::279::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(_communicate)
> Connection closed: Connection closed
> MainThread::ERROR::2016-07-14
> 11:10:07,224::brokerlink::148::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(get_monitor_status)
> Exception getting monitor status: Connection closed
> MainThread::ERROR::2016-07-14
> 11:10:07,224::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
> Error: 

Re: [ovirt-users] Import of ova failed

2016-07-17 Thread Shahar Havivi
On 14.07.16 18:14, Cam Mac wrote:
> Hi,
> 
> I'm trying to import some .ova images from VMWare that have been copied to
> a node. In both cases they fail with the error:
> 
> "Conversion of VM from exteral enironment failed: copy-disk stream closed
> unexpectedly"
First you need to look at the vdsm log to see more details on why it failed.

> 
> (the message above is copied verbatim from the log, including
> the misspellings)
> 
> One of the VMs is a RHEL 6.6 and the other is Windows 7. The import reports
> adding a disk and then running a conversion, then appears to fail about 5
> or 6 minutes into converting the image.
> 
> I'm considering using virt-v2v to do the conversion of the .ova, but I'm
> not sure how to get that into ovirt then.
> 
> Any suggestions?
Try to run virt-v2v -i ova -o local to import the ova to a local disk (see the
virt-v2v man page for more options) and see if that passes - the errors can be
more detailed there (it is supposed to reflect the same error that you get in
the vdsm log). Another option is to run virt-v2v in verbose mode (virt-v2v -v).
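
For example, a sketch assuming the ova was copied to /tmp/vm.ova and there is
enough free space under /var/tmp for the converted disks:

  virt-v2v -i ova /tmp/vm.ova -o local -os /var/tmp
  virt-v2v -v -x -i ova /tmp/vm.ova -o local -os /var/tmp   # same run with verbose/debug output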
> 
> Regards,
> 
> Cam

> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users