Re: [ovirt-users] New SAN Storage Domain: Cannot deactivate Logical Volume

2017-04-06 Thread Liron Aravot
Thanks Alexey,
Nir/Adam/Raz - any initial thoughts about that? have you encountered that
issue before?

Regards,
Liron

On Wed, Apr 5, 2017 at 12:21 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> Hi Liron,
>
> Well, this error appears on a freshly installed oVirt hosted engine
> 4.1.1.6-1 with oVirt Node NG 4.1.
> There is no engine major version upgrade.
> Node NG was updated from 4.1.0 to 4.1.1 by the ovirt-node-ng-image* pkgs.
>
> vdsm version is:
>
> Installed Packages
> Name: vdsm
> Arch: x86_64
> Version : 4.19.10
> Release : 1.el7.centos
> --
>
> I also installed an oVirt 3.6 hosted engine on another server (CentOS 7.2
> minimal, not updated to 7.3) and added the same iSCSI target as the data
> domain. This works OK.
> After that I updated the engine from 3.6 to 4.1; the iSCSI data domain is
> still OK after the update. Other iSCSI targets were added as data domains
> without errors.
>
> vdsm version is:
>
> Installed Packages
> Name: vdsm
> Arch    : noarch
> Version : 4.17.32
> Release : 0.el7.centos
>
> 04.04.2017, 16:48, "Liron Aravot" <lara...@redhat.com>:
>
> Hi Alexey,
>
> Looking in your logs - it indeed looks weird: the VG is created, and
> seconds later we fail to get it when attempting to create the domain.
>
> Did you upgrade/downgrade only the engine, or the vdsm version as well?
>
> 2017-04-03 11:16:39,595+0300 INFO  (jsonrpc/4) [dispatcher] Run and
> protect: createVG(vgname=u'3050e121-83b5-474e-943f-6dc52af38a17',
> devlist=[u'3600140505d11d22e61f46a6ba4a7bdc7'], force=False,
> options=None) (logUtils:51)
>
>
> output error', '  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read
> failed after 0 of 4096 at 4294967230464: Input/output error',
> '  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967287808: Input/output error', '  WARNING: Error counts
> reached a limit of 3. Device /dev/mapper/36001405b7dd7b800e7049ecbf6830637
> was disabled', '  Failed to find physical volume
> "/dev/mapper/3600140505d11d22e61f46a6ba4a7bdc7".'] (lvm:323)
> 2017-04-03 11:16:38,832+0300 WARN  (jsonrpc/5) [storage.HSM] getPV failed
> for guid: 3600140505d11d22e61f46a6ba4a7bdc7 (hsm:1973)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 1970, in _getDeviceList
> pv = lvm.getPV(guid)
>   File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV
> raise se.((pvName,))
>
>
> On Tue, Apr 4, 2017 at 3:33 PM, Николаев Алексей <
> alexeynikolaev.p...@yandex.ru> wrote:
>
> Recently I updated oVirt 3.6.7.5-1 to oVirt 4.1.1.6-1. The main iSCSI
> data domain is OK, and another iSCSI data domain was added successfully.
>
> The problem occurs only with a freshly installed oVirt 4.1.1.6-1.
>
> I attached logs taken after updating oVirt and adding the new iSCSI data
> domains.
>
> 03.04.2017, 17:26, "Николаев Алексей" <alexeynikolaev.p...@yandex.ru>:
>
> I also tested the iSCSI target again with oVirt 3.6.7.5-1 and a CentOS
> 7.2.1511 host. The iSCSI data domain was added successfully. "The truth
> is out there". Logs attached.
>
> 03.04.2017, 11:24, "Николаев Алексей" <alexeynikolaev.p...@yandex.ru>:
>
> Add logs. Thx!
>
>
> 02.04.2017, 16:19, "Liron Aravot" <lara...@redhat.com>:
>
> Hi Alexey,
> can you please attach the engine/vdsm logs?
>
> thanks.
>
> On Fri, Mar 31, 2017 at 11:08 AM, Николаев Алексей <
> alexeynikolaev.p...@yandex.ru> wrote:
>
> Hi, community!
>
> I'm trying to use an iSCSI data domain with oVirt 4.1.1.6-1 and oVirt
> Node 4.1.1, but I get this error while adding the data domain.
>
> Error while executing action New SAN Storage Domain: Cannot deactivate
> Logical Volume
>
>
> 2017-03-31 10:47:45,099+03 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-41)
> [717af17f] Command 'CreateStorageDomainVDSCommand(HostName = node169-07.,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='ca177a14-56ef-4736-a0a9-ab9dc2a8eb90', storageDomain='
> StorageDomainStatic:{name='data1', 
> id='56038842-2fbe-4ada-96f1-c4b4e66fd0b7'}',
> args='sUG5rL-zqvF-DMox-uMK3-Xl91-ZiYw-k27C3B'})' execution failed:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Cannot deactivate Logical Volume: ('General
> Storage Exception: (\'5 [] [\\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637:
> read failed after 0 of 4096 at 0: Input/output error\\\',
> \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at

Re: [ovirt-users] for some reason ovirtnode creates unecessary vg and lvs

2017-04-06 Thread Liron Aravot
On Wed, Apr 5, 2017 at 3:16 PM, Yaniv Kaul  wrote:

>
>
> On Sun, Apr 2, 2017 at 9:15 PM, martin chamambo 
> wrote:
>
>> I managed to configure the main iSCSI domain for my oVirt 4.1 engine and
>> node. It connects to the storage initially and initialises the data center,
>> but after rebooting the node and engine it creates seemingly unnecessary
>> VGs and LVs like below:
>>
>>   280246d3-ac7b-44ff-8c03-dc2bcb9edb70 d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
>>   2d57ab88-16e4-4007-9047-55fc4a35b534 d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
>>   ids                                  d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
>>   inbox                                d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
>>   leases                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   2.00g
>>   master                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   1.00g
>>   metadata                             d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 512.00m
>>   outbox                               d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a- 128.00m
>>   xleases                              d5104206-5863-4f9d-9ea7-2b140c97d65f -wi-a-   1.00g
>>
>> What's the cause of this?
>>
>
> I'm not sure what the problem is - these LVs are the metadata LVs for a
> storage domain.
> Y.
>
>

As Yaniv wrote - those are LVs created by oVirt.
Is the node currently in use in oVirt? If so, it should be connected to
your storage server and is expected to have those VGs/LVs.
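To confirm that what you are seeing is the expected set of bookkeeping volumes, you can list them on the node; a minimal sketch, with the VG name taken from the listing above:

```shell
# Storage-domain VG name copied from the lvs output quoted above
VG=d5104206-5863-4f9d-9ea7-2b140c97d65f

# List all LVs in the domain VG with their sizes and attributes
lvs -o lv_name,lv_size,lv_attr "$VG"

# Every oVirt block storage domain carries these seven special LVs
# (the remaining UUID-named LVs are disk images); print the expected set:
for lv in metadata ids inbox outbox leases master xleases; do
    echo "$lv"
done
```

If the special LVs above account for everything except UUID-named image volumes, nothing unnecessary has been created.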

>
>> NB:mY ISCSI storage is on a centos 7 box
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>


Re: [ovirt-users] New SAN Storage Domain: Cannot deactivate Logical Volume

2017-04-04 Thread Liron Aravot
Hi Alexey,

Looking in your logs - it indeed looks weird: the VG is created, and
seconds later we fail to get it when attempting to create the domain.

Did you upgrade/downgrade only the engine, or the vdsm version as well?

2017-04-03 11:16:39,595+0300 INFO  (jsonrpc/4) [dispatcher] Run and
protect: createVG(vgname=u'3050e121-83b5-474e-943f-6dc52af38a17',
devlist=[u'3600140505d11d22e61f46a6ba4a7bdc7'], force=False,
options=None) (logUtils:51)


output error', '  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read
failed after 0 of 4096 at 4294967230464: Input/output error',
'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
4096 at 4294967287808: Input/output error', '  WARNING: Error counts
reached a limit of 3. Device /dev/mapper/36001405b7dd7b800e7049ecbf6830637
was disabled', '  Failed to find physical volume
"/dev/mapper/3600140505d11d22e61f46a6ba4a7bdc7".'] (lvm:323)
2017-04-03 11:16:38,832+0300 WARN  (jsonrpc/5) [storage.HSM] getPV failed
for guid: 3600140505d11d22e61f46a6ba4a7bdc7 (hsm:1973)
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 1970, in _getDeviceList
pv = lvm.getPV(guid)
  File "/usr/share/vdsm/storage/lvm.py", line 853, in getPV
raise se.((pvName,))
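For what it's worth, read failures at 4096-byte-aligned offsets near the top of the device, followed by LVM disabling it, usually point at a dead or stale multipath map rather than at vdsm itself. A hedged diagnostic sketch (the device WWID is copied from the log above; the commands are meant to be run on the affected host):

```shell
# WWID of the failing device, copied from the log above
DEV=/dev/mapper/36001405b7dd7b800e7049ecbf6830637

multipath -ll                   # check path states: any 'failed'/'faulty' paths?
dd if="$DEV" of=/dev/null bs=4096 count=1 iflag=direct   # direct read probe
pvs 2>&1 | grep -i 'error'      # does LVM itself still hit I/O errors?

# If the LUN was resized or recreated on the target side, the map may be
# stale; rescanning the iSCSI session and reloading multipath may recover it:
iscsiadm -m session --rescan
multipathd -k'reconfigure'
```

If the direct read probe fails while `multipath -ll` shows active paths, the problem is below multipath, on the target or network side.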


On Tue, Apr 4, 2017 at 3:33 PM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> Recently I updated oVirt 3.6.7.5-1 to oVirt 4.1.1.6-1. The main iSCSI
> data domain is OK, and another iSCSI data domain was added successfully.
>
> The problem occurs only with a freshly installed oVirt 4.1.1.6-1.
>
> I attached logs taken after updating oVirt and adding the new iSCSI data
> domains.
>
> 03.04.2017, 17:26, "Николаев Алексей" <alexeynikolaev.p...@yandex.ru>:
>
> I also tested the iSCSI target again with oVirt 3.6.7.5-1 and a CentOS
> 7.2.1511 host. The iSCSI data domain was added successfully. "The truth
> is out there". Logs attached.
>
> 03.04.2017, 11:24, "Николаев Алексей" <alexeynikolaev.p...@yandex.ru>:
>
> Add logs. Thx!
>
>
> 02.04.2017, 16:19, "Liron Aravot" <lara...@redhat.com>:
>
> Hi Alexey,
> can you please attach the engine/vdsm logs?
>
> thanks.
>
> On Fri, Mar 31, 2017 at 11:08 AM, Николаев Алексей <
> alexeynikolaev.p...@yandex.ru> wrote:
>
> Hi, community!
>
> I'm trying to use an iSCSI data domain with oVirt 4.1.1.6-1 and oVirt
> Node 4.1.1, but I get this error while adding the data domain.
>
> Error while executing action New SAN Storage Domain: Cannot deactivate
> Logical Volume
>
>
> 2017-03-31 10:47:45,099+03 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-41)
> [717af17f] Command 'CreateStorageDomainVDSCommand(HostName = node169-07.,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='ca177a14-56ef-4736-a0a9-ab9dc2a8eb90', storageDomain='
> StorageDomainStatic:{name='data1', 
> id='56038842-2fbe-4ada-96f1-c4b4e66fd0b7'}',
> args='sUG5rL-zqvF-DMox-uMK3-Xl91-ZiYw-k27C3B'})' execution failed:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Cannot deactivate Logical Volume: ('General
> Storage Exception: (\'5 [] [\\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637:
> read failed after 0 of 4096 at 0: Input/output error\\\',
> \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967230464: Input/output error\\\',
> \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967287808: Input/output error\\\',
> \\\'  WARNING: Error counts reached a limit of 3. Device
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637 was disabled\\\',
> \\\'  WARNING: Error counts reached a limit of 3. Device
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637 was disabled\\\',
> \\\'  Volume group "56038842-2fbe-4ada-96f1-c4b4e66fd0b7" not found\\\',
> \\\'  Cannot process volume group 56038842-2fbe-4ada-96f1-c4b4e66fd0b7\\\']
> \\n56038842-2fbe-4ada-96f1-c4b4e66fd0b7/[\\\'master\\\']\',)',)
>
>
> Previously, this iSCSI data domain worked well with oVirt 3.6 and a
> CentOS 7.2 host.
> How can I debug this error?
>


Re: [ovirt-users] New SAN Storage Domain: Cannot deactivate Logical Volume

2017-04-02 Thread Liron Aravot
Hi Alexey,
can you please attach the engine/vdsm logs?

thanks.

On Fri, Mar 31, 2017 at 11:08 AM, Николаев Алексей <
alexeynikolaev.p...@yandex.ru> wrote:

> Hi, community!
>
> I'm trying to use an iSCSI data domain with oVirt 4.1.1.6-1 and oVirt
> Node 4.1.1, but I get this error while adding the data domain.
>
> Error while executing action New SAN Storage Domain: Cannot deactivate
> Logical Volume
>
>
> 2017-03-31 10:47:45,099+03 ERROR [org.ovirt.engine.core.
> vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default task-41)
> [717af17f] Command 'CreateStorageDomainVDSCommand(HostName = node169-07.,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='ca177a14-56ef-4736-a0a9-ab9dc2a8eb90', storageDomain='
> StorageDomainStatic:{name='data1', 
> id='56038842-2fbe-4ada-96f1-c4b4e66fd0b7'}',
> args='sUG5rL-zqvF-DMox-uMK3-Xl91-ZiYw-k27C3B'})' execution failed:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Cannot deactivate Logical Volume: ('General
> Storage Exception: (\'5 [] [\\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637:
> read failed after 0 of 4096 at 0: Input/output error\\\',
> \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967230464: Input/output error\\\',
> \\\'  /dev/mapper/36001405b7dd7b800e7049ecbf6830637: read failed after 0 of
> 4096 at 4294967287808: Input/output error\\\',
> \\\'  WARNING: Error counts reached a limit of 3. Device
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637 was disabled\\\',
> \\\'  WARNING: Error counts reached a limit of 3. Device
> /dev/mapper/36001405b7dd7b800e7049ecbf6830637 was disabled\\\',
> \\\'  Volume group "56038842-2fbe-4ada-96f1-c4b4e66fd0b7" not found\\\',
> \\\'  Cannot process volume group 56038842-2fbe-4ada-96f1-c4b4e66fd0b7\\\']
> \\n56038842-2fbe-4ada-96f1-c4b4e66fd0b7/[\\\'master\\\']\',)',)
>
>
> Previously, this iSCSI data domain worked well with oVirt 3.6 and a
> CentOS 7.2 host.
> How can I debug this error?
>


Re: [ovirt-users] VDSM Hooks

2017-03-30 Thread Liron Aravot
On Thu, Mar 30, 2017 at 7:30 PM, John  wrote:

> Hi,
>
> I've been googling and also trawling the oVirt documentation trying to find
> instructions on how to install and use VDSM hooks in general but
> specifically the "vdsm-hook-localdisk" hook. I have a machine that has a
> large amount of free space on SSD which I was hoping to use to store a
> pinned VM which requires fast I/O throughput and doesn't require HA. In
> essence, I'm trying to figure out how to use a hypervisor host as both a
> member of my cluster but also to provision a VM using local storage. If
> anybody could point me in the right direction I would be very grateful.
>
> Thanks


Hi John,
the method you specified should work.

Have you checked the documentation here?
http://www.ovirt.org/develop/developer-guide/vdsm/hooks/
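For the installation itself, a hedged sketch (the package name comes from John's message; the custom-property value below is an assumption based on the hook's README conventions, so verify it against the documentation for your version before relying on it):

```shell
# On the hypervisor host: install the hook (it ships as a vdsm subpackage)
yum install -y vdsm-hook-localdisk
systemctl restart vdsmd

# On the engine: expose a 'localdisk' custom VM property so the hook can
# be enabled per VM. The value regex (lvm|raw backends) is an assumption;
# check the hook README before applying.
engine-config -s "UserDefinedVMProperties=localdisk=^(lvm|raw)$" --cver=4.1
systemctl restart ovirt-engine
```

With the property in place, the VM you want pinned to local SSD storage would get `localdisk` set in its custom properties, and the hook rewrites its disks to the local backend at start time.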


>
> John


Re: [ovirt-users] Gluster and oVirt 4.0 questions

2017-03-30 Thread Liron Aravot
Hi Jim, please see inline

On Thu, Mar 30, 2017 at 4:08 AM, Jim Kusznir  wrote:

> hello:
>
> I've been running my ovirt Version 4.0.5.5-1.el7.centos cluster for a
> while now, and am now revisiting some aspects of it for ensuring that I
> have good reliability.
>
> My cluster is a 3 node cluster, with gluster nodes running on each node.
> After running my cluster a bit, I'm realizing I didn't do a very optimal
> job of allocating the space on my disk to the different gluster mount
> points.  Fortunately, they were created with LVM, so I'm hoping that I can
> resize them without much trouble.
>
> I have a domain for iso, domain for export, and domain for storage, all
> thin provisioned; then a domain for the engine, not thin provisioned.  I'd
> like to expand the storage domain, and possibly shrink the engine domain
> and make that space also available to the main storage domain.  Is it as
> simple as expanding the LVM partition, or are there more steps involved?
> Do I need to take the node offline?
>

I didn't completely understand that part: what is the difference between
the domain for storage and the domain for engine that you mentioned?
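On the resize question in general: growing an LVM-backed gluster brick is an online operation, but shrinking is not symmetric, because XFS (the default filesystem on oVirt Node bricks) can grow but cannot shrink. A minimal sketch, assuming placeholder VG/LV/mount names:

```shell
# Grow a brick online (names are placeholders; for a thin LV the
# thin pool must also have room, or be extended first):
lvextend -L +50G gluster_vg/storage_lv
xfs_growfs /gluster/brick/storage      # grows XFS in place, no downtime

# Shrinking an XFS brick is not supported; reclaiming space from the
# engine brick would mean backing it up, recreating the LV smaller,
# and restoring the data.
```

So expanding the storage domain's brick really is close to "just expand the LV", while giving back space from the engine domain is a rebuild, not a resize.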

>
> second, I've noticed that the first two nodes seem to have a full copy of
> the data (the disks are in use), but the 3rd node appears to not be using
> any of its storage space...It is participating in the gluster cluster,
> though.
>
> Third, currently gluster shares the same network as the VM networks.  I'd
> like to put it on its own network.  I'm not sure how to do this, as when I
> tried to do it at install time, I never got the cluster to come online; I
> had to make them share the same network to make that work.
>

I'm adding Sahina, who may shed some light on the gluster question; I'd
try the gluster mailing list as well.

>
>
> Ovirt questions:
> I've noticed that recently, I don't appear to be getting software updates
> anymore.  I used to get update available notifications on my nodes every
> few days; I haven't seen one for a couple weeks now.  is something wrong?
>
> I have a windows 10 x64 VM.  I get a warning that my VM type does not
> match the installed OS.  All works fine, but I've quadruple-checked that
> it does match.  Is this a known bug?
>

Arik, any info on that?

>
> I have a UPS that all three nodes and the networking are on.  It is a USB
> UPS.  How should I best integrate monitoring in?  I could put a raspberry
> pi up and then run NUT or similar on it, but is there a "better" way with
> oVirt?
>
> Thanks!
> --Jim
>


Re: [ovirt-users] oVirt 3.6 on CentOS 6.7 based HyperConverged DC : Upgradable?

2017-03-29 Thread Liron Aravot
On Wed, Mar 29, 2017 at 4:35 PM, Nicolas Ecarnot 
wrote:

> [Please ignore the previous msg]
>
> Hello,
>
> One of our DC is a very small one, though quite critical.
> It's almost hyper converged : hosts are compute+storage, but the engine is
> standalone.
>
> It's made of :
>
> Hardware :
> - one physical engine : CentOS 6.7
> - 3 physical hosts : CentOS 7.2
>
> Software :
> - oVirt 3.6.5
> - glusterFS 3.7.16 in replica-3, sharded.
>
> The goal is to upgrade all this to oVirt 4.1.1, and also upgrade the OSes.
> (oV 4.x only available on cOS 7.x)
>
> At present, only 3 VMs here are critical, and I have backups for them.
> Though, I'm quite nervous with the path I have to follow and the hazards.
> Especially about the gluster parts.
>
> At first glance, I would go this way (feel free to comment) :
> - upgrade the OS of the engine : 6.7 -> 7.3
> - upgrade the OS of the hosts  : 7.2 -> 7.3
> - upgrade and manage the upgrade of gluster, check the volumes...
> - upgrade oVirt (engine then hosts)
>

Hi Nicolas,
Regarding the gluster update - have you tried asking on the gluster
mailing list? I'm adding Sahina to share her opinion on that flow.
An export domain, as Didi suggested, is indeed the easy way from the oVirt
perspective - but it'll take a long time to copy/restore; a gluster-level
backup should be faster if there's a solution that fits your needs.

>
> But when upgrading the OSes, I guess it will also upgrade the gluster
> layer.
>
> During this upgrade, I have no constraint to keep everything running,
> total shutdown is acceptable.
>
> Does the above procedure seem OK, or am I missing some essential points?
>
> Thank you.
>
> --
> Nicolas ECARNOT


Re: [ovirt-users] How to re-initialize storage?

2017-03-28 Thread Liron Aravot
On Tue, Mar 28, 2017 at 9:34 AM, Idan Shaby  wrote:

> Hi Peter,
>
> You can take your hosts down to maintenance and then right click on your
> data center -> force remove.
> That will remove your storage and vms along with your datacenter.
>
>
> Regards,
> Idan
>

Hi,
I'd start off with backing up your db so you'll have a way to access your
old configuration.
I didn't understand how you moved the VMs to another cluster - can you
elaborate?
Generally speaking, you can take a look at the import storage domain
feature (
http://www.ovirt.org/develop/release-management/features/storage/importstoragedomain/
).

Regards,
Liron

>
> On Mon, Mar 27, 2017 at 9:53 PM, Peter Wood 
> wrote:
>
>> Hi,
>>
>> I inherited a small oVirt cluster with 3 nodes using iSCSI storage.
>>
>> The LUNs used by the oVirt nodes were also accessible by some VMware
>> nodes.
>>
>> By mistake an action was started on VMware to initialize and start using
>> the same LUNs, which it happily did. This rendered the storage no longer
>> usable by oVirt nodes. All VMs went down, their image files no longer
>> accessible, the whole cluster became unusable without access to the shared
>> storage.
>>
>> We have recovered from backups and moved the VMs to another cluster and
>> now I want to get this oVirt cluster back up and running.
>>
>> I'm not an oVirt expert so everything I tried was through the web
>> interface and everything failed because storage is not accessible. I can't
>> remove VMs, I can't remove nodes, I can't remove the storage domain.
>>
>> What is the right approach to destroy the cluster and rebuild it and
>> re-initialize the storage?
>>
>> Even better if I can just remove all VMs from the inventory and somehow
>> tell it to make the storage LUNs usable again?
>>
>> Any help is highly appreciated.
>>
>> Thank you,
>>
>> -- Peter
>>


Re: [ovirt-users] Unable to activate Storage Pool

2017-01-29 Thread Liron Aravot
Hi Phil,
It's not recommended to do that manually - it's error-prone.
I'd suggest restoring connectivity to your engine and activating the host.
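That said, if you do need to identify the master domain by hand, the `role` field of `getStorageDomainInfo` distinguishes it; a hedged sketch built from the vdsClient calls already shown in this thread:

```shell
# Iterate over every storage domain the host can see and print the one
# whose role is Master - its UUID is the msdUUID that connectStoragePool
# is asking for.
for sd in $(vdsClient -s localhost getStorageDomainsList); do
    if vdsClient -s localhost getStorageDomainInfo "$sd" | grep -q 'role = Master'; then
        echo "master domain: $sd"
    fi
done
```

The `role = Master` line format matches the `getStorageDomainInfo` output quoted elsewhere in this archive.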

On Sat, Jan 28, 2017 at 9:01 PM, Phil D  wrote:
> Hello,
>
> My ovirt node has crashed and am unable to reach the engine manager.  I have
> the NFS mounts working okay and am able to execute getStorageDomainInfo
> against them and retrieve the pool id.  The major issue I am having is that
> to be able to use connectStoragePool I seem to need the Master UUID as am
> getting:
>
> vdsClient -s localhost connectStoragePool
> 9b40933b-8198-4b3d-bfc2-6d1e49a97d70 0 0
> Cannot find master domain: 'spUUID=9b40933b-8198-4b3d-bfc2-6d1e49a97d70,
> msdUUID=----'
>
>
> How can I find the correct msdUUID to use ?
>
> Thanks, Phil
>


Re: [ovirt-users] Ovirt 4.0.5 reporting dashboard not working.

2017-01-17 Thread Liron Aravot
When a host is "connected" to a storage pool it monitors the active
pool domains. When the domain monitoring results are reported back
from the host, the first monitoring run may not have finished yet. In that
case the returned status from vdsm is a dummy status and not an actual
result - when the engine gets such a status, it logs it.
We should change that log print to INFO rather than WARN.

On Tue, Dec 20, 2016 at 12:17 PM, Tal Nisan  wrote:
> Liron, can you please check why this "report isn't an actual report"
> message is appearing in the log?
>
> On Mon, Dec 19, 2016 at 10:33 AM, Andrea Ghelardi
>  wrote:
>>
>> Hello Shirly,
>>
>> I restarted dwh service as per your suggestion with no success.
>>
>>
>>
>> Here a sample of DWH log:
>>
>> 2016-12-14
>> 12:31:55|kdcGa9|GCXnuH|PIv06L|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
>> not sample data, oVirt Engine is not updating the statistics. Please check
>> your oVirt Engine status.|9704
>>
>> 2016-12-14
>> 12:32:20|ncjn2O|GCXnuH|PIv06L|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
>> not sample data, oVirt Engine is not updating the statistics. Please check
>> your oVirt Engine status.|9704
>>
>> 2016-12-14
>> 15:52:37|Xx2zzb|GCXnuH|PIv06L|OVIRT_ENGINE_DWH|SampleTimeKeepingJob|Default|5|tWarn|tWarn_1|Can
>> not sample data, oVirt Engine is not updating the statistics. Please check
>> your oVirt Engine status.|9704
>>
>> 2016-12-16 17:39:13|ETL Service Stopped
>>
>> 2016-12-16 17:39:14|ETL Service Started
>>
>> ovirtEngineDbDriverClass|org.postgresql.Driver
>>
>>
>> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>>
>> hoursToKeepDaily|0
>>
>> hoursToKeepHourly|720
>>
>> ovirtEngineDbPassword|**
>>
>> runDeleteTime|3
>>
>>
>> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>>
>> runInterleave|20
>>
>> limitRows|limit 1000
>>
>> ovirtEngineHistoryDbUser|ovirt_engine_history
>>
>> ovirtEngineDbUser|engine
>>
>> deleteIncrement|10
>>
>> timeBetweenErrorEvents|30
>>
>> hoursToKeepSamples|24
>>
>> deleteMultiplier|1000
>>
>> lastErrorSent|2011-07-03 12:46:47.00
>>
>> etlVersion|4.0.5
>>
>> dwhAggregationDebug|false
>>
>> dwhUuid|a18846e6-d188-4170-8549-aeb1c54fa390
>>
>> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>>
>> ovirtEngineHistoryDbPassword|**
>>
>>
>>
>>
>>
>> And here a sample of Engine.log
>>
>>
>>
>> 2016-12-19 04:32:05,939 WARN
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
>> (DefaultQuartzScheduler5) [53791585] Domain
>> '05c01989-18a1-4a45-9ed6-76c6badc728a:intel2-dstore04' report isn't an
>> actual report
>>
>> 2016-12-19 04:32:05,939 WARN
>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
>> (DefaultQuartzScheduler5) [53791585] Domain
>> 'f0d8ea03-17ab-4feb-bbf8-408e0550fc29:boole2-dstore1' report isn't an actual
>> report
>>
>>
>>
>>
>>
>> So apparently DWH is working, but that “report isn't an actual report” is
>> quite obscure to me.
>>
>> I need to clarify that (almost) all these storages are iscsi SAN hosted
>> and have been detached from Ovirt3 installation and IMPORTED into Ovirt4.
>>
>>
>>
>> Could you suggest for next step?
>>
>> Thanks
>>
>> Andrea
>>
>>
>>
>> From: Shirly Radco [mailto:sra...@redhat.com]
>> Sent: Thursday, December 8, 2016 10:22 AM
>> To: Andrea Ghelardi 
>> Cc: users 
>> Subject: Re: [ovirt-users] Ovirt 4.0.5 reporting dashboard not working.
>>
>>
>>
>> Hi Andrea,
>>
>>
>>
>> Please try to restart the dwh service.
>>
>> It is the service that collects the samples data from the engine database
>> to a data warehouse, ovirt_engine_history.
>>
>> run:
>>
>>
>>
>> service ovirt-engine-dwhd restart
>>
>>
>>
>> and send me the log again.
>>
>>
>>
>> If data is still not collected to ovirt_engine_history db I'll need you to
>> open a bug so we can follow on this there.
>>
>>
>> Best regards,
>>
>> Shirly Radco
>>
>> BI Software Engineer
>>
>> Red Hat Israel Ltd.
>>
>> 34 Jerusalem Road
>>
>> Building A, 4th floor
>>
>> Ra'anana, Israel 4350109
>>
>>
>>
>> On Wed, Dec 7, 2016 at 5:43 PM, Andrea Ghelardi
>>  wrote:
>>
>> Hello group,
>>
>> I noticed that my dashboard does not show little colored “cubes” which
>> show storage CPU and RAM usage history status.
>>
>> Attached an image and logs from
>> /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
>>
>> Here’s an extract:
>>
>>
>>
>> Exception in component tJDBCInput_5
>>
>> org.postgresql.util.PSQLException: ERROR: smallint out of range
>>
>> at
>> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
>>
>> at
>> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
>>
>> at
>> 

Re: [ovirt-users] "Deactivating Storage Domain" error

2016-12-26 Thread Liron Aravot
Hi Juan, see the reply inline

On Sun, Dec 25, 2016 at 8:47 PM, Juan Pablo <pablo.localh...@gmail.com> wrote:
> Hi Liron, Merry Christmas!
>
> 1- [root@virt01-int ~]# vdsClient -s 0 getStorageDomainInfo
> e6a85203-32e0-4200-b3d7-83f140203294
> uuid = e6a85203-32e0-4200-b3d7-83f140203294
> vguuid = fDclFZ-KLel-t36P-VAFm-hyle-RxcU-uOc7gj
> state = OK
> version = 3
> role = Master
> type = ISCSI
> class = Data
> pool = ['58405b2d-0094-0380-0260-00a5']
> name = nas01
> 2- Which file do you want me to share?

I meant the status of the domain as it appears in the web UI.

>
> How do I perform tasks a and b that you mention? Is there a guide somewhere?
You need to log in to the engine database; IIRC the default name is "engine".
Please make sure to back up your db before you perform manual operations
(those operations are safe, but just in case).

a. psql -U engine -d engine
b. select job_id, description from job where description like '%Deactivating%';
c. delete from job where job_id='JOB_ID_FROM_PREVIOUS_QUERY';
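Before step (a), the engine database can be backed up with the engine-backup tool rather than a raw dump; a minimal sketch (file names are placeholders):

```shell
# Snapshot only the engine database before any manual surgery
engine-backup --mode=backup --scope=db --file=engine-db.bck --log=engine-db.log

# Then run the inspection query from step (b) non-interactively:
psql -U engine -d engine \
    -c "select job_id, description from job where description like '%Deactivating%';"
```

Only delete the row (step c) once the select returns exactly the stuck job you expect.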

Let me know how it went,
Liron

>
> I appreciate your help,
> best regards,
> JP
>
> 2016-12-25 11:10 GMT-03:00 Liron Aravot <lara...@redhat.com>:
>>
>> On Fri, Dec 23, 2016 at 2:32 PM, Juan Pablo <pablo.localh...@gmail.com>
>> wrote:
>> > So, what do you guys suggest? Isn't there a hint or a way to fix this
>> > issue?
>> >
>> > thanks for your help,
>> > JP
>> >
>>
>> Hi Juan,
>> Having a "running" task under the tasks tab doesn't mean there's a
>> task running on vdsm - there are engine "tasks" and vdsm tasks.
>>
>> 1. what is the current status of the domain listed in the task?
>> 2. do you see any print related to the operation in the engine log
>> file currently? (you can upload it so we'll be able to help you with
>> verifying that).
>>
>> After you verify the operation is indeed not running
>> You can delete that specific job from the jobs table:
>> a. select job_id from job where description like '%...%';
>> b. delete from job where..
>>
>> Let me know if that worked for you,
>> Liron.
>>
>>
>>
>> > 2016-12-20 13:35 GMT-03:00 Juan Pablo <pablo.localh...@gmail.com>:
>> >>
>> >> Andrea, thanks for your reply. I already tried that, but unfortunately
>> >> a restart of the engine VM did not solve the error.
>> >>
>> >> best regards,
>> >> JP
>> >>
>> >> 2016-12-20 13:32 GMT-03:00 Andrea Ghelardi <a.ghela...@iontrading.com>:
>> >>>
>> >>> I remember facing a similar issue during decommissioning of my Ovirt 3
>> >>> environment.
>> >>>
>> >>> Several tasks showed on GUI (and storage operations unavailable) but
>> >>> no
>> >>> tasks on command line.
>> >>>
>> >>> Eventually restarting the engine VM cleaned up the situation and made
>> >>> the
>> >>> storage avail again.
>> >>>
>> >>>
>> >>>
>> >>> Cheers
>> >>>
>> >>> AG
>> >>>
>> >>>
>> >>>
>> >>> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On
>> >>> Behalf
>> >>> Of Elad Ben Aharon
>> >>> Sent: Tuesday, December 20, 2016 5:24 PM
>> >>> To: Juan Pablo <pablo.localh...@gmail.com>
>> >>> Cc: users <Users@ovirt.org>
>> >>> Subject: Re: [ovirt-users] "Deactivating Storage Domain" error
>> >>>
>> >>>
>> >>>
>> >>> Do you have a storage domain in the DC in status 'locked'?
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo
>> >>> <pablo.localh...@gmail.com>
>> >>> wrote:
>> >>>
>> >>> Elad, I did that and running the commands on the SPM  returned :
>> >>>
>> >>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks
>> >>>
>> >>> [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0
>> >>> getConnectedStoragePoolsList`
>> >>> spmId = 1
>> >>> spmStatus = SPM
>> >>> spmLver = 8
>> >>>
>> >>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks
>> >>>
>> >>>

Re: [ovirt-users] "Deactivating Storage Domain" error

2016-12-25 Thread Liron Aravot
On Fri, Dec 23, 2016 at 2:32 PM, Juan Pablo  wrote:
> So, what do you guys suggest? Isn't there a hint or a way to fix this issue?
>
> thanks for your help,
> JP
>

Hi Juan,
Having a "running" task under the tasks tab doesn't mean there's a
task running on vdsm - there are engine "tasks" and vdsm tasks.

1. what is the current status of the domain listed in the task?
2. do you see any print related to the operation in the engine log
file currently? (you can upload it so we'll be able to help you with
verifying that).

After you verify the operation is indeed not running
You can delete that specific job from the jobs table:
a. select job_id from job where description like '%...%';
b. delete from job where..

Let me know if that worked for you,
Liron.



> 2016-12-20 13:35 GMT-03:00 Juan Pablo :
>>
>> Andrea, thanks for your reply. I already tried that, but unfortunately a
>> restart of the engine VM did not solve the error.
>>
>> best regards,
>> JP
>>
>> 2016-12-20 13:32 GMT-03:00 Andrea Ghelardi :
>>>
>>> I remember facing a similar issue during decommissioning of my Ovirt 3
>>> environment.
>>>
>>> Several tasks showed on GUI (and storage operations unavailable) but no
>>> tasks on command line.
>>>
>>> Eventually restarting the engine VM cleaned up the situation and made the
>>> storage avail again.
>>>
>>>
>>>
>>> Cheers
>>>
>>> AG
>>>
>>>
>>>
>>> From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf
>>> Of Elad Ben Aharon
>>> Sent: Tuesday, December 20, 2016 5:24 PM
>>> To: Juan Pablo 
>>> Cc: users 
>>> Subject: Re: [ovirt-users] "Deactivating Storage Domain" error
>>>
>>>
>>>
>>> Do you have a storage domain in the DC in status 'locked'?
>>>
>>>
>>>
>>> On Tue, Dec 20, 2016 at 6:09 PM, Juan Pablo 
>>> wrote:
>>>
>>> Elad, I did that and running the commands on the SPM  returned :
>>>
>>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks
>>>
>>> [root@virt01-int ~]# vdsClient -s 0 getSpmStatus `vdsClient -s 0
>>> getConnectedStoragePoolsList`
>>> spmId = 1
>>> spmStatus = SPM
>>> spmLver = 8
>>>
>>> [root@virt01-int ~]# vdsClient -s 0 getAllTasks
>>>
>>> [root@virt01-int ~]#
>>>
>>> thanks for your time,
>>>
>>> JP
>>>
>>>
>>>
>>> 2016-12-20 12:55 GMT-03:00 Elad Ben Aharon :
>>>
>>> It's OK, but you need to make sure you're checking it on the host which
>>> has the SPM role. So, first check on the host that it is the current SPM
>>> with the following:
>>>
>>> vdsClient -s 0 getSpmStatus `vdsClient -s 0 getConnectedStoragePoolsList`
>>>
>>> Look for the host that has " spmStatus = SPM"
>>>
>>> And after you find the SPM, check for the running tasks as you did
>>> before.
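The SPM check above can also be scripted; a small sketch that parses getSpmStatus-style output (the sample below is canned — on a real host you would capture `vdsClient -s 0 getSpmStatus <pool-id>` instead):

```shell
# Canned sample of `vdsClient -s 0 getSpmStatus <pool>` output; on a real
# host, capture the command's output here instead.
status_output='
    spmId = 1
    spmStatus = SPM
    spmLver = 8
'

# Extract the spmStatus field from the output.
spm_status=$(printf '%s\n' "$status_output" | awk -F'= ' '/spmStatus/ {print $2}')

if [ "$spm_status" = "SPM" ]; then
    echo "this host is the SPM - check tasks with: vdsClient -s 0 getAllTasks"
else
    echo "not the SPM (status: $spm_status)"
fi
```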
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Dec 20, 2016 at 5:37 PM, Juan Pablo 
>>> wrote:
>>>
>>> Hi,
>>>
>>>  thanks for your reply, I did check with vdsClient -s 0 getAllTasks ,
>>> returned nothing. is this still the correct way to check?
>>>
>>> regards,
>>>
>>> JP
>>>
>>>
>>>
>>>
>>>
>>> 2016-12-20 12:32 GMT-03:00 Elad Ben Aharon :
>>>
>>> Hi,
>>>
>>>
>>>
>>> Did you check for running tasks in the SPM host?
>>>
>>>
>>>
>>> On Tue, Dec 20, 2016 at 5:04 PM, Juan Pablo 
>>> wrote:
>>>
>>> Hi,
>>>
>>> first of all , thanks to all the Ovirt/Rhev team for the outstanding
>>> work!
>>>
>>>
>>> we are having a small issue with Ovirt 4.0.5 after testing a full end of
>>> year infrastructure shutdown, everything came back correctly except that we
>>> get a 'Deactivating Storage Domain' under the tasks tab.
>>>
>>> another dc/cluster running 3.6.7 reported no error with same maintenance
>>> procedure.. maybe we did something wrong?
>>>
>>> Would you please be so kind as to point me in the right direction to fix it?
>>> I looked
>>>
>>> vdsClient -s 0 getAllTasks , but returns nothing...
>>>
>>> thanks for your time guys and merry christmas and happy new year if we
>>> dont talk again soon!
>>>
>>> JP
>>>
>>>
>>>
>>> ___
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>
>


Re: [ovirt-users] Remove Export storage domain

2016-09-21 Thread Liron Aravot
On Tue, Sep 20, 2016 at 8:02 PM, Danny Rehelis  wrote:
> Hi,
>
> I've added an NFS Export storage domain and now can't seem to remove it.
> The Remove option is greyed out. Also, am I limited to only one Export domain?
> I can't add another one - no option in the drop-down box other than Data.
>
> 3 hosts, ovirt-engine Version 4.0.3-1.el7.centos
>
What is the status of the domain? Please put it in maintenance and try again.
Thanks,
Liron.
> Thanks,
> D
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Create a Virtual Disk failed

2016-06-27 Thread Liron Aravot
Thanks Dewey,

What exact 3.6 version do you use?
Elad - have you encountered that issue? can you see if it happens on your
env as well?

thanks.

On Sun, Jun 26, 2016 at 6:40 PM, Dewey Du  wrote:

> the content attached above is ui.log, and there is no logs in server.log
> and engine.log.
>
> I'm using oVirt 3.6. Finally I found out I can create the disk from the
> edit page of the vm.
>
> On Sun, Jun 26, 2016 at 8:40 PM, Yaniv Dary  wrote:
>
>> Adding Tal.
>> What version are you using? Can you attach logs?
>>
>> Yaniv Dary
>> Technical Product Manager
>> Red Hat Israel Ltd.
>> 34 Jerusalem Road
>> Building A, 4th floor
>> Ra'anana, Israel 4350109
>>
>> Tel : +972 (9) 7692306
>> 8272306
>> Email: yd...@redhat.com
>> IRC : ydary
>>
>>
>> On Sat, Jun 25, 2016 at 12:26 PM, Dewey Du  wrote:
>>
>>> I use glusterfs as storage. When I tried to create a virtual disk for a
>>> vm, I got the following error in UI logs.
>>>
>>> ERROR
>>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>>> (default task-48) [] Permutation name: B003B5EDCB6FC4308644D5D001F65B4C
>>> ERROR
>>> [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
>>> (default task-48) [] Uncaught exception: :
>>> com.google.gwt.core.client.JavaScriptException: (TypeError)
>>>  __gwt$exception: : a is undefined
>>> at Unknown._kj(Unknown Source)
>>> at Unknown.JUq(Unknown Source)
>>> at Unknown.Fbr(Unknown Source)
>>> at Unknown.Hno(Unknown Source)
>>> at Unknown.t0n(Unknown Source)
>>> at Unknown.w0n(Unknown Source)
>>> at Unknown.q3n(Unknown Source)
>>> at Unknown.t3n(Unknown Source)
>>> at Unknown.S2n(Unknown Source)
>>> at Unknown.V2n(Unknown Source)
>>> at Unknown.bMe(Unknown Source)
>>> at Unknown.V7(Unknown Source)
>>> at Unknown.k8(Unknown Source)
>>> at Unknown.czf/c.onreadystatechange<(Unknown Source)
>>> at Unknown.Ux(Unknown Source)
>>> at Unknown.Yx(Unknown Source)
>>> at Unknown.Xx/<(Unknown Source)
>>> at Unknown.anonymous(Unknown Source)
>>>
>>>


Re: [ovirt-users] looking for some ISV backup software which integrates the backup API

2016-04-10 Thread Liron Aravot
On Wed, Apr 6, 2016 at 1:55 PM, Pavel Gashev  wrote:

> Nathanaël,
>
>
> I can share my backup experience. The current backup/restore api
> integration is not optimal for incremental backups. It's ok to read the
> whole disk when you back it up the first time, but you have to read the whole disk
> each time to find changes for incremental backups. It creates huge disk
> load.
>
> For optimal backups it's necessary to keep a list of changed blocks since
> previous backup.
> See http://wiki.qemu.org/Features/IncrementalBackup
>
> Since it's not implemented in oVirt, the best option is to run a backup
> agent inside the VM. Yes, it's not very convenient, but for now it's necessary.
>
>
>
>
> On 05/04/16 21:32, "users-boun...@ovirt.org on behalf of Nathanaël
> Blanchet"  wrote:
>
> >Hello,
> >
> >We are about to change our backup provider, and I find it is a great
> >chance to choose a full supported ovirt backup solution.
> >I currently use this python script vm-backup-scheduler
> >(https://github.com/wefixit-AT/oVirtBackup) but it is not the workflow
> >officially suggested by the community
> >(
> https://www.ovirt.org/develop/release-management/features/storage/backup-restore-api-integration/
> ).
> >
> >I've been looking for a long time for an ISV that supports such an API, but
> >the only one I found is this one:
> >Acronis Backup Advanced suggested here
> >
> https://access.redhat.com/ecosystem/search/#/ecosystem/Red%20Hat%20Enterprise%20Virtualization?category=Software
> >I ran the trial version, but it doesn't seem to do better than the
> >vm-backup-scheduler script, and it doesn't seem to use the backup API
> >(attach a clone as a disk to an existing vm).
> >Can you suggest some other ISV solutions, if they exist... or
> >share your backup experience?
>

Hi Nathanaël,
In addition to Acronis, you can also check
SEP - Hybrid Backup
Symantec Net Backup
CommVault - Simpana

thanks,
Liron




Re: [ovirt-users] Delete disk references without deleting the disk

2015-11-24 Thread Liron Aravot


- Original Message -
> From: "Johan Kooijman" <m...@johankooijman.com>
> To: "Liron Aravot" <lara...@redhat.com>
> Cc: "Nir Soffer" <nsof...@redhat.com>, "users" <users@ovirt.org>
> Sent: Tuesday, November 24, 2015 6:52:51 PM
> Subject: Re: [ovirt-users] Delete disk references without deleting the disk
> 
> When I deactivate the disk on a VM, I can click remove. It then offers me
> the dialog te remove it with a checkbox "Remove permanently". I don't check
> that, the disk will be deleted from inventory, but won't be deleted from
> storage domain?
> Can't try for myself, it'll kill my storage domain.

Yes, when you do that the disk will remain "floating" (you'll be able to see it 
under the
Disks tab) and you'll be able to attach it to another vm later on.
> 
> On Mon, Nov 23, 2015 at 12:46 PM, Johan Kooijman <m...@johankooijman.com>
> wrote:
> 
> > Ok. Any way to do it without? Because with snapshot deletion I end up with
> > the same issue - I can't remove images form my storage.
> >
> > On Mon, Nov 23, 2015 at 12:18 PM, Liron Aravot <lara...@redhat.com> wrote:
> >
> >>
> >>
> >> - Original Message -
> >> > From: "Johan Kooijman" <m...@johankooijman.com>
> >> > To: "Nir Soffer" <nsof...@redhat.com>
> >> > Cc: "users" <users@ovirt.org>
> >> > Sent: Monday, November 23, 2015 10:10:27 AM
> >> > Subject: Re: [ovirt-users] Delete disk references without deleting the
> >> disk
> >> >
> >> > One weird thing though: when I try to remove the VM itself, it won't
> >> let me
> >> > uncheck the "Remove disks" checkbox.
> >> >
> >>
> >> That is because there are snapshots of the disks; you can remove the
> >> snapshots and then keep your disks. Currently oVirt doesn't support
> >> snapshots for floating disks.
> >> >
> >> > On Sun, Nov 22, 2015 at 9:00 PM, Nir Soffer < nsof...@redhat.com >
> >> wrote:
> >> >
> >> >
> >> > On Sun, Nov 22, 2015 at 6:14 PM, Johan Kooijman <
> >> m...@johankooijman.com >
> >> > wrote:
> >> > > Hi all,
> >> > >
> >> > > I have about 100 old VM's in my cluster. They're powered down, ready
> >> for
> >> > > deletion. What I want to do is delete the VM's including disks without
> >> > > actually deleting the disk images from the storage array itself. Is
> >> that
> >> > > possible?
> >> >
> >> > Select the vm, click "remove", in the confirmation dialog, uncheck the
> >> > "Delete disks"
> >> > checkbox, confirm.
> >> >
> >> > > At the end I want to be able to delete the storage domain (which
> >> > > then should not hold any data, as far as ovirt is concerned).
> >> >
> >> > Ovirt deleted the vms, but is keeping the disks, so the storage domain
> >> > does hold all
> >> > the disks.
> >> >
> >> > >
> >> > > Reason for this: it's a ZFS pool with dedup enabled, deleting the
> >> images
> >> > > one
> >> > > by one will kill the array with 100% iowa for some time.
> >> >
> >> > So what you need is to destroy the storage domain, which will
> >> > remove all the entities
> >> > associated with it but will keep the storage unchanged.
> >> >
> >> > Do this:
> >> > 1. Select the storage tab
> >> > 2. select the domain
> >> > 3. In the data center sub tab, click "maintenance"
> >> > 4. When domain is in maintenance, click "detach"
> >> > 5. Right click the domain and choose "destroy"
> >> >
> >> > This will remove the storage domain from engine database, leaving
> >> > the contents of the domain.
> >> >
> >> > You can now delete the contents using your favorite system tools.
> >> >
> >> > Now, if we want to add support for this in ovirt, how would you delete
> >> > the entire domain in a more efficient way?
> >> >
> >> > Nir
> >> >
> >> >
> >> >
> >> > --
> >> > Met vriendelijke groeten / With kind regards,
> >> > Johan Kooijman
> >> >
> >> >
> >>
> >
> >
> >
> > --
> > Met vriendelijke groeten / With kind regards,
> > Johan Kooijman
> >
> 
> 
> 
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
> 


Re: [ovirt-users] Delete disk references without deleting the disk

2015-11-23 Thread Liron Aravot


- Original Message -
> From: "Johan Kooijman" 
> To: "Nir Soffer" 
> Cc: "users" 
> Sent: Monday, November 23, 2015 10:10:27 AM
> Subject: Re: [ovirt-users] Delete disk references without deleting the disk
> 
> One weird thing though: when I try to remove the VM itself, it won't let me
> uncheck the "Remove disks" checkbox.
> 

That is because there are snapshots of the disks; you can remove the
snapshots and then keep your disks. Currently oVirt doesn't support snapshots
for floating disks.
> 
> On Sun, Nov 22, 2015 at 9:00 PM, Nir Soffer < nsof...@redhat.com > wrote:
> 
> 
> On Sun, Nov 22, 2015 at 6:14 PM, Johan Kooijman < m...@johankooijman.com >
> wrote:
> > Hi all,
> > 
> > I have about 100 old VM's in my cluster. They're powered down, ready for
> > deletion. What I want to do is delete the VM's including disks without
> > actually deleting the disk images from the storage array itself. Is that
> > possible?
> 
> Select the vm, click "remove", in the confirmation dialog, uncheck the
> "Delete disks"
> checkbox, confirm.
> 
> > At the end I want to be able to delete the storage domain (which
> > then should not hold any data, as far as ovirt is concerned).
> 
> Ovirt deleted the vms, but is keeping the disks, so the storage domain
> does hold all
> the disks.
> 
> > 
> > Reason for this: it's a ZFS pool with dedup enabled, deleting the images
> > one
> > by one will kill the array with 100% iowa for some time.
> 
> So what you need is to destroy the storage domain, which will
> remove all the entities
> associated with it but will keep the storage unchanged.
> 
> Do this:
> 1. Select the storage tab
> 2. select the domain
> 3. In the data center sub tab, click "maintenance"
> 4. When domain is in maintenance, click "detach"
> 5. Right click the domain and choose "destroy"
> 
> This will remove the storage domain from engine database, leaving
> the contents of the domain.
> 
> You can now delete the contents using your favorite system tools.
> 
> Now, if we want to add support for this in ovirt, how would you delete
> the entire domain in a more efficient way?
> 
> Nir
> 
> 
> 
> --
> Met vriendelijke groeten / With kind regards,
> Johan Kooijman
> 


Re: [ovirt-users] Can ISO Domain be shared across multiple data centers? Ovirt 3.5, 3.6? Does not appear so

2015-11-01 Thread Liron Aravot


- Original Message -
> From: "Liam Curtis" 
> To: users@ovirt.org
> Sent: Saturday, October 31, 2015 12:58:33 AM
> Subject: [ovirt-users] Can ISO Domain be shared across multiple data centers? 
> Ovirt 3.5, 3.6? Does not appear so
> 
> Hello All,
> 
> Tried setting up NFS ISO domain which works on one host/local datacenter. I
> do not want to have to have multiple copies of iso files on all hosts. If I
> try to import pre-existing ISO domain, I either get an error that storage
> path exists or the dialog just does nothing.
> 
> Does not appear possible. Any tricks or workarounds?

Hi Liam,
You should go to the Storage tab, select the domain and in the "Data Centers" 
subtab attach it to the second Data Center.
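If you'd rather script the attach, the same operation is exposed through the REST API; a dry-run sketch that only prints the call — the endpoint is modeled on the oVirt 3.x API, so verify it against your engine's /api documentation, and note that the engine URL, credentials and both UUIDs are placeholders:

```shell
# Dry run: print the REST call instead of executing it (no live engine here).
# ENGINE_FQDN, the credentials and both UUIDs are placeholders.
cmd='curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
  -d "<storage_domain id=\"ISO_DOMAIN_UUID\"/>" \
  https://ENGINE_FQDN/api/datacenters/DC_UUID/storagedomains'
printf '%s\n' "$cmd"
```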

let me know if that works for you.

thanks,
Liron.

> 


Re: [ovirt-users] Very old tasks hanging

2015-10-28 Thread Liron Aravot


- Original Message -
> From: "Soeren Malchow" 
> To: users@ovirt.org
> Sent: Thursday, October 15, 2015 2:36:50 AM
> Subject: [ovirt-users] Very old tasks hanging
> 
> Dear all,
> 
> I have tasks hanging for a very long time, there are 2 different
> possibilities
> 
> 
> 
> 1. The task has nothing to work on anymore, e.g. I have one task that is
> weeks old that was deleting a cloned VM
> 2. I have a cloned VM and that was exported and the task is still running
> after 5 days - it actually isn’t running, but the VM still has the
> hourglass in the front end and I can do nothing with it; I even rebooted
> all hosts in that cluster already and upgraded the engine and the hosts
> in the cluster from 3.5.3 to 3.5.4
> 
> Any idea where to even start looking ?
> 
> Regards
> Soeren
> 


Hi Soeren,
can you start by attaching the engine/vdsm logs (from the host that has the
SPM role)?

also, please provide the output of the following -
1. query of the engine db - select * from async_tasks;

2. run the following command on the spm host:
vdsClient -s 0 getAllTasksStatuses 
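The two items requested above can be gathered with one small dry-run script — it only prints the commands, since the psql invocation and database name vary by install and are assumptions here:

```shell
# Print the diagnostic commands instead of executing them; neither the
# engine database nor vdsClient exist in this sketch.
cmds=$(cat <<'EOF'
# 1. on the engine host (DB name/user are install-dependent):
psql -U engine engine -c 'select * from async_tasks;'
# 2. on the SPM host:
vdsClient -s 0 getAllTasksStatuses
EOF
)
printf '%s\n' "$cmds"
```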


thanks,
Liron.



Re: [ovirt-users] Backup / Export Storage

2015-10-28 Thread Liron Aravot


- Original Message -
> From: "Soeren Malchow" 
> To: users@ovirt.org
> Sent: Monday, October 19, 2015 3:26:29 PM
> Subject: [ovirt-users] Backup / Export Storage
> 
> Dear all,
> 
> We have a large number of VMs in the backup/export storage - basically every
> one of the approx. 50 machines in approx. 20-30 generations. The listing (in
> the VM Import tab of the storage) was always very slow, but now I get an
> error - a small popup with a 502.
> 
> I cannot even delete VMs to make the storage smaller again.
> 
> Regarding this i have a few questions
> 
> 
> 1. How do I reduce the number of VMs so that I can access the VM Import tab
> again? Can I just delete them on the storage level?
> 2. Is there anything I can do to speed up the listing?
> 3. Does anyone have an idea how to export the exported (yes :-) ) VMs to
> another storage in a way that they can be accessed and reimported if
> necessary? As far as I know the machines are already in OVF format; I
> would only need to get the name for each machine and write it somewhere
> else. Suggestions?
> Cheers
> Soeren

Hi Soeren,
1. Please attach the vdsm logs so I can check whether there's an abnormal cause
for the slowness and whether an improvement can be made.
2. In the meanwhile, I'd suggest increasing the config value vdsTimeout to a
value larger than 180 (the current maximum time for a vds operation) rather
than deleting VMs directly from the storage, as that is error prone.
3. You can create a new export domain and then copy the OVFs and disks to it;
please check that you copied everything you need and that the import works
before deleting the source.


Thanks,
Liron.
 
> 


Re: [ovirt-users] Rebooting gluster nodes make VMs pause due to storage error

2015-10-27 Thread Liron Aravot


- Original Message -
> From: nico...@devels.es
> To: users@ovirt.org
> Sent: Tuesday, October 27, 2015 4:59:31 PM
> Subject: [ovirt-users] Rebooting gluster nodes make VMs pause due to storage  
> error
> 
> Hi,
> 
> We're using ovirt 3.5.3.1, and as storage backend we use GlusterFS. We
> added a Storage Domain with the path "gluster.fqdn1:/volume", and as
> options, we used "backup-volfile-servers=gluster.fqdn2". We now need to
> restart both gluster.fqdn1 and gluster.fqdn2 machines due to system
> update (not at the same time, obviously). We're worried because in
> previous attempts, when restarted the main gluster node (gluster.fqdn1
> in this case), all the VMs running against that storage backend got
> paused due to storage errors, and we couldn't resume them and finally
> had to power them off the hard way and start them again.
> 
> Gluster version on gluster.fqdn1 and gluster.fqdn2 is 3.6.3-1.
> 
> Gluster configuration for that volume is:
> 
> Volume Name: volume
> Type: Replicate
> Volume ID: a2d7e52c-2f63-4e72-9635-4e311baae6ff
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster.fqdn1:/gluster/brick_01/brick
> Brick2: gluster.fqdn2:/gluster/brick_01/brick
> Options Reconfigured:
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: none
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> 
> We would like to know if there's a "clean" way to do such a procedure.
> We know that pausing all the VMs and then restarting the gluster nodes
> work with no harm, but the downtime of the VMs is important to us and we
> would like to avoid it, especially when we have 2 gluster nodes for
> that.
> 
> Any hints are appreciated,
> 
> Thanks.

Hi Nicolas,
I'd suggest asking on the gluster mailing list; unless there is some
misconfiguration or a problematic version of one of the components in use, the
scenario you specified should work (assuming you wait for the self-heal
process to finish after the server comes back up). I'd also advise using a
3-node cluster.
Regardless of your question, I suggest you take a look at the gluster
virt-store use case page - 
http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
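A dry-run sketch of growing the replica-2 volume shown above into the suggested 3-node setup — the third hostname and brick path are placeholders, and you should check the exact add-brick syntax and heal behaviour against your gluster version before running anything:

```shell
# Dry run: print the command instead of executing it (no gluster here).
# gluster.fqdn3 and its brick path are hypothetical.
cmd='gluster volume add-brick volume replica 3 gluster.fqdn3:/gluster/brick_01/brick'
printf '%s\n' "$cmd"
```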


Re: [ovirt-users] Move DataCenter or Cluster from one engine to another

2015-08-25 Thread Liron Aravot


- Original Message -
 From: Yaniv Dary yd...@redhat.com
 To: Chris Liebman chri...@taboola.com
 Cc: users users@ovirt.org
 Sent: Tuesday, August 25, 2015 1:47:53 PM
 Subject: Re: [ovirt-users] Move DataCenter or Cluster from one engine to  
 another
 
 Yes, using import storage domain:
 http://www.ovirt.org/Features/ImportStorageDomain
 
 Yaniv Dary
 Technical Product Manager
 Red Hat Israel Ltd.
 34 Jerusalem Road
 Building A, 4th floor
 Ra'anana, Israel 4350109
 
 Tel : +972 (9) 7692306
 8272306
 Email: yd...@redhat.com IRC : ydary
 
 On Mon, Aug 24, 2015 at 8:56 PM, Chris Liebman  chri...@taboola.com  wrote:
 
 
 
 Is it possible to export a data center from one engine and import it to
 another? Currently I have an engine running in Europe and a set of nodes
 comprising a datacenter on the west coast of the US and am seeing
 communication issues. There are a number of other data centers that the
 engine in Europe is managing and I'd like to deploy an engine in closer
 proximity to the nodes for one data center. Has anyone one this? Is it
 possible?
 -- Chris

Hi Chris,
what is your (source) Data Center compatibility version?


thanks,
Liron
 
 


Re: [ovirt-users] problem with iSCSI/multipath.

2015-08-17 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: users@ovirt.org Users@ovirt.org
 Sent: Monday, July 27, 2015 1:19:23 PM
 Subject: [ovirt-users] problem with iSCSI/multipath.
 
 Hi all.
 We have an oVirt cluster in production happily running from the
 beginning of 2014.
 It started as 3.3 beta and now is Version 3.4.4-1.el6 .
 
 Shared storage provided by an HP P2000 G3 iSCSI MSA.
 The storage server is fully redundant (2 controllers, dual port disks,
 4 iscsi connections per controller) and so is the connectivity (two
 switches, multiple ethernet cards per server).
 
 From now on lets only talk about iSCSI connectivity.
 The two oldest server have 2 nics each; they have been configured by
 hand setting routes aimed to reach every scsi target from every nic.
 On the new server we installed ovirt 3.5 to have a look at the
 network configuration provided by oVirt.
 In Data Center - iSCSI Multipathing we defined an iSCSI Bond binding
 together 3 server's nics and the 8 nics of the MSA.
 The result is a system that has been functioning for months.
 
 Recently we had to do an upgrade of the storage firmware.
 This activity uploads the firmware to one of the MSA controllers then
 reboots it. Being successful this is repeated on the other controller.
 There is an impact on the I/O performance but there should be no
 problems as every volume on the MSA remains visible on other paths.
 
 Well, that's the theory.
 On the two hand configured hosts we had no significant problems.
 On the 3.5 host, VMs started to migrate due to storage problems; then
 the situation got worse and it took more than an hour to bring the
 system back to a good operating level.
 
 I am inclined to believe that the culprit is the server's routing
 table. It seems to me that the oVirt-generated one is too simplistic and
 prone to problems in case of connectivity loss (as in our situation, or
 when you have to reboot one of the switches).
 
 Anyone on this list with strong experience on similar setup?
 
 I have included below some background information.
 I'm available to provide anything useful to further investigate the case.
 
 TIA,
 Giorgio.

Hi Giorgio,
There were some issues related to iSCSI multipathing that were already solved
in versions later than 3.4, AFAIK.
I'm adding Sergey and Maor (the feature owners) to respond on whether related
fixes were made.

thanks,
Liron.
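For reference, the hand-configured setup described above (one routing table per iSCSI NIC) boils down to commands like the following; a dry-run sketch with placeholders taken from the context dump below — it only prints the commands, since applying them needs root and the real interfaces:

```shell
# One routing table per iSCSI NIC: routes in a numbered table plus a rule
# selecting that table by source address. All values are placeholders.
IFACE=eth3; TABLE=3; SRC=192.168.126.64; NET=192.168.126.0/24

cmds=$(cat <<EOF
ip route add ${NET} dev ${IFACE} table ${TABLE} src ${SRC}
ip rule add from ${SRC} table ${TABLE}
EOF
)
printf '%s\n' "$cmds"
```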
 
 
 ---
 context information
 ---
 
 oVirt Compatibility Version: 3.4
 
 two FUJITSU PRIMERGY RX300 S5 hosts
 CPU:  Intel(R) Xeon(R) E5504 @ 2.00GHz  / Intel Nehalem Family
 OS Version: RHEL - 6 - 6.el6.centos.12.2
 Kernel Version: 2.6.32 - 504.16.2.el6.x86_64
 KVM Version: 0.12.1.2 - 2.448.el6_6.2
 LIBVIRT Version: libvirt-0.10.2-46.el6_6.6
 VDSM Version: vdsm-4.14.17-0.el6
 RAM: 40GB
 mom-0.4.3-1.el6.noarch.rpm
 ovirt-release34-1.0.3-1.noarch.rpm
 qemu-img-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm
 qemu-kvm-rhev-0.12.1.2-2.448.el6_6.2.x86_64.rpm
 qemu-kvm-rhev-tools-0.12.1.2-2.448.el6_6.2.x86_64.rpm
 vdsm-4.14.17-0.el6.x86_64.rpm
 vdsm-cli-4.14.17-0.el6.noarch.rpm
 vdsm-hook-hostusb-4.14.17-0.el6.noarch.rpm
 vdsm-hook-macspoof-4.14.17-0.el6.noarch.rpm
 vdsm-python-4.14.17-0.el6.x86_64.rpm
 vdsm-python-zombiereaper-4.14.17-0.el6.noarch.rpm
 vdsm-xmlrpc-4.14.17-0.el6.noarch.rpm
 
 # ip route list table all |grep 192.168.126.
 192.168.126.87 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.86 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.81 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.80 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.77 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.0/24 dev eth4  table 4  proto kernel  scope link  src
 192.168.126.65
 192.168.126.0/24 dev eth3  proto kernel  scope link  src 192.168.126.64
 192.168.126.0/24 dev eth4  proto kernel  scope link  src 192.168.126.65
 192.168.126.85 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 192.168.126.84 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 192.168.126.83 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 192.168.126.82 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 192.168.126.76 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 192.168.126.0/24 dev eth3  table 3  proto kernel  scope link  src
 192.168.126.64
 broadcast 192.168.126.0 dev eth3  table local  proto kernel  scope
 link  src 192.168.126.64
 broadcast 192.168.126.0 dev eth4  table local  proto kernel  scope
 link  src 192.168.126.65
 local 192.168.126.65 dev eth4  table local  proto kernel  scope host
 src 192.168.126.65
 local 192.168.126.64 dev eth3  table local  proto kernel  scope host
 src 192.168.126.64
 broadcast 192.168.126.255 dev eth3  table local  proto kernel  scope
 link  src 192.168.126.64
 broadcast 192.168.126.255 

Re: [ovirt-users] Concerns with increasing vdsTimeout value on engine?

2015-07-12 Thread Liron Aravot


- Original Message -
 From: Ryan Groten ryan.gro...@stantec.com
 To: users@ovirt.org
 Sent: Friday, July 10, 2015 10:45:11 PM
 Subject: [ovirt-users] Concerns with increasing vdsTimeout value on engine?
 
 
 
 When I try to attach new direct lun disks, the scan takes a very long time to
 complete because of the number of pvs presented to my hosts (there is
 already a bug on this, related to the pvcreate command taking a very long
 time - https://bugzilla.redhat.com/show_bug.cgi?id=1217401 )
 
 
 
 I discovered a workaround by setting the vdsTimeout value higher (it is 180
 seconds by default). I changed it to 300 seconds and now the direct lun scan
 returns properly, but I’m hoping someone can warn me if this workaround is
 safe or if it’ll cause other potential issues? I made this change yesterday
 and so far so good.
 

Hi, no serious issue can be caused by that.
Keep in mind, though, that every other operation will now have that amount of
time to complete before failing on timeout - which will cause longer delays
before failing (since the timeout was increased for all executions) whenever
not everything is operational and up as expected.
I'd guess that an RFE could be opened to allow increasing the timeout of
specific operations if a user wants to do that.
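For reference, the change described in this thread is made with engine-config on the engine host; a dry-run sketch that only prints the commands — the vdsTimeout name and the 300-second value come from the thread itself, while the restart step is an assumption:

```shell
# Print the commands rather than executing them; no engine is installed here.
cmds=$(cat <<'EOF'
engine-config -g vdsTimeout        # show the current value (default 180)
engine-config -s vdsTimeout=300    # raise it to 300 seconds
service ovirt-engine restart       # apply the change (assumption)
EOF
)
printf '%s\n' "$cmds"
```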

thanks,
Liron.
 
 
 Thanks,
 
 Ryan
 


Re: [ovirt-users] multiple OVF_STORE disks?

2015-05-11 Thread Liron Aravot


- Original Message -
 From: Daniel Helgenberger daniel.helgenber...@m-box.de
 To: Rik Theys rik.th...@esat.kuleuven.be
 Cc: users@ovirt.org
 Sent: Monday, May 11, 2015 5:57:28 PM
 Subject: Re: [ovirt-users] multiple OVF_STORE disks?
 
 On Mo, 2015-05-11 at 15:53 +0200, Rik Theys wrote:
  Hi,
  
  I'm still on 3.5.2 and noticed that I have two OVF_STORE disks. Is this
  normal, or am I hitting the bug described in the recent announcement?
 I think there is one OVF_STORE disk per storage domain; counting any
 type...

This is normal; by default there are 2 OVF_STORE disks per domain. If updating
the first one fails, the second is kept as a backup.

The number of OVF_STORE disks per domain can be changed by modifying the
config value 'StorageDomainOvfStoreCount'.

Thanks,
Liron.
 
  
  This might have been caused by one of my tests that made my storage
  domain unavailable. When I activated the storage domain again I noticed
  that a lot of VM's I created as part of a pool suddenly reappeared  and
  I had to delete them again.
  
  Now that I have two OVF_STORE disks, how do I know which one I can remove?
  
  When I list the virtual machines tab on the disks, they both show an
  empty list.
  
  Regards,
  
  Rik
  
  On 05/11/2015 03:28 PM, Sandro Bonazzola wrote:
   [1] https://bugzilla.redhat.com/1214408 - Importing storage domains into
   an uninitialized datacenter leads to
   duplicate OVF_STORE disks being created, and can cause catastrophic loss
   of VM configuration data
  
  
 
 --
 Daniel Helgenberger
 m box bewegtbild GmbH
 
 P: +49/30/2408781-22
 F: +49/30/2408781-10
 
 ACKERSTR. 19
 D-10115 BERLIN
 
 
 www.m-box.de  www.monkeymen.tv
 
 Geschäftsführer: Martin Retschitzegger / Michaela Göllner
 Handeslregister: Amtsgericht Charlottenburg / HRB 112767


[ovirt-users] VDSM tests not running on RHEL 7 (on make rpm)

2015-03-12 Thread Liron Aravot
Hi, I haven't investigated this more deeply yet -
I fetched the latest master code on a RHEL 7 machine; when I run 'make rpm' it
does work, but the tests aren't run.
On an F20 machine everything seems to work fine.

Any idea what's going on?


Re: [ovirt-users] Sync Error on Master Domain after adding a second one

2015-03-01 Thread Liron Aravot
Hi Stefano,
thanks for the great input!

I went over the logs (does the screencast use the same domains? I don't have
the logs from that run) - the master domain deactivation (and the migration of
the master role to the new domain) fails with an error while copying the
master fs content to the new domain with tar (see the error in [1]).

1. Is there any chance of inconsistent storage access to any of the domains?
2. Does the issue reproduce every time, or only in some of the runs?
3. Have you tried running an operation that creates a task? The creation of a
disk, for example.

thanks,
Liron.



[1]:
Thread-9875::DEBUG::2015-02-25 
15:06:57,969::clusterlock::349::Storage.SANLock::(release) Cluster lock for 
domain 08298f60-4919-4f86-9233-827c1089779a success
fully released
Thread-9875::ERROR::2015-02-25 
15:06:57,969::task::866::Storage.TaskManager.Task::(_setError) 
Task=`2a434209-3e96-4d1e-8d1b-8c7463889f6a`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1246, in deactivateStorageDomain
    pool.deactivateSD(sdUUID, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1097, in deactivateSD
    self.masterMigrate(sdUUID, newMsdUUID, masterVersion)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 816, in masterMigrate
    exclude=('./lost+found',))
  File "/usr/share/vdsm/storage/fileUtils.py", line 68, in tarCopy
    raise TarCopyFailed(tsrc.returncode, tdst.returncode, out, err)
TarCopyFailed: (1, 0, '', '')
Thread-9875::DEBUG::2015-02-25 
15:06:57,969::task::885::Storage.TaskManager.Task::(_run) 
Task=`2a434209-3e96-4d1e-8d1b-8c7463889f6a`::Task._run: 2a434209-3e96
-4d1e-8d1b-8c7463889f6a ('62a034ca-63df-44f2-9a87-735ddd257a6b', 
'0002-0002-0002-0002-022f', '08298f60-4919-4f86-9233-827c1089779a', 
34) {} failed
 - stopping task

- Original Message -
 From: Stefano Stagnaro stefa...@prisma-eng.com
 To: Vered Volansky ve...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, February 27, 2015 4:54:31 PM
 Subject: Re: [ovirt-users] Sync Error on Master Domain after adding a second 
 one
 
 I think I finally managed to replicate the problem:
 
 1. deploy a datacenter with a virt only cluster and a gluster only cluster
 2. create a first GlusterFS Storage Domain (e.g. DATA) and activate it
 (should become Master)
 3. create a second GlusterFS Storage Domain (e.g. DATA_NEW) and activate it
 4. put DATA in maintenance
 
 Both Storage Domains flows between the following states:
 https://www.dropbox.com/s/x542q1epf40ar5p/Screencast%20from%2027-02-2015%2015%3A09%3A29.webm?dl=1
 
 Webadmin Events shows: Sync Error on Master Domain between Host v10 and
 oVirt Engine. Domain: DATA is marked as Master in oVirt Engine database but
 not on the Storage side. Please consult with Support on how to fix this
 issue.
 
 It seems DATA can be deactivated at the second attempt.
 
 --
 Stefano Stagnaro
 
 Prisma Engineering S.r.l.
 Via Petrocchi, 4
 20127 Milano – Italy
 
 Tel. 02 26113507 int 339
 e-mail: stefa...@prisma-eng.com
 skype: stefano.stagnaro
 
 On mer, 2015-02-25 at 15:41 +0100, Stefano Stagnaro wrote:
  This is what I've done basically:
  
  1. added a new data domain (DATA_R3);
  2. activated the new data domain - both domains in active state;
  3. moved Disks from DATA to DATA_R3;
  4. tried to put the old data domain in maintenance (from webadmin or
  shell);
  5. both domains became inactive;
  6. DATA_R3 came back in active;
  7. DATA domain went in being initialized;
  8. Webadmin shows the error Sync Error on Master Domain between...;
  9. DATA domain completed the reconstruction and came back in active.
  
  Please find engine and vdsm logs here:
  https://www.dropbox.com/sh/uuwwo8sxcg4ffqp/AAAx6UrwI3jbsN4oraJuDx9Fa?dl=0
  
 
 
 


Re: [ovirt-users] Backup solution using the API

2015-02-16 Thread Liron Aravot


- Original Message -
 From: Soeren Malchow soeren.malc...@mcon.net
 To: Adam Litke ali...@redhat.com, Nir Soffer nsof...@redhat.com
 Cc: Thomas Keppler (PEBA) thomas.kepp...@kit.edu, users@ovirt.org
 Sent: Wednesday, February 11, 2015 7:30:48 PM
 Subject: Re: [ovirt-users] Backup solution using the API
 
 Dear all,
 
 i am a little lost, i tried quite a few things with the snapshots, so far
 with python scripts I can iterate through the existing machines, take one,
 make a snapshot and all this.
 
 However, there are 2 problems I can not get around:
 
 1. even when on 3.5.1 I can not delete a snapshot on a running VM, if I
 understood that correctly this relies on the Live Merge Feature where the
 code is available in vdsm already but it needs a certain libvirt version
 !?!?
 So question here is, can I delete a snapshot or not ? can I use only the rest
 API not python  (excuse me I am not a developer)
 
 2. when I attach a snapshot to another virtual machine, how do I do the
 backup then ? Does anybody have this already ?
Hi Soeren,
you can find a detailed example of the backup/restore flows here -
http://www.ovirt.org/Features/Backup-Restore_API_Integration

thanks,
laravot.
 
 The environment is running on CentOS 7 (hypervisors), Centos 6 (hosted
 engine), the ovirt is on version 3.5.1, also we use gluster as a storage
 backend where the gluster servers are managed within the hosted engine in a
 separate cluster exporting the storage only.
 
 Regards
 Soeren
 
 
 -Original Message-
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
 Adam Litke
 Sent: Tuesday, January 6, 2015 5:04 PM
 To: Nir Soffer
 Cc: Thomas Keppler (PEBA); users@ovirt.org
 Subject: Re: [ovirt-users] Backup solution using the API
 
 On 15/12/14 04:31 -0500, Nir Soffer wrote:
 - Original Message -
  From: Blaster blas...@556nato.com
  To: Thomas Keppler (PEBA) thomas.kepp...@kit.edu, users@ovirt.org
  Sent: Wednesday, December 10, 2014 8:06:58 PM
  Subject: Re: [ovirt-users] Backup solution using the API
 
  On 11/27/2014 9:12 AM, Keppler, Thomas (PEBA) wrote:
 
 
 
  Now, before I go into any more hassle, has somebody else of you done
  a live-backup solution for oVirt? Are there any recommendations?
  Thanks for any help provided!
 
 
  I've been looking for a similar scheme for the last year. It was not
  (really) possible in the past as there wasn't any way to destroy a
  snapshot w/o shutting down the VM. Is this still the case, or are
  snap shots fully implemented now?
 
  Basically, I'd like to:
  Tell VM to flush it's buffers
  suspend VM
  take snap shot of boot virtual disk
  resume VM
 
 You don't need to suspend the vm, qemu can create a live snapshot. When
 the snapshot is done, you don't care about future io - it will simply
 not be included in the backup.
 
  backup the virtual boot disk from the Hypervisor using standard
  commands (tar, cp, whatever)
 
 You can, by attaching the snapshot to another vm
 
  destroy the snapshot
 
 You can in oVirt 3.5.1 - we no longer depend on future libvirt
 features.
 Adam, can you confirm?
 
 Yes, on 3.5.1 you'll be able to destroy the snapshot without impacting the
 running VM.
 
 
  This would at least give some BMR capabilities of your VMs.
 
  Ideally, I'd also like to be able to create a snapshot from within
  the VM, do
 
 You can do this within the vm by using the engine REST API. But this
 can be fragile - for example, if the vm pauses, your backup tool within
 the vm will never complete :-)
 
  a yum update, see if I like it or not, if I do, then destroy the snap
  shot.
 
 Possible in 3.5.1 using REST API
 
  If I don't, I want to promote the snapshot and boot from that, then
  destroy the original.
 
 Same
 
 Nir
 
 --
 Adam Litke


Re: [ovirt-users] Backup and Restore of VMs

2014-12-28 Thread Liron Aravot
Hi All,
I've uploaded an example script (oVirt python-sdk) that contains examples of
the steps described on
http://www.ovirt.org/Features/Backup-Restore_API_Integration

let me know how it works out for you -
https://github.com/laravot/backuprestoreapi

 

- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Soeren Malchow soeren.malc...@mcon.net
 Cc: Vered Volansky ve...@redhat.com, Users@ovirt.org
 Sent: Wednesday, December 24, 2014 12:20:36 PM
 Subject: Re: [ovirt-users] Backup and Restore of VMs
 
 Hi guys,
 I'm currently working on a complete example of the steps that appear in -
 http://www.ovirt.org/Features/Backup-Restore_API_Integration
 
 will share with you as soon as i'm done with it.
 
 thanks,
 Liron
 
 - Original Message -
  From: Soeren Malchow soeren.malc...@mcon.net
  To: Vered Volansky ve...@redhat.com
  Cc: Users@ovirt.org
  Sent: Wednesday, December 24, 2014 11:58:01 AM
  Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  Dear Vered,
  
  at some point we have to start, and right now we are getting closer, even
  with the documentation it is sometimes hard to find the correct place to
  start, especially without specific examples (and I have decades of
  experience now)
  
  with the backup plugin that came from Lucas Vandroux we have a starting
  point
  right now, and we will continue from here and try to work with him on this.
  
  Regards
  Soeren
  
  
  -Original Message-
  From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
  Blaster
  Sent: Tuesday, December 23, 2014 5:49 PM
  To: Vered Volansky
  Cc: Users@ovirt.org
  Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  Sounds like a Chicken/Egg problem.
  
  
  
  On 12/23/2014 12:03 AM, Vered Volansky wrote:
   Well, real world is community...
   Maybe change the name of the thread in order to make this more clear for
    someone from the community who might be able to help.
   Maybe something like:
   Request for sharing real world example of VM backups.
  
    We obviously use it as part of developing, but I don't have what you're
   asking for.
   If you try it yourself and stumble onto questions in the process, please
   ask the list and we'll do our best to help.
  
   Best Regards,
   Vered
  
   - Original Message -
   From: Blaster blas...@556nato.com
   To: Vered Volansky ve...@redhat.com
   Cc: Users@ovirt.org
   Sent: Tuesday, December 23, 2014 5:56:13 AM
   Subject: Re: [ovirt-users] Backup and Restore of VMs
  
  
   Vered,
  
    It sounds like Soeren already knows about that page. His issue, as well
    as that of others judging by comments on here, seems to be that there
    aren't any real-world examples of how the API is used.
  
  
  
   On Dec 22, 2014, at 9:26 AM, Vered Volansky ve...@redhat.com wrote:
  
   Please take a look at:
   http://www.ovirt.org/Features/Backup-Restore_API_Integration
  
   Specifically:
   http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM
   _Backups
  
   Regards,
   Vered
  
   - Original Message -
   From: Soeren Malchow soeren.malc...@mcon.net
   To: Users@ovirt.org
   Sent: Friday, December 19, 2014 1:44:38 PM
   Subject: [ovirt-users] Backup and Restore of VMs
  
  
  
   Dear all,
  
  
  
   ovirt: 3.5
  
   gluster: 3.6.1
  
   OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
  
  
  
   i spent quite a while researching backup and restore for VMs right
   now, so far I have come up with this as a start for us
  
  
  
    - API calls to create scheduled snapshots of virtual machines. This
    is for short-term storage and to guard against accidental deletion
    within the VM, but not against storage corruption
  
  
  
    - Since we are using a gluster backend, gluster snapshots. I wasn't
    able to really test this so far since the LV needs to be
    thin-provisioned and we did not do that in the setup
  
  
  
   For the API calls we have the problem that we can not find any
   existing scripts or something like that to do those snapshots (and
   i/we are not developers enough to do that).
  
  
  
   As an additional information, we have a ZFS based storage with
   deduplication that we use for other backup purposes which does a
    great job especially because of the deduplication (we can store
   generations of backups without problems), this storage can be NFS
   exported and used as backup repository.
  
  
  
    Are there any backup and restore procedures you guys are using that
    work for you, and can you point me in the right direction?

    I am a little bit lost right now and would appreciate any help.
  
  
  
   Regards
  
   Soeren
  

Re: [ovirt-users] Backup and Restore of VMs

2014-12-24 Thread Liron Aravot
Hi guys,
I'm currently working on a complete example of the steps that appear in -
http://www.ovirt.org/Features/Backup-Restore_API_Integration

I will share it with you as soon as I'm done.

thanks,
Liron

- Original Message -
 From: Soeren Malchow soeren.malc...@mcon.net
 To: Vered Volansky ve...@redhat.com
 Cc: Users@ovirt.org
 Sent: Wednesday, December 24, 2014 11:58:01 AM
 Subject: Re: [ovirt-users] Backup and Restore of VMs
 
 Dear Vered,
 
 at some point we have to start, and right now we are getting closer, even
 with the documentation it is sometimes hard to find the correct place to
 start, especially without specific examples (and I have decades of
 experience now)
 
 with the backup plugin that came from Lucas Vandroux we have a starting point
 right now, and we will continue from here and try to work with him on this.
 
 Regards
 Soeren
 
 
 -Original Message-
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
 Blaster
 Sent: Tuesday, December 23, 2014 5:49 PM
 To: Vered Volansky
 Cc: Users@ovirt.org
 Subject: Re: [ovirt-users] Backup and Restore of VMs
 
 Sounds like a Chicken/Egg problem.
 
 
 
 On 12/23/2014 12:03 AM, Vered Volansky wrote:
  Well, real world is community...
  Maybe change the name of the thread in order to make this more clear for
   someone from the community who might be able to help.
  Maybe something like:
  Request for sharing real world example of VM backups.
 
   We obviously use it as part of developing, but I don't have what you're
  asking for.
  If you try it yourself and stumble onto questions in the process, please
  ask the list and we'll do our best to help.
 
  Best Regards,
  Vered
 
  - Original Message -
  From: Blaster blas...@556nato.com
  To: Vered Volansky ve...@redhat.com
  Cc: Users@ovirt.org
  Sent: Tuesday, December 23, 2014 5:56:13 AM
  Subject: Re: [ovirt-users] Backup and Restore of VMs
 
 
  Vered,
 
   It sounds like Soeren already knows about that page. His issue, as well
   as that of others judging by comments on here, seems to be that there
   aren't any real-world examples of how the API is used.
 
 
 
  On Dec 22, 2014, at 9:26 AM, Vered Volansky ve...@redhat.com wrote:
 
  Please take a look at:
  http://www.ovirt.org/Features/Backup-Restore_API_Integration
 
  Specifically:
  http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM
  _Backups
 
  Regards,
  Vered
 
  - Original Message -
  From: Soeren Malchow soeren.malc...@mcon.net
  To: Users@ovirt.org
  Sent: Friday, December 19, 2014 1:44:38 PM
  Subject: [ovirt-users] Backup and Restore of VMs
 
 
 
  Dear all,
 
 
 
  ovirt: 3.5
 
  gluster: 3.6.1
 
  OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
 
 
 
  i spent quite a while researching backup and restore for VMs right
  now, so far I have come up with this as a start for us
 
 
 
   - API calls to create scheduled snapshots of virtual machines. This
   is for short-term storage and to guard against accidental deletion
   within the VM, but not against storage corruption
 
 
 
   - Since we are using a gluster backend, gluster snapshots. I wasn't
   able to really test this so far since the LV needs to be
   thin-provisioned and we did not do that in the setup
 
 
 
  For the API calls we have the problem that we can not find any
  existing scripts or something like that to do those snapshots (and
  i/we are not developers enough to do that).
 
 
 
  As an additional information, we have a ZFS based storage with
  deduplication that we use for other backup purposes which does a
   great job especially because of the deduplication (we can store
  generations of backups without problems), this storage can be NFS
  exported and used as backup repository.
 
 
 
   Are there any backup and restore procedures you guys are using that
   work for you, and can you point me in the right direction?

   I am a little bit lost right now and would appreciate any help.
 
 
 
  Regards
 
  Soeren
 


Re: [ovirt-users] Backup solution using the API

2014-12-04 Thread Liron Aravot


- Original Message -
 From: Thomas Keppler (PEBA) thomas.kepp...@kit.edu
 To: Liron Aravot lara...@redhat.com
 Sent: Thursday, December 4, 2014 12:01:28 PM
 Subject: Re: [ovirt-users] Backup solution using the API
 
 The Export Phase.
 
There's no export phase: you create a snapshot and then back up the data as
it existed at the time of the snapshot.
 --
 Best regards
 Thomas Keppler
 
  Am 04.12.2014 um 10:54 schrieb Liron Aravot lara...@redhat.com:
  
  
  
  - Original Message -
  From: Thomas Keppler (PEBA) thomas.kepp...@kit.edu
  To: Liron Aravot lara...@redhat.com
  Sent: Thursday, December 4, 2014 11:41:13 AM
  Subject: Re: [ovirt-users] Backup solution using the API
  
  This requires you to halt the vm, therefore it's not usable for us.
  Which phase requires halting the vm?
  --
  Best regards
  Thomas Keppler
  
  Am 04.12.2014 um 08:51 schrieb Liron Aravot lara...@redhat.com:
  
  Hi Thomas/plysan
  oVirt has the Backup/Restore api designated to provide the capabilities
  to
  backup/restore a vm.
  see here:
  http://www.ovirt.org/Features/Backup-Restore_API_Integration
  
  - Original Message -
  From: plysan ply...@gmail.com
  To: Thomas Keppler (PEBA) thomas.kepp...@kit.edu
  Cc: users@ovirt.org
  Sent: Thursday, December 4, 2014 3:51:24 AM
  Subject: Re: [ovirt-users] Backup solution using the API
  
  Hi,
  
  For the live-backup, i think you can make a live snapshot of the vm, and
  then
  clone a new vm from that snapshot, after that you can do export.
  
  2014-11-27 23:12 GMT+08:00 Keppler, Thomas (PEBA) 
  thomas.kepp...@kit.edu
  :
  
  
  
  Hello,
  
  now that our oVirt Cluster runs great, I'd like to know if anybody has
  done a
  backup solution for oVirt before.
  
  My basic idea goes as follows:
  - Create a JSON file with machine's preferences for later restore,
  create
  a
  snapshot, snatch the disk by its disk-id (copy it to the fileserver),
  then
  deleting the snapshot and be done with it.
   - On restore I'd just create a new VM with the Disks and NICs specified
   in the preferences; then I'd go ahead and put back the disk...
  
  I've played a little bit around with building a JSON file and so far it
  works
  great; I haven't tried to make a VM with that, though...
  
  Using the export domain or a simple export command is not what I want
  since
  you'd have to turn off the machine to do that - AFAIK. Correct me if
  that
  should not be true.
  
  Now, before I go into any more hassle, has somebody else of you done a
  live-backup solution for oVirt? Are there any recommendations? Thanks
  for
  any help provided!
  
  Best regards
  Thomas Keppler
  


Re: [ovirt-users] Backup solution using the API

2014-12-03 Thread Liron Aravot
Hi Thomas/plysan,
oVirt has the Backup/Restore API, designed to provide the capabilities to
back up and restore a VM.
See here:
http://www.ovirt.org/Features/Backup-Restore_API_Integration

- Original Message -
 From: plysan ply...@gmail.com
 To: Thomas Keppler (PEBA) thomas.kepp...@kit.edu
 Cc: users@ovirt.org
 Sent: Thursday, December 4, 2014 3:51:24 AM
 Subject: Re: [ovirt-users] Backup solution using the API
 
 Hi,
 
 For the live-backup, i think you can make a live snapshot of the vm, and then
 clone a new vm from that snapshot, after that you can do export.
 
 2014-11-27 23:12 GMT+08:00 Keppler, Thomas (PEBA)  thomas.kepp...@kit.edu 
 :
 
 
 
 Hello,
 
 now that our oVirt Cluster runs great, I'd like to know if anybody has done a
 backup solution for oVirt before.
 
 My basic idea goes as follows:
 - Create a JSON file with machine's preferences for later restore, create a
 snapshot, snatch the disk by its disk-id (copy it to the fileserver), then
 deleting the snapshot and be done with it.
  - On restore I'd just create a new VM with the Disks and NICs specified in
  the preferences; then I'd go ahead and put back the disk...
 
 I've played a little bit around with building a JSON file and so far it works
 great; I haven't tried to make a VM with that, though...
 
 Using the export domain or a simple export command is not what I want since
 you'd have to turn off the machine to do that - AFAIK. Correct me if that
 should not be true.
 
 Now, before I go into any more hassle, has somebody else of you done a
 live-backup solution for oVirt? Are there any recommendations? Thanks for
 any help provided!
 
 Best regards
 Thomas Keppler
 


Re: [ovirt-users] Master Storage Domain fails to activate

2014-11-26 Thread Liron Aravot


- Original Message -
 From: Eli Mesika emes...@redhat.com
 To: Nathan Llaneza ntllaneza...@gmail.com
 Cc: users users@ovirt.org
 Sent: Tuesday, November 25, 2014 9:48:05 PM
 Subject: Re: [ovirt-users] Master Storage Domain fails to activate
 
 
 
 - Original Message -
  From: Nathan Llaneza ntllaneza...@gmail.com
  To: users users@ovirt.org
  Sent: Monday, November 24, 2014 4:18:48 PM
  Subject: [ovirt-users] Master Storage Domain fails to activate
  
  Hello All,
  
  I am running oVirt 3.4.4 in my datacenter. Over the weekend the datacenter
  completely lost power. I have brought all the hypervisor back online;
  however it will not bring the Master storage domain back online. Just to
  rule out a few issues I have
  
  * stopped iptables on the storage domains
  * verified that each hypervisor has mounted all of the storage domains
  (using the mount command from a ssh session)
  * mounted the storage domains individually from the engine
  
  
  
  I don't think that it is a connection issue, but rather a data corruption
  issue. I could use a little help interpreting the log files. Thank you.
 
 From the engine log it really seems that master SD is corrupted , CC'ing
 Allon M who is in charge of the storage team in order to get more direction
 how to recover from that ...

Hi Nathan,
please attach the vdsm logs from the hosts as well.
Also, please check the storage access from the hosts and what's under the
domain dirs - for example the domain under 172.16.241.98:/mnt/storage
(1c4abda0-906b-4faf-a2e3-b6683729cc2f)
 
  
  
  
  
  
  
  


Re: [ovirt-users] Cancelling a running task

2014-11-05 Thread Liron Aravot


- Original Message -
 From: Eli Mesika emes...@redhat.com
 To: Daniel Lang daniel.l...@redi.com
 Cc: users@ovirt.org
 Sent: Wednesday, November 5, 2014 2:23:00 PM
 Subject: Re: [ovirt-users] Cancelling a running task
 
 
 
 - Original Message -
  From: Daniel Lang daniel.l...@redi.com
  To: users@ovirt.org users@ovirt.org
  Sent: Tuesday, November 4, 2014 6:24:48 PM
  Subject: [ovirt-users] Cancelling a running task
  
  
  
  I am creating a VM and the copy from template operation has gone haywire
  causing significant performance issues on the host server. I’d like to
  cancel the copying image action (it’s been running ~3hours on a 3GB disk
  image copy) but I cannot find anything in the web UI to cancel a task. Is
  there a command line tool to cancel the running task?
 
 login to your SPM host and run the following
 
 vdsClient -s 0 getAllTasksStatuses
 
 You can then use
 
 stopTask
 TaskID
 stop async task
 
 and then
 
 clearTask
 TaskID
 clear async task
 
 
 
I suggest stopping only the task(s) and letting the oVirt engine perform the
clearing of the tasks.
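
Putting that together with the commands quoted above, a minimal sketch on the SPM host would be (the task ID is a placeholder to be taken from the first command's output):

```shell
# List all async tasks and their statuses on the SPM host.
vdsClient -s 0 getAllTasksStatuses

# Stop - but do not clear - the runaway task; the engine will clear it itself.
vdsClient -s 0 stopTask <TaskID>
```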

  
  
  
  The oVirt version is 3.4 and vdsm version 4.14.
  
  
  
  Thanks for any advice or links to documentation/man pages.
  
  
  
  Daniel Lang
  
  


[ovirt-users] [OVIRT-3.5-TEST-DAY-3] Node Hosted Engine

2014-09-18 Thread Liron Aravot
Hi All,
Federico and I tested the Node hosted engine.
We started a VM using the posted oVirt Node ISO, but didn't actually get to
test the complete hosted-engine setup due to an unclear failure [3] (a bug was
opened for that). Bugs were also opened for the clarity of the Node menu [1]
and for the way to access the logs [2].

thanks,
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1143857
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1143861
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1143871


Re: [ovirt-users] feature review - ReportGuestDisksLogicalDeviceName

2014-09-02 Thread Liron Aravot


- Original Message -
 From: Federico Simoncelli fsimo...@redhat.com
 To: de...@ovirt.org
 Cc: Liron Aravot lara...@redhat.com, users@ovirt.org, 
 smizr...@redhat.com, Michal Skrivanek
 mskri...@redhat.com, Vinzenz Feenstra vfeen...@redhat.com, Allon 
 Mureinik amure...@redhat.com, Dan
 Kenigsberg dan...@redhat.com
 Sent: Tuesday, September 2, 2014 12:50:28 PM
 Subject: Re: feature review - ReportGuestDisksLogicalDeviceName
 
 - Original Message -
  From: Dan Kenigsberg dan...@redhat.com
  To: Liron Aravot lara...@redhat.com
  Cc: users@ovirt.org, de...@ovirt.org, smizr...@redhat.com,
  fsimo...@redhat.com, Michal Skrivanek
  mskri...@redhat.com, Vinzenz Feenstra vfeen...@redhat.com, Allon
  Mureinik amure...@redhat.com
  Sent: Monday, September 1, 2014 11:23:45 PM
  Subject: Re: feature review - ReportGuestDisksLogicalDeviceName
  
  On Sun, Aug 31, 2014 at 07:20:04AM -0400, Liron Aravot wrote:
    Feel free to review the following feature.
   
   http://www.ovirt.org/Features/ReportGuestDisksLogicalDeviceName
  
  Thanks for posting this feature page. Two things worry me about this
  feature. The first is timing. It is not reasonable to suggest an API
   change, and expect it to get into ovirt-3.5.0. We are too late anyway.
  
  The other one is the suggested API. You suggest placing volatile and
   optional information in getVMList. It won't be the first time that we
  have it (guestIPs, guestFQDN, clientIP, and displayIP are there) but
  it's foreign to the notion of conf reported by getVMList() - the set
  of parameters needed to recreate the VM.

The fact is that today we return guest information in list(Full=true); we
decided on its notion, and it seems we had already made up our minds when
guest info was added there :). I don't see any harm in returning the disk
mapping there, and if we want to extract the guest info out, we can extract
all of it in a later version (4?) without the need for backwards
compatibility. Having the information spread between different verbs is no
better, imo.
 
 At first sight this seems something belonging to getVmStats (which
 is reporting already other guest agent information).
 

Fede, as I mentioned in the wiki, getVmStats is called by the engine every few
seconds; that is why this info wasn't added there but to list(), which is
called only when the hash changes. If everyone is in for that simpler
solution I'm fine with it, but Michal/Vinzenz preferred it this way.
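The hash-gated polling described above can be sketched roughly like this. This is illustrative only — `get_hash` and `list_full` are hypothetical stand-ins for the VDSM verbs, not the real API; the point is that stats are cheap and polled every cycle, while the full list is re-fetched only when the hash changes:

```python
# Illustrative sketch of hash-gated polling: the full VM list (conf plus guest
# info such as the disk mapping) is re-fetched only when the reported hash
# changes. Method names here are hypothetical, not real VDSM calls.

def poll_full_list(host, last_hash):
    """Return (new_hash, vm_list or None); None means 'nothing changed'."""
    new_hash = host.get_hash()
    if new_hash == last_hash:
        return last_hash, None          # hash unchanged: skip the expensive call
    return new_hash, host.list_full()   # hash changed: fetch the full list


class FakeHost:
    """Toy stand-in so the flow can be exercised without a real host."""
    def __init__(self):
        self.hash = "h1"
        self.full_list_calls = 0

    def get_hash(self):
        return self.hash

    def list_full(self):
        self.full_list_calls += 1
        return [{"vmId": "vm1", "diskMapping": {"vda": "/dev/sda"}}]
```

The design choice under discussion is exactly this: data placed in list() rides the cheap path until the hash changes, while data in getVmStats is transferred on every poll.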

 Federico
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] feature review - ReportGuestDisksLogicalDeviceName

2014-08-31 Thread Liron Aravot
Feel free to review the following feature.

http://www.ovirt.org/Features/ReportGuestDisksLogicalDeviceName


Re: [ovirt-users] Question on Backup and Restore API

2014-08-19 Thread Liron Aravot


- Original Message -
 From: santosh sba...@commvault.com
 To: users@ovirt.org
 Sent: Monday, August 18, 2014 7:34:02 PM
 Subject: [ovirt-users] Question on Backup and Restore API
 
 Hi,
 
 The link Backup and Restore API has steps for Full VM Backups. The mentioned
 steps are,
 
 
 
 1. Take a snapshot of the virtual machine to be backed up - (existing
 oVirt REST API operation)
 2. Back up the virtual machine configuration at the time of the snapshot
 (the disk configuration can be backed up as well if needed) - (added
 capability to oVirt as part of the Backup API)
 3. Attach the disk snapshots that were created in (1) to the virtual
 appliance for data backup - (added capability to oVirt as part of the
 Backup API)
 4. Back up the data
 5. Detach the disk snapshots that were attached in (3) from the virtual
 appliance - (added capability to oVirt as part of the Backup API)
 
 In the example section, following is the explanation for step 2.
 
 
 
 Grab the wanted vm configuration from the needed snapshot - it'll be under
 initialization/configuration/data
 URL = SERVER:PORT/api/vms/VM_ID/snapshots/ID
   Method = GET

Hi Santosh,
you should also pass the All-Content: true header.
I updated the wiki page accordingly.

thanks!
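For illustration, the request could be assembled like this (a hedged sketch only; the VM/snapshot IDs and credentials are placeholders, and nothing is sent on the wire here):

```python
import base64


def snapshot_config_request(vm_id, snap_id, user, password):
    """Build (path, headers) for GET /api/vms/{vm}/snapshots/{snap}.

    Without the "All-Content: true" header the response omits the
    initialization/configuration/data section - which is exactly the
    symptom reported below.
    """
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    path = f"/api/vms/{vm_id}/snapshots/{snap_id}"
    headers = {
        "All-Content": "true",          # required to get the full VM configuration
        "Accept": "application/xml",
        "Authorization": "Basic " + auth,
    }
    return path, headers
```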
 
 But when I run the GET request using the REST API, I am not finding the
 information for initialization/configuration/data in the output.
 Following is the output of the GET request. Please advise if I am missing
 something or looking at the wrong place.
 I am also attaching xml file with the following content. Please let me know
 if you need more information.
 
 
 
 
 
 
 <snapshot href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094"
           id="a8f63b31-1bfa-45ba-a8cc-d10b486f1094">
   <actions>
     <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/restore"
           rel="restore"/>
   </actions>
   <description>FirstClick</description>
   <type>regular</type>
   <vm id="4dcd5b6a-cf4b-460c-899d-4edb5345d705">
     <name>RHEL_65_CL1</name>
     <description>This is cluster 1</description>
     <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/cdroms"
           rel="cdroms"/>
     <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/disks"
           rel="disks"/>
     <link href="/api/vms/4dcd5b6a-cf4b-460c-899d-4edb5345d705/snapshots/a8f63b31-1bfa-45ba-a8cc-d10b486f1094/nics"
           rel="nics"/>
     <type>server</type>
     <status>
       <state>up</state>
     </status>
     <memory>1073741824</memory>
     <cpu>
       <topology sockets="1" cores="1"/>
       <architecture>X86_64</architecture>
     </cpu>
     <cpu_shares>0</cpu_shares>
     <os type="other">
       <boot dev="cdrom"/>
       <boot dev="hd"/>
     </os>
     <high_availability>
       <enabled>false</enabled>
       <priority>1</priority>
     </high_availability>
     <display>
       <type>vnc</type>
       <address>172.19.110.43</address>
       <port>5900</port>
       <monitors>1</monitors>
       <single_qxl_pci>false</single_qxl_pci>
       <allow_override>false</allow_override>
       <smartcard_enabled>false</smartcard_enabled>
     </display>
     <host id="2704a037-0a61-4f3e-8063-6bd67bdbac36"/>
     <cluster id="ba23117a-708e-40f6-bf32-970c4f86b7ee"/>
     <template id="----"/>
     <start_time>2014-08-18T11:16:47.307-04:00</start_time>
     <stop_time>2014-08-18T11:14:26.323-04:00</stop_time>
     <creation_time>2014-08-08T17:39:52.000-04:00</creation_time>
     <origin>ovirt</origin>
     <stateless>false</stateless>
     <delete_protected>false</delete_protected>
     <sso>
       <methods>
         <method id="GUEST_AGENT"/>
       </methods>
     </sso>
     <initialization/>
     <placement_policy>
       <affinity>migratable</affinity>
     </placement_policy>
     <memory_policy>
       <guaranteed>1073741824</guaranteed>
     </memory_policy>
     <usb>
       <enabled>false</enabled>
     </usb>
     <migration_downtime>-1</migration_downtime>
   </vm>
   <date>2014-08-17T17:03:53.461-04:00</date>
   <snapshot_status>ok</snapshot_status>
   <persist_memorystate>false</persist_memorystate>
 </snapshot>
 
 Thanks,
 Santosh
 


Re: [ovirt-users] ovirt3.5 - deep dive - OVF on any domain + import existing data domain

2014-08-14 Thread Liron Aravot
Updated link:
https://www.youtube.com/watch?v=71EuTct0wfc

- Original Message -
 From: Barak Azulay bazu...@redhat.com
 To: Liron Aravot lara...@redhat.com, mlipc...@redhat.com, Allon 
 Mureinik amure...@redhat.com,
 users@ovirt.org, de...@ovirt.org
 Sent: Tuesday, August 12, 2014 5:44:53 PM
 Subject: ovirt3.5 - deep dive - OVF on any domain + import existing data 
 domain
 
 The following meeting has been modified:
 
 Subject: ovirt3.5 - deep dive - OVF on any domain + import existing data
 domain
 Organizer: Barak Azulay bazu...@redhat.com
 
 Time: Thursday, August 14, 2014, 5:00:00 PM - 5:45:00 PM GMT +02:00 Jerusalem
  
 Invitees: lara...@redhat.com; mlipc...@redhat.com; amure...@redhat.com;
 users@ovirt.org; de...@ovirt.org
 
 
 *~*~*~*~*~*~*~*~*~*
 
 Hangout link:
 https://plus.google.com/events/c7rkldonq80g14c9e3ob8as2kq8
 
 Session description:
 The OVF on any domain feature introduces a change in the way the VM OVFs are
 stored/backed up in oVirt. Currently all the OVFs are stored on
 the master domain and are updated asynchronously on a time basis by
 the OvfAutoUpdater. This feature's purpose is to store the OVFs on all wanted
 domains, to provide better recovery ability, reduce the use of master_fs and
 the master domain, and add capabilities to oVirt that will be used further
 on.
 
 The import data storage domain feature makes use of the OVF on any domain
 feature to import an existing storage domain, in order to be able to recover
 after the loss of the oVirt Engine's database and to move storage
 domains with VMs/templates between setups.
 
 The talk will cover those two features and will provide a deep dive into
 their use and implementation.
 
 Wiki pages:
 http://www.ovirt.org/Feature/OvfOnAnyDomain
 
 http://www.ovirt.org/Features/ImportStorageDomain


Re: [ovirt-users] Test Day Report

2014-07-02 Thread Liron Aravot
Hi all,
I tested the following :
Bug 1053435 - OVIRT35 - [RFE] oVirt virtual appliance

I encountered an issue already during the image creation phase and opened
this bug for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1115450

I didn't manage to test the rest of it because of that and other related issues.

- Original Message -
 From: Ravi Nori rn...@redhat.com
 To: users@ovirt.org
 Sent: Wednesday, July 2, 2014 2:33:32 PM
 Subject: [ovirt-users] Test Day Report
 
 Hi,
 
 I tested the following
 
 Bug 1090530 - OVIRT35 - [RFE] Please add host count and guest count columns
 to Clusters tab in webadmin
 
 Everything worked fine here
 
 
 Bug 1078836 - OVIRT35 - [RFE] add a warning when changing display network
 
 Everything worked fine but I had to go into events tab to see the warning
 
 Thanks
 
 Ravi
 
 


Re: [ovirt-users] Rename disks when creating VM from Template

2014-06-08 Thread Liron Aravot


- Original Message -
 From: Gilad Chaplik gchap...@redhat.com
 To: s k sokratis1...@outlook.com
 Cc: users@ovirt.org
 Sent: Sunday, June 8, 2014 10:39:40 AM
 Subject: Re: [ovirt-users] Rename disks when creating VM from Template
 
 - Original Message -
  From: s k sokratis1...@outlook.com
  To: users@ovirt.org
  Sent: Thursday, June 5, 2014 10:07:44 PM
  Subject: Re: [ovirt-users] Rename disks when creating VM from Template
  
  Just noticed that I can actually change the disk name at the bottom of
  Resource Allocation. Apparently I never scrolled down to the very bottom :)
 
 Hi Sokratis,
 
 Great that you've managed to handle it, but if it took you time to realize
 that, I think it's a bug.
 Can you please open one?
 
 https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt
 
 Thanks,
 Gilad.
 
  
  
  From: sokratis1...@outlook.com
  To: users@ovirt.org
  Subject: Rename disks when creating VM from Template
  Date: Wed, 4 Jun 2014 11:59:55 +0300
  
  Hello all,
  
   I have noticed that when I create a VM from a template, the disks of the VM
   have the names of the template's disks.
  
  What I would like is to be able to rename the VM's disks based on the
  hostname.
  
   For example, the template I use has the following disk names:
  
  VM_Template_Disk1
  VM_Template_Disk2
  VM_Template_Disk3
  
  When I create a new VM (e.g. testvm01) the disk names remain the same. What
  I
  would like is to be able to rename them automatically as below:
  
  testvm01_Disk1
  testvm01_Disk2
  testvm01_Disk3
  
  At the moment I'm editing the names manually using the oVirt GUI.
  
  Is it possible to do this automatically?
Hi Sokratis,
if you could open an RFE for that it would be great.
Basically it's a matter of preference: if you created your template with
meaningful disk names, you might want to keep them.
I completely agree that one may prefer automatic naming over keeping the
original names.

thanks,
Liron
  
  Thanks,
  
  Sokratis
  


Re: [ovirt-users] SPM error

2014-04-16 Thread Liron Aravot
Hi Maurice,
any updates on the above?

thanks, Liron

- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Maurice \Moe\ James mja...@media-node.com
 Cc: users@ovirt.org
 Sent: Tuesday, April 15, 2014 11:53:40 AM
 Subject: Re: [ovirt-users] SPM error
 
 
 
 - Original Message -
  From: Maurice \Moe\ James mja...@media-node.com
  To: Liron Aravot lara...@redhat.com
  Cc: Itamar Heim ih...@redhat.com, users@ovirt.org
  Sent: Tuesday, April 15, 2014 3:14:16 AM
  Subject: Re: [ovirt-users] SPM error
  
  Sorry forgot to paste
  https://bugzilla.redhat.com/show_bug.cgi?id=1086951
 
 Hi Maurice,
 the issue is that the host doesn't have access to all the storage domains,
 which causes the SPM start process to fail.
 There's a bug open for that issue -
 https://bugzilla.redhat.com/show_bug.cgi?id=1072900 - so it seems we'll be
 able to close the one you opened as a duplicate, but let's wait with that
 until your issue is solved.
 From looking at the logs, it seems that host has a problem accessing two
 storage domains -
 3406665e-4adc-4fd4-aa1e-037547b29adb
 f3b51811-4a7f-43af-8633-322b3db23c48
 
 Can you verify that the host can access those domains? From the log it seems
 like the NFS paths for those are:
 shtistg01.suprtekstic.com:/storage/infrastructure
 shtistg01.suprtekstic.com:/storage/exports
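A minimal check matching the failure mode in the log snippet that follows ("Failed to resolve server"): verify that each NFS server name resolves before attempting the mount. The server/export strings come from the log above; the helper itself is only an illustration, not part of VDSM:

```python
import socket


def check_nfs_paths(paths):
    """Map each 'server:/export' path to 'resolves' or 'DNS failure'.

    This mirrors the first thing mount.nfs does: resolve the server name.
    """
    results = {}
    for path in paths:
        server = path.split(":", 1)[0]
        try:
            socket.gethostbyname(server)
            results[path] = "resolves"
        except socket.gaierror:
            results[path] = "DNS failure"
    return results
```

One detail worth double-checking: the configured paths above start with shtistg01..., while the mount in the log tries to resolve ashtistg01... - the mismatch alone could explain the "Name or service not known" error.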
 
 
 log snippet:
 1.
 Thread-14::DEBUG::2014-04-11
 22:54:44,331::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
 /bin/mount -t nfs -o soft,nosharecache,timeo=600,retra
 ns=6,nfsvers=3 ashtistg01.suprtekstic.com:/storage/exports
 /rhev/data-center/mnt/ashtistg01.suprtekstic.com:_storage_exports' (cwd
 None)
 Thread-14::ERROR::2014-04-11
 22:55:36,659::storageServer::209::StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: Failed to resolve serv
 er ashtistg01.suprtekstic.com: Name or service not known\n')
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
 self._mount.mount(self.options, self._vfsType)
   File /usr/share/vdsm/storage/mount.py, line 222, in mount
 return self._runcmd(cmd, timeout)
   File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
 raise MountError(rc, ;.join((out, err)))
 MountError: (32, ';mount.nfs: Failed to resolve server
 ashtistg01.suprtekstic.com: Name or service not known\n')
 Thread-14::ERROR::2014-04-11
 22:55:36,705::hsm::2379::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/hsm.py, line 2376, in connectStorageServer
 conObj.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 320, in connect
 return self._mountCon.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 215, in connect
 raise e
 MountError: (32, ';mount.nfs: Failed to resolve server
 ashtistg01.suprtekstic.com: Name or service not known\n')
 
 
 
 
 
 2.
 Thread-14::ERROR::2014-04-11
 22:56:29,307::storageServer::209::StorageServer.MountConnection::(connect)
 Mount failed: (32, ';mount.nfs: Failed to resolve serv
 er ashtistg01.suprtekstic.com: Name or service not known\n')
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
 self._mount.mount(self.options, self._vfsType)
   File /usr/share/vdsm/storage/mount.py, line 222, in mount
 return self._runcmd(cmd, timeout)
   File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
 raise MountError(rc, ;.join((out, err)))
 MountError: (32, ';mount.nfs: Failed to resolve server
 ashtistg01.suprtekstic.com: Name or service not known\n')
 Thread-14::ERROR::2014-04-11
 22:56:29,309::hsm::2379::Storage.HSM::(connectStorageServer) Could not
 connect to storageServer
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/hsm.py, line 2376, in connectStorageServer
 conObj.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 320, in connect
 return self._mountCon.connect()
   File /usr/share/vdsm/storage/storageServer.py, line 215, in connect
 raise e
 MountError: (32, ';mount.nfs: Failed to resolve server
 ashtistg01.suprtekstic.com: Name or service not known\n')
 
 
 Regardless of that, there are sanlock errors throughout the log when trying
 to acquire the host-id.
 I'd start with checking the connectivity issues to the storage domains above;
 later on you can check that sanlock is running and operational and/or
 attach the sanlock logs.
 
 
  
  On Mon, 2014-04-14 at 17:11 -0400, Liron Aravot wrote:
   
   - Original Message -
From: Maurice \Moe\ James mja...@media-node.com
To: Itamar Heim ih...@redhat.com
Cc: users@ovirt.org
Sent: Sunday, April 13, 2014 2:28:45 AM
Subject: Re: [ovirt-users] SPM error

Were you able to find out anything? Is there anything that I can check
in the meanwhile?

   
   Hi Maurice,
   can you please attach the ovirt engine/vdsm logs?

Re: [ovirt-users] Error creating Disks

2014-04-16 Thread Liron Aravot
Hi Maurice, Dafna
The creation of the snapshot for each of the disks succeeds, but performing the 
live part of it fails - we can see the following error in the vdsm log.

Thread-::DEBUG::2014-04-16 09:40:39,071::vm::4007::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Snapshot faile
d using the quiesce flag, trying again without it (unsupported configuration: 
reuse is not supported with this QEMU binary)
Thread-::DEBUG::2014-04-16 
09:40:39,081::libvirtconnection::124::root::(wrapper) Unknown libvirterror: 
ecode: 67 edom: 10 level:
 2 message: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::ERROR::2014-04-16 09:40:39,082::vm::4011::vm.Vm::(snapshot) 
vmId=`50cf8bce-3982-491a-8b67-7d009c5c3243`::Unable to take
 snapshot
Traceback (most recent call last):
  File /usr/share/vdsm/vm.py, line 4009, in snapshot
self._dom.snapshotCreateXML(snapxml, snapFlags)
  File /usr/share/vdsm/vm.py, line 859, in f
ret = attr(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py, line 92, 
in wrapper
ret = f(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 1636, in 
snapshotCreateXML
if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', 
dom=self)
libvirtError: unsupported configuration: reuse is not supported with this QEMU 
binary
Thread-::DEBUG::2014-04-16 
09:40:39,091::BindingXMLRPC::1074::vds::(wrapper) return vmSnapshot with 
{'status': {'message': 'Snap
shot failed', 'code': 48}}

There's an open bug on possibly the same issue - let's verify that it's the
same:
https://bugzilla.redhat.com/show_bug.cgi?id=1009100

What OS are you running? If it's not CentOS, please try to upgrade libvirt
and try again.

If it's urgent - as I see that you already stopped your VM during those
tries - as a temporary workaround you can stop the VM, move the disk while
it's stopped (which won't be a live storage migration) and then start it.

- Original Message -
 From: Dafna Ron d...@redhat.com
 To: Maurice James mja...@media-node.com
 Cc: users@ovirt.org
 Sent: Wednesday, April 16, 2014 4:46:10 PM
 Subject: Re: [ovirt-users] Error creating Disks
 
 can you try to restart the engine?
 This should clean the cache and some of the tables holding temporary
 task info.
 
 Dafna
 
 
 
 On 04/16/2014 02:37 PM, Maurice James wrote:
  I ran vdsClient -s 0 getAllTasksInfo and nothing was returned. What should
  my next step be?
 
  - Original Message -
  From: Dafna Ron d...@redhat.com
  To: Maurice James mja...@media-node.com
  Cc: Yair Zaslavsky yzasl...@redhat.com, users@ovirt.org
  Sent: Wednesday, April 16, 2014 4:44:48 AM
  Subject: Re: [ovirt-users] Error creating Disks
 
  OK... Now I am starting to understand what's going on: if you look at
  the vdsm log, the live snapshot succeeds and no other ERRORs are
  reported. I think it's related to the task management on the engine side.
 
  Can I ask you to run in the spm: vdsClient -s 0 getAllTasksInfo
  If you have tasks we would have to stop and clear them (vdsClient -s 0
  stopTask task ; vdsClient -s 0 clearTask task)
  after you clear the tasks you will have to restart the engine
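The cleanup sequence above, sketched as a dry-run helper that only builds the argv lists (vdsClient with stopTask/clearTask is the real CLI; the task IDs are whatever getAllTasksInfo reports on the SPM host):

```python
def task_cleanup_commands(task_ids):
    """Build the command lines for clearing stale SPM tasks.

    These are meant to be run on the SPM host; this helper only
    constructs them, it does not execute anything.
    """
    cmds = [["vdsClient", "-s", "0", "getAllTasksInfo"]]      # inspect first
    for task in task_ids:
        cmds.append(["vdsClient", "-s", "0", "stopTask", task])
        cmds.append(["vdsClient", "-s", "0", "clearTask", task])
    # after clearing, restart ovirt-engine so its cached task state is dropped
    return cmds
```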
 
  Also, what version of ovirt are you using? is it 3.4? because this was
  suppose to be fixed...
 
  here is the explanation from what I see in the logs:
 
  you are sending a command to Live Migrate
 
  2014-04-15 12:50:54,381 INFO
  [org.ovirt.engine.core.bll.MoveDisksCommand] (ajp--127.0.0.1-8702-4)
  [4c089392] Running command: MoveDisksCommand internal: false. Entities
  affected :  ID: c24706d3-1872-4cd3-94a2-9c61ef032e29 Type: Disk
  2014-04-15 12:50:54,520 INFO
  [org.ovirt.engine.core.bll.lsm.LiveMigrateVmDisksCommand]
  (ajp--127.0.0.1-8702-4) [4c089392] Lock Acquired to object EngineLock
  [exclusiveLocks= key: c24706d3-1872-4cd3-94a2-9c61ef032e29 value: DISK
  , sharedLocks= key: ba49605b-fb7e-4a70-a380-6286d3903e50 value: VM
  ]
 
  The first step is creating the snapshot:
 
  2014-04-15 12:50:54,734 INFO
  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand]
  (org.ovirt.thread.pool-6-thread-47) Running command:
  CreateAllSnapshotsFromVmCommand internal: true. Entities affected :  ID:
  ba49605b-fb7e-4a70-a3
  80-6286d3903e50 Type: VM
  2014-04-15 12:50:54,748 INFO
  [org.ovirt.engine.core.bll.CreateSnapshotCommand]
  (org.ovirt.thread.pool-6-thread-47) [23c23b4] Running command:
  CreateSnapshotCommand internal: true. Entities affected :  ID:
  ----000
  0 Type: Storage
  2014-04-15 12:50:54,760 INFO
  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand]
  (org.ovirt.thread.pool-6-thread-47) [23c23b4] START,
  CreateSnapshotVDSCommand( storagePoolId =
  a106ab81-9d5f-49c1-aeaf-832a137b708c, ignor
  eFailoverLimit = false, storageDomainId =
  3406665e-4adc-4fd4-aa1e-037547b29adb, imageGroupId =
  c24706d3-1872-4cd3-94a2-9c61ef032e29, imageSizeInBytes = 107374182400,
  

Re: [ovirt-users] SPM error

2014-04-16 Thread Liron Aravot


- Original Message -
 From: Maurice James mja...@media-node.com
 To: Liron Aravot lara...@redhat.com
 Cc: users@ovirt.org
 Sent: Wednesday, April 16, 2014 4:49:18 PM
 Subject: Re: [ovirt-users] SPM error
 
 After a few sleepless nights, I went through my entire system again and I
 found an interface on one of my hosts that had already been removed from the
 UI. Even after multiple service network restarts it would still show up
 when I ran ip addr. I had to end up forcefully removing it with rm -rf
 /etc/sysconfig/network-scripts/ifcfg-interface. After that I rebooted the
 node and the SPM came out of contention. I can't make sense of it, but it
 worked.

OK, so we might have a bug here - what OS are you running?

As it seems the initial SPM issue is the same as the bug I provided earlier,
the bug you opened on that issue can probably be closed as a duplicate;
adding Federico to verify that there's no further sanlock issue there.
 
 - Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Maurice \Moe\ James mja...@media-node.com
 Cc: users@ovirt.org
 Sent: Wednesday, April 16, 2014 8:49:05 AM
 Subject: Re: [ovirt-users] SPM error
 
 Hi Maurice,
 any updates on the above?
 
 thanks, Liron
 
 - Original Message -
  From: Liron Aravot lara...@redhat.com
  To: Maurice \Moe\ James mja...@media-node.com
  Cc: users@ovirt.org
  Sent: Tuesday, April 15, 2014 11:53:40 AM
  Subject: Re: [ovirt-users] SPM error
  
  
  
  - Original Message -
   From: Maurice \Moe\ James mja...@media-node.com
   To: Liron Aravot lara...@redhat.com
   Cc: Itamar Heim ih...@redhat.com, users@ovirt.org
   Sent: Tuesday, April 15, 2014 3:14:16 AM
   Subject: Re: [ovirt-users] SPM error
   
   Sorry forgot to paste
   https://bugzilla.redhat.com/show_bug.cgi?id=1086951
  
  Hi Maurice,
  the issue is that the host doesn't have access to all the storage domains,
  which causes the SPM start process to fail.
  There's a bug open for that issue -
  https://bugzilla.redhat.com/show_bug.cgi?id=1072900 - so it seems we'll be
  able to close the one you opened as a duplicate, but let's wait with that
  until your issue is solved.
  From looking at the logs, it seems that host has a problem accessing two
  storage domains -
  3406665e-4adc-4fd4-aa1e-037547b29adb
  f3b51811-4a7f-43af-8633-322b3db23c48
  
  Can you verify that the host can access those domains? from the log it
  seems
  like the nfs paths for those are:
  shtistg01.suprtekstic.com:/storage/infrastructure
  shtistg01.suprtekstic.com:/storage/exports
  
  
  log snippet:
  1.
  Thread-14::DEBUG::2014-04-11
  22:54:44,331::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
  /bin/mount -t nfs -o soft,nosharecache,timeo=600,retra
  ns=6,nfsvers=3 ashtistg01.suprtekstic.com:/storage/exports
  /rhev/data-center/mnt/ashtistg01.suprtekstic.com:_storage_exports' (cwd
  None)
  Thread-14::ERROR::2014-04-11
  22:55:36,659::storageServer::209::StorageServer.MountConnection::(connect)
  Mount failed: (32, ';mount.nfs: Failed to resolve serv
  er ashtistg01.suprtekstic.com: Name or service not known\n')
  Traceback (most recent call last):
File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
  self._mount.mount(self.options, self._vfsType)
File /usr/share/vdsm/storage/mount.py, line 222, in mount
  return self._runcmd(cmd, timeout)
File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
  raise MountError(rc, ;.join((out, err)))
  MountError: (32, ';mount.nfs: Failed to resolve server
  ashtistg01.suprtekstic.com: Name or service not known\n')
  Thread-14::ERROR::2014-04-11
  22:55:36,705::hsm::2379::Storage.HSM::(connectStorageServer) Could not
  connect to storageServer
  Traceback (most recent call last):
File /usr/share/vdsm/storage/hsm.py, line 2376, in connectStorageServer
  conObj.connect()
File /usr/share/vdsm/storage/storageServer.py, line 320, in connect
  return self._mountCon.connect()
File /usr/share/vdsm/storage/storageServer.py, line 215, in connect
  raise e
  MountError: (32, ';mount.nfs: Failed to resolve server
  ashtistg01.suprtekstic.com: Name or service not known\n')
  
  
  
  
  
  2.
  Thread-14::ERROR::2014-04-11
  22:56:29,307::storageServer::209::StorageServer.MountConnection::(connect)
  Mount failed: (32, ';mount.nfs: Failed to resolve serv
  er ashtistg01.suprtekstic.com: Name or service not known\n')
  Traceback (most recent call last):
File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
  self._mount.mount(self.options, self._vfsType)
File /usr/share/vdsm/storage/mount.py, line 222, in mount
  return self._runcmd(cmd, timeout)
File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
  raise MountError(rc, ;.join((out, err)))
  MountError: (32, ';mount.nfs: Failed to resolve server
  ashtistg01.suprtekstic.com: Name or service not known\n')
  Thread-14::ERROR::2014-04-11
  22:56:29,309::hsm::2379

Re: [ovirt-users] SPM error

2014-04-15 Thread Liron Aravot


- Original Message -
 From: Maurice \Moe\ James mja...@media-node.com
 To: Liron Aravot lara...@redhat.com
 Cc: Itamar Heim ih...@redhat.com, users@ovirt.org
 Sent: Tuesday, April 15, 2014 3:14:16 AM
 Subject: Re: [ovirt-users] SPM error
 
 Sorry forgot to paste
 https://bugzilla.redhat.com/show_bug.cgi?id=1086951

Hi Maurice,
the issue is that the host doesn't have access to all the storage domains,
which causes the SPM start process to fail.
There's a bug open for that issue -
https://bugzilla.redhat.com/show_bug.cgi?id=1072900 - so it seems we'll be able
to close the one you opened as a duplicate, but let's wait with that until
your issue is solved.
From looking at the logs, it seems that host has a problem accessing two
storage domains -
3406665e-4adc-4fd4-aa1e-037547b29adb
f3b51811-4a7f-43af-8633-322b3db23c48

Can you verify that the host can access those domains? From the log it seems
like the NFS paths for those are:
shtistg01.suprtekstic.com:/storage/infrastructure
shtistg01.suprtekstic.com:/storage/exports


log snippet:
1.
Thread-14::DEBUG::2014-04-11 
22:54:44,331::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n 
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retra
ns=6,nfsvers=3 ashtistg01.suprtekstic.com:/storage/exports 
/rhev/data-center/mnt/ashtistg01.suprtekstic.com:_storage_exports' (cwd None)
Thread-14::ERROR::2014-04-11 
22:55:36,659::storageServer::209::StorageServer.MountConnection::(connect) 
Mount failed: (32, ';mount.nfs: Failed to resolve serv
er ashtistg01.suprtekstic.com: Name or service not known\n')
Traceback (most recent call last):
  File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
self._mount.mount(self.options, self._vfsType)
  File /usr/share/vdsm/storage/mount.py, line 222, in mount
return self._runcmd(cmd, timeout)
  File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
raise MountError(rc, ;.join((out, err)))
MountError: (32, ';mount.nfs: Failed to resolve server 
ashtistg01.suprtekstic.com: Name or service not known\n')
Thread-14::ERROR::2014-04-11 
22:55:36,705::hsm::2379::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer
Traceback (most recent call last):
  File /usr/share/vdsm/storage/hsm.py, line 2376, in connectStorageServer
conObj.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 320, in connect
return self._mountCon.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 215, in connect
raise e
MountError: (32, ';mount.nfs: Failed to resolve server 
ashtistg01.suprtekstic.com: Name or service not known\n')





2.
Thread-14::ERROR::2014-04-11 
22:56:29,307::storageServer::209::StorageServer.MountConnection::(connect) 
Mount failed: (32, ';mount.nfs: Failed to resolve serv
er ashtistg01.suprtekstic.com: Name or service not known\n')
Traceback (most recent call last):
  File /usr/share/vdsm/storage/storageServer.py, line 207, in connect
self._mount.mount(self.options, self._vfsType)
  File /usr/share/vdsm/storage/mount.py, line 222, in mount
return self._runcmd(cmd, timeout)
  File /usr/share/vdsm/storage/mount.py, line 238, in _runcmd
raise MountError(rc, ;.join((out, err)))
MountError: (32, ';mount.nfs: Failed to resolve server 
ashtistg01.suprtekstic.com: Name or service not known\n')
Thread-14::ERROR::2014-04-11 
22:56:29,309::hsm::2379::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer
Traceback (most recent call last):
  File /usr/share/vdsm/storage/hsm.py, line 2376, in connectStorageServer
conObj.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 320, in connect
return self._mountCon.connect()
  File /usr/share/vdsm/storage/storageServer.py, line 215, in connect
raise e
MountError: (32, ';mount.nfs: Failed to resolve server 
ashtistg01.suprtekstic.com: Name or service not known\n')


Regardless of that, there are sanlock errors throughout the log when trying to
acquire the host-id.
I'd start with checking the connectivity issues to the storage domains above;
later on you can check that sanlock is running and operational and/or
attach the sanlock logs.


 
 On Mon, 2014-04-14 at 17:11 -0400, Liron Aravot wrote:
  
  - Original Message -
   From: Maurice \Moe\ James mja...@media-node.com
   To: Itamar Heim ih...@redhat.com
   Cc: users@ovirt.org
   Sent: Sunday, April 13, 2014 2:28:45 AM
   Subject: Re: [ovirt-users] SPM error
   
   Were you able to find out anything? Is there anything that I can check
   in the meanwhile?
   
  
  Hi Maurice,
  can you please attach the ovirt engine/vdsm logs?
  thanks,
  Liron
   
   On Sat, 2014-04-12 at 19:23 +0300, Itamar Heim wrote:
On 04/12/2014 03:40 PM, Maurice James wrote:
 What did you do to try to fix the sanlock? Anything is better than
 nothing at this point

 - Original Message -
 From: Ted Miller tmil...@hcjb.org
 To: Maurice James mja...@media-node.com
 Sent: Friday

Re: [ovirt-users] SPM error

2014-04-14 Thread Liron Aravot


- Original Message -
 From: Maurice \Moe\ James mja...@media-node.com
 To: Itamar Heim ih...@redhat.com
 Cc: users@ovirt.org
 Sent: Sunday, April 13, 2014 2:28:45 AM
 Subject: Re: [ovirt-users] SPM error
 
 Were you able to find out anything? Is there anything that I can check
 in the meanwhile?
 

Hi Maurice,
can you please attach the ovirt engine/vdsm logs?
thanks,
Liron
 
 On Sat, 2014-04-12 at 19:23 +0300, Itamar Heim wrote:
  On 04/12/2014 03:40 PM, Maurice James wrote:
   What did you do to try to fix the sanlock? Anything is better than
   nothing at this point
  
   - Original Message -
   From: Ted Miller tmil...@hcjb.org
   To: Maurice James mja...@media-node.com
   Sent: Friday, April 11, 2014 7:27:24 PM
   Subject: Re: [ovirt-users] SPM error
  
   I did receive some help on one stage of rebuilding my sanlock, but there
   were
   too many other things wrong to get it started again. Only advice I have
   is --
   look at your sanlock logs, and see if you can find anything there that is
   helpful.
  
   On 4/11/2014 7:23 PM, Maurice James wrote:
   Nooo.
  
  
   Sent from my Galaxy S®III
  
    Original message 
   From: Ted Miller tmil...@hcjb.org
   Date:04/11/2014  7:08 PM  (GMT-05:00)
   To: Maurice James mja...@media-node.com
   Subject: Re: [ovirt-users] SPM error
  
  
  
   I didn't, really.  I did something wrong along the way, and ended up
   having
   to rebuild the engine and hosts.  (My problems were due to a glusterfs
   split-brain.)
   Ted Miller
  
   On 4/11/2014 6:03 PM, Maurice James wrote:
   How did you fix it?
  
  
   Sent from my Galaxy S®III
  
    Original message 
   From: Ted Miller tmil...@hcjb.org
   Date:04/11/2014  6:00 PM  (GMT-05:00)
   To: users@ovirt.org
   Subject: Re: [ovirt-users] SPM error
  
  
  
   On 4/11/2014 2:05 PM, Maurice James wrote:
   I have an error trying to bring the master DC back online. After
   several
   reboots, no luck. I took the other cluster members offline to try to
   troubleshoot. The remaining host is constantly in contention with
   itself
   for SPM
  
  
   ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
   (DefaultQuartzScheduler_Worker-40) [38d400ea]
   IrsBroker::Failed::GetStoragePoolInfoVDS due to:
   IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
   SpmStart failed
  
   I'm no expert, but the last time I beat my head on that rock, something
   was
   wrong with my sanlock storage.  YMMV
   Ted Miller
   Elkhart, IN, USA
  
  
  
  Maurice - which type of storage is this?
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] SAN storage expansion

2014-03-18 Thread Liron Aravot
Hi Giorgio,
Perhaps I missed something - but why don't you extend the domain by 
right-clicking on it and editing it?

- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Elad Ben Aharon ebena...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 18, 2014 11:28:14 AM
 Subject: Re: [Users] SAN storage expansion
 
 2014-03-18 9:15 GMT+01:00 Elad Ben Aharon ebena...@redhat.com:
  Hi,
 
  LUN sizes are updated via the multipath mechanism. In order to
  update the storage domain size, you'll need to update the physical volume
  size using LVM on your hypervisors.
 
  Please do the following:
  1) Put the storage domain to maintenance
  2) Execute on your hosts:
  'pvresize --setphysicalvolumesize' with the new physical volume size (150G)
  3) Activate the storage domain
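
A minimal sketch of steps 2-3 as they would run on a host. The multipath
device path and the 150G size are assumptions taken from this thread's
example; the commands are only printed (dry run), substitute your own values:

```shell
#!/bin/sh
# Hypothetical multipath device backing the iSCSI storage domain.
PV=/dev/mapper/36001405b7dd7b800e7049ecbf6830637
NEW_SIZE=150G

# With the storage domain already in maintenance, grow the PV to the new
# LUN size on every host (printed here rather than executed, as a dry run):
echo pvresize --setphysicalvolumesize "$NEW_SIZE" "$PV"

# Verify the new PV size before reactivating the domain:
echo pvs -o pv_name,pv_size "$PV"
```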
 
 Thank you Elad,
 I knew a pvresize command was required but wanted to be sure that I
 wasn't disrupting the oVirt system's bookkeeping of storage
 parameters.
 I'm wondering if that is why the SD has to be put in maintenance.
 
 Any chance to have the SD resize without putting it offline, for
 example doing pvresize on the SPM node and then pvscan on every other
 host involved?
 I'm sure there are use cases where this would be of great convenience.
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: users@ovirt.org Users@ovirt.org
  Sent: Monday, March 17, 2014 7:25:36 PM
  Subject: [Users] SAN storage expansion
 
  Hi all,
  I'm happily continuing my experiments with oVirt and I need a
  suggestion regarding storage management.
 
  I'm using an iSCSI based Storage Domain and I'm trying to understand
  which is the correct way to extend its size some time after its
  creation.
 
  Please consider the following steps:
 
  1) Using storage specific tools create a new volume on iSCSI storage
  (e.g. 130.0 GiB)
 
  2) in oVirt webadmin: Storage - New Domain , select the previously
  created storage (130 GB), OK, after some time it's online:
   Size: 129 GB ; Available: 125 GB ; Used: 4 GB
 
  3) Use according to your needs (even noop is OK)
 
  4) Using storage specific tools expand the new volume (e.g. 20.0 GiB
  more, now it says 150.0GiB)
 
  5) in webadmin: Storage, click on the domain, Edit. Looking at
  LunsTargets , now it correctly sees 150GB
 
  BUT
  on the Storage tab it continues to be seen with
   Size: 129 GB ; Available: 125 GB ; Used: 4 GB
 
 
  Do you think it is possible to make oVirt aware of the new available
  storage space? Am I missing something obvious?
 
  My setup is oVirt 3.4.0 RC2 with fully patched CentOS 6.5 hosts.
 
  Best regards,
  Giorgio.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Reimporting storage domains after reinstalling ovirt

2014-03-16 Thread Liron Aravot


- Original Message -
 From: Boudewijn Ector boudew...@boudewijnector.nl
 To: Liron Aravot lara...@redhat.com
 Cc: Jason Brooks jbro...@redhat.com, users@ovirt.org
 Sent: Saturday, March 15, 2014 5:01:55 PM
 Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
 
 On 11-03-14 16:39, Liron Aravot wrote:
 
  - Original Message -
  From: Boudewijn Ector boudew...@boudewijnector.nl
  To: Jason Brooks jbro...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, March 11, 2014 1:32:00 AM
  Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
 
 
  Some people have reported success creating an image of the desired
  size, then noting the name of this new image, and copying the old
  image into the place of the new one, with the new name. Something
  like that might work, but I don't have first-hand experience w/
  it.
 
  Jason
  Hi Jason,
 
 
  Due to lack of viable alternative, I've decided to go and try this
  approach. I just had a look at my datafiles:
 
  - these are either 8gb (OS) or 250gb (LVM images)
  - can't mount those directly in my host OS (tried because of the next
  point)
  - I don't know to what VM this image/VM belongs . They're all quite
  the same (basic debian install), so determining it just by running
  strings etc on those will not be easy
  - I can't import the old VMs from the old storage. If I create new
  images and dd the old information into those new images the metadata
  will not be copied too.
 
   So the only option is not reusing the VMs, but creating completely
  new ones and determining which disk images are required for these VMs.
  Then creating the new image structure and dd'ing the corresponding
  images from the old VMs into the new ones. In order to do so I need to
  know what data belongs to what VM.
  Is there a trick for doing this?
 
  I still do have the database from the old ovirt machine, this might
  save me. Will have a look into that one tomorrow.
 
  Cheers,
 
  Boudewijn
  Hi Boudewijn,
   So we can proceed and recover your data, I'd like to know:
   1. Can you use the db backup? Will you lose any important data if you choose
   to use it?
   2. Did you have snapshots for your vm?

   Please answer so we can proceed with it; we can find methods for restoring
   without having to copy the images (and methods that restore with copying) - but
   each way has its implications.
  thanks,
  Liron.
 
 Hi Liron,
 
 
 
 Have you already been able to look at my reply on the list? It would be
 great for me to be able to make some decent progress this weekend.
 
 Cheers,
 
 Boudewijn

Hi Boudewijn,
if you have db backup and you won't lose any data using it - it would be the 
simplest approach.

Please read the following options carefully and keep a backup before 
attempting any of them -
For VMs that you don't have a space issue with, you can try the previously 
suggested approach, but it'll obviously take longer as it requires copying 
the data.

Option A -
*doesn't require copying the disks
*if your vms had snapshots involving disks - it won't work currently.

Let's try to restore a specific vm and continue from there - I'm adding the 
info here; if needed I'll test it on my own deployment.
A. First of all, let's get the disks attached to some vm; some options to do 
that:
*Under the webadmin UI, select a vm listed under the export domain; there 
should be a disks tab indicating which disks are attached to the vm - check if 
you can see the disk ids there.
B. Query the storage domain content using the rest-api - afaik we don't return 
that info from there, so let's skip that option.
1. Under the storage domain's storage directory, enter the /vms 
directory - you should see a bunch of OVF files there, each containing 
a vm configuration.
2. Open one specific ovf file - that's the vm that we'll attempt to restore.
*Within the ovf file, look for the string diskId and copy those ids 
aside; these should be the vm's attached disks.
*Copy the vm disk from the other storage domain, and edit the metadata 
accordingly to have the proper storage domain id listed.
*Try to import the disks using the method specified here:  
https://bugzilla.redhat.com/show_bug.cgi?id=886133
*After this, you should see the disks as floating; then you can add the vm 
using the OVF file we discussed in stage 2, using the method specified here:
http://gerrit.ovirt.org/#/c/15894/
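
The diskId extraction in step 2 of Option A can be sketched like this. The OVF
written below is a fabricated stand-in (real OVFs under the export domain's
/vms directory are much larger, and the disk uuids are placeholders), but the
same grep applies to them:

```shell
#!/bin/sh
# Fabricated sample OVF for illustration only; point OVF at a real file
# under the export domain's /vms directory instead.
OVF=/tmp/sample_vm.ovf
cat > "$OVF" <<'EOF'
<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/">
  <References>
    <File ovf:href="images/disk1" ovf:diskId="4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400"/>
    <File ovf:href="images/disk2" ovf:diskId="5c8bb733-4b0c-43a9-9471-0fde3d159fb2"/>
  </References>
</ovf:Envelope>
EOF

# List every diskId referenced by the vm configuration:
grep -o 'diskId="[^"]*"' "$OVF" | cut -d'"' -f2
```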


Option B -
*Replace the images data files with blank files
*Initiate Import for the vm, should be really quick obviously.
*As soon as the import starts, you can either:
1. Let the import finish and then replace the data; note that in that case the 
info saved in the engine db won't be correct (for example, the actual image 
size, etc.)
2. After the tasks for importing are created (you'll see that in the engine 
log), turn the engine down immediately (immediately means within a few seconds) 
and after the copy tasks

Re: [Users] Reimporting storage domains after reinstalling ovirt

2014-03-11 Thread Liron Aravot


- Original Message -
 From: Boudewijn Ector boudew...@boudewijnector.nl
 To: Jason Brooks jbro...@redhat.com
 Cc: users@ovirt.org
 Sent: Tuesday, March 11, 2014 1:32:00 AM
 Subject: Re: [Users] Reimporting storage domains after reinstalling ovirt
 
 
  
  Some people have reported success creating an image of the desired
  size, then noting the name of this new image, and copying the old
  image into the place of the new one, with the new name. Something
  like that might work, but I don't have first-hand experience w/
  it.
  
  Jason
 
 Hi Jason,
 
 
 Due to lack of viable alternative, I've decided to go and try this
 approach. I just had a look at my datafiles:
 
 - these are either 8gb (OS) or 250gb (LVM images)
 - can't mount those directly in my host OS (tried because of the next
 point)
 - I don't know to what VM this image/VM belongs . They're all quite
 the same (basic debian install), so determining it just by running
 strings etc on those will not be easy
 - I can't import the old VMs from the old storage. If I create new
 images and dd the old information into those new images the metadata
 will not be copied too.
 
 So the only option is not reusing the VMs, but creating completely
 new ones and determining which disk images are required for these VMs.
 Then creating the new image structure and dd'ing the corresponding
 images from the old VMs into the new ones. In order to do so I need to
 know what data belongs to what VM.
 Is there a trick for doing this?
 
 I still do have the database from the old ovirt machine, this might
 save me. Will have a look into that one tomorrow.
 
 Cheers,
 
 Boudewijn

Hi Boudewijn,
So we can proceed and recover your data, I'd like to know:
1. Can you use the db backup? Will you lose any important data if you choose to 
use it?
2. Did you have snapshots for your vm?

Please answer so we can proceed with it; we can find methods for restoring 
without having to copy the images (and methods that restore with copying) - but each 
way has its implications.
thanks,
Liron.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] How to get status of async operation via Java SDK

2014-03-09 Thread Liron Aravot
Juan, can you provide some info on that?
thanks.

- Original Message -
 From: Eric Bollengier e...@baculasystems.com
 To: users@ovirt.org
 Sent: Tuesday, March 4, 2014 12:29:09 PM
 Subject: [Users] How to get status of async operation via Java SDK
 
 Hello,
 
 From what I see in my tests, deleting a snapshot can take up to a few
 minutes (my test platform is not really a heavy production system, but
 my VMs are powered off all the time, so I have a hard time knowing why the
 KVM host has such huge CPU consumption for this operation).
 
 So, I would like to know when the delete is actually done, and if the
 status is OK. I have pieces of information about a CorrelationId, and I can
 read Events, but it's not really clear how to query the status of my
 operation; the Response object provides only a Type, and the
 documentation is oriented toward creation.
 
 
 
 Documentation or some examples would be welcome.
 
 Thanks,
 
 Best Regards,
 Eric
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-05 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Wednesday, March 5, 2014 1:19:34 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 22:35 GMT+01:00 Liron Aravot lara...@redhat.com:
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Liron Aravot lara...@redhat.com
  Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org
  Users@ovirt.org, fsimo...@redhat.com
  Sent: Tuesday, March 4, 2014 6:10:27 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:
  
  
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Liron Aravot lara...@redhat.com
   Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org
   Users@ovirt.org, fsimo...@redhat.com
   Sent: Tuesday, March 4, 2014 5:31:01 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
Hi Giorgio,
Apparently the issue is caused because there is no connectivity to
the
export domain and then we fail on spmStart - that's obviously a bug
that
shouldn't happen.
  
   Hi Liron,
   we are reaching the same conclusion.
  
can you open a bug for the issue?
   Surely I will
  
in the meanwhile, as it seems to still exist - seems to me like the
way
for
solving it would be either to fix the connectivity issue between vdsm
and
the storage domain or to downgrade your vdsm version to before this
issue
was introduced.
  
  
   I have some problems with your suggestion(s):
   - I cannot fix the connectivity between vdsm and the storage domain
    because, as I already said, it is exposed by a VM in this very same
    DataCenter, and if the DC doesn't go up, the NFS server can't either.
   - I don't understand what does it mean to downgrade the vdsm: to which
   point in time?
  
    It seems I've put myself - again - in a chicken-or-egg situation,
    where the SD depends on THIS export domain but
    the export domain isn't available if the DC isn't running.
  
   This export domain isn't that important to me. I can throw it away
   without any problem.
  
   What if we edit the DB and remove any instances related to it? Any
   adverse consequences?
  
  
   Ok, please perform a full db backup before attempting the following:
    1. Right-click on the domain and choose Destroy
    2. Move all hosts to maintenance
    3. Log into the database and run the following sql command:
   update storage_pool where id = '{you id goes here}' set
   master_domain_version = master_domain_version + 1;
   4. activate a host.
 
  Ok Liron, that did the trick!
 
 
 Just for the record, the correct command was this one:
 
 update storage_pool  set master_domain_version = master_domain_version + 1
 where id = '{your id goes here}' ;
 
 Best regards,
 Giorgio.
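
For reference, the whole recovery can be sketched end to end with the
corrected SQL. The pool uuid is a placeholder, and the psql invocation
assumes the default engine database name; keep a full db backup before
running anything, and note the commands are only printed (dry run):

```shell
#!/bin/sh
# Placeholder storage pool uuid -- substitute your data center's actual id.
POOL_ID=00000000-0000-0000-0000-000000000000
SQL="UPDATE storage_pool SET master_domain_version = master_domain_version + 1 WHERE id = '$POOL_ID';"

# 1) destroy the unreachable export domain in webadmin
# 2) move all hosts to maintenance
# 3) bump the master domain version (printed as a dry run):
echo psql -U engine engine -c "$SQL"
# 4) activate a host from webadmin
```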

Giorgio,
I've added the following patch to resolve the issue: 
http://gerrit.ovirt.org/#/c/25424/
Have you opened the bug for it? If so, please provide me the bug number so I 
can assign the patch to it.
Thanks.
 
  Up and running again, even that VM supposed to be the server acting as
  export domain.
 
  Now I've to run away as I'm late to a meeting but tomorrow I'll file a
  bug regarding this.
 
  Thanks to you and Meital for your assistance,
  Giorgio.
 
  Sure, happy that everything is fine!
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot
Hi Giorgio,
Apparently the issue is caused because there is no connectivity to the export 
domain and then we fail on spmStart - that's obviously a bug that shouldn't 
happen.
can you open a bug for the issue?
In the meanwhile, as the issue still seems to exist, the way to solve it 
would be either to fix the connectivity issue between vdsm and the 
storage domain or to downgrade your vdsm version to before this issue was 
introduced.

6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04 
13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['  
Volume group 1810e5eb-9e
b6-4797-ac50-8023a939f312 not found', '  Skipping volume group 
1810e5eb-9eb6-4797-ac50-8023a939f312']
6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 
1810e5eb-9eb6-4797-ac50-8023a
939f312 not found
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: 
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04 
13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
Traceback (most recent call last):
  File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
self._updateDomainsRole()
  File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
return method(self, *args, **kwargs)
  File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
domain = sdCache.produce(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 98, in produce
domain.getRealDomain()
  File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
domain = self._findDomain(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
dom = findMethod(sdUUID)
  File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)




- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 4:35:07 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Master data domain must be reachable in order for the DC to be up.
  Export domain shouldn't affect the dc status.
  Are you sure that you've created the export domain as an export domain, and
  not as a regular nfs?
 
 
 Yes, I am.
 
 Don't know how to extract this info from the DB, but in webadmin, in the
 storage list, I have this info:
 
 Domain Name: nfs02EXPORT
 Domain Type: Export
 Storage Type: NFS
 Format: V1
 Cross Data-Center Status: Inactive
 Total Space: [N/A]
 Free Space: [N/A]
 
 ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:16:19 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   StorageDomainDoesNotExist: Storage domain does not exist:
   (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  
   What's the output of:
   lvs
   vdsClient -s 0 getStorageDomainsList
  
   If it exists in the list, please run:
   vdsClient -s 0 getStorageDomainInfo 1810e5eb-9eb6-4797-ac50-8023a939f312
  
 
  I'm attaching a compressed archive to avoid mangling by googlemail client.
 
  Indeed the NFS storage with that id is not in the list of available
  storage as it is brought up by a VM that has to be run in this very
  same cluster. Obviously it isn't running at the moment.
 
  You find this in the DB:
 
  COPY storage_domain_static (id, storage, storage_name,
  storage_domain_type, storage_type, storage_domain_format_type,
  _create_date, _update_date, recoverable, last_time_used_as_master,
  storage_description, storage_comment) FROM stdin;
  ...
  1810e5eb-9eb6-4797-ac50-8023a939f312  11d4972d-f227-49ed-b997-f33cf4b2aa26
  nfs02EXPORT  3  1  0  2014-02-28 18:11:23.17092+01  \N  t  0  \N  \N
  ...
 
  Also, disks for that VM are carved from the Master Data Domain that is
  not available ATM.
 
  To say in other words: I thought that availability of an export domain
  wasn't critical to switch on a Data Center. Am I wrong?
 
  Thanks,
  Giorgio.
 
___
Users mailing list
Users@ovirt.org

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Meital Bourvine mbour...@redhat.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:06:13 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
  Ok, and is the iscsi functional at the moment?
 
 
 I think so.
 For example I see in the DB that the id of my Master Data Domain ,
 dt02clu6070,  is  a689cb30-743e-4261-bfd1-b8b194dc85db then
 
 [root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
   LV   VG
  Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
   4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
   5c8bb733-4b0c-43a9-9471-0fde3d159fb2
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
   7b617ab1-70c1-42ea-9303-ceffac1da72d
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
   e4b86b91-80ec-4bba-8372-10522046ee6b
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
   ids
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
   inbox
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
   leases
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
   master
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
   metadata
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
   outbox
 a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
 
 I can read from the LVs that have the LVM Available bit set:
 
 [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
 bs=1M of=/dev/null
 128+0 records in
 128+0 records out
 134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s
 
 [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
 bs=1M |od -xc |head -20
 00020101221000200030200
 020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
 0200001
  \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
 04000010007
 001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
 0603661393862633033
  \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
 100372d33342d6532343136622d64662d31
   -   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
 120386231623439636435386264
   b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
 1403638343839663932
  \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
 160612d62372d6638346564622d38302d35
   -   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
 200656363306539353766306364762e6f62
   c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
 22037782e307270006926de
   x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
 [root@vbox70 ~]#
 
 Obviously I can't read from LVs that aren't available:
 
 [root@vbox70 ~]# dd
 if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
 bs=1M of=/dev/null
 dd: apertura di
 `/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
 No such file or directory
 [root@vbox70 ~]#
 
 But those LVs are the VMs' disks and I suppose their availability is
 managed by oVirt
 

please see my previous mail on this thread, the issue seems to be with the 
connectivity to the nfs path, not the iscsi.
2014-03-04 13:15:41,167 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(DefaultQuartzScheduler_Worker-27) [1141851d] Correlation
 ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to connect 
Host vbox70 to the Storage Domains nfs02EXPORT.
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have these info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my 

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot
Adding Federico.
- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:11:36 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 
 
 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 5:06:13 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
  
  2014-03-04 15:38 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Ok, and is the iscsi functional at the moment?
  
  
  I think so.
  For example I see in the DB that the id of my Master Data Domain ,
  dt02clu6070,  is  a689cb30-743e-4261-bfd1-b8b194dc85db then
  
  [root@vbox70 ~]# lvs a689cb30-743e-4261-bfd1-b8b194dc85db
LV   VG
   Attr   LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,62g
5c8bb733-4b0c-43a9-9471-0fde3d159fb2
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---  11,00g
7b617ab1-70c1-42ea-9303-ceffac1da72d
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   3,88g
e4b86b91-80ec-4bba-8372-10522046ee6b
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi---   9,00g
ids
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-ao 128,00m
inbox
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
leases
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   2,00g
master
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a-   1,00g
metadata
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 512,00m
outbox
  a689cb30-743e-4261-bfd1-b8b194dc85db -wi-a- 128,00m
  
  I can read from the LVs that have the LVM Available bit set:
  
  [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
  bs=1M of=/dev/null
  128+0 records in
  128+0 records out
  134217728 bytes (134 MB) copied, 0,0323692 s, 4,1 GB/s
  
  [root@vbox70 ~]# dd if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/ids
  bs=1M |od -xc |head -20
  00020101221000200030200
  020   ! 022 002  \0 003  \0  \0  \0  \0  \0  \0 002  \0  \0
  0200001
   \0  \0  \0  \0  \0  \0  \0  \0 001  \0  \0  \0  \0  \0  \0  \0
  04000010007
  001  \0  \0  \0  \0  \0  \0  \0  \a  \0  \0  \0  \0  \0  \0  \0
  0603661393862633033
   \0  \0  \0  \0  \0  \0  \0  \0   a   6   8   9   c   b   3   0
  100372d33342d6532343136622d64662d31
-   7   4   3   e   -   4   2   6   1   -   b   f   d   1   -
  120386231623439636435386264
b   8   b   1   9   4   d   c   8   5   d   b  \0  \0  \0  \0
  1403638343839663932
   \0  \0  \0  \0  \0  \0  \0  \0   8   6   8   4   f   9   2   9
  160612d62372d6638346564622d38302d35
-   a   7   b   f   -   4   8   d   e   -   b   0   8   5   -
  200656363306539353766306364762e6f62
c   e   0   c   9   e   7   5   0   f   d   c   .   v   b   o
  22037782e307270006926de
x   7   0   .   p   r   i  \0 336 \0  \0  \0  \0  \0  \0
  [root@vbox70 ~]#
  
  Obviously I can't read from LVs that aren't available:
  
  [root@vbox70 ~]# dd
  if=/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400
  bs=1M of=/dev/null
  dd: apertura di
  `/dev/a689cb30-743e-4261-bfd1-b8b194dc85db/4a1be3d8-ac7d-46cf-ae1c-ba154bc9a400':
  No such file or directory
  [root@vbox70 ~]#
  
   But those LVs are the VMs' disks and I suppose their availability is
   managed by oVirt
  
 
 please see my previous mail on this thread, the issue seems to be with the
 connectivity to the nfs path, not the iscsi.
 2014-03-04 13:15:41,167 INFO
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (DefaultQuartzScheduler_Worker-27) [1141851d] Correlation
  ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to connect
  Host vbox70 to the Storage Domains nfs02EXPORT.
  
  
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:35:07 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
Master data domain must be reachable in order for the DC to be up.
Export domain shouldn't affect the dc status.
Are you sure that you've

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Giorgio Bersano giorgio.bers...@gmail.com
 Cc: users@ovirt.org Users@ovirt.org
 Sent: Tuesday, March 4, 2014 5:03:44 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 Hi Giorgio,
 Apparently the issue is caused because there is no connectivity to the export
 domain and then we fail on spmStart - that's obviously a bug that shouldn't
 happen.
 can you open a bug for the issue?
 In the meanwhile, as the issue still seems to exist, the way to solve it
 would be either to fix the connectivity issue between vdsm and
 the storage domain or to downgrade your vdsm version to before this issue
 was introduced.

By the way, a solution we can go with is to remove the domain manually from 
the engine and forcibly trigger reconstruction of the pool metadata, so that 
the issue should be resolved.

Note that if this happens to further domains in the future, the same procedure 
would be required.
The choice is yours - let me know which way you'd 
like to go.
 
 6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
 13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
 Volume group 1810e5eb-9e
 b6-4797-ac50-8023a939f312 not found', '  Skipping volume group
 1810e5eb-9eb6-4797-ac50-8023a939f312']
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
 1810e5eb-9eb6-4797-ac50-8023a
 939f312 not found
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 StorageDomainDoesNotExist: Storage domain does not exist:
 (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
 6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
 13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
 Traceback (most recent call last):
   File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
 self._updateDomainsRole()
   File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
 return method(self, *args, **kwargs)
   File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
 domain = sdCache.produce(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 98, in produce
 domain.getRealDomain()
   File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
 return self._cache._realProduce(self._sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
 domain = self._findDomain(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
 dom = findMethod(sdUUID)
   File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
 raise se.StorageDomainDoesNotExist(sdUUID)
 
 
 
 
 - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
  
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
  
  Yes, I am.
  
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have this info:
  
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
  
  ATM my only Data Domain is based on iSCSI, no NFS.
  
  
  
  
  
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data Center Non Responsive / Contending
  
   2014-03-04 14:48 GMT+01:00 Meital Bourvine mbour...@redhat.com:
StorageDomainDoesNotExist: Storage domain does not exist:
(u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
   
What's the output of:
lvs
vdsClient -s 0 getStorageDomainsList
   
If it exists in the list, please run:
vdsClient -s 0 getStorageDomainInfo
1810e5eb-9eb6-4797-ac50-8023a939f312
   
  
   I'm attaching a compressed archive to avoid mangling by googlemail
   client.
  
   Indeed the NFS storage with that id is not in the list of available
   storage as it is brought up by a VM that has to be run in this very
   same cluster. Obviously it isn't running at the moment.
  
   You find this in the DB:
  
   COPY storage_domain_static (id, storage, storage_name,
   storage_domain_type, storage_type, storage_domain_format_type,
   _create_date, _update_date, recoverable

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 5:31:01 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
  Hi Giorgio,
  Apparently the issue is caused because there is no connectivity to the
  export domain and then we fail on spmStart - that's obviously a bug that
  shouldn't happen.
 
 Hi Liron,
 we are reaching the same conclusion.
 
  can you open a bug for the issue?
 Surely I will
 
  In the meantime, as the issue seems to persist, the way to resolve it would be
  either to fix the connectivity between vdsm and the storage domain, or to
  downgrade your vdsm to a version from before this issue was introduced.
 
 
 I have some problems with your suggestion(s):
 - I cannot fix the connectivity between vdsm and the storage domain
 because, as I already said, it is exposed by a VM in this very same
 DataCenter, and if the DC doesn't go up, the NFS server can't either.
 - I don't understand what it means to downgrade vdsm: to which
 point in time?
 
 It seems I've put myself - again - in a chicken-and-egg situation,
 where the SD depends on THIS export domain but
 the export domain isn't available if the DC isn't running.
 
 This export domain isn't that important to me. I can throw it away
 without any problem.
 
 What if we edit the DB and remove any instances related to it? Any
 adverse consequences?
 

Ok, please perform a full db backup before attempting the following:
1. Right-click on the domain and choose Destroy.
2. Move all hosts to maintenance.
3. Log in to the database and run the following SQL command:
update storage_pool set master_domain_version = master_domain_version + 1
where id = '{your id goes here}';
4. Activate a host.
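The SQL step above can be sketched as a small shell snippet. This is a hedged sketch: the pool UUID is a placeholder (take the real one from `select id, name from storage_pool;`), and the database/user names in the commented psql invocation are assumptions to adjust for your setup.

```shell
# Placeholder storage pool UUID - replace with your own
# (from: select id, name from storage_pool;)
POOL_ID="00000000-0000-0000-0000-000000000000"

# Bump the master domain version (step 3 of the procedure above):
SQL="update storage_pool set master_domain_version = master_domain_version + 1 where id = '${POOL_ID}';"
echo "${SQL}"

# Run it against the engine database (db/user names are assumptions),
# then activate a host:
#   psql -U postgres -d engine -c "${SQL}"
```

Take the full db backup first, as noted above, since this edits engine state directly.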
 
 
 
  6a519e95-62ef-445b-9a98-f05c81592c85::WARNING::2014-03-04
  13:05:31,489::lvm::377::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
  Volume group 1810e5eb-9e
  b6-4797-ac50-8023a939f312 not found', '  Skipping volume group
  1810e5eb-9eb6-4797-ac50-8023a939f312']
  6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
  13:05:31,499::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
  1810e5eb-9eb6-4797-ac50-8023a
  939f312 not found
  Traceback (most recent call last):
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
  StorageDomainDoesNotExist: Storage domain does not exist:
  (u'1810e5eb-9eb6-4797-ac50-8023a939f312',)
  6a519e95-62ef-445b-9a98-f05c81592c85::ERROR::2014-03-04
  13:05:31,500::sp::329::Storage.StoragePool::(startSpm) Unexpected error
  Traceback (most recent call last):
File /usr/share/vdsm/storage/sp.py, line 296, in startSpm
  self._updateDomainsRole()
File /usr/share/vdsm/storage/securable.py, line 75, in wrapper
  return method(self, *args, **kwargs)
File /usr/share/vdsm/storage/sp.py, line 205, in _updateDomainsRole
  domain = sdCache.produce(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 98, in produce
  domain.getRealDomain()
File /usr/share/vdsm/storage/sdc.py, line 52, in getRealDomain
  return self._cache._realProduce(self._sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 122, in _realProduce
  domain = self._findDomain(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 141, in _findDomain
  dom = findMethod(sdUUID)
File /usr/share/vdsm/storage/sdc.py, line 171, in _findUnfetchedDomain
  raise se.StorageDomainDoesNotExist(sdUUID)
 
 
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Meital Bourvine mbour...@redhat.com
  Cc: users@ovirt.org Users@ovirt.org
  Sent: Tuesday, March 4, 2014 4:35:07 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 15:23 GMT+01:00 Meital Bourvine mbour...@redhat.com:
   Master data domain must be reachable in order for the DC to be up.
   Export domain shouldn't affect the dc status.
   Are you sure that you've created the export domain as an export domain,
   and
   not as a regular nfs?
  
 
  Yes, I am.
 
  Don't know how to extract this info from DB, but in webadmin, in the
  storage list, I have this info:
 
  Domain Name: nfs02EXPORT
  Domain Type: Export
  Storage Type: NFS
  Format: V1
  Cross Data-Center Status: Inactive
  Total Space: [N/A]
  Free Space: [N/A]
 
  ATM my only Data Domain is based on iSCSI, no NFS.
 
 
 
 
 
   - Original Message -
   From: Giorgio Bersano giorgio.bers...@gmail.com
   To: Meital Bourvine mbour...@redhat.com
   Cc: users@ovirt.org Users@ovirt.org
   Sent: Tuesday, March 4, 2014 4:16:19 PM
   Subject: Re: [Users] Data

Re: [Users] Data Center Non Responsive / Contending

2014-03-04 Thread Liron Aravot


- Original Message -
 From: Giorgio Bersano giorgio.bers...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org 
 Users@ovirt.org, fsimo...@redhat.com
 Sent: Tuesday, March 4, 2014 6:10:27 PM
 Subject: Re: [Users] Data Center Non Responsive / Contending
 
 2014-03-04 16:37 GMT+01:00 Liron Aravot lara...@redhat.com:
 
 
  - Original Message -
  From: Giorgio Bersano giorgio.bers...@gmail.com
  To: Liron Aravot lara...@redhat.com
  Cc: Meital Bourvine mbour...@redhat.com, users@ovirt.org
  Users@ovirt.org, fsimo...@redhat.com
  Sent: Tuesday, March 4, 2014 5:31:01 PM
  Subject: Re: [Users] Data Center Non Responsive / Contending
 
  2014-03-04 16:03 GMT+01:00 Liron Aravot lara...@redhat.com:
   Hi Giorgio,
   Apparently the issue is caused because there is no connectivity to the
   export domain and then we fail on spmStart - that's obviously a bug that
   shouldn't happen.
 
  Hi Liron,
  we are reaching the same conclusion.
 
   can you open a bug for the issue?
  Surely I will
 
   In the meantime, as the issue seems to persist, the way to resolve it
   would be either to fix the connectivity between vdsm and the storage
   domain, or to downgrade your vdsm to a version from before this issue
   was introduced.
 
 
  I have some problems with your suggestion(s):
  - I cannot fix the connectivity between vdsm and the storage domain
  because, as I already said, it is exposed by a VM in this very same
  DataCenter, and if the DC doesn't go up, the NFS server can't either.
  - I don't understand what it means to downgrade vdsm: to which
  point in time?
 
  It seems I've put myself - again - in a chicken-and-egg situation,
  where the SD depends on THIS export domain but
  the export domain isn't available if the DC isn't running.
 
  This export domain isn't that important to me. I can throw it away
  without any problem.
 
  What if we edit the DB and remove any instances related to it? Any
  adverse consequences?
 
 
  Ok, please perform a full db backup before attempting the following:
  1. Right-click on the domain and choose Destroy.
  2. Move all hosts to maintenance.
  3. Log in to the database and run the following SQL command:
  update storage_pool set master_domain_version =
  master_domain_version + 1 where id = '{your id goes here}';
  4. Activate a host.
 
 Ok Liron, that did the trick!
 
 Up and running again, even that VM supposed to be the server acting as
 export domain.
 
 Now I've to run away as I'm late to a meeting but tomorrow I'll file a
 bug regarding this.
 
 Thanks to you and Meital for your assistance,
 Giorgio.

Sure, happy that everything is fine!
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Master storage domain version mismatch

2013-12-10 Thread Liron Aravot
Thanks,
Could you please also add the number of hosts/domains and the vdsm log to the
bug scenario?

- Original Message -
 From: Markus Stockhausen stockhau...@collogia.de
 To: Liron Aravot lara...@redhat.com
 Cc: ovirt-users users@ovirt.org
 Sent: Tuesday, December 10, 2013 10:48:43 AM
 Subject: AW: [Users] Master storage domain version mismatch
 
  Von: Liron Aravot [lara...@redhat.com]
  Gesendet: Dienstag, 10. Dezember 2013 08:28
  An: Markus Stockhausen
  Cc: ovirt-users
  Betreff: Re: [Users] Master storage domain version mismatch
  
  Hi Markus,
  To identify the exact issue you had I'll need to take a look at the
  vdsm/engine logs - it would be great if you could attach those.
  Thanks.
 
 Hello logs have been attached to bug 1039835.
 
 Best regards.
 
 Markus
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Master storage domain version mismatch

2013-12-09 Thread Liron Aravot
Hi Markus,
To identify the exact issue you had I'll need to take a look at the
vdsm/engine logs - it would be great if you could attach those.
Thanks.

- Original Message -
 From: Markus Stockhausen stockhau...@collogia.de
 To: ovirt-users users@ovirt.org
 Sent: Tuesday, December 10, 2013 8:41:45 AM
 Subject: [Users] Master storage domain version mismatch
 
 Hello,
 
 We are on ovirt 3.3.1 and I just filed BZ 1039835 because our master
 storage domain could not be activated any longer after some NFS failure
 tests (pulling cables, modifying /etc/exports, ...).
 
 This bug seems to happen from time to time. Several BZ and mailing list
 entries suggest that the logic to track the domain version in critical
 situations is not yet perfect.
 
 I was able to clean the situation with a SQL modification in the database
 update storage_pool set master_domain_version=...;
 
 As it may be interesting for others, could someone clarify whether there
 might be an issue with atomic operations? Admins may have enough headaches
 getting NFS up and running after some kind of failure; oVirt should not
 add to them.
 
 Markus
 
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Export not exporting the disk

2013-11-11 Thread Liron Aravot
Hi Juan,
When exporting a VM, direct LUN disks aren't part of the exported data.

Thanks,
Liron

- Original Message -
 From: Juan Pablo Lorier jplor...@gmail.com
 To: users users@ovirt.org
 Sent: Monday, November 11, 2013 5:26:17 PM
 Subject: [Users] Export not exporting the disk
 
 Hi,
 
 I've already posted before but the thread went dead. If I export a
 stopped VM with a direct-lun disk, it creates the export without errors but
 doesn't export the disk. I was told that it should.
 How can I debug this?
 Regards,
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Problems with snapshots

2013-10-31 Thread Liron Aravot
Hi Juan,
When you take a snapshot, it is taken only for the image disks -
LUN/shareable disks aren't part of the snapshot.

- Original Message -
 From: Juan Pablo Lorier jplor...@gmail.com
 To: users users@ovirt.org
 Sent: Thursday, October 31, 2013 6:24:27 PM
 Subject: [Users] Problems with snapshots
 
 Hi,
 
 I have a DC with an ISCSI data storage and several VMs running with direct
 luns discs for each one. Ovirt is 3.2 all running on top of centos 6.4
 minimal(hosts and engine). I want to take snapshots of the vms for backup
 but though it claims everything went right, there's no file or anything
 created anywhere and the details of the snapshot shows no disks. Where
 should I look for the snapshot? what info do you need to give me a hand on
 this?
 Here's a capture of the web admin snapshot tab:
 
 
 
 Regards,
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Disk state - Illegal?

2013-10-14 Thread Liron Aravot


- Original Message -
 From: Andrew Lau and...@andrewklau.com
 To: Dan Ferris dfer...@prometheusresearch.com
 Cc: Users@ovirt.org
 Sent: Monday, October 14, 2013 4:53:51 AM
 Subject: Re: [Users] Disk state - Illegal?
 
 I was just wondering, did anyone ever find the root cause of this issue?

Hi,
A disk will be in Illegal status when oVirt fails to initiate the delete
task for it: since the engine can't tell the exact disk state at that stage
(at what phase the delete operation failed), the disk status is shown as
Illegal.
There can therefore be many possible causes - any issue that fails the
creation of the delete task (permissions issues among them) will leave the
disk in that status, and the deletion won't succeed until those issues are
resolved.

 
 Mine still seems to be doing the same thing, is it a permission issue or a
 bug?
 
 
 On Fri, Sep 27, 2013 at 12:28 PM, Dan Ferris  dfer...@prometheusresearch.com
  wrote:
 
 
 I was off today, so I just saw this. I will try it out tomorrow.
 
 Thanks!
 
 Dan
 
 
 On 9/25/2013 8:53 PM, Andrew Lau wrote:
 
 
 
 I noticed that too, I wasn't sure if it was a bug or just how I had
 setup my NFS share..
 
 There were three steps I did to remove the disk images, I'm sure there's
 a 100% easier solution..:
 
 I found the easiest way (graphically) was to go to your
 https://ovirtengine/api/disks and do a search for the illegal disk.
 Append the extra ID, e.g. disk href=/api/disks/lk342-dfsdf...
 
 into your URL; this'll give you your image ID.
 
 Go to your storage share:
 cd /data/storage-id/master/vms/storage-id
 grep -ir 'vmname' *
 You'll find the image-id reference here too.
 
 Then the image you will want to remove is in
 /data/storage-id/images/image-id
 I assume you could safely remove this whole folder if you wanted to
 delete the disk.
 
 To remove the illegal state I did it through the API, so again with the
 URL above (https://ovirtengine/api/disks/disk-id) send a DELETE using HTTP/CURL.
 
 Again, this was a poor man's solution but it worked for me.
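The API workaround quoted above can be sketched as follows. This is a hedged sketch: the engine URL, credentials, and disk id are all placeholders, and `-k` (skip certificate checking) is only acceptable for a lab setup.

```shell
ENGINE="https://ovirtengine"                    # placeholder engine URL
DISK_ID="00000000-0000-0000-0000-000000000000"  # placeholder - find it under ${ENGINE}/api/disks

# 1. Confirm the disk is the illegal one:
#    curl -k -u 'admin@internal:password' "${ENGINE}/api/disks/${DISK_ID}"
# 2. Send the DELETE to clear the illegal state:
#    curl -k -u 'admin@internal:password' -X DELETE "${ENGINE}/api/disks/${DISK_ID}"
echo "DELETE ${ENGINE}/api/disks/${DISK_ID}"
```

As with the manual file removal, double-check the disk id against the VM name first, since the DELETE is not reversible.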
 
 On Thu, Sep 26, 2013 at 4:04 AM, Dan Ferris
  dferris@prometheusresearch.com wrote:
 
 
 Hi,
 
 I have another hopefully simple question.
 
 One VM that I am trying to remove says that it's disk state is
 illegal and when I try to remove the disk it says that it failed
 to initiate the removing of the disk.
 
 Is there an easy way to get rid of these illegal disk images?
 
 Dan
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Broken Main Datastore Gluster

2013-10-07 Thread Liron Aravot
Hi Emiliano,
Please attach the engine and vdsm logs so I can take a look at what went
wrong.
Thanks,
Liron

- Original Message -
 From: emi...@gmail.com
 To: users@ovirt.org
 Sent: Monday, October 7, 2013 3:10:51 PM
 Subject: [Users] Broken Main Datastore Gluster
 
 Hi,
 
 I have a gluster cluster configured. I've restarted both nodes and the
 datastore now gives me the error Failed to activate Storage Domain
 ds_cluster1_gfs (Data Center ing_GFS) by admin@internal.
 
 I would like to be able to recover the datastore but I've just tried to
 destroy it and it gives me an error too.
 
 Any ideas?
 
 Regards!
 
 --
 Emiliano Tortorella
 +598 98941176
 emi...@gmail.com
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] About the Backup-Restore API Integration

2013-10-06 Thread Liron Aravot


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: emi...@gmail.com
 Cc: users@ovirt.org
 Sent: Wednesday, October 2, 2013 10:23:04 PM
 Subject: Re: [Users] About the Backup-Restore API Integration
 
 On 10/02/2013 01:44 AM, emi...@gmail.com wrote:
  Hi,
 
  I'm testing ovirt to replace ESXi. Right now I want know if there is
  something like guettoVCB.sh
  (https://communities.vmware.com/docs/DOC-8760) that i can use with oVirt.
 
  I'm checking the following link:
  http://www.ovirt.org/Features/Backup-Restore_API_Integration but i don't
  understand if this feature it's aviable right now and if it allow me to
  do something like what guettoVCB.sh do.

Hi Emiliano,
The backup/restore API is a set of operations supported by oVirt.
Support for some of the operations is already merged, and the rest should be
merged upstream soon.

 
  Regards!
  Emiliano
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 it missed 3.3.0 (which should be mentioned in the RN).
 it may get into 3.3.1 or 3.3.2 still.
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Designate Master Storage Domain

2013-08-15 Thread Liron Aravot


- Original Message -
 From: Dead Horse deadhorseconsult...@gmail.com
 To: Itamar Heim ih...@redhat.com
 Cc: users@ovirt.org users@ovirt.org
 Sent: Thursday, August 15, 2013 6:46:00 PM
 Subject: Re: [Users] Designate Master Storage Domain
 
 Itamar this is true (I have noted occasional timing issues with it actually
 working).
 But what if, as the administrator, I have a specific storage domain in mind
 that I would like to have become the master (in the case of more than two)?

In that case - you could reinitialize the pool using an unattached domain (you
choose which one) - this domain will become the new master.
1. Create a new unattached domain, or detach the domain that you want to be the
master from the pool (if possible).
2. Move all the pool domains to maintenance (the master should be put into
maintenance last, to avoid reconstructing onto another domain).
3. Right-click on the data center and choose "Reinitialize storage pool" - you
can choose which of the unattached domains you want to be the master.

The other option is manual db intervention :)
Possibly we could add a way to manually change the master domain, same
as was added for the SPM (the "Make SPM" button that was added recently).


 
 @Karli
 The idea is to not have to shut down all the VMs or the engine just to
 do maintenance on storage domain(s) that may happen to be on disparate
 storage servers.
 - DHC
 
 
 On Thu, Aug 15, 2013 at 9:37 AM, Itamar Heim  ih...@redhat.com  wrote:
 
 
 
 On 08/15/2013 06:18 AM, Dead Horse wrote:
 
 
 
 Is there any method of designating which domain should be the master
 storage domain or forcibly changing the role to a different storage domain?
 
 EG: Given the following example
 
 Storage Domain A (Master) -- NFS -- Storage Server 1
 Storage Domain B -- NFS -- Storage Server 2
 
 One wants to do maintenance to Storage Server 1 but in doing so the
 Master storage domain is hosted from Storage Server 1. Thus the net
 result of taking down Storage Server 1 is that one must also take down
 Storage Server 2.
 
 Thus we know we must shut down VM's from Storage Domain A to maintenance
 Storage Server 1. Suppose however that VM's are running that we don't
 want to shut down and are hosted from Storage Domain B via Storage Server 2.
 
 We would want to be able to promote Storage Domain B to Master so that
 we can take down Storage Domain A to do maintenance to Storage Server 1.
 
 Once we are done with maintenance to Storage Server 1 we can bring
 Storage Domain A back on line, re-designate it as Master if desired and
 bring it's VM's back online.
 
 I know I have seen this occur automatically to a point when a Storage
 Domain goes missing that is the Master Domain but I have not noted any
 manual method of doing so given the above scenario.
 
 - DHC
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
 
 my understanding is you can move the storage domain A which is master to
 maint. engine will promote storage domain B to master and everything should
 continue working as is.
 
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] All VMs disappeared

2013-08-14 Thread Liron Aravot
Jakub, did only your VMs disappear? What about the disks?

- Original Message -
 From: Laszlo Hornyak lhorn...@redhat.com
 To: Jakub Bittner j.bitt...@nbu.cz
 Cc: users@ovirt.org
 Sent: Wednesday, August 14, 2013 10:39:48 AM
 Subject: Re: [Users] All VMs disappeared
 
 
 
 - Original Message -
  From: Jakub Bittner j.bitt...@nbu.cz
  To: users@ovirt.org
  Sent: Wednesday, August 14, 2013 8:09:45 AM
  Subject: Re: [Users] All VMs disappeared
  
  Dne 13.8.2013 18:29, Laszlo Hornyak napsal(a):
   Ahoj Jakub,
  
   Just one more idea: could you turn statement logging on in postgresql
   before you try again?
   It is in /var/lib/pgsql/data/postgresql.conf
   log_statement = 'all'
  
  
   - Original Message -
   From: Jakub Bittner j.bitt...@nbu.cz
   To: Itamar Heim ih...@redhat.com
   Cc: users@ovirt.org
   Sent: Tuesday, August 13, 2013 5:20:29 PM
   Subject: Re: [Users] All VMs disappeared
  
   Dne 13.8.2013 17:02, Itamar Heim napsal(a):
   On 08/13/2013 05:31 PM, Jakub Bittner wrote:
   Dne 13.8.2013 15:39, Itamar Heim napsal(a):
   On 08/13/2013 08:50 AM, Jakub Bittner wrote:
   Dne 12.8.2013 22:44, Itamar Heim napsal(a):
   On 08/12/2013 06:59 PM, Jakub Bittner wrote:
   Dne 12.8.2013 14:54, Jakub Bittner napsal(a):
   Dne 12.8.2013 14:29, Laszlo Hornyak napsal(a):
   I looked around Noam's patch and that should not cause such
   behavior.
   I am wondering how that lost VM's could happen.
  
   Jakub, can you give a more detailed description what you were
   doing
   with oVirt when this happened? Maybe the bug is still there.
  
   Thank you,
   Laszlo
  
   - Original Message -
   From: Greg Sheremeta gsher...@redhat.com
   To: Laszlo Hornyak lhorn...@redhat.com
   Cc: Jakub Bittner j.bitt...@nbu.cz, Noam Slomianko
   nslom...@redhat.com, users@ovirt.org
   Sent: Monday, August 12, 2013 1:23:40 PM
   Subject: Re: [Users] All VMs disappeared
  
   Not the one I fixed, 987907. It was a simple UI
   NullPointerException.
  
   Greg
  
  
   - Original Message -
   From: Laszlo Hornyak lhorn...@redhat.com
   To: Jakub Bittner j.bitt...@nbu.cz, Noam Slomianko
   nslom...@redhat.com, Greg Sheremeta
   gsher...@redhat.com
   Cc: users@ovirt.org
   Sent: Monday, August 12, 2013 7:21:26 AM
   Subject: Re: [Users] All VMs disappeared
  
   Well if they are no longer in DB then that explains why the
   exception no
   longer occurs, but at the cost of database corruption.
   Noam and Greg, can these bugs cause data corruption?
  
   Thank you,
   Laszlo
  
   - Original Message -
   From: Jakub Bittner j.bitt...@nbu.cz
   To: Greg Sheremeta gsher...@redhat.com
   Cc: users@ovirt.org, Laszlo Hornyak lhorn...@redhat.com
   Sent: Monday, August 12, 2013 9:05:09 AM
   Subject: Re: [Users] All VMs disappeared
  
   Dne 10.8.2013 01:54, Greg Sheremeta napsal(a):
   It could also be this bug[1], for which I just submitted a
   fix.
  
   [1] https://bugzilla.redhat.com/show_bug.cgi?id=987907
  
   You can work around it by typing just VMs: (without the
   quotes) in
   the
   search bar.
  
   Greg
  
  
   - Original Message -
   From: Laszlo Hornyak lhorn...@redhat.com
   To: Jakub Bittner j.bitt...@nbu.cz
   Cc: users@ovirt.org
   Sent: Friday, August 9, 2013 11:21:26 AM
   Subject: Re: [Users] All VMs disappeared
  
   Hi Jakub,
  
   Could you check through DB or REST-API if the VM's are in
   your
   DB?
   select * from vm_static;
   or
   curl -u admin@internal:blablabla [engine-url]api/vms
  
   It seems Noam fixed this issue already in
   c2295c31fa645e1ba1b94cd557bd1fecb40c8829.
  
   Thank you,
   Laszlo
  
   - Original Message -
   From: Jakub Bittner j.bitt...@nbu.cz
   To: users@ovirt.org
   Sent: Friday, August 9, 2013 1:06:53 PM
   Subject: Re: [Users] All VMs disappeared
  
   Dne 9.8.2013 09:48, Jakub Bittner napsal(a):
   Hello,
  
   I am running ovirt 3.3.0.beta1 on CentOS 6.4 and all our
   VMs
   disappeared from VMs tab. Nodes running on centos too.
   Repeating
   problem in log is:
  
  
   2013-08-09 09:44:24,203 WARN
   [org.ovirt.engine.core.vdsbroker.VdsManager]
   (DefaultQuartzScheduler_Worker-45) Failed to refresh VDS
   ,
   vds =
   7cb6aedf-47bc-40b0-877f-2a537fca5c64 : node2.x.com, error
   =
   java.lang.NullPointerException, continuing.:
   java.lang.NullPointerException
  at
   org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.proceedGuaranteedMemoryCheck(VdsUpdateRunTimeInfo.java:1313)
  
  
  
  
   [vdsbroker.jar:]
  at
   org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVmStats(VdsUpdateRunTimeInfo.java:968)
  
  
  
  
   [vdsbroker.jar:]
  at
   org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVdsRunTimeInfo(VdsUpdateRunTimeInfo.java:542)
  
  
  
  
   [vdsbroker.jar:]
  at
   org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.Refresh(VdsUpdateRunTimeInfo.java:383)
  
  
  
  
   [vdsbroker.jar:]
  at
   

Re: [Users] All VMs disappeared

2013-08-14 Thread Liron Aravot


- Original Message -
 From: Itamar Heim ih...@redhat.com
 To: Jakub Bittner j.bitt...@nbu.cz
 Cc: Michal Skrivanek mskri...@redhat.com, users@ovirt.org
 Sent: Wednesday, August 14, 2013 5:27:17 PM
 Subject: Re: [Users] All VMs disappeared
 
 On 08/14/2013 05:19 PM, Jakub Bittner wrote:
  Dne 14.8.2013 10:48, Jakub Bittner napsal(a):
  Dne 14.8.2013 09:57, Liron Aravot napsal(a):
  Jakub, did only your VMs disappear? What about the disks?
 
  - Original Message -
  From: Laszlo Hornyak lhorn...@redhat.com
  To: Jakub Bittner j.bitt...@nbu.cz
  Cc: users@ovirt.org
  Sent: Wednesday, August 14, 2013 10:39:48 AM
  Subject: Re: [Users] All VMs disappeared
 
 
 
  - Original Message -
  From: Jakub Bittner j.bitt...@nbu.cz
  To: users@ovirt.org
  Sent: Wednesday, August 14, 2013 8:09:45 AM
  Subject: Re: [Users] All VMs disappeared
 
  Dne 13.8.2013 18:29, Laszlo Hornyak napsal(a):
  Ahoj Jakub,
 
  Just one more idea: could you turn statement logging on in postgresql
  before you try again?
  It is in /var/lib/pgsql/data/postgresql.conf
  log_statement = 'all'
 
 
  - Original Message -
  From: Jakub Bittner j.bitt...@nbu.cz
  To: Itamar Heim ih...@redhat.com
  Cc: users@ovirt.org
  Sent: Tuesday, August 13, 2013 5:20:29 PM
  Subject: Re: [Users] All VMs disappeared
 
  Dne 13.8.2013 17:02, Itamar Heim napsal(a):
  On 08/13/2013 05:31 PM, Jakub Bittner wrote:
  Dne 13.8.2013 15:39, Itamar Heim napsal(a):
  On 08/13/2013 08:50 AM, Jakub Bittner wrote:
  Dne 12.8.2013 22:44, Itamar Heim napsal(a):
  On 08/12/2013 06:59 PM, Jakub Bittner wrote:
  Dne 12.8.2013 14:54, Jakub Bittner napsal(a):
  Dne 12.8.2013 14:29, Laszlo Hornyak napsal(a):
  I looked around Noam's patch and that should not cause such
  behavior.
  I am wondering how that lost VM's could happen.
 
  Jakub, can you give a more detailed description what you
  were
  doing
  with oVirt when this happened? Maybe the bug is still there.
 
  Thank you,
  Laszlo
 
  - Original Message -
  From: Greg Sheremeta gsher...@redhat.com
  To: Laszlo Hornyak lhorn...@redhat.com
  Cc: Jakub Bittner j.bitt...@nbu.cz, Noam Slomianko
  nslom...@redhat.com, users@ovirt.org
  Sent: Monday, August 12, 2013 1:23:40 PM
  Subject: Re: [Users] All VMs disappeared
 
  Not the one I fixed, 987907. It was a simple UI
  NullPointerException.
 
  Greg
 
 
  - Original Message -
  From: Laszlo Hornyak lhorn...@redhat.com
  To: Jakub Bittner j.bitt...@nbu.cz, Noam Slomianko
  nslom...@redhat.com, Greg Sheremeta
  gsher...@redhat.com
  Cc: users@ovirt.org
  Sent: Monday, August 12, 2013 7:21:26 AM
  Subject: Re: [Users] All VMs disappeared
 
  Well if they are no longer in DB then that explains why
  the
  exception no
  longer occurs, but at the cost of database corruption.
  Noam and Greg, can these bugs cause data corruption?
 
  Thank you,
  Laszlo
 
  - Original Message -
  From: Jakub Bittner j.bitt...@nbu.cz
  To: Greg Sheremeta gsher...@redhat.com
  Cc: users@ovirt.org, Laszlo Hornyak
  lhorn...@redhat.com
  Sent: Monday, August 12, 2013 9:05:09 AM
  Subject: Re: [Users] All VMs disappeared
 
  Dne 10.8.2013 01:54, Greg Sheremeta napsal(a):
  It could also be this bug[1], for which I just
  submitted a
  fix.
 
  [1] https://bugzilla.redhat.com/show_bug.cgi?id=987907
 
  You can work around it by typing just VMs: (without
  the
  quotes) in
  the
  search bar.
 
  Greg
 
 
  - Original Message -
  From: Laszlo Hornyak lhorn...@redhat.com
  To: Jakub Bittner j.bitt...@nbu.cz
  Cc: users@ovirt.org
  Sent: Friday, August 9, 2013 11:21:26 AM
  Subject: Re: [Users] All VMs disappeared
 
  Hi Jakub,
 
  Could you check through the DB or the REST API whether the VMs
  are still in your DB?
  select * from vm_static;
  or
  curl -u admin@internal:blablabla [engine-url]api/vms
 
  It seems Noam fixed this issue already in
  c2295c31fa645e1ba1b94cd557bd1fecb40c8829.
 
  Thank you,
  Laszlo
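
  A minimal sketch of the two checks suggested above; the engine URL, database
  name, and password below are placeholders, not values from this thread:

  ```shell
  # Hypothetical values -- substitute your engine host, DB name, and password.
  ENGINE_URL="https://engine.example.com"
  ENGINE_DB="engine"

  # 1) Check the database directly (run on the engine host):
  #      sudo -u postgres psql "$ENGINE_DB" -c "SELECT vm_guid, vm_name FROM vm_static;"
  # 2) Check through the REST API (-k skips cert verification on a test box):
  #      curl -sk -u 'admin@internal:PASSWORD' "$ENGINE_URL/api/vms"
  # If the VMs show up in neither place, they are really gone from the DB,
  # not just hidden by a UI bug.
  echo "check targets: $ENGINE_DB on $ENGINE_URL"
  ```

  If the REST API lists the VMs but the UI does not, the problem is in the
  frontend rather than the database.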
 
  - Original Message -
  From: Jakub Bittner j.bitt...@nbu.cz
  To: users@ovirt.org
  Sent: Friday, August 9, 2013 1:06:53 PM
  Subject: Re: [Users] All VMs disappeared
 
  Dne 9.8.2013 09:48, Jakub Bittner napsal(a):
  Hello,
 
  I am running oVirt 3.3.0.beta1 on CentOS 6.4, and all our
  VMs disappeared from the VMs tab. The nodes run CentOS
  too. The repeating problem in the log is:
 
 
  2013-08-09 09:44:24,203 WARN
  [org.ovirt.engine.core.vdsbroker.VdsManager]
  (DefaultQuartzScheduler_Worker-45) Failed to
  refresh VDS
  ,
  vds =
  7cb6aedf-47bc-40b0-877f-2a537fca5c64 :
  node2.x.com, error
  =
  java.lang.NullPointerException, continuing.:
  java.lang.NullPointerException
  at
  org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.proceedGuaranteedMemoryCheck(VdsUpdateRunTimeInfo.java:1313)
 
 
 
 
 
  [vdsbroker.jar:]
  at
  org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo.refreshVmStats(VdsUpdateRunTimeInfo.java:968)
 
 
 
 
 
  [vdsbroker.jar

Re: [Users] Copy disks?

2013-06-04 Thread Liron Aravot
Hi,
The copy operation is currently supported for template disks.
If you have a template, try to select one of its disks, and the operation
should become available.


- Original Message -
 From: noc n...@nieuwland.nl
 To: users users@ovirt.org
 Sent: Tuesday, June 4, 2013 11:12:06 AM
 Subject: Re: [Users] Copy disks?
 
 On 4-6-2013 10:10, noc wrote:
  Hi All,
 
  When in the top tab Disks and a disk is selected I can do Add, Remove,
  Move but Copy is always greyed out, no matter what I try. Doesn't
  matter if the disk is thin provisioned or not, whether it is attached
  to a VM or not, whether the VM is running or not.
  How does one copy a disk? Or when is that button active?
 Great, forgot the version used: 3.2.1 on Fedora-18
 
 Yes, I know snapshotting the VM and then copying the snapshot doesn't
 work on 3.2.1, will upgrade to 3.2.2 today.
 
 Joop
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 


Re: [Users] Cannot remove disk, storage domain corrupted ?

2013-02-28 Thread Liron Aravot
Hi Matt, please attach the engine and vdsm logs.
thanks.

- Original Message -
 From: Matt . yamakasi@gmail.com
 To: users users@ovirt.org
 Sent: Thursday, February 28, 2013 1:59:59 PM
 Subject: Re: [Users] Cannot remove disk, storage domain corrupted ?
 
 
 
 
 At the moment I'm not able to start/stop any VM, I also cannot
 activate my host/node, and even removing storage is not possible.
 
 In simple words, I'm totally stuck, so what would be the best
 way to fix this at the moment?
 
 
 
 
 2013/2/27 YamakasY  yamakasi@gmail.com 
 
 
 
 Hi All,
 
 I have a strange issue with testdisks I created. My Vm's are working
 fine on both storage domains, local and NFS but I cannot remove also
 from both.
 
 Here is my log:
 
 2013-02-27 11:36:50,703 INFO
 [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
 (pool-3-thread-50) [68bb2cec] FINISH, DeleteImageGroupVDSCommand,
 log id: 6dfbe300
 2013-02-27 11:36:50,704 ERROR
 [org.ovirt.engine.core.bll.RemoveImageCommand] (pool-3-thread-50)
 [68bb2cec] Command org.ovirt.engine.core.bll.RemoveImageCommand
 throw Vdc Bll exception. With error message VdcBLLException:
 org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
 IRSGenericException: IRSErrorException: Storage domain layout
 corrupted: ('222a9c6d-0c6b-41dd-9239-8ab83ce3f618',)
 2013-02-27 11:36:50,710 INFO
 [org.ovirt.engine.core.bll.RemoveImageCommand] (pool-3-thread-50)
 [68bb2cec] Command [id=18b5ad2d-cc67-4aab-a10c-2d5ab7e60269]:
 Compensating CHANGED_STATUS_ONLY of
 org.ovirt.engine.core.common.businessentities.Image; snapshot:
 EntityStatusSnapshot [id=1760a9e1-0f81-4afb-90ea-2f9dc5d90975,
 status=ILLEGAL].
 2013-02-27 11:36:56,077 INFO
 [org.ovirt.engine.core.bll.LoginUserCommand] (ajp--127.0.0.1-8702-5)
 Running command: LoginUserCommand internal: false.
 
 What could this be ?
 
 Thanks,
 
 Matt
 
 


Re: [Users] oVirt 3.2: impossible to delete a snapshot?

2013-02-28 Thread Liron Aravot
Gianluca, it seems there is a bug when an image is in the ILLEGAL state; I'll
handle it. Thanks for raising the issue.

- Original Message -
 From: noc n...@nieuwland.nl
 To: Gianluca Cecchi gianluca.cec...@gmail.com
 Cc: users users@ovirt.org
 Sent: Thursday, February 28, 2013 12:47:51 PM
 Subject: Re: [Users] oVirt 3.2: impossible to delete a snapshot?
 
 On 28-2-2013 11:44, Gianluca Cecchi wrote:
  On Wed, Feb 27, 2013 at 1:26 PM, Gianluca Cecchi wrote:
   I don't have the system with me now, but I'm going to provide
   screenshots this evening.
   It is my @HOME environment, which consists of one Fedora 18 system
   with all-in-one 3.2.
   I only have one Windows XP VM on it.
   Possibly the problem is due to the VM having a disk in the ILLEGAL
   state...
   In fact I can't reproduce on another system with a similar setup (3.2
   final but not all-in-one).
   I'm going to clean up the situation and then retry.
 
 
 I'm also running 3.2 stable on F18 and can delete snapshots but, like
 Gianluca, can't clone. I get the same blank window. I saw the fix, but
 since it's a Java class change it would need a rebuild of 3.2.
 Any idea when a wrap-up of the several bugs found and fixed will be up
 at ovirt.org?
 
 Thanks,
 
 Joop
 
 --
 irc: jvandewege


Re: [Users] ovirt Connection Storage error

2013-02-27 Thread Liron Aravot
Hi, looking at the vdsm logs, I see:
libvirtError: authentication failed: Authorization requires authentication but
no agent is available.

Can you please attach your libvirt configuration file, and perhaps the libvirt
log as well?
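
A hedged sketch of the settings most relevant to that error; the paths are the
usual libvirt defaults and may differ on your host:

```shell
# Usual default locations -- adjust if your distribution differs.
CONF=/etc/libvirt/libvirtd.conf
SASLDB=/etc/libvirt/passwd.db

if [ -r "$CONF" ]; then
    # vdsm normally sets auth_unix_rw (e.g. to "sasl"); a polkit setting with
    # no polkit agent running can produce this "no agent is available" error.
    grep -E '^[[:space:]]*auth_unix_(ro|rw)' "$CONF" \
        || echo "auth_unix_* not set explicitly in $CONF"
else
    echo "cannot read $CONF on this machine"
fi

# List the SASL users libvirt knows about (vdsm registers one for itself):
#   sasldblistusers2 -f "$SASLDB"
```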

- Original Message -
 From: xianghuadu xianghu...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: users users@ovirt.org
 Sent: Tuesday, February 26, 2013 10:21:57 AM
 Subject: Re: Re: [Users] ovirt Connection Storage error
 
 
 Hi Liron Aravot,
 attached is the full vdsm log.
 Thanks
 
 
 xianghuadu
 
 
 
 From: Liron Aravot
 Date: 2013-02-26 15:50
 To: xianghuadu
 CC: users
 Subject: Re: [Users] ovirt Connection Storage error
 
 Hi,
 can you please attach the full vdsm log?
 
 - Original Message -
  From: xianghuadu xianghu...@gmail.com
  To: users users@ovirt.org
  Sent: Tuesday, February 26, 2013 9:27:05 AM
  Subject: [Users] ovirt Connection Storage error
  
  
  
   Hi all,
   When adding iSCSI storage, I get Error while executing action New SAN
   Storage Domain: Unexpected exception.
   Engine log:
  
  
  
  2013-02-26 15:09:08,211 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
  (ajp--127.0.0.1-8702-1) [4952790e] HostName = 225
  2013-02-26 15:09:08,212 ERROR
  [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
  (ajp--127.0.0.1-8702-1) [4952790e] Command FormatStorageDomainVDS
  execution failed. Exception: VDSErrorException:
  VDSGenericException:
  VDSErrorException: Failed to FormatStorageDomainVDS, error = Cannot
  format attached storage domain:
  ('378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce',)
  2013-02-26 15:09:08,214 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand]
  (ajp--127.0.0.1-8702-1) [4952790e] FINISH,
  FormatStorageDomainVDSCommand, log id: 1f498799
  2013-02-26 15:09:08,215 ERROR
  [org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand]
  (ajp--127.0.0.1-8702-1) [4952790e] Command
  org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand throw
  Vdc Bll exception. With error message VdcBLLException:
  org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
  VDSGenericException: VDSErrorException: Failed to
  FormatStorageDomainVDS, error = Cannot format attached storage
  domain: ('378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce',)
  2013-02-26 15:09:08,221 INFO
  [org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand]
  (ajp--127.0.0.1-8702-1) [4952790e] Lock freed to object EngineLock
  [exclusiveLocks= key: 378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce value:
  STORAGE
  , sharedLocks= ]
  2013-02-26 15:09:42,067 WARN
  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
  (ajp--127.0.0.1-8702-6) [a8c7727] CanDoAction of action
  UpdateStoragePool failed.
  Reasons:VAR__TYPE__STORAGE__POOL,ACTION_TYPE_FAILED_STORAGE_POOL_WITH_DEFAULT_VDS_GROUP_CANNOT_BE_LOCALFS,VAR__ACTION__UPDATE
  2013-02-26 15:09:59,224 INFO
  [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
  (ajp--127.0.0.1-8702-3) [3d8faa5f] Running command:
  UpdateStoragePoolCommand internal: false. Entities affected : ID:
  da5870e0-7aae-11e2-9da5-00188be4de29 Type: StoragePool
  2013-02-26 15:10:00,000 INFO
  [org.ovirt.engine.core.bll.AutoRecoveryManager]
  (QuartzScheduler_Worker-80) Autorecovering hosts is disabled,
  skipping
  2013-02-26 15:10:00,001 INFO
  [org.ovirt.engine.core.bll.AutoRecoveryManager]
  (QuartzScheduler_Worker-80) Autorecovering storage domains is
  disabled, skipping
  2013-02-26 15:10:23,814 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
  (ajp--127.0.0.1-8702-2) START, GetDeviceListVDSCommand(HostName =
  225, HostId = 342b111a-7fdf-11e2-a963-00188be4de29,
  storageType=ISCSI), log id: 484eccef
  2013-02-26 15:10:24,119 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
  (ajp--127.0.0.1-8702-2) FINISH, GetDeviceListVDSCommand, return:
  [org.ovirt.engine.core.common.businessentities.LUNs@b420cc6], log
  id: 484eccef
  2013-02-26 15:10:32,523 INFO
  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
  (ajp--127.0.0.1-8702-4) [66fa978c] Running command:
  AddSANStorageDomainCommand internal: false. Entities affected : ID:
  aaa0----123456789aaa Type: System
  2013-02-26 15:10:32,539 INFO
  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
  (ajp--127.0.0.1-8702-4) [66fa978c] START,
  CreateVGVDSCommand(HostName = 225, HostId =
  342b111a-7fdf-11e2-a963-00188be4de29,
  storageDomainId=c13260c4-d1aa-455c-9031-0711a7a4cc8d,
  deviceList=[14945540078797a00],
  force=false), log id: 4f1651f1
  2013-02-26 15:10:32,578 ERROR
  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
  (ajp--127.0.0.1-8702-4) [66fa978c] Failed in CreateVGVDS method
  2013-02-26 15:10:32,579 ERROR
  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
  (ajp--127.0.0.1-8702-4) [66fa978c] Error code unexpected and error
  message VDSGenericException: VDSErrorException: Failed to
  CreateVGVDS

Re: [Users] oVirt 3.2: impossible to delete a snapshot?

2013-02-27 Thread Liron Aravot
Hi Gianluca,
What you describe is a bit odd to me, as the GUI should block you from
selecting the active snapshot and choosing Remove. I tried to reproduce this on
3.2 and had no success.
Could you please attach a screenshot and the engine log?

How many VMs do you have in your system?

- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: users users@ovirt.org
 Sent: Wednesday, February 27, 2013 9:13:21 AM
 Subject: [Users] oVirt 3.2: impossible to delete a snapshot?
 
 oVirt 3.2, with the node on F18 and the engine on another F18 box, both
 from the oVirt stable repo.
 
 After I create a snapshot of a powered-off Windows XP VM, I select
 the snapshot and choose Delete:
 
 Are you sure you want to delete snapshot from Wed Feb 27 07:56:23
 GMT+100 2013 with description 'test'?
 
 ok
 
 Error:
 
 winxp:
 
 Cannot remove Snapshot. Removing the VM active Snapshot is not
 allowed.
 
 
 Is this expected? Why?
 
 Thanks,
 Gianluca


Re: [Users] clone vm from snapshot problem in 3.2

2013-02-27 Thread Liron Aravot


- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: Vered Volansky ve...@redhat.com
 Cc: users users@ovirt.org, Liron Aravot lara...@redhat.com
 Sent: Wednesday, February 27, 2013 8:27:32 AM
 Subject: Re: [Users] clone vm from snapshot problem in 3.2
 
 On Tue, Feb 26, 2013 at 8:10 PM, Vered Volansky wrote:
  Latest build (not release), as in from:
  http://resources.ovirt.org/releases/nightly/rpm/Fedora/18/
 
  Good luck,
  Vered
 
 Yes, but now I'm on an ovirt-stable environment.
 Also, nightly is now at the 3.3 code version, while I would like to stay
 on 3.2 for now for this test environment.
 I think this is a serious bug; is there anything in the air to be
 proposed as a patch/update for 3.2 stable?
 
 Otherwise, the release notes should mention that you are unable to
 create clones. Or is there any other way to create a clone?
 Gianluca
 

Hi Gianluca,
I've built 3.2 and managed to reproduce your issue. I've also tested the
fix, and with it the clone window shows fine, so this should solve your issue
(http://gerrit.ovirt.org/#/c/11254/).

Thanks, Liron.


Re: [Users] clone vm from snapshot problem in 3.2

2013-02-27 Thread Liron Aravot


- Original Message -
 From: Liron Aravot lara...@redhat.com
 To: Gianluca Cecchi gianluca.cec...@gmail.com
 Cc: users users@ovirt.org, Vered Volansky ve...@redhat.com
 Sent: Wednesday, February 27, 2013 2:14:06 PM
 Subject: Re: [Users] clone vm from snapshot problem in 3.2
 
 
 
 - Original Message -
  From: Gianluca Cecchi gianluca.cec...@gmail.com
  To: Vered Volansky ve...@redhat.com
  Cc: users users@ovirt.org, Liron Aravot lara...@redhat.com
  Sent: Wednesday, February 27, 2013 8:27:32 AM
  Subject: Re: [Users] clone vm from snapshot problem in 3.2
  
  On Tue, Feb 26, 2013 at 8:10 PM, Vered Volansky wrote:
   Latest build (not release), as in from:
   http://resources.ovirt.org/releases/nightly/rpm/Fedora/18/
  
   Good luck,
   Vered
  
  Yes but now I'm on an ovirt- stable environment.
  Also, nightly is now at 3.3 code version, while I would like to
  stay
  on 3.2 for now for this test environment.
  I think this is a serious bug, is there anything in the air to be
  proposed as a patch/update for 3.2 stable?
  
  Otherwise in release notes you should notice that you are unable to
  create clones or is there any other way to create a clone?
  Gianluca
  
 
 Hi Gianluca,
 I've built 3.2 and had success in reproducing your issue, I've also
 tested the fix and after it the clone window shows fine - so this
 should solve your issue (http://gerrit.ovirt.org/#/c/11254/).
 
 Thanks, Liron.

By the way, it should work through the REST API as well.


Re: [Users] ovirt Connection Storage error

2013-02-27 Thread Liron Aravot
Hi,
Can you please tell me what the DC version is?
Can you also run the following query and reply with the results?

select * from vdc_options where option_name like '%SupportForceCreateVG%';
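
Such a query against the engine DB is typically run on the engine host with
psql; a sketch, assuming the default database name "engine" (not confirmed in
this thread):

```shell
# Hypothetical defaults -- adjust DB name/user for your installation.
Q="select * from vdc_options where option_name like '%SupportForceCreateVG%';"
# Run on the engine host as the postgres user:
#   sudo -u postgres psql engine -c "$Q"
echo "$Q"
```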

- Original Message -
 From: xianghuadu xianghu...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: users users@ovirt.org
 Sent: Wednesday, February 27, 2013 10:30:09 AM
 Subject: Re: Re: [Users] ovirt Connection Storage error
 
 
 Hi Liron Aravot,
 Thank you for your help.
 
   
 
 [root@kvm1 libvirt]# cat libvirtd.conf
 # Master libvirt daemon configuration file
 #
 # For further information consult http://libvirt.org/format.html
 #
 # NOTE: the tests/daemon-conf regression test script requires
 # that each PARAMETER = VALUE line in this file have the parameter
 # name just after a leading #.
 
 #
 #
 # Network connectivity controls
 #
 
 # Flag listening for secure TLS connections on the public TCP/IP
 port.
 # NB, must pass the --listen flag to the libvirtd process for this to
 # have any effect.
 #
 # It is necessary to setup a CA and issue server certificates before
 # using this capability.
 #
 # This is enabled by default, uncomment this to disable it
 #listen_tls = 0
 
 # Listen for unencrypted TCP connections on the public TCP/IP port.
 # NB, must pass the --listen flag to the libvirtd process for this to
 # have any effect.
 #
 # Using the TCP socket requires SASL authentication by default. Only
 # SASL mechanisms which support data encryption are allowed. This is
 # DIGEST_MD5 and GSSAPI (Kerberos5)
 #
 # This is disabled by default, uncomment this to enable it.
 #listen_tcp = 1
 
 
 
 # Override the port for accepting secure TLS connections
 # This can be a port number, or service name
 #
 #tls_port = 16514
 
 # Override the port for accepting insecure TCP connections
 # This can be a port number, or service name
 #
 #tcp_port = 16509
 
 
 # Override the default configuration which binds to all network
 # interfaces. This can be a numeric IPv4/6 address, or hostname
 #
 #listen_addr = 192.168.0.1
 
 
 # Flag toggling mDNS advertizement of the libvirt service.
 #
 # Alternatively can disable for all services on a host by
 # stopping the Avahi daemon
 #
 # This is enabled by default, uncomment this to disable it
 #mdns_adv = 0
 
 # Override the default mDNS advertizement name. This must be
 # unique on the immediate broadcast network.
 #
 # The default is "Virtualization Host HOSTNAME", where HOSTNAME
 # is substituted for the short hostname of the machine (without
 # domain)
 #
 #mdns_name = "Virtualization Host Joe Demo"
 
 
 #
 #
 # UNIX socket access controls
 #
 
 # Set the UNIX domain socket group ownership. This can be used to
 # allow a 'trusted' set of users access to management capabilities
 # without becoming root.
 #
 # This is restricted to 'root' by default.
 #unix_sock_group = libvirt
 
 # Set the UNIX socket permissions for the R/O socket. This is used
 # for monitoring VM status only
 #
 # Default allows any user. If setting group ownership may want to
 # restrict this to:
 #unix_sock_ro_perms = 0777
 
 # Set the UNIX socket permissions for the R/W socket. This is used
 # for full management of VMs
 #
 # Default allows only root. If PolicyKit is enabled on the socket,
 # the default will change to allow everyone (eg, 0777)
 #
 # If not using PolicyKit and setting group ownership for access
 # control then you may want to relax this to:
 #unix_sock_rw_perms = 0770
 
 # Set the name of the directory in which sockets will be
 found/created.
 #unix_sock_dir = /var/run/libvirt
 
 #
 #
 # Authentication.
 #
 # - none: do not perform auth checks. If you can connect to the
 # socket you are allowed. This is suitable if there are
 # restrictions on connecting to the socket (eg, UNIX
 # socket permissions), or if there is a lower layer in
 # the network providing auth (eg, TLS/x509 certificates)
 #
 # - sasl: use SASL infrastructure. The actual auth scheme is then
 # controlled from /etc/sasl2/libvirt.conf. For the TCP
 # socket only GSSAPI & DIGEST-MD5 mechanisms will be used.
 # For non-TCP or TLS sockets, any scheme is allowed.
 #
 # - polkit: use PolicyKit to authenticate. This is only suitable
 # for use on the UNIX sockets. The default policy will
 # require a user to supply their own password to gain
 # full read/write access (aka sudo like), while anyone
 # is allowed read/only access.
 #
 # Set an authentication scheme for UNIX read-only sockets
 # By default socket permissions allow anyone to connect
 #
 # To restrict monitoring of domains you may wish to enable
 # an authentication mechanism here
 #auth_unix_ro = none
 
 # Set an authentication scheme for UNIX read-write sockets
 # By default socket permissions only allow root. If PolicyKit
 # support was compiled into libvirt

Re: [Users] clone vm from snapshot problem in 3.2

2013-02-26 Thread Liron Aravot
Hi, can you please provide engine/vdsm logs?
Does the snapshot have disks?

- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: users users@ovirt.org
 Sent: Tuesday, February 26, 2013 3:52:10 PM
 Subject: [Users] clone vm from snapshot problem in 3.2
 
 Hello,
 Is anyone succeeding in doing what's in the subject?
 
 F18 node with 3.2 final. Another server with F18 and the engine (3.2
 also).
 Tried with a Win 7 VM and a CentOS 5 VM.
 
 I successfully create a snapshot.
 Then I right-click the snapshot line and select Clone.
 
 I get a window that stays there without ever closing.
 
 See here:
 https://docs.google.com/file/d/0BwoPbcrMv8mvRFVsQ1J3QVk1TGM/edit?usp=sharing
 
 What kind of logs should I provide?
 
 Thanks,
 Gianluca


Re: [Users] clone vm from snapshot problem in 3.2

2013-02-26 Thread Liron Aravot
Gianluca,
please try to refresh the UI; you should then see the snapshot disks.
Regarding the clone issue, it was solved by this patch:

http://gerrit.ovirt.org/#/c/11254/

so if you use the latest build it should be fine.

- Original Message -
 From: Gianluca Cecchi gianluca.cec...@gmail.com
 To: Liron Aravot lara...@redhat.com
 Cc: users users@ovirt.org
 Sent: Tuesday, February 26, 2013 5:18:44 PM
 Subject: Re: [Users] clone vm from snapshot problem in 3.2
 
 On Tue, Feb 26, 2013 at 4:00 PM, Gianluca Cecchi wrote:
  On Tue, Feb 26, 2013 at 3:26 PM, Liron Aravot  wrote:
  Hi, can you please provide engine/vdsm logs?
  does the snapshot has disks?
 
 
  Here they are.
  Snapshot has been taken around 14:45. Clone of the snapshot
  attempted
  around 14:47
  engine.log:
  https://docs.google.com/file/d/0BwoPbcrMv8mvY3J4Vk10aVpiRzg/edit?usp=sharing
 
  vdsm.log:
  https://docs.google.com/file/d/0BwoPbcrMv8mvQlo0QzNVWVoxVkE/edit?usp=sharing
 
   What do you mean by "does the snapshot have disks"? I hope so
   ;-)
 
   But if I select the snapshot line and look at the right panel, the
   disk column seems empty.
  See here:
  https://docs.google.com/file/d/0BwoPbcrMv8mvZDdhTG9LQXg3UFk/edit?usp=sharing
 
  Gianluca
 
 BTW: nothing changes if the VM is powered off when I take the
 snapshot. A following clone operation remains with the white window I
 sent before...
 
 NOTE: On February 5th, on this same infrastructure, I had the engine and
 node on the nightly repo as of vdsm-4.10.3-0.78.gitb005b54.fc18.x86_64,
 and I was able to successfully complete this kind of operation. It was
 done with the VM powered off.
 
 Gianluca
 


Re: [Users] ovirt Connection Storage error

2013-02-25 Thread Liron Aravot
Hi,
can you please attach the full vdsm log?

- Original Message -
 From: xianghuadu xianghu...@gmail.com
 To: users users@ovirt.org
 Sent: Tuesday, February 26, 2013 9:27:05 AM
 Subject: [Users] ovirt Connection Storage error
 
 
 
  Hi all,
  When adding iSCSI storage, I get Error while executing action New SAN
  Storage Domain: Unexpected exception.
  Engine log:
 
   
 
 2013-02-26 15:09:08,211 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (ajp--127.0.0.1-8702-1) [4952790e] HostName = 225
 2013-02-26 15:09:08,212 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
 (ajp--127.0.0.1-8702-1) [4952790e] Command FormatStorageDomainVDS
 execution failed. Exception: VDSErrorException: VDSGenericException:
 VDSErrorException: Failed to FormatStorageDomainVDS, error = Cannot
 format attached storage domain:
 ('378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce',)
 2013-02-26 15:09:08,214 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.FormatStorageDomainVDSCommand]
 (ajp--127.0.0.1-8702-1) [4952790e] FINISH,
 FormatStorageDomainVDSCommand, log id: 1f498799
 2013-02-26 15:09:08,215 ERROR
 [org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand]
 (ajp--127.0.0.1-8702-1) [4952790e] Command
 org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand throw
 Vdc Bll exception. With error message VdcBLLException:
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
 VDSGenericException: VDSErrorException: Failed to
 FormatStorageDomainVDS, error = Cannot format attached storage
 domain: ('378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce',)
 2013-02-26 15:09:08,221 INFO
 [org.ovirt.engine.core.bll.storage.RemoveStorageDomainCommand]
 (ajp--127.0.0.1-8702-1) [4952790e] Lock freed to object EngineLock
 [exclusiveLocks= key: 378ef2e6-e12d-4eae-8c6c-9bc2b983d4ce value:
 STORAGE
 , sharedLocks= ]
 2013-02-26 15:09:42,067 WARN
 [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
 (ajp--127.0.0.1-8702-6) [a8c7727] CanDoAction of action
 UpdateStoragePool failed.
 Reasons:VAR__TYPE__STORAGE__POOL,ACTION_TYPE_FAILED_STORAGE_POOL_WITH_DEFAULT_VDS_GROUP_CANNOT_BE_LOCALFS,VAR__ACTION__UPDATE
 2013-02-26 15:09:59,224 INFO
 [org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand]
 (ajp--127.0.0.1-8702-3) [3d8faa5f] Running command:
 UpdateStoragePoolCommand internal: false. Entities affected : ID:
 da5870e0-7aae-11e2-9da5-00188be4de29 Type: StoragePool
 2013-02-26 15:10:00,000 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
 (QuartzScheduler_Worker-80) Autorecovering hosts is disabled,
 skipping
 2013-02-26 15:10:00,001 INFO
 [org.ovirt.engine.core.bll.AutoRecoveryManager]
 (QuartzScheduler_Worker-80) Autorecovering storage domains is
 disabled, skipping
 2013-02-26 15:10:23,814 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
 (ajp--127.0.0.1-8702-2) START, GetDeviceListVDSCommand(HostName =
 225, HostId = 342b111a-7fdf-11e2-a963-00188be4de29,
 storageType=ISCSI), log id: 484eccef
 2013-02-26 15:10:24,119 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
 (ajp--127.0.0.1-8702-2) FINISH, GetDeviceListVDSCommand, return:
 [org.ovirt.engine.core.common.businessentities.LUNs@b420cc6], log
 id: 484eccef
 2013-02-26 15:10:32,523 INFO
 [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
 (ajp--127.0.0.1-8702-4) [66fa978c] Running command:
 AddSANStorageDomainCommand internal: false. Entities affected : ID:
 aaa0----123456789aaa Type: System
 2013-02-26 15:10:32,539 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
 (ajp--127.0.0.1-8702-4) [66fa978c] START,
 CreateVGVDSCommand(HostName = 225, HostId =
 342b111a-7fdf-11e2-a963-00188be4de29,
 storageDomainId=c13260c4-d1aa-455c-9031-0711a7a4cc8d,
 deviceList=[14945540078797a00],
 force=false), log id: 4f1651f1
 2013-02-26 15:10:32,578 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (ajp--127.0.0.1-8702-4) [66fa978c] Failed in CreateVGVDS method
 2013-02-26 15:10:32,579 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (ajp--127.0.0.1-8702-4) [66fa978c] Error code unexpected and error
 message VDSGenericException: VDSErrorException: Failed to
 CreateVGVDS, error = Unexpected exception
 2013-02-26 15:10:32,581 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (ajp--127.0.0.1-8702-4) [66fa978c] Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand return
 value
 Class Name:
 org.ovirt.engine.core.vdsbroker.irsbroker.OneUuidReturnForXmlRpc
 mUuid Null
 mStatus Class Name:
 org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
 mCode 16
 mMessage Unexpected exception
 
 2013-02-26 15:10:32,585 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (ajp--127.0.0.1-8702-4) [66fa978c] HostName = 225
 2013-02-26 15:10:32,586 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
 (ajp--127.0.0.1-8702-4) [66fa978c] Command CreateVGVDS execution
 failed.