Re: [ovirt-users] Cannot create disk image on storage domain NOT master

2014-09-17 Thread Gabi C
Please note that creating a disk on the master data domain works!
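If it helps narrow this down: since only the non-master domain fails, my
working guess is that the pool-level symlink for "storage_gluster2" is
missing on the SPM. A quick comparison I plan to run there (this assumes
VDSM still keeps one symlink per attached domain under the pool UUID
directory, as it did before the upgrade):

ls -l /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/aa401118-fc30-4399-8f01-f1751f89fff4
ls -l /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2

If the first (master) link resolves and the second does not, that would
match the error below.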

On Wed, Sep 17, 2014 at 10:08 AM, Gabi C  wrote:

> Hello!
>
> - 1 oVirt Engine (not hosted), 4 nodes, 3 in "Default" datacenter, 1 in a
> local-storage type datacenter;
> - Recently upgraded from 3.4.2 to 3.4.3;
>
>
> The "Default" datacenter has, along with the "mandatory" ISO and Export
> domains, two storage domains: "storage_gluster2" (approx. 550 GB) and
> "storage_gluster3" (approx. 130 GB).
> As you can guess :-), the three nodes also serve as Gluster brick hosts,
> each providing 2 bricks for the above-mentioned storage domains, both in
> replicated mode.
>
> Master domain is "storage_gluster3".
>
> After creating a new VM, trying to add a new disk on "storage_gluster2",
> which is NOT the master domain, raises the following (ovirt-engine.log):
>
>
>
> -- Message: VDSGenericException: VDSErrorException: Failed to
> HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
> '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',
> code = 100,
> -- Exception: VDSGenericException: VDSErrorException: Failed to
> HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
> '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',
> code = 100
>
>
>
> On SPM:
>
>
> vdsClient -s 0 getConnectedStoragePoolsList
> 5849b030-626e-47cb-ad90-3ce782d831b3
>
>
> vdsClient -s 0 getStoragePoolInfo 5849b030-626e-47cb-ad90-3ce782d831b3
> name = Default
> isoprefix = /rhev/data-center/mnt/10.125.1.193:
> _mnt_storage__filer1_export__nfs2_iso/bb09aac2-05d5-45c5-b573-ebff9292ad6d/images/----
> pool_status = connected
> lver = 23
> spm_id = 1
> master_uuid = aa401118-fc30-4399-8f01-f1751f89fff4
> version = 3
> domains =
> b694642b-4603-4dc6-ba36-c0d82ca83fa2:Active,bb09aac2-05d5-45c5-b573-ebff9292ad6d:Active,aa401118-fc30-4399-8f01-f1751f89fff4:Active,62f795ed-486c-443d-93e9-2c16595fe546:Active
> type = GLUSTERFS
> master_ver = 1136
> b694642b-4603-4dc6-ba36-c0d82ca83fa2 = {'status': 'Active',
> 'diskfree': '32422528', 'isoprefix': '', 'alerts': [], 'disktotal':
> '590231371776', 'version': 3}
> bb09aac2-05d5-45c5-b573-ebff9292ad6d = {'status': 'Active',
> 'diskfree': '1904214016', 'isoprefix': 
> '/rhev/data-center/mnt/10.125.1.193:_mnt_storage__filer1_export__nfs2_iso/bb09aac2-05d5-45c5-b573-ebff9292ad6d/images/----',
> 'alerts': [], 'disktotal': '8049917952', 'version': 0}
> aa401118-fc30-4399-8f01-f1751f89fff4 = {'status': 'Active',
> 'diskfree': '111680159744', 'isoprefix': '', 'alerts': [], 'disktotal':
> '144146563072', 'version': 3}
> 62f795ed-486c-443d-93e9-2c16595fe546 = {'status': 'Active',
> 'diskfree': '131499819008', 'isoprefix': '', 'alerts': [], 'disktotal':
> '234763583488', 'version': 0}
>
>
>
>
>
>
> mount
>
> .
>
> 127.0.0.1:gluster_data3 on 
> /rhev/data-center/mnt/glusterSD/127.0.0.1:gluster__data3
> type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> 127.0.0.1:gluster_data2 on 
> /rhev/data-center/mnt/glusterSD/127.0.0.1:gluster__data2
> type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> 
>
>
> ls -la /rhev/data-center/mnt/
> total 20
> drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:46 .
> drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:46 ..
> drwxrwsrwx+ 3   96  96 4096 Feb  7  2014 10.125.1.193:
> _mnt_storage__filer1_export__nfs1_export
> drwxrwsrwx+ 3   96  96 4096 May  8 17:16 10.125.1.193:
> _mnt_storage__filer1_export__nfs2_iso
> drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:45 glusterSD
>
>
> but there is no:
>
>
> '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',
>
>
> as per the error from the engine log posted above.
>
>
>
>
> Do you have any opinions/ideas/advice?
>
>
>
> I'll try to restart the whole thing (engine plus 3 nodes) but, since it
> is in production, it is a "little bit" difficult :-)
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Cannot create disk image on storage domain NOT master

2014-09-17 Thread Gabi C
Hello!

- 1 oVirt Engine (not hosted), 4 nodes, 3 in "Default" datacenter, 1 in a
local-storage type datacenter;
- Recently upgraded from 3.4.2 to 3.4.3;


The "Default" datacenter has, along with the "mandatory" ISO and Export
domains, two storage domains: "storage_gluster2" (approx. 550 GB) and
"storage_gluster3" (approx. 130 GB).
As you can guess :-), the three nodes also serve as Gluster brick hosts,
each providing 2 bricks for the above-mentioned storage domains, both in
replicated mode.

Master domain is "storage_gluster3".

After creating a new VM, trying to add a new disk on "storage_gluster2",
which is NOT the master domain, raises the following (ovirt-engine.log):



-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',
code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',
code = 100
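That path is <pool UUID>/<domain UUID>/<image UUID>, and per the pool info
below the domain UUID b694642b-4603-4dc6-ba36-c0d82ca83fa2 is the ~550 GB
domain, i.e. "storage_gluster2". To see whether the image directory was
created at all despite the error, I can also look for it through the
Gluster mount itself (assuming, as I believe, that the pool-level path is
normally just a symlink into the gluster__data2 mount shown further down):

ls -ld /rhev/data-center/mnt/glusterSD/127.0.0.1:gluster__data2/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94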



On SPM:


vdsClient -s 0 getConnectedStoragePoolsList
5849b030-626e-47cb-ad90-3ce782d831b3


vdsClient -s 0 getStoragePoolInfo 5849b030-626e-47cb-ad90-3ce782d831b3
name = Default
isoprefix = /rhev/data-center/mnt/10.125.1.193:
_mnt_storage__filer1_export__nfs2_iso/bb09aac2-05d5-45c5-b573-ebff9292ad6d/images/----
pool_status = connected
lver = 23
spm_id = 1
master_uuid = aa401118-fc30-4399-8f01-f1751f89fff4
version = 3
domains =
b694642b-4603-4dc6-ba36-c0d82ca83fa2:Active,bb09aac2-05d5-45c5-b573-ebff9292ad6d:Active,aa401118-fc30-4399-8f01-f1751f89fff4:Active,62f795ed-486c-443d-93e9-2c16595fe546:Active
type = GLUSTERFS
master_ver = 1136
b694642b-4603-4dc6-ba36-c0d82ca83fa2 = {'status': 'Active',
'diskfree': '32422528', 'isoprefix': '', 'alerts': [], 'disktotal':
'590231371776', 'version': 3}
bb09aac2-05d5-45c5-b573-ebff9292ad6d = {'status': 'Active',
'diskfree': '1904214016', 'isoprefix':
'/rhev/data-center/mnt/10.125.1.193:_mnt_storage__filer1_export__nfs2_iso/bb09aac2-05d5-45c5-b573-ebff9292ad6d/images/----',
'alerts': [], 'disktotal': '8049917952', 'version': 0}
aa401118-fc30-4399-8f01-f1751f89fff4 = {'status': 'Active',
'diskfree': '111680159744', 'isoprefix': '', 'alerts': [], 'disktotal':
'144146563072', 'version': 3}
62f795ed-486c-443d-93e9-2c16595fe546 = {'status': 'Active',
'diskfree': '131499819008', 'isoprefix': '', 'alerts': [], 'disktotal':
'234763583488', 'version': 0}
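The pool reports all four domains as Active, so a further check worth
running on the SPM is to query the problem domain directly and make sure
VDSM can still read its metadata (name, role, pool, remote path; I have
not pasted that output here):

vdsClient -s 0 getStorageDomainInfo b694642b-4603-4dc6-ba36-c0d82ca83fa2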






mount

.

127.0.0.1:gluster_data3 on
/rhev/data-center/mnt/glusterSD/127.0.0.1:gluster__data3
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
127.0.0.1:gluster_data2 on
/rhev/data-center/mnt/glusterSD/127.0.0.1:gluster__data2
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
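Since both volumes are mounted, I also want to rule out a brick or
self-heal problem on the volume backing "storage_gluster2" (volume name
gluster_data2, as in the mount output; assuming the gluster CLI on these
nodes supports the heal subcommand):

gluster volume status gluster_data2
gluster volume heal gluster_data2 info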



ls -la /rhev/data-center/mnt/
total 20
drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:46 .
drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:46 ..
drwxrwsrwx+ 3   96  96 4096 Feb  7  2014 10.125.1.193:
_mnt_storage__filer1_export__nfs1_export
drwxrwsrwx+ 3   96  96 4096 May  8 17:16 10.125.1.193:
_mnt_storage__filer1_export__nfs2_iso
drwxr-xr-x. 5 vdsm kvm 4096 Sep  2 16:45 glusterSD
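For completeness, the directory that actually matters for the error above
is the pool UUID directory, which (if I understand the VDSM layout
correctly) should hold one symlink per attached storage domain:

ls -l /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/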


but there is no:

'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/b694642b-4603-4dc6-ba36-c0d82ca83fa2/images/5de04fd7-a8a5-483a-be53-cffb17db3d94',


as per the error from the engine log posted above.




Do you have any opinions/ideas/advice?



I'll try to restart the whole thing (engine plus 3 nodes) but, since it
is in production, it is a "little bit" difficult :-)
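Before a full restart, a less disruptive step I may try first (just a
guess on my side): restart VDSM only on the SPM host, or put the domain
into maintenance and activate it again from the engine, since reconnecting
the host to the pool should recreate the pool-level links (restarting the
SPM's vdsmd will of course trigger an SPM re-election):

service vdsmd restart    # or: systemctl restart vdsmd on systemd hosts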
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users