[Users] Error importing export storage

2012-08-28 Thread зоррыч
Hi

Trying to import an export storage domain created earlier.

But get this error:

 

There is no storage domain under the specified path. Please check path.

 

vdsm.log:

Thread-99790::DEBUG::2012-08-28
09:17:26,010::task::568::TaskManager.Task::(_updateState)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state init -> state
preparing

Thread-99790::INFO::2012-08-28
09:17:26,010::logUtils::37::dispatcher::(wrapper) Run and protect:
repoStats(options=None)

Thread-99790::INFO::2012-08-28
09:17:26,010::logUtils::39::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b0a0e76b-f983-405b-a0af-d0314a1c381a':
{'delay': '0.00292301177979', 'lastCheck': 1346159839.788852, 'code': 0,
'valid': True}}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::1151::TaskManager.Task::(prepare)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::finished:
{'b0a0e76b-f983-405b-a0af-d0314a1c381a': {'delay': '0.00292301177979',
'lastCheck': 1346159839.788852, 'code': 0, 'valid': True}}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::568::TaskManager.Task::(_updateState)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state preparing ->
state finished

Thread-99790::DEBUG::2012-08-28
09:17:26,011::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::957::TaskManager.Task::(_decref)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::ref 0 aborting False

Thread-99792::DEBUG::2012-08-28
09:17:26,473::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::568::TaskManager.Task::(_updateState)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state init -> state
preparing

Thread-99792::INFO::2012-08-28
09:17:26,474::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=1,
spUUID='----', conList=[{'connection':
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 'password':
'**', 'id': '----', 'port': ''}],
options=None)

Thread-99792::INFO::2012-08-28
09:17:26,474::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist': [{'status':
0, 'id': '----'}]}

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::1151::TaskManager.Task::(prepare)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::finished: {'statuslist':
[{'status': 0, 'id': '----'}]}

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::568::TaskManager.Task::(_updateState)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state preparing ->
state finished

Thread-99792::DEBUG::2012-08-28
09:17:26,475::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}

Thread-99792::DEBUG::2012-08-28
09:17:26,475::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}

Thread-99792::DEBUG::2012-08-28
09:17:26,475::task::957::TaskManager.Task::(_decref)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::ref 0 aborting False

Thread-99793::DEBUG::2012-08-28
09:17:26,494::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

Thread-99793::DEBUG::2012-08-28
09:17:26,495::task::568::TaskManager.Task::(_updateState)
Task=`700181ad-b9d4-411b-bfbc-25a28aa288e2`::moving from state init -> state
preparing

Thread-99793::INFO::2012-08-28
09:17:26,503::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=1,
spUUID='----', conList=[{'connection':
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 'password':
'**', 'id': '----', 'port': ''}],
options=None)

Thread-99793::DEBUG::2012-08-28
09:17:26,505::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
10.1.20.2:/home/nfs4 /rhev/data-center/mnt/10.1.20.2:_home_nfs4' (cwd None)

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,610::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,610::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28

Re: [Users] Error importing export storage

2012-08-28 Thread зоррыч
From which servers do I need to collect the vdsm logs?
I have one node running vdsm, and ovirt-engine on the server.


Node:
[root@noc-3-synt ~]# /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4
mount.nfs: mount point /rhev/data-center/mnt/10.1.20.2:_home_nfs4 does not exist
[root@noc-3-synt ~]# /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 /tmp/foo
[root@noc-3-synt ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_noc3synt-lv_root
                       50G  4.5G   43G  10% /
tmpfs                 7.9G     0  7.9G   0% /dev/shm
/dev/sda1             497M  246M  227M  52% /boot
/dev/mapper/vg_noc3synt-lv_home
                      491G  7.4G  459G   2% /mht
127.0.0.1:/gluster    491G   11G  456G   3% /rhev/data-center/mnt/127.0.0.1:_gluster
10.1.20.2:/home/nfs4  493G  304G  164G  65% /tmp/foo
[root@noc-3-synt ~]# vdsClient -s 0 getStorageDomainsList
b0a0e76b-f983-405b-a0af-d0314a1c381a

[root@noc-3-synt ~]# mount
/dev/mapper/vg_noc3synt-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext=system_u:object_r:tmpfs_t:s0)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_noc3synt-lv_home on /mht type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
127.0.0.1:/gluster on /rhev/data-center/mnt/127.0.0.1:_gluster type 
fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.1.20.2:/home/nfs4 on /tmp/foo type nfs 
(rw,soft,nosharecache,timeo=600,retrans=6,addr=10.1.20.2)
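
The mount itself clearly works, so the import failure looks like nothing more
than the missing vdsm mount point. A minimal sketch of a workaround (vdsm
normally creates this directory itself, so having to create it by hand may
point at a deeper problem):

# recreate the missing mount point, then retry the exact vdsm mount command
mkdir -p '/rhev/data-center/mnt/10.1.20.2:_home_nfs4'
/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 \
  10.1.20.2:/home/nfs4 '/rhev/data-center/mnt/10.1.20.2:_home_nfs4'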




-Original Message-
From: Haim [mailto:hat...@redhat.com] 
Sent: Tuesday, August 28, 2012 5:57 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Error importing export storage

On 08/28/2012 04:34 PM, зоррыч wrote:
 /usr/bin/sudo -n /bin/mount -t nfs -o 
 soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
 /rhev/data-center/mnt/10.1.20.2:_home_nfs4
please attach both engine and vdsm logs (full, compressed).
also, please execute the following commands from host (vds):

1) /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4

2) vdsClient -s 0 getStorageDomainsList

3) mount

* if you are working in a non-secure mode, try vdsClient 0 (without the -s).






Re: [Users] Ovirt and gluster storage (two servers in a cluster)

2012-07-03 Thread зоррыч
I cannot mount a volume in a cluster with two servers.

With only one node activated, the gluster volume mounts successfully.

 

When two servers operate on the gluster volume simultaneously, ovirt
constantly switches the SPM without reporting an error.

 

 

 

From: Robert Middleswarth [mailto:rob...@middleswarth.net] 
Sent: Wednesday, July 04, 2012 12:00 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

Are you having problems creating a Volume or mounting the volume?

Thanks
Robert


On 07/03/2012 03:56 PM, зоррыч wrote:

I've updated ovirt and vdsm to the latest test version (git repository), but
the problem keeps recurring.

What am I doing wrong? How do I find what is wrong?

 

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
зоррыч
Sent: Wednesday, June 27, 2012 6:19 PM
To: rob...@middleswarth.net
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

logs in the attachment

 

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org]
On Behalf Of зоррыч
Sent: Wednesday, June 27, 2012 1:15 PM
To: 'Robert Middleswarth'
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

The problem still persists.

How do I solve it?

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
зоррыч
Sent: Tuesday, June 26, 2012 2:38 PM
To: 'Robert Middleswarth'
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

I checked mounting gluster manually from both hosts; it works correctly.

However, as a pair the hosts refuse to work (storage connection error).

Individually, each host works correctly and connects to the gluster storage.

Do I have to mount the gluster storage manually? Into which folder?

Could you write a how-to for adding a server to an existing gluster cluster?

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Robert Middleswarth
Sent: Monday, June 25, 2012 11:49 PM
To: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

On 06/25/2012 09:54 AM, зоррыч wrote:

Hi.

I use ovirt 3.1 and gluster storage.

I added two servers to a cluster.

And ran into a problem with their joint operation against the gluster storage.

 

The storage does not initialize, although a single server on its own works
successfully with the gluster storage.

Vdsm logs are in the attachment

(vdsm-6.log - node -1)

(vdsm-7.log - node -2)

 

 

You have to tweak your iptables rules to allow glusterd to talk to glusterd on
the other box, and you have to manually peer the systems together.
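
A minimal sketch of that approach, assuming the gluster defaults of the era
(management on TCP 24007, brick ports from 24009 upward; <peer-ip> and
<peer-hostname> are placeholders):

# on each box: open the gluster ports to the peer
iptables -I INPUT -p tcp -s <peer-ip> --dport 24007 -j ACCEPT         # glusterd
iptables -I INPUT -p tcp -s <peer-ip> --dport 24009:24109 -j ACCEPT   # bricks
service iptables save

# on one box: peer the systems together, then verify
gluster peer probe <peer-hostname>
gluster peer status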

 



Thanks
Robert

 



Re: [Users] Ovirt and gluster storage (two servers in a cluster)

2012-07-03 Thread зоррыч
I use Scientific Linux and have already reinstalled ovirt and vdsm several
times, but the problem still reproduces.

I sent the vdsm logs, but they do not show any error.

How do I know what is going wrong?

 

 

I would be grateful for any ideas.

 

 

 

From: Robert Middleswarth [mailto:rob...@middleswarth.net] 
Sent: Wednesday, July 04, 2012 1:31 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

I had the same problem with Fedora 17.  I killed everything and started from
scratch using CentOS, and it works fine.  I wonder if there is an issue with
the Direct IO support that was added to Fedora 17 recently.

Thanks
Robert

On 07/03/2012 05:15 PM, зоррыч wrote:

I cannot mount a volume in a cluster with two servers.

With only one node activated, the gluster volume mounts successfully.

 

When two servers operate on the gluster volume simultaneously, ovirt
constantly switches the SPM without reporting an error.

 

 

 

From: Robert Middleswarth [mailto:rob...@middleswarth.net] 
Sent: Wednesday, July 04, 2012 12:00 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

Are you having problems creating a Volume or mounting the volume?

Thanks
Robert


On 07/03/2012 03:56 PM, зоррыч wrote:

I've updated ovirt and vdsm to the latest test version (git repository), but
the problem keeps recurring.

What am I doing wrong? How do I find what is wrong?

 

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
зоррыч
Sent: Wednesday, June 27, 2012 6:19 PM
To: rob...@middleswarth.net
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

logs in the attachment

 

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org]
On Behalf Of зоррыч
Sent: Wednesday, June 27, 2012 1:15 PM
To: 'Robert Middleswarth'
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

The problem still persists.

How do I solve it?

 

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
зоррыч
Sent: Tuesday, June 26, 2012 2:38 PM
To: 'Robert Middleswarth'
Cc: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

I checked mounting gluster manually from both hosts; it works correctly.

However, as a pair the hosts refuse to work (storage connection error).

Individually, each host works correctly and connects to the gluster storage.

Do I have to mount the gluster storage manually? Into which folder?

Could you write a how-to for adding a server to an existing gluster cluster?

 

 

From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Robert Middleswarth
Sent: Monday, June 25, 2012 11:49 PM
To: users@ovirt.org
Subject: Re: [Users] Ovirt and gluster storage (two servers in a cluster)

 

On 06/25/2012 09:54 AM, зоррыч wrote:

Hi.

I use ovirt 3.1 and gluster storage.

I added two servers to a cluster.

And ran into a problem with their joint operation against the gluster storage.

 

The storage does not initialize, although a single server on its own works
successfully with the gluster storage.

Vdsm logs are in the attachment

(vdsm-6.log - node -1)

(vdsm-7.log - node -2)

 

 

You have to tweak your iptables rules to allow glusterd to talk to glusterd on
the other box, and you have to manually peer the systems together.

 



Thanks
Robert

 

 



Re: [Users] host install failed (kernel version 3.4.3)

2012-06-25 Thread зоррыч
Unfortunately I do not know how to do it.

At what URL is the bug tracker?

 

 

 

 

 

From: Douglas Landgraf [mailto:dougsl...@redhat.com] 
Sent: Saturday, June 23, 2012 8:10 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] host install failed (kernel version 3.4.3)

 

Hi,

On 06/22/2012 04:26 PM, зоррыч wrote: 

Hi.

I am trying to install a host running kernel version 3.4.3.

I get an error:

Unsupported kernel version: 0

Logs:

 

[root@noc-3-synt tmp]# cat vds_bootstrap.372080.log

Fri, 22 Jun 2012 16:08:20 DEBUG Start VDS Validation 

Fri, 22 Jun 2012 16:08:20 DEBUG Entered VdsValidation(subject = '10.1.20.7', 
random_num = '7d832636-a512-40ae-8e27-ecd24728b39a', rev_num = 'None', 
installVirtualizationService = 'False', installGlusterService = 'True')

Fri, 22 Jun 2012 16:08:20 DEBUG Setting up Package Sacks

Fri, 22 Jun 2012 16:08:20 DEBUG yumSearch: found vdsm entries: 
[YumAvailablePackageSqlite : vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 
(0x1b42a10)]

Fri, 22 Jun 2012 16:08:20 DEBUG Host properly registered with RHN/Satellite.

Fri, 22 Jun 2012 16:08:20 DEBUG <BSTRAP component='RHN_REGISTRATION' 
status='OK' message='Host properly registered with RHN/Satellite.'/>

Fri, 22 Jun 2012 16:08:21 DEBUG yumSearchVersion: pkg 
vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 starts with: vdsm-4.10

Fri, 22 Jun 2012 16:08:21 DEBUG Available VDSM matches requirements

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='VDSM_MAJOR_VER' 
status='OK' message='Available VDSM matches requirements'/>

Fri, 22 Jun 2012 16:08:21 DEBUG ['/bin/uname', '-r']

Fri, 22 Jun 2012 16:08:21 DEBUG 3.4.3

 

Fri, 22 Jun 2012 16:08:21 DEBUG

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='OS' status='OK' 
type='RHEL6' message='Supported platform version'/>

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='KERNEL' status='FAIL' 
version='0' message='Unsupported kernel version: 0. Minimal supported version: 
150'/>

Fri, 22 Jun 2012 16:08:21 ERROR osExplorer test failed

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='RHEV_INSTALL' 
status='FAIL'/>

Fri, 22 Jun 2012 16:08:21 DEBUG End VDS Validation 

 

[root@noc-3-synt tmp]# uname -r

3.4.3

 

Can you please open a bz assigned to me?

Thanks!



-- 
Cheers
Douglas


Re: [Users] Do not start the virtual machine (gluster storage and ovirt 3.1)

2012-06-25 Thread зоррыч
I'm sorry.
I disabled SELinux, and the error disappeared.
Thank you!
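
For the record, a minimal sketch of the check and the change (permissive mode
is the safer test; a proper SELinux policy fix would be better than disabling
it outright):

getenforce                 # Enforcing / Permissive / Disabled
setenforce 0               # permissive until the next reboot
# to persist across reboots:
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config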



-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
зоррыч
Sent: Monday, June 25, 2012 3:20 PM
To: 'Itamar Heim'
Cc: users@ovirt.org
Subject: Re: [Users] Do not start the virtual machine (gluster storage and 
ovirt 3.1)

In an attachment



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Saturday, June 23, 2012 4:46 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Do not start the virtual machine (gluster storage and 
ovirt 3.1)

On 06/22/2012 05:02 PM, зоррыч wrote:
 Hi.

 I use a bunch of ovirt 3.1 beta and gluster storage.

 The virtual machine was created successfully, but will not start.

 In the logs:

 Vdsm.log:

 Thread-1426::DEBUG::2012-06-22
 09:37:27,151::task::978::TaskManager.Task::(_decref)
 Task=`9a68c120-169f-4c0e-98e3-08e3bf5c66ab`::ref 0 aborting False

 Thread-1427::DEBUG::2012-06-22
 09:37:27,162::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-1427::DEBUG::2012-06-22
 09:37:27,163::task::588::TaskManager.Task::(_updateState)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state init -> 
 state preparing

 Thread-1427::INFO::2012-06-22
 09:37:27,163::logUtils::37::dispatcher::(wrapper) Run and protect:
 getStoragePoolInfo(spUUID='b1c7875a-964d-4633-8ea4-2b191d68c105',
 options=None)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,163::resourceManager::175::ResourceManager.Request::(__init__
 )
 ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-
 1f0b-4225-9717-d1179193c42e`::Request
 was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 
 'registerResource'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::486::ResourceManager::(registerResource
 )
 Trying to register resource
 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' for lock type 'shared'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::528::ResourceManager::(registerResource
 ) Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free. Now 
 locking as 'shared' (1 active user)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::212::ResourceManager.Request::(grant)
 ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-
 1f0b-4225-9717-d1179193c42e`::Granted
 request

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::task::817::TaskManager.Task::(resourceAcquired)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::_resourcesAcquired:
 Storage.b1c7875a-964d-4633-8ea4-2b191d68c105 (shared)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,165::task::978::TaskManager.Task::(_decref)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::ref 1 aborting False

 Thread-1427::INFO::2012-06-22
 09:37:27,165::logUtils::39::dispatcher::(wrapper) Run and protect:
 getStoragePoolInfo, Return response: {'info': {'spm_id': 1,
 'master_uuid': '68aa0dc2-9cd1-4549-8008-30b1bae667db', 'name':
 'gluster', 'version': '0', 'domains':
 '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active', 'pool_status':
 'connected', 'isoprefix': '', 'type': 'SHAREDFS', 'master_ver': 1,
 'lver': 0}, 'dominfo': {'68aa0dc2-9cd1-4549-8008-30b1bae667db':
 {'status': 'Active', 'diskfree': '27505983488', 'alerts': [],
 'disktotal': '53579874304'}}}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,165::task::1172::TaskManager.Task::(prepare)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::finished: {'info':
 {'spm_id': 1, 'master_uuid': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
 'name': 'gluster', 'version': '0', 'domains':
 '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active', 'pool_status':
 'connected', 'isoprefix': '', 'type': 'SHAREDFS', 'master_ver': 1,
 'lver': 0}, 'dominfo': {'68aa0dc2-9cd1-4549-8008-30b1bae667db':
 {'status': 'Active', 'diskfree': '27505983488', 'alerts': [],
 'disktotal': '53579874304'}}}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::task::588::TaskManager.Task::(_updateState)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state preparing
 -> state finished

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::809::ResourceManager.Owner::(releaseAll
 ) Owner.releaseAll requests {} resources
 {'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105':  ResourceRef 
 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105', isValid: 'True' obj:
 'None'}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::538::ResourceManager::(releaseResource)
 Trying to release resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::553::ResourceManager::(releaseResource)
 Released resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' (0 
 active users)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,167::resourceManager::558::ResourceManager::(releaseResource)
 Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free, 
 finding

Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

2012-06-22 Thread зоррыч
I updated the kernel to 3.4.3.
It works! Mounting is successful.
Thank you!




-Original Message-
From: Vijay Bellur [mailto:vbel...@redhat.com] 
Sent: Friday, June 22, 2012 3:35 PM
To: зоррыч
Cc: 'Daniel Paikov'; users@ovirt.org; 'Itamar Heim'
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

On 06/21/2012 07:35 AM, зоррыч wrote:
 Vijay?


 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Thursday, June 21, 2012 12:47 AM
 To: зоррыч
 Cc: 'Daniel Paikov'; users@ovirt.org; Vijay Bellur
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/20/2012 11:41 PM, зоррыч wrote:
 Exactly the same problem:
 http://www.mail-archive.com/vdsm-devel@lists.fedorahosted.org/msg00555.html

 ok, so this is still not available in fedora based on the last comment:
 From #gluster i figure that fuse still does not support O_DIRECT.
 From linux-fsdevel, it looks like patches to enable O_DIRECT in fuse are 
just getting in.

 vijay - any estimation on when this may be available?




O_DIRECT support from FUSE is available in 3.4.x kernels.

-Vijay
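
So a quick pre-flight check before creating a glusterfs-backed PosixFS domain
is simply the running kernel version, which must be 3.4.x or newer per the
note above:

uname -r    # must report 3.4.x or newer for FUSE O_DIRECT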




[Users] host install failed (kernel version 3.4.3)

2012-06-22 Thread зоррыч
Hi.

I am trying to install a host running kernel version 3.4.3.

I get an error:

Unsupported kernel version: 0

Logs:

 

[root@noc-3-synt tmp]# cat vds_bootstrap.372080.log

Fri, 22 Jun 2012 16:08:20 DEBUG Start VDS Validation 

Fri, 22 Jun 2012 16:08:20 DEBUG Entered VdsValidation(subject =
'10.1.20.7', random_num = '7d832636-a512-40ae-8e27-ecd24728b39a', rev_num =
'None', installVirtualizationService = 'False', installGlusterService =
'True')

Fri, 22 Jun 2012 16:08:20 DEBUG Setting up Package Sacks

Fri, 22 Jun 2012 16:08:20 DEBUG yumSearch: found vdsm entries:
[YumAvailablePackageSqlite : vdsm-4.10.0-0.58.gita6f4929.el6.x86_64
(0x1b42a10)]

Fri, 22 Jun 2012 16:08:20 DEBUG Host properly registered with
RHN/Satellite.

Fri, 22 Jun 2012 16:08:20 DEBUG <BSTRAP component='RHN_REGISTRATION'
status='OK' message='Host properly registered with RHN/Satellite.'/>

Fri, 22 Jun 2012 16:08:21 DEBUG yumSearchVersion: pkg
vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 starts with: vdsm-4.10

Fri, 22 Jun 2012 16:08:21 DEBUG Available VDSM matches requirements

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='VDSM_MAJOR_VER'
status='OK' message='Available VDSM matches requirements'/>

Fri, 22 Jun 2012 16:08:21 DEBUG ['/bin/uname', '-r']

Fri, 22 Jun 2012 16:08:21 DEBUG 3.4.3

 

Fri, 22 Jun 2012 16:08:21 DEBUG

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='OS' status='OK'
type='RHEL6' message='Supported platform version'/>

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='KERNEL' status='FAIL'
version='0' message='Unsupported kernel version: 0. Minimal supported
version: 150'/>

Fri, 22 Jun 2012 16:08:21 ERROR osExplorer test failed

Fri, 22 Jun 2012 16:08:21 DEBUG <BSTRAP component='RHEV_INSTALL'
status='FAIL'/>

Fri, 22 Jun 2012 16:08:21 DEBUG End VDS Validation 

 

[root@noc-3-synt tmp]# uname -r

3.4.3
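
The version='0' looks like a parsing artifact rather than a real kernel
problem: it is consistent with the bootstrap comparing the numeric release
field of `uname -r` (the part after the first dash) against 150, and '3.4.3'
has no dash at all. A sketch of that reading (the exact vds_bootstrap logic
is an assumption here):

# hypothetical reconstruction of the failing kernel check
echo 3.4.3 | cut -s -d- -f2 | cut -d. -f1                      # empty, treated as 0 -> FAIL
echo 2.6.32-220.7.1.el6.x86_64 | cut -s -d- -f2 | cut -d. -f1  # 220 >= 150 -> OK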



Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

2012-06-21 Thread зоррыч
Vijay?


-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Thursday, June 21, 2012 12:47 AM
To: зоррыч
Cc: 'Daniel Paikov'; users@ovirt.org; Vijay Bellur
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

On 06/20/2012 11:41 PM, зоррыч wrote:
 Exactly the same problem:
 http://www.mail-archive.com/vdsm-devel@lists.fedorahosted.org/msg00555
 .html

ok, so this is still not available in fedora based on the last comment:
 From #gluster i figure that fuse still does not support O_DIRECT.
 From linux-fsdevel, it looks like patches to enable O_DIRECT in fuse are 
just getting in.

vijay - any estimation on when this may be available?

thanks,
Itamar




 -Original Message-
 From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On 
 Behalf Of зоррыч
 Sent: Wednesday, June 20, 2012 3:11 PM
 To: 'Itamar Heim'
 Cc: 'Daniel Paikov'; users@ovirt.org
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 Sorry to press the point, but I'm trying to mount gluster storage, not NFS 
 storage.
 Is there such a document for gluster storage?


 How can I see which line of the metadata file ovirt considered invalid?



 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Tuesday, June 19, 2012 7:55 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/19/2012 11:34 AM, зоррыч wrote:
 I do not understand you.
 The directory where the gluster storage is mounted is writable, and vdsm 
 successfully creates the necessary files in it.

 can you please try the NFS troubleshooting approach on this first to try and 
 diagnose the issue?
 http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues





 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 7:07 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 06:03 PM, зоррыч wrote:
 Posix FS storage

 and you can mount this from vdsm via sudo with same mount options and use it?




 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 6:29 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 04:50 PM, зоррыч wrote:
 Any ideas for solutions?

 Is this a bug?

 *From:*users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On 
 Behalf Of *зоррыч
 *Sent:* Sunday, June 17, 2012 12:04 AM
 *To:* 'Vijay Bellur'; 'Robert Middleswarth'
 *Cc:* users@ovirt.org; 'Daniel Paikov'
 *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 I have updated GlusterFS and volume successfully created

 Thank you!

 But I was not able to mount a storage domain.

 an NFS or Posix FS storage domain?


 Vdsm.log:

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init
 -> state preparing

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection(domType=6,
 spUUID='----', conList=[{'port': 
 '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': 
 '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 '----'}], options=None)

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection, Return response: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::1172::TaskManager.Task::(prepare)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state preparing
 -> state finished

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::task::978::TaskManager.Task::(_decref)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::task::588::TaskManager.Task::(_updateState)
 Task

Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

2012-06-20 Thread зоррыч
Sorry to press the point, but I'm trying to mount gluster storage, not NFS 
storage.
Is there such a document for gluster storage?


How can I see which line of the metadata file ovirt considered invalid?



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Tuesday, June 19, 2012 7:55 PM
To: зоррыч
Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

On 06/19/2012 11:34 AM, зоррыч wrote:
 I do not understand you.
 The directory where the gluster storage is mounted is writable, and vdsm 
 successfully creates the necessary files in it.

can you please try the NFS troubleshooting approach on this first to try and 
diagnose the issue?
http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues





 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 7:07 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 06:03 PM, зоррыч wrote:
 Posix FS storage

 and you can mount this from vdsm via sudo with same mount options and use it?




 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 6:29 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 04:50 PM, зоррыч wrote:
 Any ideas for solutions?

 Is this a bug?

 *From:*users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On 
 Behalf Of *зоррыч
 *Sent:* Sunday, June 17, 2012 12:04 AM
 *To:* 'Vijay Bellur'; 'Robert Middleswarth'
 *Cc:* users@ovirt.org; 'Daniel Paikov'
 *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 I have updated GlusterFS and volume successfully created

 Thank you!

 But I was not able to mount a storage domain.

 an NFS or Posix FS storage domain?


 Vdsm.log:

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init
 -> state preparing

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 '----'}], options=None)

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection, Return response: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::1172::TaskManager.Task::(prepare)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state preparing
 -> state finished

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::task::978::TaskManager.Task::(_decref)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::task::588::TaskManager.Task::(_updateState)
 Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state init
 -> state preparing

 Thread-21026::INFO::2012-06-16
 15:43:21,527::logUtils::37::dispatcher::(wrapper) Run and protect:
 connectStorageServer(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}], options=None)

 Thread-21026::DEBUG::2012-06-16
 15:43:21,530::lvm::460::OperationMutex::(_invalidateAllPvs) 
 Operation 'lvm invalidate operation' got the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::462::OperationMutex::(_invalidateAllPvs) 
 Operation 'lvm invalidate operation' released the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::472::OperationMutex::(_invalidateAllVgs) 
 Operation 'lvm invalidate operation' got the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::474

Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

2012-06-20 Thread зоррыч
Exactly the same problem:
http://www.mail-archive.com/vdsm-devel@lists.fedorahosted.org/msg00555.html



-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
зоррыч
Sent: Wednesday, June 20, 2012 3:11 PM
To: 'Itamar Heim'
Cc: 'Daniel Paikov'; users@ovirt.org
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

Sorry to press the point, but I'm trying to mount gluster storage, not NFS 
storage.
Is there such a document for gluster storage?


How can I see which line of the metadata file ovirt considered invalid?



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Tuesday, June 19, 2012 7:55 PM
To: зоррыч
Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

On 06/19/2012 11:34 AM, зоррыч wrote:
 I do not understand you.
 The directory where the gluster storage is mounted is writable, and vdsm 
 successfully creates the necessary files in it.

can you please try the NFS troubleshooting approach on this first to try and 
diagnose the issue?
http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues





 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 7:07 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 06:03 PM, зоррыч wrote:
 Posix FS storage

 and you can mount this from vdsm via sudo with same mount options and use it?




 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Monday, June 18, 2012 6:29 PM
 To: зоррыч
 Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
 Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 On 06/18/2012 04:50 PM, зоррыч wrote:
 Any ideas for solutions?

 Is this a bug?

 *From:*users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On 
 Behalf Of *зоррыч
 *Sent:* Sunday, June 17, 2012 12:04 AM
 *To:* 'Vijay Bellur'; 'Robert Middleswarth'
 *Cc:* users@ovirt.org; 'Daniel Paikov'
 *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 I have updated GlusterFS and volume successfully created

 Thank you!

 But I was not able to mount a storage domain.

 an NFS or Posix FS storage domain?


 Vdsm.log:

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init
 -  state preparing

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 '----'}], options=None)

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection, Return response: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::1172::TaskManager.Task::(prepare)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state preparing
 -> state finished

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll)
 Owner.releaseAll requests {} resources {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::task::978::TaskManager.Task::(_decref)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::task::588::TaskManager.Task::(_updateState)
 Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state init
 -> state preparing

 Thread-21026::INFO::2012-06-16
 15:43:21,527::logUtils::37::dispatcher::(wrapper) Run and protect:
 connectStorageServer(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}], options=None)

 Thread-21026::DEBUG::2012-06-16
 15:43:21,530::lvm::460::OperationMutex::(_invalidateAllPvs)
 Operation 'lvm invalidate operation' got the operation mutex

 Thread

Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

2012-06-18 Thread зоррыч
Posix FS storage



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Monday, June 18, 2012 6:29 PM
To: зоррыч
Cc: 'Vijay Bellur'; 'Robert Middleswarth'; users@ovirt.org; 'Daniel Paikov'
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

On 06/18/2012 04:50 PM, зоррыч wrote:
 Any ideas for solutions?

 Is this a bug?

 *From:*users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] *On 
 Behalf Of *зоррыч
 *Sent:* Sunday, June 17, 2012 12:04 AM
 *To:* 'Vijay Bellur'; 'Robert Middleswarth'
 *Cc:* users@ovirt.org; 'Daniel Paikov'
 *Subject:* Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)

 I have updated GlusterFS and volume successfully created

 Thank you!

 But I was not able to mount a storage domain.

an NFS or Posix FS storage domain?


 Vdsm.log:

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21025::DEBUG::2012-06-16
 15:43:21,495::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state init -> 
 state preparing

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::37::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 '----'}], options=None)

 Thread-21025::INFO::2012-06-16
 15:43:21,503::logUtils::39::dispatcher::(wrapper) Run and protect:
 validateStorageServerConnection, Return response: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::1172::TaskManager.Task::(prepare)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::finished: {'statuslist':
 [{'status': 0, 'id': '----'}]}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::task::588::TaskManager.Task::(_updateState)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::moving from state preparing
 -> state finished

 Thread-21025::DEBUG::2012-06-16
 15:43:21,503::resourceManager::809::ResourceManager.Owner::(releaseAll
 ) Owner.releaseAll requests {} resources {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-21025::DEBUG::2012-06-16
 15:43:21,504::task::978::TaskManager.Task::(_decref)
 Task=`8d841c96-43e3-4d4b-b115-a36c4adf695a`::ref 0 aborting False

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-21026::DEBUG::2012-06-16
 15:43:21,526::task::588::TaskManager.Task::(_updateState)
 Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state init -> 
 state preparing

 Thread-21026::INFO::2012-06-16
 15:43:21,527::logUtils::37::dispatcher::(wrapper) Run and protect:
 connectStorageServer(domType=6,
 spUUID='----', conList=[{'port': '',
 'connection': '10.1.20.7:/sd2', 'iqn': '', 'portal': '', 'user': '',
 'vfs_type': 'glusterfs', 'password': '**', 'id':
 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}], options=None)

 Thread-21026::DEBUG::2012-06-16
 15:43:21,530::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
 'lvm invalidate operation' got the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
 'lvm invalidate operation' released the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
 'lvm invalidate operation' got the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
 'lvm invalidate operation' released the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
 'lvm invalidate operation' got the operation mutex

 Thread-21026::DEBUG::2012-06-16
 15:43:21,531::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
 'lvm invalidate operation' released the operation mutex

 Thread-21026::INFO::2012-06-16
 15:43:21,532::logUtils::39::dispatcher::(wrapper) Run and protect:
 connectStorageServer, Return response: {'statuslist': [{'status': 0,
 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}

 Thread-21026::DEBUG::2012-06-16
 15:43:21,532::task::1172::TaskManager.Task::(prepare)
 Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::finished: {'statuslist':
 [{'status': 0, 'id': 'e7766e1d-f2c6-45ee-900e-00c6689649cd'}]}

 Thread-21026::DEBUG::2012-06-16
 15:43:21,532::task::588::TaskManager.Task::(_updateState)
 Task=`2a6538e5-b961-478a-bce6-f5ded1a62bca`::moving from state preparing
 -> state finished

 Thread-21026::DEBUG::2012-06-16
 15:43:21,532::resourceManager::809::ResourceManager.Owner::(releaseAll
 ) Owner.releaseAll requests {} resources {}

 Thread-21026::DEBUG

Re: [Users] Fwd: Ovirt 3.1 and backup (created in storage 3.0)

2012-06-18 Thread зоррыч
Can I now manually edit the OVF file to restore the virtual machine?
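
For reference, in an export domain the OVFs live next to the images, so
inspecting (and hand-editing a copy of) one looks roughly like this; the
/mnt/export mount path and the <sd-uuid>/<vm-uuid> placeholders are
assumptions for an NFS export domain:

find /mnt/export -name '*.ovf'
cp /mnt/export/<sd-uuid>/master/vms/<vm-uuid>/<vm-uuid>.ovf ~/wifi-test.ovf.bak   # keep a backup
xmllint --noout /mnt/export/<sd-uuid>/master/vms/<vm-uuid>/<vm-uuid>.ovf          # well-formed XML?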

-Original Message-
From: Shahar Havivi [mailto:shah...@redhat.com] 
Sent: Monday, June 18, 2012 6:46 PM
To: зоррыч
Cc: users@ovirt.org; 'Omer Frenkel'; 'Itamar Heim'
Subject: Re: Fwd: [Users] Ovirt 3.1 and backup (created in storage 3.0)

On 18.06.12 17:42, зоррыч wrote:
 Hi
 In an attachment
 
 -Original Message-
 From: Shahar Havivi [mailto:shah...@redhat.com]
 Sent: Sunday, June 17, 2012 10:56 AM
 To: зоррыч
 Cc: users@ovirt.org; Omer Frenkel; Itamar Heim
 Subject: Re: Fwd: [Users] Ovirt 3.1 and backup (created in storage 
 3.0)
 
  Subject: [Users] Ovirt 3.1 and backup (created in storage 3.0)
  Date: Fri, 15 Jun 2012 21:54:53 +0400
  From: зоррыч <zo...@megatrone.ru>
  To: users@ovirt.org
  
  I'm trying to import a virtual machine from the backup storage 
  created in ovirt 3.0.
  
  Ovirt 3.1 does not detect the presence of virtual machines in the 
  backup storage, giving the error:
  
  Failed to read VM wifi-test OVF, it may be corrupted
  
  
  How do I restore the virtual machine?
 can you attach the log file and the ovf file?
I will replace the ovf file...
thanks
 
  
  
  
  
  







Re: [Users] Failed to build (ovirt version 3.1.0_0001)

2012-06-14 Thread зоррыч
I downloaded the src RPM package from 
http://ovirt.org/releases/beta/src/ovirt-engine-3.1.0_0001-0.gitdd65f3.fc17.src.rpm
unpacked it:
rpm -ihv ovirt-engine-3.1.0_0001-0.gitdd65f3.fc17.src.rpm
and tried to build it:
cd rpmbuild/SPECS/
rpmbuild -bb ovirt-engine.spec

During the build, Maven issued an error.

[root@noc-2 SPECS]# mvn --version
Warning: JAVA_HOME environment variable is not set.
Apache Maven 2.2.1 (r801777; 2009-08-06 15:16:01-0400)
Java version: 1.6.0_24
Java home: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: linux version: 2.6.32-220.7.1.el6.x86_64 arch: amd64 Family: 
unix



[root@noc-2 SPECS]# locale -a | grep en
en_AG
en_AG.utf8
en_AU
en_AU.iso88591
en_AU.utf8
en_BW
en_BW.iso88591
en_BW.utf8
en_CA
en_CA.iso88591
en_CA.utf8
en_DK
en_DK.iso88591
en_DK.utf8
en_GB
en_GB.iso88591
en_GB.iso885915
en_GB.utf8
en_HK
en_HK.iso88591
en_HK.utf8
en_IE
en_IE@euro
en_IE.iso88591
en_IE.iso885915@euro
en_IE.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ
en_NZ.iso88591
en_NZ.utf8
en_PH
en_PH.iso88591
en_PH.utf8
en_SG
en_SG.iso88591
en_SG.utf8
en_US
en_US.iso88591
en_US.iso885915
en_US.utf8
en_ZA
en_ZA.iso88591
en_ZA.utf8
en_ZW
en_ZW.iso88591
en_ZW.utf8
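
Two things stand out above: JAVA_HOME is unset, and the compile errors below
complain about encoding ASCII. A plausible workaround (an assumption, not
verified against this exact tree) is to force a UTF-8 locale and JAVA_HOME
before rebuilding:

export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
export MAVEN_OPTS=-Dfile.encoding=UTF-8
rpmbuild -bb ovirt-engine.spec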

-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Thursday, June 14, 2012 6:51 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Failed to build (ovirt version 3.1.0_0001)

command line you used?
maven version?
git repo?
any special locale on your system?

On 06/14/2012 02:04 PM, зоррыч wrote:
 An error in the encoding:

 [INFO] [resources:testResources {execution: default-testResources}]

 [INFO] Using 'UTF-8' encoding to copy filtered resources.

 [INFO] skip non existing resourceDirectory 
 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/resources

 [INFO] [compiler:testCompile {execution: default-testCompile}]

 [INFO] Compiling 22 source files to
 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/target/test-classes

 [INFO] ------------------------------------------------------------------------

 [ERROR] BUILD FAILURE

 [INFO] ------------------------------------------------------------------------

 [INFO] Compilation failure

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,83] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,84] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,86] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,87] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,89] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,90] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,91] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,92] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,93] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,94] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,96] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,97] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine-3.1.0_0001/backend/manager/modules/c
 ommon/src/test/java/org/ovirt/engine/core/common/utils/ValidationUtils
 Test.java:[26,98] unmappable character for encoding ASCII

 /root/rpmbuild/BUILD/ovirt-engine

Re: [Users] Failed to initialize storage

2012-05-20 Thread зоррыч
Thanks

-Original Message-
From: Haim Ateya [mailto:hat...@redhat.com] 
Sent: Sunday, May 20, 2012 1:29 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Failed to initialize storage

vdsm now requires a higher version of lvm: 

Requires: lvm2 >= 2.02.95

please use the correct version and try again. 

we introduced this requirement in commit 
aa709c48778de1aadfe8331160280e51e2a83587

Thanks, 

Haim
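
A minimal sketch of checking and fixing that on the host (package names per
the output quoted below; which repository actually carries 2.02.95 is left as
an assumption):

rpm -q lvm2                      # below: lvm2-2.02.87-6.el6.x86_64, too old
yum upgrade -y lvm2 lvm2-libs    # needs a repo providing lvm2 >= 2.02.95
rpm -q lvm2                      # verify before retrying the host install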


- Original Message -
 From: зоррыч <zo...@megatrone.ru>
 To: Haim Ateya <hat...@redhat.com>
 Cc: users@ovirt.org
 Sent: Sunday, May 20, 2012 12:14:43 PM
 Subject: RE: [Users] Failed to initialize storage
 
 
 
 
 Host:
 
 [root@noc-3-synt ~]# rpm -qa | grep lvm2
 
 lvm2-libs-2.02.87-6.el6.x86_64
 
 lvm2-2.02.87-6.el6.x86_64
 
 
 
 ovirt:
 
 [root@noc-2 ~]# rpm -qa | grep lvm2
 
 lvm2-libs-2.02.87-6.el6.x86_64
 
 lvm2-2.02.87-6.el6.x86_64
 
 
 
 
 
 
 
 
 
 From: Haim Ateya [mailto:hat...@redhat.com]
 Sent: Sunday, May 20, 2012 8:03 AM
 To: зоррыч
 Cc: users@ovirt.org
 Subject: Re: [Users] Failed to initialize storage
 
 
 
 
 Hi,
 
 
 
 
 
 What version of lvm2 are you using?
 
 Haim
 
 
 
 On May 20, 2012, at 1:16, зоррыч <zo...@megatrone.ru> wrote:
 
 
 
 
 Hi.
 
 I installed ovirt and vdsm version:
 
 [root@noc-2 vds]# rpm -qa | grep ovirt-engine
 
 ovirt-engine-image-uploader-3.1.0_0001-1.8.el6.x86_64
 
 ovirt-engine-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-restapi-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-notification-service-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-jboss-deps-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-userportal-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-tools-common-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-setup-plugin-allinone-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-jbossas-1.2-2.fc16.x86_64
 
 ovirt-engine-log-collector-3.1.0_0001-1.8.el6.x86_64
 
 ovirt-engine-setup-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-iso-uploader-3.1.0_0001-1.8.el6.x86_64
 
 ovirt-engine-dbscripts-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-sdk-1.3-1.el6.noarch
 
 ovirt-engine-backend-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-config-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-genericapi-3.1.0_0001-1.8.el6.noarch
 
 ovirt-engine-webadmin-portal-3.1.0_0001-1.8.el6.noarch
 
 
 
 [root@noc-2 vds]# rpm -qa | grep vdsm
 
 vdsm-python-4.9.6-0.223.gitb3c6b0c.el6.x86_64
 
 vdsm-bootstrap-4.9.6-0.223.gitb3c6b0c.el6.noarch
 
 vdsm-4.9.6-0.223.gitb3c6b0c.el6.x86_64
 
 
 
 Installing a new host succeeds, and the host reboots.
 
 However, after rebooting, the host status is:
 
 Host 10.1.20.7 is initializing. Message: Failed to initialize storage
 
 
 
 In the logs:
 
 Engine.log:
 
 2012-05-19 17:36:45,183 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-88) Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand
 return value
 
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@60828
 48f
 
 2012-05-19 17:36:45,183 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-88) Vds: 10.1.20.7
 
 2012-05-19 17:36:45,183 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
 (QuartzScheduler_Worker-88) Command GetCapabilitiesVDS execution 
 failed. Error: VDSRecoveringException: Failed to initialize storage
 
 2012-05-19 17:36:47,203 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-91) Command
 org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand
 return value
 
 org.ovirt.engine.core.vdsbroker.vdsbroker.VDSInfoReturnForXmlRpc@70db2
 9ad
 
 2012-05-19 17:36:47,203 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
 (QuartzScheduler_Worker-91) Vds: 10.1.20.7
 
 2012-05-19 17:36:47,203 ERROR
 [org.ovirt.engine.core.vdsbroker.VDSCommandBase]
 (QuartzScheduler_Worker-91) Command GetCapabilitiesVDS execution 
 failed. Error: VDSRecoveringException: Failed to initialize storage
 
 
 
 Vdsm.log(host):
 
 MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run)
 _MainThread(MainThread, started 140055851738880)
 
 MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run)
 Thread(libvirtEventLoop, started daemon 140055763654400)
 
 MainThread::INFO::2012-05-19 17:21:54,938::vdsm::78::vds::(run)
 WorkerThread(Thread-5, started daemon 140055620335360)
 
 MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run)
 WorkerThread(Thread-8, started daemon 140055249151744)
 
 MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run)
 WorkerThread(Thread-10, started daemon 140055228172032)
 
 MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run)
 KsmMonitorThread(KsmMonitor, started daemon 140054789879552)
 
 MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run)
 WorkerThread(Thread-3, started daemon 140055641315072)
 
 MainThread::INFO::2012-05-19 17:21:54,939::vdsm::78::vds::(run)
 WorkerThread(Thread-6, started daemon 140055609845504)
 
 MainThread::INFO::2012-05-19 17:21:54,940::vdsm

Re: [Users] install failed host in ovirt 3.1.0_0001-1.8

2012-04-10 Thread зоррыч
Hi.
I have reinstalled ovirt.
The http://[engine machine]:8080/engine.ssh.key.txt page loads successfully.

But the error still reproduces.

Logs(node):
[root@noc-4-m77 tmp]# cat vds_installer.124826.log
Tue, 10 Apr 2012 09:39:53 DEBUG Start VDS Installation 
Tue, 10 Apr 2012 09:39:53 DEBUG get_id_line: read line Red Hat Enterprise 
Linux Server release 6.2 (Santiago).
Tue, 10 Apr 2012 09:39:53 DEBUG lsb_release: input line Red Hat Enterprise 
Linux Server release 6.2 (Santiago).
Tue, 10 Apr 2012 09:39:53 DEBUG lsb_release: return: RedHatEnterpriseServer.
Tue, 10 Apr 2012 09:39:53 DEBUG <BSTRAP component='INSTALLER' status='OK' 
message='Test platform succeeded'/>
Tue, 10 Apr 2012 09:39:53 DEBUG trying to fetch deployUtil.py script cmd = 
'/usr/bin/curl -s -k -w %{http_code} -o /tmp/deployUtil.py 
http://noc-2-synt.rutube.ru:80/Components/vds/deployUtil.py'
Tue, 10 Apr 2012 09:39:53 DEBUG <BSTRAP component='INSTALLER LIB' 
status='OK' message='deployUtil.py download succeeded'/>
Tue, 10 Apr 2012 09:39:53 DEBUG trying to fetch vds_bootstrap.py script cmd 
= '/usr/bin/curl -s -k -w %{http_code} -o 
/tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py 
http://noc-2-synt.rutube.ru:80/Components/vds/vds_bootstrap.py'
Tue, 10 Apr 2012 09:39:54 DEBUG <BSTRAP component='INSTALLER' status='OK' 
message='vds_bootstrap.py download succeeded'/>
Tue, 10 Apr 2012 09:39:54 DEBUG trying to run 
/tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py script cmd = 
'/tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py -v -O rutue -t 
2012-04-10T13:43:46 -f /tmp/firewall.conf.e65ebf8f-205b-4807-ac98-9024517f33aa 
http://noc-2-synt.rutube.ru:80/Components/vds/ 10.2.20.8 
e65ebf8f-205b-4807-ac98-9024517f33aa'
[root@noc-4-m77 tmp]#

[root@noc-4-m77 tmp]# cat vds_bootstrap.62096.log
[root@noc-4-m77 tmp]#

[root@noc-4-m77 tmp]# 
/tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py -v -O rutue -t 
2012-04-10T13:43:46 -f /tmp/firewall.conf.e65ebf8f-205b-4807-ac98-9024517f33aa 
http://noc-2-synt.rutube.ru:80/Components/vds/ 10.2.20.8 
e65ebf8f-205b-4807-ac98-9024517f33aa
Usage: vds_bootstrap.py [options] url subject random_num

options:
-O organizationName
-t systemTime
-u {true|false} -- use rhev-m-deployed yum repo
-f firewall_rules_file -- override firewall rules.
obsolete options:
-n netconsole_host:port
-r rev_num
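
Since the manual re-run only prints the usage text, a quick way to check
whether the downloaded bootstrap script has drifted from the copy the
engine currently serves (URL taken from the log above; comparing checksums
is just one convenient approach):

md5sum /tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py
curl -s http://noc-2-synt.rutube.ru:80/Components/vds/vds_bootstrap.py | md5sum

Differing checksums would mean the node ran a stale script.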






-Original Message-
From: Doron Fediuck [mailto:dfedi...@redhat.com] 
Sent: Sunday, April 08, 2012 8:18 AM
To: users@ovirt.org; zo...@megatrone.ru
Subject: RE: [Users] install failed host in ovirt 3.1.0_0001-1.8

Please check if this host can fetch
the engine's public key using:
wget http://[engine machine]:8080/engine.ssh.key.txt
If it fails, see if it's a networking issue.
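
A non-interactive variant of the same probe, assuming the engine URL above;
curl's write-out option prints just the HTTP status:

curl -sk -o /dev/null -w '%{http_code}\n' 'http://[engine machine]:8080/engine.ssh.key.txt'

A 200 means the key is reachable; anything else points at a network or
engine-side problem.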

Sent from my Android phone. Please ignore typos.


-Original Message-
From: зоррыч [zo...@megatrone.ru]
Received: Friday, 06 Apr 2012, 18:03
To: users@ovirt.org
Subject: [Users] install failed host in ovirt 3.1.0_0001-1.8


Hi

I installed oVirt version 3.1.0_0001-1.8 (not stable).

When adding the host, an error is shown: Install failed 

Host 10.1.20.7 installation failed. Please refer to log files for further 
details..

 

The /tmp directory on the host:

 

[root@noc-3-synt tmp]# ls -lh

total 172K

-rw-r--r--. 1 root root  45K Apr  6 10:43 deployUtil.py

-rw-r--r--. 1 root root  41K Apr  6 10:43 deployUtil.pyc

-rw-r--r--. 1 root root0 Apr  6 10:43 vds_bootstrap.689232.log

-rwxr-xr-x. 1 root root  32K Apr  6 10:43 
vds_bootstrap_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.py

-rw-r--r--. 1 root root  27K Apr  6 10:43 
vds_bootstrap_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.pyc

-rw-r--r--. 1 root root 1.6K Apr  6 10:43 vds_installer.567418.log

-rwxr-xr-x. 1 root root  16K Apr  6 10:43 
vds_installer_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.py 

 

Logs:

 

[root@noc-3-synt tmp]# cat vds_installer.567418.log

 

Fri, 06 Apr 2012 10:43:31 DEBUG Start VDS Installation 

Fri, 06 Apr 2012 10:43:31 DEBUG get_id_line: read line Red Hat Enterprise
Linux Server release 6.2 (Santiago).

Fri, 06 Apr 2012 10:43:31 DEBUG lsb_release: input line Red Hat
Enterprise Linux Server release 6.2 (Santiago).

Fri, 06 Apr 2012 10:43:31 DEBUG lsb_release: return:
RedHatEnterpriseServer.

Fri, 06 Apr 2012 10:43:31 DEBUG <BSTRAP component='INSTALLER' status='OK'
message='Test platform succeeded'/>

Fri, 06 Apr 2012 10:43:31 DEBUG trying to fetch deployUtil.py script cmd
= '/usr/bin/curl -s -k -w %{http_code} -o /tmp/deployUtil.py 
http://noc-2-synt.rutube.ru:80/Components/vds/deployUtil.py'

Fri, 06 Apr 2012 10:43:31 DEBUG <BSTRAP component='INSTALLER LIB'
status='OK' message='deployUtil.py download succeeded'/>

Fri, 06 Apr 2012 10:43:31 DEBUG trying to fetch vds_bootstrap.py script
cmd = '/usr/bin/curl -s -k -w %{http_code} -o 
/tmp/vds_bootstrap_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.py
http://noc-2-synt.rutube.ru:80/Components/vds/vds_bootstrap.py'

Fri, 06 Apr 

Re: [Users] install failed host in ovirt 3.1.0_0001-1.8

2012-04-10 Thread зоррыч
oVirt version 3.1.0_0001-1.8

I built the RPM packages from the source code.
Commands:
git clone git://gerrit.ovirt.org/ovirt-engine
make
make test
make rpm
yum localinstall /path/to/rpms/*.rpm
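
To confirm which build actually got installed, listing the installed
packages is a reasonable sanity check (a suggestion, not from the original
mail):

rpm -qa | grep -i ovirt

The printed versions should match the source tag that was built.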




-Original Message-
From: Doron Fediuck [mailto:dfedi...@redhat.com] 
Sent: Tuesday, April 10, 2012 6:14 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] install failed host in ovirt 3.1.0_0001-1.8

Hi Zorrych,
It looks like your bootstrap scripts are out of sync.
Are you using the RPMs or the source code?
Please provide some more information on the code version you're using.


On 10/04/12 16:51, зоррыч wrote:
 Hi.
 I have reinstalled oVirt.
 The http://[engine machine]:8080/engine.ssh.key.txt page loads successfully.
 
 But the error still occurs.
 
 Logs(node):
 [root@noc-4-m77 tmp]# cat vds_installer.124826.log
 Tue, 10 Apr 2012 09:39:53 DEBUG Start VDS Installation 
 Tue, 10 Apr 2012 09:39:53 DEBUG get_id_line: read line Red Hat Enterprise 
 Linux Server release 6.2 (Santiago).
 Tue, 10 Apr 2012 09:39:53 DEBUG lsb_release: input line Red Hat Enterprise 
 Linux Server release 6.2 (Santiago).
 Tue, 10 Apr 2012 09:39:53 DEBUG lsb_release: return: 
 RedHatEnterpriseServer.
 Tue, 10 Apr 2012 09:39:53 DEBUG <BSTRAP component='INSTALLER' status='OK' 
 message='Test platform succeeded'/>
 Tue, 10 Apr 2012 09:39:53 DEBUG trying to fetch deployUtil.py script cmd = 
 '/usr/bin/curl -s -k -w %{http_code} -o /tmp/deployUtil.py 
 http://noc-2-synt.rutube.ru:80/Components/vds/deployUtil.py'
 Tue, 10 Apr 2012 09:39:53 DEBUG <BSTRAP component='INSTALLER LIB' 
 status='OK' message='deployUtil.py download succeeded'/>
 Tue, 10 Apr 2012 09:39:53 DEBUG trying to fetch vds_bootstrap.py script 
 cmd = '/usr/bin/curl -s -k -w %{http_code} -o 
 /tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py 
 http://noc-2-synt.rutube.ru:80/Components/vds/vds_bootstrap.py'
 Tue, 10 Apr 2012 09:39:54 DEBUG <BSTRAP component='INSTALLER' status='OK' 
 message='vds_bootstrap.py download succeeded'/>
 Tue, 10 Apr 2012 09:39:54 DEBUG trying to run 
 /tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py script cmd = 
 '/tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py -v -O rutue -t 
 2012-04-10T13:43:46 -f 
 /tmp/firewall.conf.e65ebf8f-205b-4807-ac98-9024517f33aa 
 http://noc-2-synt.rutube.ru:80/Components/vds/ 10.2.20.8 
 e65ebf8f-205b-4807-ac98-9024517f33aa'
 [root@noc-4-m77 tmp]#
 
 [root@noc-4-m77 tmp]# cat vds_bootstrap.62096.log
 [root@noc-4-m77 tmp]#
 
 [root@noc-4-m77 tmp]# 
 /tmp/vds_bootstrap_e65ebf8f-205b-4807-ac98-9024517f33aa.py -v -O rutue 
 -t 2012-04-10T13:43:46 -f 
 /tmp/firewall.conf.e65ebf8f-205b-4807-ac98-9024517f33aa 
 http://noc-2-synt.rutube.ru:80/Components/vds/ 10.2.20.8 
 e65ebf8f-205b-4807-ac98-9024517f33aa
 Usage: vds_bootstrap.py [options] url subject random_num
 
 options:
 -O organizationName
 -t systemTime
 -u {true|false} -- use rhev-m-deployed yum repo
 -f firewall_rules_file -- override firewall rules.
 obsolete options:
 -n netconsole_host:port
 -r rev_num
 
 
 
 
 
 
 -Original Message-
 From: Doron Fediuck [mailto:dfedi...@redhat.com]
 Sent: Sunday, April 08, 2012 8:18 AM
 To: users@ovirt.org; zo...@megatrone.ru
 Subject: RE: [Users] install failed host in ovirt 3.1.0_0001-1.8
 
 Please check if this host can fetch
 the engine's public key using:
 wget http://[engine machine]:8080/engine.ssh.key.txt
 If it fails, see if it's a networking issue.
 
 Sent from my Android phone. Please ignore typos.
 
 
 -Original Message-
 From: зоррыч [zo...@megatrone.ru]
 Received: Friday, 06 Apr 2012, 18:03
 To: users@ovirt.org
 Subject: [Users] install failed host in ovirt 3.1.0_0001-1.8
 
 
 Hi
 
 I installed oVirt version 3.1.0_0001-1.8 (not stable).
 
 When adding the host, an error is shown: Install failed 
 
 Host 10.1.20.7 installation failed. Please refer to log files for further 
 details..
 
  
 
 The /tmp directory on the host:
 
  
 
 [root@noc-3-synt tmp]# ls -lh
 
 total 172K
 
 -rw-r--r--. 1 root root  45K Apr  6 10:43 deployUtil.py
 
 -rw-r--r--. 1 root root  41K Apr  6 10:43 deployUtil.pyc
 
 -rw-r--r--. 1 root root0 Apr  6 10:43 vds_bootstrap.689232.log
 
 -rwxr-xr-x. 1 root root  32K Apr  6 10:43 
 vds_bootstrap_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.py
 
 -rw-r--r--. 1 root root  27K Apr  6 10:43 
 vds_bootstrap_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.pyc
 
 -rw-r--r--. 1 root root 1.6K Apr  6 10:43 vds_installer.567418.log
 
 -rwxr-xr-x. 1 root root  16K Apr  6 10:43 
 vds_installer_9216e6fd-d74d-470b-8e3a-cb71c79c36c3.py
 
  
 
 Logs:
 
  
 
 [root@noc-3-synt tmp]# cat vds_installer.567418.log
 
  
 
 Fri, 06 Apr 2012 10:43:31 DEBUG Start VDS Installation 
 
 Fri, 06 Apr 2012 10:43:31 DEBUG get_id_line: read line Red Hat Enterprise
 Linux Server release 6.2 (Santiago).
 
 Fri, 06 Apr 2012 10:43:31 DEBUG lsb_release: input line Red Hat
 Enterprise Linux Server release 6.2 (Santiago).
 
 Fri, 06 Apr

Re: [Users] Reinstall Ovirt and XML RPC error

2012-04-06 Thread зоррыч
Port 54321 is open:
[root@noc-2-synt ~]# telnet 10.2.20.8 54321
Trying 10.2.20.8...
Connected to 10.2.20.8.
Escape character is '^]'.
^Csd
sd
Connection closed by foreign host.
[root@noc-2-synt ~]# telnet 10.1.20.7 54321
Trying 10.1.20.7...
Connected to 10.1.20.7.
Escape character is '^]'.
^]
telnet> q
Connection closed.
[root@noc-2-synt ~]#


Where can I find details on where to look for the SSL key?





-Original Message-
From: Laszlo Hornyak [mailto:lhorn...@redhat.com] 
Sent: Friday, April 06, 2012 3:05 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Reinstall Ovirt and XML RPC error

Hi,

The most usual reason for this error message is that the ovirt engine is not 
able to connect to the host. Could you check if
- the server running ovirt can connect to the hosts on port 54321
- if the tcp connection is ok, then it could also be an SSL connection problem, 
make sure the ssl key is the same.
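
If the TCP connection is fine, one way to exercise the SSL layer directly,
using the host and port from this thread (the certificate path below is
vdsm's usual default and may differ on your install):

openssl s_client -connect 10.1.20.7:54321 </dev/null
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -fingerprint

Comparing the fingerprint reported on the host with the one the engine
expects shows whether the keys really match.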

Laszlo

- Original Message -
 From: зоррыч zo...@megatrone.ru
 To: users@ovirt.org
 Sent: Friday, April 6, 2012 12:36:15 PM
 Subject: [Users] Reinstall Ovirt and XML RPC error
 
 
 
 
 
 Hi
 
 I have reinstalled oVirt (removed and installed it again without removing 
 the database).
 
 After reinstalling, oVirt cannot connect to the servers (hosts); their 
 status is “non-responsive”.
 
 
 
 In the logs:
 
 2012-04-06 06:33:18,675 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-31) XML RPC error in command 
 GetCapabilitiesVDS ( Vds: 10.2.20.8 ), the error was:
 java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException
 
 2012-04-06 06:33:20,151 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-33) XML RPC error in command 
 GetCapabilitiesVDS ( Vds: 10.1.20.7 ), the error was:
 java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException
 
 2012-04-06 06:33:20,742 ERROR
 [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-35) XML RPC error in command 
 GetCapabilitiesVDS ( Vds: 10.2.20.8 ), the error was:
 java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException
 
 
 
 How can I fix it?
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] NFS mounts ovirt storage

2012-03-18 Thread зоррыч
By default, oVirt mounts NFS storage with the following parameters:

/bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs
127.0.0.1:/share /tmp/tmpgcOezk

Can these parameters be set manually?
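
As a sketch of one possible answer: vdsm of this vintage reads its NFS
options from /etc/vdsm/vdsm.conf, where something like the following is
commonly suggested (the section and key name are an assumption; verify
against your vdsm version):

[irs]
nfs_mount_options = soft,nosharecache,timeo=600,retrans=6,vers=3

vdsm must be restarted for the change to take effect.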

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] glusterfs and ovirt

2012-03-05 Thread зоррыч
[root@noc-4-m77 ~]# gluster --version
glusterfs 3.2.5 built on Nov 15 2011 08:43:14
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. http://www.gluster.com
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General 
Public License.
[root@noc-4-m77 ~]#




-Original Message-
From: Balamurugan Arumugam [mailto:barum...@redhat.com] 
Sent: Monday, March 05, 2012 10:30 AM
To: зоррыч
Cc: users@ovirt.org; Itamar Heim
Subject: Re: [Users] glusterfs and ovirt


Hi Zorro,

Can you tell me your Gluster version?

Regards,
Bala


- Original Message -
 From: зоррыч zo...@megatrone.ru
 To: Itamar Heim ih...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, March 2, 2012 12:06:24 AM
 Subject: Re: [Users] glusterfs and ovirt
 
 Good news.
 Does it already work in a test version, or has development not yet begun?
 
 
 -Original Message-
 From: Itamar Heim [mailto:ih...@redhat.com]
 Sent: Thursday, March 01, 2012 7:44 PM
 To: зоррыч
 Cc: users@ovirt.org
 Subject: Re: [Users] glusterfs and ovirt
 
 On 03/01/2012 01:48 PM, зоррыч wrote:
  Hi.
 
  I am testing glusterfs as a storage server. Unfortunately, oVirt has no 
  direct glusterfs support.
 
  Will this feature be added in the future?
 
 I'll let someone else reply on the below, but as for ovirt-gluster 
 integration - yes, it is in the works.
 this gives a general picture of the work being carried out:
 http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt
 
 
  I attempted a workaround: glusterfs is mounted into a folder on a 
  node, and that mount is exported to oVirt via NFS.
 
  It works =)
 
  Now I try to mount NFS to 127.0.0.1 and encounter an error:
 
  Command:
 
  [root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,
  vers=3 -t nfs 127.0.0.1:/share /tmp/tmpgcOezk
 
  Error:
 
  mount.nfs: Unknown error 521
 
  NFS V4 is disabled.
 
  Whereas this mount succeeds:
 
  /bin/mount -t nfs 127.0.0.1:/share /tmp/tmpgtsOetsk
 
  I understand this is not an oVirt problem, but could you suggest any 
  ideas on how to fix it?
 
  To use glusterfs in oVirt, this command must be executed:
 
  mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log 
  noc-1:/mht /share
 
  Can I configure vdsm to run this instead of 
  /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 
  -t nfs 127.0.0.1:/share /tmp/tmpgtsOetsk?
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] glusterfs and ovirt

2012-03-01 Thread зоррыч
Good news.
Does it already work in a test version, or has development not yet begun?


-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Thursday, March 01, 2012 7:44 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] glusterfs and ovirt

On 03/01/2012 01:48 PM, зоррыч wrote:
 Hi.

 I am testing glusterfs as a storage server. Unfortunately, oVirt has no 
 direct glusterfs support.

 Will this feature be added in the future?

I'll let someone else reply on the below, but as for ovirt-gluster
integration - yes, it is in the works.
this gives a general picture of the work being carried out:
http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt


 I attempted a workaround: glusterfs is mounted into a folder on a node, 
 and that mount is exported to oVirt via NFS.

 It works =)

 Now I try to mount NFS to 127.0.0.1 and encounter an error:

 Command:

 [root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,
 vers=3 -t nfs 127.0.0.1:/share /tmp/tmpgcOezk

 Error:

 mount.nfs: Unknown error 521

 NFS V4 is disabled.

 Whereas this mount succeeds:

 /bin/mount -t nfs 127.0.0.1:/share /tmp/tmpgtsOetsk

 I understand this is not an oVirt problem, but could you suggest any 
 ideas on how to fix it?

 To use glusterfs in oVirt, this command must be executed:
 
 mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log 
 noc-1:/mht /share
 
 Can I configure vdsm to run this instead of 
 /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t 
 nfs 127.0.0.1:/share /tmp/tmpgtsOetsk?
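
A hedged, untested workaround for the loopback NFS failure quoted above:
nolock is the option most often suggested when NFS-mounting a local gluster
volume, since gluster's built-in NFS server of that era lacked NLM locking
(an assumption worth verifying against your gluster version):

/bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3,nolock -t nfs 127.0.0.1:/share /tmp/tmpgtsOetsk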



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] migration failed

2012-02-19 Thread зоррыч
Thank you!
There was a wrong hostname on one of the nodes in /etc/hostname.




-Original Message-
From: Nathan Stratton [mailto:nat...@robotics.net] 
Sent: Friday, February 17, 2012 8:27 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] migration failed

On Fri, 17 Feb 2012, зоррыч wrote:

 How do I fix it?
 I checked the hostname on both nodes and found that they resolve 
 correctly (there is an entry in /etc/hostname).
 The hostname is not registered in DNS (!)

Have you tried entering them all in /etc/hosts?
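
For example, hypothetical /etc/hosts entries using the addresses from this
thread (substitute the real fully-qualified names):

10.1.20.7   node1.example.com   node1
10.2.20.8   node2.example.com   node2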


Nathan StrattonCTO, BlinkMind, Inc.
nathan at robotics.net nathan at blinkmind.com
http://www.robotics.nethttp://www.blinkmind.com

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Suspend VM and export or snapshots

2012-02-19 Thread зоррыч
 Hi
Why can I not make a snapshot of, or export, a virtual machine while it is 
suspended? oVirt does not give me that option: to export or snapshot, the 
virtual machine must be turned off.
Is this the intended behavior? Many hypervisors allow a backup (or 
snapshot) of a suspended virtual machine.
Can I manually enable this feature in oVirt? The current behavior is very 
inconvenient.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] Support by Scientific linux?

2012-01-28 Thread зоррыч
Hi.

I am testing oVirt on Scientific Linux 6.1.

Compiling ovirt-engine from source
(http://www.ovirt.org/wiki/Building_Ovirt_Engine) succeeds.

However, when adding a node, the operating system is detected incorrectly:

OS Version: unknown.

The nodes were installed following the instructions at:
http://www.ovirt.org/wiki/Building_Ovirt_Engine

Analyzing the log files, I found an incorrect answer from vdsm (node status
NonOperational, nonOperationalReason = VERSION_INCOMPATIBLE_WITH_CLUSTER)
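
The same capabilities reply can be pulled directly on the node, assuming
the vdsm command-line client is installed (the verb name may vary slightly
between vdsm versions):

vdsClient -s 0 getVdsCaps | grep -i operatingsystem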

/var/log/vdsm/vdsm.log:

Thread-16::DEBUG::2012-01-28 14:08:09,200::clientIF::48::vds::(wrapper)
return getVdsCapabilities with {'status': {'message': 'Done', 'code': 0},
'info': {'HBAInventory': {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:4052e3fcadb'}], 'FC': []}, 'packages2': {'kernel':
{'release': '220.el6.x86_64', 'buildtime': '0', 'version': '2.6.32'},
'spice-server': {'release': '5.el6', 'buildtime': '1323492018', 'version':
'0.8.2'}, 'vdsm': {'release': '63.el6', 'buildtime': '1327784725',
'version': '4.9'}, 'qemu-kvm': {'release': '2.209.el6_2.4', 'buildtime':
'1327361568', 'version': '0.12.1.2'}, 'libvirt': {'release': '23.el6',
'buildtime': '1323231757', 'version': '0.9.4'}, 'qemu-img': {'release':
'2.209.el6_2.4', 'buildtime': '1327361568', 'version': '0.12.1.2'}},
'cpuModel': 'Intel(R) Xeon(R) CPU5140  @ 2.33GHz', 'hooks': {},
'networks': {'virbr0': {'cfg': {}, 'netmask': '255.255.255.0', 'stp': 'on',
'ports': ['virbr0-nic'], 'addr': '192.168.122.1'}}, 'vmTypes': ['kvm',
'qemu'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflus
h,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,lm,constant_tsc,arch_pe
rfmon,pebs,bts,rep_good,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,sss
e3,cx16,xtpr,pdcm,dca,lahf_lm,dts,tpr_shadow,model_486,model_pentium,model_p
entium2,model_pentium3,model_pentiumpro,model_qemu32,model_coreduo,model_cor
e2duo,model_n270,model_Conroe,model_Opteron_G1', 'cpuSockets': '1', 'uuid':
'343C9406-3478-4923-3478-492339393407_00:1c:c4:74:a0:60', 'lastClientIface':
'eth0', 'nics': {'eth1': {'hwaddr': '00:1C:C4:74:A0:61', 'netmask': '',
'speed': 0, 'addr': ''}, 'eth0': {'hwaddr': '00:1C:C4:74:A0:60', 'netmask':
'255.255.255.0', 'speed': 1000, 'addr': '10.1.20.10'}}, 'software_revision':
'63', 'management_ip': '', 'clusterLevels': ['2.3'], 'supportedProtocols':
['2.2', '2.3'], 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:4052e3fcadb',
'memSize': '15949', 'reservedMem': '256', 'bondings': {'bond4': {'hwaddr':
'00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
'bond0': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr':
'', 'slaves': []}, 'bond1': {'hwaddr': '00:00:00:00:00:00', 'cfg': {},
'netmask': '', 'addr': '', 'slaves': []}, 'bond2': {'hwaddr':
'00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr': '', 'slaves': []},
'bond3': {'hwaddr': '00:00:00:00:00:00', 'cfg': {}, 'netmask': '', 'addr':
'', 'slaves': []}}, 'software_version': '4.9', 'cpuSpeed': '2333.331',
'version_name': 'Snow Man', 'vlans': {}, 'cpuCores': 2, 'kvmEnabled':
'true', 'guestOverhead': '65', 'supportedRHEVMs': ['2.3'],
'emulatedMachines': ['pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0',
'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'], 'operatingSystem': {'release': '',
'version': '', 'name': 'unknown'}, 'lastClient': '10.1.20.12'}}

Note the line: 'operatingSystem': {'release':'', 'version':'', 'name':
'unknown'}.

 

 

Command output on the node:

[root@node ~]# lsb_release -a

LSB Version:
:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:print
ing-4.0-amd64:printing-4.0-noarch

Distributor ID: Scientific

Description:Scientific Linux release 6.2 rolling (Carbon)

Release:6.2

Codename:   Carbon

[root@lnode ~]# cat /etc/redhat-release

Scientific Linux release 6.1 (Carbon)

 

 

How do I get vdsm to detect the right operating system?
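
One blunt, untested workaround for this class of problem: back up
/etc/redhat-release and make it temporarily report a RHEL string, on the
assumption that vdsm of this era keys its OS detection off that file
(verify before trying this on a production node):

cp -a /etc/redhat-release /etc/redhat-release.orig
echo 'Red Hat Enterprise Linux Server release 6.2 (Santiago)' > /etc/redhat-release
service vdsmd restart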

 

p.s. Sorry for my english (google translate)

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users