Re: [Users] Boot from ISCSI issue

2012-08-28 Thread Alon Bar-Lev


----- Original Message -----
 From: Jason Lawer ak...@thegeekhood.net
 To: users@ovirt.org
 Sent: Tuesday, August 28, 2012 8:38:07 AM
 Subject: [Users] Boot from ISCSI issue
 
 Hi,
 
 I am noticing that whenever an oVirt host is put into maintenance,
 the iSCSI connection for the root volume is disconnected as well.
 
 Is this expected behaviour? If so, is there a way around this?
 
 Hosts are CentOS 5.3 nodes running the dreyou stable 3.1 build. They
 boot from the same iSCSI array that the data centre uses as its
 storage.
 
 Jason
 

Hello,

Maybe you hit bug [1]; it is unrelated to ovirt-node, but this is good input.

Alon.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=841883
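
As a rough way to confirm what gets torn down (assuming the host uses
open-iscsi, which is what vdsm drives for iSCSI storage), compare the active
iSCSI sessions before and after moving the host to maintenance:

  # list active iSCSI sessions and their targets before maintenance
  iscsiadm -m session -P 1
  # put the host into maintenance from the engine, then list them again;
  # the session backing the root volume should normally survive
  iscsiadm -m session -P 1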


Re: [Users] Boot from ISCSI issue

2012-08-28 Thread Jason Lawer
Thanks mate, 

It's the end of the day here, but tomorrow I will attempt to set up a remote
syslog server and capture the system logs when this happens, to see if that
sheds any light.
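
For reference, a minimal remote syslog setup with rsyslog (which both CentOS
and Fedora ship) could look roughly like this; the collector host name is just
a placeholder:

  # on the collecting server, in /etc/rsyslog.conf: accept UDP syslog on port 514
  $ModLoad imudp
  $UDPServerRun 514

  # on the oVirt host, in /etc/rsyslog.conf: forward everything to the collector
  *.* @logserver.example.com:514

  # restart rsyslog on both machines afterwards
  service rsyslog restart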

I just wanted to make sure this wasn't expected behaviour (such as all paths to
the iSCSI array being torn down) before going too deep into investigating it.

Jason

On 28/08/2012, at 4:26 PM, Alon Bar-Lev alo...@redhat.com wrote:

 
 
 ----- Original Message -----
 From: Jason Lawer ak...@thegeekhood.net
 To: users@ovirt.org
 Sent: Tuesday, August 28, 2012 8:38:07 AM
 Subject: [Users] Boot from ISCSI issue
 
 Hi,
 
 I am noticing that whenever an oVirt host is put into maintenance,
 the iSCSI connection for the root volume is disconnected as well.
 
 Is this expected behaviour? If so, is there a way around this?
 
 Hosts are CentOS 5.3 nodes running the dreyou stable 3.1 build. They
 boot from the same iSCSI array that the data centre uses as its
 storage.
 
 Jason
 
 
 Hello,
 
 Maybe you hit bug [1]; it is unrelated to ovirt-node, but this is good input.
 
 Alon.
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=841883



[Users] Error importing export storage

2012-08-28 Thread зоррыч
Hi

Trying to import an export storage domain created earlier.

But I get this error:

 

There is no storage domain under the specified path. Please check path.
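
For context, a usable export domain should have its domain directory sitting
directly under the exported path; roughly something like the following layout
is expected (<domain-uuid> is a placeholder):

  # on the NFS server: the export should contain the storage domain UUID directory
  ls /home/nfs4
  # inside it there should be a dom_md/ directory (with a metadata file) and images/
  ls /home/nfs4/<domain-uuid>
  cat /home/nfs4/<domain-uuid>/dom_md/metadata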

 

Vdsm.logs:

Thread-99790::DEBUG::2012-08-28
09:17:26,010::task::568::TaskManager.Task::(_updateState)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state init -> state
preparing

Thread-99790::INFO::2012-08-28
09:17:26,010::logUtils::37::dispatcher::(wrapper) Run and protect:
repoStats(options=None)

Thread-99790::INFO::2012-08-28
09:17:26,010::logUtils::39::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {'b0a0e76b-f983-405b-a0af-d0314a1c381a':
{'delay': '0.00292301177979', 'lastCheck': 1346159839.788852, 'code': 0,
'valid': True}}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::1151::TaskManager.Task::(prepare)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::finished:
{'b0a0e76b-f983-405b-a0af-d0314a1c381a': {'delay': '0.00292301177979',
'lastCheck': 1346159839.788852, 'code': 0, 'valid': True}}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::568::TaskManager.Task::(_updateState)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state preparing ->
state finished

Thread-99790::DEBUG::2012-08-28
09:17:26,011::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}

Thread-99790::DEBUG::2012-08-28
09:17:26,011::task::957::TaskManager.Task::(_decref)
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::ref 0 aborting False

Thread-99792::DEBUG::2012-08-28
09:17:26,473::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::568::TaskManager.Task::(_updateState)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state init -> state
preparing

Thread-99792::INFO::2012-08-28
09:17:26,474::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=1,
spUUID='----', conList=[{'connection':
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 'password':
'**', 'id': '----', 'port': ''}],
options=None)

Thread-99792::INFO::2012-08-28
09:17:26,474::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist': [{'status':
0, 'id': '----'}]}

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::1151::TaskManager.Task::(prepare)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::finished: {'statuslist':
[{'status': 0, 'id': '----'}]}

Thread-99792::DEBUG::2012-08-28
09:17:26,474::task::568::TaskManager.Task::(_updateState)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state preparing ->
state finished

Thread-99792::DEBUG::2012-08-28
09:17:26,475::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}

Thread-99792::DEBUG::2012-08-28
09:17:26,475::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}

Thread-99792::DEBUG::2012-08-28
09:17:26,475::task::957::TaskManager.Task::(_decref)
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::ref 0 aborting False

Thread-99793::DEBUG::2012-08-28
09:17:26,494::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

Thread-99793::DEBUG::2012-08-28
09:17:26,495::task::568::TaskManager.Task::(_updateState)
Task=`700181ad-b9d4-411b-bfbc-25a28aa288e2`::moving from state init -> state
preparing

Thread-99793::INFO::2012-08-28
09:17:26,503::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=1,
spUUID='----', conList=[{'connection':
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 'password':
'**', 'id': '----', 'port': ''}],
options=None)

Thread-99793::DEBUG::2012-08-28
09:17:26,505::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
10.1.20.2:/home/nfs4 /rhev/data-center/mnt/10.1.20.2:_home_nfs4' (cwd None)

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::477::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::479::OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,609::lvm::488::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,610::lvm::490::OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex

Thread-99793::DEBUG::2012-08-28
09:17:26,610::lvm::508::OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex

Thread-99793::DEBUG::2012-08-28

Re: [Users] Error importing export storage

2012-08-28 Thread Haim

On 08/28/2012 04:34 PM, зоррыч wrote:
/usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4

Please attach both engine and vdsm logs (full, compressed).
Also, please execute the following commands from the host (vds):

1) /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4


2) vdsClient -s 0 getStorageDomainsList

3) mount

* If you are working in non-secure mode, try vdsClient 0 (without the -s).
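
As an extra quick sanity check from the host, you can also verify that the NFS
server actually exports the path to this client:

  # list the exports offered by the NFS server
  showmount -e 10.1.20.2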




Re: [Users] Error importing export storage

2012-08-28 Thread зоррыч
From which servers do I need to collect the vdsm logs?
I have one node running vdsm, and ovirt-engine runs on the other server.


Node:
[root@noc-3-synt ~]# /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4
mount.nfs: mount point /rhev/data-center/mnt/10.1.20.2:_home_nfs4 does not exist
[root@noc-3-synt ~]# /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 /tmp/foo
[root@noc-3-synt ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg_noc3synt-lv_root
   50G  4.5G   43G  10% /
tmpfs 7.9G 0  7.9G   0% /dev/shm
/dev/sda1 497M  246M  227M  52% /boot
/dev/mapper/vg_noc3synt-lv_home
  491G  7.4G  459G   2% /mht
127.0.0.1:/gluster491G   11G  456G   3% 
/rhev/data-center/mnt/127.0.0.1:_gluster
10.1.20.2:/home/nfs4  493G  304G  164G  65% /tmp/foo
[root@noc-3-synt ~]# vdsClient -s 0 getStorageDomainsList
b0a0e76b-f983-405b-a0af-d0314a1c381a

[root@noc-3-synt ~]# mount
/dev/mapper/vg_noc3synt-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext=system_u:object_r:tmpfs_t:s0)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_noc3synt-lv_home on /mht type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
127.0.0.1:/gluster on /rhev/data-center/mnt/127.0.0.1:_gluster type 
fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.1.20.2:/home/nfs4 on /tmp/foo type nfs 
(rw,soft,nosharecache,timeo=600,retrans=6,addr=10.1.20.2)
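
Note that the first mount only failed because the
/rhev/data-center/mnt/10.1.20.2:_home_nfs4 mount point did not exist when the
command was run by hand; the manual mount to a temporary directory shows the
NFS export itself is reachable. The test mount can be cleaned up afterwards:

  # remove the manual test mount once done
  umount /tmp/foo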




-Original Message-
From: Haim [mailto:hat...@redhat.com] 
Sent: Tuesday, August 28, 2012 5:57 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Error importing export storage

On 08/28/2012 04:34 PM, зоррыч wrote:
 /usr/bin/sudo -n /bin/mount -t nfs -o 
 soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
 /rhev/data-center/mnt/10.1.20.2:_home_nfs4
Please attach both engine and vdsm logs (full, compressed).
Also, please execute the following commands from the host (vds):

1) /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4

2) vdsClient -s 0 getStorageDomainsList

3) mount

* If you are working in non-secure mode, try vdsClient 0 (without the -s).






Re: [Users] Error importing export storage

2012-08-28 Thread Haim

On 08/28/2012 04:34 PM, зоррыч wrote:


Hi

Trying to import an export storage domain created earlier.

But I get this error:


Hi,

Please provide full vdsm and engine logs (compressed).
Also, run the following commands on the host (vds):

- /usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 
/rhev/data-center/mnt/10.1.20.2:_home_nfs4

- mount
- vdsClient -s 0 getStorageDomainsList (if you are working in a 
non-secure mode, don't use the -s).

- vdsClient -s 0 getStorageDomainInfo b23c7ab6-b1d4-4888-8d4a-adc78e61db38
- check permissions on the storage server (NFS): ls -l /home/nfs4 (should be 
vdsm:kvm); see the ownership example below.
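
If the ownership is wrong, something along these lines on the NFS server
should fix it (36:36 are the conventional vdsm uid and kvm gid on the hosts;
adjust if yours differ):

  # make the export readable and writable by vdsm on the hypervisors
  chown -R 36:36 /home/nfs4
  chmod 0755 /home/nfs4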


Another thing: it appears that the domain is already attached to a storage 
pool (as I understand from the domain metadata). Is it possible that you 
haven't detached this domain from its previous data center?



There is no storage domain under the specified path. Please check path.

Vdsm.logs:

Thread-99790::DEBUG::2012-08-28 
09:17:26,010::task::568::TaskManager.Task::(_updateState) 
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state init -> 
state preparing


Thread-99790::INFO::2012-08-28 
09:17:26,010::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)


Thread-99790::INFO::2012-08-28 
09:17:26,010::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {'b0a0e76b-f983-405b-a0af-d0314a1c381a': 
{'delay': '0.00292301177979', 'lastCheck': 1346159839.788852, 'code': 
0, 'valid': True}}


Thread-99790::DEBUG::2012-08-28 
09:17:26,011::task::1151::TaskManager.Task::(prepare) 
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::finished: 
{'b0a0e76b-f983-405b-a0af-d0314a1c381a': {'delay': '0.00292301177979', 
'lastCheck': 1346159839.788852, 'code': 0, 'valid': True}}


Thread-99790::DEBUG::2012-08-28 
09:17:26,011::task::568::TaskManager.Task::(_updateState) 
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::moving from state 
preparing -> state finished


Thread-99790::DEBUG::2012-08-28 
09:17:26,011::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll 
requests {} resources {}


Thread-99790::DEBUG::2012-08-28 
09:17:26,011::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}


Thread-99790::DEBUG::2012-08-28 
09:17:26,011::task::957::TaskManager.Task::(_decref) 
Task=`ac99ba99-f55d-4562-822e-0286ab30566e`::ref 0 aborting False


Thread-99792::DEBUG::2012-08-28 
09:17:26,473::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]


Thread-99792::DEBUG::2012-08-28 
09:17:26,474::task::568::TaskManager.Task::(_updateState) 
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state init -> 
state preparing


Thread-99792::INFO::2012-08-28 
09:17:26,474::logUtils::37::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection(domType=1, 
spUUID='----', conList=[{'connection': 
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 
'password': '**', 'id': '----', 
'port': ''}], options=None)


Thread-99792::INFO::2012-08-28 
09:17:26,474::logUtils::39::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection, Return response: {'statuslist': 
[{'status': 0, 'id': '----'}]}


Thread-99792::DEBUG::2012-08-28 
09:17:26,474::task::1151::TaskManager.Task::(prepare) 
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::finished: {'statuslist': 
[{'status': 0, 'id': '----'}]}


Thread-99792::DEBUG::2012-08-28 
09:17:26,474::task::568::TaskManager.Task::(_updateState) 
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::moving from state 
preparing -> state finished


Thread-99792::DEBUG::2012-08-28 
09:17:26,475::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll 
requests {} resources {}


Thread-99792::DEBUG::2012-08-28 
09:17:26,475::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}


Thread-99792::DEBUG::2012-08-28 
09:17:26,475::task::957::TaskManager.Task::(_decref) 
Task=`e55145ac-1052-454b-92ec-a9eb981c1b04`::ref 0 aborting False


Thread-99793::DEBUG::2012-08-28 
09:17:26,494::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]


Thread-99793::DEBUG::2012-08-28 
09:17:26,495::task::568::TaskManager.Task::(_updateState) 
Task=`700181ad-b9d4-411b-bfbc-25a28aa288e2`::moving from state init -> 
state preparing


Thread-99793::INFO::2012-08-28 
09:17:26,503::logUtils::37::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=1, 
spUUID='----', conList=[{'connection': 
'10.1.20.2:/home/nfs4', 'iqn': '', 'portal': '', 'user': '', 
'password': '**', 'id': '----', 
'port': ''}], options=None)


Thread-99793::DEBUG::2012-08-28 
09:17:26,505::__init__::1164::Storage.Misc.excCmd::(_log) 
'/usr/bin/sudo -n /bin/mount -t nfs -o 
soft,nosharecache,timeo=600,retrans=6 10.1.20.2:/home/nfs4 

Re: [Users] ovirt node & NFS

2012-08-28 Thread Nicholas Kesick

Date: Mon, 27 Aug 2012 09:46:18 +0200
From: w...@dds.nl
To: users@ovirt.org
Subject: [Users] ovirt node & NFS

Hi all,

NOTE: Due to an issue affecting NFS storage domains and the current Linux
3.5-based Fedora kernel, we recommend testing with a Fedora host running a
pre-3.5 version of the Linux kernel.

Any progress on this issue yet?

Winfried



There is an open Bugzilla about the issue (will try to find the number), but 
AFAIK no kernel update that resolves the issue has been pushed yet, and thus no 
rebuilt ovirt-node. It's at the mercy of upstream and testing for now.
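
For reference, checking which kernel a Fedora host actually runs before
testing (the note above asks for a pre-3.5 kernel):

  # show the running kernel version
  uname -r
  # show all installed kernel packages
  rpm -q kernel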


Re: [Users] problem building latest node iso image from git

2012-08-28 Thread Joey Boggs

On 08/28/2012 10:34 AM, Jorick Astrego wrote:

Hi,

After some unsuccessful attempts to install the stable node image on 
our hardware, I found a patch that can help me.


I followed all the steps in the wiki to build a node image and got the 
build process working (Fedora 17).


The successfully built ISO fails to start correctly and I get a login 
prompt. During the build process I had some errors that could be the 
cause:


Installing: sed # [ 43/507]
install-info: No such file or directory for
/usr/share/info/sed.info.gz
Installing: libidn   # [
62/507]
install-info: No such file or directory for
/usr/share/info/libidn.info.gz
Installing: which# [
73/507]
install-info: No such file or directory for
/usr/share/info/which.info.gz
Installing: grep #
[101/507]
install-info: No such file or directory for
/usr/share/info/grep.info.gz
Installing: diffutils#
[163/507]
install-info: No such file or directory for
/usr/share/info/diffutils.info
Installing: wget #
[215/507]
install-info: No such file or directory for
/usr/share/info/wget.info.gz
Installing: krb5-workstation #
[226/507]
install-info: No such file or directory for
/usr/share/info/krb5-user.info.gz
Installing: groff#
[227/507]
install-info: No such file or directory for /usr/share/info/groff.info
Installing: gnupg2   #
[287/507]
install-info: No such file or directory for /usr/share/info/gnupg.info
Installing: gettext  #
[298/507]
install-info: No such file or directory for
/usr/share/info/gettext.info.gz
Installing: tog-pegasus  #
[361/507]
/usr/share/Pegasus/scripts/genOpenPegasusSSLCerts: line 31:
hostname: command not found
Installing: grub2#
[386/507]
install-info: No such file or directory for
/usr/share/info/grub2.info.gz
install-info: No such file or directory for
/usr/share/info/grub2-dev.info.gz
Installing: libvirt-cim  #
[449/507]
/var/tmp/rpm-tmp.KwDxnP: line 5: /etc/init.d/tog-pegasus: No such
file or directory
Installing: vdsm #
[501/507]

/usr/libexec/vdsm/vdsm-gencerts.sh: line 29: hostname: command not
found
error reading information on service cgconfig: No such file or
directory
umount: /tmp/lorax.imgutils.f8u8hE: target is busy.
(In some cases useful info about processes that use
 the device is found by lsof(8) or fuser(1))
losetup: /dev/loop2: detach failed: Device or resource busy
Traceback (most recent call last):
  File "/sbin/mkefiboot", line 139, in <module>
    opt.diskname)
  File "/sbin/mkefiboot", line 38, in mkmacboot
    mkhfsimg(None, outfile, label=label, graft=graft)
  File "/usr/lib/python2.7/site-packages/pylorax/imgutils.py", line 310, in mkhfsimg
    mkfsargs=["-v", label], graft=graft)
  File "/usr/lib/python2.7/site-packages/pylorax/imgutils.py", line 293, in mkfsimage
    do_grafts(graft, mnt, preserve)
  File "/usr/lib/python2.7/site-packages/pylorax/imgutils.py", line 206, in __exit__
    umount(self.mnt)
  File "/usr/lib/python2.7/site-packages/pylorax/imgutils.py", line 120, in umount
    os.rmdir(mnt)
OSError: [Errno 16] Device or resource busy: '/tmp/lorax.imgutils.f8u8hE'

Fixing boot menu
/usr/share/info/dir: could not read (No such file or directory)
and could not create (No such file or directory)
Removing python source files

/etc/selinux/targeted/contexts/files/file_contexts: line 768 has
invalid context system_u:object_r:selinux_login_config_t:s0
/etc/selinux/targeted/contexts/files/file_contexts: line 1739 has
invalid context system_u:object_r:squid_cron_exec_t:s0
/etc/selinux/targeted/contexts/files/file_contexts: line 4225 has
invalid context system_u:object_r:ovirt_exec_t:s0
/etc/selinux/targeted/contexts/files/file_contexts: line 4457 has
invalid context system_u:object_r:ovirt_exec_t:s0
/etc/selinux/targeted/contexts/files/file_contexts: has invalid
context system_u:object_r:selinux_login_config_t:s
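
For the umount/losetup failures near the end of the build, it may help to see
what still holds the temporary image mount and the loop device before retrying
(the lorax temp path differs between runs, so substitute the one from your log):

  # show processes still using the temporary lorax mount point
  fuser -vm /tmp/lorax.imgutils.f8u8hE
  # list configured loop devices and their backing files
  losetup -a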

--
Kind Regards,

Netbulae
Jorick Astrego

Netbulae B.V.
Site:http://www.netbulae.eu




The build continued so it might be OK, although some things need to