Re: [Users] ovirt-node trying to use NFS4

2012-04-16 Thread Itamar Heim

On 04/16/2012 06:20 PM, Mike Burns wrote:

On Mon, 2012-04-16 at 18:13 +0300, Yaniv Kaul wrote:

On 04/16/2012 06:09 PM, Mike Burns wrote:

On Fri, 2012-04-13 at 16:12 -0400, Ian Levesque wrote:

Hi Juan,

On Apr 13, 2012, at 2:57 PM, Juan Hernandez wrote:


On 04/13/2012 07:20 PM, Ian Levesque wrote:

Hi,

I'm in the early stages of testing an ovirt install, using the

official ovirt engine packages + F16.

I configured a storage domain to connect to our gluster storage via

NFS (pity we can't use gluster natively yet). On the engine server, I
added Nfsvers=3 to /etc/nfsmount.conf as instructed here:
http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
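For reference, that wiki change is a single option in the global section of nfsmount.conf; a minimal sketch (the section header comes from nfsmount.conf(5) — only Nfsvers=3 itself is taken from this thread):

```ini
# /etc/nfsmount.conf — force NFSv3 for all NFS mounts on this machine
[ NFSMount_Global_Options ]
Nfsvers=3
```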

I then installed a host via ovirt-node (2.3.0-1) and successfully

added the host, but it refuses to connect to our storage domain
because it's attempting to mount via NFS4. When I add Nfsvers=3
to /etc/nfsmount.conf on the host, it comes to life. Of course, that
will be reset after a reboot.

So what are my options here, other than not use ovirt-node? AFAIK,

gluster doesn't export NFS4, let alone have options to disable it. And
I don't see any way to add nfs mount options when defining a storage
domain...

Take a look at the /etc/vdsm/vdsm.conf configuration file in the

host.

There is a nfs_mount_options parameter there that you can use.
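A sketch of what such an entry might look like (the [irs] section name and the example option values are assumptions; only the nfs_mount_options parameter itself is named in this thread — on oVirt Node, remember to persist the edited file so the change survives a reboot):

```ini
# /etc/vdsm/vdsm.conf — illustrative; section name and values are assumed
[irs]
nfs_mount_options = soft,nosharecache,vers=3
```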

This worked perfectly - thanks for your help (and for pointing me at
the persistent /config directory).

Cheers,
~irl

This issue should be filed as a bug against either engine or vdsm (or
perhaps both).  An interface to set nfs options needs to be provided
through either the engine admin UI or through vdsm in some other way.
Manually editing and persisting files is ok as a workaround, but should
not be recommended or used long term.

Mike


Isn't that what http://www.ovirt.org/wiki/Features/AdvancedNfsOptions
is all about?
Y.


Reading that, it looks like it's geared toward I/O timeouts, but it could
certainly be used for the other options.  I wasn't aware of the feature
previously.

My point was simply that we shouldn't use edit/persist long term for
issues like this.  There needs to be either a bug or feature filed so
that we solve it long term.


well, for anything fancy, this can be used:
http://www.ovirt.org/wiki/Features/PosixFSConnection

but if we have an issue with NFS, I agree we need to solve it without
using this.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Hey everyone how do we upgrade engine?

2012-04-16 Thread Livnat Peer
On 13/04/12 19:59, Itamar Heim wrote:
> On 04/13/2012 06:42 PM, Dominic Kaiser wrote:
>> I am running the 3.0.0_0001-1.6.fc16 ovirt engine and it has been great!  I
>> appreciate Red Hat and all the support from the community on this
>> project; you all are awesome!
>>
>> I have not thought about it yet, but I was looking on the wiki and did not
>> see how to upgrade and reinstall engine.  What have your experiences
>> been doing this, and is it worth it?  And how do I do it?  Sorry if
>> this was posted somewhere else; just let me know if there is info on this
>> already.
> 
> by running the engine-upgrade utility, though I suspect it will have to
> pass some testing/bug fixes for the second release before it can be used
> (doing a test upgrade at that phase on your environment would help
> find issues)
> 

We have one issue that we know of and we'll fix soon -
https://bugzilla.redhat.com/show_bug.cgi?id=790303

If you are interested you can track engine-de...@ovirt.org list for
updates on this in the upcoming week (or keep track of the bug status).

Livnat

>>
>> I have had the greatest experience moving from 2 ESXi servers to oVirt.
>>   You guys and gals rock!
> thanks - nice to hear.


Re: [Users] Use Filer for NFS storage

2012-04-16 Thread Christian Hernandez
Excuse my ignorance...

But how do I apply the patch? I don't mind testing the patch on my systems
(as I am only testing myself); but I...

1) Only have elementary skills with git
2) Don't know how to apply the patch

--Christian
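Since the question above is how to apply a patch at all, here is a self-contained sketch of the `git apply` workflow using a throwaway repository (all paths and names are illustrative; for the Gerrit change itself, the change page's Download links show the exact fetch/checkout commands):

```shell
# Build a tiny repo, save a change as a unified diff, then re-apply it.
cd /tmp && rm -rf patchdemo && mkdir patchdemo && cd patchdemo
git init -q .
printf 'hello\n' > file.txt
git add file.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm init
printf 'world\n' >> file.txt
git diff > patch.diff          # save the working-tree change as a patch file
git checkout -- file.txt       # back to the committed state ("hello" only)
git apply patch.diff           # re-apply the change from the patch file
```

With a patch downloaded from Gerrit saved as `patch.diff`, the final `git apply patch.diff` step is the same.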


On Mon, Apr 16, 2012 at 6:50 AM, Adam Litke  wrote:

> On Sun, Apr 15, 2012 at 04:57:15PM +0300, Dan Kenigsberg wrote:
> > On Fri, Apr 13, 2012 at 12:26:39PM -0700, Christian Hernandez wrote:
> > > Here is the log from the Host
> > >
> > >
> > > *Thread-1821::DEBUG::2012-04-13
> > > 12:18:52,200::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
> > > Thread-1821::ERROR::2012-04-13
> > > 12:18:52,200::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/BindingXMLRPC.py", line 169, in wrapper
> > > return f(*args, **kwargs)
> > >   File "/usr/share/vdsm/BindingXMLRPC.py", line 571, in
> > > poolValidateStorageServerConnection
> > > return pool.validateStorageServerConnection(domType, conList)
> > >   File "/usr/share/vdsm/API.py", line 897, in
> > > validateStorageServerConnection
> > > return self._irs.validateStorageServerConnection(domainType,
> > > AttributeError: 'NoneType' object has no attribute
> > > 'validateStorageServerConnection'
> > > Thread-1822::DEBUG::2012-04-13
> > > 12:18:52,333::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
> > > Thread-1822::ERROR::2012-04-13
> > > 12:18:52,334::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/BindingXMLRPC.py", line 169, in wrapper
> > > return f(*args, **kwargs)
> > >   File "/usr/share/vdsm/BindingXMLRPC.py", line 491, in
> > > poolDisconnectStorageServer
> > > return pool.disconnectStorageServer(domType, conList)
> > >   File "/usr/share/vdsm/API.py", line 823, in disconnectStorageServer
> > > return self._irs.disconnectStorageServer(domainType, self._UUID,
> > > AttributeError: 'NoneType' object has no attribute
> 'disconnectStorageServer'
> >
> > It seems like the interesting traceback should be further up - I
> > suppose self._irs failed initialization and kept its original None
> > value. Please scroll up and try to find out why this failed on Vdsm
> > startup.
> >
> > We have a FIXME in vdsm so that we report such failures better:
> >
> > vdsm/BindingXMLRPC.py: # XXX: Need another way to check if IRS init was
> okay
> >
> > Adam, could you take a further look into this?
>
> Have a look at http://gerrit.ovirt.org/3571 .  This should handle the
> problem
> better by reporting a better error when storage was not initialized
> properly.
>
> --
> Adam Litke 
> IBM Linux Technology Center
>
>


Re: [Users] ovirt-node trying to use NFS4

2012-04-16 Thread Mike Burns
On Mon, 2012-04-16 at 18:13 +0300, Yaniv Kaul wrote:
> On 04/16/2012 06:09 PM, Mike Burns wrote: 
> > On Fri, 2012-04-13 at 16:12 -0400, Ian Levesque wrote:
> > > Hi Juan,
> > > 
> > > On Apr 13, 2012, at 2:57 PM, Juan Hernandez wrote:
> > > 
> > > > On 04/13/2012 07:20 PM, Ian Levesque wrote:
> > > > > Hi,
> > > > > 
> > > > > I'm in the early stages of testing an ovirt install, using the
> > > official ovirt engine packages + F16.
> > > > > I configured a storage domain to connect to our gluster storage via
> > > NFS (pity we can't use gluster natively yet). On the engine server, I
> > > added Nfsvers=3 to /etc/nfsmount.conf as instructed here:
> > > http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
> > > > > I then installed a host via ovirt-node (2.3.0-1) and successfully
> > > added the host, but it refuses to connect to our storage domain
> > > because it's attempting to mount via NFS4. When I add Nfsvers=3
> > > to /etc/nfsmount.conf on the host, it comes to life. Of course, that
> > > will be reset after a reboot.
> > > > > So what are my options here, other than not use ovirt-node? AFAIK,
> > > gluster doesn't export NFS4, let alone have options to disable it. And
> > > I don't see any way to add nfs mount options when defining a storage
> > > domain...
> > > > Take a look at the /etc/vdsm/vdsm.conf configuration file in the
> > > host.
> > > > There is a nfs_mount_options parameter there that you can use.
> > > This worked perfectly - thanks for your help (and for pointing me at
> > > the persistent /config directory).
> > > 
> > > Cheers,
> > > ~irl
> > This issue should be filed as a bug against either engine or vdsm (or
> > perhaps both).  An interface to set nfs options needs to be provided
> > through either the engine admin UI or through vdsm in some other way.
> > Manually editing and persisting files is ok as a workaround, but should
> > not be recommended or used long term.  
> > 
> > Mike
> 
> Isn't that what http://www.ovirt.org/wiki/Features/AdvancedNfsOptions
> is all about?
> Y.

Reading that, it looks like it's geared toward I/O timeouts, but it could
certainly be used for the other options.  I wasn't aware of the feature
previously.

My point was simply that we shouldn't use edit/persist long term for
issues like this.  There needs to be either a bug or feature filed so
that we solve it long term.

Mike

> 
> 




Re: [Users] ovirt-node trying to use NFS4

2012-04-16 Thread Yaniv Kaul

On 04/16/2012 06:09 PM, Mike Burns wrote:

On Fri, 2012-04-13 at 16:12 -0400, Ian Levesque wrote:

Hi Juan,

On Apr 13, 2012, at 2:57 PM, Juan Hernandez wrote:


On 04/13/2012 07:20 PM, Ian Levesque wrote:

Hi,

I'm in the early stages of testing an ovirt install, using the

official ovirt engine packages + F16.

I configured a storage domain to connect to our gluster storage via

NFS (pity we can't use gluster natively yet). On the engine server, I
added Nfsvers=3 to /etc/nfsmount.conf as instructed here:
http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues

I then installed a host via ovirt-node (2.3.0-1) and successfully

added the host, but it refuses to connect to our storage domain
because it's attempting to mount via NFS4. When I add Nfsvers=3
to /etc/nfsmount.conf on the host, it comes to life. Of course, that
will be reset after a reboot.

So what are my options here, other than not use ovirt-node? AFAIK,

gluster doesn't export NFS4, let alone have options to disable it. And
I don't see any way to add nfs mount options when defining a storage
domain...

Take a look at the /etc/vdsm/vdsm.conf configuration file in the

host.

There is a nfs_mount_options parameter there that you can use.

This worked perfectly - thanks for your help (and for pointing me at
the persistent /config directory).

Cheers,
~irl

This issue should be filed as a bug against either engine or vdsm (or
perhaps both).  An interface to set nfs options needs to be provided
through either the engine admin UI or through vdsm in some other way.
Manually editing and persisting files is ok as a workaround, but should
not be recommended or used long term.

Mike


Isn't that what http://www.ovirt.org/wiki/Features/AdvancedNfsOptions
is all about?

Y.




Re: [Users] ovirt-node trying to use NFS4

2012-04-16 Thread Mike Burns
On Fri, 2012-04-13 at 16:12 -0400, Ian Levesque wrote:
> Hi Juan,
> 
> On Apr 13, 2012, at 2:57 PM, Juan Hernandez wrote:
> 
> > On 04/13/2012 07:20 PM, Ian Levesque wrote:
> >> Hi,
> >> 
> >> I'm in the early stages of testing an ovirt install, using the
> official ovirt engine packages + F16.
> >> 
> >> I configured a storage domain to connect to our gluster storage via
> NFS (pity we can't use gluster natively yet). On the engine server, I
> added Nfsvers=3 to /etc/nfsmount.conf as instructed here:
> http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
> >> 
> >> I then installed a host via ovirt-node (2.3.0-1) and successfully
> added the host, but it refuses to connect to our storage domain
> because it's attempting to mount via NFS4. When I add Nfsvers=3
> to /etc/nfsmount.conf on the host, it comes to life. Of course, that
> will be reset after a reboot.
> >> 
> >> So what are my options here, other than not use ovirt-node? AFAIK,
> gluster doesn't export NFS4, let alone have options to disable it. And
> I don't see any way to add nfs mount options when defining a storage
> domain...
> > 
> > Take a look at the /etc/vdsm/vdsm.conf configuration file in the
> host.
> > There is a nfs_mount_options parameter there that you can use.
> 
> This worked perfectly - thanks for your help (and for pointing me at
> the persistent /config directory).
> 
> Cheers,
> ~irl

This issue should be filed as a bug against either engine or vdsm (or
perhaps both).  An interface to set nfs options needs to be provided
through either the engine admin UI or through vdsm in some other way.
Manually editing and persisting files is ok as a workaround, but should
not be recommended or used long term.  

Mike




Re: [Users] Booting oVirt node image 2.3.0, no install option

2012-04-16 Thread Adam vonNieda

   Thanks very much, Mike. Below is some additional info now that I can get
in. Also, when I "su - admin" it tries to start graphical mode, and just
goes to a blank screen and stays there.

   Any insight is much appreciated, and please let me know if there's
anything else I can try / provide.

   Thanks,

  -Adam

/tmp/ovirt.log
==

/sbin/restorecon set context
/var/cache/yum->unconfined_u:object_r:rpm_var_cache_t:s0 failed:'Read-only
file system'
/sbin/restorecon reset /var/cache/yum context
unconfined_u:object_r:file_t:s0->unconfined_u:object_r:rpm_var_cache_t:s0
/sbin/restorecon reset /etc/sysctl.conf context
system_u:object_r:etc_runtime_t:s0->system_u:object_r:system_conf_t:s0
/sbin/restorecon reset /boot-kdump context
system_u:object_r:boot_t:s0->system_u:object_r:default_t:s0
2012-04-16 09:36:26,827 - INFO - ovirt-config-installer - live
device
/dev/sdb
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions - cat /proc/mounts|grep
-q "none /live"
2012-04-16 09:36:26,836 - DEBUG - ovirtfunctions -
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions - umount /live
2012-04-16 09:36:26,915 - DEBUG - ovirtfunctions -
2012-04-16 09:36:27,455 - ERROR - ovirtfunctions - Failed to mount_live()

/var/log/ovirt.log
==

Apr 16 09:35:53 Starting ovirt-early
oVirt Node Hypervisor release 2.3.0 (1.0.fc16)
Apr 16 09:35:53 Updating /etc/default/ovirt
Apr 16 09:35:54 Updating OVIRT_BOOTIF to ''
Apr 16 09:35:54 Updating OVIRT_INIT to ''
Apr 16 09:35:54 Updating OVIRT_UPGRADE to ''
Apr 16 09:35:54 Updating OVIRT_STANDALONE to '1'
Apr 16 09:35:54 Updating OVIRT_BOOTPARAMS to 'nomodeset
crashkernel=512M-2G:64M,2G-:128M elevator=deadline quiet rd_NO_LVM rhgb
rd.luks=0 rd.md=0 rd.dm=0'
Apr 16 09:35:54 Updating OVIRT_RHN_TYPE to 'classic'
Apr 16 09:35:54 Updating OVIRT_INSTALL to '1'
Apr 16 09:35:54 Updating OVIRT_ISCSI_INSTALL to '1'
Apr 16 09:36:08 Setting temporary admin password: F8Ax67kfRPSAw
Apr 16 09:36:09 Setting temporary root password: F8Ax67kfRPSAw
Apr 16 09:36:09 Skip runtime mode configuration.
Apr 16 09:36:09 Completed ovirt-early
Apr 16 09:36:09 Starting ovirt-awake.
Apr 16 09:36:09 Node is operating in unmanaged mode.
Apr 16 09:36:09 Completed ovirt-awake: RETVAL=0
Apr 16 09:36:09 Starting ovirt
Apr 16 09:36:09 Completed ovirt
Apr 16 09:36:10 Starting ovirt-post
Apr 16 09:36:20 Hardware virtualization detected
  Volume group "HostVG" not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:20 Starting ovirt-post
Apr 16 09:36:21 Hardware virtualization detected
  Volume group "HostVG" not found
  Skipping volume group HostVG
Restarting network (via systemctl):  [  OK  ]
Apr 16 09:36:22 Starting ovirt-cim
Apr 16 09:36:22 Completed ovirt-cim
WARNING: persistent config storage not available

/var/log/vdsm/vdsm.log
===

MainThread::INFO::2012-04-16 09:36:21,828::vdsm::71::vds::(run) I am the
actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16
09:36:23,873::resourceManager::376::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16
09:36:23,874::threadPool::45::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16
09:36:23,918::multipath::85::Storage.Misc.excCmd::(isEnabled)
'/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::INFO::2012-04-16 09:36:25,000::vdsm::71::vds::(run) I am the
actual vdsm 4.9-0
MainThread::DEBUG::2012-04-16
09:36:25,199::resourceManager::376::ResourceManager::(registerNamespace)
Registering namespace 'Storage'
MainThread::DEBUG::2012-04-16
09:36:25,200::threadPool::45::Misc.ThreadPool::(__init__) Enter -
numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0
MainThread::DEBUG::2012-04-16
09:36:25,231::multipath::85::Storage.Misc.excCmd::(isEnabled)
'/usr/bin/sudo -n /bin/cat /etc/multipath.conf' (cwd None)
MainThread::DEBUG::2012-04-16
09:36:25,243::multipath::85::Storage.Misc.excCmd::(isEnabled) SUCCESS:
<err> = ''; <rc> = 0
MainThread::DEBUG::2012-04-16
09:36:25,244::multipath::109::Storage.Multipath::(isEnabled) multipath
Defaulting to False
MainThread::DEBUG::2012-04-16
09:36:25,244::misc::487::Storage.Misc::(rotateFiles) dir: /etc,
prefixName: multipath.conf, versions: 5
MainThread::DEBUG::2012-04-16
09:36:25,244::misc::508::Storage.Misc::(rotateFiles) versions found: [0]
MainThread::DEBUG::2012-04-16
09:36:25,244::multipath::118::Storage.Misc.excCmd::(setupMultipath)
'/usr/bin/sudo -n /bin/cp /etc/multipath.conf /etc/multipath.conf.1' (cwd
None)
MainThread::DEBUG::2012-04-16
09:36:25,255::multipath::118::Storage.Misc.excCmd::(setupMultipath)
FAILED: <err> = 'sudo: unable to mkdir /var/db/sudo/vdsm: Read-only file
system\nsudo: sorry, a password is required to run sudo\n'; <rc> = 1
MainThread::DEBUG::2012-04-16
09:36:25,256::multipath::118::Storage.Misc.excCmd::(setupMultipath)
'/usr/bin/sudo -n /usr/sbin/persist /etc/multipath.conf.1' (cwd None)
MainThread::DEBUG::2012-04-16
09:36:25,269::multi

Re: [Users] Use Filer for NFS storage

2012-04-16 Thread Adam Litke
On Sun, Apr 15, 2012 at 04:57:15PM +0300, Dan Kenigsberg wrote:
> On Fri, Apr 13, 2012 at 12:26:39PM -0700, Christian Hernandez wrote:
> > Here is the log from the Host
> > 
> > 
> > *Thread-1821::DEBUG::2012-04-13
> > 12:18:52,200::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
> > Thread-1821::ERROR::2012-04-13
> > 12:18:52,200::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/BindingXMLRPC.py", line 169, in wrapper
> > return f(*args, **kwargs)
> >   File "/usr/share/vdsm/BindingXMLRPC.py", line 571, in
> > poolValidateStorageServerConnection
> > return pool.validateStorageServerConnection(domType, conList)
> >   File "/usr/share/vdsm/API.py", line 897, in
> > validateStorageServerConnection
> > return self._irs.validateStorageServerConnection(domainType,
> > AttributeError: 'NoneType' object has no attribute
> > 'validateStorageServerConnection'
> > Thread-1822::DEBUG::2012-04-13
> > 12:18:52,333::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
> > Thread-1822::ERROR::2012-04-13
> > 12:18:52,334::BindingXMLRPC::171::vds::(wrapper) Unexpected exception
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/BindingXMLRPC.py", line 169, in wrapper
> > return f(*args, **kwargs)
> >   File "/usr/share/vdsm/BindingXMLRPC.py", line 491, in
> > poolDisconnectStorageServer
> > return pool.disconnectStorageServer(domType, conList)
> >   File "/usr/share/vdsm/API.py", line 823, in disconnectStorageServer
> > return self._irs.disconnectStorageServer(domainType, self._UUID,
> > AttributeError: 'NoneType' object has no attribute 'disconnectStorageServer'
> 
> It seems like the interesting traceback should be further up - I
> suppose self._irs failed initialization and kept its original None
> value. Please scroll up and try to find out why this failed on Vdsm
> startup.
> 
> We have a FIXME in vdsm so that we report such failures better:
> 
> vdsm/BindingXMLRPC.py: # XXX: Need another way to check if IRS init was okay
> 
> Adam, could you take a further look into this?

Have a look at http://gerrit.ovirt.org/3571 .  This should handle the problem
better by reporting a better error when storage was not initialized properly.
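The failure mode described in this thread (an attribute left as None after a failed init, so every later call dies with an opaque AttributeError) and the kind of explicit check the Gerrit change adds can be sketched like this (all class and method names are simplified stand-ins, not vdsm's actual code):

```python
class StorageDispatcher:
    """Stand-in for the real storage subsystem."""
    def disconnectStorageServer(self, domType, conList):
        return {"status": 0}


class API:
    def __init__(self, init_ok=True):
        self._irs = None                    # stays None if storage init fails
        if init_ok:
            self._irs = StorageDispatcher()

    def disconnectStorageServer(self, domType, conList):
        # Without this guard, a failed init surfaces much later as
        # "'NoneType' object has no attribute 'disconnectStorageServer'".
        if self._irs is None:
            raise RuntimeError("storage was not initialized properly")
        return self._irs.disconnectStorageServer(domType, conList)
```

The guard turns a confusing NoneType traceback into a direct report that initialization failed, which is the behavior change the patch aims for.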

-- 
Adam Litke 
IBM Linux Technology Center



Re: [Users] Booting oVirt node image 2.3.0, no install option

2012-04-16 Thread Mike Burns
On Mon, 2012-04-16 at 08:14 -0500, Adam vonNieda wrote:
> 
> 
>Hi folks,
> 
> 
>I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I
> can boot up just fine, but the two menu options I see are "Start oVirt
> node", and "Troubleshooting". When I choose "Start oVirt node", it
> does just that, and I am soon after given a console login prompt. I've
> checked the docs, and I don't see what I'm supposed to do next, as in
> a password etc. Am I missing something? 

Hi Adam,

Something is breaking in the boot process.  You should be getting a TUI
screen that will let you configure and install ovirt-node.  

I just added an entry on the Node Troubleshooting wiki page[1] for you to
follow.

Mike

[1] http://ovirt.org/wiki/Node_Troubleshooting#Boot_up_problems


> 
> 
>Thanks,
> 
> 
>   -Adam




[Users] Booting oVirt node image 2.3.0, no install option

2012-04-16 Thread Adam vonNieda

   Hi folks,

   I'm trying to install oVirt node v2.3.0 on a Dell C2100 server. I can
boot up just fine, but the two menu options I see are "Start oVirt node",
and "Troubleshooting". When I choose "Start oVirt node", it does just that,
and I am soon after given a console login prompt. I've checked the docs, and
I don't see what I'm supposed to do next, as in a password etc. Am I missing
something? 

   Thanks,

  -Adam




Re: [Users] Use Filer for NFS storage

2012-04-16 Thread Dan Kenigsberg
On Sun, Apr 15, 2012 at 01:05:52PM -0700, Christian Hernandez wrote:
> Scolling up I found these "initIRS" errors in the logs
> 
> *MainThread::INFO::2012-04-13 17:18:30,965::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,965::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,965::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,966::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,966::vmChannels::135::vds::(stop) VM
> channels listener was stopped.*
> *MainThread::INFO::2012-04-13 17:18:30,966::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,966::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:30,966::vdsm::78::vds::(run)
> *
> *MainThread::INFO::2012-04-13 17:18:32,475::vdsm::70::vds::(run) I am the
> actual vdsm 4.9-0*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,688::resourceManager::379::ResourceManager::(registerNamespace)
> Registering namespace 'Storage'*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,688::threadPool::45::Misc.ThreadPool::(__init__) Enter -
> numThreads: 10.0, waitTimeout: 3, maxTasks: 500.0*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,708::multipath::109::Storage.Multipath::(isEnabled) multipath
> Defaulting to False*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,709::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
> /bin/cp /tmp/tmpXnjmr1 /etc/multipath.conf' (cwd None)*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,761::__init__::1164::Storage.Misc.excCmd::(_log) FAILED: <err> =
> 'sudo: sorry, you must have a tty to run sudo\n'; <rc> = 1*

Vdsm's evil /etc/sudoers.d/vdsm file has

Defaults:vdsm !requiretty

which should allow vdsm running sudo with no tty.
I suspect that you have an overriding config in your /etc/sudoers or that
the file was somehow removed. Could you check whether that's the case?
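As described, the drop-in is a plain sudoers fragment; a minimal sketch (only the Defaults line is quoted in this message — the real file also grants vdsm its specific NOPASSWD commands):

```
# /etc/sudoers.d/vdsm (excerpt)
Defaults:vdsm !requiretty
```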

> *MainThread::ERROR::2012-04-13 17:18:32,762::clientIF::162::vds::(_initIRS)
> Error initializing IRS*
> *Traceback (most recent call last):*
> *  File "/usr/share/vdsm/clientIF.py", line 160, in _initIRS*
> *self.irs = Dispatcher(HSM())*
> *  File "/usr/share/vdsm/storage/hsm.py", line 294, in __init__*
> *multipath.setupMultipath()*
> *  File "/usr/share/vdsm/storage/multipath.py", line 125, in setupMultipath*
> *raise se.MultipathSetupError()*
> *MultipathSetupError: Failed to setup multipath: ()*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,766::__init__::1164::Storage.Misc.excCmd::(_log) '/usr/bin/pgrep
> -xf ksmd' (cwd None)*
> *MainThread::DEBUG::2012-04-13
> 17:18:32,779::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> *MainThread::INFO::2012-04-13
> 17:18:32,780::vmChannels::139::vds::(settimeout) Setting channels' timeout
> to 30 seconds.*
> *VM Channels Listener::INFO::2012-04-13
> 17:18:32,782::vmChannels::127::vds::(run) Starting VM channels listener
> thread.*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,521::BindingXMLRPC::869::vds::(wrapper) client
> [192.168.11.236]::call getCapabilities with () {}*
> *Thread-13::WARNING::2012-04-13
> 17:18:33,616::utils::688::root::(getHostUUID) Could not find host UUID.*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,643::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/rpm -q --qf
> "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-kvm' (cwd None)*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,667::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,667::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/rpm -q --qf
> "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" qemu-img' (cwd None)*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,690::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,690::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/rpm -q --qf
> "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" vdsm' (cwd None)*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,711::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,711::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/rpm -q --qf
> "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" spice-server' (cwd None)*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,732::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,733::__init__::1164::Storage.Misc.excCmd::(_log) '/bin/rpm -q --qf
> "%{NAME}\t%{VERSION}\t%{RELEASE}\t%{BUILDTIME}\n" libvirt' (cwd None)*
> *Thread-13::DEBUG::2012-04-13
> 17:18:33,754::__init__::1164::Storage.Misc.excCmd::(_log) SUCCESS: <err> =
> ''; <rc> = 0*
> 
> 
> 
> On Sun, Apr 15, 2012 at 6:57 AM, Dan Kenigsberg  wrote:
> 
> > On Fri, Apr 13, 2012 at 12:26:39PM -0700, Christian Hernandez wrote:
> > > Here is the log from the Host
> > >
> > >
> > > *Thread-1821::DEBUG::2012-04-13
> > > 12:18:52,200::BindingXMLRPC::167::vds::(wrapper) [192.168.11.236]
> > > Thread-1821::ERROR::2012-04-13
> > > 12:18:52,200::BindingXMLR