[Users] Fwd: Re: latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-25 Thread Royce Lv
        except (EOFError, IOError):
            util.debug('got EOF -- exiting thread serving %r',
                       threading.current_thread().name)
            sys.exit(0)

        except Exception:  # does not handle IOError/EINTR here; should retry recv()
            msg = ('#TRACEBACK', format_exc())
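
As an illustration of the fix being described (a sketch, not the actual
multiprocessing patch), an EINTR-aware receive loop would retry instead of
letting the clause above treat the interrupted call as EOF:

import errno

def recvRetryingOnEINTR(recv):
    """Call recv(), retrying when a signal (e.g. SIGCHLD) interrupts it."""
    while True:
        try:
            return recv()
        except IOError as e:
            if e.errno != errno.EINTR:
                raise  # a genuine I/O error: let the EOF handling run
            # interrupted system call: retry rather than exit the thread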



3. Actions we will take:
(1) As a workaround, we can first remove the zombie reaper from the
supervdsm server (a signal-free way to reap zombies is sketched below).
(2) I'll check whether a fixed version of Python is available for this.
(3) Yaniv is working on changing the vdsm/svdsm communication channel to a
pipe and handling it ourselves; I believe we'll get rid of this issue once
that is properly handled.
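
A minimal sketch of the workaround's direction, assuming the reaper
currently runs from a SIGCHLD handler: reaping from the main loop with a
non-blocking waitpid() means no signal arrives to interrupt the manager
thread's recv():

import errno
import os

def reapZombies():
    """Reap exited children without blocking and without SIGCHLD."""
    while True:
        try:
            pid, status = os.waitpid(-1, os.WNOHANG)
        except OSError as e:
            if e.errno == errno.ECHILD:
                return  # no children at all
            raise
        if pid == 0:
            return  # children exist, but none has exited yet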



-------- Original Message --------
Subject: 	Re: [Users] latest vdsm cannot read ib device speeds causing 
storage attach fail

Resent-Date:	Thu, 24 Jan 2013 12:24:10 +0200
Resent-From:	Dan Kenigsberg dan...@redhat.com
Resent-To:	Royce Lv lvro...@linux.vnet.ibm.com
Date:   Wed, 23 Jan 2013 10:44:57 -0600
From:   Dead Horse deadhorseconsult...@gmail.com
To: Dan Kenigsberg dan...@redhat.com
CC: users@ovirt.org users@ovirt.org



VDSM was built from:
commit 166138e37e75767b32227746bb671b1dab9cdd5e

Attached is the full vdsm log

I should also note that, from the engine's perspective, it sees the master
storage domain as locked and the others as unknown.


On Wed, Jan 23, 2013 at 2:49 AM, Dan Kenigsberg dan...@redhat.com wrote:


On Tue, Jan 22, 2013 at 04:02:24PM -0600, Dead Horse wrote:
 Any ideas on this one? (from VDSM log):
 Thread-25::DEBUG::2013-01-22 15:35:29,065::BindingXMLRPC::914::vds::(wrapper)
 client [3.57.111.30]::call getCapabilities with () {}
 Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed)
 cannot read ib0 speed
 Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/vdsm/netinfo.py", line 155, in speed
     s = int(file('/sys/class/net/%s/speed' % dev).read())
 IOError: [Errno 22] Invalid argument

 Causes VDSM to fail to attach storage

I doubt that this is the cause of the failure, as vdsm has always
reported 0 for ib devices, and still does.

Does a former version work with your Engine?
Could you share more of your vdsm.log? I suppose the culprit lies in
one of the storage-related commands, not in statistics retrieval.
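
For reference, a defensive read along these lines (an illustrative sketch,
not the actual netinfo.py code) would report 0 instead of raising when the
kernel refuses the read, as it does for ib devices:

import errno
import logging

def readDeviceSpeed(dev):
    """Return the link speed in Mbps, or 0 if the kernel cannot report it."""
    try:
        with open('/sys/class/net/%s/speed' % dev) as f:
            return int(f.read())
    except IOError as e:
        if e.errno == errno.EINVAL:
            # ib (and down) devices fail this read with EINVAL
            logging.debug('cannot read %s speed', dev)
            return 0
        raise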


 Engine side sees:
 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
 (QuartzScheduler_Worker-96) [553ef26e] The connection with details
 192.168.0.1:/ovirt/ds failed because of error code 100 and error message
 is: general exception
 2013-01-22 15:35:30,160 INFO
 [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
 (QuartzScheduler_Worker-96) [1ab78378] Running command:
 SetNonOperationalVdsCommand internal: true. Entities affected :  ID:
 8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS
 2013-01-22 15:35:30,200 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (QuartzScheduler_Worker-96) [1ab78378] START,
 SetVdsStatusVDSCommand(HostName = kezan, HostId =
 8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational,
 nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd
 2013-01-22 15:35:30,211 INFO
 [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
 (QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand,
 log id: 4af5c4cd
 2013-01-22 15:35:30,242 ERROR
 [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 (QuartzScheduler_Worker-96) [1ab78378] Try to add duplicate audit log
 values with the same name. Type: VDS_SET_NONOPERATIONAL_DOMAIN. Value:
 storagepoolname

 Engine = latest master
 VDSM = latest master

Since latest master is an unstable reference by definition, I'm sure
that history would thank you if you posted the exact version (git hash?)
of the code.

 node = el6







from testrunner import VdsmTestCase as TestCaseBase
import supervdsm
import testValidation
import tempfile
from vdsm import utils
import os
import uuid
from vdsm import constants
from storage import misc
from monkeypatch import MonkeyPatch
from time import sleep
from vdsm.constants import DISKIMAGE_GROUP, METADATA_GROUP,\
    QEMU_PROCESS_USER, EXT_PS


@utils.memoized
def getNeededPythonPath():
    testDir = os.path.dirname(__file__)
    base = os.path.dirname(testDir)
    vdsmPath = os.path.join(base, 'vdsm')
    cliPath = os.path.join(base, 'vdsm_cli')
    pyPath = 'PYTHONPATH=' + ':'.join([base, vdsmPath, cliPath])
    return pyPath


def monkeyStart(self):
    # test replacement for the proxy's start method: launch supervdsm
    # with the tests' PYTHONPATH instead of the installed code paths
    self._authkey = str(uuid.uuid4())
    self._log.debug('Launching Super Vdsm')

    superVdsmCmd = [getNeededPythonPath(), constants.EXT_PYTHON,
                    supervdsm.SUPERVDSM,
                    self._authkey, str(os.getpid()),
                    self.pidfile, self.timestamp, self.address,
                    str(os.getuid())]
    misc.execCmd(superVdsmCmd, sync=False, sudo=True)
    sleep(2)  # give the daemon a moment to come up


class TestSuperVdsm(TestCaseBase):
    def setUp(self):
        testValidation.checkSudo(['python', supervdsm.SUPERVDSM])
        self._proxy = supervdsm.getProxy()

        # temporary values
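
For context, a sketch of how monkeyStart above would typically be hooked in
via the MonkeyPatch decorator (the method name and proxy attribute here are
illustrative assumptions, not part of the original file):

    @MonkeyPatch(supervdsm.SuperVdsmProxy, '_start', monkeyStart)
    def testProxyPing(self):
        # the proxy is started through monkeyStart instead of the
        # production code path
        self._proxy.ping()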

Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail

2013-01-24 Thread Royce Lv

On 01/24/2013 05:21 PM, Dan Kenigsberg wrote:


Hi,
Will you provide the log, or let me access the test environment if
possible (since we don't have IB in our lab)? I'll look at it immediately.

Sorry for the inconvenience if I have introduced the regression.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] direct lun VM can't create template

2012-09-13 Thread Royce Lv

Guys,
I have a VM with a LUN and a qcow disk; when making a template from this
VM, the engine reports: *Cannot create Template. Vm has no disks*.

 P.S.: the engine has no log for this action, and no request is passed to vdsm.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] error ISO domain

2012-07-31 Thread Royce Lv
Shall we put the activate action in the right-click menu or on the Storage
page? I know it seems natural to developers, but our users may spend some
time finding the activate button in the subtab...


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users