Re: [Users] Getting a stream of errors inside engine.log

2012-06-25 Thread Ofer Schreiber


- Original Message -
 On 06/24/2012 07:26 PM, Robert Middleswarth wrote:
  On 06/24/2012 12:11 PM, Yair Zaslavsky wrote:
  On 06/24/2012 07:02 PM, Robert Middleswarth wrote:
  On 06/24/2012 11:55 AM, Yair Zaslavsky wrote:
  On 06/24/2012 06:45 PM, Robert Middleswarth wrote:
  On 06/24/2012 09:55 AM, Doron Fediuck wrote:
  On 22/06/12 04:12, Robert Middleswarth wrote:
  Getting a log full of error messages like these.
 
  2012-06-21 21:03:30,075 ERROR
  [org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
  (QuartzScheduler_Worker-90) Failed to decryptData must start
  with
  zero
  2012-06-21 21:03:46,235 ERROR
  [org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
  (QuartzScheduler_Worker-18) Failed to decryptData must not be
  longer
  than 128 bytes
 
  What is causing this and what can I do to fix it?
 
  Thanks
  Robert
  Hi Robert,
This really depends on your setup (devel or yum installed).
There are several DB values which are encrypted during the
installation and which you can override with plain-text values.
In most cases the engine tries to fall back and use the original
value it got despite the error. The question you should now answer
is whether you have any other issue.
If things seem to work well, then it's just a matter of encrypting
the relevant values (if needed).
If you have other related issues, then setting these values to
plain text should use the fallback and resolve the issue.
 
There doesn't seem to be any negative effect other than making
the log hard to read.  Is there any way to suppress the error?
 
  Thanks
  Robert
Robert, I do suggest that you try further to investigate the
root cause of this.
However, it is possible to modify the configuration not to log this
error - you should be able to define a log4j category in
$JBOSS_HOME/standalone/configuration/standalone.xml (under the
subsystem xmlns="urn:jboss:domain:logging:1.1") in such a way that
for EncryptionUtils only FATAL messages will appear in the log
(and still, I suggest that you further try to investigate why this
is happening in the first place).
 
  Yair
I am fine with investigating why, but these errors are so generic
that I have no idea which files to check. Despite trying a few
times, I really don't get Java.
Robert - we can take two approaches here with logging; both require
you to define a log4j category for EncryptionUtils.

First approach - as I suggested, move to FATAL - you will not see
these errors.
Second approach - move to DEBUG. I looked at the code, and saw that
if you define a category for EncryptionUtils with DEBUG level you
will also get the stack trace printed, which may assist in
understanding the problem you're facing.
  Yair,
  
I checked the standalone.xml file and there is no section related to
EncryptionUtils, nor was there a definition for engine.log. I am
guessing that these are defined in a different config file.
 
You should modify this file and add a category for EncryptionUtils -
look at how the category:

<logger category="org.ovirt">
    <level name="INFO"/>
    <handlers>
        <handler name="ENGINE_LOG"/>
    </handlers>
</logger>

is defined.
 
What you should do is add:

<logger category="org.ovirt.engine.core.engineencryptutils">
    <level name="FATAL"/>
    <handlers>
        <handler name="ENGINE_LOG"/>
    </handlers>
</logger>

near the logger category for org.ovirt (I mean, as a sibling of that
element).
Pay attention that here I defined it for FATAL - you may wish to try
it with DEBUG to see the stack trace.
 
 Yair

In an rpm-based env, we use /etc/ovirt-engine/engine-service.xml as the JBoss
settings file.

 
 
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
  Thanks
  Robert
  Thanks
  Robert
  
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] ovirt3.1 add node error

2012-06-25 Thread tian b
Dear all,
Can you help me?


# tail /var/log/ovirt-engine/engine.log -f
2012-06-25 17:50:42,796 INFO  
[org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-21) SSH 
key fingerprint b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4 for host 
172.30.1.63 (172.30.1.63) has been successfully verified.
2012-06-25 17:50:42,872 INFO  
[org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
(ajp--0.0.0.0-8009-3) Invoking /bin/echo -e `/bin/bash -c  
/usr/sbin/dmidecode|/bin/awk ' /UUID/{ print $2; } ' | /usr/bin/tr '
' '_'  cat /sys/class/net/*/address | /bin/grep -v '00:00:00:00' | /bin/sort 
-u | /usr/bin/head --lines=1` on 172.30.1.63
2012-06-25 17:50:42,959 INFO  
[org.ovirt.engine.core.utils.hostinstall.MinaInstallWrapper] 
(ajp--0.0.0.0-8009-3) RunSSHCommand returns true
2012-06-25 17:50:42,985 INFO  [org.ovirt.engine.core.bll.AddVdsCommand] 
(ajp--0.0.0.0-8009-3) [7d335777] Running command: AddVdsCommand internal: 
false. Entities affected :  ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: 
VdsGroups
2012-06-25 17:50:43,017 INFO  [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] 
(ajp--0.0.0.0-8009-3) [318b963a] Running command: AddVdsSpmIdCommand internal: 
true. Entities affected :  ID: 3a9d9a40-beab-11e1-9366-525400fe2d56 Type: VDS
2012-06-25 17:50:43,048 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] 
(ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for 
vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,049 INFO  
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] START, RemoveVdsVDSCommand(vdsId = 
3a9d9a40-beab-11e1-9366-525400fe2d56), log id: 7bd4b2c7
2012-06-25 17:50:43,051 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] 
(ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for 
vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,052 INFO  
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] FINISH, RemoveVdsVDSCommand, log id: 7bd4b2c7
2012-06-25 17:50:43,053 ERROR [org.ovirt.engine.core.vdsbroker.ResourceManager] 
(ajp--0.0.0.0-8009-3) [318b963a] Cannot get vdsManager for 
vdsid=3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,054 INFO  
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] START, AddVdsVDSCommand(vdsId = 
3a9d9a40-beab-11e1-9366-525400fe2d56), log id: 49256654
2012-06-25 17:50:43,056 INFO  
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] AddVds - entered , starting logic to add VDS 
3a9d9a40-beab-11e1-9366-525400fe2d56
2012-06-25 17:50:43,059 INFO  
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] AddVds - VDS 3a9d9a40-beab-11e1-9366-525400fe2d56 was added, will 
try to add it to the resource manager
2012-06-25 17:50:43,062 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(ajp--0.0.0.0-8009-3) [318b963a] Eneterd VdsManager:constructor
2012-06-25 17:50:43,063 INFO  [org.ovirt.engine.core.vdsbroker.VdsManager] 
(ajp--0.0.0.0-8009-3) [318b963a] vdsBroker(172.30.1.63,54,321)
2012-06-25 17:50:43,065 INFO  [org.ovirt.engine.core.vdsbroker.ResourceManager] 
(ajp--0.0.0.0-8009-3) [318b963a] ResourceManager::AddVds - VDS 
3a9d9a40-beab-11e1-9366-525400fe2d56 was added to the Resource Manager
2012-06-25 17:50:43,067 INFO  
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (ajp--0.0.0.0-8009-3) 
[318b963a] FINISH, AddVdsVDSCommand, log id: 49256654
2012-06-25 17:50:43,098 INFO  [org.ovirt.engine.core.bll.InstallVdsCommand] 
(pool-3-thread-49) [44a5dc80] Running command: InstallVdsCommand internal: 
true. Entities affected :  ID: 3a9d9a40-beab-11e1-9366-525400fe2d56 Type: VDS
2012-06-25 17:50:43,102 INFO  [org.ovirt.engine.core.bll.InstallVdsCommand] 
(pool-3-thread-49) [44a5dc80] Before Installation pool-3-thread-49
2012-06-25 17:50:43,103 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
(pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing 
installation stage. (Stage: Starting Host installation)
2012-06-25 17:50:43,105 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
(pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Executing 
installation stage. (Stage: Connecting to Host)
2012-06-25 17:50:43,134 INFO  
[org.ovirt.engine.core.utils.hostinstall.HostKeyVerifier] (NioProcessor-27) SSH 
key fingerprint b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4 for host 
172.30.1.63 (172.30.1.63) has been successfully verified.
2012-06-25 17:50:43,206 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
(pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Recieved message: 
BSTRAP component='RHEV_INSTALL' status='OK' message='Connected to Host 
172.30.1.63 with SSH key fingerprint: 
b4:f3:b9:4b:93:40:e8:9e:57:fc:b3:fe:b3:af:13:b4'/. FYI. (Stage: Connecting to 
Host)
2012-06-25 17:50:43,223 INFO  [org.ovirt.engine.core.bll.VdsInstaller] 
(pool-3-thread-49) [44a5dc80] Installation of 172.30.1.63. Successfully 
connected to 

[Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread jose garcia

Good monday morning,

I installed Fedora 17 and tried to add the node to a 3.1 engine.

I'm getting a VDS Network exception on the engine side:

in /var/log/ovirt-engine/engine.log:

2012-06-25 10:15:34,132 WARN 
[org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-96) 
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds = 
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb : ovirt-node2.smb.eurotux.local, 
VDS Network Error, continuing.

VDSNetworkException:
2012-06-25 10:15:36,143 ERROR 
[org.ovirt.engine.core.vdsbroker.VdsManager] (QuartzScheduler_Worker-20) 
VDS::handleNetworkException Server failed to respond,  vds_id = 
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, vds_name = 
ovirt-node2.smb.eurotux.local, error = VDSNetworkException:
2012-06-25 10:15:36,181 INFO 
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49) 
ResourceManager::vdsNotResponding entered for Host 
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR 
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand] 
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on 
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.


Meanwhile, on the node, vdsmd fails to sample the NICs:

in /var/log/vdsm/vdsm.log:

   nf = netinfo.NetInfo()
  File /usr/share/vdsm/netinfo.py, line 268, in __init__
_netinfo = get()
  File /usr/share/vdsm/netinfo.py, line 220, in get
for nic in nics() ])
KeyError: 'p36p1'

MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM 
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run) 
_MainThread(MainThread, started 140567823243072)
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run) 
Thread(libvirtEventLoop, started daemon 140567752681216)


in /var/log/messages there are a lot of 'vdsmd died too quickly' messages:

Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died 
too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died 
too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm' died 
too quickly for more than 30 seconds, master sleeping for 900 seconds


I don't know why Fedora 17 calls what was eth0 in Fedora 16 'p36p1',
but I tried to configure an ovirtmgmt bridge and the only difference is
that the KeyError becomes 'ovirtmgmt'.


Regards,
Jose Garcia
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Getting a stream of errors inside engine.log

2012-06-25 Thread Yair Zaslavsky
On 06/25/2012 11:55 AM, Ofer Schreiber wrote:
 
 
 - Original Message -
 On 06/24/2012 07:26 PM, Robert Middleswarth wrote:
 On 06/24/2012 12:11 PM, Yair Zaslavsky wrote:
 On 06/24/2012 07:02 PM, Robert Middleswarth wrote:
 On 06/24/2012 11:55 AM, Yair Zaslavsky wrote:
 On 06/24/2012 06:45 PM, Robert Middleswarth wrote:
 On 06/24/2012 09:55 AM, Doron Fediuck wrote:
 On 22/06/12 04:12, Robert Middleswarth wrote:
 Getting a log full of error messages like these.

 2012-06-21 21:03:30,075 ERROR
 [org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
 (QuartzScheduler_Worker-90) Failed to decryptData must start
 with
 zero
 2012-06-21 21:03:46,235 ERROR
 [org.ovirt.engine.core.engineencryptutils.EncryptionUtils]
 (QuartzScheduler_Worker-18) Failed to decryptData must not be
 longer
 than 128 bytes

 What is causing this and what can I do to fix it?

 Thanks
 Robert
 Hi Robert,
This really depends on your setup (devel or yum installed).
There are several DB values which are encrypted during the
installation and which you can override with plain-text values.
In most cases the engine tries to fall back and use the original
value it got despite the error. The question you should now answer
is whether you have any other issue.
If things seem to work well, then it's just a matter of encrypting
the relevant values (if needed).
If you have other related issues, then setting these values to
plain text should use the fallback and resolve the issue.

There doesn't seem to be any negative effect other than making
the log hard to read.  Is there any way to suppress the error?

 Thanks
 Robert
Robert, I do suggest that you try further to investigate the
root cause of this.
However, it is possible to modify the configuration not to log this
error - you should be able to define a log4j category in
$JBOSS_HOME/standalone/configuration/standalone.xml (under the
subsystem xmlns="urn:jboss:domain:logging:1.1") in such a way that
for EncryptionUtils only FATAL messages will appear in the log
(and still, I suggest that you further try to investigate why this
is happening in the first place).

 Yair
I am fine with investigating why, but these errors are so generic
that I have no idea which files to check. Despite trying a few
times, I really don't get Java.
Robert - we can take two approaches here with logging; both require
you to define a log4j category for EncryptionUtils.

First approach - as I suggested, move to FATAL - you will not see
these errors.
Second approach - move to DEBUG. I looked at the code, and saw that
if you define a category for EncryptionUtils with DEBUG level you
will also get the stack trace printed, which may assist in
understanding the problem you're facing.
 Yair,

I checked the standalone.xml file and there is no section related to
EncryptionUtils, nor was there a definition for engine.log. I am
guessing that these are defined in a different config file.

You should modify this file and add a category for EncryptionUtils -
look at how the category:

<logger category="org.ovirt">
    <level name="INFO"/>
    <handlers>
        <handler name="ENGINE_LOG"/>
    </handlers>
</logger>

is defined.

What you should do is add:

<logger category="org.ovirt.engine.core.engineencryptutils">
    <level name="FATAL"/>
    <handlers>
        <handler name="ENGINE_LOG"/>
    </handlers>
</logger>

near the logger category for org.ovirt (I mean, as a sibling of that
element).
Pay attention that here I defined it for FATAL - you may wish to try
it with DEBUG to see the stack trace.

 Yair
 
In an rpm-based env, we use /etc/ovirt-engine/engine-service.xml as the JBoss
settings file.

Thanks for the clarification.
The procedure I stated to add the log category should be the same.
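
If it helps to find the right spot, something along these lines (a rough
sketch; the path follows the RPM layout mentioned above and may differ on
your install) shows where the logging subsystem and the existing org.ovirt
logger element live:

# locate the logging subsystem and the existing logger elements in the
# RPM-based settings file; adjust the path if your layout differs
grep -n 'urn:jboss:domain:logging' /etc/ovirt-engine/engine-service.xml
grep -n '<logger category' /etc/ovirt-engine/engine-service.xml

The new EncryptionUtils logger then goes right next to the org.ovirt one.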

 




 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 Thanks
 Robert
 Thanks
 Robert


 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread Dan Kenigsberg
On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:
 Good monday morning,
 
 Installed Fedora 17 and tried to install the node to a 3.1 engine.
 
 I'm getting an VDS Network exception in the engine side:
 
 in /var/log/ovirt-engine/engine:
 
 2012-06-25 10:15:34,132 WARN
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-96)
 ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
 = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
 ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
 VDSNetworkException:
 2012-06-25 10:15:36,143 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-20) VDS::handleNetworkException Server
 failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
 vds_name = ovirt-node2.smb.eurotux.local, error =
 VDSNetworkException:
 2012-06-25 10:15:36,181 INFO
 [org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
 ResourceManager::vdsNotResponding entered for Host
 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
 2012-06-25 10:15:36,214 ERROR
 [org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
 (pool-3-thread-49) [1afd4b89] Failed to run Fence script on
 vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.
 
 While in the node, vdsmd does fail to sample nics:
 
 in /var/log/vdsm/vdsm.log:
 
nf = netinfo.NetInfo()
   File /usr/share/vdsm/netinfo.py, line 268, in __init__
 _netinfo = get()
   File /usr/share/vdsm/netinfo.py, line 220, in get
 for nic in nics() ])
 KeyError: 'p36p1'
 
 MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
 main thread ended. Waiting for 1 other threads...
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 _MainThread(MainThread, started 140567823243072)
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 Thread(libvirtEventLoop, started daemon 140567752681216)
 
 in /etc/var/log/messages there is a lot of vdsmd died too quickly:
 
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly for more than 30 seconds, master sleeping for 900
 seconds
 
 I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
 16, but tried to configure a bridge ovirtmgmt and the only
 difference is that KeyError becomes 'ovirtmgmt'.

The nic renaming may have happened due to biosdevname. Do you have it
installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
to an old nic name?
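
For example, a quick check (just a sketch; it assumes the old name was eth0
or similar) would be:

# list the configured device names; an ifcfg file still naming a device
# that no longer exists would explain the confusion
grep -H '^DEVICE=' /etc/sysconfig/network-scripts/ifcfg-*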

Which version of vdsm are you running? It seems that it is
pre-v4.9.4-61-g24f8627, which is too old to run on F17 - the output of
ifconfig has changed. Please retry with the latest beta version:
https://koji.fedoraproject.org/koji/buildinfo?buildID=327015

If the problem persists, could you run vdsm manually, with
# su - vdsm -s /bin/bash
# cd /usr/share/vdsm
# ./vdsm
maybe it would give a hint about the crash.

regards,
Dan.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] host install failed (kernel version 3.4.3)

2012-06-25 Thread зоррыч
Unfortunately I do not know how to do that.

At what URL is the bug tracker?

 

 

 

 

 

From: Douglas Landgraf [mailto:dougsl...@redhat.com] 
Sent: Saturday, June 23, 2012 8:10 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] host install failed (kernel version 3.4.3)

 

Hi,

On 06/22/2012 04:26 PM, зоррыч wrote: 

Hi.

I am trying to install the host kernel with version 3.4.3.

an error:

Unsupported kernel version: 0

Logs:

 

[root@noc-3-synt tmp]# cat vds_bootstrap.372080.log

Fri, 22 Jun 2012 16:08:20 DEBUG    Start VDS Validation
Fri, 22 Jun 2012 16:08:20 DEBUG    Entered VdsValidation(subject = '10.1.20.7',
    random_num = '7d832636-a512-40ae-8e27-ecd24728b39a', rev_num = 'None',
    installVirtualizationService = 'False', installGlusterService = 'True')
Fri, 22 Jun 2012 16:08:20 DEBUG    Setting up Package Sacks
Fri, 22 Jun 2012 16:08:20 DEBUG    yumSearch: found vdsm entries:
    [YumAvailablePackageSqlite : vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 (0x1b42a10)]
Fri, 22 Jun 2012 16:08:20 DEBUG    Host properly registered with RHN/Satellite.
Fri, 22 Jun 2012 16:08:20 DEBUG    <BSTRAP component='RHN_REGISTRATION' status='OK'
    message='Host properly registered with RHN/Satellite.'/>
Fri, 22 Jun 2012 16:08:21 DEBUG    yumSearchVersion: pkg
    vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 starts with: vdsm-4.10
Fri, 22 Jun 2012 16:08:21 DEBUG    Available VDSM matches requirements
Fri, 22 Jun 2012 16:08:21 DEBUG    <BSTRAP component='VDSM_MAJOR_VER' status='OK'
    message='Available VDSM matches requirements'/>
Fri, 22 Jun 2012 16:08:21 DEBUG    ['/bin/uname', '-r']
Fri, 22 Jun 2012 16:08:21 DEBUG    3.4.3
Fri, 22 Jun 2012 16:08:21 DEBUG
Fri, 22 Jun 2012 16:08:21 DEBUG    <BSTRAP component='OS' status='OK' type='RHEL6'
    message='Supported platform version'/>
Fri, 22 Jun 2012 16:08:21 DEBUG    <BSTRAP component='KERNEL' status='FAIL' version='0'
    message='Unsupported kernel version: 0. Minimal supported version: 150'/>
Fri, 22 Jun 2012 16:08:21 ERROR    osExplorer test failed
Fri, 22 Jun 2012 16:08:21 DEBUG    <BSTRAP component='RHEV_INSTALL' status='FAIL'/>
Fri, 22 Jun 2012 16:08:21 DEBUG    End VDS Validation

 

[root@noc-3-synt tmp]# uname -r

3.4.3

 

Can you please open a bz assigned to me?

Thanks!



-- 
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Quantum support

2012-06-25 Thread Andrew Cathrow


- Original Message -
 From: Rahul Upadhyaya rak...@gmail.com
 To: users@ovirt.org
 Sent: Monday, June 25, 2012 4:38:12 AM
 Subject: [Users] Quantum support
 
 
 Hi Folks,
 
 
According to your roadmap, when will Quantum support be available
for oVirt?
 

It's still a work in progress - no ETA yet; of course, patches would help :-)

To help make sure all the use cases are scoped, it'd be interesting to hear
how/why you need Quantum.

Current details here : http://www.ovirt.org/wiki/Quantum_and_oVirt


 
 --
 Regards,
 Rahul
 ===
 
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] host install failed (kernel version 3.4.3)

2012-06-25 Thread Dan Kenigsberg
On Mon, Jun 25, 2012 at 02:46:38PM +0400, зоррыч wrote:
 Unfortunately I do not know how to do it.
 
 On what url is bag tracker? 

The bug tracker is bugzilla.redhat.com, look for Community|oVirt
product.
Here's a list of all NEW vdsm bugs
https://bugzilla.redhat.com/buglist.cgi?action=wrapbug_file_loc=bug_file_loc_type=allwordssubstrbug_id=bug_id_type=anyexactchfieldfrom=chfieldto=Nowchfieldvalue=component=vdsmdeadlinefrom=deadlineto=email1=email2=emailtype1=substringemailtype2=substringfield0-0-0=flagtypes.namekeywords=keywords_type=allwordslongdesc=longdesc_type=allwordssubstrshort_desc=short_desc_type=allwordssubstrstatus_whiteboard=status_whiteboard_type=allwordssubstrtype0-0-0=notsubstringvalue0-0-0=rhel-6.2.0votes==bug_status=NEW

I've posted a couple of patches for this kind of installation failure
http://gerrit.ovirt.org/#/c/5637/
I'd appreciate it if you could build vdsm-bootstrap and verify that they
actually work (I haven't...)
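
For reference, the failure is presumably in how the kernel release string is
parsed (an assumption on my part, not verified against the bootstrap code):
an el6-style release such as 2.6.32-220.el6.x86_64 carries a build number
after the dash, which is compared against the minimum of 150, while a vanilla
3.4.3 has no dash at all, so the check ends up with 0. Roughly:

# illustration only -- mimics the assumed parsing, not the actual vds_bootstrap code
uname -r | awk -F- '{ print ($2 == "" ? 0 : int($2)) }'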

  
 
 From: Douglas Landgraf [mailto:dougsl...@redhat.com] 
 Sent: Saturday, June 23, 2012 8:10 AM
 To: зоррыч
 Cc: users@ovirt.org
 Subject: Re: [Users] host install failed (kernel version 3.4.3)
 
  
 
 Hi,
 
 On 06/22/2012 04:26 PM, зоррыч wrote: 
 
 Hi.
 
 I am trying to install the host kernel with version 3.4.3.
 
 an error:
 
 Unsupported kernel version: 0
 
 Logs:
 
  
 
 [root@noc-3-synt tmp]# cat vds_bootstrap.372080.log
 
 Fri, 22 Jun 2012 16:08:20 DEBUG Start VDS Validation 
 
 Fri, 22 Jun 2012 16:08:20 DEBUGEntered VdsValidation(subject = 
 '10.1.20.7', random_num = '7d832636-a512-40ae-8e27-ecd24728b39a', rev_num = 
 'None', installVirtualizationService = 'False', installGlusterService = 
 'True')
 
 Fri, 22 Jun 2012 16:08:20 DEBUGSetting up Package Sacks
 
 Fri, 22 Jun 2012 16:08:20 DEBUGyumSearch: found vdsm entries: 
 [YumAvailablePackageSqlite : vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 
 (0x1b42a10)]
 
 Fri, 22 Jun 2012 16:08:20 DEBUGHost properly registered with 
 RHN/Satellite.
 
 Fri, 22 Jun 2012 16:08:20 DEBUGBSTRAP component='RHN_REGISTRATION' 
 status='OK' message='Host properly registered with RHN/Satellite.'/
 
 Fri, 22 Jun 2012 16:08:21 DEBUGyumSearchVersion: pkg 
 vdsm-4.10.0-0.58.gita6f4929.el6.x86_64 starts with: vdsm-4.10
 
 Fri, 22 Jun 2012 16:08:21 DEBUGAvailable VDSM matches requirements
 
 Fri, 22 Jun 2012 16:08:21 DEBUGBSTRAP component='VDSM_MAJOR_VER' 
 status='OK' message='Available VDSM matches requirements'/
 
 Fri, 22 Jun 2012 16:08:21 DEBUG['/bin/uname', '-r']
 
 Fri, 22 Jun 2012 16:08:21 DEBUG3.4.3
 
  
 
 Fri, 22 Jun 2012 16:08:21 DEBUG
 
 Fri, 22 Jun 2012 16:08:21 DEBUGBSTRAP component='OS' status='OK' 
 type='RHEL6' message='Supported platform version'/
 
 Fri, 22 Jun 2012 16:08:21 DEBUGBSTRAP component='KERNEL' status='FAIL' 
 version='0' message='Unsupported kernel version: 0. Minimal supported 
 version: 150'/
 
 Fri, 22 Jun 2012 16:08:21 ERRORosExplorer test failed
 
 Fri, 22 Jun 2012 16:08:21 DEBUGBSTRAP component='RHEV_INSTALL' 
 status='FAIL'/
 
 Fri, 22 Jun 2012 16:08:21 DEBUG End VDS Validation 
 
  
 
 [root@noc-3-synt tmp]# uname -r
 
 3.4.3
 
  
 
 Can you please open a bz assigned to me?
 
 Thanks!
 
 
 
 -- 
 Cheers
 Douglas

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread jose garcia

On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:

Good monday morning,

Installed Fedora 17 and tried to install the node to a 3.1 engine.

I'm getting an VDS Network exception in the engine side:

in /var/log/ovirt-engine/engine:

2012-06-25 10:15:34,132 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-96)
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
= 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 10:15:36,143 ERROR
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-20) VDS::handleNetworkException Server
failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
vds_name = ovirt-node2.smb.eurotux.local, error =
VDSNetworkException:
2012-06-25 10:15:36,181 INFO
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
ResourceManager::vdsNotResponding entered for Host
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.

While in the node, vdsmd does fail to sample nics:

in /var/log/vdsm/vdsm.log:

nf = netinfo.NetInfo()
   File /usr/share/vdsm/netinfo.py, line 268, in __init__
 _netinfo = get()
   File /usr/share/vdsm/netinfo.py, line 220, in get
 for nic in nics() ])
KeyError: 'p36p1'

MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
_MainThread(MainThread, started 140567823243072)
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
Thread(libvirtEventLoop, started daemon 140567752681216)

in /etc/var/log/messages there is a lot of vdsmd died too quickly:

Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly for more than 30 seconds, master sleeping for 900
seconds

I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
16, but tried to configure a bridge ovirtmgmt and the only
difference is that KeyError becomes 'ovirtmgmt'.

The nic renaming may have happened due to biosdevname. Do you have it
installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
to an old nic name?

Which version of vdsm are you running? It seems that it is
pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
ifconfig has changed. Please retry with latest beta version
https://koji.fedoraproject.org/koji/buildinfo?buildID=327015

If the problem persists, could you run vdsm manually, with
# su - vdsm -s /bin/bash
# cd /usr/share/vdsm
# ./vdsm
maybe it would give a hint about the crash.

regards,
Dan.
Well, thank you. I have updated Vdsm to version 4.10. Now the problem is 
with SSL and XMLRPC.


This is the error on the engine side:

/var/log/ovirt-engine/engine.log

ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand] 
(QuartzScheduler_Worker-52) XML RPC error in command GetCapabilitiesVDS 
( Vds: ovirt-node2.smb.eurotux.local ), the error was: 
java.util.concurrent.ExecutionException: 
java.lang.reflect.InvocationTargetException, NoHttpResponseException: 
The server 10.10.30.177 failed to respond.


On the node side, there seems to be an authentication problem:

/var/log/vdsm/vdsm.log

SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL 
routines:SSL23_GET_CLIENT_HELLO:http request
Thread-810::ERROR::2012-06-25 
12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client 
('10.10.30.101', 58605)

Traceback (most recent call last):
  File /usr/lib64/python2.7/SocketServer.py, line 582, in 
process_request_thread

self.finish_request(request, client_address)
  File /usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, 
line 66, in finish_request

request.do_handshake()
  File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL 
routines:SSL23_GET_CLIENT_HELLO:http request


In /var/log/messages there is an:

vdsm [5834]: vdsm root ERROR client ()

with the ip address of the engine.

Kind regards.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread Dan Kenigsberg
On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:
 On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:
 Good monday morning,
 
 Installed Fedora 17 and tried to install the node to a 3.1 engine.
 
 I'm getting an VDS Network exception in the engine side:
 
 in /var/log/ovirt-engine/engine:
 
 2012-06-25 10:15:34,132 WARN
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-96)
 ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
 = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
 ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
 VDSNetworkException:
 2012-06-25 10:15:36,143 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-20) VDS::handleNetworkException Server
 failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
 vds_name = ovirt-node2.smb.eurotux.local, error =
 VDSNetworkException:
 2012-06-25 10:15:36,181 INFO
 [org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
 ResourceManager::vdsNotResponding entered for Host
 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
 2012-06-25 10:15:36,214 ERROR
 [org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
 (pool-3-thread-49) [1afd4b89] Failed to run Fence script on
 vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.
 
 While in the node, vdsmd does fail to sample nics:
 
 in /var/log/vdsm/vdsm.log:
 
 nf = netinfo.NetInfo()
File /usr/share/vdsm/netinfo.py, line 268, in __init__
  _netinfo = get()
File /usr/share/vdsm/netinfo.py, line 220, in get
  for nic in nics() ])
 KeyError: 'p36p1'
 
 MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
 main thread ended. Waiting for 1 other threads...
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 _MainThread(MainThread, started 140567823243072)
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 Thread(libvirtEventLoop, started daemon 140567752681216)
 
 in /etc/var/log/messages there is a lot of vdsmd died too quickly:
 
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly for more than 30 seconds, master sleeping for 900
 seconds
 
 I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
 16, but tried to configure a bridge ovirtmgmt and the only
 difference is that KeyError becomes 'ovirtmgmt'.
 The nic renaming may have happened due to biosdevname. Do you have it
 installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
 to an old nic name?
 
 Which version of vdsm are you running? It seems that it is
 pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
 ifconfig has changed. Please retry with latest beta version
 https://koji.fedoraproject.org/koji/buildinfo?buildID=327015
 
 If the problem persists, could you run vdsm manually, with
 # su - vdsm -s /bin/bash
 # cd /usr/share/vdsm
 # ./vdsm
 maybe it would give a hint about the crash.
 
 regards,
 Dan.
 Well, thank you. I have updated Vdsm to version 4.10. Now the
 problem is with SSL and XMLRPC.
 
 This is the error in the side of the engine:
 
 /var/log/ovirt-engine/engine.log
 
 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-52) XML RPC error in command
 GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
 was: java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException,
 NoHttpResponseException: The server 10.10.30.177 failed to respond.
 
 In the side of the node, there seems to be an authentication problem:
 
 /var/log/vdsm/vdsm.log
 
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 Thread-810::ERROR::2012-06-25
 12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
 ('10.10.30.101', 58605)
 Traceback (most recent call last):
   File /usr/lib64/python2.7/SocketServer.py, line 582, in
 process_request_thread
 self.finish_request(request, client_address)
   File
 /usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
 66, in finish_request
 request.do_handshake()
   File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
 self._sslobj.do_handshake()
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 
 In /var/log/messages there is an:
 
 vdsm [5834]: vdsm root ERROR client ()
 
 with the ip address of the engine.

Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
Does vdsm respond locally to

vdsClient -s 0 getVdsCaps

(Maybe your local certificates and key were corrupted, and you will have
to re-install the host from Engine in order to create a new set.)
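
A few quick local checks on the node might also help (paths assume the
default vdsm layout):

grep -i 'ssl' /etc/vdsm/vdsm.conf
ls -l /etc/pki/vdsm/certs /etc/pki/vdsm/keys
openssl x509 -noout -dates -in /etc/pki/vdsm/certs/vdsmcert.pem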

___

Re: [Users] Do not start the virtual machine (gluster storage and ovirt 3.1)

2012-06-25 Thread зоррыч
I'm sorry.
I disabled SELinux and the error disappeared.
Thank you!
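
For the record, the denials can be inspected with something like this
(a sketch; assumes the audit tools are installed), which may be preferable
to disabling SELinux outright:

getenforce
ausearch -m avc -ts recent | audit2why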



-Original Message-
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of 
зоррыч
Sent: Monday, June 25, 2012 3:20 PM
To: 'Itamar Heim'
Cc: users@ovirt.org
Subject: Re: [Users] Do not start the virtual machine (gluster storage and 
ovirt 3.1)

In an attachment



-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Saturday, June 23, 2012 4:46 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] Do not start the virtual machine (gluster storage and 
ovirt 3.1)

On 06/22/2012 05:02 PM, зоррыч wrote:
 Hi.

 I use a bunch of ovirt 3.1 beta and gluster storage.

 The virtual machine was created successfully, but will not start.

 In the logs:

 Vdsm.log:

 Thread-1426::DEBUG::2012-06-22
 09:37:27,151::task::978::TaskManager.Task::(_decref)
 Task=`9a68c120-169f-4c0e-98e3-08e3bf5c66ab`::ref 0 aborting False

 Thread-1427::DEBUG::2012-06-22
 09:37:27,162::BindingXMLRPC::160::vds::(wrapper) [10.1.20.2]

 Thread-1427::DEBUG::2012-06-22
 09:37:27,163::task::588::TaskManager.Task::(_updateState)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state init - 
 state preparing

 Thread-1427::INFO::2012-06-22
 09:37:27,163::logUtils::37::dispatcher::(wrapper) Run and protect:
 getStoragePoolInfo(spUUID='b1c7875a-964d-4633-8ea4-2b191d68c105',
 options=None)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,163::resourceManager::175::ResourceManager.Request::(__init__
 )
 ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-
 1f0b-4225-9717-d1179193c42e`::Request
 was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 
 'registerResource'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::486::ResourceManager::(registerResource
 )
 Trying to register resource
 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' for lock type 'shared'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::528::ResourceManager::(registerResource
 ) Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free. Now 
 locking as 'shared' (1 active user)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::resourceManager::212::ResourceManager.Request::(grant)
 ResName=`Storage.b1c7875a-964d-4633-8ea4-2b191d68c105`ReqID=`ca9b7715-
 1f0b-4225-9717-d1179193c42e`::Granted
 request

 Thread-1427::DEBUG::2012-06-22
 09:37:27,164::task::817::TaskManager.Task::(resourceAcquired)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::_resourcesAcquired:
 Storage.b1c7875a-964d-4633-8ea4-2b191d68c105 (shared)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,165::task::978::TaskManager.Task::(_decref)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::ref 1 aborting False

 Thread-1427::INFO::2012-06-22
 09:37:27,165::logUtils::39::dispatcher::(wrapper) Run and protect:
 getStoragePoolInfo, Return response: {'info': {'spm_id': 1,
 'master_uuid': '68aa0dc2-9cd1-4549-8008-30b1bae667db', 'name':
 'gluster', 'version': '0', 'domains':
 '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active', 'pool_status':
 'connected', 'isoprefix': '', 'type': 'SHAREDFS', 'master_ver': 1,
 'lver': 0}, 'dominfo': {'68aa0dc2-9cd1-4549-8008-30b1bae667db':
 {'status': 'Active', 'diskfree': '27505983488', 'alerts': [],
 'disktotal': '53579874304'}}}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,165::task::1172::TaskManager.Task::(prepare)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::finished: {'info':
 {'spm_id': 1, 'master_uuid': '68aa0dc2-9cd1-4549-8008-30b1bae667db',
 'name': 'gluster', 'version': '0', 'domains':
 '68aa0dc2-9cd1-4549-8008-30b1bae667db:Active', 'pool_status':
 'connected', 'isoprefix': '', 'type': 'SHAREDFS', 'master_ver': 1,
 'lver': 0}, 'dominfo': {'68aa0dc2-9cd1-4549-8008-30b1bae667db':
 {'status': 'Active', 'diskfree': '27505983488', 'alerts': [],
 'disktotal': '53579874304'}}}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::task::588::TaskManager.Task::(_updateState)
 Task=`662a52dd-f00d-4be1-941d-eac8ec6a70f6`::moving from state 
 preparing
 - state finished

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::809::ResourceManager.Owner::(releaseAll
 ) Owner.releaseAll requests {} resources
 {'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105':  ResourceRef 
 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105', isValid: 'True' obj:
 'None'}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::844::ResourceManager.Owner::(cancelAll)
 Owner.cancelAll requests {}

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::538::ResourceManager::(releaseResource)
 Trying to release resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105'

 Thread-1427::DEBUG::2012-06-22
 09:37:27,166::resourceManager::553::ResourceManager::(releaseResource)
 Released resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' (0 
 active users)

 Thread-1427::DEBUG::2012-06-22
 09:37:27,167::resourceManager::558::ResourceManager::(releaseResource)
 Resource 'Storage.b1c7875a-964d-4633-8ea4-2b191d68c105' is free, 
 finding 

Re: [Users] ovirt3.1 add node error

2012-06-25 Thread Juan Hernandez

On 06/25/2012 11:54 AM, tian b wrote:

# systemctl status jboss-as.service
jboss-as.service - The JBoss Application Server
  Loaded: loaded (/usr/lib/systemd/system/jboss-as.service; enabled)
  Active: active (running) since Mon, 25 Jun 2012 17:29:06 +0800; 24min 
ago
Main PID: 4873 (standalone.sh)
  CGroup: name=systemd:/system/jboss-as.service
  ├ 4873 /bin/sh /usr/share/jboss-as/bin/standalone.sh -c 
standalone-web.xml
  └ 4925 java -D[Standalone] -server -XX:+UseCompressedOops 
-XX:+TieredCompilation -Xms64m -Xmx512m -XX:MaxPermSize=256m 
-Djava.net.preferIPv4Stack=true -Dorg.j...


Not sure if this is your problem, but if you are installing the RPMs in 
Fedora 17 the jboss-as service should be stopped and disabled. You 
should instead use the ovirt-engine service.
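
Roughly, assuming the Fedora 17 systemd layout:

systemctl stop jboss-as.service
systemctl disable jboss-as.service
systemctl enable ovirt-engine.service
systemctl start ovirt-engine.service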


--
Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta 
3ºD, 28016 Madrid, Spain

Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] direct lun

2012-06-25 Thread ovirt
A VM with an attached direct LUN does not start; the error in vm.log is:

qemu-kvm: -drive 
file=/dev/mapper/1p_ISCSI_0_lun1,if=none,id=drive-virtio-disk1,format=raw,serial=,cache=none,werror=stop,rerror=stop,aio=native:
 could not open disk image /dev/mapper/1p_ISCSI_0_lun1: Permission denied

Is this a bug, or is the feature not implemented yet?

env:
fedora17
engine from http://www.ovirt.org/releases/beta/fedora/17/
vdsm-4.10.0-0.57.git2987ee3.fc17.x86_64

[root@kvm04 /]# ls -l /dev/mapper/ | grep lun1
lrwxrwxrwx. 1 root root   7 Jun 25 14:05 1p_ISCSI_0_lun1 -> ../dm-7
[root@kvm04 /]# ls -l /dev/ | grep dm-7
brw-rw----.  1 root disk  253,   7 Jun 25 14:05 dm-7

usermod -a -G disk qemu solved the problem, but is that the correct way to solve it?
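
To double-check that the workaround took effect (the device path is the one
from the listing above, and qemu processes need to be restarted to pick up
the new group):

id -nG qemu
ls -l /dev/dm-7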

--
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread Dan Kenigsberg
On Mon, Jun 25, 2012 at 01:15:08PM +0100, jose garcia wrote:
 On 06/25/2012 12:30 PM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:
 On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:
 Good monday morning,
 
 Installed Fedora 17 and tried to install the node to a 3.1 engine.
 
 I'm getting an VDS Network exception in the engine side:
 
 in /var/log/ovirt-engine/engine:
 
 2012-06-25 10:15:34,132 WARN
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-96)
 ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
 = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
 ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
 VDSNetworkException:
 2012-06-25 10:15:36,143 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-20) VDS::handleNetworkException Server
 failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
 vds_name = ovirt-node2.smb.eurotux.local, error =
 VDSNetworkException:
 2012-06-25 10:15:36,181 INFO
 [org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
 ResourceManager::vdsNotResponding entered for Host
 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
 2012-06-25 10:15:36,214 ERROR
 [org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
 (pool-3-thread-49) [1afd4b89] Failed to run Fence script on
 vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.
 
 While in the node, vdsmd does fail to sample nics:
 
 in /var/log/vdsm/vdsm.log:
 
 nf = netinfo.NetInfo()
File /usr/share/vdsm/netinfo.py, line 268, in __init__
  _netinfo = get()
File /usr/share/vdsm/netinfo.py, line 220, in get
  for nic in nics() ])
 KeyError: 'p36p1'
 
 MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
 main thread ended. Waiting for 1 other threads...
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 _MainThread(MainThread, started 140567823243072)
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 Thread(libvirtEventLoop, started daemon 140567752681216)
 
 in /etc/var/log/messages there is a lot of vdsmd died too quickly:
 
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly for more than 30 seconds, master sleeping for 900
 seconds
 
 I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
 16, but tried to configure a bridge ovirtmgmt and the only
 difference is that KeyError becomes 'ovirtmgmt'.
 The nic renaming may have happened due to biosdevname. Do you have it
 installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
 to an old nic name?
 
 Which version of vdsm are you running? It seems that it is
 pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
 ifconfig has changed. Please retry with latest beta version
 https://koji.fedoraproject.org/koji/buildinfo?buildID=327015
 
 If the problem persists, could you run vdsm manually, with
 # su - vdsm -s /bin/bash
 # cd /usr/share/vdsm
 # ./vdsm
 maybe it would give a hint about the crash.
 
 regards,
 Dan.
 Well, thank you. I have updated Vdsm to version 4.10. Now the
 problem is with SSL and XMLRPC.
 
 This is the error in the side of the engine:
 
 /var/log/ovirt-engine/engine.log
 
 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-52) XML RPC error in command
 GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
 was: java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException,
 NoHttpResponseException: The server 10.10.30.177 failed to respond.
 
 In the side of the node, there seems to be an authentication problem:
 
 /var/log/vdsm/vdsm.log
 
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 Thread-810::ERROR::2012-06-25
 12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
 ('10.10.30.101', 58605)
 Traceback (most recent call last):
File /usr/lib64/python2.7/SocketServer.py, line 582, in
 process_request_thread
  self.finish_request(request, client_address)
File
 /usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
 66, in finish_request
  request.do_handshake()
File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
  self._sslobj.do_handshake()
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 
 In /var/log/messages there is an:
 
 vdsm [5834]: vdsm root ERROR client ()
 
 with the ip address of the engine.
 Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
 Does vdsm respond locally to
 
  vdsClient -s 0 getVdsCaps
 
 (Maybe your local certificates and key were corrupted, and you will 

[Users] Call For Agenda Items -- oVirt Weekly Meeting 2012-06-20

2012-06-25 Thread Mike Burns
The next oVirt Weekly Meeting is scheduled for 2012-06-20 at 15:00 UTC
(10:00 AM EDT).

Current Agenda Items (as of this email)

* Status of Next Release
* Sub-project reports (engine, vdsm, node)
* Upcoming workshops


Latest Agenda is listed here:
http://www.ovirt.org/wiki/Meetings#Weekly_project_sync_meeting

Reminder:  If you propose an additional agenda item, please be present
for the meeting to lead the discussion on it.

Thanks

Mike

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread jose garcia

On 06/25/2012 01:24 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 01:15:08PM +0100, jose garcia wrote:

On 06/25/2012 12:30 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:

On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:

Good monday morning,

Installed Fedora 17 and tried to install the node to a 3.1 engine.

I'm getting an VDS Network exception in the engine side:

in /var/log/ovirt-engine/engine:

2012-06-25 10:15:34,132 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-96)
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
= 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 10:15:36,143 ERROR
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-20) VDS::handleNetworkException Server
failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
vds_name = ovirt-node2.smb.eurotux.local, error =
VDSNetworkException:
2012-06-25 10:15:36,181 INFO
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
ResourceManager::vdsNotResponding entered for Host
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.

While in the node, vdsmd does fail to sample nics:

in /var/log/vdsm/vdsm.log:

nf = netinfo.NetInfo()
   File /usr/share/vdsm/netinfo.py, line 268, in __init__
 _netinfo = get()
   File /usr/share/vdsm/netinfo.py, line 220, in get
 for nic in nics() ])
KeyError: 'p36p1'

MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
_MainThread(MainThread, started 140567823243072)
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
Thread(libvirtEventLoop, started daemon 140567752681216)

in /etc/var/log/messages there is a lot of vdsmd died too quickly:

Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly for more than 30 seconds, master sleeping for 900
seconds

I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
16, but tried to configure a bridge ovirtmgmt and the only
difference is that KeyError becomes 'ovirtmgmt'.

The nic renaming may have happened due to biosdevname. Do you have it
installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
to an old nic name?

Which version of vdsm are you running? It seems that it is
pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
ifconfig has changed. Please retry with latest beta version
https://koji.fedoraproject.org/koji/buildinfo?buildID=327015

If the problem persists, could you run vdsm manually, with
# su - vdsm -s /bin/bash
# cd /usr/share/vdsm
# ./vdsm
maybe it would give a hint about the crash.

regards,
Dan.

Well, thank you. I have updated Vdsm to version 4.10. Now the
problem is with SSL and XMLRPC.

This is the error in the side of the engine:

/var/log/ovirt-engine/engine.log

ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-52) XML RPC error in command
GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
NoHttpResponseException: The server 10.10.30.177 failed to respond.

In the side of the node, there seems to be an authentication problem:

/var/log/vdsm/vdsm.log

SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request
Thread-810::ERROR::2012-06-25
12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
('10.10.30.101', 58605)
Traceback (most recent call last):
   File /usr/lib64/python2.7/SocketServer.py, line 582, in
process_request_thread
 self.finish_request(request, client_address)
   File
/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
66, in finish_request
 request.do_handshake()
   File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
 self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request

In /var/log/messages there is an:

vdsm [5834]: vdsm root ERROR client ()

with the ip address of the engine.

Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
Does vdsm respond locally to

 vdsClient -s 0 getVdsCaps

(Maybe your local certificates and key were corrupted, and you will have
to re-install the host form Engine in order to create a new set)


I 

Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread Dan Kenigsberg
On Mon, Jun 25, 2012 at 03:15:47PM +0100, jose garcia wrote:
 On 06/25/2012 01:24 PM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 01:15:08PM +0100, jose garcia wrote:
 On 06/25/2012 12:30 PM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:
 On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:
 On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:
 Good monday morning,
 
 Installed Fedora 17 and tried to install the node to a 3.1 engine.
 
 I'm getting an VDS Network exception in the engine side:
 
 in /var/log/ovirt-engine/engine:
 
 2012-06-25 10:15:34,132 WARN
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-96)
 ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
 = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
 ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
 VDSNetworkException:
 2012-06-25 10:15:36,143 ERROR
 [org.ovirt.engine.core.vdsbroker.VdsManager]
 (QuartzScheduler_Worker-20) VDS::handleNetworkException Server
 failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
 vds_name = ovirt-node2.smb.eurotux.local, error =
 VDSNetworkException:
 2012-06-25 10:15:36,181 INFO
 [org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
 ResourceManager::vdsNotResponding entered for Host
 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
 2012-06-25 10:15:36,214 ERROR
 [org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
 (pool-3-thread-49) [1afd4b89] Failed to run Fence script on
 vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.
 
 While in the node, vdsmd does fail to sample nics:
 
 in /var/log/vdsm/vdsm.log:
 
 nf = netinfo.NetInfo()
File /usr/share/vdsm/netinfo.py, line 268, in __init__
  _netinfo = get()
File /usr/share/vdsm/netinfo.py, line 220, in get
  for nic in nics() ])
 KeyError: 'p36p1'
 
 MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
 main thread ended. Waiting for 1 other threads...
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 _MainThread(MainThread, started 140567823243072)
 MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
 Thread(libvirtEventLoop, started daemon 140567752681216)
 
 in /etc/var/log/messages there is a lot of vdsmd died too quickly:
 
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly, respawning slave
 Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
 died too quickly for more than 30 seconds, master sleeping for 900
 seconds
 
 I don't know why Fedora 17 calls p36p1 to what was eth0 in Fedora
 16, but tried to configure a bridge ovirtmgmt and the only
 difference is that KeyError becomes 'ovirtmgmt'.
 The nic renaming may have happened due to biosdevname. Do you have it
 installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
 to an old nic name?
 
 Which version of vdsm are you running? It seems that it is
 pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
 ifconfig has changed. Please retry with latest beta version
 https://koji.fedoraproject.org/koji/buildinfo?buildID=327015
 
 If the problem persists, could you run vdsm manually, with
 # su - vdsm -s /bin/bash
 # cd /usr/share/vdsm
 # ./vdsm
 maybe it would give a hint about the crash.
 
 regards,
 Dan.
 Well, thank you. I have updated Vdsm to version 4.10. Now the
 problem is with SSL and XMLRPC.
 
 This is the error in the side of the engine:
 
 /var/log/ovirt-engine/engine.log
 
 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
 (QuartzScheduler_Worker-52) XML RPC error in command
 GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
 was: java.util.concurrent.ExecutionException:
 java.lang.reflect.InvocationTargetException,
 NoHttpResponseException: The server 10.10.30.177 failed to respond.
 
On the node side, there seems to be an authentication problem:
 
 /var/log/vdsm/vdsm.log
 
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 Thread-810::ERROR::2012-06-25
 12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
 ('10.10.30.101', 58605)
 Traceback (most recent call last):
File /usr/lib64/python2.7/SocketServer.py, line 582, in
 process_request_thread
  self.finish_request(request, client_address)
File
 /usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
 66, in finish_request
  request.do_handshake()
File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
  self._sslobj.do_handshake()
 SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
 routines:SSL23_GET_CLIENT_HELLO:http request
 
 In /var/log/messages there is an:
 
 vdsm [5834]: vdsm root ERROR client ()
 
 with the ip address of the engine.
 Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
 Does vdsm respond locally 

[Users] Adding disk via rest api

2012-06-25 Thread Jakub Libosvar

Hi guys,

I'm struggling with creating a disk via the API. I tried to POST this body:
<disk>
<name>my_cool_disk</name>
<provisioned_size>1073741824</provisioned_size>
<storage_domains>
<storage_domain>
<name>master_sd</name>
</storage_domain>
</storage_domains>
<size>1073741824</size>
<interface>virtio</interface>
<format>cow</format>
</disk>

but getting error from CanDoAction:
2012-06-25 17:37:14,497 WARN [org.ovirt.engine.core.bll.AddDiskCommand] 
(ajp--0.0.0.0-8009-11) [26a7e908] CanDoAction of action AddDisk failed. 
Reasons:VAR__ACTION__ADD,VAR__TYPE__VM_DISK,ACTION_TYPE_FAILED_STORAGE_DOMAIN_NOT_EXIST
2012-06-25 17:37:14,502 ERROR 
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] 
(ajp--0.0.0.0-8009-11) Operation Failed: [Cannot add Virtual Machine 
Disk. Storage Domain doesn't exist.]


The storage domain 'master_sd' is operational and I can create a disk 
from webadmin. According to the RSDL, provisioned_size is not a child of 
the disk element:

<parameter required="true" type="xs:int">
   <name>provisioned_size</name>
</parameter>
<parameter required="true" type="xs:string">
   <name>disk.interface</name>
</parameter>
<parameter required="true" type="xs:string">
   <name>disk.format</name>
</parameter>

but in api/disks it is.

Any ideas what I am doing wrong?
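
For what it's worth, here is a minimal curl sketch of the same POST with the
storage domain referenced by id instead of name. The engine URL, credentials
and the id below are placeholders (not values from this thread), so this is
only a guess at a workaround, not a confirmed fix:

  SD_ID=11111111-2222-3333-4444-555555555555   # taken from GET /api/storagedomains
  curl -k -u admin@internal:password -H "Content-Type: application/xml" \
       -X POST "https://engine.example.com:8443/api/disks" -d "
  <disk>
    <name>my_cool_disk</name>
    <storage_domains>
      <storage_domain id=\"$SD_ID\"/>
    </storage_domains>
    <size>1073741824</size>
    <interface>virtio</interface>
    <format>cow</format>
  </disk>"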

Thanks,
Kuba
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread jose garcia

On 06/25/2012 04:17 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 03:15:47PM +0100, jose garcia wrote:

On 06/25/2012 01:24 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 01:15:08PM +0100, jose garcia wrote:

On 06/25/2012 12:30 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:

On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:

Good monday morning,

Installed Fedora 17 and tried to add the node to a 3.1 engine.

I'm getting a VDS Network exception on the engine side:

in /var/log/ovirt-engine/engine:

2012-06-25 10:15:34,132 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-96)
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds
= 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 10:15:36,143 ERROR
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-20) VDS::handleNetworkException Server
failed to respond,  vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
vds_name = ovirt-node2.smb.eurotux.local, error =
VDSNetworkException:
2012-06-25 10:15:36,181 INFO
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
ResourceManager::vdsNotResponding entered for Host
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.

Meanwhile, on the node, vdsmd fails to sample NICs:

in /var/log/vdsm/vdsm.log:

nf = netinfo.NetInfo()
   File /usr/share/vdsm/netinfo.py, line 268, in __init__
 _netinfo = get()
   File /usr/share/vdsm/netinfo.py, line 220, in get
 for nic in nics() ])
KeyError: 'p36p1'

MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run) VDSM
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
_MainThread(MainThread, started 140567823243072)
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
Thread(libvirtEventLoop, started daemon 140567752681216)

in /var/log/messages there are a lot of 'vdsmd died too quickly' messages:

Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly for more than 30 seconds, master sleeping for 900
seconds

I don't know why Fedora 17 calls what was eth0 in Fedora 16 'p36p1',
but I tried to configure an ovirtmgmt bridge, and the only
difference is that the KeyError becomes 'ovirtmgmt'.

The nic renaming may have happened due to biosdevname. Do you have it
installed? Does any of the /etc/sysconfig/network-scripts/ifcfg-* refer
to an old nic name?

Which version of vdsm are you running? It seems that it is
pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the output of
ifconfig has changed. Please retry with latest beta version
https://koji.fedoraproject.org/koji/buildinfo?buildID=327015

If the problem persists, could you run vdsm manually, with
# su - vdsm -s /bin/bash
# cd /usr/share/vdsm
# ./vdsm
maybe it would give a hint about the crash.

regards,
Dan.

Well, thank you. I have updated Vdsm to version 4.10. Now the
problem is with SSL and XMLRPC.

This is the error on the engine side:

/var/log/ovirt-engine/engine.log

ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-52) XML RPC error in command
GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
NoHttpResponseException: The server 10.10.30.177 failed to respond.

On the node side, there seems to be an authentication problem:

/var/log/vdsm/vdsm.log

SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request
Thread-810::ERROR::2012-06-25
12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
('10.10.30.101', 58605)
Traceback (most recent call last):
   File /usr/lib64/python2.7/SocketServer.py, line 582, in
process_request_thread
 self.finish_request(request, client_address)
   File
/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
66, in finish_request
 request.do_handshake()
   File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
 self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request

In /var/log/messages there is an:

vdsm [5834]: vdsm root ERROR client ()

with the ip address of the engine.

Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
Does vdsm respond locally to

 vdsClient -s 0 getVdsCaps

(Maybe your local certificates 
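
For reference, a quick sketch of checking both points on the node (assuming
the default vdsm config location):

  # grep -i '^ssl' /etc/vdsm/vdsm.conf   # expect "ssl = true" on an SSL setup
  # vdsClient -s 0 getVdsCaps            # query the local vdsm over SSL
  # vdsClient 0 getVdsCaps               # same query in plain text (no -s)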

Re: [Users] Call For Agenda Items -- oVirt Weekly Meeting 2012-06-20

2012-06-25 Thread Mike Burns
So, the date is obviously 2012-06-27 and not 06-20...

On Mon, 2012-06-25 at 08:27 -0400, Mike Burns wrote:
 The next oVirt Weekly Meeting is scheduled for 2012-06-20 at 15:00 UTC
 (10:00 AM EDT).
 
 Current Agenda Items (as of this email)
 
 * Status of Next Release
 * Sub-project reports (engine, vdsm, node)
 * Upcoming workshops
 
 
 Latest Agenda is listed here:
 http://www.ovirt.org/wiki/Meetings#Weekly_project_sync_meeting
 
 Reminder:  If you propose an additional agenda item, please be present
 for the meeting to lead the discussion on it.
 
 Thanks
 
 Mike
 
 ___
 Board mailing list
 bo...@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/board


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Post-Mortem Notes

2012-06-25 Thread Leslie Hawthorn

On 06/21/2012 09:22 AM, Barak Azulay wrote:

Thanks for setting it up.

BTW the videos from the VDSM sessions were not uploaded (the VDSM
session mentioned is another ovirt arch session that I gave in the mini
summit).

Do you have a way to check with the Linux Foundation ?



I checked in with the Linux Foundation and it turns out that the audio 
from this VDSM session was very poor so the video was not usable. I have 
asked the LF to take down the incorrect video. Would you like to see the 
mini-summit session linked from the oVirt workshop page? I don't know if 
it's possible, but it cannot hurt to ask if this is a desired outcome.


Cheers,
LH

--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat

identi.ca/lh
twitter.com/lhawthorn

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Vdsmd is respawning trying to sample NICs

2012-06-25 Thread Itamar Heim

On 06/25/2012 12:00 PM, jose garcia wrote:

On 06/25/2012 04:17 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 03:15:47PM +0100, jose garcia wrote:

On 06/25/2012 01:24 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 01:15:08PM +0100, jose garcia wrote:

On 06/25/2012 12:30 PM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 12:11:37PM +0100, jose garcia wrote:

On 06/25/2012 11:37 AM, Dan Kenigsberg wrote:

On Mon, Jun 25, 2012 at 10:57:47AM +0100, jose garcia wrote:

Good monday morning,

Installed Fedora 17 and tried to add the node to a 3.1 engine.

I'm getting a VDS Network exception on the engine side:

in /var/log/ovirt-engine/engine:

2012-06-25 10:15:34,132 WARN
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-96)
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS ,
vds
= 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb :
ovirt-node2.smb.eurotux.local, VDS Network Error, continuing.
VDSNetworkException:
2012-06-25 10:15:36,143 ERROR
[org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-20) VDS::handleNetworkException Server
failed to respond, vds_id = 2e9929c6-bea6-11e1-bfdd-ff11f39c80eb,
vds_name = ovirt-node2.smb.eurotux.local, error =
VDSNetworkException:
2012-06-25 10:15:36,181 INFO
[org.ovirt.engine.core.bll.VdsEventListener] (pool-3-thread-49)
ResourceManager::vdsNotResponding entered for Host
2e9929c6-bea6-11e1-bfdd-ff11f39c80eb, 10.10.30.177
2012-06-25 10:15:36,214 ERROR
[org.ovirt.engine.core.bll.VdsNotRespondingTreatmentCommand]
(pool-3-thread-49) [1afd4b89] Failed to run Fence script on
vds:ovirt-node2.smb.eurotux.local, VMs moved to UnKnown instead.

Meanwhile, on the node, vdsmd fails to sample NICs:

in /var/log/vdsm/vdsm.log:

nf = netinfo.NetInfo()
File /usr/share/vdsm/netinfo.py, line 268, in __init__
_netinfo = get()
File /usr/share/vdsm/netinfo.py, line 220, in get
for nic in nics() ])
KeyError: 'p36p1'

MainThread::INFO::2012-06-25 10:45:09,110::vdsm::76::vds::(run)
VDSM
main thread ended. Waiting for 1 other threads...
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
_MainThread(MainThread, started 140567823243072)
MainThread::INFO::2012-06-25 10:45:09,111::vdsm::79::vds::(run)
Thread(libvirtEventLoop, started daemon 140567752681216)

in /var/log/messages there are a lot of 'vdsmd died too quickly' messages:

Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:08 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly, respawning slave
Jun 25 10:45:09 ovirt-node2 respawn: slave '/usr/share/vdsm/vdsm'
died too quickly for more than 30 seconds, master sleeping for 900
seconds

I don't know why Fedora 17 calls what was eth0 in Fedora 16 'p36p1',
but I tried to configure an ovirtmgmt bridge, and the only
difference is that the KeyError becomes 'ovirtmgmt'.

The nic renaming may have happened due to biosdevname. Do you
have it
installed? Does any of the
/etc/sysconfig/network-scripts/ifcfg-* refer
to an old nic name?

Which version of vdsm are you running? It seems that it is
pre-v4.9.4-61-g24f8627 which is too old for f17 to run - the
output of
ifconfig has changed. Please retry with latest beta version
https://koji.fedoraproject.org/koji/buildinfo?buildID=327015

If the problem persists, could you run vdsm manually, with
# su - vdsm -s /bin/bash
# cd /usr/share/vdsm
# ./vdsm
maybe it would give a hint about the crash.

regards,
Dan.

Well, thank you. I have updated Vdsm to version 4.10. Now the
problem is with SSL and XMLRPC.

This is the error on the engine side:

/var/log/ovirt-engine/engine.log

ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand]
(QuartzScheduler_Worker-52) XML RPC error in command
GetCapabilitiesVDS ( Vds: ovirt-node2.smb.eurotux.local ), the error
was: java.util.concurrent.ExecutionException:
java.lang.reflect.InvocationTargetException,
NoHttpResponseException: The server 10.10.30.177 failed to respond.

On the node side, there seems to be an authentication
problem:

/var/log/vdsm/vdsm.log

SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request
Thread-810::ERROR::2012-06-25
12:02:46,351::SecureXMLRPCServer::73::root::(handle_error) client
('10.10.30.101', 58605)
Traceback (most recent call last):
File /usr/lib64/python2.7/SocketServer.py, line 582, in
process_request_thread
self.finish_request(request, client_address)
File
/usr/lib/python2.7/site-packages/vdsm/SecureXMLRPCServer.py, line
66, in finish_request
request.do_handshake()
File /usr/lib64/python2.7/ssl.py, line 305, in do_handshake
self._sslobj.do_handshake()
SSLError: [Errno 1] _ssl.c:504: error:1407609C:SSL
routines:SSL23_GET_CLIENT_HELLO:http request

In /var/log/messages there is an:

vdsm [5834]: vdsm root ERROR client ()

with the ip address of the engine.

Hmm... Do you have ssl=true in your /etc/vdsm/vdsm.conf ?
Does vdsm respond locally to

vdsClient -s 0 getVdsCaps

(Maybe your local certificates and key 

[Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Nathan Stratton


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:

2012-06-25 16:44:42,412 WARN 
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is 
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command: 
CreateGlusterVolumeCommand internal: false. Entities affected :  ID: 
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] START, 
CreateGlusterVolumeVDSCommand(vdsId = 
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message 
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, 
error = Unexpected exception
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4) 
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand, 
log id: 15757d4c
2012-06-25 16:44:42,596 ERROR 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command 
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll 
exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, 
error = Unexpected exception






Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Robert Middleswarth

On 06/25/2012 07:59 PM, Nathan Stratton wrote:


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:

2012-06-25 16:44:42,412 WARN 
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume 
is missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command: 
CreateGlusterVolumeCommand internal: false. Entities affected : ID: 
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--0.0.0.0-8009-4) 
[6c02c1f5] START, CreateGlusterVolumeVDSCommand(vdsId = 
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error 
message VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4) 
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] (ajp--0.0.0.0-8009-4) 
[6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand, log id: 15757d4c
2012-06-25 16:44:42,596 ERROR 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command 
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc 
Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception


I am trying to write up a howto on the process, since I just got 
everything working recently and am testing right now.  Make sure you 
have gluster 3.3 installed, not the gluster 3.2.5 that comes with F17.  I 
installed from http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ .
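
A quick way to confirm what is actually installed on each host (just a
sketch; package names can differ between repos):

  # glusterfs --version
  # rpm -q glusterfs glusterfs-server glusterfs-fuse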


Thanks
Robert





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Nathan Stratton

On Mon, 25 Jun 2012, Robert Middleswarth wrote:


On 06/25/2012 07:59 PM, Nathan Stratton wrote:


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:

2012-06-25 16:44:42,412 WARN 
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is 
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command: 
CreateGlusterVolumeCommand internal: false. Entities affected : ID: 
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] START, CreateGlusterVolumeVDSCommand(vdsId 
= 8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message 
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, 
error = Unexpected exception
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4) 
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception: 
VDSErrorException: VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand, log 
id: 15757d4c
2012-06-25 16:44:42,596 ERROR 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command 
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll 
exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS, 
error = Unexpected exception


I am trying to write up a howto on the process, since I just got everything 
working recently and am testing right now.  Make sure you have gluster 3.3 
installed, not the gluster 3.2.5 that comes with F17.  I installed from 
http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ .


Yes, running 3.3.





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Robert Middleswarth

On 06/25/2012 09:01 PM, Nathan Stratton wrote:

On Mon, 25 Jun 2012, Robert Middleswarth wrote:


On 06/25/2012 07:59 PM, Nathan Stratton wrote:


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)
5) Create Volume errors:

2012-06-25 16:44:42,412 WARN 
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume 
is missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command: 
CreateGlusterVolumeCommand internal: false. Entities affected : ID: 
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] START, 
CreateGlusterVolumeVDSCommand(vdsId = 
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS 
method
Make sure vdsm-gluster is installed on all nodes?  It wasn't installed 
on nodes that were added before I ticked the check box to support gluster 
on the cluster.
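
A minimal check/install sketch for that, assuming yum-based hosts (the
restart is an assumption, so vdsm picks up the gluster verbs):

  # rpm -q vdsm-gluster || yum install -y vdsm-gluster
  # systemctl restart vdsmd.service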


Thanks
Robert
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error 
message VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,594 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command CreateGlusterVolumeVDS 
execution failed. Exception: VDSErrorException: VDSGenericException: 
VDSErrorException: Failed to CreateGlusterVolumeVDS, error = 
Unexpected exception
2012-06-25 16:44:42,595 INFO 
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, 
CreateGlusterVolumeVDSCommand, log id: 15757d4c
2012-06-25 16:44:42,596 ERROR 
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand] 
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command 
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw 
Vdc Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: 
VDSGenericException: VDSErrorException: Failed to 
CreateGlusterVolumeVDS, error = Unexpected exception


I am trying to write up a howto on the process, since I just got 
everything working recently and am testing right now.  Make sure you 
have gluster 3.3 installed, not the gluster 3.2.5 that comes with F17.  I 
installed from http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ .


Yes, running 3.3.





Nathan Stratton
nathan at robotics.net
http://www.robotics.net



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Nathan Stratton

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

Make sure vdsm-gluster is installed on all nodes?  It wasn't installed on 
nodes that were added before I ticked the check box to support gluster on the 
cluster.


Yes, it was.  Out of 8 boxes, I was looking at the vdsm logs on box 1, but 
ovirt-engine tried to start gluster on a different box, so I did not see 
the relevant vdsm log. Once I did, it was clear that gluster was not happy 
about the directory having been used for gluster in the past. To fix that 
I just ran the following on all 8 of my hosts:


cd /export/ ;for i in `attr -lq .`; do   setfattr -x trusted.$i .; done
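
(Re-running the same listing afterwards is an easy way to confirm the
cleanup worked; it should print nothing:)

  cd /export/ && attr -lq .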

I now have a happy gluster volume called share. Now I am lost as to how to 
use that volume. Do I have to change the Default NFS storage type? How do 
I get this bad boy mounted?





Nathan Stratton
nathan at robotics.net
http://www.robotics.net

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Robert Middleswarth

On 06/25/2012 09:26 PM, Nathan Stratton wrote:

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

Make sure vdsm-gluster is installed on all nodes?  It wasn't 
installed on nodes that were added before I ticked the check box to 
support gluster on the cluster.


Yes, it was.  Out of 8 boxes, I was looking at the vdsm logs on box 1, but 
ovirt-engine tried to start gluster on a different box, so I did not 
see the relevant vdsm log. Once I did, it was clear that gluster was not 
happy about the directory having been used for gluster in the past. To fix 
that I just ran the following on all 8 of my hosts:


cd /export/ ;for i in `attr -lq .`; do   setfattr -x trusted.$i .; done

I now have a happy gluster volume called share. Now I am lost as to 
how to use that volume. Do I have to change the Default NFS storage 
type? How do I get this bad boy mounted?
I spent 2 weeks trying to get glusterfs working with no luck. However, I 
was able to get gluster NFS working.  Here are the steps I needed to do.


1) I added the following option to the volume.  Might only need the 1st 
one but I have all 3 of them set on my working system.

a) nfs.nlm off
b) nfs.register-with-portmap on
c) nfs.addr-namelookup off
2) Mount the volume outside of ovirt. mount -t nfs 
ipofonenode:/volumename /temp/share/name

3) chown -R 36.36 /temp/share/name.
4) umount /temp/share/name
5) Added a new NFS domain with the NFS Export Path: localhost:/volumename

I was then able to activate the NFS-based Data Center and add working VMs.
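
Pulling those steps together as a shell sketch (volume name, node IP and
mount point below are placeholders):

  # gluster volume set volumename nfs.nlm off
  # gluster volume set volumename nfs.register-with-portmap on
  # gluster volume set volumename nfs.addr-namelookup off
  # mount -t nfs ipofonenode:/volumename /temp/share/name
  # chown -R 36:36 /temp/share/name
  # umount /temp/share/name
  # ...then add a new NFS storage domain in webadmin with export path
  #    localhost:/volumename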

Thanks
Robert




Nathan Stratton
nathan at robotics.net
http://www.robotics.net




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Nathan Stratton

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

I now have a happy gluster volume called share. Now I am lost as to how to 
use that volume. Do I have to change the Default NFS storage type? How do I 
get this bad boy mounted?
I spent 2 weeks trying to get glusterfs working with no luck. However, I was 
able to get gluster NFS working.  Here are the steps I needed to do.


Are there any docs on direct gluster?

1) I added the following option to the volume.  Might only need the 1st one 
but I have all 3 of them set on my working system.

a) nfs.nlm off
b) nfs.register-with-portmap on
c) nfs.addr-namelookup off
2) Mount the volume outside of ovirt. mount -t nfs ipofonenode:/volumename 
/temp/share/name

3) chown -R 36.36 /temp/share/name.
4) umount /temp/share/name
5) Added a new NFS domain with the NFS Export Path: localhost:/volumename

I was then able to activate the NFS-based Data Center and add working VMs.


The problem I am running into is that I need to add -o mountproto=tcp,vers=3 
because it looks like it is defaulting to UDP and gluster NFS does not 
support that.
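
i.e. something along these lines (host, volume and mount point are
placeholders):

  # mount -t nfs -o mountproto=tcp,vers=3 ipofonenode:/volumename /temp/share/name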





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Robert Middleswarth

On 06/25/2012 10:51 PM, Nathan Stratton wrote:

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

I now have a happy gluster volume called share. Now I am lost as to 
how to use that volume. Do I have to change the Default NFS storage 
type? How do I get this bad boy mounted?
I spent 2 weeks trying to get glusterfs working with no luck. However, 
I was able to get gluster NFS working.  Here are the steps I needed 
to do.


Are there any docs on direct gluster?
No.  I got close to getting it to work, but I was having issues with 
Fedora 17 crashing every 10 to 12 hours, and that might have been why I 
couldn't get that last 1%.  I did pretty much the same steps and got 
really close to getting the glusterfs setup to work.


1) Mount the volume outside of ovirt. mount -t nfs 
ipofonenode:/volumename /temp/share/name or mount -t glusterfs 
ipofonenode:/volumename /temp/share/name

2) chown -R 36.36 /temp/share/name
3) umount /temp/share/name
4) Added a new POSIX compliant FS domain with the path 
localhost:/volumename and the VFS type glusterfs.


It mounted fine but I couldn't get it to activate.

1) I added the following option to the volume.  Might only need the 
1st one but I have all 3 of them set on my working system.
a) nfs.nlm off

b) nfs.register-with-portmap on
c) nfs.addr-namelookup off
2) Mount the volume outside of ovirt. mount -t nfs 
ipofonenode:/volumename /temp/share/name

3) chown -R 36.36 /temp/share/name.
4) umount /temp/share/name
5) Added a new NFS domain with the NFS Export Path: 
localhost:/volumename


I was then able to activate the NFS-based Data Center and add working VMs.


The problem I am running into is that I need to add -o 
mountproto=tcp,vers=3 because it looks like it is defaulting to UDP 
and gluster NFS does not support that.

Try editing file /etc/nfsmount.conf on each node and add the following.

[ NFSMount_Global_Options ]
Defaultvers=3
Nfsvers=3
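
(A quick way to verify the override once something is mounted:)

  # nfsstat -m     # should list the mount with vers=3,proto=tcp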

Thanks
Robert






Nathan Stratton
nathan at robotics.net
http://www.robotics.net



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Nathan Stratton

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

The problem I am running into is that I need to add -o mountproto=tcp,vers=3 
because it looks like it is defaulting to UDP and gluster NFS does not 
support that.

Try editing file /etc/nfsmount.conf on each node and add the following.

[ NFSMount_Global_Options ]
Defaultvers=3
Nfsvers=3


Thanks Robert, you rock. I added tcp and ver3 and it works.





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Itamar Heim

On 06/25/2012 10:22 PM, Robert Middleswarth wrote:

On 06/25/2012 09:26 PM, Nathan Stratton wrote:

On Mon, 25 Jun 2012, Robert Middleswarth wrote:


Make sure vdsm-gluster is installed on all nodes? It wasn't installed
on nodes that were added before I ticked the check box to support
gluster on the cluster.


Yes, it was.  Out of 8 boxes, I was looking at the vdsm logs on box 1, but
ovirt-engine tried to start gluster on a different box, so I did not
see the relevant vdsm log. Once I did, it was clear that gluster was not
happy about the directory having been used for gluster in the past. To fix
that I just ran the following on all 8 of my hosts:

cd /export/ ;for i in `attr -lq .`; do setfattr -x trusted.$i .; done

I now have a happy gluster volume called share. Now I am lost as to
how to use that volume. Do I have to change the Default NFS storage
type? How do I get this bad boy mounted?

I spent 2 weeks trying to get glusterfs working with no luck. However, I
was able to get gluster NFS working. Here are the steps I needed to do.

1) I added the following option to the volume. Might only need the 1st
one but I have all 3 of them set on my working system.
a) nfs.nlm off
b) nfs.register-with-portmap on
c) nfs.addr-namelookup off
2) Mount the volume outside of ovirt. mount -t nfs
ipofonenode:/volumename /temp/share/name
3) chown -R 36.36 /temp/share/name.
4) umount /temp/share/name
5) Added a new NFS domain with the NFS Export Path: localhost:/volumename


what about the newer kernel with direct_io support?




I was then able to activate the NFS-based Data Center and add working VMs.

Thanks
Robert




Nathan Stratton
nathan at robotics.net
http://www.robotics.net




___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Itamar Heim

On 06/25/2012 07:59 PM, Nathan Stratton wrote:


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)


I thought ovirt engine takes care of that.
did you add these nodes to a gluster cluster from the UI?


5) Create Volume errors:

2012-06-25 16:44:42,412 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected : ID:
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] START,
CreateGlusterVolumeVDSCommand(vdsId =
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4)
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand,
log id: 15757d4c
2012-06-25 16:44:42,596 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc
Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Robert Middleswarth

On 06/25/2012 11:38 PM, Nathan Stratton wrote:

On Mon, 25 Jun 2012, Robert Middleswarth wrote:

The problem I am running into is that I need to add -o 
mountproto=tcp,vers=3 because it looks like it is defaulting to UDP 
and gluster NFS does not support that.

Try editing file /etc/nfsmount.conf on each node and add the following.

[ NFSMount_Global_Options ]
Defaultvers=3
Nfsvers=3


Thanks Robert, you rock. I added tcp and ver3 and it works.


Care to provide your settings?



Nathan Stratton
nathan at robotics.net
http://www.robotics.net



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] oVirt 3.1 -- GlusterFS 3.3

2012-06-25 Thread Itamar Heim

On 06/26/2012 12:22 AM, Robert Middleswarth wrote:

On 06/25/2012 11:50 PM, Itamar Heim wrote:

On 06/25/2012 07:59 PM, Nathan Stratton wrote:


Steps:

1) Installed ovirt-engine, configured Cluster for GlusterFS
2) Installed 8 ovirt 3.1 nodes
3) Joined all 8 nodes with ovirt-engine (all up and happy)
4) Manually added all 8 peers for GlusterFS on hosts (all peers happy)


I thought ovirt engine takes care of that.
did you add these nodes to a gluster cluster from the UI?


No, it doesn't. I have done several installs. I already filed a bug about
the fact that the iptables ports don't get opened, and if you do
anything inside the engine that updates iptables, it removes the rules
I added manually. I have done both an F17 install (using the beta and
Jenkins builds) and the CentOS builds, including the latest build 8, and
none of these set up iptables or link the nodes.


iptables - no.
peer addition - from what I understand, works if you use installVds=false.

It still needs to be fixed to work with installVds=true as well.

option A - a hacky way for now might be:
1. use installVds=true
2. add all hosts so they will get configured with vdsm and certificates 
and regular iptables

3. change installVds to false
4. remove/add them with the gluster flag

not sure it's worth it just for peer detection for now.

option B
1. set InstallVds=false and disable ssl
2. yum install the hosts yourself

option C
1. use like today and do peer detection manually until resolved.
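
For option B's first step, something like the following is probably what is
meant (a sketch; I'm assuming engine-config is the tool used to flip the
flag, and the key name should be double-checked):

  # engine-config -s InstallVds=false
  # (then restart the engine service so the new value is picked up)
  # on each host, for the no-SSL part: set ssl = false in /etc/vdsm/vdsm.conf
  #   and restart vdsmd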

btw, I saw you managed to get it to work using NFS.
Did you try the gluster native client/mount as well?
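
(For comparison, the native mount mentioned earlier in the thread would be
along these lines, with placeholder names:)

  # mount -t glusterfs ipofonenode:/volumename /temp/share/name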





Thanks
Robert

5) Create Volume errors:

2012-06-25 16:44:42,412 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-4) [6c02c1f5] The message key CreateGlusterVolume is
missing from bundles/ExecutionMessages
2012-06-25 16:44:42,483 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected : ID:
99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2012-06-25 16:44:42,486 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] START,
CreateGlusterVolumeVDSCommand(vdsId =
8324ff12-bf1c-11e1-b235-43d3f71a81d8), log id: 15757d4c
2012-06-25 16:44:42,593 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Failed in CreateGlusterVolumeVDS method
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,594 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-4)
[6c02c1f5] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception
2012-06-25 16:44:42,595 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] FINISH, CreateGlusterVolumeVDSCommand,
log id: 15757d4c
2012-06-25 16:44:42,596 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--0.0.0.0-8009-4) [6c02c1f5] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc
Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = Unexpected exception





Nathan Stratton
nathan at robotics.net
http://www.robotics.net
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users