Re: [Users] Where to download ovirt-engine-sdk-java 1.0.0.2?
Hi,

On 01/23/2013 02:37 AM, Sherry Yu wrote:
> Hi Michael,
> Can you point me to where to download ovirt-engine-sdk-java 1.0.0.2, and where to get more info on Java development using this API?

The SDK can be deployed using [1]; more info can be found here [2].

[1] http://www.ovirt.org/Java-sdk#Maven_deployment
[2] http://www.ovirt.org/Java-sdk

> My task is to create an integration between RHEV 3.x and SAP LVM, a product that has a Java API. I met with Oved at the oVirt workshop this week and he mentioned this Java SDK. It sounds like a better fit than the REST API that I have been investigating. Many thanks, and I am looking forward to hearing from you.
> Sherry

----- Forwarded Message -----
From: Oved Ourfalli ov...@redhat.com
To: Sherry Yu s...@redhat.com
Sent: Tuesday, January 22, 2013 4:30:11 PM
Subject: Fwd: [Engine-devel] ovirt-engine-sdk-java 1.0.0.2 released

----- Forwarded Message -----
From: Michael Pasternak mpast...@redhat.com
To: users@ovirt.org
Cc: engine-devel engine-de...@ovirt.org
Sent: Wednesday, January 16, 2013 6:38:02 AM
Subject: [Engine-devel] ovirt-engine-sdk-java 1.0.0.2 released

Basically this release addresses an issue when the constructor [1] is used with NULLs as optional parameters.

[1] public Api(String url, String username, String password,
        String key_file, String cert_file, String ca_file,
        Integer port, Integer timeout, Boolean persistentAuth,
        Boolean insecure, Boolean filter, Boolean debug)

--
Michael Pasternak
RedHat, ENG-Virtualization R&D

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] cannot add gluster domain
Hi all,
I am not too familiar with Fedora and its services; can anyone help him?
Alex

On Jan 23, 2013 5:02 AM, T-Sinjon tscbj1...@gmail.com wrote:
> I have forced v3 in my /etc/nfsmount and there's no firewall between the NFS server and the host. The only problem is that no rpc.statd is running. Could you tell me how I can start it, since there's no rpcbind installed on oVirt Node 2.5.5-0.1?
>
> [root@ovirtnode1 ~]# systemctl status nfs-lock.service
> nfs-lock.service - NFS file locking service.
>     Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
>     Active: failed (Result: exit-code) since Thu, 17 Jan 2013 09:41:45 +; 5 days ago
>     CGroup: name=systemd:/system/nfs-lock.service
>
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Version 1.2.6 starting
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Initializing NSM state
>
> [root@ovirtnode1 ~]# systemctl start nfs-lock.service
> Failed to issue method call: Unit rpcbind.service failed to load: No such file or directory. See system logs and 'systemctl status rpcbind.service' for details.

On 22 Jan, 2013, at 6:14 PM, Alex Leonhardt alex.t...@gmail.com wrote:
> Hi, this seems to look like the error you're getting:
>
> MountError: (32, ';mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\nmount.nfs: an incorrect mount option was specified\n')
>
> Are you running nfs3 on that host? If yes, have you forced v3? Is rpc.statd running? Is the NFS server firewalling off the rpc.* ports?
alex

On 22 January 2013 09:58, T-Sinjon tscbj1...@gmail.com wrote:
> Hi, everyone:
> Recently I installed ovirt 3.1 from http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/, and the node uses http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso.
> When I add a gluster domain via NFS, a mount error occurs. I tried the mount manually on the node, and it fails without the -o nolock option:
>
> # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
> mount.nfs: rpc.statd is not running but is required for remote locking.
> mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
> mount.nfs: an incorrect mount option was specified
>
> Below are the vdsm.log from the node and the engine.log; any help is appreciated:
>
> vdsm.log:
> Thread-12717::DEBUG::2013-01-22 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
> Thread-12717::DEBUG::2013-01-22 09:19:02,261::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init -> state preparing
> Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect: validateStorageServerConnection(domType=1, spUUID='----', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 'id': '----', 'port': ''}], options=None)
> Thread-12717::INFO::2013-01-22 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect: validateStorageServerConnection, Return response: {'statuslist': [{'status': 0, 'id': '----'}]}
> Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::1172::TaskManager.Task::(prepare) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist': [{'status': 0, 'id': '----'}]}
> Thread-12717::DEBUG::2013-01-22 09:19:02,262::task::588::TaskManager.Task::(_updateState) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing -> state finished
> Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-12717::DEBUG::2013-01-22 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-12717::DEBUG::2013-01-22 09:19:02,263::task::978::TaskManager.Task::(_decref) Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
> Thread-12718::DEBUG::2013-01-22 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
> Thread-12718::DEBUG::2013-01-22 09:19:02,307::task::588::TaskManager.Task::(_updateState) Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init -> state preparing
> Thread-12718::INFO::2013-01-22 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='----', conList=[{'connection': 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 'id': '6463ca53-6c57-45f6-bb5c-45505891cae9', 'port': ''}], options=None)
> Thread-12718::DEBUG::2013-01-22
Re: [Users] host deploy and after reboot not responsive
----- Original Message -----
From: Dan Kenigsberg dan...@redhat.com
To: Alon Bar-Lev alo...@redhat.com, Gianluca Cecchi gianluca.cec...@gmail.com
Cc: Adam Litke a...@us.ibm.com, users users@ovirt.org
Sent: Wednesday, January 23, 2013 10:40:01 AM
Subject: Re: [Users] host deploy and after reboot not responsive

On Tue, Jan 22, 2013 at 07:29:43PM -0500, Alon Bar-Lev wrote:
> This is nice to know, otopi is ready:
>
> commit d756b789d60934f935718ff25f8208f563b0f123
> Author: Alon Bar-Lev alo...@redhat.com
> Date: Wed Jan 23 02:27:51 2013 +0200
>
>     system: clock: support chrony as ntpd
>
>     Change-Id: I2917bdd8248eb0b123f6b2cca875820f2cac664c
>     Signed-off-by: Alon Bar-Lev alo...@redhat.com
>
> http://gerrit.ovirt.org/#/c/11288/

Alon, if both ntpq and chronyc are installed, do you intentionally run them both?

Yes. I only need to know if the clock is in sync, so I query them both.
Alon
Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail
On Tue, Jan 22, 2013 at 04:02:24PM -0600, Dead Horse wrote:
> Any ideas on this one? (from the VDSM log):
>
> Thread-25::DEBUG::2013-01-22 15:35:29,065::BindingXMLRPC::914::vds::(wrapper) client [3.57.111.30]::call getCapabilities with () {}
> Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed) cannot read ib0 speed
> Traceback (most recent call last):
>   File "/usr/lib64/python2.6/site-packages/vdsm/netinfo.py", line 155, in speed
>     s = int(file('/sys/class/net/%s/speed' % dev).read())
> IOError: [Errno 22] Invalid argument
>
> This causes VDSM to fail to attach storage.

I doubt that this is the cause of the failure, as vdsm has always reported 0 for ib devices, and still does. Does a former version work with your engine? Could you share more of your vdsm.log? I suppose the culprit lies in one of the storage-related commands, not in statistics retrieval.

> The engine side sees:
>
> ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (QuartzScheduler_Worker-96) [553ef26e] The connection with details 192.168.0.1:/ovirt/ds failed because of error code 100 and error message is: general exception
> 2013-01-22 15:35:30,160 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (QuartzScheduler_Worker-96) [1ab78378] Running command: SetNonOperationalVdsCommand internal: true. Entities affected: ID: 8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS
> 2013-01-22 15:35:30,200 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] START, SetVdsStatusVDSCommand(HostName = kezan, HostId = 8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational, nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd
> 2013-01-22 15:35:30,211 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand, log id: 4af5c4cd
> 2013-01-22 15:35:30,242 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (QuartzScheduler_Worker-96) [1ab78378] Try to add duplicate audit log values with the same name. Type: VDS_SET_NONOPERATIONAL_DOMAIN. Value: storagepoolname
>
> Engine = latest master
> VDSM = latest master

Since "latest master" is an unstable reference by definition, I'm sure that History would thank you if you post the exact version (git hash?) of the code.

> node = el6
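Dan's point that vdsm reports 0 for ib devices suggests the speed lookup should swallow the EINVAL from sysfs rather than traceback. A minimal sketch of such a defensive reader; the function name and the sysfs_root parameter are my own illustration, not vdsm's actual code:

```python
import os

def nic_speed(dev, sysfs_root="/sys/class/net"):
    """Best-effort link speed in Mbps; 0 when the kernel cannot report it.

    Reading /sys/class/net/<dev>/speed raises EINVAL (IOError) for
    InfiniBand and other devices that do not expose a speed, so treat
    any read failure as "unknown" instead of letting it propagate.
    """
    try:
        with open(os.path.join(sysfs_root, dev, "speed")) as f:
            return int(f.read())
    except (IOError, OSError, ValueError):
        return 0

print(nic_speed("ib0", sysfs_root="/nonexistent"))  # prints 0
```

With a wrapper like this, a device that cannot report its speed just shows up as 0 in getCapabilities instead of producing an ERROR traceback in the log.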
Re: [Users] host deploy and after reboot not responsive
On Tue, Jan 22, 2013 at 07:29:43PM -0500, Alon Bar-Lev wrote:
> This is nice to know, otopi is ready:
>
> commit d756b789d60934f935718ff25f8208f563b0f123
> Author: Alon Bar-Lev alo...@redhat.com
> Date: Wed Jan 23 02:27:51 2013 +0200
>
>     system: clock: support chrony as ntpd
>
>     Change-Id: I2917bdd8248eb0b123f6b2cca875820f2cac664c
>     Signed-off-by: Alon Bar-Lev alo...@redhat.com
>
> http://gerrit.ovirt.org/#/c/11288/

Alon, if both ntpq and chronyc are installed, do you intentionally run them both?

- Original Message -
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: Dan Kenigsberg dan...@redhat.com
Cc: Adam Litke a...@us.ibm.com, Alon Bar-Lev alo...@redhat.com, users users@ovirt.org
Sent: Wednesday, January 23, 2013 1:50:59 AM
Subject: Re: [Users] host deploy and after reboot not responsive

On Tue, Jan 22, 2013 at 9:02 PM, Dan Kenigsberg wrote:
> That's great, as it reduces the question to: why doesn't ntpd start on your machine after boot. Do you have any guess? And ntpd logs to share? Any peculiar ntp.conf setting?

Hum... I think in Fedora 18 itself, and/or due to oVirt setup, there is some conflict regarding network time synchronization services. In fact, together with standard ntp there is chrony, which I didn't know about until today... My suspicion is this: at install time, Fedora 18 by default now installs chrony and doesn't install ntpd.

Gianluca, how about softening our ntpd requirement with something like http://gerrit.ovirt.org/11291 ? Could you verify that it's working on your system?
[Users] BFA FC driver no stable on Fedora
Hi,

I've spent hours trying to get my Brocade FC card working on Fedora 17 or the oVirt Node build. In fact the card is only randomly seen by the system, which is really painful. So I downloaded, compiled and installed the latest driver from Brocade, and now when I load the module the card is seen.

So I've installed:
bfa_util_linux_noioctl-3.2.0.0-0.noarch
bfa_driver_linux-3.2.0.0-0.noarch

And the module info is:

# modinfo bfa
filename:       /lib/modules/3.3.4-5.fc17.x86_64/kernel/drivers/scsi/bfa.ko
version:        3.2.0.0
author:         Brocade Communications Systems, Inc.
description:    Brocade Fibre Channel HBA Driver fcpim ipfc
license:        GPL
srcversion:     5C0FBDF3571ABCA9632B9CA
alias:          pci:v1657d0022sv*sd*bc0Csc04i00*
alias:          pci:v1657d0021sv*sd*bc0Csc04i00*
alias:          pci:v1657d0014sv*sd*bc0Csc04i00*
alias:          pci:v1657d0017sv*sd*bc*sc*i*
alias:          pci:v1657d0013sv*sd*bc*sc*i*
depends:        scsi_transport_fc
vermagic:       3.3.4-5.fc17.x86_64 SMP mod_unload
parm:           os_name:OS name of the hba host machine (charp)
parm:           os_patch:OS patch level of the hba host machine (charp)
parm:           host_name:Hostname of the hba host machine (charp)
parm:           num_rports:Max number of rports supported per port (physical/logical), default=1024 (int)
parm:           num_ioims:Max number of ioim requests, default=2000 (int)
parm:           num_tios:Max number of fwtio requests, default=0 (int)
parm:           num_tms:Max number of task im requests, default=128 (int)
parm:           num_fcxps:Max number of fcxp requests, default=64 (int)
parm:           num_ufbufs:Max number of unsolicited frame buffers, default=64 (int)
parm:           reqq_size:Max number of request queue elements, default=256 (int)
parm:           rspq_size:Max number of response queue elements, default=64 (int)
parm:           num_sgpgs:Number of scatter/gather pages, default=2048 (int)
parm:           rport_del_timeout:Rport delete timeout, default=90 secs, Range[0] (int)
parm:           bfa_lun_queue_depth:Lun queue depth, default=32, Range[0] (int)
parm:           bfa_io_max_sge:Max io scatter/gather elements, default=255 (int)
parm:           log_level:Driver log level, default=3, Range[Critical:1|Error:2|Warning:3|Info:4] (int)
parm:           ioc_auto_recover:IOC auto recovery, default=1, Range[off:0|on:1] (int)
parm:           linkup_delay:Link up delay, default=30 secs for boot port. Otherwise 10 secs in RHEL4 0 for [RHEL5, SLES10, ESX40] Range[0] (int)
parm:           msix_disable_cb:Disable Message Signaled Interrupts for Brocade-415/425/815/825 cards, default=0, Range[false:0|true:1] (int)
parm:           msix_disable_ct:Disable Message Signaled Interrupts if possible for Brocade-1010/1020/804/1007/1741 cards, default=0, Range[false:0|true:1] (int)
parm:           fdmi_enable:Enables fdmi registration, default=1, Range[false:0|true:1] (int)
parm:           pcie_max_read_reqsz:PCIe max read request size, default=0 (use system setting), Range[128|256|512|1024|2048|4096] (int)
parm:           max_xfer_size:default=32MB, Range[64k|128k|256k|512k|1024k|2048k] (int)
parm:           max_rport_logins:Max number of logins to initiator and target rports on a port (physical/logical), default=1024 (int)

I guess it could be possible to update the driver inside the oVirt Node build?

Kevin
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
Re: [Users] host deploy and after reboot not responsive
----- Original Message -----
From: Dan Kenigsberg dan...@redhat.com
To: Gianluca Cecchi gianluca.cec...@gmail.com
Cc: Alon Bar-Lev alo...@redhat.com, Adam Litke a...@us.ibm.com, users users@ovirt.org
Sent: Wednesday, January 23, 2013 12:09:59 PM
Subject: Re: [Users] host deploy and after reboot not responsive

On Wed, Jan 23, 2013 at 10:57:03AM +0100, Gianluca Cecchi wrote:
> On Wed, Jan 23, 2013 at 9:40 AM, Dan Kenigsberg wrote:
> > Gianluca, how about softening our ntpd requirement with something like http://gerrit.ovirt.org/11291 ? Could you verify that it's working on your system?
>
> Yes, I can test the service modification for vdsmd. I presume you want me to test it with default Fedora 18, so chronyd enabled and ntpd disabled, correct?

Yes, though my patch may generate excessive noise, with its attempt to start two conflicting services. I suppose that we can/should want only chrony. But please provide your input.

We should use either, and default. On other non-systemd distributions there is the concept of 'provides'. This means that a service like vdsmd can depend on timesync, and services like chronyd *and* ntpd provide timesync. This is a kind of logical name for a service. Also, other non-systemd distributions provide the ability to script the dependencies. I could not find either of the above methods in the systemd documentation.

> BTW: in the case of independent services both enabled at a target level, how does systemd process them, in parallel? In old-style init there were S25, S35 prefixes that ruled the start order, together with some rules inside the scripts themselves such as:
> # Provides:
> # Required-Start:

I'm no systemd expert, but I believe that it builds a dependency tree and attempts to start independent services in parallel, to save time.

Dan.
Re: [Users] host deploy and after reboot not responsive
----- Original Message -----
From: Dan Kenigsberg dan...@redhat.com
To: Alon Bar-Lev alo...@redhat.com
Cc: Adam Litke a...@us.ibm.com, users users@ovirt.org, Gianluca Cecchi gianluca.cec...@gmail.com
Sent: Wednesday, January 23, 2013 12:37:07 PM
Subject: Re: [Users] host deploy and after reboot not responsive

On Wed, Jan 23, 2013 at 05:16:09AM -0500, Alon Bar-Lev wrote:
> - Original Message -
> From: Dan Kenigsberg dan...@redhat.com
> To: Gianluca Cecchi gianluca.cec...@gmail.com
> Cc: Alon Bar-Lev alo...@redhat.com, Adam Litke a...@us.ibm.com, users users@ovirt.org
> Sent: Wednesday, January 23, 2013 12:09:59 PM
> Subject: Re: [Users] host deploy and after reboot not responsive
>
> On Wed, Jan 23, 2013 at 10:57:03AM +0100, Gianluca Cecchi wrote:
> > On Wed, Jan 23, 2013 at 9:40 AM, Dan Kenigsberg wrote:
> > > Gianluca, how about softening our ntpd requirement with something like http://gerrit.ovirt.org/11291 ? Could you verify that it's working on your system?
> >
> > Yes, I can test the service modification for vdsmd. I presume you want me to test it with default Fedora 18, so chronyd enabled and ntpd disabled, correct?
>
> Yes, though my patch may generate excessive noise, with its attempt to start two conflicting services. I suppose that we can/should want only chrony. But please provide your input.
>
> We should use either, and default.

I'm not sure that I parse your English properly. Could you be more explicit, preferably in a code review of http://gerrit.ovirt.org/#/c/11291/ ?

I guess this will work, however this is a primitive approach, in which a service has to explicitly state the service provider in a multi-option configuration. Let's assume there is yet another alternative to ntpd; then all services should list three alternatives... And what if I add my own alternative to my system? Then I need to manually fix all services, as their authors were not aware of the new service I introduce. It is systemd I don't like...

On other non-systemd distributions there is the concept of 'provides'. This means that a service like vdsmd can depend on timesync, and services like chronyd *and* ntpd provide timesync. This is a kind of logical name for a service. Also, other non-systemd distributions provide the ability to script the dependencies. I could not find either of the above methods in the systemd documentation.
Re: [Users] host deploy and after reboot not responsive
On Wed, Jan 23, 2013 at 10:57:03AM +0100, Gianluca Cecchi wrote:
> On Wed, Jan 23, 2013 at 9:40 AM, Dan Kenigsberg wrote:
> > Gianluca, how about softening our ntpd requirement with something like http://gerrit.ovirt.org/11291 ? Could you verify that it's working on your system?
>
> Yes, I can test the service modification for vdsmd. I presume you want me to test it with default Fedora 18, so chronyd enabled and ntpd disabled, correct?

Yes, though my patch may generate excessive noise, with its attempt to start two conflicting services. I suppose that we can/should want only chrony. But please provide your input.

> BTW: in the case of independent services both enabled at a target level, how does systemd process them, in parallel? In old-style init there were S25, S35 prefixes that ruled the start order, together with some rules inside the scripts themselves such as:
> # Provides:
> # Required-Start:

I'm no systemd expert, but I believe that it builds a dependency tree and attempts to start independent services in parallel, to save time.

Dan.
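For readers following along: the kind of softening discussed here could look something like the following unit-file fragment. This is only an illustrative sketch of the systemd mechanics, not the content of gerrit change 11291; the exact unit names and placement are assumptions.

```ini
# Hypothetical fragment of vdsmd.service, for illustration only.
[Unit]
Description=Virtual Desktop Server Manager
# Wants= pulls in a time service if it exists, but does not fail the
# unit when one of them is absent -- so listing both lets whichever of
# ntpd/chronyd is installed and enabled satisfy the requirement.
Wants=ntpd.service chronyd.service
# After= only orders startup; it creates no hard dependency.
After=ntpd.service chronyd.service network.target
```

This sidesteps the "provides: timesync" gap Alon describes, at the cost of the unit having to enumerate every known time service by name.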
Re: [Users] host deploy and after reboot not responsive
On Wed, Jan 23, 2013 at 11:42 AM, Alon Bar-Lev wrote:
> And what if I add my own alternative to my system? Then I need to manually fix all services, as their authors were not aware of the new service I introduce. It is systemd I don't like...

If it can be of any help, it seems that others have run into similar problems. On the freeipa-devel mailing list, November 2012:
http://www.redhat.com/archives/freeipa-devel/2012-November/msg00198.html
and in particular its final progression (around December ;-):
http://www.redhat.com/archives/freeipa-devel/2012-December/msg00088.html

The approach for installing oVirt hosts could be similar... There the point was also distinguishing between the FreeIPA server and client, and the conclusion was: during server installation, the user is warned when a conflicting time service is running, and installation then enforces ntpd configuration. During client installation, the user is also warned, but continuing the installation omits ntpd configuration instead; the user can pass --force-ntpd to force ntpd configuration.

We could follow the server side for oVirt node.

Gianluca
Re: [Users] Create ISO domain from scratch? Not attached, create!
On 22/01/2013 16:45, No Reply wrote:
> I have a functional data domain, NFS based. So this is not an issue where the storage domain is offline, nor the cluster, nor the datacenter. I can create a VM, and start it if so desired. And as Alex asked... there is no existing ISO domain; that is what I am trying to create! I don't want to attach an existing ISO domain, I want to create an ISO domain, and the GUI will not allow that option... it is not even listed as an option. As I said before, in RHEV 3.0 and/or oVirt 3.0 this could be done, but in 3.1 something is blocking any attempt to select the type as ISO or even as an EXPORT domain; the type drop-down list shows the DATA type only, nothing else. The backing store for the data domain is just Fedora 17 based NFS, with several exports, all identical. There is nothing unique or custom in this setup; that is why this is such a mystery.
>
> I take it by SPM you mean the Storage Pool Master? The first data domain, NFS DATA based, is the functional master for the datacenter. I am using the existing DEFAULT datacenter and DEFAULT cluster objects; I have not yet created any additional objects of said types. Not going to increase the complexity of the environment until I get this issue understood and resolved.

can you please send a screenshot - this should just work...
thanks

-----Original Message-----
From: Haim Ateya [mailto:hat...@redhat.com]
Sent: Tuesday, January 22, 2013 09:42
To: No Reply
Cc: users@ovirt.org
Subject: Re: [Users] Create ISO domain from scratch? Not attached, create!

Hi, generally speaking, you should be able to create an EXPORT\ISO domain and attach it to the current data-center. If you can't, it just means that there is no working data domain and the pool is not initialised. Please make sure the pool (data-center) status is up and running, and that one of the hosts is functioning as SPM.
Haim

----- Original Message -----
From: No Reply no-re...@dc.rr.com
To: users@ovirt.org
Sent: Monday, January 21, 2013 8:50:43 PM
Subject: [Users] Create ISO domain from scratch? Not attached, create!

Ok, this one has me scratching my head... I have an existing oVirt 3.1 environment, based on Fedora 17; the storage resource is NFS, also based on Fedora 17. I can create an NFS storage domain, no problem. But when the environment was set up initially, no (local) ISO domain was created. Now, under oVirt engine 3.1, the option to create a storage type under the domain function only allows for creation of a data type domain; the only type in the drop-down list is DATA/NFS. What gives? In 3.0 and older, you could select the storage type as Data or ISO! Anyone have this issue as well? Any help appreciated.
Re: [Users] UI Plugin issue when switching main tabs
On 22/01/2013 14:06, Oved Ourfalli wrote:

- Original Message -
From: René Koch r.k...@ovido.at
To: Oved Ourfalli ov...@redhat.com
Cc: users@ovirt.org
Sent: Tuesday, January 22, 2013 1:51:58 PM
Subject: Re: [Users] UI Plugin issue when switching main tabs

Thanks a lot for your input - it was really helpful! I added a check for arguments.length and now it's working as expected. The working code is:

    VirtualMachineSelectionChange: function() {
        if (arguments.length == 1) {
            var vmName = arguments[0].name;
            alert(vmName);
            // Reload VM Sub Tab
            api.setTabContentUrl('vms-monitoring',
                conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
        }
    }

Btw, do you know if I can get the name of a host instead of the hostname/IP? arguments[0].name gives me the IP address (the value of the webadmin column "hostname/ip") but not the name (column "name") - I always use IPs instead of DNS names for hypervisors...

I don't think you can currently do that. The plan is to expose all the attributes of the entity that are exposed via REST also in the plugin API, but currently that's not the case. Not sure why it doesn't return the name itself (don't know if it is a bug, or it is as designed), but anyway, other properties will be exposed in the future.

but it sounds like a bug in the mapping of id/name for the current uiplugin infra (maybe vojtech mapped the wrong field). assuming restapi returns host name for the name field, uiplugin should return it as well.

Thanks, René

-Original message-
From: Oved Ourfalli ov...@redhat.com
Sent: Tuesday 22nd January 2013 19:34
To: René Koch r.k...@ovido.at
Cc: users@ovirt.org
Subject: Re: [Users] UI Plugin issue when switching main tabs

Found your bug (I think): you don't check in your code whether there is a selection (if (arguments.length == 1), for example, if you want to act only when one is selected). You'll probably want to do similar logic to the Foreman plugin, setting the sub-tab content URL only in case one VM is selected, and setting the URL properly. The reason your alert won't show is that you call arguments[0].name, and it fails when nothing is selected... and I guess that's the case when you switch tabs. Didn't check it out, but please check it out... and let me know what you found :-)
Oved

- Original Message -
From: René Koch r.k...@ovido.at
To: Oved Ourfalli ov...@redhat.com
Cc: users@ovirt.org
Sent: Monday, January 21, 2013 6:05:47 PM
Subject: Re: [Users] UI Plugin issue when switching main tabs

When switching back to the VM main tab, changing the selection doesn't work. No matter which VM I select, the VirtualMachineSelectionChange function isn't called again (as my debug alert window doesn't appear), but the last VM I selected before switching to another main tab is cached in the variable vmName... It works again after restarting engine-service. I just tested your Foreman plugin and it seems that this one is working as expected.
Regards, René

-Original message-
From: Oved Ourfalli ov...@redhat.com
Sent: Monday 21st January 2013 14:56
To: René Koch r.k...@ovido.at
Cc: users@ovirt.org; Vojtech Szocs vsz...@redhat.com
Subject: Re: [Users] UI Plugin issue when switching main tabs

I'll let Vojtech (cc-ed) give a more accurate answer, but, trying to narrow down the issue: when you switch to a different main tab and then back to the VM main tab, and change the selection in there, does it work? (Trying to understand if the problem occurs only when doing the switch, and it works afterwards, or not.)
Thank you, Oved

- Original Message -
From: René Koch r.k...@ovido.at
To: users@ovirt.org
Sent: Sunday, January 20, 2013 11:07:13 PM
Subject: [Users] UI Plugin issue when switching main tabs

Hi, I'm working on a UI plugin to integrate Nagios/Icinga into oVirt Engine and made some progress, but have an issue when switching the main tabs. I use VirtualMachineSelectionChange to create a URL with the name of the VM (and HostSelectionChange for hosts). The name is used in my backend code (Perl) for fetching the monitoring status. Here's the code of VirtualMachineSelectionChange:

    VirtualMachineSelectionChange: function() {
        var vmName = arguments[0].name;
        alert(vmName);
        // Reload VM Sub Tab
        api.setTabContentUrl('vms-monitoring',
            conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
    }

Everything works fine as long as I stay in the Virtual Machines main tab. When switching to e.g. Disks and back to Virtual Machines again, the JavaScript code of start.html isn't processed anymore (or cached (?), as my generated URL with the last VM name will still be sent back to my Perl backend) - I added alert() to test this.

oVirt Engine version: ovirt-engine-3.2.0-1.20130118.gitd102d6f.fc18.noarch
Full code of start.html: http://pastebin.com/iEY6dA6F

Thanks a lot for your help,
René
Re: [Users] UI Plugin issue when switching main tabs
On 23/01/2013 04:32, Itamar Heim wrote:
> On 22/01/2013 14:06, Oved Ourfalli wrote:
> > - Original Message -
> > From: René Koch r.k...@ovido.at
> > To: Oved Ourfalli ov...@redhat.com
> > Cc: users@ovirt.org
> > Sent: Tuesday, January 22, 2013 1:51:58 PM
> > Subject: Re: [Users] UI Plugin issue when switching main tabs
> >
> > Thanks a lot for your input - it was really helpful! I added a check for arguments.length and now it's working as expected. The working code is:
> >
> >     VirtualMachineSelectionChange: function() {
> >         if (arguments.length == 1) {
> >             var vmName = arguments[0].name;
> >             alert(vmName);
> >             // Reload VM Sub Tab
> >             api.setTabContentUrl('vms-monitoring',
> >                 conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
> >         }
> >     }
> >
> > Btw, do you know if I can get the name of a host instead of the hostname/IP? arguments[0].name gives me the IP address (the value of the webadmin column "hostname/ip") but not the name (column "name") - I always use IPs instead of DNS names for hypervisors...
> >
> > I don't think you can currently do that. The plan is to expose all the attributes of the entity that are exposed via REST also in the plugin API, but currently that's not the case. Not sure why it doesn't return the name itself (don't know if it is a bug, or it is as designed), but anyway, other properties will be exposed in the future.
>
> but it sounds like a bug in the mapping of id/name for the current uiplugin infra (maybe vojtech mapped the wrong field). assuming restapi returns host name for the name field, uiplugin should return it as well.

hmmm, i wonder if the shellinabox uiplugin relies on this bug. once it is fixed, it will need to fetch the ip via rest or assume the name is resolvable for ssh.
Re: [Users] custom nfs mount options
On 22/01/2013 11:43, Haim Ateya wrote:
> You can set it manually on each hypervisor by using vdsm.conf. Add the following to /etc/vdsm/vdsm.conf:
>
> [irs]
> nfs_mount_options = soft,nosharecache
>
> Restart the vdsmd service at the end.

you can also set them via a posixfs storage domain, but for nfs, an nfs storage domain is recommended over a posixfs one. The question is what the use case for them is, and whether they should be configurable for nfs domains as well.

- Original Message -
From: Alex Leonhardt alex.t...@gmail.com
To: oVirt Mailing List users@ovirt.org
Sent: Tuesday, January 22, 2013 1:46:56 AM
Subject: [Users] custom nfs mount options

Hi, is it possible to set custom NFS mount options, specifically noatime, wsize and rsize? I couldn't see anything when adding an NFS domain - only timeout and retry. Thanks!

Alex
--
| RHCE | Senior Systems Engineer | www.vcore.co |
| www.vsearchcloud.com |
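Haim's snippet is plain ini syntax. As a quick sanity check of the format (the [irs] section and nfs_mount_options key come from his mail; the parsing code below is only an illustrative sketch, not how vdsm actually loads its configuration):

```python
from configparser import ConfigParser
import io

# The vdsm.conf fragment from Haim's mail, verbatim.
VDSM_CONF = """
[irs]
nfs_mount_options = soft,nosharecache
"""

cfg = ConfigParser()
cfg.read_file(io.StringIO(VDSM_CONF))

# The option value is a single comma-separated string of mount options.
opts = cfg.get("irs", "nfs_mount_options").split(",")
print(opts)  # prints ['soft', 'nosharecache']
```

Note the options are a comma-separated list with no spaces, exactly as they would appear after -o on a mount command line.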
Re: [Users] default multipath.conf config for fedora 18 invalid
Hi Gianluca, I was wondering if you could help me verify this issue. Do you still have this fedora 18 setup? Can you check if it's possible to add an iscsi storage domain with the current vdsm multipath.conf? And also, if we remove getuid_callout from multipath.conf, can you add a storage domain then? Thanks, Yeela - Original Message - From: Dan Kenigsberg dan...@redhat.com To: Gianluca Cecchi gianluca.cec...@gmail.com, Yeela Kaplan ykap...@redhat.com Cc: users users@ovirt.org, Ayal Baron aba...@redhat.com Sent: Wednesday, January 16, 2013 9:46:05 AM Subject: Re: [Users] default multipath.conf config for fedora 18 invalid On Wed, Jan 16, 2013 at 01:22:38AM +0100, Gianluca Cecchi wrote: Hello, configuring All-In-One on Fedora 18 puts these lines in multipath.conf (at least on ovirt-nightly for f18 of some days ago):

# RHEV REVISION 0.9
...
defaults {
    polling_interval 5
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
...
device {
    vendor "HITACHI"
    product "DF.*"
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
...

Actually Fedora 18 has device-mapper-multipath 0.4.9 without getuid_callout; from the changelog: multipath no longer uses the getuid callout. It now gets the wwid from the udev database or the environment variables. So the two getuid_callout lines have to be removed for f18. multipath -l gives:

Jan 16 00:30:15 | multipath.conf +5, invalid keyword: getuid_callout
Jan 16 00:30:15 | multipath.conf +18, invalid keyword: getuid_callout

I think it has to be considered. Hmm, it seems that the title of Bug 886087 - Rest query add storage domain fails on fedora18: missing /sbin/scsi_id is inaccurate. I've marked the bug as an ovirt-3.2 blocker, and nacked the patch that attempts to fix it: http://gerrit.ovirt.org/#/c/10824/ Dan. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
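Since the fix discussed in this thread is simply deleting the getuid_callout lines from multipath.conf, that edit can be sketched in a few lines of Python. This filtering helper is mine, not part of vdsm; it is shown only to make the suggested change concrete.

```python
# Drop the getuid_callout lines that multipath on Fedora 18 rejects as invalid
# keywords. Illustrative helper, not vdsm code.
def strip_getuid_callout(conf_text):
    return "\n".join(
        line for line in conf_text.splitlines()
        if not line.lstrip().startswith("getuid_callout")
    ) + "\n"

sample = """\
defaults {
    polling_interval 5
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
}
"""

print(strip_getuid_callout(sample))
```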
Re: [Users] UI Plugin issue when switching main tabs
- Original Message - From: Itamar Heim ih...@redhat.com To: Oved Ourfalli ov...@redhat.com Cc: users@ovirt.org, Daniel Erez de...@redhat.com Sent: Wednesday, January 23, 2013 2:33:53 PM Subject: Re: [Users] UI Plugin issue when switching main tabs On 23/01/2013 04:32, Itamar Heim wrote: On 22/01/2013 14:06, Oved Ourfalli wrote: - Original Message - From: René Koch r.k...@ovido.at To: Oved Ourfalli ov...@redhat.com Cc: users@ovirt.org Sent: Tuesday, January 22, 2013 1:51:58 PM Subject: Re: [Users] UI Plugin issue when switching main tabs Thanks a lot for your input - it was really helpful! I added a check for arguments.length and now it's working as expected. The working code is:

VirtualMachineSelectionChange: function() {
    if (arguments.length == 1) {
        var vmName = arguments[0].name;
        alert(vmName);
        // Reload VM Sub Tab
        api.setTabContentUrl('vms-monitoring', conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
    }
}

Btw, do you know if I can get the name of a host instead of the hostname/ip? arguments[0].name; gives me the IP address (value of webadmin column hostname/ip) but not the name (column name) - (I always use ips instead of dns names for hypervisors)... I don't think you can currently do that. The plan is to expose all the attributes of the entity that are exposed via REST, also in the plugin api, but currently that's not the case. Not sure why it doesn't return the name itself (don't know if it is a bug, or it is as designed), but anyway, other properties will be exposed in the future. but it sounds like a bug in the mapping of id/name for the current uiplugin infra (maybe vojtech mapped the wrong field). assuming restapi returns host name for the name field, uiplugin should return it as well. hmmm, i wonder if the shellinabox uiplugin relies on this bug. once it is fixed, it will need to fetch the ip via rest or assume the name is resolvable for ssh. Yes, the shellinabox uiplugin indeed relies on it. I thought it was intentional rather than a bug... 
Can we simply add a new property to the JSON object? I.e.:

obj.setProperty("hostname", ((VDS) businessEntity).gethost_name()); //$NON-NLS-1$

(in addition to: obj.setProperty("name", ((VDS) businessEntity).getvds_name()); //$NON-NLS-1$) ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] KVM version not showing in Ovirt Manager
Hi, so what is the OS on each server and what package is actually installed there? ie. the rpm name Thanks, michal On Jan 22, 2013, at 18:55 , Tom Brown t...@ng23.net wrote: I think that it's a typo and the right command is: vdsClient . The second command doesn't have the typo :). Working node node002 ~]# vdsClient -s 0 getVdsCaps vdsClient -s 0 getVdsStats /tmp/node002.log HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:31b77320b5e6'}], 'FC': []} ISCSIInitiatorName = iqn.1994-05.com.redhat:31b77320b5e6 bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}} bridges = {'ovirtmgmt': {'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}} clusterLevels = ['3.0', '3.1', '3.2'] cpuCores = 4 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn cpuModel = Intel(R) Xeon(R) CPU W3520 @ 2.67GHz cpuSockets = 1 cpuSpeed = 2666.908 emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = 65 hooks = {} kvmEnabled = true lastClient = 10.192.42.207 lastClientIface = ovirtmgmt management_ip = memSize = 7853 netConfigDirty = False networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 
'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '10.192.42.1', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}} nics = {'eth0': {'addr': '', 'cfg': {'DEVICE': 'eth0', 'BRIDGE': 'ovirtmgmt', 'BOOTPROTO': 'dhcp', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd4:85:64:09:34:08', 'speed': 1000}} operatingSystem = {'release': '3.el6.centos.9', 'version': '6', 'name': 'RHEL'} packages2 = {'kernel': {'release': '279.14.1.el6.x86_64', 'buildtime': 1352245389.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime': 1340375889, 'version': '0.10.1'}, 'vdsm': {'release': '0.77.20.el6', 'buildtime': 1351246440, 'version': '4.10.1'}, 'qemu-kvm': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}, 'libvirt': {'release': '21.el6_3.6', 'buildtime': 1353577785, 'version': '0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}} reservedMem = 321 software_revision = 0.77 software_version = 4.10 supportedENGINEs = ['3.0', '3.1'] supportedProtocols = ['2.2', '2.3'] uuid = 55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08 version_name = Snow Man vlans = {} vmTypes = ['kvm'] Non Working node node003 ~]# vdsClient -s 0 getVdsCaps vdsClient -s 0 getVdsStats HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:9a7b944e2160'}], 'FC': []} ISCSIInitiatorName = iqn.1994-05.com.redhat:9a7b944e2160 bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}} bridges = {'ovirtmgmt': {'addr': '10.192.42.144', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 
'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['vnet1', 'eth0', 'vnet0']}} clusterLevels = ['3.0', '3.1', '3.2'] cpuCores = 4 cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn cpuModel = Intel(R) Xeon(R) CPU
Re: [Users] default multipath.conf config for fedora 18 invalid
On Wed, Jan 23, 2013 at 2:34 PM, wrote: Hi Gianluca, I was wondering if you could help me verify this issue. Do you still have this fedora 18 setup? Can you check if it's possible to add an iscsi storage domain with the current vdsm multipath.conf? And also, if we remove getuid_callout from multipath.conf, can you add a storage domain then? Thanks, Yeela I can confirm that the environment is still present. It is the same as in this other thread: http://lists.ovirt.org/pipermail/users/2013-January/011593.html I confirm that after - removing the getuid_callout entries - blacklisting devices that were part of clustered volumes (pre-existing CentOS 6.3 + KVM nodes I'm migrating to oVirt) I was able to successfully create an FCP storage domain (see my follow-up in the thread above). To add an iSCSI domain I need to have another host, correct? As I can't mix FCP and iSCSI domain types for the same host/cluster. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] [Engine-devel] ovirt engine sdk
On 01/23/2013 03:53 PM, navin p wrote: Hi Michael, Thanks for your help. On Wed, Jan 23, 2013 at 6:15 PM, Michael Pasternak mpast...@redhat.com wrote: in python, you can see object's attributes by accessing __dict__/__getattr__/dir(object)/etc., vm.__dict__ will do the job for you, however i'd suggest using some IDE (i'm using Eclipse + PyDev plugin), this way you'll be able to access object attributes simply by Ctrl+SPACE auto-completion. Do I have to import something for Ctrl+SPACE to work? It doesn't work for me, at least for list attributes.

for vm in vmlist:
    print vm.name, vm.memory, vm.id, vm.os.kernel, vm.cluster.id, vm.start_time
    #print help(vm.statistics.list())
    vmslist = vm.statistics.list()
    for i in vmslist:
        print i.get_name()

prints:
memory.installed
memory.used
cpu.current.guest
cpu.current.hypervisor
cpu.current.total

but i need the values of memory.installed and memory.used. statistic holders are complex types, you can fetch data by:

i.unit // the unit of the holder data
i.values.value[0].datum // actual data

Also, where do I get the Java SDK and jars? I looked at maven but it was the 1.0 version of the SDK. central repo has 1.0.0.2-1, see [1], deployment details can be found at [2], wiki at [3]. [1] http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.ovirt.engine.sdk%22 [2] http://www.ovirt.org/Java-sdk#Maven_deployment [3] http://www.ovirt.org/Java-sdk Regards, Navin -- Michael Pasternak RedHat, ENG-Virtualization R&D ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
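Michael's access pattern for the statistic holders can be sketched as below. The _Stat/_Values/_Datum classes are stand-ins for the real ovirt-engine-sdk types (an assumption; only the attribute paths i.get_name(), i.unit and i.values.value[0].datum are confirmed in the thread), and the byte figures are made-up sample data.

```python
# Stand-in classes mimicking the SDK's statistic holder shape.
class _Datum:
    def __init__(self, datum):
        self.datum = datum

class _Values:
    def __init__(self, data):
        self.value = [_Datum(d) for d in data]

class _Stat:
    def __init__(self, name, unit, data):
        self._name = name
        self.unit = unit
        self.values = _Values(data)

    def get_name(self):
        return self._name

def stats_to_dict(stat_list):
    """Collapse statistic holders into {name: (first datum, unit)}."""
    return {s.get_name(): (s.values.value[0].datum, s.unit) for s in stat_list}

sample = [
    _Stat("memory.installed", "BYTES", [1073741824]),
    _Stat("memory.used", "BYTES", [536870912]),
]
print(stats_to_dict(sample)["memory.used"])  # (536870912, 'BYTES')
```

Against a live engine, `sample` would instead be `vm.statistics.list()` as in Navin's loop above.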
Re: [Users] KVM version not showing in Ovirt Manager
I've corrected my vdsm packages to pass the qemu-kvm version to ovirt-engine. On Wed, Jan 23, 2013 at 6:18 PM, Tom Brown t...@ng23.net wrote: I mentioned these in the first post but they are CentOS 6.3's running on dreyou's packages - the KVM packages are different between the nodes as indicated below

working
node002 ~]# rpm -qa | grep kvm
qemu-kvm-tools-0.12.1.2-2.295.el6_3.8.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.8.x86_64

non working
node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64

cheers Hi, so what is the OS on each server and what package is actually installed there? ie. the rpm name Thanks, michal On Jan 22, 2013, at 18:55 , Tom Brown t...@ng23.net wrote: I think that it's a typo and the right command is: vdsClient . The second command doesn't have the typo :). Working node node002 ~]# vdsClient -s 0 getVdsCaps vdsClient -s 0 getVdsStats /tmp/node002.log HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:31b77320b5e6'}], 'FC': []} ISCSIInitiatorName = iqn.1994-05.com.redhat:31b77320b5e6 bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}} bridges = {'ovirtmgmt': {'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}} clusterLevels = ['3.0', '3.1', '3.2'] cpuCores = 4 cpuFlags = 
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn cpuModel = Intel(R) Xeon(R) CPU W3520 @ 2.67GHz cpuSockets = 1 cpuSpeed = 2666.908 emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = 65 hooks = {} kvmEnabled = true lastClient = 10.192.42.207 lastClientIface = ovirtmgmt management_ip = memSize = 7853 netConfigDirty = False networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '10.192.42.1', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}} nics = {'eth0': {'addr': '', 'cfg': {'DEVICE': 'eth0', 'BRIDGE': 'ovirtmgmt', 'BOOTPROTO': 'dhcp', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd4:85:64:09:34:08', 'speed': 1000}} operatingSystem = {'release': '3.el6.centos.9', 'version': '6', 'name': 'RHEL'} packages2 = {'kernel': {'release': '279.14.1.el6.x86_64', 'buildtime': 1352245389.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime': 1340375889, 'version': '0.10.1'}, 'vdsm': {'release': '0.77.20.el6', 'buildtime': 1351246440, 'version': '4.10.1'}, 'qemu-kvm': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}, 'libvirt': {'release': '21.el6_3.6', 'buildtime': 1353577785, 'version': '0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}} reservedMem = 321 software_revision = 0.77 software_version = 4.10 
supportedENGINEs = ['3.0', '3.1'] supportedProtocols = ['2.2', '2.3'] uuid = 55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08 version_name = Snow Man vlans = {} vmTypes = ['kvm'] Non Working node node003 ~]# vdsClient -s 0 getVdsCaps vdsClient -s 0 getVdsStats HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:9a7b944e2160'}], 'FC': []} ISCSIInitiatorName = iqn.1994-05.com.redhat:9a7b944e2160 bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}} bridges = {'ovirtmgmt': {'addr': '10.192.42.144', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0',
Re: [Users] Create ISO domain from scratch? Not attached, create!
On Wed, Jan 23, 2013 at 1:45 AM, No Reply wrote: I have a functional data domain, NFS based. So this is not an issue where the storage domain is offline, nor is the cluster nor the datacenter. I can create a VM, and start it if so desired. And as Alex asked... there is no existing ISO domain, that is what I am trying to create! I can speak for my all-in-one config installed on Fedora 18 with version 3.2.0-1.20130115.git2970f58, configured with a local-on-host SD for data. It happens like this:
- ISO domain already existing and attached (it doesn't matter if active or inactive): I cannot add a new ISO domain
- ISO domain detached, so that I only see it under the top System view, Storage tab: I can add a new ISO domain; it appears in the drop-down list of possible types
I don't have a 3.1 environment to verify with, though... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Guests are paused without error message while doing maintenance on NFS storage
Hi, this is a bit of a complex issue, so I'll try to be as clear as possible. We are running oVirt-3.1 in our production environment, based on minimal Fedora 17 installs. We have 4x HP 380's (Intel) running in one cluster, and 2x Sun 7310's (AMD) in another cluster. They have shared storage over NFS to a FreeBSD-based system that uses ZFS as a filesystem. The storage boots off of a mirrored ZFS pool made up of two USB sticks that only houses /, while /var, /usr, etc. lie on a separate ZFS pool made up of the rest of the HDD's in the system. It looks like this:

FS                          MOUNTPOINT
pool1 (the mirrored USB's)  none
pool1/root                  / (mounted ro)
pool2 (the regular HDD's)   none
pool2/root                  none
pool2/root/usr              /usr
pool2/root/usr/home         /usr/home
pool2/root/usr/local        /usr/local
pool2/root/var              /var
tmpfs                       /tmp
pool2/export                /export
pool2/export/ds1            /export/ds1
pool2/export/ds1/data       /export/ds1/data
pool2/export/ds1/export     /export/ds1/export
pool2/export/ds1/iso        /export/ds1/iso
pool2/export/ds2            /export/ds2
pool2/export/ds2/data       /export/ds2/data

/etc/exports:
/export/ds1/data   -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/export -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds1/iso    -alldirs -maproot=root 10.0.0.(all of the HV's)
/export/ds2/data   -alldirs -maproot=root 10.0.0.(all of the HV's)

To make those USB sticks last for as long as possible, / is usually mounted read-only. And when you need to change anything, you need to remount / to read-write, do the maintenance, and then remount back to read-only again. But when you issue a mount command, the VM's in oVirt pause. At first we didn't understand that was actually the cause and tried to correlate the seemingly spontaneous pausing to just about anything. Then I was logged in to both oVirt's webadmin and the storage at the same time, issued mount -uw /, and *boom*, random VM's started to pause :) Not all of them though, and not just every one in either cluster or something; it is completely random which VM's are paused every time.

# time mount -ur /
real 0m2.198s
user 0m0.000s
sys 0m0.002s

And here's what vdsm on one of the HV's thought about that: http://pastebin.com/MXjgpDfU It begins with all VM's being Up, then me issuing the remount on the storage from read-write to read-only, which took 2 secs to complete, vdsm freaking out when it briefly loses its connections, and lastly me at 14:34 making them all run again from webadmin. Two things:

1) Does anyone know of any improvements that could be made on the storage side, apart from the obvious stop remounting, since patching must eventually be done, configurations changed, and so on. A smarter way of configuring something? Booting from another ordinary HDD is sadly out of the question because there isn't any room for any more, it's full. And I would really have liked it to boot from the HDD's that are already in there, but there are other things preventing that.

2) Nothing in engine was logged about it; no Events were made and nothing in engine.log that could indicate something had gone wrong at all. If it wasn't serious enough to issue a warning, why disrupt the service by pausing the machines? Or at least automatically start them back up when the connection to the storage almost immediately came back on its own. Saying nothing made it really hard to troubleshoot, since we didn't initially know at all what could be causing the pauses to happen, and when.

Best Regards /Karli Sjöberg ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Create ISO domain from scratch? Not attached, create!
a pool can only have one ISO and one export domain. if an ISO domain is already attached to the DC you will not be able to add a second one. On 01/23/2013 04:31 PM, Gianluca Cecchi wrote: On Wed, Jan 23, 2013 at 1:45 AM, No Reply wrote: I have a functional data domain, NFS based. So this is not an issue of where the storage domain is offline, nor is the cluster nor the datacenter. I can create a VM, and start it if so desired. And as Alex asked... there is no existing ISO domain, that is what I am trying to create! I can talk for my all-in-one config installed on Fedora 18 with version 3.2.0-1.20130115.git2970f58, configured with local on host SD for data. it happens: - ISO domain already existing and attached (it doesn't matter if active or inactive) I cannot add a new ISO domain - ISO domain detached so that I only see it under the top System view, Storage tab I can add a new ISO domain: it appears in the drop down list of possible types I've not a 3.1 environment to verify, though... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Dafna Ron ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] BFA FC driver no stable on Fedora
On Wed, 2013-01-23 at 10:25 +0100, Kevin Maziere Aubry wrote: Hi I've spent hours trying to get my Brocade FC card working on a Fedora17 or Ovirt Node build. In fact the card is randomly seen by the system, which is really painful. So I've downloaded, compiled and installed the latest driver from Brocade, and now when I load the module the card is seen. So I've installed: bfa_util_linux_noioctl-3.2.0.0-0.noarch bfa_driver_linux-3.2.0.0-0.noarch Hi Kevin, A few things: 1. what version of ovirt-node are you using? 2. You can use the plugin tooling to install an updated kmod package as long as it's in rpm format. This will ensure that it gets into both the system and the initramfs. It's an offline process that will produce a new iso that you can install or upgrade to. The tool is called edit-node and is available in the ovirt-node-tools rpm (from the ovirt.org repos) 3. Do you know if it's fixed in a recent release of the kernel on Fedora? If it is, then we can spin an updated version of the image and pick up the fix directly from fedora. Mike And the module info is: # modinfo bfa filename: /lib/modules/3.3.4-5.fc17.x86_64/kernel/drivers/scsi/bfa.ko version: 3.2.0.0 author: Brocade Communications Systems, Inc. 
description:Brocade Fibre Channel HBA Driver fcpim ipfc license:GPL srcversion: 5C0FBDF3571ABCA9632B9CA alias: pci:v1657d0022sv*sd*bc0Csc04i00* alias: pci:v1657d0021sv*sd*bc0Csc04i00* alias: pci:v1657d0014sv*sd*bc0Csc04i00* alias: pci:v1657d0017sv*sd*bc*sc*i* alias: pci:v1657d0013sv*sd*bc*sc*i* depends:scsi_transport_fc vermagic: 3.3.4-5.fc17.x86_64 SMP mod_unload parm: os_name:OS name of the hba host machine (charp) parm: os_patch:OS patch level of the hba host machine (charp) parm: host_name:Hostname of the hba host machine (charp) parm: num_rports:Max number of rports supported per port (physical/logical), default=1024 (int) parm: num_ioims:Max number of ioim requests, default=2000 (int) parm: num_tios:Max number of fwtio requests, default=0 (int) parm: num_tms:Max number of task im requests, default=128 (int) parm: num_fcxps:Max number of fcxp requests, default=64 (int) parm: num_ufbufs:Max number of unsolicited frame buffers, default=64 (int) parm: reqq_size:Max number of request queue elements, default=256 (int) parm: rspq_size:Max number of response queue elements, default=64 (int) parm: num_sgpgs:Number of scatter/gather pages, default=2048 (int) parm: rport_del_timeout:Rport delete timeout, default=90 secs, Range[0] (int) parm: bfa_lun_queue_depth:Lun queue depth, default=32, Range[0] (int) parm: bfa_io_max_sge:Max io scatter/gather elements , default=255 (int) parm: log_level:Driver log level, default=3, Range[Critical:1|Error:2|Warning:3|Info:4] (int) parm: ioc_auto_recover:IOC auto recovery, default=1, Range[off:0|on:1] (int) parm: linkup_delay:Link up delay, default=30 secs for boot port. 
Otherwise 10 secs in RHEL4 0 for [RHEL5, SLES10, ESX40] Range[0] (int) parm: msix_disable_cb:Disable Message Signaled Interrupts for Brocade-415/425/815/825 cards, default=0, Range[false:0|true:1] (int) parm: msix_disable_ct:Disable Message Signaled Interrupts if possible for Brocade-1010/1020/804/1007/1741 cards, default=0, Range[false:0|true:1] (int) parm: fdmi_enable:Enables fdmi registration, default=1, Range[false:0|true:1] (int) parm: pcie_max_read_reqsz:PCIe max read request size, default=0 (use system setting), Range[128|256|512|1024|2048|4096] (int) parm: max_xfer_size:default=32MB, Range[64k|128k|256k|512k| 1024k|2048k] (int) parm: max_rport_logins:Max number of logins to initiator and target rports on a port (physical/logical), default=1024 (int) I guess that I could be a possible to update the driver inside the Ovirt Node build ? Kevin -- Kevin Mazière Responsable Infrastructure Alter Way – Hosting 1 rue Royal - 227 Bureaux de la Colline 92213 Saint-Cloud Cedex Tél : +33 (0)1 41 16 38 41 Mob : +33 (0)7 62 55 57 05 http://www.alterway.fr ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Attaching floppy to guest
Hello, if I want to attach a floppy image to a guest, can I only do via run once when it is powered off or can I attach it to a running guest too? In my case I have a winxp guest running and I only see Change CD as an option... Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Attaching floppy to guest
oh, and as answer file for install of a vm... silly me On 01/23/2013 05:12 PM, Dafna Ron wrote: you cannot attach a floppy while the vm is running. the logic behind it is that floppy is used as rescue :) On 01/23/2013 05:03 PM, Gianluca Cecchi wrote: Hello, if I want to attach a floppy image to a guest, can I only do via run once when it is powered off or can I attach it to a running guest too? In my case I have a winxp guest running and I only see Change CD as an option... Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Dafna Ron ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] KVM version not showing in Ovirt Manager
Tom, you should have qemu-kvm on Fedora, CentOS and such. qemu-kvm-rhev on RHEL hosts is supposed to work with the downstream RHEV product. How did you get them there? You can modify that if you want in vdsm/caps.py, in the _getKeyPackages() function. http://www.dreyou.org/ovirt/vdsm/Packages/ ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
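Michal's suggestion amounts to editing the list of "key" packages whose versions caps.py reports to the engine. The sketch below is a rough approximation of that idea, not the real vdsm code (which queries rpm directly); the package names and data shape here are assumptions for illustration. The point is that qemu-kvm-rhev has to appear in the key list (or be mapped onto qemu-kvm) for the engine to see a KVM version at all.

```python
# Packages whose versions get reported to the engine; adding 'qemu-kvm-rhev'
# here is the kind of change discussed above. Illustrative only, not vdsm code.
KEY_PACKAGES = ("kernel", "libvirt", "qemu-img", "qemu-kvm", "qemu-kvm-rhev", "vdsm")

def get_key_packages(installed):
    """installed: {name: {'version': ..., 'release': ...}}, e.g. built from rpm -qa."""
    return {name: installed[name] for name in KEY_PACKAGES if name in installed}

installed = {
    "qemu-kvm-rhev": {"version": "0.12.1.2", "release": "2.295.el6.10"},
    "vdsm": {"version": "4.10.1", "release": "0.77.20.el6"},
    "bash": {"version": "4.1.2", "release": "9.el6"},  # ignored: not a key package
}
print(sorted(get_key_packages(installed)))  # ['qemu-kvm-rhev', 'vdsm']
```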
[Users] oVirt Weekly Meeting -- 2013-01-23
Minutes:http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.html Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.txt Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.log.html #ovirt: oVirt Weekly Meeting Meeting started by mburns_ovirt_ws at 15:00:57 UTC. The full logs are available at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-01-23-15.00.log.html . Meeting summary --- * agenda and roll call (mburns_ovirt_ws, 15:01:05) * Release status (mburns, 15:04:10) * vdsm and engine rpms are posted to ovirt-beta repo (mburns, 15:04:23) * testing is ongoing with ovirt-node (mburns, 15:04:43) * hope to have ovirt-node packages and image uploaded and beta announcement sent today (mburns, 15:05:10) * workshops (dneary, 15:08:24) * Sunnyvale workshop on now. 90 registered, ~70 attendees for day 1 (dneary, 15:08:51) * Dates for Shanghai workshop in Intel's campus there have been pushed back (dneary, 15:09:29) * Shanghai workshop will now happen on May 8-9, 2013 (dneary, 15:09:49) * Call for participation and registration for that workshop will go online next week (dneary, 15:10:38) * oVirt Board meeting during the NetApp workshop is planned for tomorrow, Thursday 24 Jan. Remote attendance is possible - please contact dne...@redhat.com or lhawt...@redhat.com to attend remotely (dneary, 15:13:33) * Board meeting starts at 09:00 AM PST, 17:00 UTC (mburns, 15:16:38) * Infra update (quick) (mburns, 15:17:15) * quaid we've got *both* sets of servers I'll be distributing access to the Infra maintainers later today so work can continue in my absence (mburns, 15:17:38) * Release Status (continued) (mburns, 15:18:02) * Q from aglitke -- will we have a 3.2 ovirt on a stick? 
(mburns, 15:18:33) * A - yes, it's already in use at the workshop right now (mburns, 15:18:48) * next step is to get it into a jenkins build to build it daily (mburns, 15:19:07) * question from linex about upgrades -- will it be a simple package update to go from 3.1 to 3.2? (mburns, 15:19:34) * answer -- no, 3.1 runs on F17 and 3.2 on F18, so there is an OS upgrade involved as well as running engine-upgrade (mburns, 15:20:17) * upgrade from 3.2 beta to 3.2 GA *should* be more smooth (mburns, 15:20:57) * Proposal -- since we don't have beta ready yet, slip test day from 24-Jan to 29-Jan and GA from 30-Jan to 06-Feb (mburns, 15:24:12) * question -- release note status (mburns, 15:25:16) * sgordon and cheryn tan are coordinating (mburns, 15:25:29) * maintainers need to be responsive if asked for info from them if we hope to have them available on time (mburns, 15:25:48) * should have a draft ready for the 29th (mburns, 15:26:13) * AGREED: release date to slip 1 week to 06-Feb and test day to 29-Jan (mburns, 15:27:40) * Other Topics (mburns, 15:29:40) Meeting ended at 15:32:24 UTC. Action Items Action Items, by person --- * **UNASSIGNED** * (none) People Present (lines said) --- * mburns (60) * dneary (20) * aglitke (19) * mburns_ovirt_ws (7) * sgordon (5) * linex (5) * ovirtbot (4) * Rydekull (2) * quaid (1) * dustins (1) * oschreib (0) Generated by `MeetBot`_ 0.1.4 .. _`MeetBot`: http://wiki.debian.org/MeetBot ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] default multipath.conf config for fedora 18 invalid
- Original Message - From: Gianluca Cecchi gianluca.cec...@gmail.com To: Yeela Kaplan ykap...@redhat.com Cc: users users@ovirt.org, Ayal Baron aba...@redhat.com, Dan Kenigsberg dan...@redhat.com Sent: Wednesday, January 23, 2013 4:05:34 PM Subject: Re: [Users] default multipath.conf config for fedora 18 invalid On Wed, Jan 23, 2013 at 2:34 PM, wrote: Hi Gianluca, I was wondering if you could help me verify this issue. Do you still have this fedora 18 setup? Can you check if it's possible to add an iscsi storage domain with the current vdsm multipath.conf? And also if we remove getuid_callout from multipath.conf can you add a storage domain then? Thanks, Yeela I can confirm that the environment is still present. It is the same as in this other thread: http://lists.ovirt.org/pipermail/users/2013-January/011593.html I confirm that after - removing the getuid_callout entries - blacklisting devices that were part of clustered volumes (pre-existing CentOS 6.3 + KVM nodes I'm migrating to oVirt) I was able to successfully create an FCP storage domain (see my follow-up in the thread above) Thanks To add an iSCSI domain I need to have another host, correct? As I can't mix FCP and iSCSI domain types for the same host/cluster Yes, you need a different DC and host for iSCSI SDs. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] KVM version not showing in Ovirt Manager
On Jan 23, 2013, at 16:23 , Tom Brown t...@ng23.net wrote: Tom, you should have qemu-kvm on Fedora, CentOS and such; qemu-kvm-rhev on RHEL hosts is supposed to work with the downstream RHEV product. How did you get them there? You can modify that if you want in vdsm/caps.py in the _getKeyPackages() function http://www.dreyou.org/ovirt/vdsm/Packages/ so… I guess either do not build it with -rhev (regular upstream/Fedora packages are without the suffix) or build vdsm with the corresponding modification in caps.py. Thanks, michal
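The caps.py change being suggested amounts to having the key-package lookup accept either package name. Below is a minimal sketch of that idea — it is not vdsm's actual _getKeyPackages() code, and the rpm query is stubbed out with a plain dict for illustration:

```python
def key_package_version(installed, names=('qemu-kvm', 'qemu-kvm-rhev')):
    """Return (name, version) for the first qemu-kvm variant found.

    `installed` maps rpm package names to versions, standing in for a
    real rpm query; the real logic lives in vdsm/caps.py's
    _getKeyPackages(). Returns None if no variant is installed.
    """
    for name in names:
        if name in installed:
            return name, installed[name]
    return None
```

With a check like this, a host carrying qemu-kvm-rhev reports its qemu-kvm version to the engine the same way a stock host does.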
Re: [Users] default mutipath.conf config for fedora 18 invalid
On Wed, Jan 23, 2013 at 4:41 PM, Yeela Kaplan wrote: Yes, you need a different DC and host for iSCSI SDs. Possibly I can test tomorrow by adding another host that should go into the same DC, but I can temporarily put it in another, newly created iSCSI DC for testing. What is the workflow when I have a host in one DC and want to put it into another, in general and when the two DCs have different SD types configured?
Re: [Users] KVM version not showing in Ovirt Manager
so… I guess either do not build it with -rhev (regular upstream/Fedora packages are without the suffix) or build vdsm with the corresponding modification in caps.py yes - I am not sure why those packages are in dreyou's repo; they weren't there a couple of weeks ago when the last HV was built, as that one just pulled the stock CentOS ones. I may just leave it and wait until 3.2 comes along with official RHEL/CentOS packages. thanks
Re: [Users] KVM version not showing in Ovirt Manager
On Jan 23, 2013, at 17:11 , Tom Brown t...@ng23.net wrote: so… I guess either do not build it with -rhev (regular upstream/Fedora packages are without the suffix) or build vdsm with the corresponding modification in caps.py yes - I am not sure why those packages are in dreyou's repo; they weren't there a couple of weeks ago when the last HV was built, as that one just pulled the stock CentOS ones. I may just leave it and wait until 3.2 comes along with official RHEL/CentOS packages yes or - you can really just do the modification in that file; there's no need for any rebuild of vdsm on that one particular box…. It will break on upgrade, but if it solves something for you now - why not. thanks
[Users] best disk type for WIn XP guests
Hello, I have a Win XP guest configured with one IDE disk. I would like to switch to virtio. Is it supported/usable for Win XP as a disk type on oVirt? What else are others using, apart from IDE? My approach is to add a second 1GB disk configured as virtio and then, if successful, change the disk type for the first disk too. But when powering up the guest, it finds new hardware for the second disk; I point it to the WXP\X86 directory of the ISO using virtio-win-1.1.16.vfd. It finds the viostor.xxx files, but at the end it fails installing the driver (see https://docs.google.com/file/d/0BwoPbcrMv8mvMUQ2SWxYZWhSV0E/edit ). Any help/suggestion is welcome. Gianluca
Re: [Users] Encrypted Admin password error
On Tue, Jan 22, 2013 at 4:12 PM, Alon Bar-Lev alo...@redhat.com wrote: engine-config -s AdminPassword=interactive That did the trick, versus passing the password via stdin directly from the command line. - DHC
Re: [Users] default mutipath.conf config for fedora 18 invalid
- Original Message - On Wed, Jan 23, 2013 at 4:41 PM, Yeela Kaplan wrote: Yes, you need a different DC and host for iSCSI SDs. Possibly I can test tomorrow adding another host that should go into the same DC but I can temporarily put it in another newly created iSCSI DC for testing. What is the workflow when I have a host in a DC and then I want to put it into another one, in general and when the two DCs have configured different SD types? As long as the host has visibility to the target storage domains, all you need to do is put the host in maintenance and then edit it and change the cluster/dc it belongs to.
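For reference, the same maintenance-then-edit flow can also be driven over the REST API. This is a hedged sketch: the paths follow oVirt 3.x REST API conventions as I understand them, the IDs are placeholders, and you should verify the exact resource names against your engine's /api description before relying on it:

```
# 1. Put the host into maintenance:
POST /api/hosts/<host-id>/deactivate

# 2. Once the host reports 'maintenance', update its cluster
#    (the target cluster may belong to the other data center):
PUT /api/hosts/<host-id>
<host><cluster id="<target-cluster-id>"/></host>

# 3. Bring it back up in the new cluster:
POST /api/hosts/<host-id>/activate
```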
Re: [Users] KVM version not showing in Ovirt Manager
Andrey Gordeev aka dreyou On 23.01.2013 18:35, Michal Skrivanek michal.skriva...@redhat.com wrote: Tom, you should have qemu-kvm on Fedora, CentOS and such; qemu-kvm-rhev on RHEL hosts is supposed to work with the downstream RHEV product. How did you get them there? You can modify that if you want in vdsm/caps.py in the _getKeyPackages() function On Jan 23, 2013, at 15:23 , Andrey Gordeev dre...@gmail.com wrote: I corrected my vdsm packages to pass the qemu-kvm version to ovirt-engine. Andrey, was there any change needed? If I'm reading the code correctly this should be the case. Or do you mean some other modification? Thanks, michal I just added a check for qemu-kvm-rhev, so _getKeyPackages now returns the qemu-kvm version in both cases. I built qemu-kvm-rhev from RHEL rpms, because only with this package does live snapshot work correctly on CentOS 6.3. On Wed, Jan 23, 2013 at 6:18 PM, Tom Brown t...@ng23.net wrote: I mentioned these in the first post but they are CentOS 6.3 hosts running dreyou's packages - the KVM packages differ between the nodes as shown below:

working node002 ~]# rpm -qa | grep kvm
qemu-kvm-tools-0.12.1.2-2.295.el6_3.8.x86_64
qemu-kvm-0.12.1.2-2.295.el6_3.8.x86_64

non-working node003 ~]# rpm -qa | grep kvm
qemu-kvm-rhev-0.12.1.2-2.295.el6.10.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.295.el6.10.x86_64

cheers Hi, so what is the OS on each server and what package is actually installed there? ie. the rpm name Thanks, michal On Jan 22, 2013, at 18:55 , Tom Brown t...@ng23.net wrote: I think that it's a typo and the right command is: vdsClient . The second command doesn't have the typo :).
Working node node002 ~]# vdsClient -s 0 getVdsCaps vdsClient -s 0 getVdsStats /tmp/node002.log
HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:31b77320b5e6'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:31b77320b5e6
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 4
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_Penryn
cpuModel = Intel(R) Xeon(R) CPU W3520 @ 2.67GHz
cpuSockets = 1
cpuSpeed = 2666.908
emulatedMachines = ['rhel6.3.0', 'pc', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 10.192.42.207
lastClientIface = ovirtmgmt
management_ip =
memSize = 7853
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.192.42.196', 'cfg': {'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '10.192.42.1', 'ports': ['vnet0', 'vnet1', 'eth0', 'vnet2']}}
nics = {'eth0': {'addr': '', 'cfg': {'DEVICE': 'eth0', 'BRIDGE': 'ovirtmgmt', 'BOOTPROTO': 'dhcp', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr': 'd4:85:64:09:34:08', 'speed': 1000}}
operatingSystem = {'release': '3.el6.centos.9', 'version': '6', 'name': 'RHEL'}
packages2 = {'kernel': {'release': '279.14.1.el6.x86_64', 'buildtime': 1352245389.0, 'version': '2.6.32'}, 'spice-server': {'release': '10.el6', 'buildtime': 1340375889, 'version': '0.10.1'}, 'vdsm': {'release': '0.77.20.el6', 'buildtime': 1351246440, 'version': '4.10.1'}, 'qemu-kvm': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}, 'libvirt': {'release': '21.el6_3.6', 'buildtime': 1353577785, 'version': '0.9.10'}, 'qemu-img': {'release': '2.295.el6_3.8', 'buildtime': 1354636067, 'version': '0.12.1.2'}}
reservedMem = 321
software_revision = 0.77
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08
version_name = Snow Man
vlans = {}
Re: [Users] cannot add gluster domain
On 01/22/2013 03:28 PM, T-Sinjon wrote: Hi, everyone: Recently I installed ovirt 3.1 from http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/, and the node uses http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso When I add a gluster domain via NFS, a mount error occurred. I tried the mount manually on the node, and it fails without the -o nolock option:

# /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

Can you please confirm the glusterfs version that is available in ovirt-node-iso? Thanks, Vijay
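Until rpcbind/rpc.statd can be started on the node, the usual workaround is exactly what mount.nfs suggests: keep locks local. The sketch below only prints the command instead of running it (a real run needs root and the live export); the host and paths are copied from the message above:

```shell
# Option string from the failing mount, plus 'nolock' so that
# mount.nfs no longer needs a running rpc.statd.
OPTS="soft,nosharecache,timeo=600,retrans=6,nolock"
echo /usr/bin/mount -t nfs -o "$OPTS" \
  my-gluster-ip:/gvol02/GlusterDomain \
  /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
```

Note that dropping remote locking is only appropriate when no other client needs POSIX locks on the same export.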
[Users] oVirt 3.2 Release delayed
The oVirt 3.2 release has been delayed until a stable oVirt Node build is available. New dates:
* Beta release: 2013-01-24
* Test day: 2013-01-29
* General availability: 2013-02-06
Thanks Mike
Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail
Indeed, reverting back to an older vdsm clears up the above issue. However, the issue I see now is:

Thread-18::ERROR::2013-01-23 15:50:42,885::task::833::TaskManager.Task::(_setError) Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 923, in connectStoragePool
    masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 970, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 643, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1167, in __rebuild
    self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1506, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=f90a0d1c-06ca-11e2-a05b-00151712f280, msdUUID=67534cca-1327-462a-b455-a04464084b31'
Thread-18::DEBUG::2013-01-23 15:50:42,887::task::852::TaskManager.Task::(_run) Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Task._run: 08709e68-bcbc-40d8-843a-d69d4df40ac6 ('f90a0d1c-06ca-11e2-a05b-00151712f280', 2, 'f90a0d1c-06ca-11e2-a05b-00151712f280', '67534cca-1327-462a-b455-a04464084b31', 433) {} failed - stopping task

This is with vdsm built from commit 25a2d8572ad32352227c98a86631300fbd6523c1 - DHC On Wed, Jan 23, 2013 at 10:44 AM, Dead Horse deadhorseconsult...@gmail.com wrote: VDSM was built from: commit 166138e37e75767b32227746bb671b1dab9cdd5e Attached is the full vdsm log I should also note that from the engine's perspective it sees the master storage domain as locked and the others as unknown.
On Wed, Jan 23, 2013 at 2:49 AM, Dan Kenigsberg dan...@redhat.com wrote: On Tue, Jan 22, 2013 at 04:02:24PM -0600, Dead Horse wrote: Any ideas on this one? (from VDSM log): Thread-25::DEBUG::2013-01-22 15:35:29,065::BindingXMLRPC::914::vds::(wrapper) client [3.57.111.30]::call getCapabilities with () {} Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed) cannot read ib0 speed Traceback (most recent call last): File /usr/lib64/python2.6/site-packages/vdsm/netinfo.py, line 155, in speed s = int(file('/sys/class/net/%s/speed' % dev).read()) IOError: [Errno 22] Invalid argument Causes VDSM to fail to attach storage I doubt that this is the cause of the failure, as vdsm has always reported 0 for ib devices, and still is. Does a former version works with your Engine? Could you share more of your vdsm.log? I suppose the culprit lies in one one of the storage-related commands, not in statistics retrieval. Engine side sees: ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (QuartzScheduler_Worker-96) [553ef26e] The connection with details 192.168.0.1:/ovirt/ds failed because of error code 100 and error message is: general exception 2013-01-22 15:35:30,160 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (QuartzScheduler_Worker-96) [1ab78378] Running command: SetNonOperationalVdsCommand internal: true. 
Entities affected : ID: 8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS 2013-01-22 15:35:30,200 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] START, SetVdsStatusVDSCommand(HostName = kezan, HostId = 8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational, nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd 2013-01-22 15:35:30,211 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand, log id: 4af5c4cd 2013-01-22 15:35:30,242 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (QuartzScheduler_Worker-96) [1ab78378] Try to add duplicate audit log values with the same name. Type: VDS_SET_NONOPERATIONAL_DOMAIN. Value: storagepoolname Engine = latest master VDSM = latest master Since latest master is an unstable reference by definition, I'm sure that History would thank you if you post the exact version (git hash?) of the code. node = el6 ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
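The ib0 speed read quoted above is a one-liner in vdsm's netinfo.py. A defensive version of that read — a sketch only, not vdsm's actual fix — would swallow the EINVAL that InfiniBand (and down) interfaces raise and fall back to 0, which is what vdsm reports for ib devices anyway:

```python
def nic_speed(dev):
    """Best-effort link speed in Mb/s; 0 when the kernel can't say.

    Reading /sys/class/net/<dev>/speed raises IOError(EINVAL) for
    InfiniBand and down interfaces, so treat any failure as 'unknown'.
    """
    try:
        with open('/sys/class/net/%s/speed' % dev) as f:
            return int(f.read())
    except (IOError, OSError, ValueError):
        return 0
```

As Dan notes, though, the statistics path already tolerates this, so the storage attach failure most likely lies elsewhere.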
Re: [Users] latest vdsm cannot read ib device speeds causing storage attach fail
I narrowed down the commit where the originally reported issue crept in: commit fc3a44f71d2ef202cff18d7203b9e4165b546621. Building and testing with this commit or subsequent commits yields the original issue. - DHC On Wed, Jan 23, 2013 at 3:56 PM, Dead Horse deadhorseconsult...@gmail.com wrote: Indeed, reverting back to an older vdsm clears up the above issue. However, the issue I see now is:

Thread-18::ERROR::2013-01-23 15:50:42,885::task::833::TaskManager.Task::(_setError) Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 923, in connectStoragePool
    masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 970, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 643, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1167, in __rebuild
    self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1506, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=f90a0d1c-06ca-11e2-a05b-00151712f280, msdUUID=67534cca-1327-462a-b455-a04464084b31'
Thread-18::DEBUG::2013-01-23 15:50:42,887::task::852::TaskManager.Task::(_run) Task=`08709e68-bcbc-40d8-843a-d69d4df40ac6`::Task._run: 08709e68-bcbc-40d8-843a-d69d4df40ac6 ('f90a0d1c-06ca-11e2-a05b-00151712f280', 2, 'f90a0d1c-06ca-11e2-a05b-00151712f280', '67534cca-1327-462a-b455-a04464084b31', 433) {} failed - stopping task

This is with vdsm built from commit 25a2d8572ad32352227c98a86631300fbd6523c1 - DHC On Wed, Jan 23, 2013 at 10:44 AM, Dead Horse deadhorseconsult...@gmail.com wrote: VDSM was
built from: commit 166138e37e75767b32227746bb671b1dab9cdd5e Attached is the full vdsm log I should also note that from engine perspective it sees the master storage domain as locked and the others as unknown. On Wed, Jan 23, 2013 at 2:49 AM, Dan Kenigsberg dan...@redhat.comwrote: On Tue, Jan 22, 2013 at 04:02:24PM -0600, Dead Horse wrote: Any ideas on this one? (from VDSM log): Thread-25::DEBUG::2013-01-22 15:35:29,065::BindingXMLRPC::914::vds::(wrapper) client [3.57.111.30]::call getCapabilities with () {} Thread-25::ERROR::2013-01-22 15:35:29,113::netinfo::159::root::(speed) cannot read ib0 speed Traceback (most recent call last): File /usr/lib64/python2.6/site-packages/vdsm/netinfo.py, line 155, in speed s = int(file('/sys/class/net/%s/speed' % dev).read()) IOError: [Errno 22] Invalid argument Causes VDSM to fail to attach storage I doubt that this is the cause of the failure, as vdsm has always reported 0 for ib devices, and still is. Does a former version works with your Engine? Could you share more of your vdsm.log? I suppose the culprit lies in one one of the storage-related commands, not in statistics retrieval. Engine side sees: ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (QuartzScheduler_Worker-96) [553ef26e] The connection with details 192.168.0.1:/ovirt/ds failed because of error code 100 and error message is: general exception 2013-01-22 15:35:30,160 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (QuartzScheduler_Worker-96) [1ab78378] Running command: SetNonOperationalVdsCommand internal: true. 
Entities affected : ID: 8970b3fe-1faf-11e2-bc1f-00151712f280 Type: VDS 2013-01-22 15:35:30,200 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] START, SetVdsStatusVDSCommand(HostName = kezan, HostId = 8970b3fe-1faf-11e2-bc1f-00151712f280, status=NonOperational, nonOperationalReason=STORAGE_DOMAIN_UNREACHABLE), log id: 4af5c4cd 2013-01-22 15:35:30,211 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (QuartzScheduler_Worker-96) [1ab78378] FINISH, SetVdsStatusVDSCommand, log id: 4af5c4cd 2013-01-22 15:35:30,242 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (QuartzScheduler_Worker-96) [1ab78378] Try to add duplicate audit log values with the same name. Type: VDS_SET_NONOPERATIONAL_DOMAIN. Value: storagepoolname Engine = latest master VDSM = latest master Since latest master is an unstable reference by definition, I'm sure that History would thank you if you post the exact version (git hash?) of the code. node = el6 ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] cannot add gluster domain
Thanks for your kind reply, but I think the problem may not be glusterfs; I have tested with a normal NFS storage and the error is still the same. There's a similar case in 'http://comments.gmane.org/gmane.comp.emulators.ovirt.user/3216', which says: It seems it's related to http://gerrit.ovirt.org/#/c/4720/. The unnecessary NFS-related rpms/commands, including rpcbind, were removed from ovirt-node. But the nfs-lock service requires rpcbind, so it can't start. I guess so. Michael, is it? Thanks! It seems the problem has existed since ovirt-node-iso-2.5.0-2.0.fc17 and is related to this merge. On 24 Jan, 2013, at 2:50 AM, Vijay Bellur vbel...@redhat.com wrote: On 01/22/2013 03:28 PM, T-Sinjon wrote: HI, everyone: Recently , I newly installed ovirt 3.1 from http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/, and node use http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso when i add gluster domain via nfs, mount error occurred, I have do manually mount action on the node but failed if without -o nolock option: # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 my-gluster-ip:/gvol02/GlusterDomain /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain mount.nfs: rpc.statd is not running but is required for remote locking. mount.nfs: Either use '-o nolock' to keep locks local, or start statd. mount.nfs: an incorrect mount option was specified Can you please confirm the glusterfs version that is available in ovirt-node-iso? Thanks, Vijay