Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot
can you please upload full engine, vdsm, libvirt and vm's qemu logs?

On 02/02/2014 02:08 AM, Steve Dainard wrote:

I have two CentOS 6.5 oVirt hosts (ovirt001, ovirt002). I've installed the applicable qemu-kvm-rhev packages from this site: http://www.dreyou.org/ovirt/vdsm32/Packages/ on ovirt002.

On ovirt001, if I take a live snapshot:

Snapshot 'test qemu-kvm' creation for VM 'snapshot-test' was initiated by admin@internal.
The VM is paused
Failed to create live snapshot 'test qemu-kvm' for VM 'snapshot-test'. VM restart is recommended.
Failed to complete snapshot 'test qemu-kvm' creation for VM 'snapshot-test'.

The VM is then started, and the status for the snapshot changes to OK.

On ovirt002 (with the packages from dreyou) I don't get any messages about a snapshot failing, but my VM is still paused to complete the snapshot. Is there something else, other than the qemu-kvm-rhev packages, that would enable this functionality?

I've looked for some information on when the packages would be built as required in the CentOS repos, but I don't see anything definitive.

http://lists.ovirt.org/pipermail/users/2013-December/019126.html
Looks like one of the maintainers is waiting for someone to tell him what flags need to be set.

Also, another thread here: http://comments.gmane.org/gmane.comp.emulators.ovirt.arch/1618
Same maintainer, mentioning that he hasn't seen anything in the bug tracker.

There is a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1009100 that seems to have ended in finding a way for qemu to expose whether it supports live snapshots, rather than figuring out how to get the CentOS team the info they need to build the packages with the proper flags set.

I have bcc'd both dreyou (who packaged the qemu-kvm-rhev packages listed above) and Russ (the CentOS maintainer mentioned in the other threads) in case they wish to chime in and perhaps collaborate on which flags, if any, should be set for the qemu-kvm builds, so we can get a CentOS bug report going and hammer this out.
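For reference, one way to probe whether a given qemu build supports live snapshots is to ask QMP for its command list and check for 'transaction', the command the live snapshot flow relies on. This is a hedged sketch: the virsh invocation in the comment is an assumption about your host setup, and the JSON reply below is canned so the parsing itself can be demonstrated.

```shell
# On a live host you might run (assumption; substitute your VM name):
#   virsh -r qemu-monitor-command VMNAME '{"execute":"query-commands"}'
# Here we parse a canned reply to illustrate the check itself:
reply='{"return":[{"name":"query-commands"},{"name":"transaction"}]}'
if echo "$reply" | grep -q '"name": *"transaction"'; then
    echo "live snapshot capable"
else
    echo "live snapshot NOT capable"
fi
```

If 'transaction' is missing from the reply on your host, the installed qemu-kvm build was likely compiled without live snapshot support, which matches the behavior described above.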
Thanks everyone. **crosses fingers and hopes for live snapshots soon**

*Steve Dainard*
IT Infrastructure Manager
Miovision http://miovision.com/ | /Rethink Traffic/
519-513-2407 ex.250 | 877-646-8476 (toll-free)

On Fri, Jan 31, 2014 at 1:26 PM, Steve Dainard sdain...@miovision.com wrote:

How would you developers, speaking for the oVirt community, propose to solve this for CentOS _now_?

I would imagine that the easiest way is that you build and host this one package (qemu-kvm-rhev), since you basically already have the source and recipe (you're already providing it for RHEV anyway). Then, once that's in place, it's more a question of where to host the packages, in what repository. Be it your own, or some other repo set up for the SIG. This is my view, how I as a user view this issue.

I think this is a pretty valid view. What would it take to get the correct qemu package hosted in the ovirt repo?

--
Med Vänliga Hälsningar
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjob...@slu.se

--
Dafna Ron

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] about the size of an offline snapshot
On 02/01/2014 06:52 PM, Itamar Heim wrote:
On 01/29/2014 10:45 AM, Maor Lipchuk wrote:

Hi Sandy,
Virtual size is the size of the disk the VM knows about; it is the size you chose when creating it. The true size is the sum of the true sizes of all the volumes related to the disk. So for example, if you have one disk of 20G and you occupied 18GB of it, then you created a snapshot and occupied 4GB of it, you might see that the virtual size will still be 20GB while the true size will be 22GB.
You can also check on the host with the commands: vdsClient 0 getVmStats vmId or vdsClient 0 getAllVmStats.

why at host level and not via api or ovirt-cli?

That is correct, you can also see the size and the fields through the API or ovirt-cli (see http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html-single/Developer_Guide/index.html#Floating_Disk_Elements), though you can not see the true size in the floating disk; IINM you can see it under the VM snapshot's disks in the API.
I think that since I already saw this question before, maybe it's worth opening an RFE on that issue, perhaps changing the column name to full_disk_size in the GUI, or adding a column of actual size in the disks main tab?

Regards,
Maor

On 01/29/2014 04:13 AM, Sandy Sun wrote:
Assign a 20G virtio disk to a VM, create an offline snapshot, and find the true size of the VM disk is bigger than the virtual size (assigned size). I want to know how to compute the true size of a VM disk. Anybody can tell me the answer? thanks.
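The arithmetic in Maor's example can be sketched directly (numbers taken from the example above; this is only an illustration of the accounting, not oVirt's actual code, which also counts qcow2 metadata overhead):

```shell
# Virtual size stays at the provisioned value; true size is the sum of the
# space actually consumed by the base volume and each snapshot volume.
virtual_gb=20     # provisioned disk size
base_used_gb=18   # space used in the base volume
snap_used_gb=4    # space used in the snapshot volume
true_gb=$((base_used_gb + snap_used_gb))
echo "virtual=${virtual_gb}G true=${true_gb}G"
```

This is why the true size (22G here) can exceed the virtual size (20G): each snapshot adds its own allocated space on top of the base volume.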
Re: [Users] [URGENT] hacking the DB
Hi Eli,
Thanks for the reply. My first measure was to back up the db. I have managed to clean up the mess, and as soon as I get the time, I'll send the data so you can figure out what went wrong. I'll also try to post my findings on several matters. I thank you all for the fast help getting this problem fixed so I can make my deadline.
Regards,

El 01/02/2014 20:10, users-requ...@ovirt.org wrote:

Today's Topics:
1. Re: Reattach storage domain (Itamar Heim)
2. Re: Reattach storage domain (Alexandr)
3. Re: Reattach storage domain (Itamar Heim)
4. Re: Reattach storage domain (Meital Bourvine)
5. Re: Reattach storage domain (Alexandr)
6. Re: Reattach storage domain (Itamar Heim)
7. Re: [URGENT] hacking the DB (Eli Mesika)
8. Re: Host x is installed with VDSM version (UNKNOWN) (Dan Kenigsberg)

--
Message: 1
Date: Sat, 01 Feb 2014 18:51:19 +0100
From: Itamar Heim ih...@redhat.com
To: Alexandr shur...@shurik.kiev.ua, users@ovirt.org
Subject: Re: [Users] Reattach storage domain

On 02/01/2014 06:38 PM, Alexandr wrote:
Hello! Unfortunately my master storage domain (gluster) is dead. I set up another gluster storage and attached it to oVirt. The hostname, path and volume name are the same as the old ones. Then I restored all files from a tar archive. But I cannot activate the master domain; the operation failed and the domain status remains inactive.
I see it mounted on the nodes: vms02.lis.ua:STORAGE on /rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE. I attached engine.log, can someone provide me recovery steps? P.S. Sorry for my English.

the right way to do this would be to restore it not via the engine, then use the REST API to edit the storage domain connection (mount) in the engine.

--
Message: 2
Date: Sat, 01 Feb 2014 19:56:36 +0200
From: Alexandr shur...@shurik.kiev.ua
To: Itamar Heim ih...@redhat.com, users@ovirt.org
Subject: Re: [Users] Reattach storage domain

Thank you. Can you provide me more detailed steps, I'm not familiar with the REST API :(

01.02.2014 19:51, Itamar Heim wrote:
On 02/01/2014 06:38 PM, Alexandr wrote:
Hello! Unfortunately my master storage domain (gluster) is dead. I set up another gluster storage and attached it to oVirt. The hostname, path and volume name are the same as the old ones. Then I restored all files from a tar archive. But I cannot activate the master domain; the operation failed and the domain status remains inactive. I see it mounted on the nodes: vms02.lis.ua:STORAGE on /rhev/data-center/mnt/glusterSD/vms02.lis.ua:STORAGE. I attached engine.log, can someone provide me recovery steps? P.S. Sorry for my English.

the right way to do this would be to restore it not via the engine, then use the REST API to edit the storage domain connection (mount) in the engine.

--
Message: 3
Date: Sat, 01 Feb 2014 18:58:56 +0100
From: Itamar Heim ih...@redhat.com
To: Alexandr shur...@shurik.kiev.ua, users@ovirt.org
Subject: Re: [Users] Reattach storage domain

On 02/01/2014 06:56 PM, Alexandr wrote:
Thank you.
Can you provide me more detailed steps, I'm not familiar with the REST API :(

sorry i can't give more details right now:
manage connection details: http://www.ovirt.org/Features/Manage_Storage_Connections
rest api (and other things): https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html-single/Developer_Guide/index.html

01.02.2014 19:51, Itamar Heim wrote:
On 02/01/2014 06:38 PM, Alexandr wrote:
Hello! Unfortunately my master storage domain (gluster) is dead. I set up another gluster storage and attached it to oVirt. The hostname, path and volume name are the same as the old ones. Then I restored all files from a tar archive. But I cannot activate the master domain; the operation failed and the domain status remains
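To make the REST API pointer a bit more concrete, here is a hypothetical sketch of what editing a storage connection could look like. The resource path, CONNECTION_ID, engine URL and credentials are all placeholders and assumptions, not details from this thread; the Manage_Storage_Connections page linked above describes the real procedure.

```shell
# Build a request body describing the (new) gluster connection.
cat > /tmp/conn.xml <<'EOF'
<storage_connection>
  <address>vms02.lis.ua</address>
  <path>STORAGE</path>
  <type>glusterfs</type>
</storage_connection>
EOF
# Against the engine you would then PUT it (placeholders, not executed here):
#   curl -k -u admin@internal:PASSWORD -X PUT \
#        -H 'Content-Type: application/xml' -d @/tmp/conn.xml \
#        'https://ENGINE_FQDN/api/storageconnections/CONNECTION_ID'
grep -c '<address>' /tmp/conn.xml   # sanity-check the body we wrote
```

The domain should be in maintenance (not attached/active) while its connection details are edited.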
Re: [Users] about the size of an offline snapshot
On Sun, Feb 2, 2014 at 11:21 AM, Maor Lipchuk wrote:

That is correct, you can also see the size and the fields through the API or ovirt-cli (see http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html-single/Developer_Guide/index.html#Floating_Disk_Elements), though you can not see the true size in the floating disk; IINM you can see it under the VM snapshot's disks in the API.

Just to correct the link, as 3.3 is not beta any more and the link provided is probably accessible only by Red Hat employees:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Developer_Guide/chap-Floating_Disks.html#Floating_Disk_Elements

Gianluca
[Users] How to scratch previous versions reports and dwh installation?
Hello, I have some problems; I think they are also due to PostgreSQL access rule changes from f18 to f19, in particular if one has passed from Fedora 18 to Fedora 19 and from 3.2 to 3.3 as I did; see https://bugzilla.redhat.com/show_bug.cgi?id=1009335

Right now I have a problem with my AIO installation where, trying to upgrade from 3.3.2 to 3.3.3, I got many problems related to the RDBMS. Is there a way to completely get rid of the previous installation and reinstall cleanly in 3.3.3, as I did on another system, without getting problems?

I tried removing packages:
yum remove ovirt-engine-dwh ovirt-engine-reports

and deleting related databases:
drop database ovirt_engine_history;
drop database ovirtenginereports;

But then, trying to install the packages again and running setup, I get this in the log:

2014-01-30 00:37:30::DEBUG::common_utils::446::root:: running sql query on host: localhost, port: 5432, db: ovirt_engine_history, user: engine, query: 'select 1 from history_configuration;'.
2014-01-30 00:37:30::DEBUG::common_utils::907::root:: Executing command -- '/usr/bin/psql --pset=tuples_only=on --set ON_ERROR_STOP=1 --dbname ovirt_engine_history --host localhost --port 5432 --username engine -w -c select 1 from history_configuration;' in working directory '/root'
2014-01-30 00:37:30::DEBUG::common_utils::962::root:: output =
2014-01-30 00:37:30::DEBUG::common_utils::963::root:: stderr = ERROR: relation history_configuration does not exist LINE 1: select 1 from history_configuration;

Can anyone tell if I have to run any other purge operation, such as any users or tables on the engine db itself? And also, what would be the expected configuration of /var/lib/pgsql/data/pg_hba.conf and eventually /var/lib/pgsql/data/postgresql.conf for a Fedora 19 system so that a setup for reports and dwh could complete?

Gianluca
Re: [Users] How to scratch previous versions reports and dwh installation?
- Original Message -
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: users users@ovirt.org
Sent: Sunday, February 2, 2014 3:56:25 PM
Subject: [Users] How to scratch previous versions reports and dwh installation?

Hello, I have some problems, I think they are also due to PostgreSQL access rule changes from f18 to f19. In particular if one has passed from Fedora 18 to Fedora 19 and from 3.2 to 3.3 as I did; see https://bugzilla.redhat.com/show_bug.cgi?id=1009335 Right now I have a problem with my AIO installation where trying to upgrade from 3.3.2 to 3.3.3 I got many problems related to the RDBMS.

Is this related to dwh/reports?

Is there a way to completely get rid of the previous installation and reinstall cleanly in 3.3.3 as I did on another system without getting problems?

So you want to delete dwh/reports only or also the engine?

I tried removing packages: yum remove ovirt-engine-dwh ovirt-engine-reports and deleting related databases: drop database ovirt_engine_history; drop database ovirtenginereports;

This might not be enough. Sadly there is currently no cleanup tool for dwh/reports, similar to engine-cleanup for the engine. For things to delete look at:
/etc/ovirt*dwh*
/etc/ovirt-*reports*
/etc/ovirt-engine/*dwh*
/etc/ovirt-engine/*reports*
/var/lib/ovirt-engine/deployments/*reports*

But then trying to install the packages again and running setup I get this in the log:

2014-01-30 00:37:30::DEBUG::common_utils::446::root:: running sql query on host: localhost, port: 5432, db: ovirt_engine_history, user: engine, query: 'select 1 from history_configuration;'.
2014-01-30 00:37:30::DEBUG::common_utils::907::root:: Executing command -- '/usr/bin/psql --pset=tuples_only=on --set ON_ERROR_STOP=1 --dbname ovirt_engine_history --host localhost --port 5432 --username engine -w -c select 1 from history_configuration;' in working directory '/root'
2014-01-30 00:37:30::DEBUG::common_utils::962::root:: output =
2014-01-30 00:37:30::DEBUG::common_utils::963::root:: stderr = ERROR: relation history_configuration does not exist LINE 1: select 1 from history_configuration;

This in itself is not necessarily a problem, unless of course setup failed immediately after that. Which versions of relevant packages? Can you please post full logs?

Can anyone tell if I have to run any other purge operation, such as any users or tables on the engine db itself?

IIRC nothing in the engine db itself. Didn't check that, though.

And also what would be the expected configuration of /var/lib/pgsql/data/pg_hba.conf

Manually-edited? On a remote database server? Or local, letting the setup programs edit it for you? Please post it if you have a specific problem that you think might be related.

and eventually /var/lib/pgsql/data/postgresql.conf

Only max_connections is edited by setup, to be 150. For a remote database you want to change listen_addresses.

for a fedora 19 system so that a setup for reports and dwh could complete?

In general, on a clean system, with a local database, the setup programs should do what's needed and no manual fixes are needed. Do note that, as Yaniv mentioned earlier, 3.3.3 should be used when released. 3.3.3-beta might probably be good enough if you can't wait.

Regards,
--
Didi
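For orientation only, a local-database Fedora 19 engine might end up with pg_hba.conf and postgresql.conf entries along these lines once the setup programs have edited them. This is a sketch under assumptions: the exact lines and auth methods depend on what each setup run wrote, so treat it as illustrative rather than canonical.

```
# /var/lib/pgsql/data/pg_hba.conf (illustrative local-engine layout)
local   all   postgres                   peer
host    all   all        127.0.0.1/32    md5
host    all   all        ::1/128         md5

# /var/lib/pgsql/data/postgresql.conf (only this line is touched by setup)
max_connections = 150
```

A mismatch between the auth method here (md5 vs trust) and the password the setup tool stored is one common cause of the "password authentication failed" errors seen later in this thread.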
[Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups
Hi All, I managed to use OpenLDAP to integrate with oVirt 3.4.0-0.5.beta1. For this, I followed (more or less; I used a Raspberry Pi and Raspbian) the instructions found at http://www.ovirt.org/LDAP_Quick_Start It all seems to work well: I am able to connect to a domain, log in etc. and assign some roles to users. However, I cannot use (LDAP) groups, it seems. I can add a group in the oVirt GUI, but (in the General tab) Active remains false. Am I missing something...? Winfried
Re: [Users] ovirt-report Forbidden access error
On 02/02/2014 07:40, Yedidyah Bar David wrote:

- Original Message -
From: Alessandro Bianchi a.bian...@skynet.it
To: users@ovirt.org
Sent: Friday, January 31, 2014 4:13:51 PM
Subject: [Users] ovirt-report Forbidden access error

Hi all, I installed ovirt-engine-reports-3.3.2-1.fc19.noarch using yum. Now I have reports listed when right-clicking on VMs, but on any report I see this error:

Forbidden
You don't have permission to access /ovirt-engine-reports/flow.html on this server.

This seems to be related to Apache redirection, but how do I fix it? I have three files in conf.d:
ovirt-engine-root-redirect.conf
z-ovirt-engine-proxy.conf
z-ovirt-engine-reports-proxy.conf
but I can't figure out how to fix them. I applied no changes to these files.

Hi, and thank you for your answer.

Did you also run 'ovirt-engine-reports-setup' after installing it?

yes: I had to fix the pg file, setting the user as locally enabled as reported by others, to avoid the installation missing-password error

Was it a new installation or an upgrade from 3.2? Both engine and reports.

Engine was upgraded from 3.2 and reports are newly installed (not previously installed)

Did you install and setup dwh?

yes

Note that there are several bugs in dwh/reports 3.3.2, targeted to be fixed in 3.3.3. So you might want to wait till 3.3.3 is released.

Regards,

I'll wait. Thank you. Best regards

--
SkyNet SRL
Via Maggiate 67/a - 28021 Borgomanero (NO) - tel. +39 0322-836487/834765 - fax +39 0322-836608
http://www.skynet.it
Re: [Users] How to scratch previous versions reports and dwh installation?
On Sun, Feb 2, 2014 at 3:34 PM, Yedidyah Bar David wrote:

- Original Message -
From: Gianluca Cecchi
To: users users@ovirt.org
Sent: Sunday, February 2, 2014 3:56:25 PM
Subject: [Users] How to scratch previous versions reports and dwh installation?

Hello, I have some problems, I think they are also due to PostgreSQL access rule changes from f18 to f19. In particular if one has passed from Fedora 18 to Fedora 19 and from 3.2 to 3.3 as I did; see https://bugzilla.redhat.com/show_bug.cgi?id=1009335 Right now I have a problem with my AIO installation where trying to upgrade from 3.3.2 to 3.3.3 I got many problems related to the RDBMS.

Is this related to dwh/reports?

yes, engine is ok

Is there a way to completely get rid of the previous installation and reinstall cleanly in 3.3.3 as I did on another system without getting problems?

So you want to delete dwh/reports only or also the engine?

Yes, only dwh/reports. Suppose I don't mind keeping my previous settings (3.2.x in my case) and I want to start from new data. Engine is ok.

I tried removing packages: yum remove ovirt-engine-dwh ovirt-engine-reports and deleting related databases: drop database ovirt_engine_history; drop database ovirtenginereports;

This might not be enough. Sadly there is currently no cleanup tool for dwh/reports, similar to engine-cleanup for the engine. For things to delete look at:
/etc/ovirt*dwh*
/etc/ovirt-*reports*
/etc/ovirt-engine/*dwh*
/etc/ovirt-engine/*reports*
/var/lib/ovirt-engine/deployments/*reports*

But then trying to install the packages again and running setup I get this in the log:

2014-01-30 00:37:30::DEBUG::common_utils::446::root:: running sql query on host: localhost, port: 5432, db: ovirt_engine_history, user: engine, query: 'select 1 from history_configuration;'.
2014-01-30 00:37:30::DEBUG::common_utils::907::root:: Executing command -- '/usr/bin/psql --pset=tuples_only=on --set ON_ERROR_STOP=1 --dbname ovirt_engine_history --host localhost --port 5432 --username engine -w -c select 1 from history_configuration;' in working directory '/root'
2014-01-30 00:37:30::DEBUG::common_utils::962::root:: output =
2014-01-30 00:37:30::DEBUG::common_utils::963::root:: stderr = ERROR: relation history_configuration does not exist LINE 1: select 1 from history_configuration;

This in itself is not necessarily a problem, unless of course setup failed immediately after that.

The actual problem that caused setup to abort was a few lines below:

2014-01-30 00:37:42::DEBUG::common_utils::907::root:: Executing command -- '/usr/share/ovirt-engine-dwh/db-scripts/create_schema.sh -l /var/log/ovirt-engine/ovirt-history-db-install-2014_01_30_00_37_42.log -u engine_history -s localhost -p 5432 -g' in working directory '/root'
2014-01-30 00:37:42::DEBUG::common_utils::962::root:: output =
2014-01-30 00:37:42::DEBUG::common_utils::963::root:: stderr = psql: FATAL: password authentication failed for user engine_history FATAL: password authentication failed for user engine_history password retrieved from file /tmp/pgpasss2gOEF.tmp
2014-01-30 00:37:42::DEBUG::common_utils::964::root:: retcode = 2
2014-01-30 00:37:42::ERROR::decorators::27::root:: Traceback (most recent call last): File /usr/share/ovirt-engine-dwh/decorators.py, line 20, in wrapped_f output = f(*args) File /bin/ovirt-engine-dwh-setup, line 160, in createDbSchema envDict={'ENGINE_PGPASS': PGPASS_TEMP}, File /usr/share/ovirt-engine-dwh/common_utils.py, line 967, in execCmd raise Exception(msg) Exception: Error while trying to create ovirt_engine_history db
2014-01-30 00:37:42::ERROR::ovirt-engine-dwh-setup::691::root:: Exception caught!
2014-01-30 00:37:42::ERROR::ovirt-engine-dwh-setup::692::root:: Traceback (most recent call last): File /bin/ovirt-engine-dwh-setup, line 623, in main createDbSchema(db_dict) File /usr/share/ovirt-engine-dwh/decorators.py, line 28, in wrapped_f raise Exception(instance) Exception: Error while trying to create ovirt_engine_history db

Which versions of relevant packages? Can you please post full logs?

I'm using packages from 3.3.3rc, so:
ovirt-engine-reports-3.3.3-1.fc19.noarch
ovirt-engine-dwh-3.3.3-1.fc19.noarch

Can anyone tell if I have to run any other purge operation, such as any users or tables on the engine db itself?

IIRC nothing in the engine db itself. Didn't check that, though.

And also what would be the expected configuration of /var/lib/pgsql/data/pg_hba.conf

Manually-edited? On a remote database server? Or local, letting the setup programs edit it for you? Please post it if you have a specific problem that you think might be related.

This is an All-In-One, so the engine itself and its database, as well as the dwh and reports RDBMS, are all on the same server. I didn't edit anything, but I don't know if passing from f18 to f19 caused the postgres yum update phase to change anything itself... and eventually
Re: [Users] How to scratch previous versions reports and dwh installation?
for example,
- yum remove ovirt-engine-dwh ovirt-engine-reports
- psql
drop database ovirt_engine_history;
drop database ovirtenginereports;
cd /etc
mv ovirt-engine-dwh/ ovirt-engine-dwh.old
mv ovirt-engine-reports/ ovirt-engine-reports.old
mv ovirt-engine/ovirt-engine-dwh ovirt-engine/ovirt-engine-dwh.old
mv /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy.old

and run setup again after installing ovirt-engine-dwh-3.3.3-1.fc19.noarch (I currently have stable + updates-testing repos enabled)

[root@tekkaman ~]# ovirt-engine-dwh-setup
Welcome to ovirt-engine-dwh setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service (yes|no): yes
Stopping ovirt-engine... [ DONE ]
This utility can configure a read only user for DB access. Would you like to do so? (yes|no): yes
Error: user name cannot be empty
Provide a username for read-only user : ovirt_read
Provide a password for read-only user:
Warning: Weak Password.
Re-type password:
Should postgresql be setup with secure connection?
(yes|no): yes
Creating DB...[ ERROR ]
Error encountered while installing ovirt-engine-dwh, please consult the log file: /var/log/ovirt-engine/ovirt-engine-dwh-setup-2014_02_02_16_29_34.log

The main problem in the log is:

2014-02-02 16:30:04::DEBUG::ovirt-engine-dwh-setup::144::root:: ovirt engine history db creation is logged at /var/log/ovirt-engine/ovirt-history-db-install-2014_02_02_16_30_04.log
2014-02-02 16:30:04::DEBUG::common_utils::907::root:: Executing command -- '/usr/share/ovirt-engine-dwh/db-scripts/create_schema.sh -l /var/log/ovirt-engine/ovirt-history-db-install-2014_02_02_16_30_04.log -u engine_history -s localhost -p 5432 -g' in working directory '/root'
2014-02-02 16:30:04::DEBUG::common_utils::962::root:: output =
2014-02-02 16:30:04::DEBUG::common_utils::963::root:: stderr = psql: FATAL: password authentication failed for user engine_history FATAL: password authentication failed for user engine_history password retrieved from file /tmp/pgpassW0tW24.tmp
2014-02-02 16:30:04::DEBUG::common_utils::964::root:: retcode = 2
2014-02-02 16:30:04::ERROR::decorators::27::root:: Traceback (most recent call last): File /usr/share/ovirt-engine-dwh/decorators.py, line 20, in wrapped_f output = f(*args) File /bin/ovirt-engine-dwh-setup, line 160, in createDbSchema envDict={'ENGINE_PGPASS': PGPASS_TEMP}, File /usr/share/ovirt-engine-dwh/common_utils.py, line 967, in execCmd raise Exception(msg) Exception: Error while trying to create ovirt_engine_history db
2014-02-02 16:30:04::ERROR::ovirt-engine-dwh-setup::691::root:: Exception caught!
2014-02-02 16:30:04::ERROR::ovirt-engine-dwh-setup::692::root:: Traceback (most recent call last): File /bin/ovirt-engine-dwh-setup, line 637, in main createDbSchema(db_dict) File /usr/share/ovirt-engine-dwh/decorators.py, line 28, in wrapped_f raise Exception(instance) Exception: Error while trying to create ovirt_engine_history db

So it seems a problem with creating the db. Currently:

[root@tekkaman ~]# grep -v ^# /var/lib/pgsql/data/pg_hba.conf
local   all                  postgres                     trust
local   all                  all                          md5
host    ovirtenginereports   engine_reports  0.0.0.0/0    md5
host    ovirtenginereports   engine_reports  ::0/0        md5
host    all                  all             127.0.0.1/32 md5
host    all                  all             ::1/128      md5
host    ovirt_engine_history engine_history  ::0/0        trust

Where is the information for the user engine_history stored? Do I also have to clean up this user information?

See my full log /var/log/ovirt-engine/ovirt-engine-dwh-setup-2014_02_02_16_29_34.log here: https://drive.google.com/file/d/0BwoPbcrMv8mvQnJlRkljTkhjMTg/edit?usp=sharing

Gianluca
[Users] EL6 manual hypervisor bare minimum with ovirt-hosted-engine packages
Hello, I'd like to run the new ovirt-hosted-engine option in RHEV/oVirt. This appears not to be a built-in option of the oVirt Node image. I'd like to create an EL6 install that closely resembles the Node/Hypervisor image. How can I keep the number of installed packages to a minimum and have the self-hosted packages installed? Please post how I can get to the node/rhev-h image manually, with as similar a package set as possible plus the ovirt-hosted-engine packages.

# yum install ovirt-hosted-engine-setup
# yum install ovirt-hosted-engine-ha

Thanks
--
Zach
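One practical way to approach "as close to the Node image as possible" is to compare package manifests: dump the package list of an oVirt Node host and of your minimal EL6 box, then diff them to see what is still missing. The sketch below uses canned one-line lists purely to show the mechanics; on real systems you would generate each file with `rpm -qa --qf '%{NAME}\n' | sort` (the file paths here are arbitrary choices, not anything oVirt ships):

```shell
# Canned stand-ins for the two manifests (real ones come from rpm -qa | sort)
printf 'bash\nkernel\nvdsm\n' > /tmp/node.lst
printf 'bash\nkernel\n'       > /tmp/el6.lst
# comm -13: lines only in the second file, i.e. packages the Node image
# has that the minimal EL6 install still lacks
comm -13 /tmp/el6.lst /tmp/node.lst
```

Installing `ovirt-hosted-engine-setup` on the minimal box will pull in most of that delta (vdsm and friends) as dependencies anyway.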
Re: [Users] How to scratch previous versions reports and dwh installation?
On Sun, Feb 2, 2014 at 4:37 PM, Gianluca Cecchi wrote:

Where is the information for the user engine_history stored? Do I also have to clean up this user information?

So it seems it was related to the PostgreSQL users, which needed to be dropped too before running setup again. And I also noticed that in 3.3.3 rc the problem solved here: http://gerrit.ovirt.org/#/c/23304/2/packaging/ovirt-engine-reports-setup.py is not applied, and reports setup fails. I hope someone is following this to get it into the final 3.3.3.

This is my list of things to do in 3.3.3 to get a clean new install, which worked today. Some steps could be unnecessary depending on previous setup or previous errors and hence incomplete deployments:

1) packages clean
yum remove ovirt-engine-dwh ovirt-engine-reports

2) files and directory clean
rm -rf /etc/ovirt-engine-dwh/
rm -rf /etc/ovirt-engine-reports/
rm -rf /etc/ovirt-engine/ovirt-engine-dwh
rm -rf /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy
rm -rf /usr/share/ovirt-engine-dwh/
rm -rf /usr/share/ovirt-engine-reports

3) DB clean (substitute ovirt_ruser with your previously chosen read-only user)
engine=# drop user engine_history;
DROP ROLE
engine=# drop user engine_reports;
DROP ROLE
engine=# drop user ovirt_ruser;
DROP ROLE

At the end you should have something like:

postgres=# \du
List of roles
Role name | Attributes                                     | Member of
----------+------------------------------------------------+-----------
engine    | Create DB                                      | {}
postgres  | Superuser, Create role, Create DB, Replication | {}

NOTE: it seems not necessary to purge the lines eventually added during previous setup into pg_hba.conf, such as:
host ovirtenginereports engine_reports 0.0.0.0/0 md5
host ovirtenginereports engine_reports ::0/0 md5
host ovirt_engine_history engine_history ::0/0 trust

4) DWH packages install and setup
yum install ovirt-engine-dwh

[root@tekkaman ~]# ovirt-engine-dwh-setup
Welcome to ovirt-engine-dwh setup utility
This utility can configure a read only user for DB access. Would you like to do so? (yes|no): yes
Error: user name cannot be empty
Provide a username for read-only user : ovirt_ruser
Provide a password for read-only user:
Warning: Weak Password.
Re-type password:
Should postgresql be setup with secure connection? (yes|no): yes
Creating DB...[ DONE ]
Creating read-only user...[ DONE ]
Setting DB connectivity...[ DONE ]
Starting ovirt-engine... [ DONE ]
Starting oVirt-ETL... [ DONE ]
Successfully installed ovirt-engine-dwh.

5) Install reports packages, patch and setup
yum install ovirt-engine-reports

in file /usr/share/ovirt-engine-reports/ovirt-engine-reports-setup.py, line 1163, change:
if preserveReportsJobs: --> if preserveReports:

[root@tekkaman ~]# ovirt-engine-reports-setup
Welcome to ovirt-engine-reports setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service (yes|no): yes
Stopping ovirt-engine... [ DONE ]
Editing XML files... [ DONE ]
Setting DB connectivity...[ DONE ]
Please choose a password for the reports admin user(s) (ovirt-admin):
Warning: Weak Password.
Retype password:
Warning: Weak Password.
Deploying Server... [ DONE ]
Updating Redirect Servlet... [ DONE ]
Importing reports... [ DONE ]
Customizing Server... [ DONE ]
Running post setup steps... [ DONE ]
Starting ovirt-engine... [ DONE ]
Restarting httpd... [ DONE ]
Succesfully installed ovirt-engine-reports.

6) Connect to the Reports Portal with the ovirt-admin user and verify correct access

HTH others,
Gianluca
Re: [Users] changing hostname in ovirt
On 01/27/2014 01:52 PM, Sven Kieske wrote: well, that's not what I want, because I'm talking about a local storage DC. I just want to change the host's address which ovirt uses to connect to the host. This isn't possible without changing the certificates (re-deploy)? you shouldn't need to remove/add the host, just move to maint and click re-install. try it first on a debug host to make sure, but i wouldn't expect it to destroy your VMs in local storage. On 27.01.2014 13:27, Dafna Ron wrote: well, if you can re-deploy the hosts that would change the certificates as well (create new ones with the new hostname).
Re: [Users] VM install failures on a stateless node
On 01/28/2014 02:08 AM, David Li wrote: Hi, I have been trying to install my first VM on a stateless node. So far I have failed twice, with the node ending up in Non-responsive mode. I had to reboot to recover, and it took a while to reconfigure everything since this is stateless. I can still get into the node via the console. It's not dead. But the ovirtmgmt interface seems to be dead. The other iSCSI interface is running OK. Can anyone recommend ways to debug this problem? Thanks. David

was this resolved? is vdsm up and running? output of vdsClient -s 0 getVdsCaps?
[Users] Error message constantly being reported
Hi All, Constantly seeing this message in the logs:

vdsm vds ERROR vdsm exception occured
Traceback (most recent call last):
  File /usr/share/vdsm/BindingXMLRPC.py, line 952, in wrapper
    res = f(*args, **kwargs)
  File /usr/share/vdsm/gluster/api.py, line 54, in wrapper
    rv = func(*args, **kwargs)
  File /usr/share/vdsm/gluster/api.py, line 306, in tasksList
    status = self.svdsmProxy.glusterTasksList(taskIds)
  File /usr/share/vdsm/supervdsm.py, line 50, in __call__
    return callMethod()
  File /usr/share/vdsm/supervdsm.py, line 48, in lambda
    **kwargs)
  File string, line 2, in glusterTasksList
  File /usr/lib64/python2.6/multiprocessing/managers.py, line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: tasks is not a valid status option
Usage: volume status [all | VOLNAME [nfs|shd|BRICK]] [detail|clients|mem|inode|fd|callpool]
return code: 1

looks like an option which isn't recognised by the gluster volume status command. Any ideas how to resolve? It's not causing any problems, but I would like to stop it. Cheers Jon
Re: [Users] oVirt 3.3.3 RC EL6 Live Snapshot
please run vdsClient -s 0 getStorageDomainInfo a52938f7-2cf4-4771-acb2-0c78d14999e5 Thanks, Dafna On 02/02/2014 03:02 PM, Steve Dainard wrote: Logs attached with VM running on qemu-kvm-rhev packages installed. *Steve Dainard * IT Infrastructure Manager Miovision http://miovision.com/ | /Rethink Traffic/ 519-513-2407 ex.250 877-646-8476 (toll-free) *Blog http://miovision.com/blog | **LinkedIn https://www.linkedin.com/company/miovision-technologies | Twitter https://twitter.com/miovision | Facebook https://www.facebook.com/miovision* Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3 This e-mail may contain information that is privileged or confidential. If you are not the intended recipient, please delete the e-mail and any attachments and notify us immediately. On Sun, Feb 2, 2014 at 5:05 AM, Dafna Ron d...@redhat.com mailto:d...@redhat.com wrote: can you please upload full engine, vdsm, libvirt and vm's qemu logs? On 02/02/2014 02:08 AM, Steve Dainard wrote: I have two CentOS 6.5 Ovirt hosts (ovirt001, ovirt002) I've installed the applicable qemu-kvm-rhev packages from this site: http://www.dreyou.org/ovirt/vdsm32/Packages/ on ovirt002. On ovirt001 if I take a live snapshot: Snapshot 'test qemu-kvm' creation for VM 'snapshot-test' was initiated by admin@internal. The VM is paused Failed to create live snapshot 'test qemu-kvm' for VM 'snapshot-test'. VM restart is recommended. Failed to complete snapshot 'test qemu-kvm' creation for VM 'snapshot-test'. The VM is then started, and the status for the snapshot changes to OK. On ovirt002 (with the packages from dreyou) I don't get any messages about a snapshot failing, but my VM is still paused to complete the snapshot. Is there something else other than the qemu-kvm-rhev packages that would enable this functionality? I've looked for some information on when the packages would be built as required in the CentOS repos, but I don't see anything definitive. 
http://lists.ovirt.org/pipermail/users/2013-December/019126.html Looks like one of the maintainers is waiting for someone to tell him what flags need to be set. Also, another thread here: http://comments.gmane.org/gmane.comp.emulators.ovirt.arch/1618 same maintainer, mentioning that he hasn't seen anything in the bug tracker. There is a bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1009100 that seems to have ended in finding a way for qemu to expose whether it supports live snapshots, rather than figuring out how to get the CentOS team the info they need to build the packages with the proper flags set. I have bcc'd both dreyou (packaged the qemu-kvm-rhev packages listed above) and Russ (CentOS maintainer mentioned in the other threads) if they wish to chime in and perhaps collaborate on which flags, if any, should be set for the qemu-kvm builds so we can get a CentOS bug report going and hammer this out. Thanks everyone. **crosses fingers and hopes for live snapshots soon** On Fri, Jan 31, 2014 at 1:26 PM, Steve Dainard sdain...@miovision.com wrote: How would you developers, speaking for the oVirt-community, propose to solve this for CentOS _now_ ?
I would imagine that the easiest way is that you build and host this one package (qemu-kvm-rhev), since you basically already have the source and recipe (since you're already providing it for RHEV
Re: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups
- Original Message - From: Winfried de Heiden - Voorwinde w...@dds.nl To: users@ovirt.org Sent: Sunday, February 2, 2014 5:09:01 PM Subject: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups

Hi All, I managed to use OpenLDAP to integrate with oVirt 3.4.0-0.5.beta1. For this, I followed (more or less; I used a Raspberry Pi and Raspbian) the instructions found at http://www.ovirt.org/LDAP_Quick_Start It all seems to work well: I am able to connect to a domain, log in etc. and assign some roles to users. However, I cannot use (LDAP) groups, it seems. I can add a group in the oVirt GUI, but (in the General tab) Active remains false. Am I missing something?

Hi Winfried, I have a question for you - when you add the group, can you use one of its users to perform an operation the group has permission to perform? For example, if the group has login permissions, can you log in with a user that belongs to the group? I'm looking at the code, and this might be an issue where the active flag is simply not set on a group.

Winfried
Re: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups
On 02/02/2014 11:01 PM, Yair Zaslavsky wrote: - Original Message - From: Winfried de Heiden - Voorwinde w...@dds.nl To: users@ovirt.org Sent: Sunday, February 2, 2014 5:09:01 PM Subject: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups Hi All, I managed to use OpenLDAP to integrate with oVirt 3.4.0-0.5.beta1. For this, I followed (more or less; I used a Raspberry Pi and Raspbian) the instructions found at http://www.ovirt.org/LDAP_Quick_Start It all seems to work well: I am able to connect to a domain, log in etc. and assign some roles to users. However, I cannot use (LDAP) groups, it seems. I can add a group in the oVirt GUI, but (in the General tab) Active remains false. Am I missing something? Hi Winfried, I have a question for you - when you add the group, can you use one of its users to perform an operation the group has permission to perform? For example, if the group has login permissions, can you log in with a user that belongs to the group? I'm looking at the code, and this might be an issue where the active flag is simply not set on a group.

Yair - why would active be set on a group?
Re: [Users] Error message constantly being reported
On 02/02/2014 08:01 PM, Jon Archer wrote: Hi All, Constantly seeing this message in the logs: vdsm vds ERROR vdsm exception occured#012Traceback (most recent call last):#012 File /usr/share/vdsm/BindingXMLRPC.py, line 952, in wrapper#012res = f(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 54, in wrapper#012rv = func(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 306, in tasksList#012status = self.svdsmProxy.glusterTasksList(taskIds)#012 File /usr/share/vdsm/supervdsm.py, line 50, in __call__#012return callMethod()#012 File /usr/share/vdsm/supervdsm.py, line 48, in lambda#012**kwargs)#012 File string, line 2, in glusterTasksList#012 File /usr/lib64/python2.6/multiprocessing/managers.py, line 740, in _callmethod#012raise convert_to_error(kind, result)#012GlusterCmdExecFailedException: Command execution failed#012error: tasks is not a valid status option#012Usage: volume status [all | VOLNAME [nfs|shd|BRICK]] [detail|clients|mem|inode|fd|callpool]#012return code: 1 looks like an option which isn't recognised by the gluster volume status command. Any ideas how to resolve? It's not causing any problems, but I would like to stop it. Cheers Jon ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users sahina - iirc, there is a patch removing that noise? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups
- Original Message - From: Itamar Heim ih...@redhat.com To: Yair Zaslavsky yzasl...@redhat.com, Winfried de Heiden - Voorwinde w...@dds.nl Cc: users@ovirt.org Sent: Monday, February 3, 2014 1:32:00 AM Subject: Re: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups On 02/02/2014 11:01 PM, Yair Zaslavsky wrote: - Original Message - From: Winfried de Heiden - Voorwinde w...@dds.nl To: users@ovirt.org Sent: Sunday, February 2, 2014 5:09:01 PM Subject: [Users] ovirt 3.4.0-0.5.beta1 and OpenLDAP groups Hi All, I managed to use OpenLDAP to integrate with oVirt 3.4.0-0.5.beta1. For this, I followed (more or less; I used a Raspberry Pi and Raspbian) the instructions found at http://www.ovirt.org/LDAP_Quick_Start It all seems to work well: I am able to connect to a domain, log in etc. and assign some roles to users. However, I cannot use (LDAP) groups, it seems. I can add a group in the oVirt GUI, but (in the General tab) Active remains false. Am I missing something? Hi Winfried, I have a question for you - when you add the group, can you use one of its users to perform an operation the group has permission to perform? For example, if the group has login permissions, can you log in with a user that belongs to the group? I'm looking at the code, and this might be an issue where the active flag is simply not set on a group. Yair - why would active be set on a group?

Itamar - I don't think there is a sense in that. At engine-core it is not being set. At the UI level I think the code should be revisited: in AdElementListModel there are places where we create user objects and store group information inside them. Later on we store these objects in the groups collection of the model, and this model is used to present the list of users and groups.
[Users] Constant looping log of task activity
I'm seeing a constant loop of task issues in vdsm.log. The host is a CentOS 6.5 server, otherwise set up as a DB lab server. The engine is a CentOS 6.5 VM running on a VMware ESX host. The host has local storage only. Not sure if related, but I've been trying to attach a Win2k8-hosted NFS share for ISOs and failing. I can manually mount it just fine from the host OS, but the ovirt-engine complains of privileges errors. I also had trouble getting the host recognized in oVirt due to a messed-up sudoers file. With that sorted, it added and reports fine. Any troubleshooting tips?

Thread-84::DEBUG::2014-02-02 16:08:55,459::BindingXMLRPC::167::vds::(wrapper) client [10.1.9.11]
Thread-84::DEBUG::2014-02-02 16:08:55,459::task::579::TaskManager.Task::(_updateState) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::moving from state init -> state preparing
Thread-84::INFO::2014-02-02 16:08:55,460::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='b780d909-b6a0-4a38-82e2-3d7fd3a2b745', options=None)
Thread-84::INFO::2014-02-02 16:08:55,460::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
Thread-84::DEBUG::2014-02-02 16:08:55,460::task::1168::TaskManager.Task::(prepare) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
Thread-84::DEBUG::2014-02-02 16:08:55,460::task::579::TaskManager.Task::(_updateState) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::moving from state preparing -> state finished
Thread-84::DEBUG::2014-02-02 16:08:55,460::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-84::DEBUG::2014-02-02 16:08:55,461::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-84::DEBUG::2014-02-02 16:08:55,461::task::974::TaskManager.Task::(_decref) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::ref 0 aborting False
Thread-85::DEBUG::2014-02-02 16:08:55,470::BindingXMLRPC::167::vds::(wrapper) client [10.1.9.11]
Thread-85::DEBUG::2014-02-02 16:08:55,470::task::579::TaskManager.Task::(_updateState) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::moving from state init -> state preparing
Thread-85::INFO::2014-02-02 16:08:55,470::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='b780d909-b6a0-4a38-82e2-3d7fd3a2b745', options=None)
Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745`ReqID=`7bbb6abd-006b-4b7b-b540-322dfdda7644`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2553' at 'getStoragePoolInfo'
Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745' for lock type 'shared'
Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745' is free. Now locking as 'shared' (1 active user)
Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745`ReqID=`7bbb6abd-006b-4b7b-b540-322dfdda7644`::Granted request
Thread-85::DEBUG::2014-02-02 16:08:55,472::task::811::TaskManager.Task::(resourceAcquired) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::_resourcesAcquired: Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745 (shared)
Thread-85::DEBUG::2014-02-02 16:08:55,472::task::974::TaskManager.Task::(_decref) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::ref 1 aborting False
Thread-85::INFO::2014-02-02 16:08:55,472::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': 'd9f70b92-3c38-4503-ac69-b48325ead406', 'name': 'hnwdb05-Local', 'version': '3', 'domains': 'd9f70b92-3c38-4503-ac69-b48325ead406:Active', 'pool_status': 'connected', 'isoprefix': '', 'type': 'LOCALFS', 'master_ver': 1, 'lver': 1}, 'dominfo': {'d9f70b92-3c38-4503-ac69-b48325ead406': {'status': 'Active', 'diskfree': '409127440384', 'alerts': [], 'version': 3, 'disktotal': '945068838912'}}}
Thread-85::DEBUG::2014-02-02 16:08:55,473::task::1168::TaskManager.Task::(prepare) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::finished: {'info': {'spm_id': 1, 'master_uuid': 'd9f70b92-3c38-4503-ac69-b48325ead406', 'name': 'hnwdb05-Local', 'version': '3', 'domains': 'd9f70b92-3c38-4503-ac69-b48325ead406:Active', 'pool_status': 'connected', 'isoprefix': '', 'type': 'LOCALFS', 'master_ver': 1, 'lver': 1}, 'dominfo': {'d9f70b92-3c38-4503-ac69-b48325ead406': {'status': 'Active', 'diskfree': '409127440384', 'alerts': [], 'version': 3, 'disktotal': '945068838912'}}}
Thread-85::DEBUG::2014-02-02 16:08:55,473::task::579::TaskManager.Task::(_updateState)
Re: [Users] EL6 manual hypervisor bare minimum with ovirt-hosted-engine packages
On 02/02/2014 05:57 PM, Zach Musselman wrote: Hello, I'd like to run the new ovirt-hosted-engine option in RHEV/oVirt. This appears not to be a built-in option of the oVirt Node image. I'd like to create an EL6 install that resembles the Node/Hypervisor image as closely as possible. How can I keep the number of installed packages to a minimum and have the self-hosted packages installed? Please post how I can get to the node/rhev-h image manually, with as similar a package set as possible plus the ovirt-hosted-engine packages:

# yum install ovirt-hosted-engine-setup
# yum install ovirt-hosted-engine-ha

Thanks -- Zach

start from rhel minimal install?
Re: [Users] Constant looping log of task activity
Hi Matt, Are the permissions on the NFS share set to vdsm:kvm (36:36)? They have to be in order to be used by oVirt. Also, can you please attach the full vdsm log for the problematic host? I am probably missing something from the logs you pasted, but I cannot see any errors, and it is easier to work with the full logs. Thanks, Gadi Ickowicz - Original Message - From: Matt Warren mwar...@hnw.com To: users@ovirt.org Sent: Monday, February 3, 2014 4:41:51 AM Subject: [Users] Constant looping log of task activity I'm seeing a constant loop of task issues in vdsm.log. The host is a CentOS 6.5 server, otherwise set up as a DB lab server. The engine is a CentOS 6.5 VM running on a VMware ESX host. The host has local storage only. Not sure if related, but I've been trying to attach a Win2k8-hosted NFS share for ISOs and failing. I can manually mount it just fine from the host OS, but the ovirt-engine complains of privileges errors. I also had trouble getting the host recognized in oVirt due to a messed-up sudoers file. With that sorted, it added and reports fine. Any troubleshooting tips?
Thread-84::DEBUG::2014-02-02 16:08:55,459::BindingXMLRPC::167::vds::(wrapper) client [10.1.9.11] Thread-84::DEBUG::2014-02-02 16:08:55,459::task::579::TaskManager.Task::(_updateState) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::moving from state init - state preparing Thread-84::INFO::2014-02-02 16:08:55,460::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='b780d909-b6a0-4a38-82e2-3d7fd3a2b745', options=None) Thread-84::INFO::2014-02-02 16:08:55,460::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}} Thread-84::DEBUG::2014-02-02 16:08:55,460::task::1168::TaskManager.Task::(prepare) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::finished: {'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}} Thread-84::DEBUG::2014-02-02 16:08:55,460::task::579::TaskManager.Task::(_updateState) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::moving from state preparing - state finished Thread-84::DEBUG::2014-02-02 16:08:55,460::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-84::DEBUG::2014-02-02 16:08:55,461::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-84::DEBUG::2014-02-02 16:08:55,461::task::974::TaskManager.Task::(_decref) Task=`cc9c6dac-ed6e-4cc9-a3e4-b741c5137caf`::ref 0 aborting False Thread-85::DEBUG::2014-02-02 16:08:55,470::BindingXMLRPC::167::vds::(wrapper) client [10.1.9.11] Thread-85::DEBUG::2014-02-02 16:08:55,470::task::579::TaskManager.Task::(_updateState) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::moving from state init - state preparing Thread-85::INFO::2014-02-02 16:08:55,470::logUtils::44::dispatcher::(wrapper) Run and protect: getStoragePoolInfo(spUUID='b780d909-b6a0-4a38-82e2-3d7fd3a2b745', options=None) Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::197::ResourceManager.Request::(__init__) 
ResName=`Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745`ReqID=`7bbb6abd-006b- 4b7b-b540-322dfdda7644`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2553' at 'getStoragePoolInfo' Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745' for lock type 'shared' Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745' is free. Now locking as 'shared' (1 active user) Thread-85::DEBUG::2014-02-02 16:08:55,471::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745`ReqID=`7bbb6abd-006b- 4b7b-b540-322dfdda7644`::Granted request Thread-85::DEBUG::2014-02-02 16:08:55,472::task::811::TaskManager.Task::(resourceAcquired) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::_resourcesAcquired: Storage.b780d909-b6a0-4a38-82e2-3d7fd3a2b745 (shared) Thread-85::DEBUG::2014-02-02 16:08:55,472::task::974::TaskManager.Task::(_decref) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::ref 1 aborting False Thread-85::INFO::2014-02-02 16:08:55,472::logUtils::47::dispatcher::(wrapper) Run and protect: getStoragePoolInfo, Return response: {'info': {'spm_id': 1, 'master_uuid': 'd9f70b92-3c38-4503-ac69-b48325ead406', 'name': 'hnwdb05-Local', 'version': '3', 'domains': 'd9f70b92-3c38-4503-ac69-b48325ead406:Active', 'pool_status': 'connected', 'isoprefix': '', 'type': 'LOCALFS', 'master_ver': 1, 'lver': 1}, 'dominfo': {'d9f70b92-3c38-4503-ac69-b48325ead406': {'status': 'Active', 'diskfree': '409127440384', 'alerts': [], 'version': 3, 'disktotal': '945068838912'}}} Thread-85::DEBUG::2014-02-02 16:08:55,473::task::1168::TaskManager.Task::(prepare) Task=`59aa1ec2-64b9-4c6d-bdb8-66154f73915d`::finished: {'info':
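Since ISO-domain mount failures like the one above are often plain ownership problems, a quick pre-flight check of the export directory can save a round trip: oVirt expects the export to be owned by vdsm:kvm, i.e. numeric uid:gid 36:36. A hedged sketch follows; check_export is an illustrative helper (not an oVirt tool) and is demonstrated on a temporary directory rather than a real export.

```shell
# Illustrative pre-flight check: oVirt expects NFS export directories
# to be owned by vdsm:kvm, i.e. numeric uid:gid 36:36.
check_export() {
    owner=$(stat -c '%u:%g' "$1") || return 1
    if [ "$owner" = "36:36" ]; then
        echo "$1: ownership OK (36:36)"
    else
        echo "$1: owned by $owner, expected 36:36 -- try: chown 36:36 '$1'"
    fi
}

# Demonstrated on a scratch directory; point it at the real export
# path on the NFS server when debugging.
d=$(mktemp -d)
check_export "$d"
rm -rf "$d"
```

Checking the numeric ids rather than the names avoids false negatives on the NFS server, where no vdsm or kvm account may exist.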
Re: [Users] How to scratch previous versions reports and dwh installation?
- Original Message - From: Gianluca Cecchi gianluca.cec...@gmail.com To: Yedidyah Bar David d...@redhat.com Cc: users users@ovirt.org, Yaniv Dary yd...@redhat.com Sent: Sunday, February 2, 2014 7:58:05 PM Subject: Re: [Users] How to scratch previous versions reports and dwh installation? On Sun, Feb 2, 2014 at 4:37 PM, Gianluca Cecchi wrote: where is stored information for user engine_history do I have also to cleanup this user information? So it seems it was related to postreSQL users needed to be dropped too, before running setup again. And I also noticed that in 3.3.3 rc the problem solved here: http://gerrit.ovirt.org/#/c/23304/2/packaging/ovirt-engine-reports-setup.py is not applied and reports setup fails. I hope someone is following to get into final 3.3.3. It is backported to 3.3: http://gerrit.ovirt.org/23335 so should enter 3.3.3. This is my list of things to do in 3.3.3 to get a clean new install that worked today: Some steps could be not necessary depending on previous setup or previous errors and so incomplete deployments: 1) packages clean yum remove ovirt-engine-dwh ovirt-engine-reports 2) files and directory clean rm -rf /etc/ovirt-engine-dwh/ rm -rf /etc/ovirt-engine-reports/ rm -rf /etc/ovirt-engine/ovirt-engine-dwh rm -rf /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy rm -rf /usr/share/ovirt-engine-dwh/ rm -rf /usr/share/ovirt-engine-reports 3) DB clean (substitute ovirt_ruser with your previously chosen read user) engine=# drop user engine_history; DROP ROLE engine=# drop user engine_reports; DROP ROLE engine=# drop user ovirt_ruser; DROP ROLE At the end you shold have something like: postgres=# \du List of roles Role name | Attributes | Member of ---++--- engine| Create DB | {} postgres | Superuser, Create role, Create DB, Replication | {} NOTE: it seems not necessary to purge the lines eventually addedd during previous setup into pg_hba.conf such as: hostovirtenginereports engine_reports 0.0.0.0/0 md5 
hostovirtenginereports engine_reports ::0/0 md5 hostovirt_engine_history engine_history ::0/0 trust 3) DWH packages install and setup yum install ovirt-engine-dwh [root@tekkaman ~]# ovirt-engine-dwh-setup Welcome to ovirt-engine-dwh setup utility This utility can configure a read only user for DB access. Would you like to do so? (yes|no): yes Error: user name cannot be empty Provide a username for read-only user : ovirt_ruser Provide a password for read-only user: Warning: Weak Password. Re-type password: Should postgresql be setup with secure connection? (yes|no): yes Creating DB...[ DONE ] Creating read-only user...[ DONE ] Setting DB connectivity...[ DONE ] Starting ovirt-engine... [ DONE ] Starting oVirt-ETL... [ DONE ] Successfully installed ovirt-engine-dwh. 4) Install reports packages, patch and setup yum install ovirt-engine-reports in file /usr/share/ovirt-engine-reports/ovirt-engine-reports-setup.py line 1163 change if preserveReportsJobs: -- if preserveReports: [root@tekkaman ~]# ovirt-engine-reports-setup Welcome to ovirt-engine-reports setup utility In order to proceed the installer must stop the ovirt-engine service Would you like to stop the ovirt-engine service (yes|no): yes Stopping ovirt-engine... [ DONE ] Editing XML files... [ DONE ] Setting DB connectivity...[ DONE ] Please choose a password for the reports admin user(s) (ovirt-admin): Warning: Weak Password. Retype password: Warning: Weak Password. Deploying Server... [ DONE ] Updating Redirect Servlet... [ DONE ] Importing reports... [ DONE ] Customizing Server... [ DONE ] Running post setup steps... [ DONE ] Starting ovirt-engine... [ DONE ] Restarting httpd... [ DONE ] Succesfully installed ovirt-engine-reports. 5) Connect to Reports Portal with ovirt-admin user and verify correct access Good job. Thanks for the report! Please, if possible, open relevant bugs as you see fit. E.g. 
you might want to claim that if a db user engine_reports exists, setup should not fail, or at least fail with a clean message. Do note that in 3.4 the setup of dwh and reports was rewritten from scratch, and is integrated into engine-setup. I did not play
Re: [Users] Error message constantly being reported
On 02/03/2014 05:02 AM, Itamar Heim wrote: On 02/02/2014 08:01 PM, Jon Archer wrote: Hi All, Constantly seeing this message in the logs: vdsm vds ERROR vdsm exception occured#012Traceback (most recent call last):#012 File /usr/share/vdsm/BindingXMLRPC.py, line 952, in wrapper#012res = f(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 54, in wrapper#012rv = func(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 306, in tasksList#012status = self.svdsmProxy.glusterTasksList(taskIds)#012 File /usr/share/vdsm/supervdsm.py, line 50, in __call__#012 return callMethod()#012 File /usr/share/vdsm/supervdsm.py, line 48, in lambda#012**kwargs)#012 File string, line 2, in glusterTasksList#012 File /usr/lib64/python2.6/multiprocessing/managers.py, line 740, in _callmethod#012raise convert_to_error(kind, result)#012GlusterCmdExecFailedException: Command execution failed#012error: tasks is not a valid status option#012Usage: volume status [all | VOLNAME [nfs|shd|BRICK]] [detail|clients|mem|inode|fd|callpool]#012return code: 1 looks like an option which isn't recognised by the gluster volume status command. Any ideas how to resolve? It's not causing any problems, but I would like to stop it. Cheers Jon sahina - iirc, there is a patch removing that noise? Yes, there was a patch removing this for clusters with 3.4 compatibility version. For 3.4 gluster clusters, we need a version of gluster (>= 3.5) to support the gluster async task feature. This version has support for gluster volume status tasks.
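Until such a fix lands, a wrapper or monitoring script can gate the call on the installed gluster version itself, since only gluster >= 3.5 understands the tasks option of volume status. The sketch below is illustrative: version_ge is a made-up helper built on GNU sort -V, and the parsing of the glusterfs --version banner is an assumption to verify on your system.

```shell
# Gate 'gluster volume status VOLNAME tasks' on the CLI version,
# since only gluster >= 3.5 understands the 'tasks' option.
version_ge() {
    # True when $1 >= $2, comparing dotted versions with GNU sort -V.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Take the second word of the first line of 'glusterfs --version'
# (an assumption about the banner format); fall back to a stand-in
# value when the CLI is not installed.
v=$(glusterfs --version 2>/dev/null | awk 'NR==1 {print $2}')
v=${v:-3.4.2}
if version_ge "$v" 3.5; then
    echo "gluster $v: 'volume status tasks' supported"
else
    echo "gluster $v: 'volume status tasks' not supported, skipping"
fi
```

This is the same decision the backported vdsm patch effectively makes, only done by hand at the shell level.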
Re: [Users] Error message constantly being reported
On 02/03/2014 07:35 AM, Sahina Bose wrote: On 02/03/2014 05:02 AM, Itamar Heim wrote: On 02/02/2014 08:01 PM, Jon Archer wrote: Hi All, Constantly seeing this message in the logs: vdsm vds ERROR vdsm exception occured#012Traceback (most recent call last):#012 File /usr/share/vdsm/BindingXMLRPC.py, line 952, in wrapper#012res = f(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 54, in wrapper#012rv = func(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 306, in tasksList#012status = self.svdsmProxy.glusterTasksList(taskIds)#012 File /usr/share/vdsm/supervdsm.py, line 50, in __call__#012 return callMethod()#012 File /usr/share/vdsm/supervdsm.py, line 48, in lambda#012**kwargs)#012 File string, line 2, in glusterTasksList#012 File /usr/lib64/python2.6/multiprocessing/managers.py, line 740, in _callmethod#012raise convert_to_error(kind, result)#012GlusterCmdExecFailedException: Command execution failed#012error: tasks is not a valid status option#012Usage: volume status [all | VOLNAME [nfs|shd|BRICK]] [detail|clients|mem|inode|fd|callpool]#012return code: 1 looks like an option which isn't recognised by the gluster volume status command. Any ideas how to resolve? It's not causing any problems, but I would like to stop it. Cheers Jon ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users sahina - iirc, there is a patch removing that noise? Yes, there was a patch removing this for clusters 3.4 compatibility version For 3.4 gluster clusters, we need a version of gluster (= 3.5) to support the gluster async task feature. This version has the support for gluster volume status tasks was this backported to stable 3.3 ? ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Error message constantly being reported
On 02/03/2014 12:06 PM, Itamar Heim wrote: On 02/03/2014 07:35 AM, Sahina Bose wrote: On 02/03/2014 05:02 AM, Itamar Heim wrote: On 02/02/2014 08:01 PM, Jon Archer wrote: Hi All, Constantly seeing this message in the logs: vdsm vds ERROR vdsm exception occured#012Traceback (most recent call last):#012 File /usr/share/vdsm/BindingXMLRPC.py, line 952, in wrapper#012res = f(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 54, in wrapper#012 rv = func(*args, **kwargs)#012 File /usr/share/vdsm/gluster/api.py, line 306, in tasksList#012status = self.svdsmProxy.glusterTasksList(taskIds)#012 File /usr/share/vdsm/supervdsm.py, line 50, in __call__#012 return callMethod()#012 File /usr/share/vdsm/supervdsm.py, line 48, in lambda#012**kwargs)#012 File string, line 2, in glusterTasksList#012 File /usr/lib64/python2.6/multiprocessing/managers.py, line 740, in _callmethod#012raise convert_to_error(kind, result)#012GlusterCmdExecFailedException: Command execution failed#012error: tasks is not a valid status option#012Usage: volume status [all | VOLNAME [nfs|shd|BRICK]] [detail|clients|mem|inode|fd|callpool]#012return code: 1 looks like an option which isn't recognised by the gluster volume status command. Any ideas how to resolve? It's not causing any problems, but I would like to stop it. Cheers Jon ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users sahina - iirc, there is a patch removing that noise? Yes, there was a patch removing this for clusters 3.4 compatibility version For 3.4 gluster clusters, we need a version of gluster (= 3.5) to support the gluster async task feature. This version has the support for gluster volume status tasks was this backported to stable 3.3 ? Unfortunately, no - missed this. Have submitted a patch now - http://gerrit.ovirt.org/23982 ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt-3.3.3 release postponed due to blockers
On 02/02/2014 00:05, Dan Kenigsberg wrote: On Fri, Jan 31, 2014 at 04:16:54PM -0500, Douglas Schilling Landgraf wrote: On 01/31/2014 05:17 AM, Dan Kenigsberg wrote: On Fri, Jan 31, 2014 at 09:36:48AM +0100, Sandro Bonazzola wrote: On 30/01/2014 22:38, Robert Story wrote: Can we revert these packages to previous versions in the 3.3.2 stable repo so those of us who want/need to install new hosts in our clusters aren't dead in the water waiting for 3.3.3? Hi Robert, I think you can still install 3.3.2 on your clusters, with the requirement of manually adding python-cpopen before trying to install vdsm. About 3.3.3, I think vdsm should really drop the dependency on vdsm-python-cpopen: it's a package obsoleted by python-cpopen, so there's no point in still requiring it, especially if keeping that requirement breaks dependency resolution. I really wanted to avoid eliminating a subpackage during a micro release. That's impolite and surprising. But given the awkward yum bug, persistent dependency problems, and the current release delay, I give up. Let's eliminate vdsm-python-cpopen from the ovirt-3.3 branch and require python-cpopen. Yaniv, Douglas: could you handle it? Sure. Done: http://gerrit.ovirt.org/#/c/23942/ Acked. Could you cherry-pick it into dist-git and rebuild the ovirt-3.3.3 candidate (without other changes, which can wait for ovirt-3.3.4)? Let me know where I can get the new rpms, so I can add them to the 3.3.3 repo and release :-) -- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com