[Users] too many bounces information message..
Some weeks ago I already received this kind of message, and I got another one today:

" Your membership in the mailing list Users has been disabled due to excessive bounces The last bounce received from you was dated 08-Jan-2014. You will not get any more messages from this list until you re-enable your membership. You will receive 3 more reminders like this before your membership in the list is deleted. To re-enable your membership, you can simply respond to this message .. "

In the meantime I re-enabled my membership and received the confirmation message.

What could be the reason? I normally use my Gmail account via web, and I use it for other mailing lists too, without any problem. One difference is that Gmail initially puts more messages from this list into spam than from the others, and I mark them as "not spam".

Any hint here?

Thanks,
Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] too many bouces information message..
On Wed, Jan 8, 2014 at 11:34 PM, Dave Neary wrote:
> Hi,
>
> It may be related to the IPv6 bounce we got from a spam filtering
> service recently. RP Herrold suggested that we add an IPv6 PTR for
> linode01 and see if that fixes it, and I have requested that from IT
> services here in Red Hat, who control the DNS records.
>
> Will keep everyone posted, we'll see if it makes a difference.
>
> Thanks,
> Dave.
>
> On 01/08/2014 11:31 PM, Itamar Heim wrote:
>> we're not sure why, but gmail considers us as spammers since we send the
>> same email to a few hundred gmail accounts registered to users@ovirt.org.

Ok, thanks. The important thing is that I'm not the origin of mailing list problems for other people.

Gianluca
Re: [Users] [libvirt] libvirt migration port configuration and virPortAllocator
On Fri, Jan 10, 2014 at 9:53 AM, Sven Kieske wrote:
> Hi,
>
> any chance this also gets backported to EL 6 ?
>
> I would open a BZ, but just if it's technically possible.
> The version difference is quite huge.

The bug for RHEL 6 was already opened as a clone some months ago:
https://bugzilla.redhat.com/show_bug.cgi?id=1018695

You could push there for a resolution...

Gianluca
Re: [Users] Hypervisor info
On Fri, Jan 10, 2014 at 10:12 AM, Sven Kieske wrote:
> Hi,
>
> yeah of course, there are various scripts for that,
> i.e. http://www.dmo.ca/blog/detecting-virtualization-on-linux/
>
> (google is your friend)
>
> the (afaik) fastest way to detect ovirt is:
>
> dmidecode | grep oVirt
>
> but this needs root privileges.

I confirm that on three guest systems (CentOS 5.10, CentOS 6.4 and Fedora 20, all x86_64) I get:

$ sudo /usr/sbin/dmidecode | grep -A2 "System Information"
System Information
        Manufacturer: oVirt
        Product Name: oVirt Node

This is on oVirt 3.3.2 with the Fedora 19 oVirt stable repo. Nice to know.

Thanks,
Gianluca
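A rootless alternative to dmidecode on Linux guests is reading the DMI vendor string the kernel exports under /sys/class/dmi/id/, which is normally world-readable. A minimal sketch (the helper name and the classification labels are my own invention, not part of any oVirt tooling):

```shell
#!/bin/sh
# Classify a hypervisor from the DMI system vendor string.
# On a live guest the string comes from /sys/class/dmi/id/sys_vendor,
# readable without root.
classify_vendor() {
    case "$1" in
        *oVirt*)  echo "ovirt" ;;
        *VMware*) echo "vmware" ;;
        *)        echo "unknown" ;;
    esac
}

# On a real guest you would call:
#   classify_vendor "$(cat /sys/class/dmi/id/sys_vendor)"
classify_vendor "oVirt"
```

On the oVirt guests above this would report "ovirt" without needing sudo.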
Re: [Users] cluster.min-free-disk option on gluster 3.4 Was: Re: how to temporarily solve low disk space problem
On Tue, Jan 7, 2014 at 8:29 PM, Amedeo Salvati wrote:
>> Could this parameter impact in general only start of new VMs or in any
>> way also already running VMs?
>> Gianluca
>
> added gluster-users as they can respond to our questions.
>
> Gianluca, as you are using glusterfs, and as I can see on your df output:
>
> /dev/mapper/fedora-DATA_GLUSTER  30G  23G  7.8G  75%  /gluster/DATA_GLUSTER
> node01.mydomain:gvdata           30G  26G  4.6G  85%  /rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata
>
> be careful with the gluster cluster.min-free-disk option; on gluster 3.1 and
> 3.2 its default is 0% (good for you!):
>
> http://gluster.org/community/documentation//index.php/Gluster_3.2:_Setting_Volume_Options#cluster.min-free-disk
>
> but I can't find the same documentation for gluster 3.4, which I suppose
> is the gluster version you're using on oVirt; in the Red Hat Storage
> documentation the cluster.min-free-disk default is 10% (bad for you):
>
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/chap-User_Guide-Managing_Volumes.html
>
> so fill your gvdata volume up to 90% and let us know if it stops I/O
> (or only writes), or we can wait for a clarification from a gluster guy :-)
>
> best regards
> a

My environment is based on oVirt 3.3.2 with the Fedora 19 oVirt stable repo, plus GlusterFS upgraded to 3.4.2-1.fc19 from the updates-testing f19 repo.

At this moment I have 4 GB free on the xfs filesystem that is the base for the gluster mount point.

# engine-config -g FreeSpaceCriticalLowInGB
FreeSpaceCriticalLowInGB: 2 version: general
--> all ok

Then:
# engine-config -s FreeSpaceCriticalLowInGB=6
# systemctl restart ovirt-engine
--> all ok

Then I tried a "yum update" of an F20 VM, which went without problems (about 50 MB involved in the transaction), followed by a reboot of the VM (so no power off).
--> all ok

Then I shut the VM down and attempted to power it on.
--> fail

In the webadmin GUI I get:

" Error while executing action: f20: Cannot run VM. Low disk space on target Storage Domain gvdata. "

In engine.log I get:

2014-01-10 10:37:42,610 WARN  [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-1) [12c2f039] CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_TARGET_STORAGE_DOMAIN,$storageName gvdata , sharedLocks= ]

Strangely, there is no output in the webadmin GUI events pane (see also the list of events below).

Then:
# engine-config -s FreeSpaceCriticalLowInGB=1
# systemctl restart ovirt-engine
--> all ok

I can power on the f20 VM again.

HIH,
Gianluca

Regarding GlusterFS, I went through several updates, from 3.x to 4.x for gluster and within 3.x for engine/hosts, so I don't know what value the engine would have set for FreeSpaceCriticalLowInGB in a clean install starting with oVirt 3.3.2 and GlusterFS 3.4.2 — whether a fixed one (absolute or percentage) or one depending on the total initial size of the XFS filesystem.

Still, one strange thing I notice in the engine events sequence is this (please read from the last line going up): are the Data Center messages normal/expected when you restart the engine?

2014-Jan-10, 10:50 user admin@internal initiated console session for VM f20
2014-Jan-10, 10:50 VM f20 started on Host f18ovn03
2014-Jan-10, 10:50 user admin@internal initiated console session for VM f20
--> here the VM starts ok now
2014-Jan-10, 10:49 VM f20 was started by admin@internal (Host: f18ovn03).
2014-Jan-10, 10:49 User admin@internal logged in.
---> here I logged in again to the webadmin gui after the engine restart
2014-Jan-10, 10:49 User admin@internal logged in.
2014-Jan-10, 10:47 Storage Pool Manager runs on Host f18ovn03 (Address: 10.4.4.59).
2014-Jan-10, 10:47 Invalid status on Data Center Gluster. Setting status to Non Responsive.
2014-Jan-10, 10:47 State was set to Up for host f18ovn01.
2014-Jan-10, 10:47 State was set to Up for host f18ovn03.
--> reset FreeSpaceCriticalLowInGB to 1 and restart of engine
2014-Jan-10, 10:47 User admin@internal logged out.
---> shutdown and power off of VM
2014-Jan-10, 10:37 VM f20 is down. Exit message: User shut down
2014-Jan-10, 10:36 user admin@internal initiated console session for VM f20
2014-Jan-10, 10:35 user admin@internal initiated console session for VM f20
2014-Jan-10, 10:31 User admin@internal logged in.
---> here I logged in again to the webadmin gui after the engine restart
2014-Jan-10, 10:31 User admin@internal logged in.
2014-Jan-10, 10:29 Warning, Low disk space. gvdata domain has 4 GB of free space
2014-Jan-10, 10:29 Storage Pool Manager runs on Host f18ovn03 (Address: 10.4.4.59).
2014-Jan-10, 10:29 Invalid status on Data Center Gluster. Setting status to Non Responsive.
2014-Jan-10, 10:29 State was set to Up for host f18ovn03.
2014-Jan-10, 10:29 State was set to Up for host f18ovn01.
--> here I set the new higher value for FreeSpaceCriticalLowInGB that would cause problems, and issued the engine restart
2014-Jan-10, 10:29 User admin
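The behaviour observed above (VM start refused once free space drops to or below FreeSpaceCriticalLowInGB, allowed again after lowering the threshold) boils down to a simple comparison. A sketch of that gate — function and label names are my own, not oVirt's:

```shell
#!/bin/sh
# Sketch of the low-space gate described above: a VM start is refused
# when the storage domain's free space (in GB) is at or below the
# FreeSpaceCriticalLowInGB threshold.
can_start_vm() {
    free_gb=$1
    critical_gb=$2
    if [ "$free_gb" -le "$critical_gb" ]; then
        # corresponds to ACTION_TYPE_FAILED_DISK_SPACE_LOW_ON_TARGET_STORAGE_DOMAIN
        echo "blocked"
    else
        echo "allowed"
    fi
}

# With 4 GB free: threshold 6 blocks the start, threshold 1 allows it,
# matching the engine-config experiment above.
can_start_vm 4 6
can_start_vm 4 1
```

Whether the real engine uses "at or below" or strictly "below" is not distinguishable from this experiment (both agree with 4 GB free vs. thresholds 6 and 1).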
Re: [Users] VDSM's logrotate makes Hosts fill up var eventually
On Thu, Jan 9, 2014 at 5:33 PM, Dan Kenigsberg wrote:
> The question of how much logging we should keep is a tough one. I, as a
> developer, would like to have as much as possible. For long-running busy
> systems, it has happened to me that the core bug was spotted in
> vdsm.log.67 or so.
>
> However, I understand that verbosity has its price. To understand
> whether we are stable enough to change the defaults, I need volunteers:
> people who are willing to change their log level to INFO or WARNING, and
> see if they miss useful information from their logs.
>
> When you make your log level higher, you can lower the number of kept
> log files, as they would not fill up as quickly.
>
> Would you, users@, help me with hard data?
>
> Dan.

I can't tell about missing information, because so far I haven't had big problems since 17 December, when I set logging to INFO where it was DEBUG for both vdsm and supervdsm. This is a very calm infra composed of two f19 nodes (oVirt 3.3.2 stable repo) and a GlusterFS datacenter, with only 3 VMs running almost all the time. Rotation has not yet overwritten old log files.

Under /var/log/vdsm now:

# du -sh .
137M    .
# ll supervdsm.log* | wc -l
106
# ll vdsm.log* | wc -l
113

For vdsm logs: before, I had 5 daily generated files for an overall total of about 100 MB uncompressed; now I get one file per day of about 14 MB uncompressed.

A newer log:

-rw-r--r--. 1 vdsm kvm 429380 Jan  7 12:00 vdsm.log.1.xz

A log sequence from before the change:

-rw-r--r--. 1 vdsm kvm 660280 Dec  8 22:01 vdsm.log.36.xz
-rw-r--r--. 1 vdsm kvm 659672 Dec  8 17:01 vdsm.log.37.xz
-rw-r--r--. 1 vdsm kvm 662584 Dec  8 12:01 vdsm.log.38.xz
-rw-r--r--. 1 vdsm kvm 655232 Dec  8 07:01 vdsm.log.39.xz
-rw-r--r--. 1 vdsm kvm 657832 Dec  8 02:00 vdsm.log.40.xz

For supervdsm logs there has been no rotation yet after the change, because size is set to 15M in the logrotate conf files. Before the change I had one file a day (uncompressed daily size was about 17-20 MB):

# ls -lt super*
-rw-r--r--. 1 root root 12620463 Jan  7 12:57 supervdsm.log
-rw-r--r--. 1 root root   342736 Dec 16 16:01 supervdsm.log.1.xz
-rw-r--r--. 1 root root   328952 Dec 15 14:00 supervdsm.log.2.xz
-rw-r--r--. 1 root root   343360 Dec 14 13:01 supervdsm.log.3.xz
-rw-r--r--. 1 root root   339244 Dec 13 11:00 supervdsm.log.4.xz
-rw-r--r--. 1 root root   349012 Dec 12 09:00 supervdsm.log.5.xz

I got a problem with the SPM on 7 January and I correctly found it in vdsm.log:

Thread-7000::INFO::2014-01-07 15:42:52,632::logUtils::44::dispatcher::(wrapper) Run and protect: disconnectStorageServer(domType=7, spUUID='eb679feb-4da2-4fd0-a185-abbe459ffa70', conList=[{'port': '', 'connection': 'f18ovn01.mydomain:gvdata', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '**', 'id': '9d01b8fa-853e-4720-990b-f86bdb7cfcbb'}], options=None)
Thread-7001::INFO::2014-01-07 15:42:54,776::logUtils::44::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None)
Thread-7001::ERROR::2014-01-07 15:42:54,777::task::850::TaskManager.Task::(_setError) Task=`2e948a29-fdaa-4049-ada4-421c6407b037`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2109, in getAllTasksStatuses
    raise se.SpmStatusError()
SpmStatusError: Not SPM: ()
Thread-7001::INFO::2014-01-07 15:42:54,777::task::1151::TaskManager.Task::(prepare) Task=`2e948a29-fdaa-4049-ada4-421c6407b037`::aborting: Task is aborted: 'Not SPM' - code 654
Thread-7001::ERROR::2014-01-07 15:42:54,778::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': 'Not SPM: ()', 'code': 654}}

So I think it is good for me; otherwise I would have great difficulty identifying problems inside the big files produced.

Gianluca
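For reference, a size-triggered logrotate stanza of the kind described above (rotate at 15M, keep many xz-compressed generations) might look like this — a sketch, not the exact stanza the vdsm package ships:

```
/var/log/vdsm/supervdsm.log {
    size 15M
    rotate 100
    missingok
    copytruncate
    compress
    compresscmd /usr/bin/xz
    compressext .xz
}
```

With a lower log verbosity (INFO instead of DEBUG), the same size trigger simply fires less often, which is why the daily file count dropped as reported above.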
Re: [Users] no VM network connection
On Fri, Jan 10, 2014 at 11:48 AM, Assaf Muller wrote:
> If you intend to produce data following your stress tests, please share.
>
> Assaf Muller, Cloud Networking Engineer
> Red Hat
>
> ----- Original Message -----
> From: "William Kwan"
> To: "Dan Kenigsberg" , "Assaf Muller"
> Cc: users@ovirt.org
> Sent: Thursday, January 9, 2014 11:26:50 PM
> Subject: Re: [Users] no VM network connection
>
> Thanks for the tips. Yes, after tcpdump left and right, we found an issue.
> It is definitely a user error.
>
> BTW, thanks for all the assistance. We have a promising setup working so
> far. We are running tests and putting more stress on it.

And please share the tcpdump commands and options you used too: it can help others (such as non-network-gurus like me ;-) follow along and give feedback in similar situations...

Thanks,
Gianluca
Re: [Users] VDSM's logrotate makes Hosts fill up var eventually
On Fri, Jan 10, 2014 at 2:04 PM, Markus Stockhausen wrote:
>> Ah man, now I see it, it's not rotating the right file, look:
>> # grep libvirtd /etc/logrotate.d/libvirtd
>> /var/log/libvirt/libvirtd.log {
>> # grep libvirtd.log /etc/libvirt/libvirtd.conf
>> log_outputs="1:file:/var/log/libvirtd.log"
>>
>> Bug?
>
> I would say no - at least for Fedora 19 nodes.
> In our environment (3.3.2) /var/log/libvirtd.log
> does not exist and everything is smoothly
> rotating in directory /var/log/libvirt
>
> Maybe /var/log/libvirtd.log is an old file?
>
> Markus

In the default config on my f19 nodes, the logging section of libvirtd.conf is empty and there is no other customization regarding log files in /etc/sysconfig/libvirtd; based on log location and contents, it seems to default to "debug" level, as if it were:

log_outputs="1:file:/var/log/libvirt/libvirtd.log"

So you can comment out your line, or replace it with the one above. Or use a different number if you want a different level:

#1: DEBUG
#2: INFO
#3: WARNING
#4: ERROR

Gianluca
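Putting the two fixes together — pointing the output at the path the Fedora logrotate stanza actually rotates, and raising the level so the file grows more slowly — the explicit line in /etc/libvirt/libvirtd.conf could read (a sketch of the change being discussed, using WARNING as an example level):

```
# /etc/libvirt/libvirtd.conf
# log_outputs format: "<level>:file:<path>"
# Levels: 1=DEBUG, 2=INFO, 3=WARNING, 4=ERROR
log_outputs="3:file:/var/log/libvirt/libvirtd.log"
```

libvirtd needs a restart to pick up the change.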
[Users] VoIP on virtual infrastructure and oVirt in particular
Hello,
any feedback about VoIP softswitches on virtual servers?

I found this positive whitepaper on the VMware web site, but it seems mostly related to Class 5 softswitches (intended to serve end users):
http://www.vmware.com/files/pdf/techpaper/voip-perf-vsphere5.pdf

I would like to hear about real experiences, and about Class 4 softswitches too (used for transit VoIP traffic between carriers).

Thanks in advance,
Gianluca
Re: [Users] Attach NFS storage to multiple datacenter
On Thu, Jan 16, 2014 at 5:15 PM, Sven Kieske wrote:
> Hi,
>
> that is an internal RedHat link, at least I can not access it.
>
> But it's good to know there is a doc for this.. somewhere.. ;)
>
> What about also opening up the documentation of this project?
>
> My idea would be to write docs first for oVirt and then RedHat
> just has to brand them.
>
> The oVirt documentation could use some serious improvement.
>
> Thoughts?
>
> On 16.01.2014 17:06, Elad Ben Aharon wrote:
>> Documentation:
>>
>> http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html/Administration_Guide/sect-Importing_and_Exporting_Virtual_Machines.html

It's already open to everyone, I think. Point here instead of your link:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/index.html

HIH,
Gianluca
Re: [Users] engine reports and dwh setup in 3.3.2
On Thu, Jan 16, 2014 at 4:57 PM, Yaniv Dary wrote:
> The oVirt 3.3.3 release should be working for all install and upgrade flows.
> Please use that on your setup once it's released.

I have just updated another infra (not all-in-one) from 3.3.2 to 3.3.3 beta1. It is an environment on f19: one engine + 2 hosts. There are currently no reports/dwh installed at all.

Can I go ahead with install/test, or do I have to wait for the rc and/or final 3.3.3?

If I can go, can I use
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html/Installation_Guide/Installing_and_Configuring_the_History_Database.html
as a workflow input? (with changes where needed, such as rhevm-dwh --> ovirt-engine-dwh)

Let me know so that I can try.

Gianluca
Re: [Users] oVirt 3.3.3 beta is now available
On Tue, Jan 14, 2014 at 8:21 PM, Sandro Bonazzola wrote:
> The oVirt team is pleased to announce that the 3.3.3 Beta is now
> available in beta [1] repository for testing.

I would like to say that I was able to pass from 3.3.2 to 3.3.3 beta1 without any downtime as far as VMs and my GlusterFS-based DC are concerned. I also put my line inside the wiki testing page.

My setup is composed of a Fedora 19 server used as engine (actually a vSphere VM) + 2 Fedora 19 servers used as hypervisors (BL685 G1 blades). 3 VMs are configured: CentOS 5.10, CentOS 6.4, Fedora 20.

The starting point is Fedora 19 with the stable repo and a GlusterFS datacenter. The Gluster version on the two nodes is 3.4.2-1.fc19 and I have set the quorum policy this way:

gluster volume set gvdata cluster.quorum-type none

This lets me keep the DC and VMs active also with only one host active at a time.

Steps done:
- enable beta repo on engine
- engine-setup --> proposed to update the ovirt-engine-setup package
- yum update ovirt-engine-setup
- engine-setup --> ok, downloaded new packages and ran setup
- 1 day of testing with the new engine, keeping the hosts at the same version
- migrate all 3 VMs to the current SPM host (host2)
- put non-SPM host1 in maintenance
- enable beta repo on host1
- stop vdsmd on host1 (possibly not necessary...)
- yum update on host1
- merge /etc/vdsm/logger.conf with the created .rpmnew

NOTE: in December I had changed the DEBUG entries to INFO in this file. In 3.3.3 they are still set to DEBUG, but there is a new entry for hosted engine -> it could be useful to put this note in the release notes for those who simply update a host and don't redeploy it (in the redeploy case I don't know whether a yum remove / yum install is done or the same "problem" is present...).

In my case, before the merge this was the situation:

# diff logger.conf logger.conf.rpmnew
2c2
< keys=root,vds,Storage,metadata
---
> keys=root,vds,Storage,metadata,ovirt_hosted_engine_ha
11c11
< level=INFO
---
> level=DEBUG
16c16
< level=INFO
---
> level=DEBUG
22c22
< level=INFO
---
> level=DEBUG
32a33,38
> [logger_ovirt_hosted_engine_ha]
> level=ERROR
> handlers=
> qualname=ovirt_hosted_engine_ha
> propagate=1
>
43c49
< level=INFO
---
> level=DEBUG

So after the merge I have the new config file, but with the INFO settings instead of DEBUG.

- shutdown -r of host1
- activate host1 in webadmin gui
- migrate all 3 VMs to the updated host1
- some tests accessing consoles
- select host1 as SPM in webadmin gui before working on the update of host2; wait and see --> ok
- put host2 in maintenance
- enable beta repo on host2
- stop vdsmd on host2
- yum update on host2
- merge the new /etc/vdsm/logger.conf
- shutdown -r of host2
- activate host2 in webadmin gui
- migrate all 3 VMs to the updated host2
- test access to consoles

Bye,
Gianluca
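A quick way to spot config files that need this kind of manual merge after an update is to look for the *.rpmnew files rpm leaves behind. A generic sketch (the helper name is mine; the demo below only simulates the layout in a throwaway directory instead of touching the real /etc):

```shell
#!/bin/sh
# List config files that rpm left as *.rpmnew, i.e. candidates for a
# manual merge like the logger.conf one above.
list_rpmnew() {
    find "$1" -name '*.rpmnew' 2>/dev/null | sort
}

# Demo on a throwaway directory standing in for /etc:
demo=$(mktemp -d)
mkdir -p "$demo/vdsm"
touch "$demo/vdsm/logger.conf" "$demo/vdsm/logger.conf.rpmnew"
list_rpmnew "$demo"
rm -rf "$demo"
```

On a real host you would run `list_rpnew /etc` — sorry, `list_rpmnew /etc` — right after the `yum update` step, before rebooting.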
Re: [Users] Gluster storage
On Fri, Nov 29, 2013 at 11:48 AM, Dan Kenigsberg wrote:
> On Fri, Nov 29, 2013 at 04:04:03PM +0530, Vijay Bellur wrote:
>> There are two ways in which GlusterFS can be used as a storage domain:
>>
>> a) Use gluster native/fuse access with POSIXFS
>>
>> b) Use the gluster native storage domain to bypass fuse (with
>> libgfapi). We are currently addressing an issue in libvirt
>> (https://bugzilla.redhat.com/show_bug.cgi?id=1017289) to enable
>> snapshot support with libgfapi. Once this is addressed, we will have
>> libgfapi support in the native storage domain.
>
> It won't be as immediate, since there's a required fix on Vdsm's side
> (Bug 1022961 - Running a VM from a gluster domain uses mount instead of
> gluster URI)
>
>> Till then, fuse would
>> be used with native storage domain. You can find more details about
>> native storage domain here:
>>
>> http://www.ovirt.org/Features/GlusterFS_Storage_Domain

Hello, revamping this thread.
I'm now using oVirt 3.3.3 beta1 (after upgrade from 3.3.2) on the Fedora 19 oVirt beta repo.

The bug referred to by Vijay (1017289) is still marked as "assigned", but it is actually against RHEL 6. The bug referred to by Dan (1022961) is marked as "blocked", but I don't see any particular updates since late November; it is against RHEV-M, so I think it is for RHEL 6 too...

So what is the situation for Fedora 19 and oVirt in the upcoming 3.3.3? And for upcoming Fedora 19/20 and 3.4?

I ask because I see that in the qemu command line generated by oVirt there is:

for the virtio (virtio-blk) disk:
-drive file=/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads

for the virtio-scsi disk:
-drive file=/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads

So the disk is referenced as a mount point path, not via the gluster:// scheme.

Also, the output of the "mount" command on the hypervisors shows:

node01.mydomain:gvdata on /rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)

so it seems indeed a fuse mount, thus not using libgfapi.

The output of "lsof -Pp pid", where pid is the qemu process, does show libgfapi:

qemu-syst 2057 qemu mem REG 253,0 99440 541417 /usr/lib64/libgfapi.so.0.0.0

(btw: version 0.0.0 for a release looks strange... not so reassuring ;-)

# ll /usr/lib64/libgfapi.so.0*
lrwxrwxrwx. 1 root root    17 Jan  7 12:45 /usr/lib64/libgfapi.so.0 -> libgfapi.so.0.0.0
-rwxr-xr-x. 1 root root 99440 Jan  3 13:35 /usr/lib64/libgfapi.so.0.0.0

At http://www.gluster.org/category/qemu/ there is a schema about mount types and benchmarks:

1) FUSE mount
2) GlusterFS block driver in QEMU (FUSE bypass)
3) Base (VM image accessed directly from brick)
   (qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive file=/test/F17,if=virtio,cache=none => /test is the brick directory)

I have not understood whether we are in the Base case (best performance) or the FUSE case (worst performance).

Thanks in advance for clarifications and possible roadmaps...

Gianluca
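One way to tell the two access modes apart from a qemu command line is the form of the -drive file= argument: a gluster:// (or gluster+tcp:// etc.) URL means the QEMU libgfapi block driver, while a path under the /rhev/data-center/mnt/glusterSD mount point means I/O goes through the FUSE mount — regardless of whether libgfapi.so happens to be mapped into the process. A small sketch of that check (the function name is mine):

```shell
#!/bin/sh
# Classify how a qemu disk is accessed, from the -drive file= value.
drive_access_mode() {
    case "$1" in
        gluster://*|gluster+*://*)         echo "libgfapi" ;;
        /rhev/data-center/mnt/glusterSD/*) echo "fuse" ;;
        *)                                 echo "other" ;;
    esac
}

drive_access_mode "/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata/some/image"
drive_access_mode "gluster://node01.mydomain/gvdata/some/image"
```

By this criterion the two -drive lines quoted above are plain FUSE-mount access.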
Re: [Users] iSCSI storage domain.
On Fri, Jan 17, 2014 at 10:25 AM, Elad Ben Aharon wrote:
> Indeed,
> Using the ovirt-engine APIs you can edit your iSCSI storage domain and extend it
> by adding physical volumes from your shared storage (the process is managed
> by ovirt-engine; the actual actions on your storage are done by your host,
> which has VDSM installed on it).
>
> ----- Original Message -----
> From: "Hans Emmanuel"
> To: "Elad Ben Aharon"
> Cc: users@ovirt.org
> Sent: Friday, January 17, 2014 6:37:56 AM
> Subject: Re: [Users] iSCSI storage domain.
>
> Thanks for the reply.
>
> Are you suggesting to use oVirt Engine to resize the iSCSI storage domain?
>
> On Thu, Jan 16, 2014 at 7:11 PM, Elad Ben Aharon wrote:
>
>> Hi,
>>
>> Both storage types are suitable for a production setup.
>> As for your second question -
>> manual LVM resizing is not recommended; why not use RHEVM for that?
>>
>> ----- Original Message -----
>> From: "Hans Emmanuel"
>> To: users@ovirt.org
>> Sent: Thursday, January 16, 2014 3:30:39 PM
>> Subject: Re: [Users] iSCSI storage domain.
>>
>> Could any one please give valuable suggestions?
>> On 16-Jan-2014 12:28 PM, "Hans Emmanuel" < hansemman...@gmail.com > wrote:
>>
>> Hi all,
>>
>> I would like to get some comparison of NFS & iSCSI storage domains. Which
>> one is more suitable for a production setup? I am planning to use LVM-backed
>> DRBD replication. And also, is it possible to expand an iSCSI storage domain
>> by simply resizing the backend LVM?
>>
>> --
>> Hans Emmanuel

I think one option should be to give the user, if not already present/tested/supported, the opportunity to resize the LUN on the storage array and then run a rescan from oVirt to see the new size and use it, without disruption of service. This would also avoid LUN proliferation on storage arrays, which in general provide storage to many consumers, not just oVirt.

Gianluca
Re: [Users] iSCSI storage domain.
On Fri, Jan 17, 2014 at 10:32 AM, Gianluca Cecchi wrote:
> I think one option should be to provide the user, if not already
> present/tested/supported, the opportunity to resize the LUN on the storage
> array and then run a rescan from oVirt to see the new size and use it
> without disruption of service.
> To avoid also LUN proliferation on storage arrays that in general are
> providing storage to many sources, also different from oVirt itself.
> Gianluca

So, something like this for vSphere:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1017662

Gianluca
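On the hypervisor side, the OS-level part of such an online LUN grow usually amounts to rescanning the iSCSI sessions, refreshing the multipath map, and growing the LVM physical volume; oVirt itself would still need to pick up the new size. A sketch of the typical sequence — all device/map names below are placeholders for this example:

```shell
#!/bin/sh
# Sketch: grow an already-attached iSCSI LUN at the OS level, after the
# LUN has been resized on the array. Device names are placeholders.
#
#   iscsiadm -m session --rescan          # rescan all iSCSI sessions
#   multipathd -k"resize map mpatha"      # refresh the multipath map size
#   pvresize /dev/mapper/mpatha           # grow the LVM physical volume
#
# Helper (name is mine): the per-device sysfs file that a SCSI rescan
# writes "1" to under the hood, shown here for reference.
rescan_path() {
    echo "/sys/block/$1/device/rescan"
}

rescan_path sda
```

Whether the engine then exposes the extra space without a cycle through maintenance is exactly the open question in this thread.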
Re: [Users] engine reports and dwh setup in 3.3.2
On Thu, Jan 16, 2014 at 5:55 PM, Yaniv Dary wrote:
> I think at this point it is safe to install.
>
>> If I can go, can I use
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html/Installation_Guide/Installing_and_Configuring_the_History_Database.html
>> as a workflow input?
>> (with changes where needed, as rhevm-dwh --> ovirt-engine-dwh)
>
> You can try it, but not sure everything will be applicable.
>
>> Let me know so that I can try
>>
>> Gianluca

The dwh part went ok and the ovirt_engine_history db was created. The reports part got an error:

# ovirt-engine-reports-setup
Welcome to ovirt-engine-reports setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service (yes|no): yes
Stopping ovirt-engine...                     [ DONE ]
Editing XML files...                         [ DONE ]
Setting DB connectivity...                   [ DONE ]
Please choose a password for the reports admin user(s) (ovirt-admin):
Warning: Weak Password.
Retype password:
Warning: Weak Password.
Deploying Server...                          [ DONE ]
Updating Redirect Servlet...                 [ DONE ]
Importing reports...                         [ DONE ]
Customizing Server...                        [ DONE ]
global name 'preserveReportsJobs' is not defined
Error encountered while installing ovirt-engine-reports, please consult the log file: /var/log/ovirt-engine/ovirt-engine-reports-setup-2014_01_17_11_20_29.log

It seems the main point in the log file, as on standard output, is this:

2014-01-17 11:22:22::ERROR::ovirt-engine-reports-setup::1196::root:: Failed to complete the setup of the reports package!
2014-01-17 11:22:22::DEBUG::ovirt-engine-reports-setup::1197::root:: Traceback (most recent call last):
  File "/bin/ovirt-engine-reports-setup", line 1163, in main
    if preserveReportsJobs:
NameError: global name 'preserveReportsJobs' is not defined

Let me know if you need full logs or if this is sufficient to debug the code and propose a solution so I can try again.

Gianluca
Re: [Users] engine reports and dwh setup in 3.3.2
On Fri, Jan 17, 2014 at 11:50 AM, Sandro Bonazzola wrote:
> This has been fixed yesterday http://gerrit.ovirt.org/#/c/23304/

Please note that at the end of the failed run, only the PostgreSQL database was restarted, while ovirt-engine was kept stopped, as per the input requested at the beginning of the install. Wouldn't it be better to at least ask/notify the user?

systemctl status ovirt-engine
ovirt-engine.service - oVirt Engine
   Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled)
   Active: inactive (dead) since Fri 2014-01-17 11:20:33 CET; 25min ago
 Main PID: 14918 (code=exited, status=0/SUCCESS)
   CGroup: name=systemd:/system/ovirt-engine.service

Jan 17 11:02:16 myengine.mydomain systemd[1]: Starting oVirt Engine...
Jan 17 11:02:16 myengine.mydomain systemd[1]: Started oVirt Engine.
Jan 17 11:02:16 myengine.mydomain ovirt-engine.py[14918]: 2014-01-17 11:02:16,886 ovirt-engine: WARNING _setupEngineApps:...nored
Jan 17 11:20:32 myengine.mydomain systemd[1]: Stopping oVirt Engine...
Jan 17 11:20:33 myengine.mydomain systemd[1]: Stopped oVirt Engine.

In my case, restarting the engine gave errors and it was unable to start up correctly: it was started from a systemctl point of view, but I was not able to log in to the webadmin GUI. In the engine log:

2014-01-17 11:47:36,929 ERROR [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-4) [7909477c] Timer update run timeinfo failed. Exception::
java.lang.IllegalStateException: JBAS011049: Component is stopped
    at org.jboss.as.ee.component.BasicComponent.waitForComponentStart(BasicComponent.java:104) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:127) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:85) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:116) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:48) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:211) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:363) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:194) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:165) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:173) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation.jar:1.1.1.Final]

Anyway, I applied the patch to /usr/share/ovirt-engine-reports/ovirt-engine-reports-setup.py and then successfully re-executed ovirt-engine-reports-setup ;-)

# ovirt-engine-reports-setup
Welcome to ovirt-engine-reports setup utility
Backin
Re: [Users] engine reports and dwh setup in 3.3.2
On Fri, Jan 17, 2014 at 12:10 PM, Gianluca Cecchi wrote:
>
> engine web admin available and reportsportal too
> but the password set up during first broken install for ovirt-admin
> user doesn't work
>
> Invalid credentials supplied.
>
> Could not login to JasperReports Server.
>
> Any hints on how to solve? Manually change in Postgres?

Ok, digging into /bin/ovirt-engine-reports-setup I see that the password is actually written in clear text inside the file /usr/share/ovirt-engine-reports/reports/users/ovirt-002dadmin.xml. Not a great security choice, also because the file is currently readable by everyone on the system (and "rpm -qVv" doesn't complain about its permissions). I was then able to connect to the reports portal and see some output reports... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Making v2v easier?
On Sun, Jan 19, 2014 at 7:13 AM, Ted Miller wrote:
>
> * BEAT HEAD AGAINST WALL because virt-v2v.x86_64 0.9.1-5.el6_5 from Centos
> updates doesn't seem to know about .ova files. I was following the
> instructions in
> Red_Hat_Enterprise_Virtualization-3.3-Beta-V2V_Guide-en-US.pdf guide, but I
> figured out that the v2v they are talking about has an "-i ova" option,
> while the help file for the version I am using does not list ova as an
> option for -i, and if I try to use it, it tells me that it is an invalid
> option, and if I leave it off it goes off looking for a qemu///system to
> attach to. help files for v2v say nothing at all about .ova files.
>
> I am wondering where to find a v2v program that knows about .ova files, or
> else am I going to have to import all my VMWare files to my (non-ovirt) KVM
> host, and then drag them into ovirt from libvirt?

I did a bit of research on this. Strange: I just updated a CentOS 6.4 VM to the latest 6.5 and it is indeed there as you wrote (also matching RHEL 6.5, I think):
virt-v2v-0.9.1-5.el6_5.x86_64
and ova does seem to be missing as an option...
Instead, on a Fedora 19 system with virt-v2v-0.9.0-3.fc19.x86_64, I do have it.
So for what reason was it removed in newer packages? It also seems strange to see a Fedora package (even if 19 and not 20) older than a RHEL 6 one...
The RHEL 6 version was bumped this way, skipping 0.9.0:

* Wed Jun 12 2013 Matthew Booth - 0.9.1-1
- Rebase to new upstream release
* Mon Oct 22 2012 Matthew Booth - 0.8.9-2

while Fedora 19 has currently stopped at

* Wed Jul 03 2013 Richard W.M. Jones - 0.9.0-3
- Default to using the appliance backend, since in Fedora >= 18 the libvirt backend doesn't support the 'iface' parameter which virt-v2v requires.
- Add BR perl(Sys::Syslog), required to run the tests.
- Remove some cruft from the spec file.

BTW in F20 we do have ova too:
virt-v2v-0.9.0-5.fc20.x86_64
and in fact it has the older version...
For RHEL 6 I got no further than here: http://lists.ovirt.org/pipermail/users/2013-May/014457.html
And there is no particular virt-v2v package in the RHEV source repo: http://ftp.redhat.com/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/
For sure the RHEV 3.3 beta guide is incorrect at the moment: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.2/html-single/V2V_Guide/index.html#chap-V2V_Guide-Installing_virt_v2v
because it says:

"virt-v2v is available on Red Hat Network (RHN) in the Red Hat Enterprise Linux Server (v.6 for 64-bit x86_64) or Red Hat Enterprise Linux Workstation (v.6 for x86_64) channel. Ensure the system is subscribed to the appropriate channel before installing virt-v2v."

and some lines below:

"7.1. virt-v2v Parameters
The following parameters can be used with virt-v2v:
-i input
Specifies the input method to obtain the guest for conversion. The default is libvirt. Supported options are:
libvirt
Guest argument is the name of a libvirt domain.
libvirtxml
Guest argument is the path to an XML file containing a libvirt domain.
ova"

Can anyone shed some light on this?
Thanks
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] [libvirt] libvirt migration port configuration and virPortAllocator
On Thu, Jan 9, 2014 at 2:27 PM, Ján Tomko wrote:
> On 01/08/2014 05:48 PM, Eric Blake wrote:
>> On 01/08/2014 02:45 AM, Dan Kenigsberg wrote:
>>> On Wed, Jan 08, 2014 at 01:26:23AM +0100, Gianluca Cecchi wrote:
>>>> Hello,
>>>> following the bugzilla here:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1019053
>>
>> This bug was tagged against upstream libvirt. If you need it backported
>> to specific Fedora releases, it might be worth cloning the BZ and making
>> sure the clone is against Fedora instead of Virtualization Tools.
>>
>>>>
>>>> we have
>>>> "
>>>> This is now fixed by v1.1.3-188-g0196845 and v1.1.3-189-ge3ef20d
>>>> "
>>>>
>>>> Does this mean that f19 that has
>>>> libvirt-1.0.5.8-1.fc19.x86_64
>>>> is out?
>>
>> We can backport the patch to v1.0.5-maint if there is a compelling
>> reason (such as a BZ).
>>
>
> I found the BZ and backported the patches to v1.0.5-maint:
> https://bugzilla.redhat.com/show_bug.cgi?id=1018530
>
> I will do v1.1.3-maint as well.
>
> Jan
>

As it was released to updates-testing, I tested it and gave positive feedback on your backport to v1.0.5-maint: https://bugzilla.redhat.com/show_bug.cgi?id=1018530
I also successfully tested the capability to automatically switch tcp port when one is busy, without error messages in the webadmin gui. Thanks!
BTW, with gluster 3.4.2-1, already pushed to f19 stable, we now also have there the possibility to set the port range base (the skip-if-busy capability was already there before), by putting for example
option base-port 50152
inside /etc/glusterfs/glusterd.vol.
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
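For reference, that one-line tweak sits inside glusterd's management volume definition. A sketch of /etc/glusterfs/glusterd.vol follows; only the base-port line comes from the message above, the surrounding block is the stock file layout and may differ between gluster versions:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # make glusterd hand out brick ports starting here instead of the default 49152
    option base-port 50152
end-volume
```

glusterd has to be restarted for the change to take effect; already-running bricks keep their old ports.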
[Users] membership disabled due to bounces again..
I have already hit this problem before... I just found that I lost the e-mails from Tue Jan 21 11:45:10 EST 2014 to Wed Jan 22 15:35:51 EST 2014, and only a few minutes ago (22/Jan/2014 21:26 Italy time) did I receive an e-mail about my disabled membership, which apparently happened on Mon, Jan 20, 2014 at 4:11 PM, so I didn't even know about it beforehand. Any way to solve this? I have to piece together received e-mails and the archives... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Possible problems testing 3.3.3 RC
Hello, engine 3.3.3 beta1 on fedora 19 fully updated (for f19 repos). I would like to upgrade and test 3.3.3 RC.
I just updated ovirt-release, so that now I have ovirt-release-fedora-10.0.1-2.noarch, ran "yum clean metadata", and modified the new ovirt repo file, enabling updates-testing and disabling beta:

[ovirt-stable]
name=Older Stable builds of the oVirt project
baseurl=http://ovirt.org/releases/stable/rpm/Fedora/$releasever/

[ovirt-3.3.2]
name=oVirt 3.3.2 release
baseurl=http://resources.ovirt.org/releases/3.3.2/rpm/Fedora/$releasever/

[ovirt-updates-testing]
name=Test Updates builds of the oVirt project
baseurl=http://ovirt.org/releases/updates-testing/rpm/Fedora/$releasever/

Even if the ovirt update will be made through engine-setup, I presume that a "yum update" shouldn't give any dependency error, correct? Instead I have:

# yum update
Loaded plugins: langpacks, refresh-packagekit, versionlock
Resolving Dependencies
--> Running transaction check
---> Package ovirt-engine-dwh.noarch 0:3.3.3-1.fc19 will be updated
---> Package ovirt-engine-dwh.noarch 0:3.4.0-0.2.master.20140122122024.fc19 will be an update
--> Processing Dependency: ovirt-engine-dwh-setup >= 3.4.0-0.2.master.20140122122024.fc19 for package: ovirt-engine-dwh-3.4.0-0.2.master.20140122122024.fc19.noarch
---> Package ovirt-engine-lib.noarch 0:3.3.3-0.1.beta1.fc19 will be updated
---> Package ovirt-engine-lib.noarch 0:3.3.3-1.fc19 will be an update
---> Package ovirt-engine-setup.noarch 0:3.3.3-0.1.beta1.fc19 will be updated
---> Package ovirt-engine-setup.noarch 0:3.3.3-1.fc19 will be an update
---> Package ovirt-engine-websocket-proxy.noarch 0:3.3.3-0.1.beta1.fc19 will be updated
---> Package ovirt-engine-websocket-proxy.noarch 0:3.3.3-1.fc19 will be an update
---> Package ovirt-image-uploader.noarch 0:3.3.2-1.fc19 will be updated
---> Package ovirt-image-uploader.noarch 0:3.3.3-1.fc19 will be an update
---> Package ovirt-iso-uploader.noarch 0:3.3.2-1.fc19 will be updated
---> Package ovirt-iso-uploader.noarch 0:3.3.3-1.fc19 will be an update
---> Package ovirt-log-collector.noarch 0:3.3.2-2.fc19 will be updated
---> Package ovirt-log-collector.noarch 0:3.3.3-1.fc19 will be an update
--> Running transaction check
---> Package ovirt-engine-dwh-setup.noarch 0:3.4.0-0.2.master.20140122122024.fc19 will be installed
--> Processing Dependency: ovirt-engine-setup-plugin-ovirt-engine-common for package: ovirt-engine-dwh-setup-3.4.0-0.2.master.20140122122024.fc19.noarch
--> Finished Dependency Resolution
Error: Package: ovirt-engine-dwh-setup-3.4.0-0.2.master.20140122122024.fc19.noarch (ovirt-updates-testing)
       Requires: ovirt-engine-setup-plugin-ovirt-engine-common
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Why is ovirt-engine-dwh.noarch 0:3.4.0-0.2.master.20140122122024.fc19 proposed as an update?
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Possible problems testing 3.3.3 RC
On Thu, Jan 23, 2014 at 12:08 PM, Yedidyah Bar David wrote:
>
> Because it's in the updates repo, not sure why is that. Yaniv/Eyal?
> --
> Didi

Indeed I see that:
1) In this repo http://resources.ovirt.org/releases/beta/rpm/Fedora/19/noarch/ I would expect dwh for 3.4, while I have:
ovirt-engine-dwh-3.3.3-1.fc19.noarch.rpm
2) in this repo http://resources.ovirt.org/releases/updates-testing/rpm/Fedora/19/noarch/ I would expect dwh for 3.3.3, while I have:
ovirt-engine-dwh-3.3.3-1.fc19.noarch.rpm
ovirt-engine-dwh-3.4.0-0.2.master.20140122122024.fc19.noarch.rpm
ovirt-engine-dwh-setup-3.4.0-0.2.master.20140122122024.fc19.noarch.rpm

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Possible problems testing 3.3.3 RC
On Thu, Jan 23, 2014 at 1:23 PM, Sandro Bonazzola wrote:
> On 23/01/2014 12:12, Gianluca Cecchi wrote: [snip]
>> 2) in this repo
>> http://resources.ovirt.org/releases/updates-testing/rpm/Fedora/19/noarch/
>> I would expect dwh for 3.3.3 while I have
>> ovirt-engine-dwh-3.3.3-1.fc19.noarch.rpm
>> ovirt-engine-dwh-3.4.0-0.2.master.20140122122024.fc19.noarch.rpm
>> ovirt-engine-dwh-setup-3.4.0-0.2.master.20140122122024.fc19.noarch.rpm
>
> It seems that it has been added after 3.3.3-rc announce by mistake.
> Will fix it

OK. The updates-testing repo now seems to be ok: ovirt-engine-dwh-3.3.3-1.fc19.noarch.rpm. The upgrade went smoothly. I'm going to test. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] disks successfully removed with storage failure message
On Mon, Dec 16, 2013 at 10:52 AM, Vered Volansky wrote:
> Gianluca, Huntxu
>
> It looks like huntxu's solution is the way to go.
> I'd appreciate it if any of you will open a bug on this issue and inform the
> list.
>
> If there's anything else regarding this issue please ask.
>
> Thanks,
> Vered
>
> ----- Original Message -
>> From: "Gianluca Cecchi"
>> To: "Vered Volansky"
>> Cc: "users" , "huntxu"
>> Sent: Saturday, December 14, 2013 11:10:35 AM
>> Subject: Re: [Users] disks successfully removed with torage failure message
>>
>> On Dec 14, 2013 9:41 AM, "Vered Volansky" wrote:
>> >
>> > Hi,
>> >
>> > I've looked at the logs and will have to discuss the issue with some
>> people, which I will only be able to do tomorrow.
>> > In the mean time, When the message to manually remove the disks appear,
>> it actually means manually - not through the engine at all, if that helps
>> for now.
>> > This means through the DB, which it not recommended and I'd prefer to
>> wait till tomorrow and figure this out.
>> >
>> > Regards,
>> > Vered
>>
>> OK. Let me know when you have something for me to try.
>> I'm going to analyze the other user suggestion too
>>
>> Gianluca

Hello, resuming this thread to confirm it as resolved in 3.3.3 RC. I had 3 disks in Illegal state since 3.3.2 beta that I was unable to delete. It seems it was already fixed in beta1, but I didn't test there. I think it is related to this bug in QA: https://bugzilla.redhat.com/show_bug.cgi?id=1046600 Just to confirm that I was indeed able to delete them, and so can confirm this for 3.3.3 RC. Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] oVirt 3.4 release notes and what's new section empty
Hello, at http://www.ovirt.org/OVirt_3.4.0_release_notes the "what's new" section seems to be empty. What major/minor features should one expect as added value when moving from 3.3.3 to 3.4? Any other pointers? Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt 3.4 beta on F19: engine-setup error "Failed to execute stage 'Misc configuration': Command '/bin/systemctl' failed to execute"
On Thu, Jan 23, 2014 at 3:21 PM, Jorick Astrego wrote:
> Yes that's it:
>
> Adding "--otopi-environment="OVESETUP_SYSTEM/shmmax=int:68719476736" "
> works.
>
> I never thought a known issue would crop up so soon, so I just started
> installing.
>
> Thanks
>

Hello, is it correct to say that if I put something like this in my /etc/sysctl.d/ovirt-engine.conf:

# cat /etc/sysctl.d/ovirt-engine.conf
# ovirt-engine configuration.
# Put 200Mb 15/01/14
kernel.shmmax = 209715200

and then run:

# systemctl stop ovirt-engine
# systemctl stop postgresql
# sysctl -p
# sysctl -a --system | grep shmmax   (to verify the new settings)
# systemctl start postgresql
# systemctl start ovirt-engine

then I can run engine-setup without needing to pass the environment variable any more? At least it worked for me for the upgrades to 3.3.3beta1 and 3.3.3rc. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
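As a sanity check on the value used above: kernel.shmmax is expressed in bytes, and "200Mb" in the comment means 200 binary megabytes. Plain shell arithmetic confirms the figure in the config file (nothing oVirt-specific, just the conversion):

```shell
# 200 MiB in bytes, matching the kernel.shmmax line above
shmmax=$((200 * 1024 * 1024))
echo "kernel.shmmax = $shmmax"   # prints: kernel.shmmax = 209715200
```

The same arithmetic explains Jorick's --otopi-environment value: 68719476736 is 64 GiB (64 * 1024^3).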
Re: [Users] Spice-proxy questions
On Fri, Jan 24, 2014 at 6:31 PM, David Li wrote:
>
>
> Hi Tomas, David:
>
>
> I set my SpiceProxyDefault to "http://:80
> and my .vv file indeed reflects that.
>
> However this is still not working. The popup window still fails to connect to
> the graphic server.
> Any further suggestion how to debug?
>
> Thanks.
>
> David

You have to configure a web proxy server such as Squid. Port 80 on your engine is already occupied by the engine web component itself. On the engine, typically:

# lsof -i :80
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
httpd    8191   root    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8195 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8196 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8197 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8762 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8763 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8764 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    8770 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd    9346 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd   10338 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)
httpd   10340 apache    4u  IPv6  49597      0t0  TCP *:http (LISTEN)

BTW, I had problems configuring my engine to work with both SpiceProxy and WebSocketProxy, so I ended up configuring another server (10.4.4.63) with squid on port 80, so that now my engine (ip 10.4.4.58) has:

# engine-config -g SpiceProxyDefault
SpiceProxyDefault: http://10.4.4.63:80 version: general

and I have both SpiceProxy and WebSocketProxy (where needed) working. In your case you should install squid on your engine and set it up on a port different from 80. The official and free documentation on the just released (?) final RHEV 3.3 is worth reading to configure the proxy: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Installation_Guide/sect-SPICE_Proxy.html (good advertising... ;-) it seems 3.3 has been released, but with no official announcement?
https://www.redhat.com/about/news/press-archive/2014/1/rhev-3-3-enables-openstack-ready-cloud-infrastructure or did I miss anything? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Spice-proxy questions
On Fri, Jan 24, 2014 at 6:58 PM, David Jaša wrote:
> On Pá, 2014-01-24 at 18:45 +0100, David Jaša wrote:
>> On Pá, 2014-01-24 at 09:39 -0800, David Li wrote:
>> > David,
>> >
>> > With SpiceProxy, should I point my admin portal browser to
>> > http://proxy_ip_or_fqdn:port? Does it matter which port number to use?
>>
>> Both FQDN/IP and port do matter. You have to set them so they point to a
>> running http proxy server instance (e.g. squid). Engine won't set up a
>> spice-capable http proxy
>
> Just to clarify: you need to tell squid to permit connections to spice
> port range (5900-6144 IIRC). It only allows connections to http ports by
> default.
>
> David
>
>> for you, you have to take care of it yoursef.
>>
>> What engine can do for you is to configure websocket proxy that allows
>> connections by html5 client (the one that runs entirely in browser).
>>
>> David

On my CentOS 5.10 server (10.4.4.63), which is the squid proxy for the engine, I have this configuration, and it works:

[root@c510 squid]# diff squid.conf squid.conf.orig
578,582d577
<
< acl localnet src 10.4.3.0/24    # RFC1918 possible internal network
< acl localnet src 10.4.23.0/24   # RFC1918 possible internal network
< acl localnet src 10.4.4.0/24    # RFC1918 possible internal network
<
625c620
< #http_access deny CONNECT !SSL_ports
---
> http_access deny CONNECT !SSL_ports
639d633
< http_access allow localnet
927,928c921
< #http_port 3128
< http_port 80
---
> http_port 3128

The clients where I run the browser that connects to the engine (10.4.4.58) are on the 10.4.3.0, 10.4.4.0 or 10.4.23.0 networks. There is no iptables on the proxy server. The oVirt hosts are on the 10.4.4.0 network too. HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
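Read forward rather than as a diff, the working squid.conf differs from the stock file by roughly this fragment (a sketch reconstructed only from the diff output in the message; the rest of the file stays at its defaults). Note how it implements David's point: the stock deny of CONNECT to non-SSL ports is commented out, so CONNECT tunnels to the SPICE port range (5900-6144) are no longer rejected:

```
acl localnet src 10.4.3.0/24    # RFC1918 possible internal network
acl localnet src 10.4.23.0/24   # RFC1918 possible internal network
acl localnet src 10.4.4.0/24    # RFC1918 possible internal network

#http_access deny CONNECT !SSL_ports   # disabled: SPICE tunnels via CONNECT to non-SSL ports
http_access allow localnet

http_port 80                           # instead of the default 3128
```

A tighter alternative, as David hints, would be to keep the deny rule and add the 5900-6144 range to an ACL of allowed CONNECT ports instead of allowing CONNECT everywhere for localnet.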
Re: [Users] disk could not be removed, illegal state
This was my first related post, back in 3.3.2 beta: http://lists.ovirt.org/pipermail/users/2013-December/018653.html and the problem remained the same in 3.3.2 final. Its bug id should be this one: https://bugzilla.redhat.com/show_bug.cgi?id=1046600 resolved in 3.3.3beta1. I successfully verified it for my disks in 3.3.3 rc; see my feedback here: http://lists.ovirt.org/pipermail/users/2014-January/020222.html I think final 3.3.3 will be out pretty soon, so you will be able to update and delete your disks in illegal state. Gianluca

On Fri, Jan 24, 2014 at 10:40 PM, William Kwan wrote:
> oVirt Engine Version: 3.3.2-1.el6
>
>
> On Friday, January 24, 2014 12:33 PM, Gianluca Cecchi
> wrote:
> On Fri, Jan 24, 2014 at 6:15 PM, William Kwan wrote:
>
>> Hi,
>>
>> When I remove a VM, I get the following messages
>>
>> VM has been removed, but the following disk could not be removed:
>> _disk1. These disks will appear in the main disks tab in illegal
>> state,
>> please remove manually when possible.
>>
>> Is this a known bug or is it some problem with my setup?
>> I think I can find the "disk" by searching the ID from the filesystem. It
>> may stay as an entry in the disk list and I can remove it.
>>
>> Thanks
>> Will
>
>
>
> Known bug. Verified resolved since 3.3.3beta1.
>
> what's your version?
> Gianluca
>
>
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Spice-proxy questions
On Fri, Jan 24, 2014 at 8:45 PM, David Li wrote:
> David
>
> I set up the squid proxy on the same machine as ovirt-engine. I have this in
> squid.conf:
>
>
>
> ---
> acl localhost src 10.10.2.143/32 # for the machine running the browser
>
>
> #safe ports
> acl SSL_ports port 443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535 # unregistered ports <-- will this
> allow connections to spice port range (5900-6144 IIRC).???
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
>
>
>
> # Squid normally listens to port 3128
> http_port 3128
>
> # Deny requests to certain unsafe ports
> http_access deny !Safe_ports
>
> -
>
> and set my SpiceProxyDefault=http://10.10.2.143:3128
>
>
>
> So far, this is still not working. The Spice popup window still fails to
> connect to the graphics server and html5 browser window remains blank.
> Are there any log files that can be used to debug this?
>
> Thanks.
>
>

There is something I don't understand, or that you are doing incorrectly. From what you write it seems that:
- your engine has ip 10.10.2.143
- From which ip do you run your browser?
- Can that ip connect to the engine on port 3128? Perhaps your engine setup already configured iptables (or firewalld) and it is blocking you. You can easily verify at runtime by putting this line on the engine:
iptables -I INPUT -s xxx.yyy.www.zzz -j ACCEPT
where xxx.yyy.www.zzz is the ip of the client from where you run the browser, so that the accept rule goes on top of the INPUT chain; then retry connecting to the VM console.
- Which ips do the hosts where the VMs run have?
- Is the engine (that is, your proxy, in your configuration) able to reach the ips of your hosts on the spice ports (5900-..)?
Also see my previous thread here: http://lists.ovirt.org/pipermail/users/2013-December/018554.html and the useful answers there. I cannot test your config, because I have no control over my network: the network admins only allow ports 80 and 443, which are already taken by the engine itself, so I can't test putting the proxy on the engine itself... HIH anyway, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] engine reports and dwh setup in 3.3.2
On Sun, Jan 26, 2014 at 4:29 PM, Yaniv Dary wrote:
>
>
> - Original Message -
>> From: "Gianluca Cecchi" [snip]
>> Ok, digging into /bin/ovirt-engine-reports-setup I see that actually
>> the password is written in clear text inside file
>> /usr/share/ovirt-engine-reports/reports/users/ovirt-002dadmin.xml
>
> This was only due to setup failing in the middle, this is improved in 3.4 and
> you won't see that anymore.
>
>
>>
>> Not a great security choice, also because the file is currently
>> readable by everyone on the system (and "rpm -qVv" doesn't complain
>> about its permissions).
>> I was then able to connect to reports portal and to see some output
>> reports...
>>
>> Gianluca
>>

Ok. See also my NOTES from the previous e-mail about possible improvements. Thanks, Gianluca

"NOTES:
- keep an eye on ovirt-engine not being restarted after a reports setup error; not a good thing IMHO
- in the patched file I see, some lines above, "logging.debug("Imporing users")" missing the "t". So whenever the file gets modified again, this typo fix can go in too...
- in the main reports page, if you click on the "Need help logging in?" link, the pop-up says "rhevm-admin" instead of what setup specified as "ovirt-admin" (none of these logins works: (Please choose a password for the reports admin user(s) (ovirt-admin): )
" ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Spice-proxy questions
On Tue, Jan 28, 2014 at 9:49 AM, David Jaša wrote:
> On Po, 2014-01-27 at 11:21 -0800, David Li wrote:
>> Do I need to generate and install a x509 key pair for the squid proxy? How
>> can I find out if the key pair has already been done?
>
> No. Spice channels are encrypted end-to-end so if you configure squid to
> forward the connections just to the display network range of the hosts,
> you anly allow connections that are encrypted anyway - so the TLS would
> be here quite redundant.
>
> Have you made sure that you have opened port 3128 in iptables? If the
> box doesn't use firewalld (which is the case on RHEL/CentOS, Fedora must
> be configured to disable firewalld but I presume that engine-setup does
> that), add the port definition among other opened ports
> in /etc/sysconfig/iptables.
>
> David
>
> PS: I'm mangling reply-to: header for a reason. Please don't hog my
> inbox, I can very well read your messages on-list.

Thank you. I made a test setting the proxy on the engine and it seems to be ok. I have no ports other than 80 and 443 allowed, so I have to use an environment with all the servers in the 10.4.4.0 network:

client: 10.4.4.61
engine: 10.4.4.60
test VM: 10.4.4.63
host (where the test VM is running): 10.4.4.59

# engine-config -s SpiceProxyDefault="http://10.4.4.60:3128"
# systemctl restart ovirt-engine

I configured squid on the engine on its default port 3128. I have firewalld configured on the engine, with this in /etc/firewalld/zones/public.xml (the XML markup and the port/service entries were stripped in the archive; only the zone's short name and description survive):

Public
For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.
On the client, CentOS 6.5 (10.4.4.61): I run firefox and connect to the webadmin gui of the engine (https://10.4.4.60). I have enabled the spice proxy for the test VM. I select console and specify to run /usr/bin/remote-viewer at the popup window; after enabling popups in firefox, I successfully get the console.

$ ps -ef | grep remote
g.cecchi 23897 23726  0 15:50 pts/0    00:00:00 /usr/bin/remote-viewer /tmp/console.vv
g.cecchi 23923 23704  0 15:52 pts/0    00:00:00 grep remote

$ sudo lsof -Pp 23897 | grep TCP
remote-vi 23897 g.cecchi    4u IPv6 498441 0t0 TCP localhost:45817->localhost:6010 (ESTABLISHED)
remote-vi 23897 g.cecchi   14u IPv4 498447 0t0 TCP 10.4.4.61:36909->10.4.4.60:3128 (ESTABLISHED)
remote-vi 23897 g.cecchi   20u IPv4 498449 0t0 TCP 10.4.4.61:36910->10.4.4.60:3128 (ESTABLISHED)
remote-vi 23897 g.cecchi   24u IPv4 498451 0t0 TCP 10.4.4.61:36911->10.4.4.60:3128 (ESTABLISHED)
remote-vi 23897 g.cecchi   25u IPv4 498452 0t0 TCP 10.4.4.61:36912->10.4.4.60:3128 (ESTABLISHED)
remote-vi 23897 g.cecchi   60u IPv4 497799 0t0 TCP 10.4.4.61:44961->10.4.4.60:443 (ESTABLISHED)

On the engine (10.4.4.60):

# netstat -an | grep 3128
tcp6  0  0 :::3128         :::*             LISTEN
tcp6  0  0 10.4.4.60:3128  10.4.4.61:36912  ESTABLISHED
tcp6  0  0 10.4.4.60:3128  10.4.4.61:36911  ESTABLISHED
tcp6  0  0 10.4.4.60:3128  10.4.4.61:36910  ESTABLISHED
tcp6  0  0 10.4.4.60:3128  10.4.4.61:36909  ESTABLISHED

On the hypervisor (10.4.4.59):

$ netstat -an | grep 5901
tcp  0  0 0.0.0.0:5901    0.0.0.0:*        LISTEN
tcp  0  0 10.4.4.59:5901  10.4.4.60:38879  ESTABLISHED
tcp  0  0 10.4.4.59:5901  10.4.4.60:38881  ESTABLISHED
tcp  0  0 10.4.4.59:5901  10.4.4.60:38880  ESTABLISHED
tcp  0  0 10.4.4.59:5901  10.4.4.60:38882  ESTABLISHED

So all seems ok. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
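A quick way to confirm the proxy actually landed in the file remote-viewer is launched with: the console.vv file is a plain INI-style file, so a grep is enough. A small sketch follows; the field names (host, port, tls-port, proxy) are the usual virt-viewer ones, but your file may contain a different set, and the helper name show_vv is my own:

```shell
# Print the connection-related fields of a virt-viewer .vv file.
show_vv() {
    grep -E '^(host|port|tls-port|proxy)=' "$1"
}

# /tmp/console.vv is the path seen in the ps output above;
# the "|| true" makes this a no-op if the file is already gone.
show_vv /tmp/console.vv 2>/dev/null || true
```

If the proxy= line shows the squid URL you set with engine-config, the client side is wired up correctly and any remaining failure is on the proxy-to-host leg.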
Re: [Users] engine reports and dwh setup in 3.3.2
On Thu, Jan 16, 2014 at 4:57 PM, Yaniv Dary wrote: > The oVirt 3.3.3 release should be working for all install and upgrade flows. > Please use that on your setup once it's released. > > > > Yaniv > > - Original Message - >> From: "Yaniv Dary" >> To: "Gianluca Cecchi" >> Cc: "users" >> Sent: Sunday, January 5, 2014 3:54:40 PM >> Subject: Re: [Users] engine reports and dwh setup in 3.3.2 >> >> >> >> - Original Message - >> > From: "Gianluca Cecchi" >> > To: "Yaniv Dary" >> > Cc: "users" >> > Sent: Saturday, January 4, 2014 12:59:46 AM >> > Subject: Re: [Users] engine reports and dwh setup in 3.3.2 >> > >> > On Wed, Dec 25, 2013 at 1:00 AM, Yaniv Dary wrote: >> > >> > > >> > > Hi, >> > > We have found a few blockers on the setup of dwh and reports. >> > > We hope to resolve the issues in the next few days. If you restore your >> > > environment using the backups, you will be able to upgrade. >> > > I've put a note to let you know when new packages are available. >> > >> > Ok, I'll wait good news about these items >> > Just a question: when you write about blockers, are you referring to >> > updates only or also to new setups directly made in 3.3.2 >> > environments? >> >> Local fresh installs should work and upgrades\remote fresh install will not >> probably. 
>>
>>
>> >
>> > thanks in advance,
>> > Gianluca
>> >

Just to note that on my oVirt 3.3.2-1 allinone on fedora 19, after enabling updates-testing I get the same failure using 3.3.3rc:

[root@tekkaman ~]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
         Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-aio.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
         Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140129214203.log
         Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

         --== PACKAGES ==--

[ INFO ] Checking for product updates...
         Setup has found updates for some packages, do you wish to update them now? (Yes, No) [Yes]:
[ INFO ] Checking for an update for Setup...

         --== NETWORK CONFIGURATION ==--

[WARNING] Failed to resolve tekkaman.localdomain.local using DNS, it can be resolved only locally
         Setup can automatically configure the firewall on this system.
         Note: automatic configuration of the firewall may overwrite current settings.
         Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO ] iptables will be configured as firewall manager.
         --== DATABASE CONFIGURATION ==--

         --== OVIRT ENGINE CONFIGURATION ==--

         Skipping storing options as database already prepared

         --== PKI CONFIGURATION ==--

         PKI is already configured

         --== APACHE CONFIGURATION ==--

         --== SYSTEM CONFIGURATION ==--

         --== END OF CONFIGURATION ==--

[ INFO ] Stage: Setup validation
         During execution engine service will be stopped (OK, Cancel) [OK]:
[ INFO ] Cleaning stale zombie tasks

         --== CONFIGURATION PREVIEW ==--

         Database name                  : engine
         Database secured connection    : False
         Database host                  : localhost
         Database user name             : engine
         Database host name validation  : False
         Datbase port                   : 5432
         NFS setup                      : True
         Firewall manager               : iptables
         Update Firewall                : True
         Configure WebSocket Proxy      : True
         Host FQDN                      : tekkaman.localdomain.local
         NFS mount point                : /ISO
         Set application as default page: True
         Configure Apache SSL           : False
         Upgrade packages               : True

         Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Checking the DB consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Fixing DB inconsistencies
[ INFO ] Stage: Package installation
[ INFO ] Yum
Re: [Users] engine reports and dwh setup in 3.3.2
On Wed, Jan 29, 2014 at 10:49 PM, Gianluca Cecchi wrote:
> [ INFO ] Backing up database to
> '/var/lib/ovirt-engine/backups/engine-20140129223909.I3iTp5.sql'.
> [ INFO ] Updating database schema
> [ ERROR ] Failed to execute stage 'Misc configuration': [Errno 2] No
> such file or directory:
> '/var/lib/ovirt-engine/deployments/ovirt-engine-reports.war'
> [ INFO ] Yum Performing yum transaction rollback
> Is the patch to be put yet for final 3.3.3?
> Or will there be any particular note in release (for example to
> deinstall reports..)?
>
> Gianluca

Note that my current situation is:

[root@tekkaman ~]# ll /var/lib/ovirt-engine/deployments
total 8
lrwxrwxrwx. 1 root  root  34 Jan  7  2013 engine.ear -> /usr/share/ovirt-engine/engine.ear
-rw-r--r--. 1 ovirt ovirt 10 Dec 21 12:28 engine.ear.deployed
lrwxrwxrwx. 1 root  root  48 Feb 19  2013 ovirt-engine-reports.war -> /usr/share/ovirt-engine/ovirt-engine-reports.war
-rw-r--r--. 1 ovirt ovirt 24 Dec 21 15:43 ovirt-engine-reports.war.dodeploy

with /usr/share/ovirt-engine/ovirt-engine-reports.war being a broken link:

[root@tekkaman ~]# ll /usr/share/ovirt-engine
total 52
drwxr-xr-x. 2 root  root  4096 Jan 29 22:38 bin
drwxr-xr-x. 3 root  root  4096 Jan 21 11:10 branding
drwxr-xr-x. 2 root  root  4096 Jan 21 11:10 conf
drwxr-xr-x. 3 root  root  4096 Jan 29 22:39 dbscripts
drwxr-xr-x. 8 ovirt ovirt 4096 Jan 29 22:38 engine.ear
drwxr-xr-x. 2 root  root  4096 Jan 29 22:38 files
drwxr-xr-x. 4 root  root  4096 Jan 21 11:10 firewalld
drwxr-xr-x. 2 root  root  4096 Jan 29 22:38 manual
drwxr-xr-x. 4 root  root  4096 Jan 21 11:10 modules
drwxr-xr-x. 3 root  root  4096 Nov 24 20:51 scripts
drwxr-xr-x. 5 root  root  4096 Jan 21 11:10 services
drwxr-xr-x. 6 root  root  4096 Jan 21 11:10 setup
drwxr-xr-x. 2 root  root  4096 Jan 21 11:10 ui-plugins

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] engine reports and dwh setup in 3.3.2
On Wed, Jan 29, 2014 at 10:55 PM, Alon Bar-Lev wrote:
>
> - Original Message -
>> From: "Gianluca Cecchi"
>> To: "Yaniv Dary"
>> Cc: "users"
>> Sent: Wednesday, January 29, 2014 11:49:04 PM
>> Subject: Re: [Users] engine reports and dwh setup in 3.3.2
>>
>> Just to note that on my oVirt 3.3.2-1 allinone on fedora 19, after
>> enabling updates-testing I get the same failure using 3.3.3rc
>
> Can it be that the following is dead symlink? if so please remove.
>
> /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war

I confirm that after:
- the rollback (made automatically during the failed first engine-setup run)
- rm /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war
- running engine-setup again

it now completes correctly.

It is probably worth putting a note in the release notes, but I don't know where:
http://wiki.ovirt.org/OVirt_3.3.3_release_notes
or
http://wiki.ovirt.org/OVirt_3.2_to_3.3_upgrade ?
Because I previously used (in December, I think) the latter link to migrate from f18+3.2 to f19+3.3.2.

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
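Alon's dead-symlink check can be scripted. The following is only an illustrative sketch, not an official tool: the deployments path is the one mentioned in this thread, and the scratch directory with its sample entries is made up so the example is safe to run anywhere.

```shell
# Sketch: detect dangling symlinks in a deployments directory before
# re-running engine-setup. To keep the example self-contained it builds
# a scratch directory; on a real engine, point DEPLOY_DIR at
# /var/lib/ovirt-engine/deployments instead.
DEPLOY_DIR=$(mktemp -d)
touch "$DEPLOY_DIR/engine.ear.deployed"                      # regular file: should be kept
ln -s /no/such/target "$DEPLOY_DIR/ovirt-engine-reports.war" # dangling symlink

dangling=""
for entry in "$DEPLOY_DIR"/*; do
    # -L: entry is a symlink; ! -e: its target does not resolve
    if [ -L "$entry" ] && [ ! -e "$entry" ]; then
        dangling="$dangling $entry"
        echo "dangling symlink: $entry"   # candidate for 'rm' before engine-setup
    fi
done

rm -rf "$DEPLOY_DIR"   # clean up the scratch directory
```

Anything the loop reports is a candidate for removal before running engine-setup again, which is exactly the manual fix confirmed above.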
Re: [Users] ovirt-3.3.3 release postponed due to blockers
On Thu, Jan 30, 2014 at 10:38 PM, Robert Story wrote:
> Can we revert these packages to previous versions in the 3.3.2 stable repo
> so those of us who want/need to install new hosts in our clusters aren't
> dead in the water waiting for 3.3.3?

Also because 3.3.3, at least based on beta1 and the rc, seems the best and most stable release so far. I still need to stress-test the snapshot functionality and some other things, but in general it looks very solid, also in virtual disk management. Just my opinion.

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] about the size of an offline snapshot
On Sun, Feb 2, 2014 at 11:21 AM, Maor Lipchuk wrote:
> That is correct, you can also see the size and the fields through the
> API or ovirt-cli
> (see
> http://documentation-devel.engineering.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3-Beta/html-single/Developer_Guide/index.html#Floating_Disk_Elements)
> though, you can not see the true size in the floating disk, IINM you can
> see it under the VM snapshots disks in the API.

Just to correct the link, since 3.3 is not beta any more and the link provided is probably accessible only to Red Hat employees:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.3/html/Developer_Guide/chap-Floating_Disks.html#Floating_Disk_Elements

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] How to scratch previous versions reports and dwh installation?
Hello,
I have some problems; I think they are also due to the PostgreSQL access rule changes from f18 to f19, in particular if one has passed from Fedora 18 to Fedora 19 and from 3.2 to 3.3 as I did; see
https://bugzilla.redhat.com/show_bug.cgi?id=1009335

Right now I have a problem with my AIO installation, where trying to upgrade from 3.3.2 to 3.3.3 I got many problems related to the RDBMS. Is there a way to completely get rid of the previous installation and reinstall cleanly on 3.3.3, as I did on another system, without running into problems?

I tried removing packages:
yum remove ovirt-engine-dwh ovirt-engine-reports

and deleting the related databases:
drop database ovirt_engine_history;
drop database ovirtenginereports;

But then, trying to install the packages again and running setup, I get this in the log:

2014-01-30 00:37:30::DEBUG::common_utils::446::root:: running sql query on host: localhost, port: 5432, db: ovirt_engine_history, user: engine, query: 'select 1 from history_configuration;'.
2014-01-30 00:37:30::DEBUG::common_utils::907::root:: Executing command --> '/usr/bin/psql --pset=tuples_only=on --set ON_ERROR_STOP=1 --dbname ovirt_engine_history --host localhost --port 5432 --username engine -w -c select 1 from history_configuration;' in working directory '/root'
2014-01-30 00:37:30::DEBUG::common_utils::962::root:: output =
2014-01-30 00:37:30::DEBUG::common_utils::963::root:: stderr = ERROR: relation "history_configuration" does not exist
LINE 1: select 1 from history_configuration;

Can anyone tell if I have to run any other purge operation, such as on any users or tables in the engine db itself? And also, what would be the expected configuration of /var/lib/pgsql/data/pg_hba.conf and possibly /var/lib/pgsql/data/postgresql.conf for a Fedora 19 system, so that a setup for reports and dwh can complete?

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] How to scratch previous versions reports and dwh installation?
On Sun, Feb 2, 2014 at 3:34 PM, Yedidyah Bar David wrote: > - Original Message - >> From: "Gianluca Cecchi" >> To: "users" >> Sent: Sunday, February 2, 2014 3:56:25 PM >> Subject: [Users] How to scratch previous versions reports and dwh >> installation? >> >> Hello, >> I have some problems, I think they are also due to PosgreSQL access >> rules changes from f18 to f19. >> In particular if one has passed from fedora 18 to Fedora 19 and form >> 3.2 to 3.3 as I did; see >> https://bugzilla.redhat.com/show_bug.cgi?id=1009335 >> >> Right now I have a problem with my AIO installation where trying to >> upgrade from 3.3.2 to 3.3.3 I got many problems related to RDBMS. > > Is this related to dwh/reports? yes, engine is ok > >> >> Is there a way to completely get rid of prevous installtion and >> reinstall cleanly in 3.3.3 as I did on another system without getting >> problems. > > So you want to delete dwh/reports only or also the engine? Yes, only dwh/reports. Suppose I don't mind to maintain my previous (3.2.x settings in my case) and I want to start from new data Engine is ok > >> >> I tried removing packages: >> yum remove ovirt-engine-dwh ovirt-engine-reports >> >> and deleting related databases: >> >> drop database ovirt_engine_history; >> drop database ovirtenginereports; > > This might not be enough. Sadly there is currently no cleanup tool for > dwh/reports, similar to engine-cleanup for the engine. > > For things to delete look at > /etc/ovirt*dwh* /etc/ovirt-*reports* /etc/ovirt-engine/*dwh* > /etc/ovirt-engine/*reports* > /var/lib/ovirt-engine/deployments/*reports* > >> >> But then trying to install again packages and runnign setup I get this in >> log: >> >> 2014-01-30 00:37:30::DEBUG::common_utils::446::root:: running sql >> query on host: localhost, port: 5432, db: ovirt_engine_history, user: >> engine, query: 'select 1 from history_configuration;'. 
>> 2014-01-30 00:37:30::DEBUG::common_utils::907::root:: Executing >> command --> '/usr/bin/psql --pset=tuples_only=on --set ON_ERROR_STOP=1 >> --dbname ovirt_engine_history --host localhost --port 5432 --username >> engine -w -c select 1 from history_configuration;' in working >> directory '/root' >> 2014-01-30 00:37:30::DEBUG::common_utils::962::root:: output = >> 2014-01-30 00:37:30::DEBUG::common_utils::963::root:: stderr = ERROR: >> relation "history_configuration" does not exist >> LINE 1: select 1 from history_configuration; > > This in itself is not necessarily a problem, unless of course setup failed > immediately after that. The actual problem that cause setup to abort was few lines below: 2014-01-30 00:37:42::DEBUG::common_utils::907::root:: Executing command --> '/usr/share/ovirt-engine-dwh/db-scripts/create_schema.sh -l /var/log/ovirt-engine/ovirt-history-db-install-2014_01_30_00_37_42.log -u engine_history -s localhost -p 5432 -g' in working directory '/root' 2014-01-30 00:37:42::DEBUG::common_utils::962::root:: output = 2014-01-30 00:37:42::DEBUG::common_utils::963::root:: stderr = psql: FATAL: password authentication failed for user "engine_history" FATAL: password authentication failed for user "engine_history" password retrieved from file "/tmp/pgpasss2gOEF.tmp" 2014-01-30 00:37:42::DEBUG::common_utils::964::root:: retcode = 2 2014-01-30 00:37:42::ERROR::decorators::27::root:: Traceback (most recent call last): File "/usr/share/ovirt-engine-dwh/decorators.py", line 20, in wrapped_f output = f(*args) File "/bin/ovirt-engine-dwh-setup", line 160, in createDbSchema envDict={'ENGINE_PGPASS': PGPASS_TEMP}, File "/usr/share/ovirt-engine-dwh/common_utils.py", line 967, in execCmd raise Exception(msg) Exception: Error while trying to create ovirt_engine_history db 2014-01-30 00:37:42::ERROR::ovirt-engine-dwh-setup::691::root:: Exception caught! 
2014-01-30 00:37:42::ERROR::ovirt-engine-dwh-setup::692::root:: Traceback (most recent call last): File "/bin/ovirt-engine-dwh-setup", line 623, in main createDbSchema(db_dict) File "/usr/share/ovirt-engine-dwh/decorators.py", line 28, in wrapped_f raise Exception(instance) Exception: Error while trying to create ovirt_engine_history db > > Which versions of relevant packages? Can you please post full logs? I'm using packages from 3.3.3rc so: ovirt-engine-reports-3.3.3-1.fc19.noarch ovirt-engine-dwh-3.3.3-1.fc19.noarch > >> >> >> Can anyone tell if I have to run any other purge operation, suche as >> any users
Re: [Users] How to scratch previous versions reports and dwh installation?
for example,
- yum remove ovirt-engine-dwh ovirt-engine-reports
- psql
drop database ovirt_engine_history;
drop database ovirtenginereports;

cd /etc
mv ovirt-engine-dwh/ ovirt-engine-dwh.old
mv ovirt-engine-reports/ ovirt-engine-reports.old
mv ovirt-engine/ovirt-engine-dwh ovirt-engine/ovirt-engine-dwh.old
mv /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy.old

and run setup again after installing ovirt-engine-dwh-3.3.3-1.fc19.noarch (I currently have the stable + updates-testing repos enabled)

[root@tekkaman ~]# ovirt-engine-dwh-setup
Welcome to ovirt-engine-dwh setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service (yes|no): yes
Stopping ovirt-engine... [ DONE ]
This utility can configure a read only user for DB access. Would you like to do so? (yes|no): yes
Error: user name cannot be empty
Provide a username for read-only user : ovirt_read
Provide a password for read-only user:
Warning: Weak Password.
Re-type password:
Should postgresql be setup with secure connection?
(yes|no): yes Creating DB...[ ERROR ] Error encountered while installing ovirt-engine-dwh, please consult the log file: /var/log/ovirt-engine/ovirt-engine-dwh-setup-2014_02_02_16_29_34.log The main problem in log is: 2014-02-02 16:30:04::DEBUG::ovirt-engine-dwh-setup::144::root:: ovirt engine history db creation is logged at /var/log/ovirt-engine/ovirt-history-db-install-2014_02_02_16_30_04.log 2014-02-02 16:30:04::DEBUG::common_utils::907::root:: Executing command --> '/usr/share/ovirt-engine-dwh/db-scripts/create_schema.sh -l /var/log/ovirt-engine/ovirt-history-db-install-2014_02_02_16_30_04.log -u engine_history -s localhost -p 5432 -g' in working directory '/root' 2014-02-02 16:30:04::DEBUG::common_utils::962::root:: output = 2014-02-02 16:30:04::DEBUG::common_utils::963::root:: stderr = psql: FATAL: password authentication failed for user "engine_history" FATAL: password authentication failed for user "engine_history" password retrieved from file "/tmp/pgpassW0tW24.tmp" 2014-02-02 16:30:04::DEBUG::common_utils::964::root:: retcode = 2 2014-02-02 16:30:04::ERROR::decorators::27::root:: Traceback (most recent call last): File "/usr/share/ovirt-engine-dwh/decorators.py", line 20, in wrapped_f output = f(*args) File "/bin/ovirt-engine-dwh-setup", line 160, in createDbSchema envDict={'ENGINE_PGPASS': PGPASS_TEMP}, File "/usr/share/ovirt-engine-dwh/common_utils.py", line 967, in execCmd raise Exception(msg) Exception: Error while trying to create ovirt_engine_history db 2014-02-02 16:30:04::ERROR::ovirt-engine-dwh-setup::691::root:: Exception caught! 
2014-02-02 16:30:04::ERROR::ovirt-engine-dwh-setup::692::root:: Traceback (most recent call last):
  File "/bin/ovirt-engine-dwh-setup", line 637, in main
    createDbSchema(db_dict)
  File "/usr/share/ovirt-engine-dwh/decorators.py", line 28, in wrapped_f
    raise Exception(instance)
Exception: Error while trying to create ovirt_engine_history db

So it seems to be a problem with creating the db. Currently:

[root@tekkaman ~]# grep -v ^# /var/lib/pgsql/data/pg_hba.conf
local   all                   postgres                       trust
local   all                   all                            md5
host    ovirtenginereports    engine_reports   0.0.0.0/0     md5
host    ovirtenginereports    engine_reports   ::0/0         md5
host    all                   all              127.0.0.1/32  md5
host    all                   all              ::1/128       md5
host    ovirt_engine_history  engine_history   ::0/0         trust

Where is the information for the engine_history user stored? Do I also have to clean up this user information?

See my full log /var/log/ovirt-engine/ovirt-engine-dwh-setup-2014_02_02_16_29_34.log here:
https://drive.google.com/file/d/0BwoPbcrMv8mvQnJlRkljTkhjMTg/edit?usp=sharing

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] How to scratch previous versions reports and dwh installation?
On Sun, Feb 2, 2014 at 4:37 PM, Gianluca Cecchi wrote:
> where is stored information for user engine_history do I have also to
> cleanup this user information?

So it seems it was related to PostgreSQL users that needed to be dropped too, before running setup again.

And I also noticed that in 3.3.3rc the problem solved here:
http://gerrit.ovirt.org/#/c/23304/2/packaging/ovirt-engine-reports-setup.py
is not applied, and reports setup fails. I hope someone is following this so it gets into final 3.3.3.

This is my list of things to do in 3.3.3 to get a clean new install; it worked today. Some steps could be unnecessary, depending on the previous setup or on previous errors and thus incomplete deployments:

1) packages clean
yum remove ovirt-engine-dwh ovirt-engine-reports

2) files and directory clean
rm -rf /etc/ovirt-engine-dwh/
rm -rf /etc/ovirt-engine-reports/
rm -rf /etc/ovirt-engine/ovirt-engine-dwh
rm -rf /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy
rm -rf /usr/share/ovirt-engine-dwh/
rm -rf /usr/share/ovirt-engine-reports

3) DB clean (substitute ovirt_ruser with your previously chosen read-only user)
engine=# drop user engine_history;
DROP ROLE
engine=# drop user engine_reports;
DROP ROLE
engine=# drop user ovirt_ruser;
DROP ROLE

At the end you should have something like:

postgres=# \du
                            List of roles
 Role name |                   Attributes                   | Member of
-----------+------------------------------------------------+-----------
 engine    | Create DB                                      | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

NOTE: it does not seem necessary to purge the lines possibly added to pg_hba.conf during a previous setup, such as:
host    ovirtenginereports    engine_reports   0.0.0.0/0   md5
host    ovirtenginereports    engine_reports   ::0/0       md5
host    ovirt_engine_history  engine_history   ::0/0       trust

4) DWH packages install and setup
yum install ovirt-engine-dwh

[root@tekkaman ~]# ovirt-engine-dwh-setup
Welcome to ovirt-engine-dwh setup utility
This utility can configure a read only user for DB access. Would you like to do so? (yes|no): yes
Error: user name cannot be empty
Provide a username for read-only user : ovirt_ruser
Provide a password for read-only user:
Warning: Weak Password.
Re-type password:
Should postgresql be setup with secure connection? (yes|no): yes
Creating DB...[ DONE ]
Creating read-only user...[ DONE ]
Setting DB connectivity...[ DONE ]
Starting ovirt-engine... [ DONE ]
Starting oVirt-ETL... [ DONE ]
Successfully installed ovirt-engine-dwh.

5) Install reports packages, patch and setup
yum install ovirt-engine-reports

In file /usr/share/ovirt-engine-reports/ovirt-engine-reports-setup.py, line 1163, change
if preserveReportsJobs: --> if preserveReports:

[root@tekkaman ~]# ovirt-engine-reports-setup
Welcome to ovirt-engine-reports setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service (yes|no): yes
Stopping ovirt-engine... [ DONE ]
Editing XML files... [ DONE ]
Setting DB connectivity...[ DONE ]
Please choose a password for the reports admin user(s) (ovirt-admin):
Warning: Weak Password.
Retype password:
Warning: Weak Password.
Deploying Server... [ DONE ]
Updating Redirect Servlet... [ DONE ]
Importing reports... [ DONE ]
Customizing Server... [ DONE ]
Running post setup steps... [ DONE ]
Starting ovirt-engine... [ DONE ]
Restarting httpd... [ DONE ]
Succesfully installed ovirt-engine-reports.

6) Connect to the Reports Portal with the ovirt-admin user and verify correct access.

Hope this helps others,
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
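For reference, the cleanup steps above can be collected into a single script. This is only a sketch of the procedure described in this thread: it builds and *prints* the commands (a dry run) and executes nothing, so you can review before pasting anything into a root shell. The role and database names are the ones used above, and ovirt_ruser stands for whatever read-only user you chose previously.

```shell
# Dry-run sketch of the cleanup steps above: build the command list and
# the SQL, print them, execute nothing. Review before running for real.
RO_USER="${RO_USER:-ovirt_ruser}"   # your previously chosen read-only user

plan="yum remove ovirt-engine-dwh ovirt-engine-reports
rm -rf /etc/ovirt-engine-dwh /etc/ovirt-engine-reports
rm -rf /etc/ovirt-engine/ovirt-engine-dwh
rm -rf /var/lib/ovirt-engine/deployments/ovirt-engine-reports.war.dodeploy
rm -rf /usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports"

# The databases are owned by these roles, so drop the databases first
# and the roles afterwards; dropping a role that still owns a database
# fails with a dependency error.
sql="drop database ovirt_engine_history;
drop database ovirtenginereports;
drop user engine_history;
drop user engine_reports;
drop user $RO_USER;"

printf '%s\n\n# then, as the postgres user, feed this to psql:\n%s\n' "$plan" "$sql"
```

After the cleanup, reinstall and rerun ovirt-engine-dwh-setup and ovirt-engine-reports-setup as shown above.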
Re: [Users] [vdsm] Migration failed (previous migrations succeded)
On Mon, Feb 3, 2014 at 1:43 PM, Michal Skrivanek wrote: > > On Feb 1, 2014, at 18:25 , Itamar Heim wrote: > >> On 01/31/2014 09:30 AM, Sven Kieske wrote: >>> Hi, >>> >>> is there any documentation regarding all >>> allowed settings in the vdsm.conf? >>> >>> I didn't find anything related in the rhev docs >> >> that's a question for vdsm mailing list - cc-ing… > > the vdsm.conf has a description for each parameter…just search for all > containing "migration":) > we do want to expose some/most/all of them in UI/REST eventually, the > "migration downtime" setting is there now in 3.4, but others are missing > > Thanks, > michal >> >>> >>> Am 30.01.2014 21:43, schrieb Itamar Heim: On 01/30/2014 10:37 PM, Markus Stockhausen wrote: >>> Von: Itamar Heim [ih...@redhat.com] >>> Gesendet: Donnerstag, 30. Januar 2014 21:25 >>> An: Markus Stockhausen; ovirt-users >>> Betreff: Re: [Users] Migration failed (previous migrations succeded) >>> >>> >>> Now I' getting serious problems. During the migration the VM was >>> doing a rather slow download at 1,5 MB/s. So the memory changed >>> by 15 MB per 10 seconds. No wonder that a check every 10 seconds >>> was not able to see any progress. Im scared what will happen if I >>> want to migrate a medium loaded system runing a database. >>> >>> Any tip for a parametrization? >>> >>> Markus >>> >> >> what's the bandwidth? default is up to 30MB/sec, to allow up to 3 VMs to >> migrate on 1Gb without congesting it. >> you could raise that if you have 10GB, or raise the bandwidth cap and >> reduce max number of concurrent VMs, etc. > > My migration network is IPoIB 10GBit. During our tests only one VM > was migrated. Bandwidth cap or number of concurrent VMs has not > been changed after default install. > > Is migration_max_bandwidth in /etc/vdsm/vdsm.conf still the right place? probably > And what settings do you suggest? 
I haven't tried changing the values myself, but in a previous thread (actually about limiting, not speeding up, migration ;-) these two parameters were described, to be put in each host's /etc/vdsm/vdsm.conf:

max_outgoing_migrations (e.g. 1 to allow only one migration at a time)
migration_max_bandwidth (the unit is MiB/s; the vdsm default is 32 MiB/s, and it applies to a single migration, not overall)

I think it is necessary to follow this workflow for every host:
- put the host into maintenance
- stop the vdsmd service
- change the values
- start the vdsmd service
- activate the host

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
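As a concrete but unofficial sketch of the "change values" step: vdsm.conf is a plain ini-style file, so the two keys can be switched with sed. The example below works on a scratch copy whose contents (section name, starting values) are made up for the demo; on a real host, point CONF at /etc/vdsm/vdsm.conf only while the host is in maintenance and vdsmd is stopped, and verify the section and exact key names against your vdsm version.

```shell
# Sketch: tweak the two migration keys in an ini-style vdsm.conf.
# Uses a scratch copy with sample contents so it is safe to run as-is;
# the starting values below are illustrative, not claimed defaults.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
[vars]
max_outgoing_migrations = 3
migration_max_bandwidth = 32
EOF

# Allow only one concurrent migration, but give it more bandwidth
# (per the thread: the unit is MiB/s, applied per migration, not overall).
sed -i \
    -e 's/^max_outgoing_migrations.*/max_outgoing_migrations = 1/' \
    -e 's/^migration_max_bandwidth.*/migration_max_bandwidth = 100/' \
    "$CONF"

cat "$CONF"
```

Remember that per the workflow above the edit only takes effect after vdsmd is restarted and the host is activated again.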
Re: [Users] ovirt-report Forbidden access error
On Tue, Feb 4, 2014 at 9:10 AM, Alessandro Bianchi wrote:
> in working directory '/usr/share/ovirt-engine-dwh/db-scripts'
> 2014-02-04 09:01:26::DEBUG::common_utils::962::root:: output =
> 2014-02-04 09:01:26::DEBUG::common_utils::963::root:: stderr = psql: FATALE:
> autenticazione con password fallita per l'utente "engine_history"
> password retrieved from file "/tmp/pgpassNkKGNp.tmp"
>
> (autenticazione con password fallita per l'utente "engine_history" =
> authentication failed for user "engine_history"; system language is Italian)
>
> so it seems a user creation permission problem on the database
>
> since I'm not too familiar with pgsql how is it supposed to fix this?
>
> It look like it misses the password in some ovirt configuration file but
> where to edit and how o fix it?
>
> Any hint?
>
> Thank you

See this thread of mine if you want to start from scratch and you don't have any previous reports/dwh data, or you don't mind losing them. The engine and its data are not impacted at all. I'm eventually going to open a bug about the bad handling of a pre-existing DB user during setup (e.g. due to a previous install that failed midway).

http://lists.ovirt.org/pipermail/users/2014-February/020740.html

Let us know how it goes.

Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt-report Forbidden access error
On Tue, Feb 4, 2014 at 10:39 AM, Alessandro Bianchi wrote: > > > Il 04/02/2014 09:55, Gianluca Cecchi ha scritto: > > On Tue, Feb 4, 2014 at 9:10 AM, Alessandro Bianchi wrote: > > in working directory '/usr/share/ovirt-engine-dwh/db-scripts' > 2014-02-04 09:01:26::DEBUG::common_utils::962::root:: output = > 2014-02-04 09:01:26::DEBUG::common_utils::963::root:: stderr = psql: FATALE: > autenticazione con password fallita per l'utente "engine_history" > password retrieved from file "/tmp/pgpassNkKGNp.tmp" > > (autenticazione con password fallita per l'utente "engine_history" = > authentication failed for user "engine_history" system language is italian) > > so it seems a user creation permission problem on the database > > since I'm not too familiar with pgsql how is it supposed to fix this? > > It look like it misses the password in some ovirt configuration file but > where to edit and how o fix it? > > Any hint? > > Thank you > > See this thread of mine if you want to start from scratch and you > don't have any previous reports/dwh data or you don't mind to loose > them. Engine and its data is not impacted at all. > Eventually I'm going to open a bug for bad mgmt of pre-existing DB > user during setup (eg due to a previously failed in the middle > install). > > http://lists.ovirt.org/pipermail/users/2014-February/020740.html > > Let us know how it goes. > > Gianluca > > Thank you > > I'm following the post but I'm stuck at 3) > > drop user engine_history; > ERRORE: il ruolo "engine_history" non può essere eliminato perché alcuni > oggetti ne dipendono > DETTAGLI: proprietario di database ovirt_engine_history > 300 oggetti nel database ovirt_engine_history > > it says "Error: can't remove role engine_history because some object depend > on it. Detail: database owner ovirt_engine_history 300 objects in database > ovirt_engine history" > > Any hint? > > Thank you for your help It seems I forgot a step... 
Let's call it 2bis): you have to drop the two DBs before the users. As the postgres user:
- psql
drop database ovirt_engine_history;
drop database ovirtenginereports;
and then you can drop the users.
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt-report Forbidden access error
On Tue, Feb 4, 2014 at 11:10 AM, Alessandro Bianchi wrote: > > > Il 04/02/2014 09:55, Gianluca Cecchi ha scritto: > > On Tue, Feb 4, 2014 at 9:10 AM, Alessandro Bianchi wrote: > > in working directory '/usr/share/ovirt-engine-dwh/db-scripts' > 2014-02-04 09:01:26::DEBUG::common_utils::962::root:: output = > 2014-02-04 09:01:26::DEBUG::common_utils::963::root:: stderr = psql: FATALE: > autenticazione con password fallita per l'utente "engine_history" > password retrieved from file "/tmp/pgpassNkKGNp.tmp" > > (autenticazione con password fallita per l'utente "engine_history" = > authentication failed for user "engine_history" system language is italian) > > so it seems a user creation permission problem on the database > > since I'm not too familiar with pgsql how is it supposed to fix this? > > It look like it misses the password in some ovirt configuration file but > where to edit and how o fix it? > > Any hint? > > Thank you > > See this thread of mine if you want to start from scratch and you > don't have any previous reports/dwh data or you don't mind to loose > them. Engine and its data is not impacted at all. > Eventually I'm going to open a bug for bad mgmt of pre-existing DB > user during setup (eg due to a previously failed in the middle > install). > > http://lists.ovirt.org/pipermail/users/2014-February/020740.html > > Let us know how it goes. > > Gianluca > > Ok with this 2b extra step it works > > I have installed everything with no errors, but still have Forbidden access > right clicking on Vms -> reports > > If I click on the "reports portal" I see this link > > http://10.0.0.5/OvirtEngineWeb/ReportsRedirectServlet > > I suspect this is something related to apache configuration > > access.log shows nothing so were may I see a log of what's happening? 
> > Thank you
> > Alessandro

I too see that redirect, and when I click I land on
https://my-engine/ovirt-engine-reports/login.html
and then, after login/pwd:
https://my-engine/ovirt-engine-reports/flow.html?_flowId=searchFlow

I have SpiceProxy configured; I don't know if this impacts the apache configuration. In my case it works, and in /etc/httpd/conf.d I have:

# ls -lrt
total 68
-rw-r--r--. 1 root root  926 Mar 31  2013 BackupPC.conf
-rw-r--r--. 1 root root  298 Jul 23  2013 squid.conf
-rw-r--r--. 1 root root  516 Jul 31  2013 welcome.conf
-rw-r--r--. 1 root root 1252 Jul 31  2013 userdir.conf
-rw-r--r--. 1 root root 9426 Jul 31  2013 ssl.conf.20131003112151
-rw-r--r--. 1 root root 2893 Jul 31  2013 autoindex.conf
-rw-r--r--. 1 root root  366 Jul 31  2013 README
-rw-r--r--. 1 root root 2778 Oct  3 11:21 z-ovirt-engine-proxy.conf.20131119125706
-rw-r--r--. 1 root root   33 Oct  3 11:21 ovirt-engine-root-redirect.conf
-rw-r--r--. 1 root root 9444 Oct  3 11:21 ssl.conf
-rw-r--r--. 1 root root 2775 Nov 19 12:57 z-ovirt-engine-proxy.conf.20140115003015
-rw-r--r--. 1 root root 1251 Jan  7 15:54 z-ovirt-engine-reports-proxy.conf
-rw-r--r--. 1 root root 2788 Jan 15 00:30 z-ovirt-engine-proxy.conf

z-ovirt-engine-reports-proxy.conf:

# This is needed to make sure that connections to the application server
# are recovered in a short time interval (5 seconds at the moment)
# otherwise when the application server is restarted the web server will
# refuse to connect during 60 seconds.
ProxySet retry=5

# This is needed to make sure that long RESTAPI requests have time to
# finish before the web server aborts the request as the default timeout
# (controlled by the Timeout directive in httpd.conf) is 60 seconds.
ProxySet timeout=3600

ProxyPass ajp://localhost:8702/ovirt-engine-reports

AddOutputFilterByType DEFLATE text/javascript text/css text/html text/xml text/json application/xml application/json application/x-yaml
___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] [QE] oVirt 3.4.0 beta status
On Tue, Feb 4, 2014 at 1:03 PM, Sandro Bonazzola wrote:
> Hi,
> oVirt 3.4.0 beta has been released and is currently in QA.
> We're going to start composing oVirt 3.4.0 beta2 this Thursday 2014-02-06
> 09:00 UTC from 3.4 branches.
> This build will be used for a second Test Day scheduled for 2014-02-11.

Is beta2 expected to be available to run on Fedora 20, for both engine and hosts?

Thanks,
Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Problems with migration on a VM and not detected by gui
Hello,
moving from 3.3.3rc to 3.3.3 final on a Fedora 19 based infrastructure: two hosts and one engine, Gluster DC. I have 3 VMs: CentOS 5.10, CentOS 6.5, Fedora 20.

Main steps:
1) update the engine with the usual procedure
2) all VMs are on one node; I put the other one into maintenance, update it and reboot
3) activate the updated node and migrate all VMs to it

From the webadmin GUI point of view all seems OK. The only "strange" thing is that the CentOS 6.5 VM has no IP shown, while it usually has one because of the ovirt-guest-agent installed on it. So I try to connect to its console (configured as VNC), but I get an error (the other two are OK, and they are SPICE). Also, I cannot ping or ssh into the VM, so there is indeed some problem. I haven't connected since 30th January, so I don't know if any problem arose before today.

From the original host, in /var/log/libvirt/qemu/c6s.log I see:

2014-01-30 11:21:37.561+: shutting down
2014-01-30 11:22:14.595+: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-01-23T11:42:26,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:8f:04:f8,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4147e0d3-19a7-447b-9d88-2ff19365bec0.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 char device redirected to /dev/pts/0 (label charconsole0) 2014-02-04 12:48:01.855+: shutting down qemu: terminating on signal 15 from pid 1021 >From the updated host where I apparently migrated it I see: 2014-02-04 12:47:54.674+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name c6s -S -machine pc-1.0,accel=kvm,usb=off -cpu Opteron_G2 -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 4147e0d3-19a7-447b-9d88-2ff19365bec0 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-5,serial=34353439-3036-435A-4A38-303330393338,uuid=4147e0d3-19a7-447b-9d88-2ff19365bec0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6s.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-01-28T13:08:06,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/glusterSD/f18ovn01.ceda.polimi.it:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-
Re: [Users] ovirt-report Forbidden access error
On Tue, Feb 4, 2014 at 2:06 PM, Yedidyah Bar David wrote:
> - Original Message -
>> From: "Alessandro Bianchi"
>> To: "Yedidyah Bar David"
>> Cc: "Gianluca Cecchi" , "users"
>> Sent: Tuesday, February 4, 2014 2:49:47 PM
>> Subject: Re: [Users] ovirt-report Forbidden access error
> [snip]
>
> (for now I'll ignore the confs (which seem ok) and the logs (which will
> require more time to understand)
>
>> postgres-# select * from vdc_options where
>> option_name='RedirectServletReportsPage'
>> postgres-#
>> (no results)
>
> This is probably the source of the problem.
>
> Can you post all the setup log files (engine, dwh, reports)?
>
> The line is normally inserted by engine-setup and updated by reports-setup.
> --
> Didi

In fact in my case, where it is working, I have:

engine=# select * from vdc_options where option_name='RedirectServletReportsPage';
 option_id |        option_name         |                option_value                | version
-----------+----------------------------+--------------------------------------------+---------
       281 | RedirectServletReportsPage | https://my-engine:443/ovirt-engine-reports | general
(1 row)

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
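If the row really is missing, the supported fix is to rerun engine-setup / reports-setup, which normally create and update it. Purely as a sketch, the row could also be re-created by hand against the engine database; the statement below is only printed (the psql invocation is left commented on purpose), and the URL and version values are assumptions copied from my working setup, not verified defaults:

```shell
# Hypothetical fix for the missing vdc_options row. This only prints the
# SQL; to apply it you would pipe it into psql against the engine DB,
# e.g.: printf '%s\n' "$sql" | psql -U engine engine
sql="INSERT INTO vdc_options (option_name, option_value, version)
VALUES ('RedirectServletReportsPage',
        'https://my-engine:443/ovirt-engine-reports',
        'general');"
printf '%s\n' "$sql"
```

Replace the URL with your own engine/reports address before even considering running it, and prefer the setup tools where possible.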
Re: [Users] Problems with migration on a VM and not detected by gui
On Tue, Feb 4, 2014 at 2:41 PM, Meital Bourvine wrote:
> Hi,
>
> Can you please open a bug about KeyError: 'path'?

I will do. Which component?

During the migration itself the two nodes were not aligned. In this case on the source host I had 3.3.3rc packages such as vdsm-4.13.3-1.fc19.x86_64 and kernel-3.12.8-200.fc19.x86_64.
On dest I had already updated packages (also fedora ones) such as vdsm-4.13.3-3.fc19.x86_64 and kernel-3.12.9-201.fc19.x86_64.
Instead libvirt was the same on both: libvirt-1.0.5.9-1.fc19.x86_64 qemu-kvm-1.4.2-15.fc19.x86_64

> What error did you get while trying to connect with console (vnc)?

I get "unable to connect", but sorry, this was my fault. I'm using SpiceProxy because from my client I cannot connect to port 5900 of the hypervisors. I was also testing the vnc protocol some days ago, but obviously it doesn't use the SpiceProxy feature.. ;-)
On my client I have in fact, as expected:

tcp 0 1 10.4.23.21:51856 10.4.4.58:5900 SYN_SENT

Going into a web console from a server that is in the same network as the hypervisors, I can see the vnc console, and the guest's problem seems to be during boot, apparently since the 23rd of January... so it is normal that I can't ping it.. ;-)
At this point I will investigate the reason of the problem and report... see screenshot:
https://drive.google.com/file/d/0BwoPbcrMv8mvejFybE94WmJfQWM/edit?usp=sharing

> Did you try changing it to spice and see what happens?

So it was not a vnc problem at all, apparently. Anyway, I think I can change the console type only if the VM is powered off, correct?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Problems with migration on a VM and not detected by gui
On Tue, Feb 4, 2014 at 3:43 PM, Dan Kenigsberg wrote:
> On Tue, Feb 04, 2014 at 08:41:13AM -0500, Meital Bourvine wrote:
>> Hi,
>>
>> Can you please open a bug about KeyError: 'path'?
>
> Please include an excerpt of supervdsm.log, to give a better hint why
> glusterVolumeStatus is failing in this fashion.

Here it is:
https://bugzilla.redhat.com/show_bug.cgi?id=1061211

After an fsck of /boot on the VM and a restart, I configured ntp, which was not up. I successfully migrated the VM. The only messages I get in it during the migration (16:07):

Feb 4 15:35:09 c6s ntpd[1510]: 0.0.0.0 c614 04 freq_mode
Feb 4 15:51:35 c6s ntpd[1510]: 0.0.0.0 0612 02 freq_set kernel -0.031 PPM
Feb 4 15:51:35 c6s ntpd[1510]: 0.0.0.0 0615 05 clock_sync
Feb 4 16:07:33 c6s kernel: Clocksource tsc unstable (delta = -79565137 ns)

The two hosts seem almost aligned (they use chronyd):

node 1 where c6s booted
# chronyc tracking
Reference ID    : (one-common-server in my infra)
Stratum         : 3
Ref time (UTC)  : Tue Feb 4 15:10:05 2014
System time     : 0.15313 seconds slow of NTP time
Last offset     : -0.17689 seconds
RMS offset      : 0.47532 seconds
Frequency       : 19.180 ppm fast
Residual freq   : -0.004 ppm
Skew            : 0.146 ppm
Root delay      : 0.004059 seconds
Root dispersion : 0.005400 seconds
Update interval : 260.4 seconds
Leap status     : Normal

node 2 where I migrated first
# chronyc tracking
Reference ID    : (one-common-server in my infra)
Stratum         : 3
Ref time (UTC)  : Tue Feb 4 15:09:48 2014
System time     : 0.03940 seconds slow of NTP time
Last offset     : 0.00705 seconds
RMS offset      : 0.43605 seconds
Frequency       : 31.695 ppm fast
Residual freq   : -0.000 ppm
Skew            : 0.131 ppm
Root delay      : 0.004193 seconds
Root dispersion : 0.005894 seconds
Update interval : 128.6 seconds
Leap status     : Normal

Migrating the VM the other way (from node 2 to node 1 at 16:13) I don't get any message.
But I do see this about two minutes after the first migration in my VM:

Feb 4 16:09:03 c6s kernel: EXT4-fs error (device vda1): ext4_readdir: bad entry in directory #11: rec_len is smaller than minimal - block=4358 offset=0(0), inode=0, rec_len=0, name_len=0

Dunno if it is a problem of the VM itself or related in some way to ovirt and migration; I will investigate.

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
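As a quick way to compare the clock state on the two hosts, the relevant fields can be pulled out of `chronyc tracking`; here is a small sketch that parses a snippet of the node 1 output above as sample data (on a real host you would pipe `chronyc tracking` itself through the same awk and diff the results between hosts):

```shell
# Extract the "System time" and "Frequency" lines from chronyc tracking
# output. The sample below is pasted from node 1; run the awk pipeline
# against live `chronyc tracking` output on each host to compare them.
sample='System time     : 0.15313 seconds slow of NTP time
Frequency       : 19.180 ppm fast'
parsed=$(printf '%s\n' "$sample" | awk -F':' '/System time|Frequency/ {
    gsub(/ +$/, "", $1); gsub(/^ +/, "", $2); print $1 ": " $2 }')
printf '%s\n' "$parsed"
```

A large difference in the Frequency (ppm) values between hosts, like the 19.180 vs 31.695 seen above, is normal per-machine oscillator drift; what matters for the guest is the System time offset at migration time.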
Re: [Users] failed nested vm only with spice and not vnc
#x27;: 'yes'}, 'iface': 'ens256', 'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'], 'mtu': '1500', 'netmask': '', 'vlanid': 172}} vmTypes = ['kvm'] I have a pre-booted VM that is configured as VNC. As soon as I start another VM (CentOS 6.4) defined as spice console all the two go into paused mode In qemulog of spice VM I have 2014-02-05 08:05:45.965+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/q emu-kvm -name C2prealloc -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 1107ce34-46e6-4989-a5cf-de601ea71cae -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=1107ce34-46e6-4989-a5cf-de601ea71cae -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C2prealloc.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-05T08:05:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/e8a52eea-5531-4d12-8747-061c2136b6fd/14707e58-aedf-4059-a815-605a0df4b396,if=none,id=drive-virtio-disk0,format=raw,serial=e8a52eea-5531-4d12-8747-061c2136b6fd,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:19,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/1107ce34-46e6-4989-a5cf-de601ea71cae.com.redhat.rhevm.vdsm,server,nowait -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/1107ce34-46e6-4989-a5cf-de601ea71cae.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel 1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 KVM: unknown exit, hardware reason 3 EAX=0037 EBX=6e44 ECX=001a EDX=0511 ESI= EDI=6df8 EBP=6e08 ESP=6dd4 EIP=3ffe1464 EFL=0017 [APC] CPL=0 II=0 A20=1 SMM=0 HLT=0 ES =0010 00c09300 DPL=0 DS [-WA] CS =0008 00c09b00 DPL=0 CS32 [-RA] SS =0010 00c09300 DPL=0 DS [-WA] DS =0010 00c09300 DPL=0 DS [-WA] FS =0010 00c09300 DPL=0 DS [-WA] GS =0010 00c09300 DPL=0 DS [-WA] LDT= 8200 DPL=0 LDT TR = 8b00 DPL=0 TSS32-busy GDT= 000fd3a8 0037 IDT= 000fd3e6 CR0=0011 CR2= CR3= CR4= DR0=0000 DR1= DR2= DR3= DR6=0ff0 DR7=0400 EFER=0000 Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49> 83 f9 ff 75 f9 c3 57 56 53 89 c3 8b b0 84 00 00 00 39 ce 77 1e 89 d7 0f b7 80 8c 00 00 In messages I get when I start the spice VM: Feb 5 09:05:46 ovnode01 vdsm vm.Vm WARNING vmId=`1107ce34-46e6-4989-a5cf-de601ea71cae`::_readPauseCode unsupported by libvirt vm In VNC VM qemu.log, when I started it yestaerday: 2014-02-04 23:56:48.635+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name C6 -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 
2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 409c5dbe-5e70-40de-bf73-46ef484ea2d7 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F7170-C703-34E3-9628-4588D841F8B1,uuid=409c5dbe-5e70-40de-bf73-46ef484ea2d7 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/C6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-04T23:56:48,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d26393a08,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:10,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/409c5dbe-5e70-40de-bf73-46ef484ea2d7.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0:0,password -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 when I start the spice one KVM: unknown exit, hardware reason 3 EAX=0011 EBX=ffea ECX= EDX=000fc5b9 ESI=000d7c2a EDI= EBP= 
ESP=6f80 EIP=c489 EFL=0006 [-P-] CPL=0 II=0 A20=1 SMM=0 HLT=0 ES = 00809300 DPL=0 DS16 [-WA] CS =f000 000f 00809b00 DPL=0 CS16 [-RA] SS = 00809300 DPL=0 DS16 [-WA] DS = 00809300 DPL=0 DS16 [-WA] FS = 00809300 DPL=0 DS16 [-WA] GS = 00809300 DPL=0 DS16 [-WA] LDT= 8200 DPL=0 LDT TR = 8b00 DPL=0 TSS32-busy GDT= 000fd3a8 0037 IDT= 000fd3e6 CR0=0011 CR2= CR3= CR4= DR0= DR1= DR2= DR3= DR6=0ff0 DR7=0400 EFER= Code=01 1e e0 d3 2e 0f 01 16 a0 d3 0f 20 c0 66 83 c8 01 0f 22 c0 <66> ea 91 c4 0f 00 08 00 b8 10 00 00 00 8e d8 8e c0 8e d0 8e e0 8e e8 89 c8 ff e2 89 c1 b8 Thanks in advance, Gianluca On Thu, Oct 3, 2013 at 2:54 PM, Itamar Heim wrote: > On 10/03/2013 01:21 AM, Gianluca Cecchi wrote: >> >> On Wed, Oct 2, 2013 at 9:16 PM, Itamar Heim wrote: >>> >>> On 10/02/2013 12:57 AM, Gianluca Cecchi wrote: >>>> >>>> >>>> Today I was able to work again on this matter and it seems related to >>>> spice >>>> Every time I start the VM (that is defined with spice) it goes in >>> >>> >>> >>> and this doesn't happen if the VM is defined with vnc? >> >> >> No, reproduced both from oVirt and through virsh. >> with spice defined in boot options or in xml (for virsh) the vm >> remains in paused state and after a few minutes it seems the node >> hangs... >> with vnc the VM goes in runnign state >> I'm going to put same config on 2 physical nodes with only local >> storage and see what happens and report... >> >> Gianluca >> > > adding spice-devel mailing list as the VM only hangs if started with spice > and not with vnc, from virsh as well. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Putting a Gluster host into an ISCSI DC
Hello, in a test infra I would like to temporarily put a host that is part of a Gluster DC, and has active gluster volumes, into an iSCSI DC.
I put the Gluster DC in maintenance, stopped the gluster volume and put both hosts in maintenance. Then I tried to edit the host and put it into the iSCSI DC, but I receive this error:

Error while executing action: node02: Cannot Edit host. Server having Gluster volume

Is there some basic check that prevents me from doing this at all? Any tip?
Thanks,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Putting a Gluster host into an ISCSI DC
On Wed, Feb 5, 2014 at 4:10 PM, Itamar Heim wrote:
> Are the volumes hosted on the host?

Yes, but I have brought down all resources related to them.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[Users] New iSCSI domain problem when first input wrong password
Hello, I have a fedora 19 engine based on 3.3 final. I installed a fedora 19 host (configured as infrastructure server) and then installed from the web gui. Then a yum update of the host updated vdsm (see the separate thread I'm going to open for this).
I created an iSCSI DC and added that host to it.
When I try to create the new iSCSI storage domain (sw iscsi target on a CentOS 6.5 server), I initially input the wrong chap password, so that discovery goes ok while authentication fails. Then I put in the correct password, and after apparently going on (I see the exposed lun), it then gives an error.

In the webadmin gui I have:

2014-Feb-05, 16:57 Failed to attach Storage Domain OV01 to Data Center ISCSI. (User: admin@internal)
2014-Feb-05, 16:57 Failed to attach Storage Domains to Data Center ISCSI. (User: admin@internal)
2014-Feb-05, 16:57 The error message for connection 192.168.230.101 iqn.2013-09.local.localdomain:c6iscsit.target11 (LUN 1IET_00010001) returned by VDSM was: Failed to setup iSCSI subsystem
2014-Feb-05, 16:57 Storage Domain OV01 was added by admin@internal

A workaround to have it activated is then (see logs at 18:11):
- put host in maintenance
- reactivate it
- attach sd to DC
- ok (but see the watermark message I don't understand)

2014-Feb-05, 18:13 The system has reached the 80% watermark on the VG metadata area size on OV01. This is due to a high number of Vdisks or large Vdisks size allocated on this specific VG.
2014-Feb-05, 18:13 Storage Domains were attached to Data Center ISCSI by admin@internal
2014-Feb-05, 18:13 Storage Domain OV01 (Data Center ISCSI) was activated by admin@internal
2014-Feb-05, 18:13 Storage Pool Manager runs on Host ovnode03 (Address: 192.168.33.44).
2014-Feb-05, 18:12 Data Center is being initialized, please wait for initialization to complete.
2014-Feb-05, 18:12 State was set to Up for host ovnode03.
2014-Feb-05, 18:11 Host ovnode03 was activated by admin@internal.
2014-Feb-05, 18:11 Host ovnode03 was switched to Maintenance mode by admin@internal.
See my vdsm.log and supervdsm.log here:
https://drive.google.com/file/d/0BwoPbcrMv8mval9XMnN4ZGoxa00/edit?usp=sharing
https://drive.google.com/file/d/0BwoPbcrMv8mveC1KQmo1dFAtNzQ/edit?usp=sharing

Possibly the initially wrong password is not handled well when the correct one is entered afterwards, as I see some tracebacks.

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[Users] ovirt 3.3.3 host deploy push an "old" vdsm
Hello,
fedora 19 engine 3.3.3 with ovirt-host-deploy-1.1.3-1.fc19.noarch.
When I deploy a fedora 19 host from the web admin gui, I get vdsm-4.13.0-11.fc19.x86_64 on it (it seems to date from November 2013).
If after that I explicitly enable the ovirt 3.3.3 yum repo and run yum update on the host, it correctly moves to vdsm-4.13.3-3.fc19.x86_64.
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] ovirt 3.3.3 host deploy push an "old" vdsm
On Wed, Feb 5, 2014 at 7:19 PM, Alon Bar-Lev wrote:
>
>
> - Original Message -
>> From: "Itamar Heim"
>>
>> engine will deploy the latest vdsm the host sees based on the repos
>> configured on the host?
>
> indeed.

I thought there was a sort of archive of rpm packages "embedded" in and delivered from the engine itself... and I thought it was in the ovirt-host-deploy package on the engine.
I didn't have any ovirt repo configured on the host at deploy stage. Why not use the engine's repo?

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] failed nested vm only with spice and not vnc
I replicated the problem with the same environment but attached to an iSCSI storage domain, so the Gluster part is not involved.
As soon as I run a VM on the host, the VM goes into paused state, and in the host messages:

Feb 5 19:22:45 localhost kernel: [16851.192234] cgroup: libvirtd (1460) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
Feb 5 19:22:45 localhost kernel: [16851.192240] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
Feb 5 19:22:46 localhost kernel: [16851.228204] device vnet0 entered promiscuous mode
Feb 5 19:22:46 localhost kernel: [16851.236198] ovirtmgmt: port 2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.236208] ovirtmgmt: port 2(vnet0) entered forwarding state
Feb 5 19:22:46 localhost kernel: [16851.591058] qemu-system-x86: sending ioctl 5326 to a partition!
Feb 5 19:22:46 localhost kernel: [16851.591074] qemu-system-x86: sending ioctl 80200204 to a partition!
Feb 5 19:22:46 localhost vdsm vm.Vm WARNING vmId=`7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0`::_readPauseCode unsupported by libvirt vm Feb 5 19:22:47 localhost avahi-daemon[449]: Registering new address record for fe80 And in qemu.log for the VM: 2014-02-05 18:22:46.280+: starting up LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c6i -S -machine pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid 7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=19-6,serial=421F4B4A-9A4C-A7C4-54E5-847BF1ADE1A5,uuid=7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6i.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2014-02-05T18:22:45,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/----/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/rhev/data-center/mnt/blockSD/f741671e-6480-4d7b-b357-8cf6e8d2c0f1/images/0912658d-1541-4d56-945b-112b0b074d29/67aaf7db-4d1c-42bd-a1b0-988d95c5d5d2,if=none,id=drive-virtio-disk0,format=qcow2,serial=0912658d-1541-4d56-945b-112b0b074d29,cache=none,werror=stop,rerror=stop,aio=native -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:17,bus=pci.0,addr=0x3,bootindex=3 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 KVM: unknown exit, hardware reason 3 EAX=0094 EBX=6e44 ECX=000e EDX=0511 ESI=0002 EDI=6df8 EBP=6e08 ESP=6dd4 EIP=3ffe1464 EFL=0017 [APC] CPL=0 II=0 A20=1 SMM=0 HLT=0 ES =0010 00c09300 DPL=0 DS [-WA] CS =0008 00c09b00 DPL=0 CS32 [-RA] SS =0010 00c09300 DPL=0 DS [-WA] DS =0010 00c09300 DPL=0 DS [-WA] FS =0010 00c09300 DPL=0 DS [-WA] GS =0010 00c09300 DPL=0 DS [-WA] LDT= 8200 DPL=0 LDT TR = 8b00 DPL=0 TSS32-busy GDT= 000fd3a8 0037 IDT= 000fd3e6 CR0=0011 CR2= CR3= CR4= DR0= DR1= DR2= DR3= DR6=0ff0 DR7=0400 EFER= Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 05 00 00 eb 01 ec <49> 83 f9 ff 75 f9 c3 57 5
Re: [Users] Will this two node concept scale and work?
On Thu, Feb 6, 2014 at 9:05 AM, Sven Kieske wrote:
> Hi,
>
> this one lead me to question which drbd version is or will
> be available in EL 6/7(upcoming).
>
> My search so far just revealed there is no official supported
> version for EL6 and maybe even worse, as far as I looked, will
> not even be supported in EL7:
>
> https://access.redhat.com/site/discussions/669243
>
> But I didn't check if the kernel modules are disabled or
> just unsupported.
>

If you want full support for DRBD on RH EL (5.x and 6.x at the moment, as 7 is still in beta), see here (the main part is accessible also without a red hat portal login):
https://access.redhat.com/site/solutions/32085

On that link there are also elrepo links to drbd 8.3 and 8.4 packages that you could use to start evaluating it for your needs (so, not supported).
You can also download from linbit after a free login registration, or download the source code otherwise.
The partnership should have been in place since 2011:
http://www.linbit.com/en/company/news/12-linbit-enhances-red-hat-enterprise-linux-with-full-drbd-support

HIH,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
[Users] Unable to activate iSCSI domain after crash of host
Hello,
Fedora 19 with 3.3.3. Only one host configured. After a crash of the host I'm not able to activate the storage domain again. Any way to recover?

Gianluca

In engine.log:

2014-02-07 08:11:12,602 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] HostName = ovnode03
2014-02-07 08:11:12,602 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command HSMGetAllTasksStatusesVDS execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM: ()
2014-02-07 08:11:12,613 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] hostFromVds::selectedVds - ovnode03, spmStatus Unknown_Pool, storage pool ISCSI
2014-02-07 08:11:12,615 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] START, ConnectStoragePoolVDSCommand(HostName = ovnode03, HostId = b6f8f68f-4f9e-4c87-918b-aa1ff60f575a, storagePoolId = 546cd29c-7249-4733-8fd5-317cff38ed71, vds_spm_id = 1, masterDomainId = f741671e-6480-4d7b-b357-8cf6e8d2c0f1, masterVersion = 2), log id: 3e99a2c6
2014-02-07 08:11:15,747 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (ajp--127.0.0.1-8702-4) [465b0976] Lock Acquired to object EngineLock [exclusiveLocks= key: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 value: STORAGE , sharedLocks= ]
2014-02-07 08:11:15,759 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] Running command: ActivateStorageDomainCommand internal: false.
Entities affected : ID: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 Type: Storage
2014-02-07 08:11:15,762 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] Lock freed to object EngineLock [exclusiveLocks= key: f741671e-6480-4d7b-b357-8cf6e8d2c0f1 value: STORAGE , sharedLocks= ]
2014-02-07 08:11:15,763 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (pool-6-thread-49) [465b0976] ActivateStorageDomain. Before Connect all hosts to pool. Time:2/7/14 8:11 AM
2014-02-07 08:11:15,765 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (pool-6-thread-49) [465b0976] START, ActivateStorageDomainVDSCommand( storagePoolId = 546cd29c-7249-4733-8fd5-317cff38ed71, ignoreFailoverLimit = false, storageDomainId = f741671e-6480-4d7b-b357-8cf6e8d2c0f1), log id: da4b270
2014-02-07 08:11:16,739 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=304, mMessage=Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1']]
2014-02-07 08:11:16,740 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] HostName = ovnode03
2014-02-07 08:11:16,740 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] Command ConnectStoragePoolVDS execution failed.
Exception: IRSNoMasterDomainException: IRSGenericException: IRSErrorException: IRSNoMasterDomainException: Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1'
2014-02-07 08:11:16,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-32) [60d513d1] FINISH, ConnectStoragePoolVDSCommand, log id: 3e99a2c6

In vdsm.log I get:

Thread-85157::ERROR::2014-02-07 08:12:44,774::task::850::TaskManager.Task::(_setError) Task=`b9f3c2b5-18fa-4135-96f7-c152b5ffe675`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1008, in connectStoragePool
    masterVersion, options)
  File "/usr/share/vdsm/storage/hsm.py", line 1062, in _connectStoragePool
    res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 699, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1244, in __rebuild
    masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1603, in getMasterDomain
    raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 'spUUID=546cd29c-7249-4733-8fd5-317cff38ed71, msdUUID=f741671e-6480-4d7b-b357-8cf6e8d2c0f1'
Thread-85157::DEBUG::2014-02-07 08:1
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 9:53 AM, Dafna Ron wrote:
> the host can't see the domain
> If you fixed everything and the host can now see the domain this may be a
> cache issue - can you please reboot the host?
>
> Dafna

Already tried many times without results.
One note: there was an initial problem when I configured the storage. At the first attempt I input the wrong password. I fear the new one was not retained for some reason.

This is my debug on the host:

[root@ovnode03 ~]# iscsiadm -m discovery -t st -p 192.168.230.101 --discover
192.168.230.101:3260,1 iqn.2013-09.local.localdomain:c6iscsit.target11
[root@ovnode03 ~]# iscsiadm -m node iqn.2013-09.local.localdomain:c6iscsit.target11 -l
Logging in to [iface: default, target: iqn.2013-09.local.localdomain:c6iscsit.target11, portal: 192.168.230.101,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.2013-09.local.localdomain:c6iscsit.target11, portal: 192.168.230.101,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals

If I go into /var/lib/iscsi/send_targets/192.168.230.101,3260, st_config contains:

# BEGIN RECORD 6.2.0.873-17
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 192.168.230.101
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# END RECORD

Where is the configured chap password recorded on the host?
Also, are the iscsi and iscsid services supposed to be enabled at startup, or does vdsm take care of starting one or both of them?
In my case I have this kind of config on the host: iscsi enabled but in failed state, iscsid active but disabled?

[root@ovnode03 192.168.230.101,3260]# systemctl status iscsi
iscsi.service - Login and scanning of iSCSI devices
   Loaded: loaded (/usr/lib/systemd/system/iscsi.service; enabled)
   Active: failed (Result: exit-code) since Fri 2014-02-07 08:41:17 CET; 1h 18min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 911 ExecStart=/sbin/iscsiadm -m node --loginall=automatic (code=exited, status=21)
  Process: 908 ExecStart=/usr/libexec/iscsi-mark-root-nodes (code=exited, status=0/SUCCESS)

Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Starting Login and scanning of iSCSI devices...
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: iscsi.service: main process exited, code=exited, status=21/n/a
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Failed to start Login and scanning of iSCSI devices.
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Unit iscsi.service entered failed state.

[root@ovnode03 192.168.230.101,3260]# systemctl status iscsid
iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; disabled)
   Active: active (running) since Fri 2014-02-07 08:41:17 CET; 1h 19min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 875 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 895 (iscsid)
   CGroup: name=systemd:/system/iscsid.service
           ├─894 /usr/sbin/iscsid
           └─895 /usr/sbin/iscsid

Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Starting Open-iSCSI...
Feb 07 08:41:17 ovnode03.localdomain.local iscsid[875]: iSCSI logger with pid=894 started!
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Failed to read PID from file /var/run/iscsid.pid: Invalid argument
Feb 07 08:41:17 ovnode03.localdomain.local systemd[1]: Started Open-iSCSI.
Feb 07 08:41:18 ovnode03.localdomain.local iscsid[894]: iSCSI daemon with pid=895 started!
Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: Login failed to authenticate with target iqn.2013-09.local.localdomain:c6iscsi...rget11 Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: session 1 login rejected: Initiator failed authentication with target Feb 07 09:55:06 ovnode03.localdomain.local iscsid[894]: Connection1:0 to [target: iqn.2013-09.local.localdomain:c6iscsit.target11, por...tdown. ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
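On the question of where the chap password is recorded: to my understanding, open-iscsi keeps the session CHAP credentials in the per-node record (under /var/lib/iscsi/nodes/&lt;target-iqn&gt;/&lt;portal&gt;/), not in the send_targets st_config shown above (which only covers discovery). Here is a sketch that fakes such a record in a temp file just to show the relevant keys; the username/password values are placeholders, and the commented iscsiadm commands are the usual way to correct a stale password in the real record:

```shell
# Simulate a node record to show which keys store the CHAP credentials.
rec=$(mktemp)
cat > "$rec" <<'EOF'
node.name = iqn.2013-09.local.localdomain:c6iscsit.target11
node.session.auth.authmethod = CHAP
node.session.auth.username = ovirt
node.session.auth.password = placeholder_pwd
EOF
auth=$(grep '^node.session.auth' "$rec")
printf '%s\n' "$auth"
rm -f "$rec"
# On the real host a stale password could be updated with (values hypothetical):
# iscsiadm -m node -T iqn.2013-09.local.localdomain:c6iscsit.target11 \
#   -p 192.168.230.101 --op=update \
#   -n node.session.auth.password -v correct_pwd
```

Alternatively, deleting the node record (`iscsiadm -m node ... --op=delete`) and letting a fresh discovery/login recreate it avoids editing credentials by hand.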
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 10:19 AM, Dafna Ron wrote: > can you try to restart ovirt-engine as well? > Also, can you run vdsClient -s 0 getDeviceList on the host? restarted engine [root@ovirt ovirt-engine]# systemctl status ovirt-engine ovirt-engine.service - oVirt Engine Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled) Active: active (running) since Fri 2014-02-07 10:23:16 CET; 45s ago Main PID: 18479 (ovirt-engine.py) CGroup: name=systemd:/system/ovirt-engine.service ├─18479 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output start └─18499 ovirt-engine -server -XX:+TieredCompilation -Xms1g -Xmx1g -XX:PermSize=256m -XX:MaxPermSize=256m -Djava.net.preferIPv4Sta... Feb 07 10:23:16 ovirt.localdomain.local systemd[1]: Started oVirt Engine. On node [root@ovnode03 192.168.230.101,3260]# vdsClient -s 0 getDeviceList [] [root@ovnode03 192.168.230.101,3260]# engine.log here: https://drive.google.com/file/d/0BwoPbcrMv8mvYjliV19JclZha3c/edit?usp=sharing Thanks for viewing Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 12:02 PM, Dafna Ron wrote: > If you do not have an iptables or any connectivity issue still existing from > the host to the storage then your host is not seeing any of the devices on > the storage > it might be a problem with access list (password or iqn) but the storage is > not exposing the luns to the host. On my iscsi target (CentOS 6.5 with sw iscsi target) I still have: [root@c6iscsit ~]# tgtadm --lld iscsi --mode target --op show Target 1: iqn.2013-09.local.localdomain:c6iscsit.target11 System information: Driver: iscsi State: ready I_T nexus information: LUN information: LUN: 0 Type: controller SCSI ID: IET 0001 SCSI SN: beaf10 Size: 0 MB, Block size: 1 Online: Yes Removable media: No Prevent removal: No Readonly: No Backing store type: null Backing store path: None Backing store flags: LUN: 1 Type: disk SCSI ID: IET 00010001 SCSI SN: beaf11 Size: 53683 MB, Block size: 512 Online: Yes Removable media: No Prevent removal: No Readonly: No Backing store type: rdwr Backing store path: /dev/VG_ISCSI/ISCSI_OV01 Backing store flags: Account information: ovirt ACL information: 192.168.230.102 192.168.230.103 My node has ip 192.168.230.102 and I can ping the target. There are no iptables rules: [root@c6iscsit ~]# iptables -L -n Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination and in targets.conf, which is in place: default-driver iscsi backing-store /dev/VG_ISCSI/ISCSI_OV01 incominguser ovirt my_ovirt_setup_pwd initiator-address 192.168.230.102 initiator-address 192.168.230.103 so it seems ok to me, and discovery is ok from the ovirt node. Where are the CHAP user/password stored on the ovirt node? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 2:02 PM, Dafna Ron wrote: > what does multipath get? > > I am not sure which table the chap will be saved in. > try to list the db tables - there are not that many for storage so it should > be easy to find. > There is still the "historical" warning about getuid_callout no longer being a valid keyword on Fedora-based distros... but if I remember correctly it should not affect the output, apart from the warnings, right? [root@ovnode03 ~]# multipath -ll Feb 07 14:09:49 | multipath.conf +5, invalid keyword: getuid_callout Feb 07 14:09:49 | multipath.conf +18, invalid keyword: getuid_callout I'm going to check the rdbms tables too... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
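As a side note, those warnings come from a keyword that newer multipath-tools no longer accept (getuid_callout was replaced by uid_attribute). A minimal sketch of cleaning it up, assuming a multipath.conf that still carries the old lines (the sample file content below is illustrative, not the real host's config):

```shell
# Sample multipath.conf fragment (assumed content, for illustration only)
cat > /tmp/multipath.conf <<'EOF'
defaults {
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    user_friendly_names yes
}
EOF
# getuid_callout was dropped in newer multipath-tools (uid_attribute is used
# instead), which is why "multipath -ll" flags it as an invalid keyword.
# Removing the obsolete lines silences the warnings:
sed -i '/getuid_callout/d' /tmp/multipath.conf
cat /tmp/multipath.conf
```

On a real host you would edit /etc/multipath.conf the same way and then reload multipathd; the warnings are cosmetic, so this should not change the (empty) device list being discussed here.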
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 2:11 PM, Gianluca Cecchi wrote: > > I'm going to check rdbms tables too... > > Gianluca it seems that the table is storage_server_connections but the value seems (correctly in my opinion) encrypted... how can I update it eventually? engine=# select * from storage_server_connections ; id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans --+--+---+ -- -- --+-+--++--+---+---+-+ ---+- 6a5b159d-4c11-43cc-aa09-55c325de47b3 | 192.168.230.101 | ovirt | lf1mtw6jWq0tcO/jBeLtSdrx9WSMvLOJxMF/Z4UWsgK W10jYKXzkxG8iPgX9xMEcOhTJCeMNtC6EQES5Tq0MjHGPfuzigwL9nejZEZwtDvOFmKZtCBSGaKoOyjQpU8hfoqq7u47jvGE5VmVwDQ40p6goXWDHMWPxdCk2IzAOBsDlsnrJGmqLioRDj JQVya28cJsgzGoaLFHZMQD8bfW7ay3cQ6k8Hxlz99MKNpxxoV0fju1Blpfrqpa2bCSpQ5w0PrVHmJrW4eiBEd/Rg/XV497PGatAcwQr7hD5/uG/GLoqBbCMyR9S11Ot90aprL0Gd9cOlM4 VngzCD/2JqFmvhA== | iqn.2013-09.local.localdomain:c6iscsit.target11 | 3260 | 1 |3 | | | | | Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 3:48 PM, Dafna Ron wrote: > well... that actually sounds like a bug to me - can you open it once we > manage to find a solution? > Yes; if needed I'll start from a clean config, replicate the issue and send full logs, once we have a solution. > did you get anything in multipath -ll? No. I got what I wrote before: [root@ovnode03 ~]# multipath -ll Feb 07 16:09:55 | multipath.conf +5, invalid keyword: getuid_callout Feb 07 16:09:55 | multipath.conf +18, invalid keyword: getuid_callout [root@ovnode03 ~]# > > one more question... did you try to put the host in maintenance and then > activate it again? > > > Yes, more than once, but with the same results. Tried just now and it fails the same way. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
On Fri, Feb 7, 2014 at 3:11 PM, Dafna Ron wrote: > what happens when you try to update from the UI? (edit the storage) > I think it's a chicken-and-egg problem... ;-) If I go to Storage, select storage domain OV01 and edit, I see all fields empty, so I cannot edit; probably it has to authenticate first... or not? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
Where can I find the function that encrypts the iSCSI CHAP password and puts the encrypted value into the storage_server_connections table? That way I can try to reinsert it and verify. Thanks Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] failed nested vm only with spice and not vnc
On Sun, Feb 9, 2014 at 11:13 PM, Itamar Heim wrote: > > is there a bug for tracking this? Not yet. What should I file the bug against: spice, spice-protocol, or qemu-kvm itself? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Unable to activate iSCSI domain after crash of host
On Mon, Feb 10, 2014 at 10:56 AM, Alon Bar-Lev wrote: > > > - Original Message - >> From: "Dafna Ron" >> To: "Gianluca Cecchi" , "Alon Bar-Lev" >> >> Cc: "users" >> Sent: Monday, February 10, 2014 11:31:33 AM >> Subject: Re: [Users] Unable to activate iSCSI domain after crash of host >> >> adding Alon >> >> On 02/08/2014 05:42 PM, Gianluca Cecchi wrote: >> > where can I find the function that encrypts iscsi chap password and >> > put the encrypted value into storage_server_connections table? >> > So that I can try to reinsert it and verify. > > You can just put plain password, it should work... > > If you want to encrypt use: > > echo -n 'PASSWORD' | openssl pkeyutl -encrypt -certin -inkey > /etc/pki/ovirt-engine/certs/engine.cer | openssl enc -a | tr -d '\n' > > But Dafna, isn't there a way at UI to re-specify password, so it be encrypted > by the application? > >> > >> > Thanks >> > Gianluca >> >> >> -- >> Dafna Ron >> In my opinion when I first defined the ISCSI domain and input a wrong password there was something not correctly managed when I then used the correct one. In fact in my opinion it seems there is no correspondence between storage_domains table and storage_server_connections table. 
If I take a glusterfs domain named gv01 I see this: engine=# select * from storage_server_connections where id=(select storage from storage_domains where storage_name='gv01'); id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retrans --+---+---+--+-+--++--+---+--- +-+---+- 3b6a-aff3-47fa-b7ca-8e809804cbe2 | ovnode01:gv01 | | | | ||7 | | glusterfs | | | (1 row) Instead for this ISCSI domain named OV01 engine=# select * from storage_server_connections where id=(select storage from storage_domains where storage_name='OV01'); id | connection | user_name | password | iqn | port | portal | storage_type | mount_options | vfs_type | nfs_version | nfs_timeo | nfs_retran s ++---+--+-+--++--+---+--+-+---+--- -- (0 rows) In particular: engine=# select * from storage_domains where storage_name='OV01'; id |storage | storage_name | storage_description | storage_comment | storage_pool_id| available_disk_size | used_disk_size | commited_disk_size | actual_images_size | status | storage_pool_name | storage_type | storage_domain_type | storage_domain_format_type | last_time_used_as_master | storage_domain_shared_status | recoverable --++--+-+-+--- ---+-+++++---+ --+-++--+--+- f741671e-6480-4d7b-b357-8cf6e8d2c0f1 | uqe7UZ-PaBY-IiLj-XLAY-XoCZ-cmOk-cMJkeX | OV01 | | | 546cd2 9c-7249-4733-8fd5-317cff38ed71 | 44 | 5 | 10 | 1 | 4 | ISCSI | 3 | 0 | 3 | 0 |2 | t (1 row) engine=# select * from storage_pool where id='546cd29c-7249-4733-8fd5-317cff38ed71'; id | name | description | storage_pool_type | storage_pool_format_type | status | master_domain_version | spm_vds_id | compatibility_version | _create_date | _update_date | quota_enforcement_type | free_text_commen t --+---+-+---+--++---+- ---+---+---+---++- -- 546cd29c-7249-4733-8fd5-317cff38ed71 | ISCSI | | 3 | 3| 4 | 2 | | 3.3 | 2014-02-05 11:46:50.797079+01 | 2014-02-05 23:53:18.864716+01 | 0 | (1 row) engine=# select * from storage_server_connection
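For reference, Alon's encryption pipeline can be sanity-checked end to end with a throwaway key pair standing in for the engine certificate (the /tmp file names and the sample password below are illustrative, not the real engine paths or credentials):

```shell
# Generate a throwaway self-signed certificate; this stands in for
# /etc/pki/ovirt-engine/certs/engine.cer and its private key.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=test \
    -keyout /tmp/test-key.pem -out /tmp/test-cert.pem 2>/dev/null

# Encrypt a CHAP password the way Alon's one-liner does
# (public-key encrypt, then base64 on a single line):
enc=$(printf '%s' 'my_ovirt_setup_pwd' \
      | openssl pkeyutl -encrypt -certin -inkey /tmp/test-cert.pem \
      | openssl enc -a | tr -d '\n')

# Round-trip: only the matching private key can decrypt it,
# which is how the engine reads the value back from the table.
dec=$(printf '%s' "$enc" | openssl enc -d -a -A \
      | openssl pkeyutl -decrypt -inkey /tmp/test-key.pem)
echo "$dec"
```

Note the ciphertext is different on every run (RSA padding is randomized), so two encryptions of the same password will not compare equal in the database; only a decryption round-trip can verify the stored value.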
Re: [Users] failed nested vm only with spice and not vnc
On Mon, Feb 10, 2014 at 10:22 AM, Gianluca Cecchi wrote: > On Sun, Feb 9, 2014 at 11:13 PM, Itamar Heim wrote: > >> >> is there a bug for tracking this? > > Not yet. What to bug against? spice or spice-protocol or qemu-kvm itself? > > Gianluca Actually it seems more complicated: also with vnc as display I don't get the L2 VM in a paused state as with spice, but if I use Run Once and select the CentOS 6.4 CD, it starts and then blocks at this screen (see link below), so it has to do with nesting itself, which is not so viable, at least with this cpu (Intel(R) Xeon(R) CPU E7-4870 @ 2.40GHz) and this L0: https://drive.google.com/file/d/0BwoPbcrMv8mvVHJybUw2dGFlTjg/edit?usp=sharing At least with the console set to vnc I can power off my L2 VM without problems and continue to work; with a spice console instead, as soon as I power off the paused L2 VM, the hypervisor (that is, the L1 vm) completely freezes with a black console and I need to power it off. My problem is similar to this one: https://bugzilla.redhat.com/show_bug.cgi?id=922075 but in my case L0 is ESXi 5.1, 799733, so I think I have not much chance to trick it... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Dedicated Bonding interface for Gluster
On Wed, Feb 12, 2014 at 6:18 PM, ml ml wrote: > > I guess the brick details are stored in the postgres database and everything > else after will fail?! > > Am i the only one with dedicated migration/storage interfaces? :) > > Thanks, > Mario > One of the workarounds I found, and that works for me as I'm not using dns, is this: - for the engine, hosts node01 and node02 have their ip on the mgmt network - for node01 and node02, their own ip addresses are on the dedicated gluster network so for example 10.4.4.x = mgmt 192.168.3.x = gluster dedicated before: on engine /etc/hosts 10.4.4.58 node01 10.4.4.59 node02 10.4.4.60 engine on node01 10.4.4.58 node01 10.4.4.59 node02 10.4.4.60 engine after: on engine (the same as before) /etc/hosts 10.4.4.58 node01 10.4.4.59 node02 10.4.4.60 engine on node01 #10.4.4.58 node01 #10.4.4.59 node02 192.168.3.1 node01 192.168.3.3 node02 10.4.4.60 engine No operations on the RDBMS needed. HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] about live snapshot and qemu-kvm
On Thu, Feb 13, 2014 at 8:42 PM, Douglas Schilling Landgraf wrote: > > Just for the record, we have setup a jenkins job to rebuild qemu-kvm for el6 > until we get it officially from centos: > http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ Great! And thanks! Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Live Migrate Disks
On Thu, Feb 13, 2014 at 8:56 PM, Maurice James wrote: > Is it possible to live migrate disks like storage vMotion? > It has been supported since 3.2. See http://www.ovirt.org/OVirt_3.2_release_notes So in the current stable 3.3.3 it should be ok. Keep in mind that it depends on live snapshots, so the limitations on RHEL / CentOS 6.5 apply. In that case you should use the recent qemu-kvm rebuilt by the jenkins job, as noted in other threads today: http://jenkins.ovirt.org/view/Packaging/job/qemu-kvm-rhev_create_rpms_el6/ HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] [rhevm-api] Assign IP address to VM using Java SDK
On Fri, Feb 14, 2014 at 11:22 AM, Juan Hernandez wrote: > On 02/14/2014 11:03 AM, Tejesh M wrote: >> In the two screenshots which i shared earlier, in that "No Cloud-Init >> 2.png" is "Run Once" screenshot, it has only 4 options, >> >> i. Boot Options >> ii. Host >> iii. Display Protocol >> iv. Custom Properties >> >> After selecting VM, when i click Run Once, that screen is getting >> appears as in screenshot, no option for cloud-init. >> >> Do i need to update the RHEV-M? >> > > Ok, I thought you were using RHEV-M 3.3, but apparently you are using > 3.2. Can you verify? > >> You are using the "Run" and "New" options, but you have to use the "Run >> Once" option. There, in "Run Once" is where we have cloud-init support >> in 3.3. >> >> > And in fact in my oVirt 3.3.3 run once options I have this kind of output: https://drive.google.com/file/d/0BwoPbcrMv8mvTlphZWJyV1ZOdmc/edit?usp=sharing HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Storage Performance Issue !
On Wed, Feb 19, 2014 at 1:45 PM, Sven Kieske wrote: > Just as a quick shot: > > You really may want to check which IO scheduler runs inside the > vm. you probably want deadline or noop instead of CFQ, which > can have a huge performance impact inside the vms. BTW: I'm actually already doing this (setting the scheduler to deadline) at the hypervisor level for my oVirt hosts, even if it seems to me it is not the default... What do you think about it? This way a single process (i.e. a vm) cannot have such a huge performance impact and you have nothing to do at the VM level... Even on recent fedora (19) I see some situations where cfq is suboptimal in case of heavy I/O operations... Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
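For the record, the per-device scheduler can be checked and switched at runtime through sysfs; the active one is the bracketed entry. A small sketch (the device name sda and the sample sysfs content are assumptions for illustration):

```shell
# On a real host the current scheduler for a disk lives in
#   /sys/block/sda/queue/scheduler
# and switching at runtime (as root, not reboot-safe) is:
#   echo deadline > /sys/block/sda/queue/scheduler
# To persist, the usual route is an elevator=deadline kernel parameter
# or a udev/tuned rule.

# The sysfs file lists all schedulers with the active one in brackets;
# extracting the active entry:
line='noop anticipatory [deadline] cfq'   # sample sysfs content (assumed)
active=$(printf '%s\n' "$line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "$active"
```

This makes it easy to script a quick audit across hosts before and after changing the default.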
Re: [Users] oVirt 3.4 pre-release and GlusterFS support (F19)
On Thu, Feb 20, 2014 at 9:23 PM, Brad House wrote: > The question is is this expected behavior, and if so, is it because > of the hosted engine? Or is this some form of regression from the > advertised feature list of oVirt 3.3? Anything I should try or look at? > I'm obviously concerned about the FUSE overhead with Gluster and would > like to avoid that if possible. See this thread I started and the answers; hope it helps in understanding this better. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] oVirt 3.4 pre-release and GlusterFS support (F19)
On Thu, Feb 20, 2014 at 11:19 PM, Gianluca Cecchi wrote: > On Thu, Feb 20, 2014 at 9:23 PM, Brad House wrote: > >> The question is is this expected behavior, and if so, is it because >> of the hosted engine? Or is this some form of regression from the >> advertised feature list of oVirt 3.3? Anything I should try or look at? >> I'm obviously concerned about the FUSE overhead with Gluster and would >> like to avoid that if possible. > > > See this thread I started and the answers. > HIH understanding better, > Gianluca Missed the thread link ;-) http://lists.ovirt.org/pipermail/users/2014-January/019797.html ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] quick start guide and ovirt-release rpm broken link
Hello, on the page http://www.ovirt.org/Quick_Start_Guide there was a broken link to the ovirt release rpm. I created two different links, for fedora (ovirt-release-fedora.noarch.rpm) and CentOS (ovirt-release-el.noarch.rpm). Feel free to adapt them if there is something else to consider. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] oVirt 3.5 planning
>>> 2014-02-24 17:59 GMT+01:00 Itamar Heim : with oVirt 3.4 getting close to GA with many many great features, time to collect requests for 3.5... Signed rpms as in: http://lists.ovirt.org/pipermail/users/2014-January/019627.html and the mentioned ticket inside Dan answer: https://fedorahosted.org/ovirt/ticket/99 Hopefully in 3.4 too... ;-) Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Can't install ovirt 3.3 on CentOS 6.5
On Wed, Feb 26, 2014 at 10:45 AM, Alon Bar-Lev wrote: > > If openssl is not installed properly, we cannot rely on it completely. > We can workaround one issue but there will always be another. > openssl just like any other package should be installed and have full > integrity. > BTW I can confirm that just yesterday I installed, as a test, a stable 3.3.3 engine on a CentOS 6.5 server and had no problems at all, using standard packages and the current repos. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] ovirt engine in high availability
Hello, I have set up a basic test environment putting the engine in HA. I think it could complement the self hosted engine. As this could be of interest for enterprise installations, I used CentOS 6.5 as the OS for the engine. Even if in the upcoming RH EL 7 it seems corosync will replace cman, in this 6.5 test I used cman/pacemaker with pcs commands. The engine servers are two vSphere VMs, so I tested the configuration both without stonith and with the vSphere CLI sdk and the fence_vmware fencing agent. Also, as this kind of environment could be suitable for DR too, I set up the filesystem resource with drbd in active/passive. There are many possible configurations; to stick with an environment similar to the standalone engine, I configured the whole stack of resources in one group, including the nfs share for the default ISO-DOMAIN. Actually, in this kind of environment it would be preferable to have a dedicated nfs resource on another ip, so that it doesn't influence the ovirt-engine resource, which is more critical. In the same way one could choose to put the PostgreSQL resource as a separate one, optimizing and balancing the nodes' utilization. At the moment I have tested successfully on ovirt-engine-3.3.3-2.el6.noarch, and I have not yet configured any hypervisor hosts. I will do that in the next days. I also plan to test upgrades to 3.3.4 and 3.4.0, to see if maintenance activity would prove too complex and be a show stopper. In the meantime, what I tested to be transparent for a client session connected to the webadmin portal is: - relocation of resources - power off of the passive node - power off of the master node By transparent I mean that in the same browser window the user gets the login page (the same as on session timeout) and after login sees the same situation as before. Obviously the PostgreSQL rdbms has stopped (or crashed, in the third case), so problems could arise as if you had stopped your db or powered off your server in a single-server environment.
If there is interest I can post my configuration steps to a wiki page on the oVirt web site (after learning the formatting syntax... ;-). The page name could be something like "clustered_engine". In the meantime, if you have any scenario I can test to verify consistency, you are welcome to suggest it. Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Reimporting storage domains after reinstalling ovirt
On Mon, Mar 17, 2014 at 2:18 AM, Boudewijn Ector wrote: > > Okay that sounds like database permissions: > > > [root@Xovirt-engine]# cat > /etc/ovirt-engine/engine.conf.d/10-setup-database.conf > ENGINE_DB_HOST="localhost" > ENGINE_DB_PORT="5432" > ENGINE_DB_USER="engine" > ENGINE_DB_PASSWORD="" > ENGINE_DB_DATABASE="engine" > ENGINE_DB_SECURED="False" > ENGINE_DB_SECURED_VALIDATION="False" > ENGINE_DB_DRIVER="org.postgresql.Driver" > ENGINE_DB_URL="jdbc:postgresql://${ENGINE_DB_HOST}:${ENGINE_DB_PORT}/${ENGINE_DB_DATABASE}?sslfactory=org.postgresql.ssl.NonValidatingFactory" > > I tried to reset the database's password using this in the psql shell: > > alter user engine WITH password ''; > (the same password as above) > > Still authentication fails, but when I do this: > > psql -h localhost -p 5432 -U engine engine > > It works fine... O gosh more debugging ;). Any clue where I should have > a look? > I just tried copying the old /etc/ovirt* stuff over /etc/overt* so both > configs and db are sync'ed again. To no avail. > > Thanks guys! > > > Boudewijn > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users For PostgreSQL access you have to check your previous settings in /var/lib/pgsql/data; the relevant files are pg_hba.conf and postgresql.conf. On a standard ovirt 3.3.3 on CentOS 6.5 my config in pg_hba.conf is: local all all ident host engine engine 0.0.0.0/0 md5 host engine engine ::0/0 md5 host all all 127.0.0.1/32 ident host all all ::1/128 ident If I compare the modifications made by engine-setup with the pre-defined ones: [root@ovirteng02 data]# diff postgresql.conf postgresql.conf.20140301072333 64,65c64 < # max_connections = 100 # (change requires restart) < max_connections = 150 --- > max_connections = 100 # (change requires restart) [root@ovirteng02 data]# diff pg_hba.conf pg_hba.conf.20140301072333 71,72d70 < host engine engine 0.0.0.0/0 md5 < host engine engine ::0/0 md5 Also check your PostgreSQL version against the original one.
HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
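One detail worth stressing: pg_hba.conf is evaluated top-down and the first matching line wins, so the md5 lines for the engine user must come before the generic ident ones. A toy first-match check over entries like those quoted above (simplified: it ignores the address column, and /tmp/pg_hba.conf is just a sample file for illustration):

```shell
# Sample pg_hba.conf with the engine lines before the generic ident rules
cat > /tmp/pg_hba.conf <<'EOF'
local all    all                 ident
host  engine engine 0.0.0.0/0    md5
host  engine engine ::0/0        md5
host  all    all    127.0.0.1/32 ident
host  all    all    ::1/128      ident
EOF
# The first "host" rule matching db=engine user=engine decides the auth
# method for a TCP connection such as: psql -h localhost -U engine engine
method=$(awk '$1=="host" && ($2=="engine"||$2=="all") && ($3=="engine"||$3=="all") {print $5; exit}' /tmp/pg_hba.conf)
echo "$method"
```

If the engine-specific md5 lines were placed after the "host all all ... ident" rules, the ident rule would win instead and password authentication over TCP would fail, which matches the symptoms described above.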
Re: [Users] [Ann] oVirt 3.4 GA Releases
On Thu, Mar 27, 2014 at 10:52 AM, Brian Proffitt wrote: > The existing repository ovirt-stable has been updated for delivering this > release without the need of enabling any other repository. Is there a way to migrate from 3.3.3 to 3.3.4 without going directly to 3.4? Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] A new release also means a new Download page
On Thu, Mar 27, 2014 at 11:38 AM, Brian Proffitt wrote: > All: > > As you may have noticed, the next version of oVirt, 3.4, came out today, > thanks to the efforts of a great development team! > > Timed with this release, the Download page [1] on the oVirt site has been > redesigned to make it easier for users to find exactly what they need when > downloading and installing oVirt. Inside the page I see various "6.4" for CentOS/RHEL; probably they should be changed to 6.5 (or 6.4/6.5)? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Fwd: Trying to connect via SPICE or VNC to VM using a Macbook...
On Thu, Mar 27, 2014 at 9:55 PM, Adam Stracener wrote: > Got oVirt up and running like a champ and created my first VM with it but > the issue I'm having now is getting to console into that VM to setup SSH and > a few basics. > I've installed virt-manager (RemoteViewer) when i put in spice:// I get > Unable to connect to the graphic server. > > I would love to use the .vv files that download when I click CONSOLE but i > have yet to find a way to wrap .vv into Remote Viewer and pass that data to > it. Any thoughts? This is the only issue I am having so far. Not tried myself, but see here http://www.ovirt.org/SPICE_Remote-Viewer_on_OS_X and, in case, the whole thread from February this year: http://lists.ovirt.org/pipermail/users/2014-February/021087.html HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Cannot add IPA server to ovirt
On Fri, Mar 28, 2014 at 9:44 AM, Martin Perina wrote: > Hi, > > this error message means, that engine-manage-domains cannot found any > KDC (kerberos domain controller) servers registered for your domain. > To verify this could you please execute: > > dig _kerberos._tcp.itsmart.local SRV > > If you domain is configured correctly (including kerberos support) the output > should look similar to (assuming you have configured two kerberos servers: > krb1.itsmart.local and krb2.itsmart.local): > > _kerberos._tcp.itsmart.local. 3600 IN SRV 10 0 88 krb1.itsmart.local > _kerberos._tcp.itsmart.local. 3600 IN SRV 10 0 88 krb2.itsmart.local > > > Thanks > > Martin Perina > > > - Original Message - >> From: "Demeter Tibor" >> To: users@ovirt.org >> Sent: Friday, March 28, 2014 9:19:53 AM >> Subject: [Users] Cannot add IPA server to ovirt >> >> Hi, >> >> I made an IPA server for testing purposes, but I cannot add to ovirt 3.4. The >> IPA server seems to be working good. >> >> When I add IPA to ovirt, I get this error mesage: >> >> >> >> [root@ovirttest etc]# engine-manage-domains add --domain=itsmart.local >> --user=admin --provider=ipa >> --ldap-servers=ldap1.itsmart.local,ldap2.itsmart.local >> No KDC can be obtained for domain itsmart.local >> >> >> >> >> What does mean this? >> >> Can me help anyone? 
>> >> >> >> >> Thanks, >> >> >> >> >> Tibor >> >> >> >> >> >> >> >> >> >> ___ >> Users mailing list >> Users@ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > ___ > Users mailing list > Users@ovirt.org > http://lists.ovirt.org/mailman/listinfo/users Based on previous documents I read (I don't remember the link now) and the fact that I'm using bind on CentOS 6.4 for DNS, I set this in my /var/named/data/forward.zone file (infra is my dns server and localdomain.local is my domain name): ; ldap servers _ldap._tcp IN SRV 0 100 389 infra ;kerberos realm _kerberos IN TXT LOCALDOMAIN.LOCAL ; kerberos servers _kerberos._tcp IN SRV 0 100 88 infra _kerberos._udp IN SRV 0 100 88 infra _kerberos-master._tcp IN SRV 0 100 88 infra _kerberos-master._udp IN SRV 0 100 88 infra _kpasswd._tcp IN SRV 0 100 464 infra _kpasswd._udp IN SRV 0 100 464 infra ;ntp server _ntp._udp IN SRV 0 100 123 infra HIH, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] oVirt Weekly Meeting Minutes -- 2014-04-02
On Thu, Apr 3, 2014 at 2:29 PM, Doron Fediuck wrote: > Yesterday we had a DST glitch. > Starting yesterday we'll follow the calendar, which uses GMT+0 no DST. Where is this "we" based, just out of curiosity? Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Updating AIO from 3.3.4 to 3.4
Hello, I know that the AIO setup is only for experimenting and trying things out, but I find it very useful in several situations. So, having a 3.3.4 All-In-One installation on fedora19 and wanting to update it to 3.4, I have some doubts. In the release notes I see for general updates: On Fedora 19 you'll need to enable fedora-updates repository for having updated openstack packages --> OK. It is normally already enabled. On Fedora 19, you'll need to enable fedora-virt-preview repository for using Fedora 19 as node on 3.4 clusters --> how do I manage this? This is indeed a node too, so I have to enable virt-preview for it, correct? BTW: Is this note still true in general, or did the related virt packages go into fedora-updates, or is that planned in the coming days/weeks? When engine-setup is run, detects that a newer version is available, and outputs that I have to run "yum update ovirt-engine-setup", is it correct to say that in AIO setups I should actually run "yum update ovirt-engine-setup ovirt-engine-setup-plugin-allinone", or is the latter package not crucial to update before the new engine-setup is run? At the end of the engine update, can I simply put the AIO server into maintenance (as a node) and run "yum update" so that the vdsm (aka node) packages are updated too, and the server will become a 3.4-enabled node after reboot? Thanks, Gianluca ___ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
[Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
Hello, on F19 I upgraded the engine from 3.3.4 to 3.4.0. Then yum update ovirt-engine-dwh ovirt-engine-reports ... Running transaction Installing : ovirt-engine-dwh-setup-3.4.0-2.fc19.noarch 1/8 Installing : ovirt-engine-reports-setup-3.4.0-2.fc19.noarch 2/8 Updating : jasperreports-server-5.5.0-7.fc19.noarch 3/8 Updating : ovirt-engine-reports-3.4.0-2.fc19.noarch 4/8 Updating : ovirt-engine-dwh-3.4.0-2.fc19.noarch 5/8 Cleanup: ovirt-engine-reports-3.3.4-1.fc19.noarch 6/8 Cleanup: jasperreports-server-5.5.0-1.noarch 7/8 Cleanup: ovirt-engine-dwh-3.3.4-1.fc19.noarch 8/8 Verifying : jasperreports-server-5.5.0-7.fc19.noarch 1/8 Verifying : ovirt-engine-reports-setup-3.4.0-2.fc19.noarch 2/8 Verifying : ovirt-engine-dwh-setup-3.4.0-2.fc19.noarch 3/8 Verifying : ovirt-engine-dwh-3.4.0-2.fc19.noarch 4/8 Verifying : ovirt-engine-reports-3.4.0-2.fc19.noarch 5/8 Verifying : jasperreports-server-5.5.0-1.noarch 6/8 Verifying : ovirt-engine-reports-3.3.4-1.fc19.noarch 7/8 Verifying : ovirt-engine-dwh-3.3.4-1.fc19.noarch 8/8 Dependency Installed: ovirt-engine-dwh-setup.noarch 0:3.4.0-2.fc19 ovirt-engine-reports-setup.noarch 0:3.4.0-2.fc19 Updated: ovirt-engine-dwh.noarch 0:3.4.0-2.fc19 ovirt-engine-reports.noarch 0:3.4.0-2.fc19 Dependency Updated: jasperreports-server.noarch 0:5.5.0-7.fc19 Complete! But now I see I have neither ovirt-engine-dwh-setup nor ovirt-engine-reports-setup as in 3.3.4... What to do? I only see this thread http://lists.ovirt.org/pipermail/users/2014-March/022205.html and other info suggesting that for a new installation engine-setup should take care of dwh too (but I don't know about reports...) What about pre-existing 3.3.4 installations with dwh and reports?
The engine-setup for upgrade from 3.3.4 to 3.4 didn't notice anything about dwh/reports and didn't update any package: --== CONFIGURATION PREVIEW ==-- Engine database name: engine Engine database secured connection : False Engine database host: localhost Engine database user name : engine Engine database host name validation: False Engine database port: 5432 NFS setup : True Firewall manager: iptables Update Firewall : True Configure WebSocket Proxy : True Host FQDN : tekkaman.localdomain.local NFS mount point : /ISO Set application as default page : True Configure Apache SSL: False Require packages rollback : False Upgrade packages: True Please confirm installation settings (OK, Cancel) [OK]: If I try to run engine-setup again I get [root@tekkaman ~]# engine-setup [ INFO ] Stage: Initializing [ INFO ] Stage: Environment setup Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-aio.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'] Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405184435.log Version: otopi-1.2.0 (otopi-1.2.0-1.fc19) [ ERROR ] Failed to execute stage 'Environment setup': initial_value must be unicode or None, not str [ INFO ] Stage: Clean up Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405184435.log [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Execution of setup failed If I patch as in http://gerrit.ovirt.org/#/c/26005/ I get # engine-setup [ INFO ] Stage: Initializing [ ERROR ] Failed to execute stage 'Initializing': type object 'ConfigEnv' has no attribute 'LEGACY_REPORTS_WAR' [ INFO ] Stage: Clean up Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405190842.log [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termina
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sat, Apr 5, 2014 at 10:43 PM, Alon Bar-Lev wrote:
>>
>> If I try to run engine-setup again I get
>>
>> [root@tekkaman ~]# engine-setup
>> [ INFO ] Stage: Initializing
>> [ INFO ] Stage: Environment setup
>> Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-aio.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
>> Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405184435.log
>> Version: otopi-1.2.0 (otopi-1.2.0-1.fc19)
>> [ ERROR ] Failed to execute stage 'Environment setup': initial_value must be unicode or None, not str
>> [ INFO ] Stage: Clean up
>> Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405184435.log
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Execution of setup failed
>
> Please attach setup log, need to see where it comes from.

Here it is:
https://drive.google.com/file/d/0BwoPbcrMv8mvMnRMSm5oTF9NRUU/edit?usp=sharing

>>
>> If I patch as in
>> http://gerrit.ovirt.org/#/c/26005/
>
> You should not patch unless we find an issue, but this error is strange, as the
> patch does contain this variable[1]
>
> [1] http://gerrit.ovirt.org/#/c/26005/4/packaging/setup/ovirt_engine_setup/reportsconstants.py,cm
>
>>
>> I get
>> # engine-setup
>> [ INFO ] Stage: Initializing
>> [ ERROR ] Failed to execute stage 'Initializing': type object 'ConfigEnv' has no attribute 'LEGACY_REPORTS_WAR'
>> [ INFO ] Stage: Clean up
>> Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140405190842.log
>> [ INFO ] Stage: Pre-termination
>> [ INFO ] Stage: Termination
>> [ ERROR ] Execution of setup failed
>>
>> any hint?
>
> Thanks!

I tried that patch because I had also found this thread a few days ago, with the same error, and it was suggested there to do so:
http://lists.ovirt.org/pipermail/users/2014-April/023083.html

The setup log for the run after patching is here:
https://drive.google.com/file/d/0BwoPbcrMv8mvWEc5UVV4a1BuNlk/edit?usp=sharing

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sat, Apr 5, 2014 at 11:54 PM, Alon Bar-Lev wrote:
>> >
>> > Please attach setup log, need to see where it comes from.
>>
>> here it is:
>> https://drive.google.com/file/d/0BwoPbcrMv8mvMnRMSm5oTF9NRUU/edit?usp=sharing
>
> This was solved in[1], will be available in the next z-stream.
>
> You can manually modify it, one letter... :)
>
> [1] http://gerrit.ovirt.org/#/c/26118/

Ok. Are you telling me that I have to restore the original
/usr/share/ovirt-engine/setup/ovirt_engine_setup/reportsconstants.py
/usr/share/ovirt-engine/setup/plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.py
files that I modified to apply http://gerrit.ovirt.org/#/c/26005/, and then apply http://gerrit.ovirt.org/#/c/26118/ instead, or do I have to apply it on top of the current, already-modified environment?

Gianluca
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 12:09 AM, Alon Bar-Lev wrote:
>> I think you can yum reinstall ovirt-engine-dwh-setup, this should revert the
>> changes and then you can perform [1] change.
>
> Sorry... you have changed reports... so yum reinstall
> ovirt-engine-reports-setup will do the revert.

I reverted the two backed-up files and then applied the one-character patch. Now setup proceeds but fails at this stage:

...
[ INFO ] Stage: Misc configuration
[ INFO ] Backing up database localhost:engine to '/var/lib/ovirt-engine/backups/engine-20140406010658.a7sKzy.sql'.
[ INFO ] Updating Engine database schema
[ INFO ] Backing up database localhost:ovirt_engine_history to '/var/lib/ovirt-engine-dwh/backups/dwh-20140406010717.D2aTFC.sql'.
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Exporting data out of Jasper
[ INFO ] Regenerating Jasper's build configuration files
[ ERROR ] Failed to execute stage 'Misc configuration': Command './js-ant' failed to execute
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ INFO ] Restoring Engine database engine
[ INFO ] Rolling back DWH database schema
[ INFO ] Clearing DWH database ovirt_engine_history
[ INFO ] Restoring DWH database ovirt_engine_history
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140406010543.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

Log is here:
https://drive.google.com/file/d/0BwoPbcrMv8mvSVg3dk1lcU5EbW8/edit?usp=sharing

So I come back here: http://lists.ovirt.org/pipermail/users/2014-April/023083.html ;-)
Do I now have to try applying http://gerrit.ovirt.org/26005 again, as suggested?

BTW: the failed action above left the engine down, but running systemctl start ovirt-engine brought the service up again and the web admin portal is usable...

Gianluca
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 1:25 AM, Alon Bar-Lev wrote:
>
> """
> Target "gen-config-ce" does not exist in the project "buildomatic".
> """
>
> Hmm... it should have been gen-config without the -ce.
>
> The tags of ovirt-engine-dwh/ovirt-engine-reports are incorrect, or correct
> but the release misses fixes as far as I can see, so I do not know what
> exactly you have... But I think this[1] is the fix.
>
> [1] http://gerrit.ovirt.org/#/c/25998/

My packages should be the official ones at the moment:

rpm -q ovirt-engine-dwh ovirt-engine-reports ovirt-engine-dwh-setup ovirt-engine-reports-setup
ovirt-engine-dwh-3.4.0-2.fc19.noarch
ovirt-engine-reports-3.4.0-2.fc19.noarch
ovirt-engine-dwh-setup-3.4.0-2.fc19.noarch
ovirt-engine-reports-setup-3.4.0-2.fc19.noarch

Hmm... I also tried the reinstall approach, as I saw that some .pyc files showed up as modified after my first attempted patch, when I edited reportsconstants.py and deploy.py. I don't know if it's expected that the .pyc files are also modified after changing the .py files, but I now had:

# rpm -qV ovirt-engine-dwh ovirt-engine-reports ovirt-engine-dwh-setup ovirt-engine-reports-setup
S.5T.  /usr/share/ovirt-engine/setup/plugins/ovirt-engine-setup/ovirt-engine-dwh/legacy/config.py   --> the only one I expected
S.5T.  /usr/share/ovirt-engine/setup/plugins/ovirt-engine-setup/ovirt-engine-dwh/legacy/config.pyc
...T.  /usr/share/ovirt-engine/setup/ovirt_engine_setup/reportsconstants.pyc
S.5T.  /usr/share/ovirt-engine/setup/plugins/ovirt-engine-setup/ovirt-engine-reports/jasper/deploy.pyc

So:

# yum reinstall ovirt-engine-dwh ovirt-engine-reports ovirt-engine-dwh-setup ovirt-engine-reports-setup
Reinstalling:
 ovirt-engine-dwh            noarch  3.4.0-2.fc19  ovirt-stable  1.2 M
 ovirt-engine-dwh-setup      noarch  3.4.0-2.fc19  ovirt-stable   48 k
 ovirt-engine-reports        noarch  3.4.0-2.fc19  ovirt-stable  1.0 M
 ovirt-engine-reports-setup  noarch  3.4.0-2.fc19  ovirt-stable   49 k

[root@tekkaman ~]# rpm -qV ovirt-engine-dwh ovirt-engine-reports ovirt-engine-dwh-setup ovirt-engine-reports-setup
[root@tekkaman ~]#

Then I applied http://gerrit.ovirt.org/#/c/26118/ for config.py. But now I see that if I run engine-setup again it tries a fresh install and doesn't recognize that this is an already-configured environment (I pressed Ctrl-C at the RDBMS question to avoid further problems...):

[root@tekkaman ~]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-aio.conf', '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20140406013743.log
Version: otopi-1.2.0 (otopi-1.2.0-1.fc19)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization

--== PRODUCT OPTIONS ==--

Configure Reports on this host (Yes, No) [Yes]:

--== PACKAGES ==--

[ INFO ] Checking for product updates...
[ INFO ] No product updates found

--== NETWORK CONFIGURATION ==--

[WARNING] Failed to resolve tekkaman.localdomain.local using DNS, it can be resolved only locally
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
[ INFO ] iptables will be configured as firewall manager.

--== DATABASE CONFIGURATION ==--

Where is the Reports database located? (Local, Remote) [Local]: ^C
[ ERROR ] Failed to execute stage 'Environment customization': SIG2
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140406013743.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

Setup log here:
https://drive.google.com/file/d/0BwoPbcrMv8mvQnd4WGxuOENzZTg/edit?usp=sharing

Now it is time to go to sleep, to be sure not to make things worse... ;-) But if you have any hint about this problem tomorrow (today, actually... ;-) I can dig more.

Thanks again
Gianluca
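[Editor's note] On the side question above about whether .pyc files are expected to change after editing their .py sources: yes, CPython recompiles a module's cached bytecode whenever the source is newer, so edited .py files normally show modified .pyc files under rpm -qV. A small self-contained sketch (the module name "mod.py" is made up for illustration):

```python
import pathlib
import py_compile
import tempfile

# Throwaway workspace; "mod.py" is a hypothetical module name.
tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "mod.py"
pyc = tmp / "mod.pyc"

src.write_text("X = 1\n")
py_compile.compile(str(src), cfile=str(pyc))  # compile to an explicit path
first = pyc.read_bytes()

src.write_text("X = 2\n")                     # edit the source...
py_compile.compile(str(src), cfile=str(pyc))  # ...and recompile
second = pyc.read_bytes()

print(first != second)  # True: the cached bytecode changed with the source
```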
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 2:13 AM, Alon Bar-Lev wrote:
>
> This is a progress... :)

I have not understood in what sense, but I trust you...

>
> Now... do you have anything at:
>
> /usr/share/jasperreports-server/buildomatic/build_conf/default/master.properties
>
> Do you have:
>
> /var/lib/ovirt-engine-reports/build-conf/master.properties

Yes, see below. Note that I didn't complete the engine-setup process, but if this influences only my reports/dwh installation and not the engine itself, it's OK for me to run it again. I suppose in that case I have to drop my pre-existing reports/dwh databases first, and so lose my previous data?

[g.cecchi@tekkaman ~]$ cat /usr/share/jasperreports-server/buildomatic/build_conf/default/master.properties
appServerType=jboss7
# JBoss app server root dir
appServerDir=/usr/share/ovirt-engine
# database type
dbType=postgresql
# database location and connection settings
dbHost=localhost
dbPort=5432
dbUsername=engine_reports
dbPassword=
# JasperServer db name
js.dbName=ovirtenginereports
# web app name
# (set one of these to deploy to a non-default war file name)
webAppNameCE=ovirt-engine-reports
# JDBC driver
# (uncomment to change to a non-default setting)
#
# driver will be found here: /buildomatic/conf_source/db/postgresql/jdbc
#
# maven.jdbc.groupId=postgresql
# maven.jdbc.artifactId=postgresql
maven.jdbc.version=jdbc
# Skip JDBC Driver Deploy
# Flag used to skip JDBC driver deploying during deployment process
# (uncomment to change to a non-default setting)
deployJDBCDriver=false

[g.cecchi@tekkaman ~]$ sudo cat /var/lib/ovirt-engine-reports/build-conf/master.properties
# File locations
reportsHome=/var/lib/ovirt-engine-reports
reportsWar=/var/lib/ovirt-engine-reports/ovirt-engine-reports.war
currentConf=/var/lib/ovirt-engine-reports/build-conf
appServerDir=/var/lib/ovirt-engine-reports
appServerType=jboss7
# database type
dbType=postgresql
# database location and connection settings
dbHost=localhost
dbPort=5432
dbUsername=engine_reports
dbPassword=
js.dbName=ovirtenginereports
# web app name
# (set one of these to deploy to a non-default war file name)
webAppNameCE=ovirt-engine-reports
webAppNamePro=ovirt-engine-reports
# Database
maven.jdbc.groupId=postgresql
maven.jdbc.artifactId=postgresql
maven.jdbc.version=9.2-1002.jdbc4
deployJDBCDriver=false
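[Editor's note] When comparing two master.properties files like the ones above, a minimal Java-style properties parser is enough to spot the keys whose values disagree. A sketch, using shortened sample contents rather than the full files:

```python
def parse_properties(text):
    """Parse simple key=value lines, skipping comments and blanks."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

# Shortened samples of the two files quoted above.
default_conf = parse_properties("""
appServerType=jboss7
appServerDir=/usr/share/ovirt-engine
maven.jdbc.version=jdbc
""")

reports_conf = parse_properties("""
appServerType=jboss7
appServerDir=/var/lib/ovirt-engine-reports
maven.jdbc.version=9.2-1002.jdbc4
""")

# Keys present in both files whose values differ.
diff = {k for k in default_conf
        if k in reports_conf and default_conf[k] != reports_conf[k]}
print(sorted(diff))  # ['appServerDir', 'maven.jdbc.version']
```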
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 10:26 AM, Alon Bar-Lev wrote:
>
>
> - Original Message -
>> From: "Gianluca Cecchi"
>> To: "Alon Bar-Lev"
>> Cc: "users"
>> Sent: Sunday, April 6, 2014 10:57:42 AM
>> Subject: Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
>>
>> On Sun, Apr 6, 2014 at 2:13 AM, Alon Bar-Lev wrote:
>>
>> >
>> > This is a progress... :)
>>
>> I have not understood in what sense, but I trust you...
>>
>> >
>> > Now... do you have anything at:
>> >
>> > /usr/share/jasperreports-server/buildomatic/build_conf/default/master.properties
>> >
>> > Do you have:
>> >
>> > /var/lib/ovirt-engine-reports/build-conf/master.properties
>>
>> yes see below. Note that I didn't complete the engine-setup process
>> But if this influences only my reports/dwh installation and not engine
>> itself is ok for me to run it again
>> I suppose in case I have to drop before my preexisting reports/dwh
>> rdbms and so loose my previous data?
>
> OK, this is a bug, as /var/lib/ovirt-engine-reports/build-conf/master.properties should
> have been left only on success, while I see that if it fails it is left; I will open a bug.
>
> Please remove /var/lib/ovirt-engine-reports/build-conf and re-run; setup
> should attempt to upgrade again.
>
> Thanks,
> Alon

Ok, done. I see the bugzilla entry you opened.
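[Editor's note] The cleanup step Alon describes can be sketched as a small helper. The path and the idea come from the message above; the function name and the temporary-directory demonstration are illustrative additions:

```python
import shutil
import tempfile
from pathlib import Path

def remove_stale_buildconf(path):
    """Remove a leftover build-conf directory so engine-setup
    will attempt the reports upgrade again (see bug 1084749)."""
    p = Path(path)
    if p.is_dir():
        shutil.rmtree(p)
        return True   # removed
    return False      # nothing to do

# In the thread the real path is /var/lib/ovirt-engine-reports/build-conf;
# here we exercise the helper against a throwaway directory instead.
tmp = Path(tempfile.mkdtemp()) / "build-conf"
tmp.mkdir()
print(remove_stale_buildconf(tmp))   # True: directory existed and was removed
print(remove_stale_buildconf(tmp))   # False: already gone
shutil.rmtree(tmp.parent, ignore_errors=True)  # tidy up the workspace
```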
Now:

[ INFO ] Stage: Setup validation
During execution engine service will be stopped (OK, Cancel) [OK]:
[ INFO ] Cleaning stale zombie tasks

--== CONFIGURATION PREVIEW ==--

Engine database name                  : engine
Engine database secured connection    : False
Engine database host                  : localhost
Engine database user name             : engine
Engine database host name validation  : False
Engine database port                  : 5432
NFS setup                             : True
Firewall manager                      : iptables
Update Firewall                       : True
Configure WebSocket Proxy             : True
Host FQDN                             : tekkaman.localdomain.local
NFS mount point                       : /ISO
DWH installation                      : True
DWH database name                     : ovirt_engine_history
DWH database secured connection       : False
DWH database host                     : localhost
DWH database user name                : engine_history
DWH database host name validation     : False
DWH database port                     : 5432
Reports installation                  : True
Reports database name                 : ovirtenginereports
Reports database secured connection   : False
Reports database host                 : localhost
Reports database user name            : engine_reports
Reports database host name validation : False
Reports database port                 : 5432

Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping dwh service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Backing up database localhost:engine to '/var/lib/ovirt-engine/backups/engine-20140406105216.IjEVb2.sql'.
[ INFO ] Updating Engine database schema
[ INFO ] Backing up database localhost:ovirt_engine_history to '/var/lib/ovirt-engine-dwh/backups/dwh-20140406105234.jk3lc9.sql'.
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Exporting data out of Jasper
[ INFO ] Regenerating Jasper's build configuration files
[ ERROR ] Failed to execute stage 'Misc configuration': Command './js-ant' failed to execute
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ INFO ] Restoring Engine database engine
[ INFO ] Rolling back DWH database schema
[ INFO ] Clearing DWH database ovirt_engine_history
[ INFO ] Restoring DWH database ovirt_engine_history
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140406105032.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

We come back to the problem you outlined, with:

BUILD FAILED
Target "gen-config-ce" does not exist in the project "buildomatic".
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 11:34 AM, Alon Bar-Lev wrote:
>
> You need to remove and apply this[1] to fix that problem, I thought you
> applied it.
>
> [1] http://gerrit.ovirt.org/#/c/25998/
>
>>
>> Gianluca
>>

Ah ok, right, I had forgotten that. Now everything ended OK (see below).

I see that the user is now defined in the db instead of in a plain file as in 3.3:

ovirtenginereports=# select * from jiuser;
 id  | username      | tenantid | fullname         | emailaddress | password                         | externallydefined | enabled | previouspasswordchangetime
-----+---------------+----------+------------------+--------------+----------------------------------+-------------------+---------+----------------------------
   5 | jasperadmin   |        1 | jasperadmin User |              | 349AFAADD5C5A2BD477309618DCD58B9 | f                 | f       | 2013-10-17 00:58:10.01
   6 | anonymousUser |        1 | anonymousUser    |              | CF35D2E88192D6EB                 | f                 | f       | 2013-10-17 00:58:10.346
 129 | ovirt-admin   |        1 | ovirt-admin      |              | D41B13B925B503A13CF5EBB35EB9BFF7 | f                 | t       |
 130 | admin         |        1 | admin            |              | 5C598ADA6CCE35AA                 | f                 | f       |
(4 rows)

How can I reset the password of ovirt-admin, so that I can try it and check the previous data? (Previously I used it only at certain times and looked at the clear-text file /usr/share/ovirt-engine-reports/reports/users/ovirt-002dadmin.xml; I have forgotten it ;-).)

At the end of the e-mail is what I wrote as a summary; if it's right I can add it to the release notes...

...
Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Cleaning async tasks and compensations
[ INFO ] Checking the Engine database consistency
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping dwh service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Backing up database localhost:engine to '/var/lib/ovirt-engine/backups/engine-20140406140343.eSKtje.sql'.
[ INFO ] Updating Engine database schema
[ INFO ] Backing up database localhost:ovirt_engine_history to '/var/lib/ovirt-engine-dwh/backups/dwh-20140406140401.A6N4NM.sql'.
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Regenerating Jasper's build configuration files
[ INFO ] Exporting data out of Jasper
[ INFO ] Backing up database localhost:ovirtenginereports to '/var/lib/ovirt-engine-reports/backups/reports-20140406140547.a7FquY.sql'.
[ INFO ] Deploying Jasper
[ INFO ] Importing data into Jasper
[ INFO ] Configuring Jasper Java resources
[ INFO ] Configuring Jasper Database resources
[ INFO ] Customizing Jasper
[ INFO ] Customizing Jasper metadata
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up

--== SUMMARY ==--

SSH fingerprint: FE:C6:E1:DE:5D:44:DC:30:CB:12:A3:F7:01:3A:49:83
Internal CA 55:D5:F1:1F:1A:65:45:61:D5:17:A5:74:36:84:65:3B:E5:53:6E:84
Web access is enabled at:
http://tekkaman.localdomain.local:80/ovirt-engine
https://tekkaman.localdomain.local:443/ovirt-engine

--== END OF SUMMARY ==--

[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Restarting nfs services
[ INFO ] Starting dwh service
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20140406140949-setup.conf'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140406140253.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully

The workflow for the upgrade from already-configured 3.3 dwh and reports should be this. Coming from 3.3.4 with dwh and reports already configured (in my case I started with ovirt-engine-dwh-3.3.4-1.fc19.noarch and ovirt-engine-reports-3.3.4-1.fc19.noarch), after the first run of engine-setup, which updates only the engine itself, you have to:

1) yum update, and you will get ovirt-engine-dwh-3.4.0-2.fc19.noarch and ovirt-engine-reports-3.4.0-2.fc19.noarch, plus these 2 new packages installed: ovirt-engine-dwh-setup-3.4.0-2.fc19.noarch and ovirt-engine-reports-setup-3.4.0-2.fc19.noarch

2) patch as from
http://gerrit.ovirt.org/#/c/26118/ and http://gerrit.ovirt.org/#/c/25998/

3) engine-setup (there are no longer separate ovirt-engine-dwh-setup and ovirt-engine-reports-setup commands to run)

And if you have run 3) before 2), you have to remove the directory tree under /var/lib/ovirt-engine-reports/build-conf and run 2) and 3) again, due to this bug, currently targeted for 3.4.1: https://bugzilla.redhat.com/show_bug.cgi?id=1084749

Gianluca
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 3:37 PM, Alon Bar-Lev wrote:
>
>> How can I reset the password of ovirt-admin?
>> (previously I used it only at certain times and watched the clear text file
>> /usr/share/ovirt-engine-reports/reports/users/ovirt-002dadmin.xml
>> I forgot it ;-)
>
> I think you can run:
>
> # engine-setup --otopi-environment="OVESETUP_REPORTS_CONFIG/adminPassword=str:"

The command went OK, but I'm not able to log in with ovirt-admin, and the select from the db gives the same encrypted password for the ovirt-admin user, so I think it didn't change the password.

Suppose my password is the word changeme; I used the command

engine-setup --otopi-environment="OVESETUP_REPORTS_CONFIG/adminPassword=str:changeme"

Is that OK? Also, it would be nice not to have to stop/reconfigure/start the whole engine only to change the reports password.

>> At the end of the e-mail what I wrote as a summary and if right I can
>> add into release notes...
>
> Well, better to wait for 3.4.1 I think.

OK, but I think something needs to be written into the release notes for 3.4. In fact, at the moment you see this kind of index:
https://drive.google.com/file/d/0BwoPbcrMv8mvQTlQQU8zdVdDeDQ/edit?usp=sharing
but actually there is no reference at all to reports and dwh, and anyone who wants to set up and/or upgrade them would be left with more than a doubt...

>
> Thank you for your help!
>
> Alon.

To say it with Kevin Spacey's words in his great film: "When someone does you a big favor, don't pay it back... pay it forward." I think it can be valid in IT too, not only in real life ;-) My forward will be to sponsor oVirt...
Re: [Users] Upgrading dwh and reports from 3.3.4 to 3.4.0?
On Sun, Apr 6, 2014 at 8:03 PM, Yaniv Dary wrote:
> # cd /usr/share/jasperreports-server/buildomatic
> # export masterPropsSource=/var/lib/ovirt-engine-reports/build-conf
> # ./js-export.sh --users --output-dir < some folder >
>
> now edit the password of the user you would like to change
>
> # ./js-import.sh --input-dir < same some folder >

[sorry for the previous e-mail, sent only to Yaniv]

Hello, I get this during the export:

[root@tekkaman ~]# cd /usr/share/jasperreports-server/buildomatic
[root@tekkaman buildomatic]# export masterPropsSource=/var/lib/ovirt-engine-reports/build-conf
[root@tekkaman buildomatic]# mkdir /tmp/js_export
[root@tekkaman buildomatic]# ./js-export.sh --users --output-dir /tmp/js_export
Using CE setup
First resource path: /usr/share/jasperreports-server/buildomatic/conf_source/ieCe/applicationContext-cascade.xml
Started to load resources
Resource name: applicationContext.xml
Resource name: applicationContext-cascade.xml
Resource name: applicationContext-data-snapshots.xml
Resource name: applicationContext-events-logging.xml
Resource name: applicationContext-export-config.xml
Resource name: applicationContext-export-import.xml
Resource name: applicationContext-logging.xml
Resource name: applicationContext-olap-connection.xml
Resource name: applicationContext-report-scheduling.xml
Resource name: applicationContext-search.xml
Resource name: applicationContext-security.xml
Resource name: applicationContext-themes.xml
Resource name: applicationContext-virtual-data-source.xml
org.springframework.beans.factory.BeanInitializationException: Could not load properties; nested exception is java.io.FileNotFoundException: class path resource [js.jdbc.properties] cannot be opened because it does not exist
org.springframework.beans.factory.BeanInitializationException: Could not load properties; nested exception is java.io.FileNotFoundException: class path resource [js.jdbc.properties] cannot be opened because it does not exist
        at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:87)
        at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:681)
        at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:656)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:446)
        at com.jaspersoft.jasperserver.export.BaseExportImportCommand.createSpringContext(BaseExportImportCommand.java:129)
        at com.jaspersoft.jasperserver.export.BaseExportImportCommand.process(BaseExportImportCommand.java:82)
        at com.jaspersoft.jasperserver.export.ExportCommand.main(ExportCommand.java:43)
Caused by: java.io.FileNotFoundException: class path resource [js.jdbc.properties] cannot be opened because it does not exist
        at org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:158)
        at org.springframework.core.io.support.PropertiesLoaderSupport.loadProperties(PropertiesLoaderSupport.java:181)
        at org.springframework.core.io.support.PropertiesLoaderSupport.mergeProperties(PropertiesLoaderSupport.java:161)
        at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:78)
        ... 6 more
[root@tekkaman buildomatic]#

Do I have to install anything that is normally installed only on developer stations?
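[Editor's note] The js-export.sh tool picks up its configuration directory through the masterPropsSource environment variable, as in Yaniv's instructions. The failure above is that a derived file (js.jdbc.properties) the exporter expects on its classpath was never generated; how buildomatic produces that file is internal and not asserted here. The override mechanism itself is the common "environment variable wins over a built-in default" pattern, sketched in Python for illustration (the function name is made up):

```python
import os

DEFAULT_DIR = "/usr/share/jasperreports-server/buildomatic/build_conf/default"

def resolve_props_dir():
    """Return the configuration directory: the masterPropsSource
    environment variable if set, otherwise the built-in default."""
    return os.environ.get("masterPropsSource", DEFAULT_DIR)

# Mimic the `export masterPropsSource=...` step from the thread.
os.environ["masterPropsSource"] = "/var/lib/ovirt-engine-reports/build-conf"
print(resolve_props_dir())  # /var/lib/ovirt-engine-reports/build-conf
```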
Re: [Users] live storage migration - when is this being targeted?
2014-04-07 10:26 GMT+02:00 Sven Kieske:
> Ah, okay, now I'm totally confused
> and really don't know where this other
> package comes from.
>
> I was really thinking it was used.
>
> Thanks for your correction.
>
> On 05.04.2014 23:15, Itamar Heim wrote:
>>
>> RHEL uses:
>> http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/qemu-kvm-0.12.1.2-2.415.el6.src.rpm
>> http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RHEV/SRPMS/qemu-kvm-rhev-0.12.1.2-2.415.el6_5.5.src.rpm

is the source rpm for the binary used in the RHEV commercial product