Re: [Users] mozilla-xpi for Ubuntu
Hi, I have added it to the link below, where I found some other info about the spice-xpi:
http://www.ovirt.org/Testing/Spice#Testing_Spice_with_Ovirt

I've also had feedback that there is another method:

1. Install the spice client: # apt-get install spice-client
2. Download the spice xpi from https://launchpad.net/~jasonbrooks/+archive/ppa/+packages and install it: # dpkg -i spice-xpi_2.7-0~41~precise1_amd64.deb

The xpi package is compiled for Ubuntu 12.04 but works fine with Ubuntu 12.10 as well; I don't know if it works with Ubuntu 13.04. I haven't added these steps to the wiki as I haven't tested them yet.

JP
--
JP Pitout
Senior Consultant
Qualifications: RHCVA, RHCE, RHCI
Email: j...@obsidian.co.za
Tel: +2711 795 0200
Contact: 0860 4 LINUX
Fax: 086 686 8608
Web: http://www.obsidian.co.za

On 31/05/2013 16:42, Douglas Schilling Landgraf wrote:
> Hi JP,
>
> On 05/31/2013 07:59 AM, JP Pitout wrote:
>> Mario Giammarco mgiammarco@... writes:
>>> Hello, I need a working mozilla-xpi for Ubuntu 12.04 (and soon 12.10).
>>> It is strange that an open-source project such as oVirt only works on Fedora.
>>> Thanks in advance for any help.
>>> Mario
>>
>> Hi, I am running Ubuntu 13.04 and got it working using the following steps:
>> 1. Install the spice-client package, which gives you /usr/bin/spicec.
>> 2. Extract the libnsISpicec.so file from the latest Fedora (FC19) RPM.
>> 3. Place it in /usr/lib/mozilla/plugins/
>> 4. Restart Firefox.
>> Worked for me!
>
> Nice. Do you mind adding these steps to wiki.ovirt.org?
> To request an account please go to:
> http://www.ovirt.org/index.php?title=Special:UserLoginreturnto=Home
> Thanks!
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
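The two steps above can be sketched as a command sequence (untested, per the author's own note; the package filename is the one given in the message, and the .deb must first be downloaded from the PPA page linked above):

```shell
# Untested sketch of the alternative spice-xpi install on Ubuntu 12.04/12.10.
sudo apt-get install spice-client        # step 1: the spice client (/usr/bin/spicec)
# Step 2: download spice-xpi_2.7-0~41~precise1_amd64.deb from the
# jasonbrooks PPA page linked above into the current directory, then:
sudo dpkg -i spice-xpi_2.7-0~41~precise1_amd64.deb
# Restart Firefox so it picks up the new plugin.
```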
[Users] oVirt @China
Hi everyone,

On our last visit to Shanghai we discovered there's no access to YouTube, so oVirt users and developers in China cannot see the cool video clips we have for oVirt. To resolve that, we were able to open a YouKu account for oVirt, which will host the relevant video clips.

I'm happy to announce oVirt's YouKu page: http://i.youku.com/theovirt

Feel free to share with your friends! Here's how you can help:
- If you don't see your video clips there, mail them to me and I'll upload them to YouKu.
- For new content, please contact me as well and I'll upload it to YouKu.

Special thanks to Jimmy and the Intel team, who helped me with this effort.

Looking forward to seeing more video clips,
Doron
Re: [Users] deduplication
Hello Jose,

We also have FreeNAS working in our infrastructure, with about 3 TB and ZFS. Some of the pools have compression enabled, and you can save space with it. We have this FreeNAS connected to a Xen hypervisor and it works very well; it's stable and reliable. We have nine virtual servers, some fully virtualized and others paravirtualized, plus some Windows Server machines, all in production for about 2 years without any problem. My idea is to connect this infrastructure with oVirt to have some resources for test VMs on it. I only wanted to share another FreeNAS success story.

Juanjo.

On Fri, May 31, 2013 at 12:33 PM, supo...@logicworks.pt wrote:
> Thanks a lot Karli, you made my mind clear about deduplication; once again, we cannot have the best of both worlds. I'll try FreeNAS despite my poor knowledge of FreeBSD. Openfiler, running on Linux, has no better performance but supports DRBD.
>
> Jose
>
> From: Karli Sjöberg karli.sjob...@slu.se
> To: supo...@logicworks.pt
> Cc: Jiri Belka jbe...@redhat.com, users@ovirt.org
> Sent: Friday, 31 May 2013 10:45:41
> Subject: Re: [Users] deduplication
>
> On Fri, 2013-05-31 at 09:50 +0100, supo...@logicworks.pt wrote:
>> So, we can say that dedup has more disadvantages than advantages.
>
> For a primary system, most definitely yes. But for a backup system that has tons of RAM and SSDs for cache, where you have lots of virtual machines that are based off the same template, or are very much alike, then you have a real use case. I'm active on the FreeBSD forums, where one person reports storing 150 TB of data on only 30 TB of physical disk. Best practice is to scrub once a week on enterprise systems, though he is only able to do it once a month, because that's how long it takes a scrub to complete on that system. So you've got to choose performance or savings; you can't have both.
>
>> And what about NetApp's dedup?
>
> A much better implementation, in my opinion. You are able to schedule dedup runs at night so your users' performance isn't impacted, and you still get the savings. The question is whether you value the savings enough to take on the price tag that is NetApp. Or just build your own FreeBSD/ZFS server with compression enabled and buy standard HDDs from anywhere... We did ;)
>
> /Karli
>
>> Jose
>>
>> From: Karli Sjöberg karli.sjob...@slu.se
>> To: supo...@logicworks.pt
>> Cc: Jiri Belka jbe...@redhat.com, users@ovirt.org
>> Sent: Thursday, 30 May 2013 8:33:19
>> Subject: Re: [Users] deduplication
>>
>> On Wed, 2013-05-29 at 09:59 +0100, supo...@logicworks.pt wrote:
>>> Absolutely agree with you: planning is the best thing to do. But normally people want a plug'n'play system with everything included, because there is not much time to think and plan, and there are many companies that know how to take advantage of that. Anyway, I think another solution for dedup is FreeNAS using ZFS.
>>
>> FreeNAS is just FreeBSD with a fancy web UI on top, so it's neither more nor less of ZFS than you would have otherwise. And regarding dedup in ZFS: just don't, it's not worth it! It's said that it *may* increase performance when you have a very suitable use case, e.g. everything *exactly* the same over and over. What's not said is that scrubbing and resilvering slow down to a snail's pace (from hundreds of MB/s, or GB/s if your system is large enough, down to less than 10), just from dedup. Also, deleting snapshots of datasets that have (or have had) dedup on can kill the entire system, and when I say kill, I mean really FUBAR. Been there, regretted that... Compression, on the other hand, you get basically for free, and it gives decent savings; I highly recommend it.
>>
>> /Karli
>>
>>> Jose
>>>
>>> From: Jiri Belka jbe...@redhat.com
>>> To: supo...@logicworks.pt
>>> Cc: users@ovirt.org
>>> Sent: Wednesday, 29 May 2013 7:33:10
>>> Subject: Re: [Users] deduplication
>>>
>>> On Tue, 28 May 2013 14:29:05 +0100 (WEST) supo...@logicworks.pt wrote:
>>>> That's why I'm asking these questions: to demystify some of the buzzwords around here. But if you have a strong, good technology, why not create buzzwords to reach as many people as possible, without trapping them? Sharing a disk containing static data is a good idea; do you know where I can start?
>>>
>>> Everything depends on your needs and design planning. Maybe it would be better to share the disk via NFS/iSCSI. Of course, if you have many VMs and each of them is different, you will fail. But if you have a mostly homogeneous environment, you can think about this approach. You do have to have a plan for upgrading the base static shared OS data, and a plan for how to install additional software (to a different destination than /usr or /usr/local)... If you already have your own build host which builds OS packages for you, and you already have your own plan for deployment, you have done first
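As a rough sketch of the trade-off Karli describes (the dataset name tank/vms is hypothetical, and the zfs commands are shown commented out since they require root on an actual ZFS host; the arithmetic below just reproduces the dedup figure quoted in the thread):

```shell
# Enabling compression (roughly free) rather than dedup, on a hypothetical dataset:
#   zfs set compression=lz4 tank/vms
#   zfs get compressratio tank/vms    # shows the achieved savings
#
# The dedup figure quoted above: 150 TB of logical data on 30 TB of physical disk.
logical_tb=150
physical_tb=30
ratio=$(( logical_tb / physical_tb ))
saved_pct=$(( 100 * (logical_tb - physical_tb) / logical_tb ))
echo "dedup ratio: ${ratio}x"       # 5x
echo "space saved: ${saved_pct}%"   # 80%
```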
Re: [Users] [Spice-devel] Fedora 18 and usb pass-through
Any more info on this issue?

On Thu, May 30, 2013 at 9:10 AM, Ryan Wilkinson ryanw...@gmail.com wrote:
> I didn't install it twice. One package was the version on the FC17 box and the other was on the FC18 box.
>
> On Thu, May 30, 2013 at 9:08 AM, Hans de Goede hdego...@redhat.com wrote:
>> Hi,
>>
>> On 05/30/2013 04:34 PM, Ryan Wilkinson wrote:
>>> Yes, I can manually pass it through. I'm not necessarily looking to use this specific device (a USB wifi adapter); it was just handy. Here is the output:
>>
>> Hmm, that should work. I just noticed that in your last mail you mentioned that you have virt-viewer installed twice. Can you try doing:
>>
>> rpm -e --allmatches virt-viewer
>> yum install virt-viewer
>>
>> and see if that fixes things?
>>
>> Regards,
>> Hans
>
>>> Bus 001 Device 003: ID 0bda:8172 Realtek Semiconductor Corp. RTL8191SU 802.11n WLAN Adapter
>>> Device Descriptor:
>>>   bLength 18, bDescriptorType 1, bcdUSB 2.00
>>>   bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0
>>>   bMaxPacketSize0 64
>>>   idVendor 0x0bda Realtek Semiconductor Corp.
>>>   idProduct 0x8172 RTL8191SU 802.11n WLAN Adapter
>>>   bcdDevice 2.00
>>>   iManufacturer 1 Realtek, iProduct 2 RTL8191S WLAN Adapter, iSerial 3 00e04c01
>>>   bNumConfigurations 1
>>>   Configuration Descriptor:
>>>     bLength 9, bDescriptorType 2, wTotalLength 46, bNumInterfaces 1
>>>     bConfigurationValue 1, iConfiguration 0, bmAttributes 0x80 (Bus Powered), MaxPower 500mA
>>>     Interface Descriptor:
>>>       bLength 9, bDescriptorType 4, bInterfaceNumber 0, bAlternateSetting 0, bNumEndpoints 4
>>>       bInterfaceClass 255 Vendor Specific Class, bInterfaceSubClass 255 Vendor Specific Subclass, bInterfaceProtocol 255 Vendor Specific Protocol, iInterface 0
>>>       Endpoint Descriptor: bEndpointAddress 0x83 EP 3 IN, Bulk, wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
>>>       Endpoint Descriptor: bEndpointAddress 0x04 EP 4 OUT, Bulk, wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
>>>       Endpoint Descriptor: bEndpointAddress 0x06 EP 6 OUT, Bulk, wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
>>>       Endpoint Descriptor: bEndpointAddress 0x0d EP 13 OUT, Bulk, wMaxPacketSize 0x0200 (1x 512 bytes), bInterval 0
>>>   Device Qualifier (for other device speed):
>>>     bLength 10, bDescriptorType 6, bcdUSB 2.00, bDeviceClass 0 (Defined at Interface level)
>>>     bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64, bNumConfigurations 1
>>>   Device Status: 0x (Bus Powered)
>>>
>>> Bus 005 Device 002: ID 2101:020f ActionStar
>>> Device Descriptor:
>>>   bLength 18, bDescriptorType 1, bcdUSB 1.00
>>>   bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0
>>>   bMaxPacketSize0 8
>>>   idVendor 0x2101 ActionStar, idProduct 0x020f, bcdDevice 0.01
>>>   iManufacturer 0, iProduct 0, iSerial 0, bNumConfigurations 1
>>>   Configuration Descriptor:
>>>     bLength 9, bDescriptorType 2, wTotalLength 59, bNumInterfaces 2
>>>     bConfigurationValue 1, iConfiguration 0, bmAttributes 0xa0 (Bus Powered, Remote Wakeup), MaxPower 500mA
>>>     Interface Descriptor: bLength
Re: [Users] deduplication
Hi Juan,

Thanks for your info; I'll try to test FreeNAS with compression. Do you use it with iSCSI or NFS?

Jose

----- Original Message -----
From: Juan Jose jj197...@gmail.com
To: supo...@logicworks.pt, users@ovirt.org
Sent: Monday, 3 June 2013 13:37:21
Subject: Re: [Users] deduplication

> Hello Jose,
> We also have FreeNAS working in our infrastructure, with about 3 TB and ZFS. [...]
Re: [Users] deduplication
Just wanted to add that FreeNAS is great. I use it with NFS and iSCSI and it works well. What I will say is that on the HP DNS-320 I have, I have had to go to the command prompt to fix some multipathing issues when I first add a disk, but I believe that is just a product of the cciss controller driver in that server.

On Mon, Jun 3, 2013 at 12:12 PM, supo...@logicworks.pt wrote:
> Hi Juan,
> Thanks for your info; I'll try to test FreeNAS with compression. Do you use it with iSCSI or NFS?
> Jose
> [...]
Re: [Users] deduplication
If we have a hardware RAID controller, will we still need RAID-Z?

Jose

----- Original Message -----
From: Chris Noffsinger cnoff...@gmail.com
To: supo...@logicworks.pt
Cc: Juan Jose jj197...@gmail.com, users@ovirt.org
Sent: Monday, 3 June 2013 17:16:55
Subject: Re: [Users] deduplication

> Just wanted to add that FreeNAS is great. I use it with NFS and iSCSI and it works well. [...]
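On the RAID-Z question: the usual guidance is to pick one layer of redundancy, not both. A hypothetical sketch of the two options (pool and device names are invented; run as root on the FreeBSD/FreeNAS box):

```shell
# Option A: the hardware RAID controller provides redundancy;
# ZFS just gets the single exported LUN, so no RAID-Z is needed:
zpool create tank /dev/da0
# Option B: put the controller in JBOD mode and let ZFS provide
# redundancy itself with RAID-Z across the raw disks:
zpool create tank raidz /dev/da1 /dev/da2 /dev/da3 /dev/da4
zpool status tank   # verify the pool layout
```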
Re: [Users] [Spice-devel] Fedora 18 and usb pass-through
Hi,

On 06/03/2013 05:13 PM, Ryan Wilkinson wrote:
> Any more info on this issue?

No, not really; you're the only one seeing this. Some random ideas:
- Is udevd running properly on the box with the problem?
- Have you tried putting SELinux in permissive mode?
- Have you perhaps built some things (e.g. libusb) from source?

Regards,
Hans
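Hans's three suggestions can be checked quickly from a shell; a hedged sketch for Fedora 18 (the exact commands are my assumption, not from the thread, and the setenforce step needs root):

```shell
# Diagnostic checks matching the suggestions above (Fedora 18).
pgrep -l udevd                           # is udevd running?
getenforce                               # current SELinux mode
sudo setenforce 0                        # temporarily switch to permissive for testing
rpm -qa | grep -E 'virt-viewer|libusb'   # look for duplicate or locally built packages
```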
Re: [Users] Resize storage domain
----- Original Message -----
> Tal, thanks for responding, I'll try that sequence. But what about vgresize?

There is no such command, only pvresize (which does what you need).

> Will it do this by itself? I saw that there are some logical volumes...
> Thanks again.

From: Tal Nisan tni...@redhat.com
Sent: Sunday, 26 May 2013 06:25
To: Eduardo Ramos edua...@freedominterface.org
Subject: Re: [Users] Resize storage domain

On 05/22/2013 11:04 PM, Eduardo Ramos wrote:
> Hi all! I have an iSCSI domain based on an HP LeftHand cluster. Using the HP tool, I resized the iSCSI volume without problem. On the SPM host, with fdisk -l /dev/sdb, I saw the new size, OK. But now, how do I make the oVirt engine see the new size?

Hi Eduardo,

In case you have more than one host:
1. Put the domain in maintenance
2. Manually connect iSCSI on the SPM host
3. Run pvresize on the LUN
4. Activate the domains

In case you have only one host, just run pvresize on the disk.

Tal.
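The sequence Tal describes can be sketched as the following commands on the SPM host (a hedged sketch: the session rescan step is my assumption, not stated in the thread; /dev/sdb is the example device from Eduardo's message; all of this requires root):

```shell
# Resizing an oVirt iSCSI storage domain after growing the LUN on the storage side.
iscsiadm -m session --rescan   # re-read LUN sizes on all active iSCSI sessions
pvresize /dev/sdb              # grow the LVM physical volume; there is no vgresize,
                               # the volume group picks up the new PV size automatically
pvs /dev/sdb                   # verify the new physical volume size
```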
Re: [Users] iSCSI and snapshots
Hi Maor,

The thing is that I trigger the snapshot creation and everything seems to be OK, but I can't see any new volume of any kind. As the VM runs from an iSCSI LUN, it cannot use LVM snapshots, since there's no spare space to create any new LVM volume; nor do I see any new volume in the data storage domain (another iSCSI LUN), so I can't tell for sure that the snapshot was really created. Where are the volumes saved if the VM has no spare disks or space?

Regards,
Re: [Users] iSCSI and snapshots
Thanks Itamar, I'll take a look at that URL.

Regards,