Re: VMWare/Virtualbox virtio network drivers?
On Sat, Sep 24, 2011 at 9:37 PM, Craig Rodrigues wrote:
> Virtio drivers are coming. See:
> http://lists.freebsd.org/pipermail/svn-src-projects/2011-September/004361.html

Great news, do you know if an MFC is planned?

-- Adam Vande More
___ freebsd-virtualization@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization To unsubscribe, send any mail to freebsd-virtualization-unsubscr...@freebsd.org
Re: VirtualBox drama - failing VDI disks, VM not starting headless, Rebuild problems on 8.2 (clang related problem?) and 9-STABLE (libpcre.so.0)
On Sun, Apr 1, 2012 at 10:54 PM, Petro Rossini petro.ross...@gmail.com wrote:
> On Mon, Apr 2, 2012 at 9:55 AM, Petro Rossini petro.ross...@gmail.com wrote:
>> Hi all, I had some VirtualBox hassle over the weekend. ... Anyway, I
>> started upgrading to the newest VirtualBox in the ports (4.1.18).
> Sorry, it is 4.1.8.

This mailing list is used for virtualization issues like VIMAGE. Questions concerning VirtualBox go to freebsd-emulation@. AFAICT, that error message is related to a permissions issue. Can you supply the VM log when you post the question to emulation?

-- Adam Vande More
Re: Best VM setup for FreeBSD
On Thu, Jun 6, 2013 at 9:26 AM, TJ t...@melodicninja.co.uk wrote:
> I have been looking into VirtualBox. My biggest hurdle at the moment is
> getting multiple hosts on one machine and setting up the VRDE to use
> different ports.

Works great for me.

-- Adam Vande More
Re: Is it possible to install a VirtualBox_Extension_Pack in FreeBSD 9.1
On Thu, Jul 25, 2013 at 5:48 AM, Leslie Jensen les...@eskk.nu wrote:
> I'm trying to get USB support in a Windows7 guest under FreeBSD
> 9.1-RELEASE. I've read that I need this Extension Pack

https://wiki.freebsd.org/VirtualBox#USB_support

-- Adam Vande More
Re: RFC: Changes to handbook on virtualization
On Fri, Oct 11, 2013 at 9:00 AM, Dee Nixon dnixon-f...@nyclocal.net wrote:
> Must have been a brief temporary glitch. The link is indeed working:
> http://www.petitecloud.org/handbook.jsp

It's not resolving here from two different DNS paths.

-- Adam Vande More
Re: Report of my virtual network lab migrated from virtualbox to bhyve
On Sat, Feb 8, 2014 at 5:20 AM, Olivier Cochard-Labbé oliv...@cochard.me wrote:
> On Fri, Feb 7, 2014 at 8:38 PM, Peter Grehan gre...@freebsd.org wrote:
>> If you create a sparse file for the bhyve raw disk (e.g. with truncate
>> -s), du will show the actual blocks used rather than the total size.
> But can I truncate an already existing disk image (a downloaded nanobsd
> image, for example)?

There is this: https://github.com/masover/sparsify

I think this, or something like it, used to be in ports too.

-- Adam
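For reference, the behavior Peter describes is easy to see with a plain truncate(1)/du(1) round trip (a minimal sketch; the 1G size and file name are arbitrary):

```shell
# Create a 1 GB sparse file: the apparent size is 1 GB, but no data
# blocks are allocated until something is written into it.
truncate -s 1G disk.img

# Apparent size in bytes (what the guest sees as the disk size).
ls -l disk.img

# Actual blocks allocated on the host filesystem -- near zero.
du -k disk.img
```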
Re: Report of my virtual network lab migrated from virtualbox to bhyve
On Sat, Feb 8, 2014 at 6:51 AM, Aryeh Friedman aryeh.fried...@gmail.com wrote:
> bhyve blindly reads/writes into the middle of the file without consulting
> the filesystem, thus bypassing things like sparse fill; all you gain is a
> few seconds of startup time (as a matter of fact I think truncate might
> use sparse allocation [i.e. attempting to read into the middle under guest
> OS control will result in potentially seeing host data])

If this is true then there is a *critical* security issue. Using sparse files isn't to gain performance, it's to conserve disk space. Using md devices backed by sparse images would accomplish this. If the sparsify app works on FreeBSD, then there should be no problem using those types of volumes.

-- Adam
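For what it's worth, the failure mode described (reading stale host data through a hole) is not how sparse files behave on any POSIX filesystem: reads of unallocated regions return zeros. A quick sketch demonstrating this:

```shell
# Create a sparse file and write a single byte into the middle of it.
truncate -s 10M sparse.img
printf 'X' | dd of=sparse.img bs=1 seek=5242880 conv=notrunc 2>/dev/null

# Read from an untouched region: the kernel returns zeros for holes,
# never leftover host data.
dd if=sparse.img bs=4096 count=1 2>/dev/null > first4k
dd if=/dev/zero  bs=4096 count=1 2>/dev/null > zeros4k
cmp first4k zeros4k && echo "hole reads back as zeros"
```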
Re: Report of my virtual network lab migrated from virtualbox to bhyve
On Sat, Feb 8, 2014 at 2:14 PM, Aryeh Friedman aryeh.fried...@gmail.com wrote:
> It sounds almost identical to the qcow2 security issue being discussed on
> qemu-de...@qemu.org recently. This might be a *HUGE* win for bhyve then,
> considering that its default format is raw (should ahci-hd be the
> default?). devel/qemu (not sure about -devel) uses qcow2 as a default, and
> when playing with it on other OSes I found that it seemed to default to
> that also. It is my understanding that most of the open source cloud
> platforms use qcow2 as their default too (I remember this from an attempt
> to install OpenStack Grizzly last summer... I have not checked Havana
> though... can anyone from freebsd-openstack confirm this?).

I don't consider it a huge win, because the possibility of using an insecure device precludes it. Someone high in the bhyve tree needs to confirm or deny this, otherwise it is unsafe to recommend bhyve or PetiteCloud. No offense intended; I really hope it succeeds and will likely use it if it does. I cannot use anything which leaves the host open. I am also unclear on how bhyve bypasses GEOM, which *should* prevent any of the symptoms discussed.

-- Adam
Re: Report of my virtual network lab migrated from virtualbox to bhyve
On Sat, Feb 8, 2014 at 2:57 PM, Aryeh Friedman aryeh.fried...@gmail.com wrote:
> On Sat, Feb 8, 2014 at 3:54 PM, Adam Vande More amvandem...@gmail.com wrote:
>> I don't consider it a huge win, because the possibility of using an
>> insecure device precludes it. Someone high in the bhyve tree needs to
>> confirm or deny this, otherwise it is unsafe to recommend bhyve or
>> PetiteCloud. ...
> The point was that raw has no issue, and this is the default for both
> bhyve and PetiteCloud (to avoid certain list politics I didn't mention it
> by name before). Sparse is the issue, and thus qemu, OpenStack and
> CloudStack (as well as likely VBox) are a problem.

Yes, but bhyve *supports* backing devices other than raw, correct? Then this is really bad. I don't want a politics game either, just saying you won't get adoption until security is clear. I have no problem with you mentioning PetiteCloud. Indeed, I think you should, but others may disagree. In your opinion, are ZVOLs a good option?

-- Adam
Re: vm-bhyve port upgrade
On Mon, Nov 16, 2015 at 9:34 AM, Matt Churchyard wrote:
> I am now looking at actually implementing static MACs for all interfaces,
> as I'd rather guests saw the same MAC address every run just in case they
> tie configuration to the MAC (important for vm-bhyve, as simply starting
> guests in a different order will change what tap devices they get). Also,
> tap/slot/func isn't much of a uniqueness guarantee across multiple hosts.

Yes, and udev pins ethX names to MAC addresses, so Linux guests using static IPs will be quite broken unless some more fiddling is done. Static MACs aren't the only way to handle this, but it's the best IMO.

-- Adam
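One way to get a stable per-guest MAC is to derive it deterministically from the guest name, so the address survives reboots and tap reordering. A minimal sketch (the hashing scheme is just an illustration, and 58:9c:fc is, as I understand it, the FreeBSD Foundation OUI bhyve uses for its own auto-generated MACs):

```shell
# Derive a deterministic MAC from a guest name so the guest sees the
# same address every boot, regardless of tap ordering.
guest="myguest"

# cksum(1) is POSIX, so this works the same on FreeBSD and Linux.
h=$(printf '%s' "$guest" | cksum | cut -d' ' -f1)

# Fold the 32-bit checksum into the three vendor-assignable octets.
mac=$(printf '58:9c:fc:%02x:%02x:%02x' \
    $(( (h >> 16) & 255 )) $(( (h >> 8) & 255 )) $(( h & 255 )))
echo "$mac"
```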
Re: Issues gatewaying through Hyper-V
On Thu, Oct 29, 2015 at 10:45 PM, Larry Baird wrote:
> I have two identical setups, on Hyper-V 2012R2 and Hyper-V Windows 10.
>
> In both cases I have two FreeBSD 10.2-RELEASE-p6 Hyper-V hosts.
> The first FreeBSD host (client) has one NIC configured to use a private
> network. The second (gateway) has two NICs, one private and one external.
> The gateway box has the gateway_enable option set to YES in its rc.conf.
> The boxes are otherwise very vanilla.
>
> I get failures forwarding through the gateway on both versions of
> Hyper-V, but they fail in different ways.

There was a similar issue with PF resolved earlier, and I'm pretty sure the fix wasn't in 10.2. If PF is in use, does switching to ipfw fix it? I know nothing of Hyper-V, but I also saw similar behavior on KVM; switching the VM NIC away from virtio to Intel was a successful workaround.

-- Adam
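For anyone reproducing the setup, the gateway side boils down to a couple of rc.conf(5) knobs (a sketch; the hn0/hn1 interface names and addresses are illustrative, not from the original report):

```shell
# /etc/rc.conf on the gateway VM
gateway_enable="YES"    # enable IPv4 forwarding between NICs
ifconfig_hn0="DHCP"     # external NIC
ifconfig_hn1="inet 192.168.10.1 netmask 255.255.255.0"   # private NIC
```

The client VM then just points its defaultrouter at the gateway's private address.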
Re: available hypervisors in FreeBSD
On Sun, Dec 20, 2015 at 2:15 AM, Peter Ross wrote:
> Hi all,
>
> I read through an older thread I kept in my archive. It started like this:
>
> On Wed, 1 Apr 2015, Udo Rader wrote:
>> As far as my homework digging revealed, FreeBSD supports four hypervisors:
>>
>> * bhyve
>> * KVM
>> * QEMU
>> * VirtualBox
>
> ... and later Xen was mentioned.
>
> I ask myself which of the solutions are most mature at the moment and
> immediately usable in production.
>
> The reason is a potential company move away from VMware ESXi/CentOS (6/7),
> with some critical Windows 2008 and 2012 IIS/.NET applications involved.
>
> While most of the open source may go into FreeBSD jails, we have a few
> CentOS 6/7 boxes with proprietary software we have to keep, as well as
> the Windows VMs to maintain (there is a long-term effort to move them to
> open source too, but the final migration may be years away).
>
> We may phase out ESXi gradually, or just keep it, depending on the
> performance and maturity of FreeBSD-based solutions.
>
> I have experience with Linux on VirtualBox, and it worked well if the
> load was not high, but the performance wasn't too good under stress (it
> never crashed, I might add).
>
> Which of the solutions are worth testing? Do you have recommendations?
>
> I am thinking of server software and "containerisation" only, so USB or
> PCI passthrough etc. is not really important.
>
> Stability, performance and resource utilisation (e.g. possible
> over-allocation of RAM) matter most.

VBox is fine; it works well and really has all the virtualization features of the big three except for clustering and a few side things. I've been using bhyve and I like it. I have no stability issues on dozens of guests, some with a lot of net and disk IO. I had hoped VPS [1] would make it in, but that seems to have stalled.

[1] http://www.7he.at/freebsd/vps/

-- Adam
Re: available hypervisors in FreeBSD
On Sun, Dec 20, 2015 at 9:25 AM, Sergey Manucharian wrote:
> I agree that VirtualBox is really stable, and I've been using it in
> production environments for many years. However, there are a couple of
> possible drawbacks: it does not support VRDP (remote console) or USB2/3
> on FreeBSD.
>
> The latter is probably not really important (although I needed it too).
> The lack of a remote console is bad for troubleshooting and/or remote
> (re)installation.

Remote console is available via VNC, not RDP.

-- Adam
Re: available hypervisors in FreeBSD
On Sun, Dec 20, 2015 at 10:14 AM, Sergey Manucharian wrote:
>> Remote console is available via VNC, not RDP.
>
> It is VNC, and I use it on Linux hosts; it's rather confusing since the
> option is "--vrde on|off".

See https://lists.freebsd.org/pipermail/freebsd-emulation/2013-January/010354.html

You can also set options like VNCAddress4 for the listening address.

> But isn't it a part of the extension pack, which is not available for
> FreeBSD?
>
> https://www.virtualbox.org/manual/ch07.html

The explanation lies within that page. VRDP is only in the extension pack; VRDE is available to all. So someone with enough gumption could write RDP support for VRDE.

-- Adam
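Since the VRDE/VRDP distinction trips people up, this is roughly how the VNC-backed VRDE gets wired up with VBoxManage (a sketch; the VM name, port, and password are placeholders, and it assumes the VNC VRDE module built by the FreeBSD port is installed):

```shell
# Select the VNC VRDE backend instead of the (absent) Oracle extension pack.
VBoxManage setproperty vrdeextpack VNC

# Enable VRDE for the guest and pick a port and password.
VBoxManage modifyvm "winvm" --vrde on --vrdeport 5900
VBoxManage modifyvm "winvm" --vrdeproperty VNCPassword=secret
```

Any VNC client pointed at the host's port 5900 then gets the guest console.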
Re: floppy image in bhyve
On Wed, Jan 20, 2016 at 8:06 PM, Sergey Manucharian wrote:
> Yes, that's a good idea, especially taking into account that I have many
> variables. I'm trying to migrate Windows 7 on an encrypted volume from
> VBox to bhyve.

Another option would be to migrate to GELI, possibly on a ZVOL if available, then tie the bhyve Windows VM startup/shutdown scripts to unlock/lock the device as needed.

-- Adam
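The unlock/run/lock flow I have in mind looks roughly like this (a sketch, not a tested script; the pool/volume names are placeholders and the bhyve invocation is abbreviated):

```shell
# One-time setup: create a zvol and initialize GELI on it.
zfs create -V 40G tank/win7
geli init -s 4096 /dev/zvol/tank/win7

# Per-boot wrapper: unlock, run the guest, lock again on exit.
geli attach /dev/zvol/tank/win7              # prompts for the passphrase
bhyve -s 4,ahci-hd,/dev/zvol/tank/win7.eli ... win7vm
geli detach /dev/zvol/tank/win7.eli
```

That way the plaintext device only exists while the VM is actually running.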
Re: Re-sparse a file-backed IO device + zfs
On Mon, Dec 12, 2016 at 7:20 PM, javocado wrote:
> Hi,
>
> I'm setting up a bhyve VM wherein:
>
> host # truncate -s 1T vol.file
> host # du -ah vol.file
> 200K    vol.file
>
> host # /usr/sbin/bhyve ... -s 4,ahci-hd,vol.file ...
>
> Then inside the bhyve VM I create a zpool (ada0 = vol.file):
>
> bhyve # zpool create -O devices=off -O atime=off -O compression=on -m /mnt/data1 data1 ada0

I think there used to be a utility called sparsify.

-- Adam
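Absent sparsify, one portable way to re-sparse an image is to copy it with dd(1)'s conv=sparse, which seeks over all-zero blocks instead of writing them (a sketch; run it with the VM stopped, and note the guest has to have zeroed its free space first, e.g. by filling the pool with a file of zeros and deleting it):

```shell
# Copy the image, turning runs of zero blocks back into holes.
dd if=vol.file of=vol.sparse bs=1M conv=sparse

# The copy's allocated size should now reflect only real data.
du -k vol.file vol.sparse

# Swap the re-sparsed copy in once satisfied.
mv vol.sparse vol.file
```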
Re: Storage overhead on zvols
On Mon, Dec 4, 2017 at 5:19 PM, Dustin Wenz wrote:
> I'm starting a new thread based on the previous discussion in "bhyve uses
> all available memory during IO-intensive operations" relating to size
> inflation of bhyve data stored on zvols. I've done some experimenting with
> this, and I think it will be useful for others.
>
> The zvols listed here were created with this command:
>
> zfs create -o volmode=dev -o volblocksize=Xk -V 30g vm00/chyves/guests/myguest/diskY
>
> The zvols were created on a raidz1 pool of four disks. For each zvol, I
> created a basic zfs filesystem in the guest using all default tuning (128k
> recordsize, etc). I then copied the same 8.2GB dataset to each filesystem.
>
> volblocksize    size amplification
> 512B            11.7x
> 4k              1.45x
> 8k              1.45x
> 16k             1.5x
> 32k             1.65x
> 64k             1x
> 128k            1x
>
> The worst case is with a 512B volblocksize, where the space used is more
> than 11 times the size of the data stored within the guest. The size
> efficiency gains are non-linear as I continue from 4k and double the block
> sizes, 32k blocks being the second-worst. The amount of wasted space was
> minimized by using 64k and 128k blocks.
>
> It would appear that 64k is a good choice for volblocksize if you are
> using a zvol to back your VM, and the VM is using the virtual device for a
> zpool. Incidentally, I believe this is the default when creating VMs in
> FreeNAS.

I'm not sure what your purpose is behind the posting, but if it's simply a "why this behavior", you can find more detail, as well as some of the calculation legwork, here: https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz

-- Adam