Re: bhyve: simplify memory assignment to VMs
Hi,

On Mon, Mar 11, 2013 at 11:08 AM, Neel Natu wrote:
> Hi,
>
> I am going to commit a change soon that simplifies how memory is
> assigned to a VM from a command-line perspective.
>
> At the moment, if you want to create a VM with, say, 8GB of memory, you
> need to express this as "-m 2048 -M 6144" or "-m 1024 -M 7168" or some
> split that adds up to 8192MB. This is very flexible, but it is also very
> confusing to most users, who don't know or care about the location of
> the PCI hole.
>
> I am going to change this so that the memory size is specified
> entirely with the "-m" option, and the "-M" option will
> disappear altogether.
>
> Of course, this means that scripts using "-M" will break, hence
> this heads-up.
>
> I did consider making this a backward-compatible change, but given the
> stage of development of bhyve it seemed too early to start accumulating
> compatibility crud.

Committed as r248477:
http://svnweb.freebsd.org/base?view=revision&revision=248477

Please update your vmrun.sh scripts from /usr/share/examples/bhyve/vmrun.sh

best
Neel

> best
> Neel

___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to "freebsd-virtualization-unsubscr...@freebsd.org"
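For scripts, the change amounts to collapsing the old two-flag split into a single total. A minimal sketch of what that looks like (the vCPU count and guest name are hypothetical placeholders; only the -m/-M usage reflects the change described above):

```shell
# Before r248477: guest memory was split around the PCI hole with
# -m/-M (both values in MB); e.g. an 8GB guest:
#   bhyve -c 2 -m 2048 -M 6144 ... guestvm     # hypothetical invocation
#
# After r248477: pass the total guest memory with -m alone:
#   bhyve -c 2 -m 8192 ... guestvm             # hypothetical invocation
#
# Sanity check: any valid old split adds up to the new single total.
old_total=$((2048 + 6144))
new_total=8192
[ "$old_total" -eq "$new_total" ] && echo "totals match: ${new_total} MB"
```

Any of the old splits ("-m 1024 -M 7168", etc.) map to the same single "-m 8192", which is why vmrun.sh scripts only need their memory arguments collapsed.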
Re: Difference in event channel implementation for Xen PV vs HVM guests
On Mar 18, 2013, at 6:35 AM, Roger Pau Monné wrote:
> Hello,
>
> While working on improving XENHVM (I've been looking at adding PV
> timers), I've realized that the event channel implementation in PV vs
> HVM mode differs greatly. The Xen PV port uses sys/xen/evtchn/evtchn.c,
> while Xen HVM uses sys/dev/xenpci/evtchn.c, and the Xen HVM
> implementation is greatly reduced (it contains only the functions
> necessary to operate backends/frontends).
>
> To implement PV timers I need to expand the event channel interface for
> XENHVM, and I was wondering why FreeBSD chose to have two different
> implementations. The main difference between PV and HVM is the event
> callback, but I guess this could be abstracted between the two
> implementations, and then everything else could be reused. Am I missing
> something obvious?
>
> Is there any known technical problem in modifying XENHVM to use the full
> event channel implementation present in sys/xen/evtchn/evtchn.c that
> prevented XENHVM from using it in the first place?
>
> (Sorry if I've Cc'ed someone not related)
>
> Thanks, Roger.

Hi Roger,

I know of no reason why XENHVM cannot use the full event channel interface. In fact, Spectra Logic implemented PV timers and a general cleanup of the HVM event channel interface. I haven't merged it back yet because I know the changes break PV, and I haven't found time to clean up the PV code, merge the HVM and PV event channel systems, and move IPI/MSI delivery to event channels.

I've uploaded Spectra's changes here:

http://people.freebsd.org/~gibbs/xen_ev/

The diffs file provides the history of the original check-ins to our Perforce repository. The tar file includes all the files that have been modified and reflects our efforts to keep our code base in sync with stable/9.
Apart from the PV issues outlined above, I would expect the code provided to drop right in to stable/9. Unfortunately, Xen support is not a current priority for Spectra, so I don't have a lot of day-job time to focus on getting this code back into FreeBSD. However, if this code looks like it would suit your needs, and you have resources for testing i386/PV, I'd be happy to collaborate with you and will make the time to help get this committed.

--
Justin
Difference in event channel implementation for Xen PV vs HVM guests
Hello,

While working on improving XENHVM (I've been looking at adding PV timers), I've realized that the event channel implementation in PV vs HVM mode differs greatly. The Xen PV port uses sys/xen/evtchn/evtchn.c, while Xen HVM uses sys/dev/xenpci/evtchn.c, and the Xen HVM implementation is greatly reduced (it contains only the functions necessary to operate backends/frontends).

To implement PV timers I need to expand the event channel interface for XENHVM, and I was wondering why FreeBSD chose to have two different implementations. The main difference between PV and HVM is the event callback, but I guess this could be abstracted between the two implementations, and then everything else could be reused. Am I missing something obvious?

Is there any known technical problem in modifying XENHVM to use the full event channel implementation present in sys/xen/evtchn/evtchn.c that prevented XENHVM from using it in the first place?

(Sorry if I've Cc'ed someone not related)

Thanks, Roger.
Current problem reports assigned to freebsd-virtualization@FreeBSD.org
Note: to view an individual PR, use:
http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users. These represent problem reports covering all versions, including experimental development code and obsolete releases.

S Tracker     Resp.           Description
o kern/170096 virtualization  [vimage] Dynamically-attached network interface will c
o kern/169991 virtualization  [run] [vimage] panic after device plugged in
o kern/165252 virtualization  [vimage] [pf] [panic] kernel panics with VIMAGE and PF
o kern/161094 virtualization  [vimage] [pf] [panic] kernel panic with pf + VIMAGE wh
o kern/160541 virtualization  [vimage] [pf] [patch] panic: userret: Returning on td 0x
o kern/160496 virtualization  [vimage] [pf] [patch] kernel panic with pf + VIMAGE
o kern/148155 virtualization  [vimage] [pf] Kernel panic with PF/IPFilter + VIMAGE k
a kern/147950 virtualization  [vimage] [carp] VIMAGE + CARP = kernel crash
s kern/143808 virtualization  [pf] pf does not work inside jail
a kern/141696 virtualization  [rum] [vimage] [panic] rum(4)+ vimage = kernel panic

10 problems total.