Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Roger Pau Monné
On 22/05/13 22:03, Colin Percival wrote:
 On 05/22/13 04:45, Roger Pau Monné wrote:
 On 18/05/13 17:44, Colin Percival wrote:
 That seems to work.  dmesg is attached.  Are there any particular tests
 you'd like me to run?

 I have not tested ZFS, that might be a good one. If you are running this
 on Xen 3.4 the behaviour should be the same as without these patches, so
 there shouldn't be many differences.
 
 I don't use ZFS personally, so I'm not sure exactly what tests to run on it;
 hopefully someone else can take care of that.
 
 If you could try that on Xen 4.0 at least (if I remember correctly
 that's when the vector callback was introduced), you should see the PV
 timer getting attached, and a performance increase.
 
 Testing on a cr1.8xlarge EC2 instance, I get Xen 4.2, but it ends up with
 a panic -- console output below.  I can get a backtrace and possibly even
 a dump if those would help.

Hello Colin,

Thanks for the test; I've been using Xen 4.2 (and 4.3) without problems 
so far. Looking at the Xen code, the only reason the timer setup 
could return -22 (EINVAL) is that we are trying to set the timer for a 
different vCPU than the one we are running on.

I've been able to boot a 32 vCPU DomU on my 8-way box using Xen 4.2.1 
(with both the qemu-xen and qemu-xen-traditional device models), so I'm 
unsure whether this could be due to some patch Amazon applies to Xen. Could 
you try the following patch and post the error message? I would like to 
see whether the cpuid reported by kdb and the vCPU we are trying to set 
the timer for are the same.

Booting...
GDB: no debug ports present
KDB: debugger backends: ddb
KDB: current backend: ddb
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 10.0-CURRENT #68: Wed May 22 19:00:14 CEST 2013
root@:/usr/obj/usr/src/sys/XENHVM amd64
FreeBSD clang version 3.3 (trunk 178860) 20130405
WARNING: WITNESS option enabled, expect reduced performance.
XEN: Hypervisor version 4.2 detected.
CPU: Intel(R) Xeon(R) CPU   W3550  @ 3.07GHz (3066.83-MHz K8-class CPU)
  Origin = "GenuineIntel"  Id = 0x106a5  Family = 0x6  Model = 0x1a  Stepping = 5
  Features=0x1783fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x81b82201<SSE3,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,TSCDLT,HV>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
real memory  = 4286578688 (4088 MB)
avail memory = 3961323520 (3777 MB)
Event timer LAPIC quality 400
ACPI APIC Table: Xen HVM
FreeBSD/SMP: Multiprocessor System Detected: 32 CPUs
FreeBSD/SMP: 2 package(s) x 16 core(s)
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  2
 cpu2 (AP): APIC ID:  4
 cpu3 (AP): APIC ID:  6
 cpu4 (AP): APIC ID:  8
 cpu5 (AP): APIC ID: 10
 cpu6 (AP): APIC ID: 12
 cpu7 (AP): APIC ID: 14
 cpu8 (AP): APIC ID: 16
 cpu9 (AP): APIC ID: 18
 cpu10 (AP): APIC ID: 20
 cpu11 (AP): APIC ID: 22
 cpu12 (AP): APIC ID: 24
 cpu13 (AP): APIC ID: 26
 cpu14 (AP): APIC ID: 28
 cpu15 (AP): APIC ID: 30
 cpu16 (AP): APIC ID: 32
 cpu17 (AP): APIC ID: 34
 cpu18 (AP): APIC ID: 36
 cpu19 (AP): APIC ID: 38
 cpu20 (AP): APIC ID: 40
 cpu21 (AP): APIC ID: 42
 cpu22 (AP): APIC ID: 44
 cpu23 (AP): APIC ID: 46
 cpu24 (AP): APIC ID: 48
 cpu25 (AP): APIC ID: 50
 cpu26 (AP): APIC ID: 52
 cpu27 (AP): APIC ID: 54
 cpu28 (AP): APIC ID: 56
 cpu29 (AP): APIC ID: 58
 cpu30 (AP): APIC ID: 60
 cpu31 (AP): APIC ID: 62
random device not loaded; using insecure entropy
ioapic0: Changing APIC ID to 1
MADT: Forcing active-low polarity and level trigger for SCI
ioapic0 Version 1.1 irqs 0-47 on motherboard
kbd1 at kbdmux0
xen_et0: Xen PV Clock on motherboard
Event timer XENTIMER frequency 10 Hz quality 950
Timecounter XENTIMER frequency 10 Hz quality 950
acpi0: Xen on motherboard
acpi0: Power Button (fixed)
acpi0: Sleep Button (fixed)
acpi0: reservation of 0, a0000 (3) failed
cpu0: ACPI CPU on acpi0
cpu1: ACPI CPU on acpi0
cpu2: ACPI CPU on acpi0
cpu3: ACPI CPU on acpi0
cpu4: ACPI CPU on acpi0
cpu5: ACPI CPU on acpi0
cpu6: ACPI CPU on acpi0
cpu7: ACPI CPU on acpi0
cpu8: ACPI CPU on acpi0
cpu9: ACPI CPU on acpi0
cpu10: ACPI CPU on acpi0
cpu11: ACPI CPU on acpi0
cpu12: ACPI CPU on acpi0
cpu13: ACPI CPU on acpi0
cpu14: ACPI CPU on acpi0
cpu15: ACPI CPU on acpi0
cpu16: ACPI CPU on acpi0
cpu17: ACPI CPU on acpi0
cpu18: ACPI CPU on acpi0
cpu19: ACPI CPU on acpi0
cpu20: ACPI CPU on acpi0
cpu21: ACPI CPU on acpi0
cpu22: ACPI CPU on acpi0
cpu23: ACPI CPU on acpi0
cpu24: ACPI CPU on acpi0
cpu25: ACPI CPU on acpi0
cpu26: ACPI CPU on acpi0
cpu27: ACPI CPU on acpi0
cpu28: ACPI CPU on acpi0
cpu29: ACPI CPU on acpi0
cpu30: ACPI CPU on acpi0
cpu31: ACPI CPU on acpi0
hpet0: High Precision Event Timer iomem 0xfed00000-0xfed003ff on acpi0
Timecounter HPET frequency 6250 Hz quality 950
attimer0: AT 

virtio-net vs qemu 1.5.0

2013-05-23 Thread Julian Stecklina
Hello,

I just compiled qemu 1.5.0 and noticed that virtio network (on CURRENT as of 
today) seems to have problems updating the MAC filter table:

vtnet0: error setting host MAC filter table

As far as I understand, if_vtnet.c does the following in vtnet_rx_filter_mac: 
it appends two full struct vtnet_mac_tables (one for unicast and one for 
multicast) to the request. Each consists of the number of actual entries in 
the table plus space for 128 (mostly unused) entries in total.

The qemu code parses this differently. It first reads the number of elements 
in the first table, skips over that many MAC addresses, and then expects the 
header of the second table (which in our case points to zeroed memory). It 
then skips those zero MAC entries as well, expects to have consumed the whole 
request, and returns an error because there is still data left. The relevant 
code is in qemu/hw/net/virtio-net.c, in virtio_net_handle_rx_mode.

Assuming the qemu code is correct (of which I am not sure), the correct 
solution would be to enqueue only as many MACs in the original request as 
are actually used. The following (slightly dirty) patch fixes this for me:


diff --git a/sys/dev/virtio/network/if_vtnet.c 
b/sys/dev/virtio/network/if_vtnet.c
index ffc349a..6f00dfb 100644
--- a/sys/dev/virtio/network/if_vtnet.c
+++ b/sys/dev/virtio/network/if_vtnet.c
@@ -2470,9 +2470,9 @@ vtnet_rx_filter_mac(struct vtnet_softc *sc)
 	sglist_init(&sg, 4, segs);
 	error |= sglist_append(&sg, &hdr, sizeof(struct virtio_net_ctrl_hdr));
 	error |= sglist_append(&sg, &filter->vmf_unicast,
-	    sizeof(struct vtnet_mac_table));
+	    sizeof(uint32_t) + ETHER_ADDR_LEN * filter->vmf_unicast.nentries);
 	error |= sglist_append(&sg, &filter->vmf_multicast,
-	    sizeof(struct vtnet_mac_table));
+	    sizeof(uint32_t) + ETHER_ADDR_LEN * filter->vmf_multicast.nentries);
 	error |= sglist_append(&sg, &ack, sizeof(uint8_t));
 	KASSERT(error == 0 && sg.sg_nseg == 4,
 	    ("error adding MAC filtering message to sglist"));

Any virtio guru here to comment on this?

Julian

___
freebsd-virtualization@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-virtualization
To unsubscribe, send any mail to 
freebsd-virtualization-unsubscr...@freebsd.org


Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Outback Dingo
On Thu, May 23, 2013 at 8:57 AM, Jeroen van der Ham jer...@dckd.nl wrote:

 Hi,

 On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users, so we
  would like to encourage FreeBSD and Xen users to test it and provide
  feedback.

 I've just been able to install it on a VPS using the latest pvhvm_v9
 branch.
 This is good news, because the system I had before actually had trouble
 with the HVM kernel from 9.1 [0].

 I'm going to leave this running for a while and do some more tests on it.

 Jeroen.


Curious if this would work under XEN XCP (Xen Cloud Platform)




 [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822




Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Roger Pau Monné
On 23/05/13 15:20, Jeroen van der Ham wrote:
 
 On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
 Also, I've created a wiki page that explains how to set up a FreeBSD
 PVHVM for testing:

 http://wiki.xen.org/wiki/Testing_FreeBSD_PVHVM
 
 
 You mention on that page that it is easier to install on 10.0-CURRENT 
 snapshots.
 What are the issues with installing this on 9.1? Is it possible?

I don't think it is recommended to use a HEAD (10) kernel with a 9.1
userland. You can always install 9.1 and then do a full world and kernel
update using the sources from my repository.
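For anyone wanting to try that route, a rough sketch of the update follows. The repository path comes from the xenbits gitweb link in this thread; the branch name and build flags are my assumptions (adjust to whatever the current branch is), and this is the usual source-update flow rather than anything specific to these patches.

```shell
# Fetch Roger's tree (repo path from the xenbits gitweb link in this thread)
git clone -b pvhvm_v9 git://xenbits.xen.org/people/royger/freebsd.git /usr/src

# Full world + XENHVM kernel rebuild, then install and reboot
cd /usr/src
make buildworld buildkernel KERNCONF=XENHVM
make installkernel KERNCONF=XENHVM
make installworld
```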



Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Roger Pau Monné
On 23/05/13 14:57, Jeroen van der Ham wrote:
 Hi,
 
 On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
 Right now the code is in a state where it can be tested by users, so we
 would like to encourage FreeBSD and Xen users to test it and provide
 feedback.
 
 I've just been able to install it on a VPS using the latest pvhvm_v9 branch.

The branch pvhvm_v9 contains an initial implementation of PV IPIs for
amd64. I've now finished it and I'm going to port it to i386 also, and
push a new branch to the repository.

 This is good news, because the system I had before actually had trouble with 
 the HVM kernel from 9.1 [0].
 
 I'm going to leave this running for a while and do some more tests on it.
 
 Jeroen.
 
 
 [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 



Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Outback Dingo
On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com wrote:

 On 23/05/13 14:57, Jeroen van der Ham wrote:
  Hi,
 
  On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users, so we
  would like to encourage FreeBSD and Xen users to test it and provide
  feedback.
 
  I've just been able to install it on a VPS using the latest pvhvm_v9
 branch.

 The branch pvhvm_v9 contains an initial implementation of PV IPIs for
 amd64. I've now finished it and I'm going to port it to i386 also, and
 push a new branch to the repository.



Curious from what rev you forked your Xen PVHVM work off HEAD, so I can
make sure I haven't lost any fixes or new commits from upstream.



  This is good news, because the system I had before actually had trouble
 with the HVM kernel from 9.1 [0].
 
  I'm going to leave this running for a while and do some more tests on it.
 
  Jeroen.
 
 
  [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 



Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Outback Dingo
On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com wrote:

 On 23/05/13 14:57, Jeroen van der Ham wrote:
  Hi,
 
  On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users, so we
  would like to encourage FreeBSD and Xen users to test it and provide
  feedback.
 
  I've just been able to install it on a VPS using the latest pvhvm_v9
 branch.

 The branch pvhvm_v9 contains an initial implementation of PV IPIs for
 amd64. I've now finished it and I'm going to port it to i386 also, and
 push a new branch to the repository.

  This is good news, because the system I had before actually had trouble
 with the HVM kernel from 9.1 [0].
 
  I'm going to leave this running for a while and do some more tests on it.
 
  Jeroen.
 
 
  [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 

 I built the rev_9 branch on an XCP host and rebooted; however, I am seeing
the following on boot, after 'ugen0.2: <QEMU 0.10.2> at usbus0':

run_interrupt_driven_hooks: still waiting after 60 seconds for
xenbus_nop_confighook_cb
run_interrupt_driven_hooks: still waiting after 120 seconds for
xenbus_nop_confighook_cb
run_interrupt_driven_hooks: still waiting after 180 seconds for
xenbus_nop_confighook_cb
run_interrupt_driven_hooks: still waiting after 240 seconds for
xenbus_nop_confighook_cb
run_interrupt_driven_hooks: still waiting after 300 seconds for
xenbus_nop_confighook_cb
panic: run_interrupt_driven_confighooks: waited too long
cpuid = 0
KDB: enter: panic
[ thread pid 0 tid 10 ]
Stopped at kdb_enter+0x3b: movq $0,0xad6522(%rip)
db>





Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Sergey Kandaurov
On 23 May 2013 20:30, Outback Dingo outbackdi...@gmail.com wrote:
 On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com wrote:

 On 23/05/13 14:57, Jeroen van der Ham wrote:
  Hi,
 
  On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users, so we
  would like to encourage FreeBSD and Xen users to test it and provide
  feedback.
 
  I've just been able to install it on a VPS using the latest pvhvm_v9
 branch.

 The branch pvhvm_v9 contains an initial implementation of PV IPIs for
 amd64. I've now finished it and I'm going to port it to i386 also, and
 push a new branch to the repository.

  This is good news, because the system I had before actually had trouble
 with the HVM kernel from 9.1 [0].
 
  I'm going to leave this running for a while and do some more tests on it.
 
  Jeroen.
 
 
  [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 

 I built the rev_9 branch on a XCP host and rebooted, however I am seeing

 on boot after ugen0.2: QEMU 0.10.2 at usbus0

 run_interrupt_driven_hooks: still waiting after 60 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 120 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 180 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 240 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 300 seconds for
 xenbus_nop_confighook_cb
 panic: run_interrupt_driven_confighooks: waited too long
 cpuid = 0
 KDB: enter: panic
 [ thread pid 0 tid 10 ]
 Stopped at kdb_enter+0x3b: movq $0,0xad6522(%rip)
 db>

Can you recheck this on stock HEAD?

From your description this looks like a rather old bug, seen with 8.2
or above on XCP (referenced in PR kern/164630). You can trigger it,
e.g., by booting with an empty cdrom drive.

-- 
wbr,
pluknet


Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Roger Pau Monné
On 23/05/13 18:30, Outback Dingo wrote:
 
 
 
 On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com
 mailto:roger@citrix.com wrote:
 
 On 23/05/13 14:57, Jeroen van der Ham wrote:
  Hi,
 
  On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com
 mailto:roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users,
 so we
  would like to encourage FreeBSD and Xen users to test it and provide
  feedback.
 
  I've just been able to install it on a VPS using the latest
 pvhvm_v9 branch.
 
 The branch pvhvm_v9 contains an initial implementation of PV IPIs for
 amd64. I've now finished it and I'm going to port it to i386 also, and
 push a new branch to the repository.
 
  This is good news, because the system I had before actually had
 trouble with the HVM kernel from 9.1 [0].
 
  I'm going to leave this running for a while and do some more tests
 on it.
 
  Jeroen.
 
 
  [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 
 
 I built the rev_9 branch on a XCP host and rebooted, however I am seeing
 
 on boot after ugen0.2: QEMU 0.10.2 at usbus0
 
 run_interrupt_driven_hooks: still waiting after 60 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 120 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 180 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 240 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 300 seconds for
 xenbus_nop_confighook_cb
 panic: run_interrupt_driven_confighooks: waited too long
 cpuid = 0
 KDB: enter: panic
 [ thread pid 0 tid 10 ]
 Stopped at kdb_enter+0x3b: movq $0,0xad6522(%rip)
 db>

From what I've read on the list, it seems you cannot boot the PVHVM
kernel if you have a cdrom attached to the guest. Could you try
disabling the cdrom and booting again?



Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Outback Dingo
On Thu, May 23, 2013 at 12:48 PM, Roger Pau Monné roger@citrix.com wrote:

 On 23/05/13 18:30, Outback Dingo wrote:
 
 
 
  On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com
  mailto:roger@citrix.com wrote:
 
  On 23/05/13 14:57, Jeroen van der Ham wrote:
   Hi,
  
   On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com
  mailto:roger@citrix.com wrote:
   Right now the code is in a state where it can be tested by users,
  so we
   would like to encourage FreeBSD and Xen users to test it and
 provide
   feedback.
  
   I've just been able to install it on a VPS using the latest
  pvhvm_v9 branch.
 
  The branch pvhvm_v9 contains an initial implementation of PV IPIs for
  amd64. I've now finished it and I'm going to port it to i386 also,
 and
  push a new branch to the repository.
 
   This is good news, because the system I had before actually had
  trouble with the HVM kernel from 9.1 [0].
  
   I'm going to leave this running for a while and do some more tests
  on it.
  
   Jeroen.
  
  
   [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
  
 
  I built the rev_9 branch on a XCP host and rebooted, however I am seeing
 
  on boot after ugen0.2: QEMU 0.10.2 at usbus0
 
  run_interrupt_driven_hooks: still waiting after 60 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 120 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 180 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 240 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 300 seconds for
  xenbus_nop_confighook_cb
  panic: run_interrupt_driven_confighooks: waited too long
  cpuid = 0
  KDB: enter: panic
  [ thread pid 0 tid 10 ]
  Stopped at kdb_enter+0x3b: movq $0,0xad6522(%rip)
  db>

 From what I've read on the list, it seems like you cannot boot the PVHVM
 kernel if you have a cdrom attached to the guest, could you try
 disabling the cdrom and booting again?


Great; how does one go about disabling the cdrom? I gather some disk
parameters need to be removed from the VM template before boot.


Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Jeroen van der Ham
Hi,

Just remove this line (or one pointing to a similar file) from the
template; it's part of the disks definition:

   'file:/root/freebsd-10.iso,hdc:cdrom,r',

Jeroen.

On 23 May 2013, at 19:02, Outback Dingo outbackdi...@gmail.com wrote:

 On Thu, May 23, 2013 at 12:48 PM, Roger Pau Monné roger@citrix.com wrote:
 
 On 23/05/13 18:30, Outback Dingo wrote:
 
 
 
 On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com
 mailto:roger@citrix.com wrote:
 
On 23/05/13 14:57, Jeroen van der Ham wrote:
 Hi,
 
 On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com
mailto:roger@citrix.com wrote:
 Right now the code is in a state where it can be tested by users,
so we
 would like to encourage FreeBSD and Xen users to test it and
 provide
 feedback.
 
 I've just been able to install it on a VPS using the latest
pvhvm_v9 branch.
 
The branch pvhvm_v9 contains an initial implementation of PV IPIs for
amd64. I've now finished it and I'm going to port it to i386 also,
 and
push a new branch to the repository.
 
 This is good news, because the system I had before actually had
trouble with the HVM kernel from 9.1 [0].
 
 I'm going to leave this running for a while and do some more tests
on it.
 
 Jeroen.
 
 
 [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 
 
 I built the rev_9 branch on a XCP host and rebooted, however I am seeing
 
 on boot after ugen0.2: QEMU 0.10.2 at usbus0
 
 run_interrupt_driven_hooks: still waiting after 60 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 120 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 180 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 240 seconds for
 xenbus_nop_confighook_cb
 run_interrupt_driven_hooks: still waiting after 300 seconds for
 xenbus_nop_confighook_cb
 panic: run_interrupt_driven_confighooks: waited too long
 cpuid = 0
 KDB: enter: panic
 [ thread pid 0 tid 10 ]
  Stopped at kdb_enter+0x3b: movq $0,0xad6522(%rip)
  db>
 
 From what I've read on the list, it seems like you cannot boot the PVHVM
 kernel if you have a cdrom attached to the guest, could you try
 disabling the cdrom and booting again?
 
 
 great how does one go about disabling the cdrom, i get some disk parameters
 needs to be removed from the vm template before boot



Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Outback Dingo
On Thu, May 23, 2013 at 1:22 PM, Jeroen van der Ham jer...@dckd.nl wrote:

 Hi,

 Just remove this line (or pointing to a similar file from the template:
 (It's part of the disks definition:

'file:/root/freebsd-10.iso,hdc:cdrom,r',


Thanks, but this is XCP, not a generic Xen server where there are VM config
files under /etc/xen/; in XCP those don't exist.
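For what it's worth, on XCP you'd normally do this through the xe CLI instead of a config file; something along these lines should work (the VM name is a placeholder, and the exact command names are from memory, so double-check them against the xe help):

```shell
# Eject the ISO from the VM's virtual cdrom drive
xe vm-cd-eject vm=freebsd-pvhvm

# Or remove the cdrom drive entirely: find its VBD, then destroy it
xe vbd-list vm-name-label=freebsd-pvhvm type=CD
xe vbd-destroy uuid=<uuid-from-previous-command>
```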


 Jeroen.

 On 23 May 2013, at 19:02, Outback Dingo outbackdi...@gmail.com wrote:

  On Thu, May 23, 2013 at 12:48 PM, Roger Pau Monné roger@citrix.com
 wrote:
 
  On 23/05/13 18:30, Outback Dingo wrote:
 
 
 
  On Thu, May 23, 2013 at 9:33 AM, Roger Pau Monné roger@citrix.com
  mailto:roger@citrix.com wrote:
 
 On 23/05/13 14:57, Jeroen van der Ham wrote:
  Hi,
 
  On 13 May 2013, at 20:32, Roger Pau Monné roger@citrix.com
 mailto:roger@citrix.com wrote:
  Right now the code is in a state where it can be tested by users,
 so we
  would like to encourage FreeBSD and Xen users to test it and
  provide
  feedback.
 
  I've just been able to install it on a VPS using the latest
 pvhvm_v9 branch.
 
 The branch pvhvm_v9 contains an initial implementation of PV IPIs
 for
 amd64. I've now finished it and I'm going to port it to i386 also,
  and
 push a new branch to the repository.
 
  This is good news, because the system I had before actually had
 trouble with the HVM kernel from 9.1 [0].
 
  I'm going to leave this running for a while and do some more tests
 on it.
 
  Jeroen.
 
 
  [0]: http://www.freebsd.org/cgi/query-pr.cgi?pr=175822
 
 
  I built the rev_9 branch on a XCP host and rebooted, however I am
 seeing
 
  on boot after ugen0.2: QEMU 0.10.2 at usbus0
 
  run_interrupt_driven_hooks: still waiting after 60 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 120 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 180 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 240 seconds for
  xenbus_nop_confighook_cb
  run_interrupt_driven_hooks: still waiting after 300 seconds for
  xenbus_nop_confighook_cb
  panic: run_interrupt_driven_confighooks: waited too long
  cpuid = 0
  KDB: enter: panic
  [ thread pid 0 tid 10 ]
  Stropped at kdb_enter +0x3b: movq $0,0xad6522(%rip)
  db
 
  From what I've read on the list, it seems like you cannot boot the PVHVM
  kernel if you have a cdrom attached to the guest, could you try
  disabling the cdrom and booting again?
 
 
  great how does one go about disabling the cdrom, i get some disk
 parameters
  needs to be removed from the vm template before boot




Re: FreeBSD PVHVM call for testing

2013-05-23 Thread Roger Pau Monné
Hello,

I've pushed a new branch, pvhvm_v10 that contains a PV IPI
implementation for both amd64 and i386. I've also updated the wiki to
point to the pvhvm_v10 branch:

http://xenbits.xen.org/gitweb/?p=people/royger/freebsd.git;a=shortlog;h=refs/heads/pvhvm_v10

I've updated my tree to latest HEAD, so now branch pvhvm_v10 is on top
of this commit:

commit b44da0fb82647f2cfb06f65a6695c7e36c98828c
Author: gber g...@freebsd.org
Date:   Thu May 23 12:24:46 2013 +

Rework and organize pmap_enter_locked() function.

pmap_enter_locked() implementation was very ambiguous and confusing.
Rearrange it so that each part of the mapping creation is separated.
Avoid walking through the redundant conditions.
Extract vector_page specific PTE setup from normal PTE setting.

Submitted by:   Zbigniew Bodek z...@semihalf.com
Sponsored by:   The FreeBSD Foundation, Semihalf

Thanks for the testing, Roger.
