Hello Konrad.

Thanks for your email. I have added my responses below.



On Tue, Jan 3, 2012 at 11:22 PM, Konrad Rzeszutek Wilk <[email protected]> wrote:

> On Thu, Dec 29, 2011 at 11:28:59PM +0530, R J wrote:
> > Hello List,
> >
> > Merry Christmas to all!
> >
> > Basically I'm trying to boot a Windows 2008 R2 DC HVM with 90GB static-max
> > memory and 32GB static-min.
> >
> > The node config is a Dell M610 with X5660 CPUs and 96GB RAM, and it's
> > running XCP 1.1.
> >
> > Many times the node crashes while booting the HVM; sometimes it boots successfully.
>
>
> Node? Meaning dom0? Or the guest? Are you using the dom0_mem=max:X argument?
>

Node means the physical machine; I was not sure whether to call it dom0.
dom0 in this case has the default 750 MB of RAM.
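
For reference, here is a quick check one can run inside dom0 to confirm that
figure. A minimal sketch; it assumes MemTotal in /proc/meminfo reflects dom0's
current allocation, which holds for a stock dom0:

#!/usr/bin/env python
# Minimal sketch: print dom0's current memory allocation.
# Assumption: run inside dom0, where MemTotal is what Xen has assigned to it.

def dom0_mem_kib(path="/proc/meminfo"):
    f = open(path)
    try:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1])  # /proc/meminfo reports kB
    finally:
        f.close()
    raise RuntimeError("MemTotal not found")

if __name__ == "__main__":
    kib = dom0_mem_kib()
    print("dom0 memory: %d kB (~%.0f MB)" % (kib, kib / 1024.0))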


> > I have attached the HVM boot log of a successful start. Many times the node
> > hangs as soon as the BalloonWorkerThread is activated.
>
> Which PV driver is this? Does this happen with the other ones (the GPL one,
> Citrix, Novell, and Oracle) as well?
>

This is the Citrix PV driver; both XCP and the PV drivers are version 1.1.


> >
> > In the attached txt the balloon inflates in constant chunks of 4090 pages:
> > XENUTIL: BalloonWorkerThread: inflated balloon by 4090 page(s) in 7924ms
> > (2064k/s)
> >
> > This continues until, just before the VM comes up, the balloon jumps by
> > 12554884 pages in one step and the VM is live:
> > XENUTIL: BalloonWorkerThread: inflated balloon by 12554884 page(s) in
> > 32604ms (91243k/s)
> > XENUTIL: BalloonWorkerThread: de-activating
> > XENUTIL: XenevtchnMapResources setting callback irq to 11
> >
> >
> > Can someone help me understand the BalloonWorkerThread behavior?
> >
> >
> > Many thanks,
> > Rushi
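
To put rough numbers on the question: assuming 4 KiB pages, each 4090-page
batch is about 16 MB, and my guess (an assumption on my part, not something
the log states) is that 4090 is simply the driver's fixed per-batch PFN-array
size. A quick back-of-the-envelope check:

# Back-of-the-envelope check of the balloon figures in the attached log.
# Assumptions (mine, not the log's): pages are 4 KiB and 4090 is the size
# of the driver's fixed per-batch PFN array.

PAGE_KIB = 4

def rate_kib_per_s(pages, ms):
    """Reproduce the driver's 'NNNNk/s' per-batch figure."""
    return pages * PAGE_KIB * 1000 // ms

# First batch in the log: 4090 pages in 7924 ms.
print("first batch: %d k/s" % rate_kib_per_s(4090, 7924))  # 2064, matches

# Total to remove: BalloonTargetChanged 94371840k -> 43792384k.
total_pages = (94371840 - 43792384) // PAGE_KIB  # 12644864 pages
batches = total_pages / 4090.0                   # ~3092 batches
print("batches needed: %.0f" % batches)

# The slow phase runs roughly 15 s per batch, so finishing at that pace
# would take about 13 hours, which is why the single 12554884-page step
# at the end dominates the whole inflation.
print("slow-phase estimate: %.1f hours" % (batches * 15 / 3600.0))

Incidentally, the final step's reported 91243k/s does not follow from
12554884 pages in 32604 ms (that would be roughly 1.5 GB/s), so presumably
the driver averages that figure over a different window.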
>
> > Dec 29 23:08:01 n4 xenguest: Determined the following parameters from
> xenstore:
> > Dec 29 23:08:01 n4 xenguest: vcpu/number:16 vcpu/weight:0 vcpu/cap:0 nx:
> 1 viridian: 1 apic: 1 acpi: 1 pae: 1 acpi_s4: 0 acpi_s3: 0
> > Dec 29 23:08:01 n4 xenguest: vcpu/0/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/1/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/2/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/3/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/4/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/5/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/6/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/7/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/8/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/9/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/10/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/11/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/12/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/13/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/14/affinity:0
> > Dec 29 23:08:01 n4 xenguest: vcpu/15/affinity:0
> > Dec 29 23:08:14 n4 tapdisk[18204]: tapdisk-control: init, 10 x 4k buffers
> > Dec 29 23:08:14 n4 tapdisk[18204]: I/O queue driver: lio
> > Dec 29 23:08:14 n4 tapdisk[18204]: tapdisk-log: started, level 0
> > Dec 29 23:08:14 n4 tapdisk[18204]: received 'attach' message (uuid = 0)
> > Dec 29 23:08:14 n4 tapdisk[18204]: sending 'attach response' message
> (uuid = 0)
> > Dec 29 23:08:14 n4 tapdisk[18204]: received 'open' message (uuid = 0)
> > Dec 29 23:08:14 n4 tapdisk[18204]: Loading driver 'vhd' for vbd 0
> /dev/VG_XenStorage-49740841-8056-06e2-373b-ec72084f6fb0/VHD-62c5a501-d662-4d38-a75c-a280e2929297
> 0x00000000
> > Dec 29 23:08:14 n4 tapdisk[18204]:
> /dev/VG_XenStorage-49740841-8056-06e2-373b-ec72084f6fb0/VHD-62c5a501-d662-4d38-a75c-a280e2929297
> version: tap 0x00010003, b: 15360, a: 307, f: 26, n: 1268376
> > Dec 29 23:08:14 n4 tapdisk[18204]: opened image
> /dev/VG_XenStorage-49740841-8056-06e2-373b-ec72084f6fb0/VHD-62c5a501-d662-4d38-a75c-a280e2929297
> (1 users, state: 0x00000001, type: 4)
> > Dec 29 23:08:14 n4 tapdisk[18204]:
> /dev/mapper/VG_XenStorage--49740841--8056--06e2--373b--ec72084f6fb0-VHD--8eae906c--8f44--4618--a850--3aaa5293408b
> version: tap 0x00010003, b: 15360, a: 3331, f: 3307, n: 0
> > Dec 29 23:08:14 n4 tapdisk[18204]: opened image
> /dev/mapper/VG_XenStorage--49740841--8056--06e2--373b--ec72084f6fb0-VHD--8eae906c--8f44--4618--a850--3aaa5293408b
> (1 users, state: 0x00000003, type: 4)
> > Dec 29 23:08:14 n4 tapdisk[18204]: VBD CHAIN:
> > Dec 29 23:08:14 n4 tapdisk[18204]:
> /dev/VG_XenStorage-49740841-8056-06e2-373b-ec72084f6fb0/VHD-62c5a501-d662-4d38-a75c-a280e2929297:
> type:vhd(4) storage:lvm(3)
> > Dec 29 23:08:14 n4 tapdisk[18204]:
> /dev/mapper/VG_XenStorage--49740841--8056--06e2--373b--ec72084f6fb0-VHD--8eae906c--8f44--4618--a850--3aaa5293408b:
> type:vhd(4) storage:lvm(3)
> > Dec 29 23:08:14 n4 tapdisk[18204]: sending 'open response' message (uuid
> = 0)
> > Dec 29 23:08:14 n4 vbd.uevent[add](backend/vbd/18/768): wrote
> /xapi/18/hotplug/vbd/768/hotplug = 'online'
> > Dec 29 23:08:15 n4 vbd.uevent[add](backend/vbd/18/5696): wrote
> /xapi/18/hotplug/vbd/5696/hotplug = 'online'
> > Dec 29 23:08:15 n4 ovs-vsctl: 00001|vsctl|INFO|Called as
> /usr/bin/ovs-vsctl list-ports xapi9
> > Dec 29 23:08:15 n4 ovs-vsctl: 00001|vsctl|INFO|Called as
> /usr/bin/ovs-vsctl --timeout=30 -- --if-exists del-port vif18.0 -- add-port
> xapi9 vif18.0 -- set interface vif18.0
> "external-ids:\"xs-vm-uuid\"=\"6591a403-0eba-30b4-96a6-e02a7db0607a\"" --
> set interface vif18.0
> "external-ids:\"xs-vif-uuid\"=\"3be54e6d-6d13-b04b-6735-24831e5169e5\"" --
> set interface vif18.0
> "external-ids:\"xs-network-uuid\"=\"7051ef99-4fcb-fa61-a10e-f98456e12e90\""
> -- set interface vif18.0
> "external-ids:\"attached-mac\"=\"d6:6d:60:7e:45:52\""
> > Dec 29 23:08:15 n4 qemu.18: domid: 18
> > Dec 29 23:08:15 n4 qemu.18: qemu: the number of cpus is 16
> > Dec 29 23:08:15 n4 qemu.18: -videoram option does not work with cirrus
> vga device model. Videoram set to 4M.
> > Dec 29 23:08:15 n4 HVM18[18302]: Guest uuid =
> 6591a403-0eba-30b4-96a6-e02a7db0607a
> > Dec 29 23:08:15 n4 HVM18[18302]: Watching
> /local/domain/18/logdirty/next-active
> > Dec 29 23:08:15 n4 HVM18[18302]: Watching
> /local/domain/0/device-model/18/command
> > Dec 29 23:08:15 n4 HVM18[18302]: char device redirected to /dev/pts/2
> > Dec 29 23:08:15 n4 HVM18[18302]: char device redirected to /dev/pts/3
> > Dec 29 23:08:15 n4 HVM18[18302]: qemu_map_cache_init nr_buckets = 4000
> size 327680
> > Dec 29 23:08:15 n4 HVM18[18302]: shared page at pfn feffd
> > Dec 29 23:08:15 n4 HVM18[18302]: buffered io page at pfn feffb
> > Dec 29 23:08:15 n4 HVM18[18302]: Time offset set 0
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:00:00 (i440FX)
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:01:00 (PIIX3)
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:02:00 (Cirrus
> VGA)
> > Dec 29 23:08:15 n4 HVM18[18302]: populating video RAM at ff000000
> > Dec 29 23:08:15 n4 HVM18[18302]: mapping video RAM from ff000000
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:03:00
> (xen-platform)
> > Dec 29 23:08:15 n4 HVM18[18302]:
> xs_read(/vm/6591a403-0eba-30b4-96a6-e02a7db0607a/log-throttling): read error
> > Dec 29 23:08:15 n4 HVM18[18302]: ROM memory area now RW
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:04:00 (RTL8139)
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:01:01 (PIIX3
> IDE)
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:01:02 (USB-UHCI)
> > Dec 29 23:08:15 n4 HVM18[18302]: pci_register_device: 00:01:03 (PIIX4
> ACPI)
> > Dec 29 23:08:15 n4 HVM18[18302]:
> xs_read(/local/domain/0/device-model/18/xen_extended_power_mgmt): read error
> > Dec 29 23:08:15 n4 HVM18[18302]: releasing VM
> > Dec 29 23:08:15 n4 HVM18[18302]: xs_read(): vncpasswd get error.
> /vm/6591a403-0eba-30b4-96a6-e02a7db0607a/vncpasswd.
> > Dec 29 23:08:15 n4 HVM18[18302]: I/O request not ready: 0, ptr: 0, port:
> 0, data: 0, count: 0, size: 0
> > Dec 29 17:38:15 n4 last message repeated 2 times
> > Dec 29 17:38:15 n4 HVM18[18302]: Triggered log-dirty buffer switch
> > Dec 29 17:38:15 n4 HVM18[18302]: I/O request not ready: 0, ptr: 0, port:
> 0, data: 0, count: 0, size: 0
> > Dec 29 17:38:15 n4 HVM18[18302]: medium change watch on `hdd' (index: 1):
> > Dec 29 17:38:15 n4 HVM18[18302]: I/O request not ready: 0, ptr: 0, port:
> 0, data: 0, count: 0, size: 0
> > Dec 29 17:38:15 n4 last message repeated 11 times
> > Dec 29 17:38:16 n4 HVM18[18302]: cirrus vga map change while on lfb mode
> > Dec 29 23:08:16 n4 ovs-vsctl: 00001|vsctl|INFO|Called as
> /usr/bin/ovs-vsctl --timeout=30 -- --if-exists del-port tap18.0 -- add-port
> xapi9 tap18.0
> > Dec 29 17:38:16 n4 HVM18[18302]: mapping vram to f0000000 - f0400000
> > Dec 29 17:38:17 n4 HVM18[18302]: ROM memory area now RW
> > Dec 29 17:38:17 n4 HVM18[18302]: ROM memory area now RO
> > Dec 29 17:38:18 n4 HVM18[18302]: cirrus: blanking the screen
> line_offset=3072 height=768
> > Dec 29 17:38:34 n4 HVM18[18302]: cirrus: blanking the screen
> line_offset=1024 height=768
> > Dec 29 17:38:37 n4 HVM18[18302]: UNPLUG: protocol version set to 1
> (drivers not blacklisted)
> > Dec 29 17:38:37 n4 HVM18[18302]: UNPLUG: protocol 1 active
> > Dec 29 17:38:37 n4 HVM18[18302]: UNPLUG: product_id: 1 build_number:
> 30876
> > Dec 29 17:38:37 n4 HVM18[18302]: UNPLUG: drivers not blacklisted
> > Dec 29 17:38:37 n4 HVM18[18302]: ide_unplug_harddisk: drive 0
> > Dec 29 17:38:37 n4 HVM18[18302]: pci_dev_unplug: 00:04:00
> > Dec 29 17:38:37 n4 HVM18[18302]: net_tap_shutdown: model=tap,name=tap.0
> > Dec 29 23:08:38 n4 ovs-vsctl: 00001|vsctl|INFO|Called as
> /usr/bin/ovs-vsctl --timeout=30 -- --if-exists del-port tap18.0
> > Dec 29 17:38:38 n4 HVM18[18302]:  XEVTCHN: InstallDumpDeviceCallback:
> version mismatch (255 != 1)
> > Dec 29 17:38:38 n4 HVM18[18302]:   XEVTCHN: XenevtchnAddDevice: FDO =
> 0xFFFFFA8044323970
> > Dec 29 17:38:38 n4 HVM18[18302]:   XEVTCHN: Initialized tracing provider
> > Dec 29 17:38:38 n4 HVM18[18302]:   XEVTCHN: StartDeviceFdo: ====>
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: XEVTCHN: IO hole:
> [00000000fbfa6000,00000000fc000000) mapped at FFFFF88002965000
> > Dec 29 17:38:38 n4 HVM18[18302]: net_tap_shutdown: model=tap,name=tap.0
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: KERNEL: 6.1 (build 7600)
> platform WIN32_NT
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: SP: NONE
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: SUITES:
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: - TERMINAL
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: - DATACENTER
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: - SINGLEUSERTS
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: TYPE: SERVER
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: PV DRIVERS: VERSION: 5.6.0
> BUILD: 30876 (Apr 30 2010.06:57:01)
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: 64-bit HVM
> > Dec 29 17:38:38 n4 HVM18[18302]: net_tap_shutdown: model=tap,name=tap.0
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: ExpandGrantTable: GRANT
> TABLE 0: (0 - 511) at FFFFF88002966000 (fbfa7000)
> > Dec 29 17:38:38 n4 HVM18[18302]:   XENUTIL: XenEnterprise product string
> is present
> > Dec 29 17:38:39 n4 HVM18[18302]:   XENUTIL: PHYSICAL MEMORY: TOP =
> 00000016.8fc00000
> > Dec 29 17:38:39 n4 HVM18[18302]:   XENUTIL: BalloonTargetChanged:
> 94371840k -> 43792384k
> > Dec 29 17:38:39 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> activating
> > Dec 29 17:38:47 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 2230ms
> > Dec 29 17:38:47 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 7924ms (2064k/s)
> > Dec 29 17:38:47 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94355480k)
> > Dec 29 17:38:57 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 1794ms
> > Dec 29 17:38:57 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 9157ms (1786k/s)
> > Dec 29 17:38:57 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94339120k)
> > Dec 29 17:39:13 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5070ms
> > Dec 29 17:39:13 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 14601ms (1120k/s)
> > Dec 29 17:39:13 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94322760k)
> > Dec 29 17:39:30 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 4321ms
> > Dec 29 17:39:30 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 16052ms (1019k/s)
> > Dec 29 17:39:30 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94306400k)
> > Dec 29 17:39:40 n4 HVM18[18302]:   XENUTIL: WARNING: BalloonPodSweep:
> HYPERVISOR_memory_op(XENMEM_pod_sweep, ...) failed (fffffff4)
> > Dec 29 17:39:46 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 6099ms
> > Dec 29 17:39:46 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 15132ms (1081k/s)
> > Dec 29 17:39:46 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94290040k)
> > Dec 29 17:40:04 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 4492ms
> > Dec 29 17:40:04 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 17206ms (950k/s)
> > Dec 29 17:40:04 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94273680k)
> > Dec 29 17:40:16 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 2043ms
> > Dec 29 17:40:16 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 11294ms (1448k/s)
> > Dec 29 17:40:16 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94257320k)
> > Dec 29 17:40:27 n4 HVM18[18302]:   XENUTIL: WARNING: BalloonPodSweep:
> HYPERVISOR_memory_op(XENMEM_pod_sweep, ...) failed (fffffff4)
> > Dec 29 17:40:32 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5179ms
> > Dec 29 17:40:32 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 15100ms (1083k/s)
> > Dec 29 17:40:32 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94240960k)
> > Dec 29 17:40:46 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 2230ms
> > Dec 29 17:40:46 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 12870ms (1271k/s)
> > Dec 29 17:40:46 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94224600k)
> > Dec 29 17:41:01 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5350ms
> > Dec 29 17:41:01 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 13228ms (1236k/s)
> > Dec 29 17:41:01 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94208240k)
> > Dec 29 17:41:14 n4 HVM18[18302]:   XENUTIL: WARNING: BalloonPodSweep:
> HYPERVISOR_memory_op(XENMEM_pod_sweep, ...) failed (fffffff4)
> > Dec 29 17:41:17 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 3026ms
> > Dec 29 17:41:17 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 15490ms (1056k/s)
> > Dec 29 17:41:17 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94191880k)
> > Dec 29 17:41:31 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 3151ms
> > Dec 29 17:41:31 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 13291ms (1230k/s)
> > Dec 29 17:41:31 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94175520k)
> > Dec 29 17:41:49 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5553ms
> > Dec 29 17:41:49 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 16832ms (971k/s)
> > Dec 29 17:41:49 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94159160k)
> > Dec 29 17:42:08 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 6754ms
> > Dec 29 17:42:08 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 18111ms (903k/s)
> > Dec 29 17:42:08 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94142800k)
> > Dec 29 17:42:28 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 3244ms
> > Dec 29 17:42:28 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 18392ms (889k/s)
> > Dec 29 17:42:28 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94126440k)
> > Dec 29 17:42:47 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5725ms
> > Dec 29 17:42:47 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 18454ms (886k/s)
> > Dec 29 17:42:47 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94110080k)
> > Dec 29 17:43:08 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 4243ms
> > Dec 29 17:43:08 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 19453ms (841k/s)
> > Dec 29 17:43:08 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94093720k)
> > Dec 29 17:43:26 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 5241ms
> > Dec 29 17:43:26 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 17206ms (950k/s)
> > Dec 29 17:43:26 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94077360k)
> > Dec 29 17:43:44 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 1996ms
> > Dec 29 17:43:44 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 17253ms (948k/s)
> > Dec 29 17:43:44 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94061000k)
> > Dec 29 17:44:02 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 4773ms
> > Dec 29 17:44:02 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 16286ms (1004k/s)
> > Dec 29 17:44:02 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94044640k)
> > Dec 29 17:44:24 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 2152ms
> > Dec 29 17:44:24 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 21231ms (770k/s)
> > Dec 29 17:44:24 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94028280k)
> > Dec 29 17:44:40 n4 HVM18[18302]:   XENUTIL: WARNING: BalloonPodSweep:
> HYPERVISOR_memory_op(XENMEM_pod_sweep, ...) failed (fffffff4)
> > Dec 29 17:44:42 n4 HVM18[18302]:   XENUTIL: WARNING:
> BalloonReleasePfnArray: ran for more than 2199ms
> > Dec 29 17:44:42 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 4090 page(s) in 17331ms (943k/s)
> > Dec 29 17:44:42 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread: pausing
> for 1s (target = 43792384k, current = 94011920k)
> > Dec 29 17:45:16 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> inflated balloon by 12554884 page(s) in 32604ms (91243k/s)
> > Dec 29 17:45:16 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> de-activating
> > Dec 29 17:45:16 n4 HVM18[18302]:   XENUTIL: XenevtchnMapResources
> setting callback irq to 11
> > Dec 29 17:45:16 n4 HVM18[18302]:   XEVTCHN: PV init. done
> > Dec 29 17:45:16 n4 HVM18[18302]:   XENUTIL: BalloonTargetChanged:
> 43792384k -> 48911360k
> > Dec 29 17:45:16 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> activating
> > Dec 29 17:45:16 n4 HVM18[18302]:   XEVTCHN: Detected new device vif/0.
> > Dec 29 17:45:16 n4 HVM18[18302]:   XEVTCHN: closing device/vif/0...
> > Dec 29 17:45:16 n4 HVM18[18302]:   XEVTCHN: device/vif/0 closed
> > Dec 29 17:45:16 n4 HVM18[18302]:   XEVTCHN: StartDeviceFdo: <====
> (00000000)
> > Dec 29 17:45:17 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> deflated balloon by 1279744 page(s) in 998ms (825660k/s)
> > Dec 29 17:45:17 n4 HVM18[18302]:   XENUTIL: BalloonWorkerThread:
> de-activating
> > Dec 29 17:45:18 n4 HVM18[18302]:    XENVBD: XENVBD in NORMAL mode.
> > Dec 29 17:45:18 n4 HVM18[18302]:    XENVBD: XenvbdAddDevice: FDO =
> 0xFFFFFA804434B060
> > Dec 29 17:45:18 n4 HVM18[18302]:   XENUTIL: WARNING: IO hole already
> initialized by XEVTCHN
> > Dec 29 17:45:18 n4 HVM18[18302]:   XENUTIL: WARNING: Bugcheck callback
> already installed
> > Dec 29 17:45:18 n4 HVM18[18302]:   XENUTIL: WARNING: Bugcheck reason
> callback already installed
> > Dec 29 17:45:18 n4 HVM18[18302]:    XENVBD: RescanThread: starting
> > Dec 29 17:45:18 n4 HVM18[18302]:   XENUTIL: XenvbdHwInitialize setting
> callback irq to 30
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: DeviceRelationsFdo: scanning
> targets...
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: XenbusFindVbds: found new
> disk (VBD 768)
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: XenbusFindVbds: ignoring
> cdrom (VBD 5696)
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: claiming
> frontend...
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: successfuly
> claimed device/vbd/768
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: synthesising
> inquiry data: default page
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: unit serial number
> = '62c5a501-d662-4d  '
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: device
> identifier[0]: CodeSet: 'Ascii' Type: 'VendorId' Assocation: 'Device'
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: device
> identifier[0]: Length = 45 Data = 'XENSRC
>  62c5a501-d662-4d38-a75c-a280e2929297 '
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: closing frontend...
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: backend is closed
> > Dec 29 17:45:19 n4 HVM18[18302]:    XENVBD: target 0: created
