Re: [Xen-ia64-devel] Xen Itanium features available in Xen HVM?

2008-05-13 Thread Kayvan Sylvan
Thanks very much for the replies.

We would like to lend some help with trying to implement the protection 
registers.

What is your advice on how to proceed?

---
Kayvan Sylvan, Platform Solutions Inc.

- Original Message -
From: [EMAIL PROTECTED] [EMAIL PROTECTED]
To: Kayvan Sylvan
Cc: xen-ia64-devel@lists.xensource.com xen-ia64-devel@lists.xensource.com; 
Paul Leisy
Sent: Mon May 12 07:22:50 2008
Subject: Re: [Xen-ia64-devel] Xen Itanium features available in Xen HVM?

Quoting Kayvan Sylvan [EMAIL PROTECTED]:

 Hi everyone,



 We have some questions about the HVM roadmap and features.



 According to the Wiki, Fujitsu is working on getting other operating systems
 running as HVM guests. Tristan appears to be working on getting openVMS
 running and then HP-UX.

Right, as far as I'm concerned.

 From what I gather from posts here, the major roadblock to HP-UX is
 protection key support.

Not yet.  The HP-UX EFI loader crashes very early.  This is currently
blocking.  I need either help from HP *or* time to investigate.

 Does anyone know the big picture of HVM features:

 - What Itanium architecture features are supported by HVM guests now?

Most of them :-)

 - What Itanium architecture features are not supported now?

Protection registers.  I think that's almost it.

 - What features are most likely needed by openVMS?

OpenVMS now works on Xen/ia64.  There are still a few issues, however (e.g. it
is UP-only).

 - What features are most likely needed by HP-UX?

Protection registers.

 Thank you very much, any answers would be much appreciated.

You're welcome!

Tristan.
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

RE: [Xen-ia64-devel] Xen/ia64 maintainership transition

2008-05-07 Thread Kayvan Sylvan
Thank you very much for everything you have done, Alex!

Good luck and best regards to Isaku Yamahata.

---Kayvan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alex Williamson
Sent: Tuesday, May 06, 2008 7:47 PM
To: xen-ia64-devel
Cc: Isaku Yamahata; xen-devel; Keir Fraser
Subject: [Xen-ia64-devel] Xen/ia64 maintainership transition

Hi all,

The Xen/ia64 project has come a long way in the past few years, and it's
with some satisfaction and excitement that I announce the transition of
the maintainership.  Isaku Yamahata, who is easily the biggest
contributor to the project and in many ways the technical expert, has
graciously agreed to take over this role.  I've been working with Isaku
on infrastructure and testing setup, and will continue to help as needed
until Isaku is comfortable.  I intend to stay involved with the project,
but after over two years as the maintainer, I'm ready for a change.
Congratulations and best of luck Isaku, thanks,

Alex

--
Alex Williamson HP Open Source & Linux Org.




[Xen-ia64-devel] Xen Itanium features available in Xen HVM?

2008-05-06 Thread Kayvan Sylvan
Hi everyone,



We have some questions about the HVM roadmap and features.



According to the Wiki, Fujitsu is working on getting other operating systems 
running as HVM guests. Tristan appears to be working on getting openVMS running 
and then HP-UX.



From what I gather from posts here, the major roadblock to HP-UX is protection 
key support.



Does anyone know the big picture of HVM features:

- What Itanium architecture features are supported by HVM guests now?

- What Itanium architecture features are not supported now?

- What features are most likely needed by openVMS?

- What features are most likely needed by HP-UX?



Thank you very much, any answers would be much appreciated.

Paul Leisy and Kayvan Sylvan, Platform Solutions Inc.

RE: [Xen-ia64-devel] IA64 xen-3.3 unstable hanging

2008-04-24 Thread Kayvan Sylvan
This is the stanza from elilo.conf:

image=vmlinuz-2.6.18.8-xen
vmm=xen-3.3.gz
label=xen33
initrd=initrd-2.6.18.8-xen.img
read-only
append="com1=38400,8n1,0x2f8,45 dom0_mem=2048M -- xencons=uart,io,0x2f8,38400n8 console=uart,io,0x2f8,38400n8 root=/dev/md0"

Did the parameters change?


From: Akio Takebe [EMAIL PROTECTED]
Sent: Thursday, April 24, 2008 6:10 PM
To: Kayvan Sylvan; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] IA64 xen-3.3 unstable hanging

Hi,

 (XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\xen-3.3.gz root=/dev/
 VolGroup00/LogVol00 com1=38400,8n1,0x2f8,45 dom0_mem=2048M
You should check elilo.conf: the dom0 boot parameters (root=, console options)
belong in the dom0 part of the append line, after the "--" separator.
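For illustration, a stanza in the intended shape (paths and values borrowed from elsewhere in this thread, so treat it as a sketch rather than a verified configuration):

```
image=vmlinuz-2.6.18.8-xen
vmm=xen-3.3.gz
label=xen33
initrd=initrd-2.6.18.8-xen.img
read-only
# Hypervisor options go before "--", dom0 kernel options after it:
append="com1=38400,8n1,0x2f8,45 dom0_mem=2048M -- console=uart,io,0x2f8,38400n8 root=/dev/VolGroup00/LogVol00"
```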

Best Regards,

Akio Takebe




RE: [Xen-ia64-devel] IA64 xen-3.3 unstable hanging

2008-04-24 Thread Kayvan Sylvan
Akio, thank you for your replies and help so far. I am most grateful.

In my response, I caused confusion by mixing up two different machines on which 
I am having this identical problem.

On one machine, I am using RAID1 (root=/dev/md0) and on the other, I am using 
LVM.

Let me try again:

Here is the stanza from the machine using RAID1:

image=vmlinuz-2.6.18.8-xen
vmm=xen-3.3.gz
label=xen33
initrd=initrd-2.6.18.8-xen.img
read-only
append="com1=38400,8n1,0x2f8,45 dom0_mem=2048M -- xencons=uart,io,0x2f8,38400n8 console=uart,io,0x2f8,38400n8 root=/dev/md0"

I added the log options as you indicated, so now it looks like this:

image=vmlinuz-2.6.18.8-xen
vmm=xen-3.3.gz
label=xen33
initrd=initrd-2.6.18.8-xen.img
read-only
append="com1=38400,8n1,0x2f8,45 dom0_mem=2048M loglvl=all guest_loglvl=all -- xencons=uart,io,0x2f8,38400n8 console=uart,io,0x2f8,38400n8 root=/dev/md0"

Upon booting the xen kernel, I see the following on the console and then the 
machine just hangs.

(XEN) Xen version 3.3-unstable ([EMAIL PROTECTED]) (gcc version 3.4.6 20060404 
(Red Hat 3.4.6-3)) Thu Apr 24 20:29:20 GMT 2008
(XEN) Latest ChangeSet: Tue Apr 15 11:15:20 2008 -0600 17465:1fbc9073a566
(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\xen-3.3.gz  
com1=38400,8n1,0x2f8,45 dom0_mem=2048M loglvl=all guest_loglvl=all
(XEN) xen image pstart: 0x400, xenheap pend: 0x800
(XEN) Xen patching physical address access by offset: 0xfc00
(XEN) find_memory: efi_memmap_walk returns max_page=11
(XEN) Before xen_heap_start: f41e0f10
(XEN) After xen_heap_start: f4208000
(XEN) Init boot pages: 0x1d8 - 0x400.
(XEN) Init boot pages: 0x800 - 0x3b00.
(XEN) Init boot pages: 0x4000 - 0x7b794000.
(XEN) Init boot pages: 0x7c8104f8 - 0x7cea8010.
(XEN) Init boot pages: 0x7cea8070 - 0x7ceabf55.
(XEN) Init boot pages: 0x7ceabfc1 - 0x7ceaf000.
(XEN) Init boot pages: 0x7cff3f00 - 0x7cffa010.
(XEN) Init boot pages: 0x7cffaaf0 - 0x7da6.
(XEN) Init boot pages: 0x7f30 - 0x7fa2.
(XEN) Init boot pages: 0x7fa8c000 - 0x7fb78000.
(XEN) Init boot pages: 0x7fbd8000 - 0x7fcec000.
(XEN) Init boot pages: 0x7fd6 - 0x7fd64000.
(XEN) Init boot pages: 0x8000 - 0xc000.
(XEN) Init boot pages: 0x1 - 0x44000.
(XEN) System RAM: 16259MB (16649744kB)
(XEN) size of virtual frame_table: 40752kB
(XEN) virtual machine to physical table: f678 size: 8192kB
(XEN) max_page: 0x11
(XEN) allocating frame table/mpt table at mfn 0.
(XEN) SAL 3.1: NEC AsAmA2 version 1.0
(XEN) SAL Platform features: BusLock IRQ_Redirection IPI_Redirection
(XEN) SAL: AP wakeup using external interrupt vector 0xbf
(XEN) avail:0x31700740, 
status:0x740,control:0x3170, vm?0x100
(XEN) WARNING: no opcode provided from hardware(0)!!!
(XEN) vm buffer size: 1048576
(XEN) vm_buffer: 0xf40004208000
(XEN) Xen heap: 60MB (62432kB)
(XEN) Reserving non-aligned node boundary @ mfn 65536
(XEN) Reserving non-aligned node boundary @ mfn 262144
(XEN) Domain heap initialised: DMA width 32 bits
(XEN) cpu package is Multi-Core capable: number of cores=2
(XEN) cpu package is Multi-Threading capable: number of siblings=2
(XEN) cpu_init: current=f412c000
(XEN) vhpt_init: vhpt paddr=0x43d02, end=0x43d02
(XEN) iosapic_system_init: Disabling PC-AT compatible 8259 interrupts
(XEN) ACPI: Local APIC address f200fee0
(XEN) ACPI: LSAPIC (acpi_id[0x00] lsapic_id[0x00] lsapic_eid[0x00] enabled)
(XEN) CPU 0 (0x) enabled (BSP)
(XEN) ACPI: LSAPIC (acpi_id[0x02] lsapic_id[0x04] lsapic_eid[0x00] enabled)
(XEN) CPU 1 (0x0400) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x04] lsapic_id[0x02] lsapic_eid[0x00] enabled)
(XEN) CPU 2 (0x0200) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x06] lsapic_id[0x06] lsapic_eid[0x00] enabled)
(XEN) CPU 3 (0x0600) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x08] lsapic_id[0x01] lsapic_eid[0x00] enabled)
(XEN) CPU 4 (0x0100) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x0a] lsapic_id[0x05] lsapic_eid[0x00] enabled)
(XEN) CPU 5 (0x0500) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x0c] lsapic_id[0x03] lsapic_eid[0x00] enabled)
(XEN) CPU 6 (0x0300) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x0e] lsapic_id[0x07] lsapic_eid[0x00] enabled)
(XEN) CPU 7 (0x0700) enabled
(XEN) ACPI: LSAPIC (acpi_id[0x10] lsapic_id[0x00] lsapic_eid[0x02] disabled)
(XEN) CPU 8 (0x0002) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x12] lsapic_id[0x04] lsapic_eid[0x02] disabled)
(XEN) CPU 9 (0x0402) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x14] lsapic_id[0x02] lsapic_eid[0x02] disabled)
(XEN) CPU 10 (0x0202) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x16] lsapic_id[0x06] lsapic_eid[0x02] disabled)
(XEN) CPU 11 (0x0602) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x18] lsapic_id[0x01] lsapic_eid[0x02] disabled)
(XEN) CPU 12 (0x0102) disabled
(XEN) ACPI: LSAPIC (acpi_id[0x1a] lsapic_id[0x05] lsapic_eid[0x02] disabled)
(XEN) CPU 13 (0x0502) disabled
(XEN) ACPI: 

[Xen-ia64-devel] HPUX on Itanium as a guest HVM?

2008-03-26 Thread Kayvan Sylvan
Has anyone tried installing HPUX as an HVM guest?

When I tried, I got the following:

In the VNC console (where I can interact with the guest firmware EFI shell):

Welcome to HP-UX Install Media

'install' is not recognized as an internal or external command, operable 
program, or batch file
Exit status code: Invalid Parameter

At the EFI Shell, I can do the following:

Shell> fs0:
fs0:\> dir
Directory of: fs0:\

  07/17/06  09:43p <DIR>         1,024  EFI
  07/17/06  09:43p             644,703  INSTALL.EFI
  07/17/06  09:43p                  16  AUTO
  07/17/06  09:43p                 174  STARTUP.NSH
          3 File(s)     644,893 bytes
          1 Dir(s)

And when I type INSTALL at the fs0:\> prompt, I see the following in the xm
console window:

InstallProtocolInterface: 5B1B31A1-9562-11D2-8E3F-00A0C969723B 7F0BD440
ConvertPages: Incompatible memory types
InstallProtocolInterface: 5B1B31A1-9562-11D2-8E3F-00A0C969723B 7F0BD440
ConvertPages: Incompatible memory types
CoreOpenImageFile: Device did not support a known load protocol
InstallProtocolInterface: 5B1B31A1-9562-11D2-8E3F-00A0C969723B 7F0BD440
ConvertPages: Incompatible memory types

Any ideas about what is going wrong?

---Kayvan

[Xen-ia64-devel] New error trying to create a domain (using latest xend-unstable

2008-03-18 Thread Kayvan Sylvan
/lib/python/xen/xend/XendDomainInfo.py, line 420, in start
XendTask.log_progress(31, 60, self._initDomain)
  File "/usr/lib/python/xen/xend/XendTask.py", line 209, in log_progress
    retval = func(*args, **kwds)
  File "/usr/lib/python/xen/xend/XendDomainInfo.py", line 2062, in _initDomain
    raise VmError(str(exn))
VmError: (22, 'Invalid argument')
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:2176) XendDomainInfo.destroy: 
domid=None
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:2193) 
XendDomainInfo.destroyDomain(None)
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1762) No device model
[2008-03-18 08:21:58 5348] DEBUG (XendDomainInfo:1764) Releasing devices


Best regards,

---Kayvan Sylvan, Platform Solutions.



RE: [Xen-ia64-devel] New error trying to create a domain (using latestxend-unstable

2008-03-18 Thread Kayvan Sylvan
Hmmm... Thank you.

Adding cpus= allowed me to create the domain, but I don't understand why. Can 
you explain it?

The xm info output follows:

host   : superd3p2
release: 2.6.18.8-xen
version: #3 SMP Tue Mar 18 05:46:19 GMT 2008
machine: ia64
nr_cpus: 8
nr_nodes   : 2
cores_per_socket   : 2
threads_per_core   : 1
cpu_mhz: 1598
hw_caps: 
::::::::
total_memory   : 16288
free_memory: 14081
node_to_cpu: node0:0-7
 node1:no cpus
node_to_memory : node0:0
 node1:14081
xen_major  : 3
xen_minor  : 3
xen_extra  : -unstable
xen_caps   : xen-3.0-ia64 xen-3.0-ia64be hvm-3.0-ia64 
hvm-3.0-ia64-sioemu
xen_scheduler  : credit
xen_pagesize   : 16384
platform_params: virt_start=0xf000
xen_changeset  : Fri Mar 14 15:07:45 2008 -0600 17209:8c921adf4833
cc_compiler: gcc version 3.4.6 20060404 (Red Hat 3.4.6-3)
cc_compile_by  : root
cc_compile_domain  : invent.psi.com
cc_compile_date: Tue Mar 18 01:17:19 GMT 2008
xend_config_format : 4



From: Masaki Kanno [EMAIL PROTECTED]
Sent: Tuesday, March 18, 2008 1:55 AM
To: Kayvan Sylvan; xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] New error trying to create a domain (using 
latestxend-unstable

Hi Kayvan,

Could you define cpus in the domain configuration file?
 e.g. cpus = "0-3"

And could you show a result of xm info?
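A sketch of the kind of config fragment being suggested (the range and vcpu count are illustrative, not taken from Kayvan's actual file):

```
# Domain config fragment: pin this domain's vcpus to physical CPUs 0-3.
cpus = "0-3"
vcpus = 4
```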

Best regards,
 Kan



[Xen-ia64-devel] Xen debugging on ia64? gdbserver-xen or other tools?

2008-03-13 Thread Kayvan Sylvan
Hi everyone,

I'm trying to debug an issue that looks to be related to the way the guest 
firmware sets up the various EFI tables.

Has anyone gotten gdbserver-xen to work under IA64?

When I try to compile it, I get the following compile error:

gcc -o gdbserver-xen inferiors.o regcache.o remote-utils.o server.o 
signals.o target.o utils.o mem-break.o reg-ia64.o linux-low.o linux-ia64-low.o  
\
  -L../../../../../libxc/ -lxenctrl
server.o(.text+0x12d1): In function `ctrl_c_handler':
../../../gdb-6.2.1/gdb/gdbserver/server.c:346: undefined reference to 
`control_c_pressed_flag'
server.o(.text+0x12e1):../../../gdb-6.2.1/gdb/gdbserver/server.c:346: undefined 
reference to `control_c_pressed_flag'
collect2: ld returned 1 exit status

It looks like maybe this code is x86-specific.

How does one debug Xen related issues? What tools do you all use?

Thanks for your replies.

---Kayvan

[Xen-ia64-devel] HVM Live Migration works perfectly!

2008-02-22 Thread Kayvan Sylvan
Thanks to everyone who worked on it.

I pulled the latest IA64 xen-unstable tree, compiled and installed a minimal 
CentOS-4.6 HVM guest, and while doing a compile of bash-3.2 on the guest, 
migrated from one Xen host to another.

Best regards,

---Kayvan Sylvan, Platform Solutions Inc.

[Xen-ia64-devel] priv_handle_op fails on bootup, machine hangs

2008-02-07 Thread Kayvan Sylvan
I'm loving this. Life is exciting on the bleeding... uhhh... leading edge. :)

I now have a new failure mode. On bootup of an IA64 Xen-3.3-unstable machine, I 
am seeing this:

(XEN)   already assigned pte_val 0x0010ff4406e1
(XEN)   mpaddr 0xff44 physaddr 0xff44 flags 0x2
(XEN) __assign_domain_page:956 WARNING can't assign page domain 
0xf4314080 id 0
(XEN)   already assigned pte_val 0x0010ff58c6e1
(XEN)   mpaddr 0xff58c000 physaddr 0xff58c000 flags 0x2
(XEN) __assign_domain_page:956 WARNING can't assign page domain 
0xf4314080 id 0
(XEN)   already assigned pte_val 0x08143b0646e1
(XEN)   mpaddr 0x2000 physaddr 0x2000 flags 0x2
(XEN) priv_emulate: priv_handle_op fails, isr=0x20 iip=a00100083eb0

And the machine hangs.

Any advice for how to fix this? Is there anything I can do to get more info?

Thank you.

---Kayvan

RE: [Xen-ia64-devel] [PATCH] fix live migration

2008-02-07 Thread Kayvan Sylvan
I was running into this too. Is this fix in the repository now?

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alex Williamson
Sent: Thursday, February 07, 2008 10:01 AM
To: Kouya Shimura
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] [PATCH] fix live migration


On Thu, 2008-02-07 at 12:01 +0900, Kouya Shimura wrote:
 Hi Alex,

 Sorry, I added a bug.
 Here is a fixed one.

 I'm now struggling with supporting HVM live migration.
 HVM doesn't support log-dirty mode yet.

   This works much better!  Applied.  Have you seen the case where xm
migrate hangs, but the domain has successfully migrated and is running
on the remote system?  I've seen this happen a couple times.  Seems like
we're missing the signal to finish and destroy the domain on the source
machine.  Thanks,

Alex

--
Alex Williamson HP Open Source & Linux Org.




RE: [Xen-ia64-devel] priv_handle_op fails on bootup, machine hangs

2008-02-07 Thread Kayvan Sylvan
Thank you very much for the great info on hg bisect.

Best regards,

---Kayvan


[Xen-ia64-devel] Latest updates: xend hanging on DomU shutdown

2008-02-07 Thread Kayvan Sylvan
Prior to the latest updates, when I would shut down an HVM Linux guest via xm 
console, it would shut down and close the console, leaving me at the shell 
prompt, and xm list would work, showing all the other domains still running.

Using the latest checkout of xen-unstable.hg (changeset 16985:e3e8bdb5d52d), 
this is now what happens:

Connected to the guest console (xm console), I type shutdown -h now:

[...]
Turning off quotas:
Unmounting pipe file systems:
Unmounting file systems:
Halting system...
md: stopping all md devices.
md: md0 switched to read-only mode.
Shutdown: hda
Power down.
acpi_power_off called

At this point, the console process never ends.

In /var/log/xen/xend.log, I see:

[2008-02-07 20:52:40 5027] INFO (XendDomain:1167) Domain CentOS (1) unpaused.
[2008-02-07 20:57:02 5027] INFO (XendDomainInfo:1285) Domain has shutdown: 
name=CentOS id=1 reason=poweroff.
[2008-02-07 20:57:02 5027] DEBUG (XendDomainInfo:1922) XendDomainInfo.destroy: 
domid=1
[2008-02-07 20:57:02 5027] DEBUG (XendDomainInfo:1939) 
XendDomainInfo.destroyDomain(1)
[2008-02-07 20:57:03 5027] DEBUG (XendDomainInfo:1539) Destroying device model

And it looks as if xend is waiting for something, because it never responds to 
xm commands:

# xm list
(no output)

I hope this report is useful.

Best regards,

---Kayvan

[Xen-ia64-devel] Not sure what I broke... xm create failing

2008-02-06 Thread Kayvan Sylvan
I did an update of the latest tree, recompiled and re-installed, and now:

# xm create -c Xen/config/CentOS-2.hvm
Using config file ./Xen/config/CentOS-2.hvm.
Error: (1, 'Internal error', 'Could not read guest firmware image 
/usr/lib/xen/boot/hvmloader (2 = No such file or directory)')

This is the same config file I have been using for a while now:

arch_libdir = 'lib'
kernel = "/root/efi-vfirmware.hg/binaries/xenia64-gfw.bin"
builder='hvm'
memory = 32768
shadow_memory = 148
name = "CentOS"
vcpus = 20
vif = [ 'type=ioemu, mac=00:7f:8f:31:7d:11, bridge=xenbr0' ]
disk = [ 'phy:/dev/VolGroup00/CentOS46,hda,w',
         'file:/root/Xen/isos/centos-4.6-ia64-dvd.iso,hdc:cdrom,r' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
sdl=0
vnc=1
vnclisten="0.0.0.0"
vncdisplay=89
vncpasswd='XXX'
nographic=0
serial='pty'
monitor=1

It's looking for /usr/lib/xen/boot/hvmloader even though the kernel is 
specified there as /root/efi-vfirmware.hg/binaries/xenia64-gfw.bin.

Did something change?

---Kayvan

RE: [Xen-ia64-devel] HVM Multi-Processor Performance followup

2008-02-04 Thread Kayvan Sylvan
Hi everyone,

I gathered new RE-AIM7 compute workload numbers for IA64 xen-3.3-unstable 
(latest pull).

Please take a look at http://www.editgrid.com/user/kayvan/CPU_Perf_2

The OS where the benchmarks were run was given 32GB of memory (both for the 
native runs and the guest runs) and during the guest runs, Dom0 vcpus was set 
to 4 and its memory was also greatly increased to eliminate swapping.

The Dom0 OS and the guest OS are both CentOS-4.6 with the latest updates.

Best regards,

---Kayvan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kayvan Sylvan
Sent: Thursday, January 31, 2008 7:15 PM
To: '[EMAIL PROTECTED]'; '[EMAIL PROTECTED]'
Cc: 'xen-ia64-devel@lists.xensource.com'
Subject: Re: [Xen-ia64-devel] HVM Multi-Processor Performance followup

I used the workfile.compute for those results.

I will redo the tests with some different parameters tomorrow.

---
Kayvan Sylvan, Platform Solutions Inc.


[Xen-ia64-devel] HVM Multi-Processor Performance followup

2008-01-31 Thread Kayvan Sylvan
Hi everyone,

A follow-up on the multiprocessor performance benchmark on HVM guests.

We ran the RE-AIM7 benchmarks on a 5-cell (40 CPU) machine and a single-cell 
8-cpu NEC machine.

Here are the jobs per minute maximums.

You can see the drop in performance starts to get really bad at about 9 CPUs 
and beyond.

Questions:


1.   What can I do to help improve this situation?

2.   Are there any other experiments I can run?

3.   What tools/profilers will help to gather more data here?

I am very interested in helping to solve this problem! Thanks for your ideas 
and suggestions.

Best regards,

---Kayvan




Xen performance comparison on 5-Cell NEC machine (each cell with 4 dual-core 
Itaniums):

CPUs   Native Jobs/Min   HVM Jobs/Min   Overhead
  1         2037             1791        12.08%
  2         4076             3615        11.31%
  3         6090             5221        14.27%
  4         8118             6839        15.76%
  5        10119             8404        16.95%
  6        12037             9949        17.35%
  7        14106            11095        21.35%
  8        15953            12360        22.52%
  9        18059            13201        26.90%
 10        20170            13742        31.87%
 11        21896            13694        37.46%
 12        24079            13331        44.64%
 13        25992            12374        52.39%
 14        28072            11684        58.38%
 15        29931            11032        63.14%
 16        31696            10451        67.03%

The guest OS was CentOS-4.6 with 2GB of memory, running under a Dom0 that was 
limited to 1 VCPU.




Xen performance comparison on 1-Cell NEC machine (4 dual-core Itanium 
Montecito):

CPUs   Native Jobs/Min   HVM Jobs/Min   Overhead
  1         2037             1779        12.67%
  2         4067             3619        11.02%
  3         6097             5344        12.35%
  4         8112             7004        13.66%
  5        10145             8663        14.61%
  6        12023            10213        15.05%
  7        14083            11249        20.12%
  8        16182            12969        19.86%

The guest OS was CentOS-4.6 with 2GB of memory, running under a Dom0 that was 
limited to 1 VCPU.



Powered by EditGrid (http://www.editgrid.com/) - Online Spreadsheets

Source: CPU Performance (http://www.editgrid.com/user/kayvan/CPU_Performance)
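For reference, the Overhead column in these tables appears to be computed as (native - HVM) / native; a small sketch to make the formula explicit (the function name is mine, not from the thread):

```python
def overhead_pct(native_jpm, hvm_jpm):
    """Percent throughput lost under HVM relative to native,
    where both arguments are RE-AIM7 jobs/minute figures."""
    return (native_jpm - hvm_jpm) / native_jpm * 100.0

# Spot-checks against the 5-cell table:
assert round(overhead_pct(2037, 1791), 2) == 12.08    # 1-CPU row
assert round(overhead_pct(31696, 10451), 2) == 67.03  # 16-CPU row
```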



Re: [Xen-ia64-devel] HVM Multi-Processor Performance followup

2008-01-31 Thread Kayvan Sylvan
I used the workfile.compute for those results.

I will redo the tests with some different parameters tomorrow.

---
Kayvan Sylvan, Platform Solutions Inc.

- Original Message -
From: Alex Williamson [EMAIL PROTECTED]
To: Xu, Anthony [EMAIL PROTECTED]
Cc: Kayvan Sylvan; xen-ia64-devel xen-ia64-devel@lists.xensource.com
Sent: Thu Jan 31 18:47:14 2008
Subject: RE: [Xen-ia64-devel] HVM Multi-Processor Performance followup


On Fri, 2008-02-01 at 09:11 +0800, Xu, Anthony wrote:
 Thanks for your efforts

 You can see the drop in performance starts to get really bad at about
 9 CPUs and beyond

 If you increase the guest vCPU number, the bottleneck may be the dom0 vCPU
 number (only 1 vCPU for dom0).

 You can try configuring two/four vCPUs for dom0; the performance may come
 back.

 A curious question:

 Alex said there is ~70% degradation on RE-AIM7, but your test result seems
 much better than his.

 What's the difference in your test environments?

   re-aim-7 provides a number of different workloads.  I was
specifically running the high_systime workload to try to get the worst
case performance out of an HVM domain.  Anything that involves more time
spent running code in user space will lean more towards the results I
showed for the kernel build test.  What workload was this?

   Kayvan, when you ran the native test, did you also limit the memory
using the mem= boot option?  I would expect that you need to increase
the guest memory as vCPUs are increased, or you may be getting into a
scenario where memory management in the guest or even swapping comes
into play.  Thanks,

Alex

--
Alex Williamson HP Open Source & Linux Org.


RE: [Xen-ia64-devel] RE: Has anyone seen this before? vcpus 1panics DomU kernel

2008-01-29 Thread Kayvan Sylvan
 Hi, Kayvan
 I suspect the Linux CD-ROM driver cannot use the DVD .iso image,
 so you should not specify it as a cdrom.

Hi Akio,

That is an interesting idea.

I installed the HVM guest using the DVD.iso image as a CD and it worked fine, 
so I don't think that is the problem.

To verify, I just removed the DVD.iso reference, set vcpus=4 and rebooted. The 
same thing as before:

Loading sd_mod.ko module
Loading libata.ko module
Loading ata_piix.ko module
scsi1 : ata_piix
ata1: PATA max MWDMA2 cmd 0x000101f0 ctl 0x000103f6 bmdma 
0x0001c000 irq 34
ata2: PATA max MWDMA2 cmd 0x00010170 ctl 0x00010376 bmdma 
0x0001c008 irq 33
ata1.00: ATA-7: QEMU HARDDISK, 0.9.0, max UDMA/100
ata1.00: 20971520 sectors, multi 16: LBA48
ata1.00: configured for MWDMA2
scsi 0:0:0:0: Direct-Access ATA  QEMU HARDDISK0.9. PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 20971520 512-byte hardware sectors (10737 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support 
DPO or FUA
sd 0:0:0:0: [sda] 20971520 512-byte hardware sectors (10737 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support 
DPO or FUA
 sda: sda1 sda2 sda3
sd 0:0:0:0: [sda] Attached SCSI disk
Kernel panic - not syncing: Attempted to kill init!
Waiting for driv

When I set vcpus=1, the guest OS boots fine. :-(

Best regards,

---Kayvan



RE: [Xen-ia64-devel] Time for hybrid virtualization?

2008-01-24 Thread Kayvan Sylvan
We (myself and my colleagues at Platform Solutions) are working on exactly this.

We will send some results to the list soon, but I think we can confirm Alex's 
experience with HVM configurations. Our preliminary results are about 13-15% 
overhead.

I am racking a Montvale system in about 15 minutes to do some more benchmarks 
today.

Best regards,

---Kayvan Sylvan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [EMAIL PROTECTED]
Sent: Thursday, January 24, 2008 4:58 AM
To: Alex Williamson
Cc: xen-ia64-devel
Subject: RE: [Xen-ia64-devel] Time for hybrid virtualization?

Quoting Alex Williamson [EMAIL PROTECTED]:
BTW, has anyone else done more benchmarks to compare HVM and PV?

Indeed, it would be useful to have benchmarks in various configurations
(dom0/domU, number of vcpus, Montecito/Montvale).

I was puzzled by Alex's first results.

Tristan.



RE: [Xen-ia64-devel] Time for hybrid virtualization?

2008-01-24 Thread Kayvan Sylvan
For the Montecito chip, running the RE-AIM7 compute workload, we have the 
following preliminary summary:

Native vs. Xen performance
--------------------------
At 2 CPUs, the overhead was 13%.
At 4 CPUs, the overhead was 17%.
At 8 CPUs, the overhead was 28%.
At 16 CPUs, the overhead was 173%.

Not quite sure why the performance dropped off so radically in the 16-CPU case.

I'm trying to get equivalent numbers for x86_64 and the Montvale chip.

---Kayvan

-Original Message-
From: Alex Williamson [mailto:[EMAIL PROTECTED]
Sent: Thursday, January 24, 2008 12:44 PM
To: Kayvan Sylvan
Cc: [EMAIL PROTECTED]; xen-ia64-devel
Subject: RE: [Xen-ia64-devel] Time for hybrid virtualization?


On Thu, 2008-01-24 at 08:20 -0800, Kayvan Sylvan wrote:
 We (myself and my colleagues at Platform Solutions) are working on
 this exactly.

 We will send some results to the list soon, but I think we can confirm
 Alex's experience with HVM configurations. Our preliminary results are
 about 13-15% overhead.

 I am racking a Montvale system in about 15 minutes to do some more
 benchmarks today.

Hi Kayvan,

   That's good news.  If you get Montecito vs Montvale results, that
would be interesting too.  Does anyone have suggestions for other
benchmarks that are easy to set up and run?  Kernel builds seem to be
fairly virtualization friendly.  Thanks,

Alex

--
Alex Williamson HP Open Source & Linux Org.


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] Was there a change in xen-3.2-rc5? bootmessages not showing up to console

2008-01-14 Thread Kayvan Sylvan
Hmmm... No, that does not appear to work either.

I am just not getting any output on the console from xen or dom0 on the
one machine.

What could be causing this?

I'm going to carefully re-examine the boot messages and the linux build
(.config) file...

Anyone have any ideas?

---Kayvan

-Original Message-
From: Alex Williamson [mailto:[EMAIL PROTECTED] 
Sent: Friday, January 11, 2008 6:31 PM
To: Kayvan Sylvan
Cc: xen-ia64-devel@lists.xensource.com
Subject: Re: [Xen-ia64-devel] Was there a change in xen-3.2-rc5?
bootmessages not showing up to console


On Fri, 2008-01-11 at 16:06 -0800, Kayvan Sylvan wrote:
 Did something change between xen-3.2-unstable rc4 and rc5 with the
 console output handling?
 
   I still get xen & dom0 console output on all my systems with
xen-unstable.hg cset 16707.  Can you copy the xen and vmlinuz binaries
from one to the other and see if they still work/fail the same way?

Alex

-- 
Alex Williamson HP Open Source & Linux Org.




RE: [Xen-ia64-devel]Open GFW Building HowTo Document

2007-12-17 Thread Kayvan Sylvan
Looks great! Thank you very much for this.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mu, Qin
Sent: Sunday, December 16, 2007 9:30 PM
To: xen-ia64-devel
Subject: [Xen-ia64-devel]Open GFW Building HowTo Document

Hi,

A HowTo document has been created for the open guest firmware project, to help
users obtain and build it from source.
Any suggestions or comments will be very much appreciated!
 
Qin Mu







RE: [Xen-ia64-devel] [Xen-ia64][GFW PATCH]VGA high color fix

2007-12-14 Thread Kayvan Sylvan
Great! That worked. There was a minor typo that I fixed.

How do I get better than 800x600 and 16bit display?

-Original Message-
From: Alex Williamson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 13, 2007 1:56 PM
To: Kayvan Sylvan
Cc: Zhang, Xing Z; xen-ia64-devel@lists.xensource.com
Subject: RE: [Xen-ia64-devel] [Xen-ia64][GFW PATCH]VGA high color fix


On Thu, 2007-12-13 at 13:24 -0800, Kayvan Sylvan wrote:
 What's the best way to use HVM Windows guests now?
 
 I have a HVM Windows 2008 Server for Itanium running, but trying to
 figure out how to get networking working. The Windows guest does not
 detect any NIC cards.

  Convenient that you should ask; I just added a section in the Xen wiki for
this yesterday ;^)

http://wiki.xensource.com/xenwiki/XenIA64/WindowsGuestNetworking

Please update/correct if there's anything missing.  The Windows 2008
instructions are largely copied from email Kouya-san sent to this list a
while ago.  Thanks,

Alex

-- 
Alex Williamson, HP Open Source & Linux Org.




RE: [Xen-ia64-devel] [Xen-ia64][GFW PATCH] VGA high color fix

2007-12-14 Thread Kayvan Sylvan
That is SOOO much better!!! That binary should be in the binaries/
directory of the efi-vfirmware mercurial tree.

Thank you!

-Original Message-
From: Alex Williamson [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 14, 2007 1:17 PM
To: Kayvan Sylvan
Cc: Zhang, Xing Z; xen-ia64-devel@lists.xensource.com
Subject: RE: [Xen-ia64-devel] [Xen-ia64][GFW PATCH]VGA high color fix


On Fri, 2007-12-14 at 13:09 -0800, Kayvan Sylvan wrote:
 Great! That worked. There was a minor typo that I fixed.

   Thanks!

 How do I get better than 800x600 and 16bit display?

   Are you already using a build of the latest open GFW?  If not, try
using this pre-compiled binary:

http://free.linux.hp.com/~awilliam/ia64_open_gfw_cset38.bin

Point your kernel= line in the domain config file to a local copy of the
file.

Alex

-- 
Alex Williamson, HP Open Source & Linux Org.




[Xen-ia64-devel] Xen-3.2-unstable crash on NEC box

2007-12-03 Thread Kayvan Sylvan
Hi everyone.

I'm trying to bring up the xen-ia64 unstable on a new machine.

I'm running Redhat 5.1 and I grabbed and compiled the latest sources
from the following:

http://xenbits.xensource.com/ext/ia64/xen-unstable.hg
http://xenbits.xensource.com/ext/ia64/linux-2.6.18-xen.hg

After installing the various bits in the right places, my elilo.conf
entry looks like this:

image=vmlinuz-2.6.18.8-xen
vmm=xen-3.2-unstable.gz
label=xen32
initrd=initrd-2.6.18.8.img
read-only
root=/dev/VolGroup00/LogVol00
append="com1=115200,8n1 -- nomca xencons=ttyS0 console=ttyS0 rhgb quiet"

Bringing up the machine looks okay (there are iptables errors, etc., but I
figure I want to get the machine up first in some way and fix each problem
afterwards). It looks fine all the way until a fraction of a second after
the login: prompt appears.

If anyone has any ideas of what I should try, please let me know!

Thanks!

Best regards,

---Kayvan Sylvan, Platform Solutions Inc.

(the boot log follows)


-

Uncompressing Linux... done
Loading file initrd-2.6.18.8.img...done
Loading file vmlinuz-2.6.18.8-xen...done
Uncompressing... done
[ASCII-art banner: Xen 3.2-unstable]

(XEN) Xen version 3.2-unstable (root@) (gcc version 4.1.2 20070626 (Red
Hat 4.1.2-14)) Mon Dec  3 14:18:52 PST 2007
(XEN) Latest ChangeSet: Fri Nov 30 08:54:33 2007 -0700
16501:32ec5dbe2978
(XEN) Xen command line: BOOT_IMAGE=scsi0:EFI\redhat\xen-3.2-unstable.gz
com1=115200,8n1
(XEN) xen image pstart: 0x400, xenheap pend: 0x800
(XEN) Xen patching physical address access by offset: 0xfc00
(XEN) find_memory: efi_memmap_walk returns max_page=9
(XEN) Before xen_heap_start: f41e0a20
(XEN) After xen_heap_start: f41f8000
(XEN) Init boot pages: 0x1d8 - 0x400.
(XEN) Init boot pages: 0x800 - 0x3c00.
(XEN) Init boot pages: 0x4000 - 0x7ba36000.
(XEN) Init boot pages: 0x7ca2e8d8 - 0x7d08a010.
(XEN) Init boot pages: 0x7d08a070 - 0x7d08df7e.
(XEN) Init boot pages: 0x7d08dfc0 - 0x7d09.
(XEN) Init boot pages: 0x7d4cf170 - 0x7d4d6010.
(XEN) Init boot pages: 0x7d4d6a60 - 0x7df38000.
(XEN) Init boot pages: 0x7f30 - 0x7f9ec000.
(XEN) Init boot pages: 0x7fa58000 - 0x7fb84000.
(XEN) Init boot pages: 0x7fc0 - 0x7fcf8000.
(XEN) Init boot pages: 0x7fd88000 - 0x7fd8c000.
(XEN) Init boot pages: 0x8000 - 0xc000.
(XEN) Init boot pages: 0x1 - 0x24000.
(XEN) System RAM: 8088MB (8282416kB)
(XEN) size of virtual frame_table: 20288kB
(XEN) virtual machine to physical table: f6b8 size: 4096kB
(XEN) max_page: 0x9
(XEN) allocating frame table/mpt table at mfn 0.
(XEN) Xen heap: 62MB (63520kB)
(XEN) Reserving non-aligned node boundary @ mfn 65536
(XEN) Reserving non-aligned node boundary @ mfn 262144
(XEN) Domain heap initialised: DMA width 32 bits
(XEN) avail:0x31700740,
status:0x740,control:0x3170, vm?0x100
(XEN) WARNING: no opcode provided from hardware(0)!!!
(XEN) vm buffer size: 1048576, order: 6
(XEN) vm_buffer: 0xf40007e0
(XEN) register_intr: changing vector 41 from IO-SAPIC-edge to
IO-SAPIC-level
(XEN) register_intr: changing vector 39 from IO-SAPIC-edge to
IO-SAPIC-level
(XEN) register_intr: changing vector 38 from IO-SAPIC-edge to
IO-SAPIC-level
(XEN) register_intr: changing vector 35 from IO-SAPIC-edge to
IO-SAPIC-level
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Time Zone is not specified on EFIRTC
(XEN) Time init:
(XEN)  System Time: 467496ns
(XEN)  scale:  27FEDFAC5
(XEN) num_online_cpus=1, max_cpus=64
(XEN) Fixed BSP b0 value from CPU 1
(XEN) Brought up 8 CPUs
(XEN) the number of physical stacked general
registers(RSE.N_STACKED_PHYS) = 96
(XEN) xenoprof: using perfmon.
(XEN) perfmon: version 2.0 IRQ 238
(XEN) perfmon: Montecito PMU detected, 27 PMCs, 35 PMDs, 12 counters (47
bits)
(XEN) Maximum number of domains: 63; 18 RID bits per domain
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Maximum permitted dom0 size: 7973MB
(XEN)  Dom0 kernel: 64-bit, lsb, paddr 0x400 - 0x5423970
(XEN) METAPHYSICAL MEMORY ARRANGEMENT:
(XEN)  Kernel image:  400-5423970
(XEN)  Entry address: 400ff20
(XEN)  Init. ramdisk: 5428000 len 43f170
(XEN)  Start info.:   5424000-5428000
(XEN) Dom0 max_vcpus=4
(XEN) Dom0: 0xf7c74080
(XEN) enable lsapic entry: 0xf0007df3c484
(XEN) enable lsapic entry: 0xf0007df3c490
(XEN) enable lsapic entry: 0xf0007df3c49c
(XEN) enable lsapic entry: 0xf0007df3c4a8
(XEN) DISABLE lsapic entry: 0xf0007df3c4b4

RE: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails: Hotplug scripts not working.

2007-11-14 Thread Kayvan Sylvan
 Is the guest firmware the correct one, i.e. installed from the
 Supplementary CD?

Yes.

Here's the list of xen related packages:

# rpm -qa | grep xen

kernel-xen-2.6.18-53.el5
kernel-xen-devel-2.6.18-53.el5
xen-3.0.3-41.el5
xen-libs-3.0.3-41.el5
xen-devel-3.0.3-41.el5
xen-ia64-guest-firmware-1.0.0-8





RE: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails: Hotplug scripts not working.

2007-11-14 Thread Kayvan Sylvan
Thanks for the suggestions. It still does not work.

I also set SELinux to permissive mode to eliminate that potential source
of problems.

Using virt-manager to create the fully virtualized host results in the
following error (basically the same):

Unable to complete install: 'libvirt.libvirtError virDomainCreateLinux()
failed POST operation failed: (xend.err 'Device 0 (vif) could not be
connected. Hotplug scripts not working.')
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/create.py", line 681, in
do_install
    dom = guest.start_install(False, meter = meter)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 649,
in start_install
    return self._do_install(consolecb, meter)
  File "/usr/lib/python2.4/site-packages/virtinst/Guest.py", line 666,
in _do_install
    self.domain = self.conn.createLinux(install_xml, 0)
  File "/usr/lib/python2.4/site-packages/libvirt.py", line 503, in
createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed',
conn=self)
libvirtError: virDomainCreateLinux() failed POST operation failed:
(xend.err 'Device 0 (vif) could not be connected. Hotplug scripts not
working.')'

At this point, I have to restart xend (it seems to die when the xm
create fails).

---Kayvan


-Original Message-
From: Jarod Wilson [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 13, 2007 6:46 PM
To: Kayvan Sylvan
Cc: xen-ia64-devel
Subject: Re: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails:
Hotplug scripts not working.

Zhang, Xing Z wrote:
 Have you checked that your disk parameter is right?
 The error "Device 768 (vbd) could not be connected. Hotplug scripts not
 working." often relates to an incorrect disk image path, or to the image
 being mounted somewhere and not unmounted.
 In the newest Xen/IA64 changeset the error info is more readable; you
 can update to a newer one.
 If you want to debug it, see /var/log/xen/xend.log and search for the
 keyword "Error". You will get a series of Python error messages.
 The Python code of xend is at /usr/lib/python/xen. Good luck.

Two other suggestions...

1) the vbd not connecting *could* be selinux preventing usage of the
disk image, so you may be able to start the guest up correctly after
running 'setenforce 0' (or booting with selinux in permissive or
disabled mode).

2) try creating the guest using virt-manager. Always works for me.
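For suggestion 1, a quick check of the SELinux mode before retrying might look like this (a sketch; assumes a root shell on the dom0, and treats a missing `getenforce` tool as SELinux being disabled):

```shell
# Report the current SELinux mode and hint at the next step.
mode=$(getenforce 2>/dev/null || echo Disabled)
case "$mode" in
  Enforcing) echo "SELinux is enforcing - try 'setenforce 0' and re-run xm create" ;;
  *)         echo "SELinux is $mode - the vbd problem is probably elsewhere" ;;
esac
```

`setenforce 0` only lasts until reboot, so it is a safe way to rule SELinux in or out without permanently disabling it.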


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On
 Behalf Of Kayvan Sylvan
 Sent: 2007-11-14 8:43
 To: Tristan Gingold
 Cc: xen-ia64-devel
 Subject: RE: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create
 fails: Hotplugscripts not working.

 I tried commenting the vif= line out and I get:

 Error: Device 768 (vbd) could not be connected. Hotplug scripts
 not
 working.

 I'm also not getting a console or anything else that I can
 interact
 with.

 I'm not sure how to proceed to try to debug this.



--
Jarod Wilson
[EMAIL PROTECTED]




RE: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails: Hotplug scripts not working.

2007-11-14 Thread Kayvan Sylvan
 That's bad. You've got too old a guest firmware package for the RHEL5.1
 hypervisor. You need xen-ia64-guest-firmware version 1.0.11-1, which
 should be on the 5.1 supplementary CD.

Hi Jarod,
 
I used the Redhat 5.1 DVD to install this server, then pulled the RPM off the 
Redhat website.
 
I see the following ISO images:
 
1) Compatibility Layer Disc
2) Binary Disc 1 (Server Core)
3) Binary Disc 2 (Server Core)
4) Binary Disc 3 (Server Core)
5) Binary Disc 4 (Server Core)
6) Binary Disc 5 (Server Core/Cluster/Virtualization)
7) Binary Disc 6 (Cluster/Cluster Storage)
8) Binary DVD (Server Core/Cluster/Cluster Storage/Virtualization)
 
Which one contains the right xen-ia64-guest-firmware?
 
Thanks!
 
---Kayvan
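A quick way to verify whether the installed firmware meets the 1.0.11-1 requirement Jarod mentions is to compare the versions with a version-aware sort. This is a sketch: the `installed` value is hard-coded from the `rpm -qa` output earlier in the thread (in practice it would come from `rpm -q --qf '%{VERSION}' xen-ia64-guest-firmware`), and `sort -V` is GNU coreutils.

```shell
required=1.0.11
installed=1.0.0   # the version shown by 'rpm -qa | grep xen' above
# the older of the two versions sorts first under version sort
oldest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$required" ]; then
  echo "firmware $installed is older than required $required - update it"
else
  echo "firmware $installed is new enough"
fi
```

With the 1.0.0-8 package from the thread, this reports that the firmware is too old, which matches Jarod's diagnosis.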

[Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails: Hotplug scripts not working.

2007-11-13 Thread Kayvan Sylvan
Hi everyone,

I'm trying to bring up an HVM guest on a Red Hat Enterprise Linux 5.1 IA64
host machine.

The configuration file looks like this:

import os, re
arch_libdir = 'lib'
kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'
memory = 512
shadow_memory = 32
name = "PSI52"
vif = [ 'type=ioemu, mac=00:16:3e:00:dd:11, bridge=xenbr0' ]
disk = [ 'file:/var/lib/xen/images/PSI_52.img,hda,w', 
'file:/root/ML5.2-IA64.iso,hdc:cdrom,r' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
sdl=0
vnc=1
vncpasswd='psipsi'
stdvga=0
serial='pty'

When I try to xm create, here is what happens:

Using config file "./PSI52.hvm".
Error: Device 0 (vif) could not be connected. Hotplug scripts not working.

In the /var/log/xen log files, I see:

[2007-11-13 06:55:22 xend.XendDomainInfo 11529] DEBUG (XendDomainInfo:988) 
XendDomainInfo.handleShutdownWatch
[2007-11-13 06:55:22 xend 11529] DEBUG (DevController:143) Waiting for devices 
vif.
[2007-11-13 06:55:22 xend 11529] DEBUG (DevController:149) Waiting for 0.
[2007-11-13 06:55:22 xend 11529] DEBUG (DevController:476) 
hotplugStatusCallback /local/domain/0/backend/vif/8/0/hotplug-status.

== xend-debug.log ==
ERROR Internal error: Invalid nvram signature. Nvram save failed!
 (11 = Resource temporarily unavailable)

== xend.log ==
[2007-11-13 06:57:02 xend.XendDomainInfo 11529] DEBUG (XendDomainInfo:1557) 
XendDomainInfo.destroy: domid=8
[2007-11-13 06:57:02 xend.XendDomainInfo 11529] DEBUG (XendDomainInfo:1566) 
XendDomainInfo.destroyDomain(8)

Does anyone have any ideas about how I can debug this?

Thanks!

---
Kayvan Sylvan
Platform Solutions, Inc.
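One place to start is the xenstore node the DevController was waiting on in the log above. This is a sketch using the domid 8 from that log; the `xenstore-ls`/`xenstore-read` tools ship with the Xen userland, and the inspection commands are shown as comments rather than executed.

```shell
# Build the backend path xend's hotplugStatusCallback was watching.
domid=8
vifpath=/local/domain/0/backend/vif/${domid}/0
echo "$vifpath/hotplug-status"
# On the dom0, inspect it with (not executed here):
#   xenstore-ls   $vifpath
#   xenstore-read $vifpath/hotplug-status   # absent => scripts never ran
#   tail -n 50 /var/log/xen/xen-hotplug.log
```

If `hotplug-status` never appears, the vif hotplug scripts are failing before they can report back, and the hotplug log usually says why.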

RE: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails: Hotplug scripts not working.

2007-11-13 Thread Kayvan Sylvan
I tried commenting the vif= line out and I get:

Error: Device 768 (vbd) could not be connected. Hotplug scripts not
working.

I'm also not getting a console or anything else that I can interact
with.

I'm not sure how to proceed to try to debug this.

Thanks for any suggestions!

---Kayvan

-Original Message-
From: Tristan Gingold [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 13, 2007 11:34 AM
To: Kayvan Sylvan
Cc: xen-ia64-devel
Subject: Re: [Xen-ia64-devel] IA64 using Redhat 5.1: xm create fails:
Hotplug scripts not working.

On Tue, Nov 13, 2007 at 07:06:13AM -0800, Kayvan Sylvan wrote:
 Hi everyone,
 
 I'm trying to bring up an hvm guest on a Redhat Enterprise Linux 5.1
on an IA64 host machine.
 
[...]
 vif = [ 'type=ioemu, mac=00:16:3e:00:dd:11, bridge=xenbr0' ]
[...]
 Error: Device 0 (vif) could not be connected. Hotplug scripts not
working.

Try disabling the vif, since it is the vif that fails to connect.

Did you set up the ethernet bridge?

Tristan.
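Tristan's bridge question can be checked with something like this in dom0 (a sketch: `xenbr0` comes from the guest's vif= line, and `brctl` is from bridge-utils; a missing `brctl` is treated the same as a missing bridge):

```shell
# Does the bridge named in the guest's vif= line actually exist?
bridge=xenbr0
if brctl show 2>/dev/null | awk 'NR>1 {print $1}' | grep -qx "$bridge"; then
  echo "$bridge exists - the hotplug failure is elsewhere"
else
  echo "$bridge missing - check network-bridge in /etc/xen/xend-config.sxp"
fi
```

On RHEL 5.1 the bridge is normally created at xend startup by the network-bridge script, so a missing `xenbr0` usually points at the xend network configuration rather than the guest config.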
