[Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:

2007-01-18 Thread Isaku Yamahata
On Thu, Jan 18, 2007 at 07:13:17AM +, Keir Fraser wrote:
 On 18/1/07 6:52 am, Isaku Yamahata [EMAIL PROTECTED] wrote:
 
  Subject: [PATCH 1/5] dump-core take 2: XENMEM_set_memory_map hypercall
  Subject: [PATCH 2/5] dump-core take 2: libxc: xc_domain memmap functions
 
 Should be able to work without these. We need to be able to support
 ballooning anyway, so it's not as if every E820_RAM region will necessarily
 be entirely populated with memory. What you need is a max_pfn value and then
 iterate 0...max_pfn-1 and try to map each page. If the mapping fails then
 there is no underlying memory. The tools could give a suitable max_pfn or we
 could add a hypercall to get it from Xen.

max_pfn isn't sufficient.
Memory may be sparse on ia64, so iterating over [0, max_pfn - 1]
isn't practical; it would take too long.
A memory map is also necessary to avoid dumping the I/O regions of a driver domain.
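
To illustrate the cost, here is a minimal sketch of the probe loop Keir
describes (the libxc call and its signature are recalled from memory, so
treat them as assumptions); it shows why a huge, sparse max_pfn makes
this slow:

    #include <stdint.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    #ifndef PAGE_SIZE
    #define PAGE_SIZE 16384   /* assumption: 16K pages, common on ia64 */
    #endif

    /* Walk [0, max_pfn) and treat a failed mapping as a RAM hole.  On a
     * sparse ia64 layout max_pfn can be enormous, so most iterations
     * probe holes -- hence the objection that this takes too long. */
    static unsigned long count_present_pages(int xc_handle, uint32_t domid,
                                             unsigned long max_pfn)
    {
        unsigned long pfn, present = 0;

        for (pfn = 0; pfn < max_pfn; pfn++) {
            void *p = xc_map_foreign_range(xc_handle, domid, PAGE_SIZE,
                                           PROT_READ, pfn);
            if (p != NULL) {           /* mapping succeeded: page exists */
                present++;
                munmap(p, PAGE_SIZE);  /* one map/unmap pair per candidate */
            }
        }
        return present;
    }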


  Subject: [PATCH 3/5] dump-core take 2: libxc: add xc_domain_translate_gpfn()
 Why? x86 moved to always mapping HVM memory by GPFN. Can ia64 do the same?

IA64 uses GPFN for both domU and HVM.
It is used only to check whether each GPFN has underlying
memory, not to get the MFN.
It could be replaced by trying the mapping and checking the result,
but I want to know it _before_ dumping pages, in order to create the program headers.
Is there any cheaper way than trying to map each PFN?


  Subject: [PATCH 5/5] dump-core take 2: elf formatify and added PFN-GMFN 
  table
 Shouldn't dump zero pages. Hence we need PFN-GMFN info even for HVM guests
 -- absence of PFN-GMFN pair, or GMFN==INVALID_MFN, could represent a RAM
 hole more cheaply than 4kB of zeroes. Otherwise PFN=GMFN.

I'm not sure I understand.
The posted patch doesn't dump pages that have no underlying memory.
By checking a program header's physical address and size
(Elf_Phdr.{p_paddr, p_filesz}), we can tell whether a given GPFN is
present or not.
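
To spell that out, here is a sketch of the presence check a dump reader
could make (Elf64_Phdr is the standard ELF type; the parameters are
hypothetical):

    #include <elf.h>
    #include <stdint.h>

    /* A GPFN is present in the dump iff its address range falls inside
     * the file-backed part (p_paddr .. p_paddr + p_filesz) of some
     * PT_LOAD program header.  phdr/nphdr/page_size come from the dump
     * reader. */
    static int gpfn_present(const Elf64_Phdr *phdr, int nphdr,
                            uint64_t gpfn, uint64_t page_size)
    {
        uint64_t addr = gpfn * page_size;
        int i;

        for (i = 0; i < nphdr; i++) {
            if (phdr[i].p_type != PT_LOAD)
                continue;
            if (addr >= phdr[i].p_paddr &&
                addr + page_size <= phdr[i].p_paddr + phdr[i].p_filesz)
                return 1;
        }
        return 0;
    }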

-- 
yamahata

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel][Patch] Clean patch (clean VALIDATE_VT)

2007-01-18 Thread Zhang, Xing Z
It is never defined, so clean it up.

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation

 



clean_VALIDATE_VT.patch
Description: clean_VALIDATE_VT.patch
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

RE: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:

2007-01-18 Thread Tian, Kevin
From: Isaku Yamahata
Sent: 18 January 2007 16:33

 Should be able to work without these. We need to be able to support
 ballooning anyway, so it's not as if every E820_RAM region will
 necessarily be entirely populated with memory. What you need is a
 max_pfn value and then iterate 0...max_pfn-1 and try to map each page.
 If the mapping fails then there is no underlying memory. The tools
 could give a suitable max_pfn or we could add a hypercall to get it
 from Xen.

max_pfn isn't sufficient.
Memory may be sparse on ia64, so iterating over [0, max_pfn - 1]
isn't practical; it would take too long.
A memory map is also necessary to avoid dumping the I/O regions of a
driver domain.


Yeah, the memory map may be sparse on ia64, but only at the physical level.
You can always present a compact pseudo-physical layout to a
domain, regardless of whether the real physical memory is sparse. :-) BTW, is it
possible to save the memmap into xenstore, so that multiple user components can
communicate such info directly without Xen's intervention?

Thanks,
Kevin

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] Xen/IA64 Healthiness Report -Cset#13465

2007-01-18 Thread You, Yongkang
Xen/IA64 Healthiness Report

All 16 cases pass testing.

Testing Environment:

Platform: Tiger4
Processor: Itanium 2 Processor
Logical Processors number: 8 (2 processors with Dual Core)
Service OS: RHEL4u3 IA64 SMP with 2 vcpus & 1G memory
VTI Guest OS: RHEL4u2 & RHEL4u3
XenU Guest OS: RHEL4u2
Xen IA64 Unstable tree: 13465:58637a0a7c7e
Xen Scheduler: credit
VTI Guest Firmware Flash.fd.2006.12.01 MD5:
09a224270416036a8b4e6f8496e97854

Summary Test Report:
-
  Total cases: 16
  Passed:      16
  Failed:       0

Case Name              Status  Case Description
Four_SMPVTI_Coexist    pass    4 VTI (mem=256, vcpus=2)
Two_UP_VTI_Co          pass    2 UP_VTI (mem=256)
One_UP_VTI             pass    1 UP_VTI (mem=256)
One_UP_XenU            pass    1 UP_xenU (mem=256)
SMPVTI_LTP             pass    VTI (vcpus=4, mem=512) run LTP
SMPVTI_and_SMPXenU     pass    1 VTI + 1 xenU (mem=256, vcpus=2)
Two_SMPXenU_Coexist    pass    2 xenU (mem=256, vcpus=2)
One_SMPVTI_4096M       pass    1 VTI (vcpus=2, mem=4096M)
SMPVTI_Network         pass    1 VTI (mem=256, vcpus=2) and 'ping'
SMPXenU_Network        pass    1 XenU (vcpus=2) and 'ping'
One_SMP_XenU           pass    1 SMP xenU (vcpus=2)
One_SMP_VTI            pass    1 SMP VTI (vcpus=2)
SMPVTI_Kernel_Build    pass    VTI (vcpus=4) and do kernel build
Four_SMPVTI_Coexist    pass    4 VTI domains (mem=256, vcpus=2)
SMPVTI_Windows         pass    SMP VTI Windows (vcpus=2)
SMPWin_SMPVTI_SMPxenU  pass    SMP VTI Linux/Windows & XenU
UPVTI_Kernel_Build     pass    1 UP VTI and do kernel build

Notes:
-
The last stable changeset:
-
13465:58637a0a7c7e

Thanks,
Yongkang

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:

2007-01-18 Thread Tian, Kevin
From: Isaku Yamahata [mailto:[EMAIL PROTECTED]
Sent: 18 January 2007 17:25

On Thu, Jan 18, 2007 at 04:54:05PM +0800, Tian, Kevin wrote:
 Yeah, the memory map may be sparse on ia64, but only at the physical level.
 You can always present a compact pseudo-physical layout to a
 domain, regardless of whether the real physical memory is sparse. :-)

That's right. Xen/ia64 does so now for paravirtualized domains,
except dom0.
There is an unsolved issue: if a lot of memory (e.g. 4GB) is given
to a driver domain, the domain can't access I/O.
At least the I/O area must be avoided somehow,
so a paravirtualized domain's memory map may become sparse
(in the future, when the issue is solved).

Yes, if I/O regions are very sparse, so is the memory map for a driver
domain.



  BTW, is it possible
  to save the memmap into xenstore, so that multiple user components can
  communicate such info directly without Xen's intervention?

Do you have any usage in mind?

Cases like your requirement above, cases like qemu, and even cases
like save/restore... anyway, to me there's no need to make Xen aware
of the domain memmap. The domain image builder constructs the memmap
based on its configuration, and then just notifies Xen to allocate pages
for the appropriate regions or to set up mappings for assigned MMIO
ranges. If the builder also saves the memmap to xenstore, you don't need
the above hypercall. Just an alternative...
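
A sketch of that alternative, with the builder publishing its memmap
under a xenstore key (the xs_* calls are from libxenstore as I recall
them; the path and record layout are invented for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <xs.h>

    /* Hypothetical record: one region of the pseudo-physical memory map. */
    struct memmap_entry {
        uint64_t start;
        uint64_t size;
    };

    /* The builder constructs the memmap from the domain configuration and
     * publishes it; qemu, save/restore, and dump-core can then read it
     * without any hypercall.  The path below is invented. */
    static int publish_memmap(uint32_t domid,
                              const struct memmap_entry *map, unsigned int nr)
    {
        char path[64];
        struct xs_handle *xsh = xs_daemon_open();
        int ok;

        if (xsh == NULL)
            return -1;

        snprintf(path, sizeof(path), "/local/domain/%u/memmap", domid);
        ok = xs_write(xsh, XBT_NULL, path, map, nr * sizeof(*map));
        xs_daemon_close(xsh);
        return ok ? 0 : -1;
    }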

Thanks,
Kevin

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel][PATCH] Implement eager save, lazy restore FP algorithm

2007-01-18 Thread Xu, Anthony
Alex Williamson wrote on 17 January 2007 5:12:
 On Tue, 2007-01-16 at 12:52 +0800, Xu, Anthony wrote:
 This patch implements an eager save, lazy restore FP algorithm.
 It reduces the cost of a VCPU schedule.
 
 Hi Anthony,
 
In theory, this is nice.  In practice, I get segfaults doing a
 kernel build in a domVT with multiple VT domains running.  Thanks,
 
   Alex

Hi Alex,

I didn't consider VCPU migration in the previous patch.
This patch fixes that.

Tested by running kernel builds on two VTI domains with two VCPUs each.
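
For readers following the thread, here is the shape of the algorithm as
a hedged sketch with hypothetical names (the real code is in the
attached lazy_fp2.patch): save the high FP partition eagerly when a VCPU
is scheduled out, and defer the restore until the guest actually touches
FP, trapping the first use via the disabled-FP fault.

    /* Sketch of eager-save / lazy-restore for the ia64 high FP partition
     * (f32-f127).  All names here are hypothetical stand-ins. */

    struct vcpu_fp {
        int fph_valid;                 /* high FP state lives in memory? */
        /* ... save area for f32-f127 ... */
    };

    void save_high_fp_regs(struct vcpu_fp *v);     /* hypothetical */
    void restore_high_fp_regs(struct vcpu_fp *v);  /* hypothetical */
    void set_psr_dfh(void);                        /* disable high FP */
    void clear_psr_dfh(void);                      /* re-enable high FP */

    /* On deschedule: save eagerly, so the state is in memory and the
     * VCPU can be rescheduled on any physical CPU (the migration case). */
    void fp_on_schedule_out(struct vcpu_fp *v)
    {
        save_high_fp_regs(v);
        v->fph_valid = 1;
        set_psr_dfh();                 /* the next FP use will fault */
    }

    /* On the disabled-FP fault: restore only now.  A VCPU that never
     * touches f32-f127 never pays the restore cost. */
    void fp_disabled_fault(struct vcpu_fp *v)
    {
        if (v->fph_valid)
            restore_high_fp_regs(v);
        clear_psr_dfh();
    }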

--anthony


lazy_fp2.patch
Description: lazy_fp2.patch
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel] [Patch] allocate all memory to dom0

2007-01-18 Thread Alex Williamson
On Thu, 2007-01-18 at 13:06 +0900, Akio Takebe wrote:
 Hi,
 
 If we don't specify dom0_mem, we can use all memory in dom0 with this patch.
 I changed alloc_dom0() to alloc_dom0_size(), and alloc_dom0_size() is a
 static function.

Hi Akio,

   I don't think we're able to do this yet.  One of my test systems is
an rx6600 w/ 96G of memory.  With this patch, I get a _very_ long pause
during Xen boot (30 seconds), followed by an endless loop of page
allocation failures:

(XEN) Domain0 EFI passthrough: ACPI 2.0=0x3fdce000 SMBIOS=0x3e52a000
[long pause here]
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
...

The most memory I can specify for dom0_mem and still boot seems to be
around 87G.  Obviously failing to boot is bad, but the long pause while
we're individually mapping every page for dom0 is also a usability
issue.  We either need to see about optimizing this code segment to
reduce the delay to something acceptable, or add some kind of forward
progress indicator.  Thanks,

Alex
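
A crude forward-progress indicator would already help; a minimal sketch
in Xen-style C (the helper and loop body are hypothetical):

    /* Print a marker every 1 GB of dom0 memory assigned, so a 96G boot
     * shows forward progress instead of a long silent pause.
     * assign_one_dom0_page() is a hypothetical stand-in for the real
     * per-page assignment. */
    #ifndef PAGE_SHIFT
    #define PAGE_SHIFT 14              /* assumption: 16K pages on ia64 */
    #endif
    #define PAGES_PER_GB (1UL << (30 - PAGE_SHIFT))

    struct domain;                                          /* opaque here */
    void assign_one_dom0_page(struct domain *d, unsigned long pfn);
    void printk(const char *fmt, ...);

    static void assign_dom0_memory(struct domain *d, unsigned long nr_pages)
    {
        unsigned long i;

        for (i = 0; i < nr_pages; i++) {
            assign_one_dom0_page(d, i);
            if ((i + 1) % PAGES_PER_GB == 0)
                printk("dom0: %lu GB assigned\n", (i + 1) / PAGES_PER_GB);
        }
    }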

-- 
Alex Williamson HP Open Source  Linux Org.


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel][PATCH] don't panic dom0 if there is not enough memory to create guest

2007-01-18 Thread Zhang, Xing Z
I am sorry; I missed the case where the VHPT is allocated successfully but
the TLB allocation fails.

This patch is a new version that handles it.
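
The shape of the fix, as a sketch (function names other than
pervcpu_vhpt_free(), which is named elsewhere in this digest, are
hypothetical stand-ins for the real ones in the attached patch): unwind
the VHPT allocation when the TLB allocation fails, and return an error
to the caller instead of panicking dom0:

    struct vcpu;                              /* opaque here */
    int pervcpu_vhpt_alloc(struct vcpu *v);   /* hypothetical */
    void pervcpu_vhpt_free(struct vcpu *v);   /* named in this digest */
    int tlb_track_alloc(struct vcpu *v);      /* hypothetical */

    /* Partial-failure unwind: undo the VHPT allocation if the TLB
     * allocation fails, and report the error so domain creation fails
     * cleanly while dom0 keeps running. */
    int vcpu_late_initialise(struct vcpu *v)
    {
        int rc;

        rc = pervcpu_vhpt_alloc(v);
        if (rc != 0)
            return rc;                /* nothing to undo yet */

        rc = tlb_track_alloc(v);
        if (rc != 0) {
            pervcpu_vhpt_free(v);     /* undo the part that succeeded */
            return rc;
        }
        return 0;
    }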

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zhang, Xing Z
Sent: 18 January 2007 18:01
To: xen-ia64-devel@lists.xensource.com
Subject: [Xen-ia64-devel][PATCH] don't panic dom0 if there is not enough memory
to create guest

 

When there is not enough memory to create a domain,

we need not panic domain0. Just prevent the domain from being created.

 

Signed-off-by: Zhang Xin [EMAIL PROTECTED]

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation

 

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

RE: [Xen-ia64-devel][PATCH] don't panic dom0 if there is not enough memory to create guest

2007-01-18 Thread Zhang, Xing Z
Sorry, forgot to attach the patch.

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation



From: Zhang, Xing Z 
Sent: 19 January 2007 9:59
To: Zhang, Xing Z; xen-ia64-devel@lists.xensource.com
Subject: RE: [Xen-ia64-devel][PATCH] don't panic dom0 if there is not
enough memory to create guest

 

I am sorry; I missed the case where the VHPT is allocated successfully but
the TLB allocation fails.

This patch is a new version that handles it.

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation



From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zhang, Xing Z
Sent: 18 January 2007 18:01
To: xen-ia64-devel@lists.xensource.com
Subject: [Xen-ia64-devel][PATCH] don't panic dom0 if there is not enough memory
to create guest

 

When there is not enough memory to create a domain,

we need not panic domain0. Just prevent the domain from being created.

 

Signed-off-by: Zhang Xin [EMAIL PROTECTED]

 

Good good study,day day up ! ^_^

-Wing(zhang xin)

 

OTC,Intel Corporation

 



donnot_panic_dom0_2.patch
Description: donnot_panic_dom0_2.patch
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel] [Patch] allocate all memory to dom0

2007-01-18 Thread Akio Takebe
Hi, Alex

Thank you for your comments.
I agree; my patch can be applied after fixing .config (the discontiguous
memory issue), fixing and optimizing the code, and so on.

Also, as Mark said, xen-ia64 may not need to allocate
all memory and cpus to dom0. IA64 machines can be very large systems,
and dom0 doesn't need such a large environment.

Best Regards,

Akio Takebe

On Thu, 2007-01-18 at 13:06 +0900, Akio Takebe wrote:
 Hi,
 
 If we don't specify dom0_mem, we can use all memory in dom0 with this patch.
 I changed alloc_dom0() to alloc_dom0_size(), and alloc_dom0_size() is a
 static function.

Hi Akio,

   I don't think we're able to do this yet.  One of my test systems is
an rx6600 w/ 96G of memory.  With this patch, I get a _very_ long pause
during Xen boot (30 seconds), followed by an endless loop of page
allocation failures:

(XEN) Domain0 EFI passthrough: ACPI 2.0=0x3fdce000 SMBIOS=0x3e52a000
[long pause here]
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
(XEN) Cannot handle page request order 0!
...

The most memory I can specify for dom0_mem and still boot seems to be
around 87G.  Obviously failing to boot is bad, but the long pause while
we're individually mapping every page for dom0 is also a usability
issue.  We either need to see about optimizing this code segment to
reduce the delay to something acceptable, or add some kind of forward
progress indicator.  Thanks,

   Alex

-- 
Alex Williamson HP Open Source  Linux Org.



___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


Re: [Xen-ia64-devel] Re: [Xen-devel] [PATCH 0/5] dump-core take 2:

2007-01-18 Thread Jes Sorensen
 Kevin == Tian, Kevin [EMAIL PROTECTED] writes:

Kevin Yeah, memory map may be sparse on ia64, but, only at physical
Kevin level.  You can always present a compact pseudo physical layout
Kevin to a domain, despite of sparse or not in real physical.:-) BTW,
Kevin is it possible to save memmap into xenstore, so that multiple
Kevin user components can communicate such info directly without
Kevin xen's intervention?

Providing a fake linear memory map like that is totally broken; it
means the domU operating system will not be able to benefit from NUMA
information and do appropriate scheduling. The domU pages need to be
placed in the metaphysical memory zones that match their physical zones
to get this right.

We can provide a virtual linear map for special cases, like supporting
lesser operating systems that can't handle real computers, but the
general case needs to be that pages go into the metaphysical zone that
matches their real physical zone.

This is applicable to any NUMA system, not just ia64 systems, so with
x86_64 becoming mainstream they will need it there too.

A linear scan of the pfn list is just wrong; one should never do that.

Cheers,
Jes

___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] [Patch 0/2] INIT handler improvement

2007-01-18 Thread Akio Takebe
Hi,

I fixed a bug and improved the call trace
produced when the INIT button is pushed.
The improvement supports call traces of all vcpus.

[1/2] fix a bug in init_handler_platform
[2/2] improve calltrace

Best Regards,

Akio Takebe


___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] [Patch 0/1] fix a bug in init_handler_platform

2007-01-18 Thread Akio Takebe
Hi,

This patch fixes breakage of the switch stack.
unw_init_running() in the INIT handler saves the switch stack pointer,
but the switch stack is not guaranteed after unw_init_running() returns.
Because the INIT handler of Xen-ia64 calls some functions after
unw_init_running(), the switch stack is broken by those functions.
Signed-off-by: Akio Takebe [EMAIL PROTECTED]

Best Regards,

Akio Takebe

fix_break_sw_init_hanlder.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

[Xen-ia64-devel] [Patch 2/2] improve calltrace

2007-01-18 Thread Akio Takebe
Hi,

I improved the call trace produced when the INIT button is pushed.
The improvement supports call traces of all vcpus.

Signed-off-by: Akio Takebe [EMAIL PROTECTED]

Best Regards,

Akio Takebe

improvement_init_handler.patch
Description: Binary data
___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel

Re: [Xen-ia64-devel] Xen crash when creating VTI in some machines.

2007-01-18 Thread Masaki Kanno
Hi Yongkang,

Hmm... it is a strange issue; I think so, too.
I'm trying to reproduce it, but I have not been able to hit it
at this stage.
Currently, the creation of an HVM domain does not call pervcpu_vhpt_free(),
due to my patch (changeset 13434). Which function is at f7b30080?

Best regards,
 Kan


Hi all,

This is a strange issue, because some machines do not hit this problem,
including our daily regular testing machine.

But we found Xen might crash when creating a VTI domain on
some other machines. The serial console keeps reporting:
...
(XEN)  [f4080300] pervcpu_vhpt_free+0x30/0x50
(XEN) sp=f7bffe00 bsp=
f7bf93e8
(XEN)  [f7b30080] ???
(XEN) sp=f7bffe00 bsp=
f7bf93e8
...


We found this issue at least in changesets 13438 and 13465. Does anybody
else hit this issue?

Best Regards,
Yongkang (Kangkang)




___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


RE: [Xen-ia64-devel] Xen crash when creating VTI in some machines.

2007-01-18 Thread Masaki Kanno
Hi Wing and Yongkang,

Oh! I am sorry that it was caused by my patch. 
I will review my patch. 

Best regards,
 Kan

Hi Kan:
   f7b30080 is an address belonging to free_domheap_pages(). I found
that if I reverse your patch, the issue does not occur. Could you review
your patch? Maybe something is missing.

Good good study,day day up ! ^_^
-Wing(zhang xin)
 
OTC,Intel Corporation

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Masaki 
Kanno
Sent: 19 January 2007 13:13
To: You, Yongkang; xen-ia64-devel
Subject: Re: [Xen-ia64-devel] Xen crash when creating VTI in some machines.

Hi Yongkang,

Hmm... it is a strange issue; I think so, too.
I'm trying to reproduce it, but I have not been able to hit it
at this stage.
Currently, the creation of an HVM domain does not call pervcpu_vhpt_free(),
due to my patch (changeset 13434). Which function is at f7b30080?

Best regards,
 Kan


Hi all,

This is a strange issue, because some machines do not hit this problem,
including our daily regular testing machine.

But we found Xen might crash when creating a VTI domain on
some other machines. The serial console keeps reporting:
...
(XEN)  [f4080300] pervcpu_vhpt_free+0x30/0x50
(XEN) sp=f7bffe00 bsp=
f7bf93e8
(XEN)  [f7b30080] ???
(XEN) sp=f7bffe00 bsp=
f7bf93e8
...


We found this issue at least in changesets 13438 and 13465. Does anybody
else hit this issue?

Best Regards,
Yongkang (Kangkang)




___
Xen-ia64-devel mailing list
Xen-ia64-devel@lists.xensource.com
http://lists.xensource.com/xen-ia64-devel


[Xen-ia64-devel] Weekly benchmark results [ww03]

2007-01-18 Thread KUWAMURA Shin'ya
Hi,

I report this week's benchmark results.

In LTP, clone06 and sockioctl01 failed.  Both cases passed when
tested manually.

sockioctl01's failure was caused by a bad configuration (/dev/tty0 is
missing). I will fix it.

clone06 fails because the environment variable TERM is unset.
In that case, clone06 causes a SEGV.
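
For the record, the failure mode is the classic unchecked getenv()
pattern; a minimal sketch of the suspected bug and its guard
(hypothetical code, not the actual LTP source):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The buggy pattern looks like:
     *     strcmp(getenv("TERM"), "dumb")
     * which SEGVs when TERM is unset, because getenv() returns NULL and
     * strcmp() dereferences it.  This mirrors how clone06 can crash. */

    /* Guarded version: treat a missing TERM as "not dumb". */
    static int is_dumb_term_safe(void)
    {
        const char *term = getenv("TERM");
        return term != NULL && strcmp(term, "dumb") == 0;
    }

    int main(void)
    {
        printf("TERM is %sdumb\n", is_dumb_term_safe() ? "" : "not ");
        return 0;
    }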


TEST ENVIRONMENT
Machine  : Tiger4
Kernel   : 2.6.16.33-xen
Changeset: 13438:43115ffc6635 (xen-ia64-unstable)
Dom0 OS  : RHEL4 U2 (2P)
DomU OS  : RHEL4 U2 (8P)
#DomU: 1
Scheduler: credit
Device   : tap (tap:aio)

TEST RESULT
unixbench4.1.0: Pass
bonnie++-1.03 : Pass
ltp-full-20061121 : 3/831 FAIL (see attachment)
iozone3_191   : Pass
lmbench-3.0-a5: Pass

Best regards,
KUWAMURA and Fujitsu members
Test Start Time: Thu Jan 18 17:18:04 2007
-
Testcase   Result Exit Value
   -- --
abort01PASS   0
accept01   PASS   0
access01   PASS   0
access02   PASS   0
access03   PASS   0
access04   PASS   0
access05   PASS   0
acct01 PASS   0
acct02 PASS   0
adjtimex01 PASS   0
adjtimex02 PASS   0
alarm01PASS   0
alarm02PASS   0
alarm03PASS   0
alarm04PASS   0
alarm05PASS   0
alarm06PASS   0
alarm07PASS   0
asyncio02  PASS   0
bind01 PASS   0
bind02 PASS   0
brk01  PASS   0
capget01   PASS   0
capget02   PASS   0
capset01   PASS   0
capset02   PASS   0
chdir01PASS   0
chdir01A   PASS   0
chdir02PASS   0
chdir03PASS   0
chdir04PASS   0
chmod01PASS   0
chmod01A   PASS   0
chmod02PASS   0
chmod03PASS   0
chmod04PASS   0
chmod05PASS   0
chmod06PASS   0
chmod07PASS   0
chown01PASS   0
chown02PASS   0
chown03PASS   0
chown04PASS   0
chown05PASS   0
chroot01   PASS   0
chroot02   PASS   0
chroot03   PASS   0
chroot04   PASS   0
clone01PASS   0
clone02PASS   0
clone03PASS   0
clone04PASS   0
clone05PASS   0
clone06FAIL   2
clone07PASS   0
close01PASS   0
close02PASS   0
close08PASS   0
confstr01  PASS   0
connect01  PASS   0
creat01PASS   0
creat03PASS   0
creat04PASS   0
creat05PASS   0
creat06PASS   0
creat07PASS   0
creat08PASS   0
creat09PASS   0
dup01  PASS   0
dup02  PASS   0
dup03  PASS   0
dup04  PASS   0
dup05  PASS   0
dup06  PASS   0
dup07  PASS   0
dup201 PASS   0
dup202 PASS   0
dup203 PASS   0
dup204 PASS   0
dup205 PASS   0
execl01PASS   0
execle01   PASS   0
execlp01