Hello community,

here is the log from the commit of package xen for openSUSE:Factory checked in at 2014-05-02 19:21:27
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/xen (Old)
 and      /work/SRC/openSUSE:Factory/.xen.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "xen"

Changes:
--------
--- /work/SRC/openSUSE:Factory/xen/xen.changes	2014-04-16 07:44:25.000000000 +0200
+++ /work/SRC/openSUSE:Factory/.xen.new/xen.changes	2014-05-02 19:21:29.000000000 +0200
@@ -1,0 +2,26 @@
+Sat Apr 26 09:56:36 MDT 2014 - [email protected]
+
+- When the xl command is used, check to see if the domain being
+  modified is managed by libvirt and print warning if it is.
+  xl-check-for-libvirt-managed-domain.patch
+
+-------------------------------------------------------------------
+Thu Apr 24 08:17:36 MDT 2014 - [email protected]
+
+- Upstream patches from Jan
+  53455585-x86-AMD-feature-masking-is-unavailable-on-Fam11.patch
+  5346a7a0-x86-AMD-support-further-feature-masking-MSRs.patch
+  534bbd90-x86-nested-HAP-don-t-BUG-on-legitimate-error.patch
+  534bdf47-x86-HAP-also-flush-TLB-when-altering-a-present-1G-or-intermediate-entry.patch
+  53563ea4-x86-MSI-drop-workaround-for-insecure-Dom0-kernels.patch
+  5357baff-x86-add-missing-break-in-dom0_pit_access.patch
+- XSA-92
+  xsa92.patch
+
+-------------------------------------------------------------------
+Sat Apr 12 20:48:21 UTC 2014 - [email protected]
+
+- Add # needssslcertforbuild to use the project's certificate when
+  building in a home project. (bnc#872354)
+
+-------------------------------------------------------------------

New:
----
  53455585-x86-AMD-feature-masking-is-unavailable-on-Fam11.patch
  5346a7a0-x86-AMD-support-further-feature-masking-MSRs.patch
  534bbd90-x86-nested-HAP-don-t-BUG-on-legitimate-error.patch
  534bdf47-x86-HAP-also-flush-TLB-when-altering-a-present-1G-or-intermediate-entry.patch
  53563ea4-x86-MSI-drop-workaround-for-insecure-Dom0-kernels.patch
  5357baff-x86-add-missing-break-in-dom0_pit_access.patch
  xl-check-for-libvirt-managed-domain.patch
  xsa92.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ xen.spec ++++++
--- /var/tmp/diff_new_pack.FnTEMX/_old	2014-05-02 19:21:32.000000000 +0200
+++ /var/tmp/diff_new_pack.FnTEMX/_new	2014-05-02 19:21:32.000000000 +0200
@@ -16,6 +16,8 @@
 #
+# needssslcertforbuild
+
 Name:           xen
 ExclusiveArch:  %ix86 x86_64 %arm aarch64
 %define xvers 4.4
@@ -152,7 +154,7 @@
 %endif
 %endif
-Version:        4.4.0_14
+Version:        4.4.0_16
 Release:        0
 PreReq:         %insserv_prereq %fillup_prereq
 Summary:        Xen Virtualization: Hypervisor (aka VMM aka Microkernel)
@@ -235,6 +237,13 @@
 Patch22:        53356c1e-x86-HVM-correct-CPUID-leaf-80000008-handling.patch
 Patch23:        533ad1ee-VMX-fix-PAT-value-seen-by-guest.patch
 Patch24:        533d413b-x86-mm-fix-checks-against-max_mapped_pfn.patch
+Patch25:        53455585-x86-AMD-feature-masking-is-unavailable-on-Fam11.patch
+Patch26:        5346a7a0-x86-AMD-support-further-feature-masking-MSRs.patch
+Patch27:        534bbd90-x86-nested-HAP-don-t-BUG-on-legitimate-error.patch
+Patch28:        534bdf47-x86-HAP-also-flush-TLB-when-altering-a-present-1G-or-intermediate-entry.patch
+Patch29:        53563ea4-x86-MSI-drop-workaround-for-insecure-Dom0-kernels.patch
+Patch30:        5357baff-x86-add-missing-break-in-dom0_pit_access.patch
+Patch92:        xsa92.patch
 # Upstream qemu
 Patch250:       VNC-Support-for-ExtendedKeyEvent-client-message.patch
 Patch251:       0001-net-move-the-tap-buffer-into-TAPState.patch
@@ -356,6 +365,7 @@
 Patch464:       set-mtu-from-bridge-for-tap-interface.patch
 Patch465:       libxl.add-option-for-discard-support-to-xl-disk-conf.patch
 Patch466:       aarch64-rename-PSR_MODE_ELxx-to-match-linux-headers.patch
+Patch467:       xl-check-for-libvirt-managed-domain.patch
 # Hypervisor and PV driver Patches
 Patch501:       x86-ioapic-ack-default.patch
 Patch502:       x86-cpufreq-report.patch
@@ -617,6 +627,13 @@
 %patch22 -p1
 %patch23 -p1
 %patch24 -p1
+%patch25 -p1
+%patch26 -p1
+%patch27 -p1
+%patch28 -p1
+%patch29 -p1
+%patch30 -p1
+%patch92 -p1
 # Upstream qemu patches
 %patch250 -p1
 %patch251 -p1
@@ -737,6 +754,7 @@
 %patch464 -p1
 %patch465 -p1
 %patch466 -p1
+%patch467 -p1
 # Hypervisor and PV driver Patches
 %patch501 -p1
 %patch502 -p1

++++++ 53455585-x86-AMD-feature-masking-is-unavailable-on-Fam11.patch ++++++
# Commit 70e79fad6dc6f533ff83ee23b8d13de5a696d896
# Date 2014-04-09 16:13:25 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86/AMD: feature masking is unavailable on Fam11

Reported-by: Aravind Gopalakrishnan<[email protected]>
Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -107,6 +107,10 @@ static void __devinit set_cpuidmask(cons
 	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
 	status = no_mask;
 
+	/* Fam11 doesn't support masking at all. */
+	if (c->x86 == 0x11)
+		return;
+
 	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
 	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
 		feat_ecx = opt_cpuid_mask_ecx;
@@ -176,7 +180,6 @@ static void __devinit set_cpuidmask(cons
 	       extfeat_ecx, extfeat_edx);
 
  setmask:
-	/* FIXME check if processor supports CPUID masking */
 	/* AMD processors prior to family 10h required a 32-bit password */
 	if (c->x86 >= 0x10) {
 		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);

++++++ 5346a7a0-x86-AMD-support-further-feature-masking-MSRs.patch ++++++
# Commit e74de9c0b19f9bd16d658a96bf6c9ab9a2a639e9
# Date 2014-04-10 16:16:00 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86/AMD: support further feature masking MSRs

Newer AMD CPUs also allow masking CPUID leaf 6 ECX and CPUID leaf 7
sub-leaf 0 EAX and EBX.

Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Aravind Gopalakrishnan<[email protected]>

--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -320,24 +320,42 @@ Indicate where the responsibility for dr
 ### cpuid\_mask\_cpu (AMD only)
 > `= fam_0f_rev_c | fam_0f_rev_d | fam_0f_rev_e | fam_0f_rev_f | fam_0f_rev_g
 >   | fam_10_rev_b | fam_10_rev_c | fam_11_rev_b`
 
-If the other **cpuid\_mask\_{,ext\_}e{c,d}x** options are fully set
-(unspecified on the command line), specify a pre-canned cpuid mask to
-mask the current processor down to appear as the specified processor.
-It is important to ensure that all hosts in a pool appear the same to
-guests to allow successful live migration.
+If the other **cpuid\_mask\_{,ext\_,thermal\_,l7s0\_}e{a,b,c,d}x**
+options are fully set (unspecified on the command line), specify a
+pre-canned cpuid mask to mask the current processor down to appear as
+the specified processor. It is important to ensure that all hosts in a
+pool appear the same to guests to allow successful live migration.
-### cpuid\_mask\_ ecx,edx,ext\_ecx,ext\_edx,xsave_eax
+### cpuid\_mask\_{{,ext\_}ecx,edx}
 > `= <integer>`
 
 > Default: `~0` (all bits set)
 
-These five command line parameters are used to specify cpuid masks to
+These four command line parameters are used to specify cpuid masks to
 help with cpuid levelling across a pool of hosts. Setting a bit in the
 mask indicates that the feature should be enabled, while clearing a bit
 in the mask indicates that the feature should be disabled. It is
 important to ensure that all hosts in a pool appear the same to guests
 to allow successful live migration.
 
+### cpuid\_mask\_xsave\_eax (Intel only)
+> `= <integer>`
+
+> Default: `~0` (all bits set)
+
+This command line parameter is also used to specify a cpuid mask to
+help with cpuid levelling across a pool of hosts. See the description
+of the other respective options above.
+
+### cpuid\_mask\_{l7s0\_{eax,ebx},thermal\_ecx} (AMD only)
+> `= <integer>`
+
+> Default: `~0` (all bits set)
+
+These three command line parameters are also used to specify cpuid
+masks to help with cpuid levelling across a pool of hosts. See the
+description of the other respective options above.
+
 ### cpuidle
 > `= <boolean>`
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -30,9 +30,17 @@
  * "fam_10_rev_c"
  * "fam_11_rev_b"
  */
-static char opt_famrev[14];
+static char __initdata opt_famrev[14];
 string_param("cpuid_mask_cpu", opt_famrev);
 
+static unsigned int __initdata opt_cpuid_mask_l7s0_eax = ~0u;
+integer_param("cpuid_mask_l7s0_eax", opt_cpuid_mask_l7s0_eax);
+static unsigned int __initdata opt_cpuid_mask_l7s0_ebx = ~0u;
+integer_param("cpuid_mask_l7s0_ebx", opt_cpuid_mask_l7s0_ebx);
+
+static unsigned int __initdata opt_cpuid_mask_thermal_ecx = ~0u;
+integer_param("cpuid_mask_thermal_ecx", opt_cpuid_mask_thermal_ecx);
+
 /* 1 = allow, 0 = don't allow guest creation, -1 = don't allow boot */
 s8 __read_mostly opt_allow_unsafe;
 boolean_param("allow_unsafe", opt_allow_unsafe);
@@ -96,7 +104,11 @@ static void __devinit set_cpuidmask(cons
 {
 	static unsigned int feat_ecx, feat_edx;
 	static unsigned int extfeat_ecx, extfeat_edx;
+	static unsigned int l7s0_eax, l7s0_ebx;
+	static unsigned int thermal_ecx;
+	static bool_t skip_l7s0_eax_ebx, skip_thermal_ecx;
 	static enum { not_parsed, no_mask, set_mask } status;
+	unsigned int eax, ebx, ecx, edx;
 
 	if (status == no_mask)
 		return;
@@ -104,7 +116,7 @@ static void __devinit set_cpuidmask(cons
 	if (status == set_mask)
 		goto setmask;
 
-	ASSERT((status == not_parsed) && (smp_processor_id() == 0));
+	ASSERT((status == not_parsed) && (c == &boot_cpu_data));
 	status = no_mask;
 
 	/* Fam11 doesn't support masking at all. */
@@ -112,11 +124,16 @@ static void __devinit set_cpuidmask(cons
 		return;
 
 	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
-	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {
+	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx &
+	      opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx &
+	      opt_cpuid_mask_thermal_ecx)) {
 		feat_ecx = opt_cpuid_mask_ecx;
 		feat_edx = opt_cpuid_mask_edx;
 		extfeat_ecx = opt_cpuid_mask_ext_ecx;
 		extfeat_edx = opt_cpuid_mask_ext_edx;
+		l7s0_eax = opt_cpuid_mask_l7s0_eax;
+		l7s0_ebx = opt_cpuid_mask_l7s0_ebx;
+		thermal_ecx = opt_cpuid_mask_thermal_ecx;
 	} else if (*opt_famrev == '\0') {
 		return;
 	} else if (!strcmp(opt_famrev, "fam_0f_rev_c")) {
@@ -179,11 +196,39 @@ static void __devinit set_cpuidmask(cons
 	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n",
 	       extfeat_ecx, extfeat_edx);
 
+	if (c->cpuid_level >= 7)
+		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+	else
+		ebx = eax = 0;
+	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
+		if (l7s0_eax > eax)
+			l7s0_eax = eax;
+		l7s0_ebx &= ebx;
+		printk("Writing CPUID leaf 7 subleaf 0 feature mask EAX:EBX -> %08Xh:%08Xh\n",
+		       l7s0_eax, l7s0_ebx);
+	} else
+		skip_l7s0_eax_ebx = 1;
+
+	/* Only Fam15 has the respective MSR. */
+	ecx = c->x86 == 0x15 && c->cpuid_level >= 6 ? cpuid_ecx(6) : 0;
+	if (ecx && ~thermal_ecx) {
+		thermal_ecx &= ecx;
+		printk("Writing CPUID thermal/power feature mask ECX -> %08Xh\n",
+		       thermal_ecx);
+	} else
+		skip_thermal_ecx = 1;
+
  setmask:
 	/* AMD processors prior to family 10h required a 32-bit password */
 	if (c->x86 >= 0x10) {
 		wrmsr(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
 		wrmsr(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
+		if (!skip_l7s0_eax_ebx)
+			wrmsr(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax);
+		if (!skip_thermal_ecx) {
+			rdmsr(MSR_AMD_THRM_FEATURE_MASK, eax, edx);
+			wrmsr(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx);
+		}
 	} else {
 		wrmsr_amd(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx);
 		wrmsr_amd(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx);
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -204,6 +204,8 @@
 #define MSR_AMD_FAM15H_EVNTSEL5		0xc001020a
 #define MSR_AMD_FAM15H_PERFCTR5		0xc001020b
 
+#define MSR_AMD_L7S0_FEATURE_MASK	0xc0011002
+#define MSR_AMD_THRM_FEATURE_MASK	0xc0011003
 #define MSR_K8_FEATURE_MASK		0xc0011004
 #define MSR_K8_EXT_FEATURE_MASK		0xc0011005

++++++ 534bbd90-x86-nested-HAP-don-t-BUG-on-legitimate-error.patch ++++++
# Commit 1ca73aaf51eba14256794bf045c2eb01e88e1324
# Date 2014-04-14 12:50:56 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86/nested HAP: don't BUG() on legitimate error

p2m_set_entry() can fail without there being a bug in the code -
crash the domain rather than the host in that case.
Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>
Acked-by: Tim Deegan <[email protected]>

--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -133,7 +133,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct
         gdprintk(XENLOG_ERR,
                  "failed to set entry for %#"PRIx64" -> %#"PRIx64"\n",
                  L2_gpa, L0_gpa);
-        BUG();
+        domain_crash(p2m->domain);
     }
 }

++++++ 534bdf47-x86-HAP-also-flush-TLB-when-altering-a-present-1G-or-intermediate-entry.patch ++++++
# Commit c82fbfe6ec8be597218eb943641d1f7a81c4c01e
# Date 2014-04-14 15:14:47 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86/HAP: also flush TLB when altering a present 1G or intermediate entry

Signed-off-by: Jan Beulich <[email protected]>
Acked-by: Tim Deegan <[email protected]>

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -711,9 +711,8 @@ hap_write_p2m_entry(struct vcpu *v, unsi
     }
 
     safe_write_pte(p, new);
-    if ( (old_flags & _PAGE_PRESENT)
-         && (level == 1 || (level == 2 && (old_flags & _PAGE_PSE))) )
-        flush_tlb_mask(d->domain_dirty_cpumask);
+    if ( old_flags & _PAGE_PRESENT )
+        flush_tlb_mask(d->domain_dirty_cpumask);
 
     paging_unlock(d);

++++++ 53563ea4-x86-MSI-drop-workaround-for-insecure-Dom0-kernels.patch ++++++
# Commit 061eebe0e99ad45c9c3b1a778b06140de4a91f25
# Date 2014-04-22 12:04:20 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86/MSI: drop workaround for insecure Dom0 kernels

Considering that
- the workaround is expensive (iterating through the entire P2M space
  of a domain),
- the planned elimination of the expensiveness (by propagating the
  type change step by step to the individual P2M leaves) wouldn't
  address the IOMMU side of things (as for it to obey to the changed
  permissions the adjustments must be pushed down immediately through
  the entire tree)
- the proper solution (PHYSDEVOP_msix_prepare) should by now be
  implemented by all security conscious Dom0 kernels
remove the workaround, killing eventual guests that would be known to
become a security risk instead.

Signed-off-by: Jan Beulich <[email protected]>
Acked-by: Kevin Tian <[email protected]>

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -679,7 +679,7 @@ static void ept_change_entry_type_global
         return;
 
     BUG_ON(p2m_is_grant(ot) || p2m_is_grant(nt));
-    BUG_ON(ot != nt && (ot == p2m_mmio_direct || nt == p2m_mmio_direct));
+    BUG_ON(p2m_is_mmio(ot) || p2m_is_mmio(nt));
 
     ept_change_entry_type_page(_mfn(ept_get_asr(ept)),
                                ept_get_wl(ept), ot, nt);
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -825,32 +825,22 @@ static int msix_capability_init(struct p
                                         msix->pba.last) )
             WARN();
 
-        if ( dev->domain )
-            p2m_change_entry_type_global(dev->domain,
-                                         p2m_mmio_direct, p2m_mmio_direct);
-        if ( desc && (!dev->domain || !paging_mode_translate(dev->domain)) )
+        if ( desc )
         {
-            struct domain *d = dev->domain;
+            struct domain *currd = current->domain;
+            struct domain *d = dev->domain ?: currd;
 
-            if ( !d )
-                for_each_domain(d)
-                    if ( !paging_mode_translate(d) &&
-                         (iomem_access_permitted(d, msix->table.first,
-                                                 msix->table.last) ||
-                          iomem_access_permitted(d, msix->pba.first,
-                                                 msix->pba.last)) )
-                        break;
-            if ( d )
-            {
-                if ( !is_hardware_domain(d) && msix->warned != d->domain_id )
-                {
-                    msix->warned = d->domain_id;
-                    printk(XENLOG_ERR
-                           "Potentially insecure use of MSI-X on %04x:%02x:%02x.%u by Dom%d\n",
-                           seg, bus, slot, func, d->domain_id);
-                }
-                /* XXX How to deal with existing mappings? */
-            }
+            if ( !is_hardware_domain(currd) || d != currd )
+                printk("%s use of MSI-X on %04x:%02x:%02x.%u by Dom%d\n",
+                       is_hardware_domain(currd)
+                       ? XENLOG_WARNING "Potentially insecure"
+                       : XENLOG_ERR "Insecure",
+                       seg, bus, slot, func, d->domain_id);
+            if ( !is_hardware_domain(d) &&
+                 /* Assume a domain without memory has no mappings yet. */
+                 (!is_hardware_domain(currd) || d->tot_pages) )
+                domain_crash(d);
+            /* XXX How to deal with existing mappings? */
         }
     }
     WARN_ON(msix->nr_entries != nr_entries);

++++++ 5357baff-x86-add-missing-break-in-dom0_pit_access.patch ++++++
# Commit 815dc9f1dba5782dcef77d8a002a11f5b1e5cc37
# Date 2014-04-23 15:07:11 +0200
# Author Jan Beulich <[email protected]>
# Committer Jan Beulich <[email protected]>
x86: add missing break in dom0_pit_access()

Coverity ID 1203045

Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Andrew Cooper <[email protected]>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1632,6 +1632,7 @@ int dom0_pit_access(struct ioreq *ioreq)
             outb(ioreq->data, PIT_MODE);
             return 1;
         }
+        break;
 
     case 0x61:
         if ( ioreq->dir == IOREQ_READ )

++++++ xl-check-for-libvirt-managed-domain.patch ++++++
Index: xen-4.4.0-testing/tools/libxl/xl.c
===================================================================
--- xen-4.4.0-testing.orig/tools/libxl/xl.c
+++ xen-4.4.0-testing/tools/libxl/xl.c
@@ -282,6 +282,44 @@ static void xl_ctx_free(void)
     }
 }
 
+/*
+  Return 0 if domain is managed by libvirt
+*/
+static int xl_lookup_libvirt_managed_domains(int argc, char **argv)
+{
+    FILE *fp;
+    int i;
+    char line[1024];
+    char *libvirt_sock = "/run/libvirt/libvirt-sock";
+
+    /* Check for the libvirt socket file */
+    if (access(libvirt_sock, F_OK) != 0) {
+        return 1;
+    }
+
+    /* Run virsh to get a list of running domains managed by libvirt */
+    fp = popen("/usr/bin/virsh list --name 2>&1", "r");
+    if (fp == NULL) {
+        return 1;
+    }
+
+    /* Read the list of domains looking for each name in the xl command */
+    while (fgets(line, sizeof(line)-1, fp) != NULL) {
+        line[strlen(line)-1] = '\0';
+        for (i=0; i<argc && line[0]; ++i) {
+            if (!strcmp(argv[i], line)) {
+                pclose(fp);
+                return 0;
+            }
+        }
+    }
+
+    pclose(fp);
+
+    /* Not found */
+    return 1;
+}
+
 int main(int argc, char **argv)
 {
     int opt = 0;
@@ -345,6 +383,18 @@ int main(int argc, char **argv)
         goto xit;
     }
     if (cspec->modifies && !dryrun_only) {
+        if (!force_execution) {
+            if (!xl_lookup_libvirt_managed_domains(argc, argv)) {
+                fprintf(stderr,
+"Warning: This domain is managed by libvirt. Using xl commands to modify this\n"
+"domain will result in errors when virsh or virt-manager is used.\n"
+"Please use only virsh or virt-manager to manage this domain.\n\n"
+"(This check can be overridden with the -f option.)\n"
+                );
+                ret = 1;
+                goto xit;
+            }
+        }
         for (int i = 0; i < sizeof(locks)/sizeof(locks[0]); i++) {
             if (!access(locks[i], F_OK) && !force_execution) {
                 fprintf(stderr,

++++++ xsa92.patch ++++++
x86/HVM: restrict HVMOP_set_mem_type

Permitting arbitrary type changes here has the potential of creating
present P2M (and hence EPT/NPT/IOMMU) entries pointing to an invalid
MFN (INVALID_MFN truncated to the respective hardware structure
field's width). This would become a problem the latest when something
real sat at the end of the physical address space; I'm suspecting
though that other things might break with such bogus entries.

Along with that drop a bogus (and otherwise becoming stale) log
message.

Afaict the similar operation in p2m_set_mem_access() is safe.

This is XSA-92.

Signed-off-by: Jan Beulich <[email protected]>
Reviewed-by: Tim Deegan <[email protected]>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4410,12 +4410,10 @@ long do_hvm_op(unsigned long op, XEN_GUE
             rc = -EINVAL;
             goto param_fail4;
         }
-        if ( p2m_is_grant(t) )
+        if ( !p2m_is_ram(t) &&
+             (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) )
         {
             put_gfn(d, pfn);
-            gdprintk(XENLOG_WARNING,
-                     "type for pfn %#lx changed to grant while "
-                     "we were working?\n", pfn);
             goto param_fail4;
         }
         else

-- 
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
