Re: Automated report: NetBSD-current/i386 build failure
David Holland wrote:
> On Mon, Jul 19, 2021 at 10:32:20AM +0900, Rin Okuyama wrote:
> > Logs below are usually more helpful.
>
> Right... I wonder what happened to bracket's error-matching script; it
> usually does better than that.

There are multiple causes, but a major one is that since babylon5 was
upgraded to a new server with more cores, the builds have more
parallelism, which causes make(1) to print more output from the other
parallel jobs after the actual error message, and bracket isn't looking
far enough back in the log.

I have a fix in testing on my own testbed but still need to deploy it
on babylon5.
-- 
Andreas Gustafsson, g...@gson.org
daily CVS update output
Updating src tree:
P src/doc/3RDPARTY
P src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c
P src/external/cddl/osnet/sys/sys/vnode.h
P src/sys/arch/alpha/common/sgmap_common.c
P src/sys/arch/alpha/common/sgmap_typedep.c
P src/sys/arch/alpha/include/bus_defs.h
P src/sys/arch/alpha/pci/cia_dma.c
P src/sys/arch/alpha/pci/ciavar.h
P src/sys/arch/alpha/pci/tsp_dma.c
P src/sys/arch/alpha/pci/tsvar.h
P src/sys/arch/alpha/tc/tc_dma.c
P src/sys/arch/alpha/tc/tc_dma_3000_500.c
P src/sys/fs/adosfs/advnops.c
P src/sys/fs/cd9660/cd9660_node.h
P src/sys/fs/cd9660/cd9660_vnops.c
P src/sys/fs/efs/efs_vnops.c
P src/sys/fs/filecorefs/filecore_node.h
P src/sys/fs/filecorefs/filecore_vnops.c
P src/sys/fs/hfs/hfs_vnops.c
P src/sys/fs/msdosfs/denode.h
P src/sys/fs/msdosfs/msdosfs_vnops.c
P src/sys/fs/ptyfs/ptyfs_vnops.c
P src/sys/fs/puffs/puffs_vnops.c
P src/sys/fs/tmpfs/tmpfs_fifoops.c
P src/sys/fs/tmpfs/tmpfs_fifoops.h
P src/sys/fs/tmpfs/tmpfs_specops.c
P src/sys/fs/tmpfs/tmpfs_specops.h
P src/sys/fs/tmpfs/tmpfs_vnops.c
P src/sys/fs/tmpfs/tmpfs_vnops.h
P src/sys/fs/v7fs/v7fs_extern.c
P src/sys/kern/kern_ksyms.c
P src/sys/kern/vfs_vnops.c
P src/sys/miscfs/deadfs/dead_vnops.c
P src/sys/miscfs/fifofs/fifo.h
P src/sys/miscfs/kernfs/kernfs_vnops.c
P src/sys/miscfs/procfs/procfs_vnops.c
P src/sys/miscfs/specfs/spec_vnops.c
P src/sys/miscfs/specfs/specdev.h
P src/sys/nfs/nfs_vnops.c
P src/sys/nfs/nfsnode.h
P src/sys/rump/librump/rumpvfs/rumpfs.c
P src/sys/ufs/chfs/chfs_vnops.c
P src/sys/ufs/ext2fs/ext2fs_vnops.c
P src/sys/ufs/ffs/ffs_vnops.c
P src/sys/ufs/lfs/lfs_vnops.c
P src/sys/ufs/lfs/ulfs_extern.h
P src/sys/ufs/mfs/mfs_extern.h
P src/sys/ufs/mfs/mfs_vnops.c
P src/sys/ufs/mfs/mfsnode.h
P src/sys/ufs/ufs/ufs_extern.h
P src/tests/usr.sbin/execsnoop/t_execsnoop.sh
P src/usr.bin/aiomixer/main.c

Updating xsrc tree:

Killing core files:

Updating release-8 src tree (netbsd-8):

Updating release-8 xsrc tree (netbsd-8):

Updating release-9 src tree (netbsd-9):

Updating release-9 xsrc tree (netbsd-9):

Updating file list:
-rw-rw-r--  1 srcmastr  netbsd  42878317 Jul 19 03:11 ls-lRA.gz
Re: Weird ldd problem
On Sun, 18 Jul 2021, Chavdar Ivanov wrote:

> The main difference is indeed
> ...
> (RPATH) Library rpath:
> [$ORIGIN/:$ORIGIN/../../lib/libast:$ORIGIN/cmds:$ORIGIN/../../lib/libdll]
>
> I have to read more to understand it though...

$ORIGIN? That's yet another way to locate shared libraries. At run-time,
$ORIGIN is replaced with the directory containing the executable, and
libraries are searched for in the path that results. Let's say you link
a program like this:

	cc -o foo foo.c -Wl,-rpath='$ORIGIN/../lib'

Then, if the executable is installed in /opt/X/bin, the search path
becomes `/opt/X/bin/../lib'. This is typically used by large commercial
programs which the user could install into any location, where the
vendor doesn't want the user to muck about adjusting paths in
/etc/ld.so.conf or LD_LIBRARY_PATH.

Incidentally, NetBSD's ld.elf_so seems overly sensitive about this. I
always thought that $ORIGIN expanded to an absolute path rather than a
relative one. At least, that's how it is on Linux and FreeBSD. Have the
linker experts look at this, then file a PR if warranted.

-RVP
Automated report: NetBSD-current/i386 build success
The NetBSD-current/i386 build is working again.

The following commits were made between the last failed build and the
successful build:

2021.07.19.01.06.14 thorpej src/sys/arch/alpha/pci/cia_dma.c,v 1.37
2021.07.19.01.06.14 thorpej src/sys/arch/alpha/pci/ciavar.h,v 1.22
2021.07.19.01.06.14 thorpej src/sys/arch/alpha/pci/tsp_dma.c,v 1.22
2021.07.19.01.06.14 thorpej src/sys/arch/alpha/pci/tsvar.h,v 1.17
2021.07.19.01.30.24 dholland src/sys/fs/cd9660/cd9660_vnops.c,v 1.61
2021.07.19.01.30.24 dholland src/sys/fs/puffs/puffs_vnops.c,v 1.221
2021.07.19.01.30.25 dholland src/sys/fs/tmpfs/tmpfs_fifoops.c,v 1.15
2021.07.19.01.30.25 dholland src/sys/fs/tmpfs/tmpfs_specops.c,v 1.16
2021.07.19.01.33.53 dholland src/sys/miscfs/kernfs/kernfs_vnops.c,v 1.172
2021.07.19.01.34.03 rin src/doc/3RDPARTY,v 1.1808

Logs can be found at:
http://releng.NetBSD.org/b5reports/i386/commits-2021.07.html#2021.07.19.01.34.03
Re: Automated report: NetBSD-current/i386 build failure
On 2021/07/19 10:28, David Holland wrote:
> that is... less than helpful :-(
>
> it looks like CVS randomly didn't commit some of my changes,
> investigating...

Logs below are usually more helpful.

On 2021/07/19 9:42, NetBSD Test Fixture wrote:
> Logs can be found at:
> http://releng.NetBSD.org/b5reports/i386/commits-2021.07.html#2021.07.18.23.57.34

Thanks,
rin
Re: Automated report: NetBSD-current/i386 build failure
On Mon, Jul 19, 2021 at 12:42:49AM +, NetBSD Test Fixture wrote:
> This is an automatically generated notice of a NetBSD-current/i386
> build failure.
>
> The failure occurred on babylon5.netbsd.org, a NetBSD/amd64 host,
> using sources from CVS date 2021.07.18.23.57.34.
>
> An extract from the build.sh output follows:
>
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/install-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/install-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U11}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U11}:C/^/install-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U111}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U111}:C/^/install-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/install-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/dependall-/}
> ${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/install-/}
> *** [build_install] Error code 1
> nbmake[4]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src/lib
> 1 error

that is... less than helpful :-(

it looks like CVS randomly didn't commit some of my changes,
investigating...

-- 
David A. Holland
dholl...@netbsd.org
Automated report: NetBSD-current/i386 build failure
This is an automatically generated notice of a NetBSD-current/i386
build failure.

The failure occurred on babylon5.netbsd.org, a NetBSD/amd64 host,
using sources from CVS date 2021.07.18.23.57.34.

An extract from the build.sh output follows:

${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/install-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/install-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U11}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U11}:C/^/install-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U111}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U111}:C/^/install-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U}:C/^/install-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/dependall-/}
${MAKEDIRTARGET} . ${SUBDIR_GROUP.${:U1}:C/^/install-/}
*** [build_install] Error code 1
nbmake[4]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src/lib
1 error
nbmake[4]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src/lib
nbmake[3]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src
nbmake[2]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src
nbmake[1]: stopped in /tmp/build/2021.07.18.23.57.34-i386/src
nbmake: stopped in /tmp/build/2021.07.18.23.57.34-i386/src

ERROR: Failed to make release

The following commits were made between the last successful build and
the failed build:

2021.07.18.23.56.12 dholland src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c,v 1.73
2021.07.18.23.56.13 dholland src/sys/fs/cd9660/cd9660_vnops.c,v 1.60
2021.07.18.23.56.13 dholland src/sys/fs/efs/efs_vnops.c,v 1.43
2021.07.18.23.56.13 dholland src/sys/fs/hfs/hfs_vnops.c,v 1.39
2021.07.18.23.56.13 dholland src/sys/fs/puffs/puffs_vnops.c,v 1.220
2021.07.18.23.56.13 dholland src/sys/fs/tmpfs/tmpfs_fifoops.c,v 1.14
2021.07.18.23.56.13 dholland src/sys/fs/tmpfs/tmpfs_specops.c,v 1.15
2021.07.18.23.56.13 dholland src/sys/fs/v7fs/v7fs_extern.c,v 1.9
2021.07.18.23.56.13 dholland src/sys/miscfs/fifofs/fifo.h,v 1.27
2021.07.18.23.56.13 dholland src/sys/miscfs/kernfs/kernfs_vnops.c,v 1.171
2021.07.18.23.56.14 dholland src/sys/miscfs/specfs/specdev.h,v 1.45
2021.07.18.23.56.14 dholland src/sys/nfs/nfs_vnops.c,v 1.319
2021.07.18.23.56.14 dholland src/sys/rump/librump/rumpvfs/rumpfs.c,v 1.165
2021.07.18.23.56.14 dholland src/sys/ufs/chfs/chfs_vnops.c,v 1.45
2021.07.18.23.56.14 dholland src/sys/ufs/ext2fs/ext2fs_vnops.c,v 1.134
2021.07.18.23.56.14 dholland src/sys/ufs/ffs/ffs_vnops.c,v 1.136
2021.07.18.23.56.14 dholland src/sys/ufs/lfs/lfs_vnops.c,v 1.338
2021.07.18.23.57.13 dholland src/external/cddl/osnet/dist/uts/common/fs/zfs/zfs_vnops.c,v 1.74
2021.07.18.23.57.13 dholland src/sys/fs/adosfs/advnops.c,v 1.57
2021.07.18.23.57.13 dholland src/sys/fs/cd9660/cd9660_node.h,v 1.17
2021.07.18.23.57.14 dholland src/sys/fs/filecorefs/filecore_node.h,v 1.7
2021.07.18.23.57.14 dholland src/sys/fs/filecorefs/filecore_vnops.c,v 1.49
2021.07.18.23.57.14 dholland src/sys/fs/msdosfs/denode.h,v 1.26
2021.07.18.23.57.14 dholland src/sys/fs/msdosfs/msdosfs_vnops.c,v 1.106
2021.07.18.23.57.14 dholland src/sys/fs/ptyfs/ptyfs_vnops.c,v 1.65
2021.07.18.23.57.14 dholland src/sys/fs/tmpfs/tmpfs_fifoops.h,v 1.9
2021.07.18.23.57.14 dholland src/sys/fs/tmpfs/tmpfs_specops.h,v 1.9
2021.07.18.23.57.14 dholland src/sys/fs/tmpfs/tmpfs_vnops.c,v 1.147
2021.07.18.23.57.14 dholland src/sys/fs/tmpfs/tmpfs_vnops.h,v 1.14
2021.07.18.23.57.14 dholland src/sys/miscfs/deadfs/dead_vnops.c,v 1.65
2021.07.18.23.57.14 dholland src/sys/miscfs/procfs/procfs_vnops.c,v 1.218
2021.07.18.23.57.14 dholland src/sys/miscfs/specfs/spec_vnops.c,v 1.183
2021.07.18.23.57.15 dholland src/sys/miscfs/specfs/specdev.h,v 1.46
2021.07.18.23.57.15 dholland src/sys/nfs/nfs_vnops.c,v 1.320
2021.07.18.23.57.15 dholland src/sys/nfs/nfsnode.h,v 1.75
2021.07.18.23.57.15 dholland src/sys/ufs/ext2fs/ext2fs_vnops.c,v 1.135
2021.07.18.23.57.15 dholland src/sys/ufs/ffs/ffs_vnops.c,v 1.137
2021.07.18.23.57.15 dholland src/sys/ufs/lfs/lfs_vnops.c,v 1.339
2021.07.18.23.57.15 dholland src/sys/ufs/lfs/ulfs_extern.h,v 1.26
2021.07.18.23.57.15 dholland src/sys/ufs/mfs/mfs_extern.h,v 1.32
2021.07.18.23.57.15 dholland src/sys/ufs/mfs/mfs_vnops.c,v 1.63
2021.07.18.23.57.15 dholland src/sys/ufs/mfs/mfsnode.h,v 1.22
2021.07.18.23.57.15 dholland src/sys/ufs/ufs/ufs_extern.h,v 1.87
2021.07.18.23.57.34 dholland src/sys/fs/ptyfs/ptyfs_vnops.c,v 1.66

Logs can be found at:
http://releng.NetBSD.org/b5reports/i386/commits-2021.07.html#2021.07.18.23.57.34
Anyone still using PCI "isp" SCSI / FC controllers?
The Qlogic ISP SCSI / FC driver PCI front-end appears to universally
support using 64-bit PCI DMA addresses, based on my reading of this
code block in isp_pci_dmasetup():

	if (sizeof (bus_addr_t) > 4) {
		if (rq->req_header.rqs_entry_type == RQSTYPE_T2RQS) {
			rq->req_header.rqs_entry_type = RQSTYPE_T3RQS;
		} else if (rq->req_header.rqs_entry_type == RQSTYPE_REQUEST) {
			rq->req_header.rqs_entry_type = RQSTYPE_A64;
		}
	}

There's just one problem, though! It does not use the 64-bit PCI DMA
tag, and so it is always getting DMA addresses that fit in 32 bits. On
x86-64 machines, this results in having to bounce DMA transfers (ick).
On Alpha machines, this results in having to use SGMAP (IOMMU) DMA;
this is not a problem unto itself, and I recently made some
improvements to this on systems where Qlogic ISP controllers were more
likely to be present (e.g. AlphaServer 1000 / 1000A). But there are
some Alpha systems we support (notably the EV6+ Tsunami/Typhoon/Titan
systems, e.g. DS10/DS20/DS25/...) that natively support 64-bit PCI DMA
addressing without having to use SGMAPs ... this is generally preferred
because, among other things, it's faster.

I'm pretty sure it's safe, based on the code block quoted above, to
change the PCI DMA tag selection in the driver to something like this:

	/*
	 * See conditional in isp_pci_dmasetup(); if
	 * sizeof (bus_addr_t) > 4, then we'll program
	 * the device using 64-bit DMA addresses.
	 * So, if we're going to do that, we should do
	 * our best to get 64-bit addresses in the first
	 * place.
	 */
	if (sizeof (bus_addr_t) > 4 && pci_dma64_available(pa)) {
		isp->isp_dmatag = pa->pa_dmat64;
	} else {
		isp->isp_dmatag = pa->pa_dmat;
	}

Anyway, if someone with more knowledge of these controllers could chime
in, I'd really appreciate it. (Hopefully Matt is still lurking on these
mailing lists??)

-- thorpej
netbsd update expanded kernel with netbsdIleNEk file name
Hi,

Today I upgraded my current setup from the latest nycdn image (amd64)
using sysinst (the update flow), and I was surprised that the netbsd
kernel was copied to a file with the unusual name /netbsdIleNEk instead
of /netbsd. I had to copy it back to /netbsd manually. Unfortunately, I
didn't have time to try the upgrade twice, so I am not sure whether
this is a bug in sysinst or just a one-time oddity.

Is anyone seeing the same with the latest image?

Regards,
Andrius V
Re: Weird ldd problem
On Sun, 18 Jul 2021, Chavdar Ivanov wrote:

> # file ksh93
> ksh93: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
> dynamically linked, interpreter /usr/libexec/ld.elf_so, for NetBSD
> 9.99.45, with debug_info, not stripped
> # ldd ./ksh93
> ./ksh93:
>         -lm.0 => /usr/lib/libm.so.0
>         -lc.12 => /usr/lib/libc.so.12
>         -lexecinfo.0 => /usr/lib/libexecinfo.so.0
>         -lelf.2 => /usr/lib/libelf.so.2
>         -lgcc_s.1 => /usr/lib/libgcc_s.so.1
> # ldd ksh93
> ldd: bad execname `ksh93' in AUX vector: No such file or directory

That's coming from the run-time linker: function expand() in
src/libexec/ld.elf_so/expand.c, when it tries to expand $ORIGIN. You
can create such a binary yourself like this:

---
$ cat foo.c
#include <stdio.h>

int
main(int argc, char* argv[])
{
	printf("%s\n", *argv);
	return 0;
}
$ cc -o foo foo.c -Wl,-rpath='$ORIGIN'
$ ldd ./foo
./foo:
	-lc.12 => /usr/lib/libc.so.12
$ ldd foo
ldd: bad execname `foo' in AUX vector: Undefined error: 0
$ readelf -d foo

Dynamic section at offset 0xbb8 contains 18 entries:
  Tag                Type                 Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.12]
 0x000000000000000f (RPATH)              Library rpath: [$ORIGIN]
 0x000000000000000c (INIT)               0x400530
 0x000000000000000d (FINI)               0x4009b0
 0x0000000000000004 (HASH)               0x400210
 0x0000000000000005 (STRTAB)             0x4003c8
 0x0000000000000006 (SYMTAB)             0x400260
 0x000000000000000a (STRSZ)              158 (bytes)
 0x000000000000000b (SYMENT)             24 (bytes)
 0x0000000000000015 (DEBUG)              0x0
 0x0000000000000003 (PLTGOT)             0x600d48
 0x0000000000000002 (PLTRELSZ)           120 (bytes)
 0x0000000000000014 (PLTREL)             RELA
 0x0000000000000017 (JMPREL)             0x4004b0
 0x0000000000000007 (RELA)               0x400468
 0x0000000000000008 (RELASZ)             72 (bytes)
 0x0000000000000009 (RELAENT)            24 (bytes)
 0x0000000000000000 (NULL)               0x0
---

Do a `readelf -d' on both ksh93s. You'll see the difference.

-RVP
Re: Weird ldd problem
On Sun, 18 Jul 2021 at 10:26, RVP wrote:
>
> On Sun, 18 Jul 2021, Chavdar Ivanov wrote:
>
> > # file ksh93
> > ksh93: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
> > dynamically linked, interpreter /usr/libexec/ld.elf_so, for NetBSD
> > 9.99.45, with debug_info, not stripped
> > # ldd ./ksh93
> > ./ksh93:
> >        -lm.0 => /usr/lib/libm.so.0
> >        -lc.12 => /usr/lib/libc.so.12
> >        -lexecinfo.0 => /usr/lib/libexecinfo.so.0
> >        -lelf.2 => /usr/lib/libelf.so.2
> >        -lgcc_s.1 => /usr/lib/libgcc_s.so.1
> > # ldd ksh93
> > ldd: bad execname `ksh93' in AUX vector: No such file or directory
>
> That's coming from the run-time linker:
> function expand() in src/libexec/ld.elf_so/expand.c when it tries
> to expand $ORIGIN. You can create such a binary yourself like this:
>
> ---
> $ cat foo.c
> #include <stdio.h>
>
> int
> main(int argc, char* argv[])
> {
>         printf("%s\n", *argv);
>         return 0;
> }
> $ cc -o foo foo.c -Wl,-rpath='$ORIGIN'
> $ ldd ./foo
> ./foo:
>         -lc.12 => /usr/lib/libc.so.12
> $ ldd foo
> ldd: bad execname `foo' in AUX vector: Undefined error: 0
> $ readelf -d foo
>
> Dynamic section at offset 0xbb8 contains 18 entries:
>   Tag                Type                 Name/Value
>  0x0000000000000001 (NEEDED)             Shared library: [libc.so.12]
>  0x000000000000000f (RPATH)              Library rpath: [$ORIGIN]
>  0x000000000000000c (INIT)               0x400530
>  0x000000000000000d (FINI)               0x4009b0
>  0x0000000000000004 (HASH)               0x400210
>  0x0000000000000005 (STRTAB)             0x4003c8
>  0x0000000000000006 (SYMTAB)             0x400260
>  0x000000000000000a (STRSZ)              158 (bytes)
>  0x000000000000000b (SYMENT)             24 (bytes)
>  0x0000000000000015 (DEBUG)              0x0
>  0x0000000000000003 (PLTGOT)             0x600d48
>  0x0000000000000002 (PLTRELSZ)           120 (bytes)
>  0x0000000000000014 (PLTREL)             RELA
>  0x0000000000000017 (JMPREL)             0x4004b0
>  0x0000000000000007 (RELA)               0x400468
>  0x0000000000000008 (RELASZ)             72 (bytes)
>  0x0000000000000009 (RELAENT)            24 (bytes)
>  0x0000000000000000 (NULL)               0x0
> ---
>
> Do a `readelf -d' on both ksh93s. You'll see the difference.

The main difference is indeed
...
 (RPATH) Library rpath:
 [$ORIGIN/:$ORIGIN/../../lib/libast:$ORIGIN/cmds:$ORIGIN/../../lib/libdll]

I have to read more to understand it though...

>
> -RVP
>

Thanks,

Chavdar
--
Weird ldd problem
Hi,

Not really a problem, but it is bugging me, and I have no clue why it
happens.

I used to follow the ksh93 development branch and regularly built it
for -current. The last one I built was circa February 2020
(${.sh.version} -> 2020.0.0-beta1-222-g8cf92b28), apparently for NetBSD
9.99.45. It still works ok, but I am getting:

# pwd
/usr/local/bin
# ls -l ksh93
-rwxr-xr-x  1 root  wheel  4582528 Feb  6  2020 ksh93
# file ksh93
ksh93: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically
linked, interpreter /usr/libexec/ld.elf_so, for NetBSD 9.99.45, with
debug_info, not stripped
# ldd ./ksh93
./ksh93:
	-lm.0 => /usr/lib/libm.so.0
	-lc.12 => /usr/lib/libc.so.12
	-lexecinfo.0 => /usr/lib/libexecinfo.so.0
	-lelf.2 => /usr/lib/libelf.so.2
	-lgcc_s.1 => /usr/lib/libgcc_s.so.1
# ldd ksh93
ldd: bad execname `ksh93' in AUX vector: No such file or directory
# uname -a
NetBSD marge.lorien.lan 9.99.86 NetBSD 9.99.86 (GENERIC_KASLR) #1: Thu
Jul 15 22:22:02 BST 2021
sysbuild@ymir:/home/sysbuild/amd64/obj/home/sysbuild/src/sys/arch/amd64/compile/GENERIC_KASLR
amd64

Basically, ldd succeeds when given a path, absolute or relative, and
fails when given the bare name, even when the current directory
contains the binary. The same behaviour also occurs with shcomp, which
comes from the same build. I examined the ktruss output from both
executions and could not see any difference right up to the point where
one succeeds and the other fails. I also rebuilt the (rather old)
shells/ast-ksh package and it does not show this behaviour.

Chavdar
--