On Thu, Dec 24, 2009 at 11:18:49AM +, Nigel Kukard wrote:
Hi guys,
Just curious if there is a technical reason why
pthread_condattr_[gs]etpshared aren't implemented?
AFAICS it's implemented in linuxthreads.old and NPTL. It's required for
SUS (Single UNIX Specification) threads.
Sorry for cross-posting this, I think the uclibc mailing list may be a
better suited ML.
I'm trying to track down a segfault when building and running an i586
buildroot.
Now in buildroot:
In the uClibc 0.9.29 config, UCLIBC_USE_NETLINK is disabled and it works 100%.
In the uclibc 0.9.30 config
On Sat, 2008-03-29 at 16:25 +0100, Denys Vlasenko wrote:
On Saturday 29 March 2008 07:22, Nigel Kukard wrote:
Stupid busybox, it didn't export the env variable; I'm rebuilding a
static sh now.
What version of stupid busybox is that, which shell (ash I think?),
and what
On Sun, 2008-03-30 at 12:45 +0200, Joakim Tjernlund wrote:
Well, it would be generally nice if the new linuxthreads would work on
x86 if you find the time, since I don't quite see much progress in the
NPTL camp.
I'm lucky having nptl on my sh4 ;-)
I don't know if there is
Hi,
Can't see anything, I think you should add printouts in __uClibc_init()
to see if you get there, use the write() sys call as I don't think you
can use any of the libc print functions.
Non PIE rpm works I guess?
Does rpm work in glibc, both PIE and non PIE?
Jocke
_malloc:921:
_dl_get_ready_to_run:839: We got here: 839, func = U��S���
Segmentation fault
Regards
Nigel
On Thu, 2008-03-27 at 14:21 +, Nigel Kukard wrote:
Ok,
I've tracked this error now in uclibc svn to these lines in
ldso/ldso.c...
Segfault now occurs on that line ...
if (tpnt->dynamic_info[DT_INIT
On Thu, 2008-03-27 at 16:52 +0100, Joakim Tjernlund wrote:
On Thu, 2008-03-27 at 14:56 +, Nigel Kukard wrote:
I'm dumping loadaddr and func just before that segfault so ignore the
line numbers (i have half a gazillion lines of debugging) the only
thing that changes is the loadaddr
_dl_get_ready_to_run:838: We got here: 838, loadaddr = 0xb7b33000
_dl_get_ready_to_run:839: We got here: 839, func = U��S���
Segmentation fault
Good for now; I'd rather have the debug built into ldso than your hack,
as I know the ones in ldso.
I can rebuild everything and remove my
Hmm, shouldn't the func address change when the loadaddr changes?
Not sure if it's a func address or a string, I just printed it with %s ;)
It is an address; print tpnt->loadaddr, tpnt->dynamic_info[DT_INIT] and
dl_elf_func.
dl_elf_func should be tpnt->loadaddr + tpnt->dynamic_info[DT_INIT].
Hmm, shouldn't the func address change when the loadaddr changes?
Not sure if it's a func address or a string, I just printed it with %s ;)
Shouldn't you have a pretty good chance of segfaulting just by virtue of
treating a random address as %s?
More than likely; I stopped when I saw the segfault.
Hi,
OK, here is a vanilla uClibc from SVN; it's the x86 architecture,
i386/pentium-mmx.
$ rpm
argc=1 argv=0xbfbe8094 envp=0xbfbe809c
[SNIP]
_dl_malloc:926: mmapping more memory
_dl_get_ready_to_run:748: Beginning relocation fixups
_dl_get_ready_to_run:831: calling INIT:
This trace looks like it is missing LD_DEBUG=1 rpm or LD_DEBUG=all rpm,
such a trace can get very big so you need to trim it down before
posting. You also need SUPPORT_LD_DEBUG=y in .config
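Putting that advice together, the capture looks roughly like this (using `/bin/true` as a stand-in target; substitute the real binary, here `rpm`):

```shell
# Requires SUPPORT_LD_DEBUG=y in the uClibc .config, then a rebuilt ldso.
# Exporting matters: under some shells (the busybox issue in this thread)
# the variable may never reach the dynamic linker otherwise.
export LD_DEBUG=all
/bin/true 2> ld-debug.log        # the trace goes to stderr
unset LD_DEBUG
head -n 50 ld-debug.log          # trim before posting; full traces are huge

# LD_DEBUG=help makes ldso print the supported categories
LD_DEBUG=help /bin/true > ld-help.log 2>&1 || true
```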
Stupid busybox, it didn't export the env variable; I'm rebuilding a
static sh now.
Hi Carmelo,
I'm trying to trace a segfault in ldso when running a PIE-compiled
binary under uClibc.
Hello,
recently there have been some fixes to ld.so to cope with
problems in PIE applications.
I suggest you check whether the latest SVN revision works for you.
Cheers,