Hi all, I'm developing an application on a number of platforms, using ELF shared libraries wherever possible and statically linking the rest.
There are six shared libraries:

  a.so   (27k)
  b.so  (115k)
  c.so   (33k)
  d.so    (5k)
  e.so  (170k)
  f.so  (300k)

These total around 655k. The actual application, when dynamically linked against them, is only 140k; when statically linked it grows to about 500k. I can understand why 500k < (655k + 140k), because the static link probably pulls in only the objects actually needed.

The reason I want to use dynamic linking with shared libraries is because I want to run many copies of the actual process without using lots of memory. However, when I run ten of them and do a 'top', it looks as though all of the shared libraries are being loaded for all of the processes. In other words, each process has a top line that looks like this:

  PID    USER   PRI  NI  SIZE  RES  SHRD  STAT  %CPU  %MEM  TIME  COMMAND
  24894  marks    1   0   238  760   964  S      0.3   5.0  0:00  myproc

This tells me that each process is using around 1MB of RAM, which is much too much! Am I linking the thing wrong? What I did was build the normal static libraries, compiled with -fPIC, then use ar to rip out the object files and gcc -shared to create the .so files. This works ;-) Then, when linking the final executable, I just did a -lxxx.so, and it found everything just fine.

Am I mistaken? Will every process that uses a shared library load its own copy into memory? Is this the case for libc5 etc.? Is there another way to link the thing so that only one copy of each shared library is loaded? Perhaps more importantly, if a new copy of every shared library will be loaded for every copy of the executable that I run, surely it would be faster to use static linking throughout?

Regards,
Mark
--
Mark Shuttleworth
Thawte Consulting cc.
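P.S. In case it helps, the build steps described above look roughly like this (library and source names here are placeholders, not the real ones; the actual Makefile is more involved):

  # compile position-independent objects, then build the normal static lib
  gcc -c -fPIC x1.c x2.c
  ar rcs libxxx.a x1.o x2.o

  # rip the objects back out and make the shared version from them
  mkdir tmp && cd tmp
  ar x ../libxxx.a
  gcc -shared -o ../libxxx.so *.o
  cd ..

  # link the final executable; with -L. the linker picks up libxxx.so
  # (at run time the library must be on the loader's search path,
  # e.g. via LD_LIBRARY_PATH or ldconfig)
  gcc -o myproc main.o -L. -lxxx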
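P.P.S. One thing I can't tell from top alone is whether its SIZE/RES columns count shared pages once per process. If the library text really is shared, I'd expect each process's /proc/<pid>/maps to show the same .so files mapped, e.g.:

  # 24894 is the PID from the top line above
  grep '\.so' /proc/24894/maps

If all ten processes show the same libraries mapped, then presumably the pages are shared and top is simply charging them to every process.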

