On 3. 6. 25 21:46, Timofei Zhakov wrote:
Hi!

This would be very helpful, for several reasons. But it also carries several disadvantages. Everything I've found is explained below:

- Currently we need to create and configure two separate targets for the serf library (serf_static and serf_shared), one for the shared build and one for the static build. This can be hard to maintain, since anything related to new functionality has to be added to both targets and tested separately.

Nope. There is one list of source files and headers, it just happens to be compiled twice.
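A minimal sketch of what "one list, compiled twice" looks like in CMake (the target names come from this thread; the SERF_SOURCES variable and file names are hypothetical, not Serf's actual build):

```cmake
# One shared list of sources -- a sketch, not the real CMakeLists.txt.
set(SERF_SOURCES
    src/context.c     # hypothetical file names
    src/buckets.c)

# The same list feeds both targets; new files are added in one place.
add_library(serf_static STATIC ${SERF_SOURCES})
add_library(serf_shared SHARED ${SERF_SOURCES})
```

Maintenance cost is one list, even though the compiler runs over it twice.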

- It would be less confusing to produce and install just this one main target.


It would also be completely wrong on Unix. I'll explain below.


- CMake has a pretty useful framework that handles different library types. If add_library() is called without explicitly specifying SHARED or STATIC, CMake will automatically determine the type based on the BUILD_SHARED_LIBS option.

That's a different issue.
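For reference, the CMake behaviour being described is this (a sketch; the target name is just illustrative):

```cmake
# With no STATIC/SHARED keyword, the library type follows BUILD_SHARED_LIBS:
add_library(serf ${SERF_SOURCES})

# cmake -DBUILD_SHARED_LIBS=ON   -> serf is built as a shared library
# cmake -DBUILD_SHARED_LIBS=OFF  -> serf is built static (the default)
```

That gives you one target whose type is chosen at configure time, which is exactly why it doesn't address the build-both-variants case discussed here.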

- It takes twice as much time to build a second instance of the same library, one of which you wouldn't actually use (in most scenarios).

Right. This is the "below" where I'll explain. There is a fundamental difference between how shared libraries are handled on Unix (or rather, Unix-like, ELF and earlier object-format targets) and DLLs on Windows.

On Windows, all compiled code is position-dependent, whether it's packaged into static libraries or DLLs. It also has a section with relocation information, basically telling the linker or dynamic loader how to change the compiled code on the fly so that it can run at a different load address. DLLs have a "target" virtual address to which they expect to be loaded. When you link with a static library, the linker does the relocation behind the scenes, in order to match the executable. DLLs, however, are relocated by the dynamic object loader at run-time. When two different programs use the same DLL, and they can't for some reason both map it to the same virtual address, you essentially end up with two copies of the DLL in memory -- since one of those has to be relocated, again. That's ... less than efficient.

On Unix, however, static libraries keep their relocation info, but shared libraries are compiled to position-independent machine code (PIC), so that it doesn't matter where in the virtual address space they're loaded -- they'll just always work. PIC is less efficient, because it has to make all references relative to the program counter. Depending on the CPU architecture, this may involve using one or more offset registers, reducing the number of CPU registers available to application code. The benefit is that you'll only ever have one copy of the shared library in memory and different programs will all use that copy, even if they see it at different virtual addresses.

So, yes, exactly -- we compile the code twice, once to PIC for the shared library and once as position-dependent, relocatable (and possibly faster, more efficient) code for the static libraries.
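In CMake terms, the PIC-versus-relocatable split above looks roughly like this (a sketch under the two-target scheme from this thread):

```cmake
# Shared libraries are always compiled as PIC (-fPIC with GCC/Clang);
# CMake adds the flag automatically for SHARED targets.
add_library(serf_shared SHARED ${SERF_SOURCES})

# Static libraries default to position-dependent, relocatable code.
# Opting in to PIC for a static archive (e.g. so it can later be linked
# into someone else's shared object) has to be requested explicitly:
add_library(serf_static STATIC ${SERF_SOURCES})
# set_target_properties(serf_static PROPERTIES POSITION_INDEPENDENT_CODE ON)
```

Because the object files genuinely differ between the two targets, there is no way around compiling the sources twice if you want both variants.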

- However, the tests are currently set up so that they compile only against the static version.

Yeah, that's just laziness.

- On the other hand, this means that you can't actually test the shared version -- the one that you'll most probably use. So it would be better to make the tests work with the shared libs as well. It seems pretty easy to do, since the only obstacle is a few hidden private symbols.

You actually can (and should) test against the shared lib, the catch is that we aren't setting the runpath for that. Basically, you'd set the runpath at build time so that the tests can use the library, then at install time, you'd modify it to the install prefix. That, or require the shared lib to be installed before the tests can run. Or, as we can do on macOS, use a relative runpath. It's just a matter of choosing one way and implementing it.
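The runpath handling described above can be expressed in CMake roughly like this (a sketch of the options, not Serf's actual configuration):

```cmake
# Build tree: let test executables find the just-built shared library
# without installing it first.
set(CMAKE_BUILD_RPATH "${CMAKE_BINARY_DIR}")

# Install tree: CMake rewrites the runpath at install time to point at
# the installed library location.
set(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")

# macOS alternative: a runpath relative to the executable itself.
# set(CMAKE_INSTALL_RPATH "@loader_path/../lib")
```

Any one of these choices would let the test suite link and run against the shared library; the point is simply to pick one and implement it.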

That's it... Maybe I missed some points...

I wouldn't exactly say "missed".  ;)

-- Brane
