On Sun, Mar 31, 2013 at 08:07:23PM +0400, vasily postnicov wrote:
> Also, I wonder why the LDT is used at all. Wikipedia says that
> descriptor tables were used in the i286 as a memory addressing
> mechanism prior to paging (AFAIK, paging first appeared in the i386).
> With descriptor tables it was possible to translate "logical"
> addresses to real physical addresses (sorry if I use the wrong
> terminology). With the LDT it was possible to translate the same
> "logical" addresses to different physical addresses for different
> processes, as I understand it. Wikipedia says: "Therefore, modern
> 32-bit <http://en.wikipedia.org/wiki/32-bit> x86 operating systems
> use the LDT very little, primarily to run legacy 16-bit
> <http://en.wikipedia.org/wiki/16-bit> code." I do not know what the
> LDT has to do with TLS. Can you explain this?

Segmentation, segment descriptor tables, and the segment selector
registers are not gone in 32-bit and 64-bit x86. You must still set up
a global descriptor table and segments, because the processor still
consults them on address translation, but most code sets all segment
bases to 0 and all segment limits to 4 GiB (on 32-bit), and relies on
paging for address translation.

TLS works because each thread has its own register context; the simple
explanation is that each thread's %FS value points at the base of that
thread's block of per-thread variables.
However, the story is not so simple; %FS does not hold a full
pointer-sized value and cannot be used like a general-purpose register.
It is instead an index into a table, either the GDT or the LDT. The %FS
index is looked up in the appropriate table, the base from that table
entry is added to the address you are accessing, and a limit check is
performed.
Manipulating the LDT is generally done by privileged code, which is why
you have the set_ldt/get_ldt syscall dance above.

HTH,
-- vs;
