Looking over the Apple code path in Host::GetArchitecture, I'm a little
confused about why all this custom logic is needed.  In what situations
will llvm::sys::getDefaultTargetTriple() return something other than
what we want?  A concrete example would help illustrate the problem.

I understand from the comment that it has something to do with being able
to run 32- and 64-bit executables on the same operating system.  Isn't
that the case everywhere?  I can run 32-bit executables on x64 Windows as
well.  llvm::Triple has a function called get32BitArchVariant() which I
thought returns a 32-bit triple for exactly this case.  Does that not
work for some Apple configuration?
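
For concreteness, here's roughly what I was picturing (a minimal,
untested sketch; it assumes the default triple on a 64-bit Darwin host
comes back as something like x86_64-apple-darwin):

#include "llvm/ADT/Triple.h"
#include "llvm/Support/Host.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  // Construct a Triple from the host's default target triple string.
  llvm::Triple host_triple(llvm::sys::getDefaultTargetTriple());
  // Derive the 32-bit variant, e.g.
  // x86_64-apple-darwin -> i386-apple-darwin.
  llvm::Triple triple_32 = host_triple.get32BitArchVariant();
  llvm::outs() << host_triple.str() << " -> " << triple_32.str() << "\n";
  return 0;
}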

It seems like this logic could be sunk into llvm::Triple somehow.
Conceptually speaking, there should be only two cases:

64-bit host:
host_arch_64 = llvm::Triple(llvm::sys::getDefaultTargetTriple())
host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple()).get32BitArchVariant()

32-bit host:
host_arch_64 = <empty>
host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple())
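
Put another way, I'd expect something like this sketch to be enough
(the isArch64Bit() check is my guess at how the two cases would be
distinguished; the host_arch names just mirror the Host code):

#include "llvm/ADT/Triple.h"
#include "llvm/Support/Host.h"

static void GetHostArchs(llvm::Triple &host_arch_64,
                         llvm::Triple &host_arch_32) {
  llvm::Triple default_triple(llvm::sys::getDefaultTargetTriple());
  if (default_triple.isArch64Bit()) {
    // 64-bit host: the default triple is the 64-bit arch, and the
    // 32-bit arch is derived from it.
    host_arch_64 = default_triple;
    host_arch_32 = default_triple.get32BitArchVariant();
  } else {
    // 32-bit host: no 64-bit arch; the default triple is the 32-bit one.
    host_arch_64 = llvm::Triple();  // <empty>
    host_arch_32 = default_triple;
  }
}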

Why doesn't this work?
