> On Aug 19, 2014, at 1:52 PM, Zachary Turner <ztur...@google.com> wrote:
> 
> Looking over the apple code path for Host::GetArchitecture, I'm a little 
> confused about why all this custom logic is needed.  What are the situations 
> in which llvm::sys::getDefaultTargetTriple() will return something other than 
> what we want?  Specifically, a concrete example might help illustrate the 
> problem.
> 
> I understand from the comment that it has something to do with being able to 
> run 32- and 64-bit executables on the same operating system.  Isn't this the 
> case everywhere?  I can run 32-bit executables on Windows x64 as well.  
> llvm::Triple has a function called get32BitArchVariant() that I thought 
> returned a 32-bit triple for this case.  Does this not work for some Apple 
> configuration?
> 
> It seems like it should be possible to sink this logic into llvm::Triple 
> somehow.  Conceptually speaking, there should be two cases:
> 
> 64-bit host:
> host_arch_64 = llvm::Triple(llvm::sys::getDefaultTargetTriple())
> host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple()).get32BitArchVariant()
> 
> 32-bit host:
> host_arch_64 = <empty>
> host_arch_32 = llvm::Triple(llvm::sys::getDefaultTargetTriple())
> 
> Why doesn't this work?

I am not sure this would work. At Apple we can have either:

x86_64h-apple-macosx or x86_64-apple-macosx for 64-bit-capable machines. I am 
not sure which of those llvm::sys::getDefaultTargetTriple() returns; I highly 
doubt it detects x86_64h, though I could be wrong. A quick check like the one 
sketched below would settle it.

i386-apple-macosx for 32-bit machines.
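
For reference, here is a minimal way to check (just a sketch, not LLDB code; 
it assumes only llvm::sys::getDefaultTargetTriple() and 
llvm::sys::getHostCPUName(), both declared in llvm/Support/Host.h):

  // check_host_triple.cpp -- print what LLVM reports for this host.
  #include "llvm/Support/Host.h"
  #include <cstdio>

  int main() {
    // If this prints x86_64-apple-macosx... rather than x86_64h-... on a
    // Haswell Mac, then getDefaultTargetTriple() alone is not enough here.
    printf("default triple: %s\n",
           llvm::sys::getDefaultTargetTriple().c_str());
    // getHostCPUName() does real CPU detection; my assumption is that it
    // reports "haswell" on Haswell hardware.
    printf("host cpu: %s\n", llvm::sys::getHostCPUName().str().c_str());
    return 0;
  }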

I am happy to switch over if llvm::sys::getDefaultTargetTriple() properly 
detects x86_64h on Haswell-enabled Macs. I will let others comment on their 
systems (FreeBSD, Linux, Windows, MinGW).
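
For what it is worth, the quoted two-case proposal would look roughly like 
this against the llvm::Triple API (a sketch only, not the actual 
Host::GetArchitecture implementation; whether `host` ever comes back as 
x86_64h is exactly the open question above):

  #include "llvm/ADT/Triple.h"
  #include "llvm/Support/Host.h"

  // Compute 64- and 32-bit host triples per the proposed two-case scheme.
  static void ComputeHostArchs(llvm::Triple &host_arch_64,
                               llvm::Triple &host_arch_32) {
    llvm::Triple host(llvm::sys::getDefaultTargetTriple());
    if (host.isArch64Bit()) {
      host_arch_64 = host;                        // e.g. x86_64-apple-macosx
      host_arch_32 = host.get32BitArchVariant();  // e.g. i386-apple-macosx
    } else {
      host_arch_64 = llvm::Triple();              // empty on a 32-bit host
      host_arch_32 = host;
    }
    // If the default triple never carries x86_64h, something would still
    // have to upgrade the arch from the host CPU, e.g. (untested assumption):
    //   if (llvm::sys::getHostCPUName() == "haswell") ... use x86_64h ...
  }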


