Todd Fiala wrote:
> "You can't possibly do that, so many other architectures have 8-bit bytes, and so this proposal would make them harder to enhance, for the benefit of (currently) just kalimba"

> From a design principle, I would probably vote for having non-8-bit bytes added as a layer on top of memory accesses that is only used for systems that need it. I'd strongly prefer not to have the generality of n-bit (or multiple-of-8-bit) bytes permeate throughout the lldb code base. I can see that becoming challenging from a maintainability standpoint (testing, code complexity, etc.).

> I could imagine a layer you build on top of memory accesses that knows how to run out and grab the right underlying cpu_arch-specific bytes to satisfy the request, and only wiring that in to the low-level memory access points (the memory read/write virtual methods, like Process::DoReadMemory() and friends).
I've started working on this on my local copy, and it's proving hard to confine to a single layer. For example, inside CommandObjectMemoryRead::DoExecute, buffers are allocated expecting a certain number of host bytes to be written into them. If the target being debugged has, for example, 24-bit bytes, I clearly need to scale that buffer size to account for the discrepancy between host and target bytes.

e.g. in CommandObjectMemoryRead::DoExecute:

@@ -622,6 +622,8 @@
                 total_byte_size = 32;
         }
 
+        total_byte_size *= target->GetArchitecture().GetTargetDataByteSize();
+
         if (argc > 0)
             addr = Args::StringToAddress(&m_exe_ctx, command.GetArgumentAtIndex(0), LLDB_INVALID_ADDRESS, &error);
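
To make the scaling concrete with hypothetical numbers (the ratio below is illustrative, not necessarily the real kalimba value): if GetTargetDataByteSize() reports how many host 8-bit bytes represent one target byte, then a target whose 24-bit bytes are modelled as 3 host bytes each turns a 32-target-byte read into a 96-host-byte allocation:

// Illustrative only: assumed ratio of 3 host bytes per 24-bit target byte.
const uint32_t host_per_target = 3;   // hypothetical GetTargetDataByteSize() result
size_t total_byte_size = 32;          // target bytes requested by the user
total_byte_size *= host_per_target;   // 96 host bytes must be allocated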


An additional challenge I'm seeing: when we read memory direct from the ELF file (without an attached process), I discovered that our toolchain, when building an image for a 24-bit target, sets aside 32 bits per target byte rather than 24. (I'm told this is because part of our toolchain was based on binutils.) So a set of additional APIs in ArchSpec is required to configure this:

uint32_t GetObjectFileDataByteSize() const;
uint32_t GetObjectFileCodeByteSize() const;

in addition to

uint32_t GetTargetDataByteSize() const;
uint32_t GetTargetCodeByteSize() const;
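
As a rough sketch of what these accessors might look like (the core enum and the ratios here are placeholders I'm using for illustration, not the real kalimba values):

// Sketch only: how many host (8-bit) bytes represent one target byte,
// in target memory vs. in the object file. Core name and ratios are
// placeholders for illustration.
uint32_t
ArchSpec::GetTargetDataByteSize() const
{
    switch (m_core)
    {
    case eCore_kalimba:     // hypothetical core: a 24-bit data byte
        return 3;           // occupies 3 host bytes in target memory
    default:
        return 1;           // conventional 8-bit-byte architectures
    }
}

uint32_t
ArchSpec::GetObjectFileDataByteSize() const
{
    switch (m_core)
    {
    case eCore_kalimba:     // hypothetical core: the toolchain pads each
        return 4;           // 24-bit byte to 32 bits in the ELF image
    default:
        return 1;
    }
}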

It turned out that the byte size in my object file also needs to be considered by lldb when searching for sections in the ELF file,

e.g. in Section::ContainsFileAddress:

@@ -186,7 +190,7 @@
     {
         if (file_addr <= vm_addr)
         {
-            const addr_t offset = vm_addr - file_addr;
+            const addr_t offset = (vm_addr - file_addr) * m_host_per_target_bytes;
             return offset < GetByteSize();
         }
     }

So it's difficult/impossible to have the host_per_target_bytes scaling hidden away in only one place.
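
The best I can do, I think, is funnel the conversions through named helpers so the scaling sites are at least explicit and greppable. Something like the following (the helper names are mine, not existing lldb API):

// Hypothetical helpers, not existing lldb API: one named place to express
// the target-byte -> host-byte scaling instead of bare multiplications.
static inline lldb::addr_t
TargetBytesToHostBytes (const ArchSpec &arch, lldb::addr_t target_bytes)
{
    return target_bytes * arch.GetTargetDataByteSize();
}

static inline lldb::addr_t
ObjectFileBytesToHostBytes (const ArchSpec &arch, lldb::addr_t file_bytes)
{
    return file_bytes * arch.GetObjectFileDataByteSize();
}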


> That's just my take, maintainability and testability being the key drivers here.

Yes. My goal is to add some more cases to our test suite, e.g. doing raw memory reads from various targets and validating how the data is presented back. I certainly won't submit anything too controversial until this is in place.

Matt

