On Thu, Feb 01, 2018 at 04:30:10PM +0100, Nesrine Zouari wrote:
> I am a computer engineering student and I am currently working on my
> graduation project at Lauterbach. The project is about QEMU trace
> and in the future I would like to contribute this work to the mainline.
>
> My project is divided into two parts:
>
> 1/ Collecting the guest trace data: the trace solution should be able
> to provide:
>
> a/ Instruction flow trace
>
> b/ Memory read/write accesses
>
> c/ Time stamps
>
> d/ For tracing rich operating systems that use an MMU, we
> additionally need to trace the task switches.
Lluís has done the most instrumentation work in qemu.git and can explain
the current status.

The focus in QEMU is more on functional simulation than on low-level
instrumentation, so the instrumentation facilities are not very rich.
Code changes will be required to get the information you need. To be
suitable for upstream, they should not be too invasive or impact
performance significantly.

Which CPU architecture are you targeting?

> 2/ Sending the collected data to a third-party tool for analysis.
>
> My question is about the first part. I would like to know which trace
> backend best fits my use case.

LTTng UST has the highest-performance tracing interface. It uses shared
memory to export trace data efficiently to a collector or analysis
process.

It is probably not necessary to invent your own tracer or interface for
capturing trace data. I suggest looking into LTTng UST and trying it
out. The basic idea would be:

1. Add the missing trace events to QEMU.
2. Build with ./configure --enable-trace-backend=ust && make.
3. Use the LTTng tools or write your own collector using the LTTng
   libraries.
4. Enable the trace events you need for instruction flow, memory
   accesses, and task switching.

The QEMU code changes involved would be adding entries to the
trace-events file and placing those trace events into the TCG and/or
memory API code to record the necessary information.

Stefan
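P.S. To make step 1 concrete, a new event would be declared in QEMU's
trace-events file. The event name, arguments, and format string below
are hypothetical placeholders for illustration, not an existing QEMU
event:

```
# trace-events fragment -- hypothetical event for guest memory tracing;
# name, arguments, and format string are illustrative only
guest_mem_access(uint64_t vaddr, uint32_t size, bool is_write) "vaddr=0x%"PRIx64" size=%u write=%d"
```

QEMU's tracetool then generates a trace_guest_mem_access() helper that
can be called from the memory API code. When built with
--enable-trace-backend=ust, each such event becomes an LTTng-UST
tracepoint, which should be enableable at runtime with something like
"lttng enable-event -u 'qemu:guest_mem_access'".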