On 08/04/2015 10:26 AM, Alex Bennée wrote:
>
> Richard Henderson <r...@twiddle.net> writes:
>
>> On 08/03/2015 02:14 AM, Alex Bennée wrote:
>>> Each individual architecture needs to use the qemu_log_in_addr_range()
>>> feature for enabling in_asm and marking blocks for op/opt_op output.
>>>
>>> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
>>> ---
>>>  target-arm/translate-a64.c | 6 ++++--
>>>  target-arm/translate.c     | 6 ++++--
>>>  2 files changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/target-arm/translate-a64.c b/target-arm/translate-a64.c
>>> index 689f2be..0b0f4ae 100644
>>> --- a/target-arm/translate-a64.c
>>> +++ b/target-arm/translate-a64.c
>>> @@ -11026,7 +11026,8 @@ void gen_intermediate_code_internal_a64(ARMCPU *cpu,
>>>              gen_io_start();
>>>          }
>>>
>>> -        if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT))) {
>>> +        if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT) &&
>>> +                     qemu_log_in_addr_range(dc->pc))) {
>>>              tcg_gen_debug_insn_start(dc->pc);
>>>          }
>>
>> If there's more than one or two ranges, it's probably quicker to
>> generate the debug opcode regardless of the range. Remember, this
>> check is happening once per insn, not once per tb.
>
> Maybe I should hoist the check up to the start of a block? This would
> mean we would dump all instructions in a block even if they went past
> the end-point, but the reverse case is probably just confusing.
>
> We'll still not dump anything that starts outside the range.
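For illustration, the hoisted variant being suggested might look roughly
like this. It is only a sketch: the cached log_insn_start local and the
shape of the translation loop are assumptions for the example, not part of
the patch; qemu_loglevel_mask(), qemu_log_in_addr_range() and
tcg_gen_debug_insn_start() are the helpers used in the hunk above.

    /* Evaluate the filter once per TB, keyed on the block's start
     * address, rather than re-checking it for every instruction. */
    bool log_insn_start =
        qemu_loglevel_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT) &&
        qemu_log_in_addr_range(pc_start);

    do {
        /* ... existing per-insn setup ... */
        if (unlikely(log_insn_start)) {
            tcg_gen_debug_insn_start(dc->pc);
        }
        /* ... decode and translate the instruction ... */
    } while (!dc->is_jmp); /* other loop-exit conditions elided */

The whole block then follows the decision made for pc_start, so
instructions that run past the end of the range are still dumped.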
Why hoist when the loglevel_mask check is so quick? Processing of these
debug opcodes is equally quick. It's really only the dumping of the
opcodes elsewhere that needs to check the addr_range.

r~
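To make that concrete, the alternative is to leave the per-insn debug
opcodes unconditional and apply the address filter only where the opcode
dump is actually produced, once per TB. A rough sketch, assuming the dump
site has the TB's start address (tb->pc) and the TCGContext (s) in scope;
the surrounding code is modelled on QEMU's tcg.c of that era, not quoted
from the tree:

    if (unlikely(qemu_loglevel_mask(CPU_LOG_TB_OP)
                 && qemu_log_in_addr_range(tb->pc))) {
        /* Runs once per TB, and only when the op log is enabled,
         * so the extra range check is effectively free here. */
        qemu_log("OP:\n");
        tcg_dump_ops(s);
        qemu_log("\n");
    }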