Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Emilio G. Cota
On Wed, Jun 07, 2017 at 18:45:10 +0300, Lluís Vilanova wrote:
> If there is such renewed interest, I will carve a bit more time to bring the
> patches up to date and send the instrumentation ones for further discussion.

I'm very interested and have time to spend on it -- I'm working on a
simulator backend and would like to move ASAP from QSim[1] to QEMU proper for
the front-end. BTW I left some comments/questions a few days ago on the
v7 patchset you sent in January (ouch!).

I can also help with testing or bringing patches up to date -- let me
know if you need any help.

Thanks,

Emilio

[1] http://manifold.gatech.edu/projects/qsim-a-multicore-emulator/



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Paolo Bonzini


On 07/06/2017 17:52, Lluís Vilanova wrote:
> Paolo Bonzini writes:
> 
>> On 07/06/2017 14:07, Peter Maydell wrote:
>>>> My understanding was that adding a public instrumentation interface would add
>>>> too much code maintenance overhead for a feature that is not in QEMU's core
>>>> target.
>>> Well, it depends what you define as our core target :-)
>>> I think we get quite a lot of users that want some useful ability
>>> to see what their guest code is doing, and these days (when
>>> dev board hardware is often very cheap and easily available)
> 
>> and virtualization is too...
> 
> Actually, in this case I was thinking of some way to transition between KVM and
> TCG back and forth to be able to instrument a VM at any point in time.

That's not really easy because KVM exposes different hardware (on x86:
kvmclock, hypercalls, x2apic, more MSRs).  But we are digressing.

>> Related to this is also Alessandro's work to librarify TCG (he has a
>> TCG->LLVM backend for example).
> 
> Maybe I misunderstood, but that would be completely orthogonal, even though
> instrumentation performance might benefit from LLVM's advanced IR
> optimizers.

It is different, but it shows the interest in bringing QEMU's
translation engine (the front-end in Alessandro's case, the back-end in
yours) beyond the simple use case of dynamic recompilation.

Paolo

> But this goes a long way to hot code identification and asynchronous
> optimization (since code that is not really hot will just run faster with
> simpler optimizations, like in the TCG compiler). This actually sounds pretty
> much like Java's HotSpot, certainly a non-trivial effort.
> 
> 
> Cheers,
>   Lluis
> 



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Peter Maydell
On 7 June 2017 at 16:45, Lluís Vilanova wrote:
> This speed comes at the cost of exposing TCG operations to the instrumentation
> library (i.e., the library can inject TCG code; AFAIR, calling out into a
> function in the instrumentation library is slower than PIN).

Mmm, that's awkward. I'm not sure I'd really like to allow arbitrary
user instrumentation to inject TCG code: it's exposing our rather
changeable internals to the user, and it's a more complicated
interface to understand. For a user-facing API (as opposed to
instrumentation interfaces within QEMU, which we use to implement
something more simplified to present to the user) I would favour
a clean and straightforward API over pure speed.

thanks
-- PMM



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Alex Bennée

Lluís Vilanova writes:

> Paolo Bonzini writes:
>
>> On 07/06/2017 14:07, Peter Maydell wrote:
>>>> My understanding was that adding a public instrumentation interface would add
>>>> too much code maintenance overhead for a feature that is not in QEMU's core
>>>> target.
>>> Well, it depends what you define as our core target :-)
>>> I think we get quite a lot of users that want some useful ability
>>> to see what their guest code is doing, and these days (when
>>> dev board hardware is often very cheap and easily available)
>
>> and virtualization is too...
>
> Actually, in this case I was thinking of some way to transition between KVM and
> TCG back and forth to be able to instrument a VM at any point in time.

While we are blue-sky thinking, another fun thing might be doing system
emulation without SoftMMU but instead using the host's virtualized page
tables (i.e. running TCG code inside KVM). Obviously there are mapping
issues given differing page sizes and the like, but it would save the
SoftMMU overhead.
>
>
>>> I think that's a lot of the value that emulation can bring to
>>> the table. Obviously we would want to try to do it in a way
>>> that is low-runtime-overhead and is easy to get right for
>>> people adding/maintaining cpu target frontend code...
>
>> Indeed.  I even sometimes use TCG -d in_asm,exec,int for KVM unit tests,
>> because it's easier to debug them that way :) so introspection ability
>> is welcome.
>
> AFAIR, Blue Swirl once proposed to use the instrumentation features to implement
> unit tests.
>
>
>> Related to this is also Alessandro's work to librarify TCG (he has a
>> TCG->LLVM backend for example).
>
> Maybe I misunderstood, but that would be completely orthogonal, even though
> instrumentation performance might benefit from LLVM's advanced IR
> optimizers. But this goes a long way to hot code identification and asynchronous
> optimization (since code that is not really hot will just run faster with
> simpler optimizations, like in the TCG compiler). This actually sounds pretty
> much like Java's HotSpot, certainly a non-trivial effort.

--
Alex Bennée



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Lluís Vilanova
Paolo Bonzini writes:

> On 07/06/2017 14:07, Peter Maydell wrote:
>>> My understanding was that adding a public instrumentation interface would add
>>> too much code maintenance overhead for a feature that is not in QEMU's core
>>> target.
>> Well, it depends what you define as our core target :-)
>> I think we get quite a lot of users that want some useful ability
>> to see what their guest code is doing, and these days (when
>> dev board hardware is often very cheap and easily available)

> and virtualization is too...

Actually, in this case I was thinking of some way to transition between KVM and
TCG back and forth to be able to instrument a VM at any point in time.


>> I think that's a lot of the value that emulation can bring to
>> the table. Obviously we would want to try to do it in a way
>> that is low-runtime-overhead and is easy to get right for
>> people adding/maintaining cpu target frontend code...

> Indeed.  I even sometimes use TCG -d in_asm,exec,int for KVM unit tests,
> because it's easier to debug them that way :) so introspection ability
> is welcome.

AFAIR, Blue Swirl once proposed to use the instrumentation features to implement
unit tests.


> Related to this is also Alessandro's work to librarify TCG (he has a
> TCG->LLVM backend for example).

Maybe I misunderstood, but that would be completely orthogonal, even though
instrumentation performance might benefit from LLVM's advanced IR
optimizers. But this goes a long way to hot code identification and asynchronous
optimization (since code that is not really hot will just run faster with
simpler optimizations, like in the TCG compiler). This actually sounds pretty
much like Java's HotSpot, certainly a non-trivial effort.


Cheers,
  Lluis



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Lluís Vilanova
Peter Maydell writes:

> On 7 June 2017 at 12:12, Lluís Vilanova wrote:
>> My understanding was that adding a public instrumentation interface would add
>> too much code maintenance overhead for a feature that is not in QEMU's core
>> target.

> Well, it depends what you define as our core target :-)
> I think we get quite a lot of users that want some useful ability
> to see what their guest code is doing, and these days (when
> dev board hardware is often very cheap and easily available)
> I think that's a lot of the value that emulation can bring to
> the table. Obviously we would want to try to do it in a way
> that is low-runtime-overhead and is easy to get right for
> people adding/maintaining cpu target frontend code...

In that case I would say that QEMU is now much more in line with what I
proposed. The mechanisms I have (and most have been sent here in the form of
patch series) are architecture-agnostic (the generic code translation loop I
RFC'ed some time ago) and provide relatively good performance.

I did some tests tracing memory accesses of SPEC benchmarks on x86-64, and QEMU
is faster than PIN in most cases. Even better, it works for any
guest architecture and for both apps and full systems.

This speed comes at the cost of exposing TCG operations to the instrumentation
library (i.e., the library can inject TCG code; AFAIR, calling out into a
function in the instrumentation library is slower than PIN). I have a separate
project that translates a higher-level language into the TCG instrumentation
primitives (providing something like PIN's instrumentation auto-inlining), but I
think that's a completely separate discussion.
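
For illustration only, the shape of such a hook might be as follows (all
names here are invented for this sketch, not the actual API from the patch
series):

  /* hypothetical sketch: the instrumentation library is called back at
   * translation time and emits operations inline into the TB, instead of
   * the TB calling out to a helper function on every event */
  static void on_guest_mem_access(InstrContext *ic, InstrValue vaddr,
                                  int size, bool is_store)
  {
      /* emit "mem_accesses++" straight into the generated code; this
       * inlining is what makes the approach faster than a call-out */
      instr_emit_add_imm(ic, instr_global_u64(ic, &mem_accesses), 1);
  }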

If there is such renewed interest, I will carve a bit more time to bring the
patches up to date and send the instrumentation ones for further discussion.


Cheers,
  Lluis



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Paolo Bonzini


On 07/06/2017 14:07, Peter Maydell wrote:
>> My understanding was that adding a public instrumentation interface would add
>> too much code maintenance overhead for a feature that is not in QEMU's core
>> target.
> Well, it depends what you define as our core target :-)
> I think we get quite a lot of users that want some useful ability
> to see what their guest code is doing, and these days (when
> dev board hardware is often very cheap and easily available)

and virtualization is too...

> I think that's a lot of the value that emulation can bring to
> the table. Obviously we would want to try to do it in a way
> that is low-runtime-overhead and is easy to get right for
> people adding/maintaining cpu target frontend code...

Indeed.  I even sometimes use TCG -d in_asm,exec,int for KVM unit tests,
because it's easier to debug them that way :) so introspection ability
is welcome.

Related to this is also Alessandro's work to librarify TCG (he has a
TCG->LLVM backend for example).



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Peter Maydell
On 7 June 2017 at 12:12, Lluís Vilanova wrote:
> My understanding was that adding a public instrumentation interface would add
> too much code maintenance overhead for a feature that is not in QEMU's core
> target.

Well, it depends what you define as our core target :-)
I think we get quite a lot of users that want some useful ability
to see what their guest code is doing, and these days (when
dev board hardware is often very cheap and easily available)
I think that's a lot of the value that emulation can bring to
the table. Obviously we would want to try to do it in a way
that is low-runtime-overhead and is easy to get right for
people adding/maintaining cpu target frontend code...

thanks
-- PMM



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Lluís Vilanova
Emilio G. Cota writes:

> - Instrumentation. I think QEMU should have a good interface to enable
>   dynamic binary instrumentation. This has many uses and in fact there
>   are quite a few forks of QEMU doing this.
>   I think Lluís Vilanova's work [1] is a good start to eventually get
>   something upstream.

> [1] https://projects.gso.ac.upc.edu/projects/qemu-dbi

Hey, I'm really happy you think that's worth pursuing. Even if it doesn't look
like it, I keep working on this in small bits of free time. I have a few patch
series that were ready to send, but should now be rebased to upstream before
that. In fact, I have an academic paper on the back-burner describing the work I
did (there are some cool tricks), but I was waiting to get the core
instrumentation-agnostic infrastructure upstreamed first.

My understanding was that adding a public instrumentation interface would add
too much code maintenance overhead for a feature that is not in QEMU's core
target.

Over time, I've kept simplifying large parts of the instrumentation code base,
and maybe things have changed in QEMU enough to rethink whether it's worth
integrating. Of course, I'm completely open to discussing it.


Cheers,
  Lluis



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-07 Thread Alex Bennée

Emilio G. Cota writes:

> On Sat, Mar 25, 2017 at 12:52:35 -0400, Pranith Kumar wrote:
> (snip)
>> * Implement an LRU translation block code cache.
>>
>>   In the current TCG design, when the translation cache fills up, we flush all
>>   the translated blocks (TBs) to free up space. We can improve this situation
>>   by not flushing the TBs that were recently used i.e., by implementing an LRU
>>   policy for freeing the blocks. This should avoid the re-translation overhead
>>   for frequently used blocks and improve performance.
>
> I doubt this will yield any benefits because:
>
> - I still have not found a workload where the performance bottleneck is
>   code retranslation due to unnecessary flushes (unless of course we
>   artificially restrict the size of code_gen_buffer.)
> - To keep track of LRU you need at least one extra instruction on every
>   TB, e.g. to increase a counter or add a timestamp. This might be expensive
>   and possibly a scalability bottleneck (e.g. what to do when several
>   cores are executing the same TB?).
> - tb_find_pc now does a simple binary search. This is easy because we
>   know that TB's are allocated from code_gen_buffer in order. If they
>   were out of order, we'd need another data structure (e.g. some sort of
>   tree) to have quick searches. This is not a fast path though so this
>   could be OK.

Certainly to make changes here we would need some proper numbers showing
it is a problem. Even my re-compile stress-ng test only flushes every
now and then.

>
> (snip)
>> Please let me know if you have any comments or suggestions. Also please let me
>> know if there are other enhancements that are easily implementable to increase
>> TCG performance as part of this project or otherwise.
>
> My not-necessarily-easy-to-implement wishlist would be:
>
> - Reduction of tb_lock contention when booting many cores. For instance,
>   booting 64 aarch64 cores on a 64-core host shows quite a bit of contention (host
>   cores are 80% idle, i.e. waiting to acquire tb_lock); fortunately this is not a
>   big deal (e.g. 4s for booting 1 core vs. ~14s to boot 64) and anyway most
>   long-running workloads are cached a lot more effectively.
>   Still, it would make sense to consider the option of not going through tb_lock
>   etc. (via a private cache? or simply not caching at all) for code that is not
>   executed many times. Another option is to translate privately, and only acquire
>   tb_lock to copy the translated code to the shared buffer.

Currently tb_lock protects the whole translation cycle. However, to get
any sort of parallelism in a different translation cache we would also
need to make the translators thread-safe. Currently translation involves
too many shared globals, across the core TCG state as well as the
per-arch translate.c functions.

>
> - Instrumentation. I think QEMU should have a good interface to enable
>   dynamic binary instrumentation. This has many uses and in fact there
>   are quite a few forks of QEMU doing this.
>   I think Lluís Vilanova's work [1] is a good start to eventually get
>   something upstream.

I too want to see more here. It would be nice to have a hit count for
each block and some live introspection so we could investigate the
hottest blocks and examine the code they generate more closely.

I think there is scope for a big improvement if you could create a
hot-path series of basic blocks with multiple exit points and avoid the
spill/fills of registers in the hot path. However this is a fairly major
change to the current design.

Outside of performance improvements, having a good instrumentation story
would help people who want to analyse guest behaviour.

>
>   Emilio
>
> [1] https://projects.gso.ac.upc.edu/projects/qemu-dbi


--
Alex Bennée



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-06-06 Thread Emilio G. Cota
On Sat, Mar 25, 2017 at 12:52:35 -0400, Pranith Kumar wrote:
(snip)
> * Implement an LRU translation block code cache.
> 
>   In the current TCG design, when the translation cache fills up, we flush all
>   the translated blocks (TBs) to free up space. We can improve this situation
>   by not flushing the TBs that were recently used i.e., by implementing an LRU
>   policy for freeing the blocks. This should avoid the re-translation overhead
>   for frequently used blocks and improve performance.

I doubt this will yield any benefits because:

- I still have not found a workload where the performance bottleneck is
  code retranslation due to unnecessary flushes (unless of course we
  artificially restrict the size of code_gen_buffer.)
- To keep track of LRU you need at least one extra instruction on every
  TB, e.g. to increase a counter or add a timestamp. This might be expensive
  and possibly a scalability bottleneck (e.g. what to do when several
  cores are executing the same TB?).
- tb_find_pc now does a simple binary search. This is easy because we
  know that TB's are allocated from code_gen_buffer in order. If they
  were out of order, we'd need another data structure (e.g. some sort of
  tree) to have quick searches. This is not a fast path though so this
  could be OK.
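
For reference, a search over the ordered buffer is essentially the following
(a simplified sketch; names and fields are approximations of the real
tb_find_pc, and nb_tbs/tbs/buffer_end stand in for the real globals):

  /* TBs are carved out of code_gen_buffer in allocation order, so their
   * host-code start addresses (tc_ptr) are sorted and we can bisect */
  static TranslationBlock *find_tb_for_host_pc(uintptr_t tc_ptr)
  {
      int lo = 0, hi = nb_tbs - 1;
      while (lo <= hi) {
          int mid = lo + (hi - lo) / 2;
          TranslationBlock *tb = &tbs[mid];
          uintptr_t start = (uintptr_t)tb->tc_ptr;
          /* the next TB's start bounds this TB's code from above */
          uintptr_t end = (mid == nb_tbs - 1)
                        ? buffer_end : (uintptr_t)tbs[mid + 1].tc_ptr;
          if (tc_ptr < start) {
              hi = mid - 1;
          } else if (tc_ptr >= end) {
              lo = mid + 1;
          } else {
              return tb;   /* tc_ptr falls inside this TB's code */
          }
      }
      return NULL;
  }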

(snip)
> Please let me know if you have any comments or suggestions. Also please let me
> know if there are other enhancements that are easily implementable to increase
> TCG performance as part of this project or otherwise.

My not-necessarily-easy-to-implement wishlist would be:

- Reduction of tb_lock contention when booting many cores. For instance,
  booting 64 aarch64 cores on a 64-core host shows quite a bit of contention 
(host
  cores are 80% idle, i.e. waiting to acquire tb_lock); fortunately this is not 
a
  big deal (e.g. 4s for booting 1 core vs. ~14s to boot 64) and anyway most
  long-running workloads are cached a lot more effectively.
  Still, it would make sense to consider the option of not going through tb_lock
  etc. (via a private cache? or simply not caching at all) for code that is not
  executed many times. Another option is to translate privately, and only 
acquire
  tb_lock to copy the translated code to the shared buffer.

- Instrumentation. I think QEMU should have a good interface to enable
  dynamic binary instrumentation. This has many uses and in fact there
  are quite a few forks of QEMU doing this.
  I think Lluís Vilanova's work [1] is a good start to eventually get
  something upstream.

Emilio

[1] https://projects.gso.ac.upc.edu/projects/qemu-dbi



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-28 Thread Stefan Hajnoczi
On Mon, Mar 27, 2017 at 11:09:23PM -0400, Pranith Kumar wrote:
> On Mon, Mar 27, 2017 at 11:03 PM, Pranith Kumar wrote:
> 
> >
> > If you think the project makes sense, I will add it to the GSoC wiki
> > so that others can also apply for it. Please let me know if you are
> > interested in mentoring it along with Alex.
> >
> 
> One other thing: if you think the scope is too vast, can we split
> this and have multiple GSoC projects? In that case, having more
> mentors should help.

It's up to the mentor(s) if they want to take on more students in this
area.  Regarding your own project plan:

It's fine to have stretch goals that will be completed if time permits.
The project plan can be adjusted, so don't worry about being ambitious -
it won't be held against you if you've agreed with your mentor on
certain goals that may not fit.

Stefan




Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Pranith Kumar
On Mon, Mar 27, 2017 at 11:03 PM, Pranith Kumar wrote:

>
> If you think the project makes sense, I will add it to the GSoC wiki
> so that others can also apply for it. Please let me know if you are
> interested in mentoring it along with Alex.
>

One other thing: if you think the scope is too vast, can we split
this and have multiple GSoC projects? In that case, having more
mentors should help.

-- 
Pranith



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Pranith Kumar
Hi Paolo,

On Mon, Mar 27, 2017 at 7:32 AM, Paolo Bonzini wrote:
>
>
> On 25/03/2017 17:52, Pranith Kumar wrote:
>> * Implement an LRU translation block code cache.
>>
>>   In the current TCG design, when the translation cache fills up, we flush all
>>   the translated blocks (TBs) to free up space. We can improve this situation
>>   by not flushing the TBs that were recently used i.e., by implementing an LRU
>>   policy for freeing the blocks. This should avoid the re-translation overhead
>>   for frequently used blocks and improve performance.
>
> IIRC, Emilio measured one flush every roughly 10 seconds with 128 MB
> cache in system emulation mode---and "never" is a pretty accurate
> estimate for user-mode emulation.  This means that a really hot block
> would be retranslated very quickly.
>

OK. The problem with re-translation is that it is a serializing step
in the current design: all the cores have to wait for the translation
to complete. I think it would be a win if we could avoid it, although
I admit I am not sure how large the benefit would be.

-- 
Pranith



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Pranith Kumar
Hi Richard,

Thanks for the feedback. Please find some comments inline.

On Mon, Mar 27, 2017 at 6:57 AM, Richard Henderson wrote:
>
> 128MB is really quite large.  I doubt doubling the cache size will really
> help that much.  That said, it's really quite trivial to make this change,
> if you'd like to experiment.
>
> FWIW, I rarely see TB flushes for alpha -- not one during an entire gcc
> bootstrap.  Now, this is usually with 4GB ram, which by default implies
> 512MB translation cache.  But it does mean that, given an ideal guest, TB
> flushes should not dominate anything at all.
>
> If you're seeing multiple flushes during the startup of a browser, your
> guest must be flushing for other reasons than the code_gen_buffer being
> full.
>

This is indeed the case. From commit a9353fe897ca onwards, we are
flushing the TB cache instead of invalidating a single TB from
breakpoint_invalidate(). Now that MTTCG has added proper tb/mmap locking,
we can revert that commit. I will do so once the merge window opens.

>
>> * Implement an LRU translation block code cache.
>
>
> The major problem you'll encounter is how to manage allocation in this case.
>
> The current mechanism means that we never need to know in advance how much
> code is going to be generated for a given set of TCG opcodes.  When we reach
> the high-water mark, we've run out of room.  We then flush everything and
> start over at the beginning of the buffer.
>
> If you manage the cache with an allocator, you'll need to know in advance
> how much code is going to be generated.  This is going to require that you
> either (1) severely over-estimate the space required (qemu_ld generates lots
> more code than just add), (2) severely increase the time required, by
> generating code twice, or (3) somewhat increase the time required, by
> generating position-independent code into an external buffer and copying it
> into place after determining the size.
>

Option (3) seems to be the only feasible one, but I am not sure how easy
it is to generate position-independent code. Do you think it can be done
as a GSoC project?

>
>> * Avoid consistency overhead for strong memory model guests by generating
>>   load-acquire and store-release instructions.
>
>
> This is probably required for good performance of the user-only code path,
> but considering the number of other insns required for the system tlb
> lookup, I'm surprised that the memory barrier matters.
>

I know that having some experimental data will help to accurately show
the benefit here, but my observation from generating store-release
instructions instead of store+fence is that it helps make the system
more usable. I will try to collect this data for a Linux x86 guest.

>
> I think it would be interesting to place TranslationBlock structures into
> the same memory block as code_gen_buffer, immediately before the code that
> implements the TB.
>
> Consider what happens within every TB:
>
> (1) We have one or more references to the TB address, via exit_tb.
>
> For aarch64, this will normally require 2-4 insns.
>
> # alpha-softmmu
> 0x7f75152114:  d0ffb320  adrp x0, #-0x99a000 (addr 0x7f747b8000)
> 0x7f75152118:  91004c00  add x0, x0, #0x13 (19)
> 0x7f7515211c:  17c3  b #-0xf4 (addr 0x7f75152028)
>
> # alpha-linux-user
> 0x00569500:  d2800260  mov x0, #0x13
> 0x00569504:  f2b59820  movk x0, #0xacc1, lsl #16
> 0x00569508:  f2c00fe0  movk x0, #0x7f, lsl #32
> 0x0056950c:  17df  b #-0x84 (addr 0x569488)
>
> We would reduce this to one insn, always, if the TB were close by, since the
> ADR instruction has a range of 1MB.
>
> (2) We have zero to two references to a linked TB, via goto_tb.
>
> Your stated goal above for eliminating the code_gen_buffer maximum of 128MB
> can be done in two ways.
>
> (2A) Raise the maximum to 2GB.  For this we would align an instruction pair,
> adrp+add, to compute the address; the following insn would branch.  The
> update code would write a new destination by modifing the adrp+add with a
> single 64-bit store.
>
> (2B) Eliminate the maximum altogether by referencing the destination
> directly in the TB.  This is the !USE_DIRECT_JUMP path.  It is normally not
> used on 64-bit targets because computing the full 64-bit address of the TB
> is harder, or just as hard, as computing the full 64-bit address of the
> destination.
>
> However, if the TB is nearby, aarch64 can load the address from
> TB.jmp_target_addr in one insn, with LDR (literal).  This pc-relative load
> also has a 1MB range.
>
> This has the side benefit that it is much quicker to re-link TBs, both in
> the computation of the code for the destination as well as re-flushing the
> icache.

This (2B) is the idea I had in mind, combined with (2A): if the destination
address falls outside the 1MB range, we take the penalty and generate the
full 64-bit address.
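
Roughly, the combined scheme would look like this (a sketch, not tested
code):

  # nearby TB: one pc-relative load; LDR (literal) has a 1MB range
  ldr  x16, <tb.jmp_target_addr>   # load the full 64-bit destination
  br   x16                         # branch-to-register, no range limit
  # re-linking rewrites only the 64-bit data slot, so the branch itself
  # never needs re-patching or an icache flush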

>
>
> In addition, I strongly suspect the 1,342,177 entries (153MB) that we
> currently allocate for tcg_ctx.tb_ctx.tbs, given a 512MB
> code_gen_buffer, is excessive.

Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Pranith Kumar
Hi Stefan,

On Mon, Mar 27, 2017 at 11:54 AM, Stefan Hajnoczi wrote:
> On Sat, Mar 25, 2017 at 12:52:35PM -0400, Pranith Kumar wrote:
>> Alex Bennée, who mentored me last year, has agreed to mentor me again this
>> time if the proposal is accepted.
>
> Thanks, the project idea looks good for GSoC.  I've talked to Alex about
> adding it to the wiki page.
>
> The "How to propose a custom project idea" section on the wiki says:
>
>   Note that other candidates can apply for newly added project ideas.
>   This ensures that custom project ideas are fair and open.
>
> This means that Alex has agreed to mentor the _project idea_.  Proposing
> a custom project idea doesn't guarantee that you will be selected for
> it.
>
> I think you already knew that but I wanted to clarify in case someone
> reading misinterprets what you wrote to think custom project ideas are a
> loophole for getting into GSoC.

Yes, I was waiting for the project idea to be finalized before mailing
you with the filled out template. But if you think it will be easier
if I add it first and then edit it, I will send you the template. I
will update the wiki as the discussion progresses.

-- 
Pranith



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Stefan Hajnoczi
On Sat, Mar 25, 2017 at 12:52:35PM -0400, Pranith Kumar wrote:
> Alex Bennée, who mentored me last year, has agreed to mentor me again this
> time if the proposal is accepted.

Thanks, the project idea looks good for GSoC.  I've talked to Alex about
adding it to the wiki page.

The "How to propose a custom project idea" section on the wiki says:

  Note that other candidates can apply for newly added project ideas.
  This ensures that custom project ideas are fair and open.

This means that Alex has agreed to mentor the _project idea_.  Proposing
a custom project idea doesn't guarantee that you will be selected for
it.

I think you already knew that but I wanted to clarify in case someone
reading misinterprets what you wrote to think custom project ideas are a
loophole for getting into GSoC.

Stefan




Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Alex Bennée

Richard Henderson writes:

> On 03/26/2017 02:52 AM, Pranith Kumar wrote:
>> Hello,
>>

>
>> Please let me know if you have any comments or suggestions. Also please let me
>> know if there are other enhancements that are easily implementable to increase
>> TCG performance as part of this project or otherwise.
>
> I think it would be interesting to place TranslationBlock structures
> into the same memory block as code_gen_buffer, immediately before the
> code that implements the TB.
>
> Consider what happens within every TB:
>
> (1) We have one or more references to the TB address, via exit_tb.
>
> For aarch64, this will normally require 2-4 insns.
>
> # alpha-softmmu
> 0x7f75152114:  d0ffb320  adrp x0, #-0x99a000 (addr 0x7f747b8000)
> 0x7f75152118:  91004c00  add x0, x0, #0x13 (19)
> 0x7f7515211c:  17c3  b #-0xf4 (addr 0x7f75152028)
>
> # alpha-linux-user
> 0x00569500:  d2800260  mov x0, #0x13
> 0x00569504:  f2b59820  movk x0, #0xacc1, lsl #16
> 0x00569508:  f2c00fe0  movk x0, #0x7f, lsl #32
> 0x0056950c:  17df  b #-0x84 (addr 0x569488)
>
> We would reduce this to one insn, always, if the TB were close by,
> since the ADR instruction has a range of 1MB.

Having a TB address statically addressable from the generated code would
also be very handy for doing things like rough block execution counts
(or even precise if you want to go through the atomic penalty for it).
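
A sketch of what that could look like (invented names, assuming the TB
header is co-located with its code as proposed above):

  /* per-TB execution counter in a header that generated code can reach
   * with a single ADR instruction */
  typedef struct TBHeader {
      uint64_t exec_count;        /* bumped on every block entry */
      /* ... TranslationBlock fields; generated code follows ... */
  } TBHeader;

  static inline void tb_bump(TBHeader *hdr, bool precise)
  {
      if (precise) {
          __atomic_fetch_add(&hdr->exec_count, 1, __ATOMIC_RELAXED);
      } else {
          hdr->exec_count++;      /* racy rough count, but near-free */
      }
  }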

It would be nice for future work to be able to track where our hot paths
are through generated code.

>
>
> (2) We have zero to two references to a linked TB, via goto_tb.
>
> Your stated goal above for eliminating the code_gen_buffer maximum of
> 128MB can be done in two ways.
>
> (2A) Raise the maximum to 2GB.  For this we would align an instruction
> pair, adrp+add, to compute the address; the following insn would
> branch.  The update code would write a new destination by modifying the
> adrp+add with a single 64-bit store.
>
> (2B) Eliminate the maximum altogether by referencing the destination
> directly in the TB.  This is the !USE_DIRECT_JUMP path.  It is
> normally not used on 64-bit targets because computing the full 64-bit
> address of the TB is harder, or just as hard, as computing the full
> 64-bit address of the destination.
>
> However, if the TB is nearby, aarch64 can load the address from
> TB.jmp_target_addr in one insn, with LDR (literal).  This pc-relative
> load also has a 1MB range.
>
> This has the side benefit that it is much quicker to re-link TBs, both
> in the computation of the code for the destination as well as
> re-flushing the icache.
>
>
> In addition, I strongly suspect the 1,342,177 entries (153MB) that we
> currently allocate for tcg_ctx.tb_ctx.tbs, given a 512MB
> code_gen_buffer, is excessive.
>
> If we co-allocate the TB and the code, then we get exactly the right
> number of TBs allocated with no further effort.
>
> There will be some additional memory wastage, since we'll want to keep
> the code and the data in different cache lines and that means padding,
> but I don't think that'll be significant.  Indeed, given the above
> over-allocation will probably still be a net savings.
>
>
> r~


--
Alex Bennée



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Paolo Bonzini


On 25/03/2017 17:52, Pranith Kumar wrote:
> * Implement an LRU translation block code cache.
> 
>   In the current TCG design, when the translation cache fills up, we flush all
>   the translated blocks (TBs) to free up space. We can improve this situation
>   by not flushing the TBs that were recently used i.e., by implementing an LRU
>   policy for freeing the blocks. This should avoid the re-translation overhead
>   for frequently used blocks and improve performance.

IIRC, Emilio measured one flush every roughly 10 seconds with 128 MB
cache in system emulation mode---and "never" is a pretty accurate
estimate for user-mode emulation.  This means that a really hot block
would be retranslated very quickly.

Paolo



Re: [Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-27 Thread Richard Henderson

On 03/26/2017 02:52 AM, Pranith Kumar wrote:

Hello,

With MTTCG code now merged in mainline, I tried to see if we are able to run
x86 SMP guests on ARM64 hosts. For this I tried running a Windows XP guest on
a DragonBoard 410c, which has 1GB of RAM. Since x86 has a strong memory model
whereas ARM64 has a weak memory model, I added a patch to generate fence
instructions for every guest memory access. After some minor fixes, I was
successfully able to boot a 4-core guest all the way to the desktop (albeit
with a 1GB backing swap). However, the performance is severely
limited and the guest is barely usable. Based on my observations, I think
there are some easily implementable additions we can make to improve the
performance of TCG in general and on ARM64 in particular. I propose to do the
following as part of Google Summer of Code 2017.


* Implement jump-to-register instruction on ARM64 to overcome the 128MB
  translation cache size limit.

  The translation cache size for an ARM64 host is currently limited to 128
  MB. This limitation is imposed by utilizing a branch instruction which
  encodes the jump offset and is limited by the number of bits it can use for
  the range of the offset. The performance impact by this limitation is severe
  and can be observed when you try to run large programs like a browser in the
  guest. The cache is flushed several times before the browser starts and the
  performance is not satisfactory. This limitation can be overcome by
  generating a branch-to-register instruction and utilizing that when the
  destination address is outside the range of what can be encoded in the
  current branch instruction.


128MB is really quite large.  I doubt doubling the cache size will really help 
that much.  That said, it's really quite trivial to make this change, if you'd 
like to experiment.


FWIW, I rarely see TB flushes for alpha -- not one during an entire gcc 
bootstrap.  Now, this is usually with 4GB ram, which by default implies 512MB 
translation cache.  But it does mean that, given an ideal guest, TB flushes 
should not dominate anything at all.


If you're seeing multiple flushes during the startup of a browser, your guest 
must be flushing for other reasons than the code_gen_buffer being full.




* Implement an LRU translation block code cache.

  In the current TCG design, when the translation cache fills up, we flush all
  the translated blocks (TBs) to free up space. We can improve this situation
  by not flushing the TBs that were recently used i.e., by implementing an LRU
  policy for freeing the blocks. This should avoid the re-translation overhead
  for frequently used blocks and improve performance.


The major problem you'll encounter is how to manage allocation in this case.

The current mechanism means that we never need to know in advance how much code
is going to be generated for a given set of TCG opcodes.  When we reach the
high-water mark, we've run out of room.  We then flush everything and start
over at the beginning of the buffer.


If you manage the cache with an allocator, you'll need to know in advance how 
much code is going to be generated.  This is going to require that you either 
(1) severely over-estimate the space required (qemu_ld generates lots more code 
than just add), (2) severely increase the time required, by generating code 
twice, or (3) somewhat increase the time required, by generating 
position-independent code into an external buffer and copying it into place 
after determining the size.
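
A sketch of option (3), with invented names for the emit and allocate
helpers:

  /* emit position-independent code into a scratch buffer, then exact-fit
   * allocate in the cache and copy it into place */
  uint8_t scratch[64 * 1024];
  size_t len = gen_tb_code_pic(scratch, ops);     /* hypothetical */
  void *dst = code_cache_alloc(len);              /* hypothetical */
  memcpy(dst, scratch, len);
  /* any pc-relative references that escape the TB must be patched here,
   * once the final address is known */
  flush_icache_range((uintptr_t)dst, (uintptr_t)dst + len);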




* Avoid consistency overhead for strong memory model guests by generating
  load-acquire and store-release instructions.


This is probably required for good performance of the user-only code path, but 
considering the number of other insns required for the system tlb lookup, I'm 
surprised that the memory barrier matters.



Please let me know if you have any comments or suggestions. Also please let me
know if there are other enhancements that are easily implementable to increase
TCG performance as part of this project or otherwise.


I think it would be interesting to place TranslationBlock structures into the 
same memory block as code_gen_buffer, immediately before the code that 
implements the TB.


Consider what happens within every TB:

(1) We have one or more references to the TB address, via exit_tb.

For aarch64, this will normally require 2-4 insns.

# alpha-softmmu
0x7f75152114:  d0ffb320  adrp x0, #-0x99a000 (addr 0x7f747b8000)
0x7f75152118:  91004c00  add x0, x0, #0x13 (19)
0x7f7515211c:  17c3  b #-0xf4 (addr 0x7f75152028)

# alpha-linux-user
0x00569500:  d2800260  mov x0, #0x13
0x00569504:  f2b59820  movk x0, #0xacc1, lsl #16
0x00569508:  f2c00fe0  movk x0, #0x7f, lsl #32
0x0056950c:  17df  b #-0x84 (addr 0x569488)

We would reduce this to one insn, always, if the TB were close by, since the 
ADR instruction has a range of 1MB.
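
The reduced form would then be just (a sketch; addresses illustrative):

# with the TB header within +/-1MB of its code
0x7f75152114:  adr x0, <this TB's header>    # one insn replaces the pair
0x7f75152118:  b #-0xf4 (addr 0x7f75152028)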



(2) We have zero to two references to a linked TB, via goto_tb.

Your stated goal above for eliminating the code_gen_buffer maximum of
128MB can be done in two ways.

(2A) Raise the maximum to 2GB.  For this we would align an instruction
pair, adrp+add, to compute the address; the following insn would
branch.  The update code would write a new destination by modifying the
adrp+add with a single 64-bit store.

(2B) Eliminate the maximum altogether by referencing the destination
directly in the TB.  This is the !USE_DIRECT_JUMP path.  It is
normally not used on 64-bit targets because computing the full 64-bit
address of the TB is harder, or just as hard, as computing the full
64-bit address of the destination.

However, if the TB is nearby, aarch64 can load the address from
TB.jmp_target_addr in one insn, with LDR (literal).  This pc-relative
load also has a 1MB range.

This has the side benefit that it is much quicker to re-link TBs, both
in the computation of the code for the destination as well as
re-flushing the icache.

In addition, I strongly suspect the 1,342,177 entries (153MB) that we
currently allocate for tcg_ctx.tb_ctx.tbs, given a 512MB
code_gen_buffer, is excessive.

If we co-allocate the TB and the code, then we get exactly the right
number of TBs allocated with no further effort.

There will be some additional memory wastage, since we'll want to keep
the code and the data in different cache lines and that means padding,
but I don't think that'll be significant.  Indeed, given the above
over-allocation it will probably still be a net savings.


r~

[Qemu-devel] GSoC 2017 Proposal: TCG performance enhancements

2017-03-25 Thread Pranith Kumar
Hello,

With MTTCG code now merged in mainline, I tried to see if we are able to run
x86 SMP guests on ARM64 hosts. For this I tried running a Windows XP guest on
a DragonBoard 410c, which has 1GB of RAM. Since x86 has a strong memory model
whereas ARM64 has a weak memory model, I added a patch to generate fence
instructions for every guest memory access. After some minor fixes, I was
successfully able to boot a 4-core guest all the way to the desktop (albeit
with a 1GB backing swap). However, the performance is severely
limited and the guest is barely usable. Based on my observations, I think
there are some easily implementable additions we can make to improve the
performance of TCG in general and on ARM64 in particular. I propose to do the
following as part of Google Summer of Code 2017.


* Implement jump-to-register instruction on ARM64 to overcome the 128MB
  translation cache size limit.

  The translation cache size for an ARM64 host is currently limited to 128
  MB. This limitation is imposed by utilizing a branch instruction which
  encodes the jump offset and is limited by the number of bits it can use for
  the range of the offset. The performance impact by this limitation is severe
  and can be observed when you try to run large programs like a browser in the
  guest. The cache is flushed several times before the browser starts and the
  performance is not satisfactory. This limitation can be overcome by
  generating a branch-to-register instruction and utilizing that when the
  destination address is outside the range of what can be encoded in the
  current branch instruction.
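
  For instance, the fallback could look like this (a sketch, not the final
  encoding):

    # in-range case, as today: B imm26, +/-128MB reach
    b    <dest>
    # out-of-range fallback: full 64-bit target via a nearby literal
    ldr  x16, 0f          # LDR (literal) from the slot below
    br   x16              # branch-to-register, no range limit
  0:
    .quad <dest>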

* Implement an LRU translation block code cache.

  In the current TCG design, when the translation cache fills up, we flush all
  the translated blocks (TBs) to free up space. We can improve this situation
  by not flushing the TBs that were recently used i.e., by implementing an LRU
  policy for freeing the blocks. This should avoid the re-translation overhead
  for frequently used blocks and improve performance.
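
  As a sketch of the bookkeeping this would involve (invented names; note
  that the per-execution cost is a concern raised elsewhere in the thread):

    /* TBs on a doubly-linked LRU list, most recently executed at the head */
    typedef struct TBNode {
        struct TBNode *prev, *next;
        /* ... TranslationBlock fields ... */
    } TBNode;

    static TBNode *lru_head, *lru_tail;

    static void lru_touch(TBNode *tb)     /* run on every TB execution */
    {
        if (tb == lru_head) {
            return;
        }
        if (tb->prev) tb->prev->next = tb->next;     /* unlink */
        if (tb->next) tb->next->prev = tb->prev;
        if (tb == lru_tail) lru_tail = tb->prev;
        tb->prev = NULL;                             /* push to front */
        tb->next = lru_head;
        if (lru_head) lru_head->prev = tb;
        lru_head = tb;
        if (!lru_tail) lru_tail = tb;
    }
    /* on cache pressure: free TBs from lru_tail instead of flushing all */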

* Avoid consistency overhead for strong memory model guests by generating
  load-acquire and store-release instructions.

  To run a strongly ordered guest on a weakly ordered host using MTTCG, for
  example, x86 on ARM64, we have to generate fence instructions for all the
  guest memory accesses to ensure consistency. The overhead imposed by these
  fence instructions is significant (almost 3x when compared to a run without
  fence instructions). ARM64 provides load-acquire and store-release
  instructions which are sequentially consistent and can be used instead of
  generating fence instructions. I plan to add support to generate these
  instructions in the TCG run-time to reduce the consistency overhead in
  MTTCG.
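
  As an illustrative lowering of a single guest access (a sketch, ignoring
  the TLB lookup around it):

    # fence-based mapping of an x86 guest store today:
    str  w0, [x1]
    dmb  ish               # the fence this proposal wants to drop
    # with release/acquire instructions instead:
    stlr w0, [x1]          # store-release
    ldar w2, [x3]          # load-acquire, for guest loads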

Alex Bennée, who mentored me last year, has agreed to mentor me again this
time if the proposal is accepted.

Please let me know if you have any comments or suggestions. Also please let me
know if there are other enhancements that are easily implementable to increase
TCG performance as part of this project or otherwise.

Thanks,
-- 
Pranith