On 21/07/16 14:25, Paolo Bonzini wrote:
>
> ----- Original Message -----
>> From: "Sergey Fedorov" <serge.f...@gmail.com>
>> To: "Paolo Bonzini" <pbonz...@redhat.com>
>> Cc: qemu-devel@nongnu.org, "sergey fedorov" <sergey.fe
On 20/07/16 01:27, Paolo Bonzini wrote:
>
> ----- Original Message -----
>> From: "Sergey Fedorov" <serge.f...@gmail.com>
>> To: "Paolo Bonzini" <pbonz...@redhat.com>, qemu-devel@nongnu.org
>> Cc: "sergey fedorov" <sergey.fe
fore calling
> tb_add_jump() since TB lookup is to be performed out of 'tb_lock' in
> future. Note that we don't have to check 'last_tb'; an already invalidated
> TB will not be executed anyway and it is thus safe to patch it.
>
> Suggested-by: Sergey Fedorov <serge.f...@gmail.com&g
On 18/07/16 20:58, Paolo Bonzini wrote:
>
> On 18/07/2016 19:31, Sergey Fedorov wrote:
>> On 18/07/16 20:28, Paolo Bonzini wrote:
>>> On 18/07/2016 19:25, Sergey Fedorov wrote:
>>>>>> @@ -753,14 +753,14 @@ static inline void
>>>&
On 18/07/16 20:28, Paolo Bonzini wrote:
>
> On 18/07/2016 19:25, Sergey Fedorov wrote:
>>>> @@ -753,14 +753,14 @@ static inline void
>>>> cpu_get_invalid_tb_cpu_state(target_ulong *pc,
>>>>
On 18/07/16 20:00, Paolo Bonzini wrote:
>
> On 18/07/2016 18:57, Sergey Fedorov wrote:
>> On 18/07/16 19:53, Paolo Bonzini wrote:
>>> On 18/07/2016 18:52, Sergey Fedorov wrote:
>>>> So how are we going to use them?
>>> Instead of atomic_read/atomic_set
On 18/07/16 20:22, Paolo Bonzini wrote:
>
> On 18/07/2016 19:17, Sergey Fedorov wrote:
>> On 18/07/16 20:11, Paolo Bonzini wrote:
>>> On 18/07/2016 19:07, Sergey Fedorov wrote:
>>>> On 18/07/16 20:00, Paolo Bonzini wrote:
>>>>> On 18/07/2016 18:57
On 18/07/16 20:11, Paolo Bonzini wrote:
>
> On 18/07/2016 19:07, Sergey Fedorov wrote:
>> On 18/07/16 20:00, Paolo Bonzini wrote:
>>> On 18/07/2016 18:57, Sergey Fedorov wrote:
>>>> On 18/07/16 19:53, Paolo Bonzini wrote:
>>>>> On 18/07/2016 18:52,
On 18/07/16 19:53, Paolo Bonzini wrote:
>
> On 18/07/2016 18:52, Sergey Fedorov wrote:
>> So how are we going to use them?
> Instead of atomic_read/atomic_set when marking invalid TBs.
But shouldn't they be atomic to avoid reading torn writes?
Thanks,
Sergey
>
> diff --g
So how are we going to use them?
Thanks,
Sergey
On 18/07/16 17:17, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini
> ---
> docs/atomics.txt | 19 ---
> include/qemu/atomic.h | 17 +
> 2 files changed, 33 insertions(+), 3
On 15/07/16 09:45, Stefan Weil wrote:
> Hi,
>
> Am 11.05.2016 um 12:21 schrieb Sergey Fedorov:
> [...]
>> int cpu_exec(CPUState *cpu)
>> @@ -516,8 +576,6 @@ int cpu_exec(CPUState *cpu)
>> CPUArchState *env = _cpu->env;
>> #endif
>> in
From: Sergey Fedorov <serge.f...@gmail.com>
This will fix a compiler warning with -Wclobbered:
http://lists.nongnu.org/archive/html/qemu-devel/2016-07/msg03347.html
Reported-by: Stefan Weil <s...@weilnetz.de>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-
From: Sergey Fedorov <serge.f...@gmail.com>
This will be useful to enable CPU work on user mode emulation.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <alex.ben...@linaro.org>
--
From: Sergey Fedorov <serge.f...@gmail.com>
Use async_safe_run_on_cpu() to make tb_flush() thread safe.
It can happen that multiple threads schedule a safe work item to flush the
translation buffer. To keep statistics and debugging output sane, always
check if the translation buffer has alread
From: Sergey Fedorov <serge.f...@gmail.com>
A single variable 'pending_cpus' was used for both counting currently
running CPUs and for signalling the pending exclusive operation request.
To prepare for supporting operations which require a quiescent state,
like translation buffer
From: Sergey Fedorov <serge.f...@gmail.com>
Make CPU work core functions common between system and user-mode
emulation. User-mode does not have BQL, so process_queued_cpu_work() is
protected by 'exclusive_lock'.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Se
From: Sergey Fedorov <serge.f...@gmail.com>
Convert pthread_mutex_t and pthread_cond_t to QemuMutex and QemuCond.
This will allow making some locks and condition variables common
between user and system-mode emulation.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-of
From: Sergey Fedorov <serge.f...@gmail.com>
Move the code common between run_on_cpu() and async_run_on_cpu() into a
new function queue_work_on_cpu().
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Ale
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <alex.ben...@linaro.org>
---
linux-user/main.c | 10 ++
1 file changed, 10 insertion
From: Sergey Fedorov <serge.f...@gmail.com>
It is minimalistic support because bsd-user claims to be _not_
thread-safe.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
bsd-user/main.c | 15 +++
From: Sergey Fedorov <serge.f...@gmail.com>
Hi,
This is a v4 for the series [1]. There's only a small change to keep
tb_flush() statistics and debugging output sane. I also picked up
"Reviewed-by" tags.
This series is available at a public git repository:
https://gith
From: Sergey Fedorov <serge.f...@gmail.com>
To avoid possible confusion, rename flush_queued_work() to
process_queued_cpu_work().
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <a
to use
CPUState where available.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
[Sergey Fedorov:
- eliminate more CPUState in user data;
- remove unnecessary user data passing;
- fix target-s390x/kvm.c and target-s390x/misc_helper.c]
Signed-off-by: Sergey Fedorov <sergey.fedo...@li
From: Sergey Fedorov <serge.f...@gmail.com>
This patch is based on ideas found in the work of KONRAD Frederic [1],
Alex Bennée [2], and Alvise Rigo [3].
This mechanism allows performing an operation safely in a quiescent
state. Quiescent state means: (1) no vCPU is running and (2) BQL in
From: Alex Bennée <alex.ben...@linaro.org>
Useful for counting down.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
include/qemu/atomic.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/qemu/
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <alex.ben...@linaro.org>
---
cpu-exec.c | 17 -
1 file changed, 12 in
From: Alex Bennée <alex.ben...@linaro.org>
This ensures that if we find the TB on the slow path that tb->page_addr
is correctly set before being tested.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Signed-o
From: Sergey Fedorov <serge.f...@gmail.com>
These functions are not too big and can be merged together. This makes
the locking scheme clearer and easier to follow.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
From: Sergey Fedorov <serge.f...@gmail.com>
Ensure atomicity and ordering of CPU's 'tb_flushed' access for future
translation block lookup out of 'tb_lock'.
This field can only be touched from another thread by tb_flush() in user
mode emulation. So the only access to be sequential
From: Sergey Fedorov <serge.f...@gmail.com>
These functions will be used to make translation block invalidation safe
with concurrent lockless lookup in the global hash table.
Most targets don't use 'cs_base'; so marking TB as invalid is as simple
as assigning -1 to 'cs_base'. SPARC target
From: Sergey Fedorov <serge.f...@gmail.com>
In fact, this function does not exactly perform a lookup by physical
address as described in the comment on get_page_addr_code(). Thus
it may be a bit confusing to have "physical" in its name. So rename it
to tb_htable_lookup() to be
From: Sergey Fedorov <serge.f...@gmail.com>
'HF_SOFTMMU_MASK' is only set when 'CONFIG_SOFTMMU' is defined. So
there's no need for this flag: test 'CONFIG_SOFTMMU' instead.
Suggested-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Sig
become NOPs anyway until the MTTCG work is completed.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Richard Henderson <r...@twiddle.net>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
v2
filled.
Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
[Sergey Fedorov: commit title and message provided;
comment on qht_remove() elided]
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
---
Changes in v4:
- Modified version of Paolo's patch is used
---
include/qemu/qht.h | 5
From: Sergey Fedorov <serge.f...@gmail.com>
When invalidating a translation block, set an invalid CPU state into the
TranslationBlock structure first.
As soon as the TB is marked with an invalid CPU state, there is no need
to remove it from CPU's 'tb_jmp_cache'. However it will be nec
From: Sergey Fedorov <serge.f...@gmail.com>
This is a small clean-up. tb_find_fast() is the final consumer of this
variable, so there is no need to pass it by reference. 'last_tb' is always updated
by subsequent cpu_loop_exec_tb() in cpu_exec().
This change also simplifies calling cpu_exec_n
From: Sergey Fedorov <serge.f...@gmail.com>
Hi,
This is a respin of this series [1].
Here I used a modified version of Paolo's patch to document memory
ordering assumptions for certain QHT operations.
The last patch is a suggestion for renaming tb_find_physical().
This series can be
From: Sergey Fedorov <serge.f...@gmail.com>
Ensure atomicity of CPU's 'tb_jmp_cache' access for future translation
block lookup out of 'tb_lock'.
Note that this patch does *not* make CPU's TLB invalidation safe if it
is done from some other thread while the CPU is in its execution loop.
On 15/07/16 15:51, Paolo Bonzini wrote:
>
> On 15/07/2016 14:37, Sergey Fedorov wrote:
>> I understand why an implied wmb can be expected after the entry is
>> removed: so that the caller can trash the contents of the object
>> removed. However that would require dou
On 13/07/16 14:13, Paolo Bonzini wrote:
> diff --git a/include/qemu/qht.h b/include/qemu/qht.h
> index 70bfc68..f4f1d55 100644
> --- a/include/qemu/qht.h
> +++ b/include/qemu/qht.h
> @@ -69,6 +69,9 @@ void qht_destroy(struct qht *ht);
> * Attempting to insert a NULL @p is a bug.
> * Inserting
On 14/07/16 16:56, Paolo Bonzini wrote:
>
> On 13/07/2016 19:50, Sergey Fedorov wrote:
>> On 13/07/16 10:36, Paolo Bonzini wrote:
>>> On 13/07/2016 01:19, Emilio G. Cota wrote:
>>>> I wouldn't put those comments in the source--seqlock callers should
>>>&g
On 14/07/16 16:01, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
>> Signed-off-by: Sergey Fedorov <sergey.fedo.
On 14/07/16 15:53, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> These functions will be used to make translation block invalidation safe
>> with concurrent lockless lookup in
On 14/07/16 15:45, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> Ensure atomicity of CPU's 'tb_flushed' access for future translation
>> block lookup out of 'tb_lock'.
>>
On 14/07/16 15:59, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> When invalidating a translation block, set an invalid CPU state into the
>> TranslationBlock structure
On 14/07/16 12:55, Alex Bennée wrote:
> Sergey Fedorov <serge.f...@gmail.com> writes:
>
>> On 13/07/16 10:39, Paolo Bonzini wrote:
>>> Have you measured performance with the series? In any case, it's nice
>>> to see MTTCG finally taking shape!
>> No, I d
On 14/07/16 13:25, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> > From: Sergey Fedorov <serge.f...@gmail.com>
>> >
>> > These functions will be used to make translation block invalidation safe
>> > wi
On 14/07/16 11:41, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> Use async_safe_run_on_cpu() to make tb_flush() thread safe.
>>
>> Signed-off-by: Sergey Fedorov <ser
From: Sergey Fedorov <serge.f...@gmail.com>
Use async_safe_run_on_cpu() to make tb_flush() thread safe.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
Changes in v3:
- 'tb_flushed' removed
Changes in v2:
- s
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
linux-user/main.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/linux-user/main.c b/linux
From: Sergey Fedorov <serge.f...@gmail.com>
Make CPU work core functions common between system and user-mode
emulation. User-mode does not have BQL, so process_queued_cpu_work() is
protected by 'exclusive_lock'.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Se
From: Sergey Fedorov <serge.f...@gmail.com>
Move the code common between run_on_cpu() and async_run_on_cpu() into a
new function queue_work_on_cpu().
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Ale
From: Sergey Fedorov <serge.f...@gmail.com>
Convert pthread_mutex_t and pthread_cond_t to QemuMutex and QemuCond.
This will allow making some locks and condition variables common
between user and system-mode emulation.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-of
From: Sergey Fedorov <serge.f...@gmail.com>
To avoid possible confusion, rename flush_queued_work() to
process_queued_cpu_work().
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <a
From: Sergey Fedorov <serge.f...@gmail.com>
A single variable 'pending_cpus' was used for both counting currently
running CPUs and for signalling the pending exclusive operation request.
To prepare for supporting operations which require a quiescent state,
like translation buffer
From: Sergey Fedorov <serge.f...@gmail.com>
This patch is based on ideas found in the work of KONRAD Frederic [1],
Alex Bennée [2], and Alvise Rigo [3].
This mechanism allows performing an operation safely in a quiescent
state. Quiescent state means: (1) no vCPU is running and (2) BQL in
From: Sergey Fedorov <serge.f...@gmail.com>
This will be useful to enable CPU work on user mode emulation.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Reviewed-by: Alex Bennée <alex.ben...@linaro.org>
--
From: Sergey Fedorov <serge.f...@gmail.com>
Hi,
This is a v3 for the RFC series [1]. This is no longer an RFC because
bsd-user is supported now. However, I wasn't able to even compile-test
bsd-user, so some help on this would be really appreciated.
This series is available at a publ
to use
CPUState where available.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
[Sergey Fedorov:
- eliminate more CPUState in user data;
- remove unnecessary user data passing;
- fix target-s390x/kvm.c and target-s390x/misc_helper.c]
Signed-off-by: Sergey Fedorov <sergey.fedo...@li
From: Sergey Fedorov <serge.f...@gmail.com>
It is minimalistic support because bsd-user claims to be _not_
thread-safe.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
bsd-user/main.c | 15 +++
From: Alex Bennée <alex.ben...@linaro.org>
Useful for counting down.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
include/qemu/atomic.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/qemu/
On 13/07/16 10:39, Paolo Bonzini wrote:
> If anything, for historical reasons one might rename tb_find_physical to
> tb_find_slow and leave the tb_find_fast name, but I think the patch is
> good as is.
I think tb_find_htable() or tb_find_global_htable() could also be good
options if we're going
On 13/07/16 14:13, Paolo Bonzini wrote:
>
> On 12/07/2016 22:13, Sergey Fedorov wrote:
>> diff --git a/include/qemu/qht.h b/include/qemu/qht.h
>> index 70bfc68b8d67..5f633e5d8100 100644
>> --- a/include/qemu/qht.h
>> +++ b/include/qemu/qht.h
>> @@ -69,6 +69,
On 13/07/16 10:36, Paolo Bonzini wrote:
>
> On 13/07/2016 01:19, Emilio G. Cota wrote:
>> I wouldn't put those comments in the source--seqlock callers should
>> know what they're doing, and what barriers seqlocks imply.
> In general I'd agree with you, however in this case the "begin" calls
> are
On 13/07/16 10:39, Paolo Bonzini wrote:
> On 12/07/2016 22:13, Sergey Fedorov wrote:
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> Hi,
>>
>> This is my respin of Alex's v2 series [1].
>>
>> The first 8 patches are preparation for the p
From: Sergey Fedorov <serge.f...@gmail.com>
These functions are not too big and can be merged together. This makes
the locking scheme clearer and easier to follow.
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
--
From: Sergey Fedorov <serge.f...@gmail.com>
When invalidating a translation block, set an invalid CPU state into the
TranslationBlock structure first. All subsequent changes are ordered
after it with smp_wmb(). This pairs with implied smp_rmb() of
qht_lookup() in tb_find_physical().
A
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
cpu-exec.c | 15 +++
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/cpu-exec.c
become NOPs anyway until the MTTCG work is completed.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Richard Henderson <r...@twiddle.net>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
From: Alex Bennée <alex.ben...@linaro.org>
This ensures that if we find the TB on the slow path that tb->page_addr
is correctly set before being tested.
Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
Signed-o
From: Sergey Fedorov <serge.f...@gmail.com>
These functions will be used to make translation block invalidation safe
with concurrent lockless lookup in the global hash table.
Most targets don't use 'cs_base'; so marking TB as invalid is as simple
as assigning -1 to 'cs_base'. SPARC target
From: Sergey Fedorov <serge.f...@gmail.com>
'HF_SOFTMMU_MASK' is only set when 'CONFIG_SOFTMMU' is defined. So
there's no need for this flag: test 'CONFIG_SOFTMMU' instead.
Suggested-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Sig
From: Sergey Fedorov <serge.f...@gmail.com>
Ensure atomicity of CPU's 'tb_flushed' access for future translation
block lookup out of 'tb_lock'.
This field can only be touched from another thread by tb_flush() in user
mode emulation. So the only access to be atomic is:
* a single
From: Sergey Fedorov <serge.f...@gmail.com>
This is a small clean-up. tb_find_fast() is the final consumer of this
variable, so there is no need to pass it by reference. 'last_tb' is always updated
by subsequent cpu_loop_exec_tb() in cpu_exec().
This change also simplifies calling cpu_exec_n
From: Sergey Fedorov <serge.f...@gmail.com>
It is naturally expected that some memory ordering should be provided
around qht_insert(), qht_remove(), and qht_lookup(). Document these
assumptions in the header file and put some comments in the source to
denote how that memory ordering requir
From: Sergey Fedorov <serge.f...@gmail.com>
Ensure atomicity of CPU's 'tb_jmp_cache' access for future translation
block lookup out of 'tb_lock'.
Note that this patch does *not* make CPU's TLB invalidation safe if it
is done from some other thread while the CPU is in its execution loop.
From: Sergey Fedorov <serge.f...@gmail.com>
Hi,
This is my respin of Alex's v2 series [1].
The first 8 patches are preparation for the patch 9, the subject matter
of this series, which enables lockless translation block lookup. The
main change here is that Paolo's suggestion is implemente
On 11/07/16 17:03, Paolo Bonzini wrote:
>
> On 11/07/2016 15:06, Sergey Fedorov wrote:
>> On 08/07/16 23:52, Paolo Bonzini wrote:
>>> On 08/07/2016 22:24, Sergey Fedorov wrote:
>>>> I remember, I've just found that we discussed it in this thread:
On 08/07/16 23:52, Paolo Bonzini wrote:
>
> On 08/07/2016 22:24, Sergey Fedorov wrote:
>> I remember, I've just found that we discussed it in this thread:
>>
>> http://thread.gmane.org/gmane.comp.emulators.qemu/401723/focus=406852
>>
>> I was thinking of
On 11/07/16 15:36, Christian Borntraeger wrote:
> On 07/06/2016 11:14 PM, Sergey Fedorov wrote:
>> From: Alex Bennée <alex.ben...@linaro.org>
>>
>> CPUState is a fairly common pointer to pass to these helpers. This means
>> if you need other arguments for the
On 08/07/16 23:18, Paolo Bonzini wrote:
>
> On 08/07/2016 21:55, Sergey Fedorov wrote:
>> On 08/07/16 17:07, Paolo Bonzini wrote:
>>> On 08/07/2016 14:32, Sergey Fedorov wrote:
>>>>>>>> I think we can do even better. One option is using a separate
On 05/07/16 19:18, Alex Bennée wrote:
> Lock contention in the hot path of moving between existing patched
> TranslationBlocks is the main drag in multithreaded performance. This
> patch pushes the tb_lock() usage down to the two places that really need
> it:
>
> - code generation (tb_gen_code)
On 08/07/16 17:07, Paolo Bonzini wrote:
>
> On 08/07/2016 14:32, Sergey Fedorov wrote:
>>>>>> I think we can do even better. One option is using a separate tiny lock
>>>>>> to protect direct jump set/reset instead of tb_lock.
>>>> If you hav
On 08/07/16 21:03, Alex Bennée wrote:
> Sergey Fedorov <serge.f...@gmail.com> writes:
>
>> On 07/07/16 17:18, Sergey Fedorov wrote:
>>> On 05/07/16 19:18, Alex Bennée wrote:
>>>> Lock contention in the hot path of moving between existing patched
On 07/07/16 17:18, Sergey Fedorov wrote:
> On 05/07/16 19:18, Alex Bennée wrote:
>> Lock contention in the hot path of moving between existing patched
>> TranslationBlocks is the main drag in multithreaded performance. This
>> patch pushes the tb_lock() usage down to the t
On 07/07/16 17:18, Sergey Fedorov wrote:
> On 05/07/16 19:18, Alex Bennée wrote:
>> Lock contention in the hot path of moving between existing patched
>> TranslationBlocks is the main drag in multithreaded performance. This
>> patch pushes the tb_lock() usage down to the t
On 05/07/16 19:18, Alex Bennée wrote:
> From: Sergey Fedorov <serge.f...@gmail.com>
>
> First, ensure atomicity of CPU's 'tb_jmp_cache' access by:
> * using atomic_read() to look up a TB when not holding 'tb_lock';
> * using atomic_write() to remove a TB from each CPU's l
On 08/07/16 14:02, Paolo Bonzini wrote:
>> On 08/07/16 11:40, Paolo Bonzini wrote:
>>> Even better: add a "bool *tb_locked" argument to tb_find_slow, and
>>> don't move the mmap_lock release. Then tb_find_fast knows directly
>>> whether tb_lock is taken, and you don't need any of tb_lock_reset
On 08/07/16 11:40, Paolo Bonzini wrote:
> Even better: add a "bool *tb_locked" argument to tb_find_slow, and
> don't move the mmap_lock release. Then tb_find_fast knows directly
> whether tb_lock is taken, and you don't need any of tb_lock_reset
> or mmap_lock_reset.
I think we can do even
On 07/07/16 17:08, Sergey Fedorov wrote:
> On 05/07/16 19:18, Alex Bennée wrote:
>> This ensures that if we find the TB on the slow path that tb->page_addr
>> is correctly set before being tested.
>>
>> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
> R
On 07/07/16 22:46, Sergey Fedorov wrote:
> On 07/07/16 22:36, Alex Bennée wrote:
>> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>>
>>> From: Sergey Fedorov <serge.f...@gmail.com>
>>>
>>> Signed-off-by: Sergey Fedorov <serge.f...@
On 07/07/16 00:15, Sergey Fedorov wrote:
> From: Sergey Fedorov <serge.f...@gmail.com>
>
> Use async_safe_run_on_cpu() to make tb_flush() thread safe.
I've just realized that this allows removing CPUState::tb_flushed as well.
Regards,
Sergey
On 07/07/16 22:36, Alex Bennée wrote:
> Sergey Fedorov <sergey.fedo...@linaro.org> writes:
>
>> From: Sergey Fedorov <serge.f...@gmail.com>
>>
>> Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
>> Signed-off-by: Sergey Fedorov <se
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
bsd-user/mmap.c | 7 +++
include/exec/exec-all.h | 2 ++
linux-user/mmap.c | 7 +++
3 files chan
From: Sergey Fedorov <serge.f...@gmail.com>
Here is my attempt ;-)
Kind regards,
Sergey
Sergey Fedorov (3):
tcg: Introduce mmap_lock_reset()
tcg: Introduce tb_lock_locked()
tcg: Avoid bouncing tb_lock between tb_gen_code() and tb_add_jump()
bsd-user/mmap.c | 7 ++
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
cpu-exec.c | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
From: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <serge.f...@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedo...@linaro.org>
---
tcg/tcg.h | 1 +
translate-all.c | 9 +
2 files changed, 10 insertions(+)
diff --git a/tcg/tcg.h
moved the #ifdef CONFIG_USER_ONLY stuff as the
> locks become NOPs anyway until the MTTCG work is completed.
>
> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
> Reviewed-by: Richard Henderson <r...@twiddle.net>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
On 05/07/16 19:18, Alex Bennée wrote:
> This ensures that if we find the TB on the slow path that tb->page_addr
> is correctly set before being tested.
>
> Signed-off-by: Alex Bennée <alex.ben...@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedo...@linaro.org>
&
I was not sure if the language I used in the source code comments is
100% correct, so it would be great if someone could check whether it is
easy to understand ;)
Thanks,
Sergey
On 05/07/16 19:18, Alex Bennée wrote:
> From: Sergey Fedorov <serge.f...@gmail.com>
>
> First, ensure ato