Re: [PATCH 0/4] hw/nmi: Remove @cpu_index argument

2024-03-20 Thread Mark Burton
> On 20 Mar 2024, at 16:00, Peter Maydell wrote: > > WARNING: This email originated from outside of Qualcomm. Please be wary of > any links or attachments, and do not enable macros. > > On Wed, 20 Mar 2024 at 14:10, Mark Burton wrote: >> I’d broaden this to all ’

Re: [PATCH 0/4] hw/nmi: Remove @cpu_index argument

2024-03-20 Thread Mark Burton
> On 20 Mar 2024, at 14:55, Peter Maydell wrote: > > On Wed, 20 Mar 2024 at 12:31, Mark Burton wrote: >>> On 20 Mar 2024, at

Re: [PATCH 0/4] hw/nmi: Remove @cpu_index argument

2024-03-20 Thread Mark Burton
> On 20 Mar 2024, at 13:00, Peter Maydell wrote: > > On Wed, 20 Mar 2024 at 11:20, Philippe Mathieu-Daudé > wrote: >> >> On 20/2/24 16:19, Thomas Huth

Re: [PATCH 0/4] hw/nmi: Remove @cpu_index argument

2024-03-20 Thread Mark Burton
> On 20 Mar 2024, at 12:19, Philippe Mathieu-Daudé wrote: > > On 20/2/24 16:19, Thomas Huth wrote: >> On 20/02/2024 16.08, Philippe Mathieu-Daudé wrote: >>>

Re: Call for agenda for 2023-09-19 QEMU developers call

2023-09-18 Thread Mark Burton
Seems like we’ve had a bit of a ‘slower’ time in recent weeks - presumably “summer time”. If I understand correctly, Linaro are not going to present this week? Maybe we should re-group in the next meeting, so I’m happy to have the meeting tomorrow if Linaro can make it, otherwise for 3rd

Re: QEMU developers fortnightly conference for 2023-08-08

2023-08-03 Thread Mark Burton
Are too many people away right now? Cheers Mark. On 3 Aug 2023, at 10:33, juan.quint...@gmail.com wrote: Hi Do you have any

Re: [PATCH] hvf: Handle EC_INSNABORT

2023-06-02 Thread Mark Burton
> On 2 Jun 2023, at 11:07, Peter Maydell wrote: > > On Thu, 1 Jun 2023 at 20:21, Mark Burton wrote: >> >> >>

Re: [PATCH] hvf: Handle EC_INSNABORT

2023-06-01 Thread Mark Burton
> On 1 Jun 2023, at 18:45, Peter Maydell wrote: > > On Thu, 1 Jun 2023 at 17:00, Mark Burton wrote: >> This patch came from a di

Re: [PATCH] hvf: Handle EC_INSNABORT

2023-06-01 Thread Mark Burton
re is a read or a write (through memory region ops > callbacks). > > When enabling HVF, we hit an instruction abort on the very first instruction > as there is no memory region alias for it yet in system memory. > >>> Signed-off-by: Antonio Caggiano >>> Co-auth

Re: QEMU developers fortnightly call for agenda - 2023-05-29

2023-05-30 Thread Mark Burton
(Sorry Juan - I do have a question.) We do have a question about using instruction abort in HVF/KVM - we’d like to explain, and ask if a patch would be acceptable. Cheers Mark. > On 28 May 2023, at 19:50, juan.quint...@gmail.com wrote: >

Re: QEMU developers fortnightly call for agenda for 2023-05-16

2023-05-09 Thread Mark Burton
I’d appreciate an update on single binary. Also, what’s the status on the “icount” plugin? (Also I could do with some help on a specific issue on KVM/HVF memory handling.) Cheers Mark. On 9 May 2023, at 14:06, juan.quint...@gmail.com wrote:

Re: [PATCH] memory: Do not print MR priority in flatview HMP output

2022-12-28 Thread Mark Burton
Is there any chance that between 7.1 and 7.2 ‘something’ happened to make it so that QEMU ‘cares more’ about e.g. when memory regions are added/removed? I seem to get an abort because a memory region has not been completely set up in 7.2 (while it is being flattened, actually) - in 7.1 that never

Re: Single system binary & Dynamic machine model (KVM developers conference call 2022-12-13)

2022-12-13 Thread Mark Burton
Happy with any choice so long as the meeting can be opened (either by any of us, or by a ‘larger’ number of people) - it’s not fair that it relies on one person. Cheers Mark. On 13/12/2022, 17:35, "Felipe Franciosi" wrote:

Re: Single system binary & Dynamic machine model (KVM developers conference call 2022-12-13)

2022-12-13 Thread Mark Burton
(BTW, really happy if we use another approach next time - we only used it because we didn’t have another quick choice when the meeting didn’t open….) Cheers Mark On 13/12/2022, 15:52, "Marc-André Lureau" wrote:

Re: Single system binary & Dynamic machine model (KVM developers conference call 2022-12-13)

2022-12-13 Thread Mark Burton
On 13/12/2022, 15:17, "Stefan Hajnoczi" wrote: On Tue, 13 Dec 2022 at 09:08, Philippe Mathieu-Daudé wrote: > > On 12/12/22 00:41, Philippe Mathieu-Daudé wrote: > >

Re: Any interest in a QEMU emulation BoF at KVM Forum?

2022-08-31 Thread Mark Burton
I am VERY interested in these topics from a Qualcomm perspective. I’ll be there from Tuesday morning; I think a “BoF” would be very helpful… Cheers Mark. On 31/08/2022, 17:20, "Alex Bennée" wrote:

Re: "Startup" meeting (was Re: Meeting today?)

2022-02-08 Thread Mark Burton
Hi Juan, is there a meeting today? I think the plan was to talk about ’startup’ itself ? Cheers Mark. > On 25 Jan 2022, at 11:58, Juan Quintela wrote: > > Philippe Mathieu-Daudé wrote: >> On 1/25/22 09:50, Juan Quintela wrote: >>> Mark Burton wrote: >>

Re: "Startup" meeting (was Re: Meeting today?)

2022-01-23 Thread Mark Burton
. > On 17 Jan 2022, at 18:13, Kevin Wolf wrote: > > Am 11.01.2022 um 11:22 hat Mark Burton geschrieben: >> That is my understanding… >> See you there! > > Unfortunately, I missed this whole thread until now. > > If the meeting did happen, has anyone taken notes? And

Re: "Startup" meeting (was Re: Meeting today?)

2022-01-11 Thread Mark Burton
e.com/calendar/embed?src=dG9iMXRqcXAzN3Y4ZXZwNzRoMHE4a3BqcXNAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ > > On 1/6/22 12:23, Daniel P. Berrangé wrote: >> No one objected, so I think we can go for the 11th. >> >> On Thu, Jan 06, 2022 at 12:21:56PM +0100, Mark Burton wrote: >>> Can we co

"Startup" meeting (was Re: Meeting today?)

2022-01-06 Thread Mark Burton
ites: > > > On Tue, Dec 14, 2021 at 12:37:43PM +0100, Markus Armbruster wrote: > >> Mark Burton mailto:mark.bur...@greensocs.com>> > >> writes: > >> > >> > I realise it’s very short notice, but what about having a discussion > >> &

Re: Redesign of QEMU startup & initial configuration

2021-12-16 Thread Mark Burton
>> >> Totally agree on this (of course). >> >> That’s why I’m here - I care about the people who care about emulation :-) >> >> In general, what we are working on is exactly the ability to service the >> ‘complex’ emulation use case. No CLI, nor single ‘config’ file, will be good >> enough,

Re: Redesign of QEMU startup & initial configuration

2021-12-16 Thread Mark Burton
> On 16 Dec 2021, at 16:40, Daniel P. Berrangé wrote: > > On Thu, Dec 16, 2021 at 04:28:29PM +0100, Paolo Bonzini wrote: >> On 12/16/21 11:24, Markus Armbruster wrote: Not really, in particular the startup has been mostly reworked already and I disagree that it is messy.

Re: Redesign of QEMU startup & initial configuration

2021-12-15 Thread Mark Burton
FWIW I agree. (Which probably means something’s hiding somewhere :-) ) Cheers Mark. > On 15 Dec 2021, at 21:00, Paolo Bonzini wrote: > > On 12/14/21 12:48, Markus Armbruster wrote: >> Let's start with where we (hopefully) agree: > > More or less I do agree with this, except for a couple

Re: Redesign of QEMU startup & initial configuration

2021-12-14 Thread Mark Burton
> On 14 Dec 2021, at 16:12, Markus Armbruster wrote: > > Daniel P. Berrangé writes: > >> On Tue, Dec 14, 2021 at 03:42:52PM +0100, Mark Burton wrote: >>> I think we’re talking at cross purposes, and probably we agree (not sure). >>> I’ll top quote

Re: Redesign of QEMU startup & initial configuration

2021-12-14 Thread Mark Burton
, and we have work to do on QAPI. If somebody wants to build a new CLI, with a new ‘high level’ interface, using QAPI - let them! Cheers Mark. > On 14 Dec 2021, at 14:48, Daniel P. Berrangé wrote: > > On Tue, Dec 14, 2021 at 02:36:26PM +0100, Mark Burton wrote: >> >>

Re: Redesign of QEMU startup & initial configuration

2021-12-14 Thread Mark Burton
> On 14 Dec 2021, at 14:21, Daniel P. Berrangé wrote: > > On Tue, Dec 14, 2021 at 02:11:11PM +0100, Mark Burton wrote: >> >> >>> On 14 Dec 2021, at 14:05, Daniel P. Berrangé wrote: >>> >>> On Mon, Dec 13, 2021 at 09:22:14PM +0100, Mark Burto

Re: Redesign of QEMU startup & initial configuration

2021-12-14 Thread Mark Burton
> On 14 Dec 2021, at 14:05, Daniel P. Berrangé wrote: > > On Mon, Dec 13, 2021 at 09:22:14PM +0100, Mark Burton wrote: >> >> >>> On 13 Dec 2021, at 18:59, Daniel P. Berrangé wrote: >>> >>> …. we no longer have to solve everything

Re: Redesign of QEMU startup & initial configuration

2021-12-14 Thread Mark Burton
> On 14 Dec 2021, at 12:48, Markus Armbruster wrote: > > Paolo Bonzini writes: > >> On 12/13/21 16:28, Markus Armbruster wrote: >>> Paolo Bonzini writes: >>> On 12/10/21 14:54, Markus Armbruster wrote: > I want an open path to a single binary. Taking years to get there is >

Re: Meeting today?

2021-12-14 Thread Mark Burton
Works for me Cheers Mark. > On 14 Dec 2021, at 12:37, Markus Armbruster wrote: > > Mark Burton writes: > >> I realise it’s very short notice, but what about having a discussion today >> at 15:00 ? > > I have a conflict today. I could try to reschedule, but

Meeting today?

2021-12-13 Thread Mark Burton
I realise it’s very short notice, but what about having a discussion today at 15:00 ? Cheers Mark. > On 13 Dec 2021, at 19:53, Daniel P. Berrangé wrote: > > On Mon, Dec 13, 2021 at 07:37:49PM +0100, Paolo Bonzini wrote: >> On 12/13/21 19:07, Daniel P. Berrangé wrote: >>> - /usr/bin/qemu (or

Re: Redesign of QEMU startup & initial configuration

2021-12-13 Thread Mark Burton
> On 13 Dec 2021, at 18:59, Daniel P. Berrangé wrote: > > …. we no longer have to solve everything > Ourselves. I support this sentiment. Let’s re-factor the code so people can build what they need using an API. Actually, ‘QEMU’ need only support the existing CLI, and provide a suitable

Re: Redesign of QEMU startup & initial configuration

2021-12-10 Thread Mark Burton
> On 10 Dec 2021, at 15:26, Daniel P. Berrangé wrote: > > On Fri, Dec 10, 2021 at 03:15:50PM +0100, Mark Burton wrote: >> >> >>> On 10 Dec 2021, at 12:25, Daniel P. Berrangé wrote: >>> >>> On Fri, Dec 10, 2021 at 09:34:41AM +0100, Paol

Re: Redesign of QEMU startup & initial configuration

2021-12-10 Thread Mark Burton
> On 10 Dec 2021, at 12:25, Daniel P. Berrangé wrote: > > On Fri, Dec 10, 2021 at 09:34:41AM +0100, Paolo Bonzini wrote: >> On 12/9/21 20:11, Daniel P. Berrangé wrote: They still need to bootstrap a QMP monitor, and for that, CLI is fine as long as it's simple and stable. >>

Re: Redesign of QEMU startup & initial configuration

2021-12-09 Thread Mark Burton
I’ll take the liberty to cut one part (I agree with much of what you say elsewhere) > On 9 Dec 2021, at 20:11, Daniel P. Berrangé wrote: > > As illustrated earlier, I'd really like us to consider being a bit > more adventurous on the CLI side. I'm convinced that a CLI for > directly

Re: [RFC PATCH v2 00/16] Initial support for machine creation via QMP

2021-10-12 Thread Mark Burton
Fixed Cheers Mark. > On 13 Oct 2021, at 00:16, Alistair Francis wrote: > > On Thu, Sep 23, 2021 at 2:22 AM Damien Hedde > wrote: >> >> Hi, >> >> The goal of this work is to bring dynamic machine creation to QEMU: >> we want to setup a machine without compiling a specific machine C >> code.

Re: [Qemu-devel] [RFC PATCH 0/6] Clock and power gating support

2018-07-30 Thread Mark Burton
Hi Pete, sorry for the tardy reply, I’m not in the office. You’re right, we should have co-ordinated the 2 patches better; sorry for that. Regarding this new patchset, we think there is a degree of complementarity with the clocktree one. Here we simply add a couple of generic power states to a

Re: [Qemu-devel] MTTCG Sync-up call today?

2016-04-25 Thread Mark Burton
Fred’s away this week too Cheers Mark. > On 25 Apr 2016, at 12:32, alvise rigo wrote: > > Hi Alex, > > On Mon, Apr 25, 2016 at 11:53 AM, Alex Bennée > wrote: > Hi, > > We are due to have a sync-up call

Re: [Qemu-devel] MTTCG Sync-up call today? Agenda items?

2016-04-11 Thread Mark Burton
Thanks Alex, sorry I was so ‘mute’ - seems I have gremlins on the phone line again! Cheers. Good to see Sergey on; I’ve added him to the list. Mark. > On 11 Apr 2016, at 15:38, Alex Bennée wrote: > > > Alex Bennée writes: > >> Hi, >> >> It's

Re: [Qemu-devel] MTTCG Sync-up call today? Agenda items?

2016-04-11 Thread Mark Burton
So see you all online - on the normal number Cheers Mark. > On 11 Apr 2016, at 13:45, alvise rigo wrote: > > Hi Alex, > > On Mon, Apr 11, 2016 at 1:21 PM, Alex Bennée wrote: >> >> Hi, >> >> It's been awhile since we synced-up with

Re: [Qemu-devel] MTTCG Sync-up call today? Agenda items?

2016-04-11 Thread Mark Burton
Good plan :-) Cheers Mark. > On 11 Apr 2016, at 13:21, Alex Bennée wrote: > > Hi, > > It's been awhile since we synced-up with quite weeks and Easter out of > the way are we good for a call today? > > Some items I can think would be worth covering: > > - State of

Re: [Qemu-devel] MTTCG Tasks (kvmforum summary)

2015-09-04 Thread Mark Burton
> On 4 Sep 2015, at 11:41, Edgar E. Iglesias wrote: > > On Fri, Sep 04, 2015 at 11:25:33AM +0200, Paolo Bonzini wrote: >> >> >> On 04/09/2015 09:49, Alex Bennée wrote: >>> * Signal free qemu_cpu_kick (Paolo) >>> >>> I don't know much about this patch set but I

Re: [Qemu-devel] MTTCG Tasks (kvmforum summary)

2015-09-04 Thread Mark Burton
> On 4 Sep 2015, at 14:38, Lluís Vilanova wrote: > > dovgaluk writes: > >> Hi! >> Alex Bennée писал 2015-09-04 10:49: >>> * What to do about icount? >>> >>> What is the impact of multi-thread on icount? Do we need to disable it >>> for MTTCG or can it be correct per-cpu?

Re: [Qemu-devel] MTTCG next version?

2015-08-26 Thread Mark Burton
Just to remind everybody as well - we’ll have a call next Monday to co-ordinate. It would be good to make sure everybody knows which bit of this everybody else is committing to do, so we avoid replication and treading on each other’s patch sets. Cheers Mark. On 26 Aug 2015, at 14:18, Frederic

Re: [Qemu-devel] Summary MTTCG related patch sets

2015-07-20 Thread Mark Burton
Huge thanks Alex, really good summary Cheers Mark. On 20 Jul 2015, at 18:17, Alex Bennée alex.ben...@linaro.org wrote: Hi, Following this afternoons call I thought I'd summarise the state of the various patch series and their relative dependencies. We re-stated the aim should be to get

Re: [Qemu-devel] [RFC v3 00/13] Slow-path for atomic instruction translation

2015-07-10 Thread Mark Burton
big snip To be clear, for a normal user (e.g. they boot linux, they run some apps, etc)..., if they use only one core, is it true that they will see no difference in performance? For a ‘normal user’ who does use multi-core, are you saying that a typical boot is slower? Cheers Mark. On 10

Re: [Qemu-devel] [RFC PATCH V6 15/18] cpu: introduce tlb_flush*_all.

2015-07-06 Thread Mark Burton
Paolo, Alex, Alexander, Talking to Fred after the call about ways of avoiding the ‘stop the world’ (or rather ‘sync the world’) - we already discussed this on this thread. One thing that would be very helpful would be some test cases around this. We could then use Fred’s code to check some of

Re: [Qemu-devel] [RFC PATCH V3] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-19 Thread Mark Burton
On 19 Jun 2015, at 09:42, Paolo Bonzini pbonz...@redhat.com wrote: On 19/06/2015 09:40, Mark Burton wrote: On 19/06/2015 09:29, Mark Burton wrote: Does anybody know if the current atomic_cmpxchg will support 64 bit on a (normal) 32 bit x86, or do we need to special case

Re: [Qemu-devel] [RFC PATCH V3] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-19 Thread Mark Burton
On 19 Jun 2015, at 09:31, Paolo Bonzini pbonz...@redhat.com wrote: On 19/06/2015 09:29, Mark Burton wrote: Does anybody know if the current atomic_cmpxchg will support 64 bit on a (normal) 32 bit x86, or do we need to special case that with cmpxchg8b ? (I get the impression
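The question raised in this thread - whether a 64-bit compare-and-swap works on a 32-bit x86 host, or needs a hand-written cmpxchg8b special case - can be illustrated with the C11 builtins that such helper macros typically wrap. On IA-32 the compiler generally lowers an 8-byte compare-exchange to cmpxchg8b (or a libatomic call) by itself. A minimal sketch, not QEMU's actual atomic_cmpxchg implementation:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdatomic.h>

/* Portable 64-bit compare-and-swap via C11 atomics. On a 32-bit x86
 * target the compiler emits cmpxchg8b (or falls back to libatomic),
 * so no hand-written special case is needed in the caller.
 * Returns true if the swap happened. */
static bool cas64(_Atomic uint64_t *p, uint64_t expected, uint64_t desired)
{
    return atomic_compare_exchange_strong(p, &expected, desired);
}
```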

Re: [Qemu-devel] [RFC PATCH V3] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-19 Thread Mark Burton
On 18 Jun 2015, at 21:53, Peter Maydell peter.mayd...@linaro.org wrote: On 18 June 2015 at 19:32, Mark Burton mark.bur...@greensocs.com wrote: for the 1size thing - I think that code has been used elsewhere, which is a little worrying - I’ll check. On 18 Jun 2015, at 17:56, Peter

Re: [Qemu-devel] [RFC PATCH V3] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-18 Thread Mark Burton
for the 1size thing - I think that code has been used elsewhere, which is a little worrying - I’ll check. On 18 Jun 2015, at 17:56, Peter Maydell peter.mayd...@linaro.org wrote: On 18 June 2015 at 16:44, fred.kon...@greensocs.com wrote: +uint64_t oldval, *p; +p =

Re: [Qemu-devel] RFC Multi-threaded TCG design document

2015-06-17 Thread Mark Burton
On 17 Jun 2015, at 18:57, Dr. David Alan Gilbert dgilb...@redhat.com wrote: * Alex Benn?e (alex.ben...@linaro.org) wrote: Hi, Shared Data Structures == Global TCG State We need to protect the entire code generation cycle including any post

Re: [Qemu-devel] RFC Multi-threaded TCG design document

2015-06-15 Thread Mark Burton
I think we SHOULD use the wiki - and keep it current. A lot of what you have is in the wiki too, but I’d like to see the wiki updated. We will add our stuff there too… Cheers Mark. On 15 Jun 2015, at 12:06, Alex Bennée alex.ben...@linaro.org wrote: Frederic Konrad

Re: [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-09 Thread Mark Burton
On 9 Jun 2015, at 11:12, Alex Bennée alex.ben...@linaro.org wrote: fred.kon...@greensocs.com writes: From: KONRAD Frederic fred.kon...@greensocs.com This mechanism replaces the existing load/store exclusive mechanism which seems to be broken for multithread. It follows the

Re: [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-09 Thread Mark Burton
On 9 Jun 2015, at 15:59, Alex Bennée alex.ben...@linaro.org wrote: fred.kon...@greensocs.com writes: From: KONRAD Frederic fred.kon...@greensocs.com snip +DEF_HELPER_4(atomic_cmpxchg64, i32, env, i32, i64, i32) +DEF_HELPER_2(atomic_check, i32, env, i32)

Re: [Qemu-devel] [RFC PATCH] Use atomic cmpxchg to atomically check the exclusive value in a STREX

2015-06-09 Thread Mark Burton
On 9 Jun 2015, at 15:55, Alex Bennée alex.ben...@linaro.org wrote: Alex Bennée alex.ben...@linaro.org writes: fred.kon...@greensocs.com writes: From: KONRAD Frederic fred.kon...@greensocs.com This mechanism replaces the existing load/store exclusive mechanism which seems to be

Re: [Qemu-devel] [RFC 0/5] Slow-path for atomic instruction translation

2015-05-06 Thread Mark Burton
On 6 May 2015, at 18:19, alvise rigo a.r...@virtualopensystems.com wrote: Hi Mark, Firstly, thank you for your feedback. On Wed, May 6, 2015 at 5:55 PM, Mark Burton mark.bur...@greensocs.com wrote: A massive thank you for doing this work Alvise, On our side, the patch we suggested

Re: [Qemu-devel] [RFC 0/5] Slow-path for atomic instruction translation

2015-05-06 Thread Mark Burton
A massive thank you for doing this work Alvise. On our side, the patch we suggested is only applicable for ARM, though the mechanism would work for any CPU - but it doesn’t force atomic instructions out through the slow path. This is either a very good thing (it’s much faster), or a

Re: [Qemu-devel] [RFC 0/5] Slow-path for atomic instruction translation

2015-05-06 Thread Mark Burton
By the way - would it help to send the atomic patch we did separately from the whole MTTCG patch set? Or have you already taken a look at that - it’s pretty short. Cheers Mark. On 6 May 2015, at 17:51, Paolo Bonzini pbonz...@redhat.com wrote: On 06/05/2015 17:38, Alvise Rigo wrote: This

Re: [Qemu-devel] [RFC 00/10] MultiThread TCG.

2015-03-31 Thread Mark Burton
understood. Cheers Mark. On 30 Mar 2015, at 23:46, Peter Maydell peter.mayd...@linaro.org wrote: On 30 March 2015 at 07:52, Mark Burton mark.bur...@greensocs.com wrote: So - Fred is unwilling to send the patch set as it stands, because frankly this part is totally broken

Re: [Qemu-devel] [RFC 00/10] MultiThread TCG.

2015-03-30 Thread Mark Burton
To add some detail: unfortunately Fred is away this week, so we won’t get this patch set to you as quickly as I’d have liked. We have a ‘working’ implementation - where ‘working’ is limited to a couple of SMP cores, booting and running Dhrystone. The performance improvement we get is close to

Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.

2015-03-03 Thread Mark Burton
? On 2 Mar 2015, at 13:27, Peter Maydell peter.mayd...@linaro.org wrote: On 27 February 2015 at 16:54, Mark Burton mark.bur...@greensocs.com wrote: On 26 Feb 2015, at 23:56, Peter Maydell peter.mayd...@linaro.org wrote: cpu_physical_memory_rw would bypass the TLB and so be much

Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.

2015-03-03 Thread Mark Burton
On 3 Mar 2015, at 16:32, Paolo Bonzini pbonz...@redhat.com wrote: On 03/03/2015 16:29, Mark Burton wrote: ps. on our bug - we believe somehow the STREX is being marked as failed, but actually succeeds to write something. There are only 3 ways the strex can fail: 1/ the address

Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.

2015-03-03 Thread Mark Burton
we’ll try and clean a patch up to show just this……. THANKS! Cheers Mark. On 3 Mar 2015, at 16:34, Paolo Bonzini pbonz...@redhat.com wrote: On 03/03/2015 16:33, Mark Burton wrote: On 3 Mar 2015, at 16:32, Paolo Bonzini pbonz...@redhat.com wrote: On 03/03/2015 16:29, Mark

Re: [Qemu-devel] [RFC] Adding multithreads to LDREX/STREX.

2015-03-03 Thread Mark Burton
On 3 Mar 2015, at 18:09, Paolo Bonzini pbonz...@redhat.com wrote: On 03/03/2015 17:47, Mark Burton wrote: +inline void arm_exclusive_lock(void) +{ +if (!cpu_have_exclusive_lock) { +qemu_mutex_lock(cpu_exclusive_lock); +cpu_have_exclusive_lock = true
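The patch fragment quoted above is cut off mid-function. A minimal completion of the pattern it shows - a per-thread flag so the exclusive lock is not re-taken recursively - might look like the following. The names follow the quoted fragment, but the completion is a guess, and plain pthread primitives stand in for QEMU's qemu_mutex wrappers:

```c
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t cpu_exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
/* Per-thread flag: has *this* thread already taken the exclusive lock? */
static __thread bool cpu_have_exclusive_lock;

static inline void arm_exclusive_lock(void)
{
    if (!cpu_have_exclusive_lock) {
        pthread_mutex_lock(&cpu_exclusive_lock);
        cpu_have_exclusive_lock = true;
    }
}

static inline void arm_exclusive_unlock(void)
{
    if (cpu_have_exclusive_lock) {
        cpu_have_exclusive_lock = false;
        pthread_mutex_unlock(&cpu_exclusive_lock);
    }
}
```

The flag makes a second arm_exclusive_lock() on the same thread a no-op rather than a self-deadlock.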

[Qemu-devel] Fwd: [RFC] Adding multithreads to LDREX/STREX.

2015-03-03 Thread Mark Burton
Paolo - here is a partially cleaned up patch - it’s still not quite right - but maybe it’s enough so that you can see what we’re doing. There are changes in here that wouldn’t be sensible to upstream - like changing the address from 64 to 32 bit and needlessly moving it about - that’s just

Re: [Qemu-devel] [RFC 01/10] target-arm: protect cpu_exclusive_*.

2015-02-26 Thread Mark Burton
On 26 Feb 2015, at 23:56, Peter Maydell peter.mayd...@linaro.org wrote: On 27 February 2015 at 03:09, Frederic Konrad fred.kon...@greensocs.com wrote: On 29/01/2015 16:17, Peter Maydell wrote: On 16 January 2015 at 17:19, fred.kon...@greensocs.com wrote: From: KONRAD Frederic

Re: [Qemu-devel] Help on TLB Flush

2015-02-13 Thread Mark Burton
the memory barrier is on the cpu requesting the flush isn’t it (not on the CPU that is being flushed)? Cheers Mark. On 13 Feb 2015, at 10:34, Paolo Bonzini pbonz...@redhat.com wrote: On 12/02/2015 22:57, Peter Maydell wrote: The only requirement is that if the CPU that did the TLB

Re: [Qemu-devel] Help on TLB Flush

2015-02-13 Thread Mark Burton
Agreed Cheers Mark. On 13 Feb 2015, at 14:30, Lluís Vilanova vilan...@ac.upc.edu wrote: Mark Burton writes: On 13 Feb 2015, at 08:24, Peter Maydell peter.mayd...@linaro.org wrote: On 13 February 2015 at 07:16, Mark Burton mark.bur...@greensocs.com wrote: If the kernel is doing

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
On 12 Feb 2015, at 16:31, Dr. David Alan Gilbert dgilb...@redhat.com wrote: * Mark Burton (mark.bur...@greensocs.com) wrote: On 12 Feb 2015, at 16:01, Peter Maydell peter.mayd...@linaro.org wrote: On 12 February 2015 at 14:45, Alexander Graf ag...@suse.de wrote: On 12.02.2015, at 15

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
Up top - thanks Peter, I think you may give us an idea ! On 12 Feb 2015, at 23:10, Lluís Vilanova vilan...@ac.upc.edu wrote: Mark Burton writes: On 12 Feb 2015, at 16:38, Alexander Graf ag...@suse.de wrote: On 12.02.15 15:58, Peter Maydell wrote: On 12 February 2015 at 14:45

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
On 13 Feb 2015, at 08:24, Peter Maydell peter.mayd...@linaro.org wrote: On 13 February 2015 at 07:16, Mark Burton mark.bur...@greensocs.com wrote: If the kernel is doing this - then effectively - for X86, each CPU only flush’s it’s own TLB (from the perspective of Qemu) - correct

[Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
TLB Flush: We have spent a few days on this issue, and still haven’t resolved the best path. Our solution seems to work, most of the time, but we still have some strange issues - so I want to check that what we are proposing has a chance of working. Our plan is to allow all CPU’s to

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
Graf ag...@suse.de wrote: On 12.02.2015, at 15:35, Mark Burton mark.bur...@greensocs.com wrote: TLB Flush: We have spent a few days on this issue, and still haven’t resolved the best path. Our solution seems to work, most of the time, but we still have some strange issues - so I

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
On 12 Feb 2015, at 16:01, Peter Maydell peter.mayd...@linaro.org wrote: On 12 February 2015 at 14:45, Alexander Graf ag...@suse.de wrote: On 12.02.2015, at 15:35, Mark Burton mark.bur...@greensocs.com wrote: We are proposing to implement this by signalling all other CPU’s to exit
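The proposal quoted here - signal all other CPUs to exit so that each flushes its own TLB at a safe point - can be sketched as a small request/acknowledge handshake. This is a toy with invented names (ToyCPU, tlb_flush_requested), not QEMU's actual mechanism:

```c
#include <stdbool.h>
#include <stdatomic.h>

typedef struct {
    atomic_bool tlb_flush_requested;  /* set by any CPU, cleared by owner */
    atomic_bool exit_request;         /* kick the vCPU out of its TB loop */
} ToyCPU;

/* Called by the CPU performing a "flush all" style operation. */
static void request_remote_flush(ToyCPU *cpu)
{
    atomic_store(&cpu->tlb_flush_requested, true);
    atomic_store(&cpu->exit_request, true);   /* force it to a safe point */
}

/* Each vCPU runs this at the top of its execution loop (a safe point);
 * only the owning thread ever touches its own TLB. Returns true if a
 * flush was pending and has now been handled. */
static bool handle_pending_flush(ToyCPU *cpu)
{
    if (atomic_exchange(&cpu->tlb_flush_requested, false)) {
        /* tlb_flush(cpu) would go here in real code */
        return true;
    }
    return false;
}
```

The point of the exchange is that the request is consumed exactly once, by the thread that owns the TLB.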

Re: [Qemu-devel] Help on TLB Flush

2015-02-12 Thread Mark Burton
On 12 Feb 2015, at 16:38, Alexander Graf ag...@suse.de wrote: On 12.02.15 15:58, Peter Maydell wrote: On 12 February 2015 at 14:45, Alexander Graf ag...@suse.de wrote: almost nobody except x86 does global flushes All ARM TLB maintenance operations have both this CPU only and all

Re: [Qemu-devel] CPU TLB flush with multithread TCG.

2015-02-11 Thread Mark Burton
On 11 Feb 2015, at 04:33, Alex Bennée alex.ben...@linaro.org wrote: Frederic Konrad fred.kon...@greensocs.com writes: Hi everybody, In multithread tlb_flush is broken as CPUA can flush an other CPUB and CPUB can be executing code, and fixing this can be quite hard: * We need to

Re: [Qemu-devel] [RFC PATCH v8 04/21] replay: internal functions for replay log

2015-01-30 Thread Mark Burton
I believe thats what we concluded too Cheers Mark. On 30 Jan 2015, at 14:06, Paolo Bonzini pbonz...@redhat.com wrote: On 30/01/2015 13:56, Pavel Dovgaluk wrote: Could this be static? (I haven't checked). No, because it is used from several replay files. I wonder if that's a

Re: [Qemu-devel] [RFC 02/10] use a different translation block list for each cpu.

2015-01-29 Thread Mark Burton
I’ll let Fred answer the other points you make - which might help explain what we’re finding.. But - for this one… The idea for now is to keep things simple and have a thread per CPU and a ‘cache’ per thread. (Later we can look at reducing the caches). What we mean by a ‘cache’ needs to be

Re: [Qemu-devel] global_mutex and multithread.

2015-01-16 Thread Mark Burton
On 16 Jan 2015, at 09:07, Jan Kiszka jan.kis...@siemens.com wrote: On 2015-01-16 08:25, Mark Burton wrote: On 15 Jan 2015, at 22:41, Paolo Bonzini pbonz...@redhat.com wrote: On 15/01/2015 21:53, Mark Burton wrote: Jan said he had it working at least on ARM (MusicPal). yeah - our

Re: [Qemu-devel] global_mutex and multithread.

2015-01-15 Thread Mark Burton
On 15 Jan 2015, at 21:27, Paolo Bonzini pbonz...@redhat.com wrote: On 15/01/2015 20:07, Mark Burton wrote: However - if we go this route -the current patch is only for x86. (apart from the fact that we still seem to land in a deadlock…) Jan said he had it working at least on ARM

Re: [Qemu-devel] global_mutex and multithread.

2015-01-15 Thread Mark Burton
Still in agony on this issue - I’ve CC’d Jan as his patch looks important… the patch below would seem to offer by far the best result here. (If only we could get it working ;-) ) It allows threads to proceed as we want them to, and it means we don’t have to ‘count’ the number of

Re: [Qemu-devel] global_mutex and multithread.

2015-01-15 Thread Mark Burton
On 15 Jan 2015, at 22:41, Paolo Bonzini pbonz...@redhat.com wrote: On 15/01/2015 21:53, Mark Burton wrote: Jan said he had it working at least on ARM (MusicPal). yeah - our problem is when we enable multi-threads - which I dont believe Jan did… Multithreaded TCG, or single

Re: [Qemu-devel] global_mutex and multithread.

2015-01-15 Thread Mark Burton
I think we call that flag “please don’t reallocate this TB until at least after a CPU has exited and we do a global flush”… So if we sync and get all CPUs to exit on a global flush, this flag is only there as a figment of our imagination… e.g. we’re safe without it? Wish I could say the same of

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-18 Thread Mark Burton
On 17 Dec 2014, at 17:39, Peter Maydell peter.mayd...@linaro.org wrote: On 17 December 2014 at 16:29, Mark Burton mark.bur...@greensocs.com wrote: On 17 Dec 2014, at 17:27, Peter Maydell peter.mayd...@linaro.org wrote: I think a mutex is fine, personally -- I just don't want to see fifteen

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-18 Thread Mark Burton
On 18 Dec 2014, at 13:24, Alexander Graf ag...@suse.de wrote: On 18.12.14 10:12, Mark Burton wrote: On 17 Dec 2014, at 17:39, Peter Maydell peter.mayd...@linaro.org wrote: On 17 December 2014 at 16:29, Mark Burton mark.bur...@greensocs.com wrote: On 17 Dec 2014, at 17:27, Peter

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-18 Thread Mark Burton
Dec 2014, at 13:24, Alexander Graf ag...@suse.de wrote: On 18.12.14 10:12, Mark Burton wrote: On 17 Dec 2014, at 17:39, Peter Maydell peter.mayd...@linaro.org wrote: On 17 December 2014 at 16:29, Mark Burton mark.bur...@greensocs.com wrote: On 17 Dec 2014, at 17:27, Peter Maydell

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-18 Thread Mark Burton
On 18 Dec 2014, at 15:44, Alexander Graf ag...@suse.de wrote: On 18.12.14 15:20, Mark Burton wrote: On 18/12/2014 13:24, Alexander Graf wrote: That's the nice thing about transactions - they guarantee that no other CPU accesses the same cache line at the same time. So you're safe

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-18 Thread Mark Burton
In other words — the back-end (slow path) memory interface should look ‘transactional’…? Yeah, the semantics should be tied to what TM would give you. We can always be more safe than TM in our fallback implementation, but I wouldn't want to see semantic optimizations tied to the MMIO

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-17 Thread Mark Burton
On 17 Dec 2014, at 11:28, Alexander Graf ag...@suse.de wrote: On 17.12.14 11:27, Frederic Konrad wrote: On 16/12/2014 17:37, Peter Maydell wrote: On 16 December 2014 at 09:13, fred.kon...@greensocs.com wrote: From: KONRAD Frederic fred.kon...@greensocs.com This adds a lock to avoid

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-17 Thread Mark Burton
On 17 Dec 2014, at 11:45, Alexander Graf ag...@suse.de wrote: On 17.12.14 11:31, Mark Burton wrote: On 17 Dec 2014, at 11:28, Alexander Graf ag...@suse.de wrote: On 17.12.14 11:27, Frederic Konrad wrote: On 16/12/2014 17:37, Peter Maydell wrote: On 16 December 2014 at 09:13

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-17 Thread Mark Burton
Actually - I don't see any other option. Playing with the ideas - it seems to me that if we were to implement ‘generic’ lock/unlock instructions which could then somehow be ‘combined’ with loads/stores, then we would be relying on an optimisation step to ‘notice’ that this could be combined into

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-17 Thread Mark Burton
Sorry - I should have replied to this Peter. I agree with you - I don't know how much overlap we’ll find with different architectures. But if we stick to the more generic ‘lock/unlock’, I don't see how this is going to help us output thread-safe code without going through a mutex - at which point

Re: [Qemu-devel] [RFC PATCH] target-arm: protect cpu_exclusive_*.

2014-12-17 Thread Mark Burton
On 17 Dec 2014, at 17:27, Peter Maydell peter.mayd...@linaro.org wrote: On 17 December 2014 at 16:17, Mark Burton mark.bur...@greensocs.com wrote: Sorry - I should have replied to this Peter. I agree with you - I don't know how much overlap we’ll find with different architectures

[Qemu-devel] Atomic Instructions - comments please

2014-12-15 Thread Mark Burton
Comments please…. Choices for atomic instructions: The current approach (for ARM at least) for Ld and St exclusive inside QEMU simply records the address, and the value read, when an atomic read instruction executes. When an atomic write happens, it checks the value and address remain the

Re: [Qemu-devel] TCG multithread plan.

2014-12-15 Thread Mark Burton
(please note the address of the match list server is mt...@listserver.greensocs.com mailto:mt...@listserver.greensocs.com) On 9 Dec 2014, at 18:57, Lluís Vilanova vilan...@ac.upc.edu wrote: Frederic Konrad writes: Hi everybody, Here is the plan we will follow: We will be focusing -

Re: [Qemu-devel] Atomic Instructions - comments please

2014-12-15 Thread Mark Burton
, Mark Burton mark.bur...@greensocs.com wrote: One proposal is ‘simply’ to add a mutex around this code, such that multi-threaded TCG will correctly update/read these saved address/values. This _should_ maintain the status-quo. Things that were broken before will remain broken, nothing new should

Re: [Qemu-devel] Atomic Instructions - comments please

2014-12-15 Thread Mark Burton
(not address of mttcg list server) On 15 Dec 2014, at 14:32, Paolo Bonzini pbonz...@redhat.com wrote: On 15/12/2014 14:28, Peter Maydell wrote: Personally I would start out with this approach. We're going to need a “do this whole sequence atomically wrt other guest CPUs” mechanism

Re: [Qemu-devel] Atomic Instructions - comments please

2014-12-15 Thread Mark Burton
On 15 Dec 2014, at 14:39, Peter Maydell peter.mayd...@linaro.org wrote: [I'm getting bounces from mt...@greensocs.com so have taken them off cc: 550 5.1.1 mt...@greensocs.com: Recipient address rejected: User unknown in virtual mailbox table] the address should be:

[Qemu-devel] MultiThread TCG mail list

2014-12-03 Thread Mark Burton
All - to make things easier to track, there is now a mail list specifically for MultiThread development issues mt...@listserver.greensocs.com You can subscribe etc here: http://listserver.greensocs.com/wws/info/mttcg http://listserver.greensocs.com/wws/info/mttcg If you send

Re: [Qemu-devel] KVM call for agenda for 2014-12-08

2014-12-03 Thread Mark Burton
Hi Juan, is this for the 9th, or did I get the day wrong? Anyway - I would like to talk about multi-core - a huge thank you to everybody for your feedback. We'll be starting work on this, and I'd like to bring a proposal in terms of the path we'll take and get consensus on the first steps.
