On 06/15/2016 11:54 AM, Jan Beulich wrote:
>
> Yes, albeit two then isn't enough either if we want to fully address
> the basic issue here: We'd have to latch as many translations as
> there are possibly pages involved in the execution of a single
> instruction.
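As an editor's illustration of the idea Jan describes (one latched translation per page an instruction can touch), a minimal sketch follows. All names here (`struct emul_ctx`, `latch_lookup`, `latch_insert`, `MAX_TRANSLATIONS`) are hypothetical and do not correspond to Xen's actual hvm/emulate.c code:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define MAX_TRANSLATIONS 8  /* assumed worst case pages one instruction touches */

/* Hypothetical per-instruction cache of gva->gfn translations. */
struct translation {
    uint64_t gva_page;  /* guest-virtual page number */
    uint64_t gfn;       /* guest frame it mapped to when walked */
    int      valid;
};

struct emul_ctx {
    struct translation latch[MAX_TRANSLATIONS];
    unsigned int nr;
};

/* Look up a previously latched translation; returns 1 on hit, 0 on miss. */
static int latch_lookup(const struct emul_ctx *ctx, uint64_t gva, uint64_t *gfn)
{
    uint64_t page = gva >> PAGE_SHIFT;

    for (unsigned int i = 0; i < ctx->nr; i++)
        if (ctx->latch[i].valid && ctx->latch[i].gva_page == page) {
            *gfn = ctx->latch[i].gfn;
            return 1;
        }
    return 0;
}

/* Record a fresh page-walk result; returns 0 when the cache is full,
 * in which case the caller would have to re-walk instead of latching. */
static int latch_insert(struct emul_ctx *ctx, uint64_t gva, uint64_t gfn)
{
    if (ctx->nr >= MAX_TRANSLATIONS)
        return 0;
    ctx->latch[ctx->nr].gva_page = gva >> PAGE_SHIFT;
    ctx->latch[ctx->nr].gfn = gfn;
    ctx->latch[ctx->nr].valid = 1;
    ctx->nr++;
    return 1;
}
```

The point of the fixed-size array is exactly the open question in the thread: `MAX_TRANSLATIONS` must cover the worst-case number of pages a single instruction may reference, not just one (or two) latched entries.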
Re: translations changing under us

> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 15 June 2016 16:22
> To: Paul Durrant; Boris Ostrovsky
> Cc: Sander Eikelenboom; xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] Xen-unstable 4.8: HVM domain_crash called from
> emulate.c:144 RIP: c000:[<336a>]
>
> >>> On 15.06.16 at 16:56, <boris.ostrov...@oracle.com> wrote:
> > On 06/15/2016 10:39 AM, Jan Beulich wrote:
[...]
On 06/15/2016 10:39 AM, Jan Beulich wrote:
>>>> On 15.06.16 at 16:32, wrote:
>> So perhaps we shouldn't latch data for anything over page size.
> But why? What we latch is the start of the accessed range, so
> the repeat count shouldn't matter?

Because otherwise we [...]
>>> On 15.06.16 at 16:32, wrote:
> So perhaps we shouldn't latch data for anything over page size.

But why? What we latch is the start of the accessed range, so
the repeat count shouldn't matter?

> Something like this (it seems to work):

I'm rather hesitant to take [...]
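For context on why the repeat count matters here: a repeated string operation that starts near the end of a page spills into further pages, so a translation latched only for the start of the accessed range does not cover the whole access. A small illustrative helper (editor's sketch, not from the Xen source; assumes 4 KiB pages and no arithmetic overflow):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Number of distinct pages covered by an access of `reps` elements of
 * `bytes_per_rep` bytes each, starting at guest-virtual address `gva`.
 * Assumes reps * bytes_per_rep does not overflow. */
static unsigned long pages_spanned(uint64_t gva, unsigned long reps,
                                   unsigned long bytes_per_rep)
{
    unsigned long total  = reps * bytes_per_rep;
    unsigned long offset = gva & (PAGE_SIZE - 1);  /* offset within first page */

    return (offset + total + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

Even a two-element access crosses a page boundary when it starts at offset 0xffc, while a large repeat count starting page-aligned can span many pages; either way a single latched start-of-range translation is not enough.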
>>> On 15.06.16 at 15:58, wrote:
> Wednesday, June 15, 2016, 2:48:55 PM, you wrote:
>> Apart from that, and just to see whether there are other differences
>> between your guest(s) and mine, could you post a guest config from
>> one that's affected?
>
> Hope you are not too [...]
>>> On 15.06.16 at 01:49, wrote:
> Just tested the latest xen-unstable 4.8 (xen_changeset git:d337764),
> but one of the latest commits seems to have broken boot of HVM guests
> (using qemu-xen); the previous build with xen_changeset git:6e908ee worked
> fine.

Primary suspects [...]
Hi,

Just tested the latest xen-unstable 4.8 (xen_changeset git:d337764),
but one of the latest commits seems to have broken boot of HVM guests
(using qemu-xen); the previous build with xen_changeset git:6e908ee worked
fine.

--
Sander

(XEN) [2016-06-14 22:47:36.827] HVM19 save: CPU
(XEN) [2016-06-14 [...]