On 27.05.2011 01:07, Josh Durgin wrote:
This patchset moves the complexity of the rbd format into librbd and
adds truncation support.
Changes since v5:
* compare full string, not prefix, with conf in 2/4
* when truncate fails, just return librbd's error
Changes since v4:
* fixed
On 05/26/2011 10:03 PM, Sasha Levin wrote:
Hi Avi,
I'm working on adding ioeventfd support into tools/kvm/.
Currently the implementation creates ioeventfd entries at the
'VIRTIO_PCI_QUEUE_NOTIFY' of each device and waits on all of them using
epoll().
The basics are working - when IO is
https://bugzilla.kernel.org/show_bug.cgi?id=34282
Joerg Roedel j...@8bytes.org changed:
What | Removed | Added
CC   |         | j...@8bytes.org
---
On Fri, 2011-05-27 at 11:30 +0300, Avi Kivity wrote:
On 05/26/2011 10:03 PM, Sasha Levin wrote:
Hi Avi,
I'm working on adding ioeventfd support into tools/kvm/.
Currently the implementation creates ioeventfd entries at the
'VIRTIO_PCI_QUEUE_NOTIFY' of each device and waits on all of
Hi
I have met an interesting problem between different kvm/linux versions.
Both run on the same platform.
HW Platform:
Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
Speeds | 16x2266.804
Models | 16xIntel(R) Xeon(R) CPU E5520 @ 2.27GHz
Caches | 16x8192 KB
Memory
On 05/26/2011 04:28 PM, Yang, Wei Y wrote:
This patchset enables a new CPU feature, SMEP (Supervisor Mode Execution
Protection), in KVM. SMEP prevents the kernel from executing code in application memory.
Updated Intel SDM describes this CPU feature. The document will be published
soon.
This patchset is
On 05/27/2011 05:56 AM, Tian, Kevin wrote:
From: Yang, Wei Y
Sent: Thursday, May 26, 2011 9:29 PM
This patchset enables a new CPU feature, SMEP (Supervisor Mode Execution
Protection), in KVM. SMEP prevents the kernel from executing code in application memory.
Updated Intel SDM describes this CPU
On 05/26/2011 03:56 PM, Nikola Ciprich wrote:
Should be more like that one with correct image path:
huh, now I got a bit lost :)
I tried running both:
/usr/bin/qemu-kvm -M pc-0.13 -enable-kvm -m 4096 -smp
1,sockets=1,cores=1,threads=1 -name vmtst04 -uuid
1f8328b8-8849-11e0-91e9-00259009d78c
On 05/27/2011 11:44 AM, Peijie Yu wrote:
Hi
I have met an interesting problem between different kvm/linux versions.
Both run on the same platform.
HW Platform:
Processors | physical = 2, cores = 8, virtual = 16, hyperthreading = yes
Speeds | 16x2266.804
Models | 16xIntel(R) Xeon(R) CPU
Hello Avi,
Try appending ,cache=none to the -drive parameter?
nope, unfortunately same result :(
n.
Maybe we have a regression with writethrough block devices (a bad idea
anyway).
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
--
* Paul E. McKenney paul...@linux.vnet.ibm.com wrote:
I'm CC'ing Paul and Mathieu as well for urcu.
I am hoping we can get better convergence between the user-level
and kernel-level URCU implementations once I get SRCU merged into
the TREE_RCU and TINY_RCU implementations. [...]
Yeah.
On Thu, 2011-05-26 at 19:09 -0400, Mathieu Desnoyers wrote:
* Sasha Levin (levinsasha...@gmail.com) wrote:
On Thu, 2011-05-26 at 21:21 +0300, Pekka Enberg wrote:
On Thu, May 26, 2011 at 9:11 PM, Avi Kivity a...@redhat.com wrote:
On 05/26/2011 09:05 PM, Ingo Molnar wrote:
Hi guys,
Sorry for resurrecting this, but I just checked out kernel v2.6.39 and
this fix doesn't seem to be present in this release...
Am I wrong ?
Thanks
On Wed, Mar 9, 2011 at 11:07 AM, Francis Moreau francis.m...@gmail.com wrote:
On Wed, Mar 9, 2011 at 11:03 AM, Avi Kivity a...@redhat.com
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
- Check kernel/tinyrcu.c to see how RCU is implemented in its
simplest form. :)
...so simplistic it only works on UP systems, which are not so common
these days on the systems targeted by kvm.
As i said above, in its
* Sasha Levin levinsasha...@gmail.com wrote:
I see that in liburcu there is an implementation of a rcu linked
list but no implementation of a rb-tree.
Another approach would be, until the RCU interactions are sorted out,
to implement a 'big reader lock' thing that is completely lockless on
On Tue, May 17, 2011 at 12:17:50PM +0200, Alexander Graf wrote:
On 16.05.2011, at 07:58, Paul Mackerras wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would occasionally
not get the HDEC interrupt at all
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if all we need to know is that
a specific area of the memory has been changed, and we don't need
a heavyweight exit to happen.
The
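The design described above — eventfds registered for guest notify writes, with one thread waiting on all of them via epoll() — can be sketched in plain userspace terms. The actual KVM_IOEVENTFD registration needs /dev/kvm, so this hypothetical sketch shows only the eventfd/epoll half: a write to the eventfd stands in for a guest write to VIRTIO_PCI_QUEUE_NOTIFY, and the wait loop picks it up.

```c
#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical sketch of the epoll-based wait loop. In the real tool
 * the eventfd would be registered with KVM via the KVM_IOEVENTFD
 * ioctl; here we write to it ourselves to simulate a guest notify. */
static int ioeventfd_demo(void)
{
	int efd = eventfd(0, 0);
	int epfd = epoll_create1(0);
	if (efd < 0 || epfd < 0)
		return -1;

	struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) < 0)
		return -1;

	/* Simulate the guest notification. */
	uint64_t one = 1;
	if (write(efd, &one, sizeof(one)) != sizeof(one))
		return -1;

	/* The wait loop: block until any registered eventfd fires. */
	struct epoll_event out;
	if (epoll_wait(epfd, &out, 1, 1000) != 1)
		return -1;

	/* Reading the eventfd consumes and returns the counter. */
	uint64_t count;
	if (read(out.data.fd, &count, sizeof(count)) != sizeof(count))
		return -1;

	close(efd);
	close(epfd);
	return (int)count;
}
```

In the real implementation one epoll instance would hold one eventfd per registered notify address, so a single thread can service every device.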
Use ioeventfds to receive notifications of IO events in virtio-blk.
Doing so prevents an exit every time we read/write from/to the
virtio disk.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/blk.c | 26 +-
1 files changed, 25 insertions(+), 1
Use ioeventfds to receive notifications of IO events in virtio-net.
Doing so prevents an exit every time we receive/send a packet.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/net.c | 22 ++
1 files changed, 22 insertions(+), 0 deletions(-)
diff
Use ioeventfds to receive notifications of IO events in virtio-rng.
Doing so prevents an exit every time we need to supply randomness
to the guest.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/rng.c | 26 +-
1 files changed, 25 insertions(+),
On 27.05.2011, at 12:33, Paul Mackerras wrote:
On Tue, May 17, 2011 at 12:17:50PM +0200, Alexander Graf wrote:
On 16.05.2011, at 07:58, Paul Mackerras wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would
On Fri, May 27, 2011 at 11:36 AM, Sasha Levin levinsasha...@gmail.com wrote:
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if all we need to know is that
a specific area of the memory has
On Fri, May 27, 2011 at 1:36 PM, Sasha Levin levinsasha...@gmail.com wrote:
+void ioeventfd__start(void)
+{
+ pthread_t thread;
+
+ pthread_create(thread, NULL, ioeventfd__thread, NULL);
Please be more careful with error handling. If an API call can fail,
there's almost never any
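A more defensive variant of the quoted snippet, along the lines the review asks for, might look like the following (the thread body is a placeholder, not the real polling loop):

```c
#include <pthread.h>
#include <stdio.h>

/* Placeholder for the real ioeventfd polling loop. */
static void *ioeventfd__thread(void *arg)
{
	(void)arg;
	return NULL;
}

/* pthread_create() takes a pthread_t *, and its return value should
 * be checked: it returns 0 on success or an errno value on failure
 * (it does not set errno itself). */
static int ioeventfd__start(void)
{
	pthread_t thread;
	int ret;

	ret = pthread_create(&thread, NULL, ioeventfd__thread, NULL);
	if (ret != 0) {
		fprintf(stderr, "pthread_create failed: %d\n", ret);
		return -ret;
	}
	return pthread_join(thread, NULL);
}
```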
* Sasha Levin levinsasha...@gmail.com wrote:
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if all we need to know is that
a specific area of the memory has been changed, and we don't
On Fri, 2011-05-27 at 11:47 +0100, Stefan Hajnoczi wrote:
On Fri, May 27, 2011 at 11:36 AM, Sasha Levin levinsasha...@gmail.com wrote:
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if
On Fri, 2011-05-27 at 12:54 +0200, Ingo Molnar wrote:
* Sasha Levin levinsasha...@gmail.com wrote:
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if all we need to know is that
a
* Ingo Molnar mi...@elte.hu wrote:
This code is very much tied with the kernel scheduler. [...]
It would not be particularly complex to enable user-space to
request a callback on context switch events.
I was thinking on and off about allowing perf events to generate a
per sampling
On 2011-05-27 07:32, André Weidemann wrote:
Hi Gerd,
I managed to pass through a graphics card to a Windows7 VM using your
kraxel.q35 seabios branch
(http://www.kraxel.org/cgit/seabios/log/?h=kraxel.q35).
Here is my setup:
Intel DX58SO
Core i7 920
Radeon HD 6950
Kernel 2.6.35.7
* Ingo Molnar mi...@elte.hu wrote:
I was thinking about that on and off so loudly that Peter
implemented it long ago via fasync support on the perf event fd!
:-)
So if you set a notification signal via fcntl(F_SETOWN) on the
scheduler context switch event fd, the user-space RCU code
* Ingo Molnar mi...@elte.hu wrote:
Note that you do not want the context switch event, but the CPU
migration event: that will notify user-space when it gets migrated
to another CPU. This is the case that RCU really needs.
Also note that the main current use-case of perf events is
Hi Ingo,
On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar mi...@elte.hu wrote:
A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way
too aspecific, it tells us nothing. What is a 'kvm'?
Why, an instance of a kernel virtual machine, of course! It was the
very first thing I wrote for
* Pekka Enberg penb...@kernel.org wrote:
Hi Ingo,
On Fri, May 27, 2011 at 1:54 PM, Ingo Molnar mi...@elte.hu wrote:
A sidenote: i think 'struct kvm *kvm' was a naming mistake - it's way
too aspecific, it tells us nothing. What is a 'kvm'?
Why, an instance of a kernel virtual machine,
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |7 +--
1 files changed, 1 insertions(+), 6 deletions(-)
diff --git a/cpus.c b/cpus.c
index 4b5d187..c7a5dec 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1212,11 +1212,6 @@ static void sig_ipi_handler(int n)
{
}
-static int
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |5 +
1 files changed, 1 insertions(+), 4 deletions(-)
diff --git a/cpus.c b/cpus.c
index c7a5dec..9b3f218 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1199,8 +1199,6 @@ void list_cpus(FILE *f, fprintf_function cpu_fprintf,
const char
Just like in upstream, assume that cpu_single_env is only non-NULL while
in kvm_cpu_exec. Use qemu_cpu_is_self in pause_all_threads instead of
cpu_single_env and additionally avoid duplicate kicks. Then drop all
related cpu_single_env initializations and assertions.
Signed-off-by: Jan Kiszka
This converts everything except for kvm_main_loop_cpu to the upstream
version, i.e. thread creation, signal setup, some further field
initializations, and completion signaling.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 89
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |9 +
qemu-kvm.h |2 --
2 files changed, 1 insertions(+), 10 deletions(-)
diff --git a/cpus.c b/cpus.c
index 7bd888a..2cfaa0d 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1187,13 +1187,6 @@ void list_cpus(FILE *f,
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 84 -
sysemu.h |1 +
vl.c |6 +---
3 files changed, 13 insertions(+), 78 deletions(-)
diff --git a/cpus.c b/cpus.c
index 9b3f218..470ab00 100644
--- a/cpus.c
To prepare using the upstream iothread main loop, push some
initialization from kvm_main_loop to qemu_kvm_init_main_loop.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 41 +
1 files changed, 21 insertions(+), 20 deletions(-)
diff --git
No caller depends on cpu_single_env saving/restoring anymore, so we can
call qemu_cond_wait directly.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 14 +++---
1 files changed, 3 insertions(+), 11 deletions(-)
diff --git a/cpus.c b/cpus.c
index fc5605d..4b5d187 100644
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |9 ++---
sysemu.h |1 +
vl.c |2 +-
3 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/cpus.c b/cpus.c
index 0455481..23c6ccd 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1193,8 +1193,6 @@ void list_cpus(FILE
This temporarily requires our own initialization service as we are still
using the !IOTHREAD version of qemu_init_main_loop.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 57 +++--
1 files changed, 31 insertions(+), 26
As a first step towards unified threading support, move all the related
qemu-kvm-specific bits from qemu-kvm.c to cpus.c. This already allows us to
drop three identical functions (sigbus_reraise, sigbus_handler,
sigfd_handler) and should provide the environment for consolidating the
rest.
Signed-off-by:
So far, qemu-kvm's build was incompatible with --enable-io-thread. But
to consolidate both iothread versions, we need to start enabling
upstream code under this config option.
This patch force-enables CONFIG_IOTHREAD but still picks the !IOTHREAD
variant of those functions that are used by the
This also means switching to the CONFIG_IOTHREAD version of
qemu_main_loop_start and clean up some related patches of upstream code.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 50 +++---
qemu-kvm.h |1 -
sysemu.h |2
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 26 +-
1 files changed, 1 insertions(+), 25 deletions(-)
diff --git a/cpus.c b/cpus.c
index 383d359..bf666b0 100644
--- a/cpus.c
+++ b/cpus.c
@@ -913,7 +913,6 @@ int qemu_cpu_is_self(void *_env)
return
Switch to CONFIG_IOTHREAD version of qemu_kvm_init_main_loop and drop
qemu_kvm_init_main_loop as well as kvm_init_ap.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 62 +---
kvm-all.c |2 -
qemu-kvm.h |1 -
3
Activate the iothread version of qemu_cpu_kick. We just need to
initialize the yet unused CPUState::halt_cond for it.
This finally obsoletes kvm_update_interrupt_request, so drop it.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c | 15 ---
kvm-all.c |1 -
With this series applied, we are finally at a level of almost zero
redundancy between QEMU upstream and the qemu-kvm tree. The last major
duplication to be removed is the original io-thread implementation and
everything related to it:
- locking
- vcpu wakeup/kicking as well as suspend/resume
-
Most tests in kvm_update_interrupt_request are unneeded today:
- env argument is always non-NULL (caller references it as well)
- current_env must have been created when we get here
- env->thread can't be zero (initialized early during cpu creation)
So simply avoid self signaling and multiple
The differences do not matter for qemu-kvm's iothread code.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpus.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/cpus.c b/cpus.c
index 48bac70..8b9b1f6 100644
--- a/cpus.c
+++ b/cpus.c
@@ -604,7 +604,6 @@ void
With switching to upstream code, TCG mode becomes usable again.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
arch_init.c |3 +-
cpus.c | 89 ---
vl.c|5 +--
3 files changed, 2 insertions(+), 95 deletions(-)
Upstream is now identical to qemu-kvm's versions, and we are using the
same signaling mechanisms now.
Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---
cpu-defs.h | 11 +--
cpus.c | 45 +
kvm-all.c|
On Fri, May 27, 2011 at 12:22:52PM +0200, Francis Moreau wrote:
Hi guys,
Sorry for resurrecting this, but I just checked out kernel v2.6.39 and
this fix doesn't seem to be present in this release...
Am I wrong ?
Hmm. Should be fixed by commit: 5601d05b8c340ee2643febc146099325eff187eb
* Ingo Molnar (mi...@elte.hu) wrote:
[ ... ]
All that aside, one advantage of http://lttng.org/urcu is that it
already exists, which allows prototyping to proceed immediately.
it's offline right now:
$ git clone git://git.lttng.org/urcu
Cloning into urcu...
fatal: The remote
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
instead of bringing in yet another library which is, IIRC, a
distant copy of the kernel code to begin with.
This is either a lie, or immensely misinformed. You should go and
look at the source before doing nonsensical
* Sasha Levin (levinsasha...@gmail.com) wrote:
On Thu, 2011-05-26 at 19:09 -0400, Mathieu Desnoyers wrote:
* Sasha Levin (levinsasha...@gmail.com) wrote:
On Thu, 2011-05-26 at 21:21 +0300, Pekka Enberg wrote:
On Thu, May 26, 2011 at 9:11 PM, Avi Kivity a...@redhat.com wrote:
On
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
it's offline right now:
$ git clone git://git.lttng.org/urcu
Cloning into urcu...
fatal: The remote end hung up unexpectedly
This would be:
git clone git://git.lttng.org/userspace-rcu.git
Hey, my impression wasn't
https://bugzilla.kernel.org/show_bug.cgi?id=34282
--- Comment #2 from Ricardo Wurmus ricardo.wur...@gmail.com 2011-05-27
13:19:57 ---
With 2.6.39 (from the ArchLinux testing repository) this doesn't happen
anymore. As far as I can tell[1] that kernel is unpatched downstream.
[1]
* Ingo Molnar (mi...@elte.hu) wrote:
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
- Check kernel/tinyrcu.c to see how RCU is implemented in its
simplest form. :)
...so simplistic it only works on UP systems, which are not so common
these days on the systems
* Ingo Molnar (mi...@elte.hu) wrote:
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
it's offline right now:
$ git clone git://git.lttng.org/urcu
Cloning into urcu...
fatal: The remote end hung up unexpectedly
This would be:
git clone
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
I'm worried about self-recursion behaviors that could be
triggered though: if the userland callback code called from a page
fault triggers a page fault all by itself, then it looks like a
good way to bring the system to its knees.
* Mathieu Desnoyers mathieu.desnoy...@efficios.com wrote:
So yes, kernel code was obviously used in the making of urcu -
just not the RCU kernel code it appears.
Which is a pity i think! :-)
Heh :) You know, I really like the Linux kernel coding style, which
is what I tried to
* Ingo Molnar (mi...@elte.hu) wrote:
* Ingo Molnar mi...@elte.hu wrote:
This code is very much tied with the kernel scheduler. [...]
It would not be particularly complex to enable user-space to
request a callback on context switch events.
I was thinking on and off about
* Ingo Molnar (mi...@elte.hu) wrote:
* Ingo Molnar mi...@elte.hu wrote:
Note that you do not want the context switch event, but the CPU
migration event: that will notify user-space when it gets migrated
to another CPU. This is the case that RCU really needs.
Also note that the main
Hello
2011/5/27 Gleb Natapov g...@redhat.com:
On Fri, May 27, 2011 at 12:22:52PM +0200, Francis Moreau wrote:
Hi guys,
Sorry for resurrecting this, but I just checked out kernel v2.6.39 and
this fix doesn't seem to be present in this release...
Am I wrong ?
Hmm. Should be fixed by commit:
On Fri, 2011-05-27 at 12:36 +0200, Ingo Molnar wrote:
* Sasha Levin levinsasha...@gmail.com wrote:
I see that in liburcu there is an implementation of a rcu linked
list but no implementation of a rb-tree.
Another approach would be, until the RCU interactions are sorted out,
to
ioeventfd is a mechanism provided by KVM to receive notifications about
reads and writes to PIO and MMIO areas within the guest.
Such notifications are useful if all we need to know is that
a specific area of the memory has been changed, and we don't need
a heavyweight exit to happen.
The
Use ioeventfds to receive notifications of IO events in virtio-blk.
Doing so prevents an exit every time we read/write from/to the
virtio disk.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/blk.c | 26 +-
1 files changed, 25 insertions(+), 1
Use ioeventfds to receive notifications of IO events in virtio-net.
Doing so prevents an exit every time we receive/send a packet.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/net.c | 22 ++
1 files changed, 22 insertions(+), 0 deletions(-)
diff
Use ioeventfds to receive notifications of IO events in virtio-rng.
Doing so prevents an exit every time we need to supply randomness
to the guest.
Signed-off-by: Sasha Levin levinsasha...@gmail.com
---
tools/kvm/virtio/rng.c | 27 ++-
1 files changed, 26 insertions(+),
* Sasha Levin levinsasha...@gmail.com wrote:
Benchmarks were run on a separate (non-boot) 1GB virtio-blk device,
formatted as ext4, using bonnie++.
cmd line:
# bonnie++ -d temp/ -c 2 -s 768 -u 0
Before:
Version 1.96  --Sequential Output--  --Sequential Input-  --Random-
* Sasha Levin levinsasha...@gmail.com wrote:
On Fri, 2011-05-27 at 12:36 +0200, Ingo Molnar wrote:
* Sasha Levin levinsasha...@gmail.com wrote:
I see that in liburcu there is an implementation of a rcu linked
list but no implementation of a rb-tree.
Another approach would be,
On Fri, May 27, 2011 at 11:12:20AM +0200, Ingo Molnar wrote:
* Paul E. McKenney paul...@linux.vnet.ibm.com wrote:
I'm CC'ing Paul and Mathieu as well for urcu.
I am hoping we can get better convergence between the user-level
and kernel-level URCU implementations once I get SRCU
On Fri, 2011-05-27 at 10:53 -0400, vyanktesh yadav wrote:
Hi,
[Please see the screen shot too.]
I am facing a template syntax error while trying to add hosts in the admin
interface.
Yeah, some of the custom template logic present in the application is
being slashed by the internal changes
Hi,
On 27.05.2011 13:09, Jan Kiszka wrote:
On 2011-05-27 07:32, André Weidemann wrote:
Here is my setup:
Intel DX58SO
Core i7 920
Radeon HD 6950
Kernel 2.6.35.7
qemu-kvm git pull from May 26th
One thing that is not working is the pass-through of a second device, a
sound card in my case.
On 27.05.2011 21:40, André Weidemann wrote:
If I am not mistaken, the graphics card needs 2 BARs, one with 256MB
and one with 128K. The sound card then needs 1 BAR with 16K of PCI memory.
How big is the PCI memory with seabios?
Is there really not enough space to squeeze in those extra 16K?
On Fri, 2011-05-27 at 19:10 +0200, Ingo Molnar wrote:
* Sasha Levin levinsasha...@gmail.com wrote:
On Fri, 2011-05-27 at 12:36 +0200, Ingo Molnar wrote:
* Sasha Levin levinsasha...@gmail.com wrote:
I see that in liburcu there is an implementation of a rcu linked
list but no
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would
occasionally
not get the HDEC interrupt at all until the next time HDEC went
negative, ~ 8.4 seconds later.
Yikes - so HDEC is edge and doesn't even keep the
On 27.05.2011 21:50, André Weidemann wrote:
On 27.05.2011 21:40, André Weidemann wrote:
If I am not mistaken, the graphics card needs 2 BARs, one with 256MB
and one with 128K. The sound card then needs 1 BAR with 16K of PCI
memory.
How big is the PCI memory with seabios?
Is there really
On Wed, 25 May 2011 09:07:59 +0300, Michael S. Tsirkin m...@redhat.com
wrote:
On Wed, May 25, 2011 at 11:05:04AM +0930, Rusty Russell wrote:
Hmm I'm not sure I got it, need to think about this.
I'd like to go back and document how my design was supposed to work.
This really should have been
On 27.05.2011, at 22:59, Segher Boessenkool wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would occasionally
not get the HDEC interrupt at all until the next time HDEC went
negative, ~ 8.4 seconds later.
If HDEC expires when interrupts are off, the HDEC interrupt stays
pending until interrupts get re-enabled. I'm not sure exactly what
the conditions are that cause an HDEC interrupt to get lost, but they
seem to involve at least a partition switch.
On some CPUs, if the top bit of the
On Tue, May 17, 2011 at 12:17:50PM +0200, Alexander Graf wrote:
On 16.05.2011, at 07:58, Paul Mackerras wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would occasionally
not get the HDEC interrupt at all
On 27.05.2011, at 12:33, Paul Mackerras wrote:
On Tue, May 17, 2011 at 12:17:50PM +0200, Alexander Graf wrote:
On 16.05.2011, at 07:58, Paul Mackerras wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would
occasionally
not get the HDEC interrupt at all until the next time HDEC went
negative, ~ 8.4 seconds later.
Yikes - so HDEC is edge and doesn't even keep the
On 27.05.2011, at 22:59, Segher Boessenkool wrote:
I do the check there because I was having problems where, if the HDEC
goes negative before we do the partition switch, we would occasionally
not get the HDEC interrupt at all until the next time HDEC went
negative, ~ 8.4 seconds later.
If HDEC expires when interrupts are off, the HDEC interrupt stays
pending until interrupts get re-enabled. I'm not sure exactly what
the conditions are that cause an HDEC interrupt to get lost, but they
seem to involve at least a partition switch.
On some CPUs, if the top bit of the