[no subject]

2022-03-25 Thread Michael S. Tsirkin
Bcc: 
Subject: Re: [PATCH 3/3] virtio: harden vring IRQ
Message-ID: <20220325021422-mutt-send-email-...@kernel.org>
Reply-To: 
In-Reply-To: 

On Fri, Mar 25, 2022 at 11:04:08AM +0800, Jason Wang wrote:
> 
> On 2022/3/24 7:03 PM, Michael S. Tsirkin wrote:
> > On Thu, Mar 24, 2022 at 04:40:04PM +0800, Jason Wang wrote:
> > > This is a rework on the previous IRQ hardening that is done for
> > > virtio-pci where several drawbacks were found and were reverted:
> > > 
> > > 1) try to use IRQF_NO_AUTOEN which is not friendly to affinity managed IRQ
> > > that is used by some device such as virtio-blk
> > > 2) done only for PCI transport
> > > 
> > > In this patch, we try to borrow the idea from the INTX IRQ hardening
> > > in the reverted commit 080cd7c3ac87 ("virtio-pci: harden INTX interrupts")
> > > by introducing a per-virtio_device irq_soft_enabled variable. We can
> > > then toggle it during virtio_reset_device()/virtio_device_ready(). A
> > > synchronize_rcu() is used in virtio_reset_device() to synchronize with
> > > the IRQ handlers. In the future, we may provide config_ops for
> > > transports that don't use IRQs. With this, vring_interrupt() can check
> > > the flag and return early if irq_soft_enabled is false. This requires
> > > smp_load_acquire() on the hot path, but the cost should be acceptable.
> > Maybe it should be but is it? Can't we use synchronize_irq instead?
> 
> 
> Even if we allow the transport driver to synchronize through
> synchronize_irq(), we still need a check in vring_interrupt().
> 
> We do something like the following previously:
> 
>     if (!READ_ONCE(vp_dev->intx_soft_enabled))
>     return IRQ_NONE;
> 
> But that looks like a bug: a speculative read can be done before the
> check, in which case the interrupt handler may not see the setup that
> the driver has just done.

I don't think so - if you sync after setting the value then
you are guaranteed that any handler running afterwards
will see the new value.

Although I couldn't find anything about this in memory-barriers.txt
which surprises me.

CC Paul to help make sure I'm right.
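
To make the ordering above concrete, here is a minimal sketch of the
pattern under discussion (illustrative only; the example_dev structure
and the helper names below are placeholders, not the actual patch):

#include <linux/interrupt.h>
#include <linux/atomic.h>

struct example_dev {
	bool irq_soft_enabled;
	int irq;
};

/* Setup side: publish driver state, then let handlers do real work. */
static void example_device_ready(struct example_dev *d)
{
	/* pairs with the load-acquire in the handler below */
	smp_store_release(&d->irq_soft_enabled, true);
}

/* Reset side: stop handlers, then wait out any that are still running. */
static void example_device_reset(struct example_dev *d)
{
	WRITE_ONCE(d->irq_soft_enabled, false);
	synchronize_irq(d->irq);	/* or synchronize_rcu(), as in the patch */
}

/* Handler side: bail out early while the device is not ready. */
static irqreturn_t example_interrupt(int irq, void *opaque)
{
	struct example_dev *d = opaque;

	if (!smp_load_acquire(&d->irq_soft_enabled))
		return IRQ_NONE;

	/* ... normal vring processing ... */
	return IRQ_HANDLED;
}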


> 
> > 
> > > To avoid breaking legacy devices, which can send an IRQ before DRIVER_OK,
> > > a module parameter is introduced to enable the hardening, so the
> > > hardening is disabled by default.
> > Which devices are these? How come they send an interrupt before there
> > are any buffers in any queues?
> 
> 
> I copied this from the commit log for 22b7050a024d7
> 
> "
> 
>     This change will also benefit old hypervisors (before 2009)
>     that send interrupts without checking DRIVER_OK: previously,
>     the callback could race with driver-specific initialization.
> "
> 
> If this is only for config interrupt, I can remove the above log.


This is only for config interrupt.

> 
> > 
> > > Note that the hardening is only done for vring interrupt since the
> > > config interrupt hardening is already done in commit 22b7050a024d7
> > > ("virtio: defer config changed notifications"). But the method that is
> > > used by config interrupt can't be reused by the vring interrupt
> > > handler because it uses spinlock to do the synchronization which is
> > > expensive.
> > > 
> > > Signed-off-by: Jason Wang 
> > 
> > > ---
> > >   drivers/virtio/virtio.c       | 19 +++++++++++++++++++
> > >   drivers/virtio/virtio_ring.c  |  9 ++++++++-
> > >   include/linux/virtio.h        |  4 ++++
> > >   include/linux/virtio_config.h | 25 +++++++++++++++++++++++++
> > >   4 files changed, 56 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
> > > index 8dde44ea044a..85e331efa9cc 100644
> > > --- a/drivers/virtio/virtio.c
> > > +++ b/drivers/virtio/virtio.c
> > > @@ -7,6 +7,12 @@
> > >   #include 
> > >   #include 
> > > +static bool irq_hardening = false;
> > > +
> > > +module_param(irq_hardening, bool, 0444);
> > > +MODULE_PARM_DESC(irq_hardening,
> > > +  "Disable IRQ software processing when it is not expected");
> > > +
> > >   /* Unique numbering for virtio devices. */
> > >   static DEFINE_IDA(virtio_index_ida);
> > > @@ -220,6 +226,15 @@ static int virtio_features_ok(struct virtio_device 
> > > *dev)
> > >* */
> > >   void virtio_reset_device(struct virtio_device *dev)
> > >   {
> > > +


[no subject]

2014-09-02 Thread Andy King
This version addresses Sergei's comments.

o Fixed description and added Reported-by
o Removed NULL check for kfree()



[no subject]

2013-01-07 Thread Michael S. Tsirkin
On Sun, Jan 06, 2013 at 02:36:13PM +0800, Asias He wrote:
 This drops the cmd completion list spin lock and makes the cmd
 completion queue lock-less.
 
 Signed-off-by: Asias He as...@redhat.com


Nicholas, any feedback?

 ---
  drivers/vhost/tcm_vhost.c | 46 +++++++++++++---------------------------------
  drivers/vhost/tcm_vhost.h |  2 +-
  2 files changed, 14 insertions(+), 34 deletions(-)
 
 diff --git a/drivers/vhost/tcm_vhost.c b/drivers/vhost/tcm_vhost.c
 index b20df5c..3720604 100644
 --- a/drivers/vhost/tcm_vhost.c
 +++ b/drivers/vhost/tcm_vhost.c
 @@ -47,6 +47,7 @@
  #include <linux/vhost.h>
  #include <linux/virtio_net.h> /* TODO vhost.h currently depends on this */
  #include <linux/virtio_scsi.h>
 +#include <linux/llist.h>
  
  #include "vhost.c"
  #include "vhost.h"
 @@ -64,8 +65,7 @@ struct vhost_scsi {
   struct vhost_virtqueue vqs[3];
  
   struct vhost_work vs_completion_work; /* cmd completion work item */
 -	struct list_head vs_completion_list;  /* cmd completion queue */
 -	spinlock_t vs_completion_lock;        /* protects vs_completion_list */
 +	struct llist_head vs_completion_list; /* cmd completion queue */
  };
  
  /* Local pointer to allocated TCM configfs fabric module */
 @@ -301,9 +301,7 @@ static void vhost_scsi_complete_cmd(struct tcm_vhost_cmd 
 *tv_cmd)
  {
 	struct vhost_scsi *vs = tv_cmd->tvc_vhost;
 
 -	spin_lock_bh(&vs->vs_completion_lock);
 -	list_add_tail(&tv_cmd->tvc_completion_list, &vs->vs_completion_list);
 -	spin_unlock_bh(&vs->vs_completion_lock);
 +	llist_add(&tv_cmd->tvc_completion_list, &vs->vs_completion_list);
 
 	vhost_work_queue(&vs->dev, &vs->vs_completion_work);
  }
 @@ -347,27 +345,6 @@ static void vhost_scsi_free_cmd(struct tcm_vhost_cmd 
 *tv_cmd)
   kfree(tv_cmd);
  }
  
 -/* Dequeue a command from the completion list */
 -static struct tcm_vhost_cmd *vhost_scsi_get_cmd_from_completion(
 -	struct vhost_scsi *vs)
 -{
 -	struct tcm_vhost_cmd *tv_cmd = NULL;
 -
 -	spin_lock_bh(&vs->vs_completion_lock);
 -	if (list_empty(&vs->vs_completion_list)) {
 -		spin_unlock_bh(&vs->vs_completion_lock);
 -		return NULL;
 -	}
 -
 -	list_for_each_entry(tv_cmd, &vs->vs_completion_list,
 -			tvc_completion_list) {
 -		list_del(&tv_cmd->tvc_completion_list);
 -		break;
 -	}
 -	spin_unlock_bh(&vs->vs_completion_lock);
 -	return tv_cmd;
 -}
 -
  /* Fill in status and signal that we are done processing this command
   *
   * This is scheduled in the vhost work queue so we are called with the owner
 @@ -377,12 +354,18 @@ static void vhost_scsi_complete_cmd_work(struct 
 vhost_work *work)
  {
   struct vhost_scsi *vs = container_of(work, struct vhost_scsi,
   vs_completion_work);
 + struct virtio_scsi_cmd_resp v_rsp;
   struct tcm_vhost_cmd *tv_cmd;
 + struct llist_node *llnode;
 + struct se_cmd *se_cmd;
 + int ret;
  
 -	while ((tv_cmd = vhost_scsi_get_cmd_from_completion(vs))) {
 -		struct virtio_scsi_cmd_resp v_rsp;
 -		struct se_cmd *se_cmd = &tv_cmd->tvc_se_cmd;
 -		int ret;
 +	llnode = llist_del_all(&vs->vs_completion_list);
 +	while (llnode) {
 +		tv_cmd = llist_entry(llnode, struct tcm_vhost_cmd,
 +				     tvc_completion_list);
 +		llnode = llist_next(llnode);
 +		se_cmd = &tv_cmd->tvc_se_cmd;
  
  		pr_debug("%s tv_cmd %p resid %u status %#02x\n", __func__,
  			tv_cmd, se_cmd->residual_count, se_cmd->scsi_status);
 @@ -426,7 +409,6 @@ static struct tcm_vhost_cmd *vhost_scsi_allocate_cmd(
  		pr_err("Unable to allocate struct tcm_vhost_cmd\n");
  		return ERR_PTR(-ENOMEM);
  	}
 -	INIT_LIST_HEAD(&tv_cmd->tvc_completion_list);
  	tv_cmd->tvc_tag = v_req->tag;
  	tv_cmd->tvc_task_attr = v_req->task_attr;
  	tv_cmd->tvc_exp_data_len = exp_data_len;
 @@ -859,8 +841,6 @@ static int vhost_scsi_open(struct inode *inode, struct 
 file *f)
   return -ENOMEM;
  
 	vhost_work_init(&s->vs_completion_work, vhost_scsi_complete_cmd_work);
 -	INIT_LIST_HEAD(&s->vs_completion_list);
 -	spin_lock_init(&s->vs_completion_lock);
 
 	s->vqs[VHOST_SCSI_VQ_CTL].handle_kick = vhost_scsi_ctl_handle_kick;
 	s->vqs[VHOST_SCSI_VQ_EVT].handle_kick = vhost_scsi_evt_handle_kick;
 diff --git a/drivers/vhost/tcm_vhost.h b/drivers/vhost/tcm_vhost.h
 index 7e87c63..47ee80b 100644
 --- a/drivers/vhost/tcm_vhost.h
 +++ b/drivers/vhost/tcm_vhost.h
 @@ -34,7 +34,7 @@ struct tcm_vhost_cmd {
   /* Sense buffer that will be mapped into outgoing status */
   unsigned char tvc_sense_buf[TRANSPORT_SENSE_BUFFER];
   /* Completed commands list, serviced from vhost worker thread */
 - struct list_head tvc_completion_list;
 + struct llist_node tvc_completion_list;
  };
  
  struct tcm_vhost_nexus {
 -- 
 1.7.11.7
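
For readers not familiar with the llist API this patch switches to, here
is a minimal self-contained sketch of the lock-less producer/consumer
pattern it relies on (the names are illustrative, not tcm_vhost code):

#include <linux/llist.h>
#include <linux/slab.h>

struct example_cmd {
	struct llist_node node;
	int status;
};

static LLIST_HEAD(example_completion_list);

/* Producer side (e.g. a completion path): O(1), no lock taken. */
static void example_complete(struct example_cmd *cmd)
{
	llist_add(&cmd->node, &example_completion_list);
}

/* Consumer side (e.g. a work item): detach the whole list atomically,
 * then walk it without any locking.  Note that llist_del_all() hands
 * the entries back in reverse (LIFO) order. */
static void example_drain(void)
{
	struct llist_node *llnode = llist_del_all(&example_completion_list);

	while (llnode) {
		struct example_cmd *cmd =
			llist_entry(llnode, struct example_cmd, node);

		llnode = llist_next(llnode);
		/* ... report cmd->status, then release the command ... */
		kfree(cmd);
	}
}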

[no subject]

2012-11-07 Thread sjur . brandeland
From 0ce16d6a0270daebd9972e94a834034a093228b0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sjur=20Br=C3=A6ndeland?= sjur.brandel...@stericsson.com
Date: Wed, 7 Nov 2012 12:20:07 +0100
Subject: [PATCH] virtio_console:Free buffers from out-queue upon close
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Free pending output buffers from the virtio out-queue when the
host has acknowledged port_close. Also remove the WARN_ON()
in remove_port_data().

Signed-off-by: Sjur Brændeland sjur.brandel...@stericsson.com
---
Hi Amit,

Note: This patch is compile-tested only. I have done the removal
of buffers from the out-queue in handle_control_message(),
when the host has acked the close request. This seems less
racy than doing it in the release function.

If you want to change this further, feel free to take over from
here and refine it.

Thanks,
Sjur

 drivers/char/virtio_console.c |   14 ++++++--------
 1 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
index 3fa036a..3a5831d 100644
--- a/drivers/char/virtio_console.c
+++ b/drivers/char/virtio_console.c
@@ -1522,15 +1522,9 @@ static void remove_port_data(struct port *port)
	while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
		free_buf(buf, true);
 
-	/*
-	 * Check the out-queue for buffers. For VIRTIO_CONSOLE it is a
-	 * bug if this happens. But for RPROC_SERIAL the remote processor
-	 * may have crashed, leaving buffers hanging in the out-queue.
-	 */
-	while ((buf = virtqueue_detach_unused_buf(port->out_vq))) {
-		WARN_ON_ONCE(!is_rproc_serial(port->portdev->vdev));
+	/* Free pending buffers from the out-queue. */
+	while ((buf = virtqueue_detach_unused_buf(port->out_vq)))
		free_buf(buf, true);
-	}
 }
 
 /*
@@ -1655,6 +1649,10 @@ static void handle_control_message(struct ports_device 
*portdev,
 */
	spin_lock_irq(&port->outvq_lock);
	reclaim_consumed_buffers(port);
+
+	/* Free pending buffers from the out-queue. */
+	while ((buf = virtqueue_detach_unused_buf(port->out_vq)))
+		free_buf(buf, true);
	spin_unlock_irq(&port->outvq_lock);
 
/*
-- 
1.7.5.4


[no subject]

2011-12-07 Thread Michael S. Tsirkin
Pavel Machek pa...@ucw.cz,
Rafael J. Wysocki r...@sisk.pl,
Len Brown len.br...@intel.com,
linux...@vger.kernel.org
Bcc: 
Subject: Re: [PATCH v4 12/12] virtio: balloon: Add freeze, restore handlers
 to support S4
Reply-To: 
In-Reply-To: 
5deccc36afa59032f0e3b10a653773bad511f303.1323199985.git.amit.s...@redhat.com

On Wed, Dec 07, 2011 at 01:18:50AM +0530, Amit Shah wrote:
 To avoid racing with a host issuing ballooning requests while we are in
 the process of freezing, we just exit from the vballoon kthread when
 processes are asked to freeze.  Upon thaw and restore, we re-start the
 thread.

...

 ---
 drivers/virtio/virtio_balloon.c |   79 +++++++++++++++++++++++++++++++++++++-
 1 files changed, 78 insertions(+), 1 deletions(-)
 
 diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
 index 8bf99be..10ec638 100644
 --- a/drivers/virtio/virtio_balloon.c
 +++ b/drivers/virtio/virtio_balloon.c
 @@ -258,7 +258,13 @@ static int balloon(void *_vballoon)
   while (!kthread_should_stop()) {
   s64 diff;
  
 - try_to_freeze();
 + /*
 +  * On suspend, we want to exit this thread.  We will
 +  * start a new thread on resume.
 +  */
 + if (freezing(current))
 + break;
 +
 	wait_event_interruptible(vb->config_change,
 				 (diff = towards_target(vb)) != 0
 				 || vb->need_stats_update

...

Note: this relies on kthreads being frozen before devices.
Looking at kernel/power/hibernate.c this is the case,
but I think we should add a comment to note this.

Also Cc linux-pm crowd in case I got it wrong.
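
FWIW, a rough sketch of the restore side described in the changelog
(assuming the driver's existing "balloon" thread function and a "thread"
pointer in struct virtio_balloon, as in the in-tree driver; this is not
the text of the v4 patch):

#include <linux/kthread.h>
#include <linux/err.h>

static int virtballoon_restore(struct virtio_device *vdev)
{
	struct virtio_balloon *vb = vdev->priv;

	/* The old kthread exited when the freezer kicked in (see the
	 * freezing(current) check quoted above); start a fresh one. */
	vb->thread = kthread_run(balloon, vb, "vballoon");
	if (IS_ERR(vb->thread))
		return PTR_ERR(vb->thread);

	return 0;
}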

-- 
MST




Subject: [PATCH 0/2] dm-ioband: I/O bandwidth controller v1.3.0: Introduction

2008-07-11 Thread Ryo Tsuruta
Hi everyone,

This is the dm-ioband version 1.3.0 release.

Dm-ioband is an I/O bandwidth controller implemented as a device-mapper
driver, which gives specified bandwidth to each job running on the same
physical device.

- Can be applied to the kernel 2.6.26-rc5-mm3.
- Changes from 1.2.0 (posted on Jul 4, 2008):
  - I/O smoothing take #2
This feature makes the I/O requests of each group be issued smoothly.
Once a certain group has used up its tokens, all I/O requests to
that group are blocked until all the other groups have used up
theirs. This feature minimizes that blocking time and issues
I/O requests at a constant rate according to the weight, without
decreasing throughput. (A rough sketch of the token idea follows
this list.)
We have tested various ideas to achieve this feature and have
chosen the most effective ones, as follows:
  - Shorten the epoch period of dm-ioband, at the start of which every
ioband group gets new tokens. The leftover tokens from the past
few epochs are carried over to the next epoch, so fairness between
the groups is kept even when the I/O loads of some groups are
changing.
  - Make a new epoch immediately when a group with a large weight
has used up its tokens, even though a lot of I/Os are still
in flight.
To gain throughput, dm-ioband will recharge tokens to all
the groups without waiting for their I/O completion if possible.
  - Handle the I/O requests which a user process has just made
ahead of the blocked I/O requests, on the assumption that the
groups which issued those blocked I/O requests have small
weights.
  - Make the number of I/O requests which can be queued in dm-ioband
smaller, so that the I/O requests of all groups are not issued
at the same time when a new epoch is made.
- TODO
  - Implementing cgroup support for dm-ioband is in progress. This
feature will make it possible to handle asynchronous I/O requests properly.
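
As mentioned above, here is a rough, self-contained sketch of the
token/epoch idea (illustrative only; this is not dm-ioband code, and
the names and numbers are made up):

#include <stdio.h>

#define NR_GROUPS    2
#define EPOCH_TOKENS 100		/* tokens handed out per epoch */

struct ioband_group {
	int weight;
	int tokens;
};

/* Start a new epoch: each group gets tokens proportional to its weight;
 * leftover tokens carry over. */
static void new_epoch(struct ioband_group *g, int nr, int total_weight)
{
	for (int i = 0; i < nr; i++)
		g[i].tokens += EPOCH_TOKENS * g[i].weight / total_weight;
}

/* Charge one I/O to group idx.  Returns 1 if the I/O may be issued now,
 * 0 if the group is out of tokens and must wait (would be blocked). */
static int charge_io(struct ioband_group *g, int nr, int idx, int total_weight)
{
	if (g[idx].tokens <= 0) {
		int i, others_have_tokens = 0;

		for (i = 0; i < nr; i++)
			if (g[i].tokens > 0)
				others_have_tokens = 1;
		if (others_have_tokens)
			return 0;			/* this group is blocked */
		new_epoch(g, nr, total_weight);		/* everyone is out: refill */
	}
	g[idx].tokens--;
	return 1;
}

int main(void)
{
	struct ioband_group g[NR_GROUPS] = { { .weight = 80 }, { .weight = 20 } };
	int issued[NR_GROUPS] = { 0 };

	new_epoch(g, NR_GROUPS, 100);
	for (int n = 0; n < 300; n++) {		/* two groups submitting in turn */
		int idx = n % NR_GROUPS;

		if (charge_io(g, NR_GROUPS, idx, 100))
			issued[idx]++;
	}
	printf("group0 (weight 80) issued %d I/Os, group1 (weight 20) issued %d\n",
	       issued[0], issued[1]);
	return 0;
}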

I added a new benchmark result to the dm-ioband webpage. This result
shows that dm-ioband can control bandwidth even when an unbalanced
I/O load is applied.
http://people.valinux.co.jp/~ryov/dm-ioband/benchmark/partition3.html

Thanks,
Ryo Tsuruta
Linux Block I/O Bandwidth Control Project
http://people.valinux.co.jp/~ryov/bwctl/


[no subject]

2007-04-10 Thread H. Peter Anvin
[PATCH] Clean up x86 control register and MSR macros

This patch is based on Rusty's recent cleanup of the EFLAGS-related
macros; it extends the same kind of cleanup to control registers and
MSRs.

It also unifies these between i386 and x86-64; at least with regards
to MSRs, the two had definitely gotten out of sync.

Signed-off-by: H. Peter Anvin [EMAIL PROTECTED]
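
For context, a hedged usage sketch of the unified macros (rdmsr() is the
existing accessor from asm/msr.h; the helper below is made up purely for
illustration):

#include <asm/msr.h>
#include <asm/msr-index.h>	/* the new header introduced by this patch */

/* Check whether the No-Execute feature is enabled in EFER. */
static int nx_enabled(void)
{
	unsigned int lo, hi;

	rdmsr(MSR_EFER, lo, hi);
	return !!(lo & EFER_NX);	/* EFER.NX is bit 11, in the low half */
}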

diff -urN --exclude='o.*' stock/linux-2.6.21-rc6-mm1/include/asm-i386/Kbuild 
linux-2.6.21-rc6-mm1/include/asm-i386/Kbuild
--- stock/linux-2.6.21-rc6-mm1/include/asm-i386/Kbuild  2007-04-05 
19:36:56.000000000 -0700
+++ linux-2.6.21-rc6-mm1/include/asm-i386/Kbuild        2007-04-09 
23:28:36.000000000 -0700
@@ -3,8 +3,10 @@
 header-y += boot.h
 header-y += debugreg.h
 header-y += ldt.h
+header-y += msr-index.h
 header-y += ptrace-abi.h
 header-y += ucontext.h
 
+unifdef-y += msr.h
 unifdef-y += mtrr.h
 unifdef-y += vm86.h
diff -urN --exclude='o.*' 
stock/linux-2.6.21-rc6-mm1/include/asm-i386/msr-index.h 
linux-2.6.21-rc6-mm1/include/asm-i386/msr-index.h
--- stock/linux-2.6.21-rc6-mm1/include/asm-i386/msr-index.h 1969-12-31 
16:00:00.000000000 -0800
+++ linux-2.6.21-rc6-mm1/include/asm-i386/msr-index.h   2007-04-09 
18:14:04.000000000 -0700
@@ -0,0 +1,270 @@
+#ifndef __ASM_MSR_INDEX_H
+#define __ASM_MSR_INDEX_H
+
+/* x86-64 specific MSRs */
+#define MSR_EFER		0xc0000080 /* extended feature register */
+#define MSR_STAR		0xc0000081 /* legacy mode SYSCALL target */
+#define MSR_LSTAR		0xc0000082 /* long mode SYSCALL target */
+#define MSR_CSTAR		0xc0000083 /* compat mode SYSCALL target */
+#define MSR_SYSCALL_MASK	0xc0000084 /* EFLAGS mask for syscall */
+#define MSR_FS_BASE		0xc0000100 /* 64bit FS base */
+#define MSR_GS_BASE		0xc0000101 /* 64bit GS base */
+#define MSR_KERNEL_GS_BASE	0xc0000102 /* SwapGS GS shadow */
+
+/* EFER bits: */
+#define _EFER_SCE		0x00000000 /* SYSCALL/SYSRET */
+#define _EFER_LME		0x00000008 /* Long mode enable */
+#define _EFER_LMA		0x0000000a /* Long mode active (read-only) */
+#define _EFER_NX		0x0000000b /* No execute enable */
+
+#define EFER_SCE		(1<<_EFER_SCE)
+#define EFER_LME		(1<<_EFER_LME)
+#define EFER_LMA		(1<<_EFER_LMA)
+#define EFER_NX			(1<<_EFER_NX)
+
+/* Intel MSRs. Some also available on other CPUs */
+#define MSR_IA32_PERFCTR0	0x000000c1
+#define MSR_IA32_PERFCTR1	0x000000c2
+#define MSR_FSB_FREQ		0x000000cd
+
+#define MSR_MTRRcap		0x000000fe
+#define MSR_IA32_BBL_CR_CTL	0x00000119
+
+#define MSR_IA32_SYSENTER_CS	0x00000174
+#define MSR_IA32_SYSENTER_ESP	0x00000175
+#define MSR_IA32_SYSENTER_EIP	0x00000176
+
+#define MSR_IA32_MCG_CAP	0x00000179
+#define MSR_IA32_MCG_STATUS	0x0000017a
+#define MSR_IA32_MCG_CTL	0x0000017b
+
+#define MSR_IA32_PEBS_ENABLE	0x000003f1
+#define MSR_IA32_DS_AREA	0x00000600
+#define MSR_IA32_PERF_CAPABILITIES 0x00000345
+
+#define MSR_MTRRfix64K_00000	0x00000250
+#define MSR_MTRRfix16K_80000	0x00000258
+#define MSR_MTRRfix16K_A0000	0x00000259
+#define MSR_MTRRfix4K_C0000	0x00000268
+#define MSR_MTRRfix4K_C8000	0x00000269
+#define MSR_MTRRfix4K_D0000	0x0000026a
+#define MSR_MTRRfix4K_D8000	0x0000026b
+#define MSR_MTRRfix4K_E0000	0x0000026c
+#define MSR_MTRRfix4K_E8000	0x0000026d
+#define MSR_MTRRfix4K_F0000	0x0000026e
+#define MSR_MTRRfix4K_F8000	0x0000026f
+#define MSR_MTRRdefType		0x000002ff
+
+#define MSR_IA32_DEBUGCTLMSR	0x000001d9
+#define MSR_IA32_LASTBRANCHFROMIP 0x000001db
+#define MSR_IA32_LASTBRANCHTOIP	0x000001dc
+#define MSR_IA32_LASTINTFROMIP	0x000001dd
+#define MSR_IA32_LASTINTTOIP	0x000001de
+
+#define MSR_IA32_MC0_CTL	0x00000400
+#define MSR_IA32_MC0_STATUS	0x00000401
+#define MSR_IA32_MC0_ADDR	0x00000402
+#define MSR_IA32_MC0_MISC	0x00000403
+
+#define MSR_P6_PERFCTR0		0x000000c1
+#define MSR_P6_PERFCTR1		0x000000c2
+#define MSR_P6_EVNTSEL0		0x00000186
+#define MSR_P6_EVNTSEL1		0x00000187
+
+/* K7/K8 MSRs. Not complete. See the architecture manual for a more
+   complete list. */
+#define MSR_K7_EVNTSEL0		0xc0010000
+#define MSR_K7_PERFCTR0		0xc0010004
+#define MSR_K7_EVNTSEL1		0xc0010001
+#define MSR_K7_PERFCTR1		0xc0010005
+#define MSR_K7_EVNTSEL2		0xc0010002
+#define MSR_K7_PERFCTR2		0xc0010006
+#define MSR_K7_EVNTSEL3		0xc0010003