Re: [Qemu-devel] [RFC PATCH 7/7] cpus: reclaim allocated vCPU objects

2014-07-30 Thread Anshul Makkar
Hi,

I am testing the cpu-hotunplug patches. I observed that after the
deletion of the cpu with id = x, if I cpu-add the same cpu again with id =
x, then qemu exits with the error that the file descriptor already exists.

On debugging I found that if I give cpu-add apic-id = x, then
qemu_kvm_cpu_thread_fn->kvm_init_vcpu is called, which sends an IOCTL
(KVM_CREATE_VCPU) to kvm to create a new fd. As the fd already exists
in KVM (we never delete the fd from the kernel, we just park it in a
separate list), the ioctl fails and QEMU exits. In the above code
flow, nowhere is it checked whether we already have the cpu with cpuid = x
available in the parked list so that we can reuse it.
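
For reference, this is roughly the lookup I would expect before the
KVM_CREATE_VCPU ioctl in kvm_init_vcpu (just a sketch against the
KVMParkedVcpu list that patch 7/7 introduces; the helper name here is
made up):

static int kvm_get_parked_vcpu_fd(KVMState *s, unsigned long vcpu_id)
{
    struct KVMParkedVcpu *cpu;

    /* reuse a previously parked fd instead of calling KVM_CREATE_VCPU again */
    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
        if (cpu->vcpu_id == vcpu_id) {
            int fd = cpu->kvm_fd;
            QLIST_REMOVE(cpu, node);
            g_free(cpu);
            return fd;
        }
    }

    return -1; /* not parked, fall back to KVM_CREATE_VCPU */
}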

Am I missing something, or is this bit yet to be implemented?

Thanks
Anshul Makkar

On Fri, Jul 18, 2014 at 4:09 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 Hi Anshul,
 On 07/18/2014 12:24 AM, Anshul Makkar wrote:

 Are we not going to introduce new command cpu_del for deleting the cpu ?

 I couldn't find any patch for addition of cpu_del command. Is this
 intentional and we intend to use device_del (and similarly device_add)
 for cpu hot(un)plug or just skipped to be added later. I have the
 patch for the same which I can release, if the intent is to add this
 command.

 The device_add/device_del interface is the approved way to support add/del 
 cpu,
 which is also more common and elegant than cpu_add/del.
 http://wiki.qemu.org/Features/CPUHotplug
 so we intend to use device_del rather than the cpu_del.
 And IMO, the cpu_add will be replaced by device_add sooner or later.
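
 For example (the same device_add/device_del syntax used for testing later
 in this thread; the apic-id/id values are just illustrative):

 (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
 (qemu) device_del cpu2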

 Thanks,
 Gu


 Thanks
 Anshul Makkar

 On Fri, Jul 11, 2014 at 11:59 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 After ACPI gets a signal to eject a vCPU, the vCPU must be
 removed from the CPU list before it is really removed, and then
 all related vCPU objects are released.
 But we do not close the KVM vcpu fd; we just record it in a list
 in order to reuse it.

 Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
 Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
 ---
  cpus.c   |   37 
  include/sysemu/kvm.h |1 +
  kvm-all.c|   57 
 +-
  3 files changed, 94 insertions(+), 1 deletions(-)

 diff --git a/cpus.c b/cpus.c
 index 4dfb889..9a73407 100644
 --- a/cpus.c
 +++ b/cpus.c
 @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void 
 *data), void *data)
  qemu_cpu_kick(cpu);
  }

 +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +
 +if (kvm_destroy_vcpu(cpu) < 0) {
 +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
 +exit(1);
 +}
 +
 +object_unparent(OBJECT(cpu));
 +}
 +
 +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +object_unparent(OBJECT(cpu));
 +}
 +
  static void flush_queued_work(CPUState *cpu)
  {
  struct qemu_work_item *wi;
 @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
  }
  }
  qemu_kvm_wait_io_event(cpu);
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +qemu_kvm_destroy_vcpu(cpu);
 +qemu_mutex_unlock(qemu_global_mutex);
 +return NULL;
 +}
  }

  return NULL;
 @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
  static void *qemu_tcg_cpu_thread_fn(void *arg)
  {
  CPUState *cpu = arg;
 +CPUState *remove_cpu = NULL;

  qemu_tcg_init_cpu_signals();
 qemu_thread_get_self(cpu->thread);
 @@ -961,6 +985,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
  }
  }
  qemu_tcg_wait_io_event();
 +CPU_FOREACH(cpu) {
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +remove_cpu = cpu;
 +break;
 +}
 +}
 +if (remove_cpu) {
 +qemu_tcg_destroy_vcpu(remove_cpu);
 +remove_cpu = NULL;
 +}
  }

  return NULL;
 @@ -1316,6 +1350,9 @@ static void tcg_exec_all(void)
  break;
  }
  } else if (cpu->stop || cpu->stopped) {
 +if (cpu->exit) {
 +next_cpu = CPU_NEXT(cpu);
 +}
  break;
  }
  }
 diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
 index 174ea36..88e2403 100644
 --- a/include/sysemu/kvm.h
 +++ b/include/sysemu/kvm.h
 @@ -178,6 +178,7 @@ int kvm_has_intx_set_mask(void);

  int kvm_init_vcpu(CPUState *cpu);
  int kvm_cpu_exec(CPUState *cpu);
 +int kvm_destroy_vcpu(CPUState *cpu);

  #ifdef NEED_CPU_H

 diff --git a/kvm-all.c b/kvm-all.c
 index 3ae30ee..25e2a43 100644
 --- a/kvm-all.c
 +++ b/kvm-all.c
 @@ -74,6 +74,12 @@ typedef struct KVMSlot

  typedef struct kvm_dirty_log KVMDirtyLog;

 +struct KVMParkedVcpu {
 +unsigned long vcpu_id;
 +int kvm_fd;
 +QLIST_ENTRY(KVMParkedVcpu) node;
 +};
 +
  struct KVMState
  {
  KVMSlot *slots;
 @@ -108,6 +114,7 @@ struct KVMState
  QTAILQ_HEAD(msi_hashtab, KVMMSIRoute

[Qemu-devel] cpu-del support over and above the RFC patches for cpu-hotunplug

2014-08-01 Thread Anshul Makkar
Hi,

Attached is the patch for the addition of the cpu-del command over and
above the patches below for hot removing the cpu.

[RFC PATCH 0/7] i386: add cpu hot remove support
[RFC PATCH 1/7] x86: add x86_cpu_unrealizefn() for cpu apic remove
[RFC PATCH 2/7] i386: add cpu device_del support
[RFC PATCH 3/7] qom cpu: rename variable 'cpu_added_notifier' to
'cpu_hotplug_notifier'
[RFC PATCH 4/7] qom cpu: add UNPLUG cpu notify support
[RFC PATCH 5/7] i386: implement pc interface cpu_common_unrealizefn()
in qom/cpu.c
[RFC PATCH 6/7] cpu hotplug: implement function cpu_status_write() for
vcpu ejection
[RFC PATCH 7/7] cpus: reclaim allocated vCPU objects

Useful, just in case anyone wants to continue with the old /
compatible cpu-del command.

Patch for addition of cpu-del command:
+++ b/include/hw/boards.h
@@ -25,6 +25,8 @@ typedef void QEMUMachineResetFunc(void);

 typedef void QEMUMachineHotAddCPUFunc(const int64_t id, Error **errp);

+typedef void QEMUMachineHotDelCPUFunc(const int64_t id, Error **errp);
+
 typedef int QEMUMachineGetKvmtypeFunc(const char *arg);

 struct QEMUMachine {
@@ -34,6 +36,7 @@ struct QEMUMachine {
 QEMUMachineInitFunc *init;
 QEMUMachineResetFunc *reset;
 QEMUMachineHotAddCPUFunc *hot_add_cpu;
+QEMUMachineHotDelCPUFunc *hot_del_cpu;
 QEMUMachineGetKvmtypeFunc *kvm_type;
 BlockInterfaceType block_default_type;
 int max_cpus;
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index f2d39d2..33350fc 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -163,6 +163,7 @@ void pc_acpi_smi_interrupt(void *opaque, int irq,
int level);

 void pc_cpus_init(const char *cpu_model, DeviceState *icc_bridge);
 void pc_hot_add_cpu(const int64_t id, Error **errp);
+void pc_hot_del_cpu(const int64_t id, Error **errp);
 void pc_acpi_init(const char *default_dsdt);

 PcGuestInfo *pc_guest_info_init(ram_addr_t below_4g_mem_size,
@@ -472,6 +473,7 @@ int e820_add_entry(uint64_t, uint64_t, uint32_t);
 #define PC_DEFAULT_MACHINE_OPTIONS \
 PC_COMMON_MACHINE_OPTIONS, \
 .hot_add_cpu = pc_hot_add_cpu, \
+.hot_del_cpu = pc_hot_del_cpu, \
 .max_cpus = 255

 #endif
diff --git a/qmp.c b/qmp.c
index 95369f9..fc494da 100644
--- a/qmp.c
+++ b/qmp.c
@@ -126,6 +126,18 @@ void qmp_cpu_add(int64_t id, Error **errp)
 }
 }

+void qmp_cpu_del(int64_t id, Error **errp)
+{
+MachineClass *mc;
+mc = MACHINE_GET_CLASS(current_machine);
+if (mc->qemu_machine->hot_del_cpu) {
+mc->qemu_machine->hot_del_cpu(id, errp);
+}
+else {
+error_setg(errp, "Not supported");
+}
+}
+
 #ifndef CONFIG_VNC
 /* If VNC support is enabled, the true query-vnc command is
defined in the VNC subsystem */

hw/i386/pc.c
+void pc_hot_del_cpu(const int64_t id, Error **errp)
+{
+    int64_t apic_id = x86_cpu_apic_id_from_index(id);
+    fprintf(stderr, "pc.c: pc_hot_del_cpu for apic_id = %" PRId64 "\n", apic_id);
+
+    if (id < 0) {
+        error_setg(errp, "Invalid CPU id: %" PRIi64, id);
+        return;
+    }
+
+    if (!cpu_exists(apic_id)) {
+        error_setg(errp, "Unable to remove CPU: %" PRIi64
+                   ", it does not exist", id);
+        return;
+    }
+
+    if (id >= max_cpus) {
+        error_setg(errp, "Unable to remove CPU: %" PRIi64
+                   ", max allowed: %d", id, max_cpus - 1);
+        return;
+    }
+
+    CPUState *cpu = first_cpu;
+    X86CPUClass *xcc = NULL;
+
+    while ((cpu = CPU_NEXT(cpu))) {
+        fprintf(stderr, "cpu thread_id = %d, cpu_index = %d\n",
+                cpu->thread_id, cpu->cpu_index);
+        if ((cpu->cpu_index + 1) == apic_id) {
+            break;
+        }
+    }
+
+    if (cpu == first_cpu) {
+        fprintf(stderr, "Unable to delete the last cpu.\n");
+        return;
+    }
+    xcc = X86_CPU_GET_CLASS(DEVICE(cpu));
+    xcc->parent_unrealize(DEVICE(cpu), errp);
+}
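
Once this is wired into the QAPI schema and HMP (not shown in the snippet
above), the command would be used roughly like this (hypothetical example,
mirroring the existing cpu-add command):

{ "execute": "cpu-del", "arguments": { "id": 2 } }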

Anshul Makkar
wwwdotjustkerneldotcom



Re: [Qemu-devel] [RFC PATCH 7/7] cpus: reclaim allocated vCPU objects

2014-08-01 Thread Anshul Makkar
Hi Gu,

Thanks for clarifying.

Ah I missed that bit of the patch. Sorry about that and for making noise.

Yes, now cpu-hotplug and unplug works fine. Next week I plan to run a
series of automated and stress test. Will keep the group posted about
the results.

Thanks
Anshul Makkar

On Fri, Aug 1, 2014 at 6:42 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 Hi Anshul,
 Thanks for your test.
 On 07/30/2014 10:31 PM, Anshul Makkar wrote:

 Hi,

 I am testing the cpu-hotunplug patches. I observed that after the
 deletion of the cpu with id = x, if I cpu-add the same cpu again with id =
 x, then qemu exits with the error that the file descriptor already exists.

 Could you please offer the whole reproduction routine? In my test box, we
 can add a removed cpu with the same id.


 On debugging I found that if I give cpu-add apic-id = x, then
 qemu_kvm_cpu_thread_fn->kvm_init_vcpu is called, which sends an IOCTL
 (KVM_CREATE_VCPU) to kvm to create a new fd. As the fd already exists
 in KVM (we never delete the fd from the kernel, we just park it in a
 separate list), the ioctl fails and QEMU exits. In the above code
 flow, nowhere is it checked whether we already have the cpu with cpuid = x
 available in the parked list so that we can reuse it.

 Am I missing something, or is this bit yet to be implemented?

 Yes, it is implemented, in the same way as you mention above; please refer
 to the function kvm_get_vcpu().

 Thanks,
 Gu


 Thanks
 Anshul Makkar

 On Fri, Jul 18, 2014 at 4:09 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 Hi Anshul,
 On 07/18/2014 12:24 AM, Anshul Makkar wrote:

 Are we not going to introduce new command cpu_del for deleting the cpu ?

 I couldn't find any patch for addition of cpu_del command. Is this
 intentional and we intend to use device_del (and similarly device_add)
 for cpu hot(un)plug or just skipped to be added later. I have the
 patch for the same which I can release, if the intent is to add this
 command.

 The device_add/device_del interface is the approved way to support 
 add/del cpu,
 which is also more common and elegant than cpu_add/del.
 http://wiki.qemu.org/Features/CPUHotplug
 so we intend to use device_del rather than the cpu_del.
 And IMO, the cpu_add will be replaced by device_add sooner or later.

 Thanks,
 Gu


 Thanks
 Anshul Makkar

 On Fri, Jul 11, 2014 at 11:59 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 After ACPI gets a signal to eject a vCPU, the vCPU must be
 removed from the CPU list before it is really removed, and then
 all related vCPU objects are released.
 But we do not close the KVM vcpu fd; we just record it in a list
 in order to reuse it.

 Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
 Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
 ---
  cpus.c   |   37 
  include/sysemu/kvm.h |1 +
  kvm-all.c|   57 
 +-
  3 files changed, 94 insertions(+), 1 deletions(-)

 diff --git a/cpus.c b/cpus.c
 index 4dfb889..9a73407 100644
 --- a/cpus.c
 +++ b/cpus.c
 @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void 
 (*func)(void *data), void *data)
  qemu_cpu_kick(cpu);
  }

 +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +
 +if (kvm_destroy_vcpu(cpu) < 0) {
 +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
 +exit(1);
 +}
 +
 +object_unparent(OBJECT(cpu));
 +}
 +
 +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +object_unparent(OBJECT(cpu));
 +}
 +
  static void flush_queued_work(CPUState *cpu)
  {
  struct qemu_work_item *wi;
 @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
  }
  }
  qemu_kvm_wait_io_event(cpu);
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +qemu_kvm_destroy_vcpu(cpu);
 +qemu_mutex_unlock(qemu_global_mutex);
 +return NULL;
 +}
  }

  return NULL;
 @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
  static void *qemu_tcg_cpu_thread_fn(void *arg)
  {
  CPUState *cpu = arg;
 +CPUState *remove_cpu = NULL;

  qemu_tcg_init_cpu_signals();
 qemu_thread_get_self(cpu->thread);
 @@ -961,6 +985,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
  }
  }
  qemu_tcg_wait_io_event();
 +CPU_FOREACH(cpu) {
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +remove_cpu = cpu;
 +break;
 +}
 +}
 +if (remove_cpu) {
 +qemu_tcg_destroy_vcpu(remove_cpu);
 +remove_cpu = NULL;
 +}
  }

  return NULL;
 @@ -1316,6 +1350,9 @@ static void tcg_exec_all(void)
  break;
  }
  } else if (cpu->stop || cpu->stopped) {
 +if (cpu->exit) {
 +next_cpu = CPU_NEXT(cpu);
 +}
  break;
  }
  }
 diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
 index 174ea36..88e2403

Re: [Qemu-devel] [RFC PATCH 10/10] cpus: reclaim allocated vCPU objects

2014-08-07 Thread Anshul Makkar
Thanks Gu.. cpu-hotunplug is working fine in my  tests.

For cpu-hotplug, I get inconsistent results if I delete an arbitrary cpu
and not just the last one.

for eg
list of cpus: 1, 2 ,3
device_add cpu 4
device_add cpu 5
device_add cpu 6

device_del cpu 4
device_del cpu 6

Now if I do device_add cpu6, then cpu 4 gets added, and now if I try to
add cpu 4 or 6, it says the cpu already exists. It's a kind of vague
behaviour. Do we follow any protocol here while adding and deleting
cpus?

Thanks
Anshul Makkar
www.justkernel.com

On Thu, Aug 7, 2014 at 6:54 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 After ACPI gets a signal to eject a vCPU, the vCPU must be
 removed from the CPU list before it is really removed, and then
 all related vCPU objects are released.
 But we do not close the KVM vcpu fd; we just record it in a list
 in order to reuse it.

 Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
 Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
 ---
  cpus.c   |   37 
  include/sysemu/kvm.h |1 +
  kvm-all.c|   57 
 +-
  3 files changed, 94 insertions(+), 1 deletions(-)

 diff --git a/cpus.c b/cpus.c
 index 4dfb889..9a73407 100644
 --- a/cpus.c
 +++ b/cpus.c
 @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void 
 *data), void *data)
  qemu_cpu_kick(cpu);
  }

 +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +
 +if (kvm_destroy_vcpu(cpu) < 0) {
 +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
 +exit(1);
 +}
 +
 +object_unparent(OBJECT(cpu));
 +}
 +
 +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +object_unparent(OBJECT(cpu));
 +}
 +
  static void flush_queued_work(CPUState *cpu)
  {
  struct qemu_work_item *wi;
 @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
  }
  }
  qemu_kvm_wait_io_event(cpu);
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +qemu_kvm_destroy_vcpu(cpu);
 +qemu_mutex_unlock(qemu_global_mutex);
 +return NULL;
 +}
  }

  return NULL;
 @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
  static void *qemu_tcg_cpu_thread_fn(void *arg)
  {
  CPUState *cpu = arg;
 +CPUState *remove_cpu = NULL;

  qemu_tcg_init_cpu_signals();
 qemu_thread_get_self(cpu->thread);
 @@ -961,6 +985,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
  }
  }
  qemu_tcg_wait_io_event();
 +CPU_FOREACH(cpu) {
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +remove_cpu = cpu;
 +break;
 +}
 +}
 +if (remove_cpu) {
 +qemu_tcg_destroy_vcpu(remove_cpu);
 +remove_cpu = NULL;
 +}
  }

  return NULL;
 @@ -1316,6 +1350,9 @@ static void tcg_exec_all(void)
  break;
  }
  } else if (cpu->stop || cpu->stopped) {
 +if (cpu->exit) {
 +next_cpu = CPU_NEXT(cpu);
 +}
  break;
  }
  }
 diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
 index 174ea36..88e2403 100644
 --- a/include/sysemu/kvm.h
 +++ b/include/sysemu/kvm.h
 @@ -178,6 +178,7 @@ int kvm_has_intx_set_mask(void);

  int kvm_init_vcpu(CPUState *cpu);
  int kvm_cpu_exec(CPUState *cpu);
 +int kvm_destroy_vcpu(CPUState *cpu);

  #ifdef NEED_CPU_H

 diff --git a/kvm-all.c b/kvm-all.c
 index 1402f4f..d0caeff 100644
 --- a/kvm-all.c
 +++ b/kvm-all.c
 @@ -74,6 +74,12 @@ typedef struct KVMSlot

  typedef struct kvm_dirty_log KVMDirtyLog;

 +struct KVMParkedVcpu {
 +unsigned long vcpu_id;
 +int kvm_fd;
 +QLIST_ENTRY(KVMParkedVcpu) node;
 +};
 +
  struct KVMState
  {
  KVMSlot *slots;
 @@ -108,6 +114,7 @@ struct KVMState
  QTAILQ_HEAD(msi_hashtab, KVMMSIRoute) msi_hashtab[KVM_MSI_HASHTAB_SIZE];
  bool direct_msi;
  #endif
 +QLIST_HEAD(, KVMParkedVcpu) kvm_parked_vcpus;
  };

  KVMState *kvm_state;
 @@ -226,6 +233,53 @@ static int kvm_set_user_memory_region(KVMState *s, 
 KVMSlot *slot)
  return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, mem);
  }

 +int kvm_destroy_vcpu(CPUState *cpu)
 +{
 +KVMState *s = kvm_state;
 +long mmap_size;
 +struct KVMParkedVcpu *vcpu = NULL;
 +int ret = 0;
 +
 +DPRINTF("kvm_destroy_vcpu\n");
 +
 +mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
 +if (mmap_size < 0) {
 +ret = mmap_size;
 +DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
 +goto err;
 +}
 +
 +ret = munmap(cpu->kvm_run, mmap_size);
 +if (ret < 0) {
 +goto err;
 +}
 +
 +vcpu = g_malloc0(sizeof(*vcpu));
 +vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
 +vcpu->kvm_fd = cpu->kvm_fd;
 +QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
 +err:
 +return ret;
 +}
 +
 +static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id

Re: [Qemu-devel] [RFC PATCH 10/10] cpus: reclaim allocated vCPU objects

2014-08-11 Thread Anshul Makkar
Hi Gu,

These are APIC IDs.

Taking the example from the previous mail.

Original cpus:0,1 maxcpus:6
(qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
(qemu) device_add qemu64-x86_64-cpu,apic-id=5,id=cpu5

cat /proc/cpuinfo shows
processor 0
processor 1
processor 2
processor 3

instead of 3 and 5 cpus 2 and 3 have been added.

Now if I do again

(qemu) device_add qemu64-x86_64-cpu,apic-id=5,id=cpu5
it says cpu already exists but cat /proc/cpuinfo doesn't show me cpu
with apicid 5.

Scenario 2:

Original cpus:0,1 maxcpus:6
(qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
(qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
(qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4
cat /proc/cpuinfo
processor 0
processor 1
processor 2
processor 3
processor 4


(qemu) device_del cpu2
(qemu) device_del cpu4
cat /proc/cpuinfo
processor 0
processor 1
processor 3

(qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4

cpu 2 gets added instead of 4 and cat /proc/cpuinfo shows
processor 0
processor 1
processor 2
processor 3

I can just see that random deletion and addition is not possible.

I have put traces in the code to verify the APIC IDs, as I couldn't see
the APIC IDs in the output of cat /proc/cpuinfo.
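
(For reference, on x86 the APIC ID is exposed by the guest kernel as the
"apicid" field, so something like grep -E 'processor|apicid' /proc/cpuinfo
should show the processor-to-APIC-ID mapping without extra traces.)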

Please let me know if I am missing something .

Thanks
Anshul Makkar


On Fri, Aug 8, 2014 at 7:48 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 Hi Anshul,
 On 08/07/2014 09:31 PM, Anshul Makkar wrote:

 Thanks Gu.. cpu-hotunplug is working fine in my  tests.

 Thanks for your quick test.


 For cpu-hotplug, I get inconsistent result if I delete arbitrary cpu
 and not just the last one.

 for eg
 list of cpus: 1, 2 ,3
 device_add cpu 4
 device_add cpu 5
 device_add cpu 6

 What type id do you use here? apic-id or device id?


 device_del cpu 4
 device_del cpu 6

 Could you please offer the detail reproduce info? the more the better.


 now if I do device_add cpu6, then cpu 4 gets added and now if I try to
 do add cpu 4 or 6, it says cpu already exist.. Its a kind of vague
 behaviour.. Do, we follow any protocol here while adding and deleting
 cpus.

 There is no strict restriction here. Does the following routine match
 the condition you mentioned? It works fine in my box.

 Original cpus:0,1 maxcpus:6
 (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
 (qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
 (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4

 (qemu) device_del cpu2
 (qemu) device_del cpu4

 (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4
 (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2

 Thanks,
 Gu


 Thanks
 Anshul Makkar
 www.justkernel.com

 On Thu, Aug 7, 2014 at 6:54 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 After ACPI gets a signal to eject a vCPU, the vCPU must be
 removed from the CPU list before it is really removed, and then
 all related vCPU objects are released.
 But we do not close the KVM vcpu fd; we just record it in a list
 in order to reuse it.

 Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
 Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
 ---
  cpus.c   |   37 
  include/sysemu/kvm.h |1 +
  kvm-all.c|   57 
 +-
  3 files changed, 94 insertions(+), 1 deletions(-)

 diff --git a/cpus.c b/cpus.c
 index 4dfb889..9a73407 100644
 --- a/cpus.c
 +++ b/cpus.c
 @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void 
 *data), void *data)
  qemu_cpu_kick(cpu);
  }

 +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +
 +if (kvm_destroy_vcpu(cpu) < 0) {
 +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
 +exit(1);
 +}
 +
 +object_unparent(OBJECT(cpu));
 +}
 +
 +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +object_unparent(OBJECT(cpu));
 +}
 +
  static void flush_queued_work(CPUState *cpu)
  {
  struct qemu_work_item *wi;
 @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
  }
  }
  qemu_kvm_wait_io_event(cpu);
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +qemu_kvm_destroy_vcpu(cpu);
 +qemu_mutex_unlock(qemu_global_mutex);
 +return NULL;
 +}
  }

  return NULL;
 @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
  static void *qemu_tcg_cpu_thread_fn(void *arg)
  {
  CPUState *cpu = arg;
 +CPUState *remove_cpu = NULL;

  qemu_tcg_init_cpu_signals();
 qemu_thread_get_self(cpu->thread);
 @@ -961,6 +985,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
  }
  }
  qemu_tcg_wait_io_event();
 +CPU_FOREACH(cpu) {
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +remove_cpu = cpu;
 +break;
 +}
 +}
 +if (remove_cpu) {
 +qemu_tcg_destroy_vcpu(remove_cpu);
 +remove_cpu = NULL;
 +}
  }

  return NULL;
 @@ -1316,6

Re: [Qemu-devel] [RFC PATCH 10/10] cpus: reclaim allocated vCPU objects

2014-08-12 Thread Anshul Makkar
Hi Gu,

Looks more like a logic-ordering issue to me, based on indexes. Don't spend time
looking into it; let me check whether I have done something wrong and fix
it.

Thanks
Anshul Makkar


On Mon, Aug 11, 2014 at 4:35 PM, Anshul Makkar 
anshul.mak...@profitbricks.com wrote:

 Hi Gu,

 These are APIC IDs.

 Taking the example from the previous mail.

 Original cpus:0,1 maxcpus:6
 (qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
 (qemu) device_add qemu64-x86_64-cpu,apic-id=5,id=cpu5

 cat /proc/cpuinfo shows
 processor 0
 processor 1
 processor 2
 processor 3

 instead of 3 and 5 cpus 2 and 3 have been added.

 Now if I do again

 (qemu) device_add qemu64-x86_64-cpu,apic-id=5,id=cpu5
 it says cpu already exists but cat /proc/cpuinfo doesn't show me cpu
 with apicid 5.

 Scenario 2:

 Original cpus:0,1 maxcpus:6
 (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
 (qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
 (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4
 cat /proc/cpuinfo
 processor 0
 processor 1
 processor 2
 processor 3
 processor 4


 (qemu) device_del cpu2
 (qemu) device_del cpu4
 cat /proc/cpuinfo
 processor 0
 processor 1
 processor 3

 (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4

 cpu 2 gets added instead of 4 and cat /proc/cpuinfo shows
 processor 0
 processor 1
 processor 2
 processor 3

 I can just see that random deletion and addition is not possible.

 I have put traces in the code to verify the APIC IDs as I couldn't see
 APIC IDs in output of cat /proc/cpuinfo .

 Please let me know if I am missing something .

 Thanks
 Anshul Makkar


 On Fri, Aug 8, 2014 at 7:48 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
  Hi Anshul,
  On 08/07/2014 09:31 PM, Anshul Makkar wrote:
 
  Thanks Gu.. cpu-hotunplug is working fine in my  tests.
 
  Thanks for your quick test.
 
 
  For cpu-hotplug, I get inconsistent result if I delete arbitrary cpu
  and not just the last one.
 
  for eg
  list of cpus: 1, 2 ,3
  device_add cpu 4
  device_add cpu 5
  device_add cpu 6
 
  What type id do you use here? apic-id or device id?
 
 
  device_del cpu 4
  device_del cpu 6
 
  Could you please offer the detail reproduce info? the more the better.
 
 
  now if I do device_add cpu6, then cpu 4 gets added and now if I try to
  do add cpu 4 or 6, it says cpu already exist.. Its a kind of vague
  behaviour.. Do, we follow any protocol here while adding and deleting
  cpus.
 
  There is no strict restriction here. Does the following routine match
  the condition you mentioned? It works fine in my box.
 
  Original cpus:0,1 maxcpus:6
  (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
  (qemu) device_add qemu64-x86_64-cpu,apic-id=3,id=cpu3
  (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4
 
  (qemu) device_del cpu2
  (qemu) device_del cpu4
 
  (qemu) device_add qemu64-x86_64-cpu,apic-id=4,id=cpu4
  (qemu) device_add qemu64-x86_64-cpu,apic-id=2,id=cpu2
 
  Thanks,
  Gu
 
 
  Thanks
  Anshul Makkar
  www.justkernel.com
 
  On Thu, Aug 7, 2014 at 6:54 AM, Gu Zheng guz.f...@cn.fujitsu.com
 wrote:
  After ACPI gets a signal to eject a vCPU, the vCPU must be
  removed from the CPU list before it is really removed, and then
  all related vCPU objects are released.
  But we do not close the KVM vcpu fd; we just record it in a list
  in order to reuse it.
 
  Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
  Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
  ---
   cpus.c   |   37 
   include/sysemu/kvm.h |1 +
   kvm-all.c|   57
 +-
   3 files changed, 94 insertions(+), 1 deletions(-)
 
  diff --git a/cpus.c b/cpus.c
  index 4dfb889..9a73407 100644
  --- a/cpus.c
  +++ b/cpus.c
  @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void
 (*func)(void *data), void *data)
   qemu_cpu_kick(cpu);
   }
 
  +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
  +{
  +CPU_REMOVE(cpu);
  +
  +if (kvm_destroy_vcpu(cpu) < 0) {
  +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
  +exit(1);
  +}
  +
  +object_unparent(OBJECT(cpu));
  +}
  +
  +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
  +{
  +CPU_REMOVE(cpu);
  +object_unparent(OBJECT(cpu));
  +}
  +
   static void flush_queued_work(CPUState *cpu)
   {
   struct qemu_work_item *wi;
  @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
   }
   }
   qemu_kvm_wait_io_event(cpu);
  +if (cpu->exit && !cpu_can_run(cpu)) {
  +qemu_kvm_destroy_vcpu(cpu);
  +qemu_mutex_unlock(qemu_global_mutex);
  +return NULL;
  +}
   }
 
   return NULL;
  @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
   static void *qemu_tcg_cpu_thread_fn(void *arg)
   {
   CPUState *cpu = arg;
  +CPUState *remove_cpu = NULL;
 
   qemu_tcg_init_cpu_signals();
  qemu_thread_get_self(cpu->thread);
  @@ -961,6 +985,16

[Qemu-devel] change of mac address at runtime

2014-06-17 Thread Anshul Makkar
Hi,

Just want to check this small piece of implementation detail in qemu.

Is it possible to change the mac address of VM at runtime  and does
the same information is conveyed to host if we are using Virtio based
transfers (approach).

Thanks
Anshul Makkar



[Qemu-devel] Feature list for 2.1

2014-05-13 Thread Anshul Makkar
Hi,

The page http://wiki.qemu.org/Planning/1.2 just gives the names of the
features that will be implemented in 2.1. Where can I find the details
about them?

Thanks
Anshul Makkar
www.justkernel.com



Re: [Qemu-devel] Feature list for 2.1

2014-05-14 Thread Anshul Makkar
Hi Peter,

So it means there is no target list of features that
should be supported in 2.1?

If someone wants to pick a particular feature from some target list
and contribute to it, where should they look?

Thanks

On Tue, May 13, 2014 at 5:58 PM, Peter Maydell peter.mayd...@linaro.org wrote:
 On 13 May 2014 16:54, Anshul Makkar anshul.mak...@profitbricks.com wrote:
 The page http://wiki.qemu.org/Planning/1.2 just give the names of the
 features that will be implemented in 2.1.

 Wrong page -- 2.1 != 1.2 :-)

 Where can I find the details about the same ?

 I don't think we've written down a list of features for 2.1:
 it will get whatever people finish and commit by the point
 we're ready to release.

 thanks
 -- PMM



[Qemu-devel] Common header file for error codes

2014-05-21 Thread Anshul Makkar
Hi,

Doesn't there exist a common header file for all the return codes? In the
code I can see return values such as 0, -1, etc.

Am I missing something, or is this some work in progress?

Thanks
Anshul Makkar
www.justkernel.com


Re: [Qemu-devel] [PATCH 33/35] pc: ACPI BIOS: reserve SRAT entry for hotplug mem hole

2014-05-27 Thread Anshul Makkar
Hi,

I tested the hot unplug patch and it doesn't seem to work properly with Debian
6 and an Ubuntu host.

Scenario:
I added 3 dimm devices of 1G each:

object_add memory-ram,id=ram0,size=1G, device_add dimm,id=dimm1,memdev=ram0

object_add memory-ram,id=ram1,size=1G, device_add dimm,id=dimm2,memdev=ram1

object_add memory-ram,id=ram2,size=1G, device_add dimm,id=dimm3,memdev=ram2

device_del dimm3: I get the OST EVENT EJECT 0x3 and OST STATUS as 0x84 (IN
PROGRESS). If I check on the guest, the device has been successfully
removed, but no OST EJECT SUCCESS event was received.

device_del dimm2: I get OST EVENT EJECT 0x3, OST STATUS 0x84 (IN PROGRESS).
Then 2nd time OST EVENT EJECT 0x3, OST STATUS 0x1 (FAILURE) . Device is not
removed from the guest.

device_del dimm1: I get OST EVENT EJECT 0x3, OST STATUS 0x84 (IN PROGRESS).
Then 2nd OST EVENT EJECT 0x3, OST STATUS 0x1(FAILURE) . Device is not
removed from the guest.

Thus it means that if the first device removal attempt ends with a status
indicating "in progress", then one more attempt will be made to remove the
device. If that attempt succeeds then no success OST event will be conveyed,
otherwise an OST FAILURE event will be sent. Can we always be sure that an
OST_FAILURE event will be sent in case of failure?

Please can you share your thoughts here.

Thanks
Anshul Makkar
www.justkernel.com




On Tue, May 6, 2014 at 3:00 PM, Vasilis Liaskovitis 
vasilis.liaskovi...@profitbricks.com wrote:

 On Tue, May 06, 2014 at 09:52:39AM +0800, Hu Tao wrote:
  On Mon, May 05, 2014 at 05:59:15PM +0200, Vasilis Liaskovitis wrote:
   Hi,
  
   On Mon, Apr 14, 2014 at 06:44:42PM +0200, Igor Mammedov wrote:
On Mon, 14 Apr 2014 15:25:01 +0800
Hu Tao hu...@cn.fujitsu.com wrote:
   
 On Fri, Apr 04, 2014 at 03:36:58PM +0200, Igor Mammedov wrote:
Could you be more specific, what and how doesn't work and why there
 is
need for SRAT entries per DIMM?
I've briefly tested with your unplug patches and linux seemed be ok
 with unplug,
i.e. device node was removed from /sys after receiving remove
 notification.
  
   Just a heads-up, is this the unplug patch that you are using for
 testing:
  
 https://github.com/taohu/qemu/commit/55c9540919e189b0ad2e6a759af742080f8f5dc4
  
   or is there a newer version based on Igor's patchseries?
 
  Yeah. There is a new version. I pushed it up to
  https://github.com/taohu/qemu/commits/memhp for you to check out.

 cool, thanks.

 - Vasilis




Re: [Qemu-devel] [PATCH v10 00/18] Vhost and vhost-net support for userspace based backends

2014-05-28 Thread Anshul Makkar
Hi,

We are also trying to develop a solution where we can implement the switch
in user mode (thinking of using VDE) and then RDMA the packets directly to
the other end without involving the kernel layers. Does the above solution/patch
series implement that using the Snabbswitch ethernet switch?

Confused here; please can you share your thoughts?

Thanks
Anshul Makkar
www.justkernel.com


On Tue, May 27, 2014 at 2:03 PM, Nikolay Nikolaev 
n.nikol...@virtualopensystems.com wrote:

 In this patch series we would like to introduce our approach for putting a
 virtio-net backend in an external userspace process. Our eventual target
 is to
 run the network backend in the Snabbswitch ethernet switch, while receiving
 traffic from a guest inside QEMU/KVM which runs an unmodified virtio-net
 implementation.

 For this, we are working into extending vhost to allow equivalent
 functionality
 for userspace. Vhost already passes control of the data plane of
 virtio-net to
 the host kernel; we want to realize a similar model, but for userspace.

 In this patch series the concept of a vhost-backend is introduced.

 We define two vhost backend types - vhost-kernel and vhost-user. The
 former is
 the interface to the current kernel module implementation. Its control
 plane is
 ioctl based. The data plane is realized by the kernel directly accessing
 the
 QEMU allocated, guest memory.

 In the new vhost-user backend, the control plane is based on communication
 between QEMU and another userspace process using a unix domain socket. This
 allows to implement a virtio backend for a guest running in QEMU, inside
 the
 other userspace process. For this communication we use a chardev with a
 Unix
 domain socket backend. Vhost-user is client/server agnostic regarding the
 chardev, however it does not support the 'nowait' and 'telnet' options.

 We rely on the memdev with a memory-file backend. The backend's share=on
 option
 should be used. HugeTLBFS is required for this option to work.
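
 (For example, assuming the /hugetlbfs mount point used in the example
 below, the host would need something like:
 mount -t hugetlbfs hugetlbfs /hugetlbfs)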

 The data path is realized by directly accessing the vrings and the buffer
 data
 off the guest's memory.

 The current user of vhost-user is only vhost-net. We add a new netdev
 backend
 that is intended to initialize vhost-net with vhost-user backend.

 Example usage:

 qemu -m 512 \
  -object memory-file,id=mem,size=512M,mem-path=/hugetlbfs,share=on \
  -numa node,memdev=mem \
  -chardev socket,id=chr0,path=/path/to/socket \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0

 On non-MSIX guests the vhost feature can be forced using a special option:

 ...
  -netdev type=vhost-user,id=net0,chardev=chr0,vhostforce
 ...

 In order to use ioeventfds, kvm should be enabled.

 The work is made on top of the NUMA patch series v3.2
 http://lists.gnu.org/archive/html/qemu-devel/2014-05/msg02706.html

 This code can be pulled from g...@github.com:virtualopensystems/qemu.git
 vhost-user-v10
 A simple functional test is available in tests/vhost-user-test.c

 A reference vhost-user slave for testing is also available from
 g...@github.com:virtualopensystems/vapp.git

 Changes from v9:
  - Rebased on the NUMA memdev patchseries and reworked to use memdev
  - Removed -mem-path refactoring
  - Removed all reconnection code
  - Fixed 100% CPU usage in the G_IO_HUP handler after disconnect
  - Reworked vhost feature bits handling so vhost-user has better control
 in the negotiation

 Changes from v8:
  - Removed prealloc property from the -mem-path refactoring
  - Added and use new function - kvm_eventfds_enabled
  - Add virtio_queue_get_avail_idx used in vhost_virtqueue_stop to
get a sane value in case of VHOST_GET_VRING_BASE failure
  - vhost user uses kvm_eventfds_enabled to check whether the ioeventfd
capability of KVM is available
  - Added flag VHOST_USER_VRING_NOFD_MASK to be set when KICK, CALL or ERR
 file
descriptor is invalid or ioeventfd is not available

 Changes from v7:
  - Slave reconnection when using chardev in server mode
  - qtest vhost-user-test added
  - New qemu_chr_fe_get_msgfds for reading multiple fds from the chardev
  - Mandatory features in vhost_dev, used on reconnect to verify for
 conflicts
  - Add vhostforce parameter to -netdev vhost-user (for non-MSIX guests)
  - Extend libqemustub.a to support qemu-char.c

 Changes from v6:
  - Remove the 'unlink' property of '-mem-path'
  - Extend qemu-char: blocking read, send fds, monitor for connection close
  - Vhost-user uses chardev as a backend
  - Poll and reconnect removed (no VHOST_USER_ECHO).
  - Disconnect is deteced by the chardev (G_IO_HUP event)
  - vhost-backend.c split to vhost-user.c

 Changes from v5:
  - Split -mem-path unlink option to a separate patch
  - Fds are passed only in the ancillary data
  - Stricter message size checks on receive/send
  - Netdev vhost-user now includes path and poll_time options
  - The connection probing interval is configurable

 Changes from v4:
  - Use error_report for errors

Re: [Qemu-devel] [PATCH v3 22/34] trace: add acpi memory hotplug IO region events

2014-05-28 Thread Anshul Makkar
Hi,

Sorry for this basic question. Do the above trace functions lead to some
printfs that will be helpful for debugging? If yes, where can I see the
trace logs? I have not been able to find the definition of these trace
functions.
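
(My current understanding so far: each line in trace-events generates a
trace_<name>() helper at build time, e.g. an entry like

  mhp_acpi_write_slot(uint32_t slot) "0x%"PRIx32

is what makes the trace_mhp_acpi_write_slot(mem_st->selector) call in the
patch compile, and whether anything is printed depends on the trace backend
QEMU was configured with and on the event being enabled at run time. Please
correct me if that is wrong.)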

Thanks
Anshul Makkar
www.justkernel.com


On Tue, May 27, 2014 at 3:01 PM, Igor Mammedov imamm...@redhat.com wrote:

 Add events for tracing accesses to memory hotplug IO ports.

 Signed-off-by: Igor Mammedov imamm...@redhat.com
 ---
  hw/acpi/memory_hotplug.c |   13 +
  trace-events |   13 +
  2 files changed, 26 insertions(+), 0 deletions(-)

 diff --git a/hw/acpi/memory_hotplug.c b/hw/acpi/memory_hotplug.c
 index 6138346..73a0501 100644
 --- a/hw/acpi/memory_hotplug.c
 +++ b/hw/acpi/memory_hotplug.c
 @@ -2,6 +2,7 @@
  #include "hw/acpi/pc-hotplug.h"
  #include "hw/mem/dimm.h"
  #include "hw/boards.h"
 +#include "trace.h"

  static uint64_t acpi_memory_hotplug_read(void *opaque, hwaddr addr,
   unsigned int size)
 @@ -11,6 +12,7 @@ static uint64_t acpi_memory_hotplug_read(void *opaque,
 hwaddr addr,
  MemStatus *mdev;

  if (mem_st->selector >= mem_st->dev_count) {
 +trace_mhp_acpi_invalid_slot_selected(mem_st->selector);
  return 0;
  }

 @@ -18,24 +20,30 @@ static uint64_t acpi_memory_hotplug_read(void *opaque,
 hwaddr addr,
  switch (addr) {
  case 0x0: /* Lo part of phys address where DIMM is mapped */
  val = object_property_get_int(OBJECT(mdev->dimm), DIMM_ADDR_PROP, NULL);
 +trace_mhp_acpi_read_addr_lo(mem_st->selector, val);
  break;
  case 0x4: /* Hi part of phys address where DIMM is mapped */
  val = object_property_get_int(OBJECT(mdev->dimm), DIMM_ADDR_PROP, NULL) >> 32;
 +trace_mhp_acpi_read_addr_hi(mem_st->selector, val);
  break;
  case 0x8: /* Lo part of DIMM size */
  val = object_property_get_int(OBJECT(mdev->dimm), DIMM_SIZE_PROP, NULL);
 +trace_mhp_acpi_read_size_lo(mem_st->selector, val);
  break;
  case 0xc: /* Hi part of DIMM size */
  val = object_property_get_int(OBJECT(mdev->dimm), DIMM_SIZE_PROP, NULL) >> 32;
 +trace_mhp_acpi_read_size_hi(mem_st->selector, val);
  break;
  case 0x10: /* node proximity for _PXM method */
  val = object_property_get_int(OBJECT(mdev->dimm), DIMM_NODE_PROP, NULL);
 +trace_mhp_acpi_read_pxm(mem_st->selector, val);
  break;
  case 0x14: /* pack and return is_* fields */
  val |= mdev->is_enabled   ? 1 : 0;
  val |= mdev->is_inserting ? 2 : 0;
 +trace_mhp_acpi_read_flags(mem_st->selector, val);
  break;
  default:
  val = ~0;
 @@ -56,6 +64,7 @@ static void acpi_memory_hotplug_write(void *opaque,
 hwaddr addr, uint64_t data,

  if (addr) {
  if (mem_st->selector >= mem_st->dev_count) {
 +trace_mhp_acpi_invalid_slot_selected(mem_st->selector);
  return;
  }
  }
 @@ -63,6 +72,7 @@ static void acpi_memory_hotplug_write(void *opaque,
 hwaddr addr, uint64_t data,
  switch (addr) {
  case 0x0: /* DIMM slot selector */
  mem_st->selector = data;
 +trace_mhp_acpi_write_slot(mem_st->selector);
  break;
  case 0x4: /* _OST event  */
  mdev = mem_st->devs[mem_st->selector];
 @@ -72,10 +82,12 @@ static void acpi_memory_hotplug_write(void *opaque,
 hwaddr addr, uint64_t data,
  /* TODO: handle device remove OST event */
  }
  mdev->ost_event = data;
 +trace_mhp_acpi_write_ost_ev(mem_st->selector, mdev->ost_event);
  break;
  case 0x8: /* _OST status */
  mdev = mem_st->devs[mem_st->selector];
  mdev->ost_status = data;
 +trace_mhp_acpi_write_ost_status(mem_st->selector, mdev->ost_status);
  /* TODO: report async error */
  /* TODO: implement memory removal on guest signal */
  break;
 @@ -83,6 +95,7 @@ static void acpi_memory_hotplug_write(void *opaque,
 hwaddr addr, uint64_t data,
  mdev = mem_st->devs[mem_st->selector];
  if (data & 2) { /* clear insert event */
  mdev->is_inserting  = false;
 +trace_mhp_acpi_clear_insert_evt(mem_st->selector);
  }
  break;
  }
 diff --git a/trace-events b/trace-events
 index b6d289d..4f4c58f 100644
 --- a/trace-events
 +++ b/trace-events
 @@ -1252,3 +1252,16 @@ xen_pv_mmio_write(uint64_t addr) WARNING: write to
 Xen PV Device MMIO space (ad
  # hw/pci/pci_host.c
  pci_cfg_read(const char *dev, unsigned devid, unsigned fnid, unsigned offs, unsigned val) "%s %02u:%u @0x%x -> 0x%x"
  pci_cfg_write(const char *dev, unsigned devid, unsigned fnid, unsigned offs, unsigned val) "%s %02u:%u @0x%x -> 0x%x"
 +
 +#hw/acpi/memory_hotplug.c
 +mhp_acpi_invalid_slot_selected(uint32_t slot) "0x%"PRIx32
 +mhp_acpi_read_addr_lo(uint32_t slot, uint32_t addr

Re: [Qemu-devel] status of cpu hotplug work for x86_64?

2014-05-28 Thread Anshul Makkar
Hi,

But where are the patches? The links shared are just sites where
discussion about the patches is going on. Are there any public
repositories where I can find the proper patches?

Thanks
Anshul Makkar.

On Fri, May 2, 2014 at 3:35 PM, Igor Mammedov imamm...@redhat.com wrote:
 On Fri, 2 May 2014 14:10:35 +0200
 Vasilis Liaskovitis vasilis.liaskovi...@profitbricks.com wrote:

 Hi,

 On Mon, Apr 28, 2014 at 11:58:38AM -0600, Chris Friesen wrote:
  Hi,
 
  I'm trying to figure out what the current status is for cpu hotplug
  and hot-remove on x86_64.
 
  As far as I can tell, it seems like currently there is a QMP
  cpu-add command but no matching remove...is that correct?

 correct. cpu-add is the way to hot-add CPUs.
 There is no support for cpu hot-remove at this point.

 The latest patchset for cpu hot-remove that I know of is:
 http://lists.gnu.org/archive/html/qemu-devel/2013-12/msg04266.html

 If I understand correctly the biggest hurdle is supporting vcpu destruction
 during the VM lifetime on the kvm host side. The corresponding kvm patches 
 have
 not been accepted:
 http://comments.gmane.org/gmane.comp.emulators.kvm.devel/114347

 So I have the same question as you: Is there any plan to support cpu 
 hot-remove?
 Besides above mentioned series and KVM side of the problem there are
 several not yet addressed QEMU issues:
   - remove dependency on cpu_index for X86 cpu (arbitrary CPU add/rm)
   - need an interface to specify apic id when adding CPU
 -  clean way would be -device x86-cpu-foo,{apic-id=xxx || 
 node=foo,socket=x,core=y,thread=z}
 -  x86 cpu subclasses by Eduardo were merged in the last release so
on the road to -device x86-cpu-foo only conversion to of CPU 
 features to properties
is left (I need to rebase and respin it)
   - fix vmstate register/unregister (migration and arbitrary CPU add/rm)

 As for plans QEMU side of work could be done without KVM support first and 
 then
 once KVM would be able to unplug VCPUs it could be added to QEMU without much 
 issues.



 thanks,

 - Vasilis





Re: [Qemu-devel] Qemu-devel Digest, Vol 133, Issue 401

2014-05-02 Thread Anshul Makkar
On Mon, Apr 14, 2014 at 6:49 PM,  qemu-devel-requ...@nongnu.org wrote:
 Re: [PATCH 33/35] pc: ACPI BIOS: reserve SRAT entry for
   hotplug mem hole (Igor Mammedov)

Please can you share the patchset for memory hot unplugging. Is this
the correct commit I am looking at
https://github.com/taohu/qemu/commit/55c9540919e189b0ad2e6a759af742080f8f5dc4
?

Thanks
Anshul Makkar



[Qemu-devel] Patchset for memory hot unplugging.

2014-05-02 Thread Anshul Makkar
Please can you share the patchset for memory hot unplugging.

Is this the correct commit I am looking at
https://github.com/taohu/qemu/commit/55c9540919e189b0ad2e6a759af742080f8f5dc4

Thanks
Anshul Makkar



Re: [Qemu-devel] Help debugging audio problem

2014-07-04 Thread Anshul Makkar
The glue macro is heavily used in the audio code. I completely redesigned it
for VirtualBox and removed all the hard-to-understand glue code :) .

Not sure if this glue magic is used so heavily anywhere else.

Moreover, the audio code uses one big monolithic file, audio.c.
So bringing modularity was another aim of my redesign.

Anshul Makkar

On Thu, Jul 3, 2014 at 9:10 AM, Markus Armbruster arm...@redhat.com wrote:
 Programmingkid programmingk...@gmail.com writes:

 What does this code mean?

   if (!glue (s->nb_hw_voices_, TYPE)) {
 return NULL;
 }

 The code is found in audio_template.h at line 244.

 I tried using GDB to figure out what it was doing, but had little luck.

 The AC97 sound card does not work, and I'm trying to change that.

 Any help would be great. Thanks.

 Definition of macro glue is in osdep.h.  It glues together its
 arguments.  Consult your textbook on C to understand how that kind of
 arcane preprocessor magic works.

 The audio subsystem is exceedingly fond of magic.

 In actual use, macro TYPE has either value in or out, thus the result is
 either s->nb_hw_voices_in or s->nb_hw_voices_out.
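
 For the curious, the definition is the classic two-level token-pasting
 trick (quoting from memory, so treat this as a sketch):

 #define xglue(x, y) x ## y
 #define glue(x, y) xglue(x, y)

 so glue(s->nb_hw_voices_, TYPE) expands to s->nb_hw_voices_in when
 audio_template.h is included with TYPE defined as in.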




[Qemu-devel] virtio + virtq + iommu

2014-07-08 Thread Anshul Makkar
Hi,

Was tracing the buffer handling code flow after the kick has been
initiated from the guest in case of virtio.

Found this call chain:
cpu_physical_memory_map->address_space_map->address_space_translate,
which calls address_space_translate_internal and iommu_ops->translate (to get
the translation from the IOTLB) to get the corresponding host virtual
address where I think the packet buffer is mapped (from guest
physical to host virtual).

So, should I conclude that there is no hardware IOMMU involved in this
translation? QEMU maintains its own TLB and translation mapping which
is used. Or does iommu_ops->translate lead to a hardware MMU call?

We are developing a high speed packet transfer mechanism using
infiniband cards. So, trying to analyse every possible bottleneck.

Confused here, please suggest.

Anshul Makkar
www.justkernel.com



[Qemu-devel] live migration + licensing issue.

2014-07-08 Thread Anshul Makkar
Hi,

In our data center we are using qemu 1.0 / 1.2 and we need to do a live
migration to qemu 2.0.

One of the main hindrances that we are facing is that QEMU 1.0 uses the old
PC model, so if a user running Windows on a VM on QEMU 1.0 does
a live migration to QEMU 2.0, they will see a licensing issue, as after
migration Windows will see new hardware beneath it.

Any suggestion as to how to overcome this problem.

Thanks
Anshul Makkar
www.justkernel.com



Re: [Qemu-devel] virtio + virtq + iommu

2014-07-09 Thread Anshul Makkar
Hi,

Any suggestions.

Anshul Makkar

On Tue, Jul 8, 2014 at 5:21 PM, Anshul Makkar
anshul.mak...@profitbricks.com wrote:
 Hi,

 Was tracing the buffer handling code flow after the kick has been
 initiated from the guest in case of virtio.

 Found this call chain:
 cpu_physical_memory_map->address_space_map->address_space_translate,
 which calls address_space_translate_internal and iommu_ops->translate (to get
 the translation from the IOTLB) to get the corresponding host virtual
 address where I think the packet buffer is mapped (from guest
 physical to host virtual).

 So, should I conclude that there is no hardware IOMMU involved in this
 translation? QEMU maintains its own TLB and translation mapping which
 is used. Or does iommu_ops->translate lead to a hardware MMU call?

 We are developing a high speed packet transfer mechanism using
 infiniband cards. So, trying to analyse every possible bottleneck.

 Confused here, please suggest.

 Anshul Makkar
 www.justkernel.com



Re: [Qemu-devel] live migration + licensing issue.

2014-07-09 Thread Anshul Makkar
Hi,

Yeah, I am aware of this option. But the point I am concerned about is
that if a Windows VM is running in QEMU 1.0 with pc model 1.0 and then I
upgrade QEMU to 2.0 and specify the machine as pc-1.2, then Windows
will see this as a change in hardware and complain about the license.

Sorry if my understanding is wrong here or I am missing something.

Anshul Makkar

On Tue, Jul 8, 2014 at 6:25 PM, Andreas Färber afaer...@suse.de wrote:
 Hi,

 Am 08.07.2014 17:24, schrieb Anshul Makkar:
 In our data center we are using qemu 1.0/ 1.2 and we need to do a live
 migration to qemu 2.0.

 One of the main hindrance that we are facing is that QEMU 1.0 uses old
 PC model so if a user using Windows on the VM running on QEMU 1.0 does
 a live migrate to QEMU 2.0 , he will see a licensing issue as after
 migration Windows will see a new hardware beneath it.

 Any suggestion as to how to overcome this problem.

 Please check the documentation. There's the -machine option with
 parameters such as pc-1.0 and pc-1.2 for that exact purpose. libvirt
 should supply the corresponding option automatically.
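
 For example, on the QEMU 2.0 destination you would keep the source's
 machine type (the other values here are just illustrative):

 qemu-system-x86_64 -machine pc-1.0 -m 2048 ... -incoming tcp:0:4444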

 More difficult is if you're trying to migrate from qemu-kvm to qemu -
 code changes to your copy of 2.0 will be necessary then.

 Regards,
 Andreas

 --
 SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
 GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg



Re: [Qemu-devel] live migration + licensing issue.

2014-07-09 Thread Anshul Makkar
Thanks. I got the point.

Anshul Makkar

On Wed, Jul 9, 2014 at 9:36 AM, Markus Armbruster arm...@redhat.com wrote:
 Anshul Makkar anshul.mak...@profitbricks.com writes:

 Hi,

 Yeah, I am aware of this option. But the point where I am concerned is
 that if Windows VM is running in QEMU 1.0 with pc-model 1.0 and then I
 upgrade the QEMU to 2.0 and I specify machine as pc-1.2, then Windows
 will see this as change in hardware and complain about the license.

 Works as designed.

 Sorry, if my understanding is wrong here or i am missing something.

 Changing the machine type is the virtual equivalent of replacing the
 motherboard.



Re: [Qemu-devel] live migration + licensing issue.

2014-07-11 Thread Anshul Makkar
Yeah, but I think if we want to take advantage of live vertical
scaling (memory hotplug, memory hotunplug, cpu hotplug) then we need
to upgrade to pc model 1.2.

pc model 1.0 will be incompatible with qemu 2.0 w.r.t. the LVS feature, as
the bus architecture and the way dimms are handled have changed
between pc model 1.0 in qemu 1.0 and pc model 2.0 (pc-i440fx-2.1) in qemu
2.0.

Yeah, true, if we have to avoid the licensing issue then we have to
use the same PC model.

Anshul Makkar

On Fri, Jul 11, 2014 at 1:19 AM, Eric Blake ebl...@redhat.com wrote:
 On 07/08/2014 03:10 PM, Anshul Makkar wrote:
 Hi,

 Yeah, I am aware of this option. But the point where I am concerned is
 that if Windows VM is running in QEMU 1.0 with pc-model 1.0 and then I
 upgrade the QEMU to 2.0 and I specify machine as pc-1.2, then Windows
 will see this as change in hardware and complain about the license.

 That's by design.  If you were running under qemu 1.0 with pc-model 1.0,
 then when you upgrade to qemu 2.0, you must STILL use pc-model 1.0 (and
 not pc-1.2) if you want your guest to see the same hardware as what the
 older qemu was providing, and therefore avoid a relicensing issue.

 --
 Eric Blake   eblake redhat com+1-919-301-3266
 Libvirt virtualization library http://libvirt.org




Re: [Qemu-devel] live migration + licensing issue.

2014-07-11 Thread Anshul Makkar
Hi Andreas,

the point is that the machine version on the destination side needs
to match the source side. I hope this is just to avoid the licensing
issue. Otherwise, in all other circumstances, we can specify different pc
models while migrating from source to destination.

Anshul Makkar

On Wed, Jul 9, 2014 at 6:25 PM, Andreas Färber afaer...@suse.de wrote:
 Am 09.07.2014 13:09, schrieb Anshul Makkar:
 Thanks. I got the point.

 And for the record, the point is that the machine version on the
 destination side needs to match the source side. So, if the default or
 pc alias is used in 1.0, which resolves to pc-1.0, then it needs to be
 pc-1.0, not pc-1.2. If an explicit machine name such as pc-0.15 was used
 then that exact machine must be used on the destination as well.

 Andreas

 On Wed, Jul 9, 2014 at 9:36 AM, Markus Armbruster arm...@redhat.com wrote:
 Anshul Makkar anshul.mak...@profitbricks.com writes:

 Hi,

 Yeah, I am aware of this option. But the point where I am concerned is
 that if Windows VM is running in QEMU 1.0 with pc-model 1.0 and then I
 upgrade the QEMU to 2.0 and I specify machine as pc-1.2, then Windows
 will see this as change in hardware and complain about the license.

 Works as designed.

 Sorry, if my understanding is wrong here or i am missing something.

 Changing the machine type is the virtual equivalent of replacing the
 motherboard.



 --
 SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
 GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg



Re: [Qemu-devel] live migration + licensing issue.

2014-07-11 Thread Anshul Makkar
On Fri, Jul 11, 2014 at 1:12 PM, Markus Armbruster arm...@redhat.com wrote:
 ly, leaving your machine running on the source.


Hmm. Got the point.

But as I mentioned above, if we have to use live vertical scaling on
qemu 2.0, then pc model 1.0 won't help (as the dimm handling and bus
handling have changed in pc model 2.0, and qemu 2.0 uses this changed
model). QEMU 2.0 will only go with pc model 2.0 w.r.t. LVS. Am I
correct here?

Is the only solution to shut down the VM, upgrade qemu and then
start with the new QEMU and the new PC model?

Anshul Makkar



Re: [Qemu-devel] [RFC PATCH 7/7] cpus: reclaim allocated vCPU objects

2014-07-17 Thread Anshul Makkar
Are we not going to introduce new command cpu_del for deleting the cpu ?

I couldn't find any patch for addition of cpu_del command. Is this
intentional and we intend to use device_del (and similarly device_add)
for cpu hot(un)plug or just skipped to be added later. I have the
patch for the same which I can release, if the intent is to add this
command.

Thanks
Anshul Makkar

On Fri, Jul 11, 2014 at 11:59 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:
 After ACPI gets a signal to eject a vCPU, the vCPU must be
 removed from the CPU list before it is really removed, and then
 all related vCPU objects are released.
 But we do not close the KVM vcpu fd; we just record it in a list
 in order to reuse it.

 Signed-off-by: Chen Fan chen.fan.f...@cn.fujitsu.com
 Signed-off-by: Gu Zheng guz.f...@cn.fujitsu.com
 ---
  cpus.c   |   37 
  include/sysemu/kvm.h |1 +
  kvm-all.c|   57 
 +-
  3 files changed, 94 insertions(+), 1 deletions(-)

 diff --git a/cpus.c b/cpus.c
 index 4dfb889..9a73407 100644
 --- a/cpus.c
 +++ b/cpus.c
 @@ -786,6 +786,24 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void 
 *data), void *data)
  qemu_cpu_kick(cpu);
  }

 +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +
 +if (kvm_destroy_vcpu(cpu) < 0) {
 +fprintf(stderr, "kvm_destroy_vcpu failed.\n");
 +exit(1);
 +}
 +
 +object_unparent(OBJECT(cpu));
 +}
 +
 +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
 +{
 +CPU_REMOVE(cpu);
 +object_unparent(OBJECT(cpu));
 +}
 +
  static void flush_queued_work(CPUState *cpu)
  {
  struct qemu_work_item *wi;
 @@ -877,6 +895,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
  }
  }
  qemu_kvm_wait_io_event(cpu);
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +qemu_kvm_destroy_vcpu(cpu);
 +qemu_mutex_unlock(qemu_global_mutex);
 +return NULL;
 +}
  }

  return NULL;
 @@ -929,6 +952,7 @@ static void tcg_exec_all(void);
  static void *qemu_tcg_cpu_thread_fn(void *arg)
  {
  CPUState *cpu = arg;
 +CPUState *remove_cpu = NULL;

  qemu_tcg_init_cpu_signals();
 qemu_thread_get_self(cpu->thread);
 @@ -961,6 +985,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
  }
  }
  qemu_tcg_wait_io_event();
 +CPU_FOREACH(cpu) {
 +if (cpu->exit && !cpu_can_run(cpu)) {
 +remove_cpu = cpu;
 +break;
 +}
 +}
 +if (remove_cpu) {
 +qemu_tcg_destroy_vcpu(remove_cpu);
 +remove_cpu = NULL;
 +}
  }

  return NULL;
 @@ -1316,6 +1350,9 @@ static void tcg_exec_all(void)
  break;
  }
  } else if (cpu->stop || cpu->stopped) {
 +if (cpu->exit) {
 +next_cpu = CPU_NEXT(cpu);
 +}
  break;
  }
  }
 diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
 index 174ea36..88e2403 100644
 --- a/include/sysemu/kvm.h
 +++ b/include/sysemu/kvm.h
 @@ -178,6 +178,7 @@ int kvm_has_intx_set_mask(void);

  int kvm_init_vcpu(CPUState *cpu);
  int kvm_cpu_exec(CPUState *cpu);
 +int kvm_destroy_vcpu(CPUState *cpu);

  #ifdef NEED_CPU_H

 diff --git a/kvm-all.c b/kvm-all.c
 index 3ae30ee..25e2a43 100644
 --- a/kvm-all.c
 +++ b/kvm-all.c
 @@ -74,6 +74,12 @@ typedef struct KVMSlot

  typedef struct kvm_dirty_log KVMDirtyLog;

 +struct KVMParkedVcpu {
 +unsigned long vcpu_id;
 +int kvm_fd;
 +QLIST_ENTRY(KVMParkedVcpu) node;
 +};
 +
  struct KVMState
  {
  KVMSlot *slots;
 @@ -108,6 +114,7 @@ struct KVMState
  QTAILQ_HEAD(msi_hashtab, KVMMSIRoute) msi_hashtab[KVM_MSI_HASHTAB_SIZE];
  bool direct_msi;
  #endif
 +QLIST_HEAD(, KVMParkedVcpu) kvm_parked_vcpus;
  };

  KVMState *kvm_state;
 @@ -226,6 +233,53 @@ static int kvm_set_user_memory_region(KVMState *s, 
 KVMSlot *slot)
  return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, mem);
  }

 +int kvm_destroy_vcpu(CPUState *cpu)
 +{
 +    KVMState *s = kvm_state;
 +    long mmap_size;
 +    struct KVMParkedVcpu *vcpu = NULL;
 +    int ret = 0;
 +
 +    DPRINTF("kvm_destroy_vcpu\n");
 +
 +    mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
 +    if (mmap_size < 0) {
 +        ret = mmap_size;
 +        DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
 +        goto err;
 +    }
 +
 +    ret = munmap(cpu->kvm_run, mmap_size);
 +    if (ret < 0) {
 +        goto err;
 +    }
 +
 +    vcpu = g_malloc0(sizeof(*vcpu));
 +    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
 +    vcpu->kvm_fd = cpu->kvm_fd;
 +    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
 +err:
 +    return ret;
 +}
 +
 +static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
 +{
 +    struct KVMParkedVcpu *cpu;
 +
 +    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
 +        if (cpu->vcpu_id == vcpu_id
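
For context, a parked-vCPU lookup that reuses the saved fd (the reuse behaviour
discussed in this thread) could look roughly like the sketch below. This is an
illustrative sketch only, not the remainder of the quoted patch:

static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
{
    struct KVMParkedVcpu *cpu;

    /* Reuse a parked fd if a vCPU with this id was hot-removed earlier... */
    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
        if (cpu->vcpu_id == vcpu_id) {
            int kvm_fd = cpu->kvm_fd;

            QLIST_REMOVE(cpu, node);
            g_free(cpu);
            return kvm_fd;
        }
    }

    /* ...otherwise ask the kernel for a fresh one. */
    return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
}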

Re: [Qemu-devel] virtio + virtq + iommu

2014-07-18 Thread Anshul Makkar
Further adding on to it.

 iotlb = mr->iommu_ops->translate(mr, addr) in address_space_translate
to get the translation from the TLB.

What I found is that iommu_ops->translate is assigned a
function pointer only for alpha/typhoon and ppc/spapr. What about x86?
Are we using any of these architectures for emulating an
iommu for x86?

Anshul Makkar

On Tue, Jul 8, 2014 at 5:21 PM, Anshul Makkar
anshul.mak...@profitbricks.com wrote:
 Hi,

 Was tracing the buffer handling code flow after the kick has been
 initiated from the guest in case of virtio.

 Found this function
 cpu_physical_memory_map->address_space_map->address_space_translate,
 which calls address_space_translate_internal and iommu->translate (to get
 the translation from the TLB) to obtain the corresponding host virtual
 address that I think the packet buffer is mapped to (from guest
 physical to host virtual).

 So, should I conclude that there is no hardware IOMMU involved in this
 translation, and that QEMU maintains its own TLB and translation mapping
 which is used? Or does iommu->translate lead to a hardware MMU call?

 We are developing a high speed packet transfer mechanism using
 infiniband cards. So, trying to analyse every possible bottleneck.

 Confused here, please suggest.

 Anshul Makkar
 www.justkernel.com
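
For reference, the translation walk referred to above looks roughly like the
sketch below. It is a paraphrase that assumes QEMU's internal types of that era
(MemoryRegionSection, IOMMUTLBEntry) and abbreviates the section lookup into a
hypothetical lookup_section() helper. Only regions that provide iommu_ops go
through an emulated IOMMU, so without one (as on x86 here) the guest-physical
address is used directly and no hardware IOMMU is involved on this path:

/* Sketch of the loop inside address_space_translate(); not verbatim QEMU code. */
static MemoryRegion *translate_sketch(AddressSpace *as, hwaddr addr,
                                      hwaddr *xlat, hwaddr *plen, bool is_write)
{
    for (;;) {
        MemoryRegionSection *section = lookup_section(as, addr, plen);
        MemoryRegion *mr = section->mr;

        if (!mr->iommu_ops) {
            *xlat = addr;       /* plain RAM/MMIO: translation is done */
            return mr;
        }

        /* Emulated IOMMU (e.g. ppc/spapr, alpha/typhoon): ask it for the mapping. */
        IOMMUTLBEntry iotlb = mr->iommu_ops->translate(mr, addr);
        addr = (iotlb.translated_addr & ~iotlb.addr_mask) |
               (addr & iotlb.addr_mask);
        if (!(iotlb.perm & (1 << is_write))) {
            return NULL;        /* access not allowed by the IOMMU mapping */
        }
        as = iotlb.target_as;   /* continue the walk in the target address space */
    }
}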



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-08-25 Thread Anshul Makkar
Hi,

I am testing memory hotadd/remove functionality for Windows guest
(currently 2012 server). Memory hot remove is not working.

As mentioned in the mail chain, hot remove on Windows is not supported. So
I just wanted to check whether it is still unsupported, has since been
supported, or is a work in progress. If it is already supported or still in
progress, please can you share the relevant links/patches.

Sorry if I have missed any recent patches that support Windows memory hot
remove.

Thanks
Anshul Makkar

On Wed, May 7, 2014 at 11:15 AM, Stefan Priebe - Profihost AG 
s.pri...@profihost.ag wrote:

 Max number of supported DIMM devices 255 (due to ACPI object name
 limit), could be increased creating several containers and putting
 DIMMs there. (exercise for future)



[Qemu-devel] block IO latency tracker without using QMP socket.

2014-08-27 Thread Anshul Makkar
Hi,

I am writing a block IO latency tracker.

As is obvious, I am calculating the latency by tracking the interval between
the start of the IO and the end of the IO
(firing my latency tracker from the function BlockDriverAIOCB *raw_aio_submit()
in raw-posix.c when the job is submitted).

The latency data per QEMU process will be written to shared memory, and then
another app reads the data from this shared memory. That's a simple
architecture.

I can't use the "info blockstats" QMP command, as the QMP socket is used and
blocked by some other process in our subsystem.

I just want a suggestion on whether my approach is correct given the constraint
that I can't use the QMP socket, or whether any alternative is possible.

Thanks
Anshul Makkar
www.justkernel.com
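
A minimal sketch of this architecture, with hypothetical names (latency_shm,
shm_slot, "/qemu_blk_lat") and ignoring slot wrap-around and reader
synchronisation; in the real setup the start timestamp would be taken around
raw_aio_submit() and the end timestamp in the request's completion callback:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define SLOTS 4096

struct shm_slot { uint64_t submit_ns, complete_ns; };
struct latency_shm { uint64_t head; struct shm_slot slot[SLOTS]; };

static struct latency_shm *lat;

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Map (or create) the shared-memory area that the external reader also opens. */
static void latency_shm_init(const char *name)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(*lat));
    lat = mmap(NULL, sizeof(*lat), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}

/* Called where the IO is submitted; returns the slot to use at completion. */
static uint64_t latency_start(void)
{
    uint64_t idx = __sync_fetch_and_add(&lat->head, 1) % SLOTS;
    lat->slot[idx].submit_ns = now_ns();
    return idx;
}

/* Called from the completion callback for the same request. */
static void latency_end(uint64_t idx)
{
    lat->slot[idx].complete_ns = now_ns();
}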


Re: [Qemu-devel] [RFC V2 10/10] cpus: reclaim allocated vCPU objects

2014-09-11 Thread Anshul Makkar
Bharata, this is not expected. "info cpus" should report the proper number
of cpus after deletion.

Anshul Makkar

On Thu, Sep 11, 2014 at 11:35 AM, Bharata B Rao bharata@gmail.com
wrote:

 from


Re: [Qemu-devel] [RFC V2 10/10] cpus: reclaim allocated vCPU objects

2014-09-12 Thread Anshul Makkar
During plugging we can see this event: echo 1 > cpu8/online.

But during unplugging, we can't see the event echo 0 > cpu8/online.

Just as an additional check, in my code I have added the following udev rule:
echo 0 > cpu[0-9]*/online. Maybe this is of some help.

Thanks
Anshul Makkar

On Fri, Sep 12, 2014 at 11:53 AM, Gu Zheng guz.f...@cn.fujitsu.com wrote:

 Hi Bharata,
 On 09/12/2014 04:09 PM, Bharata B Rao wrote:

  On Fri, Sep 12, 2014 at 6:54 AM, Gu Zheng guz.f...@cn.fujitsu.com
 wrote:
  Has the guest OS enabled ACPI CPU hotplug? What's the guest's CPU info?
  Please try latest QEMU, and any feedback is welcome.
 
 
  Tried with latest QEMU git + your patchset and Fedora 20 guest, but
  QEMU monitor still shows the removed CPU.
 
  Guest kernel messages during hotplug:
 
  [root@localhost cpu]# echo 1 > cpu8/online
  [   72.936069] smpboot: Booting Node 0 Processor 8 APIC 0x8
  [0.003000] kvm-clock: cpu 8, msr 0:7ffc9201, secondary cpu clock
  [   72.950003] TSC synchronization [CPU#0 - CPU#8]:
  [   72.950003] Measured 199886723309 cycles TSC warp between CPUs,
  turning off TSC clock.
  [   72.950003] tsc: Marking TSC unstable due to check_tsc_sync_source
 failed
  [   72.972976] KVM setup async PF for cpu 8
  [   72.973648] kvm-stealtime: cpu 8, msr 7d30df00
  [   72.974415] Will online and init hotplugged CPU: 8
  [   72.975307] microcode: CPU8 sig=0x663, pf=0x1, revision=0x1
 
  Guest kernel messages during hotunplug:
 
  [root@localhost cpu]# [   95.482172] Unregister pv shared memory for
 cpu 8
  [   95.487169] smpboot: CPU 8 is now offline
  [   95.488667] ACPI: Device does not support D3cold
 
 
  Guest cpuinfo (showing for the last CPU only after adding and removing
 CPU 8)
 
  processor: 7
  vendor_id: GenuineIntel
  cpu family: 6
  model: 6
  model name: QEMU Virtual CPU version 2.1.50
  stepping: 3
  microcode: 0x1
  cpu MHz: 2899.998
  cache size: 4096 KB
  fpu: yes
  fpu_exception: yes
  cpuid level: 4
  wp: yes
  flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca
  cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni
  cx16 x2apic popcnt hypervisor lahf_lm
  bogomips: 5799.99
  clflush size: 64
  cache_alignment: 64
  address sizes: 40 bits physical, 48 bits virtual
  power management:

 Guest ejected CPU 8 successfully.
 I confirmed it with the same environment as yours, it works well.
 Could you please offer your QEMU config and the guest start cmd?
 It may help me to investigate the issue.

 Thanks,
 Gu

 
  [root@localhost boot]# grep -ir hot config-3.11.10-301.fc20.x86_64
  CONFIG_TICK_ONESHOT=y
  # CONFIG_MEMORY_HOTPLUG is not set
  CONFIG_HOTPLUG_CPU=y
  # CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
  # CONFIG_DEBUG_HOTPLUG_CPU0 is not set
  CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
  CONFIG_ACPI_HOTPLUG_CPU=y
  CONFIG_HOTPLUG_PCI_PCIE=y
  CONFIG_HOTPLUG_PCI=y
  CONFIG_HOTPLUG_PCI_ACPI=y
  CONFIG_HOTPLUG_PCI_ACPI_IBM=m
  # CONFIG_HOTPLUG_PCI_CPCI is not set
  CONFIG_HOTPLUG_PCI_SHPC=m
  .
 





Re: [Qemu-devel] [RFC V2 10/10] cpus: reclaim allocated vCPU objects

2014-09-12 Thread Anshul Makkar
I have tested with the 3.11 kernel; the kernel should be fine. But it wouldn't
harm to test with the latest kernel, maybe it can provide some extra hints.

Anshul Makkar

On Fri, Sep 12, 2014 at 3:52 PM, Bharata B Rao bharata@gmail.com
wrote:

 On Fri, Sep 12, 2014 at 4:23 PM, Anshul Makkar
 anshul.mak...@profitbricks.com wrote:
  During plugging we can see this event: echo 1 > cpu8/online.
 
  But during unplugging, we can't see the event echo 0 > cpu8/online.

 That's because I didn't do that explicitly; I was always trying to
 remove an online cpu from the monitor w/o explicitly offlining it from
 inside the guest. Either way, I still see the removed CPU being listed
 in the QEMU monitor.

 I don't ever hit any of the below code paths during CPU removal:

 cpus.c: qemu_kvm_destroy_vcpu()
 cpus.c: x86_cpu_finalizefn()

 I see CPU_REMOVE() being called from above two routines.

 And neither does hw/acpi/cpu_hotplug.c:cpu_status_write() get called
 here. Is the message "ACPI: Device does not support D3cold" that the guest
 kernel throws during hot removal causing this behaviour?
 The guest kernel is 3.11.10, should I be on the latest kernel?

 Regards,
 Bharata.



Re: [Qemu-devel] [RFC V2 10/10] cpus: reclaim allocated vCPU objects

2014-09-15 Thread Anshul Makkar
That explains the cause.

Please verify you have the iasl compiler installed and are not using the
old .hex (compiled .dsl) files. (We faced this issue in our build setup using
sbuild.)

I hope you have verified that your .dsl file has the changes mentioned
in the patch.

I have also verified with Fedora 20 (unmodified kernel) that cpu plug/unplug
is working fine.

Thanks
Anshul Makkar

On Mon, Sep 15, 2014 at 12:09 PM, Bharata B Rao bharata@gmail.com
wrote:

 _EJ0 doesn't exist in my DSDT.


Re: [Qemu-devel] [RFC V2 10/10] cpus: reclaim allocated vCPU objects

2014-09-15 Thread Anshul Makkar
Great !!

Anshul Makkar

On Mon, Sep 15, 2014 at 3:53 PM, Bharata B Rao bharata@gmail.com
wrote:

 On Mon, Sep 15, 2014 at 4:03 PM, Anshul Makkar
 anshul.mak...@profitbricks.com wrote:
  That explains the cause.
 
  Please verify you have the iasl compiler installed and are not using the
  old .hex (compiled .dsl) files. (We faced this issue in our build setup
  using sbuild.)
 
  I hope you have verified that your .dsl file has the changes mentioned
  in the patch.

 Oh, it was not obvious to me that I have to install iasl, use
 --iasl=XXX during configure stage to get the new _EJ0 method to be
 included in the ACPI DSDT!

 So finally I now see the CPU getting removed from QEMU. Thanks for all
 the inputs.

 Regards,
 Bharata.



Re: [Qemu-devel] vhost-user:why region[0] always mmap failed ?

2014-10-15 Thread Anshul Makkar
Hi,

Please can you share in what scenario this mapping fails. I am not seeing any 
such issue.

Thanks
Anshul Makkar

On Wed, Sep 17, 2014 at 10:33:23AM +0800, Linhaifeng wrote:
 Hi,
 
 There are two memory regions when receiving the VHOST_SET_MEM_TABLE message:
 region[0]
 gpa = 0x0
 size = 655360
 ua = 0x2ac0
 offset = 0
 region[1]
 gpa = 0xC
 size = 2146697216
 ua = 0x2acc
 offset = 786432
 
 region[0] always fails to mmap. The user code is:
 
 for (idx = 0; idx < msg->msg.memory.nregions; idx++) {
     if (msg->fds[idx] > 0) {
         size_t size;
         uint64_t *guest_mem;
         Region *region = &vhost_server->memory.regions[i];
 
         region->guest_phys_addr = msg->msg.memory.regions[idx].guest_phys_addr;
         region->memory_size = msg->msg.memory.regions[idx].memory_size;
         region->userspace_addr = msg->msg.memory.regions[idx].userspace_addr;
         region->mmap_offset = msg->msg.memory.regions[idx].mmap_offset;
 
         assert(idx < msg->fd_num);
         assert(msg->fds[idx] > 0);
 
         size = region->memory_size + region->mmap_offset;
         guest_mem = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                          msg->fds[idx], 0);
         if (MAP_FAILED == guest_mem) {
             continue;
         }
         i++;
         guest_mem += (region->mmap_offset / sizeof(*guest_mem));
         region->mmap_addr = (uint64_t)guest_mem;
         vhost_server->memory.nregions++;
     }
 }
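
A related detail for this code (a sketch using the same Region fields as the
snippet above, with gpa_to_va as a hypothetical helper name): once a region is
mmap'ed, a guest-physical address is translated to a backend virtual address by
offsetting into the region that covers it, assuming mmap_addr already includes
the mmap_offset adjustment as done above.

static void *gpa_to_va(VhostServer *vhost_server, uint64_t gpa)
{
    unsigned int i;

    for (i = 0; i < vhost_server->memory.nregions; i++) {
        Region *r = &vhost_server->memory.regions[i];

        if (gpa >= r->guest_phys_addr &&
            gpa < r->guest_phys_addr + r->memory_size) {
            return (void *)(uintptr_t)(r->mmap_addr +
                                       (gpa - r->guest_phys_addr));
        }
    }
    return NULL;   /* address not covered by any region */
}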
 
 



Re: [Qemu-devel] Sending packets up to VM using vhost-net User.

2014-11-18 Thread Anshul Makkar
Sorry, forgot to mention I am using "git clone -b vhost-user-v5
https://github.com/virtualopensystems/qemu.git" for the vhost-user backend
implementation,

and "git clone https://github.com/virtualopensystems/vapp.git" for the
reference implementation.

Anshul Makkar

On Tue, Nov 18, 2014 at 5:29 PM, Anshul Makkar 
anshul.mak...@profitbricks.com wrote:

 Hi,

 I am developing an application that is using vhost-user backend for packet
 transfer.

 The architecture:

 1) VM1 is using Vhost-user and executing on server1.

 .qemu-system-x86_64 -m 1024 -mem-path /dev/hugepages,prealloc=on,share=on
 -drive
 file=/home/amakkar/test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
 -device
 virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
 -vga std -vnc 0.0.0.0:3 -netdev
 type=vhost-user,id=net0,file=/home/amakkar/qemu.sock -device
 virtio-net-pci,netdev=net0

 2) App1 on server1: executing in user mode, it connects with the vhost-user
 backend over qemu.sock. As expected, initialization is done and guest
 addresses, including the addresses of the descriptor ring, available ring and
 used ring, are mapped to my userspace app and I can directly access them.

 I launch PACKETH on VM1 and transfer some packets using eth0 on VM1
 (packet transfer uses virtio-net backend. ifconfig eth0 shows correct TX
 stats)

 In App1 I directly access the avail_ring buffer and consume the packet and
 then I do RDMA transfer to server 2 .

 3) VM2 and App2 executing on server2 and again using VHost-User.

 App2: Vring initializations are successfully done and vring buffers are
 mapped. I get the buffer from App1 and now *I want to transfer this
 buffer (Raw packet) to VM2.*

 To transfer the buffer from App2 to VM2, I directly access the descriptor
 ring, place the buffer in it and update the available index and then issue
 the kick.

 code snippet for it:

 dest_buf = (void *)handler->map_handler(handler->context, desc[a_idx].addr);
 memcpy(dest_buf + hdr_len, buf, size);
 avail->ring[avail->idx % num] = a_idx;
 avail->idx++;
 fprintf(stdout, "put_vring, synching memory\n");
 sync_shm(dest_buf, size);
 sync_shm((void *)(avail), sizeof(struct vring_avail));

 kick(&vhost_user->vring_table, rx_idx);

 But the buffer never reaches to VM2. (I do ifconfig eth0 in VM2 and RX
 stats are 0)

 Please can you share whether my approach to transferring the packet
 from App2 to the VM is correct. Can I directly place the buffer in the
 descriptor ring and issue a kick to notify virtio-net that a packet is
 available, or do you see some implementation problem?

 Thanks
 Anshul Makkar
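
For comparison, here is a minimal sketch of the device-side RX path as defined
by the virtio ring layout, using the struct vring definitions from
linux/virtio_ring.h; rx_vring, map_guest_addr() and call_fd are placeholder
names, not the vapp API. The backend takes a buffer the guest posted on the
avail ring, fills it, returns it on the used ring, and then signals the guest
through the call eventfd (the kick eventfd is the guest-to-backend direction):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <linux/virtio_ring.h>

struct rx_vring {
    struct vring vr;        /* desc/avail/used pointers, already mapped */
    uint16_t last_avail;    /* next avail entry not yet consumed */
    int call_fd;            /* eventfd used to notify the guest */
};

/* Deliver one packet into the guest's RX queue (single-descriptor case). */
static int rx_deliver(struct rx_vring *rx, void *(*map_guest_addr)(uint64_t),
                      const void *pkt, size_t len, size_t hdr_len)
{
    struct vring *vr = &rx->vr;
    uint64_t one = 1;

    if (rx->last_avail == vr->avail->idx) {
        return -1;                              /* guest posted no RX buffers */
    }

    /* Take the next buffer the guest made available. */
    uint16_t head = vr->avail->ring[rx->last_avail % vr->num];
    struct vring_desc *d = &vr->desc[head];
    uint8_t *dst = map_guest_addr(d->addr);     /* GPA -> our virtual address */

    memset(dst, 0, hdr_len);                    /* zeroed virtio-net header */
    memcpy(dst + hdr_len, pkt, len);

    /* Return the filled buffer on the used ring... */
    struct vring_used_elem *u = &vr->used->ring[vr->used->idx % vr->num];
    u->id = head;
    u->len = hdr_len + len;
    __sync_synchronize();                       /* publish before the index bump */
    vr->used->idx++;
    rx->last_avail++;

    /* ...and notify the guest via the call eventfd. */
    return write(rx->call_fd, &one, sizeof(one)) == sizeof(one) ? 0 : -1;
}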




[Qemu-devel] Sending packets up to VM using vhost-net User.

2014-11-18 Thread Anshul Makkar
Hi,

I am developing an application that is using vhost-user backend for packet
transfer.

The architecture:

1) VM1 is using Vhost-user and executing on server1.

.qemu-system-x86_64 -m 1024 -mem-path /dev/hugepages,prealloc=on,share=on
-drive
file=/home/amakkar/test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
-device
virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-vga std -vnc 0.0.0.0:3 -netdev
type=vhost-user,id=net0,file=/home/amakkar/qemu.sock -device
virtio-net-pci,netdev=net0

2) App1 on server1: executing in user mode, it connects with the vhost-user
backend over qemu.sock. As expected, initialization is done and guest addresses,
including the addresses of the descriptor ring, available ring and used ring,
are mapped to my userspace app and I can directly access them.

I launch PACKETH on VM1 and transfer some packets using eth0 on VM1 (packet
transfer uses virtio-net backend. ifconfig eth0 shows correct TX stats)

In App1 I directly access the avail_ring buffer and consume the packet and
then I do RDMA transfer to server 2 .

3) VM2 and App2 executing on server2 and again using VHost-User.

App2: Vring initializations are successfully done and vring buffers are
mapped. I get the buffer from App1 and now *I want to transfer this buffer
(Raw packet) to VM2.*

To transfer the buffer from App2 to VM2, I directly access the descriptor
ring, place the buffer in it and update the available index and then issue
the kick.

code snippet for it:

dest_buf = (void *)handler->map_handler(handler->context, desc[a_idx].addr);
memcpy(dest_buf + hdr_len, buf, size);
avail->ring[avail->idx % num] = a_idx;
avail->idx++;
fprintf(stdout, "put_vring, synching memory\n");
sync_shm(dest_buf, size);
sync_shm((void *)(avail), sizeof(struct vring_avail));

kick(&vhost_user->vring_table, rx_idx);

But the buffer never reaches to VM2. (I do ifconfig eth0 in VM2 and RX
stats are 0)

Please can you share whether my approach to transferring the packet
from App2 to the VM is correct. Can I directly place the buffer in the
descriptor ring and issue a kick to notify virtio-net that a packet is
available, or do you see some implementation problem?

Thanks
Anshul Makkar


Re: [Qemu-devel] Sending packets up to VM using vhost-net User.

2014-11-19 Thread Anshul Makkar
Any suggestions here..

Thanks
Anshul Makkar

On Tue, Nov 18, 2014 at 5:34 PM, Anshul Makkar 
anshul.mak...@profitbricks.com wrote:

 Sorry, forgot to mention I am using "git clone -b vhost-user-v5
 https://github.com/virtualopensystems/qemu.git" for the vhost-user backend
 implementation,
 
 and "git clone https://github.com/virtualopensystems/vapp.git" for the
 reference implementation.

 Anshul Makkar


 On Tue, Nov 18, 2014 at 5:29 PM, Anshul Makkar 
 anshul.mak...@profitbricks.com wrote:

 Hi,

 I am developing an application that is using vhost-user backend for
 packet transfer.

 The architecture:

 1) VM1 is using Vhost-user and executing on server1.

 .qemu-system-x86_64 -m 1024 -mem-path
 /dev/hugepages,prealloc=on,share=on -drive
 file=/home/amakkar/test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
 -device
 virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
 -vga std -vnc 0.0.0.0:3 -netdev
 type=vhost-user,id=net0,file=/home/amakkar/qemu.sock -device
 virtio-net-pci,netdev=net0

 2) App1 on server1: executing in user mode, it connects with the vhost-user
 backend over qemu.sock. As expected, initialization is done and guest
 addresses, including the addresses of the descriptor ring, available ring and
 used ring, are mapped to my userspace app and I can directly access them.

 I launch PACKETH on VM1 and transfer some packets using eth0 on VM1
 (packet transfer uses virtio-net backend. ifconfig eth0 shows correct TX
 stats)

 In App1 I directly access the avail_ring buffer and consume the packet
 and then I do RDMA transfer to server 2 .

 3) VM2 and App2 executing on server2 and again using VHost-User.

 App2: Vring initializations are successfully done and vring buffers are
 mapped. I get the buffer from App1 and now *I want to transfer this
 buffer (Raw packet) to VM2.*

 To transfer the buffer from App2 to VM2, I directly access the descriptor
 ring, place the buffer in it and update the available index and then issue
 the kick.

 code snippet for it:

 dest_buf = (void *)handler->map_handler(handler->context, desc[a_idx].addr);
 memcpy(dest_buf + hdr_len, buf, size);
 avail->ring[avail->idx % num] = a_idx;
 avail->idx++;
 fprintf(stdout, "put_vring, synching memory\n");
 sync_shm(dest_buf, size);
 sync_shm((void *)(avail), sizeof(struct vring_avail));

 kick(&vhost_user->vring_table, rx_idx);

 But the buffer never reaches to VM2. (I do ifconfig eth0 in VM2 and RX
 stats are 0)

 Please can you share whether my approach to transferring the packet
 from App2 to the VM is correct. Can I directly place the buffer in the
 descriptor ring and issue a kick to notify virtio-net that a packet is
 available, or do you see some implementation problem?

 Thanks
 Anshul Makkar





[Qemu-devel] directly inject packet in vrings and not use NetClient APIs

2014-11-19 Thread Anshul Makkar
Hi,

The vhost-net backend "tap" implements read and write polls to listen for
packets in guest vrings (implemented through ioeventfds) and gives direct
access to the guest vrings.
While transmitting a packet up to the VM, the tap backend uses
qemu_send_packet_async()/NetClient APIs to transmit the packets to the
virtio-net driver in the guest, which then delivers the packet to the app
running in the VM.

The vhost-user backend behaves differently. It gives a 3rd-party user-mode app
direct access to the guest vrings. The app can directly receive packets after
the guest has posted them in the vrings (through kicks). Vhost-user has no read
and write poll and no way of transferring the packet up to the VM.

I have implemented a usermode app that is using the vhost-user backend and gets
direct access to the guest vrings. I am able to receive packets, but when I
post directly to the vring and issue a kick, the packets fail to reach the guest
(the ifconfig eth0 RX counter is unchanged; tcpdump also doesn't detect any
packets). I don't want to use any NetClient APIs and want to directly
inject packets into the guest vring.

Please can you share whether my approach and understanding are correct.

Thanks
Anshul Makkar
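
For background on how a backend's notification reaches the guest without any
userspace polling: the VMM typically attaches the call eventfd to a guest
interrupt line with KVM's irqfd mechanism. A standalone sketch of that kernel
interface follows, with vm_fd, call_fd and gsi assumed to come from the
surrounding setup (QEMU uses its own wrappers rather than this code):

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Attach an eventfd to a guest interrupt line (GSI). After this, a write()
 * on call_fd injects the interrupt in the kernel; no userspace thread needs
 * to poll the fd. Returns 0 on success, negative on error.
 */
static int attach_irqfd(int vm_fd, int call_fd, unsigned int gsi)
{
    struct kvm_irqfd irqfd;

    memset(&irqfd, 0, sizeof(irqfd));
    irqfd.fd  = call_fd;   /* signalled by the vhost/vhost-user backend */
    irqfd.gsi = gsi;       /* guest interrupt route, e.g. an MSI-X vector */

    return ioctl(vm_fd, KVM_IRQFD, &irqfd);
}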


Re: [Qemu-devel] directly inject packet in vrings and not use NetClient APIs

2014-11-19 Thread Anshul Makkar
Thanks Luke..

Ok, so theoretically it should work.

Those are useful suggestions. Let me debug the virtio-net driver for the
possible cause.

Thanks
Anshul Makkar

On Wed, Nov 19, 2014 at 6:39 PM, Luke Gorrie l...@snabb.co wrote:

 Hi Anshul,

 On Wednesday, November 19, 2014, Anshul Makkar 
 anshul.mak...@profitbricks.com wrote:


 I have implemented a usermode app that is using vhost-user backend and
 gets direct access to the guest vrings. I am able to receive packets but
 when I post directly to vring and issue kick, packets fails to reach the
 guest. (ifconfig eth0 RX counter is unchanged. tcpdump also doesn't detect
 any packets.) .


 Sounds to me like something that should work.

 I'd suggest debug-compiling the virtio-net driver in the guest to see why
 it doesn't take the packet. Gets the kick? Processes the used ring? MAC
 address is accepted? Etc. That has been the most productive approach for me.






[Qemu-devel] virtio-net path after kick

2014-12-17 Thread Anshul Makkar
Hi,

I am using vhost-user and have an application which wants to send packet to
VM.

The initial connection establishment phase between qemu and the app seems fine;
control messages are exchanged and the vrings are set up successfully.

But, as I mentioned earlier on the group, after I kick from the app to
indicate the availability of the packet, qemu doesn't get those packets. As
suggested, on debugging I found that none of the functions involved in the
receive leg, like receive_buf in virtio_net.c or virtqueue_get_buf in
virtio_ring.c, are hit if I kick from my app after filling the receive
queue with packets.

Further, a kick causes a write to the kickfd, and I couldn't find any code in
qemu that is listening on that fd.

Based on the above observation it seems that qemu doesn't have the code
where it would detect a kick event for an fd. I have looked into virtio_net.c,
virtio_ring.c and virtio_pci.c.

Sorry if I have missed anything, but I have been trying hard to understand
and implement this packet path from the app to qemu via direct kicking, but
so far have failed.

Please can you share your suggestions as to where to look in the qemu
code after I kick from my app.

Thanks
Anshul Makkar
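
As a point of reference, with vhost-user QEMU passes the kick eventfd to the
backend via the VHOST_USER_SET_VRING_KICK message and does not poll it itself;
the backend application is expected to wait on that fd. A minimal sketch of
that wait loop, where kick_fd is assumed to be the descriptor received during
vring setup:

#include <poll.h>
#include <stdint.h>
#include <unistd.h>

/* Block until the guest kicks the queue, then drain the eventfd counter.
 * After this returns, the avail ring should be scanned for new buffers.
 */
static void wait_for_kick(int kick_fd)
{
    struct pollfd pfd = { .fd = kick_fd, .events = POLLIN };
    uint64_t count;

    while (poll(&pfd, 1, -1) >= 0) {
        if (pfd.revents & POLLIN) {
            read(kick_fd, &count, sizeof(count));  /* resets the counter */
            return;
        }
    }
}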


Re: [Qemu-devel] CUSE-TPM : Win 10 reports TPM device doesn't have sufficient resources

2017-04-21 Thread Anshul Makkar
Yes, v2.8.0+tpm branch worked. Thanks Stefan.

Anshul

From: Stefan Berger [mailto:stef...@us.ibm.com]
Sent: 18 April 2017 19:47
To: Anshul Makkar <anshul.mak...@citrix.com>
Cc: qemu-devel@nongnu.org
Subject: Re: CUSE-TPM : Win 10 reports TPM device doesn't have sufficient 
resources

You may want to try it from this version: 
https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm


- Original message -
From: anshul makkar <anshul.mak...@citrix.com<mailto:anshul.mak...@citrix.com>>
To: <qemu-devel@nongnu.org<mailto:qemu-devel@nongnu.org>>
Cc: Stefan Berger/Watson/IBM@IBMUS
Subject: CUSE-TPM : Win 10 reports TPM device doesn't have sufficient resources
Date: Tue, Apr 18, 2017 1:42 PM


Hi,



I am using CUSE-TPM based on

https://github.com/stefanberger/qemu-tpm branch: 2.4.1+tpm



https://github.com/stefanberger/swtpm



https://github.com/ts468/seabios-tpm



I am facing an issue where the Windows 10 guest device manager reports the TPM
status as:
The device status is "The device cannot find enough free resources it can use
(Code 12)".

On browsing I found this page,
https://bugzilla.redhat.com/show_bug.cgi?id=1281413, which reports exactly the
same problem, and the resolution patch at
https://bugzilla.redhat.com/attachment.cgi?id=1137166 .



I applied the patch on the code and verified with debug trace that the patch
code does execute.



But I am still observing the same issue on the Win 10 guest, and using the
ACPIdump utility in the Windows guest I can still see "IRQ 5 and IRQNoFlags" in
the ssdt.dsl code.
 Device (ISA.TPM)
{
Name (_HID, EisaId ("PNP0C31"))  // _HID: Hardware ID
Name (_STA, 0x0F)  // _STA: Status
Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource 
Settings
{
Memory32Fixed (ReadWrite,
0xFED4, // Address Base
0x5000, // Address Length
)
IRQNoFlags ()
{5}
})





I am a bit confused; from my understanding, it is QEMU that builds the SSDT
table, and I can also verify it from the logs. But somehow the guest is getting
the old ACPI values for TPM, which is not acceptable to Windows.

Just to be sure, I also verified the SeaBIOS code and couldn't find any link to 
this table.



Here is the patch that I applied based on the link above:


if (misc->tpm_version != TPM_VERSION_UNSPEC) {
 ACPI_BUILD_DPRINTF("TPM: add MMIO\n");
 dev = aml_device("TPM");
 aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C31")));
 aml_append(dev, aml_name_decl("_STA", aml_int(0xF)));
 crs = aml_resource_template();
 aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
 aml_append(dev, aml_name_decl("_CRS", crs));
 aml_append(sb_scope, dev);
 }
aml_append(ssdt, sb_scope);

logs once I start qemu:

ACPI_BUILD: init ACPI tables
ACPI_BUILD: TPM: add MMIO
ACPI_BUILD: init ACPI tables
ACPI_BUILD: TPM: add MMIO

tpm_tis:  read.4(0f00) = 00011014
tpm_tis: write.1(0008) = 
tpm_tis:  read.1() = 0081

Commands to start vTPM:
swtpm_cuse -M 260 -m 1 -n vtpm0

qemu-system-x86_64 -enable-kvm -m 1024 -boot d -bios bios.bin -boot menu=on 
-tpmdev cuse-tpm,id=tpm0,path=/dev/vtpm0 -device tpm-tis,tpmdev=tpm0 win.img

Please suggest if I am missing anything .

Thanks
Anshul Makkar




[Qemu-devel] CUSE-TPM : Win 10 reports TPM device doesn't have sufficient resources

2017-04-18 Thread anshul makkar

Hi,


I am using CUSE-TPM based on

https://github.com/stefanberger/qemu-tpm branch: 2.4.1+tpm


https://github.com/stefanberger/swtpm


https://github.com/ts468/seabios-tpm


I am facing an issue where the Windows 10 guest device manager reports the TPM
status as:


The device status is "The device cannot find enough free resources it can use
(Code 12)".

On browsing I found this page,
https://bugzilla.redhat.com/show_bug.cgi?id=1281413, which reports
exactly the same problem, and the resolution patch at
https://bugzilla.redhat.com/attachment.cgi?id=1137166 .



I applied the patch on the code and verified with debug trace that the
patch code does execute.



But I am still observing the same issue on the Win 10 guest, and using the
ACPIdump utility in the Windows guest I can still see "IRQ 5 and IRQNoFlags" in
the ssdt.dsl code.


 Device (ISA.TPM)
{
Name (_HID, EisaId ("PNP0C31"))  // _HID: Hardware ID
Name (_STA, 0x0F)  // _STA: Status
Name (_CRS, ResourceTemplate ()  // _CRS: Current Resource Settings
{
Memory32Fixed (ReadWrite,
0xFED4, // Address Base
0x5000, // Address Length
)
IRQNoFlags ()
{5}
})



I am a bit confused; from my understanding, it is QEMU that builds the
SSDT table, and I can also verify it from the logs. But somehow the guest is
getting the old ACPI values for TPM, which is not acceptable to Windows.


Just to be sure, I also verified the SeaBIOS code and couldn't find any 
link to this table.



Here is the patch that I applied based on the link above:


if (misc->tpm_version != TPM_VERSION_UNSPEC) {
 ACPI_BUILD_DPRINTF("TPM: add MMIO\n");
 dev = aml_device("TPM");
 aml_append(dev, aml_name_decl("_HID", aml_eisaid("PNP0C31")));
 aml_append(dev, aml_name_decl("_STA", aml_int(0xF)));
 crs = aml_resource_template();
 aml_append(crs, aml_memory32_fixed(TPM_TIS_ADDR_BASE,
TPM_TIS_ADDR_SIZE, AML_READ_WRITE));
 aml_append(dev, aml_name_decl("_CRS", crs));
 aml_append(sb_scope, dev);
 }
aml_append(ssdt, sb_scope);

logs once I start qemu:

ACPI_BUILD: init ACPI tables
ACPI_BUILD: TPM: add MMIO
ACPI_BUILD: init ACPI tables
ACPI_BUILD: TPM: add MMIO

tpm_tis:  read.4(0f00) = 00011014
tpm_tis: write.1(0008) = 
tpm_tis:  read.1() = 0081

Commands to start vTPM:
swtpm_cuse -M 260 -m 1 -n vtpm0

qemu-system-x86_64 -enable-kvm -m 1024 -boot d -bios bios.bin -boot 
menu=on -tpmdev cuse-tpm,id=tpm0,path=/dev/vtpm0 -device 
tpm-tis,tpmdev=tpm0 win.img


Please suggest if I am missing anything .

Thanks
Anshul Makkar