[openstack-dev] [nova] Question about USB passthrough

2014-02-23 Thread Liuji (Jeremy)
Hi Boris and all,

I found a blueprint about USB device passthrough at
https://blueprints.launchpad.net/nova/+spec/host-usb-passthrough.
I have also read the latest nova code and confirmed that it does not support
USB passthrough yet.

Is there any progress on, or plan for, USB passthrough?


Thanks,
Jeremy Liu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-02-26 Thread Liuji (Jeremy)
Yes, PCI devices like GPUs or HBAs are common resources; the admin/user does
not need to specify which device goes to which VM, so the current PCI
passthrough function meets those scenarios.

But USB devices have different usage scenarios. Take a USB key or USB disk
as an example: the admin/user may need the content on a particular USB
device, not just any device of that type, so the admin/user has to specify
which USB device is assigned to which VM.

There are other things to consider too. For example, a USB device may need a
matching USB controller rather than the default USB 1.1 controller created
by qemu.
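
For reference, the libvirt-level mechanism I have in mind is roughly the
following (a minimal sketch using the python-libvirt bindings; the
vendor/product IDs and domain name are placeholders, and nova would still
need its own modelling and scheduling on top of this):

    import libvirt

    # Attach one specific USB device (identified by vendor/product ID) to a
    # running guest. This is the hostdev passthrough that libvirt/qemu
    # already support; the open question is how nova should expose the
    # choice of device to the admin/user.
    USB_HOSTDEV_XML = """
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0951'/>
        <product id='0x1666'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # placeholder domain name
    dom.attachDeviceFlags(USB_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

On top of that, the domain would also need a suitable controller element
(for example an ich9-ehci1 or nec-xhci USB controller) instead of the
default USB 1.1 one, which is the second issue mentioned above.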

I'm not yet clear on how to provide this function, but I still want to write
a wiki page so that more people can participate in the discussion.

Thanks,
Jeremy Liu

 -Original Message-
 From: yunhong jiang [mailto:yunhong.ji...@linux.intel.com]
 Sent: Wednesday, February 26, 2014 1:17 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: bpavlo...@mirantis.com; Luohao (brian); Yuanjing (D)
 Subject: Re: [openstack-dev] [nova] Question about USB passthrough
 
 On Tue, 2014-02-25 at 03:05 +, Liuji (Jeremy) wrote:
  Given that USB devices are so widely used in private/hybrid clouds (e.g.
  as USB keys), and there are no technical issues in libvirt/qemu, I think
  this would be a valuable feature for openstack.
 
 USB key is an interesting scenario. I assume the USB key is just for some
 specific VM; I am wondering how the admin/user knows which USB disk goes to
 which VM?
 
 --jyh
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] a question about Mistral

2014-02-27 Thread Liuji (Jeremy)
Hi Mistral team,

I am very interested in the Mistral project.

I have a question about one of the use case descriptions on the Mistral
wiki, quoted below.

Live migration
A user specifies tasks for VM live migration triggered upon an event from
Ceilometer (CPU consumption 100%).

Does this mean that Mistral plans to provide a feature like DRS?

I am new to Mistral, so I apologize if I am missing something obvious.

Thanks,
Jeremy Liu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] a question about Mistral

2014-02-27 Thread Liuji (Jeremy)
Hi,

Yes, I meant a feature like VMware Distributed Resource Scheduler.
Your explanation makes it completely clear now; thanks.

Thanks,
Jeremy Liu

 -Original Message-
 From: Renat Akhmerov [mailto:rakhme...@mirantis.com]
 Sent: Thursday, February 27, 2014 5:03 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Mistral] a question about Mistral
 
 Hey,
 
 Can you please provide more details on what you're interested in? What do
 you mean by DRS?
 
 If you mean VMware Distributed Resource Scheduler then yes and no. It's not
 the major goal of Mistral but Mistral is a more generic tool that could be 
 used
 to build something like this. The primary goal of Mistral is to provide a 
 workflow
 engine and easy way to integrate Mistral with other systems so that we can
 trigger workflow execution upon external events like Ceilometer alarms, timer
 or anything else.
 
 Feel free to ask any questions, thanks!
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 On 27 Feb 2014, at 15:08, Liuji (Jeremy) jeremy@huawei.com wrote:
 
  Hi Mistral team,

  I am very interested in the Mistral project.

  I have a question about one of the use case descriptions on the Mistral
  wiki, quoted below.

  Live migration
  A user specifies tasks for VM live migration triggered upon an event from
  Ceilometer (CPU consumption 100%).

  Does this mean that Mistral plans to provide a feature like DRS?

  I am new to Mistral, so I apologize if I am missing something obvious.

  Thanks,
  Jeremy Liu
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] A problem about pci-passthrough

2014-03-03 Thread Liuji (Jeremy)
Hi, all

I have found a problem with PCI passthrough.

Test scenario:
1) There are two compute nodes in the environment, named A and B. A has two
NICs with vendor_id='8086' and product_id='105e'; B has two NICs with
vendor_id='8086' and product_id='10c9'.
2) I configured pci_alias={"vendor_id":"8086", "product_id":"10c9",
"name":"a1"} in nova.conf on the controller node, and of course set
pci_passthrough_whitelist accordingly on the two compute nodes.
3) Finally, I created a flavor named MyTest with extra_specs =
{u'pci_passthrough:alias': u'a1:1'} (see the sketch after this list).
4) When I create a new instance with the MyTest flavor, it either boots
successfully or goes to ERROR, seemingly at random.
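
For completeness, this is roughly how the flavor extra spec was applied (a
sketch using the python-novaclient of that era; credentials, endpoint and
names are placeholders):

    from novaclient.v1_1 import client

    # Placeholders -- replace with real credentials and keystone endpoint.
    nova = client.Client('admin', 'password', 'admin',
                         'http://controller:5000/v2.0',
                         service_type='compute')

    # Ask for one PCI device matching alias "a1" (vendor 8086, product
    # 10c9) for every instance booted with the MyTest flavor.
    flavor = nova.flavors.find(name='MyTest')
    flavor.set_keys({'pci_passthrough:alias': 'a1:1'})
    print(flavor.get_keys())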

The problem is in the _schedule() function of nova/scheduler/filter_scheduler.py:

    chosen_host = random.choice(
        weighed_hosts[0:scheduler_host_subset_size])
    selected_hosts.append(chosen_host)

    # Now consume the resources so the filter/weights
    # will change for the next instance.
    chosen_host.obj.consume_from_instance(instance_properties)

Since scheduler_host_subset_size is set to 2, the weighed hosts are A and B,
and chosen_host is picked between them at random. When chosen_host is B the
instance boots, but when chosen_host is A the instance goes to ERROR, because
consume_from_instance() raises an exception.

I think this is a bug. Or is there a problem with my test procedure, or some
additional configuration that I am missing?
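
To illustrate the failure mode, here is a standalone toy script (not nova
code; the host names and the device map are made up):

    import random

    # Toy model: two weighed hosts, but only host B actually has a NIC
    # matching the alias a1 (vendor 8086, product 10c9).
    hosts_with_matching_pci = {"A": False, "B": True}
    weighed_hosts = ["A", "B"]          # what the weighing step returns
    scheduler_host_subset_size = 2      # as configured in nova.conf

    for attempt in range(5):
        chosen = random.choice(weighed_hosts[0:scheduler_host_subset_size])
        if hosts_with_matching_pci[chosen]:
            print("attempt %d: chose %s -> instance boots" % (attempt, chosen))
        else:
            # In nova, consume_from_instance() raises here because the PCI
            # claim cannot be satisfied, so the instance goes to ERROR.
            print("attempt %d: chose %s -> PCI claim fails, ERROR"
                  % (attempt, chosen))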

Thanks,
Jeremy Liu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Questions about guest NUMA and memory binding policies

2014-03-03 Thread Liuji (Jeremy)
Hi, all

I have searched the current blueprints and old threads on the mailing list,
but found nothing about guest NUMA or memory binding policies.
I only found a blueprint about vCPU topology and another about CPU binding:

https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
https://blueprints.launchpad.net/nova/+spec/numa-aware-cpu-binding

Is there any plan for supporting guest NUMA topologies and memory binding
policies?

Thanks,
Jeremy Liu



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A problem about pci-passthrough

2014-03-03 Thread Liuji (Jeremy)
Thanks for your reply. I had indeed missed the scheduler filter configuration
(the PciPassthroughFilter). Everything works now.

Thank you very much,
Jeremy Liu

 -Original Message-
 From: yongli he [mailto:yongli...@intel.com]
 Sent: Tuesday, March 04, 2014 9:28 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova] A problem about pci-passthrough
 
 On 2014-03-03 22:21, Alexis Lee wrote:
  Liuji (Jeremy) said on Mon, Mar 03, 2014 at 08:06:57AM +:
  Test scenario:
  1) There are two compute nodes in the environment, named A and B. A has
  two NICs with vendor_id='8086' and product_id='105e'; B has two NICs with
  vendor_id='8086' and product_id='10c9'.
  2) I configured pci_alias={"vendor_id":"8086", "product_id":"10c9",
  "name":"a1"} in nova.conf on the controller node, and of course set
  pci_passthrough_whitelist accordingly on the two compute nodes.
  3) Finally, I created a flavor named MyTest with extra_specs =
  {u'pci_passthrough:alias': u'a1:1'}.
  4) When I create a new instance with the MyTest flavor, it either boots
  successfully or goes to ERROR, seemingly at random.

  The problem is in the _schedule() function of
  nova/scheduler/filter_scheduler.py:

      chosen_host = random.choice(
          weighed_hosts[0:scheduler_host_subset_size])
      selected_hosts.append(chosen_host)

      # Now consume the resources so the filter/weights
      # will change for the next instance.
      chosen_host.obj.consume_from_instance(instance_properties)

  Since scheduler_host_subset_size is set to 2, the weighed hosts are A and
  B, and chosen_host is picked between them at random. When chosen_host is B
  the instance boots, but when chosen_host is A the instance goes to ERROR,
  because consume_from_instance() raises an exception.
  Hi Jeremy,

  You didn't mention the PciPassthroughFilter, have you enabled this in
  your scheduler?

 Definitely need this filter:
 https://wiki.openstack.org/wiki/Pci_passthrough

 Yongli He

  Alexis
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Questions about guest NUMA and memory binding policies

2014-03-04 Thread Liuji (Jeremy)
Hi Steve,

Thanks for your reply.

I did not understand why the numa-aware-cpu-binding blueprint seemed to have
stalled until I read the two mails referenced in your reply.

The use case analysis in those mails is very clear, and it covers my concerns
as well. I agree that we should not expose the pCPU/vCPU mapping directly to
end users, and that how to expose these capabilities needs more thought.

The use cases I care most about are exclusive pCPU use (pCPU:vCPU = 1:1) and
guest NUMA topology with memory binding.
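
To make this concrete, here is roughly what I mean at the libvirt level (a
sketch only; the pinning, node sets and memory sizes are placeholder values,
and the rest of the domain definition is elided):

    import xml.etree.ElementTree as ET

    # Fragment of a libvirt domain definition showing the three pieces:
    # 1:1 vCPU->pCPU pinning (<cputune>), host memory binding (<numatune>),
    # and a guest-visible NUMA topology (<cpu><numa>). Memory is in KiB.
    NUMA_DOMAIN_FRAGMENT = """
    <domain type='kvm'>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='3'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <cpu>
        <numa>
          <cell cpus='0-1' memory='2097152'/>
          <cell cpus='2-3' memory='2097152'/>
        </numa>
      </cpu>
    </domain>
    """

    ET.fromstring(NUMA_DOMAIN_FRAGMENT)  # well-formedness check only
    print("fragment parses OK")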


Thanks,
Jeremy Liu


 -Original Message-
 From: Steve Gordon [mailto:sgor...@redhat.com]
 Sent: Tuesday, March 04, 2014 10:29 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Luohao (brian); Yuanjing (D)
 Subject: Re: [openstack-dev] [nova] Questions about guest NUMA and memory
 binding policies
 
 - Original Message -
  Hi, all
 
  I have searched the current blueprints and old threads on the mailing
  list, but found nothing about guest NUMA or memory binding policies.
  I only found a blueprint about vCPU topology and another about CPU
  binding:

  https://blueprints.launchpad.net/nova/+spec/support-libvirt-vcpu-topology
  https://blueprints.launchpad.net/nova/+spec/numa-aware-cpu-binding

  Is there any plan for supporting guest NUMA topologies and memory binding
  policies?
 
  Thanks,
  Jeremy Liu
 
 Hi Jeremy,
 
 As you've discovered there have been a few attempts at getting some work
 started in this area. Dan Berrange outlined some of the possibilities in this 
 area
 in a previous mailing list post [1] though it's multi-faceted, there are a 
 lot of
 different ways to break it down. If you dig into the details you will note 
 that the
 support-libvirt-vcpu-topology blueprint in particular got a fair way along but
 there were some concerns noted in the code reviews and on the list [2] around
 the design.
 
 It seems like this is an area that there is a decent amount of interest in 
 and we
 should work on list to flesh out a design proposal, ideally this would be
 presented for further discussion at the Juno design summit. What are your
 particular needs/desires from a NUMA aware nova scheduler?
 
 Thanks,
 
 Steve
 
 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2013-November/019715.h
 tml
 [2]
 http://lists.openstack.org/pipermail/openstack-dev/2013-December/022940.h
 tml
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] a question about instance snapshot

2014-03-06 Thread Liuji (Jeremy)
Hi, all

Current openstack does not seem to support snapshotting an instance together
with its memory and device state.
I searched the blueprints and found two related ones, listed below, but
neither of them made it into the tree.

[1]: https://blueprints.launchpad.net/nova/+spec/live-snapshots
[2]: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms

In blueprint [1] there is a comment:
"We discussed this pretty extensively on the mailing list and in a design
summit session. The consensus is that this is not a feature we would like to
have in nova." --russellb
But I cannot find that discussion thread, and I would like to understand the
reasoning behind it.
Without a memory snapshot, we cannot offer users the ability to revert an
instance to a checkpoint.
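
For reference, the underlying capability already exists in libvirt/qemu. A
minimal sketch with the python-libvirt bindings (the domain name is a
placeholder, and an internal full-system checkpoint like this needs
qcow2-backed disks):

    import libvirt

    # Create a full system checkpoint (disk + memory/device state) of a
    # running guest. Without the DISK_ONLY flag, libvirt stores the memory
    # state as well, so the guest can later be reverted to this point.
    SNAPSHOT_XML = """
    <domainsnapshot>
      <name>checkpoint-1</name>
    </domainsnapshot>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # placeholder domain name
    snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
    print("created checkpoint: %s" % snap.getName())

    # Reverting later would be:
    #   dom.revertToSnapshot(snap)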

Could anyone who knows the history help me out, or give me a hint on how to
find that discussion thread?

I am new to openstack, so I apologize if I am missing something obvious.


Thanks,
Jeremy Liu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review approval

2014-03-06 Thread Liuji (Jeremy)
+1

I agree; I really like this idea. It makes blueprint review and discussion
much better tracked and recorded, and it makes it easy for people who join
later to understand a design's history.


 -Original Message-
 From: Sean Dague [mailto:s...@dague.net]
 Sent: Friday, March 07, 2014 2:05 AM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [nova] RFC - using Gerrit for Nova Blueprint review 
 approval
 
 One of the issues that the Nova team has definitely hit is Blueprint 
 overload. At
 some point there were over 150 blueprints. Many of them were a single
 sentence.
 
 The results of this have been that design review today is typically not
 happening on Blueprint approval, but is instead happening once the code shows
 up in the code review. So -1s and -2s on code review are a mix of design and
 code review. A big part of which is that design was never in any way 
 sufficiently
 reviewed before the code started.
 
 In today's Nova meeting a new thought occurred. We already have Gerrit which
 is good for reviewing things. It gives you detailed commenting abilities, 
 voting,
 and history. Instead of attempting (and usually
 failing) on doing blueprint review in launchpad (or launchpad + an etherpad, 
 or
 launchpad + a wiki page) we could do something like follows:
 
 1. create bad blueprint
 2. create gerrit review with detailed proposal on the blueprint
 3. iterate in gerrit working towards blueprint approval
 4. once approved copy back the approved text into the blueprint (which
 should now be sufficiently detailed)
 
 Basically blueprints would get design review, and we'd be pretty sure we liked
 the approach before the blueprint is approved. This would hopefully reduce the
 late design review in the code reviews that's happening a lot now.
 
 There are plenty of niggly details that would be need to be worked out
 
  * what's the basic text / template format of the design to be reviewed
 (probably want a base template for folks to just keep things consistent).
  * is this happening in the nova tree (somewhere in docs/ - NEP (Nova
 Enhancement Proposals), or is it happening in a separate gerrit tree.
  * are there timelines for blueprint approval in a cycle? after which point, 
 we
 don't review any new items.
 
 Anyway, plenty of details to be sorted. However we should figure out if the 
 big
 idea has support before we sort out the details on this one.
 
 Launchpad blueprints will still be used for tracking once things are approved,
 but this will give us a standard way to iterate on that content and get to
 agreement on approach.
 
   -Sean
 
 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Question about USB passthrough

2014-03-13 Thread Liuji (Jeremy)
Hi,

I have written a wiki page about USB controllers and USB passthrough at
https://wiki.openstack.org/wiki/Nova/proposal_about_usb_passthrough.

I hope to get your feedback and advice.

Thanks,
Jeremy Liu

 -Original Message-
 From: Liuji (Jeremy) [mailto:jeremy@huawei.com]
 Sent: Thursday, February 27, 2014 9:59 AM
 To: yunhong.ji...@linux.intel.com; OpenStack Development Mailing List (not
 for usage questions)
 Cc: Luohao (brian); Yuanjing (D)
 Subject: Re: [openstack-dev] [nova] Question about USB passthrough
 
 Yes, PCI devices like GPUs or HBAs are common resources; the admin/user does
 not need to specify which device goes to which VM, so the current PCI
 passthrough function meets those scenarios.

 But USB devices have different usage scenarios. Take a USB key or USB disk
 as an example: the admin/user may need the content on a particular USB
 device, not just any device of that type, so the admin/user has to specify
 which USB device is assigned to which VM.

 There are other things to consider too. For example, a USB device may need
 a matching USB controller rather than the default USB 1.1 controller
 created by qemu.

 I'm not yet clear on how to provide this function, but I still want to
 write a wiki page so that more people can participate in the discussion.
 
 Thanks,
 Jeremy Liu
 
  -Original Message-
  From: yunhong jiang [mailto:yunhong.ji...@linux.intel.com]
  Sent: Wednesday, February 26, 2014 1:17 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Cc: bpavlo...@mirantis.com; Luohao (brian); Yuanjing (D)
  Subject: Re: [openstack-dev] [nova] Question about USB passthrough
 
  On Tue, 2014-02-25 at 03:05 +, Liuji (Jeremy) wrote:
   Given that USB devices are so widely used in private/hybrid clouds (e.g.
   as USB keys), and there are no technical issues in libvirt/qemu, I think
   this would be a valuable feature for openstack.
 
  USB key is an interesting scenario. I assume the USB key is just for some
  specific VM; I am wondering how the admin/user knows which USB disk goes
  to which VM?
 
  --jyh
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev