[openstack-dev] [docs] Question about submit docs

2014-12-28 Thread liuxinguo
We have opened a bug for the documentation of the Huawei storage driver, and 
have posted the document at https://review.openstack.org/#/c/143926/.

Now there are two things I am not very clear about:
1. When will this document be merged into Kilo? At the end of K-2, or a 
little earlier?
2. What should we do if we want to amend the document in both Icehouse and 
Juno? Should I use “git cherry-pick”, and how do I use it?

Any input will be appreciated, thanks!
--
Liu



Liu Xinguo
Huawei Technologies Co., Ltd.
Bantian, Longgang District, Shenzhen 518129, P.R. China
http://www.huawei.com

This e-mail and its attachments contain confidential information from HUAWEI, 
which is intended only for the person or entity whose address is listed above. 
Any use of the information contained herein in any way (including, but not 
limited to, total or partial disclosure, reproduction, or dissemination) by 
persons other than the intended recipient(s) is prohibited. If you receive 
this e-mail in error, please notify the sender by phone or email immediately 
and delete it!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Question about submit docs

2014-12-28 Thread Anne Gentle
On Sun, Dec 28, 2014 at 7:26 PM, liuxinguo liuxin...@huawei.com wrote:

  We have opened a bug for the documentation of the Huawei storage driver,
 and have posted the document at https://review.openstack.org/#/c/143926/.

 Now there are two things I am not very clear about:

 1. When will this document be merged into Kilo? At the end of K-2, or a
 little earlier?


It'll merge into the master branch for openstack-manuals, probably this
week. I'll add some review comments tomorrow.



 2. What should we do if we want to amend the document in both Icehouse and
 Juno? Should I use “git cherry-pick”, and how do I use it?


If you want this in prior release branches, use these instructions:
https://wiki.openstack.org/wiki/Documentation/HowTo#How_to_a_cherry-pick_a_change_to_a_stable_branch
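In case it helps, here is a self-contained sketch of the cherry-pick mechanics in a throwaway repository (file names, branch name, and commit messages are invented for the demo; for an OpenStack backport you would additionally push the result with `git review`, as the wiki page above describes):

```shell
# Throwaway demo of `git cherry-pick`: a fix that merged on master is
# copied onto a stable branch as a new commit. All names are examples.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "you@example.com"
git config user.name "You"

echo "v1" > doc.txt
git add doc.txt
git commit -qm "initial doc"
git branch stable/juno                 # the stable branch forks here

echo "v2" > doc.txt
git commit -qam "fix doc on master"    # the change to backport
fix_sha=$(git rev-parse HEAD)

git checkout -q stable/juno
git cherry-pick -x "$fix_sha"          # -x records the original commit SHA
cat doc.txt                            # prints: v2
```

The `-x` flag appends a "(cherry picked from commit ...)" line to the commit message, which stable-branch reviewers expect so they can trace the backport to the original change.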

Thanks for the doc patch, hope this information helps.
Anne




 Any input will be appreciated, thanks!
 --
 Liu






[openstack-dev] [mistral] Cancelling team meetings today and next Monday

2014-12-28 Thread Renat Akhmerov
Hi,

Let’s cancel the meeting today and next week; I don’t think they would gather 
enough people because of the New Year holidays.

Thanks

Renat Akhmerov
@ Mirantis Inc.






Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-28 Thread Renat Akhmerov
 On 27 Dec 2014, at 01:39, W Chan m4d.co...@gmail.com wrote:
 
  What you’re saying is that whatever is under “$.env” is just the exact
  same environment that we passed when we started the workflow? If yes then
  it definitely makes sense to me (it just allows explicit access to the
  environment, rather than an implicit variable lookup). Please confirm.
 Yes, the $.env that I originally proposed would be the same dict as the
 one supplied at start_workflow, although we have to agree on whether the
 variables in the environment are allowed to change after the WF has
 started. Unless there's a valid use case, I would lean toward making env
 immutable.

Let’s make them immutable. 

  One thing that I strongly suggest is that we clearly define all reserved
  keys like “env”, “__actions”, etc. I think it’d be better if they all
  started with the same prefix, for example, a double underscore.
 
 Agree. How about using a double underscore for env as well (i.e.
 $.__env.var1, $.__env.var2)?

Yes.
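To make the convention concrete, a hypothetical workflow fragment might look like this (the syntax is approximate and the variable name is invented; the point is only that environment values live under the reserved, immutable `__env` key rather than being merged into the top-level context):

```yaml
# Sketch only -- not final Mistral DSL. Assume the caller passed
# environment={"recipient": "world"} to start_workflow.
version: '2.0'

send_greeting:
  type: direct
  tasks:
    greet:
      # Reads from the reserved __env namespace instead of relying on
      # an implicit top-level variable lookup.
      action: std.echo output="Hello, {$.__env.recipient}!"
```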


Renat Akhmerov
@ Mirantis Inc.



[openstack-dev] [Containers][docker] Networking problem

2014-12-28 Thread Iván Chavero

Hello,

I've installed OpenStack with Docker as the hypervisor on a Cubietruck. 
Everything seems to work OK, but the container IP does not respond to pings, 
nor to the service I'm running inside the container (nginx on port 80).

I checked how Nova created the container, and it looks like everything is 
in place:


# nova list
+--------------------------------------+---------------+--------+------------+-------------+----------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks             |
+--------------------------------------+---------------+--------+------------+-------------+----------------------+
| 249df778-b2b6-490c-9dce-1126f8f337f3 | test_nginx_13 | ACTIVE | -          | Running     | public=192.168.1.135 |
+--------------------------------------+---------------+--------+------------+-------------+----------------------+


# docker ps
CONTAINER ID    IMAGE                           COMMAND           CREATED       STATUS       PORTS   NAMES
89b59bf9f442    sotolitolabs/nginx_arm:latest   /usr/sbin/nginx   6 hours ago   Up 6 hours           nova-249df778-b2b6-490c-9dce-1126f8f337f3



One odd thing I noticed, though I'm not really sure it's relevant: the 
Docker container does not show network info when created by Nova:

# docker inspect 89b59bf9f442

... unnecessary output ...

"NetworkSettings": {
    "Bridge": "",
    "Gateway": "",
    "IPAddress": "",
    "IPPrefixLen": 0,
    "PortMapping": null,
    "Ports": null
},




# neutron router-list
+--------------------------------------+---------+------------------------------------------------------------+-------------+-------+
| id                                   | name    | external_gateway_info                                      | distributed | ha    |
+--------------------------------------+---------+------------------------------------------------------------+-------------+-------+
| f8dc7e15-1087-4681-b495-217ecfa95189 | router1 | {"network_id": "160add9a-2d2e-45ab-8045-68b334d29418",     | False       | False |
|                                      |         |  "enable_snat": true, "external_fixed_ips":                |             |       |
|                                      |         |  [{"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d",    |             |       |
|                                      |         |   "ip_address": "192.168.1.120"}]}                         |             |       |
+--------------------------------------+---------+------------------------------------------------------------+-------------+-------+


# neutron subnet-list
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| id                                   | name           | cidr           | allocation_pools                                   |
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| 34995548-bc2b-4d33-bdb2-27443c01e483 | private_subnet | 10.0.0.0/24    | {"start": "10.0.0.2", "end": "10.0.0.254"}         |
| 1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d | public_subnet  | 192.168.1.0/24 | {"start": "192.168.1.120", "end": "192.168.1.200"} |
+--------------------------------------+----------------+----------------+----------------------------------------------------+




# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 863eb9a3-461c-4016-9bd1-7c4c7210db98 |      | fa:16:3e:24:7b:2c | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.2"}      |
| bbe59188-ab4e-4b92-a578-bbc2d6759295 |      | fa:16:3e:1c:04:6a | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.135"} |
| c8b94a90-c7d1-44fc-a582-3370f5486d26 |      | fa:16:3e:6f:69:71 | {"subnet_id": "34995548-bc2b-4d33-bdb2-27443c01e483", "ip_address": "10.0.0.1"}      |
| f108b583-0d54-4388-bcc0-f8d1cbe6efd4 |      | fa:16:3e:bb:3a:1b | {"subnet_id": "1ae33c0b-a04e-47b6-bdba-bbdf9a3ef14d", "ip_address": "192.168.1.120"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+



the network namespace is being created:

# ip netns exec 
89b59bf9f442a0d468d9d4d8c9370c53f8e4a3ba4d8affcd6be8b2dde84fff64 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0

Re: [openstack-dev] [Containers][docker] Networking problem

2014-12-28 Thread Jay Lau
There is no problem with your cluster; it is working well. With the
nova-docker driver, you need to enter the network namespace to check the
network, as you did.
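For example, on the compute host (as root) you can run diagnostics inside the container's namespace. This is a sketch: the namespace name below is the full container ID from your `ip netns` output, and the curl target/port are just examples from your setup:

```shell
NS=89b59bf9f442a0d468d9d4d8c9370c53f8e4a3ba4d8affcd6be8b2dde84fff64

# Interfaces and routes as the container sees them
ip netns exec "$NS" ip addr
ip netns exec "$NS" ip route

# Talk to nginx from inside the namespace; if this works but external
# pings fail, check security-group rules (ICMP and TCP 80 must be allowed)
ip netns exec "$NS" curl -s http://192.168.1.135:80 | head
```

If the service answers inside the namespace, the container itself is healthy and the problem is likely at the Neutron/security-group layer rather than in Docker.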


2014-12-29 13:15 GMT+08:00 Iván Chavero ichav...@chavero.com.mx:

 Hello,

 I've installed OpenStack with Docker as the hypervisor on a Cubietruck.
 Everything seems to work OK, but the container IP does not respond to
 pings, nor to the service I'm running inside the container (nginx on
 port 80).


Re: [openstack-dev] [Openstack] Need help in validating CPU Pinning feature

2014-12-28 Thread Srinivasa Rao Ragolu
Hi Steve,

Thank you so much for your reply and detailed steps to go forward.

I am using a devstack setup with the nova master commit. As I could not
find the CPUPinningFilter implementation in the source, I have used
NUMATopologyFilter.

But the same problem exists: I cannot see any vcpupin in the guest XML.
Please see the vcpu section of the XML below.

<metadata>
  <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
    <nova:package version="2015.1"/>
    <nova:name>test_pinning</nova:name>
    <nova:creationTime>2014-12-29 07:30:04</nova:creationTime>
    <nova:flavor name="pinned.medium">
      <nova:memory>2048</nova:memory>
      <nova:disk>20</nova:disk>
      <nova:swap>0</nova:swap>
      <nova:ephemeral>0</nova:ephemeral>
      <nova:vcpus>2</nova:vcpus>
    </nova:flavor>
    <nova:owner>
      <nova:user uuid="d72f55401b924e36ac88efd223717c75">admin</nova:user>
      <nova:project uuid="4904cdf59c254546981f577351b818de">admin</nova:project>
    </nova:owner>
    <nova:root type="image" uuid="fe017c19-6b4e-4625-93b1-2618dc5ce323"/>
  </nova:instance>
</metadata>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static' cpuset='0-3'>2</vcpu>

Kindly suggest which branch of Nova I need to take to validate the pinning
feature. Also, let me know whether CPUPinningFilter is required to validate
it.

Thanks a lot,
Srinivas.


On Sat, Dec 27, 2014 at 4:37 AM, Steve Gordon sgor...@redhat.com wrote:

 - Original Message -
  From: Srinivasa Rao Ragolu srag...@mvista.com
  To: joejiang ifz...@126.com
 
  Hi Joejiang,
 
  Thanks for the quick reply. The above XML is generated fine if I set
  vcpu_pin_set=1-12 in /etc/nova/nova.conf.
 
  But how do I pin each vCPU to a pCPU, something like below?
 
  <cputune>
     <vcpupin vcpu='0' cpuset='1-5,12-17'/>
     <vcpupin vcpu='1' cpuset='2-3,12-17'/>
  </cputune>
 
  One more question: are NUMA nodes compulsory for pinning each vCPU to a
  pCPU?

 The specification for the CPU pinning functionality recently implemented
 in Nova is here:


 http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/virt-driver-cpu-pinning.html

 Note that exact vCPU to pCPU pinning is not exposed to the user as this
 would require them to have direct knowledge of the host pCPU layout.
 Instead they request that the instance receive dedicated CPU resourcing
 and Nova handles allocation of pCPUs and pinning of vCPUs to them.

 Example usage:

 * Create a host aggregate and set metadata on it to indicate it is to be
 used for pinning; 'pinned' is used for the example, but any key can be
 used. The same key must be used in later steps though::

 $ nova aggregate-create cpu_pinning
 $ nova aggregate-set-metadata 1 pinned=true

   NB: For aggregates/flavors that won't be dedicated, set pinned=false.

 * Set all existing flavors to avoid this aggregate::

     $ for FLAVOR in `nova flavor-list | cut -f 2 -d ' ' | grep -o '[0-9]*'`; \
       do nova flavor-key ${FLAVOR} set \
       aggregate_instance_extra_specs:pinned=false; done

 * Create flavor that has extra spec hw:cpu_policy set to dedicated. In
 this example it is created with ID of 6, 2048 MB of RAM, 20 GB drive, and 2
 vCPUs::

 $ nova flavor-create pinned.medium 6 2048 20 2
 $ nova flavor-key 6 set hw:cpu_policy=dedicated

 * Set the flavor to require the aggregate set aside for dedicated pinning
 of guests::

 $ nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true

 * Add a compute host to the created aggregate (see nova host-list to get
 the host name(s))::

 $ nova aggregate-add-host 1 my_packstack_host_name

 * Add the AggregateInstanceExtraSpecsFilter and CPUPinningFilter filters
 to the scheduler_default_filters in /etc/nova.conf::

 scheduler_default_filters =
 RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,

 ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,

 AggregateInstanceExtraSpecsFilter,CPUPinningFilter

   NB: On Kilo code base I believe the filter is NUMATopologyFilter

 * Restart the scheduler::

 # systemctl restart openstack-nova-scheduler

 * After the above - with a normal (non-admin user) try to boot an instance
 with the newly created flavor::

 $ nova boot --image fedora --flavor 6 test_pinning

 * Confirm the instance has successfully booted and that its vCPUs are each
 pinned to _a single_ host CPU by observing the cputune element of the
 generated domain XML::

 # virsh list
  Id    Name                           State
 ----------------------------------------------------
  2     instance-0001                  running

 # virsh dumpxml instance-0001
 ...
 <vcpu placement='static' cpuset='0-3'>2</vcpu>
 <cputune>
   <vcpupin vcpu='0' cpuset='0'/>
   <vcpupin vcpu='1' cpuset='1'/>
 </cputune>


 -Steve

