Re: [openstack-dev] [nova] RFC for Intel RDT/CAT Support in Nova for Virtual Machine QoS

2017-02-21 Thread Qiao, Liyong
Hi Marcelo, 

Reserved size means the portion that can't be allocated to applications (VMs); it is 
reserved by the host and determined by the hardware.

These capabilities only show the host's capabilities; there will be another libvirt 
API to report how many bytes can be allocated to a VM per cache bank.
I am still working on it.


Best Regards

Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
-- 


On 22/02/2017, 9:07 AM, "Marcelo Tosatti"  wrote:

What does "reserved" mean again? 

Some field should include how many bytes are CAT allocatable 
at this time in the L3 socket.

(So libvirtd functions properly with other applications).

This is required.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] RFC for Intel RDT/CAT Support in Nova for Virtual Machine QoS

2017-02-21 Thread Qiao, Liyong
Hi folks:

Seeking community input on an initial design for Intel Resource Director 
Technology (RDT), in particular Cache Allocation Technology (CAT), in OpenStack 
Nova to protect workloads from co-resident noisy neighbors and ensure quality 
of service (QoS).

1. What is Cache Allocation Technology (CAT)?
Intel's RDT (Resource Director Technology) [1] is an umbrella of hardware 
support to facilitate the monitoring and reservation of shared resources such 
as cache, memory and network bandwidth towards obtaining Quality of Service. 
RDT enables fine-grained control of resources, which is particularly valuable 
in cloud environments to meet Service Level Agreements while increasing 
resource utilization through sharing. CAT is the part of RDT that concerns itself 
with reserving a portion of the last level cache for a process or group of 
processes, with further fine-grained control over how much is used for code 
versus data. As an illustration, consider a single processor composed of 4 cores 
and its cache hierarchy: the L1 cache is split into instruction and data caches, 
the L2 cache is next in speed to L1, and the L1 and L2 caches are per core. The 
Last Level Cache (LLC) is shared among all cores. With CAT on currently available 
hardware, the LLC can be partitioned on a per process (virtual machine, container, 
or normal application) or process group basis.


Libvirt and OpenStack [2] already support monitoring cache (CMT) and memory 
bandwidth usage local to a processor socket (MBM_local) and total memory 
bandwidth usage across all processor sockets (MBM_total) for a process or 
process group.


2. How CAT works
To learn more about CAT please refer to the Intel Software Developer's Manual, 
volume 3b, chapters 17.16 and 17.17 [3]. Linux kernel support for the same is 
expected in release 4.10 and is documented at [4].


3. Libvirt Interface

Libvirt support for CAT is underway; the patch is at revision 7.

Interface changes of libvirt:

3.1 The capabilities XML has been extended to reveal cache information

(Illustrative reconstruction of the stripped example; the element and attribute 
names here may differ from the libvirt patch under review.)

<cache>
  <bank id='0' type='l3' size='56320' unit='KiB' cpus='0-21'>
    <control min='2816' reserved='2816' unit='KiB' scope='L3'/>
  </bank>
  <bank id='1' type='l3' size='56320' unit='KiB' cpus='44-65'>
    <control min='2816' reserved='2816' unit='KiB' scope='L3'/>
  </bank>
</cache>

The new `cache` XML element shows that the host has two banks of type L3, or 
Last Level Cache (LLC), one per processor socket. The cache type is l3, its size 
is 56320 KiB, and the cpus attribute indicates the physical CPUs associated with 
each bank, here '0-21' and '44-65' respectively.

The control tag shows that the bank belongs to scope L3, with a minimum possible 
allocation of 2816 KiB, and that 2816 KiB is reserved by the host.

If the host has CDP (Code and Data Prioritization) enabled, the L3 cache will be 
divided into code (L3CODE) and data (L3DATA).

The control tag will be extended to:
...
  <control min='2816' reserved='2816' unit='KiB' scope='L3CODE'/>
  <control min='2816' reserved='2816' unit='KiB' scope='L3DATA'/>
...

The L3CODE and L3DATA scopes show that we can allocate cache for code and data 
usage respectively; they share the same amount of L3 cache.
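
As a rough sketch (not part of the proposal itself), Nova could read these 
proposed cache banks from the capabilities XML roughly as follows; the element 
and attribute names mirror the illustrative example above and may differ in the 
final libvirt patch:

    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python

    conn = libvirt.openReadOnly('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())

    # Walk the (proposed) <cache> element under <host> and print each bank.
    for bank in caps.findall('./host/cache/bank'):
        control = bank.find('control')
        print('bank %s: type=%s size=%s KiB cpus=%s min=%s KiB reserved=%s KiB' % (
            bank.get('id'), bank.get('type'), bank.get('size'), bank.get('cpus'),
            control.get('min') if control is not None else '?',
            control.get('reserved') if control is not None else '?'))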

3.2 Domain XML extended to include the new CacheTune element

(Again an illustrative reconstruction of the stripped example; the element and 
attribute names follow the patch under review and may change.)

<cachetune>
  <cache id='0' host_id='0' type='l3' size='2816' unit='KiB' vcpus='0,1'/>
  <cache id='1' host_id='1' type='l3' size='2816' unit='KiB' vcpus='2,3'/>
</cachetune>
...


This means the guest will have vcpus 0 and 1 running on the host's socket 0, with 
2816 KiB of cache exclusively allocated to them, and vcpus 2 and 3 running on the 
host's socket 1, with 2816 KiB of cache exclusively allocated to them.

Here we need to make sure vcpus 0 and 1 are pinned to pCPUs of socket 0, and 
vcpus 2 and 3 are pinned to pCPUs of socket 1; refer to the cpus attribute of 
each bank in the capabilities XML above.

3.3 Libvirt work flow for CAT


  1.  Create the qemu process and get its PIDs.
  2.  Define a new resource control domain, also known as a Class of Service 
(CLOS), under /sys/fs/resctrl and set the desired Cache Bit Mask (CBM) from the 
libvirt domain XML, in addition to updating the default schemata of the host.
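
In other words, step 2 boils down to writing the resctrl files described in [4]. 
A minimal Python sketch of the same steps (purely illustrative, not how libvirt 
itself is implemented):

    import os

    RESCTRL = '/sys/fs/resctrl'

    def create_clos(name, pids, schemata_line):
        """Create a CLOS for a VM and move its qemu PIDs into it."""
        group = os.path.join(RESCTRL, name)
        os.makedirs(group)                 # e.g. /sys/fs/resctrl/instance-0001
        # schemata_line is the CBM per cache bank, e.g. 'L3:0=0x3;1=0xf'
        with open(os.path.join(group, 'schemata'), 'w') as f:
            f.write(schemata_line + '\n')
        # assign the qemu process (and its threads) to this CLOS
        with open(os.path.join(group, 'tasks'), 'w') as f:
            for pid in pids:
                f.write('%d\n' % pid)
        # the default group's schemata must also be updated so the default
        # CLOS no longer overlaps the bits reserved for this VM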

4. Proposed Nova Changes


  1.  Get host capabilities from libvirt and extend the compute node fields 
accordingly.
  2.  Add a new scheduler filter and weigher to help schedule a host for the 
requested guest.
  3.  Extend the flavor's (and image metadata's) extra spec fields:

We need to specify NUMA settings for NUMA hosts if we want to enable CAT; see 
[5] to learn more about NUMA.
In the flavor, we can have:

vcpus=8
mem=4
hw:numa_nodes=2 // number of NUMA nodes to expose to the guest.
hw:numa_cpus.0=0,1,2,3,4,5
hw:numa_cpus.1=6,7
hw:numa_mem.0=3072
hw:numa_mem.1=1024
// newly added in this proposal
hw:cache_banks=2   //cache banks to be allocated to a  guest, (can be less than 
the number of NUMA nodes)
hw:cache_type.0=l3  //cache bank type, could be l3, l3data + l3code
hw:cache_type.1=l3_c+d  //cache bank type, could be l3, l3data + l3code
hw:cache_vcpus.0=0,1  //vcpu list on cache banks, can be none
hw:cache_vcpus.1=6,7
hw:cache_l3.0=2816  //cache size in KiB.
hw:cache_l3_code.1=2816
hw:cache_l3_data.1=2816
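
As a concrete example, such a flavor could be created and tagged with these 
extra specs through python-novaclient roughly as below (a sketch only: the 
hw:cache_* keys are the new ones proposed here and are not yet recognized by 
Nova, and the credentials/endpoint are placeholders):

    from novaclient import client

    # placeholder credentials and endpoint
    nova = client.Client('2', 'admin', 'password', 'demo',
                         'http://controller:5000/v2.0')

    flavor = nova.flavors.create('cat.guest', ram=4096, vcpus=8, disk=20)
    flavor.set_keys({
        'hw:numa_nodes': '2',
        'hw:numa_cpus.0': '0,1,2,3,4,5',
        'hw:numa_cpus.1': '6,7',
        'hw:numa_mem.0': '3072',
        'hw:numa_mem.1': '1024',
        # proposed keys
        'hw:cache_banks': '2',
        'hw:cache_type.0': 'l3',
        'hw:cache_type.1': 'l3_c+d',
        'hw:cache_vcpus.0': '0,1',
        'hw:cache_vcpus.1': '6,7',
        'hw:cache_l3.0': '2816',
        'hw:cache_l3_code.1': '2816',
        'hw:cache_l3_data.1': '2816',
    })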

Here the user can be clear about which vCPUs will benefit from cache allocation. 
A cache bank should work together with a NUMA cell: cache is allocated on a 
physical CPU socket, but a cache bank here is a logical concept. A cache bank 
allocates cache for a vCPU list, and all vCPU lists should be grouped


Re: [openstack-dev] [Zun] Propose a change of Zun core membership

2016-12-01 Thread Qiao, Liyong
+1 

Best Regards

Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
-- 


On 2016/12/2 at 9:55 AM, "Shuu Mutou" wrote:

+1 for both.

Thanks,
Shu

> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, December 01, 2016 8:38 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [Zun] Propose a change of Zun core membership
> 
> Hi Zun cores,
> 
> 
> 
> I am going to propose the following change of the Zun core reviewers team:
> 
> + Pradeep Kumar Singh (pradeep-singh-u)
> 
> - Vivek Jain (vivek-jain-openstack)
> 
> 
> 
> Pradeep has proven to be a significant contributor to Zun. He ranked first
> in the number of commits, and his patches were non-trivial and of high
> quality. His reviews were also very helpful, and often prompted us to
> re-think the design. It would be great to have him in the core team. I would
> like to thank Vivek for his interest in joining the core team when Zun was
> founded. However, he became inactive in the past few months; he is
> welcome to re-join the core team as long as he becomes active again.
> 
> 
> 
> According to the OpenStack Governance process [1], we require a minimum
> of 4 +1 votes from Zun core reviewers within a 1 week voting window 
(consider
> this proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot
> get enough votes or there is a veto vote prior to the end of the voting
> window, this proposal is rejected.
> 
> 
> 
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
> 
> 
> 
> 
> Best regards,
> 
> Hongbin


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]

2016-10-10 Thread Qiao, Liyong
@Zhangshuai

I believe you are running the Magnum service on a server in China. Please note 
that K8s will pull images from gcr.io, so please make sure you can reach gcr.io 
in case the firewall blocks it.

—
Best Regards
Sincerely,
Eli Qiao(乔立勇)Intel SSG OTC OpenStack Core Team


From: Ton Ngo <t...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, October 11, 2016, 5:36 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum]

Hi zhangshuai,
We can only tell from the screenshots that the k8s master node failed. You will 
likely need
to use the CLI for further debugging. It might also be quicker to ask on the 
IRC.
Ton Ngo,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting neutron-openvswitch-agent service

2016-06-06 Thread Qiao, Liyong
6: ovs-system:  mtu 1500 qdisc noop state DOWN
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff
7: br-ex:  mtu 1500 qdisc noop state DOWN
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
8: br-int:  mtu 1500 qdisc noop state DOWN
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff
9: br-tun:  mtu 1500 qdisc noop state DOWN
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff


I noticed that these devices are not in the UP state; you'd better check them 
first.

Best Regards,
Qiao, Liyong (Eli) OTC SSG Intel

Sincerely,
Qiao Liyong, Open Source Technology Center, Software and Services Group, Intel 
(China) Co., Ltd.



From: hu.zhiji...@zte.com.cn [mailto:hu.zhiji...@zte.com.cn]
Sent: Monday, June 06, 2016 6:54 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting 
neutron-openvswitch-agent service

Hi Guys,

I am new to Neutron, Kolla and OVS. I was trying to deploy Mitaka on CentOS in 
an all-in-one environment using Kolla. After a successful deployment I realized 
that I should disable the NetworkManager service, roughly according to: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html

But when I disabled NetworkManager and restarted the network service (the host 
machine was probably also restarted), I cannot ping my gateway through the 
external interface.

Here is the relevant log of ovs:

2016-06-06 09:19:37.278 1 INFO neutron.common.config [-] Logging enabled!
2016-06-06 09:19:37.283 1 INFO neutron.common.config [-] 
/usr/bin/neutron-openvswitch-agent version 8.0.0
2016-06-06 09:19:43.035 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Mapping physical network 
physnet1 to bridge br-ex
2016-06-06 09:19:45.236 1 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-ex']. Exception: Exit code: 1; 
Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-ex" in table Interface

2016-06-06 09:19:49.979 1 INFO neutron.agent.l2.extensions.manager 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Loaded agent extensions: []
2016-06-06 09:19:52.185 1 WARNING neutron.agent.securitygroups_rpc 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Firewall driver 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver doesn't 
accept integration_bridge parameter in __init__(): __init__() got an unexpected 
keyword argument 'integration_bridge'
2016-06-06 09:19:53.204 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Agent initialized 
successfully, now running...
2016-06-06 09:19:53.733 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Configuring tunnel 
endpoints to other OVS agents



I use enp0s25 as both the VIP interface and the external interface because the 
host only has one interface...

Here is the ip addr result for enp0s25 before the deployment:

2: enp0s25:  mtu 1500 qdisc pfifo_fast state 
UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 10429sec preferred_lft 10429sec
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever


Here is the ip addr result after the deployment

2: enp0s25:  mtu 1500 qdisc pfifo_fast master 
ovs-system state UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 7846sec preferred_lft 7846sec
inet 10.43.114.149/32 scope global enp0s25
   valid_lft forever preferred_lft forever
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever
6: ovs-system:  mtu 1500 qdisc noop state DOWN
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff
7: br-ex:  mtu 1500 qdisc noop state DOWN
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
8: br-int:  mtu 1500 qdisc noop state DOWN
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff
9: br-tun:  mtu 1500 qdisc noop state DOWN
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff


Please help me figure out how to locate and solve this kind of problem, many thanks!


Zhijiang








Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-30 Thread Qiao, Liyong
Oh, that reminds me:
MesosMonitor requires the master node's floating IP address directly to get 
state information.

BR, Eli(Li Yong)Qiao

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: Thursday, March 31, 2016 11:41 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?
Importance: Low

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we need 
to pull docker images from Docker Hub on the nodes, floating IPs are needed. To 
reduce the usage of floating IPs, we can use a proxy: only some nodes have 
floating IPs, and the other nodes can access Docker Hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
mailto:liyong.q...@intel.com>> wrote:
Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since IP 
addresses are a kind of resource that is not wise to waste.

On March 31, 2016 at 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying


__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][keystone] What is the difference between auth_url and auth_uri?

2016-02-29 Thread Qiao, Liyong
URI and URL are different concepts, but sometimes they may have the same value.

You can see the distinction in http://www.ietf.org/rfc/rfc3986.txt
BR, Eli(Li Yong)Qiao

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: Monday, February 29, 2016 7:04 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [openstack][keystone] What is the difference between 
auth_url and auth_uri?

Hi all,

There are two config parameters (auth_uri and auth_url) in keystone_authtoken 
group. I want to know what is the difference between them. Can I use only one 
of them?


Best Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]summary of blue print of 'making_live_migration_api_friendly'

2016-02-24 Thread Qiao, Liyong
Hi folks
There has been some discussion about this blueprint: 
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/making_live_migration_api_friendly.html
As we discussed in yesterday's Nova-API/Nova-Live-Migration meeting, Alex and 
I did a summary and wrote it up here: 
https://etherpad.openstack.org/p/making_live_migration_api_friendly
Please feel free to put your comments on it.

BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all]Whom should I contact to ask for ATC code

2016-02-04 Thread Qiao, Liyong
Hello all,
I am sorry for the broadcast, but I really don't know whom I should contact.
The thing is:
My friend is a new developer who has already contributed to an OpenStack project, 
and he would like to know how (or via which email contact) he can get the ATC 
code for the Austin summit.

BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to add feature to diskimage-builder

2015-12-29 Thread Qiao,Liyong

hi,
here are some links that may help:
[1] is the general development guide for OpenStack contributors. It may 
provide some useful guidance.

[2] is the place where you register a blueprint.

Hope it goes well.

Thanks
Eli

[1]https://wiki.openstack.org/wiki/How_To_Contribute#Feature_development
[2]https://blueprints.launchpad.net/diskimage-builder

On 2015年12月30日 01:16, George Shuklin wrote:

Hello.

I'm trying to add a small feature to one of the elements in 
diskimage-builder 
(https://github.com/openstack/diskimage-builder/pull/10/).


I have experience with gerrit and openstack bugfix workflow, but I 
have no idea how to add small enhancements.


The dev guide says I need to add a blueprint (how?) and follow the guides 
for the project. But I can't find any dev guides specific to 
diskimage-builder.


What should I do? (Or maybe I can just submit changes for review 
without a blueprint/spec?)



__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][api] Looking for a Cross-Project Liaison for the API Working Group

2015-12-01 Thread Qiao,Liyong

hi Everett
I'd like to take it.

thanks
Eli.

On 2015年12月02日 05:18, Everett Toews wrote:

Hello Magnumites,

The API Working Group [1] is looking for a Cross-Project Liaison [2] from the 
Magnum project.

What does such a role entail?

The API Working Group seeks API subject matter experts for each project to 
communicate plans for API updates, review API guidelines with their project’s 
view in mind, and review the API Working Group guidelines as they are drafted. 
The Cross-Project Liaison (CPL) should be familiar with the project’s REST API 
design and future planning for changes to it.

Please let us know if you're interested and we'll bring you on board!

Cheers,
Everett

[1] http://specs.openstack.org/openstack/api-wg/
[2] http://specs.openstack.org/openstack/api-wg/liaisons.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Why set DEFAULT_DOCKER_TIMEOUT = 10 in docker client?

2015-11-24 Thread Qiao,Liyong

hi all,
In the Magnum code, we hardcode DEFAULT_DOCKER_TIMEOUT = 10.
This causes trouble in bad networking environments (or with a badly 
performing swarm master).

At least it doesn't work on our gate.

Here is the test patch on the gate: https://review.openstack.org/249522 . I 
set it to 180 to make sure the failure is due to the time_out parameter passed 
to the docker client, but we need to choose a suitable value.


I checked the docker client's default value, DEFAULT_TIMEOUT_SECONDS = 60; 
I wonder why we overwrite it as 10?

Please let me know what your thoughts are. My suggestion is that we set 
DEFAULT_DOCKER_TIMEOUT

as long as our RPC timeout.
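
Something like the following is what I have in mind (just a sketch; which 
option to reuse is open for discussion):

    import docker
    from oslo_config import cfg

    CONF = cfg.CONF

    def docker_client(base_url):
        # reuse the RPC response timeout (registered by oslo.messaging)
        # instead of the hardcoded 10 seconds
        return docker.Client(base_url=base_url,
                             timeout=CONF.rpc_response_timeout)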

--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum]How do we support swarm master node HA

2015-11-24 Thread Qiao,Liyong

hi all,

I failed to set up a swarm master HA cluster.
Here is my setup:

2 masters and 1 node, with an LB in front of the 2 master nodes (Round-Robin).

I saw this in the swarm HA guide and got it confirmed by the swarm guys:

jimmyxian | elqiao: I'm here. :). Swarm does not support A-A. But you can 
access the standby manager, and it will proxy the request to the primary 
manager

https://docs.docker.com/swarm/multi-manager-setup/

The Swarm replica will do the proxying, but my test failed.
Since the LB uses Round-Robin mode, it will access the primary and then the replica.
Every time the LB accesses the primary node the cluster works fine, but it fails 
when it accesses the replica.


I wonder if the configuration is wrong?


Here are the ENV details:


master-1 172.24.5.33(floating ip) 192.168.0.5:2(private ip) primary

root  1289  0.1  1.4  35456 29272 ?Ssl  10:02   0:07 /swarm 
manage -H tcp://0.0.0.0:2375 --replication --advertise 192.168.0.5:2375 
--tlsverify --tlscacert=/etc/docker/ca.crt 
--tlskey=/etc/docker/server.key --tlscert=/etc/docker/server.crt 
etcd://192.168.0.3:2379/v2/keys/swarm/


master-2 172.24.5.32(floating ip) 192.168.0.6(private ip) replica
root  1678  0.1  0.8  23572 16824 ?Ssl  11:31   0:00 /swarm 
manage -H tcp://0.0.0.0:2375 --replication --advertise 192.168.0.6:2375 
--tlsverify --tlscacert=/etc/docker/ca.crt 
--tlskey=/etc/docker/server.key --tlscert=/etc/docker/server.crt 
etcd://192.168.0.3:2379/v2/keys/swarm/



on master-1 172.24.5.33 (primary)

bash-4.3# docker -H tcp://172.24.5.33:2376 --tlsverify  --tlscacert 
ca.crt --tlskey server.key --tlscert server.crt info

Containers: 6
Images: 6
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 sw-6ckizfpu4bl-0-mjy7qmxwbc6s-swarm-node-bynksfbxgibf.novalocal: 
192.168.0.7:2375

  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, 
kernelversion=3.17.4-301.fc21.x86_64, operatingsystem=Fedora 21 (Twenty 
One), storagedriver=devicemapper
 sw-ivtl4icqr-0-7a7s2ycpss2k-swarm-master-mxihlwsyjetc.novalocal: 
192.168.0.5:2375

  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, 
kernelversion=3.17.4-301.fc21.x86_64, operatingsystem=Fedora 21 (Twenty 
One), storagedriver=devicemapper
 sw-ivtl4icqr-1-35oewlqh25a7-swarm-master-idtxokrzgaek.novalocal: 
192.168.0.6:2375

  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, 
kernelversion=3.17.4-301.fc21.x86_64, operatingsystem=Fedora 21 (Twenty 
One), storagedriver=devicemapper

CPUs: 3
Total Memory: 6.158 GiB
Name: 78443d1d9ad2
Http Proxy: http://10.239.4.160:911/
Https Proxy: https://10.239.4.160:911/
No Proxy: 
192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5,192.168.0.6,192.168.0.7


I can see all containers of the cluster:

bash-4.3# docker -H tcp://172.24.5.32:2376 --tlsverify  --tlscacert 
ca.crt --tlskey server.key --tlscert server.crt ps -a
CONTAINER IDIMAGE   COMMAND CREATED 
STATUS  PORTS NAMES
78443d1d9ad2swarm:1.0.0 "/swarm manage -H tcp" About an 
hour ago   Up About an hour0.0.0.0:2376->2375/tcp swarm-manager
d19e9ab13e07swarm:1.0.0 "/swarm join --addr 1" About an 
hour ago   Up About an hour2375/tcp swarm-agent
bash-4.3# docker -H tcp://172.24.5.33:2376 --tlsverify  --tlscacert 
ca.crt --tlskey server.key --tlscert server.crt ps -a
CONTAINER IDIMAGE   COMMAND CREATED 
STATUS PORTSNAMES
0337ad1ad6a6docker.io/cirros"ping -c 100 10.248.2"   50 
minutes ago  Exited (137) 26 minutes ago 
sw-6ckizfpu4bl-0-mjy7qmxwbc6s-swarm-node-bynksfbxgibf.novalocal/test_ping
6a6e1f1327e2swarm:1.0.0 "/swarm join --addr 1" About an 
hour ago   Up About an hour 2375/tcp 
sw-6ckizfpu4bl-0-mjy7qmxwbc6s-swarm-node-bynksfbxgibf.novalocal/swarm-agent
78443d1d9ad2swarm:1.0.0 "/swarm manage -H tcp" About an 
hour ago   Up About an hour 192.168.0.5:2376->2375/tcp 
sw-ivtl4icqr-0-7a7s2ycpss2k-swarm-master-mxihlwsyjetc.novalocal/swarm-manager
d19e9ab13e07swarm:1.0.0 "/swarm join --addr 1" About an 
hour ago   Up About an hour 2375/tcp 
sw-ivtl4icqr-0-7a7s2ycpss2k-swarm-master-mxihlwsyjetc.novalocal/swarm-agent
a4da371274bcswarm:1.0.0 "/swarm manage -H tcp" About an 
hour ago   Up 3 minutes 192.168.0.6:2376->2375/tcp 
sw-ivtl4icqr-1-35oewlqh25a7-swarm-master-idtxokrzgaek.novalocal/swarm-manager
a211d31dfc6eswarm:1.0.0 "/swarm join --addr 1" About an 
hour ago   Up About an hour 2375/tcp 
sw-ivtl4icqr-1-35oewlqh25a7-swarm-master-idtxokrzgaek.novalocal/swarm-agent



=
on master-2 172.24.5.32(replica)


bash-4.3# docker -H tcp://172.24.5.32:2376 --tlsverify  --tlscacert 
ca.crt -

Re: [openstack-dev] [neutron][heat][magnum] LBaaS of Neutron

2015-11-16 Thread Qiao,Liyong

Egor,
thanks for pointing that out. I had read it before but didn't notice those words.
Hmm, then that should be fine.

On 2015年11月16日 16:28, Egor Guz wrote:

Eli,

you are correct that Swarm supports only an active/passive deployment model, but 
according to the Docker documentation 
https://docs.docker.com/swarm/multi-manager-setup/
even a replica can handle user requests: "You can use the docker command on any Docker 
Swarm primary manager or any replica."

It means "round-robin" should work.

—
Egor

From: "Qiao,Liyong" mailto:liyong.q...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Sunday, November 15, 2015 at 23:50
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][heat][magnum] LBaaS of Neutron

Hi Sergey

Thanks for your information, it's really help.
Actually I am from Magnum team and we are using heat to do orchestration on 
docker swarm bay.

Swarm master only support A-P mode (active-passive), I wonder if there any 
workaround to
implement my requirement :

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

if I want to make VIP to alway connect with master-1(since it is A mode),
only switch to master-2 when master-1 down. what should I do?
---

Below link is the heat template of k8s(k8s supports A-A mode, so it can use 
ROUND_ROBIN).
https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster.yaml#L343

P.S Copy to Magnum team.

thanks
Eli.


On 2015年11月16日 15:15, Sergey Kraynev wrote:
On 16 November 2015 at 09:46, Qiao,Liyong 
mailto:liyong.q...@intel.com>> wrote:
hi, I have some questions about neutorn LBaas.

seen from the wiki, the load balancer only support:


Table 4.6. Load Balancing Algorithms

Name
LEAST_CONNECTIONS
ROUND_ROBIN

https://wiki.openstack.org/wiki/Neutron/LBaaS/API

think about if I have a A-P mode HA

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

if I want to use VIP to alway connect with master-1(since it is A mode),
only switch to master-2 when master-1 down. what should I do?
any plan to support more algorithms for neutron lbaas?

BTW, the usage is from heat:

   etcd_pool:
 type: OS::Neutron::Pool
 properties:
   protocol: HTTP
   monitors: [{get_resource: etcd_monitor}]
   subnet: {get_resource: fixed_subnet}
   lb_method: ROUND_ROBIN
   vip:
 protocol_port: 2379



thanks,
Eli.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi, Qiao,Liyong

I can not say about LBaaS team plans for supporting some additional algorithms 
:)
AFAIK, they do not plan add it to v1 API.
As I understand it may be discussed as part of v2 API [1]

In the Heat we have related BP [2], with several patches on review. So if it 
will be implemented on Neutron side, we may add such functionality too.

[1] http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0
[2] https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport

--
Regards,
Sergey.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][heat][magnum] LBaaS of Neutron

2015-11-15 Thread Qiao,Liyong

Hi Sergey,

Thanks for your information, it really helps.
Actually I am from the Magnum team and we are using heat to do orchestration 
of the docker swarm bay.


The swarm master only supports A-P mode (active-passive), and I wonder if there 
is any workaround to implement my requirement:

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

If I want the VIP to always connect to master-1 (since it is in A mode),
and only switch to master-2 when master-1 is down, what should I do?
---

The link below is the heat template for k8s (k8s supports A-A mode, so it can 
use ROUND_ROBIN).

https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster.yaml#L343

P.S. Copied to the Magnum team.

thanks
Eli.


On 2015年11月16日 15:15, Sergey Kraynev wrote:
On 16 November 2015 at 09:46, Qiao,Liyong <mailto:liyong.q...@intel.com>> wrote:


hi, I have some questions about neutorn LBaas.

seen from the wiki, the load balancer only support:

*Table 4.6. Load Balancing Algorithms*

*Name*
LEAST_CONNECTIONS
ROUND_ROBIN


https://wiki.openstack.org/wiki/Neutron/LBaaS/API

think about if I have a A-P mode HA

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

if I want to use VIP to alway connect with master-1(since it is A
mode),
only switch to master-2 when master-1 down. what should I do?
any plan to support more algorithms for neutron lbaas?

BTW, the usage is from heat:

  etcd_pool:
type: OS::Neutron::Pool
properties:
  protocol: HTTP
  monitors: [{get_resource: etcd_monitor}]
  subnet: {get_resource: fixed_subnet}
*  lb_method: ROUND_ROBIN*
  vip:
protocol_port: 2379



thanks,
Eli.

-- 
BR, Eli(Li Yong)Qiao



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Hi, Qiao,Liyong

I can not say about LBaaS team plans for supporting some additional 
algorithms :)

AFAIK, they do not plan add it to v1 API.
As I understand it may be discussed as part of v2 API [1]

In the Heat we have related BP [2], with several patches on review. So 
if it will be implemented on Neutron side, we may add such 
functionality too.


[1] 
http://developer.openstack.org/api-ref-networking-v2-ext.html#lbaas-v2.0

[2] https://blueprints.launchpad.net/heat/+spec/lbaasv2-suport

--
Regards,
Sergey.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][heat] LBaaS of Neutron

2015-11-15 Thread Qiao,Liyong

hi, I have some questions about neutron LBaaS.

As seen from the wiki, the load balancer only supports:

Table 4.6. Load Balancing Algorithms

Name
LEAST_CONNECTIONS
ROUND_ROBIN


https://wiki.openstack.org/wiki/Neutron/LBaaS/API

Think about the case where I have an A-P mode HA setup:

VIP : 192.168.0.10

master-1 192.168.0.100 (A)
master-2 192.168.0.101 (P)

If I want the VIP to always connect to master-1 (since it is in A mode),
and only switch to master-2 when master-1 is down, what should I do?
Is there any plan to support more algorithms for neutron LBaaS?

BTW, the usage is from heat:

  etcd_pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: etcd_monitor}]
      subnet: {get_resource: fixed_subnet}
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 2379



thanks,
Eli.

--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Qiao,Liyong

hello all:

I will give an update on the Magnum functional testing status. 
Functional/integration testing is important to us: since we change/modify the 
Heat templates rapidly, we need to verify that the modifications are correct, 
so we need to cover all the templates Magnum has.
Currently we only have k8s testing (only tested with the atomic image); we need 
to add more, like swarm (WIP) and mesos (planned), and we may also need to 
support a COS image.

Lots of work needs to be done.

Regarding the functional testing time cost, we discussed it during the Tokyo 
summit;

Adrian expected that we can reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate pipeline.
The stages are as follows. Taking the k8s functional testing as an example, we 
run the following test cases:

1) baymodel creation
2) bay (tls_disabled=True) creation/deletion
3) bay (tls_disabled=False) creation to test the k8s API, deleting it 
after testing.


For each stage, the time cost is as follows:

 * devstack prepare: 5-6 mins
 * running devstack: 15 mins (including downloading the atomic image)
 * 1) and 2): 15 mins
 * 3): 15 + 3 mins

In total about 60 mins currently; an example is 1h 05m 57s, see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html

for all timestamps.

I don't think it is possible to reduce the time to 20 mins, since the devstack 
setup already takes 20 mins.


To reduce time, I suggest creating only 1 bay per pipeline and doing various 
kinds of testing on this bay; if we want to test a specific kind of bay (for 
example, network_driver etc.), create

a new pipeline.

So, I think we can delete 2), since 3) does similar things (create/delete); 
the difference is that

3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; this will 
reduce it to 45 min (48m 50s in the example).


=
For other related functional testing work:
I have done the split of functional testing per COE; we have the following pipelines:

 * gate-functional-dsvm-magnum-api 30 mins
 * gate-functional-dsvm-magnum-k8s 60 mins

And for the swarm pipeline, the patches are done and under review now (they work 
fine on the gate):

https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] live migration sub-team meeting

2015-11-10 Thread Qiao,Liyong

hi Paul

+1, I will attend this meeting.

On 2015年11月10日 19:48, Murray, Paul (HP Cloud) wrote:


Thank you for the prompting Michael. I was chasing up a couple of key 
people to make sure they were available.


The IRC meeting should be Tuesdays at 1400 UTC on #openstack-meeting-3 
starting next week (too late for today).


I will get that sorted out with infra and send another email to 
confirm. I will also sort out all the specs and patches that I know 
about today. More information will be included about that too.


Paul

*From:*Michael Still [mailto:mi...@stillhq.com]
*Sent:* 09 November 2015 21:34
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [Nova] live migration sub-team meeting

So, its been a week. What time are we picking?

Michael

On Thu, Nov 5, 2015 at 10:46 PM, Murray, Paul (HP Cloud) 
mailto:pmur...@hpe.com>> wrote:


> > Most team members expressed they would like a regular IRC
meeting for
> > tracking work and raising blocking issues. Looking at the
contributors
> > here [2], most of the participants seem to be in the European
> > continent (in time zones ranging from UTC to UTC+3) with a few
in the
> > US (please correct me if I am wrong). That suggests that a
time around
> > 1500 UTC makes sense.
> >
> > I would like to invite suggestions for a day and time for a weekly
> > meeting -
>
> Maybe you could create a quick Doodle poll to reach a rough
consensus on
> day/time:
>
> http://doodle.com/

Yes, of course, here's the poll:

http://doodle.com/poll/rbta6n3qsrzcqfbn




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Rackspace Australia



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Qiao, Liyong
+1. Alex has worked on the Nova project for a long time, pushed a lot of API 
features in the last few cycles, and spends lots of time reviewing. I am glad to 
add my +1 for him.

BR, Eli(Li Yong)Qiao

-Original Message-
From: Ed Leafe [mailto:e...@leafe.com] 
Sent: Saturday, November 07, 2015 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

On Nov 6, 2015, at 9:32 AM, John Garbutt  wrote:

> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work, 
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

I'm not a core, but would like to add my hearty +1.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] [RFC] split pip line of functional testing

2015-11-03 Thread Qiao,Liyong

hi Magnum hackers:

Currently there is a pipeline in project-config to do Magnum functional 
testing [1].


At the summit we discussed that we need to split it per COE [2]; we can 
do this by adding a new pipeline for testing:

 - '{pipeline}-functional-dsvm-magnum{coe}{job-suffix}':
coe could be swarm/mesos/k8s,
and then we pass the coe to our post_test_hook.sh [3]. Is this a good idea?
I still have other questions that need to be addressed before splitting 
functional testing per COE:
1. How can we pass a COE parameter to tox in [4], or should we add some new envs 
like [testenv:functional-swarm], [testenv:functional-k8s], etc.? (Is that a 
stupid question?)
2. Also, there are some common test cases; should we run them for all 
COEs? (I think so.)

But how do we construct the source code tree?

functional/swarm
functional/k8s
functional/common ...


[1]https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L2288
[2]https://etherpad.openstack.org/p/mitaka-magnum-functional-testing
[3]https://github.com/openstack/magnum/blob/master/magnum/tests/contrib/post_test_hook.sh#L100
[4]https://github.com/openstack/magnum/blob/master/tox.ini#L19

--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gate] facing urllib3 exceptions.SSLError on gate: "EOF occurred in violation of protocol"

2015-10-22 Thread Qiao, Liyong
Hi all,

I don't know who can help with the gate, maybe the infra team; sorry for the broadcast.
I get this error when using urllib3 with a CA in gate testing, but I can't 
reproduce it on my setup (Ubuntu 14.04). I hope someone who has met this issue 
before can give some advice.

Error logs can be found here [1]; I also googled some bug links [2][3] and tried 
them, but it didn't help.


urllib3.exceptions.SSLError: [Errno 8] _ssl.c:510: EOF occurred in violation of 
protocol


[1] 
http://logs.openstack.org/21/232421/18/check/gate-functional-dsvm-magnum/87de53c/console.html#_2015-10-22_12_24_23_092
[2]https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/1363356
[3] 
http://stackoverflow.com/questions/11772847/error-urlopen-error-errno-8-ssl-c504-eof-occurred-in-violation-of-protoco

BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] k8s api tls_enabled mode testing

2015-10-21 Thread Qiao, Liyong
Hello,
I need your help with the k8s API tls_enabled mode.
Here's my patch: https://review.openstack.org/232421

It always fails on the gate, but it works in my setup.
Debugging further, I found that the certificates API returns content of a 
different length:

On my setup:
10.238.157.49 - - [21/Oct/2015 19:16:17] "POST /v1/certificates HTTP/1.1" 201 
3360
…
10.238.157.49 - - [21/Oct/2015 19:16:17] "GET 
/v1/certificates/d4bf6135-a3d0-4980-a785-e3f2900ca315 HTTP/1.1" 200 1357

On gate:

127.0.0.1 - - [21/Oct/2015 10:59:40] "POST /v1/certificates HTTP/1.1" 201 3352

127.0.0.1 - - [21/Oct/2015 10:59:40] "GET 
/v1/certificates/a9aa1bbd-d624-4791-a4b9-e7a076c8bf58 HTTP/1.1" 200 1349



It is 8 bytes short.



I also printed out the cert file content, and the lengths on the gate and on my 
setup are the same.

But it fails on the gate due to an SSL exception.

Does anyone know what the root cause might be?




BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-20 Thread Qiao,Liyong

hi Vikas,
thanks for proposing this change. I wonder if you can show some examples 
for the other COEs we currently support:

swarm, mesos?

If we propose a public API like the one you proposed, we'd better support all 
COEs instead of being COE specific.


thanks
Eli.

On 2015年10月20日 18:14, Vikas Choudhary wrote:

Hi Team,

I would appreciate any opinion/concern regarding 
"coe-component-status" feature implementation [1].


For example in k8s, using the API api/v1/namespaces/{namespace}/componentstatuses, 
the status of each k8s component can be queried. My approach would be to provide 
a command in magnum like "magnum coe-component-status", leveraging the 
COE-provided REST API, with the result shown to the user.


[1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status



-Vikas Choudhary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm baywithfedora-21-atomic-5-d181.qcow2

2015-10-18 Thread Qiao,Liyong
I worked with Mars on this last week, but in my environment I cannot 
reproduce the issue.

I have given my docker daemon command line to Mars, but he gets the same errors.

One thing I forgot to ask, and it is the same as Hongbin's question:
if you need a proxy to access the internet, please add it to the baymodel:

taget@taget-ThinkStation-P300:~/devstack$ magnum baymodel-create ...
--dns-nameserver 10.248.2.5 --coe swarm --fixed-network 192.168.0.0/24 
--http-proxy http://myhttpproxy:port/ --https-proxy 
https://myhttpsproxy:port/ --no-proxy 
192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5



On 2015年10月17日 05:18, Ton Ngo wrote:


Hi Mars,
Your paste shows that the docker service is not starting, and all the 
following services like swarm-agent fail because of the dependency. 
The error message INVALIDARGUMENT seems odd; I have seen it elsewhere but 
not with docker. If you log into the node, you can check the docker 
command itself, like:

docker --help
Or manually run the full command as done in the service:
/usr/bin/docker -d -H fd:// -H tcp://0.0.0.0:2375 --tlsverify 
--tlscacert="/etc/docker/ca.crt" --tlskey="/etc/docker/server.key" 
--tlscert="/etc/docker/server.crt" --selinux-enabled --storage-driver 
devicemapper --storage-opt dm.fs=xfs --storage-opt 
dm.datadev=/dev/mapper/atomicos-docker--data --storage-opt 
dm.metadatadev=/dev/mapper/atomicos-docker--meta


Ton,



From: Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)" 


Date: 10/16/2015 01:05 PM
Subject: Re: [openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2






Hi Mars,

I cannot reproduce the error. My best guess is that your VMs don't 
have external internet access (could you verify it by sshing into one of 
your VMs and typing "curl openstack.org"?). If not, please create a bug 
to report the error (https://bugs.launchpad.net/magnum).


Thanks,
Hongbin

From: Mars Ma [mailto:wenc...@gmail.com]
Sent: October-16-15 2:37 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2


Hi,

I used the image fedora-21-atomic-5-d181.qcow2 to create a swarm bay, but 
the bay went to failed status with status reason: Resource CREATE 
failed: WaitConditionFailure: 
resources.swarm_nodes.resources[0].resources.node_agent_wait_condition: swarm-agent 
service failed to start.
Debugging inside the swarm node, I found that docker failed to start, which led 
to the swarm-agent and swarm-manager services failing to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ 
docker -v

Docker version 1.8.1.fc21, build 32b8b25/1.8.1

Detailed debug log, I pasted it here: 
http://paste.openstack.org/show/476450/




Thanks & Best regards !
Mars Ma
__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Testing result of new atomic-6 image

2015-10-09 Thread Qiao,Liyong

Testing results for the new atomic-6 image [1] built by Tango:
the atomic-5 image has an issue starting a container instance (its docker version 
is 1.7.1), so Tango built a new atomic-6 image with docker 1.8.1.

eghobo and I (eliqiao) did some testing work (eghobo_ did most of it).

Here is the summary:

 * coe=swarm

1.  Cannot pull swarm:0.2.0; using 0.4.0 or latest works.
2.  When creating a container with the magnum CLI, the image name
   should use the full name, like "docker.io/cirros".

Examples for 2:

   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image cirros --bay swarmbay6
   --command "echo hello"
   ERROR: Docker internal Error: 404 Client Error: Not Found ("No
   such image: cirros") (HTTP 500)
   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image docker.io/cirros --bay
   swarmbay6 --command "echo hello"

 * coe=k8s (tls_disabled=True)

kube-apiserver.service cannot start up, but it can be started from the 
command line [2]. I tried to use kubectl get pod, but it failed as:


   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   kubectl get pod
   error: couldn't read version from server: Get
   http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused


netstat shows that 8080 is not being listened on; I am not sure why (I am not 
familiar with k8s).


   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   ps aux | grep kub
   kube   805  0.5  1.0  30232 21436 ?Ssl  08:12 0:29
   /usr/bin/kube-controller-manager --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   kube   806  0.1  0.6  17332 13048 ?Ssl  08:12 0:09
   /usr/bin/kube-scheduler --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   root  1246  0.0  1.0  33656 22300 pts/0Sl+  09:33 0:00
   /usr/bin/kube-apiserver --logtostderr=true --v=0
   --etcd_servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0
   --insecure-port=8080 --kubelet_port=10250 --allow_privileged=true
   --service-cluster-ip-range=10.254.0.0/16 --runtime_config=api/all=true
   minion1276  0.0  0.0  11140  1632 pts/1S+   09:46 0:00 grep
   --color=auto kub


[1] https://fedorapeople.org/groups/magnum/fedora-21-atomic-6-d181.qcow2
[2] http://paste.openstack.org/show/475824/

-- BR, Eli(Li Yong)Qiao
<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Qiao,Liyong
+1, we can add a more detailed explanation of --memory in the magnum 
CLI help instead of in the quickstart.


Eli.

On 2015年10月09日 07:45, Vikas Choudhary wrote:
In my opinion, there should be a more detailed document explaining the 
importance of commands and options.
Though --memory is an important attribute, since the objective of the 
quickstart is to get the user a minimum working system in minimum 
time, it seems better to skip this option in the quickstart.



-Vikas

On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz > wrote:


Adrian,

I agree with Steve, otherwise it's hard to find a balance for what
should go into the quick start guide (e.g. many operators worry about
cpu or I/O instead of memory).
Also I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto mailto:adrian.o...@rackspace.com>>>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" mailto:openstack-dev@lists.openstack.org>>>
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
mailto:openstack-dev@lists.openstack.org>>>
Subject: Re: [openstack-dev] [magnum] Document adding --memory
option to create containers

Steve,

I agree with the concept of a simple quickstart doc, but there
also needs to be a comprehensive user guide, which does not yet
exist. In the absence of the user guide, the quick start is the
void where this stuff is starting to land. We simply need to put
together a magnum reference document, and start moving content
into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake)
mailto:std...@cisco.com>>> wrote:

Quickstart guide should be dead dead dead dead simple. The goal of
the quickstart guide isn't to teach people best practices around
Magnum.  It is to get a developer operational to give them that
sense of feeling that Magnum can be worked on.  The goal of any
quickstart guide should be to encourage the thinking that a person
involving themselves with the project the quickstart guide
represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu mailto:hongbin...@huawei.com>>>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" mailto:openstack-dev@lists.openstack.org>>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
mailto:openstack-dev@lists.openstack.org>>>
Subject: [openstack-dev] [magnum] Document adding --memory option
to create containers

Hi team,

I want to move the discussion in the review below to here, so that
we can get more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum currently added support for specifying the
memory size of containers. The specification of the memory size is
optional, and the COE won’t reserve any memory to the containers
with unspecified memory size. The debate is whether we should
document this optional parameter in the quickstart guide. Below is
the positions of both sides:

Pros:
· It is a good practice to always specifying the memory
size, because containers with unspecified memory size won’t have
QoS guarantee.
· The in-development autoscaling feature [1] will query
the memory size of each container to estimate the residual
capacity and triggers scaling accordingly. Containers with
unspecified memory size will be treated as taking 0 memory, which
negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as
possible, so it is not a good idea to have the optional parameter
in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org

>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__

[openstack-dev] [magnum] How to verify a service is correctly setup in heat template

2015-10-08 Thread Qiao,Liyong

hi Magnum hackers:

Recently we upgraded to the fedora atomic-5 image, but docker (1.7.1) in
that image doesn't work well; see [1].

When I use that image to create a swarm bay, magnum tells me the bay
is usable, but the swarm-master and swarm-agent services are actually not
running correctly, so the bay is not usable.

I proposed a fix [2] that checks every service's status (using systemctl
status) before triggering the signal. Andrew Melton feels that check is
not reliable, so he proposed fix [3]. However, fix [3] does not work,
because additional signals are ignored: in the heat template the default
signal count is 1. Please refer to [4] for more information.

So my question is: why can't [2] work well? Is my understanding in
https://bugs.launchpad.net/magnum/+bug/1502329/comments/5 wrong,
or is there a better way to get an asynchronous signal?
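
For reference, here is a rough sketch of the kind of pre-signal check [2]
performs, written as a standalone script. The systemd unit names, the
wait-condition URL, and the payload format are assumptions; the real
templates wire the handle URL in as a parameter.

import subprocess
import sys

SERVICES = ["swarm-manager", "swarm-agent"]            # assumed unit names
WAIT_HANDLE_URL = "http://heat-endpoint/wait-handle"   # injected by the template in practice

def all_active(units):
    # 'systemctl is-active --quiet <unit>' exits 0 only when the unit is active.
    return all(subprocess.call(["systemctl", "is-active", "--quiet", u]) == 0
               for u in units)

if __name__ == "__main__":
    if not all_active(SERVICES):
        # Do not signal success; let Heat time the bay out instead of
        # reporting a broken bay as usable.
        sys.exit(1)
    # Signal success exactly once (with the default count=1, any additional
    # signals would be ignored anyway).
    subprocess.check_call(["curl", "-fsS", "-X", "POST", WAIT_HANDLE_URL,
                           "-H", "Content-Type: application/json",
                           "--data-binary", '{"status": "SUCCESS"}'])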

[1]https://bugs.launchpad.net/magnum/+bug/1499607
[2]https://review.openstack.org/#/c/228762/
[3]https://review.openstack.org/#/c/230639/
[4]https://bugs.launchpad.net/magnum/+bug/1502329

Thanks.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] debugging functional testing on gate

2015-09-25 Thread Qiao,Liyong

hi folks,
I am working on adding functional test cases for creating bays on the gate.
For now there is k8s bay creation/deletion testing; I want to add more
swarm/mesos bay types, but I have tried several times and the gate fails
to create a swarm bay.

In my experience, the swarm master/nodes need network access outside the
swarm cluster. I wonder whether the gate can support such cases, and
whether we can do some debugging on the gate?

I have a series of patches:
https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/swarm-functional-testing,n,z

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] 2nd PRC hackathon event finished, need your help to review patches

2015-08-21 Thread Qiao, Liyong
Hi folks

We just finished the 2nd PRC hackathon this Friday.
For the nova project, we have 31 patches/bugs submitted or updated, and we
have put together an etherpad link to track all of them. Can you kindly
help review these patches at the link below?

https://etherpad.openstack.org/p/hackathon2_nova_list

BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Add periodic task threading for conductor server

2015-06-14 Thread Qiao,Liyong

hi magnum team,

I am planning to add periodic tasks to the magnum conductor service; this
will be useful for syncing task status with heat and the container service.
I already have a WIP patch [1], and I'd like to start a discussion on the
implementation.

Currently, the conductor service is an RPC server, and it has several handlers:
endpoints = [
    docker_conductor.Handler(),
    k8s_conductor.Handler(),
    bay_conductor.Handler(),
    conductor_listener.Handler(),
]
All handlers run in the RPC server.

1. My patch [1] adds periodic task functions to each handler (if it needs
such tasks), sets these functions up when the RPC server starts, and adds
them to a thread group (a rough sketch follows the three options below).
For example, if we have tasks in bay_conductor.Handler() and
docker_conductor.Handler(), then we add 2 threads to the current service's
thread group, and each thread runs its own periodic tasks.

The advantage is that each handler's tasks run in a separate thread, but
hongbin's concern is whether this will have some impact on horizontal
scalability.

2. Another implementation is to put all tasks in a single thread; this
thread runs all tasks (for bay, k8s, docker, etc.), just like sahara does;
see [2].

3. The last option is to start a new service in a separate process to run
the tasks (I think this would be too heavyweight/wasteful).
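
To make option 1 a bit more concrete, here is a simplified sketch of
"one periodic thread per handler". The real patch [1] would use oslo's
thread group rather than bare threads, and the handler name and interval
below are only illustrative.

import threading
import time

class BayHandler(object):
    """Stand-in for bay_conductor.Handler(); only some handlers define tasks."""
    periodic_interval = 10  # seconds between runs

    def run_periodic_tasks(self):
        # e.g. sync bay status with the corresponding heat stack here
        print("syncing bay status with heat")

def start_periodic_tasks(handlers):
    """Option 1: spawn one daemon thread per handler that declares tasks."""
    threads = []
    for handler in handlers:
        interval = getattr(handler, "periodic_interval", None)
        if not interval:
            continue  # this handler has no periodic work

        def _loop(h=handler, every=interval):
            while True:
                h.run_periodic_tasks()
                time.sleep(every)

        thread = threading.Thread(target=_loop, daemon=True)
        thread.start()
        threads.append(thread)
    return threads

# Right after the RPC server starts, this would be called over the same
# `endpoints` list shown above, e.g. start_periodic_tasks(endpoints).

Option 2 would be the same loop with a single thread iterating over every
handler's tasks; that keeps the threading model simpler, at the cost of one
slow task delaying all the others.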

I'd like to hear your suggestions; thanks in advance.

[1] https://review.openstack.org/#/c/187090/4
[2] 
https://github.com/openstack/sahara/blob/master/sahara/service/periodic.py#L118


--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC Candidacy

2015-04-27 Thread Qiao, Liyong
+1

BR, Eli(Li Yong)Qiao

From: David Lyle [mailto:dkly...@gmail.com]
Sent: Thursday, April 23, 2015 8:57 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] TC Candidacy

I'm announcing my candidacy for the Technical Committee elections.

I have been contributing to OpenStack since Grizzly primarily in Horizon. I 
have also had the privilege to serve as Horizon PTL since Icehouse.

Why I'm running:

I believe there should be broader representation on the TC. We are growing the 
OpenStack ecosystem. Let's make sure horizontal teams and diverse parts of the 
ecosystem are represented more directly. I understand concerns of scaling were 
the reason for moving from the TC made up of all PTLs (I question that 
assertion), but the sacrifice so far has been diversity. I feel current TC 
members are exceptionally capable and take a broad viewpoint, but there are 
limits of how well that works. Let's represent broader swaths of our ecosystem 
in the technical leadership.

I think growing the OpenStack ecosystem is fantastic. As a developer and the 
PTL of a project that tries to span most of that ecosystem, it also worries me 
a bit. I think we need to focus on how the newer and older parts of our 
ecosystem work together. How do we manage all the horizontal needs this 
introduces without going to the extreme of just scaling existing horizontal 
efforts, which won't work? And pushing all horizontal work onto the individual 
projects is not appropriate, because that yields chaos.

Finally, I believe the TC needs to be more active in guiding overall direction 
of OpenStack and problem resolution. I'm not suggesting a dictatorship of 
course. But let's set a direction, overall release goals for OpenStack and help 
enable and drive them.

I'm really proud to be a part of the OpenStack developer community, but I think 
we're facing some real challenges. We need to address some primary issues or 
this community will struggle to remain the vibrant, supportive place it is now.

Thank you,
David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [resend] [nova][libvirt] live migration failed due to invalid cpuset

2015-04-19 Thread Qiao, Liyong
Hi all,
Just resending to bring attention to this thread.

BR, Eli(Li Yong)Qiao

From: Qiao, Liyong [mailto:liyong.q...@intel.com]
Sent: Wednesday, April 15, 2015 10:22 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova][libvirt] live migration failed due to invalid cpuset

Hi all
Live migrating an instance can fail due to an invalid cpuset; more detail can
be found in this bug [1]. The exception is raised by python-libvirt's
migrateToURI2/migrateToURI. I'd like to get your ideas on this:

1. Disable live migration and raise an exception early, since
migrateToURI2/migrateToURI treat this as an error.
2. Manually check the cpuset in the instance's domain XML (we may need to
change the instance's numa_topology); this is kind of a hack.
3. Fix this in migrateToURI2/migrateToURI so that libvirt can live migrate
the instance.

In my opinion option 2 is better and much more reasonable, but I don't know
whether it is possible to make that change.
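
To illustrate what option 2 could look like, here is a toy sketch with
deliberately simplified data structures (the real change would work on the
instance's numa_topology and the destination host's CPU map, not plain
dicts and sets):

def remap_cpuset(vcpu_pins, dest_pcpus):
    """Rebuild vCPU -> pCPU pinning using only CPUs the destination has.

    vcpu_pins: dict mapping a vCPU number to a set of candidate pCPU ids.
    dest_pcpus: set of pCPU ids available on the destination host.
    """
    remapped = {}
    for vcpu, pcpus in vcpu_pins.items():
        valid = pcpus & dest_pcpus
        if not valid:
            # Fall back to option 1: refuse early, rather than letting
            # migrateToURI2 fail later with an invalid cpuset.
            raise ValueError("vCPU %d has no valid pCPU on the destination" % vcpu)
        remapped[vcpu] = valid
    return remapped

dest = set(range(8))                               # destination exposes pCPUs 0-7
print(remap_cpuset({0: {0, 1}, 1: {2, 3}}, dest))  # remaps cleanly
try:
    remap_cpuset({0: {8, 9}}, dest)                # pinned to CPUs the target lacks
except ValueError as exc:
    print("would refuse the live migration early:", exc)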

[1] https://launchpad.net/bugs/1440981

I'd like to hear your suggestions, thanks
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] live migration failed due to invalid cpuset

2015-04-15 Thread Qiao, Liyong
Hi all
Live migrating an instance can fail due to an invalid cpuset; more detail can
be found in this bug [1]. The exception is raised by python-libvirt's
migrateToURI2/migrateToURI. I'd like to get your ideas on this:

1. Disable live migration and raise an exception early, since
migrateToURI2/migrateToURI treat this as an error.
2. Manually check the cpuset in the instance's domain XML (we may need to
change the instance's numa_topology); this is kind of a hack.
3. Fix this in migrateToURI2/migrateToURI so that libvirt can live migrate
the instance.

In my opinion option 2 is better and much more reasonable, but I don't know
whether it is possible to make that change.

[1] https://launchpad.net/bugs/1440981

I'd like to hear your suggestions, thanks
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] In loving memory of Chris Yeoh

2015-04-08 Thread Qiao, Liyong
+1 from me.

Chris was also my leader at IBM for some time. He was a helpful and talkative 
man. I learned a lot from him; he worked so hard that I saw him sending out 
emails shortly before this, even while he was ill in bed.

We will never forget his contributions to the nova community: the nova v3 API, 
the nova v2.1 API, and the nova v2.1 microversion API.

I hope he rests in peace and won't be worried about review duty in heaven.
I will never forget his words when ending the scrum: “let's talk about it tomorrow, CU”.

BR, Eli(Li Yong)Qiao

From: Alex Xu [mailto:sou...@gmail.com]
Sent: Wednesday, April 08, 2015 5:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] In loving memory of Chris Yeoh

I feel very sad. Just a few weeks ago I still saw him active in the community. 
It is really hard to believe this happened so suddenly.

He was my leader at IBM and also mentored me in the OpenStack community; he 
offered lots of help without reservation, and I really learned a lot from him. 
We used to have a phone call meeting every morning; he always sounded happy 
and enthusiastic even after his health problems began.
May his soul rest in peace.

2015-04-08 12:49 GMT+08:00 Michael Still <mi...@stillhq.com>:

It is my sad duty to inform the community that Chris Yeoh passed away this 
morning. Chris leaves behind a daughter Alyssa, aged 6, who I hope will 
remember Chris as the clever and caring person that I will remember him as. I 
haven’t had a chance to confirm with the family if they want flowers or a 
donation to a charity. As soon as I know those details I will reply to this 
email.

Chris worked on open source for a very long time, with OpenStack being just the 
most recent in a long chain of contributions. He worked tirelessly on his 
contributions to Nova, including mentoring other developers. He was dedicated 
to the cause, with a strong vision of what OpenStack could become. He even 
named his cat after the project.

Chris might be the only person to have ever sent an email to his coworkers 
explaining what his code review strategy would be after brain surgery. It takes 
phenomenal strength to carry on in the face of that kind of adversity, but 
somehow he did. Frankly, I think I would have just sat on the beach.

Chris was also a contributor to the Linux Standards Base (LSB), where he helped 
improve the consistency and interoperability between Linux distributions. He 
ran the ‘Hackfest’ programming contests for a number of years at Australia’s 
open source conference -- linux.conf.au. He supported 
local Linux user groups in South Australia and Canberra, including involvement 
at installfests and speaking at local meetups. He competed in a programming 
challenge called Loki Hack, and beat out the world to win the event[1].

Alyssa’s memories of her dad need to last her a long time, so we’ve decided to 
try and collect some fond memories of Chris to help her along the way. If you 
feel comfortable doing so, please contribute a memory or two at 
https://docs.google.com/forms/d/1kX-ePqAO7Cuudppwqz1cqgBXAsJx27GkdM-eCZ0c1V8/viewform

Chris was humble, helpful and honest. The OpenStack and broader Open Source 
communities are poorer for his passing.

Michael

[1] http://www.lokigames.com/hack/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] will instance actions be deprecated in the future

2015-03-11 Thread Qiao, Liyong
Hi all
Will instance actions be deprecated in the future, since we have the
notification mechanism now? Currently, nova has instance actions and instance
action events to record specific actions performed on an instance. Some
enterprise users may need to compute the latency when they perform an action
on an instance.

I checked, and an instance action only records the start time, not the finish
time. Instance action events have both a start time and a finish time, but one
instance action may have several instance action events, so it is not
practical to check all of the events; we only care about the action time
itself, so a finish time needs to be added to the instance action.
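
For what it's worth, once a finish time is recorded the latency calculation
becomes trivial; the field names below only loosely mirror the instance
action records and are illustrative:

from datetime import datetime

def action_latency(action):
    """Return how long the action took, or None while it is still running."""
    if action.get("finish_time") is None:
        return None
    return action["finish_time"] - action["start_time"]

reboot = {
    "action": "reboot",
    "start_time": datetime(2015, 3, 11, 10, 0, 0),
    "finish_time": datetime(2015, 3, 11, 10, 0, 42),
}
print(action_latency(reboot))  # prints 0:00:42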

I made a patch [1] to add a finish time to instance actions (it still needs a
unit test fix). I wonder whether it is worth spending time debugging the unit
test issue, since I remember that instance actions will be deprecated soon.

[1] https://review.openstack.org/162910


Best Regards,
Eli(Li Yong) Qiao.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev