[openstack-dev] [cinder]a problem about the implement of limit-volume-copy-bandwidth

2014-06-30 Thread Yuzhou (C)
Hi stackers,

I found some problems with the current implementation of 
limit-volume-copy-bandwidth (the patch was merged last week).

Firstly, assume that I configure volume_copy_bps_limit=10M. If the path 
is a block device, cgroup blkio can limit the copy bandwidth separately for 
every volume.
But if the path is a regular file, under the current implementation cgroup 
blkio has to limit the total copy bandwidth of all volumes on the disk device 
on which the file lies.
The reason is:
In cinder/utils.py, the method get_blkdev_major_minor

elif lookup_for_file:
    # look up the mounted disk on which the file lies
    out, _err = execute('df', path)
    devpath = out.split('\n')[1].split()[0]
    return get_blkdev_major_minor(devpath, False)

If copy_volume is invoked concurrently, the copy bandwidth each volume gets 
is less than 10M. In that case, the meaning of the volume_copy_bps_limit 
parameter in cinder.conf is different.

   Secondly, in NFS, the output of the 'df' command looks like this:
[root@yuzhou yuzhou]# df /mnt/111
Filesystem 1K-blocks  Used Available Use% Mounted on
186.100.8.144:/mnt/nfs_storage   51606528  14676992  34308096  30% /mnt
I think the method get_blkdev_major_minor cannot deal with a devpath like 
'186.100.8.144:/mnt/nfs_storage', i.e. it cannot limit volume copy bandwidth 
in the NFS driver.

So I think maybe we should modify the current implementation to make sure the 
copy bandwidth of every volume meets the configured limit.
I suggest we associate a loop device with the regular file (losetup 
/dev/loop0 /mnt/volumes/vm.qcow2),
then limit the bps of the loop device (cgset -r 
'blkio.throttle.write_bps_device=7:0 1000' test).
After copying the volume, detach the loop device (losetup --detach /dev/loop0).
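For clarity, here is a minimal sketch of the proposed flow in Python (the 
helper name, the dd-based copy and the pre-created blkio cgroup 
'cinder-volume-copy' are illustrative assumptions, not existing Cinder code):

    import os
    import subprocess

    def copy_to_file_with_bps_limit(src_dev, dest_file, bps_limit,
                                    cgroup_name='cinder-volume-copy'):
        # Attach a free loop device to the destination file so it gets
        # its own block-device major:minor pair to throttle.
        loop_dev = subprocess.check_output(
            ['losetup', '--find', '--show', dest_file]).strip()
        try:
            st = os.stat(loop_dev)
            rule = '%d:%d %d' % (os.major(st.st_rdev),
                                 os.minor(st.st_rdev), bps_limit)
            # Throttle writes to this loop device only; other volumes
            # on the same backing disk keep their full bandwidth.
            subprocess.check_call(
                ['cgset', '-r',
                 'blkio.throttle.write_bps_device=' + rule, cgroup_name])
            # Run the copy inside the cgroup so the limit applies to it.
            subprocess.check_call(
                ['cgexec', '-g', 'blkio:' + cgroup_name,
                 'dd', 'if=' + src_dev, 'of=' + loop_dev,
                 'bs=1M', 'oflag=direct'])
        finally:
            # Detach the loop device once the copy finishes.
            subprocess.check_call(['losetup', '--detach', loop_dev])

Because each copy throttles its own loop device, concurrent copies to files 
on the same backing disk would each get the configured limit, and the same 
path should work for files on an NFS mount, where df reports no block device.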

Any suggestions about this proposed improvement?

Thanks!

Zhou Yu





[openstack-dev] promote blueprint about deferred deletion for volumes

2014-06-12 Thread Yuzhou (C)

@John,

Thank you for your comments.

About blueprint volume-delete-protect 
(https://review.openstack.org/#/c/97034/): I think deferred deletion for 
volumes is valuable.

Firstly, in Cinder today, calling the volume-delete API means the 
volume is deleted immediately. If a user specifies the wrong volume by 
mistake, the data in that volume may be lost forever. To avoid this, we hope 
to add a deferred-deletion mechanism for volumes, so that for a certain 
amount of time a volume can be restored after the user discovers a mistaken 
deletion. So I think deferred deletion for volumes is valuable.

Moreover, deferred deletion is already implemented for instances in 
Nova and for images in Glance; I think it is a very common feature for 
protecting important resources.

So I would like to promote this blueprint soon.

@all,

I have submitted this blueprint (https://review.openstack.org/#/c/97034/). It 
introduces some complexity, and there are still differing opinions, so I would 
like to get more feedback about this BP.

Thanks.

Zhou Yu



[openstack-dev] [openstack][cinder] hope to get any feedback about deferred volume deletion

2014-06-12 Thread Yuzhou (C)
Hi all,

I have submitted a blueprint about deferred deletion for volumes in Cinder: 
https://review.openstack.org/#/c/97034/

Implementing deferred deletion for volumes will introduce some 
complexity, and stackers hold different opinions on this point. So we would 
like to get feedback from anyone, particularly cloud operators.

Here, I will state the importance of deferred deletion for volumes 
again.
Currently in Cinder, calling the volume-delete API means the volume is 
deleted immediately. If a user specifies the wrong volume by mistake, the 
data in that volume may be lost forever. To avoid this, I hope to add a 
deferred-deletion mechanism for volumes, so that for a certain amount of time 
a volume can be restored after the user discovers a mistaken deletion.
Moreover, deferred deletion is already implemented for instances in Nova and 
for images in Glance; I think it is a very common feature for protecting 
important resources.
So I think deferred deletion for volumes is valuable.
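To make the intended semantics concrete, here is a minimal sketch (the 
status values, the reclaim window and the helper names are illustrative 
assumptions, not the blueprint's actual code):

    import datetime

    RECLAIM_WINDOW = datetime.timedelta(hours=24)  # assumed config value

    def soft_delete_volume(volume):
        # Mark the volume instead of destroying the backing storage.
        volume['status'] = 'deleted-deferred'
        volume['deleted_at'] = datetime.datetime.utcnow()

    def restore_volume(volume):
        # Allowed only while the reclaim window is still open.
        if volume['status'] != 'deleted-deferred':
            raise ValueError('volume is not pending deletion')
        volume['status'] = 'available'
        volume['deleted_at'] = None

    def purge_expired(volumes, now=None):
        # Periodic task: really delete volumes past the window.
        now = now or datetime.datetime.utcnow()
        for vol in volumes:
            if (vol['status'] == 'deleted-deferred'
                    and now - vol['deleted_at'] > RECLAIM_WINDOW):
                destroy_backing_storage(vol)  # hypothetical helper
                vol['status'] = 'deleted'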

Your feedback and suggestions are welcome!

Thanks.

Zhou Yu



[openstack-dev] [vmware][neutron] a question: how can old vSphere versions be managed by OpenStack?

2014-05-18 Thread Yuzhou (C)
Hi stackers,
Currently the VMware Neutron plugin is the NSX plugin; it works with 
vCenter 5.5 or later and ESX 5.0 or later.
I have a question: if we have an old ESX version and an old vCenter version, 
but we want them to be managed by the OpenStack Icehouse release, what should 
we do?

Thanks

Zhou Yu


Re: [openstack-dev] [nova] discussion about adding support for SSD ephemeral storage

2014-04-17 Thread Yuzhou (C)
Hi Daniel,
The notion of an image type ('default', 'fast', 'shared', 
'sharedfast') looks like a volume type in Cinder. So I think there are two 
solutions:
1. As volume types are used to configure multiple storage back-ends in Cinder, 
we could extend the Nova API and create an image-type resource to configure 
multiple image back-ends in Nova.
e.g.
in nova.conf,
libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
sharedfast:rbd
instance_path=default:/var/nova/images/hdd, 
fast:/var/nova/images/ssd
images_rbd_pool=shared:main,sharedfast:mainssd

nova image-type-create normal_image
nova image-type-key normal_image root_disk_type=default
nova image-type-key normal_image ephemeral_disk_type=default
nova image-type-key normal_image swap_disk_type=default

nova image-type-create fast_image
nova image-type-key fast_image root_disk_type=fast
nova image-type-key fast_image ephemeral_disk_type=default
nova image-type-key fast_image swap_disk_type=fast

nova flavor-key m3.xlarge set quota:image-type=fast_image

 
2. As discussed in earlier mails, image types are defined in the configuration 
file as an enumerated type, i.e. set libvirt_image_type in nova.conf.
e.g.
in nova.conf,
libvirt_image_type=default:qcow2, fast:qcow2, shared:rbd, 
sharedfast:rbd
instance_path=default:/var/nova/images/hdd, 
fast:/var/nova/images/ssd
images_rbd_pool=shared:main,sharedfast:mainssd

nova flavor-key m3.xlarge set ephemeral_storage_type=fast
or more fine grained,
nova flavor-key m3.xlarge set quota:root_disk_type=fast
nova flavor-key m3.xlarge set quota:ephemeral_disk_type=default
nova flavor-key m3.xlarge set quota:swap_disk_type=fast


Which solution do you prefer?

If you prefer the second solution, I think it would be better to set 
libvirt_image_type like this: libvirt_image_type=default:raw:HDD,fast:raw:SSD.
What *fast* means, I think only the deployer of OpenStack knows clearly, so a 
description field would be needed; here HDD and SSD are the descriptions of 
the image type names.
Maybe with the second solution we would not need to create/delete image-type 
resources, but I think an API for listing image types is still needed. Do you 
think so?
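For discussion, a small sketch of how such a multi-entry option might be 
parsed (the parsing code and the 'name:format:description' form are 
assumptions for illustration, not existing Nova code):

    def parse_image_types(raw):
        # Each comma-separated entry is 'name:format' or
        # 'name:format:description'.
        types = {}
        for entry in raw.split(','):
            parts = entry.strip().split(':')
            name, fmt = parts[0], parts[1]
            desc = parts[2] if len(parts) > 2 else None
            types[name] = {'format': fmt, 'description': desc}
        return types

    # e.g. parse_image_types('default:raw:HDD, fast:raw:SSD') returns
    # {'default': {'format': 'raw', 'description': 'HDD'},
    #  'fast': {'format': 'raw', 'description': 'SSD'}}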


 I've already seen people asking for ability to have a choice of local image 
 backends per flavor even before you raised the SSD idea.
I have looked through the Nova blueprint list and have not found any blueprint 
about this idea, so I will register a BP and implement it.

Thanks.

Zhou Yu


 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: Wednesday, April 16, 2014 4:48 PM
 To: Yuzhou (C)
 Cc: openstack-dev@lists.openstack.org; Luohao (brian); Liuji (Jeremy); Bohai
 (ricky)
 Subject: Re: [openstack-dev][nova] discussion about add support to SSD
 ephemeral storage
 
 On Wed, Apr 16, 2014 at 02:17:13AM +, Yuzhou (C) wrote:
  Hi Daniel,
 
   Thanks for your comments about this
  BP:https://review.openstack.org/#/c/83727/
 
   My initial thoughts is to do little changes then get better
 performance of guest vm. So it is a bit too narrowly focused.
 
   After review SSD use case, I totally agree with your comments. I
 think if I want to implement the broader picture, there are many work items
 that need to do.
 
  1. Add support to create flavor with SSD ephemeral storage.
   The cloud adminstrator create the flavor that indicate which
  backend should be used per instance. e.g.
     nova flavor-key m1.ssd set quota:ephemeral_storage_type=ssd
     (root_disk ephemeral_disk and swap_disk are placed onto a ssd)
   Or more fine grained, e.g.
     nova flavor-key m1.ssd set quota:root_disk_type=ssd
     nova flavor-key m1.ssd set quota:ephemeral_disk_type=hd
     nova flavor-key m1.ssd set quota:swap_disk_type=ssd
     (root_disk and swap_disk are placed onto a ssd, ephemeral_disk is
     placed onto a harddisk)
 
 I don't think you should be using the term 'ssd' here, or indeed anywhere.
 We should just be letting the admin configure multiple local image types, and
 given them each a name. Then just refer to the image types by name.
 We don't need to care whether they're SSD backed or not - just that the
 admin can configure whatever backends they want to.  I've already seen
 people asking for ability to have a choice of local image backends per flavour
 even before you raised the SSD idea.
 
  2. When config nova, the deployer of openstack configure
  ephemeral_storage_pools e.g.
   if libvirt_image_type=default (local disk)
     ephemeral_storage_pools=path1,path2
   if libvirt_image_type=RBD
     ephemeral_storage_pools=rbd1,rbd2
 
 We have to bear

[openstack-dev] [nova] discussion about adding support for SSD ephemeral storage

2014-04-15 Thread Yuzhou (C)
Hi Daniel,

 Thanks for your comments about this 
BP:https://review.openstack.org/#/c/83727/
 
 My initial thought was to make small changes and get better performance 
for guest VMs, so it was a bit too narrowly focused.

 After reviewing the SSD use case, I totally agree with your comments. I 
think that to implement the broader picture, there are many work items that 
need to be done.

1. Add support for creating a flavor with SSD ephemeral storage.
 The cloud administrator creates a flavor that indicates which backend 
should be used per instance, e.g.
  nova flavor-key m1.ssd set quota:ephemeral_storage_type=ssd
(root_disk, ephemeral_disk and swap_disk are placed onto 
an SSD)
 Or, more fine grained, e.g.
  nova flavor-key m1.ssd set quota:root_disk_type=ssd
  nova flavor-key m1.ssd set quota:ephemeral_disk_type=hd
  nova flavor-key m1.ssd set quota:swap_disk_type=ssd
(root_disk and swap_disk are placed onto an SSD, 
ephemeral_disk is placed onto a hard disk)

2. When configuring Nova, the deployer of OpenStack configures 
ephemeral_storage_pools,
e.g.
 if libvirt_image_type=default (local disk):
  ephemeral_storage_pools=path1,path2
 if libvirt_image_type=RBD:
  ephemeral_storage_pools=rbd1,rbd2

3. According to the ephemeral storage types available on each compute host, 
nova-scheduler selects a compute node on which to create the VM (see the 
sketch after this list).

4. Assume that the SSD is mounted on path1 and the hard disk on path2, and 
that the end user selects the flavor with SSD ephemeral storage; when creating 
the VM, nova-compute places root_disk/ephemeral_disk/swap_disk onto path1.
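The toy sketch referenced in step 3 (the host-capability and extra-specs 
structures here are illustrative assumptions, not the real Nova filter API):

    def host_supports_flavor_storage(host_caps, flavor_extra_specs):
        # host_caps, e.g.: {'ephemeral_storage_types': ['default', 'ssd']}
        wanted = flavor_extra_specs.get('quota:ephemeral_disk_type',
                                        'default')
        return wanted in host_caps.get('ephemeral_storage_types', [])

A scheduler filter along these lines would simply skip hosts whose pools do 
not offer the storage type the flavor asks for.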

Is my description of the broader picture right?

More comments are welcome!

Thanks.

Zhou Yu



[openstack-dev] [nova] add configuration item to set virtual machine swapfile location

2014-03-19 Thread Yuzhou (C)
Hi everyone,

Currently, disk.swap (the swap file of an instance) is created under 
instances_path (default: /var/lib/nova/instances/vm-uuid). Maybe we should 
add a configuration item in nova.conf to set the virtual machine swap file 
location. With such a feature enabled, swap files could be placed onto 
separate, specified storage, e.g. an SSD.
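For illustration, a minimal sketch of what such an option could look like 
(the option name swap_path and the helper are assumptions, not merged Nova 
code; instances_path is assumed to be registered elsewhere, as in Nova):

    from oslo.config import cfg  # oslo-incubator style, circa 2014

    swap_opts = [
        cfg.StrOpt('swap_path',
                   default=None,
                   help='If set, directory in which instance swap files '
                        '(disk.swap) are created instead of '
                        'instances_path.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(swap_opts)

    def swap_file_path(instance_uuid):
        # Fall back to the normal instances_path when the new option
        # is unset.
        base = CONF.swap_path or CONF.instances_path
        return '%s/%s/disk.swap' % (base, instance_uuid)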

Thanks,

Zhou Yu



Re: [openstack-dev] [nova][cinder] non-persistent storage (after stopping the VM, data will be rolled back automatically), do you think we should introduce this feature?

2014-03-17 Thread Yuzhou (C)
Hi Duncan Thomas,

Maybe my statement about the approval process was not very precise. What 
I meant in my mail is:
in an enterprise private cloud, if you want to create a new VM beyond the 
quota, you need to wait for an approval process.

@stackers,

I think the following two use cases show why non-persistent disks are useful:

1. Non-persistent VDI:
When users access a non-persistent desktop, none of their settings or 
data is saved once they log out. At the end of a session, 
the desktop reverts to its original state and the user receives a 
fresh image the next time he logs in.
1). Image manageability: since non-persistent desktops are built from a 
master image, it's easier for administrators to patch and update the image, 
back it up quickly and deploy company-wide applications to all end users.
2). Greater security: users can't alter desktop settings or install 
their own applications, making the image more secure.
3). Less storage.

2. The use case mentioned several days ago by zhangleiqiang:

Let's take a virtual machine which hosts a web service, but it is 
primarily a read-only web site with content that rarely changes. This VM has 
three disks. Disk 1 contains the guest OS and web application (e.g. 
Apache). Disk 2 contains the web pages for the web site. Disk 3 contains all 
the logging activity.
 In this case, disk 1 (OS & app) uses dependent (default) settings and 
is backed up nightly. Disk 2 is independent non-persistent (not backed up, and 
any changes to these pages will be discarded). Disk 3 is independent 
persistent (not backed up, but any changes are persisted to the disk).
 If updates are needed to the web site's pages, disk 2 must be taken 
out of independent non-persistent mode temporarily to allow the changes to be 
made.
 Now let's say that this site gets hacked, and the pages are doctored 
with something which is not very nice. A simple reboot of this host will 
discard the changes made to the web pages on disk 2, but will persist the 
logs on disk 3 so that a root cause analysis can be carried out.

I hope to get more suggestions about non-persistent disks!

Thanks.

Zhou Yu




 -Original Message-
 From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
 Sent: Saturday, March 15, 2014 12:56 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
 stopping VM, data will be rollback automatically), do you think we shoud
 introduce this feature?
 
 On 7 March 2014 08:17, Yuzhou (C) vitas.yuz...@huawei.com wrote:
  First, generally, in public or private cloud, the end users of VMs
 have no right to create new VMs directly.
  If someone want to create new VMs, he or she need to wait for approval
 process.
  Then, the administrator Of cloud create a new VM to applicant. So the
 workflow that you suggested is not convenient.
 
 This approval process & admin action is the exact opposite to what cloud is
 all about. I'd suggest that anybody using such a process has little
 understanding of cloud and should be educated, not weird interfaces added
 to nova to support a broken premise. The cloud /is different/ from
 traditional IT, that is its strength, and we should be wary of undermining 
 that
 to allow old-style thinking to continue.
 


[openstack-dev] [neutron][Designate] A question about DNSaaS?

2014-03-14 Thread Yuzhou (C)
Hi stackers,

Are there any plans about DNSaaS on the neutron roadmap?

As far as I know, Designate provides DNSaaS for OpenStack.

Why is DNSaaS an independent service and not a network service like LBaaS or 
VPNaaS?

Thanks,

Zhou Yu



[openstack-dev] [neutron][QoS] How is the BP about ml2-qos going?

2014-03-10 Thread Yuzhou (C)
Hi stackers,

The BP about ml2-qos has been in code review for a long time.
Why hasn't the QoS implementation been merged into the Neutron master branch?
Could anyone who knows the history help me, or give me a hint on how to find 
the discussion mails?

Thanks.

Zhou Yu



Re: [openstack-dev] [nova][cinder] non-persistent storage (after stopping the VM, data will be rolled back automatically), do you think we should introduce this feature?

2014-03-07 Thread Yuzhou (C)
On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao chaoc...@gmail.com
 wrote:
 I think the current snapshot implementation can be a solution
 sometimes, but it is NOT exact same as user's expectation.
 For example, a new blueprint is created last week,

 https://blueprints.launchpad.net/nova/+spec/driver-specific-s
 napshot,
 which
 seems a little similar with this discussion. I feel the user
 is requesting Nova to create in-place snapshot (not a new
 image), in order to revert the instance to a certain state.
 This capability should be very useful when testing new
 software or system settings. It seems a short-term temporary
 snapshot associated with a running instance for Nova.
 Creating a new instance is not that convenient, and may be
 not feasible for the user, especially if he or she is using
 public cloud.

   
Why isn't it easy to create a new instance from a snapshot?
   

 On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
 divakar.padiyar-nanda...@hp.com wrote:

  Why reboot an instance? What is wrong with deleting it
  and create a new one?

 You generally use non-persistent disk mode when you are testing new
 software or experimenting with settings. If something goes wrong just
 reboot and you are back to clean state and start over again. I feel
 it's convenient to handle this with just a reboot rather than
 recreating the instance.

 Thanks,
 Divakar

 -Original Message-
 From: Joe Gordon [mailto:joe.gord...@gmail.com]
 Sent: Tuesday, March 04, 2014 10:41 AM
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [nova][cinder] non-persistent
 storage(after stopping VM, data will be rollback
 automatically), do you think we shoud introduce this
 feature?
 Importance: High

 On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
 zhangleiqi...@huawei.com
 wrote:
 
  This sounds like ephemeral storage plus snapshots.  You
  build a base image, snapshot it then boot from the
  snapshot.
 
 
  Non-persistent storage/disk is useful for sandbox-like
  environment, and this feature has already exists in VMWare
  ESX from version 4.1. The implementation of ESX is the same as
  what you said, boot from snapshot of the disk/volume, but it
  will also *automatically* delete the transient snapshot after
  the instance reboots or shutdowns. I think the whole procedure
  may be controlled by OpenStack other than user's manual
  operations.

 Why reboot an instance? What is wrong with deleting it and
 create a new one?

 
  As far as I know, libvirt already defines the
  corresponding transient element in domain xml for
  non-persistent disk ( [1] ), but it cannot specify the
  location of the transient snapshot. Although qemu-kvm has
  provided support for this feature by the -snapshot
  command argument, which will create the transient snapshot
  under /tmp directory, the qemu driver of libvirt don't
  support transient element currently.
 
  I think the steps of creating and deleting transient
  snapshot may be better to done by Nova/Cinder other than
  waiting for the transient support added to libvirt, as
  the location of transient snapshot should specified by
  Nova.
 
 
  [1] http://libvirt.org/formatdomain.html#elementsDisks
  --
  zhangleiqiang
 
  Best Regards
 
 
  -Original Message-
  From: Joe Gordon [mailto:joe.gord...@gmail.com]
  Sent: Tuesday, March 04, 2014 11:26 AM
  To: OpenStack Development Mailing List (not for usage
  questions)
  Cc: Luohao (brian)
  Subject: Re: [openstack-dev] [nova][cinder]
  non-persistent storage(after stopping VM, data will be
  rollback automatically), do you think we shoud introduce
  this feature?
 
  On Mon, Mar 3, 2014 at 6:00 PM, Yuzhou (C)
  vitas.yuz...@huawei.com
  wrote:
   Hi stackers,
  
   As far as I know ,there are two types of storage used
   by VM in
   openstack:
  Ephemeral Storage and Persistent Storage.
   Data on ephemeral storage ceases to exist when the
   instance it is associated
  with is terminated. Rebooting the VM or restarting the
  host server, however, will not destroy ephemeral data.
   Persistent storage means that the storage resource
   outlives any other
  resource and is always available, regardless of the state
  of a running instance.
  
   There is a use case that maybe need a new type of
   storage, maybe we can
  call it non-persistent storage .
   The use case is that VMs are assigned to the public
   ephemerally in public
  areas

[openstack-dev] [nova][cinder] non-persistent storage (after stopping the VM, data will be rolled back automatically), do you think we should introduce this feature?

2014-03-03 Thread Yuzhou (C)
Hi stackers,

As far as I know, there are two types of storage used by VMs in OpenStack: 
ephemeral storage and persistent storage.
Data on ephemeral storage ceases to exist when the instance it is associated 
with is terminated. Rebooting the VM or restarting the host server, however, 
will not destroy ephemeral data.
Persistent storage means that the storage resource outlives any other resource 
and is always available, regardless of the state of a running instance.

There is a use case that may need a new type of storage; maybe we can call it 
non-persistent storage.
The use case is VMs that are assigned to the public temporarily in public 
areas. After a VM is used, new data on its storage ceases to exist when the 
instance it is associated with is stopped.
That is, when the VM is stopped, the non-persistent storage used by the VM is 
rolled back automatically.
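For discussion, here is a minimal sketch of one way to get this behaviour 
with qcow2 overlays (plain qemu-img calls; the paths and flow are 
illustrative assumptions, not an existing Nova mechanism):

    import os
    import subprocess

    def start_non_persistent(base_image, overlay):
        # Create a throwaway overlay backed by the base image; the VM
        # writes only to the overlay.
        subprocess.check_call(
            ['qemu-img', 'create', '-f', 'qcow2', '-b', base_image,
             overlay])
        # ... boot the instance from 'overlay' instead of 'base_image'

    def stop_non_persistent(overlay):
        # Discard the overlay on stop; the next boot starts again from
        # the untouched base image.
        os.unlink(overlay)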

Are there any other suggestions, or any BPs about this use case?

Thanks!

Zhou Yu



Re: [openstack-dev] [neutron] the discussion about traffic storm protection in network virtualization environment

2014-03-03 Thread Yuzhou (C)

Hi laserjetyang,

There is one thing I am not quite sure, maybe you can coach me. by using OVS 
or OpenDayLight or floodlight, the east-west bound traffic will be defined as 
flow, and I personally don't understand how storm could happen in OpenFlow.
PS I could be wrong on this.

Thanks for your reply.

As far as I know, for ARP or DHCP broadcast request packets, an SDN controller 
can respond directly via the flow table instead of broadcasting.
But BUM (broadcast, unknown unicast, or multicast) traffic is not limited to 
these two packet types; for example, many apps use UDP broadcast.
Right now there are many types of packets that current SDN controllers cannot 
handle specially and can only forward normally.

In addition, I think traditional networks (without OpenFlow or SDN support) 
will still exist for a long time.

So I think BUM traffic will still exist, and traffic storms will still occur.

Thanks for your suggestions!

Zhou Yu



From: laserjetyang [mailto:laserjety...@gmail.com]
Sent: Sunday, March 02, 2014 10:38 AM
To: Yuzhou (C)
Subject: Re: [openstack-dev] [neutron]the discussion about traffic storm 
protection in network virtualization environment

You might want to list how a storm could happen when using either OVS or 
Linux Bridge. This looks to me like QoS control.

Right now, Neutron has more problems than traffic control. The L2 agent should 
be unified, and the L3 agent should be unified.
You might want to join the IRC chat and talk to Gary, Dan, locally you can 
approach Yong Sheng and the NEC guy to get a core sponsor.
To go further, can you protect the network traffic in nova-network? It is 
really not necessary to get a blueprint to achieve your goal in a nova-network 
setup. Neutron should be re-architected.

There is one thing I am not quite sure, maybe you can coach me. by using OVS or 
OpenDayLight or floodlight, the east-west bound traffic will be defined as 
flow, and I personally don't understand how storm could happen in OpenFlow.
PS I could be wrong on this.

On Thu, Feb 27, 2014 at 8:40 PM, Yuzhou (C) 
vitas.yuz...@huawei.com wrote:
Hi everyone:

A traffic storm occurs when broadcast, unknown unicast, or multicast 
(BUM) packets flood the LAN, creating excessive traffic and degrading network 
performance.
So physical switch or router offer traffic storm protection, these approaches:
1.Storm suppression, which enables to limit the size of monitored 
traffic passing through an Ethernet interface by setting a traffic threshold.
When the traffic threshold is exceeded, the interface discards all exceeding 
traffic.
2.Storm control, which enables to shut down Ethernet interfaces or 
block traffic when monitored traffic exceeds the traffic threshold. It also 
enables an interface to send trap or log messages when monitored traffic 
reaches a certain traffic threshold, depending on the configuration.
I want to get traffic storm protection in network virtualization 
environment as same as in physical network. So I registered a BP:  
https://blueprints.launchpad.net/neutron/+spec/traffic-protection  and
wrote a Wiki: https://wiki.openstack.org/wiki/Neutron/TrafficProtection

I would like your opinions about this subject. Specifically, how to 
avoid traffic storm and protect traffic in network virtualization environment ? 
Is there other approaches?
Welcome to share your experiences about it .

Thanks,

Zhou Yu





[openstack-dev] [neutron] the discussion about traffic storm protection in network virtualization environment

2014-02-27 Thread Yuzhou (C)
Hi everyone:

A traffic storm occurs when broadcast, unknown unicast, or multicast 
(BUM) packets flood the LAN, creating excessive traffic and degrading network 
performance.
Physical switches and routers therefore offer traffic storm protection, using 
these approaches:
1. Storm suppression, which limits the volume of monitored 
traffic passing through an Ethernet interface by setting a traffic threshold. 
When the traffic threshold is exceeded, the interface discards all excess 
traffic.
2. Storm control, which shuts down Ethernet interfaces or 
blocks traffic when monitored traffic exceeds the traffic threshold. It can 
also make an interface send trap or log messages when monitored traffic 
reaches a certain threshold, depending on the configuration.
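To make approach 1 concrete, here is a toy sketch of threshold-based storm 
suppression (a simple per-second packet counter; the class and the numbers 
are illustrative assumptions, not an existing implementation):

    import time

    class StormSuppressor(object):
        # Drop BUM packets on a port once a packets-per-second
        # threshold is exceeded within the current one-second window.

        def __init__(self, pps_threshold):
            self.pps_threshold = pps_threshold
            self.window_start = time.time()
            self.count = 0

        def allow(self, now=None):
            now = now or time.time()
            if now - self.window_start >= 1.0:
                self.window_start = now
                self.count = 0
            self.count += 1
            return self.count <= self.pps_threshold

    # e.g. suppressor = StormSuppressor(pps_threshold=1000)
    # for each BUM packet on the port: forward only if suppressor.allow()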
I want to get traffic storm protection in a network virtualization 
environment, the same as in a physical network. So I registered a BP 
(https://blueprints.launchpad.net/neutron/+spec/traffic-protection) and 
wrote a wiki page: https://wiki.openstack.org/wiki/Neutron/TrafficProtection

I would like your opinions on this subject. Specifically, how can we 
avoid traffic storms and protect traffic in a network virtualization 
environment? Are there other approaches?
You are welcome to share your experiences.

Thanks,

Zhou Yu





Re: [openstack-dev] [Cinder] [Nova] Do you think volume force delete operation should not apply to the volume being used?

2014-02-25 Thread Yuzhou (C)
I think "force delete" = nova detach volume, then cinder delete volume.

The volume status in the DB should be modified after nova detaches the volume.

Thanks!


From: zhangyu (AI) [mailto:zhangy...@huawei.com]
Sent: Wednesday, February 26, 2014 8:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Cinder] [Nova]Do you think volume force delete 
operation should not apply to the volume being used?

If I understand your question correctly, the case you describe is like 
the following:

Assume we have created both an instance and a volume, and then we try to 
attach that volume to the instance.
Before that operation is completed (the status of the volume is now 
"attaching"), for whatever reason we decide to apply a "force delete" 
operation on that volume.
Then, after we apply that force delete, we see that, from the Cinder 
side, the volume has been successfully deleted and its status is indeed 
"deleted".
However, from the Nova side, we see that the status of the deleted volume 
remains "attaching".

If this is truly your case, I think it is a bug. The reason might be that 
Cinder forgets to refresh the attach_status attribute of the volume in the DB 
when applying a "force delete" operation.
Are there any other suggestions?

Thanks!



From: yunling [mailto:yunlingz...@hotmail.com]
Sent: Monday, February 17, 2014 9:14 PM
To: openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Cinder]Do you think volume force delete operation 
should not apply to the volume being used?

Hi stackers:


I found that the volume status becomes inconsistent between Nova and Cinder 
(the Nova volume status is "attaching", versus the Cinder volume status 
"deleted") when doing a "force delete" operation on an attaching volume.
I think the volume force delete operation should not apply to a volume being 
used, which includes the attach statuses "attaching", "attached" and 
"detached".


What do you think?


thanks


Re: [openstack-dev] Neutron ML2 and openvswitch agent

2014-02-25 Thread Yuzhou (C)
If you want to know exactly how the ML2 plugin running on the
neutron server communicates with the openvswitch
agents, you should read the code: rpc.py, plugin.py and ovs_neutron_agent.py.

In rpc.py, the RPC endpoints are defined:
the RpcCallbacks class defines the RPCs that agents send to the ML2 plugin;
the AgentNotifierApi class defines the RPCs that the ML2 plugin sends to agents.

In plugin.py, you should pay close attention to def _setup_rpc(self).

RPCs the ML2 plugin sends to agents (the agent processes RPCs from the plugin):
network_delete
port_update
security_groups_rule_updated
security_groups_member_updated
security_groups_provider_updated
tunnel_update

RPCs the ML2 plugin processes from agents (the agent sends RPCs to the plugin):
report_state
get_device_details
update_device_down
update_device_up
tunnel_sync
security_group_rules_for_devices
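A stripped-down sketch of the notifier pattern used there (toy code, not 
the actual Neutron classes; the cast callable stands in for the oslo RPC 
machinery):

    class ToyAgentNotifier(object):
        # Plugin-side helper: fan out events to agents over per-event
        # topics, as ML2's AgentNotifierApi does.

        def __init__(self, cast, topic_prefix='q-agent-notifier'):
            self.cast = cast  # e.g. a function(topic, method, payload)
            self.topic_prefix = topic_prefix

        def port_update(self, port):
            # Fire-and-forget broadcast: every agent subscribed to the
            # port-update topic receives the message.
            self.cast(self.topic_prefix + '-port-update', 'port_update',
                      {'port': port})

    # usage: notifier = ToyAgentNotifier(cast=my_rabbitmq_cast)
    #        notifier.port_update({'id': 'some-uuid', 'admin_state_up': True})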






Re: [openstack-dev] Neutron ML2 and openvswitch agent
Sławek Kapłoński Tue, 25 Feb 2014 12:31:54 -0800
Hello,

Trinath, I saw this presentation before you sent it to me. There is a nice 
explanation of what methods are (and should be) in a type driver and a mech 
driver, but I needed exactly the information that Assaf sent me. Thanks to both 
of you for your help :)

--
Best regards
Sławek Kapłoński
On Tuesday, 25 February 2014 12:18:50, Assaf Muller wrote:

 - Original Message -
 
  Hi
  
  Hope this helps
  
  http://fr.slideshare.net/mestery/modular-layer-2-in-openstack-neutron
  
  ___
  
  Trinath Somanchi
  
  _
  From: Sławek Kapłoński [sla...@kaplonski.pl]
  Sent: Tuesday, February 25, 2014 9:24 PM
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] Neutron ML2 and openvswitch agent
  
  Hello,
  
   I have a question for you guys. Can someone explain to me (or send a link
   with such an explanation) how exactly the ML2 plugin running on the
   neutron server communicates with the openvswitch agents on compute
   hosts?
 
 Maybe this will set you on your way:
 ml2/plugin.py:Ml2Plugin.update_port uses _notify_port_updated, which then
 uses ml2/rpc.py:AgentNotifierApi.port_update, which makes an RPC call with
 the topic stated in that file.
 
 When the message is received by the OVS agent, it calls:
 neutron/plugins/openvswitch/agent/ovs_neutron_agent.py:OVSNeutronAgent.port_
 update.
   I suppose that this is working with rabbitmq queues, but I need
   to add my own function which will be called in this agent and I don't know
   how to do that. It would be perfect if such a thing were possible by
   writing, for example, a new mechanism driver in the ML2 plugin (but how?).
  Thanks in advance for any help from You :)
  
  --
  Best regards
  Slawek Kaplonski
  sla...@kaplonski.pl
  


[openstack-dev] Re: [neutron] The mechanism of physical_network & segmentation_id is logical?

2014-02-24 Thread Yuzhou (C)

2014-02-24 21:50 GMT+08:00 Robert Kukura rkuk...@redhat.com:
 On 02/24/2014 07:09 AM, 黎林果 wrote:
 Hi stackers,

 When creating a network, if we don't set provider:network_type,
 provider:physical_network or provider:segmentation_id, the
 network_type will come from cfg, but the other two come from the db's first
 record. The code is

 (physical_network,
  segmentation_id) = ovs_db_v2.reserve_vlan(session)



   There are two questions.
   1, network_vlan_ranges = physnet1:100:200
  Can we configure multiple physical_networks in cfg?

 Hi Lee,

 You can configure multiple physical_networks. For example:

 network_vlan_ranges=physnet1:100:200,physnet1:1000:3000,physnet2:2000:4000,physnet3

 This makes ranges of VLAN tags on physnet1 and physnet2 available for
 allocation as tenant networks (assuming tenant_network_type = vlan).

 This also makes physnet1, physnet2, and physnet3 available for
 allocation of VLAN (and flat for OVS) provider networks (with admin
 privilege). Note that physnet3 is available for allocation of provider
 networks, but not for tenant networks because it does not have a range
 of VLANs specified.


   2, If yes, the physical_network chosen would be uncertain. Is this logical?

 Each physical_network is considered to be a separate VLAN trunk, so VLAN
 2345 on physnet1 is a different isolated network than VLAN 2345 on
 physnet2. All the specified (physical_network,segmentation_id) tuples
 form a pool of available tenant networks. Normal tenants have no
 visibility of which physical_network trunk their networks get allocated on.

 -Bob



 Regards!

 Lee Li


Why do you say "VLAN 2345 on physnet1 is a different isolated network than 
VLAN 2345 on physnet2"?

I think different physnets make traffic go out through different physical 
NICs, but this traffic carries the same VLAN tag 2345! So why is it isolated?

Regards

Zhou Yu
