Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-03 Thread Stephen Gran

On 03/08/13 02:43, Sumit Naiksatam wrote:

Hi All,

In Neutron Firewall as a Service (FWaaS), we currently support an
implicit commit mode, wherein a change made to a firewall_rule is
propagated immediately to all the firewalls that use this rule (via
the firewall_policy association), and the rule is applied in the
backend firewalls. This might be acceptable; however, it differs from
the explicit commit semantics that most firewalls support. An explicit
commit operation ensures that multiple rules can be applied atomically,
whereas in the implicit case each rule is applied individually, which
opens up the possibility of security holes between two successive rule
applications.


This all seems quite reasonable.


So the proposal here is quite simple -

* When any changes are made to the firewall_rules
(added/deleted/updated), no changes will happen on the firewall (only
the corresponding firewall_rule resources are modified).


I would leave the default as it currently is, and make this an optional 
mode that can be triggered with a parameter.  This seems to me to 
preserve the principle of least surprise for everyday operations, while 
allowing for more complicated things when needed.



* We will support an explicit commit operation on the firewall
resource. Any changes made to the rules since the last commit will now
be applied to the firewall when this commit operation is invoked.

* A show operation on the firewall will show the list of currently
committed rules as well as the pending changes (see the illustrative
sketch below).
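
A rough sketch of what the proposed workflow might look like with
python-neutronclient (the commit call is hypothetical, since this is
only a proposal, and an optional flag as suggested above could retain
the current implicit behaviour):

    from neutronclient.v2_0 import client

    neutron = client.Client(username=USER, password=PASSWORD,
                            tenant_name=TENANT, auth_url=AUTH_URL)

    # Only the firewall_rule resource is modified; nothing is pushed
    # to the backend firewalls yet.
    neutron.update_firewall_rule(
        rule_id, {'firewall_rule': {'action': 'deny'}})

    # Hypothetical explicit commit: atomically apply all rule changes
    # made since the last commit to the backend firewall.
    neutron.commit_firewall(firewall_id)

    # Show would return both the committed rules and the pending changes.
    fw = neutron.show_firewall(firewall_id)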

Kindly respond if you have any comments on this.


Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com


[openstack-dev] [Cinder] Snapshot List support

2013-08-03 Thread Mikhail Khodos
The Cinder Support Matrix
(https://wiki.openstack.org/wiki/CinderSupportMatrix) states that snapshot
listing is implemented by drivers such as LVM, EMC, NetApp, and IBM. However,
I could not find any methods or interfaces in the Cinder volume API that
implement snapshot listing. Could anyone clarify whether snapshot listing is
supported only via the DB?
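
For reference, a minimal sketch of how snapshot listing is typically
invoked through python-cinderclient (v1-style auth; the credential names
are placeholders). As far as I can tell, the list call is served from the
Cinder database rather than by per-driver methods, which may be why it
does not appear in the driver interface:

    from cinderclient.v1 import client

    cinder = client.Client(USER, API_KEY, PROJECT_ID, AUTH_URL)

    # Listing snapshots goes through the volume API / database layer,
    # not through a driver-specific method.
    for snap in cinder.volume_snapshots.list():
        print(snap.id, snap.status)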


[openstack-dev] the performance degradation of swift PUT

2013-08-03 Thread kalrey
hi openstackers,
I'm learning Swift. I ran some benchmarks last week, and the results were not 
pleasant.
When I PUT a large number of small files (4KB) under high concurrency, PUT 
performance degrades.
PUT throughput can reach 2000/s at the beginning, but it drops to 600/s after 
one minute and eventually stabilizes at 100/s, with some errors such as '503' 
occurring. However, after I flushed all the disks in the cluster, it went back 
up to 2000/s.
I also ran GET benchmarks in the same environment, and they performed very 
well (5000/s).

Here is some information which may be useful:
Test environment:
Ubuntu 12.04
1 proxy node: 128GB RAM / 16-core CPU / 1Gb NIC*1
5 storage nodes: each with 128GB RAM / 16-core CPU / 2TB*4 / 1Gb NIC*1.
[bench]
concurrency = 200
object_size = 4096
num_objects = 200
num_containers = 200
=
I traced the code of the PUT operation to find out what causes the performance 
degradation while putting objects. Some code takes a long time in 
ObjectController::PUT (swift/obj/server.py).

    for chunk in iter(lambda: reader(self.network_chunk_size), ''):
        start_time = time.time()
        upload_size += len(chunk)
        if time.time() > upload_expiration:
            self.logger.increment('PUT.timeouts')
            return HTTPRequestTimeout(request=request)
        etag.update(chunk)
        while chunk:
            written = os.write(fd, chunk)
            chunk = chunk[written:]
        sleep()

'lambda: reader' takes 600ms per execution on average, and 'sleep()' takes 
500ms per execution. In fact, 'fsync' also spends a lot of time when the file 
is finally flushed to disk, so I removed it, just for testing. I think these 
times are too long.
I monitored the cluster's resources while putting objects: bandwidth usage was 
very low and the CPUs were lightly loaded.
I have tried setting vfs_cache_pressure to a low value, but it does not seem 
to help.
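
In case it helps anyone reproduce this, a simple way to confirm where the
time goes is to wrap the calls being measured; a minimal sketch (my own
illustrative instrumentation, not Swift code):

    import time

    def timed(label, fn, stats):
        # Accumulate per-call wall-clock time for fn under the given label.
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            stats.setdefault(label, []).append(time.time() - start)
            return result
        return wrapper

    # Usage: stats = {}; reader = timed('reader', reader, stats)
    # Averaging the samples per label shows whether reader(),
    # os.write() or sleep() dominates each PUT.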
Any advice on figuring out the problem would be appreciated.



kalrey


Re: [openstack-dev] [Ceilometer] Event API Access Controls

2013-08-03 Thread Jay Pipes

On 08/02/2013 06:26 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

Hello, I'm currently implementing the event API blueprint [0], and am
wondering what access controls we should impose on the event API. The
purpose of the blueprint is to provide a StackTach equivalent in the
Ceilometer API. I believe that StackTach is used as an internal tool,
with no access for end users. Given that the event API is targeted at
administrators, I am currently thinking that it should be limited to
admin users only. However, I wanted to ask for input on this topic. Any
arguments for opening it up so users can look at events for their own
resources? Any arguments for not doing so?

PS - I'm new to the Ceilometer project, so let me introduce myself. My
name is John Herndon, and I work for HP. I've been freed up from a
different project and will be working on Ceilometer. Thanks, looking
forward to working with everyone! -john

[0]: https://blueprints.launchpad.net/ceilometer/+spec/specify-event-api


Welcome to the contributor community, John. :) I think defaulting 
access to the service's events API endpoints to admins only makes the 
most sense, and you can use the existing policy engine to make that 
access configurable via the policy.json file.
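
For example, a policy.json entry along these lines (the exact rule names
are hypothetical and depend on how the blueprint names the event
endpoints):

    {
        "telemetry:events:index": "rule:context_is_admin",
        "telemetry:events:show": "rule:context_is_admin"
    }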


Best,
-jay




Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-08-03 Thread Alessandro Pilotti
Based on an initial run of the Hyper-V Nova tests with pymox 
(https://github.com/emonty/pymox), only 2 of the tests required some minor 
adjustments, while the rest ran perfectly fine after just replacing the mox 
import line.

If we plan to support pymox in Havana, I'd be happy to send a patch for review 
with those fixes.
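
For comparison, a full mox -> Mock rewrite (discussed in the quoted
thread below) is more invasive than swapping an import line; roughly,
with a hypothetical ComputeAPI class and fake_instance fixture
(illustrative sketch):

    import mox
    from unittest import mock  # plain 'import mock' on Python 2

    # mox style: record expectations, then replay and verify.
    m = mox.Mox()
    api = m.CreateMock(ComputeAPI)
    api.get('instance-1').AndReturn(fake_instance)
    m.ReplayAll()
    result = api.get('instance-1')
    m.VerifyAll()

    # Mock style: stub return values, assert calls afterwards.
    api = mock.Mock()
    api.get.return_value = fake_instance
    result = api.get('instance-1')
    api.get.assert_called_once_with('instance-1')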


Alessandro




On Jul 27, 2013, at 02:12, Joe Gordon joe.gord...@gmail.com wrote:


On Jul 26, 2013 5:53 AM, Russell Bryant rbry...@redhat.com wrote:


 On 07/26/2013 05:35 AM, Roman Bogorodskiy wrote:
  Alex Meade wrote:
 
  +1 to everything Russell just said and of course Blueprints for
  this. One for #3 (changing from mox -> Mock) would be good so
  that anyone who is bored or finds this urgent can collaborate.
  Also, we need to make sure reviewers are aware (hopefully they
  are reading this).
 
  -Alex
 
  I have created a blueprint for Nova:
 
  https://blueprints.launchpad.net/nova/+spec/mox-to-mock-conversion
 
  Our team has some spare cycles, so we can work on that.

 Is this really worth it?  The original post identified pymox as
 mox-compatible and Python 3 compatible.  If that's the case,
 making a specific effort to convert everything doesn't seem very
 valuable.  The downsides are the risk of changing test behavior and
 unnecessary burden on the review queue.


++

 If a test needs to be changed anyway, then that seems like an
 opportunistic time to go ahead and rewrite it.

 If you have spare cycles, I can probably help find something more
 valuable to work on.  Feel free to come talk to me.  I'd choose most
 of the open bugs over this, for example.  There are hundreds of them
 to choose from.  :-)

 --
 Russell Bryant

