Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-03 Thread balaji patnala
Hi Sumit,

So the other network services like LBaaS and VPNaaS would also have to
support implicit and explicit 'commit' modes for configuration.

It is certainly a good idea to support implicit and explicit modes. It would
be good if all the other network services followed the same approach.
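
For the proposal quoted below, a purely hypothetical workflow might look
something like this (the commit verb does not exist today, and the exact CLI
names are only illustrative, not the real python-neutronclient commands):

  # stage rule changes; nothing is pushed to the backend firewall yet
  neutron firewall-rule-update rule1 --protocol tcp --action deny
  neutron firewall-rule-create --name rule2 --protocol tcp --action allow

  # hypothetical explicit commit: apply all staged changes atomically
  neutron firewall-commit fw1

  # show would then list the committed rules plus any pending changes
  neutron firewall-show fw1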

regards,
balaji
On Sat, Aug 3, 2013 at 7:13 AM, Sumit Naiksatam wrote:

> Hi All,
>
> In Neutron Firewall as a Service (FWaaS), we currently support an
> implicit commit mode, wherein a change made to a firewall_rule is
> propagated immediately to all the firewalls that use this rule (via
> the firewall_policy association), and the rule gets applied in the
> backend firewalls. This might be acceptable; however, it differs
> from the explicit commit semantics which most firewalls support.
> Having an explicit commit operation ensures that multiple rules can be
> applied atomically, as opposed to the implicit case, where each rule
> is applied individually and thus opens up the possibility of security
> holes between two successive rule applications.
>
> So the proposal here is quite simple -
>
> * When any changes are made to the firewall_rules
> (added/deleted/updated), no changes will happen on the firewall (only
> the corresponding firewall_rule resources are modified).
>
> * We will support an explicit commit operation on the firewall
> resource. Any changes made to the rules since the last commit will now
> be applied to the firewall when this commit operation is invoked.
>
> * A show operation on the firewall will show a list of the currently
> committed rules, and also the pending changes.
>
> Kindly respond if you have any comments on this.
>
> Thanks,
> ~Sumit.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Usage of mox through out the Openstack project.

2013-08-03 Thread Alessandro Pilotti
Based on an initial run of the Hyper-V Nova tests with pymox
(https://github.com/emonty/pymox), only 2 of the tests required some minor
adjustments; the rest ran perfectly fine after just replacing the
mox import line.

If we plan to support pymox in Havana, I'd be happy to send a patch for review 
with those fixes.
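
For reference, a typical mox-style test looks roughly like the sketch below
(illustrative only, not one of the actual Hyper-V tests); tests of this shape
generally need nothing more than the import line swapped:

  import unittest

  import mox  # the only line that changes when switching to the pymox package


  class Greeter(object):
      def greet(self, name):
          return 'hello %s' % name


  class GreeterTestCase(unittest.TestCase):
      def setUp(self):
          self.mox = mox.Mox()

      def tearDown(self):
          self.mox.UnsetStubs()

      def test_greet_is_stubbed(self):
          greeter = Greeter()
          self.mox.StubOutWithMock(greeter, 'greet')
          greeter.greet('world').AndReturn('hi world')
          self.mox.ReplayAll()
          self.assertEqual('hi world', greeter.greet('world'))
          self.mox.VerifyAll()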


Alessandro




On Jul 27, 2013, at 02:12, Joe Gordon <joe.gord...@gmail.com> wrote:


On Jul 26, 2013 5:53 AM, "Russell Bryant" <rbry...@redhat.com> wrote:
>
> On 07/26/2013 05:35 AM, Roman Bogorodskiy wrote:
> > Alex Meade wrote:
> >
> >> +1 to everything Russell just said and of course Blueprints for
> >> this. One for #3 (changing from mox -> Mock) would be good so
> >> that anyone who is bored or finds this urgent can collaborate.
> >> Also, we need to make sure reviewers are aware (Hopefully they
> >> are reading this).
> >>
> >> -Alex
> >
> > I have created a blueprint for Nova:
> >
> > https://blueprints.launchpad.net/nova/+spec/mox-to-mock-conversion
> >
> > Our team has some spare cycles, so we can work on that.
>
> Is this really worth it?  The original post identified pymox as
> compatible with mox and Python 3 compatible.  If that's the case,
> making a specific effort to convert everything doesn't seem very
> valuable.  The downsides are risk of changing test behavior and
> unnecessary burden on the review queue.
>

++

> If a test needs to be changed anyway, then that seems like an
> opportunistic time to go ahead and rewrite it.
>
> If you have spare cycles, I can probably help find something more
> valuable to work on.  Feel free to come talk to me.  I'd choose most
> of the open bugs over this, for example.  There are hundreds of them
> to choose from.  :-)
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] the performance degradation of swift PUT

2013-08-03 Thread John Dickinson
For those playing along from home, this question has been discussed at 
https://answers.launchpad.net/swift/+question/233444

--John


On Aug 3, 2013, at 10:34 AM, kalrey  wrote:

> hi openstackers,
> I'm learning Swift. I ran some benchmarks against Swift last week, and the
> results are not pleasant.
> When I PUT a large number of small files (4KB) under high concurrency, PUT
> performance degrades.
> The PUT rate can even reach 2000/s at the beginning, but it drops to 600/s
> after one minute and finally settles at around 100/s, with some errors such
> as '503' occurring. However, after I flush all the disks in the cluster it
> goes back up to 2000/s.
> I also benchmarked GET in the same environment, and it performs very well
> (5000/s).
>  
> Here is some information which may be useful:
> Test environment:
> Ubuntu 12.04
> 1 proxy-node : 128GB-ram / CPU 16core / 1Gb NIC*1
> 5 Storage-nodes : each for 128GB-ram / CPU 16core / 2TB*4 / 1Gb NIC*1.
> [bench]
> concurrency = 200
> object_size = 4096
> num_objects = 200
> num_containers = 200
> =
> I have traced the code of the PUT operation to find out what causes the
> performance degradation while putting objects. Some code takes a long time
> in ObjectController.PUT (swift/obj/server.py).
>  
> > for chunk in iter(lambda: reader(self.network_chunk_size), ''):
> >     start_time = time.time()
> >     upload_size += len(chunk)
> >     if time.time() > upload_expiration:
> >         self.logger.increment('PUT.timeouts')
> >         return HTTPRequestTimeout(request=request)
> >     etag.update(chunk)
> >     while chunk:
> >         written = os.write(fd, chunk)
> >         chunk = chunk[written:]
> >     sleep()
>  
> 'lambda: reader' takes 600ms per execution on average, and 'sleep()' takes
> 500ms per execution. 'fsync' also spends a lot of time flushing the file to
> disk at the end; I have already removed it, just for testing. I think these
> times are far too long.
> I monitored the cluster's resources while putting objects: bandwidth usage
> is very low and the CPU load is very light.
> I have tried setting vfs_cache_pressure to a low value, and it does not
> seem to help.
> Is there any advice on how to track down the problem?
> appreciate~
> kalrey
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Event API Access Controls

2013-08-03 Thread Jay Pipes

On 08/02/2013 06:26 PM, Herndon, John Luke (HPCS - Ft. Collins) wrote:

Hello, I'm currently implementing the event api blueprint[0], and am
wondering what access controls we should impose on the event api. The
purpose of the blueprint is to provide a StackTach equivalent in the
ceilometer api. I believe that StackTach is used as an internal tool,
with no access for end users. Given that the event api is
targeted at administrators, I am currently thinking that it should be
limited to admin users only. However, I wanted to ask for input on this
topic. Any arguments for opening it up so users can look at events for
their resources? Any arguments for not doing so? PS - I'm new to the
ceilometer project, so let me introduce myself. My name is John Herndon,
and I work for HP. I've been freed up from a different project and will
be working on ceilometer. Thanks, looking forward to working with
everyone! -john

[0]: https://blueprints.launchpad.net/ceilometer/+spec/specify-event-api


Welcome to the contributor community, John. :) I think defaulting the 
access to the service's events API endpoints to just admins makes the 
most sense, and you can use the existing policy engine to make that 
access configurable with the policy.json file.
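
For instance, something along these lines in ceilometer's policy.json would do
it (the rule names for the new event endpoints are hypothetical here; they
would be whatever the blueprint ends up defining):

  {
      "context_is_admin": "role:admin",
      "telemetry:events:index": "rule:context_is_admin",
      "telemetry:events:show": "rule:context_is_admin"
  }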


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Monitoring plugin file names

2013-08-03 Thread Gary Kotton
Hi,
As part of the utilization-aware-scheduling blueprint, a plugin has been added.
I have an issue with the placement of the drivers (the code is looking good :))
and would like to know what the community thinks.
Here are a few examples:

1. https://review.openstack.org/#/c/35760/17 - a new file has been added:
   nova/compute/plugins/libvirt_cpu_monitor_plugin.py

2. https://review.openstack.org/#/c/39190/ - a new file has been added:
   nova/compute/plugins/libvirt_memory_monitor_plugin.py

I think that these monitoring plugins should reside in either
nova/virt/libvirt/plugins or nova/compute/plugins/virt/libvirt.
It would be interesting to know what others think.
Thanks
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] the performance degradation of swift PUT

2013-08-03 Thread kalrey
hi openstackers,
I'm learning Swift. I ran some benchmarks against Swift last week, and the
results are not pleasant.
When I PUT a large number of small files (4KB) under high concurrency, PUT
performance degrades.
The PUT rate can even reach 2000/s at the beginning, but it drops to 600/s
after one minute and finally settles at around 100/s, with some errors such
as '503' occurring. However, after I flush all the disks in the cluster it
goes back up to 2000/s.
I also benchmarked GET in the same environment, and it performs very well
(5000/s).

Here is some information which may be useful:
Test environment:
Ubuntu 12.04
1 proxy-node : 128GB-ram / CPU 16core / 1Gb NIC*1
5 Storage-nodes : each for 128GB-ram / CPU 16core / 2TB*4 / 1Gb NIC*1.
[bench]
concurrency = 200
object_size = 4096
num_objects = 200
num_containers = 200
=
I have traced the code of the PUT operation to find out what causes the
performance degradation while putting objects. Some code takes a long time
in ObjectController.PUT (swift/obj/server.py).

> for chunk in iter(lambda: reader(self.network_chunk_size), ''):
>     start_time = time.time()
>     upload_size += len(chunk)
>     if time.time() > upload_expiration:
>         self.logger.increment('PUT.timeouts')
>         return HTTPRequestTimeout(request=request)
>     etag.update(chunk)
>     while chunk:
>         written = os.write(fd, chunk)
>         chunk = chunk[written:]
>     sleep()

'lambda: reader' takes 600ms per execution on average, and 'sleep()' takes
500ms per execution. 'fsync' also spends a lot of time flushing the file to
disk at the end; I have already removed it, just for testing. I think these
times are far too long.
I monitored the cluster's resources while putting objects: bandwidth usage is
very low and the CPU load is very light.
I have tried setting vfs_cache_pressure to a low value, and it does not seem
to help.
Is there any advice on how to track down the problem?
appreciate~



kalrey
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Snapshot List support

2013-08-03 Thread Scott Devoid
The only snapshot functions in the volume driver are create_snapshot,
delete_snapshot and create_volume_from_snapshot. That row should probably
be deleted from the wiki since listing snapshots occurs entirely via the
db/api.
I've added the current set of supported features for the Solaris ISCSI
driver to the wiki.
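
For reference, the driver-side snapshot hooks mentioned above look roughly
like this (a sketch from memory, not a verbatim copy of
cinder/volume/driver.py):

  class VolumeDriver(object):
      def create_snapshot(self, snapshot):
          """Create a snapshot of snapshot['volume_id'] on the backend."""
          raise NotImplementedError()

      def delete_snapshot(self, snapshot):
          """Delete the backend snapshot."""
          raise NotImplementedError()

      def create_volume_from_snapshot(self, volume, snapshot):
          """Create a new volume whose contents come from the snapshot."""
          raise NotImplementedError()

  # Note there is no list_snapshots() hook -- listing is served from the DB.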

Hijacking your thread for a moment: I would think that a
"revert_volume_to_snapshot" function would be useful. Is this implemented
via create_volume_from_snapshot, i.e. snapshot.volume == volume in the
arguments? Or does this functionality not exist?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][neutron] Reserved fixed IPs

2013-08-03 Thread Cristian Tomoiaga
Mark, Ian, thank you for your answers.

> Have you considered altering the allocation range of a subnet?  You can still
> create ports with IPs that are within the subnet, but outside of the
> allocation range.  You can then control which instances get the "reserved"
> IPs from the block that is outside of the allocation range.  If this does
> not work, I'd hold off making changes to the IPAM setup as this will be
> changing in early H3.

> mark


I went ahead and implemented an extension for Neutron to satisfy my needs.
Mark, I see you are currently working on a new IPAM implementation; that
should work with my extension. Mainly I just need an IP from IPAM when I
request one.

I need to track IP address usage; I'm not sure if this is done anywhere. I
need this to be able to reduce a tenant's ability to switch IPs at will
(e.g. a spammer switching IPs). Tracking will allow IPs to be allocated from
the ones already used by the same tenant. This will prevent a tenant from
getting a random IP each time he requests one and will "lock" him to the
same IPs he is or was using.
I need to allow a tenant to allocate a public IP directly to a port on a
VM. Much of the functionality is similar to floating IPs: the tenant sees a
list of reserved IPs and can then allocate one or more of those IPs to a VM
(a port on that VM). Also, no NATing is involved.

> It's already possible to port-create with an IP address-and-subnet
> specified, which seems like an effective way of allocating an address
> and setting it aside for later.  Doesn't this satisfy your needs?

> --
> Ian.

Ian,
indeed, I'm able to allocate a specific IP. Currently I go through my
extension to check for old IPs used by a tenant. If I find one, I will use
the functionality you mentioned. If not, I go through the default IPAM to
request an IP and through my extension to "reserve" it for the tenant.
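
To make that concrete, the combination Mark and Ian describe looks roughly
like this (flag spellings from memory, so please double-check against the
client help):

  # create the subnet with a deliberately small allocation pool, so part of
  # the range is never handed out automatically
  neutron subnet-create ext-net 203.0.113.0/24 \
      --allocation-pool start=203.0.113.100,end=203.0.113.200

  # later, explicitly hand a "reserved" address (outside the pool) to a port
  neutron port-create ext-net \
      --fixed-ip subnet_id=<subnet-uuid>,ip_address=203.0.113.5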


-- 
Regards,
Cristian Tomoiaga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Snapshot List support

2013-08-03 Thread Giulio Fidente

On 08/03/2013 10:49 AM, Mikhail Khodos wrote:

The Cinder Support Matrix
https://wiki.openstack.org/wiki/CinderSupportMatrix, states that
snapshot listing is implemented in drivers by: LVM, EMC, NetApp, IBM,
etc. However, I could not find any methods or interfaces in the cinder
volume api that implement snapshot listing. Could anyone clarify if
snapshot listing is supported only via DB?


You can list the snapshots via REST at /snapshots or /snapshots/detail.

Probably the easiest way to look further into this is the code of the
client:


https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volume_snapshots.py
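
A minimal usage sketch with the v2 client (credentials and endpoint below are
placeholders):

  from cinderclient.v2 import client

  # list all snapshots visible to the tenant
  cinder = client.Client('demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')
  for snap in cinder.volume_snapshots.list():
      print('%s %s (volume %s)' % (snap.id, snap.status, snap.volume_id))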
--
Giulio Fidente
GPG KEY: 08D733BA | IRC: giulivo

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Snapshot List support

2013-08-03 Thread Mikhail Khodos
The Cinder Support Matrix
https://wiki.openstack.org/wiki/CinderSupportMatrix, states that snapshot
listing is implemented in drivers by: LVM, EMC, NetApp, IBM, etc. However,
I could not find any methods or interfaces in the cinder volume api that
implement snapshot listing. Could anyone clarify if snapshot listing is
supported only via DB?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] FWaaS: Support for explicit commit

2013-08-03 Thread Stephen Gran

On 03/08/13 02:43, Sumit Naiksatam wrote:

Hi All,

In Neutron Firewall as a Service (FWaaS), we currently support an
implicit commit mode, wherein a change made to a firewall_rule is
propagated immediately to all the firewalls that use this rule (via
the firewall_policy association), and the rule gets applied in the
backend firewalls. This might be acceptable; however, it differs
from the explicit commit semantics which most firewalls support.
Having an explicit commit operation ensures that multiple rules can be
applied atomically, as opposed to the implicit case, where each rule
is applied individually and thus opens up the possibility of security
holes between two successive rule applications.


This all seems quite reasonable.


So the proposal here is quite simple -

* When any changes are made to the firewall_rules
(added/deleted/updated), no changes will happen on the firewall (only
the corresponding firewall_rule resources are modified).


I would leave the default as it currently is, and make this an optional 
mode that can be triggered with a parameter.  This seems to me to 
preserve the principle of least surprise for everyday operations, but 
allow for more complicated things when needed.
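
Something like the following, for example (the attribute name and the commit
call are entirely hypothetical, just to make the idea concrete):

  # opt a firewall in to explicit commit; the default stays implicit
  POST /v2.0/fw/firewalls
      {"firewall": {"name": "fw1", "commit_mode": "explicit", ...}}

  # rule edits are then staged until an explicit commit on that firewall
  PUT /v2.0/fw/firewalls/<fw-id>/commit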



* We will support an explicit commit operation on the firewall
resource. Any changes made to the rules since the last commit will now
be applied to the firewall when this commit operation is invoked.

* A show operation on the firewall will show a list of the currently
committed rules, and also the pending changes.

Kindly respond if you have any comments on this.


Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev