Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Eugene Nikanorov
Hi Stephen,




 While in the second approach the VIP remains the keeper of L2/L3 information,
 and the listeners keep the L4+ information.
 That seems clearer.


 There's a complication though: Pools may also need some L2/L3 information
 (per the discussion of adding subnet_id as an attribute of the pool, eh.)

Right, the pool needs that, but I'm talking about the frontend here. Obviously, in
case the loadbalancer balances traffic over several pools that may be on
different subnets, it may need to have L2 ports on each of them, just as
you said below.


 And actually, there are a few cases that have been discussed where
 operators do want users to be able to have some (limited) control over the
 back end. These almost all have to do with VIP affinity.

The need for that is understood; however, I think there are other options
based on smarter scheduling and SLAs.

Also we've heard objections to this approach several times from other core
 team members (this discussion has been going on for more than half a year
 now), so I would suggest moving forward with the single-L2-port approach. Then
 the question comes down to terminology: loadbalancer/VIPs or VIP/Listeners.


 To be fair this is definitely about more than terminology. In the examples
 you've listed mentioning loadbalancer objects, it seems to me that you're
 ignoring that this model also still contains Listeners.  So, to be more
 accurate, it's really about:

 loadbalancer/VIPs/Listeners or VIPs/Listeners.

To me it seems that loadbalancer/VIPs/Listeners is only needed for
multiple L2 endpoints, e.g.:
container / n x L2 / m x L4+
In the single-L2-endpoint case (I'm, again, talking about the frontend), if we
say that the VIP is L4+ only (TCP port, protocol, etc.), then to properly handle
multiple VIPs in this case, the L2/L3 information should be stored in the
loadbalancer.
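
To make the difference concrete, here is a rough sketch of the two shapes
(purely illustrative; the attribute names are my assumptions, not a proposed
schema):

# Shape 1: loadbalancer/VIPs/Listeners -- the container keeps the L2/L3 info
class LoadBalancer(object):
    def __init__(self, vip_subnet_id, vip_address=None):
        self.vip_subnet_id = vip_subnet_id   # L2/L3 lives on the container
        self.vip_address = vip_address
        self.listeners = []                  # m x L4+

# Shape 2: VIP/Listeners -- the VIP itself is the root and keeps the L2/L3 info
class VIP(object):
    def __init__(self, subnet_id, address=None):
        self.subnet_id = subnet_id           # single L2 endpoint
        self.address = address
        self.listeners = []                  # m x L4+

class Listener(object):
    def __init__(self, protocol, protocol_port):
        self.protocol = protocol             # e.g. "HTTP"
        self.protocol_port = protocol_port   # e.g. 80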


 To me, that says it's all about: Does the loadbalancer object add
 something meaningful to this model?  And I think the answer is:

 * To smaller users with very basic load balancing needs: No (mostly,
 though to many it's still yes)

Agree.

 * To larger customers with advanced load balancing needs:  Yes.

* To operators of any size: Yes.

While operators may want to operate/monitor backends directly, that seems
to be out of the scope of the tenant API.
We need to evaluate those 'advanced needs' for tenants and see if we can
address them without making LBaaS a proxy between the user and the LB appliance.

I've outlined my reasoning for thinking so in the other discussion thread.

The reasoning seems clear to me; I just suggest that there are other
options that could help in supporting those advanced cases.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Eugene Nikanorov
Hi Carlos,



  On May 9, 2014, at 3:36 PM, Eugene Nikanorov enikano...@mirantis.com
  wrote:

 Also we've heard objection to this approach several times from other core
 team members (this discussion has been going for more than half a year
 now), so I would suggest to move forward with single L2 port approach.


 Objections to multiple ports per loadbalancer or objections to the
 Loadbalancer object itself?

  If it's the latter then you may have a valid argument by authority,
 but it's impossible to verify because these core team members are
 remaining silent throughout all these discussions. We can't be dissuaded
 by FUD (Fear, Uncertainty and Doubt) that these silent core team
 members will suddenly reject this discussion in the future. We aren't going
 to put our users at risk due to FUD.

I think you had a chance to hear this argument yourself (from several
different core members: Mark McClain, Salvatore Orlando, Kyle Mestery) at
the meetings we had in the past 2 months.
I was advocating 'loadbalancer' (in its extended version) once too,
and received negative opinions as well.
In general this approach puts too much control of the backend in the user's
hands, and this goes in the opposite direction from the Neutron project.

If it's just about the name of the root object - VIP suits this role too,
so I'm fine with that. I also find the VIP/Listeners model a bit clearer
per the definitions in our glossary.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Docker] Master node bootstrapping issues

2014-05-10 Thread Mike Scherbakov
It is not related to RAM or CPU. I run the installation on my Mac with 1 GB of
RAM for the master node, and experience the following:

   - yes, it needs time to bootstrap the admin node
   - As soon as I get the message that the master node is installed, I immediately
   open 10.20.0.2:8000 and try to generate a diag snapshot. And it fails.
   - If I wait a few more minutes and try again, it passes.

It actually seems to me that we simply still do not have
https://bugs.launchpad.net/fuel/+bug/1315865 fixed, I'll add more details
there as well as logs.

When I checked logs, I saw:

   - for about a minute, astute was not able to connect to MQ. Does that mean it
   is still being started before MQ is ready?
   - shotgun -c /tmp/dump_config > /var/log/dump.log 2>&1 && cat
   /var/www/nailgun/dump/last returned 1

When I tried to run diag_snapshot for a second time, the command above
succeeded with 0 return code.

So it obviously needs further debugging, and in my opinion even if we need
to increase vCPU or RAM, it should be no more than 2 vCPUs / 2 GB.

Vladimir, as you and Matt were changing the code which should run the
containers in a certain order, I'm looking forward to hearing suggestions
from both of you on where and how we should hack it.
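
Just to illustrate the kind of ordering guard I have in mind (the host/port
and the approach itself are my assumptions, not how astute actually works), a
minimal wait-for-MQ check before starting astute could look like this:

import socket
import time

def wait_for_mq(host="10.20.0.2", port=5672, timeout=120, interval=3):
    # Block until a TCP connection to the MQ endpoint succeeds, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            sock = socket.create_connection((host, port), timeout=interval)
            sock.close()
            return True
        except socket.error:
            time.sleep(interval)
    return False

if not wait_for_mq():
    raise SystemExit("MQ did not become reachable, not starting astute")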

Thanks,


On Sat, May 10, 2014 at 1:04 AM, Vladimir Kuklin vkuk...@mirantis.com wrote:

 Hi all

 We are still experiencing some issues with master node bootstrapping after
 moving to container-based installation.

 First of all, these issues are related to our system tests. We have rather
 small nodes running as the master node - only 1 GB of RAM and 1 virtual CPU. As
 we are using a strongly lrzipped archive, this seems not quite enough and
 leads to timeouts during deployment of the master node.

 I have several suggestions:

 1) Increase the amount of RAM for the master node to at least 8 gigabytes (or do
 some PCI virtual memory hotplug during master node bootstrapping) and add
 an additional vCPU for the master node.
 2) Run system tests with a non-containerized environment (env variable
 PRODUCTION=prod set)
 3) Split our system tests so that no more than 2 master
 nodes bootstrap simultaneously on a single hardware node.
 4) Do lrzipping as weakly as possible during the development phase and lrzip
 strongly only when we do a release
 5) Increase the bootstrap timeout for the master node in system tests


 Any input would be appreciated.

 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Eugene Nikanorov
Hi Stephen,

Some comments on comments on comments:

On Fri, May 9, 2014 at 10:25 PM, Stephen Balukoff sbaluk...@bluebox.net wrote:

 Hi Eugene,

 This assumes that 'VIP' is an entity that can contain both an IPv4 address
 and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should actually
 contain two ip addresses like this.)

That seems like a minor issue to me. Maybe we can just introduce a statement
that a VIP has an L2 endpoint first of all?

In my mind, the main reasons I would like to see the container object are:


- It solves the colocation / apolocation (or affinity / anti-affinity)
problem for VIPs in a way that is much more intuitive to understand and
less confusing for users than either the hints included in my API, or
something based off the nova blueprint for doing the same for virtual
servers/containers. (Full disclosure: There probably would still be a need
for some anti-affinity logic at the logical load balancer level as well,
though at this point it would be an operator concern only and expressed to
the user in the flavor of the logical load balancer object, and probably
be associated with different billing strategies. The user wants a
dedicated physical load balancer? Then he should create one with this
flavor, and note that it costs this much more...)

 In fact, that can be solved by scheduling, without letting the user control
that. The Flavor Framework will be able to address that.


- From my experience, users are already familiar with the concept of
what a logical load balancer actually is (ie. something that resembles a
physical or virtual appliance from their perspective). So this probably
fits into their view of the world better.

 That might be so, but apparently it goes in the opposite direction from
Neutron in general (i.e. more abstraction).


- It makes sense for Load Balancer as a Service to hand out logical
load balancer objects. I think this will aid in a more intuitive
understanding of the service for users who otherwise don't want to be
concerned with operations.
- This opens up the option for private cloud operators / providers to
bill based on number of physical load balancers used (if the logical load
balancer happens to coincide with physical load balancer appliances in
their implementation) in a way that is going to be seen as more fair and
more predictable to the user because the user has more control over it.
And it seems to me this is accomplished without producing any undue burden
on public cloud providers, those who don't bill this way, or those for whom
the logical load balancer doesn't coincide with physical load balancer
appliances.

 I don't see how 'loadbalancer' is better than 'VIP' here, other than being
a term a bit closer to 'logical loadbalancer'.



- Attaching a flavor attribute to a logical load balancer seems like
a better idea than attaching it to the VIP. What if the user wants to
change the flavor on which their VIP is deployed (ie. without changing IP
addresses)? What if they want to do this for several VIPs at once? I can
definitely see this happening in our customer base through the lifecycle of
many of our customers' applications.

 I don't see any problems with the above cases if VIP is the root object.


- Having flavors associated with load balancers and not VIPs also
allows for operators to provide a lot more differing product offerings to
the user in a way that is simple for the user to understand. For example:
   - Flavor A is the cheap load balancer option, deployed on a
   shared platform used by many tenants that has fewer guarantees around
   performance and costs X.
   - Flavor B is guaranteed to be deployed on vendor Q's Super
   Special Product (tm) but to keep down costs, may be shared with other
   tenants, though not among a single tenant's load balancers unless the
   tenant uses the same load balancer id when deploying their VIPs (ie. user
   has control of affinity among their own VIPs, but no control over whether
   affinity happens with other tenants). It may experience variable
   performance as a result, but has higher guarantees than the above and costs
   a little more.
   - Flavor C is guaranteed to be deployed on vendor P's Even
   Better Super Special Product (tm) and is also guaranteed not to be shared
   among tenants. This is essentially the dedicated load balancer option
   that gets you the best guaranteed performance, but costs a lot more than
   the above.
   - ...and so on.

 Right, that's how flavors are supposed to work, but that's again unrelated
to whether we make VIP or loadbalancer our root object.



[openstack-dev] [Heat] [Murano] [Solum] [TC] agenda for cross-project session - how to handle app lifecycle ?

2014-05-10 Thread Ruslan Kamaldinov
I'd like to propose the following agenda for the cross-project session
"Solum, Murano, Heat: how to handle app lifecycle?":

A number of projects are looking at going up the stack and handling application
workload lifecycle management. This workshop aims at placing them all in the
same room and creating opportunities for convergence in the future.

1. ~10-15 minutes to present Murano, including answering questions. We will
   cover the mission, positioning (relation to other OpenStack projects) and further
   roadmap, which addresses concerns about duplication of functionality with other
   projects. We've prepared a document [1]; please append your questions or
   topics to discuss upfront.

2. ~10-15 minutes to present Solum, including QA. I would expect some
   overview of Solum too. Solum team, please expand your plan in the linked
   etherpad [2].

3. Open discussion. The goal is to find common ground on application lifecycle
   in OpenStack, determine project responsibilities (e.g. Heat does X, Murano
   does Y, Solum does Z), and discuss a plan to eliminate any functionality
   duplication between these projects.

If there is another project or initiative willing to present their work please
let us know.

I've added TC to the subject because we might need someone neutral and
authoritative to moderate the session. Also, a number of questions
were raised in the comments on this session; I'll quote Steven Dake:
 I'd be interested in having some TC representation at this session to
 understand how the TC would consider organizing these efforts (separate
 programs or expand scope of orchestration program).

At least one TC member presence would be very welcome :)

If you have a topic to add, please update session etherpad [2].


[1] https://etherpad.openstack.org/p/wGy9S94l3Q
[2] https://etherpad.openstack.org/p/9XQ7Q2NQdv


--
Ruslan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [cinderdriver]

2014-05-10 Thread Mardan Raghuwanshi
Hello All,

I developed Cinder drivers for CloudByte's ElastiStor. Now we want to make
our driver part of the core OpenStack release, but I am not aware of the
OpenStack commit process.
Please help me...



Thanks,
Mardan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-10 Thread Shyam Prasad N

Hi Clay,
First of all, thanks for the reply.

1. How can I update the eventlet version? I installed swift from source
(git). Will pulling the latest code help?
2. Yes. Recently my clients changed to chunked encoding for transfers.
Are you saying chunked encoding is not supported by swift?

3. Yes, the 408s have made it to the clients from proxy servers.

Regards,
Shyam

On Friday 09 May 2014 10:45 PM, Clay Gerrard wrote:
I thought those tracebacks only showed up with old versions of
eventlet or with eventlet_debug = true?


In my experience that normally indicates a client disconnect on a
chunked encoding transfer request (a request w/o a content-length).  Do
you know if your clients are using Transfer-Encoding: chunked?
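
For what it's worth, a toy illustration (not swift or eventlet code, just
mimicking the line shown in the traceback) of why a mid-transfer disconnect
surfaces as that ValueError:

def read_chunk_size(line):
    # A chunked body frames data as "<hex length>\r\n<data>\r\n"; the
    # traceback shows the length parsed via int(readline().split(";", 1)[0], 16).
    return int(line.split(";", 1)[0], 16)

print(read_chunk_size("1f4\r\n"))   # 500 -- a normal chunk-size line
try:
    read_chunk_size("")             # a disconnect yields an empty readline()
except ValueError as exc:
    print(exc)                      # invalid literal for int() with base 16: ''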


Are you seeing the 408 make its way out to the client?  It wasn't
clear to me if you only see these tracebacks on the object-servers or
in the proxy logs as well?  Perhaps only one of the three disks
involved in the PUT is timing out and the client still gets a
successful response?


As the disks fill up, replication and auditing are going to consume more
disk resources - you may have to tune the concurrency and rate
settings on those daemons.  If the errors happen consistently, you
could try running with the background consistency processes temporarily
disabled to rule out whether they're causing disk contention on your setup
with your config.


-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec openst...@nemebean.com wrote:


This is a development list, and your question sounds more
usage-related.  Please ask your question on the users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks.

-Ben


On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic
(mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following
traceback from
some transactions...
Traceback (most recent call last):
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
    chunk = next(data_source)
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
    data_source = iter(lambda: reader(self.app.client_chunk_size), '')
  File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
    chunk = self.wsgi_input.read(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
    return self._chunked_read(self.rfile, length)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
    self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn: tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
408 - PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

It succeeds sometimes, but mostly I get 408 errors. I don't see any other
logs for the transaction ID, or around these 408 errors in the log
files. Is this a disk timeout issue? These are only 1 GB files, and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Stephen Balukoff
Hi Eugene,

A couple notes of clarification:

On Sat, May 10, 2014 at 2:30 AM, Eugene Nikanorov
enikano...@mirantis.com wrote:


 On Fri, May 9, 2014 at 10:25 PM, Stephen Balukoff 
  sbaluk...@bluebox.net wrote:

 Hi Eugene,

 This assumes that 'VIP' is an entity that can contain both an IPv4
 address and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should actually
 contain two ip addresses like this.)

 That seems like a minor issue to me. Maybe we can just introduce a statement
 that a VIP has an L2 endpoint first of all?


Well, sure, except the user is going to want to know what the IP
address(es) are for obvious reasons, and expect them to be taken from
subnet(s) the user specifies. Asking the user to provide a Neutron
network_id (ie. where we'll attach the L2 interface) isn't definitive here
because a neutron network can contain many subnets, and these subnets might
be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
might cause us problems if the IPv4 subnet provided and the IPv6 subnet
provided are not on the same neutron network. In that scenario, we'd need
two L2 interfaces / neutron ports to service this, and of course some way
to record this information in the model.

We could introduce the restriction that all of the IP addresses / subnets
associated with the VIP must come from the same neutron network, but this
begs the question:  Why? Why shouldn't a VIP be allowed to connect to
multiple neutron networks to service all its front-end IPs?

If the answer to the above is "there's no reason" or "because it's easier
to implement", then I think these are not good reasons to apply these
restrictions. If the answer to the above is "because nobody deploys their
IPv4 and IPv6 networks separate like that", then I think you are unfamiliar
with the environments in which many operators must survive, and with the
requirements imposed on us by our users. :P

In any case, if you agree that in the IPv4 + IPv6 case it might make sense
to allow for multiple L2 interfaces on the VIP, doesn't it then also make
more sense to define a VIP as a single IP address (ie. what the rest of the
industry calls a VIP), and call the groupings of all these IP addresses
together a 'load balancer' ? At that point the number of L2 interfaces
required to service all the IPs in this VIP grouping becomes an
implementation problem.
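
As a rough sketch of what I mean (the subnet dict shape below is just an
assumption for illustration), the implementation only has to group the
requested subnets by network to know how many L2 ports it must plug:

def ports_needed(subnets):
    # One Neutron port per distinct network serves all of the front-end
    # addresses, however the user happens to split them across v4/v6 subnets.
    return len(set(s["network_id"] for s in subnets))

# IPv4 and IPv6 subnets on different networks -> two L2 ports.
print(ports_needed([{"network_id": "net-a"}, {"network_id": "net-b"}]))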

For what it's worth, I do go back and forth on my opinion on this one, as
you can probably tell. I'm trying to get us to a model that is first and
foremost simple to understand for users, and relatively easy for operators
and vendors to implement.


 In my mind, the main reasons I would like to see the container object are:


- It solves the colocation / apolocation (or affinity / anti-affinity)
problem for VIPs in a way that is much more intuitive to understand and
less confusing for users than either the hints included in my API, or
something based off the nova blueprint for doing the same for virtual
servers/containers. (Full disclosure: There probably would still be a need
for some anti-affinity logic at the logical load balancer level as well,
though at this point it would be an operator concern only and expressed to
the user in the flavor of the logical load balancer object, and probably
be associated with different billing strategies. The user wants a
dedicated physical load balancer? Then he should create one with this
flavor, and note that it costs this much more...)

 In fact, that can be solved by scheduling, without letting the user
 control that. The Flavor Framework will be able to address that.


I never said it couldn't be solved by scheduling. In fact, my original API
proposal solves it this way!

I was saying that it's *much more intuitive to understand and less
confusing for users* to do it using a logical load balancer construct. I've
yet to see a good argument for why working with colocation_hints /
apolocation_hints or affinity grouping rules (akin to the nova model) is
*easier* *for the user to understand* than working with a logical load
balancer model.

And by the way--  maybe you didn't see this in my example below, but just
because a user is using separate load balancer objects doesn't mean the
vendor or operator needs to implement these on separate pieces of hardware.
Whether or not the operator decides to let the user have this level of
control will be expressed in the flavor.



- From my experience, users are already familiar with the concept of
what a logical load balancer actually is (ie. something that resembles a
physical or virtual appliance from their perspective). So this probably
fits into their view of the world better.

 That might be so, but apparently it goes in opposite direction than
 neutron in 

Re: [openstack-dev] [Fuel][Docker] Master node bootstrapping issues

2014-05-10 Thread Dmitry Borodaenko
FWIW, 1 GB works fine for me on my laptop; I run the master setup manually.
So I'm against increasing the RAM requirement - we have better things to spend
that RAM on.
On May 10, 2014 1:37 AM, Mike Scherbakov mscherba...@mirantis.com wrote:

 It is not related to RAM or CPU. I run the installation on my Mac with 1 GB of
 RAM for the master node, and experience the following:

    - yes, it needs time to bootstrap the admin node
    - As soon as I get the message that the master node is installed, I
    immediately open 10.20.0.2:8000 and try to generate a diag snapshot. And
    it fails.
    - If I wait a few more minutes and try again, it passes.

 It actually seems to me that we simply still do not have
 https://bugs.launchpad.net/fuel/+bug/1315865 fixed, I'll add more details
 there as well as logs.

 When I checked logs, I saw:

    - for about a minute, astute was not able to connect to MQ. Does that mean
    it is still being started before MQ is ready?
    - shotgun -c /tmp/dump_config > /var/log/dump.log 2>&1 && cat
    /var/www/nailgun/dump/last returned 1

 When I tried to run diag_snapshot for a second time, the command above
 succeeded with 0 return code.

 So it obviously needs further debugging, and in my opinion even if we need
 to increase vCPU or RAM, it should be no more than 2 vCPUs / 2 GB.

 Vladimir, as you and Matt were changing the code which should run the
 containers in a certain order, I'm looking forward to hearing suggestions
 from both of you on where and how we should hack it.

 Thanks,


 On Sat, May 10, 2014 at 1:04 AM, Vladimir Kuklin vkuk...@mirantis.com wrote:

 Hi all

 We are still experiencing some issues with master node bootstrapping
 after moving to container-based installation.

 First of all, these issues are related to our system tests. We have
 rather small nodes running as the master node - only 1 GB of RAM and 1 virtual
 CPU. As we are using a strongly lrzipped archive, this seems not quite enough
 and leads to timeouts during deployment of the master node.

 I have several suggestions:

 1) Increase the amount of RAM for the master node to at least 8 gigabytes (or do
 some PCI virtual memory hotplug during master node bootstrapping) and add
 an additional vCPU for the master node.
 2) Run system tests with a non-containerized environment (env variable
 PRODUCTION=prod set)
 3) Split our system tests so that no more than 2 master
 nodes bootstrap simultaneously on a single hardware node.
 4) Do lrzipping as weakly as possible during the development phase and
 lrzip strongly only when we do a release
 5) Increase the bootstrap timeout for the master node in system tests


 Any input would be appreciated.

 --
 Yours Faithfully,
 Vladimir Kuklin,
 Fuel Library Tech Lead,
 Mirantis, Inc.
 +7 (495) 640-49-04
 +7 (926) 702-39-68
 Skype kuklinvv
 45bk3, Vorontsovskaya Str.
 Moscow, Russia,
 www.mirantis.com http://www.mirantis.ru/
 www.mirantis.ru
 vkuk...@mirantis.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [cinderdriver]

2014-05-10 Thread Martin, Kurt Frederick (ESSN Storage MSDU)
Hello Mardan,
The following Cinder wiki pages contain the information, or links to the
information, that you will need to submit a new driver:
https://wiki.openstack.org/wiki/Cinder and
https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
~Kurt

From: Mardan Raghuwanshi [mailto:mardan.si...@cloudbyte.com]
Sent: Saturday, May 10, 2014 6:15 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [cinder] [cinderdriver]

Hello All,
I developed Cinder drivers for CloudByte's ElastiStor. Now we want to make our
driver part of the core OpenStack release, but I am not aware of the OpenStack
commit process.
Please help me...


Thanks,
Mardan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Carlos Garza

On May 10, 2014, at 1:52 AM, Eugene Nikanorov enikano...@mirantis.com
 wrote:

 Hi Carlos,
 I think you had a chance to hear this argument yourself (from several
 different core members: Mark McClain, Salvatore Orlando, Kyle Mestery) at
 the meetings we had in the past 2 months.
 I was advocating 'loadbalancer' (in its extended version) once too,
 and received negative opinions as well.
 In general this approach puts too much control of the backend in the user's
 hands, and this goes in the opposite direction from the Neutron project.
 
 If it's just about the name of the root object - VIP suits this role too, so
 I'm fine with that. I also find the VIP/Listeners model a bit clearer per
 the definitions in our glossary.
 
 Thanks,
 Eugene.

I was in those meetings, Eugene, and I read the IRC logs of the two I
missed, and was of the impression they were open to discussion. I'll defer this
topic until the summit.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Brandon Logan
Hi Sam,
I do not have access to those statistics.  Though, I can say that with
our current networking infrastructure, customers that have multiple IPv4
or multiple IPv6 VIPs are in the minority. However, we have received
feature requests to allow VIPs on our two main networks (public and
private).  This is mainly because we do not charge for bandwidth on
the private network, provided the client resides in the same datacenter
as the load balancer (otherwise, it's not accessible by the client).

Having said that, I would still argue that the main reason for having a
load balancer to many VIPs to many listeners is user expectations. A
user expects to configure a load balancer, send that configuration to
our service, and then get the details of that fully configured load
balancer back.  Is your argument either 1) a user does not
expect LBaaS to accept and return load balancers, or 2) even if a user
expects this, it's not that important of a detail?
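
To illustrate the expectation I'm describing (the field names below are just
an example for discussion, not the agreed-upon API), the user wants to hand
over one document and get the whole configured thing back:

desired_request_body = {
    "loadbalancer": {
        "name": "web-lb",
        "vips": [{
            "subnet_id": "<subnet-uuid>",
            "listeners": [
                {"protocol": "HTTP", "protocol_port": 80},
                {"protocol": "HTTPS", "protocol_port": 443},
            ],
        }],
    }
}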

Thanks,
Brandon

On Fri, 2014-05-09 at 20:37 +, Samuel Bercovici wrote:
 It boils down to two aspects:
 
 1.  How common is it for a tenant to care about affinity or have
 more than a single VIP used in a way that adding an additional
 (mandatory) construct makes sense for them to handle?
 
 For example if 99% of users do not care about affinity or will only
 use a single VIP (with multiple listeners). In this case does adding
 an additional object that tenants need to know about makes sense?
 
 2.  Scheduling this so that it can be handled efficiently by
 different vendors and SLAs. We can elaborate on this F2F next week.
 
  
 
 Can providers share their statistics to assist to understand how
 common are those use cases?
 
  
 
 Regards,
 
 -Sam.
 
  
 
  
 
  
 
 From: Stephen Balukoff [mailto:sbaluk...@bluebox.net] 
 Sent: Friday, May 09, 2014 9:26 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review
 thoughts
 
  
 
 Hi Eugene,
 
  
 
 
 This assumes that 'VIP' is an entity that can contain both an IPv4
 address and an IPv6 address. This is how it is in the API proposal and
 corresponding object model that I suggested, but it is a slight
 re-definition of the term virtual IP as it's used in the rest of the
 industry. (And again, we're not yet in agreement that 'VIP' should
 actually contain two ip addresses like this.)
 
 
  
 
 
 In my mind, the main reasons I would like to see the container object
 are:
 
 
  
 
 
   * It solves the colocation / apolocation (or affinity /
 anti-affinity) problem for VIPs in a way that is much more
 intuitive to understand and less confusing for users than
 either the hints included in my API, or something based off
 the nova blueprint for doing the same for virtual
 servers/containers. (Full disclosure: There probably would
 still be a need for some anti-affinity logic at the logical
 load balancer level as well, though at this point it would be
 an operator concern only and expressed to the user in the
 flavor of the logical load balancer object, and probably be
 associated with different billing strategies. The user wants
 a dedicated physical load balancer? Then he should create one
 with this flavor, and note that it costs this much more...)
   * From my experience, users are already familiar with the
 concept of what a logical load balancer actually is (ie.
 something that resembles a physical or virtual appliance from
 their perspective). So this probably fits into their view of
 the world better.
   * It makes sense for Load Balancer as a Service to hand out
 logical load balancer objects. I think this will aid in a more
 intuitive understanding of the service for users who otherwise
 don't want to be concerned with operations.
   * This opens up the option for private cloud operators /
 providers to bill based on number of physical load balancers
 used (if the logical load balancer happens to coincide with
 physical load balancer appliances in their implementation) in
 a way that is going to be seen as more fair and more
 predictable to the user because the user has more control
 over it. And it seems to me this is accomplished without
 producing any undue burden on public cloud providers, those
 who don't bill this way, or those for whom the logical load
 balancer doesn't coincide with physical load balancer
 appliances.
   * Attaching a flavor attribute to a logical load balancer
 seems like a better idea than attaching it to the VIP. What if
 the user wants to change the flavor on which their VIP is
 deployed (ie. without changing IP addresses)? What if they
 want to do this for several VIPs at once? I can 

Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Brandon Logan
Hi Eugene,

Since this debate has come and gone before, I'd like to thank you for
having the patience to still be debating with us about it.  It's
requiring a lot of patience for everyone on both sides of the argument.
A debate like this can be healthy though.

I, too, have not heard clear and concise reasons why the core team
members would not like a logical load balancer object, or a load
balancer object that maps to many VIPs, which in turn map to many
listeners.  I've been to every LBaaS meeting for months now, I think, and
I just remember that you and others have said the core team members
object to it, but not any clear reasons.  Would it be possible for you
to point us to an IRC chat log or an ML thread that does discuss that?

A lot of operators have come into this project lately and most (if not
all) would prefer an API construct like the one BBG and Rackspace have
agreed on.  This reason alone should be enough to revisit the topic with
the core team members so we operators can fully understand their
objections.  I believe operators should play a large role in OpenStack,
and their opinions and reasoning should be heard.

Thanks,
Brandon Logan

On Sat, 2014-05-10 at 10:52 +0400, Eugene Nikanorov wrote:
 Hi Carlos,
 
 
 
 On May 9, 2014, at 3:36 PM, Eugene Nikanorov
 enikano...@mirantis.com
  wrote:
 
  Also we've heard objection to this approach several times
  from other core team members (this discussion has been going
  for more than half a year now), so I would suggest to move
  forward with single L2 port approach.
 
 Objections to multiple ports per loadbalancer or
 objections to the Loadbalancer object itself?
 
 
  If it's the latter then you may have a valid argument by
  authority, but it's impossible to verify because these core
  team members are remaining silent throughout all these
  discussions. We can't be dissuaded by FUD (Fear,
  Uncertainty and Doubt) that these silent core team members
  will suddenly reject this discussion in the future. We aren't
  going to put our users at risk due to FUD.
 I think you had a chance to hear this argument yourself (from several
 different core members: Mark McClain, Salvatore Orlando, Kyle Mestery)
 at the meetings we had in the past 2 months.
 I was advocating 'loadbalancer' (in its extended version) once too,
 and received negative opinions as well.
 In general this approach puts too much control of the backend in the
 user's hands, and this goes in the opposite direction from the Neutron project.
 
 
 If it's just about the name of the root object - VIP suits this role
 too, so I'm fine with that. I also find the VIP/Listeners model a bit
 clearer per the definitions in our glossary.
 
 
 Thanks,
 Eugene.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Brandon Logan
On Sat, 2014-05-10 at 09:50 -0700, Stephen Balukoff wrote:

 
 Correct me if I'm wrong, but wasn't "the existing API is confusing and
 difficult to use" one of the major complaints with it (as voiced in
 the IRC meeting, say on April 10th in IRC, starting around... I
 dunno... 14:13 GMT)?  If that's the case, then the user experience
 seems like an important concern, and possibly trumps some vaguely
 defined project direction which apparently doesn't take this into
 account if it's vetoing an approach which is more easily done /
 understood by the user.

+1 Stephen

This is what the API we are proposing accomplishes.  It is not
confusing.  Does having a root object of VIP work? Yes, but anything can
be made to work.  It's more about what makes sense.  To me, going with an
API similar to the existing one does not address this issue at all.
Also, what happened to a totally brand-new API and object model in
Neutron v3?  I thought that was still on the table, and it's the perfect
time to create a new load balancer API, since backwards compatibility is not
expected.

I'd also like to ask why it seems to not matter at all if most (if not
all) operators like an API proposal?

Thanks,
Brandon Logan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Jay Pipes

On 05/09/2014 04:37 PM, Samuel Bercovici wrote:

It boils down to two aspects:

1. How common is it for a tenant to care about affinity or have more than a
single VIP used in a way that adding an additional (mandatory) construct
makes sense for them to handle?

For example if 99% of users do not care about affinity or will only use
a single VIP (with multiple listeners). In this case does adding an
additional object that tenants need to know about makes sense?


Yes, it does make sense. Because it is the difference between an API 
that users intuitively understand and an API that nobody uses because it 
doesn't model what the user intuitively is thinking about.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer

2014-05-10 Thread Eugene Nikanorov
Hi Brandon,


I, too, have not heard clear and concise reasons why the core team
 members would not like a logical load balancer object, or a load
 balancer object that maps to many vips, which in turn maps to many
 listeners.  I've been to every LBaaS meeting for months now I think, and
 I just remember that you and others have said the core team members
 object to it, but not any clear reasons.  Would it be possible for you
 to point us to an IRC chat log or a ML thread that does discuss that?

Well, it seems to me that I understood that reason.
The reason was formulated as 'networking functions, not virtualized
appliances'.
The loadbalancer object as a concept of a virtual appliance (which Stephen and
Carlos seem to be advocating)
is not the kind of concept Neutron tries to expose via its API. To me it
looks like a valid argument.
Yes, some users may expect that the API will give them control of their
super-dedicated LB appliance.
Also, having an API that looks like the API of a *typical* appliance may look
more familiar to users who moved from
operating a physical LB appliance. But that's not the kind of ability Neutron
tries to allow, letting the user work with
more abstracted concepts instead.


 A lot of operators have come into this project lately and most (if not
 all) would prefer an API construct like the one BBG and Rackspace have
 agreed on.  This reason alone should be enough to revisit the topic with
 the core team members so we operators can fully understand their
 objections.  I believe operators should play a large role in Openstack
 and their opinions and reasons why should be heard.

I agree that operators' concerns need to be addressed, but the argument
above just suggests that it can be done
by other means rather than providing 'virtual appliance' functionality.
I think it will be more constructive to think and work in that direction.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

2014-05-10 Thread Eugene Nikanorov
Hi Stephen,

Well, sure, except the user is going to want to know what the IP
 address(es) are for obvious reasons, and expect them to be taken from
 subnet(s) the user specifies. Asking the user to provide a Neutron
 network_id (ie. where we'll attach the L2 interface) isn't definitive here
 because a neutron network can contain many subnets, and these subnets might
 be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
 might cause us problems if the IPv4 subnet provided and the IPv6 subnet
 provided are not on the same neutron network. In that scenario, we'd need
 two L2 interfaces / neutron ports to service this, and of course some way
 to record this information in the model.

Right, that's why the VIP needs a clear definition in relation to the L2 port:
we allow one L2 port per VIP, hence only addresses from subnets of one
network are allowed. That seems to be a fair limitation.
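
Just to make the constraint concrete (the dict shape here is only an
assumption for illustration), the validation is trivial:

def validate_vip_subnets(subnets):
    # Enforce "one L2 port per VIP": every subnet attached to the VIP must
    # belong to the same Neutron network.
    networks = set(s["network_id"] for s in subnets)
    if len(networks) > 1:
        raise ValueError("All subnets of a VIP must belong to a single network")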

We could introduce the restriction that all of the IP addresses / subnets
 associated with the VIP must come from the same neutron network,

Right.

 but this begs the question:  Why? Why shouldn't a VIP be allowed to
 connect to multiple neutron networks to service all its front-end IPs?


 If the answer to the above is "there's no reason" or "because it's easier
 to implement", then I think these are not good reasons to apply these
 restrictions. If the answer to the above is "because nobody deploys their
 IPv4 and IPv6 networks separate like that", then I think you are unfamiliar
 with the environments in which many operators must survive, and with the
 requirements imposed on us by our users. :P

I approach this question from the opposite side: if we allow this, we're
exposing a 'virtual appliance' API, where the user fully controls how the LB
instance is wired, how many VIPs it has, etc.
As I said in the other thread, that is the 'virtual functions vs. virtualized
appliance' question, which is about the general goal of the Neutron project.
That something maps more easily onto physical infrastructure (or onto a
concept of physical infra) doesn't mean that the cloud API needs to follow it.


 In any case, if you agree that in the IPv4 + IPv6 case it might make sense
 to allow for multiple L2 interfaces on the VIP, doesn't it then also make
 more sense to define a VIP as a single IP address (ie. what the rest of the
 industry calls a VIP), and call the groupings of all these IP addresses
 together a 'load balancer' ? At that point the number of L2 interfaces
 required to service all the IPs in this VIP grouping becomes an
 implementation problem.

 For what it's worth, I do go back and forth on my opinion on this one, as
 you can probably tell. I'm trying to get us to a model that is first and
 foremost simple to understand for users, and relatively easy for operators
 and vendors to implement.

Users are different, and you apparently consider those who understand
networks and load balancing.

I was saying that it's *much more intuitive to understand and less
 confusing for users* to do it using a logical load balancer construct.
 I've yet to see a good argument for why working with colocation_hints /
 apolocation_hints or affinity grouping rules (akin to the nova model) is
 *easier* *for the user to understand* than working with a logical load
 balancer model.

Something done by hand may be much more intuitive than something performed
by magic behind scheduling, flavors, etc.
But that doesn't seem like a good reason to me to put the user in charge of
defining resource placement.



 And by the way--  maybe you didn't see this in my example below, but just
 because a user is using separate load balancer objects doesn't mean the
 vendor or operator needs to implement these on separate pieces of hardware.
 Whether or not the operator decides to let the user have this level of
 control will be expressed in the flavor.

Yes, and without a container the user has less than that - only balancing
endpoints (VIPs), without direct control of how they are grouped within
instances.

That might be so, but apparently it goes in opposite direction than neutron
 in general (i.e. more abstraction)


 Doesn't more abstraction give vendors and operators more flexibility in
 how they implement it? Isn't that seen as a good thing in general? In any
 case, this sounds like your opinion more than an actual stated or implied
 agenda from the Neutron team. And even if it is an implied or stated
 agenda, perhaps it's worth revisiting the reason for having it?

I'm translating the argument of other team members and it seems valid to me.
For sure you can try to revisit those reasons ;)



 So what are the main arguments against having this container object? In
 answering this question, please keep in mind:


- If you say "implementation details", please just go ahead and be
more specific because that's what I'm going to ask you to do anyway. If
"implementation details" is the concern, please follow this with a
hypothetical or concrete example as to what kinds of