Re: [Openstack] Site to Site VPN in openstack

2016-09-21 Thread Jaison Peter
Thanks for your reply, Han.

So if we have a 10.0.0.0 network on premises and a 192.168.0.0 network in
the remote OpenStack private cloud, and we need to set up a site-to-site
VPN with routes on the VPN endpoints so that both networks can communicate
with each other, then this won't work if the on-premises VPN endpoint is a
hardware device like an ASA?
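
For concreteness, the OpenStack side of such a tunnel is defined with
generic IPsec primitives, so in principle the remote peer only has to agree
on the proposals. A rough sketch with the 2016-era neutron VPNaaS CLI (all
names, addresses, and the PSK are illustrative, not a tested ASA recipe):

    # IKE and IPsec proposals -- these must match the ASA's crypto settings
    neutron vpn-ikepolicy-create ikepolicy1
    neutron vpn-ipsecpolicy-create ipsecpolicy1

    # VPN service anchored on the tenant router and its subnet
    neutron vpn-service-create router1 private-subnet --name vpn1

    # Site-to-site connection to the on-premises endpoint
    neutron ipsec-site-connection-create --name conn1 \
        --vpnservice-id vpn1 \
        --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 \
        --peer-address 203.0.113.10 --peer-id 203.0.113.10 \
        --peer-cidr 10.0.0.0/24 --psk sharedsecret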

On Tue, Sep 20, 2016 at 11:39 AM, Jaison Peter <urotr...@gmail.com> wrote:

> Hello all,
>
>
> I was checking whether anything prevents us from establishing a
> site-to-site VPN from an OpenStack private cloud to an on-site hardware
> device like a Cisco ASA. I know it's possible to set up a site-to-site
> VPN between two OpenStack clouds using VPNaaS, but I am not sure about
> the OpenStack-to-hardware-device scenario. Please advise.
>


[Openstack] Site to Site VPN in openstack

2016-09-20 Thread Jaison Peter
Hello all,


I was checking whether anything prevents us from establishing a
site-to-site VPN from an OpenStack private cloud to an on-site hardware
device like a Cisco ASA. I know it's possible to set up a site-to-site VPN
between two OpenStack clouds using VPNaaS, but I am not sure about the
OpenStack-to-hardware-device scenario. Please advise.


Re: [Openstack-operators] [Openstack] Map Nova flavor to glance image

2016-08-18 Thread Jaison Peter
Thanks, David, for your input. I thought there would be a way to achieve
this using extra_specs on the flavor and metadata on the images, so that
images are filtered on the basis of minimum requirements such as disk
size: if the flavor doesn't have enough disk, the image won't be listed
under that flavor.
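
If it helps, part of this can be fed from image properties: Horizon's
launch dialog compares a flavor's disk and RAM against an image's
min_disk/min_ram. A minimal sketch with the glance v2 CLI (the image ID is
illustrative; min-disk is in GB, min-ram in MB):

    # Images whose minimums exceed the chosen flavor get flagged/filtered
    glance image-update <image-id> --min-disk 40 --min-ram 4096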

On Thu, Aug 18, 2016 at 5:19 PM, David Medberry <openst...@medberry.net>
wrote:

> Hi Jaison,
>
> We do this in Horizon with a custom (locally changed) filter. I'll check
> to see if it is upstreamed (it's certainly shareable, so I'll point you at
> a git commit if it isn't all the way upstreamed once I get to the office.)
>
> I know of no way of doing this directly in Nova at the API level. There
> is a way to make the Murano and Trove images "somewhat" hidden, by making
> them private to certain projects that have that capability enabled,
> thereby not confusing non-Murano, non-Trove users; but as soon as you add
> another project to them, they become visible everywhere within that
> project.
>
> (As you might imagine, this is a pretty common issue that we've run into
> directly with Trove but have seen no real solution for yet.)
>
> On Thu, Aug 18, 2016 at 5:45 AM, David Medberry <openst...@medberry.net>
> wrote:
>
>> Adding the Ops list.
>> -- Forwarded message --
>> From: Jaison Peter <urotr...@gmail.com>
>> Date: Wed, Aug 17, 2016 at 10:29 PM
>> Subject: [Openstack] Map Nova flavor to glance image
>> To: OpenStack General <openst...@lists.openstack.org>
>>
>>
>> Hi all,
>>
>> Is there any way to map a flavor to specific Glance images? For example,
>> if a user chooses the flavor 'general_medium', it shouldn't display any
>> images used for Trove or Murano, so that we can avoid confusion. Right
>> now, all images are displayed while choosing a flavor during instance
>> launch. I think metadata is involved in this, but I'm not sure how to do
>> it.
>>
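
As a footnote to the visibility approach David describes, a minimal sketch
with the glance v2 CLI (project IDs illustrative), with the caveat above
that a member project then sees the image everywhere:

    # Hide the Trove/Murano images from the general catalog
    glance image-update <image-id> --visibility private

    # Share them with only the project(s) that need them
    glance member-create <image-id> <trove-project-id>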


[Openstack] Map Nova flavor to glance image

2016-08-17 Thread Jaison Peter
Hi all,

Is there any way to map a flavor to specific Glance images? For example,
if a user chooses the flavor 'general_medium', it shouldn't display any
images used for Trove or Murano, so that we can avoid confusion. Right
now, all images are displayed while choosing a flavor during instance
launch. I think metadata is involved in this, but I'm not sure how to do
it.


Re: [Openstack] Openstack powered Public cloud

2016-04-26 Thread Jaison Peter
Thanks, Brian.

That's a much-needed implementation.


On Tue, Apr 26, 2016 at 10:55 PM, Brian Haley <brian.ha...@hpe.com> wrote:

> On 4/26/16 12:05 PM, Jaison Peter wrote:
>
>> Hi George,
>>
>> Thanks for letting me know that we can create a distributed router with
>> SNAT disabled on the central router.
>>
>> Even if we use a VRRP HA router, it will still consume 100 public IPs
>> in the scenario I mentioned above, though it can save the IPs used by
>> the FIP namespaces on the compute nodes.
>>
>> Yes, the compute nodes themselves do not need public IPs, but the FIP
>> namespace on them uses one.
>>
>
> Hi Jaison,
>
> I've been working on a spec to address the public IP usage in the FIP
> namespace; it's at https://review.openstack.org/#/c/300207/ (it needs an
> update).  Basically, it would change the allocation strategy to use
> addresses from an "infrastructure" subnet for ports that don't need to be
> publicly reachable.  It should also support the case where the default
> SNAT IP doesn't need public reachability either; you only need it for
> floating IPs.
>
> The plan is to finish this in the Newton cycle.
>
> -Brian
>
>
>> On Tue, Apr 26, 2016 at 7:57 PM, David Medberry
>> <openst...@medberry.net> wrote:
>>
>> Hi Jaison,
>>
>> This is an issue that the Neutron team is aware of. They will likely be
>> addressing it (at some point), but your understanding aligns with my
>> own. Public IPv4 usage is a well-known, well-documented issue, and
>> DVR / HA exacerbates it.
>>
>> On Tue, Apr 26, 2016 at 1:33 AM, Jaison Peter <urotr...@gmail.com> wrote:
>>
>> Hi all,
>>
>> I am working on a project to build a small-to-medium public cloud on
>> top of OpenStack. We are researching scalable, large OpenStack
>> deployments and planning our design accordingly. Initially we will
>> have 50+ compute nodes, and we plan to grow to 200 compute nodes
>> within a year by migrating existing clients to the new platform.
>>
>> I have many concerns about scaling and making the right choices,
>> since OpenStack offers a lot of choices and flexibility, especially
>> on the networking side. Our major challenge was choosing between
>> the simplicity and performance of Linux bridge and the features,
>> including DVR, of OVS. We decided to go with OVS, though some
>> suggested that OVS is slow in large deployments; the distributed L3
>> agents and the bandwidth benefits of DVR inclined us towards OVS.
>> Was that the right decision?
>>
>> But one of the major drawbacks we are seeing with DVR is public IP
>> consumption. If we have 100 clients and 1 VM per client, eventually
>> there will be 100 tenants and 100 routers. Since it's a public
>> cloud, we have to offer a public IP for each VM. In DVR mode, the
>> FIP namespace on each compute node consumes one public IP, so if
>> 100 VMs are spread among 20 compute nodes, a total of 20 public IPs
>> will be used on the compute nodes. A SNAT namespace is also created
>> for each tenant router (100 in total), and each consumes one public
>> IP, so 100 public IPs will be consumed by the central SNAT
>> namespaces. In total, 100 + 20 = 120 public IPs will be used by
>> OpenStack components, and 100 more will be used as floating IPs
>> (1:1 NAT) by the VMs. So we need 220 public IPs to provide
>> dedicated public IPs for 100 VMs! Is anything wrong with our
>> calculation?
>>
>> From our point of view, the 120 IPs used by OpenStack components in
>> our case (providing 1:1 NAT for every VM) are wasted: they play no
>> role in network traffic. Centralized SNAT is useful if the client
>> opts for something VPC-like, as in AWS, and is not attaching
>> floating IPs to all the instances in his VPC.
>>
>> So is there any option, while creating a DVR router, to avoid
>> creating the central SNAT namespace on the controller node, so that
>> we can save 100 public IPs in the above scenario?
>>

Re: [Openstack] Openstack powered Public cloud

2016-04-26 Thread Jaison Peter
Hi George,

Thanks for letting me know that we can create a distributed router with
SNAT disabled on the central router.

Even if we use a VRRP HA router, it will still consume 100 public IPs in
the scenario I mentioned above, though it can save the IPs used by the FIP
namespaces on the compute nodes.

Yes, the compute nodes themselves do not need public IPs, but the FIP
namespace on them uses one.
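
For anyone following along, the SNAT-disabled distributed router would be
created roughly like this (a sketch assuming the Mitaka-era neutron CLI;
names are illustrative, and floating IPs still work via the per-compute
FIP namespaces):

    neutron router-create router1 --distributed True
    neutron router-gateway-set router1 ext-net --disable-snat
    neutron router-interface-add router1 private-subnet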


On Tue, Apr 26, 2016 at 7:57 PM, David Medberry <openst...@medberry.net>
wrote:

> Hi Jaison,
>
> This is an issue that the Neutron team is aware of. They will likely be
> addressing it (at some point), but your understanding aligns with my own.
> Public IPv4 usage is a well-known, well-documented issue, and DVR / HA
> exacerbates it.
>
> On Tue, Apr 26, 2016 at 1:33 AM, Jaison Peter <urotr...@gmail.com> wrote:
>
>> Hi all,
>>
>> I am working on a project to build a small-to-medium public cloud on top
>> of OpenStack. We are researching scalable, large OpenStack deployments
>> and planning our design accordingly. Initially we will have 50+ compute
>> nodes, and we plan to grow to 200 compute nodes within a year by
>> migrating existing clients to the new platform.
>>
>> I have many concerns about scaling and making the right choices, since
>> OpenStack offers a lot of choices and flexibility, especially on the
>> networking side. Our major challenge was choosing between the simplicity
>> and performance of Linux bridge and the features, including DVR, of OVS.
>> We decided to go with OVS, though some suggested that OVS is slow in
>> large deployments; the distributed L3 agents and the bandwidth benefits
>> of DVR inclined us towards OVS. Was that the right decision?
>>
>> But one of the major drawbacks we are seeing with DVR is public IP
>> consumption. If we have 100 clients and 1 VM per client, eventually there
>> will be 100 tenants and 100 routers. Since it's a public cloud, we have
>> to offer a public IP for each VM. In DVR mode, the FIP namespace on each
>> compute node consumes one public IP, so if 100 VMs are spread among 20
>> compute nodes, a total of 20 public IPs will be used on the compute
>> nodes. A SNAT namespace is also created for each tenant router (100 in
>> total), and each consumes one public IP, so 100 public IPs will be
>> consumed by the central SNAT namespaces. In total, 100 + 20 = 120 public
>> IPs will be used by OpenStack components, and 100 more will be used as
>> floating IPs (1:1 NAT) by the VMs. So we need 220 public IPs to provide
>> dedicated public IPs for 100 VMs! Is anything wrong with our calculation?
>>
>> From our point of view, the 120 IPs used by OpenStack components in our
>> case (providing 1:1 NAT for every VM) are wasted: they play no role in
>> network traffic. Centralized SNAT is useful if the client opts for
>> something VPC-like, as in AWS, and is not attaching floating IPs to all
>> the instances in his VPC.
>>
>> So is there any option, while creating a DVR router, to avoid creating
>> the central SNAT namespace on the controller node, so that we can save
>> 100 public IPs in the above scenario?
>>


Re: [Openstack] Openstack powered Public cloud

2016-04-26 Thread Jaison Peter
Yes, that's also an option, but we would like the flexibility and features
that a floating IP provides.

So, in that case, there won't be any floating IPs; OpenStack will assign
public IPs the same way it assigns private IPs, right?
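
For reference, the provider-network setup gfa describes below might look
roughly like this (a sketch assuming ML2 with VLANs and the 2016-era
neutron CLI; the physnet name, VLAN ID, and addresses are illustrative):

    # A shared network mapped straight onto existing datacenter gear
    neutron net-create public-vlan100 --shared \
        --provider:network_type vlan \
        --provider:physical_network physnet1 \
        --provider:segmentation_id 100

    # VMs attached here get "public" addresses directly, no floating IPs
    neutron subnet-create public-vlan100 198.51.100.0/24 \
        --name public-vlan100-subnet --gateway 198.51.100.1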

On Tue, Apr 26, 2016 at 12:57 PM, gustavo panizzo (gfa) <g...@zumbi.com.ar>
wrote:

> On Tue, Apr 26, 2016 at 12:03:03PM +0530, Jaison Peter wrote:
> > Hi all,
> >
> > I am working on a project to build a small-to-medium public cloud on
> > top of OpenStack. We are researching scalable, large OpenStack
> > deployments and planning our design accordingly. Initially we will have
> > 50+ compute nodes, and we plan to grow to 200 compute nodes within a
> > year by migrating existing clients to the new platform.
> >
> > I have many concerns about scaling and making the right choices, since
> > OpenStack offers a lot of choices and flexibility, especially on the
> > networking side. Our major challenge was choosing between the
> > simplicity and performance of Linux bridge and the features, including
> > DVR, of OVS. We decided to go with OVS, though some suggested that OVS
> > is slow in large deployments; the distributed L3 agents and the
> > bandwidth benefits of DVR inclined us towards OVS. Was that the right
> > decision?
> >
> > But one of the major drawbacks we are seeing with DVR is public IP
> > consumption. If we have 100 clients and 1 VM per client, eventually
> > there will be 100 tenants and 100 routers. Since it's a public cloud,
> > we have to offer a public IP for each VM. In DVR mode, the FIP
> > namespace on each compute node consumes one public IP, so if 100 VMs
> > are spread among 20 compute nodes, a total of 20 public IPs will be
> > used on the compute nodes. A SNAT namespace is also created for each
> > tenant router (100 in total), and each consumes one public IP, so 100
> > public IPs will be consumed by the central SNAT namespaces. In total,
> > 100 + 20 = 120 public IPs will be used by OpenStack components, and
> > 100 more will be used as floating IPs (1:1 NAT) by the VMs. So we need
> > 220 public IPs to provide dedicated public IPs for 100 VMs! Is
> > anything wrong with our calculation?
> >
> > From our point of view, the 120 IPs used by OpenStack components in
> > our case (providing 1:1 NAT for every VM) are wasted: they play no
> > role in network traffic. Centralized SNAT is useful if the client opts
> > for something VPC-like, as in AWS, and is not attaching floating IPs
> > to all the instances in his VPC.
> >
> > So is there any option, while creating a DVR router, to avoid creating
> > the central SNAT namespace on the controller node, so that we can save
> > 100 public IPs in the above scenario?
>
> I've never used DVR, so I won't speak to it, but I've run private
> clouds without wasting public IP addresses by using provider networks.
>
> Most VMs had a single vNIC attached to a private network, shared or
> private to the tenant; optionally, a VM could have a second vNIC
> attached to a public shared network.
>
> First, I wanted to avoid the network node, as it was a SPOF and limits
> the bandwidth available to the VMs; this approach also let us use our
> existing, proven networking gear.
>
> my 0.02$
>
>
>
> --
> 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333
>
> keybase: http://keybase.io/gfa
>


[Openstack] Openstack powered Public cloud

2016-04-26 Thread Jaison Peter
Hi all,

I am working on a project to build a small-to-medium public cloud on top
of OpenStack. We are researching scalable, large OpenStack deployments and
planning our design accordingly. Initially we will have 50+ compute nodes,
and we plan to grow to 200 compute nodes within a year by migrating
existing clients to the new platform.

I have many concerns about scaling and making the right choices, since
OpenStack offers a lot of choices and flexibility, especially on the
networking side. Our major challenge was choosing between the simplicity
and performance of Linux bridge and the features, including DVR, of OVS.
We decided to go with OVS, though some suggested that OVS is slow in large
deployments; the distributed L3 agents and the bandwidth benefits of DVR
inclined us towards OVS. Was that the right decision?

But one of the major drawbacks we are seeing with DVR is public IP
consumption. If we have 100 clients and 1 VM per client, eventually there
will be 100 tenants and 100 routers. Since it's a public cloud, we have to
offer a public IP for each VM. In DVR mode, the FIP namespace on each
compute node consumes one public IP, so if 100 VMs are spread among 20
compute nodes, a total of 20 public IPs will be used on the compute nodes.
A SNAT namespace is also created for each tenant router (100 in total),
and each consumes one public IP, so 100 public IPs will be consumed by the
central SNAT namespaces. In total, 100 + 20 = 120 public IPs will be used
by OpenStack components, and 100 more will be used as floating IPs (1:1
NAT) by the VMs. So we need 220 public IPs to provide dedicated public IPs
for 100 VMs! Is anything wrong with our calculation?
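
To spell out the tally:

    FIP namespaces (1 per compute node, 20 nodes):    20 public IPs
    SNAT namespaces (1 per tenant router, 100):      100 public IPs
    Floating IPs (1:1 NAT, 1 per VM, 100 VMs):       100 public IPs
                                                     --------------
    Total:                                           220 public IPs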

From our point of view, the 120 IPs used by OpenStack components in our
case (providing 1:1 NAT for every VM) are wasted: they play no role in
network traffic. Centralized SNAT is useful if the client opts for
something VPC-like, as in AWS, and is not attaching floating IPs to all
the instances in his VPC.

So is there any option, while creating a DVR router, to avoid creating the
central SNAT namespace on the controller node, so that we can save 100
public IPs in the above scenario?