Re: [Openstack-operators] [neutron] IPv6 Status

2015-02-05 Thread Marcos Garcia
Hi Andreas

What about https://wiki.openstack.org/wiki/Neutron/IPv6 ?
Or are you looking for a working group specialized in this area? In that
case, the NFV/Telco Working Group may be better placed to drive your
questions (they are very concerned about IPv6 too):
https://wiki.openstack.org/wiki/TelcoWorkingGroup#Technical_Team_Meetings
They meet every Wednesday.
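
In the meantime, if you just want to poke at what already works, creating
an IPv6 subnet on a tenant network is a decent smoke test. A minimal
sketch with python-neutronclient (Juno-era API; the credentials, auth URL
and network UUID are placeholders you would replace with your own):

from neutronclient.v2_0 import client as neutron_client

# Placeholder credentials -- adjust for your environment.
neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

# Create an IPv6 subnet on an existing tenant network, using SLAAC for
# both router advertisements and address assignment.
subnet = neutron.create_subnet({'subnet': {
    'network_id': 'NETWORK_UUID',          # UUID of an existing network
    'ip_version': 6,
    'cidr': '2001:db8:1234::/64',
    'ipv6_ra_mode': 'slaac',
    'ipv6_address_mode': 'slaac',
}})
print(subnet['subnet']['id'])

If that succeeds end to end and instances actually pick up addresses, you
know the SLAAC path is functional on your deployment.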

Regards

On 2015-02-05 3:14 AM, Andreas Scheuring wrote:
 Hi, 

 is there a central place where I can find a matrix (or something
 similar) that shows what is currently supposed to work with respect to
 IPv6 networking?

 I also had a look at a couple of blueprints out there, but I'm looking
 for a simple overview of what's supported, which features people are
 working on, and what's still future work. I mean all the good stuff for
 tenant networks, like

 - SNAT
 - FloatingIP
 - External Provider Networks
 - DVR
 - FWaaS, VPNaaS, ...

 and also about the host network:
 - e.g. VXLAN/GRE tunneling over an IPv6 host network...



-- 

Marcos Garcia
Technical Sales Engineer - eNovance (a Red Hat company); RHCE, RHCVA, ITIL

PHONE: (514) 907-0068 - EMAIL: mgarc...@redhat.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Any good info on GRE tunneling on Icehouse?

2014-12-10 Thread Marcos Garcia

Hello Alex

I've always found the RDO documentation very easy to follow (though it
can sometimes be outdated, e.g. instructions using 'quantum' instead of
'neutron'):

https://openstack.redhat.com/Using_GRE_Tenant_Networks
https://openstack.redhat.com/Configuring_Neutron_with_OVS_and_GRE_Tunnels_using_quickstack
https://openstack.redhat.com/NeutronLibvirtMultinodeDevEnvironment
and many others

Most of the docs treat the controller node and the network node as the
same machine, but the Packstack configuration will let you split them if
you really need to.


All the RDO-related docs describe how to use Packstack on CentOS, so you
should be fine if you use both. Or do you have to use Ubuntu or another
distro?


Regards

PS: for a detailed view of what the network node does, and of Neutron in
general: https://openstack.redhat.com/Networking_in_too_much_detail
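
PPS: once Packstack is done, a quick way to confirm that tenant networks
really come up as GRE is to create one and inspect the provider
attributes as admin. A rough sketch with python-neutronclient
(credentials are placeholders; assumes ml2 tenant_network_types includes
gre):

from neutronclient.v2_0 import client as neutron_client

# Placeholder admin credentials -- adjust for your environment.
neutron = neutron_client.Client(username='admin',
                                password='secret',
                                tenant_name='admin',
                                auth_url='http://controller:5000/v2.0')

# Create an ordinary tenant network; ML2 assigns the segmentation type.
net = neutron.create_network({'network': {'name': 'gre-smoke-test'}})

# As admin, the provider attributes are visible; network_type should
# come back as 'gre' along with a tunnel segmentation ID.
details = neutron.show_network(net['network']['id'])['network']
print(details.get('provider:network_type'),
      details.get('provider:segmentation_id'))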


On 2014-12-10 2:48 PM, Alex Leonhardt wrote:


Hi All,

I'm failing to find a good tutorial on how to set up a 3+ node cluster
using GRE tunneling.


Does anyone have an idea / link / blog ?

We're looking at 1x controller, 1x network node, 3x compute nodes for a
PoC running GRE. Our current setup is a FlatNetwork.


Thanks!
Alex





--

Marcos Garcia
Technical Sales Engineer; RHCE, RHCVA, ITIL

PHONE: (514) 907-0068 - EMAIL: marcos.gar...@enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Migrating Parallels Virtuozzo Containers to OpenStack

2014-10-30 Thread Marcos Garcia
Well, I believe PVC containers are basically OpenVZ containers, which 
means there is no kernel/ramdisk to boot from, AFAIK. OpenStack doesn't 
support OpenVZ, just LXC (and Docker).


You should take that into account as it may require changes inside the 
'disk image'.
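
That said, for the mechanical convert-and-upload leg of the plan below,
something like this sketch works once you have extracted a raw disk
image and sorted out the kernel question (paths and names are made up;
assumes qemu-img and the glance CLI are installed and the usual OS_*
auth variables are exported):

import subprocess

SRC = '/var/lib/vz/private/101.raw'   # hypothetical extracted PVC image
DST = '/tmp/ct101.qcow2'

# Convert the raw image to qcow2.
subprocess.check_call(['qemu-img', 'convert',
                       '-f', 'raw', '-O', 'qcow2', SRC, DST])

# Upload the result to Glance.
subprocess.check_call(['glance', 'image-create',
                       '--name', 'migrated-ct101',
                       '--disk-format', 'qcow2',
                       '--container-format', 'bare',
                       '--file', DST])

The hard part remains making the image bootable at all, since the
container never had its own kernel.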


On 2014-10-30 5:51 PM, Michael Dorman wrote:
Anyone have any experience moving from Parallels Virtuozzo Containers 
to OpenStack (KVM)? We have a large number of PVC VMs and would like 
to get those moved over to OpenStack KVM.


At first glance, the plan would be to shut down the PVC, copy and 
convert the image to qcow2, and [magic] bring it up in OpenStack.  But 
I am sure it’s not that easy.


Any advice or war stories would be really useful.

Thanks,
Mike





--

Marcos Garcia
Technical Sales Engineer

PHONE: (514) 907-0068 - EMAIL: marcos.gar...@enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] limit num instance-type per host

2014-09-24 Thread Marcos Garcia

Hello Blair

IMO it's a matter of capacity planning and design to minimize 
fragmentation. I don't know of any mechanism to filter for this or to 
solve it (i.e. re-balance) a posteriori. Blueprints exist, though, to 
allow re-scheduling and re-balancing.


Maybe I'm wrong and there are indeed some scheduler filters out of the 
box...
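
If there isn't one and you do roll your own, the skeleton is small. A
rough, untested sketch of the shape such a filter could take (the
per-flavor limits dict is invented, and the exact host_state attributes
vary between releases, so treat it as pseudocode rather than a drop-in):

from nova.scheduler import filters

# Invented example policy: flavor name -> max instances of it per host.
FLAVOR_LIMITS = {'m1.small': 8, 'm1.xlarge': 2}

class PerFlavorLimitFilter(filters.BaseHostFilter):
    """Reject hosts already at the per-flavor instance limit."""

    def host_passes(self, host_state, filter_properties):
        flavor = filter_properties.get('instance_type') or {}
        limit = FLAVOR_LIMITS.get(flavor.get('name'))
        if limit is None:
            return True   # no limit configured for this flavor
        # Assumes the scheduler tracks per-host instances; in releases
        # where it doesn't, you'd have to query nova for the host's
        # instance list instead.
        same_flavor = [i for i in getattr(host_state, 'instances', {}).values()
                       if getattr(i, 'instance_type_id', None) ==
                          flavor.get('id')]
        return len(same_flavor) < limit

You'd then add it to scheduler_default_filters in nova.conf.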


Anyway, I recommend you read this: 
http://rhsummit.files.wordpress.com/2014/04/deterministic-capacity-planning-for-openstack-final.pdf
and open this spreadsheet: 
https://github.com/noslzzp/cloud-resource-calculator , which will show 
you the optimal flavor configuration for minimal fragmentation across 
different variables (vCPU, RAM, disk).


Regards

On 2014-09-24 11:08 AM, Blair Bethwaite wrote:

Hi all,

I'm trying to wrap my head around whether it's possible, with the 
existing scheduler filters, to put a per-host limit on the number of 
instances per instance-type/flavor. I don't think this is possible 
with the existing filters or weights, but it seems like a fairly 
common requirement.


The issue I'm thinking of using this for is instance-to-host 
fragmentation in homogeneous deployments, where there is a tendency, as 
the zone approaches capacity, to hit a utilisation ceiling: there are 
rarely any gaps large enough for high-vcpu-count instances. I'm 
guessing that limiting the number of smaller instances per host would 
help to alleviate this.


Looks like knocking up such a filter wouldn't be too hard; I just want 
to check whether there is another way...?


--
Cheers,
~Blairo




--

Marcos Garcia
Technical Sales Engineer

PHONE: (514) 907-0068 - EMAIL: marcos.gar...@enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators