Re: [Openstack-operators] Murano in Production

2016-09-26 Thread Joe Topjian
Hi Serg,

We were indeed hitting that bug, but the cert wasn't self-signed. It was
easier for us to manually patch the Ubuntu Cloud package of Murano with the
stable/mitaka fix linked in that bug report than to try to debug where
OpenSSL/Python/requests/etc. was going awry.

We might redeploy Murano strictly using virtualenvs and pip so we stay on
the latest stable patches.
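
Something along these lines is what we have in mind (just a sketch, untested;
the packages named are the ones published on PyPI and the version pin is
illustrative only, so it would need checking against whatever stable branch
we end up tracking):

  virtualenv /opt/murano-venv
  /opt/murano-venv/bin/pip install --upgrade pip
  # murano (API + engine) plus the client, pinned to a stable series
  /opt/murano-venv/bin/pip install 'murano<3.0.0' python-muranoclient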

Thanks,
Joe

On Mon, Sep 26, 2016 at 11:03 PM, Serg Melikyan 
wrote:

> Hi Joe,
>
> >Also, is it safe to say that communication between agent/engine only
> >happens, and will only happen, during app deployment?
>
> murano-agent & murano-engine keep an active connection to the RabbitMQ
> broker, but message exchange happens only during deployment of the app.
>
> >One thing we just ran into, though, was getting the agent/engine rmq
> config to work with SSL
>
> We had a related bug fixed in Newton; can you confirm that you are *not*
> hitting bug #1578421 [0]?
>
> References:
> [0] https://bugs.launchpad.net/murano/+bug/1578421

Re: [Openstack-operators] Murano in Production

2016-09-26 Thread Serg Melikyan
Hi Joe,

>Also, is it safe to say that communication between agent/engine only happens,
>and will only happen, during app deployment?

murano-agent & murano-engine keep an active connection to the RabbitMQ
broker, but message exchange happens only during deployment of the app.

>One thing we just ran into, though, was getting the agent/engine rmq config to 
>work with SSL

We had a related bug fixed in Newton; can you confirm that you are *not*
hitting bug #1578421 [0]?

References:
[0] https://bugs.launchpad.net/murano/+bug/1578421
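
For reference, the agent-facing broker settings live in a dedicated [rabbitmq]
section of murano.conf, separate from oslo.messaging's [oslo_messaging_rabbit];
roughly like the sketch below (option names and values are worth
double-checking against your release):

  [rabbitmq]
  # broker that murano-agent on the guest VMs connects to
  host = <public VIP>
  port = 55572
  login = murano
  password = <secret>
  virtual_host = /
  ssl = True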




On Mon, Sep 26, 2016 at 1:43 PM, Andrew Woodward  wrote:
> In Fuel we deploy haproxy to all of the nodes that are part of the
> VIP/endpoint service (this is usually part of the controller role), so the
> VIPs (internal or public) can be active on any member of the group.
> Corosync/Pacemaker is used to move the VIP address (as opposed to
> keepalived); in our case both haproxy and the VIP live in a namespace, and
> haproxy is always running on all of these nodes bound to 0/0.
>
> In the case of murano-rabbit we take the same approach as we do for galera:
> all of the members are listed in the balancer, but with the others as
> backups, which makes them inactive until the first node is down. This allows
> the VIP to move to any of the proxies in the cluster and continue to direct
> traffic to the same node until that rabbit instance is also unavailable.
>
> listen mysqld
>   bind 192.168.0.2:3306
>   mode  tcp
>   option  httpchk
>   option  tcplog
>   option  clitcpka
>   option  srvtcpka
>   stick on  dst
>   stick-table  type ip size 1
>   timeout client  28801s
>   timeout server  28801s
>   server node-1 192.168.0.4:3307  check port 49000 inter 20s fastinter 2s
> downinter 2s rise 3 fall 3
>   server node-3 192.168.0.6:3307 backup check port 49000 inter 20s fastinter
> 2s downinter 2s rise 3 fall 3
>   server node-4 192.168.0.5:3307 backup check port 49000 inter 20s fastinter
> 2s downinter 2s rise 3 fall 3
>
> listen murano_rabbitmq
>   bind 10.110.3.3:55572
>   balance  roundrobin
>   mode  tcp
>   option  tcpka
>   timeout client  48h
>   timeout server  48h
>   server node-1 192.168.0.4:55572  check inter 5000 rise 2 fall 3
>   server node-3 192.168.0.6:55572 backup check inter 5000 rise 2 fall 3
>   server node-4 192.168.0.5:55572 backup check inter 5000 rise 2 fall 3
>
>
> On Fri, Sep 23, 2016 at 7:30 AM Mike Lowe  wrote:
>>
>> Would you mind sharing an example snippet from HA proxy config?  I had
>> struggled in the past with getting this part to work.
>>
>>
>> > On Sep 23, 2016, at 12:13 AM, Serg Melikyan 
>> > wrote:
>> >
>> > Hi Joe,
>> >
>> > I can share some details on how murano is configured as part of the
>> > default Mirantis OpenStack configuration and try to explain why it's
>> > done that way; I hope it helps you in your case.
>> >
>> > As part of Mirantis OpenStack a second instance of RabbitMQ is
>> > deployed specifically for Murano, but its configuration is
>> > different from that of the RabbitMQ instance used by the other OpenStack
>> > components.
>> >
>> > Why use a separate instance of RabbitMQ?
>> > 1. To prevent access to the RabbitMQ supporting the
>> > whole cloud infrastructure, by limiting access at the networking level
>> > rather than relying on authentication/authorization
>> > 2. To prevent DDoS of the infrastructure RabbitMQ, by limiting
>> > access at the networking level
>> >
>> > Given that the second RabbitMQ instance is used only for murano-agent
>> > <-> murano-engine communications and murano-agent is running on the
>> > VMs, we had to make a couple of changes in the deployment of RabbitMQ
>> > (below, "RabbitMQ" refers to the RabbitMQ instance used by Murano
>> > for m-agent <-> m-engine communications):
>> >
>> > 1. RabbitMQ is not clustered, just a separate instance running on each
>> > controller node
>> > 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
>> > exposed
>> > 3. It has a different port number than the default
>> > 4. HAProxy is used; RabbitMQ is hidden behind it and HAProxy always
>> > points to the RabbitMQ on the current primary controller
>> >
>> > Note: how does murano-agent work? Murano-engine creates a queue with a
>> > unique name and puts configuration tasks into that queue; they are later
>> > picked up by murano-agent once the VM is booted, and murano-agent
>> > is configured to use the created queue through cloud-init.
>> >
>> > #1 Clustering
>> >
>> > * Per app deployment we create 1-N VMs and send 1-M
>> > configuration tasks, where in most cases N and M are less than
>> > 3.
>> > * Even if an app deployment fails due to a cluster failover, it
>> > can always be re-deployed by the user.
>> > * A controller-node failover will most probably lead to limited
>> > accessibility of the Heat, Nova & Neutron APIs, and the application
>> > deployment will fail 

Re: [Openstack-operators] Public cloud operators group in Barcelona

2016-09-26 Thread Silence Dogood
I figure if you have entity Y's workloads running on entity X's hardware...
and that's a 51% or greater portion of gross revenue... you are a public
cloud.

On Mon, Sep 26, 2016 at 11:35 AM, Kenny Johnston 
wrote:

> That seems like a strange definition. It doesn't incorporate the usual
> multi-tenancy requirement that traditionally separates private from public
> clouds. By that definition, Rackspace's Private Cloud offer, where we
> design, deploy and operate a single-tenant cloud on behalf of customers (in
> their data-center or ours) would be considered a "public" cloud.
>
> On Fri, Sep 23, 2016 at 3:54 PM, Rochelle Grober <
> rochelle.gro...@huawei.com> wrote:
>
>> Hi Matt,
>>
>>
>>
>> At considerable risk of heading down a rabbit hole... how are you
>> defining "public" cloud for these purposes?
>>
>>
>>
>> Cheers,
>>
>> Blair
>>
>>
>>
>> Any cloud that provides a cloud to a third party in exchange for money.
>> So: rent a VM, rent a collection of VMs, lease a fully operational cloud
>> spec'ed to your requirements, lease a team and HW with your cloud on
>> them.
>>
>>
>>
>> So any cloud that provides offsite IaaS to lessees
>>
>>
>>
>> --Rockyy
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
>
> --
> Kenny Johnston | irc:kencjohnston | @kencjohnston
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] - Workshop on Openstack-Federated Identity integration

2016-09-26 Thread Saverio Proto
Hello operators,

At GARR in Rome there will be an event shortly before Barcelona about
OpenStack and Identity Federation.

https://eventr.geant.org/events/2527

This is a use case that is very important for NRENs running public
clouds for universities, where an identity federation is already
deployed for other services.

At the meetup in Manchester when we talked about Federation we had an
interesting session:
https://etherpad.openstack.org/p/MAN-ops-Keystone-and-Federation

I tagged the mail with scientific-wg because this looks like a
use case that is of big interest to academic institutions. Look at the
Manchester etherpad, at the section "Who do you want to federate with?"

Thank you.

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Evacuate host with host ephemeral storage

2016-09-26 Thread Davíð Örn Jóhannsson
Well it seems that I’m out of luck with the nova migrate function

2016-09-26 12:16:06.249 28226 INFO nova.compute.manager 
[req-bfccd77e-b351-4c30-a5a4-a0e20fc62add 98b4044d95e34926aa53405f2b7c5a13 
1dda2478e30d44dda0ca752c6047725d - - -] [instance: 
b98be860-9253-43a1-9351-36a7aa125a51] Setting instance back to ACTIVE after: 
Instance rollback performed due to: Migration pre-check error: Migration is not 
supported for LVM backed instances

From: Marcin Iwinski
Date: Friday 23 September 2016 at 14:31
To: David Orn Johannsson, 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage

Hi,
double-check if the "nova migrate" option will work for you - it uses a different
mechanism than live migration, and judging by [1], depending on the release of
OpenStack you use it might work with LVM-backed ephemeral storage. And I
second Kostiantyn's email - we seem to be mixing evacuation with migration.


[1] https://bugs.launchpad.net/nova/+bug/1270305

Regards,
Marcin



On 23 Sep 2016 at 16:21:17, Davíð Örn Jóhannsson 
(davi...@siminn.is) wrote:

Thank you for the clarification. My digging around has thus far only revealed 
https://bugs.launchpad.net/nova/+bug/1499405 (live migration is not implemented 
for LVM backed storage)

If any one has any more info on this subject it would be much appreciated.

From: 
"kostiantyn.volenbovs...@swisscom.com"
Date: Friday 23 September 2016 at 13:59
To: David Orn Johannsson, 
"marcin.iwin...@gmail.com", 
"openstack-operators@lists.openstack.org"
Subject: RE: [Openstack-operators] Evacuate host with host ephemeral storage

Hi,

Here migration and evacuation are getting mixed up.
In the migration case you can access the ephemeral storage of your VM, and thus
you will copy that disk (= that file), either as an offline aka ‘cold’ migration
or via live migration.
In the evacuation case your Compute Host is either unavailable (or assumed to be
unavailable), and thus you can’t access (or must assume you can’t access)
whatever is stored on that Compute Host.
So in case your ephemeral disk (= the root disk of the VM) is actually on that
Compute Host, then you can’t access it, and evacuation will result in a rebuild
(= taking the original image from Glance, so typically you lose whatever
happened after the initial boot).

But in case you have something shared underneath (like NFS) – then use
--on-shared-storage:
nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
(I guess it might even detect that automatically.)
But LVM on top of an NFS share – that sounds like something not very straightforward
(not sure if it is OK with OpenStack)

See [1] and [2]
BR,
Konstantin
[1] 
http://docs.openstack.org/admin-guide/compute-configuring-migrations.html#section-configuring-compute-migrations
[2] http://docs.openstack.org/admin-guide/cli-nova-evacuate.html
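
To put the options side by side as commands (a sketch; server and host names
are placeholders, and behaviour with LVM-backed ephemeral disks depends on the
release, as discussed above):

nova migrate SERVER                          # cold migration, copies the disk
nova live-migration --block-migrate SERVER   # live migration without shared storage
nova evacuate SERVER HOST_B --on-shared-storage   # rebuild on HOST_B; the disk
                                                  # survives only if it is shared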
From: Davíð Örn Jóhannsson [mailto:davi...@siminn.is]
Sent: Friday, September 23, 2016 2:12 PM
To: Marcin Iwinski >; 
openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage

No I have not, I guess there is nothing else to do than just give it a go :)

Thanks for the pointer

From: Marcin Iwinski
Date: Friday 23 September 2016 at 11:39
To: David Orn Johannsson, 
"openstack-operators@lists.openstack.org"
Subject: Re: [Openstack-operators] Evacuate host with host ephemeral storage



On 23 Sep 2016 at 13:25:39, Davíð Örn Jóhannsson 
(davi...@siminn.is) wrote:
OpenStack Liberty
Ubuntu 14.04

I know that using block storage like Cinder you can evacuate instances from 
hosts, but in my case we are not yet using Cinder or other block storage 
solutions, we rely on local ephemeral storage, configured on using LVM

Nova.conf
[libvirt]
images_volume_group=vg_ephemeral
images_type=lvm

Is it possible to evacuate (migrate) ephemeral instances from compute hosts and 
if so does any one have any experience with that?


Hi Davíð

Have you actually tried the regular "nova migrate UUID" option? It does copy
the entire disk to a different compute - but I'm not sure if it works with
LVM. I've also used [1] ("nova live-migration --block-migrate UUID") in the past
- but unfortunately that also wasn't LVM-backed ephemeral storage.

[1] 
http://www.tcpcloud.eu/en/blog/2014/11/20/block-live-migration-openstack-environment/

BR
Marcin

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] MongoDB as Ceilometer backend - scaling

2016-09-26 Thread gordon chung
Agreed. I'm doing some benchmarking myself currently, which I will
publish soon. Whenever y'all do start testing, we welcome any feedback.

On 26/09/2016 4:43 AM, Tobias Urdin wrote:
> Hello Gordon,
>
> I have talked to a lot of different people at various companies; most of
> them (including us) have been looking towards Gnocchi and will surely
> use it in the future, however it's still missing packaging,
> documentation and production testing (being used in production).
>
> Therefore it's today mostly used for testing. But I agree that the void
> that Gnocchi tries to fill is a great addition to the Telemetry "family"
> of OpenStack.
>
> Best regards

cheers,
-- 
gord

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] MongoDB as Ceilometer backend - scaling

2016-09-26 Thread Tobias Urdin
Hello Gordon,

I have talked to a lot of different people at various companies; most of
them (including us) have been looking towards Gnocchi and will surely
use it in the future, however it's still missing packaging,
documentation and production testing (being used in production).

Therefore it's today mostly used for testing. But I agree that the void
that Gnocchi tries to fill is a great addition to the Telemetry "family"
of OpenStack.

Best regards


On 09/26/2016 03:27 AM, gordon chung wrote:
> I don't want to speak for the rest of the Telemetry contributors, but I don't
> think many (if any) of us suggest using MongoDB or Ceilometer's API for
> storage. It is basically a data dump of what Ceilometer is collecting, so
> it will be very, very verbose for most/all use cases.
>
> As Joseph mentioned, Gnocchi [1] was developed to target the billing/resource
> usage/health use case, so it's optimised to store large sets of
> time-based measurements. Gnocchi v3 was just released, so you may want to
> give that a try if it fits your use case. I've been doing some
> performance testing on it this cycle that might help (specifically if
> using the Ceph backend) [2].
>
> [1] http://gnocchi.xyz/
> [2] http://www.slideshare.net/GordonChung/gnocchi-profiling-v2
>
> On 14/09/2016 9:57 AM, Tobias Urdin wrote:
>> Hello,
>>
>> We are running Ceilometer with MongoDB as storage backend in production
>> and it's building up quite fast.
>>
>> I'm just curious: how large are the MongoDB setups people
>> are running with Ceilometer?
>>
>> More details about backup, replicas and sharding would also be appreciated.
>>
>>
>> I think we will have to look into moving our three replicas to sharding
>> in a short period of time.
>>
>> Best regards
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
> cheers,


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Public cloud operators group in Barcelona

2016-09-26 Thread Matt Jarvis
Hi Blair

Agree with you on a lot of that stuff, although on lifecycle management we
certainly have a bunch of tooling in place to handle scenarios like initial
creation of user environments (basic network and router setup), freezing
resources for non-payment, offboarding of customers after account deletion,
etc. You are correct in the sense that we only get involved in lifecycle
management if someone stops paying, although we've started doing some stuff
around automatic IP address reclaiming from routers where we have customers
sitting without any usage for extended periods.

Matt

On 26 September 2016 at 09:14, Blair Bethwaite 
wrote:

> Hi Matt,
>
> I think your dot points make sense. And yes, I was thinking about Science
> Cloud overlap. I see Science Clouds as potentially sharing most or all of
> these attributes (with the notable exception being charging in terms of end
> users seeing a $ figure; showback and/or instance/cpu hour quotas might be
> used instead). However, Science Clouds are probably not accessible to the
> general public but require membership in, or sponsorship from, some
> community in order to get an account.
>
> The specific problems you mention, rather than attributes of the
> deployment models, are probably the best way to define this in a way that
> lets folks determine their interest. The biggest difference seems to be
> that commercial providers have no major interest in the lifecycle
> management of customer infrastructure, whereas for Science Clouds turning
> things off and cleaning up data is a policy issue that does not appear to
> have resulted in any common tooling yet.
>
> Cheers,
> Blair
>
> On 22 Sep 2016 6:35 PM, "Matt Jarvis" 
> wrote:
>
>> Hey Blair
>>
>> Now you've done it ! OK, I'll bite and have a go at categorising that a
>> bit :
>>
>> 1. Multi-tenant - tenants need clear separation
>> 2. Self service sign up - customers on board themselves
>> 3. Some kind of charging model in place which requires resource accounting
>> 4. API endpoints and possibly management applications presented outside
>> of your own network
>>
>> Not sure that entirely works, but best I can come up with off the top of
>> my head. For commercial public clouds we have some very specific problem
>> spaces in common - handling credit cards and personal information, fraud
>> prevention, commercial modelling for capacity planning etc. Is where you're
>> going with this that some of the science clouds share some of the
>> attributes above ?
>>
>> Matt
>>
>> On 22 September 2016 at 00:40, Blair Bethwaite > > wrote:
>>
>>> Hi Matt,
>>>
>>> At considerable risk of heading down a rabbit hole... how are you
>>> defining "public" cloud for these purposes?
>>>
>>> Cheers,
>>> Blair
>>>
>>> On 21 September 2016 at 18:14, Matt Jarvis <
>>> matt.jar...@datacentred.co.uk> wrote:
>>>
 Given there are quite a few public cloud operators in Europe now, is
 there any interest in a public cloud group meeting as part of the ops
 meetup in Barcelona ? I already know many of you, but I think it could be
 very useful to share our experiences with a wider group.

 DataCentred Limited registered in England and Wales no. 05611763
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


>>>
>>>
>>> --
>>> Cheers,
>>> ~Blairo
>>>
>>
>>
>> DataCentred Limited registered in England and Wales no. 05611763
>
>

-- 
DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Public cloud operators group in Barcelona

2016-09-26 Thread Blair Bethwaite
Hi Matt,

I think your dot points make sense. And yes, I was thinking about Science
Cloud overlap. I see Science Clouds as potentially sharing most or all of
these attributes (with the notable exception being charging in terms of end
users seeing a $ figure; showback and/or instance/cpu hour quotas might be
used instead). However, Science Clouds are probably not accessible to the
general public but require membership in, or sponsorship from, some
community in order to get an account.

The specific problems you mention, rather than attributes of the deployment
models, are probably the best way to define this in a way that lets folks
determine their interest. The biggest difference seems to be that
commercial providers have no major interest in the lifecycle management of
customer infrastructure, whereas for Science Clouds turning things off and
cleaning up data is a policy issue that does not appear to have resulted in
any common tooling yet.

Cheers,
Blair

On 22 Sep 2016 6:35 PM, "Matt Jarvis"  wrote:

> Hey Blair
>
> Now you've done it ! OK, I'll bite and have a go at categorising that a
> bit :
>
> 1. Multi-tenant - tenants need clear separation
> 2. Self service sign up - customers on board themselves
> 3. Some kind of charging model in place which requires resource accounting
> 4. API endpoints and possibly management applications presented outside of
> your own network
>
> Not sure that entirely works, but best I can come up with off the top of
> my head. For commercial public clouds we have some very specific problem
> spaces in common - handling credit cards and personal information, fraud
> prevention, commercial modelling for capacity planning etc. Is where you're
> going with this that some of the science clouds share some of the
> attributes above ?
>
> Matt
>
> On 22 September 2016 at 00:40, Blair Bethwaite 
> wrote:
>
>> Hi Matt,
>>
>> At considerable risk of heading down a rabbit hole... how are you
>> defining "public" cloud for these purposes?
>>
>> Cheers,
>> Blair
>>
>> On 21 September 2016 at 18:14, Matt Jarvis > > wrote:
>>
>>> Given there are quite a few public cloud operators in Europe now, is
>>> there any interest in a public cloud group meeting as part of the ops
>>> meetup in Barcelona ? I already know many of you, but I think it could be
>>> very useful to share our experiences with a wider group.
>>>
>>> DataCentred Limited registered in England and Wales no. 05611763
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>
>>
>> --
>> Cheers,
>> ~Blairo
>>
>
>
> DataCentred Limited registered in England and Wales no. 05611763
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators