Re: [Openstack-operators] Openstack and Ceph

2017-02-19 Thread Joseph Bajin
Another question is what type of SSDs you are using.  There is a big
difference not just between SSD vendors but also between sizes, as their
internals make a big difference in how the OS interacts with them.

This link is still very useful today:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
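If it helps, the test in that post essentially measures small O_DSYNC writes
against the raw device. From memory it boils down to something like the
following (treat the exact flags as an approximation of what the post
describes, and note it is destructive -- only run it against a device that
holds no data):

  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test

A good journal SSD sustains thousands of these synchronous 4k writes per
second; consumer drives often collapse to a few hundred.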



On Fri, Feb 17, 2017 at 12:54 PM, Alex Hübner  wrote:

> Are these nodes connected to dedicated or shared (in the sense that there
> are other workloads running) network switches? How fast (1G, 10G or faster)
> are the interfaces? Also, how much RAM are you using? There's a rule of
> thumb that says you should dedicate at least 1 GB of RAM for each 1 TB of
> raw disk space. How are the clients consuming the storage? Are they virtual
> machines? Are you using iSCSI to connect them? Are these clients the same
> ones you're testing against your regular SAN storage, and are they
> positioned in a similar fashion (i.e. over a steady network channel)? What
> Ceph version are you using?
>
> Finally, replicas are normally faster than erasure coding, so you're good
> on this. It's *never* a good idea to enable RAID cache, even when it
> apparently improves IOPS (the magic of Ceph relies on the cluster, its
> network and the number of nodes; don't approach the nodes as if they were
> isolated storage servers). Also, RAID0 should only be used as a last resort
> for cases where the disk controller doesn't offer JBOD mode.
>
> []'s
> Hubner
>
> On Fri, Feb 17, 2017 at 7:19 AM, Vahric Muhtaryan 
> wrote:
>
>> Hello All ,
>>
>> First, thanks for your answers. Looks like everybody is a Ceph lover :)
>>
>> I believe you have already run some tests and have some results. Until now
>> we have used traditional storage like IBM V7000, XIV or NetApp, and we have
>> been very happy to get good IOPS and to provide the same performance to all
>> instances.
>>
>> We saw that each OSD eats a lot of CPU, and when multiple clients try to
>> get the same performance from Ceph it looks like it is not possible; Ceph
>> shares everything across clients and we cannot reach the hardware's raw
>> IOPS capacity with Ceph. For example, each SSD can do 90K IOPS, we have
>> three in each node and have 6 nodes, so we should get better results than
>> what we have now!
>>
>> Could you please share your hardware configs and IOPS tests, and advise
>> whether our expectations are correct or not?
>>
>> We are using Kraken, almost all debug options are set to 0/0, and we
>> modified op_tracker and some other ops-based configs too!
>>
>> Our Hardware
>>
>> 6 x Node
>> Each Node Have :
>> 2 x Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz (16 cores total, HT enabled)
>> 3 SSD + 12 HDD (SSDs are used as journals, 4 HDDs per SSD)
>> Each disk is configured as RAID 0 (we did not see any performance difference
>> with the RAID card's JBOD mode, so we continued with RAID 0)
>> The RAID card's write-back cache is also enabled because it adds extra IOPS
>> too!
>>
>> Our Test
>>
>> It's 100% random write.
>> The Ceph pool is configured with 3 replicas. (We did not use 2 because
>> during failover the whole system got stuck and we could not come up with
>> great tuning for it, because some reading said that under high load OSDs can
>> go down and come back up again, and we should care about this too!)
>>
>> Test Command : fio --randrepeat=1 --ioengine=libaio --direct=1
>> --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=256 --size=1G
>> --numjobs=8 --readwrite=randwrite --group_reporting
>>
>> Achieved IOPS : 35K (single client)
>> We tested up to 10 clients, across which Ceph shares this capacity fairly,
>> at almost 4K IOPS each
>>
>> Thanks
>> Regards
>> Vahric Muhtaryan
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Reserve an external network for 1 tenant

2016-10-05 Thread Joseph Bajin
I believe you can actually do this in Liberty..

http://docs.openstack.org/liberty/networking-guide/adv-config-network-rbac.html
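For reference, the RBAC call ends up looking roughly like this with the
Liberty/Mitaka-era neutron CLI (the tenant and network IDs are placeholders):

  neutron rbac-create --target-tenant <tenant-uuid> \
      --action access_as_external --type network <external-net-uuid>

With a policy like that in place, only the target tenant should be able to
use the network as an external/gateway network; I'd double-check the exact
behaviour against the RBAC docs for your release.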



On Mon, Oct 3, 2016 at 1:00 AM, Kevin Benton  wrote:

> You will need Mitaka to get an external network that is only available to
> specific tenants. That is what the 'access_as_external' you identified does.
>
> Search for the section "Allowing a network to be used as an external
> network" in http://docs.openstack.org/mitaka/networking-guide/
> config-rbac.html.
>
> On Thu, Sep 29, 2016 at 5:01 AM, Saverio Proto  wrote:
>
>> Hello,
>>
>> Context:
>> - openstack liberty
>> - ubuntu trusty
>> - neutron networking with vxlan tunnels
>>
>> we have been running Openstack with a single external network so far.
>>
>> Now we have a specific VLAN in our datacenter with some hardware boxes
>> that need a connection to a specific tenant network.
>>
>> To make this possible I changed the configuration of the network node
>> to support multiple external networks. I am able to create a router
>> and set as external network the new physnet where the boxes are.
>>
>> Everything looks nice except that all the projects can benefit from
>> this new external network. In any tenant I can create a router, and
>> set the external network and connect to the boxes. I cannot restrict
>> it to a specific tenant.
>>
>> I found this piece of documentation:
>>
>> https://wiki.openstack.org/wiki/Neutron/sharing-model-for-
>> external-networks
>>
>> So it looks like it is impossible to have a flat external network
>> reserved for 1 specific tenant.
>>
>> I also tried to follow this documentation:
>> http://docs.openstack.org/liberty/networking-guide/adv-confi
>> g-network-rbac.html
>>
>> But it does not say whether it is possible to set a policy on an
>> external network to limit the sharing.
>>
>> It did not work for me so I guess this does not work when the secret
>> network I want to create is external.
>>
>> There is an --action access_as_external option that is not clear to me.
>>
>> It also looks like this feature is evolving in Newton:
>> http://docs.openstack.org/draft/networking-guide/config-rbac.html
>>
>> Has anyone tried similar setups? What is the minimum OpenStack
>> version to get this done?
>>
>> thank you
>>
>> Saverio
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] MongoDB as Ceilometer backend - scaling

2016-09-17 Thread Joseph Bajin
I think many people have tried to run Ceilometer with several different
backends and, for a while, were largely unsuccessful.

If you look at Liberty, and now Mitaka, there has been a lot of work to
separate the alarming from the actual data parts.  You now have Aodh on
the alarming side, and Ceilometer using new backends like Gnocchi. I'm
hearing a lot of good things about Gnocchi, and we are going to start
taking a look at re-deploying Ceilometer with that backend.

On Wed, Sep 14, 2016 at 9:57 AM, Tobias Urdin 
wrote:

> Hello,
>
> We are running Ceilometer with MongoDB as storage backend in production
> and it's building up quite fast.
>
> I just have a simple question: how large are the MongoDB setups people
> are running with Ceilometer?
>
> More details about backup, replicas and sharding would also be appreciated.
>
>
> I think we will have to look into moving our three replicas to sharding
> in a short period of time.
>
> Best regards
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Interop-Challenge]Please review the work load test scripts

2016-09-17 Thread Joseph Bajin
It has been merged.



On Thu, Sep 15, 2016 at 9:44 AM, Tong Li  wrote:

> Dear openstack/osops-tools-contrib core reviewers, please review the
> following patch which provides ansible and terraform work load tests for
> Interop-challenge effort.
>
> *https://review.openstack.org/#/c/366784/*
> 
>
> any comments will be appreciated and addressed accordingly.
>
> Thanks.
>
> Tong Li
> IBM Open Technology
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Generate list of default configuration parameters

2016-09-07 Thread Joseph Bajin
I'm not sure if this fits what you need, but the Kolla project produces a
lot of default values that are included in their deployments.  You could
check that out and see if it works for you.

On Wed, Sep 7, 2016 at 6:51 PM, Graeme Gillies  wrote:

> On 08/09/16 05:41, Dan Trainor wrote:
> > Hi -
> >
> > Back in the day when I was using Puppet, I found it particularly useful
> > to "ask" Puppet for a list of configuration parameters[1] to extract
> > information on default parameters.
> >
> > Does the 'openstack' command or any related command have a similar
> > invocation?  I am familiar with 'openstack configuration show' but the
> > information I'm most interested in is defaults for values that would be
> > populated in undercloud.conf - well before I could expect to use
> > 'openstack configuration show'.
> >
> > Thanks!
> > -dant
> >
> > ---
> >
> > [1] https://docs.puppet.com/puppet/latest/reference/config_print.html
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> Hi Dan,
>
> I assume, from your mention of the python-openstackclient and
> undercloud.conf, that you are talking about TripleO in particular here.
> Unfortunately at this stage there is no such integration for what you
> request. If you are interested in all configuration options for
> undercloud.conf, you can see them in
> /usr/share/instack-undercloud/undercloud.conf.sample
>
> It's worth noting that the functionality you describe from Puppet and what
> is given in 'openstack configuration show' are slightly different as
> well. Puppet aims to print/provide all possible configuration options
> you might use in a server or agent configuration
> (/etc/puppet/puppet.conf), while 'openstack configuration show' provides a
> high-level overview of how the OpenStack cloud is configured, including
> API versions, auth URLs, etc. It hasn't got the ability to provide or
> interrogate specific components about all their potential configuration
> options for their config files (at least, this is my understanding;
> someone can correct me here).
>
> Regards,
>
> Graeme
>
> --
> Graeme Gillies
> Principal Systems Administrator
> Openstack Infrastructure
> Red Hat Australia
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible work load test for interop patch set

2016-09-01 Thread Joseph Bajin
Ahh..

Ok, when I read "InterOp Challenge" it sounded like an event and not a
tool.  Since it is a tool, it totally makes sense, and it could be used to
evaluate a platform, which I would agree with you may not necessarily be an
operator tool, but could be something operators would use.

I just didn't have any information about it and was just trying to make
sure it was something that should be merged.

Thanks for following up on it and double checking everything!

On Wed, Aug 31, 2016 at 3:47 PM, Kris G. Lindgren <klindg...@godaddy.com>
wrote:

> I originally agreed with you, but then I thought about it more this way:
> it’s a tool to test whether clouds are interop compatible (at least that
> Heat works the same on the two clouds).  While it is not technically a tool
> to manage OpenStack, it is still something that some Operators would want to
> know about if they are looking at doing hybrid cloud.  Or they may want to
> ensure that two of their own private clouds are interop compatible.
>
>
>
> ___
>
> Kris Lindgren
>
> Senior Linux Systems Engineer
>
> GoDaddy
>
>
>
> *From: *Joseph Bajin <josephba...@gmail.com>
> *Date: *Wednesday, August 31, 2016 at 1:39 PM
> *To: *"Yih Leong, Sun." <yihle...@gmail.com>
> *Cc: *OpenStack Operators <openstack-operators@lists.openstack.org>,
> defcore-committee <defcore-commit...@lists.openstack.org>
> *Subject: *Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible
> work load test for interop patch set
>
>
>
> It looks like this was merged, but no one really answered my questions
> about an "InterOp Challenge" code base going into the Operators
> repository.
>
>
>
> --Joe
>
>
>
> On Wed, Aug 31, 2016 at 12:23 PM, Yih Leong, Sun. <yihle...@gmail.com>
> wrote:
>
> Can someone from osops please review the following patch?
>
> https://review.openstack.org/#/c/351799/
>
>
>
> The patchset was last updated Aug 11th.
>
> Thanks!
>
>
>
>
>
>
>
> On Tue, Aug 16, 2016 at 7:17 PM, Joseph Bajin <josephba...@gmail.com>
> wrote:
>
> Sorry about that. I've been a little busy as of late, and was able to get
> around to taking a look.
>
>
>
> I have a question about these.   What exactly is the Interop Challenge?
> The OSOps repos are usually for code that can help Operators maintain and
> run their cloud.   These don't necessarily look like what we normally see
> submitted.
>
>
>
> Can you expand on what the InterOp Challenge is and if it is something
> that Operators would use?
>
>
>
> Thanks
>
>
>
> Joe
>
>
>
> On Tue, Aug 16, 2016 at 3:02 PM, Shamail <itzsham...@gmail.com> wrote:
>
>
>
> > On Aug 16, 2016, at 1:44 PM, Christopher Aedo <d...@aedo.net> wrote:
> >
> > Tong Li, I think the best place to ask for a look would be the
> > Operators mailing list
> > (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> ).
> > I've cc'd that list here, though it looks like you've already got a +2
> > on it at least.
> +1
>
> I had contacted JJ earlier and he told me that the best person to contact
> would be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to
> this message.
>
> >
> > -Christopher
> >
> >> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li <liton...@us.ibm.com> wrote:
> >> The patch set has been submitted to github for awhile, can some one
> please
> >> review the patch set here?
> >>
> >> https://review.openstack.org/#/c/354194/
> >>
> >> Thanks very much!
> >>
> >> Tong Li
> >> IBM Open Technology
> >> Building 501/B205
> >> liton...@us.ibm.com
> >>
> >>
> >> ___
> >> Defcore-committee mailing list
> >> defcore-commit...@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
> >
> > ___
> > Defcore-committee mailing list
> > defcore-commit...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
> ___
>
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> ___
> Defcore-committee mailing list
> defcore-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible work load test for interop patch set

2016-09-01 Thread Joseph Bajin
Hi Catherine..

The OSOps Working Group meets on odd Wednesdays at 1900 UTC in
#openstack-meeting-4. You can find out more here -
http://eavesdrop.openstack.org/#OpenStack_OSOps/Monitoring_and_Tools_Working_Group

Thanks

Joe

On Wed, Aug 31, 2016 at 5:07 PM, Catherine Cuong Diep <cd...@us.ibm.com>
wrote:

> Kris ++.
>
> Kris' statement below is the exact reason why the Interop-Challenge team
> thinks the Operators repository is a good place to store the interop workload
> deployment tool sets. In addition, during today's Interop-Challenge IRC
> meeting, we decided that it would help if we could have a representative from
> the Operators team join the Interop-Challenge IRC meeting
> (Wednesdays at 1400 UTC, IRC channel (on freenode): #openstack-defcore) if
> possible.
>
> What is the IRC channel for the "OSOps Working Group" ? As you indicated,
> that would be a good place for us to ask questions. Thanks!
>
> Catherine Diep
> RefStack Project PTL
> IBM
> - Forwarded by Catherine Cuong Diep/San Jose/IBM on 08/31/2016 01:25
> PM -----
>
> From: "Kris G. Lindgren" <klindg...@godaddy.com>
> To: Joseph Bajin <josephba...@gmail.com>, "Yih Leong, Sun." <
> yihle...@gmail.com>
> Cc: OpenStack Operators <openstack-operators@lists.openstack.org>,
> defcore-committee <defcore-commit...@lists.openstack.org>
> Date: 08/31/2016 12:52 PM
> Subject: Re: [OpenStack-DefCore] [Openstack-operators] [OSOps] Ansible
> work load test for interop patch set
> --
>
>
>
> I originally agreed with you, but then I thought about it more this way:
> it’s a tool to test whether clouds are interop compatible (at least that
> Heat works the same on the two clouds). While it is not technically a tool to
> manage OpenStack, it is still something that some Operators would want to
> know about if they are looking at doing hybrid cloud. Or they may want to
> ensure that two of their own private clouds are interop compatible.
>
> ___
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> *From: *Joseph Bajin <josephba...@gmail.com>
> *Date: *Wednesday, August 31, 2016 at 1:39 PM
> *To: *"Yih Leong, Sun." <yihle...@gmail.com>
> *Cc: *OpenStack Operators <openstack-operators@lists.openstack.org>,
> defcore-committee <defcore-commit...@lists.openstack.org>
> *Subject: *Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] Ansible
> work load test for interop patch set
>
> It looks like this was merged, but no one really answered my questions
> about an "InterOp Challenge" code base going into the Operators repository.
>
> --Joe
>
> On Wed, Aug 31, 2016 at 12:23 PM, Yih Leong, Sun. <*yihle...@gmail.com*
> <yihle...@gmail.com>> wrote:
> Can someone from osops please review the following patch?
> *https://review.openstack.org/#/c/351799/*
> <https://review.openstack.org/#/c/351799/>
>
> The patchset was last updated Aug 11th.
> Thanks!
>
>
>
> On Tue, Aug 16, 2016 at 7:17 PM, Joseph Bajin <*josephba...@gmail.com*
> <josephba...@gmail.com>> wrote:
>
>Sorry about that. I've been a little busy as of late, and was able to
>get around to taking a look.
>
>I have a question about these. What exactly is the Interop Challenge?
>The OSOps repos are usually for code that can help Operators maintain and
>run their cloud. These don't necessarily look like what we normally see
>submitted.
>
>Can you expand on what the InterOp Challenge is and if it is something
>that Operators would use?
>
>Thanks
>
>Joe
>
>On Tue, Aug 16, 2016 at 3:02 PM, Shamail <*itzsham...@gmail.com*
><itzsham...@gmail.com>> wrote:
>
>
>> On Aug 16, 2016, at 1:44 PM, Christopher Aedo <*d...@aedo.net*
><d...@aedo.net>> wrote:
>>
>> Tong Li, I think the best place to ask for a look would be the
>> Operators mailing list
>> (
>*http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators*
><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators>
>).
>> I've cc'd that list here, though it looks like you've already got a
>+2
>> on it at least.
>+1
>
>I had contacted JJ earlier and he told me that the best person to
>contact would be Joseph Bajin (RaginBajin in IRC). I've also added an OSOps
>tag to this message.
>>
>> -Christopher
>>
>>> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li <*liton...@us.ibm.com*
><liton

Re: [Openstack-operators] [OpenStack-DefCore] [OSOps] work load test for docker swarm fix patch

2016-08-31 Thread Joseph Bajin
Hi there,

That patch was merged earlier today..

The patch was rebased at 2pm EST. It was reviewed again earlier this morning,
and then +1'd on the 25th of August.

You should be good to go.   If you have any questions like this, the
OSOps Working Group is out there to help as well.  We had another meeting
where only one person showed up (me), so this type of stuff is great to
work on and get informed about.



Thanks

Joe

On Wed, Aug 31, 2016 at 3:16 PM, Tong Li  wrote:

> Can someone from osops please review the following patch? It has been
> sitting there for a long time; only 2 lines are removed. Please help out to
> get it merged so that users do not have to apply the patch set to run it.
>
> https://review.openstack.org/#/c/356586/
>
> Tong Li
> IBM Open Technology
> Building 501/B205
> liton...@us.ibm.com
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OSOps] [OpenStack-DefCore] Ansible work load test for interop patch set

2016-08-16 Thread Joseph Bajin
Sorry about that. I've been a little busy as of late, and was able to get
around to taking a look.

I have a question about these.   What exactly is the Interop Challenge?
The OSOps repos are usually for code that can help Operators maintain and
run their cloud.   These don't necessarily look like what we normally see
submitted.

Can you expand on what the InterOp Challenge is and if it is something that
Operators would use?

Thanks

Joe

On Tue, Aug 16, 2016 at 3:02 PM, Shamail <itzsham...@gmail.com> wrote:

>
>
> > On Aug 16, 2016, at 1:44 PM, Christopher Aedo <d...@aedo.net> wrote:
> >
> > Tong Li, I think the best place to ask for a look would be the
> > Operators mailing list
> > (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> ).
> > I've cc'd that list here, though it looks like you've already got a +2
> > on it at least.
> +1
>
> I had contacted JJ earlier and he told me that the best person to contact
> would be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to
> this message.
> >
> > -Christopher
> >
> >> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li <liton...@us.ibm.com> wrote:
> >> The patch set has been submitted to github for awhile, can some one
> please
> >> review the patch set here?
> >>
> >> https://review.openstack.org/#/c/354194/
> >>
> >> Thanks very much!
> >>
> >> Tong Li
> >> IBM Open Technology
> >> Building 501/B205
> >> liton...@us.ibm.com
> >>
> >>
> >> ___
> >> Defcore-committee mailing list
> >> defcore-commit...@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
> >
> > ___
> > Defcore-committee mailing list
> > defcore-commit...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] (no subject)

2016-07-30 Thread Joseph Bajin
We let our people access the Glance API.  Glance uses Keystone, so user
authentication is done the same way it is for all the other services within
OpenStack.  We make the Glance API ports available to them so they can use
the service.
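From the client's point of view it is just the normal glance CLI pointed at
our public endpoint, e.g. something along these lines (the auth URL and
credentials below are obviously placeholders):

  export OS_AUTH_URL=https://api.cloud.example.com:5000/v2.0
  export OS_TENANT_NAME=customer-project
  export OS_USERNAME=customer
  export OS_PASSWORD=...
  glance image-list
  glance image-create --name my-image --disk-format qcow2 \
      --container-format bare --file my-image.qcow2

Keystone validates the token, and Glance enforces its own policy.json on top
of that.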

What do you think would prevent you from offering this to your
customers?  I'm curious to know, as Glance is used in a lot of places.

On Fri, Jul 29, 2016 at 4:30 PM, Serguei Bezverkhi (sbezverk) <
sbezv...@cisco.com> wrote:

>
> Hello folks,
>
> I am curious if any of you allow your clients to access, let's say, the
> Glance API by using the glance client CLI. If you do, I would appreciate it
> if you could share how it is done. Do you expose the Glance API ports to the
> outside with some sort of authentication proxy protection, or is it done
> differently?
>
> Thank you
>
> Serguei
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OSOps] Bi-Weekly Meeting - This coming Wednesday 7/20/2016

2016-07-20 Thread Joseph Bajin
Hello,

I'm going to have to cancel this week's OSOps meeting set for today,
unless someone volunteers to run the meeting.

If not, then the next meeting is set for August 3rd, 2016 with the same
agenda.

--Joe


On Mon, Jul 18, 2016 at 7:41 PM, Joseph Bajin <josephba...@gmail.com> wrote:

> Everyone,
>
> The agenda is up for this week's OSOps meeting.  You can find the agenda
> here:
>
> https://etherpad.openstack.org/p/osops-irc-meeting-20160720
>
> I hope to have some ideas on how OSOps can help and how we can get more
> people to participate.
>
> The meeting will start at 1900 UTC in #openstack-meeting-4
>
> Thanks
>
> --Joe
>
>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [OSOps] Bi-Weekly Meeting - This coming Wednesday 7/20/2016

2016-07-18 Thread Joseph Bajin
Everyone,

The agenda is up for this week's OSOps meeting.  You can find the agenda
here:

https://etherpad.openstack.org/p/osops-irc-meeting-20160720

I hope to have some ideas on how OSOps can help and how we can get more
people to participate.

The meeting will start at 1900 UTC in #openstack-meeting-4

Thanks

--Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Bandwidth limitations

2016-06-29 Thread Joseph Bajin
Hi there,

It looks like QoS is already available in the Mitaka release.   Maybe
it doesn't have all the features you need, but it looks to be a good start.
http://docs.openstack.org/mitaka/networking-guide/adv-config-qos.html
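From a quick skim, the workflow in that guide is roughly the following (the
policy name, numbers, and port UUID are just placeholders):

  neutron qos-policy-create bw-limiter
  neutron qos-bandwidth-limit-rule-create bw-limiter \
      --max-kbps 3000 --max-burst-kbps 300
  neutron port-update <port-uuid> --qos-policy bw-limiter

so applying it in an automated fashion would mostly be a matter of scripting
the port-update step.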

I haven't used it yet, but maybe someone else will pipe up with some
experience.

--Joe

On Wed, Jun 29, 2016 at 12:36 PM, Daniel Levy  wrote:

> Hi all,
> I'd like to learn about potential solutions anyone out there is using for
> bandwidth limitations on VMs, potentially applying QoS (quality of service)
> rules on the VM ports in an automated fashion.
> If there are no current solutions, I might submit a blueprint to tackle
> this issue.
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [osops] OSOps meeting today in #openstack-meeting-4 1900 UTC

2016-06-01 Thread Joseph Bajin
Everyone,

We are going to be canceling today's meeting. For some reason, we are now
off the schedule of odd weeks, so we are bumping up against another group's
meeting.   This means that we are moving our meeting to next week to
get back on the odd-week schedule.

Thanks

Joe

On Wed, Jun 1, 2016 at 8:44 AM, Joseph Bajin <josephba...@gmail.com> wrote:

> Operators,
>
> Our bi-weekly meeting is today in the #openstack-meeting-4 room beginning
> at 1900 UTC.
>
> There isn't much on the agenda today, as we haven't gotten a lot of
> participation or volunteering.   You can review the agenda here. [1]   If
> you have anything that you think the Operators community can help with,
> please come join the conversation.
>
>
> Thanks
>
> Joe
>
>
> [1] https://etherpad.openstack.org/p/osops-irc-meeting-20160601
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps meeting today in #openstack-meeting-4 1900 UTC

2016-06-01 Thread Joseph Bajin
Operators,

Our bi-weekly meeting is today in the #openstack-meeting-4 room beginning
at 1900 UTC.

There isn't much on the agenda today, as we haven't gotten a lot of
participation or volunteering.   You can review the agenda here. [1]   If
you have anything that you think the Operators community can help with,
please come join the conversation.


Thanks

Joe


[1] https://etherpad.openstack.org/p/osops-irc-meeting-20160601
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Fwd: [openstack-dev] [nova] I'm going to expire open bug reports older than 18 months.

2016-05-27 Thread Joseph Bajin
All the different projects have different ways of reporting bugs and
looking for features.

You can see a few of the different ways that the Operators discussed back
at the Summit here -
https://etherpad.openstack.org/p/AUS-ops-Requests-for-Enhancement-Process

That should give you some more context on how to look for adding
bugs/feedback.

--Joe

On Fri, May 27, 2016 at 2:40 PM, Robert Starmer  wrote:

> Seems like a great approach.  You might want to also include:
>
> This bug was probably not triaged due to lack of information to reproduce
> the issue.  Please include as much information about the problem including
> steps to allow a developer to reproduce the issue in order for your time in
> reporting it to be useful for the community!
>
> Is there a 'best practice for reporting a bug' document somewhere, it'd
> likely be very useful to include a link in lieu of a message like the one
> above...
>
> R
>
> On Fri, May 27, 2016 at 9:59 AM, Markus Zoeller <
> mzoel...@linux.vnet.ibm.com> wrote:
>
>> On 27.05.2016 15:47, Vincent Legoll wrote:
>> > Hello,
>> >
>> > Le 27/05/2016 15:25, Markus Zoeller a écrit :
>> >> I don't see a benefit in leaving very old bug reports open when nobody
>> >> is working on it (again, a resource problem). Closing it (with "Won't
>> >> Fix") is explicit and easy to query. The information is not lost. This
>> >> does*not*  mean we don't care about the reported issues. It's simply
>> >> just more than we can currently handle.
>> >
>> > Are you sure "won't fix" is the right message you want to convey to the
>> > users that at least came to report something ?
>> >
>> > Isn't there an "expired" status or something else better suited ?
>> >
>> > "Won't fix" is a very strong message for a user.
>> >
>> > At least put a message explaining this is not really "we don't want to
>> > fix it" but "we expired old stale bugs"...
>> >
>>
>> You're right, there is a status "Expired" which can be set by a script
>> (but not the web UI). I don't have a strong reason to not use it.
>>
>> As explained in the original email, I intend to add this comment to the
>> expired bug reports:
>>
>>  This is an automated cleanup. This bug report got closed because
>>  it is older than 18 months and there is no open code change to
>>  fix this. After this time it is unlikely that the circumstances
>>  which lead to the observed issue can be reproduced.
>>  If you can reproduce it, please:
>>  * reopen the bug report
>>  * AND leave a comment "CONFIRMED FOR: "
>>Only still supported release names are valid.
>>valid example: CONFIRMED FOR: LIBERTY
>>invalid example: CONFIRMED FOR: KILO
>>  * AND add the steps to reproduce the issue (if applicable)
>>
>> I'm open for suggestions to make this sound better. Thanks for this
>> feedback.
>>
>> --
>> Regards, Markus Zoeller (markus_z)
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps Meeting - Wednesday 1900UTC #openstack-meeting-4

2016-05-16 Thread Joseph Bajin
Everyone,

Our next OSOps group meeting will be held this coming Wednesday at 1900 UTC
in the #openstack-meeting-4 IRC room.

I've posted our agenda for this week:

https://etherpad.openstack.org/p/osops-irc-meeting-20160518


Thanks

Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Gathering Actions from the Informal Operators Meetup

2016-05-14 Thread Joseph Bajin
Everyone,

The OSOps team would like the Operator Community's help in gathering issues
and problems from the informal Operators meetup.[1]

Here is the etherpad that we have set up.[2] We are asking users to take a
look at the original etherpad[1] and, if they identify issues that should be
raised with projects, to add them to the new etherpad.[2]  We ask that you
provide some contact information as well, so that we can follow up in the
event additional questions or feedback are needed.


Thanks

Joe

[1] - https://etherpad.openstack.org/p/AUS-ops-informal-meetup
[2] - https://etherpad.openstack.org/p/osops-informal-meetup-feedback
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Anyone else use vendordata_driver in nova.conf?

2016-05-13 Thread Joseph Bajin
There is also a golang library that can validate tokens..

http://gophercloud.io/docs/identity/v3/



On Thu, May 12, 2016 at 11:25 PM, David Medberry 
wrote:

> There's a jython implementation of keystone and I thought there was other
> work to validate tokens from within Java. Added Jim Baker to the thread.
>
> -d
>
> On Thu, May 12, 2016 at 5:06 PM, Michael Still  wrote:
>
>> I'm just going to reply to myself here with another status update.
>>
>> The design seems largely settled at this point, with one exception -- how
>> does nova authenticate with the external microservice?
>>
>> The current proposal is to have nova use the client's keystone token to
>> authenticate with the external microservice. This is a neat solution
>> because its what nova does when talking to other services in your OpenStack
>> deployment, so its consistent and well understood.
>>
>> The catch here is that it means your external microservice needs to know
>> how to do keystone authentication. That's well understood for python
>> microservices, and I can provide sample code for that case using the
>> keystone wsgi middleware. On the other hand, its harder for things like
>> Java where I'm not sure I'm aware of any keystone auth implementation. Is
>> effectively requiring the microservices to be written in python a
>> particular problem? I'm hoping not given that all the current plugins are
>> written in python by definition.
>>
>> Cheers,
>> Michael
>>
>>
>>
>>
>> On Wed, May 4, 2016 at 7:37 AM, Michael Still  wrote:
>>
>>> Hey,
>>>
>>> I just wanted to let people know that the review is progressing, but we
>>> have a question.
>>>
>>> Do operators really need to call more than one external REST service to
>>> collect vendordata? We can implement that in nova, but it would be nice to
>>> reduce the complexity to only having one external REST service. If you
>>> needed to call more than one service you could of course write a REST
>>> service that aggregated REST services.
>>>
>>> Does anyone in the operator community have strong feelings either way?
>>> Should nova be able to call more than one external vendordata REST service?
>>>
>>> Thanks,
>>> Michael
>>>
>>>
>>>
>>>
>>> On Sat, Apr 30, 2016 at 4:11 AM, Michael Still 
>>> wrote:
>>>
 So, after a series of hallway track chats this week, I wrote this:

 https://review.openstack.org/#/c/310904/

 Which is a proposal for how to implement vendordata in a way which
 would (probably) be acceptable to nova, whilst also meeting the needs of
 operators. I should reinforce that because this week is so hectic nova core
 hasn't really talked about this yet, but I am pretty sure I understand and
 have addressed Sean's concerns.

 I'd be curious as to if the proposed solution actually meets your needs.

 Michael




 On Mon, Apr 18, 2016 at 10:55 AM, Fox, Kevin M 
 wrote:

> We've used it too to work around the lack of instance users in nova.
> Please keep it until a viable solution can be reached.
>
> Thanks,
> Kevin
> --
> *From:* David Medberry [openst...@medberry.net]
> *Sent:* Monday, April 18, 2016 7:16 AM
> *To:* Ned Rhudy
> *Cc:* openstack-operators@lists.openstack.org
>
> *Subject:* Re: [Openstack-operators] Anyone else use
> vendordata_driver in nova.conf?
>
> Hi Ned, Jay,
>
> We use it also and I have to agree, it's onerous to require users to
> add that functionality back in. Where was this discussed?
>
> On Mon, Apr 18, 2016 at 8:13 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) <
> erh...@bloomberg.net> wrote:
>
>> Requiring users to remember to pass specific userdata through to
>> their instance at every launch in order to replace functionality that
>> currently works invisible to them would be a step backwards. It's an
>> alternative, yes, but it's an alternative that adds burden to our users 
>> and
>> is not one we would pursue.
>>
>> What is the rationale for desiring to remove this functionality?
>>
>> From: jaypi...@gmail.com
>> Subject: Re: [Openstack-operators] Anyone else use vendordata_driver
>> in nova.conf?
>>
>> On 04/18/2016 09:24 AM, Ned Rhudy (BLOOMBERG/ 731 LEX) wrote:
>> > I noticed while reading through Mitaka release notes that
>> > vendordata_driver has been deprecated in Mitaka
>> > (https://review.openstack.org/#/c/288107/) and is slated for
>> removal at
>> > some point. This came as somewhat of a surprise to me - I searched
>> > openstack-dev for vendordata-related subject lines going back to
>> January
>> > and found no discussion on the matter (IRC logs, while available on
>> > eavesdrop, are not trivially searchable without a little scripting
>> to
>> > fetch them 

Re: [Openstack-operators] Moving from distro packages to containers (or virtualenvs...)

2016-05-12 Thread Joseph Bajin
On Thu, May 12, 2016 at 5:04 PM, Joshua Harlow 
wrote:

> Hi there all-ye-operators,
>
> I am investigating how to help move godaddy from rpms to a container-like
> solution (virtualenvs, lxc, or docker...) and a set of questions that comes
> up is the following (and I would think that some folks on this mailing list
> may have some useful insight into the answers):
>
> * Have you done the transition?
>
We've done the transition to containers using both RPMs and source
code.  We started out with just putting the RPMs into the container.  We
then moved to building the containers from source.  There has since been a
bit of a change in direction that is requiring us to go back to RPMs, which
was a simple switch.

The biggest thing we had to think about was the configuration files.  We
wanted them to be as easy and clean as possible.  We didn't want to keep
creating tons of container images for all the different environments.  At
the end of the day, we realized that we could use etcd to drive the
configuration through environment variables, which makes configuration
changes very easy.
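As a rough sketch (the etcd paths and tools here are just how I'd describe
it, not our exact setup), the container entrypoint pulls values out of etcd
and renders them into the service config before starting the daemon:

  DEBUG=$(etcdctl get /config/prod/nova/DEFAULT/debug)
  crudini --set /etc/nova/nova.conf DEFAULT debug "$DEBUG"
  exec nova-api

That way one image works across all environments, and only the environment
variables / etcd keys change.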


>
> * How did the transition go?
>
It was very easy for us to move from RPMs on the host to containers.
We started off with one project, worked through that, and proceeded on to
the next. We were easily able to mix and match between RPMs on the host and
new containers.   Our automation proved to be very useful in making things
easier (obviously).



>
> * Was/is kolla used or looked into? or something custom?
>
We started down this process way before Kolla was out there and running, so
it would take a lot for us to move over to Kolla, as we have a pretty
detailed deployment setup.


>
> * How long did it take to do the transition from a package based solution
> (with say puppet/chef being used to deploy these packages)?
>

It took a week or two, honestly.  It is a lot easier than you think.  Just
take your current configuration file, put it inside the container, run it,
and see what happens.  That was the easiest way to get started and see how
the containers behave in your environment.


>
>   * Follow-up being how big was the team to do this?
>

* What was the roll-out strategy to achieve the final container solution?
>

We use Ansible along with docker-compose to do all our deployments.
We use it to talk to HAProxy to take the service out of rotation, wait
for it to drain, take the container down, load the new container, start it
up, run a few test cases to ensure the container is doing what it should,
and then put it back into rotation via HAProxy.
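Stripped of the Ansible wrapping, the per-node flow is conceptually something
like this (backend/server names, socket path, and the smoke test are
placeholders, and this assumes the HAProxy admin socket is enabled):

  echo "disable server nova-api-backend/node01" | socat stdio /var/run/haproxy.sock
  # wait for active sessions to drain...
  docker-compose pull nova-api
  docker-compose up -d nova-api
  curl -fs http://localhost:8774/ >/dev/null   # simple smoke test
  echo "enable server nova-api-backend/node01" | socat stdio /var/run/haproxy.sock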


>
> Any other feedback (and/or questions that I missed)?
>
One thing we realized is that you have to use host-based networking.
Do not try to run the containers using the built-in Docker networking; you
will get some weird results. We solved all the weirdness when we moved
everything over to host-based networking.

We are beginning to work on doing compute nodes and gateway nodes.  Since
those don't change as often as controller functions do, we gained a lot of
efficiency and speed for deployments by moving to containers.

We have started to look at deploying via Kubernetes.  We have had it working
in our lab for a while now, but we are still trying to get familiar with it
before we start trying to use it in production.



> Thanks,
>
> Josh
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Need Operators thoughts and opinions on NOVA specs! Get your opinions heard!

2016-05-07 Thread Joseph Bajin
Operators,

The Nova team has requested feedback from the community and especially the
operators on two specs that they are reviewing.

The first spec is regarding some logging changes to the scheduler:
 - Mailing List:
http://lists.openstack.org/pipermail/openstack-operators/2016-May/010322.html
 - Review Link: https://review.openstack.org/#/c/306647/

The second spec is regarding lower casing metadata inside of nova:
 - Review Link: https://review.openstack.org/#/c/311529/

Getting operator feedback is very important!  Please feel free to post to
the operators list or to the review itself if you have any thoughts or
questions.


Thanks


--Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps Meeting - Tomorrow 2016-05-04 1900 UTC

2016-05-03 Thread Joseph Bajin
Everyone,

The OSOps group will be having their next meeting tomorrow May 4th, 2016 at
1900 UTC.
It will be hosted in the #openstack-meeting-4 room.

The agenda has been added to the etherpad and wiki.  You can find it
here. [1]  The primary goal is to follow up on the discussions that we had
at the Summit and start working on the tasks outlined in the action items.
We are open to anyone who wants to participate.

See you tomorrow!

--Joe



[1] https://etherpad.openstack.org/p/osops-irc-meeting-20160504
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps Meeting Notes from Summit

2016-05-02 Thread Joseph Bajin
Hello Operators,

I wanted to provide some updates to the OSOps Operators Meeting that was
held last week.

You can find the etherpad located here. [1]

The highlights of the meeting were the following:

- People see OSOps as an advocacy group for patches, tools, and features
that operators need to get done.

- The OSOps group should pick a topic and work on getting that issue
resolved for the Operator community to help kick off the group’s work.

- The repos that OSOps maintains can be used to help in a few different
ways. One primary example mentioned was configuration files: providing
bare-minimum configuration files to use as a sanity check.  The Kolla project
can help produce a few of these to jumpstart the project.

- Documentation issues - Help facilitate the addition of missing
documentation.

- Provide packaging examples that could eventually grow into a framework
for operators to use to pull from upstream and apply patches.  We are not
talking about settling on one method, but about providing options for
operators to choose from.

Lots of great feedback in both sessions.  The first session was around how
OSOps can help provide information to and from projects for them to
review.  The second half was held to find other ways that OSOps can help
and utilize the current repos to give Operators more tools to better
manage their environments.

Anyone with additional feedback, or who would like to help, is welcome to
join the upcoming OSOps meeting.


Thanks

Joe


[1] https://etherpad.openstack.org/p/AUS-ops-OSOps
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] Requests for Enhancement Process Meeting Notes

2016-05-02 Thread Joseph Bajin
Hello Operators,

I wanted to provide some updates to the Requests for Enhancement Process
Operators Meeting that was held last week.

You can find the etherpad located here. [1]

The highlights of the meeting were the following:

- RFE's should be groomed by the Operators first before being sent over to
the product teams.
- Product Working group is working on methods to gather resources to help
with implementing enhancements.
- Could the OSOps Team go through the existing Nova Wishlist and use those
as the first set of groomed RFE's?
- This is going to be added to the OSOps agenda set for May 4th at 1900 UTC
in #openstack-meeting-4
- OSOps can work to find liaisons who can help bridge the communication
between Operators and Product Groups for particular enhancements.

There was lots of discussion back and forth about how the RFE's that
Operators provide can be worked into existing work streams.  There was
concern that the list of RFE's that Operators provided was going to be worked
on immediately, and this was not the case.  What the Operators group
would like to do is help provide a single voice and a single place where
Devs and Operators can go to both receive and provide feedback.

Anyone with additional feedback, or who would like to help, is welcome to
join the upcoming OSOps meeting.

Thanks

Joe


[1]
https://etherpad.openstack.org/p/AUS-ops-Requests-for-Enhancement-Process
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Maintenance

2016-04-23 Thread Joseph Bajin
That sounds like a great plan, Tim.

I was just making sure that, if there wasn't any support, everyone knows
that OSOps can definitely help.

Definitely let us know if there is more we can do other than just
having the repos available.

--Joe

On Fri, Apr 22, 2016 at 9:40 PM, Tim Bell <tim.b...@cern.ch> wrote:

>
> The overall requirements are being reviewed in
> https://etherpad.openstack.org/p/AUS-ops-Nova-maint. A future tool may
> make its way in OSOps but I think we should keep the requirements
> discussion distinct from the available community tools and their tool
> repository.
>
> Tim
>
> From: Joseph Bajin <josephba...@gmail.com>
> Date: Friday 22 April 2016 at 17:55
> To: Robert Starmer <rob...@kumul.us>
> Cc: openstack-operators <openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] Maintenance
>
> Rob/Jay,
>
> The use of the OSOps Working group and its repos is a great way to address
> this.. If any of you are coming to the Summit, please take a look at our
> Etherpad that we have created.[1]   This could be a great discussion topic
> for the working sessions and we can brainstorm how we could help with this.
>
>
> Joe
>
> [1] https://etherpad.openstack.org/p/AUS-ops-OSOps
>
> On Fri, Apr 22, 2016 at 4:02 PM, Robert Starmer <rob...@kumul.us> wrote:
>
>> Maybe a result of the discussion can be a set of models (let's not go so
>> far as to call them best practices yet :) for how maintenance can be done
>> at scale, perhaps solidifying the descriptions Jay has above with the user
>> stories Tomi described in his initial note.  This seems like an achievable
>> outcome from a working session, and the output even has a target, either
>> creating scriptable workflows that could end up in the OSOps repository, or
>> as user stories that can be mapped to the PM working group.
>>
>> R
>>
>> On Fri, Apr 22, 2016 at 12:47 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 04/14/2016 05:14 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
>>> 
>>>
>>>> As admin I want to know when host is ready to actions to be done by
>>>> admin
>>>> during the maintenance. Meaning physical resources are emptied.
>>>>
>>>
>>> You are equating "host maintenance mode" with the end result of a call
>>> to `nova host-evacuate-live`. The two are not the same.
>>>
>>> "host maintenance mode" typically just refers to taking a Nova compute
>>> node out of consideration for placing new workloads on that compute node.
>>> Putting a Nova compute node into host maintenance mode is as simple as
>>> calling `nova service-disable $hostname nova-compute`.
>>>
>>> Depending on what you need to perform on the compute node that is in
>>> host maintenance mode, you *may* want to migrate the workloads from that
>>> compute node to some other compute node that isn't in host maintenance
>>> mode. The `nova host-evacuate $hostname` and `nova host-evacuate-live
>>> $hostname` commands in the Nova CLI [1] can be used to migrate or
>>> live-migrate all workloads off the target compute node.
>>>
>>> Live migration will reduce the disruption that tenant workloads (data
>>> plane) experience during the workload migration. However, research at
>>> Mirantis has shown that libvirt/KVM/QEMU live migration performed against
>>> workloads with even a medium rate of memory page dirtying can easily never
>>> complete. Solutions like auto-converge and xbzrle compression have minimal
>>> effect on this, unfortunately. Pausing a workload manually is typically
>>> what is done to force the live migration to complete.
>>>
>>> [1] Note that these are commands in the Nova CLI tool
>>> (python-novaclient). Neither a host-evacuate nor a host-evacuate-live REST
>>> API call exists in the Compute API. This fact alone should suggest to folks
>>> that the appropriate place to put logic associated with performing host
>>> maintenance tasks should be *outside* of Nova entirely...
>>>
>>> As owner of a server I want to prepare for maintenance to minimize
>>>> downtime,
>>>> keep capacity on needed level and switch HA service to server not
>>>> affected by maintenance.
>>>>
>>>
>>> This isn't an appropriate use case, IMHO. HA control planes should, by
>>> their very nature, be established across various failure domains. The whole
>>> *point* of having an HA service is so that you don't need to &qu

Re: [Openstack-operators] Maintenance

2016-04-22 Thread Joseph Bajin
Rob/Jay,

The use of the OSOps Working group and its repos is a great way to address
this.. If any of you are coming to the Summit, please take a look at our
Etherpad that we have created.[1]   This could be a great discussion topic
for the working sessions and we can brainstorm how we could help with this.


Joe

[1] https://etherpad.openstack.org/p/AUS-ops-OSOps

On Fri, Apr 22, 2016 at 4:02 PM, Robert Starmer  wrote:

> Maybe a result of the discussion can be a set of models (let's not go so
> far as to call them best practices yet :) for how maintenance can be done
> at scale, perhaps solidifying the descriptions Jay has above with the user
> stories Tomi described in his initial note.  This seems like an achievable
> outcome from a working session, and the output even has a target, either
> creating scriptable workflows that could end up in the OSOps repository, or
> as user stories that can be mapped to the PM working group.
>
> R
>
> On Fri, Apr 22, 2016 at 12:47 PM, Jay Pipes  wrote:
>
>> On 04/14/2016 05:14 AM, Juvonen, Tomi (Nokia - FI/Espoo) wrote:
>> 
>>
>>> As admin I want to know when host is ready to actions to be done by admin
>>> during the maintenance. Meaning physical resources are emptied.
>>>
>>
>> You are equating "host maintenance mode" with the end result of a call to
>> `nova host-evacuate-live`. The two are not the same.
>>
>> "host maintenance mode" typically just refers to taking a Nova compute
>> node out of consideration for placing new workloads on that compute node.
>> Putting a Nova compute node into host maintenance mode is as simple as
>> calling `nova service-disable $hostname nova-compute`.
>>
>> Depending on what you need to perform on the compute node that is in host
>> maintenance mode, you *may* want to migrate the workloads from that compute
>> node to some other compute node that isn't in host maintenance mode. The
>> `nova host-evacuate $hostname` and `nova host-evacuate-live $hostname`
>> commands in the Nova CLI [1] can be used to migrate or live-migrate all
>> workloads off the target compute node.
>>
>> Live migration will reduce the disruption that tenant workloads (data
>> plane) experience during the workload migration. However, research at
>> Mirantis has shown that libvirt/KVM/QEMU live migration performed against
>> workloads with even a medium rate of memory page dirtying can easily never
>> complete. Solutions like auto-converge and xbzrle compression have minimal
>> effect on this, unfortunately. Pausing a workload manually is typically
>> what is done to force the live migration to complete.
>>
>> [1] Note that these are commands in the Nova CLI tool
>> (python-novaclient). Neither a host-evacuate nor a host-evacuate-live REST
>> API call exists in the Compute API. This fact alone should suggest to folks
>> that the appropriate place to put logic associated with performing host
>> maintenance tasks should be *outside* of Nova entirely...
>>
>> As owner of a server I want to prepare for maintenance to minimize
>>> downtime,
>>> keep capacity on needed level and switch HA service to server not
>>> affected by maintenance.
>>>
>>
>> This isn't an appropriate use case, IMHO. HA control planes should, by
>> their very nature, be established across various failure domains. The whole
>> *point* of having an HA service is so that you don't need to "prepare" for
>> some maintenance event (planned or unplanned).
>>
>> All HA control planes worth their salt will be able to notify some
>> external listener of a partition in the cluster. This HA control plane is
>> the responsibility of the tenant, not the infrastructure (i.e. Nova). I
>> really do not want to add coupling between infrastructure control plane
>> services and tenant control plane services.
>>
>> As owner of a server I want to know when my servers will be down because
>>> of
>>> host maintenance as it might be servers are not moved to another host.
>>>
>>
>> See above. As an owner of a server involved in an HA cluster, it is *the
>> server owner's* responsibility to set things up so that the cluster
>> rebalances, handles redirected load, or does the custom thing that they
>> want. This isn't, IMHO, the domain of the NFVi but rather a much
>> higher-level NFVO orchestration layer.
>>
>>> As owner of a server I want to know if the host is to be totally removed,
>>> so that instead of keeping my servers on the host during maintenance, I
>>> can move them somewhere else.
>>>
>>
>> This isn't something the owner of a server even knows about in a cloud
>> environment. Owners of a server don't (and shouldn't) know which compute
>> node they are, nor should they know that a host is having a planned or
>> unplanned host maintenance event.
>>
>> The infrastructure owner (cloud deployer/operator) is responsible for
>> doing the needful and performing a [live] migration of workloads off of a
>> failing host or a host that is undergoing a cold upgrade. The tenant
>> doesn't know 

[Openstack-operators] [osops] OSOps Summit Etherpad is up!

2016-04-19 Thread Joseph Bajin
Operators,

The OSOps Summit etherpad is up and running. You can find it here. [1]

For those that are not aware of OSOps, it stands for OpenStack Operators.
Currently, we maintain the operator-focused GitHub repositories and work with
other projects to help address the issues that operators are experiencing.
If there is anything that you think the OSOps group could help with, please
add it to the etherpad so that we can capture it.

No idea is too small or too complex for us to handle.  Just putting down
your ideas will be a great help to at least get others talking about them.

Thanks

Joe


[1] https://etherpad.openstack.org/p/austin-ops-osops
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] No Meeting this week..

2016-04-17 Thread Joseph Bajin
OSOps,

I'm thinking it may be good to cancel this week's OSOps meeting and just
meet during the summit.  We have 3 sessions that cover the topics we have
been talking about, and I hope we can get some more topics as well as more
participation next week.

Here are the details for the sessions at the summit:

* Monday - 2:00-2:40 - MR409 - OSOps [1]
* Monday - 2:50-3:30 - MR409 - OSOps  [1]
* Monday - 3:40-4:20 - MR406 - Requests for Enhancement Process [2]

I've added links for the etherpads as well.

If you have topics that you would like to bring up, please add them to the
etherpads.

Thanks

Joe

[1] https://etherpad.openstack.org/p/austin-ops-osops
[2]
https://etherpad.openstack.org/p/AUS-ops-Requests-for-Enhancement-Process

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Neutron] Operator Pain Points

2016-04-17 Thread Joseph Bajin
Hi Carl,

This is great..

We also have two sessions going on for Operators, in hopes of determining the
best methods for gathering these types of requirements.   The goal is to help
Operators get them to the right people, and to give Operators ways to track
them for follow-up.  The two sessions we are hosting are:

* Roadmap Feedback Request from Ops
* OSOps

Both of these are being held on Monday. It would be great to have not only
Operator representation but Dev people as well.  We have also created two
additional etherpads for these meetings. You can find them here [1] and
here [2].

--Joe


[1]
https://etherpad.openstack.org/p/AUS-ops-Requests-for-Enhancement-Process
[2] https://etherpad.openstack.org/p/austin-ops-osops

On Fri, Apr 15, 2016 at 12:35 PM, Carl Baldwin  wrote:

> I'm cross-posting this to the dev and operators mailing lists.
>
> In the upcoming design summit in Austin, I'll be hosting a session [1]
> to discuss how to prioritize operator pain points around Neutron, find
> owners, and plan them for Newton.  In preparation for that discussion,
> I have performed a preliminary search in Launchpad for existing
> operator pain points.  I made a list of existing issues with the
> following tags: "loadimpact", "ops", "troubleshooting", and
> "usability".  This is a subset of standard tags used within the
> Neutron project [2]:
>
> I have posted that list to the etherpad for the session [3] grouped by
> their importance as marked at the time I queried.
>
> Please check the etherpad [3] to see if your pain points are
> represented.  If they are not, please find the launchpad bug for your
> issue, or create a new one if it does not exist.  Add the URL to the
> etherpad.
>
> I am looking forward to a well organized and productive discussion at
> the summit and I'm eager to see you all there.
>
> Carl Baldwin
>
> [1]
> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9103
> [2]
> http://docs.openstack.org/developer/neutron/policies/bugs.html#proposing-new-tags
> [3] https://etherpad.openstack.org/p/newton-neutron-pain-points
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-06 Thread Joseph Bajin
This is great Kenny!

I will see about attending one of the PWG sessions in the next few weeks.

What I was thinking (and maybe this works, maybe it doesn't) was to have the
"OSOps" group be the central point where Operators could feed in User
Stories.   I was thinking that this may make it easier for groups to see all
the Operators' requests, User Stories, complaints, appreciations, etc. in one
central location.  That way, if someone didn't know that there was, say, a
Product Work Group, they could at least get their story down for others to
see.



On Tue, Apr 5, 2016 at 2:25 PM, Kenny Johnston <ke...@kencjohnston.com>
wrote:

> I'll throw this out there but the Product Work Group[1] has a place for
> submitting "User Stories"[2] which are meant to be robust descriptions of
> problem statements and use cases. Speaking as a core PWG team member, I can
> say we'd love to have operators submit User Stories with whatever level of
> detail they have available so our Product Team can help iterate and flesh
> out the stories for consideration by project teams.
>
> You can find the PWG team in #openstack-product or in the PWG Mailling
> list[3]
>
> [1]https://wiki.openstack.org/wiki/ProductTeam
> [2]https://github.com/openstack/openstack-user-stories
> [3]http://lists.openstack.org/cgi-bin/mailman/listinfo/product-wg
>
> On Tue, Apr 5, 2016 at 12:29 PM, Joseph Bajin <josephba...@gmail.com>
> wrote:
>
>> Hi Christoph,
>>
>> This is a great idea and a tool I think others would be very interested
>> in having.
>>
>> Now the question that I have to you and everyone else, is within the
>> Operators community how would we capture this information so we could open
>> up dialogue with both the Neutron and Nova teams?
>>
>> --Joe
>>
>> On Tue, Mar 29, 2016 at 11:43 AM, Christoph Andreas Torlinsky <
>> christ...@nuagenetworks.net> wrote:
>>
>>> Hi Joseph, we would propose a migration tool for getting the Nova
>>> networking database information out and mapping it into Neutron,
>>> as we are seeing an increasing demand to "upgrade" from Nova networking
>>> to an OpenStack Neutron and 3rd-party SDN based
>>> model for cloud computing amongst our installed base of OpenStack
>>> customers, some of whom grew out of Nova networking and its
>>> limits in leveraging VLANs.
>>>
>>> I also caught sight of the recent write-ups here, which seem to point
>>> in the same direction actually, in lieu of adoption of Liberty from an
>>> older release:
>>>
>>>
>>> http://docs.openstack.org/liberty/networking-guide/migration-nova-network-to-neutron.html
>>>
>>> We (I) are keen to engage and help with any tooling that makes this
>>> migration process easier,
>>>
>>> Let us know if anyone else is on the same level working on things,
>>>
>>> c
>>>
>>>
>>> Christoph Andreas Torlinski
>>>
>>> +44(0)7872413856 UK Mobile
>>>
>>> christ...@nuagenetworks.net
>>>
>>> On 26 March 2016 at 20:47, Joseph Bajin <josephba...@gmail.com> wrote:
>>>
>>>> Operators,
>>>>
>>>> At the last OSOps Tools and Monitoring meeting, we had mriedem (Nova
>>>> Team) talk with us about a proposal that they are working on. The Nova team
>>>> would like to abandon the use of a wish-list as it has grown to an
>>>> unmanageable size.  They have posted a few times to both the operators list
>>>> [1] as well as the developers list [2] about their idea.  They would like
>>>> to work with us operators to find the best way to get our ideas into their
>>>> prioritization list. At the mid-cycle and summit meet-ups, we continue to
>>>> create etherpads [3] that make mention to the troubles we have, as well as
>>>> the work we have done to overcome them. These are examples of the work that
>>>> the NOVA team would like to hear about.
>>>>
>>>> I'd like to gather some ideas and thoughts from the operator community
>>>> on not only the NOVA proposal but ways that we could get the feedback we
>>>> create each cycle to each of the respective projects.  One idea that I have
>>>> is using the OSOps repo as a way for us to gather and track these wishlist
>>>> type of items.  Then the OSOps working group could be used to help try get
>>>> these worked on.
>>>>
>>>> I'd love to get help and hear ideas from others about how we could
>>>> improve and build on this process. Please let me know your 

Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-05 Thread Joseph Bajin
Hi Christoph,

This is a great idea and a tool I think others would be very interested in
having.

Now the question that I have for you and everyone else is: within the
Operators community, how would we capture this information so we could open
up a dialogue with both the Neutron and Nova teams?

--Joe

On Tue, Mar 29, 2016 at 11:43 AM, Christoph Andreas Torlinsky <
christ...@nuagenetworks.net> wrote:

> Hi Joseph, we would propose a migration tool for getting the Nova
> networking database information out and mapping it into Neutron,
> as we are seeing an increasing demand to "upgrade" from Nova networking to
> an OpenStack Neutron and 3rd-party SDN based
> model for cloud computing amongst our installed base of OpenStack
> customers, some of whom grew out of Nova networking and its
> limits in leveraging VLANs.
>
> I also caught sight of the recent write-ups here, which seem to point in
> the same direction actually, in lieu of adoption of Liberty from an
> older release:
>
>
> http://docs.openstack.org/liberty/networking-guide/migration-nova-network-to-neutron.html
>
> We (I) are keen to engage and help with any tooling that makes this
> migration process easier,
>
> Let us know if anyone else is on the same level working on things,
>
> c
>
>
> Christoph Andreas Torlinski
>
> +44(0)7872413856 UK Mobile
>
> christ...@nuagenetworks.net
>
> On 26 March 2016 at 20:47, Joseph Bajin <josephba...@gmail.com> wrote:
>
>> Operators,
>>
>> At the last OSOps Tools and Monitoring meeting, we had mriedem (Nova
>> Team) talk with us about a proposal that they are working on. The Nova team
>> would like to abandon the use of a wish-list as it has grown to an
>> unmanageable size.  They have posted a few times to both the operators list
>> [1] as well as the developers list [2] about their idea.  They would like
>> to work with us operators to find the best way to get our ideas into their
>> prioritization list. At the mid-cycle and summit meet-ups, we continue to
>> create etherpads [3] that make mention to the troubles we have, as well as
>> the work we have done to overcome them. These are examples of the work that
>> the NOVA team would like to hear about.
>>
>> I'd like to gather some ideas and thoughts from the operator community on
>> not only the NOVA proposal but ways that we could get the feedback we
>> create each cycle to each of the respective projects.  One idea that I have
>> is using the OSOps repo as a way for us to gather and track these wishlist
>> type of items.  Then the OSOps working group could be used to help try get
>> these worked on.
>>
>> I'd love to get help and hear ideas from others about how we could
>> improve and build on this process. Please let me know your thoughts.
>>
>> Thanks
>>
>> Joe
>>
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-operators/2016-March/009942.html
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089365.html
>> [3] https://etherpad.openstack.org/p/operator-local-patches
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-04-05 Thread Joseph Bajin
Just sending this out to the Operator Community for more feedback.

We'll be having a session at the summit about it, but I'd like to get this
discussion going a little more here if we could.

--Joe

On Sat, Mar 26, 2016 at 4:47 PM, Joseph Bajin <josephba...@gmail.com> wrote:

> Operators,
>
> At the last OSOps Tools and Monitoring meeting, we had mriedem (Nova Team)
> talk with us about a proposal that they are working on. The Nova team would
> like to abandon the use of a wish-list as it has grown to an unmanageable
> size.  They have posted a few times to both the operators list [1] as well
> as the developers list [2] about their idea.  They would like to work with
> us operators to find the best way to get our ideas into their
> prioritization list. At the mid-cycle and summit meet-ups, we continue to
> create etherpads [3] that make mention to the troubles we have, as well as
> the work we have done to overcome them. These are examples of the work that
> the NOVA team would like to hear about.
>
> I'd like to gather some ideas and thoughts from the operator community on
> not only the NOVA proposal but ways that we could get the feedback we
> create each cycle to each of the respective projects.  One idea that I have
> is using the OSOps repo as a way for us to gather and track these wishlist
> type of items.  Then the OSOps working group could be used to help try get
> these worked on.
>
> I'd love to get help and hear ideas from others about how we could improve
> and build on this process. Please let me know your thoughts.
>
> Thanks
>
> Joe
>
>
> [1]
> http://lists.openstack.org/pipermail/openstack-operators/2016-March/009942.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089365.html
> [3] https://etherpad.openstack.org/p/operator-local-patches
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [neutron] Attach routing rules to networks?

2016-04-04 Thread Joseph Bajin
We use the host-route flag as mentioned above.  It makes it easy to send
internet-like traffic to a gateway type of box and internal traffic to the
inside for tenants that do have public access.
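
A rough sketch of what that looks like with the neutron CLI (the subnet ID
and addresses here are just placeholders):

  neutron subnet-update <subnet-id> \
    --host-route destination=10.0.0.0/8,nexthop=192.168.1.254

The destination,nexthop pairs are then handed out to instances by the
subnet's DHCP agent as classless static routes.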

On Sat, Apr 2, 2016 at 10:33 PM, James Denton 
wrote:

> Hi Mike,
>
> You should be able to update the subnet(s) and use the --host-route flag
> with destination,nexthop pairs. The routes get pushed via DHCP.
>
> James
>
> Sent from my iPhone
>
> On Apr 2, 2016, at 9:28 PM, Mike Spreitzer  wrote:
>
> Is there a way to attach a routing rule to a network or subnet, with the
> consequence that each device attached to that net or subnet gets that
> routing rule?
>
> Thanks,
> Mike
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova] Removing seeded flavors

2016-04-04 Thread Joseph Bajin
While I think it's fine to remove this from the normal setup, I think there
should be either a file or a process that one could run to add these basic
flavors.  Without it, I think things such as DevStack and other beginners'
setups will not be able to get up and going.

I think having the migration as a side option would be a good compromise.
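
For a PoC/DevStack-style bootstrap that would only be a couple of commands,
something along these lines (names and sizes here are just examples):

  nova flavor-create m1.tiny  auto 512  1  1
  nova flavor-create m1.small auto 2048 20 1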

On Sun, Apr 3, 2016 at 1:46 PM, Robert Starmer  wrote:

> I'll add a vote for removal, given how varied private clouds tend to be,
> the flavors are often "wrong" for any one particular purpose.
>
> R
>
> On Sat, Apr 2, 2016 at 8:41 PM, Mike Smith  wrote:
>
>> +1 from me.  We always just remove them and add our own.  Like Dan said,
>> it’s consistent with populating your own images.
>>
>> Mike Smith
>> Lead Cloud Systems Architect
>> Overstock.com
>>
>>
>>
>> On Apr 2, 2016, at 8:33 PM, Eric Windisch  wrote:
>>
>> I recall these being embedded being a real operational pain when my team
>> wanted to replace the defaults. +1 on removal
>> On Mar 31, 2016 2:27 PM, "Dan Smith"  wrote:
>>
>>> Hi all,
>>>
>>> I just wanted to float this past the operators list for visibility:
>>>
>>> Historically Nova has seeded some default flavors in an initial install.
>>> The way it has done this is really atypical of anything else we do, as
>>> it's embedded in the initial database schema migration. Since we're
>>> moving where we store those flavors now, leaving their creation in the
>>> original migration means even new deploys will just have to move them to
>>> the new location. That, and we don't even use them for our own testing
>>> as they're too large.
>>>
>>> So, this will involve removing them from that migration, making sure
>>> that devstack creates you some flavors to use if you're going that
>>> route, and updates to the manuals describing the creation of base
>>> flavors alongside getting an image set up to use.
>>>
>>> For real deployments, there should be little or no effect, but PoC type
>>> deploys that are used to those flavors being present may need to run a
>>> couple of flavor-create commands when bootstrapping, just like you have
>>> to do for images.
>>>
>>> Thanks!
>>>
>>> --Dan
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] Finding ways to get operator issues to projects - Starting with NOVA

2016-03-26 Thread Joseph Bajin
Operators,

At the last OSOps Tools and Monitoring meeting, we had mriedem (Nova Team)
talk with us about a proposal that they are working on. The Nova team would
like to abandon the use of a wish-list, as it has grown to an unmanageable
size.  They have posted a few times to both the operators list [1] and
the developers list [2] about their idea.  They would like to work with
us operators to find the best way to get our ideas into their
prioritization list. At the mid-cycle and summit meet-ups, we continue to
create etherpads [3] that mention the troubles we have, as well as
the work we have done to overcome them. These are examples of the work that
the Nova team would like to hear about.

I'd like to gather some ideas and thoughts from the operator community on
not only the Nova proposal but also on ways that we could get the feedback we
create each cycle to each of the respective projects.  One idea that I have
is to use the OSOps repo as a way for us to gather and track these
wishlist-type items.  The OSOps working group could then be used to help get
these worked on.

I'd love to get help and hear ideas from others about how we could improve
and build on this process. Please let me know your thoughts.

Thanks

Joe


[1]
http://lists.openstack.org/pipermail/openstack-operators/2016-March/009942.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-March/089365.html
[3] https://etherpad.openstack.org/p/operator-local-patches
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [osops] OSOps Tools and Monitoring Meeting - Wendesday 1900 UTC

2016-03-22 Thread Joseph Bajin
Operators,


We will be having our bi-weekly meeting in the #openstack-meeting-4 room at
1900 UTC.

I have published the current agenda for this meeting. You can find that
here:

https://etherpad.openstack.org/p/osops-irc-meeting-20160323


Either let me know, or please go ahead and add to the etherpad anything
else that you would like to discuss during the meeting.


Thanks


Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Tools/Monitoring] Meeting Notes from Operators Tools and Monitoring Working Group

2015-09-23 Thread Joseph Bajin
Operators,

The Operators Tools and Monitoring Working Group met at its new time 1900
UTC today.

Meeting notes can be found here:
* https://etherpad.openstack.org/p/ops-tools-monitoring-agenda-20150923

We also have a wiki page that we are updating which can be found here:
 * https://wiki.openstack.org/wiki/Operators_Tools_and_Monitoring

I had some great help today getting organized from some experienced
operators/contributors, so I wanted to say thanks to them.

Hope to see more of you at our next meeting which is:  Wednesday - October
7th @ 1900 hrs (3pm Eastern/12pm pacific)

--Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OpenStack][Operators] Follow-up on Mid-cycle discussion on RPC Compatibility

2015-09-09 Thread Joseph Bajin
We recently just upgraded to Juno from Icehouse.

We used the upgrade_levels feature to get Nova off the ground, but ran into
major issues around conductor being too slow to respond to messages, which
caused a lot of our instances to fail to create.  This would only happen
at a particularly high amount of load.  A few VMs being built were no
problem, but with anything over say 15, the conductor queue would
start showing a lot of unacknowledged messages and basically start causing
a lot of instance creation failures.
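
For anyone curious, the upgrade_levels piece is just a nova.conf setting on
the Juno controllers while the computes are still on Icehouse; a minimal
sketch:

  [upgrade_levels]
  compute = icehouse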

I'm happy to provide any other details that are needed.

--Joe


On Wed, Sep 9, 2015 at 12:53 PM, Barrett, Carol L  wrote:

> Ops – I’m looking for an example of the RPC compatibility issue that was
> discussed in Palo Alto. Can someone help me out?
> Thanks
> Carol
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Tools/Monitoring] Finding a new Tools and Monitoring Working group Time

2015-09-08 Thread Joseph Bajin
Hi Everyone,

One of the items taken from the Operators Meet-up was that the meeting time
(10am EST) was too early for the west coast people.   I've been looking at the
meeting calendar and have a few suggested times; I'd like to see if they
would work for others so we can get more participation.

Every other Wednesday (odd weeks) - (9/9, 9/23, etc.)

Suggested times - (All these are in EST)

- 1pm
- 2pm
- 3pm

If Wednesdays do not work, please let me know as well, and I can find
additional times.  It seemed that Wednesdays worked for most when we first
stood up the group, but it could be moved.

--Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Scaling the Ops Meetup

2015-06-30 Thread Joseph Bajin
I agree with all of these items especially with not having Vendor booths.

The only thing I would want to mention is that it would be great to have
something centrally located within the US, if we are going to choose the US
for a session.  That way it is only a 3-4 hour flight instead of a 7-9 hour
trip like going from the West Coast to the East Coast.

On Tue, Jun 30, 2015 at 7:55 PM, Kevin Carter kevin.car...@rackspace.com
wrote:

  I'm very much in favor of scaling up the Ops meetup and doing so with no
 vendor booths, modest registration fees, dropping the evening event (if
 needed), and creating an alternating North American / other locale. I
 don't know what I can do specifically to help out here, but if I can help
 in any way to make some of this go, put me down as available.

  --

 Kevin Carter

 --
 *From:* Mike Dorman mdor...@godaddy.com
 *Sent:* Tuesday, June 30, 2015 6:10 PM
 *To:* Jesse Keating; Matt Fischer

 *Cc:* OpenStack Operators
 *Subject:* Re: [Openstack-operators] Scaling the Ops Meetup

   I pretty much agree with everyone so far.  No vendor booths,
 distributed “underwriters”, modest registration fee, and sans evening
 event.  Not sure separate regional meetings are a good idea, but would be
 in favor of alternating North America vs. other region, like the summits.

  I’ve been looking for approximate meal sponsorship costs, too.  We may
 have funds available for some sort of underwriting as well, but the first
 question I get when going to ask for that is “how much $?”  So it’s
 difficult to get sponsorship commitments without those details.  Could you
 let us know some ballpark figures based on past events, so we have some
 more data points?

  Thanks!!
 Mike


   From: Jesse Keating
 Date: Tuesday, June 30, 2015 at 1:06 PM
 To: Matt Fischer
 Cc: OpenStack Operators
 Subject: Re: [Openstack-operators] Scaling the Ops Meetup

   RE Evening event: I agree it was pretty crowded. Perhaps just a list of
 area venues for various activities and a sign-up board somewhere. Let it
 happen organically, and everybody is on their own for paying for whatever
 they do. That way those that may not be into the bar scene don't feel left
 out because everybody else went and got drinks/food. Let's eliminate the
 social pressure to put everybody into the same social event.


  - jlk

 On Tue, Jun 30, 2015 at 10:46 AM, Matt Fischer m...@mattfischer.com
 wrote:

  My votes line up with Dave's and Joe's pretty much.

  I think that vendor booth's are a bad idea as well.

  As for registration, I think having a fee that covers the meals/coffee
 is fair. This is not a typical walk in off the street meeting. I don't
 think many companies would balk at an extra $100-$200 fee for registration.
 Especially if you're already paying for travel like 99% of us will be
 doing. I'm also +1 canceling the evening event to cut costs, it was
 overcrowded last time and with 300 people will be unmanageable.

  Tom, What is the actual per-head price range for meals?

  On Tue, Jun 30, 2015 at 11:36 AM, Joe Topjian j...@topjian.net wrote:


   -1 on paid registration, I think we need to be mindful of the smaller
 openstack deployers, their voice is an important one, and their access to
 the larger operations teams is invaluable to them.  I like the idea of
 local teams showing up because it's in the neighborhood and they don't need
 to hassle their budgeting managers too much for travel approval /
 expenses.  This is more accessible currently than the summits for many
 operators.  Let's keep it that way.


  I understand your point.

  IMO, the Ops mid-cycle meetup is a little different than a normal
 local meetup you'll find at meetup.com. It's a multi-day event that
 includes meals and an evening event. Being able to attend for free, while a
 great goal, may not be practical. I would not imagine that the fee would be
 as much as a Summit ticket, nor even broken down to the daily cost of a
 Summit ticket. I see it as something that would go toward the cost of food
 and such.

  The OpenStack foundation does a lot to ensure that people who are
 unable to pay registration fees are still able to attend summits. The same
 courtesy could be extended here as well. As an example, David M has
 mentioned that TWC may help (I understand that may not be official, just
 used as an example of how others may be willing to help with that area).

  Joe

  ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 

Re: [Openstack-operators] OpenSource ELK configurations

2015-05-27 Thread Joseph Bajin
I think those from the Infra Project are a great start, but I do think they
are missing a lot.

Instead of breaking the message down into its parts, it just extracts the log
level (INFO, DEBUG, etc.) and then matches the rest of the message greedily.
That doesn't help searching or graphing or anything like that (or at least
makes it more difficult over time). At least that is what I have seen.
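
As an illustration, a grok filter along these lines pulls out the pid, level,
module, and request context as separate fields instead of leaving everything
after the level in one greedy blob (this is only a sketch against the default
oslo log format, not a drop-in config):

  filter {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{NUMBER:pid} %{LOGLEVEL:loglevel} %{NOTSPACE:module} \[%{DATA:request_id}\] %{GREEDYDATA:logmessage}" }
    }
  }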

We are using a slightly tweaked version of GoDaddy's scripts.  That
seems to give us a good amount of detail that we can search and filter on.
I'll see if I can post my versions of them to the repo.

On Wed, May 27, 2015 at 2:00 PM, Mark Voelker mvoel...@vmware.com wrote:

 Sounds like a fine thing to point people to…thanks Joe.

 https://github.com/osops/tools-logging/pull/3

 At Your Service,

 Mark T. Voelker


  On May 27, 2015, at 1:12 PM, Joe Gordon joe.gord...@gmail.com wrote:
 
 
 
  On Mon, May 18, 2015 at 3:11 PM, Tom Fifield t...@openstack.org wrote:
  There's some stuff in the osops repo:
 
  https://github.com/osops/tools-logging
 
  Please contribute if you can!
 
  Regards,
 
  Tom
 
 
  On 18/05/15 14:56, Anand Kumar Sankaran wrote:
  Hi all
 
  Is there a set of open source ELK configurations available?  (log stash
  filters, templates, kibana dashboards).  I see a github repository from
  Godaddy, wondering if there is a standard set that is used.
 
  OpenStack has an ELK stack running at logstash.openstack.org to help
 debug test jobs.
 
  The logstash filters can be found at:
 http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/templates/logstash/indexer.conf.erb
 
 
  Thanks.
 
  —
  anand
 
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Customer Onboarding/Off-Boarding Session

2015-05-24 Thread Joseph Bajin
Hi,

We didn't have a lot of people attend this session (most likely because it
was scheduled at 5:30, so many probably went off to dinner), but those that
did attend provided great input.

Coverage of the session can be found at this etherpad:

https://etherpad.openstack.org/p/YVR-ops-customer-onboarding

One major item that was recommended was to really begin using the OSOps
GitHub group for training and on-boarding documentation.  Some examples
were Welcome Emails, Abandoned Resource Emails, and Training Modules.

I recommend that anyone who has a comment, suggestion, or idea use the
operators list to get the word out to as many people as possible.

Thanks

Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [CMDB] CMDB Vancouver Working Group

2015-05-24 Thread Joseph Bajin
Hi,

I wanted to provide a recap of the CMDB working group from the Summit.

There was good attendance, with people that had a great amount of experience
working with and building a CMDB.  While it may seem easy on the surface, it
actually is a pretty big project with a lot of different options and
theories.

We captured many of the data-points from the session in the below etherpad:

https://etherpad.openstack.org/p/YVR-ops-cmdb

In the next few days, I will send out a few additional emails covering the
action items from the session.  I'd rather not bombard everyone with them at
once, as they could easily get lost.  From there, a few of the
attendees have volunteered to help move this project forward.

We felt that any additional comments, questions or suggestions should be
directed to the operators mailing list to enlist additional input from
others.

Thanks,

Joe
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Vancouver Summit - Customer On-boarding/Off-boarding

2015-05-14 Thread Joseph Bajin
Hi,

I will be moderating the Customer On-Boarding/Off-Boarding[1] session at
the summit, and wanted to make sure we get as much feedback into the
etherpad[2] as possible.

Both adding and removing users seem like pretty simple ideas, but they get
complicated pretty quickly. So any suggestions, recommendations, or examples
are welcome!

Thanks

Joe


[1] - http://sched.co/3C4p
[2] - https://etherpad.openstack.org/p/YVR-ops-customer-onboarding
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] expanding to 2nd location

2015-05-06 Thread Joseph Bajin
Just to add in my $0.02, we run in multiple sites as well.  We are using
regions to do this.  Cells have a lot going for them at this point, but we
thought they weren't there yet.  We also don't have the necessary resources
to make our own changes to them like a few other places do.

With that, we said the only real requirement was to make sure items
such as Tenant and User IDs remained the same. That allows us to do
show-back reporting, and it makes it easier on the user base when they
want to deploy from one region to another.   With that requirement, we
used Galera in the same manner that many of the others mentioned.  We then
deployed Keystone pointing to that Galera DB.  That is the only DB that is
replicated across sites.  Everything else, such as Nova, Neutron, etc., is
all within its own location.
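
In a setup like that, each region is essentially just a set of endpoints
registered in the one shared Keystone, e.g. (the service ID and URLs below
are placeholders):

  keystone endpoint-create --region RegionTwo --service-id <nova-service-id> \
    --publicurl http://nova-r2.example.com:8774/v2/%\(tenant_id\)s \
    --internalurl http://nova-r2.example.com:8774/v2/%\(tenant_id\)s \
    --adminurl http://nova-r2.example.com:8774/v2/%\(tenant_id\)s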

The only really confusing piece for our users is the dashboard.  When you
first go to the dashboard, there is a dropdown to select a region.  Many
users think that it is going to send them to a particular location, so that
their information from that location is going to show up.  It really just
selects which region you want to authenticate against.  Once you are in the
dashboard, you can select which Project you want to see.  That has been a
major point of confusion. I think our solution is to just rename that text.





On Tue, May 5, 2015 at 11:46 AM, Clayton O'Neill clay...@oneill.net wrote:

 On Tue, May 5, 2015 at 11:33 AM, Curtis serverasc...@gmail.com wrote:

 Do people have any comments or strategies on dealing with Galera
 replication across the WAN using regions? Seems like something to try
 to avoid if possible, though might not be possible. Any thoughts on
 that?


 We're doing this with good luck.  Few things I'd recommend being aware of:

 Set galera_group_segment so that each site is in a separate segment.  This
 will make it smarter about doing replication and for state transfer.
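
 The underlying Galera knob for that is the gmcast.segment provider option;
 a minimal my.cnf sketch, with the segment numbers just being arbitrary
 per-site values:

   # site A nodes
   wsrep_provider_options = "gmcast.segment=1"
   # site B nodes
   wsrep_provider_options = "gmcast.segment=2"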

 Make sure you look at the timers and tunables in Galera and make sure they
 make sense for your network.  We've got lots of BW and lowish latency
 (37ms), so the defaults have worked pretty well for us.

 Make sure that when you do provisioning in one site, you don't have CM
 tools in the other site breaking things.  We ran into issues during our
 first deploy like this where Puppet was making a change in one site to a
 user, and Puppet in the other site reverted the change nearly immediately.
 You may have to tweak your deployment process to deal with that sort of
 thing.

 Make sure you're running Galera or Galera Arbitrator in enough sites to
 maintain quorum if you have issues.  We run 3 nodes in one DC, and 3 nodes
 in another DC for Horizon, Keystone and Designate.  We run a Galera
 arbitrator in a third DC to settle ties.

 Lastly, the obvious one is just to stay up to date on patches.  Galera is
 pretty stable, but we have run into bugs that we had to get fixes for.

 On Tue, May 5, 2015 at 11:33 AM, Curtis serverasc...@gmail.com wrote:

 Do people have any comments or strategies on dealing with Galera
 replication across the WAN using regions? Seems like something to try
 to avoid if possible, though might not be possible. Any thoughts on
 that?

 Thanks,
 Curtis.

 On Mon, May 4, 2015 at 3:11 PM, Jesse Keating j...@bluebox.net wrote:
  I agree with Subbu. You'll want that to be a region so that the control
  plane is mostly contained. Only Keystone (and swift if you have that)
 would
  be doing lots of site to site communication to keep databases in sync.
 
  http://docs.openstack.org/arch-design/content/multi_site.html is a
 good read
  on the topic.
 
 
  - jlk
 
  On Mon, May 4, 2015 at 1:58 PM, Allamaraju, Subbu su...@subbu.org
 wrote:
 
  I suggest building a new AZ (“region” in OpenStack parlance) in the new
  location. In general I would avoid setting up control plane to operate
  across multiple facilities unless the cloud is very large.
 
   On May 4, 2015, at 1:40 PM, Jonathan Proulx j...@jonproulx.com
 wrote:
  
   Hi All,
  
   We're about to expand our OpenStack Cloud to a second datacenter.
   Anyone one have opinions they'd like to share as to what I would and
   should be worrying about or how to structure this?  Should I be
   thinking cells or regions (or maybe both)?  Any obvious or not so
   obvious pitfalls I should try to avoid?
  
   Current scale is about 75 hypervisors.  Running juno on Ubuntu 14.04
   using Ceph for volume storage, ephemeral block devices, and image
   storage (as well as object store).  Bulk data storage for most (but
 by
   no means all) of our workloads is at the current location (not that
   that matters I suppose).
  
   Second location is about 150km away and we'll have 10G (at least)
   between sites. The expansion will be approximately the same size as
   the existing cloud maybe slightly larger and given site capacities
 the
   new location is also more likely to be where any future growth goes.
  
   Thanks,
   -Jon
  
   

Re: [Openstack-operators] Modification in nova policy file

2015-05-06 Thread Joseph Bajin
The policy file is not a filtering agent.   It basically just provides
ACL-type abilities.

Can you do this action?  True/False
Do you have the right permissions to call this action? True/False

If you wanted to pull back just the instances that the user owns, then you
would actually have to write some code that would call that particular
filtering action.



On Tue, May 5, 2015 at 11:01 AM, Salman Toor salman.t...@it.uu.se wrote:

  Hi,


  I am trying to setup the policies for nova. Can you please have a look
 if thats correct?


  nova/policy.json

     "context_is_admin":  "role:admin",
     "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
     "owner":  "user_id:%(user_id)s",
     "admin_or_user": "is_admin:True or user_id:%(user_id)s",
     "default": "rule:admin_or_owner",

     "compute:get_all": "rule:admin_or_user",
     ...

  I want users to only see their own instances, not the instances of all
 the users in the same tenant.

  I have restarted the nova-api service on the controller, but there is no
 effect. I have noticed that if I put "rule:context_is_admin" in
 "compute:get_all", then no one except admin can see anything, so the system
 is reading the file correctly.

  Important:

  1 - I haven’t changed the  /etc/openstack-dashboard/nova_policy.json

  2 - I have only used the command line client tool to confirm the
 behaviour.

  I am running Juno release.

  Please point me to some document that discusses all the policy parameters.

  Thanks in advance.

  /Salman

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Operator Help to get patch merged

2015-04-18 Thread Joseph Bajin
I wanted to see about getting some Operator help to push through a
patch [1].

The patch is to not give the user a 404 message back when they click the
cancel button while trying to change their password or the user settings.
The patch resets the page instead.

It's been sitting there for a while, but it started to get some -1's and then
+1's, then got moved from kilo-rc1 to liberty and back.  Some people think
that those screens should be somewhere else, others think the text should
be replaced, but that is not the purpose of the patch. It's just to avoid
giving back a negative user experience.

So, I'm hoping that I can get some Operator support to get this merged and
if they want to change the text, change the location, etc, then they can do
it later down the road.

Thanks

-Joe


[1] https://review.openstack.org/#/c/166569/
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] MTU on router interface (neutron GRE) without jumbo

2015-03-14 Thread Joseph Bajin
The MTU size only really matters for the server and client.   The links in
between need an MTU at least as large as the packets that are being sent.


Scenario 1:
Server - 1400 MTU
Client - 1400  MTU
Switches - 9216 MTU
OVS - 1500 MTU

Result: Successful - Traffic passes without any issue

Scenario 2:
Server - 1520 MTU
Client - 1520  MTU
Switches - 1516 MTU
OVS - 1500 MTU

Result: Failure - Traffic will have issues passing through.

So just make sure everything in-between is higher than your server and
client.

--Joe



On Fri, Mar 13, 2015 at 9:28 AM, George Shuklin george.shuk...@gmail.com
wrote:

 Hello.

 We've been hit badly by changes in the behaviour of OVS when we switched
 from the 3.08 to the 3.13 kernel. When running on 3.11 or above, OVS starts
 to use the kernel GRE services. And they copy the DNF (do not fragment) flag
 from the encapsulated packet to the GRE packet. And this messes everything
 up, because ICMP messages about dropped GRE never reach either the source or
 the destination of the underlying TCP.

 We've fixed problems with MTU by using a DHCP option for dnsmasq. This
 lowers the MTU inside instances. But there are routers (router namespaces),
 and they are still using a 1500-byte MTU.
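
 For reference, the dnsmasq piece of that is usually just DHCP option 26
 (interface MTU) pushed from the DHCP agent; a sketch, with 1454 as an
 example value:

   # /etc/neutron/dnsmasq-neutron.conf
   dhcp-option-force=26,1454

   # /etc/neutron/dhcp_agent.ini
   dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf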

 I feel like this can cause problems with some types of traffic, when a
 client (outside of openstack) sends DNF packets to an instance (via a
 floating IP) and that packet is silently dropped.

 1) Do those concerns have any real-life implications? TCP should take into
 account the MTU on the server and work smoothly, but what about other
 protocols?
 2) Is there any way to lower the MTU inside the router namespace?

 Thanks.

 P.S. Jumbo frames is not an option due reasons outside of our reach.

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Storage error

2015-03-03 Thread Joseph Bajin
You do have to create something for cinder to use as a backend.  That
could be an LVM volume group on a particular node, or it could be a Ceph
setup; that's up to your requirements.
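
If you go the LVM route, a minimal sketch of what that looks like on the node
running cinder-volume (the device name is a placeholder, and the config lines
are just the usual Icehouse defaults, so adjust to your setup):

  pvcreate /dev/sdb
  vgcreate cinder-volumes /dev/sdb

  # /etc/cinder/cinder.conf
  volume_group = cinder-volumes
  volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver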

Also, you didn't state the error that you are getting when you try to
create a cinder volume.  That would certainly help in figuring out what is
wrong.


-- Joe




On Tue, Mar 3, 2015 at 5:28 AM, Anwar Durrani durrani.an...@gmail.com
wrote:

 Hello Team,

 I have set up Icehouse with the following nodes in a test environment:

 1.) Controller node
 2.) Compute Node
 3.) Network Node

 I have a basic setup on it. When I try to create a volume for the tenant
 called demo or admin, I am able to create it, but it ends up with status
 error, and I don't know why that is. Do I need to configure storage as a
 separate node to create cinder volumes? What am I missing?

 Thanks

 --
 Thanks  regards,
 Anwar M. Durrani
 +91-8605010721
 http://in.linkedin.com/pub/anwar-durrani/20/b55/60b



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators