Re: [Openstack-operators] [nova] Would an api option to create an instance without powering on be useful?

2018-11-30 Thread Mohammed Naser
On Fri, Nov 30, 2018 at 7:07 AM Matthew Booth  wrote:

> I have a request to do $SUBJECT in relation to a V2V workflow. The use
> case here is conversion of a VM/Physical which was previously powered
> off. We want to move its data, but we don't want to be powering on
> stuff which wasn't previously on.
>
> This would involve an api change, and a hopefully very small change in
> drivers to support it. Technically I don't see it as an issue.
>
> However, is it a change we'd be willing to accept? Is there any good
> reason not to do this? Are there any less esoteric workflows which
> might use this feature?
>

If you upload an image of said VM which you don't boot, you'd really be
accomplishing the same thing, no?

Unless you want the VM to be there but sitting in SHUTOFF state.
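
For illustration, the image-based path would look roughly like this (a rough
sketch using the standard openstack CLI; the image name, file, and flavor are
placeholders):

  # upload the converted disk, but don't create any server yet
  openstack image create --disk-format qcow2 --container-format bare \
      --file converted-vm.qcow2 migrated-vm

  # later, only when the owner actually wants it running
  openstack server create --image migrated-vm --flavor m1.small migrated-vm-01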


> Matt
> --
> Matthew Booth
> Red Hat OpenStack Engineer, Compute DFG
>
> Phone: +442070094448 (UK)
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

Re: [Openstack-operators] openstack-ansible networking layout before running playbooks

2018-11-22 Thread Mohammed Naser
Hey there,

You can just have one br-mgmt and skip the second one and everything will
go over br-mgmt :)
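
As a rough sketch (untested; adapt to your environment), the provider_networks
section could then collapse to just the first entry, with both mgmt and storage
traffic riding br-mgmt over eth1:

  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
        is_ssh_address: true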

Thanks,
Mohammed

On Thu, Nov 22, 2018 at 5:05 AM Jawad Ahmed  wrote:

> Hi all,
> I am deploying openstack-ansible in a test environment where I need to use
> the br-mgmt bridge for both storage and management traffic (same bridge for
> both), so that container interfaces eth1 and eth2 will connect to br-mgmt
> for mgmt and storage traffic at the same time. Does it make sense if I set up
> the provider networks in openstack_user_config.yml as below?
>
>  tunnel_bridge: "br-vxlan"  # separate bridge for vxlan though
>   management_bridge: "br-mgmt"
>
>   provider_networks:
> - network:
> container_bridge: "br-mgmt"
> container_type: "veth"
> container_interface: "eth1"
> ip_from_q: "container"
> type: "raw"
> group_binds:
>   - all_containers
>   - hosts
> is_container_address: true
> is_ssh_address: true
>
>
>  - network:
> container_bridge: "br-mgmt"
> container_type: "veth"
> container_interface: "eth2"
> ip_from_q: "storage"
> type: "raw"
> group_binds:
>   - glance_api
>   - cinder_api
>   - cinder_volume
>   - nova_compute
>
> Help would be appreciated.
>
> --
> Greetings,
> Jawad Ahmed
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Fleio - OpenStack billing - ver. 1.1 released

2018-10-19 Thread Mohammed Naser
On Fri, Oct 19, 2018 at 7:45 PM Jay Pipes  wrote:
>
> Please do not use these mailing lists to advertise
> closed-source/proprietary software solutions.

+1

> Thank you,
> -jay
>
> On 10/19/2018 05:42 AM, Adrian Andreias wrote:
> > Hello,
> >
> > We've just released Fleio version 1.1.
> >
> > Fleio is a billing solution and control panel for OpenStack public
> > clouds and traditional web hosters.
> >
> > Fleio software automates the entire process for cloud users. New
> > customers can use Fleio to sign up for an account, pay invoices, add
> > credit to their account, as well as create and manage cloud resources
> > such as virtual machines, storage and networking.
> >
> > Full feature list:
> > https://fleio.com#features
> >
> > You can see an online demo:
> > https://fleio.com/demo
> >
> > And sign-up for a free trial:
> > https://fleio.com/signup
> >
> >
> >
> > Cheers!
> >
> > - Adrian Andreias
> > https://fleio.com
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-07 Thread Mohammed Naser
> consensus that just removing the default-to-empty-sqlite behavior is the
> right thing to do. Placement won't magically find nova.conf if it exists
> and jump into its database, and it also won't do the silly thing of
> starting up with an empty database if the very important config step is
> missed in the process of deploying placement itself. Operators will have
> to deploy the new package and do the database surgery (which we will
> provide instructions and a script for) as part of that process, but
> there's really no other sane alternative without changing the current
> agreed-to plan regarding the split.
>
> Is everyone okay with the above summary of the outcome?

I've dropped my -1 from this given the discussion

https://review.openstack.org/#/c/600157/
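
For context, the "very important config step" Dan mentions above boils down to
pointing placement at a real database instead of letting it fall back to an
empty SQLite file, roughly this in placement.conf (a sketch; credentials and
host are placeholders):

  [placement_database]
  connection = mysql+pymysql://placement:secretpass@dbhost/placement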

> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update

2018-09-06 Thread Mohammed Naser
On Wed, Sep 5, 2018 at 12:41 PM Dan Smith  wrote:
>
> > I think there was a period in time where the nova_api database was created
> > where entries would try to get pulled out from the original nova database and
> > then checking nova_api if it doesn't exist afterwards (or vice versa).  One
> > of the cases that this was done to deal with was for things like instance
> > types or flavours.
> >
> > I don't know the exact details but I know that older instance types exist in
> > the nova db and the newer ones are sitting in nova_api.  Something along
> > those lines?
>
> Yep, we've moved entire databases before in nova with minimal disruption
> to the users. Not just flavors, but several pieces of data came out of
> the "main" database and into the api database transparently. It's
> doable, but with placement being split to a separate
> project/repo/whatever, there's not really any option for being graceful
> about it in this case.
>
> > At this point, I'm thinking: turn off placement, set up the new one, and do
> > the migration of the placement-specific tables (this can be a straightforward
> > documented task, OR it would be awesome if it was a placement command,
> > something along the lines of `placement-manage db import_from_nova`, which
> > would import all the right things).
> >
> > The idea of having a command would be *extremely* useful for deployment
> > tools in automating the process and it also allows the placement team to
> > selectively decide what they want to onboard?
>
> Well, it's pretty cut-and-dried as all the tables in nova-api are either
> for nova or placement, so there's not much confusion about what belongs.
>
> I'm not sure that doing this import in python is really the most
> efficient way. I agree a placement-manage command would be ideal from an
> "easy button" point of view, but I think a couple lines of bash that
> call mysqldump are likely to vastly outperform us doing it natively in
> python. We could script exec()s of those commands from python, but.. I
> think I'd rather just see that as a shell script that people can easily
> alter/test on their own.
>
> Just curious, but in your case would the service catalog entry change at
> all? If you stand up the new placement in the exact same spot, it
> shouldn't, but I imagine some people will have the catalog entry change
> slightly (even if just because of a VIP or port change). Am I
> remembering correctly that the catalog can get cached in various places
> such that much of nova would need a restart to notice?

We already have placement in the catalog and it's behind a load balancer,
so changing the backends resolves things right away and we likely won't
need any restarts (and I don't think OSA will either, because it uses
the same model).
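
For anyone curious, the "couple lines of bash" Dan mentions could look roughly
like this (a sketch; the table list is from memory and abbreviated, so verify
it against the release you run before using anything like this):

  # dump only the placement tables out of the nova_api database
  mysqldump --single-transaction nova_api \
      resource_providers resource_provider_aggregates resource_provider_traits \
      inventories allocations traits consumers projects users \
      > placement.sql

  # load them into the new, empty placement database
  mysql placement < placement.sql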

> --Dan



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] [placement] extraction (technical) update

2018-09-05 Thread Mohammed Naser
On Wed, Sep 5, 2018 at 10:57 AM Matt Riedemann  wrote:
>
> On 9/5/2018 8:47 AM, Mohammed Naser wrote:
> > Could placement not do what happened for a while when the nova_api
> > database was created?
>
> Can you be more specific? I'm having a brain fart here and not
> remembering what you are referring to with respect to the nova_api DB.

I think there was a period in time where the nova_api database was created
where entries would try to get pulled out from the original nova database and
then checking nova_api if it doesn't exist afterwards (or vice versa).  One
of the cases that this was done to deal with was for things like instance types
or flavours.

I don't know the exact details but I know that older instance types exist in
the nova db and the newer ones are sitting in nova_api.  Something along
those lines?

> >
> > I say this because I know that moving the database is a huge task for
> > us, considering how big it can be in certain cases for us, and it
> > means control plane outage too
>
> I'm pretty sure you were in the room in YVR when we talked about how
> operators were going to do the database migration and were mostly OK
> with what was discussed, which was a lot will just copy and take the
> downtime (I think CERN said around 10 minutes for them, but they aren't
> a public cloud either), but others might do something more sophisticated
> and nova shouldn't try to pick the best fit for all.

If we're provided the list of tables used by placement, we could make the
downtime considerably smaller because we don't have to pull in the other huge
tables like instances/build requests/etc.

What happens if things like server deletes happen while the placement service
is down?

> I'm definitely interested in what you do plan to do for the database
> migration to minimize downtime.

At this point, I'm thinking: turn off placement, set up the new one, and do
the migration of the placement-specific tables. This can be a straightforward
documented task, OR it would be awesome if it was a placement command
(something along the lines of `placement-manage db import_from_nova`) which
would import all the right things.

The idea of having a command would be *extremely* useful for deployment tools
in automating the process and it also allows the placement team to selectively
decide what they want to onboard?

Just throwing ideas here.

> +openstack-operators ML since this is an operators discussion now.
>
> --
>
> Thanks,
>
> Matt



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [puppet] migrating to storyboard

2018-08-15 Thread Mohammed Naser
It's a +1 from me.  I don't think there is anything linked specifically to it.

On Wed, Aug 15, 2018 at 11:22 AM, Emilien Macchi  wrote:
> On Tue, Aug 14, 2018 at 6:33 PM Tobias Urdin  wrote:
>>
>> Please let me know what you think about moving to Storyboard?
>
> Go for it. AFAIK we don't have specific blockers to make that migration
> happening.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Hi Paul:

On Thu, Jun 28, 2018 at 5:03 PM, Paul Belanger  wrote:
> On Thu, Jun 28, 2018 at 12:56:22PM -0400, Mohammed Naser wrote:
>> Hi everyone:
>>
>> This email is to ask if there is anyone out there opposed to removing
>> the SELinux bits from OpenStack Ansible; it's blocking some of the gates,
>> and the maintainers for them are no longer working on the project,
>> unfortunately.
>>
>> I'd like to propose removing any SELinux stuff from OSA based on the 
>> following:
>>
>> 1) We don't gate on it, we don't test it, we don't support it.  If
>> you're running OSA with SELinux enforcing, please let us know how :-)
>> 2) It extends beyond the scope of the deployment project and there are
>> no active maintainers with the resources to deal with them
>> 3) With the work currently in place to let OpenStack Ansible install
>> distro packages, we can rely on upstream `openstack-selinux` package
>> to deliver deployments that run with SELinux on.
>>
>> Is there anyone opposed to removing it?  If so, please let us know. :-)
>>
> While I don't use OSA, I would be surprised to learn that selinux wouldn't be
> supported.  I also understand it requires time and care to maintain. Have you
> tried reaching out to people in #RDO, IIRC all those packages should support
> selinux.

Indeed, the support from RDO for SELinux works very well.  In this case, however,
OpenStack Ansible deploys from source and therefore places binaries in different
places than the default locations expected by the upstream `openstack-selinux`.

As we work towards adding 'distro' support (which, to clarify, means installing
from RPMs or DEBs rather than from source), we'll be able to pull in that package and
automagically get SELinux support that's supported by an upstream that
tracks it.

> As for gating, maybe default to selinux permissive for it to report errors, but
> not fail.  And if anybody is interested in supporting it, they can do so and
> enable enforcing again when everything is fixed.

That's reasonable.  However, right now we have bugs around the distribution
of SELinux modules and how they are compiled inside the containers,
which means that we're not having problems with the rules as much as uploading
the rules and getting them compiled inside the server.

I hope I've cleared up a bit more of our side of things; I'm actually
looking forward to us being able to support upstream distro packages.

> - Paul
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Also, this is the change that drops it, so feel free to vote with your
opinion there too:

https://review.openstack.org/578887 Drop SELinux support from os_swift

On Thu, Jun 28, 2018 at 12:56 PM, Mohammed Naser  wrote:
> Hi everyone:
>
> This email is to ask if there is anyone out there opposed to removing
> the SELinux bits from OpenStack Ansible; it's blocking some of the gates,
> and the maintainers for them are no longer working on the project,
> unfortunately.
>
> I'd like to propose removing any SELinux stuff from OSA based on the 
> following:
>
> 1) We don't gate on it, we don't test it, we don't support it.  If
> you're running OSA with SELinux enforcing, please let us know how :-)
> 2) It extends beyond the scope of the deployment project and there are
> no active maintainers with the resources to deal with them
> 3) With the work currently in place to let OpenStack Ansible install
> distro packages, we can rely on upstream `openstack-selinux` package
> to deliver deployments that run with SELinux on.
>
> Is there anyone opposed to removing it?  If so, please let us know. :-)
>
> Thanks!
> Mohammed
>
> --
> Mohammed Naser — vexxhost
> -
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mna...@vexxhost.com
> W. http://vexxhost.com



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [openstack-ansible] dropping selinux support

2018-06-28 Thread Mohammed Naser
Hi everyone:

This email is to ask if there is anyone out there opposed to removing
the SELinux bits from OpenStack Ansible; it's blocking some of the gates,
and the maintainers for them are no longer working on the project,
unfortunately.

I'd like to propose removing any SELinux stuff from OSA based on the following:

1) We don't gate on it, we don't test it, we don't support it.  If
you're running OSA with SELinux enforcing, please let us know how :-)
2) It extends beyond the scope of the deployment project and there are
no active maintainers with the resources to deal with them
3) With the work currently in place to let OpenStack Ansible install
distro packages, we can rely on upstream `openstack-selinux` package
to deliver deployments that run with SELinux on.

Is there anyone opposed to removing it?  If so, please let us know. :-)

Thanks!
Mohammed

-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova][neutron] How do you use the instance IP filter?

2017-10-26 Thread Mohammed Naser
On Thu, Oct 26, 2017 at 10:23 PM, Matt Riedemann 
wrote:

> Nova has had this long-standing known performance issue if you're
> filtering a large number of instances by IP. The instance IPs are stored in
> a JSON blob in the database so we don't do filtering in SQL. We pull the
> instances out of the database, deserialize the JSON and then apply a regex
> filter match in the nova-api python code.
>
> At the Queens PTG we talked about possible ways to fix this and came up
> with this nova spec:
>
> https://specs.openstack.org/openstack/nova-specs/specs/queen
> s/approved/improve-filter-instances-by-ip-performance.html
>
> The idea is to have nova get ports from neutron and apply the IP filter in
> neutron to whittle down the ports, then from that list of ports get the
> instances to pull out of the nova database.
>
> One issue that has come up with this is neutron does not currently support
> regex filters when listing ports. There is an RFE for adding that:
>
> https://bugs.launchpad.net/neutron/+bug/1718605
>
> The proposed neutron implementation is to just do SQL LIKE substring
> matching in the database.
>
> However, one issue that has come up is that the compute API accepts a
> python regex filter and uses re.match():
>
> https://github.com/openstack/nova/blob/16.0.0/nova/compute/api.py#L2469
>
> At least one good thing about that is match() only matches from the
> beginning of the string unlike search().
>
> So for example I can filter on "192.16.*[1-5]$" if I wanted to, but that's
> not going to work with just a LIKE substring filter in SQL.
>
> The question is, does anyone actually do more than basic substring
> matching with the IP filter today? Because if we started using neutron,
> that behavior would be broken. We've never actually documented the match
> restrictions on the IP filter, but that's not a good reason to break it.
>

The use case for us is that it helps us easily identify VMs that
we get abuse reports for (or that we see malicious traffic going
to/from).  We usually search for an *exact* match of the IP address, as we
are simply trying to look up an instance ID based on the IP
address.  Regex matching isn't important in our case.
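
For what it's worth, our lookups are no fancier than this (anchoring the regex
so the exact-match intent is explicit; the address is a placeholder):

  openstack server list --all-projects --ip '^192\.0\.2\.45$'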


> One option is to make this configurable such that deployments which rely
> on the complicated pattern matching can just use the existing nova code
> despite performance issues. However, that's not interoperable, I hate
> config-driven API behavior, and it would mean maintaining two code paths in
> nova, which is also terrible.
>
> I was trying to think of a way to determine if the IP filter passed to
> nova is basic or a complicated pattern match and let us decide that way,
> but I'm not sure if there are good ways to detect that - maybe by simply
> looking for special characters like (, ), - and $? But then there is [] and
> we have an IPv6 filter, so that gets messy too...
>
> For now I'd just like to know if people rely on the regex match or not.
> Other ideas on how to handle this are appreciated.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] Should we make rebuild + new image on a volume-backed instance fail fast?

2017-10-06 Thread Mohammed Naser
On Fri, Oct 6, 2017 at 1:32 PM, Mathieu Gagné <mga...@calavera.ca> wrote:
> Why not support this use case?
>
> Same reason as before: A user might wish to keep its IP addresses.
>
> The user cannot do the following to bypass the limitation:
> 1) stop instance
> 2) detach root volume
> 3) delete root volume
> 4) create new volume
> 5) attach as root
> 6) boot instance
>
> Last time I tried, operation fails at step 2. I would need to test
> against latest version of Nova to confirm.

You are right, this is indeed something that is not possible.

> Otherwise boot-from-volume feels like a second class citizen.
>
> --
> Mathieu
>
>
> On Fri, Oct 6, 2017 at 1:22 PM, Matt Riedemann <mriede...@gmail.com> wrote:
>> This came up in IRC discussion the other day, but we didn't dig into it much
>> given we were all (2 of us) exhausted talking about rebuild.
>>
>> But we have had several bugs over the years where people expect the root
>> disk to change to a newly supplied image during rebuild even if the instance
>> is volume-backed.
>>
>> I distilled several of those bugs down to just this one and duplicated the
>> rest:
>>
>> https://bugs.launchpad.net/nova/+bug/1482040
>>
>> I wanted to see if there is actually any failure on the backend when doing
>> this, and there isn't - there is no instance fault or anything like that.
>> It's just not what the user expects, and actually the instance image_ref is
>> then shown later as the image specified during rebuild, even though that's
>> not the actual image in the root disk (the volume).
>>
>> There have been a couple of patches proposed over time to change this:
>>
>> https://review.openstack.org/#/c/305079/
>>
>> https://review.openstack.org/#/c/201458/
>>
>> https://review.openstack.org/#/c/467588/
>>
>> And Paul Murray had a related (approved) spec at one point for detach and
>> attach of root volumes:
>>
>> https://review.openstack.org/#/c/221732/
>>
>> But the blueprint was never completed.
>>
>> So with all of this in mind, should we at least consider, until at least
>> someone owns supporting this, that the API should fail with a 400 response
>> if you're trying to rebuild with a new image on a volume-backed instance?
>> That way it's a fast failure in the API, similar to trying to backup a
>> volume-backed instance fails fast.
>>
>> If we did, that would change the API response from a 202 today to a 400,
>> which is something we normally don't do. I don't think a microversion would
>> be necessary if we did this, however, because essentially what the user is
>> asking for isn't what we're actually giving them, so it's a failure in an
>> unexpected way even if there is no fault recorded, it's not what the user
>> asked for. I might not be thinking of something here though, like
>> interoperability for example - a cloud without this change would blissfully
>> return 202 but a cloud with the change would return a 400...so that should
>> be considered.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [tripleo] Making containerized service deployment the default

2017-09-18 Thread Mohammed Naser
On Mon, Sep 18, 2017 at 3:04 PM, Alex Schultz  wrote:
> Hey ops & devs,
>
> We talked about containers extensively at the PTG and one of the items
> that needs to be addressed is that currently we still deploy the
> services as bare metal services via puppet. For Queens we would like
> to switch the default to be containerized services.  With this switch
> we would also start the deprecation process for deploying services as
> bare metal services via puppet.  We still execute the puppet
> configuration as part of the container configuration process so the
> code will continue to be leveraged but we would be investing more in
> the continual CI of the containerized deployments and reducing the
> traditional scenario coverage.
>
> As we switch over to containerized services by default, we would also
> begin to reduce installed software on the overcloud images that we
> currently use.  We have an open item to better understand how we can
> switch away from the golden images to a traditional software install
> process during the deployment and make sure this is properly tested.
> In theory it should work today by switching the default for
> EnablePackageInstall[0] to true and configuring repositories, but this
> is something we need to verify.
>
> If anyone has any objections to this default switch, please let us know.

I think this is a great initiative.  It would be nice to share some of
the TripleO experience so that the rest of us can use Puppet for
containerized deployments too.  Perhaps we can work together on adding
some classes which help deploy and configure containerized services with
Puppet.
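
For anyone wanting to try that default today, it should just be a parameter
override in an environment file (a sketch based on [0] below; untested):

  parameter_defaults:
    EnablePackageInstall: true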

>
> Thanks,
> -Alex
>
> [0] 
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/tripleo-packages.yaml#L33-L36
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Case studies on Openstack HA architecture

2017-08-26 Thread Mohammed Naser
We've also always done the same.  We deploy any cloud with a minimum of 3
nodes (quorum, etc.), but when the database or message queue needs to scale up,
the other components probably need to as well, so we make an equal replica of
everything across the environment.

Sent from my iPhone

> On Aug 26, 2017, at 9:13 AM, David Medberry  wrote:
> 
> I'm not aware of any studies as per se, but we have long run rabbitmq, MySQL, 
> and all the API endpoints on the same three nodes. 
> 
> 
> 
>> On Aug 25, 2017 6:12 PM, "Imtiaz Chowdhury"  
>> wrote:
>> Hi Openstack operators,
>> 
>>  
>> 
>> Most Openstack HA deployment use 3 node database cluster, 3 node rabbitMQ 
>> cluster and 3 Controllers. I am wondering whether there are any studies done 
>> that show the pros and cons of co-locating database and messaging service 
>> with the Openstack control services.  In other words, I am very interested 
>> in learning about advantages and disadvantages, in terms of ease of 
>> deployment, upgrade and overall API performance, of having 3 all-in-one 
>> Openstack controller over a more distributed deployment model.
>> 
>>  
>> 
>> References to any work done in this area will be highly appreciated.
>> 
>>  
>> 
>> Thanks, 
>> Imtiaz
>> 
>>  
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [kolla][puppet][openstack-ansible][tripleo] Better way to run wsgi service.

2017-08-25 Thread Mohammed Naser
On Fri, Aug 25, 2017 at 2:56 AM, Jeffrey Zhang <zhang.lei@gmail.com> wrote:
> thanks mnaser and sam for the advice.
>
> i think uwsgi + native http is not a good solution for nova. An http
> server + uwsgi is better. So i am imagining that the deployment
> architecture will be like
>
> haproxy --> http server -> uwsgi_nova_api / uwsgi_glance_api etc.
>
> As mnaser said, one http server serves the other uwsgi services.
>
> on the other hand, which following solution is better?
>
> - apache + mod_uwsgi ( not recommended by uwsgi )

Not recommended by them is probably a sign to stay away

> - apache + mode_proxy_uwsgi ( recommended by uwsgi)

This has two advantages.  DevStack at the gate tests using this exact
architecture AFAIK, which means that we would be less likely to run into
issues that were never discovered in testing.

> - nginx + uwsgi

I like this a lot, however, for applications such as Keystone, they
sometimes rely on Apache for some features (such as Federation:
https://docs.openstack.org/keystone/latest/advanced-topics/federation/configure_federation.html#configure-apache-to-use-a-federation-capable-authentication-method)

However, Apache has done a lot of improvements in terms of catching up
to nginx and if using an MPM worker such as mpm_event, you would end
up with a very similar architecture and performance.
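
For reference, the nginx side of the nginx + uwsgi option is tiny; a hedged
sketch (the port and socket path are placeholders):

  server {
      listen 8774;
      location / {
          include uwsgi_params;
          # hand the request to the nova-api uwsgi process over a unix socket
          uwsgi_pass unix:/var/run/uwsgi/nova-api.sock;
      }
  }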

>
> So the question is why community choose apache rather than nginx?
>
> [0] http://uwsgi-docs.readthedocs.io/en/latest/Apache.html
>
>
> On Fri, Aug 25, 2017 at 5:32 AM, Sam Yaple <sam...@yaple.net> wrote:
>>
>> I have been running api services behind uwsgi in http mode from Newton
>> forward. I recently switched to the uwsgi+nginx model with 2 containers
>> since I was having weird issues with things that I couldn't track down,
>> mainly after I started using keystone with ldap. There would be timeouts and
>> message-too-long type errors that all went away with nginx.
>>
>> Additionally, running with HTTPS was measurably faster with nginx proxying
>> to a uwsgi socket.
>>
>> This was just my experience with it, if you do want to switch to
>> uwsgi+http make sure you do thorough testing of all the components or you
>> may be left with a component that just won't work with your model.
>>
>>
>> On Thu, Aug 24, 2017 at 12:29 PM, Mohammed Naser <mna...@vexxhost.com>
>> wrote:
>>>
>>> On Thu, Aug 24, 2017 at 11:15 AM, Jeffrey Zhang <zhang.lei@gmail.com>
>>> wrote:
>>> > We are moving to deploy service via WSGI[0].
>>> >
>>> > There are two recommended ways.
>>> >
>>> > - apache + mod_wsgi
>>> > - nginx + uwsgi
>>> >
>>> > the latter one is better.
>>> >
>>> > For traditional deployment, it is easy to implement. Use one
>>> > uwsgi process to launch all wsgi services (nova-api, cinder-api,
>>> > etc.).
>>> > Then one nginx process to forward the http requests to the uwsgi server.
>>> >
>>> > But kolla is running one process in one container. If we use
>>> > the recommended solution, we have to use two container to run
>>> > nova-api container. and it will generate lots of containers.
>>> > like: nginx-nova-api, uwsig-nova-api.
>>> >
>>> > i propose using uwsgi native http mode[1]. Then one uwsgi
>>> > container is enough to run the nova-api service. Based on the official
>>> > doc, if there is no static resource, uWSGI is recommended for use
>>> > as a real http server.
>>> >
>>> > So how about this?
>>>
>>> I think this is an interesting approach.  At the moment, the Puppet
>>> modules allow deploying services using mod_wsgi.
>>> Personally, I don't think that relying on the HTTP support of uWSGI is
>>> very favorable.   It is quite difficult to configure and 'get right'
>>> and it leaves a lot to be desired (for example, federated auth relies
>>> on Apache modules which would make this nearly impossible).
>>>
>>> Given that the current OpenStack testing infrastructure proxies to
>>> UWSGI, I think it would be best if that approach is taken.  This way,
>>> a container (or whatever service) would expose a UWSGI API, which you
>>> can connect whichever web server that you need.
>>>
>>> In the case of Kolla, the `httpd` container would be similar to what
>>> the `haproxy` is.  In the case of Puppet, I can imagine this being
>>> something to be managed by the user (with some helpers in there).  I
>>> think it would be interesting to add the tripleo folks on their
>>> opinion here as consumers of the Puppet modules.
Re: [Openstack-operators] [kolla][puppet][openstack-ansible][tripleo] Better way to run wsgi service.

2017-08-24 Thread Mohammed Naser
On Thu, Aug 24, 2017 at 11:15 AM, Jeffrey Zhang  wrote:
> We are moving to deploy service via WSGI[0].
>
> There are two recommended ways.
>
> - apache + mod_wsgi
> - nginx + uwsgi
>
> the latter one is better.
>
> For traditional deployment, it is easy to implement. Use one
> uwsgi process to launch all wsgi services (nova-api, cinder-api, etc.).
> Then one nginx process to forward the http requests to the uwsgi server.
>
> But kolla is running one process in one container. If we use
> the recommended solution, we have to use two container to run
> nova-api container. and it will generate lots of containers.
> like: nginx-nova-api, uwsig-nova-api.
>
> i propose using uwsgi native http mode[1]. Then one uwsgi
> container is enough to run the nova-api service. Based on the official
> doc, if there is no static resource, uWSGI is recommended for use
> as a real http server.
>
> So how about this?

I think this is an interesting approach.  At the moment, the Puppet
modules allow deploying services using mod_wsgi.
Personally, I don't think that relying on the HTTP support of uWSGI is
very favorable.   It is quite difficult to configure and 'get right'
and it leaves a lot to be desired (for example, federated auth relies
on Apache modules which would make this nearly impossible).

Given that the current OpenStack testing infrastructure proxies to
UWSGI, I think it would be best if that approach is taken.  This way,
a container (or whatever service) would expose a UWSGI API, which you
can connect whichever web server that you need.

In the case of Kolla, the `httpd` container would be similar to what
the `haproxy` is.  In the case of Puppet, I can imagine this being
something to be managed by the user (with some helpers in there).  I
think it would be interesting to add the tripleo folks on their
opinion here as consumers of the Puppet modules.

>
>
> [0] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
> [1]
> http://uwsgi-docs.readthedocs.io/en/latest/HTTP.html#can-i-use-uwsgi-s-http-capabilities-in-production
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] HTTP/S Termination with Haproxy + Keystone

2017-02-22 Thread Mohammed Naser
I would appreciate it if you could let us know which one it is for Cinder, as it
looks like there is no SSL middleware for Cinder which allows doing this.
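
For reference, Mathieu's keystone-side fix quoted below amounts to two lines in
keystone.conf (a sketch; the hostnames are placeholders):

  [DEFAULT]
  public_endpoint = https://some.domain.place:5000/
  admin_endpoint = https://some.domain.place:35357/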

Thanks

> On Feb 22, 2017, at 1:43 PM, Chris Suttles  wrote:
> 
> There's a similar option in heat.conf:
> 
> secure_proxy_ssl_header = X-Forwarded-Proto
> 
> Pretty sure that's needed for most services; I will scrub my configs and 
> check. We are running a pretty simple install of Newton, and doing haproxy 
> for SSL termination of all API endpoints.
> 
> On Wed, Feb 22, 2017 at 9:58 AM, Chris Apsey wrote:
> Mathieu,
> 
> That did the trick - thank you.  On a related note, heat is exhibiting the 
> same behavior on some of the API calls (stack list works fine, stack show 
> does not because a http URL is returned in the 302 response field, etc.).
> 
> I attempted the combination of 'oslo_middleware/enable_proxy_headers_parsing' 
> and 'oslo_middleware/secure_proxy_ssl_header' referenced here 
> https://docs.openstack.org/newton/config-reference/orchestration/api.html 
>  
> along with the appropriate haproxy configuration suggested by Mike, but no 
> dice.  The URL doesn't change.  Beyond that, it looks like that option is 
> deprecated anyway (at least in heat), although I have not found any 
> indication about what is supposed to 'replace' those options going forward.
> 
> Ideas?
> 
> Thanks so much,
> 
> ---
> v/r
> 
> Chris Apsey
> bitskr...@bitskrieg.net 
> https://www.bitskrieg.net 
> 
> On 2017-02-21 21:46, Mathieu Gagné wrote:
> Hi,
> 
> The problem is that Keystone doesn't know about HAProxy terminating
> the SSL connection and therefore doesn't know it needs to generate
> URLs with https:// protocol.
> 
> You can override the "auto-detected" URLs with those configurations:
> - [DEFAULT]/public_endpoint
> - [DEFAULT]/admin_endpoint
> 
> See documentation for a bit more explanation about those
> configurations:
> https://docs.openstack.org/draft/config-reference/identity/api.html 
> 
> --
> Mathieu
> 
> 
> On Tue, Feb 21, 2017 at 8:56 PM, Chris Apsey  > wrote:
> I'm having a strange issue with keystone after migrating all public
> endpoints to https (haproxy terminates the SSL connection for each service):
> 
> openstack endpoint list
> 
> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+
> | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                               |
> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+
> ...
> | 99d302d00ab3461cb9362236c865a430 | RegionOne | keystone     | identity     | True    | public    | https://some.domain.place:5000/v3 |
> ...
> 
> I have also updated my rc files appropriately.  Whenever I try and use the
> CLI against the public endpoints in debug mode, everything starts out
> looking good:
> 
> REQ: curl -g -i -X GET https://some.domain.place:5000/v3 -H "Accept:
> application/json" -H "User-Agent: osc-lib keystoneauth1/2.12.1
> python-requests/2.11.1 CPython/2.7.9"
> 
> But then, the response body gives a non-https URL:
> 
> RESP BODY: {"version": {"status": "stable", "updated":
> "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type":
> "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links":
> [{"href": "http://some.domain.place:5000/v3/ 
> ", "rel": "self"}]}}
> 
> and then the attempt to authenticate fails:
> 
> Making authentication request to
> http://some.domain.place:5000/v3/auth/tokens 
> 
> Starting new HTTP connection (1): some.domain.place
> Unable to establish connection to
> http://some.domain.place:5000/v3/auth/tokens 
> 
> 
> I've restarted apache2 on my keystone hosts and I have scoured the database
> for any reference to a non-https public endpoint for keystone; I cannot find
> one.
> 
> Does anyone know why my response body is giving the wrong URL?  Horizon
> works perfectly fine with the https endpoints; it's just the command line
> clients that are having issues.
> 
> Thanks in advance,
> 
> --
> v/r
> 
> Chris Apsey
> bitskr...@bitskrieg.net 
> https://www.bitskrieg.net 
> 
> ___
> OpenStack-operators mailing list
> 

Re: [Openstack-operators] HTTP/S Termination with Haproxy + Keystone

2017-02-22 Thread Mohammed Naser
Cinder faces the same issue, unfortunately, and it will result in failed RefStack
runs (does this mean everyone who's run RefStack uses no HTTPS for APIs, or uses
SSL inside Eventlet?)

We're still trying to figure that one out. 

Sent from my iPhone

> On Feb 22, 2017, at 12:58 PM, Chris Apsey  wrote:
> 
> Mathieu,
> 
> That did the trick - thank you.  On a related note, heat is exhibiting the 
> same behavior on some of the API calls (stack list works fine, stack show 
> does not because a http URL is returned in the 302 response field, etc.).
> 
> I attempted the combination of 'oslo_middleware/enable_proxy_headers_parsing' 
> and 'oslo_middleware/secure_proxy_ssl_header' referenced here 
> https://docs.openstack.org/newton/config-reference/orchestration/api.html 
> along with the appropriate haproxy configuration suggested by Mike, but no 
> dice.  The URL doesn't change.  Beyond that, it looks like that option is 
> deprecated anyway (at least in heat), although I have not found any 
> indication about what is supposed to 'replace' those options going forward.
> 
> Ideas?
> 
> Thanks so much,
> 
> ---
> v/r
> 
> Chris Apsey
> bitskr...@bitskrieg.net
> https://www.bitskrieg.net
> 
>> On 2017-02-21 21:46, Mathieu Gagné wrote:
>> Hi,
>> The problem is that Keystone doesn't know about HAProxy terminating
>> the SSL connection and therefore doesn't know it needs to generate
>> URLs with https:// protocol.
>> You can override the "auto-detected" URLs with those configurations:
>> - [DEFAULT]/public_endpoint
>> - [DEFAULT]/admin_endpoint
>> See documentation for a bit more explanation about those
>> configurations:
>> https://docs.openstack.org/draft/config-reference/identity/api.html
>> --
>> Mathieu
>>> On Tue, Feb 21, 2017 at 8:56 PM, Chris Apsey  
>>> wrote:
>>> I'm having a strange issue with keystone after migrating all public
>>> endpoints to https (haproxy terminates the SSL connection for each service):
>>> openstack endpoint list
>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+
>>> | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                               |
>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------+
>>> ...
>>> | 99d302d00ab3461cb9362236c865a430 | RegionOne | keystone     | identity     | True    | public    | https://some.domain.place:5000/v3 |
>>> ...
>>> I have also updated my rc files appropriately.  Whenever I try and use the
>>> CLI against the public endpoints in debug mode, everything starts out
>>> looking good:
>>> REQ: curl -g -i -X GET https://some.domain.place:5000/v3 -H "Accept:
>>> application/json" -H "User-Agent: osc-lib keystoneauth1/2.12.1
>>> python-requests/2.11.1 CPython/2.7.9"
>>> But then, the response body gives a non-https URL:
>>> RESP BODY: {"version": {"status": "stable", "updated":
>>> "2016-10-06T00:00:00Z", "media-types": [{"base": "application/json", "type":
>>> "application/vnd.openstack.identity-v3+json"}], "id": "v3.7", "links":
>>> [{"href": "http://some.domain.place:5000/v3/;, "rel": "self"}]}}
>>> and then the attempt to authenticate fails:
>>> Making authentication request to
>>> http://some.domain.place:5000/v3/auth/tokens
>>> Starting new HTTP connection (1): some.domain.place
>>> Unable to establish connection to
>>> http://some.domain.place:5000/v3/auth/tokens
>>> I've restarted apache2 on my keystone hosts and I have scoured the database
>>> for any reference to a non-https public endpoint for keystone; I cannot find
>>> one.
>>> Does anyone know why my response body is giving the wrong URL?  Horizon
>>> works perfectly fine with the https endpoints; it's just the command line
>>> clients that are having issues.
>>> Thanks in advance,
>>> --
>>> v/r
>>> Chris Apsey
>>> bitskr...@bitskrieg.net
>>> https://www.bitskrieg.net
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] libvirt freezing when loading Nova instance nwfilters

2017-02-22 Thread Mohammed Naser
Forgive my short reply but do you have a lot of VMs on these machines by any 
chance?  We've seen issues with libvirt taking a long time with a large number 
of VMs as it sets up all the network filters. 
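
A quick way to gauge this on an affected hypervisor (a sketch; assumes the
default nova-instance-instance-* filter naming seen in this thread):

  virsh nwfilter-list | grep -c nova-instance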

Sent from my iPhone

> On Feb 22, 2017, at 10:33 AM, Edmund Rhudy (BLOOMBERG/ 120 PARK) 
>  wrote:
> 
> I recently witnessed a strange issue with libvirt when upgrading one of our 
> clusters from Kilo to Liberty. I'm not really looking for a specific 
> diagnosis here because of the large number of confounding factors and the 
> relative ease of remediating it, but I'm interested to hear if anyone else 
> has witnessed this particular problem.
> 
> Background is we had a number of Kilo-based clusters, all running Ubuntu 
> 14.04.4 with OpenStack installed from the Ubuntu cloud archive. The upgrade 
> process to Liberty involved upgrading the OpenStack components and their 
> dependencies (including libvirt), then afterward upgrading all remaining 
> packages via dist-upgrade (and staging a kernel upgrade from 3.13 to 4.4, to 
> take effect on the next reboot). 7 clusters had all been upgraded 
> successfully using this strategy.
> 
> One cluster, however, decided to get a bit weird. After the upgrade, 4 
> hypervisors showed that nova-compute was refusing to come up properly and was 
> showing as enabled/down in nova service-list. Upon further investigation, 
> nova-compute was starting up but was getting jammed on loading nwfilters. 
> When I ran "virsh nwfilter-list", the command stalled indefinitely. Killing 
> nova-compute and restarting libvirt-bin service allowed the command to work 
> again, but it did not list any of the nova-instance-instance-* nwfilters. 
> Once nova-compute was started, it tried to start loading the 
> instance-specific filters and libvirt would wedge. I spent a while tinkering 
> with the affected systems but could not find any way of correcting the issue 
> other than rebooting the hypervisor, after which everything was fine.
> 
> Has anyone ever seen anything like this? libvirt was upgraded from 1.2.12 to 
> 1.2.16. Hundreds of hypervisors had already received this exact same upgrade 
> without showing this problem, and I have no idea how I could reproduce it. 
> I'm interested to hear if anyone else has ever run into this and if they 
> figured out what the root cause was, though I've already braced myself for 
> tumbleweeds.
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] What would you like in Pike?

2017-01-16 Thread Mohammed Naser

> On Jan 16, 2017, at 7:57 AM, Yaguang Tang  wrote:
> 
> I'd like to see this feature  "Attach a single volume to multiple instances" 
> https://blueprints.launchpad.net/nova/+spec/multi-attach-volume
> to be implemented on the Nova side.
> 
> This feature has been working for more than two years, but hasn't been 
> accepted by upstream…

I would second this; we have users who have asked for this for quite
some time.  99% of the work is there (at least on the Cinder side), so it's just
a matter of integrating it on the Nova side of things.

> 
> On Sun, Jan 15, 2017 at 2:53 PM, Joshua Harlow  > wrote:
> I'll add a couple:
> 
> Cascading deletes,
> 
> Ie when a tenant/project/user is removed from keystone there should be 
> someway to say deny that request if that tenant/project/user has active 
> resources or there should be a away to cascade that delete through the rest 
> of those resources (so they are deleted also).
> 
> Orphans (not the annie kind),
> 
> Pretty sure the osops-tools-generic repo contains a bunch of scripts around 
> orphaned items cleanup; this seems *similar* to the above and it feels these 
> should be like umm fixed (or those scripts should be deleted if it's not an
> issue anymore)?
> 
> $ find . | grep -i "orphan"
> ./libvirt/cleanup-orphaned-vms.sh
> ./libvirt/remove-deleted-orphans.sh
> ./neutron/delete_orphan_floatingips.py
> ./neutron/listorphans.py
> ./nova/orphaned_vms.sh
> ./ansible/playbooks/orphaned-vm-clenaup.yaml
> ./ansible/tasks/orphaned-vms.yaml
> ./cinder/orphaned_volumes.sh
> 
> Same with https://github.com/openstack/ospurge 
>  (which seems like a specific project 
> to try to clean this mess up, sorta funny/sad? that it has to exist in the 
> first place).
> 
> Just search google for 'openstack orphan cleanup' and you'll find more 
> scripts and code that people have been writing...
> 
> -Josh
> 
> Melvin Hillsman wrote:
> Hey everyone,
> 
> I am hoping to get a dialogue started to gain some insight around things
> Operators, Application Developers, and End Users would like to see
> happen in Pike. If you had a dedicated environment, dedicated team, and
> freedom to choose how you deployed, new features, older features,
> enhancements, etc, and were not required to deal with customer/client
> tickets, calls, and maintenances, could keep a good feedback loop
> between your team and the upstream community of any project, what would
> like to make happen or work on hoping the next release of OpenStack
> had/included/changed/enhanced/removed…?
> 
> Kind regards,
> 
> --
> 
> *Melvin Hillsman*
> 
> Ops Technical Lead
> 
> OpenStack Innovation Center
> 
> mrhills...@gmail.com
> 
> phone: (210) 312-1267 
> 
> mobile: (210) 413-1659 
> 
> Learner | Ideation | Belief | Responsibility | Command
> 
> http://osic.org
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
> 
> 
> 
> -- 
> Tang Yaguang
> 
> 
>  
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] [placement] Which service is using port 8778?

2017-01-10 Thread Mohammed Naser
We use virtual hosts: haproxy runs on our VIP at port 80 and port 443 (SSL),
with keepalived to make sure it's always running, and we use `use_backend` rules
to send traffic to the appropriate backend. More information here:

http://blog.haproxy.com/2015/01/26/web-application-name-to-backend-mapping-in-haproxy/

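The haproxy side of this is roughly as follows (a simplified sketch of the
map-based approach from that post; paths, certs, and backend names are
illustrative):

  frontend public-api
      bind :443 ssl crt /etc/haproxy/certs/
      use_backend %[req.hdr(host),lower,map(/etc/haproxy/backends.map,be_default)]

  # /etc/haproxy/backends.map
  # compute-ca-ymq-1.vexxhost.net    be_nova_api
  # network-ca-ymq-1.vexxhost.net    be_neutron_api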

It makes our catalog nice and neat. We have a <service>-<region>.vexxhost.net
internal naming convention, so the catalog looks clean and the API calls don't
get blocked by firewalls (the strange ports might be blocked on some
customer-side firewalls).

+----------------------------------+----------+--------------+-----------------+---------+-----------+-------------------------------------------------------------------+
| ID                               | Region   | Service Name | Service Type    | Enabled | Interface | URL                                                               |
+----------------------------------+----------+--------------+-----------------+---------+-----------+-------------------------------------------------------------------+
| 01fdd8e07ca74c9daf80a8b66dcc8bf6 | ca-ymq-1 | cinderv2     | volumev2        | True    | internal  | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
| 09b4a971659643528875f70d93ef6846 | ca-ymq-1 | manila       | share           | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s       |
| 203fd4e466b44569aa9ab8c78ef55bad | ca-ymq-1 | heat         | orchestration   | True    | admin     | https://orchestration-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s      |
| 20b24181722b49a3983d17d42147a22c | ca-ymq-1 | swift        | object-store    | True    | admin     | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s     |
| 2f582f99db974766af7548dda56c3b50 | ca-ymq-1 | nova         | compute         | True    | internal  | https://compute-ca-ymq-1.vexxhost.net/v2/$(tenant_id)s            |
| 37860b492dd947daa738f461b9084d2a | ca-ymq-1 | neutron      | network         | True    | admin     | https://network-ca-ymq-1.vexxhost.net                             |
| 4d38fa91197e4712a2f2d3f89fcd7dad | ca-ymq-1 | nova         | compute         | True    | public    | https://compute-ca-ymq-1.vexxhost.net/v2/$(tenant_id)s            |
| 58894a7156b848d3baa0382ed465f3c2 | ca-ymq-1 | manilav2     | sharev2         | True    | internal  | https://file-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s       |
| 5ebc8fa90c3c46d69d3fa8a03688e452 | ca-ymq-1 | manila       | share           | True    | public    | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s       |
| 769a4de22d864c3bb2beefe775e3cb9f | ca-ymq-1 | manila       | share           | True    | admin     | https://file-storage-ca-ymq-1.vexxhost.net/v1/%(tenant_id)s       |
| 79fa33ff42ec45118ae8b36789fcb8ae | ca-ymq-1 | swift        | object-store    | True    | public    | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s     |
| 7a095734e4984cc7b8ac581aa6131f23 | ca-ymq-1 | neutron      | network         | True    | public    | https://network-ca-ymq-1.vexxhost.net                             |
| 7f8b519dfb494cef811b164f5eed0360 | ca-ymq-1 | sahara       | data-processing | True    | internal  | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s  |
| 8842c03d2c51449ebf9ff36778cf17c1 | ca-ymq-1 | glance       | image           | True    | public    | https://image-ca-ymq-1.vexxhost.net                               |
| 8df18f47fcdc4c348d521d4724a5b7ac | ca-ymq-1 | keystone     | identity        | True    | admin     | https://identity-ca-ymq-1.vexxhost.net/v2.0                       |
| 96357df3d6694477b0ad17fef6091210 | ca-ymq-1 | neutron      | network         | True    | internal  | https://network-ca-ymq-1.vexxhost.net                             |
| a25efaf48347441a8d36ce302f31d527 | ca-ymq-1 | cinderv2     | volumev2        | True    | public    | https://block-storage-ca-ymq-1.vexxhost.net/v2/%(tenant_id)s      |
| b073b767f10d44f895d9d14fbc3e3d6b | ca-ymq-1 | swift        | object-store    | True    | internal  | https://object-storage-ca-ymq-1.vexxhost.net/v1/$(tenant_id)s     |
| b132fe7bcf98440f8e72a142df76292d | ca-ymq-1 | sahara       | data-processing | True    | admin     | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s  |
| b736338e3c94402a9b21b32b3d0bf1e5 | ca-ymq-1 | sahara       | data-processing | True    | public    | https://data-processing-ca-ymq-1.vexxhost.net/v1.1/%(tenant_id)s  |
| c0dd9f5f8db248b093d6735b167e1af6 | ca-ymq-1 | keystone     | identity        | True    | public    | https://auth.vexxhost.net/v2.0                                    |
| c8505f07c349413aa7cd61d42337af99 | ca-ymq-1 | keystone     | identity        | True    | internal  | https://auth.vexxhost.net/v2.0                                    |
| da3d087e0c724338ba12c9a1168ef80c | 

Re: [Openstack-operators] Ansible-driven management of Dell server BIOS and RAID config

2017-01-10 Thread Mohammed Naser
We’ve had a wonderful experience with Molecule and testing roles with it, I 
really think it’s one of the best tools out there.  It’s also useful for 
testing, `molecule converge` and you have a full dev env that you can 
reconverge till you get to the state you want.
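
The day-to-day loop is basically this (a sketch; run from inside the role):

  molecule converge   # spin up an instance and apply the role
  molecule converge   # re-run after edits until it does what you want
  molecule test       # full destroy/create/converge/verify/destroy cycle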

> On Jan 10, 2017, at 2:15 PM, John Dewey  wrote:
> 
> On a similar note, if you’re looking to test Ansible roles, have a look at 
> molecule.
> https://molecule.readthedocs.io 
> 
> On January 10, 2017 at 7:42:02 AM, Stig Telfer (stig.openst...@telfer.org 
> ) wrote:
> 
>> Hi All - 
>> 
>> We’ve just published the sources and a detailed writeup for some new tools 
>> for Ansible-driven management of Dell iDRAC BIOS and RAID configuration:
>> 
>> https://www.stackhpc.com/ansible-drac.html 
>> 
>> 
>> The code’s up on Github and Ansible Galaxy.
>> 
>> It should fit neatly into any infrastructure using OpenStack Ironic for 
>> infrastructure management (and Dell server hardware).
>> 
>> Share and enjoy,
>> Stig
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
>> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Analogs of EC2 dedicated instances & dedicated hosts?

2016-12-19 Thread Mohammed Naser

> On Dec 19, 2016, at 5:24 PM, Kimball, Conrad  
> wrote:
> 
> Hi All,
>  
> What mechanisms does OpenStack provide that would enable me to implement 
> behaviors analogous to AWS EC2 dedicated instances and dedicated hosts?
>  
> · Dedicated instances:  an OpenStack tenant can deploy VM instances 
> that are guaranteed to not share a compute host with any other tenant (for 
> example, as the tenant I want physical segregation of my compute).

I don’t think this type of thing exists yet (unless you’re talking bare-metal / 
Ironic).

> · Dedicated hosts: goes beyond dedicated instances, allowing an 
> OpenStack tenant to explicitly place only specific VM instances onto the same 
> compute host (for example, as the tenant I want to place VMs foo and bar onto 
> the same compute host to share a software license that is licensed per host).

http://docs.openstack.org/newton/config-reference/compute/schedulers.html#samehostfilter
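Assuming SameHostFilter is enabled in scheduler_default_filters, the tenant
passes a scheduler hint at boot; a rough sketch (image, flavor and instance
names are illustrative):

    $ nova boot --image <image> --flavor <flavor> licensed-vm-1
    $ nova boot --image <image> --flavor <flavor> \
          --hint same_host=<uuid-of-licensed-vm-1> licensed-vm-2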

>  
> Conrad Kimball
> Associate Technical Fellow
> Chief Architect, Enterprise Cloud Services
> Engineering, Operations & Technology / Information Technology / Core 
> Infrastructure Engineering
> conrad.kimb...@boeing.com 
> P.O. Box 3707, Mail Code 7M-TE
> Seattle, WA  98124-2207
> Bellevue 33-11 bldg, office 3A6-3.9
> Mobile:  425-591-7802
>  
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] sync power states and externally shut off VMs

2016-11-16 Thread Mohammed Naser
Typically, you should not be managing your VMs with virsh. After a power
outage, I would recommend sending a start API call to the instances housed on
that specific hypervisor.
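Something along these lines (the hypervisor name is illustrative):

    $ nova list --all-tenants --host compute01   # find instances on the affected node
    $ nova start <instance-uuid>                 # nova boots the domain and re-syncs power state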

Sent from my iPhone

> On Nov 16, 2016, at 4:26 PM, Gustavo Randich  
> wrote:
> 
> When a VM is shutdown without using nova API (kvm process down, libvirt 
> failed to start instance on host boot, etc.), Openstack "freezes" the 
> shutdown power state in the DB, and then re-applies it if the VM is not 
> started via API, e.g.:
> 
> # virsh shutdown 
> 
> [ sync power states -> stop instance via API ], because hypervisor rules 
> ("power_state is always updated from hypervisor to db")
> 
> # virsh startup 
> 
> [ sync power states -> stop instance via API ], because database rules
> 
> 
> I understand this behaviour is "by design", but I'm confused about the 
> asymmetry: if VM is shutdown without using nova API, should I not be able to 
> start it up again without nova API?
> 
> This is a common scenario in power outages or failures external to Openstack, 
> when VMs fail to start and we need to start them up again using virsh.
> 
> Thanks!
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [publicClouds-wg] Public Cloud Working Group

2016-11-15 Thread Mohammed Naser
This sounds very interesting, put us down! :)

On Tue, Nov 15, 2016 at 5:43 AM, matt Jarvis <m...@mattjarvis.org.uk> wrote:
> So after input from a variety of sources over the last few weeks, I'd very
> much like to try and put together a public cloud working group, with the
> very high level goal of representing the interests of the public cloud
> provider community around the globe.
>
> I'd like to propose that, in line with the new process for creation of
> working groups, we set up some initial IRC meetings for all interested
> parties. The goals for these initial meetings would be :
>
> 1. Define the overall scope and mission statement for the working group
> 2. Set out the constituency for the group - communication methods,
> definitions of public clouds, meeting schedules, chairs etc.
> 3. Identify areas of interest - eg. technical issues, collaboration
> opportunities
>
> Before I go ahead and schedule first meetings, I'd very much like to gather
> input from the community on interested parties, and if this seems like a
> reasonable first step forward. My thought initially was to schedule a first
> meeting on #openstack-operators and then decide on best timings, locations
> and communication methods from there, but again I'd welcome input.
>
> At this point it would seem to me that one of the key metrics for this
> working group to be successful is participation as widely as possible within
> the public cloud provider community, currently approximately 21 companies
> globally according to https://www.openstack.org/marketplace/public-clouds/.
> If we could get representation from all of those companies in any potential
> working group, then that would clearly be the best outcome, although that
> may be optimistic ! As has become clear at recent Ops events, it may be that
> not all of those companies are represented on these lists, so I'd welcome
> any input on the best way to reach out to those folks.
>
> Matt
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ML2/OVS odd GRE brokenness

2016-11-09 Thread Mohammed Naser
Have you tried turning off all agents, shutting off all VMs on that node,
destroying br-int and br-tun, then starting the neutron agents and trying again?
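Concretely, something like the sketch below — note this is disruptive, since
instances on the node lose connectivity until the agent rebuilds the flows
(service names are the Ubuntu 14.04 ones; adjust for your distro):

    $ service neutron-plugin-openvswitch-agent stop
    $ ovs-vsctl del-br br-int
    $ ovs-vsctl del-br br-tun
    $ service neutron-plugin-openvswitch-agent start   # agent recreates bridges and flows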

Sent from my iPhone

> On Nov 9, 2016, at 9:51 AM, Jonathan D. Proulx  wrote:
> 
> 
> Also I have restarted openvswitch-agent on both sides of the broken
> link and it made no difference...
> 
> On Tue, Nov 08, 2016 at 05:43:27PM -0500, Jonathan Proulx wrote:
> :
> :I have an odd issue that seems to just be affecting one private
> :network for one tenant, though I saw a similar thing on a different
> :project network recently which I 'fixed' by rebooting the hypervisor.
> :Since this has now (maybe) happened twice I figure I should try to
> :understand what it is.
> :
> :Given the following four VMs on 4 different hypervisors
> :
> :vm1 on Hypervisor1
> :vm2 on Hypervisor2
> :vm3 on Hypervisor3
> :---
> :vm4 on Hypervisor4
> :
> :
> :vm1 -> vm3 talk fine among themselves but none to 4
> :
> :examining ping traffic transiting from vm1-vm4 I can see arp requests
> :and responses at vm4 and GRE encapsulated ARP responses on
> :Hypervisor1's physical interface.
> :
> :They look the same to me (same ecap id) coming in as the working vms
> :traffic, but they never make it to the qvo device which is before
> :iptables sec_group rules are applied at the tap device.
> :
> :attempting to tear down and recreate this results in the same split: first 3
> :work, the last one doesn't (possibly because the scheduler puts them in
> :the same place? haven't checked)
> :
> :ovs-vsctl -- set Bridge br-int mirrors=@m  -- --id=@snooper2 get Port 
> snooper2  -- --id=@gre-801e0347 get Port gre-801e0347 -- --id=@m create 
> Mirror name=mymirror select-dst-port=@gre-801e0347 
> select-src-port=@gre-801e0347 output-port=@snooper2
> :
> :tcpdump -i snooper2 
> :
> :Only sees ARP requests but no response, what's broken if I can see GRE
> :encap ARP responses on physical interface but not on gre-
> :interface?  And why is it not broken for all tunnels endpoints?
> :
> :Oddly if I boot a 5th VM on a 5th hypervisor it can talk to 4 but not 1-3 ...
> :
> :hypervisors are Ubuntu 14.04 running Mitaka from cloud archive w/
> :xenial-lts kernels (4.4.0)
> :
> :-Jon
> :
> :-- 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] _member_ role clarification

2016-11-09 Thread Mohammed Naser
http://www.gossamer-threads.com/lists/openstack/dev/38640

This should be of interest to you.
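As a quick way to audit what still uses the role before removing it (a sketch,
assuming a Mitaka-era openstackclient):

    $ openstack role list
    $ openstack role assignment list --role _member_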

Sent from my iPhone

> On Nov 9, 2016, at 6:55 AM, Justin Cattle  wrote:
> 
> Hi,
> 
> 
> Apologies if this questions has been answered already, or is in some doc 
> somewhere. Please point me in the right direction of so.
> 
> 
> I'm upgrading openstack from Juno to Mitaka, in steps.  We roll our own
> openstack using puppet, and have been using identity v3 in Juno with domains
> via an ldap backend.
> 
> The upgrade process is largely created, tested and working, but not rolled 
> out across our production sites yet.
> 
> However, I notice when I create a new cloud using openstack Mitaka from 
> scratch, not upgraded from Juno, the _member_ role is no longer created 
> automatically when users are assigned to projects [ tenants in old money ].  
> I'm pretty sure this was happening in Juno, and the Juno docs seem to confirm 
> it.
> I believe horizon at least was using this role to allow users access.
> 
> I've noticed this because we have scripts to automate some user/group stuff, 
> and the have some usage of the _member_ role hard coded atm.  They are 
> failing, as the role doesn't exist on non-upgraded clouds :)
> 
> 
> So I would like some advice/clarification on what the situation is.
> 
> What else, if anything, was the _member_ role used for? heat maybe?
> 
> Is the _member_ role no longer required at all, not even by horizon?
> 
> If it's no longer required, is it safe or desirable to remove the _member_ 
> role from upgraded clouds?
> 
> 
> 
> 
> 
> Cheers,
> Just
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] old Juno documentation

2016-11-07 Thread Mohammed Naser
As a small workaround, I'd suggest the following:

https://github.com/openstack/openstack-manuals

There are branches for the maintained codebases and tags for the EOL'd
ones, for example, you can find all EOL'd Juno docs here:

https://github.com/openstack/openstack-manuals/tree/juno-eol

Not great, but probably a lot better than nothing. You could probably
build the docs locally as well.
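Roughly like this (a sketch — the tox targets available on the EOL'd tags may
differ from what the current branches ship):

    $ git clone https://github.com/openstack/openstack-manuals
    $ cd openstack-manuals
    $ git checkout juno-eol
    $ tox -e checkbuild   # builds the guides into publish-docs/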

Good luck!
Mohammed

On Mon, Nov 7, 2016 at 5:49 PM, Mathieu Gagné <mga...@calavera.ca> wrote:
> I find it to be an inconvenient. (for the one times I read the docs)
>
> To compare, I found that some projets (like the ones on readthedocs)
> often keep the previous versions so one can refer to it. You can
> easily see all the available versions and switch to the one you need.
> That's very convenient and I very much appreciate the effort made by
> those projects. It's a good experience I would like to see in
> OpenStack.
> --
> Mathieu
>
>
> On Mon, Nov 7, 2016 at 5:38 PM, Kris G. Lindgren <klindg...@godaddy.com> 
> wrote:
>> I don’t have an answer for you, however I have noticed this EXACT same thing
>> happening with API documentation.  The documentation gets replaced with
>> latest version and it’s impossible to point people to the documentation for
>> the older version of the api’s that we are actually running.
>>
>>
>>
>> IE: http://developer.openstack.org/api-ref/compute/ != valid for Liberty and
>> I have no way of specify a micro version or even getting v2 api
>> documentation (the v2 link points back to this link).
>>
>>
>>
>> ___
>>
>> Kris Lindgren
>>
>> Senior Linux Systems Engineer
>>
>> GoDaddy
>>
>>
>>
>> From: Cristina Aiftimiei <cai...@gmail.com>
>> Date: Monday, November 7, 2016 at 3:16 PM
>> To: "openstack-operators@lists.openstack.org"
>> <openstack-operators@lists.openstack.org>
>> Subject: [Openstack-operators] old Juno documentation
>>
>>
>>
>> Dear all,
>>
>> in the last days I'm looking around for an old link that I had on how to
>> configure "Provider networks with Open vSwitch"in Juno release.
>>
>> I had the link
>> http://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html
>> that now give s a nice "404"
>>
>> Going to the Juno documentation - http://docs.openstack.org/juno/ - one can
>> clearly see that under the "Networking Guide" there is a link pointing to the
>> "wrong" release - http://docs.openstack.org/kilo/networking-guide/, from
>> where one can reach the:
>> http://docs.openstack.org/kilo/networking-guide/scenario_provider_ovs.html
>>
>> None of the documents under the " Operations and Administration Guides "
>> point anymore to the "juno" version.
>>
>> As we have still a/some testbed with the Juno version I would like to ask
>> you if you know of a place where "obsoleted" documentation is moved. As
>> there are still the installation guides for this version I would expect that
>> at least a trace of the Operation & Administration ones is kept.
>> At least as vintage collection I would like to be able to read it once more
>> ...
>>
>> Thank you very much for any information that you can provide,
>>
>> Cristina
>>
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-04 Thread Mohammed Naser
Just a bit of feedback on our side:

We have been using Puppet 4 for quite some time from the Puppetlabs repos
without any issues. The OpenStack modules all work fine for us, and I haven't
run into any major Puppet 4 compatibility problems with other modules either.

Sent from my iPhone

> On Nov 3, 2016, at 11:58 AM, Alex Schultz  wrote:
> 
> Hey everyone,
> 
> Puppet 3 is reaching it's end of life at the end of this year[0].
> Because of this we are planning on dropping official puppet 3 support
> as part of the Ocata cycle.  While we currently are not planning on
> doing any large scale conversion of code over to puppet 4 only syntax,
> we may allow some minor things in that could break backwards
> compatibility.  Based on feedback we've received, it seems that most
> people who may still be using puppet 3 are using older (< Newton)
> versions of the modules.  These modules will continue to be puppet 3.x
> compatible but we're using Ocata as the version where Puppet 4 should
> be the target version.
> 
> If anyone has any concerns or issues around this, please let us know.
> 
> Thanks,
> -Alex
> 
> [0] https://puppet.com/misc/puppet-enterprise-lifecycle
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Live-migration CPU doesn't have compatibility

2016-10-27 Thread Mohammed Naser
Depending on your workload, it will.  If your VMs depend on any newer CPU
extensions, they will miss out on them and performance will decrease.  My
personal suggestion is to read the docs for cpu_model and use the "smallest
common denominator" of the CPU models across your hosts.

On Thu, Oct 27, 2016 at 11:31 AM, William Josefsson
<william.josef...@gmail.com> wrote:
> On Thu, Oct 27, 2016 at 5:20 PM, Chris Friesen
> <chris.frie...@windriver.com> wrote:
>> In your case you probably want to set both computes to have:
>>
>> [libvirt]
>> cpu_mode = custom
>> cpu_model = Haswell
>>
>
> Hi Chris, thanks!  Yerps, I finally got it working. However, I set
> cpu_model=kvm64 everywhere and it seems to work. It is listed here:
> https://wiki.openstack.org/wiki/LibvirtXMLCPUModel  hopefully 'kvm64'
> has no performance impact what cpu_model is set to, or would 'kvm64'
> as model negatively affect my VMs? thx will
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Live-migration CPU doesn't have compatibility

2016-10-26 Thread Mohammed Naser
The VMs have to be restarted so that the libvirt config is updated
with the new CPU model.
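i.e. something along these lines (a sketch for a CentOS 7 host like yours):

    $ systemctl restart openstack-nova-compute   # pick up the new cpu_mode/cpu_model
    $ nova reboot --hard <instance-uuid>         # a hard reboot recreates the libvirt domain XML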

Good luck!

On Wed, Oct 26, 2016 at 8:07 AM, William Josefsson
<william.josef...@gmail.com> wrote:
> Hi list,
>
> I'm facing issues on Liberty/CentOS7 doing live migrations between to
> hosts. The hosts are Haswell and Broadwell. However, there is not
> feature specific running on my VMs
>
> Haswell -> Broadwell works
> Broadwell -> Haswell fails with the error below.
>
>
> I have on both hosts configured
> [libvirt]
> cpu_mode=none
>
> and restarted openstack-nova-compute on hosts, however that didn't
> help, with the same error. there gotta be a way of ignoring this
> check? pls advice. thx will
>
>
>
> 2016-10-26 19:36:29.025 1438627 INFO nova.virt.libvirt.driver
> [req-] Instance launched has CPU info: {"vendor": "Intel",
> "model": "Broadwell", "arch": "x86_64", "features": ["smap", "avx",
> "clflush", "sep", "rtm", "vme", "dtes64", "invpcid", "tsc",
> "fsgsbase", "xsave", "pge", "vmx", "erms", "xtpr", "cmov", "hle",
> "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr",
> "adx", "3dnowprefetch", "nx", "fxsr", "syscall", "tm", "sse4.1",
> "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx",
> "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm",
> "rdseed", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse",
> "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss",
> "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr",
> "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10,
> "cells": 2, "threads": 2, "sockets": 1}}
> 2016-10-26 19:36:29.028 1438627 ERROR nova.virt.libvirt.driver
> [req-] CPU doesn't have compatibility.
>
> 0
>
> Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult
> 2016-10-26 19:36:29.057 1438627 ERROR oslo_messaging.rpc.dispatcher
> [req-] Exception during message handling: Unacceptable CPU info:
> CPU doesn't have compatibility.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron Allow tenants to select Fixed VM IPs

2016-09-27 Thread Mohammed Naser
Late to the party, but better late than never.

https://github.com/openstack/neutron/blob/master/etc/policy.json#L90
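That rule controls who may set fixed IPs on a port. Once a tenant is allowed,
the workflow Lubosz describes below looks roughly like this (network name and
address are illustrative):

    $ neutron port-create <network-name> --fixed-ip ip_address=10.0.0.50
    $ nova boot --image <image> --flavor <flavor> \
          --nic port-id=<port-uuid> myvm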

On Sun, Aug 28, 2016 at 8:29 AM, William Josefsson
 wrote:
> yes thanks Kyle, I checked the neutron db and noticed there's a table called
> ipallocations which seems to be link between port and the assigned VM IP
> address. So even in the event I need to replace a crashed Neutron Networking
> node, hopefully the IP addresses will be retrieved from the DB. thx will
>
> On Sun, Aug 28, 2016 at 12:26 AM, Kyle Greenwell  wrote:
>>
>> If I'm not mistaken, the ip is assigned to the neutron port, and that is
>> assigned to the vm. As long as the instance doesn't get blown away and
>> rebuilt with a different neutron port you won't be at risk of losing the ip.
>>
>> Sent from my iPhone
>>
>> On Aug 27, 2016, at 3:23 AM, William Josefsson
>>  wrote:
>>
>> thanks Lubosz! I should explore further if floating IPs will help here.
>>
>> Can you also please clarify if VM IP is stored in the DB the IPs that have
>> been assigned to a VM so there should be no risk of the dhcp giving it a new
>> IP suddenly? That would cause a nightmare with real tenants soon coming
>> onboard my deploy. I know there are some dnsmasq dhcp files in
>> /var/lib/neutron/ on my Networking nodes, but what if I replace one of them
>> with a fresh install, will the necessary vm details such as IP etc. be
>> retrieved from the DB? thx will
>>
>> On Sat, Aug 27, 2016 at 12:17 AM, Kosnik, Lubosz 
>> wrote:
>>>
>>> Lubosz is my first name not Kosnik :P
>>> You can create a VM from Horizon and only specify the floating IP to be
>>> exactly that one. With private networks it’s not available from Horizon.
>>> About getting every time the next IP it’s normal thing. After getting the
>>> roof for that specified IP range it will start looking for free IPs from the
>>> beginning of the range.
>>>
>>> Cheers,
>>> Lubosz Kosnik
>>> Cloud Software Engineer OSIC
>>> lubosz.kos...@intel.com
>>>
>>> On Aug 26, 2016, at 11:06 AM, William Josefsson
>>>  wrote:
>>>
>>> Hi Kosnik. thanks. Is there any way in the GUI for the user to do that,
>>> or they need to do cli 'neutron port-create ...' ?
>>> Maybe I can pre-create the fixed IPs as admin, but how do a standard
>>> tenant user select the Ports just created ..  just as they select the
>>> Networks/Subnets during 'Launch an instance'?
>>>
>>> I notice while provisioning that the IP number increments all the time,
>>> even if previous instances with lower IPs are deleted. What will happen
>>> eventually when I reach the last IP, will the lower number IPs be reused, or
>>> what would the behavior be? thx will
>>>
>>>
>>>
>>> On Thu, Aug 25, 2016 at 10:58 PM, Kosnik, Lubosz
>>>  wrote:

 VM always will get the same IP from DHCP server. To prepare the VM with
 fixed IP you need using neutron create a port in specified network with
 specified IP and after that to boot new VM you’re specifying not net-id but
 port-id and it’s gonna work.

 Cheers,
 Lubosz Kosnik
 Cloud Software Engineer OSIC
 lubosz.kos...@intel.com

 On Aug 25, 2016, at 9:01 AM, William Josefsson
  wrote:

 Hi,

 I wonder if there's any way of allowing my users to select fixed IPs for
 the VMs? I do shared Provider networks, VLAN on Liberty/CentOS.

 I know nova boot from the CLI or API has v4-fixed-ip=ip-addr option,
 however is there any way in the Dashboard where the User can select static
 IP?

 I would also appreciate if anyone can explain the default dnsmasq dhcpd
 lease. Will a VM always get the same IP during it's life time, or it may
 change? thx will
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


>>>
>>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How to configure keystone to use SSL

2016-09-22 Thread Mohammed Naser
I'm fairly sure the parameters under [ssl] are only for the deprecated
eventlet server. You'll need to add your SSL configuration to the Apache
VirtualHost in order to serve Keystone over SSL.
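A rough sketch of the vhost change, reusing the certificates from your
keystone-manage ssl_setup run (the file name and port are packstack defaults
and may differ on your install):

    # e.g. /etc/httpd/conf.d/10-keystone_wsgi_main.conf
    <VirtualHost *:5000>
        SSLEngine on
        SSLCertificateFile    /etc/keystone/ssl/certs/keystone.pem
        SSLCertificateKeyFile /etc/keystone/ssl/private/keystonekey.pem
        # ... existing WSGI directives stay as they are ...
    </VirtualHost>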

Good luck!

On Wed, Sep 21, 2016 at 11:14 PM, zhangjian
 wrote:
> Hi, all
>
>
> I have a mitaka environment created by packstack, and i tried to configure
> the keystone to use ssl, but failed, can anyone help me?
> # keystone is a wsgi service now.
>
>
> Configure steps are as following:
> ===
> # keystone-manage ssl_setup --keystone-user keystone --keystone-group
> keystone
> # chown -R keystone:keystone /etc/keystone/ssl
> # keystone endpoint-create --service keystone --region RegionOne --publicurl
> https://{FQDN}:5000/v2.0 --internalurl https://{FQDN}:5000/v2.0 --adminurl
> https://{FQDN}:35357/v2.0
> # cat /etc/keystone/keystone.conf
>   ... ...
>   [ssl]
>   enable=True
>   certfile = /etc/keystone/ssl/certs/keystone.pem
>   keyfile = /etc/keystone/ssl/private/keystonekey.pem
>   ca_certs = /etc/keystone/ssl/certs/ca.pem
>   ca_key = /etc/keystone/ssl/private/cakey.pem
>
> # cat keystonerc_admin
> ... ...
> export OS_AUTH_URL=https://FQDN:5000/v2.0
>
>
> # keystone endpoint-delete Old_Endpoint_For_Keystone
> Unable to delete endpoint.
>
>
> # systemctl restart httpd
> # source keystonerc_admin
>
> # openstack project list
> Discovering versions from the identity service failed when creating the
> password plugin. Attempting to determine version from URL.
> SSL exception connecting to https://FQDN:5000/v2.0/tokens: [SSL:
> UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:765)
> ===
>
> Regards,
> Kenn
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] libvirt: make snapshot use RBD snapshot/clone when available

2016-09-20 Thread Mohammed Naser
I'm fairly certain Red Hat OSP has this backported in their Liberty release too.

On Mon, Sep 5, 2016 at 4:12 PM, Saverio Proto <ziopr...@gmail.com> wrote:
> Hello there,
>
> I rebuilt my ubuntu packages and I am testing the patch on Liberty.
>
> So far so good.
>
> If anyone else wants this change officially in Ubuntu liberty please
> comment here:
> https://code.launchpad.net/~zioproto/ubuntu/+source/nova/+git/nova/+merge/304952
>
> Saverio
>
>
> 2016-09-03 1:09 GMT+02:00 Logan V. <lo...@protiumit.com>:
>> Have been using it since kilo. It was a clean pick into Liberty and worked
>> great for me.
>>
>> Logan
>>
>>
>> On Friday, September 2, 2016, Saverio Proto <ziopr...@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> this is merged upstream in Mitaka:
>>>
>>> https://review.openstack.org/#/c/287954/
>>>
>>> anyone already did a cherry pick in Liberty and is running this in
>>> production ?
>>> I plan to test this soon. Any feedback is appreciated !
>>>
>>> thank you
>>>
>>> Saverio
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Converged infrastructure

2016-09-01 Thread Mohammed Naser
I proposed a talk for the Summit which unfortunately did not make it.

We overprovision compute nodes with enough memory for Ceph to run, and we
isolate a number of cores dedicated to the OSD processes.  That way, there is
no competition between VMs and OSDs.  We run all-SSD and it has been quite
successful for us.
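The nova.conf side of that looks roughly like this (values are illustrative —
size them to your OSD count and memory footprint):

    [DEFAULT]
    vcpu_pin_set = 4-31              # leave cores 0-3 to the Ceph OSDs
    reserved_host_memory_mb = 16384  # memory nova will not hand out to VMs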

On Thu, Sep 1, 2016 at 12:37 AM, Blair Bethwaite
<blair.bethwa...@gmail.com> wrote:
> Following on from Edmund's issues... People talking about doing this
> typically seem to cite cgroups as the way to avoid CPU and memory
> related contention - has anyone been successful in e.g. setting up
> cgroups on a nova qemu+kvm hypervisor to limit how much of the machine
> nova uses?
>
> On 1 September 2016 at 04:15, Edmund Rhudy (BLOOMBERG/ 120 PARK)
> <erh...@bloomberg.net> wrote:
>> We currently run converged at Bloomberg with Ceph (all SSD) and I strongly
>> dislike it. OSDs and VMs battle for CPU time and memory, VMs steal memory
>> that would go to the HV pagecache, and it puts a real dent in any plans to
>> be able to deploy hypervisors (mostly) statelessly. Ceph on our largest
>> compute cluster spews an endless litany of deep-scrub-related HEALTH_WARNs
>> because of memory steal from the VMs depleting available pagecache memory.
>> We're going to increase the OS memory reservation in nova.conf to try to
>> alleviate some of the worst of the memory steal, but it's been one hack
>> after another to keep it going. I hope to be able to re-architect our design
>> at some point to de-converge Ceph from the compute nodes so that the two
>> sides can evolve separately once more.
>>
>> From: matt.jar...@datacentred.co.uk
>> Subject: Re:[Openstack-operators] Converged infrastructure
>>
>> Time once again to dredge this topic up and see what the wider operators
>> community thinks this time :) There were a fair amount of summit submissions
>> for Barcelona talking about converged and hyper-converged infrastructure, it
>> seems to be the topic de jour from vendors at the minute despite feeling
>> like we've been round this before with Nebula, Piston Cloud etc.
>>
>> Like a lot of others we run Ceph, and we absolutely don't converge our
>> storage and compute nodes for a variety of performance and management
>> related reasons. In our experience, the hardware and tuning characteristics
>> of both types of nodes are pretty different, in any kind of recovery
>> scenarios Ceph eats memory, and it feels like creating a SPOF.
>>
>> Having said that, with pure SSD clusters becoming more common, some of those
>> issues may well be mitigated, so is anyone doing this in production now ? If
>> so, what does your hardware platform look like, and are there issues with
>> these kinds of architectures ?
>>
>> Matt
>>
>> DataCentred Limited registered in England and Wales no. 05611763
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>
> --
> Cheers,
> ~Blairo
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators