Re: [openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-12-13 Thread Zane Bitter

On 12/12/16 09:09, Steven Hardy wrote:

On Wed, Nov 30, 2016 at 01:54:34PM -0700, Alex Schultz wrote:

Hey folks,

So I'm in the process of evaluating options for implementing the
capture-environment-status-and-logs[0] blueprint.  At the moment my
current plan is to implement a mistral workflow to execute the
sosreport to bundle the status and logs up on the requested nodes.
I'm leveraging a similar concept to the remote execution[1] method
we currently expose via 'openstack overcloud execute'.  The issue I'm
currently running into is getting the files off the overcloud node(s)
so that they can be returned to the tripleoclient.  The files can be
large so I don't think they are something that can just be returned as
output from Heat.  So I wanted to ask for some input on the best path
forward.

IDEA 1: Write something (script or utility) to be executed via Heat on
the nodes to push the result files to a container on the undercloud.
Pros:
- The swift container can be used by the mistral workflow for other
actions as part of this bundling
- The tripleoclient will be able to just pull the result files
straight from swift
- No additional user access needs to be created to perform operations
against the overcloud from the undercloud
Cons:
- Swift credentials (or token) need to be passed to the script being
executed by Heat on the overcloud nodes which could lead to undercloud
credentials being leaked to the overcloud


I think we can just use a swift tempurl?  That's in alignment with what we
already do for polling metadata from heat (the metadata is put into swift,
then we give a tempurl to the nodes; see /etc/os-collect-config.conf on the
overcloud nodes).


+1, was about to say exactly the same thing.


It's also well aligned with what we do for the DeployArtifactURLs
interface.

I guess the main difference here is we're only allowing GET access for
those cases, but here there's probably more scope for abuse, e.g. POSTing
giant files from the overcloud nodes could impact disk space on the
undercloud?
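For reference, a PUT-only tempurl of the kind discussed above can be signed on the undercloud with nothing but the account's temp-url key, so no credentials ever land on the overcloud node. This is a minimal sketch of Swift's documented tempurl signature scheme; the key, account, container, and object names are all hypothetical:

```python
import hmac
import time
from hashlib import sha1

def make_put_tempurl(path, key, ttl=3600):
    """Sign a Swift tempurl allowing PUTs to `path` for `ttl` seconds.

    `path` is the object path (/v1/<account>/<container>/<object>) and
    `key` is the account's X-Account-Meta-Temp-URL-Key value.
    """
    expires = int(time.time()) + ttl
    # Swift's documented tempurl signature: HMAC-SHA1 over method, expiry, path.
    hmac_body = f'PUT\n{expires}\n{path}'
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return f'{path}?temp_url_sig={sig}&temp_url_expires={expires}'

# The node can then upload its sosreport without credentials, e.g.:
#   curl -X PUT --upload-file sosreport.tar.xz "https://undercloud:8080$URL"
url = make_put_tempurl('/v1/AUTH_abc/support-logs/controller-0.tar.xz',
                       'secret-temp-url-key')
```

Because the method is baked into the signature, a URL signed for PUT cannot be used to GET or list anything else in the account.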


- I'm not sure if all overcloud nodes would have access to the
undercloud swift endpoint


I think they will, or the tempurl transport we use for heat won't work.


IDEA 2: Write additional features into the undercloud deployment for ssh
key generation and inclusion in the deployment, specifically so this
functionality can reach into the nodes and pull files out
(via ssh).
Pros:
- We would be able to leverage these 'support' credentials for future
support features (day 2 operations?)
- ansible (or similar tooling) could be used to perform operations
against the overcloud from the undercloud nodes
Cons:
- Complexity and issues around additional user access
- Depending on where the ssh file transfer occurs (client vs mistral),
additional network access might be needed.
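As a sketch of what idea 2 might look like in practice, assuming a dedicated 'support' user and key had been deployed to the nodes (user name, key, and paths here are all hypothetical), ansible's fetch module could pull the bundles back to the undercloud:

```yaml
# Hypothetical: pull sosreport bundles from overcloud nodes over ssh.
# The 'support' user and the file locations are illustrative only.
- hosts: overcloud
  remote_user: support
  tasks:
    - name: Fetch the sosreport bundle from each node
      fetch:
        src: /var/tmp/sosreport.tar.xz
        dest: /var/lib/tripleo-support/{{ inventory_hostname }}/
        flat: yes
```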

IDEA 2a: Leverage the validations ssh key to pull files off of the
overcloud nodes
Pros:
- ssh keys already exist when enable_validations = true so we can
leverage the existing keys
Cons:
- Validations can be disabled, possibly preventing 'support' features
from working
- Probably should not leverage the same key for multiple functions.

I'm leaning towards idea 1, but wanted to see if there was some other
form of existing functionality I'm not aware of.


Yeah I think (1) is probably the way to go, although cases could be argued
for all the approaches you mention.

My main reason for preferring (1) is that I think we'll want the data to end
up in swift anyway, e.g. so UI users can access it (which won't be possible
if we scp some tarball from the overcloud nodes into the undercloud
filesystem directly), so we may as well just push it into swift from the
nodes?

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-12-12 Thread Steven Hardy
On Wed, Nov 30, 2016 at 01:54:34PM -0700, Alex Schultz wrote:
> Hey folks,
> 
> So I'm in the process of evaluating options for implementing the
> capture-environment-status-and-logs[0] blueprint.  At the moment my
> current plan is to implement a mistral workflow to execute the
> sosreport to bundle the status and logs up on the requested nodes.
> I'm leveraging a similar concept to the remote execution[1] method
> we currently expose via 'openstack overcloud execute'.  The issue I'm
> currently running into is getting the files off the overcloud node(s)
> so that they can be returned to the tripleoclient.  The files can be
> large so I don't think they are something that can just be returned as
> output from Heat.  So I wanted to ask for some input on the best path
> forward.
> 
> IDEA 1: Write something (script or utility) to be executed via Heat on
> the nodes to push the result files to a container on the undercloud.
> Pros:
> - The swift container can be used by the mistral workflow for other
> actions as part of this bundling
> - The tripleoclient will be able to just pull the result files
> straight from swift
> - No additional user access needs to be created to perform operations
> against the overcloud from the undercloud
> Cons:
> - Swift credentials (or token) need to be passed to the script being
> executed by Heat on the overcloud nodes which could lead to undercloud
> credentials being leaked to the overcloud

I think we can just use a swift tempurl?  That's in alignment with what we
already do for polling metadata from heat (the metadata is put into swift,
then we give a tempurl to the nodes; see /etc/os-collect-config.conf on the
overcloud nodes).

It's also well aligned with what we do for the DeployArtifactURLs
interface.

I guess the main difference here is we're only allowing GET access for
those cases, but here there's probably more scope for abuse, e.g. POSTing
giant files from the overcloud nodes could impact disk space on the
undercloud?

> - I'm not sure if all overcloud nodes would have access to the
> undercloud swift endpoint

I think they will, or the tempurl transport we use for heat won't work.

> IDEA 2: Write additional features into the undercloud deployment for ssh
> key generation and inclusion in the deployment, specifically so this
> functionality can reach into the nodes and pull files out
> (via ssh).
> Pros:
> - We would be able to leverage these 'support' credentials for future
> support features (day 2 operations?)
> - ansible (or similar tooling) could be used to perform operations
> against the overcloud from the undercloud nodes
> Cons:
> - Complexity and issues around additional user access
> - Depending on where the ssh file transfer occurs (client vs mistral),
> additional network access might be needed.
> 
> IDEA 2a: Leverage the validations ssh key to pull files off of the
> overcloud nodes
> Pros:
> - ssh keys already exist when enable_validations = true so we can
> leverage the existing keys
> Cons:
> - Validations can be disabled, possibly preventing 'support' features
> from working
> - Probably should not leverage the same key for multiple functions.
> 
> I'm leaning towards idea 1, but wanted to see if there was some other
> form of existing functionality I'm not aware of.

Yeah I think (1) is probably the way to go, although cases could be argued
for all the approaches you mention.

My main reason for preferring (1) is that I think we'll want the data to end
up in swift anyway, e.g. so UI users can access it (which won't be possible
if we scp some tarball from the overcloud nodes into the undercloud
filesystem directly), so we may as well just push it into swift from the
nodes?

Steve



Re: [openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-12-12 Thread Emilien Macchi
On Wed, Nov 30, 2016 at 3:54 PM, Alex Schultz wrote:
> Hey folks,
>
> So I'm in the process of evaluating options for implementing the
> capture-environment-status-and-logs[0] blueprint.  At the moment my
> current plan is to implement a mistral workflow to execute the
> sosreport to bundle the status and logs up on the requested nodes.
> I'm leveraging a similar concept to the remote execution[1] method
> we currently expose via 'openstack overcloud execute'.  The issue I'm
> currently running into is getting the files off the overcloud node(s)
> so that they can be returned to the tripleoclient.  The files can be
> large so I don't think they are something that can just be returned as
> output from Heat.  So I wanted to ask for some input on the best path
> forward.
>
> IDEA 1: Write something (script or utility) to be executed via Heat on
> the nodes to push the result files to a container on the undercloud.
> Pros:
> - The swift container can be used by the mistral workflow for other
> actions as part of this bundling
> - The tripleoclient will be able to just pull the result files
> straight from swift
> - No additional user access needs to be created to perform operations
> against the overcloud from the undercloud
> Cons:
> - Swift credentials (or token) need to be passed to the script being
> executed by Heat on the overcloud nodes which could lead to undercloud
> credentials being leaked to the overcloud
> - I'm not sure if all overcloud nodes would have access to the
> undercloud swift endpoint

I'm in favor of prototyping idea 1 and seeing how we can resolve the
issue with credentials. We could eventually create a special, dedicated
account for these containers?
I think this is the simplest solution for now; let's see how it could work.

> IDEA 2: Write additional features into the undercloud deployment for ssh
> key generation and inclusion in the deployment, specifically so this
> functionality can reach into the nodes and pull files out
> (via ssh).
> Pros:
> - We would be able to leverage these 'support' credentials for future
> support features (day 2 operations?)
> - ansible (or similar tooling) could be used to perform operations
> against the overcloud from the undercloud nodes
> Cons:
> - Complexity and issues around additional user access
> - Depending on where the ssh file transfer occurs (client vs mistral),
> additional network access might be needed.
>
> IDEA 2a: Leverage the validations ssh key to pull files off of the
> overcloud nodes
> Pros:
> - ssh keys already exist when enable_validations = true so we can
> leverage the existing keys
> Cons:
> - Validations can be disabled, possibly preventing 'support' features
> from working
> - Probably should not leverage the same key for multiple functions.
>
> I'm leaning towards idea 1, but wanted to see if there was some other
> form of existing functionality I'm not aware of.
>
> Thanks,
> -Alex
>
> [0] 
> https://blueprints.launchpad.net/tripleo/+spec/capture-environment-status-and-logs
> [1] https://blueprints.launchpad.net/tripleo/+spec/remote-execution
>



-- 
Emilien Macchi



[openstack-dev] [tripleo] Requesting files from the overcloud from the undercloud

2016-11-30 Thread Alex Schultz
Hey folks,

So I'm in the process of evaluating options for implementing the
capture-environment-status-and-logs[0] blueprint.  At the moment my
current plan is to implement a mistral workflow to execute the
sosreport to bundle the status and logs up on the requested nodes.
I'm leveraging a similar concept to the remote execution[1] method
we currently expose via 'openstack overcloud execute'.  The issue I'm
currently running into is getting the files off the overcloud node(s)
so that they can be returned to the tripleoclient.  The files can be
large so I don't think they are something that can just be returned as
output from Heat.  So I wanted to ask for some input on the best path
forward.
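The workflow half of the plan could be sketched roughly as follows. For brevity this sketch uses Mistral's generic std.ssh action rather than the Heat-based remote execution the blueprint describes, and the workflow name, inputs, and sosreport invocation are all hypothetical:

```yaml
---
version: '2.0'

tripleo.support.v1.collect_logs:
  description: Hypothetical sketch - run sosreport on a node over ssh.
  input:
    - host
    - username: heat-admin
  tasks:
    run_sosreport:
      action: std.ssh
      input:
        host: <% $.host %>
        username: <% $.username %>
        cmd: sosreport --batch --tmp-dir /var/tmp
```

A later task in the same workflow would then be responsible for getting the resulting tarball off the node, which is the open question below.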

IDEA 1: Write something (script or utility) to be executed via Heat on
the nodes to push the result files to a container on the undercloud.
Pros:
- The swift container can be used by the mistral workflow for other
actions as part of this bundling
- The tripleoclient will be able to just pull the result files
straight from swift
- No additional user access needs to be created to perform operations
against the overcloud from the undercloud
Cons:
- Swift credentials (or token) need to be passed to the script being
executed by Heat on the overcloud nodes which could lead to undercloud
credentials being leaked to the overcloud
- I'm not sure if all overcloud nodes would have access to the
undercloud swift endpoint

IDEA 2: Write additional features into the undercloud deployment for ssh
key generation and inclusion in the deployment, specifically so this
functionality can reach into the nodes and pull files out
(via ssh).
Pros:
- We would be able to leverage these 'support' credentials for future
support features (day 2 operations?)
- ansible (or similar tooling) could be used to perform operations
against the overcloud from the undercloud nodes
Cons:
- Complexity and issues around additional user access
- Depending on where the ssh file transfer occurs (client vs mistral),
additional network access might be needed.

IDEA 2a: Leverage the validations ssh key to pull files off of the
overcloud nodes
Pros:
- ssh keys already exist when enable_validations = true so we can
leverage the existing keys
Cons:
- Validations can be disabled, possibly preventing 'support' features
from working
- Probably should not leverage the same key for multiple functions.

I'm leaning towards idea 1, but wanted to see if there was some other
form of existing functionality I'm not aware of.

Thanks,
-Alex

[0] 
https://blueprints.launchpad.net/tripleo/+spec/capture-environment-status-and-logs
[1] https://blueprints.launchpad.net/tripleo/+spec/remote-execution
