[Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Saverio Proto
Hello there,

I need to delete some users and tenants from my public cloud. Before
deleting the users and tenants from keystone, I need to delete all the
resources in the tenants.

I am stuck listing the glance images uploaded to a specific tenant.
I cannot find a way: I always get either all the images in the
system, or just the ones owned by the active OS_TENANT_NAME.

openstack help image list
usage: openstack image list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN]
[--max-width ] [--noindent]
[--quote {all,minimal,none,nonnumeric}]
[--public | --private | --shared]
[--property 

[Openstack-operators] [tc][security] Proposal to change the CVE embargo window

2016-01-25 Thread John Dickinson
I'd like to lengthen the embargo window on CVE disclosures.

Currently, the process is this 
(https://security.openstack.org/vmt-process.html):

  1. A security bug is reported (and confirmed as valid)
  2. A patch is developed and reviewed
  3. After the proposed fix is approved by reviewers, a CVE is filed
  4. 3-5 business days later, the vulnerability is disclosed publicly and the 
patches are landed upstream

The problem as I see it is that the 3 to 5 day embargo is way too short. 
Specifically, for those supporting OpenStack projects in a product, the short 
embargo does not allow sufficient time for applying, testing, and staging the 
fix in time for the disclosure. This leaves end-users and deployers with the 
situation of having a publicly announced security vulnerability without any 
hope of having a fix.

I would like the embargo period to be lengthened to two weeks.

--John







Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Christopher Hull
Wow.   Thank you all for the response!

Well, I'm installing Kilo because I started this last August and have worked on
it gradually.   I should probably do Liberty.

Yes, I get timeouts between Nova and the "model service", which I assume is
Glance.


Message timeouts and recoveries for large images.   100GB CentOS server.

2015-12-20 18:19:33.759 3755 TRACE nova.servicegroup.drivers.db
MessagingTimeout: Timed out waiting for a reply to message ID
34fe85f35bf84908b516b8e79110f516.
2015-12-20 18:19:33.759 3755 TRACE nova.servicegroup.drivers.db
2015-12-20 18:19:33.895 3755 WARNING nova.openstack.common.loopingcall
[req-37a3f586-84de-4a1b-9257-1f968ec99273 - - - - -] task > run outlasted interval by 8.43 sec
2015-12-20 18:19:33.896 3755 INFO nova.scheduler.client.report
[req-9f0894b2-95f4-40f1-b9b0-83788d0e75d5 - - - - -] Compute_service record
updated for ('maersk.chrishull.com', 'maersk.chrishull.com')
2015-12-20 18:19:33.896 3755 INFO nova.compute.resource_tracker
[req-9f0894b2-95f4-40f1-b9b0-83788d0e75d5 - - - - -] Compute_service record
updated for maersk.chrishull.com:maersk.chrishull.com
2015-12-20 18:19:54.642 3755 ERROR nova.servicegroup.drivers.db
[req-37a3f586-84de-4a1b-9257-1f968ec99273 - - - - -] Recovered model server
connection!


Perhaps this doesn't happen with smaller images like Cirros.

Here's my Glance.conf

Is this some sort of REST timeout?   RabbitMQ?
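From what I can tell, MessagingTimeout is raised by oslo.messaging (the RabbitMQ
RPC layer) rather than by the Glance REST API, so the knob I'm eyeing is the RPC
reply timeout in nova.conf.  A minimal sketch of what I mean, assuming the stock
default of 60 seconds (the 300 below is only an illustration):

[DEFAULT]
# Seconds to wait for a reply to an RPC call before raising MessagingTimeout.
# Default is 60; raising it may just hide whatever is making the call slow.
rpc_response_timeout = 300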

 glance-api.conf

[DEFAULT]
notification_driver = noop

# Show more verbose log output (sets INFO log level output)
verbose=True

# Show debugging output in logs (sets DEBUG log level output)
#debug=False

# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
#image_size_cap=1099511627776

# Address to bind the API server
#bind_host=0.0.0.0

# Port to bind the API server to
#bind_port=9292

# Log to this file. Make sure you do not set the same log file for both the
# API and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
#log_file=/var/log/glance/api.log

# Backlog requests when creating socket
#backlog=4096

# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle=600

# Timeout (in seconds) for client connections' socket operations. If an
# incoming connection is idle for this period it will be closed.  A value of "0"
# means wait forever.
#client_socket_timeout=0

# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
# data_api = glance.db.sqlalchemy.api

# The number of child process workers that will be
# created to service API requests. The default will be
# equal to the number of CPUs available. (integer value)
#workers=4

# Maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large tokens
# (typically those generated by the Keystone v3 API with big service
# catalogs)
# max_header_line = 16384

# Role used to identify an authenticated user as administrator
#admin_role=admin

# Allow unauthenticated users to access the API with read-only
# privileges. This only applies when using ContextMiddleware.
#allow_anonymous_access=False

# Allow access to version 1 of glance api
#enable_v1_api=True

# Allow access to version 2 of glance api
#enable_v2_api=True

# Return the URL that references where the data is stored on
# the backend storage system.  For example, if using the
# file system store a URL of 'file:///path/to/image' will
# be returned to the user in the 'direct_url' meta-data field.
# The default value is false.
#show_image_direct_url=False

# Send headers containing user and tenant information when making requests to
# the v1 glance registry. This allows the registry to function as if a user is
# authenticated without the need to authenticate a user itself using the
# auth_token middleware.
# The default value is false.
#send_identity_headers=False

# Supported values for the 'container_format' image attribute
#container_formats=ami,ari,aki,bare,ovf,ova

# Supported values for the 'disk_format' image attribute
#disk_formats=ami,ari,aki,vhd,vmdk,raw,qcow2,vdi,iso

# Property Protections config file
# This file contains the rules for property protections and the roles/policies
# associated with it.
# If this config value is not specified, by default, property protections
# won't be enforced.
# If a value is specified and the file is not found, then the glance-api
# service will not start.
#property_protection_file =

# Specify whether 'roles' or 'policies' are used in the
# property_protection_file.
# The default value for 

Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Thomas Blank - Hetzner Online AG
Hey there,

have you tried using the --property option of the openstack-client?
You could filter images by their owner (openstack_project_id or name).

"openstack image list --property owner=[openstack_project_id]"

see:

http://docs.openstack.org/developer/python-openstackclient/command-objects/image.html#cmdoption-image-list--property
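A concrete sequence, for example (a sketch -- "doomed-project" is just a
placeholder for the tenant you want to clean up):

$ project_id=$(openstack project show doomed-project -f value -c id)
$ openstack image list --property owner=$project_id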

thomas blank,



On 25.01.2016 16:19, Saverio Proto wrote:
> I am stuck listing the glance images uploaded to a specific tenant.
> [...]

Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Abel Lopez
Also, if you are an admin, you can `export OS_TENANT_ID` for the target project
and list images with the is-public property set to false.
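A minimal sketch of that approach, using the --private filter from the usage
output above (the project ID is a placeholder, and admin credentials are
assumed to be loaded):

$ export OS_TENANT_ID=<project_id>
$ openstack image list --private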

On Monday, January 25, 2016, Kris G. Lindgren  wrote:

> This doesn't answer your specific question.  However, there are two projects
> out there that are specifically for cleaning up projects and everything
> associated with them before removal.
> [...]

[Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Christopher Hull
Hello all;

I'm an experienced developer and I work at Cisco.  Chances are I've covered
the basics here, but just in case, check me.
I've followed the Kilo install instructions to the letter so far as I can
tell.   I have not installed Swift, but I think I have everything else, and my
installation almost works.   I'm having a little trouble with Glance.

It seems that when I attempt to create a large image (that may or may not
be the issue), the checksum that Glance records in its DB is incorrect.
The Cirros image runs just fine, and the CentOS cloud image works.  But when I
offload and create an image from a big CentOS install (say 100 GB), nova says
the checksum is wrong when I try to boot it.

The install was on a fresh CentOS 7 on a new system I built (i7, 32 GB RAM,
7 TB disk).  Plenty of speed and space.   And this system is dedicated to
OpenStack.

http://docs.openstack.org/kilo/install-guide/install/yum/content/index.html



Here's a little test I ran.

===
Attempt to deploy image


nova boot --flavor m1.medium --image v4c-centos-volume1-img1 --nic
net-id=61a08e7c-8d4b-42c3-b963-eddcf98113a2 \
   --security-group default --key-name demo-key v4c-centos-volume1-instance1


2016-01-02 16:37:27.764 4490 ERROR nova.compute.manager
[req-87feb5bf-0e29-432b-8f6c-aeac1fba4753 196b1dc42db94eb7bf210c2281b68e67
3690e3975f6546d793b530dffa8f1a8d - - -] [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] Instance failed to spawn
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] Traceback (most recent call last):
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2461, in
_build_resources
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] yield resources
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2333, in
_build_and_run_instance
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]
block_device_info=block_device_info)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2378,
in spawn
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] admin_pass=admin_password)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2776,
in _create_image
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] instance, size,
fallback_from_host)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5894,
in _try_fetch_image_cache
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] size=size)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
231, in cache
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] *args, **kwargs)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
480, in create_image
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] prepare_template(target=base,
max_size=size, *args, **kwargs)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445,
in inner
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] return f(*args, **kwargs)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
221, in fetch_func_sync
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] fetch_func(target=target, *args,
**kwargs)
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 501, in
fetch_image
2016-01-02 16:37:27.764 4490 TRACE nova.compute.manager [instance:
9e7d930d-4ee7-4556-905c-d4d54406c677] max_size=max_size)
2016-01-02 16:37:27.764 4490 TRACE 

Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Steve Martinelli

You can also use `openstack image list --long` -- this should include the
owner ID (which is the project ID) -- and then do some grepping and cutting
on that output. Not the cleanest, but it'll work.

$ openstack image list --long --format value | grep $project_id | cut -f 1
-d " "

Adding to Thomas' suggestion, you can use `--format value` and `-c ID` to
get only the image IDs:

$ openstack image list --property owner=1ed19ee75d6840cf94cfb04e4405a25e
--format value -c ID
ab1629bc-692c-4947-b3f1-968cbdfcd55c
8360a76c-ee86-4a52-8735-d9a3b9f5ce3a
c12b0aff-2b88-4060-ae2b-3208e0edbd6c
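And if the end goal is cleanup, the filtered IDs can be piped straight into
image delete -- a sketch, reusing the owner ID from the example above:

$ openstack image list --property owner=1ed19ee75d6840cf94cfb04e4405a25e \
    --format value -c ID | xargs openstack image delete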

Of course, if you feel strongly that there should be a `--project
` option directly, then please report a bug:
https://bugs.launchpad.net/python-openstackclient/+filebug


Thomas Blank - Hetzner Online AG  wrote on
2016/01/25 10:49:34 AM:

> have you tried using the --property option of the openstack-client?
> You could filter images by their owner (openstack_project_id or name).
>
> "openstack image list --property owner=[openstack_project_id]"
> [...]

Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread Kris G. Lindgren
This doesn't answer your specific question.  However, there are two projects out
there that are specifically for cleaning up projects and everything associated
with them before removal.  They are:

The coda project: https://github.com/openstack/osops-coda

which, given a tenant ID, will clean up all resources for the tenant before it is
removed.  This is a project that came out of HP and has been turned over to the
Openstack-Operators group.

The second one is: https://github.com/openstack/ospurge

This project works on projects that have already been deleted from keystone but
have orphaned resources (you can also use it on active projects).

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy






On 1/25/16, 8:49 AM, "Thomas Blank - Hetzner Online AG" 
 wrote:

>have you tried using the --property option of the openstack-client?
>You could filter images by their owner (openstack_project_id or name).
>
>"openstack image list --property owner=[openstack_project_id]"
>[...]

[Openstack-operators] ATC status for OSOps contribution

2016-01-25 Thread David Wahlstrom
I hear the first round of ATC (Active Technical Contributor) invitations has
already gone out for the Austin summit.  I have not, however, received any
email about my ATC status; I have only contributed to OSOps.  Does OSOps
contribution make one eligible for ATC status?  If not, has this been
considered/investigated?  It seems like operators who contribute to OpenStack
should be included as ATCs.

-- 
David W.
Unix, because every barista in Seattle has an MCSE.


Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Clint Byrum
Excerpts from Christopher Hull's message of 2016-01-25 09:11:59 -0800:
> Hello all;
> 
> [...]
> It seems that when I attempt to create a large image (that may or may not
> be the issue), the checksum that Glance records in its DB is incorrect.
> [...]

Did you check the file that glance saved to disk to make sure it was
the same one you uploaded? I kind of wonder if something timed out and
did not properly report the error, leading to a partially written file.

Also, is there some reason you aren't deploying Liberty?
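One quick way to check, assuming the default filesystem store of a stock Kilo
install (the image ID and source path are placeholders; glance's checksum field
is an md5 of the image data):

# on the glance node
$ md5sum /var/lib/glance/images/<image-id>
$ openstack image show <image-id> -c checksum -c size
# and compare against the file that was uploaded
$ md5sum /path/to/source-image.img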



Re: [Openstack-operators] how to get glance images for a specific tenant with the openstack client ?

2016-01-25 Thread David Wahlstrom
We have an image promotion process that does this for us.  The command I
use to get images from a specific tenant is:

glance --os-image-api-version 1 image-list --owner=

I'm sure using the v1 API will make some people cringe, but I haven't found
anything similar exposed by the client for the v2 API.
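If the raw v2 API is an option, the images endpoint does take an owner query
filter, so a direct call is another workaround (a sketch; the endpoint, token
and project ID are placeholders):

$ curl -s -H "X-Auth-Token: $TOKEN" \
    "http://controller:9292/v2/images?owner=<project_id>"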

On Mon, Jan 25, 2016 at 8:29 AM, Steve Martinelli 
wrote:

> You can also use `openstack image list --long` -- this should include the
> owner ID (which is the project ID) and then do some grepping and cutting on
> that output. Not the cleanest, but it'll work.
> [...]

Re: [Openstack-operators] ATC status for OSOps contribution

2016-01-25 Thread Edgar Magana
We should discuss this in Austin and make it happen for the next release cycle.

Edgar




On 1/25/16, 10:05 AM, "JJ Asghar"  wrote:

>Long story short, we've been pushing to make OSOps ATC eligible, but it
>won't happen until we have "enough involvement", which is up to the TC.
>[...]


Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Edgar Magana
Same here: we are using Apache as the front end, and the same for Keystone.  In
the future we will put HAProxy in front of all the public URLs.

Edgar




On 1/25/16, 10:40 AM, "Kris G. Lindgren"  wrote:

>In the past we have had issues with having glance terminate SSL and downloads
>either not completing or being corrupted.  If you are having glance terminate
>SSL, moving SSL termination to haproxy and running glance as non-SSL
>fixed that issue for us.
>
>[...]


Re: [Openstack-operators] ATC status for OSOps contribution

2016-01-25 Thread JJ Asghar

On 1/25/16 11:56 AM, David Wahlstrom wrote:
> Does OSOps contribution make one eligible for ATC status?  If not, has this
> been considered/investigated?
> [...]

Hi!

So yeah... that's the challenge. All the conversations I've had around
OSOps with the TC and the people who make those types of decisions always
broke down to: "Sorry JJ, you need interest in OSOps before you can
apply for ATC." I counter with: "But I can't get people interested in OSOps
if I can't get ATC out."

So yeah, chicken and egg.

Long story short, we've been pushing to make OSOps ATC eligible, but it
won't happen until we have "enough involvement", which is up to the TC.

The ultimate goal is ATC, but it's still at least one or two releases
down the line. :(

-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2



Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Abel Lopez
First question, what is your glance store?
Also, it sounds like you created the large image from a running instance,
is that correct? If so, was the instance suspended when you initiated the
image-create?
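For reference, the sequence I'd expect looks something like this (a sketch;
the instance and snapshot names are placeholders, and this assumes the
nova CLI used elsewhere in this thread):

$ nova suspend <instance>
$ nova image-create <instance> <snapshot-name> --poll
$ nova resume <instance>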

On Monday, January 25, 2016, Christopher Hull  wrote:

> [...]
> It seems that when I attempt to create a large image (that may or may not
> be the issue), the checksum that Glance records in its DB is incorrect.
> [...]

Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread David Medberry
Also, is it possible your token timed out during the upload (thereby
truncating it)? Validate the byte size of the final uploaded (large) glance
image.
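A quick way to do that validation (a sketch; the image name is the one from the
nova boot command earlier in the thread, and the source path is a placeholder):

$ openstack image show v4c-centos-volume1-img1 -c size -c checksum -c status
$ ls -l /path/to/source-image.img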

On Mon, Jan 25, 2016 at 10:59 AM, Abel Lopez  wrote:

> First question, what is your glance store?
> Also, it sounds like you created the large image from a running instance,
> is that correct? If so, was the instance suspended when you initiated the
> image-create?
>
>
> [...]

Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread Kris G. Lindgren
In the past we have had issues with having glance terminate SSL and downloads
either not completing or being corrupted.  If you are having glance terminate
SSL, moving SSL termination to haproxy and running glance as non-SSL
fixed that issue for us.
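A minimal sketch of that layout, assuming glance-api is rebound to loopback
(bind_host=127.0.0.1 in glance-api.conf) and HAProxy holds the certificate;
the address and cert path are placeholders:

frontend glance_api_ssl
    # public VIP terminates TLS; glance-api itself speaks plain HTTP
    bind 192.0.2.10:9292 ssl crt /etc/haproxy/certs/glance.pem
    mode http
    default_backend glance_api

backend glance_api
    mode http
    server glance_local 127.0.0.1:9292 check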

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy







On 1/25/16, 11:23 AM, "Clint Byrum"  wrote:

>Did you check the file that glance saved to disk to make sure it was
>the same one you uploaded?
>[...]


Re: [Openstack-operators] I have an installation question and possible bug

2016-01-25 Thread David Medberry
On Mon, Jan 25, 2016 at 4:15 PM, Christopher Hull 
wrote:

> I have not installed Swift.   Is that an issue?
>

No, not an issue.