Re: [openstack-dev] (no subject)

2017-07-05 Thread Lawrence J. Albinson
Hi Andy,

Thank you. Yes, 15.1.6 seems good.

Kind regards, Lawrence

From: Andy McCrae
Sent: 04 July 2017 17:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] (no subject)

Hi Lawrence,

On 4 July 2017 at 12:29, Lawrence J. Albinson 
<lawre...@ljalbinson.com> wrote:
Dear Colleagues,

Before I go problem hunting, has anyone seen openstack-ansible builds failing 
at release 15.1.5 at the following point:

TASK [lxc_container_create : Create localhost config] **

with the error 'lxc-attach: command not found'?

The self-same environment is working fine at 15.1.3.

I've turned up nothing with searches. Any help greatly appreciated.

I know there were some issues caused by changes to LXC and Ansible that look 
related to what you're seeing, although I believed those were resolved - 
https://review.openstack.org/#/c/475438/
Looking at the patch that merged, it seems the fix will only be in the latest 
release (which just got released!).
Try updating to 15.1.6 and see if that resolves it; if not, let us know.
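As a quick sanity check (a rough Python sketch, not part of openstack-ansible
itself; the hostname is a placeholder), you can confirm whether lxc-attach is
simply missing from PATH on a container host before re-running the playbooks:

# hedged illustration only: verify that the lxc-attach binary the
# lxc_container_create task appears to shell out to is actually on PATH.
import shutil
import subprocess

def check_local():
    path = shutil.which("lxc-attach")
    print("local lxc-attach: %s" % (path or "NOT FOUND on PATH"))

def check_remote(host):
    # 'host' is a placeholder - use a name from your inventory.
    result = subprocess.run(["ssh", host, "command -v lxc-attach"],
                            capture_output=True, text=True)
    print("%s: %s" % (host, result.stdout.strip() or "NOT FOUND"))

if __name__ == "__main__":
    check_local()
    check_remote("infra1")  # placeholder hostname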

Kind Regards,
Andy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2017-07-04 Thread Andy McCrae
Hi Lawrence,

On 4 July 2017 at 12:29, Lawrence J. Albinson 
wrote:

> Dear Colleagues,
>
> Before I go problem hunting, has anyone seen openstack-ansible builds
> failing at release 15.1.5 at the following point:
>
> TASK [lxc_container_create : Create localhost config]
> **
>
> with the error 'lxc-attach: command not found'?
>
> The self-same environment is working fine at 15.1.3.
>
> I've turned up nothing with searches. Any help greatly appreciated.
>

I know there were some issues caused by changes to LXC and Ansible that
look related to what you're seeing, although I believed those were resolved -
https://review.openstack.org/#/c/475438/
Looking at the patch that merged, it seems the fix will only be in the latest
release (which just got released!).
Try updating to 15.1.6 and see if that resolves it; if not, let us know.

Kind Regards,
Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2016-08-17 Thread Tom Fifield

On 17/08/16 15:19, UnitedStack 张德通 wrote:

I want to join the openstack mailing list


You're on it ^_^


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2015-08-28 Thread Adrian Otto
Let's get these details into the QuickStart doc so anyone else hitting this can 
be clued in.

--
Adrian

On Aug 27, 2015, at 9:38 PM, Vikas Choudhary 
<choudharyvika...@gmail.com> wrote:


Hi Stanislaw,


I also faced a similar issue. The reason might be that the OpenStack Heat 
service is not reachable from inside the master instance.
Please check /var/log/cloud-init.log for any connectivity-related error 
messages and, if you find one, try the failed command manually with the 
correct URL.



If this is the issue, you need to set the correct HOST_IP in localrc.
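For reference, a minimal connectivity check along those lines (a hypothetical
Python helper, run from inside the master instance; the endpoint is a
placeholder - use whatever URL the failing command in the cloud-init log was
trying to reach):

# hedged sketch: confirm the Heat API endpoint is reachable from the instance.
from urllib import error, request

HEAT_ENDPOINT = "http://192.0.2.10:8004/"  # placeholder: HOST_IP + Heat API port

try:
    resp = request.urlopen(HEAT_ENDPOINT, timeout=5)
    print("Heat endpoint reachable (HTTP %d)" % resp.getcode())
except error.HTTPError as err:
    # An HTTP error status still means the service answered.
    print("Heat endpoint reachable (HTTP %d)" % err.code)
except error.URLError as err:
    print("Cannot reach Heat endpoint: %s" % err.reason)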



-Vikas Choudhary


___
Hi Stanislaw,

Your Fedora host should have a special config file which will send a
signal to the WaitCondition.
For a good example, please take a look at this template:
https://github.com/openstack/heat-templates/blob/819a9a3fc9d6f449129c8cefa5e087569340109b/hot/native_waitcondition.yaml
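For illustration, the signal that config file sends boils down to an
authenticated POST to the wait condition handle's endpoint; a rough Python
sketch, where the endpoint and token are placeholders you would take from the
handle's attributes:

# hedged sketch of what the wait-condition signal amounts to; SIGNAL_URL and
# TOKEN are placeholders, not real values.
import json
from urllib import request

SIGNAL_URL = "http://HEAT_API_HOST:8004/v1/PLACEHOLDER/signal"  # from the handle
TOKEN = "PLACEHOLDER_TOKEN"                                     # from the handle

body = json.dumps({"status": "SUCCESS", "reason": "instance is ready"})
req = request.Request(
    SIGNAL_URL,
    data=body.encode("utf-8"),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
)
request.urlopen(req, timeout=10)
print("wait condition signalled")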

Also, I suppose the best place for such a question would be
https://ask.openstack.org/en/questions/

Regards,
Sergey.

On 26 August 2015 at 09:23, Pitucha, Stanislaw Izaak
<stanislaw.pitucha at hp.com> wrote:

 Hi all,

 I'm trying to stand up magnum according to the quickstart instructions
 with devstack.

 There's one resource which times out and fails: master_wait_condition. The
 kube master (fedora) host seems to be created, I can login to it via ssh,
 other resources are created successfully.



 What can I do from here? How do I debug this? I tried to look for the
 wc_notify itself to try manually, but I can't even find that script.



 Best Regards,

 Stanisław Pitucha



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-10 Thread Shyam Prasad N

Hi Clay,
First of all, thanks for the reply.

1. How can I update the eventlet version? I installed swift from source 
(git). Will pulling the latest code help?
2. Yes. My clients recently changed to chunked encoding for transfers. 
Are you saying chunked encoding is not supported by swift?

3. Yes, the 408s have made it to the clients from the proxy servers.

Regards,
Shyam

On Friday 09 May 2014 10:45 PM, Clay Gerrard wrote:
I thought those tracebacks only showed up with old versions of 
eventlet and/or with eventlet_debug = true?


In my experience that normally indicates a client disconnect on a 
chunked encoding transfer request (a request w/o a content-length).  Do 
you know if your clients are using transfer encoding chunked?


Are you seeing the 408 make its way out to the client?  It wasn't 
clear to me if you only see these tracebacks on the object-servers or 
in the proxy logs as well?  Perhaps only one of the three disks 
involved in the PUT is timing out and the client still gets a 
successful response?


As the disks fill up, replication and auditing are going to consume more 
disk resources - you may have to tune the concurrency and rate 
settings on those daemons.  If the errors happen consistently, you 
could try running with the background consistency processes temporarily 
disabled to rule out whether they're causing disk contention on your setup 
with your config.


-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec <openst...@nemebean.com> wrote:


This is a development list, and your question sounds more
usage-related.  Please ask your question on the users list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Thanks.

-Ben


On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic
(mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following
traceback from
some transactions...
Traceback (most recent call last):
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
    chunk = next(data_source)
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
    data_source = iter(lambda: reader(self.app.client_chunk_size), '')
  File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
    chunk = self.wsgi_input.read(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
    return self._chunked_read(self.rfile, length)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
    self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT

/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
408 - PUT

http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data;
txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241
95.6405 -

It succeeds sometimes, but mostly 408 errors. I don't see any other
logs for the transaction ID, or around these 408 errors, in the log
files. Is this a disk timeout issue? These are only 1GB files and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] (no subject)

2014-05-09 Thread Ben Nemec
This is a development list, and your question sounds more usage-related. 
 Please ask your question on the users list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Thanks.

-Ben

On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

Hi,

I have a two node swift cluster receiving continuous traffic (mostly
overwrites for existing objects) of 1GB files each.

Soon after the traffic started, I'm seeing the following traceback from
some transactions...
Traceback (most recent call last):
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
    chunk = next(data_source)
  File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
    data_source = iter(lambda: reader(self.app.client_chunk_size), '')
  File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
    chunk = self.wsgi_input.read(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
    return self._chunked_read(self.rfile, length)
  File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
    self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
ValueError: invalid literal for int() with base 16: '' (txn:
tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

Seeing the following errors on storage logs...
object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
/xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data
408 - PUT
http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A30396AEF6B537B00.2.data;
txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

It succeeds sometimes, but mostly 408 errors. I don't see any other
logs for the transaction ID, or around these 408 errors, in the log
files. Is this a disk timeout issue? These are only 1GB files and normal
writes to files on these disks are quite fast.

The timeouts from the swift proxy files are...
root@bulkstore-112:~# grep -R timeout /etc/swift/*
/etc/swift/proxy-server.conf:client_timeout = 600
/etc/swift/proxy-server.conf:node_timeout = 600
/etc/swift/proxy-server.conf:recoverable_node_timeout = 600

Can someone help me troubleshoot this issue?

--
-Shyam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-05-09 Thread Clay Gerrard
I thought those tracebacks only showed up with old versions of eventlet
and/or with eventlet_debug = true?

In my experience that normally indicates a client disconnect on a chunked
encoding transfer request (a request w/o a content-length).  Do you know if
your clients are using transfer encoding chunked?
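"Transfer encoding chunked" just means the client streams the body without a
Content-Length header; a quick way to produce that kind of PUT against a test
cluster (a hedged Python sketch - the URL and token are placeholders) is:

# a generator body makes requests send Transfer-Encoding: chunked (no
# Content-Length), which is the case the eventlet traceback points at when
# the client disconnects mid-transfer.
import requests

def body_chunks():
    for _ in range(16):
        yield b"x" * 65536  # 64 KiB per chunk, ~1 MiB total

resp = requests.put(
    "http://127.0.0.1:8080/v1/AUTH_test/container/object",  # placeholder URL
    data=body_chunks(),
    headers={"X-Auth-Token": "PLACEHOLDER_TOKEN"},
)
print(resp.status_code)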

Are you seeing the 408 make its way out to the client?  It wasn't clear to
me if you only see these tracebacks on the object-servers or in the proxy
logs as well?  Perhaps only one of the three disks involved in the PUT is
timing out and the client still gets a successful response?

As the disks fill up, replication and auditing are going to consume more disk
resources - you may have to tune the concurrency and rate settings on those
daemons.  If the errors happen consistently, you could try running with the
background consistency processes temporarily disabled to rule out whether
they're causing disk contention on your setup with your config.

-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec openst...@nemebean.com wrote:

 This is a development list, and your question sounds more usage-related.
  Please ask your question on the users list: http://lists.openstack.org/
 cgi-bin/mailman/listinfo/openstack

 Thanks.

 -Ben


 On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

 Hi,

 I have a two node swift cluster receiving continuous traffic (mostly
 overwrites for existing objects) of 1GB files each.

 Soon after the traffic started, I'm seeing the following traceback from
 some transactions...
 Traceback (most recent call last):
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
     chunk = next(data_source)
   File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
     data_source = iter(lambda: reader(self.app.client_chunk_size), '')
   File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
     chunk = self.wsgi_input.read(*args, **kwargs)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
     return self._chunked_read(self.rfile, length)
   File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
     self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
 ValueError: invalid literal for int() with base 16: '' (txn:
 tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

 Seeing the following errors on storage logs...
 object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
 /xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 408 - PUT
 http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

 It succeeds sometimes, but mostly 408 errors. I don't see any other
 logs for the transaction ID, or around these 408 errors, in the log
 files. Is this a disk timeout issue? These are only 1GB files and normal
 writes to files on these disks are quite fast.

 The timeouts from the swift proxy files are...
 root@bulkstore-112:~# grep -R timeout /etc/swift/*
 /etc/swift/proxy-server.conf:client_timeout = 600
 /etc/swift/proxy-server.conf:node_timeout = 600
 /etc/swift/proxy-server.conf:recoverable_node_timeout = 600

 Can someone help me troubleshoot this issue?

 --
 -Shyam


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] (no subject)

2014-04-25 Thread Dmitriy Ukhlov
In my opinion it would be enough to read the table schema
from stdin; then it is possible to use a pipe for input from any stream.
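Something along these lines (a hedged sketch, not actual python-magnetodbclient
code) would make the description pipeable from any stream:

# read the table schema as JSON from stdin so it can come from a file,
# a pipe, or another command.
import json
import sys

def load_table_schema(stream=sys.stdin):
    schema = json.load(stream)
    # basic sanity checks on the fields the proposal uses
    for required in ("table_name", "attribute_definitions", "key_schema"):
        if required not in schema:
            raise ValueError("missing required field: %s" % required)
    return schema

if __name__ == "__main__":
    # usage: cat table.json | python create_table_from_stdin.py
    print(load_table_schema()["table_name"])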


On Fri, Apr 25, 2014 at 6:25 AM, ANDREY OSTAPENKO (CS) 
andrey_ostape...@symantec.com wrote:

 Hello, everyone!

 Now I'm starting to implement a CLI client for the key-value storage service
 MagnetoDB.
 I'm going to use the heat approach for CLI commands, e.g. heat stack-create
 --template-file FILE,
 because we have too many parameters to pass directly on the command line.
 For example, table creation command:

 magnetodb create-table --description-file FILE

 The file will contain JSON data, e.g.:

 {
     "table_name": "data",
     "attribute_definitions": [
         {"attribute_name": "Attr1", "attribute_type": "S"},
         {"attribute_name": "Attr2", "attribute_type": "S"},
         {"attribute_name": "Attr3", "attribute_type": "S"}
     ],
     "key_schema": [
         {"attribute_name": "Attr1", "key_type": "HASH"},
         {"attribute_name": "Attr2", "key_type": "RANGE"}
     ],
     "local_secondary_indexes": [
         {
             "index_name": "IndexName",
             "key_schema": [
                 {"attribute_name": "Attr1", "key_type": "HASH"},
                 {"attribute_name": "Attr3", "key_type": "RANGE"}
             ],
             "projection": {"projection_type": "ALL"}
         }
     ]
 }
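For illustration only, a rough Python sketch (not the real python-magnetodbclient)
of how a create-table subcommand could consume such a description file, or fall
back to stdin as suggested in the reply above:

# hedged illustration of the proposed interface.
import argparse
import json
import sys

def main(argv=None):
    parser = argparse.ArgumentParser(prog="magnetodb")
    sub = parser.add_subparsers(dest="command")
    create = sub.add_parser("create-table")
    create.add_argument("--description-file", type=argparse.FileType("r"),
                        default=sys.stdin,
                        help="JSON table description (defaults to stdin)")
    args = parser.parse_args(argv)
    if args.command == "create-table":
        description = json.load(args.description_file)
        print("would create table %r" % description["table_name"])

if __name__ == "__main__":
    main()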

 Blueprint:
 https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client

 If you have any comments, please let me know.

 Best regards,
 Andrey Ostapenko




-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev