Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread gordon chung


On 2018-03-15 5:16 AM, __ mango. wrote:
> hi,
> The environment variable that you're talking about has been configured 
> and the error has not gone away.
> 
> This is my first time using OpenStack; can you be more specific? Thank 
> you very much.
> 

https://gnocchi.xyz/gnocchiclient/shell.html#openstack-keystone-authentication 
you're missing OS_AUTH_TYPE
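
for reference, a minimal sketch of the OS_* variables gnocchiclient expects
for keystone auth (values are placeholders; OS_AUTH_URL taken from the 401
response in your earlier log; the exact list is in the doc above):

export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://192.168.12.244:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default

gnocchi status --debug   # the REQ line should now carry an X-Auth-Token header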

-- 
gord



Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread __ mango.
hi, The environment variable that you're talking about has been configured and 
the error has not gone away.

This is my first time using OpenStack; can you be more specific? Thank you very 
much.

-- Original --
From:  "Julien Danjou"<jul...@danjou.info>;
Date:  Thu, Mar 15, 2018 04:48 PM
To:  "__ mango."<935540...@qq.com>;
Cc:  "openstack-dev"<openstack-dev@lists.openstack.org>; 
Subject:  Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.



On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful (with api.auth_mode 
> set to basic, basic mode succeeds):
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request, so Keystone auth won't work. You did
not set the OS_* environment variables correctly.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread Julien Danjou
On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful (with api.auth_mode 
> set to basic, basic mode succeeds):
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request, so Keystone auth won't work. You did
not set the OS_* environment variables correctly.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




[openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-14 Thread __ mango.
hi,
I have a question about the validation of gnocchi keystone.
I run the following command, but it is not successful (with api.auth_mode set 
to basic, basic mode succeeds):

# gnocchi status --debug
REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False -H 
"Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H "Accept: 
application/json, */*" -H "User-Agent: gnocchi keystoneauth1/3.1.0 
python-requests/2.18.1 CPython/2.7.12"
Starting new HTTP connection (1): localhost
http://localhost:8041 "GET /v1/status?details=False HTTP/1.1" 401 114
RESP: [401] Content-Type: application/json Content-Length: 114 
WWW-Authenticate: Keystone uri='http://192.168.12.244:5000/v3' Connection: 
Keep-Alive 
RESP BODY: {"error": {"message": "The request you have made requires 
authentication.", "code": 401, "title": "Unauthorized"}}

The request you have made requires authentication. (HTTP 401)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 113, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status_cli.py", line 23, in take_action
    status = utils.get_client(self).status.get()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status.py", line 21, in get
    return self._get(self.url + '?details=%s' % details).json()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/base.py", line 37, in _get
    return self.client.api.get(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/client.py", line 52, in request
    raise exceptions.from_response(resp, method)
Unauthorized: The request you have made requires authentication. (HTTP 401)
Traceback (most recent call last):
  File "/usr/local/bin/gnocchi", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/shell.py", line 251, in main
    return GnocchiShell().run(args)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 279, in run
    result = self.run_subcommand(remainder)
  File "/usr/lib/python2.7/dist-packages/cliff/app.py", line 400, in run_subcommand
    result = cmd.run(parsed_args)
  File "/usr/lib/python2.7/dist-packages/cliff/display.py", line 113, in run
    column_names, data = self.take_action(parsed_args)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status_cli.py", line 23, in take_action
    status = utils.get_client(self).status.get()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/status.py", line 21, in get
    return self._get(self.url + '?details=%s' % details).json()
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/v1/base.py", line 37, in _get
    return self.client.api.get(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/gnocchiclient/client.py", line 52, in request
    raise exceptions.from_response(resp, method)
gnocchiclient.exceptions.Unauthorized: The request you have made requires 
authentication. (HTTP 401)


Re: [openstack-dev] [gnocchi] can we put the api on udp?

2017-11-21 Thread Julien Danjou
On Tue, Nov 21 2017, 李田清 wrote:

> right now, the ceilometer notification agent can send samples via the udp 
> publisher, but gnocchi can only accept them via its rest api. Is there a 
> way for gnocchi to accept the notification agent's samples sent over udp?

I'd suggest tracking this here, since an issue has just been opened about
it:
  https://github.com/gnocchixyz/gnocchi/issues/496

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info




[openstack-dev] [gnocchi] can we put the api on udp?

2017-11-20 Thread 李田清
Hello,
right now, the ceilometer notification agent can send samples via the udp 
publisher, but gnocchi can only accept them via its rest api. Is there a way 
for gnocchi to accept the notification agent's samples sent over udp?


Thanks a lot


Re: [openstack-dev] [gnocchi] Redis for storage backend

2017-10-24 Thread Yaguang Tang
Hi Gordon,

Thanks for your test results. We investigated more on our env, and it finally
turns out that our ceph cluster isn't working as expected, which made gnocchi
performance decrease a lot.

On Thu, Oct 19, 2017 at 1:09 AM, gordon chung  wrote:

>
>
> On 2017-10-18 12:15 PM, Yaguang Tang wrote:
> >
> > We launched 300 vms and each vm has about 10 metrics; the OpenStack
> > cluster has 3 controllers and 2 compute nodes (ceph replication is set
> > to 2).
>
> seems smaller than my test, i have 20K metrics in my test
>
> >
> > what we want to achieve is to make all metric measures data get
> > processed as soon as possible; metric processing delay is set to 10s,
> > and ceilometer polling interval is 30s.
>
> are you batching the data you push to gnocchi? in gnocchi 4.1, the redis
> driver will (attempt to) process immediately, rather than cyclically
> using the metric processing delay.
>
> >
> > when the backend of incoming and storage is set to ceph, the average of
> > "gnocchi status"
> > shows that there are around 7000 measures waiting to be processed, but
> > when changing the incoming and storage backend to Redis, gnocchi status
> > shows around 200 unprocessed measures.
>
> i should clarify, having a high gnocchi status is not a bad thing, ie,
> if you just send a large spike of measures, it's expected to jump. it's
> bad if it never shrinks.
>
> that said, how many workers do you have? i have 18 workers for 20K
> metrics and it takes 2 minutes i believe? do you have debug enabled? how
> long does it take to process a metric?
>
> when i tested gnocchi+ceph vs gnocchi+redis, i didn't see a very large
> difference in performance (redis was definitely better though). maybe
> it's your ceph environment?
>
> >
> > we tried adding more metricd processes on every controller node to
> > improve the calculation and write speed to the storage backend, but it
> > had little effect.
>
> performance should increase (relatively) proportionally. ie. if you 2x
> metricd, you should perform (almost) 2x quicker. if you add 5% more
> metricd, you should perform (almost) 5% quicker.
>
> cheers,
>
> --
> gord



-- 
Tang Yaguang


Re: [openstack-dev] [gnocchi] Redis for storage backend

2017-10-18 Thread gordon chung


On 2017-10-18 12:15 PM, Yaguang Tang wrote:
> 
> We launched 300 vms and each vm has about 10 metrics; the OpenStack cluster 
> has 3 controllers and 2 compute nodes (ceph replication is set to 2).

seems smaller than my test, i have 20K metrics in my test

> 
> what we want to achieve is to make all metric measures data get 
> processed as soon as possible; metric processing delay is set to 10s, 
> and ceilometer polling interval is 30s.

are you batching the data you push to gnocchi? in gnocchi 4.1, the redis 
driver will (attempt to) process immediately, rather than cyclically 
using the metric processing delay.

> 
> when the backend of incoming and storage is set to ceph, the average of 
> "gnocchi status"
> shows that there are around 7000 measures waiting to be processed, but 
> when changing the incoming and storage backend to Redis, gnocchi status 
> shows around 200 unprocessed measures.

i should clarify, having a high gnocchi status is not a bad thing, ie, 
if you just send a large spike of measures, it's expected to jump. it's 
bad if it never shrinks.

that said, how many workers do you have? i have 18 workers for 20K 
metrics and it takes 2 minutes i believe? do you have debug enabled? how 
long does it take to process a metric?

when i tested gnocchi+ceph vs gnocchi+redis, i didn't see a very large 
difference in performance (redis was definitely better though). maybe 
it's your ceph environment?

> 
> we tried adding more metricd processes on every controller node to improve 
> the calculation and write speed to the storage backend, but it had little 
> effect.

performance should increase (relatively) proportionally. ie. if you 2x 
metricd, you should perform (almost) 2x quicker. if you add 5% more 
metricd, you should perform (almost) 5% quicker.
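
for example, parallelism is just a config knob (a sketch; i'm assuming the 
[metricd] workers option name from the gnocchi docs):

[metricd]
workers = 18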

cheers,

-- 
gord


Re: [openstack-dev] [gnocchi] Redis for storage backend

2017-10-18 Thread Yaguang Tang
Hi Gordon,

We launched 300 vms and each vm has about 10 metrics; the OpenStack cluster
has 3 controllers and 2 compute nodes (ceph replication is set to 2).

what we want to achieve is to make all metric measures data get processed
as soon as possible; metric processing delay is set to 10s, and ceilometer
polling interval is 30s.

when the backend of incoming and storage is set to ceph, the average of
"gnocchi status" shows that there are around 7000 measures waiting to be
processed, but when changing the incoming and storage backend to Redis,
gnocchi status shows around 200 unprocessed measures.

we tried adding more metricd processes on every controller node to improve
the calculation and write speed to the storage backend, but it had little
effect.

On Fri, Oct 13, 2017 at 9:03 PM, gordon chung  wrote:

>
>
> On 2017-10-13 03:37 AM, Julien Danjou wrote:
> > On Fri, Oct 13 2017, Yaguang Tang wrote:
> >
> >> I see the latest Gnocchi supports using Redis as a storage backend. I am
> >> testing the performance of Redis and Ceph; it seems that using Redis as
> >> the storage backend we can achieve more realtime metric data, and gnocchi
> >> status shows there are always few metrics left to process.
> >>
> >> Is Redis a recommended storage backend?
> >
> > Redis is recommended as an incoming measures backend, not really as a
> > storage – though it really depends on the size of your installation.
> >
> > Up until version 4.0, Gnocchi processes metrics every
> > $metricd.metric_processing_delay seconds. With version 4.1 (to be
> > released), the Redis incoming driver processes data in a more realtime
> > fashion, which avoids having to poll for incoming data.
> >
>
> what Julien said :)
>
> redis as a storage driver really depends on how you set up persistence[1]
> and how much you trust it[2].
>
> would love to see your redis vs ceph numbers compared to mine[3] :)
>
> [1] https://redis.io/topics/persistence
> [2] https://aphyr.com/posts/283-jepsen-redis
> [3] https://medium.com/@gord.chung/gnocchi-4-introspective-a83055e99776
> (tested part of 4.1 optimisations)
>
> cheers,
>
> --
> gord



-- 
Tang Yaguang


Re: [openstack-dev] [gnocchi] Redis for storage backend

2017-10-13 Thread gordon chung


On 2017-10-13 03:37 AM, Julien Danjou wrote:
> On Fri, Oct 13 2017, Yaguang Tang wrote:
> 
>> I see the latest Gnocchi supports using Redis as a storage backend. I am
>> testing the performance of Redis and Ceph; it seems that using Redis as the
>> storage backend we can achieve more realtime metric data, and gnocchi status
>> shows there are always few metrics left to process.
>>
>> Is Redis a recommended storage backend?
> 
> Redis is recommended as an incoming measures backend, not really as a
> storage – though it really depends on the size of your installation.
> 
> Up until version 4.0, Gnocchi processes metrics every
> $metricd.metric_processing_delay seconds. With version 4.1 (to be
> released), the Redis incoming driver processes data in a more realtime
> fashion, which avoids having to poll for incoming data.
> 

what Julien said :)

redis as a storage driver really depends on how you set up persistence[1] 
and how much you trust it[2].

would love to see your redis vs ceph numbers compared to mine[3] :)

[1] https://redis.io/topics/persistence
[2] https://aphyr.com/posts/283-jepsen-redis
[3] https://medium.com/@gord.chung/gnocchi-4-introspective-a83055e99776 
(tested part of 4.1 optimisations)

cheers,

-- 
gord


Re: [openstack-dev] [gnocchi] Redis for storage backend

2017-10-13 Thread Julien Danjou
On Fri, Oct 13 2017, Yaguang Tang wrote:

> I see the latest Gnocchi supports using Redis as a storage backend. I am
> testing the performance of Redis and Ceph; it seems that using Redis as the
> storage backend we can achieve more realtime metric data, and gnocchi status
> shows there are always few metrics left to process.
>
> Is Redis a recommended storage backend?

Redis is recommended as an incoming measures backend, not really as a
storage – though it really depends on the size of your installation.

Up until version 4.0, Gnocchi processes metrics every
$metricd.metric_processing_delay seconds. With version 4.1 (to be
released), the Redis incoming driver processes data in a more realtime
fashion, which avoids having to poll for incoming data.
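
As a sketch, a gnocchi.conf split along those lines could look like this
(section and option names assumed from the Gnocchi 4.x documentation; the
Redis URL is a placeholder):

[incoming]
driver = redis
redis_url = redis://localhost:6379/

[storage]
driver = ceph
ceph_pool = metrics
ceph_username = gnocchi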

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




[openstack-dev] [gnocchi] Redis for storage backend

2017-10-12 Thread Yaguang Tang
Hi all,

I see the latest Gnocchi supports using Redis as a storage backend. I am
testing the performance of Redis and Ceph; it seems that using Redis as the
storage backend we can achieve more realtime metric data, and gnocchi status
shows there are always few metrics left to process.

Is Redis a recommended storage backend?

-- 
Tang Yaguang


Re: [openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination

2017-08-11 Thread Julien Danjou
On Tue, Aug 08 2017, Yaguang Tang wrote:

Yes, this fix is included in Gnocchi 4.0.1 already.

> Thanks Along, finally I figured out that this is a bug, fixed by this
> commit:
>
> commit e749b60f49a4a3b48cc5da67a797f717dd8cd01d
> Author: Julien Danjou 
> Date:   Tue Jun 20 16:36:14 2017 +0200
>
> utils: use ASCII bytes as member id
>
> Tooz actually wants ASCII bytes and not random bytes.
>
> Fixes #130
>
> diff --git a/gnocchi/utils.py b/gnocchi/utils.py
> index f81d93e..7666711 100644
> --- a/gnocchi/utils.py
> +++ b/gnocchi/utils.py
> @@ -90,7 +90,7 @@ def _enable_coordination(coord):
>
>
>  def get_coordinator_and_start(url):
> -my_id = uuid.uuid4().bytes
> +my_id = str(uuid.uuid4()).encode()
>  coord = coordination.get_coordinator(url, my_id)
>  _enable_coordination(coord)
>  return coord, my_id
>
>
> On Mon, Aug 7, 2017 at 10:10 PM, Along Meng  wrote:
>
>> From the log info, it shows that your 'node' may not be a valid str.
>> You can show the node name via 'print node' and try to call
>> str(node).encode('utf-8') to see whether it goes well.
>>
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
>> str(node).encode('utf-8')
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>> UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 4:
>> ordinal not in range(128)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>>
>>
>>
>> On Sat, Aug 5, 2017 at 7:16 PM, Yaguang Tang  wrote:
>>
>>> Hi gnocchi devs,
>>>
>>> I have an issue when using gnocchi 4.0: the storage backend is ceph, and
>>> tooz coordination is redis. Currently gnocchi-api runs in apache wsgi mode,
>>> with only one controller node running the gnocchi-metricd & gnocchi-statsd
>>> daemons. The error log of gnocchi-metricd is as follows.
>>>
>>>
>>>
>>> 2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
>>> storage backend use 'cradox' python library
>>> 2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
>>> storage backend use 'cradox' python library
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled
>>> exception
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most
>>> recent call last):
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in
>>> exit_on_exception
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in
>>> _run
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run()
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>>> self._configure()
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in
>>> wrapped_f
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>>> r.call(f, *args, **kw)
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in
>>> call
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>>> fut.result()
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line
>>> 396, in result
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>>> self.__get_result()
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in
>>> call
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result =
>>> fn(*args, **kwargs)
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in
>>> _configure
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID,
>>> partitions=200)
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in
>>> join_partitioned_group
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>>> partitioner.Partitioner(self, group_id)
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in
>>> __init__
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>>> partitions=self.partitions)
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>>> "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 47, in __init__
>>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>>> self.add_nodes(set(nodes))
>>> 2017-08-05 

Re: [openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination

2017-08-07 Thread Yaguang Tang
Thanks Along, finally I figured out that this is a bug, fixed by this
commit:

commit e749b60f49a4a3b48cc5da67a797f717dd8cd01d
Author: Julien Danjou 
Date:   Tue Jun 20 16:36:14 2017 +0200

utils: use ASCII bytes as member id

Tooz actually wants ASCII bytes and not random bytes.

Fixes #130

diff --git a/gnocchi/utils.py b/gnocchi/utils.py
index f81d93e..7666711 100644
--- a/gnocchi/utils.py
+++ b/gnocchi/utils.py
@@ -90,7 +90,7 @@ def _enable_coordination(coord):


 def get_coordinator_and_start(url):
-my_id = uuid.uuid4().bytes
+my_id = str(uuid.uuid4()).encode()
 coord = coordination.get_coordinator(url, my_id)
 _enable_coordination(coord)
 return coord, my_id
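
A quick Python 2 sketch of why the old id blows up (illustrative only, not
code from gnocchi or tooz):

import uuid

raw_id = uuid.uuid4().bytes            # 16 raw bytes, usually non-ASCII
ascii_id = str(uuid.uuid4()).encode()  # hex digits and dashes, pure ASCII

ascii_id.encode('utf-8')               # fine: implicit ascii decode succeeds
try:
    raw_id.encode('utf-8')             # py2 implicitly decodes with ascii...
except UnicodeDecodeError as exc:
    print(exc)                         # ...the same error as in the metricd log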


On Mon, Aug 7, 2017 at 10:10 PM, Along Meng  wrote:

> From the log info, it shows that your 'node' may not be a valid str.
> You can show the node name via 'print node' and try to call
> str(node).encode('utf-8') to see whether it goes well.
>
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
> str(node).encode('utf-8')
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
> UnicodeDecodeError: 'ascii' codec can't decode byte 0xde in position 4:
> ordinal not in range(128)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>
>
>
> On Sat, Aug 5, 2017 at 7:16 PM, Yaguang Tang  wrote:
>
>> Hi gnocchi devs,
>>
>> I have an issue when using gnocchi 4.0: the storage backend is ceph, and
>> tooz coordination is redis. Currently gnocchi-api runs in apache wsgi mode,
>> with only one controller node running the gnocchi-metricd & gnocchi-statsd
>> daemons. The error log of gnocchi-metricd is as follows.
>>
>>
>>
>> 2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
>> storage backend use 'cradox' python library
>> 2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
>> storage backend use 'cradox' python library
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled
>> exception
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most
>> recent call last):
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in
>> exit_on_exception
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in
>> _run
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run()
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>> self._configure()
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in
>> wrapped_f
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>> r.call(f, *args, **kw)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in
>> call
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>> fut.result()
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line
>> 396, in result
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>> self.__get_result()
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in
>> call
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result =
>> fn(*args, **kwargs)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in
>> _configure
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID,
>> partitions=200)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in
>> join_partitioned_group
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
>> partitioner.Partitioner(self, group_id)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in
>> __init__
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>> partitions=self.partitions)
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 47, in __init__
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>> self.add_nodes(set(nodes))
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
>> "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 71, in
>> add_nodes
>> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
>> str(node).encode('utf-8')
>> 

Re: [openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination

2017-08-07 Thread Along Meng
From the log info, it shows that your 'node' may not be a valid str.
You can show the node name via 'print node' and try to call
str(node).encode('utf-8') to see whether it goes well.

2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
str(node).encode('utf-8')
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils UnicodeDecodeError:
'ascii' codec can't decode byte 0xde in position 4: ordinal not in
range(128)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils



On Sat, Aug 5, 2017 at 7:16 PM, Yaguang Tang  wrote:

> Hi gnocchi devs,
>
> I have an issue when using gnocchi 4.0: the storage backend is ceph, and
> tooz coordination is redis. Currently gnocchi-api runs in apache wsgi mode,
> with only one controller node running the gnocchi-metricd & gnocchi-statsd
> daemons. The error log of gnocchi-metricd is as follows.
>
>
>
> 2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
> storage backend use 'cradox' python library
> 2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
> storage backend use 'cradox' python library
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled
> exception
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most
> recent call last):
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in
> exit_on_exception
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in
> _run
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run()
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
> self._configure()
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in
> wrapped_f
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
> r.call(f, *args, **kw)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in call
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
> fut.result()
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 396,
> in result
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
> self.__get_result()
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in call
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result =
> fn(*args, **kwargs)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in _configure
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID,
> partitions=200)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in
> join_partitioned_group
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
> partitioner.Partitioner(self, group_id)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in
> __init__
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
> partitions=self.partitions)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 47, in __init__
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
> self.add_nodes(set(nodes))
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
> "/usr/lib/python2.7/site-packages/tooz/hashring.py", line 71, in add_nodes
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
> str(node).encode('utf-8')
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils UnicodeDecodeError:
> 'ascii' codec can't decode byte 0xde in position 4: ordinal not in
> range(128)
> 2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
>
> Is this a config issue or a bug? Any tips or help is much appreciated :-)
>
>
> --
> Tang Yaguang
>
>
>
>


[openstack-dev] [gnocchi][ceilometer] gnocchi-metricd error using redis as coordination

2017-08-05 Thread Yaguang Tang
Hi gnocchi devs,

I have an issue when using gnocchi 4.0: the storage backend is ceph, and
tooz coordination is redis. Currently gnocchi-api runs in apache wsgi mode,
with only one controller node running the gnocchi-metricd & gnocchi-statsd
daemons. The error log of gnocchi-metricd is as follows.



2017-08-05 18:14:18.643 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
storage backend use 'cradox' python library
2017-08-05 18:14:18.654 1329257 INFO gnocchi.storage.common.ceph [-] Ceph
storage backend use 'cradox' python library
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils [-] Unhandled
exception
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils Traceback (most
recent call last):
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/cotyledon/_utils.py", line 84, in
exit_on_exception
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils yield
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/cotyledon/_service.py", line 139, in _run
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.run()
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 120, in run
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self._configure()
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 87, in
wrapped_f
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return r.call(f,
*args, **kw)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 177, in call
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
fut.result()
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/concurrent/futures/_base.py", line 396,
in result
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
self.__get_result()
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tenacity/__init__.py", line 159, in call
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils result =
fn(*args, **kwargs)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 193, in _configure
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils self.GROUP_ID,
partitions=200)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tooz/coordination.py", line 284, in
join_partitioned_group
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils return
partitioner.Partitioner(self, group_id)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tooz/partitioner.py", line 45, in __init__
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
partitions=self.partitions)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tooz/hashring.py", line 47, in __init__
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils
self.add_nodes(set(nodes))
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils   File
"/usr/lib/python2.7/site-packages/tooz/hashring.py", line 71, in add_nodes
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils key =
str(node).encode('utf-8')
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils UnicodeDecodeError:
'ascii' codec can't decode byte 0xde in position 4: ordinal not in
range(128)
2017-08-05 18:14:19.100 1329257 ERROR cotyledon._utils

Is this a config issue or a bug? Any tips or help is much appreciated :-)


-- 
Tang Yaguang


Re: [openstack-dev] [gnocchi] gnocchi-metricd couldn't start up with ceph backend

2017-07-06 Thread Hui Xiang
ACK, thanks Julien, I agree it should be working with ceph :)

Probably I am missing something in the ceph auth configuration; I will check
more. In addition, if anyone who has tested with ceph as the backend could
give me some help, it would be greatly appreciated.



On Thu, Jul 6, 2017 at 4:15 PM, Julien Danjou  wrote:

> On Thu, Jul 06 2017, Hui Xiang wrote:
>
> > The MySQL index storage has the gnocchi database with two archive-related
> > tables.
> > In addition, I am checking the gnocchi-upgrade code and didn't see where it
> > creates the 'measure' object in the metric storage; could something be
> > wrong there?
>
> It's tested several times a day in our CI, so it really does work. :)
> The error raised is PermissionDenied; are you sure that the caps listed
> for the gnocchi Ceph user are enough?
> (I'm no Ceph expert so it might not be related at all, just thinking out
> loud)
>
> > if not conf.skip_storage:
> > s = storage.get_driver(conf)
> > LOG.info("Upgrading storage %s", s)
> > s.upgrade(index)
>
> FYI this is what handles the initial storage creation of Ceph objects
> or whatever is needed.
>
> --
> Julien Danjou
> -- Free Software hacker
> -- https://julien.danjou.info
>


Re: [openstack-dev] [gnocchi] gnocchi-metricd couldn't start up with ceph backend

2017-07-06 Thread Julien Danjou
On Thu, Jul 06 2017, Hui Xiang wrote:

> The MySQL index storage has the gnocchi database with two archive-related
> tables.
> In addition, I am checking the gnocchi-upgrade code and didn't see where it
> creates the 'measure' object in the metric storage; could something be
> wrong there?

It's tested several times a day in our CI, so it really does work. :)
The error raised is PermissionDenied; are you sure that the caps listed
for the gnocchi Ceph user are enough?
(I'm no Ceph expert so it might not be related at all, just thinking out
loud)

> if not conf.skip_storage:
> s = storage.get_driver(conf)
> LOG.info("Upgrading storage %s", s)
> s.upgrade(index)

FYI this is what handles the initial storage creation of Ceph objects
or whatever is needed.
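
If the caps are the suspect, one quick way to rule them in or out is to widen
them temporarily (standard ceph CLI syntax; whether this is actually the
problem is just a guess on my part):

ceph auth caps client.gnocchi mon 'allow r' osd 'allow rwx pool=metrics'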

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [gnocchi] gnocchi-metricd couldn't start up with ceph backend

2017-07-06 Thread Hui Xiang
Yes, I have done gnocchi-upgrade.

The MySQL index storage has the gnocchi database with two archive-related
tables.
In addition, I am checking the gnocchi-upgrade code and didn't see where it
creates the 'measure' object in the metric storage; could something be wrong
there?

def upgrade():
    conf = cfg.ConfigOpts()
    conf.register_cli_opts([
        cfg.BoolOpt("skip-index", default=False,
                    help="Skip index upgrade."),
        cfg.BoolOpt("skip-storage", default=False,
                    help="Skip storage upgrade."),
        cfg.BoolOpt("skip-archive-policies-creation", default=False,
                    help="Skip default archive policies creation."),
        cfg.BoolOpt("create-legacy-resource-types", default=False,
                    help="Creation of Ceilometer legacy resource types.")
    ])
    conf = service.prepare_service(conf=conf)
    index = indexer.get_driver(conf)
    index.connect()
    if not conf.skip_index:
        LOG.info("Upgrading indexer %s", index)
        index.upgrade(
            create_legacy_resource_types=conf.create_legacy_resource_types)
    if not conf.skip_storage:
        s = storage.get_driver(conf)
        LOG.info("Upgrading storage %s", s)
        s.upgrade(index)

    if (not conf.skip_archive_policies_creation
            and not index.list_archive_policies()
            and not index.list_archive_policy_rules()):
        for name, ap in six.iteritems(
                archive_policy.DEFAULT_ARCHIVE_POLICIES):
            index.create_archive_policy(ap)
        index.create_archive_policy_rule("default", "*", "low")
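
For what it's worth, those BoolOpts map onto CLI flags, so the invocations
would be (flag names derived from the code above):

gnocchi-upgrade                 # upgrade index and storage, create default
                                # archive policies
gnocchi-upgrade --skip-storage  # touch only the index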


On Thu, Jul 6, 2017 at 2:52 PM, Julien Danjou  wrote:

> On Thu, Jul 06 2017, Hui Xiang wrote:
>
> >   I am setting up OpenStack with Gnocchi, with ceph as the storage
> > backend. After gnocchi-metricd starts, it always reports "PermissionError:
> > Failed to operate read op for oid measure". It seems that when metricd
> > starts it tries to list the measure object; however, there is no such
> > object, so is that why the permission error shows? I didn't find any
> > related bug; could anyone help tell me what I am missing? Thanks.
>
> Did you run gnocchi-upgrade?
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
>


Re: [openstack-dev] [gnocchi] gnocchi-metricd couldn't start up with ceph backend

2017-07-06 Thread Julien Danjou
On Thu, Jul 06 2017, Hui Xiang wrote:

>   I am setting up OpenStack with Gnocchi, with ceph as the storage backend.
> After gnocchi-metricd starts, it always reports "PermissionError: Failed to
> operate read op for oid measure". It seems that when metricd starts it tries
> to list the measure object; however, there is no such object, so is that why
> the permission error shows? I didn't find any related bug; could anyone help
> tell me what I am missing? Thanks.

Did you run gnocchi-upgrade?

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




[openstack-dev] [gnocchi] gnocchi-metricd couldn't start up with ceph backend

2017-07-06 Thread Hui Xiang
Hi guys.

  I am setting up OpenStack with Gnocchi, with ceph as the storage backend.
After gnocchi-metricd starts, it always reports "PermissionError: Failed to
operate read op for oid measure". It seems that when metricd starts it tries
to list the measure object; however, there is no such object, so is that why
the permission error shows? I didn't find any related bug; could anyone help
tell me what I am missing? Thanks.

### ceph pools
[root@node-4 gnocchi]# rados lspools
rbd
images
volumes
compute
backups
metrics
### gnocchi pool(metrics) has none objects
[root@node-4 gnocchi]# rados ls -p metrics
[root@node-4 gnocchi]#

### /etc/gnocchi/gnocchi.conf
[storage]
driver = ceph
ceph_pool = metrics
ceph_username = gnocchi
ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
ceph_conffile = /etc/ceph/ceph.conf

### ceph auth
client.gnocchi
key: AQAE9FtZussoMRAAiKk4lCX21k9DTDf39AE4Cg==
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
pool=metrics,

### gnocchi/metricd.log
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli   File
"/usr/lib/python2.7/site-packages/gnocchi/cli.py", line 229, in _run_job
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli self.block_size,
self.block_index))
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli   File
"/usr/lib/python2.7/site-packages/gnocchi/storage/incoming/ceph.py", line
141, in list_metric_with_measures_to_process
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli size * (part + 1))
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli   File
"/usr/lib/python2.7/site-packages/gnocchi/storage/incoming/ceph.py", line
125, in _list_object_names_to_process
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli op,
self.MEASURE_PREFIX, flag=self.OMAP_READ_FLAGS)
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli   File "cradox.pyx", line
444, in cradox.requires.wrapper.validate_func (cradox.c:4719)
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli   File "cradox.pyx", line
3028, in cradox.Ioctx.operate_read_op (cradox.c:39707)
2017-07-06 13:52:10.794 2265410 ERROR gnocchi.cli PermissionError: Failed
to operate read op for oid measure


BR.


Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-23 Thread gordon chung


On 23/06/17 03:08 PM, mate...@mailbox.org wrote:
> The quantity of metrics is very low. I'm not sure that batch_size works.
> Regarding the batch_timeout. What will be when the timeout reached ? Will 
> ceilometer throw error to the log file and
> discard the whole batch ? I've got this timeout set to 300, but every minute 
> I receive errors if api doesn't work
> correctly.

you set batch_timeout as 300? it's an either/or scenario. basically the 
notification agent (or collector) either waits to receive batch_size 
messages before processing/publishing, or it waits batch_timeout seconds 
before processing/publishing. nothing is thrown away.
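
e.g. a ceilometer.conf fragment along those lines (a sketch; assuming the 
mitaka-era [collector] option names):

[collector]
# publish once 50 samples are buffered...
batch_size = 50
# ...or after 10 seconds, whichever comes first
batch_timeout = 10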

i'm not sure why you receive some metrics but get timeouts for others. 
maybe others have an idea.

cheers,

-- 
gord


Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-23 Thread mate200


On Thu, 2017-06-22 at 22:57 +, gordon chung wrote:
> 
> On 22/06/17 04:23 PM, mate...@mailbox.org wrote:
> > Hello everyone !
> > 
> > I'm sorry that I'm disturbing you, but I was sent here from 
> > openstack-operators ML.
> > On my Mitaka test stack I installed Gnocchi as the database for
> > measurements, but I have problems with the api part. Firstly, I ran it
> > directly by executing gnocchi-api -p 8041. I noted the warning message and
> > later reran the api using the uwsgi daemon. The problem I'm faced with is
> > connection errors that appear in ceilometer-collector.log approximately
> > every 5-10 minutes:
> > 
> > 2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi 
> > ConnectFailure: Unable to establish connection
> > to ht
> > tp://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-
> > eaee8e116312/metric/network.incoming.bytes.rate/measures
> 
> 
> is this failing on all your requests or just some? do you have data in 
> your gnocchi?

Hello Gordon !

Yes, I have data in gnocchi. Only some requests are failing.


> > 
> > I run uwsgi with the following config:
> > 
> > [uwsgi]
> > #http-socket = 127.0.0.1:8000
> > http-socket = 10.10.10.69:8041
> 
> this should work but i imagine it's not behind a proxy so you could use 
> http instead of http-socket.

Yes, I run it directly without an http proxy server. With the http option it
doesn't start:

*** Starting uWSGI 2.0.12-debian (64bit) on [Fri Jun 23 19:03:56 2017] ***
compiled with version: 5.3.1 20160412 on 13 April 2016 08:36:06
os: Linux-4.4.0-81-generic #104-Ubuntu SMP Wed Jun 14 08:17:06 UTC 2017
nodename: ZABBIX1
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /root
detected binary path: /usr/bin/uwsgi-core
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
your processes number limit is 15650
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: enabled
Python version: 2.7.12 (default, Nov 19 2016, 06:48:10)  [GCC 5.4.0 20160609]
Python main interpreter initialized at 0x1280140
python threads support enabled
The -s/--socket option is missing and stdin is not a socket.


> > 
> > # Set the correct path depending on your installation
> > wsgi-file = /usr/local/bin/gnocchi-api
> > logto = /var/log/gnocchi/gnocchi-uwsgi.log
> > 
> > master = true
> > die-on-term = true
> > threads = 1
> > # Adjust based on the number of CPU
> > processes = 5
> > enabled-threads = true
> > thunder-lock = true
> > plugins = python
> > buffer-size = 65535
> > lazy-apps = true
> > 
> > 
> > I don't understand why this happens.
> > Maybe I should point wsgi-file as 
> > /usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?
> 
> /usr/local/bin/gnocchi-api is correct... assuming it's in that path and 
> not /usr/bin/gnocchi-api
> 
> > From the uwsgi manual I read that direct parsing of http is slow. So maybe
> > I need to use apache with the uwsgi module?
> > 
> 
> not sure about this part. do you have a lot of metrics being pushed to
> gnocchi? you can minimise connection requirements by setting
> batch_size/batch_timeout for the collector (i think mitaka should support
> this?). i believe in the gate we have 2 processes assigned to uwsgi so 5
> should be sufficient.
> 
> cheers,
> -- 
> gord

The quantity of metrics is very low. I'm not sure that batch_size works.
Regarding the batch_timeout: what will happen when the timeout is reached? Will 
ceilometer throw an error to the log file and discard the whole batch? I've got 
this timeout set to 300, but every minute I receive errors if the api doesn't 
work correctly.


-- 
Best regards,
Mate200





Re: [openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-22 Thread gordon chung


On 22/06/17 04:23 PM, mate...@mailbox.org wrote:
> Hello everyone !
>
> I'm sorry that I'm disturbing you, but I was sent here from 
> openstack-operators ML.
> On my Mitaka test stack I installed Gnocchi as the database for measurements,
> but I have problems with the api part. Firstly, I ran it directly by
> executing gnocchi-api -p 8041. I noted the warning message and later reran
> the api using the uwsgi daemon. The problem I'm faced with is connection
> errors that appear in ceilometer-collector.log approximately every 5-10
> minutes:
>
> 2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi 
> ConnectFailure: Unable to establish connection to ht
> tp://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-
> eaee8e116312/metric/network.incoming.bytes.rate/measures


is this failing on all your requests or just some? do you have data in 
your gnocchi?

>
> I run uwsgi with the following config:
>
> [uwsgi]
> #http-socket = 127.0.0.1:8000
> http-socket = 10.10.10.69:8041

this should work but i imagine it's not behind a proxy so you could use 
http instead of http-socket.
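
i.e. something like this in your pasted config (a sketch; assumes your uwsgi 
build ships the http router):

[uwsgi]
http = 10.10.10.69:8041
# http-socket = 10.10.10.69:8041   <- keep this form only behind a proxy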

>
> # Set the correct path depending on your installation
> wsgi-file = /usr/local/bin/gnocchi-api
> logto = /var/log/gnocchi/gnocchi-uwsgi.log
>
> master = true
> die-on-term = true
> threads = 1
> # Adjust based on the number of CPU
> processes = 5
> enabled-threads = true
> thunder-lock = true
> plugins = python
> buffer-size = 65535
> lazy-apps = true
>
>
> I don't understand why this happens.
> Maybe I should point wsgi-file as 
> /usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?

/usr/local/bin/gnocchi-api is correct... assuming it's in that path and 
not /usr/bin/gnocchi-api

> From the uwsgi manual I read that direct parsing of http is slow. So maybe I
> need to use apache with the uwsgi module?
>

not sure about this part. do you have a lot of metrics being pushed to 
gnocchi? you can minimise connection requirements by setting 
batch_size/batch_timeout for the collector (i think mitaka should support 
this?). i believe in the gate we have 2 processes assigned to uwsgi so 5 
should be sufficient.

cheers,
-- 
gord


[openstack-dev] [Gnocchi] Difference between Gnocchi-api and uwsgi

2017-06-22 Thread mate200
Hello everyone !

I'm sorry that I'm disturbing you, but I was sent here from openstack-operators 
ML.
On my Mitaka test stack I installed Gnocchi as the database for measurements,
but I have problems with the api part. Firstly, I ran it directly by executing
gnocchi-api -p 8041. I noted the warning message and later reran the api using
the uwsgi daemon. The problem I'm faced with is connection errors that appear
in ceilometer-collector.log approximately every 5-10 minutes:

2017-06-22 12:54:09.751 1846835 ERROR ceilometer.dispatcher.gnocchi 
ConnectFailure: Unable to establish connection to ht
tp://10.10.10.69:8041/v1/resource/generic/c900fd60-0b65-57b5-a481-
eaee8e116312/metric/network.incoming.bytes.rate/measures

I run uwsgi with the following config:

[uwsgi]
#http-socket = 127.0.0.1:8000
http-socket = 10.10.10.69:8041

# Set the correct path depending on your installation
wsgi-file = /usr/local/bin/gnocchi-api
logto = /var/log/gnocchi/gnocchi-uwsgi.log

master = true
die-on-term = true
threads = 1
# Adjust based on the number of CPU
processes = 5
enabled-threads = true
thunder-lock = true
plugins = python
buffer-size = 65535
lazy-apps = true


I don't understand why this happens.
Maybe I should point wsgi-file at
/usr/local/lib/python2.7/dist-packages/gnocchi/rest/app.wsgi ?
From the uwsgi manual I read that direct parsing of http is slow, so maybe I
need to use apache with the uwsgi module?



Thanks in advance.



-- 
Best regards,
Mate200



Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread Mehdi Abaakouk

On Thu, Jun 01, 2017 at 01:46:21PM +0200, Julien Danjou wrote:

On Wed, May 31 2017, gordon chung wrote:


[…]


i'm not entirely sure this is an issue, just thought i'd raise it to
discuss.


It's a really interesting point you raise. I never thought we could do
that but indeed, we could. Maybe we built a great architecture after
all. ;-)

Easy solution: disable refresh. Problem solved.


I have never liked this refresh feature on the API side.

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread gordon chung


On 01/06/17 07:46 AM, Julien Danjou wrote:
> Yes, write a doc or log an issue at least. It's the best way to keep a
> public track of ideas and what's going on, since it's what people are going
> to read and search.

added here: https://github.com/gnocchixyz/gnocchi/issues/60

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] regional incoming storage targets

2017-06-01 Thread Julien Danjou
On Wed, May 31 2017, gordon chung wrote:


[…]

> i'm not entirely sure this is an issue, just thought i'd raise it to 
> discuss.

It's a really interesting point you raise. I never thought we could do
that but indeed, we could. Maybe we built a great architecture after
all. ;-)

Easy solution: disable refresh. Problem solved.

Also that means you could not push measures to the central API endpoint,
or that'd be a problem. There might be a lot of little problems like that
we need to solve.

> regardless, thoughts on maybe writing up deployment strategies like 
> this? or should i make everyone who reads this erase their minds and use 
> this for 'consulting' fees :P

Yes, write doc or log an issue at least. It's the best way to keep a public
track of ideas and what's going on, since it's what people are going
to read and search.

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] regional incoming storage targets

2017-05-31 Thread gordon chung
here's a scenario: i'd like aggregates stored centrally like gnocchi does 
currently with the ceph/swift/s3 drivers, but i want to collect data from 
many different regions spanning the globe. they can all hit the same 
incoming storage but:
- that will be a hell of a lot of load
- a single incoming storage locality might not be optimal for all regions, 
causing writes to take longer than needed for a 'cache' storage
- sending HTTP POSTs with JSON payloads probably uses more bandwidth than 
the binary serialised format gnocchi uses internally.

i'm thinking it'd be good to support the ability to have each region store 
data 'locally' to minimise latency and then have regional metricd agents 
aggregate into a central target. this is technically possible right now 
by just declaring regional (write-only?) APIs with the same storage and 
indexer targets but a different incoming target per region (see the sketch 
below). the problem i think is how to handle coordination_url. it cannot 
be the same coordination_url since that would cause sack locks to overlap. 
if they're different, then i think there's an issue with having a 
centralised API (in addition to regional APIs). specifically, the 
centralised API cannot 'refresh'.
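
as a rough sketch, each region would then run something like this 
(section and option names here are illustrative only, not checked 
against a release):

# region-local gnocchi.conf (hypothetical sketch)
[indexer]
url = postgresql://central-db/gnocchi     # shared central indexer

[storage]
driver = ceph                             # shared central aggregate storage
# whether coordination_url is shared or per-region is exactly the open
# question above

[incoming]
driver = file                             # region-local incoming 'cache'
file_basepath = /var/lib/gnocchi/incoming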

i'm not entirely sure this is an issue, just thought i'd raise it to 
discuss.

regardless, thoughts on maybe writing up deployment strategies like 
this? or should i make everyone who reads this erase their minds and use 
this for 'consulting' fees :P

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi][collectd-ceilometer-plugin][ceilometer][rally] Gates with gnocchi+devstack will break

2017-05-31 Thread Julien Danjou
Hi,

If you're consuming Gnocchi via its devstack in the gate, you'll need to
change that soon. As the repository has been moved to GitHub and the
infra team does not want to depend on (external) GitHub repositories,
you'll need to set up Gnocchi via pip.

I've started doing the work for Ceilometer here:

  https://review.openstack.org/#/c/468844/
  https://review.openstack.org/#/c/468876/

If it can inspire you!

As soon as https://review.openstack.org/#/c/466317/ is merged, your jobs
will break.

Cheers,
-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running tests

2017-05-24 Thread Julien Danjou
On Tue, May 23 2017, aalvarez wrote:

> Ok so started the tests using:
>
> tox -e py27-postgresql-file
>
> The suite starts running fine, but then I get a failing test:

Can you reproduce it each time?

That's weird, I don't think we ever saw that.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running tests

2017-05-23 Thread aalvarez
Ok so started the tests using:

tox -e py27-postgresql-file

The suite starts running fine, but then I get a failing test:

==
Failed 1 tests - output below:
==

gnocchi.tests.test_indexer.TestIndexerDriver.test_list_resources_without_history


Captured traceback:
~~~
Traceback (most recent call last):
  File "gnocchi/tests/base.py", line 57, in skip_if_not_implemented
return func(*args, **kwargs)
  File "gnocchi/tests/test_indexer.py", line 839, in
test_list_resources_without_history
details=True)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_db/api.py",
line 150, in wrapper
ectxt.value = e.inner_exc
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
line 220, in __exit__
self.force_reraise()
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/oslo_db/api.py",
line 138, in wrapper
return f(*args, **kwargs)
  File "gnocchi/indexer/sqlalchemy.py", line 1048, in list_resources
all_resources.extend(q.all())
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2703, in all
return list(self)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2855, in __iter__
return self._execute_and_instances(context)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
line 2878, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 945, in execute
return meth(self, multiparams, params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
line 1046, in _execute_clauseelement
if not self.schema_for_object.is_default else None)
  File "", line 1, in 
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 436, in compile
return self._compiler(dialect, bind=bind, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
line 442, in _compiler
return dialect.statement_compiler(dialect, self, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 435, in __init__
Compiled.__init__(self, dialect, statement, **kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 216, in __init__
self.string = self.process(self.statement, **compile_kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 242, in process
return obj._compiler_dispatch(self, **kwargs)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py",
line 81, in _compiler_dispatch
return meth(self, **kw)
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 1716, in visit_select
for name, column in select._columns_plus_names
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/compiler.py",
line 1488, in _label_select_column
**column_clause_args
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py",
line 75, in _compiler_dispatch
def _compiler_dispatch(self, visitor, **kw):
  File
"/home/ubuntu/workspace/gnocchi/.tox/py27-postgresql-file/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
line 52, in signal_handler
raise TimeoutException()
fixtures._fixtures.timeout.TimeoutException



==
Totals

Re: [openstack-dev] [gnocchi] Running tests

2017-05-23 Thread Julien Danjou
On Tue, May 23 2017, Andres Alvarez wrote:

> Hello everyone
>
> I am having a hard time understanding the correct way to run the tests
> in Gnocchi. I have already read about tox and testr, but it seems I still
> can't get the tests to run.

tox -e py27-postgresql-file

That'll run the tests with PostgreSQL and the file storage backend.

tox -e py35-mysql-ceph

For MySQL and Ceph, etc.

That's all you need.
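
If you want to narrow it down, arguments after "--" are normally passed
through to the test runner (assuming the usual tox/testr wiring), e.g.:

  tox -e py27-postgresql-file -- gnocchi.tests.test_indexer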

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] Running tests

2017-05-22 Thread Andres Alvarez
Hello everyone

I am having a hard time understanding the correct way to run the tests
in Gnocchi. I have already read about tox and testr, but it seems I still
can't get the tests to run.

Would really appreciate if someone could explain the steps necessary to get
all tests running.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Migration to GitHub

2017-05-19 Thread Julien Danjou
On Thu, May 18 2017, Julien Danjou wrote:

> I've started to migrate Gnocchi itself to GitHub. The Launchpad bugs
> have been re-created at https://github.com/gnocchixyz/gnocchi/issues and
> I'll move the repository as soon as all opened reviews are merged.

Everything has been merged today so the repository is now live at
GitHub.
The rest of the deprecation patches are up for review:
  
https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:jd/move-gnocchi-out

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-19 Thread aalvarez
Understood. 

I have submitted a patch to pbr for review here:
https://review.openstack.org/#/c/466225/



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135095.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-19 Thread Julien Danjou
On Thu, May 18 2017, aalvarez wrote:

> Yes, but doesn't Pecan allow using a development server (pecan serve) that
> accepts interface and port options? I thought this would be the
> test/development server Gnocchi would use.

We could, but there's no need: it's just one line to rely on pbr's
WSGI server. I'm sure adding the feature to pbr and staying framework
agnostic would not be a big deal. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
Yes, but doesn't Pecan allow using a development server (pecan serve) that
accepts interface and port options? I thought this would be the
test/development server Gnocchi would use.



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135081.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] Migration to GitHub

2017-05-18 Thread Julien Danjou
Hi,

I've started to migrate Gnocchi itself to GitHub. The Launchpad bugs
have been re-created at https://github.com/gnocchixyz/gnocchi/issues and
I'll move the repository as soon as all opened reviews are merged.

Cheers,
-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, Hanxi Liu wrote:

> Ceilometer, Gnocchi, Aodh all use pbr, so the port is 8000 by default.
>
> I guess we also should hardcode Gnocchi's port in the RDO project, together
> with Aodh.
> I proposed patches for Aodh and Gnocchi:
>
> https://review.rdoproject.org/r/#/c/5848/
> https://review.rdoproject.org/r/#/c/5847/
>
> But hguemar suggests not hardcoding the port.
>
> What do you think about this?

Port for HTTP is 80. The rest, unless assigned by IANA, is fantasy. :-)

You can make all your (OpenStack) services run under
http://example.com/openstack, e.g. http://example.com/openstack/metric
for Gnocchi, if you want. So there's no good reason to assign a port to
a service any more than there is to assign a hardcoded URL prefix.
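
For example, with uwsgi's mount option (a sketch only, not a
Gnocchi-specific recipe):

[uwsgi]
http-socket = :80
# serve the Gnocchi WSGI application under the /metric prefix
mount = /metric=/usr/local/bin/gnocchi-api
manage-script-name = true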

So I think I'd agree with Haikel here. But ultimately, in the RDO case,
the packaging files should leverage a real WSGI Web server like uwsgi if
they want to start the service, rather than defaulting *all* packages to
the same pbr default, which will conflict and is a bad user experience.

Nobody asks what the default port of e.g. phpMyAdmin is. The same should go
for OpenStack services ultimately. Unfortunately, the bad habit has
spread from the early days of Nova and Swift using a port and not
providing a WSGI file.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Thu, May 18 2017, aalvarez wrote:

> I thought the API was based on and mounted by Pecan? Isn't there a way to
> pass these options to Pecan?

Pecan is an API framework, not an HTTP server.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
I thought the API was based on and mounted by Pecan? Isn't there a way to
pass these options to Pecan?



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135012.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Hanxi Liu
On Thu, May 18, 2017 at 3:06 PM, Julien Danjou  wrote:

> On Wed, May 17 2017, aalvarez wrote:
>
> > I do not need this functionality for production, but for testing. I
> > think it would be nice if we can specify the interface for the
> > gnocchi-api even for test purposes, just like the port.
>
> Feel free to send a patch. This is provided by pbr so that's where you
> should send the patch:
>
>   https://docs.openstack.org/developer/pbr/

Hi jd,

Ceilometer, Gnocchi, Aodh all use pbr, so the port is 8000 by default.

I guess we also should hardcode Gnocchi's port in the RDO project, together
with Aodh.
I proposed patches for Aodh and Gnocchi:

https://review.rdoproject.org/r/#/c/5848/
https://review.rdoproject.org/r/#/c/5847/

But hguemar suggests not hardcoding the port.

What do you think about this?

Cheers,
Hanxi Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread Julien Danjou
On Wed, May 17 2017, aalvarez wrote:

> I do not need this functionality for production, but for testing. I think it
> would be nice if we can specify the interface for the gnocchi-api even for
> test purposes, just like the port.

Feel free to send a patch. This is provided by pbr so that's where you
should send the patch:

  https://docs.openstack.org/developer/pbr/

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-18 Thread aalvarez
I do not need this functionality for production, but for testing. I think it
would be nice if we can specify the interface for the gnocchi-api even for
test purposes, just like the port.



--
View this message in context: 
http://openstack.10931.n7.nabble.com/gnocchi-Running-Gnocchi-API-in-specific-interface-tp135004p135008.html
Sent from the Developer mailing list archive at Nabble.com.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread Mehdi Abaakouk

Hi,

On Thu, May 18, 2017 at 11:14:06AM +0800, Andres Alvarez wrote:

Hello folks

The gnocchi-api command allows running the API server using a specific port:

usage: gnocchi-api [-h] [--port PORT] -- [passed options]

positional arguments:
 -- [passed options]   '--' is the separator of the arguments used to start
   the WSGI server and the arguments passed to the WSGI
   application.

optional arguments:
 -h, --helpshow this help message and exit
 --port PORT, -p PORT  TCP port to listen on (default: 8000)

I was wondering if it's possible to use a specific interface as well? (In
my case, I am working on a cloud dev environment, so I need 0.0.0.0.)


gnocchi-api is for testing purposes; for production or any advanced HTTP
server usage, I would recommend using the WSGI application inside a
real HTTP server. You can find an example with uwsgi here:
http://gnocchi.xyz/running.htm
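
A minimal sketch along those lines, assuming the WSGI file is the
installed gnocchi-api script:

uwsgi --plugin python \
  --http-socket 0.0.0.0:8041 \
  --wsgi-file /usr/local/bin/gnocchi-api \
  --master --processes 4 --threads 1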

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] Running Gnocchi API in specific interface

2017-05-17 Thread Andres Alvarez
Hello folks

The gnocchi-api command allows running the API server using a specific port:

usage: gnocchi-api [-h] [--port PORT] -- [passed options]

positional arguments:
  -- [passed options]   '--' is the separator of the arguments used to start
the WSGI server and the arguments passed to the WSGI
application.

optional arguments:
  -h, --helpshow this help message and exit
  --port PORT, -p PORT  TCP port to listen on (default: 8000)

I was wondering if it's possible to use a specific interface as well? (In
my case, I am working on a cloud dev environment, so I need 0.0.0.0.)

If not, would this be a welcomed change for a pull request?

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi

2017-05-04 Thread simona marinova
Hello Julien,


Sorry for the late reply.

We uninstalled Gnocchi, because we tried to follow the latest update on 
OpenStack Newton which includes Ceilometer with MongoDB and Alarming with MySQL.


Ceilometer now works, but only the commands which do not include Alarming give 
the correct output, for example "ceilometer meter-list", "ceilometer 
resource-list" etc.


 The Alarming service doesn't work at this point. For example the command 
"ceilometer alarm-list" gives the error:


HTTPConnectionPool(host='controller', port=8042): Max retries exceeded with 
url: /v2/alarms (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
Connection refused',))

Now our biggest concern is that the Alarming service database (MySQL-based) and 
the Telemetry service database (MongoDB) are not communicating properly. Is it 
possible for Aodh to access the data from MongoDB?

Additionally, aodh-dbsync gives an error because it cannot find the 
gnocchiclient module. There aren't gnocchi modules involved in this version.

What kind of configuration needs to be done in order for Telemetry and Alarming 
to work properly?

Best regards,
Simona




From: Julien Danjou <jul...@danjou.info>
Sent: Wednesday, April 26, 2017 3:15 PM
To: simona marinova
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Gnocchi

On Wed, Apr 26 2017, simona marinova wrote:

Hi Simona,

> I am a student working on a project that involves an OpenStack Newton 
> platform.
>
> Currently, we are trying to implement the Data Collection service. We saw that
> Gnocchi is recommended for this purpose, and we installed it.
>
> Now we have problems with the configuration.
>
> I have tried to configure the basic parameters, but the same errors appear 
> over and over.
>
> Until this point, every installation and configuration of the services in
> OpenStack is done exactly the same as shown in the official OpenStack
> documentation.
>
>  I am sending you a screenshot of the output when I try to run gnocchi.
>
>
> Can you help me with a basic configuration or some advice?

It looks like you set your Swift URL to a Keystone URL something like
that. Could you join your gnocchi.conf file?

--
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-05-01 Thread gordon chung


On 29/04/17 08:14 AM, Julien Danjou wrote:
> 1. get 'deleted' metrics
> 2. delete all things in storage
>   -> if it fails, whatever, ignore, maybe a janitor is doing the same
>   thing?
> 3. expunge from indexer

possibly? i was thinking it was possible that it would partially 
delete and could not delete the rest on a second go, but i guess i'll need 
to look at that and see if we can do that.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-29 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

>>> refresh i believe is always disabled by default regardless of what
>>> interface you're using.
>>
>> You gotta show me where it is 'cause I can't see that and I don't
>> recall any option for that. :/
>
> https://github.com/openstack/gnocchi/commit/72a2091727431555eba65c6ef8ff89448f3432f0
>  
>
>
> although now that i check, i see it's blocking by default... which means 
> we did guarantee all measures would be returned.

I guess we misunderstood each other. What I meant originally is that we
did not have a flag to "disable its usage completely", because from an
operator's point of view it could be an easier way to provoke a DoS.

> hmmm. true. i'm still hoping we don't have to lock an entire sack for 
> one metric and not return an error status just because it can't lock. 
> doesn't seem like a good experience.

If you wait long enough, you always can return. :)

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-29 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

>>> if the sack is unlocked, then it means a processing worker isn't looking
>>> at the sack, and when it does lock the sack, it first has to check the sack
>>> for existing measures to process and then check the indexer to validate that
>>> they are still active. because it checks the indexer later, even if both
>>> the janitor and processing worker check the lock at the same time, we can
>>> guarantee the indexer state the processing worker sees is > 00:00:00, since
>>> the janitor has state from before getting the lock, while the processing
>>> worker has state from sometime after getting the lock.
>>
>> My brain hurts but that sounds perfect. That even means we potentially
>> did not have to lock currently, sack or no sack.
>>
>
> oh darn, i didn't consider multiple janitors... so this only works if we 
> make janitor completely separate and only allow one janitor ever. back 
> to square 1

Not sure it's a big deal if you have multiple janitors. Do you have a
scenario in mind where we'd have a problem?
Since it's mainly:
1. get 'deleted' metrics
2. delete all things in storage
  -> if it fails, whatever, ignore, maybe a janitor is doing the same
  thing?
3. expunge from indexer

Did I miss something, hm?

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:50 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> if the sack is unlocked, then it means a processing worker isn't looking
>> at the sack, and when it does lock the sack, it first has to check the sack
>> for existing measures to process and then check the indexer to validate that
>> they are still active. because it checks the indexer later, even if both
>> the janitor and processing worker check the lock at the same time, we can
>> guarantee the indexer state the processing worker sees is > 00:00:00, since
>> the janitor has state from before getting the lock, while the processing
>> worker has state from sometime after getting the lock.
>
> My brain hurts but that sounds perfect. That even means we potentially
> did not have to lock currently, sack or no sack.
>

oh darn, i didn't consider multiple janitors... so this only works if we 
make janitor completely separate and only allow one janitor ever. back 
to square 1

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:50 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> if the sack is unlocked, then it means a processing worker isn't looking
>> at the sack, and when it does lock the sack, it first has to check the sack
>> for existing measures to process and then check the indexer to validate that
>> they are still active. because it checks the indexer later, even if both
>> the janitor and processing worker check the lock at the same time, we can
>> guarantee the indexer state the processing worker sees is > 00:00:00, since
>> the janitor has state from before getting the lock, while the processing
>> worker has state from sometime after getting the lock.
>
> My brain hurts but that sounds perfect. That even means we potentially
> did not have to lock currently, sack or no sack.
>

yeah, i think you're right.

success! confused you with my jumble of random words.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:11 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> refresh i believe is always disabled by default regardless of what
>> interface you're using.
>
> You gotta show me where it is 'cause I can't see that and I don't
> recall any option for that. :/

https://github.com/openstack/gnocchi/commit/72a2091727431555eba65c6ef8ff89448f3432f0
 


although now that i check, i see it's blocking by default... which means 
we did guarantee all measures would be returned.

>
>> in the case of cross-metric aggregations, is this a timeout for the entire
>> request or a per-metric timeout? i think it's going to get quite chaotic
>> in the multiple-metric (multiple-sack) refresh. :(
>
> Right I did not think about multiple refresh. Well it'll be a nice
> slalom of lock acquiring. :-)
>
>> i'm hoping not to have a timeout because i imagine there will be
>> scenarios where we block trying to lock the sack, and when we finally get
>> the sack lock, we find there is no work for us. this means we just added x
>> seconds to the response for no reason.
>
> Right, I see your point. Though we _could_ enhance refresh to first
> check if there's any job to do. It's lock-free. Just checking. :)
>

hmmm. true. i'm still hoping we don't have to lock an entire sack for 
one metric and not return an error status just because it can't lock. 
doesn't seem like a good experience.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> if the sack is unlocked, then it means a processing worker isn't looking 
> at the sack, and when it does lock the sack, it first has to check the sack 
> for existing measures to process and then check the indexer to validate that 
> they are still active. because it checks the indexer later, even if both 
> the janitor and processing worker check the lock at the same time, we can 
> guarantee the indexer state the processing worker sees is > 00:00:00, since 
> the janitor has state from before getting the lock, while the processing 
> worker has state from sometime after getting the lock.

My brain hurts but that sounds perfect. That even means we potentially
did not have to lock currently, sack or no sack.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> refresh i believe is always disabled by default regardless of what 
> interface you're using.

You gotta show me where it is 'cause I can't see that and I don't
recall any option for that. :/

> in the case of cross-metric aggregations, is this a timeout for the entire 
> request or a per-metric timeout? i think it's going to get quite chaotic 
> in the multiple-metric (multiple-sack) refresh. :(

Right I did not think about multiple refresh. Well it'll be a nice
slalom of lock acquiring. :-)

> i'm hoping not to have a timeout because i imagine there will be 
> scenarios where we block trying to lock the sack, and when we finally get 
> the sack lock, we find there is no work for us. this means we just added x 
> seconds to the response for no reason.

Right, I see your point. Though we _could_ enhance refresh to first
check if there's any job to do. It's lock-free. Just checking. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 09:23 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> what if we just never lock on delete, but still check the lock to
>> see if the sack is being processed? at that point, the janitor knows the
>> metric(s) are deleted, and since no one else is working on the sack, any
>> metricd that follows will also know that the metric(s) are deleted and
>> therefore won't work on them.
>
> I did not understand your whole idea, can you detail a bit more?
> Though the "check lock" approach usually does not work and is a source
> of race conditions, but again, I did not get the whole picture. :)
>

my thinking is that, when the janitor goes to process a sack, it means 
it has indexer state from time 00:00:00.

if the sack is locked, then it means a processing worker is looking at 
sack and has indexer state from time <= 00:00:00.

if the sack is unlocked, then it means a processing worker isn't looking 
at the sack, and when it does lock the sack, it first has to check the sack 
for existing measures to process and then check the indexer to validate that 
they are still active. because it checks the indexer later, even if both 
the janitor and processing worker check the lock at the same time, we can 
guarantee the indexer state the processing worker sees is > 00:00:00, since 
the janitor has state from before getting the lock, while the processing 
worker has state from sometime after getting the lock.
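
to make the ordering concrete, a sketch of the worker side (made-up 
incoming/indexer helpers, not the actual gnocchi internals):

def process_sack(sack, lock, incoming, indexer, aggregate):
    with lock:                                  # acquired only after the janitor released it
        queued = incoming.list_measures(sack)   # see what is still queued for this sack
        active = indexer.filter_active(queued)  # indexer is re-read *after* taking the lock
        aggregate(active)                       # metrics the janitor expunged drop out here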

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 09:19 AM, Julien Danjou wrote:
> That's not what I meant. You can have the same mechanism as currently,
> but then you compute the sacks of all metrics and you
> itertools.groupby() per sack on them before locking the sack and
> expunging them.

yeah, we should do that. i'll add that as a patch.

>
>> > refresh is currently disabled by default so i think we're ok.
> Well you mean it's disabled by default _in the CLI_, not in the API
> right?

refresh i believe is always disabled by default regardless of what 
interface you're using.

>
>> > what's the timeout for? timeout api's attempt to aggregate metric? i
>> > think it's a bad experience if we add any timeout since i assume it will
>> > still return what it can return and then the results become somewhat
>> > ambiguous.
> No, I meant timeout for grabbing the sack's lock. You wouldn't return a
> 2xx but a 5xx stating the API is unable to compute stuff right now, so
> try again without refresh or something.

in the case of cross-metric aggregations, is this a timeout for the entire 
request or a per-metric timeout? i think it's going to get quite chaotic 
in the multiple-metric (multiple-sack) refresh. :(

i'm hoping not to have a timeout because i imagine there will be 
scenarios where we block trying to lock the sack, and when we finally get 
the sack lock, we find there is no work for us. this means we just added x 
seconds to the response for no reason.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> what if we just never lock on delete, but still check the lock to 
> see if the sack is being processed? at that point, the janitor knows the 
> metric(s) are deleted, and since no one else is working on the sack, any 
> metricd that follows will also know that the metric(s) are deleted and 
> therefore won't work on them.

I did not understand your whole idea, can you detail a bit more?
Though the "check lock" approach usually does not work and is a source
of race conditions, but again, I did not get the whole picture. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> so the tradeoff here is that now we're doing a lot more calls to 
> indexer. additionally, we're pulling a lot more unused results from db.
> a single janitor currently just grabs all deleted metrics and starts 
> attempting to clean them up one at a time. if we merge, we will have n
> calls to the indexer, where n is the number of workers, each pulling in all the
> deleted metrics, and then checking to see if the metric is in its sack,
> and if not, moving on. that's a lot of extra, wasted work. we could 
> reduce that work by adding sack information to indexer ;) but that will 
> still add significantly more calls to indexer (which we could reduce by 
> not triggering cleanup every job interval)

That's not what I meant. You can have the same mechanism as currently,
but then you compute the sacks of all metrics and you
itertools.groupby() per sack on them before locking the sack and
expunging them.
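
Something like this (a sketch; the indexer/storage helpers are made-up
names, not the real API):

import itertools

def expunge_deleted(indexer, storage, sack_for_metric):
    # sort first: groupby() only groups consecutive items
    metrics = sorted(indexer.list_metrics(status='delete'),
                     key=sack_for_metric)
    for sack, group in itertools.groupby(metrics, key=sack_for_metric):
        with storage.lock_sack(sack):  # one lock+delete per sack, not per metric
            for metric in group:
                storage.delete_metric(metric)
                indexer.expunge_metric(metric.id)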

> refresh is currently disabled by default so i think we're ok.

Well you mean it's disabled by default _in the CLI_, not in the API
right?

> what's the timeout for? a timeout on the api's attempt to aggregate the metric? i 
> think it's a bad experience if we add any timeout since i assume it will 
> still return what it can return and then the results become somewhat 
> ambiguous.

No, I meant timeout for grabbing the sack's lock. You wouldn't return a
2xx but a 5xx stating the API is unable to compute stuff right now, so
try again without refresh or something.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, Javier Pena wrote:

> I see none of the tags or branches have been moved over, could they be copied?
> It would of great help to packagers.

Oh for sure, that's my mistake, I pushed all branches and old tags!
Thanks for noting!

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 03:48 AM, Julien Danjou wrote:
> Yes, I wrote that in a review somewhere. We need to rework 1. so
> deletion happens at the same time we lock the sack to process metrics
> basically. We might want to merge the janitor into the worker I imagine.
> Currently a janitor can grab metrics and do dumb things like:
> - metric1 from sackA
> - metric2 from sackB
> - metric3 from sackA
>
> and do 3 different lock+delete -_-

what if we just don't lock ever on delete, but if we still check lock to 
see if it's processed. at that point, the janitor knows that metric(s) 
is deleted, and since no one else is working on sack, any metricd that 
follows will also know that the metric(s) are deleted and therefore, 
won't work on it.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 03:48 AM, Julien Danjou wrote:
>
> Yes, I wrote that in a review somewhere. We need to rework 1. so
> deletion happens at the same time we lock the sack to process metrics
> basically. We might want to merge the janitor into the worker I imagine.
> Currently a janitor can grab metrics and do dumb things like:
> - metric1 from sackA
> - metric2 from sackB
> - metric3 from sackA
>
> and do 3 different lock+delete -_-

so the tradeoff here is that now we're doing a lot more calls to the 
indexer. additionally, we're pulling a lot more unused results from the db.
a single janitor currently just grabs all deleted metrics and starts 
attempting to clean them up one at a time. if we merge, we will have n 
calls to the indexer, where n is the number of workers, each pulling in all 
the deleted metrics, and then checking to see if the metric is in its sack, 
and if not, moving on. that's a lot of extra, wasted work. we could 
reduce that work by adding sack information to the indexer ;) but that will 
still add significantly more calls to the indexer (which we could reduce by 
not triggering cleanup every job interval)


>>
>> alternatively, this could be solved by keeping per-metric locks in
>> addition to per-sack locks. this would effectively double the number of
>> active locks we have: in addition to its single per-sack lock, each
>> metricd worker would also hold a per-metric lock for whatever metric
>> it may be publishing at the time.
>
> If we got a timeout set for scenario 3, I'm not that worried. I guess the
> worst thing is that people would be unhappy with the API spending time
> doing computation anyway, so we'd need to rework how refresh works or add
> an ability to disable it.
>

refresh is currently disabled by default so i think we're ok.

what's the timeout for? a timeout on the api's attempt to aggregate the 
metric? i think it's a bad experience if we add any timeout since i assume 
it will still return what it can return and then the results become somewhat 
ambiguous.

now that i think about it more, this issue still exists in the per-metric 
scenario (but to a lesser extent). 'refresh' can still be blocked by 
metricd, but it's just a significantly smaller chance and the window for 
missed unprocessed measures is smaller.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Javier Pena


- Original Message -
> On Tue, Apr 25 2017, Julien Danjou wrote:
> 
> > We're in the process of moving python-gnocchiclient to GitHub. The
> > patches are up for review:
> >
> >   https://review.openstack.org/#/c/459748/
> >
> > and its dependencies need to be merged before this happen. As soon as
> > this patch is merged, the repository will officially be available at:
> >
> >   https://github.com/gnocchixyz/python-gnocchiclient
> 
> This happened! The repository has now been moved to GitHub. I've also
> created copies of the existing opened bugs there so we don't lose that
> data.
> 
> If I missed anything, let me know.
> 

Hi Julien,

I see none of the tags or branches have been moved over, could they be copied? 
It would be of great help to packagers.

Thanks,
Javier

> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Julien Danjou
On Tue, Apr 25 2017, Julien Danjou wrote:

> We're in the process of moving python-gnocchiclient to GitHub. The
> patches are up for review:
>
>   https://review.openstack.org/#/c/459748/
>
> and its dependencies need to be merged before this happen. As soon as
> this patch is merged, the repository will officially be available at:
>
>   https://github.com/gnocchixyz/python-gnocchiclient

This happened! The repository has now been moved to GitHub. I've also
created copies of the existing opened bugs there so we don't lose that
data.

If I missed anything, let me know.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Thu, Apr 27 2017, gordon chung wrote:

> so as we transition to the bucket/shard/sack framework for incoming 
> writes, we've set up locks on the sacks so we only have one process 
> handling any given sack. this allows us to remove the per-metric locking 
> we had previously.

yay!

> the issue i've noticed now is that if we only have per-sack locking, 
> metric-based actions can affect sack-level processing. for example:
>
> scenario 1:
> 1. delete metric, locks sack to delete single metric,
> 2. metricd processor attempts to process entire sack but can't, so skips.

Yes, I wrote that in a review somewhere. We need to rework 1. so
deletion happens at the same time we lock the sack to process metrics
basically. We might want to merge the janitor into the worker I imagine.
Currently a janitor can grab metrics and do dumb things like:
- metric1 from sackA
- metric2 from sackB
- metric3 from sackA

and do 3 different lock+delete -_-

> scenario 2:
> 1. API request passes in 'refresh' param so they want all unaggregated 
> metrics to be processed on demand and returned.
> 2. API locks 1 or more sacks to process 1 or more metrics
> 3. metricd processor attempts to process entire sack but can't, so 
> skips. potentially multiple sacks unprocessed in currently cycle.
>
> scenario 3
> same as scenario 2 but metricd processor locks first, and either blocks
> API process OR  doesn't allow API to guarantee 'all measures processed'.

Yes, I'm even more worried about scenario 3, we should probably add a
safeguard timeout parameter set by the admin there.

> i imagine these scenarios are not critical unless a very large 
> processing interval is defined or if for some unfortunate reason, the 
> metric-based actions are perfectly timed to lock out background processing.
>
> alternatively, this could be solved by keeping per-metric locks in 
> addition to per-sack locks. this would effectively double the number of 
> active locks we have: in addition to its single per-sack lock, each 
> metricd worker would also hold a per-metric lock for whatever metric 
> it may be publishing at the time.

If we got a timeout set for scenario 3, I'm not that worried. I guess the
worst thing is that people would be unhappy with the API spending time
doing computation anyway, so we'd need to rework how refresh works or add
an ability to disable it.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-27 Thread gordon chung
hey,

so as we transition to the bucket/shard/sack framework for incoming 
writes, we've set up locks on the sacks so we only have one process 
handling any given sack. this allows us to remove the per-metric locking 
we had previously.

the issue i've noticed now is that if we only have per-sack locking, 
metric-based actions can affect sack-level processing. for example:

scenario 1:
1. delete metric, locks sack to delete single metric,
2. metricd processor attempts to process entire sack but can't, so skips.

OR

scenario 2:
1. API request passes in 'refresh' param so they want all unaggregated 
metrics to be processed on demand and returned.
2. API locks 1 or more sacks to process 1 or more metrics
3. metricd processor attempts to process entire sack but can't, so 
skips. potentially multiple sacks unprocessed in currently cycle.

scenario 3
same as scenario 2 but metricd processor locks first, and either blocks
API process OR  doesn't allow API to guarantee 'all measures processed'.

i imagine these scenarios are not critical unless a very large 
processing interval is defined or if for some unfortunate reason, the 
metric-based actions are perfectly timed to lock out background processing.

alternatively, this could be solved by keeping per-metric locks in 
addition to per-sack locks. this would effectively double the number of 
active locks we have: in addition to its single per-sack lock, each 
metricd worker would also hold a per-metric lock for whatever metric 
it may be publishing at the time.
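
for reference, per-sack locking with tooz looks roughly like this (a 
sketch with made-up sack naming, not the actual implementation):

from tooz import coordination

coord = coordination.get_coordinator('redis://localhost:6379', b'metricd-0')
coord.start()

def try_process_sack(sack_id):
    lock = coord.get_lock(b'gnocchi-sack-%d' % sack_id)
    if not lock.acquire(blocking=False):  # another worker (or the API) owns the sack
        return False                      # skip it this cycle
    try:
        pass  # process every metric with unaggregated measures in this sack
    finally:
        lock.release()
    return True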


cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi

2017-04-26 Thread Julien Danjou
On Wed, Apr 26 2017, simona marinova wrote:

Hi Simona,

> I am a student working on a project that involves an OpenStack Newton 
> platform.
>
> Currently, we are trying to implement the Data Collection service. We saw that
> Gnocchi is recommended for this purpose, and we installed it.
>
> Now we have problems with the configuration.
>
> I have tried to configure the basic parameters, but the same errors appear 
> over and over.
>
> Until this point, every installation and configuration of the services in
> OpenStack is done exactly the same as shown in the official OpenStack
> documentation.
>
>  I am sending you a screenshot of the output when I try to run gnocchi.
>
>
> Can you help me with a basic configuration or some advice?

It looks like you set your Swift URL to a Keystone URL something like
that. Could you join your gnocchi.conf file?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gnocchi

2017-04-26 Thread simona marinova
Hello Mr/Mrs,


I am a student working on a project that involves an OpenStack Newton platform.

Currently, we are trying to implement the Data Collection service. We saw that 
Gnocchi is recommended for this purpose, and we installed it.

Now we have problems with the configuration.

I have tried to configure the basic parameters, but the same errors appear over 
and over.

Until this point, every installation and configuration of the services in 
OpenStack is done exactly the same as shown in the official OpenStack 
documentation.

 I am sending you a screenshot of the output when I try to run gnocchi.


Can you help me with a basic configuration or some advice?


Thank you in advance.


Kind regards,

Simona Marinova


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-25 Thread Julien Danjou
Hi,

We're in the process of moving python-gnocchiclient to GitHub. The
patches are up for review:

  https://review.openstack.org/#/c/459748/

and its dependencies need to be merged before this happen. As soon as
this patch is merged, the repository will officially be available at:

  https://github.com/gnocchixyz/python-gnocchiclient

Cheers,
-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 10:42 AM, Julien Danjou wrote:
> On Tue, Apr 18 2017, gordon chung wrote:
>
>> do we want it configurable? tbh, would anyone configure it or know 
>> how to configure it? even for us, we're just guessing somewhat, lol. i'm 
>> going to leave it static for now.
>
> I think we want it to be configurable, though most people would probably
> not tweak it. But I can imagine some setups where increasing it would
> make sense.
> There's some sense in exposing it anyway, even if it does not change
> much. For example, we never exposed TASKS_PER_WORKER but in the end it
> seems the 16 value is not optimal. But since we did not expose it,
> there's barely any way for a tester to try to tweak it and see what value
> works best. :)

well argued. take it :)

>
> You're completely right, we needed to discuss that anyway. All your
> patch versions and attempts build up our knowledge and expertise on the
> subject, so it was definitely worth the effort, and kudos go to you for
> taking on that job!
>
> What will probably make you go back to hashring?

i think your argument that hashring can be configured to effectively 
"work on everything" was a good one.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Tue, Apr 18 2017, gordon chung wrote:

> do we want it configurable? tbh, would anyone configure it or know 
> how to configure it? even for us, we're just guessing somewhat. lol, i'm 
> going to leave it static for now.

I think we want it to be configurable, though most people would probably
not tweak it. But I can imagine some setups where increasing it would
make sense.
There's some sense in exposing it anyway, even if it does not change
much. For example, we never exposed TASKS_PER_WORKER but in the end it
seems the 16 value is not optimal. But since we did not expose it,
there's barely any way for a tester to try to tweak it and see what value
works best. :)

> this is not really free. choosing hashring means we will have idle
> workers and more complexity in figuring out what each of the other
> agents looks like in the group. it's a trade-off (especially considering how
> few keys to nodes we have), which is why i brought up the question.

You're right, it's a trade-off. I think we can go with the non-hashring
approach you implemented for now, and see if/how we need to evolve it.
It seems to be better than the current scheduling anyway.

> i'll be honest, i'll probably still switch back to hashring... but i want
> to make sure we're not just thinking hashring only.

You're completely right, we needed to discuss that anyway. All your
patch versions and attempts build up our knowledge and expertise on the
subject, so it was definitely worth the effort, and kudos go to you for
taking on that job!

What will probably make you go back to hashring?

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 08:44 AM, Julien Danjou wrote:

>
> Live upgrade has never been supported in Gnocchi, so I don't see how
> that's a problem. It'd be cool to support it for sure, but we're far
> from having been able to implement it at any point in time in the past.
> So it's not a new issue or anything like that. I really don't see
> a problem with loading the number of sacks at startup.
>

it's a problem if you don't do a full shutdown of every single gnocchi 
service. my main concern is that this is not a 'lose data' situation like 
when you screw up any of the other options; this will corrupt your storage. 
i'll set the live upgrade discussion aside for now to not get sidetracked.


>
> I think it's worth it only if you use replicas – and I don't think 2 is
> enough, I'd try 3 at least, and make it configurable. It'll reduce
> lock contention a lot (e.g. by 17x in my previous example).

i could get the same reduction in lock contention if i added basic 
partitioning :P

> As far as I'm concerned, since the number of replicas is configurable,
> you can add a knob that would set replicas=number_of_metricd_worker that
> would implement the current behaviour you implemented – every worker
> tries to grab every sack.

do we want it configurable? tbh, would anyone configure it or know 
how to configure it? even for us, we're just guessing somewhat. lol, i'm 
going to leave it static for now.

>
> We're not leveraging the re-balancing aspect of hashring, that's true.
> We could probably use any dumber system to spread sacks across workers.
> We could stick to the good ol' "len(sacks) / len(workers in the group)".
>
> But I think there's a couple of things down the road that may help us:
> Using the hashring makes sure worker X does not jump from sacks [A, B,
> C], to [W, X, Y, Z] but just to [A, B] or [A, B, C, X]. That should
> minimize lock contention when bringing up/down new workers. I admit it's
> a very marginal win, but… it comes free with it.
> Also, I envision a push-based approach in the future (to replace the
> metric_processing_delay) which will require workers to register to
> sacks. Making sure the rebalancing does not shake everything but is
> rather smooth will also reduce workload around that. Again, it comes
> free.
>

this is not really free. choosing hashring means we will have idle 
workers and more complexity in figuring out what each of the other 
agents looks like in the group. it's a trade-off (especially considering 
how few keys to nodes we have), which is why i brought up the question.

i'll be honest, i'll probably still switch back to hashring... but i want 
to make sure we're not just thinking hashring only.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Tue, Apr 18 2017, gordon chung wrote:

> the issue i see is not with how the sacks will be assigned to metricd
> but how metrics (not the daemon) are assigned to sacks. i don't think
> storing the value in the storage object solves the issue, because when
> would we load/read it when the api and metricd processes start up? it
> seems this would require: 1) all services to be shut down, and 2) a
> completely clean incoming storage path. if either of those two steps
> isn't done, you have corrupt incoming storage. if this is a requirement
> and both of these are done successfully, it means any kind of 'live
> upgrade' is impossible in gnocchi.

Live upgrade has never been supported in Gnocchi, so I don't see how
that's a problem. It'd be cool to support it for sure, but we're far
from having been able to implement it at any point in time in the past.
So it's not a new issue or anything like that. I really don't see
a problem with loading the number of sacks at startup.

> i did test w/ 2 replicas (see: google sheet) and it's still
> non-uniform but better than without replicas: ~4%-30% vs ~8%-45%. we
> could also minimise the number of lock calls by dividing sacks across
> workers per agent.
>
> going to play devil's advocate now: using hashring in our use case will
> always hurt throughput (even with perfect distribution, since the sack
> contents themselves are not uniform). returning to the original question,
> is using hashring worth it? i don't think we're even leveraging the
> re-balancing aspect of hashring.

I think it's worth it only if you use replicas – and I don't think 2 is
enough, I'd try 3 at least, and make it configurable. It'll reduce
lock contention a lot (e.g. by 17x in my previous example).
As far as I'm concerned, since the number of replicas is configurable,
you can add a knob that would set replicas=number_of_metricd_worker that
would implement the current behaviour you implemented – every worker
tries to grab every sack.

We're not leveraging the re-balancing aspect of hashring, that's true.
We could probably use any dumber system to spread sacks across workers.
We could stick to the good ol' "len(sacks) / len(workers in the group)".
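
As a minimal sketch of that modulo spread (illustrative only, not
Gnocchi's actual code; the helper name is made up):

    def sacks_for_worker(worker_index, total_workers, num_sacks=2048):
        # each worker takes the sacks whose number is congruent to its
        # own index modulo the number of workers in the group
        return [s for s in range(num_sacks)
                if s % total_workers == worker_index]

Note that adding or removing a worker reshuffles most assignments, which
is exactly the cost the hashring avoids.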

But I think there's a couple of things down the road that may help us:
Using the hashring makes sure worker X does not jump from sacks [A, B,
C], to [W, X, Y, Z] but just to [A, B] or [A, B, C, X]. That should
minimize lock contention when bringing up/down new workers. I admit it's
a very marginal win, but… it comes free with it.
Also, I envision a push-based approach in the future (to replace the
metric_processing_delay) which will require workers to register to
sacks. Making sure the rebalancing does not shake everything but is
rather smooth will also reduce workload around that. Again, it comes
free.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread gordon chung


On 18/04/17 05:21 AM, Julien Danjou wrote:
>> - dynamic sack size
>> making number of sacks dynamic is a concern. previously, we said to have
>> sack size in conf file. the concern is that changing that option
>> incorrectly actually 'corrupts' the db to a state that it cannot recover
>> from. it will have stray unprocessed measures constantly. if we change
>> the db path incorrectly, we don't actually corrupt anything, we just
>> lose data. we've said we don't want sack mappings in indexer so it seems
>> to me, the only safe solution is to make the sack size static and only
>> changeable by hacking?
>
> Not hacking, we just need a proper tool to rebalance it.
> As I already wrote, I think it's good enough to have this documented and
> set to a moderately good value by default (e.g. 4096). There's no need to
> store it in a configuration file, it should be stored in the storage
> driver itself to avoid any mistake, when the storage is initialized via
> `gnocchi-upgrade'.

the issue i see is not with how the sacks will be assigned to metricd 
but how metrics (not the daemon) are assigned to sacks. i don't think 
storing the value in the storage object solves the issue, because when 
would we load/read it when the api and metricd processes start up? it 
seems this would require: 1) all services to be shut down, and 2) a 
completely clean incoming storage path. if either of those two steps 
isn't done, you have corrupt incoming storage. if this is a requirement 
and both of these are done successfully, it means any kind of 'live 
upgrade' is impossible in gnocchi.

>
>> - sack distribution
>> to distribute sacks across workers, i initially implemented consistent
>> hashing. the issue i noticed is that because hashring is inherently has
>> non-uniform distribution[1], i would have workers sitting idle because
>> it was given less sacks, while other workers were still working.
>>
>> i also tried to implement jump hash[2], which improved distribution and
>> is, in theory, less memory intensive as it does not maintain a hash
>> table. while better at distribution, it still is not completely uniform
>> and similarly, the fewer sacks per worker, the worse the distribution.
>>
>> lastly, i tried just simple locking where each worker is completely
>> unaware of any other worker and handles all sacks. it will lock the sack
>> it is working on, so if another worker tries to work on it, it will just
>> skip. this will effectively cause an additional requirement on locking
>> system (in my case redis) as each worker will make x lock requests where
>> x is number of sacks. so if we have 50 workers and 2048 sacks, it will
>> be 102K requests per cycle. this is in addition to the n number of lock
>> requests per metric (10K-1M metrics?). this does guarantee if a worker
>> is free and there is work to be done, it will do it.
>>
>> i guess the question i have is: by using a non-uniform hash, it seems we
>> gain possibly less load at the expense of efficiency/'speed'. the number
>> of sacks/tasks we have is stable, it won't really change. the number of
>> metricd workers may change but not constantly. lastly, the number of
>> sacks per worker will always be relatively low (10:1, 100:1 assuming max
>> number of sacks is 2048). given these conditions, do we need
>> consistent/jump hashing? is it better to just modulo sacks and ensure
>> 'uniform' distribution and allow for 'larger' set of buckets to be
>> reshuffled when workers are added?
>
> What about using the hashring with replicas (e.g. 3 by default) and a
> lock per sack? This should largely reduce the number of lock tries that
> you see. If you have 2k sacks divided across 50 workers and each one has
> a replica, that makes each process care about 122 sacks, so they might
> send 122 acquire() tries each, which is 50 × 122 = 6100 acquire requests,
> 17 times less than 102k.
> This also solves the problem of non-uniform distribution, as having
> replicas makes sure every node gets some work.

i did test w/ 2 replicas (see: google sheet) and it's still 
non-uniform but better than without replicas: ~4%-30% vs ~8%-45%. we 
could also minimise the number of lock calls by dividing sacks across 
workers per agent.

going to play devil's advocate now: using hashring in our use case will 
always hurt throughput (even with perfect distribution, since the sack 
contents themselves are not uniform). returning to the original question, 
is using hashring worth it? i don't think we're even leveraging the 
re-balancing aspect of hashring.

>
> You can then probably remove the per-metric-lock too: this is just used
> when processing new measures (here the sack lock is enough) and when
> expunging metrics. You can safely use the same sack lock for
> expunging metrics. We may just need to take it out of the janitor?
> Something to think about!
>

good point, we may not need to lock the sack for expunging at all, since 
the metric is already marked as deleted in the indexer so it is 
effectively not accessible.


-- 
gord

Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-18 Thread Julien Danjou
On Mon, Apr 17 2017, gordon chung wrote:

Hi Gordon,

> i've started to implement multiple buckets and the initial tests look 
> promising. here are some things i've done:
>
> - dropped the scheduler process and allow processing workers to figure 
> out tasks themselves
> - each sack is now handled fully (not counting anything added after 
> processing worker)
> - number of sacks are static
>
> after the above, i've been testing it and it works pretty well: i'm able
> to process 40K metrics, 60 points each, in 8-10 mins with 54 workers,
> whereas it took significantly longer before.

Great!

> the issues i've run into:
>
> - dynamic sack size
> making number of sacks dynamic is a concern. previously, we said to have 
> sack size in conf file. the concern is that changing that option 
> incorrectly actually 'corrupts' the db to a state that it cannot recover 
> from. it will have stray unprocessed measures constantly. if we change 
> the db path incorrectly, we don't actually corrupt anything, we just 
> lose data. we've said we don't want sack mappings in indexer so it seems 
> to me, the only safe solution is to make the sack size static and only
> changeable by hacking?

Not hacking, we just need a proper tool to rebalance it.
As I already wrote, I think it's good enough to have this documented and
set to a moderately good value by default (e.g. 4096). There's no need to
store it in a configuration file, it should be stored in the storage
driver itself to avoid any mistake, when the storage is initialized via
`gnocchi-upgrade'.

> - sack distribution
> to distribute sacks across workers, i initially implemented consistent 
> hashing. the issue i noticed is that because hashring inherently has 
> non-uniform distribution[1], i would have workers sitting idle because 
> they were given fewer sacks, while other workers were still working.
>
> i also tried to implement jump hash[2], which improved distribution and 
> is, in theory, less memory intensive as it does not maintain a hash 
> table. while better at distribution, it still is not completely uniform 
> and similarly, the fewer sacks per worker, the worse the distribution.
>
> lastly, i tried just simple locking where each worker is completely 
> unaware of any other worker and handles all sacks. it will lock the sack 
> it is working on, so if another worker tries to work on it, it will just 
> skip. this will effectively cause an additional requirement on locking 
> system (in my case redis) as each worker will make x lock requests where 
> x is number of sacks. so if we have 50 workers and 2048 sacks, it will 
> be 102K requests per cycle. this is in addition to the n number of lock 
> requests per metric (10K-1M metrics?). this does guarantee if a worker 
> is free and there is work to be done, it will do it.
>
> i guess the question i have is: by using a non-uniform hash, it seems we 
> gain possibly less load at the expense of efficiency/'speed'. the number 
> of sacks/tasks we have is stable, it won't really change. the number of 
> metricd workers may change but not constantly. lastly, the number of 
> sacks per worker will always be relatively low (10:1, 100:1 assuming max 
> number of sacks is 2048). given these conditions, do we need 
> consistent/jump hashing? is it better to just modulo sacks and ensure 
> 'uniform' distribution and allow for 'larger' set of buckets to be 
> reshuffled when workers are added?

What about using the hashring with replicas (e.g. 3 by default) and a
lock per sack? This should largely reduce the number of lock tries that
you see. If you have 2k sacks divided across 50 workers and each one has
a replica, that makes each process care about 122 sacks, so they might
send 122 acquire() tries each, which is 50 × 122 = 6100 acquire requests,
17 times less than 102k.
This also solves the problem of non-uniform distribution, as having
replicas makes sure every node gets some work.
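
As a back-of-envelope check of the numbers above (the values are the
ones from this thread, nothing Gnocchi computes itself):

    sacks, workers, replicas = 2048, 50, 3
    per_worker = sacks * replicas // workers   # ~122 sacks per process
    with_ring = workers * per_worker           # 6100 acquire() tries
    without_ring = workers * sacks             # 102400, i.e. ~102k
    print(round(without_ring / with_ring))     # ~17x fewer lock requests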

You can then probably remove the per-metric-lock too: this is just used
when processing new measures (here the sack lock is enough) and when
expunging metrics. You can safely use the same sack lock for
expunging metrics. We may just need to take it out of the janitor?
Something to think about!

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-04-17 Thread gordon chung


On 17/04/17 12:09 PM, gordon chung wrote:
> i also tried to implement jump hash[2], which improved distribution and
> is, in theory, less memory intensive as it does not maintain a hash
> table. while better at distribution, it still is not completely uniform
> and similarly, the fewer sacks per worker, the worse the distribution.

hmm... may have spoken incorrectly on this one: it's better, but i don't 
think we can use it to assign sacks to processing workers. i think it's 
only usable for assigning metrics into sacks.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

> On 02/03/17 10:07 AM, Julien Danjou wrote:
>> That also means we may be able to get rid of the scheduler process?
>
> i think we should probably keep it. the scheduler process of the agent will
> loop through each bucket and start dumping metrics to process onto the
> queue, and the processing workers will greedily process them in parallel.
>
> if we let the processing workers do the partitioning, we'll have a lot
> of extra calls and contention like before. it's not much compared to
> standard io time but it was still noticeable in previous benchmarks.

Makes sense.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 10:07 AM, Julien Danjou wrote:
> That also means we may be able to get rid of the scheduler process?

i think we should probably keep it. the scheduler process of the agent will 
loop through each bucket and start dumping metrics to process onto the 
queue, and the processing workers will greedily process them in parallel.

if we let the processing workers do the partitioning, we'll have a lot 
of extra calls and contention like before. it's not much compared to 
standard io time but it was still noticeable in previous benchmarks.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

> one of the reasons we can't effectively partition the single bucket is
> partly that we have multiple agents on a single bucket. in theory we
> can use markers to partition the single bucket but because of multiple
> workers, the marker has a very high chance of disappearing.
>
> in this case, only one agent is ever working on a bucket so it should
> minimise the chance of a marker disappearing and therefore let us go
> deeper into the bucket.

Makes sense. So a consistent hashring of a few thousand buckets
would solve that easily, as only one metricd will be assigned to a (lot
of) bucket(s).

That also means we may be able to get rid of the scheduler process?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 09:52 AM, Julien Danjou wrote:
> Sounds good. What's interesting is how you implement a shard/bucket in
> each driver. I imagine it's a container/bucket/directory.

yeah, same as whatever is used now... except more of them :)

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 09:52 AM, Julien Danjou wrote:
>> using the hashring idea, the buckets will be distributed among all the
>> active metricd agents. the metricd agents will loop through all the
>> assigned buckets based on the processing interval. the actual processing
>> of each bucket will be similar to what we have now: grab metrics, queue
>> them for processing workers. the only difference is instead of just
>> grabbing the first x metrics and stopping, we keep grabbing until the
>> bucket is 'clear'. this will help us avoid the current issue where some
>> metrics are never scheduled because the return order puts them at the end.
> It does not change the current issue that much, IIUC. The only difference
> is that currently we have 1 bucket and N metricd trying to empty it,
> whereas now we would have M buckets with N metricd spread across them, so
> there's N/M metricd per bucket trying to empty each bucket. :)
>
> At some scale (larger than currently) it will improve things but it does
> not seem to be a drastic change.
>
> (I am also not saying that I have a better solution :)
>

one of the reasons we can't effectively partition the single bucket is 
partly that we have multiple agents on a single bucket. in theory we 
can use markers to partition the single bucket but because of multiple 
workers, the marker has a very high chance of disappearing.

in this case, only one agent is ever working on a bucket so it should 
minimise the chance of a marker disappearing and therefore let us go 
deeper into the bucket.

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

Hi gordon,

> i was thinking more about this yesterday. i've an idea.

You should have seen my face when I read that! ;-P

> how we store new measures
> -
>
> when we add new measures to be processed, the metric itself is already
> created in the indexer, so it already has an id. with the id, we can
> compute and store a shard/bucket location in the indexer with the metric.
> since the metric id is a uuid, we can just mod it with the number of
> buckets and it should give us a decent distribution. so with that, when we
> actually store the new measure, we will look at the bucket location
> associated with the metric.

Sounds good. What's interesting is how you implement a shard/bucket in
each driver. I imagine it's a container/bucket/directory.

> using the hashring idea, the buckets will be distributed among all the
> active metricd agents. the metricd agents will loop through all the
> assigned buckets based on the processing interval. the actual processing
> of each bucket will be similar to what we have now: grab metrics, queue
> them for processing workers. the only difference is instead of just
> grabbing the first x metrics and stopping, we keep grabbing until the
> bucket is 'clear'. this will help us avoid the current issue where some
> metrics are never scheduled because the return order puts them at the end.

It does not change the current issue that much, IIUC. The only difference
is that currently we have 1 bucket and N metricd trying to empty it,
whereas now we would have M buckets with N metricd spread across them, so
there's N/M metricd per bucket trying to empty each bucket. :)

At some scale (larger than currently) it will improve things but it does
not seem to be a drastic change.

(I am also not saying that I have a better solution :)

> we'll have a new agent (name here). this will walk through each metric
> in our indexer, recompute a new bucket location, and set it. this will
> make all new incoming points be pushed to the new location. this agent
> will also go to the old location (if different) and process any
> unprocessed measures of the metric. it will then move on to the next
> metric until complete.
>
> there will probably need to be a state/config table or something so the
> indexer knows the bucket size.
>
> i also think there might be a better partitioning technique to minimise 
> the number of metrics that change buckets... need to think about that more.

Yes, it's called consistent hashing, and that's what Swift and the like
are using.

Basically the idea is to create A LOT of buckets (higher than
your maximum number of potential metricd), let's say, 2^16, and then
distribute those containers across your metricds, e.g. if you have 10
metricd they will each be responsible for 6 554 buckets; when an 11th
metricd comes up, you just have to recompute who's responsible for which
bucket. This is exactly what tooz's new partitioner system provides and
that we can leverage easily:

  https://github.com/openstack/tooz/blob/master/tooz/partitioner.py#L25

All we have to do is create a lot of buckets and ask tooz which buckets
belong to each metricd. And then make them poll over and over again
(sigh) to empty them.

This makes sure you DON'T have to rebalance your buckets like you
proposed earlier, which is costly, long and painful.
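
For illustration, a hedged sketch of what leveraging that partitioner
could look like (the coordinator URL, member and group names, and bucket
count are made up; check the tooz documentation for the exact API):

    from tooz import coordination

    coord = coordination.get_coordinator("memcached://localhost:11211",
                                         b"metricd-worker-1")
    coord.start(start_heart=True)
    partitioner = coord.join_partitioned_group(b"gnocchi-processing")

    # each worker only polls the buckets the hashring says belong to it
    my_buckets = [b for b in range(2 ** 16)
                  if partitioner.belongs_to_self(b)]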

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 15/11/16 04:53 AM, Julien Danjou wrote:
> Yeah in the case of the Swift driver for Gnocchi, I'm not really sure
> how much buckets we should create. Should we make the user pick a random
> number like the number of partition in Swift and then create the
> containers in Swift? Or can we have something simpler? (I like automagic
> things). WDYT Gordon?


i was thinking more about this yesterday. i've an idea.


how we store new measures
-

when we add new measures to be processed, the metric itself is already 
created in the indexer, so it already has an id. with the id, we can 
compute and store a shard/bucket location in the indexer with the metric. 
since the metric id is a uuid, we can just mod it with the number of 
buckets and it should give us a decent distribution. so with that, when we 
actually store the new measure, we will look at the bucket location 
associated with the metric.
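
A minimal sketch of that mapping (illustrative only; the function and
constant names are made up):

    import uuid

    NUM_BUCKETS = 32  # see the default discussed below

    def bucket_for_metric(metric_id):
        # a uuid is a 128-bit integer, so taking it modulo the bucket
        # count gives a stable, roughly uniform bucket assignment
        return uuid.UUID(str(metric_id)).int % NUM_BUCKETS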


how we process measures
---

using the hashring idea, the buckets will be distributed among all the 
active metricd agents. the metricd agents will loop through all the 
assigned buckets based on the processing interval. the actual processing 
of each bucket will be similar to what we have now: grab metrics, queue 
them for processing workers. the only difference is instead of just 
grabbing the first x metrics and stopping, we keep grabbing until the 
bucket is 'clear'. this will help us avoid the current issue where some 
metrics are never scheduled because the return order puts them at the end.
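
Sketched as pseudo-Python (the incoming-storage helper here is
hypothetical, not Gnocchi's real driver API):

    def process_bucket(incoming, bucket, work_queue):
        while True:
            # hypothetical helper listing metrics with unprocessed measures
            metrics = incoming.list_metrics_with_measures(bucket)
            if not metrics:
                break  # the bucket is 'clear'
            for metric_id in metrics:
                work_queue.put(metric_id)  # hand off to processing workers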


how we change bucket size
-

we'll have a new agent (name here). this will walk through each metric 
in our indexer, recompute a new bucket location, and set it. this will 
make all new incoming points be pushed to the new location. this agent 
will also go to the old location (if different) and process any 
unprocessed measures of the metric. it will then move on to the next 
metric until complete.

there will probably need to be a state/config table or something so the 
indexer knows the bucket size.

i also think there might be a better partitioning technique to minimise 
the number of metrics that change buckets... need to think about that more.


what we set default bucket size to
--

32? say we aim for a default of 10K metrics; that puts ~310 metrics (and 
their measure objects from POST) in each bucket... or 64?


cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-28 Thread gordon chung


On 28/12/16 05:49 AM, Sam Huracan wrote:
> Thanks,
>
>
> How can I increase the processing delay? I think it could increase the
> number of measures in the queue.
>

you can also force the measures in the backlog/queue to be processed on 
request by passing in refresh=True. in theory, you could turn off all 
metricd workers by doing this. of course, this will just push all 
calculations to request time, which may result in longer response times 
and probably higher memory usage.
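
Against the REST API that looks something like the sketch below (the
endpoint and metric id are placeholders, and authentication is omitted):

    import requests

    # refresh=true forces any queued measures for this metric to be
    # aggregated before the measures are returned
    r = requests.get(
        "http://localhost:8041/v1/metric/METRIC_ID/measures",
        params={"refresh": "true"})
    print(r.json())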

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-28 Thread Julien Danjou
On Wed, Dec 28 2016, Sam Huracan wrote:

> How can I increase the processing delay? I think it could increase the
> number of measures in the queue.

It would, but that'll allow gnocchi-metricd to process them in batch, so
it'll use less CPU overall. Obviously, the computing of metrics will be
less real-time-y.

You can increase `metric_processing_delay' to make it less aggressive.
The default is 60 seconds.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-28 Thread Sam Huracan
Thanks,


How can I increase the processing delay? I think it could increase the
number of measures in the queue.

2016-12-28 16:29 GMT+07:00 Julien Danjou :

> On Wed, Dec 28 2016, Sam Huracan wrote:
>
> > I recently used a VM in the lab for the Gnocchi server, with 4 vCPU and
> > 8192 MB RAM, and after configuring 10 workers, I realized the number of
> > gnocchi measures decreases significantly (gnocchi status), but it drains
> > much more CPU and RAM, and the VM froze after 30 minutes of running. :)
>
> Gnocchi precomputes everything, that's why it uses CPU. If you want to
> have it use less CPU, you should increase the processing delay or modify
> the default archive policies to compute less. :)
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-28 Thread Julien Danjou
On Wed, Dec 28 2016, Sam Huracan wrote:

> I recently used a VM in the lab for the Gnocchi server, with 4 vCPU and
> 8192 MB RAM, and after configuring 10 workers, I realized the number of
> gnocchi measures decreases significantly (gnocchi status), but it drains
> much more CPU and RAM, and the VM froze after 30 minutes of running. :)

Gnocchi precomputes everything, that's why it uses CPU. If you want to
have it use less CPU, you should increase the processing delay or modify
the default archive policies to compute less. :)
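
As an example, a hedged sketch of creating a lighter archive policy over
the REST API (the name and values are made up, and authentication is
omitted):

    import requests

    # a single aggregation method and a coarse granularity mean far less
    # precomputation per incoming batch than the default policies
    policy = {"name": "low-cpu",
              "aggregation_methods": ["mean"],
              "definition": [{"granularity": "5 minutes",
                              "timespan": "30 days"}]}
    requests.post("http://localhost:8041/v1/archive_policy", json=policy)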

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Sam Huracan
Hi guys,

I'm using gnocchi version 3.0:
https://github.com/openstack/gnocchi/tree/stable/3.0 + OpenStack Mitaka,
storing gnocchi metrics on Ceph storage.
All metrics I need to collect are VM resources: CPU, Memory, Disk, Network.
We're going to monitor those metrics every minute for alarming when
reaching a threshold, and keep them 1 month for billing (medium archive
policy).

I recently used a VM in the lab for the Gnocchi server, with 4 vCPU and
8192 MB RAM, and after configuring 10 workers, I realized the number of
gnocchi measures decreases significantly (gnocchi status), but it drains
much more CPU and RAM, and the VM froze after 30 minutes of running. :)

Has anyone ever deployed gnocchi in production? Could you share your
experiences?

Thanks and regards



2016-12-28 5:22 GMT+07:00 Mike Perez :

> On 16:07 Dec 27, Sam Huracan wrote:
> > Hi,
> >
> > I'm testing gnocchi in a lab and realizing it drains CPU and RAM too much.
> >
> > Do you have a recommendation for sizing the Gnocchi server configuration?
> >
> > Thanks and regards,
>
> Hi Sam,
>
> I recommend asking the OpenStack operators mailing list [1] for
> configuration
> help. It's likely someone on there has knowledge of running Gnocchi in
> production, as this list is mostly about development discussions.  Thanks!
>
> [1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-operators
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Mike Perez
On 16:07 Dec 27, Sam Huracan wrote:
> Hi,
> 
> I'm testing gnocchi in a lab and realizing it drains CPU and RAM too much.
> 
> Do you have a recommendation for sizing the Gnocchi server configuration?
> 
> Thanks and regards,

Hi Sam,

I recommend asking the OpenStack operators mailing list [1] for configuration
help. It's likely someone on there has knowledge of running Gnocchi in
production, as this list is mostly about development discussions.  Thanks!

[1] - http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Alex Krzos
Hi Sam,

I have also been testing Gnocchi and found high resource utilization
depending on the gnocchi configuration and archival policy. Since
hardware can vary, and your ability to process metrics will be CPU
heavy while your ability to store metrics will be limited by your disk
IO, it will be highly dependent upon your hardware and worker counts.
Could you elaborate on the configuration and version you are using,
along with the resource count you have metrics collected on? The more
data points we can share with the community, the better defaults and
scale/capacity guidelines we can provide. Also, if you could elaborate
on what aggregations you truly need, we can take that into account
for the default policies.

I can recommend moving the Gnocchi API (hosted as a WSGI app in httpd)
and Gnocchi metricd to separate hardware (if you are not doing this
already). This will prevent resource contention between the other
OpenStack services and the Telemetry services.

Alex Krzos | Performance Engineering
Red Hat
Desk: 919-754-4280
Mobile: 919-909-6266

On Tue, Dec 27, 2016 at 4:07 AM, Sam Huracan  wrote:
> Hi,
>
> I'm testing gnocchi in a lab and realizing it drains CPU and RAM too much.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?
>
> Thanks and regards,
>
> Sam
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Julien Danjou
On Tue, Dec 27 2016, Sam Huracan wrote:

Hi Sam,

> I'm testing gnocchi in a lab and realizing it drains CPU and RAM too much.
>
> Do you have a recommendation for sizing the Gnocchi server configuration?

Which version are you testing? What's the RAM consumption?
You can reduce the default archive policies and increase processing
latency to decrease CPU usage.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gnocchi sizing on production

2016-12-27 Thread Sam Huracan
Hi,

I'm testing gnocchi in a lab and realizing it drains CPU and RAM too much.

Do you have a recommendation for sizing the Gnocchi server configuration?

Thanks and regards,

Sam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-12-01 Thread Mehdi Abaakouk



On 2016-12-01 23:48, Sam Morrison wrote:

Using influxdb v1.1 works fine. Anything less than 1.0 I would deem
unusable for influxdb. So to get this to work we’d need a newer
version of influxdb installed.


That works for me.


Any idea how to do this? I see they push out a custom ceph repo to
install a newer ceph so I guess we’d need to do something similar
although influx doesn’t provide a repo, just a deb.


We do this kind of thing in tooz:

https://github.com/openstack/tooz/blob/master/tox.ini#L73
https://github.com/openstack/tooz/blob/master/setup-consul-env.sh

A tarball and setting the PATH is easiest and compatible with more 
platforms.


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-12-01 Thread Sam Morrison
I’ve been working a bit on this and the errors I’m getting in the gate I can 
now replicate in my environment if I use the same version of influxdb in the 
gate (0.10)

Using influxdb v1.1 works fine. Anything less than 1.0 I would deem unusable 
for influxdb. So to get this to work we’d need a newer version of influxdb 
installed. 

Any idea how to do this? I see they push out a custom ceph repo to install a 
newer ceph so I guess we’d need to do something similar although influx doesn’t 
provide a repo, just a deb.

Sam




> On 30 Nov. 2016, at 7:35 pm, Sam Morrison  wrote:
> 
> 
>> On 30 Nov. 2016, at 6:23 pm, Mehdi Abaakouk  wrote:
>> 
>> 
>> 
>> On 2016-11-30 08:06, Sam Morrison wrote:
>>> 2016-11-30 06:50:14.969302 | + pifpaf -e GNOCCHI_STORAGE run influxdb
>>> -- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
>>> 2016-11-30 06:50:17.399380 | ERROR: pifpaf: 'ascii' codec can't decode
>>> byte 0xc2 in position 165: ordinal not in range(128)
>>> 2016-11-30 06:50:17.415485 | ERROR: InvocationError:
>>> '/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/run-tests.sh’
>> 
>> You can temporary pass '--debug' to pifpaf to get the full backtrace.
> 
> Good idea, thanks! I get this error; I don’t get it on the py3 job though.
> 
> I get further with the py3 job but get some other errors I don’t see in my env 
> so I'm trying to figure out what is different.
> 
> 
> 2016-11-30 07:40:17.209979 | + pifpaf --debug -e GNOCCHI_STORAGE run influxdb 
> -- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
> 2016-11-30 07:40:17.746304 | DEBUG: pifpaf.drivers: executing: ['influxd', 
> '-config', '/tmp/tmp.7pq0EBpjgt/tmpikRcvn/config']
> 2016-11-30 07:40:17.759236 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> 2016-11-30 07:40:17.759804 | DEBUG: pifpaf.drivers: influxd[20435] output:  
> 888   .d888 888   888b.  88b.
> 2016-11-30 07:40:17.759909 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888d88P"  888   888  "Y88b 888  "88b
> 2016-11-30 07:40:17.760003 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888888888   888888 888  .88P
> 2016-11-30 07:40:17.760094 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888   8b.  88 888 888  888 888  888 888888 888K.
> 2016-11-30 07:40:17.760196 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888   888 "88b 888888 888  888  Y8bd8P' 888888 888  "Y88b
> 2016-11-30 07:40:17.760296 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888   888  888 888888 888  888   X88K   888888 888888
> 2016-11-30 07:40:17.760384 | DEBUG: pifpaf.drivers: influxd[20435] output:
> 888   888  888 888888 Y88b 888 .d8""8b. 888  .d88P 888   d88P
> 2016-11-30 07:40:17.760474 | DEBUG: pifpaf.drivers: influxd[20435] output:  
> 888 888  888 888888  "Y8 888  888 888P"  888P"
> 2016-11-30 07:40:17.760516 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> 2016-11-30 07:40:17.760643 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> 2016/11/30 07:40:17 InfluxDB starting, version 0.10.0, branch unknown, commit 
> unknown, built unknown
> 2016-11-30 07:40:17.760722 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> 2016/11/30 07:40:17 Go version go1.6rc1, GOMAXPROCS set to 8
> 2016-11-30 07:40:17.859524 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> 2016/11/30 07:40:17 Using configuration at: 
> /tmp/tmp.7pq0EBpjgt/tmpikRcvn/config
> 2016-11-30 07:40:17.860852 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> [meta] 2016/11/30 07:40:17 Starting meta service
> 2016-11-30 07:40:17.861033 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> [meta] 2016/11/30 07:40:17 Listening on HTTP: 127.0.0.1:51232
> 2016-11-30 07:40:17.871362 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> [metastore] 2016/11/30 07:40:17 Using data dir: 
> /tmp/tmp.7pq0EBpjgt/tmpikRcvn/meta
> 2016-11-30 07:40:17.878511 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> [metastore] 2016/11/30 07:40:17 Node at localhost:51233 [Follower]
> 2016-11-30 07:40:19.079831 | DEBUG: pifpaf.drivers: influxd[20435] output: 
> [metastore] 2016/11/30 07:40:19 Node at localhost:51233 [Leader]. 
> peers=[localhost:51233]
> 2016-11-30 07:40:19.180811 | Traceback (most recent call last):
> 2016-11-30 07:40:19.180865 |   File "/usr/lib/python2.7/logging/__init__.py", 
> line 884, in emit
> 2016-11-30 07:40:19.182121 | stream.write(fs % msg.encode("UTF-8"))
> 2016-11-30 07:40:19.182194 | UnicodeDecodeError: 'ascii' codec can't decode 
> byte 0xc2 in position 211: ordinal not in range(128)
> 2016-11-30 07:40:19.182225 | Logged from file __init__.py, line 80
> 2016-11-30 07:40:19.183188 | ERROR: pifpaf: Traceback (most recent call last):
> 2016-11-30 07:40:19.183271 |   File 
> 

Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-11-30 Thread Sam Morrison

> On 30 Nov. 2016, at 6:23 pm, Mehdi Abaakouk  wrote:
> 
> 
> 
> On 2016-11-30 08:06, Sam Morrison wrote:
>> 2016-11-30 06:50:14.969302 | + pifpaf -e GNOCCHI_STORAGE run influxdb
>> -- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
>> 2016-11-30 06:50:17.399380 | ERROR: pifpaf: 'ascii' codec can't decode
>> byte 0xc2 in position 165: ordinal not in range(128)
>> 2016-11-30 06:50:17.415485 | ERROR: InvocationError:
>> '/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/run-tests.sh’
> 
> You can temporary pass '--debug' to pifpaf to get the full backtrace.

Good idea, thanks! I get this error; I don’t get it on the py3 job though.

I get further with the py3 job but get some other errors I don’t see in my env 
so I'm trying to figure out what is different.


2016-11-30 07:40:17.209979 | + pifpaf --debug -e GNOCCHI_STORAGE run influxdb 
-- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
2016-11-30 07:40:17.746304 | DEBUG: pifpaf.drivers: executing: ['influxd', 
'-config', '/tmp/tmp.7pq0EBpjgt/tmpikRcvn/config']
2016-11-30 07:40:17.759236 | DEBUG: pifpaf.drivers: influxd[20435] output: 
2016-11-30 07:40:17.759804 | DEBUG: pifpaf.drivers: influxd[20435] output:  
888   .d888 888   888b.  88b.
2016-11-30 07:40:17.759909 | DEBUG: pifpaf.drivers: influxd[20435] output:
888d88P"  888   888  "Y88b 888  "88b
2016-11-30 07:40:17.760003 | DEBUG: pifpaf.drivers: influxd[20435] output:
888888888   888888 888  .88P
2016-11-30 07:40:17.760094 | DEBUG: pifpaf.drivers: influxd[20435] output:
888   8b.  88 888 888  888 888  888 888888 888K.
2016-11-30 07:40:17.760196 | DEBUG: pifpaf.drivers: influxd[20435] output:
888   888 "88b 888888 888  888  Y8bd8P' 888888 888  "Y88b
2016-11-30 07:40:17.760296 | DEBUG: pifpaf.drivers: influxd[20435] output:
888   888  888 888888 888  888   X88K   888888 888888
2016-11-30 07:40:17.760384 | DEBUG: pifpaf.drivers: influxd[20435] output:
888   888  888 888888 Y88b 888 .d8""8b. 888  .d88P 888   d88P
2016-11-30 07:40:17.760474 | DEBUG: pifpaf.drivers: influxd[20435] output:  
888 888  888 888888  "Y8 888  888 888P"  888P"
2016-11-30 07:40:17.760516 | DEBUG: pifpaf.drivers: influxd[20435] output: 
2016-11-30 07:40:17.760643 | DEBUG: pifpaf.drivers: influxd[20435] output: 
2016/11/30 07:40:17 InfluxDB starting, version 0.10.0, branch unknown, commit 
unknown, built unknown
2016-11-30 07:40:17.760722 | DEBUG: pifpaf.drivers: influxd[20435] output: 
2016/11/30 07:40:17 Go version go1.6rc1, GOMAXPROCS set to 8
2016-11-30 07:40:17.859524 | DEBUG: pifpaf.drivers: influxd[20435] output: 
2016/11/30 07:40:17 Using configuration at: /tmp/tmp.7pq0EBpjgt/tmpikRcvn/config
2016-11-30 07:40:17.860852 | DEBUG: pifpaf.drivers: influxd[20435] output: 
[meta] 2016/11/30 07:40:17 Starting meta service
2016-11-30 07:40:17.861033 | DEBUG: pifpaf.drivers: influxd[20435] output: 
[meta] 2016/11/30 07:40:17 Listening on HTTP: 127.0.0.1:51232
2016-11-30 07:40:17.871362 | DEBUG: pifpaf.drivers: influxd[20435] output: 
[metastore] 2016/11/30 07:40:17 Using data dir: 
/tmp/tmp.7pq0EBpjgt/tmpikRcvn/meta
2016-11-30 07:40:17.878511 | DEBUG: pifpaf.drivers: influxd[20435] output: 
[metastore] 2016/11/30 07:40:17 Node at localhost:51233 [Follower]
2016-11-30 07:40:19.079831 | DEBUG: pifpaf.drivers: influxd[20435] output: 
[metastore] 2016/11/30 07:40:19 Node at localhost:51233 [Leader]. 
peers=[localhost:51233]
2016-11-30 07:40:19.180811 | Traceback (most recent call last):
2016-11-30 07:40:19.180865 |   File "/usr/lib/python2.7/logging/__init__.py", 
line 884, in emit
2016-11-30 07:40:19.182121 | stream.write(fs % msg.encode("UTF-8"))
2016-11-30 07:40:19.182194 | UnicodeDecodeError: 'ascii' codec can't decode 
byte 0xc2 in position 211: ordinal not in range(128)
2016-11-30 07:40:19.182225 | Logged from file __init__.py, line 80
2016-11-30 07:40:19.183188 | ERROR: pifpaf: Traceback (most recent call last):
2016-11-30 07:40:19.183271 |   File 
"/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/.tox/py27-mysql/local/lib/python2.7/site-packages/fixtures/fixture.py",
 line 197, in setUp
2016-11-30 07:40:19.183325 | self._setUp()
2016-11-30 07:40:19.183389 |   File 
"/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/.tox/py27-mysql/local/lib/python2.7/site-packages/pifpaf/drivers/influxdb.py",
 line 72, in _setUp
2016-11-30 07:40:19.183410 | path=["/opt/influxdb"])
2016-11-30 07:40:19.183469 |   File 
"/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/.tox/py27-mysql/local/lib/python2.7/site-packages/pifpaf/drivers/__init__.py",
 line 140, in _exec
2016-11-30 07:40:19.183498 | if wait_for_line and re.search(wait_for_line, 
line.decode()):
2016-11-30 07:40:19.183536 | UnicodeDecodeError: 'ascii' codec can't decode 
byte 
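
The failing call is the line.decode() in pifpaf's _exec shown in the
traceback above; a hedged sketch of the kind of fix (not necessarily the
patch pifpaf ended up with) is to decode explicitly and tolerantly:

    import re

    def line_matches(wait_for_line, line):
        # decode as UTF-8 and replace undecodable bytes, so non-ASCII
        # banner output cannot crash the matcher with UnicodeDecodeError
        decoded = line.decode("utf-8", errors="replace")
        return bool(wait_for_line and re.search(wait_for_line, decoded))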

Re: [openstack-dev] [gnocchi] influxdb driver gate error

2016-11-29 Thread Mehdi Abaakouk



On 2016-11-30 08:06, Sam Morrison wrote:

2016-11-30 06:50:14.969302 | + pifpaf -e GNOCCHI_STORAGE run influxdb
-- pifpaf -e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh
2016-11-30 06:50:17.399380 | ERROR: pifpaf: 'ascii' codec can't decode
byte 0xc2 in position 165: ordinal not in range(128)
2016-11-30 06:50:17.415485 | ERROR: InvocationError:
'/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/run-tests.sh’


You can temporary pass '--debug' to pifpaf to get the full backtrace.

Cheers,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gnocchi] influxdb driver gate error

2016-11-29 Thread Sam Morrison
Have been working on my influxdb driver [1] and have managed to figure out the 
gate to get it to install the deps etc. Now I just get this cryptic error:

2016-11-30 06:50:14.969302 | + pifpaf -e GNOCCHI_STORAGE run influxdb -- pifpaf 
-e GNOCCHI_INDEXER run mysql -- ./tools/pretty_tox.sh 
2016-11-30 06:50:17.399380 | ERROR: pifpaf: 'ascii' codec can't decode byte 
0xc2 in position 165: ordinal not in range(128) 
2016-11-30 06:50:17.415485 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/run-tests.sh’

Full logs at 
http://logs.openstack.org/60/390260/8/check/gate-gnocchi-tox-db-py27-mysql-ubuntu-xenial/42da72d/console.html

Anyone have an idea what is going on here? I can’t replicate on my machine.

Cheers,
Sam


[1] https://review.openstack.org/#/c/390260/






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

