Hi,
Inkscope, a Ceph admin and monitoring GUI, is still alive.
It can now be installed with an ansible playbook:
https://github.com/inkscope/inkscope-ansible
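The usual workflow would be roughly the following (a sketch only; the inventory
and playbook file names below are hypothetical, see the repository README for
the real ones):
  git clone https://github.com/inkscope/inkscope-ansible
  cd inkscope-ansible
  # fill in the inventory with your ceph/monitoring nodes (hypothetical file name)
  ansible-playbook -i hosts site.yml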
Best regards
- - - - - - - - - - - - - - - - -
Ghislain Chevalier
ORANGE/IMT/OLS/DIESE/LCP/DDSD
Software-Defined Storage Architect
Hi,
We did a PoC at Orange and encountered some difficulties in configuring
federation.
Can you check that the placement targets are identical on each zone?
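A quick way to compare them (a sketch, assuming a firefly/hammer-style federated
setup; run it in each zone and diff the results):
  # dump the region and zone configuration on each side
  radosgw-admin region get > region-$(hostname).json
  radosgw-admin zone get > zone-$(hostname).json
  # the placement_targets / placement_pools sections must be identical across zones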
brgds
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
wd_hw...@wistron.com
Sent: Friday, November 6, 2015
Hi,
Did you try to use the cleanup and dispose steps of cosbench?
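For reference, a minimal sketch of what those stages look like in a cosbench
workload file (the worker counts and container/object ranges are placeholders
that must match the ones used in the prepare/main stages):
  <workstage name="cleanup">
    <work type="cleanup" workers="4" config="containers=r(1,32);objects=r(1,100)" />
  </workstage>
  <workstage name="dispose">
    <work type="dispose" workers="1" config="containers=r(1,32)" />
  </workstage>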
brgds
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
Somnath Roy
Sent: Tuesday, November 24, 2015 20:49
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RGW pool contents
Hi Yehuda/RGW experts,
I have
Hi Quentin
Did you check that the pool was correctly created
(PG allocation)?
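A sketch of the kind of checks I have in mind (pool and profile names are
placeholders):
  # check the erasure-coded pool exists, with the expected profile and pg count
  ceph osd dump | grep erasure
  ceph osd erasure-code-profile get myprofile
  ceph osd pool get ecpool pg_num
  # and check that all pgs reach active+clean
  ceph health detail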
Sent from my Galaxy Ace4 Orange
Original message
From: quentin.d...@orange.com
Date: 17/12/2015 19:45 (GMT+01:00)
To: ceph-users@lists.ceph.com
Cc:
Subject: [ceph-users] [Ceph] Not able to use erasure
Hi Robert,
Sorry for replying late.
We finally used a 'step take' at the root on the production platform,
even though I tested a rule on the sandbox platform with a 'step take' at a
non-root level ... and it works.
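For what it's worth, a sketch of the workflow and of the kind of rule I mean
(bucket name and ruleset number are illustrative, not the production values):
  # decompile the current map, edit it, recompile and inject it
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # a rule whose take step starts below the root looks like this in crushmap.txt:
  #   rule ssd-rule {
  #       ruleset 3
  #       type replicated
  #       min_size 1
  #       max_size 10
  #       step take tier-ssd
  #       step chooseleaf firstn 0 type host
  #       step emit
  #   }
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new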
Brgds
-Original message-
From: Robert LeBlanc [mailto:rob...@leblancnet.us]
Sent
Hi all,
After installing the cluster, all the disks (SAS and SSD) were mixed under a
host, so the calculated reweight was related to the entire capacity.
It doesn't explain why SAS disks were selected when using a specific SSD-driven
rule.
Brgds
From: CHEVALIER Ghislain IMT/OLPS
Sent:
Hi all,
I hadn't noticed that the osd reweight for the SSDs was curiously set to a low
value.
I don't know how or when these values were set so low.
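A sketch of how the values can be checked and reset (osd id 42 is hypothetical):
  # the REWEIGHT column shows the override applied on top of the crush weight
  ceph osd tree
  # reset a single osd back to full weight
  ceph osd reweight 42 1.0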
Our environment is Mirantis-driven and the installation was powered by Fuel and
Puppet.
(the installation was run by the OpenStack team and I checked the
Hi,
Context:
Firefly 0.80.9
8 storage nodes
176 OSDs: 14*8 SAS and 8*8 SSD
3 monitors
I created an alternate crushmap in order to fulfill a tiering requirement, i.e.
select SSD or SAS.
I created specific buckets "host-ssd" and "host-sas" and regrouped them in
"tier-ssd" and "tier-sas" under a
Thx Mark
I understand the specific parameters are mandatory for the S3 implementation,
but as they are not for the Swift implementation (I tested it...),
it would have been better to distinguish which parameters are mandatory
according to the implementation.
For the S3 implementation, the creation
Hi all,
After adding the NSS and keystone admin URL parameters to ceph.conf and
creating the OpenSSL certificates, everything is working well.
If I had followed the doc and processed by copy/paste, I wouldn't have
encountered any problems.
As all is working well without this set of parameters
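For the record, a sketch of the parameters and certificate steps I am referring
to, assuming a firefly-era setup (host names and paths are placeholders):
  # in ceph.conf, under [client.radosgw.<name>]:
  #   rgw keystone url = http://keystone-host:35357
  #   rgw keystone admin token = <admin token>
  #   rgw keystone accepted roles = Member, admin
  #   nss db path = /var/ceph/nss
  # convert the keystone signing certificates into the nss db read by radosgw
  mkdir -p /var/ceph/nss
  openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | \
      certutil -d /var/ceph/nss -A -n ca -t "TCu,Cu,Tuw"
  openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | \
      certutil -A -d /var/ceph/nss -n signing_cert -t "P,P,P"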
Hi,
Coming back to that issue.
My endpoint wasn't set up correctly.
I changed it to myrgw:myport (rgwow:8080) in the CloudBerry profile or in the
curl request, and I got a 403 error due to a potentially bad role returned by
keystone.
In the radosgw log, I got
2015-05-05 14:58:23.895961 7fb9f4fe9700
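If the role really is the culprit, the roles keystone assigns to the user have
to match the accepted-roles list on the rgw side; a sketch of the check
(2015-era keystone CLI, names are placeholders):
  # roles actually granted to the user on the tenant
  keystone user-role-list --user myuser --tenant mytenant
  # those names must appear in ceph.conf:
  #   rgw keystone accepted roles = Member, admin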
Addendum
In the keystone log, I got
2015-05-06 11:42:24.594 10435 INFO eventlet.wsgi.server [-] 10.193.108.238 - -
[06/May/2015 11:42:24] POST /v2.0/s3tokens HTTP/1.1 404 247 0.003872
Something is missing
This is my new quest…
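One thing worth checking, assuming a Juno-era keystone: a 404 on /v2.0/s3tokens
usually means the s3 extension filter is not loaded in keystone's paste
pipeline; a sketch (file path and pipeline contents are indicative only):
  # /etc/keystone/keystone-paste.ini
  [filter:s3_extension]
  paste.filter_factory = keystone.contrib.s3:S3Extension.factory

  # ... and s3_extension must be added to the admin/public API pipelines, e.g.
  [pipeline:admin_api]
  pipeline = ... ec2_extension s3_extension ... admin_service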
From: CHEVALIER Ghislain IMT/OLPS
Sent: Wednesday, May 6, 2015
Hi again,
I found that a keystone extension is required for S3 to interact with keystone,
and it's possible to get the list of the installed extensions.
When I POST to http://10.194.167.23:5000/v2.0/extension, I got in the
response body:
<?xml version="1.0" encoding="UTF-8"?>
<extensions>
Hi,
I finally configured a CloudBerry profile by setting what seems to be the right
endpoint for object storage according to the OpenStack environment:
myrgw:myport/swift/v1
I got a “204 no content” error even though 2 containers, with objects in them,
were previously created by a swift operation.
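For comparison, the same listing can be done against the raw Swift API (a
sketch; $TOKEN is a keystone token obtained beforehand and myrgw:myport is the
placeholder endpoint above):
  # list the containers of the account behind the token
  curl -i -H "X-Auth-Token: $TOKEN" http://myrgw:myport/swift/v1
  # a 200 returns the container names; a 204 with no body means the account is seen as empty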
Hi,
Despite the creation of EC2 credentials, which provide an access key and a
secret key for a user, it's still impossible to connect using S3
(Forbidden/Access denied).
Everything works using Swift (create container, list containers, get object,
put object, delete object).
I use the CloudBerry client.
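For context, this is roughly how such credentials are created, plus the
rgw-side switch that, as far as I understand, also has to be set (a sketch;
2015-era keystone CLI, ids are placeholders):
  # create an ec2 access/secret key pair in keystone
  keystone ec2-credentials-create --user-id <user-id> --tenant-id <tenant-id>
  # radosgw only asks keystone to validate S3 signatures when this is enabled,
  # in ceph.conf under [client.radosgw.<name>]:
  #   rgw s3 auth use keystone = true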
Thanks Mark
Loic also gave me this link
It would be a good start for sure
Best regards
-Original message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of Mark
Nelson
Sent: Tuesday, April 14, 2015 14:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] how
Thanks a lot
That helps.
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, April 13, 2015 18:32
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-users
Subject: Re: [ceph-users] Rados Gateway and keystone
I haven't really used the S3 stuff much, but the credentials should be in
Hi All,
Am I alone in having this need?
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
ghislain.cheval...@orange.com
Sent: Friday, March 20, 2015 11:47
To: ceph-users
Subject: [ceph-users] how to compute Ceph durability?
Hi all,
I would like to compute the durability
Hi all,
Coming back to that issue.
I successfully used keystone users for the Rados Gateway and the Swift API, but
I still don't understand how it can work with the S3 API, i.e. with S3 users
(AccessKey/SecretKey).
I found a swift3 initiative, but I think it's only applicable to a pure
OpenStack Swift
Hi all,
It works with ceph --admin-daemon
/var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok config set debug_rgw
20
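To double-check that the value was really applied, the same socket can be
queried (a sketch):
  ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok \
      config show | grep debug_rgw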
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
ghislain.cheval...@orange.com
Sent: Wednesday, February 25, 2015 15:06
To: Ceph Users
Subject:
Hi All,
I just want to be sure about the keystone configuration for the Rados Gateway.
I read the documentation http://ceph.com/docs/master/radosgw/keystone/ and
http://ceph.com/docs/master/radosgw/config-ref/?highlight=keystone
but I didn't catch whether, after having configured the rados gateway
Hi all,
I would like to compute the durability of data stored in a ceph environment
according to the cluster topology (failure domains) and the data resiliency
(replication/erasure coding).
Does such a tool exist?
Best regards
- - - - - - - - - - - - - - - - -
Ghislain Chevalier ORANGE
Hi
I just want to tell you that there is an rgw object visualisation in our tool
called inkscope, available on GitHub, that could help you.
Best regards
Sent from my Galaxy Ace4 Orange
Original message
From: Italo Santos okd...@gmail.com
Date: 12/03/2015 21:26 (GMT+01:00)
To: Ben
Original message
From: CHEVALIER Ghislain IMT/OLPS ghislain.cheval...@orange.com
Date: 06/03/2015 21:56 (GMT+01:00)
To: Italo Santos okd...@gmail.com
Cc:
Subject: RE: [ceph-users] RadosGW - Bucket link and ACLs
Hi
We encountered this behavior when developing the rgw admin
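For reference, the kind of operation under discussion (a sketch; user and
bucket names are placeholders, and depending on the version --bucket-id may
also be needed):
  # move an existing bucket from one user to another
  radosgw-admin bucket unlink --bucket=mybucket --uid=olduser
  radosgw-admin bucket link --bucket=mybucket --uid=newuser
  # note: the bucket ACL may still name the original owner and need to be rewritten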
Hi all,
I think this question may be linked to the mail I sent (Feb 25) related
to an inconsistency between bucket and bucket.instance.
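For reference, the two kinds of entries can be listed and compared like this (a
sketch; Bucket001ghis is the bucket from that mail, the instance id is a
placeholder):
  radosgw-admin metadata list bucket
  radosgw-admin metadata list bucket.instance
  radosgw-admin metadata get bucket:Bucket001ghis
  radosgw-admin metadata get bucket.instance:Bucket001ghis:<instance-id>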
Best regards
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
baijia...@126.com
Sent: Monday, March 2, 2015 08:00
To: ceph-users;
Hi all,
Context: Firefly 0.80.8, Ubuntu 14.04 LTS, lab cluster
Yesterday, I successfully deleted an S3 bucket, Bucket001ghis, after removing
the contents that were in it.
Today, as I was browsing the radosgw system metadata, I discovered a
difference between the bucket metadata and the
Hi all
Context: Firefly 0.80.8, Ubuntu 14.04 LTS
I tried to change the debug level of a rados gateway live, using ceph daemon
/var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok config set debug_rgw
20; the response is { success: } but it has no effect.
Is there another parameter
Hi all,
Context: Ubuntu 14.04 LTS, firefly 0.80.7
I recently encountered the same issue as described below.
Maybe I missed something between July and January…
I found that the HTTP request wasn't correctly built by
/usr/lib/python2.7/dist-packages/radosgw_agent/client.py
I made the changes
Hi all
Context: firefly 0.80.7
OS: Ubuntu 14.04.1 LTS
I'd like to change the chunk size for objects stored with rgw to 1 MB (4 MB is
the default).
I changed ceph.conf, setting rgw object stripe size = 1048576, and restarted
the rgw.
The chunk size remains at 4 MB.
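One hypothesis, which I have not verified: the option seems to be registered
internally as rgw_obj_stripe_size ("obj", not "object"), so a sketch of what I
would try:
  # in ceph.conf, under [client.radosgw.<name>]:
  #   rgw obj stripe size = 1048576
  # after restarting the rgw, check the value actually loaded
  ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.fr-rennes-radosgw1.asok \
      config show | grep rgw_obj_stripe_size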
I saw in different exchanges
Before configuring a region and a zone, I would like to know which tags can be
updated in the bucket.instance metadata.
Are there restrictions according to the capabilities applied to radosgw-admin?
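A sketch of the mechanics, as far as I understand them (uid and bucket names
are placeholders): the user behind the admin REST API needs metadata caps, and
an entry can then be round-tripped with metadata get/put:
  # grant metadata read/write caps to the admin user
  radosgw-admin caps add --uid=admin --caps="metadata=read,write"
  # export, edit and re-import a bucket.instance entry
  radosgw-admin metadata get bucket.instance:mybucket:<instance-id> > bi.json
  radosgw-admin metadata put bucket.instance:mybucket:<instance-id> < bi.json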
-Original message-
From: ceph-users-boun...@lists.ceph.com
Thanks
Sorry for answering late
I'm going to implement region, zone and placement targets in order to reach my
goals.
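Roughly the workflow I expect to follow, as described in the
federated-configuration docs of that era (the json files and names are
placeholders to be prepared beforehand):
  # load the region definition, with its placement targets, and make it the default
  radosgw-admin region set --infile region.json
  radosgw-admin region default --rgw-region=myregion
  radosgw-admin regionmap update
  # load the zone definition, with its placement pools
  radosgw-admin zone set --rgw-zone=myzone --infile zone.json
  # then restart the gateways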
Best regards
-Original message-
From: Yehuda Sadeh [mailto:yeh...@inktank.com]
Sent: Friday, April 11, 2014 18:34
To: CHEVALIER Ghislain IMT/OLPS
Cc:
Hi all,
Context: Ceph dumpling on Ubuntu 12.04
I would like to manage the pools assigned to the Rados Gateway as accurately as
possible.
My goal is to apply specific SLAs to applications which use HTTP-driven storage.
I'd like to store the contents by associating a pool with a bucket, or a pool
with a
Hi all,
I'd like to report a strange behavior...
Context: lab platform
Ceph emperor
ceph-deploy 1.3.4
Ubuntu 12.04
Issue:
We have 3 OSDs up and running; we encountered no difficulties in creating them.
We tried to create an osd.3 using ceph-deploy on a storage node (r-cephosd301)
from an admin
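For completeness, the usual sequence with that version of ceph-deploy would be
something like this (a sketch; device names are placeholders):
  ceph-deploy osd prepare r-cephosd301:/dev/sdd:/dev/sde
  ceph-deploy osd activate r-cephosd301:/dev/sdd1:/dev/sde1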