Hi Yehuda,

I have configured the cluster, and its health is mostly active+clean (110 PGs are stuck unclean), as shown below:

root@mon:/etc/ceph# ceph status
    cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
     health HEALTH_WARN 110 pgs stuck unclean
     monmap e1: 1 mons at {mon=192.168.0.102:6789/0}, election epoch 2, quorum 0 mon
     osdmap e37: 2 osds: 2 up, 2 in
      pgmap v243: 2856 pgs, 11 pools, 1311 bytes data, 47 objects
            2122 MB used, 9068 MB / 11837 MB avail
                 110 active
                2746 active+clean
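
As a minimal sketch (assuming the standard ceph CLI on the monitor node), the stuck PGs could be inspected like this; the pool name is only a placeholder:

    # show the detailed health warning and list the PGs that are stuck unclean
    ceph health detail
    ceph pg dump_stuck unclean
    # with only 2 OSDs, pools that keep the default replica size of 3 cannot
    # reach active+clean; the replica count of a pool can be checked with
    ceph osd pool get <pool-name> size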

I have configured the RADOS Gateway with a user and a Swift access key, and the user info looks like this:

srinivas@srinivas:/etc/ceph$ sudo radosgw-admin user info --uid=shrinivas
[sudo] password for srinivas:
{ "user_id": "shrinivas",
  "display_name": "shrinivas",
  "email": "[email protected]",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [
        { "id": "shrinivas:swift",
          "permissions": "full-control"}],
  "keys": [
        { "user": "shrinivas",
          "access_key": "8ZWUIG95TR2S4KO1LZ2C",
          "secret_key": "snc7NDP2GL8Sq9Y4Y\/iugHDXERruzDytzpdbyEyo"}],
  "swift_keys": [
        { "user": "shrinivas:swift",
          "secret_key": "FeS9TmYlC0oxULbSq9PinY2J79chFKWWawaJi+SS"}],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1}}

Could you please clarify whether the last three lines of the output (the bucket_quota values of -1) are OK? If they indicate errors, how could I fix them? Please help me in moving forward.
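
In case those -1 values just mean that no quota has been set yet, here is a hedged sketch of how a bucket quota could be set with radosgw-admin (the limits are only example numbers):

    # set and enable a per-bucket quota for this user; 1000 objects / 102400 KB are placeholders
    radosgw-admin quota set --quota-scope=bucket --uid=shrinivas --max-objects=1000 --max-size-kb=102400
    radosgw-admin quota enable --quota-scope=bucket --uid=shrinivas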

Thanks,
Srinivas.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com