[ceph-users] Radosgw keeps writing to specific OSDs while there are other free OSDs

2015-02-21 Thread B L
Hi Ceph community, I’m trying to upload a 5GB file through radosgw. I have 9 OSDs deployed on 3 machines, and my cluster is healthy. The problem is: the 5GB file is being uploaded to osd.0 and osd.1, which are near full, while the other OSDs have more free space that could hold this
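
For anyone tracing the same symptom: where radosgw data lands is decided by CRUSH weights, not by free space, so near-full OSDs keep receiving writes until their weights are lowered. A minimal way to check and correct this (the target weight below is only an illustration):

  ceph health detail                    # lists any nearfull/full OSDs
  ceph df                               # per-pool and cluster-wide utilization
  ceph osd tree                         # CRUSH weight of each OSD
  ceph osd crush reweight osd.0 0.5     # example only: lower osd.0's weight so CRUSH steers new data elsewhere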

[ceph-users] My PG is UP and Acting, yet it is unclean

2015-02-17 Thread B L
Hi All, I have a group of PGs that are up and acting, yet they are not clean, causing the cluster to be in a warning mode, i.e. non-healthy. This is my cluster status: $ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 203 pgs stuck unclean; recovery 6/132
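
A quick way to see which PGs are stuck and why, before digging further (the PG id in the last command is only a placeholder):

  ceph health detail            # summarizes stuck/unclean PG counts per state
  ceph pg dump_stuck unclean    # lists the PGs counted as stuck unclean
  ceph pg 0.1 query             # placeholder PG id; shows peering state and acting set for one PG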

Re: [ceph-users] Having problem to start Radosgw

2015-02-16 Thread B L
a little, since it was my first experience installing RGW and adding it to the cluster. Now we can run it like: sudo service radosgw start — or — sudo /etc/init.d/radosgw start, and everything should work .. Thanks Yehuda for your support .. Beanos! On Feb 15, 2015, at 9:37 AM, B L super.itera

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
://gist.github.com/anonymous/90b77c168ed0606db03d Please let me know if you need anything else. Best! On Feb 14, 2015, at 6:22 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote: - Original Message - From: B L super.itera...@gmail.com To: ceph-users@lists.ceph.com Sent: Friday, February 13

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
Shall I run it like this: sudo radosgw -c ceph.conf -d strace -F -T -tt -o/tmp/strace.out radosgw -f On Feb 14, 2015, at 6:55 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote: strace -F -T -tt -o/tmp/strace.out radosgw -f
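
For clarity: strace has to wrap the radosgw invocation rather than being appended to it, so the combined command would look roughly like this (config path assumed to be the default):

  sudo strace -F -T -tt -o /tmp/strace.out radosgw -c /etc/ceph/ceph.conf -f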

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
Hello Yehuda, The strace command you referred me to shows this: https://gist.github.com/anonymous/8e9f1ced485996a263bb Additionally, I traced this log file: /var/log/radosgw/ceph-client.radosgw.gateway it has the following: 2015-02-12

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
do something more .. Now I have 2 questions: 1- which RADOS user do you refer to? 2- How would I know that I am using the wrong cephx keys unless I see an authentication error or a relevant warning? Thanks! Beanos On Feb 14, 2015, at 11:29 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote: From: B L
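
One way to answer the second question without waiting for an explicit auth error is to compare the key the monitors hold with the key the gateway loads from its keyring (the keyring path here is an assumption; adjust to your rgw keyring setting):

  ceph auth get client.radosgw.gateway           # key and caps as stored in the cluster
  cat /etc/ceph/ceph.client.radosgw.keyring      # key the radosgw client will present (assumed path)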

Re: [ceph-users] Having problem to start Radosgw

2015-02-14 Thread B L
- From: B L super.itera...@gmail.com To: Yehuda Sadeh-Weinraub yeh...@redhat.com Cc: ceph-users@lists.ceph.com Sent: Saturday, February 14, 2015 2:56:54 PM Subject: Re: [ceph-users] Having problem to start Radosgw Yehuda .. In case you will need to know more about my system Here

[ceph-users] Having problem to start Radosgw

2015-02-13 Thread B L
Hi all, I’m having a problem starting radosgw; it gives me an error that I can’t diagnose: $ radosgw -c ceph.conf -d 2015-02-14 07:46:58.435802 7f9d739557c0 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process radosgw, pid 27609 2015-02-14 07:46:58.437284 7f9d739557c0 -1
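
When radosgw dies in -d mode without a readable reason, raising the rgw and messenger debug levels on the command line usually surfaces the failing step (same invocation, just with extra logging; config path assumed default):

  radosgw -c /etc/ceph/ceph.conf -d --debug-rgw=20 --debug-ms=1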

[ceph-users] Can't add RadosGW keyring to the cluster

2015-02-12 Thread B L
Hi all, Trying to do this: ceph -k ceph.client.admin.keyring auth add client.radosgw.gateway -i ceph.client.radosgw.keyring Getting this error: Error EINVAL: entity client.radosgw.gateway exists but key does not match What can this be?? Thanks!
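
The error means the cluster already holds a client.radosgw.gateway entity with a different secret than the one in the local keyring. Two possible ways out, depending on which key should win (a sketch, assuming the same keyring file names as above):

  ceph auth get client.radosgw.gateway -o ceph.client.radosgw.keyring    # keep the cluster's key, overwrite the local one
  # -- or --
  ceph auth del client.radosgw.gateway
  ceph -k ceph.client.admin.keyring auth add client.radosgw.gateway -i ceph.client.radosgw.keyring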

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-11 Thread B L
). Best wishes, Vickie 2015-02-10 22:25 GMT+08:00 B L super.itera...@gmail.com: Thanks to everyone!! After applying the re-weighting command (ceph osd crush reweight osd.0 0.0095), my cluster is getting healthy now :)) But I have one question, what

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
, Vikhyat On 02/10/2015 07:31 PM, B L wrote: Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn't represent a float osd crush reweight name float[0.0-] : change name's weight to weight in crush map Error EINVAL
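
The "doesn't represent a float" error is just the arguments being swapped: the OSD name comes first and the weight second, so the intended command reads (weight value taken from the follow-up in this thread):

  ceph osd crush reweight osd.0 0.0095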

[ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Having a problem with my fresh non-healthy cluster; my cluster status summary shows this: ceph@ceph-node1:~$ ceph -s cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 pgp_num 64
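
Part of that warning is the pg_num/pgp_num mismatch on the data pool (128 vs 64); assuming the intent is to bring placement up to the full pg_num, the usual fix is:

  ceph osd pool set data pgp_num 128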

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
On Feb 10, 2015, at 12:37 PM, B L super.itera...@gmail.com wrote: Hi Vickie, Thanks for your reply! You can find the dump in this link: https://gist.github.com/anonymous/706d4a1ec81c93fd1eca Thanks! B. On Feb 10, 2015

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
in weight 1 up_from 22 up_thru 0 down_at 0 last_clean_interval [0,0) 172.31.3.56:6805/7019 172.31.3.56:6806/7019 172.31.3.56:6807/7019 172.31.3.56:6808/7019 exists,up da67b604-b32a-44a0-9920-df0774ad2ef3 On Feb 10, 2015, at 12:55 PM, B L super.itera...@gmail.com wrote: On Feb 10, 2015

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
and each of them has 2 disks. I have a question: what is the result of ceph osd tree? It looks like the osd status is down. Best wishes, Vickie 2015-02-10 19:00 GMT+08:00 B L super.itera...@gmail.com: Here is the updated direct copy/paste dump ceph@ceph-node1

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
I will try to change the replication size now as you suggested .. but how is that related to the non-healthy cluster? On Feb 10, 2015, at 1:22 PM, B L super.itera...@gmail.com wrote: Hi Vickie, My OSD tree looks like this: ceph@ceph-node3:/home/ubuntu$ ceph osd tree # id weight
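
For reference, the replication change being discussed is set per pool; a sketch for the default data pool, assuming the suggested values were size 2 and min_size 1:

  ceph osd pool set data size 2
  ceph osd pool set data min_size 1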

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
changes mean 2- How can changing the replication size cause the cluster to be unhealthy? Thanks Vickie! Beanos On Feb 10, 2015, at 1:28 PM, B L super.itera...@gmail.com wrote: I changed the size and min_size as you suggested while opening ceph -w in a different window, and I got

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
:40.769794 mon.0 [INF] pgmap v94: 256 pgs: 256 active+degraded; 0 bytes data, 200 MB used, 18165 MB / 18365 MB avail 2015-02-10 11:23:45.530713 mon.0 [INF] pgmap v95: 256 pgs: 256 active+degraded; 0 bytes data, 200 MB used, 18165 MB / 18365 MB avail On Feb 10, 2015, at 1:24 PM, B L super.itera

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
Thanks Vikhyat, As suggested .. ceph@ceph-node1:/home/ubuntu$ ceph osd crush reweight 0.0095 osd.0 Invalid command: osd.0 doesn't represent a float osd crush reweight name float[0.0-] : change name's weight to weight in crush map Error EINVAL: invalid command What do you think On Feb

Re: [ceph-users] Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

2015-02-10 Thread B L
. This means your OSD must be 10GB or greater! Udo On 10.02.2015 12:22, B L wrote: Hi Vickie, My OSD tree looks like this: ceph@ceph-node3:/home/ubuntu$ ceph osd tree # id weight type name up/down reweight -1 0 root default -2 0 host ceph-node1 0 0 osd.0 up 1 1 0 osd.1 up 1 -3 0 host ceph-node3 2 0 osd
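
Since every OSD in that tree carries a CRUSH weight of 0 (the disks fall below the roughly 10GB threshold Udo mentions), the cure applied later in the thread is to give each one a small explicit weight; a sketch assuming six OSDs numbered 0-5:

  for i in 0 1 2 3 4 5; do
      ceph osd crush reweight osd.$i 0.0095
  done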