[ceph-users] Reduce the size of the pool .log

2015-11-09 Thread Chang, Fangzhe (Fangzhe)
It seems that the pool .log grows in size as ceph runs over time. I've been 
using 20 placement groups (pgs) for the .log pool. Now it complains that 
"HEALTH_WARN pool .log has too few pgs". I don't have a good understanding of 
when ceph will remove the old log entries by itself. I saw some entries more 
than 9 months old, which I assume would not have any use in the future. If I'd 
like to reduce the size of the .log pool manually, what is the recommended way to 
do it? Using 'rados rm  ...' one object at a time seems very cumbersome.
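
For reference, a bulk-removal loop along these lines would be less cumbersome 
than going one object at a time, assuming every object in .log is truly 
disposable and object names contain no whitespace (untested sketch):

# rados -p .log ls | while read obj; do rados -p .log rm "$obj"; done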

# ceph health detail
HEALTH_WARN pool .log has too few pgs
pool .log objects per pg (8829) is more than 12.9648 times cluster average (681)

# ceph osd dump |grep .log
pool 13 '.log' replicated size 3 min_size 2 crush_ruleset 0 object_hash 
rjenkins pg_num 20 pgp_num 20 last_change 303 owner 18446744073709551615 flags 
hashpspool min_read_recency_for_promote 1 stripe_width 0

# ceph df detail |grep .log
.log   13 - 836M  0.01 2476G   176581  172k 2  235k
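
Side note on the warning itself: it reflects the objects-per-pg ratio, so 
raising the pool's placement group count should clear it. A hedged example 
(64 is arbitrary; pg_num can only ever be increased, and pgp_num should be 
raised to match):

# ceph osd pool set .log pg_num 64
# ceph osd pool set .log pgp_num 64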

Thanks,

Fangzhe Chang
PS.
Somehow I can only receive digests from the ceph-users list and am not 
notified about responses immediately. If you don't see my reaction to your 
answers, it is most likely because I have not yet seen your replies ;-(. 
Thanks very much for any help.





[ceph-users] Recommended way of leveraging multiple disks by Ceph

2015-09-15 Thread Chang, Fangzhe (Fangzhe)
Hi,

I'd like to run Ceph on a few machines, each of which has multiple disks. The 
disks are heterogeneous: some are rotational disks with larger capacities, while 
others are smaller solid-state disks. What are the recommended ways of running 
Ceph OSDs on them?

Two possible approaches are:

1)  Deploy an OSD instance on each hard disk. For instance, if a machine 
has six hard disks, there will be six OSD instances running on it. In this 
case, does Ceph's replication algorithm recognize that these OSDs are on the 
same machine and therefore avoid placing replicas on disks/OSDs of the same 
machine? (See the rule sketch after this list.)

2)  Create a logical volume spanning multiple hard disks of a machine and 
run a single OSD per machine.
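
Regarding the replica-placement question in option 1: on a default install, 
the replicated CRUSH rule separates replicas at the host level rather than 
the disk level, which can be checked with something like the following (the 
rule name may differ on your cluster; output abridged):

# ceph osd crush rule dump replicated_ruleset
...
    "steps": [
        { "op": "take", "item_name": "default" },
        { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
        { "op": "emit" }]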

If you have previous experience, benchmarking results, or a pointer to the 
relevant documentation, please share it with me and other users. Thanks a 
lot.

Cheers,

Fangzhe


[ceph-users] OSD refuses to start after problem with adding monitors

2015-09-14 Thread Chang, Fangzhe (Fangzhe)
Hi,

I started a new Ceph cluster with a single OSD, and later added two new 
OSDs on different machines using ceph-deploy. The OSD data directories reside on 
a separate disk from the conventional /var/lib/ceph/osd/ceph- directory.  
Correspondingly, I changed the replication factor (size) to 3, though the min_size 
parameter stays at 1.

As a next step, I tried to expand the number of monitors. However, the attempt 
to add two new monitors using ceph-deploy failed. The 'ceph status' command 
only reveals the original monitor, whereas the two new monitors are visible when 
retrieving the monmap. To resolve the problem, I looked around and found the 
'ceph mon add' command. The moment I tried this command, everything got stuck: 
'ceph status' simply hangs, and the Ceph daemons can no longer be started --- it 
seems that the osd subcommand times out.

Any clue on where to look for the problems, or how to fix them?
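
In case it helps anyone else who lands here: since 'ceph status' hangs, the 
monmap can still be inspected and edited offline through the monitor's own 
store, roughly as follows (the mon id and the bad entry name are placeholders; 
stop the monitor daemon before extracting or injecting):

# ceph-mon -i <mon-id> --extract-monmap /tmp/monmap
# monmaptool --print /tmp/monmap
# monmaptool --rm <bad-mon-name> /tmp/monmap
# ceph-mon -i <mon-id> --inject-monmap /tmp/monmap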

Another small problem: since the OSD data directory is not provided in 
/etc/ceph/ceph.conf, I'm wondering how ceph knows where to find it.
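
A hedged note on that side question: disks prepared by ceph-deploy/ceph-disk 
are tagged with GPT partition type UUIDs, and udev triggers ceph-disk to mount 
and activate them at boot, so the data path never needs to appear in ceph.conf. 
The mapping can be listed with:

# ceph-disk list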

Thanks

Fangzhe



[ceph-users] Ceph/Radosgw v0.94 Content-Type versus Content-type

2015-09-09 Thread Chang, Fangzhe (Fangzhe)
I noticed that the S3 Java SDK's getContentType() no longer works with 
Ceph/Radosgw v0.94 (Hammer). It seems that the S3 SDK expects the metadata 
header “Content-Type” whereas ceph responds with “Content-type”.
Does anyone know how to file a request to get this issue fixed?
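
A quick way to confirm exactly which header casing the gateway returns 
(endpoint, bucket, and object names here are hypothetical; note that HTTP 
header names are case-insensitive per the RFCs, which is why this looks like 
client-side strictness):

$ curl -sI http://rgw.example.com/mybucket/myobject | grep -i content-type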

Fangzhe




Re: [ceph-users] Cannot add/create new monitor on ceph v0.94.3

2015-09-08 Thread Chang, Fangzhe (Fangzhe)
Thanks for the answer.

NTP is running on both the existing monitor and the new monitor being installed.
I did run ceph-deploy in the same directory in which I created the cluster. 
However, I needed to tweak the options supplied to ceph-deploy a little, since 
I was running it behind a corporate firewall.

I noticed the ceph-create-keys process running in the background. When I ran 
it manually, I got the following results.

$ python /usr/sbin/ceph-create-keys --cluster ceph -i 
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
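
The probing state can also be inspected directly through the monitor's admin 
socket, which responds even without quorum (the mon id is a placeholder):

$ sudo ceph daemon mon.<mon-id> mon_status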


-Original Message-
From: Brad Hubbard [mailto:bhubb...@redhat.com] 
Sent: Sunday, September 06, 2015 11:58 PM
To: Chang, Fangzhe (Fangzhe)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cannot add/create new monitor on ceph v0.94.3

- Original Message -
> From: "Fangzhe Chang (Fangzhe)" <fangzhe.ch...@alcatel-lucent.com>
> To: ceph-users@lists.ceph.com
> Sent: Saturday, 5 September, 2015 6:26:16 AM
> Subject: [ceph-users] Cannot add/create new monitor on ceph v0.94.3
> 
> 
> 
> Hi,
> 
> I’m trying to add a second monitor using ‘ceph-deploy mon new  hostname>’. However, the log file shows the following error:
> 
> 2015-09-04 16:13:54.863479 7f4cbc3f7700 0 cephx: verify_reply couldn't 
> decrypt with error: error decoding block for decryption
> 
> 2015-09-04 16:13:54.863491 7f4cbc3f7700 0 -- :6789/0 
> >> :6789/0 pipe(0x413 sd=12 :57954 s=1 pgs=0 
> cs=0 l=0 c=0x3f29600).failed verifying authorize reply

A couple of things to look at are verifying all your clocks are in sync (ntp 
helps here) and making sure you are running ceph-deploy in the directory you 
used to create the cluster.

> 
> 
> 
> Does anyone know how to resolve this?
> 
> Thanks
> 
> 
> 
> Fangzhe
> 
> 
> 


[ceph-users] Cannot add/create new monitor on ceph v0.94.3

2015-09-04 Thread Chang, Fangzhe (Fangzhe)
Hi,
I’m trying to add a second monitor using ‘ceph-deploy mon new <hostname>’. 
However, the log file shows the following error:
2015-09-04 16:13:54.863479 7f4cbc3f7700  0 cephx: verify_reply couldn't decrypt 
with error: error decoding block for decryption
2015-09-04 16:13:54.863491 7f4cbc3f7700  0 -- :6789/0 >> 
:6789/0 pipe(0x413 sd=12 :57954 s=1 pgs=0 cs=0 l=0 
c=0x3f29600).failed verifying authorize reply

Does anyone know how to resolve this?
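
For anyone else hitting this: a verify_reply decryption failure usually comes 
down to the monitors disagreeing on their cephx keys or on their clocks. Two 
hedged checks, assuming the default keyring path:

# grep key /var/lib/ceph/mon/ceph-$(hostname -s)/keyring   # must be identical on every mon host
# ntpq -p                                                  # offsets should be small on every mon host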
Thanks

Fangzhe



Re: [ceph-users] Migrating data into a newer ceph instance

2015-08-26 Thread Chang, Fangzhe (Fangzhe)
Thanks, Luis.

The motivation for using the newer version is to keep up to date with Ceph 
development, since we suspect the old radosgw could not be restarted possibly 
due to a library mismatch.
Do you know whether Ceph's self-healing applies across different versions or 
not?

Fangzhe

From: Luis Periquito [mailto:periqu...@gmail.com]
Sent: Wednesday, August 26, 2015 10:11 AM
To: Chang, Fangzhe (Fangzhe)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Migrating data into a newer ceph instance

I would say the easiest way would be to leverage all the self-healing of Ceph: 
add the new nodes to the old cluster, allow or force all the data to migrate 
between nodes, and then take the old ones out.

Well, to be fair, you could probably just install radosgw on another node and 
use it as your gateway, without even needing to create a new OSD node...

Or was there a reason to create a new cluster? I can tell you that one of the 
clusters I have has been around since Bobtail, and now it's on Hammer...
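
For concreteness, a hedged sketch of that add-then-drain approach (the host, 
device, and OSD id are placeholders; wait for the cluster to return to 
active+clean between steps):

# ceph-deploy osd create newhost:/dev/sdb   # bring a new OSD into the old cluster
# ceph osd out <old-id>                     # start draining an old OSD; watch 'ceph -w'
# ceph osd crush remove osd.<old-id>        # once drained: drop it from CRUSH,
# ceph auth del osd.<old-id>                # delete its key,
# ceph osd rm <old-id>                      # and deregister it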

On Wed, Aug 26, 2015 at 2:50 PM, Chang, Fangzhe (Fangzhe) 
<fangzhe.ch...@alcatel-lucent.com> wrote:
Hi,

We have been running Ceph/Radosgw version 0.80.7 (Firefly) and have stored quite 
some amount of data in it. We are only using ceph as an object store via radosgw. 
Last week the ceph-radosgw daemon suddenly refused to start (with logs only 
showing an “initialization timeout” error on CentOS 7). This prompted me to 
install a newer instance --- Ceph/Radosgw version 0.94.2 (Hammer). The new 
instance has a different set of keyrings by default. The next step is to migrate 
all the data. Does anyone know how to get the existing data out of the old ceph 
cluster (Firefly) and into the new instance (Hammer)? Please note that in the old 
three-node cluster the ceph OSDs are still running but radosgw is not. Any 
suggestion will be greatly appreciated.
Thanks.

Regards,

Fangzhe Chang






[ceph-users] Migrating data into a newer ceph instance

2015-08-26 Thread Chang, Fangzhe (Fangzhe)
Hi,

We have been running Ceph/Radosgw version 0.80.7 (Firefly) and have stored quite 
some amount of data in it. We are only using ceph as an object store via radosgw. 
Last week the ceph-radosgw daemon suddenly refused to start (with logs only 
showing an "initialization timeout" error on CentOS 7). This prompted me to 
install a newer instance --- Ceph/Radosgw version 0.94.2 (Hammer). The new 
instance has a different set of keyrings by default. The next step is to migrate 
all the data. Does anyone know how to get the existing data out of the old ceph 
cluster (Firefly) and into the new instance (Hammer)? Please note that in the old 
three-node cluster the ceph OSDs are still running but radosgw is not. Any 
suggestion will be greatly appreciated.
Thanks.
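
One hedged possibility, given that the old OSDs are still up: stream objects 
pool by pool between the two clusters using two conf files. Note that this 
sketch copies object data only --- it would not carry over rgw bucket indexes, 
xattrs, or omap metadata --- so treat it as a starting point, not a proven 
migration path (the conf paths and pool name are placeholders):

rados -c /etc/ceph/old.conf -p .rgw.buckets ls | while read obj; do
  rados -c /etc/ceph/old.conf -p .rgw.buckets get "$obj" - \
    | rados -c /etc/ceph/new.conf -p .rgw.buckets put "$obj" -
done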

Regards,

Fangzhe Chang


