Re: [ceph-users] ceph-mon not starting on system startup (Ubuntu 16.04 / systemd)

2016-11-14 Thread Craig Chi
been merged. Sincerely, Craig Chi (Product Developer) Synology Inc. Taipei, Taiwan. On 2016-11-15 01:32, David Turner wrote: > > > > I had to set my mons to sysvinit while my osds are systemd. That allows > everything to start up when my system boots. I don't know w
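For context, the sysvinit workaround amounts to dropping a marker file in the mon's data directory so the init glue starts the daemon at boot, while the systemd route is simply enabling the instance unit. A minimal sketch, assuming the default cluster name "ceph" and a hypothetical mon id "mon1":

    # mark the mon as sysvinit-managed
    sudo touch /var/lib/ceph/mon/ceph-mon1/sysvinit
    # or stay on systemd and enable the unit explicitly
    sudo systemctl enable ceph-mon@mon1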

Re: [ceph-users] ceph-mon not starting on system startup (Ubuntu 16.04 / systemd)

2016-11-15 Thread Craig Chi
-mon.target -rw-r--r-- root/root 162 2016-06-14 20:22 ./lib/systemd/system/ceph-mon.target I would recommend the latter solution. Sincerely, Craig Chi (Product Developer) Synology Inc. Taipei, Taiwan. On 2016-11-15 18:33, Matthew Vernon wrote: > Hi, On 15/11/16 01:27, Craig Chi wrote: > What'
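For readers hitting the same problem, enabling the packaged systemd units so the mon is pulled in at boot usually looks like the following; a sketch, assuming the default cluster name and a hypothetical mon id "mon1":

    sudo systemctl enable ceph.target ceph-mon.target
    sudo systemctl enable ceph-mon@mon1
    sudo systemctl start ceph-mon@mon1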

Re: [ceph-users] how possible is that ceph cluster crash

2016-11-18 Thread Craig Chi
experiences about nobarrier and xfs. Sincerely, Craig Chi (Product Developer) Synology Inc. Taipei, Taiwan. Ext. 361 On 2016-11-17 05:04, Nick Fisk wrote: > > -----Original Message----- > From: ceph-users > > [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pedro Benites >
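For readers unfamiliar with the option under discussion: nobarrier is an XFS mount option that disables write barriers and is generally only considered safe when the device sits behind a non-volatile (battery- or flash-backed) write cache. An illustrative fstab line, with a hypothetical device and mount point:

    /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,nobarrier  0 0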

[ceph-users] RBD lost parents after rados cppool

2016-11-19 Thread Craig Chi
I think it may be caused by the change of the "images" pool id, right? Is it possible to re-reference the rbds in "volumes" to the new "images" pool? Or is it possible to change or specify the pool id of the new pool? Any suggestions are very welcome. Thanks. Sincerely, Craig Ch

Re: [ceph-users] RBD lost parents after rados cppool

2016-11-21 Thread Craig Chi
Hi Jason, This really did the trick! I can now rescue my rbds, thank you very much! Sincerely, Craig Chi (Product Developer) Synology Inc. Taipei, Taiwan. On 2016-11-21 21:44, Jason Dillaman wrote: > You are correct -- rbd uses the pool id as a reference and now your pool has > a new id.

[ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-24 Thread Craig Chi
journal max write bytes = 1048576000 journal max write entries = 1000 journal queue max ops = 3000 journal queue max bytes = 1048576000 ms dispatch throttle bytes = 1048576000 Sincerely, Craig Chi ___ ceph-users mailing list ceph-users@lists.ceph.com h
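Laid out as they would appear in ceph.conf, the journal and throttle settings quoted above look like this (values copied from the snippet; whether they are sensible depends entirely on the hardware):

    [osd]
    journal max write bytes = 1048576000
    journal max write entries = 1000
    journal queue max ops = 3000
    journal queue max bytes = 1048576000
    ms dispatch throttle bytes = 1048576000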

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-24 Thread Craig Chi
: vm.swappiness=10 kernel.pid_max=4194303 fs.file-max=26234859 vm.zone_reclaim_mode=0 vm.vfs_cache_pressure=50 vm.min_free_kbytes=4194303 I would try setting vm.min_free_kbytes larger and test. I will be grateful if anyone has experience tuning these values for Ceph. Sincerely, Craig Chi
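For completeness, these kernel settings would normally live in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and be applied with sysctl -p; values copied from the snippet above:

    vm.swappiness=10
    kernel.pid_max=4194303
    fs.file-max=26234859
    vm.zone_reclaim_mode=0
    vm.vfs_cache_pressure=50
    vm.min_free_kbytes=4194303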

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-24 Thread Craig Chi
your kindness and useful suggestions. Sincerely, Craig Chi On 2016-11-25 07:23, Brad Hubbard wrote: > Two of these appear to be hung task timeouts and the other is an invalid > opcode. > There is no evidence here of memory exhaustion (although it remains to be > seen whether this is

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-24 Thread Craig Chi
responsible for each kernel hang, since most of the time we could not retrieve any related log once the kernel became inactive. Sincerely, Craig Chi (Product Developer) Synology Inc. Taipei, Taiwan. On 2016-11-25 09:46, Craig Chi wrote: > Hi Nick, > > I have seen the report bef

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-28 Thread Craig Chi
network connection still high and the > kernel hang issue continued. Now we are still struggling with this problem. Please kindly instruct us if you have any directions. Sincerely, Craig Chi On 2016-11-25 21:26, Nick Fisk wrote: > > Hi, > > > >

Re: [ceph-users] Ceph OSDs cause kernel unresponsive

2016-11-28 Thread Craig Chi
enough under normal circumstances?) Thank you very much. Sincerely, Craig Chi On 2016-11-29 10:27, Brad Hubbard wrote: > > > On Tue, Nov 29, 2016 at 3:12 AM, Craig > Chi <craig...@synology.com> wrote: > > Hi guys, > > > > Thanks to both of your sugges

[ceph-users] Wrong pg count when pg number is large

2016-12-01 Thread Craig Chi
lete,nopgchange,nosizechange stripe_width 0 I think I created 25600 pgs in total, but ceph -s randomly reported 25600 / 51200. However ceph -w always reported 51200 on the latest line. Is this a kind of bug, or was I just doing something wrong? Feel free to let me know if you need mo
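One way to cross-check the reported totals is to ask each pool for its pg_num directly and compare against the pg statistics; a sketch with hypothetical pool names:

    ceph pg stat
    for p in volumes images vms backups; do ceph osd pool get $p pg_num; done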

Re: [ceph-users] How to start/restart osd and mon manually (not by init script or systemd)

2016-12-12 Thread Craig Chi
failure. Sincerely, Craig Chi On 2016-12-11 10:18, WANG Siyuan wrote: > Hi, all > I want to deploy ceph manually. When I finish the config, I need to start mon and > osd manually. > I used these commands. I found these commands in systemd/ceph-mon@.service > and systemd/ceph-osd@.service
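For reference, the ExecStart lines in those unit files boil down to roughly the following, so running the daemons in the foreground by hand looks like this (a sketch assuming cluster "ceph", a mon id equal to the short hostname, and osd id 0):

    /usr/bin/ceph-mon -f --cluster ceph --id mon1 --setuser ceph --setgroup ceph
    /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph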

Re: [ceph-users] Wrong pg count when pg number is large

2016-12-12 Thread Craig Chi
Hi Greg, Sorry, I didn't preserve the environment due to urgent needs. However, I think you are right, because at that time I had just purged all pools and re-created them in a short time. Thank you very much! Sincerely, Craig Chi On 2016-12-13 14:21, Gregory Farnum wrote: > On Thu, Dec 1, 2016

Re: [ceph-users] OSD creation and sequencing.

2016-12-16 Thread Craig Chi
Hi Daniel, If you deploy your cluster manually, you can specify the OSD number as you wish. Here are the steps for manual deployment: http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-osds Sincerely, Craig Chi On 2016-12-16 21:51, Daniel Corley wrote: > &
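As a rough outline of that manual procedure (paraphrased from the linked docs; exact syntax varies by release), the desired id can be passed to ceph osd create and reused in the later steps:

    ceph osd create <uuid> <id>                 # returns the osd id
    mkdir /var/lib/ceph/osd/ceph-<id>
    ceph-osd -i <id> --mkfs --mkkey
    ceph auth add osd.<id> osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-<id>/keyring
    ceph osd crush add osd.<id> <weight> host=<hostname>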

Re: [ceph-users] Why I don't see "mon osd min down reports" in "config show" report result?

2016-12-23 Thread Craig Chi
. https://github.com/ceph/ceph/blob/master/src/common/config_opts.h You should switch to the branch that matches the version you are running. Sincerely, Craig Chi On 2016-12-23 18:55, Stéphane Klein wrote: > Hi, > when I execute: > > ``` > root@ceph-mon-1:/home/vagrant# ceph --admin-daemon > /var/run/ceph/c
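A quicker check than grepping config_opts.h, when the option exists in the running release, is to ask the daemon for it directly over the admin socket; a sketch, with a hypothetical socket path:

    ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config get mon_osd_min_down_reports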

Re: [ceph-users] Ceph - Health and Monitoring

2017-01-02 Thread Craig Chi
Hello, I suggest Prometheus with ceph_exporter (https://github.com/digitalocean/ceph_exporter) and Grafana for the UI. It can also monitor the node's health and any other services you want, and it has a beautiful UI. Sincerely, Craig Chi On 2017-01-02 21:32, ulem...@polarzone.de wrote: > Hi
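A minimal Prometheus scrape job for that exporter might look like the following; the target host and port are assumptions (ceph_exporter's usual default port is 9128, but check its README):

    scrape_configs:
      - job_name: 'ceph'
        static_configs:
          - targets: ['ceph-admin-host:9128']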

[ceph-users] High OSD apply latency right after new year (the leap second?)

2017-01-04 Thread Craig Chi
Hi List, Three of our Ceph OSDs got unreasonably high latency right after the first second of the new year (2017/01/01 00:00:00 UTC; I have attached the metrics, and I am in the UTC+8 timezone). There is exactly one pg (size=3) that contains just these 3 OSDs. The OSD apply latency is usually up to 25 min

Re: [ceph-users] High OSD apply latency right after new year (the leap second?)

2017-01-05 Thread Craig Chi
Hi, I'm glad to know it didn't happen only to me. Though it is harmless, it seems like a kind of bug... Are there any Ceph developers who know exactly how the "ceph osd perf" command is implemented? Is the leap second really responsible for this behavior? Thanks. Since

[ceph-users] pg stuck in peering while power failure

2017-01-10 Thread Craig Chi
Hi List, I am testing the stability of my Ceph cluster under power failure. I brutally powered off 2 Ceph units, each with 90 OSDs, while client I/O was continuing. Since then, some of the pgs of my cluster are stuck in peering: pgmap v3261136: 17408 pgs, 4 pools, 176 TB data, 5082 kobject
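When pgs sit in peering like this, the usual next step is to ask one of the stuck pgs why; a sketch with a hypothetical pg id:

    ceph health detail | grep peering
    ceph pg 1.2f3 query    # inspect "recovery_state" and "blocked_by"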

Re: [ceph-users] pg stuck in peering while power failure

2017-01-10 Thread Craig Chi
from different host after 20.072026 >= grace 20.00) But that OSD was not actually dead; more likely it was just slow to respond to heartbeats. I think increasing osd_heartbeat_grace may somewhat mitigate the issue. Sincerely, Craig Chi On 2017-01-11 00:08, Samuel Just wrote: > { "
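For anyone who wants to experiment with that, the grace can be raised persistently in ceph.conf or injected at runtime; a sketch (the value 40 is only an example, not a recommendation, and the mons need to know the setting too):

    # ceph.conf, [global] or [osd]/[mon] sections
    osd heartbeat grace = 40
    # or at runtime
    ceph tell osd.* injectargs '--osd-heartbeat-grace 40'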

Re: [ceph-users] Migrating data from a Ceph clusters to another

2017-02-09 Thread Craig Chi
Hi John, rbd mirroring can be configured per pool: http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ However, rbd mirroring can only be used on rbd images with the layering feature; it cannot mirror objects other than rbd for you. Sincerely, Craig Chi On 2017-02-09 16:24, Irek Fasikhov wrote
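For pool-level mirroring, the per-cluster setup is roughly the following (a sketch with a hypothetical pool name and peer; see the linked doc for the full peer and daemon setup):

    rbd mirror pool enable volumes pool
    rbd mirror pool peer add volumes client.remote@remote-cluster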

Re: [ceph-users] Migrating data from a Ceph clusters to another

2017-02-09 Thread Craig Chi
Hi, Sorry, I gave the wrong feature. rbd mirroring can only be used on rbd images with the "journaling" feature (not layering). Sincerely, Craig Chi On 2017-02-09 16:41, Craig Chi wrote: > Hi John, > > rbd mirroring can be configured per > pool: http://docs.ceph.com/docs/ma
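Enabling the required feature on an existing image looks like this (a sketch with hypothetical pool/image names; journaling requires exclusive-lock to be enabled first):

    rbd feature enable volumes/myimage exclusive-lock
    rbd feature enable volumes/myimage journaling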

Re: [ceph-users] - permission denied on journal after reboot

2017-02-13 Thread Craig Chi
/lib/udev/rules.d/95-ceph-osd.rules:16 RUN '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name' /lib/udev/rules.d/95-ceph-osd.rules:16 ... Then /dev/sdb2 will have ceph:ceph permission automatically. #> ls -l /dev/sdb2 brw-rw---- 1 ceph ceph 8, 18 Feb 13 19:43 /dev/sdb2 Sincerel
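For anyone fixing this by hand: the journal partition can be chowned directly as a stopgap, and setting the Ceph journal GPT type code is what makes the udev rule match on later reboots; a sketch with a hypothetical device (double-check the type GUID against ceph-disk on your release):

    sudo chown ceph:ceph /dev/sdb2
    sudo sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdb
    sudo partprobe /dev/sdb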