been merged.
Sincerely,
Craig Chi (Product Developer)
Synology Inc. Taipei, Taiwan.
On 2016-11-15 01:32, David Turner wrote:
>
>
>
> I had to set my mons to sysvinit while my osds are systemd. That allows
> everything to start up when my system boots. I don't know w
-mon.target
-rw-r--r-- root/root 162 2016-06-14 20:22 ./lib/systemd/system/ceph-mon.target
I would recommend the latter solution.
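If you go with the systemd route, explicitly enabling the units should let everything start on boot. A rough sketch, assuming the default "ceph" cluster name; please verify the unit names on your version:
# on each mon node: enable the target and the per-daemon instance
sudo systemctl enable ceph-mon.target
sudo systemctl enable ceph-mon@$(hostname -s)
# same idea for OSDs, one instance per OSD id
sudo systemctl enable ceph-osd@0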
Sincerely,
Craig Chi (Product Developer)
Synology Inc. Taipei, Taiwan.
On 2016-11-15 18:33, Matthew Vernon wrote:
> Hi, On 15/11/16 01:27, Craig Chi wrote: > What'
experiences about nobarrier
and xfs.
Sincerely,
Craig Chi (Product Developer)
Synology Inc. Taipei, Taiwan. Ext. 361
On 2016-11-17 05:04, Nick Fisk wrote:
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Pedro Benites >
I think it may be caused by the change of the "images" pool id, right?
Is it possible to re-reference the rbds in "volumes" to the new "images" pool? Or
is it possible to change or specify the pool id of the new pool?
Any suggestions are very welcome. Thanks
Sincerely,
Craig Chi
Hi Jason,
This really did the trick!
I can now rescue my rbds, thank you very much!
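For the record, the broken parent reference can be inspected like this (the pool and image names below are only examples):
# a clone records its parent by pool id internally; rbd info resolves it to a name
rbd info volumes/volume-xxxx | grep parent
# list pools together with their numeric ids to spot the mismatch
ceph osd lspools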
Sincerely,
Craig Chi (Product Developer)
Synology Inc. Taipei, Taiwan.
On 2016-11-21 21:44, Jason Dillaman wrote:
> You are correct -- rbd uses the pool id as a reference and now your pool has
> a new id.
journal max write bytes = 1048576000
journal max write entries = 1000
journal queue max ops = 3000
journal queue max bytes = 1048576000
ms dispatch throttle bytes = 1048576000
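These can also be checked or adjusted on a running OSD through the admin socket. A sketch, assuming osd.0; adjust the daemon id as needed:
ceph daemon osd.0 config get journal_max_write_bytes
ceph daemon osd.0 config set journal_queue_max_ops 3000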
Sincerely,
Craig Chi
:
vm.swappiness=10
kernel.pid_max=4194303
fs.file-max=26234859
vm.zone_reclaim_mode=0
vm.vfs_cache_pressure=50
vm.min_free_kbytes=4194303
I would try configuring vm.min_free_kbytes to a larger value and test.
I will be grateful if anyone has experience with how to tune these values for
Ceph.
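In case it helps, this is how I apply them persistently. A sketch; the paths and the test value are assumptions for my distribution:
# keep the values above in /etc/sysctl.d/90-ceph.conf, then reload everything
sysctl --system
# or try a single larger value immediately (8388608 is just an example)
sysctl -w vm.min_free_kbytes=8388608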
Sincerely,
Craig Chi
your kindness and useful suggestions.
Sincerely,
Craig Chi
On 2016-11-25 07:23, Brad Hubbard wrote:
> Two of these appear to be hung task timeouts and the other is an invalid
> opcode.
> There is no evidence here of memory exhaustion (although it remains to be
> seen whether this is
responsible for each kernel hang,
since most of the time we could not retrieve any related logs once the kernel
became unresponsive.
Sincerely,
Craig Chi (Product Developer)
Synology Inc. Taipei, Taiwan.
On 2016-11-25 09:46, Craig Chi wrote:
> Hi Nick,
>
> I have seen the report bef
etwork connection still high and the
> kernel hang issue continued.
Now we are still struggling with this problem.
Please kindly advise us if you have any directions to try.
Sincerely,
Craig Chi
On 2016-11-25 21:26, Nick Fisk wrote:
>
> Hi,
>
>
>
>
nough under normal circumstances?)
Thank you very much.
Sincerely,
Craig Chi
On 2016-11-29 10:27, Brad Hubbard wrote:
>
>
> On Tue, Nov 29, 2016 at 3:12 AM, Craig Chi
> (mailto:craig...@synology.com) wrote:
> > Hi guys,
> >
> > Thanks to both of your sugges
lete,nopgchange,nosizechange stripe_width 0
I think I created 25600 pgs in total, but ceph -s randomly reported either 25600 or 51200.
However, ceph -w always reported 51200 on the latest line.
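The per-pool numbers can be cross-checked with the following (illustrative commands only):
# sum of pg_num over all pools
ceph osd pool ls detail | grep pg_num
# total pg count as the cluster reports it
ceph pg stat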
Is this a kind of bug, or was I doing something wrong? Feel free to let me
know if you need mo
ilure.
Sincerely,
Craig Chi
On 2016-12-11 10:18, WANG Siyuan wrote:
> Hi, all
> I want to deploy ceph manually. When I finish config, I need to start mon and
> osd manually.
> I used these commands. I found these commands in systemd/ceph-mon@.service
> and systemd/ceph-osd@.service
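For reference, the ExecStart commands in those unit files look roughly like this on recent releases. A sketch only; please check the units shipped with your version:
/usr/bin/ceph-mon -f --cluster ceph --id <mon-id> --setuser ceph --setgroup ceph
/usr/bin/ceph-osd -f --cluster ceph --id <osd-id> --setuser ceph --setgroup ceph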
Hi Greg,
Sorry, I didn't preserve the environment due to urgent needs.
However, I think you are right, because at that time I had just purged all pools and
re-created them in a short time. Thank you very much!
Sincerely,
Craig Chi
On 2016-12-13 14:21, Gregory Farnum wrote:
> On Thu, Dec 1, 2016
Hi Daniel,
If you deploy your cluster with the manual method, you can specify the OSD number as
you wish.
Here are the steps of manual deployment:
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-osds
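A condensed sketch of the relevant part (the OSD id 12, the uuid, and the paths are only examples; follow the linked document for the full procedure):
# ask for a specific OSD id: uuid first, desired id second
ceph osd create $(uuidgen) 12
mkdir -p /var/lib/ceph/osd/ceph-12
# initialize the data directory and register the key with the cluster
ceph-osd -i 12 --mkfs --mkkey
ceph auth add osd.12 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-12/keyring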
Sincerely,
Craig Chi
On 2016-12-16 21:51, Daniel Corley wrote:
>
&
.
https://github.com/ceph/ceph/blob/master/src/common/config_opts.h
You should switch to the branch matching the version you are using.
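To see what a running daemon actually uses, the admin socket also works. A sketch; the socket name below assumes a mon called ceph-mon-1:
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep <option>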
Sincerely,
Craig Chi
On 2016-12-23 18:55, Stéphane Klein wrote:
> Hi,
> when I execute:
>
> ```
> root@ceph-mon-1:/home/vagrant# ceph --admin-daemon
> /var/run/ceph/c
Hello,
I suggest Prometheus
with ceph_exporter (https://github.com/digitalocean/ceph_exporter) and Grafana
(UI). It can also monitor the node's health and any other services you want,
and it has a beautiful UI.
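A minimal Prometheus scrape config for the exporter could look like this (the target host and the port 9128 are assumptions; check the exporter's README):
scrape_configs:
  - job_name: 'ceph'
    static_configs:
      - targets: ['ceph-admin:9128']   # node running ceph_exporter (assumed)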
Sincerely,
Craig Chi
On 2017-01-02 21:32, ulem...@polarzone.de wrote:
> Hi
Hi List,
Three of our Ceph OSDs got unreasonably high latency right after the first
second of the new year (2017/01/01 00:00:00 UTC, I have attached the metrics
and I am in UTC+8 timezone). There is exactly one pg (size=3) that contains just these
3 OSDs.
The OSD apply latency is usually up to 25 min
Hi,
I'm glad to know that it happened not only to me.
Though it is harmless, it seems like a kind of bug...
Are there any Ceph developers who know exactly how the
"ceph osd perf" command is implemented?
Is the leap second really responsible for this behavior?
Thanks.
Since
Hi List,
I am testing the stability of my Ceph cluster under power failure.
I brutally powered off 2 Ceph units, each with 90 OSDs, while the client
I/O was continuing.
Since then, some of the pgs of my cluster have been stuck in peering
pgmap v3261136: 17408 pgs, 4 pools, 176 TB data, 5082 kobject
from different host after
20.072026 >= grace 20.00)
But that OSD was not actually dead; more likely it just responded slowly to
heartbeats. What I think is that increasing osd_heartbeat_grace may somehow
mitigate the issue.
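Something like this in ceph.conf is what I have in mind (the value 45 is only an example to test with; the default is 20 seconds, and it should be visible to both OSDs and mons):
[global]
osd heartbeat grace = 45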
Sincerely,
Craig Chi
On 2017-01-11 00:08, Samuel Just wrote:
> { "
Hi John,
rbd mirroring can be configured per
pool. http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
However, the rbd mirroring method can only be used on rbd images with the layering feature;
it cannot mirror objects other than rbd for you.
Sincerely,
Craig Chi
On 2017-02-09 16:24, Irek Fasikhov wrote:
Hi,
Sorry, I gave the wrong feature.
The rbd mirroring method can only be used on rbd images with the "journaling" feature (not
layering).
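For completeness, enabling it on an existing image and on the pool looks roughly like this (the pool and image names are only examples):
# journaling requires exclusive-lock on the image
rbd feature enable mypool/myimage exclusive-lock journaling
# mirror every journaled image in the pool (run on both clusters)
rbd mirror pool enable mypool pool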
Sincerely,
Craig Chi
On 2017-02-09 16:41, Craig Chi wrote:
> Hi John,
>
> rbd mirroring can be configured per
> pool. http://docs.ceph.com/docs/ma
/lib/udev/rules.d/95-ceph-osd.rules:16
RUN '/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name'
/lib/udev/rules.d/95-ceph-osd.rules:16
...
Then /dev/sdb2 will have ceph:ceph permission automatically.
#>ls -l /dev/sdb2
brw-rw---- 1 ceph ceph 8, 18 Feb 13 19:43 /dev/sdb2
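If the rule does not fire for some reason, the same trigger can be run by hand (the device path is just the example from above):
/usr/sbin/ceph-disk --log-stdout -v trigger /dev/sdb2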
Sincerel