Hi,
On 01.07.2015 at 23:35, Loic Dachary wrote:
Hi,
The details of the differences between the Hammer point releases and
Red Hat Ceph Storage 1.3 can be listed as described at
http://www.spinics.net/lists/ceph-devel/msg24489.html (reconciliation between
hammer and v0.94.1.2).
The
On 2015-07-02 00:16, Gregory Farnum wrote:
How reproducible is this issue for you? Ideally I'd like to get logs
from both clients and the MDS server while this is happening, with mds
and client debug set to 20. And also to know if dropping kernel caches
and re-listing the directory resolves
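(For reference, a sketch of how those debug levels could be raised - these are the
standard option names, but adjust the daemon name to your cluster:

[mds]
  debug mds = 20
  debug ms = 1
[client]
  debug client = 20
  debug ms = 1

or, without restarting, via the admin socket on the MDS host:
ceph daemon mds.<name> config set debug_mds 20)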
Hi Guys,
Reading through the Ceph client code, there is a timeout mechanism in tick()
when doing a mount. Recently, while running massive tests against CephFS, we saw
some client requests to the MDS take a long time to get a reply. If we want the
CephFS user to see a timeout instead of waiting indefinitely for the reply, can we
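(For what it's worth, the mount-time timeout itself is already tunable; a minimal
sketch with the stock Hammer option name:

[client]
  client mount timeout = 300   # default; lowering it makes a stuck mount fail sooner

Whether individual MDS requests can also be failed back to the user after a timeout
is the open question above.)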
Hi all,
I'm using CephFS on Hammer and I have 1.5 million files, 2 metadata servers in
active/standby configuration with 8 GB of RAM, 20 clients with 2 GB of RAM
each, and 2 OSD nodes with 4 x 80 GB OSDs and 4 GB of RAM.
I've noticed that if I kill the active metadata server, the second one took
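(If the slow takeover is mostly failure detection rather than journal replay, the
relevant knobs would be the MDS beacon settings; a rough sketch with the Hammer
defaults, not a tested recommendation:

mds beacon interval = 4    # how often the MDS reports to the monitors
mds beacon grace = 15      # seconds without a beacon before it is declared laggy/failed)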
output from iostat:
*CEPHOSD01:*
Device:          rrqm/s  wrqm/s   r/s     w/s   rMB/s  wMB/s  avgrq-sz  avgqu-sz   await  r_await  w_await  svctm  %util
sdc(ceph-0)        0.00    0.00  1.00  389.00    0.00  35.98    188.96     60.32  120.12    16.00   120.39   1.26  49.20
sdd(ceph-1)
On Wed, 1 Jul 2015 17:13:03 -0300,
German Anders gand...@despegar.com wrote:
Hi cephers,
Is anyone out there running EnhanceIO in a production
environment? Any recommendations? Any perf output to share showing the
difference between using it and not?
I've tried EnhanceIO back when it
The idea is to cache RBD at the host level. It could also be possible to cache
at the OSD level. We have high iowait and we need to lower it a bit, since
we are getting the maximum out of our SAS disks, 100-110 IOPS per disk (3TB
OSDs). Any advice? Flashcache?
On Thursday, July 2, 2015, Jan Schermer
I think I posted my experience here ~1 month ago.
My advice for EnhanceIO: don’t use it.
But you didn’t exactly say what you want to cache - do you want to cache the
OSD filestore disks? RBD devices on hosts? RBD devices inside guests?
Jan
On 02 Jul 2015, at 11:29, Emmanuel Florac
Does anyone have a known-good set of parameters for ext4? I want to try it as
well, but I'm a bit worried about what happens if I get it wrong.
Thanks
Jan
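(Not a definitive answer, but the ext4 guidance in the docs at the time boiled down
to roughly this for a filestore OSD; treat it as a starting point rather than a
known-good production recipe:

[osd]
  osd mount options ext4 = user_xattr,rw,noatime,nodiratime
  filestore xattr use omap = true   # ext4 xattrs are too small for Ceph's metadata)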
On 02 Jul 2015, at 09:40, Nick Fisk n...@fisk.me.uk wrote:
-Original Message-
From: ceph-users
Hi,
after a power loss all 3 monitors crashed at the same time.
When I try to start the monitors, all three report:
=== mon.2 ===
Starting Ceph mon.2 on cephmon172...
2015-07-01 18:09:09.312039 7fe3f923c840 -1 unable to read magic from mon data
failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i 2
Thanks
I'll test these values, and also raise the osd heartbeat grace to 60 seconds
instead of 20; hopefully that will help with the latency during deep scrub.
I changed shards to 6 and shard threads to 2, so it matches the physical cores
on the server, not including hyperthreading.
Br, T
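(For reference, a sketch of what that tuning looks like in ceph.conf on Hammer - the
option names are the standard sharded-op-queue ones, the values are just the ones
quoted above:

[osd]
  osd heartbeat grace = 60
  osd op num shards = 6
  osd op num threads per shard = 2)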
And those disks are spindles?
Looks like there are simply too few of them…
Jan
On 02 Jul 2015, at 13:49, German Anders gand...@despegar.com wrote:
output from iostat:
CEPHOSD01:
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await
How could I use ceph different interfaces?
Here is what I understand so far. Please correct me if I am mistaken.
1. Ceph Object Gateway (http://docs.ceph.com/docs/master/radosgw/): I am
assuming the only way to use it is to utilize the API.
2. Ceph Block Device: Here is where I am not very clear. Is
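(On the block device side: it can be used without any application-level API at all,
e.g. through the kernel RBD client. A minimal sketch with a made-up image name:

rbd create rbd/test-img --size 10240     # 10 GB image in the default 'rbd' pool
rbd map rbd/test-img                     # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /mnt

QEMU/libvirt can also attach RBD images directly through librbd, without mapping
them in the kernel.)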
On 07/02/15 13:49, German Anders wrote:
output from iostat:
CEPHOSD01:
Device:          rrqm/s  wrqm/s   r/s     w/s   rMB/s  wMB/s  avgrq-sz  avgqu-sz   await  r_await  w_await  svctm  %util
sdc(ceph-0)        0.00    0.00  1.00  389.00    0.00  35.98    188.96     60.32  120.12
yeah 3TB SAS disks
*German Anders*
Storage System Engineer Leader
*Despegar* | IT Team
*office* +54 11 4894 3500 x3408
*mobile* +54 911 3493 7262
*mail* gand...@despegar.com
2015-07-02 9:04 GMT-03:00 Jan Schermer j...@schermer.cz:
And those disks are spindles?
Looks like there’s simply too
Hi,
On 02/07/2015 08:16, Stefan Priebe - Profihost AG wrote:
Hi,
On 01.07.2015 at 23:35, Loic Dachary wrote:
Hi,
The details of the differences between the Hammer point releases and
Red Hat Ceph Storage 1.3 can be listed as described at
Hi,
On 02/07/2015 10:16, Vickey Singh wrote:
Thanks Loic / Ken
I am a bit confused; we are running open-source Ceph Firefly in production
and planning to upgrade to the stable Hammer release.
Questions:
- Which exact Hammer release is currently the stable release that we can
upgrade to
Just reporting back on my findings.
After making these changes the flapping occurred just once during the night.
To fix it further I changed the heartbeat grace to 120 secs. I also matched
osd_op_threads and filestore_op_threads to the core count.
Br, T
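(Roughly what that ends up looking like in ceph.conf, assuming a hypothetical
8-physical-core OSD node:

[osd]
  osd heartbeat grace = 120
  osd op threads = 8
  filestore op threads = 8)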
From: ceph-users
Hi,
When rebooting one of the nodes (e.g. for a kernel upgrade) the OSDs
do not seem to shut down correctly. Clients hang and ceph osd tree shows
the OSDs of that node as still up. Repeated runs of ceph osd tree show
them going down after a while. For instance, here OSD.7 is still up,
even
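(A common workaround sketch for planned reboots - the exact service commands vary by
distro and init system, so adjust accordingly:

ceph osd set noout              # keep CRUSH from rebalancing while the node is down
/etc/init.d/ceph stop osd       # stop the OSDs cleanly so they tell the monitors
reboot
ceph osd unset noout            # once the node and its OSDs are back up

If an OSD is really down but still shown as up, 'ceph osd down <id>' marks it down by hand.)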
On 07/02/2015 05:53 PM, Steffen Tilsch wrote:
Hello Cephers,
Whenever I read about HDDs for OSDs, it is said that they will deliver
around 130 IOPS.
Where does this number come from and how was it measured (random/seq, how
big were the IOs, which queue depth, at what latency), or is it more a
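(The usual back-of-the-envelope derivation, in case it helps - ballpark vendor figures
for queue-depth-1 random I/O, not measurements:

IOPS ≈ 1000 ms / (avg seek time + avg rotational latency)
rotational latency ≈ half a revolution = 60000 / RPM / 2 ms

10k RPM SAS:   ~4.5 ms seek + 3.0 ms latency -> 1000 / 7.5  ≈ 133 IOPS
7.2k RPM SATA: ~8.5 ms seek + 4.2 ms latency -> 1000 / 12.7 ≈  79 IOPS)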
Lionel - thanks for the feedback ... inline below ...
On 7/2/15, 9:58 AM, Lionel Bouton
lionel+c...@bouton.namemailto:lionel+c...@bouton.name wrote:
Ouch. These spinning disks are probably a bottleneck: there is regular advice
on this list to use one DC SSD for 4 OSDs. You would probably
Interesting. Any idea why degraded could be negative? :)
2015-07-02 17:27:11.551959 mon.0 [INF] pgmap v23198138: 36032 pgs: 35468
active+clean, 551 active+recovery_wait, 13 active+recovering; 13005 GB data,
48944 GB used, 21716 GB / 70660 GB avail; 11159KB/s rd, 129MB/s wr, 5059op/s;
I'd def be happy to share what numbers I can get out of it. I'm still a
neophyte w/ Ceph, and learning how to operate it, set it up ... etc...
My limited performance testing to date has been with stock XFS ceph-disk
built filesystem for the OSDs, basic PG/CRUSH map stuff - and using dd across
I was reading the documentation on the website in regards to the
recommended memory for the monitors. It says that there should be 1GB of
RAM per daemon instance. Does the daemon instance refer to the number of
OSDs? I am planning on setting up 4 hosts with 16 OSDs each initially.
Would I need
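(A rough budget based on the hardware recommendations of the time - the 1 GB/daemon
figure is per monitor, while OSD daemons want more headroom, especially during
recovery; treat these as ballpark numbers:

16 OSD daemons x 1-2 GB  ≈ 16-32 GB per OSD host
 1 mon daemon  x 1 GB    ≈  1 GB on each monitor host

so 4 hosts with 16 OSDs each would be more comfortable at 32 GB+ of RAM per node
than at the bare 16 GB minimum.)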
Are you using the 4TB disks for the journal?
*Nate Curry*
IT Manager
ISSM
*Mosaic ATM*
mobile: 240.285.7341
office: 571.223.7036 x226
cu...@mosaicatm.com
On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson shane_gib...@symantec.com
wrote:
I'd def be happy to share what numbers I can get out of it.