At 20:09, Lionel Bouton lionel-subscript...@bouton.name wrote:
On 06/22/15 17:21, Erik Logtenberg wrote:
I have the journals on a separate disk too. How do you disable the
snapshotting on the OSD?
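For what it's worth, I believe btrfs snapshotting on filestore OSDs can be switched off in ceph.conf, roughly like this (a sketch; double-check the option name for your release), followed by an OSD restart:

[osd]
filestore btrfs snap = false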
http://ceph.com/docs/master
What does this do?
- leveldb_compression: false (default: true)
- leveldb_block/cache/write_buffer_size (all bigger than default)
I take it you're running these commands on a monitor (from I think the
Dumpling timeframe, or maybe even Firefly)? These are hitting specific
settings in
Hi,
I ran a config diff, like this:
ceph --admin-daemon (...).asok config diff
There are the obvious things like the fsid and IP-ranges, but two
settings stand out:
- internal_safe_to_start_threads: true (default: false)
What does this do?
- leveldb_compression: false (default: true)
-
Hi,
Can anyone explain what the mount options nodcache and nofsc are for,
and especially why you would want to turn these options on/off (what are
the pros and cons either way?)
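For reference, a minimal sketch of how these kernel-client options would be passed, reusing the name/secretfile settings from my fstab line:

mount -t ceph ceph-01:6789:/ /mnt/cephfs -o name=testhost,secretfile=/root/testhost.key,nofsc,nodcache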
Thanks,
Erik.
Hi,
Two days ago I added a new osd to one of my ceph machines, because one
of the existing osd's got rather full. There was quite a difference in
disk space usage between osd's, but I understand this is kind of just
how ceph works. It spreads data over osd's, but not perfectly evenly.
Now check out
still get metadata updates written when
objects are flushed. What data exactly are you seeing that's leading
you to believe writes are happening against these drives? What is the
exact CephFS and cache pool configuration?
-Greg
On Mon, Mar 16, 2015 at 2:36 PM, Erik Logtenberg e...@logtenberg.eu
Hi,
I am getting relatively bad performance from cephfs. I use a replicated
cache pool on ssd in front of an erasure coded pool on rotating media.
When reading big files (streaming video), I see a lot of disk i/o,
especially writes. I have no clue what could cause these writes. The
writes are
big files from cephfs.
So apparently the osd's are doing some non-trivial amount of writing on
their own behalf. What could it be?
Thanks,
Erik.
On 03/16/2015 10:26 PM, Erik Logtenberg wrote:
Hi,
I am getting relatively bad performance from cephfs. I use a replicated
cache pool on ssd
Hi Lindsay,
Actually you just set up two entries for each host in your crush map. One
for hdd's and one for ssd's. My osd's look like this:
# id	weight	type name	up/down	reweight
-6	1.8		root ssd
-7	0.45			host ceph-01-ssd
0	0.45				osd.0	up
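(For completeness, roughly how such extra buckets can be created from the CLI instead of editing the decompiled map by hand; treat this as a sketch using my names and weights:)

ceph osd crush add-bucket ssd root
ceph osd crush add-bucket ceph-01-ssd host
ceph osd crush move ceph-01-ssd root=ssd
ceph osd crush set osd.0 0.45 root=ssd host=ceph-01-ssd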
Hi Erik,
I have tiering working on a couple test clusters. It seems to be
working with Ceph v0.90 when I set:
ceph osd pool set POOL hit_set_type bloom
ceph osd pool set POOL hit_set_count 1
ceph osd pool set POOL hit_set_period 3600
ceph osd pool set POOL
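(The remaining steps to actually attach the cache tier, sketched from memory with COLDPOOL/CACHEPOOL as placeholder pool names:)

ceph osd tier add COLDPOOL CACHEPOOL
ceph osd tier cache-mode CACHEPOOL writeback
ceph osd tier set-overlay COLDPOOL CACHEPOOL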
Mathieson wrote:
On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote:
As you can see, I have four hosts: ceph-01 ... ceph-04, but eight
host entries. This works great.
you have
- host ceph-01
- host ceph-01-ssd
Don't the host names have to match the real host names
]
host = ceph-01
[osd.2]
host = ceph-01
You see all osd's are linked to the right hostname. But the ssd osd is
then explicitly set to go into the right crush location too.
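For illustration only (a sketch; the osd number and the osd crush location option are my assumptions here, not copied from my actual config):

[osd.3]
host = ceph-01
osd crush location = root=ssd host=ceph-01-ssd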
Kind regards,
Erik.
On 12/30/2014 11:11 PM, Lindsay Mathieson wrote:
On Tue, 30 Dec 2014 10:38:14 PM Erik Logtenberg
Hi,
Every now and then someone asks if it's possible to convert a pool to a
different type (replicated vs erasure / change the number of pg's /
etc), but this is not supported. The advised approach is usually to just
create a new pool and somehow copy all data manually to this new pool,
removing
Whoops, I accidentally sent my mail before it was finished. Anyway I have
some more testing to do, especially with converting between
erasure/replicated pools. But it looks promising.
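(For the record, the usual manual copy goes roughly like this; a sketch with hypothetical pool names, assuming the new pool already exists with the desired settings. Note that rados cppool does not copy snapshots:)

rados cppool oldpool newpool
ceph osd pool rename oldpool oldpool-old
ceph osd pool rename newpool oldpool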
Thanks,
Erik.
On 23-12-14 16:57, Erik Logtenberg wrote:
Hi,
Every now and then someone asks if it's possible
If you are like me, you have the journals for your OSD's with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSD's in some of your hosts. If so, please do
check your S.M.A.R.T. statistics regularly, because these SSD's really
can't cope with
Hi,
I would like to mount a cephfs share from fstab, but it doesn't
completely work.
First of all, I followed the documentation [1], which resulted in the
following line in fstab:
ceph-01:6789:/  /mnt/cephfs/  ceph  name=testhost,secretfile=/root/testhost.key,noacl  0 2
Yes, this works when I
Hi,
I noticed that the docs [1] on adding and removing an MDS are not yet
written...
[1] https://ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
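(In lieu of the missing docs page, I believe the ceph-deploy side is just this, with NEWHOST as a placeholder for the faster machine:)

ceph-deploy mds create NEWHOST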
I would like to do exactly that, however. I have an MDS on one machine,
but I'd like a faster machine to take over instead. In fact, it would
I think I might be running into the same issue. I'm using Giant though.
A lot of slow writes. My thoughts went to: the OSD's get too much work
to do (commodity hardware), so I'll have to do some performance tuning
to limit parallelism a bit. And indeed, limiting the number of threads
for
I know that it is possible to run CephFS with a cache tier on the data
pool in Giant, because that's what I do. However when I configured it, I
was on the previous release. When I upgraded to Giant, everything just
kept working.
By the way when I set it up, I used the following commands:
ceph
interesting in the kernel logs? OOM killers, or
memory deadlocks?
On Sat, Nov 8, 2014 at 11:19 AM, Erik Logtenberg e...@logtenberg.eu
mailto:e...@logtenberg.eu wrote:
Hi,
I have some OSD's that keep committing suicide. My cluster has ~1.3M
misplaced objects, and it can't really
Hi,
Every time I start any OSD, it always logs that it tried to remove two
btrfs snapshots but failed:
2014-11-15 22:31:08.251600 7f1730f71700 -1
filestore(/var/lib/ceph/osd/ceph-5) unable to destroy snap
'snap_3020746' got (2) No such file or directory
2014-11-15 22:31:09.661161 7f1730f71700
I have no experience with the DELL SAS controller, but usually the
advantage of using a simple controller (instead of a RAID card) is that
you can use full SMART directly.
$ sudo smartctl -a /dev/sda
=== START OF INFORMATION SECTION ===
Device Model: INTEL SSDSA2BW300G3H
Serial Number:
Oops, my apologies if the 3MB logfile that I sent to this list
yesterday was annoying to anybody. I didn't realize that the
combination of low bandwidth / high mobile tariffs and an email client
that automatically downloads all attachments was still a thing.
Apparently it is.
Next time I'll upload a
Hi,
My MDS is very slow, and it logs stuff like this:
2014-11-07 23:38:41.154939 7f8180a31700 0 log_channel(default) log
[WRN] : 2 slow requests, 1 included below; oldest blocked for
187.777061 secs
2014-11-07 23:38:41.154956 7f8180a31700 0 log_channel(default) log
[WRN] : slow request
Hi,
There is a small bug in the Fedora package for ceph-0.87. Two days ago,
Boris Ranto built the first 0.87 package, for Fedora 22 (rawhide) [1].
[1] http://koji.fedoraproject.org/koji/buildinfo?buildID=589731
This build was a success, so I took that package and built it for Fedora
20 (which is
Hi,
Yesterday I removed two OSD's, to replace them with new disks. Ceph was
not able to completely reach all active+clean state, but some degraded
objects remain. However, the amount of degraded objects is negative
(-82), see below:
2014-10-30 13:31:32.862083 mon.0 [INF] pgmap v209175: 768 pgs:
Yesterday I removed two OSD's, to replace them with new disks. Ceph was
not able to completely reach all active+clean state, but some degraded
objects remain. However, the amount of degraded objects is negative
(-82), see below:
So why didn't it reach that state?
Well, I dunno, I was
On 10/30/2014 05:13 PM, John Spray wrote:
There are a couple of open tickets about bogus (negative) stats on PGs:
http://tracker.ceph.com/issues/5884
http://tracker.ceph.com/issues/7737
Cheers,
John
On Thu, Oct 30, 2014 at 12:38 PM, Erik Logtenberg e...@logtenberg.eu wrote:
Hi,
Yesterday I
I would like to add that removing log files (/var/log/ceph is also
removed on uninstall) is also a bad thing.
My suggestion would be to simply drop the whole %postun trigger, since
all it does is these two very questionable things.
Thanks,
Erik.
On 10/22/2014 09:16 PM, Dmitry Borodaenko wrote:
, the weight of a host is the sum of all osd
weights on that host. So if you reweight any osd on the host, the
weight of the host is recalculated automatically.
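(For example, something like the following should be enough; the host weight follows automatically:)

ceph osd crush reweight osd.0 1.0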
Thanks
LeiDong
On 10/20/14, 7:11 AM, Erik Logtenberg e...@logtenberg.eu wrote:
Hi,
Simple question: how do I reweight a host in crushmap
Hi,
Simple question: how do I reweight a host in crushmap?
I can use ceph osd crush reweight to reweight an osd, but I would like
to change the weight of a host instead.
I tried exporting the crushmap, but I noticed that the weights of all
hosts are commented out, like so:
# weight
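(For illustration, a host bucket in a decompiled map typically looks roughly like this; the commented line is the host weight, while the item lines carry the actual per-osd weights:)

host ceph-01 {
        id -2
        # weight 3.640
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 1.820
        item osd.1 weight 1.820
}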
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pro's and
con's that I should consider and/or are there best practices for
choosing the right K/M-parameters?
Loic might have a better answer, but I think that
I haven't done the actual calculations, but given some % chance of disk
failure, I would assume that losing x out of y disks has roughly the
same chance as losing 2*x out of 2*y disks over the same period.
That's also why you generally want to limit RAID5 arrays to maybe 6
disks or so and
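(Not in the original mail: a rough back-of-the-envelope check of that "x out of y vs 2*x out of 2*y" assumption, using an independent-failure binomial model with a made-up per-disk failure probability. Needs Python 3.8+ for math.comb.)

#!/usr/bin/python3
# Probability of data loss = probability that more than m of the n
# disks in one EC group fail within the window considered.
from math import comb

def p_data_loss(n, m, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1, n + 1))

p = 0.05  # assumed per-disk failure probability, purely illustrative
print(p_data_loss(6, 2, p))    # k=4, m=2
print(p_data_loss(12, 4, p))   # k=8, m=4 (same overhead, double the group size)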
Hi,
Be sure to check this out:
http://ceph.com/community/ceph-calamari-goes-open-source/
Erik.
On 11-08-14 08:50, Irek Fasikhov wrote:
Hi.
I use ZABBIX with the following script:
[ceph@ceph08 ~]$ cat /etc/zabbix/external/ceph
#!/usr/bin/python
import sys
import os
import commands
Hi,
RHEL7 repository works just as well. CentOS 7 is effectively a copy of
RHEL7 anyway. Packages for CentOS 7 wouldn't actually be any different.
Erik.
On 07/10/2014 06:14 AM, Alexandre DERUMIER wrote:
Hi,
I would like to known if a centos7 respository will be available soon ?
Or can I
Yeah, Ceph will never voluntarily reduce the redundancy. I believe
splitting the degraded state into separate wrongly placed and
degraded (reduced redundancy) states is currently on the menu for
the Giant release, but it's not been done yet.
That would greatly improve the accuracy of ceph's
Hi,
If you add an OSD to an existing cluster, ceph will move some existing
data around so the new OSD gets its respective share of usage right away.
Now I noticed that during this moving around, ceph reports the relevant
PG's as degraded. I can more or less understand the logic here: if a
piece
Hi,
I have some osd's on hdd's and some on ssd's, just like the example in
these docs:
http://ceph.com/docs/firefly/rados/operations/crush-map/
Now I'd like to place an erasure encoded pool on the hdd's and a
replicated (cache) pool on the ssd's. In order to do that, I have to
split the crush
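(A sketch of the rule side of that split, with hypothetical root names hdd and ssd, and POOL as a placeholder; the ruleset id to use is the one reported for the newly created rule:)

ceph osd crush rule create-simple ssd-rule ssd host
ceph osd pool set POOL crush_ruleset 1   # use the id reported for ssd-rule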
enabled was enough to
cause the issue apparently.
So, do you have enough information to possibly fix it, or is there any
way that I can provide additional information?
Thanks,
Erik.
On 06/30/2014 05:13 AM, Yan, Zheng wrote:
On Mon, Jun 30, 2014 at 4:25 AM, Erik Logtenberg e...@logtenberg.eu
anymore.
There are some dependencies that could not be met for Ceph in FC19 so we
decided to stop trying to get builds out for that.
On Sun, Jun 29, 2014 at 2:52 PM, Erik Logtenberg e...@logtenberg.eu wrote:
Nice work! When will the new rpm's be released on
http://ceph.com/rpm/fc19/x86_64
Nice work! When will the new rpm's be released on
http://ceph.com/rpm/fc19/x86_64/ ?
Thanks,
Erik.
On 06/27/2014 10:55 PM, Sage Weil wrote:
This is the second post-firefly development release. It includes a range
of bug fixes and some usability improvements. There are some MDS
debugging
Hi,
Are erasure coded pools suitable for use with MDS?
I tried to give it a go by creating two new pools like so:
# ceph osd pool create ecdata 128 128 erasure
# ceph osd pool create ecmetadata 128 128 erasure
Then looked up their id's:
# ceph osd lspools
..., 6 ecdata,7 ecmetadata
# ceph
please ls -l /usr/lib64/ceph/erasure-code ? If you're connected on
irc.oftc.net#ceph today feel free to ping me ( loicd ).
Cheers
On 14/06/2014 23:25, Erik Logtenberg wrote:
Hi,
I'm trying to set up an erasure coded pool, as described in the
Ceph docs:
http://ceph.com/docs/firefly/dev
the cluster or is it a new cluster ? Could you
please ls -l /usr/lib64/ceph/erasure-code ? If you're connected
on irc.oftc.net#ceph today feel free to ping me ( loicd ).
Cheers
On 14/06/2014 23:25, Erik Logtenberg wrote:
Hi,
I'm trying to set up an erasure coded pool, as described
Hi,
I ran into a weird issue with cephfs today. I created a directory like this:
# mkdir bla
# ls -al
drwxr-xr-x 1 root root 0 14 jun 22:22 bla
Now on another host, with the same cephfs mounted, I see different
permissions:
# ls -al
drwxrwxrwx 1 root root 0 14 jun 22:22 bla
Weird, huh?
Hi,
So... I wrote some files into that directory to test performance, and
now I notice that both hosts see the permissions the right way, like
they were when I first created the directory.
What is going on here? ..
Erik.
On 06/14/2014 10:32 PM, Erik Logtenberg wrote:
Hi,
I ran
Hi,
I'm trying to set up an erasure coded pool, as described in the Ceph docs:
http://ceph.com/docs/firefly/dev/erasure-coded-pool/
Unfortunately, creating a pool like that gives me the following error:
# ceph osd pool create ecpool 12 12 erasure
Error EINVAL: cannot determine the erasure code
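(A sketch of the Firefly-style way to spell things out explicitly, assuming the jerasure plugin files are actually present under /usr/lib64/ceph/erasure-code; myprofile is just an example name:)

# ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
# ceph osd pool create ecpool 12 12 erasure myprofile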
Hi,
In March 2013 Greg wrote an excellent blog posting regarding the (then)
current status of MDS/CephFS and the plans for going forward with
development.
http://ceph.com/dev-notes/cephfs-mds-status-discussion/
Since then, I understand progress has been slow, and Greg confirmed that
he didn't
://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
On 07/31/2013 03:51 PM, Erik Logtenberg wrote:
Hi,
I just added a second node to my ceph test platform. The first node has
a mon and three osd's, the second node only has three osd's. Adding the
osd's was pretty painless, and ceph
Hi,
I just added a second node to my ceph test platform. The first node has
a mon and three osd's, the second node only has three osd's. Adding the
osd's was pretty painless, and ceph distributed the data from the first
node evenly over both nodes so everything seems to be fine. The monitor
also
to build ceph.
Which distro do you use?
Danny
Am 30.07.2013 01:33, schrieb Patrick McGarry:
-- Forwarded message --
From: Erik Logtenberg e...@logtenberg.eu
Date: Mon, Jul 29, 2013 at 7:07 PM
Subject: [ceph-users] Small fix for ceph.spec
To: ceph-users@lists.ceph.com
Regards,
Danny
Am 30.07.2013 09:42, schrieb Erik Logtenberg:
Hi,
Fedora, in this case Fedora 19, x86_64.
Kind regards,
Erik.
On 07/30/2013 09:29 AM, Danny Al-Gaaf wrote:
Hi,
I think this is a bug in packaging of the leveldb package in this case
since the spec-file already sets
* osd: pg log (re)writes are not vastly more efficient (faster peering)
(Sam Just)
Do you really mean "are not"? I'd think "are now" would make sense (?)
- Erik.