Hi Ceph gurus,
I've got the following problem with our Ceph installation (Jewel): There
are various websites served from the CephFS mount. Sometimes, when I
copy many new (large?) files onto this mount, it seems that after a
certain delay, everything grinds to a halt. No websites are served;
Hi Ramazan,
I'm no Ceph expert, but what I can say from my experience using Ceph is:
1) During "Scrubbing", Ceph can be extremely slow. This is probably
where your "blocked requests" are coming from. BTW: Perhaps you can even
find out which processes are currently blocking with: ps aux | grep
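(The command above is cut off; a guess at the idea, not necessarily the poster's exact line: blocked processes sit in uninterruptible sleep, state "D", which ps can filter for.)

# list processes stuck in uninterruptible sleep (D state)
ps aux | awk '$8 ~ /^D/ {print}'
# or, on the Ceph side, show the blocked requests themselves
ceph health detail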
osdmap e930: 36 osds: 36 up, 36 in
flags noscrub,nodeep-scrub,sortbitwise,require_jewel_osds
pgmap v17667617: 1408 pgs, 5 pools, 24779 GB data, 6494 kobjects
70497 GB used, 127 TB / 196 TB avail
1407 active+clean
1 active+clean+scrubbing
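(Note the flags line above: scrubbing is disabled on this cluster, so the one running scrub was presumably started before the flags were set. For reference, a sketch of how those flags are toggled:)

# disable scrubbing cluster-wide, e.g. during peak hours
ceph osd set noscrub
ceph osd set nodeep-scrub
# and re-enable afterwards
ceph osd unset noscrub
ceph osd unset nodeep-scrub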
Hi everyone,
In January, support for Ubuntu Zesty will run out and we're planning to
upgrade our servers to Artful Aardvark. We have a two-node cluster (and one
additional monitoring-only server) and we're using the packages that
come with the distro. We have mounted CephFS on the same server with th
Hi everyone,
Up until recently, we were using GlusterFS to have two web servers in
sync so we could take one down and switch back and forth between them -
e.g. for maintenance or failover. Usually, both were running, though.
The performance was abysmal, unfortunately. Copying many small files
Hi Vasu,
thank you for your answer.
Yes, all the pools have min_size 1:
root@uhu2 /scripts # ceph osd lspools
0 rbd,1 cephfs_data,2 cephfs_metadata,
root@uhu2 /scripts # ceph osd pool get cephfs_data min_size
min_size: 1
root@uhu2 /scripts # ceph osd pool get cephfs_metadata min_size
min_size: 1
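(For reference, a one-liner to check this on every pool at once; pool names taken from the lspools output above:)

for p in rbd cephfs_data cephfs_metadata; do
    echo -n "$p "; ceph osd pool get "$p" min_size
done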
Wow. Amazing. Thanks a lot!!! This works. Two (hopefully) last questions
on this issue:
1) When the first node comes back up, I can just call "ceph osd up
0" and Ceph will start auto-repairing everything, right? That
is, if there are e.g. new files that were created during the tim
ean
shutdown.
I can't recall the exact config options off-hand, but it's something
like "mon osd min down reports". Search the docs for that. :)
-Greg
On Thursday, September 29, 2016, Peter Maloney
<peter.malo...@brockmann-consult.de> wrote:
On 09/29
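(For the archive: Greg is recalling the option from memory, and so is this note; the likely candidate is mon_osd_min_down_reporters, which controls how many OSDs must report a peer down before the monitors mark it down. A Jewel-era sketch, names unverified against the docs:)

# in ceph.conf on the monitor nodes
[mon]
mon osd min down reporters = 1

# or injected at runtime without a restart
ceph tell mon.* injectargs '--mon_osd_min_down_reporters 1'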
Hi all,
We have a two-node cluster (with a third "monitoring-only" node). Over
the last months, everything ran *perfectly* smooth. Today, I did an
Ubuntu "apt-get upgrade" on one of the two servers. Among others, the
ceph packages were upgraded from 12.2.1 to 12.2.2. A minor release
update, o
ve to update all OSDs, MONs etc. I can remember running
into a similar issue. You should be able to find more about this in the
mailing list archive.
-----Original Message-----
From: Ranjan Ghosh [mailto:gh...@pw6.de]
Sent: Wednesday, 11 April 2018 16:02
To: ceph-users
Subject: [ceph-users] C
preciated.
BR
Ranjan
On 11.04.2018 at 17:07, Ranjan Ghosh wrote:
Thank you for your answer. Do you have any specifics on which thread
you're talking about? Would be very interested to read about a success
story, because I fear that if I update the other node, the whole
clust
2 min_size:2 then your cluster will fail when
any OSD is restarted, until the OSD is up and healthy again. But you
have less chance of data loss than with 2/1 pools.
If you add an OSD on a third host, you can run size:3 min_size:2, the
recommended config when you can have both redundancy and
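(The settings discussed above are per-pool; a sketch, using the pool names from earlier in the thread:)

ceph osd pool set cephfs_data size 3
ceph osd pool set cephfs_data min_size 2
ceph osd pool set cephfs_metadata size 3
ceph osd pool set cephfs_metadata min_size 2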
tu. And do the restarts
of services manually, if you wish to maintain service during the upgrade.
On 25.04.2018 11:52, Ranjan Ghosh wrote:
Thanks a lot for your detailed answer. The problem for us, however,
was that we use the Ceph packages that come with the Ubuntu
distribution. If you do an Ubuntu
Hi all,
we have two small clusters (3 nodes each) called alpha and beta. One
node (alpha0/beta0) is on a remote site and only has monitor & manager.
The two other nodes (alpha/beta-1/2) run all four services, contain the
OSDs, and are connected via an internal network. In short:
alpha0 -
Hi everyone,
I've now been running our two-node mini-cluster for some months. OSD,
MDS and Monitor are running on both nodes. Additionally there is a very small
third node which is only running a third monitor but no MDS/OSD. On both
main servers, CephFS is mounted via FSTab/Kernel driver. The mounte
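(For the archive, a sketch of such a kernel-driver fstab entry; monitor addresses, mount point and secret file are made-up placeholders:)

# /etc/fstab
192.168.0.1:6789,192.168.0.2:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0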
Hi all,
When I run "ceph daemon mds.<name> session ls" I always get a fairly
large number for num_caps (200,000). Is this normal? I thought caps are
something like open/locked files, meaning a client is holding a cap on a file
and no other client can access it during this time. How can I debug this
if it
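(One way to see where the caps are going, assuming jq is installed; the MDS name is a placeholder:)

# per-client cap counts, busiest clients first
ceph daemon mds.myhost session ls \
  | jq 'sort_by(-.num_caps) | .[] | {id, num_caps}'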
actually performance impact of all those caps is not that bad... :-/
On 15.05.2017 at 14:49, John Spray wrote:
On Mon, May 15, 2017 at 1:36 PM, Henrik Korkuc wrote:
On 17-05-15 13:40, John Spray wrote:
On Mon, May 15, 2017 at 10:40 AM, Ranjan Ghosh wrote:
Hi all,
When I run "ceph d
Hi all,
hope someone can help me. After restarting a node of my 2-node cluster,
I suddenly get this:
root@yak2 /var/www/projects # ceph -s
cluster:
id: 749b2473-9300-4535-97a6-ee6d55008a1b
health: HEALTH_WARN
Reduced data availability: 200 pgs inactive
services:
ions/pg-states/
"The ceph-mgr hasn't yet received any information about the PG's
state from an OSD since mgr started up."
Wed, 20 Feb 2019 at 23:10, Ranjan Ghosh <mailto:gh...@pw6.de>:
Hi all,
hope someone can help me. After restarting a node of my
2-node-cl
Hi my beloved Ceph list,
After an upgrade from Ubuntu Cosmic to Ubuntu Disco (and the corresponding Ceph
packages updated from 13.2.2 to 13.2.4), I now get this when I enter
"ceph health":
HEALTH_WARN 3 modules have failed dependencies
"ceph mgr module ls" only reports those 3 modules enabled:
"ena
to work at all with the newest Ubuntu version.
Only one module can be loaded. Sad :-(
Hope this will be fixed soon...
On 30.04.19 at 21:18, Ranjan Ghosh wrote:
Hi my beloved Ceph list,
After an upgrade from Ubuntu Cosmic to Ubuntu Disco (and the corresponding
Ceph packages updated from 13.2.2 t
Hi all,
After upgrading to Ubuntu 19.10 and, as a consequence, from Mimic to Nautilus,
I had a mini-shock when my OSDs didn't come up. Okay, I should have read
the docs more closely; I had to do:
# ceph-volume simple scan /dev/sdb1
# ceph-volume simple activate --all
Hooray. The OSDs came back to lif
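(For reference: the scan step writes JSON metadata that the activate step consumes; afterwards the OSDs should reappear in the tree.)

ls /etc/ceph/osd/          # one .json file per scanned OSD
ceph osd tree              # confirm the OSDs are up again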
minutes on
> encrypted ceph-disk OSDs)
>
> Paul
>
Hi Richard,
Ah, I think I understand, now, brilliant. It's *supposed* to do exactly
that. Mount it once on boot and then just exit. So everything is working
as intended. Great.
Thanks
Ranjan
On 05.12.19 at 15:18, Richard wrote:
> On 2019-12-05 7:19 AM, Ranjan Ghosh wrote:
>
Okay, now, after I settled the issue with the oneshot service thanks to
the amazing help of Paul and Richard (thanks again!), I still wonder:
What could I do about that MDS warning:
===
health: HEALTH_WARN
1 MDSs report oversized cache
===
Does anybody have any ideas? I tried googling it, of cou
have raised mine from the default of 1GiB to 32GiB. My rough
> estimate is 2.5kiB per inode in recent use.
>
>
> On Thu, Dec 5, 2019 at 10:39 AM Ranjan Ghosh wrote:
>> Okay, now, after I settled the issue with the oneshot service thanks to
>> the amazing help of Paul and Richard
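(The advice quoted above as a concrete sketch; the 32 GiB value mirrors the poster's setting, and the option is mds_cache_memory_limit, whose default is 1 GiB:)

# raise the MDS cache limit to 32 GiB (value in bytes)
ceph config set mds mds_cache_memory_limit 34359738368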
Ah, I understand now. Makes a lot of sense. Well, we have a LOT of small
files, so that might be the reason. I'll keep an eye on whether the
message shows up again.
Thank you!
Ranjan
On 05.12.19 at 19:40, Patrick Donnelly wrote:
> On Thu, Dec 5, 2019 at 9:45 AM Ranjan Ghosh wrot