Hi,
I have added a new fast_data pool to cephfs and fixed the auth caps, e.g.
client.f9wn
key:
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw pool=cephfs_data, allow rw pool=fast_data
but the client with the kernel-mounted cephfs reports an error
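For reference, a minimal sketch of how one would typically apply and verify such
caps and attach the new pool to the filesystem (assuming the filesystem is named
"cephfs"; only the client and pool names are taken from the post above):
ceph fs add_data_pool cephfs fast_data        # make sure the pool is part of the fs
ceph fs ls                                    # fast_data should show up under data pools
ceph auth caps client.f9wn mds 'allow rw' mon 'allow r' \
    osd 'allow rw pool=cephfs_data, allow rw pool=fast_data'
ceph auth get client.f9wn                     # confirm what the cluster actually stores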
Hi Shridhar,
Did you mean 'ceph rbd perf image stats' or 'rbd perf image iostat'?
For monitoring integration, you can enable the mgr prometheus module and get
metrics for each image in the specified pools.
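A minimal sketch of both approaches (the pool name "rbd" is a placeholder):
rbd perf image iostat rbd                     # per-image IOPS/throughput/latency
rbd perf image iotop rbd                      # top-like, sortable per-image view
ceph mgr module enable prometheus
ceph config set mgr mgr/prometheus/rbd_stats_pools rbd   # export per-image RBD metrics
# metrics can then be scraped from the active mgr's /metrics endpoint (port 9283 by default)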
--Original--
From:"Void Star Nill"
Hello,
Is there a way to get read/write I/O statistics for each rbd device for
each mapping?
For example, when an application uses one of the volumes, I would like to
find out what performance (avg read/write bandwidth, IOPS, etc) that
application observed on a given volume. Is that possible?
Dear Alex,
I don't really have a reference for this set up. The ceph documentation
describes this as the simplest possible setup, and back then it was basically
dictated by budget. Everything else was several months of experimentation and
benchmarking. I had scripts running for several weeks
On Wed, 2020-05-06 at 11:58 -0700, Patrick Donnelly wrote:
> Hello Michael,
>
> On Wed, Mar 11, 2020 at 1:24 AM Michael Bisig wrote:
> > Hi all,
> >
> > I am trying to set up an active-active NFS Ganesha cluster (with two
> > Ganeshas (v3.0) running in Docker containers). I could manage to get
On Wed, Mar 11, 2020 at 10:41 PM Robert LeBlanc wrote:
>
> This is the second time this happened in a couple of weeks. The MDS locks
> up and the stand-by can't take over, so the Monitors blacklist them. I try
> to un-blacklist them, but they still say this in the logs
>
> mds.0.1184394 waiting
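For what it's worth, a hedged sketch of the commands usually involved when clearing
such entries and nudging a rank back (the address is a placeholder):
ceph osd blacklist ls                         # show current blacklist entries
ceph osd blacklist rm <addr:port/nonce>       # remove one entry
ceph fs status                                # check rank states and standbys
ceph mds fail 0                               # fail a stuck rank so a standby can take over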
Dear Marc,
This e-mail has two parts. The first part is about the problem of a single client
being able to crash a ceph cluster. Yes, I think this applies to many, if not
all, clusters. The second part is about your question regarding the time-out value.
Part 1:
Yes, I think there is a serious problem with
Hello Michael,
On Wed, Mar 11, 2020 at 1:24 AM Michael Bisig wrote:
>
> Hi all,
>
> I am trying to set up an active-active NFS Ganesha cluster (with two Ganeshas
> (v3.0) running in Docker containers). I could manage to get two Ganesha
> daemons running using the rados_cluster backend for
Hi,
Initial installation was 15.2.0 and we’ve upgraded to 15.2.1. I’ve verified that
the RGWs and OSDs are running the same version - 15.2.1.
I’ve created a bug report for this issue and uploaded the debug logs as
requested:
https://tracker.ceph.com/issues/45412
Hi,
I am trying to set up the Zabbix reporting module, but it is giving an
error which looks like a Python error:
ceph zabbix config-show
Error EINVAL: TypeError: __init__() got an unexpected keyword argument 'index'
I have configured the zabbix_host and identifier already at this point.
The
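For context, a hedged sketch of the usual module setup commands around the one shown
above (hostname and identifier are placeholders):
ceph mgr module enable zabbix
ceph zabbix config-set zabbix_host zabbix.example.com
ceph zabbix config-set identifier ceph-cluster-1
ceph zabbix config-show
ceph zabbix send                              # push data manually as a test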
To answer some of my own questions:
1) Setting
ceph osd set noout
ceph osd set nodown
ceph osd set norebalance
before restart/re-deployment did not harm. I don't know if it helped, because I
didn't retry the procedure that led to OSDs going down. See also point 3 below.
2) A peculiarity of
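Regarding point 1, for completeness a minimal sketch of clearing the flags again
once the OSDs are back up (common practice, not a claim about what helped here):
ceph osd unset norebalance
ceph osd unset nodown
ceph osd unset noout
ceph -s                                       # flags gone, PGs back to active+clean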
On Mon, Mar 9, 2020 at 3:19 PM Marc Roos wrote:
>
>
> For testing purposes I swapped the 3.10 kernel for a 5.5; now I am
> getting these messages. I assume the 3.10 just never displayed
> these. Could this be a problem with the caps of my fs id user?
>
> [Mon Mar 9 23:10:52 2020] ceph:
Hello Robert,
On Mon, Mar 9, 2020 at 7:55 PM Robert Ruge wrote:
> For a 1.1PB raw cephfs system currently storing 191TB of data and 390 million
> objects (mostly small Python, ML training files etc.) how many MDS servers
> should I be running?
>
> System is Nautilus 14.2.8.
>
>
>
> I ask
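Not an answer to the sizing question itself, but a hedged sketch of the knobs that
usually matter when scaling MDSs (filesystem name and values are placeholders):
ceph fs status cephfs                         # current ranks, standbys, per-rank load
ceph fs set cephfs max_mds 2                  # add a second active rank
ceph config set mds mds_cache_memory_limit 17179869184   # e.g. 16 GiB cache per daemon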
Made it all the way down ;) Thank you very much for the detailed info.
-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] Re: Ceph meltdown, need help
Dear Marc,
This e-mail has two parts. The first part is about the problem of a single
client being able to crash a ceph cluster.
Dear all,
We currently run a small Ceph cluster on 2 machines and we wonder what
theoretical max BW/IOPS we can achieve through RBD with our setup.
Here are the environment details:
- The Ceph release is an octopus 15.2.1 running on Centos 8, both
machines have 180GB RAM, 72 cores,
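For measuring rather than estimating, a minimal sketch of the usual benchmarks
(pool and image names are placeholders):
rados bench -p rbd 30 write --no-cleanup      # raw RADOS write bandwidth/IOPS
rados bench -p rbd 30 rand                    # random reads against the objects above
rbd create rbd/bench-image --size 10G
rbd bench --io-type write --io-size 4K --io-total 1G rbd/bench-image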
On 06.05.20 at 16:12, brad.swan...@adtran.com wrote:
Take a look at the available SMR drives:
https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/
Thanks for this nice overview link!
Indeed, my question was aimed more at the next few years. While currently,
SMR is
Take a look at the available SMR drives:
https://www.ixsystems.com/community/resources/list-of-known-smr-drives.141/
I wouldn’t put a single one of those drives into a Ceph system. You won’t save
any money. In fact, it’s likely a sink-hole for labor on your part. In fact,
you couldn’t pay me
We’re prototyping a native SMR object store now to run alongside Ceph, which
has been our only object store backend for the last 3 years and is somewhat
problematic on some metrics. I believe trying to use SMR drives with a file
system in the architecture (as in Ceph) is a non-starter, the
From these:
crt=5395'481460 lcod 5395'481461 mlcod 5395'481461 active+clean] do_osd_op
sync_read 4096~1024
2020-05-06T08:06:33.925+0200 7f73b554a700 10 osd.15 pg_epoch: 5395 pg[5.9(
v 5395'481462 (5387'478000,5395'481462] local-lis/les=5394/5395 n=48
ec=67/67 lis/c=5394/5394 les/c/f=5395/5395/0
Dear Janne,
On 06.05.20 at 09:18, Janne Johansson wrote:
On Wed, 6 May 2020 at 00:58, Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote:
Dear Cephalopodians,
seeing the recent moves of major HDD vendors to sell SMR disks targeted for
use in consumer NAS devices (including
Hi Marc,
thank you for the response. Since upgrading to nautilus I also encounter the
cache pressure warnings while doing tape backup of the filesystem, so it is
probably not related to snapshots.
Is anyone else willing to share their experience with cephfs snapshots?
Best regards
Felix
As per the release notes : https://docs.ceph.com/docs/master/releases/octopus/
The dashboard and a few other modules aren't supported on CentOS 7.x due to
python version / dependencies.
On Wed, 06 May 2020 17:18:06 +0800 Sam Huracan wrote:
Hi Cephers,
I am trying to install
On Wed, May 6, 2020 at 3:53 PM Marc Roos wrote:
>
>
> I have been using snapshots on cephfs since luminous, 1xfs and
> 1xactivemds and used an rsync on it for backup.
> Under luminous I did not encounter any problems with this setup. I
> think I was even snapshotting user dirs every 7 days
Hi Cephers,
I am trying to install Ceph Octopus using ceph-deploy on CentOS 7;
installing ceph-mgr-dashboard requires these packages:
- python3-cherrypy
- python3-jwt
- python3-routes
https://pastebin.com/dSQPgGJD
But when I tried to install these packages, they were not available.
Hi Folks,
I would really like to use snapshots on cephfs, but even in the octopus release snapshots
are still marked as an experimental feature. Is anyone using snapshots in
production environments? Which issues did you encounter? Do I risk a corrupted
filesystem or just non-working snapshots?
We run
Hi all,
I have 3 nautilus clusters. They started out as mimic but were recently
upgraded to nautilus. On two of them dynamic bucket index sharding seems
to work automatically. However, on one of the clusters it doesn't, and I
have no clue why:
* If I execute radosgw-admin bucket limit check I find
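For comparison between the clusters, a hedged sketch of where one would usually look
when dynamic resharding seems inactive (bucket name and shard count are placeholders):
radosgw-admin reshard list                    # pending resharding jobs
radosgw-admin reshard status --bucket=mybucket
radosgw-admin bucket reshard --bucket=mybucket --num-shards=64   # manual fallback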
I have been using snapshots on cephfs since luminous, 1xfs and
1xactivemds and used an rsync on it for backup.
Under luminous I did not encounter any problems with this setup. I
think I was even snapshotting user dirs every 7 days having thousands of
snapshots (which I later heard, is not
Hi
I have a few questions about bucket versioning.
In the output of the command "radosgw-admin bucket stats --bucket=XXX" there
is info about versions:
"ver": "0#521391,1#516042,2#518098,3#517681,4#518423",
"master_ver": "0#0,1#0,2#0,3#0,4#0",
Also, "metadata get" returns info about
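(For reference, a hedged sketch of the invocations meant above; the bucket name and
instance id are placeholders:)
radosgw-admin metadata get bucket:XXX
radosgw-admin metadata get bucket.instance:XXX:<instance-id>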
Hi Igor,
Am 05.05.20 um 16:10 schrieb Igor Fedotov:
> Hi Stefan,
>
> so (surprise!) some DB access counters show a significant difference, e.g.
>
> "kv_flush_lat": {
> "avgcount": 1423,
> "sum": 0.000906419,
> "avgtime": 0.000000637
> },
>
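(For anyone following along, counters like these come from the OSD admin socket; a
minimal sketch, with the OSD id as a placeholder:)
ceph daemon osd.0 perf dump | grep -A 4 kv_flush_lat
ceph daemon osd.0 perf reset all              # reset counters before taking a new sample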
On Wed, 6 May 2020 at 00:58, Oliver Freyermuth <freyerm...@physik.uni-bonn.de> wrote:
> Dear Cephalopodians,
> seeing the recent moves of major HDD vendors to sell SMR disks targeted
> for use in consumer NAS devices (including RAID systems),
> I got curious and wonder what the current status of
Hi,
Yes, it’s the same error with “--include-all”. I am currently awaiting
confirmation of my account creation on the tracker site.
In the meantime, here are some logs which I’ve obtained:
radosgw-admin gc list --debug-rgw=10 --debug-ms=10:
2020-05-06T06:06:33.922+0000 7ff4ccffb700 1 --
Is there a way to get the block, block.db, and block.wal paths and sizes?
What if all or some of them are colocated on one disk?
I can get the info from an OSD with colocated wal/db/block like below:
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0/
{
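A hedged sketch of related commands for the split-device case (the device path and
OSD id are placeholders):
ceph-bluestore-tool show-label --dev /dev/sdb1   # read the label of a single DB/WAL device
ceph osd metadata 0 | grep -E 'bluefs|bluestore_bdev'   # paths and sizes of the bluefs devices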