Hello all:
1. I want to use mClockClientQueue to throttle background operations such
as scrub and recovery, but the official documentation says it is still
experimental, so I would like to ask whether anyone has run into problems
using it in practice.
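For reference, what I have in mind is roughly the following in ceph.conf
(option names are from my reading of the Luminous dmclock/QoS docs, so
please correct me if any of them are wrong; the values are placeholders,
not recommendations):

    [osd]
    # switch the OSD op queue to the client-aware mClock scheduler (experimental)
    osd op queue = mclock_client
    # cap background work via the mClock limit knobs (placeholder values)
    osd op queue mclock scrub lim = 0.001
    osd op queue mclock recov lim = 0.001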
2. I want to subscribe to the ceph-devel list. I sent an email with
"subscribe ceph devel" to [email protected] but could not join.
How can I subscribe?
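As far as I can tell the vger.kernel.org lists are managed by majordomo,
so I believe the subscribe command has to go in the message body and the
list name has to be hyphenated, something like:

    To: [email protected]
    Subject: (can be empty)

    subscribe ceph-devel

but I am not sure whether the missing hyphen was the actual problem.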
Regards,
WeiHaocheng
<[email protected]> 于2018年9月30日周日 上午7:39写道:
>
> Today's Topics:
>
> 1. Re: Manually deleting an RGW bucket (Konstantin Shalygin)
> 2. mount cephfs from a public network ip of mds (Joshua Chen)
> 3. Re: mount cephfs from a public network ip of mds (Paul Emmerich)
> 4. Re: Any backfill in our cluster makes the cluster unusable
> and takes forever (Pavan Rallabhandi)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 29 Sep 2018 13:39:32 +0700
> From: Konstantin Shalygin <[email protected]>
> To: [email protected]
> Subject: Re: [ceph-users] Manually deleting an RGW bucket
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> > How do I delete an RGW/S3 bucket and its contents if the usual S3 API
> > commands don't work?
> >
> > The bucket has S3 delete markers that S3 API commands are not able to
> > remove, and I'd like to reuse the bucket name. It was set up for
> > versioning and lifecycles under ceph 12.2.5 which broke the bucket when a
> > reshard happened. 12.2.7 allowed me to remove the regular files but not
> > the delete markers.
> >
> > There must be a way of removing index files and so forth through rados
> > commands.
>
>
> What is the actual error?
>
> To delete a bucket, you first need to delete all of its objects ("s3cmd rm
> -rf s3://bucket/") and any incomplete multipart uploads.
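> If the S3 API keeps refusing to remove the delete markers, a server-side
> removal with radosgw-admin may also be worth a try; a rough sketch (flags
> as I recall them from the Luminous docs, so please double-check first):
>
>     # delete the bucket together with any objects still in it
>     radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects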
>
>
>
> k
>
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 29 Sep 2018 18:07:20 +0800
> From: Joshua Chen <[email protected]>
> To: ceph-users <[email protected]>
> Subject: [ceph-users] mount cephfs from a public network ip of mds
> Message-ID:
> <CAOUXHtg9sTX9ex1YPTcHpP=aux2sph2y3efbwq-bzwu_40x...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello all,
> I am testing a CephFS cluster so that clients can mount it with mount -t ceph.
>
> The cluster has 6 nodes: 3 mons (which also run MDS) and 3 OSDs.
> All 6 nodes have 2 NICs: one 1Gb NIC with a real IP (140.109.0.0) and one
> 10Gb NIC with a virtual IP (10.32.0.0).
>
> 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
> 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
> 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
>
>
>
> and I have the following questions:
>
> 1. Can clients on both the public (140.109.0.0) and cluster (10.32.0.0)
> networks mount this CephFS filesystem?
>
> I want to do
>
> (on a 140.109 network client)
> mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=,,,,
>
> and also (on a 10.32.0.0 network client)
> mount -t ceph mds1(10.32.67.48):/
> /mnt/cephfs -o user=,secret=,,,,
>
>
>
>
> Currently, only the 10.32.0.0 clients can mount it; clients on the public
> network (140.109) cannot. How can I enable this?
>
> Attached is my ceph.conf.
>
> Thanks in advance
>
> Cheers
> Joshua
> [Attachments scrubbed by the list: an HTML copy of this message and the
> ceph.conf mentioned above (304 bytes),
> <http://lists.ceph.com/pipermail/ceph-users-ceph.com/attachments/20180929/aad45a46/attachment-0001.obj>]
>
> ------------------------------
>
> Message: 3
> Date: Sat, 29 Sep 2018 12:42:36 +0200
> From: Paul Emmerich <[email protected]>
> To: Joshua Chen <[email protected]>
> Cc: Ceph Users <[email protected]>
> Subject: Re: [ceph-users] mount cephfs from a public network ip of mds
> Message-ID:
> <CAD9yTbEEtFSHFjDp7NteMS9pJfjEiB_8+grC-Y=urndfju-...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> All Ceph clients will always first connect to the mons. Mons provide
> further information on the cluster such as the IPs of MDS and OSDs.
>
> This means you need to provide the mon IPs to the mount command, not
> the MDS IPs. Your first command works by coincidence, since
> you seem to run the mons and MDS daemons on the same servers.
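> A mount against the mons would look something like the sketch below (IPs,
> name and key are placeholders; 6789 is the default mon port):
>
>     mount -t ceph <mon1-ip>:6789,<mon2-ip>:6789,<mon3-ip>:6789:/ /mnt/cephfs \
>         -o name=admin,secret=<admin-key>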
>
>
> Paul
> On Sat, 29 Sep 2018 at 12:07, Joshua Chen
> <[email protected]> wrote:
> > [...]
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> ------------------------------
>
> Message: 4
> Date: Sat, 29 Sep 2018 17:57:12 +0000
> From: Pavan Rallabhandi <[email protected]>
> To: David Turner <[email protected]>
> Cc: ceph-users <[email protected]>
> Subject: Re: [ceph-users] Any backfill in our cluster makes the
> cluster unusable and takes forever
> Message-ID: <[email protected]>
> Content-Type: text/plain; charset="utf-8"
>
> I looked at one of my test clusters running Jewel on Ubuntu 16.04, and
> interestingly I found this (below) in one of the OSD logs, which is different
> from your OSD boot log, where none of the compression algorithms seem to be
> supported. This hints more at how rocksdb was built for Ceph on CentOS.
>
> 2018-09-29 17:38:38.629112 7fbd318d4b00 4 rocksdb: Compression algorithms
> supported:
> 2018-09-29 17:38:38.629112 7fbd318d4b00 4 rocksdb: Snappy supported: 1
> 2018-09-29 17:38:38.629113 7fbd318d4b00 4 rocksdb: Zlib supported: 1
> 2018-09-29 17:38:38.629113 7fbd318d4b00 4 rocksdb: Bzip supported: 0
> 2018-09-29 17:38:38.629114 7fbd318d4b00 4 rocksdb: LZ4 supported: 0
> 2018-09-29 17:38:38.629114 7fbd318d4b00 4 rocksdb: ZSTD supported: 0
> 2018-09-29 17:38:38.629115 7fbd318d4b00 4 rocksdb: Fast CRC32 supported: 0
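> A quick way to see which compression libraries the ceph-osd binary on the
> CentOS node is linked against (just a sketch; adjust the binary path to
> your install if needed):
>
>     ldd /usr/bin/ceph-osd | egrep -i 'snappy|lz4|zstd|bz2|zlib'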
>
> On 9/27/18, 2:56 PM, "Pavan Rallabhandi" <[email protected]>
> wrote:
>
> I see Filestore symbols on the stack, so the bluestore config doesn't
> come into play. And the top frame of the stack hints at a RocksDB issue, and
> there are a whole lot of these too:
>
> "2018-09-17 19:23:06.480258 7f1f3d2a7700  2 rocksdb:
> [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.4/rpm/el7/BUILD/ceph-12.2.4/src/rocksdb/table/block_based_table_reader.cc:636]
> Cannot find Properties block from file."
>
> It really seems to be something with RocksDB on CentOS. I still think you
> can try removing "compression=kNoCompression" from the
> filestore_rocksdb_options, and/or check whether rocksdb expects snappy to be
> enabled.
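> For reference, with the compression directive dropped the option would look
> roughly like this (same values you already use, just without the
> kNoCompression setting):
>
>     filestore_rocksdb_options = "max_background_compactions=8,compaction_readahead_size=2097152"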
>
> Thanks,
> -Pavan.
>
> From: David Turner <[email protected]>
> Date: Thursday, September 27, 2018 at 1:18 PM
> To: Pavan Rallabhandi <[email protected]>
> Cc: ceph-users <[email protected]>
> Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the
> cluster unusable and takes forever
>
> I got pulled away from this for a while. The error in the log is "abort:
> Corruption: Snappy not supported or corrupted Snappy compressed block
> contents", and the OSD has 2 settings set to snappy by default,
> async_compressor_type and bluestore_compression_algorithm. Does either of
> these settings affect the omap store?
>
> On Wed, Sep 19, 2018 at 2:33 PM Pavan Rallabhandi
> <[email protected]> wrote:
> Looks like you are running on CentOS, fwiw. We've successfully run the
> conversion commands on Jewel on Ubuntu 16.04.
>
> I have a feeling it's expecting the compression to be enabled; can you try
> removing "compression=kNoCompression" from the filestore_rocksdb_options?
> And/or you might want to check whether rocksdb expects snappy to be enabled.
>
> From: David Turner <[email protected]>
> Date: Tuesday, September 18, 2018 at 6:01 PM
> To: Pavan Rallabhandi <[email protected]>
> Cc: ceph-users <[email protected]>
> Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the
> cluster unusable and takes forever
>
> Here's the [1] full log from the time the OSD was started to the end of
> the crash dump. These logs are so hard to parse. Is there anything useful
> in them?
>
> I did confirm that all perms were set correctly and that the superblock
> was changed to rocksdb before the first time I attempted to start the OSD
> with its new DB. This is on a fully Luminous cluster with [2] the defaults
> you mentioned.
>
> [1] https://gist.github.com/drakonstein/fa3ac0ad9b2ec1389c957f95e05b79ed
> [2] "filestore_omap_backend": "rocksdb",
> "filestore_rocksdb_options":
> "max_background_compactions=8,compaction_readahead_size=2097152,compression=kNoCompression",
>
> On Tue, Sep 18, 2018 at 5:29 PM Pavan Rallabhandi
> <[email protected]> wrote:
> I meant the stack trace hints that the superblock still has leveldb in
> it; have you verified that already?
>
> On 9/18/18, 5:27 PM, "Pavan Rallabhandi"
> <[email protected]> wrote:
>
> You should be able to set them under the global section. And that
> reminds me: since you are on Luminous already, I guess those values are
> already the defaults; you can verify from the admin socket of any OSD.
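> For example, something along these lines should show the effective values
> (assuming osd.0 runs on that node and the admin socket is in its default
> location):
>
>     ceph daemon osd.0 config get filestore_omap_backend
>     ceph daemon osd.0 config get filestore_rocksdb_options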
>
> But the stack trace didn't hint that the superblock on the OSD is
> still considering the omap backend to be leveldb; it looked more like
> something to do with the compression.
>
> Thanks,
> -Pavan.
>
> From: David Turner <[email protected]>
> Date: Tuesday, September 18, 2018 at 5:07 PM
> To: Pavan Rallabhandi <[email protected]>
> Cc: ceph-users <[email protected]>
> Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the
> cluster unusable and takes forever
>
> Are those settings fine to set globally even if not all OSDs on a
> node have rocksdb as the backend? Or will I need to convert all OSDs on a
> node at the same time?
>
> On Tue, Sep 18, 2018 at 5:02 PM Pavan Rallabhandi
> <[email protected]> wrote:
> The steps that were outlined for conversion are correct; have you
> tried setting some of the relevant ceph conf values too:
>
> filestore_rocksdb_options =
> "max_background_compactions=8;compaction_readahead_size=2097152;compression=kNoCompression"
>
> filestore_omap_backend = rocksdb
>
> Thanks,
> -Pavan.
>
> From: ceph-users
> <[email protected]> on behalf of
> David Turner <[email protected]>
> Date: Tuesday, September 18, 2018 at 4:09 PM
> To: ceph-users <[email protected]>
> Subject: EXT: [ceph-users] Any backfill in our cluster makes the
> cluster unusable and takes forever
>
> I've finally learned enough about the OSD backend to track down this
> issue to what I believe is the root cause. LevelDB compaction is the common
> thread every time we move data around our cluster. I've ruled out PG
> subfolder splitting, EC doesn't seem to be the root cause, and the problem is
> cluster-wide as opposed to tied to specific hardware.
>
> One of the first things I found after digging into leveldb omap
> compaction was [1] this article with a heading "RocksDB instead of LevelDB"
> which mentions that leveldb was replaced with rocksdb as the default db
> backend for filestore OSDs and was even backported to Jewel because of the
> performance improvements.
>
> I figured there must be a way to upgrade an OSD from leveldb to
> rocksdb without needing to fully backfill the entire OSD. There
> is [2] this article, but you need an active RedHat service account
> to access it. I eventually came across [3] this article about
> optimizing Ceph Object Storage, which mentions migrating to rocksdb as a
> resolution to OSDs flapping due to omap compaction. It links to the RedHat
> article, but also has [4] these steps outlined in it. I tried to follow the
> steps, but the OSD I tested this on was unable to start with [5] this
> segfault. And then trying to move the OSD back to the original LevelDB omap
> folder resulted in [6] this in the log. I apologize that all of my logging
> is with log level 1. If needed I can get some higher log levels.
>
> My Ceph version is 12.2.4. Does anyone have any suggestions for how
> I can update my filestore backend from leveldb to rocksdb? Or if that's the
> wrong direction and I should be looking elsewhere? Thank you.
>
>
> [1] https://ceph.com/community/new-luminous-rados-improvements/
> [2] https://access.redhat.com/solutions/3210951
> [3]
> https://hubb.blob.core.windows.net/c2511cea-81c5-4386-8731-cc444ff806df-public/resources/Optimize%20Ceph%20object%20storage%20for%20production%20in%20multisite%20clouds.pdf
>
> [4] - Stop the OSD
> - mv /var/lib/ceph/osd/ceph-/current/omap
> /var/lib/ceph/osd/ceph-/omap.orig
> - ulimit -n 65535
> - ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-/omap.orig
> store-copy /var/lib/ceph/osd/ceph-/current/omap 10000 rocksdb
> - ceph-osdomap-tool --omap-path /var/lib/ceph/osd/ceph-/current/omap
> --command check
> - sed -i s/leveldb/rocksdb/g /var/lib/ceph/osd/ceph-/superblock
> - chown ceph.ceph /var/lib/ceph/osd/ceph-/current/omap -R
> - cd /var/lib/ceph/osd/ceph-; rm -rf omap.orig
> - Start the OSD
>
> [5] 2018-09-17 19:23:10.826227 7f1f3f2ab700 -1 abort: Corruption:
> Snappy not supported or corrupted Snappy compressed block contents
> 2018-09-17 19:23:10.830525 7f1f3f2ab700 -1 *** Caught signal
> (Aborted) **
>
> [6] 2018-09-17 19:27:34.010125 7fcdee97cd80 -1 osd.0 0 OSD:init:
> unable to mount object store
> 2018-09-17 19:27:34.010131 7fcdee97cd80 -1  ** ERROR: osd
> init failed: (1) Operation not permitted
> 2018-09-17 19:27:54.225941 7f7f03308d80 0 set uid:gid to 167:167
> (ceph:ceph)
> 2018-09-17 19:27:54.225975 7f7f03308d80 0 ceph version 12.2.4
> (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process
> (unknown), pid 361535
> 2018-09-17 19:27:54.231275 7f7f03308d80 0 pidfile_write: ignore
> empty --pid-file
> 2018-09-17 19:27:54.260207 7f7f03308d80 0 load: jerasure load: lrc
> load: isa
> 2018-09-17 19:27:54.260520 7f7f03308d80 0
> filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
> 2018-09-17 19:27:54.261135 7f7f03308d80 0
> filestore(/var/lib/ceph/osd/ceph-0) backend xfs (magic 0x58465342)
> 2018-09-17 19:27:54.261750 7f7f03308d80 0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: FIEMAP
> ioctl is disabled via 'filestore fiemap' config option
> 2018-09-17 19:27:54.261757 7f7f03308d80 0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features:
> SEEK_DATA/SEEK_HOLE is disabled via 'filestore seek data hole' config option
> 2018-09-17 19:27:54.261758 7f7f03308d80 0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: splice()
> is disabled via 'filestore splice' config option
> 2018-09-17 19:27:54.286454 7f7f03308d80 0
> genericfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_features: syncfs(2)
> syscall fully supported (by glibc and kernel)
> 2018-09-17 19:27:54.286572 7f7f03308d80 0
> xfsfilestorebackend(/var/lib/ceph/osd/ceph-0) detect_feature: extsize is
> disabled by conf
> 2018-09-17 19:27:54.287119 7f7f03308d80 0
> filestore(/var/lib/ceph/osd/ceph-0) start omap initiation
> 2018-09-17 19:27:54.287527 7f7f03308d80 -1
> filestore(/var/lib/ceph/osd/ceph-0) mount(1723): Error initializing leveldb :
> Corruption: VersionEdit: unknown tag
>
>
>
>
>
>
> ------------------------------
>
> End of ceph-users Digest, Vol 68, Issue 29
> ******************************************
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com