Re: [ceph-users] mount cephfs from a public network ip of mds

2018-09-29 Thread David Turner
The cluster/private network is used only by the OSDs; nothing else in Ceph,
including its clients, communicates over it. Everything other than OSD-to-OSD
traffic (the MONs, the MDSs, clients, and anything else talking to an OSD) uses
the public network. Only OSD-to-OSD traffic travels on the private/cluster
network.
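
For reference, this split is what the two network options in ceph.conf express. A minimal sketch using the subnets from this thread (the /16 masks are an assumption here; use whatever actually matches your subnets):

    [global]
    # mons, MDSs and all client traffic live here; mount commands must reach these IPs
    public_network = 140.109.0.0/16
    # used exclusively for OSD-to-OSD replication, recovery and heartbeat traffic
    cluster_network = 10.32.0.0/16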

On Sat, Sep 29, 2018, 6:43 AM Paul Emmerich  wrote:

> All Ceph clients always connect to the mons first. The mons provide
> further information about the cluster, such as the IPs of the MDSs and OSDs.
>
> This means you need to provide the mon IPs to the mount command, not
> the MDS IPs. Your first command only works by coincidence, since
> you seem to run the mons and MDSs on the same servers.
>
>
> Paul
> On Sat, 29 Sep 2018 at 12:07, Joshua Chen wrote:
> >
> > Hello all,
> >   I am testing a CephFS cluster so that clients can mount it with mount -t ceph.
> >
> >   The cluster has 6 nodes: 3 mons (also MDSs) and 3 OSDs.
> >   All 6 nodes have 2 NICs: one 1Gb NIC with a real (public) IP (140.109.0.0)
> > and one 10Gb NIC with a virtual (private) IP (10.32.0.0).
> >
> > 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
> > 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
> > 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
> > 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
> > 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
> > 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
> >
> >
> >
> > and I have the following question:
> >
> > 1. Can clients on both the public (140.109.0.0) and cluster (10.32.0.0)
> > networks mount this CephFS resource?
> >
> > I want to do:
> >
> > (on a 140.109 network client)
> > mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=
> >
> > and also (on a 10.32.0.0 network client)
> > mount -t ceph mds1(10.32.67.48):/ /mnt/cephfs -o user=,secret=
> >
> >
> >
> >
> > Currently only the 10.32.0.0 clients can mount it; clients on the public
> > network (140.109) cannot. How can I enable this?
> >
> > My ceph.conf is attached.
> >
> > Thanks in advance
> >
> > Cheers
> > Joshua
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90


Re: [ceph-users] Any backfill in our cluster makes the cluster unusable and takes forever

2018-09-29 Thread Pavan Rallabhandi
I looked at one of my test clusters running Jewel on Ubuntu 16.04, and
interestingly I found the following in one of the OSD logs. It is different
from your OSD boot log, where none of the compression algorithms seem to be
supported. This hints more at how RocksDB was built for Ceph on CentOS.

2018-09-29 17:38:38.629112 7fbd318d4b00  4 rocksdb: Compression algorithms supported:
2018-09-29 17:38:38.629112 7fbd318d4b00  4 rocksdb: Snappy supported: 1
2018-09-29 17:38:38.629113 7fbd318d4b00  4 rocksdb: Zlib supported: 1
2018-09-29 17:38:38.629113 7fbd318d4b00  4 rocksdb: Bzip supported: 0
2018-09-29 17:38:38.629114 7fbd318d4b00  4 rocksdb: LZ4 supported: 0
2018-09-29 17:38:38.629114 7fbd318d4b00  4 rocksdb: ZSTD supported: 0
2018-09-29 17:38:38.629115 7fbd318d4b00  4 rocksdb: Fast CRC32 supported: 0
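
If you want to compare against your CentOS OSDs, the same lines can be pulled out of an OSD boot log, e.g. (a sketch; the log path and OSD id are placeholders):

    grep -i 'rocksdb:.*supported' /var/log/ceph/ceph-osd.0.log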

On 9/27/18, 2:56 PM, "Pavan Rallabhandi"  wrote:

I see Filestore symbols on the stack, so the BlueStore config doesn’t come into 
play. The top frame of the stack hints at a RocksDB issue, and there are a 
whole lot of these too:

“2018-09-17 19:23:06.480258 7f1f3d2a7700  2 rocksdb: 
[/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/12.2.4/rpm/el7/BUILD/ceph-12.2.4/src/rocksdb/table/block_based_table_reader.cc:636]
 Cannot find Properties block from file.”

It really seems to be something with RocksDB on CentOS. I still think you 
can try removing “compression=kNoCompression” from the 
filestore_rocksdb_options, and/or check whether RocksDB is expecting Snappy to 
be enabled.
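
As a concrete sketch of that suggestion (the remaining values match the defaults David quotes elsewhere in the thread; treat it as something to try, not a verified fix), the ceph.conf override would simply drop the compression entry, after which the OSD is restarted:

    filestore_rocksdb_options = max_background_compactions=8,compaction_readahead_size=2097152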

Thanks,
-Pavan.

From: David Turner 
Date: Thursday, September 27, 2018 at 1:18 PM
To: Pavan Rallabhandi 
Cc: ceph-users 
Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the 
cluster unusable and takes forever

I got pulled away from this for a while.  The error in the log is "abort: 
Corruption: Snappy not supported or corrupted Snappy compressed block contents" 
and the OSD has two settings set to snappy by default, async_compressor_type and 
bluestore_compression_algorithm.  Does either of these settings affect the omap 
store?

On Wed, Sep 19, 2018 at 2:33 PM Pavan Rallabhandi wrote:
Looks like you are running on CentOS, FWIW. We’ve successfully run the 
conversion commands on Jewel on Ubuntu 16.04.

I have a feeling it’s expecting compression to be enabled; can you try 
removing “compression=kNoCompression” from the filestore_rocksdb_options? 
And/or you might want to check whether RocksDB is expecting Snappy to be enabled.

From: David Turner 
Date: Tuesday, September 18, 2018 at 6:01 PM
To: Pavan Rallabhandi 
Cc: ceph-users 
Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the 
cluster unusable and takes forever

Here's the [1] full log from the time the OSD was started to the end of the 
crash dump.  These logs are so hard to parse.  Is there anything useful in them?

I did confirm that all permissions were set correctly and that the superblock was 
changed to rocksdb before the first time I attempted to start the OSD with its 
new DB.  This is on a fully Luminous cluster with [2] the defaults you 
mentioned.

[1] https://gist.github.com/drakonstein/fa3ac0ad9b2ec1389c957f95e05b79ed
[2] "filestore_omap_backend": "rocksdb",
"filestore_rocksdb_options": 
"max_background_compactions=8,compaction_readahead_size=2097152,compression=kNoCompression",

On Tue, Sep 18, 2018 at 5:29 PM Pavan Rallabhandi wrote:
I meant that the stack trace hints the superblock still has leveldb in it; have 
you verified that already?

On 9/18/18, 5:27 PM, "Pavan Rallabhandi" wrote:

You should be able to set them under the global section. And that reminds 
me: since you are on Luminous already, I guess those values are already 
the default; you can verify from the admin socket of any OSD.
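
For example (a sketch; osd.0 stands in for any OSD id on the host):

    ceph daemon osd.0 config get filestore_omap_backend
    ceph daemon osd.0 config get filestore_rocksdb_options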

But didn’t the stack trace hint as if the superblock on the OSD still 
considers the omap backend to be leveldb, and that it has to do with the compression?

Thanks,
-Pavan.

From: David Turner 
Date: Tuesday, September 18, 2018 at 5:07 PM
To: Pavan Rallabhandi 
Cc: ceph-users 
Subject: EXT: Re: [ceph-users] Any backfill in our cluster makes the 
cluster unusable and takes forever

Are those settings fine to set globally even if not all OSDs on a 
node have rocksdb as the backend?  Or will I need to convert all OSDs on a node 
at the same time?

 

Re: [ceph-users] mount cephfs from a public network ip of mds

2018-09-29 Thread Paul Emmerich
All Ceph clients always connect to the mons first. The mons provide
further information about the cluster, such as the IPs of the MDSs and OSDs.

This means you need to provide the mon IPs to the mount command, not
the MDS IPs. Your first command only works by coincidence, since
you seem to run the mons and MDSs on the same servers.
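
Concretely, the mount should point at one or more mon addresses on the public network rather than at an MDS. A sketch (140.109.169.48 is taken from the thread; the other two mon IPs and the credentials are placeholders):

    mount -t ceph 140.109.169.48,140.109.169.49,140.109.169.50:/ /mnt/cephfs \
        -o name=admin,secret=<key>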


Paul
On Sat, 29 Sep 2018 at 12:07, Joshua Chen wrote:
>
> Hello all,
>   I am testing a CephFS cluster so that clients can mount it with mount -t ceph.
>
>   The cluster has 6 nodes: 3 mons (also MDSs) and 3 OSDs.
>   All 6 nodes have 2 NICs: one 1Gb NIC with a real (public) IP (140.109.0.0) and
> one 10Gb NIC with a virtual (private) IP (10.32.0.0).
>
> 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
> 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
> 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
> 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
>
>
>
> and I have the following question:
>
> 1. Can clients on both the public (140.109.0.0) and cluster (10.32.0.0)
> networks mount this CephFS resource?
>
> I want to do:
>
> (on a 140.109 network client)
> mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=
>
> and also (on a 10.32.0.0 network client)
> mount -t ceph mds1(10.32.67.48):/ /mnt/cephfs -o user=,secret=
>
>
>
>
> Currently only the 10.32.0.0 clients can mount it; clients on the public network
> (140.109) cannot. How can I enable this?
>
> My ceph.conf is attached.
>
> Thanks in advance
>
> Cheers
> Joshua



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


[ceph-users] mount cephfs from a public network ip of mds

2018-09-29 Thread Joshua Chen
Hello all,
  I am testing a CephFS cluster so that clients can mount it with mount -t ceph.

  The cluster has 6 nodes: 3 mons (also MDSs) and 3 OSDs.
  All 6 nodes have 2 NICs: one 1Gb NIC with a real (public) IP (140.109.0.0) and
one 10Gb NIC with a virtual (private) IP (10.32.0.0).

140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.



and I have the following question:

1. Can clients on both the public (140.109.0.0) and cluster (10.32.0.0) networks
mount this CephFS resource?

I want to do:

(on a 140.109 network client)
mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=

and also (on a 10.32.0.0 network client)
mount -t ceph mds1(10.32.67.48):/ /mnt/cephfs -o user=,secret=




Currently only the 10.32.0.0 clients can mount it; clients on the public network
(140.109) cannot. How can I enable this?

My ceph.conf is attached.

Thanks in advance

Cheers
Joshua


[Attachment: ceph.conf]


Re: [ceph-users] Manually deleting an RGW bucket

2018-09-29 Thread Konstantin Shalygin

How do I delete an RGW/S3 bucket and its contents if the usual S3 API commands 
don't work?

The bucket has S3 delete markers that S3 API commands are not able to remove, 
and I'd like to reuse the bucket name.  It was set up for versioning and 
lifecycles under Ceph 12.2.5, which broke the bucket when a reshard happened.  
12.2.7 allowed me to remove the regular files but not the delete markers.

There must be a way of removing index files and so forth through rados commands.



What is the actual error?

To delete a bucket you should first delete all bucket objects ("s3cmd rm -rf 
s3://bucket/") and any unfinished multipart uploads.




k
