Hi All,
Is it possible to safely identify objects that should be purged from a CephFS
pool, and can we purge them manually?
Background:
ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
We were running 2 MDS, 1 active & 1 standby-replay.
A couple of months ago, af
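A rough first step, assuming the default data pool name, would be to inventory which inode numbers still have objects in the pool (CephFS data objects are named <inode-hex>.<block-index>):

  # illustrative only; adjust 'cephfs_data' to your data pool name
  rados -p cephfs_data ls | cut -d. -f1 | sort -u > /tmp/inodes_with_objects.txt
  wc -l /tmp/inodes_with_objects.txt

Whether a given inode is actually safe to purge still has to be checked against the MDS/stray state, so treat this only as an inventory step, not as a deletion list.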
On Saturday 10 March 2018 02:01 AM, Casey Bodley wrote:
On 03/08/2018 07:16 AM, Amardeep Singh wrote:
Hi,
I am trying to configure server side encryption using Key Management
Service as per documentation
http://docs.ceph.com/docs/master/radosgw/encryption/
Configured Keystone/Barbican inte
Unfortunately I can't quite figure out how to use it. I've got "rgw log http headers =
"authorization" in my ceph.conf but I'm getting no love in the rgw log.
I think this should have the 'http_' prefix, like:
rgw log http headers = "http_host,http_x_forwarded_for"
k
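So for the Authorization header asked about above, the line would presumably need to be (untested sketch, following the same prefix rule):

  rgw log http headers = "http_authorization"

and then radosgw restarted so the option is picked up.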
On Fri, Mar 9, 2018 at 7:14 AM Yan, Zheng wrote:
> ceph-dencoder can dump individual metadata. Most CephFS metadata is
> stored in omap headers/values. You can write a script that fetches
> metadata objects' omap header/values and dump them using
> ceph-dencoder.
>
Keep in mind that if you do th
David, that's exactly my goal as well.
On closer reading of the docs, I see that this setting is to be used for
writing these headers to the ops log. I guess it's time for me to learn what
that's about. I've never quite been able to figure out how to get my hands on
it. I also see an option for
Matt, my only goal is to be able to have something that can be checked to
see which key was used to access which resource. The closest I was able to
get in Jewel was rgw debug logging 10/10, but it generates 100+ lines of
logs for every request and, as Aaron points out, takes some logic to combine
th
Ah yes, I found it:
https://github.com/ceph/ceph/commit/3192ef6a034bf39becead5f87a0e48651fcab705
Unfortunately I can't quite figure out how to use it. I've got "rgw log http
headers = "authorization" in my ceph.conf but I'm getting no love in the rgw
log.
Also, setting rgw debug level to 10 d
On 03/08/2018 07:16 AM, Amardeep Singh wrote:
Hi,
I am trying to configure server side encryption using Key Management
Service as per documentation
http://docs.ceph.com/docs/master/radosgw/encryption/
Configured the Keystone/Barbican integration and it's working; tested using
curl commands. Aft
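For reference, the rgw side of that setup looks roughly like the following (option names as I recall them from the encryption doc linked above, so double-check there; URL, user and password are placeholders), followed by an SSE-KMS upload that names a Barbican key:

  # ceph.conf, [client.rgw.*] section (placeholder values):
  #   rgw barbican url = http://barbican.example.com:9311
  #   rgw keystone barbican user = rgw-crypt-user
  #   rgw keystone barbican password = rgw-crypt-password
  #   rgw keystone barbican project = service
  #   rgw keystone barbican domain = default
  # then upload an object with SSE-KMS, pointing at a Barbican secret id:
  aws --endpoint-url http://rgw.example.com:8080 s3 cp ./file.bin \
      s3://testbucket/file.bin --sse aws:kms --sse-kms-key-id <barbican-key-uuid>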
Hi Matt,
Sorry about the incomplete last message sent by mistake (an unknown hotkey
slip; the secrets in it have been invalidated).
So, to continue:
In ganesha.conf, Access_Key_Id is set to the LDAP token; that token encodes
the user 'myuser' with secret 'whatever'. The User_id and Secret_access_key
settings are blank - they cannot
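For anyone following along, the token referred to here is the base64 blob produced by radosgw-token, roughly like this (credentials are placeholders):

  export RGW_ACCESS_KEY_ID="myuser"
  export RGW_SECRET_ACCESS_KEY="whatever"
  radosgw-token --encode --ttype=ldap

and that output is what goes into Access_Key_Id in the ganesha RGW block.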
Hi list,
During my write tests, I find that some of the OSDs always have high
fs_apply_latency (1k-5k ms, 2-8 times higher than the others). At first I thought it was
caused by unbalanced PG distribution, but after I reweighted the OSDs the problem
hasn't gone away.
I looked into the OSDs with high latency
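In case it helps anyone debugging the same thing, a rough sketch of the built-in checks I'd start with (osd id is a placeholder):

  # cluster-wide view of per-OSD commit/apply latency (ms):
  ceph osd perf
  # on the node hosting a slow OSD, look at its recent slow requests:
  ceph daemon osd.<id> dump_historic_ops
  # and at its filestore latency counters:
  ceph daemon osd.<id> perf dump | grep -A4 apply_latency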
Hi Benjeman,
It is -intended- to work, identically to the standalone radosgw
server. I can try to verify whether there could be a bug affecting
this path.
Matt
On Fri, Mar 9, 2018 at 12:01 PM, Benjeman Meekhof wrote:
> I'm having issues exporting a radosgw bucket if the configured user is
> au
I'm having issues exporting a radosgw bucket if the configured user is
authenticated using the rgw ldap connectors. I've verified that this
same ldap token works ok for other clients, and as I'll note below it
seems like the rgw instance is contacting the LDAP server and
successfully authenticatin
On Fri, Mar 09, 2018 at 03:06:15PM +0100, Ján Senko wrote:
:We are looking at 100+ nodes.
:
:I know that the Ceph official recommendation is 1GB of RAM per 1TB of disk.
:Was this ever changed since 2015?
:CERN is definitely using less (source:
:https://cds.cern.ch/record/2015206/files/CephScaleTest
What you linked was only a two-week test. When Ceph is healthy it does not
need a lot of RAM; it's during recovery that OOM appears, and that's when
you'll find yourself upgrading the RAM on your nodes just to stop OOM and
allow the cluster to recover. Look through the mailing list and you'll see
that
Hmm, ok.
I tested on some VM images and haven't seen any noticeable difference
in terms of output size (it is indeed faster).
Thanks for the information!
On 03/09/2018 05:10 PM, Jason Dillaman wrote:
> On Fri, Mar 9, 2018 at 11:05 AM, wrote:
>> Hi,
>>
>> I am looking for information regardin
On Fri, Mar 9, 2018 at 11:05 AM, wrote:
> Hi,
>
> I am looking for information regarding rbd's --whole-object option
> According to the doc:
> --whole-object
> Specifies that the diff should be limited to the extents of a full
> object instead of showing intra-object deltas. When the object map
>
Hi,
I am looking for information regarding rbd's --whole-object option
According to the doc:
--whole-object
Specifies that the diff should be limited to the extents of a full
object instead of showing intra-object deltas. When the object map
feature is enabled on an image, limiting the diff to the
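For context, the option in question is used like this (pool, image and snapshot names are just examples):

  # per-object-extent diff between two snapshots:
  rbd diff --from-snap snap1 --whole-object rbd/myimage@snap2
  # the same flag applies to export-diff, e.g. for incremental backups:
  rbd export-diff --from-snap snap1 --whole-object rbd/myimage@snap2 myimage.diff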
Hi all,
let me join in on the question, even though the aspect I am looking for may be
slightly different (I also asked earlier on the list without a response).
My question relates to the storage of metadata by CephFS MDS servers. They
appear to use almost exclusively RocksDB,
so my question is
Regarding the WAL size, is there a way to specify the WAL and db partition size
at the time of OSD preparation?
Yes, in ceph.conf:
bluestore_block_db_size
bluestore_block_wal_size
Both sizes are in bytes.
k
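For example (the sizes below are purely illustrative; set them before preparing the OSD):

  [osd]
  bluestore_block_db_size = 32212254720     # 30 GiB, example value
  bluestore_block_wal_size = 2147483648     # 2 GiB, example value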
ceph-dencoder can dump individual metadata. Most CephFS metadata is
stored in omap headers/values. You can write a script that fetches
metadata objects' omap header/values and dump them using
ceph-dencoder.
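A minimal sketch of that approach for a single dirfrag object (the object name and decoder types here are my assumptions; `ceph-dencoder list_types` shows what your build can actually decode):

  POOL=cephfs_metadata                 # adjust to your metadata pool name
  OBJ=10000000000.00000000             # example dirfrag object name
  # the omap header of a dirfrag should carry the fnode (verify the type):
  rados -p "$POOL" getomapheader "$OBJ" /tmp/hdr.bin
  ceph-dencoder type fnode_t import /tmp/hdr.bin decode dump_json
  # each omap value is one dentry; decode with the matching type from list_types:
  rados -p "$POOL" listomapkeys "$OBJ" | while read -r k; do
      rados -p "$POOL" getomapval "$OBJ" "$k" /tmp/val.bin
      # ceph-dencoder type <dentry/inode type> import /tmp/val.bin decode dump_json
  done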
On Fri, Mar 9, 2018 at 10:09 PM, Pavan, Krish wrote:
> Hi All,
>
> We have cephfs with lar
Hello Rich,
I've seen your question on the ML; I was hoping that after two more months
we might get some feedback on this topic.
Regarding the WAL size, is there a way to specify the WAL and db partition size
at the time of OSD preparation?
Kind regards,
Laszlo
On 09.03.2018 14:47,
I'd also be very interested in this. At the moment I just use robinhood (
https://github.com/cea-hpc/robinhood) , which is less than optimal. I also
have a few scripts that use the xattrs instead of statting every file.
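For reference, the xattrs I mean are the recursive stats CephFS exposes on directories, e.g. (the path is just an example mount point):

  getfattr -n ceph.dir.rbytes   /mnt/cephfs/projects
  getfattr -n ceph.dir.rentries /mnt/cephfs/projects
  getfattr -n ceph.dir.rctime   /mnt/cephfs/projects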
On Mar 9, 2018 8:09 AM, "Pavan, Krish" wrote:
> Hi All,
>
> We have cephfs
Hi All,
We have a CephFS of large size (> 1 PB) and it is expected to grow more. I need to
dump the metadata (CInode, CDir with ACL, size, ctime, ...) weekly to
find/report usage as well as ACLs.
Is there any tool to dump the metadata pool and decode it, without going via the MDS
servers?
What is the best w
What is the best w
We are looking at 100+ nodes.
I know that the Ceph official recommendation is 1GB of RAM per 1TB of disk.
Has this changed since 2015?
CERN is definitely using less (source:
https://cds.cern.ch/record/2015206/files/CephScaleTestMarch2015.pdf)
RedHat suggests using 16GB + 2GB/HDD as the latest
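Just to put numbers on it for the 12 x 10 TB nodes discussed in this thread: 1 GB per 1 TB works out to roughly 120 GB of RAM per node, while 16 GB + 2 GB/HDD works out to 16 + 2 * 12 = 40 GB, so the two rules differ by about a factor of three.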
I'd increase RAM. 1 GB per 1 TB of disk is the recommendation.
Another thing you need to consider is your node density. 12x10TB is a lot
of data to have to rebalance if you aren't going to have 20+ nodes. I have
17 nodes with 24x6TB disks each. Rebuilds can take what seems like an
eternity. It may b
I am also curious about this, in light of the reported performance regression
switching from Filestore to Bluestore (when using SSDs for journalling/metadata
db). I didn't get any responses when I asked, though. The major consideration
that seems obvious is that this potentially hugely increases
Hi,
same experience here: we had trouble with out-of-memory kills of the OSD
processes on nodes with ten 8 TB disks. After an upgrade to 128 GB of RAM those
troubles disappeared.
The memory recommendations aren't overestimated.
Regards,
Tristan
On 09/03/2018 11:31, Eino Tuominen wrote:
On 09/03/2018 12.16, Ján Senko
Hi Will,
Yes, adding new pools will increase the number of PGs per OSD, but you can
always decrease the number of PGs per OSD by adding new hosts/OSDs.
When you design a cluster you have to work out how many pools you're going
to use and feed that information into PGcalc. (https://ceph.com/pgc
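The usual starting point behind that calculator is roughly: total PGs ~ (number of OSDs * 100) / replica size, rounded up to a power of two, then split across the pools according to their expected share of the data. For example, 120 OSDs with size 3 gives about 4000, so 4096 PGs in total to distribute among the pools.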
On 09/03/2018 12.16, Ján Senko wrote:
I am planning a new Ceph deployment and I have a few questions for which I
could not find good answers yet.
Our nodes will be using Xeon-D machines with 12 HDDs and 64 GB of RAM each.
Our target is to use 10 TB drives for 120 TB of capacity per node.
We ran into problem
I am planning a new Ceph deployment and I have a few questions for which I could
not find good answers yet.
Our nodes will be using Xeon-D machines with 12 HDDs and 64 GB of RAM each.
Our target is to use 10 TB drives for 120 TB of capacity per node.
1. We want to have a small amount of SSDs in the machines. For
Hi Janne:
Thanks for your response. Approximately 100 PGs per OSD, yes, I
missed that part.
I am still a little confused, because the 100-PGs-per-OSD rule is the
result of summing over all used pools.
I know I can create many pools. Assume that I have 5 pools now, and
the rule has already been
On 03/09/2018 12:49 AM, Brad Hubbard wrote:
> On Fri, Mar 9, 2018 at 3:54 AM, Subhachandra Chandra
> wrote:
>> I noticed a similar crash too. Unfortunately, I did not get much info in the
>> logs.
>>
>> *** Caught signal (Segmentation fault) **
>>
>> Mar 07 17:58:26 data7 ceph-osd-run.sh[796380]:
Hi Mike,
> For the easy case, the SCSI command is sent directly to krbd and so if
> osd_request_timeout is less than M seconds then the command will be
> failed in time and we would not hit the problem above.
> If something happens in the target stack like the SCSI command gets
> stuck/queued the
2018-03-09 10:27 GMT+01:00 Will Zhao :
> Hi all:
>
> I have a tiny question. I have read the documents, and it
> recommend approximately 100 placement groups for normal usage.
>
Per OSD. Approximately 100 PGs per OSD, when all used pools are summed up.
For things like radosgw, let it use the
Dear all,
I am wondering whether it helps to increase the bluestore_prefer_deferred_size
to 4MB so the RBD chunks are first written to the WAL, and only later to the
spinning disks.
Any opinions/experiences about this?
Kind regards,
Laszlo
On 08.03.2018 18:15, Budai Laszlo wrote:
Dear all,
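If someone wants to experiment with the deferred-write question above, a way to inspect and override the setting (the 4 MB value is just the one floated in the question, not a recommendation):

  # current value on a running OSD:
  ceph daemon osd.0 config show | grep bluestore_prefer_deferred_size
  # to try a different value, set the HDD-specific variant in ceph.conf
  # under [osd] and restart the OSD, e.g.:
  #   bluestore_prefer_deferred_size_hdd = 4194304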
Hi all:
I have a tiny question. I have read the documents, and they
recommend approximately 100 placement groups for normal usage.
Because the PG num cannot be decreased, if the PG num in the current cluster
already meets this rule, what PG num should I set when I try to create
a new pool? I
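A rough way to sanity-check before creating the pool (pool name and PG count below are only examples):

  # see how many PGs each OSD already carries (PGS column):
  ceph osd df
  # then create the new pool with a pg_num that keeps the per-OSD total,
  # summed over all pools, near ~100:
  ceph osd pool create mynewpool 128 128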
Hi, I am using ceph-ansible to build a test cluster. I want to know
whether, if I use the lvm scenario with the settings below in osds.yml:
- data: data-lv1
  data_vg: vg2
  db: db-lv1
  db_vg: vg1
the WAL also uses the db logical volume, right? I plan to use one NVMe for 10
OSDs, so creating 10 LVs from
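For what it's worth, with only data/db given, what ends up being run is essentially the following (a sketch; as far as I understand, when no separate WAL device is specified BlueStore keeps the WAL on the DB LV):

  ceph-volume lvm create --bluestore --data vg2/data-lv1 --block.db vg1/db-lv1
  # a dedicated WAL LV would need an explicit --block.wal vg1/wal-lv1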