[ceph-users] Identify rbd snapshot

2019-08-29 Thread Marc Roos
I have this error. I have found the rbd image with the block_name_prefix 1f114174b0dc51; how can I identify what snapshot this is? (Is it a snapshot?) 2019-08-29 16:16:49.255183 7f9b3f061700 -1 log_channel(cluster) log [ERR] : deep-scrub 17.36 17:6ca1f70a:::rbd_data.1f114174b0dc51.
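A hedged sketch of how one might trace that prefix back to an image and its snapshots (the pool name "rbd" is an assumption; use the pool that PG 17.36 belongs to):

    # find which RBD image owns block_name_prefix rbd_data.1f114174b0dc51
    for img in $(rbd ls rbd); do
        rbd info rbd/"$img" | grep -q 1f114174b0dc51 && echo "$img"
    done
    rbd snap ls rbd/<image>                                   # <image> found by the loop above
    rados list-inconsistent-obj 17.36 --format=json-pretty    # details of the scrub error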

[ceph-users] Re: Danish ceph users

2019-08-29 Thread Jens Dueholm Christensen
Yup, I’d be interested, but if at all possible, please do not make it Copenhagen-centric, as that would make travel time cumbersome. Regards, Jens Dueholm Christensen Stakeholder Intelligence IT P +45 5161 7879 j...@ramboll.com Ramb

[ceph-users] Re: [Ceph-users] Re: MDS failing under load with large cache sizes

2019-08-29 Thread Patrick Donnelly
Hi Janek, On Tue, Aug 6, 2019 at 11:25 AM Janek Bevendorff wrote: > > Here are tracker tickets to resolve the issues you encountered: > > > > https://tracker.ceph.com/issues/41140 > > https://tracker.ceph.com/issues/41141 The fix has been merged into master and will be backported soon. I've also

[ceph-users] Re: Ceph + SAMBA (vfs_ceph)

2019-08-29 Thread David Disseldorp
[resend to ceph.io addr] Hi Salsa, The vfs_ceph "ceph: user_id" parameter specifies the CephX user ID which Samba (via libcephfs) uses to authenticate with the Ceph cluster. These credentials are completely separate to the SMB user credentials which you provide with smbclient, etc. It seems that
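For reference, a minimal smb.conf share sketch along those lines (share name, path and the CephX user "samba" are assumptions, not taken from this thread):

    [cephfs]
        path = /
        vfs objects = ceph
        ceph: config_file = /etc/ceph/ceph.conf
        ceph: user_id = samba        ; libcephfs authenticates as CephX client.samba
        read only = no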

[ceph-users] Failure to start ceph-mon in docker

2019-08-29 Thread Frank Schilder
Hi Robert, this sounds a bit worse than I thought. I remember that Red Hat stopped packaging Ubuntu containers. The default for the public containers is now CentOS, which I'm running on all my hosts (I also use ceph containers). If Ubuntu indeed reserves a different default UID for ceph and UID
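A hedged way to check for the UID mismatch described here (the image name is a placeholder, and 167 is the ceph UID of the CentOS-based packages, so only chown if that is what your container actually expects):

    ls -ln /var/lib/ceph/mon/                # numeric UID owning the mon data on the host
    docker run --rm <ceph-image> id ceph     # UID the container's ceph user maps to
    chown -R 167:167 /var/lib/ceph           # align ownership only if the container uses 167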

[ceph-users] Re: cephfs creation error

2019-08-29 Thread Ramanathan S
Hi Patrick, Right, I missed the daemons. After their creation the cluster health returned to OK. Thanks for your input. Regards, Ram. On Tue, 20 Aug 2019 at 2:13 AM, Patrick Donnelly wrote: > Hello Ram, > > On Mon, Aug 19, 2019 at 9:51 AM Ramanathan S > wrote: > > mds: cephfs-0/0/1 up > > You ha
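For anyone hitting the same symptom, a quick sketch of the checks involved:

    ceph mds stat       # should report at least one MDS as up:active
    ceph fs status      # shows which daemons back the filesystem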

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-08-29 Thread Marco Gaiarin
Picking up on what was written in your message of 29/08/2019... > Another possibility is to convert the MBR to GPT (sgdisk --mbrtogpt) and > give the partition its UID (also sgdisk). Then it could be linked by > its uuid. and, in another email: > And I forgot that you can also re-create the journal by
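A hedged sketch of the sgdisk suggestion quoted above (device and partition numbers are placeholders):

    sgdisk --mbrtogpt /dev/sdb                # convert the MBR journal disk to GPT
    sgdisk --partition-guid=1:R /dev/sdb      # give partition 1 a random unique GUID
    ls -l /dev/disk/by-partuuid/              # point the OSD's journal symlink at this stable path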

[ceph-users] Re: How to map 2 different Openstack users belonging to the same project to 2 distinct radosgw users ?

2019-08-29 Thread Massimo Sgaravatto
Both (Swift and S3). S3 access would be done using the EC2 credentials available in OpenStack. The main use case that I would like to address is to prevent users from being able to delete the objects created by other users of the same project. Thanks, Massimo On Thu, Aug 29, 2019 at 4:41 PM Burkh
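A hedged sketch of the EC2-credential route mentioned above, using the standard OpenStack CLI (each user creates their own pair):

    openstack ec2 credentials create    # returns an access/secret key pair usable for S3 against radosgw
    openstack ec2 credentials list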

[ceph-users] Ceph pool snapshot mechanism

2019-08-29 Thread Yannick.Martin
Hello, Can you tell me whether Ceph snapshots at the pool level rely on a copy-on-write mechanism? I found a Red Hat help page that says this is incompatible with RBD snapshots (which I don't use) and talks about it being "irreversible". Can you tell me more about that last term? Regards, ___
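For context, pool snapshots are the ones created with the commands below (pool and snapshot names are examples); a pool that has taken a pool snapshot can no longer use RBD's self-managed snapshots, and vice versa, which appears to be the incompatibility and the "irreversible" choice the Red Hat page refers to:

    ceph osd pool mksnap mypool mysnap    # take a pool-level snapshot
    ceph osd pool rmsnap mypool mysnap    # remove it again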

[ceph-users] Re: modifying "osd_memory_target"

2019-08-29 Thread Mark Nelson
Hi, Currently it's only set at OSD startup time.  There is a PR in the works to fix this however: https://github.com/ceph/ceph/pull/29606 Thanks, Mark On 8/29/19 9:23 AM, Amudhan P wrote: Hi, How do i change "osd_memory_target" in ceph command line. regards Amudhan __
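Until that PR lands, a hedged sketch is to persist the value and restart the OSDs so it is picked up at startup (the 4 GiB value and the OSD id are placeholders):

    ceph config set osd osd_memory_target 4294967296
    systemctl restart ceph-osd@<id>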

[ceph-users] Re: How to map 2 different Openstack users belonging to the same project to 2 distinct radosgw users ?

2019-08-29 Thread Burkhard Linke
Hi, which protocol do you intend to use? Swift and S3 behave completely differently with respect to users and keystone-based authentication. Regards, Burkhard ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-us

[ceph-users] How to map 2 different Openstack users belonging to the same project to 2 distinct radosgw users ?

2019-08-29 Thread Massimo Sgaravatto
I have a question regarding ceph-openstack integration for object storage. Is there some configuration/hack/workaround that allows mapping 2 different users belonging to the same OpenStack project to 2 distinct radosgw users? I saw this old bug: https://tracker.ceph.com/issues/20570 but it look

[ceph-users] Re: How to customize object size

2019-08-29 Thread Janne Johansson
Den tors 29 aug. 2019 kl 16:04 skrev : > Hi, > I am new to ceph ... i am trying to increase object file size .. i can > upload file size upto 128MB .. how can i upload more than 128MB file . > > i can upload file using this > rados --pool z10 put testfile-128M.txt testfile-128M.txt > > Thats ok w

[ceph-users] modifying "osd_memory_target"

2019-08-29 Thread Amudhan P
Hi, How do I change "osd_memory_target" on the ceph command line? Regards, Amudhan ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: Danish ceph users

2019-08-29 Thread jesper
yes Sent from myMail for iOS Thursday, 29 August 2019, 15.52 +0200 from fr...@dtu.dk : >I would be in. > >= >Frank Schilder >AIT Risø Campus >Bygning 109, rum S14 > > >From: Torben Hørup < tor...@t-hoerup.dk > >Sent: 29 August 2019 14:0

[ceph-users] "ceph-users" mailing list!

2019-08-29 Thread Tapas Jana
tapas@wolkcom ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] How to customize object size

2019-08-29 Thread tapas
Hi, I am new to ceph... I am trying to increase the object file size. I can upload files up to 128MB; how can I upload a file larger than 128MB? I can upload a file using this: rados --pool z10 put testfile-128M.txt testfile-128M.txt That's OK when the file size is up to 128MB, but it's not OK when
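The 128MB ceiling is the OSD's default osd_max_object_size. A hedged alternative to raising that limit is to let rados stripe a large file across many smaller objects (file name is an example):

    rados --pool z10 --striper put testfile-256M.txt testfile-256M.txt
    rados --pool z10 --striper get testfile-256M.txt /tmp/testfile-256M.txt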

[ceph-users] Re: Danish ceph users

2019-08-29 Thread Frank Schilder
I would be in. = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Torben Hørup Sent: 29 August 2019 14:03:13 To: ceph-users@ceph.io Subject: [ceph-users] Danish ceph users Hi A colleague and I are talking about making an event i

[ceph-users] Re: help

2019-08-29 Thread Caspar Smit
Hi, This output doesn't show anything 'wrong' with the cluster. It's just still recovering (backfilling) from what looks like one of your OSDs having crashed and restarted. The backfilling is taking a while because max_backfills = 1 and you only have 3 OSDs in total, so the backfilling per PG has to have f
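If waiting is not an option, a hedged sketch to let recovery run a little faster (values are examples and should be reverted afterwards):

    ceph tell osd.* injectargs '--osd_max_backfills 2 --osd_recovery_max_active 4'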

[ceph-users] Re: help

2019-08-29 Thread Burkhard Linke
Hi, ceph uses a pseudo-random distribution within CRUSH to select the target hosts. As a result, the algorithm might not be able to select three different hosts out of three hosts within the configured number of tries. The affected PGs will be shown as undersized and only list two OSDs instead o
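A hedged sketch of how to inspect and raise the retry tunable involved (file names are placeholders):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt    # look for "tunable choose_total_tries"
    # edit the value in crushmap.txt, then recompile and inject it:
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new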

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-08-29 Thread Alwin Antreich
On Thu, Aug 29, 2019 at 03:01:22PM +0200, Alwin Antreich wrote: > On Thu, Aug 29, 2019 at 02:42:42PM +0200, Marco Gaiarin wrote: > > Mandi! Alwin Antreich > > In chel di` si favelave... > > > > > > There's something i can do? Thanks. > > > Did you go through our upgrade guide(s)? > > > > Sure!

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-08-29 Thread Alwin Antreich
On Thu, Aug 29, 2019 at 02:42:42PM +0200, Marco Gaiarin wrote: > Mandi! Alwin Antreich > In chel di` si favelave... > > > > There's something i can do? Thanks. > > Did you go through our upgrade guide(s)? > > Sure! > > > > See the link [0] below, for the > > permission changes. They are neede

[ceph-users] Heavily-linked lists.ceph.com pipermail archive now appears to lead to 404s

2019-08-29 Thread Florian Haas
Hi, is there any chance the list admins could copy the pipermail archive from lists.ceph.com over to lists.ceph.io? It seems to contain an awful lot of messages referred to elsewhere by their archive URL, many (all?) of which now appear to lead to 404s. Example: google "Set existing pools to use hdd

[ceph-users] Re: help

2019-08-29 Thread Amudhan P
output from "ceph osd pool ls detail" pool 1 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 74 lfor 0/64 flags hashpspool stripe_width 0 application cephfs pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-08-29 Thread Marco Gaiarin
Hello, Alwin Antreich! In your message you wrote... > > Is there something I can do? Thanks. > Did you go through our upgrade guide(s)? Sure! > See the link [0] below, for the > permission changes. They are needed when an upgrade from Hammer to Jewel > is done. Sure! The problem arises in the 'Se

[ceph-users] Re: help

2019-08-29 Thread Heðin Ejdesgaard Møller
What's the output of ceph osd pool ls detail On hós, 2019-08-29 at 18:06 +0530, Amudhan P wrote: > output from "ceph -s " > > cluster: > id: 7c138e13-7b98-4309-b591-d4091a1742b4 > health: HEALTH_WARN > Degraded data redundancy: 1141587/7723191 objects > degraded (14.78

[ceph-users] Re: FileStore OSD, journal direct symlinked, permission troubles.

2019-08-29 Thread Alwin Antreich
Hello Marco, On Thu, Aug 29, 2019 at 12:55:56PM +0200, Marco Gaiarin wrote: > > I've just finished a double upgrade on my ceph (PVE-based) from hammer > to jewel and from jewel to luminous. > > All went well, apart that... OSD does not restart automatically, > because permission troubles on the
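For reference, a hedged sketch of the usual Hammer-to-Jewel ownership fix for a FileStore OSD with an external journal (device names are placeholders):

    chown -R ceph:ceph /var/lib/ceph/osd/ceph-*
    chown ceph:ceph /dev/sdb1    # the partition the OSD's journal symlink points at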

[ceph-users] Re: help

2019-08-29 Thread Amudhan P
output from "ceph -s " cluster: id: 7c138e13-7b98-4309-b591-d4091a1742b4 health: HEALTH_WARN Degraded data redundancy: 1141587/7723191 objects degraded (14.781%), 15 pgs degraded, 16 pgs undersized services: mon: 1 daemons, quorum mon01 mgr: mon01(active) m

[ceph-users] Re: help

2019-08-29 Thread Heðin Ejdesgaard Møller
In addition to ceph -s, could you provide the output of ceph osd tree and specify what your failure domain is? /Heðin On hós, 2019-08-29 at 13:55 +0200, Janne Johansson wrote: > > > Den tors 29 aug. 2019 kl 13:50 skrev Amudhan P : > > Hi, > > > > I am using ceph version 13.2.6 (mimic) on tes

[ceph-users] Danish ceph users

2019-08-29 Thread Torben Hørup
Hi, A colleague and I are talking about organising an event in Denmark for the Danish ceph community, and we would like to get a feel for how many ceph users there are in Denmark and, of those, who would be interested in a Danish ceph event. Regards, Torben ___

[ceph-users] pg_autoscale HEALTH_WARN

2019-08-29 Thread James Dingwall
Hi, I'm running a small nautilus cluster (14.2.2) which was recently upgraded from mimic (13.2.6). After the upgrade I enabled the pg_autoscaler which resulted in most of the pools having their pg count changed. All the remapping has completed but the cluster is still reporting a HEALTH_WARN. I
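A hedged sketch of the usual first checks in that situation:

    ceph health detail                # names the pools the warning is about
    ceph osd pool autoscale-status    # target vs. actual pg_num per pool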

[ceph-users] Re: help

2019-08-29 Thread Janne Johansson
Den tors 29 aug. 2019 kl 13:50 skrev Amudhan P : > Hi, > > I am using ceph version 13.2.6 (mimic) on test setup trying with cephfs. > my ceph health status showing warning . > > "ceph health" > HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded > (15.499%) > > "ceph health deta

[ceph-users] help

2019-08-29 Thread Amudhan P
Hi, I am using ceph version 13.2.6 (mimic) on a test setup, trying out cephfs. My ceph health status is showing a warning. "ceph health" HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded (15.499%) "ceph health detail" HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects d

[ceph-users] Re: ceph fs crashes on simple fio test

2019-08-29 Thread Dan van der Ster
Thanks for this thread -- I'd forgotten all about high/low. On one congested rbd cluster it just dropped our 4kB object write latency from ~150ms to ~40ms :-) Who wants to send the PR? -- sounds like an easy way to get on the next t-shirt. -- dan On Mon, Aug 26, 2019 at 10:24 PM Robert LeBlanc
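For reference, a hedged sketch of the setting this thread is about (it is read at OSD start, so a restart is needed for it to take effect):

    ceph config set osd osd_op_queue_cut_off high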