seem to be effective.
Cheers,
Oliver
Am 22.12.19 um 23:50 schrieb Oliver Freyermuth:
> Dear Jonas,
>
> Am 22.12.19 um 23:40 schrieb Jonas Jelten:
>> hi!
>>
>> I've also noticed that behavior and have submitted a patch some time ago
>> that should fix (2):
I can just restart one of the "out" OSDs and see what happens).
Cheers and many thanks,
Oliver
>
> Cheers
> -- Jonas
>
>
> On 22/12/2019 19.48, Oliver Freyermuth wrote:
>> Dear Cephers,
>>
>> I realized the following behaviour only recently:
Dear Cephers,
I realized the following behaviour only recently:
1. Marking an OSD "out" sets its weight to zero and allows data to be migrated away
(as long as it is up),
i.e. it is still considered a "source" and nothing goes to degraded state
(so far, everything expected).
2. Restarting an
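For reference, the commands behind this behaviour (osd id 12 is a placeholder):
# ceph osd out 12
# ceph osd in 12
Marking "out" sets the reweight of osd.12 to zero so data can migrate away while
the OSD stays up; "in" reverses this.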
Hi,
On 2019-11-20 15:55, thoralf schulze wrote:
hi,
we were able to track this down to the auto balancer: disabling the auto
balancer and cleaning out old (and probably not very meaningful)
upmap-entries via ceph osd rm-pg-upmap-items brought back stable mgr
daemons and a usable dashboard.
I
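For reference, a minimal sketch of the clean-up described above (the pg id is a
placeholder; "ceph osd dump" lists the actual pg_upmap_items entries):
# ceph balancer off
# ceph osd dump | grep pg_upmap_items
# ceph osd rm-pg-upmap-items 2.3f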
On 2019-10-24 09:46, Janne Johansson wrote:
(Slightly abbreviated)
Den tors 24 okt. 2019 kl 09:24 skrev Frank Schilder <fr...@dtu.dk>:
What I learned is the following:
1) Avoid the work-around of having too few hosts for an EC rule at all cost.
2) Do not use EC 2+1. It does not offe
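For reference, a profile with more redundancy could be created like this
(profile name and k/m values are illustrative, not a recommendation from the
thread):
# ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
# ceph osd erasure-code-profile get ec42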
[truncated table, apparently "ceph osd pool autoscale-status" output; the
recoverable row: pool "rbd": SIZE 1856G, RATE 3.0, RAW CAPACITY 5952G,
RATIO 0.9359, TARGET RATIO 0.9200, BIAS 1.0, PG_NUM 256, AUTOSCALE on]
Cheers,
Oliver
Am 12.09.19 um 23:34 schrieb Oliver Freyermuth:
Dear Cephalopodians,
I can confirm the same
Dear Wido,
On 2019-09-24 08:53, Wido den Hollander wrote:
On 9/17/19 11:01 PM, Oliver Freyermuth wrote:
Dear Cephalopodians,
I realized just now that:
https://eu.ceph.com/rpm-nautilus/el7/x86_64/
still holds only releases up to 14.2.2, and nothing is to be seen of
14.2.3 or 14.2.4,
while
s,
Oliver
Cheers,
Matthew.
(au.ceph.com maintainer)
On 24/9/19 6:48 am, David Majchrzak, ODERLAND Webbhotell AB wrote:
Hi,
I'll have a look at the status of se.ceph.com tomorrow morning, it's
maintained by us.
Kind Regards,
David
On mån, 2019-09-23 at 22:41 +0200, Oliver Freyer
, at least geographically, this only leaves Sweden and UK.
Sweden at se.ceph.com does not load for me, but UK indeed seems fine.
Should people in the EU use that mirror, or should we all just use
download.ceph.com instead of something geographically close-by?
Cheers,
Oliver
On 2019-09-17 23:
ially afterwards, though.
So this probably means we are not affected by the upgrade bug - still, I would
sleep better if somebody can confirm how to detect this bug and - if you are
affected - how to edit the pool
to fix it.
Cheers,
Oliver
On 2019-09-17 21:23, Oliver Freyermuth wrote
Dear Cephalopodians,
I realized just now that:
https://eu.ceph.com/rpm-nautilus/el7/x86_64/
still holds only releases up to 14.2.2, and nothing is to be seen of 14.2.3 or
14.2.4,
while the main repository at:
https://download.ceph.com/rpm-nautilus/el7/x86_64/
looks as expected.
Is this issu
Hi together,
it seems the issue described by Ansgar was reported and closed here as being
fixed for newly created pools in post-Luminous releases:
https://tracker.ceph.com/issues/41336
However, it is unclear to me:
- How to find out if an EC cephfs you have created in Luminous is actually affec
"up+replaying".
Thanks and all the best,
Oliver
>
> On Fri, Sep 13, 2019 at 12:44 PM Oliver Freyermuth
> wrote:
>>
>> Am 13.09.19 um 18:38 schrieb Jason Dillaman:
>>> On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
>>> wrote:
>>>
Am 13.09.19 um 18:38 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
wrote:
Am 13.09.19 um 17:18 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
wrote:
Am 13.09.19 um 16:30 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:17 AM
Am 13.09.19 um 17:18 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
wrote:
Am 13.09.19 um 16:30 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
wrote:
Dear Jason,
thanks for
Am 13.09.19 um 16:30 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
wrote:
Dear Jason,
thanks for the very detailed explanation! This was very instructive.
Sadly, the watchers look correct - see details
Am 13.09.19 um 16:17 schrieb Jason Dillaman:
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
wrote:
Dear Jason,
thanks for the very detailed explanation! This was very instructive.
Sadly, the watchers look correct - see details inline.
Am 13.09.19 um 15:02 schrieb Jason Dillaman:
On Thu
Dear Jason,
thanks for the very detailed explanation! This was very instructive.
Sadly, the watchers look correct - see details inline.
Am 13.09.19 um 15:02 schrieb Jason Dillaman:
On Thu, Sep 12, 2019 at 9:55 PM Oliver Freyermuth
wrote:
Dear Jason,
thanks for taking care and developing a
ays.
Any idea on this (or how I can extract more information)?
I fear keeping high-level debug logs active for ~24h is not feasible.
Cheers,
Oliver
On 2019-09-11 19:14, Jason Dillaman wrote:
> On Wed, Sep 11, 2019 at 12:57 PM Oliver Freyermuth
> wrote:
>>
>> Dear
Dear Cephalopodians,
I can confirm the same problem described by Joe Ryner in 14.2.2. I'm also
getting (in a small test setup):
-
# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees
have overcommitt
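For reference, the target size hints that trigger this warning can be adjusted
or cleared per pool - a sketch with the pool name as a placeholder:
# ceph osd pool set rbd target_size_bytes 0
# ceph osd pool set rbd target_size_ratio 0.92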
me to figure out what could be
the problem - do you see what I did wrong?
Cheers and thanks again,
Oliver
On 2019-09-10 23:17, Oliver Freyermuth wrote:
Dear Jason,
On 2019-09-10 23:04, Jason Dillaman wrote:
On Tue, Sep 10, 2019 at 2:08 PM Oliver Freyermuth
wrote:
Dear Jason,
On
Dear Jason,
On 2019-09-10 23:04, Jason Dillaman wrote:
> On Tue, Sep 10, 2019 at 2:08 PM Oliver Freyermuth
> wrote:
>>
>> Dear Jason,
>>
>> On 2019-09-10 18:50, Jason Dillaman wrote:
>>> On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth
>>> w
Dear Jason,
On 2019-09-10 18:50, Jason Dillaman wrote:
> On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth
> wrote:
>>
>> Dear Cephalopodians,
>>
>> I have two questions about RBD mirroring.
>>
>> 1) I can not get it to work - my setup is:
>>
Dear Cephalopodians,
I have two questions about RBD mirroring.
1) I cannot get it to work - my setup is:
- One cluster holding the live RBD volumes and snapshots, in pool "rbd", cluster name
"ceph",
running latest Mimic.
I ran "rbd mirror pool enable rbd pool" on that cluster and
Hi together,
Am 01.08.19 um 08:45 schrieb Janne Johansson:
Den tors 1 aug. 2019 kl 07:31 skrev Muhammad Junaid <junaid.fsd...@gmail.com>:
Your email has clarified many things for me. Let me repeat my understanding.
Every Critical data (Like Oracle/Any Other DB) writes will be done with
Hi Alfredo,
you may want to check the SMART data for the disk.
I also had such a case recently (see
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035117.html for
the thread),
and the disk had one unreadable sector which was pending reallocation.
Triggering "ceph pg repair" for t
Hi,
Am 31.05.19 um 12:07 schrieb Burkhard Linke:
> Hi,
>
>
> see my post in the recent 'CephFS object mapping.' thread. It describes the
> necessary commands to lookup a file based on its rados object name.
many thanks! I somehow missed the important part in that thread earlier and
only got t
Am 30.05.19 um 17:00 schrieb Oliver Freyermuth:
> Dear Cephalopodians,
>
> I found the messages:
> 2019-05-30 16:08:51.656363 [ERR] Error -5 reading object
> 2:0979ae43:::10002954ea6.007c:head
> 2019-05-30 16:08:51.760660 [WRN] Error(s) ignored
Dear Cephalopodians,
I found the messages:
2019-05-30 16:08:51.656363 [ERR] Error -5 reading object
2:0979ae43:::10002954ea6.007c:head
2019-05-30 16:08:51.760660 [WRN] Error(s) ignored for
2:0979ae43:::10002954ea6.007c:head enough copies available
just now in our logs (Mimic 13.2.5)
lancer/sleep_interval
> *2019-05-29 17:06:54.327 7f40cd3e8700 4 mgr[balancer] Optimize plan
> auto_2019-05-29_17:06:54*
> 2019-05-29 17:06:54.327 7f40cd3e8700 4 mgr get_config get_config key:
> mgr/balancer/mode
> 2019-05-29 17:06:54.327 7f40cd3e8700 4 mgr get_config get_config
nous",
> "num": 3
> }
> ],
> "osd": [
> {
> "features": "0x3ffddff8ffacfffb",
> "release": "luminous",
> "num": 7
> }
> ],
> "client": [
> {
> "features": "0x3ffddff8ffac
Hi Tarek,
what's the output of "ceph balancer status"?
In case you are using "upmap" mode, you must make sure to have a
min-client-compat-level of at least Luminous:
http://docs.ceph.com/docs/mimic/rados/operations/upmap/
Of course, please be aware that your clients must be recent enough (especi
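For reference, the commands to check and raise the level (only do the latter
once all clients are recent enough):
# ceph balancer status
# ceph features
# ceph osd set-require-min-compat-client luminous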
Am 28.05.19 um 03:24 schrieb Yan, Zheng:
On Mon, May 27, 2019 at 6:54 PM Oliver Freyermuth
wrote:
Am 27.05.19 um 12:48 schrieb Oliver Freyermuth:
Am 27.05.19 um 11:57 schrieb Dan van der Ster:
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
wrote:
Dear Dan,
thanks for the quick reply
Am 27.05.19 um 12:48 schrieb Oliver Freyermuth:
Am 27.05.19 um 11:57 schrieb Dan van der Ster:
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
wrote:
Dear Dan,
thanks for the quick reply!
Am 27.05.19 um 11:44 schrieb Dan van der Ster:
Hi Oliver,
We saw the same issue after upgrading
Am 27.05.19 um 11:57 schrieb Dan van der Ster:
On Mon, May 27, 2019 at 11:54 AM Oliver Freyermuth
wrote:
Dear Dan,
thanks for the quick reply!
Am 27.05.19 um 11:44 schrieb Dan van der Ster:
Hi Oliver,
We saw the same issue after upgrading to mimic.
IIRC we could make the max_bytes xattr
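For reference, the quota attributes in question are set and read as xattrs
(path and size are placeholders):
# setfattr -n ceph.quota.max_bytes -v 100000000000 /cephfs/some_folder
# getfattr -n ceph.quota.max_bytes /cephfs/some_folder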
dnesday, and worst
case could survive until then without quota enforcement, but it's a really
strange and unexpected incompatibility.
Cheers,
Oliver
Does that work?
-- dan
On Mon, May 27, 2019 at 11:36 AM Oliver Freyermuth
wrote:
Dear Cephalopodians,
in the process of migrati
Dear Cephalopodians,
in the process of migrating a cluster from Luminous (12.2.12) to Mimic
(13.2.5), we have upgraded the FUSE clients first (we took the chance during a
time of low activity),
thinking that this should not cause any issues. All MDS+MON+OSDs are still on
Luminous, 12.2.12.
Ho
hting that this is not information to be monitored.
What do you think?
Cheers,
Oliver
>
>
> -- Yury
>
> On Wed, May 01, 2019 at 01:23:57AM +0200, Oliver Freyermuth wrote:
>> Am 01.05.19 um 00:51 schrieb Patrick Donnelly:
>>> On Tue, Apr 30, 2019 at 8:01
Am 01.05.19 um 00:51 schrieb Patrick Donnelly:
> On Tue, Apr 30, 2019 at 8:01 AM Oliver Freyermuth
> wrote:
>>
>> Dear Cephalopodians,
>>
>> we have a classic libvirtd / KVM based virtualization cluster using Ceph-RBD
>> (librbd) as backend and sharing th
Dear Cephalopodians,
we have a classic libvirtd / KVM based virtualization cluster using Ceph-RBD
(librbd) as backend and sharing the libvirtd configuration between the nodes
via CephFS
(all on Mimic).
To share the libvirtd configuration between the nodes, we have symlinked some
folders from
Dear Cephalopodians,
in some recent threads on this list, I have read about the "knobs":
pglog_hardlimit (false by default, available at least with 12.2.11 and
13.2.5)
bdev_enable_discard (false by default, advanced option, no description)
bdev_async_discard (false by default, advance
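For reference, a sketch of how these knobs are toggled (config syntax as on
Mimic; please check the release notes first - enabling pglog_hardlimit may
require --yes-i-really-mean-it):
# ceph osd set pglog_hardlimit
# ceph config set osd bdev_enable_discard true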
Hi,
first off: I'm probably not the expert you are waiting for, but we are using
CephFS for HPC / HTC (storing datafiles), and make use of containers for all
jobs (up to ~2000 running in parallel).
We also use RBD, but for our virtualization infrastructure.
While I'm always one of the first to
Am 10.01.19 um 16:53 schrieb Jason Dillaman:
> On Thu, Jan 10, 2019 at 10:50 AM Oliver Freyermuth
> wrote:
>>
>> Dear Jason and list,
>>
>> Am 10.01.19 um 16:28 schrieb Jason Dillaman:
>>> On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
>>> w
Dear Jason and list,
Am 10.01.19 um 16:28 schrieb Jason Dillaman:
On Thu, Jan 10, 2019 at 4:01 AM Oliver Freyermuth
wrote:
Dear Cephalopodians,
I performed several consistency checks now:
- Exporting an RBD snapshot before and after the object map rebuilding.
- Exporting a backup as raw
anding correct?
Then the underlying issue would still be a bug, but (as it seems) a harmless
one.
I'll let you know if it happens again to some of our snapshots, and if so, if
it only happens to newly created ones...
Cheers,
Oliver
Am 10.01.19 um 01:18 schrieb Oliver Freyerm
Dear Cephalopodians,
inspired by
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-January/032092.html I
did a check of the object-maps of our RBD volumes
and snapshots. We are running 13.2.1 on the cluster I am talking about, all
hosts (OSDs, MONs, RBD client nodes) still on CentOS 7.5.
Am 18.12.18 um 11:48 schrieb Hector Martin:
> On 18/12/2018 18:28, Oliver Freyermuth wrote:
>> We have yet to observe these hangs, we are running this with ~5 VMs with ~10
>> disks for about half a year now with daily snapshots. But all of these VMs
>> have very "
Dear Hector,
we are using the very same approach on CentOS 7 (freeze + thaw), but preceded
by an fstrim. With virtio-scsi, using fstrim propagates the discards from
within the VM to Ceph RBD (if qemu is configured accordingly),
and a lot of space is saved.
We have yet to observe these hangs,
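For reference, a sketch of the snapshot sequence (domain, pool and image names
are placeholders):
(inside the guest, before the snapshot)
# fstrim -av
(on the host, around the snapshot)
# virsh domfsfreeze myvm
# rbd snap create rbd/myvm-disk@nightly
# virsh domfsthaw myvm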
That's kind of unrelated to Ceph, but since you wrote two mails already,
and I believe it is caused by the mailing list software for ceph-users...
Your original mail distributed via the list ("[ceph-users] Ceph 10.2.11 -
Status not working") did
*not* have the forged-warning.
Only the subseque
There's also an additional issue which made us activate
CEPH_AUTO_RESTART_ON_UPGRADE=yes
(and of course, not have automatic updates of Ceph):
When using compression e.g. with Snappy, it seems that already running OSDs
which try to dlopen() the snappy library
for some version upgrades become u
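For reference, on RPM-based systems this is the relevant line in
/etc/sysconfig/ceph:
CEPH_AUTO_RESTART_ON_UPGRADE=yes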
) by moving the ceph buckets manually to the other
rack / datacenter.
Thanks for the explanation!
Cheers,
Oliver
> -Greg
> On Fri, Nov 30, 2018 at 6:46 AM Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
>
> Dear Cephalopodians,
>
> sor
move crush item name 'osd.1' initial_weight 3.6824 at location
{datacenter=FTD,host=osd001,root=default}
--
So the request to move to datacenter=FTD arrives at the mon, but no action is
taken, and the OSD is
ve itself
into datacenter=FTD.
But that does not happen...
Any idea what I am missing?
Cheers,
Oliver
Am 30.11.18 um 11:44 schrieb Oliver Freyermuth:
Dear Cephalopodians,
I'm probably missing something obvious, but I am at a loss here on how to
actually make use of a customized
Dear Cephalopodians,
I'm probably missing something obvious, but I am at a loss here on how to
actually make use of a customized crush location hook.
I'm currently on "ceph version 13.2.1" on CentOS 7 (i.e. the last version
before the upgrade-preventing bugs). Here's what I did:
1. Write a sc
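For reference, a minimal hook sketch (path and contents are illustrative; ceph
invokes the hook with --cluster, --id and --type and reads the location from
stdout):
#!/bin/sh
echo "host=$(hostname -s) datacenter=FTD root=default"
referenced from ceph.conf via:
crush location hook = /usr/local/bin/ceph-crush-location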
[truncated "ceph osd df"-style listing; recoverable row, e.g.: osd 139, class
mf1hdd, weight 8.91019, reweight 1.0, size 8.91TiB, used 2.48TiB, avail
6.43TiB, %USE 27.84, VAR 1.00, 173 PGs]
------
Am 20.10.18 um 21:26 schrieb Janne Johansson:
> Ok, can't say "why" then, I'd reweight them somewhat to even it out,
> 1.22 -vs- 0.74 in variance is a lot, so either a balancer plugin for
> the MGRs, a script or just a few manual tweaks might be in order.
>
> Den lör
more than one pool on them, so RAW space
>>> is what it says, how much free space there is. Then the avail and
>>> %USED on per-pool stats will take replication into account, it can
>>> tell how much data you may write into that particular pool, given that
>>> p
pools replication or EC settings.
>
> Den lör 20 okt. 2018 kl 19:09 skrev Oliver Freyermuth
> :
>>
>> Dear Cephalopodians,
>>
>> as many others, I'm also a bit confused by "ceph df" output
>> in a pretty straightforward configurati
Dear Cephalopodians,
as many others, I'm also a bit confused by "ceph df" output
in a pretty straightforward configuration.
We have a CephFS (12.2.7) running, with 4+2 EC profile.
I get:
# ceph df
GLOBAL:
SIZE
ows to grow / shrink the cluster more easily as needed ;-).
All the best,
Oliver
> Thanks again for your help.
> Best Regards,
> /ST Wong
>
> -Original Message-
> From: Oliver Freyermuth
> Sent: Thursday, September 20, 2018 2:10 AM
> To: ST Wong (ITSC)
> C
date.
All the best,
Oliver
>
> Thanks again.
> /st wong
>
> -Original Message-
> From: Oliver Freyermuth
> Sent: Wednesday, September 19, 2018 5:28 PM
> To: ST Wong (ITSC)
> Cc: Peter Wienemann ; ceph-users@lists.ceph.com
> Subject: Re: [ce
ardware of course).
> Btw, is this one (https://benji-backup.me/) Benji you'r referring to ?
> Thanks a lot.
Exactly :-).
Cheers,
Oliver
>
>
>
> Cheers,
> /ST Wong
>
>
>
> -Original Message-
> From: Oliver Freyermuth
> S
Am 28.08.18 um 07:14 schrieb Yan, Zheng:
> On Mon, Aug 27, 2018 at 10:53 AM Oliver Freyermuth
> wrote:
>>
>> Thanks for the replies.
>>
>> Am 27.08.18 um 19:25 schrieb Patrick Donnelly:
>>> On Mon, Aug 27, 2018 at 12:51 AM, Oliver Freyermuth
>>> wr
Thanks for the replies.
Am 27.08.18 um 19:25 schrieb Patrick Donnelly:
> On Mon, Aug 27, 2018 at 12:51 AM, Oliver Freyermuth
> wrote:
>> These features are critical for us, so right now we use the Fuse client. My
>> hope is CentOS 8 will use a recent enough kernel
Dear Cephalopodians,
sorry if this is the wrong place to ask - but does somebody know if the
recently added quota support in the kernel client,
and the ACL support, are going to be backported to RHEL 7 / CentOS 7 kernels?
Or can someone redirect me to the correct place to ask?
We don't have a R
Hi,
completely different idea: Have you tried to export the "time capsule" storage
via AFP (using netatalk) instead of Samba?
We are also planning to offer something like this for our users (in the
mid-term future), but my feeling was that compatibility with netatalk / AFP
would be better than
Hi together,
for all others on this list, it might also be helpful to know which setups are
likely affected.
Does this only occur for Filestore disks, i.e. if ceph-volume has taken over
managing these?
Does it happen on every RHEL 7.5 system?
We're still on 13.2.0 here and ceph-detect-
Am 23.07.2018 um 14:59 schrieb Nicolas Huillard:
> Le lundi 23 juillet 2018 à 12:40 +0200, Oliver Freyermuth a écrit :
>> Am 23.07.2018 um 11:18 schrieb Nicolas Huillard:
>>> Le lundi 23 juillet 2018 à 18:23 +1000, Brad Hubbard a écrit :
>>>> Ceph doesn't shut do
Am 23.07.2018 um 11:39 schrieb Nicolas Huillard:
> Le lundi 23 juillet 2018 à 10:28 +0200, Caspar Smit a écrit :
>> Do you have any hardware watchdog running in the system? A watchdog
>> could
>> trigger a powerdown if it meets some value. Any event logs from the
>> chassis
>> itself?
>
> Nice sug
Am 23.07.2018 um 11:18 schrieb Nicolas Huillard:
> Le lundi 23 juillet 2018 à 18:23 +1000, Brad Hubbard a écrit :
>> Ceph doesn't shut down systems as in kill or reboot the box if that's
>> what you're saying?
>
> That's the first part of what I was saying, yes. I was pretty sure Ceph
> doesn't re
Since all services are running on these machines - are you by any chance
running low on memory?
Do you have a monitoring of this?
We observe some strange issues with our servers if they run for a long while,
and with high memory pressure (more memory is ordered...).
Then, it seems our Infinib
Hi Satish,
that really completely depends on your controller.
For what it's worth: We have AVAGO MegaRAID controllers (9361 series).
They can be switched to a "JBOD personality". After doing so and reinitializing
(powercycling),
the cards change PCI-ID and run a different firmware, optimized f
------
> *Fr
Am 19.07.2018 um 05:57 schrieb Konstantin Shalygin:
>> Now my first question is:
>> 1) Is there a way to specify "take default class (ssd or nvme)"?
>>Then we could just do this for the migration period, and at some point
>> remove "ssd".
>>
>> If multi-device-class in a crush rule is not s
Dear Cephalopodians,
we use an SSD-only pool to store the metadata of our CephFS.
In the future, we will add a few NVMes, and in the long term replace the
existing SSDs with NVMes, too.
Thinking this through, I came up with three questions which I do not find
answered in the docs (yet).
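For reference, a sketch of per-class rules as a possible migration path (rule
and pool names are placeholders; as far as I know, a single rule step can only
take one device class):
# ceph osd crush rule create-replicated meta-ssd default host ssd
# ceph osd crush rule create-replicated meta-nvme default host nvme
# ceph osd pool set cephfs_metadata crush_rule meta-nvme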
Am 18.07.2018 um 16:20 schrieb Sage Weil:
> On Wed, 18 Jul 2018, Oliver Freyermuth wrote:
>> Am 18.07.2018 um 14:20 schrieb Sage Weil:
>>> On Wed, 18 Jul 2018, Linh Vu wrote:
>>>> Thanks for all your hard work in putting out the fixes so quickly! :)
>>>
Am 18.07.2018 um 14:20 schrieb Sage Weil:
> On Wed, 18 Jul 2018, Linh Vu wrote:
>> Thanks for all your hard work in putting out the fixes so quickly! :)
>>
>> We have a cluster on 12.2.5 with Bluestore and EC pool but for CephFS,
>> not RGW. In the release notes, it says RGW is a risk especially t
Also many thanks from my side!
Am 18.07.2018 um 03:04 schrieb Linh Vu:
> Thanks for all your hard work in putting out the fixes so quickly! :)
>
> We have a cluster on 12.2.5 with Bluestore and EC pool but for CephFS, not
> RGW. In the release notes, it says RGW is a risk especially the garbage
stency of directory contents for months, which have been fixed
in 12.2.6,
but given this situation, we'll rather live with that a bit longer and hold off
on the update...
>
> Thanks for pointing that out though, it seems like almost the exact same
> situation
>
> On 2
Hi,
all this sounds an awful lot like:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-July/027992.html
In that case, things started with an update to 12.2.6. Which version are you
running?
Cheers,
Oliver
Am 12.07.2018 um 23:30 schrieb Kevin:
> Sorry for the long posting but trying to
Am 02.06.2018 um 12:35 schrieb Marc Roos:
>
> o+w? I don’t think that is necessary not?
I also wondered about that, but it seems safe - it's only a tmpfs,
with sticky bit set - and all files within have:
-rw---.
as you can check.
Also, on our systems, we have:
drwxr-x---.
for /var/lib/ceph,
Am 02.06.2018 um 11:44 schrieb Marc Roos:
>
>
> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume
> does
I believe that's expected when you use "prepare".
For ceph-volume, "prepare" already bootstraps the OSD and fetches a fresh OSD
id,
for which it needs the keyring.
For
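For reference, a minimal invocation (the device is a placeholder); it expects
the bootstrap keyring at /var/lib/ceph/bootstrap-osd/ceph.keyring:
# ceph-volume lvm prepare --data /dev/sdb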
The command mapping from ceph-disk to ceph-volume is certainly not 1:1.
What we ended up using is:
ceph-volume lvm zap /dev/sda --destroy
This takes care of destroying PVs and LVs (as the documentation says).
Cheers,
Oliver
Am 02.06.2018 um 12:16 schrieb Marc Roos:
>
> I guess zap
Am 01.06.2018 um 02:59 schrieb Yan, Zheng:
> On Wed, May 30, 2018 at 5:17 PM, Oliver Freyermuth
> wrote:
>> Am 30.05.2018 um 10:37 schrieb Yan, Zheng:
>>> On Wed, May 30, 2018 at 3:04 PM, Oliver Freyermuth
>>> wrote:
>>>> Hi,
>>>>
Am 30.05.2018 um 10:37 schrieb Yan, Zheng:
> On Wed, May 30, 2018 at 3:04 PM, Oliver Freyermuth
> wrote:
>> Hi,
>>
>> in our case, there's only a single active MDS
>> (+1 standby-replay + 1 standby).
>> We also get the health warning in case it happ
___
>> From: ceph-users on behalf of Yan, Zheng
>>
>> Sent: Tuesday, 29 May 2018 9:53:43 PM
>> To: Oliver Freyermuth
>> Cc: Ceph Users; Peter Wienemann
>> Subject: Re: [ceph-users] Ceph-fuse getting stuck with "currently failed to
>>
----------
> *From:* ceph-users on behalf of Oliver
> Freyermuth
> *Sent:* Tuesday, 29 May 2018 7:29:06 AM
> *To:* Paul Emmerich
> *Cc:* Ceph Users; Peter
e also use (and the user in question who complained was accessing files
in parallel via NFS and ceph-fuse),
but I don't have a clear indication of that.
Cheers,
Oliver
>
> Paul
>
> 2018-05-28 16:38 GMT+02:00 Oliver Freyermuth <mailto:freyerm...@physik.uni-bonn.de>>
Dear Cephalopodians,
we just had a "lockup" of many MDS requests, and also trimming fell behind, for
over 2 days.
One of the clients (all ceph-fuse 12.2.5 on CentOS 7.5) was in status
"currently failed to authpin local pins". Metadata pool usage did grow by 10 GB
in those 2 days.
Rebooting t
Am 25.05.2018 um 15:39 schrieb Sage Weil:
> On Fri, 25 May 2018, Oliver Freyermuth wrote:
>> Dear Ric,
>>
>> I played around a bit - the common denominator seems to be: Moving it
>> within a directory subtree below a directory for which max_bytes /
>> max_fil
Am 25.05.2018 um 15:26 schrieb Luis Henriques:
> Oliver Freyermuth writes:
>
>> Mhhhm... that's funny, I checked an mv with an strace now. I get:
>> -
>> access("/cephfs/some_fold
tat foo' and 'stat /cephfs/some_folder'?
> (Maybe also the same with 'stat -f'.)
>
> Thanks!
> sage
>
>
> On Fri, 25 May 2018, Ric Wheeler wrote:
>> That seems to be the issue - we need to understand why rename sees them as
>> different.
, rename() returns EXDEV.
Cheers,
Oliver
Am 25.05.2018 um 15:18 schrieb Ric Wheeler:
> That seems to be the issue - we need to understand why rename sees them as
> different.
>
> Ric
>
>
> On Fri, May 25, 2018, 9:15 AM Oliver Freyermuth
> <freyerm...@ph
ver it looks at is confused, that might explain it.
>
> Ric
>
>
> On Fri, May 25, 2018, 9:04 AM Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
>
> Am 25.05.2018 um 14:57 schrieb Ric Wheeler:
> > Is this move between directories on t
y 25, 2018, 8:51 AM John Spray <jsp...@redhat.com> wrote:
>
> On Fri, May 25, 2018 at 1:10 PM, Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de>
> wrote:
> > Dear Cephalopodians,
> >
> > I was wondering why a simple "
Am 25.05.2018 um 14:50 schrieb John Spray:
> On Fri, May 25, 2018 at 1:10 PM, Oliver Freyermuth
> wrote:
>> Dear Cephalopodians,
>>
>> I was wondering why a simple "mv" is taking extraordinarily long on CephFS
>> and must note that,
>> at least w
Dear Cephalopodians,
I was wondering why a simple "mv" is taking extraordinarily long on CephFS and
must note that,
at least with the fuse-client (12.2.5) and when moving a file from one
directory to another,
the file appears to be copied first (byte by byte, traffic going through the
client?)
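For reference, a way to confirm this (paths are placeholders); in the trace,
rename() returns EXDEV and mv falls back to a copy + unlink:
# strace -e trace=file mv /cephfs/dir1/file /cephfs/dir2/file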
ience with ganesha with cephfs if you're
> happy to share some insights. Any tuning you would recommend?
>
> Thanks,
>
> On Wed, May 16, 2018 at 4:14 PM, Oliver Freyermuth
> <freyerm...@physik.uni-bonn.de> wrote:
>
> Hi David,
>
> did you alr
Hi David,
did you already manage to check your librados2 version and pin down
the issue?
Cheers,
Oliver
Am 11.05.2018 um 17:15 schrieb Oliver Freyermuth:
> Hi David,
>
> Am 11.05.2018 um 16:55 schrieb David C:
>> Hi Oliver
>>
>> Thanks for