unlink
of "1/3/bar" may eventually cause "/1/2/foo/bar" to become the new
primary inode?)
--
Patrick Donnelly
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
py of the executable, or `objdump -rdS ` is needed to
> interpret this.
I have a bug report filed for this issue: http://tracker.ceph.com/issues/16983
I believe it should be straightforward to solve and we'll have a fix
for it soon.
Thanks for the report!
Aug 10, 2016 at 1:21 PM, Patrick Donnelly <pdonn...@redhat.com>
> wrote:
>>
>> Hello Randy,
>>
>> On Wed, Aug 10, 2016 at 12:20 PM, Randy Orr <randy@nimbix.net> wrote:
>> > mds/Locker.cc: In function 'bool Locker::check_inode_max_size(CInode*,
>&
/hammer/rados/configuration/osd-config-ref/
in particular: "osd client message size cap". Also:
http://docs.ceph.com/docs/hammer/rados/configuration/journal-ref/
Hi Goncalo,
I believe this segfault may be the one fixed here:
https://github.com/ceph/ceph/pull/10027
(Sorry for the brief top-post. I'm on mobile.)
On Jul 4, 2016 9:16 PM, "Goncalo Borges"
wrote:
>
> Dear All...
>
> We have recently migrated all our ceph
-rdS ` is needed to
> interpret this.
This one looks like a very different problem. I've created an issue
here: http://tracker.ceph.com/issues/16610
Thanks for the report and debug log!
standby
>
> Now after upgrade start and next mon restart, active monitor falls with
> "assert(info.state == MDSMap::STATE_STANDBY)" (even without running mds) .
This is the first time you've upgraded your pool to Jewel, right?
Straight f
f
> src/client/Client.cc from 9.2.0 to 10.2.2)
The locks were missing in 9.2.0. There were probably instances of the
segfault unreported/unresolved.
> raise is actually too fast, so ceph-fuse segfaults before the OOM Killer can
> kill it.
It's possible, but we have no evidence yet that ceph-fuse is using up
all the memory on those machines, right?
anting ceph-mon.target to automatically be enabled on package
install? That doesn't sound good to me but I'm not familiar with
Ubuntu's packaging rules. I would think the sysadmin must enable the
services they install themselves.
Lua module tree into RADOS. Users would install locally and then
upload the tree through some tool.
m rados classes in C++
> might be the best approach for this for now?
FYI, since you are writing a book: Lua is not an acronym:
https://www.lua.org/about.html#name
but I'm
not sure if that's normally recommended.]
Thanks for your writeup!
at a write is now 3*
> journaled: 1* by Ceph, and 2* by ZFS. Which means that the used
> bandwidth to the SSDs is double of what it could be.
>
> Had some discussion about this, but disabling the Ceph journal is not
> just setting an option. Although I would like to test performan
admin_socket: exception getting command descriptions: [Errno 2] No such file
> or directory
>
> I am guessing there is a path set up incorrectly somewhere, but I do not
> know where to look.
You need to run the command on the machine where the daem
t will slow down your client.
o feel free to upgrade to that instead.
But tried that too. Same results.
>
>
>
> $ dd if=/dev/zero of=/mnt/c/testfile bs=100M count=10 oflag=direct
This looks like your problem: don't use oflag=direct. That will cause
CephFS to do synchronous I/O at great cost to performance in order to
avoid buffering
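For comparison, a buffered version of that command might look like the following sketch (the path and block sizes are illustrative; `conv=fsync` is an addition here so the data is still flushed to stable storage before `dd` exits):

```shell
# Buffered write: the CephFS client can batch these writes in its cache
# instead of doing a synchronous round trip per block.
# conv=fsync still forces a flush before dd exits. Path is illustrative.
dd if=/dev/zero of=/tmp/testfile bs=4M count=25 conv=fsync
```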
is an expected?
Perhaps that is the bandwidth limit of your local device rsync is reading from?
stands for "does not exist", so the MDS is complaining that it has
> been removed from the mdsmap.
>
> The message could definitely be better worded!
Tracker ticket: http://tracker.ceph.com/issues/20583
to look, and
then fails a doc check. A developer must comment on the PR to say it
passes documentation requirements before the bot changes the check to
pass.
This addresses all three points in an automatic way.
a quota of 100MB?
I don't have a cluster to check this on now but perhaps because a
sparse file (you wrote all zeros) does not consume its entire file
size in the quota (only what it uses). Retry with /dev/urandom.
(And the usual disclaimer: quotas only work with libcephfs/ceph-fuse.
The kernel cl
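The sparse-file effect itself is easy to demonstrate on any local filesystem; this sketch (paths illustrative, and only an analogy for how CephFS quota accounting may behave) contrasts apparent size with allocated blocks:

```shell
# A sparsely created file reports its full apparent size but allocates
# almost no blocks; a file of random data allocates all of them.
# Paths are illustrative.
truncate -s 100M /tmp/sparse.bin
dd if=/dev/urandom of=/tmp/dense.bin bs=1M count=10 status=none
stat -c '%n: %s bytes, %b blocks' /tmp/sparse.bin /tmp/dense.bin
```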
> the immediate recommended course of action? Downgrade or wait for the
> 10.2.9 ?
I'm not aware of any changes that would make downgrading back
to 10.2.7 a problem, but the safest thing to do would be to replace the
v10.2.8 ceph-mds binaries with the v10.2.7 binary. If that's not
prac
nd backups before trying such a procedure.
hing in CephFS first.
To me, this looks like: http://tracker.ceph.com/issues/17858
Fortunately you should only need to upgrade to 10.2.6 or 10.2.7 to fix this.
HTH,
Looks like: http://tracker.ceph.com/issues/17236
The fix is in v10.2.6.
nough to be evicted.
> We were able to then reboot clients (RHEL 7.4) and have them re-connect
> to the file system.
This looks like an instance of:
http://tracker.ceph.com/issues/21070
Upcoming v12.2.1 has the fix. Until then, you will need to apply the
rently possible but we are thinking about changes which
would allow multiple ceph file systems to use the same data pool by
having each FS work in a separate namespace. See also:
http://tracker.ceph.com/issues/15066
Support for CephFS and RBD using the same pool may follow that.
your
original mail that it appears you're using multiple active metadata
servers? If so, that's not stable in Jewel. You may have tripped on
one of many bugs fixed in Luminous for that configuration.
of quotas?
Adding quota support to the kernel is one of our priorities for Mimic.
e transitioning right now, a number of
> machines still auto-mount users home directories from that nfsd.
You need to try a newer kernel as there have been many fixes since 4.4
which probably have not been backported to your distribution's kernel.
ntly. It may be related to [1]. Are you running out of
memory on these machines?
[1] http://tracker.ceph.com/issues/17517
by default:
https://github.com/ceph/ceph/pull/17925
I suggest setting that config manually to false on all of your clients
and ensuring each client can remount itself to trim dentries (i.e. it's
being run as root or with sufficient capabilities), which is a
fallback mechanism.
On Thu, Dec 14, 2017 at 4:44 PM, Webert de Souza Lima
<webert.b...@gmail.com> wrote:
> Hi Patrick,
>
> On Thu, Dec 14, 2017 at 7:52 PM, Patrick Donnelly <pdonn...@redhat.com>
> wrote:
>>
>>
>> It's likely you're a victim of a kernel backport that removed a
db(mds.0): Behind on trimming (36252/30)max_segments: 30,
> num_segments: 36252
See also: http://tracker.ceph.com/issues/21975
You can try doubling (several times if necessary) the MDS configs
`mds_log_max_segments` and `mds_log_max_expiring` to make the MDS
trim its journal more aggressively.
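For example, assuming the stock defaults (30 segments, 20 expiring), a first doubling in ceph.conf might look like this sketch; the values are illustrative starting points, not tuned recommendations:

```ini
[mds]
# Doubled from the assumed defaults (30 / 20); raise further if the
# "Behind on trimming" warning persists.
mds_log_max_segments = 60
mds_log_max_expiring = 40
```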
t
cannot obtain the necessary locks. No metadata is lost. No
inconsistency is created between clients. Full availability will be
restored when the lost ranks come back online.
nfigured limit.
If the cache size is larger than the limit (use `cache status` admin
socket command) then we'd be interested in seeing a few seconds of the
MDS debug log with higher debugging set (`config set debug_mds 20`).
ng that limit. But, the mds process is using over 100GB RAM in my
> 128GB host. I thought I was playing it safe by configuring at 80. What other
> things consume a lot of RAM for this process?
>
> Let me know if I need to create a new thread.
The cache size measurement is imprecise pre
crossing quota boundaries (I think).
It may be possible to allow the rename in the MDS and check quotas
there. I've filed a tracker ticket here:
http://tracker.ceph.com/issues/24305
load for you without trying to micromanage things
via pins. You can use pinning to isolate metadata load from other
ranks as a stop-gap measure.
[1] https://github.com/ceph/ceph/pull/21412
ecome standby?
>
> I've run ceph fs set cephfs max_mds 2 which set the max_mds from 3 to 2 but
> has no effect on my running configuration.
http://docs.ceph.com/docs/luminous/cephfs/multimds/#decreasing-the-number-of-ranks
Note: the behavior is changing in Mimic to be automatic after red
d/
[3] https://github.com/ceph/ceph/pull/22445/files
e were still able to access ceph folder and everything seems to
> be running.
It depends(tm) on how the metadata is distributed and what locks are
held by each MDS.
Standbys are not optional in any production cluster.
by throwing an error or becoming unavailable -- when the
standbys exist to make the system available.
There's nothing to enforce. A warning is sufficient to tell the operator
that (a) they didn't configure any standbys or (b) MDS daemon
processes/boxes are going away and not coming back as standbys (i.e
ing that would be a godsend!
Thanks for keeping the list apprised of your efforts. Since this is so
easily reproduced for you, I would suggest that you next get higher
debug logs (debug_mds=20/debug_ms=1) from the MDS. And, since this is
a segmentation fault, a backtrace with debug symbols from gdb
On Fri, Jan 5, 2018 at 3:54 AM, Stefan Kooman <ste...@bit.nl> wrote:
> Quoting Patrick Donnelly (pdonn...@redhat.com):
>>
>> It's expected but not desired: http://tracker.ceph.com/issues/21402
>>
>> The memory usage tracking is off by a constant factor. I'd sugg
402
The memory usage tracking is off by a constant factor. I'd suggest
just lowering the limit so it's about where it should be for your
system.
ug ms = 1`. Feel free to create a tracker ticket and use
ceph-post-file [1] to share logs.
[1] http://docs.ceph.com/docs/hammer/man/8/ceph-post-file/
ration.
[1] http://tracker.ceph.com/issues/22802
[2] http://tracker.ceph.com/issues/22801
3: (Client::_ll_drop_pins()+0x67) [0x558336e5dea7]
> 4: (Client::unmount()+0x943) [0x558336e67323]
> 5: (main()+0x7ed) [0x558336e02b0d]
> 6: (__libc_start_main()+0xea) [0x7efc7a892f2a]
> 7: (_start()+0x2a) [0x558336e0b73a]
> ceph-fuse [25154]: (33) Numerical argument out
e a good client configuration
> like cache size, and maybe something to lower the metadata servers load.
>>
>> ##
>> [mds]
>> mds_cache_size = 25
>> mds_cache_memory_limit = 792723456
You should only specify one of those. See also:
http://docs.ceph.com/docs/master
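A minimal sketch of the corrected section, keeping only the newer memory-based limit (the value is the one from the quoted config):

```ini
[mds]
# Keep only the memory-based limit; mds_cache_size (an inode-count
# limit) is the older knob and should not be set alongside it.
mds_cache_memory_limit = 792723456
```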
my server files are the most of
> time read-only so MDS data can be also cached for a while.
The MDS issues capabilities that allow clients to coherently cache metadata.
ever, if you don't have any quotas then there is
no added load on the client/mds.
e firmware may retire bad blocks
and make them inaccessible. It may not be possible for the device to
physically destroy those blocks either even with SMART directives. You
may be stuck with an industrial shredder to be compliant if the rules
are stringen
nux v4.17+.
See also: https://github.com/ceph/ceph/pull/23728/files
. How fast does your MDS reach 15GB?
Your MDS cache size should be configured to 1-8GB (depending on your
preference) so it's disturbing to see you set it so low.
On Thu, Jul 12, 2018 at 3:55 PM, Patrick Donnelly wrote:
>> Recommends fixing error by hand. Tried running deep scrub on pg 2.4, it
>> completes but still have the same issue above
>>
>> Final option is to attempt removing mds.ds27. If mds.ds29 was a standby and
>>
> has data it should become live. If it was not
> I assume we will lose the filesystem at this point
>
> Why didn't the standby MDS failover?
>
> Just looking for any way to recover the cephfs, thanks!
I think it's time to do a scrub on the PG containing that object.
quot;default" filesystem) if called.
>
> The multi-fs stuff went in for Jewel, so maybe we should think about
> removing the old commands in Mimic: any thoughts Patrick?
These commands have already been removed (obsoleted) in master/Mimic.
You can no longer use
ges to directory inodes.
Traditionally, modifying a file (truncate, write) does not involve
metadata changes to a directory inode.
Whether that is the intended behavior is a good question. Perhaps it
should be changed?
On Wed, Mar 14, 2018 at 5:48 AM, Lars Marowsky-Bree <l...@suse.com> wrote:
> On 2018-02-28T02:38:34, Patrick Donnelly <pdonn...@redhat.com> wrote:
>
>> I think it will be necessary to reduce the actives to 1 (max_mds -> 1;
>> deactivate other ranks), shutdown st
; (allows_multimds() || in.size() >1)) && latest_scrubbed_version <
> mimic
This sounds like the right approach to me. The mons should also be
capable of performing the same test and raising a health error that
pre-Mimic MDSs must be started and the number of actives be reduced to
1.
On Thu, Apr 12, 2018 at 5:05 AM, Mark Schouten <m...@tuxis.nl> wrote:
> On Wed, 2018-04-11 at 17:10 -0700, Patrick Donnelly wrote:
>> No longer recommended. See:
>> http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-
>> cluster
>
> Shouldn't d
of having to copy it.
Hardlink handling for snapshots will be in Mimic.
fs/upgrading/#upgrading-the-mds-cluster
wer than those on the test MDS VMs.
As Dan said, this is simply a spurious log message. Nothing is being
exported. This will be fixed in 12.2.6 as part of several fixes to the
load balancer:
https://github.com/ceph/ceph/pull/21412/commits/cace918dd044b979cd0d54b16a6296094c8a9f90
o": 7510,
> "traverse_lock": 86236,
> "load_cent": 144401980319,
> "q": 49,
> "exported": 0,
> "exported_inodes": 0,
> "imported": 0,
> "imported_inodes": 0
> }
> }
Can you also share `ceph daemon mds.2 cache status`, the full `ceph
daemon mds.2 perf dump`, and `ceph status`?
Note [1] will be in 12.2.5 and may help with your issue.
[1] https://github.com/ceph/ceph/pull/20527
t
yet exist for NFS-Ganesha+CephFS outside of Openstack Queens
deployments.
olders, example:
>
> /vol1
> /vol2
> /vol3
> /vol4
>
> At the moment the root of the cephfs filesystem is mounted to each web
> server. The query is would there be a benefit to having separate mount
> points for each folder like above?
Performance benefit? No. Data isol
the inode count in cache) by
collecting a `perf dump` via the admin socket. Then you can begin to
find out what's consuming all of the MDS memory.
Additionally, I concur with John on digging into why the MDS is
missing heartbeats by collecting debug logs (`debug mds = 15`) at that
time. It may also shed light on the issue.
Thanks for performing the test and letting us know the results.
ove test you're using about 1KB per inode (file).
Using that you can extrapolate how much space the data pool needs
based on your file system usage. (If all you're doing is filling the
file system with empty files, of course you're going to need an
unusually large metadata pool.)
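As a back-of-envelope sketch of that extrapolation (the file count and replication factor below are assumptions for illustration, not values from the thread):

```python
# Metadata pool sizing, assuming ~1 KB of metadata per inode as
# observed in the test above. files and replication are assumptions.
files = 100_000_000          # expected number of files
per_inode = 1024             # bytes of metadata per inode (from the test)
replication = 3              # metadata pool replica count (assumed)
raw_bytes = files * per_inode * replication
print(f"{raw_bytes / 2**40:.1f} TiB raw metadata capacity needed")
```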
-note, once I exit the container (and hence close the mount
> namespace), the "old" helper is finally freed.
Once the last mount point is unmounted, FUSE will destroy the userspace helper.
[1]
http://docs.ceph.com/docs/master/rados/configuration/ceph-conf/?highlight=configura
will be necessary to reduce the actives to 1 (max_mds -> 1;
deactivate other ranks), shutdown standbys, upgrade the single active,
then upgrade/start the standbys.
Unfortunately this didn't get flagged in upgrade testing. Thanks for
the report Dan.
On Mon, Feb 26, 2018 at 7:59 AM, Patrick Donnelly <pdonn...@redhat.com> wrote:
> It seems in the above test you're using about 1KB per inode (file).
> Using that you can extrapolate how much space the data pool needs
s/data pool/metadata pool/
e to 13.2.2 ?
>
> or better to wait to 13.2.3 ? or install 13.2.1 for now ?
Upgrading to 13.2.1 would be safe.
tivation. To get
more help, you need to describe your environment, version of Ceph in
use, relevant log snippets, etc.
iping via layouts:
http://docs.ceph.com/docs/master/cephfs/file-layouts/
for the detailed notes. It looks like the MDS is stuck
somewhere it's not even outputting any log messages. If possible, it'd
be helpful to get a coredump (e.g. by sending SIGQUIT to the MDS) or,
if you're comfortable with gdb, a backtrace of any threads that look
suspicious (e.g. not waiting on a futex
are also
affected but do not require immediate action. A procedure for handling
upgrades of fresh deployments from 13.2.2 to 13.2.3 will be included
in the release notes for 13.2.3.
he NFS server is expected to have a lot of load, breaking out the
exports can have a positive impact on performance. If there are hard
links, then the clients associated with the exports will potentially
fight over capabilities which will add to request latency.)
On Thu, Jan 17, 2019 at 2:44 AM Dan van der Ster wrote:
>
> On Wed, Jan 16, 2019 at 11:17 PM Patrick Donnelly wrote:
> >
> > On Wed, Jan 16, 2019 at 1:21 AM Marvin Zhang wrote:
> > > Hi CephFS experts,
> > > From document, I know multi-fs within a cluste
ng 2 kernel mounts on CentOS 7.6
It's unlikely this changes anything unless you also split the workload
into two. That may allow the kernel to do parallel requests?
You either need to accept that reads/writes will land on different data
centers, ensure the primary OSD for a given pool is always in the desired
data center, or use some other non-Ceph solution, which will have either
expensive, eventual, or false consistency.
On Fri, Nov 16, 2018, 10:07 AM Vlad Kopylov This
the OSDs):
https://tracker.ceph.com/issues/35848
> confused about it.
How did you restart the MDSs? If you used `ceph mds fail` then the
executable version (v12.2.8) will not change.
Also, the monitor failure requires updating the monitor to v12.2.9.
What version are the mons?
it a single lock per MDS or is it a
> global distributed lock for all MDSs?
per-MDS
ready running v13.2.2,
> >>upgrading to v13.2.3 does not require special action.
>
> Any special action for upgrading from 13.2.1 ?
No special actions for CephFS are required for the upgrade.
a "daemon mds.blah
> cache drop". The performance bump lasts for quite a long time--far longer
> than it takes for the cache to "fill" according to the stats.
What version of Ceph are you running? Can you expand on what this
performance im
over omap (outside of ease of
> use in the API), correct?
You may prefer xattrs on bluestore if the metadata is small, and you
will need xattrs if you must store the metadata on an EC pool: omap is
not supported on EC pools.
> such opposing concepts that it is simply not worth the effort?
You should not have had issues growing to that number of files. Please
post more information about your cluster including configuration
changes and `ceph osd df`.
gt; same number of objects was created in the data pool. So the raw usage is
> again at more than 500 GB.
Even for inline files, there is one object created in the data pool to
hold backtrace information (an xattr of the object) used for hard
links and disaster recovery.
d ops in flight on the MDSes but all ops that are printed are
> finished in a split second (duration: 0.000152), flag_point": "acquired
> locks".
I believe you're looking at the wrong "ops" dump. You want to check
"objecter_requests".
with that number of clients and
mds_cache_memory_limit=17179869184 (16GB).
oes inform the monitors if it has been shut down. If you pull
the plug or SIGKILL, it does not. :)
e
metadata pool on a separate set of OSDs.
Also, you're not going to saturate a 1.9TB NVMe SSD with one OSD. You
must partition it and set up multiple OSDs. This ends up being positive
for you so that you can put the metadata pool on its own set of OSDs.
[1] https://ceph.com/
.data stat 13c.
cephfs.a.data/13c. mtime 2019-02-18 14:02:11.00, size 211224
So the object holding "grep" still only uses ~200KB and not 4MB.
as a median, 32ms average is still on the high side,
> but way, way better.
I'll use this opportunity to point out that serial archive programs
like tar are terrible for distributed file systems. It would be
awesome if someone multithreaded tar or extended it for asynchronous
I/O.
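Until such a tool exists, one workaround is to drive many file copies in parallel so the file system always has several requests in flight; a rough sketch using GNU xargs (all paths illustrative):

```shell
# Copy a tree with 4 parallel workers instead of one serial tar stream.
# On a distributed FS this keeps multiple metadata/data requests in
# flight at once. Paths are illustrative.
mkdir -p /tmp/srctree/sub /tmp/dsttree
for i in $(seq 1 20); do echo data > "/tmp/srctree/sub/f$i"; done
cd /tmp/srctree
find . -type f -print0 | xargs -0 -P4 -n8 cp --parents -t /tmp/dsttree
```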
tly think something went wrong.
If you don't mind seeing those errors and you're using 1 active MDS,
then don't worry about it.
Good luck!
On Wed, May 15, 2019 at 5:05 AM Lars Täuber wrote:
> is there a way to migrate a cephfs to a new data pool like it is for rbd on
> nautilus?
> https://ceph.com/geen-categorie/ceph-pool-migration/
No, this isn't possible.