. so it's
fairly regular.
the raid5 sets are 12TB so i was hoping to be able to fix the problem,
rather than zapping the md and recreating from scratch. I was also
wondering if there is something fundamentally wrong about running osd's
on software md raid5 devices.
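before deciding between repair and rebuild i would at least look at the state
of the md set. a minimal sketch (the device and member names below are just
assumptions):

cat /proc/mdstat                  # overview of all arrays and any rebuild progress
mdadm --detail /dev/md127         # array state, failed and spare members
mdadm --examine /dev/sd[abcd]1    # per-member superblock info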
kind regards
Ronny Aasen
thanks for your comments. answers inline.
On 05/09/16 09:53, Christian Balzer wrote:
Hello,
On Mon, 9 May 2016 09:31:20 +0200 Ronny Aasen wrote:
hello
I am running a small lab ceph cluster consisting of 6 old used servers.
That's larger than quite a few production deployments
machine. Saves you 2 slots for osd's and they are quite
reliable. you could even use 2 SD cards if your machine has the
internal SD slot:
http://www.dell.com/downloads/global/products/pedge/en/poweredge-idsdm-whitepaper-en.pdf
kind regards
Ronny Aasen
and see if there is any
way to salvage this osd?
And is there any information i should gather before i scratch the
filesystem and recreate it? perhaps there is some valuable insight into
what's going on?
kind regards
Ronny Aasen
running debian jessie + hammer 0.94.7
# uname -a
Linux ceph
://ceph.com/planet/ceph-manually-repair-object
so the scrub errors are gone now.
kind regards
Ronny Aasen
On 04. sep. 2016 00:04, Brad Hubbard wrote:
There should actually be "[ERR]" messages in the osd logs some time after
"deep-scrub starts". Can we see those and a pg query for
On 06. sep. 2016 00:58, Brad Hubbard wrote:
On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote:
> Hello
>
> I have an osd that regularly dies on io, especially scrubbing.
> normally i would assume a bad disk, and replace it. but then i normally see
> messages in dmesg a
On 06.09.2016 14:45, Ronny Aasen wrote:
On 06. sep. 2016 00:58, Brad Hubbard wrote:
On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote:
> Hello
>
> I have an osd that regularly dies on io, especially scrubbing.
> normally i would assume a bad disk, and replace it. but the
controller on this list?
https://wiki.debian.org/LinuxRaidForAdmins
this controller software is often needed for troubleshooting, and can
give status and be monitored as well.
kind regards
Ronny Aasen
intel 3500
anyone have a clue what could be wrong?
kind regards
Ronny Aasen
-- log debug_filestore=10 --
-19> 2016-09-12 10:31:08.070947 7f8749125880 10
filestore(/var/lib/ceph/osd/ceph-8) getattr
1.fdd_head/1/1df4bfdd/rb.0.392c.238e1f29.002bd134/head '_' = 266
-18>
Ronny Aasen
On 20.07.2016 15:52, M Ranga Swami Reddy wrote:
Do we have any tool to monitor the OSDs usage with help of UI?
Thanks
Swami
[snip]
7p1
where /dev/md127p1 is the xfs partition for the osd.
good luck
Ronny Aasen
the non-starting osd's
kind regards
Ronny Aasen
On 12. sep. 2016 13:16, Ronny Aasen wrote:
after adding more osd's and having a big backfill running, 2 of my osd's
keep stopping.
We also recently upgraded from 0.94.7 to 0.94.9 but i do not know if
that is related.
the log says:
0> 2016
this might
have changed and actually work for you.
Kind regards
Ronny Aasen
On 05. okt. 2016 21:52, Dan Jakubiec wrote:
Thanks Ronny, I am working with Reed on this problem.
Yes, something is very strange. Docs say osd_max_backfills defaults to
10, but when we examined the run-time configuration using
tch of them contain the
broken shard (or perhaps all 3 of them?).
i am a bit reluctant to delete on all 3. I have 4+2 erasure coding
(erasure size 6, min_size 4), so finding out which one is bad would be
nice.
hope someone has an idea how to progress.
kind
added debug journal = 20 and got some new lines in the log, which i added
to the end of this email.
can any of you make something out of them?
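for reference, this is roughly how i raised the debug levels (osd.8 is just an
assumed example id here; the values can also go in ceph.conf under [osd]
followed by a restart):

ceph tell osd.8 injectargs '--debug-journal 20 --debug-filestore 10'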
kind regards
Ronny Aasen
On 18.09.2016 18:59, Kostis Fardelas wrote:
If you are aware of the problematic PGs and they are exportable, then
ceph
to export/import to a working osd.
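roughly, with ceph-objectstore-tool, it looks like the sketch below. the osd
ids, the pgid and the file path are assumptions, and both the source and the
destination osd must be stopped while the tool runs:

# on the broken osd: export the pg to a file
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-8 \
    --journal-path /var/lib/ceph/osd/ceph-8/journal \
    --pgid 1.fdd --op export --file /tmp/1.fdd.export
# on a healthy, stopped osd: import it, then start that osd again
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
    --journal-path /var/lib/ceph/osd/ceph-3/journal \
    --op import --file /tmp/1.fdd.export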
good luck
Ronny Aasen
in the documentation
- ceph auth del osd.x
- ceph osd crush remove osd.x
- ceph osd rm osd.x
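spelled out as one sequence it looks roughly like this (a sketch; osd.x and x
are placeholders, and on sysvinit hosts the stop command differs):

ceph osd out osd.x              # stop new data going to it, wait for rebalance
systemctl stop ceph-osd@x       # or: /etc/init.d/ceph stop osd.x
ceph auth del osd.x
ceph osd crush remove osd.x
ceph osd rm osd.x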
PS: if your cluster stops operating when an osd goes down, you have
something else fundamentally wrong. you should look into that as well, as
a separate case.
kind regards
Ronny Aasen
/gYEmYOcuil8ANG2
i still have the osd available so i can try starting it again with other
debug values if that is valuable.
i hope someone can shed some light on why this osd crashes.
kind regards
Ronny Aasen
On 06. okt. 2016 13:41, Ronny Aasen wrote:
hello
I have a few osd's in my cluster that are regularly crashing.
[snip]
of course having 3 osd's dying regularly is not good for my health. so i
have set noout, to avoid heavy recoveries.
googling this error message gives exactly 1 hit:
https
On 19. okt. 2016 13:00, Ronny Aasen wrote:
On 06. okt. 2016 13:41, Ronny Aasen wrote:
hello
I have a few osd's in my cluster that are regularly crashing.
[snip]
of course having 3 osd's dying regularly is not good for my health. so i
have set noout, to avoid heavy recoveries.
googling
s where i am stuck.
i have tried stopping and starting the 3 osd's but that did not have any
effect.
Anyone have any advice on how to proceed?
full output at: http://paste.debian.net/hidden/be03a185/
this is hammer 0.94.9 on debian 8.
kind regards
Ronny Aasen
,
or by adding more nodes.
kind regards
Ronny Aasen
On 01.11.2016 20:14, Marcus Müller wrote:
> Hi all,
>
> i have a big problem and i really hope someone can help me!
>
> We have been running a ceph cluster for a year now. Version is: 0.94.7
> (Hammer)
> Here is some info:
>
> Our o
thanks for the suggestion.
is a rolling reboot sufficient? or must all osd's be down at the same
time?
the former is no problem; the latter takes some scheduling.
Ronny Aasen
On 01.11.2016 21:52, c...@elchaka.de wrote:
Hello Ronny,
if it is possible for you, try to Reboot all OSD Nodes.
I had
On 16.03.2017 08:26, Youssef Eldakar wrote:
Thanks for the reply, Anthony, and I am sorry my question did not give
sufficient background.
This is the cluster behind archive.bibalex.org. Storage nodes keep archived
webpages as multi-member GZIP files on the disks, which are formatted using XFS
/bluestore..
kind regards
Ronny Aasen
?
kind regards
Ronny Aasen
is the recommended way to connect clients to the public network?
kind regards
Ronny Aasen
Aasen
On 26. april 2017 19:46, Alexandre DERUMIER wrote:
you can try the proxmox stretch repository if you want
http://download.proxmox.com/debian/ceph-luminous/dists/stretch/
- Original Message -
From: "Wido den Hollander" <w...@42on.com>
To: "ceph-users" <ceph-
ple servers
writing to the same metadata area, the same journal area, and generally
shitting over each other. luckily i think most modern filesystems would
detect that the FS is mounted somewhere else and prevent you from
mounting it again without big fat warnings.
kind
you write you had all pg's exported except one. so i assume you have
injected those pg's into the cluster again using the method linked a few
times in this thread. How did that go? were you successful in
recovering those pg's?
kind regards.
Ronny Aasen
On 15. sep. 2017 07:52, hjcho616
On 14. sep. 2017 11:58, dE . wrote:
Hi,
I got a ceph cluster where I'm getting an OSD_OUT_OF_ORDER_FULL
health error, even though it appears that it is in order --
full_ratio 0.99
backfillfull_ratio 0.97
nearfull_ratio 0.98
These don't seem like a mistake to me but ceph is complaining --
on multiple servers and use a cluster filesystem like ocfs or gfs
that is made for this kind of solution.
kind regards
Ronny Aasen
since it has crush weight 0
it should drain out.
- if that works, verify the injection drive is drained, stop it and
remove it from ceph. zap the drive.
as i said, this is all guesstimates, so your mileage may vary.
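to verify it is actually drained before pulling it, something like this should
do (a sketch; X is the osd id, and ceph pg ls-by-osd needs a reasonably recent
version):

ceph osd df tree          # the drained osd should show close to 0 use
ceph pg ls-by-osd X       # should list no pgs mapped to it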
good luck
Ronny Aasen
ceph
/ceph-manually-repair-object/
good luck
Ronny Aasen
On 20.09.2017 22:17, hjcho616 wrote:
Thanks Ronny.
I decided to try to tar everything under the current directory. Is this
the correct command for it? Is there any directory we do not want in the
new drive? commit_op_seq, meta, nosnap, omap?
tar
clone and fast synthetic full backups
when using refs on rbd on ceph.
i of course have other backup solutions, but this is specific for vmware
backups.
possible?
kind regards
Ronny Aasen
strange that no osd is acting for your pg's
can you show the output from
ceph osd tree
kind regards
Ronny Aasen
On 13.10.2017 18:53, dE wrote:
Hi,
I'm running ceph 10.2.5 on Debian (official package).
It can't seem to create any functional pools --
ceph health detail
HEALTH_ERR 64 pgs
Ronny Aasen
. start by doing
one pg at a time. and once you get the hang of the method you can do
multiple pg's at the same time.
good luck
Ronny Aasen
On 11. sep. 2017 06:51, hjcho616 wrote:
It took a while. It appears to have cleaned up quite a bit... but still
has issues. I've been seeing bel
On 13. sep. 2017 07:04, hjcho616 wrote:
Ronny,
Did a bunch of ceph pg repair pg# and got the scrub errors down to 10...
well, it was 9; trying to fix one made it 10.. waiting for it to fix (I did
that noout trick as I only have two copies). 8 of those scrub errors
look like they would need data
oping it doesn't impact the data much... went ahead and allowed those corrupted values. I was able to export osd.4 with journal!
congratulations and well done :)
just imagine trying to do this on $vendor's proprietary blackbox...
Ronny Aasen
ll install that specific version.
example in my case:
apt install ceph=10.2.5-7.2
will downgrade to the previous version.
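to see which versions the repository actually offers before picking one, a
quick sketch:

apt-cache madison ceph      # every version/repo combination apt knows about
apt-cache policy ceph       # installed version vs. current candidate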
kind regards
Ronny Aasen
On 29.09.2017 15:40, Kashif Mumtaz wrote:
Dear Stefan,
Thanks for your help. You are right. I was missing "apt update" after
adding the repo.
on that rbd
after all the ceph problems are repaired.
good luck
Ronny Aasen
write to a subdirectory on the RBD. so if it is not mounted, the
directory will be missing, and you get a no such file error.
Ronny Aasen
On 25.08.2017 18:04, David Turner wrote:
Additionally, solely testing if you can write to the path could give a
false sense of security if the path
ood luck
Ronny Aasen
recovered lost
object using these instructions
http://ceph.com/geen-categorie/incomplete-pgs-oh-my/
I would start by renaming the osd's log file, doing a single try at
starting the osd, and posting that log. have you done anything to the
osd's that could make them not run?
kind regards
Ronny Aasen
des
to cover for the lost one. the more nodes you have, the less the impact
of a node failure is, and the less spare room is needed. for a 4-node
cluster you should not fill more than 66% if you want to be able to
self-heal + operate.
good luck
Ronny Aa
nts.
http://www.toad.com/gnu/sysadmin/index.html#ddrescue
kind regards
Ronny Aasen
Steve Taylor | Senior Software Engineer | StorageCraft Technology
Corporation <https://storagecraft.com>
380 Data
pg at a time. and you should only recover pg's that contain
unfound objects. there are really only 103 unfound objects that you need
to recover.
once the recovery is complete you can wipe the functioning recovery
drive, and add it to the cluster as a new osd.
kind regards
Ronny Aasen
is still on disk.
or
I would set crush weight to 0 and drain all osd's off the node before
reinstalling. here the backfill will take longer, since you actually have
to refill the disks.
kind regards
Ronny Aasen
nd regards
Ronny Aasen
on 14 nodes means each and every node is hit on each write.
kind regards
Ronny Aasen
On 23. okt. 2017 21:12, Jorge Pinilla López wrote:
I have one question: what can or can't a cluster do when working in degraded
mode?
With K=10 + M=4, if one of my OSD nodes fails it will start working
read
http://ceph.com/geen-categorie/ceph-erasure-coding-overhead-in-a-nutshell/
kind regards
Ronny Aasen
not have 10Gbps on
everything.
kind regards
Ronny Aasen
.
basically it means: if there are fewer than 2 copies, do not accept writes.
whether you want this depends on your requirements:
is it a bigger disaster to be unavailable for a while, or to have to
restore from backup?
kind regards
Ronny Aasen
that have developed
corruptions/inconsistencies with the cluster
kind regards
Ronny Aasen
so the host with the least space is the
limiting factor.
check
ceph osd df tree
to see how it looks.
kind regards
Ronny Aasen
y "correct" :)
with min_size=2 the cluster would not accept a write unless 2 disks
accepted the write.
kind regards
Ronny Aasen
d have the space for it.
good luck
Ronny Aasen
if you have a global setting in ceph.conf it will only affect the
creation of new pools. i recommend using the defaults,
size:3 + min_size:2.
also check your existing pools to make sure they have min_size=2.
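checking and fixing an existing pool looks roughly like this (a sketch; "rbd"
is just an example pool name):

ceph osd pool ls detail        # shows size and min_size per pool
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2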
kind regards
Ronny Aasen
On 15.12.2017 23:00, James Okken wrote:
This whole effort went extremely well
db and wal partitions, and for extracting db data from block into a
separate block.db partition.
dd'ing block.db would probably work when you need to replace a worn-out ssd
drive, but not so much if you want to deploy a separate block.db from a
bluestore made without block.db
for current devices.
kind regards
Ronny Aasen
for this to happen you have to lose another osd before backfilling
is done.
Thank You! This clarifies it!
Denes
On 12/05/2017 03:32 PM, Ronny Aasen wrote:
On 05. des. 2017 10:26, Denes Dolhay wrote:
Hi,
This question popped up a few times already under filestore and
bluestore too
ewrite dovecot index/cache) at the same time as a user
accesses imap and writes to dovecot index/cache.
kind regards
Ronny Aasen
more eyeballs
willing to give it a look.
good luck and kind regards
Ronny Aasen
and insert it as a new fresh OSD.
but these 2 lines from your pastebin are a bit over the top. how you can
have this many degraded objects based on only 289090 objects is hard to
understand.
recovery 20266198323167232/289090 objects degraded (7010342219781.809%)
37154696925806625 scrub errors
i have not seen
repair-object/ might have
recovered that object for you.
kind regards
Ronny Aasen
On 26. okt. 2017 04:38, dani...@igb.illinois.edu wrote:
Hi Ronny,
From the documentation, I thought this was the proper way to resolve the
issue.
Dan
On 24. okt. 2017 19:14, Daniel Davidson wrote:
Our c
ing object was for. Did you do much troubleshooting before
jumping to this command, so you were certain there were no other
non-dataloss options?
kind regards
Ronny Aasen
... you can remove (drain or
destroy) - repartition - add the osd and let ceph backfill the drive.
or you can just make a new osd with the remaining disk space.
since the space increase will change the crushmap there is no way to
avoid some data movement, anyway.
kind regards
Ronny Aasen
.
http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way
kind regards
Ronny Aasen
On 04. juni 2018 06:41, Charles Alva wrote:
Hi Guys,
When will the Ceph Mimic packages for Debian Stretch be released? I could
not find the packages even after changing the sources.list.
I am also eager to test mimic on my ceph
debian-mimic only contains ceph-deploy atm.
kind regards
Ronny
-maintainers archives are not public.
-Joao
The debian-gcc list is public:
https://lists.debian.org/debian-gcc/2018/04/msg00137.html
Ronny Aasen
ard
https://ceph.com/community/new-luminous-crush-device-classes/
kind regards
Ronny Aasen
.
the client will pick up this mtu value and store it
in /proc/sys/net/ipv6/conf/eth0/mtu
if /proc/sys/net/ipv6/conf/ens32/accept_ra_mtu is enabled.
you can perhaps change what mtu is advertised on the link by altering
your router or whatever device advertises the RA's.
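if the RA's come from a linux box running radvd, the advertised mtu is the
AdvLinkMTU option. a minimal sketch (interface name, prefix and mtu value are
just assumptions here):

# /etc/radvd.conf
interface eth0
{
    AdvSendAdvert on;
    AdvLinkMTU 9000;           # mtu the clients on this link will pick up
    prefix 2001:db8::/64
    {
    };
};

# on the client, check whether the kernel accepts the advertised mtu:
sysctl net.ipv6.conf.eth0.accept_ra_mtu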
kind regards
Ronny Aasen
ze specify how many
osd's must ack the write before the write is acked to the client.
since failure is most likely when disks are under stress (e.g. during a rebuild),
reducing min_size is just asking for corruption and data loss.
kind regards
Ronny Aasen
ceph admin. since it is a web admin tool for kvm
vm's and lxd containers as well as ceph.
kind regards
Ronny Aasen
regards
Ronny Aasen
On 30.12.2017 15:41, Milanov, Radoslav Nikiforov wrote:
Performance as well - in my testing FileStore was much quicker than BlueStore.
with filestore you often have an ssd journal in front; this will often
mask/hide slow spinning-disk write performance, until the journal size
becomes the
?
kind regards
Ronny Aasen
On 25.07.2018 12:03, Surya Bala wrote:
time got reduced when the MDS from the same region became active.
Each region has an MDS. OSD nodes are in one region and the active MDS
is in another region, hence the delay.
On Tue, Jul 17, 2018 at 6:23 PM, John Spray <mailto:
reclaim freed
space.
kind regards
Ronny Aasen
for the 12.2.7 release
read: https://ceph.com/releases/12-2-7-luminous-released/
12.2.8 should bring features that can deal with the "objects are
in sync but checksums are wrong" scenario.
kind regards
Ronny Aasen
that, it does not get the
love it needs. and redhat has even stopped supporting it in deployments.
but you can use dm-cache or bcache on osd's
and/or rbd-cache on kvm clients.
good luck
Ronny Aasen
On 09.09.2018 11:20, Alex Lupsa wrote:
Hi,
Any ideas about the below ?
Thanks,
Alex
that is by design.
https://blog.widodh.nl/2015/04/protecting-your-ceph-pools-against-removal-or-property-changes/
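the flags described there are set per pool, roughly like this (a sketch; "rbd"
is just an example pool name, and mon_allow_pool_delete exists on luminous and
newer):

ceph osd pool set rbd nodelete true        # refuse deletion of this pool
ceph osd pool set rbd nosizechange true    # refuse size/min_size changes
ceph osd pool set rbd nopgchange true      # refuse pg_num/pgp_num changes
# additionally, in ceph.conf on the mons:
#   mon allow pool delete = false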
kind regards
Ronny Aasen
tes"
sounds much better than the last step being an admin checking if the
backups are good...
i try to do something similar by renaming pools to be deleted, but that
is not always the same as inactive.
kind regards
Ronny Aasen
ceph
have both redundancy and high
availability.
kind regards
Ronny Aasen
On 11.04.2018 17:42, Ranjan Ghosh wrote:
Ah, nevermind, we've solved it. It was a firewall issue. The only
thing that's weird is that it became an issue immediately after an
update. Perhaps it has sth. to do with monitor n
osd's anyway and the new ones are mostly empty, rather than
when your small osd's are full and your large disks have significant
data on them.
kind regards
Ronny Aasen
in the
configuration file.
in the next version you will probably not need initial monitors either, since
they can be discovered via DNS SRV records.
kind regards
Ronny Aasen
the checkbox to make posts announcing the new uploads
on social media YouTube decided to post it anyway. Sorry for the
inconvenience.
Kindest regards,
Leo
thanks to the presenters and yourself for your awesome work.
this is a goldmine for those of us who could not attend. :)
kind regards
Ronny Aasen
regards
Ronny Aasen
BTW: i did not know ubuntu automagically rebooted after an upgrade. you
can probably avoid that reboot somehow in ubuntu, and do the restarts of
services manually, if you wish to maintain service during the upgrade.
On 25.04.2018 11:52, Ranjan Ghosh wrote:
Thanks a lot for your
and what object is
bad during scrub.
crc is not a replacement for scrub, but a complement. it improves the
quality of the data you provide to clients, and it makes it easier for
scrub to detect errors.
kind regards
Ronny Aasen
[1] https://en.wikipedia.org/wiki/Data_degradation
hosts you
have no failure domain. but 4 hosts is the minimum sane starting point
for a regular small cluster with 3+2 pools (you can lose a node and
ceph self-heals as long as there is enough free space).
kind regards
Ronny Aasen
as free space in your cluster to be able to self-heal.
the point is that splitting the cluster hurts. and if HA is the most
important thing, then you may want to check out rbd mirror.
kind
Ronny Aasen
advantage of that in your design/pool configuration.
kind regards
Ronny Aasen
On 22.03.2018 10:53, Hervé Ballans wrote:
Le 21/03/2018 à 11:48, Ronny Aasen a écrit :
On 21. mars 2018 11:27, Hervé Ballans wrote:
Hi all,
I have a question regarding a possible scenario to put both wal and
db
nvram dies, it brings down 22 osd's at once and will be a
huge pain for your cluster (depending on how large it is...).
i would spread the db's over more devices to reduce the bottleneck and
the failure domain in this situation.
kind regards
Ronny Aasen
!
kind regards
Ronny Aasen
On 05. mars 2018 14:45, Jan Marquardt wrote:
Am 05.03.18 um 13:13 schrieb Ronny Aasen:
i had some similar issues when i started my proof of concept. i remember
the snapshot deletion especially well.
the rule of thumb for filestore, which i assume you are running, is 1GB ram
per TB of osd. so
out a solution to this? I have the same problem now.
I assume you have to download the old version manually and install it with
dpkg -i
optionally, mirror the ceph repo and build your own repo index containing
all versions.
kind regards
Ronny Aasen
ons/pools/
explains how the osd class is used to define a crush placement rule.
and then you can set the crush_rule on the pool and ceph will move the
data. No downtime needed.
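as a sketch (the rule, pool and class names below are just examples):

ceph osd crush rule create-replicated ssd-rule default host ssd   # rule limited to ssd-class osds
ceph osd pool set mypool crush_rule ssd-rule                      # move the pool onto that rule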
kind regards
Ronny Aasen
mds server's ram, so you cache as much metadata as possible.
good luck
Ronny Aasen
he to the mds server's ram, so you cache as much metadata as possible.
Yes, we're in the process of doing that - I believe we're seeing the MDS
suffering
when we saturate a few disks in the setup - and they are sharing. Thus
we'll move
the metadata as per recommendations to
On 18.09.2018 21:15, Alfredo Daniel Rezinovsky wrote:
Can anyone add me to this slack?
with my email alfrenov...@gmail.com
Thanks.
why would a ceph slack be invite-only?
Also, is the slack bridged to matrix? room id?
kind regards
Ronny Aasen