[ceph-users] One osd crashing daily, the problem with osd.50

2016-05-09 Thread Ronny Aasen
. so it's fairly regular. the raid5 sets are 12TB so i was hoping to be able to fix the problem, rather than zapping the md and recreating from scratch. I was also wondering if there was something fundamentally wrong about running osd's on software md raid5 devices. kind regards Ronny Aasen

Re: [ceph-users] One osd crashing daily, the problem with osd.50

2016-05-09 Thread Ronny Aasen
thanks for your comments. answers inline. On 05/09/16 09:53, Christian Balzer wrote: Hello, On Mon, 9 May 2016 09:31:20 +0200 Ronny Aasen wrote: hello I am running a small lab ceph cluster consisting of 6 old used servers. That's larger than quite a few production deployments

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-12 Thread Ronny Aasen
machine. Saves you 2 slots for osd's and they are quite reliable. you could even use 2 SD cards if your machine has the internal SD slot http://www.dell.com/downloads/global/products/pedge/en/poweredge-idsdm-whitepaper-en.pdf kind regards Ronny Aasen

[ceph-users] osd dies with m_filestore_fail_eio without dmesg error

2016-09-05 Thread Ronny Aasen
and see if there is any way to salvage this osd? And is there any information i should gather before i scratch the filesystem and recreate it, perhaps there is some valuable insight into what's going on? kind regards Ronny Aasen running debian jessie + hammer 0.94.7 # uname -a Linux ceph

Re: [ceph-users] stubborn/sticky scrub errors

2016-09-05 Thread Ronny Aasen
http://ceph.com/planet/ceph-manually-repair-object so the scrub errors are gone now. kind regards Ronny Aasen On 04. sep. 2016 00:04, Brad Hubbard wrote: There should actually be "[ERR]" messages in the osd logs some time after "deep-scrub starts". Can we see those and a pg query for
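For reference, the repair sequence from that blog post boils down to roughly the following; <pgid> is a placeholder and rados list-inconsistent-obj is only available from jewel onwards:
    # find the inconsistent placement groups
    ceph health detail | grep inconsistent
    # list the objects the replicas disagree on (jewel and newer)
    rados list-inconsistent-obj <pgid> --format=json-pretty
    # ask the primary osd to repair the pg
    ceph pg repair <pgid>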

Re: [ceph-users] osd dies with m_filestore_fail_eio without dmesg error

2016-09-06 Thread Ronny Aasen
On 06. sep. 2016 00:58, Brad Hubbard wrote: On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote: > Hello > > I have an osd that regularly dies on io, especially scrubbing. > normally i would assume a bad disk, and replace it. but then i normally see > messages in dmesg a

Re: [ceph-users] osd dies with m_filestore_fail_eio without dmesg error

2016-09-06 Thread Ronny Aasen
On 06.09.2016 14:45, Ronny Aasen wrote: On 06. sep. 2016 00:58, Brad Hubbard wrote: On Mon, Sep 05, 2016 at 12:54:40PM +0200, Ronny Aasen wrote: > Hello > > I have an osd that regularly dies on io, especially scrubbing. > normally i would assume a bad disk, and replace it. but the

Re: [ceph-users] Replacing a defective OSD

2016-09-07 Thread Ronny Aasen
controller on this list? https://wiki.debian.org/LinuxRaidForAdmins this controller software is often needed for troubleshooting, and can give status and be monitored as well. kind regards Ronny Aasen

[ceph-users] problem starting osd ; PGLog.cc: 984: FAILED assert hammer 0.94.9

2016-09-12 Thread Ronny Aasen
intel 3500 anyone have a clue as to what can be wrong? kind regards Ronny Aasen -- log debug_filestore=10 -- -19> 2016-09-12 10:31:08.070947 7f8749125880 10 filestore(/var/lib/ceph/osd/ceph-8) getattr 1.fdd_head/1/1df4bfdd/rb.0.392c.238e1f29.002bd134/head '_' = 266 -18>

Re: [ceph-users] ceph OSD with 95% full

2016-09-08 Thread Ronny Aasen
Ronny Aasen On 20.07.2016 15:52, M Ranga Swami Reddy wrote: Do we have any tool to monitor the OSDs' usage with the help of a UI? Thanks Swami [snip]

Re: [ceph-users] ceph-osd fail to be started

2016-09-13 Thread Ronny Aasen
7p1 where /dev/md127p1 is the xfs partition for the osd. good luck Ronny Aasen

Re: [ceph-users] problem starting osd ; PGLog.cc: 984: FAILED assert hammer 0.94.9

2016-09-13 Thread Ronny Aasen
the non-starting osd's kind regards Ronny Aasen On 12. sep. 2016 13:16, Ronny Aasen wrote: after adding more osd's and having a big backfill running, 2 of my osd's keep on stopping. We also recently upgraded from 0.94.7 to 0.94.9 but i do not know if that is related. the log says. 0> 2016

Re: [ceph-users] Recovery/Backfill Speedup

2016-10-06 Thread Ronny Aasen
this might have changed and actually work for you. Kind regards Ronny Aasen On 05. okt. 2016 21:52, Dan Jakubiec wrote: Thanks Ronny, I am working with Reed on this problem. Yes something is very strange. Docs say osd_max_backfills defaults to 10, but when we examined the run-time configuration using
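For reference, the running value can be checked and changed at runtime roughly like this; osd.0 is just an example, and ceph daemon has to be run on the host that owns that osd:
    # show the value the daemon is actually running with
    ceph daemon osd.0 config show | grep osd_max_backfills
    # change it on all osds at runtime (not persistent across restarts)
    ceph tell osd.* injectargs '--osd-max-backfills 4 --osd-recovery-max-active 4'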

[ceph-users] offending shards are crashing osd's

2016-10-06 Thread Ronny Aasen
which of them contain the broken shard. (or perhaps all 3 of them?) a bit reluctant to delete on all 3. I have 4+2 erasure coding. (erasure size 6 min_size 4) so finding out which one is bad would be nice. hope someone has an idea how to progress. kind

Re: [ceph-users] problem starting osd ; PGLog.cc: 984: FAILED assert hammer 0.94.9

2016-09-18 Thread Ronny Aasen
added debug journal = 20 and got some new lines in the log that i added to the end of this email. can any of you make something out of them? kind regards Ronny Aasen On 18.09.2016 18:59, Kostis Fardelas wrote: If you are aware of the problematic PGs and they are exportable, then ceph

Re: [ceph-users] Is it possible to recover the data of which all replicas are lost?

2016-09-29 Thread Ronny Aasen
to export/import to a working osd. good luck Ronny Aasen

Re: [ceph-users] Give up on backfill, remove slow OSD

2016-10-02 Thread Ronny Aasen
in the documentation - ceph auth del osd.x - ceph osd crush remove osd.x - ceph osd rm osd.x PS: if your cluster stops operating when an osd goes down, you have something else fundamentally wrong. you should look into that as a separate case as well. kind regards Ronny Aasen
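Spelled out, the usual removal sequence for a dead osd is roughly the following; osd.X is a placeholder:
    ceph osd out osd.X
    systemctl stop ceph-osd@X       # on the host that owns the osd
    ceph osd crush remove osd.X
    ceph auth del osd.X
    ceph osd rm osd.X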

[ceph-users] ceph osd crash on startup / crashed first during snap removal

2016-11-10 Thread Ronny Aasen
/gYEmYOcuil8ANG2 i still have the osd available so i can try starting it again with other debug values if that is valuable. i hope someone can shed some light on why this osd crashes. kind regards Ronny Aasen

Re: [ceph-users] offending shards are crashing osd's

2016-10-19 Thread Ronny Aasen
On 06. okt. 2016 13:41, Ronny Aasen wrote: hello I have a few osd's in my cluster that are regularly crashing. [snip] of course having 3 osd's dying regularly is not good for my health. so i have set noout, to avoid heavy recoveries. googling this error message gives exactly 1 hit: https

Re: [ceph-users] offending shards are crashing osd's

2016-10-21 Thread Ronny Aasen
On 19. okt. 2016 13:00, Ronny Aasen wrote: On 06. okt. 2016 13:41, Ronny Aasen wrote: hello I have a few osd's in my cluster that are regularly crashing. [snip] of course having 3 osd's dying regularly is not good for my health. so i have set noout, to avoid heavy recoveries. googling

[ceph-users] pg stuck with unfound objects on non-existing osd's

2016-11-01 Thread Ronny Aasen
s where i am stuck. have tried stopping and starting the 3 osd's but that did not have any effect. Anyone have any advice on how to proceed? full output at: http://paste.debian.net/hidden/be03a185/ this is hammer 0.94.9 on debian 8. kind regards Ronny Aasen
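For context, the usual way to investigate unfound objects, and as an absolute last resort give up on them, is roughly the following; <pgid> is a placeholder and the revert means data loss for those objects:
    ceph health detail | grep unfound
    ceph pg <pgid> query        # look at recovery_state and might_have_unfound
    # last resort, only when the data is confirmed unrecoverable:
    ceph pg <pgid> mark_unfound_lost revert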

Re: [ceph-users] Need help! Ceph backfill_toofull and recovery_wait+degraded

2016-11-01 Thread Ronny Aasen
, or by adding more nodes. kind regards Ronny Aasen On 01.11.2016 20:14, Marcus Müller wrote: > Hi all, > > i have a big problem and i really hope someone can help me! > > We have been running a ceph cluster for a year now. Version is: 0.94.7 (Hammer) > Here is some info: > > Our o

Re: [ceph-users] pg stuck with unfound objects on non-existing osd's

2016-11-01 Thread Ronny Aasen
thanks for the suggestion. is a rolling reboot sufficient? or must all osd's be down at the same time? one is no problem. the other takes some scheduling... Ronny Aasen On 01.11.2016 21:52, c...@elchaka.de wrote: Hello Ronny, if it is possible for you, try to Reboot all OSD Nodes. I had

Re: [ceph-users] Directly addressing files on individual OSD

2017-03-16 Thread Ronny Aasen
On 16.03.2017 08:26, Youssef Eldakar wrote: Thanks for the reply, Anthony, and I am sorry my question did not give sufficient background. This is the cluster behind archive.bibalex.org. Storage nodes keep archived webpages as multi-member GZIP files on the disks, which are formatted using XFS

Re: [ceph-users] luminous/bluetsore osd memory requirements

2017-08-14 Thread Ronny Aasen
/bluestore.. kind regards Ronny Aasen

[ceph-users] ceph packages on stretch from eu.ceph.com

2017-04-25 Thread Ronny Aasen
? kind regards Ronny Aasen

[ceph-users] best practices in connecting clients to cephfs public network

2017-04-25 Thread Ronny Aasen
is the recommended way to connect clients to the public network? kind regards Ronny Aasen

Re: [ceph-users] ceph packages on stretch from eu.ceph.com

2017-06-19 Thread Ronny Aasen
Aasen On 26. april 2017 19:46, Alexandre DERUMIER wrote: you can try the proxmox stretch repository if you want http://download.proxmox.com/debian/ceph-luminous/dists/stretch/ - Original message - From: "Wido den Hollander" <w...@42on.com> To: "ceph-users" <ceph-

Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-14 Thread Ronny Aasen
multiple servers writing in the same metadata area, the same journal area and generally shitting over each other. luckily i think most modern filesystems would detect that the FS is mounted somewhere else and prevent you from mounting it again without big fat warnings. kind

Re: [ceph-users] Power outages!!! help!

2017-09-15 Thread Ronny Aasen
you write you had all pg's exported except one. so i assume you have injected those pg's into the cluster again using the method linked a few times in this thread. How did that go, were you successful in recovering those pg's? kind regards. Ronny Aasen On 15. sep. 2017 07:52, hjcho616

Re: [ceph-users] OSD_OUT_OF_ORDER_FULL even when the ratios are in order.

2017-09-14 Thread Ronny Aasen
On 14. sep. 2017 11:58, dE . wrote: Hi, I got a ceph cluster where I'm getting an OSD_OUT_OF_ORDER_FULL health error, even though it appears that it is in order -- full_ratio 0.99 backfillfull_ratio 0.97 nearfull_ratio 0.98 These don't seem like a mistake to me but ceph is complaining --
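The health code expects nearfull < backfillfull < full; in the quoted config nearfull (0.98) is above backfillfull (0.97). On luminous the ratios can be put back in a sane order roughly like this (the values shown are just the defaults):
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95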

Re: [ceph-users] access ceph filesystem at storage level and not via ethernet

2017-09-13 Thread Ronny Aasen
on multiple servers and use a cluster filesystem like ocfs or gfs that is made for this kind of solution. kind regards Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-09-20 Thread Ronny Aasen
since it has crush weight 0 it should drain out. - if that works, verify the injection drive is drained, stop it and remove it from ceph. zap the drive. as i said, this is all guesstimates so your mileage may vary. good luck Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-09-20 Thread Ronny Aasen
/ceph-manually-repair-object/ good luck Ronny Aasen On 20.09.2017 22:17, hjcho616 wrote: Thanks Ronny. I decided to try to tar everything under the current directory. Is this the correct command for it? Is there any directory we do not want in the new drive? commit_op_seq, meta, nosnap, omap? tar

[ceph-users] windows server 2016 refs3.1 veeam synthetic backup with fast block clone

2017-10-13 Thread Ronny Aasen
clone and fast synthetic full backups when using ReFS on rbd on ceph. i of course have other backup solutions, but this is specific for vmware backups. possible? kind regards Ronny Aasen

Re: [ceph-users] Brand new cluster -- pg is stuck inactive

2017-10-13 Thread Ronny Aasen
strange that no osd is acting for your pg's. can you show the output from ceph osd tree? kind regards Ronny Aasen On 13.10.2017 18:53, dE wrote: Hi, I'm running ceph 10.2.5 on Debian (official package). It can't seem to create any functional pools -- ceph health detail HEALTH_ERR 64 pgs

Re: [ceph-users] Power outages!!! help!

2017-08-30 Thread Ronny Aasen
Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-09-12 Thread Ronny Aasen
. start by doing one pg at a time, and once you get the hang of the method you can do multiple pg's at the same time. good luck Ronny Aasen On 11. sep. 2017 06:51, hjcho616 wrote: It took a while. It appears to have cleaned up quite a bit... but still has issues. I've been seeing bel

Re: [ceph-users] Power outages!!! help!

2017-09-13 Thread Ronny Aasen
On 13. sep. 2017 07:04, hjcho616 wrote: Ronny, Did a bunch of ceph pg repair pg# and got the scrub errors down to 10... well, it was 9, trying to fix one became 10... waiting for it to fix (I did that noout trick as I only have two copies). 8 of those scrub errors look like they would need data

Re: [ceph-users] Power outages!!! help!

2017-09-28 Thread Ronny Aasen
hoping it doesn't impact the data much... went ahead and allowed those corrupted values. I was able to export osd.4 with journal! congratulations and well done :) just imagine trying to do this on $vendor's proprietary blackbox... Ronny Aasen

Re: [ceph-users] Ceph luminous repo not working on Ubuntu xenial

2017-09-29 Thread Ronny Aasen
will install that specific version. example in my case: apt install ceph=10.2.5-7.2 will downgrade to the previous version. kind regards Ronny Aasen On 29.09.2017 15:40, Kashif Mumtaz wrote: Dear Stefan, Thanks for your help. You are right. I was missing "apt update" after adding the repo.
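A rough sketch of installing a pinned version from an apt repo; the version string is the one from the quoted example, and in practice the related packages have to be held at the same version:
    apt-cache policy ceph        # list the versions the configured repos provide
    apt-get install ceph=10.2.5-7.2 ceph-osd=10.2.5-7.2 ceph-mon=10.2.5-7.2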

Re: [ceph-users] PG in active+clean+inconsistent, but list-inconsistent-obj doesn't show it

2017-09-28 Thread Ronny Aasen
on that rbd after all the ceph problems are repaired. good luck Ronny Aasen

Re: [ceph-users] Monitoring a rbd map rbd connection

2017-08-25 Thread Ronny Aasen
write to a subdirectory on the RBD. so if it is not mounted, the directory will be missing, and you get a no such file error. Ronny Aasen On 25.08.2017 18:04, David Turner wrote: Additionally, solely testing if you can write to the path could give a false sense of security if the path
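A minimal sketch of such a check, assuming the RBD is mounted at /mnt/rbd and a healthcheck/ directory was created on the RBD filesystem itself (both names are made up for the example):
    if touch /mnt/rbd/healthcheck/probe 2>/dev/null; then
        echo "rbd mounted and writable"
    else
        echo "rbd not mounted (or not writable)" >&2
        exit 1
    fi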

Re: [ceph-users] Power outages!!! help!

2017-08-28 Thread Ronny Aasen
good luck Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-08-28 Thread Ronny Aasen
recovered lost object using these instructions http://ceph.com/geen-categorie/incomplete-pgs-oh-my/ I would start by renaming the osd's log file, doing a single try at starting the osd, and posting that log. have you done anything to the osd's that could make them not run? kind regards Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-08-28 Thread Ronny Aasen
nodes to cover for the lost one. the more nodes you have, the less the impact of a node failure is, and the less spare room is needed. for a 4 node cluster you should not fill more than 66% if you want to be able to self-heal + operate. good luck Ronny Aa

Re: [ceph-users] Power outages!!! help!

2017-08-30 Thread Ronny Aasen
nts. http://www.toad.com/gnu/sysadmin/index.html#ddrescue kind regards Ronny Aasen

Re: [ceph-users] Power outages!!! help!

2017-09-03 Thread Ronny Aasen
pg at a time. and you should only recover pg's that contain unfound objects. there are really only 103 unfound objects that you need to recover. once the recovery is complete you can wipe the functioning recovery drive, and install it as a new osd in the cluster. kind regards Ronny Aasen
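The export/import method referred to in this thread is ceph-objectstore-tool; a sketch for a filestore osd looks roughly like this, with both osds stopped and the pgid, paths and osd numbers as placeholders:
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-4 \
        --journal-path /var/lib/ceph/osd/ceph-4/journal \
        --op export --pgid 2.5 --file /tmp/pg2.5.export
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-20 \
        --journal-path /var/lib/ceph/osd/ceph-20/journal \
        --op import --file /tmp/pg2.5.export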

Re: [ceph-users] Re install ceph

2017-09-27 Thread Ronny Aasen
is still on disk. or I would set crush weight to 0 and drain all osd's off the node before reinstalling. here the backfill will take longer, since you actually have to refill disks. kind regards Ronny Aasen

Re: [ceph-users] Erasure code profile

2017-10-23 Thread Ronny Aasen
kind regards Ronny Aasen

Re: [ceph-users] Erasure code profile

2017-10-24 Thread Ronny Aasen
on 14 nodes means each and every node is hit on each write. kind regards Ronny Aasen On 23. okt. 2017 21:12, Jorge Pinilla López wrote: I have one question, what can or can't a cluster do while working in degraded mode? With K=10 + M=4, if one of my OSD nodes fails it will start working
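For reference, a k=10 m=4 profile with host as the failure domain would be created roughly like this on luminous; the profile and pool names and pg counts are placeholders:
    ceph osd erasure-code-profile set ec-10-4 k=10 m=4 crush-failure-domain=host
    ceph osd pool create ecpool 256 256 erasure ec-10-4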

Re: [ceph-users] Undersized fix for small cluster, other than adding a 4th node?

2017-11-10 Thread Ronny Aasen
read http://ceph.com/geen-categorie/ceph-erasure-coding-overhead-in-a-nutshell/ kind regards Ronny Aasen

Re: [ceph-users] Cluster network slower than public network

2017-11-15 Thread Ronny Aasen
not have 10Gbps on everything. kind regards Ronny Aasen

Re: [ceph-users] HELP with some basics please

2017-12-04 Thread Ronny Aasen
. basically it means if there are fewer than 2 copies, do not accept writes. whether you want to do this depends on your requirements: is it a bigger disaster to be unavailable for a while, or to have to restore from backup. kind regards Ronny Aasen

Re: [ceph-users] Another OSD broken today. How can I recover it?

2017-12-05 Thread Ronny Aasen
that have developed corruptions/inconsistencies with the cluster kind regards Ronny Aasen

Re: [ceph-users] Adding multiple OSD

2017-12-05 Thread Ronny Aasen
so the host with the least space is the limiting factor. check ceph osd df tree to see how it looks. kind regards Ronny Aasen
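The two views worth comparing are roughly:
    ceph osd df tree    # per-host and per-osd size, use% and pg count
    ceph df             # per-pool usage; MAX AVAIL is bounded by the fullest osd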

Re: [ceph-users] Another OSD broken today. How can I recover it?

2017-12-05 Thread Ronny Aasen
y "correct" :) with min_size =2 the cluster would not accept a write unless 2 disks accepted the write. kind regards Ronny Aasen ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Ronny Aasen
d have the space for it. good luck Ronny Aasen

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Ronny Aasen
if you have a global setting in ceph.conf it will only affect the creation of new pools. i recommend using the default size:3 + min_size:2. also check that your existing pools have min_size=2. kind regards Ronny Aasen On 15.12.2017 23:00, James Okken wrote: This whole effort went extremely well
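Checking and fixing existing pools is roughly the following; the pool name rbd is just an example:
    ceph osd dump | grep 'replicated size'    # shows size and min_size per pool
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2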

Re: [ceph-users] Moving bluestore WAL and DB after bluestore creation

2017-11-17 Thread Ronny Aasen
db and wal partitions, and for extracting db data from block into a separate block.db partition. dd'ing block.db would probably work when you need to replace a worn-out ssd drive, but not so much if you want to deploy a separate block.db from a bluestore made without block.db

Re: [ceph-users] Ceph - SSD cluster

2017-11-21 Thread Ronny Aasen
for current devices. kind regards Ronny Aasen

Re: [ceph-users] Another OSD broken today. How can I recover it?

2017-12-05 Thread Ronny Aasen
for this to happen you have to lose another osd before backfilling is done. Thank You! This clarifies it! Denes On 12/05/2017 03:32 PM, Ronny Aasen wrote: On 05. des. 2017 10:26, Denes Dolhay wrote: Hi, This question popped up a few times already under filestore and bluestore too

Re: [ceph-users] Corrupted files on CephFS since Luminous upgrade

2017-12-08 Thread Ronny Aasen
ewrite dovecot index/cache) at the same time as a user accesses imap and writes to dovecot index/cache. kind regards Ronny Aasen

Re: [ceph-users] Another OSD broken today. How can I recover it?

2017-12-04 Thread Ronny Aasen
more eyeballs willing to give it a look. good luck and kind regards Ronny Aasen

Re: [ceph-users] I cannot make the OSD to work, Journal always breaks 100% time

2017-12-06 Thread Ronny Aasen
and insert it as a new fresh OSD. but these 2 lines from your pastebin are a bit over the top. how you can have this many degraded objects based on only 289090 objects is hard to understand. recovery 20266198323167232/289090 objects degraded (7010342219781.809%) 37154696925806625 scrub errors i have not seen

Re: [ceph-users] MDS damaged

2017-10-26 Thread Ronny Aasen
repair-object/ might have recovered that object for you. kind regards Ronny Aasen On 26. okt. 2017 04:38, dani...@igb.illinois.edu wrote: Hi Ronny, From the documentation, I thought this was the proper way to resolve the issue. Dan On 24. okt. 2017 19:14, Daniel Davidson wrote: Our c

Re: [ceph-users] MDS damaged

2017-10-25 Thread Ronny Aasen
ing object was for. Did you do much troubleshooting before jumping to this command, so that you were certain there were no other non-data-loss options? kind regards Ronny Aasen

Re: [ceph-users] How to normally expand OSD’s capacity?

2018-05-10 Thread Ronny Aasen
... you can remove (drain or destroy) - repartition - add the osd and let ceph backfill the drive. or you can just make a new osd with the remaining disk space. since the space increase will change the crushmap there is no way to avoid some data movement anyway. kind regards Ronny Aasen

Re: [ceph-users] ceph cluster

2018-06-12 Thread Ronny Aasen
. http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address-the-messy-way kind regards Ronny Aasen

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Ronny Aasen
On 04. juni 2018 06:41, Charles Alva wrote: Hi Guys, When will the Ceph Mimic packages for Debian Stretch be released? I could not find the packages even after changing the sources.list. I am also eager to test mimic on my ceph. debian-mimic only contains ceph-deploy atm. kind regards Ronny

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-05 Thread Ronny Aasen
-maintainers archives are not public. -Joao The debian-gcc list is public: https://lists.debian.org/debian-gcc/2018/04/msg00137.html Ronny Aasen

Re: [ceph-users] 3 monitor servers to monitor 2 different OSD set of servers

2018-04-26 Thread Ronny Aasen
ard https://ceph.com/community/new-luminous-crush-device-classes/ kind regards Ronny Aasen

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Ronny Aasen
. the client will pick up this mtu value and store it in /proc/sys/net/ipv6/conf/eth0/mtu if /proc/sys/net/ipv6/conf/ens32/accept_ra_mtu is enabled. you can perhaps change what mtu is advertised on the link by altering your router or the device that advertises RAs. kind regards Ronny Aasen
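A quick way to see what is in effect and what the router advertises; eth0 is a placeholder and rdisc6 comes from the ndisc6 package:
    cat /proc/sys/net/ipv6/conf/eth0/mtu
    sysctl net.ipv6.conf.eth0.accept_ra_mtu
    rdisc6 eth0    # prints the MTU option carried in the received router advertisement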

Re: [ceph-users] Query regarding min_size.

2018-01-03 Thread Ronny Aasen
min_size specifies how many osd's must ack the write before the write is acked to the client. since failure is most likely when disks are under stress (e.g. with a rebuild), reducing min_size is just asking for corruption and data loss. kind regards Ronny Aasen

Re: [ceph-users] Luminous and calamari

2018-02-16 Thread Ronny Aasen
ceph admin. since it is a web admin tool for kvm vm's and lxd containers as well as ceph. kind regards Ronny Aasen

Re: [ceph-users] ceph status doesnt show available and used disk space after upgrade

2017-12-20 Thread Ronny Aasen
regards Ronny Aasen

Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2018-01-01 Thread Ronny Aasen
On 30.12.2017 15:41, Milanov, Radoslav Nikiforov wrote: Performance as well - in my testing FileStore was much quicker than BlueStore. with filestore you often have an ssd journal in front; this will often mask/hide slow spinning disk write performance, until the journal size becomes the

Re: [ceph-users] ls operation is too slow in cephfs

2018-07-25 Thread Ronny Aasen
? kind regards Ronny Aasen On 25.07.2018 12:03, Surya Bala wrote: the time got reduced when an MDS from the same region became active. we have an MDS in each region. the OSD nodes are in one region and the active MDS is in another region, hence the delay. On Tue, Jul 17, 2018 at 6:23 PM, John Spray

Re: [ceph-users] Reclaim free space on RBD images that use Bluestore?????

2018-07-23 Thread Ronny Aasen
reclaim freed space. kind regards Ronny Aasen
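For completeness, the usual way to hand freed blocks back to ceph is a trim from inside whatever mounts the filesystem on the RBD; the mount point is an example, and for kvm guests the virtual disk has to be attached with discard enabled (e.g. discard=unmap on virtio-scsi) or the trims never reach the rbd:
    fstrim -v /mnt/rbdfs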

Re: [ceph-users] active+clean+inconsistent PGs after upgrade to 12.2.7

2018-07-19 Thread Ronny Aasen
for the 12.2.7 release read: https://ceph.com/releases/12-2-7-luminous-released/ there should be features coming in 12.2.8 that can deal with the "objects are in sync but checksums are wrong" scenario. kind regards Ronny Aasen

Re: [ceph-users] Slow Ceph: Any plans on torrent-like transfers from OSDs ?

2018-09-09 Thread Ronny Aasen
that, it does not get the love it needs. and redhat has even stopped supporting it in deployments. but you can use dm-cache or bcache on osd's and/or rbd-cache on kvm clients. good luck Ronny Aasen On 09.09.2018 11:20, Alex Lupsa wrote: Hi, Any ideas about the below? Thanks, Alex

Re: [ceph-users] Cannot delete a pool

2018-03-01 Thread Ronny Aasen
that is by design. https://blog.widodh.nl/2015/04/protecting-your-ceph-pools-against-removal-or-property-changes/ kind regards Ronny Aasen
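On luminous the guard rail looks roughly like this; the pool name is a placeholder and the flag is deliberately switched back off afterwards:
    ceph tell mon.* injectargs '--mon-allow-pool-delete=true'
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph tell mon.* injectargs '--mon-allow-pool-delete=false'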

Re: [ceph-users] Delete a Pool - how hard should be?

2018-03-06 Thread Ronny Aasen
tes" sounds much better then the last step beeing admin checking if the backups are good.,.. i try to do something similar by renaming pools to be deleted but that is not allways the same as inactive. kind regards Ronny Aasen ___ ceph

Re: [ceph-users] Cluster degraded after Ceph Upgrade 12.2.1 => 12.2.2

2018-04-11 Thread Ronny Aasen
have both redundancy and high availability. kind regards Ronny Aasen On 11.04.2018 17:42, Ranjan Ghosh wrote: Ah, nevermind, we've solved it. It was a firewall issue. The only thing that's weird is that it became an issue immediately after an update. Perhaps it has sth. to do with monitor n

Re: [ceph-users] osds with different disk sizes may killing performance (?? ?)

2018-04-12 Thread Ronny Aasen
osd's anyway and the new ones are mostly empty, rather than when your small osd's are full and your large disks have significant data on them. kind regards Ronny Aasen

Re: [ceph-users] configuration section for each host

2018-04-24 Thread Ronny Aasen
in the configuration file. in the next version you will probably not need initial monitors either, since they can be discovered via SRV DNS records. kind regards Ronny Aasen

Re: [ceph-users] Cephalocon APAC 2018 report, videos and slides

2018-04-24 Thread Ronny Aasen
the checkbox to make posts announcing the new uploads on social media, YouTube decided to post it anyway. Sorry for the inconvenience. Kindest regards, Leo thanks to the presenters and yourself for your awesome work. this is a goldmine for those of us who could not attend. :) kind regards Ronny Aasen

Re: [ceph-users] Cluster degraded after Ceph Upgrade 12.2.1 => 12.2.2

2018-04-25 Thread Ronny Aasen
regards Ronny Aasen BTW: i did not know ubuntu automagically rebooted after an upgrade. you can probably avoid that reboot somehow in ubuntu and do the restarts of services manually, if you wish to maintain service during the upgrade. On 25.04.2018 11:52, Ranjan Ghosh wrote: Thanks a lot for your

Re: [ceph-users] Bluestore and scrubbing/deep scrubbing

2018-03-29 Thread Ronny Aasen
and what object is bad during scrub. crc is not a replacement for scrub, but a complement. it improves the quality of the data you provide to clients, and it makes it easier for scrub to detect errors. kind regards Ronny Aasen [1] https://en.wikipedia.org/wiki/Data_degradation

Re: [ceph-users] split brain case

2018-03-29 Thread Ronny Aasen
hosts you have no failure domain. but 4 hosts is the minimum sane starting point for a regular small cluster with 3+2 pools (you can lose a node and ceph self-heals as long as there is enough free space). kind regards Ronny Aasen

Re: [ceph-users] split brain case

2018-03-29 Thread Ronny Aasen
as free space in your cluster to be able to self-heal. the point is that splitting the cluster hurts. and if HA is the most important thing, then you may want to check out rbd mirror. kind regards Ronny Aasen

Re: [ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-22 Thread Ronny Aasen
advantage of that in your design/pool configuration. kind regards Ronny Aasen On 22.03.2018 10:53, Hervé Ballans wrote: On 21/03/2018 at 11:48, Ronny Aasen wrote: On 21. mars 2018 11:27, Hervé Ballans wrote: Hi all, I have a question regarding a possible scenario to put both wal and db

Re: [ceph-users] Separate BlueStore WAL/DB : best scenario ?

2018-03-21 Thread Ronny Aasen
if the nvram dies, it brings down 22 osd's at once and will be a huge pain for your cluster (depending on how large it is...). i would spread the db's over more devices to reduce the bottleneck and failure domains in this situation. kind regards Ronny Aasen
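For comparison, putting the db on a separate (shared) fast device at creation time is a one-liner with ceph-volume; the device names are placeholders:
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p2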

Re: [ceph-users] Ceph newbie(?) issues

2018-03-05 Thread Ronny Aasen
! kind regards Ronny Aasen

Re: [ceph-users] Ceph newbie(?) issues

2018-03-05 Thread Ronny Aasen
On 05. mars 2018 14:45, Jan Marquardt wrote: On 05.03.18 at 13:13, Ronny Aasen wrote: i had some similar issues when i started my proof of concept. especially the snapshot deletion i remember well. the rule of thumb for filestore, which i assume you are running, is 1GB ram per TB of osd. so

Re: [ceph-users] Install previous version of Ceph

2018-02-26 Thread Ronny Aasen
out a solution to this? I have the same problem now. I assume you have to download the old version manually and install it with dpkg -i, or optionally mirror the ceph repo and build your own repo index containing all versions. kind regards Ronny Aasen

Re: [ceph-users] Bluestore vs. Filestore

2018-10-03 Thread Ronny Aasen
ons/pools/ explains how the osd class is used to define a crush placement rule, and then you can set the crush_rule on the pool and ceph will move the data. No downtime needed. kind regards Ronny Aasen
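The device-class based placement mentioned here boils down to roughly the following; the rule and pool names are examples:
    ceph osd crush rule create-replicated fast default host ssd
    ceph osd pool set mypool crush_rule fast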

Re: [ceph-users] Bluestore vs. Filestore

2018-10-02 Thread Ronny Aasen
mds server's ram, so you cache as much metadata as possible. good luck Ronny Aasen

Re: [ceph-users] Bluestore vs. Filestore

2018-10-02 Thread Ronny Aasen
he to the mds server's ram, so you cache as much metadata as possible. Yes, we're in the process of doing that - I believe we're seeing the MDS suffering when we saturate a few disks in the setup - and they are sharing. Thus we'll move the metadata as per recommendations to

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread Ronny Aasen
On 18.09.2018 21:15, Alfredo Daniel Rezinovsky wrote: Can anyone add me to this slack? with my email alfrenov...@gmail.com Thanks. why would a ceph slack be invite-only? Also, is the slack bridged to matrix? room id? kind regards Ronny Aasen
