Re: [ceph-users] Disabling write cache on SATA HDDs reduces write latency 7 times

2018-11-11 Thread Marc Roos
I just did a very, very short test and don’t see any difference with this cache on or off, so I am leaving it on for now. -Original Message- From: Ashley Merrick [mailto:singap...@amerrick.co.uk] Sent: zondag 11 november 2018 11:43 To: Marc Roos Cc: ceph-users; vitalif Subject: Re

Re: [ceph-users] Disabling write cache on SATA HDDs reduces write latency 7 times

2018-11-11 Thread Marc Roos
Does it make sense to test disabling this on hdd cluster only? -Original Message- From: Ashley Merrick [mailto:singap...@amerrick.co.uk] Sent: zondag 11 november 2018 6:24 To: vita...@yourcmc.ru Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Disabling write cache on SATA HDDs

Re: [ceph-users] ceph 12.2.9 release

2018-11-08 Thread Marc Roos
nich HRB 231263 Web: https://croit.io YouTube: https://goo.gl/PGE1Bx 2018-11-08 10:35 GMT+01:00 Matthew Vernon : > On 08/11/2018 09:17, Marc Roos wrote: >> >> And that is why I don't like ceph-deploy. Unless you have maybe >> hundreds of disks, I don’t see why you cannot

Re: [ceph-users] ceph 12.2.9 release

2018-11-08 Thread Marc Roos
g here. I doubt if ceph-deploy is even much faster. -Original Message- From: Matthew Vernon [mailto:m...@sanger.ac.uk] Sent: donderdag 8 november 2018 10:36 To: ceph-users@lists.ceph.com Cc: Marc Roos Subject: Re: [ceph-users] ceph 12.2.9 release On 08/11/2018 09:17, Marc Roos wrote:

Re: [ceph-users] ceph 12.2.9 release

2018-11-08 Thread Marc Roos
@lists.ceph.com Subject: Re: [ceph-users] ceph 12.2.9 release El Miércoles 07/11/2018 a las 11:28, Matthew Vernon escribió: > On 07/11/2018 14:16, Marc Roos wrote: > > > > > > I don't see the problem. I am installing only the ceph updates when > > others have

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Marc Roos
I don't see the problem. I am installing only the ceph updates when others have done this and are running several weeks without problems. I have noticed this 12.2.9 availability also, did not see any release notes, so why install it? Especially with recent issues of other releases. That bei

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-11 Thread Marc Roos
Why slack anyway? -Original Message- From: Konstantin Shalygin [mailto:k0...@k0ste.ru] Sent: donderdag 11 oktober 2018 5:11 To: ceph-users@lists.ceph.com Subject: *SPAM* Re: [ceph-users] https://ceph-storage.slack.com > why would a ceph slack be invite only? Because this is

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Marc Roos
Luminous also does not have an updated librgw, which prevents ganesha from using the multi tenancy mounts. Especially with the current issues of mimic, it would be nice if this could be made available in luminous. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg48659.html https://gith

Re: [ceph-users] cephfs poor performance

2018-10-08 Thread Marc Roos
That is easy I think, so I will give it a try: Faster CPU's, Use fast NVME disks, all 10Gbit or even better 100Gbit, added with a daily prayer. -Original Message- From: Tomasz Płaza [mailto:tomasz.pl...@grupawp.pl] Sent: maandag 8 oktober 2018 7:46 To: ceph-users@lists.ceph.com Sub

Re: [ceph-users] list admin issues

2018-10-06 Thread Marc Roos
-AES256-GCM-SHA384 -Original Message- From: Vasiliy Tolstov [mailto:v.tols...@selfip.ru] Sent: zaterdag 6 oktober 2018 16:34 To: Marc Roos Cc: ceph-users@lists.ceph.com; elias.abacio...@deltaprojects.com Subject: *SPAM* Re: [ceph-users] list admin issues сб, 6 окт. 2018 г. в 16:48

Re: [ceph-users] list admin issues

2018-10-06 Thread Marc Roos
Maybe ask first gmail? -Original Message- From: Elias Abacioglu [mailto:elias.abacio...@deltaprojects.com] Sent: zaterdag 6 oktober 2018 15:07 To: ceph-users Subject: Re: [ceph-users] list admin issues Hi, I'm bumping this old thread cause it's getting annoying. My membership get

Re: [ceph-users] Cannot write to cephfs if some osd's are not available on the client network

2018-10-05 Thread Marc Roos
losed (con state CONNECTING) .. .. .. -Original Message- From: John Spray [mailto:jsp...@redhat.com] Sent: donderdag 27 september 2018 11:43 To: Marc Roos Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Cannot write to cephfs if some osd's are not available on the client

[ceph-users] network latency setup for osd nodes combined with vm

2018-10-03 Thread Marc Roos
It was not my first intention to host vm's on osd nodes of the ceph cluster. But since this test cluster is not doing anything, I might as well use some of the cores. Currently I have configured a macvtap on the ceph client network configured as a vlan. Disadvantage is that the local osd's ca

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-10-02 Thread Marc Roos
p and move the file to a 3x replicated pool, I assume my data is moved there and more secure. -Original Message- From: Janne Johansson [mailto:icepic...@gmail.com] Sent: dinsdag 2 oktober 2018 15:44 To: jsp...@redhat.com Cc: Marc Roos; Ceph Users Subject: Re: [ceph-users] cephfs issue w

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-10-01 Thread Marc Roos
edhat.com] Sent: maandag 1 oktober 2018 21:28 To: Marc Roos Cc: ceph-users; jspray; ukernel Subject: Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error Moving a file into a directory with a different layout does not, and is not intended to, copy the un

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-10-01 Thread Marc Roos
sdf -Original Message- From: Yan, Zheng [mailto:uker...@gmail.com] Sent: zaterdag 29 september 2018 6:55 To: Marc Roos Subject: Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error check_pool_perm on pool 30 ns need Fr, but no read perm client does

Re: [ceph-users] cephfs kernel client stability

2018-10-01 Thread Marc Roos
How do you test this? I have had no issues under "normal load" with an old kernel client and a stable os. CentOS Linux release 7.5.1804 (Core) Linux c04 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux -Original Message- From: Andras

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
dag 28 september 2018 15:45 To: Marc Roos Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error On Fri, Sep 28, 2018 at 2:28 PM Marc Roos wrote: > > > Looks like that if I move files between different da

Re: [ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
If I copy the file out6 to out7 in the same location, I can read the out7 file on the nfs client. -Original Message- To: ceph-users Subject: [ceph-users] cephfs issue with moving files between data pools gives Input/output error Looks like that if I move files between different dat

[ceph-users] cephfs issue with moving files between data pools gives Input/output error

2018-09-28 Thread Marc Roos
Looks like that if I move files between different data pools of the cephfs, something is still referring to the 'old location' and gives an Input/output error. I assume this, because I am using different client ids for authentication. With the same user as configured in ganesha, mounting (ker

[ceph-users] Cephfs new file in ganesha mount Input/output error

2018-09-27 Thread Marc Roos
If I add a file to the cephfs on one client, and that fs is exported via ganesha and nfs mounted somewhere else, I can see it in the dir listing on the other nfs client. But trying to read it gives an Input/output error. Other files (older ones in the same dir) I can read. Anyone had this also? nfs

[ceph-users] Cannot write to cephfs if some osd's are not available on the client network

2018-09-27 Thread Marc Roos
I have a test cluster and on an osd node I put a vm. The vm is using a macvtap on the client network interface of the osd node, making access to local osd's impossible. The vm of course reports that it cannot access the local osd's. What I am getting is: - I cannot reboot this vm normally, ne

Re: [ceph-users] PG inconsistent, "pg repair" not working

2018-09-25 Thread Marc Roos
And where is the manual for bluestore? -Original Message- From: mj [mailto:li...@merit.unu.edu] Sent: dinsdag 25 september 2018 9:56 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] PG inconsistent, "pg repair" not working Hi, I was able to solve a similar issue on our cluste

Re: [ceph-users] Ceph balancer "Error EAGAIN: compat weight-set not available"

2018-09-22 Thread Marc Roos
h tunables you can check out the ceph wiki [2] here. [1] ceph osd set-require-min-compat-client hammer ceph osd crush set-all-straw-buckets-to-straw2 ceph osd crush tunables hammer [2] http://docs.ceph.com/docs/master/rados/operations/crush-map/ -Original Message- From: Marc Roos Sent: d
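
A sketch of the crush-compat balancer workflow these commands belong to (the plan name is taken from the original post; the balancer mode and the ordering are assumptions, not from this thread, and raising the required min-compat-client can lock out very old clients):

   ceph osd set-require-min-compat-client hammer
   ceph osd crush set-all-straw-buckets-to-straw2
   ceph balancer mode crush-compat
   ceph balancer optimize balancer-test.plan
   ceph balancer eval balancer-test.plan
   ceph balancer execute balancer-test.plan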

Re: [ceph-users] macos build failing

2018-09-20 Thread Marc Roos
When running ./do_cmake.sh, I get fatal: destination path '/Users/mac/ceph/src/zstd' already exists and is not an empty directory. fatal: clone of 'https://github.com/facebook/zstd' into submodule path '/Users/mac/ceph/src/zstd' failed Failed to clone 'src/zstd'. Retry scheduled fatal: desti

[ceph-users] macos build failing

2018-09-20 Thread Marc Roos
Has anyone been able to build according to this manual? Because here it fails. http://docs.ceph.com/docs/mimic/dev/macos/ I have prepared macos as it is described, took 2h to build this llvm, is that really necessary? I do the git clone --single-branch -b mimic https://github.com/ceph/ceph

Re: [ceph-users] osx support and performance testing

2018-09-19 Thread Marc Roos
I have been trying to do this on a sierra vm, installed xcode 9.2. I had to modify this ceph-fuse.rb and copy it to the folder /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/ (it was not there, is that correct?) But now I get the error make: *** No rule to make target `rados'.

[ceph-users] mesos on ceph nodes

2018-09-15 Thread Marc Roos
Just curious, is anyone running mesos on ceph nodes? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-14 Thread Marc Roos
I agree. I was on centos7.4 and updated to I think luminous 12.2.7, and had something not working related to some python dependency. This was resolved by upgrading to centos7.5 -Original Message- From: David Turner [mailto:drakonst...@gmail.com] Sent: vrijdag 14 september 2018 15

Re: [ceph-users] Performance predictions moving bluestore wall, db to ssd

2018-09-12 Thread Marc Roos
ssage- From: David Turner [mailto:drakonst...@gmail.com] Sent: woensdag 12 september 2018 18:20 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Performance predictions moving bluestore wall, db to ssd You already have a thread talking about benchmarking the addition of WAL and DB parti

[ceph-users] Performance predictions moving bluestore wall, db to ssd

2018-09-12 Thread Marc Roos
When having a hdd bluestore osd with collocated wal and db. - What performance increase can be expected if one would move the wal to an ssd? - What performance increase can be expected if one would move the db to an ssd? - Would the performance be a lot if you have a very slow hdd (and thu
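
For reference, a sketch of how a WAL/DB can be placed on a separate device with ceph-volume (device paths are illustrative, not from this thread; when only --block.db is given, the WAL lives on the DB device):

   ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
   # or with an explicit WAL device as well
   ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2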

[ceph-users] osx support and performance testing

2018-09-12 Thread Marc Roos
Is osxfuse the only and best-performing way to mount a ceph filesystem on an osx client? http://docs.ceph.com/docs/mimic/dev/macos/ I am now testing cephfs performance on a client with the fio libaio engine. This engine does not exist on osx, but there is a posixaio. Does anyone have ex

[ceph-users] Ceph balancer "Error EAGAIN: compat weight-set not available"

2018-09-11 Thread Marc Roos
I am new to using the balancer; I think this should generate a plan, not? I do not get what this error is about. [@c01 ~]# ceph balancer optimize balancer-test.plan Error EAGAIN: compat weight-set not available ___ ceph-users mailing list ceph-users

Re: [ceph-users] Need help

2018-09-10 Thread Marc Roos
I guess good luck. Maybe you can ask these guys to hurry up and get something production ready. https://github.com/ceph-dovecot/dovecot-ceph-plugin -Original Message- From: marc-antoine desrochers [mailto:marc-antoine.desroch...@sogetel.com] Sent: maandag 10 september 2018 14:40 To

[ceph-users] Mimic and collectd working?

2018-09-07 Thread Marc Roos
I was thinking of upgrading luminous to mimic, but does anyone have mimic running with collectd and the ceph plugin? When luminous was introduced it took almost half a year before collectd was supporting it. ___ ceph-users mailing list ceph-users@li

[ceph-users] Luminous 12.2.8 deepscrub settings changed?

2018-09-07 Thread Marc Roos
I have only 2 scrubs running on hdd's, but they keep the drives in a high busy state. I did not notice this before; did some setting change? Because I can remember dstat listing 14MB/s-20MB/s and not 60MB/s DSK | sdd | busy 95% | read1384 | write 92 | KiB/r 292 | KiB/w

Re: [ceph-users] Rados performance inconsistencies, lower than expected performance

2018-09-07 Thread Marc Roos
the samsung sm863. write-4k-seq: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 randwrite-4k-seq: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 read-4k-seq: (g=2): rw=read, bs=(R) 409

Re: [ceph-users] CephFS on a mixture of SSDs and HDDs

2018-09-06 Thread Marc Roos
To add a data pool to an existing cephfs ceph osd pool set fs_data.ec21 allow_ec_overwrites true ceph osd pool application enable fs_data.ec21 cephfs ceph fs add_data_pool cephfs fs_data.ec21 Then link the pool to the directory (ec21) setfattr -n ceph.dir.layout.pool -v fs_data.ec21 ec21 ---
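
Expanded into a runnable sequence (pool, profile and directory names follow the post; the pool-creation line and pg counts are assumptions):

   ceph osd pool create fs_data.ec21 64 64 erasure ec21profile
   ceph osd pool set fs_data.ec21 allow_ec_overwrites true
   ceph osd pool application enable fs_data.ec21 cephfs
   ceph fs add_data_pool cephfs fs_data.ec21
   mkdir ec21
   setfattr -n ceph.dir.layout.pool -v fs_data.ec21 ec21
   getfattr -n ceph.dir.layout ec21   # verify the new layout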

Re: [ceph-users] Rados performance inconsistencies, lower than expected performance

2018-09-06 Thread Marc Roos
: ceph tell osd.* injectargs --osd_max_backfills=0 Again getting slower towards the end. Bandwidth (MB/sec): 395.749 Average Latency(s): 0.161713 -Original Message- From: Menno Zonneveld [mailto:me...@1afa.com] Sent: donderdag 6 september 2018 16:56 To: Marc Roos; ceph-users Subject:

Re: [ceph-users] Rados performance inconsistencies, lower than expected performance

2018-09-06 Thread Marc Roos
Menno Zonneveld [mailto:me...@1afa.com] Sent: donderdag 6 september 2018 15:52 To: Marc Roos; ceph-users Subject: RE: [ceph-users] Rados performance inconsistencies, lower than expected performance ah yes, 3x replicated with minimal 2. my ceph.conf is pretty bare, just in case it might be rel

Re: [ceph-users] Rados performance inconsistencies, lower than expected performance

2018-09-06 Thread Marc Roos
Test pool is 3x replicated? -Original Message- From: Menno Zonneveld [mailto:me...@1afa.com] Sent: donderdag 6 september 2018 15:29 To: ceph-users@lists.ceph.com Subject: [ceph-users] Rados performance inconsistencies, lower than expected performance I've setup a CEPH cluster to tes

Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-06 Thread Marc Roos
> > > > > > > The adviced solution is to upgrade ceph only in HEALTH_OK state. And I > > also read somewhere that is bad to have your cluster for a long time in > > an HEALTH_ERR state. > > > > But why is this bad? > > Aside from the obvious (errors are bad things!), many people have > extern

Re: [ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-06 Thread Marc Roos
Thanks, interesting to read. So in luminous it is not really a problem. I was expecting to get into trouble with the monitors/mds, because my failover takes quite long and I thought it was related to the damaged pg. Luminous: "When the past intervals tracking structure was rebuilt around exactly t

Re: [ceph-users] help needed

2018-09-06 Thread Marc Roos
Do not use Samsung 850 PRO for journal Just use LSI logic HBA (eg. SAS2308) -Original Message- From: Muhammad Junaid [mailto:junaid.fsd...@gmail.com] Sent: donderdag 6 september 2018 13:18 To: ceph-users@lists.ceph.com Subject: [ceph-users] help needed Hi there Hope, every one wil

[ceph-users] Upgrading ceph with HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent

2018-09-05 Thread Marc Roos
The advised solution is to upgrade ceph only in HEALTH_OK state. And I also read somewhere that it is bad to have your cluster for a long time in a HEALTH_ERR state. But why is this bad? Why is this bad during upgrading? Can I quantify how bad it is? (like with a large log/journal file?) _

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-03 Thread Marc Roos
ewly added node has finished. -Original Message- From: Jack [mailto:c...@jack.fr.eu.org] Sent: zondag 2 september 2018 15:53 To: Marc Roos; ceph-users Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's Well, you have more than one pool here pg_num =

Re: [ceph-users] Luminous new OSD being over filled

2018-09-03 Thread Marc Roos
I am adding a node like this, I think it is more efficient, because in your case you will have data being moved within the added node (between the newly added osd's there). So far no problems with this. Maybe limit your ceph tell osd.* injectargs --osd_max_backfills=X Because pg's being move

Re: [ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
h does not spread object on a per-object basis, but on a pg-basis The data repartition is thus not perfect You may increase your pg_num, and/or use the mgr balancer module (http://docs.ceph.com/docs/mimic/mgr/balancer/) On 09/02/2018 01:28 PM, Marc Roos wrote: > > If I have only one rb

[ceph-users] 3x replicated rbd pool ssd data spread across 4 osd's

2018-09-02 Thread Marc Roos
If I have only one rbd ssd pool, 3x replicated, and 4 ssd osd's, why are these objects so unevenly spread across the four osd's? Should they all not have 162G? [@c01 ]# ceph osd status 2>&1 | id | host | used | a

[ceph-users] Adding node efficient data move.

2018-09-01 Thread Marc Roos
When adding a node, if I increment the crush weight like this, do I have the most efficient data transfer to the 4th node? sudo -u ceph ceph osd crush reweight osd.23 1 sudo -u ceph ceph osd crush reweight osd.24 1 sudo -u ceph ceph osd crush reweight osd.25 1 sudo -u ceph ceph osd crush rewei
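
One way to keep the data movement gradual when adding a node (a sketch; OSD ids, weights and step size are illustrative):

   # in ceph.conf on the new node, so new OSDs join with no data placement
   [osd]
   osd_crush_initial_weight = 0

   # throttle backfill while data moves
   ceph tell osd.* injectargs '--osd_max_backfills 1'

   # then raise each new OSD's crush weight in steps towards its final value
   sudo -u ceph ceph osd crush reweight osd.23 1
   sudo -u ceph ceph osd crush reweight osd.24 1
   sudo -u ceph ceph osd crush reweight osd.23 2   # repeat per OSD until full weight is reached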

Re: [ceph-users] Ceph Object Gateway Server - Hardware Recommendations

2018-08-31 Thread Marc Roos
Ok, from what I have learned so far from my own test environment (keep in mind I have had a test setup for only a year): the s3 rgw is not so demanding on latency, so you should be able to do fine with an hdd-only cluster. I guess my setup should be sufficient for what you need to have,

Re: [ceph-users] librmb: Mail storage on RADOS with Dovecot

2018-08-30 Thread Marc Roos
How is it going with this? Are we getting close to a state where we can store a mailbox on ceph with this librmb? -Original Message- From: Wido den Hollander [mailto:w...@42on.com] Sent: maandag 25 september 2017 9:20 To: Gregory Farnum; Danny Al-Gaaf Cc: ceph-users Subject: Re: [ce

[ceph-users] cephfs mount on osd node

2018-08-29 Thread Marc Roos
I have 3 node test cluster and I would like to expand this with a 4th node that is currently mounting the cephfs and rsync's backups to it. I can remember reading something about that you could create a deadlock situation doing this. What are the risks I would be taking if I would be doing

Re: [ceph-users] Cephfs slow 6MB/s and rados bench sort of ok.

2018-08-28 Thread Marc Roos
Thanks!!! https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46212.html echo 8192 >/sys/devices/virtual/bdi/ceph-1/read_ahead_kb -Original Message- From: Yan, Zheng [mailto:uker...@gmail.com] Sent: dinsdag 28 augustus 2018 15:44 To: Marc Roos Cc: ceph-users Subject: Re: [c
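
The readahead tweak from the link, as commands (the bdi name ceph-1 depends on the mount; the rasize mount option in the last comment is an assumption, not from this thread):

   # check the current readahead for the cephfs kernel mount
   cat /sys/devices/virtual/bdi/ceph-1/read_ahead_kb
   # raise it to 8 MiB (value in KiB; not persistent across remounts/reboots)
   echo 8192 > /sys/devices/virtual/bdi/ceph-1/read_ahead_kb
   # alternatively, the kernel client's rasize= mount option sets readahead at mount time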

[ceph-users] How to put ceph-fuse fstab remote path?

2018-08-28 Thread Marc Roos
kernel c01,c02,c03:/backup /home/backup ceph name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,_netdev 0 0 c01,c02,c03:/backup /home/backup2 fuse.ceph ceph.id=cephfs.backup,_netdev 0 0 Mounts root cephfs c01,c02,c03:/backup /home/backup2
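
A sketch of what the two fstab entries can look like (the kernel line follows the post; the fuse.ceph line using ceph.client_mountpoint for the remote path is an assumption based on the upstream documented form):

   c01,c02,c03:/backup  /home/backup   ceph       name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,_netdev  0 0
   none                 /home/backup2  fuse.ceph  ceph.id=cephfs.backup,ceph.client_mountpoint=/backup,_netdev,defaults  0 0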

Re: [ceph-users] Cephfs slow 6MB/s and rados bench sort of ok.

2018-08-28 Thread Marc Roos
Was there not some issue a while ago that was related to a kernel setting? Because I can remember doing some tests that ceph-fuse was always slower than the kernel module. -Original Message- From: Marc Roos Sent: dinsdag 28 augustus 2018 12:37 To: ceph-users; ifedotov Subject: Re

Re: [ceph-users] Cephfs slow 6MB/s and rados bench sort of ok.

2018-08-28 Thread Marc Roos
bench) 3) Just a single dd instance vs. 16 concurrent threads for rados bench. Thanks, Igor On 8/28/2018 12:50 PM, Marc Roos wrote: > I have a idle test cluster (centos7.5, Linux c04 > 3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs. > > I tested reading a few fil

[ceph-users] Cephfs slow 6MB/s and rados bench sort of ok.

2018-08-28 Thread Marc Roos
I have a idle test cluster (centos7.5, Linux c04 3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs. I tested reading a few files on this cephfs mount and get very low results compared to the rados bench. What could be the issue here? [@client folder]# dd if=5GB.img of=/dev/null st

Re: [ceph-users] Design a PetaByte scale CEPH object storage

2018-08-27 Thread Marc Roos
> I am a software developer and am new to this domain. So maybe first get some senior system admin or so? You also do not want me to start doing some amateur brain surgery, do you? > each file has approx 15 TB Pfff, maybe rethink/work this to -Original Message- From: Jame

Re: [ceph-users] Stability Issue with 52 OSD hosts

2018-08-24 Thread Marc Roos
Can this be related to numa issues? I have also dual processor nodes, and was wondering if there is some guide on how to optimize for numa. -Original Message- From: Tyler Bishop [mailto:tyler.bis...@beyondhosting.net] Sent: vrijdag 24 augustus 2018 3:11 To: Andras Pataki Cc: ceph-u

Re: [ceph-users] HDD-only CephFS cluster with EC and without SSD/NVMe

2018-08-22 Thread Marc Roos
I also have 2+1 (still only 3 nodes), and 3x replicated. I also moved the meta data pool to ssds. What is nice with cephfs is that you can have folders in your filesystem on the ec21 pool for not-so-important data and the rest will be 3x replicated. I think the single session performance is not

[ceph-users] backporting to luminous librgw: export multitenancy support

2018-08-21 Thread Marc Roos
Can this be added to luminous? https://github.com/ceph/ceph/pull/19358 ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Set existing pools to use hdd device class only

2018-08-20 Thread Marc Roos
I just recently did the same. Take into account that everything starts migrating. However weird it may be, I had an hdd-only test cluster and changed the crush rule to hdd, and it took a few days; totally unnecessary as far as I am concerned. -Original Message- From: Enrico Kern [mailto:en

Re: [ceph-users] Silent data corruption may destroy all the object copies after data migration

2018-08-19 Thread Marc Roos
"one OSD's data to generate three copies on new failure domain" because ceph assumes it is correct. Get the pg's that are going to be moved and scrub them? I think the problem is more why these objects are inconsistent before you even do the migration -Original Message- From: poi [

[ceph-users] upgraded centos7 (not collectd nor ceph) now json failed error

2018-08-15 Thread Marc Roos
I upgraded centos7, not ceph nor collectd. Ceph was already 12.2.7 and collectd was already 5.8.0-2 (and collectd-ceph-5.8.0-2) Now I have this error: Aug 14 22:43:34 c01 collectd[285425]: ceph plugin: ds FinisherPurgeQueue.queueLen was not properly initialized. Aug 14 22:43:34 c01 collectd[

Re: [ceph-users] Enable daemonperf - no stats selected by filters

2018-08-15 Thread Marc Roos
Original Message- From: Marc Roos Sent: dinsdag 31 juli 2018 9:24 To: jspray Cc: ceph-users Subject: Re: [ceph-users] Enable daemonperf - no stats selected by filters Luminous 12.2.7 [@c01 ~]# rpm -qa | grep ceph- ceph-mon-12.2.7-0.el7.x86_64 ceph-selinux-12.2.7-0.el7.x86_64 ceph-osd-12.2.7-0

[ceph-users] rhel/centos7 spectre meltdown experience

2018-08-14 Thread Marc Roos
Did anyone notice any performance loss on osd, mon, rgw nodes because of the spectre/meltdown updates? What is general practice concerning these updates? Sort of follow up on this discussion. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg43136.html https://access.redhat.com/arti

[ceph-users] FW:Nfs-ganesha rgw multi user/ tenant

2018-08-06 Thread Marc Roos
Is anyone using nfs-ganesha in a rgw multi user / tenant environment? I recently upgraded to nfs-ganesha 2.6 / luminous 12.2.7 ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Cephfs meta data pool to ssd and measuring performance difference

2018-08-03 Thread Marc Roos
l.com] Sent: maandag 30 juli 2018 14:23 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Cephfs meta data pool to ssd and measuring performance difference Something like smallfile perhaps? https://github.com/bengland2/smallfile Or you just time creating/reading lots of files With read ben

[ceph-users] Remove host weight 0 from crushmap

2018-08-01 Thread Marc Roos
Is there already a command to remove a host from the crush map (like ceph osd crush rm osd.23), without having to 'manually' edit the crush map? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.
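
Assuming the OSDs under the host have already been removed (so only an empty, weight-0 host bucket remains), a sketch with an illustrative hostname:

   ceph osd crush rm c04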

[ceph-users] fyi: Luminous 12.2.7 pulled wrong osd disk, resulted in node down

2018-08-01 Thread Marc Roos
Today we pulled the wrong disk from a ceph node. And that made the whole node go down/be unresponsive. Even to a simple ping. I cannot find to much about this in the log files. But I expect that the /usr/bin/ceph-osd process caused a kernel panic. Linux c01 3.10.0-693.11.1.el7.x86_64 CentOS

Re: [ceph-users] Enable daemonperf - no stats selected by filters

2018-07-31 Thread Marc Roos
-12.2.7-0.el7.x86_64 -Original Message- From: John Spray [mailto:jsp...@redhat.com] Sent: dinsdag 31 juli 2018 0:35 To: Marc Roos Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Enable daemonperf - no stats selected by filters On Mon, Jul 30, 2018 at 10:27 PM Marc Roos wrote

[ceph-users] Enable daemonperf - no stats selected by filters

2018-07-30 Thread Marc Roos
Do you need to enable the option daemonperf? [@c01 ~]# ceph daemonperf mds.a Traceback (most recent call last): File "/usr/bin/ceph", line 1122, in retval = main() File "/usr/bin/ceph", line 822, in main done, ret = maybe_daemon_command(parsed_args, childargs) File "/usr/bin/ceph"

[ceph-users] Cephfs meta data pool to ssd and measuring performance difference

2018-07-25 Thread Marc Roos
From this thread, I got how to move the meta data pool from the hdd's to the ssd's. https://www.spinics.net/lists/ceph-users/msg39498.html ceph osd pool get fs_meta crush_rule ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd I guess this can be done on a live system? What would b
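
A sketch of the full sequence, including creating a device-class based rule (rule name illustrative; the pool get/set lines are from the linked thread). Switching the rule on a live system works, but it remaps the pool's PGs:

   ceph osd crush rule create-replicated replicated_ruleset_ssd default host ssd
   ceph osd pool get fs_meta crush_rule
   ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd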

Re: [ceph-users] ceph cluster monitoring tool

2018-07-24 Thread Marc Roos
Just use collectd to start with. That is easiest with influxdb. However do not expect to much of the support on influxdb. -Original Message- From: Satish Patel [mailto:satish@gmail.com] Sent: dinsdag 24 juli 2018 7:02 To: ceph-users Subject: [ceph-users] ceph cluster monitoring to

Re: [ceph-users] Why lvm is recommended method for bleustore

2018-07-22 Thread Marc Roos
I don’t think it will get any more basic than that. Or maybe this? If the doctor diagnoses you, you can either accept this, get a 2nd opinion, or study medicine to verify it. In short lvm has been introduced to solve some issues related to starting osd's (which I did not have, probably bec

[ceph-users] Issues/questions: ceph df (luminous 12.2.7)

2018-07-21 Thread Marc Roos
1. Why is ceph df not always showing 'units' G M k [@c01 ~]# ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED 81448G 31922G 49526G 60.81 POOLS: NAME ID USED %USED MAX AVAIL OBJECTS iscsi-images

Re: [ceph-users] Converting to BlueStore, and external journal devices

2018-07-20 Thread Marc Roos
I had similar question a while ago, maybe these you want to read. https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46799.html -Original Message- From: Satish Patel [mailto:satish@gmail.com] Sent: vri

Re: [ceph-users] Pool size (capacity)

2018-07-20 Thread Marc Roos
That is the used column not? [@c01 ~]# ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED G G G 60.78 POOLS: NAME ID USED %USED MAX AVAIL OBJECTS iscsi-images 16

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-07-17 Thread Marc Roos
Shalygin; ceph-users@lists.ceph.com; Marc Roos Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access) I'll chime in as a large scale operator, and a strong proponent of ceph-volume. Ceph-disk wasn't accomplishing what

Re: [ceph-users] ls operation is too slow in cephfs

2018-07-17 Thread Marc Roos
I had a similar thing with doing the ls. Increasing the cache limit helped with our test cluster: mds_cache_memory_limit = 80 -Original Message- From: Surya Bala [mailto:sooriya.ba...@gmail.com] Sent: dinsdag 17 juli 2018 11:39 To: Anton Aleksandrov Cc: ceph-users@lists.ceph.
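
A sketch of raising the MDS cache limit (the value is in bytes and illustrative; the "80" in the quote above is truncated):

   # ceph.conf on the MDS host
   [mds]
   mds_cache_memory_limit = 8589934592

   # or inject at runtime
   ceph tell mds.a injectargs '--mds_cache_memory_limit 8589934592'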

[ceph-users] move rbd image (with snapshots) to different pool

2018-06-15 Thread Marc Roos
If I would like to copy/move an rbd image, this is the only option I have? (Want to move an image from a hdd pool to an ssd pool) rbd clone mypool/parent@snap otherpool/child ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.co
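
A sketch of the clone route and an export/import alternative (pool and image names follow the post; note that a clone only captures the state of the chosen snapshot, existing snapshots are not carried along):

   rbd snap create mypool/parent@snap
   rbd snap protect mypool/parent@snap
   rbd clone mypool/parent@snap otherpool/child
   rbd flatten otherpool/child        # detach the child from the parent snapshot
   # alternative: full copy via export/import
   rbd export mypool/parent - | rbd import - otherpool/parent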

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
This is actually not to nice, because this remapping is now causing a nearfull -Original Message- From: Dan van der Ster [mailto:d...@vanderster.com] Sent: woensdag 13 juni 2018 14:02 To: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
Yes thanks, I know, I will change it when I get an extra node. -Original Message- From: Paul Emmerich [mailto:paul.emmer...@croit.io] Sent: woensdag 13 juni 2018 16:33 To: Marc Roos Cc: ceph-users; k0ste Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map clas

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
: Marc Roos Sent: woensdag 13 juni 2018 7:14 To: ceph-users; k0ste Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary? I just added here 'class hdd' rule fs_data.ec21 { id 4 type erasure min_size 3 max_size

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
step emit } -Original Message- From: Konstantin Shalygin [mailto:k0...@k0ste.ru] Sent: woensdag 13 juni 2018 12:30 To: Marc Roos; ceph-users Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary? On 06/13/2018 12:06 PM,

Re: [ceph-users] *****SPAM***** Re: Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
Shit, I added this class and now everything starts backfilling (10%). How is this possible, I only have hdd's? -Original Message- From: Konstantin Shalygin [mailto:k0...@k0ste.ru] Sent: woensdag 13 juni 2018 9:26 To: Marc Roos; ceph-users Subject: *SPAM* Re: [ceph-users

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-13 Thread Marc Roos
file system. -Original Message- From: Konstantin Shalygin [mailto:k0...@k0ste.ru] Sent: woensdag 13 juni 2018 5:59 To: ceph-users@lists.ceph.com Cc: Marc Roos Subject: *SPAM* Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary? > Is it nece

Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-12 Thread Marc Roos
0 type osd step emit } -Original Message- From: Marc Roos Sent: dinsdag 12 juni 2018 17:07 To: ceph-users Subject: [ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary? Is it necessary to update the crush map with class hdd Before adding ssd&

[ceph-users] Add ssd's to hdd cluster, crush map class hdd update necessary?

2018-06-12 Thread Marc Roos
Is it necessary to update the crush map with class hdd before adding ssd's to the cluster? ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
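
For inspecting what a class-based rule would reference, a sketch (commands assume a luminous or newer cluster; rule name illustrative). As the follow-ups in this thread show, switching existing pools to a class-restricted rule triggers backfilling even on an hdd-only cluster:

   ceph osd crush class ls
   ceph osd crush tree --show-shadow
   ceph osd crush rule create-replicated replicated_hdd default host hdd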

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-06-08 Thread Marc Roos
se LVM, and stick with direct disk access ? - what are the cost of LVM (performance, latency etc) ? Answers: - unify setup, support for crypto & more - none Tldr: that technical choice is fine, nothing to argue about. On 06/08/2018 07:15 AM, Marc Roos wrote: > > I am getting the i

Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm? (and just not stick with direct disk access)

2018-06-08 Thread Marc Roos
I am getting the impression that not everyone understands the subject that has been raised here. Why do osd's need to be via lvm, and why not stick with direct disk access as it is now? - Bluestore is created to cut out some fs overhead, - everywhere 10Gb is recommended because of better lat

[ceph-users] Stop scrubbing

2018-06-05 Thread Marc Roos
Is it possible to stop the current running scrubs/deep-scrubs? http://tracker.ceph.com/issues/11202 ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
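
A common way to stop new scrubs from being scheduled (a sketch; a scrub already running on a PG will still finish, which is what the linked tracker issue is about):

   ceph osd set noscrub
   ceph osd set nodeep-scrub
   # re-enable later
   ceph osd unset noscrub
   ceph osd unset nodeep-scrub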

Re: [ceph-users] Bug? ceph-volume zap not working

2018-06-02 Thread Marc Roos
: Marc Roos Cc: ceph-users Subject: Re: [ceph-users] Bug? ceph-volume zap not working Ceph-disk didn't remove an osd from the cluster either. That has never been a thing for ceph-disk or ceph-volume. There are other commands for that. On Sat, Jun 2, 2018, 4:29 PM Marc Roos

Re: [ceph-users] Bug? ceph-volume zap not working

2018-06-02 Thread Marc Roos
But it still leaves entries in the crush map, and maybe also in ceph auth ls, and the dir in /var/lib/ceph/osd -Original Message- From: Oliver Freyermuth [mailto:freyerm...@physik.uni-bonn.de] Sent: zaterdag 2 juni 2018 18:29 To: Marc Roos; ceph-users Subject: Re: [ceph-users] Bug? ceph-volume
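
A sketch of the cluster-side cleanup that zap does not do (osd id and device taken from the post):

   ceph osd out 19
   systemctl stop ceph-osd@19
   ceph osd purge 19 --yes-i-really-mean-it   # removes the crush entry, auth key and osd id
   rm -rf /var/lib/ceph/osd/ceph-19           # leftover mount dir on the host
   ceph-volume lvm zap /dev/sdf               # newer ceph-volume also accepts --destroy to remove the LVs/partitions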

Re: [ceph-users] Should ceph-volume lvm prepare not be backwards compitable with ceph-disk?

2018-06-02 Thread Marc Roos
>> >> >> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume >> does > >I believe that's expected when you use "prepare". >For ceph-volume, "prepare" already bootstraps the OSD and fetches a fresh OSD id, for which it needs the keyring. >For ceph-disk, this was not par

[ceph-users] Bug? Ceph-volume /var/lib/ceph/osd permissions

2018-06-02 Thread Marc Roos
o+w? I don’t think that is necessary not? drwxr-xr-x 2 ceph ceph 182 May 9 12:59 ceph-15 drwxr-xr-x 2 ceph ceph 182 May 9 20:51 ceph-14 drwxr-xr-x 2 ceph ceph 182 May 12 10:32 ceph-16 drwxr-xr-x 2 ceph ceph 6 Jun 2 17:21 ceph-19 drwxr-x--- 13 ceph ceph 168 Jun 2 17:47 . drwxrwxrwt 2 ce

Re: [ceph-users] Bug? ceph-volume zap not working

2018-06-02 Thread Marc Roos
ev/sdf -Original Message- From: Marc Roos Sent: zaterdag 2 juni 2018 12:17 To: ceph-users Subject: [ceph-users] Bug? ceph-volume zap not working I guess zap should be used instead of destroy? Maybe keep ceph-disk backwards compatibility and keep destroy?? [root@c03 bootstrap-osd]# ceph-volume lvm za

[ceph-users] Bug? ceph-volume zap not working

2018-06-02 Thread Marc Roos
I guess zap should be used instead of destroy? Maybe keep ceph-disk backwards compatibility and keep destroy?? [root@c03 bootstrap-osd]# ceph-volume lvm zap /dev/sdf --> Zapping: /dev/sdf --> Unmounting /var/lib/ceph/osd/ceph-19 Running command: umount -v /var/lib/ceph/osd/ceph-19 stderr: umou

[ceph-users] Bug? if ceph-volume fails, it does not clean up created osd auth id

2018-06-02 Thread Marc Roos
[@ bootstrap-osd]# ceph-volume lvm prepare --bluestore --data /dev/sdf Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c32036fe-ca0b-47d1-be3f-e28943ee3a97
