I just did a very short test and don't see any difference with this
cache on or off, so I am leaving it on for now.
-Original Message-
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Sunday 11 November 2018 11:43
To: Marc Roos
Cc: ceph-users; vitalif
Subject: Re
Does it make sense to test disabling this on an hdd cluster only?
-Original Message-
From: Ashley Merrick [mailto:singap...@amerrick.co.uk]
Sent: Sunday 11 November 2018 6:24
To: vita...@yourcmc.ru
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Disabling write cache on SATA HDDs
nich HRB 231263
Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx
2018-11-08 10:35 GMT+01:00 Matthew Vernon :
> On 08/11/2018 09:17, Marc Roos wrote:
>>
>> And that is why I don't like ceph-deploy. Unless you have maybe
>> hundreds of disks, I don’t see why you cannot
g
here. I doubt if ceph-deploy is even much faster.
-Original Message-
From: Matthew Vernon [mailto:m...@sanger.ac.uk]
Sent: Thursday 8 November 2018 10:36
To: ceph-users@lists.ceph.com
Cc: Marc Roos
Subject: Re: [ceph-users] ceph 12.2.9 release
On 08/11/2018 09:17, Marc Roos wrote:
@lists.ceph.com
Subject: Re: [ceph-users] ceph 12.2.9 release
On Wednesday 07/11/2018 at 11:28, Matthew Vernon wrote:
> On 07/11/2018 14:16, Marc Roos wrote:
> >
> >
> > I don't see the problem. I am installing only the ceph updates when
> > others have
I don't see the problem. I install the ceph updates only after others
have done this and have been running several weeks without problems. I
noticed this 12.2.9 availability also, but did not see any release
notes, so why install it? Especially with the recent issues of other
releases.
That bei
Why slack anyway?
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Thursday 11 October 2018 5:11
To: ceph-users@lists.ceph.com
Subject: *SPAM* Re: [ceph-users] https://ceph-storage.slack.com
> why would a ceph slack be invite only?
Because this is
Luminous also does not have an updated librgw, which prevents ganesha
from using the multi-tenancy mounts. Especially with the current issues
of mimic, it would be nice if this could be made available in luminous.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg48659.html
https://gith
That is easy I think, so I will give it a try: faster CPUs, fast NVMe
disks, all 10Gbit or even better 100Gbit, plus a daily prayer.
-Original Message-
From: Tomasz Płaza [mailto:tomasz.pl...@grupawp.pl]
Sent: Monday 8 October 2018 7:46
To: ceph-users@lists.ceph.com
Sub
-AES256-GCM-SHA384
-Original Message-
From: Vasiliy Tolstov [mailto:v.tols...@selfip.ru]
Sent: Saturday 6 October 2018 16:34
To: Marc Roos
Cc: ceph-users@lists.ceph.com; elias.abacio...@deltaprojects.com
Subject: *SPAM* Re: [ceph-users] list admin issues
Sat, 6 Oct 2018 at 16:48
Maybe ask gmail first?
-Original Message-
From: Elias Abacioglu [mailto:elias.abacio...@deltaprojects.com]
Sent: Saturday 6 October 2018 15:07
To: ceph-users
Subject: Re: [ceph-users] list admin issues
Hi,
I'm bumping this old thread cause it's getting annoying. My membership
get
losed (con
state CONNECTING)
..
..
..
-Original Message-
From: John Spray [mailto:jsp...@redhat.com]
Sent: Thursday 27 September 2018 11:43
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Cannot write to cephfs if some osd's are not
available on the client
It was not my first intention to host vm's on osd nodes of the ceph
cluster. But since this test cluster is not doing anything, I might as
well use some of the cores.
Currently I have configured a macvtap on the ceph client network
configured as a vlan. Disadvantage is that the local osd's ca
p and move the file to a 3x replicated
pool, I assume my data is moved there and more secure.
-Original Message-
From: Janne Johansson [mailto:icepic...@gmail.com]
Sent: Tuesday 2 October 2018 15:44
To: jsp...@redhat.com
Cc: Marc Roos; Ceph Users
Subject: Re: [ceph-users] cephfs issue w
edhat.com]
Sent: Monday 1 October 2018 21:28
To: Marc Roos
Cc: ceph-users; jspray; ukernel
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
Moving a file into a directory with a different layout does not, and is
not intended to, copy the un
sdf
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Saturday 29 September 2018 6:55
To: Marc Roos
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
check_pool_perm on pool 30 ns need Fr, but no read perm
client does
How do you test this? I have had no issues under "normal load" with an
old kernel client and a stable os.
CentOS Linux release 7.5.1804 (Core)
Linux c04 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018
x86_64 x86_64 x86_64 GNU/Linux
-Original Message-
From: Andras
dag 28 september 2018 15:45
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs issue with moving files between data
pools gives Input/output error
On Fri, Sep 28, 2018 at 2:28 PM Marc Roos
wrote:
>
>
> Looks like that if I move files between different da
If I copy the file out6 to out7 in the same location, I can read the
out7 file on the nfs client.
-Original Message-
To: ceph-users
Subject: [ceph-users] cephfs issue with moving files between data pools
gives Input/output error
Looks like that if I move files between different dat
Looks like that if I move files between different data pools of the
cephfs, something is still referring to the 'old location' and gives an
Input/output error. I assume this because I am using different client
ids for authentication.
With the same user as configured in ganesha, mounting (ker
If I add a file to the cephfs on one client, and the cephfs is exported
via ganesha and nfs mounted somewhere else, I can see it in the dir
listing on the other nfs client. But trying to read it gives an
Input/output error. Other files (older ones in the same dir) I can read.
Anyone had this also?
nfs
I have a test cluster and on an osd node I put a vm. The vm is using a
macvtap on the client network interface of the osd node, making access
to local osd's impossible.
The vm of course reports that it cannot access the local osd's. What I
am getting is:
- I cannot reboot this vm normally, ne
And where is the manual for bluestore?
-Original Message-
From: mj [mailto:li...@merit.unu.edu]
Sent: Tuesday 25 September 2018 9:56
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG inconsistent, "pg repair" not working
Hi,
I was able to solve a similar issue on our cluste
h tunables you can check out the ceph wiki [2]
here.
[1]
ceph osd set-require-min-compat-client hammer
ceph osd crush set-all-straw-buckets-to-straw2
ceph osd crush tunables hammer
[2] http://docs.ceph.com/docs/master/rados/operations/crush-map/
-Original Message-
From: Marc Roos
Sent: d
When running ./do_cmake.sh, I get
fatal: destination path '/Users/mac/ceph/src/zstd' already exists and is
not an empty directory.
fatal: clone of 'https://github.com/facebook/zstd' into submodule path
'/Users/mac/ceph/src/zstd' failed
Failed to clone 'src/zstd'. Retry scheduled
fatal: desti
Has anyone been able to build according to this manual? Because here it
fails.
http://docs.ceph.com/docs/mimic/dev/macos/
I have prepared macos as it is described; it took 2h to build this llvm,
is that really necessary?
I do the
git clone --single-branch -b mimic https://github.com/ceph/ceph
I have been trying to do this on a sierra vm, with xcode 9.2 installed.
I had to modify this ceph-fuse.rb and copy it to the folder
/usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/Formula/ (it was
not there, is that correct?)
But now I get the error
make: *** No rule to make target `rados'.
Just curious, is anyone running mesos on ceph nodes?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
I agree. I was on centos7.4 and updated to, I think, luminous 12.2.7,
and had something not working related to some python dependency. This
was resolved by upgrading to centos7.5
-Original Message-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Friday 14 September 2018 15
ssage-
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Wednesday 12 September 2018 18:20
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Performance predictions moving bluestore wall,
db to ssd
You already have a thread talking about benchmarking the addition of WAL
and DB parti
When having a hdd bluestore osd with collocated wal and db.
- What performance increase can be expected if one would move the wal to
an ssd?
- What performance increase can be expected if one would move the db to
an ssd?
- Would the performance be a lot if you have a very slow hdd (and thu
Is this osxfuse, the only and best performing way to mount a ceph
filesystem on an osx client?
http://docs.ceph.com/docs/mimic/dev/macos/
I am now testing cephfs performance on a client with the fio libaio
engine. This engine does not exist on osx, but there is a posixaio. Does
anyone have ex
I am new to using the balancer; I think this should generate a plan,
no? I do not get what this error is about.
[@c01 ~]# ceph balancer optimize balancer-test.plan
Error EAGAIN: compat weight-set not available
I guess good luck. Maybe you can ask these guys to hurry up and get
something production ready.
https://github.com/ceph-dovecot/dovecot-ceph-plugin
-Original Message-
From: marc-antoine desrochers
[mailto:marc-antoine.desroch...@sogetel.com]
Sent: Monday 10 September 2018 14:40
To
I was thinking of upgrading luminous to mimic, but does anyone have
mimic running with collectd and the ceph plugin?
When luminous was introduced it took almost half a year before collectd
supported it.
I have only 2 scrubs running on hdd's, but they keep the drives in a
high busy state. I did not notice this before; did some setting change?
Because I can remember dstat listing 14MB/s-20MB/s and not 60MB/s
DSK | sdd | busy 95% | read 1384 | write 92 | KiB/r 292 | KiB/w
the samsung sm863.
write-4k-seq: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T)
4096B-4096B, ioengine=libaio, iodepth=1
randwrite-4k-seq: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W)
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
read-4k-seq: (g=2): rw=read, bs=(R) 409
To add a data pool to an existing cephfs
ceph osd pool set fs_data.ec21 allow_ec_overwrites true
ceph osd pool application enable fs_data.ec21 cephfs
ceph fs add_data_pool cephfs fs_data.ec21
Then link the pool to the directory (ec21)
setfattr -n ceph.dir.layout.pool -v fs_data.ec21 ec21
---
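The command sequence above can be parameterized in a small helper. This is an echo-only sketch: it just prints the commands so they can be reviewed before running them on a cluster node; the pool and directory names are the examples from the message, and the helper name is made up.

```shell
# Sketch: print the commands for attaching an EC data pool to a cephfs
# directory. Echo-only dry run; nothing here touches a cluster.
add_ec_data_pool() {
  pool=$1
  dir=$2
  echo "ceph osd pool set $pool allow_ec_overwrites true"
  echo "ceph osd pool application enable $pool cephfs"
  echo "ceph fs add_data_pool cephfs $pool"
  echo "setfattr -n ceph.dir.layout.pool -v $pool $dir"
}

add_ec_data_pool fs_data.ec21 ec21
```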
:
ceph tell osd.* injectargs --osd_max_backfills=0
Again getting slower towards the end.
Bandwidth (MB/sec): 395.749
Average Latency(s): 0.161713
-Original Message-
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 16:56
To: Marc Roos; ceph-users
Subject:
Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:52
To: Marc Roos; ceph-users
Subject: RE: [ceph-users] Rados performance inconsistencies, lower than
expected performance
ah yes, 3x replicated with min_size 2.
my ceph.conf is pretty bare, just in case it might be rel
Test pool is 3x replicated?
-Original Message-
From: Menno Zonneveld [mailto:me...@1afa.com]
Sent: Thursday 6 September 2018 15:29
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Rados performance inconsistencies, lower than
expected performance
I've setup a CEPH cluster to tes
>
> >
> >
> > The advised solution is to upgrade ceph only in HEALTH_OK state. And I
> > also read somewhere that it is bad to have your cluster for a long
> > time in a HEALTH_ERR state.
> >
> > But why is this bad?
>
> Aside from the obvious (errors are bad things!), many people have
> extern
Thanks, interesting to read. So in luminous it is not really a problem.
I was expecting to get into trouble with the monitors/mds, because my
failover takes quite long, and thought it was related to the damaged pg.
Luminous: "When the past intervals tracking structure was rebuilt around
exactly t
Do not use Samsung 850 PRO for journal
Just use LSI logic HBA (eg. SAS2308)
-Original Message-
From: Muhammad Junaid [mailto:junaid.fsd...@gmail.com]
Sent: Thursday 6 September 2018 13:18
To: ceph-users@lists.ceph.com
Subject: [ceph-users] help needed
Hi there
Hope, every one wil
The advised solution is to upgrade ceph only in HEALTH_OK state. And I
also read somewhere that it is bad to have your cluster for a long time
in a HEALTH_ERR state.
But why is this bad?
Why is this bad during upgrading?
Can I quantify how bad it is? (like with a large log/journal file?)
ewly added
node has finished.
-Original Message-
From: Jack [mailto:c...@jack.fr.eu.org]
Sent: Sunday 2 September 2018 15:53
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] 3x replicated rbd pool ssd data spread across
4 osd's
Well, you have more than one pool here
pg_num =
I am adding a node like this; I think it is more efficient, because in
your case you will have data being moved within the added node (between
the newly added osd's there). So far no problems with this.
Maybe limit your
ceph tell osd.* injectargs --osd_max_backfills=X
Because pg's being move
h does not spread objects on a per-object basis, but on a pg-basis.
The data repartition is thus not perfect. You may increase your pg_num,
and/or use the mgr balancer module
(http://docs.ceph.com/docs/mimic/mgr/balancer/)
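A toy illustration of the per-PG point above (this is plain hashing, not real CRUSH; pool size, osd names, and counts are all made up): objects hash evenly into PGs, but whole PGs land on OSDs, so per-OSD usage is only as even as the PG-to-OSD layout.

```python
import hashlib
from collections import Counter

PG_NUM = 10
OSDS = ["osd.0", "osd.1", "osd.2", "osd.3"]

def pg_of(obj_name: str) -> int:
    # object -> PG by hash, roughly like ceph's (hash % pg_num)
    return int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % PG_NUM

# Pretend placement: 10 PGs assigned round-robin over 4 OSDs, so osd.0
# and osd.1 hold 3 PGs each while osd.2 and osd.3 hold only 2.
pg_to_osd = {pg: OSDS[pg % len(OSDS)] for pg in range(PG_NUM)}

per_osd = Counter(pg_to_osd[pg_of(f"obj-{i}")] for i in range(10000))
print(per_osd)  # objects hash evenly, yet per-OSD totals are uneven
```

More PGs make the per-OSD totals smoother, which is the intuition behind raising pg_num or letting the balancer shuffle PGs.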
On 09/02/2018 01:28 PM, Marc Roos wrote:
>
> If I have only one rb
If I have only one rbd ssd pool, 3x replicated, and 4 ssd osd's, why
are these objects so unevenly spread across the four osd's? Should they
not all have 162G?
[@c01 ]# ceph osd status 2>&1
+----+------+------+---...
| id | host | used | a
When adding a node and incrementing the crush weight like this, do I
get the most efficient data transfer to the 4th node?
sudo -u ceph ceph osd crush reweight osd.23 1
sudo -u ceph ceph osd crush reweight osd.24 1
sudo -u ceph ceph osd crush reweight osd.25 1
sudo -u ceph ceph osd crush rewei
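The stepwise reweight above can be sketched as a loop, bracketed by the backfill throttle mentioned elsewhere in the thread. This is an echo-only dry run: the osd ids, step sizes, throttle values, and helper name are examples, not settings from the thread; review the printed commands before running any of them.

```shell
# Dry run: print a gradual crush reweight plan for newly added OSDs,
# with an example backfill throttle around it. Echo-only.
reweight_plan() {
  echo "ceph tell osd.* injectargs --osd_max_backfills=1"
  for w in 0.25 0.5 0.75 1.0; do
    for id in "$@"; do
      echo "sudo -u ceph ceph osd crush reweight osd.$id $w"
    done
  done
  echo "ceph tell osd.* injectargs --osd_max_backfills=3"
}

reweight_plan 23 24 25 26
```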
Ok, from what I have learned so far from my own test environment (keep
in mind I have had a test setup for only a year): the s3 rgw does not
really require low latency, so you should be able to do fine with an
hdd-only cluster. I guess my setup should be sufficient for what you
need to have,
How is it going with this? Are we getting close to a state where we can
store a mailbox on ceph with this librmb?
-Original Message-
From: Wido den Hollander [mailto:w...@42on.com]
Sent: Monday 25 September 2017 9:20
To: Gregory Farnum; Danny Al-Gaaf
Cc: ceph-users
Subject: Re: [ce
I have a 3 node test cluster and I would like to expand this with a 4th
node that currently mounts the cephfs and rsync's backups to it. I can
remember reading something about how you could create a deadlock
situation doing this.
What are the risks I would be taking if I would be doing
Thanks!!!
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46212.html
echo 8192 >/sys/devices/virtual/bdi/ceph-1/read_ahead_kb
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Tuesday 28 August 2018 15:44
To: Marc Roos
Cc: ceph-users
Subject: Re: [c
kernel
c01,c02,c03:/backup /home/backup ceph
name=cephfs.backup,secretfile=/root/client.cephfs.backup.key,_netdev 0 0
c01,c02,c03:/backup /home/backup2 fuse.ceph
ceph.id=cephfs.backup,_netdev 0 0
Mounts root cephfs
c01,c02,c03:/backup /home/backup2
Was there not some issue a while ago that was related to a kernel
setting? Because I can remember doing some tests where ceph-fuse was
always slower than the kernel module.
-Original Message-
From: Marc Roos
Sent: Tuesday 28 August 2018 12:37
To: ceph-users; ifedotov
Subject: Re
bench)
3) Just a single dd instance vs. 16 concurrent threads for rados bench.
Thanks,
Igor
On 8/28/2018 12:50 PM, Marc Roos wrote:
> I have a idle test cluster (centos7.5, Linux c04
> 3.10.0-862.9.1.el7.x86_64), and a client kernel mount cephfs.
>
> I tested reading a few fil
I have an idle test cluster (centos7.5, Linux c04
3.10.0-862.9.1.el7.x86_64), and a client kernel mount of cephfs.
I tested reading a few files on this cephfs mount and got very low
results compared to the rados bench. What could be the issue here?
[@client folder]# dd if=5GB.img of=/dev/null st
> I am a software developer and am new to this domain.
So maybe first get some senior system admin or so? You also do not want
me to start doing some amateur brain surgery, do you?
> each file has approx 15 TB
Pfff, maybe rethink/work this to
-Original Message-
From: Jame
Can this be related to numa issues? I also have dual processor nodes,
and was wondering if there is some guide on how to optimize for numa.
-Original Message-
From: Tyler Bishop [mailto:tyler.bis...@beyondhosting.net]
Sent: Friday 24 August 2018 3:11
To: Andras Pataki
Cc: ceph-u
I also have 2+1 (still only 3 nodes), and 3x replicated. I also moved
the metadata pool to ssds.
What is nice with cephfs is that you can have folders in your
filesystem on the ec21 pool for not-so-important data, while the rest
is 3x replicated.
I think the single session performance is not
Can this be added to luminous?
https://github.com/ceph/ceph/pull/19358
I just recently did the same. Take into account that everything starts
migrating. However weird it may be, I had an hdd-only test cluster and
changed the crush rule to hdd; it took a few days, totally unnecessary
as far as I am concerned.
-Original Message-
From: Enrico Kern [mailto:en
"one OSD's data to generate three copies on new failure domain" because
ceph assumes it is correct.
Get the pg's that are going to be moved and scrub them?
I think the problem is more why these objects are inconsistent before
you even do the migration
-Original Message-
From: poi [
I upgraded centos7, not ceph nor collectd. Ceph was already 12.2.7 and
collectd was already 5.8.0-2 (and collectd-ceph-5.8.0-2)
Now I have this error:
Aug 14 22:43:34 c01 collectd[285425]: ceph plugin: ds
FinisherPurgeQueue.queueLen was not properly initialized.
Aug 14 22:43:34 c01 collectd[
Original Message-
From: Marc Roos
Sent: Tuesday 31 July 2018 9:24
To: jspray
Cc: ceph-users
Subject: Re: [ceph-users] Enable daemonperf - no stats selected by
filters
Luminous 12.2.7
[@c01 ~]# rpm -qa | grep ceph-
ceph-mon-12.2.7-0.el7.x86_64
ceph-selinux-12.2.7-0.el7.x86_64
ceph-osd-12.2.7-0
Did anyone notice any performance loss on osd, mon, rgw nodes because of
the spectre/meltdown updates? What is general practice concerning these
updates?
Sort of follow up on this discussion.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg43136.html
https://access.redhat.com/arti
Is anyone using nfs-ganesha in a rgw multi user / tenant environment?
I recently upgraded to nfs-ganesha 2.6 / luminous 12.2.7
l.com]
Sent: Monday 30 July 2018 14:23
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Cephfs meta data pool to ssd and measuring
performance difference
Something like smallfile perhaps? https://github.com/bengland2/smallfile
Or you just time creating/reading lots of files
With read ben
Is there already a command to remove a host from the crush map (like
ceph osd crush rm osd.23), without having to 'manually' edit the crush
map?
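For what it's worth, `ceph osd crush rm` accepts bucket names as well as osds, so an emptied host bucket should be removable without a manual crush edit. An echo-only dry-run sketch (the host name, osd ids, and helper name are made-up examples; verify against your own crush tree first):

```shell
# Dry run: print the commands to drop a host bucket from the crush map.
# The bucket must be emptied of its osds first. Echo-only.
host_crush_rm_plan() {
  host=$1; shift
  for id in "$@"; do
    echo "ceph osd crush rm osd.$id"
  done
  echo "ceph osd crush rm $host"
}

host_crush_rm_plan c04 20 21 22
```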
Today we pulled the wrong disk from a ceph node, and that made the
whole node go down / be unresponsive, even to a simple ping. I cannot
find too much about this in the log files, but I expect that the
/usr/bin/ceph-osd process caused a kernel panic.
Linux c01 3.10.0-693.11.1.el7.x86_64
CentOS
-12.2.7-0.el7.x86_64
-Original Message-
From: John Spray [mailto:jsp...@redhat.com]
Sent: Tuesday 31 July 2018 0:35
To: Marc Roos
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Enable daemonperf - no stats selected by
filters
On Mon, Jul 30, 2018 at 10:27 PM Marc Roos
wrote
Do you need to enable the option daemonperf?
[@c01 ~]# ceph daemonperf mds.a
Traceback (most recent call last):
File "/usr/bin/ceph", line 1122, in
retval = main()
File "/usr/bin/ceph", line 822, in main
done, ret = maybe_daemon_command(parsed_args, childargs)
File "/usr/bin/ceph"
From this thread, I got how to move the metadata pool from the hdd's to
the ssd's.
https://www.spinics.net/lists/ceph-users/msg39498.html
ceph osd pool get fs_meta crush_rule
ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd
I guess this can be done on a live system?
What would b
Just use collectd to start with; that is easiest with influxdb. However
do not expect too much of the support on influxdb.
-Original Message-
From: Satish Patel [mailto:satish@gmail.com]
Sent: Tuesday 24 July 2018 7:02
To: ceph-users
Subject: [ceph-users] ceph cluster monitoring to
I don't think it will get any more basic than that. Or maybe this: if
the doctor diagnoses you, you can either accept this, get a 2nd
opinion, or study medicine to verify it.
In short, lvm has been introduced to solve some issues related to
starting osd's (which I did not have, probably bec
1. Why is ceph df not always showing 'units' G M k
[@c01 ~]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    81448G     31922G     49526G       60.81
POOLS:
    NAME           ID     USED     %USED     MAX AVAIL     OBJECTS
    iscsi-images
I had similar question a while ago, maybe these you want to read.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46768.html
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg46799.html
-Original Message-
From: Satish Patel [mailto:satish@gmail.com]
Sent: vri
That is the used column, no?
[@c01 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    G        G         G            60.78
POOLS:
    NAME             ID     USED     %USED     MAX AVAIL     OBJECTS
    iscsi-images     16
Shalygin; ceph-users@lists.ceph.com; Marc Roos
Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume
and lvm? (and just not stick with direct disk access)
I'll chime in as a large scale operator, and a strong proponent of
ceph-volume.
Ceph-disk wasn't accomplishing what
I had a similar thing when doing the ls. Increasing the cache limit
helped with our test cluster
mds_cache_memory_limit = 80
-Original Message-
From: Surya Bala [mailto:sooriya.ba...@gmail.com]
Sent: Tuesday 17 July 2018 11:39
To: Anton Aleksandrov
Cc: ceph-users@lists.ceph.
If I would like to copy/move an rbd image, is this the only option I
have? (I want to move an image from a hdd pool to an ssd pool)
rbd clone mypool/parent@snap otherpool/child
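On a luminous-era cluster the clone route usually means a protected snapshot plus a flatten to detach the child from its parent; when downtime is acceptable, a plain `rbd cp mypool/parent otherpool/child` is the simpler alternative. An echo-only dry-run sketch (the image/pool names are the examples above and the helper name is made up):

```shell
# Dry run: print the snapshot/clone/flatten sequence for moving an rbd
# image to another pool. Echo-only; review before running.
move_image_plan() {
  src=$1; dst=$2
  echo "rbd snap create $src@snap"
  echo "rbd snap protect $src@snap"
  echo "rbd clone $src@snap $dst"
  echo "rbd flatten $dst"        # detach the child from the parent snap
  echo "rbd snap unprotect $src@snap"
  echo "rbd snap rm $src@snap"
}

move_image_plan mypool/parent otherpool/child
```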
This is actually not too nice, because this remapping is now causing a
nearfull
-Original Message-
From: Dan van der Ster [mailto:d...@vanderster.com]
Sent: Wednesday 13 June 2018 14:02
To: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map
Yes thanks, I know; I will change it when I get an extra node.
-Original Message-
From: Paul Emmerich [mailto:paul.emmer...@croit.io]
Sent: Wednesday 13 June 2018 16:33
To: Marc Roos
Cc: ceph-users; k0ste
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map clas
: Marc Roos
Sent: Wednesday 13 June 2018 7:14
To: ceph-users; k0ste
Subject: Re: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
update necessary?
I just added here 'class hdd'
rule fs_data.ec21 {
id 4
type erasure
min_size 3
max_size
step emit
}
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday 13 June 2018 12:30
To: Marc Roos; ceph-users
Subject: *SPAM* Re: *SPAM* Re: [ceph-users] Add ssd's to
hdd cluster, crush map class hdd update necessary?
On 06/13/2018 12:06 PM,
Shit, I added this class and now everything starts backfilling (10%).
How is this possible? I only have hdd's.
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday 13 June 2018 9:26
To: Marc Roos; ceph-users
Subject: *SPAM* Re: [ceph-users
file system.
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Wednesday 13 June 2018 5:59
To: ceph-users@lists.ceph.com
Cc: Marc Roos
Subject: *SPAM* Re: [ceph-users] Add ssd's to hdd cluster, crush
map class hdd update necessary?
> Is it nece
0 type osd
step emit
}
-Original Message-
From: Marc Roos
Sent: Tuesday 12 June 2018 17:07
To: ceph-users
Subject: [ceph-users] Add ssd's to hdd cluster, crush map class hdd
update necessary?
Is it necessary to update the crush map with
class hdd
Before adding ssd&
Is it necessary to update the crush map with
class hdd
before adding ssd's to the cluster?
se LVM, and stick with direct disk access ?
- what are the cost of LVM (performance, latency etc) ?
Answers:
- unify setup, support for crypto & more
- none
Tldr: that technical choice is fine, nothing to argue about.
On 06/08/2018 07:15 AM, Marc Roos wrote:
>
> I am getting the i
I am getting the impression that not everyone understands the subject
that has been raised here.
Why do osd's need to go via lvm, and why not stick with direct disk
access as it is now?
- Bluestore was created to cut out some fs overhead,
- everywhere 10Gb is recommended because of better lat
Is it possible to stop the current running scrubs/deep-scrubs?
http://tracker.ceph.com/issues/11202
: Marc Roos
Cc: ceph-users
Subject: Re: [ceph-users] Bug? ceph-volume zap not working
Ceph-disk didn't remove an osd from the cluster either. That has never
been a thing for ceph-disk or ceph-volume. There are other commands for
that.
On Sat, Jun 2, 2018, 4:29 PM Marc Roos
But it still leaves entries in the crush map, and maybe also in ceph
auth ls, and the dir in /var/lib/ceph/osd
-Original Message-
From: Oliver Freyermuth [mailto:freyerm...@physik.uni-bonn.de]
Sent: Saturday 2 June 2018 18:29
To: Marc Roos; ceph-users
Subject: Re: [ceph-users] Bug? ceph-volume
>>
>>
>> ceph-disk does not require bootstrap-osd/ceph.keyring and ceph-volume
>> does
>
>I believe that's expected when you use "prepare".
>For ceph-volume, "prepare" already bootstraps the OSD and fetches a
fresh OSD id, for which it needs the keyring.
>For ceph-disk, this was not par
o+w? I don't think that is necessary, no?
drwxr-xr-x 2 ceph ceph 182 May 9 12:59 ceph-15
drwxr-xr-x 2 ceph ceph 182 May 9 20:51 ceph-14
drwxr-xr-x 2 ceph ceph 182 May 12 10:32 ceph-16
drwxr-xr-x 2 ceph ceph 6 Jun 2 17:21 ceph-19
drwxr-x--- 13 ceph ceph 168 Jun 2 17:47 .
drwxrwxrwt 2 ce
ev/sdf
-Original Message-
From: Marc Roos
Sent: Saturday 2 June 2018 12:17
To: ceph-users
Subject: [ceph-users] Bug? ceph-volume zap not working
I guess zap should be used instead of destroy? Maybe keep ceph-disk
backwards compatibility and keep destroy??
[root@c03 bootstrap-osd]# ceph-volume lvm za
I guess zap should be used instead of destroy? Maybe keep ceph-disk
backwards compatibility and keep destroy??
[root@c03 bootstrap-osd]# ceph-volume lvm zap /dev/sdf
--> Zapping: /dev/sdf
--> Unmounting /var/lib/ceph/osd/ceph-19
Running command: umount -v /var/lib/ceph/osd/ceph-19
stderr: umou
[@ bootstrap-osd]# ceph-volume lvm prepare --bluestore --data /dev/sdf
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd
--keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
c32036fe-ca0b-47d1-be3f-e28943ee3a97