Thanks a lot!
On Mon, Feb 25, 2019 at 9:18 AM Konstantin Shalygin wrote:
> A few weeks ago I converted everything from straw to straw2 (to be able to
> use the balancer) using the command:
>
> ceph osd crush set-all-straw-buckets-to-straw2
>
> I have now just added a new rack bucket, and moved
Hello,
I recently spun up a small ceph cluster to learn the ropes. I'm a bit
confused by the documentation, which seems to refer at different points
to the tools in the subject (I'm using containerized mimic deployed with
ceph-ansible stable-3.2).
In particular, which one between config and conf
"
- As far as I understand, the reported 'implicated osds' are only the primary
ones. In the log of the osds you should also find the relevant pg number,
and with this information you can get all the involved OSDs. This might be
useful e.g. to see if a specific OSD node is always involved. This w
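For example, once the pg number is in hand, the full set of OSDs behind it can be listed like this (log path, osd id and pg id below are placeholders, not values from this thread):
# find the slow request lines, which also carry the pg id, in a primary's log
grep 'slow request' /var/log/ceph/ceph-osd.4.log
# then map that pg to its full up/acting OSD set
ceph pg map 2.1f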
Hi ceph users,
As I understand, cephfs in Mimic had significant issues up to and
including version 13.2.2. With some critical patches in Mimic 13.2.4,
is cephfs now production quality in Mimic? Are there folks out there
using it in a production setting? If so, could you share your
experien
Thanks for the hint, but it seems that I/O is not the trigger. This is the I/O
in the last 3 hours: http://prntscr.com/mpy0g9 - nothing above 350 iops,
which is IMHO a very small load for SSDs. Within this 3-hour window I
experienced the REQUEST_SLOW three times (different minutes though).
These times
On 2/6/19 11:52 AM, Junk wrote:
> I was trying to set my mimic dashboard cert using the instructions
> from
>
> http://docs.ceph.com/docs/mimic/mgr/dashboard/
>
> and I'm pretty sure the lines
>
>
> $ ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt
> $ ceph config-key set mgr mgr/d
I'm using a combination of Intel S4510 and Micron 5200 MAX. Slow requests are
happening on both brands.
Martin
-- Original e-mail --
From: Paul Emmerich
To: Massimo Sgaravatto
Date: 22. 2. 2019 15:08:29
Subject: Re: [ceph-users] REQUEST_SLOW across many OSDs at the s
Hi David,
After studying this configuration for a while, I found this post.
https://zeestrataca.com/posts/expanding-ceph-clusters-with-juju/
Thanks for your recommendation!
Regards,
Fabio Abreu
On Sat, Feb 23, 2019 at 11:19 AM David Turner wrote:
> Jewel is really limited on the settings you
Hi Glen,
On 2/24/19 9:21 PM, Glen Baars wrote:
> I am tracking down a performance issue with some of our mimic 13.2.4 OSDs. It
> feels like a lack of memory, but I have no real proof of the issue. I have
> used memory profiling (pprof tool) and the OSDs are maintaining their
> 4GB allocat
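A couple of admin-socket checks can back up (or rule out) the memory theory; this is just a sketch, and osd.12 is a placeholder id:
# per-component memory accounting inside the OSD process
ceph daemon osd.12 dump_mempools
# confirm what the OSD thinks its memory target is
ceph daemon osd.12 config get osd_memory_target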
Fixed. It seems that even though the block.db/block.wal had correct perms, the
disk entry under /dev was missing ceph:ceph ownership after the reboot for some
reason.
Sorry for adding extra emails to your mailbox, but hopefully this may help
someone else one day.
On Mon, Feb 25, 2019 at 11:09 PM Ashley Me
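One way to make that ownership survive reboots is a udev rule; a minimal sketch, assuming the block.db sits on /dev/sdb1 (the rule file name and device are placeholders):
# hand the DB partition to ceph:ceph on every boot
cat > /etc/udev/rules.d/99-ceph-blockdb.rules <<'EOF'
KERNEL=="sdb1", SUBSYSTEM=="block", OWNER="ceph", GROUP="ceph", MODE="0660"
EOF
udevadm control --reload-rules && udevadm trigger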
Sorry, here it is with log level 20 turned on for bluestore / bluefs:
-31> 2019-02-25 15:07:27.842 7f2bfbd71240 10
bluestore(/var/lib/ceph/osd/ceph-8) _open_db initializing bluefs
-30> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluefs add_block_device
bdev 1 path /var/lib/ceph/osd/ceph-8/block.db
-29> 20
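For anyone reproducing this, the extra detail above comes from running the failing OSD in the foreground with verbose logging, roughly like this (the exact flag form is my sketch, osd.8 as in the paths above):
# run the OSD in the foreground with bluestore/bluefs debug at 20
ceph-osd -d -i 8 --debug_bluestore 20 --debug_bluefs 20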
So I was able to change the perms using:
chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block.db
However, now I get the following when starting the OSD, which then causes it
to crash:
bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-8/block size
8.9 TiB
-1> 2019-02-25 15:03:51.990 7f26
Hello, Alfredo Deza!
On that day you wrote...
> There are ways to create partitions without a PARTUUID. We have an
> example in our docs with parted that will produce what is needed:
> http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare/#partitioning
> But then again... I would strongly su
On Mon, Feb 25, 2019 at 13:40, Eugen Block wrote:
> I just moved a (virtual lab) cluster to a different network, it worked
> like a charm.
> In an offline method - you need to:
>
> - set osd noout, ensure there are no OSDs up
> - Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
>
Hi Eugen,
Thanks for the advice. That helps me a lot :)
Eugen Block wrote on Mon, Feb 25, 2019 at 8:22 PM:
> I just moved a (virtual lab) cluster to a different network, it worked
> like a charm.
>
> In an offline method - you need to:
>
> - set osd noout, ensure there are no OSDs up
> - Change the MONs IP, S
I just moved a (virtual lab) cluster to a different network, it worked
like a charm.
In an offline method - you need to:
- set osd noout, ensure there are no OSDs up
- Change the MONs IP, See the bottom of [1] "CHANGING A MONITOR’S IP
ADDRESS", MONs are the only ones really
sticky with the
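The procedure behind [1] boils down to roughly the following per monitor, run with that mon stopped (mon id and address are placeholders):
# export the current monmap, rewrite the address, and inject it back
ceph-mon -i mon1 --extract-monmap /tmp/monmap
monmaptool --rm mon1 /tmp/monmap
monmaptool --add mon1 192.168.10.11:6789 /tmp/monmap
ceph-mon -i mon1 --inject-monmap /tmp/monmap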
On Mon, Feb 25, 2019 at 12:33, Zhenshi Zhou wrote:
> I deployed a new cluster (mimic). Now I have to move all servers
> in this cluster to another place, with new IPs.
> I'm not sure if the cluster will run well or not after I modify the config
> files, including /etc/hosts and /etc/ceph/ceph.conf.
No, ce
Dear all,
in real-world use, is there a significant performance
benefit in using 4kn instead of 512e HDDs (using
Ceph bluestore with block-db on NVMe-SSD)?
Cheers and thanks for any advice,
Oliver
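For reference, what a given drive actually reports (512e vs 4kn) can be read from sysfs; sda is a placeholder device:
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size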
Hi,
I deployed a new cluster (mimic). Now I have to move all servers
in this cluster to another place, with new IPs.
I'm not sure if the cluster will run well or not after I modify the config
files, including /etc/hosts and /etc/ceph/ceph.conf.
Fortunately, the cluster has no data at present. I never en
I think this should give you a bit of insight into using large-scale clusters.
https://www.youtube.com/watch?v=NdGHE-yq1gU and
https://www.youtube.com/watch?v=WpMzAFH6Mc4 . Watch the second video; I
think it relates more to your problem.
On Mon, Feb 25, 2019, 11:33 M Ranga Swami Reddy wrote:
> We h
Hi,
sorry to bump this old thread, but I had this problem recently, with a Linux
firewall between the cephfs client and the cluster.
The problem was easy to reproduce with:
# firewall is enabled with
iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP
iptables -A FORWARD -m conntrack --ctstate RE
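The ruleset continues in the usual stateful pattern; this is my reconstruction of the kind of rules involved plus the conntrack knob that matters here, not the literal original post:
# companion rule that lets established cephfs sessions keep flowing
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# idle timeout after which conntrack forgets an established TCP session
sysctl net.netfilter.nf_conntrack_tcp_timeout_established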
On 2/24/19 4:34 PM, David Turner wrote:
> One thing that's worked for me to get more out of nvmes with Ceph is to
> create multiple partitions on the nvme with an osd on each partition.
> That way you get more osd processes and CPU per nvme device. I've heard
> of people using up to 4 partitions
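As a sketch of that layout (device name and partition split are placeholders, not David's exact setup):
# carve one NVMe into four partitions and give each its own OSD
parted --script /dev/nvme0n1 mklabel gpt \
  mkpart osd1 0% 25% mkpart osd2 25% 50% \
  mkpart osd3 50% 75% mkpart osd4 75% 100%
ceph-volume lvm create --data /dev/nvme0n1p1   # repeat for p2, p3, p4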
Hi!
ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
bluestore on all osd.
I got a cluster error this morning:
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
p
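The usual triage for this state looks roughly like the following; 2.1f stands in for whatever pg id 'ceph health detail' reports:
ceph health detail                                      # names the inconsistent pg
rados list-inconsistent-obj 2.1f --format=json-pretty   # shows which object/shard failed and why
ceph pg repair 2.1f                                     # ask the primary to repair it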
I create 2-4 RBD images sized 10GB or more with --thick-provision, then
run
fio -ioengine=rbd -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128
-rw=randwrite -pool=rpool -runtime=60 -rbdname=testimg
For each of them at the same time.
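For completeness, the images themselves are created along these lines (pool and image names match the placeholders in the fio command):
# fully allocate the image up front so the test measures writes, not allocation
rbd create rpool/testimg --size 10G --thick-provision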
How do you test what total 4Kb random write iops (RB
How do you test what total 4Kb random write iops (RBD) you have?
-Original Message-
From: Vitaliy Filippov [mailto:vita...@yourcmc.ru]
Sent: 24 February 2019 17:39
To: David Turner
Cc: ceph-users; 韦皓诚
Subject: *SPAM* Re: [ceph-users] Configuration about using nvme SSD
I've
> -Original Message-
> From: Vitaliy Filippov
> Sent: 23 February 2019 20:31
> To: n...@fisk.me.uk; Serkan Çoban
> Cc: ceph-users
> Subject: Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow
> storage for db - why?
>
>
> -Original Message-
> From: Konstantin Shalygin
> Sent: 22 February 2019 14:23
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow
> storage for db - why?
>
> Bluestore/RocksDB will only put the next level up size
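A quick way to see whether an OSD's RocksDB has actually spilled onto the slow device (osd id is a placeholder):
ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'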
We have taken care of all the HW recommendations, but missed that the ceph mons
are VMs with a good configuration (4 cores, 64G RAM + 500G disk)...
Might this ceph-mon configuration cause issues?
On Sat, Feb 23, 2019 at 6:31 AM Anthony D'Atri wrote:
>
>
> ? Did we start recommending that production mons ru
A few weeks ago I converted everything from straw to straw2 (to be able to
use the balancer) using the command:
ceph osd crush set-all-straw-buckets-to-straw2
I have now just added a new rack bucket, and moved a couple of new osd
nodes in this rack, using the commands:
ceph osd crush add-bucket
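For anyone searching the archives later, those truncated commands are of this general shape; rack and host names below are placeholders, not my actual ones:
ceph osd crush add-bucket rack2 rack
ceph osd crush move rack2 root=default
ceph osd crush move ceph-osd-07 rack=rack2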