My advice: upgrade to 12.2.11, run the stale-instances list ASAP, and
see if you need to rm data.
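Roughly, once you are on 12.2.11 (a sketch, not a full procedure):

  # list bucket index instances left behind by dynamic resharding
  radosgw-admin reshard stale-instances list
  # after reviewing the output, remove the stale instances
  radosgw-admin reshard stale-instances rm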
This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you
will need to wait. But this might bite you at some point.
I hope I can prevent some admins from having sleepless nights
> On 20.02.2019 at 09:26, Konstantin Shalygin wrote:
>
>
>> we ran into some OSD node freezes with out-of-memory errors that were eating
>> all swap too. Until we get more physical RAM I'd like to reduce the
>> osd_memory_target, but can't find where and how to enable it.
>>
>> We have 24 bluestore
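A minimal sketch of how it is usually set, assuming your release already has
the option (the 2 GiB value is just an example):

  # ceph.conf on the OSD nodes, then restart the OSDs
  [osd]
  osd_memory_target = 2147483648

  # or try injecting it at runtime (may still need an OSD restart on some releases)
  ceph tell osd.* injectargs '--osd_memory_target=2147483648'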
On Thu, Feb 21, 2019 at 4:05 PM Wido den Hollander wrote:
> This isn't available in 13.2.4, but should be in 13.2.5, so on Mimic you
> will need to wait. But this might bite you at some point.
Unfortunately it hasn't been backported to Mimic:
http://tracker.ceph.com/issues/37447
This is the
Hi Mohamad, thanks for your email!
I am trying to separate my racks by SATA type and create new pools in
my environment.
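A rough sketch of that kind of separation (all bucket, rule and pool names
below are made up for illustration):

  # create a dedicated rack bucket and move the new hosts into it
  ceph osd crush add-bucket rack-sata2 rack
  ceph osd crush move rack-sata2 root=default
  ceph osd crush move node-new1 rack=rack-sata2

  # rule rooted at that rack, then a pool that uses it
  ceph osd crush rule create-replicated sata2-rule rack-sata2 host
  ceph osd pool create volumes-sata2 512 512 replicated sata2-rule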
Thanks and Best Regards,
Fabio Abreu
On Thu, Feb 21, 2019 at 3:46 PM Mohamad Gebai wrote:
> On 2/21/19 1:22 PM, Fabio Abreu wrote:
>
> Hi Everybody,
>
> It's
On 2/21/19 1:22 PM, Fabio Abreu wrote:
> Hi Everybody,
>
> Is it recommended to mix different hardware in the same rack?
>
> For example, I have a SATA rack with Apollo 4200 storage and I will get
> another hardware type to expand this rack, an HP 380 Gen10.
>
> I have done a lot of tests to understand
On Thu, Feb 21, 2019 at 03:22:56PM -0300, Fabio Abreu wrote:
:Hi Everybody,
:
:Is it recommended to mix different hardware in the same rack?
This is based on my somewhat subjective experience, not rigorous
testing or a deep understanding of the code base, so your results may
vary, but ...
Physical
Hi Everybody,
Is it recommended to mix different hardware in the same rack?
For example, I have a SATA rack with Apollo 4200 storage and I will get
another hardware type to expand this rack, an HP 380 Gen10.
I have done a lot of tests to understand the performance, and these new disks
have 100% of
I will research the bluestore cache, thanks for the tip. To answer your
questions though…
1. Measuring performance by the time it takes for my CI to deploy my
application to OpenStack
2. Workload is spin up / spin down of 5 instances, 4 of which have many
different volumes attached (The
Wow. Thank you so much Irek! Your help saved me from a lot of trouble...
It turned out to be a firewall issue indeed. Port 6800 in one direction
wasn't open.
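In case it helps anyone else: OSDs and the MGR bind to ports in the
6800-7300 range by default, so on firewalld hosts something along these
lines on every node should cover it (adjust to your environment):

  firewall-cmd --permanent --add-port=6789/tcp        # monitors
  firewall-cmd --permanent --add-port=6800-7300/tcp   # OSDs / MGR
  firewall-cmd --reload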
On 21.02.19 at 07:05, Irek Fasikhov wrote:
Hi,
You have problems with the MGR.
I didn't mean that the fact they are consumer SSDs is the reason for
this performance impact. I was just pointing it out, unrelated to your
problem.
40% is a lot more than one would expect to see. How are you measuring
the performance? What is the workload and what numbers are you getting?
What
Yes, stand-alone OSDs (WAL/DB/data all on the same disk); this is the same as it
was for Jewel / filestore. Even if they are consumer SSDs, why would they be 40%
faster with an older version of Ceph?
From: Mohamad Gebai
Date: Thursday, February 21, 2019 at 9:44 AM
To: "Smith, Eric" , Sinan Polat
On 2/21/19 4:30 PM, Hayashida, Mami wrote:
> I followed the documentation
> (http://docs.ceph.com/docs/mimic/mgr/dashboard/) to enable the dashboard
> RGW management, but am still getting the 501 error ("Please consult the
> documentation on how to configure and enable the Object Gateway... ").
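Per the linked docs, the setup boils down to roughly this (the user name is
just an example), assuming the 501 is simply missing RGW credentials:

  # create a system user for the dashboard and note its keys
  radosgw-admin user create --uid=dashboard --display-name=dashboard --system
  # hand those keys to the dashboard module
  ceph dashboard set-rgw-api-access-key <access_key>
  ceph dashboard set-rgw-api-secret-key <secret_key>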
I followed the documentation (http://docs.ceph.com/docs/mimic/mgr/dashboard/)
to enable the dashboard RGW management, but am still getting the 501 error
("Please consult the documentation on how to configure and enable the
Object Gateway... "). The dashboard itself is working.
1. create a RGW
Hi Ceph Users,
I've been having some issues with Bluestore on Luminous (12.2.8 and 12.2.10) as
well as Mimic (13.2.4)
and thought maybe you guys could give me some insight as to what the cause of
my problems may be.
For context: I am tasked with evaluating Bluestore as a replacement for the
Hi,
For the last few months I've been getting questions from people seeing
warnings about large OMAP objects after scrubs.
I've been digging for a few months (You'll also find multiple threads
about this) and it all seemed to trace back to RGW indexes.
Resharding didn't clean up old indexes
What is your setup with Bluestore? Standalone OSDs? Or do they have
their WAL/DB partitions on another device? How does it compare to your
Filestore setup for the journal?
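For reference, the two layouts are typically created along these lines with
ceph-volume; the device paths are only examples:

  # everything (data, WAL, DB) on one device
  ceph-volume lvm create --bluestore --data /dev/sdb
  # data on the SSD, DB/WAL on a separate, faster device
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1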
On a separate note, these look like they're consumer SSDs, which makes
them not a great fit for Ceph.
Mohamad
On 2/21/19
40% slower performance compared to Ceph Jewel / OpenStack Mitaka backed by the
same SSDs ☹ I have 30 OSDs on SSDs (Samsung 860 EVO 1TB each)
From: Sinan Polat
Sent: Thursday, February 21, 2019 8:43 AM
To: ceph-users@lists.ceph.com; Smith, Eric
Subject: Re: [ceph-users] BlueStore / OpenStack
Hi Eric,
40% slower performance compared to what? Could you please share the current
performance numbers. How many OSD nodes do you have?
Regards,
Sinan
> On 21 February 2019 at 14:19, "Smith, Eric" wrote:
>
>
> Hey folks – I recently deployed Luminous / BlueStore on SSDs to back an
> OpenStack
Hey folks - I recently deployed Luminous / BlueStore on SSDs to back an
OpenStack cluster that supports our build / deployment infrastructure and I'm
getting 40% slower build times. Any thoughts on what I may need to do with Ceph
to speed things up? I have 30 SSDs backing an 11 compute node
On Thu, Feb 21, 2019 at 2:34 AM Анатолий Фуников
wrote:
>
> It's strange, but the parted output for this disk (/dev/sdf) shows me that it's GPT:
>
> (parted) print
> Model: ATA HGST HUS726020AL (scsi)
> Disk /dev/sdf: 2000GB
> Sector size (logical/physical): 512B/4096B
> Partition Table: gpt
>
> Number
Hi ~
We have been using NVMe SSDs as the storage medium, and found that the
hardware's performance is not being fully exploited. Should we change some
parameters? Also, bluestore provides NVMEDevice using SPDK to optimize
performance; does anyone know if this module is stable?
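For context, the BlueStore docs describe selecting the SPDK backend by
pointing bluestore_block_path at the NVMe device with an spdk: prefix,
roughly as below (check the docs for your release for the exact
device-selector syntax):

  [osd]
  bluestore_block_path = spdk:<device selector>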
Our hardware configuration is 8 nvme
You can reduce min_size to k in an EC pool. But that's a very bad idea
for the same reason that min_size 1 on a replicated pool is bad.
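For completeness, the knob itself is just this (pool name made up; with a
3+2 profile this would drop min_size from the safer k+1=4 down to k=3):

  ceph osd pool set my-ec-pool min_size 3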
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel:
Of course, you're right. After using the right name, the connection worked :) I
tried to connect via a newer kernel client (under Ubuntu 16.04) and it worked
as well. So the issue clearly seems to be related to our client kernel version.
Thank you all very much for your time and help!