> The Micron 5200 line sadly doesn't seem to have a high-endurance SKU like
> the 5100 line.
The 3.84 TB 5200 PRO is rated at ~2.4 DWPD; do you need higher than that? I do
find references to higher-endurance ~5 DWPD 5200 MAX models at up to 1.9 TB.
Online resources on the 5200 product line don’t always
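For comparison, DWPD ratings can be converted to total terabytes written (TBW) over the warranty period; a quick sketch, assuming the usual 5-year warranty for these drive lines and the 1.92 TB MAX SKU:

```shell
# TBW over warranty = capacity_TB * DWPD * warranty_days
# 3.84 TB 5200 PRO at 2.4 DWPD over 5 years:
awk 'BEGIN { printf "%.0f TBW\n", 3.84 * 2.4 * 5 * 365 }'   # -> 16819 TBW
# 1.92 TB 5200 MAX at 5 DWPD over 5 years:
awk 'BEGIN { printf "%.0f TBW\n", 1.92 * 5 * 5 * 365 }'     # -> 17520 TBW
```

So the smaller MAX drive actually absorbs slightly more total writes than the larger PRO, just spread over less capacity.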
Hi!
I want to use Prometheus + Grafana to monitor Ceph, and I found this URL:
http://docs.ceph.com/docs/master/mgr/prometheus/
Then I downloaded the Ceph dashboard from Grafana:
https://grafana.com/dashboards/7056
It is so cool.
But some metrics do not work for Ceph 13 (Mimic), like
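For reference, the mgr module from the linked docs is enabled like this; the exporter then listens on the active mgr on port 9283 by default (the hostname below is just a placeholder):

```shell
# Enable the prometheus manager module on the cluster
ceph mgr module enable prometheus
# Spot-check that the exporter is serving metrics (placeholder hostname)
curl http://ceph-mgr-host:9283/metrics | head
```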
Hi, I read through the various documentation and had a few questions:
- From what I understand, cephFS clients reach the OSDs directly; does the
cluster network need to be opened up as a public network?
- Is it still necessary to have a public and a cluster network when using
cephFS since the
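On the network question: clients, including cephFS clients, talk to mons, MDS, and OSDs over the public network only; the cluster network carries OSD-to-OSD replication, heartbeat, and recovery traffic, so it does not need to be reachable by clients. A minimal sketch of the split (the subnets are placeholders):

```ini
# ceph.conf - example network split; the subnets below are made up.
[global]
public_network = 192.168.1.0/24   ; mons/MDS/OSDs listen here; clients connect here
cluster_network = 10.0.0.0/24     ; OSD replication and recovery only; no client access needed
```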
Hi Cephers, I have my OSDs on HDD with journals on NVMe. I have one question
about the config option "filestore_wbthrottle_enable=false": what problems will
I have with this option disabled? Is there a link where I can read more about it?
Regards,
Bruno Carvalho
___
This looks fine and will recover on its own.
If you are not seeing enough client IO, it means that your tuning of
recovery IO vs. client IO priority is incorrect.
A simple and effective fix is increasing the osd_recovery_sleep_hdd
option (I think the default is 0.05 in Luminous and 0.1 since Mimic?)
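A sketch of turning that knob at runtime; injectargs applies it to the running OSDs without a restart, and the 0.1 value is just the example figure from above:

```shell
# Raise the per-op sleep for HDD-backed OSDs to throttle recovery in favor of client IO
ceph tell osd.* injectargs '--osd_recovery_sleep_hdd 0.1'
# To persist across OSD restarts, also set it in the [osd] section of ceph.conf:
#   osd_recovery_sleep_hdd = 0.1
```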
On Mon, Sep 17, 2018 at 8:21 AM Graham Allan wrote:
> On 09/14/2018 02:38 PM, Gregory Farnum wrote:
>> On Thu, Sep 13, 2018 at 3:05 PM, Graham Allan wrote:
>>> However I do see transfer errors fetching some files out of radosgw - the
>>> transfer just hangs then aborts. I'd guess
My favorite SSD is still the Sandisk Ultra II. We got a few hundred of
them for ~200€/TB a few years ago.
The performance isn't that great, but it's amazing for the price.
Certainly orders of magnitude better than HDDs.
Just don't expect them to deliver more than a few hundred random write IOPS,
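A common way to measure what a given SSD can sustain for Ceph journal-style writes is a single-threaded O_DIRECT/O_SYNC 4k sequential write test with fio; the device path below is a placeholder, and note the test is destructive to any data on it:

```shell
# DESTRUCTIVE: writes directly to the raw device. /dev/sdX is a placeholder.
fio --name=journal-test --filename=/dev/sdX \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based
```

Consumer drives without power-loss protection tend to collapse to a few hundred IOPS on this test even when their cached benchmark numbers look fine.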
> Hi everyone,
> I'm having a hard time understanding the use of the norebalance flag. Can
> anyone help me understand it?
> What I understood from this link http://tracker.ceph.com/issues/10559
> was that it prevents backfilling of PGs in the remapped state. Am I right?
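For what it's worth, the flag is set and cleared like this; while it is set, the cluster will not start backfill for PGs that are merely remapped by a CRUSH or OSD change, while degraded PGs are still recovered:

```shell
ceph osd set norebalance     # suspend rebalancing, e.g. before maintenance
ceph -s                      # 'norebalance' now appears in the health flags
ceph osd unset norebalance   # resume; remapped PGs start backfilling again
```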
SM863a were always good to me.
Micron 5100 MAX are fine, but felt less consistent than the Samsungs.
Haven't had any issues with Intel S4600.
Intel S3710s are obviously not available anymore, but those were a crowd favorite.
The Micron 5200 line sadly doesn't seem to have a high-endurance SKU like the
5100 line.
Intel's DC series is also popular, for both NVMe and SATA SSD use cases:
https://www.intel.com/content/www/us/en/products/memory-storage/solid-state-drives/data-center-ssds/dc-d3-s4610-series.html
On Mon, Sep 17, 2018 at 8:10 PM Robert Stanford wrote:
> A while back the favorite SSD for Ceph was the Samsung
A while back the favorite SSD for Ceph was the Samsung SM863a. Are there
any larger SSDs that are known to work well with Ceph? I'd like around 1 TB
if possible. Is there any better alternative to the SM863a?
Regards
R
___
ceph-users mailing list
On 09/14/2018 02:38 PM, Gregory Farnum wrote:
> On Thu, Sep 13, 2018 at 3:05 PM, Graham Allan wrote:
>> However I do see transfer errors fetching some files out of radosgw - the
>> transfer just hangs then aborts. I'd guess this is probably due to one pg
>> stuck down, due to a lost (failed HDD) osd. I
On Mon, Sep 17, 2018 at 2:49 PM Eugen Block wrote:
>
> Hi,
>
> from your response I understand that these messages are not expected
> if everything is healthy.
I'm not 100% sure of that. It could be that there's a path through
the code that's healthy, but just wasn't anticipated at the point
Hi,
from your response I understand that these messages are not expected
if everything is healthy.
We face them every now and then, three or four times a week, but
there's no real connection to specific jobs or a high load in our
cluster. It's a Luminous cluster (12.2.7) with 1 active, 1
> > I’m running Proxmox VE 5.2, which includes ceph version 12.2.7
> > (94ce186ac93bb28c3c444bccfefb8a31eb0748e4) luminous (stable)
> 12.2.8 is in the repositories. ;)
I forgot to reply to this part. I did notice the update afterwards and have
since updated, but performance was the same.
I redid
Hi,
We have a problem with the radosgw using the S3 REST API.
Trying to create a new bucket does not work.
We get a 405 at the API level, and the log indicates a 2002 error.
Does anybody know what this error code means? The radosgw log is attached.
Best,
Michael
2018-09-17
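For anyone wanting to reproduce, a bucket create against radosgw with the AWS CLI looks roughly like this (endpoint, port, and bucket name are placeholders; 7480 is the usual default radosgw port). A 405 on bucket creation is often a sign the request isn't being recognized as a bucket operation, e.g. virtual-hosted-style requests without a matching rgw_dns_name, though that is only a guess here:

```shell
# Placeholders: endpoint and bucket name are made up; credentials come from
# the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
aws --endpoint-url http://rgw.example.com:7480 \
    s3api create-bucket --bucket test-bucket
```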
In one environment, deployed through containers, I found that ceph-osd kept
committing suicide due to "error (24) Too many open files".
I then increased LimitNOFILE for the container from 65k to 655k, which
fixed the issue.
But the FD count increases all the time; the max number is now around 155k. I
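For reference, checking a running daemon's actual limit and current FD usage helps confirm whether the count is still growing (the PID below is a placeholder):

```shell
# Inspect the live limit and current FD count of an OSD process (placeholder PID)
grep 'open files' /proc/1234/limits
ls /proc/1234/fd | wc -l
# For a systemd-managed unit, the limit raise goes in a drop-in, e.g.:
#   [Service]
#   LimitNOFILE=655360
```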