I created http://tracker.ceph.com/issues/38310 for this.
Regards,
Eugen
Quoting Konstantin Shalygin:
On 2/14/19 2:21 PM, Eugen Block wrote:
Already did, but now with highlighting ;-)
http://docs.ceph.com/docs/luminous/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval
http://docs.ceph.com/docs/mimic/rados/operations/health-checks/?highlight=osd_deep_mon_scrub_interval
Quoting Konstantin Shalygin:
On 2/14/19 2:16 PM, Eugen Block wrote:
Exactly, it's also not available in a Mimic test-cluster. But it's
mentioned in the docs for L and M (I didn't check the docs for other
releases); that's what I was wondering about.
Can you provide a URL to this page?
k
On 2/13/19 9:17 PM, Eugen Block wrote:
Do you have any comment on osd_deep_mon_scrub_interval?
My Ceph Luminous doesn't know anything about this option:
```
# ceph daemon osd.7 config help osd_deep_mon_scrub_interval
{
"error": "Setting not found: 'osd_deep_mon_scrub_interval'"
}
```
k
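As an aside, a quick way to check which scrub-related settings a daemon actually knows is to grep its running config; a minimal sketch (osd.7 is just the daemon from the example above):
```
ceph daemon osd.7 config show | grep scrub
```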
Hi,
On a cluster running only RGW, I'm occasionally running into BlueStore
12.2.11 OSDs being 100% busy.
This cluster has 85k stale indexes (stale-instances list) and I've been
slowly trying to remove them.
I noticed that OSDs regularly read their HDDs heavily, and the device
then becomes 100% busy.
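For reference, a sketch of the stale-instances cycle mentioned above, as introduced in 12.2.11 (verify the exact subcommands against your release's radosgw-admin):
```
# list bucket index instances left behind by dynamic resharding
radosgw-admin reshard stale-instances list
# remove them; on a busy cluster this is worth doing in small batches
radosgw-admin reshard stale-instances rm
```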
On 2/14/19 4:40 AM, John Petrini wrote:
Okay that makes more sense, I didn't realize the WAL functioned in a
similar manner to filestore journals (though now that I've had another read
of Sage's blog post, New in Luminous: BlueStore, I notice he does cover
this). Is this to say that writes are acknowledged as soon as they hit the
WAL?
On Thu, Feb 14, 2019 at 8:09 AM Jeff Layton wrote:
> > Hi,
> > As http://docs.ceph.com/docs/master/cephfs/nfs/ says, it's OK to
> > configure active/passive NFS-Ganesha to use CephFS. My question is if we
> > can use active/active nfs-ganesha for CephFS.
>
> (Apologies if you get two copies of this. I sent an earlier one from the
> wrong account and it got stuck
Ceph-users is the proper ML to post questions like this.
On Thu, Dec 20, 2018 at 2:30 PM Joao Eduardo Luis wrote:
> On 12/20/2018 04:55 PM, João Aguiar wrote:
> > I am having an issue with "ceph-deploy mon"
> >
> > I started by creating a cluster with one monitor with "ceph-deploy
> > new"…
Ceph-users is the correct ML to post questions like this.
On Wed, Jan 2, 2019 at 5:40 PM Rishabh S wrote:
> Dear Members,
>
> Please let me know if you have any link with examples/detailed steps of
> Ceph-Safenet(KMS) integration.
>
> Thanks & Regards,
> Rishabh
Ceph-users ML is the proper mailing list for questions like this.
On Sat, Jan 26, 2019 at 12:31 PM Meysam Kamali wrote:
> Hi Ceph Community,
>
> I am using ansible 2.2 and ceph branch stable-2.2, on centos7, to deploy
> the playbook. But the deployment hangs at this step: "TASK [ceph-mon :
The Ceph-users ML is the correct list to ask questions like this. Did you
figure out the problems/questions you had?
On Tue, Dec 4, 2018 at 11:39 PM Rishabh S wrote:
> Hi Gaurav,
>
> Thank You.
>
> Yes, I am using boto, though I was looking for suggestions on how my ceph
> client should get
Hello,
We'll soon be building out four new Luminous clusters with BlueStore.
Our current clusters are running FileStore, so we're not very familiar
with BlueStore yet, and I'd like to have an idea of what to expect.
Here are the OSD hardware specs (5x per cluster):
2x 3.0GHz 18c/36t
22x 1.8TB 10K
This might not be a Ceph issue at all, depending on whether you're using any
sort of caching. If you have caching on your disk controllers, then the
write might have happened in the cache but never made it to the OSD disks,
which would show up as problems on the VM RBDs. Make sure you have
On Wed, Feb 13, 2019 at 10:37 AM Jason Dillaman wrote:
>
> For the future Ceph Octopus release, I would like to remove all
> remaining support for RBD image format v1 images barring any
> substantial pushback.
>
> The image format for new images has been defaulted to the v2 image
> format since
Note that this format in fstab does require a certain version of util-linux
because of the funky format of the line. Essentially, it packs all of the
command-line options into the first field of the line, separated by commas.
On Wed, Feb 13, 2019 at 2:10 PM David Turner wrote:
> I believe the fstab line
I believe the fstab line for ceph-fuse in this case would look something
like [1] this. We use a line very similar to that to mount cephfs at a
specific client_mountpoint that the specific cephx user only has access to.
[1] id=acapp3,client_mds_namespace=fs1 /tmp/ceph fuse.ceph
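For illustration, a sketch of a complete fstab line in that style, reusing the id and MDS namespace from the truncated example above; the "defaults,_netdev" options field and the trailing "0 0" are my assumptions, not taken from the original mail:
```
# /etc/fstab -- ceph-fuse mount; all ceph options are packed into the device field
id=acapp3,client_mds_namespace=fs1  /tmp/ceph  fuse.ceph  defaults,_netdev  0 0
```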
Your CRUSH rule for EC pools is forcing that behavior with the line
step chooseleaf indep 1 type ctnr
If you want different behavior, you'll need a different CRUSH rule.
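For comparison, a sketch of what a rule that spreads chunks across hosts might look like for a 4+2 profile; the rule name, the id, and the "host" failure domain are assumptions for illustration only:
```
rule ec42_by_host {
    id 1
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default
    # indep 0 means "as many as the pool size"; one chunk per host
    step chooseleaf indep 0 type host
    step emit
}
```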
On Tue, Feb 12, 2019 at 5:18 PM hnuzhoulin2 wrote:
> Hi, cephers
>
>
> I am building a ceph EC cluster. When a disk is
For the future Ceph Octopus release, I would like to remove all
remaining support for RBD image format v1 images barring any
substantial pushback.
The image format for new images has been defaulted to the v2 image
format since Infernalis, the v1 format was officially deprecated in
Jewel, and
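For anyone checking their own images, a sketch (the pool and image names are placeholders):
```
# v1 images show "format: 1" here; v2 has been the default since Infernalis
rbd info rbd/myimage | grep format
# new images can be pinned to v2 explicitly
rbd create --size 10G --image-format 2 rbd/newimage
```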
Sorry for the late response on this, but life has been really busy over the
holidays.
We compact our omaps offline with the ceph-kvstore-tool. Here [1] is a
copy of the script that we use for our clusters. You might need to modify
things a bit for your environment. I don't remember which
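Not the script from [1], but a minimal sketch of the offline compaction step for one filestore OSD's omap; the OSD id and the leveldb backend are assumptions, and the OSD must be stopped first:
```
systemctl stop ceph-osd@7
# use "rocksdb" instead if filestore_omap_backend was switched
ceph-kvstore-tool leveldb /var/lib/ceph/osd/ceph-7/current/omap compact
systemctl start ceph-osd@7
```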
On Wed, Feb 13, 2019 at 12:49 AM Siegfried Höllrigl <
siegfried.hoellr...@xidras.com> wrote:
> Hi !
>
> We have now successfully upgraded (from 12.2.10) to 12.2.11.
>
> Seems to be quite stable. (Using RBD, CephFS and RadosGW)
>
Great!
>
> Most of our OSDs are still on Filestore.
>
> Should we set
Anyone have any insight to offer here? Also I'm now curious to hear
about experiences with 512e vs 4kn drives.
Thank you, Konstantin,
I'll give that a try.
Do you have any comment on osd_deep_mon_scrub_interval?
Eugen
Quoting Konstantin Shalygin:
The expectation was to prevent the automatic deep-scrubs but they are
started anyway.
You can disable deep-scrubs per pool via `ceph osd pool set <pool>
nodeep-scrub true`.
The "Wants=" clause is just a way to say that there is a
soft-dependency between unit files. In this case, it's saying that if
you "systemctl enable rbdmap.service", it will also ensure that
"network-online.target" and "remote-fs-pre.target" are enabled. The
"Before=" and "After=" clauses just
The expectation was to prevent the automatic deep-scrubs but they are
started anyway.
You can disable deep-scrubs per pool via `ceph osd pool set <pool>
nodeep-scrub true`.
k
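For completeness, a sketch of the flag in use ("rbd" here is only a placeholder pool name):
```
ceph osd pool set rbd nodeep-scrub true
# and to let scheduled deep-scrubs resume later
ceph osd pool set rbd nodeep-scrub false
```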
Hi cephers,
I'm struggling a little with the deep-scrubs. I know this has been
discussed multiple times (e.g. in [1]) and we also use a known crontab
script in a Luminous cluster (12.2.10) to start the deep-scrubbing
manually (a quarter of all PGs 4 times a week). The script works just
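Not the script referenced in [1], but a sketch of the general approach, assuming jq is installed and that `ceph pg dump pgs -f json` yields objects with pgid and last_deep_scrub_stamp fields (verify the JSON layout on your release):
```
#!/bin/bash
# deep-scrub the quarter of all PGs whose deep-scrub stamp is oldest
pgs=$(ceph pg dump pgs -f json 2>/dev/null |
      jq -r 'sort_by(.last_deep_scrub_stamp) | .[].pgid')
total=$(echo "$pgs" | wc -l)
echo "$pgs" | head -n "$((total / 4))" | while read -r pg; do
    ceph pg deep-scrub "$pg"
done
```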
Thanks, I wasn't aware of that mount option.
Whether this is more intuitive to me (i.e. the lesser violation of the
Principle of Least Surprise from my point of view) is a whole different
matter...
On 13.02.2019 at 11:07, Marc Roos wrote:
Maybe _netdev?
/dev/rbd/rbd/influxdb /var/lib/influxdb xfs _netdev 0 0
Maybe _netdev?
/dev/rbd/rbd/influxdb /var/lib/influxdb xfs _netdev 0 0
To be honest, I can remember having something similar a long time ago, but
I just tested it on CentOS 7 and have no problems with this.
-----Original Message-----
From: Clausen, Jörn [mailto:jclau...@geomar.de]
Hi!
I am new to Ceph, Linux, systemd and all that stuff. I have set up a
test/toy Ceph installation using ceph-ansible, and am now trying to
understand RBD.
My RBD client has a correct /etc/ceph/rbdmap, i.e. /dev/rbd0 is created
during system boot automatically. But adding an entry to /etc/fstab
Hi !
We have now successfully upgraded (from 12.2.10) to 12.2.11.
Seems to be quite stable. (Using RBD, CephFS and RadosGW)
Most of our OSDs are still on Filestore.
Should we set the "pglog_hardlimit" flag (as it must not be unset anymore)?
What exactly will this limit?
Are there any risks?
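For reference, a sketch of setting it; my understanding from the 12.2.11 release notes is that all daemons must already be on 12.2.11+ and the flag cannot be cleared afterwards, so check the notes first:
```
ceph osd set pglog_hardlimit
```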
Hi, cephers

I am building a Ceph EC cluster. When a disk has an error, I out it. But all
of its PGs remap to OSDs in the same host, while I think they should remap
to other hosts in the same rack. The test process is:

ceph osd pool create .rgw.buckets.data 8192 8192 erasure ISA-4-2
Hi Igor,
Thanks again for helping!
I have upgraded to the latest Mimic this weekend, and with the new memory
autotuning I have set osd_memory_target to 8G. (My NVMe drives are 6TB.)
I have done a lot of perf dumps and mempool dumps and ps of the processes
to see RSS memory at different hours,
here are the reports for
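Incidentally, a sketch of how that target can be set cluster-wide on Mimic via the config store (8 GiB in bytes, mirroring the 8G mentioned above):
```
ceph config set osd osd_memory_target 8589934592
```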
A single OSD should be expendable and you should be able to just "zap"
it and recreate it. Was this not true in your case?
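A sketch of the usual zap-and-recreate cycle, assuming OSD id 7 on /dev/sdX (both placeholders) and a ceph-volume based deployment:
```
ceph osd out 7
systemctl stop ceph-osd@7
ceph osd purge 7 --yes-i-really-mean-it
ceph-volume lvm zap --destroy /dev/sdX
ceph-volume lvm create --data /dev/sdX
```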
On Wed, Feb 13, 2019 at 1:27 AM Ruben Rodriguez wrote:
> > On 2/9/19 5:40 PM, Brad Hubbard wrote:
> > > On Sun, Feb 10, 2019 at 1:56 AM Ruben Rodriguez wrote: