CfP for the Software Defined Storage devroom at FOSDEM 2019
(Brussels, Belgium, February 3rd).
FOSDEM is a free software event that offers open source communities a place to
meet, share ideas and collaborate. It is renowned for being highly developer-
oriented and brings together 8000+
I think PGs have more to do with this; the docs were pretty good at
explaining it. Hope this helps
On Thu, Oct 11, 2018, 6:20 PM ST Wong (ITSC) wrote:
> Hi all, we’re new to CEPH. We’ve some old machines redeployed for
> setting up CEPH cluster for our testing environment.
>
> There are over
Hi all, we're new to Ceph. We've redeployed some old machines to set up a
Ceph cluster for our testing environment.
There are over 100 disks for OSDs. We will use replication with 2 copies. We
wonder if it's better to create pools on all OSDs, or to use some OSDs for
particular pools, for
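For what it's worth, a hedged sketch of the dedicated-OSD approach, assuming
a Luminous cluster with device classes; the rule, pool and class names are
placeholders:

    # Create a replicated rule that only maps to OSDs of class "ssd"
    ceph osd crush rule create-replicated fast-rule default host ssd
    # Point an existing pool at that rule
    ceph osd pool set mypool crush_rule fast-rule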
As the osd crash implies, setting "nobackfill" appears to let all the
osds keep running and the pg stays active and can apparently serve data.
If I track down the object referenced below in the object store, I can
download it without error via s3... though as I can't generate a
matching etag,
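For reference, the flag-toggling involved is just the standard CLI; whether
it is safe here depends on the cluster state:

    # Pause backfill cluster-wide so the crashing OSD is left alone
    ceph osd set nobackfill
    # ... investigate, serve data ...
    # Re-enable backfill once the problem object is dealt with
    ceph osd unset nobackfill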
Can't you route to your Ceph public network? That would avoid having to
create hosts on the same VLAN; I think that's how most shops would do it.
On Thu, Oct 11, 2018 at 2:07 PM Felix Stolte wrote:
> Our ceph cluster is mainly used for openstack but we also need to provide
> storage to linux
Our ceph cluster is mainly used for openstack but we also need to
provide storage to linux workstations via nfs and smb for our windows
clients. Even though our linux workstations could talk to CephFS directly
we don't want them to be in our ceph public network. Ceph public network
is only
I'm curious about Optane too
We are running Dell 730xd & 740xd with expansion chassis:
12 x 8 TB disks in the server and 12 x 8 TB in the expansion unit,
2 x 2 TB Intel NVMe for caching in the servers (12 disks cached, with wal/db on
the opposite NVMe from the Intel cache, so interleaved).
Intel cache running
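A hedged sketch of how such an interleaved wal/db layout might be created
with ceph-volume; the device paths are hypothetical:

    # Data on a spinner, DB and WAL on partitions of the opposite NVMe
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme1n1p1 --block.wal /dev/nvme1n1p2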
Hi,
Do any of you use Optane NVMe and are willing to share your experience and
tuning settings?
There was a discussion started by Wido mentioning using intel_pstate=disable
intel_idle.max_cstate=1 processor.max_cstate=1 and disabling irqbalance, but
that is all I was able to find
I am using
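In case it helps, a sketch of where those settings would go, assuming a
grub2-based distro (paths may differ):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="... intel_pstate=disable intel_idle.max_cstate=1 processor.max_cstate=1"
    # regenerate the config and reboot
    grub2-mkconfig -o /boot/grub2/grub.cfg
    # and disable irqbalance
    systemctl disable --now irqbalance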
You should definitely stop using `size 3 min_size 1` on your pools. Go
back to the default `min_size 2`. I'm a little confused why you have 3
different CRUSH rules. They're all identical. You only need different
CRUSH rules if you're using Erasure Coding or targeting a different set of
OSDs
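Reverting is one command per pool (standard CLI; the pool name is a
placeholder):

    # Restore the default so writes need at least 2 replicas available
    ceph osd pool set <poolname> min_size 2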
My usual workaround for that is to set the noscrub and nodeep-scrub flags and
wait (sometimes even 3 hours) until all the scheduled scrubs finish. Then a
manually issued scrub or repair starts immediately. After that I unset the
scrub blocking flags.
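Spelled out as a sketch (standard flags and commands; the PG id is a
placeholder):

    # Block new scheduled scrubs and wait for running ones to finish
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # ... wait until `ceph status` shows no scrubs running ...
    # Now a manually issued repair (or scrub) starts immediately
    ceph pg repair <pgid>
    # Re-enable scheduled scrubbing
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub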
General advice regarding pg repair is not to
Thanks for your reply. I'll capture a `ceph status` the next time I
encounter a non-working RBD. Here's the other output you asked for:
$ ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "data",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
My first guess is to ask what your crush rules are. `ceph osd crush rule
dump` along with `ceph osd pool ls detail` would be helpful. Also, a
`ceph status` output from a time when the VM RBDs aren't working might
explain something.
On Thu, Oct 11, 2018 at 1:12 PM Nils Fahldieck -
Does anyone have a good blog entry or explanation of bucket sharding
requirements/commands? Plus perhaps a howto?
I upgraded our cluster to Luminous and now I have a warning about 5 large
objects. The official blog says that sharding is turned on by default, but
we upgraded, so I can't quite
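Not a full howto, but a hedged sketch of the Luminous-era commands involved;
the bucket name and shard count are placeholders:

    # List buckets that exceed the objects-per-shard limit
    radosgw-admin bucket limit check
    # Queue a bucket for resharding, then run the reshard
    radosgw-admin reshard add --bucket=mybucket --num-shards=64
    radosgw-admin reshard process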
I am just interested to know more about your use case for NFS as opposed to
just using cephfs directly, and what are you using for HA?
On Thu, Oct 11, 2018 at 1:54 AM Felix Stolte wrote:
> Hey folks,
>
> I use nfs-ganesha to export cephfs to nfs. nfs-ganesha can talk to
> cephfs via libcephfs
If this is a thing, I would like to be invited
-Brent
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Alfredo Daniel Rezinovsky
Sent: Tuesday, September 18, 2018 3:15 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users]
Right, the user can be the DN component or something else projected
from the entry; details are in the docs.
Matt
On Thu, Oct 11, 2018 at 1:26 PM, Adam C. Emerson wrote:
> Ha Son Hai wrote:
>> Hello everyone,
>> I try to apply the bucket policy to my bucket for LDAP user but it doesn't
>> work.
>>
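If the LDAP token resolves to a different user name than expected, the
principal ARN has to match that name; a hypothetical example, assuming the
user comes out as "ldapuser":

    "Principal": {"AWS": ["arn:aws:iam:::user/ldapuser"]}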
Yeah improving that workflow is in the backlog. (or maybe it's done in
master? I forget.) But it's complicated, so for now that's just how it
goes. :(
On Thu, Oct 11, 2018 at 10:27 AM Brett Chancellor <
bchancel...@salesforce.com> wrote:
> This seems like a bug. If I'm kicking off a repair
This seems like a bug. If I'm kicking off a repair manually it should take
place immediately and ignore flags such as max scrubs or the minimum scrub
window.
-Brett
On Thu, Oct 11, 2018 at 1:11 PM David Turner wrote:
> As a part of a repair is queuing a deep scrub. As soon as the repair part
>
Ha Son Hai wrote:
> Hello everyone,
> I try to apply the bucket policy to my bucket for LDAP user but it doesn't
> work.
> For user created by radosgw-admin, the policy works fine.
>
> {
>   "Version": "2012-10-17",
>   "Statement": [{
>     "Effect": "Allow",
>     "Principal": {"AWS":
Hi everyone,
For some time now we have experienced service outages in our Ceph cluster
whenever there is any change to the HEALTH status, e.g. swapping
storage devices, adding storage devices, rebooting Ceph hosts, during
backfills etc.
Just now I had a situation where several VMs hung after I
Part of a repair is queuing a deep scrub. As soon as the repair part
is over, the deep scrub continues until it is done.
On Thu, Oct 11, 2018, 12:26 PM Brett Chancellor
wrote:
> Does the "repair" function use the same rules as a deep scrub? I couldn't
> get one to kick off, until I
Does the "repair" function use the same rules as a deep scrub? I couldn't
get one to kick off, until I temporarily increased the max_scrubs and
lowered the scrub_min_interval on all 3 OSDs for that placement group. This
ended up fixing the issue, so I'll leave this here in case somebody else
runs
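For reference, a sketch of changing those two settings at runtime without a
restart; the OSD id and values are placeholders:

    # Allow a second concurrent scrub and shrink the minimum interval
    # on each OSD in the PG's acting set
    ceph tell osd.12 injectargs '--osd_max_scrubs 2 --osd_scrub_min_interval 60'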
I have 4 other slack servers that I'm in for work and personal hobbies.
It's just easier for me to maintain one more slack server than have a
separate application for IRC.
On Thu, Oct 11, 2018, 11:02 AM John Spray wrote:
> On Thu, Oct 11, 2018 at 8:44 AM Marc Roos
> wrote:
> >
> >
> > Why
On Thu, Oct 11, 2018 at 8:44 AM Marc Roos wrote:
>
>
> Why slack anyway?
Just because some people like using it. Don't worry, IRC is still the
primary channel and lots of people don't use slack. I'm not on slack,
for example, which is either a good or bad thing depending on your
perspective
On Thu, Oct 11, 2018 at 9:55 AM Felix Stolte wrote:
>
> Hey folks,
>
> I use nfs-ganesha to export cephfs to nfs. nfs-ganesha can talk to
> cephfs via libcephfs so there is no need for mounting cephfs manually. I
> also like to use directory quotas from cephfs. Anyone knows a way to set
> quota
Hello everyone,
I am trying to apply a bucket policy to my bucket for an LDAP user, but it
doesn't work.
For a user created by radosgw-admin, the policy works fine.
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/radosgw-user"]},
So far there is no way to do this
On Thu, Oct 11, 2018 at 4:54 PM Felix Stolte wrote:
>
> Hey folks,
>
> I use nfs-ganesha to export cephfs to nfs. nfs-ganesha can talk to
> cephfs via libcephfs so there is no need for mounting cephfs manually. I
> also like to use directory quotas from cephfs.
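As far as I know the supported way still goes through a mounted client; a
minimal sketch using the documented CephFS quota xattrs (the path is a
placeholder):

    # Limit a directory tree to 100 GiB and 100k files
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/somedir
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/somedir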
Using the rtslib from the shaman repos did the trick. These work fine:
ceph-iscsi-cli-2.7-54.g9b18a3b.el7.noarch.rpm
python2-kmod-0.9-20.fc29.x86_64.rpm
python2-rtslib-2.1.fb67-3.fc28.noarch.rpm
tcmu-runner-1.4.0-1.el7.x86_64.rpm
ceph-iscsi-config-2.6-42.gccca57d.el7.noarch.rpm
I only tried to use the Ceph CLI once out of curiosity, simply because
it is there, but I don't really benefit from it.
Usually when I'm working with clusters it requires a combination of
different commands (rbd, rados, ceph etc.), so this would mean either
exiting and entering the CLI back
Hey folks,
I use nfs-ganesha to export cephfs to nfs. nfs-ganesha can talk to
cephfs via libcephfs so there is no need for mounting cephfs manually. I
also like to use directory quotas from cephfs. Does anyone know a way to set
quotas on directories without the need to mount the filesystem first?
I was
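For context, a hedged sketch of what such an export can look like in
ganesha.conf with the CEPH FSAL; the ids, paths and cephx user are
placeholders:

    EXPORT {
        Export_Id = 1;
        Path = "/";            # path inside CephFS
        Pseudo = "/cephfs";    # pseudo-root seen by NFS clients
        Access_Type = RW;
        FSAL {
            Name = CEPH;       # talk to CephFS via libcephfs, no mount needed
            User_Id = "ganesha";  # hypothetical cephx user
        }
    }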
Hi, all.
I want to ask whether you have had a similar experience upgrading Jewel RGW to
Luminous. After upgrading the monitors and OSDs, I started two new Luminous
RGWs and put them into the LB together with the Jewel ones. And then
interesting things started to happen. Some of our jobs started to fail with "
fatal
On Wed, Oct 10, 2018 at 16:20, John Spray wrote:
> So the question is: does anyone actually use this feature? It's not
> particularly expensive to maintain, but it might be nice to have one
> less path through the code if this is entirely unused.
It can go as far as I am concerned too.
Better
Why slack anyway?
-Original Message-
From: Konstantin Shalygin [mailto:k0...@k0ste.ru]
Sent: Thursday, October 11, 2018 5:11
To: ceph-users@lists.ceph.com
Subject: *SPAM* Re: [ceph-users] https://ceph-storage.slack.com
> why would a ceph slack be invite only?
Because this
On 10/11/2018 03:12 AM, David Turner wrote:
> Not a resolution, but an idea that you've probably thought of.
> Disabling logging on any affected OSDs (possibly just all of them) seems
> like a needed step to be able to keep working with this cluster to
> finish the upgrade and get it healthier.