> On 4 November 2016 at 2:05, Joao Eduardo Luis wrote:
>
>
> On 11/03/2016 06:18 PM, w...@42on.com wrote:
> >
> >> Personally, I don't like this solution one bit, but I can't see any other
> >> way without a patched monitor, or maybe ceph_monstore_tool.
> >>
> >> If you are
> On 3 November 2016 at 13:09, Joao Eduardo Luis <j...@suse.de> wrote:
>
>
> On 11/03/2016 09:40 AM, Wido den Hollander wrote:
> > root@mon3:/var/lib/ceph/mon# ceph-monstore-tool ceph-mon3 dump-keys|awk
> > '{print $1}'|uniq -c
> > 96 auth
> >
> On 3 November 2016 at 10:46, Wido den Hollander <w...@42on.com> wrote:
>
>
>
> > On 3 November 2016 at 10:42, Dan van der Ster <d...@vanderster.com> wrote:
> >
> >
> > Hi Wido,
> >
> > AFAIK mon's won't trim while a cluster is
AFAIK the MONs trim when all PGs are active+clean. A
cluster can go into WARN state for almost any reason, e.g. old CRUSH tunables.
Will give it a try though.
Wido
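(As a hedged aside, not from the original mail: a quick way to check whether that trimming condition is met is to look at the PG states and at the on-disk size of the mon store; paths below are examples for a default layout.)

$ ceph pg stat                                # all PGs should report active+clean
$ du -sh /var/lib/ceph/mon/*/store.db         # size of each monitor's store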
> -- Dan
>
>
> On Thu, Nov 3, 2016 at 10:40 AM, Wido den Hollander <w...@42on.com> wrote:
> > Hi,
> >
> On 2 November 2016 at 16:21, Sage Weil <s...@newdream.net> wrote:
>
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > > > I'm pretty sure this is a race condition that got cleaned up as part
> > > > > of
> > > > > https
> On 2 November 2016 at 16:00, Sage Weil <s...@newdream.net> wrote:
>
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> > > On 2 November 2016 at 15:06, Sage Weil <s...@newdream.net> wrote:
> > >
> > >
> > > On Wed, 2 Nov 2016
> On 2 November 2016 at 15:06, Sage Weil <s...@newdream.net> wrote:
>
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> >
> > > On 2 November 2016 at 14:30, Sage Weil <s...@newdream.net> wrote:
> > >
> > >
> > > On W
> On 2 November 2016 at 14:30, Sage Weil <s...@newdream.net> wrote:
>
>
> On Wed, 2 Nov 2016, Wido den Hollander wrote:
> >
> > > On 26 October 2016 at 11:18, Wido den Hollander <w...@42on.com> wrote:
> > >
> > >
> &
> On 26 October 2016 at 11:18, Wido den Hollander <w...@42on.com> wrote:
>
>
>
> > On 26 October 2016 at 10:44, Sage Weil <s...@newdream.net> wrote:
> >
> >
> > On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > > On Tue, Oct 25,
> On 31 October 2016 at 11:33, 한승진 wrote:
>
>
> Hi all,
>
> I tested the straw / straw2 bucket types.
>
> The Ceph documentation says the following:
>
>
> - straw2 bucket type fixed several limitations in the original straw
>   bucket
> - *the old straw buckets would change some
to mix
> the scheduler? E.g. CFQ for spinners _and_ noop for SSD?
>
Yes, CFQ for the spinners and noop for the SSD is good. The scrubbing doesn't
touch the journal anyway.
Wido
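(For reference, a hedged sketch of how the scheduler is usually switched per device on kernels using the legacy block layer; the device names are only examples and the setting does not persist across reboots unless set via udev or the kernel command line.)

$ cat /sys/block/sda/queue/scheduler           # shows available and current scheduler
$ echo cfq  > /sys/block/sda/queue/scheduler   # spinner
$ echo noop > /sys/block/sdb/queue/scheduler   # SSD (journal device)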
> K.
>
> On 28-10-16 14:43, Wido den Hollander wrote:
> > Make sur
> On 28 October 2016 at 13:18, Kees Meijs wrote:
>
>
> Hi,
>
> On 28-10-16 12:06, w...@42on.com wrote:
> > I don't like this personally. Your cluster should be capable of doing
> > a deep scrub at any moment. If not, it will also not be able to handle
> > a node failure during
Bringing back to the list
> On 27 October 2016 at 12:08, Ralf Zerres <ralf.zer...@networkx.de> wrote:
>
>
> > Wido den Hollander <w...@42on.com> wrote on 27 October 2016 at 11:51:
> >
> >
> >
> > > Op 27 oktob
> On 27 October 2016 at 11:46, Ralf Zerres <hostmas...@networkx.de> wrote:
>
>
> Here we go ...
>
>
> > Wido den Hollander <w...@42on.com> wrote on 27 October 2016 at 11:35:
> >
> >
> >
> > > Op
> On 27 October 2016 at 11:23, Ralf Zerres wrote:
>
>
> Hello community,
> hello ceph developers,
>
> My name is Ralf, working as an IT consultant. In this particular case I support
> a German customer running a 2 node Ceph cluster.
>
> This customer is
> On 26 October 2016 at 20:44, Brady Deetz wrote:
>
>
> Summary:
> This is a production CephFS cluster. I had an OSD node crash. The cluster
> rebalanced successfully. I brought the down node back online. Everything
> has rebalanced except 1 hung pg and MDS trimming is now
> On 26 October 2016 at 15:51, J David wrote:
>
>
> On Wed, Oct 26, 2016 at 8:55 AM, Andreas Davour wrote:
> > If there is 1 MON in B, that cluster will have quorum within itself and
> > keep running, and in A the MON cluster will vote and reach
> On 26 October 2016 at 10:44, Sage Weil <s...@newdream.net> wrote:
>
>
> On Wed, 26 Oct 2016, Dan van der Ster wrote:
> > On Tue, Oct 25, 2016 at 7:06 AM, Wido den Hollander <w...@42on.com> wrote:
> > >
> > >> Op 24 oktober 2016 om 22:29 sc
> On 25 October 2016 at 18:24, Steffen Weißgerber wrote:
>
>
> Hi,
>
> thank you for answering.
>
>
> >>> Wes Dillingham wrote on Monday, 24 October 2016 at 17:31:
> > What do the logs of the monitor service say? Increase their
>
> On 25 October 2016 at 12:38, Vincent Godin wrote:
>
>
> We have an Openstack which use Ceph for Cinder and Glance. Ceph is in
> Hammer release and we need to upgrade to Jewel. My question is:
> are the Hammer clients compatible with the Jewel servers? (upgrade of
>
>
> From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of David
> Turner [david.tur...@storagecraft.com]
> Sent: Monday, October 24, 2016 2:24 PM
> To: Wido den Hollander; ceph-us...@ceph.com
> Subject: Re: [ceph-users] All PGs are ac
s yet.
The MON stores are 35GB each right now and I think they are not trimming due to
the pg_temp which still exists.
I'll report back later, but this rebalance will take a lot of time.
Wido
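(Hedged aside: two commands that are often useful while waiting, assuming default store locations and a hypothetical mon id of mon1; compaction reclaims disk space but does not change the trimming condition itself.)

$ du -sh /var/lib/ceph/mon/ceph-mon1/store.db   # current size of the mon store
$ ceph tell mon.mon1 compact                    # ask the monitor to compact its store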
> Cheers, Dan
>
> On 24 Oct 2016 22:19, "Wido den Hollander" <w...@42on.co
Hi,
On a cluster running Hammer 0.94.9 (upgraded from Firefly) I have 29 remapped
PGs according to the OSDMap, but all PGs are active+clean.
osdmap e111208: 171 osds: 166 up, 166 in; 29 remapped pgs
pgmap v101069070: 6144 pgs, 2 pools, 90122 GB data, 22787 kobjects
264 TB used, 184 TB /
> On 23 October 2016 at 10:04, Sebastian Köhler wrote:
>
>
> Hello,
>
> is it possible to reduce the replica count of a pool that already
> contains data? If it is possible how much load will a change in the
> replica size cause? I am guessing it will do a rebalance.
>
Yes,
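(The reply is truncated here; purely for reference, changing the replica count is a single pool setting, sketched below with a hypothetical pool name. How much movement and load the change causes is exactly what the truncated reply was addressing, so treat this only as command syntax.)

$ ceph osd pool set mypool size 2
$ ceph osd pool set mypool min_size 1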
> On 21 October 2016 at 21:31, Steffen Weißgerber wrote:
>
>
> Hello,
>
> we're running a 6 node ceph cluster with 3 mons on Ubuntu (14.04.4).
>
> Sometimes it happens that the mon services die and have to be restarted
> manually.
>
That they die is not the thing which
> On 18 October 2016 at 14:06, Dan van der Ster wrote:
>
>
> +1 I would find this warning useful.
>
+1 Probably make it configurable, say, you want at least X standby MDS to be
available before WARN. But in general, yes, please!
Wido
>
>
> On Tue, Oct 18, 2016 at
> On 17 October 2016 at 9:16, Somnath Roy wrote:
>
>
> Hi Sage et al.,
>
> I know this issue has been reported a number of times in the community and attributed to
> either network issues or unresponsive OSDs.
> Recently, we are seeing this issue when our all SSD cluster (Jewel
> On 16 October 2016 at 11:57, "Jon Morby (FidoNet)" wrote:
>
>
> Morning
>
> It’s been a few days now since the outage; however, we’re still unable to
> install new nodes. It seems the repos are broken … and have been for at
> least 2 days now (so not just a brief momentary
> On 14 October 2016 at 19:13, i...@witeq.com wrote:
>
>
> Hi all,
>
> after encountering a warning about one of my OSDs running out of space I
> tried to better understand how data distribution works.
>
100% perfect data distribution is not possible with straw. It is even very hard
to
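(The reply is cut off above. Not from the original thread, but a commonly used knob when one OSD fills up well ahead of the rest is reweighting by utilization; the threshold of 110 below is only an example.)

$ ceph osd df                            # per-OSD utilisation
$ ceph osd reweight-by-utilization 110   # lower the weight of OSDs more than 10% above average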
> On 17 October 2016 at 6:37, xxhdx1985126 wrote:
>
>
> Hi, everyone.
>
>
> If one OSD's state transforms from up to down, by "kill -i" for example, will
> an "AdvMap" event be triggered on other related
> OSDs?
IIRC it
> On 10 October 2016 at 14:56, Matteo Dacrema wrote:
>
>
> Hi,
>
> I’m planning a similar cluster.
> Because it’s a new project I’ll start with only a 2 node cluster which each:
>
2 nodes in a Ceph cluster is way too small in my opinion.
I suggest that you take a lot more
> On 29 September 2016 at 10:06, Ilya Moldovan wrote:
>
>
> Hello!
>
> As far as I know, at a certain number of disks there is no point in using SSDs
> for journals.
Why would that be? SSD journals still improve performance by about 50% in
most situations.
With RBD you
> On 29 September 2016 at 1:57, Tyler Bishop wrote:
>
>
> S1148 is down but the cluster does not mark it as such.
>
A host itself will never be marked as down, but the output does show that all OSDs are
marked as down.
Wido
> cluster
> On 26 September 2016 at 19:51, Sam Yaple <sam...@yaple.net> wrote:
>
>
> On Mon, Sep 26, 2016 at 5:44 PM, Wido den Hollander <w...@42on.com> wrote:
>
> >
> > > On 26 September 2016 at 17:48, Sam Yaple <sam...@yaple.net> wrote:
> > &
> On 28 September 2016 at 0:35, "Nick @ Deltaband" wrote:
>
>
> Hi Cephers,
>
> We need to add two new monitors to a production cluster (0.94.9) which has
> 3 existing monitors. It looks like it's as easy as ceph-deploy mon add <mon>.
>
You are going to add two
> On 26 September 2016 at 17:48, Sam Yaple <sam...@yaple.net> wrote:
>
>
> On Mon, Sep 26, 2016 at 9:31 AM, Wido den Hollander <w...@42on.com> wrote:
>
> > Hi,
> >
> > This has been discussed on the ML before [0], but I would like to bring
> &g
Hi,
This has been discussed on the ML before [0], but I would like to bring this up
again with the outlook towards BlueStore.
Bcache [1] allows for block device level caching in Linux. This can be
read/write(back) and vastly improves read and write performance to a block
device.
With the
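(The mail is cut off above. A hedged sketch of what putting an OSD's disk behind bcache looks like with bcache-tools; /dev/sdb and /dev/nvme0n1 are example devices and would be wiped by these commands.)

$ make-bcache -C /dev/nvme0n1 -B /dev/sdb       # create a cache set on the SSD and a backing device on the HDD
$ echo writeback > /sys/block/bcache0/bcache/cache_mode
# the OSD is then created on top of /dev/bcache0 instead of /dev/sdb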
> On 23 September 2016 at 10:04, mj wrote:
>
>
> Hi,
>
> On 09/23/2016 09:41 AM, Dan van der Ster wrote:
> >> If you care about your data you run with size = 3 and min_size = 2.
> >>
> >> Wido
>
> We're currently running with min_size 1. Can we simply change this,
>
> On 23 September 2016 at 5:59, Chengwei Yang wrote:
>
>
> Hi list,
>
> I found that the ceph repo is broken these days; there is no repodata in the repo at
> all.
>
> http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/
>
> it's just empty, so how can I install
> On 23 September 2016 at 9:11, Tomasz Kuzemko wrote:
>
>
> Hi,
>
> biggest issue with replica size 2 is that if you find an inconsistent
> object you will not be able to tell which copy is the correct one. With
> replica size 3 you could assume that those 2
> On 22 September 2016 at 16:13, Matteo Dacrema wrote:
>
>
> To be more precise, the nodes with a different OS are only the OSD nodes.
>
I haven't seen real issues, but here are a few things I can think of which
*potentially* might be a problem:
- Different tcmalloc version
-
> On 21 September 2016 at 17:23, Iain Buclaw wrote:
>
>
> On 20 September 2016 at 19:27, Gregory Farnum wrote:
> > In librados getting a stat is basically equivalent to reading a small
> > object; there's not an index or anything so FileStore needs to
> On 20 September 2016 at 21:23, Heath Albritton wrote:
>
>
> I'm wondering if anyone has some tips for managing different types of
> pools, each of which fall on a different type of OSD.
>
> Right now, I have a small cluster running with two kinds of OSD nodes,
> ones
> On 20 September 2016 at 20:30, Haomai Wang <hao...@xsky.com> wrote:
>
>
> On Wed, Sep 21, 2016 at 2:26 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> >> On 20 September 2016 at 19:27, Gregory Farnum <gfar...@redhat.com> wrote:
> >
> On 20 September 2016 at 19:27, Gregory Farnum wrote:
>
>
> In librados getting a stat is basically equivalent to reading a small
> object; there's not an index or anything so FileStore needs to descend its
> folder hierarchy. If looking at metadata for all the objects in
> On 20 September 2016 at 10:55, Василий Ангапов wrote:
>
>
> Hello,
>
> Is there any way to copy rgw bucket index to another Ceph node to
> lower the downtime of RGW? For now I have a huge bucket with 200
> million files and its backfilling is blocking RGW completely for
> On 15 September 2016 at 13:27, Kostis Fardelas wrote:
>
>
> Hello cephers,
> being in a degraded cluster state with 6/162 OSDs down (Hammer
> 0.94.7, 162 OSDs, 27 "fat" nodes, 1000s of clients), as the below
> ceph cluster log indicates:
>
> 2016-09-12
> On 15 September 2016 at 10:40, Florent B <flor...@coppint.com> wrote:
>
>
> On 09/15/2016 10:37 AM, Wido den Hollander wrote:
> >> On 15 September 2016 at 10:34, Florent B <flor...@coppint.com> wrote:
> >>
> >>
> >> Hi everyone,
> On 14 September 2016 at 14:56, "Dennis Kramer (DT)" wrote:
>
>
> Hi Burkhard,
>
> Thank you for your reply, see inline:
>
> On Wed, 14 Sep 2016, Burkhard Linke wrote:
>
> > Hi,
> >
> >
> > On 09/14/2016 12:43 PM, Dennis Kramer (DT) wrote:
> >> Hi Goncalo,
> >>
> >>
> On 15 September 2016 at 10:34, Florent B wrote:
>
>
> Hi everyone,
>
> I have a Ceph cluster on Jewel.
>
> Monitors are on 32GB ram hosts.
>
> After a few days, the ceph-mon process uses 25 to 35% of 32GB (8 to 11 GB):
>
> 1150 ceph 20 0 15.454g 7.983g 7852 S
version are you running on the client?
Wido
> Jon
>
> On 9/13/2016 11:17 AM, Wido den Hollander wrote:
>
> >> On 13 September 2016 at 15:58, "WRIGHT, JON R (JON R)"
> >> <jonrodwri...@gmail.com> wrote:
> >>
> >>
> >> Yes,
not always, but it is just that I saw this happening recently after a Jewel
upgrade.
What version are the client(s) still running?
Wido
> Thanks,
>
> Jon
>
>
> On 9/12/2016 4:05 PM, Wido den Hollander wrote:
> >> Op 12 september 2016 om 18:47 schreef "WRIGHT, JON R (J
> On 12 September 2016 at 16:14, Василий Ангапов wrote:
>
>
> Hello, colleagues!
>
> I have Ceph Jewel cluster of 10 nodes (Centos 7 kernel 4.7.0), 290
> OSDs total with journals on SSDs. Network is 2x10Gb public and 2x10GB
> cluster.
> I do constantly see periodic slow
> On 12 September 2016 at 18:47, "WRIGHT, JON R (JON R)" wrote:
>
>
> Since upgrading to Jewel from Hammer, we've started to see HEALTH_WARN
> because of 'blocked requests > 32 sec'. Seems to be related to writes.
>
> Has anyone else seen this? Or can anyone
> On 8 September 2016 at 14:58, thomas.swinde...@yahoo.com wrote:
>
>
> We've been doing some performance testing on Bluestore to see whether it
> could be viable to use in the future.
> The good news is we are seeing significant performance improvements using it,
> so thank you for all the
> On 8 September 2016 at 15:02, Jim Kilborn wrote:
>
>
> Hello all…
>
> I am setting up a ceph cluster (jewel) on a private network. The compute
> nodes are all running centos 7 and mounting the cephfs volume using the
> kernel driver. The ceph storage nodes are dual
Hi,
I've been setting up a RGW Multi-Site [0] configuration in 6 VMs. 3 VMs per
cluster and one RGW per cluster.
Works just fine, I can create a user in the master zone, create buckets and
upload data using s3cmd (S3).
What I see is that ALL data is synced between the two zones. While I
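(The mail is cut off above. As a hedged aside: the per-zone replication state of such a setup can be inspected with the admin tool; the zone names are whatever was configured for the multisite setup.)

$ radosgw-admin sync status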
> On 6 September 2016 at 14:43, John Spray <jsp...@redhat.com> wrote:
>
>
> On Tue, Sep 6, 2016 at 1:12 PM, Wido den Hollander <w...@42on.com> wrote:
> > Hi,
> >
> > Recent threads on the ML revealed that the Ceph MDS can benefit from
> >
Hi,
Recent threads on the ML revealed that the Ceph MDS can benefit from using a
fast single-threaded CPU.
Some tasks inside the MDS are still single-threaded operations, so the faster
the code executes, the better.
Keeping that in mind I started to look at some benchmarks:
> On 1 September 2016 at 17:37, Iain Buclaw <ibuc...@gmail.com> wrote:
>
>
> On 16 August 2016 at 17:13, Wido den Hollander <w...@42on.com> wrote:
> >
> >> On 16 August 2016 at 15:59, Iain Buclaw <ibuc...@gmail.com> wrote:
> >>
> >
2 active+recovering+degraded
> > 2 undersized+degraded+remapped+peered
> > 1 stale+active+clean+scrubbing+deep+inconsistent+repair
> > 1 active+clean+scrubbing+deep
> > 1 active+clean+scrubb
> On 31 August 2016 at 22:56, Reed Dier wrote:
>
>
> After a power failure left our jewel cluster crippled, I have hit a sticking
> point in attempted recovery.
>
> Out of 8 OSDs, we likely lost 5-6; trying to salvage what we can.
>
That's probably too much. How
> On 31 August 2016 at 22:14, Gregory Farnum wrote:
>
>
> On Tue, Aug 30, 2016 at 2:17 AM, Andrei Mikhailovsky
> wrote:
> > Hello
> >
> > I've got a small cluster of 3 osd servers and 30 osds between them running
> > Jewel 10.2.2 on Ubuntu 16.04 LTS
> On 31 August 2016 at 15:28, Daniel Gryniewicz <d...@redhat.com> wrote:
>
>
> I believe this is a Ganesha bug, as discussed on the Ganesha list.
>
Ah, thanks. Do you maybe have a link or subject so I can chime in?
Wido
> Daniel
>
> On 08/31/2016 06:5
> On 31 August 2016 at 12:42, John Spray <jsp...@redhat.com> wrote:
>
>
> On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander <w...@42on.com> wrote:
> > Hi,
> >
> > I have a CephFS filesystem which is re-exported through NFS Ganesha
> > (v
Hi,
I have a CephFS filesystem which is re-exported through NFS Ganesha (v2.3.0)
with Ceph 10.2.2
The export works fine, but when calling a chgrp on a file the UID is set to
root.
Example list of commands:
$ chown www-data:www-data myfile
That works, file is now owned by www-data/www-data
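(The original example is truncated here; a hedged reconstruction of the failing step as described above would be:)

$ chgrp www-data myfile
$ ls -l myfile    # through the Ganesha mount the owner now unexpectedly shows as root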
> On 31 August 2016 at 9:51, "Brian ::" wrote:
>
>
> Amazing improvements to performance in the preview now.. I wonder
Indeed, great work!
> will there be a filestore --> bluestore upgrade path...
>
Yes and no. Since the OSD API doesn't change, you can 'simply':
1.
> On 30 August 2016 at 12:59, "Dennis Kramer (DBS)" wrote:
>
>
> Hi Goncalo,
>
> Thank you for providing below info. I'm getting the exact same errors:
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> 1: (()+0x2ae88e) [0x5647a76f488e]
> 2: (()+0x113d0)
> On 30 August 2016 at 10:16, Ishmael Tsoaela wrote:
>
>
> Hi All,
>
>
> Is there a way to have ceph reweight osd automatically?
No, there is none at the moment.
>
> As well, could an OSD reaching 92% cause the entire cluster to reboot?
>
No, it will block, but it
> On 25 August 2016 at 12:14, Steffen Weißgerber
> <weissgerb...@ksnb.de> wrote:
>
>
>
>
>
> Hi,
>
>
> >>> Wido den Hollander <w...@42on.com> wrote on Tuesday, 9 August 2016 at
> 10:05:
>
> >> Op 8 augustus 2016
> On 25 August 2016 at 19:31, "Deneau, Tom" wrote:
>
>
> If I have an rbd image that is being used by a VM and I want to mount it
> as a read-only /dev/rbd0 kernel device, is that possible?
>
> When I try it I get:
>
> mount: /dev/rbd0 is write-protected, mounting
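(The quoted error is cut off above. For reference, a hedged sketch of mapping and mounting read-only; the pool/image names are examples, and the extra mount option depends on the filesystem: norecovery for XFS, noload for ext4, needed because even a read-only mount otherwise tries to replay the journal.)

$ rbd map --read-only rbd/myimage
$ mount -o ro,norecovery /dev/rbd0 /mnt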
Hi Dan,
Not on my list currently. I think it's not that difficult, but I never got
around to maintaining rados-java and keeping up with librados.
You are more than welcome to send a Pull Request though!
https://github.com/ceph/rados-java/pulls
Wido
> On 24 August 2016 at 21:58, Dan
Hi Ricardo (and rest),
I see that http://tracker.ceph.com/issues/14527 /
https://github.com/ceph/ceph/pull/7741 has been merged which would allow
clients and daemons to find their Monitors through DNS.
mon_dns_srv_name is set to ceph-mon by default, so if I'm correct, this would
work?
Let's
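(The mail is cut off above. A hedged sketch of the DNS records such a lookup would need, using example.com and documentation addresses; the SRV name matches the mon_dns_srv_name default of ceph-mon.)

_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
mon1.example.com.           3600 IN A   192.0.2.11
mon2.example.com.           3600 IN A   192.0.2.12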
> On 23 August 2016 at 22:24, Nick Fisk <n...@fisk.me.uk> wrote:
>
>
>
>
> > -----Original Message-----
> > From: Wido den Hollander [mailto:w...@42on.com]
> > Sent: 23 August 2016 19:45
> > To: Ilya Dryomov <idryo...@gmail.com>; Nick Fi
> On 23 August 2016 at 18:32, Ilya Dryomov <idryo...@gmail.com> wrote:
>
>
> On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk <n...@fisk.me.uk> wrote:
> >> -----Original Message-----
> >> From: Wido den Hollander [mailto:w...@42on.com]
> >> S
> On 22 August 2016 at 21:22, Nick Fisk <n...@fisk.me.uk> wrote:
>
>
> > -----Original Message-----
> > From: Wido den Hollander [mailto:w...@42on.com]
> > Sent: 22 August 2016 18:22
> > To: ceph-users <ceph-users@lists.ceph.com>; n...@fisk.
> On 22 August 2016 at 15:52, Christian Balzer wrote:
>
>
>
> Hello,
>
> first off, not a CephFS user, just installed it on a lab setup for fun.
> That being said, I tend to read most posts here.
>
> And I do remember participating in similar discussions.
>
> On Mon, 22
> On 22 August 2016 at 15:17, Nick Fisk wrote:
>
>
> Hope it's useful to someone
>
> https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6
>
Thanks for sharing. Might this be worth adding to ceph-common?
And is 16MB something we should want by default or
> On 21 August 2016 at 10:26, "Brian ::" wrote:
>
>
> If you point at the eu.ceph.com
>
> ceph.apt-get.eu has address 185.27.175.43
>
> ceph.apt-get.eu has IPv6 address 2a00:f10:121:400:48c:baff:fe00:477
>
Yes, however, keep in mind that IPs might change without notice.
> On 17 August 2016 at 23:54, Dan Jakubiec <dan.jakub...@gmail.com> wrote:
>
>
> Hi Wido,
>
> Thank you for the response:
>
> > On Aug 17, 2016, at 16:25, Wido den Hollander <w...@42on.com> wrote:
> >
> >
> >> Op 17 augustu
> On 17 August 2016 at 17:44, Dan Jakubiec wrote:
>
>
> Hello, we have a Ceph cluster with 8 OSDs that recently lost power to all 8
> machines. We've managed to recover the XFS filesystems on 7 of the machines,
> but the OSD service is only starting on 1 of them.
>
> On 16 August 2016 at 15:59, Iain Buclaw wrote:
>
>
> Hi,
>
> I've been slowly getting some insight into this, but I haven't yet
> found any compromise that works well.
>
> I'm currently testing ceph using librados C bindings directly, there
> are two components that
> On 15 August 2016 at 9:54, Chengwei Yang wrote:
>
>
> Hi List,
>
> I read from ceph document[1] that there are several rbd image features
>
> - layering: layering support
> - striping: striping v2 support
> - exclusive-lock: exclusive locking support
. I have the fsid, the Mon key map and all of
> the osds look to be fine so all of the previous osd maps are there.
>
> I just don't understand what key/values I need inside.
>
> On Aug 11, 2016 1:33 AM, "Wido den Hollander" <w...@42on.com> wrote:
>
> >
>
> 4336 active+clean
>client io 0 B/s rd, 72112 B/s wr, 7 op/s
>
Ok, that's good. Monitors don't trim the logs when the cluster isn't healthy,
but yours is.
Wido
>
> Zitat von Wido den Hollander <w...@42on.com>:
>
> >> On 11 August 2016 at 9:56
> On 11 August 2016 at 9:56, Eugen Block wrote:
>
>
> Hi list,
>
> we have a working cluster based on Hammer with 4 nodes, 19 OSDs and 3 MONs.
> Now after a couple of weeks we noticed that we're running out of disk
> space on one of the nodes in /var.
> Similar to [1]
> On 11 August 2016 at 0:10, Sean Sullivan wrote:
>
>
> I think it just got worse::
>
> all three monitors on my other cluster say that ceph-mon can't open
> /var/lib/ceph/mon/$(hostname). Is there any way to recover if you lose all
> 3 monitors? I saw a post by
should see
all PGs active+X.
Then the waiting game starts: get coffee, some sleep, and wait for it to finish.
By throttling recovery you prevent this from becoming slow for the clients.
Wido
> Best,
> Martin
>
> On Tue, Aug 9, 2016 at 10:05 AM, Wido den Hollander <w...@42on.com>
> On 9 August 2016 at 16:36, Александр Пивушков wrote:
>
>
> > >> Hello dear community!
> >> >> I'm new to Ceph and not long ago took up the topic of building
> >> >> clusters.
> >> >> Therefore your opinion is very important to me.
> >> >> It is necessary to create a
> On 8 August 2016 at 16:45, Martin Palma wrote:
>
>
> Hi all,
>
> we are in the process of expanding our cluster and I would like to
> know if there are some best practices in doing so.
>
> Our current cluster is composted as follows:
> - 195 OSDs (14 Storage Nodes)
> -
> On 8 August 2016 at 12:49, John Spray wrote:
>
>
> On Mon, Aug 8, 2016 at 9:26 AM, Dmitriy Lysenko wrote:
> > Good day.
> >
> > My CephFS switched to read-only.
> > This problem occurred previously on Hammer, but I recreated cephfs, upgraded to
> > Jewel
> On 4 August 2016 at 18:17, Shain Miley wrote:
>
>
> Hello,
>
> I am thinking about setting up a second Ceph cluster in the near future,
> and I was wondering about the current status of rbd-mirror.
>
I don't have all the answers, but I will give it a try.
> 1)is it
nks!
>
>
>
> From: Wido den Hollander <w...@42on.com>
> Sent: Wednesday, 3 August 2016 10:30
> To: Rob Reus; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CRUSH map utilization issue
>
>
> > Op 3 augustus 2016 om
> On 3 August 2016 at 10:08, Rob Reus wrote:
>
>
> Hi all,
>
>
> I built a CRUSH map, with the goal to distinguish between SSD and HDD storage
> machines using only 1 root. The map can be found here:
> http://pastebin.com/VQdB0CE9
>
>
> The issue I am having is this:
> On 30 July 2016 at 8:51, Richard Thornton wrote:
>
>
> Hi,
>
> Thanks for taking a look, any help you can give would be much appreciated.
>
> In the next few months or so I would like to implement Ceph for my
> small business because it sounds cool and I love
> On 29 July 2016 at 16:30, Chengwei Yang <chengwei.yang...@gmail.com> wrote:
>
>
> On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> >
> > > Op 29 juli 2016 om 13:20 schreef Chengwei Yang
> > > <chengwei.yang...
> On 29 July 2016 at 13:20, Chengwei Yang wrote:
>
>
> Hi Christian,
>
> Thanks for your reply. Since I really don't like the HEALTH_WARN, and it is
> not allowed to decrease pg_num of a pool.
>
> So can I just remove the default **rbd** pool and re-create it
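(The thread is truncated before the answer. Purely for reference, deleting and recreating a pool looks like the following, where 64 is only an example pg_num; removing a pool irreversibly destroys its data, hence the deliberately long safety flag.)

$ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
$ ceph osd pool create rbd 64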
> On 29 July 2016 at 11:59, Dan van der Ster wrote:
>
>
> Oh yes, that should help. BTW, which client are people using for the
> Admin Ops API? Is there something better than s3curl.pl ...
>
I wrote my own client a while ago, but that's kind of buggy :)
You might want
> On 27 July 2016 at 12:48, jerry wrote:
>
>
> Hello everyone,
>
>
> I want to list the objects stored in a specified placement group through
> the rados API. Do you know how to deal with
> it?
As far as I know that's not
> On 24 July 2016 at 21:58, Frank Enderle wrote:
>
>
> Hi,
>
> a while ago I updated a cluster from Infernalis to Jewel. After the update
> some problems occured, which I fixed (I had to create some additional pool
> which I was helped with in the IRC channel) -