> [mon.node1]
> host = node1
> mon data = /var/lib/ceph/mon/ceph-node1/
>
> [mon.node3]
> host = node3
> mon data = /var/lib/ceph/mon/ceph-node3/
>
> [mon.node2]
> host = node2
> mon data = /var/lib/ceph/mon/ceph-node2/
>
> [mon.node4]
> host = node4
> mon data = /var/lib/ceph/mon/ceph-node4/
> http://docs.ceph.com/docs/hammer/rados/api/librados-intro/#getting-librados-for-java
> ---
>
> Thanks again! It's great to get some friendly support; it kept me searching...
>
> Best Regards,
>
> Kees
>
>
>
> On 28-12-2015 at 15:28, Wido den Hollander wrote:
> Kees
>
>
>
>>>>> // close the image
>>>>> rbd.close(image);
>>>>> // close the connection
>>>>> rados.ioCtxDestroy(ioctx);
>>>>> rados.shutDown();
>>>>> } catch (RadosException ex) {
>>>>> Logger.getLogger(ApiFuncti
alternative to the Intel's
> 3700/3500 series.
>
> Thanks
>
> Andrei
>
> - Original Message -
>> From: "Wido den Hollander" <w...@42on.com>
>> To: "ceph-users" <ceph-users@lists.ceph.com>
>> Sent: Monday, 21
ls,
> but several variables are not always easy to predict and probably will
> change during the life of your cluster.
>
> Lionel
e same tests if I can. Interesting to see how they perform.
> Best regards,
>
> Lionel
On 12/17/2015 05:27 PM, Florian Haas wrote:
> Hey Wido,
>
> On Dec 17, 2015 09:52, "Wido den Hollander" <w...@42on.com
> <mailto:w...@42on.com>> wrote:
>>
>> On 12/17/2015 06:29 AM, Ben Hines wrote:
>> >
>> >
>> > On Wed, D
On 21-12-15 10:34, Florian Haas wrote:
> On Mon, Dec 21, 2015 at 10:20 AM, Wido den Hollander <w...@42on.com> wrote:
>>>>> Oh, and to answer this part. I didn't do that much experimentation
>>>>> unfortunately. I actually am using about 24 index shards p
properly.
Wido
On 05-08-15 16:15, Wido den Hollander wrote:
> Hi,
>
> One of the first things I want to do as the Ceph User Committee is set
> up a proper mirror system for Ceph.
>
> Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks
> Matthew!), but this isn't
antly improved, but i am not
> sure how much. A faster cluster could probably handle bigger indexes.
>
> -Ben
>
>
>
for example now has native
RADOS support using phprados.
Isn't ownCloud something that could work? Talking native RADOS is always
the best.
Wido
>
> Kind Regards,
> Alex.
>
>
>
> --
> Alex Gorbachev
> Storcium
>
>
ersé. It just wants a mount point
where it can write data to.
You can always manually bootstrap a cluster if you want to.
>
>
>
On 26-11-15 07:58, Wido den Hollander wrote:
> On 11/25/2015 10:46 PM, Gregory Farnum wrote:
>> On Wed, Nov 25, 2015 at 11:09 AM, Wido den Hollander <w...@42on.com> wrote:
>>> Hi,
>>>
>>> Currently we have OK, WARN and ERR as states for a Ceph cluste
>
> I don't see any error packets or drops on switches either.
>
> Ideas?
>
>
>
On 29-11-15 20:20, misa-c...@hudrydum.cz wrote:
> Hi everyone,
>
> for my pet project I needed a python3 rados library. So I took the
> existing python2 rados code and cleaned it up a little bit to fit my needs. The
> lib contains basic interface, asynchronous operations and also asyncio
>
On 30-11-15 10:08, Carsten Schmitt wrote:
> Hi all,
>
> I'm running ceph version 0.94.5 and I need to downsize my servers
> because of insufficient RAM.
>
> So I want to remove OSDs from the cluster and according to the manual
> it's a pretty straightforward process:
> I'm beginning with "ceph
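The quoted message is cut off here. For reference, a hedged sketch of the removal sequence the documentation describes (<id> is a placeholder OSD number; the service command varies per init system):

  ceph osd out <id>
  # wait until the cluster has finished rebalancing (watch 'ceph -w' / 'ceph health')
  service ceph stop osd.<id>      # run on the host that carries the OSD
  ceph osd crush remove osd.<id>
  ceph auth del osd.<id>
  ceph osd rm <id>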
gwdefgh43
>>
>> .bucket.meta.rgwdefghijklm119:default.6066.25
>>
>> rgwdefghijklm200
>>
>> .bucket.meta.rgwxghi2:default.5203.4
>>
>> rgwxjk17
>>
>> rgwdefghijklm196
>>
>>
>>
>> ...
>>
>>
ome into action. <= WARN is just a thing you might
want to look into, but not at 03:00 on Sunday morning.
Does this sound reasonable?
On 11/25/2015 10:46 PM, Gregory Farnum wrote:
> On Wed, Nov 25, 2015 at 11:09 AM, Wido den Hollander <w...@42on.com> wrote:
>> Hi,
>>
>> Currently we have OK, WARN and ERR as states for a Ceph cluster.
>>
>> Now, it could happen that while a Ceph
ems this pool has the buckets listed by the radosgw-admin command.
>
>
>
> Can anybody explain what the *.rgw pool* is supposed to contain?
>
>
This pool contains only the bucket metadata objects; here it references
the internal IDs.
You can fetch this with 'radosgw-adm
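The command above is cut off; as a hedged reference, the bucket metadata can be listed and fetched like this (<bucket-name> is a placeholder):

  radosgw-admin metadata list bucket
  radosgw-admin metadata get bucket:<bucket-name>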
with the name 'rack', that's probably missing.
How many racks do you have? Two? I don't fully understand what you are
trying to do.
>
>
> Any help would be welcome :)
On 17-11-15 11:07, Patrik Plank wrote:
> Hi,
>
>
> maybe a trivial question :-||
>
> I have to shut down all my ceph nodes.
>
> What's the best way to do this?
>
> Can I just shut down all nodes or should I
>
> first shut down the ceph process?
>
First, set the noout flag in the
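The reply is cut off here; a minimal sketch of the usual sequence, assuming Hammer-era init scripts (adjust the service command for your init system):

  # keep CRUSH from marking the stopped OSDs out and starting recovery
  ceph osd set noout

  # then stop the Ceph daemons on every node and power the nodes off
  service ceph stop

  # after the nodes are back up and the daemons are running again
  ceph osd unset noout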
at should I do? Do you recommend any specific procedure?
>
> Thanks a lot.
> Jose Tavares
>
On 11/13/2015 09:11 AM, Karan Singh wrote:
>
>
>> On 11 Nov 2015, at 22:49, David Clarke <dav...@catalyst.net.nz> wrote:
>>
>> On 12/11/15 09:37, Gregory Farnum wrote:
>>> On Wednesday, November 11, 2015, Wido den Hollander <w...@42on.com
>>>
On 13-11-15 10:56, Jens Rosenboom wrote:
> 2015-10-20 16:00 GMT+02:00 Wido den Hollander <w...@42on.com>:
> ...
>> The system consists out of 39 hosts:
>>
>> 2U SuperMicro chassis:
>> * 80GB Intel SSD for OS
>> * 240GB Intel S3700 SSD for Journaling +
On 11/10/2015 09:49 PM, Vickey Singh wrote:
> On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander <w...@42on.com> wrote:
>
>> On 11/09/2015 05:27 PM, Vickey Singh wrote:
>>> Hello Ceph Geeks
>>>
>>> Need your comments with my understanding on str
/var/lib/ceph/mon and 46 disks with OSD data.
Wido
>
> On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander <w...@42on.com> wrote:
>> Hi,
>>
>> Recently I got my hands on a Ceph cluster which was pretty damaged due
>> to a human error.
>>
>> I had no ce
Hi,
Recently I got my hands on a Ceph cluster which was pretty damaged due
to a human error.
I had no ceph.conf nor did I have any original Operating System data.
With just the MON/OSD data I had to rebuild the cluster by manually
re-writing the ceph.conf and installing Ceph.
The problem was,
Wido
> Please suggest
>
> Thank You in advance.
>
> - Vickey -
>
>
>
ists.ceph.com] On Behalf Of Wido
> den Hollander
> Sent: October-28-15 5:49 AM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph OSDs with bcache experience
>
>
>
> On 21-10-15 15:30, Mark Nelson wrote:
>>
>>
>> On 10/21/2015 01:59 AM, Wido den H
ill get back the RBD image by reverting it in
a special way. With a special cephx capability for example.
This goes a bit in the direction of soft pool-removals as well, it might
be combined.
Comments?
t; "control_pool": ".eu-zone1.rgw.control",
>> "gc_pool": ".eu-zone1.rgw.gc",
>> "log_pool": ".eu-zone1.log",
>> "intent_log_pool": ".eu-zone1.intent-log",
>> "usage_log_pool&q
On 03-11-15 01:54, Voloshanenko Igor wrote:
> Thank you, Jason!
>
> Any advice, for troubleshooting
>
> I'm looking in the code, and right now I don't see any bad things :(
>
Can you run the CloudStack Agent in DEBUG mode and then see after which
lines in the logs it crashes?
Wido
>
>
> So, almost always it's an exception after RbdUnprotect, then in approx.
> 20 minutes - crash..
> Almost all the time it happens after GetVmStatsCommand or Disks
> stats... Possibly the evil is hidden in the UpadteDiskInfo method... but
> I can't find any bad
On 02-11-15 12:30, Loris Cuoghi wrote:
> Hi All,
>
> We're currently on version 0.94.5 with three monitors and 75 OSDs.
>
> I've peeked at the decompiled CRUSH map, and I see that all ids are
> commented with '# Here be dragons!', or more literally : '# do not
> change unnecessarily'.
>
>
On 02-11-15 11:56, Jan Schermer wrote:
> Can those hints be disabled somehow? I was battling XFS preallocation
> the other day, and the mount option didn't make any difference - maybe
> because those hints have precedence (which could mean they aren't
> working as they should), maybe not.
>
On 29-10-15 16:38, Voloshanenko Igor wrote:
> Hi Wido and all community.
>
> We caught a very idiotic issue on our CloudStack installation, which
> is related to ceph and possibly to the java-rados lib.
>
I think you ran into this one:
https://issues.apache.org/jira/browse/CLOUDSTACK-8879
Cleaning
On 21-10-15 15:30, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>>>> Hi,
>>>>
>>>> In the "newstore direction"
On 27-10-15 09:51, Björn Lässig wrote:
> Hi,
>
> after having some problems with ipv6 and download.ceph.com, i made a
> mirror (debian-hammer only) for my ipv6-only cluster.
>
I see you are from Germany, you can also sync from eu.ceph.com
> Unfortunately after the release of 0.94.5 the rsync
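The quoted message is cut off. As a hedged illustration of such a mirror sync (the rsync module path on eu.ceph.com is an assumption here, check the mirror's documentation):

  rsync -avrt --delete rsync://eu.ceph.com/ceph/ /srv/mirror/ceph/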
On 27-10-15 11:45, Björn Lässig wrote:
> On 10/27/2015 10:22 AM, Wido den Hollander wrote:
>> On 27-10-15 09:51, Björn Lässig wrote:
>>> after having some problems with ipv6 and download.ceph.com, i made a
>>> mirror (debian-hammer only) for my ipv6-only cluster.
>&
On 26-10-15 14:29, Matteo Dacrema wrote:
> Hi Nick,
>
>
>
> I also tried to increase iodepth but nothing has changed.
>
>
>
> With iostat I noticed that the disk is fully utilized and write per
> seconds from iostat match fio output.
>
Ceph isn't fully optimized to get the maximum
3 is safe.
2 replicas isn't safe, no matter how big or small the cluster is. With
disks becoming larger recovery times will grow. In that window you don't
want to run on a single replica.
> thanks.
>
>
>
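For the replica discussion above, a hedged example of how such pool settings are applied (<pool> is a placeholder):

  # keep 3 copies of every object, but keep serving I/O with 2 available
  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2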
On 23-10-15 14:58, Jon Heese wrote:
> Hello,
>
>
>
> We have two separate networks in our Ceph cluster design:
>
>
>
> 10.197.5.0/24 - The "front end" network, "skinny pipe", all 1Gbe,
> intended to be a management or control plane network
>
> 10.174.1.0/24 - The "back end" network,
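The quoted question is cut off here. As a hedged sketch only, such a split is usually expressed in ceph.conf with the subnets quoted above (this is not necessarily the advice given later in the thread):

  [global]
  public network = 10.197.5.0/24    # client and monitor traffic
  cluster network = 10.174.1.0/24   # OSD replication and recovery traffic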
On 10/21/2015 03:30 PM, Mark Nelson wrote:
>
>
> On 10/21/2015 01:59 AM, Wido den Hollander wrote:
>> On 10/20/2015 07:44 PM, Mark Nelson wrote:
>>> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>>>> Hi,
>>>>
>>>> In the "newstore direction"
sume only the kernel resident RBD module matters.
>
> Any thoughts or pointers appreciated.
>
> ~jpr
On 10/21/2015 11:25 AM, Jan Schermer wrote:
>
>> On 21 Oct 2015, at 09:11, Wido den Hollander <w...@42on.com> wrote:
>>
>> On 10/20/2015 09:45 PM, Martin Millnert wrote:
>>> The thing that worries me with your next-gen design (actually your current
>&g
On 10/20/2015 07:44 PM, Mark Nelson wrote:
> On 10/20/2015 09:00 AM, Wido den Hollander wrote:
>> Hi,
>>
>> In the "newstore direction" thread on ceph-devel I wrote that I'm using
>> bcache in production and Mark Nelson asked me to share some details.
>&g
y allocate only 1TB of the
SSD and leave 200GB of cells spare so the Wear-Leveling inside the SSD
has some spare cells.
Wido
>
> ---- Original message
> From: Wido den Hollander <w...@42on.com>
> Date: 20/10/2015 16:00 (GMT+01:00)
> To: ceph-users <c
Hi,
In the "newstore direction" thread on ceph-devel I wrote that I'm using
bcache in production and Mark Nelson asked me to share some details.
Bcache is running in two clusters now that I manage, but I'll keep this
information to one of them (the one at PCextreme behind CloudStack).
In this
On 15-10-15 13:56, Luis Periquito wrote:
> I've been trying to find a way to limit the number of request an user
> can make the radosgw per unit of time - first thing developers done
> here is as fast as possible parallel queries to the radosgw, making it
> very slow.
>
> I've looked into
Hi,
Not to complain or flame about it, but I see a lot of messages which are
being sent to both ceph-users and ceph-devel.
Imho that defeats the purpose of having a users and a devel list, doesn't it?
The problem is that messages go to both lists and users hit reply-all
again and so it continues.
For
On 14-10-15 16:30, Björn Lässig wrote:
> On 10/13/2015 11:01 PM, Sage Weil wrote:
>> http://download.ceph.com/debian-testing
>
> unfortunately this site is not reachable at the moment.
>
>
> $ wget http://download.ceph.com/debian-testing/dists/wheezy/InRelease -O -
> --2015-10-14
exist.
Any objections against mirroring the pubkey there as well? If not, could
somebody do it?
On 10/14/2015 06:50 PM, Björn Lässig wrote:
> On 10/14/2015 05:11 PM, Wido den Hollander wrote:
>>
>>
>> On 14-10-15 16:30, Björn Lässig wrote:
>>> On 10/13/2015 11:01 PM, Sage Weil wrote:
>>>> http://download.ceph.com/debian-testing
>>>
>>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
> den Hollander
> Sent: Thursday, October 08, 2015 10:06 PM
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] How to improve 'rbd ls [pool]' response time
>
> On 10/08/2015 10:46 AM,
On 02-10-15 14:16, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> we accidentally added zeros to all our rbd images. So all images are no
> longer thin provisioned. As we do not have access to the qemu guests
> running those images. Is there any other options to trim them again?
>
Rough guess,
On 30-09-15 19:09, Alkaid wrote:
> I try to update packages today, but I got a "connection reset by peer"
> error every time.
> It seems that the server will block my IP if I request a little
> frequently ( refresh page a few times manually per second).
> I guess yum downloads packages in
On 30-09-15 14:19, Mark Nelson wrote:
> On 09/29/2015 04:56 PM, J David wrote:
>> On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh
>> wrote:
The density would be higher than the 36 drive units but lower than the
72 drive units (though with shorter rack
/debian-).
>
Seems like an IPv6 routing issue. If you need to, you can always use eu.ceph.com
to download your packages.
> Regards
>
>
>
On 24-09-15 11:06, Ilya Dryomov wrote:
> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
>> about striping. If you write your own
On 23-09-15 13:38, Olivier Bonvalet wrote:
> Hi,
>
> for several hours now http://ceph.com/ hasn't been replying over IPv6.
> It pings, and we can open TCP socket, but nothing more :
>
>
> ~$ nc -w30 -v -6 ceph.com 80
> Connection to ceph.com 80 port [tcp/http] succeeded!
> GET /
On 23-09-15 03:49, Dan Mick wrote:
> On 09/22/2015 05:22 AM, Sage Weil wrote:
>> On Tue, 22 Sep 2015, Wido den Hollander wrote:
>>> Hi,
>>>
>>> After the recent changes in the Ceph website the IPv6 connectivity got lost.
>>>
>>>
Hi,
http://ceph.com/maven/ no longer works.
This maven repository was used to host the rados-java bindings, but also
the cephfs java bindings.
Can we put this location back up again?
Wido
Hi,
After the recent changes in the Ceph website the IPv6 connectivity got lost.
www.ceph.com
docs.ceph.com
download.ceph.com
git.ceph.com
The problem I'm now facing with a couple of systems is that they can't
download the Package signing key from git.ceph.com or anything from
download.ceph.com
On 21-09-15 13:18, Dan van der Ster wrote:
> On Mon, Sep 21, 2015 at 12:11 PM, Wido den Hollander <w...@42on.com> wrote:
>> You can also change 'straw_calc_version' to 2 in the CRUSHMap.
>
> AFAIK straw_calc_version = 1 is the optimal. straw_calc_version = 2 is
> no
Hi,
Since the security notice regarding ceph.com the mirroring system broke.
This meant that eu.ceph.com didn't serve new packages since the whole
download system changed.
I didn't have much time to fix this, but today I resolved it by
installing Varnish [0] on eu.ceph.com
The VCL which is
On 21-09-15 15:57, Dan van der Ster wrote:
> On Mon, Sep 21, 2015 at 3:50 PM, Wido den Hollander <w...@42on.com> wrote:
>>
>>
>> On 21-09-15 15:05, SCHAER Frederic wrote:
>>> Hi,
>>>
>>> Forgive the question if the answer is obvious... It
On 21-09-15 11:06, Stefan Priebe - Profihost AG wrote:
> Hi,
>
> how can I upgrade / move from straw to straw2? I checked the docs but I
> was unable to find any upgrade information.
>
First make sure that all clients are running librados 0.9, but keep in
mind that any running VMs or processes
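The rest of the reply is cut off; a hedged sketch of the manual CRUSH map edit this usually involves:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: change 'alg straw' to 'alg straw2' on the buckets
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new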
On 21-09-15 15:05, SCHAER Frederic wrote:
> Hi,
>
> Forgive the question if the answer is obvious... It's been more than "an hour
> or so" and eu.ceph.com apparently still hasn't been re-signed or at least
> what I checked wasn't :
>
> # rpm -qp --qf '%{RSAHEADER:pgpsig}'
>
x005f931e in main ()
On 11-09-15 12:22, Gregory Farnum wrote:
> On Thu, Sep 10, 2015 at 9:46 PM, Wido den Hollander <w...@42on.com> wrote:
>> Hi,
>>
>> I'm running into a issue with Ceph 0.94.2/3 where after doing a recovery
>> test 9 PGs stay incomplete:
>>
>> osdmap
an be found here: http://pastebin.com/qQL699zC
The cluster is running a mix of 0.94.2 and .3 on Ubuntu 14.04.2 with the
3.13 kernel. XFS is being used as the backing filesystem.
Any suggestions to fix this issue? There is no valuable data in these
pools, so I can remove them, but I'd rat
ealisation that for us performance and ease of
> administration is more valuable than 100% uptime. Worst case (Storage server
> dies) we could rebuild from backups in a day. Essentials could be restored in
> a hour. I could experiment with ongoing ZFS replications to a backup server
>
On 28-08-15 13:07, Gregory Farnum wrote:
On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
vickey.singh22...@gmail.com wrote:
Hello Ceph Geeks
I am planning to develop a python plugin that pulls out cluster recovery IO
and client IO operation metrics, that can be further used with collectd.
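As a hedged starting point for such a plugin: the recovery and client I/O rates show up in the pgmap section of the status output (the exact JSON field names, e.g. read_bytes_sec or recovering_bytes_per_sec, are assumptions and differ between releases):

  ceph status --format json-pretty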
On 08/26/2015 05:17 PM, Yehuda Sadeh-Weinraub wrote:
On Wed, Aug 26, 2015 at 6:26 AM, Gregory Farnum gfar...@redhat.com wrote:
On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander w...@42on.com wrote:
Hi,
It's something which has been 'bugging' me for some time now. Why are
RGW pools prefixed
On 08/26/2015 04:33 PM, Dan van der Ster wrote:
Hi Wido,
On Wed, Aug 26, 2015 at 10:36 AM, Wido den Hollander w...@42on.com wrote:
I'm sending pool statistics to Graphite
We're doing the same -- stripping invalid chars as needed -- and I
would guess that lots of people have written
sending a key like this
you 'break' Graphite: ceph.pools.stats.pool_name.kb_read
A pool like .rgw.root will break this since Graphite splits on periods.
So is there any reason why this is? What's the reasoning behind it?
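Until that changes, a hedged workaround sketch is to sanitize the pool name before building the Graphite key (the replacement scheme here is an arbitrary choice):

  pool=".rgw.root"
  key="ceph.pools.stats.$(echo "$pool" | sed 's/^\.//; s/\./_/g').kb_read"
  echo "$key"   # -> ceph.pools.stats.rgw_root.kb_read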
On 18-08-15 12:25, Benedikt Fraunhofer wrote:
Hi Nick,
did you do anything fancy to get to ~90MB/s in the first place?
I'm stuck at ~30MB/s reading cold data. single-threaded-writes are
quite speedy, around 600MB/s.
radosgw for cold data is around the 90MB/s, which is imho limitted by
On 18-08-15 14:13, Erik McCormick wrote:
I've got a custom named cluster integrated with Openstack (Juno) and
didn't run into any hard-coded name issues that I can recall. Where are
you seeing that?
As to the name change itself, I think it's really just a label applying
to a configuration
On 18 Aug 2015, at 18:15, Jan Schermer j...@schermer.cz wrote:
On 18 Aug 2015, at 17:57, Björn Lässig b.laes...@pengutronix.de wrote:
On 08/18/2015 04:32 PM, Jan Schermer wrote:
Should ceph care about what scope the address is in? We don't specify it
for
On 14-08-15 14:30, Marcin Przyczyna wrote:
Hello,
this is my first posting to ceph-users mailgroup
and because I am also new to this technology please
be patient with me.
A description of the problem I'm stuck on follows:
3 Monitors are up and running, one of them
is leader, the two are
-users on behalf of Wido den Hollander
ceph-users-boun...@lists.ceph.com on behalf of w...@42on.com wrote:
Hi,
One of the first things I want to do as the Ceph User Committee is set
up a proper mirror system for Ceph.
Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks
Matthew
On 06-08-15 10:16, Hector Martin wrote:
We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools:
- 3 replicated pools (3x)
- 1 RS pool (5+2, size 7)
The docs say:
http://ceph.com/docs/master/rados/operations/placement-groups/
Between 10 and 50 OSDs set pg_num to 4096
Which is what we
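For context, a hedged worked example of the heuristic on that page (check the final numbers against your own pool layout):

  # total PGs ~= (number of OSDs * 100) / replica count, rounded up to a power of two
  # 48 OSDs at 3x replication: (48 * 100) / 3 = 1600  ->  2048 PGs, shared across all pools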
Hi,
One of the first things I want to do as the Ceph User Committee is set
up a proper mirror system for Ceph.
Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks
Matthew!), but this isn't the way I want to see it.
I want to set up a series of localized mirrors from there you can
On 03-08-15 22:25, Samuel Just wrote:
It seems like it's about time for us to make the jump to C++11. This
is probably going to have an impact on users of the librados C++
bindings. It seems like such users would have to recompile code using
the librados C++ libraries after upgrading the
On 04-08-15 16:39, Daniel Marks wrote:
Hi all,
I accidentally deleted a ceph pool while there was still a rados block device
mapped on a client. If I try to unmap the device with “rbd unmap” the command
simply hangs. I can't get rid of the device...
We are on:
Ubuntu 14.04
Client
Thanks! I bought icecream for the whole office since the sun was shining :)
On 1 Aug 2015, at 00:03, Mark Nelson mnel...@redhat.com wrote:
Most folks have either probably already left or are on their way out the door
late on a friday, but I just wanted to say
On 28-07-15 16:53, Noah Mehl wrote:
When we update the following in ceph.conf:
[osd]
osd_recovery_max_active = 1
osd_max_backfills = 1
How do we make sure it takes affect? Do we have to restart all of the
ceph osd’s and mon’s?
On a client with client.admin keyring you execute:
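The command itself is cut off above; a hedged sketch of the usual way to push such settings into running daemons (takes effect immediately, but is not persisted to ceph.conf):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'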
On 27-07-15 14:21, Jan Schermer wrote:
Hi!
The /cgroup/* mount point is probably a RHEL6 thing, recent distributions
seem to use /sys/fs/cgroup like in your case (maybe because of systemd?). On
RHEL 6 the mount points are configured in /etc/cgconfig.conf and /cgroup is
the default.
I
On 27-07-15 14:56, Dan van der Ster wrote:
On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote:
I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This
is a +/- 20PB Ceph cluster and I'm trying to see how much we would
benefit from it.
Cool. How many
NUMA nodes indeed.
Wido
Jan
On 27 Jul 2015, at 15:21, Wido den Hollander w...@42on.com wrote:
On 27-07-15 14:56, Dan van der Ster wrote:
On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote:
I'm testing with it on 48-core, 256GB machines with 90 OSDs each