Re: [ceph-users] failure of public network kills connectivity

2016-01-05 Thread Wido den Hollander
t = node1 > mon data = /var/lib/ceph/mon/ceph-node1/ > > [mon.node3] > host = node3 > mon data = /var/lib/ceph/mon/ceph-node3/ > > [mon.node2] > host = node2 > mon data = /var/lib/ceph/mon/ceph-node2/ > > [mon.node4] > host = node4 > mon da

Re: [ceph-users] [rados-java] SIGSEGV librados.so Ubuntu

2015-12-29 Thread Wido den Hollander
: > http://docs.ceph.com/docs/hammer/rados/api/librados-intro/#getting-librados-for-java > --- > > Thanks again! It's great to get some friendly support; it kept me searching... > > Best Regards, > > Kees > > > > On 28-12-2015 at 15:28, Wido den Hollander wrote:

Re: [ceph-users] [rados-java] SIGSEGV librados.so Ubuntu

2015-12-28 Thread Wido den Hollander
> Kees > > > > ___________ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: c

Re: [ceph-users] [rados-java] SIGSEGV librados.so Ubuntu

2015-12-28 Thread Wido den Hollander
>>>>> // close the image >>>>> rbd.close(image); >>>>> // close the connection >>>>> rados.ioCtxDestroy(ioctx); >>>>> rados.shutDown(); >>>>> } catch (RadosException ex) { >>>>> Logger.getLogger(ApiFuncti

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-22 Thread Wido den Hollander
alternative to the Intel's > 3700/3500 series. > > Thanks > > Andrei > > - Original Message - >> From: "Wido den Hollander" <w...@42on.com> >> To: "ceph-users" <ceph-users@lists.ceph.com> >> Sent: Monday, 21

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-22 Thread Wido den Hollander
ls, > but several variables are not always easy to predict and probably will > change during the life of your cluster. > > Lionel > ___________ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > ___

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-21 Thread Wido den Hollander
e same tests if I can. Interesting to see how they perform. > Best regards, > > Lionel > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com >

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-21 Thread Wido den Hollander
On 12/17/2015 05:27 PM, Florian Haas wrote: > Hey Wido, > > On Dec 17, 2015 09:52, "Wido den Hollander" <w...@42on.com > <mailto:w...@42on.com>> wrote: >> >> On 12/17/2015 06:29 AM, Ben Hines wrote: >> > >> > >> > On Wed, D

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-21 Thread Wido den Hollander
On 21-12-15 10:34, Florian Haas wrote: > On Mon, Dec 21, 2015 at 10:20 AM, Wido den Hollander <w...@42on.com> wrote: >>>>> Oh, and to answer this part. I didn't do that much experimentation >>>>> unfortunately. I actually am using about 24 index shards p
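
For reference, the knob being discussed is usually set in ceph.conf under the RGW client section (section name is a placeholder, and it only affects buckets created after the change):

    [client.radosgw.gateway]
    rgw override bucket index max shards = 24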

Re: [ceph-users] Setting up a proper mirror system for Ceph

2015-12-21 Thread Wido den Hollander
properly. Wido On 05-08-15 16:15, Wido den Hollander wrote: > Hi, > > One of the first things I want to do as the Ceph User Committee is set > up a proper mirror system for Ceph. > > Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks > Matthew!), but this isn't

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-17 Thread Wido den Hollander
antly improved, but i am not > sure how much. A faster cluster could probably handle bigger indexes. > > -Ben > > > > _______ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/list

Re: [ceph-users] recommendations for file sharing

2015-12-15 Thread Wido den Hollander
for example now has native RADOS support using phprados. Isn't ownCloud something that could work? Talking native RADOS is always the best. Wido > > Kind Regards, > Alex. > > > > ___ > ceph-users mailing list > ceph-users

Re: [ceph-users] Monitors - proactive questions about quantity, placement and protection

2015-12-12 Thread Wido den Hollander
> -- > Alex Gorbachev > Storcium > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phon

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Wido den Hollander
ersé. It just wants a mount point where it can write data to. You can always manually bootstrap a cluster if you want to. > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-use

Re: [ceph-users] Would HEALTH_DISASTER be a good addition?

2015-12-01 Thread Wido den Hollander
On 26-11-15 07:58, Wido den Hollander wrote: > On 11/25/2015 10:46 PM, Gregory Farnum wrote: >> On Wed, Nov 25, 2015 at 11:09 AM, Wido den Hollander <w...@42on.com> wrote: >>> Hi, >>> >>> Currently we have OK, WARN and ERR as states for a Ceph cluste

Re: [ceph-users] Flapping OSDs, Large meta directories in OSDs

2015-11-30 Thread Wido den Hollander
> > I don't see any error packets or drops on switches either. > > Ideas? > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den

Re: [ceph-users] python3 librados

2015-11-30 Thread Wido den Hollander
On 29-11-15 20:20, misa-c...@hudrydum.cz wrote: > Hi everyone, > > for my pet project I've needed python3 rados library. So I've took the > existing python2 rados code and clean it up a little bit to fit my needs. The > lib contains basic interface, asynchronous operations and also asyncio >

Re: [ceph-users] Removing OSD - double rebalance?

2015-11-30 Thread Wido den Hollander
On 30-11-15 10:08, Carsten Schmitt wrote: > Hi all, > > I'm running ceph version 0.94.5 and I need to downsize my servers > because of insufficient RAM. > > So I want to remove OSDs from the cluster and according to the manual > it's a pretty straightforward process: > I'm beginning with "ceph

Re: [ceph-users] RGW pool contents

2015-11-28 Thread Wido den Hollander
gwdefgh43 >> >> .bucket.meta.rgwdefghijklm119:default.6066.25 >> >> rgwdefghijklm200 >> >> .bucket.meta.rgwxghi2:default.5203.4 >> >> rgwxjk17 >> >> rgwdefghijklm196 >> >> >> >> ... >> >>

[ceph-users] Would HEALTH_DISASTER be a good addition?

2015-11-25 Thread Wido den Hollander
ome into action. <= WARN is just a thing you might want to look in to, but not at 03:00 on Sunday morning. Does this sound reasonable? -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph

Re: [ceph-users] Would HEALTH_DISASTER be a good addition?

2015-11-25 Thread Wido den Hollander
On 11/25/2015 10:46 PM, Gregory Farnum wrote: > On Wed, Nov 25, 2015 at 11:09 AM, Wido den Hollander <w...@42on.com> wrote: >> Hi, >> >> Currently we have OK, WARN and ERR as states for a Ceph cluster. >> >> Now, it could happen that while a Ceph

Re: [ceph-users] RGW pool contents

2015-11-25 Thread Wido den Hollander
ems this pool has the buckets listed by the radosgw-admin command. > > > > Can anybody explain what is *.rgw pool* supposed to contain ? > > This pool contains only the bucket metadata objects, here it references to the internal IDs. You can fetch this with 'radosgw-adm
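
One way to inspect this yourself, sketched with a placeholder bucket name:

    $ radosgw-admin metadata get bucket:mybucket     # shows the bucket's internal ID and marker
    $ rados -p .rgw ls | head                        # lists the metadata objects in the .rgw pool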

Re: [ceph-users] [crush] Selecting the current rack

2015-11-24 Thread Wido den Hollander
with the name 'rack', that's probably missing. How many racks do you have? Two? I don't fully understand what you are trying to do. > > > Any help would be welcome :) > ___ > ceph-users mailing list > ceph-users@lists.ceph.com >

Re: [ceph-users] Bcache and Ceph Question

2015-11-18 Thread Wido den Hollander
___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on __

Re: [ceph-users] restart all nodes

2015-11-17 Thread Wido den Hollander
On 17-11-15 11:07, Patrik Plank wrote: > Hi, > > > maybe a trivial question :-|| > > I have to shut down all my ceph nodes. > > What's the best way to do this. > > Can I just shut down all nodes or should i > > first shut down the ceph process? > First, set the noout flag in the
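
The usual sequence, sketched here assuming the whole cluster is shut down at once:

    $ ceph osd set noout      # keep stopped OSDs from being marked out and triggering rebalancing
    # stop the Ceph daemons on every node, do the maintenance, boot the nodes again
    $ ceph -s                 # wait until all OSDs are back up and PGs are active+clean
    $ ceph osd unset noout    # restore normal out/rebalance behaviour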

Re: [ceph-users] Disaster recovery of monitor

2015-11-17 Thread Wido den Hollander
at should I do? Do you recommend any specific procedure? > > Thanks a lot. > Jose Tavares > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.c

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-13 Thread Wido den Hollander
On 11/13/2015 09:11 AM, Karan Singh wrote: > > >> On 11 Nov 2015, at 22:49, David Clarke <dav...@catalyst.net.nz> wrote: >> >> On 12/11/15 09:37, Gregory Farnum wrote: >>> On Wednesday, November 11, 2015, Wido den Hollander <w...@42on.com >>>

Re: [ceph-users] Ceph OSDs with bcache experience

2015-11-13 Thread Wido den Hollander
On 13-11-15 10:56, Jens Rosenboom wrote: > 2015-10-20 16:00 GMT+02:00 Wido den Hollander <w...@42on.com>: > ... >> The system consists out of 39 hosts: >> >> 2U SuperMicro chassis: >> * 80GB Intel SSD for OS >> * 240GB Intel S3700 SSD for Journaling +

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-11 Thread Wido den Hollander
On 11/10/2015 09:49 PM, Vickey Singh wrote: > On Mon, Nov 9, 2015 at 8:16 PM, Wido den Hollander <w...@42on.com> wrote: > >> On 11/09/2015 05:27 PM, Vickey Singh wrote: >>> Hello Ceph Geeks >>> >>> Need your comments with my understanding on str

Re: [ceph-users] Seeing which Ceph version OSD/MON data is

2015-11-09 Thread Wido den Hollander
/var/lib/ceph/mon and 46 disks with OSD data. Wido > > On Mon, Nov 9, 2015 at 7:23 AM, Wido den Hollander <w...@42on.com> wrote: >> Hi, >> >> Recently I got my hands on a Ceph cluster which was pretty damaged due >> to a human error. >> >> I had no ce

[ceph-users] Seeing which Ceph version OSD/MON data is

2015-11-09 Thread Wido den Hollander
Hi, Recently I got my hands on a Ceph cluster which was pretty damaged due to a human error. I had no ceph.conf nor did I have any original Operating System data. With just the MON/OSD data I had to rebuild the cluster by manually re-writing the ceph.conf and installing Ceph. The problem was,

Re: [ceph-users] Using straw2 crush also with Hammer

2015-11-09 Thread Wido den Hollander
Wido > Please suggest > > Thank You in advance. > > - Vickey - > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Ho

Re: [ceph-users] Ceph OSDs with bcache experience

2015-11-06 Thread Wido den Hollander
ists.ceph.com] On Behalf Of Wido > den Hollander > Sent: October-28-15 5:49 AM > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Ceph OSDs with bcache experience > > > > On 21-10-15 15:30, Mark Nelson wrote: >> >> >> On 10/21/2015 01:59 AM, Wido den H

[ceph-users] Soft removal of RBD images

2015-11-06 Thread Wido den Hollander
ill get back the RBD image by reverting it in a special way. With a special cephx capability for example. This goes a bit in the direction of soft pool-removals as well, it might be combined. Comments? -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 990

Re: [ceph-users] Creating RGW Zone System Users Fails with "couldn't init storage provider"

2015-11-05 Thread Wido den Hollander
t; "control_pool": ".eu-zone1.rgw.control", >> "gc_pool": ".eu-zone1.rgw.gc", >> "log_pool": ".eu-zone1.log", >> "intent_log_pool": ".eu-zone1.intent-log", >> "usage_log_pool&q

Re: [ceph-users] One object in .rgw.buckets.index causes systemic instability

2015-11-04 Thread Wido den Hollander
by copyright. Disclosure, distribution, >> reproduction or any other use of the content of this document >> requires the sender's authorization; violators are subject to legal >> sanctions. If this communication was received in error, please >> notify the sender immediately

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-03 Thread Wido den Hollander
On 03-11-15 01:54, Voloshanenko Igor wrote: > Thank you, Jason! > > Any advice, for troubleshooting > > I'm looking in code, and right now don;t see any bad things :( > Can you run the CloudStack Agent in DEBUG mode and then see after which lines in the logs it crashes? Wido >
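
A rough way to do that on a KVM hypervisor, assuming the standard agent layout (paths are an assumption, not from the thread):

    $ sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
    $ service cloudstack-agent restart
    $ tail -f /var/log/cloudstack/agent/agent.log    # watch for the last lines before the JVM dies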

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-03 Thread Wido den Hollander
> > So, almost alsways it's exception after RbdUnprotect then in approx > . 20 minutes - crash.. > Almost all the time - it's happen after GetVmStatsCommand or Disks > stats... Possible that evil hiden into UpadteDiskInfo method... but > i can;t find any bad

Re: [ceph-users] Changing CRUSH map ids

2015-11-02 Thread Wido den Hollander
On 02-11-15 12:30, Loris Cuoghi wrote: > Hi All, > > We're currently on version 0.94.5 with three monitors and 75 OSDs. > > I've peeked at the decompiled CRUSH map, and I see that all ids are > commented with '# Here be dragons!', or more literally : '# do not > change unnecessarily'. > >

Re: [ceph-users] data size less than 4 mb

2015-11-02 Thread Wido den Hollander
On 02-11-15 11:56, Jan Schermer wrote: > Can those hints be disabled somehow? I was battling XFS preallocation > the other day, and the mount option didn't make any difference - maybe > because those hints have precedence (which could mean they aren't > working as they should), maybe not. >

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-10-30 Thread Wido den Hollander
On 29-10-15 16:38, Voloshanenko Igor wrote: > Hi Wido and all community. > > We catched very idiotic issue on our Cloudstack installation, which > related to ceph and possible to java-rados lib. > I think you ran into this one: https://issues.apache.org/jira/browse/CLOUDSTACK-8879 Cleaning

Re: [ceph-users] Ceph OSDs with bcache experience

2015-10-28 Thread Wido den Hollander
On 21-10-15 15:30, Mark Nelson wrote: > > > On 10/21/2015 01:59 AM, Wido den Hollander wrote: >> On 10/20/2015 07:44 PM, Mark Nelson wrote: >>> On 10/20/2015 09:00 AM, Wido den Hollander wrote: >>>> Hi, >>>> >>>> In the "

Re: [ceph-users] rsync mirror download.ceph.com - broken file on rsync server

2015-10-27 Thread Wido den Hollander
On 27-10-15 09:51, Björn Lässig wrote: > Hi, > > after having some problems with ipv6 and download.ceph.com, i made a > mirror (debian-hammer only) for my ipv6-only cluster. > I see you are from Germany, you can also sync from eu.ceph.com > Unfortunately after the release of 0.94.5 the rsync

Re: [ceph-users] rsync mirror download.ceph.com - broken file on rsync server

2015-10-27 Thread Wido den Hollander
On 27-10-15 11:45, Björn Lässig wrote: > On 10/27/2015 10:22 AM, Wido den Hollander wrote: >> On 27-10-15 09:51, Björn Lässig wrote: >>> after having some problems with ipv6 and download.ceph.com, i made a >>> mirror (debian-hammer only) for my ipv6-only cluster. >>

Re: [ceph-users] BAD nvme SSD performance

2015-10-26 Thread Wido den Hollander
On 26-10-15 14:29, Matteo Dacrema wrote: > Hi Nick, > > > > I also tried to increase iodepth but nothing has changed. > > > > With iostat I noticed that the disk is fully utilized and write per > seconds from iostat match fio output. > Ceph isn't fully optimized to get the maximum

Re: [ceph-users] why was osd pool default size changed from 2 to 3.

2015-10-24 Thread Wido den Hollander
3 is safe. 2 replicas isn't safe, no matter how big or small the cluster is. With disks becoming larger recovery times will grow. In that window you don't want to run on a single replica. > thanks. > > > > ___ > ceph
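
A sketch of the relevant knobs (pool name is a placeholder):

    # defaults for newly created pools, in ceph.conf
    [global]
    osd pool default size = 3
    osd pool default min size = 2

    # raise the replica count on an existing pool
    $ ceph osd pool set mypool size 3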

Re: [ceph-users] Proper Ceph network configuration

2015-10-23 Thread Wido den Hollander
On 23-10-15 14:58, Jon Heese wrote: > Hello, > > > > We have two separate networks in our Ceph cluster design: > > > > 10.197.5.0/24 - The "front end" network, "skinny pipe", all 1Gbe, > intended to be a management or control plane network > > 10.174.1.0/24 - The "back end" network,
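
For reference, a minimal sketch of how the two roles are declared in ceph.conf (which subnet belongs to which role depends on your design; the public network carries client and monitor traffic, while the cluster network carries only OSD replication and heartbeats):

    [global]
    public network  = 10.197.5.0/24
    cluster network = 10.174.1.0/24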

Re: [ceph-users] Ceph OSDs with bcache experience

2015-10-22 Thread Wido den Hollander
On 10/21/2015 03:30 PM, Mark Nelson wrote: > > > On 10/21/2015 01:59 AM, Wido den Hollander wrote: >> On 10/20/2015 07:44 PM, Mark Nelson wrote: >>> On 10/20/2015 09:00 AM, Wido den Hollander wrote: >>>> Hi, >>>> >>>> In the "

Re: [ceph-users] hanging nfsd requests on an RBD to NFS gateway

2015-10-22 Thread Wido den Hollander
sume only the kernel resident RBD module matters. > > Any thoughts or pointers appreciated. > > ~jpr > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph traine

Re: [ceph-users] Ceph OSDs with bcache experience

2015-10-22 Thread Wido den Hollander
On 10/21/2015 11:25 AM, Jan Schermer wrote: > >> On 21 Oct 2015, at 09:11, Wido den Hollander <w...@42on.com> wrote: >> >> On 10/20/2015 09:45 PM, Martin Millnert wrote: >>> The thing that worries me with your next-gen design (actually your current >>

Re: [ceph-users] Ceph OSDs with bcache experience

2015-10-21 Thread Wido den Hollander
On 10/20/2015 07:44 PM, Mark Nelson wrote: > On 10/20/2015 09:00 AM, Wido den Hollander wrote: >> Hi, >> >> In the "newstore direction" thread on ceph-devel I wrote that I'm using >> bcache in production and Mark Nelson asked me to share some details. >>

Re: [ceph-users] Ceph OSDs with bcache experience

2015-10-21 Thread Wido den Hollander
y allocate only 1TB of the SSD and leave 200GB of cells spare so the Wear-Leveling inside the SSD has some spare cells. Wido > > ---- Original message > From: Wido den Hollander <w...@42on.com> > Date: 20/10/2015 16:00 (GMT+01:00) > To: ceph-users <c

[ceph-users] Ceph OSDs with bcache experience

2015-10-20 Thread Wido den Hollander
Hi, In the "newstore direction" thread on ceph-devel I wrote that I'm using bcache in production and Mark Nelson asked me to share some details. Bcache is running in two clusters now that I manage, but I'll keep this information to one of them (the one at PCextreme behind CloudStack). In this

Re: [ceph-users] radosgw limiting requests

2015-10-15 Thread Wido den Hollander
On 15-10-15 13:56, Luis Periquito wrote: > I've been trying to find a way to limit the number of request an user > can make the radosgw per unit of time - first thing developers done > here is as fast as possible parallel queries to the radosgw, making it > very slow. > > I've looked into

[ceph-users] Cross-posting to users and ceph-devel

2015-10-14 Thread Wido den Hollander
Hi, Not to complain or flame about it, but I see a lot of messages which are being send to both users and ceph-devel. Imho that beats the purpose of having a users and a devel list, isn't it? The problem is that messages go to both lists and users hit reply-all again and so it continues. For

Re: [ceph-users] download.ceph.com unreachable IPv6 [was: v9.1.0 Infernalis release candidate released]

2015-10-14 Thread Wido den Hollander
On 14-10-15 16:30, Björn Lässig wrote: > On 10/13/2015 11:01 PM, Sage Weil wrote: >> http://download.ceph.com/debian-testing > > unfortunately this site is not reachable at the moment. > > > $ wget http://download.ceph.com/debian-testing/dists/wheezy/InRelease -O - > --2015-10-14

[ceph-users] Can we place the release key on download.ceph.com?

2015-10-14 Thread Wido den Hollander
exist. Any objections against mirroring the pubkey there as well? If not, could somebody do it? -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph-users mailing list ceph-users@lists.ceph.com

Re: [ceph-users] download.ceph.com unreachable IPv6 [was: v9.1.0 Infernalis release candidate released]

2015-10-14 Thread Wido den Hollander
On 10/14/2015 06:50 PM, Björn Lässig wrote: > On 10/14/2015 05:11 PM, Wido den Hollander wrote: >> >> >> On 14-10-15 16:30, Björn Lässig wrote: >>> On 10/13/2015 11:01 PM, Sage Weil wrote: >>>> http://download.ceph.com/debian-testing >>> >>

Re: [ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread Wido den Hollander
you may reply to the sender and should > delete this e-mail immediately. > --- > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] How to improve 'rbd ls [pool]' response time

2015-10-08 Thread Wido den Hollander
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido > den Hollander > Sent: Thursday, October 08, 2015 10:06 PM > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] How to improve 'rbd ls [pool]' response time > > On 10/08/2015 10:46 AM,

Re: [ceph-users] possibility to delete all zeros

2015-10-02 Thread Wido den Hollander
On 02-10-15 14:16, Stefan Priebe - Profihost AG wrote: > Hi, > > we accidentally added zeros to all our rbd images. So all images are no > longer thin provisioned. As we do not have access to the qemu guests > running those images. Is there any other options to trim them again? > Rough guess,
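
One possible approach (an assumption on my part, not necessarily what the thread settled on) is to copy each image with qemu-img while the guest is shut down, since qemu-img skips zeroed blocks on the destination; pool and image names are placeholders:

    $ qemu-img convert -f raw -O raw rbd:rbd/vm-disk rbd:rbd/vm-disk.sparse
    $ rbd mv rbd/vm-disk rbd/vm-disk.old && rbd mv rbd/vm-disk.sparse rbd/vm-disk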

Re: [ceph-users] RPM repo connection reset by peer when updating

2015-10-01 Thread Wido den Hollander
On 30-09-15 19:09, Alkaid wrote: > I try to update packages today, but I got a "connection reset by peer" > error every time. > It seems that the server will block my IP if I request a little > frequently ( refresh page a few times manually per second). > I guess yum downloads packages in

Re: [ceph-users] high density machines

2015-09-30 Thread Wido den Hollander
On 30-09-15 14:19, Mark Nelson wrote: > On 09/29/2015 04:56 PM, J David wrote: >> On Thu, Sep 3, 2015 at 3:49 PM, Gurvinder Singh >> wrote: The density would be higher than the 36 drive units but lower than the 72 drive units (though with shorter rack

Re: [ceph-users] Debian repo down?

2015-09-29 Thread Wido den Hollander
/debian-). > Seems like an IPv6 routing issue. If you need, you can always use eu.ceph.com to download your packages. > Regards > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listin

Re: [ceph-users] rsync broken?

2015-09-29 Thread Wido den Hollander
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Basic object storage question

2015-09-24 Thread Wido den Hollander
On 24-09-15 11:06, Ilya Dryomov wrote: > On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote: >> -BEGIN PGP SIGNED MESSAGE- >> Hash: SHA256 >> >> If you use RADOS gateway, RBD or CephFS, then you don't need to worry >> about striping. If you write your own

Re: [ceph-users] ceph.com IPv6 down

2015-09-23 Thread Wido den Hollander
On 23-09-15 13:38, Olivier Bonvalet wrote: > Hi, > > since several hours http://ceph.com/ doesn't reply anymore in IPv6. > It pings, and we can open TCP socket, but nothing more : > > > ~$ nc -w30 -v -6 ceph.com 80 > Connection to ceph.com 80 port [tcp/http] succeeded! > GET /

Re: [ceph-users] IPv6 connectivity after website changes

2015-09-23 Thread Wido den Hollander
On 23-09-15 03:49, Dan Mick wrote: > On 09/22/2015 05:22 AM, Sage Weil wrote: >> On Tue, 22 Sep 2015, Wido den Hollander wrote: >>> Hi, >>> >>> After the recent changes in the Ceph website the IPv6 connectivity got lost. >>> >>>

[ceph-users] Maven repository lost after website changes

2015-09-22 Thread Wido den Hollander
Hi, http://ceph.com/maven/ no longer works. This maven repository was used to host the rados-java bindings, but also the cephfs java bindings. Can we put this location back up again? Wido ___ ceph-users mailing list ceph-users@lists.ceph.com

[ceph-users] IPv6 connectivity after website changes

2015-09-22 Thread Wido den Hollander
Hi, After the recent changes in the Ceph website the IPv6 connectivity got lost. www.ceph.com docs.ceph.com download.ceph.com git.ceph.com The problem I'm now facing with a couple of systems is that they can't download the Package signing key from git.ceph.com or anything from download.ceph.com

Re: [ceph-users] move/upgrade from straw to straw2

2015-09-21 Thread Wido den Hollander
On 21-09-15 13:18, Dan van der Ster wrote: > On Mon, Sep 21, 2015 at 12:11 PM, Wido den Hollander <w...@42on.com> wrote: >> You can also change 'straw_calc_version' to 2 in the CRUSHMap. > > AFAIK straw_calc_version = 1 is the optimal. straw_calc_version = 2 is > no

[ceph-users] EU Ceph mirror changes

2015-09-21 Thread Wido den Hollander
Hi, Since the security notice regarding ceph.com the mirroring system broke. This meant that eu.ceph.com didn't serve new packages since the whole download system changed. I didn't have much time to fix this, but today I resolved it by installing Varnish [0] on eu.ceph.com The VCL which is

Re: [ceph-users] Important security noticed regarding release signing key

2015-09-21 Thread Wido den Hollander
On 21-09-15 15:57, Dan van der Ster wrote: > On Mon, Sep 21, 2015 at 3:50 PM, Wido den Hollander <w...@42on.com> wrote: >> >> >> On 21-09-15 15:05, SCHAER Frederic wrote: >>> Hi, >>> >>> Forgive the question if the answer is obvious... It

Re: [ceph-users] move/upgrade from straw to straw2

2015-09-21 Thread Wido den Hollander
On 21-09-15 11:06, Stefan Priebe - Profihost AG wrote: > Hi, > > how can i upgrade / move from straw to straw2? I checked the docs but i > was unable to find upgrade informations? > First make sure that all clients are running librados 0.9, but keep in mind that any running VMs or processes
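
A sketch of the manual conversion; expect data movement when the algorithm changes, and clients too old to understand straw2 will no longer be able to connect:

    $ ceph osd getcrushmap -o crushmap.bin
    $ crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt and change 'alg straw' to 'alg straw2' in the buckets you want to convert
    $ crushtool -c crushmap.txt -o crushmap.new
    $ ceph osd setcrushmap -i crushmap.new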

Re: [ceph-users] Important security noticed regarding release signing key

2015-09-21 Thread Wido den Hollander
On 21-09-15 15:05, SCHAER Frederic wrote: > Hi, > > Forgive the question if the answer is obvious... It's been more than "an hour > or so" and eu.ceph.com apparently still hasn't been re-signed or at least > what I checked wasn't : > > # rpm -qp --qf '%{RSAHEADER:pgpsig}' >

Re: [ceph-users] How to move OSD form 1TB disk to 2TB disk

2015-09-19 Thread Wido den Hollander
x005f931e in main () > > > > > > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph version compatibility with centos(libvirt) and cloudstack

2015-09-13 Thread Wido den Hollander
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] 9 PGs stay incomplete

2015-09-11 Thread Wido den Hollander
On 11-09-15 12:22, Gregory Farnum wrote: > On Thu, Sep 10, 2015 at 9:46 PM, Wido den Hollander <w...@42on.com> wrote: >> Hi, >> >> I'm running into a issue with Ceph 0.94.2/3 where after doing a recovery >> test 9 PGs stay incomplete: >> >> osdmap

[ceph-users] 9 PGs stay incomplete

2015-09-10 Thread Wido den Hollander
an be found here: http://pastebin.com/qQL699zC The cluster is running a mix of 0.94.2 and .3 on Ubuntu 14.04.2 with the 3.13 kernel. XFS is being used as the backing filesystem. Any suggestions to fix this issue? There is no valuable data in these pools, so I can remove them, but I'd rat

Re: [ceph-users] Is Ceph appropriate for small installations?

2015-08-31 Thread Wido den Hollander
ealisation that for us performance and ease of > administration is more valuable than 100% uptime. Worst case (Storage server > dies) we could rebuild from backups in a day. Essentials could be restored in > a hour. I could experiment with ongoing ZFS replications to a backup server >

Re: [ceph-users] Is Ceph appropriate for small installations?

2015-08-29 Thread Wido den Hollander
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20 700 9902 Skype: contact42on ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi

Re: [ceph-users] Opensource plugin for pulling out cluster recovery and client IO metric

2015-08-28 Thread Wido den Hollander
On 28-08-15 13:07, Gregory Farnum wrote: On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh vickey.singh22...@gmail.com wrote: Hello Ceph Geeks I am planning to develop a python plugin that pulls out cluster recovery IO and client IO operation metrics , that can be further used with collectd.

Re: [ceph-users] Why are RGW pools all prefixed with a period (.)?

2015-08-27 Thread Wido den Hollander
On 08/26/2015 05:17 PM, Yehuda Sadeh-Weinraub wrote: On Wed, Aug 26, 2015 at 6:26 AM, Gregory Farnum gfar...@redhat.com wrote: On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander w...@42on.com wrote: Hi, It's something which has been 'bugging' me for some time now. Why are RGW pools prefixed

Re: [ceph-users] ceph monitoring with graphite

2015-08-27 Thread Wido den Hollander
On 08/26/2015 04:33 PM, Dan van der Ster wrote: Hi Wido, On Wed, Aug 26, 2015 at 10:36 AM, Wido den Hollander w...@42on.com wrote: I'm sending pool statistics to Graphite We're doing the same -- stripping invalid chars as needed -- and I would guess that lots of people have written

[ceph-users] Why are RGW pools all prefixed with a period (.)?

2015-08-26 Thread Wido den Hollander
sending a key like this you 'break' Graphite: ceph.pools.stats.pool_name.kb_read A pool like .rgw.root will break this since Graphite splits on periods. So is there any reason why this is? What's the reasoning behind it? -- Wido den Hollander 42on B.V. Ceph trainer and consultant Phone: +31 (0)20
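
One workaround (my own sketch, not from the thread) is to sanitise the pool name before it becomes part of the metric key:

    $ pool=".rgw.root"
    $ echo "ceph.pools.stats.$(echo "$pool" | sed -e 's/^\.//' -e 's/\./_/g').kb_read"
    ceph.pools.stats.rgw_root.kb_read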

Re: [ceph-users] How to improve single thread sequential reads?

2015-08-18 Thread Wido den Hollander
On 18-08-15 12:25, Benedikt Fraunhofer wrote: Hi Nick, did you do anything fancy to get to ~90MB/s in the first place? I'm stuck at ~30MB/s reading cold data. single-threaded-writes are quite speedy, around 600MB/s. radosgw for cold data is around the 90MB/s, which is imho limitted by

Re: [ceph-users] Rename Ceph cluster

2015-08-18 Thread Wido den Hollander
On 18-08-15 14:13, Erik McCormick wrote: I've got a custom named cluster integrated with Openstack (Juno) and didn't run into any hard-coded name issues that I can recall. Where are you seeing that? As to the name change itself, I think it's really just a label applying to a configuration

Re: [ceph-users] ceph cluster_network with linklocal ipv6

2015-08-18 Thread Wido den Hollander
On 18 Aug 2015 at 18:15, Jan Schermer j...@schermer.cz wrote: On 18 Aug 2015, at 17:57, Björn Lässig b.laes...@pengutronix.de wrote: On 08/18/2015 04:32 PM, Jan Schermer wrote: Should ceph care about what scope the address is in? We don't specify it for

Re: [ceph-users] ODS' weird status. Can not be removed anymore.

2015-08-14 Thread Wido den Hollander
On 14-08-15 14:30, Marcin Przyczyna wrote: Hello, this is my first posting to ceph-users mailgroup and because I am also new to this technology please be patient with me. A description of problem I get stuck follows: 3 Monitors are up and running, one of them is leader, the two are

Re: [ceph-users] Setting up a proper mirror system for Ceph

2015-08-06 Thread Wido den Hollander
-users on behalf of Wido den Hollander ceph-users-boun...@lists.ceph.com on behalf of w...@42on.com wrote: Hi, One of the first things I want to do as the Ceph User Committee is set up a proper mirror system for Ceph. Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks Matthew

Re: [ceph-users] pg_num docs conflict with Hammer PG count warning

2015-08-06 Thread Wido den Hollander
On 06-08-15 10:16, Hector Martin wrote: We have 48 OSDs (on 12 boxes, 4T per OSD) and 4 pools: - 3 replicated pools (3x) - 1 RS pool (5+2, size 7) The docs say: http://ceph.com/docs/master/rados/operations/placement-groups/ Between 10 and 50 OSDs set pg_num to 4096 Which is what we
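
A rough back-of-the-envelope check, assuming the Hammer warning counts PG replicas per OSD with a default threshold of about 300: 3 pools x 4096 PGs x size 3 plus 1 pool x 4096 PGs x size 7 gives 65,536 PG copies, and 65,536 / 48 OSDs is roughly 1,365 PGs per OSD, which is why reading the docs' "4096" figure as a per-pool value trips the warning on a cluster this size.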

[ceph-users] Setting up a proper mirror system for Ceph

2015-08-05 Thread Wido den Hollander
Hi, One of the first things I want to do as the Ceph User Committee is set up a proper mirror system for Ceph. Currently there is ceph.com, eu.ceph.com and au.ceph.com (thanks Matthew!), but this isn't the way I want to see it. I want to set up a series of localized mirrors from there you can

Re: [ceph-users] C++11 and librados C++

2015-08-04 Thread Wido den Hollander
On 03-08-15 22:25, Samuel Just wrote: It seems like it's about time for us to make the jump to C++11. This is probably going to have an impact on users of the librados C++ bindings. It seems like such users would have to recompile code using the librados C++ libraries after upgrading the

Re: [ceph-users] Mapped rbd device still present after pool was deleted

2015-08-04 Thread Wido den Hollander
On 04-08-15 16:39, Daniel Marks wrote: Hi all, I accidentally deleted a ceph pool while there was still a rados block device mapped on a client. If I try to unmap the device with “rbd unmap the command simply hangs. I can´t get rid of the device... We are on: Ubuntu 14.04 Client

Re: [ceph-users] Happy SysAdmin Day!

2015-08-01 Thread Wido den Hollander
Thanks! I bought ice cream for the whole office since the sun was shining :) On 1 Aug 2015 at 00:03, Mark Nelson mnel...@redhat.com wrote: Most folks have either probably already left or are on their way out the door late on a friday, but I just wanted to say

Re: [ceph-users] Updating OSD Parameters

2015-07-28 Thread Wido den Hollander
On 28-07-15 16:53, Noah Mehl wrote: When we update the following in ceph.conf: [osd] osd_recovery_max_active = 1 osd_max_backfills = 1 How do we make sure it takes effect? Do we have to restart all of the ceph osd’s and mon’s? On a client with client.admin keyring you execute:
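
A hedged sketch of the runtime injection (the exact command is cut off above); values set this way apply immediately, while the ceph.conf entries only take effect at the next daemon restart:

    $ ceph tell osd.* injectargs '--osd_recovery_max_active 1 --osd_max_backfills 1'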

Re: [ceph-users] How to use cgroup to bind ceph-osd to a specific cpu core?

2015-07-27 Thread Wido den Hollander
On 27-07-15 14:21, Jan Schermer wrote: Hi! The /cgroup/* mount point is probably a RHEL6 thing, recent distributions seem to use /sys/fs/cgroup like in your case (maybe because of systemd?). On RHEL 6 the mount points are configured in /etc/cgconfig.conf and /cgroup is the default. I
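
A sketch with the libcgroup tools (group name and core range are placeholders; the idea is to keep the OSD processes on the cores and memory of one NUMA node):

    $ cgcreate -g cpuset:ceph-osd
    $ cgset -r cpuset.cpus=0-11 ceph-osd
    $ cgset -r cpuset.mems=0 ceph-osd
    $ cgclassify -g cpuset:ceph-osd $(pidof ceph-osd)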

Re: [ceph-users] How to use cgroup to bind ceph-osd to a specific cpu core?

2015-07-27 Thread Wido den Hollander
On 27-07-15 14:56, Dan van der Ster wrote: On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote: I'm testing with it on 48-core, 256GB machines with 90 OSDs each. This is a +/- 20PB Ceph cluster and I'm trying to see how much we would benefit from it. Cool. How many

Re: [ceph-users] How to use cgroup to bind ceph-osd to a specific cpu core?

2015-07-27 Thread Wido den Hollander
NUMA nodes indeed. Wido Jan On 27 Jul 2015, at 15:21, Wido den Hollander w...@42on.com wrote: On 27-07-15 14:56, Dan van der Ster wrote: On Mon, Jul 27, 2015 at 2:51 PM, Wido den Hollander w...@42on.com wrote: I'm testing with it on 48-core, 256GB machines with 90 OSDs each
