Re: [ceph-users] Cluster network slower than public network

2017-11-16 Thread Jake Young
On Wed, Nov 15, 2017 at 1:07 PM Ronny Aasen wrote: > On 15.11.2017 13:50, Gandalf Corvotempesta wrote: > > As 10gb switches are expensive, what would happen by using a gigabit > cluster network and a 10gb public network? > > Replication and rebalance should be slow,

Re: [ceph-users] Ceph-ISCSI

2017-10-11 Thread Jake Young
On Wed, Oct 11, 2017 at 8:57 AM Jason Dillaman wrote: > On Wed, Oct 11, 2017 at 6:38 AM, Jorge Pinilla López > wrote: > >> As far as I am able to understand there are 2 ways of setting up iscsi for >> ceph >> >> 1- using the kernel (lrbd), only available on SUSE,

Re: [ceph-users] tunable question

2017-10-03 Thread Jake Young
On Tue, Oct 3, 2017 at 8:38 AM lists wrote: > Hi, > > What would make the decision easier: if we knew that we could easily > revert the > > "ceph osd crush tunables optimal" > once it has begun rebalancing data? > > Meaning: if we notice that impact is too high, or it will
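
A minimal sketch (not from the thread) of recording the current profile before switching, so there is a known target to revert to; note that a revert also triggers a rebalance, it is not an instant undo:

```python
# Sketch only (not from the thread): record the current CRUSH tunables
# before switching, so there is a known profile to revert to.  Assumes the
# "ceph" CLI and an admin keyring are available on this host.
import subprocess

def ceph(*args):
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

# Save the current tunables for reference before changing anything.
with open("/tmp/crush-tunables-before.txt", "w") as f:
    f.write(ceph("osd", "crush", "show-tunables"))

# Apply the new profile; this is what starts the rebalance.
ceph("osd", "crush", "tunables", "optimal")

# Reverting means setting the previous profile back (e.g. "hammer" or
# "legacy", whichever show-tunables reported), which triggers another
# rebalance back to the old layout -- it is not a free undo.
# ceph("osd", "crush", "tunables", "hammer")
```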

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-20 Thread Jake Young
c0 show > > > > -Original Message- > From: Jake Young [mailto:jak3...@gmail.com] > Sent: Tuesday, 19 September 2017 18:00 > To: Kees Meijs; ceph-us...@ceph.com > Subject: Re: [ceph-users] What HBA to choose? To expand or not to > expand? > > > On Tue, Sep

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-19 Thread Jake Young
On Tue, Sep 19, 2017 at 9:38 AM Kees Meijs <k...@nefos.nl> wrote: > Hi Jake, > > On 19-09-17 15:14, Jake Young wrote: > > Ideally you actually want fewer disks per server and more servers. > > This has been covered extensively in this mailing list. Rule of thumb >

Re: [ceph-users] What HBA to choose? To expand or not to expand?

2017-09-19 Thread Jake Young
On Tue, Sep 19, 2017 at 7:34 AM Kees Meijs wrote: > Hi list, > > It's probably something to discuss over coffee in Ede tomorrow but I'll > ask anyway: what HBA is best suitable for Ceph nowadays? > > In an earlier thread I read some comments about some "dumb" HBAs running > in IT

Re: [ceph-users] Ceph re-ip of OSD node

2017-08-30 Thread Jake Young
Hey Ben, Take a look at the osd log for another OSD whose IP you did not change. What errors does it show related to the re-ip'd OSD? Is the other OSD trying to communicate with the re-ip'd OSD's old IP address? Jake On Wed, Aug 30, 2017 at 3:55 PM Jeremy Hanmer
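
As a hedged illustration of that check, a tiny script that searches a healthy OSD's log for the re-ip'd OSD's old address; the log path, OSD id and address below are placeholders:

```python
# Hedged illustration of the check above: search a healthy OSD's log for
# the re-ip'd OSD's *old* address.  Log path, OSD id and the address are
# placeholders -- adjust for the actual cluster.
OLD_ADDR = "192.168.1.25"                 # example: old IP of the re-ip'd OSD
LOG = "/var/log/ceph/ceph-osd.3.log"      # example: log of an OSD that kept its IP

with open(LOG) as f:
    hits = [line.rstrip() for line in f if OLD_ADDR in line]

# Matches typically look like connection retries or heartbeat failures;
# if they keep appearing, peers still expect the old address (stale
# OSD/monitor maps or a stale ceph.conf are the usual suspects).
for line in hits[-20:]:
    print(line)
```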

Re: [ceph-users] Ceph and IPv4 -> IPv6

2017-06-27 Thread Jake Young
On Tue, Jun 27, 2017 at 2:19 PM Wido den Hollander wrote: > > > Op 27 juni 2017 om 19:00 schreef george.vasilaka...@stfc.ac.uk: > > > > > > Hey Ceph folks, > > > > I was wondering what the current status/roadmap/intentions etc. are on > the possibility of providing a way of

Re: [ceph-users] CentOS7 Mounting Problem

2017-04-10 Thread Jake Young
I've had this issue as well. In my case some or most osds on each host do mount, but a few don't mount or start. (I have 9 osds on each host). My workaround is to run partprobe on the device that isn't mounted. This causes the osd to mount and start automatically. The osds then also mount on
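
A minimal sketch of that workaround; the device names are examples, and the unmounted OSD disks have to be identified first (e.g. by comparing lsblk output with the mounted /var/lib/ceph/osd directories):

```python
# Minimal sketch of the partprobe workaround described above.  The device
# names are examples for disks whose OSDs did not come up at boot.
import subprocess

unmounted_devices = ["/dev/sdc", "/dev/sdf"]   # example devices

for dev in unmounted_devices:
    # partprobe makes the kernel re-read the partition table, which fires
    # the udev events that mount and start the OSD on that disk.
    subprocess.run(["partprobe", dev], check=True)
```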

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-07 Thread Jake Young
I use 2U servers with 9x 3.5" spinning disks in each. This has scaled well for me, in both performance and budget. I may add 3 more spinning disks to each server at a later time if I need to maximize storage, or I may add 3 SSDs for journals/cache tier if we need better performance. Another

Re: [ceph-users] tgt+librbd error 4

2016-12-18 Thread Jake Young
Bruno Silva <bemanuel...@gmail.com> wrote: > But FreeNAS is based on FreeBSD. > > > > On Sun, 18 Dec 2016 at 00:40, ZHONG <desert...@icloud.com> wrote: > > Thank you for your reply. > > On 17 Dec 2016, at 22:21, Jake Young <jak3...@gmail.com> wrote

Re: [ceph-users] tgt+librbd error 4

2016-12-17 Thread Jake Young
I don't have the specific crash info, but I have seen crashes with tgt when the ceph cluster was slow to respond to IO. It was things like this that pushed me to using another iSCSI to Ceph solution (FreeNAS running in KVM Linux hypervisor). Jake On Fri, Dec 16, 2016 at 9:16 PM ZHONG

Re: [ceph-users] Looking for a definition for some undocumented variables

2016-12-12 Thread Jake Young
On Mon, Dec 12, 2016 at 12:26 PM, John Spray <jsp...@redhat.com> wrote: > On Mon, Dec 12, 2016 at 5:23 PM, Jake Young <jak3...@gmail.com> wrote: > > I've seen these referenced a few times in the mailing list, can someone > > explain what they do exactly?

[ceph-users] Looking for a definition for some undocumented variables

2016-12-12 Thread Jake Young
I've seen these referenced a few times in the mailing list, can someone explain what they do exactly? What are the defaults for these values? osd recovery sleep and osd recovery max single start Thanks! Jake
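
For reference, a hedged sketch (not from the thread) of how options like these can be inspected and changed at runtime; the OSD id and the values are arbitrary examples, not recommendations:

```python
# Hedged sketch: read and change these OSD options at runtime.
import subprocess

def ceph(*args):
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

# Read the current values via the admin socket (run this on the OSD host;
# osd.0 is an example id).
print(ceph("daemon", "osd.0", "config", "get", "osd_recovery_sleep"))
print(ceph("daemon", "osd.0", "config", "get", "osd_recovery_max_single_start"))

# Inject new values cluster-wide without restarting the OSDs.
ceph("tell", "osd.*", "injectargs",
     "--osd-recovery-sleep 0.1 --osd-recovery-max-single-start 1")
```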

Re: [ceph-users] problem after reinstalling system

2016-12-08 Thread Jake Young
Hey Dan, I had the same issue that Jacek had after changing my OS and Ceph version from Ubuntu 14 - Hammer to Centos 7 - Jewel. I was also able to recover from the failure by renaming the .ldb files to .sst files. Do you know why this works? Is it just because leveldb changed the file naming
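
A heavily hedged sketch of the rename in question, assuming the usual FileStore omap path and that the OSD is stopped first; whether it is safe hinges on exactly the question asked above (that only LevelDB's file-extension convention changed, not the on-disk format):

```python
# Heavily hedged sketch of the rename being discussed (stop the OSD first).
# The omap path below is the usual FileStore layout but is an assumption
# for any given deployment.
import glob, os

OMAP_DIR = "/var/lib/ceph/osd/ceph-0/current/omap"   # example OSD id/path

for ldb in glob.glob(os.path.join(OMAP_DIR, "*.ldb")):
    sst = ldb[:-len(".ldb")] + ".sst"
    os.rename(ldb, sst)
    print("renamed", ldb, "->", sst)
```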

Re: [ceph-users] Ceph + VMWare

2016-10-07 Thread Jake Young
Hey Patrick, I work for Cisco. We have a 200TB cluster (108 OSDs on 12 OSD Nodes) and use the cluster for both OpenStack and VMware deployments. We are using iSCSI now, but it really would be much better if VMware did support RBD natively. We present a 1-2TB Volume that is shared between 4-8

Re: [ceph-users] ceph + vmware

2016-07-26 Thread Jake Young
On Thursday, July 21, 2016, Mike Christie <mchri...@redhat.com> wrote: > On 07/21/2016 11:41 AM, Mike Christie wrote: > > On 07/20/2016 02:20 PM, Jake Young wrote: > >> > >> For starters, STGT doesn't implement VAAI properly and you will need to > >> dis

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
I think the answer is that with 1 thread you can only ever write to one journal at a time. Theoretically, you would need 10 threads to be able to write to 10 nodes at the same time. Jake On Thursday, July 21, 2016, w...@globe.de wrote: > What I don't really understand is: > > Let's
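
The effect is easy to see with rados bench by varying the number of concurrent ops; the pool name and runtime below are examples:

```python
# With -t 1 only one object write is in flight, so only one primary OSD's
# journal is busy at a time; with -t 10 up to ten writes (and journals)
# can be active in parallel.  Pool name and runtime are examples.
import subprocess

for threads in (1, 10):
    print("--- rados bench, %d concurrent op(s) ---" % threads)
    subprocess.run(
        ["rados", "bench", "-p", "rbd", "30", "write", "-t", str(threads)],
        check=True)
```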

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-07-21 Thread Jake Young
My workaround to your single threaded performance issue was to increase the thread count of the tgtd process (I added --nr_iothreads=128 as an argument to tgtd). This does help my workload. FWIW below are my rados bench numbers from my cluster with 1 thread: This first one is a "cold" run. This
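
A sketch of that workaround; how the flag gets wired into the init scripts differs per distro, so launching the daemon by hand here is only to illustrate the option:

```python
# Sketch of the workaround mentioned above: start tgtd with a larger I/O
# thread pool (flag taken from the post).  Not a recommended service setup.
import subprocess

subprocess.run(["tgtd", "--nr_iothreads=128"], check=True)

# A single-threaded baseline comparable to the numbers quoted below can be
# produced with:  rados bench -p <pool> 60 write -t 1
```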

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Jake Young
On Wednesday, July 20, 2016, Jan Schermer wrote: > > > On 20 Jul 2016, at 18:38, Mike Christie > wrote: > > > > On 07/20/2016 03:50 AM, Frédéric Nass wrote: > >> > >> Hi Mike, > >> > >> Thanks for the update on the RHCS iSCSI target. > >> >

Re: [ceph-users] ceph + vmware

2016-07-16 Thread Jake Young
Mit freundlichen Gruessen / Best regards > > Oliver Dzombic > IP-Interactive > > mailto:i...@ip-interactive.de > > Anschrift: > > IP Interactive UG ( haftungsbeschraenkt ) > Zum Sonnenberg 1-3 > 63571 Gelnhausen > > HRB 93402 beim Amtsgericht Hanau > Ge

Re: [ceph-users] ceph + vmware

2016-07-15 Thread Jake Young
Zum Sonnenberg 1-3 > 63571 Gelnhausen > > HRB 93402 beim Amtsgericht Hanau > Geschäftsführung: Oliver Dzombic > > Steuer Nr.: 35 236 3622 1 > UST ID: DE274086107 > > > On 11.07.2016 at 22:24, Jake Young wrote: > > I'm using this setup with ESXi 5.1

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Jake Young
We use all Cisco UCS servers (C240 M3 and M4s) with the PCIE VIC 1385 40G NIC. The drivers were included in Ubuntu 14.04. I've had no issues with the NICs or my network whatsoever. We have two Cisco Nexus 5624Q that the OSD servers connect to. The switches are just switching two VLANs (ceph

Re: [ceph-users] 40Gb fileserver/NIC suggestions

2016-07-13 Thread Jake Young
My OSDs have dual 40G NICs. I typically don't use more than 1Gbps on either network. During heavy recovery activity (like if I lose a whole server), I've seen up to 12Gbps on the cluster network. For reference my cluster is 9 OSD nodes with 9x 7200RPM 2TB OSDs. They all have RAID cards with 4GB

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Jake Young
I'm using this setup with ESXi 5.1 and I get very good performance. I suspect you have other issues. Reliability is another story (see Nick's posts on tgt and HA to get an idea of the awful problems you can have), but for my test labs the risk is acceptable. One change I found helpful is to

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17112.html On Thursday, June 30, 2016, Mike Jacobacci wrote: > So after adding the ceph repo and enabling the CentOS-7 repo… It fails > trying to install ceph-common: > > Loaded plugins: fastestmirror > Loading

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-30 Thread Jake Young
bd/add > > It fails with just an i/o error… I am looking into it now. My cluster > health is OK, so I am hoping I didn’t miss a configuration or something. > > > On Jun 29, 2016, at 3:28 PM, Jake Young <jak3...@gmail.com>

Re: [ceph-users] Mounting Ceph RBD image to XenServer 7 as SR

2016-06-29 Thread Jake Young
On Wednesday, June 29, 2016, Mike Jacobacci wrote: > Hi all, > > Is there anyone using rbd for xenserver vm storage? I have XenServer 7 > and the latest Ceph, I am looking for the the best way to mount the rbd > volume under XenServer. There is not much recent info out there

Re: [ceph-users] RBD with iSCSI

2015-09-10 Thread Jake Young
On Wed, Sep 9, 2015 at 8:13 AM, Daleep Bais wrote: > Hi, > > I am following the steps from the URL > http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/ > to create

Re: [ceph-users] Ceph 0.94 (and lower) performance on 1 hosts ??

2015-07-29 Thread Jake Young
On Tue, Jul 28, 2015 at 11:48 AM, SCHAER Frederic frederic.sch...@cea.fr wrote: Hi again, So I have tried - changing the CPU frequency: either 1.6GHz or 2.4GHz on all cores - changing the memory configuration, from advanced ECC mode to performance mode, boosting the memory bandwidth from

Re: [ceph-users] Ceph 0.94 (and lower) performance on 1 hosts ??

2015-07-29 Thread Jake Young
On Wed, Jul 29, 2015 at 11:23 AM, Mark Nelson mnel...@redhat.com wrote: On 07/29/2015 10:13 AM, Jake Young wrote: On Tue, Jul 28, 2015 at 11:48 AM, SCHAER Frederic frederic.sch...@cea.fr wrote: Hi again, So I have tried - changing the cpus

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-14 Thread Jake Young
On 13.05.15 at 15:20, Jake Young wrote: I run my mons as VMs inside of UCS blade compute nodes. Do you use the fabric interconnects or the standalone blade chassis? Jake On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator goetz.reini...@filmakademie.de

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-13 Thread Jake Young
I run my mons as VMs inside of UCS blade compute nodes. Do you use the fabric interconnects or the standalone blade chassis? Jake On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator goetz.reini...@filmakademie.de wrote: Hi Christian, currently we do get good discounts as an

Re: [ceph-users] Using RAID Controller for OSD and JNL disks in Ceph Nodes

2015-05-04 Thread Jake Young
On Monday, May 4, 2015, Christian Balzer ch...@gol.com wrote: On Mon, 13 Apr 2015 10:39:57 +0530 Sanjoy Dasgupta wrote: Hi! This is an often discussed and clarified topic, but the reason why I am asking is because if we use a RAID controller with lots of cache (FBWC) and configure each

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-28 Thread Jake Young
On Tuesday, April 28, 2015, Dominik Hannen han...@xplace.de wrote: Hi ceph-users, I am currently planning a cluster and would like some input specifically about the storage-nodes. The non-osd systems will be running on more powerful system. Interconnect as currently planned: 4 x 1Gbit

Re: [ceph-users] Cost- and Powerefficient OSD-Nodes

2015-04-28 Thread Jake Young
On Tuesday, April 28, 2015, Nick Fisk n...@fisk.me.uk wrote: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dominik Hannen Sent: 28 April 2015 15:30 To: Jake Young Cc: ceph-users@lists.ceph.com

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-17 Thread Jake Young
Young Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Ceph on Solaris / Illumos On 04/15/2015 10:36 AM, Jake Young wrote: On Wednesday, April 15, 2015, Mark Nelson mnel...@redhat.com wrote: On 04/15/2015 08

[ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
Has anyone compiled ceph (either osd or client) on a Solaris based OS? The thread on ZFS support for osd got me thinking about using solaris as an osd server. It would have much better ZFS performance and I wonder if the osd performance without a journal would be 2x better. A second thought I

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
On Wednesday, April 15, 2015, Mark Nelson mnel...@redhat.com wrote: On 04/15/2015 08:16 AM, Jake Young wrote: Has anyone compiled ceph (either osd or client) on a Solaris based OS? The thread on ZFS support for osd got me thinking about using solaris as an osd server. It would have much

Re: [ceph-users] Ceph on Solaris / Illumos

2015-04-15 Thread Jake Young
On Wednesday, April 15, 2015, Alexandre Marangone amara...@redhat.com wrote: The LX branded zones might be a way to run OSDs on Illumos: https://wiki.smartos.org/display/DOC/LX+Branded+Zones For fun, I tried a month or so ago, managed to have a quorum. OSDs wouldn't start, I didn't look

Re: [ceph-users] Cores/Memory/GHz recommendation for SSD based OSD servers

2015-04-02 Thread Jake Young
On Thursday, April 2, 2015, Nick Fisk n...@fisk.me.uk wrote: I'm probably going to get shot down for saying this...but here goes. As a very rough guide, think of it more as you need around 10MHz for every IO, whether that IO is 4k or 4MB it uses roughly the same amount of CPU, as most of the
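
A rough worked example of that heuristic (illustrative numbers only, not from the thread):

```python
# Rough worked example of the ~10 MHz-per-IO rule of thumb quoted above.
cores = 2 * 10            # e.g. dual 10-core CPUs in one OSD node (example)
clock_mhz = 2400          # 2.4 GHz per core
mhz_per_io = 10           # the heuristic

total_mhz = cores * clock_mhz            # 48,000 MHz of CPU in the node
approx_iops = total_mhz / mhz_per_io     # ~4,800 IOPS of CPU budget
print("~%.0f IOPS of CPU headroom for this node" % approx_iops)
```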

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
(zeroing) is always 1MB writes, so I don't think this caused my write size to change. Maybe it did something to the iSCSI packets? Jake On Fri, Mar 6, 2015 at 9:04 AM, Nick Fisk n...@fisk.me.uk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jake Young Sent

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Thursday, March 5, 2015, Nick Fisk n...@fisk.me.uk wrote: Hi All, Just a heads up after a day’s experimentation. I believe tgt with its default settings has a small write cache when exporting a kernel mapped RBD. Doing some write tests I saw 4 times the write throughput when using

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Fri, Mar 6, 2015 at 10:18 AM, Nick Fisk n...@fisk.me.uk wrote: On Fri, Mar 6, 2015 at 9:04 AM, Nick Fisk n...@fisk.me.uk wrote: From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jake Young Sent: 06 March 2015 12:52 To: Nick Fisk Cc: ceph-users@lists.ceph.com

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Jake Young
On Friday, March 6, 2015, Steffen W Sørensen ste...@me.com wrote: On 06/03/2015, at 16.50, Jake Young jak3...@gmail.com wrote: After seeing your results, I've been considering experimenting with that. Currently, my iSCSI proxy nodes are VMs. I would like to build a few

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
disaster. Many thanks, Nick *From:* Jake Young [mailto:jak3...@gmail.com] *Sent:* 14 January 2015 16:54 *To:* Nick Fisk *Cc:* Giuseppe Civitella; ceph-users *Subject:* Re: [ceph-users] Ceph, LIO, VMWARE anyone? Yes, it's active/active and I found that VMWare can

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
paths. It’s very complicated and I am close to giving up. What do you reckon: accept defeat and go with a much simpler tgt and virtual IP failover solution for the time being until the Redhat patches make their way into the kernel? *From:* Jake Young [mailto:jak3...@gmail.com]

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-16 Thread Jake Young
to that configuration as you reduce/eliminate a lot of the troubles I have had with resources failing over. Nick *From:* Jake Young [mailto:jak3...@gmail.com] *Sent:* 14 January 2015 12:50 *To:* Nick Fisk *Cc:* Giuseppe Civitella; ceph-users *Subject:* Re: [ceph-users] Ceph, LIO, VMWARE

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-14 Thread Jake Young
Nick, Where did you read that having more than 1 LUN per target causes stability problems? I am running 4 LUNs per target. For HA I'm running two linux iscsi target servers that map the same 4 rbd images. The two targets have the same serial numbers, T10 address, etc. I copy the primary's

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-06 Thread Jake Young
created in this area), they can use this flag to skip the time-consuming trimming. What do you think? That sounds like a good solution. Like undoing a grow-image operation *From:* Jake Young [mailto:jak3...@gmail.com] *Sent:* Monday, January 5, 2015 9

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-05 Thread Jake Young
to the actual image names that the rbd command line tool understands? Regards, Edwin Peer On 01/04/2015 08:48 PM, Jake Young wrote: On Sunday, January 4, 2015, Dyweni - Ceph-Users 6exbab4fy...@dyweni.com wrote: Hi, If it's

Re: [ceph-users] rbd resize (shrink) taking forever and a day

2015-01-04 Thread Jake Young
On Sunday, January 4, 2015, Dyweni - Ceph-Users 6exbab4fy...@dyweni.com wrote: Hi, If it's the only thing in your pool, you could try deleting the pool instead. I found that to be faster in my testing; I had created 500TB when I meant to create 500GB. Note for the Devs: It would be nice if
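
A sketch of that suggestion; both commands are destructive, and the pool/image names are examples:

```python
# Dropping the pool removes all of its objects at once, which is why it is
# so much faster than shrinking or deleting a huge, mostly-empty image
# object by object.  Destructive -- names below are examples.
import subprocess

POOL = "testpool"   # example: a pool that only contains the oversized image

subprocess.run(
    ["ceph", "osd", "pool", "delete", POOL, POOL,
     "--yes-i-really-really-mean-it"],
    check=True)

# Per-image alternative (slower, walks the objects):
# subprocess.run(["rbd", "rm", POOL + "/bigimage"], check=True)
```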

Re: [ceph-users] Double-mounting of RBD

2014-12-17 Thread Jake Young
On Wednesday, December 17, 2014, Josh Durgin josh.dur...@inktank.com wrote: On 12/17/2014 03:49 PM, Gregory Farnum wrote: On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley bradley.mcnam...@seattle.gov wrote: I have a somewhat interesting scenario. I have an RBD of 17TB formatted using

Re: [ceph-users] tgt / rbd performance

2014-12-13 Thread Jake Young
On Friday, December 12, 2014, Mike Christie mchri...@redhat.com wrote: On 12/11/2014 11:39 AM, ano nym wrote: there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a msa70 which gives me about 600 MB/s continuous write speed with rados write bench. tgt on the server with rbd

Re: [ceph-users] Giant osd problems - loss of IO

2014-12-06 Thread Jake Young
to yours. I guess my values should be different as I am running a 40gbit/s network with IPoIB. The actual throughput on IPoIB is about 20gbit/s according to iperf and the like. Andrei -- *From: *Jake Young jak3...@gmail.com

Re: [ceph-users] running as non-root

2014-12-06 Thread Jake Young
On Saturday, December 6, 2014, Sage Weil sw...@redhat.com wrote: While we are on the subject of init systems and packaging, I would *love* to fix things up for hammer to - create a ceph user and group - add various users to ceph group (like qemu or kvm user and apache/www-data?) Maybe a
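
A hedged sketch of what that proposal amounts to if done by hand on a typical host today; the account names qemu and www-data are the examples from the mail, and exact flags and paths vary by distro:

```python
# A system "ceph" user and group, with service accounts added to the ceph
# group so they can read the keyrings.  Exact account names/paths vary.
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

sh("groupadd", "--system", "ceph")
sh("useradd", "--system", "-g", "ceph",
   "-d", "/var/lib/ceph", "-s", "/sbin/nologin", "ceph")

for svc_user in ("qemu", "www-data"):     # example service accounts
    sh("usermod", "-a", "-G", "ceph", svc_user)
```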

Re: [ceph-users] Giant osd problems - loss of IO

2014-12-04 Thread Jake Young
On Fri, Nov 14, 2014 at 4:38 PM, Andrei Mikhailovsky and...@arhont.com wrote: Any other suggestions why several osds are going down on Giant and causing IO to stall? This was not happening on Firefly. Thanks I had a very similar problem to yours which started after upgrading from Firefly to

Re: [ceph-users] Admin Node Best Practices

2014-10-31 Thread Jake Young
On Friday, October 31, 2014, Massimiliano Cuttini m...@phoenixweb.it wrote: Any hint? On 30/10/2014 15:22, Massimiliano Cuttini wrote: Dear Ceph users, I just received 2 fresh new servers and I'm starting to set up my Ceph cluster. The first step is: create the admin node in

Re: [ceph-users] PERC H710 raid card

2014-07-17 Thread Jake Young
There are two command line tools for Linux for LSI cards: megacli and storcli You can do pretty much everything from those tools. Jake On Thursday, July 17, 2014, Dennis Kramer (DT) den...@holmes.nl wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi, What do you recommend in case of
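
A few typical read-only status queries with both tools, as a sketch; binary names and install paths differ between installs (MegaCli vs MegaCli64, storcli vs storcli64), so treat these invocations as examples:

```python
# Example read-only LSI controller queries via the two CLI tools.
import subprocess

def show(cmd):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

show(["storcli", "/c0", "show"])                  # controller 0 summary
show(["MegaCli", "-LDInfo", "-Lall", "-aAll"])    # all logical drives, all adapters
show(["MegaCli", "-PDList", "-aAll"])             # all physical drives
```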

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Jake Young
On Mon, Jun 23, 2014 at 3:03 PM, Mark Nelson mark.nel...@inktank.com wrote: Well, for random IO you often can't do much coalescing. You have to bite the bullet and either parallelize things or reduce per-op latency. Ceph already handles parallelism very well. You just throw more disks at

Re: [ceph-users] Moving Ceph cluster to different network segment

2014-06-13 Thread Jake Young
I recently changed IP and hostname of an osd node running dumpling and had no problems. You do need to have your ceph.conf file built correctly or your osds won't start. Make sure the new IPs and new hostname are in there before you change the IP. The crushmap showed a new bucket (host name)

Re: [ceph-users] Ceph with VMWare / XenServer

2014-05-12 Thread Jake Young
Hello Andrei, I'm trying to accomplish the same thing with VMWare. So far I'm still doing lab testing, but we've gotten as far as simulating a production workload. Forgive the lengthy reply, I happen to be sitting on an airplane. My existing solution is using NFS servers running in ESXi VMs.

Re: [ceph-users] Manually mucked up pg, need help fixing

2014-05-05 Thread Jake Young
I was in a similar situation where I could see the PGs data on an osd, but there was nothing I could do to force the pg to use that osd's copy. I ended up using the rbd_restore tool to create my rbd on disk and then I reimported it into the pool. See this thread for info on rbd_restore:

Re: [ceph-users] Cannot create a file system on the RBD

2014-04-08 Thread Jake Young
Maybe different kernel versions between the box that can format and the box that can't. When you created the rbd image, was it format 1 or 2? Jake On Thursday, April 3, 2014, Thorvald Hallvardsson thorvald.hallvards...@gmail.com wrote: Hi, I have found that problem is somewhere within the
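
A hedged sketch of the checks implied here: confirm the image format and the kernel on each box, since older kernel RBD clients cannot map format 2 images; pool and image names are examples:

```python
# Check the kernel version and the RBD image format (names are examples).
import platform, subprocess

print("kernel:", platform.release())

info = subprocess.run(["rbd", "info", "rbd/testimage"],
                      capture_output=True, text=True).stdout
print(info)   # look for the "format: 1" / "format: 2" line

# If an old kernel must map the image, recreating it as format 1 is one
# workaround:  rbd create --image-format 1 --size 10240 rbd/testimage
```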

Re: [ceph-users] No more Journals ?

2014-03-14 Thread Jake Young
You should take a look at this blog post: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/ The test results show that using a RAID card with a write-back cache without journal disks can perform better than or equivalent to using journal disks with XFS. As to

[ceph-users] Running a mon on a USB stick

2014-03-08 Thread Jake Young
I was planning to setup a small Ceph cluster with 5 nodes. Each node will have 12 disks and run 12 osds. I want to run 3 mons on 3 of the nodes. The servers have an internal SD card that I'll use for the OS and an internal 16GB USB port that I want to mount the mon files to. From what I