Burkhard Linke writes:
>
> The default weight is the size of the OSD in tera bytes. Did you
use
> a very small OSD partition for test purposes, e.g. 20 GB? In that
> case the weight is rounded and results in an effective weight of
> 0.0. As a result the [...]
[...] suffer data loss with a single
osd being removed even if there were no reweighting beforehand. Does the
backfilling temporarily reduce data durability in some way?
Is there a way to see which pgs actually have data on a given osd?
I attached an example of one of the incomplete pgs.
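For reference, the PGs mapped to a given OSD can be listed directly (the osd
id below is just an example):

  ceph pg ls-by-osd osd.6     # PGs with data on osd.6
  ceph pg dump pgs_brief      # or the full PG-to-OSD mapping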
On Friday, July 5, 2019 11:28:32 AM CDT Caspar Smit wrote:
> Kyle,
>
> Was the cluster still backfilling when you removed osd 6 or did you only
> check its utilization?
Yes, still backfilling.
>
> Running an EC pool with m=1 is a bad idea. EC pool min_size = k+1, so loss
> of a single OSD leaves PGs below min_size and I/O stops.
On Friday, July 5, 2019 11:50:44 AM CDT Paul Emmerich wrote:
> * There are virtually no use cases for ec pools with m=1, this is a bad
> configuration as you can't have both availability and durability
I'll have to look into this more. The cluster only has 4 hosts, so it might be
worth switching
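For reference, a k=2/m=2 setup on a 4-host cluster could be sketched like
this (profile and pool names and the PG count are placeholders; the
crush-failure-domain option assumes Luminous or later):

  ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec22

With k=2/m=2, min_size = k+1 = 3 and a single OSD loss no longer blocks I/O.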
straightforward. Simply configure the
VLAN/subnets on the new physical switches and move links over one by
one. Once all the links are moved over you can remove the VLAN and
subnets that are now on the new kit from the original hardware.
--
Kyle
a 3.2+ kernel (iirc) can give back blocks by issuing TRIM.
http://wiki.qemu.org/Features/QED/Trim
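A rough sketch of wiring discard through with a modern QEMU and virtio-scsi
(image and id names are placeholders):

  qemu-system-x86_64 ... \
    -drive file=rbd:rbd/vm1,format=raw,if=none,id=d0,discard=unmap \
    -device virtio-scsi-pci -device scsi-hd,drive=d0

Then 'fstrim' (or mounting with -o discard) inside the guest releases the
unused blocks.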
--
Kyle
as appropriate. With
single replica cache pools, loss of OSDs becomes a real concern; in the
case of RBD this means losing arbitrary chunk(s) of your block devices
- bad news. If you want host independence, durability and speed your
best bet is a replicated cache pool (2-3x).
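For reference, putting a replicated cache pool in front of a base pool looks
roughly like this (pool names are placeholders):

  ceph osd pool set cachepool size 3
  ceph osd tier add basepool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay basepool cachepool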
--
Kyle
be erasure coded or N+1 replicated
(I'd recommend N+2 or 3x replica). Ceph could potentially do what you
described in the future, it just doesn't yet.
--
Kyle
you've
tested that your recovery tunables are suitable for your hardware
configuration.
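The usual knobs can be injected at runtime, e.g. to throttle recovery (values
are examples; persist them under [osd] in ceph.conf):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'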
--
Kyle
a datacenter with five nines power availability is going to
see about five minutes of downtime per year, and that would qualify for the highest
rating from the Uptime Institute (Tier IV)! I've lost power to Ceph
clusters on several occasions, in all cases the journals were on
spinning media.
--
Kyle
by default. As such you can use kernel
instrumentation to view what is going on under the Ceph OSDs.
--
Kyle
Can you paste me the whole output of the install? I am curious why/how you
are getting el7 and el6 packages.
priority=1 is required in /etc/yum.repos.d/ceph.repo entries.
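For reference, a sketch of a pinned repo entry (the baseurl is an example;
priority=1 only takes effect with the yum priorities plugin installed):

  [ceph]
  name=Ceph packages
  baseurl=http://ceph.com/rpm/el6/x86_64/
  enabled=1
  priority=1
  gpgcheck=1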
--
Kyle
--
Kyle
via SSD.
--
Warren
On Oct 9, 2013, at 5:52 PM, Kyle Bader kyle.ba...@gmail.com wrote:
Journal on SSD should effectively double your throughput because data will
not be written to the same device twice to ensure transactional integrity.
Additionally, by placing the OSD journal on an SSD you
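For reference, the journal can be placed on a separate SSD at creation time
with something like this (device names are placeholders):

  ceph-disk prepare /dev/sdd /dev/sdb   # data on sdd, journal partition on SSD sdb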
--
Kyle
--
Kyle
-all is running under upstart you can simply use
ceph-disk-prepare and a new OSD will be created based off the OSD bootstrap
key; the OSD id is automatically allocated by the monitor during this
process.
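In other words, something like this (device name is a placeholder):

  ceph-disk-prepare /dev/sdd   # the monitor hands back the new OSD id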
--
Kyle
this is
a much more acute concern for public clouds.
Here's a good RFC wrt overlays if anyone is in dire need of bed time reading:
http://tools.ietf.org/html/draft-mity-nvo3-use-case-04
--
Kyle
a blog post that goes into more detail:
http://etherealmind.com/difference-twinax-category-6-10-gigabit-ethernet/
I would probably go with the Arista 7050-S over the 7050-T and use
twinax for ToR to OSD node links and SFP+SR uplinks to spine switches
if you need longer runs.
--
Kyle
option is to slowly lower the weight in CRUSH for the
OSD(s) you want to remove. Hopefully that helps!
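For reference, the gradual drain might look like this (osd id and step sizes
are placeholders; wait for recovery to finish between steps):

  ceph osd crush reweight osd.12 0.5
  ceph osd crush reweight osd.12 0.25
  ceph osd crush reweight osd.12 0
  ceph osd out 12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12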
--
Kyle
The quick start guide is linked below; it should help you hit the ground
running.
http://ceph.com/docs/master/start/quick-ceph-deploy/
Let us know if you have questions or bump into trouble!
if it factored in hierarchy to identify
poorly performing osds, nodes, racks, etc.
--
Kyle
a recursive
lookup in response to an A/AAAA request as a workaround.
--
Kyle
ST240FN0021 connected via a SAS2x36 to an LSI 9207-8i.
The problem might be SATA transport protocol overhead at the expander.
Have you tried directly connecting the SSDs to SATA2/3 ports on the
mainboard?
--
Kyle
Summit?
I'd be happy to!
--
Kyle
be automated lowers the barrier for contribution. Bonus
points if this could be extended to SMART and failed drives so we
could have a community-generated report similar to Google's disk
population study they presented at FAST'07.
--
Kyle
Would this be something like
http://wiki.ceph.com/01Planning/02Blueprints/Firefly/Ceph-Brag ?
Something very much like that :)
--
Kyle
replace the files, maybe
that means making a null rule and using -o
Dpkg::Options::='--force-confold' in ceph-deploy/chef/puppet/whatever.
You will also want to avoid putting the mounts in fstab because it
could render your node unbootable if the device or filesystem fails.
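For reference, the apt invocation would look something like:

  apt-get -o Dpkg::Options::="--force-confold" install ceph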
--
Kyle
via Qemu/KVM. This might be a starting point if you're interested in
testing Xen/RBD integration:
http://wiki.xenproject.org/wiki/Ceph_and_libvirt_technology_preview
Hope that helps!
--
Kyle
Is the OS doing anything apart from ceph? Would booting a ramdisk-only
system from USB or compact flash work?
I haven't tested this kind of configuration myself but I can't think of
anything that would preclude this type of setup. I'd probably use squashfs
layered with a tmpfs via aufs to avoid
This journal problem is a bit of wizardry to me, I even had weird
intermittent issues with OSDs not starting because the journal was not
found, so please do not hesitate to suggest a better journal setup.
You mentioned using SAS for journals; if your OSDs are SATA and an expander
is in the data
Is having two cluster networks like this a supported configuration? Every
osd and mon can reach every other, so I think it should be.
Maybe. If your back end network is a supernet and each cluster network is a
subnet of that supernet. For example:
Ceph.conf cluster network (supernet): 10.0.0.0/8
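A ceph.conf sketch of the idea (addresses are examples):

  [global]
  public network  = 192.168.0.0/24
  cluster network = 10.0.0.0/8

Each physical segment then gets a subnet inside the supernet, e.g. 10.1.0.0/16
for one rack and 10.2.0.0/16 for another.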
, the former is much preferable!
--
Kyle
throughput if there are plenty of connections
going on, and less cpu load as no need to reassemble fragments.
One of the DreamHost clusters is using a pair of bonded 1GbE links on
the public network and another pair for the cluster network, we
configured each to use mode 802.3ad.
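For reference, a Debian-style sketch of such a bond (addresses and NIC names
are placeholders):

  auto bond0
  iface bond0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      bond-slaves eth0 eth1
      bond-mode 802.3ad
      bond-miimon 100
      bond-xmit-hash-policy layer3+4

The layer3+4 hash policy helps spread flows between the same pair of hosts
across both links.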
--
Kyle
I've been running similar calculations recently. I've been using this
tool from Inktank to calculate RADOS reliabilities with different
assumptions:
https://github.com/ceph/ceph-tools/tree/master/models/reliability
But I've also had similar questions about RBD (or any multi-part files
.
--
Kyle
For you holiday pleasure I've prepared a SysAdvent article on Ceph:
http://sysadvent.blogspot.com/2013/12/day-15-distributed-storage-with-ceph.html
Check it out!
--
Kyle
Has anyone tried scaling a VM's io by adding additional disks and
striping them in the guest os? I am curious what effect this would have
on io performance?
Why would it? You can also change the stripe size of the RBD image.
Depending on the workload you might change it from 4MB to something
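For reference, a striped image could be created like this (sizes and names
are placeholders; non-default striping requires image format 2):

  rbd create rbd/striped-vol --size 10240 --image-format 2 \
      --stripe-unit 65536 --stripe-count 16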
Do you have any futher detail on this radosgw bug?
https://github.com/ceph/ceph/commit/0f36eddbe7e745665a634a16bf3bf35a3d0ac424
https://github.com/ceph/ceph/commit/0b9dc0e5890237368ba3dc34cb029010cb0b67fd
Does it only apply to emperor?
The bug is present in dumpling too.
some of the other 48 ports, 12 for 2:1 and
24 for a non-blocking fabric. Given number of nodes you have/plan to
have you will be utilizing 6-12 links per switch, leaving you with 12-18
links for clients on a non-blocking fabric, 24-30 for 2:1 and 36-48 for 4:1.
--
Kyle
take it that is to mean that any RBD volume of sufficient size is indeed
spread over all disks?
Spread over all placement groups; the difference is subtle but there
is a difference.
--
Kyle
.
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#backfilling
Doing both would improve the quality of models generated by the tool.
--
Kyle
the public network if that
was still available in case the cluster network fails?
Ceph doesn't currently support this type of configuration.
Hope that clears things up!
--
Kyle
a solution.
That's a shame, but at least you will be better prepared if it happens
again, hopefully your luck is not as unfortunate as mine!
--
Kyle Bader
Changing pg_num for .rgw.buckets to a power of 2 and 'crush tunables
optimal' didn't help :(
Did you bump pgp_num as well? The split pgs will stay in place until
pgp_num is bumped as well; if you do this, be prepared for (potentially
lots of) data movement.
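For reference (the pool from this thread; the target count is an example):

  ceph osd pool set .rgw.buckets pg_num 2048
  ceph osd pool set .rgw.buckets pgp_num 2048   # this step triggers the movement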
Why would it help? Since it's not that ONE OSD will be primary for all
objects. There will be 1 Primary OSD per PG and you'll probably have a
couple of thousand PGs.
The primary may be across an oversubscribed/expensive link, in which case a
local replica with a common ancestor to the client may
?
Not yet!
--
Kyle
millions of tenants, you now
have a massive key escrow/management problem that only buys you a bit
of additional security when block devices are detached. Sounds like a
crappy deal to me, I'd either go with #1 or #3.
--
Kyle
feedback regarding our plan would also be welcomed.
I would probably run each disk as its own OSD, which means you need a
bit more memory per host. Networking could certainly be a bottleneck
with 8 to 16 spindle nodes. YMMV.
--
Kyle
https://github.com/axboe/fio
Assuming you already have a cluster and a client configured this
should do the trick:
https://github.com/axboe/fio/blob/master/examples/rbd.fio
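A minimal job sketch (pool, image and client names are placeholders; create
the image first with 'rbd create rbd/fio-test --size 2048'):

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  bs=4k
  rw=randwrite
  iodepth=32
  [rbd-bench]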
--
Kyle
Server or
HAProxy/Nginx. The other components of Ceph are horizontally scalable
and because of the way Ceph's native protocols work you don't need
load balancers doing L2/L3/L7 tricks to achieve HA.
--
Kyle
You're right. Sorry, I didn't specify I was trying this for Radosgw. Even for
this I'm seeing performance degrade once my clients start to hit the LB VIP.
Could you tell us more about your load balancer and configuration?
--
Kyle
MTU?
--
Kyle
(defaults to formatting xfs). This is known as a single device
OSD, in contrast with a multi-device OSD where the journal is on a
completely different device (like a partition on a shared journaling
SSD).
--
Kyle
the
flashcache OSD method is more particular about disk:ssd ratio, whereas
in a tier the flash could be on completely separate hosts (possibly
dedicated flash machines).
--
Kyle
/dev/mapper/x'
--
Kyle
I'm presuming this is the correct list (rather than the -devel list); please
correct me if I'm wrong there.
I setup ceph (0.56.4) a few months ago with two disk servers and one
dedicated monitor. The disk servers also have monitors, so there are a
total of 3 monitors for the cluster. Each of the
On 8/14/13 8:28 AM, Kyle Hutson kylehut...@k-state.edu wrote:
Ah, didn't realize the repos were version-specific. Thanks Dan!
On Wed, Aug 14, 2013 at 9:20 AM, Dan van der Ster
do people consider a UPS + Shutdown procedures a suitable substitute?
I certainly wouldn't, I've seen utility power fail and the transfer
switch fail to transition to UPS strings. Had this happened to me with
nobarrier it would have been a very sad day.
--
Kyle Bader
Last night I blew away my previous ceph configuration (this environment is
pre-production) and have 0.87.1 installed. I've manually edited the
crushmap so it now looks like https://dpaste.de/OLEa
I currently have 144 OSDs on 8 nodes.
After increasing pg_num and pgp_num to a more suitable 1024
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner
Sent: 04 March, 2015 12:49
To: Kyle Hutson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] New EC pool undersized
Hmmm, I just struggled through this myself. How many racks do you have
That did it.
'step set_choose_tries 200' fixed the problem right away.
Thanks Yann!
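For anyone hitting this later, the resulting rule looks something like this
sketch (numbers are illustrative):

  rule ecpool {
          ruleset 1
          type erasure
          min_size 3
          max_size 20
          step set_chooseleaf_tries 5
          step set_choose_tries 200
          step take default
          step chooseleaf indep 0 type host
          step emit
  }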
On Wed, Mar 4, 2015 at 2:59 PM, Yann Dupont y...@objoo.org wrote:
On 04/03/2015 21:48, Don Doerner wrote:
Hmmm, I just struggled through this myself. How many racks do you have?
If not more than 8,
, so
try 2048.
-don-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner
Sent: 04 March, 2015 12:14
To: Kyle Hutson; Ceph Users
Subject: Re: [ceph-users] New EC pool undersized
In this case, that number means that there is not an OSD that can
:
Hmmm, I just struggled through this myself. How many racks do you have?
If not more than 8, you might want to make your failure domain smaller? I.e.,
maybe host? That, at least, would allow you to debug the situation…
-don-
From: Kyle Hutson [mailto:kylehut...@ksu.edu]
Sent: 04 March
I'm having a similar issue.
I'm following http://ceph.com/docs/master/install/manual-deployment/ to a T.
I have OSDs on the same host deployed with the short-form and they work
fine. I am trying to deploy some more via the long form (because I want
them to appear in a different location in the
for the OSD then issue ceph-disk activate
/dev/sdx1 to restart the OSD process. You probably could stop it with
systemctl since I believe udev creates a resource for it (I should
probably look into that now that this system will be going into production
soon).
On Wed, Feb 25, 2015 at 2:13 PM, Kyle Hutson
, it doesn't start the long-running
process.
On Wed, Feb 25, 2015 at 2:59 PM, Kyle Hutson kylehut...@ksu.edu wrote:
But I already issued that command (back in step 6).
The interesting part is that ceph-disk activate apparently does it
correctly. Even after reboot, the services start
http://ceph.com/docs/giant/dev/erasure-coded-pool/ to create the EC pool.
I'm not sure (i.e. I never tried) about creating an EC pool the way you did. The
normal replicated ones do work like this.
On Fri, Feb 20, 2015 at 4:49 PM, Kyle Hutson kylehut...@ksu.edu wrote:
I manually edited my crushmap, basing my
Oh, and I don't yet have any important data here, so I'm not worried about
losing anything at this point. I just need to get my cluster happy again so
I can play with it some more.
On Fri, Feb 20, 2015 at 11:00 AM, Kyle Hutson kylehut...@ksu.edu wrote:
Here was the process I went through.
1) I
I manually edited my crushmap, basing my changes on
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
I have SSDs and HDDs in the same box and was wanting to separate them by
ruleset. My current crushmap can be seen at http://pastie.org/9966238
I had it
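For reference, the edit round-trip is:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt, then:
  crushtool -c crushmap.txt -o crushmap-new.bin
  ceph osd setcrushmap -i crushmap-new.bin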
was to change from osd_host to just host. See if that works.
On Feb 25, 2015 5:44 PM, Kyle Hutson kylehut...@ksu.edu wrote:
I just tried it, and that does indeed get the OSD to start.
However, it doesn't add it to the appropriate place so it would survive a
reboot. In my case, running 'service ceph status
For what it's worth, I don't think being patient was the answer. I was
having the same problem a couple of weeks ago, and I waited from before 5pm
one day until after 8am the next, and still got the same errors. I ended up
adding a new cephfs pool with a newly-created small pool, but was never
to migrate to AWS.
This sounds far more sensible. I'd look at the I2 (iops) or D2
(density) class instances, depending on use case.
--
Kyle
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
pretty well.
Then, about noon today, we had an mds crash. And then the failover mds
crashed. And this cascaded through all 4 mds servers we have.
If I try to start it ('service ceph start mds' on CentOS 7.1), it appears
to
Thank you, John!
That was exactly the bug we were hitting. My Google-fu didn't lead me to
this one.
On Wed, Apr 15, 2015 at 4:16 PM, John Spray john.sp...@redhat.com wrote:
On 15/04/2015 20:02, Kyle Hutson wrote:
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
pretty
I upgraded from giant to hammer yesterday and now 'ceph -w' is constantly
repeating this message:
2015-04-09 08:50:26.318042 7f95dbf86700 0 -- 10.5.38.1:0/2037478
10.5.38.1:6789/0 pipe(0x7f95e00256e0 sd=3 :39489 s=1 pgs=0 cs=0 l=1
c=0x7f95e0023670).connect protocol feature mismatch, my
that we can inspect it with our tools?
-Greg
On Thu, Apr 9, 2015 at 7:57 AM, Kyle Hutson kylehut...@ksu.edu wrote:
Here 'tis:
https://dpaste.de/POr1
On Thu, Apr 9, 2015 at 9:49 AM, Gregory Farnum g...@gregs42.com wrote:
Can you dump your crush map and post it on pastebin or something
On Thu, Apr 9, 2015 at 7:26 AM, Kyle Hutson kylehut...@ksu.edu wrote:
Nope - it's 64-bit.
(Sorry, I missed
think of that might have slipped through our QA.)
On Thu, Apr 9, 2015 at 7:17 AM, Kyle Hutson kylehut...@ksu.edu wrote:
I did nothing to enable anything else. Just changed my ceph repo from
'giant' to 'hammer', then did 'yum update' and restarted services.
On Thu, Apr 9, 2015 at 9:15 AM
102b84a042aca, missing 1
It appears that even the latest kernel doesn't have support
for CEPH_FEATURE_CRUSH_V4
How do I make my ceph cluster backward-compatible with the old cephfs
client?
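One possible approach (a sketch, and it will move data): hammer's tunables
switch buckets to straw2, which is what requires CRUSH_V4, so reverting the
buckets restores compatibility:

  ceph osd getcrushmap -o map.bin
  crushtool -d map.bin -o map.txt
  sed -i 's/alg straw2/alg straw/' map.txt
  crushtool -c map.txt -o map-new.bin
  ceph osd setcrushmap -i map-new.bin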
On Thu, Apr 9, 2015 at 8:58 AM, Kyle Hutson kylehut...@ksu.edu wrote:
I upgraded from giant to hammer
Nice! Thanks!
On Wed, Oct 14, 2015 at 1:23 PM, Sage Weil <s...@newdream.net> wrote:
> On Wed, 14 Oct 2015, Kyle Hutson wrote:
> > > Which bug? We want to fix hammer, too!
> >
> > This one:
> > https://www.mail-archive.com/ceph-users@lists.ceph.com/m
A couple of questions related to this, especially since we have a hammer
bug that's biting us so we're anxious to upgrade to Infernalis.
1) RE: librbd and librados ABI compatibility is broken. Be careful installing
this RC on client machines (e.g., those running qemu). It will be fixed in
the
> Which bug? We want to fix hammer, too!
This one:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg23915.html
(Adam sits about 5' from me.)
I was wondering if anybody could give me some insight as to how CephFS does
its caching - read-caching in particular.
We are using CephFS with an EC pool on the backend with a replicated cache
pool in front of it. We're seeing some very slow read times. Trying to
compute an md5sum on a 15GB file
015 at 11:58 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
> > I was wondering if anybody could give me some insight as to how CephFS
> does
> > its caching - read-caching in particular.
> >
> > We are using CephFS with an EC pool on the backend with a replicated
>
On Wed, Sep 9, 2015 at 9:34 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Wed, Sep 9, 2015 at 3:27 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
> > We are using Hammer - latest released version. How do I check if it's
> > getting promoted into the cache?
>
>
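For what it's worth, a crude way to watch promotions is to look at the cache
pool directly (pool name is a placeholder):

  rados -p cachepool ls | head   # objects currently in the cache tier
  ceph df detail                 # watch the cache pool's object count grow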
that's being done, can be done, is going to be done, or has
even been considered?
On Wed, Sep 9, 2015 at 10:33 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Wed, Sep 9, 2015 at 4:26 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
> >
> >
> > On Wed, Sep 9, 2015
Hello,
I have been working on a very basic cluster with 3 nodes and a single OSD
per node. I am using Hammer installed on CentOS 7
(ceph-0.94.5-0.el7.x86_64) since it is the LTS version. I kept running
into an issue of not getting past the status of
undersized+degraded+peered. I finally
, 2017 at 11:41 AM, Kyle Drake <k...@kyledrake.net> wrote:
> On Sun, Apr 9, 2017 at 9:31 AM, John Spray <jsp...@redhat.com> wrote:
>
>> On Sun, Apr 9, 2017 at 12:48 AM, Kyle Drake <k...@kyledrake.net> wrote:
>> > Pretty much says it all. 1GB test file copy
On Sun, Apr 9, 2017 at 9:31 AM, John Spray <jsp...@redhat.com> wrote:
> On Sun, Apr 9, 2017 at 12:48 AM, Kyle Drake <k...@kyledrake.net> wrote:
> > Pretty much says it all. 1GB test file copy to local:
> >
> > $ time cp /mnt/ceph-kernel-driver-test/test.img
Pretty much says it all. 1GB test file copy to local:
$ time cp /mnt/ceph-kernel-driver-test/test.img .
real 2m50.063s
user 0m0.000s
sys 0m9.000s
$ time cp /mnt/ceph-fuse-test/test.img .
real 0m3.648s
user 0m0.000s
sys 0m1.872s
Yikes. The kernel driver averages ~5MB/s and the fuse driver
We had a 26-node production ceph cluster which we upgraded to Luminous a
little over a month ago. I added a 27th-node with Bluestore and didn't have
any issues, so I began converting the others, one at a time. The first two
went off pretty smoothly, but the 3rd is doing something strange.
as to why this is happening?
On Thu, Feb 8, 2018 at 3:02 AM, Mike O'Connor <m...@oeg.com.au> wrote:
> On 7/02/2018 8:23 AM, Kyle Hutson wrote:
> > We had a 26-node production ceph cluster which we upgraded to Luminous
> > a little over a month ago. I added a 27th-node with Bl
to work towards sending a pull request in for the docs...
--Kyle