stress on that VM, but not so
much on the overall system, as expected).
I've also done some tests of deleting a snapshot a few minutes after
creating it, and nothing significant happened.
Have you tried restoring a snapshot? I found it unusably slow - as in hours
--
Lindsay
to freeze all your VMs all at once.
Um ... No. KVM/Qemu is fully virtualised.
--
Lindsay Mathieson
as the osd's or monitors.
You can use the kernel and fuse mount to access the same pool.
I used cephfs for VM hosting across multiple nodes, never had a problem.
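For example, something like this (hypothetical monitor address and mount
points; kernel client on one host, fuse on another, same tree):
$ sudo mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
$ sudo ceph-fuse -m 10.0.0.1:6789 /mnt/cephfs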
--
Lindsay Mathieson
are going to see
pretty bad write performance.
POOMA U - but I believe that linked clones, especially old ones, are
going to be pretty slow.
--
Lindsay Mathieson
On 3 January 2017 at 10:42, Kent Borg wrote:
> Can you say more about pool snapshot speed? Slow to make snapshot? Slow to
> access snapshot? Slow to delete a snapshot? Slow to access original? Slow to
> modify original? (Slow to modify at first or slow continually?)
This is
On 2/01/2017 3:06 PM, Henrik Korkuc wrote:
Thanks Henrik, appreciate the answers.
Did you consider Ceph RBD for VM hosting?
Yes, but we have a requirement for frequent use of snapshots and rbd
snapshots are unusably slow. qcow2 on top of cephfs works reasonably well.
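For context, the two snapshot paths look roughly like this (hypothetical
pool, image and file names):
$ rbd snap create rbd/vm-100-disk-1@before-upgrade      # rbd snapshot - the slow path for us
$ rbd snap rm rbd/vm-100-disk-1@before-upgrade
$ qemu-img snapshot -c before-upgrade /mnt/cephfs/images/vm-100.qcow2   # qcow2 on cephfs - what we use
$ qemu-img snapshot -l /mnt/cephfs/images/vm-100.qcow2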
--
Lindsay
:( Use
case is hosting VM's.
thanks,
--
Lindsay Mathieson
On 3/10/2016 5:59 AM, Sascha Vogt wrote:
Any feedback, especially corrections is highly welcome!
http://maybebuggy.de/post/ceph-cache-tier/
Thanks, that clarified things a lot - much easier to follow than the
official docs :)
Do cache tiers help with writes as well?
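For anyone following along, the basic wiring is something like this
(hypothetical pool names; as I understand it, writeback mode absorbs writes
as well as reads):
$ ceph osd tier add rbd cache-pool
$ ceph osd tier cache-mode cache-pool writeback
$ ceph osd tier set-overlay rbd cache-pool
$ ceph osd pool set cache-pool hit_set_type bloom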
--
Lindsay Mathieson
for best perf/least
amount of cache eviction), rm journal, mv journal.new journal, start OSD
again.
Flush the journal after stopping the OSD !
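Roughly, the whole sequence would be something like this (hypothetical OSD
id, default filestore paths):
$ service ceph stop osd.2
$ ceph-osd -i 2 --flush-journal    # write out anything still pending in the old journal
$ rm /var/lib/ceph/osd/ceph-2/journal
$ mv /var/lib/ceph/osd/ceph-2/journal.new /var/lib/ceph/osd/ceph-2/journal
$ service ceph start osd.2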
--
Lindsay Mathieson
setup, also what SSD model are you using?
--
Lindsay Mathieson
and marks the pg invalid when it finds a mismatch, unlike btrfs/zfs
which auto repair the block if the mirror has a valid checksum.
--
Lindsay Mathieson
On 16 March 2016 at 04:34, Edward Wingate wrote:
> Given my resources,
> I'd still only run a single node with 3 OSDs and replica count of 2.
> I'd then have a VM mount the a Ceph RBD to serve Samba/NFS shares.
>
Fun & instructive to play with ceph that way, but not
On 5/03/2016 3:31 AM, Christoph Adomeit wrote:
I just updated our ceph-cluster to infernalis and now I want to enable the new
image features.
Semi related - is there a description of these new features somewhere?
--
Lindsay Mathieson
On 03/03/16 13:00, Gregory Farnum wrote:
Yes; it goes through the journal (or whatever the full storage stack
is on the OSD in question).
Thanks
--
Lindsay Mathieson
(if there aren't any
available OSD numbers less than the OSD being replaced, it will get
the same ID).
Thanks Robert, tried out that procedure last night, it worked the best.
--
Lindsay Mathieson
to just restore the default args?
thanks,
--
Lindsay Mathieson
or less and shuffle some things around the cluster.
Thanks Robert, that makes sense. I'll try it out tonight.
--
Lindsay Mathieson
thanks,
--
Lindsay Mathieson
Did you ever get a reply to this, Mathew, or try it yourself?
Interested in trying it as an alternative to the various SSD caching
options.
--
Lindsay Mathieson
As per the subject - when using tell to benchmark an OSD does it go
through the OSD's journal or just the osd disk itself?
--
Lindsay Mathieson
On 28/02/2016 10:23 AM, Shinobu Kinjo wrote:
Does the Ceph have ${subject}?
Well ceph 0.67 was codenamed "Dumpling", and we are well past that, so
yes I guess ceph has mostly been dedumplified. Which is a shame because
I love dumplings! Yum!
--
Lindsay
at all?
Presumably as long as the SSD read speed exceeds that of the spinners,
that is sufficient.
--
Lindsay Mathieson
On 10/02/16 15:43, Shinobu Kinjo wrote:
What is poll?
One suspects "Pool"
--
Lindsay Mathieson
On 28 November 2015 at 13:24, Brian Felton wrote:
> Each storage server contains 72 6TB SATA drives for Ceph (648 OSDs, ~3.5PB
> in total). Each disk is set up as its own ZFS zpool. Each OSD has a 10GB
> journal, located within the disk's zpool.
>
I doubt I have much to
I'm a bit confused re this setting with regard to xfs. Docs state:
"Enables writeahead journaling, default for xfs.", which implies to me that
it is on by default for xfs, but then after that it states:
"Default: false"
So is it on or off by default for xfs? And is there a way to tell?
Also -
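One way I know of to check the live value is the admin socket on an OSD node
(hypothetical OSD id; output abbreviated):
$ ceph daemon osd.0 config get filestore_journal_writeahead
{ "filestore_journal_writeahead": "false" }
though my understanding is that filestore may still force writeahead mode on
non-btrfs backends regardless of what the default shows.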
On 29 October 2015 at 19:24, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> # ceph tell osd.1 bench
> {
> "bytes_written": 1073741824,
> "blocksize": 4194304,
> "bytes_per_sec": 117403227.00
> }
>
> It might help you to figure out whether individual
On 29 October 2015 at 11:39, Lindsay Mathieson <lindsay.mathie...@gmail.com>
wrote:
> Is there a way to benchmark individual OSD's?
nb - Non-destructive :)
--
Lindsay
Is there a way to benchmark individual OSD's?
--
Lindsay
On 29 October 2015 at 10:29, Wah Peng wrote:
> $ ceph osd stat
> osdmap e18: 2 osds: 2 up, 2 in
>
> this is what it shows.
> does it mean I need to add up to 3 osds? I just use the default setup.
>
If you went with the defaults then your pool size will be 3, meaning
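(For anyone searching the archives later - checking is straightforward,
assuming the default rbd pool:)
$ ceph osd pool get rbd size
size: 3
With size 3 and only 2 osds, pgs will sit degraded until either a third osd
host is added or the pool size is reduced.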
On 22 May 2015 at 00:10, gjprabu wrote:
> Hi All,
>
> We are using rbd and map the same rbd image to the rbd device on
> two different client but i can't see the data until i umount and mount -a
> partition. Kindly share the solution for this issue.
>
Whats the
On 21 October 2015 at 16:01, Alexandre DERUMIER wrote:
> If it's a test server, maybe could you test it with proxmox 4.0 hypervisor
> https://www.proxmox.com
>
> I have made a lot of patch inside it to optimize rbd (qemu+jemalloc,
> iothreads,...)
>
Really gotta find time
On 21 October 2015 at 08:09, Andrei Mikhailovsky wrote:
> Same here, the upgrade went well. So far so good.
>
Ditto
--
Lindsay
I'm adding a node (4 * WD RED 3TB) to our small cluster to bring it up to
replica 3. Given how much headache it has been managing multiple osd's
(including disk failures) on my other nodes, I've decided to put all 4
disks on the new node in a ZFS RAID 10 config with SSD SLOG & Cache with
just one
On 19 September 2015 at 01:55, Ken Dreyer wrote:
> To avoid confusion here, I've deleted packages.ceph.com from DNS
> today, and the change will propagate soon.
>
> Please use download.ceph.com (it's the same IP address and server,
> 173.236.248.54)
>
I'm getting:
W: GPG
On 29 August 2015 at 00:53, Tony Nelson tnel...@starpoint.com wrote:
I recently built a 3 node Proxmox cluster for my office. I’d like to get
HA setup, and the Proxmox book recommends Ceph. I’ve been reading the
documentation and watching videos, and I think I have a grasp on the
basics,
On 15 June 2015 at 21:16, Lindsay Mathieson lindsay.mathie...@gmail.com
wrote:
P.S. Is there a way to speed up the rebalance? The cluster is unused
overnight, so I can thrash the IO.
I bumped max_backfills to 20 and recovery max active to 30 using
injectargs. Nothing seems to be breaking yet.
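For the archives, that was along the lines of (same values as above; revert
once recovery settles):
$ ceph tell osd.* injectargs '--osd-max-backfills 20 --osd-recovery-max-active 30'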
If I have two nodes with identical drive/osd setups
Drive 1 = 3TB
Drive 2 = 1TB
Drive 3 = 1TB
All with equal weights of (1)
I now decide to reweight Drive 1 to (3).
Would it be best to do one node at a time, or do both nodes simultaneously?
I would presume that all the data shuffling would be
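(The reweight itself is just, per drive - hypothetical osd id:)
$ ceph osd crush reweight osd.0 3.0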
Can't open at the moment, neither the website nor apt.
Trying from Brisbane, Australia.
--
Lindsay
On 13 April 2015 at 16:00, Christian Balzer ch...@gol.com wrote:
However the vast majority of people with production clusters will be
running something stable, mostly Firefly at this moment.
Sorry, 0.87 is giant.
BTW, you could also set osd_scrub_sleep to your cluster. ceph would
sleep
On 13 April 2015 at 11:02, Christian Balzer ch...@gol.com wrote:
Yeah, that's a request/question that comes up frequently.
And so far there's no option in Ceph to do that (AFAIK), it would be
really nice along with scheduling options (don't scrub during peak hours),
which have also been
On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote:
Hi, all
I have a two-node Ceph cluster, and both are monitor and osd. When
they're both up, osd are all up and in, everything is fine... almost:
Two things.
1 - You *really* need a min of three monitors. Ceph cannot form a quorum with
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
In an attempt to clarify what Ceph release is stable, LTS or development.
a new page was added to the documentation:
http://ceph.com/docs/master/releases/ It is a matrix where each cell is
On Thu, 12 Mar 2015 12:49:51 PM Vieresjoki, Juha wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual block devices, plus Ceph as block
storage supports
On Thu, 12 Mar 2015 09:27:43 AM Andrija Panic wrote:
ceph is RAW format - should be all fine...so VM will be using that RAW
format
If you use cephfs you can use qcow2.
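e.g. (hypothetical cephfs mount path):
$ qemu-img create -f qcow2 /mnt/cephfs/images/vm-101.qcow2 64G
and point the VM's disk at that file instead of an rbd volume.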
On 11 March 2015 at 06:53, Jesus Chavez (jeschave) jesch...@cisco.com
wrote:
KeyNotFoundError: Could not find keyring file:
/etc/ceph/ceph.client.admin.keyring on host aries
Well - have you verified the keyring is there on host aries and has the
right permissions?
--
Lindsay
On 27 February 2015 at 16:01, Alexandre DERUMIER aderum...@odiso.com
wrote:
I just upgraded my debian giant cluster,
1)on each node:
Just done that too, all looking good.
Thanks all.
--
Lindsay
Thanks for the notes Sage
On 27 February 2015 at 00:46, Sage Weil s...@newdream.net wrote:
We recommend that all v0.87 Giant users upgrade to this release.
When upgrading from 0.87 to 0.87.1 is there any special procedure that
needs to be followed? Or is it sufficient to upgrade each node and
The Ceph Debian Giant repo (http://ceph.com/debian-giant) seems to have had
an update from 0.87 to 0.87-1 on the 24-Feb.
Are there release notes anywhere on what changed etc? is there an upgrade
procedure?
thanks,
--
Lindsay
On Thu, 19 Feb 2015 05:56:46 PM Florian Haas wrote:
As it is, a simple perf top basically hosing the system wouldn't be
something that is generally considered expected.
Could the disk or controller be failing?
Similar setup works well for me - 2 vm hosts, 1 mon-only node. 6 osd's, 3 per
vm host. Using rbd and cephfs.
The more memory on your vm hosts, the better.
Lindsay Mathieson
On Tue, 3 Feb 2015 05:24:19 PM Daniel Schneller wrote:
Now I think on it, that might just be it - I seem to recall a similar
problem
with cifs mounts, despite having the _netdev option. I had to issue a
mount in /etc/network/if-up.d/
I'll test that and get back to you
We had
On 5 February 2015 at 07:22, Sage Weil s...@newdream.net wrote:
Is the snapshoting performed by ceph or by the fs? Can we switch to
xfs and have the same capabilities: instant snapshot + instant boot
from snapshot?
The feature set and capabilities are identical. The difference is that on
On Thu, 29 Jan 2015 03:05:41 PM Alexis KOALLA wrote:
Hi,
Today we encountered an issue in our Ceph cluster in LAB.
Issue: The servers that host the OSDs have rebooted and we have observed
that after the reboot there is no auto mount of OSD devices and we need
to manually performed the
You only have one osd node (ceph4). The default replication requirements
for your pools (size = 3) require osd's spread over three nodes, so the
data can be replicated on three different nodes. That will be why your pgs
are degraded.
You need to either add more osd nodes or reduce your size
that the redundancy can
be achieved with multiple OSDs (like multiple disks in RAID) in case you
don't have more nodes. Obviously the single point of failure would be the
box.
My current setting is:
osd_pool_default_size = 2
Thank you
Jiri
On 20/01/2015 13:13, Lindsay Mathieson wrote:
You only
On Sun, 18 Jan 2015 10:17:50 AM lidc...@redhat.com wrote:
No, if you use cache tiering, there is no need for an ssd journal.
Really? writes are as fast as with ssd journals?
On Wed, 14 Jan 2015 02:20:21 PM Rafał Michalak wrote:
Why data not replicating on mounting fs ?
I try with filesystems ext4 and xfs
The data is visible only when unmounted and mounted again
Because you are not using a cluster aware filesystem - the respective mounts
don't know when changes
On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
In Ceph world 0.72.2 is ancient and pretty old. If you want to play with
CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18
Does the kernel version matter if you are using ceph-fuse?
On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:
Hi Patrick, just a beginner myself, but have been through a similar process
recently :)
With these values above, I get a write performance of 90Mb/s and read
performance of 29Mb/s, inside the VM. (Windows 2008/R2 with virtio driver
and
On Tue, 6 Jan 2015 12:07:26 AM Sanders, Bill wrote:
14 and 18 happened to show up during that run, but its certainly not only
those OSD's. It seems to vary each run. Just from the runs I've done
today I've seen the following pairs of OSD's:
Could your osd nodes be paging? I know from
On Mon, 5 Jan 2015 01:15:03 PM Nick Fisk wrote:
I've been having good results with OMD (Check_MK + Nagios)
There is a plugin for Ceph as well that I made a small modification to, to
work with a wider range of cluster sizes
Thanks, I'll check it out.
Currently trying zabbix, seems more
On Mon, 5 Jan 2015 09:21:16 AM Nick Fisk wrote:
Lindsay did this for performance reasons so that the data is spread evenly
over the disks, I believe it has been accepted that the remaining 2tb on the
3tb disks will not be used.
Exactly, thanks Nick.
I only have a terabyte of data, and its not
On 5 January 2015 at 13:02, Christian Balzer ch...@gol.com wrote:
On Fri, 02 Jan 2015 06:38:49 +1000 Lindsay Mathieson wrote:
If you research the ML archives you will find that cache tiering currently
isn't just fraught with peril (there are bugs) but most importantly isn't
really that fast
Well I upgraded my cluster over the weekend :)
To each node I added:
- Intel SSD 530 for journals
- 2 * 1TB WD Blue
So two OSD Nodes had:
- Samsung 840 EVO SSD for Op. Sys.
- Intel 530 SSD for Journals (10GB Per OSD)
- 3TB WD Red
- 1 TB WD Blue
- 1 TB WD Blue
- Each disk weighted at 1.0
- Primary
Did you remove the mds.0 entry from ceph.conf?
On 5 January 2015 at 14:13, debian Only onlydeb...@gmail.com wrote:
i have tried 'ceph mds newfs 1 0 --yes-i-really-mean-it' but it did not fix
the problem
2014-12-30 17:42 GMT+07:00 Lindsay Mathieson lindsay.mathie...@gmail.com
:
On Tue, 30
I just added 4 OSD's to my 2 OSD cluster (2 Nodes, now have 3 OSD's per
node).
Given it's the weekend and it's not in use, I've set them all to weight 1, but
it looks like it's going to take a while to rebalance ... :)
Is having them all at weight 1 the fastest way to get back to health, or is
it causing
On Sat, 3 Jan 2015 10:40:30 AM Gregory Farnum wrote:
You might try temporarily increasing the backfill allowance params so that
the stuff can move around more quickly. Given the cluster is idle it's
definitely hitting those limits. ;) -Greg
Thanks Greg, but it finished overnight anyway :)
Expanding my tiny ceph setup from 2 OSD's to six, and two extra SSD's for
journals (Intel 530 120GB)
Yah, I know the 5300's would be much better
Assuming I use 10GB per OSD for journal and 5GB spare to improve the SSD
lifetime, that leaves 85GB spare per SSD.
Is it worthwhile setting up a
On Thu, 1 Jan 2015 08:27:33 AM Dyweni - Ceph-Users wrote:
I suspect a better configuration would be to leave your weights alone
and to
change your primary affinity so that the osd with the ssd is used first.
Interesting
You
might see a little improvement on the writes (since the spinners
On Wed, 31 Dec 2014 11:09:35 AM you wrote:
I believe that the upstart scripts will do this by default, they call out to
a bash script (I can't remember precisely what that is off the top of my
head) which then returns the crush rule, which will default to host=X osd=X
unless it's overridden
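My understanding (untested assumption on my part) is that this is driven by
the crush-location settings in ceph.conf, roughly:
[osd]
    osd crush update on start = true          # the default - re-place the OSD on start
[osd.4]
    osd crush location = root=ssd host=ceph-01-ssd    # hypothetical per-OSD override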
As mentioned before :) we have two osd nodes with one 3TB osd each. (replica
2)
About to add a smaller (1TB) faster drive to each node
From the docs, normal practice would be to weight it in accordance with size,
i.e 3 for the 3TB OSD, 1 for the 1TB OSD.
But I'd like to spread it 50/50 to
On Thu, 1 Jan 2015 02:59:05 PM Jiri Kanicky wrote:
I would expect that if I shut down one node, the system will keep
running. But when I tested it, I cannot even execute ceph status
command on the running node.
2 osd Nodes, 3 Mon nodes here, works perfectly for me.
How many monitors do you
On Thu, 1 Jan 2015 03:46:33 PM Jiri Kanicky wrote:
Hi,
I have:
- 2 monitors, one on each node
- 4 OSDs, two on each node
- 2 MDS, one on each node
POOMA U here, but I don't think you can reach quorum with one out of two
monitors, you need an odd number:
On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:
ceph 0.87 , Debian 7.5, anyone can help ?
2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com:
i want to move mds from one host to another.
how to do it ?
what did i do as below, but ceph health not ok, mds was not removed :
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote:
have a small setup with such a node (only 4 GB RAM, another 2 good
nodes for OSD and virtualization) - it works like a charm and CPU max is
always under 5% in the graphs. It only peaks when backups are dumped to
its 1TB disk using NFS.
I looked at the section for setting up different pools with different OSD's
(e.g SSD Pool):
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
And it seems to make the assumption that the ssd's and platters all live on
separate hosts.
Not the
On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote:
While writing this I noted that the relation / factor is exactly 5.5 times
wrong, so I *guess* that ceph treats all hosts with the same weight (even
though it looks differently to me in the osd tree and the crushmap)?
I believe If you
On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote:
As you can see, I have four hosts: ceph-01 ... ceph-04, but eight host
entries. This works great.
you have
- host ceph-01
- host ceph-01-ssd
Don't the host names have to match the real host names?
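For reference, the sort of layout Erik is describing would look roughly like
this in a decompiled crush map (hypothetical ids and weights):
host ceph-01 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 3.000
}
host ceph-01-ssd {
        id -6
        alg straw
        hash 0  # rjenkins1
        item osd.4 weight 1.000
}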
--
Lindsay
On Tue, 30 Dec 2014 10:38:14 PM Erik Logtenberg wrote:
No, bucket names in crush map are completely arbitrary. In fact, crush
doesn't really know what a host is. It is just a bucket, like rack
or datacenter. But they could be called cat and mouse just as well.
Hmmm, I tried that earlier and
Is there a command to do this without decompiling/editing/compiling the crush
map? It makes me nervous ...
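(Answering my own question for the archives: the runtime crush commands
should be able to do the same thing, e.g. with hypothetical bucket/osd names:)
$ ceph osd crush add-bucket ceph-01-ssd host
$ ceph osd crush move ceph-01-ssd root=ssd
$ ceph osd crush set osd.4 1.0 host=ceph-01-ssd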
--
Lindsay
On Tue, 30 Dec 2014 11:25:40 PM Erik Logtenberg wrote:
If you want to be able to start your osd's with /etc/init.d/ceph init
script, then you better make sure that /etc/ceph/ceph.conf does link
the osd's to the actual hostname
I tried again and it was ok for a short while, then *something*
On Mon, 29 Dec 2014 07:04:47 PM Mark Kirkwood wrote:
Thanks all, I'll definitely stick with nobarrier
Maybe you meant to say *barrier* ?
Oops :) Yah
--
Lindsay
On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
Is that a private cluster network just between Ceph storage nodes or is
this for all ceph traffic (including clients)?
The latter would probably be better; a private cluster network twice as
fast as the client one isn't particularly helpful
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
If you can't add another full host, your best bet would be to add another
2-3 disks to each server. This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance
On Mon, 29 Dec 2014 11:29:11 PM Christian Balzer wrote:
Reads will scale up (on a cluster basis, individual clients might
not benefit as much) linearly with each additional device (host/OSD).
I'm taking that to mean individual clients as a whole will be limited by the
speed of individual
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance perspective. So maybe look to see if you can get 500GB/1TB
drives cheap.
Is this from the docs still
On Tue, 30 Dec 2014 12:48:58 PM Christian Balzer wrote:
Looks like I misunderstood the purpose of the monitors, I presumed they
were just for monitoring node health. They do more than that?
They keep the maps and the pgmap in particular is of course very busy.
All that action is at:
On 30 December 2014 at 14:28, Christian Balzer ch...@gol.com wrote:
Use a good monitoring tool like atop to watch how busy things are.
And do that while running a normal rados bench like this from a client
node:
rados -p rbd bench 60 write -t 32
And again like this:
rados -p rbd bench 60
On Sat, 27 Dec 2014 09:41:19 PM you wrote:
I certainly wouldn't, I've seen utility power fail and the transfer
switch fail to transition to UPS strings. Had this happened to me with
nobarrier it would have been a very sad day.
I'd second that. In addition I've heard of
Appreciate the detailed reply Christian.
On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
I'm looking to improve the raw performance on my small setup (2 Compute
Nodes, 2 OSD's). Only used for hosting KVM images
On Sat, 27 Dec 2014 09:03:16 PM Mark Kirkwood wrote:
Yep. If you have 'em plugged into a RAID/HBA card with a battery backup
(that also disables their individual caches) then it is safe to use
nobarrier, otherwise data corruption will result if the server
experiences power loss.
Thanks
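(For anyone finding this later, the option in question is just an xfs mount
flag - a hypothetical fstab line below - and per Mark's point it is only sane
behind a battery/flash-backed write cache:)
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime,nobarrier  0  0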
On Sat, 27 Dec 2014 04:59:51 PM you wrote:
Power supply means bigger capex and less redundancy, as the emergency
procedure in case of power failure is less deterministic than with
controlled battery-backed cache.
Yes, the whole auto shut-down procedure is rather more complex and fragile
for
On Sat, 27 Dec 2014 06:02:32 PM you wrote:
Are you able to separate log with data in your setup and check the
difference?
Do you mean putting the OSD journal on a separate disk? I have the journals on
SSD partitions, which has helped a lot, previously I was getting 13 MB/s
It's not a good SSD
I'm looking to improve the raw performance on my small setup (2 Compute Nodes,
2 OSD's). Only used for hosting KVM images.
Raw read/write is roughly 200/35 MB/s. Starting 4+ VM's simultaneously pushes
iowaits over 30%, though the system keeps chugging along.
Budget is limited ... :(
I plan to
On Tue, 16 Dec 2014 11:50:37 AM Robert LeBlanc wrote:
COW into the snapshot (like VMware, Ceph, etc):
When a write is committed, the changes are committed to a diff file and the
base file is left untouched. This only has a single write penalty, if you
want to discard the child, it is fast as
I see a lot of people mount their xfs osd's with nobarrier for extra
performance, certainly it makes a huge difference to my small system.
However I don't do it as my understanding is this runs a risk of data
corruption in the event of power failure - is this the case, even with ceph?
side
Will this make its way into the debian repo eventually?
http://ceph.com/debian-giant
--
Lindsay