Morning all ..
I have a simple 3 node 2 osd cluster setup serving VM Images (proxmox). The
two OSD's are on the two VM hosts. Size is set to 2 for replication on both
OSD's. SSD journals.
- if the Ceph Client (VM guest over RBD) is accessing data that is stored on
the local OSD, will it
Are cache tiers reliable in firefly if you *aren't* using erasure pools?
Secondary to that - do they give a big boost with regard to read/write
performance for VM images? any real world feedback?
thanks,
--
Lindsay
On Thu, 20 Nov 2014 03:12:44 PM Mark Nelson wrote:
Personally I'd suggest a lot of testing first. Not sure if there are
any lingering stability issues, but as far as performance goes in
firefly you'll only likely see speed ups with very skewed hot/cold
distributions and potentially slow
Testing ceph on top of ZFS (zfsonlinux), kernel driver.
- Have created ZFS mount:
/var/lib/ceph/osd/ceph-0
- followed the instructions at:
http://ceph.com/docs/firefly/rados/operations/add-or-rm-osds/
Failing on step 4, "Initialize the OSD data directory":
ceph-osd -i 0 --mkfs --mkkey
Found it:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14154.html
On Tue, Nov 25, 2014 at 4:16 AM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
Testing ceph on top of ZFS (zfsonlinux), kernel driver.
- Have created ZFS mount:
/var/lib/ceph/osd/ceph-0
- followed the instructions at:
http://ceph.com/docs/firefly/rados/operations/add-or-rm
On Wed, 26 Nov 2014 05:37:43 AM Mark Nelson wrote:
I don't know if things have changed, but I don't think you want to
outright move the journal like that. Instead, something like:
ceph-osd -i N --flush-journal
# delete old journal
rm /var/lib/ceph/osd/ceph-N/journal
# link to the new journal device (path illustrative), then rebuild it
ln -s /dev/your-new-journal-partition /var/lib/ceph/osd/ceph-N/journal
ceph-osd -i N --mkjournal
On Tue, 25 Nov 2014 03:47:08 PM Eric Eastman wrote:
It has been almost a year since I last tried ZFS, but I had to add to the
ceph.conf file:
filestore zfs_snap = 1
journal aio = 0
journal dio = 0
Eric
Thanks Eric, I figured it out in the end, though I haven't tried
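For reference, those settings would presumably go in the [osd] section of
ceph.conf (section placement assumed, values as Eric gave them):
[osd]
filestore zfs_snap = 1
journal aio = 0
journal dio = 0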
On Fri, 28 Nov 2014 08:56:24 PM Ilya Dryomov wrote:
which you are supposed to change on a per-device basis via sysfs.
Is there a way to do this for windows VM's?
--
Lindsay
I have 2 OSD's on two nodes on top of zfs that I'd like to rebuild in a more
standard (xfs) setup.
Would the following be a non-destructive, if somewhat tedious, way of doing so?
Following the instructions from here:
According to the docs, Ceph block devices are thin provisioned. But how do I
list the actual size of vm images hosted on ceph?
I do something like:
rbd ls -l rbd
But that only lists the provisioned sizes, not the real usage.
thanks,
--
Lindsay
On Sun, 30 Nov 2014 11:37:06 AM Haomai Wang wrote:
Yeah, we still have no way to inspect the actual usage of an image.
But we already have an existing blueprint to implement it:
https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag%2C_object_map
Thanks, good to know.
I did find this:
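Presumably the oft-cited workaround: summing the allocated extents that
rbd diff reports (pool and image names illustrative):
rbd diff rbd/vm-100-disk-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'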
I had a problem with an osd starting - log seemed to show the journal was a
problem. When I tried to flush the journal I got the errors below.
I was in a hurry so attached a spare SSD partition as a new journal, which fixed
the problem and let it heal.
To fix it for the original ssd journal
You have to be a root user, either via login, su or sudo.
So no, you don't have to use sudo - just logon as root.
On 2 December 2014 at 00:05, Jiri Kanicky ji...@ganomi.com wrote:
Hi.
Do I have to install sudo in Debian Wheezy to deploy Ceph successfully? I
don't normally use sudo.
Thank you
Anyone know why a VM live restore would be excessively slow on Ceph? restoring
a small VM with 12GB disk/2GB Ram is taking 18 *minutes*. Larger VM's can be
over half an hour.
The same VM's on the same disks, but native or glusterfs, take less than 30
seconds.
VM's are KVM on Proxmox.
Whereabouts to go with this?
ceph -s
    cluster f67ef302-5c31-425d-b0fe-cdc0738f7a62
    health HEALTH_WARN 256 pgs degraded; 256 pgs stuck degraded; 256 pgs stuck unclean; 256 pgs stuck undersized; 256 pgs undersized; recovery 10418/447808 objects degraded (2.326%)
    monmap e7: 3 mons at
Starting a new thread as I can't see my own to reply to.
Solved the stuck pg's by deleting the cephfs and the pools I created for it.
Health returned to ok instantly.
Side note: I had to guess the command ceph fs rm as I could not find docs on
it anywhere, and just doing ceph fs gives:
Invalid command
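For reference, the syntax appears to be the following (filesystem name
illustrative; the MDS has to be stopped first):
ceph fs rm cephfs --yes-i-really-mean-it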
Test Msg, at request of list owner
--
Lindsay
Last one, sorry
--
Lindsay
I'm finding snapshot restores to be very slow. With a small vm, I can
take a snapshot within seconds, but restores can take over 15
minutes, sometimes nearly an hour, depending on how I have tweaked
ceph.
The same vm as a QCOW2 image on NFS or native disk can be restored in
under 30 seconds.
Is
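For context, the operations in question look like this (pool, image and
snapshot names illustrative):
rbd snap create rbd/vm-100-disk-1@before-test    # near-instant, copy-on-write
rbd snap rollback rbd/vm-100-disk-1@before-test  # slow: time grows with image size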
On Tue, 16 Dec 2014 11:26:35 AM you wrote:
Is this normal? is ceph just really slow at restoring rbd snapshots,
or have I really borked my setup?
I'm not looking for a fix or a tuning suggestions, just feedback on whether
this is normal
--
Lindsay
On Tue, 16 Dec 2014 07:57:19 AM Leen de Braal wrote:
If you are trying to see if your mails come through, don't check on the
list. You have a gmail account, gmail removes mails that you have sent
yourself.
Not the case, I am on a dozen other mailman lists via gmail, all of them show
my posts.
On 17 December 2014 at 04:50, Robert LeBlanc rob...@leblancnet.us wrote:
There are really only two ways to do snapshots that I know of and they have
trade-offs:
COW into the snapshot (like VMware, Ceph, etc):
When a write is committed, the changes are committed to a diff file and the
base
On 17 December 2014 at 11:50, Robert LeBlanc rob...@leblancnet.us wrote:
On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com wrote:
On 17 December 2014 at 04:50, Robert LeBlanc rob...@leblancnet.us wrote:
There are really only two ways to do snapshots that I know
Both fuse and kernel module fail to mount,
The mons/mds are on two other nodes, so they are available when this node is
booting.
They can be mounted manually after boot.
my fstab:
id=admin /mnt/cephfs fuse.ceph defaults,nonempty,_netdev 0 0
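For comparison, a kernel-client fstab line would look something like this
(monitor address and secret file path assumed):
192.168.1.10:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0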
On Wed, 17 Dec 2014 02:02:52 PM John Spray wrote:
Can you tell us more about how they fail? Error messages on console,
anything in syslog?
Not quite sure what to look for, but I did a quick scan for ceph through
dmesg and syslog; nothing stood out.
In the absence of other clues, you might
I've been experimenting with CephFS for running KVM images (proxmox).
cephfs fuse version - 0.87
cephfs kernel module - kernel version 3.10
Part of my testing involves running a Windows 7 VM up and running
CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
both the fuse and
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
My m550s
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Any particular filesystem? ext4? xfs? or doesn't matter?
--
Lindsay
On Thu, 18 Dec 2014 10:05:20 PM Mark Kirkwood wrote:
The effect of this is *highly* dependent on the SSD make/model. My m550s
work vastly better if the journal is a file on a filesystem as opposed
to a partition.
Obviously the Intel S3700/S3500 are a better choice - but the OP has
already
On Thu, 18 Dec 2014 08:41:21 PM Udo Lembke wrote:
have you tried the different cache-options (no cache, write through,
...) which proxmox offers for the drive?
I tried with writeback and it didn't corrupt.
--
Lindsay
On Thu, 18 Dec 2014 11:23:42 AM Gregory Farnum wrote:
Do you have any information about *how* the drive is corrupted; what
part Win7 is unhappy with?
Failure to find the boot sector, I think. I'll run it again and take a
screenshot.
I don't know how Proxmox configures it, but
I assume
On 19 December 2014 at 11:14, Christian Balzer ch...@gol.com wrote:
Hello,
On Thu, 18 Dec 2014 16:12:09 -0800 Craig Lewis wrote:
Firstly I'd like to confirm what Craig said about small clusters.
I just changed my four storage node test cluster from 1 OSD per node to 4
and it can now
Will this make its way into the debian repo eventually?
http://ceph.com/debian-giant
--
Lindsay
On Fri, 19 Dec 2014 03:27:53 PM you wrote:
On 19/12/2014 15:12, Lindsay Mathieson wrote:
Will this make its way into the debian repo eventually?
This is a development release that is not meant to be published in
distributions such as Debian, CentOS etc.
Ah, thanks.
Its not clear from
On Fri, 19 Dec 2014 03:57:42 PM you wrote:
The stable release have real names, that is what makes them different from
development releases (dumpling, emperor, firefly, giant, hammer).
Ah, so we had two named firefly releases (Firefly 0.86 & Firefly 0.87) - they
were both production and we have
On Tue, 16 Dec 2014 11:50:37 AM Robert LeBlanc wrote:
COW into the snapshot (like VMware, Ceph, etc):
When a write is committed, the changes are committed to a diff file and the
base file is left untouched. This only has a single write penalty, if you
want to discard the child, it is fast as
I see a lot of people mount their xfs osd's with nobarrier for extra
performance, certainly it makes a huge difference to my small system.
However I don't do it as my understanding is this runs a risk of data
corruption in the event of power failure - this is the case, even with ceph?
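For reference, the sort of mount line being discussed (device and mount
point illustrative):
/dev/sdb1 /var/lib/ceph/osd/ceph-0 xfs rw,noatime,inode64,nobarrier 0 0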
side
On Sat, 27 Dec 2014 09:03:16 PM Mark Kirkwood wrote:
Yep. If you have 'em plugged into a RAID/HBA card with a battery backup
(that also disables their individual caches) then it is safe to use
nobarrier, otherwise data corruption will result if the server
experiences power loss.
Thanks
On Sat, 27 Dec 2014 04:59:51 PM you wrote:
Power supply means bigger capex and less redundancy, as the emergency
procedure in case of power failure is less deterministic than with
controlled battery-backed cache.
Yes, the whole auto shut-down procedure is rather more complex and fragile
for
On Sat, 27 Dec 2014 06:02:32 PM you wrote:
Are you able to separate log with data in your setup and check the
difference?
Do you mean putting the OSD journal on a separate disk? I have the journals on
SSD partitions, which has helped a lot, previously I was getting 13 MB/s
It's not a good SSD
I'm looking to improve the raw performance on my small setup (2 Compute Nodes,
2 OSD's). Only used for hosting KVM images.
Raw read/write is roughly 200/35 MB/s. Starting 4+ VM's simultaneously pushes
iowaits over 30%, though the system keeps chugging along.
Budget is limited ... :(
I plan to
On Sat, 27 Dec 2014 09:41:19 PM you wrote:
I certainly wouldn't, I've seen utility power fail and the transfer
switch fail to transition to UPS strings. Had this happened to me with
nobarrier it would have been a very sad day.
I'd second that. In addition I've heard of
Appreciate the detailed reply Christian.
On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
I'm looking to improve the raw performance on my small setup (2 Compute
Nodes, 2 OSD's). Only used for hosting KVM images
On Mon, 29 Dec 2014 07:04:47 PM Mark Kirkwood wrote:
Thanks all, I'll definitely stick with nobarrier
Maybe you meant to say *barrier* ?
Oops :) Yah
--
Lindsay
On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
Is that a private cluster network just between Ceph storage nodes or is
this for all ceph traffic (including clients)?
The latter would probably be better; a private cluster network twice as
fast as the client one isn't particularly helpful
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
If you can't add another full host, your best bet would be to add another
2-3 disks to each server. This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance
On Mon, 29 Dec 2014 11:29:11 PM Christian Balzer wrote:
Reads will scale up (on a cluster basis, individual clients might
not benefit as much) linearly with each additional device (host/OSD).
I'm taking that to mean individual clients as a whole will be limited by the
speed of individual
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance perspective. So maybe look to see if you can get 500GB/1TB
drives cheap.
Is this from the docs still
On Tue, 30 Dec 2014 12:48:58 PM Christian Balzer wrote:
Looks like I misunderstood the purpose of the monitors, I presumed they
were just for monitoring node health. They do more than that?
They keep the maps and the pgmap in particular is of course very busy.
All that action is at:
On 30 December 2014 at 14:28, Christian Balzer ch...@gol.com wrote:
Use a good monitoring tool like atop to watch how busy things are.
And do that while running a normal rados bench like this from a client
node:
rados -p rbd bench 60 write -t 32
And again like this:
rados -p rbd bench 60
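Presumably the matching read benchmark; as a sketch, the usual pair is
(pool name as above):
rados -p rbd bench 60 write -t 32 --no-cleanup   # keep the objects for the read test
rados -p rbd bench 60 seq -t 32                  # sequential reads over those objects
(the benchmark objects are left behind and need to be cleaned up afterwards)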
On Tue, 30 Dec 2014 03:11:25 PM debian Only wrote:
ceph 0.87, Debian 7.5 - can anyone help?
2014-12-29 20:03 GMT+07:00 debian Only onlydeb...@gmail.com:
I want to move the mds from one host to another.
How do I do it?
Here is what I did, but ceph health is not ok and the mds was not removed:
On Tue, 30 Dec 2014 11:26:08 AM Eneko Lacunza wrote:
I have a small setup with such a node (only 4 GB RAM, another 2 good
nodes for OSD and virtualization) - it works like a charm and CPU max is
always under 5% in the graphs. It only peaks when backups are dumped to
its 1TB disk using NFS.
I looked at the section for setting up different pools with different OSD's
(e.g SSD Pool):
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
And it seems to make the assumption that the ssd's and platters all live on
separate hosts.
Not the
On Tue, 30 Dec 2014 05:07:31 PM Nico Schottelius wrote:
While writing this I noted that the relation / factor is exactly 5.5 times
wrong, so I *guess* that ceph treats all hosts with the same weight (even
though it looks differently to me in the osd tree and the crushmap)?
I believe if you
On Tue, 30 Dec 2014 04:18:07 PM Erik Logtenberg wrote:
As you can see, I have four hosts: ceph-01 ... ceph-04, but eight host
entries. This works great.
You have:
- host ceph-01
- host ceph-01-ssd
Don't the host names have to match the real host names?
--
Lindsay
On Tue, 30 Dec 2014 10:38:14 PM Erik Logtenberg wrote:
No, bucket names in crush map are completely arbitrary. In fact, crush
doesn't really know what a host is. It is just a bucket, like rack
or datacenter. But they could be called cat and mouse just as well.
Hmmm, I tried that earlier and
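To illustrate Erik's scheme, a decompiled crush map would carry two host
buckets per physical host, something like (ids and weights illustrative):
host ceph-01 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 3.000   # the spinner
}
host ceph-01-ssd {
        id -5
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 1.000   # the ssd
}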
Is there a command to do this without decompiling/editing/compiling the crush
set? Makes me nervous ...
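There is; the map can be edited at runtime with commands along these lines
(bucket and osd names illustrative):
ceph osd crush add-bucket ceph-01-ssd host      # create the extra host bucket
ceph osd crush move ceph-01-ssd root=default    # place it in the hierarchy
ceph osd crush set osd.1 1.0 host=ceph-01-ssd   # move the ssd osd into it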
--
Lindsay
On Tue, 30 Dec 2014 11:25:40 PM Erik Logtenberg wrote:
If you want to be able to start your osd's with /etc/init.d/ceph init
script, then you better make sure that /etc/ceph/ceph.conf does link
the osd's to the actual hostname
I tried again and it was ok for a short while, then *something*
On Wed, 31 Dec 2014 11:09:35 AM you wrote:
I believe that the upstart scripts will do this by default, they call out to
a bash script (I can't remember precisely what that is off the top of my
head) which then returns the crush rule, which will default to host=X osd=X
unless it's overridden
As mentioned before :) we have two osd nodes with one 3TB osd each. (replica
2)
About to add a smaller (1TB) faster drive to each node
From the docs, normal practice would be to weight it in accordance with size,
i.e. 3 for the 3TB OSD, 1 for the 1TB OSD.
But I'd like to spread it 50/50 to
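For reference, the reweighting itself would be one command per osd (ids and
weights illustrative):
ceph osd crush reweight osd.0 1.0   # the 3TB spinner, weighted down from 3
ceph osd crush reweight osd.1 1.0   # the 1TB drive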
On Thu, 1 Jan 2015 02:59:05 PM Jiri Kanicky wrote:
I would expect that if I shut down one node, the system will keep
running. But when I tested it, I cannot even execute ceph status
command on the running node.
2 osd Nodes, 3 Mon nodes here, works perfectly for me.
How many monitors do you
On Thu, 1 Jan 2015 03:46:33 PM Jiri Kanicky wrote:
Hi,
I have:
- 2 monitors, one on each node
- 4 OSDs, two on each node
- 2 MDS, one on each node
POOMA U here, but I don't think you can reach quorum with one out of two
monitors; you need an odd number:
Expanding my tiny ceph setup from 2 OSD's to six, and two extra SSD's for
journals (IBM 530 120GB)
Yah, I know the 5300's would be much better
Assuming I use 10GB per OSD for journal and 5GB spare to improve the SSD
lifetime, that leaves 85GB spare per SSD.
Is it worthwhile setting up a
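The matching ceph.conf setting for a 10GB journal would presumably be:
[osd]
osd journal size = 10240   # in MB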
On Thu, 1 Jan 2015 08:27:33 AM Dyweni - Ceph-Users wrote:
I suspect a better configuration would be to leave your weights alone
and to
change your primary affinity so that the osd with the ssd is used first.
Interesting
You might see a little improvement on the writes (since the spinners
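As a sketch of that suggestion (osd ids and values illustrative; the mon
option has to be enabled first):
ceph tell mon.* injectargs '--mon_osd_allow_primary_affinity=true'
ceph osd primary-affinity osd.1 1.0   # ssd-backed osd: preferred as primary
ceph osd primary-affinity osd.0 0.5   # spinner: less likely to be primary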
I just added 4 OSD's to my 2 OSD cluster (2 Nodes, now have 3 OSD's per
node).
Given it's the weekend and not in use, I've set them all to weight 1, but
it looks like it's going to take a while to rebalance ... :)
Is having them all at weight 1 the fastest way to get back to health, or is
it causing
On Sat, 3 Jan 2015 10:40:30 AM Gregory Farnum wrote:
You might try temporarily increasing the backfill allowance params so that
the stuff can move around more quickly. Given the cluster is idle it's
definitely hitting those limits. ;) -Greg
Thanks Greg, but it finished overnight anyway :)
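The knobs Greg refers to are presumably along these lines (values
illustrative):
ceph tell osd.* injectargs '--osd_max_backfills 5 --osd_recovery_max_active 10'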
On 5 February 2015 at 07:22, Sage Weil s...@newdream.net wrote:
Is the snapshotting performed by ceph or by the fs? Can we switch to
xfs and have the same capabilities: instant snapshot + instant boot
from snapshot?
The feature set and capabilities are identical. The difference is that on
On Tue, 3 Feb 2015 05:24:19 PM Daniel Schneller wrote:
Now I think on it, that might just be it - I seem to recall a similar
problem
with cifs mounts, despite having the _netdev option. I had to issue a
mount in /etc/network/if-up.d/
I'll test that and get back to you
We had
On Wed, 14 Jan 2015 02:20:21 PM Rafał Michalak wrote:
Why is the data not replicating when mounting the fs?
I tried with the ext4 and xfs filesystems.
The data is visible only when unmounted and mounted again.
Because you are not using a cluster-aware filesystem - the respective mounts
don't know when changes
On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
In Ceph world 0.72.2 is ancient and pretty old. If you want to play with
CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18
Does the kernel version matter if you are using ceph-fuse?
that the redundancy can
be achieved with multiple OSDs (like multiple disks in RAID) in case you
don't have more nodes. Obviously the single point of failure would be the
box.
My current setting is:
osd_pool_default_size = 2
Thank you
Jiri
On 20/01/2015 13:13, Lindsay Mathieson wrote:
You only
You only have one osd node (ceph4). The default replication requirements
for your pools (size = 3) require osd's spread over three nodes, so the
data can be replicate on three different nodes. That will be why your pgs
are degraded.
You need to either add more osd nodes or reduce your size
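Reducing the replication requirement is one command per pool (pool name
illustrative):
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1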
On Sun, 18 Jan 2015 10:17:50 AM lidc...@redhat.com wrote:
No, if you used cache tiering, there is no need to use an ssd journal as well.
Really? writes are as fast as with ssd journals?
On Thu, 29 Jan 2015 03:05:41 PM Alexis KOALLA wrote:
Hi,
Today we encountered an issue in our Ceph cluster in LAB.
Issue: The servers that host the OSDs have rebooted, and we have observed
that after the reboot there is no auto mount of the OSD devices; we need
to manually perform the
On Mon, 5 Jan 2015 01:15:03 PM Nick Fisk wrote:
I've been having good results with OMD (Check_MK + Nagios)
There is a plugin for Ceph as well that I made a small modification to, to
work with a wider range of cluster sizes
Thanks, I'll check it out.
Currently trying zabbix, seems more
On Mon, 5 Jan 2015 09:21:16 AM Nick Fisk wrote:
Lindsay did this for performance reasons so that the data is spread evenly
over the disks, I believe it has been accepted that the remaining 2tb on the
3tb disks will not be used.
Exactly, thanks Nick.
I only have a terabyte of data, and it's not
On Tue, 6 Jan 2015 12:07:26 AM Sanders, Bill wrote:
14 and 18 happened to show up during that run, but it's certainly not only
those OSD's. It seems to vary each run. Just from the runs I've done
today I've seen the following pairs of OSD's:
Could your osd nodes be paging? I know from
On Thu, 8 Jan 2015 05:36:43 PM Patrik Plank wrote:
Hi Patrick, just a beginner myself, but have been through a similar process
recently :)
With these values above, I get a write performance of 90Mb/s and read
performance of 29Mb/s, inside the VM. (Windows 2008/R2 with virtio driver
and
Similar setup works well for me - 2 vm hosts, 1 mon-only node, 6 osd's, 3 per
vm host. Using rbd and cephfs.
The more memory on your vm hosts, the better.
Lindsay Mathieson
On 5 January 2015 at 13:02, Christian Balzer ch...@gol.com wrote:
On Fri, 02 Jan 2015 06:38:49 +1000 Lindsay Mathieson wrote:
If you research the ML archives you will find that cache tiering currently
isn't just fraught with peril (there are bugs) but most importantly isn't
really that fast
Well I upgraded my cluster over the weekend :)
To each node I added:
- Intel SSD 530 for journals
- 2 * 1TB WD Blue
So each of the two OSD nodes had:
- Samsung 840 EVO SSD for Op. Sys.
- Intel 530 SSD for Journals (10GB Per OSD)
- 3TB WD Red
- 1 TB WD Blue
- 1 TB WD Blue
- Each disk weighted at 1.0
- Primary
Did you remove the mds.0 entry from ceph.conf?
On 5 January 2015 at 14:13, debian Only onlydeb...@gmail.com wrote:
I have tried 'ceph mds newfs 1 0 --yes-i-really-mean-it' but it did not fix
the problem.
2014-12-30 17:42 GMT+07:00 Lindsay Mathieson lindsay.mathie...@gmail.com:
On Tue, 30
On Thu, 19 Feb 2015 05:56:46 PM Florian Haas wrote:
As it is, a simple perf top basically hosing the system wouldn't be
something that is generally considered expected.
Could the disk or controller be failing?
Thanks, that's quite helpful.
On 16 March 2015 at 08:29, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
In an attempt to clarify what Ceph release is stable, LTS or development,
a new page was added to the documentation:
http://ceph.com/docs/master/releases/ It is a matrix where each cell is
On Thu, 12 Mar 2015 12:49:51 PM Vieresjoki, Juha wrote:
But there's really no point, block storage is the only viable option for
virtual machines performance-wise. With images you're dealing with multiple
filesystem layers on top of the actual block devices, plus Ceph as block
storage supports
On Thu, 12 Mar 2015 09:27:43 AM Andrija Panic wrote:
ceph is RAW format - should be all fine...so VM will be using that RAW
format
If you use cephfs you can use qcow2.
On 11 March 2015 at 06:53, Jesus Chavez (jeschave) jesch...@cisco.com
wrote:
KeyNotFoundError: Could not find keyring file:
/etc/ceph/ceph.client.admin.keyring on host aries
Well - have you verified the keyring is there on host aries and has the
right permissions?
--
Lindsay
On Tue, 31 Mar 2015 02:42:27 AM Kai KH Huang wrote:
Hi, all
I have a two-node Ceph cluster, and both are monitor and osd. When
they're both up, osd are all up and in, everything is fine... almost:
Two things.
1 - You *really* need a min of three monitors. Ceph cannot form a quorum with
On 27 February 2015 at 16:01, Alexandre DERUMIER aderum...@odiso.com
wrote:
I just upgraded my debian giant cluster,
1) on each node:
Just done that too, all looking good.
Thanks all.
--
Lindsay
The Ceph Debian Giant repo (http://ceph.com/debian-giant) seems to have had
an update from 0.87 to 0.87-1 on the 24-Feb.
Are there release notes anywhere on what changed etc.? Is there an upgrade
procedure?
thanks,
--
Lindsay
Thanks for the notes Sage
On 27 February 2015 at 00:46, Sage Weil s...@newdream.net wrote:
We recommend that all v0.87 Giant users upgrade to this release.
When upgrading from 0.87 to 0.87.1 is there any special procedure that
needs to be followed? Or is it sufficient to upgrade each node and
On 13 April 2015 at 16:00, Christian Balzer ch...@gol.com wrote:
However the vast majority of people with production clusters will be
running something stable, mostly Firefly at this moment.
Sorry, 0.87 is giant.
BTW, you could also set osd_scrub_sleep on your cluster; ceph would
sleep
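Presumably via injectargs, something like (value illustrative):
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'   # pause between scrub chunks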
On 13 April 2015 at 11:02, Christian Balzer ch...@gol.com wrote:
Yeah, that's a request/question that comes up frequently.
And so far there's no option in Ceph to do that (AFAIK), it would be
really nice along with scheduling options (don't scrub during peak hours),
which have also been
Can't open it at the moment, neither the website nor apt.
Trying from Brisbane, Australia.
--
Lindsay