Hi Nick,
On 06/12/2014 17:15, Nick Fisk wrote:
Hi Loic,
A very interesting reply, and your description of the promotion behaviour makes perfect sense. I can see how a larger number of data chunks could impact latency, so it would certainly impact an OLTP-type workload where low latency is
You will have to consider that in the real world, whoever built the cluster might not document the dangerous option to make support staff or a successor aware of it. Thus any experimental feature considered not safe for production should be included in a warning message in 'ceph health' and in the logs, either log it
Hello Sage -
Just wondering if you are the module owner for radosgw-agent? If so, can you please help me locate the latest source bits for Debian Wheezy?
Thanks, Lakshmi.
On Wednesday, December 3, 2014 8:42 PM, lakshmi k s lux...@yahoo.com
wrote:
Hello - Please help me here.
Hi,
I had a question on the topic of building ceph v0.88 from source.
I am using a custom Linux system that does not have a package manager and aims to compile all software from source, where possible.
I have built several of the software dependencies and managed to build Ceph semi-successfully.
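For reference, the usual autotools sequence for Ceph releases of that era looks roughly like the sketch below; this is only a sketch, and the prefix and any extra configure flags are assumptions to adapt to your system:

    ./autogen.sh
    ./configure --prefix=/usr/local
    make -j4
    make install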
On Sun, Dec 7, 2014 at 1:51 PM, René Gallati c...@gallati.net wrote:
Hello Antonio,
I use aptly to manage my repositories and mix and match (and snapshot / pin) specific versions and non-standard packages, but as far as I know, the kernel from
I didn't know aptly, thank you for mentioning it.
On Fri, Dec 5, 2014 at 12:46 PM, Mark Nelson mark.nel...@inktank.com wrote:
I'm in favor of the 'allow experimental features' option, but would instead call it: ALLOW UNRECOVERABLE DATA CORRUPTING FEATURES, which makes things a little more explicit. With great power comes great responsibility.
+1.
For
On Sat, Dec 6, 2014 at 10:40 AM, Lorieri lori...@gmail.com wrote:
Hi,
If I have a situation where each node in a cluster writes its own files in CephFS, is it safe to use multiple MDSes?
I mean, is the problem with using multiple MDSes related to nodes writing the same files?
It's not a problem.
I've been thinking for a while that we need another, more general command than 'ceph health' to inform you about your cluster. I.e. I personally don't like having min/max PG warnings in 'ceph health' (they can be independently controlled by ceph.conf options, but that kind of approach
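For context, these are the ceph.conf options I believe control those warnings; a sketch only, and the values shown are illustrative rather than the defaults:

    [mon]
        mon pg warn min per osd = 20     # warn when PGs per OSD fall below this
        mon pg warn max per osd = 300    # warn when PGs per OSD exceed this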
Is it possible to move CephFS on a replicated/mirrored pool to using
erasure coding? Assuming that it's not, is that on the roadmap for any
future release?
I have 10T in CephFS now, and I'm trying to decide if I'd be better off blowing it away and recreating CephFS with an SSD cache tier over an
Hi Patrick,
Once compiled from source, ceph -s should work. Are you running it from the source tree? I would check whether you have Ceph libraries (librados?) installed from a package. Having a mixture of Ceph built from source and another version installed from packages can lead to confusion.
Cheers
On
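A quick way to spot such a mixture, as a sketch for a Debian-based system (package names are assumptions):

    dpkg -l | grep -E 'ceph|librados|librbd'   # any packaged Ceph bits installed?
    ldconfig -p | grep librados                # which librados the dynamic linker will pick up
    which -a ceph                              # packaged /usr/bin/ceph vs. the source tree copy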
Hi Scott,
You should be able to add an erasure-coded pool to an existing replicated pool. I did not try it myself though :-) Someone with real-world experience could probably comment usefully on the best strategy.
Cheers
On 08/12/2014 16:18, Scott Laird wrote:
Is it possible to move CephFS on
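For anyone wanting to experiment, a minimal sketch with the Firefly/Giant-era CLI; pool names, PG counts and the k/m profile are assumptions, and existing data would still have to be copied into the new pool:

    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 256 256 erasure ecprofile
    ceph osd pool create cachepool 128 128
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool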
Hello,
I'm running into a problem with a new cluster where I set up two MDS hosts and created the pools (cephfs_data, cephfs_metadata). It worked for a couple of days, but it looks like I did something that caused it to fail, and I'm now trying to just rebuild it.
I've tried to delete the
I forgot to mention I'm using v0.87 Giant.
-Glen
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Glen
Aidukas
Sent: Monday, December 08, 2014 11:35 AM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] Trying to rebuild cephfs and mds's
Hello,
I'm running into a
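In case it helps, a hedged sketch of tearing the filesystem down and recreating it with the Giant-era CLI; the filesystem name and PG counts are assumptions, and the MDS daemons should be stopped first:

    ceph mds fail 0
    ceph fs rm cephfs --yes-i-really-mean-it
    ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
    ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
    ceph osd pool create cephfs_metadata 64
    ceph osd pool create cephfs_data 256
    ceph fs new cephfs cephfs_metadata cephfs_data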
Haven't tried other iSCSI implementations (yet).
LIO/targetcli makes it very easy to implement/integrate/wrap/automate around, so I'm really trying to get this right.
A PCI-E SSD cache tier in front of a spindle-backed erasure-coded pool, with 10 Gbps across the board, yields results slightly better
I've just created issue #10271
Best,
George
On Fri, 5 Dec 2014 09:30:45 -0800, Yehuda Sadeh wrote:
It looks like a bug. Can you open an issue on tracker.ceph.com,
describing what you see?
Thanks,
Yehuda
On Fri, Dec 5, 2014 at 7:17 AM, Georgios Dimitrakakis
gior...@acmac.uoc.gr wrote:
It
Do I need to update the ceph.conf to support multiple MDS servers?
—Jiten
On Nov 24, 2014, at 6:56 AM, Gregory Farnum g...@gregs42.com wrote:
On Sun, Nov 23, 2014 at 10:36 PM, JIten Shah jshah2...@me.com wrote:
Hi Greg,
I haven't set up anything in ceph.conf as mds.cephmon002 nor in any
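For what it's worth, the old-style ceph.conf sections look like the sketch below; the hostname is taken from this thread, and as far as I understand a second daemon only becomes active if max_mds is raised, which was still considered experimental at the time:

    [mds.cephmon002]
        host = cephmon002

    # to actually run two active MDS daemons (experimental back then):
    #   ceph mds set_max_mds 2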
Hi David,
This is a long shot, but have you checked the max queue depth on the iSCSI side? I've got a feeling that LIO might be set to 32 by default. This would definitely have an effect at the high queue depths you are testing with.
On 8 Dec 2014 16:53, David Moreau Simard
I've just stood up a Ceph cluster for some experimentation. Unfortunately,
we're having some performance and stability problems I'm trying to pin down.
More unfortunately, I'm new to Ceph, so I'm not sure where to start looking for
the problem.
Under activity, we'll get monitors going into
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512
The hang on the ceph command exiting is probably issue #8797.
On 12/08/2014 07:47 AM, Loic Dachary wrote:
Hi Patrick,
Once compiled from source, ceph -s should work. Are you running it from the source tree? I would check whether you have Ceph libraries (librados
Hi
We're going to integrate at work the ViPER software, which is OpenStack Cinder compatible.
Are there any people here who have integrated Ceph behind EMC ViPER software?
Thanks
Sent from my iPhone
Hi
We're working with EMC as a provider in our company.
EMC is teasing us hard to get ScaleIO into our environment.
I'm trying to integrate Ceph with our RedHat GAM.
Do you have any sheet or proof of concept comparing Ceph and ScaleIO?
Thanks
Sent from my iPhone
EMC is also trying to make the sale to us. They claim it can be done. I would be interested if someone else can try it first.
Steve
On Mon, 8 Dec 2014, Florent MONTHEL wrote:
Hi
We're going to integrate at work the ViPER software, which is OpenStack Cinder compatible.
Are there any people here who
Hi Florent,
I'm sure Red Hat will provide you with all you need to compare Ceph with ScaleIO on technical merits. IMHO the most significant advantage is not technical: it's free as in freedom. If I were to choose a technology to store data for the next decades, I would make sure that the chosen
Hi,
After using the Ceph cluster for months without any problems - thank you for that great piece of software - I noticed that one OSD crashed with the following output.
What are the recommendations - just upgrade, or is this not a bug in 0.67?
-1 2014-11-08 04:24:51.127924 7f0d92897700 5
At a guess, this is something that has long since been fixed in
dumpling, you probably want to upgrade to the current dumpling point
release.
-Sam
On Mon, Dec 8, 2014 at 2:40 PM, Philipp von Strobl-Albeg
phil...@pilarkto.net wrote:
Hi,
after using the ceph-cluster for months without any
On Mon, 8 Dec 2014, Philipp von Strobl-Albeg wrote:
Hi,
After using the Ceph cluster for months without any problems - thank you for that great piece of software - I noticed that one OSD crashed with the following output.
What are the recommendations - just upgrade, or is this not a bug in 0.67
Thank you very much.
I had already planned this step - so good to know ;-)
Do you recommend Firefly or Giant, given that I don't need radosgw?
Best
Philipp
Am 08.12.2014 um 23:42 schrieb Samuel Just:
At a guess, this is something that has long since been fixed in
dumpling, you probably want to upgrade
To start with, dumpling itself is up to v0.67.11. You are running
v0.67.0. There have been many bug fixes just in dumpling in that
time. You should start with upgrading to v0.67.11 even if you plan on
upgrading to firefly or giant later (there were bug fixes in dumpling
for bugs which only
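The usual rolling-upgrade order, as a sketch: install the new packages first, then restart one daemon type at a time, checking 'ceph -s' between steps. The exact service commands are an assumption and depend on your distro and init system:

    # 1. monitors, one host at a time
    service ceph restart mon
    # 2. OSDs, one host at a time, waiting for HEALTH_OK in between
    service ceph restart osd
    # 3. MDS / radosgw daemons last
    service ceph restart mds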
Hi,
Thank you - both ;-)
I am using the Gentoo ebuild with version 0.67.0 and have these options for upgrading:
Available versions: [m]0.67.9 [m](~)0.67.10 [m](~)0.80.5 [m](~)0.80.7
This brings me to three questions:
1. Is 0.67.10 also good to go (instead of .11)?
2. Can I upgrade from 0.67.0 to
Some distros have LIO set up by default to use 64 for the session-wide limit (default_cmdsn_depth), so increase that (session-wide means all LUs accessed through that session will be limited to a total of 64 requests across all LUs).
If you are using Linux on the initiator side, increase
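A sketch of both sides; the target IQN below is made up, and the exact attribute paths and values are assumptions to verify against your setup:

    # LIO / targetcli, per target portal group:
    targetcli /iscsi/iqn.2014-12.example.com:iscsi-rbd/tpg1 set attribute default_cmdsn_depth=128

    # open-iscsi initiator, /etc/iscsi/iscsid.conf (re-login to the target after changing):
    node.session.cmds_max = 1024
    node.session.queue_depth = 128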
Oh yeah, for the iSCSI fio full write test, did you experiment with bs and numjobs? For just 10 GB iSCSI, I think numjobs > 1 (around 4 is when I stop seeing benefits) and bs < 1MB (around 64K to 256K) works better.
On 12/08/2014 05:22 PM, Mike Christie wrote:
Some distros have LIO setup by
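For comparison, a hedged fio invocation along those lines; the device path and sizes are placeholders:

    fio --name=seqwrite --filename=/dev/sdX --rw=write --ioengine=libaio \
        --direct=1 --bs=1M --numjobs=4 --iodepth=32 --size=10g --group_reporting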
Hello,
Debian Jessie cluster, thus kernel 3.16, ceph 0.80.7.
3 storage nodes with 8 OSDs (journals on 4 SSDs) each, 3 mons.
2 compute nodes, everything connected via Infiniband.
This is pre-production, currently there are only 3 VMs and 2 of them were
idle at the time. The non-idle one was
They never fixed themselves? Did the reported times ever increase?
If not I think that's just a reporting bug which is fixed in an
unreleased branch, but I'd have to check the tracker to be sure.
On Mon, Dec 8, 2014 at 8:23 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Mon, 8 Dec 2014
On Mon, Dec 8, 2014 at 8:51 PM, Christian Balzer ch...@gol.com wrote:
On Mon, 8 Dec 2014 20:36:17 -0800 Gregory Farnum wrote:
They never fixed themselves?
As I wrote, it took a restart of OSD 8 to resolve this on the next day.
Did the reported times ever increase?
Indeed, the last before
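Before restarting an OSD for that, the admin socket can at least show what the stuck requests are doing; a sketch, run on the host carrying the OSD in question (osd.8 from this thread):

    ceph health detail                       # which OSDs report blocked/slow requests
    ceph daemon osd.8 dump_ops_in_flight     # operations currently stuck in the OSD
    ceph daemon osd.8 dump_historic_ops      # recent slow operations with a timing breakdown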