Hi Greg,
On 24 Nov 2014, at 22:01, Gregory Farnum g...@gregs42.com wrote:
On Thu, Nov 20, 2014 at 9:08 AM, Dan van der Ster
daniel.vanders...@cern.ch wrote:
Hi all,
What is the compatibility (or incompatibility) of dumpling clients talking to firefly
and giant clusters?
We sadly don't have a
Great news! Thanks for your support!
We'll be looking forward to subscribing to it when it's ready.
Best,
---
JuanFra Rodriguez Cardoso
2014-11-24 21:55 GMT+01:00 Gregory Farnum g...@gregs42.com:
On Fri, Nov 21, 2014 at 12:34 AM, JuanFra Rodriguez Cardoso
juanfra.rodriguez.card...@gmail.com wrote:
Hello,
as far as I can tell, Ceph does not make any guarantee that reads from an
object return what was actually written to it. In other words, it does not
check data integrity (except doing deep-scrub once every few days).
Considering the fact that BTRFS is not production-ready, not many people
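For reference, deep scrubs can also be kicked off by hand rather than waiting for
the periodic schedule; a minimal sketch (the PG and OSD ids are just examples):

ceph pg deep-scrub 2.1f      # deep-scrub a single placement group
ceph osd deep-scrub osd.3    # deep-scrub all PGs whose primary is osd.3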
Huynh Dac Nguyen writes:
Hi Chris,
I see.
I'm running version 0.80.7.
How do we know which part of the documentation applies to our version? As far as I
can see, there is only one set of Ceph documentation, which makes this confusing.
Could you point me to the documentation for Ceph version 0.80.7?
Tried
Hi travis,
can I have a developer or tester account so that I can submit issues
myself?
Thanks,
Massimiliano Cuttini
On 18/11/2014 23:03, Travis Rhoden wrote:
I've captured this at http://tracker.ceph.com/issues/10133
On Tue, Nov 18, 2014 at 4:48 PM, Travis Rhoden
Hi
For a large network (say 100 servers and 2500 disks), are there any
strong advantages to using separate switches and physical networks
instead of VLANs?
Also, how difficult would it be to switch from a VLAN to using
separate switches later?
-Sreenath
According to the XFS docs, setting crc=1 will only enable CRC validation of XFS
metadata (i.e. mtime, xattrs, etc.). Still, nothing guarantees integrity of
the actual data.
2014-11-25 11:05 GMT+01:00 Denis Kaganovich maha...@bspu.unibel.by:
How about XFS journal crc (mkfs stage crc=1)? Somebody
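For reference, the metadata CRC option is set at mkfs time; a minimal sketch
(device name is just an example):

mkfs.xfs -m crc=1 /dev/sdb1   # checksums XFS metadata only; file data is still unchecked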
Hi Alexandre,
Thanks for your suggestion. I also considered using errors=continue, in
line with the nobarrier idea, but I was afraid it might lead to silent
corruption on errors not caused by slow requests on OSDs. I was hoping
for a solution that would specifically allow slowness of the block
Testing ceph on top of ZFS (zfsonlinux), kernel driver.
- Have created ZFS mount:
/var/lib/ceph/osd/ceph-0
- followed the instructions at:
http://ceph.com/docs/firefly/rados/operations/add-or-rm-osds/
It is failing at step 4, "Initialize the OSD data directory":
ceph-osd -i 0 --mkfs --mkkey
Massimiliano,
We have a documentation update coming shortly. RHEL 7 doesn't have
yum-priorities, but you can use rpmfind to get it.
Regards,
John
On Tue, Nov 25, 2014 at 3:02 AM, Massimiliano Cuttini m...@phoenixweb.it
wrote:
Hi travis,
can I have a developer or tester account so that I can
Hi Massimiliano,
On Tue, Nov 25, 2014 at 6:02 AM, Massimiliano Cuttini m...@phoenixweb.it
wrote:
Hi travis,
can I have a developer or tester account so that I can submit issues
myself?
Registration for the Ceph tracker is open -- anyone can sign up for an
account to report issues.
On Tue, 25 Nov 2014, Tomasz Kuzemko wrote:
Hello,
as far as I can tell, Ceph does not make any guarantee that reads from an
object return what was actually written to it. In other words, it does not
check data integrity (except doing deep-scrub once every few days).
Considering the fact that
On Tue, Nov 25, 2014 at 07:10:26AM -0800, Sage Weil wrote:
On Tue, 25 Nov 2014, Tomasz Kuzemko wrote:
Hello,
as far as I can tell, Ceph does not make any guarantee that reads from an
object return what was actually written to it. In other words, it does not
check data integrity (except
If you are like me, you have the journals for your OSD's with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSD's in some of your hosts. If so, please do
check your S.M.A.R.T. statistics regularly, because these SSD's really
can't cope with
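A quick way to keep an eye on this is the wear-related SMART attributes; a minimal
sketch (device name is an example, attribute names vary by model):

smartctl -A /dev/sda | grep -i -e wear -e lbas -e media
# on Intel SSDs look at Media_Wearout_Indicator and Total_LBAs_Written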
On 11/25/2014 09:41 AM, Erik Logtenberg wrote:
If you are like me, you have the journals for your OSD's with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSD's in some of your hosts. If so, please do
check your S.M.A.R.T. statistics
Should be all set now. I neglected to push the update yesterday, but
it's there now.
Best Regards,
Patrick McGarry
Director Ceph Community || Red Hat
http://ceph.com || http://community.redhat.com
@scuttlemonkey || @ceph
On Tue, Nov 25, 2014 at 10:01 AM, Brian Rak b...@gameservers.com
Hi.
If you are like me, you have the journals for your OSD's with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSD's in some of your hosts. If so, please do
check your S.M.A.R.T. statistics regularly, because these SSD's really
My cluster is actually very fast without SSD drives. Thanks for the
advice!
Michael Kuriger
mk7...@yp.com
818-649-7235
MikeKuriger (IM)
On 11/25/14, 7:49 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/25/2014 09:41 AM, Erik Logtenberg wrote:
If you are like me, you have the
sloppy crc uses fs xattrs directly, omap won't help.
-Sam
On Tue, Nov 25, 2014 at 7:39 AM, Tomasz Kuzemko tomasz.kuze...@ovh.net wrote:
On Tue, Nov 25, 2014 at 07:10:26AM -0800, Sage Weil wrote:
On Tue, 25 Nov 2014, Tomasz Kuzemko wrote:
Hello,
as far as I can tell, Ceph does not make any
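For anyone who wants to experiment with the sloppy CRC option mentioned above, it is
an OSD setting in ceph.conf; a minimal sketch, not a recommendation:

[osd]
filestore sloppy crc = true                # store and verify a CRC per write, kept in fs xattrs
# filestore sloppy crc block size = 65536  # checksum granularity in bytes (assumed default)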
I have suffered power losses in every data center I've been in. I have
lost SSDs because of it (Intel 320 Series). The worst time, I lost both
SSDs in a RAID1. That was a bad day.
I'm using the Intel DC S3700 now, so I don't have a repeat. My cluster is
small enough that losing a journal SSD
It's mostly about bandwidth. With VLANs, the public and cluster networks
are going to be sharing the inter-switch links.
For a cluster that size, I don't see much advantage to the VLANs. You'll
save a few ports by having the inter-switch links shared, at the expense of
contention on those
Hi,
In my humble opinion, if you have enough money, separate switches are
always the better choice.
Regards, I
2014-11-25 20:47 GMT+01:00 Craig Lewis cle...@centraldesktop.com:
It's mostly about bandwidth. With VLANs, the public and cluster networks
are going to be sharing the inter-switch
For a large network (say 100 servers and 2500 disks), are there any
strong advantages to using separate switches and physical networks
instead of VLANs?
Physical isolation will ensure that congestion on one does not affect
the other. On the flip side, asymmetric network failures tend to be
more
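Whichever way you isolate them physically, the public/cluster split itself is just
two subnets in ceph.conf; a minimal sketch (addresses are made up):

[global]
public network = 192.168.10.0/24    # client <-> MON/OSD traffic
cluster network = 192.168.20.0/24   # OSD <-> OSD replication and recovery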
Hi All,
I am evaluating Ceph for one of our product requirements.
I have gone through the website, http://ceph.com/docs/master/start/
I am using Ubuntu 14.04 LTS and am done with most of the steps.
Finally, I am stuck on "Creating a File System". From the website,
The ceph fs new command was
Hi,
Use ceph mds newfs {metaid} {dataid} instead
JC
On Nov 25, 2014, at 12:27, Jeripotula, Shashiraj
shashiraj.jeripot...@verizon.com wrote:
Hi All,
I am evaluating Ceph for one of our product requirements.
I have gone through the website, http://ceph.com/docs/master/start/
HI JC,
I tried,
sysenguser@blade3:~$ ceph mds newfs cephfs_metadata cephfs_data
Invalid command: cephfs_metadata doesn't represent an int
mds newfs int[0-] int[0-] {--yes-i-really-mean-it} : make new filesystom
using pools metadata and data
Error EINVAL: invalid command
Here is the original
Thanks for the advice!
I've checked a couple of my Intel 520s, which I use for the OSD journals and
have been using for almost 2 years now.
I do not have a great deal of load though. Only about 60 VMs or so, with
general usage.
Disk 1:
ID# ATTRIBUTE_NAME FLAG VALUE WORST
FWIW, I've got Intel 520s in one of our test nodes at Inktank that has a
fair amount of data thrown at it and we haven't lost a drive in 2 years.
Having said that, I'd use higher write endurance drives in production,
especially with how much cheaper they are getting these days.
Mark
On
The two numbers (ints) are meant to be the IDs of the pools you have created
for data and metadata.
Assuming you have already created the pools, run
ceph osd lspools
and use the numbers from there to create the FS
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
It looks to me like you need to supply it the *ids* of the pools, not
their names.
So do:
$ ceph osd dump # (or lspools)
note down the ids of the pools you want to use (suppose I have
cephfs_data 10 and cephfs_metadata 12):
$ ceph mds newfs 10 12 --yes-i-really-mean-it
On 26/11/14 11:30,
Thanks Nick and Mark,
I was able to run it with the IDs and --yes-i-really-mean-it:
sysenguser@blade3:~$ ceph mds newfs 3 4 --yes-i-really-mean-it
new fs with metadata pool 3 and data pool 4
Regards
Raj
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
There was a good thread on the mailing list a little while ago. There were
several recommendations in that thread, maybe some of them will help.
Found it:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg14154.html
On Tue, Nov 25, 2014 at 4:16 AM, Lindsay Mathieson
Thanks Craig, I had a good read of it - from what I read, the standard Ceph
packages should work with ZFS, just without making use of its extra features
(writeparallel support), whose performance was not all that good
anyway.
I did note the "set xattr to sa" comment, which gave me a different error :)
ceph-osd
I've found the cause of the problem - Ceph was attempting to create
the journal with direct IO, which ZFS doesn't support. I worked around
it by disabling journal dio in ceph.conf:
[osd]
journal dio = false
Dunno if this is a good idea or not, or whether there is a better way
of doing it :)
On 26
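For reference, the pieces above combine to roughly the following (dataset name is
hypothetical, and whether disabling journal dio is safe remains an open question):

zfs set xattr=sa tank/ceph-osd0                              # store xattrs in inodes
zfs set mountpoint=/var/lib/ceph/osd/ceph-0 tank/ceph-osd0

# ceph.conf
[osd]
journal dio = false                                          # ZFS has no O_DIRECT support

ceph-osd -i 0 --mkfs --mkkey                                 # the mkfs step then succeeds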
Hello,
We are running a 6-node Ceph cluster, version 0.80.7, operating system CentOS
7. The OSDs on one node are not getting marked up and in. I have
started/restarted the OSDs a couple of times with no luck. All the OSDs have
the following message:
2014-11-25 08:36:04.150120 7f9ff676f700 0 --
Hi, all
I created an RBD named foo, then mapped and mounted it on two different machines.
When I touch a file on machine A, machine B cannot see the new file,
and machine B can also touch a file with the same name!
I want to know: is the RBD the same on machines A and B, or are they actually
two different RBDs?
Any
Hi all,
I have a requirement for a highly available, high performance storage
environment to serve data for webheads. (lots of reads on small files,
limited writes) Having looked at all available options, OCFS2 on top of RBD
appears to be the only solution that meets all my needs. I have used
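For the record, the rough shape of OCFS2 on RBD is: map the image on every node,
make the filesystem once, then mount it everywhere, with the O2CB cluster stack
configured on all nodes beforehand. A minimal sketch (names and sizes are made up):

rbd create webdata --size 204800         # 200 GB image, created once
rbd map webdata                          # on every node that will mount it
mkfs.ocfs2 -L webdata /dev/rbd0          # run once, from a single node
mount -t ocfs2 /dev/rbd0 /mnt/webdata    # then mount on each node
# prerequisite: /etc/ocfs2/cluster.conf and the o2cb service on every node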
You can't write from 2 nodes mounted to the same rbd at the same time without a
cluster aware file system.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of mail
list
Sent: Tuesday, November 25, 2014 7:30 PM
To: ceph-us...@ceph.com
Subject:
hi,
But I have touched the same file on the two machines under the same RBD with no
error.
Will it cause problems, or is it just not recommended but still possible?
On Nov 26, 2014, at 12:08, Michael Kuriger mk7...@yp.com wrote:
You can't write from 2 nodes mounted to the same rbd at the same time
Each server mounting the rbd device thinks it's the only server writing to it.
They are not aware of the other server and therefore will overwrite and corrupt
the filesystem as soon as each server writes a file.
-Original Message-
From: mail list [mailto:louis.hust...@gmail.com]
Sent:
Hi Michael,
I wrote the same file with different content, and there was no sign of an
overwrite, so when will the corruption appear?
On Nov 26, 2014, at 12:23, Michael Kuriger mk7...@yp.com wrote:
Each server mounting the rbd device thinks it's the only server writing to
it. They are not aware
I cannot go into detail about how or where your particular system is writing
files. All I can reiterate is that RBD images can only be mounted on one host
at a time, unless you're using a cluster-aware file system.
Hope that helps!
-Mike
-Original Message-
From: mail list
Thanks very much, Mike!
On Nov 26, 2014, at 12:42, Michael Kuriger mk7...@yp.com wrote:
I cannot go into detail about how or where your particular system is writing
files. All I can reiterate is that RBD images can only be mounted on one
host at a time, unless you're using a cluster-aware