It's great to see people outside of Inktank starting to get into using
teuthology. Thanks for the write-up!
-Greg
On Wed, Jan 16, 2013 at 6:01 AM, Loic Dachary l...@dachary.org wrote:
Hi,
I'm happy to report that running teuthology to get an lcov code coverage
report worked for me.
On Wed, Jan 16, 2013 at 3:54 PM, Sam Lang sam.l...@inktank.com wrote:
On Wed, Jan 16, 2013 at 3:52 PM, Gregory Farnum g...@inktank.com wrote:
My biggest concern with this was how it worked on clusters with
multiple data pools, and Sage's initial response was to either
1) create an object
On Tue, Jan 15, 2013 at 6:55 AM, Noah Watkins jayh...@cs.ucsc.edu wrote:
On Tue, Jan 15, 2013 at 1:32 AM, Danny Al-Gaaf danny.al-g...@bisect.de
wrote:
On 15.01.2013 10:04, James Page wrote:
On 12/01/13 16:36, Noah Watkins wrote:
On Thu, Jan 10, 2013 at 9:13 PM, Gary Lowell
On Tue, Jan 15, 2013 at 11:00 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu wrote:
On 01/15/2013 12:36 PM, Gregory Farnum wrote:
On Tue, Jan 15, 2013 at 10:33 AM, Dimitri Maziuk dmaz...@bmrb.wisc.edu
wrote:
At the start of the batch #cores-in-the-cluster processes try to mmap
the same 2GB
On Mon, Jan 14, 2013 at 6:09 AM, Florian Haas flor...@hastexo.com wrote:
Hi Mark,
thanks for the comments.
On Mon, Jan 14, 2013 at 2:46 PM, Mark Nelson mark.nel...@inktank.com wrote:
Hi Florian,
Couple of comments:
OSDs use a write-ahead mode for local operations: a write hits the
On Fri, Jan 11, 2013 at 6:57 AM, Sébastien Han han.sebast...@gmail.com wrote:
Is osd.1 using the heap profiler as well? Keep in mind that active use
of the memory profiler will itself cause memory usage to increase —
this sounds a bit like that to me since it's staying stable at a large
but
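(For anyone digging through the archive: a minimal sketch of driving the OSD heap profiler, assuming a reasonably recent ceph CLI; older releases spell it "ceph osd tell 1 heap ..." instead.)
# ask osd.1 for tcmalloc heap statistics
ceph tell osd.1 heap stats
# hand memory that tcmalloc is holding back to the OS
ceph tell osd.1 heap release
# stop the profiler if it was started earlier
ceph tell osd.1 heap stop_profiler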
On Fri, Jan 11, 2013 at 3:07 AM, Marcin Szukala
szukala.mar...@gmail.com wrote:
2013/1/10 Gregory Farnum g...@inktank.com:
On Thu, Jan 10, 2013 at 8:56 AM, Marcin Szukala
szukala.mar...@gmail.com wrote:
Hi,
The scenario is correct except for the last line. I can mount the image, but the
data
On Thu, Jan 10, 2013 at 10:45 AM, Isaac Otsiabah zmoo...@yahoo.com wrote:
What is the acceptable attachment file size? Because I have been trying to
post a problem with an attachment greater than 1.5MB and it seems to get
lost.
That wouldn't be surprising; Sage suggests the FAQ
On Tue, Jan 8, 2013 at 12:20 PM, Moore, Shawn M smmo...@catawba.edu wrote:
I have been testing ceph for a little over a month now. Our design goal is
to have 3 datacenters in different buildings all tied together over 10GbE.
Currently there are 10 servers each serving 1 osd in 2 of the
On Wed, Jan 9, 2013 at 8:00 AM, Noah Watkins noah.watk...@inktank.com wrote:
Hi Jutta,
On Wed, Jan 9, 2013 at 7:11 AM, Lachfeld, Jutta
jutta.lachf...@ts.fujitsu.com wrote:
the current content of the web page
http://ceph.com/docs/master/cephfs/hadoop shows a configuration parameter
On Wed, Jan 9, 2013 at 10:09 AM, Sylvain Munaut
s.mun...@whatever-company.com wrote:
Just fyi, I also have growing memory on OSD, and I have the same logs:
libceph: osd4 172.20.11.32:6801 socket closed in the RBD clients
That message is not an error; it just happens if the RBD client
doesn't
On Wed, Jan 9, 2013 at 1:33 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/9 Mark Kampe mark.ka...@inktank.com:
Asynchronous RADOS replication is definitely on our list,
but more complex and farther out.
Do you have any ETA?
1 month? 6 months? 1 year?
No, but
On Thu, Jan 10, 2013 at 8:56 AM, Marcin Szukala
szukala.mar...@gmail.com wrote:
Hi,
The scenario is correct except for the last line. I can mount the image, but the
data that was written to the image before power failure is lost.
Currently the ceph cluster is not healthy, but I don't think it's
related
On Thu, Jan 10, 2013 at 4:39 PM, Yasuhiro Ohara y...@soe.ucsc.edu wrote:
Hi,
What will happen when constructing a cluster of 10 hosts,
if the hosts are gradually removed from the cluster
one by one (in each step waiting for Ceph status to become healthy),
until it eventually reaches, say, 3 hosts
service ceph -a
start then it's going to go and turn on all the daemons listed in its
local ceph.conf.
-Greg
Thanks.
regards,
Yasu
From: Gregory Farnum g...@inktank.com
Subject: Re: Question about configuration
Date: Thu, 10 Jan 2013 16:43:59 -0800
Message-ID:
capylrzhzr81ko3qgc1ogxs_12
On Mon, Dec 31, 2012 at 10:27 AM, Wido den Hollander w...@widodh.nl wrote:
Just make sure you use cephx (enabled by default in 0.55) so that you don't
accidentally connect to the wrong cluster.
Use of cephx will provide an additional layer of protection for the
clients, but the OSDs and
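(Illustration, not part of the original reply: the 0.55-era ceph.conf fragment for requiring cephx everywhere looks roughly like this.)
[global]
    # require cephx for daemon-to-daemon, service and client traffic
    auth cluster required = cephx
    auth service required = cephx
    auth client required = cephx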
I'm confused. Isn't the HeartbeatMap all about local thread
heartbeating (so, not pings with other OSDs)? I would assume the
upgrade and restart just caused a bunch of work and the CPUs got
overloaded.
-Greg
On Thu, Jan 3, 2013 at 8:52 AM, Sage Weil s...@inktank.com wrote:
Hi Wido,
On Thu, 3
ideas on
improving it?
-Greg
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Tuesday, January 8, 2013 18:03
To: Yann ROBIN
Cc: ceph-devel@vger.kernel.org
Subject: Re: Rados gateway init timeout with cache
To clarify, you lost the data on half of your OSDs
What are your CRUSH rules? Depending on how you set this cluster up,
it might not be placing more than one replica in a single host, and
you've only got two hosts so it couldn't satisfy your request for 3
copies.
-Greg
On Tue, Jan 8, 2013 at 2:11 PM, Bryan Stillwell
bstillw...@photobucket.com
type host
step emit
}
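(For context, a full rule of that shape usually looks like the sketch below; the names and numbers are illustrative, not Bryan's actual map.)
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # one replica per host, so two hosts can never satisfy size 3
        step chooseleaf firstn 0 type host
        step emit
}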
On Tue, Jan 8, 2013 at 3:23 PM, Gregory Farnum g...@inktank.com wrote:
What are your CRUSH rules? Depending on how you set this cluster up,
it might not be placing more than one replica in a single host, and
you've only got two hosts so it couldn't satisfy your
On Tue, Jan 8, 2013 at 11:44 AM, Denis Fondras c...@ledeuns.net wrote:
Hello,
What error message do you get when you try and turn it on? If the
daemon is crashing, what is the backtrace?
The daemon is crashing. Here is the full log if you want to take a look :
with larger PG counts on the
pools in use. Otherwise we can do something more detailed.
-Greg
You might also adjust the crush tunables, see
http://ceph.com/docs/master/rados/operations/crush-map/?highlight=tunable#tunables
sage
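(A sketch of the usual round trip for inspecting or editing those tunables by hand; file names are arbitrary.)
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the tunables/rules in crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new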
2013/1/8 Gregory Farnum g...@inktank.com
On Monday, January 7, 2013 at 1:00 PM, Isaac Otsiabah wrote:
When I add a new host (with OSDs) to my existing cluster, 1 or 2 previous
OSDs go down for about 2 minutes and then they come back up.
[root@h1ct ~]# ceph osd tree
# id weight type name up/down reweight
-1
3 root
On Monday, January 7, 2013 at 9:25 AM, Denis Fondras wrote:
Hello all,
I'm using Ceph 0.55.1 on a Debian Wheezy (1 mon, 1 mds and 3 osd over
btrfs) and every once in a while, an OSD process crashes (almost never
the same osd crashes).
This time I had 2 osd crash in a row and so I only
On Saturday, January 5, 2013 at 4:19 AM, Denis Fondras wrote:
Hello all,
I'm using Ceph 0.55.1 on a Debian Wheezy (1 mon, 1 mds and 3 osd over
btrfs) and every once in a while, an OSD process crashes (almost never
the same osd crashes).
This time I had 2 osd crash in a row and so I only
Thanks!
Gary, can you pull these into a branch and do some before-and-after
package comparisons on our systems (for the different distros in
gitbuilder) and then merge into master?
-Greg
On Fri, Jan 4, 2013 at 9:51 AM, Danny Al-Gaaf danny.al-g...@bisect.de wrote:
This set of patches contains
Sorry for the delay; I've been out on vacation...
On Fri, Dec 14, 2012 at 6:09 AM, Lachfeld, Jutta
jutta.lachf...@ts.fujitsu.com wrote:
I do not have the full output of ceph pg dump for that specific TeraSort
run, but here is a typical output after automatically preparing CEPH for a
I think they might be different just as a consequence of being updated
less recently; that's where all of the lines whose origin I recognize
differ (not certain about the calc_parents stuff though). Sage can
confirm.
The specific issue you encountered previously was of course because
you changed
On Fri, Dec 21, 2012 at 2:07 AM, Amon Ott a...@m-privacy.de wrote:
On 20.12.2012 15:31, Mark Nelson wrote:
On 12/20/2012 01:08 AM, Roman Hlynovskiy wrote:
Hello Mark,
for multi-mds solutions do you refer to multi-active arch or 1 active
and many standby arch?
That's a good question! I
On Wed, Dec 26, 2012 at 6:16 PM, lollipop lollipop_...@126.com wrote:
I am currently wondering about doing offline deduplication in Ceph.
My idea is:
First in the ceph-client, I try to get the locations of chunks in one file.
The information includes
how many chunks the file has and which osd the
Alex has been doing a lot of work to support this lately, but I don't
think he's sketched out the actual implementation timeline yet.
-Greg
On Mon, Dec 17, 2012 at 4:10 PM, Chris Dunlop ch...@onthe.net.au wrote:
Hi,
Format 2 images (and attendant layering support) are not yet
supported by the
http://en.wikipedia.org/wiki/Software_defined_storage
Basically, with Ceph you've got a bunch of storage boxes providing raw disk.
They're aggregated together via the software daemons they run rather than by a
RAID controller or some hardware box that re-exports them, and the
administrator can
On Thu, Dec 13, 2012 at 9:27 AM, Sage Weil s...@inktank.com wrote:
Hi Jutta,
On Thu, 13 Dec 2012, Lachfeld, Jutta wrote:
Hi all,
I am currently doing some comparisons between CEPH FS and HDFS as a file
system for Hadoop using Hadoop's integrated benchmark TeraSort. This
benchmark first
On Thu, Dec 13, 2012 at 12:23 PM, Cameron Bahar cba...@gmail.com wrote:
Is the chunk size tunable in a Ceph cluster? I don't mean dynamic, but even
statically configurable when a cluster is first installed?
Yeah. You can set chunk size on a per-file basis; you just can't
change it once the
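(An illustration of my own, not from the original reply: with a recent client the per-file layout can be set through the virtual xattrs, but only while the file is still empty.)
touch /mnt/ceph/bigfile
# 8 MB objects and an 8 MB stripe unit; the values are just an example
setfattr -n ceph.file.layout.object_size -v 8388608 /mnt/ceph/bigfile
setfattr -n ceph.file.layout.stripe_unit -v 8388608 /mnt/ceph/bigfile
getfattr -n ceph.file.layout /mnt/ceph/bigfile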
On Wed, Dec 12, 2012 at 1:31 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hello List,
ceph osd create $NUM
does not seem to work anymore ;-(
# ceph osd createosd. 62
unknown command createosd
Read those two lines again. Very slowly. :)
The correct syntax is
ceph osd create uuid
The
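(In other words, a minimal sketch of the workflow being pointed at; the UUID is whatever uuidgen hands back.)
UUID=$(uuidgen)
# the monitors allocate and print the numeric OSD id, e.g. 61
ceph osd create $UUID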
Yeah; 61 is not a valid UUID and you can't specify anything else on that line.
On Wed, Dec 12, 2012 at 1:38 PM, Stefan Priebe s.pri...@profihost.ag wrote:
HI Greg,
sorry just a copy paste error.
[cloud1-ceph1: ~]# ceph osd create 61
(22) Invalid argument
Read those two lines again. Very
On Wed, Dec 12, 2012 at 2:00 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi Greg,
thanks for explanation. I'm using current next branch.
I'm using:
host1:
osd 11 .. 14
host2:
osd 21 .. 24
host3:
osd 31 .. 34
host4:
osd 41 .. 44
host5:
osd 51 .. 54
Right now I want to add host6.
On Monday, December 10, 2012 at 1:17 AM, James Page wrote:
On 19/09/12 02:53, Dustin Kirkland wrote:
Looking forward, another option might be to implement
encryption inside btrfs (placeholder fields are there in the
disk format,
Oooh, very nice! Do you have a list of the dependencies that you actually
needed to install?
Apart from breaking this up into smaller patches, we'll also want to reformat
some of it. Rather than sticking an #if APPLE on top of every spin lock, we
should have utility functions that do this for
On Wed, Nov 21, 2012 at 11:23 PM, Drunkard Zhang gongfan...@gmail.com wrote:
2012/11/22 Gregory Farnum g...@inktank.com:
On Tue, Nov 20, 2012 at 8:28 PM, Drunkard Zhang gongfan...@gmail.com
wrote:
2012/11/21 Gregory Farnum g...@inktank.com:
No, absolutely not. There is no relationship
On Sat, Dec 1, 2012 at 12:51 PM, Sage Weil s...@inktank.com wrote:
Is ready to merge, after a final look. There is one additional patch that
handles the connection fault and reconnect cases that hasn't been reviewed
yet. Passed the rados suite, although when we add it to the night suite
On Sun, Dec 2, 2012 at 11:53 PM, liuy...@ncic.ac.cn wrote:
Dear Sir,
I am Yaqi Liu, who asked you some questions about snapshots in Ceph,
and I have some problems with my Gmail now, so I am contacting you from
this address. Thanks for your reply about all the questions, but I
On Wed, Nov 28, 2012 at 1:30 PM, Cláudio Martins c...@ist.utl.pt wrote:
On Wed, 28 Nov 2012 13:08:08 -0800 Samuel Just sam.j...@inktank.com wrote:
Can you post the output of ceph -s?
'ceph -s' right now gives
health HEALTH_WARN 923 pgs degraded; 8666 pgs down; 9606 pgs peering; 7
pgs
On Sat, Nov 24, 2012 at 9:04 AM, Sylvain Munaut
s.mun...@whatever-company.com wrote:
Hi,
In addition to Sam's question about version, are you using cephx?
I'm running 0.48.2 on Ubuntu Precise and cephx auth is enabled.
Cheers,
Sylvain
Sorry to let this drop for so long, but is this
On Tue, Dec 4, 2012 at 12:46 PM, Sylvain Munaut
s.mun...@whatever-company.com wrote:
Hi,
Sorry to let this drop for so long, but is this something you've seen
happen before/again or otherwise reproduced? I'm not entirely sure how
to best test for it (other than just jerking the time around),
multiple clients. I did a rm -rf myfolder
from a client.
I use the ceph-fs-common (0.48-2argonaut) to mount my ceph fs.
Franck
2012/11/20 Gregory Farnum g...@inktank.com:
On Tue, Nov 13, 2012 at 3:23 AM, Franck Marchand fmarch...@agaetis.fr
wrote:
Hi,
I have a weird problem. I removed a folder
On Thu, Nov 22, 2012 at 4:29 PM, Martin Gerhard Loschwitz
martin.loschw...@hastexo.com wrote:
Hi folks,
I figured it might be a cool thing to have packages of ceph-deploy for
Debian and Ubuntu 12.04; I took the time and created them (along with
packages of python-pushy, which ceph-deploy
On 20.11.2012 00:21, Gregory Farnum wrote:
Which version was this on? There was some fairly significant work to
recovery done to introduce a reservation scheme and some other stuff
that might need some different defaults.
-Greg
On Tue, Nov 13, 2012 at 12:33 PM, Stefan Priebe s.pri
On Tue, Dec 4, 2012 at 1:57 PM, Gregory Farnum g...@inktank.com wrote:
On Sun, Nov 25, 2012 at 12:45 PM, Nathan Howell
nathan.d.how...@gmail.com wrote:
So when trawling through the filesystem doing checksum validation
these popped up on the files that are filled with null bytes:
https
On Mon, Dec 3, 2012 at 12:13 PM, Oliver Francke oliver.fran...@filoo.de wrote:
if you encounter all BIOS-, POST-, RAID-controller-checks, linux-boot,
openvswitch-STP setup and so on, one can imagine, that a reboot takes a
couple-of-minutes, normally with our setup after 30 seconds the cluster
On Wednesday, November 28, 2012 at 11:45 PM, Andrey Korolyov wrote:
$ ceph osd down -
osd.0 is already down
$ ceph osd down ---
osd.0 is already down
the same for ``+'', ``/'', ``%'' and so - I think that for osd subsys
ceph cli should explicitly work only with positive integers plus zero,
On Tuesday, November 27, 2012 at 8:45 AM, Sam Lang wrote:
Hi Noah,
I was able to reproduce your issue with a similar test using the fuse
client and the clock_offset option for the mds. This is what I see
happening:
clientA's clock is a few seconds behind the mds clock
clientA
On Fri, Nov 23, 2012 at 12:25 AM, Dave (Bob) d...@bob-the-boat.me.uk wrote:
Greg,
Is there any news on this?
David
Hurray, there is! Your latest stashed version of the PGMap got
corrupted; knowing that helped me to track down a bug — we weren't
checking the return value of an fsync! Fixes to
On Sun, Nov 25, 2012 at 11:32 AM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi Greg,
On 25.11.2012 20:08, Gregory Farnum wrote:
On Sunday, November 25, 2012 at 10:26 AM, Stefan Priebe wrote:
Hello list,
are there any useful statistics for ceph? ceph -s shows just the status
but i
On Sunday, November 25, 2012 at 9:30 AM, Drunkard Zhang wrote:
2012/11/26 Sage Weil s...@inktank.com:
On Sun, 25 Nov 2012, Drunkard Zhang wrote:
I'm using ceph-0.51. I set up 3 monitors, then mounted with the 3 mon IPs at
another host with either kernel mode or fuse,
On Sunday, November 25, 2012 at 10:26 AM, Stefan Priebe wrote:
Hello list,
are there any useful statistics for ceph? ceph -s shows just the status
but i would like to record something which gives me an idea about the
ceph load or usage.
Each daemon has an admin socket which you can use to
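(A rough sketch of what that looks like in practice; the socket path depends on your install.)
# per-daemon perf counters via the admin socket
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
# list every command the socket understands
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok help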
On Thursday, November 22, 2012 at 4:33 AM, Jimmy Tang wrote:
Hi All,
Is it possible at this point in time to setup some form of tiering of storage
pools in ceph by modifying the crush map? For example I want to have my most
recently used data on a small set of nodes that have SSD's and
On Thursday, November 22, 2012 at 5:26 AM, Sylvain Munaut wrote:
Hi,
I know that ceph has time-synced servers as a requirement, but I
think a sane failure mode like a message in the logs instead of
uncontrollably growing memory usage would be a good idea.
I had the NTP process die on me
On Thursday, November 22, 2012 at 7:38 PM, Chen, Xiaoxi wrote:
Hi list,
I am thinking about the possibility of adding some primitives to CRUSH to meet
the following user stories:
A. Same host, Same rack
To balance between availability and performance, one may like such a rule: 3
Replicas,
On Friday, November 23, 2012 at 5:36 AM, Chen, Xiaoxi wrote:
Hi Han,
I have a cluster with 8 nodes (each node with 1 SSD as journal and 3 7200 rpm
sata disks as data disks); each OSD consists of 1 sata disk together with one
30G partition from the SSD. So in total I have 24 OSDs.
My test method
On Tue, Nov 20, 2012 at 8:28 PM, Drunkard Zhang gongfan...@gmail.com wrote:
2012/11/21 Gregory Farnum g...@inktank.com:
No, absolutely not. There is no relationship between different RADOS
pools. If you've been using the cephfs tool to place some filesystem
data in different pools then your
On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov ruslan.usi...@gmail.com wrote:
So, it's not possible to use Ceph as a scalable block device without virtualization?
I'm not sure I understand, but if you're trying to take a bunch of
compute nodes and glue their disks together, no, that's not a
supported use
On Tue, Nov 20, 2012 at 1:25 AM, Drunkard Zhang gongfan...@gmail.com wrote:
2012/11/20 Gregory Farnum g...@inktank.com:
On Mon, Nov 19, 2012 at 7:55 AM, Drunkard Zhang gongfan...@gmail.com wrote:
I created a ceph cluster for testing; here's the mistake I made:
Add a second mds: mds.ab, executed 'ceph
On Mon, Nov 19, 2012 at 1:08 PM, Dave Humphreys (Datatone)
d...@datatone.co.uk wrote:
I have a problem in which I can't start my ceph monitor. The log is shown
below.
The log shows version 0.54. I was running 0.52 when the problem arose, and I
moved to the latest in case the newer version
On Mon, Nov 19, 2012 at 7:55 AM, Drunkard Zhang gongfan...@gmail.com wrote:
I created a ceph cluster for testing; here's the mistake I made:
Add a second mds: mds.ab, executed 'ceph mds set_max_mds 2', then
removed the mds just added;
Then 'ceph mds set_max_mds 1', the first mds.aa crashed, and
On Sun, Nov 18, 2012 at 7:14 PM, liu yaqi liuyaqiy...@gmail.com wrote:
Is the disk on the MDS used for the journal? Does it have some other use?
The MDS doesn't make any use of local disk space — it stores
everything in RADOS. You need enough local disk to provide a
configuration file, keyring, and debug
On Fri, Nov 16, 2012 at 5:56 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 11/15/2012 01:51 AM, Gandalf Corvotempesta wrote:
2012/11/15 Josh Durgin josh.dur...@inktank.com:
So basically you'd only need a single nic per storage node. Multiple
can be useful to separate frontend and backend
On Sun, Nov 11, 2012 at 11:02 PM, liu yaqi liuyaqiy...@gmail.com wrote:
2012/11/9 Sage Weil s...@inktank.com
Lots of different snapshots:
- librados lets you do 'selfmanaged snaps' in its API, which let an
application control which snapshots apply to which objects.
- you can create a
Which version was this on? There was some fairly significant work to
recovery done to introduce a reservation scheme and some other stuff
that might need some different defaults.
-Greg
On Tue, Nov 13, 2012 at 12:33 PM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi list,
osd recovery seems to
On Tue, Nov 13, 2012 at 3:23 AM, Franck Marchand fmarch...@agaetis.fr wrote:
Hi,
I have a weird problem. I removed a folder using a mounted fs partition. I
did it and it worked well.
What client are you using? How did you delete it? (rm -rf, etc?) Are
you using multiple clients or one, and did you
On Sun, Nov 18, 2012 at 12:05 PM, Noah Watkins jayh...@cs.ucsc.edu wrote:
Wanna have a look at a first pass on this patch?
wip-client-open-layout
Thanks,
Noah
Just glanced over this, and I'm curious:
1) Why symlink another reference to your file_layout.h?
2) There's already a
at 1:45 PM, Gregory Farnum g...@inktank.com wrote:
On Mon, Nov 19, 2012 at 1:08 PM, Dave Humphreys (Datatone)
d...@datatone.co.uk wrote:
I have a problem in which I can't start my ceph monitor. The log is shown
below.
The log shows version 0.54. I was running 0.52 when the problem arose, and I
To be clear, the monitor cluster needs to be within this clock drift —
the rest of the Ceph cluster can be off by as much as you care to.
(Well, there's also a limit imposed by cephx authorization which can
keep nodes out of the cluster, but that drift allowance is measured in
units of hours.)
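(For reference, the monitor skew tolerance is a plain config option; a sketch of loosening it, with an illustrative value rather than a recommendation.)
[mon]
    # maximum clock drift tolerated between monitors, in seconds (default 0.05)
    mon clock drift allowed = 0.1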
with writes to lots of disks at the same
time.
2012/11/4 Aleksey Samarin nrg3...@gmail.com:
Ok!
Well, I'll take these tests and write about the results.
btw,
disks are the same, as some may be faster than others?
2012/11/4 Gregory Farnum g...@inktank.com:
That's only nine — where are the other
Sorry we missed this — everybody's been very busy!
If you're still having trouble, can you install the ceph debug symbol
packages and get this again? The backtrace isn't very helpful without
that, unfortunately.
-Greg
On Wed, Oct 24, 2012 at 7:21 PM, jie sun 0maid...@gmail.com wrote:
Hi,
My
I was asked today if there's a way to upgrade RBD volumes from v1 to
v2. I didn't think so, but wanted
1) to make sure I'm right,
2) to ask how hard it would be,
3) to ask if we haven't done it because it didn't occur to us or
because it's too hard.
-Greg
On Fri, Nov 9, 2012 at 8:04 PM, Josh Durgin josh.dur...@inktank.com wrote:
On 11/09/2012 11:01 AM, Gregory Farnum wrote:
I was asked today if there's a way to upgrade RBD volumes from v1 to
v2. I didn't think so, but wanted
1) to make sure I'm right,
2) to ask how hard it would be,
3
On Wed, Nov 7, 2012 at 4:20 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/11/7 Gregory Farnum g...@inktank.com:
The mons need to be reachable by everybody. They don't do a ton of
network traffic, but 100Mb/s might be pushing it a bit low...
Some portion of my network
Deleting files on CephFS doesn't instantaneously remove the underlying
objects because they could still be in use by other clients, and
removal takes time proportional to the size of the file. Instead the
MDS queues it up to be removed asynchronously. You should see the
number of objects counter
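(A hedged sketch of watching that happen, assuming the default CephFS data pool is named "data".)
# per-pool object counts; the data pool's count should fall as the MDS purges files
rados df
# cluster-wide summary, including the total object count
ceph -s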
On Sun, Nov 4, 2012 at 10:58 AM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi all
I'm planning to use Ceph for cloud storage.
My test setup is 2 servers connected via infiniband 40Gb, 6x2Tb disks per
node.
Centos 6.2
Ceph 0.52 from http://ceph.com/rpms/el6/x86_64
This is my config
pg dump
http://pastebin.com/BqLsyMBC
I have still no idea.
All the best. Alex
2012/11/4 Gregory Farnum g...@inktank.com:
On Sun, Nov 4, 2012 at 10:58 AM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi all
I'm planning to use Ceph for cloud storage.
My test setup is 2 servers connected via
2012/11/4 Gregory Farnum g...@inktank.com:
On Sun, Nov 4, 2012 at 10:58 AM, Aleksey Samarin nrg3...@gmail.com wrote:
Hi all
I'm planning to use Ceph for cloud storage.
My test setup is 2 servers connected via infiniband 40Gb, 6x2Tb disks per
node.
Centos 6.2
Ceph 0.52 from http://ceph.com/rpms/el6
at 80585 KB/sec
2012-11-04 16:19:25.148613 osd.26 [INF] bench: wrote 1024 MB in blocks
of 4096 KB in 13.541710 sec at 77433 KB/sec
All the best.
2012/11/4 Gregory Farnum g...@inktank.com:
[Sorry for the blank email; I missed!]
On Sun, Nov 4, 2012 at 1:04 PM, Aleksey Samarin nrg3...@gmail.com wrote
On Fri, Nov 2, 2012 at 8:53 PM, Travis Rhoden trho...@gmail.com wrote:
Hi folks,
Are there any plans to release python-ceph to pypi? It would be nice
to see it packaged up in distutils/egg format and added to pypi, that
way other python packages can list it as a dependency.
I know there is
On Fri, Nov 2, 2012 at 5:35 PM, Jimmy Tang jt...@tchpc.tcd.ie wrote:
On 30 Oct 2012, at 16:13, Sage Weil wrote:
It is probably a relatively straightforward porting job (fixing up
#includes, etc.) to get ceph-fuse working under OS X with macfuse or
osxfuse or whatever the latest and greatest
but the
originals are also kept. All of this is done on one machine. The
journals and osd drives are write saturated while this is going on.
On Thu, Nov 1, 2012 at 4:02 PM, Gregory Farnum g...@inktank.com wrote:
Are you using hard links, by any chance?
No, we are using a handful of soft links though
On Thu, Nov 1, 2012 at 6:00 PM, Dietmar Maurer diet...@proxmox.com wrote:
I always thought a distributed block storage could do such things
faster (or at least as fast) than a single centralized store?
That rather depends on what makes up each of them. ;)
On Thu, Nov 1, 2012 at 6:11 AM,
On Thu, Nov 1, 2012 at 10:33 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/11/1 Mark Nelson mark.nel...@inktank.com:
It will do that for a while, based on how you've tweaked the flush intervals
and various journal settings to determine how much data ceph will allow to
On Wed, Oct 31, 2012 at 11:55 PM, Ryan Nicholson
ryan.nichol...@kcrg.com wrote:
Correction:
Missed a carriage return when I copy/pasted at first, sorry...
Ryan
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Ryan
I suspect mon force standby active is your problem:
http://ceph.com/wiki/Standby-replay_modes#mon_force_standby_active
Try disabling that?
On Wed, Oct 31, 2012 at 6:02 PM, Matt Weil mw...@genome.wustl.edu wrote:
I have a system with a bunch of RAM that I want to remain the active MDS but
still
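(For anyone hunting for the knob later: it is a plain config option; a sketch with illustrative values, see the wiki page above for the exact semantics.)
[mds]
    # keep a standby daemon following the active MDS's journal
    mds standby replay = true
[mon]
    # see the Standby-replay_modes wiki page for what this flag does (default: true)
    mon force standby active = false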
...@seakr.com wrote:
Okay, that patch worked and it seems to be running, again. Should I continue
to run with that patch, or go back to the original binaries?
Gregory Farnum 10/19/12 4:16 PM
I've written a small patch on top of v0.48.1argonaut which should
avoid this. It's in branch 3369-mds
for that branch, or will there be a 0.48.3 release coming?
Gregory Farnum 11/03/12 11:45 AM
Sage merged it into master, so whatever you like. If you remove the
patch and the error happens again, your MDS will fail on replay as it
did here. If you leave it in, it has no effect other than handling
I'm not sure that latency addition is quite correct. Most use cases
do multiple IOs at the same time, and good benchmarks tend to
reflect that.
I suspect the IO limitations here are a result of QEMU's storage
handling (or possibly our client layer) more than anything else — Josh
can talk
On Thu, Nov 1, 2012 at 11:32 PM, Sam Lang sam.l...@inktank.com wrote:
On Thu 01 Nov 2012 11:22:59 AM CDT, Nathan Howell wrote:
We have a small (3 node) Ceph cluster that occasionally has issues. It
loses files and directories, truncates them or fills the contents with
NULL bytes. So far we
On Tue, Oct 30, 2012 at 9:27 AM, Roman Alekseev rs.aleks...@gmail.com wrote:
On 29.10.2012 22:57, Sam Lang wrote:
Hi Roman,
Is this with the ceph fuse client or the ceph kernel module?
It's not surprising that the local file system (/home) is so much faster
than a mounted ceph volume,
On Tue, Oct 30, 2012 at 10:45 AM, eric_yh_c...@wiwynn.com wrote:
Hi all:
I have some questions about the limitations of CephFS.
Would you please help to answer these questions? Thanks!
1. Max file size
It's currently set to an (arbitrary) 1TB. It can be set wherever you
like but is limited by
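(For the record, that 1TB cap is the "mds max file size" option, consulted when the filesystem is created; a sketch with the default value.)
[global]
    # maximum file size CephFS will allow (1 TB = 1099511627776 bytes)
    mds max file size = 1099511627776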
On Tue, Oct 30, 2012 at 11:04 AM, Roman Alekseev rs.aleks...@gmail.com wrote:
On 30.10.2012 13:10, Gregory Farnum wrote:
On Tue, Oct 30, 2012 at 9:27 AM, Roman Alekseev rs.aleks...@gmail.com
wrote:
On 29.10.2012 22:57, Sam Lang wrote:
Hi Roman,
Is this with the ceph fuse client
Not a lot of people are publicly discussing their sizes on things like
that, unfortunately. I believe DreamHost is still the most open. They
have an (RGW-based) object storage service which is backed by ~800
OSDs and are currently beta-testing a compute service using RBD, which
you can see
On Tue, Oct 30, 2012 at 2:36 PM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/10/30 Gregory Farnum g...@inktank.com:
Not a lot of people are publicly discussing their sizes on things like
that, unfortunately. I believe DreamHost is still the most open. They
have an (RGW
On Tue, Oct 30, 2012 at 2:38 PM, Stefan Priebe - Profihost AG
s.pri...@profihost.ag wrote:
On 30.10.2012 14:36, Gandalf Corvotempesta wrote:
2012/10/30 Gregory Farnum g...@inktank.com:
Not a lot of people are publicly discussing their sizes on things like
that, unfortunately. I believe