The system call is invoked in FileStore::_do_transaction().
Cheers,
xinxin
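For context, FileStore::_do_transaction() iterates over the ops encoded in a transaction and maps each one to the corresponding filesystem call. A rough Python model of that dispatch (the `do_transaction` name and the op tuples are invented for illustration; the real code is C++ inside Ceph's FileStore):

```python
import os
import tempfile

def do_transaction(ops, root):
    """Apply a list of (op, args...) tuples, roughly the way
    _do_transaction() walks a Transaction's ops.  Illustrative only."""
    for op, *args in ops:
        if op == "write":
            path, data = args
            with open(os.path.join(root, path), "wb") as f:
                f.write(data)          # ends up as write(2) on the backing FS
        elif op == "remove":
            (path,) = args
            os.unlink(os.path.join(root, path))   # unlink(2)
        else:
            raise ValueError("unknown op: %s" % op)

root = tempfile.mkdtemp()
do_transaction([("write", "obj1", b"hello")], root)
```

The point is simply that each queued op is translated into one or more system calls at this spot, which is why it shows up when tracing syscalls.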
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Sudarsan, Rajesh
Sent: Thursday, August 14, 2014 3:01 PM
To: ceph-de...@vger.kernel.org; ceph-users@lists.ceph.com
Subject: [ceph-users] Tracking
On Thu, 14 Aug 2014 12:07:54 -0700 Craig Lewis wrote:
On Thu, Aug 14, 2014 at 12:47 AM, Christian Balzer ch...@gol.com wrote:
Hello,
On Tue, 12 Aug 2014 10:53:21 -0700 Craig Lewis wrote:
That's a low probability, given the number of disks you have. I
would've taken that bet (with
Hi,
With EC pools in Ceph you are free to choose any K and M parameters you
like. The documentation explains what K and M do, so far so good.
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pros and
cons that I
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
Hi,
I've been trying to tweak and improve the performance of our ceph cluster.
One of the operations that I can't seem to be able to improve much is the
delete. From what I've gathered every time there is a delete it goes
directly to the HDD, hitting its performance - the op may be recorded in
On 08/15/2014 12:23 PM, Loic Dachary wrote:
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
On 08/15/2014 06:24 AM, Wido den Hollander wrote:
On 08/15/2014 12:23 PM, Loic Dachary wrote:
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
Loic might have a better answer, but I think that
On 15/08/2014 13:24, Wido den Hollander wrote:
On 08/15/2014 12:23 PM, Loic Dachary wrote:
Hi Erik,
On 15/08/2014 11:54, Erik Logtenberg wrote:
On 15/08/2014 14:36, Erik Logtenberg wrote:
Now, there are certain combinations of K and M that appear to have more
or less the same result. Do any of these combinations have pros and
cons that I should consider and/or are there best practices for
choosing the right K/M-parameters?
Loic
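To make the trade-off concrete, here is a small sketch (not from the thread) comparing K/M pairs that share the same storage overhead but differ in how many chunk losses they survive and how many OSDs each object touches:

```python
from fractions import Fraction

def ec_profile(k, m):
    """Basic properties of an erasure-coded layout with k data chunks
    and m coding chunks."""
    return {
        "chunks": k + m,                 # OSDs touched per object
        "overhead": Fraction(k + m, k),  # raw bytes stored per byte of data
        "tolerates": m,                  # simultaneous chunk losses survivable
    }

# Two K/M pairs with identical 1.5x overhead but different behaviour:
a = ec_profile(2, 1)   # 3 chunks/object, survives 1 loss
b = ec_profile(4, 2)   # 6 chunks/object, survives 2 losses
```

So "same overhead ratio" does not mean "same result": the wider profile tolerates more failures but spreads every object across more OSDs, which affects durability, recovery traffic, and the minimum cluster size.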
On Fri, 15 Aug 2014, Haomai Wang wrote:
Hi Kenneth,
I don't find valuable info in your logs; they lack the necessary
debug output from the code path that crashes.
But I scanned the encode/decode implementation in GenericObjectMap and
found something bad.
For example, two oids have the same hash and
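The kind of encoding hazard hinted at here (two oids sharing a hash) can be sketched in a few lines. This is an illustrative model only, not Ceph's actual GenericObjectMap code; `good_key` and `bad_key` are invented names. In an ordered key/value store, the string encoding must sort the same way as the (hash, name) pair, which a variable-width hash encoding does not:

```python
def good_key(hash_, name):
    """Fixed-width hex hash, then name: byte order matches (hash, name) order."""
    return "%08x.%s" % (hash_, name)

def bad_key(hash_, name):
    """Variable-width hash: lexicographic order no longer matches numeric order."""
    return "%x.%s" % (hash_, name)

pairs = [(0x2, "b"), (0x10, "a")]
good = sorted(good_key(h, n) for h, n in pairs)
bad = sorted(bad_key(h, n) for h, n in pairs)
# good keeps hash 0x2 before 0x10; bad sorts "10.a" before "2.b" (wrong)
```

Bugs of this shape only bite once two objects actually collide on the ambiguous part of the key, which is why they can hide for a long time.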
I haven't done the actual calculations, but given some % chance of disk
failure, I would assume that losing x out of y disks has roughly the
same chance as losing 2*x out of 2*y disks over the same period.
That's also why you generally want to limit RAID5 arrays to maybe 6
disks or so and
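The back-of-envelope claim above can be checked with a simple binomial model. This is an illustrative sketch only: it assumes independent disk failures with a made-up per-disk probability, which real disks (correlated failures, rebuild windows) violate:

```python
from math import comb

def p_data_loss(n, m, p):
    """Probability that more than m of n disks fail together, i.e. data loss
    for a k+m layout with n = k+m, assuming independent failures with
    per-disk probability p over the window of interest (p is illustrative)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(m + 1, n + 1))

# Doubling both x and y does NOT keep the risk the same:
small = p_data_loss(3, 1, 0.01)   # lose >1 of 3 disks
big = p_data_loss(6, 2, 0.01)     # lose >2 of 6 disks
```

Under this model the wider layout is roughly an order of magnitude safer at the same overhead, because losing m+1 disks out of 2y is combinatorially much less likely than losing (m/2)+1 out of y when p is small.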
On 15/08/2014 15:42, Erik Logtenberg wrote:
After dealing with ubuntu for a few days I decided to circle back to centos 7.
It appears that the latest ceph deploy takes care of the initial issues I had.
Now i'm hitting a new issue that has to do with an improperly defined url.
When I do ceph-deploy install node1 node2 node3 it fails
Found the file. You need to edit
/usr/lib/python2.7/site-packages/ceph_deploy/hosts/centos/install.py: on line 31,
change the return statement to return 'rhel' + distro.normalized_release.major
Probably a bug that needs to be fixed in the deploy packages.
Hi,
When I attempt to use the ceph-deploy install command on one of my nodes I
get this error:
[ceph1][WARNIN] W: Failed to fetch
http://ceph.com/packages/google-perftools/debian/dists/wheezy/main/binary-armhf/Packages
404 Not Found [IP: 208.113.241.137 80]
[ceph1][WARNIN]
[ceph1][WARNIN] E: Some
Hi ceph team. I encountered a problem like bug #8641 (it's bigger than that one).
The eviction doesn't work, so I want to understand the logic of evicting data and
where the code that controls eviction lives.
The following text is my configuration.
max_bytes is 1G;
max_objects is 1M.
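The agent's trigger logic can be approximated with a small model. This is illustrative only, not the actual Ceph code (the real tiering agent also considers hit sets and dirty ratios); the parameter names mirror the pool settings target_max_bytes / target_max_objects / cache_target_full_ratio. Note that a target of 0 disables the size-based trigger, which is a common reason eviction appears to "not work":

```python
def evict_fraction(used_bytes, used_objects,
                   target_max_bytes, target_max_objects,
                   cache_target_full_ratio=0.8):
    """Return how 'full' the cache is relative to its eviction trigger.
    A value >= 1.0 means the agent should be evicting.  Simplified model."""
    fullness = 0.0
    if target_max_bytes:      # a target of 0 disables the byte-based check
        fullness = max(fullness,
                       used_bytes / (target_max_bytes * cache_target_full_ratio))
    if target_max_objects:    # likewise for the object-count check
        fullness = max(fullness,
                       used_objects / (target_max_objects * cache_target_full_ratio))
    return fullness
```

With the configuration quoted above (max_bytes 1G, max_objects 1M), a cache holding 900 MB would already be past the default 0.8 full ratio and should be evicting under this model.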
Hello everyone:
Since there's no cuttlefish package for 14.04 server in the ceph
repository (only ceph-deploy is there), I tried to build cuttlefish from
source on 14.04.
Here's what I did:
Get source by following http://ceph.com/docs/master/install/clone-source/
Enter the sourcecode directory
git
Actually, we haven't had any CentOS7 builds, that is why there is no
`el7` in the repos. We are in the middle of getting that sorted out.
Sorry you had to find this!
Also, keep in mind that there is really no need to edit those files.
You can tell ceph-deploy what URL to use and force it with:
Running into an issue w/ Cuttlefish where an RBD snap removal (from
OpenStack Glance) crashed my MON. I was able to get the MON back up and
running by shutting Glance off, and restarting the MON.
Now, the OSDs are crashing when trying to catch up, seemingly due to the
same snapshot.
OSD Log
Hi there,
I am using CentOS 7 with Ceph version 0.80.5
(38b73c67d375a2552d8ed67843c8a65c2c0feba6), 3 OSD, 3 MON, 1 RadosGW (which
also serves as ceph-deploy node)
I followed all the instructions in the docs, regarding setting up a basic
Ceph cluster, and then followed the one to setup RadosGW.
I
There have been a ton of updates to Kraken over the past few months. Feel free
to take a look here: http://imgur.com/fDnqpO9
Just as easy to set up as before, with a lot more functionality. OSD+MON+AUTH
operations are coming in the next release.
-Original Message-
From: Loic Dachary
It just hasn't been implemented yet. The developers are mostly working on
big features, and waiting to do these small optimizations later. I'm sure
there are plans to address this, but I doubt it will be soon.
If you're interested, you're welcome to contribute:
Hi,
I have created a single-node/single-OSD cluster with the latest master
for an experiment and saw that it creates only the rbd pool by default, not the
data/metadata pools. Is this something that changed recently?
Thanks & Regards,
Somnath