Hi Gregory,
another interesting aspect for me is:
How will a read request for this block/sub-block (pending between journal and OSD) be satisfied (assuming the client will not cache)?
Will this read go to the journal or to the OSD?
Best Regards,
-Dieter
On Tue, Mar 05, 2013 at 05:33:13AM
I have a new Ceph installation across 4 nodes with 10TB of storage.
I loaded a few TB of objects (averaging 2-3GB each) via the rados put command.
When I do a rados get command to retrieve one of these objects,
I get an unknown error 1464856576.
I am running bobtail, and none of the logs contains
Unit tests are added in test/filestore/store_test.cc for the
FileStore::_detect_fs method when using ext4. They test the following
situations:
* without user_xattr, ext4 fails
* mounted with user_xattr, ext4 fails if filestore_xattr_use_omap is false
* mounted with user_xattr, ext4 succeeds if filestore_xattr_use_omap is true
The first two of these were posted before, but they ran into
trouble because assertions that data information fields in
ceph messages get set only once were failing. For now, those
assertions are commented out.
The third and fourth patches in this series address the
reasons the assertions
Define ceph_msg_data_set_pagelist(), ceph_msg_data_set_bio(), and
ceph_msg_data_set_trail() to clearly abstract the assignment of the
remaining data-related fields in a ceph message structure. Use the
new functions in the osd client and mds client.
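For a sense of the shape being described, here is a minimal sketch of
one such helper; the struct field name (msg->pagelist) is an assumption
based on this description, not taken from the actual patch:

void ceph_msg_data_set_pagelist(struct ceph_msg *msg,
				struct ceph_pagelist *pagelist)
{
	/* Checks like these stay commented out until the callers
	 * are fixed later in the series. */
	/* BUG_ON(!pagelist); */
	/* BUG_ON(msg->pagelist); */

	msg->pagelist = pagelist;	/* assumed field */
}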
This partially resolves:
When an incoming message is destined for the osd client, the
messenger calls the osd client's alloc_msg method. That function
looks up which request has the tid matching the incoming message,
and returns the request message that was preallocated to receive the
response. The response message is
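As an illustration only of that lookup pattern, something along these
lines; lookup_request() is a made-up helper and the field names are
assumptions, not the real osd client code:

static struct ceph_msg *alloc_msg_sketch(struct ceph_connection *con,
					 struct ceph_msg_header *hdr,
					 int *skip)
{
	u64 tid = le64_to_cpu(hdr->tid);
	struct ceph_osd_request *req;

	/* find the outstanding request whose tid matches the header */
	req = lookup_request(con, tid);		/* hypothetical helper */
	if (!req) {
		*skip = 1;	/* no such request: drop the message */
		return NULL;
	}

	*skip = 0;
	return req->r_reply;	/* the preallocated response message */
}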
The mds client no longer tries to assign zero-length message data,
and the osd client no longer sets its data info more than once.
This allows us to activate assertions in the messenger to verify
these things never happen.
This resolves both of these:
http://tracker.ceph.com/issues/4263
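A hedged sketch of what activating those checks amounts to in one of
the setters (the msg->page_count and msg->page_alignment field names
are assumed here, not taken from the actual code):

void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
			     unsigned int page_count, size_t alignment)
{
	BUG_ON(!pages);		/* never zero-length data */
	BUG_ON(!page_count);
	BUG_ON(msg->pages);	/* never set more than once */

	msg->pages = pages;
	msg->page_count = page_count;
	msg->page_alignment = alignment;
}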
On 03/05/2013 05:33 AM, Xing Lin wrote:
Hi Gregory,
Thanks for your reply.
On 03/04/2013 09:55 AM, Gregory Farnum wrote:
The journal [min|max] sync interval values specify how frequently
the OSD's FileStore sends a sync to the disk. However, data is still
written into the normal filesystem as
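For reference, a hedged example of where those intervals live in
ceph.conf; the filestore option names and the values below are my
assumption of what is being referred to, not taken from this thread:

[osd]
	# how long the FileStore waits between syncs to disk (seconds);
	# illustrative values only
	filestore min sync interval = 0.01
	filestore max sync interval = 5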
(This patch is available as the top commit in branch
review/wip-4324 in the ceph-client git repository.)
In ceph_con_in_msg_alloc() it is possible for a connection's
alloc_msg method to indicate an incoming message should be skipped.
By default, read_partial_message() initializes the skip
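Illustrative only, to show the general shape of that allocate-or-skip
decision; the function below and its field names are assumptions, not
the actual messenger code:

static int in_msg_alloc_sketch(struct ceph_connection *con,
			       struct ceph_msg_header *hdr)
{
	int skip = 0;
	struct ceph_msg *msg;

	/* ask the connection owner for a message to receive into */
	msg = con->ops->alloc_msg(con, hdr, &skip);
	if (skip)
		return 0;	/* caller reads and discards the payload */
	if (!msg)
		return -ENOMEM;

	con->in_msg = msg;	/* assumed field name */
	return 0;
}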
On 03/05/2013 07:52 AM, Alex Elder wrote:
+void ceph_msg_data_set_pages(struct ceph_msg *msg, struct page **pages,
+ unsigned int page_count, size_t alignment)
+{
+ /* BUG_ON(!pages); */
+ /* BUG_ON(!page_count); */
+ /* BUG_ON(msg->pages); */
+ /*
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short and slightly alternate version: I spent most of about two weeks
working on bugs related to snapshots in the MDS, and we started realizing that
we could probably
On 03/04/2013 03:40 PM, Loic Dachary wrote:
CommandFailedError: Command failed on 10.20.0.7 with status 1: 'rmdir --
/tmp/cephtest/ubuntu@teuthology-2013-03-04_21-03-19'
DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: Exception:
failed to fetch package version from
On 03/05/2013 06:03 PM, Greg Farnum wrote:
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short and slightly alternate version: I spent most of about two weeks
working on bugs related to snapshots in the MDS, and
On Tuesday, March 5, 2013 at 10:08 AM, Wido den Hollander wrote:
On 03/05/2013 06:03 PM, Greg Farnum wrote:
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short and slightly alternate version: I spent most
On Tue, 5 Mar 2013, Greg Farnum wrote:
On Tuesday, March 5, 2013 at 10:08 AM, Wido den Hollander wrote:
On 03/05/2013 06:03 PM, Greg Farnum wrote:
This is a companion discussion to the blog post at
http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
The short
On 03/05/2013 07:28 PM, Sage Weil wrote:
On Tue, 5 Mar 2013, Greg Farnum wrote:
On Tuesday, March 5, 2013 at 10:08 AM, Wido den Hollander wrote:
On 03/05/2013 06:03 PM, Greg Farnum wrote:
This is a companion discussion to the blog post at
It's been two weeks and v0.58 is baked. Notable changes since v0.57
include:
* mon: rearchitected to utilize a single instance of paxos and a key/value
store (Joao Luis)
* librbd: fixed some locking issues with flatten (Josh Durgin)
* rbd: udevadm settle on map/unmap to avoid various races
On Tue, 5 Mar 2013, Wido den Hollander wrote:
Wido, by 'user quota' do you mean something that is uid-based, or would
enforcement on subtree/directory quotas be sufficient for your use cases?
I've been holding out hope that uid-based usage accounting is a thing of
the past and that
On Tuesday, March 5, 2013 at 7:33 AM, Alex Elder wrote:
(This patch is available as the top commit in branch
review/wip-4324 in the ceph-client git repository.)
In ceph_con_in_msg_alloc() it is possible for a connection's
alloc_msg method to indicate an incoming message should be skipped.
On Tuesday, March 5, 2013 at 5:54 AM, Wido den Hollander wrote:
On 03/05/2013 05:33 AM, Xing Lin wrote:
Hi Gregory,
Thanks for your reply.
On 03/04/2013 09:55 AM, Gregory Farnum wrote:
The journal [min|max] sync interval values specify how frequently
the OSD's FileStore sends
On Tuesday, March 5, 2013 at 12:37 AM, Dieter Kasper wrote:
Hi Gregory,
another interesting aspect for me is:
How will a read request for this block/sub-block (pending between journal and OSD) be satisfied (assuming the client will not cache)?
Will this read go to the journal or to the
On Monday, March 4, 2013 at 5:57 PM, Yan, Zheng wrote:
On 03/05/2013 02:26 AM, Gregory Farnum wrote:
On Thu, Feb 28, 2013 at 10:46 PM, Yan, Zheng zheng.z@intel.com wrote:
From: Yan, Zheng zheng.z@intel.com
I've merged this series into the testing branch, with appropriate Reviewed-by
tags from me (and Sage on #4). Thanks much for the code and helping me go
through it. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Monday, March 4, 2013 at 7:38 PM, Yan, Zheng wrote:
On
Hi,
right now I have a bunch of OSD hosts (servers) which have just 4 disks
each. All of them use SSDs right now.
So I have a lot of free hard disk slots in the chassis. So my idea was to
create a second Ceph system using these free slots. Is this possible? Or
should I just use the first one
[Re-adding ceph-devel]
On Tue, 5 Mar 2013, Stefan Priebe wrote:
Hi Sage,
thanks for the great new features. Are there any plans for incremental rbd
exports? Or export of just the changed rados objects between two snapshots?
The librados bits just landed in master today (thanks to David
On Tuesday, March 5, 2013 at 5:53 AM, Alex Elder wrote:
The ceph file system doesn't typically send information in the
data portion of a message. (It relies on some functionality
exported by the osd client to read and write page data.)
There are two spots it does send data though. The value
On Tuesday, March 5, 2013 at 5:53 AM, Alex Elder wrote:
The mds client no longer tries to assign zero-length message data,
and the osd client no longer sets its data info more than once.
This allows us to activate assertions in the messenger to verify
these things never happen.
This
On 03/05/2013 06:30 PM, Josh Durgin wrote:
On 03/04/2013 03:40 PM, Loic Dachary wrote:
CommandFailedError: Command failed on 10.20.0.7 with status 1: 'rmdir --
/tmp/cephtest/ubuntu@teuthology-2013-03-04_21-03-19'
DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: Exception:
There have been a few important bug fixes that people are hitting or
want:
- the journal replay bug (5d54ab154ca790688a6a1a2ad5f869c17a23980a)
- the '-' vs '_' pool name cap parsing issue that is biting OpenStack users
- ceph-disk-* changes to support latest ceph-deploy
If there are other things
Thanks very much for all your explanations. I am now much clearer about
it. Have a great day!
Xing
On 03/05/2013 01:12 PM, Greg Farnum wrote:
All the data goes to the disk in write-back mode so it isn't safe yet
until the flush is called. That's why it goes into the journal first, to
be
As an extra request, it would be great if people explained a little
about their use-case for the filesystem so we can better understand
how the features requested map to the type of workloads people are
trying.
Thanks
Neil
On Tue, Mar 5, 2013 at 9:03 AM, Greg Farnum g...@inktank.com wrote: