On Fri, Jan 23, 2015 at 1:43 PM, Sage Weil sw...@redhat.com wrote:
Background:
1) Way back when we made a task that would thrash the cache modes by
adding and removing the cache tier while ceph_test_rados was running.
This mostly worked, but would occasionally fail because we would
On Fri, Jan 23, 2015 at 2:18 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 23 Jan 2015, Gregory Farnum wrote:
On Fri, Jan 23, 2015 at 1:43 PM, Sage Weil sw...@redhat.com wrote:
Background:
1) Way back when we made a task that would thrash the cache modes by
adding and removing the cache
On Mon, Feb 2, 2015 at 7:00 AM, Loic Dachary l...@dachary.org wrote:
On 02/02/2015 14:48, Yan, Zheng wrote:
On Mon, Feb 2, 2015 at 9:18 PM, Loic Dachary l...@dachary.org wrote:
Hi,
http://pulpito.ceph.com/loic-2015-01-29_15:39:38-rbd-dumpling-backports---basic-multi/730029/
hangs on
On Fri, Feb 6, 2015 at 1:46 PM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
I have been recently looking at implementing object expiration in rgw. First,
a brief description of the feature:
S3 provides mechanisms to expire objects, and/or to transition them into
a different storage class.
On Tue, Feb 3, 2015 at 4:12 AM, Ding Dinghua dingdinghu...@gmail.com wrote:
Hi all:
I don't understand why SnapMapper::get_prefix static_casts snap
to unsigned:
string SnapMapper::get_prefix(snapid_t snap)
{
  char buf[100];
  int len = snprintf(
    buf, sizeof(buf),
    "%.*X_",
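For context, a self-contained sketch of what that format string does (variable names here are illustrative, not the exact Ceph source): %.*X prints an unsigned value as zero-padded uppercase hex, taking the digit count from the '*' argument, which is why a cast appears at all; whether narrowing a 64-bit snapid_t down to unsigned is intentional is exactly the question being asked.

#include <cstdio>
#include <cstdint>

int main() {
  uint64_t snap = 0xABCD;                    // stand-in for snapid_t
  char buf[100];
  int len = std::snprintf(buf, sizeof(buf), "%.*X_",
                          (int)(sizeof(snap) * 2),  // pad to 16 hex digits
                          (unsigned)snap);          // %X needs unsigned; the
                                                    // high 32 bits are dropped
  std::printf("%.*s\n", len, buf);                  // 000000000000ABCD_
  return 0;
}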
The guys who would normally handle this have been traveling lately, so
they're probably behind on such things.
That said, Github pull requests are probably a more reliable
transmission channel than the mailing list. :)
-Greg
On Tue, Feb 3, 2015 at 11:31 AM, Yazen Ghannam yazen.ghan...@linaro.org
Cheng Cheng
On Thu, Jan 15, 2015 at 12:59 PM, Gregory Farnum g...@gregs42.com wrote:
On Thu, Jan 15, 2015 at 9:53 AM, Cheng Cheng ccheng@gmail.com wrote:
Hi Ceph,
I am wondering whether there is a mechanism to prioritize the
rbd_aio_write/rbd_aio_read I/Os. Currently all RBD I/Os
On Thu, Jan 15, 2015 at 9:53 AM, Cheng Cheng ccheng@gmail.com wrote:
Hi Ceph,
I am wondering whether there is a mechanism to prioritize the
rbd_aio_write/rbd_aio_read I/Os. Currently all RBD I/Os are issued FIFO to the
rados layer, and there is NO QoS mechanism to control the priority of these
On Thu, Jan 15, 2015 at 9:44 AM, Sage Weil sw...@redhat.com wrote:
In addition to (or instead of) making the API harder to fat-finger, we could
also add a mon config option like
mon allow pool deletion = false
that defaults off. Then, to delete any pool, you need to update ceph.conf
and
Oh, I think it might be fine to require setting a config option before
you delete stuff; I just don't want to prevent the option from being
injectable. :)
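A sketch of how that guard might be used day to day, assuming the option name proposed above and the existing injectargs mechanism (the exact spelling of the option is still hypothetical at this point):

[mon]
mon allow pool deletion = false

# one-off deletion without touching ceph.conf or restarting:
ceph tell mon.* injectargs '--mon-allow-pool-deletion=true'
ceph osd pool delete foo foo --yes-i-really-really-mean-it
ceph tell mon.* injectargs '--mon-allow-pool-deletion=false'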
On Thu, Jan 15, 2015 at 10:07 AM, Sage Weil sw...@redhat.com wrote:
On Thu, 15 Jan 2015, Gregory Farnum wrote:
On Thu, Jan 15, 2015 at 9:44
On Sun, Jan 18, 2015 at 11:02 PM, Mykola Golub mgo...@mirantis.com wrote:
On Sun, Jan 18, 2015 at 10:33:05AM -0800, Sage Weil wrote:
On Sun, 18 Jan 2015, Mykola Golub wrote:
Hi Ceph,
Right now, on a monitor that is not the leader, if a received command is not
supported locally but is supported by
On Thu, Jan 15, 2015 at 2:44 PM, Michael Sevilla mikesevil...@gmail.com wrote:
Let me know if this works and/or you need anything else:
https://www.dropbox.com/s/fq47w6jebnyluu0/lookup-logs.tar.gz?dl=0
Beware - the clients were on debug=10. Also, I tried this with the
kernel client and it is
Sage, are these uncaught assertion errors something we normally
ignore? I'm not familiar with any code that tries to catch errors in
our standard init patterns, which is what looks to be the problem on
these new coverity issues in cephfs-table-tool.
-Greg
On Fri, Jan 16, 2015 at 6:39 AM,
On Fri, Jan 16, 2015 at 10:34 AM, Michael Sevilla
mikesevil...@gmail.com wrote:
On Thu, Jan 15, 2015 at 10:37 PM, Gregory Farnum g...@gregs42.com wrote:
On Thu, Jan 15, 2015 at 2:44 PM, Michael Sevilla mikesevil...@gmail.com
wrote:
Let me know if this works and/or you need anything else
On Fri, Jan 16, 2015 at 11:28 AM, Lipeng Wan lipengwa...@gmail.com wrote:
Dear all,
Does Ceph provide a way to collect object-level I/O access traces?
Specifically, can we collect the traces to record how many times each
object has been accessed (read, write, etc.) during a fixed period of
, ceph-mds.ceph-node1.log, etc., which
log file should I look at? Maybe the ceph-mds.ceph-node1.log?
Specifically, is there any keyword I can search in the log file to
locate the object operations?
Thanks!
LW
On Fri, Jan 16, 2015 at 4:21 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, Jan
Can you post the full logs somewhere to look at? These bits aren't
very helpful on their own (except to say, yes, the client cleared its
I_COMPLETE for some reason).
On Tue, Jan 13, 2015 at 3:45 PM, Michael Sevilla mikesevil...@gmail.com wrote:
On Tue, Jan 13, 2015 at 11:13 AM, Gregory Farnum g
On Wed, Feb 11, 2015 at 8:42 AM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
Yesterday the dumpling giant backport integration branches were approved by
Yehuda, Sam and Josh and were handed over to QE. An interesting discussion
followed and it revealed that my understanding of the
On Wed, Feb 11, 2015 at 9:33 AM, Alyona Kiseleva
akisely...@mirantis.com wrote:
Hi,
I would like to propose something.
There are a lot of perf counters in different places in the code, but most
of them are undocumented. I found only one commented counter in the OSD.cc code,
but not for all
On Wed, Feb 11, 2015 at 4:09 AM, GuangYang yguan...@outlook.com wrote:
Hi ceph-devel,
Recently we have been trying the upgrade from Firefly to Giant, and it has gone pretty
smoothly; however, the problem is that it does not support rollback, and it seems
like that is by design. For example, there is new
On Wed, Feb 11, 2015 at 10:19 AM, Loic Dachary l...@dachary.org wrote:
On 11/02/2015 18:27, Gregory Farnum wrote:
Mmm. I'm happy to look at suites that get run this way but I'm
unlikely to notice them go by on the list if I'm not poked about them
— I generally filter out anything that has
On Tue, Feb 10, 2015 at 9:22 AM, Loic Dachary l...@dachary.org wrote:
On 10/02/2015 18:19, Yuri Weinstein wrote:
Loic,
The only difference between the options - whether we run suites on merged dumpling or
dumpling-backports first - is time.
We will have to run suites on the final branch after the
Nifty; it's good to have that sort of blog-style documentation about
the interface. Are you planning to do some work with it that you can
show off as well? :)
-Greg
On Tue, Jan 27, 2015 at 12:48 PM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
I wrote an article about the object store API - How it
On Wed, Jan 28, 2015 at 10:06 AM, Sage Weil s...@newdream.net wrote:
On Wed, 28 Jan 2015, John Spray wrote:
On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum g...@gregs42.com wrote:
My concern is whether we as the FS are responsible for doing anything
more than storing and returning
On Wed, Jan 28, 2015 at 5:24 AM, John Spray john.sp...@redhat.com wrote:
We don't implement the GETFLAGS and SETFLAGS ioctls used for +i.
Adding the ioctls is pretty easy, but then we need somewhere to put
the flags. Currently we don't store a flags attribute on inodes,
but maybe we could
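For context, these are the stock Linux ioctls in question, as chattr(1) drives them (a sketch of the generic interface per ioctl_iflags(2), not CephFS code; the open question above is where CephFS would persist the flag):

#include <sys/ioctl.h>
#include <linux/fs.h>

int set_immutable(int fd) {
  int attr;
  if (ioctl(fd, FS_IOC_GETFLAGS, &attr) == -1)   /* read current flags */
    return -1;
  attr |= FS_IMMUTABLE_FL;                       /* the +i bit */
  return ioctl(fd, FS_IOC_SETFLAGS, &attr);      /* write them back */
}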
On Mon, Jan 5, 2015 at 4:16 AM, Loic Dachary l...@dachary.org wrote:
On 05/01/2015 13:03, John Spray wrote:
Sounds sane -- is the new plan to always do backports via this
process? i.e. if I see a backport PR which has not been through
integration testing, should I refrain from merging it?
On Mon, Jan 5, 2015 at 4:12 PM, Loic Dachary l...@dachary.org wrote:
:-) This process is helpful if it allows me to help a little more than I
currently do with the backport process. It would be a loss if the end result
is that everyone cares less about backports. My primary incentive for
On Fri, Jan 9, 2015 at 7:20 AM, Sage Weil sw...@redhat.com wrote:
Should we drop this entirely in hammer?
Yes!
If I remember correctly all of
the layout stuff is fully supported using virtual xattrs and standard
tools. The only thing left is the tool that shows you how file blocks map
to
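For anyone following along, the virtual xattrs in question work with stock tools; for example (paths illustrative):

getfattr -n ceph.file.layout /mnt/cephfs/somefile
setfattr -n ceph.file.layout.stripe_unit -v 4194304 /mnt/cephfs/somefile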
On Fri, Jan 9, 2015 at 10:00 AM, Sage Weil sw...@redhat.com wrote:
On Fri, 9 Jan 2015, Gregory Farnum wrote:
On Fri, Jan 9, 2015 at 7:20 AM, Sage Weil sw...@redhat.com wrote:
Should we drop this entirely in hammer?
Yes!
If I remember correctly all of
the layout stuff is fully supported
On Fri, Jan 9, 2015 at 10:08 AM, Sage Weil sw...@redhat.com wrote:
On Fri, 9 Jan 2015, Gregory Farnum wrote:
On Fri, Jan 9, 2015 at 10:00 AM, Sage Weil sw...@redhat.com wrote:
On Fri, 9 Jan 2015, Gregory Farnum wrote:
On Fri, Jan 9, 2015 at 7:20 AM, Sage Weil sw...@redhat.com wrote
On Mon, Jan 12, 2015 at 10:17 PM, Michael Sevilla
mikesevil...@gmail.com wrote:
I can't get consistent performance with 1 MDS. I have 2 clients create
100,000 files (separate directories) in a CephFS mount. I ran the
experiment 5 times (deleting the pools/fs and restarting the MDS in
between
On Mon, Feb 9, 2015 at 7:33 AM, Yehuda Sadeh-Weinraub yeh...@redhat.com wrote:
- Original Message -
From: Sage Weil s...@newdream.net
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: Ceph Development ceph-devel@vger.kernel.org
Sent: Monday, February 9, 2015 3:42:40 AM
Subject: Re:
On Sun, Feb 8, 2015 at 1:38 PM, Sage Weil sw...@redhat.com wrote:
Simon Leinen at Switch did a great post recently about the impact of
scrub on their cluster(s):
http://blog.simon.leinen.ch/2015/02/ceph-deep-scrubbing-impact.html
Basically the 2 week deep scrub interval kicks in on
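The interval in question is a per-OSD option expressed in seconds. A sketch, assuming the osd deep scrub interval option from the docs of that era, with two weeks written out:

[osd]
osd deep scrub interval = 1209600   ; 60*60*24*14 seconds = two weeks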
Right.
So, memory usage of an OSD is usually linear in the number of PGs it
hosts. However, that memory can also grow based on at least one other
thing: the number of OSD Maps required to go through peering. It
*looks* to me like this is what you're running into, not growth in
the number of
- Original Message -
From: Haomai Wang haomaiw...@gmail.com
To: Gregory Farnum gfar...@redhat.com
Cc: Sage Weil sw...@redhat.com, ceph-devel@vger.kernel.org
Sent: Friday, February 6, 2015 8:16:42 AM
Subject: Re: About in_seq, out_seq in Messenger
On Fri, Feb 6, 2015 at 10:47 PM
On Tue, Feb 10, 2015 at 10:04 AM, Loic Dachary l...@dachary.org wrote:
On 10/02/2015 18:29, Yuri Weinstein wrote:
On 10/02/2015 18:19, Yuri Weinstein wrote:
Loic,
The only difference between the options - whether we run suites on merged dumpling or
dumpling-backports first - is time.
We will have to
On Tue, Feb 10, 2015 at 10:26 AM, Sage Weil sw...@redhat.com wrote:
On Tue, 10 Feb 2015, Somnath Roy wrote:
Thanks Sam !
So, is it safe to reorder if a transaction contains *no*
remove/truncate/create/add calls?
For example, do we need to preserve ordering in case of the below
transaction ?
On Tue, Feb 10, 2015 at 10:33 AM, Loic Dachary l...@dachary.org wrote:
On 10/02/2015 19:25, Gregory Farnum wrote:
Now, as it happens there are some reasons to maintain a dumpling
branch that isn't part of backports. We've been doing a lot of work
lately to make the nightlies behave well
On Tue, Feb 10, 2015 at 10:55 AM, Stefan Priebe s.pri...@profihost.ag wrote:
Hello,
Last year in June I already reported this, but there was no real result.
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-July/041070.html)
I then had the hope that this would fix itself when
Haomai,
On Sun, Feb 8, 2015 at 2:22 AM, ceph.git ceph-com...@ceph.com wrote:
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project .
The branch, master has been updated
via
On Thu, Feb 12, 2015 at 12:48 AM, GuangYang yguan...@outlook.com wrote:
Thanks Sage and Greg for the response.
2) having a separate switchover point (besides the code upgrade) which
enables all the disk change bits and which doesn't allow you to roll
back.
Let me give two examples which
On Fri, Feb 13, 2015 at 5:05 AM, Sage Weil sw...@redhat.com wrote:
Got this from JJ:
The SA expanded on this by stating that there are basically three main
scenarios here:
1) We trust the UID/GID in a controlled environment. In which case we
can safely rely on the POSIX permissions. As long
On Fri, Feb 13, 2015 at 3:35 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 13 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 13, 2015 at 5:05 AM, Sage Weil sw...@redhat.com wrote:
Got this from JJ:
The SA expanded on this by stating that there are basically three main
scenarios here:
1
On Fri, Feb 13, 2015 at 10:34 PM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
I'm curious to know how you handle the flow of mails from QA runs. Here is a
wild guess:
* from time to time check that the nightlies run the suites that should be run
Uh, I guess?
* read the ceph-qa reports
On Tue, Jan 6, 2015 at 12:39 AM, Loic Dachary l...@dachary.org wrote:
On 06/01/2015 01:22, Gregory Farnum wrote:
On Mon, Jan 5, 2015 at 4:12 PM, Loic Dachary l...@dachary.org wrote:
:-) This process is helpful if it allows me to help a little more than I
currently do with the backport
So last month a bunch of librados functions around watch-notify were
marked as deprecated, and because RBD still uses them everything went
yellow on the gitbuilders. I believe we're expecting a patch series to
move to the new APIs pretty soon, but was wondering when.
In particular, a spot check
On Tue, Jan 6, 2015 at 8:44 AM, Sage Weil sw...@redhat.com wrote:
Hey,
In an exchange on linux-fsdevel yesterday it became clear that even when
FIEMAP isn't buggy it's not a good interface to build a map of sparse
files. For example, XFS defrag or other future fs features may muck with
Clocks in the labs have seemed a lot less well-synced lately than they
had been previously. :( I think there was some issue and then a change
to the NTP configuration, but I'm not clear on the details.
-Greg
On Fri, Feb 20, 2015 at 3:08 PM, David Zafman dzaf...@redhat.com wrote:
On 2 of my
Yeah. If this has gotten easier it's fine, but asphyxiate required a
*lot* of tooling that I'd rather we not require as developer build
deps. I'd imagine we can just produce them as part of the Jenkins
build procedure or something?
-Greg
On Tue, Mar 17, 2015 at 12:27 PM, David Zafman
On Tue, Mar 17, 2015 at 6:46 AM, Sage Weil s...@newdream.net wrote:
On Tue, 17 Mar 2015, Ning Yao wrote:
2015-03-16 22:06 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
On Mon, Mar 16, 2015 at 10:04 PM, Xinze Chi xmdx...@gmail.com wrote:
How to process the write request in primary?
Thanks.
Thanks,
Matt
Matt Conner
keepertechnology
matt.con...@keepertech.com
(240) 461-2657
On Wed, Mar 18, 2015 at 4:11 PM, Gregory Farnum g...@gregs42.com wrote:
On Wed, Mar 18, 2015 at 12:59 PM, Sage Weil s...@newdream.net wrote:
On Wed, 18 Mar 2015, Matt Conner wrote:
I'm working with a 6 rack
On Mon, Mar 9, 2015 at 8:42 AM, Dan van der Ster d...@vanderster.com wrote:
Hi Sage,
On Tue, Feb 10, 2015 at 2:51 AM, Sage Weil s...@newdream.net wrote:
On Mon, 9 Feb 2015, David McBride wrote:
On 09/02/15 15:31, Gregory Farnum wrote:
So, memory usage of an OSD is usually linear
On Tue, Mar 24, 2015 at 4:26 AM, Alistair Israel aisr...@gmail.com wrote:
Thank you Loïc and Sage for the encouragement!
Yes, we'll look into CMake if it simplifies managing the build.
However, a stretch goal is to possibly have the same autotools build
scripts generate .exe and .dll files
:
https://www.dropbox.com/s/uvmexh9impd3f3c/forgreg.tar.gz?dl=0
In this run, only client 1 starts doing the extra lookups.
On Fri, Jan 16, 2015 at 10:43 AM, Gregory Farnum g...@gregs42.com wrote:
On Fri, Jan 16, 2015 at 10:34 AM, Michael Sevilla
mikesevil...@gmail.com wrote:
On Thu, Jan 15, 2015
are
noticeably slow.
-Greg
On Fri, Mar 27, 2015 at 4:50 PM, Gregory Farnum g...@gregs42.com wrote:
On Fri, Mar 27, 2015 at 2:46 PM, Barclay Jameson
almightybe...@gmail.com wrote:
Yes it's the exact same hardware except for the MDS server (although I
tried using the MDS on the old node).
I
On Mon, Mar 30, 2015 at 1:01 PM, Sage Weil s...@newdream.net wrote:
Resurrecting this thread since we need to make a decision soon. The
opinions broke down like so:
A - me
B - john
C - alex
D - loic (and drop release names), yehuda, ilya
openstack - dmsimard
So, most people seem to
On Mon, Mar 23, 2015 at 6:21 AM, Olivier Bonvalet ceph.l...@daevel.fr wrote:
Hi,
I'm still trying to find out why there are many more write operations on the
filestore since Emperor/Firefly than under Dumpling.
Do you have any history around this? It doesn't sound familiar,
although I bet it's because
On Sat, Mar 21, 2015 at 10:46 AM, shylesh kumar shylesh.mo...@gmail.com wrote:
Hi ,
I was going through this simplified crush algorithm given in ceph website.
def crush(pg):
    all_osds = ['osd.0', 'osd.1', 'osd.2', ...]
    result = []
    # size is the number of copies; primary+replicas
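    # The snippet above is cut off; per the simplified pseudocode in the
    # Ceph docs it continues roughly as follows (a sketch from memory --
    # deliberately simplified, not the real CRUSH algorithm):
    while len(result) < size:
        r = hash(pg)
        chosen = all_osds[r % len(all_osds)]
        if chosen in result:
            continue  # an OSD can be picked only once
        result.append(chosen)
    return result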
On Mon, Mar 23, 2015 at 7:20 AM, Loic Dachary l...@dachary.org wrote:
Hi,
When scheduling suites that are low priority (giant for instance at
http://pulpito.ceph.com/loic-2015-03-23_01:09:31-rados-giant---basic-multi/),
the --priority 1000 is set because (if I remember correctly) this is
On Wed, Mar 4, 2015 at 7:03 AM, Csaba Henk ch...@redhat.com wrote:
- Original Message -
From: Danny Al-Gaaf danny.al-g...@bisect.de
To: Csaba Henk ch...@redhat.com, OpenStack Development Mailing List
(not for usage questions)
openstack-...@lists.openstack.org
Cc:
On Sat, Feb 28, 2015 at 6:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The fs teuthology suite for the next firefly release as found in
https://github.com/ceph/ceph/commits/firefly-backports came back with three
failures: http://tracker.ceph.com/issues/10641#fs. Do you think it is
On Sun, Mar 1, 2015 at 2:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
On 01/03/2015 06:00, Gregory Farnum wrote:
On Sat, Feb 28, 2015 at 6:18 AM, Loic Dachary l...@dachary.org wrote:
Hi Greg,
The fs teuthology suite for the next firefly release as found in
https://github.com/ceph
stress test for lossless_peer_reuse policy, it
can reproduce it easily
On Wed, Feb 25, 2015 at 2:27 AM, Gregory Farnum gfar...@redhat.com wrote:
On Feb 24, 2015, at 7:18 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar...@redhat.com wrote
- Original Message -
From: John Spray john.sp...@redhat.com
To: ceph-devel@vger.kernel.org, z...@redhat.com, Gregory Farnum
gfar...@redhat.com
Sent: Thursday, February 19, 2015 2:23:21 PM
Subject: ceph-fuse remount issues
Background: a while ago, we found (#10277) that existing
On Tue, Feb 17, 2015 at 9:37 AM, Mark Nelson mnel...@redhat.com wrote:
Hi All,
I wrote up a short document describing some tests I ran recently to look at
how SSD backed OSD performance has changed across our LTS releases. This is
just looking at RADOS performance and not RBD or RGW. It also
On Feb 24, 2015, at 7:18 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar...@redhat.com wrote:
On Feb 12, 2015, at 9:17 PM, Haomai Wang haomaiw...@gmail.com wrote:
On Fri, Feb 13, 2015 at 1:26 AM, Greg Farnum gfar...@redhat.com wrote:
Sorry
So this is exactly the same test you ran previously, but now it's on
faster hardware and the test is slower?
Do you have more data in the test cluster? One obvious possibility is
that previously you were working entirely in the MDS' cache, but now
you've got more dentries and so it's kicking data
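If the cache hypothesis is right, one cheap way to test it is to enlarge the MDS cache and rerun; a sketch using the option name from that era (value illustrative; the default was 100000 inodes):

[mds]
mds cache size = 1000000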
there are and what they have permissions on and check; otherwise
you'll have to figure it out from the client side.
-Greg
Thanks for the input!
On Fri, Mar 27, 2015 at 3:04 PM, Gregory Farnum g...@gregs42.com wrote:
So this is exactly the same test you ran previously, but now it's on
faster hardware
On Wed, Mar 18, 2015 at 12:59 PM, Sage Weil s...@newdream.net wrote:
On Wed, 18 Mar 2015, Matt Conner wrote:
I'm working with a 6-rack, 18-server (3 racks of 2 servers, 3 racks
of 4 servers), 640-OSD cluster and have run into an issue when failing
a storage server or rack where the OSDs are
On Tue, Mar 17, 2015 at 6:55 PM, David Zafman dzaf...@redhat.com wrote:
During upgrade testing an error occurred because ceph-objectstore-tool found
during import on a Firefly node the compat_features from a export from
Hammer.
There are 2 new feature bits set as shown in the error message:
On Mon, Aug 3, 2015 at 6:43 PM Loic Dachary l...@dachary.org wrote:
Hi Greg,
The next hammer release as found at https://github.com/ceph/ceph/tree/hammer
passed the fs suite (http://tracker.ceph.com/issues/11990#fs). Do you think
it is ready for QE to start their own round of testing ?
archives
by Gregory Farnum.
- Every casual explanation I found presumes that an omap (a set
of K/V) is associated with an object. But it is not physically in
the object. So, is there a free-standing omap (set of keys)?
Or an omap associated with something else, like a pool?
- Greg
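One way to see that an omap always hangs off a specific object (the keys live in the OSD's local key/value store, prefixed by that object) is that librados only exposes omap through per-object operations. A minimal C++ sketch, assuming an already-open IoCtx:

#include <rados/librados.hpp>
#include <map>
#include <string>

void set_some_omap(librados::IoCtx& io) {
  std::map<std::string, librados::bufferlist> kv;
  kv["owner"].append("greg");        // values are bufferlists
  librados::ObjectWriteOperation op;
  op.omap_set(kv);                   // omap keys are set via an op...
  io.operate("myobject", &op);       // ...addressed to an object name
}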
On Tue, Jul 28, 2015 at 7:38 AM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
The title sounds a little strange (Criterias to become a Ceph project) because
I'm not aware of projects initiated by someone external to Ceph that later
became part of the Ceph nebula of projects (as found at
On Tue, Jul 28, 2015 at 8:55 AM, Wido den Hollander w...@42on.com wrote:
Hi,
I was trying to inject a pre_start command on a bunch of OSDs under
Ubuntu 14.04, but that didn't work.
I found out that only the sysvinit script executes pre_start commands;
neither the upstart nor the systemd scripts
On Mon, Aug 3, 2015 at 11:10 PM, Loic Dachary l...@dachary.org wrote:
On 03/08/2015 21:18, John Spray wrote:
On Fri, Jul 31, 2015 at 8:59 PM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
We require that each commit has a Signed-off-by line with the name and
email of the author. The
On Tue, Jul 21, 2015 at 5:13 PM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
Today I did something wrong and that blocked the lab for a good half hour.
a) I ran two teuthology-kill commands simultaneously, and that made them
deadlock each other
b) I let them run unattended only to come back to
On Tue, Jul 21, 2015 at 6:09 PM, Patrick McGarry pmcga...@redhat.com wrote:
Hey cephers,
Just a reminder that the Ceph Tech Talk on CephFS that was scheduled
for last month (and cancelled due to technical difficulties) has been
rescheduled for this month's talk. It will be happening next
On Fri, Jul 10, 2015 at 10:45 PM, Deneau, Tom tom.den...@amd.com wrote:
I have an osd log file from an osd that hit a suicide timeout (with the
previous 1 events logged).
(On this node I have also seen this suicide timeout happen once before and
also a sync_entry timeout.
I can see
Hey Owen,
I haven't followed any of the conversations you've had in ceph-deploy
land, but I've been trying to keep track of the ones on ceph-devel et
al. I can't comment on very much of it because I suck at Python — I
can write C in any language, and do so! ;)
I interject this comment because
I spent a bunch of today looking at http://tracker.ceph.com/issues/12297.
Long story short: the workload is doing a readdir at the same time as
it's unlinking files. The readdir functions (in this case,
_readdir_cache_cb) drop the client_lock each time they invoke the
callback (for obvious
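A hedged illustration of the hazard being described (names are made up; this is not the actual Client.cc code): dropping a lock while iterating a shared container invites a concurrent unlink to invalidate the iterator.

#include <map>
#include <mutex>
#include <string>

struct Dentry { /* ... */ };
std::mutex client_lock;
std::map<std::string, Dentry*> readdir_cache;
void deliver_to_caller(Dentry*) { /* may block or reenter the client */ }

void readdir_cache_cb_sketch() {
  std::unique_lock<std::mutex> l(client_lock);
  for (auto it = readdir_cache.begin(); it != readdir_cache.end(); ++it) {
    l.unlock();                   // lock dropped around each callback...
    deliver_to_caller(it->second);
    l.lock();                     // ...so an unlink may have erased the
  }                               // entry 'it' points at by now
}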
[truncated iostat output: sde1 and sde2 show 0.00 in every column]
-- Tom
-Original Message-
From: Gregory Farnum
On Tue, Jul 21, 2015 at 3:15 PM, Matt W. Benjamin m...@cohortfs.com wrote:
Hi,
Couple of points.
1) a successor to 2Q is MQ (Li et al). We have an intrusive MQ LRU
implementation
with 2 levels currently, plus a pinned queue, that addresses stuff like
partitioning (sharding), scan
On Fri, Oct 23, 2015 at 7:59 AM, Howard Chu wrote:
> If the stream of writes is large enough, you could omit fsync because
> everything is being forced out of the cache to disk anyway. In that
> scenario, the only thing that matters is that the writes get forced out in
> the order
On Mon, Oct 19, 2015 at 8:31 AM, Milosz Tanski <mil...@adfin.com> wrote:
> On Wed, Oct 14, 2015 at 12:46 AM, Gregory Farnum <gfar...@redhat.com> wrote:
>> On Sun, Oct 11, 2015 at 7:36 PM, Milosz Tanski <mil...@adfin.com> wrote:
>>> On Sun, Oct 11, 2015 at 6:44
On Tue, Oct 27, 2015 at 6:34 AM, huang jun wrote:
> Hi, all
> I am looking at rgw storage objects; when I use the following command
> to view the bucket information:
> radosgw-admin bucket stats --bucket=bk0
> {"Bucket": "bk0",
>"pool": ".rgw.buckets",
>"index_pool":
On Tue, Oct 27, 2015 at 11:47 AM, GuangYang wrote:
> Hi there,
> Is there any reason we block read-only requests as well for a PG when
> the acting set size is less than min_size?
A few.
The most important reason: PGs don't have any concept of a read-only
mode in the code.
Sounds good!
On Fri, Oct 23, 2015 at 1:12 PM, Loic Dachary wrote:
> Hi Greg,
>
> The next firefly release as found at
> https://github.com/ceph/ceph/tree/firefly passed the fs suite
> (http://tracker.ceph.com/issues/11644#note-112). Do you think the firefly
> branch is ready
On Wed, Oct 21, 2015 at 2:33 PM, John Spray wrote:
> On Wed, Oct 21, 2015 at 10:33 PM, John Spray wrote:
>>> John, I know you've got
>>> https://github.com/ceph/ceph-qa-suite/pull/647. I think that's
>>> supposed to be for this, but I'm not sure if you
On Wed, Nov 4, 2015 at 7:07 AM, Gregory Farnum <gfar...@redhat.com> wrote:
> The problem with this approach is that the encoded versions need to be
> platform-independent — they are shared over the wire and written to
> disks that might get transplanted to different machines. Apart
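To make the constraint concrete, a minimal sketch (not Ceph's actual encode machinery) of why field-by-field encoding in a fixed byte order beats dumping a struct's raw bytes:

#include <cstdint>
#include <vector>

// memcpy'ing this struct bakes the host's endianness and padding into the
// bytes, so they are only readable on an identical platform:
struct example_t { uint64_t objects; uint32_t state; };

// Encoding each field little-endian makes the bytes platform-independent:
static void encode_u64_le(uint64_t v, std::vector<uint8_t>& out) {
  for (int i = 0; i < 8; i++)
    out.push_back(uint8_t(v >> (8 * i)));
}
static void encode_u32_le(uint32_t v, std::vector<uint8_t>& out) {
  for (int i = 0; i < 4; i++)
    out.push_back(uint8_t(v >> (8 * i)));
}
static void encode(const example_t& e, std::vector<uint8_t>& out) {
  encode_u64_le(e.objects, out);   // field widths become part of the format
  encode_u32_le(e.state, out);
}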
On Tue, Nov 10, 2015 at 7:19 AM, 池信泽 wrote:
> hi, all:
>
> op_wq is declared as ShardedThreadPool::ShardedWQ<pair<PGRef, OpRequestRef>> _wq. I do not know why we should use PGRef in this?
>
> Because the overhead of the smart pointer is not small. Maybe the
>
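On the PGRef question: a sketch of why the work queue holds a reference-counted pointer rather than a raw PG* (illustrative, not the actual OSD code; Ceph's PGRef is an intrusive_ptr, and shared_ptr is used here for the same effect):

#include <memory>
#include <queue>
#include <utility>

struct PG { /* ... */ };
struct OpRequest { /* ... */ };
using PGRef = std::shared_ptr<PG>;
using OpRequestRef = std::shared_ptr<OpRequest>;

std::queue<std::pair<PGRef, OpRequestRef>> op_wq;

void queue_op(PGRef pg, OpRequestRef op) {
  // The queued PGRef keeps the PG alive even if it is removed or migrated
  // before a worker dequeues the op; a raw PG* could dangle.
  op_wq.emplace(std::move(pg), std::move(op));
}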
On Tue, Nov 10, 2015 at 6:32 AM, Oleksandr Natalenko
wrote:
> Hello.
>
> We have CephFS deployed over Ceph cluster (0.94.5).
>
> We experience constant MDS restarting under high IOPS workload (e.g.
> rsyncing lots of small mailboxes from another storage to CephFS using
>
> 2015-11-11 2:28 GMT+08:00 Gregory Farnum <gfar...@redhat.com>:
>> On Tue, Nov 10, 2015 at 7:19 AM, 池信泽 <xmdx...@gmail.com> wrote:
>>> hi, all:
>>>
>>> op_wq is declared as ShardedThreadPool::ShardedWQ<pair<PGRef,
>>> O
Looks like we only have two tagged right now :( but periodically
things in the tracker get tagged with "new-dev".
http://tracker.ceph.com/projects/ceph/search?utf8=✓=1=new-dev
...and looking at that, the osdmap_subscribe ones I think are mostly
dealt with in
On Tue, Nov 3, 2015 at 3:15 AM, Mike wrote:
> Hello!
>
> In our project we are planning to build a petabyte cluster with an erasure pool.
> We are also looking at Mellanox ConnectX-4 Lx EN Cards/ConnectX-4 EN Cards,
> to use their erasure-code offload feature.
>
> Has someone used this
On Thu, Nov 5, 2015 at 7:14 AM, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Thanks Gregory,
>
> People are most likely busy and haven't had time to digest this and I
> may be expecting more excitement from it (I'm excited due to the
>
On Wed, Nov 4, 2015 at 7:00 AM, 池信泽 wrote:
> hi, all:
>
> I am focusing on the CPU usage of Ceph now. I find that the struct
> encode and decode paths (for pg_info_t, transactions, and so on) exhaust too
> much CPU.
>
> For now, we should encode every member variable
On Thu, Nov 5, 2015 at 9:59 PM, Allen Samuels wrote:
> I have a question about rebuild in the following situation:
>
> I have a pool with 3x replication.
> For one particular PG we'll designate the active OSD set as [1,2,3] with 1 as
> the primary.
> Assume 2 and 3
On Fri, Nov 6, 2015 at 4:26 AM, John Spray wrote:
> On Fri, Nov 6, 2015 at 10:06 AM, Nathan Cutler wrote:
>> Hi Ceph:
>>
>> Recently I encountered a "clock skew" issue with 0.94.3. I have
>> some small demo clusters in AWS. When I boot them up, in most
would actually have on
> overall durability (how frequent is this case?). Once Allen does the
> math, we'll have a better idea :)
> -Sam
>
> On Fri, Nov 6, 2015 at 8:43 AM, Gregory Farnum <gfar...@redhat.com> wrote:
>> Argh, I guess I was wrong. Sorry for the misinformati