I should add, I am using ceph-deploy. Here is the output:
2013-12-09 18:42:58,340 [ceph][DEBUG ] The following packages have unmet dependencies:
2013-12-09 18:42:58,340 [ceph][DEBUG ] ceph : Depends: libgoogle-perftools0 but it is not going to be installed
2013-12-09 18:42:58,340 [ceph][DEBUG
Hi Sage,
It's Ubuntu 12.04 LTS
Mongo wants to install libgoogle-perftools4, whereas ceph wants to install libgoogle-perftools0.
This leaves libgoogle-perftools0 in a broken state, which I'm assuming is what makes ceph fail. Ceph works great before the installation of libgoogle
Then we have to make a choice between immediately returning an error and patiently waiting for an MDS to join. My suggestions are:
(1) Log a warning from the kernel, e.g. printk(KERN_WARNING "no active mds") or something similar, in __choose_mds()
(2) Add a return value 'E_WAITING_FOR_MAP' to __choose_mds
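The two suggestions above could be sketched roughly as follows. This is a toy user-space model, not actual kernel code: the function, map structure, and error value are simplified stand-ins, and the error name mirrors the proposal above.

```c
#include <stdio.h>

/* Hypothetical error code meaning "no active MDS yet; caller should
 * wait for an updated MDS map". Name and value are illustrative only. */
#define E_WAITING_FOR_MAP 1000

struct mds_map {
    int num_active; /* number of MDS daemons currently active */
};

/* Toy stand-in for __choose_mds(): pick an MDS rank, or report that
 * the caller must wait for a new map when none is active. */
static int choose_mds(const struct mds_map *map)
{
    if (map->num_active == 0) {
        /* Suggestion (1): warn instead of failing silently.
         * In the kernel this would be printk(KERN_WARNING ...). */
        fprintf(stderr, "warning: no active mds\n");
        /* Suggestion (2): a distinct return value so the caller can
         * choose to wait for the next MDS map instead of erroring out. */
        return -E_WAITING_FOR_MAP;
    }
    return 0; /* rank of the chosen MDS, simplified */
}
```

With this shape, the caller can distinguish "wait for a new map" from a hard error and decide whether to block or give up.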
Hi Donald,
What exactly is the conflict? And what distro? I believe we should be
using whatever version of the library is installed.
sage
On Tue, 10 Dec 2013, Don Talton (dotalton) wrote:
> I'm trying to package/install mongodb and ceph on the same server and there
> is a conflict due to the
I'm trying to package/install mongodb and ceph on the same server and there is a conflict due to the version differences.
Donald Talton
Systems Development Unit
dotal...@cisco.com
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.
On 12/09/2013 04:19 PM, Gregory Farnum wrote:
On Mon, Dec 9, 2013 at 4:11 PM, Josh Durgin wrote:
On 12/06/2013 06:24 PM, Gregory Farnum wrote:
On Fri, Dec 6, 2013 at 6:16 PM, Josh Durgin wrote:
Don't bother trying to stop ENOSPC on the client side, since it'd need some restructuring in the
On Mon, Dec 9, 2013 at 4:11 PM, Josh Durgin wrote:
> On 12/06/2013 06:24 PM, Gregory Farnum wrote:
>>
>> On Fri, Dec 6, 2013 at 6:16 PM, Josh Durgin wrote:
>>> Don't bother trying to stop ENOSPC on the client side, since it'd need some restructuring in the kernel side and would be pron
On 12/06/2013 06:24 PM, Gregory Farnum wrote:
On Fri, Dec 6, 2013 at 6:16 PM, Josh Durgin wrote:
On 12/05/2013 08:58 PM, Gregory Farnum wrote:
On Thu, Dec 5, 2013 at 5:47 PM, Josh Durgin wrote:
On 12/03/2013 03:12 PM, Josh Durgin wrote:
These patches allow rbd to block writes instead of
On 12/06/2013 07:02 PM, Li Wang wrote:
I just had a quick look and haven't thought it through thoroughly.
(1) There may be a race condition: an earlier write gets blocked by FULL, a later write happens to be sent to the osd after the FULL -> NOFULL transition, and then the earlier write is resent, causing the old data
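The race described above can be modeled in miniature: if a resent (older) write is applied blindly after a newer one has already landed, it clobbers the newer data; tagging writes with a version and rejecting stale ones avoids that. This is a toy illustration of the hazard, not the actual OSD resend logic — all names here are made up.

```c
#include <string.h>

struct object {
    unsigned version;  /* version of the data currently stored */
    char data[16];
};

/* Apply a write only if it is not older than what is already stored.
 * Returns 1 if applied, 0 if rejected as a stale resend. */
static int apply_write(struct object *obj, unsigned version, const char *data)
{
    if (version < obj->version)
        return 0; /* the resent pre-FULL write arriving late: drop it */
    obj->version = version;
    strncpy(obj->data, data, sizeof(obj->data) - 1);
    obj->data[sizeof(obj->data) - 1] = '\0';
    return 1;
}
```

Without the version check, the late resend would silently overwrite the newer write — exactly the old-data problem the message points out.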
Is there any possibility to remove these meta files (without recreating the cluster)?
File names:
{path}.bucket.meta.test1:default.4110.{sequence number}__head_...
--
Regards
Dominik
2013/12/8 Dominik Mostowiec :
> Hi,
> My api app to put files to s3/ceph checks if a bucket exists by creating
> this bucket
Fixed this up last night!
s
On Mon, 9 Dec 2013, Loic Dachary wrote:
>
>
> On 09/12/2013 00:12, Loic Dachary wrote:
> >
> >
> > On 09/12/2013 00:00, Loic Dachary wrote:
> >> Hi,
> >>
> >> I accidentally force-pushed the following to ceph master at 11pm CEST,
> >> 8 December 2013:
> >>
> >> ht
I will mention that this is a good tool if you want really detailed
profiling or cpu counter data about what's going on. Other tools that
are more generic (i.e. ones that just read data from /proc, e.g. collectl,
sar, etc.) may also be options.
Mark
On 12/09/2013 10:45 AM, Loic Dachary wrote:
Hi,
Mark Nelson suggested we use perf ( linux-tools ) for benchmarking. It looks
like something that would help indeed: the benchmark program would only
concern itself with doing some work according to the options and let
performance be collected from the outside, using tools that are familia
-- All Branches --
Alfredo Deza
2013-09-27 10:33:52 -0400 wip-5900
Dan Mick
2012-12-18 12:27:36 -0800 wip-rbd-striping
2013-07-16 23:00:06 -0700 wip-5634
Danny Al-Gaaf
2013-11-04 23:35:09 +0100 wip-da-fix-galois-warning
David Zafman
2013-01-28
Well, after double-checking the code, it seems the waiting process will be
unconditionally woken up when a new MDS map is received. Is there a situation
where the client is pushed a new MDS map but there is still no active MDS? If so,
it may be worth a small optimization, such as calling check_new_map() to
avoid t
Personally, I don't think there is an issue with the current implementation
either. If there is no active MDS, the mount process is put to wait until an
updated MDS map is received; if the map indicates an active MDS is present,
the process is woken up and the mount continues, otherwise EIO is returned on
timeout. If
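The behavior described — wait through MDS map updates, proceed once an active MDS appears, otherwise fail with EIO after a timeout — can be sketched as a simple loop. This is illustrative only: the real client blocks on map updates rather than iterating, and the function names here are invented.

```c
#include <errno.h>

/* Toy map check: does the map at map_idx report any active MDS? */
static int map_has_active_mds(const int *active_counts, int map_idx)
{
    return active_counts[map_idx] > 0;
}

/* Observe up to num_maps successive MDS maps, waiting for an active MDS.
 * Returns 0 on success, -EIO if the "timeout" (map budget) is exhausted,
 * matching the EIO-on-timeout behavior discussed above. */
static int wait_for_active_mds(const int *active_counts, int num_maps)
{
    for (int i = 0; i < num_maps; i++) {
        if (map_has_active_mds(active_counts, i))
            return 0; /* woken with an active MDS: continue the mount */
    }
    return -EIO; /* never saw an active MDS before the timeout */
}
```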
On Mon, Dec 9, 2013 at 8:02 PM, Dzianis Huznou wrote:
> On Sat, 2013-12-07 at 21:59 +0800, Yan, Zheng wrote:
>> On Sun, Dec 8, 2013 at 2:59 AM, Mikhail Campos Guadamuz wrote:
>> > For http://tracker.ceph.com/issues/4386
>> >
>> > It addresses the situation when a user is trying to mount CephFS
>
On Sat, 2013-12-07 at 21:59 +0800, Yan, Zheng wrote:
> On Sun, Dec 8, 2013 at 2:59 AM, Mikhail Campos Guadamuz wrote:
> > For http://tracker.ceph.com/issues/4386
> >
> > It addresses the situation when a user is trying to mount CephFS
> > with no MDS present. Return ECOMM from
> > open_root_dentry