On Mon, Oct 5, 2015 at 10:40 PM, Serg M wrote:
> What is the difference between the memory statistics of "ceph tell {daemon}.{id} heap
> stats"
Assuming you're using tcmalloc (by default, you are), this will get
information straight from the memory allocator about what the actual
daemon
On Mon, Oct 5, 2015 at 10:36 PM, Dmitry Ogorodnikov
wrote:
> Good day,
>
> I think I will use wheezy for now for tests. Bad thing is wheezy full
> support ends in 5 months, so wheezy is not ok for persistent production
> cluster.
>
> I can't find out what the ceph team
On Mon, Oct 5, 2015 at 11:21 AM, Egor Kartashov wrote:
> Hello!
>
> I have cluster of 3 machines with ceph 0.80.10 (package shipped with Ubuntu
> Trusty). Ceph successfully mounts on all of them. On an external machine I'm
> receiving the error "can't read superblock" and dmesg
On Thu, Oct 8, 2015 at 6:45 PM, Francois Lafont <flafdiv...@free.fr> wrote:
> Hi,
>
> On 08/10/2015 22:25, Gregory Farnum wrote:
>
>> So that means there's no automated way to guarantee the right copy of
>> an object when scrubbing. If you have 3+ copies I'd reco
On Thu, Oct 8, 2015 at 5:01 PM, Rumen Telbizov wrote:
> Hello everyone,
>
> I am very new to Ceph so, please excuse me if this has already been
> discussed. I couldn't find anything on the web.
>
> We are interested in using Ceph and access it directly via its native rados
>
On Thu, Oct 8, 2015 at 6:29 AM, Burkhard Linke
wrote:
> Hammer 0.94.3 does not support a 'dump cache' mds command.
> 'dump_ops_in_flight' does not list any pending operations. Is there any
> other way to access the cache?
"dumpcache", it looks
On Mon, Oct 12, 2015 at 9:50 AM, Mark Nelson wrote:
> Hi Guy,
>
> Given all of the recent data on how different memory allocator
> configurations improve SimpleMessenger performance (and the effect of memory
> allocators and transparent hugepages on RSS memory usage), I
I think you're probably running into the internal PG/collection
splitting here; try searching for those terms and seeing what your OSD
folder structures look like. You could test by creating a new pool and
seeing if it's faster or slower than the one you've already filled up.
-Greg
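For reference, a rough sketch of the filestore split threshold Greg is alluding to, assuming the usual formula of 16 * filestore_split_multiple * abs(filestore_merge_threshold) and that era's defaults of 2 and 10; verify against your own running config rather than trusting these numbers:

```python
# Rough calculator for the filestore directory-split point behind
# internal PG/collection splitting. Assumption: the formula and the
# defaults (filestore_split_multiple=2, filestore_merge_threshold=10)
# match the filestore config of this era -- check your cluster's values.

def split_threshold(split_multiple=2, merge_threshold=10):
    """Objects a PG subdirectory may hold before filestore splits it."""
    return split_multiple * abs(merge_threshold) * 16

print(split_threshold())        # 320 objects per subdirectory with defaults
print(split_threshold(8, 10))   # raising split_multiple delays splitting
```

Once a pool's PG directories cross that count, every split shows up as an I/O stall, which is why a freshly created pool can look much faster than a full one.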
On Wed, Jul 8,
On Sun, Jul 5, 2015 at 5:37 AM, Michael Metz-Martini | SpeedPartner
GmbH m...@speedpartner.de wrote:
Hi,
after larger moves of several placement groups we tried to empty 3 of
our 66 osds by slowly setting weight of them to 0 within the crushmap.
After move completed we're still experiencing
On Tue, Jul 7, 2015 at 4:02 PM, Dan van der Ster d...@vanderster.com wrote:
Hi Greg,
On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum g...@gregs42.com wrote:
4. mds cache size = 500 is going to use a lot of memory! We have
an MDS with just 8GB of RAM and it goes OOM after delegating around
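A back-of-the-envelope way to see why a large `mds cache size` eats RAM; the bytes-per-inode figure here is only a ballpark assumption for illustration, not a Ceph constant:

```python
# Toy estimate of MDS cache memory. The ~2 KB per cached inode is an
# assumed ballpark (actual footprint varies with directory structure,
# caps, and Ceph version) -- measure RSS on your own MDS to be sure.

def mds_cache_ram_bytes(mds_cache_size, bytes_per_inode=2048):
    """Rough RAM needed for an MDS cache of mds_cache_size inodes."""
    return mds_cache_size * bytes_per_inode

# A 5,000,000-entry cache at ~2 KB/inode is on the order of 9.5 GiB,
# already more than an 8 GB MDS host can hold.
print(mds_cache_ram_bytes(5_000_000) / 2**30)
```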
On Fri, Jul 3, 2015 at 10:34 AM, Dan van der Ster d...@vanderster.com wrote:
Hi,
We're looking at similar issues here and I was composing a mail just
as you sent this. I'm just a user -- hopefully a dev will correct me
where I'm wrong.
1. A CephFS cap is a way to delegate permission for a
On Thu, Jul 2, 2015 at 11:38 AM, Matteo Dacrema mdacr...@enter.it wrote:
Hi all,
I'm using CephFS on Hammer and I've 1.5 million files , 2 metadata servers
in active/standby configuration with 8 GB of RAM , 20 clients with 2 GB of
RAM each and 2 OSD nodes with 4 80GB osd and 4GB of RAM.
I've
Your first point of troubleshooting is pretty much always to look at
ceph -s and see what it says. In this case it's probably telling you
that some PGs are down, and then you can look at why (but perhaps it's
something else).
-Greg
On Thu, Jul 9, 2015 at 12:22 PM, Mallikarjun Biradar
On Monday, November 16, 2015, min fang wrote:
> Is this function used in detach rx buffer, and complete IO back to the
> caller? From the code, I think this function will not interact with OSD or
> MON side, which means, we just cancel IO from client side. Am I right?
>
What's the full output of "ceph -s"? Are your new crush rules actually
satisfiable? Is your cluster filling up?
-Greg
On Saturday, November 14, 2015, Peter Theobald wrote:
> Hi list,
>
> I have a 3 node ceph cluster with a total of 9 osds (2,3 and 4 with
> different size
Hallam
Cc: ceph-users@lists.ceph.com; Gregory Farnum
Subject: Re: [ceph-users] Testing CephFS
On Aug 24, 2015, at 18:38, Gregory Farnum gfar...@redhat.com wrote:
On Mon, Aug 24, 2015 at 11:35 AM, Simon Hallam s...@pml.ac.uk wrote:
Hi Greg,
The MDSs detect that the other one went
On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
vickey.singh22...@gmail.com wrote:
Hello Ceph Geeks
I am planning to develop a python plugin that pulls out cluster recovery IO
and client IO operation metrics, which can be further used with collectd.
For example, I need to take out these
I haven't looked at the internals of the model, but the PL(site)
you've pointed out is definitely the crux of the issue here. In the
first grouping, it's just looking at the probability of data loss due
to failing disks, and as the copies increase that goes down. In the
second grouping, it's
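The first grouping's intuition (more copies, lower loss probability) can be sketched with a naive independent-failure model; this is only an illustration, since the real model discussed above adds site failures and rebuild windows:

```python
# Naive data-loss model: data is lost only if every replica's disk fails,
# and failures are assumed independent. Real reliability models (like the
# one under discussion) also account for rebuild time and correlated
# site-level failures, which dominate once per-disk loss gets small.

def p_all_copies_lost(p_disk_fail, copies):
    """Probability that all `copies` replicas fail, assuming independence."""
    return p_disk_fail ** copies

for k in (1, 2, 3):
    print(k, p_all_copies_lost(0.02, k))
```

Each added copy multiplies the naive loss probability by the per-disk failure rate, which is exactly the steep drop seen in the first grouping.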
On Fri, Aug 28, 2015 at 1:42 PM, Wido den Hollander w...@42on.com wrote:
On 28-08-15 13:07, Gregory Farnum wrote:
On Mon, Aug 24, 2015 at 4:03 PM, Vickey Singh
vickey.singh22...@gmail.com wrote:
Hello Ceph Geeks
I am planning to develop a python plugin that pulls out cluster recovery IO
On Wed, Aug 26, 2015 at 9:36 AM, Wido den Hollander w...@42on.com wrote:
Hi,
It's something which has been 'bugging' me for some time now. Why are
RGW pools prefixed with a period?
I tried setting the root pool to 'rgw.root', but RGW (0.94.1) refuses to
start:
ERROR: region root pool name
There is a cephfs-journal-tool that I believe is present in hammer and
ought to let you get your MDS through replay. Depending on which PGs
were lost you will have holes and/or missing files, in addition to not
being able to find parts of the directory hierarchy (and maybe getting
crashes if you
On Thu, Aug 27, 2015 at 2:54 AM, Goncalo Borges
gonc...@physics.usyd.edu.au wrote:
Hey guys...
1./ I have a simple question regarding the appearance of degraded PGs.
First, for reference:
a. I am working with 0.94.2
b. I have 32 OSDs distributed in 4 servers, meaning that I have 8 OSD per
On Thu, Aug 27, 2015 at 11:11 AM, John Spray jsp...@redhat.com wrote:
On Thu, Aug 27, 2015 at 9:33 AM, Andrzej Łukawski alukaw...@interia.pl
wrote:
Hi,
I ran cephfs-journal-tool to inspect the journal 12 hours ago - it's still
running. Or... it didn't crash yet, although I don't see any output
On Mon, Aug 31, 2015 at 9:33 AM, Eino Tuominen wrote:
> Hello,
>
> I'm getting a segmentation fault error from the monitor of our test cluster.
> The cluster was in a bad state because I have recently removed three hosts
> from it. Now I started cleaning it up and first marked the
This generally shouldn't be a problem at your bucket sizes. Have you
checked that the cluster is actually in a healthy state? The sleeping
locks are normal but should be getting woken up; if they aren't it
means the object access isn't working for some reason. A down PG or
something would be the
On Mon, Aug 31, 2015 at 5:07 AM, Christian Balzer wrote:
>
> Hello,
>
> I'm about to add another storage node to small firefly cluster here and
> refurbish 2 existing nodes (more RAM, different OSD disks).
>
> Insert rant about not going to start using ceph-deploy as I would have
On Sat, Aug 29, 2015 at 3:32 PM, Евгений Д. wrote:
> I'm running a 3-node cluster with Ceph (it's a Deis cluster, so Ceph daemons are
> containerized). There are 3 OSDs and 3 mons. After rebooting all nodes one
> by one all monitors are up, but only two OSDs of three are up.
::less, std::allocator > const*) ()
> #11 0x005c388c in Monitor::win_standalone_election() ()
> #12 0x005c42eb in Monitor::bootstrap() ()
> #13 0x005c4645 in Monitor::init() ()
> #14 0x005769c0 in main ()
>
> -Original Message-
> From:
On Mon, Aug 31, 2015 at 8:30 AM, 10 minus wrote:
> Hi ,
>
> I 'm in the process of upgrading my ceph cluster from Firefly to Hammer.
>
> The ceph cluster has 12 OSD spread across 4 nodes.
>
> Mons have been upgraded to hammer, since I have created pools with value
> 512 and
On Sat, Aug 29, 2015 at 11:50 AM, Gerd Jakobovitsch wrote:
> Dear all,
>
> During a cluster reconfiguration (change of crush tunables from legacy to
> TUNABLES2) with large data replacement, several OSDs got overloaded and had
> to be restarted; when the OSDs stabilized, I got a
On Mon, Aug 31, 2015 at 12:16 PM, Yan, Zheng <uker...@gmail.com> wrote:
> On Mon, Aug 24, 2015 at 6:38 PM, Gregory Farnum <gfar...@redhat.com> wrote:
>> On Mon, Aug 24, 2015 at 11:35 AM, Simon Hallam <s...@pml.ac.uk> wrote:
>>> Hi Greg,
>>>
>
On Tue, Sep 1, 2015 at 9:20 PM, Erming Pei wrote:
> Hi,
>
> I tried to set up a read-only permission for a client but it always looks
> writable.
>
> I did the following:
>
> ==Server end==
>
> [client.cephfs_data_ro]
> key = AQxx==
> caps mon =
On Wed, Sep 2, 2015 at 10:00 AM, Janusz Borkowski
wrote:
> Hi!
>
> I mount cephfs using kernel client (3.10.0-229.11.1.el7.x86_64).
>
> The effect is the same when doing "echo >>" from another machine and from a
> machine keeping the file open.
>
> The file is
D pool?
Mounting it on another client and seeing if changes are reflected
there would do it. Or unmounting the filesystem, mounting again, and
seeing if the file has really changed.
-Greg
>
> Thanks!
>
> Erming
>
>
>
> On 9/2/15, 2:44 AM, Gregory Farnum wrote:
>
On Tue, Sep 1, 2015 at 3:58 PM, huang jun wrote:
> hi,all
>
> Recently, I did some experiments on OSD data distribution.
> We set up a cluster with 72 OSDs, all 2TB SATA disks;
> the ceph version is v0.94.3 and the Linux kernel version is 3.18,
> and set "ceph osd crush tunables
On Tue, Sep 1, 2015 at 2:31 AM, Shesha Sreenivasamurthy wrote:
> I had a question regarding how OSD locations are determined by CRUSH.
>
> From the CRUSH paper I gather that the replica locations of an object (A) are
> a vector (v) obtained from the function c(r,x) = (hash (x) +
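The truncated formula is presumably the CRUSH paper's uniform-bucket choose function, c(r,x) = (hash(x) + r*p) mod m. A toy sketch of that formula only (zlib.crc32 stands in for CRUSH's rjenkins hash, and real CRUSH mostly uses other bucket types such as straw):

```python
import zlib

def choose_uniform(x, r, m, p=65537):
    """Toy version of the CRUSH paper's uniform-bucket choose function:
    c(r, x) = (hash(x) + r*p) mod m, where m is the bucket size and p is
    a prime >= m. zlib.crc32 is a stand-in for CRUSH's rjenkins hash;
    this is an illustration of the paper's formula, not Ceph's code.
    """
    h = zlib.crc32(str(x).encode())
    return (h + r * p) % m

# Replica placement vector for object x=42 across a 7-item bucket:
# each replica rank r lands on a deterministic, pseudo-random item.
print([choose_uniform(42, r, 7) for r in range(3)])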
This comes up periodically on the mailing list; see eg
http://www.spinics.net/lists/ceph-users/msg15907.html
I'm not sure if your case fits within those odd parameters or not, but
I bet it does. :)
-Greg
On Mon, Aug 31, 2015 at 8:16 PM, Stillwell, Bryan
wrote:
> On
On Wed, Sep 2, 2015 at 9:34 PM, Bob Ababurko wrote:
> When I lose a disk or replace an OSD in my POC ceph cluster, it takes a very
> long time to rebalance. I should note that my cluster is slightly unique in
> that I am using cephfs(shouldn't matter?) and it currently contains
On Sun, Sep 6, 2015 at 10:07 AM, Marin Bernard wrote:
> Hi,
>
> I've just setup Ceph Hammer (latest version) on a single node (1 MON, 1
> MDS, 4 OSDs) for testing purposes. I used ceph-deploy. I only
> configured CephFS as I don't use RBD. My pool config is as follows:
>
> $
On Fri, Sep 4, 2015 at 9:15 AM, Florent B wrote:
> Hi everyone,
>
> I would like to know if there is a way on Debian to detect an upgrade of
> ceph-fuse package, that "needs" remouting CephFS.
>
> When I upgrade my systems, I do a "aptitude update && aptitude
> safe-upgrade".
On Fri, Sep 4, 2015 at 12:24 AM, Deneau, Tom wrote:
> After running some other experiments, I see now that the high single-node
> bandwidth only occurs when ceph-mon is also running on that same node.
> (In these small clusters I only had one ceph-mon running).
> If I compare
On Thu, Sep 3, 2015 at 11:58 PM, Kyle Hutson wrote:
> I was wondering if anybody could give me some insight as to how CephFS does
> its caching - read-caching in particular.
>
> We are using CephFS with an EC pool on the backend with a replicated cache
> pool in front of it.
On Thu, Sep 3, 2015 at 7:48 AM, Chris Taylor wrote:
> I removed the latest OSD that was respawning (osd.23) and now I'm having the
> same problem with osd.30. It looks like they both have pg 3.f9 in common. I
> tried "ceph pg repair 3.f9" but the OSD is still respawning.
>
> Does
On Sep 1, 2015 4:41 PM, "Janusz Borkowski"
wrote:
>
> Hi!
>
> open( ... O_APPEND) works fine in a single system. If many processes
> write to the same file, their output will never overwrite each other.
>
> On NFS overwriting is possible, as appending is only
On Tue, Sep 8, 2015 at 2:33 PM, Florent B <flor...@coppint.com> wrote:
>
>
> On 09/08/2015 03:26 PM, Gregory Farnum wrote:
>> On Fri, Sep 4, 2015 at 9:15 AM, Florent B <flor...@coppint.com> wrote:
>>> Hi everyone,
>>>
>>> I would like to
On Thu, Sep 10, 2015 at 9:44 AM, Shinobu Kinjo wrote:
> Hello,
>
> I'm seeing 859 parameters in the output of:
>
> $ ./ceph --show-config | wc -l
> *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
> 859
>
> In:
>
> $ ./ceph --version
>
On Thu, Sep 10, 2015 at 2:34 PM, Stefan Priebe - Profihost AG
wrote:
> Hi,
>
> while we're happy running ceph firefly in production and also reach
> enough 4k read IOPS for multithreaded apps (around 23,000) with qemu 2.2.1.
>
> We've now a customer having a single
>
> On Tue, Sep 8, 2015 at 8:29 AM, Gregory Farnum <gfar...@redhat.com> wrote:
>>
>> On Thu, Sep 3, 2015 at 11:58 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
>> > I was wondering if anybody could give me some insight as to how CephFS
>> > does
>>
On Wed, Sep 9, 2015 at 4:26 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
>
>
> On Wed, Sep 9, 2015 at 9:34 AM, Gregory Farnum <gfar...@redhat.com> wrote:
>>
>> On Wed, Sep 9, 2015 at 3:27 PM, Kyle Hutson <kylehut...@ksu.edu> wrote:
>> > We are using
>> To: Ben Hines <bhi...@gmail.com>
>> > >>>> Cc: ceph-users <ceph-users@lists.ceph.com>
>> > >>>> Subject: Re: [ceph-users] Ceph performance, empty vs part full
>> > >>>>
>> > >>>> Hrm, I think it
On Sat, Sep 12, 2015 at 6:13 AM, pragya jain wrote:
> Hello all
>
> I am carrying out research in the area of cloud computing under Department
> of CS, University of Delhi. I would like to contribute my research work
> regarding monitoring of Ceph Object Storage to the Ceph
On Thu, Sep 10, 2015 at 1:07 PM, Kyle Hutson wrote:
> A 'rados -p cachepool ls' takes about 3 hours - not exactly useful.
>
> I'm intrigued that you say a single read may not promote it into the cache.
> My understanding is that if you have an EC-backed pool the clients can't
On Fri, Sep 11, 2015 at 9:52 AM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Mark Nelson
>> Sent: 10 September 2015 16:20
>> To: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] higher read
On Thu, Sep 10, 2015 at 9:46 PM, Wido den Hollander wrote:
> Hi,
>
> I'm running into an issue with Ceph 0.94.2/3 where after doing a recovery
> test 9 PGs stay incomplete:
>
> osdmap e78770: 2294 osds: 2294 up, 2294 in
> pgmap v1972391: 51840 pgs, 7 pools, 220 TB data, 185 Mobjects
On Tue, Sep 15, 2015 at 9:10 AM, Barclay Jameson
wrote:
> So, I asked this on the irc as well but I will ask it here as well.
>
> When one does 'ceph -s' it shows client IO.
>
> The question is simple.
>
> Is this total throughput or what the clients would see?
>
> Since
On Thu, Sep 10, 2015 at 1:02 PM, Deneau, Tom wrote:
> Running 9.0.3 rados bench on a 9.0.3 cluster...
> In the following experiments this cluster is only 2 osd nodes, 6 osds each
> and a separate mon node (and a separate client running rados bench).
>
> I have two pools
>>
>>
>> On 09/15/2015 11:25 AM, Barclay Jameson wrote:
>>>
>>> Unfortunately, it's no longer idle as my CephFS cluster is now in
>>> production :)
>>>
>>> On Tue, Sep 15, 2015 at 11:17 AM, Gregory Farnum <gfar...@redhat.com>
>>
On Thu, Sep 17, 2015 at 1:15 AM, Fulin Sun wrote:
> Hi, experts
>
> While doing the command
> ceph-fuse /home/ceph/cephfs
>
> I got the following error :
>
> ceph-fuse[28460]: starting ceph client
> 2015-09-17 16:03:33.385602 7fabf999b780 -1 init, newargv = 0x2c730c0
On Wed, Sep 16, 2015 at 11:56 AM, Corin Langosch
wrote:
> Hi guys,
>
> afaik rbd always splits the image into chunks of size 2^order (2^22 = 4MB by
> default). What's the benefit of specifying
> the feature flag "STRIPINGV2"? I couldn't find any documentation about it
On Thu, Sep 17, 2015 at 7:55 AM, Corin Langosch
<corin.lango...@netskin.com> wrote:
> Hi Greg,
>
> Am 17.09.2015 um 16:42 schrieb Gregory Farnum:
>> Briefly, if you do a lot of small direct IOs (for instance, a database
>> journal) then striping lets you send each sequen
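The striping arithmetic being described can be sketched as follows, assuming the RADOS-style mapping where consecutive stripe units round-robin across stripe_count objects (parameter names here are illustrative, not librbd's API):

```python
def byte_to_object(offset, stripe_unit, stripe_count, object_size):
    """Map a byte offset to an object index under RADOS-style striping.

    Consecutive stripe_unit-sized chunks round-robin across stripe_count
    objects, so small sequential IOs (e.g. a database journal) fan out
    across several objects instead of hammering one. Assumption: this
    mirrors the RADOS striping scheme; it is a sketch, not librbd code.
    """
    stripes_per_object = object_size // stripe_unit
    stripe_no = offset // stripe_unit
    object_set = stripe_no // (stripe_count * stripes_per_object)
    return object_set * stripe_count + stripe_no % stripe_count

su, sc, objsz = 4096, 4, 4 << 20
# Four sequential 4K writes land on objects 0, 1, 2, then back to 0.
print([byte_to_object(o, su, sc, objsz) for o in (0, 4096, 8192, 16384)])
```

With the default layout (stripe_count=1, stripe_unit=object_size) the same offsets all map to object 0, which is why plain 4 MB chunking serializes small sequential IO on one object.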
On Thu, Sep 17, 2015 at 12:41 AM, Selcuk TUNC wrote:
> hello,
>
> we have noticed leveldb compaction on mount causes a segmentation fault in
> hammer release(0.94).
> It seems related to this pull request (github.com/ceph/ceph/pull/4372). Are
> you planning to backport
>
On Tue, Sep 29, 2015 at 3:59 AM, Jogi Hofmüller wrote:
> Hi,
>
> Am 2015-09-25 um 22:23 schrieb Udo Lembke:
>
>> you can use this sources-list
>>
>> cat /etc/apt/sources.list.d/ceph.list
>> deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3
>> jessie main
>
> The
The formula for objects in a file is <inode>.<object index>. So you'll have noticed they all look something like
12345.0001, 12345.0002, 12345.0003, ...
So if you've got a particular inode and file size, you can generate a
list of all the possible objects in it. To find the object->OSD
mapping you'd need
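Generating that object list can be sketched as follows, assuming the usual <inode-hex>.<8-hex-digit-index> naming and the default 4 MB object size (a custom file layout changes both):

```python
def cephfs_object_names(inode, file_size, object_size=4 << 20):
    """List the RADOS object names a CephFS file of this size could span.

    Assumes the default layout: object names are the inode number in hex,
    a dot, then the object index as 8 zero-padded hex digits. A non-default
    file layout (different object_size) changes how many objects exist.
    """
    count = max(1, -(-file_size // object_size))  # ceiling division
    return ["%x.%08x" % (inode, i) for i in range(count)]

# A 10 MB file with inode 0x12345 spans three 4 MB objects.
print(cephfs_object_names(0x12345, 10 << 20))
# -> ['12345.00000000', '12345.00000001', '12345.00000002']
```

From there, `ceph osd map <pool> <object-name>` gives the object->PG->OSD mapping for each name.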
On Thu, Oct 1, 2015 at 4:42 AM, John Spray wrote:
> On Thu, Oct 1, 2015 at 12:35 PM, Florent B wrote:
>> Thank you John, I think about it. I did :
>>
>> # get inodes in pool
>> rados -p my_pool ls | cut -d '.' -f 1 | uniq
>> -u
On Fri, Oct 2, 2015 at 1:57 AM, John Spray wrote:
> On Fri, Oct 2, 2015 at 2:42 AM, Goncalo Borges
> wrote:
>> Dear CephFS Gurus...
>>
>> I have a question regarding ceph-fuse and its memory usage.
>>
>> 1./ My Ceph and CephFS setups are the
On Fri, Sep 18, 2015 at 4:57 AM, Wouter De Borger wrote:
> Hi all,
>
> I have found on the mailing list that it should be possible to have a multi
> datacenter setup, if latency is low enough.
>
> I would like to set this up, so that each datacenter has at least two
>
Do you have a core file from the crash? If you do and can find out
which pointers are invalid that would help...I think "cct" must be the
broken one, but maybe it's just the Inode* or something.
-Greg
On Mon, Sep 21, 2015 at 2:03 PM, Scottix wrote:
> I was rsyncing files to
On Thu, Sep 24, 2015 at 2:06 AM, Ilya Dryomov wrote:
> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
>> about
On Tue, Sep 22, 2015 at 7:21 PM, Jevon Qiao wrote:
> Hi Sage and other Ceph experts,
>
> This is a greeting from Jevon, I'm from China and working in a company which
> are using Ceph as the backend storage. At present, I'm evaluating the
> following two options of using Ceph
So it sounds like you've got two different things here:
1) You get a lot of slow operations that show up as warnings.
2) Rarely, you get blocked op warnings that don't seem to go away
until the cluster state changes somehow.
(2) is the interesting one. Since you say the cluster is under heavy
On Mon, Sep 21, 2015 at 7:07 AM, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
>
>
>
> On Mon, Sep 21, 2015 at 3:02 AM, Wouter De Borger wrote:
>> Thank you for your answer! We will use size=4 and min_size=2, which should
>> do the trick.
>>
>>
On Mon, Sep 21, 2015 at 11:43 PM, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> I'm starting to wonder if this has to do with some OSDs getting full
> or the 0.94.3 code. Earlier this afternoon, I cleared out my test
> cluster so there was no
On Tue, Sep 22, 2015 at 7:24 AM, Robert LeBlanc wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Is there some way to tell in the logs that this is happening?
You can search for the (mangled) name _split_collection
> I'm not
> seeing much I/O, CPU usage during
On Thu, Sep 17, 2015 at 7:48 PM, Fulin Sun wrote:
> Hi, guys
>
> I am wondering if I am able to deploy ceph and hadoop into different cluster
> nodes and I can
>
> still use cephfs as the backend for hadoop access.
>
> For example, ceph in cluster 1 and hadoop in cluster
On Sep 24, 2015 5:12 PM, "Cory Hawkless" wrote:
>
> Hi all, thanks for the replies.
> So my confusion was because I was using "rados put test.file someobject
> testpool".
> This command does not seem to split my 'files' into chunks when they are
> saved as 'objects', hence the
On Mon, Dec 7, 2015 at 6:59 AM, Kostis Fardelas wrote:
> Hi cephers,
> after one OSD node crash (6 OSDs in total), we experienced an increase
> of approximately 230-260 threads for every other OSD node. We have 26
> OSD nodes with 6 OSDs per node, so this is approximately 40
On Tue, Dec 1, 2015 at 10:02 AM, Tom Christensen wrote:
> Another thing that we don't quite grasp is that when we see slow requests
> now they almost always, probably 95% have the "known_if_redirected" state
> set. What does this state mean? Does it indicate we have OSD maps
On Wed, Dec 2, 2015 at 10:54 AM, Major Csaba wrote:
> Hi,
>
> I have a small cluster (5 nodes, 20 OSDs), where an OSD crashed. There are no
> other signs of problems: no kernel messages, so the disks seem to be OK.
>
> I tried to restart the OSD but the process stops
On Wed, Dec 2, 2015 at 11:11 AM, Major Csaba wrote:
> Hi,
> [ sorry, I accidentaly left out the list address ]
>
> This is the content of the LOG file in the directory
> /var/lib/ceph/osd/ceph-7/current/omap:
> 2015/12/02-18:48:12.241386 7f805fc27900 Recovering log #26281
s not doable...
>
> Shinobu
>
> - Original Message -
> From: "Gregory Farnum" <gfar...@redhat.com <javascript:;>>
> To: "Shinobu Kinjo" <ski...@redhat.com <javascript:;>>
> Cc: "ceph-users" <ceph-users@lists.ceph.com <j
As that ticket indicates, older versions of the code didn't create the
backtraces, so obviously they aren't present. That certainly includes
Dumpling!
-Greg
On Monday, December 7, 2015, Shinobu Kinjo wrote:
> Hello,
>
> Have any of you tried to upgrade the Ceph cluster
On Thu, Dec 10, 2015 at 2:26 AM, Xavier Serrano
wrote:
> Hello,
>
> We are using ceph version 0.94.4, with radosgw offering S3 storage
> to our users.
>
> Each user is assigned one bucket (and only one; max_buckets is set to 1).
> The bucket name is actually the user
Apparently the keys are now at
https://download.ceph.com/keys/release.asc and you need to upgrade
your ceph-deploy (or maybe just change a config setting? I'm not
really sure).
-Greg
On Thu, Dec 17, 2015 at 7:51 AM, Tim Gipson wrote:
> Is anyone else experiencing issues when
On Thu, Dec 17, 2015 at 2:06 PM, John Spray wrote:
> On Thu, Dec 17, 2015 at 2:31 PM, Simon Hallam wrote:
>> Hi all,
>>
>>
>>
>> I’m looking at sizing up some new MDS nodes, but I’m not sure if my thought
>> process is correct or not:
>>
>>
>>
>> CPU: Limited
On Wed, Dec 16, 2015 at 10:54 AM, Mike Miller wrote:
> Hi,
>
> sorry, the question might seem very easy, probably my bad, but can you
> please help me understand why I am unable to change the read-ahead size and
> other options when mounting cephfs?
>
> mount.ceph m2:6789:/ /foo2 -v -o
On Thu, Dec 17, 2015 at 11:43 AM, Bryan Wright wrote:
> Hi folks,
>
> This is driving me crazy. I have a ceph filesystem that behaves normally
> when I "ls" files, and behaves normally when I copy smallish files on or off
> of the filesystem, but large files (~ GB size) hang
On Tue, Dec 15, 2015 at 3:01 AM, Goncalo Borges
wrote:
> Dear Cephfs gurus.
>
> I have two questions regarding ACL support on cephfs.
>
> 1) Last time we tried ACLs we saw that they were only working properly in the
> kernel module and I wonder what is the present
On Tue, Dec 15, 2015 at 12:29 PM, Bryan Wright wrote:
> John Spray writes:
>
>> If you haven't already, also
>> check the overall health of the MDS host, e.g. is it low on
>> memory/swapping?
>
> For what it's worth, I've taken down some OSDs, and that seems to
On Tue, Dec 15, 2015 at 10:21 AM, Burkhard Linke
wrote:
> Hi,
>
> I have a setup with two MDS in active/standby configuration. During times of
> high network load / network congestion, the active MDS is bounced between
> both instances:
>
> 1.
On Fri, Jan 1, 2016 at 12:15 PM, Bryan Wright wrote:
> Hi folks,
>
> "ceph pg dump_stuck inactive" shows:
>
> 0.e8  incomplete  [406,504]  406  [406,504]  406
>
> Each of the osds above is alive and well, and idle.
>
> The output of "ceph pg 0.e8 query" is
On Fri, Jan 1, 2016 at 9:14 AM, Bryan Wright <bk...@virginia.edu> wrote:
> Gregory Farnum <gfarnum@...> writes:
>
>> Or maybe it's 0.9a, or maybe I just don't remember at all. I'm sure
>> somebody recalls...
>>
>
> I'm still struggling with this.
On Fri, Dec 18, 2015 at 7:03 AM, Bryan Wright <bk...@virginia.edu> wrote:
> Gregory Farnum <gfarnum@...> writes:
>>
>> What's the full output of "ceph -s"?
>>
>> The only time the MDS issues these "stat" ops on objects is during M
On Wed, Dec 23, 2015 at 5:20 AM, HEWLETT, Paul (Paul)
wrote:
> Seasons Greetings Cephers..
>
> Can I assume that http://tracker.ceph.com/issues/12200 is fixed in
> Infernalis?
>
> Any chance that it can be back ported to Hammer ? (I don’t see it planned)
>
> We
On Fri, Dec 18, 2015 at 7:27 AM, Bryan Wright <bk...@virginia.edu> wrote:
> Gregory Farnum <gfarnum@...> writes:
>
>>
>> Nonetheless, it's probably your down or incomplete PGs causing the
>> issue. You can check that by seeing if seed 0.5d427a9a (out of t
On Tue, Nov 24, 2015 at 1:37 PM, Wido den Hollander wrote:
> On 11/24/2015 07:00 PM, Emmanuel Lacour wrote:
>>
>> Dear ceph users,
>>
>>
>> I try to write a crush ruleset that will, for a pool size of 3, put a
>> copy in another host in the local rack and a copy in another rack. I
On Tue, Nov 24, 2015 at 1:50 PM, James Gallagher
wrote:
> Hi there,
>
> I'm currently following the Ceph QSGs and have currently finished the
> Storage Cluster Quick Start and have the current topology of
>
> admin-node - node1 (mon, mds)
> - node2
On Fri, Nov 20, 2015 at 11:33 AM, Simon Engelsman wrote:
> Hi,
>
> We've experienced a very weird problem last week with our Ceph
> cluster. We would like to ask your opinion(s) and advice
>
> Our dedicated Ceph OSD nodes run with:
>
> Total platform
> - IO Average: 2500 wrps,
Yeah, the write proxying is pretty new and the fact that it's missing from
an oddball like READFORWARD isn't surprising. (Not good, exactly, but not
surprising.) What are you doing with this caching mode?
On Thu, Nov 19, 2015 at 10:34 AM, Nick Fisk wrote:
> Don’t know why that
On Wed, Nov 25, 2015 at 11:09 AM, Wido den Hollander wrote:
> Hi,
>
> Currently we have OK, WARN and ERR as states for a Ceph cluster.
>
> Now, it could happen that while a Ceph cluster is in WARN state certain
> PGs are not available due to being in peering or any non-active+?
On Wed, Nov 25, 2015 at 8:37 AM, Götz Reinicke - IT Koordinator
wrote:
> Hi,
>
> discussing some design questions we came across the failover possibility
> of cephs network configuration.
>
> If I just have a public network, all traffic is crossing that lan.
>
>