Hi,
On 01/16/2018 09:50 PM, Andras Pataki wrote:
Dear Cephers,
*snipsnap*
We are running with a larger MDS cache than usual; we have
mds_cache_size set to 4 million. All other MDS configs are the defaults.
AFAIK the MDS cache management in luminous has changed, focusing on
memory
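If I remember correctly, the inode-count limit has been superseded in luminous
by a byte-based one. A minimal ceph.conf sketch of the two styles of limit (the
16 GiB value below is only an example, not a recommendation):

[mds]
# pre-luminous: cache limit expressed as an inode count
mds_cache_size = 4000000
# luminous: cache limit expressed in bytes of memory (here 16 GiB)
mds_cache_memory_limit = 17179869184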
Hi, I was wondering what naming scheme you use for naming RBDs in different
pools. There are no strict rules that I know of, so what might be a best practice?
Something based on the target service, like fileserver_students, webservers_xen,
or webservers_vmware?
A good naming scheme might be helpful :)
See http://tracker.ceph.com/issues/22351#note-11
On Wed, Jan 17, 2018 at 10:09 AM, Brad Hubbard wrote:
> On Wed, Jan 17, 2018 at 5:41 AM, Brad Hubbard wrote:
>> On Wed, Jan 17, 2018 at 2:20 AM, Nikos Kormpakis wrote:
>>> On
On Thu, Jan 4, 2018 at 6:46 PM, David Turner wrote:
> I'm still getting a vibe that this isn't going to happen. I'd like to
> get my tickets purchased, including the hotel, but there isn't a venue yet. With
> the conference only 2.5 months away, the details aren't nailed
Hi Nathan,
I would place the mon_host parameter, set to the IP address of your monitor
host, in the [global] section so that the client (the ceph -s command) can find
the monitor.
Have you also checked your firewall setup on your MON box?
To help diagnose, you can also use ceph -s --debug-ms=1 so you can
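Something along these lines in /etc/ceph/ceph.conf, where the IP address and
monitor name are placeholders for your setup:

[global]
mon_host = 192.168.1.10
mon_initial_members = mon1

The monitor listens on TCP port 6789 by default, so a quick firewall check on
the MON box would be:

ss -tlnp | grep 6789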
Hello,
In one of our cluster setups, there are frequent monitor elections
happening.
In the logs of one of the monitors, there is a "lease_timeout" message before
that happens. Can anyone help me figure it out?
(When this happens, the Ceph Dashboard GUI gets stuck and we have to
restart the
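In case it is useful, these are the checks we can run and share output from
(mon.ceph-mon1 below is a placeholder for one of our monitor names):

# clock skew between the monitors
ceph time-sync-status
# quorum and election state as seen by one monitor
ceph daemon mon.ceph-mon1 mon_status
# the lease interval itself (mon_lease, 5 seconds by default)
ceph daemon mon.ceph-mon1 config get mon_lease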
On Wed, Jan 17, 2018 at 5:41 AM, Brad Hubbard wrote:
> On Wed, Jan 17, 2018 at 2:20 AM, Nikos Kormpakis wrote:
>> On 01/16/2018 12:53 AM, Brad Hubbard wrote:
>>> On Tue, Jan 16, 2018 at 1:35 AM, Alexander Peters wrote:
i created
I marked the PGs complete using ceph-objectstore-tool, and that fixed the issues
with the gateways going down. They have been up for 2 days now without issue
and made it through testing. I tried to extract the data from the failing
server, but I was unable to import it. The failing server was on
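For anyone finding this thread later, roughly the kind of invocation involved
(the data path and PG id below are placeholders, and the OSD has to be stopped
while the tool runs):

# mark a stuck PG complete on the OSD that holds it
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 3.1a --op mark-complete

# export a PG from a failing OSD so it can be imported elsewhere
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --pgid 3.1a --op export --file /tmp/pg-3.1a.export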
Hi guys,
I don't think we are really worried about how those patches affect OSD
performance (the patches can easily be disabled via sys), but we are quite
worried about how they affect librbd performance.
Librbd runs on the hypervisor, and even if you don't need to patch the
hypervisor kernel for
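For comparison runs before and after patching, something like this fio job
against a test image would show the librbd side (the pool, image and client
names below are placeholders):

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=bench-img
rw=randwrite
bs=4k
iodepth=32
runtime=120
time_based=1

[librbd-bench]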
That's interesting information. Unfortunately, our failing server just failed
completely, so copying the disks may be the only option at this point. It's a
JBOD setup, so we could pull the data off the drives directly. It would
have been nice to restore cleanly from the tool. I had
Yes, you're definitely right, docs can be improved. We'd be happy to
get a pull request with any improvements if someone wants to pick it
up.
Thanks,
Uejida
On Tue, Jan 16, 2018 at 1:30 PM, Youzhong Yang wrote:
> My bad ... Once I sent config request to us-east-1 (the master
My bad ... Once I sent the config request to us-east-1 (the master zone), it
works, and 'obo mdsearch' against the "us-east-es" zone works like a charm.
May I suggest that the following page be modified to reflect this
requirement so that someone else won't run into the same issue? I
understand it may
I'm doing a manual setup following
http://docs.ceph.com/docs/master/install/manual-deployment/
The ceph command hangs until I kill it. I have 1 monitor service started.
==
gentooserver ~ # ceph -s
^CError EINTR: problem getting command descriptions
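Extra output I can provide if useful; the debug flags make the client print
which monitor address it is trying, and the admin socket query checks the
monitor directly (mon.gentooserver is a guess based on my hostname):

gentooserver ~ # ceph -s --debug-ms=1 --debug-monc=10
gentooserver ~ # ceph daemon mon.gentooserver mon_status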
On Tue, Jan 16, 2018 at 12:20 PM, Youzhong Yang wrote:
> Hi Yehuda,
>
> I can use your tool obo to create a bucket, and upload a file to the object
> store, but when I tried to run the following command, it failed:
>
> # obo mdsearch buck --config='x-amz-meta-foo; string,
Dear Cephers,
We've upgraded the back end of our cluster from Jewel (10.2.10) to
Luminous (12.2.2). The upgrade went smoothly for the most part, except
we seem to be hitting an issue with cephfs. After about a day or two of
use, the MDS starts complaining about clients failing to respond to
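For reference, the output we can attach to show which clients are affected when
this happens (mds.mds1 below is a placeholder for our active MDS name):

ceph health detail
ceph daemon mds.mds1 session ls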
Hi Yehuda,
I can use your tool obo to create a bucket, and upload a file to the object
store, but when I tried to run the following command, it failed:
# obo mdsearch buck --config='x-amz-meta-foo; string, x-amz-meta-bar;
integer'
ERROR: {"status": 405, "resource": null, "message": "",
On Wed, Jan 17, 2018 at 2:20 AM, Nikos Kormpakis wrote:
> On 01/16/2018 12:53 AM, Brad Hubbard wrote:
>> On Tue, Jan 16, 2018 at 1:35 AM, Alexander Peters wrote:
>>> I created the dump output, but it looks very cryptic to me, so I can't really
>>> make much
Hello,
my Luminous ceph-osd daemons are crashing with a segmentation fault while
backfilling.
Is there any way to manually remove the problematic "data"?
-1> 2018-01-16 20:32:50.001722 7f27d53fe700 0 osd.86 pg_epoch:
917877 pg[3.80e( v 917875'69934125 (917365'69924082,917875'69934125] lb
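In case it matters, what I had in mind by manually removing the data, with the
OSD stopped first (the data path is a placeholder, and I would export the PG as
a safety copy before touching anything):

# list the objects in the affected PG
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 --pgid 3.80e --op list

# keep a full copy of the PG first
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 --pgid 3.80e \
    --op export --file /root/pg-3.80e.export

# then remove the single problematic object reported by --op list
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-86 \
    '<json object description from --op list>' remove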
On Tue, Jan 16, 2018 at 6:07 AM Alex Gorbachev
wrote:
> I found a few WAN RBD cluster design discussions, but not a local one,
> so was wondering if anyone has experience with a resilience-oriented
> short distance (<10 km, redundant fiber connections) cluster in two
>
Hi Martin,
On Mon, Jan 15, 2018 at 6:04 PM, Martin Emrich
wrote:
> Hi!
>
> After having a completely broken radosgw setup due to damaged buckets, I
> completely deleted all rgw pools, and started from scratch.
>
> But my problem is reproducible. After pushing ca.
Hello Mike,
Quoting Mike Lovell:
On Mon, Jan 8, 2018 at 6:08 AM, Jens-U. Mozdzen wrote:
Hi *,
[...]
1. Does setting the cache mode to "forward" lead to the above situation of
remaining locks on hot-storage pool objects? Maybe the clients' unlock
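To make question 1 concrete, the sequence I am referring to, with "hot-pool" as
a placeholder for our cache-tier pool name:

# switch the cache tier to forward mode
ceph osd tier cache-mode hot-pool forward --yes-i-really-mean-it
# flush and evict everything still held in the cache pool
rados -p hot-pool cache-flush-evict-all
# anything left behind, and is it still locked?
rados -p hot-pool ls
rados -p hot-pool lock list <object-name>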
On 01/16/2018 12:53 AM, Brad Hubbard wrote:
> On Tue, Jan 16, 2018 at 1:35 AM, Alexander Peters wrote:
>> I created the dump output, but it looks very cryptic to me, so I can't really
>> make much sense of it. Is there anything to look for in particular?
>
> Yes, basically we
I found a few WAN RBD cluster design discussions, but not a local one,
so was wondering if anyone has experience with a resilience-oriented
short distance (<10 km, redundant fiber connections) cluster in two
datacenters with a third site for quorum purposes only?
I can see two types of
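To make the question concrete, the sort of CRUSH rule I have in mind for
spreading replicas across the two datacenters (the bucket names below are
placeholders, and the quorum-only third site would host a monitor, not OSDs):

rule two_dc_replicated {
        id 1
        type replicated
        min_size 2
        max_size 4
        # pick both datacenters, then two hosts in each
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}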
I would love to do this, but presently do not have the resources.
Would there be (or would anyone be interested in starting) a CRUSH map
cafe site for sharing common CRUSH maps, or a PG-calc style CRUSH map
generator for common use cases?
It seems there's a lot of discussions about best practices
Ditto, cya in Darmstadt!
On 01/16/2018 08:47 AM, Wido den Hollander wrote:
> Yes! Looking forward :-) I'll be there :)
>
> Wido
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284
(AG Nürnberg)
Hmmm, I have to disagree with
'too many services'
What do you mean? There is a process for each osd, mon, mgr and mds.
There are fewer processes running than on a default Windows fileserver.
What is the complaint here?
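For what it is worth, the whole set of daemons is visible with one command
(unit and daemon names will differ per node):

systemctl list-units 'ceph*' --type=service
ps -eo comm | grep '^ceph-' | sort | uniq -c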
'manage everything by your command-line'
What is so bad about this? Even
Thank you Marc,
I wasn't aware that was an option, so it will be very useful in the future.
I see for Ubuntu you can make use of debsums to verify packages.
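For anyone else reading along, the commands I mean (the package names are just
examples):

# Debian/Ubuntu: verify installed package contents against stored checksums
debsums -s ceph-osd ceph-common
# the rough equivalent on RPM-based distributions
rpm -V ceph-osd ceph-common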
Sadly I'm still looking for a solution to my host issue though.
Kind regards
Geoffrey Rhodes
On 15 January 2018 at 23:31, Marc Roos
Hi Massimiliano,
On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:
> _*3) Management complexity*_
> Ceph is amazing, but it is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I have read, this management
> console just shows basic