> Sent: 01 May 2018 16:46
> To: Wido den Hollander <w...@42on.com>
> Cc: ceph-users <ceph-users@lists.ceph.com>; Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-users] Intel Xeon Scalable and CPU frequency scaling on
> NVMe/SSD Ceph OSDs
>
> Also curious
On 05/07/2018 04:53 PM, Reed Dier wrote:
> I’ll +1 on InfluxDB rather than Prometheus, though I think having a version
> for each infrastructure path would be best.
> I’m sure plenty here have existing InfluxDB infrastructure as their TSDB of
> choice, and moving to Prometheus would be less
Hi,
I've been trying to get the lowest latency possible out of the new Xeon
Scalable CPUs and so far I got down to 1.3ms with the help of Nick.
However, I can't seem to pin the CPUs to always run at their maximum
frequency.
If I disable power saving in the BIOS they stay at 2.1GHz (Silver
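For reference, the usual knobs here are the frequency governor and C-states;
a minimal sketch, assuming the cpupower tool is installed:

$ cpupower frequency-set -g performance
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

C-states often matter more for latency than the governor alone; limiting them
on the kernel command line with intel_idle.max_cstate=1 is one common
approach.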
On 04/30/2018 10:25 PM, Gregory Farnum wrote:
>
>
> On Thu, Apr 26, 2018 at 11:36 AM Wido den Hollander <w...@42on.com
> <mailto:w...@42on.com>> wrote:
>
> Hi,
>
> I've been investigating the per object overhead for BlueStore as I've
>
On 04/27/2018 08:31 PM, David Turner wrote:
> I'm assuming that the "very bad move" means that you have some PGs not
> in active+clean. Any non-active+clean PG will prevent your mons from
> being able to compact their db store. This is by design so that if
> something were to happen where the
Hi,
I've been investigating the per object overhead for BlueStore as I've
seen this has become a topic for a lot of people who want to store a lot
of small objects in Ceph using BlueStore.
I've written a piece of Python code which can be run on a server
running OSDs and will print the overhead.
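The script itself is not included in this excerpt; the rough idea, as a shell
sketch (assuming the Luminous counter names and jq installed), is:

$ for sock in /var/run/ceph/ceph-osd.*.asok; do
    onodes=$(ceph daemon $sock perf dump | jq '.bluestore.bluestore_onodes')
    db=$(ceph daemon $sock perf dump | jq '.bluefs.db_used_bytes')
    [ "$onodes" -gt 0 ] && echo "$sock: $((db / onodes)) bytes of DB per object"
  done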
On 04/24/2018 05:01 AM, Mohamad Gebai wrote:
>
>
> On 04/23/2018 09:24 PM, Christian Balzer wrote:
>>
>>> If anyone has some ideas/thoughts/pointers, I would be glad to hear them.
>>>
>> RAM, you'll need a lot of it, even more with Bluestore given the current
>> caching.
>> I'd say 1GB per TB
On 04/23/2018 12:09 PM, John Spray wrote:
> On Fri, Apr 20, 2018 at 9:32 AM, Sean Purdy wrote:
>> Just a quick note to say thanks for organising the London Ceph/OpenStack
>> day. I got a lot out of it, and it was nice to see the community out in
>> force.
>
> +1,
On 04/12/2018 04:36 AM, ? ?? wrote:
> Hi,
>
> For anybody who may be interested, here I share a process of locating the
> reason for ceph cluster performance slow down in our environment.
>
> Internally, we have a cluster with capacity 1.1PB, used 800TB, and raw user
> data is about 500TB.
On 04/10/2018 09:45 PM, Gregory Farnum wrote:
> On Tue, Apr 10, 2018 at 12:36 PM, Wido den Hollander <w...@42on.com> wrote:
>>
>>
>> On 04/10/2018 09:22 PM, Gregory Farnum wrote:
>>> On Tue, Apr 10, 2018 at 6:32 AM Wido den Hollander <w...@42o
On 04/10/2018 09:22 PM, Gregory Farnum wrote:
> On Tue, Apr 10, 2018 at 6:32 AM Wido den Hollander <w...@42on.com
> <mailto:w...@42on.com>> wrote:
>
> Hi,
>
> There have been numerous threads about this in the past, but I wanted to
> bring
Hi,
There have been numerous threads about this in the past, but I wanted to
bring this up again in a new situation.
Running with Luminous v12.2.4 I'm seeing some odd memory and CPU usage
when using the ceph-fuse client to mount a multi-MDS CephFS filesystem.
health: HEALTH_OK
services:
On 04/09/2018 04:01 PM, Fulvio Galeazzi wrote:
> Hallo,
>
> I am wondering whether I could have the admin socket functionality
> enabled on a server which is a pure Ceph client (no MDS/MON/OSD/whatever
> running on such a server). Is this at all possible? How should ceph.conf
> be configured?
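It is possible; a minimal ceph.conf sketch for a pure client (the $pid and
$cctid variables keep multiple client processes on one host from colliding):

[client]
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

The user running the client needs write access to /var/run/ceph.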
On 04/04/2018 08:58 PM, Robert Stanford wrote:
>
> I read a couple of versions ago that ceph-deploy was not recommended
> for production clusters. Why was that? Is this still the case? We
> have a lot of problems automating deployment without ceph-deploy.
>
>
In the end it is just a
On 04/04/2018 07:30 PM, Damian Dabrowski wrote:
> Hello,
>
> I wonder if there is any way to run `trimfs` on an rbd image which is
> currently used by the KVM process? (when I don't have access to the VM)
>
> I know that I can do this by qemu-guest-agent but not all VMs have it
> installed.
>
> I can't
s3://test/ (bucket):
>Location: us-east-1
>Payer: BucketOwner
>Expiration Rule: none
>policy:none
>cors: none
>ACL: *anon*: READ
>ACL: Test1 User: FULL_CONTROL
>URL: http://192.168.1.111:7480/test/
>
>
>
>
On 03/28/2018 11:59 AM, Marc Roos wrote:
>
>
>
>
>
> Do you have maybe some pointers, or example? ;)
>
When you upload using s3cmd, try the -P flag, which will set the
public-read ACL.
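For example, assuming the bucket from above:

$ s3cmd put -P file.txt s3://test/file.txt

-P is short for --acl-public.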
Wido
> This XML file does not appear to have any style information associated
> with it. The
On 03/28/2018 01:34 AM, Tracy Reed wrote:
>> health: HEALTH_WARN
>> recovery 1230/13361271 objects misplaced (0.009%)
>>
>> and no recovery is happening. I'm not sure why. This hasn't happened
>> before. But the mon db had been growing since long before this
>> circumstance.
>
>
On 03/27/2018 12:58 AM, Jared H wrote:
> I have three datacenters with three storage hosts in each, which house
> one OSD/MON per host. There are three replicas, one in each datacenter.
> I want the cluster to be able to survive a nuke dropped on 1/3
> datacenters, scaling up to 2/5 datacenters.
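A sketch of the CRUSH rule side of this, placing one replica per datacenter
(bucket names assumed, Luminous rule syntax):

rule replicated_dc {
        id 1
        type replicated
        min_size 2
        max_size 5
        step take default
        step chooseleaf firstn 0 type datacenter
        step emit
}

Quorum is a separate concern: a majority of the MONs must survive the lost
datacenter.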
On 03/27/2018 06:40 AM, Tracy Reed wrote:
> Hello all,
>
> It seems I have underprovisioned storage space for my mons and my
> /var/lib/ceph/mon filesystem is getting full. When I first started using
> ceph this only took up tens of megabytes and I assumed it would stay
> that way and 5G for
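If the cluster is otherwise healthy (all PGs active+clean), the store can
usually be compacted; a hedged example, assuming the MON id matches the short
hostname:

$ ceph tell mon.$(hostname -s) compact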
On 03/21/2018 06:48 PM, Andre Goree wrote:
> I'm trying to determine the best way to go about configuring IO
> rate-limiting for individual images within an RBD pool.
>
> Here [1], I've found that OpenStack appears to use Libvirt's "iotune"
> parameter, however I seem to recall reading about
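For what it's worth, the libvirt route can also be driven at runtime; a
hedged example (domain and device names hypothetical):

$ virsh blkdeviotune instance-0001 vda --total-iops-sec 500 --live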
Hello Ceph (and CloudStack ;-) ) people!
Together with the Apache CloudStack [0] project we are organizing a Ceph
Day in London on April 19th this year.
As there are many users using Apache CloudStack with Ceph as the storage
behind their Virtual Machines or using Ceph as a object store in
On 03/08/2018 08:01 AM, 赵贺东 wrote:
Hi All,
Every time after we activate an OSD, we get “Structure needs cleaning” in
/var/lib/ceph/osd/ceph-xxx/current/meta.
/var/lib/ceph/osd/ceph-xxx/current/meta
# ls -l
ls: reading directory .: Structure needs cleaning
total 0
Could anyone say something
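“Structure needs cleaning” is XFS reporting metadata corruption. Not from
this thread, but the usual first step is a filesystem repair with the OSD
stopped (device name hypothetical):

$ umount /var/lib/ceph/osd/ceph-xxx
$ xfs_repair /dev/sdX1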
also disabled the balancer for now (will report an issue) and removed
all other upmap entries:
$ ceph osd dump|grep pg_upmap_items|awk '{print $2}'|xargs -n 1 ceph osd
rm-pg-upmap-items
Thanks for the hint!
Wido
mike
On Thu, Feb 22, 2018 at 10:28 AM, Wido den Hollander <w...@42on.
Hi,
I have a situation with a cluster which was recently upgraded to
Luminous and has a PG mapped to OSDs on the same host.
root@man:~# ceph pg map 1.41
osdmap e21543 pg 1.41 (1.41) -> up [15,7,4] acting [15,7,4]
root@man:~#
root@man:~# ceph osd find 15|jq -r '.crush_location.host'
n02
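The same jq trick makes the collision easy to spot; any host printed twice is
a problem:

$ for osd in 15 7 4; do
    ceph osd find $osd | jq -r '.crush_location.host'
  done | sort | uniq -d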
On 02/21/2018 01:39 PM, Konstantin Shalygin wrote:
Is there any changelog for this release ?
https://github.com/ceph/ceph/pull/20503
And this one: https://github.com/ceph/ceph/pull/20500
Wido
k
They aren't there yet: http://docs.ceph.com/docs/master/release-notes/
And no Git commit yet:
https://github.com/ceph/ceph/commits/master/doc/release-notes.rst
I think the Release Manager is doing their best to release them asap.
12.2.3 packages were released this morning :)
Wido
On
Hi,
After having a great Ceph Day last week in Germany it's time for the
announcement of a new Ceph Day in London [0].
This will be a full day packed with talks about Ceph (working on the
schedule), but in addition this event will be co-hosted with the Apache
CloudStack [1] project.
As we have
On 02/12/2018 03:16 PM, Behnam Loghmani wrote:
so you mean that rocksdb and osdmap filled the disk with about 40G for only
800k files?
I think that's not reasonable; it's too high
Could you check the output of the OSDs using a 'perf dump' on their
admin socket?
The 'bluestore' and 'bluefs' sections are the ones to look at.
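For example (counter names as in Luminous):

$ ceph daemon osd.0 perf dump | jq '{bluestore: .bluestore, bluefs: .bluefs}'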
curious :)
Thanks
Kai
On 12.02.2018 10:39, Wido den Hollander wrote:
The next one is in London on April 19th
On 02/12/2018 12:33 AM, c...@elchaka.de wrote:
On 9 February 2018 at 11:51:08 CET, Lenz Grimmer wrote:
Hi all,
On 02/08/2018 11:23 AM, Martin Emrich wrote:
I just want to thank all organizers and speakers for the awesome Ceph
Day at Darmstadt, Germany yesterday.
I
Hello Ceph (and CloudStack ;-) ) people!
Together with the Apache CloudStack [0] project we are organizing a Ceph
Day in London on April 19th this year.
As there are many users using Apache CloudStack with Ceph as the storage
behind their Virtual Machines or using Ceph as an object store in
On 02/06/2018 01:00 PM, Kai Wagner wrote:
Hi all,
I had the idea to use a RBD device as the SBD device for a pacemaker
cluster. So I don't have to fiddle with multipathing and all that stuff.
Has someone already tested this somewhere and can tell how the cluster
reacts to this?
I think this
on Atom 8-core CPUs to keep power
consumption low, so we went low on the number of PGs
2. The system is still growing and after adding OSDs recently we didn't
increase the number of PGs yet
On Sat, Feb 3, 2018 at 10:50 AM, Wido den Hollander <w...@42on.com
<mailto:w...@42on.com>> wro
Hi,
I just wanted to inform people about the fact that Monitor databases can
grow quite big when you have a large cluster which is performing a very
long rebalance.
I'm posting this on ceph-users and ceph-large as it applies to both, but
you'll see this sooner on a cluster with a lot of
On 01/31/2018 10:24 AM, David Turner wrote:
I use gdisk to remove the partition and partprobe for the OS to see the
new partition table. You can script it with sgdisk.
That works indeed! I usually write 100M as well using dd just to be sure
any other left-overs are gone.
$ dd if=/dev/zero of=/dev/sdX bs=1M count=100
Hi,
Is there an ETA yet for 12.2.3? Looking at the tracker there aren't that
many outstanding issues: http://tracker.ceph.com/projects/ceph/roadmap
On Github we have more outstanding PRs though for the Luminous
milestone: https://github.com/ceph/ceph/milestone/10
Are we expecting 12.2.3 in
On 11/03/2017 02:43 PM, Mark Nelson wrote:
On 11/03/2017 08:25 AM, Wido den Hollander wrote:
On 3 November 2017 at 13:33, Mark Nelson <mnel...@redhat.com> wrote:
On 11/03/2017 02:44 AM, Wido den Hollander wrote:
On 3 November 2017 at 0:09, Nigel Williams
<ni
On 01/29/2018 07:26 PM, Nico Schottelius wrote:
Hey Wido,
[...]
Like I said, latency, latency, latency. That's what matters. Bandwidth
usually isn't a real problem.
I imagined that.
What latency do you have with an 8k ping between hosts?
As the link will be setup this week, I cannot
On 01/29/2018 06:33 PM, Nico Schottelius wrote:
Good evening list,
we are soon expanding our data center [0] to a new location [1].
We are mainly offering VPS / VM Hosting, so rbd is our main interest.
We have a low latency 10 Gbit/s link between our other location [2] and
we are wondering,
On 01/29/2018 06:15 PM, David Turner wrote:
+1 for Gregory's response. With filestore, if you lost a journal SSD
and followed the steps you outlined, you are leaving yourself open to
corrupt data. Any write that was ack'd by the journal, but not flushed
to the disk would be lost and
with datacenter 'stray'. So using uniform
anywhere in the crush map disables/removes new class information?
Good question! I didn't try a combination of the two yet. It's possible
that it doesn't work indeed.
Wido
On 2018-01-29 15:42, Wido den Hollander wrote:
On 01/29/2018 03:38 PM, Niklas wrote:
When
On 01/29/2018 04:21 PM, Jake Grimmett wrote:
Hi Nick,
many thanks for the tip, I've set "osd_max_pg_per_osd_hard_ratio = 3"
and restarted the OSDs.
So far it's looking promising; I now have 56% objects misplaced rather
than 3021 pgs inactive. The cluster is now working hard to rebalance.
PGs
On 01/29/2018 03:38 PM, Niklas wrote:
When adding new OSDs to a host, the CRUSH weight for the datacenter one
level up is changed to reflect the change.
What configuration is used to stop Ceph from automatically managing
weights on the levels above the host?
In your case (datacenters) you
On 01/29/2018 02:07 PM, Nick Fisk wrote:
Hi Jake,
I suspect you have hit an issue that a few others and I have hit in
Luminous. By increasing the number of PGs before all the data has
re-balanced, you have probably exceeded the hard PG-per-OSD limit.
See this thread
On 01/29/2018 01:14 PM, Niklas wrote:
Ceph luminous 12.2.2
$: ceph osd pool create hybrid 1024 1024 replicated hybrid
$: ceph -s
cluster:
id: e07f568d-056c-4e01-9292-732c64ab4f8e
health: HEALTH_WARN
Degraded data redundancy: 431 pgs unclean, 431 pgs
degraded, 431
On 01/29/2018 12:29 PM, Nathan Harper wrote:
Hi,
I don't know if this is strictly a Ceph issue, but hoping someone will
be able to shed some light. We have an Openstack environment (Ocata)
backed onto a Jewel cluster.
We recently ran into some issues with full OSDs but couldn't work out
-1
osd.0 1040 osdmap says I am destroyed, exiting
After this I followed Reed Dier's steps somewhat, zapped the disk again,
removed auth, crush, osd. Zapped the disk/parts and device mapper.
Could then run the create command without issues.
Kind Regards,
David Majchrzak
On 26 Jan 2018 at 18:56,
the auth key. Afterwards
ceph-volume will use the bootstrap-osd key to create it again.
I didn't try this with ceph-volume yet, but I'm in the process of doing
the same with ceph-disk going to BlueStore and that works just fine.
Wido
On 26 Jan 2018 at 18:50, Wido den Hollander <
On 01/26/2018 06:37 PM, David Majchrzak wrote:
Ran:
ceph auth del osd.0
ceph auth del osd.6
ceph auth del osd.7
ceph osd rm osd.0
ceph osd rm osd.6
ceph osd rm osd.7
which seems to have removed them.
Did you destroy the OSD prior to running ceph-volume?
$ ceph osd destroy 6
After you've
On 01/22/2018 08:11 AM, Hüseyin Atatür YILDIRIM wrote:
Hello,
How can I set the mon_clock_drift_allowed tunable; should I set it in
the monitor nodes' /etc/ceph/ceph.conf, or where? In the [global]
section, or in a [mon] section?
Both would work, but the [mon] section is the best.
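For example (the value is in seconds; 0.5 here is only an illustration, the
default is far stricter):

[mon]
    mon clock drift allowed = 0.5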
On 01/20/2018 02:02 PM, Marc Roos wrote:
If I test my connections with sockperf via a 1Gbit switch I get around
25usec; when I test the 10Gbit connection via the switch I get around
12usec. Is that normal, or should there be a difference of 10x?
No, that's normal.
Tests with 8k ping
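An 8k ping test is just (host name hypothetical):

$ ping -c 10 -s 8192 <other-host>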
On 01/20/2018 07:56 PM, Stefan Priebe - Profihost AG wrote:
Hello,
bcache didn't support partitions in the past, so a lot of our OSDs
have their data directly on:
/dev/bcache[0-9]
But that means I can't give them the needed part type of
4fbd7e29-9d25-41b8-afd0-062c0ceff05d and that
On 01/16/2018 06:51 AM, Leonardo Vaz wrote:
Hey Cephers!
We are proud to announce our first Ceph Day in 2018 which happens on
February 7 at the Deutsche Telekom AG Office in Darmstadt (25 km South
from Frankfurt Airport).
The conference schedule[1] is being finished and the registration
Hi,
Is there a way to easily modify the device-class of devices on an offline
CRUSHMap?
I know I can decompile the CRUSHMap and do it, but that's a lot of work
in a large environment.
In larger environments I'm a fan of downloading the CRUSHMap, modifying
it to my needs, testing it and
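For reference, that round-trip looks like this (the test parameters are just
an example):

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
$ vi crushmap.txt
$ crushtool -c crushmap.txt -o crushmap.new
$ crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings
$ ceph osd setcrushmap -i crushmap.new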
ines.
Wido
Cheers, Dan
[1] http://www.wiwynn.com/english/product/type/details/32?ptype=28
On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander <w...@42on.com> wrote:
Hi,
I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
I'm looking for.
First of all, the geek in m
On Fri, Dec 22, 2017 at 12:04 PM, Wido den Hollander <w...@42on.com> wrote:
Hi,
I'm looking at OCP [0] servers for Ceph and I'm not able to find yet what
I'm looking for.
First of all, the geek in me loves OCP and the design :-) Now I'm trying to
match it with Ceph.
Looking at wiwynn [1] t
Hi,
I'm looking at OCP [0] servers for Ceph and I'm not able to find yet
what I'm looking for.
First of all, the geek in me loves OCP and the design :-) Now I'm trying
to match it with Ceph.
Looking at wiwynn [1] they offer a few OCP servers:
- 3 nodes in 2U with a single 3.5" disk [2]
-
On 12/19/2017 04:33 PM, Garuti, Lorenzo wrote:
Hi all,
we are seeing very strange behavior with exclusive locks.
We have one image called test inside a pool called app.
The exclusive lock feature means that only one client can write at the same
time, so they will exchange the lock when
On 12/12/2017 02:18 PM, David Turner wrote:
I always back up my crush map. Someone making a mistake to the crush map
will happen and being able to restore last night's crush map has been
wonderful. That's all I really back up.
Yes, that's what I would suggest as well. Just have a daily
On 12/08/2017 10:27 AM, Florent B wrote:
Hi everyone,
A few days ago I upgraded a cluster from Kraken to Luminous.
I have a few mail servers on it, running Ceph-Fuse & Dovecot.
And since the day of upgrade, Dovecot is reporting some corrupted files
on a large account :
still
Jewel?
Multisite isn't used, it's just a local RGW.
Wido
> Yehuda
>
> On Tue, Dec 5, 2017 at 3:20 PM, Wido den Hollander <w...@42on.com> wrote:
> > Hi,
> >
> > I haven't tried this before but I expect it to work, but I wanted to check
> > be
> On 6 December 2017 at 10:17, Caspar Smit <caspars...@supernas.eu> wrote:
>
>
> 2017-12-05 18:39 GMT+01:00 Richard Hesketh <richard.hesk...@rd.bbc.co.uk>:
>
> > On 05/12/17 17:10, Graham Allan wrote:
> > > On 12/05/2017 07:20 AM, Wido den Hollander
> On 5 December 2017 at 18:39, Richard Hesketh
> <richard.hesk...@rd.bbc.co.uk> wrote:
>
>
> On 05/12/17 17:10, Graham Allan wrote:
> > On 12/05/2017 07:20 AM, Wido den Hollander wrote:
> >> Hi,
> >>
> >> I haven't tried this before but
> On 5 December 2017 at 15:27, Jason Dillaman <jdill...@redhat.com> wrote:
>
>
> On Tue, Dec 5, 2017 at 9:13 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> >> On 29 November 2017 at 14:56, Jason Dillaman <jdill...@redhat.com> wrote:
> >
client
> breaks the exclusive lock held by a (dead) client.
>
We noticed another VM going into RO when a snapshot was created. When checking
last week this instance had a watcher, but after the snapshot/RO we found out
it no longer has a watcher registered.
Any suggestions or ideas?
Wido
Hi,
I haven't tried this before but I expect it to work, but I wanted to check
before proceeding.
I have a Ceph cluster which is running with manually formatted FileStore XFS
disks, Jewel, sysvinit and Ubuntu 14.04.
I would like to upgrade this system to Luminous, but since I have to
Hi,
I just didn't think about it earlier, so I did it this morning:
https://eu.ceph.com/
Using Lets Encrypt eu.ceph.com is now available over HTTPS as well.
If you are using this mirror you might want to use SSL to download your
packages and keys.
Wido
> On 4 December 2017 at 13:10, Hans van den Bogert
> wrote:
>
>
> Hi all,
>
> Is there a way to get the current usage of the bluestore's block.db?
> I'd really like to monitor this as we have a relatively high number of
> objects per OSD.
>
Yes, using 'perf dump':
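For example (assuming the Luminous counter names):

$ ceph daemon osd.0 perf dump | jq '.bluefs | {db_used_bytes, db_total_bytes}'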
> On 4 December 2017 at 10:59, SOLTECSIS - Victor Rodriguez Cortes
> wrote:
>
>
> Hello,
>
> I have upgraded from v12.2.1 to v12.2.2 and now a warning shows using
> "ceph status":
>
> ---
> # ceph status
> cluster:
> id:
> health: HEALTH_WARN
>
> On 30 November 2017 at 14:19, Jason Dillaman <jdill...@redhat.com> wrote:
>
>
> On Thu, Nov 30, 2017 at 4:00 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> >> On 29 November 2017 at 14:56, Jason Dillaman <jdill...@redhat.com>
ened here.
I've asked the OpenStack team for a gcore dump, but they have to get that
cleared before they can send it to me.
This might take a bit of time!
Wido
> As for the VM going R/O, that is the expected behavior when a client
> breaks the exclusive lock held by a (dead) client.
>
Hi,
In an OpenStack environment I encountered a VM which went into R/O mode after an
RBD snapshot was created.
Digging into this I found 10s (out of thousands) RBD images which DO have a
running VM, but do NOT have a watcher on the RBD image.
For example:
$ rbd status
> On 28 November 2017 at 12:54, Alfredo Deza <ad...@redhat.com> wrote:
>
>
> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <w...@42on.com> wrote:
> >
> >> On 27 November 2017 at 14:36, Alfredo Deza <ad...@redhat.com> wrote:
> >>
>
> On 27 November 2017 at 14:36, Alfredo Deza wrote:
>
>
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
> On 27 November 2017 at 14:14, German Anders wrote:
>
>
> Hi Jason,
>
> We are using librbd (librbd1-0.80.5-9.el6.x86_64), ok I will change those
> parameters and see if that changes something
>
0.80? Is that a typo? You should really use 12.2.1 on the client.
Wido
> On 27 November 2017 at 14:02, German Anders wrote:
>
>
> Hi All,
>
> I've a performance question, we recently installed a brand new Ceph cluster
> with all-nvme disks, using ceph version 12.2.0 with bluestore configured.
> The back-end of the cluster is using a bond
> On 23 November 2017 at 7:43, Prasad Bhalerao
> wrote:
>
>
> Hi,
>
> I am new to Ceph and I have some questions regarding it.
>
> Could you please help out?
>
Yes, but keep in mind a lot of these questions have been answered previously.
> What is default
> On 20 November 2017 at 14:02, Iban Cabrillo wrote:
>
>
> Hi cephers,
> I was trying to migrate from FileStore to BlueStore following the
> instructions, but after the ceph-disk prepare the new OSD did not join
> the cluster again:
>
>[root@cephadm ~]# ceph
> On 20 November 2017 at 11:56, Matteo Dacrema wrote:
>
>
> Hi,
>
> I need to switch a cluster of over 200 OSDs from replica 2 to replica 3
> There are two different crush maps for HDDs and SSDs, also mapped to two
> different pools.
>
> Is there a best practice to use?
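The switch itself is just a pool setting; a hedged sketch (pool name
hypothetical, and you may want to throttle backfill first):

$ ceph tell osd.* injectargs '--osd_max_backfills 1'
$ ceph osd pool set rbd-hdd size 3
$ ceph osd pool set rbd-hdd min_size 2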
t thanks for
> bringing it to my attention. We are using the S3 interface of RGW
> exclusively (nothing custom in there).
>
Only a custom application will use libradosstriper. So don't worry.
Wido
>
> On Thu, Nov 16, 2017 at 9:41 AM, Wido den Hollander <w...@42on.com>
> On 16 November 2017 at 16:32, Robert Stanford
> wrote:
>
>
> Once 'osd max write size' (90MB by default I believe) is exceeded, does
> Ceph reject the object (which is coming in through RGW), or does it break
> it up into smaller objects (of max 'osd max write
> On 16 November 2017 at 16:20, David Turner wrote:
>
>
> There is another thread in the ML right now covering this exact topic. The
> general consensus is that for most deployments, a separate network for
> public and cluster is wasted complexity.
>
Indeed. Just for
> On 16 November 2017 at 14:46, Caspar Smit <caspars...@supernas.eu> wrote:
>
>
> 2017-11-16 14:43 GMT+01:00 Wido den Hollander <w...@42on.com>:
>
> >
> > > On 16 November 2017 at 14:40, Georgios Dimitrakakis <
> > gior...@acmac.uo
> On 16 November 2017 at 14:40, Georgios Dimitrakakis
> wrote:
>
>
> @Sean Redmond: No I don't have any unfound objects. I only have "stuck
> unclean" with "active+degraded" status
> @Caspar Smit: The cluster is scrubbing ...
>
> @All: My concern is because of one
> On 15 November 2017 at 15:03, Richard Hesketh
> wrote:
>
>
> On 15/11/17 12:58, Micha Krause wrote:
> > Hi,
> >
> > I've built a few clusters with separated public/cluster networks, but I'm
> > wondering if this is really
> > the way to go.
> >
> >
> On 8 November 2017 at 22:41, Sage Weil wrote:
>
>
> Who is running nfs-ganesha's FSAL to export CephFS? What has your
> experience been?
>
A customer of mine is doing this. They are running Ubuntu and my experience is
that getting Ganesha compiled is already a pain
> On 7 November 2017 at 22:54, Scottix wrote:
>
>
> Hey,
> I recently updated to luminous and started deploying bluestore osd nodes. I
> normally set osd_max_backfills = 1 and then ramp up as time progresses.
>
> Although with bluestore it seems like I wasn't able to do
> On 7 November 2017 at 18:27, Brady Deetz wrote:
>
>
> I'm guessing this is not expected behavior
>
It's not indeed.
Two PRs for this are out there:
- https://github.com/ceph/ceph/pull/18734
- https://github.com/ceph/ceph/pull/18515
Where the last one is the one you
> On 7 November 2017 at 10:14, Jan Pekař - Imatic wrote:
>
>
> Additional info - it is not librbd related; I mapped the disk through
> rbd map and it was the same - the virtuals were stuck/frozen.
> It happened exactly when my log showed
>
Why aren't you using librbd? Is
> On 6 November 2017 at 20:17, Yehuda Sadeh-Weinraub <yeh...@redhat.com> wrote:
>
>
> On Mon, Nov 6, 2017 at 7:29 AM, Wido den Hollander <w...@42on.com> wrote:
> > Hi,
> >
> > On a Ceph Luminous (12.2.1) environment I'm seeing RGWs stall and a
Hi,
On a Ceph Luminous (12.2.1) environment I'm seeing RGWs stall, and at about the
same time I see these errors in the RGW logs:
2017-11-06 15:50:24.859919 7f8f5fa1a700 0 ERROR: failed to distribute cache
for
> On 3 November 2017 at 13:33, Mark Nelson <mnel...@redhat.com> wrote:
>
>
>
>
> On 11/03/2017 02:44 AM, Wido den Hollander wrote:
> >
> >> On 3 November 2017 at 0:09, Nigel Williams
> >> <nigel.willi...@tpac.org.au> wrote:
>
> On 3 November 2017 at 0:09, Nigel Williams
> wrote:
>
>
> On 3 November 2017 at 07:45, Martin Overgaard Hansen
> wrote:
> > I want to bring this subject back in the light and hope someone can provide
> > insight regarding the issue, thanks.
> On 30 October 2017 at 17:42, alastair.dewhu...@stfc.ac.uk wrote:
>
>
> Hello
>
> We have a dual stack test machine running RadosGW. It is currently
> configured for IPv4 only. This is done in the ceph.conf with:
> rgw_frontends="civetweb port=443s
>
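A hedged guess at the IPv6 variant of that line (civetweb takes a bind
address before the port):

rgw_frontends = "civetweb port=[::]:443s"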
> > applies the wrong value (1500).
> >
> > The network guys changed the MTU parameter offered via SLAAC and now it's
> > working:
> >
> > root@ceph-node03:~# cat /proc/sys/net/ipv6/conf/eno1/mtu
> > 9000
> > root@ceph-node03:~# ping6 -c 3 -M do -s 8952 c
> On 27 October 2017 at 14:22, Félix Barbeira wrote:
>
>
> Hi,
>
> I'm trying to configure a Ceph cluster using IPv6 only, but I can't enable
> jumbo frames. I made the definition in the
> 'interfaces' file and it seems like the value is applied, but when I test it
> looks
> On 25 October 2017 at 10:39, koukou73gr <koukou7...@yahoo.com> wrote:
>
>
> On 2017-10-25 11:21, Wido den Hollander wrote:
> >
> >> On 25 October 2017 at 5:58, Christian Sarrasin
> >> <c.n...@cleansafecloud.com> wrote:
> >>
> &
> On 25 October 2017 at 5:58, Christian Sarrasin
> wrote:
>
>
> I'm planning to migrate an existing Filestore cluster with (SATA)
> SSD-based journals fronting multiple HDD-hosted OSDs - should be a
> common enough setup. So I've been trying to parse various
> On 22 October 2017 at 18:45, Sean Sullivan wrote:
>
>
> On freshly installed Ubuntu 16.04 servers with the HWE kernel selected
> (4.10), I cannot use ceph-deploy or ceph-disk to provision OSDs.
>
>
> whenever I try I get the following::
>
> ceph-disk -v prepare