On Fri, Jan 11, 2019 at 11:24 AM Sergio A. de Carvalho Jr. <scarvalh...@gmail.com> wrote:
> Thanks for the answers, guys!
>
> Am I right to assume msgr2 (http://docs.ceph.com/docs/mimic/dev/msgr2/)
> will provide encryption between Ceph daemons as well as between clients and
> daemons?
>
> Does
Hi Ilya,
thank you for the clarification. After setting "osd_map_messages_max" to
10, the I/O errors and the MDS error "MDS_CLIENT_LATE_RELEASE" are gone.
The "mon session lost, hunting for new mon" messages didn't go
away... could it be that this is related to
It looks like your network was brought down before the rbd volume was unmounted.
This is a known issue without a good programmatic workaround, and you’ll
need to adjust your configuration.
On Tue, Jan 22, 2019 at 9:17 AM Gao, Wenjun wrote:
> I’m using krbd to map an rbd device to a VM; it appears that when the
This doesn’t look familiar to me. Is the cluster still doing recovery, so that
we can at least expect the stuck PGs to make progress when the “out” OSDs get
removed from the set?
On Tue, Jan 22, 2019 at 2:44 PM Wido den Hollander wrote:
> Hi,
>
> I've got a couple of PGs which are stuck in backfill_toofull,
Hi,
I added the --no-mon-config option to the command because the error message
suggested I could try it, and the command then executed successfully. I have
now added the OSDs into the cluster and everything seems fine.
I'm wondering: does this option have any effect on the OSDs or not?
Thanks
On Fri, 25 Jan 2019, Marc Roos wrote:
Hi Ketil,
We also offer independent Ceph consulting and have been operating
production clusters for more than 4 years, with up to 2,500 OSDs.
You can meet many of us in person at the next Cephalocon in Barcelona
(https://ceph.com/cephalocon/barcelona-2019/).
Regards, Joachim
Clyso GmbH
Homepage:
Thanks Marc,
When I next have physical access to the cluster, I’ll add some more OSDs. Would
that cause the hanging though?
No takers on the bluestore salvage?
thanks,
rik.
> On 20 Jan 2019, at 20:36, Marc Roos wrote:
>
>
> If you have a backfillfull condition, no PGs will be able to migrate.
>
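As an aside, a quick way to see how close the OSDs are to the ratios involved,
and to raise the backfillfull threshold temporarily while capacity is added
(the 0.92 value below is only an illustrative assumption), is roughly:

ceph osd df                            # per-OSD utilisation
ceph osd dump | grep ratio             # full / backfillfull / nearfull ratios in effect
ceph osd set-backfillfull-ratio 0.92   # temporary relief only

Raising the ratio only buys headroom; the real fix is more capacity or rebalancing.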
Greetings,
I have a ceph cluster that I've been running since the argonaut release.
I've been upgrading it over the years and now it's mostly on mimic. A
number of the tools (e.g. ceph-volume) require the bootstrap keys that are
assumed to be in /var/lib/ceph/bootstrap-*/. Because of the history
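In case it is useful, on a cluster that predates those directories the bootstrap
keyrings can usually be re-exported from the monitors; a minimal sketch, assuming
the standard client.bootstrap-* entities (or that creating them with the bootstrap
profile is acceptable):

mkdir -p /var/lib/ceph/bootstrap-osd
ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
    -o /var/lib/ceph/bootstrap-osd/ceph.keyring
# repeat for client.bootstrap-mds, client.bootstrap-rgw, ... if those dirs are needed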
This should do it, more or less:
{
    "Id": "Policy1548367105316",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1548367099807",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Principal": { "AWS": "arn:aws:iam::Company:user/testuser" },
            "Resource":
Hello Ketil,
we at croit offer commercial support for Ceph, as well as our own Ceph-based
unified storage solution.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin
Hi,
I have a cephfs kernel client (Ubuntu 18.04 4.15.0-34-generic) that's
completely hung after the client was evicted by the MDS.
The client logged:
Jan 24 17:31:46 client kernel: [10733559.309496] libceph: FULL or reached pool
quota
Jan 24 17:32:26 client kernel: [10733599.232213] libceph:
Am 24.01.19 um 22:34 schrieb Alfredo Deza:
I have both, plain and LUKS.
At the moment I have played around with the plain dmcrypt OSDs and ran into
this problem. I didn't test the LUKS-encrypted OSDs.
There is support in the JSON file to define the type of encryption with the key:
On Thu, Jan 24, 2019 at 4:13 PM mlausch wrote:
>
>
>
> Am 24.01.19 um 22:02 schrieb Alfredo Deza:
> >>
> >> OK, with a new empty journal the OSD will not start. I have now rescued
> >> the data with dd, re-encrypted it with another key, and copied the
> >> data back. This worked so far.
> >>
> >>
On Thu, Jan 24, 2019 at 6:21 PM Andras Pataki wrote:
>
> Hi Ilya,
>
> Thanks for the clarification - very helpful.
> I've lowered osd_map_messages_max to 10, and this resolves the issue
> of the kernel being unhappy with large messages when the OSDMap
> changes. One comment here though: you
Am 24.01.19 um 22:02 schrieb Alfredo Deza:
OK, with a new empty journal the OSD will not start. I have now rescued
the data with dd, re-encrypted it with another key, and copied the
data back. This worked so far.
Now I have encoded the key with base64 and put it into the key-value store.
Also
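Roughly what was described above, sketched out; the config-key path shown
(dm-crypt/osd/<osd-fsid>/luks) is what ceph-volume is believed to use but is an
assumption from memory, so verify it before relying on it:

# encode the dm-crypt key and store it in the mon key-value store (path is an assumption)
KEY_B64=$(base64 -w0 /path/to/dmcrypt-keyfile)
ceph config-key set dm-crypt/osd/<osd-fsid>/luks "$KEY_B64"
ceph config-key get dm-crypt/osd/<osd-fsid>/luks   # sanity check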
On Thu, Jan 24, 2019 at 8:16 PM Martin Palma wrote:
>
> We are experiencing the same issues on clients with CephFS mounted
> using the kernel client and 4.x kernels.
>
> The problem shows up when we add new OSDs, on reboots after
> installing patches and when changing the weight.
>
> Here the
Hi Ilya,
Thanks for the clarification - very helpful.
I've lowered osd_map_messages_max to 10, and this resolves the issue
of the kernel being unhappy with large messages when the OSDMap
changes. One comment here though: you mentioned that Luminous uses 40
as the default, which is indeed
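For anyone wanting to apply the same change, it is a plain config option on the
mon/OSD side; note that the upstream options list spells it osd_map_message_max
(singular), so double-check the exact name against your release. A minimal
ceph.conf sketch:

[global]
    # limit how many OSDMaps are bundled into a single message (assumed spelling)
    osd map message max = 10

Restarting the daemons (or injecting the option at runtime) is needed for it to
take effect.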
On Thu, Jan 24, 2019 at 3:17 PM Manuel Lausch wrote:
>
>
>
> On Wed, 23 Jan 2019 16:32:08 +0100
> Manuel Lausch wrote:
>
>
> > >
> > > The key api for encryption is *very* odd and a lot of its quirks are
> > > undocumented. For example, ceph-volume is stuck supporting naming
> > > files and keys
I tried setting noout and that did provide a bit better result.
Basically I could stop the OSD on the inactive server and everything
still worked (after a 2-3 second pause) but then when I rebooted the
inactive server everything hung again until it came back online and
resynced with the
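For completeness, the flag mentioned above is set and cleared cluster-wide like
this (it only prevents OSDs from being marked out during maintenance; it is not
a fix for the hang itself):

ceph osd set noout
# ... perform the maintenance / reboot ...
ceph osd unset noout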
A split network is rarely worth it. One fast network is usually better.
And since you mentioned having only two interfaces: one bond is way
better than two independent interfaces.
IPv4/IPv6 dual-stack setups will be supported in Nautilus; you currently
have to use either IPv4 or IPv6.
Jumbo frames:
Hmmm... if I'm not wrong, this information has to be put into your config
files... there isn't a mechanism that extracts this via rbd snap ls ...
On 7 January 2019 at 13:16:36 CET, Marc Roos wrote:
>
>
> How do you configure libvirt so it sees the snapshots already created on the
Hi Marc,
I'm not actually certain whether the traditional ACLs permit any
solution for that, but I believe with bucket policy, you can achieve
precise control within and across tenants, for any set of desired
resources (buckets).
Matt
On Thu, Jan 24, 2019 at 3:18 PM Marc Roos wrote:
>
>
> It
Hi all,
I want to share a performance issue I just encountered on a test cluster
of mine, specifically related to tuned. I started by setting the
"throughput-performance" tuned profile on my OSD nodes and ran some
benchmarks. I then applied that same profile to my client node, which
intuitively
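For anyone trying to reproduce this, applying and checking a tuned profile is
just the following (assuming tuned is installed and its daemon is running):

tuned-adm profile throughput-performance   # apply the profile
tuned-adm active                           # confirm which profile is active
tuned-adm list                             # list the available profiles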
Hi Matthew,
Some of the logging was intentionally removed because it used to
clutter up the logs. However, we are bringing some of the useful
stuff back and have a tracker ticket open for it:
https://tracker.ceph.com/issues/37886
Thanks,
Neha
On Thu, Jan 24, 2019 at 12:13 PM Stefan Kooman
Is it correct that it is NOT possible for S3 subusers to have different
permissions on folders created by the parent account?
Thus the --access=[ read | write | readwrite | full ] is for everything
the parent has created, and it is not possible to change that for
specific folders/buckets?
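For reference, the flag in question is the one given when creating or modifying
a subuser, e.g. (uid and subuser names below are placeholders):

radosgw-admin subuser create --uid=parent --subuser=parent:readonly \
    --key-type=s3 --access=read --gen-access-key --gen-secret
radosgw-admin subuser modify --uid=parent --subuser=parent:readonly --access=readwrite

As discussed elsewhere in this thread, finer-grained per-bucket control is
generally done with bucket policies rather than the subuser --access level.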
On Wed, 23 Jan 2019 16:32:08 +0100
Manuel Lausch wrote:
> >
> > The key api for encryption is *very* odd and a lot of its quirks are
> > undocumented. For example, ceph-volume is stuck supporting naming
> > files and keys 'lockbox'
> > (for backwards compatibility) but there is no real
ceph osd create
ceph osd rm osd.15
sudo -u ceph mkdir /var/lib/ceph/osd/ceph-15
ceph-disk prepare --bluestore --zap-disk /dev/sdc (bluestore)
blkid /dev/sdb1
echo "UUID=a300d511-8874-4655-b296-acf489d5cbc8
/var/lib/ceph/osd/ceph-15 xfs defaults 0 0" >> /etc/fstab
mount
Quoting Matthew Vernon (m...@sanger.ac.uk):
> Hi,
>
> On our Jewel clusters, the mons keep a log of the cluster status e.g.
>
> 2019-01-24 14:00:00.028457 7f7a17bef700 0 log_channel(cluster) log [INF] :
> HEALTH_OK
> 2019-01-24 14:00:00.646719 7f7a46423700 0 log_channel(cluster) log [INF] :
>
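For what it's worth, where (and at what level) the mons write that cluster log
is governed by mon options along these lines; the exact names below are from
memory and should be treated as an assumption (confirm with
ceph daemon mon.<id> config show | grep cluster_log):

[mon]
    mon cluster log file = /var/log/ceph/$cluster.log
    mon cluster log file level = info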
We are experiencing the same issues on clients with CephFS mounted
using the kernel client and 4.x kernels.
The problem shows up when we add new OSDs, on reboots after
installing patches and when changing the weight.
Here the logs of a misbehaving client;
[6242967.890611] libceph: mon4
Hi,
I'm installing a new ceph cluster manually. I get errors when I create osd:
# ceph-osd -i 0 --mkfs --mkkey
2019-01-24 17:07:44.045 7f45f497b1c0 -1 auth: unable to find a keyring on
/var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2019-01-24 17:07:44.045 7f45f497b1c0 -1
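It is hard to say from the truncated log what went wrong, but for comparison,
the usual manual-deployment ordering before --mkfs/--mkkey looks roughly like
this (OSD id 0 and /dev/sdb1 are placeholders, and the sketch is
FileStore-style; BlueStore differs):

ceph osd create                               # allocate the next OSD id
mkdir -p /var/lib/ceph/osd/ceph-0
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
ceph-osd -i 0 --mkfs --mkkey
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-0/keyring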
Hi,
On 23/01/2019 22:28, Ketil Froyn wrote:
How is the commercial support for Ceph? More specifically, I was
recently pointed in the direction of the very interesting combination of
CephFS, Samba and ctdb. Is anyone familiar with companies that provide
commercial support for in-house
Hi,
On our Jewel clusters, the mons keep a log of the cluster status e.g.
2019-01-24 14:00:00.028457 7f7a17bef700 0 log_channel(cluster) log
[INF] : HEALTH_OK
2019-01-24 14:00:00.646719 7f7a46423700 0 log_channel(cluster) log
[INF] : pgmap v66631404: 173696 pgs: 10