Sure - you can play with the weights or CRUSH weights to make sure that
all drives fill evenly relative to their respective capacities. The obvious
consequence is that the drives with twice the size will hold about twice
as much data. But a perhaps less glaring consequence is that
Isn't there a way to deal with this kind of setup by playing with the
"weight" of an OSD? I don't mean the "crush weight".
I am in a situation where I have to think about adding a server with 24 x 2TB
disks - my other OSD nodes have 12 x 4TB each, which is 48TB per node in both
cases.
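For reference, the two knobs in question look roughly like this (a sketch; the OSD id and values are placeholders):

# temporary reweight, a 0.0-1.0 override applied on top of the CRUSH weight
ceph osd reweight 12 0.8

# the CRUSH weight itself, normally the disk capacity in TiB
ceph osd crush reweight osd.12 2.0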
Hello Eugen,
All OSDs are up. The only issue I have is that when the server is rebooted I
have to start the OSDs manually.
However, when I delete the startup config of the OSDs in
/run/systemd/system/ceph-osd.target.wants/ and reboot the server, the OSDs
come up automatically.
So the systemd config for starting the OSDs is
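For reference, a few commands to inspect and (re-)enable the per-OSD units (a sketch; the OSD id 3 is a placeholder, and this assumes a non-cephadm, ceph-volume based deployment):

systemctl status ceph-osd@3
systemctl enable ceph-osd@3
systemctl enable ceph-osd.target

# ceph-volume based OSDs can also be (re-)activated explicitly
ceph-volume lvm activate --all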
Hello,
Clients and cluster are running Octopus. The only config changed after
upgrading to Octopus is rbd_read_from_replica_policy set to balance.
Is this a risky configuration? The performance of the VMs in my HDD-based
cluster is really good now.
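For reference, this is how the option is set (a sketch; applying it cluster-wide via the config database is an assumption, it can also go into the [client] section of ceph.conf):

ceph config set client rbd_read_from_replica_policy balance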
On Mon, Apr 6, 2020 at 5:17 PM Jason
On Mon, Apr 6, 2020 at 3:55 AM Lomayani S. Laizer wrote:
>
> Hello,
>
> After upgrading our Ceph cluster to Octopus a few days ago we are seeing VM
> crashes with the error below. We are using Ceph with OpenStack (Rocky).
> Everything is running Ubuntu 18.04 with kernel 5.3. We are seeing these crashes in
>
Hi,
did you manage to get all OSDs up (you reported issues some days ago)?
Is the cluster in a healthy state?
Zitat von "Lomayani S. Laizer" :
Hello,
After upgrading our Ceph cluster to Octopus a few days ago we are seeing VM
crashes with the error below. We are using Ceph with
Hi Richard,
We've got a (also relatively small) multisite deployment working with HTTPS
endpoints - so it's certainly possible.
Differences in how we've set this up compared with your description:
1) We're using beast rather than civetweb, so the content of ceph.conf is quite
different e.g.
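For illustration, a beast frontend with SSL might look roughly like this (the section name and file paths are placeholders):

[client.rgw.myhost]
rgw_frontends = beast ssl_port=443 ssl_certificate=/etc/ceph/rgw.crt ssl_private_key=/etc/ceph/rgw.key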
Hi,
is there anything in the Ceph mgr log file that might give a hint on why
the dashboard is not starting?
Lenz
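For reference, a quick way to check (a sketch; assumes systemd-managed mgr daemons):

ceph mgr module ls            # is "dashboard" listed as enabled?
ceph mgr services             # does it report a dashboard URL?
journalctl -u ceph-mgr@$(hostname -s) | grep -i dashboard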
On 2020-04-05 13:03, sz_cui...@163.com wrote:
> Hi:
>
> I enabled the dashboard module of ceph mgr and configured it, but it does not
> work.
>
> Why is that?
>
> [cephuser@cadmin
Hi,
we have a customer who used multisite HTTPS successfully in a
pre-production state. They switched to a stretched cluster later, but
the replication worked (and still works). The configs were slightly
different; they never tried both HTTP and HTTPS simultaneously
(civetweb
Hello,
I am seeing some commands running on CephFS mounts getting stuck in an
uninterruptible sleep, at which point I can only terminate them by rebooting
the client. Has anyone experienced anything similar and found a way to
safeguard against this?
My mount is using the ceph kernel driver,
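For reference, a sketch of how the stuck requests can be inspected on the client (assumes the kernel client and that debugfs is mounted):

# processes stuck in uninterruptible sleep
ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

# in-flight MDS and OSD requests of the kernel client
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/osdc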
Hi Micha,
cephadm does not (yet) support Filestore.
See https://tracker.ceph.com/issues/44874 for details.
Best,
Sebastian
Am 03.04.20 um 10:11 schrieb Micha:
> Hi,
>
> I want to try using object storage with Java.
> Is it possible to set up OSDs with "only" directories as the data destination
>
The keyword to search for is "deferred writes"; there are several
parameters that control the size and the maximum number of ops that will be
"cached". Increasing the size to 1 MB is probably a bad idea.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit
Hi
Just chasing up on this... is anyone using multisite with HTTPS zone endpoints?
I couldn't find any examples - should it work?
Thanks
Richard
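For what it's worth, a sketch of how HTTPS endpoints are set on the zonegroup/zone (names and URLs are placeholders):

radosgw-admin zonegroup modify --rgw-zonegroup=us --endpoints=https://rgw1.example.com:443
radosgw-admin zone modify --rgw-zone=us-east --endpoints=https://rgw1.example.com:443
radosgw-admin period update --commit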
On 31 March 2020 22:35:30 BST, Richard Kearsley wrote:
>Hi there
>
>I have a fairly simple ceph multisite configuration with 2 ceph clusters in
good idea!
thanks!
sz_cui...@163.com
From: Eugen Block
Date: 2020-04-06 17:44
To: ceph-users
Subject: [ceph-users] Re: Can I fix the devive name,when using image map?
Hi,
you can use the pool/image names to access your rbd image:
ceph:~ # rbd showmapped
id pool namespace image snap
Hi,
you can use the pool/image names to access your rbd image:
ceph:~ # rbd showmapped
id  pool    namespace  image   snap  device
0   cinder             image1  -     /dev/rbd0
ceph:~ # ls -l /dev/rbd/cinder/image1
lrwxrwxrwx 1 root root 10 6. Apr 11:39 /dev/rbd/cinder/image1 -> ../../rbd0
I'm not
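For reference, a sketch of using that stable symlink in /etc/ceph/rbdmap and /etc/fstab (the mount point and options are assumptions):

# /etc/ceph/rbdmap
cinder/image1  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab
/dev/rbd/cinder/image1  /mnt/image1  ext4  noauto,_netdev  0 0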
Hi:
I use image map to map a Ceph device on localhost, but I found that I cannot
control the device name.
This is a problem: if the name changes, the FS on it or the database using
this device may break.
Can I control the device name?
For example:
[root@gate2 ~]# rbd showmapped
id
Hi all,
I have a Ceph cluster with ~70 OSDs of different sizes running on Mimic.
I'm using ceph-deploy for managing the cluster.
I have to remove some smaller drives and replace them with bigger
drives. From your experience, are the "removing an OSD" guidelines from the
Mimic docs accurate
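For reference, a sketch of the per-OSD sequence (<id> is a placeholder; wait for rebalancing and HEALTH_OK between steps):

ceph osd out <id>
# ... wait for the data to migrate off the OSD ...
systemctl stop ceph-osd@<id>
ceph osd purge <id> --yes-i-really-mean-it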
Hello,
After upgrading our Ceph cluster to Octopus a few days ago we are seeing VM
crashes with the error below. We are using Ceph with OpenStack (Rocky).
Everything is running Ubuntu 18.04 with kernel 5.3. We are seeing these crashes in
busy VMs. This cluster was upgraded from Nautilus.
kernel:
On 4/5/20 11:53 AM, m.kefay...@afranet.com wrote:
> Hi
> we deployed Ceph object storage and want to secure RGW. Is there any solution
> or any user experience with this?
> Is it common to use a WAF?
I wouldn't say it's common, but I have done this for many customers. I usually
install Varnish Cache in
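For illustration only, the smallest possible VCL backend definition pointing at a local RGW (host/port are assumptions; a real setup adds TLS termination, ACLs and caching rules):

vcl 4.0;

backend rgw {
    .host = "127.0.0.1";
    .port = "8080";
}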
On 4/5/20 1:16 PM, Marc Roos wrote:
> No, I didn't get an answer to this.
>
> Yes, I thought so too, but recently there has been an issue here with an
> upgrade to Octopus, where OSDs are being changed automatically and
> consume huge amounts of memory during this. Furthermore, if you have a
>