- If my cluster is not well balanced, do I have to run the balancer's execute
step several times, because it only optimizes in small steps?
- Is there a history of applied plans, to see how optimizing brings
down this reported final score of 0.054781?
- How can I get the current score? (See the sketch after this list.)
- I have some
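A quick way to get at the score questions, using the standard balancer-module
commands ('myplan' is just an example name; as far as I know there is no
built-in history of executed plans, so you would have to record the eval
output yourself):

ceph balancer status                 # mode, whether it is active, plans in memory
ceph balancer eval                   # current cluster score (lower is better)
ceph balancer optimize myplan        # build a plan
ceph balancer eval myplan            # score the cluster would have after the plan
ceph balancer execute myplan         # apply it, then re-run 'ceph balancer eval'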
- Everyone here will tell you not to use 2x replication; maybe use some
erasure coding if you want to save space (a sketch follows this list).
- I cannot say anything about applying the cache pool; we did not use it, and
I read some things that made me doubt it would be useful for us. We decided to
put some VMs on an SSD RBD pool. Maybe when
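For the erasure-code suggestion above, a minimal sketch (the profile name,
pool name, and k/m values are examples you would tune to your own failure
domains):

ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool set ecpool allow_ec_overwrites true   # needed for RBD on EC, BlueStore only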
I am not talking about the controller cache; you should check the SSD disk caches.
On Sun, Nov 18, 2018 at 11:40 AM Alex Litvak wrote:
>
> All 3 nodes have this status for the SSD mirror. Controller cache is on for all 3.
>
> Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
>
Hmm,
On all nodes
hdparm -W /dev/sdb
/dev/sdb:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0d 00 00 00 00 20 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
write-caching = not supported
On 11/18/2018 10:30 AM, Ashley Merrick wrote:
hdparm -W /dev/xxx should show
Hi all,
We are provisioning the OpenStack Swift API through Ceph RGW (Mimic). We have
problems when trying to create two containers of the same name in two projects.
After scouring the web, I came to know that I have to enable
rgw_keystone_implicit_tenants in the Ceph conf file. But no use. Is really
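For reference, a minimal sketch of the setup being described (the section name
client.rgw.gw1 and the systemd unit are examples; note this option is generally
reported to take effect only for users created after it is enabled, so
pre-existing users may still collide):

[client.rgw.gw1]
rgw_keystone_implicit_tenants = true

systemctl restart ceph-radosgw@rgw.gw1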
hdparm -W /dev/xxx should show you
On Mon, 19 Nov 2018 at 12:28 AM, Alex Litvak wrote:
> All machines state the same.
>
> /opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -Lall -a0
>
> Adapter 0-VD 0(target id: 0): Disk Write Cache : Disk's Default
> Adapter 0-VD 1(target id: 1): Disk Write
All machines state the same.
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -Lall -a0
Adapter 0-VD 0(target id: 0): Disk Write Cache : Disk's Default
Adapter 0-VD 1(target id: 1): Disk Write Cache : Disk's Default
I assume they are all on, which, based on common sense, is actually bad.
Ah yes, sorry, that's because you're behind a RAID card.
You need to check the RAID config. I know on an HP card, for example, you have
an option called enable disk cache.
This is separate from enabling the RAID card cache; the config should be per
drive (it is on HP), so it's worth checking the config outputs for
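If you decide you want the drive caches explicitly off rather than "Disk's
Default", something like this should do it with the same MegaCli binary used
in this thread (syntax can vary by MegaCli version; verify afterwards):

/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp -DisDskCache -Lall -a0   # force drive write cache off
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -DskCache -Lall -a0      # confirm the change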
Hi,
I have a cluster providing CephFS and it looks well. But as time
goes by, more and more clients use it. I want to write a script
to collect the clients' information so that I can keep everything
in good order.
I googled a lot but didn't find any way to get the clients'
information. Is
'ceph daemon mds.xx session ls'
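A minimal sketch of scripting around it, assuming jq is available and it runs
on the MDS host (the admin socket is local to the daemon; metadata field names
such as client_metadata.hostname can vary by client type and release):

ceph daemon mds.xx session ls | jq -r \
  '.[] | "\(.inst)\t\(.client_metadata.hostname)\t\(.client_metadata.kernel_version)"'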
On Mon, Nov 19, 2018 at 2:40 PM Zhenshi Zhou wrote:
>
> Hi,
>
> I have a cluster providing CephFS and it looks well. But as time
> goes by, more and more clients use it. I want to write a script
> to collect the clients' information so that I can keep everything
>
On 2018-11-19 5:05 a.m., Honggang(Joseph) Yang wrote:
Hello,
Our cluster-side ms_type is async, while the client-side ms_type is
simple. I want to know if this is a proper way to use it, and what the
potential risks are.
None, if Ceph doesn't complain about the async messenger being experimental - both
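To double-check what each side is actually running, a quick sketch (osd.0 is
an example daemon; the first command must run on the host that owns its admin
socket):

ceph daemon osd.0 config get ms_type    # effective value on the cluster side
grep ms_type /etc/ceph/ceph.conf        # what librados clients will pick up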
Many thanks Yan!
This command can get the IP, hostname, mount point, and kernel version. All
of these data are exactly what I need.
Besides, is there a way I can get a subdirectory's usage, rather than the
whole CephFS usage, from the server? For instance, I have /docker, /kvm,
/backup, etc.
I
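(For the subdirectory question: CephFS keeps recursive statistics as virtual
extended attributes, so a sketch like the following should work from any
client mount; /mnt/cephfs is an assumed mount point.)

getfattr -n ceph.dir.rbytes /mnt/cephfs/docker    # recursive byte usage under /docker
getfattr -n ceph.dir.rfiles /mnt/cephfs/docker    # recursive file count under /docker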
On Mon, Nov 19, 2018 at 3:06 PM Zhenshi Zhou wrote:
>
> Many thanks Yan!
>
> This command can get the IP, hostname, mount point, and kernel version. All
> of these data are exactly what I need.
> Besides, is there a way I can get a subdirectory's usage, rather than the
> whole
> CephFS usage from
On 2018-11-19 8:17 a.m., Honggang(Joseph) Yang wrote:
Thank you, but I encountered a problem:
https://tracker.ceph.com/issues/37300
I don't know if this is because of the mixed use of messenger types.
Have you done basic troubleshooting, like checking osd.179 networking?
Usually this means firewall
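A quick sketch of that kind of check (osd.179 comes from the tracker issue;
6800 is an example port, since OSDs bind in the 6800-7300/tcp range by
default):

ceph osd find 179        # reports the host and IP where osd.179 lives
nc -zv <osd-ip> 6800     # test reachability from the complaining node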
On 11/17/18 1:07 AM, Vlad Kopylov wrote:
This is what Jean suggested. I understand it, and it works with the primary.
*But what I need is for all clients to access the same files, not separate
sets (like red, blue, green)*
You should look at other solutions, like GlusterFS. Ceph is too much overhead
for this