[ceph-users] Re: OSD not starting

2023-11-05 Thread Amudhan P
…om/issues/17170 > -- > Alex Gorbachev > Intelligent Systems Services Inc. > http://www.iss-integration.com > https://www.linkedin.com/in/alex-gorbachev-iss/ > > > > On Sat, Nov 4, 2023 at 11:22 AM Amudhan P wrote: > >> Hi, >> >> One of the server…

[ceph-users] OSD not starting

2023-11-04 Thread Amudhan P
Hi, one of the servers in the Ceph cluster shut down abruptly due to a power failure. After restarting, the OSDs are not coming up, and the Ceph health check shows osd down. Checking the OSD status shows "osd.26 18865 unable to obtain rotating service keys; retrying" repeating every 30 seconds; it just…
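
The "unable to obtain rotating service keys" message is frequently caused by clock skew between the rebooted host and the monitors, since cephx rotating keys are time-limited. A minimal sketch of the usual first checks, assuming chrony as the time daemon and a systemd package install (under cephadm the unit name differs):

    ceph time-sync-status          # monitor view of clock skew
    chronyc tracking               # on the rebooted host; offset should be near zero
    ceph health detail             # look for MON_CLOCK_SKEW warnings
    systemctl restart ceph-osd@26  # retry the OSD once the clocks agree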

[ceph-users] Re: ceph failing to write data - MDSs read only

2023-01-02 Thread Amudhan P
…fix the issue if it's acceptable. > > Thanks and Regards, > Kotresh H R > > On Thu, Dec 29, 2022 at 4:35 PM Amudhan P wrote: > >> Hi, >> >> Suddenly facing an issue with Ceph cluster I am using ceph version 16.2.6. >> I couldn't find any solution for the…

[ceph-users] ceph failing to write data - MDSs read only

2022-12-29 Thread Amudhan P
Hi, suddenly I am facing an issue with the Ceph cluster (ceph version 16.2.6) and couldn't find any solution for the problem below. Any suggestions? health: HEALTH_WARN 1 clients failing to respond to capability release 1 clients failing to advance oldest client/flush…
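
When clients fail to release capabilities, the MDS can end up read-only; the usual triage is to identify the stuck client and, if acceptable, evict it. A hedged sketch; the mds rank "0" and the client id are placeholders:

    ceph health detail                    # names the misbehaving client session
    ceph tell mds.0 client ls             # list sessions and the caps they hold
    ceph tell mds.0 client evict id=1234  # placeholder id; eviction discards that client's dirty data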

[ceph-users] ceph mgr alert mail using tls

2021-09-18 Thread Amudhan P
Hi, I am trying to configure Ceph (version 15.2.3) mgr alert email using an office365 account and I get the error below. [WRN] ALERTS_SMTP_ERROR: unable to send alert email [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056) I configured the SMTP server and port 587. I have followed the…
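
WRONG_VERSION_NUMBER usually means the client spoke implicit TLS to a STARTTLS port: 587 expects a plaintext greeting first, while the alerts module's smtp_ssl=true wraps the connection in TLS immediately (as on port 465). A sketch of the relevant settings; whether the module then negotiates STARTTLS on 587 depends on the Ceph release:

    ceph config set mgr mgr/alerts/smtp_host smtp.office365.com
    ceph config set mgr mgr/alerts/smtp_port 587
    ceph config set mgr mgr/alerts/smtp_ssl false   # 587 is a STARTTLS port, not implicit SSL
    ceph config get mgr mgr/alerts/smtp_ssl         # verify the change took effect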

[ceph-users] Re: osds crash and restart in octopus

2021-09-03 Thread Amudhan P
I also have a similar problem: in my case the OSDs start and stop after a few minutes, with not much in the log. I have filed a bug and am waiting for a reply to confirm whether it's a bug or an issue on my side. On Fri, Sep 3, 2021 at 5:21 PM mahnoosh shahidi wrote: > We still have this problem. Does anybody have any…

[ceph-users] Re: OSD stop and fails

2021-08-30 Thread Amudhan P
…ticket > at tracker.ceph.com with the backtrace and osd log file? We can direct > that to the RADOS team to check out. > -Greg > > On Sat, Aug 28, 2021 at 7:13 AM Amudhan P wrote: > > > > Hi, > > > > I am having a peculiar problem with my ceph octopus cluster. 2 weeks a…

[ceph-users] OSD stop and fails

2021-08-28 Thread Amudhan P
Hi, I am having a peculiar problem with my Ceph Octopus cluster. Two weeks ago an issue started with too many scrub errors, and later random OSDs stopped, which led to corrupt PGs and missing replicas. Since it's a testing cluster I wanted to understand the issue. I tried to recover the PGs but it…

[ceph-users] Re: Recovery stuck and Multiple PG fails

2021-08-14 Thread Amudhan P
…; Suresh > > On Sat, Aug 14, 2021, 9:53 AM Amudhan P wrote: > >> Hi, >> I am stuck with ceph cluster with multiple PG errors due to multiple OSD >> was stopped and starting OSD's manually again didn't help. OSD service >> stops again there is no issue with HDD for sure…

[ceph-users] Recovery stuck and Multiple PG fails

2021-08-14 Thread Amudhan P
Hi, I am stuck with a ceph cluster with multiple PG errors after multiple OSDs stopped; starting the OSDs manually again didn't help, since the OSD service stops again. There is no issue with the HDDs for sure, but for some reason the OSDs stop. I am running ceph version 15.2.5 in podman containers. How do I…
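
To keep the failure loop from spreading while investigating, recovery and backfill can be paused, and the crash module (present in 15.2.x) usually captures a backtrace even when the container log is quiet. A minimal sketch; the crash id is a placeholder:

    ceph osd set norecover
    ceph osd set nobackfill
    ceph crash ls               # recent daemon crashes
    ceph crash info <crash-id>  # placeholder id; shows the backtrace for a tracker ticket
    ceph osd unset nobackfill   # once the cause is understood
    ceph osd unset norecover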

[ceph-users] Re: ceph osd continously fails

2021-08-11 Thread Amudhan P
…411Z_a06defcc-19c6-41df-a37d-c071166cdcf3/log Aug 11 16:55:48 bash[27152]: --- end dump of recent events --- Aug 11 16:55:48 bash[27152]: reraise_fatal: default handler for signal 6 didn't terminate the process? On Wed, Aug 11, 2021 at 5:53 PM Amudhan P wrote: > Hi, > I am using ceph v…

[ceph-users] ceph osd continously fails

2021-08-11 Thread Amudhan P
Hi, I am using ceph version 15.2.7 in a 4-node cluster. My OSDs are continuously stopping, and even if I start them again they stop after some time. I couldn't find anything in the log. I have set norecover and nobackfill; as soon as I unset norecover the OSDs start to fail. cluster: id:…

[ceph-users] Re: Not able to read file from ceph kernel mount

2020-11-12 Thread Amudhan P
Hi, this issue is fixed now after setting the cluster IP on only the OSDs. The mount works perfectly fine. "ceph config set osd cluster_network 10.100.4.0/24" regards Amudhan On Sat, Nov 7, 2020 at 10:09 PM Amudhan P wrote: > Hi, > > At last, the problem is fixed for now by adding…
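
For reference, a sketch of scoping the cluster network to the OSDs only and verifying the result, with the subnet from this thread:

    ceph config set osd cluster_network 10.100.4.0/24
    ceph config get osd cluster_network   # expect 10.100.4.0/24
    ceph config get mon cluster_network   # should stay empty; mons serve clients on the public network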

[ceph-users] Re: Cephfs Kernel client not working properly without ceph cluster IP

2020-11-12 Thread Amudhan P
…I have run the commands you asked for; the output below is after applying all the changes described above. # ceph config get mon cluster_network output: (empty) # ceph config get mon public_network output: 10.100.3.0/24 Still testing more on this to confirm the issue and playing with my ceph cluster. regards Amudh…

[ceph-users] Re: Cephfs Kernel client not working properly without ceph cluster IP

2020-11-10 Thread Amudhan P
…check your setup. > > > Zitat von Amudhan P : > > > Hi Nathan, > > > > The kernel client should be using only the public IP of the cluster to > > communicate with the OSDs. > > > > But here it requires both IPs for the mount to work properly. > > > > reg…

[ceph-users] Re: Cephfs Kernel client not working properly without ceph cluster IP

2020-11-10 Thread Amudhan P
Hi Janne, my OSDs have both a public IP and a cluster IP configured. The monitor node and OSD nodes are co-located. regards Amudhan P On Tue, Nov 10, 2020 at 4:45 PM Janne Johansson wrote: > > > On Tue, 10 Nov 2020 at 11:13, Amudhan P wrote: > >> Hi Nathan, >> >>…

[ceph-users] Re: Cephfs Kernel client not working properly without ceph cluster IP

2020-11-10 Thread Amudhan P
…mon but not the OSD? > It needs to be able to reach all mons and all OSDs. > > On Sun, Nov 8, 2020 at 4:29 AM Amudhan P wrote: > > > > Hi, > > > > I have mounted my cephfs (ceph octopus) thru kernel client in Debian. > > I get following error in "dme…

[ceph-users] Re: Cephfs Kernel client not working properly without ceph cluster IP

2020-11-10 Thread Amudhan P
…daemon reconfig osd.3, restarting all daemons. regards Amudhan P On Mon, Nov 9, 2020 at 9:49 PM Eugen Block wrote: > Clients don't need the cluster IP because that's only for OSD <--> OSD > replication, no client traffic. But of course to be able to > communicate with Ceph t…

[ceph-users] Re: pg xyz is stuck undersized for long time

2020-11-08 Thread Amudhan P
Hi Frank, you said only one OSD is down, but the ceph status shows more than 20 OSDs down. Regards, Amudhan On Sun 8 Nov, 2020, 12:13 AM Frank Schilder, wrote: > Hi all, > > I moved the crush location of 8 OSDs and rebalancing went on happily > (misplaced objects only). Today, osd.1 crashed,…

[ceph-users] Cephfs Kernel client not working properly without ceph cluster IP

2020-11-08 Thread Amudhan P
Hi, I have mounted my cephfs (ceph octopus) through the kernel client on Debian. I get the following error in "dmesg" when I try to read any file from my mount: "[ 236.429897] libceph: osd1 10.100.4.1:6891 socket closed (con state CONNECTING)" I use the public IP (10.100.3.1) and cluster IP (10.100.4.1) in my…
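
The kernel client connects to whatever address each OSD registers in the OSD map, so the first check is which network the OSDs are advertising; a short sketch:

    ceph osd dump | grep '^osd.1 '  # shows the public and cluster addrs osd.1 registered
    ceph osd find 1                 # the same addresses as JSON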

[ceph-users] Re: Not able to read file from ceph kernel mount

2020-11-07 Thread Amudhan P
…was set up it had only the public network; I later added the cluster network with a cluster IP, and it was working fine until the restart of the entire cluster. regards Amudhan P On Fri, Nov 6, 2020 at 12:02 AM Amudhan P wrote: > >> Hi, >> I am trying to read a file from my ceph kernel mount and the file read st…

[ceph-users] Not able to read file from ceph kernel mount

2020-11-05 Thread Amudhan P
…f8bc7682-0d11-11eb-a332-0cc47a5ec98a [ 272.132787] libceph: osd1 10.0.104.1:6891 socket closed (con state CONNECTING) Ceph cluster status is healthy, no errors. It was working fine until my entire cluster went down. Using Ceph Octopus on Debian. Regards Amudhan P

[ceph-users] Fwd: File read are not completing and IO shows in bytes able to not reading from cephfs

2020-11-04 Thread Amudhan P
…(con state CONNECTING)" -- Forwarded message ----- From: Amudhan P Date: Wed, Nov 4, 2020 at 6:24 PM Subject: File read are not completing and IO shows in bytes able to not reading from cephfs To: ceph-users Hi, In my test ceph octopus cluster I was trying to simulate a fa…

[ceph-users] File read are not completing and IO shows in bytes able to not reading from cephfs

2020-11-04 Thread Amudhan P
…CONNECTING) What went wrong, and why is this happening? regards Amudhan P

[ceph-users] Re: Ceph not showing full capacity

2020-10-26 Thread Amudhan P
…124 TiB. So, the PG count is not the cause of the smaller reported size. I am trying other options to see what caused this issue. On Mon, Oct 26, 2020 at 8:20 PM 胡 玮文 wrote: > > On Oct 26, 2020, at 22:30, Amudhan P wrote: > > Hi Janne, > > I agree with you and I was trying to say disk w…

[ceph-users] Re: Ceph not showing full capacity

2020-10-26 Thread Amudhan P
…that due to replication it's showing half of the space. Why is it not showing the entire raw disk space as available space? Does the number of PGs per pool play any vital role in the available space shown? On Mon, Oct 26, 2020 at 12:37 PM Janne Johansson wrote: > > > On Sun, 25 Oct 2020 at 15:18,…

[ceph-users] Re: Ceph not showing full capacity

2020-10-25 Thread Amudhan P
…s, which I think is too few for > your 48 OSDs [2]. If you have more placement groups, the unbalance issue > will be far less severe. > > [1]: https://docs.ceph.com/en/latest/architecture/#mapping-pgs-to-osds > [2]: https://docs.ceph.com/en/latest/rados/operations/placement-group…

[ceph-users] Re: Ceph not showing full capacity

2020-10-25 Thread Amudhan P
Hi Stefan, I have started the balancer, but what I don't understand is that there is enough free space on other disks. Why is it not shown as available space? How do I reclaim the free space? On Sun 25 Oct, 2020, 2:27 PM Stefan Kooman, wrote: > On 2020-10-25 05:33, Amudhan P wrote: >…

[ceph-users] Re: Ceph not showing full capacity

2020-10-24 Thread Amudhan P
…TiB 5.2 MiB 2.0 GiB 3.9 TiB 28.32 0.569 up MIN/MAX VAR: 0.19/1.88 STDDEV: 22.13 On Sun, Oct 25, 2020 at 12:08 AM Stefan Kooman wrote: > On 2020-10-24 14:53, Amudhan P wrote: > > Hi, > > > > I have created a test Ceph cluster with Ceph Octopus using cephadm.…

[ceph-users] Re: Ceph not showing full capacity

2020-10-24 Thread Amudhan P
Hi Nathan, attached is the crushmap output; let me know if you find anything odd. On Sat, Oct 24, 2020 at 6:47 PM Nathan Fish wrote: > Can you post your crush map? Perhaps some OSDs are in the wrong place. > > On Sat, Oct 24, 2020 at 8:51 AM Amudhan P wrote: > > > > Hi,…

[ceph-users] Ceph not showing full capacity

2020-10-24 Thread Amudhan P
Hi, I have created a test Ceph cluster with Ceph Octopus using cephadm. The cluster's total raw disk capacity is 262 TB, but it allows using only 132 TB. I have not set a quota on any of the pools. What could be the issue? Output from: ceph -s cluster: id:…
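
A common explanation for this symptom is that MAX AVAIL in ceph df is extrapolated from the fullest OSD, so an unbalanced cluster reports far less than raw capacity divided by the replica count. A sketch of how one might confirm and address that, assuming the Octopus balancer is available:

    ceph df            # compare each pool's MAX AVAIL against raw capacity
    ceph osd df tree   # a wide %USE spread means MAX AVAIL tracks the fullest OSD
    ceph balancer mode upmap
    ceph balancer on   # let the balancer even out PG placement over time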

[ceph-users] Re: Ceph Octopus

2020-10-23 Thread Amudhan P
…at 6:14 PM Eugen Block wrote: > Did you restart the OSD containers? Does ceph config show your changes? > > ceph config get mon cluster_network > ceph config get mon public_network > > > > Zitat von Amudhan P : > > > Hi Eugen, > > > > I did the same…

[ceph-users] Re: Ceph Octopus

2020-10-23 Thread Amudhan P
…daemon reconfig mon.host2 > >> ceph orch daemon reconfig mon.host3 > >> ceph orch daemon reconfig osd.1 > >> ceph orch daemon reconfig osd.2 > >> ceph orch daemon reconfig osd.3 > >> ---snip--- > >> > >> I haven't tried it myself though.…

[ceph-users] Ceph Octopus

2020-10-19 Thread Amudhan P
Hi, I have installed a Ceph Octopus cluster using cephadm with a single network; now I want to add a second network and configure it as the cluster address. How do I configure Ceph to use the second network as the cluster network? Amudhan
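
With cephadm this is typically done through the config database followed by a reconfig of the affected daemons, as the replies in this thread suggest; a hedged sketch with a placeholder subnet and daemon name:

    ceph config set global cluster_network 10.100.4.0/24  # placeholder subnet
    ceph orch daemon reconfig osd.0                       # repeat for each OSD daemon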

[ceph-users] Cephdeploy support

2020-10-10 Thread Amudhan P
Hi, will future releases of Ceph still support ceph-deploy, or will cephadm be the only choice? Thanks, Amudhan

[ceph-users] Re: cephadm not working with non-root user

2020-08-20 Thread Amudhan P
Hi, have any of you used the cephadm bootstrap command without the root user? On Wed, Aug 19, 2020 at 11:30 AM Amudhan P wrote: > Hi, > > I am trying to install ceph 'octopus' using cephadm. In the bootstrap > command, I have specified a non-root user account as ssh-user. > cephadm boo…

[ceph-users] cephadm not working with non-root user

2020-08-18 Thread Amudhan P
Hi, I am trying to install ceph 'octopus' using cephadm. In the bootstrap command, I have specified a non-root user account as the ssh-user: cephadm bootstrap --mon-ip xx.xxx.xx.xx --ssh-user non-rootuser When the bootstrap was about to complete, it threw an error stating: INFO:cephadm:Non-zero exit code 2…
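
cephadm expects the --ssh-user account to exist on every host with passwordless sudo; a missing sudoers entry is a common cause of a non-zero exit late in bootstrap. A sketch of the prerequisite setup, with a hypothetical user named cephadmin:

    useradd -m cephadmin                 # on every host, as root
    echo 'cephadmin ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/cephadmin
    chmod 440 /etc/sudoers.d/cephadmin
    cephadm bootstrap --mon-ip xx.xxx.xx.xx --ssh-user cephadmin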

[ceph-users] Re: Ceph latest install

2020-06-13 Thread Amudhan P
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/ On Sat, Jun 13, 2020 at 2:31 PM masud parvez wrote: > Could anyone give me the latest version ceph install guide for ubuntu 20.04

[ceph-users] Ceph deployment and Managing suite

2020-06-13 Thread Amudhan P
Hi, I am looking for a software suite to deploy Ceph storage nodes and gateway servers (SMB & NFS), with a dashboard showing overall cluster status, individual node health, disk identification or maintenance activity, and network utilization: a simple dashboard that users can manage. Please suggest any paid…

[ceph-users] Upload speed slow for 7MB file cephfs+Samaba

2020-06-12 Thread Amudhan P
…connected at 10G. In the same setup, copying a 1GB file from a Windows client to Samba gets 90 MB/s. Is there any kernel or network tuning that needs to be done? Any suggestions? regards Amudhan P

[ceph-users] Re: Octopus: orchestrator not working correctly with nfs

2020-06-11 Thread Amudhan P
Hi, I have not worked with the orchestrator, but I remember reading somewhere that the NFS implementation is not supported. Refer to the cephadm documentation; for NFS you have to configure NFS Ganesha. You can manage NFS through the dashboard, but that needs initial configuration in the dashboard, and in nfs-ganesha you…

[ceph-users] Re: Ceph dashboard inventory page not listing osds

2020-06-10 Thread Amudhan P
…n the table. > > Regards, > -- > Kiefer Chang (Ni-Feng Chang) > > > > > On 2020/6/7, 11:03 PM, "Amudhan P" wrote: > > Hi, > > I am using Ceph octopus in a small cluster. > > I have enabled ceph dashboard and when I go to inventory page I could se…

[ceph-users] Ceph dashboard inventory page not listing osds

2020-06-07 Thread Amudhan P
Hi, I am using Ceph Octopus in a small cluster. I have enabled the ceph dashboard, but when I go to the inventory page I can see only the OSDs running on the mgr node; it does not list the OSDs on the other 3 nodes. I don't see any issue in the log. How do I list the other OSDs? Regards Amudhan P

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-24 Thread Amudhan P
I didn't make any changes, but it has started working now with jumbo frames. On Sun, May 24, 2020 at 1:04 PM Khodayar Doustar wrote: > So this is your problem, it has nothing to do with Ceph. Just fix the > network or roll back all changes. > > On Sun, May 24, 2020 at 9:05 AM Amud…

[ceph-users] Re: Cephfs IO halt on Node failure

2020-05-24 Thread Amudhan P
…typo and you mean you changed min_size to 1? I/O pause with > min_size 1 and size 2 is unexpected, can you share more details like > your crushmap and your osd tree? > > > Zitat von Amudhan P : > > > Behaviour is the same even after setting min_size 2. > > > > On Mon 18…

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-24 Thread Amudhan P
…configured for MTU 9000 >> it >> wouldn't work. >> >> On Sat, May 23, 2020 at 2:30 PM si...@turka.nl wrote: >> >> > Can the servers/nodes ping each other using large packet sizes? I guess >> not. >> > >> > Sinan Polat >> > >>…

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-24 Thread Amudhan P
No, ping with MTU size 9000 didn't work. On Sun, May 24, 2020 at 12:26 PM Khodayar Doustar wrote: > Does your ping work or not? > > On Sun, May 24, 2020 at 6:53 AM Amudhan P wrote: >> Yes, I have applied the setting on the switch side also. >> >> On Sat 23 Ma…

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Amudhan P
…ka.nl wrote: > >> Can the servers/nodes ping each other using large packet sizes? I guess >> not. >> >> Sinan Polat >> >> > On 23 May 2020 at 14:21, Amudhan P wrote: >> > >> > In OSD logs "heartbeat_check…

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Amudhan P
The OSD logs show "heartbeat_check: no reply from OSD". On Sat, May 23, 2020 at 5:44 PM Amudhan P wrote: > Hi, > > I have set the network switch to MTU 9000 and also in my netplan > configuration. > > What else needs to be checked? > > > On Sat, May 23, 2020…

[ceph-users] Re: Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Amudhan P
Hi, I have set the network switch to MTU 9000 and also in my netplan configuration. What else needs to be checked? On Sat, May 23, 2020 at 3:39 PM Wido den Hollander wrote: > > > On 5/23/20 12:02 PM, Amudhan P wrote: > > Hi, > > > > I am using ceph Nautilus i…
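
A quick way to verify jumbo frames end-to-end is a non-fragmenting ping sized for a 9000-byte MTU: 8972 bytes of payload plus the 20-byte IP and 8-byte ICMP headers. A sketch, with the peer address and interface name as placeholders:

    ping -M do -s 8972 10.100.3.2     # placeholder peer IP; -M do forbids fragmentation
    ip link show dev eth0 | grep mtu  # placeholder interface; confirm the local MTU is 9000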

[ceph-users] Ceph Nautius not working after setting MTU 9000

2020-05-23 Thread Amudhan P
Hi, I am using Ceph Nautilus on Ubuntu 18.04. It works fine with the default MTU of 1500; recently I tried to raise the MTU to 9000. After setting jumbo frames, running ceph -s times out. regards Amudhan P

[ceph-users] Re: Cephfs IO halt on Node failure

2020-05-19 Thread Amudhan P
…> Zitat von Amudhan P : > > > Hi, > > > > The crush rule is "replicated" and min_size 2 actually. I am trying to test > > multiple volume configs in a single filesystem > > using file layouts. > > > > I have created a metadata pool with rep 3 (min_…

[ceph-users] Re: Cephfs IO halt on Node failure

2020-05-17 Thread Amudhan P
…node failure is handled properly when having only the metadata pool and one data pool (rep 3). After adding an additional data pool to the fs, the single-node failure scenario no longer works. regards Amudhan P On Sun, May 17, 2020 at 1:29 AM Eugen Block wrote: > What's your pool configuration wrt min_siz…

[ceph-users] Cephfs IO halt on Node failure

2020-05-16 Thread Amudhan P
…I was expecting reads and writes to continue after a small pause due to a node failure, but IO halts and never resumes until the failed node is back up. I remember testing the same scenario before on Ceph Mimic, where IO continued after a small pause. regards Amudhan P
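
When IO halts on a single node failure, the first thing to rule out is a data pool whose min_size equals its size, since its PGs then refuse IO as soon as one replica is gone. A sketch of the check; the pool name is a placeholder:

    ceph osd pool ls detail                    # compare size and min_size on every pool
    ceph osd pool set cephfs_data2 min_size 2  # placeholder pool; size-3 pools usually want min_size 2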

[ceph-users] Re: Cephfs - NFS Ganesha

2020-05-16 Thread Amudhan P
…single > /etc/ganesha/ganesha.conf file. > > Daniel > > On 5/15/20 4:59 AM, Amudhan P wrote: > > Hi Rafael, > > > > I have used the config you provided but still I am not able to mount nfs. I > > don't see any error in the log msg > > > > Output from…

[ceph-users] Re: Cephfs - NFS Ganesha

2020-05-15 Thread Amudhan P
…] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE Regards Amudhan P On Fri, May 15, 2020 at 1:01 PM Rafael Lopez wrote: > Hello Amudhan, > > The only ceph specific thing required in the ganesha config is to add the > FSAL block to your export, everything else is standard ganesha config a…

[ceph-users] Cephfs - NFS Ganesha

2020-05-15 Thread Amudhan P
Hi, I am trying to set up NFS Ganesha on Ceph Nautilus. On an Ubuntu 18.04 system I have installed the nfs-ganesha (v2.6) and nfs-ganesha-ceph packages and followed the steps in https://docs.ceph.com/docs/nautilus/cephfs/nfs/, but I am not able to export my cephfs volume and there is no error msg in…
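
For reference, a minimal FSAL_CEPH export of the kind that doc describes; Export_ID, the Pseudo path, and any cephx user mapping are placeholders to adapt:

    EXPORT {
        Export_ID = 1;        # placeholder; must be unique per export
        Path = /;             # cephfs path to export
        Pseudo = /cephfs;     # placeholder NFSv4 pseudo path
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;      # selects the libcephfs-backed FSAL
        }
    }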

[ceph-users] Re: Cluster network and public network

2020-05-14 Thread Amudhan P
Will EC-based writes benefit from separate public and cluster networks? On Thu, May 14, 2020 at 1:39 PM lin yunfan wrote: > That is correct. I didn't explain it clearly. I said that because in > some write-only scenarios the public network and cluster network will > all be saturated the same…

[ceph-users] Re: Memory usage of OSD

2020-05-13 Thread Amudhan P
For Ceph releases before Nautilus, osd_memory_target changes require a restart of the OSD service to take effect. I had a similar issue on Mimic and did the same in my test setup. Before restarting the OSD service, ensure you set osd nodown and osd noout (or similar commands) so it doesn't trigger OSD down and…
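
A sketch of that sequence on a pre-Nautilus systemd deployment; the 1 GiB value and the per-host restart are illustrative:

    ceph osd set noout                 # keep CRUSH from rebalancing during the restart
    ceph osd set nodown                # suppress flapping down reports
    # set osd_memory_target = 1073741824 in the [osd] section of ceph.conf, then:
    systemctl restart ceph-osd.target  # restarts every OSD on this host
    ceph osd unset nodown
    ceph osd unset noout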

[ceph-users] Read speed low in cephfs volume exposed as samba share using vfs_ceph

2020-05-12 Thread Amudhan P
Hi, I am running a small 3-node Ceph Nautilus 14.2.8 cluster on Ubuntu 18.04. I am testing the cluster by exposing a cephfs volume as a Samba v4 share for users to access from Windows, for later use. Samba version is 4.7.6-Ubuntu and mount.cifs version 6.8. When I did a test with dd, write (600 MB/s) and…
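
For context, a minimal smb.conf share of the vfs_ceph kind being tested here; the share name and cephx user are placeholders:

    [cephfs]                                # placeholder share name
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba                # placeholder cephx user (client.samba)
        read only = no
        kernel share modes = no             # vfs_ceph handles are not kernel file descriptors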

[ceph-users] Cifs slow read speed

2020-05-12 Thread Amudhan P
Hi, I am running a small Ceph Nautilus cluster on Ubuntu 18.04. I am testing the cluster by exposing a cephfs volume as a Samba v4 share for users to access from Windows. When I test with dd, write is 600 MB/s and the md5sum file read speed is 700-800 MB/s from the ceph kernel mount. The same volume I have…

[ceph-users] Re: New 3 node Ceph cluster

2020-03-15 Thread Amudhan P
…https://goo.gl/PGE1Bx > > > On Sun, 15 Mar 2020 at 14:34, Amudhan P wrote: > >> Thank you, all, for your suggestions and ideas. >> >> What is your view on running MON, MGR, MDS and the cephfs client or samba-ceph >> vfs on a single machine (10-core Xeon CPU wit…

[ceph-users] Re: New 3 node Ceph cluster

2020-03-15 Thread Amudhan P
…ect? Kernel module mapping or > iSCSI targets. > > Another possibility would be to create an RBD image containing the data and > samba and use it with QEMU. > > Regards > > Marco Savoca > > From: jes...@krogh.cc Sent: Saturda…

[ceph-users] New 3 node Ceph cluster

2020-03-14 Thread Amudhan P
Hi, I am planning to create a new 3-node Ceph storage cluster. I will be using CephFS with Samba for at most 10 clients for upload and download. The storage node HW is an Intel Xeon E5 v2 8-core single processor, 32 GB RAM, 2x 10Gb NICs, and 24x 6TB SATA HDDs per node, with the OS on a separate SSD disk. Earlier I…

[ceph-users] Re: mds log showing msg with HANGUP

2019-10-22 Thread Amudhan P
Ok, thanks. On Mon, Oct 21, 2019 at 8:28 AM Konstantin Shalygin wrote: > On 10/18/19 8:43 PM, Amudhan P wrote: > > I am getting the below error msg in my ceph nautilus cluster; do I need to > > worry about this? > > > > Oct 14 06:25:02 mon01 ceph-mds[35067]: 2019-10-14 06:25…

[ceph-users] Re: How to reduce or control memory usage during recovery?

2019-09-24 Thread Amudhan P
…memory usage was high even when backfills is set to "1". On Mon, Sep 23, 2019 at 8:54 PM Robert LeBlanc wrote: > On Fri, Sep 20, 2019 at 5:41 AM Amudhan P wrote: > > I have already set "mon osd memory target to 1Gb" and I have set > max-backfill from…
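
For reference, the runtime knobs usually combined to bound memory and IO pressure during recovery on that release; the values are illustrative:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd_memory_target 1073741824'  # 1 GiB; pre-Nautilus OSDs may need a restart to honor it fully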

[ceph-users] Re: Host failure trigger " Cannot allocate memory"

2019-09-10 Thread Amudhan P
…system.journal corrupted > or uncleanly shut down, renaming and replacing. > [332951.019531] systemd[1]: Started Journal Service. > > On Tue, Sep 10, 2019 at 3:04 PM Amudhan P wrote: > > Hi, > > I am using ceph version 13.2.6 (mimic) on a test setup trying cephfs. >…

[ceph-users] Re: Host failure trigger " Cannot allocate memory"

2019-09-10 Thread Amudhan P
…/8f2559099bf54865adc95e5340d04447/system.journal corrupted or uncleanly shut down, renaming and replacing. [332951.019531] systemd[1]: Started Journal Service. On Tue, Sep 10, 2019 at 3:04 PM Amudhan P wrote: > Hi, > > I am using ceph version 13.2.6 (mimic) on test setup trying with cephfs. >…

[ceph-users] Host failure trigger " Cannot allocate memory"

2019-09-10 Thread Amudhan P
Hi, I am using ceph version 13.2.6 (mimic) on a test setup trying cephfs. My current setup: 3 nodes; 1 node contains two bricks and the other 2 nodes contain a single brick each. The volume is a 3-replica one. I am trying to simulate node failure: I powered down one host and started getting msgs on the other…

[ceph-users] ceph cluster warning after adding disk to cluster

2019-09-04 Thread Amudhan P
…R: 0.64/1.33 STDDEV: 2.43 regards Amudhan P

[ceph-users] Re: help

2019-08-30 Thread Amudhan P
…:49, Amudhan P wrote: > >> After leaving it for 12 hours, the cluster status is now healthy, but why did it >> take such a long time to backfill? >> How do I fine-tune it in case the same kind of error pops up again? >> >> The backfilling is taking a while because max_backfil…

[ceph-users] Re: help

2019-08-30 Thread Amudhan P
…> e: caspars...@supernas.eu > w: www.supernas.eu > > > On Thu, 29 Aug 2019 at 14:35, Amudhan P wrote: > >> output from "ceph -s" >> >> cluster: >> id: 7c138e13-7b98-4309-b591-d4091a1742b4 >> health: HEALTH_WARN >>…

[ceph-users] modifying "osd_memory_target"

2019-08-29 Thread Amudhan P
Hi, how do I change "osd_memory_target" on the ceph command line? regards Amudhan
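
A sketch of the two usual ways, depending on how persistent the change should be; the 2 GiB value is illustrative:

    ceph config set osd osd_memory_target 2147483648             # persists in the config database (Mimic and later)
    ceph tell osd.* injectargs '--osd_memory_target 2147483648'  # runtime change on running OSDs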

[ceph-users] Re: help

2019-08-29 Thread Amudhan P
…crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 75 lfor 0/67 flags hashpspool stripe_width 0 application cephfs On Thu, Aug 29, 2019 at 6:13 PM Heðin Ejdesgaard Møller wrote: > What's the output of > ceph osd pool ls detail > > > On Thu, 2019-08-29 at 18:06 +0530, Amudhan…

[ceph-users] Re: help

2019-08-29 Thread Amudhan P
…you provide the output of > ceph osd tree > and specify what your failure domain is? > > /Heðin > > > On Thu, 2019-08-29 at 13:55 +0200, Janne Johansson wrote: > > > > > > On Thu, 29 Aug 2019 at 13:50, Amudhan P wrote: > > > Hi, > > > >…

[ceph-users] help

2019-08-29 Thread Amudhan P
Hi, I am using ceph version 13.2.6 (mimic) on a test setup trying cephfs. My ceph health status shows a warning. "ceph health" HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded (15.499%) "ceph health detail" HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects…