Hi,
I don't know how this happened, but it seems the second node's hosts file
(/etc/hosts) was broken and "host-1" thought of itself as "host". Fixing
/etc/hosts also fixed this issue.
Thanks,
Gencer.
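For anyone hitting the same symptom: the root cause boils down to host names resolving inconsistently in /etc/hosts. A minimal sketch of a sanity check (the file path, host names, and addresses are placeholders, not taken from this thread):

```shell
# Sketch: succeed only if the given name maps to exactly one address
# in a hosts-format file (duplicate or missing mappings both fail).
check_hosts() {
    file="$1"; name="$2"
    # collect every address the name resolves to, ignoring comment lines
    count=$(awk -v n="$name" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) print $1 }' "$file" \
        | sort -u | wc -l)
    [ "$count" -eq 1 ]
}
```

Running this for each node name on each node would have flagged the broken file before the mgr logs did.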
On 19.11.2020 17:33:52, "Gencer Genç" wrote:
Hi,
I ran those c
hfs mounted
in node "host", not "host-1".
How can I fix this issue?
Thanks,
Gencer.
2020-11-19T14:26:39.976575+ mgr.host-1.lsgwgx [DBG] Refreshed host host-1 daemons (17)
2020-11-19T14:26:40.002548+ mgr.host-1.lsgwgx [DBG] _check_for_strays
2020-11-19T14:26:40.003664+000
> Did you tune mysql / postgres for this setup? Did you have a default
> ceph rbd setup?
Yes, I had to tune some settings on PostgreSQL, especially:
synchronous_commit = off
I have the default RBD settings.
Do you have any recommendations?
Thanks,
Gencer.
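For context, `synchronous_commit = off` is the main durability-for-latency trade-off on networked storage like RBD. A sketch of a postgresql.conf fragment (the commented values are illustrative assumptions, not settings from this thread):

```
# postgresql.conf fragment (sketch): lower commit latency on RBD by
# not waiting for the WAL flush at commit time.
synchronous_commit = off   # risk: lose the last few hundred ms of commits on a crash
# often tuned alongside it (illustrative, not from this thread):
# wal_compression = on
# checkpoint_completion_target = 0.9
```

Note that `off` never risks corruption, only the loss of the most recent asynchronous commits.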
On 19.10.2020 12:49:51, Marc Roos wrote:
> In the past I see some good results (benchmark & latencies) fo
Hi,
After reading a few sources, I decided to use a dedicated NVMe disk with native
replication. The benchmarks and performance numbers I referred to were for internal
use only; not much data is stored there. I believe that for production use it
would bring more trouble than benefit.
Thanks,
Gencer
Hi Irek,
In the past I have seen some good results (benchmarks & latencies) for MySQL and
PostgreSQL. However, I've always used a 4 MB object size. Maybe I can get much
better performance with a smaller object size; I haven't actually tried.
Why do you not recommend this setup?
Gencer.
Should I use it as is, or should I use a 16 KB object size and a different set of
features for PostgreSQL?
Thanks,
Gencer.
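For background on that trade-off: the object size is fixed per RBD image at creation time (e.g. `rbd create --object-size 16K ...`, against a 4 MiB default), and a smaller object size multiplies the number of RADOS objects backing the image. A quick sketch of the arithmetic (image sizes here are hypothetical examples):

```shell
# Sketch: number of RADOS objects backing an image of a given size,
# i.e. ceil(image_bytes / object_bytes).
objects_for() {
    echo $(( ($1 + $2 - 1) / $2 ))
}
# e.g. a 100 GiB image:
#   objects_for $((100<<30)) $((4<<20))   -> 25600    objects at 4 MiB
#   objects_for $((100<<30)) $((16<<10))  -> 6553600  objects at 16 KiB
```

The 256x jump in object count is one reason small object sizes are usually benchmarked carefully before production use.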
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Thank you so much once again for your great help!
Gencer.
On 6.06.2020 00:15:15, Sebastian Wagner wrote:
Am 05.06.20 um 22:47 schrieb Gencer W. Genç:
> Hi Sebastian,
>
> I have gone ahead and dug into the github.com/ceph source code. I see that
> mons are grouped under the name 'mon'. This ma
In past commands I got quorum. This
time it acknowledged the monitor hostname but failed due to not enough monitors
after stopping it.
Any idea on this step?
Thanks,
Gencer.
On 4.06.2020 13:20:09, Sebastian Wagner wrote:
sorry for the late response.
I'm seeing
> Upgrade: It is NOT safe to s
Hi,
I also tried:
$ ceph mon ok-to-stop all
No luck again; it seems Ceph ignores this.
The other Ceph cluster, which has 9 nodes (and 3 mons), upgraded successfully.
Gencer.
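For reference, the reason `ok-to-stop` refuses here is the monitor quorum rule: a Paxos quorum needs a strict majority of monitors alive, so with n = 2 no monitor can be stopped safely. A sketch of the arithmetic:

```shell
# Sketch of the monitor quorum arithmetic.
# Minimum monitors that must stay up to keep quorum: floor(n/2) + 1.
quorum_needed() { echo $(( $1 / 2 + 1 )); }
# Maximum monitors that can be stopped without losing quorum.
can_stop() { echo $(( $1 - ($1 / 2 + 1) )); }
# n=2 -> needs 2, can stop 0 (hence "NOT safe to stop");
# n=3 -> needs 2, can stop 1, which is why the 3-mon cluster upgraded fine.
```

This is also why odd monitor counts (3, 5) are the usual recommendation: 2 mons tolerate no failures, the same as 1.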
; -16 in 0.003s
2020-06-01T22:10:08.441122+ mgr.vx-rg23-rk65-u43-130-1.pxmyie [INF] Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
How can I solve this and upgrade?
Thanks,
Gencer.
upgrade can continue?
If so, should I upgrade, or wait for 15.2.3, since @Ashley said 15.2.2 has
problems?
Thanks,
Gencer.
On 25.05.2020 19:25:49, Sebastian Wagner wrote:
Am 22.05.20 um 19:28 schrieb Gencer W. Genç:
> Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130
please make s
gs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 TiB avail
Sebastian Wagner wrote:
> Hi Gencer,
>
> I'm going to need the full mgr log file.
>
> Best,
> Sebastian
>
> Am 20.05.20 um 15:07 schrieb Gencer W. Genç:
> > Ah yes,
> >
> > {
> >
he sees it soon and did not miss it.
Thanks,
Gencer.
On 21.05.2020 20:25:03, Ashley Merrick wrote:
Hello,
Yes, I did, but I wasn't able to suggest anything further to get around it; however:
1/ There is currently an issue with 15.2.2, so I would advise holding off any
upgrade.
2/ Another mail list
Hi Sebastian,
I did not get your reply via e-mail. I am very sorry for this. I hope you can
see this message...
I've re-run the upgrade and attached the log.
Thanks,
Gencer.
Hi Ashley,
Have you seen my previous reply? If so, and there is no solution, does anyone have
any idea how this can be done with 2 nodes?
Thanks,
Gencer.
On 20.05.2020 16:33:53, Gencer W. Genç wrote:
This is a 2-node setup. I have no third node :(
I am planning to add more in the future, but currently
21:28:19 +0800 Gencer W. Genç wrote:
I have 2 mons and 2 mgrs.
cluster:
id: 7d308992-8899-11ea-8537-7d489fa7c193
health: HEALTH_OK
services:
mon: 2 daemons, quorum vx-rg23-rk65-u43-130,vx-rg23-rk65-u43-130-1 (age 91s)
mgr: vx-rg23-rk65-u43-130.arnvag(active
third mon btw.
Thanks,
Gencer.
On 20.05.2020 16:21:03, Ashley Merrick wrote:
Yes, I think it's because you're only running two mons, so the script is halting
at a check to stop you from being left with just one running (no backup).
I had the same issue with a single MGR instance and had to add
after line #49. I stopped and
restarted the upgrade there.
Thanks,
Gencer.
On 20.05.2020 16:13:00, Ashley Merrick wrote:
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug
See if you see anything that stands out as an issue with the update; it seems it
has
version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) octopus
(stable)": 28,
"ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus
(stable)": 2
}
}
How can I fix this?
Gencer.
On 20.05.2020 16:04:33, Ashley Merrick wrote:
Does:
ceph version
Hi Ashley,
$ ceph orch upgrade status
{
"target_image": "docker.io/ceph/ceph:v15.2.2",
"in_progress": true,
"services_complete": [],
"message": ""
}
Thanks,
Gencer.
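For anyone scripting around a stuck upgrade, the status output above is JSON, so the flags can be checked mechanically. A minimal sketch using only grep (it reads from stdin so no cluster is needed to try it; the field name is taken from the output above):

```shell
# Sketch: exit 0 iff the JSON on stdin reports "in_progress": true,
# e.g.  ceph orch upgrade status | upgrade_in_progress
upgrade_in_progress() {
    # isolate the in_progress key/value pair, then look for the literal "true"
    grep -o '"in_progress"[[:space:]]*:[[:space:]]*[a-z]*' | grep -q true
}
```

A real script would use a proper JSON parser; this only illustrates the shape of the check.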
On 20.05.2020 15:58:34, Ashley Merrick wrote:
What
)
It says 8 hours. It has already run for 3 hours. No upgrade has been processed;
it gets stuck at this point.
Is there any way to find out why this is stuck?
Thanks,
Gencer.
dashboard issue and this?
Thanks,
Gencer.
On 19.05.2020 23:44:25, Gencer W. Genç wrote:
Hi,
I was browsing the dashboard today. Then suddenly it stopped working and I got 502
errors. I checked via root login and saw that Ceph health is down to WARN.
I can access all RBD devices and CephFS; they work. All
hadm/ssh_identity_key) root@server-1
Thanks,
Gencer.
Hi JC,
Thank you for the reply.
I believe "global" will override (take precedence over) all "mon.{id}" settings,
right?
Thanks,
Gencer.
On 30.04.2020 02:34:18, JC Lopez wrote:
Hi,
later versions of Ceph no longer rely on the configuration file but on a
MON centralized configura
and apply to
all servers. But I cannot find this in the new cephadm tool. I made a few changes to
ceph.conf but Ceph is unaware of those changes. How can I apply them? I've used
it with Docker. Thanks, Gencer.
Hi Volker,
Thank you so much for your quick fix for me. It worked. I got my dashboard back
and ceph is in HEALTH_OK state.
Thank you so much again and stay safe!
Regards,
Gencer.
What about Quasar? (https://www.google.com/search?q=quasar)
It belongs to the universe.
True, there are not that many options for Q.
Hi Volker,
Sure, here you go:
{"users": {"gencer": {"username": "gencer", "password": "",
"roles": ["administrator"], "name": "Gencer Gen\u00e7", "email": "
Hi Lenz,
Yeah, I saw the PR, and I still hit the issue today. In the meantime, while
@Volker investigates, is there a workaround to bring the dashboard back? Or
should I wait for @Volker's investigation?
P.S.: I found this too: https://tracker.ceph.com/issues/44271
Thanks,
Gencer
: Module 'dashboard' has experienced an error and cannot handle
commands:
('pwdUpdateRequired',)
How can I fix this problem? With the dashboard disabled, Ceph status is back to
OK, but if I enable the dashboard then ERR is given.
Thanks,
Gencer.