shouldn't happen. The monitor has to trim
these logs as well.
What could be the problem? Maybe a missing option in the ceph.conf?
Quoting Wido den Hollander <w...@42on.com>:
> On 11 August 2016 at 10:18, Eugen Block <ebl...@nde.ag> wrote:
>
>
> Thanks for the really quick response!
>
> > Warning! These are not your regular log files.
>
> Thanks for the warning!
>
> > You shouldn't have to worry about that. T
pools, 1551 GB data, 234 kobjects
3223 GB used, 4929 GB / 8153 GB avail
4336 active+clean
client io 0 B/s rd, 72112 B/s wr, 7 op/s
Quoting Wido den Hollander <w...@42on.com>:
On 11 August 2016 at 9:56, Eugen Block <ebl...@nde.ag> wrote:
Hi list,
/PBL3kuhq/large-log-like-files-on-monitor
a may help you nail it.
I suspect, though, that it may come down to enabling debug logging and
tracking a slow request through the logs.
On Thu, Jan 12, 2017 at 8:41 PM, Eugen Block <ebl...@nde.ag> wrote:
Hi,
Looking at the output of dump_historic_ops and dump_ops_in_flight
I waited for
performance impact during working
hours.
Please let me know if I missed anything. I really appreciate you
looking into this.
Regards,
Eugen
Quoting Christian Balzer <ch...@gol.com>:
Hello,
On Wed, 01 Feb 2017 11:43:02 +0100 Eugen Block wrote:
Hi,
I haven't tracked th
have these inconsistencies
since we increased the size to 3.
Quoting Mio Vlahović <mio.vlaho...@bcs.hr>:
Hello,
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Eugen Block
I had a similar issue recently, where I had a replication size of 2 (I
changed tha
Glad I could help! :-)
Quoting Mio Vlahović <mio.vlaho...@bcs.hr>:
From: Eugen Block [mailto:ebl...@nde.ag]
From what I understand, with a rep size of 2 the cluster can't decide
which object is intact if one is broken, so the repair fails. If you
had a size of 3, the cluster would have a majority to tell which copy is correct.
    "epoch_started": 577,
    "hit_set_history": {
        "current_last_update": "0'0",
        "history": []
    }
  }
],
"recovery_state": [
    {
        "name": "Started\/Primary\
ery. Or should I
have deleted that PG instead of re-activating old OSDs? I'm not sure
what the best practice would be in this case.
Any help is appreciated!
Regards,
Eugen
far...@redhat.com>:
On Mon, Feb 13, 2017 at 7:05 AM Wido den Hollander <w...@42on.com> wrote:
> On 13 February 2017 at 16:03, Eugen Block <ebl...@nde.ag> wrote:
>
>
> Hi experts,
>
> I have a strange situation right now. We are re-organizing our 4 node
> Ham
d (rbd) image
so the result looks as if this image has been uploaded to glance:
https://github.com/openstack/nova/blob/a41ee43792a2b37c7e1fd12700a8b2fd3ccba4ec/nova/virt/libvirt/imagebackend.py#L971-L990
Best regards,
Alexey
On Fri, Sep 2, 2016 at 4:11 PM, Eugen Block <ebl...@nde.
-Original Message-
From: Eugen Block [mailto:ebl...@nde.ag]
Sent: Friday, September 2, 2016 7:12 AM
To: Steve Taylo
e? I would assume that there is some kind of
flag set for that image. Maybe someone can point me to the right
direction.
Thanks,
Eugen
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Eugen Block
Sent: Thursday, September 1, 2016 6:51
really grateful for your advice!
Regards,
Eugen
[1] http://ceph.com/planet/ceph-manually-repair-object/
number of scrub errors is increasing, although I started with
more than 400 scrub errors.
What I have tried is to manually repair single PGs as described in [1]
didn't know from
which OSD it should recover missing data. Please correct me if I'm
wrong. For production use we should probably increase to a rep size of
3, I guess.
Regards
Eugen
Quoting lyt_yudi <lyt_y...@icloud.com>:
On 26 September 2016 at 10:44 PM, Eugen Block <ebl...@nde.ag> wrote:
how did that happen? Was the image converted
somehow to raw? Was the other part of the image that did not have any
information stored trimmed?
Thank you
Thank you!
Quoting Nick Fisk <n...@fisk.me.uk>:
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
Behalf Of Eugen Block
Sent: 22 November 2016 10:11
To: Nick Fisk <n...@fisk.me.uk>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph
the
default for osd_max_scrubs is working now and I don't see major
impacts yet.
But is there something else I can do to reduce the performance impact?
I just found [1] and will have a look into it.
[1] http://prob6.com/en/ceph-pg-deep-scrub-cron/
Thanks!
Eugen
s-boun...@lists.ceph.com] On
Behalf Of Eugen Block
Sent: 22 November 2016 09:55
To: ceph-users@lists.ceph.com
Subject: [ceph-users] deep-scrubbing has large impact on performance
Hi list,
I've been searching the mail archive and the web for some help. I
tried the things I found, but I can
find the location of an executable
[2016-10-03 15:21:15,472][ceph03][INFO ] Running command: sudo
/bin/ceph --cluster=ceph osd stat --format=json
[2016-10-03 15:21:15,698][ceph03][INFO ] Running command: sudo
systemctl enable ceph.target
More details in other thread.
Where am I going wrong?
How do you deal with these slow requests?
Thanks for any help!
Regards,
Eugen
8",
"age": 112.118210,
"duration": 26.452526,
They also contain many "waiting for rw locks" messages, but not as
many as the dump from the reporting OSD.
To me it seems that because two OSDs take a lot of time to process
their req
corrected the cron job and there was no such message in my inbox, so
I hope this is resolved.
Quoting Eugen Block <ebl...@nde.ag>:
Hi list,
I use the script from [1] to control the deep-scrubs myself in a
cronjob. It seems to work fine, I get the "finished batch" message
in /
equivalent, of
course ;-)
Quoting Christian Balzer <ch...@gol.com>:
Hello,
On Thu, 29 Jun 2017 08:53:25 + Eugen Block wrote:
Hi,
what does systemctl status -l ceph-osd@4.service say? Is anything
suspicious in the syslog?
I'm pretty sure (the OP didn't specify that or other o
Quoting Christian Balzer <ch...@gol.com>:
On Tue, 22 Aug 2017 09:54:34 + Eugen Block wrote:
Hi list,
we have a productive Hammer cluster for our OpenStack cloud and
recently a colleague added a cache tier consisting of 2 SSDs with
a pool size of 2; we're still experimenting with this topic.
Risky, but I gue
available that could be much easier
to use for a production deployment.
Thanks,
Shambhu Rajak
,
of course, but why does the control node also establish so many
connections?
I'd appreciate any insight!
Regards,
Eugen
how to identify their "owner", is there a way?
Has anyone a hint what else I could check or is it reasonable to
assume that the objects are really the same and there would be no data
loss in case we deleted that pool?
We appreciate any help!
Regards,
Eugen
. But this occurs every time we have to
restart them. Can anyone explain what is going on there? We could use
your expertise on this :-)
Best regards,
Eugen
On Wed, Nov 1, 2017 at 5:14 AM Eugen Block <ebl...@nde.ag> wrote:
Hi experts,
we have upgraded our cluster to Luminous successfully, no big issues
so far. We are also testing cache tier with only 2 SSDs (we know it's
not recommended), and there's one issue to be resolved:
Every time
Hi,
I'm not sure if this is deprecated or something, but I usually have to
execute an additional "ceph auth del osd.<id>" before recreating an OSD.
Otherwise the OSD fails to start. Maybe this is a missing step.
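For illustration, a rough sketch of the sequence (osd.1 is just a placeholder for whichever OSD you recreate):
ceph osd rm osd.1     # remove the old OSD entry if it still exists
ceph auth del osd.1   # drop the stale cephx key so the recreated OSD can authenticate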
Regards,
Eugen
Quoting Gary Molenkamp:
Good morning all,
Last
molen...@uwo.ca>:
Thanks Eugen,
The OSDs will start immediately after completing the "ceph-volume
prepare", but they won't start on a clean reboot. It seems that
the "prepare" is mounting the /var/lib/ceph/osd/ceph-osdX
path/structure but this is missing now in my boot
Hi,
I have a similar issue and would also need some advice on how to get rid
of the already deleted files.
Ceph is our OpenStack backend and there was a nova clone without
parent information. Apparently, the base image had been deleted
without a warning or anything although there were
Hi,
So "somthing" goes wrong:
# cat /var/log/libvirt/libxl/libxl-driver.log
-> ...
2018-05-20 15:28:15.270+0000: libxl:
libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult
logfile /var/log/xen/bootloader.7.log
2018-05-20 15:28:15.270+0000: libxl:
Igor
On 5/25/2018 2:22 PM, Eugen Block wrote:
Hi list,
we have a Luminous bluestore cluster with separate
block.db/block.wal on SSDs. We were running version 12.2.2 and
upgraded yesterday to 12.2.5. The upgrade went smoothly, but since
the restart of the OSDs I noticed that 'ceph osd df'
Hi list,
we have a Luminous bluestore cluster with separate block.db/block.wal
on SSDs. We were running version 12.2.2 and upgraded yesterday to
12.2.5. The upgrade went smoothly, but since the restart of the OSDs I
noticed that 'ceph osd df' shows a different total disk size:
---cut
Hi,
[root@n1 ~]# ceph osd pool rm mytestpool mytestpool --yes-i-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored
If the command you posted is complete, then you forgot one "really" in
the --yes-i-really-really-mean-it option.
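For reference, the form that should pass the safety check (assuming mon_allow_pool_delete is enabled) is:
ceph osd pool rm mytestpool mytestpool --yes-i-really-really-mean-it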
Regards
Quoting Steffen
Hi,
we had to recreate some block.db's for some OSDs just a couple of
weeks ago because our existing journal SSD had failed. This way we
avoided rebalancing the whole cluster; just the OSD had to be filled
up. Maybe this will help you too.
mmands. Is there any way to remove it properly?
Most of the commands work with the name, not the id of the FS, so it's
difficult to access the data from the old FS. Has anyone some insights
on how to clean this up?
Regards,
Eugen
| less
It's a Proxmox system. There were only two snapshots on the PG, which I
deleted now. Now nothing gets displayed on the PG... is that possible? A
repair still fails unfortunately...
Best & thank you for the hint!
Karsten
On 19.02.2018 22:42, Eugen Block wrote:
BTW - how can I find out, which
what is stored in the PG, I get no match with my PG ID anymore.
If I take the approach of "rbd info", which was posted by Mykola Golub, I
get a match - unfortunately the most important VM on our system, which
holds the software for our Finance.
Best
Karsten
On 20.02.2018 09:16, Eugen
a new Ceph could not understand anymore.
We'll see... the VM is large and currently copying... if the error also
gets copied, the VM format/age is the cause. If not, ... hm... :-D
Nevertheless thank you for your help!
Karsten
On 20.02.2018 15:47, Eugen Block wrote:
I'm not quite sure
someone else can share some thoughts on this.
Quoting Karsten Becker <karsten.bec...@ecologic.eu>:
Hi.
We have size=3 min_size=2.
But this "upgrade" has been done during the weekend. We had size=2
min_size=1 before.
Best
Karsten
On 19.02.2018 13:02, Eugen Block wrote:
H
Hi,
I created a ticket for the rbd import issue:
https://tracker.ceph.com/issues/23038
Regards,
Eugen
Quoting Jason Dillaman <jdill...@redhat.com>:
On Fri, Feb 16, 2018 at 11:20 AM, Eugen Block <ebl...@nde.ag> wrote:
Hi Jason,
... also forgot to mention "rbd export
Luminous release.
Thank you for any information and / or opinion you care to share!
With regards,
Jens
[1] https://github.com/ceph/ceph/pull/15831
basic_string<char, std::char_traits<char>,
std::allocator<char> >, ObjectStore::Sequencer&)+0x1135)
[0x55eef36002f5]
10: (main()+0x3909) [0x55eef3561349]
11: (__libc_start_main()+0xf1) [0x7facae0892b1]
12: (_start()+0x2a) [0x55eef35e901a]
Aborted
Best
Karsten
On 19.02.2018 17:09, Eugen Block wrote:
Could [1]
On 19.02.2018 13:02, Eugen Block wrote:
Hi,
just to rule out the obvious, which size does the pool have? You aren't
running it with size = 2, are you?
Quoting Karsten Becker <karsten
Hi,
I sent the logfile in the attachment. I can find no error messages
or anything problematic…
I didn't see any log file attached to the email.
Another question: Is there a link between the VMs that fail to write
to CephFS and the hypervisors? Are all failing clients on the same
Hi,
we have a full bluestore cluster and had to deal with read errors on
the SSD for the block.db. Something like this helped us to recreate a
pre-existing OSD without rebalancing, just refilling the PGs. I would
zap the journal device and let it recreate. It's very similar to your
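A rough sketch of what we did (device names and the id are placeholders, double-check them against your own layout):
ceph-volume lvm zap /dev/sdX --destroy
ceph-volume lvm create --osd-id <id> --data /dev/sdY --block.db /dev/sdX1
Reusing the same OSD id means the cluster only has to refill that OSD's PGs instead of rebalancing everything.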
Hi,
Some of the ceph-fuse clients hang on write operations to the cephFS.
Do all the clients use the same credentials for authentication? Have
you tried to mount the filesystem with the same credentials as your
VMs do and then tried to create files? Has it worked before or is this
a new
Hi,
How then can one upgrade journals to BlueStore when there is more than one
journal on the same disk?
if you're using one SSD for multiple OSDs the disk probably has
several partitions. So you could just zap one partition at a time and
replace the OSD. Or am I misunderstanding the
the bluestore partition (wal,
db) larger than the default
Which kernels do your clients use?
Zhenshi Zhou wrote on Monday, 13 August 2018 at 10:15 PM:
Hi Eugen,
The command shows "mds_cache_memory_limit": "1073741824".
And I'll increase the cache size for a try.
Thanks
Eugen Block wrote on Monday, 13 August 2018 at 9:48 PM:
Hi,
Depending on your kernel (memory leaks with CephFS) inc
Hi,
1. Is there a formula to calculate the optimal size of partitions on
the SSD for each OSD, given their capacity and IO performance? Or is
there a rule of thumb on this?
Wido and probably some other users already mentioned 10 GB per 1 TB
OSD (1/100th of the OSD). Regarding the WAL size,
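As a rough worked example of that 1/100th rule: a 4 TB OSD would get 4 TB / 100 = 40 GB for its block.db.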
Hi,
Depending on your kernel (memory leaks with CephFS) increasing the
mds_cache_memory_limit could be of help. What is your current setting
now?
ceph:~ # ceph daemon mds.<name> config show | grep mds_cache_memory_limit
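If you need to change it without a restart, something like this should work on the MDS node (4 GB here is only an example value):
ceph:~ # ceph daemon mds.<name> config set mds_cache_memory_limit 4294967296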
We had these messages for months, almost every day.
It would occur when
Hi,
could you be hitting the bug from [1]? Watch out for segfaults in dmesg.
Since a couple of days we see random OSDs with a segfault from
safe_timer. We didn't update any packages for months.
Regards
[1] https://tracker.ceph.com/issues/23352
Quoting Rudenko Aleksandr:
Hi, guys.
Hi,
the missing "ln -snf ..." is probably related to missing LV tags. When
we had to migrate OSD journals to another SSD because of a failed SSD
we noticed the same difference to new (healthy) OSDs. Compare the tags
of your Logical Volumes to their actual UUIDs and all the other
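For comparison, something like this lists the tags ceph-volume relies on (a sketch):
lvs -o lv_name,lv_tags
Check that tags such as ceph.db_device and ceph.db_uuid still point at the current devices.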
Hi,
I see you're running 12.2.6, there have been several threads here that
recommend an update to 12.2.7 because this release is broken:
http://lists.ceph.com/pipermail/ceph-announce-ceph.com/2018-July/000126.html
Regards,
Eugen
Quoting krwy0...@163.com:
The crash has happened for
The correct URL should be:
http://lists.ceph.com/pipermail/ceph-large-ceph.com/2018-April/000106.html
Quoting Jonathan Proulx:
On Mon, Aug 20, 2018 at 06:13:26AM -0400, David Turner wrote:
:There is a thread from the ceph-large users ML that covered a way to do
:this change without
Hi,
I don't know why, but I noticed in the ceph-volume-systemd.log
(above in bold) that there are 2 different lines corresponding to
lvm-1 (normally associated with osd.1)?
One seems to have the correct id, while the other has a bad
one... and it looks like it's trying to start
Update:
I changed the primary affinity of one OSD back to 1.0 to test if those
metrics change, and indeed they do:
OSD.24 immediately shows values greater than 0.
I guess the metrics are completely unrelated to the flapping.
So the search goes on...
Quoting Eugen Block:
An hour ago
Hi,
take a look into the logs, they should point you in the right direction.
Since the deployment stage fails at the OSD level, start with the OSD
logs. Something's not right with the disks/partitions, did you wipe
the partition from previous attempts?
Regards,
Eugen
Quoting Jones de
Update: we are getting these messages again.
So the search continues...
Quoting Eugen Block:
Hi,
Depending on your kernel (memory leaks with CephFS) increasing the
mds_cache_memory_limit could be of help. What is your current
setting now?
ceph:~ # ceph daemon mds. config show
Unfortunately that is absolutely not an option.
Thanks a lot in advance for any comments and/or extra suggestions.
Sincerely yours,
Jones
On Sat, Aug 25, 2018 at 5:46 PM Eugen Block wrote:
Hi,
take a look into the logs, they should point you in the right direction.
Since the deployment stage fails at the
Hi,
could you please paste your osd tree and the exact command you are trying to execute?
Extra note: the while loop in the instructions looks like it's bad.
I had to change it to make it work in bash.
The documented command didn't work for me either.
Regards,
Eugen
Quoting Robert Stanford:
Hello *,
we have an issue with a Luminous cluster (all 12.2.5, except one on
12.2.7) for RBD (OpenStack) and CephFS. This is the osd tree:
host1:~ # ceph osd tree
ID CLASS WEIGHT   TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       22.57602 root default
-4        1.81998     host host5
mon osd.24 config show | grep debug_client
"debug_client": "0/5",
Should the memory level be turned off as well? I'll give it a try.
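For example, to switch off both the file and the memory level at runtime (a sketch for osd.24):
ceph tell osd.24 injectargs '--debug_client 0/0'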
Regards,
Eugen
Quoting Gregory Farnum:
On Wed, Aug 22, 2018 at 6:46 AM Eugen Block wrote:
Hello *,
we have an issue with a Lu
"history_alloc_Mbytes": 0,
"history_alloc_num": 0,
"cached_crc": 0,
"cached_crc_adjusted": 0,
"missed_crc": 0,
-"numpg": 565,
-"numpg_primary": 256,
-"numpg_replica&
have a positive effect on the messages, because I get fewer
messages than before.
Eugen Block wrote on Monday, 20 August 2018 at 9:29 PM:
Update: we are getting these messages again.
So the search continues...
Quoting Eugen Block:
> Hi,
>
> Depending on your kernel (memory leaks with CephFS) i
Hi,
ceph osd pool get your_pool_name size
ceph osd pool ls detail
these are commands to get the size of a pool regarding the
replication, not the available storage.
So the capacity in 'ceph df' is returning the space left on the pool and
not the 'capacity size'.
I'm not aware of a
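For illustration, with a hypothetical pool named rbd of size 3:
ceph osd pool get rbd size      # -> size: 3
ceph osd pool ls detail         # lists replicated size, min_size, pg_num etc. per pool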
Hi,
There is no way to resize the DB while the OSD is running. There is a
somewhat shorter "unofficial" but risky way than redeploying the OSD,
though. But you'll need to take the specific OSD out for a while in any
case. You will also need either additional free partition(s), or the
initial deployment had to be
/bluestore-config-ref/
Quoting Robert Stanford:
Thank you. Sounds like the typical configuration is just RocksDB on the
SSD, and both data and WAL on the OSD disk?
On Thu, Jul 19, 2018 at 9:00 AM, Eugen Block wrote:
Hi,
if you have SSDs for RocksDB, you should provide that in the command
Hi,
if you have SSDs for RocksDB, you should provide that in the command
(--block.db $DEV), otherwise Ceph will use the one provided disk for
all data and RocksDB/WAL.
Before you create that OSD you probably should check out the help page
for that command, maybe there are more options you
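A sketch of what I mean (device names are placeholders):
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1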
give up. Just, please, answer these two
questions clearly before we capitulate? :(
Anyway, thanks a lot, once again,
Jones
On Mon, Sep 3, 2018 at 5:39 AM Eugen Block wrote:
Hi Jones,
I still don't think creating an OSD on a partition will work. The
reason is that SES creates an additiona
Hi,
Are you asking us to do 40GB * 5 partitions on SSD just for block.db?
yes. By default ceph deploys block.db and block.wal on the same device if
no separate wal device is specified.
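To make that concrete, a sketch (the partition names are placeholders):
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p2
Without an explicit --block.wal, the WAL simply lives inside the block.db partition.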
Regards,
Eugen
Quoting Muhammad Junaid:
Thanks Alfredo. Just to clarify: my configuration has 5
memory size in top is still
only about 5 GB after one week. Obviously there have been improvements
regarding memory consumption of the MDS, which is nice. :-)
Regards,
Eugen
Quoting Eugen Block:
Hi,
I think it does have a positive effect on the messages, because I get fewer
messages than before.
of sddm.
2018-08-30T15:44:15.613246-03:00 torcello systemd[2295]: Started D-Bus User
Message Bus.
2018-08-30T15:44:15.623989-03:00 torcello dbus-daemon[2311]: [session
uid=1000 pid=2311] Successfully activated service 'org.freedesktop.systemd1'
2018-08-30T15:44:16.447162-03:00 torcello kapplymousethem
(sorry for my poor English) as a
possible error.
Any suggestion on how to proceed?
Thanks a lot in advance,
Jones
On Mon, Aug 27, 2018 at 5:29 AM Eugen Block wrote:
Hi Jones,
all ceph logs are in the directory /var/log/ceph/, each daemon has its
own log file, e.g. OSD logs are named ceph-os
Can you try it on one of your MDS servers? It should work there.
Quoting Florent B <flor...@coppint.com>:
Hi,
Thank you, but I got:
admin_socket: exception getting command descriptions: [Errno 2] No
such file or directory
On 17/01/2018 12:47, Eugen Block wrote:
Hi
mds.NAME is working (without *), but is there no
way to do it on all MDS at the same time?
On 17/01/2018 12:54, Florent B wrote:
That's what I did, I ran it on the active MDS server.
I'm running version 12.2.2.
On 17/01/2018 12:53, Eugen Block wrote:
Can you try it on one of your MDS servers? It sh
caps: [osd] allow *
How can I change this value without restarting services?
Thank you.
Florent
ceph@host1:~> ceph daemon mds.host1 config show | grep mds_cache_size
"mds_cache_size": "0",
le process from
the start.
Regards,
Tom
On Mon, Jan 8, 2018 at 2:19 PM, Eugen Block <ebl...@nde.ag> wrote:
Hi list,
all this is on Ceph 12.2.2.
An existing cephFS (named "cephfs") was backed up as a tar ball, then
"removed" ("ceph fs rm cephfs --yes-i-really-m
ting-your-ceph-pools-against-removal-or-property-changes/
kind regards
Ronny Aasen
Has someone experienced similar issues and can shed some light on
this? Any insights would be very helpful.
Regards,
Eugen
[1]
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024913.html
--
Eugen Block voice : +49-40-559 51 75
NDE Netzdesign
---
So in conclusion, this method is not suited for OpenStack. You could
probably consider it in case of disaster recovery for single VMs, but
not for a whole cloud environment where you would lose all
relationships between base images and their clones.
Regards,
Eugen
Quoting Eugen Block
Hi Andrei,
we have been using the script from [1] to define the number of PGs to
deep-scrub in parallel, we currently use MAXSCRUBS=4, you could start
with 1 to minimize performance impacts.
And these are the scrub settings from our ceph.conf:
ceph:~ # grep scrub /etc/ceph/ceph.conf
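For illustration, scrub throttling options of this kind go into the [osd] section of ceph.conf (the values here are only examples, not a recommendation):
osd max scrubs = 1
osd scrub sleep = 0.1
osd scrub begin hour = 22
osd scrub end hour = 6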