Did you reinstall the mons as well? If not, check whether you've removed that OSD's
auth (ceph auth ls).
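For reference, a hedged sketch of that cleanup; osd.5 stands in for whichever OSD id the replaced node carried:

# list auth entries and look for the old OSD's key
ceph auth ls
# show just that entry (this fails if it was already removed)
ceph auth get osd.5
# drop the stale entry so the rebuilt OSD can register cleanly
ceph auth del osd.5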
On Fri, Nov 8, 2019, 19:27 nokia ceph wrote:
> Hi,
>
> The fifth node in the cluster was affected by hardware failure and hence
> the node was replaced in the ceph cluster. But we were not able to
Those few GB would be the next line of
defense. It's better to have downtime than an 'unable to recover' situation.
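Not from the thread itself, but for context, the usual headroom checks and knobs on a Nautilus-era cluster (the ratio values below are only the stock defaults, not a recommendation):

# per-OSD utilisation, to spot OSDs approaching the full ratio
ceph osd df tree
# cluster-wide thresholds; 0.85 / 0.95 are the defaults
ceph osd set-nearfull-ratio 0.85
ceph osd set-full-ratio 0.95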
On Thu, 31 Oct 2019 at 15:07, George Shuklin <george.shuk...@gmail.com> wrote:
Thank you everyone, I got it. There is no way to fix out-of-space
bluestore w
it in a degraded state for weeks at a time...
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Thu, Oct 31, 2019 at 10:50 AM George Shuklin
wrote:
Hello.
In my lab
Hello.
In my lab a Nautilus cluster with bluestore suddenly went dark. As I
found out, it had used 98% of the space and most of the OSDs (small, 10G each)
went offline. Any attempt to restart them failed with this message:
# /usr/bin/ceph-osd -f --cluster ceph --id 18 --setuser ceph --setgroup
Hello.
I've created an rgw installation and uploaded about 60M files into a
single bucket. Removal looked like it would be a long adventure, so I "ceph osd
pool rm'ed" both default.rgw.data and default.rgw.index.
Now I have this:
# rados lspools
.rgw.root
default.rgw.control
default.rgw.meta
Good news, it's not an emergency, just a curiosity.
Suppose I lost all monitors in a ceph cluster in my laboratory. I have
all OSDs intact. Is it possible to recover something from Ceph?
Hello.
I'm trying to configure monitoring for Ceph. For this I need to give the
user 'monitoring' rights such that it can see all details for everything
in Ceph (monitors, osds, pool state, etc.), but it shouldn't be able to
change anything and (preferably) shouldn't be able to access stored
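A minimal sketch of such a read-only client, assuming the name client.monitoring and a Luminous-or-newer cluster (the mgr cap does not exist on older releases):

ceph auth get-or-create client.monitoring \
    mon 'allow r' osd 'allow r' mgr 'allow r' \
    -o /etc/ceph/ceph.client.monitoring.keyring
# verify and test under the new identity
ceph auth get client.monitoring
ceph -n client.monitoring --keyring /etc/ceph/ceph.client.monitoring.keyring status

Note that osd 'allow r' still permits reading object data, which is exactly the tension described above.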
Hello.
Where do monitors keep their keys? I can't see them in 'ceph auth
list'. Are they in that list but I don't have permission to see them (as
admin), or are they stored somewhere else? How can I see that list?
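For what it's worth, with the default layout the mon. key lives in the monitor's own data directory rather than in the auth database, e.g. (path assumes the default cluster name and a mon id equal to the short hostname):

cat /var/lib/ceph/mon/ceph-$(hostname -s)/keyring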
Check the NUMA status in the BIOS. Sometimes Linux swaps instead of
migrating tasks between NUMA nodes (inside one host).
Set "interleave" or "disable" to see the difference.
On 07/27/2016 11:21 AM, Kenneth Waegeman wrote:
Hi all,
When our OSD hosts have been running for some time, we start to see increased
I looked into Ceph multitenancy and found almost none:
caps mon 'allow r', osd 'allow rwx pool some_pool' gives a tenant the ability
to see the pool list, osd tree, pg_dump, and even the list of objects in other
pools (via the rados ls command).
If I want to give a tenant a specific RBD for r/w (for his root
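A sketch of the per-pool restriction in question; client.tenant1 and some_pool are placeholders:

ceph auth get-or-create client.tenant1 \
    mon 'allow r' osd 'allow rwx pool=some_pool'

Even so, mon 'allow r' still lets the tenant see the pool list and the osd tree, which is the gap described above.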
Hello.
On 07/13/2016 03:31 AM, Christian Balzer wrote:
Hello,
did you actually read my full reply last week, the in-line parts,
not just the top bit?
http://www.spinics.net/lists/ceph-users/msg29266.html
On Tue, 12 Jul 2016 16:16:09 +0300 George Shuklin wrote:
Yes, linear io speed
94369759 45G unknown
/dev/sdc2 2048 10487807 10485760 5G unknown
Partition table entries are not in disk order.
On 11.07.2016 at 18:01, George Shuklin wrote:
Check the partition type of the data partition for ceph:
fdisk -l /dev/sdc
On 07/11/2016 04:03 PM, Dirk Laurenz wrote:
hmm, helps
Yes, linear IO speed was a concern during the benchmark. I cannot predict how
much linear IO will be generated by clients (compared to IOPS), so we are
going to balance HDD OSDs per SSD according to real usage. If users
generate too much random IO, we will raise the HDD/SSD ratio; if they
Mount the OSD data partition to a temporary directory (e.g. /mnt) and check where
the journal is pointing (ls -la /mnt/journal). It may point to a different location
from the one given in your "--osd-journal=" argument to --mkjournal.
On 07/11/2016 05:46 PM, William Josefsson wrote:
Hi All,
Initially, I used
Hello.
I want to try a CRUSH rule with the following idea:
take one OSD from the root with SSD drives (and use it as the primary),
take two OSDs from the root with HDD drives.
I've created this rule:
rule rule_mix {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step
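A sketch of how the completed rule might look, assuming roots named ssd and hdd exist in the CRUSH map (the names are placeholders, not taken from the original post):

rule rule_mix {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        step take hdd
        step chooseleaf firstn -1 type host
        step emit
}

The first emit makes the SSD OSD the primary; firstn -1 then fills the remaining replicas from the HDD root.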
George Shuklin:
Short story on how OSDs are started in systemd environments:
Ceph OSD partitions have a specific typecode (partition type
4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D). It is handled by the udev rules
shipped with the ceph package:
/lib/udev/rules.d/95-ceph-osd.rules
It sets up the proper owner/group
On 07/11/2016 09:57 AM, kefu chai wrote:
Hi Cephers,
I am proposing to drop support for i386, as we don't compile Ceph with
any i386 gitbuilder now[1] and hence don't test the i386 builds on
sepia on a regular basis. Also, based on the assumption that people
don't use i386 in production, I
Short story on how OSDs are started in systemd environments:
Ceph OSD partitions have a specific typecode (partition type
4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D). It is handled by the udev rules shipped
with the ceph package:
/lib/udev/rules.d/95-ceph-osd.rules
It sets up the proper owner/group for this disk ('ceph'
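A hedged way to confirm the typecode on a given partition (device and partition number are placeholders):

# "Partition GUID code" should read 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D for an OSD data partition
sgdisk --info=1 /dev/sdb
# re-trigger the udev rule by hand if the partition was created after boot
udevadm trigger --action=add --sysname-match=sdb1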
I found no option for setting the source IP in Ceph. You could probably try
network namespaces to isolate the Ceph services on the desired interfaces.
This would require a bit more setup though. You would need to create a
namespace, add some kind of patch (veth?) interface between the namespace
and the host, but
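A rough sketch of the plumbing being described; names and addresses are made up for illustration:

ip netns add cephns
ip link add veth-host type veth peer name veth-ceph
ip link set veth-ceph netns cephns
ip addr add 192.168.50.1/24 dev veth-host
ip link set veth-host up
ip netns exec cephns ip addr add 192.168.50.2/24 dev veth-ceph
ip netns exec cephns ip link set veth-ceph up
ip netns exec cephns ip link set lo up
# a ceph daemon launched via 'ip netns exec cephns ...' now sources traffic from 192.168.50.2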
Partitions really are your best bet.
On Wed, 6 Jul 2016 18:20:43 +0300 George Shuklin wrote:
Yes.
In my lab (not production yet) with 9 x 7200 rpm SATA drives (OSDs) and one INTEL
SSDSC2BB800G4 (800G, 9 journals)
First and foremost, a DC 3510 with 1 DWPD endurance is not my idea of a good
journal device, even
: [osd] allow *
On Wed, Jul 6, 2016 at 10:22 AM, George Shuklin
<george.shuk...@gmail.com <mailto:george.shuk...@gmail.com>> wrote:
In my experience, 50% of 'permission denied' errors for OSDs were coming
not from the filesystem, but from the monitors:
check if the key in ceph auth list matches
In my experience, 50% of 'permission denied' errors for OSDs were coming not
from the filesystem, but from the monitors:
check whether the key in ceph auth list matches osd_dir/keyring.
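A hedged way to compare the two copies of the key, using osd.3 and the default data path as placeholders:

ceph auth get osd.3
cat /var/lib/ceph/osd/ceph-3/keyring
# the 'key = ...' lines must match; if they don't, re-import the on-disk keyring
ceph auth import -i /var/lib/ceph/osd/ceph-3/keyring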
On 07/06/2016 06:08 PM, Samuel Just wrote:
Try strace.
-Sam
On Wed, Jul 6, 2016 at 7:53 AM, RJ Nowling
Thanks.
Cheers,
Alwin
On 07/06/2016 02:03 PM, George Shuklin wrote:
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I stumbled on
issues with multiple partitions (>4)
and UDEV (sda5, sda6, etc. sometimes do not appear after partition creation). And
I'm think
Hello.
I've been testing an Intel 3500 as a journal store for a few HDD-based OSDs. I
stumbled on issues with multiple partitions (>4) and UDEV (sda5, sda6, etc.
sometimes do not appear after partition creation). And I'm thinking that
partitioning is not that useful for OSD management, because Linux does not
Hello.
I'm testing different configurations for Ceph. I found that OSDs are
REALLY hungry for CPU.
I've created a tiny pool with size 1 and a single OSD on a fast Intel
SSD (2500-series), on an old Dell server (R210), Xeon E3-1230 V2 @ 3.30GHz.
And when I benchmark it I see some horribly low
I'm writing a ceph playbook and I see some issues with the ceph osd create
command. When I call it with all arguments, I can't distinguish whether it
created a new OSD or merely confirmed that this osd already exists.
ceph osd create 5ecc7a8c-388a-11e6-b8ad-5f3ab2552b13 22; echo $?
22
0
ceph osd create
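One hedged workaround for the playbook is to ask the cluster about the UUID first and only create when it is absent (the UUID is the one from the example; this assumes ceph osd dump prints each OSD's uuid, which it does on the releases I've seen):

if ! ceph osd dump | grep -q 5ecc7a8c-388a-11e6-b8ad-5f3ab2552b13; then
    ceph osd create 5ecc7a8c-388a-11e6-b8ad-5f3ab2552b13
fi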
-admin@nodeB:~/.ssh/ceph-cluster$ sudo rbd -p data ls
data_01
Network connectivity is fine as per the ICMP output; is there any other
way I can confirm this?
On Fri, Jun 17, 2016 at 2:59 PM, George Shuklin
<george.shuk...@gmail.com <mailto:george.shuk...@gmail.com>> wrote:
What d
What did
sudo rbd -p data ls
show?
If it freezes too, the issue is with the pool itself (check ceph health) or with
network connectivity.
On 06/17/2016 03:37 PM, Ishmael Tsoaela wrote:
Hi,
Thank you for the response but with sudo all it does is freeze:
rbd map data_01 --pool data
Hello.
I'm trying to debug why an OSD is not coming up.
It stops at:
2016-06-17 12:28:55.174468 7f0e60fd78c0 -1 osd.6 6366 log_to_monitors
{default=true}
2016-06-17 12:28:55.185917 7f0e60fd78c0 0 osd.6 6366 done with init,
starting boot process
If I enable debug (debug osd = 20 debug ms =
Hello.
How are objects handled in RBD? If a user writes 16k to an RBD image
with a 4MB object size, how much is written to the OSDs? 16k x
replication or 4MB x replication (plus journals in both cases)?
Thanks.
I believe this is the source of the issues (the cited line).
Purge all ceph packages from this node and remove the user/group 'ceph',
then retry.
On 06/13/2016 02:46 PM, Fran Barrera wrote:
[ceph-admin][WARNIN] usermod: user ceph is currently used by process 1303
Hello.
I'm playing with CRUSH and ran into an issue with the ceph osd crush move command.
I've added an intermediate 'blabla' bucket type to the map:
type 0 osd
type 1 blabla
type 2 host
I've added a few 'blabla' buckets, and the osd tree now looks like this:
-12 1.0 host ssd-pp11
1 0.25000
Hello.
I'm doing a small experimental setup. I have two hosts with a few OSDs; one
OSD has been put down intentionally, but even though the second (alive)
OSD on a different host is up, I see that all IO (rbd, and even rados get) has hung
for a long time (more than 30 minutes already).
My configuration:
-9
Hello.
Can someone help me understand the difference between step choose and step
chooseleaf in a CRUSH map?
Thanks.
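Not an answer from the thread, but a hedged illustration of the usual distinction, using the stock bucket type names: chooseleaf descends to an OSD under each chosen bucket in one step, while choose selects buckets of exactly the named type, so a second step is needed to reach the OSDs.

# variant A: pick N hosts and one OSD under each of them
step take default
step chooseleaf firstn 0 type host
step emit

# variant B: roughly the same thing spelled out with choose
step take default
step choose firstn 0 type host
step choose firstn 1 type osd
step emit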
Hello.
I'm curious how to get maximum performance without losing significant
space. Is an OSD with its journal on the same SSD a good solution? Or is it
better to use a separate SSD as the journal for a few other SSD-based OSDs?
Thanks.