And does the re-import of the PG work? From the logs I assumed that
the snapshot(s) prevented a successful import, but now that they are
deleted it could work.
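(For reference, a PG export and re-import of this kind is usually done with ceph-objectstore-tool while the affected OSDs are stopped; the pgid and osd.29 are taken from this thread, but the target OSD and file name below are placeholders:)

  systemctl stop ceph-osd@29
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29 \
      --pgid 10.7b9 --op export --file /tmp/pg.10.7b9.export
  # on the (also stopped) target OSD:
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-31 \
      --op import --file /tmp/pg.10.7b9.export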
Quoting Karsten Becker:
Hi Eugen,
hmmm, that should be:
rbd -p cpVirtualMachines list | while read LINE; do osdmaptool
--t
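(The quoted command is cut off; a rough sketch of what such a loop might look like, assuming pool id 10 and an osdmap dumped to /tmp/osdmap, with the per-image rbd_id.<name> object used purely as an example object name:)

  ceph osd getmap -o /tmp/osdmap
  rbd -p cpVirtualMachines list | while read LINE; do
      osdmaptool /tmp/osdmap --test-map-object "rbd_id.${LINE}" --pool 10
  done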
Ben, first of all, thanks a lot for such a quick reply! I appreciate the explanation and the info on
things to check!
I am new to all of that, including InfluxDB; that's why I used the wrong influx CLI to check whether
actual data is coming in. But
https://docs.influxdata.com/influxdb/v1.4/query_l
I have set up a little ceph installation and added about 80k files of
various sizes, then I added 1M files of 1 byte each totalling 1 MB, to
see what kind of overhead is incurred per object.
The overhead for adding 1M objects seems to be 12252 MB / 1,000,000 =
0.012252 MB, i.e. roughly 12 kB per file, which is
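(A minimal sketch of how such a test could be reproduced; the CephFS mount path below is an assumption, not from the original post:)

  ceph df                          # note RAW USED before
  mkdir /mnt/cephfs/tiny
  for i in $(seq 1 1000000); do printf x > /mnt/cephfs/tiny/file_$i; done
  ceph df                          # compare RAW USED afterwards to estimate per-file overhead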
On 2/20/2018 11:57 AM, Flemming Frandsen wrote:
> I have set up a little ceph installation and added about 80k files of
> various sizes, then I added 1M files of 1 byte each totalling 1 MB, to
> see what kind of overhead is incurred per object.
> The overhead for adding 1M objects seems to be 12252M
Another space "leak" might be due to BlueStore misbehavior that takes DB
partition(s) space into account when calculating total store size. All of
this space is immediately marked as used, even for an empty store. So
if you have 3 OSDs with a 10 GB DB device each, you unconditionally get 30
GB of used space
On Mon, Feb 19, 2018 at 9:29 PM, nokia ceph wrote:
> Hi Alfredo Deza,
>
> We have a 5-node platform with LVM OSDs created from scratch and another
> 5-node platform migrated from Kraken, which uses ceph-volume simple. Both have
> the same issue. Both platforms have only HDDs for OSDs.
>
> We also noticed 2 ti
Nope:
> Write #10:9df3943b:::rbd_data.e57feb238e1f29.0003c2e1:head#
> snapset 0=[]:{}
> Write #10:9df399dd:::rbd_data.4401c7238e1f29.050d:19#
> Write #10:9df399dd:::rbd_data.4401c7238e1f29.050d:23#
> Write #10:9df399dd:::rbd_data.4401c7238e1f29.050d:head
Hi Caspar,
I checked the filesystem and there isn't any error on the filesystem.
The disk is an SSD and it doesn't have any attribute related to wear level in
smartctl, and the filesystem is mounted with default options and no discard.
My Ceph structure on this node is like this:
it has osd, mon and rgw services
1 SS
I'm not quite sure how to interpret this, but there are different
objects referenced. From the first log output you pasted:
2018-02-19 11:00:23.183695 osd.29 [ERR] repair 10.7b9
10:9defb021:::rbd_data.2313975238e1f29.0002cbb5:head
expected clone
10:9defb021:::rbd_data.2313975238e1
Hi Eugen,
yes, I also see the rbd_data number changing. This may have been caused by me
deleting snapshots and trying to move VMs over to another pool which
is not affected.
Currently I'm trying to move the Finance VM, which is a very old VM
that got created as one of the first VMs and is still ali
Alright, good luck!
The results would be interesting. :-)
Quoting Karsten Becker:
> Hi Eugen,
> yes, I also see the rbd_data number changing. This may have been caused
> by me deleting snapshots and trying to move VMs over to another pool
> which is not affected.
> Currently I'm trying to move the Fin
A fully backwards incompatible release of ceph-deploy was completed in
early January [0] which removed ceph-disk as a backend to create OSDs
in favor of ceph-volume.
The backwards incompatible change means that the API for creating OSDs
has changed [1], and also that it now relies on Ceph versions
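(Roughly, the change in the OSD-creation syntax looks like this; device and host names are placeholders:)

  # old, ceph-disk based syntax:
  ceph-deploy osd create {host}:{data-disk}:{journal-disk}
  # new, ceph-volume based syntax in ceph-deploy 2.0.0:
  ceph-deploy osd create --data /dev/sdb {host}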
Hi,
I have decided to redeploy my test cluster using the latest ceph-deploy and
Luminous.
I cannot get past the ceph-deploy mon create-initial stage due to:
[ceph_deploy.gatherkeys][WARNING] No mon key found in host
Any help will be appreciated
ceph-deploy --version
2.0.0
[cephuser@ceph prodceph]$ ls -
Hi David [resending with a smaller message size],
I tried setting the OSDs down and that does clear the blocked requests
momentarily, but they just return to the same state. Not sure how to
proceed here, but one thought was just to do a full cold restart of the entire
cluster. We have disab
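(The "setting the OSDs down" step presumably refers to something like the following; the OSD id is a placeholder:)

  ceph osd down 12
  ceph health detail | grep -i blocked    # check whether the blocked requests come back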
On 02/19/2018 09:49 PM, Robin H. Johnson wrote:
When I read the bucket instance metadata back again, it still reads
"placement_rule": "" so I wonder if the bucket_info change is really
taking effect.
So it never showed the new placement_rule if you did a get after the
put?
I think not. It's o
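(For reference, the get/edit/put cycle under discussion would look roughly like this; the bucket name and instance id are placeholders:)

  radosgw-admin metadata get bucket.instance:mybucket:{instance-id} > bucket.json
  # edit "placement_rule" in bucket.json, then write it back and re-read it:
  radosgw-admin metadata put bucket.instance:mybucket:{instance-id} < bucket.json
  radosgw-admin metadata get bucket.instance:mybucket:{instance-id} | grep placement_rule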
Dear Cephalopodians,
with the release of ceph-deploy we are thinking about migrating our
Bluestore-OSDs (currently created with ceph-disk via old ceph-deploy)
to be created via ceph-volume (with LVM).
I note two major changes:
1. It seems the block.db partitions have to be created beforehand, m
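(Assuming that is the case, a rough sketch of creating one such OSD; device names and the DB partition size are placeholders:)

  sgdisk --new=0:0:+30G /dev/nvme0n1          # pre-create the block.db partition
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1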
Answering the first RDMA question myself...
On 18.02.2018 at 16:45, Oliver Freyermuth wrote:
> This leaves me with two questions:
> - Is it safe to use RDMA with 12.2.2 already? Reading through this mail
> archive,
> I grasped it may lead to memory exhaustion and in any case needs some hacks
We currently have multiple CephFS fuse clients mounting the same filesystem
from a single monitor even though our cluster has several monitors. I would
like to automate the failover from one monitor to another. Is this
possible, and where should I be looking for guidance on accomplishing this
in p
On Tue, Feb 20, 2018 at 5:56 PM, Oliver Freyermuth wrote:
> Dear Cephalopodians,
>
> with the release of ceph-deploy we are thinking about migrating our
> Bluestore-OSDs (currently created with ceph-disk via old ceph-deploy)
> to be created via ceph-volume (with LVM).
When you say migrating, do
>
> There is also work-in-progress for online
> image migration [1] that will allow you to keep using the image while
> it's being migrated to a new destination image.
Hi Jason,
Is there any recommended method/workaround for live RBD migration in
Luminous? E.g. snapshot/copy or export/import, then
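(One commonly used workaround is an export/import with incremental diffs to shorten the final cutover; a sketch only, where image and pool names are placeholders:)

  rbd snap create srcpool/image@mig1
  rbd export srcpool/image@mig1 - | rbd import - dstpool/image
  rbd snap create dstpool/image@mig1          # matching snapshot so the diff can be applied
  # after stopping the client, copy the remaining delta:
  rbd snap create srcpool/image@mig2
  rbd export-diff --from-snap mig1 srcpool/image@mig2 - | rbd import-diff - dstpool/image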
Why are you mounting with a single monitor? What is your mount command or
/etc/fstab? Ceph-fuse should use the available mons you have on the client's
/etc/ceph/ceph.conf.
e.g. our /etc/fstab entry:
none /home fuse.ceph _netdev,ceph.id=myclusterid,ceph.client_mountpoint=/home,nonem
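(For comparison, a complete entry of that shape, following the mount.fuse.ceph man page linked later in this thread; the id and mountpoint are examples:)

  none  /home  fuse.ceph  ceph.id=myclusterid,ceph.client_mountpoint=/home,_netdev,defaults  0 0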
Many thanks for your replies!
On 21.02.2018 at 02:20, Alfredo Deza wrote:
> On Tue, Feb 20, 2018 at 5:56 PM, Oliver Freyermuth wrote:
>> Dear Cephalopodians,
>>
>> with the release of ceph-deploy we are thinking about migrating our
>> Bluestore-OSDs (currently created with ceph-disk via old
Hi,
We were recently testing Luminous with BlueStore. We have a 6-node cluster
with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the OSDs
and attached an SSD WAL (LVM) to each. We created twelve individual 10 GB LVs on a
single SSD, one for each WAL, so all of the OSD WALs are on the single SSD. P
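(A sketch of that layout; the VG/LV and device names are assumptions:)

  vgcreate vg-wal /dev/nvme0n1                        # the shared WAL SSD
  for i in $(seq 1 12); do lvcreate -L 10G -n wal-$i vg-wal; done
  ceph-volume lvm create --bluestore --data /dev/sdb --block.wal vg-wal/wal-1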
Thanks for the hint Linh. I had neglected to read up on mount.fuse.ceph
here: http://docs.ceph.com/docs/master/man/8/mount.fuse.ceph/
I am trying this right now.
Thanks again.
Paul Kunicki
Systems Manager
SproutLoud Media Networks, LLC.
954-47
Hi Alfredo Deza,
I understand the distinction between lvm and simple; however, we still see the issue. Was it
an issue in Luminous, given that we use the same Ceph config and workload from the client?
The graphs I attached in the previous mail are from the ceph-volume lvm OSDs.
In this case, does it occupy 2 times the space only inside Ceph? If
Hi,
We were recently testing Luminous with BlueStore. We have a 6-node cluster
with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the OSDs
and attached an SSD WAL (LVM) to each. We created twelve individual 10 GB LVs on a
single SSD, one for each WAL, so all of the OSD WALs are on the single SSD. P
Hi,
We were recently testing Luminous with BlueStore. We have a 6-node cluster
with 12 HDDs and 1 SSD each; we used ceph-volume with LVM to create all the OSDs
and attached an SSD WAL (LVM) to each. We created twelve individual 10 GB LVs on a
single SSD, one for each WAL, so all of the OSD WALs are on the single SSD.
You're welcome :)
From: Paul Kunicki
Sent: Wednesday, 21 February 2018 1:16:32 PM
To: Linh Vu
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Automated Failover of CephFS Clients
Thanks for the hint Linh. I had neglected to read up on mount.fuse.ceph here
Yeah that is the expected behaviour.
From: ceph-users on behalf of Balakumar Munusawmy
Sent: Wednesday, 21 February 2018 1:41:36 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Luminous: Help with Bluestore WAL
Hi,
We were recently testing luminous
I just wanted to add that even if you only provide one monitor IP, the
client will learn about the other monitors on mount, so failover will still
work. This only presents a problem when you try to remount or reboot a
client while the monitor it's using is unavailable.
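(Listing all monitors in the client's ceph.conf avoids depending on a single mon being up at mount time; the addresses below are placeholders:)

  [global]
  mon_host = 192.168.1.11,192.168.1.12,192.168.1.13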
Hi,
we currently have the problem that our Ceph cluster has 5 PGs that are
stale+peering.
The cluster is a 16-OSD, 4-host cluster.
How can I tell Ceph that these PGs are not there anymore?
ceph pg 1.20a mark_unfound_lost delete
Error ENOENT: i don't have pgid 1.20a
ceph pg 1.20a mark_unfoun
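(Commands commonly used to investigate and, as a last resort, recreate such PGs; 1.20a is the pgid from the message above, and force-create-pg discards whatever data was in the PG. It may require a confirmation flag on newer releases:)

  ceph pg dump_stuck stale
  ceph pg 1.20a query
  ceph osd force-create-pg 1.20a        # recreates the PG as empty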