On Thu, Jun 29, 2017 at 11:58 AM, Mazzystr wrote:
> just one MON
Try just replacing that MON then?
>
> On Wed, Jun 28, 2017 at 8:05 PM, Brad Hubbard wrote:
>>
>> On Wed, Jun 28, 2017 at 10:18 PM, Mazzystr wrote:
>> > The corruption
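In case it helps, a rough sketch of what "replacing that MON" might look like. All
names and paths here are placeholders, and this assumes the remaining monitors can
still form a quorum while the affected one (here "mon.a") is rebuilt:
systemctl stop ceph-mon@a
ceph mon remove a
mv /var/lib/ceph/mon/ceph-a /var/lib/ceph/mon/ceph-a.bad
ceph mon getmap -o /tmp/monmap
ceph auth get mon. -o /tmp/mon.keyring
ceph-mon -i a --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
systemctl start ceph-mon@a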
just one MON
On Wed, Jun 28, 2017 at 8:05 PM, Brad Hubbard wrote:
> On Wed, Jun 28, 2017 at 10:18 PM, Mazzystr wrote:
> > The corruption is back in mons logs...
> >
> > 2017-06-28 08:16:53.078495 7f1a0b9da700 1 leveldb: Compaction error:
> >
... additionally, the forthcoming 4.12 kernel release will support
non-cooperative exclusive locking. By default, since 4.9, when the
exclusive-lock feature is enabled, only a single client can write to the
block device at a time -- but they will cooperatively pass the lock back
and forth upon
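For reference, a sketch of how that might look once 4.12 lands. The map option name
here is my assumption, so check "rbd help map" against your kernel and userspace:
# hypothetical: request exclusive (non-cooperative) locking at map time
rbd map -o exclusive rbd/myimage
# default cooperative behaviour (4.9+): map normally and the lock is handed
# back and forth between writers as needed
rbd map rbd/myimage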
Perhaps just one cluster has low latency and the other has excessively
high latency? You can use "rbd bench-write" to verify.
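For example (a rough sketch; pool/image names are placeholders and the option set is
from memory, so check "rbd help bench-write" on your version):
# run the same benchmark against both clusters and compare the reported latencies
rbd bench-write rbd/testimage --io-size 4096 --io-threads 16 --io-total 256M --io-pattern rand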
On Wed, Jun 28, 2017 at 8:04 PM, Murali Balcha wrote:
> We will give it a try. I have another cluster of similar configuration and
> the
Given that your time difference is roughly 10x, best guess is that
qemu-img is sending the IO operations synchronously (queue depth = 1),
whereas, by default, "rbd import" will send up to 10 write requests in
parallel to the backing OSDs. This assumes that you have really high
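To illustrate the rbd side of that (a sketch, not a tuning recommendation; to my
knowledge the parallelism is governed by rbd_concurrent_management_ops, but verify
that against your version):
# default: up to ~10 writes in flight
rbd import local.img rbd/destimage
# same import with the concurrency raised via the generic config-override syntax
rbd import local.img rbd/destimage --rbd-concurrent-management-ops 20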
On Wed, Jun 28, 2017 at 10:18 PM, Mazzystr wrote:
> The corruption is back in mons logs...
>
> 2017-06-28 08:16:53.078495 7f1a0b9da700 1 leveldb: Compaction error:
> Corruption: bad entry in block
> 2017-06-28 08:16:53.078499 7f1a0b9da700 1 leveldb: Waiting after background
thank you!
On Wed, Jun 28, 2017 at 11:48 AM, Mykola Golub wrote:
> On Tue, Jun 27, 2017 at 07:17:22PM -0400, Daniel K wrote:
>
> > rbd-nbd isn't good as it stops at 16 block devices (/dev/nbd0-15)
>
> modprobe nbd nbds_max=1024
>
> Or, if nbd module is loaded by rbd-nbd,
Need some help resolving performance issues on my ceph cluster. We are
running into acute performance issues when using qemu-img convert. However,
the rbd import operation works perfectly well. Please ignore the image format
for a minute. I am trying to understand why rbd import performs
Hi People,
I am testing a new environment with ceph + rbd on Ubuntu 16.04, and I have
one question.
I have my ceph cluster, and I mount it using these ceph commands in my Linux
environment:
rbd create veeamrepo --size 20480
rbd --image veeamrepo info
modprobe rbd
rbd map veeamrepo
rbd
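If the question is about what usually comes after the map step, a minimal sketch
(the /dev/rbd0 device name is an assumption; check the output of "rbd map"):
mkfs.xfs /dev/rbd0
mkdir -p /mnt/veeamrepo
mount /dev/rbd0 /mnt/veeamrepo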
On 06/27/2017 07:08 PM, Jason Dillaman wrote:
> Have you tried blktrace to determine if there are differences in the
> IO patterns to the rbd-backed virtio-scsi block device (direct vs
> indirect through loop)?
I tried today with the kernel tracing features, and I'll give blktrace a
go if
Thanks to everyone for the helpful feedback. I appreciate the responsiveness.
-Jon
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matt
Benjamin
Sent: Wednesday, June 28, 2017 4:20 PM
To: Gregory Farnum
Cc:
Hi,
That's true, sure. We hope to support async mounts and more normal workflows
in future, but those are important caveats. Editing objects in place doesn't
work with RGW NFS.
Matt
- Original Message -
> From: "Gregory Farnum"
> To: "Matt Benjamin"
On Wed, Jun 28, 2017 at 2:10 PM Matt Benjamin wrote:
> Hi,
>
> A supported way to access S3 objects from a filesystem mount is with RGW
> NFS. That is, RGW now exports the S3 namespace directly as files and
> directories, one consumer is an nfs-ganesha NFS driver.
>
This
On Wed, Jun 28, 2017 at 6:33 AM wrote:
> > I don't think either. I don't think there is another way than just
> 'hacky' changing the MONMaps. There have been talks of being able to make
> Ceph dual-stack, but I don't think there is any code in the source right
>
Hi,
A supported way to access S3 objects from a filesystem mount is with RGW NFS.
That is, RGW now exports the S3 namespace directly as files and directories,
one consumer is an nfs-ganesha NFS driver.
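As a minimal sketch of the client side, assuming an nfs-ganesha instance exporting
the RGW namespace at / on a host named ganesha.example.com (RGW NFS is normally
used over NFSv4.1):
mount -t nfs -o nfsvers=4.1,sync ganesha.example.com:/ /mnt/rgw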
Regards,
Matt
- Original Message -
> From: "David Turner"
Please do! :)
On Wed, Jun 28, 2017 at 1:51 PM Prashant Murthy
wrote:
> Thanks Greg. I thought as much :(
>
> We are looking into what stats are available and also for ways by which we
> can obtain stats from krbd, given this is going to be our current
> deployment in the
On Wed, Jun 28, 2017 at 9:17 AM Peter Maloney <
peter.malo...@brockmann-consult.de> wrote:
> On 06/28/17 16:52, keynes_...@wistron.com wrote:
>
> We were using HP Helion 2.1.5 ( OpenStack + Ceph )
>
> The OpenStack version is *Kilo* and Ceph version is *firefly*
>
>
>
> The way we back up VMs is
Thanks Greg. I thought as much :(
We are looking into what stats are available and also for ways by which we
can obtain stats from krbd, given this is going to be our current
deployment in the near term. If this is useful to the community, we can
share our findings and a proposal, once we have
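As a starting point, a sketch of where we'd probably look first (paths are
assumptions and the debugfs layout can vary by kernel):
# block-layer counters for a mapped krbd device
cat /sys/block/rbd0/stat
# in-flight OSD requests for kernel clients, if debugfs is mounted
cat /sys/kernel/debug/ceph/*/osdc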
Yeah, CephFS and RGW can't cross-communicate at all. Out of the original
choices, "this will never happen".
Somebody who was very, very dedicated could set something up. But it would
basically be the same as running s3fs in the Ceph servers instead of on the
clients (or probably the other way
CephFS is very different from RGW. You may be able to utilize s3fs-fuse to
interface with RGW, but I haven't heard of anyone using that on the ML
before.
On Wed, Jun 28, 2017 at 2:57 PM Lefman, Jonathan
wrote:
> Thanks for the prompt reply. I was hoping that there
Thanks for the prompt reply. I was hoping that there would be an s3fs
(https://github.com/s3fs-fuse/s3fs-fuse) equivalent for Ceph since there are
numerous functional similarities. Ideally one would be able to upload data to a
bucket and have the file synced to the local filesystem mount of
On Thu, Jun 22, 2017 at 11:27 AM Prashant Murthy
wrote:
> Hi Ceph users,
>
> We are currently using the Ceph kernel client module (krbd) in our
> deployment and we were looking to determine if there are ways by which we
> can obtain perf counters, log dumps, etc from such
CephFS and RGW store data differently. I have never heard of CephFS and RGW
sharing the same data pool, nor do I believe it's possible.
On Wed, Jun 28, 2017 at 2:48 PM Lefman, Jonathan
wrote:
> Yes, sorry. I meant the RadosGW. I still do not know what
Yes, sorry. I meant the RadosGW. I still do not know what the mechanism is to
enable the mapping between data inserted by the rados component and the cephfs
component. I hope that makes sense.
-Jon
From: David Turner [mailto:drakonst...@gmail.com]
Sent: Wednesday, June 28, 2017 2:46 PM
To:
You want to access the same data via a rados API and via cephfs? Are you
thinking RadosGW?
On Wed, Jun 28, 2017 at 1:54 PM Lefman, Jonathan
wrote:
> Hi all,
>
>
>
> I would like to create a 1-to-1 mapping between rados and cephfs. Here's
> the usage scenario:
>
>
>
>
Hi all,
I would like to create a 1-to-1 mapping between rados and cephfs. Here's the
usage scenario:
1. Upload file via rest api through rados compatible APIs
2. Run "local" operations on the file delivered via rados on the linked cephfs
mount
3. Retrieve/download file via rados API on newly
On Wed, Jun 28, 2017 at 8:13 AM, Martin Emrich
wrote:
> Correction: It’s about the Version expiration, not the versioning itself.
>
> We send this rule:
>
>
>
> Rules: [
>
> {
>
> Status: 'Enabled',
>
> Prefix: '',
>
>
On Tue, Jun 27, 2017 at 07:17:22PM -0400, Daniel K wrote:
> rbd-nbd isn't good as it stops at 16 block devices (/dev/nbd0-15)
modprobe nbd nbds_max=1024
Or, if nbd module is loaded by rbd-nbd, use --nbds_max command line
option.
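As a concrete sketch (pool/image names are placeholders; option placement is from
memory, so check rbd-nbd --help):
# raise the limit when loading the module yourself...
modprobe nbd nbds_max=1024
# ...or let rbd-nbd load it and pass the limit itself
rbd-nbd --nbds_max 1024 map rbd/myimage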
--
Mykola Golub
David Turner wrote:
: A couple things. You didn't `ceph osd crush remove osd.21` after doing the
: other bits. Also you will want to remove the bucket (re: host) from the
: crush map as it will now be empty. Right now you have a host in the crush
: map with a weight, but no osds to put that
On 06/28/17 16:52, keynes_...@wistron.com wrote:
>
> We were using HP Helion 2.1.5 ( OpenStack + Ceph )
>
> The OpenStack version is *Kilo* and Ceph version is *firefly*
>
>
>
> The way we back up VMs is to create a snapshot with Ceph commands (rbd
> snapshot) and then download it (rbd export).
>
>
>
>
Correction: It's about the Version expiration, not the versioning itself.
We send this rule:
Rules: [
{
Status: 'Enabled',
Prefix: '',
NoncurrentVersionExpiration: {
NoncurrentDays: 60
},
Expiration: {
On 17-05-15 14:49, John Spray wrote:
On Mon, May 15, 2017 at 1:36 PM, Henrik Korkuc wrote:
On 17-05-15 13:40, John Spray wrote:
On Mon, May 15, 2017 at 10:40 AM, Ranjan Ghosh wrote:
Hi all,
When I run "ceph daemon mds. session ls" I always get a fairly
large
I would stop the service, down, out, rm, auth del, crush remove, disable
service, fstab, umount.
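Spelled out as commands, that ordering would look roughly like this (OSD id, unit
names and mount point are placeholders):
OSD_ID=21
systemctl stop ceph-osd@${OSD_ID}.service
ceph osd down ${OSD_ID}
ceph osd out ${OSD_ID}
ceph osd rm ${OSD_ID}
ceph auth del osd.${OSD_ID}
ceph osd crush remove osd.${OSD_ID}
systemctl disable ceph-osd@${OSD_ID}.service
# remove the OSD's fstab entry, then unmount its data directory
umount /var/lib/ceph/osd/ceph-${OSD_ID}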
So you did remove it from your crush map, then? Could you post your `ceph
osd tree`?
On Wed, Jun 28, 2017, 10:12 AM Mazzystr wrote:
> I've been using this procedure to remove
Hi!
Is the Object Gateway S3 API supposed to be compatible with Amazon S3 regarding
versioning?
Object Versioning is listed as supported in Ceph 12.1, but using the standard
Node.js aws-sdk module (s3.putBucketVersioning()) results in "NotImplemented".
Thanks
Martin
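One way to narrow it down would be to issue the same call outside the SDK, e.g.
with the aws CLI pointed straight at the gateway (endpoint and bucket names are
placeholders), and see whether the gateway itself answers NotImplemented:
aws --endpoint-url http://rgw.example.com:7480 s3api put-bucket-versioning --bucket mybucket --versioning-configuration Status=Enabled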
I've been using this procedure to remove OSDs...
OSD_ID=
ceph auth del osd.${OSD_ID}
ceph osd down ${OSD_ID}
ceph osd out ${OSD_ID}
ceph osd rm ${OSD_ID}
ceph osd crush remove osd.${OSD_ID}
systemctl disable ceph-osd@${OSD_ID}.service
systemctl stop ceph-osd@${OSD_ID}.service
sed -i
A couple things. You didn't `ceph osd crush remove osd.21` after doing the
other bits. Also you will want to remove the bucket (re: host) from the
crush map as it will now be empty. Right now you have a host in the crush
map with a weight, but no osds to put that data on. It has a weight
> I don't think either. I don't think there is another way than just 'hacky'
> changing the MONMaps. There have been talks of being able to make Ceph
> dual-stack, but I don't think there is any code in the source right now.
Yeah, that's what I'd like to know. What do the Ceph team think of
The corruption is back in mons logs...
2017-06-28 08:16:53.078495 7f1a0b9da700 1 leveldb: Compaction error:
Corruption: bad entry in block
2017-06-28 08:16:53.078499 7f1a0b9da700 1 leveldb: Waiting after
background compaction error: Corruption: bad entry in block
On Tue, Jun 27, 2017 at 10:42
On 28/06/2017 13:42, Wido den Hollander wrote:
> Honestly I think there aren't that many IPv6 deployments with Ceph out there.
> I for sure am a big fan and deployer of Ceph+IPv6, but I don't know many
> around me.
I got that!
Because IPv6 is so much better than IPv4 :dance:
> Wido
>
>> Best
> On 28 June 2017 at 11:24, george.vasilaka...@stfc.ac.uk wrote:
>
>
> Hey Wido,
>
> Thanks for your suggestion. It sounds like the process might be feasible but
> I'd be looking for an "official" thing to do to a production cluster.
> Something that's documented on ceph.com/docs, tested and
Hi,
On 06/28/2017 01:19 PM, Jake Grimmett wrote:
Dear All,
Sorry if this has been covered before, but is it possible to configure
cephfs to report free space based on what is available in the main
storage tier?
My "df" shows 76%; this gives a false sense of security when the EC
tier is 93%
Dear All,
Sorry if this has been covered before, but is it possible to configure
cephfs to report free space based on what is available in the main
storage tier?
My "df" shows 76%; this gives a false sense of security when the EC
tier is 93% full...
i.e. # df -h /ceph
Filesystem Size
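Not an answer to the df question itself, but as a sketch of where the real per-pool
headroom shows up in the meantime:
# per-pool usage and MAX AVAIL, which reflect each pool's own rule/tier
ceph df detail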
On Wed, Jun 28, 2017 at 6:02 PM, 한승진 wrote:
> Hello Cephers!
>
> I am testing CEPH over RDMA now.
>
> I cloned the latest source code of ceph.
>
> I added below configs in ceph.conf
>
> ms_type = async+rdma
> ms_cluster_type = async+rdma
> ms_async_rdma_device_name = mlx4_0
>
Hello Cephers!
I am testing CEPH over RDMA now.
I cloned the latest source code of ceph.
I added below configs in ceph.conf
ms_type = async+rdma
ms_cluster_type = async+rdma
ms_async_rdma_device_name = mlx4_0
However, I got the same error message when I start the ceph-mon and ceph-osd
services.
The
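One quick sanity check is whether the name given in ms_async_rdma_device_name
matches what the verbs stack reports; these tools ship with libibverbs/rdma-core:
# list RDMA devices visible to the verbs library (the name, e.g. mlx4_0, must match ceph.conf)
ibv_devices
ibv_devinfo -d mlx4_0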
Hey Wido,
Thanks for your suggestion. It sounds like the process might be feasible but
I'd be looking for an "official" thing to do to a production cluster. Something
that's documented on ceph.com/docs, tested, and "endorsed", if you will, by the
Ceph team.
We could try this on a pre-prod
I don't think you can do that, it would require running a mixed cluster which,
going by the docs, doesn't seem to be supported.
From: Jake Young [jak3...@gmail.com]
Sent: 27 June 2017 22:42
To: Wido den Hollander; ceph-users@lists.ceph.com; Vasilakakos, George
Hello,
TL;DR: what to do when my cluster reports stuck unclean pgs?
Detailed description:
One of the nodes in my cluster died. CEPH correctly rebalanced itself,
and reached the HEALTH_OK state. I have looked at the failed server,
and decided to take it out of the cluster permanently,
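As a sketch, the first commands I'd reach for in that state, just to see which PGs
are stuck and why (the pg id in the last line is a placeholder):
ceph health detail
ceph pg dump_stuck unclean
ceph pg 1.2f query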
Ceph users,
Got a Hammer cluster installed on old debian wheezy (7.11) boxes (I know :)
root@node4:~# dpkg -l | grep -i ceph
ii  ceph           0.94.9-1~bpo70+1   amd64   distributed storage and file system
ii  ceph-common