Hi,
did some testing with multithreaded access and dd, and performance scales as it
should.
Any ideas to improve single-threaded read performance further would be
highly appreciated. Some of our use cases require reading large files
with a single thread.
I have tried changing the
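For what it's worth, a minimal single-threaded sequential read test against an
RBD-backed block device could look like this (the device path /dev/rbd0 and the
sizes are only examples):

# drop the page cache so the read actually hits Ceph
echo 3 > /proc/sys/vm/drop_caches
# single-threaded sequential read, 4 GB in 4 MB blocks, bypassing the cache
dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct

Running several such dd processes against different offsets should show the
multithreaded scaling mentioned above.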
Hi Michiel,
How are you configuring VM disks on Proxmox? What type (virtio, scsi,
ide) and what cache setting?
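For context, on Proxmox the disk type and cache mode end up in the VM config
under /etc/pve/qemu-server/<vmid>.conf, in a line roughly like the following
(storage name and size are made up):

virtio0: ceph_rbd:vm-101-disk-1,cache=writeback,size=32G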
On 23/11/16 at 07:53, M. Piscaer wrote:
Hi,
I have a little performance problem with KVM and Ceph.
I'm using Proxmox 4.3-10/7230e60f, with KVM version
I am afraid the most probable cause is context switching time related
to your guest (or guests).
On Wed, Nov 23, 2016 at 9:53 AM, M. Piscaer wrote:
> Hi,
>
> I have a little performance problem with KVM and Ceph.
>
> I'm using Proxmox 4.3-10/7230e60f, with KVM version
>
Hello,
The story goes like this.
I have added another 3 drives to the caching layer. OSDs were added to the
crush map one by one after each successful rebalance. When I added the
last OSD and came back about an hour later, I noticed it still had not
finished rebalancing. Further investigation showed
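For reference, adding one OSD to the crush map and waiting for the rebalance
looks roughly like this (OSD id, weight and host name are only examples):

ceph osd crush add osd.12 1.0 host=node1
ceph -w    # watch recovery until the cluster is healthy again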
Thanks Jason, very clear explanation.
However, I found some strange behavior when running export-diff on a cloned image;
I'm not sure whether it is a bug in calc_snap_set_diff().
The test is,
Image A is cloned from a parent image, then snap1 is created for image A.
The content of export-diff A@snap1 will be changed when
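In command form, the test described above is roughly (pool, image and snapshot
names are placeholders):

rbd clone rbd/parent@base rbd/imageA
rbd snap create rbd/imageA@snap1
rbd export-diff rbd/imageA@snap1 imageA_snap1.diff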
Hi there,
According to the official man page:
http://docs.ceph.com/docs/jewel/man/8/rbd/
export-diff [--from-snap snap-name] [--whole-object] (image-spec | snap-spec) dest-path
Exports an incremental diff for an image to dest path (use - for stdout).
If an initial snapshot is specified, only changes
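Going by that description, a typical incremental invocation would be something
like this (image and snapshot names are only examples):

rbd export-diff --from-snap snap1 rbd/image@snap2 image_snap1_to_snap2.diff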
Thank you!
Quoting Nick Fisk:
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eugen Block
> Sent: 22 November 2016 10:11
> To: Nick Fisk
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] deep-scrubbing has large impact on performance
>
>
Hi list,
I've been searching the mail archive and the web for some help. I
tried the things I found, but I can't see any effect. We use Ceph for
our OpenStack environment.
When our cluster (2 pools, each with 4092 PGs, on 20 OSDs across 4 nodes, 3
MONs) starts deep-scrubbing, it's impossible to
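For reference, the suggestions one typically finds for this are scrub throttling
options along these lines; the values below are purely illustrative, not a
recommendation:

[osd]
osd max scrubs = 1
osd scrub sleep = 0.1
osd scrub begin hour = 22
osd scrub end hour = 6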
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Eugen Block
> Sent: 22 November 2016 09:55
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] deep-scrubbing has large impact on performance
>
> Hi list,
>
> I've been searching the
Hello,
we have a JEWEL cluster upgraded from FIREFLY. The cluster is encrypted
with dmcrypt.
Yesterday, I added some new OSDs, the first time since the upgrade. I
looked for the new keys in order to back them up and saw that the creation
of new OSDs with the dmcrypt option has changed.
To be able to
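For reference, a dmcrypt OSD is typically prepared with something like the
following (device path is an example):

ceph-disk prepare --dmcrypt /dev/sdX
# as far as I understand, Jewel keeps the dmcrypt keys in the monitor
# config-key store rather than under /etc/ceph/dmcrypt-keys; they can be
# listed with:
ceph config-key list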
On Tue, Nov 22, 2016 at 5:31 AM, Zhongyan Gu wrote:
> So if the initial snapshot is NOT specified, then:
> rbd export-diff image@snap1 will diff all data up to snap1. This command is
> equivalent to:
> rbd export image@snap1. Is my understanding right or not?
While they will both export all
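For concreteness, the two commands being compared are (names are examples):

rbd export-diff image@snap1 image_snap1.diff   # diff format, everything up to snap1
rbd export image@snap1 image_snap1.img         # plain full image export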
Thanks Alan and Anthony for sharing your experience with these P3700 drives.
Anthony, just to follow up on your email: my OS is CentOS 7.2. Can
you please elaborate on NVMe on CentOS 7.2? I'm in no way an expert on
NVMe, but I can see here that
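For what it's worth, a quick way to confirm how the P3700 shows up under
CentOS 7.2 (assuming the kernel names it nvme0) is:

lspci | grep -i "non-volatile"   # should list the NVMe controller
ls /dev/nvme*                    # the namespace usually appears as /dev/nvme0n1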
You wrote P3700 so that’s what I discussed ;)
If you want to connect to your HBA you’ll want a SATA device like the S3710
series:
http://ark.intel.com/products/family/83425/Data-Center-SSDs#@Server
The P3700 is a PCIe device; it goes into an empty slot and is not speed-limited by
the SATA
Hey Jagan,
I'm happy to hear you are interested in contributing to Ceph. I would
suggest taking a look at the tracker (http://tracker.ceph.com/) for
bugs and projects you might be interested in tackling. All code and
associated repositories are available on github
(https://github.com/ceph/).
If
Thank you very much for this info.
On 11/21/16 12:33 PM, Eric Eastman wrote:
Have you looked at your file layout?
On a test cluster running 10.2.3 I created a 5GB file and then looked
at the layout:
# ls -l test.dat
-rw-r--r-- 1 root root 524288 Nov 20 23:09 test.dat
# getfattr -n
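The attribute being queried there is presumably the CephFS layout xattr; the
full command, and how one would change the layout on a new empty file, looks
roughly like this:

# getfattr -n ceph.file.layout test.dat
# setfattr -n ceph.file.layout.stripe_count -v 4 newfile.dat   # only on empty files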
If you use wpq, I recommend also setting "osd_op_queue_cut_off = high";
otherwise replication ops are not weighted, which really reduces the
benefit of wpq.
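In ceph.conf form that would be something like:

[osd]
osd op queue = wpq
osd op queue cut off = high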
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Nov 22, 2016 at 5:34 AM,
Hi,
As part of a migration between hardware I have been building new OSDs and
cleaning up old ones (osd rm osd.x, osd crush rm osd.x, auth del osd.x). To
try to prevent rebalancing from kicking in until all the new OSDs are created on a
host, I use "ceph osd set noin"; however, what I have seen
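For concreteness, the sequence in question is roughly (osd.x is a placeholder):

ceph osd set noin        # keep newly created OSDs from being marked in
ceph osd rm osd.x
ceph osd crush rm osd.x
ceph auth del osd.x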
Also, feel free to ask development related questions in #ceph-devel
channel on oftc
On Wed, Nov 23, 2016 at 2:30 AM, Patrick McGarry wrote:
> Hey Jagan,
>
> I'm happy to hear you are interested in contributing to Ceph. I would
> suggest taking a look at the tracker
I've figured out what the main reason is.
When the swift client authenticates through a keystone user like 'admin',
keystone returns an X-Auth-Token header.
After that, the swift client sends requests with the X-Auth-Token to radosgw, but
radosgw returns 'AccessDenied'.
Some people say radosgw doesn't support
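For reference, the radosgw/keystone integration is driven by ceph.conf options
along these lines (section name and values below are placeholders):

[client.radosgw.gateway]
rgw keystone url = http://keystone-host:35357
rgw keystone admin token = <token>
rgw keystone accepted roles = admin, Member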
Thanks for the very quick answer!
If you are using Jewel
We are still using Hammer (0.94.7); we wanted to upgrade to Jewel in a
couple of weeks. Would you recommend doing it now?
Quoting Nick Fisk:
-Original Message-
From: ceph-users