Hi Ilya,
It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way partprobe
does. It is used intensively by ceph-disk and inevitably leads to races where a
device temporarily disappears. The same command
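The disappearing-node behavior can be illustrated with the kernel's BLKPG partition ioctls: deleting and then re-adding a partition makes its device node vanish until udev re-creates it. A minimal sketch (device path, partition number, offset, and size are placeholders, and this is not claimed to be the exact sgdisk/partprobe code path):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blkpg.h>

int main(void)
{
        struct blkpg_partition part;
        struct blkpg_ioctl_arg arg;
        int fd = open("/dev/vdb", O_RDWR);     /* placeholder device */

        if (fd < 0)
                return 1;

        memset(&part, 0, sizeof(part));
        part.pno = 2;                          /* partition number 2 */

        memset(&arg, 0, sizeof(arg));
        arg.op = BLKPG_DEL_PARTITION;
        arg.datalen = sizeof(part);
        arg.data = &part;
        if (ioctl(fd, BLKPG, &arg) < 0)        /* /dev/vdb2 disappears here */
                perror("BLKPG_DEL_PARTITION");

        part.start = 1048576;                  /* placeholder offset (bytes) */
        part.length = 1073741824;              /* placeholder size (bytes) */
        arg.op = BLKPG_ADD_PARTITION;
        if (ioctl(fd, BLKPG, &arg) < 0)        /* udev re-creates the node */
                perror("BLKPG_ADD_PARTITION");

        close(fd);
        return 0;
}

Any ceph-disk step that opens the partition node between the delete and the completed re-add fails, which is the race window described above.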
On Fri, Dec 18, 2015 at 1:38 PM, Loic Dachary wrote:
> Hi Ilya,
>
> It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
> them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way partprobe
> does. It is used intensively by ceph-disk and
On 18/12/2015 16:31, Ilya Dryomov wrote:
> On Fri, Dec 18, 2015 at 1:38 PM, Loic Dachary wrote:
>> Hi Ilya,
>>
>> It turns out that sgdisk 0.8.6 -i 2 /dev/vdb removes partitions and re-adds
>> them on CentOS 7 with a 3.10.0-229.11.1.el7 kernel, in the same way
>> partprobe
I adjusted the algorithm from the Weighted Round Robin Queue and
resolved the SSD performance issue. Since it is different, I've
renamed it so that it doesn't cause confusion later.
My tests are all showing a performance improvement of 3-17%. The
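For context on the starting point: a weighted round robin queue services several sub-queues in proportion to configured weights, one round at a time. A minimal sketch of that baseline dequeue logic (names and layout are illustrative, not the actual Ceph op queue; weights are assumed to be >= 1):

#include <stddef.h>

struct wrr_class {
        int weight;   /* configured share per round */
        int credit;   /* dequeues left in the current round */
        int queued;   /* ops currently waiting in this class */
};

/* Return the index of the next class to service, or -1 if all are
 * empty.  When every busy class has used up its credit, start a new
 * round by refilling credits from the weights. */
static int wrr_next(struct wrr_class *cls, size_t n)
{
        for (;;) {
                int busy = 0;
                size_t i;

                for (i = 0; i < n; i++) {
                        if (cls[i].queued == 0)
                                continue;
                        busy = 1;
                        if (cls[i].credit > 0) {
                                cls[i].credit--;
                                return (int)i;
                        }
                }
                if (!busy)
                        return -1;
                for (i = 0; i < n; i++)
                        cls[i].credit = cls[i].weight;
        }
}

The renamed variant described above presumably departs from this baseline; the 3-17% numbers refer to that variant, not to this sketch.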
Nevermind, got it:
CHANGES WITH 214:
* As an experimental feature, udev now tries to lock the
disk device node (flock(LOCK_SH|LOCK_NB)) while it
executes events for the disk or any of its partitions.
Applications like partitioning programs can lock the
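The partitioning-tool side of that scheme would look something like this (a minimal sketch; the device path is an example):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/vdb", O_RDWR);   /* example device */

        if (fd < 0)
                return 1;

        /* udev takes LOCK_SH|LOCK_NB while it handles events for the
         * disk, so holding LOCK_EX here keeps event handling at bay
         * until the partition table changes below are complete. */
        if (flock(fd, LOCK_EX) < 0) {
                perror("flock");
                close(fd);
                return 1;
        }

        /* ... modify the partition table ... */

        flock(fd, LOCK_UN);   /* now udev can process the change events */
        close(fd);
        return 0;
}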
Hey cephers,
Before we all head off to various holiday shenanigans and befuddle our
senses with rest, relaxation, and glorious meals of legend, I wanted
to give you something to look forward to for 2016 in the form of Ceph
Tech Talks!
http://ceph.com/ceph-tech-talks/
First on the docket in
I've been working with Sam Just today and we would like to get some
performance data around client I/O and recovery I/O to test the new Op
queue I've been working on. I know that we can just set an OSD out/in
and such, but there seems like there
Use list_for_each_entry_safe() instead of list_for_each_safe() to
simplify the code.
Signed-off-by: Geliang Tang
---
net/ceph/messenger.c | 14 +++++---------
1 file changed, 5 insertions(+), 9 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index
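For reference, the transformation reads roughly as below (drop_queue_* is a hypothetical name, and the ceph_msg list field is assumed to be the list_head member used by the messenger code):

#include <linux/ceph/messenger.h>
#include <linux/list.h>

/* Before: list_for_each_safe() walks raw nodes, so each iteration
 * needs an explicit list_entry() to reach the containing ceph_msg. */
static void drop_queue_before(struct list_head *queue)
{
        struct list_head *pos, *n;
        struct ceph_msg *msg;

        list_for_each_safe(pos, n, queue) {
                msg = list_entry(pos, struct ceph_msg, list_head);
                list_del_init(&msg->list_head);
                ceph_msg_put(msg);
        }
}

/* After: list_for_each_entry_safe() folds the list_entry() into the
 * iterator, dropping the pos/n bookkeeping entirely. */
static void drop_queue_after(struct list_head *queue)
{
        struct ceph_msg *msg, *tmp;

        list_for_each_entry_safe(msg, tmp, queue, list_head) {
                list_del_init(&msg->list_head);
                ceph_msg_put(msg);
        }
}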
Eric,
Do you have iSCSI data digests on?
On 12/15/2015 12:08 AM, Eric Eastman wrote:
> I am testing Linux Target SCSI, LIO, with a Ceph File System backstore
> and I am seeing this error on my LIO gateway. I am using Ceph v9.2.0
> on a 4.4-rc4 kernel, on Trusty, using a kernel-mounted Ceph File
The variable *pagep can still be left pointing at an invalid page when
ceph_update_writeable_page() fails.
To fix this issue, assign the page to *pagep only once
ceph_update_writeable_page() has succeeded.
Signed-off-by: Minfei Huang
---
fs/ceph/addr.c
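In outline, the fix defers the *pagep assignment until the call has succeeded, something like the sketch below (simplified from ceph_write_begin() in fs/ceph/addr.c; not the verbatim diff):

static int write_begin_sketch(struct file *file,
                              struct address_space *mapping,
                              loff_t pos, unsigned len,
                              struct page **pagep)
{
        pgoff_t index = pos >> PAGE_CACHE_SHIFT;
        struct page *page;
        int r;

        do {
                page = grab_cache_page_write_begin(mapping, index, 0);
                if (!page)
                        return -ENOMEM;

                r = ceph_update_writeable_page(file, pos, len, page);
                if (r < 0)
                        page_cache_release(page);
                else
                        *pagep = page;   /* published only on success */
        } while (r == -EAGAIN);

        return r;
}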
> I've been working with Sam Just today and we would like to get some
> performance data around client I/O and recovery I/O to test the new Op
> queue I've been working on. I know that we can just set an OSD out/in
> and such, but there seems like there could be a lot of variation in
> the
Hi Mike,
On the ESXi server, both Header Digest and Data Digest are set to Prohibited.
Eric
On Fri, Dec 18, 2015 at 2:54 PM, Mike Christie wrote:
> Eric,
>
> Do you have iSCSI data digests on?
>
> On 12/15/2015 12:08 AM, Eric Eastman wrote:
>> I am testing Linux Target