> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 24 October 2016 10:33
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Yan, Zheng <uker...@gmail.com>; Gregory Farnum <gfar...@redhat.com>;
> Zheng Yan <z...@redhat.com>; Ceph
> -Original Message-
> From: Yan, Zheng [mailto:uker...@gmail.com]
> Sent: 24 October 2016 10:19
> To: Gregory Farnum <gfar...@redhat.com>
> Cc: Nick Fisk <n...@fisk.me.uk>; Zheng Yan <z...@redhat.com>; Ceph Users
> <ceph-users@lists.ceph.com>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Christian Balzer
> Sent: 24 October 2016 02:30
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] New cephfs cluster performance issues- Jewel -
> cache pressure, capability
From: Robert Sanders [mailto:rlsand...@gmail.com]
Sent: 23 October 2016 16:32
To: n...@fisk.me.uk
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] cache tiering deprecated in RHCS 2.0
On Oct 23, 2016, at 4:32 AM, Nick Fisk <n...@fisk.me.uk
<mailto:n..
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Robert Sanders
> Sent: 22 October 2016 03:44
> To: ceph-us...@ceph.com
> Subject: [ceph-users] Three tier cache
>
> Hello,
>
> Is it possible to create a three level cache tier? Searching
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Zoltan Arnold Nagy
> Sent: 22 October 2016 15:13
> To: ceph-users
> Subject: [ceph-users] cache tiering deprecated in RHCS 2.0
>
> Hi,
>
> The 2.0 release notes
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Haomai Wang
> Sent: 21 October 2016 15:40
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph and TCP States
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Haomai Wang
> Sent: 21 October 2016 15:28
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Ceph and TCP States
>
Hi,
I'm just testing out using a Ceph client in a DMZ, separated by a FW from the main
Ceph cluster. One thing I have noticed is that if the
state table on the FW is emptied, maybe by restarting it or just clearing the
state table etc., then the Ceph client will hang for a
long time as the TCP session
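A hedged sketch of one knob to look at here, assuming the hang tracks the
messenger's default 900-second read timeout (the value below is illustrative,
not a recommendation):

  [global]
  # default is 900 s; a session silently dropped by the firewall can leave
  # the client blocked this long before it gives up and reconnects
  ms tcp read timeout = 60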
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> William Josefsson
> Sent: 20 October 2016 10:25
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] RBD with SSD journa
> [ /proc/cpuinfo excerpt: per-core "cpu MHz" readings, ranging from roughly 2545 to 3091 MHz ]
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> William Josefsson
> Sent: 17 October 2016 09:31
> To: Christian Balzer
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] RBD with SSD journals and SAS OSDs
>
> Thx
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Denny Fuchs
> Sent: 05 October 2016 12:43
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 6 Node cluster with 24 SSD per node:
> Hardwareplanning/ agreement
>
> hi,
>
> I get a
g / agreement
>
> Hi,
>
> thanks for taking a look :-)
>
> On 04.10.2016 16:11, Nick Fisk wrote:
>
> >> We have two goals:
> >>
> >> * High availability
> >> * Short latency for our transaction services
> >
> > How Low? See
Hi, Comments inline
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Denny Fuchs
> Sent: 04 October 2016 14:43
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] 6 Node cluster with 24 SSD per node: Hardware planning
> / agreement
>
Hi Sascha,
Good article. You might want to add a small section about these two variables:
osd_agent_max_high_ops
osd_agent_max_ops
They control how many concurrent flushes happen at the high/low thresholds, i.e.
you can set the low one to 1 to minimise the impact
on client IO.
Also the
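As a minimal ceph.conf sketch of those two settings (values illustrative, with
the low/normal one wound down to 1 as suggested above):

  [osd]
  # concurrent flushes while between the low and high dirty thresholds
  osd agent max ops = 1
  # concurrent flushes once the high threshold is breached
  osd agent max high ops = 4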
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 30 September 2016 14:16
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] production cluster down :(
>
> Hi,
>
> we have:
>
> ceph version 10.2.2
>
>
Hi Gerald,
I would say it’s definitely possible. I would make sure you invest in the
networking so that you have enough bandwidth, and choose disks based on
performance rather than capacity. Either lots of lower-capacity disks or SSDs
would be best. The biggest challenge may be around
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of min
fang
Sent: 29 September 2016 10:34
To: ceph-users
Subject: [ceph-users] ceph write performance issue
Hi, I created a 40-OSD Ceph cluster with 8 PM863 960G SSDs as journals. One SSD is
used
is not enough to worry the current limit.
Original message
From: Nick Fisk <n...@fisk.me.uk <mailto:n...@fisk.me.uk> >
Date: 23/09/2016 7:26 PM (GMT+10:00)
To: Adrian Sa
mmmok.
>
> and, how would the affected PG recover? Just by replacing the affected OSD/DISK,
> or would the affected PG migrate to other OSDs/disks?
Yes, Ceph would start recovering the PGs to other OSDs. But IO will be blocked
until the PGs have min_size active replicas again.
>
> thx
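For anyone following along, a sketch of inspecting (and, at your own risk,
temporarily relaxing) the settings involved; 'rbd' stands in for the pool name:

  ceph osd pool get rbd size
  ceph osd pool get rbd min_size
  # temporarily allow IO with a single replica while recovery runs;
  # risky, since a further failure leaves no surviving copy
  ceph osd pool set rbd min_size 1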
much concurrent work. They have inherited a setting targeted for
> > SSDs, so I have wound that back to defaults on those machines to see if it
> > makes a difference.
> >
> > But I suspect going by the disk activity there is a lot of very small
> > FS metadata update
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ja.
> C.A.
> Sent: 23 September 2016 09:50
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] rbd pool:replica size choose: 2 vs 3
>
> Hi
>
> with rep_size=2 and min_size=2, what
Hi Adrian,
I have also hit this recently and have since increased the osd_snap_trim_sleep
to try and stop this from happening again. However, I
haven't had an opportunity to actually try and break it again yet, but your
mail seems to suggest it might not be the silver bullet
I was looking for.
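For anyone wanting to try the same, a sketch of raising the throttle (the 0.1
value is purely illustrative; on Jewel, injectargs may warn that the change
needs a restart to fully apply):

  # ceph.conf, on the OSD hosts
  [osd]
  osd snap trim sleep = 0.1

  # or injected at runtime
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'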
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matteo
Dacrema
Sent: 19 September 2016 15:24
To: ceph-users@lists.ceph.com
Subject: [ceph-users] capacity planning - iops
Hi All,
I’m trying to estimate how many iops ( 4k direct random write ) my ceph
> -Original Message-
> From: Dan van der Ster [mailto:d...@vanderster.com]
> Sent: 19 September 2016 12:11
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD Snapshots and osd_snap_trim_sleep
>
Hi,
Does the osd_snap_trim_sleep throttle affect the deletion of RBD snapshots?
I've done some searching but am seeing conflicting results on whether it only
affects RADOS pool snapshots.
I've just deleted a snapshot comprising somewhere around 150k objects
and it brought the
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jim
> Kilborn
> Sent: 14 September 2016 20:30
> To: Reed Dier
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Replacing a failed OSD
>
> Reed,
>
>
>
>
> -Original Message-
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 11 September 2016 03:17
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>;
> ceph-users <ceph-users@lists.cep
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: 11 September 2016 16:14
To: Nick Fisk <n...@fisk.me.uk>
Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>; ceph-users
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph
Thanks for the hint, I will update my code.
> -Original Message-
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: 06 September 2016 14:44
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users]
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John
> Spray
> Sent: 06 September 2016 13:44
> To: Wido den Hollander
> Cc: ceph-users
> Subject: Re: [ceph-users] Single Threaded performance for Ceph MDS
function, do I need to call this
periodically to check that the watch is still active?
Thanks,
Nick
> -Original Message-
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: 24 August 2016 15:54
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: 04 September 2016 04:45
To: Nick Fisk <n...@fisk.me.uk>
Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>; ceph-users
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph
Have you disabled the vaai functions in ESXi? I can't remember off the top of
my head, but one of them makes everything slow to a crawl.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 02 September 2016 09:50
> To:
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido
> den Hollander
> Sent: 01 September 2016 08:19
> To: Reed Dier
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Slow Request on OSD
>
>
> > Op 31
NVMe journals) and then when stuff like RDMA comes along, you will be in a
better place to take advantage of it.
Kind Regards!
On 31.08.16 at 09:51, Nick Fisk wrote:
From: w...@globe.de <mailto:w...@globe.de> [mailto:w...@globe.de]
Sent: 30 August 2016 18:40
To: n...@fisk.me.uk <m
nd questions...
On 30.08.16 at 19:05, Nick Fisk wrote:
From: w...@globe.de <mailto:w...@globe.de> [mailto:w...@globe.de]
Sent: 30 August 2016 08:48
To: n...@fisk.me.uk <mailto:n...@fisk.me.uk> ; 'Alex Gorbachev'
<mailto:a...@iss-integration.com> <a...@iss-integration.
Well done Alex, I know the challenges you have worked through to attain this.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex
> Gorbachev
> Sent: 26 August 2016 15:53
> To: scst-de...@lists.sourceforge.net; ceph-users
> -Original Message-
> From: Jason Dillaman [mailto:jdill...@redhat.com]
> Sent: 23 August 2016 13:23
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD Watch Notify for snapshots
>
> Looks g
on Ceph RBD's
>
>
> > On 23 August 2016 at 22:24, Nick Fisk <n...@fisk.me.uk> wrote:
> >
> >
> >
> >
> > > -Original Message-
> > > From: Wido den Hollander [mailto:w...@42on.com]
> > > Sent: 23 August 2016 19:45
>
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: 23 August 2016 19:45
> To: Ilya Dryomov <idryo...@gmail.com>; Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] udev
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex
> Gorbachev
> Sent: 23 August 2016 16:43
> To: Wido den Hollander <w...@42on.com>
> Cc: ceph-users <ceph-users@lists.ceph.com>; Nick Fisk <n...@fisk.
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: 22 August 2016 20:30
To: Nick Fisk <n...@fisk.me.uk>
Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>; ceph-users
<ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph + VMware + S
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: 22 August 2016 03:00
> To: 'ceph-users' <ceph-users@lists.ceph.com>
> Cc: Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
>
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: 22 August 2016 18:22
> To: ceph-users <ceph-users@lists.ceph.com>; n...@fisk.me.uk
> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's
>
>
> > Op 22 augustus 2
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 22 August 2016 15:00
> To: Jason Dillaman <dilla...@redhat.com>
> Cc: Nick Fisk <n...@fisk.me.uk>; ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD Wa
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 22 August 2016 15:16
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's
>
> O
> -Original Message-
> From: Ilya Dryomov [mailto:idryo...@gmail.com]
> Sent: 22 August 2016 14:53
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Jason Dillaman <dilla...@redhat.com>; ceph-users
> <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD
Hope it's useful to someone
https://gist.github.com/fiskn/6c135ab218d35e8b53ec0148fca47bf6
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nick
> Fisk
> Sent: 08 July 2016 09:58
> To: dilla...@redhat.com
> Cc: 'ceph-users' <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] RBD Watch Notify for snapshots
>
best it’s
been for a long time and I’m reluctant to fiddle any further.
But as mentioned above, thick vmdk’s with vaai might be a really good fit.
Thanks for your very valuable info on analysis and hw build.
Alex
On 21.08.2016 at 09:31, Nick Fisk <n...@fisk.me.uk
ce of that thing. The 16 port version is nearly
double the price of what I paid for the 400GB NVME and that’s without adding on
the 8GB ram and BBU. Maybe it's more suited for a full SSD cluster rather than
spinning disks?
>
> Best Regards !!
>
>
>
> On 21.08.2016 at 09:31
> -Original Message-
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 21 August 2016 04:15
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: w...@globe.de; Horace Ng <hor...@hkisl.net>; ceph-users
> <ceph-users@lists.ceph.com>
> Subject:
few bytes of the payload. No idea what they are, but
skipping 4 bytes takes you straight to the start of the
text part that you send with notify.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nick
> Fisk
> Sent: 17 Augus
e slightly less performance.
>
> Cheers
> Nick
>
> On Thursday, August 18, 2016 01:37:46 PM Nick Fisk wrote:
> > > -Original Message-
> > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On
> > > Behalf Of nick Sent: 18 August 2016 12:39
PG's. There is a point in the data path of a PG that
is effectively single threaded.
If you want to improve sequential reads you want to use buffered IO and use a
large read ahead (>16M).
>
> Cheers
> Nick
>
> On Thursday, August 18, 2016 10:23:34 AM Nick Fisk
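On the readahead point above: a sketch of making it persistent with a udev
rule, in the spirit of the gist linked elsewhere in this digest (the device
match and the 16 MB value are illustrative):

  # /etc/udev/rules.d/99-rbd-readahead.rules
  KERNEL=="rbd*", ACTION=="add|change", ATTR{queue/read_ahead_kb}="16384"

  # one-off equivalent for an already-mapped device:
  echo 16384 > /sys/block/rbd0/queue/read_ahead_kb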
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> w...@42on.com
> Sent: 18 August 2016 09:35
> To: nick
> Cc: ceph-users
> Subject: Re: [ceph-users] Ceph all NVME Cluster sequential read speed
>
>
Hi All,
I'm writing a small piece of code to call fsfreeze/unfreeze that can be invoked
by a RADOS notify. I have the basic watch/notify
functionality working but I need to be able to determine if the notify message
is to freeze or unfreeze, or maybe something
completely unrelated.
I'm looking
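A minimal C sketch of such a watcher, tying together the watch/notify pieces
discussed in this digest, including the observed 4-byte offset before the
notify text. The object name fsfreeze.ctl, pool rbd, mount point, and the
freeze/unfreeze strings are all illustrative assumptions; error handling is
omitted:

  /* freezewatch.c - sketch only; build with: gcc freezewatch.c -lrados */
  #include <rados/librados.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  static rados_ioctx_t io;

  /* invoked for every notify sent to the watched object */
  static void watch_cb(void *arg, uint64_t notify_id, uint64_t cookie,
                       uint64_t notifier_id, void *data, size_t data_len)
  {
      /* skip the 4 undocumented header bytes observed before the text */
      const char *msg = (const char *)data + 4;
      size_t len = data_len > 4 ? data_len - 4 : 0;

      if (len >= 8 && strncmp(msg, "unfreeze", 8) == 0)
          (void)system("fsfreeze -u /mnt/rbd");  /* mount point is an example */
      else if (len >= 6 && strncmp(msg, "freeze", 6) == 0)
          (void)system("fsfreeze -f /mnt/rbd");
      /* anything else is some unrelated notify - ignore it */

      /* always ack, or the notifier blocks until it times out */
      rados_notify_ack(io, "fsfreeze.ctl", notify_id, cookie, NULL, 0);
  }

  static void watch_errcb(void *arg, uint64_t cookie, int err)
  {
      fprintf(stderr, "watch lost (%d), needs re-registering\n", err);
  }

  int main(void)
  {
      rados_t cluster;
      uint64_t cookie;

      rados_create(&cluster, NULL);
      rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
      rados_connect(cluster);
      rados_ioctx_create(cluster, "rbd", &io);

      rados_write_full(io, "fsfreeze.ctl", "", 0);  /* object must exist */
      rados_watch2(io, "fsfreeze.ctl", &cookie, watch_cb, watch_errcb, NULL);

      for (;;)
          sleep(60);  /* rados_watch_check(io, cookie) can verify liveness */
      return 0;
  }

Sending a notify might then look like: rados -p rbd notify fsfreeze.ctl freeze.
Polling rados_watch_check() periodically is one way to confirm the watch is
still active, which also touches on the earlier question in this digest.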
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
?
Sent: 15 August 2016 13:19
To: ceph-users
Subject: [ceph-users] Red Hat Ceph Storage
Hello, dear community.
There were a few questions as we learn Ceph.
-How Do you think,
I’m not sure how stable that ceph-dokan is; I would imagine the best way to
present CephFS to Windows users would be through Samba.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
?
Sent: 12 August 2016 07:53
To: ceph-users
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 26 July 2016 04:30
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] cephfs failed to rdlock, waiting
>
> Hi Greg,
>
> i switched the cache tier to
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 15:13
To: n...@fisk.me.uk
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 22/07/2016 14:10, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com
On 22/07/2016 11:48, Nick Fisk wrote:
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 10:40
To: n...@fisk.me.uk <mailto:n...@fisk.me.uk> ; 'Jake Young'
<mailto:jak3...@gmail.com> <jak3...@gmail.com>; 'Jan Schermer'
<mailto:j...@s
From: Frédéric Nass [mailto:frederic.n...@univ-lorraine.fr]
Sent: 22 July 2016 10:40
To: n...@fisk.me.uk; 'Jake Young' <jak3...@gmail.com>; 'Jan Schermer'
<j...@schermer.cz>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 22/07/2016 10:23, Nick
> -Original Message-
> From: Martin Millnert [mailto:mar...@millnert.se]
> Sent: 22 July 2016 10:32
> To: n...@fisk.me.uk; 'Ceph Users' <ceph-users@lists.ceph.com>
> Subject: Re: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase
>
> On Fri, 2016-0
On 22/07/2016 09:47, Nick Fisk wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Frédéric Nass
Sent: 22 July 2016 08:11
To: Jake Young <mailto:jak3...@gmail.com> <jak3...@gmail.com>; Jan Schermer
<mailto:j...@schermer.cz> <
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Martin Millnert
> Sent: 22 July 2016 00:33
> To: Ceph Users
> Subject: [ceph-users] Infernalis -> Jewel, 10x+ RBD latency increase
>
> Hi,
>
> I just upgraded
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Frédéric Nass
Sent: 22 July 2016 08:11
To: Jake Young ; Jan Schermer
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph + vmware
On 20/07/2016 21:20, Jake Young
: RBD from Ceph Cluster
What do you mean? I think this setup should improve the performance
dramatically, or not?
If I enable writeback on these nodes and use tgt for VMware, what happens if
iSCSI node 1 goes offline? Power loss... or a Linux kernel crash.
On 21.07.16 at 15:57, Nick Fisk wrote
ally improve performance.
On 21.07.16 at 14:33, Nick Fisk wrote:
-Original Message-
From: w...@globe.de [mailto:w...@globe.de]
Sent: 21 July 2016 13:23
To: n...@fisk.m
t: Re: [ceph-users] Ceph + VMware + Single Thread Performance
>
> Is there not a way to enable the Linux page cache, i.e. not use D_Sync...?
>
> Then performance would improve dramatically.
>
>
> On 21.07.16 at 14:33, Nick Fisk wrote:
> >> -Original Messa
ID cache enabled.
Nick,
What NFS server are you using?
The kernel one. Seems to be working really well so far after I got past the XFS
fragmentation issues; I had to set an extent size hint of 16MB at the root.
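For reference, a sketch of setting such a hint (the path is illustrative;
files created under the directory inherit it):

  xfs_io -c "extsize 16m" /srv/nfs
  xfs_io -c extsize /srv/nfs    # verify the hint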
Jake
On Thursday, July 21, 2016, Nick Fisk <n...@fisk.me.uk <
storage arrays that service IOs in the 100us-1ms
range; Ceph is probably about 10x slower than this, hence the problem. Disable
the BBWC on a RAID controller or SAN and you will see the same behaviour.
>
> Regards
>
>
> On 21.07.16 at 14:17, Nick Fisk wrote:
> >> -Orig
Sebastien Han's tests give us 400 MByte/s raw performance from the
> P3700.
>
> https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
>
> How could it be that the rbd client performance is 50% slower?
>
> Regards
>
>
> A
I've had a lot of pain with this; smaller block sizes are even worse. You want
to try and minimize latency at every point, as there
is no buffering happening in the iSCSI stack. This means:
1. Fast journals (NVMe or NVRAM)
2. 10GB or better networking
3. Fast CPUs (GHz)
4. Fix CPU c-states to
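On point 4, a sketch of pinning c-states via kernel boot parameters; treat the
values as a starting point to test on your own hardware, since deeper sleep
states save power:

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="... intel_idle.max_cstate=1 processor.max_cstate=1"
  # then regenerate the grub config (update-grub / grub2-mkconfig) and reboot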
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> m13913886...@yahoo.com
> Sent: 20 July 2016 02:09
> To: Christian Balzer ; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] how to use cache tiering with proxy in
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
m13913886...@yahoo.com
Sent: 19 July 2016 07:44
To: Oliver Dzombic ; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] how to use cache tiering with proxy in ceph-10.2.2
I have configured
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 12 July 2016 20:59
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph + vmware
>
> Hi Jack,
>
> thank you!
>
> What has reliability to do with
I've seen something similar when using RBD caching: I found that if you
can fill the RBD cache faster than it can flush, you
get these stalls. I increased the size of the cache and also the flush
threshold, and this solved the problem. I didn't spend much
time looking into it, but it seemed
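For reference, a hedged sketch of the client-side options involved; the
defaults are roughly a 32 MB cache with a 24 MB dirty limit, and the numbers
below are illustrative rather than the figures used above:

  [client]
  rbd cache = true
  rbd cache size = 134217728          # 128 MB
  rbd cache max dirty = 100663296     # 96 MB; writes stall when dirty data exceeds this
  rbd cache target dirty = 67108864   # 64 MB; background flushing starts here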
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Daniel Gryniewicz
> Sent: 11 July 2016 13:38
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] OSPF to the host
>
> On 07/11/2016 08:23 AM, Saverio Proto wrote:
> >> I'm looking at
splitting or merging will happen? Is it enough that a
directory is read, eg. through scrub? If possible I would like to initiate the
process
Regards
Paul
On Sun, Jul 10, 2016 at 10:47 AM, Nick Fisk <n...@fisk.me.uk
<mailto:n...@fisk.me.uk> > wrote:
You need to se
You need to set the option in ceph.conf and restart the OSD, I think. But it
will only take effect on future splits or merges; it won't adjust
the current folder layout.
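The options in question, as a ceph.conf sketch (values illustrative; the split
point is roughly 16 * abs(merge threshold) * split multiple objects per
subdirectory):

  [osd]
  filestore merge threshold = 40
  filestore split multiple = 8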
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Paul
See my post from a few days ago. If you value your sanity and free time, use
NFS. Otherwise SCST is probably your best bet at the moment, or maybe try out
the SUSE implementation.
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jan
>
notify APIs). A daemon could register a watch on a custom
> per-host/image/etc object which would sync the disk when a notification is
> received. Prior to creating a snapshot, you would need to
> send a notification to this object to alert the daemon to sync/fsfreeze/etc.
>
> On Thu,
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Zoltan Arnold Nagy
> Sent: 08 July 2016 08:51
> To: Christian Balzer
> Cc: ceph-users ; n...@fisk.me.uk
> Subject: Re: [ceph-users] multiple
Hi All,
I have a RBD mounted to a machine via the kernel client and I wish to be able
to take a snapshot and mount it to another machine
where it can be backed up.
The big issue is that I need to make sure that the process writing on the
source machine is finished and the FS is sync'd before
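One common approach to the consistency part, as a sketch with illustrative
pool/image/mount names (snapshot mappings come up read-only; norecovery is the
XFS mount option for skipping log replay):

  # on the source machine
  fsfreeze -f /mnt/data
  rbd snap create rbd/myimage@backup1
  fsfreeze -u /mnt/data

  # on the backup machine
  rbd map rbd/myimage@backup1
  mount -o ro,norecovery /dev/rbd0 /mnt/backup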
yring
> > osd max backfills = 1
> > osd recovery max active = 1
> > osd recovery op priority = 1
> > osd client op priority = 63
> > osd disk thread ioprio class = idle
> > osd disk thread ioprio priority = 7
> > [osd.1]
> >
m.
>
> Are you doing mkfs.xfs on SSD? If so, please check SSD data sheets whether
> UNMAP is supported. To avoid unmap during mkfs, use
> mkfs.xfs -K
Thanks for your reply
The RBD's are on normal spinners (+SSD Journals)
>
> Regards,
> Anand
>
> On Thu, Jul 7, 2016 a
Hi Christian,
> -Original Message-
> From: Christian Balzer [mailto:ch...@gol.com]
> Sent: 07 July 2016 12:57
> To: ceph-users@lists.ceph.com
> Cc: Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-users] multiple journals on SSD
>
>
> Hello Nick,
>
Hi All,
Does anybody else see a massive (i.e. 10x) performance impact when either
deleting an RBD or running something like mkfs.xfs against an
existing RBD, which would zero/discard all blocks?
In the case of deleting a 4TB RBD, I'm seeing latency in some cases rise up to
10s.
It looks
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Matyas Koszik
> Sent: 07 July 2016 11:26
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] layer3 network
>
>
>
> Hi,
>
> My setup uses a layer3 network, where each node has two
Just to add: if you really want to go with lots of HDDs per journal, then go
NVMe. They are not a lot more expensive than the equivalent SATA-based
3700s, but the latency is low, low, low. Here is an example of a node I have
just commissioned with 12 HDDs to one P3700
Device: rrqm/s
> -Original Message-
> From: Alex Gorbachev [mailto:a...@iss-integration.com]
> Sent: 04 July 2016 22:00
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: Oliver Dzombic <i...@ip-interactive.de>; ceph-users us...@lists.ceph.com>; mq <maoqi1...@126.com>; Christia
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Alex Gorbachev
> Sent: 04 July 2016 20:50
> To: Campbell Steven
> Cc: ceph-users ; Tim Bishop li...@bishnet.net>
> Subject: Re: [ceph-users] Is
> On 2016-07-01T19:11:34, Nick Fisk <n...@fisk.me.uk> wrote:
>
> > To summarise,
> >
> > LIO is just not working very well at the moment because of the ABORT
> > Tasks problem, this will hopefully be fixed at some point. I'm not
> > sure if SUSE works ar
>
>
> > On 4 July 2016 at 9:25, Nick Fisk <n...@fisk.me.uk> wrote:
> >
> >
> > Hi All,
> >
> > Quick question. I'm currently in the process of getting ready to
> > deploy a 2nd cluster, which at some point in the next 12 months, I
> > wi
Hi All,
Quick question. I'm currently in the process of getting ready to deploy a
2nd cluster, which at some point in the next 12 months, I will want to
enable RBD mirroring between the new and existing clusters. I'm leaning
towards deploying this new cluster with IPv6, because Wido says so ;-)
> -Original Message-
> From: mq [mailto:maoqi1...@126.com]
> Sent: 04 July 2016 08:13
> To: Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> Hi Nick
> I have tested NFS: since NFS