Sorry for replying late; I answered inline.
On 10/21/18 6:00 AM, Andreas Dilger wrote:
> It would be useful to post information like this on wiki.lustre.org so they can
> be found more easily by others. There are already some ZFS tunings there (I
> don't have the URL handy, just on a plane), so it might be useful to include
> some information about the hardware and workload to give
On 10/19/18 12:37 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
> On Oct 17, 2018, at 7:30 PM, Riccardo Veraldi
> wrote:
>
> Anyway, especially regarding the OSSes, you may possibly need to tune some ZFS
> module parameters, increasing the vdev read and write max values above their
> defaults. You may also disable the ZIL, change the
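For context, the "vdev read and write max" knobs referred to above are presumably the standard OpenZFS per-vdev I/O queue limits (`zfs_vdev_*_max_active`). A sketch of how one might raise them at runtime; the value 16 is purely illustrative, since the post gives no numbers:

```shell
# Illustrative only: raise the OpenZFS vdev I/O queue depths at runtime.
# These are the standard OpenZFS module parameters; 16 is a placeholder,
# not a recommendation from this thread -- tune for your own hardware.
for p in zfs_vdev_async_read_max_active zfs_vdev_async_write_max_active \
         zfs_vdev_sync_read_max_active zfs_vdev_sync_write_max_active; do
    echo 16 > /sys/module/zfs/parameters/$p
done

# To persist across reboots, put the same settings in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_vdev_async_write_max_active=16

# "Disabling the ZIL" is usually done per dataset via the sync property
# (note: this risks losing recent writes on power failure):
#   zfs set sync=disabled <pool>/<ost-dataset>
```

Note that `sync=disabled` trades crash safety for write latency, so it is only appropriate where the application can tolerate losing the last few seconds of writes.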
On 10/17/18 1:20 PM, Kurt Strosahl wrote:
Good Afternoon,
I believe 2.10.* is the long-term support release.
I am happy with 2.10.5 on my standard-performance cluster, but for a
very high-performance cluster I built 5 months ago, 6 GB/s per OSS
was required for both read and write transfers
To: "lustre-discuss@lists.lustre.org"
Subject: [lustre-discuss] lustre 2.10.5 or 2.11.0
Good Afternoon,
I'm in the early planning stages of a lustre upgrade. We are going to be
moving from 2.5 to either 2.10 or 2.11, possibly by standing up a new lustre
file system alongside the existing one and migrating the data over. I'm
wondering if anyone has had specific experiences