Dear Lustre experts,
I have some questions about migrating OSTs to a ZFS backend.
We have a 200 TB Lustre filesystem with ten OSTs running Lustre 2.4.1.
Due to hardware problems in some OST disks, we are planning to replace
these disks and configure the new ones (server + JBOD) with a ZFS backend.
in the filesystem. It will be used at some point in the future.
The L2ARC usage is transparent to the filesystem, so it works with Lustre today.
Cheers, Andreas
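For reference, formatting a new ZFS-backed OST typically uses mkfs.lustre with --backfstype=zfs. A minimal sketch, assuming hypothetical pool, device, fsname, index, and MGS NID (none of these are taken from the thread):

```shell
# Sketch only: create a ZFS pool on the new JBOD disks and format it
# as a Lustre OST. Pool/device names, fsname, index and MGS NID are
# assumptions for illustration.
zpool create ostpool raidz2 sdb sdc sdd sde sdf sdg
mkfs.lustre --ost --backfstype=zfs \
    --fsname=lustre --index=10 \
    --mgsnode=mgs@tcp0 ostpool/ost10
mount -t lustre ostpool/ost10 /mnt/ost10
```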
On Sep 7, 2015, at 06:14, Fernando Perez <fpe...@icm.csic.es> wrote:
Hi Yasir.
You must add the Lustre packages to the distribution following the Rocks
manual:
http://central6.rocksclusters.org/roll-documentation/base/6.2/customization-adding-packages.html
Finally, you must add the Lustre network and configure the compute nodes'
network interfaces that you will use.
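Configuring the Lustre network on the compute nodes usually comes down to an LNet module option; a minimal sketch, assuming a TCP network on eth0 (network type and interface name are assumptions):

```shell
# Sketch: tell LNet which interface to use on each compute node
# (tcp0 and eth0 are assumptions; adjust to your fabric).
echo 'options lnet networks=tcp0(eth0)' > /etc/modprobe.d/lustre.conf
modprobe lustre
lctl list_nids    # verify the node's NID is what you expect
```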
Hi all.
We have a small 230 TB Lustre system. We are using Lustre 2.4.1 with ZFS
0.6.2 installed on the OSSs. The Lustre architecture is the following:
1 MDS + 1 MDT in the same server + 3 OSSs with 15 ldiskfs OSTs (external
disks, some with fibre controllers and SAS disks and others Coraid
Please, forget my previous message.
I have just found that the problem is due to another reason.
Regards.
=
Fernando Pérez
Institut de Ciències del Mar (CMIMA-CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta,37-49
Dear lustre experts.
I have detected a serious interoperability problem between Lustre 2.8 on
the server side and Lustre 2.9 clients:
We have Lustre 2.8 on the MGS/MDS and OSSs, and Lustre 2.8 clients.
Yesterday I updated a couple of clients to Lustre 2.9 and found that
these clients cannot read a lot of
Dear all,
We need to upgrade our Lustre filesystem: Lustre 2.8 with CentOS 6
servers, ZFS 0.6.4.2, ldiskfs and ZFS OSTs, and an ldiskfs combined MDT/MDS.
I know that we must reinstall all the servers and upgrade them to
CentOS 7, but what Lustre release do you recommend to perform the
upgrade?
Dear Riccardo.
Have you tried upgrading the e2fsprogs packages before performing the e2fsck?
Regards.
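The suggested upgrade-then-check sequence could look like the sketch below, assuming the Lustre-patched e2fsprogs is available from your configured repository and a hypothetical MDT device name:

```shell
# Sketch: refresh e2fsprogs, then check read-only before fixing
# (repository setup and device name are assumptions).
yum install -y e2fsprogs e2fsprogs-libs
e2fsck -fn /dev/mapper/mdtvol   # dry run: report problems, change nothing
e2fsck -fp /dev/mapper/mdtvol   # then preen/fix on the unmounted target
```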
Dear all.
We have a Lustre system with a combined MDS + MDT on ldiskfs and several
OSTs with ZFS and ldiskfs.
I have these errors in our MDS server log:
LDISKFS-fs warning (device dm-4): ldiskfs_dx_add_entry:2618: inode
30544460: comm mdt01_047: index 2: reach max htree level 2
Oct 4 09:32:24
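The "reach max htree level 2" warning generally means a single directory has outgrown the default two-level htree index. On releases that support it, the ldiskfs large_dir feature raises the limit to three levels; a hedged sketch, assuming hypothetical device and mount-point names and a sufficiently recent e2fsprogs/ldiskfs:

```shell
# Sketch: enable large_dir on the unmounted MDT
# (device and mount point are assumptions).
umount /mnt/mdt
tune2fs -O large_dir /dev/mapper/mdtvol
mount -t lustre /dev/mapper/mdtvol /mnt/mdt
```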
On 10/04/2018 11:16 AM, Fernando Perez wrote:
On 4 Oct 2018, at 12:28, Fernando Perez wrote:
Sorry, we have the Lustre 2.10.4 release installed on our Lustre filesystem,
previously migrated from 2.4 to 2.8 and then to 2.10.4.
Regards.
Dear lustre experts,
We have a Lustre filesystem, 2.10.7 release, with a combined ldiskfs MDT /
MDS and OSTs with ZFS and ldiskfs backends.
The filesystem had problems with the quotas (corrupted data), and we have
performed a tune2fs -O ^quota / tune2fs -O quota on each ldiskfs target
(OSTs and
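The quota-file rebuild described above can be sketched as follows for one ldiskfs target (device and mount-point names are assumptions; the target must be unmounted first):

```shell
# Sketch: rebuild ldiskfs quota files on one target (names assumed).
umount /mnt/ost0
tune2fs -O ^quota /dev/sdb   # drop the quota feature and its files
tune2fs -O quota /dev/sdb    # re-enable; quota files are recomputed
mount -t lustre /dev/sdb /mnt/ost0
```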
Thank you Andreas.
I will try to migrate the MGS according to my previous idea, based on the
Lustre Operations Manual section on separating a combined MDT/MGS.
I agree that a dd backup of the current combined MDT/MGS is mandatory
before trying to perform the migration.
Regards.
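The dd backup mentioned above could be taken as in the sketch below, assuming hypothetical device and backup paths (the MDT must be unmounted first):

```shell
# Sketch: raw image backup of the combined MDT/MGS (names assumed).
umount /mnt/mdt
dd if=/dev/mapper/mdtvol of=/backup/mdt.img \
    bs=4M conv=sync,noerror status=progress
```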
On 4/4/19 6:36 PM, Fernando Perez wrote:
On 16 Apr 2019, at 15:34, Mohr Jr, Richard Frank (Rick Mohr)
wrote:
On Apr 15, 2019, at 10:54 AM, Fernando Perez wrote:
Could anyone confirm
On 6/26/19 2:06 PM, Fernando Perez wrote:
Sorry, the command that I ran to start the LFSCK is the following:
lctl lfsck_start -M lustre-MDT -t namespace
When I try to run the stop command:
lctl lfsck_stop -M lustre-MDT
The output is:
Fail to stop LFSCK
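Before forcing a stop, it can help to check what the scanner is actually doing; a hedged sketch, assuming the full target name is lustre-MDT0000 (an assumption, since the thread shows only "lustre-MDT"):

```shell
# Sketch: inspect LFSCK state on the MDT (target name is an assumption).
lctl lfsck_query -M lustre-MDT0000
lctl get_param -n mdd.lustre-MDT0000.lfsck_namespace
```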
Dear lustre users,
I tried to run an LFSCK namespace scan on our Lustre filesystem (Lustre
2.10.7), and the LFSCK never ends and I can't stop it using the lctl
lfsck_stop command.
I tried to kill the LFSCK process, stop Lustre, and reboot the metadata
server, and when I mount the combined MDT + MGS
On 6/26/19 1:58 PM, Fernando Perez wrote:
Dear lustre experts,
What is the equivalent tool to ldiskfs e2fsck for ZFS in Lustre?
I have a combined MDT + MDS with corrupted quotas, and the e2fsck utility
runs very slowly.
What is the best strategy to repair it? Run e2fsck on the ldiskfs volume
and then migrate to ZFS, or migrate to ZFS
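For context on the question above: ZFS has no offline fsck; device-level integrity is checked with a pool scrub, while Lustre-level consistency is handled by LFSCK. A minimal sketch, assuming a hypothetical pool name:

```shell
# Sketch: check a ZFS-backed target (pool name is an assumption).
zpool scrub mdtpool        # verifies all checksums, repairs from redundancy
zpool status -v mdtpool    # shows scrub progress and any errors found
```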