Hi,
Can anyone advise how to clean up 1000s of ZFS-level permanent errors,
and the corresponding Lustre-level errors too?
A similar question was presented on the list but I did not see an answer.
https://www.mail-archive.com/lustre-discuss@lists.lustre.org/msg12454.html
As I was testing new hardware I discovered
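A rough sketch of the usual cleanup sequence, assuming the damaged files have
already been restored or deleted (the pool and MDT names below are placeholders):
  zpool status -v tank                 # list the files flagged with permanent errors
  zpool clear tank                     # reset the pool's error counters
  zpool scrub tank                     # the permanent-error list is refreshed once scrubs no longer find them
  lctl lfsck_start -M scratch-MDT0000  # on the Lustre side, LFSCK can then reconcile the namespace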
On 11/07/16 17:02, Faaland, Olaf P. wrote:
Riccardo,
If you are not booting from a zpool, you do not need the "zfs-dracut" package.
This package causes ZFS to be loaded very early in the boot process, most
likely before your /etc/modprobe.d/zfs.conf file is visible to the kernel.
As long as you are not booting from a zpool, remove
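A minimal sketch of acting on that, assuming the parameters only need to apply
after the normal module load (the package name is taken from the rpm list below):
  yum remove zfs-dracut                                  # drop the early-boot dracut integration
  # after a reboot, confirm the modprobe.d options were picked up:
  cat /sys/module/zfs/parameters/zfs_prefetch_disable
  cat /sys/module/zfs/parameters/zfs_txg_history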
On 11/07/16 16:15, Faaland, Olaf P. wrote:
1) What is the output of:
rpm -qa | grep zfs
libzfs2-0.6.5.7-1.el7.centos.x86_64
zfs-dkms-0.6.5.7-1.el7.centos.noarch
lustre-osd-zfs-mount-2.8.0-3.10.0_327.18.2.el7.x86_64.x86_64
zfs-0.6.5.7-1.el7.centos.x86_64
zfs-dracut-0.6.5.7-1.el7.centos.x86_64
1) What is the output of:
rpm -qa | grep zfs
from that system after it boots?
2) How do those values get into the /etc/modprobe.d/zfs.conf file? Are they
there before the node boots, or are you modifying that file somehow during the boot
process?
Olaf P. Faaland
Livermore Computing
Hello,
I am tailoring my system for lustre on ZFS and I am not able to set
these parameters by writing the config file /etc/modprobe.d/zfs.conf with the
following options:
options zfs zfs_prefetch_disable=1
options zfs zfs_txg_history=120
options zfs metaslab_debug_unload=1
when I check the
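Whether the values actually took effect can be checked through sysfs once the
module is loaded; a sketch (the runtime write is not persistent across reboots):
  cat /sys/module/zfs/parameters/zfs_prefetch_disable
  echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable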
All,
Thanks to the repliers who contributed to the solution. Here's a rundown:
First, here's a way to see if you have connectivity via the router
between the client and the MDT, etc., using NIDs (to list NIDs, use lctl list_nids):
lctl ping
If the ping between the client and the MDT works, you have
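For example (the NIDs below are made up; run list_nids on each node to get the real ones):
  lctl list_nids                   # on the client, prints something like 10.1.0.5@tcp
  lctl ping 192.168.100.1@o2ib0    # ping the MDS NID from the client, via the router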
You mentioned that the servers are on the o2ib0 network, but the error messages
indicate that the client is trying to communicate with the MDT on the tcp
network. The file system configuration needs to be updated to use the new
NIDs.
Doug
> On Jul 11, 2016, at 7:34 AM, Jessica Otey
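One hedged sketch of switching the stored NIDs, assuming the targets are stopped
and the MGS is reachable (the device name and NID are placeholders; regenerating
the configuration with tunefs.lustre --writeconf is the other common route):
  lctl replace_nids scratch-MDT0000 192.168.100.1@o2ib0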
Hi,
Could someone please help me understand lustre quotas?
I've created a lustre filesystem called "scratch" using RHEL 7.2 and Lustre
2.8. When I run "lctl get_param qmt.scratch-QMT.dt-0x0.*" on the MDT I see
for my ID the following:
- id: 20977
limits: { hard:
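For comparison, the same limits are usually read and set from a client with lfs
(the UID is the one above; the sizes and mount point are just examples):
  lfs quota -u 20977 /mnt/scratch                            # show usage and limits for this UID
  lfs setquota -u 20977 -b 0 -B 100G -i 0 -I 0 /mnt/scratch  # example: 100G hard block limit, no inode limit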
All,
I am, as before, working on a small test lustre setup (RHEL 6.8, lustre
v. 2.4.3) to prepare for upgrading a 1.8.9 lustre production system to
2.4.3 (first the servers and lnet routers, then at a subsequent time,
the clients). Lustre servers have IB connections, but the clients are 1G
Hi Patrick,
Thanks for the additional input! I'll skip the exciting live upgrade
this time, then.
Regards,
--
Peter Bortas, NSC
On Mon, Jul 11, 2016 at 1:39 AM, Patrick Farrell wrote:
> Because of the issue highlighted by Andreas - a great number of possible
> states when a job
Hi Andreas,
"Backing up" is easy enough. ZFS snapshots are nice and I'll make an
extra dump of the MDS.
The consensus seems to be that online upgrade should work but is
avoided in the field. So I'll skip walking into that minefield this
time.
Thanks!
--
Peter Bortas, NSC
On Mon, Jul 11, 2016
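A minimal sketch of that kind of extra dump, assuming the MDT sits in a ZFS
dataset such as mdtpool/mdt0 (the dataset name and backup path are placeholders):
  zfs snapshot mdtpool/mdt0@pre-upgrade
  zfs send mdtpool/mdt0@pre-upgrade | gzip > /backup/mdt0-pre-upgrade.zfs.gz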