Hi Jessie,
Regarding your seeing 370 objects with errors from ‘zpool status’, but
having over 400 files with “access issues”: I would suggest running a ‘zpool
scrub’ to identify all of the ZFS objects in the pool that are reporting permanent
errors.
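For example, something along these lines (a minimal sketch; ‘tank’ below is just a
placeholder for your actual pool name):

# zpool scrub tank        # walk every block in the pool and verify checksums
# zpool status -v tank    # after the scrub completes, list the permanent errors found

The scrub can take a while on a large pool, and the error list from ‘zpool status -v’
is only complete once the scrub has finished.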
It would be very important to have a
Hi Jessie,
For clarification, it sounds like you are using hardware-based RAID-6 rather than
ZFS RAID (raidz). Is this correct? Or was the faulty card simply an HBA?
At the bottom of the ‘zpool status -v pool_name’ output, you may see paths
and/or ZFS object IDs for the damaged/impacted files. This would
Greetings,
We run a couple of 2.8 file systems with 2.5.3 clients. The only issue we have
"seen" is that DNE phase 2 (directory striping) simply does not work. I believe a
2.8 client is required for that feature.
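In case it helps anyone testing the directory striping mentioned above, creating and
checking a striped directory looks roughly like the following (a sketch; the path is a
placeholder, and it needs to be run from a 2.8 client):

# lfs setdirstripe -c 2 /mnt/lustre/striped_dir    # create a directory striped over 2 MDTs
# lfs getdirstripe /mnt/lustre/striped_dir         # show the directory stripe layout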
-Tom
> On Sep 29, 2016, at 12:03, Steve Barnet wrote:
>
>
Andreas
--
Andreas Dilger
Lustre Principal Architect
Intel High Performance Data Division
On 2016/09/15, 15:23, "Crowe, Tom" <thcr...@iu.edu>
wrote:
Hi Jinshan,
The examples in the first part of the thread are from one of our OSTs. We had
all previous files
, and it has a spill block attached.
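If it is useful, zdb can dump the dnode details for a specific object, including whether
a spill block is attached (a rough sketch; the dataset name and object number below are
placeholders, not values from this thread):

# zdb -dddd ostpool/ost0 274123    # dump dnode and attribute detail for object 274123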
Jinshan
On Sep 9, 2016, at 1:34 PM, Crowe, Tom <thcr...@iu.edu>
wrote:
Greetings All,
I have come across a strange scenario using zfs 0.6.4.2-1 and Lustre 2.8.0.
In a nutshell, when we delete items from lustre using rm, the files/dirs are
seemingly removed, but the space is not freed on the underlying zfs
dataset/zpool. We have unmounted the OST/dataset, exported
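A couple of commands that may help show where the "missing" space is being accounted
on the OST dataset (a sketch; ‘ostpool/ost0’ is a placeholder for the actual pool/dataset):

# zfs list -o space ostpool/ost0    # split usage into snapshots, dataset, reservations, children
# zfs list -t snapshot -r ostpool   # check whether snapshots are pinning the freed blocks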
nl.gov>> wrote:
Reposting to the correct mailing list.
____
To: Crowe, Tom;
lustre-de...@lists.lustre.org
Subject: Re: [lustre-devel] LMT 3.2 - MDT display
Hi Tom,
It sounds like maybe you see the summary and the OST list, bu
Greetings Kurt,
I believe the issue you are running into is too many files in a single
directory. We had a similar issue a little while back.
We ended up mounting the OST as ldiskfs, and used debugfs to search for
the inode. An example is below.
# debugfs -R 'ncheck 111675137' /dev/dm-1
This
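For completeness, mounting the OST read-only as ldiskfs so the object can be examined
in place might look like this (a sketch; the device is taken from the example above and
the mount point is a placeholder):

# mount -t ldiskfs -o ro /dev/dm-1 /mnt/ost_ldiskfs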
Greetings All,
I have been testing lustre pools, attempting to "restrict" data placement
by setting the striping information for a top level directory of a
specific user/project with 'lfs setstripe -p fsname.poolname
top-level-dir'.
This works as desired; however, when I become the non-root
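In case it helps to reproduce, the kind of sequence described above might look like the
following (a sketch; the directory path is a placeholder):

# lfs pool_list fsname                                         # list the pools defined for the filesystem
# lfs setstripe -p fsname.poolname /lustre/fsname/projectdir   # set the pool as the directory default
# lfs getstripe -d /lustre/fsname/projectdir                   # verify the default layout, including the pool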
>
>
>
>
>
>
>On 12/18/15, 1:52 PM, "lustre-discuss on behalf of Crowe, Tom"
><lustre-discuss-boun...@lists.lustre.org on behalf of thcr...@iu.edu>
>wrote:
>
>>Greetings All,
>>
>>
>>I have been testing lustre pools, attempting to
Greetings,
I am investigating the possibility of restoring a dd backup of our MDT onto
test hardware. Our filesystem is 2.1-based.
The general idea would be to get the MDT/MGS restored in their entirety, change
the MGSNODE parameter on the MDT to reflect the test hardware LNET setup, add
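If it helps, changing the MGSNODE parameter on the restored MDT could presumably be
done with something like the following (a rough sketch only; the NID and device below
are placeholders, and --writeconf regenerates the configuration logs, so it needs care):

# tunefs.lustre --dryrun /dev/test_mdt    # print the current on-disk parameters, change nothing
# tunefs.lustre --erase-params --mgsnode=10.10.0.1@tcp --writeconf /dev/test_mdt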
did, every
file in your filesystem would be removed or moved to lost+found.
It is not immediately clear to me what amount of useful testing could be done
in that situation. Maybe there is something.
Chris
On 06/25/2015 01:06 PM, Crowe, Tom wrote:
Greetings,
I am investigating