On 2012-09-28, at 2:15, Johann Lombardi johann.lomba...@linux.intel.com wrote:
On 28 sept. 2012, at 09:03, Alfonso Pardo wrote:
If I need more inodes on my OSTs, I am in big trouble, because I will
need to reformat all the OSTs in my production storage environment.
Any ideas to increase the
Hi Roman,
The coverage data is interesting. It would be even more useful to be able to
compare it to the previous code coverage run, if they used the same method for
measuring coverage (the new report states that the method has changed and
reduced coverage).
Are the percentages of code
On Oct 3, 2012, at 12:50 AM, Sébastien Buisson wrote:
On 03/10/2012 00:44, Andreas Dilger wrote:
Yes, no doubt. It is probably worthwhile to check the CEA Coverity patches
before submitting anything new, in case those failures are already fixed
there.
I am sure every one of you would
On Oct 15, 2012, at 1:01 PM, Jean-Francois Le Fillatre wrote:
Yes this is one strange formula... There are two ways of reading it:
- one thread per 128MB of RAM, times the number of CPUs in the system
On one of our typical OSSes (24 GB, 8 cores), that would give: ((24*1024) /
128) * 8 =
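As a quick sanity check of the first reading for that 24 GB / 8 core example (the shell
arithmetic below is purely an illustration):
  echo $(( (24 * 1024 / 128) * 8 ))    # one thread per 128MB of RAM, times 8 CPUs -> 1536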
On 2012-10-15, at 2:15, Danny Sternkopf danny.sternk...@csc.fi wrote:
I wonder if there is anyone out there who has a working test or even
production environment using Lustre for VM disk images?
I have run into the Lustre Direct I/O problem, which requires the
application to read and write in
On 2012-10-18, at 16:11, Jason Brooks
brook...@ohsu.edu wrote:
I suffered an OSS crash where my OSS server had a CPU fault. I have it running
again, but I am trying to decommission it. I am migrating the data off of it
onto other OSTs using the lfs find command with
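The usual shape of that drain looks roughly like the following (a sketch only; the
filesystem name, OST index and device number are placeholders, not taken from this report):
  # on the MDS: stop new object allocation on the OST being drained
  lctl dl | grep OST0007               # find the osc device number for that OST
  lctl --device <devno> deactivate
  # on a client: find files with objects on that OST and move them to other OSTs
  lfs find --obd lustre-OST0007_UUID /mnt/lustre | lfs_migrate -y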
On 2012-10-31, at 8:27, Gabriele Paciucci paciu...@gmail.com wrote:
Has anyone tried to compile the Lustre patchless client on Debian Linux for
the ARM architecture? Would it be possible to do?
Gabriele,
Many years ago, someone was working on a MIPS port for Lustre and I believe
they got it
On Oct 31, 2012, at 10:55 AM, Jason Brooks wrote:
Hello,
I am using lustre 1.8.6, on a centos 5.5 system.
The installed version of tar (GNU tar 1.15.1) comes from the CentOS RPM. The man
page claims that using the --xattrs command-line option will get all of the
filesystem's extended
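For reference, the manual's file-level MDT backup saves the Lustre extended attributes
explicitly rather than trusting tar's --xattrs alone; a rough sketch, with illustrative
paths and assuming the MDT is mounted as ldiskfs for the backup:
  cd /mnt/mdt_as_ldiskfs
  getfattr -R -d -m '.*' -e hex -P . > /backup/ea-backup.bak   # save the xattrs separately
  tar czf /backup/mdt.tgz --sparse .                           # then the file data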
In hopes of improving the quality, coverage, and currency of the Lustre
User Manual, I'm putting out a call to the Lustre community for
contributors to this important resource.
The user manual is especially important for new users to Lustre, but has
fallen into some disrepair now that there is no
On 2012-11-13, at 14:26, Ned Bass ba...@llnl.gov wrote:
On Tue, Nov 13, 2012 at 11:48:35AM -0800, Nathan Rutman wrote:
Would it be easier to move the manual back to a Wiki? The low hassle
factor of wikis has always been a draw for contribution. The openSFS
site is up and running with
Hamilton, Pam hamilt...@llnl.gov wrote:
Are you in Salt Lake City? We'll be having an 'unofficial' Birds of a
Feather (BOF) session Wednesday evening the 14th, from 5:30pm to 7:30pm, at
the Salt Lake Marriott Downtown City Creek 2nd floor Snowbird Rm.
We've had a really busy year with
On 11/14/12 4:51 PM, Christopher J. Walker c.j.wal...@qmul.ac.uk wrote:
On 13/11/12 21:26, Ned Bass wrote:
- A review process for proposed changes that assures a high standard of
quality
That's important. I've spotted cases where it's clear there's an issue,
but am not sure what the correct
On 11/22/12 10:25 AM, Mark Field mnfi...@gmail.com wrote:
Hi,
I am currently using lustre 1.8; after an OST failure, I deactivated the
OST on the MDS and made the change permanent. If I now run lctl dl on
the client nodes all of them except one show the OST as inactive
(device 7 in the output
On Nov 29, 2012, at 12:39 AM, Alfonso Pardo wrote:
I need to monitor a directory in the Lustre file system.
We don't have any inotify or dnotify support for Lustre. In Lustre 2.x it is
possible to monitor the whole filesystem with the Lustre ChangeLog
functionality. It might be an
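For reference, the usual ChangeLog workflow looks roughly like this (a sketch; the
filesystem name testfs and the returned reader id cl1 are placeholders):
  # on the MDS: register a changelog consumer
  lctl --device testfs-MDT0000 changelog_register
  # on a client: read the records, then acknowledge the ones already processed
  lfs changelog testfs-MDT0000
  lfs changelog_clear testfs-MDT0000 cl1 0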
On 12/6/12 12:06 PM, Grigory Shamov ga...@yahoo.com wrote:
Hi,
On our cluster, when there is a load on Lustre FS, at some points it
slows down precipitously, and there are very very many slow IO and
slow setattr messages on the OSS servers:
===
[2988758.408968] Lustre: scratch-OST0004:
On 2012-12-07, at 10:26, Jon Yeargers
yearg...@ohsu.edu wrote:
Can Lustre be used to store data like streaming audio / video? I’ve been
scolded about considering it for DB storage but I’m looking at the relative
merits of Lustre vs HDFS.
I've been using Lustre for years
On 2012-12-19, at 11:22, Allen, Benjamin S
b...@lanl.gov wrote:
Hi Jason,
2. Having two paths to your storage should speed things up. I'm guessing you'd
have more than one LUN on the array, so you could do something as simple as
splitting the LUNs between the two paths, or
On 2012-12-18, at 16:39, Jason Brooks brook...@ohsu.edu wrote:
I am currently using lustre 1.8. I am relatively new with lustre. At the
moment, I am trying to move an mds/mgs system from one disk to another.
I used the version of tar modified by whamcloud in order to dump and restore
On Jan 16, 2013, at 3:50 PM, Brett Worth wrote:
I am in the process of doing an upgrade of our 1.8.6 Lustre servers to 2.1.3.
We have
quite a few clients that all run versions of the lustre client close to 1.8.6.
Is there a compatibility matrix somewhere that will show what clients will
On 2013/28/01 3:22 PM, Andrus, Brian Contractor bdand...@nps.edu wrote:
I am using the lustre-2.2.0-2.6.32_220.4.2.el6_lustre.x86_64.x86_64 rpms
downloaded from whamcloud.
I checked the checksums on our system and find:
lctl get_param osc.*.checksum_type
LBUG is a fatal error. You need to reboot. Strongly consider upgrading to a
newer release (e.g. 1.8.8) to fix this problem.
Cheers, Andreas
On 2013-01-31, at 12:10, Θεόδωρος Στυλιανός Κονδύλης
theodoros.stylianos.kondy...@gmail.com
wrote:
Hello
On 2013/12/02 7:16 AM, Alfonso Pardo alfonso.pa...@ciemat.es wrote:
Hello,
I am trying to compile the Lustre client 2.3 (2.2 has the same problem) for kernel
2.6.34. I need to use this kernel, obtained from
www.kernel.org, because my clients need ACPI
modules, and only this kernel version
On 2013-02-14, at 9:46, Alastair Ferguson
afergu...@cmcrc.com wrote:
OK, so our old MDS had hardware issues so I configured a new MGS / MDS on a VM
(this is a backup lustre filesystem and I wanted to separate the MGS / MDS from
OSS of the previous), and then did this
On 2013/21/03 4:09 AM, Michael Kluge michael.kl...@tu-dresden.de wrote:
I have read through the documentation for obdfilter-survey but could not
find any information on how invasive the test is. Will it destroy an
already formatted OST or render user data unusable?
It shouldn't - the
Anyone can update Wikipedia, so it would be great if you could update the
download links.
Cheers, Andreas
On 2013-03-25, at 8:50, Christopher Beland cbel...@bbn.com wrote:
Greetings,
I was trying to update the Lustre Wikipedia page today, and while
checking the latest release version, I
On 2013/03/04 11:20 AM, Andrus, Brian Contractor bdand...@nps.edu
wrote:
All,
I am running e2fsck on one of my OSTs.
I run into:
===
File /O/0/d6/768742 (inode #22479589, mod time Fri Aug 17 11:15:51 2012)
has 1034 multiply-claimed block(s), shared with 4 file(s):
...
up the parent FID using:
lfs fid2path /mount/point [0x213c5:0xbae3:0x0]
on any client.
Cheers, Andreas
-Original Message-
From: Dilger, Andreas [mailto:andreas.dil...@intel.com]
Sent: Saturday, April 06, 2013 5:19 PM
To: Andrus, Brian Contractor; lustre-discuss@lists.lustre.org
On 2013-04-16, at 10:49, Verduzco, Benjamin P.
benjamin.p.verdu...@saic.com wrote:
I’m having trouble with a filesystem after a planned outage. I’m new to
Lustre, but I’ll do my best to explain what I’m seeing.
· I have a 1.6.7.2 MDS and 4 1.8.5
On 2013/26/04 8:56 AM, Andrus, Brian Contractor bdand...@nps.edu wrote:
FWIW, your total should be used+free+reserved
There is normally a % set aside for root only
This is changeable too.
tune2fs -m 0 device
If your filesystem ever gets totally full, it will have permanent
filesystem
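To see what is currently reserved before changing it (the device name is illustrative):
  tune2fs -l /dev/sdb | grep -i 'reserved block count'
  tune2fs -m 1 /dev/sdb     # or keep a 1% reserve instead of removing it entirely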
On 2013-05-08, at 7:14, Jure Pečar pega...@nerv.eu.org wrote:
I have a lustre 2.2 environment which looks like this:
# lfs df -h
UUID                    bytes    Used  Available  Use%  Mounted on
lustre22-MDT_UUID       95.0G    9.4G      79.3G   11%  /lustre[MDT:0]
On 2013-05-20, at 12:26, Rob Stewart robstewar...@gmail.com wrote:
Having read the wiki page on Lustre's changelogs [1], I have a
question about one claim: "Use changelog entries to exactly replicate
changes in a file system mirror."
Of the record types that the changelog supports, there is not
On 2013/04/06 11:44 AM, Ken Hornstein k...@cmf.nrl.navy.mil wrote:
I tried to move my MDS from one filesystem on the same machine to another,
using the procedure outlined in the Lustre manuals (I didn't use dd, since
the underlying disks weren't the same size and also I did not think
it was
On 2013/04/06 12:21 PM, Colin Faber cfa...@gmail.com wrote:
You could try this
http://xyratex.prod.acquia-sites.com/sites/default/files/Migration_WC2.1_Patches_1-0.tar.gz
Shadow wrote this tool; it allows you to correct the OI database, though it's
not been updated in a while so I'm not sure how
On 2013/13/06 6:36 PM, Jaln valiant...@gmail.com wrote:
Thank you Chris, I'm sort of clear now.
In my question, stripe 0,4 means one process wants to access stripes 0 and
4 at the same time.
There is another process that wants to access both stripes 0 and 2,
Just to clarify the Lustre terminology here,
On 2013-06-14, at 17:54, Chan Ching Yu, Patrick
cyc...@clustertech.com wrote:
I am considering the hard disk allocation for the Lustre storage system.
There are two Lustre I/O servers in total; one acts as MDS/OSS, the other acts as
a pure OSS.
Note that if the MDS
,
which Lustre takes to mean sync'd to disk on the server so that it is safe in
the face of a crash of either client or server, so is not faster unless very
large writes are done by the client.
Cheers, Andreas
On Fri, Jun 14, 2013 at 2:47 PM, Dilger, Andreas
andreas.dil...@intel.com wrote:
On 2013/13
On 2013/17/06 1:12 AM, Alastair Ferguson afergu...@cmcrc.com wrote:
OK, bit of a weird one, so 3 OSTs are 100%, but there is 30TB of free
space around the other OSTs, so I do:
lfs df -h
Get this part as one of the OSTs I need to deactivate:
AC3-OST000c_UUID 14.3T 13.6T
On 2013-06-19, at 8:52, Lee, Brett
brett@intel.com wrote:
We probably should clarify the context of inodes:
1) Each OST has inodes – these inodes refer to objects on that OST that
contain block data for a Lustre file.
I prefer to call these OST objects, to keep
On 2013/28/06 3:25 PM, Andrus, Brian Contractor bdand...@nps.edu wrote:
Basically, I was adding capacity to a system while doing a fresh install.
Turns out /dev/sda which used to be the disk in the bottom slot became
the disk in the top slot instead.
That happened to be where the MDT was, which
On 2013-06-28, at 20:27, Andrus, Brian Contractor
bdand...@nps.edu wrote:
I am trying to reformat an OST, which was originally created with lustre 2.3,
with the lustre 2.4 mkfs.lustre.
First question is why you want to reformat? You never need to reformat when
upgrading to
On 2013/03/07 5:51 AM, Nikolay Kvetsinski nkvecin...@gmail.com wrote:
Thanks mate, what you say makes sense. Anyway I'm dealing with different
file sizes, varying from KBs to GBs. Luckily some of the files are
grouped in folders, for example in folder /fs/X all files will be 9MB,
which I'll not
On 2013-07-15, at 6:20, Nikolay Kvetsinski nkvecin...@gmail.com wrote:
Hello, I have a question about metadata inode considerations; the manual
says that Lustre metadata is typically 1-2% of the file system capacity,
depending upon file size. After that there are some calculations for
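A purely illustrative back-of-the-envelope version of that calculation (the ~2 KB of MDT
space per file is an assumption, not an official figure): 100 TB of OST capacity at a 1 MB
average file size is roughly 100 million files, which needs on the order of 200 GB of MDT
space, i.e. well under the 1-2% rule of thumb; smaller average files push the ratio up.
  echo $(( 100 * 1024 * 1024 ))                      # files at 1MB average -> ~104 million
  echo $(( 100 * 1024 * 1024 * 2 / 1024 / 1024 ))    # GB of MDT space at ~2KB/file -> ~200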
[Apologies if you received multiple copies of this email. ]
---
The 8th Parallel Data Storage Workshop (PDSW13)
held in conjunction with IEEE/ACM Supercomputing (SC) 2013
Denver, Colorado, Monday, November 18, 2013
On 2013/07/18 3:36 PM, Prakash Surya sur...@llnl.gov wrote:
Is there a public yum repo one can use for installing lustre packages?
Don't quote me, but I think that every Jenkins build is a YUM repo.
Cheers, Andreas
On Thu, Jun 20, 2013 at 03:07:56PM +, Lee, Brett wrote:
CY,
If you have
On 2013/07/30 10:43 AM, Diep, Minh minh.d...@intel.com wrote:
We are not actively building for Ubuntu 12.04. I would look into this error by
checking if Linux was built with CONFIG_CRYPTO built in or as a module... no
I don't think we should be depending on CONFIG_CRYPTO. This is needed for
Kerberos support,
On 2013/08/05 12:28 PM, Mohr Jr, Richard Frank (Rick Mohr)
rm...@utk.edu wrote:
I know that the official Lustre support matrix does not show support for
Lustre 1.8 on SLES 11 SP2, but I was wondering if anyone has had
unofficial success making this work?
Probably a long shot, but just thought I
You didn't mention which version of Lustre you are using, but this is probably
related to https://jira.hpdd.intel.com/browse/LU-3671
Cheers, Andreas
On 2013-08-20, at 1:08, Nikolay Kvetsinski
nkvecin...@gmail.com wrote:
Hello guys,
does anyone of you have problems
On 2013/08/25 6:39 AM, Nikolay Kvetsinski nkvecin...@gmail.com wrote:
Hello, I have a production script that does read operations on a lot of
small files. I read that one can gain a performance boost with small files
by using a loop device on top of Lustre. So I created a 500 GB file striped
across all
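For anyone wanting to reproduce that setup, the general shape is something like this
(a sketch; the sizes, paths and the choice of ext4 are illustrative):
  lfs setstripe -c -1 /mnt/lustre/loop.img      # stripe the backing file across all OSTs
  truncate -s 500G /mnt/lustre/loop.img
  mkfs.ext4 -F /mnt/lustre/loop.img
  mkdir -p /mnt/smallfiles
  mount -o loop /mnt/lustre/loop.img /mnt/smallfiles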
The Lustre Filesystem Operations Manual is an important resource for the
Lustre Community for both new and experienced Lustre users to find out
more information about how to use and maintain a Lustre filesystem.
Unfortunately, while there are ongoing efforts to improve the manual over
the past
On 2013-09-24, at 14:15, Carlson, Timothy S timothy.carl...@pnnl.gov wrote:
I've got an odd situation that I can't seem to fix.
My setup is Lustre 1.8.8-wc1 clients on RHEL 6 talking to 1.8.6 servers on
RHEL 5.
My compute nodes have 64 GB of memory and I have a use case where an
In theory this should be possible, but we have not tested this for a long time
since it isn't a configuration that is common.
Note that you need to configure the OST on a separate node from the MDT in this
case. We have not implemented the ability to have multiple OSD types on the
same node.
On 2013/09/19 11:39 PM, Arman Khalatyan arm2...@gmail.com wrote:
Hello,
We are removing one OST from Lustre 2.4.1.
Only a few files are written on that OST, without any striping.
First we deactivate it on the MDS:
lctl --device 15 deactivate
then on client:
lfs_migrate -y
On 2013-10-11, at 17:59, Weilin Chang
weilin.ch...@huawei.com wrote:
I would like to configure a Lustre server on a 32-bit ARM system. Where can I download
prebuilt binary packages and their corresponding sources?
I tried rpm files under
You can try mounting the OST with -o noinit_itable. This doesn't fix the
formatting (not sure why that isn't being recognized), but it prevents the
kernel thread from zeroing the inode table.
This could cause problems for e2fsck in the distant future if certain types of
corruption are hit, but
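Something along these lines, with an illustrative device and mount point:
  mount -t lustre -o noinit_itable /dev/sdc /mnt/ost0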
On 2013/10/17 5:34 AM, Olli Lounela olli.loun...@helsinki.fi wrote:
Hi,
We run four-node Lustre 2.3, and I needed to both change the hardware
under the MGS/MDS and reassign an OSS IP. At the same time, I added a brand
new 10GE network to the system, which was the reason for the MDS hardware
change.
Note that
On 2013/10/16 4:45 PM, Weilin Chang weilin.ch...@huawei.com wrote:
HI,
I am using Lustre 1.8.5. The servers are up and mounted without any problem,
but the client failed to mount the Lustre file system. I also did not see
lustre in /proc/filesystems. Is there another rpm I need to install on
the
There is no particular danger to the filesystem if clients fail to unmount.
Clients have no direct ability to modify filesystem metadata, so they should
never be able to corrupt the filesystem. This is no different than if clients
crash or if the network fails, or whatever else bad happens to
On 2013/10/29 5:21 PM, Weilin Chang weilin.ch...@huawei.com wrote:
I tried to compile e2fsprogs from its source code. The compilation
failed because db.h was missing. Has anyone compiled the tool from its
source code before? Where is this file located? Is it OK to use a
different version of
Note also that if there are fixes or patches for Debian builds, I'm happy to
get patches to fix that. We do build Lustre clients (2.1-2.5) on Ubuntu 10, but
I suspect that there may be some problems building on newer Ubuntu releases.
Cheers, Andreas
On 2013-11-25, at 9:38, Thomas Stibor
On 2013/12/02 12:31 PM, David Lee Braun dbr...@comcast.net wrote:
Hi Colin,
I have lustre 2.3.0 from the git repository.
I would say that 2.3.0 from git is not a good version to be running
in the long term. This is a feature release that does not get any bug
fixes. I'd really recommend
On 2013/12/04 12:27 PM, Christopher J. Walker c.j.wal...@qmul.ac.uk
wrote:
How many OSS threads should I be running on an OSS with one OST with 12
disks in RAID6?
http://build.whamcloud.com/job/lustre-manual/lastSuccessfulBuild/artifact/lustre_manual.xhtml#dbdoclet.50438272_55226
talks about
I saw this same problem on my system - it happens when trying to migrate a file
created with Lustre 1.8.
See https://jira.hpdd.intel.com/browse/LU-4293 for details.
I have an updated version of lfs_migrate that works around this problem that I
should push to Gerrit. The patch will be linked to
lfs_migrate and let you know.
However I have only run lustre 2.5
Thanks,
Pete
On 12/17/2013 07:49 PM, Dilger, Andreas wrote:
I saw this same problem on my system - it happens when trying to migrate a
file created with Lustre 1.8.
See https://jira.hpdd.intel.com/browse/LU-4293 for details.
I
http://git.whamcloud.com/?p=fs/lustre-release.git;a=blob;f=lustre/scripts/lfs_migrate;h=e9c8aaeddec4a9f7efac364ad6740accb4cb3293;hb=a99a1172147b33b5fa118779399267bc6e4490bd
On 12/17/2013 07:49 PM, Dilger, Andreas wrote:
I saw this same problem on my system - it happens when trying to
migrate a file
Inactive OST means that the MDS cannot contact the OST for some reason (OST is
not running, firewall rules, etc). You need to check console logs on MDS or OSS
to see why.
Cheers, Andreas
On Dec 22, 2013, at 0:04, Amjad Syed
amjad...@gmail.com wrote:
Hello,
My
If you have the master git repo, just do:
git checkout b2_4
to get the latest 2.4 (in progress) version, or:
git checkout v2_4_2_0
to get exactly the 2.4.2 release.
Cheers, Andreas
On Dec 29, 2013, at 9:59, E.S. Rosenberg esr+lus...@mail.hebrew.edu wrote:
Hi all,
I am not a big git
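If you are not sure which release tags exist, plain git will list them, e.g.:
  git tag -l 'v2_4*'     # should include the v2_4_2_0 tag mentioned above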
They are separate lists. The hpdd-discuss list is more relevant for 2.x
releases from Intel.
Cheers, Andreas
On Dec 29, 2013, at 4:11, E.S. Rosenberg esr+lus...@mail.hebrew.edu wrote:
Is hpdd-discuss forwarded here and vice versa?
Or should I be subscribing to that list too?
Thanks,
Eli
1 1347728 0x1490900
Thanks,
Pete
On 12/19/2013 12:07 AM, Dilger, Andreas wrote:
If you have only run with Lustre 2.5, can you please run lfs getstripe
-v /path/to/file to print out the FID and other layout information for
the file.
Cheers, Andreas
On Dec 17
to
http://jira.hpdd.intel.com/browse/LU-4293.
Cheers, Andreas
On 01/10/2014 01:50 PM, Dilger, Andreas wrote:
On 2013/12/19, 6:16 AM, Peter Mistich peter.mist...@rackspace.com
wrote:
hope this helps
/opt/zenoss/perf//Daemons/bb01lr02s06.dfw1/zenperfsnmp-thresholds.pickle: cannot swap
Forwarding this to lustre-discuss and the mxlnd author to reach the
maximum audience.
I definitely think qswlnd is beyond its useful lifetime (Quadrics has been out
of business since 2009, so any systems using it are at least 5 years old),
and I believe ralnd is in the same boat (it was only used for much
On 2014/03/04, 2:38 AM, 邓尧 tors...@gmail.com
wrote:
We're running low on physical machines and want to deploy the MGS, MDS and OSS on
the same machine. Is it officially supported?
I know that MGS and MDS can be put on the same machine, but not sure about OSS
and MDS.
This is (or at least was) possible for OSTs if you enable failout mode,
instead of the default failover mode. I don't know if we have tested this in
a long time, since it isn't something that most people want.
However, it isn't currently possible to run the MDT in failout mode since there
On 2014/03/19, 9:35 AM, Lee, Brett brett@intel.com wrote:
Hi Jon,
Looks like you are jumping in with both feet (new to Lustre and trying to
replace an OST). Pretty ambitious... ;)
From what I can tell you are following section 14.8.3 in the latest
Lustre manual:
14.8.3. Removing an OST
On 2014/03/11, 1:04 AM, Grégoire Pichon gregoire.pic...@bull.net wrote:
Hi Jinshan,
An additional question on that topic...
Is the precreate object cache tunable? For instance, to increase the
number of precreated objects in the case of metadata-intensive applications?
This should be handled
, 2014, at 10:46, Stearman, Marc stearm...@llnl.gov wrote:
All of our file systems show reserved=0 for all OSTs.
Create counts vary from 512 up to 2048.
-Marc
D. Marc Stearman
Lustre Operations Lead
stearm...@llnl.gov
925.423.9670
On Mar 19, 2014, at 4:04 PM, Dilger
get feedback
from this list will make it less scary to move over to lustre)!
Regards,
/jon
On 03/19/2014 08:51 PM, Dilger, Andreas wrote:
On 2014/03/19, 9:35 AM, Lee, Brett brett@intel.com wrote:
Hi Jon,
Looks like you are jumping in with both feet (new to Lustre and trying
to
replace
On 2014/04/08, 7:16 AM, v.nikonorov v.nikono...@nikiet.ru wrote:
Hello!
An old post mentions plans to implement this feature; is it
implemented yet?
The post I am referencing:
http://lists.lustre.org/pipermail/lustre-discuss/2007-September/003921.html
There is a new project to implement
Correct.
Cheers, Andreas
On Apr 9, 2014, at 6:40, v.nikonorov v.nikono...@nikiet.ru wrote:
Thank you!
So for now, RAID is the only point of storage redundancy?
Dilger, Andreas wrote on 2014-04-08 17:08:
On 2014/04/08, 7:16 AM, v.nikonorov v.nikono...@nikiet.ru wrote:
Hello!
Some old
You could check the Debian lustre package to see if there are any fixes to the
build system in the package there. I'd be happy to see any fixes included back
into the Lustre repo so that anyone can build the packages.
Cheers, Andreas
On May 2, 2014, at 13:39, Steven Lokie
Lustre does not have block replicas, relying currently on RAID at the device
level.
There is an object selection policy to pick objects on OSTs. Typically it is
round-robin, with some precession if multi-striped files fit evenly into the
total OST count. If the free space in the OSTs is very
It is usually best to use the newest e2fsprogs release, since it has the most
fixes. This is currently 1.42.7.wc2 though we are just in the process of
releasing 1.42.9.wc1.
That said, I would not run e2fsck on the failing device. That can cause extra
stress on the device and cause it to fail
On 2014/05/14, 8:46 PM, 冯景华 fengj...@gmail.com
wrote:
Before upgrading our cluster system, the Lustre server version is 1.8.5,
the client version is 1.8.5, and the client OS is Red Hat 5.3 with kernel version 2.6.18.
Because we need to run some software on Red Hat 6, we
Are you sure your client is using the IB network? What does:
lctl get_param osc.*.imports
report for the NIDs of the OSTs? If it is still using the TCP connection you
may need to run writeconf to fix the client config log.
Cheers, Andreas
On May 19, 2014, at 4:34, Pardo Diaz, Alfonso
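For reference, the usual writeconf sequence is roughly the following (a sketch; device
names are illustrative, and all clients and servers must be unmounted first):
  tunefs.lustre --writeconf /dev/mdtdev     # on the MDS
  tunefs.lustre --writeconf /dev/ostdev     # on every OSS, for every OST
  # then remount: MGS/MDT first, OSTs next, clients last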
On 2014/05/26, 9:03 AM, Andrus, Brian Contractor
bdand...@nps.edu wrote:
Is it normal for e2fsck running on an MDT with --mdsdb to take over a week?
The entire MDT is only 500GB.
This is limited by the performance of the database that e2fsck is using for the
mdsdb. If
I assume it is not a symlink to itself like:
vm-106-disk-1.raw -> .
or something?
That said, there is something wrong with that directory. It should not have any
objects assigned to it if it is a directory, only if it is a regular file.
You could check on OST0013 (decimal 19) to see if
On 2014/06/03, 6:37 PM, Anjana Kar k...@psc.edu wrote:
Is there a way to set the number of inodes for zfs MDT?
No. ZFS can create inodes until the filesystem is full.
I've tried using the --mkfsoptions="-N value" option mentioned in the Lustre 2.0
manual, but it fails to accept it.
This is an ldiskfs-specific
Note that the MDT performance degradation over time (and especially under load)
can be addressed by ZFS ARC cache tunables. In particular arc_meta_limit is
critical, since this artificially limits the amount of metadata that can be
cached on the MDS, since it doesn't store any data.
Cheers,
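On ZFS-on-Linux the corresponding module parameter is normally zfs_arc_meta_limit; a
sketch of raising it to 8 GiB (the value itself is only an example):
  echo $(( 8 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_arc_meta_limit
  grep -E 'arc_meta_(limit|used)' /proc/spl/kstat/zfs/arcstats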
Scrub and resilver have nothing to do with defrag.
Scrub is scanning of all the data blocks in the pool to verify their checksums
and parity to detect silent data corruption, and rewrite the bad blocks if
necessary.
Resilver is reconstructing a failed disk onto a new disk using parity or
This should be possible by specifying the pool configuration for mkfs.lustre as
you do with zpool:
mkfs.lustre --backfstype zfs {other opts} {poolname/dataset} mirror sde sdf
mirror sdg sdh
There is also a bug open to improve the mkfs.lustre(8) man page to describe the
ZFS options better:
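A slightly more concrete version of that command (the fsname, index and MGS NID are
placeholders):
  mkfs.lustre --ost --backfstype=zfs --fsname=testfs --index=0 \
      --mgsnode=mgs@tcp0 ostpool/ost0 mirror sde sdf mirror sdg sdh
  zpool status ostpool      # verify the vdev layout afterwards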
The underlying MDT filesystem is actually ZFS and not ldiskfs, but it should be
equally possible to expand the MDT storage capacity by adding another mirrored
VDEV to the MDT pool.
It is also important to ask if there are any snapshots of the pool? That would
also hold space in the pool even
a lot of certainty in your choices, you may want to
consult various vendors of Lustre systems.
Scott
On June 8, 2014 11:42:15 AM CDT, Dilger, Andreas
andreas.dil...@intel.com wrote:
Scrub and resilver have nothing to do with defrag.
Scrub is scanning of all
:14 +
Dilger, Andreas andreas.dil...@intel.com wrote:
It looks like you've already increased arc_meta_limit beyond the
default, which is c_max / 4. That was critical to performance in our
testing.
There is also a patch from Brian that should help performance in your
case:
http
On 2014/06/19, 6:55 PM, Alastair Ferguson
afergu...@cmcrc.com wrote:
OK, so I have a RAID1 array used as a journal disk for two 14TB OSTs. This has 2
x 2GB partitions on 2 x 2TB physical disks.
[root@silicon ~]# fdisk -l /dev/sdav
Disk /dev/sdav: 2000.3 GB,
On 2014/06/23, 7:31 PM, Juan Madrigal jua...@mac.com
wrote:
Does Lustre run on FreeBSD? If not, are there any plans for a port?
We could really use a solid distributed filesystem on FreeBSD.
Currently no plans to do this. Re-export via NFS (even NFSv4 using Ganesha)
and
What version of Lustre are you using, and do you ever use lfs migrate or
lfs_migrate to migrate files onto new OSTs? There was a bug that would cause
orphan inodes on the MDS after migration.
This does result in some space leakage, but unless you have a large problem
there is no requirement to
On 2014/06/29, 1:24 PM, Indivar Nair
indivar.n...@techterra.in wrote:
Referring to https://jira.hpdd.intel.com/browse/LUDOC-161
The idea is to be able to backup and restore individual files / directories to
another non-Lustre storage OR tape drive...
Well, the
On 2014/09/02, 7:05 AM, Alexey Lyashkov alexey_lyash...@xyratex.com
wrote:
we don't need too many sends to a single peer, except for LNet routers.
As for other limits:
the number of RPCs in flight == 1 for MDC-MDT links,
Just a minor correction - while there is currently a limit of 1 modifying
RPC in
The FIDs of OST objects are different from those of MDT objects, since this
allows more flexibility in creating the layout of the file.
In any case, related to your specific goal - it is possible to get the parent
(MDT) FID of each OST object from an xattr on the object itself.
See the code in
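One way to look at that xattr by hand, assuming the OST is temporarily mounted read-only
as ldiskfs and that the attribute in question is trusted.fid (both assumptions, not a
tested recipe):
  mount -t ldiskfs -o ro /dev/ostdev /mnt/ost_ldiskfs
  getfattr -n trusted.fid -e hex /mnt/ost_ldiskfs/O/0/d5/12345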
The backward compatibility is usually listed in the release notes for each new
release (it is more difficult to report the future compatibility when the old
release is originally made).
The 2.1 client _should_ be compatible with newer 2.4 and 2.5 servers, but we
only ever test interoperability
It is also worthwhile to note that Lustre does not actually pack the 1MB or 4MB
of data into the RPC request, but only a scatter-gather list of the data
buffers and offsets. The data itself is transferred via RDMA.
Cheers, Andreas
On Oct 7, 2014, at 9:05, teng wang