On Apr 02, 2009 21:09 +0200, Johann Lombardi wrote:
On Thu, Apr 02, 2009 at 01:27:52PM -0500, Nirmal Seenu wrote:
To reduce the size of the backup file, use the --sparse option in your
tar command. This option only reduces the backup file size, not the
time taken to tar up your backup.
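As a quick illustration of the point above (paths and the demo file are placeholders, not anything from the original thread), archiving a file full of holes with and without --sparse shows the size difference:

```shell
# Sketch only; the /tmp paths and demo file are arbitrary placeholders.
# Create a 1 GiB file that is entirely a hole (no data blocks on disk).
truncate -s 1G /tmp/sparse-demo.img

# Without --sparse, GNU tar stores the full 1 GiB of zeros.
tar -cf /tmp/plain.tar -C /tmp sparse-demo.img

# With --sparse (-S), tar detects the holes and stores only the real
# data blocks, so the archive stays tiny. tar still has to walk the
# whole file, which is why this saves space rather than tar time.
tar -cSf /tmp/sparse.tar -C /tmp sparse-demo.img

ls -lh /tmp/plain.tar /tmp/sparse.tar
```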
On Apr 02, 2009 15:17 -0700, Jordan Mendler wrote:
I deployed Lustre on some legacy hardware, and as a result my (4) OSSs each
have 32GB of RAM. Our workflow is such that we are frequently rereading the
same 15GB indexes over and over again from Lustre (they are striped across
all OSSs) by
Hello Andreas,
Thank you for your response. This is exactly what we do on one of our file
systems. We are not able to do this on this other file system for the
following reason:
[r...@iliadaccess04 ~]# lfs quotacheck -ug /n/scratch
quotacheck failed: Input/output error
[r...@iliadaccess04 ~]#
On Friday 03 April 2009, Thomas Wakefield wrote:
Any idea on the timeline for 1.6.7.1? Will it be out today, or just
sometime soon?
Knowing if it's hours away or awaiting a complete qa-cycle would be nice. That
would decide if one should wait or rebuild with the patch.
/Peter
Thanks,
Does Lustre increase random access performance? I would like to know
this because I have a large random access file (a hash table). I have
striped this file across multiple OSTs. The file is 24 gigabytes, and
the stripe size was 1 GB across 10 OSTs. I also tried a stripe size
of 100 MB.
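For reference, with the round-robin layout Lustre uses, the OST serving a given byte offset follows directly from the stripe size and count. The `ost_for_offset` helper below is a hypothetical illustration (not a Lustre tool), using the 1 GB x 10 OST layout mentioned above:

```shell
# Round-robin striping: stripe i covers bytes [i*S, (i+1)*S) and lives
# on OST (i mod C), where S is the stripe size and C the stripe count.
stripe_size=$((1024 * 1024 * 1024))   # 1 GiB stripes
stripe_count=10                       # striped across 10 OSTs

# Hypothetical helper, for illustration only.
ost_for_offset() {
    echo $(( ($1 / stripe_size) % stripe_count ))
}

ost_for_offset $((5 * 1024 * 1024 * 1024))    # byte 5 GiB in  -> OST 5
ost_for_offset $((12 * 1024 * 1024 * 1024))   # byte 12 GiB in -> OST 2
```

One consequence for random access: any single small read lands on exactly one OST, so a smaller stripe size does not speed up an individual read, but it can spread many concurrent random reads across more servers.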
set...@gmail.com wrote:
Does Lustre increase random access performance? I would like to know
this because I have a large random access file (a hash table). I have
striped this file across multiple OSTs. The file is 24 gigabytes, and
the stripe size was 1 GB across 10 OSTs. I also tried a
Hello!
On Apr 6, 2009, at 10:41 AM, Peter Kjellstrom wrote:
On Friday 03 April 2009, Thomas Wakefield wrote:
Any idea on the timeline for 1.6.7.1? Will it be out today, or just
sometime soon?
Knowing if it's hours away or awaiting a complete qa-cycle would be
nice. That would decide if
A quick note on my setup: 40 OSSs, each with an md0 RAID1 as the OST
(2x500GB disks; AMD X2 5400, 8GB DDR2), and a single MDT/MGS/MDS
(dual quad-core Xeon, 32GB DDR2, 4TB RAID6). I have everything mounted
and working properly and am still going through the basic tests for
performance, but I am confused
Andreas,
In theory, this should work well. Each index is about 15GB, and usually the
same index is used sequentially at a given time. These are genome
alignments, so we will, for instance, align all of a human experiment before
we switch indexes to align all of a rat genome. As such, 32GB should be
On Mon, 2009-04-06 at 15:27 -0400, Christopher Deneen wrote:
A quick note on my setup: 40 OSSs, each with an md0 RAID1 as the OST
(2x500GB disks; AMD X2 5400, 8GB DDR2), and a single MDT/MGS/MDS
(dual quad-core Xeon, 32GB DDR2, 4TB RAID6). I have everything mounted
and working properly and am still
Thank you all for the feedback regarding the inode out of bounds issue,
and I look forward to the upcoming binary patch / release.
We have found some other interesting filesystem issues (probably related to
the bug under discussion) whereby we have some directories full of files
that have no