On January 22, [EMAIL PROTECTED] wrote:
> We have a LAN with about 40 Linux systems on it. We use the Berkeley
> "customs" suite to perform parallelized builds of our product. So we
> hammer NFS pretty hard; 30-40 machines can be simultaneously reading
> and writing a single build tree through
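The load pattern described above (dozens of clients reading and writing one tree at once) can be loosely reproduced on a single client with a small script. This is only a sketch for stress-testing an NFS mount, not part of the original report; the path and worker count are hypothetical.

```shell
#!/bin/sh
# Hypothetical stress sketch: N parallel writers/readers in one
# directory, loosely mimicking a parallelized build hammering an
# NFS-mounted tree. Point DIR at an NFS mount to reproduce.
DIR=${1:-/tmp/nfs-stress}
N=8
mkdir -p "$DIR"
i=0
while [ "$i" -lt "$N" ]; do
  (
    f="$DIR/worker.$i"
    # each worker writes, reads back, and deletes its own file
    dd if=/dev/zero of="$f" bs=4k count=64 2>/dev/null
    cat "$f" > /dev/null
    rm -f "$f"
  ) &
  i=$((i + 1))
done
wait
echo "done"
```

Running several copies of this from different clients against the same export gets closer to the 30-40 machine scenario described above.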
Trond Myklebust <[EMAIL PROTECTED]> writes:
> What filesystem are you exporting?
Just ext2; all of our file systems are ext2.
The disks here are a mixture of IDE, SCSI (aic7xxx and sym53c8xx), and
Mylex DAC960 RAID. In this case, the machine running 2.2.18 has
aic7xxx SCSI. I suspect I could
> " " == Patrick J LoPresti <[EMAIL PROTECTED]> writes:
> This developer is now regularly seeing two problems which began
> with the 2.2.18 upgrade. First, remote clients occasionally
> get "stale NFS file handle" errors for no apparent reason.
> Second, some of the files are being
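For the spurious "stale NFS file handle" errors mentioned above, one commonly suggested mitigation is pinning the exported filesystem's id in /etc/exports, so that device renumbering or a server restart does not invalidate client file handles. A minimal sketch, assuming a version of nfs-utils that supports the fsid= export option; the path and host pattern are hypothetical:

```
# /etc/exports sketch (hypothetical path and hosts).
# fsid= pins the filesystem id used to build file handles,
# keeping handles stable across reboots and device reordering.
/export/build  *.example.com(rw,sync,fsid=1)
```

After editing, re-export with `exportfs -ra` and retest from a client.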