Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Karsten W. Rohrbach

Mike Smith ([EMAIL PROTECTED]) wrote on Mon, Feb 05, 2001 at 12:52:24PM -0800:
 
 You can't do this with a NetApp either; they max out at about 6TB now 
 (going up to around 12 or so soon).  You might want to talk to EMC and/or 
 IBM, both of whom have *extremely* large filers.
from my experience with filers (we have both country and western here,
e.g. netapp f740/760 and emc^2 symmetrix/connectrix) i can only say
that emc is a pile of sh** - no pun intended. actually the boxes work
okay, but you need a shitload of data mover boxes from emc to achieve
performance similar to netapp's 760 series (up to 12 data movers with
2GB of ram each). emc goes brute force, netapp uses their brains.

when it comes to ibm, as far as i understand you have to hook up their
filers to rs/6000(aix) or s/370 or s/390 systems since they are "only"
fibrechannel or ficon attached raid subsystems, so the client platform
is responsible for handling all the filesystem stuff.

you might also check out lsi logic's filer products, i think they
support 12tb via nas.

 
 Your friend may also want to look at Traakan, who have a novel product in 
 this space.
i checked out their website which says "under construction"
strange...


/k

 
 -- 
 ... every activity meets with opposition, everyone who acts has his
 rivals and unfortunately opponents also.  But not because people want
 to be opponents, rather because the tasks and relationships force
 people to take different points of view.  [Dr. Fritz Todt]
V I C T O R Y   N O T   V E N G E A N C E
 
 
 
 
 To Unsubscribe: send mail to [EMAIL PROTECTED]
 with "unsubscribe freebsd-fs" in the body of the message

-- 
 Hackers know all the right MOVs.
KR433/KR11-RIPE -- http://www.webmonster.de -- ftp://ftp.webmonster.de



To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Mike Smith

 
 when it comes to ibm, as far as i understand you have to hook up their
 filers to rs/6000(aix) or s/370 or s/390 systems since they are "only"
 fibrechannel or ficon attached raid subsystems, so the client platform
 is responsible for handling all the filesystem stuff.

Hrrm.  The last box I looked at included a pair of RS6000's in the 
cabinet, and they were touting it as a NAS, but I wasn't paying so much 
attention then.

  Your friend may also want to look at Traakan, who have a novel product in 
  this space.
 i checked out their website which says "under construction"
 strange...

Definitely; they had some neat stuff up there a week or two ago...








Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Lyndon Nerenberg

Another company to look at is Yottayotta (www.yottayotta.com).
They just announced their first products last November, and there
isn't much hard product info online yet. For the arena they're
targeting, though, 70TB would be an entry level system.

--lyndon





Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Alan Clegg

Unless the network is lying to me again, Lyndon Nerenberg said: 
 Another company to look at is Yottayotta (www.yottayotta.com).

Yeah, and they have a theme song...

http://www.yottayotta.com/images/YottaYotta_Song.mp3

Or is that a reason *NOT* to look at their product?

AlanC





Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Lyndon Nerenberg

 "Alan" == Alan Clegg [EMAIL PROTECTED] writes:

Alan Or is that a reason *NOT* to look at their product?

I'd buy the storage gear, but I think I'll pass on the album ;-)

--lyndon





Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Mike Smith

 Unless the network is lying to me again, Lyndon Nerenberg said: 
  Another company to look at is Yottayotta (www.yottayotta.com).
 
 Yeah, and they have a theme song...
 
   http://www.yottayotta.com/images/YottaYotta_Song.mp3
 
 Or is that a reason *NOT* to look at their product?

Heh.  The theme song was a worry.

As was the fact that they are well-staffed at the management level, but 
they are still advertising for some *very* key development and 
architectural positions.

What do we know about companies with lots of VP-level employees now? 8(








Re: Extremely large (70TB) File system/server planning

2001-02-09 Thread Lyndon Nerenberg

 "Mike" == Mike Smith [EMAIL PROTECTED] writes:

Mike As was the fact that they are well-staffed at the management
Mike level, but they are still advertising for some *very* key
Mike development and architectural positions.

Actually, they have a bunch of *very* bright engineers working there.
Much of the crew comes from Myrias Research, who were doing some very
bleeding edge parallel computing work in the mid to late '80s.  (Too
bleeding edge for the marketplace, unfortunately ...)

--lyndon






Re: Extremely large (70TB) File system/server planning

2001-02-07 Thread fab

Hi all,

It's true that NetApp filers can't exceed 6TB, but you can easily get
very good performance (pretty good indeed) with them.

If you go with an EMC or IBM box, you will have to manage things that
aren't really your job (I/O, for example).

I think NetApp can be a very simple solution (where others sell
complexity).

Thanks

Fab.










Re: Extremely large (70TB) File system/server planning

2001-02-06 Thread David Greenman

While talking to a friend about what his company is planning to do,
I found out that he is planning a 70TB filesystem/servers/cluster/db.
(Yes, seventy t-e-r-a-b-y-t-e...)

   We could do this using about 44 of the not-yet-announced TSR-3100 fibre
channel RAID storage systems. These are 1.8TB (1.62TB usable) capacity units
in a 3U cabinet. It would take around 200A @ 120VAC (about 18KW) to power all
of them and should fit in about 5 rack cabinets. Total cost would be about
$3 million.
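A quick back-of-the-envelope check of the proposal above (1.62TB usable per 3U unit, figures taken from this post; the 42U cabinet size is an assumption, and the post's "about 5 rack cabinets" presumably leaves headroom for switches and controllers):

```python
import math

target_tb = 70.0
usable_per_unit_tb = 1.62            # usable capacity per TSR-3100, per the post
units = math.ceil(target_tb / usable_per_unit_tb)
usable_total_tb = units * usable_per_unit_tb

rack_units = units * 3               # each unit is 3U
racks = math.ceil(rack_units / 42)   # assuming standard 42U cabinets

print(units, round(usable_total_tb, 2), racks)  # 44 71.28 4
```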

-DG

David Greenman
Co-founder, The FreeBSD Project - http://www.freebsd.org
President, TeraSolutions, Inc. - http://www.terasolutions.com
Pave the road of life with opportunities.





Extremely large (70TB) File system/server planning

2001-02-05 Thread Michael C . Wu

Hello Everyone,

While talking to a friend about what his company is planning to do,
I found out that he is planning a 70TB filesystem/servers/cluster/db.
(Yes, seventy t-e-r-a-b-y-t-e...)

Apparently, he has files that go up to 2GB each, and he actually
requires such a horribly sized cluster.

If he wanted a PC cluster, with 5TB on each PC, he would have
350 machines to maintain.  From past experience maintaining clusters,
I guarantee that he will have at least one box failing every other day.
And I really do not think his idea of using NFS is that good. ;-)
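The "one box every other day" claim checks out roughly; here is the arithmetic, with the per-node MTBF of about two years assumed for illustration (it is not a figure from this post):

```python
# Expected failure rate for a large cluster of independent nodes.
nodes = 350
mtbf_days = 2 * 365                  # assumed per-node MTBF (illustrative)
failures_per_day = nodes / mtbf_days

print(round(failures_per_day, 2))    # 0.48, i.e. about one every other day
```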

Now if we were to go the high-end route (probably more cost
effective), we could pick SANs, large Sun fileservers, or some such.
I still cannot picture him being able to maintain file integrity.

I say that he should attempt to split his filesystems into much
smaller chunks, say 1TB each, and attempt some sort of RAID5 array.
Mirroring or other RAID configurations would prove too costly.
What would you guys do in this case? :)
-- 
+--+
| [EMAIL PROTECTED] | [EMAIL PROTECTED] |
| http://peorth.iteration.net/~keichii | Yes, BSD is a conspiracy. |
+--+





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Mitch Collinsworth

You didn't say what applications this thing is going to support.
That does matter.  A lot.  One thing worth looking at is AFS,
or maybe MR-AFS.  And now OpenAFS.

-Mitch








Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Goblin

NetApp filers?  And what exactly is too costly?  He's got enormous costs
just in doing backups of this thing, and the savings from using NetApp
filers' "snapshots" instead of standard backups will buy you
some disk in the end...

What is this data used for?  Archival?  How often is it accessed?  How
much of the data is "live"?  Has he looked at something other than plain disk?

Broaden his horizons and get the specifics of his needs.

 Your eyes are weary from staring at the CRT.  You feel sleepy.  Notice how
 restful it is to watch the cursor blink.  Close your eyes.  The opinions
 stated above are yours.  You cannot imagine why you ever felt otherwise.






Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread John Gregor

 What would you guys do in this case? :)

I'd call up my friendly regional SGI, Sun, IBM, and Compaq reps
and have them put together proposals.  I'm a former SGI guy and
know that we've had a bunch of installations of this size and larger
(much larger).  It's not that big a deal any more.  I don't know
if that's true for the other vendors; having xfs and 128-processor
machines tends to warp one's definition of what is "hard".  :-)

-JohnG





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Michael C . Wu

On Mon, Feb 05, 2001 at 10:39:02AM -0500, Mitch Collinsworth scribbled:
| You didn't say what applications this thing is going to support.
| That does matter.  A lot.  One thing worth looking at is AFS,
| or maybe MR-AFS.  And now OpenAFS.

He has database(s) of graphics simulation results, i.e. large files that
are largely unrelated to each other.  Compression is not an option.

The files are accessed approximately 3 or 4 times a day on average.
Older files are archived for reference purposes and may never
be accessed after a week.





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Leif Neland



You later say some files may never be accessed after a week.

How about a multi-level storage system, where the files eventually get
written onto DVDs, and either a robot or a student :-) to put the
requested disks online?

Leif







Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Mitch Collinsworth


On Mon, 5 Feb 2001, Michael C . Wu wrote:

 He has database(s) of graphics simulation results. i.e. large files that
 are largely unrelated to each other.  Compression is not an option.
 
 The files are accessed approximately 3 or 4 times a day on average.
 Older files are archived for reference purpose and may never
 be accessed after a week.

Ok, this is a start.  Now is the 70 TB the size of the active files?
Or does that also include the older archived files that may never be
accessed again?

-Mitch






Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Michael C . Wu

On Mon, Feb 05, 2001 at 11:47:58AM -0500, Mitch Collinsworth scribbled:
| Ok, this is a start.  Now is the 70 TB the size of the active files?
| Or does that also include the older archived files that may never be
| accessed again?
70TB is the size of the sum of all files, access or no access.
(They still want to maintain accessibility even though the chances are slim.)





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Matt Dillon


:|  The files are accessed approximately 3 or 4 times a day on average.
:|  Older files are archived for reference purpose and may never
:|  be accessed after a week.
:| 
:| Ok, this is a start.  Now is the 70 TB the size of the active files?
:| Or does that also include the older archived files that may never be
:| accessed again?
:70TB is the size of the sum of all files, access or no access.
:(They still want to maintain accessibility even though the chances are slim.)

This doesn't sound like something you can just throw together with
off-the-shelf PCs and still have something reliable to show for it.
You need a big honking RAID system - maybe a NetApp, maybe something
else.  You have to look at the filesystem and file size limitations
of the unit and the client(s).

FreeBSD can only support 1TB filesystems.  Our device layer
converts everything to DEV_BSIZE (512-byte) blocks, so to be safe:
2^31 x 512 bytes = 1 TB on Intel boxes.  Our NFS implementation has the
same per-filesystem limitation.  Theoretically UFS/FFS is limited
to 2^31 x blocksize, where the blocksize can be larger (e.g. 16384 or
65536 bytes), but I have grave doubts that that actually works...  I'm
fairly certain that we still convert things to 512-byte block numbers
at the device level, and we only use a 32-bit int to store the
block number.
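The limit Matt describes falls straight out of the block-number width; a 32-bit signed block count of 512-byte blocks gives exactly one (binary) terabyte:

```python
# 2^31 block numbers, each addressing a DEV_BSIZE (512-byte) block.
DEV_BSIZE = 512
max_blocks = 2 ** 31
limit_bytes = max_blocks * DEV_BSIZE

print(limit_bytes == 2 ** 40)    # True: exactly 1TB (binary terabyte)
print(limit_bytes // (2 ** 30))  # 1024 (GB)
```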

So FreeBSD could be used as an NFS client, but probably not a server
for your application.  Considering the number of disks you need to
manage, something like a NetApp or other completely self contained
RAID-5-capable system for handling the disks is mandatory.

-Matt






Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Mitch Collinsworth



On Mon, 5 Feb 2001, Michael C . Wu wrote:

 70TB is the size of the sum of all files, access or no access.
 (They still want to maintain accessibility even though the chances are slim.)

Ok, well the next question to look at is how do they define "maintain
accessibility".  In other words what do they consider acceptable?
Accessible in 5 seconds, accessible in 1 minute, accessible in 10
minutes, accessible in 1 hour, accessible overnight?

70 TB, as you have already noticed, is no simple feat to accomplish.
No matter how you slice it, it's going to cost $$.  Different levels
of accessibility requirement for the archived data can be accomplished
with differing technologies and at differing costs.

You could rough out a plan for keeping the whole thing online and
spinning for instant access and then compare the costs of that with
various options that keep the hot data online and archive the rest
in varying ways that allow for differing speed of access.  Maybe you
can archive old data on CDs or tapes.  Perhaps keep more recent
archives "online" in a jukebox where they are fairly quickly
accessible, while older archives are on a rack where someone has to
retrieve them as needed.
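The multi-level scheme sketched above boils down to a policy that picks a storage tier by how recently a file was touched. A minimal sketch, with thresholds invented for illustration (Mitch does not specify any):

```python
# Toy tier-selection policy for hierarchical storage.
def tier(days_since_access):
    if days_since_access <= 7:
        return "online"    # spinning disk, instant access
    elif days_since_access <= 90:
        return "jukebox"   # near-line media, minutes to load
    else:
        return "shelf"     # operator fetches tape/CD, hours to days

print([tier(d) for d in (1, 30, 400)])  # ['online', 'jukebox', 'shelf']
```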

The real question here is: are they really willing to spend what it
would take to keep an archive of this size spinning, including
systems programmers and administrators?  Or are they willing to
spend less and have it take a bit longer to get access to the older
data?

-Mitch






Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Kurt J. Lidl

On Mon, Feb 05, 2001 at 09:50:35AM -0800, Matt Dillon wrote:
 :70TB is the size of the sum of all files, access or no access.
 :(They still want to maintain accessibility even though the chances are slim.)
 
 This doesn't sound like something you can just throw together with
 off-the-shelf PCs and still have something reliable to show for it.
 You need a big honking RAID system - maybe a NetApp, maybe something
 else.  You have to look at the filesystem and file size limitations
 of the unit and the client(s).

NetApp's biggest box can "only" handle 6TB of data, currently, using the
latest and greatest software.  They claim (and I believe them) that
12TB will be the limit later this year.

 So FreeBSD could be used as an NFS client, but probably not a server
 for your application.  Considering the number of disks you need to
 manage, something like a NetApp or other completely self contained
 RAID-5-capable system for handling the disks is mandatory.

Netapps are actually RAID-4 (dedicated parity disk), not RAID-5 (parity data
is recorded across all drives).
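The distinction Kurt draws can be pictured by where each stripe's parity block lands. A toy sketch (the RAID-5 rotation shown is one common "left-symmetric" layout, not necessarily what any particular product uses):

```python
def parity_disk_raid4(stripe, ndisks):
    # RAID-4: parity always lives on one dedicated disk.
    return ndisks - 1

def parity_disk_raid5(stripe, ndisks):
    # RAID-5: parity rotates across all disks, stripe by stripe.
    return (ndisks - 1 - stripe) % ndisks

print([parity_disk_raid4(s, 4) for s in range(4)])  # [3, 3, 3, 3]
print([parity_disk_raid5(s, 4) for s in range(4)])  # [3, 2, 1, 0]
```

The practical upshot is that the RAID-4 parity disk sees every write, which NetApp sidesteps by batching writes in WAFL.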

-Kurt





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Matt Dillon

:2^31 x 512 bytes = 1 TB on Intel boxes.  Our NFS implementation has the
:same per-filesystem limitation.  Theoretically UFS/FFS are limited 

Oops.  I meant, per-file limitation for NFS clients, not per-filesystem.
1TB per file.

-Matt





Re: Extremely large (70TB) File system/server planning

2001-02-05 Thread Mike Smith

 
 :|  The files are accessed approximately 3 or 4 times a day on average.
 :|  Older files are archived for reference purpose and may never
 :|  be accessed after a week.
 :| 
 :| Ok, this is a start.  Now is the 70 TB the size of the active files?
 :| Or does that also include the older archived files that may never be
 :| accessed again?
 :70TB is the size of the sum of all files, access or no access.
 :(They still want to maintain accessibility even though the chances are slim.)
...
 This doesn't sound like something you can just throw together with
 off-the-shelf PCs and still have something reliable to show for it.
 You need a big honking RAID system - maybe a NetApp, maybe something
 else.  You have to look at the filesystem and file size limitations
 of the unit and the client(s).

You can't do this with a NetApp either; they max out at about 6TB now 
(going up to around 12 or so soon).  You might want to talk to EMC and/or 
IBM, both of whom have *extremely* large filers.

Your friend may also want to look at Traakan, who have a novel product in 
this space.








RE: Extremely large (70TB) File system/server planning

2001-02-05 Thread Charles Randall

Does this have to be a single filesystem?

If not, just provide a database front-end that maps some kind of resource
identifier to the filesystem name.

With that, you can span filers and/or filesystems. Seems like the only thing
that would be reasonable.
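A minimal sketch of such a front-end: a catalog maps a resource ID to a (filer, volume) pair so many sub-6TB filers can sit behind one namespace. The filer names, volume path, and hash-based placement policy here are all invented for illustration:

```python
import zlib

catalog = {}  # resource_id -> (filer, volume)

def place(resource_id, filers):
    # Deterministic hash spreads resources across filers; any policy
    # (round-robin, by free capacity, ...) would do just as well.
    filer = filers[zlib.crc32(resource_id.encode()) % len(filers)]
    catalog[resource_id] = (filer, "/vol/data")
    return catalog[resource_id]

def locate(resource_id):
    filer, volume = catalog[resource_id]
    return "%s:%s/%s" % (filer, volume, resource_id)

place("sim-00042", ["filer1", "filer2", "filer3"])
print(locate("sim-00042"))  # e.g. "filer2:/vol/data/sim-00042"
```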

Charles

-Original Message-
From: Mike Smith [mailto:[EMAIL PROTECTED]]
Sent: Monday, February 05, 2001 1:52 PM
To: Matt Dillon
Cc: Michael C . Wu; Mitch Collinsworth; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: Extremely large (70TB) File system/server planning 


 









RE: Extremely large (70TB) File system/server planning

2001-02-05 Thread Kenneth P. Stox


Besides deciding what platform to run this on, remember that 70TB of
spinning disk will give off a surprising amount of heat.  Plan your HVAC carefully.
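For scale, essentially all power drawn by disk arrays ends up as heat. Using the ~18kW figure quoted earlier in the thread, a rough sizing:

```python
# Rough HVAC sizing: electrical draw becomes heat load.
kw = 18.0                              # power estimate quoted in the thread
btu_per_hr = kw * 3412.14              # 1 kW is about 3412 BTU/hr
cooling_tons = btu_per_hr / 12000      # 1 ton of cooling = 12,000 BTU/hr

print(round(btu_per_hr), round(cooling_tons, 1))  # 61419 5.1
```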


