RE: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-05-05 Thread Edgar Martinez
Question for the group: does anyone know of a way to use an Adaptec U160
controller to connect the NAS server to another system, so that the second
system can write to the RAID container through the U160 link?

(FBSD SATA RAID5 SERVER)--(GIG NETLINK)--(NETWORK)
         |
         +-rw-(PCI U160)--(U160 CABLE)--(SERVER)

-Original Message-
From: Tomas Quintero [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 25, 2005 6:06 PM
To: [EMAIL PROTECTED]
Cc: Brent Wiese; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions - UPDATE

I am almost a bit curious why you didn't go with a Microsoft based
solution in a situation like this, where you are needing to provide
SMB based file sharing to obviously Windows client desktops.

Another solution would be to setup a dedicated NAS of some sort. But I
suppose it's too late for all of that.

On 4/25/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 No flaming here, when dealing with projects this big, you cannot be bias
 obviously because generally it is someone else's time and money that is on
 the line. Thanks for the info, I didn't know the whole second array thing,
 that would explain some of the weirdness that I have been seeing.
 
 -Original Message-
 From: Brent Wiese [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 25, 2005 12:54 PM
 To: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: RE: 5.8TB RAID5 SATA Array Questions - UPDATE
 
  Any one else think they know of a better method??
 
 Well, I'm probably going to get totally flamed for this, but since you
 asked...
 
 The better method is to install Windows 2003 Server. Assemble your drives
 into 2TB or less RAID5 volumes (btw, you only want 1 per 3Ware card, more
on
 that in a second) and use Windows 2003 to span those volumes. It'll show
up
 as one drive after that. There is some limit, but I can't remember what it
 is. Its huge though.
 
 And in case you didn't know, 3Ware cards are only speed-optimized for the
 first array. Subsequent arrays on a card run painfully slow. They won't
say
 it in any of their lit, but if you corner their support people, they'll
 admit it (it obvious if you try it).
 
 Sorry to mention M$ here, but it sounds like you invested incredible
amounts
 of time, and even Windows 2003 can be cheaper than your time at some
point.
 
 
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to
[EMAIL PROTECTED]
 


-- 
-Tomas Quintero



RE: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-25 Thread Edgar Martinez
No flaming here; when dealing with projects this big, you cannot be biased,
obviously, because generally it is someone else's time and money that is on
the line. Thanks for the info. I didn't know about the whole second-array
thing; that would explain some of the weirdness that I have been seeing.

-Original Message-
From: Brent Wiese [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 25, 2005 12:54 PM
To: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
Subject: RE: 5.8TB RAID5 SATA Array Questions - UPDATE

 Any one else think they know of a better method??

Well, I'm probably going to get totally flamed for this, but since you
asked...

The better method is to install Windows 2003 Server. Assemble your drives
into RAID5 volumes of 2TB or less (BTW, you only want one per 3ware card; more
on that in a second) and use Windows 2003 to span those volumes. It'll show up
as one drive after that. There is some limit, but I can't remember what it
is. It's huge, though.

And in case you didn't know, 3ware cards are only speed-optimized for the
first array. Subsequent arrays on a card run painfully slow. They won't say
it in any of their literature, but if you corner their support people, they'll
admit it (it's obvious if you try it).

Sorry to mention M$ here, but it sounds like you've invested incredible amounts
of time, and even Windows 2003 can be cheaper than your time at some point.





Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-25 Thread Tomas Quintero
I am almost a bit curious why you didn't go with a Microsoft-based
solution in a situation like this, where you need to provide
SMB-based file sharing to what are obviously Windows client desktops.

Another solution would be to set up a dedicated NAS of some sort. But I
suppose it's too late for all of that.

On 4/25/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 No flaming here, when dealing with projects this big, you cannot be bias
 obviously because generally it is someone else's time and money that is on
 the line. Thanks for the info, I didn't know the whole second array thing,
 that would explain some of the weirdness that I have been seeing.
 
 -Original Message-
 From: Brent Wiese [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 25, 2005 12:54 PM
 To: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: RE: 5.8TB RAID5 SATA Array Questions - UPDATE
 
  Any one else think they know of a better method??
 
 Well, I'm probably going to get totally flamed for this, but since you
 asked...
 
 The better method is to install Windows 2003 Server. Assemble your drives
 into 2TB or less RAID5 volumes (btw, you only want 1 per 3Ware card, more on
 that in a second) and use Windows 2003 to span those volumes. It'll show up
 as one drive after that. There is some limit, but I can't remember what it
 is. Its huge though.
 
 And in case you didn't know, 3Ware cards are only speed-optimized for the
 first array. Subsequent arrays on a card run painfully slow. They won't say
 it in any of their lit, but if you corner their support people, they'll
 admit it (it obvious if you try it).
 
 Sorry to mention M$ here, but it sounds like you invested incredible amounts
 of time, and even Windows 2003 can be cheaper than your time at some point.
 
 
 


-- 
-Tomas Quintero


RE: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-25 Thread Edgar Martinez
Easy answer: the desktops are actually not Windows based; they are Apple
OS X / Linux systems. SMB is just for the transient Windows-based systems
that will need to access the array but do not run NFS.

-Original Message-
From: Tomas Quintero [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 25, 2005 6:06 PM
To: [EMAIL PROTECTED]
Cc: Brent Wiese; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions - UPDATE

I am almost a bit curious why you didn't go with a Microsoft based
solution in a situation like this, where you are needing to provide
SMB based file sharing to obviously Windows client desktops.

Another solution would be to setup a dedicated NAS of some sort. But I
suppose it's too late for all of that.

On 4/25/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 No flaming here, when dealing with projects this big, you cannot be bias
 obviously because generally it is someone else's time and money that is on
 the line. Thanks for the info, I didn't know the whole second array thing,
 that would explain some of the weirdness that I have been seeing.
 
 -Original Message-
 From: Brent Wiese [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 25, 2005 12:54 PM
 To: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: RE: 5.8TB RAID5 SATA Array Questions - UPDATE
 
  Any one else think they know of a better method??
 
 Well, I'm probably going to get totally flamed for this, but since you
 asked...
 
 The better method is to install Windows 2003 Server. Assemble your drives
 into 2TB or less RAID5 volumes (btw, you only want 1 per 3Ware card, more
on
 that in a second) and use Windows 2003 to span those volumes. It'll show
up
 as one drive after that. There is some limit, but I can't remember what it
 is. Its huge though.
 
 And in case you didn't know, 3Ware cards are only speed-optimized for the
 first array. Subsequent arrays on a card run painfully slow. They won't
say
 it in any of their lit, but if you corner their support people, they'll
 admit it (it obvious if you try it).
 
 Sorry to mention M$ here, but it sounds like you invested incredible
amounts
 of time, and even Windows 2003 can be cheaper than your time at some
point.
 
 
 


-- 
-Tomas Quintero



Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-25 Thread Tomas Quintero
Ah, my mistake; I hadn't read all of what was said in its entirety.

On 4/25/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 Easy answer...the desktops are actually not windows based...they are Apple
 OSX / Linux systems...SMB is just for the transient Windows based systems
 that will need to access the array, but do not run NFS.
 
 -Original Message-
 From: Tomas Quintero [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 25, 2005 6:06 PM
 To: [EMAIL PROTECTED]
 Cc: Brent Wiese; freebsd-questions@freebsd.org
 Subject: Re: 5.8TB RAID5 SATA Array Questions - UPDATE
 
 I am almost a bit curious why you didn't go with a Microsoft based
 solution in a situation like this, where you are needing to provide
 SMB based file sharing to obviously Windows client desktops.
 
 Another solution would be to setup a dedicated NAS of some sort. But I
 suppose it's too late for all of that.
 
 On 4/25/05, Edgar Martinez [EMAIL PROTECTED] wrote:
  No flaming here, when dealing with projects this big, you cannot be bias
  obviously because generally it is someone else's time and money that is on
  the line. Thanks for the info, I didn't know the whole second array thing,
  that would explain some of the weirdness that I have been seeing.
 
  -Original Message-
  From: Brent Wiese [mailto:[EMAIL PROTECTED]
  Sent: Monday, April 25, 2005 12:54 PM
  To: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
  Subject: RE: 5.8TB RAID5 SATA Array Questions - UPDATE
 
   Any one else think they know of a better method??
 
  Well, I'm probably going to get totally flamed for this, but since you
  asked...
 
  The better method is to install Windows 2003 Server. Assemble your drives
  into 2TB or less RAID5 volumes (btw, you only want 1 per 3Ware card, more
 on
  that in a second) and use Windows 2003 to span those volumes. It'll show
 up
  as one drive after that. There is some limit, but I can't remember what it
  is. Its huge though.
 
  And in case you didn't know, 3Ware cards are only speed-optimized for the
  first array. Subsequent arrays on a card run painfully slow. They won't
 say
  it in any of their lit, but if you corner their support people, they'll
  admit it (it obvious if you try it).
 
  Sorry to mention M$ here, but it sounds like you invested incredible
 amounts
  of time, and even Windows 2003 can be cheaper than your time at some
 point.
 
 
 
 
 --
 -Tomas Quintero
 
 


-- 
-Tomas Quintero


Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Benson Wong
No, that doesn't work. fdisk couldn't figure out how to partition it
correctly; actually, it had a very hard time figuring out correct
cylinder/heads/sectors values that worked. I gave up on this.

I boot from a 3Ware RAID5 host array (160GB). 

2. No. I had 2.2TB arrays and I couldn't create a filesystem that big.
I split them up in hardware to 1.1TB each and created 4 x 1.1TB
arrays. That was the only workable solution I could find.

Ben

On 4/22/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 Are you booting to the array? Is it over 2TB? Or are you mounting the
 array?


Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Benson Wong
Hi Edgar, 

Good to hear you finally got it running. Sounds like you went through
the same challenges I did. I wound up getting FreeBSD
5.4-STABLE running, and it's been stable for weeks. I've put it through
quite a bit of load lately and it seems to be running well.

Comments below: 

 
 As much loved as BSD is to me.it simply just isn't up to the challenge at
 all.its far too difficult to get in a properly working state.and the
 limitations imposed are just too difficult to overcome easily.

Sounds like you hit the same 2TB limit on both FreeBSD and Linux. What
were the limitations that were too difficult to overcome?

 
 I ended up using Ubuntu which not only had all the driver support to all
 the
 devices and controllers.but also had little to no problem getting the
 system
 installed properly.It however does not like/want to boot to the array.so I
 installed additional drives (Seagate sata) and created a mirror (300GB) for
 the system to live on and bring up the array (/dev/md0) using mdadm.overall
 it was easy and nice.there are several caveats left to wrestle with.

I wonder why it wouldn't boot off of your large array. It could be that
it is simply too big for an old PC BIOS to recognize. I think you could
get around this by creating a small partition at the beginning of the
array. I tried this too, but had no luck; my arrays were over Fibre
Channel, though that should have been handled by the FC card.

 
 Currently although the 3ware controller can create a huge 4TB raid5 array,
 nothing exists that I am aware of that can utilize the entire container.
 Every single OS that exists seems to all share the 2TB limitations..so
 while
 the BIOS can see it.everything else will only see 2TB..this includes NFS
 on OSX (which don't get me started on the horrible implementation mistakes
 from apple and their poor NFS support..i mean NFSv4 comeon! Why is that
 hard!!)

It is strange that OS X can't see partitions larger than 2TB over
NFS. I would assume that an OS X client talking to an Xserve would be
able to see it. I haven't tested this, so I wouldn't know for sure.

I'm more curious about the 2TB limit on Linux. I figured Linux, with
its great filesystem support, would be able to handle a partition
larger than 2TB. What were the limitations you ran into?

 So to get past Ubuntu's 2TB problem, I created 2xRAID5 2TB (1.8TB
 reporting)
 containers on the array.and then using software raid.created 1xRAID0 using
 the 2xRAID5 containers.which create 1xRAID0 @4TB.

Why did software raid0 help you get over the 2TB limitation? Wouldn't
it still appear as one filesystem that is way too big to use?
Something doesn't add up here. Pun not intended. :)

 
 Utterly horrible.probably the WORST half-assed installation imaginable.in
 my
 honest opinion.here are my desires.

I chose to break my 4.4TB system into 4 x 1.1TB arrays. This is very
well supported by FreeBSD. The downside is that I had to modify my
email system configuration and maintenance scripts to work with four
smaller arrays rather than a single large one.

I purposely avoided software RAID because it makes maintenance of the
array a lot more complex. It usually doesn't take much skill or time to
fix a hardware array, but the learning curve for fixing a software
array is a lot higher. Plus, I don't think software RAID on Linux is
any good, or on FreeBSD for that matter.

 Create 1xRAID5 @ 4TB.install the OS TO the array.boot to the array and then
 share out 4TB via NFS/SMB.was that too much to ask?? Obviously it was.
 
 So in response.I can modified the requirements.
 
 Create [EMAIL PROTECTED] an OS TO a [EMAIL PROTECTED] to the
 RAID1..and SHARE out the 4TB.

This is essentially what I did as well. I didn't know about the
limitations when I first started.

ben


RE: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Edgar Martinez
Are you booting to the array? Is it over 2TB? Or are you mounting the array?

-Original Message-
From: Benson Wong [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 22, 2005 1:40 PM
To: [EMAIL PROTECTED]
Cc: Nick Pavlica; Dan Nelson; Nick Evans; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions - UPDATE

Hi Edgar, 

Good to hear you finally got it running. Sounds like you went through
the same challenges I went through. I wound up getting FreeBSD
5.4-STABLE running and it's been stable for weeks. I put it through
quite a bit of load lately and it seems to running well.

Comments below: 

 
 As much loved as BSD is to me.it simply just isn't up to the challenge at
 all.its far too difficult to get in a properly working state.and the
 limitations imposed are just too difficult to overcome easily.

Sounds like you hit the same 2TB limit on both FBSD and Linux. What
was the limitations that were too difficult to overcome?

 
 I ended up using Ubuntu which not only had all the driver support to all
 the
 devices and controllers.but also had little to no problem getting the
 system
 installed properly.It however does not like/want to boot to the array.so I
 installed additional drives (Seagate sata) and created a mirror (300GB)
for
 the system to live on and bring up the array (/dev/md0) using
mdadm.overall
 it was easy and nice.there are several caveats left to wrestle with.

I wonder why it wouldn't want to boot off of your large array. It
could be that it is way too big for the old PC bios to recognize. I
think you could get around this by creating a small partition at the
beginning of your array. I tried this too, but no luck. My arrays were
over fiber channel but that should have been taken care of by the FC
card.

 
 Currently although the 3ware controller can create a huge 4TB raid5 array,
 nothing exists that I am aware of that can utilize the entire container.
 Every single OS that exists seems to all share the 2TB limitations..so
 while
 the BIOS can see it.everything else will only see 2TB..this includes NFS
 on OSX (which don't get me started on the horrible implementation mistakes
 from apple and their poor NFS support..i mean NFSv4 comeon! Why is that
 hard!!)

That is strange that OSX can't see larger than 2TB partitions over
NFS. I would assume that an OSX client talking to an XServe would be
able to see it. I haven't tested this so I wouldn't know for sure.

I'm more curious about the 2TB limit on Linux. I figured Linux, with
it's great file system support, would be able to handle a larger than
2TB partition. What were the limitations you ran into?

 So to get past Ubuntu's 2TB problem, I created 2xRAID5 2TB (1.8TB
 reporting)
 containers on the array.and then using software raid.created 1xRAID0 using
 the 2xRAID5 containers.which create 1xRAID0 @4TB.

Why did software raid0 help you get over the 2TB limitation? Wouldn't
it still appear as one filesystem that is way too big to use?
Something doesn't add up here. Pun not intended. :)

 
 Utterly horrible.probably the WORST half-assed installation imaginable.in
 my
 honest opinion.here are my desires.

I chose to break my 4.4TB system into 4 x 1.1TB arrays. This is very
well supported by FreeBSD. The downside is that I had to modify my
email system configuration and maintenance scripts to work with four
smaller arrays rather than a single large one.

I purposely avoided using software raid because it makes maintenance
of the array a lot more complex. It usually doesn't take a lot of
skills or time to fix a hardware array but the learning curve for
fixing a software array is a lot higher. Plus I don't think software
raid on linux is any good, or on FreeBSD for that matter.

 Create 1xRAID5 @ 4TB.install the OS TO the array.boot to the array and
then
 share out 4TB via NFS/SMB.was that too much to ask?? Obviously it was.
 
 So in response.I can modified the requirements.
 
 Create [EMAIL PROTECTED] an OS TO a [EMAIL PROTECTED] to the
 RAID1..and SHARE out the 4TB.

This is essentially what I did as well. Didn't know about the
limitations when I first started.

ben



RE: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Edgar Martinez
All,

 

So, after a solid chunk of life has been fully drained from me, here are
several conclusions, obviously open for discussion if anyone wants to pick my
brain. (Yes, we reduced our array size; you'll see why.)

 

As much as BSD is loved by me, it simply isn't up to the challenge at
all. It's far too difficult to get into a properly working state, and the
limitations imposed are just too difficult to overcome easily.

 

I ended up using Ubuntu, which not only had driver support for all the
devices and controllers, but also had little to no problem getting the system
installed properly. It does not, however, like/want to boot to the array, so I
installed additional drives (Seagate SATA) and created a mirror (300GB) for
the system to live on, bringing up the array (/dev/md0) using mdadm. Overall
it was easy and nice, though there are several caveats left to wrestle with.

 

Currently, although the 3ware controller can create a huge 4TB RAID5 array,
nothing exists that I am aware of that can utilize the entire container.
Every single OS seems to share the 2TB limitation, so while
the BIOS can see it, everything else will only see 2TB. This includes NFS
on OS X (don't get me started on the horrible implementation mistakes
from Apple and their poor NFS support... I mean, NFSv4, come on! Why is that
hard?!)

 

So to get past Ubuntu's 2TB problem, I created 2 x RAID5 2TB (1.8TB reported)
containers on the array, and then, using software RAID, created 1 x RAID0 from
the 2 x RAID5 containers, which yields 1 x RAID0 @ 4TB.
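For what it's worth, the capacity arithmetic here is easy to sanity-check; a quick sketch (Python, with the drive count and disk size as illustrative assumptions, not figures from the post):

```python
def raid5_usable(drives: int, drive_tb: float) -> float:
    """RAID5 sacrifices one drive's worth of capacity to parity."""
    return (drives - 1) * drive_tb

def raid0_usable(*members_tb: float) -> float:
    """RAID0 stripes its members, so capacities simply add."""
    return sum(members_tb)

# Two hypothetical 5-drive RAID5 containers built from 500GB disks
# (drive counts assumed for illustration): 2.0TB usable each,
# striped into one 4.0TB RAID0 volume.
c = raid5_usable(5, 0.5)
print(c, raid0_usable(c, c))
```

Whatever the real drive counts, the point stands: each RAID5 member stays under the 2TB per-container ceiling, and the software stripe glues them into one larger volume.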

 

So Samba allows clients to see 2TB, and FTP also allows 2TB. This is
probably the most complicated and yet simple thing I can say I have done.

 

Utterly horrible; probably the WORST half-assed installation imaginable, in my
honest opinion. Here are my desires:

 

Create 1 x RAID5 @ 4TB, install the OS TO the array, boot to the array, and
then share out 4TB via NFS/SMB. Was that too much to ask?? Obviously it was.

 

So in response, I have modified the requirements:

 

Create [EMAIL PROTECTED] an OS TO a [EMAIL PROTECTED] to the
RAID1..and SHARE out the 4TB.

 

Anyone else think they know of a better method??

 


From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 12:15 PM
To: [EMAIL PROTECTED]
Cc: Dan Nelson; Nick Evans; Benson Wong; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 

On 4/15/05, Edgar Martinez [EMAIL PROTECTED] wrote:

OK... so now we are going into some new territory. I am curious if you would
care to elaborate a bit more; I am intrigued. If anyone wants me to do
some experiments or test something, let me know. I for one welcome any
attempts at pushing limits or trying new things...


I would help do some testing, but I don't have any storage that large at the
moment. I'm curious how 5.4-RC2 handles very large volumes. Have you
already tried fdisk and newfs?

--Nick



Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Chad Leigh -- Shire . Net LLC
On Apr 14, 2005, at 5:28 PM, Benson Wong wrote:
So theoretically it should go over 1000TB. I've conducted several bastardized
installations, due to sysinstall not being able to do anything over the 2TB
limit, by creating the partition ahead of time. I am going to be attacking
this tonight, and my efforts will be primarily focused on creating one large
5.8TB slice. Wish me luck!!

PS: Muhaa haa haa!
You're probably going to run into boo hoo hoo hoo. Most likely you
won't be able to get over the 2TB limit. Also, don't use sysinstall; I
was never able to get it to work well, probably because my arrays were
mounted over Fibre Channel and fdisk craps out.
This is what I did:
dd if=/dev/zero of=/dev/da0 bs=1k count=1
disklabel -rw da0 auto
newfs /dev/da0

I have no experience doing any of this, but this has come up before on
the lists, and someone posted the magic incantations to use to create
these things by hand. So use Google or another search engine to search
the list archives for TB-sized filesystems. There is good info there.

Chad


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Nick Pavlica
I am going to be attacking this tonight and my efforts will be primarily 
focused on creating one large 5.8TB slice.wish me luck!! 

How did this go? Were you able to create the very large slice?
--Nick 

--
  
 From: Nick Pavlica [mailto:[EMAIL PROTECTED]
 Sent: Thursday, April 14, 2005 2:49 PM
 To: Benson Wong
 Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
 Subject: Re: 5.8TB RAID5 SATA Array Questions
  
   Is there any limitations that would prevent a single volume that large? 
 (if
  I remember there is a 2TB limit or something)
 2TB is the largest for UFS2. 1TB is the largest for UFS1.
 
 Is the 2TB limit that you mention only for x86? This file system 
 comparison lists the maximum size to be much larger (
 http://en.wikipedia.org/wiki/Comparison_of_file_systems).
 
 --Nick



Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Nick Evans
On Thu, 14 Apr 2005 17:13:48 -0500
Edgar Martinez [EMAIL PROTECTED] wrote:

 Benson: GREAT RESPONSE!! I don't think I could have done any better myself.
 Although I knew most of the information you provided, it was good to know
 that my knowledge was not very far off. It's also reassuring that I'm not
 the only nut job building ludicrous systems.
 
  
 
 Nick, I believe that we may have some minor misinformation on our hands..
 
  
 
 I refer you both to http://www.freebsd.org/projects/bigdisk/ which, according
 to the page, says:
 
  
 
 When the UFS filesystem was introduced to BSD in 1982, its use of 32 bit
 offsets and counters to address the storage was considered to be ahead of
 its time. Since most fixed-disk storage devices use 512 byte sectors, 32
 bits allowed for 2 Terabytes of storage. That was an almost un-imaginable
 quantity for the time. But now that 250 and 400 Gigabyte disks are available
 at consumer prices, it's trivial to build a hardware or software based
 storage array that can exceed 2TB for a few thousand dollars.
 
 The UFS2 filesystem was introduced in 2003 as a replacement to the original
 UFS and provides 64 bit counters and offsets. This allows for files and
 filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be
 sufficient for quite a long time. UFS2 largely solved the storage size
 limits imposed by the filesystem. Unfortunately, many tools and storage
 mechanisms still use or assume 32 bit values, often keeping FreeBSD limited
 to 2TB.
 
 So theoretically it should go over 1000TB. I've conducted several bastardized
 installations, due to sysinstall not being able to do anything over the 2TB
 limit, by creating the partition ahead of time. I am going to be attacking
 this tonight and my efforts will be primarily focused on creating one large
 5.8TB slice. Wish me luck!!
 
  
 
 PS: Muhaa haa haa!
 

You'll need to use GPT to make this work for anything over 2TB. See gpt(8).
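As a footnote to the quoted bigdisk text, the 2TB and 2^73 figures are plain sector arithmetic; a minimal check (Python; the only input is the 512-byte sector size the page itself cites):

```python
SECTOR = 512  # bytes per sector, as cited on the bigdisk page

# 32-bit sector offsets (original UFS, and many tools still today):
max_bytes_32 = 2**32 * SECTOR
print(max_bytes_32 // 2**40)  # prints 2 -- the familiar 2TB ceiling

# 64-bit offsets (UFS2): 2^64 sectors * 512 bytes = 2^73 bytes
max_bytes_64 = 2**64 * SECTOR
assert max_bytes_64 == 2**73
```

So the ceiling is a property of 32-bit sector addressing in the tools, not of the disks or the controller.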

Nick


RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
Yeah, it was pretty much boo hoo hoo... it appears we have either backplane,
MI cable, or controller problems. I was only getting 11 drives available,
with improper identification, so I am going through the tedious task of
ripping it all down and testing backplanes, drives, and cables one at
a time...

On a side note, I was able to do my bastardization procedure using a live
CD to get it up to 3.8TB. That was as far as I took it, as I want to get the
other problems fixed first.

Unfortunately, due to my determination (aka: being a sore loser), one of two
options exists: this WILL work, or one of us is going to die trying...

Is it just me, or does everyone plead with, reason with, and insult their
equipment? I swear to God, it derives pleasure from frustration...

-Original Message-
From: Benson Wong [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 14, 2005 6:29 PM
To: [EMAIL PROTECTED]
Cc: freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 
 So theoretically it should go over 1000TB.I've conducted several
bastardized
 installations due to sysinstall not being able to do anything over the 2TB
 limit by creating the partition ahead of time.I am going to be attacking
 this tonight and my efforts will be primarily focused on creating one
large
 5.8TB slice..wish me luck!! 
 
   
 
 PS: Muhaa haa haa! 
You're probably going to run into boo hoo hoo hoo. Most likely you
won't be able to get over the 2TB limit. Also don't use sysinstall, I
was never able to get it to work well. Probably because my arrays were
mounted over fiber channel and fdisk craps out.

This is what I did: 

dd if=/dev/zero of=/dev/da0 bs=1k count=1
disklabel -rw da0 auto
newfs /dev/da0

That creates one large UFS2 slice for FreeBSD. Let me know if you get
it over 2TB; I was never able to have any luck.

Another reason you might want to avoid a super large filesystem is
that UFS2 is not journaling. If the server crashes, it will take fsck
a LONG time to check all those inodes!

Ben.



RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
Sorry for the delay in responding; I had to go to the IRS today. OMFG, what a
model of inefficiency.

 

I am having some minor hardware issues with the build that we are going to
work on correcting first, but I will definitely keep everyone informed
about what is going on.

 


From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 9:37 AM
To: [EMAIL PROTECTED]
Cc: Benson Wong; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 

I am going to be attacking this tonight and my efforts will be primarily
focused on creating one large 5.8TB slice..wish me luck!! 
 
How did this go?  Were you able to create the very large slice?


 --Nick 

 




From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 14, 2005 2:49 PM
To: Benson Wong
Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 

 Is there any limitations that would prevent a single volume that large?
(if
 I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.

Is the 2TB limit that you mention only for x86?  This file system comparison
lists the maximum size to be much larger
(http://en.wikipedia.org/wiki/Comparison_of_file_systems ).

--Nick

 

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
Interesting...

gpt add [-b number] [-i index] [-s count] [-t type] device ...
     The add command allows the user to add a new partition to an
     existing table.  By default, it will create a UFS partition
     covering the first available block of an unused disk space.  The
     command-specific options can be used to control this behaviour.

I am assuming that the docs were not updated to reflect that it's talking
about UFS2? Or is it actually correct?

-Original Message-
From: Nick Evans [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 9:45 AM
To: [EMAIL PROTECTED]
Cc: 'Nick Pavlica'; 'Benson Wong'; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

On Thu, 14 Apr 2005 17:13:48 -0500
Edgar Martinez [EMAIL PROTECTED] wrote:

 Benson..GREAT RESPONSE!! I Don't think I could have done any better
myself.
 Although I knew most of the information you provided, it was good to know
 that my knowledge was not very far off. It's also reassuring that I'm not
 the only nut job building ludicrous systems..
 
  
 
 Nick, I believe that we may have some minor misinformation on our hands..
 
  
 
 I refer you both to http://www.freebsd.org/projects/bigdisk/ which
according
 to the page.
 
  
 
 When the UFS filesystem was introduced to BSD in 1982, its use of 32 bit
 offsets and counters to address the storage was considered to be ahead of
 its time. Since most fixed-disk storage devices use 512 byte sectors, 32
 bits allowed for 2 Terabytes of storage. That was an almost un-imaginable
 quantity for the time. But now that 250 and 400 Gigabyte disks are
available
 at consumer prices, it's trivial to build a hardware or software based
 storage array that can exceed 2TB for a few thousand dollars.
 
 The UFS2 filesystem was introduced in 2003 as a replacement to the
original
 UFS and provides 64 bit counters and offsets. This allows for files and
 filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be
 sufficient for quite a long time. UFS2 largely solved the storage size
 limits imposed by the filesystem. Unfortunately, many tools and storage
 mechanisms still use or assume 32 bit values, often keeping FreeBSD
limited
 to 2TB.
 
 So theoretically it should go over 1000TB. I've conducted several bastardized
 installations due to sysinstall not being able to do anything over the 2TB
 limit by creating the partition ahead of time. I am going to be attacking
 this tonight and my efforts will be primarily focused on creating one large
 5.8TB slice.. wish me luck!!
 
  
 
 PS: Muhaa haa haa!
 

You'll need to use GPT to make this work for anything over 2TB. Man gpt
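For what it's worth, a minimal sketch of the GPT route (the device name da0 is an example; substitute your array's actual device, and note these commands destroy any existing data on the disk):

```shell
# WARNING: destructive -- wipes any existing partition data on da0.
gpt create da0        # write a fresh GUID partition table to the disk
gpt add da0           # add one partition spanning the free space
                      # (the UFS type is the gpt(8) default)
newfs -U /dev/da0p1   # UFS2 with soft updates on the new partition
```

Because GPT uses 64-bit LBAs, the 32-bit 2TB ceiling of the traditional MBR/disklabel tools does not apply to the partition itself.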

Nick

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Dan Nelson
In the last episode (Apr 15), Nick Evans said:
 You'll need to use GPT to make this work for anything over 2TB. Man
 gpt

Or don't bother with a partition table at all, which makes growing the
filesystem later on quite a bit easier.

-- 
Dan Nelson
[EMAIL PROTECTED]
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
OK...so now we are going into some new territory...I am curious if you would
care to elaborate a bit more...I am intrigued...if anyone wants me to do
some experiments or test something, let me know...I for one welcome any
attempts at pushing any limits or trying new things...

-Original Message-
From: Dan Nelson [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 11:18 AM
To: Nick Evans
Cc: [EMAIL PROTECTED]; 'Benson Wong'; 'Nick Pavlica';
freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

In the last episode (Apr 15), Nick Evans said:
 You'll need to use GPT to make this work for anything over 2TB. Man
 gpt

Or don't bother with a partition table at all, which makes growing the
filesystem later on quite a bit easier.

-- 
Dan Nelson
[EMAIL PROTECTED]

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Dan Nelson
In the last episode (Apr 15), Edgar Martinez said:
 OK...so now we are going into some new territory...I am curious if
 you would care to elaborate a bit more...I am intrigued...if anyone
 wants me to do some experiments or test something, let me know...I
 for one welcome any attempts at pushing any limits or trying new
 things...

If your array is just going to used for one large filesystem, you can
skip any partitioning steps and newfs the base device directly.  then
if you decide to grow the array (and if your controller supports
nondestructive resizing), you can use growfs to expand the filesystem
without the extra step of manually adjusting a partition table.
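Dan's label-free approach might look like this in practice (device name and mount point are examples, and the initial newfs is destructive):

```shell
# No fdisk/disklabel step at all -- newfs the raw device (destructive!)
newfs -U /dev/da0
mount /dev/da0 /storage

# Later, after the controller has non-destructively grown the array:
umount /storage
growfs /dev/da0       # expand the filesystem to fill the larger device
fsck -y /dev/da0      # sanity check before remounting
mount /dev/da0 /storage
```

Since there is no partition table, there is no label to hand-edit before growfs can see the new space.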

-- 
Dan Nelson
[EMAIL PROTECTED]
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Nick Pavlica
On 4/15/05, Edgar Martinez [EMAIL PROTECTED] wrote:
 
 OK...so now we are going into some new territory...I am curious if you 
 would
 care to elaborate a bit more...I am intrigued...if anyone wants me to do
 some experiments or test something, let me know...I for one welcome any
 attempts at pushing any limits or trying new things...
 

I would help do some testing but I don't have any storage that large at the
moment. I'm curious how 5.4-RC2 handles very large volumes. Have you
already tried fdisk and newfs?

--Nick
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
I don't think I have ever done that...or even considered that it was
possible...The controller does support growing the array...Guess I'll give
it a shot, starting with 2TB and then growing it in increments to see how it
behaves...any suggestions for newfs'ing the device directly??

-Original Message-
From: Dan Nelson [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 12:15 PM
To: Edgar Martinez
Cc: 'Nick Evans'; 'Benson Wong'; 'Nick Pavlica';
freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

In the last episode (Apr 15), Edgar Martinez said:
 OK...so now we are going into some new territory...I am curious if
 you would care to elaborate a bit more...I am intrigued...if anyone
 wants me to do some experiments or test something, let me know...I
 for one welcome any attempts at pushing any limits or trying new
 things...

If your array is just going to used for one large filesystem, you can
skip any partitioning steps and newfs the base device directly.  then
if you decide to grow the array (and if your controller supports
nondestructive resizing), you can use growfs to expand the filesystem
without the extra step of manually adjusting a partition table.

-- 
Dan Nelson
[EMAIL PROTECTED]

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Edgar Martinez
I was hoping that 5.4 would be out by the time I started this project. I'll
give it a shot to see how it behaves.

 


From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 15, 2005 12:15 PM
To: [EMAIL PROTECTED]
Cc: Dan Nelson; Nick Evans; Benson Wong; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 

On 4/15/05, Edgar Martinez [EMAIL PROTECTED] wrote:

OK...so now we are going into some new territory...I am curious if you would
care to elaborate a bit more...I am intrigued...if anyone wants me to do
some experiments or test something, let me know...I for one welcome any 
attempts at pushing any limits or trying new things...


I would help do some testing but I don't have any storage that large at the
moment.  I'm curious how 5.4-RC2 handles very large volumes.  Have you
already tried fdisk and newfs?

--Nick

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Benson Wong
 
 If your array is just going to used for one large filesystem, you can
 skip any partitioning steps and newfs the base device directly.  then
 if you decide to grow the array (and if your controller supports
 nondestructive resizing), you can use growfs to expand the filesystem
 without the extra step of manually adjusting a partition table.
 

So you don't actually need to disklabel it?
You can just go newfs {options} /dev/da0 and it will just work? 

Hmm.. wish I had something to test that with because I thought I had
to disklabel first and then newfs it.

Ben.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
I'm halfway through a project using about the same amount of storage:
5.6TB on an attached Apple XServe RAID. After everything I have about
4.4TB of usable space, 14 x 400GB HDDs in 2 RAID5 arrays.

 All,

 I have a project in which I have purchased the hardware to build a massive
 file server (specifically for video). The array from all estimates will come
 in at close to 5.8TB after overhead and formatting. Questions are:

 What Version of BSD (5.3, 5.4, 4.X)?
If all your hardware is compatible with 5.3-RELEASE, use that. It is
quite stable. I had to upgrade through buildworld to 5.4-STABLE
because the onboard NIC didn't get recognized. Don't use 4.X since it
doesn't support UFS2; 4.X also doesn't see partitions larger than 1TB.
I sliced up my XRAID so it shows 4 x 1.1TB arrays, which show up
like this in 5.x:

/dev/da0c  1.1T 32M996G 0%/storage1
/dev/da2c  1.1T 27G969G 3%/storage3
/dev/da3c  1.1T186M996G 0%/storage4
/dev/da1c  1.1T156K996G 0%/storage2

These are NFS mounted, and in FBSD 4.9 they look like this:
server:/storage1   -965.4G32M   996G 0%/storage1
server:/storage2   -965.4G   156K   996G 0%/storage2
server:/storage3   -965.4G27G   969G 3%/storage3
server:/storage4   -965.4G   186M   996G 0%/storage4

I'm in the process of slowly migrating all the servers to 5.3.

Also, UFS2 allows for lazy inode initialization: it won't go and
allocate all the inodes at one time, only when it needs more. This is
a large time saver, because TB-size partitions will likely have
hundreds of millions of inodes. Each one of my 1.1TB arrays has about
146M inodes!
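As a rough sanity check (assuming newfs's historical default density of one inode per four fragments, i.e. one per 8K with the default 2K fragments), a 1.1TB array lands in the same ballpark as that 146M figure:

```shell
# One inode per 4 fragments, at 2048 bytes per fragment = 8192 bytes/inode
bytes_per_inode=$((4 * 2048))
array_bytes=$((1100 * 1000 * 1000 * 1000))    # ~1.1TB (decimal)
echo $((array_bytes / bytes_per_inode))       # roughly 134 million inodes
```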


 What should the stripe size be for the array for speed when laying down
 video streams?

This is more of a 3Ware RAID thing. Not sure; use a larger stripe size
because you're likely using larger files. For the FBSD block/fragment
size I stuck with the default 16K blocks / 2K fragments, even though
8K blocks and 1K frags would be more efficient for what I'm
using it for (Maildir storage). I did some benchmarks and 16K/2K
performed slightly better. Stick to the default.
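Spelling those defaults out explicitly (the device name is hypothetical, and newfs picks these same values if you omit the flags):

```shell
# 16K blocks / 2K fragments -- the newfs defaults.
# Block size must be 8x the fragment size; both are powers of two.
newfs -U -b 16384 -f 2048 /dev/da0
```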


 What filesystem?
UFS2.


 Is there any limitations that would prevent a single volume that large? (if
 I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.


 The idea is to provide as much network storage as possible as fast as
 possible, any particular service? (SMB. NFS, ETC)

I share it all over NFS. Haven't done extensive testing yet but NFS is
alright. I just made sure I have lots of NFS server processes and
tuned it a bit using nfsiod. Haven't tried SMB but SMB is usually
quite slow. I would recommend using whatever your client machines
support and tuning for that.


 Raid controller: 3Ware 9500S-12MI
I use a 9500S in my system as well. These are quite slow from the
benchmarks I've read.

--
This isn't one of your questions but I'm going to share this anyway.
After building this new massive email storage system, I concluded that
FreeBSD's large file system support is sub-par. I love FreeBSD and I'm
running it on pretty much every server, but progress on multi-TB file
systems is not up to snuff yet. Likely because the developers do not
have access to large, expensive disk arrays and equipment. Maybe the
FreeBSD Foundation can throw some $$ towards this.

If you haven't already purchased the equipment I would recommend going
with an XServe + XRAID. Mostly because it will probably be a breeze to
set up and use. The price is a premium but for a couple of extra
grand, it is worth saving the headaches of configuration.

My network is predominantly FBSD, so I chose FBSD to keep things
more homogeneous and have FBSD NFS talking to FBSD NFS. If I didn't
dislike Linux distros so much I would probably have used Linux and
its fantastic selection of stable, modern file systems with
journaling support.

Another thing you will likely run into with FBSD is creating the
partitions. I didn't have much luck with sysinstall/fdisk for creating
the large file systems. My arrays are mounted over Fibre Channel, so
you might have more luck. Basically I had to use disklabel and newfs
from the shell prompt. It worked, but took a few days of googling and
documentation scanning to figure it all out.

Hope that helps. Let me know if you need any more info.

Ben.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Nick Pavlica
 Is there any limitations that would prevent a single volume that large? 
(if
 I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.

Is the 2TB limit that you mention only for x86? This file system comparison 
lists the maximum size to be much larger (
http://en.wikipedia.org/wiki/Comparison_of_file_systems).

--Nick
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
From my experience mucking around with UFS1/UFS2, this is what I
learned. On UFS2 the largest filesystem you can have is 2TB. I tried
with 2.2TB and it wouldn't handle it.

I read somewhere that UFS2 gives you 2^(32-1) 1K blocks per filesystem
and UFS1 2^(31-1). That is essentially a 2TB max filesystem for UFS2
and a 1TB filesystem for UFS1.
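Those block counts do work out to the stated limits; a quick arithmetic check, taking the 1K blocks literally:

```shell
tb=$((1024 * 1024 * 1024 * 1024))       # bytes in 1 TB (binary)
ufs2_bytes=$(( (1 << 31) * 1024 ))      # 2^(32-1) one-kilobyte blocks
ufs1_bytes=$(( (1 << 30) * 1024 ))      # 2^(31-1) one-kilobyte blocks
echo "UFS2: $((ufs2_bytes / tb)) TB"    # prints "UFS2: 2 TB"
echo "UFS1: $((ufs1_bytes / tb)) TB"    # prints "UFS1: 1 TB"
```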

Ben
 
On 4/14/05, Nick Pavlica [EMAIL PROTECTED] wrote:
  Is there any limitations that would prevent a single volume that large?
 (if
   I remember there is a 2TB limit or something)
  2TB is the largest for UFS2. 1TB is the largest for UFS1.
  
  Is the 2TB limit that you mention only for x86?  This file system
 comparison lists the maximum size to be much larger
 (http://en.wikipedia.org/wiki/Comparison_of_file_systems).
  
  --Nick
  


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Edgar Martinez
Benson..GREAT RESPONSE!! I Don't think I could have done any better myself.
Although I knew most of the information you provided, it was good to know
that my knowledge was not very far off. It's also reassuring that I'm not
the only nut job building ludicrous systems..

 

Nick, I believe that we may have some minor misinformation on our hands..

 

I refer you both to http://www.freebsd.org/projects/bigdisk/ which according
to the page.

 

When the UFS filesystem was introduced to BSD in 1982, its use of 32 bit
offsets and counters to address the storage was considered to be ahead of
its time. Since most fixed-disk storage devices use 512 byte sectors, 32
bits allowed for 2 Terabytes of storage. That was an almost un-imaginable
quantity for the time. But now that 250 and 400 Gigabyte disks are available
at consumer prices, it's trivial to build a hardware or software based
storage array that can exceed 2TB for a few thousand dollars.

The UFS2 filesystem was introduced in 2003 as a replacement to the original
UFS and provides 64 bit counters and offsets. This allows for files and
filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be
sufficient for quite a long time. UFS2 largely solved the storage size
limits imposed by the filesystem. Unfortunately, many tools and storage
mechanisms still use or assume 32 bit values, often keeping FreeBSD limited
to 2TB.

So theoretically it should go over 1000TB. I've conducted several bastardized
installations due to sysinstall not being able to do anything over the 2TB
limit by creating the partition ahead of time. I am going to be attacking
this tonight and my efforts will be primarily focused on creating one large
5.8TB slice.. wish me luck!!

 

PS: Muhaa haa haa!

 

 


From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 14, 2005 2:49 PM
To: Benson Wong
Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
Subject: Re: 5.8TB RAID5 SATA Array Questions

 

 Is there any limitations that would prevent a single volume that large?
(if
 I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.

Is the 2TB limit that you mention only for x86?  This file system comparison
lists the maximum size to be much larger
(http://en.wikipedia.org/wiki/Comparison_of_file_systems).

--Nick

___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
Ahh, that clarifies some things.
UFS2 can handle 2^64, but disklabel and newfs might not be able to yet.
I'm not entirely sure where things are still 32-bit; I do know that when I
tried to create a 2.2TB file system with the standard FreeBSD tools it
didn't work.

Ben.

On 4/14/05, Edgar Martinez [EMAIL PROTECTED] wrote:
  
  
 
 Benson.GREAT RESPONSE!! I Don't think I could have done any better myself.
 Although I knew most of the information you provided, it was good to know
 that my knowledge was not very far off. It's also reassuring that I'm not
 the only nut job building ludicrous systems.. 
 
   
 
 Nick, I believe that we may have some minor misinformation on our hands. 
 
   
 
 I refer you both to
 http://www.freebsd.org/projects/bigdisk/ which according to
 the page 
 
   
 
 When the UFS filesystem was introduced to BSD in 1982, its use of 32 bit
 offsets and counters to address the storage was considered to be ahead of
 its time. Since most fixed-disk storage devices use 512 byte sectors, 32
 bits allowed for 2 Terabytes of storage. That was an almost un-imaginable
 quantity for the time. But now that 250 and 400 Gigabyte disks are available
 at consumer prices, it's trivial to build a hardware or software based
 storage array that can exceed 2TB for a few thousand dollars. 
 
 The UFS2 filesystem was introduced in 2003 as a replacement to the original
 UFS and provides 64 bit counters and offsets. This allows for files and
 filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be
 sufficient for quite a long time. UFS2 largely solved the storage size
 limits imposed by the filesystem. Unfortunately, many tools and storage
 mechanisms still use or assume 32 bit values, often keeping FreeBSD limited
 to 2TB. 
 
 So theoretically it should go over 1000TB. I've conducted several bastardized
 installations due to sysinstall not being able to do anything over the 2TB
 limit by creating the partition ahead of time. I am going to be attacking
 this tonight and my efforts will be primarily focused on creating one large
 5.8TB slice.. wish me luck!!
 
   
 
 PS: Muhaa haa haa! 
 
   
 
   
  
  
  
 
 From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
  Sent: Thursday, April 14, 2005 2:49 PM
  To: Benson Wong
  Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
  Subject: Re: 5.8TB RAID5 SATA Array Questions 
  
 
   
 
  Is there any limitations that would prevent a single volume that large?
 (if
   I remember there is a 2TB limit or something)
  2TB is the largest for UFS2. 1TB is the largest for UFS1.
  
  Is the 2TB limit that you mention only for x86?  This file system
 comparison lists the maximum size to be much larger
 (http://en.wikipedia.org/wiki/Comparison_of_file_systems).
  
  --Nick 


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
 
 So theoretically it should go over 1000TB. I've conducted several bastardized
 installations due to sysinstall not being able to do anything over the 2TB
 limit by creating the partition ahead of time. I am going to be attacking
 this tonight and my efforts will be primarily focused on creating one large
 5.8TB slice.. wish me luck!!
 
   
 
 PS: Muhaa haa haa! 
You're probably going to run into boo hoo hoo hoo. Most likely you
won't be able to get over the 2TB limit. Also don't use sysinstall, I
was never able to get it to work well. Probably because my arrays were
mounted over fiber channel and fdisk craps out.

This is what I did: 

dd if=/dev/zero of=/dev/da0 bs=1k count=1
disklabel -rw da0 auto
newfs /dev/da0

That creates one large slice, UFS2, for FreeBSD. Let me know if you get
it over 2TB; I was never able to have any luck.

Another reason you might want to avoid a super large file system is
that UFS2 is not a journaling filesystem. If the server crashes, it
will take fsck a LONG time to check all those inodes!

Ben.
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]