Re: Large raid arrays

2009-01-20 Thread Matias Surdi

Matias Surdi wrote:

Hi,

I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.

For the backup system we are using, I need to join them into one
logical device.


What would you recommend? ccd or vinum?


Thanks a lot for your suggestions.






Some comments that may help in the decision:

- Reliability/resistance to power failures is the most important factor.

- High performance or high speed is not required.

Thanks a lot.



Re: Large raid arrays

2009-01-20 Thread Frederique Rijsdijk
Matias Surdi wrote:
 Matias Surdi wrote:
 Hi,

 I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.

 For the backup system we are using, I need to join them into one
 logical device.

 What would you recommend? ccd or vinum?

 
 Some comments that may help in the decision:
 
 - Reliability/resistance to power failures is the most important factor.
 
 - High performance or high speed is not required.
 

Either gconcat or ZFS, depending on which version of FreeBSD you're running.

gconcat label -v data /dev/raid1 /dev/raid2
newfs /dev/concat/data
mkdir /mnt/data && mount /dev/concat/data /mnt/data
df -h /mnt/data

or

zpool create data /dev/raid1 /dev/raid2
df -h /data
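
If you go the gconcat route, here is a minimal sketch of making it survive a
reboot -- this assumes the on-disk label written by "gconcat label" plus the
geom_concat module and an fstab entry (adjust the mount point to taste):

# load the concat GEOM class at boot so the labeled device reappears
echo 'geom_concat_load="YES"' >> /boot/loader.conf
# mount the UFS filesystem created on it at boot
echo '/dev/concat/data /mnt/data ufs rw 2 2' >> /etc/fstab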



-- Frederique


Re: Large raid arrays

2009-01-20 Thread Matias Surdi

Frederique Rijsdijk wrote:

Matias Surdi wrote:

Matias Surdi wrote:

Hi,

I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.

For the backup system we are using, I need to join them into one
logical device.

What would you recommend? ccd or vinum?



Some comments that may help in the decision:

- Reliability/resistance to power failures is the most important factor.

- High performance or high speed is not required.



Either gconcat or ZFS, depending on which version of FreeBSD you're running.

gconcat label -v data /dev/raid1 /dev/raid2
newfs /dev/concat/data
mkdir /mnt/data && mount /dev/concat/data /mnt/data
df -h /mnt/data

or

zpool create data /dev/raid1 /dev/raid2
df -h /data



-- Frederique




ZFS was a disaster.

That is what we were using until today, when the power went off and the
ZFS pool ended up corrupted and irrecoverable.


On three other occasions when we had power failures, the zpool ended up with some errors.

But every time, the UFS partitions remained intact.

I won't use ZFS for a long time.




Re: Large raid arrays

2009-01-20 Thread Wojciech Puchar

gconcat

On Tue, 20 Jan 2009, Matias Surdi wrote:


Hi,

I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.

For the backup system we are using, I need to join them into one
logical device.


What would you recommend? ccd or vinum?


Thanks a lot for your suggestions.






Re: Large raid arrays

2009-01-20 Thread Wojciech Puchar





ZFS was a disaster.

That is what we were using until today, when the power went off and the ZFS
pool ended up corrupted and irrecoverable.


Normal.



On three other occasions when we had power failures, the zpool ended up with some errors.

But every time, the UFS partitions remained intact.


Indeed. And fsck time can be fast if you correctly set up the block size and
the number of inodes. I mean not too many inodes, and large blocks (32-64K) if
you store mostly big files.
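
As a rough sanity check on an existing filesystem -- a sketch, and the device
name here just assumes the gconcat setup from earlier in the thread -- you can
compare inodes in use against inodes allocated, and see the block and fragment
sizes the filesystem was built with:

# inode usage vs. capacity
df -i /mnt/data
# superblock summary: bsize, fsize, inodes per cylinder group
dumpfs /dev/concat/data | head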



I won't use ZFS for a long time.

I would recommend never.


Re: Large raid arrays

2009-01-20 Thread Matias Surdi

Wojciech Puchar wrote:

gconcat

On Tue, 20 Jan 2009, Matias Surdi wrote:


Hi,

I have a host with two large (2 TB and 4 TB) hardware RAID5 arrays.

For the backup system we are using, I need to join them into one
logical device.


What would you recommend? ccd or vinum?


Thanks a lot for your suggestions.









I've finally set it up with gconcat and it works great.

Many thanks for your help guys.




Re: Large RAID arrays, partitioning

2008-08-18 Thread Joze Volf

Thanks for your opinion. For now I will stick with the large RAID volume and no 
slices/partitions. As you said, it makes life less complicated. I don't think 
there will be any problem with future upgrades supporting large volumes; I guess 
there will be support for even larger volumes. The more important concern for me 
is what to do if capacity needs rise from a few TB to a few dozen or a few 
hundred TB. I guess there is only one economical solution for my project: the 
Lustre cluster file system.

Vinum... As I found a much simpler solution, I think there is no need to 
implement it. My personal opinion is that there is no excuse for using 
software RAID solutions on production server systems (except RAID1 where 
money is really tight). Most HW RAID controllers are well supported on Linux and 
xBSD, and the advantages of hot-swappable drives, a battery-backed write cache and 
high-performance XOR IOPs are very important for 24x7 systems. This does not 
mean that I don't want to mess with vinum. I will when there is enough 
time for it.

Regards,
Joze



Bill Moran wrote:

In response to Joze Volf [EMAIL PROTECTED]:

I have an HP DL320s 2U server with twelve 500 GB SATA drives and a Smart Array 
P400 RAID controller. The machine will be a video streaming server for a public 
library. The system I am installing is 7.0-RELEASE, amd64.

I made two RAID6 volumes, one 120 GB for the system and one 4.3 TB for the 
streaming media content. The first problem I encountered was that during 
installation, the large RAID volume wasn't visible. No problem, because I could 
install the system to the small 120 GB volume.

After the base system installation I decided to delete the large volume using 
the HP ACU and create a few smaller 1 TB volumes, which would hopefully be 
recognized by the kernel. They were, but when I ran fdisk from sysinstall 
it always reported:
WARNING: A geometry of xxx/255/32 for da1 is incorrect. Using a more 
likely geometry. If this geometry is incorrect...


That always happens.  I don't remember the last time I saw a disk where it
_didn't_ complain about that.  Don't know the details of what's going on
there, but I've never seen it cause a problem.


I was trying to make a few 1 TB vinum partitions and tie them together into a 
single concatenated volume (I had already done something similar on Linux using 
LVM, and it worked great). I had no success.


Well, can't help you much if you don't describe what you tried to do here.


Then I searched the web and found this patch 
http://yogurt.org/FreeBSD/ciss_large.diff and hoped it would resolve the geometry 
problem. It did not, but the other thing it is supposed to do is allow the kernel 
to create a da device for an array larger than 2 TB. It did!


What version of FreeBSD is this?  It looks like this driver has seen
significant redesign in 7-STABLE.


I deleted the smaller 1 TB volumes and recreated one large 4.3 TB RAID volume. 
The kernel recognized it perfectly as /dev/da1. Great! Then I tried to create a 
slice using sysinstall fdisk and a filesystem using sysinstall label. Nothing 
but trouble!


Again, without any details, not much anyone can do to help.


I searched the web again and found a possible solution to my problem. I used the 
"newfs -U -O2 /dev/da1" command to create the filesystem directly on the RAID 
volume. It worked without a problem. Then I mounted /dev/da1 on /var/media and 
here is the output of the df -h command:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    4.3G    377M    3.6G     9%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1e    7.7G     12K    7.1G     0%    /tmp
/dev/da0s1f     36G    1.6G     31G     5%    /usr
/dev/da0s1d     58G     25M     53G     0%    /var
/dev/da1       4.3T    4.0K    4.0T     0%    /var/media

Is it somehow bad to make a filesystem directly on a storage device such as a 
disk drive or a hardware RAID volume?


Yes and no.  If you use certain types of disk utilities, such as bootable
CDs that check disk health and whatnot, they may get confused by the fact
that there is no DOS-style fdisk partition on the disk.

Otherwise, it works fine.  I frequently do this to make my life simpler
(why install partitions when you don't need them?).  It also wastes less
disk space (although, who cares about a few hundred bytes on a 4 TB disk).
Now that you've got it up and running, I'd be more concerned about making
sure your next FreeBSD upgrade will continue to support a disk that size.




Re: Large RAID arrays, partitioning

2008-08-15 Thread Bill Moran
In response to Joze Volf [EMAIL PROTECTED]:
 
 I have an HP DL320s 2U server with twelve 500 GB SATA drives and a Smart Array 
 P400 RAID controller. The machine will be a video streaming server for a public 
 library. The system I am installing is 7.0-RELEASE, amd64.
 
 I made two RAID6 volumes, one 120 GB for the system and one 4.3 TB for the 
 streaming media content. The first problem I encountered was that during 
 installation, the large RAID volume wasn't visible. No problem, because I 
 could install the system to the small 120 GB volume.
 
 After the base system installation I decided to delete the large volume using 
 the HP ACU and create a few smaller 1 TB volumes, which would hopefully be 
 recognized by the kernel. They were, but when I ran fdisk from 
 sysinstall it always reported:
 WARNING: A geometry of xxx/255/32 for da1 is incorrect. Using a more 
 likely geometry. If this geometry is incorrect...

That always happens.  I don't remember the last time I saw a disk where it
_didn't_ complain about that.  Don't know the details of what's going on
there, but I've never seen it cause a problem.

 I was trying to make a few 1 TB vinum partitions and tie them together into a 
 single concatenated volume (I had already done something similar on Linux using 
 LVM, and it worked great). I had no success.

Well, can't help you much if you don't describe what you tried to do here.

 Then I searched the web and found this patch 
 http://yogurt.org/FreeBSD/ciss_large.diff and hoped it would resolve the 
 geometry problem. It did not, but the other thing it is supposed to do is allow 
 the kernel to create a da device for an array larger than 2 TB. It did!

What version of FreeBSD is this?  It looks like this driver has seen
significant redesign in 7-STABLE.

 I deleted the smaller 1 TB volumes and recreated one large 4.3 TB RAID volume. 
 The kernel recognized it perfectly as /dev/da1. Great! Then I tried to create 
 a slice using sysinstall fdisk and a filesystem using sysinstall label. 
 Nothing but trouble!

Again, without any details, not much anyone can do to help.

 I searched the web again and found a possible solution to my problem. I used 
 the "newfs -U -O2 /dev/da1" command to create the filesystem directly on the 
 RAID volume. It worked without a problem. Then I mounted /dev/da1 on 
 /var/media and here is the output of the df -h command:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/da0s1a    4.3G    377M    3.6G     9%    /
 devfs          1.0K    1.0K      0B   100%    /dev
 /dev/da0s1e    7.7G     12K    7.1G     0%    /tmp
 /dev/da0s1f     36G    1.6G     31G     5%    /usr
 /dev/da0s1d     58G     25M     53G     0%    /var
 /dev/da1       4.3T    4.0K    4.0T     0%    /var/media
 
 Is it somehow bad to make a filesystem directly on a storage device such as a 
 disk drive or a hardware RAID volume?

Yes and no.  If you use certain types of disk utilities, such as bootable
CDs that check disk health and whatnot, they may get confused by the fact
that there is no DOS-style fdisk partition on the disk.

Otherwise, it works fine.  I frequently do this to make my life simpler
(why install partitions when you don't need them?).  It also wastes less
disk space (although, who cares about a few hundred bytes on a 4 TB disk).
Now that you've got it up and running, I'd be more concerned about making
sure your next FreeBSD upgrade will continue to support a disk that size.

-- 
Bill Moran
http://www.potentialtech.com


Re: Large RAID arrays, partitioning

2008-08-15 Thread Wojciech Puchar
I searched the web again and found a possible solution to my problem. I used 
the newfs -U -O2 /dev/da1 command to create the filesystem directly on the 
Is it somehow bad to make a filesystem directly on a storage device such as a 
disk drive or a hardware RAID volume?


No, it is all right! It just means that you don't need partitions.
Same with Windows-style partitions (fdisk): I never make them, just bsdlabel.


Hint: with a volume that will store only huge files (you said a video 
server), use few inodes and large blocks.


And set -m 0 to make all the space available:


newfs -m 0 -O2 -U -i $((4*1024*1024)) -b 65536 -f 8192 /dev/da1

This will use 64K blocks, 8K fragments, and one inode per 4 MB of space.
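
And a sketch of the follow-up steps, assuming the volume is then mounted at
/var/media as in the original post:

# mount at boot and verify; df -ih also shows inode usage, so you can
# confirm the -i (bytes per inode) setting took effect
echo '/dev/da1 /var/media ufs rw 2 2' >> /etc/fstab
mount /var/media
df -ih /var/media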