Re: [zfs-discuss] How can I copy a ZFS filesystem back and forth

2012-12-04 Thread Anonymous
Thanks for the help Chris!

Cheers,

Fritz

You wrote:

  original and rename the new one, or zfs send, or...? Can I do a send and
  receive into a filesystem with attributes set as I want or does the receive
  keep the same attributes as well? Thank you.
 
 That will work. Just create the new filesystem with the attributes you
 want and send/recv the latest snapshot. As the data is received the
 gzip compression will be applied. Since the new filesystem already
 exists you will have to do a zfs receive -Fv to force it.
 
 --chris
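
For anyone following along later, the whole sequence Chris describes would
look roughly like this, assuming the source dataset is tank/data and the
target is tank/data_gz (both names are made up; substitute your own):

# create the target with the properties you want
zfs create -o compression=gzip tank/data_gz
# snapshot the source and send it into the new dataset
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive -Fv tank/data_gz
# once the copy checks out, retire the old dataset and take over its name
zfs destroy -r tank/data
zfs rename tank/data_gz tank/data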
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How can I copy a ZFS filesystem back and forth

2012-12-03 Thread Anonymous
I turned compression on for several ZFS filesystems and found performance
was still fine. I turned gzip on and it was also fine, and compression on
certain filesystems is excellent. I realize the files that were already on the
filesystem with compression=on did not get the benefit of gzip when I set
compression=gzip.

I would like everything to be compressed with gzip. What's the easiest way
for me to accomplish this? I figured some sort of copy back and forth is
required, but I don't know what would be fastest. Should I just rsync
everything to a new filesystem with compression=gzip set and then delete the
original and rename the new one, or zfs send, or...? Can I do a send and
receive into a filesystem with attributes set as I want or does the receive
keep the same attributes as well? Thank you.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Anonymous
 On 08/30/2012 12:07 PM, Anonymous wrote:
  Hi. I have a spare off the shelf consumer PC and was thinking about loading
  Solaris on it for a development box since I use Studio @work and like it
  better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
  has only one drive. If ZFS detects something bad it might kernel panic and
  lose the whole system right? I realize UFS /might/ be ignorant of any
  corruption but it might be more usable and go happily on its way without
  noticing? Except then I have to size all the partitions and lose out on
  compression etc. Any suggestions thankfully received.
 
 Simply set copies=2 and go on your merry way. Works for me and protects
 you from bit rot. 

That sounds interesting. How does ZFS implement that? Does it make sure to
keep the pieces of the duplicate on different parts of the drive?
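
For what it's worth, copies=2 is just a per-dataset property, and the extra
"ditto" copies are spread out on the device as far as ZFS reasonably can on a
single spindle (no hard guarantee, of course). A minimal sketch, with a
hypothetical dataset name, and remembering that, like compression, it only
affects blocks written after it is set:

zfs set copies=2 rpool/export/home
zfs get copies rpool/export/home   # verify the setting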

 Even if you do decide to put a second drive in at a later time, just
 remember, RAID is not a backup solution. I use deja-dup to backup my
 important files daily to an off-site machine for that. 

Oh, I realize that, but this isn't a production machine, just an unused,
lonely PC that could be running Solaris instead.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Anonymous Remailer (austria)

Hi Darren,

 On 08/30/12 11:07, Anonymous wrote:
  Hi. I have a spare off the shelf consumer PC and was thinking about loading
  Solaris on it for a development box since I use Studio @work and like it
  better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
  has only one drive. If ZFS detects something bad it might kernel panic and
  lose the whole system right? I realize UFS /might/ be ignorant of any
  corruption but it might be more usable and go happily on its way without
  noticing? Except then I have to size all the partitions and lose out on
  compression etc. Any suggestions thankfully received.
 
 If you are using Solaris 11 or any of the Illumos-based distributions 
 you have no choice: you must use ZFS as your root/boot filesystem.

I did not realize that. I was trying to decide between S10, which I use at
work (although on Sun hardware), and S11, since I have no experience with it.

 I would recommend that if physically possible attach a second drive to 
 make it a mirror.

I understand that is the best way to go.

 Personally I've run many many builds of Solaris on single disk laptop 
 systems and never has it lost me access to my data.  The only time I 
 lost access to data on a single disk system was because of total hard 
 drive failure.  I run with copies=2 set on my home directory and any 
 datasets I store data in when on a single disk system.
 
 However, much more importantly, ZFS does not preclude the need for 
 off-system backups.  Even with mirroring and snapshots you still have to 
 have a backup of important data elsewhere.  No file system, and more 
 importantly no hardware, is that good.

Words to live by!

Thanks,

Stu
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Anonymous Remailer (austria)

Thank you.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ok for single disk dev box?

2012-08-30 Thread Anonymous
I asked what I thought was a simple question, but most of the answers don't
have much to do with it. Now it seems to have turned into an argument that
your filesystem is better than any other filesystem. I don't think it is,
because I have seen the horror stories lurking on this list. I had no
intention of getting into that, and I think you should have no intention
either. I like ZFS, I use it at work, and I am not here to knock it.

 1) Anecdotal evidence is nearly worthless in matters of technology.

Agree but fail to see the relevance. Bug reports on this list aren't
worthless or the list wouldn't exist.

 2) Data corruption does happen, and HDD manufacturers can even pin a
number to it (the typical bit error rate on modern HDDs is around
10^-14, i.e. roughly one bit error per ~10TB transferred). That it didn't
hit your sensitive data but only some random pixel in an MPEG movie
is good for you. But ZFS was built to handle environments where all
data is critically important.

I don't think I have 10TB of source code ;)

Other file systems also handle critically important data. Every design has
its tradeoffs and I don't believe ZFS is superior to anything else although
it has many nice management features which aren't available in the same
feature set elsewhere. I am not criticising ZFS, but I don't believe it
solves every problem either.

 3) Data corruption also happens in-transit on the SATA/SAS buses and
in memory (that's why there is a thing as ECC memory).

Right.

 
 4) If it so bothers you, simply set checksum=off and fly without the
parachute (a single core of a modern CPU can checksum at a rate
upwards of 4GB/s, but if the few CPU cycles are so important to you,
turn it off).

You're making up imaginary motives and blaming them on me? I didn't say I
don't want to spend cycles on checksumming. I said I don't want to lose a
system because of a filesystem error. There's no need to be snide or
condescending. Maybe you need a vacation? Who's your boss?

 
  In this specific use case I would rather have a system that's still bootable
  and runs as best it can than an unbootable system that has detected an
  integrity problem especially at this point in ZFS's life. If ZFS would not
  panic the kernel and give the option to fail or mark file(s) bad, I would
  like it more. 
 
 ZFS doesn't panic in case of an unrecoverable single-block error; it
 simply returns an I/O error to the calling application. A panic can only
 take place in case of a catastrophic pool failure, and even then it isn't
 the default. See zpool(1M) for the description of the failmode property.
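
For reference, failmode is a pool property you can inspect and change; a
quick sketch (pool name hypothetical):

zpool get failmode tank
zpool set failmode=continue tank   # return EIO on new I/O instead of
                                   # hanging (wait) or panicking (panic)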

ZFS is not perfect, and although it may be designed to do what you say, I
think errors in ZFS are more likely than bit errors on hard drives. I'm
betting on the hardware, and /in this scenario/ I would prefer a filesystem
that tolerates errors, even ignorantly, rather than one that protects me from
myself. What I'd really like is an option (maybe it exists) in ZFS so that
when a block fails a checksum it tells me which file is affected and lets me
decide whether to proceed or dump.
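
That is more or less what happens today: an unrecoverable checksum failure
shows up as an I/O error on the affected file, and the files with permanent
errors are listed by zpool status, e.g. (pool name hypothetical):

zpool status -v tank   # lists files affected by permanent (unrepairable) errors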

  But having the ability to manage the disk with one pool and the other nice
  features like compression, plus the fact that it works nicely on good
  hardware, make it hard to go back once you've made the jump. Choices, choices.
 
 So you want to enable compression (which is a huge CPU hog) and worry
 about checksumming (which is tiny in comparison)?

Yes, you got it right this time. You're the one trying to put words in my
mouth. Nowhere did I ever suggest CPU cycles are an issue. The issue is
what I said. Scroll up.

 If you're compressing data, you've got all the more reason to enable
 checksumming, since compression tends to make all data corruption much,
 much worse (e.g. that's why a single-bit error in a compressed MPEG stream
 doesn't simply slightly alter the color of a single pixel, but typically
 instead results in a whole macroblock or row of macroblocks messing up
 completely).

Sounds reasonable.

 
  Even if your system does crash, at least you now have an opportunity to
  recognize there is a problem, and think about your backups, rather than
  allowing the corruption to proliferate. 
  
  This isn't a production box as I said it's an unused PC with a single drive,
  and I don't have anybody's bank accounts on it. I can rsync whatever I work
  on that day to a backup server. It won't be a disaster if UFS suddenly
  becomes unreliable and I lose a file or two, or if a drive fails, but it
  would be very annoying if ZFS barfed on a technicality and I had to
  reinstall the whole OS because of a kernel panic and an unbootable system.
 
 As noted before, simple checksum errors won't panic your box, and
 neither will catastrophic pool failure (the default failmode=wait). You
 have to explicitly tell ZFS that you want it to panic your system in
 this situation.

I have read reports on this list that show ZFS does panic the system by
default in some cases. It may not have been for 

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-07 Thread Anonymous Remailer (austria)

 It depends on the model. Consumer models are less likely to
 immediately flush. My understanding is that this is done in part to do
 some write coalescing and reduce the number of P/E cycles. Enterprise
 models should either flush, or contain a supercapacitor that provides
 enough power for the drive to complete writing any data in its buffer.

My Home Fusion SSD runs on banana peels and eggshells and uses a Flux
Capacitor. I've never had a failure.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question on 4k sectors

2012-07-23 Thread Anonymous
Hans J. Albertsson hans.j.alberts...@branneriet.se wrote:

 I think the problem is with disks that are 4k organised, but report 
 their blocksize as 512.
 
 If the disk reports its blocksize correctly as 4096, then ZFS should 
 not have a problem.
 At least my 2TB Seagate Barracuda disks seemed to report their 
 blocksizes as 4096, and my zpools on those machines have ashift set to 
 12, which is correct, since 2¹² = 4096
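
If you want to double-check what a given pool actually ended up with, the
ashift is recorded in the pool configuration; a sketch (pool name
hypothetical):

zdb -C tank | grep ashift   # ashift: 9 = 512-byte sectors, 12 = 4K sectors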

Thanks, this is good to know. Is there any way to tell, from manufacturers'
data sheets, whether drives report their blocksize correctly? Seagate and WD
data sheets list the number of sectors, so it's trivial to work out what
sector size the disk is using. But is that number what the disk is really
organized in, or just the number the disk reports?! It is very confusing...

So far we seem to rely on reports from people on the list, which is good for
us but bad for the guys who wasted money on drives that don't work as they
should (the drives that don't report their actual sector size correctly).

Really, it would be so helpful to know which drives we can buy with
confidence and which should be avoided...is there any way to know from the
manufacturers web sites or do you have to actually buy one and see what it
does? Thanks to everyone for the info.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Question on 4k sectors

2012-07-23 Thread Anonymous Remailer (austria)

You wrote:

 2012-07-23 18:37, Anonymous wrote:
  Really, it would be so helpful to know which drives we can buy with
  confidence and which should be avoided...is there any way to know from the
  manufacturers web sites or do you have to actually buy one and see what it
  does? Thanks to everyone for the info.
 
 I think that vendors' marking like 512e may give a clue on
 their support of emulated 512-byte sectors, whatever they
 would mean by that for a specific model line.

Yeah but buying through the mail it's awfully difficult to see the vendor
markings until it's too late ;)

 I believe you can roughly be certain that all 3Tb HDDs except
 Hitachi use 4Kb native sectors, and 4Tb disks are all 4Kb.
 If these disks don't expose such sector sizing to the OS
 properly, you can work around that in several ways, including,
 as of recent illumos changes, an override config file for the
 SCSI driver.
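
For anyone curious, that override lives in /kernel/drv/sd.conf on recent
illumos builds; very roughly something like the following (syntax from
memory, so check sd(7D) on your build, and the vendor/product string here is
made up):

sd-config-list = "ATA     HYPOTHETICAL HD40", "physical-block-size:4096";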

The question was relative to some older boxes running S10 and not planning
to upgrade the OS, keeping them alive as long as possible...

 The main problem with avoiding 4kb drives seems to be just
 the cases where you want to replace a single disk in an older
 pool built with 512b-native sectored drives.

Right, that's what we're concerned with.

 For new pools (or rather new complete top-level VDEVs) this does not
 matter much, except that your overheads with small data blocks can get
 noticeably bigger.

Understood.

 There were statements on this list that drives emulating 512b
 sectors (whether they announce it properly or not) are not
 all inherently evil - this emulation by itself may be of some
 concern regarding performance, but not one of reliability.
 Then again, firmware errors are possible in any part of the
 stack, of both older and newer models ;)

I haven't seen any post suggesting 512b emulation didn't have very adverse
effects on performance. Given how touchy ZFS seems to be I don't want to
give him any excuses! Thanks for your post.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Move files between ZFS folder/Datasets ?

2012-03-14 Thread Anonymous
 Thanks guys.
 
 I'm only planning to move some directories, not the complete dataset.
 Just a couple of GBs.
 
 Would it be safe to use mv?

Of course, provided your system doesn't crash during the move.

 Using rsync -avPHK --remove-source-files SRC/ DST/
 isn't that just as copying files ? Extra load on the server instead of 
 moving the files from one place to another ?

You may not realize it, but a mv between datasets consists of a copy and a
delete. rsync is nice because you can check that it really got all your files
by doing another rsync with -c after the first one completes; that compares
checksums of the files to make sure they really are identical. With ZFS this
is less of an issue, but it is a nice double-check if you really can't afford
to lose your files.

Make sure you understand rsync syntax before using it live. Make some
directories in /tmp and copy stuff around. One thing about rsync is that a
trailing / means something and not having it means something else.

rsync with the -n option does a dry run (nothing is actually copied):

rsync -axvn /tmp/path/to/data /tmp/target # dry run: would copy the directory
named data itself into the target dir

rsync -axvn /tmp/path/to/data/ /tmp/target # dry run: would copy the contents
of the data directory (but not the data directory itself!) into the target

rsync -axv SRC DST # do it for real; -a preserves date/time/owner
rsync -axvc SRC DST # run again and compare checksums instead of date/time/size

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What is your data error rate?

2012-01-25 Thread Anonymous Remailer (austria)

I've been watching the heat control issue carefully since I had to take a
job offshore (cough, reverse H1B, cough) in a place without adequate AC, and
I was able to get them to ship my servers and some other gear. Then I read
that Intel is guaranteeing their servers will work at up to 100 degrees F
ambient in the price war to sell servers; whoever goes green and saves on
data center cooling will win big, now that everyone realizes AC costs more
than hardware for server farms. And this is not only on new, special
heat-tolerant gear; I heard they will put this in writing even for their
older units. From that I would conclude that commercial server gear, at
least, can take a lot more abuse than it usually gets without components
failing, because if they did fail, Intel could not afford to make this
guarantee. YMMV of course. I still feel nervous running equipment in this
kind of environment, but after three years of doing so, including with
commodity desktops, I haven't seen any abnormal failures.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failing WD desktop drive in mirror, how to identify?

2012-01-17 Thread Anonymous Remailer (austria)

Hello all,

Trying to reply to everyone so far in one post.

casper@oracle.com said

 Did you try:

   iostat -En

I issued that command and I see (soft) errors from all 4 drives. There is a
serial no. field in the message headers but it has no contents.


 messages in /var/adm/messages perhaps?  They should include the path
 to the disk in error.

Success, thank you! It has the serial numbers of the drives along with the
path. I had the path before but I couldn't relate it to the physical
connectors on the mobo. The serial number is what I needed. 

Edward Ned Harvey said:

 A few things you could do...

 This will read for 1 second, then pause for one second.  Hopefully making a
 nice consistent blinking light for you to find in your server.
 export baddisk=/dev/rdsk/cXtYdZ
 while true ; do dd if=$baddisk of=/dev/null bs=1024k count=128 ; sleep 1 ;
 done

Alas this is a desktop box and there is one LED that lights for any disk 
activity.

 But if you don't have lights or something...
 The safest thing for you to do is to zpool export, then shutdown, remove
 one disk.  Power on, devfsadm -Cv, and try to zpool import -a When the bad
 disk is gone, you'll be able to import no problem. If you accidentally
 pull the wrong disk, it will not cause any harm.  Pool will refuse to
 import.

Sounds like a good plan b. I will keep this in mind. Since I got the serial
number from  /var/adm/messages I am good to go.

I couldn't copy and paste Jim's message since he posted with MIME instead of
regular ASCII text. Thank you Jim for the help.

This turned out to be a Solaris question not a ZFS question. Sorry and thank
you Casper and Jim and Edward Ned!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failing WD desktop drive in mirror, how to identify?

2012-01-17 Thread Anonymous Remailer (austria)

Richard Elling said

 If the errors bubble up to ZFS, then they will be shown in the output of
 zpool status

On the console I was seeing retryable read errors that eventually
failed. The block number and drive path were included but not any info I
could relate to the actual disk.

zpool status showed a nonzero count of READ errors but nothing more.

 Otherwise, you can use iostat -En, as Casper noted, to show the error
 counters per disk. For more detailed information, use fmdump -eV 

What I needed was to identify the drive and Casper and Jim's suggestion to
look at /var/adm/messages was where I found the info. It is just a drive
going bad AFAIK and ZFS is working fine including letting me detach the bad
drive from a mirror and not screwing up my data. Thanks again to everyone.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create a mirror for a root rpool?

2011-12-16 Thread Anonymous Remailer (austria)

Thank you all for your answers and links :-)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can I create a mirror for a root rpool?

2011-12-15 Thread Anonymous Remailer (austria)

On Solaris 10 If I install using ZFS root on only one drive is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
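
For reference, the short answer is yes; on Solaris 10 the usual steps are
roughly the following (device names are hypothetical, and the second disk
needs an SMI label with a slice 0 sized like the first disk's):

prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2   # copy the label
zpool attach rpool c0t0d0s0 c0t1d0s0                           # attach as mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0  # x86 boot blocks
zpool status rpool                                             # wait for resilver

(On SPARC, installboot is used instead of installgrub.)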
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-03 Thread Anonymous Remailer (austria)

You wrote:

 
  Hi Roy, things got a lot worse since my first email. I don't know what
  happened but I can't import the old pool at all. It shows no errors but when
  I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
  nv), which looks like it is fixed by patch 6801926, which is applied in
  Solaris 10u9. But I cannot boot update 9 on this box! I tried Solaris
  Express; none of those will run right either. They all go into maintenance
  mode. The last thing I can boot is update 8, and that is the one with the
  ZFS bug.
 
 If they go into maintenance mode but could recognize the disk, you
 should still be able to do zfs stuff (zpool import, etc). If you're
 lucky you'd only miss the GUI

Thank you for your comments. This is pretty frustrating.

Unfortunately I'm hitting another bug I saw on the net. Once the Express
Live CD or text installer falls back to maintenance mode, I keep getting the
message bash: /usr/bin/hostname command not found (from memory, may not be
exact). All ZFS commands fail with this message. I don't know what causes
this, and I'm surprised, since Solaris 10 update 8 works mostly fine on the
same hardware. I would expect hardware support to get wider, not narrower,
but the opposite seems to be happening, because I can't install update 9 on
this machine.

  I have 200G of files I deliberately copied to this new mirror and now I
  can't get at them! Any ideas?
 
 Another thing you can try (albeit more complex) is use another OS
 (install, or even use a linux Live CD), install virtualbox or similar
 on it, pass the disk as raw vmdk, and use solaris express on the VM.
 You should be able to at least import the pool and recover the data.

I didn't think I would be able to use raw disks with exported pools in a VM
but your comment is interesting. I will consider it. The host is bootable,
it just immediately panics upon trying to import the 2nd pool. Is there some
way I can force a normal boot where my root pool is mounted but it doesn't
mount the other pool? If so I could install VirtualBox and try your
suggestion without moving the drives to another box.

Aren't there any ZFS recovery tools, or is ZFS just not expected to break?

I mentioned I can boot the update 9 installer, but it fails when trying to
read media from the DVD because of a lack of nvidia drivers (a documented
limitation). I wonder if I can do some kind of network or jumpstart install.
I have no other Solaris Intel boxes (and this post should explain some of
the reasons why) but I have several Solaris SPARC boxes. I haven't gone
through the docs on net/jumpstart installs; there is a lot to read. But maybe
this would get update 9 onto that box, and maybe it could import the pool.

Are there any other Solaris based live CD or DVD I could try that have
known, good ZFS support? I will need something where I can scp/rcp/rsync the
data off that box, assuming that I can import it somehow.

Thanks,
Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Wrong rpool used after reinstall!

2011-08-02 Thread Anonymous Remailer (austria)

I am having a problem after a new install of Solaris 10. The installed rpool
works fine when I have only those disks connected. When I connect disks from
an rpool I created during a previous installation, my newly installed rpool
is ignored even though the BIOS (x86) is set to boot only from the new rpool
drives. When the system starts it uses the old rpool! How can I get around
this? I want to use my new install and then import the old rpool under a new
name, and clean things up.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-02 Thread Anonymous Remailer (austria)

Hi Roy, things got a lot worse since my first email. I don't know what
happened but I can't import the old pool at all. It shows no errors but when
I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
nv), which looks like it is fixed by patch 6801926, which is applied in
Solaris 10u9. But I cannot boot update 9 on this box! I tried Solaris
Express; none of those will run right either. They all go into maintenance
mode. The last thing I can boot is update 8, and that is the one with the
ZFS bug.

I have 200G of files I deliberately copied to this new mirror and now I
can't get at them! Any ideas?

Thanks.

Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server with 4 drives, how to configure ZFS?

2011-06-23 Thread Anonymous Remailer (austria)

 Hi Dave,

Hi Cindy.

 Consider the easiest configuration first and it will probably save
 you time and money in the long run, like this:
 
 73g x 73g mirror (one large s0 on each disk) - rpool
 73g x 73g mirror (use whole disks) - data pool
 
 Then, get yourself two replacement disks, a good backup strategy,
 and we all sleep better.
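
Spelled out as commands, that layout is roughly the following (device names
hypothetical; the rpool mirror would normally be created at install time or
by attaching the second disk afterwards):

zpool attach rpool c0t0d0s0 c0t1d0s0     # mirror the root pool on the 73g pair
zpool create data mirror c0t2d0 c0t3d0   # whole-disk mirror for the data pool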

Oh, you're throwing in free replacement disks too?! This is great! :P

 A complex configuration of slices and a combination of raidZ and
 mirrored pools across the same disks will be difficult to administer,
 performance will be unknown, not to mention how much time it might take
 to replace a disk.

Yeah, that's a very good point. But if you guys will make ZFS filesystems
span vdevs then this could work even better! You're right about the
complexity, but OTOH the great thing about ZFS is not having to worry about
how to plan mount point allocations, and with this scenario (I also have a
few servers with 4x36) the planning issue raises its ugly head again. That's
why I kind of like Edward's suggestion: even though it is complicated (for
me), I still think it may be best given my goals. I like breathing room and
not having to worry about a filesystem filling up; it's great not having to
know ahead of time exactly how much I have to allocate for a filesystem, and
instead let the whole drive be used as needed.

 Use the simplicity of ZFS as it was intended is my advice and you
 will save time and money in the long run.

Thanks. I guess the answer is really using the small drives for root pools
and then getting the biggest drives I can afford for the other bays.

Thanks to everybody.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Anonymous
Hi. I have a development system on Intel commodity hardware with a 500G ZFS
root mirror. I have another 500G drive same as the other two. Is there any
way to use this disk to good advantage in this box? I don't think I need any
more redundancy, I would like to increase performance if possible. I have
only one SATA port left so I can only use 3 drives total unless I buy a PCI
card. Would you please advise me. Many thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any use for extra drives?

2011-03-28 Thread Anonymous Remailer (austria)

 If you plan to generate a lot of data, why use the root pool? You can put
 the /home and /proj filesystems (/export/...) on a separate pool, thus
 off-loading the root pool.

I don't; it's a development box with not a lot happening.

 
 My two cents,

thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any use for extra drives?

2011-03-25 Thread Anonymous
 Generally, you choose your data pool config based on data size,
 redundancy, and performance requirements.  If those are all satisfied with
 your single mirror, the only thing left for you to do is think about
 splitting your data off onto a separate pool due to better performance
 etc.  (Because there are things you can't do with the root pool, such as
 striping and raidz) 
 
 That's all there is to it.  To split, or not to split.
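
As a concrete sketch of the split option, the spare drive could simply become
its own single-disk pool (pool and device names hypothetical):

zpool create scratch c0t2d0
zfs create scratch/build   # e.g. a workspace/build area off the root pool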

Thanks for the update. I guess there's not much to do for this box, since
it's a development machine and doesn't have much need for extra redundancy,
although if I had had some extra 500s I would have liked to stripe the root
pool. I see from your answer that's not possible anyway. Cheers.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss