Re: [zfs-discuss] ZFS ok for single disk dev box?
Thanks, sounds awesome! That pretty much takes away my concern about using
ZFS!

Stu

> > Now that is interesting. But how do you do a receive before you
> > reinstall? Live cd??
>
> Just boot off of the CD (or jumpstart server) to single user mode. Format
> your new disk, create a zpool, zfs recv, installboot (or installgrub),
> reboot and done.
Re: [zfs-discuss] ZFS ok for single disk dev box?
On Thu, Aug 30, 2012 at 11:15 PM, Nomen Nescio wrote:
>> Plus, if you look around a bit, you'll find some tutorials to back up
>> the entire OS using zfs send-receive. So even if for some reason the
>> OS becomes unbootable (e.g. blocks on some critical file are corrupted,
>> which would cause a panic/crash no matter what filesystem you use), the
>> "reinstall" process is basically just a zfs send-receive plus
>> installing the bootloader, so it can be VERY fast.
>
> Now that is interesting. But how do you do a receive before you
> reinstall? Live cd??

Live CD, live USB, or better yet, a full-blown installation on a USB disk.
The latter is different from a live USB in that it's faster and you can
customize it (e.g. add/remove packages) just like a normal installation.

-- Fajar
Re: [zfs-discuss] ZFS snapshot used space question
Is there a way to get the total amount of data referenced by a snapshot
that isn't referenced by a specified snapshot/filesystem? I think this is
what is really desired in order to locate snapshots with offending space
usage. The written and written@ attributes seem to only do the reverse.

I think you can back-calculate it from the snapshot and filesystem
"referenced" sizes and the "written@" property of the filesystem, but
that isn't particularly convenient to do (it looks like "zfs get -Hp ..."
makes it possible to hack a script together for it, though).

Tim

On Thu, Aug 30, 2012 at 11:22 AM, Richard Elling wrote:
> For illumos-based distributions, there is a "written" and "written@"
> property that shows the amount of data written to each snapshot. This
> helps to clear up the confusion over the way the "used" property is
> accounted.
> https://www.illumos.org/issues/1645
>
> -- richard
>
> On Aug 29, 2012, at 11:12 AM, "Truhn, Chad" wrote:
>
> All,
>
> I apologize in advance for what appears to be a question asked quite
> often, but I am not sure I have ever seen an answer that explains it.
> This may also be a bit long-winded, so I apologize for that as well.
>
> I would like to know how much unique space each individual snapshot is
> using.
>
> I have a ZFS filesystem that shows:
>
> $ zfs list -o space rootpool/export/home
> NAME                  AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> rootpool/export/home  5.81G  14.4G  8.81G     5.54G   0              0
>
> So reading this I see that I have a total of 14.4G of space used by this
> dataset. Currently 5.54G is "active" data that is available on the
> normal filesystem, and 8.81G is used in snapshots. 8.81G + 5.54G = 14.4G
> (roughly). I 100% agree with these numbers and the world makes sense.
>
> This is also backed up by:
>
> $ zfs get usedbysnapshots rootpool/export/home
> NAME                  PROPERTY         VALUE  SOURCE
> rootpool/export/home  usedbysnapshots  8.81G  -
>
> Now if I wanted to see how much space any individual snapshot is
> currently using, I would like to think that this would show me:
>
> $ zfs list -ro space rootpool/export/home
> NAME                            AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> rootpool/export/home            5.81G  14.4G  8.81G     5.54G   0              0
> rootpool/export/home@week3      -      202M   -         -       -              -
> rootpool/export/home@week2      -      104M   -         -       -              -
> rootpool/export/home@7daysago   -      1.37M  -         -       -              -
> rootpool/export/home@6daysago   -      1.20M  -         -       -              -
> rootpool/export/home@5daysago   -      1020K  -         -       -              -
> rootpool/export/home@4daysago   -      342K   -         -       -              -
> rootpool/export/home@3daysago   -      1.28M  -         -       -              -
> rootpool/export/home@week1      -      0      -         -       -              -
> rootpool/export/home@2daysago   -      0      -         -       -              -
> rootpool/export/home@yesterday  -      360K   -         -       -              -
> rootpool/export/home@today      -      1.26M  -         -       -              -
>
> So normal logic would tell me that if USEDSNAP is 8.81G and is composed
> of 11 snapshots, I could add up the size of each of those snapshots and
> that would roughly equal 8.81G. So, time to break out the calculator:
>
> 202M + 104M + 1.37M + 1.20M + 1020K + 342K + 1.28M + 0 + 0 + 360K +
> 1.26M equals... ~312M!
>
> That is nowhere near 8.81G. I would accept it even if it was within 15%,
> but it's not even close. That's definitely not metadata or ZFS overhead
> or anything.
>
> I understand that snapshots are just the delta between the time when the
> snapshot was taken and the current "active" filesystem and are truly
> just references to blocks on disk rather than a "copy". I also
> understand how two (or more) snapshots can reference the same block on a
> disk while there is still only that one block used. If I delete a recent
> snapshot I may not save as much space as advertised, because some may be
> inherited by a "parent" snapshot. But that inheritance is not creating
> duplicate used space on disk, so it doesn't justify the huge difference
> in sizes.
>
> But even with this logic in place, there is currently 8.81G of blocks
> referred to by snapshots which are not currently on the "active"
> filesystem, and I don't believe anyone can argue with that. Can
> something show me how much space a single snapshot has reserved?
>
> I searched through some of the archives and found this thread
> (http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052163.html)
> from early this month and I feel as if I have the same problem as the
> OP, but hopefully attacking it with a little more background.
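A minimal sketch of the back-calculation Tim describes, assuming the
dataset names from the example above (the written@ property requires a
build with the illumos 1645 feature). Space referenced by a snapshot but
no longer referenced by the live filesystem works out to
referenced(snapshot) - referenced(fs) + written@snapshot(fs):

  #!/bin/sh
  # dataset and snapshot names are taken from the example above
  FS=rootpool/export/home
  SNAP=week3

  # -Hp gives script-friendly output in raw bytes
  ref_snap=`zfs get -Hp -o value referenced $FS@$SNAP`
  ref_fs=`zfs get -Hp -o value referenced $FS`
  written=`zfs get -Hp -o value written@$SNAP $FS`

  # bytes referenced by the snapshot that the live fs no longer holds
  echo $(( ref_snap - ref_fs + written ))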
Re: [zfs-discuss] ZFS ok for single disk dev box?
> I asked what I thought was a simple question but most of the answers
> don't have too much to do with the question.

Hehe, welcome to mailing lists ;).

> What I'd really like is an option (maybe it exists) in ZFS to say when a
> block fails a checksum, tell me which file it affects

It does exactly that.

> I have read reports on this list that show ZFS does panic the system by
> default in some cases. It may not have been for checksum failures, I
> have no idea why it did, but enough people wrote about crashed boxes to
> make me ask the question I asked.

I've never heard of or experienced anything like that.
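For reference, the affected files show up in "zpool status -v" once a
block has unrecoverable checksum errors; the pool name below is
illustrative and the exact output wording varies by release:

  $ zpool status -v rpool
  ...
  errors: Permanent errors have been detected in the following files:
          /export/home/stu/somefile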
Re: [zfs-discuss] ZFS ok for single disk dev box?
I asked what I thought was a simple question but most of the answers don't
have too much to do with the question. Now it seems to be an argument of
your filesystem is better than any other filesystem. I don't think it is,
because I have seen the horror stories lurking on this list. I had no
intention to get into this and I think you should have no intention
either. I like ZFS, I use it at work, and I am not here to knock it.

> 1) Anecdotal evidence is nearly worthless in matters of technology.

Agreed, but I fail to see the relevance. Bug reports on this list aren't
worthless or the list wouldn't exist.

> 2) Data corruption does happen, and HDD manufacturers can even pin a
>    number to it (the typical bit error rate on modern HDDs is around
>    10^-13, i.e. one bit error per ~10TB transferred). That it didn't
>    hit your sensitive data but only some random pixel in an MPEG movie
>    is good for you. But ZFS was built to handle environments where all
>    data is critically important.

I don't think I have 10TB of source code ;) Other filesystems also handle
critically important data. Every design has its tradeoffs and I don't
believe ZFS is superior to everything else, although it has many nice
management features which aren't available in the same feature set
elsewhere. I am not criticising ZFS, but I don't believe it solves every
problem either.

> 3) Data corruption also happens in-transit on the SATA/SAS buses and
>    in memory (that's why there is such a thing as ECC memory).

Right.

> 4) If it so bothers you, simply set checksum=off and fly without the
>    parachute (a single core of a modern CPU can checksum at a rate
>    upwards of 4GB/s, but if the few CPU cycles are so important to you,
>    turn it off).

You're making up imaginary motives and blaming them on me? I didn't say I
don't want to spend cycles on checksumming. I said I don't want to lose a
system because of a filesystem error. There's no need to be snide or
condescending. Maybe you need a vacation? Who's your boss?

> > In this specific use case I would rather have a system that's still
> > bootable and runs as best it can than an unbootable system that has
> > detected an integrity problem, especially at this point in ZFS's
> > life. If ZFS would not panic the kernel and give the option to fail
> > or mark file(s) bad, I would like it more.
>
> ZFS doesn't panic in case of an unrecoverable single-block error, it
> simply returns an I/O error to the calling application. The panic only
> *can* take place in case of a catastrophic pool failure and isn't the
> default anyway. See man zpool(1M) for the description of the "failmode"
> option.

ZFS is not perfect, and although it may be designed to do what you say, I
think errors in ZFS are more likely than bit errors on hard drives. I'm
betting on hardware, and /in this scenario/ I would prefer a filesystem
that tolerates errors, even ignorantly, rather than protecting me from
myself. What I'd really like is an option (maybe it exists) in ZFS to say
when a block fails a checksum, tell me which file it affects and let me
decide to proceed or dump.

> > But having the ability to manage the disk with one pool and the other
> > nice features like compression, plus the fact it works nicely on good
> > hardware, make it hard to go back once you made the jump. Choices,
> > choices.
>
> So you want to enable compression (which is a huge CPU hog) and worry
> about checksumming (which is tiny in comparison)?

Yes, you got it right this time. You're the one trying to put words in my
mouth. Nowhere did I ever suggest CPU cycles are an issue. The issue is
what I said. Scroll up.

> If you're compressing data, you've got all the more reason to enable
> checksumming, since compression tends to make all data corruption much,
> much worse (e.g. that's why a single-bit error in a compressed MPEG
> stream doesn't simply slightly alter the color of a single pixel, but
> typically instead results in a whole macroblock or row of macroblocks
> messing up completely).

Sounds reasonable.

> >>> Even if your system does crash, at least you now have an
> >>> opportunity to recognize there is a problem, and think about your
> >>> backups, rather than allowing the corruption to proliferate.
> >
> > This isn't a production box; as I said it's an unused PC with a
> > single drive, and I don't have anybody's bank accounts on it. I can
> > rsync whatever I work on that day to a backup server. It won't be a
> > disaster if UFS suddenly becomes unreliable and I lose a file or two,
> > or if a drive fails, but it would be very annoying if ZFS barfed on a
> > technicality and I had to reinstall the whole OS because of a kernel
> > panic and an unbootable system.
>
> As noted before, simple checksum errors won't panic your box, and
> neither will catastrophic pool failure (the default failmode=wait). You
> have to explicitly tell ZFS that you want it to panic your system in
> this situation.

I have read reports on this list that show ZFS does panic the system by
default in some cases. It may not have been for checksum failures, I have
no idea why it did, but enough people wrote about crashed boxes to make me
ask the question I asked.
Re: [zfs-discuss] ZFS ok for single disk dev box?
> Now that is interesting. But how do you do a receive before you
> reinstall? Live cd??

Just boot off of the CD (or jumpstart server) to single user mode. Format
your new disk, create a zpool, zfs recv, installboot (or installgrub),
reboot and done.
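A rough sketch of that recovery sequence (the device name, pool name, BE
name and backup stream location are all illustrative; SPARC uses
installboot where x86 uses installgrub):

  # booted from install media into single-user mode
  zpool create -f rpool c0t0d0s0
  zfs receive -Fdu rpool < /backup/rpool.zfs    # or pipe over ssh/nc
  zpool set bootfs=rpool/ROOT/solaris rpool
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
      /dev/rdsk/c0t0d0s0
  reboot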
Re: [zfs-discuss] ZFS snapshot used space question
For illumos-based distributions, there is a "written" and "written@"
property that shows the amount of data written to each snapshot. This
helps to clear up the confusion over the way the "used" property is
accounted.
https://www.illumos.org/issues/1645

-- richard

On Aug 29, 2012, at 11:12 AM, "Truhn, Chad" wrote:

> All,
>
> I apologize in advance for what appears to be a question asked quite
> often, but I am not sure I have ever seen an answer that explains it.
> This may also be a bit long-winded, so I apologize for that as well.
>
> I would like to know how much unique space each individual snapshot is
> using.
>
> I have a ZFS filesystem that shows:
>
> $ zfs list -o space rootpool/export/home
> NAME                  AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> rootpool/export/home  5.81G  14.4G  8.81G     5.54G   0              0
>
> So reading this I see that I have a total of 14.4G of space used by this
> dataset. Currently 5.54G is "active" data that is available on the
> normal filesystem, and 8.81G is used in snapshots. 8.81G + 5.54G = 14.4G
> (roughly). I 100% agree with these numbers and the world makes sense.
>
> This is also backed up by:
>
> $ zfs get usedbysnapshots rootpool/export/home
> NAME                  PROPERTY         VALUE  SOURCE
> rootpool/export/home  usedbysnapshots  8.81G  -
>
> Now if I wanted to see how much space any individual snapshot is
> currently using, I would like to think that this would show me:
>
> $ zfs list -ro space rootpool/export/home
> NAME                            AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> rootpool/export/home            5.81G  14.4G  8.81G     5.54G   0              0
> rootpool/export/home@week3      -      202M   -         -       -              -
> rootpool/export/home@week2      -      104M   -         -       -              -
> rootpool/export/home@7daysago   -      1.37M  -         -       -              -
> rootpool/export/home@6daysago   -      1.20M  -         -       -              -
> rootpool/export/home@5daysago   -      1020K  -         -       -              -
> rootpool/export/home@4daysago   -      342K   -         -       -              -
> rootpool/export/home@3daysago   -      1.28M  -         -       -              -
> rootpool/export/home@week1      -      0      -         -       -              -
> rootpool/export/home@2daysago   -      0      -         -       -              -
> rootpool/export/home@yesterday  -      360K   -         -       -              -
> rootpool/export/home@today      -      1.26M  -         -       -              -
>
> So normal logic would tell me that if USEDSNAP is 8.81G and is composed
> of 11 snapshots, I could add up the size of each of those snapshots and
> that would roughly equal 8.81G. So, time to break out the calculator:
>
> 202M + 104M + 1.37M + 1.20M + 1020K + 342K + 1.28M + 0 + 0 + 360K +
> 1.26M equals... ~312M!
>
> That is nowhere near 8.81G. I would accept it even if it was within 15%,
> but it's not even close. That's definitely not metadata or ZFS overhead
> or anything.
>
> I understand that snapshots are just the delta between the time when the
> snapshot was taken and the current "active" filesystem and are truly
> just references to blocks on disk rather than a "copy". I also
> understand how two (or more) snapshots can reference the same block on a
> disk while there is still only that one block used. If I delete a recent
> snapshot I may not save as much space as advertised, because some may be
> inherited by a "parent" snapshot. But that inheritance is not creating
> duplicate used space on disk, so it doesn't justify the huge difference
> in sizes.
>
> But even with this logic in place, there is currently 8.81G of blocks
> referred to by snapshots which are not currently on the "active"
> filesystem, and I don't believe anyone can argue with that. Can
> something show me how much space a single snapshot has reserved?
>
> I searched through some of the archives and found this thread
> (http://mail.opensolaris.org/pipermail/zfs-discuss/2012-August/052163.html)
> from early this month and I feel as if I have the same problem as the
> OP, but hopefully attacking it with a little more background. I am not
> arguing with discrepancies between df/du and zfs output and I have read
> the Oracle documentation about it, but haven't found what I feel should
> be a simple answer. I currently have a ticket open with Oracle, but I am
> getting answers to all kinds of questions except for the question I am
> asking, so I am hoping
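For illustration, the per-snapshot write accounting Richard mentions can
be pulled into one listing (a sketch; the "written" column exists only on
builds with the illumos 1645 feature, and the dataset name is taken from
the example above):

  $ zfs list -r -t snapshot -o name,used,referenced,written \
        rootpool/export/home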
Re: [zfs-discuss] ZFS ok for single disk dev box?
Thank you.
Re: [zfs-discuss] ZFS ok for single disk dev box?
> > would be very annoying if ZFS barfed on a technicality and I had to
> > reinstall the whole OS because of a kernel panic and an unbootable
> > system.
>
> It shouldn't do that.

I agree, but it seems like other people have had it happen.

> Plus, if you look around a bit, you'll find some tutorials to back up
> the entire OS using zfs send-receive. So even if for some reason the
> OS becomes unbootable (e.g. blocks on some critical file are corrupted,
> which would cause a panic/crash no matter what filesystem you use), the
> "reinstall" process is basically just a zfs send-receive plus
> installing the bootloader, so it can be VERY fast.

Now that is interesting. But how do you do a receive before you reinstall?
Live cd??

Thanks
Re: [zfs-discuss] ZFS ok for single disk dev box?
On Thu, Aug 30, 2012 at 9:08 PM, Nomen Nescio wrote:
> In this specific use case I would rather have a system that's still
> bootable and runs as best it can

That's what would happen if the corruption hits only part of the disk
(e.g. a bad sector).

> than an unbootable system that has detected an integrity problem,
> especially at this point in ZFS's life. If ZFS would not panic the
> kernel and give the option to fail or mark file(s) bad,

You'd be unable to access that particular file. Access to other files
would still be fine.

> it would be very annoying if ZFS barfed on a technicality and I had to
> reinstall the whole OS because of a kernel panic and an unbootable
> system.

It shouldn't do that.

Plus, if you look around a bit, you'll find some tutorials to back up the
entire OS using zfs send-receive. So even if for some reason the OS
becomes unbootable (e.g. blocks on some critical file are corrupted,
which would cause a panic/crash no matter what filesystem you use), the
"reinstall" process is basically just a zfs send-receive plus installing
the bootloader, so it can be VERY fast.

This is what I do on linux (ubuntu + zfsonlinux). Two notebooks and one
USB disk (which functions as a rescue/backup disk) basically store the
same copy of the OS dataset, with very small variations (only four files)
for each environment. I can even update one of them and copy the update
result (using incremental send) to the others, making sure I will always
have the same working environment no matter which notebook I'm working
on.

-- Fajar
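A sketch of the backup side of that scheme (pool and snapshot names are
hypothetical; -R sends the whole dataset tree, and -i sends only the
changes since the previous snapshot):

  # initial full copy of the root pool to a rescue USB pool
  zfs snapshot -r rpool@backup1
  zfs send -R rpool@backup1 | zfs receive -Fdu usbpool

  # later: send only what changed since the last backup
  zfs snapshot -r rpool@backup2
  zfs send -R -i @backup1 rpool@backup2 | zfs receive -Fdu usbpool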
Re: [zfs-discuss] ZFS ok for single disk dev box?
On 08/30/2012 04:22 PM, Anonymous wrote:
>> On 08/30/2012 12:07 PM, Anonymous wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about
>>> loading Solaris on it for a development box since I use Studio @work
>>> and like it better than gcc. I was thinking maybe it isn't so smart
>>> to use ZFS since it has only one drive. If ZFS detects something bad
>>> it might kernel panic and lose the whole system right? I realize UFS
>>> /might/ be ignorant of any corruption but it might be more usable and
>>> go happily on its way without noticing? Except then I have to size
>>> all the partitions and lose out on compression etc. Any suggestions
>>> thankfully received.
>>
>> Simply set copies=2 and go on your merry way. Works for me and
>> protects you from bit rot.
>
> That sounds interesting. How does ZFS implement that? Does it make sure
> to keep the pieces of the duplicate on different parts of the drive?

ZFS allows you to store up to 3 copies of a block. It does this normally
for metadata (stored in 3 copies, IIRC) and it is an option for user data
as well (via the "copies" property). The block allocator tries to locate
the different copies on different vdevs, if possible, and falls back to
the same vdev if not possible (of course, the position is not identical;
that would kind of defeat the purpose). If during read-back one copy is
found to be corrupted, the second copy is read, checked and, if it is
valid, the original corrupted copy is automatically repaired (rewritten)
- that's the whole idea behind the "self-healing" aspect of ZFS.

Cheers,
-- Saso
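A related practical note (pool name illustrative): you can force every
allocated block - and every extra copy - to be read back and verified,
with automatic repair from a surviving copy, by scrubbing:

  $ zpool scrub rpool
  $ zpool status rpool    # shows scrub progress and any repairs made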
Re: [zfs-discuss] ZFS ok for single disk dev box?
Hi Darren,

> On 08/30/12 11:07, Anonymous wrote:
> > Hi. I have a spare off the shelf consumer PC and was thinking about
> > loading Solaris on it for a development box since I use Studio @work
> > and like it better than gcc. I was thinking maybe it isn't so smart
> > to use ZFS since it has only one drive. If ZFS detects something bad
> > it might kernel panic and lose the whole system right? I realize UFS
> > /might/ be ignorant of any corruption but it might be more usable and
> > go happily on its way without noticing? Except then I have to size
> > all the partitions and lose out on compression etc. Any suggestions
> > thankfully received.
>
> If you are using Solaris 11 or any of the Illumos based distributions
> you have no choice: you must use ZFS as your root/boot filesystem.

I did not realize that. I was trying to decide between S10, which I use
at work although on Sun hardware, and S11, since I have no experience
with it.

> I would recommend that, if physically possible, you attach a second
> drive to make it a mirror.

I understand that is the best way to go.

> Personally I've run many many builds of Solaris on single disk laptop
> systems and never has it lost me access to my data. The only time I
> lost access to data on a single disk system was because of total hard
> drive failure. I run with copies=2 set on my home directory and any
> datasets I store data in when on a single disk system.
>
> However, much more importantly, ZFS does not preclude the need for
> off-system backups. Even with mirroring and snapshots you still have to
> have a backup of important data elsewhere. No file system and, more
> importantly, no hardware is that good.

Words to live by!

Thanks,
Stu
Re: [zfs-discuss] ZFS ok for single disk dev box?
> On 08/30/2012 12:07 PM, Anonymous wrote:
> > Hi. I have a spare off the shelf consumer PC and was thinking about
> > loading Solaris on it for a development box since I use Studio @work
> > and like it better than gcc. I was thinking maybe it isn't so smart
> > to use ZFS since it has only one drive. If ZFS detects something bad
> > it might kernel panic and lose the whole system right? I realize UFS
> > /might/ be ignorant of any corruption but it might be more usable and
> > go happily on its way without noticing? Except then I have to size
> > all the partitions and lose out on compression etc. Any suggestions
> > thankfully received.
>
> Simply set copies=2 and go on your merry way. Works for me and protects
> you from bit rot.

That sounds interesting. How does ZFS implement that? Does it make sure
to keep the pieces of the duplicate on different parts of the drive?

> Even if you do decide to put a second drive in at a later time, just
> remember, RAID is not a backup solution. I use deja-dup to back up my
> important files daily to an off-site machine for that.

Oh, I realize that, but this isn't a production machine, just an unused
lonely PC that could be running Solaris instead.
Re: [zfs-discuss] ZFS ok for single disk dev box?
> would be very annoying if ZFS barfed on a technicality and I had to
> reinstall the whole OS because of a kernel panic and an unbootable
> system.

Is this a known scenario with ZFS then? I can't recall hearing of this
happening. I've seen plenty of UFS filesystems dying with "panic: freeing
free" and then the ensuing fsck-athon convinces the user to just rebuild
the fs in question.

cheers,
--justin
Re: [zfs-discuss] ZFS ok for single disk dev box?
On 08/30/2012 04:08 PM, Nomen Nescio wrote:
>>> Hi. I have a spare off the shelf consumer PC and was thinking about
>>> loading Solaris on it for a development box since I use Studio @work
>>> and like it better than gcc. I was thinking maybe it isn't so smart
>>> to use ZFS since it has only one drive. If ZFS detects something bad
>>> it might kernel panic and lose the whole system right? I realize UFS
>>> /might/ be ignorant of any corruption but it might be more usable and
>>> go happily on its way without noticing? Except then I have to size
>>> all the partitions and lose out on compression etc. Any suggestions
>>> thankfully received.
>>
>> Suppose you start getting checksum errors. Then you *do* want to
>> notice.
>
> I'm not convinced. I understand the theoretical value of ZFS but it
> introduces a whole new layer of problems other filesystems don't have.
> Even if it's right in theory it doesn't always make things better in
> reality. I like the features it provides and not having to size
> filesystems like in the old days is great, but ZFS can and does have
> bugs and like anything else is not perfect. Aside from Microsoft, which
> used to be guaranteed to corrupt filesystems, I haven't ever had
> corruption that caused me any problems. Certainly there must have been
> corruptions because of software bugs and crappy hardware, but they had
> no visible effect, and that is good enough for me in this situation I
> asked about. I feel this issue is a little overblown given most of the
> world runs on other enterprise filesystems and the world hasn't come to
> an end yet. ZFS is an important step in the right direction but it
> doesn't mean you can't live without its error detection. We lived
> without it until now. What I find hard to live without is the
> management features it gives you, which is why I have a dilemma.

1) Anecdotal evidence is nearly worthless in matters of technology.

2) Data corruption does happen, and HDD manufacturers can even pin a
   number to it (the typical bit error rate on modern HDDs is around
   10^-13, i.e. one bit error per ~10TB transferred). That it didn't
   hit your sensitive data but only some random pixel in an MPEG movie
   is good for you. But ZFS was built to handle environments where all
   data is critically important.

3) Data corruption also happens in-transit on the SATA/SAS buses and
   in memory (that's why there is such a thing as ECC memory).

4) If it so bothers you, simply set checksum=off and fly without the
   parachute (a single core of a modern CPU can checksum at a rate
   upwards of 4GB/s, but if the few CPU cycles are so important to you,
   turn it off).

> In this specific use case I would rather have a system that's still
> bootable and runs as best it can than an unbootable system that has
> detected an integrity problem, especially at this point in ZFS's life.
> If ZFS would not panic the kernel and give the option to fail or mark
> file(s) bad, I would like it more.

ZFS doesn't panic in case of an unrecoverable single-block error, it
simply returns an I/O error to the calling application. The panic only
*can* take place in case of a catastrophic pool failure and isn't the
default anyway. See man zpool(1M) for the description of the "failmode"
option.

> But having the ability to manage the disk with one pool and the other
> nice features like compression, plus the fact it works nicely on good
> hardware, make it hard to go back once you made the jump. Choices,
> choices.

So you want to enable compression (which is a huge CPU hog) and worry
about checksumming (which is tiny in comparison)? If you're compressing
data, you've got all the more reason to enable checksumming, since
compression tends to make all data corruption much, much worse (e.g.
that's why a single-bit error in a compressed MPEG stream doesn't simply
slightly alter the color of a single pixel, but typically instead results
in a whole macroblock or row of macroblocks messing up completely).

>>> Even if your system does crash, at least you now have an opportunity
>>> to recognize there is a problem, and think about your backups, rather
>>> than allowing the corruption to proliferate.
>
> This isn't a production box; as I said it's an unused PC with a single
> drive, and I don't have anybody's bank accounts on it. I can rsync
> whatever I work on that day to a backup server. It won't be a disaster
> if UFS suddenly becomes unreliable and I lose a file or two, or if a
> drive fails, but it would be very annoying if ZFS barfed on a
> technicality and I had to reinstall the whole OS because of a kernel
> panic and an unbootable system.

As noted before, simple checksum errors won't panic your box, and neither
will catastrophic pool failure (the default failmode=wait). You have to
explicitly tell ZFS that you want it to panic your system in this
situation.

Cheers,
-- Saso
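For reference, the failmode behaviour Saso describes is a per-pool
property (pool name illustrative; valid values are wait, continue and
panic, with wait as the default):

  $ zpool get failmode rpool
  NAME   PROPERTY  VALUE  SOURCE
  rpool  failmode  wait   default

  $ zpool set failmode=continue rpool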
Re: [zfs-discuss] ZFS ok for single disk dev box?
> > Hi. I have a spare off the shelf consumer PC and was thinking about
> > loading Solaris on it for a development box since I use Studio @work
> > and like it better than gcc. I was thinking maybe it isn't so smart
> > to use ZFS since it has only one drive. If ZFS detects something bad
> > it might kernel panic and lose the whole system right? I realize UFS
> > /might/ be ignorant of any corruption but it might be more usable and
> > go happily on its way without noticing? Except then I have to size
> > all the partitions and lose out on compression etc. Any suggestions
> > thankfully received.
>
> Suppose you start getting checksum errors. Then you *do* want to
> notice.

I'm not convinced. I understand the theoretical value of ZFS but it
introduces a whole new layer of problems other filesystems don't have.
Even if it's right in theory it doesn't always make things better in
reality. I like the features it provides and not having to size
filesystems like in the old days is great, but ZFS can and does have bugs
and like anything else is not perfect. Aside from Microsoft, which used
to be guaranteed to corrupt filesystems, I haven't ever had corruption
that caused me any problems. Certainly there must have been corruptions
because of software bugs and crappy hardware, but they had no visible
effect, and that is good enough for me in this situation I asked about. I
feel this issue is a little overblown given most of the world runs on
other enterprise filesystems and the world hasn't come to an end yet. ZFS
is an important step in the right direction but it doesn't mean you can't
live without its error detection. We lived without it until now. What I
find hard to live without is the management features it gives you, which
is why I have a dilemma.

In this specific use case I would rather have a system that's still
bootable and runs as best it can than an unbootable system that has
detected an integrity problem, especially at this point in ZFS's life. If
ZFS would not panic the kernel and give the option to fail or mark
file(s) bad, I would like it more.

But having the ability to manage the disk with one pool and the other
nice features like compression, plus the fact it works nicely on good
hardware, make it hard to go back once you made the jump. Choices,
choices.

> > Even if your system does crash, at least you now have an opportunity
> > to recognize there is a problem, and think about your backups, rather
> > than allowing the corruption to proliferate.

This isn't a production box; as I said it's an unused PC with a single
drive, and I don't have anybody's bank accounts on it. I can rsync
whatever I work on that day to a backup server. It won't be a disaster if
UFS suddenly becomes unreliable and I lose a file or two, or if a drive
fails, but it would be very annoying if ZFS barfed on a technicality and
I had to reinstall the whole OS because of a kernel panic and an
unbootable system.
Re: [zfs-discuss] ZFS snapshot used space question
> On Wed, Aug 29, 2012 at 8:58 PM, Timothy Coalson wrote:
> > As I understand it, the used space of a snapshot does not include
> > anything that is in more than one snapshot.
>
> True. It shows the amount that would be freed if you destroyed the
> snapshot right away. Data held onto by more than one snapshot cannot
> be removed when you destroy just one of them, obviously. The act of
> destroying a snapshot will likely change the USED value of the
> neighbouring snapshots though.

Yup, this is the same thing I came up with as well. Though I am a bit
disappointed in the results, at least things make sense again. Thank you
all for your help!
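As an aside, on releases that support the -n (dry-run) flag for zfs
destroy, you can preview how much space destroying a snapshot would
actually free without committing to it (snapshot name from the earlier
example; output wording is approximate):

  $ zfs destroy -nv rootpool/export/home@week3
  would destroy rootpool/export/home@week3
  would reclaim 202M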
Re: [zfs-discuss] ZFS ok for single disk dev box?
On 08/30/2012 12:07 PM, Anonymous wrote:
> Hi. I have a spare off the shelf consumer PC and was thinking about
> loading Solaris on it for a development box since I use Studio @work
> and like it better than gcc. I was thinking maybe it isn't so smart to
> use ZFS since it has only one drive. If ZFS detects something bad it
> might kernel panic and lose the whole system right? I realize UFS
> /might/ be ignorant of any corruption but it might be more usable and
> go happily on its way without noticing? Except then I have to size all
> the partitions and lose out on compression etc. Any suggestions
> thankfully received.

Simply set copies=2 and go on your merry way. Works for me and protects
you from bit rot.

Even if you do decide to put a second drive in at a later time, just
remember: RAID is not a backup solution. I use deja-dup to back up my
important files daily to an off-site machine for that.

-- Saso
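For example (dataset name illustrative); note that copies only applies to
blocks written after the property is set - existing data keeps its old
copy count:

  $ zfs set copies=2 rpool/export/home
  $ zfs get copies rpool/export/home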
Re: [zfs-discuss] ZFS ok for single disk dev box?
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Anonymous
>
> Hi. I have a spare off the shelf consumer PC and was thinking about
> loading Solaris on it for a development box since I use Studio @work
> and like it better than gcc. I was thinking maybe it isn't so smart to
> use ZFS since it has only one drive. If ZFS detects something bad it
> might kernel panic and lose the whole system right? I realize UFS
> /might/ be ignorant of any corruption but it might be more usable and
> go happily on its way without noticing? Except then I have to size all
> the partitions and lose out on compression etc. Any suggestions
> thankfully received.

Suppose you start getting checksum errors. Then you *do* want to notice.
Even if your system does crash, at least you now have an opportunity to
recognize there is a problem, and think about your backups, rather than
allowing the corruption to proliferate.
Re: [zfs-discuss] ZFS ok for single disk dev box?
> has only one drive. If ZFS detects something bad it might kernel panic
> and lose the whole system right?

What do you mean by "lose the whole system"? A panic is not a bad thing,
and also does not imply that the machine will not reboot successfully. It
certainly doesn't guarantee your OS will be trashed.

> I realize UFS /might/ be ignorant of any corruption but it might be
> more usable and go happily on its way without noticing?

UFS has a mount option "onerror" which defines what the OS will do if
there is a problem detected with a given filesystem. I think the default
is "panic" anyway. Check the mount_ufs manpage for details.

Your answer is to take regular backups, rather than bury your head in the
sand.

cheers,
--justin
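For reference (device and mount point are illustrative; see mount_ufs(1M)
for the authoritative list), the onerror behaviour is chosen at mount
time and accepts panic, lock or umount:

  # mount -F ufs -o onerror=lock /dev/dsk/c0t0d0s7 /export/home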
Re: [zfs-discuss] ZFS ok for single disk dev box?
On 08/30/12 11:07, Anonymous wrote:
> Hi. I have a spare off the shelf consumer PC and was thinking about
> loading Solaris on it for a development box since I use Studio @work
> and like it better than gcc. I was thinking maybe it isn't so smart to
> use ZFS since it has only one drive. If ZFS detects something bad it
> might kernel panic and lose the whole system right? I realize UFS
> /might/ be ignorant of any corruption but it might be more usable and
> go happily on its way without noticing? Except then I have to size all
> the partitions and lose out on compression etc. Any suggestions
> thankfully received.

If you are using Solaris 11 or any of the Illumos based distributions you
have no choice: you must use ZFS as your root/boot filesystem.

I would recommend that, if physically possible, you attach a second drive
to make it a mirror.

Personally I've run many, many builds of Solaris on single disk laptop
systems and never has it lost me access to my data. The only time I lost
access to data on a single disk system was because of total hard drive
failure. I run with copies=2 set on my home directory and any datasets I
store data in when on a single disk system.

However, much more importantly, ZFS does not preclude the need for
off-system backups. Even with mirroring and snapshots you still have to
have a backup of important data elsewhere. No file system and, more
importantly, no hardware is that good.

-- Darren J Moffat
[zfs-discuss] ZFS ok for single disk dev box?
Hi. I have a spare off the shelf consumer PC and was thinking about
loading Solaris on it for a development box, since I use Studio @work and
like it better than gcc. I was thinking maybe it isn't so smart to use
ZFS since it has only one drive. If ZFS detects something bad it might
kernel panic and lose the whole system, right? I realize UFS /might/ be
ignorant of any corruption, but it might be more usable and go happily on
its way without noticing? Except then I have to size all the partitions
and lose out on compression etc. Any suggestions thankfully received.