Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Orvar Korvar
Ok, so mirrors resilver faster. But it is not uncommon that another disk shows problems during resilver (for instance r/w errors); this scenario would mean your entire raid is gone, right? If you are using mirrors, and one disk crashes and you start a resilver, and then the other disk shows r/w

Re: [zfs-discuss] Large scale performance query

2011-08-05 Thread Orvar Korvar
Are mirrors really a realistic alternative? I mean, if I have to resilver a raid with 3TB discs, it can take days, I suspect. With 4TB disks it can take a week, maybe. So, if I use mirrors and one disk breaks, then I only have single redundancy while the mirror repairs. The repair will take long

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-25 Thread Orvar Korvar
How long have you been using an SSD? Do you see any performance decrease? I mean, ZFS does not support TRIM, so I wonder about long-term effects... -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-25 Thread Orvar Korvar
There is at least a common perception (misperception?) that devices cannot process TRIM requests while they are 100% busy processing other tasks. Just to confirm; SSD disks can do TRIM while processing other tasks? I heard that Illumos is working on TRIM support for ZFS and will release

Re: [zfs-discuss] Large scale performance query

2011-07-25 Thread Orvar Korvar
Wow. If you ever finish this monster, I would really like to hear more about the performance and how you connected everything. Could be useful as a reference for anyone else building big stuff. *drool*

Re: [zfs-discuss] SSD vs hybrid drive - any advice?

2011-07-24 Thread Orvar Korvar
I am using the OCZ Vertex 3, 240GB. When I boot Solaris 11 Express, on the splash screen there is a small red line traveling from left to right. With this SSD, the red line travels twice across the screen before S11E has booted up. With a hard disk, the red line traveled several times

[zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Orvar Korvar
I don't get it. I created users with the System - Administration - Users and Groups menu. I thought every user would get his own ZFS filesystem? But when I do # zfs list I cannot see a zfs listing for each user. I only see this: rpool/export 60,4G 131G 32K /export

Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Orvar Korvar
cat /etc/passwd
meme:x:1000:1000:Michael:/export/home/meme:/bin/bash
amme:x:1001:1000:Amme:/export/home/amme:/bin/bash
utwww:x:1002:1001:ut admin web server user:/tmp:/bin/sh
$ ls -l /export/home
total 11
drwxr-xr-x 36 amme user 55 2011-07-24 18:05 amme
drwxr-xr-x 39 meme user 77

Re: [zfs-discuss] Each user has his own zfs filesystem??

2011-07-24 Thread Orvar Korvar
Ah, ok. That explains it all. Thanks. And yes, df did the trick. So, if I want each user to have his own zfs filesystem, I just create the filesystem, copy everything from /export/home, and then fire up the GUI and point to the new zfs filesystem? That is all? I don't have to edit
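For completeness, the manual steps discussed in this thread can be sketched roughly like this. The user name meme and the rpool/export/home layout are taken from the thread; the rest is a hedged sketch, not an official procedure:

```shell
# Hedged sketch -- run as root while the user is logged out.
mv /export/home/meme /export/home/meme.old   # set the old directory aside
zfs create rpool/export/home/meme            # inherits mountpoint /export/home/meme
cp -rp /export/home/meme.old/. /export/home/meme/   # copy the data back
zfs list -r rpool/export/home                # verify the new filesystem
# Once everything checks out: rm -rf /export/home/meme.old
```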

Re: [zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-21 Thread Orvar Korvar
Have you tried to boot from the Solaris 11 Express LiveCD and compare?

Re: [zfs-discuss] recover raidz from fried server ??

2011-07-13 Thread Orvar Korvar
You could buy an LSI2008-based JBOD SATA card. It typically has 8 SATA ports. LSI2008 works directly on S11E, out of the box. That card gives very good performance, typically close to 1 GB/s transfer speed. And when you switch mobo, just bring the LSI2008 card to the new mobo, and you are set.

Re: [zfs-discuss] Replacement disks for Sun X4500

2011-07-13 Thread Orvar Korvar
Did your x4500 cope with 3TB disks without any modifications? I heard the BIOS does not support 2TB disks?

[zfs-discuss] How about 4KB disk sectors?

2011-07-13 Thread Orvar Korvar
So, what is the story about 4KB disk sectors? Should such disks be avoided with ZFS? Or, no problem? Or, need to modify some config file before usage?

Re: [zfs-discuss] Pure SSD Pool

2011-07-12 Thread Orvar Korvar
I am now using S11E and an OCZ Vertex 3, 240GB SSD disk. I am using it in a SATA 2 port (not the new SATA 6Gbps). The PC seems to work better now; the worst lag is gone. For instance, I am using SunRay, and if my girlfriend is using the PC and I am doing bit torrenting, the PC could lock up

Re: [zfs-discuss] Finding disks [was: # disks per vdev]

2011-07-05 Thread Orvar Korvar
The LSI2008 chipset is supported and works very well. I would actually use 2 vdevs, 8 disks in each. And I would configure each vdev as raidz2. Maybe use one hot spare. And I also have personal, subjective reasons: I like to use the number 8 in computers. 7 is an ugly number. Everything is

Re: [zfs-discuss] Changed to AHCI, can not access disk???

2011-07-05 Thread Orvar Korvar
I have already formatted one disk, so I can not try this anymore. (But importing the zpool with the name rpool and exporting the rpool again was successful. I can now use the disk as usual. But this did not work on the other disk, so I formatted it.)

Re: [zfs-discuss] 700GB gone? zfs list and df differs!

2011-07-04 Thread Orvar Korvar
The problem is more clearly stated here. Look, 700GB is gone (the correct number is 620GB)! First I do zfs list on TempStorage/Backup, which reports 800GB. This is correct. Then I do df -h, which reports only 180GB, which is not correct. So, it should be 800GB of data, but df reports only

Re: [zfs-discuss] 700GB gone? zfs list and df differs!

2011-07-04 Thread Orvar Korvar
PS. I do not have any snapshots:
root@frasse:~# zfs list
NAME                 USED   AVAIL  REFER  MOUNTPOINT
TempStorage          916G   45,1G  37,3G  /mnt/TempStorage
TempStorage/Backup   799G   45,1G   177G
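A likely explanation for gaps like this is that zfs list shows USED including all descendants, while df only sees the one mounted filesystem. These read-only queries (pool names taken from the thread) help pin down where the space actually sits; this is a hedged sketch, not a diagnosis:

```shell
# Safe, read-only queries to account for the missing space.
zfs list -t all -r TempStorage      # include snapshots and child datasets
zfs list -o space -r TempStorage    # break USED down into snap/child/refer
df -h /mnt/TempStorage/Backup       # df reports only this mounted dataset
```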

[zfs-discuss] Changed to AHCI, can not access disk???

2011-07-04 Thread Orvar Korvar
I have created some threads here about possible bugs in ZFS or bugs in beadm (the PC reboots when I try to boot - why? bug in beadm??). But now it seems that maybe there are no problems with ZFS. I will update my threads with a SOLVED tag when/if I find the solution. Here is my problem: I

Re: [zfs-discuss] 700GB gone?

2011-07-01 Thread Orvar Korvar
I am using 64-bit S11E. Everything worked fine earlier. But now I suspect the disk is breaking down; it behaves weirdly. I have several partitions: 1) OpenSolaris b134 upgraded to S11E 2) WinXP 3) FAT32 4) ZFS storage pool of 900GB. Earlier, everything was fine. But suddenly OpenSolaris does not

[zfs-discuss] 700GB gone?

2011-06-30 Thread Orvar Korvar
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I can only see 300GB. Where is the rest? Is there a command I can do to reach the rest of the data? Will scrub help?

Re: [zfs-discuss] ZFS, Oracle and Nexenta

2011-05-24 Thread Orvar Korvar
The NetApp lawsuit is settled. No conflicts there. Regarding ZFS, it is open under the CDDL license. The source code that is already open stays open. Nexenta is using the open-sourced version of ZFS. Oracle might close future ZFS versions, but Nexenta's ZFS is open and cannot be closed.

[zfs-discuss] Reboots when importing old rpool

2011-05-17 Thread Orvar Korvar
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the old hard disk and tried to import it with: # zpool import -f long id number Old_rpool but the computer reboots. Why is that? On my old hard disk, I have 10-20 BEs, starting with OpenSolaris 2009.06 and upgraded to
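When an import panics or reboots the box, a common first step is to import under an alternate root, and read-only where the build supports it, so nothing mounts over the live system and nothing gets written. This is a hedged sketch: the pool id below is made up, and -o readonly=on requires a build with read-only import support (roughly b148 and later):

```shell
zpool import                 # list importable pools with their numeric ids
# Import under an alternate root so mountpoints don't clash with the live BE.
zpool import -f -R /a -o readonly=on 1234567890123 Old_rpool  # id is made up
```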

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-06 Thread Orvar Korvar
Yes, you create three groups as you described and insert them into your zpool (the zfs raid). So you have only one ZFS raid, consisting of three groups. You don't have three different ZFS raids (unless you configure that). You can also later swap one disk for a larger one and repair the group. Then

Re: [zfs-discuss] ZFS and TRIM - No need for TRIM

2011-02-06 Thread Orvar Korvar
Ok, so can we say that the conclusion for a home user is: 1) Using an SSD without TRIM is acceptable. The only drawback is that without TRIM, the SSD will write much more, which affects its lifetime. Because when the SSD has written enough, it will break. I don't have high demands for my OS disk, so

Re: [zfs-discuss] Identifying drives (SATA)

2011-02-06 Thread Orvar Korvar
Will this not ruin the zpool? If you overwrite one of the discs in the zpool, won't the zpool break, so you need to repair it?

Re: [zfs-discuss] Identifying drives (SATA)

2011-02-06 Thread Orvar Korvar
Heh. My bad. Didn't read the command. Yes, that should be safe.

Re: [zfs-discuss] Identifying drives (SATA)

2011-02-06 Thread Orvar Korvar
Roy, I read your question on the OpenIndiana mailing lists: how can you rebalance your huge raid without implementing block pointer rewrite? You have an old vdev full of data, and now you have added a new vdev - and you want the data to be evenly spread out over all vdevs. I answer here because it is
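The usual workaround, absent block pointer rewrite, is to copy the data onto itself with send/receive so that newly written blocks stripe across all current vdevs. A hedged sketch with made-up pool and dataset names:

```shell
# Poor man's rebalance: rewrite the data so it spreads over all vdevs.
zfs snapshot -r tank/data@rebalance
zfs send -R tank/data@rebalance | zfs receive -u tank/data.new
# After verifying the copy:
#   zfs destroy -r tank/data
#   zfs rename tank/data.new tank/data
```

The price is needing free space for a full second copy while both datasets exist.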

Re: [zfs-discuss] ZFS/Drobo (Newbie) Question

2011-02-05 Thread Orvar Korvar
If you use drives of varying size, zfs will use the smallest capacity. Say you have 1TB + 2TB + 2TB; then ZFS creates a raid with 1TB-large drives. A 3 x 1TB raid will be the result. One ZFS raid consists of vdevs, that is, groups of drives. A vdev can be configured as raidz1 (raid-5) or

Re: [zfs-discuss] ZFS and TRIM

2011-02-05 Thread Orvar Korvar
Ok, I read a bit more on TRIM. It seems that without TRIM, there will be more unnecessary reads and writes on the SSD, the result being that writes can take a long time. A) So, how big of a problem is it? Sun has long sold SSDs (for L2ARC and ZIL), and they don't use TRIM? So, is TRIM not a

Re: [zfs-discuss] ZFS and TRIM - No need for TRIM

2011-02-05 Thread Orvar Korvar
So... Sun's SSDs used for ZIL and L2ARC do not use TRIM, so how big a problem is the lack of TRIM in ZFS, really? It should not hinder anyone from running without TRIM? I didn't really understand the answer to this question. Because Sun's SSDs do not use TRIM - and it is not considered a hindrance? A home

Re: [zfs-discuss] ZFS and TRIM

2011-02-04 Thread Orvar Korvar
So, the bottom line is that Solaris 11 Express cannot use TRIM on an SSD? Is that the conclusion? So, it might not be a good idea to use an SSD?

Re: [zfs-discuss] ZFS and L2ARC memory requirements?

2011-02-04 Thread Orvar Korvar
100TB storage? Cool! What is the hardware? How many discs? Gief me ze hardware! :oP

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-18 Thread Orvar Korvar
...If this is a general rule, maybe it will be worth considering using SHA-512 truncated to 256 bits to get more speed... Doesn't it need more investigation whether truncating 512 bits to 256 bits gives security equivalent to a plain 256-bit hash? Maybe truncation will introduce some bias?

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-18 Thread Orvar Korvar
Totally off topic: Very interesting. Did you produce some papers on this? Where do you work? Seems like a very fun place to work! BTW, I thought about this. What do you say? Assume I want to compress data and I succeed in doing so. And then I transfer the compressed data. So all the information

Re: [zfs-discuss] Hard Errors on HDDs

2011-01-03 Thread Orvar Korvar
Maybe a cable is loose? Reinsert all the cables into all drives? And the controller card? Yes, ZFS detects such problems.

Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-28 Thread Orvar Korvar
There are problems with SandForce controllers, according to forum posts. Buggy firmware. And in practice, SandForce is far below its theoretical values. I expect Intel to have fewer problems.

Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
A noob question: these drives that people talk about, can you use them as a system disc too? Install Solaris 11 Express on them? Or can you only use them as L2ARC or ZIL?

Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
Your system drive on a Solaris system generally doesn't see enough I/O activity to require the kind of IOPS you can get out of most modern SSDs. My system drive sees a lot of activity, to the degree that everything is going slow. I have a SunRay that my girlfriend uses, and I have 5-10 torrents

Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Orvar Korvar
I am waiting for the next-gen Intel SSD drives, G3. They are arriving very soon. And from what I can infer by reading here, I can use one without issues. Solaris will recognize the Intel SSD drive without any drivers needed, or whatever? Intel's new SSD should work with Solaris 11 Express, yes?

Re: [zfs-discuss] is opensolaris support ended?

2010-11-10 Thread Orvar Korvar
You can upgrade with Update Manager to b134, which is the last build from Sun. You can also upgrade to b147 if you switch to OpenIndiana. Read on the OpenIndiana web site.

Re: [zfs-discuss] Newbie ZFS Question: RAM for Dedup

2010-10-20 Thread Orvar Korvar
Sometimes you read about people having low performance when deduping: it is because they have too little RAM.

Re: [zfs-discuss] Supermicro AOC-USAS2-L8i

2010-10-17 Thread Orvar Korvar
Does it support 3TB drives?

Re: [zfs-discuss] Finding corrupted files

2010-10-17 Thread Orvar Korvar
budy, here are some links. Remember, the reason you get corrupted files is that ZFS detects them. Probably you got corruption earlier as well, but your hardware did not notice it. This is called silent corruption. But ZFS is designed to detect and correct silent corruption, which no normal

Re: [zfs-discuss] Finding corrupted files

2010-10-13 Thread Orvar Korvar
Budy, if you are using raid-5 or raid-6 underneath ZFS, then you should know that raid-5/6 might corrupt data. See here for lots of technical articles on why raid-5 is bad: http://www.baarf.com/ raid-6 is not better. I can show you links about raid-6 not being safe. It is a good thing you run ZFS,

Re: [zfs-discuss] Has anyone seen zpool corruption with VirtualBox shared folders?

2010-09-22 Thread Orvar Korvar
Now this is a testament to the power of ZFS. Only ZFS is so sensitive that it reported these errors to you. Had you run another filesystem, you would never have gotten a notice that your data is slowly being corrupted by some faulty hardware. :o)

Re: [zfs-discuss] Please warn a home user against OpenSolaris under VirtualBox under WinXP ; )

2010-09-22 Thread Orvar Korvar
There was a guy doing that: Windows as host and OpenSolaris as guest with raw access to his disks. He lost his 12 TB of data. It turned out that VirtualBox doesn't honor the write flush flag (or something similar). In other words, I would never ever do that. Your data is safer with Windows only and

Re: [zfs-discuss] space_map again nuked!!

2010-09-22 Thread Orvar Korvar
Did you see this thread? http://opensolaris.org/jive/thread.jspa?messageID=500659#500659 He had problems with ZFS. It turned out to be faulty RAM. ZFS is so sensitive that it detects and reports problems to you. No other filesystem does that, so you think ZFS is problematic and switch. But the other

Re: [zfs-discuss] file recovery on lost RAIDZ array

2010-09-13 Thread Orvar Korvar
That sounds strange. What happened? You used raidz1? You can roll your zpool back to an earlier snapshot. Have you tried that? Or, you can mount your pool as it was within the last 30 seconds or so, I think.

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Orvar Korvar
I was thinking of deleting all zfs snapshots before doing zfs send/receive to another new zpool. Then everything would be defragmented, I thought. (I assume snapshots work this way: I snapshot once and do some changes, say delete file A and edit file B. When I delete the snapshot, file A is

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Orvar Korvar
To summarize: A) resilver does not defrag. B) zfs send/receive to a new zpool means the data will be defragged. Correctly understood?

Re: [zfs-discuss] How to migrate to 4KB sector drives?

2010-09-12 Thread Orvar Korvar
No replies. Does this mean that you should avoid large drives with 4KB sectors, that is, new drives? ZFS does not handle new drives?

Re: [zfs-discuss] resilver = defrag?

2010-09-11 Thread Orvar Korvar
I am not really worried about fragmentation. I was just wondering if attaching new drives and doing zfs send/receive to a new zpool would count as defrag. But apparently not. Anyway, thank you for your input!

[zfs-discuss] resilver = defrag?

2010-09-09 Thread Orvar Korvar
A) Resilver = defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does zfs send | zfs receive mean the data will be defragged?

[zfs-discuss] How to migrate to 4KB sector drives?

2010-09-09 Thread Orvar Korvar
ZFS does not handle 4K-sector drives well; you need to create a new zpool with the 4K property (ashift) set. http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html Are there plans to allow resilver to handle 4K-sector drives?
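To see what an existing pool was created with, the ashift value can be read back with zdb (the pool name tank is hypothetical); ashift 9 means 512-byte sectors, ashift 12 means 4K sectors:

```shell
# Read-only: print the cached pool config and pick out the ashift line.
zdb -C tank | grep ashift    # ashift: 9 -> 512B sectors, ashift: 12 -> 4K
```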

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Orvar Korvar
Can you add another disk? Then you have three 7-disc vdevs. (Always use raidz2.)

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-06 Thread Orvar Korvar
Otherwise you can have 2 discs as hot spares: three 6-disc vdevs.

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-21 Thread Orvar Korvar
And by the way: wasn't there a comment by Linus Torvalds recently that people should move their low-quality code into the codebase??? ;) Anyone know the link? Good against the Linux fanboys. :o)

[zfs-discuss] Need a link on data corruption

2010-08-11 Thread Orvar Korvar
Someone posted about CERN having a bad network card which injected faulty bits into the data stream. And ZFS detected it, because of end-to-end checksums. Does anyone have more information on this?

Re: [zfs-discuss] Severe ZFS corruption, help needed.

2010-07-26 Thread Orvar Korvar
Have you posted on the FreeBSD forums?

Re: [zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-22 Thread Orvar Korvar
Ok, so the bandwidth will be cut in half, and some people use this configuration. But how bad is it to have the bandwidth cut in half? Will it be hardly noticeable? (Just an ordinary home server: some media files, ebooks, etc.)

[zfs-discuss] L2ARC and ZIL on same SSD?

2010-07-21 Thread Orvar Korvar
Are there any drawbacks to partitioning an SSD in two parts and using L2ARC on one partition and ZIL on the other? Any thoughts?
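Mechanically the setup is simple; what the thread debates is whether sharing one device's bandwidth hurts. A hedged sketch with made-up pool and slice names (the log slice is usually kept small, a few GB):

```shell
zpool add tank log c8t1d0s0     # small slice as the separate ZIL (slog)
zpool add tank cache c8t1d0s1   # remaining slice as L2ARC
zpool status tank               # confirm both devices are attached
```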

Re: [zfs-discuss] Expected throughput

2010-07-07 Thread Orvar Korvar
Something like this, maybe: http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect

Re: [zfs-discuss] never ending resilver

2010-07-05 Thread Orvar Korvar
If you have one zpool consisting of only one large raidz2, then you have a slow raid. To reach high speed, you need at most 8 drives in each raidz2. So one of the reasons it takes time is that you have too many drives in your raidz2. Everything would be much faster if you split your zpool
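The suggested layout, sketched with invented device names: 16 data disks as two 8-disk raidz2 vdevs instead of one wide vdev, so ZFS can stripe writes across both:

```shell
# Two 8-disk raidz2 vdevs in one pool (device names are placeholders).
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0
```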

Re: [zfs-discuss] Complete Linux Noob

2010-06-16 Thread Orvar Korvar
You can't expand a normal RAID, either, anywhere I've ever seen. Is this true? A vdev can be a group of discs configured as raidz1/mirror/etc. A zfs raid consists of several vdevs. You can add a new vdev whenever you want.

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-13 Thread Orvar Korvar
Great! Please report here so we can read about your impressions.

Re: [zfs-discuss] Virtual to physical migration

2010-05-02 Thread Orvar Korvar
You do know that OpenSolaris + VirtualBox can trash your ZFS raid? You can lose your data. There is a post about the write cache and ZFS and VirtualBox; I think you need to disable it?

Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-21 Thread Orvar Korvar
Great! Dominik, Oracle needs to silence FUD immediately. Proactive initiative. :o)

Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-09 Thread Orvar Korvar
ONStor sells a ZFS-based machine: http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1354658,00.html It seems more like FreeNAS or something?

Re: [zfs-discuss] bit-flipping in RAM...

2010-04-03 Thread Orvar Korvar
Have the ZFS data corruption researchers not been in touch with Jeff Bonwick and the ZFS team?

Re: [zfs-discuss] Large scale ZFS deployments out there (200 disks)

2010-02-28 Thread Orvar Korvar
Speaking of long boot times, I've heard that IBM Power servers boot in 90 minutes or more.

Re: [zfs-discuss] Idiots Guide to Running a NAS with ZFS/OpenSolaris

2010-02-19 Thread Orvar Korvar
I can strongly recommend this series of articles: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ Very good! :o)

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2010-02-15 Thread Orvar Korvar
Yes, if you value your data you should change from USB drives to normal drives. I heard that USB did some strange things? A normal connection such as SATA is more reliable.

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Orvar Korvar
A dumb question: I see 24 drives in an external chassis. I presume that chassis only holds drives; it does not hold a motherboard. How do you connect all the drives to your OpenSolaris server? Do you place them next to each other, and then you have three 8 SATA ports in your OpenSolaris

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Orvar Korvar
Ok, I see that the chassis contains a motherboard. So never mind that question. Another q: if it is possible to have a large chassis with lots of drives and the opensolaris box in another chassis, how do you connect them both?

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Orvar Korvar
This reminds me of an attorney who charged very much for a contract template he copied and gave to a client. To that, he responded: -You don't pay for me finding this template and copying it for you, which took me 5 minutes. You pay me because I sat 5 years in the university, and have 15 years of

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Orvar Korvar
100% uptime for 20 years? So what makes OpenVMS so much more stable than Unix? What is the difference?

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Orvar Korvar
1) A SAS HBA seems to be an I/O card which has SAS cable connections. It sits in the OSol server. It is basically just a simple I/O card, right? I hope these cards are cheap? 2) So I can buy a disk chassis with 24 disks, connect all disks to one SAS cable and connect that SAS cable to my OSol

Re: [zfs-discuss] ZFS import hangs with over 66000 context switches shown in top

2010-01-13 Thread Orvar Korvar
It seems there is more info on this issue here: http://opensolaris.org/jive/thread.jspa?threadID=121568&tstart=0

Re: [zfs-discuss] best way to configure raidz groups

2010-01-01 Thread Orvar Korvar
raidz2 is recommended. As discs get large, it can take a long time to repair a raidz. Maybe several days. With raidz1, if another disc blows during the repair, you are screwed.

Re: [zfs-discuss] Dedupe asynchronous mode?

2009-12-11 Thread Orvar Korvar
As I have understood it, reading Jeff Bonwick's blog, async dedup is not supported. The reason is that async is good if you have constraints on CPU and RAM. But today's modern CPUs can dedup in real time, so async is not needed. Async allows dedup when you have spare clock cycles to burn (in the

Re: [zfs-discuss] Using iSCSI on ZFS with non-native FS - How to backup.

2009-12-05 Thread Orvar Korvar
I think it should work. I have seen blog posts about ZFS, iSCSI and Macs. Just google a bit.

Re: [zfs-discuss] Large ZFS server questions

2009-11-24 Thread Orvar Korvar
Lustre is coming in a year(?). It will then use ZFS.

Re: [zfs-discuss] The 100, 000th beginner question about a zfs server

2009-11-22 Thread Orvar Korvar
I would suggest a CPU with a small L2 cache, as L2 cache will not help a file server. This allows you to use AMD's new 45W CPUs. And 64-bit, 2-4 cores. And use raidz2.

Re: [zfs-discuss] ZFS GUI - where is it?

2009-11-19 Thread Orvar Korvar
Have you tried Webmin? I think it allows you to handle ZFS pools and such in a simple manner?

Re: [zfs-discuss] Recovering FAULTED zpool

2009-11-18 Thread Orvar Korvar
There is a new PSARC in b126(?) that allows rolling back to the latest functioning uberblock. Maybe it can help you?

Re: [zfs-discuss] ZFS storage server hardware

2009-11-18 Thread Orvar Korvar
And I like the cut of your jib, my young fellow-me-lad!

Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Orvar Korvar
Yes, that might be the cause. Thanks for identifying that. So I would gain bandwidth if I put some drives on the mobo SATA and some drives on the AOC card, instead of having all drives on the AOC card.

Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Orvar Korvar
I use an Intel Q9450 + Gigabyte EP45-DS3P (P45). I put the AOC card into a PCI slot, not PCI-X. About the HBA, I have no idea. So I had half of the drives on the AOC card and the other half on the mobo SATA ports. Now I have all drives on the AOC card, and suddenly a scrub takes 15h instead of 8h.

[zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
I have a raidz2 and did a scrub; it took 8h. Then I reconnected some drives to other SATA ports, and now it takes 15h to scrub?? Why is that?

Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
Yes, I'm doing fine. How do you do-be-do-be-do? I have OpenSolaris b125 and filled a zpool with data. I did a scrub on it, which took 8 hours. Some of the drives were connected to the mobo; some of the drives were connected to the AOC-MV8... Marvell 88SX card which is used in the Thumper. Then I connected

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-11 Thread Orvar Korvar
Other drivers in the stack? Which drivers? And have any of them been changed between b125 and b126?

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-11 Thread Orvar Korvar
So he did actually hit a bug? But the bug is not dangerous, as it doesn't destroy data? But I did not replace any devices and it still showed checksum errors. I think I did a zfs send | zfs receive? I don't remember. But I just copied things back and forth, and the checksum errors showed up. So

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-10 Thread Orvar Korvar
Does this mean that there are no driver changes in marvell88sx2 between b125 and b126? If there are no driver changes, then it means we both were extremely unlucky with our drives, because we both had checksum errors? And my discs were brand new. How probable is this? Something is weird here. What is

[zfs-discuss] PSARC recover files?

2009-11-09 Thread Orvar Korvar
This new PSARC putback that allows rolling back to an earlier valid uberblock is good. This immediately raises a question: could we use this PSARC functionality to recover deleted files? Or some variation? I don't need that functionality now, but I am just curious...
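For reference, the recovery work discussed in this thread surfaces as the -F option to zpool import. A hedged sketch (pool name made up; check your build's zpool(1M) man page, since the flags landed around the b128-era putback):

```shell
zpool import -nF tank   # dry run: report what a rewind would do, change nothing
zpool import -F tank    # rewind to the last consistent txg and import
```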

Re: [zfs-discuss] can't delete a zpool

2009-11-09 Thread Orvar Korvar
I had the same problem recently on b125. I had a one-disc zpool, Movies, and shut down the computer. Removed the disc Movies and inserted another one-disc zpool, Misc. Booted and imported the Misc zpool. But the Movies zpool showed exactly the same behaviour as you report. The Movies zpool would

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-08 Thread Orvar Korvar
I can't boot into an older version because the last version I had was b118, which doesn't have zfs version 19 support. I've been looking to see if there's a way to downgrade via IPS, but that's turned up a lot of nothing. If someone can tell me which files are needed for the driver, I can extract

Re: [zfs-discuss] What can I get with 2x250Gb ?

2009-11-08 Thread Orvar Korvar
A zpool consists of vdevs (groups of discs). You can create a mirror of your 250GB discs. And later you can add another group of discs to your zpool, on the fly. Each group of discs should have redundancy: for instance mirror, raidz1 or raidz2. So you can add a vdev to a zpool on the fly, but you can
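The growth path described above, sketched with invented device names: start with the two 250GB disks as one mirror vdev, then widen the pool later by adding a second mirror:

```shell
zpool create tank mirror c1t0d0 c1t1d0   # the two 250GB discs as one mirror
# ...later, on the fly:
zpool add tank mirror c2t0d0 c2t1d0      # second mirror vdev; pool capacity grows
```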

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-08 Thread Orvar Korvar
Great! So if I want another build, for instance b125, I just change step 10? 10) pkg -R /mnt install ent...@0.5.11-0.125 Yes? What is this 0.5.11 thing? Should that be changed too if I try to install b125? Like 0.5.12-0.125?

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-08 Thread Orvar Korvar
This is from build 125.
