Ok, so mirrors resilver faster.
But it is not uncommon that another disk shows problems during a resilver (for
instance r/w errors); that scenario would mean your entire raid is gone, right?
Say you are using mirrors, one disk crashes, and you start a resilver. Then the
other disk shows r/w
Are mirrors really a realistic alternative? I mean, if I have to resilver a raid
with 3TB discs, I suspect it can take days. With 4TB disks it can take a week,
maybe. So if I use a mirror and one disk breaks, I have no redundancy left
while the mirror repairs. And the repair will take a long time
How long have you been using an SSD? Do you see any performance decrease? I
mean, ZFS does not support TRIM, so I wonder about the long-term effects...
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
There is at least a common perception (misperception?) that devices cannot
process TRIM requests while they are 100% busy processing other tasks.
Just to confirm: can SSDs do TRIM while processing other tasks?
I heard that Illumos is working on TRIM support for ZFS and will release
Wow. If you ever finish this monster, I would really like to hear more about
the performance and how you connected everything. Could be useful as a
reference for anyone else building big stuff.
*drool*
--
I am using the OCZ Vertex 3, 240GB. When I boot Solaris 11 Express, there is a
small red line on the splash screen traveling from left to right. With this
SSD, the red line travels across the screen twice before S11E has booted. With
a hard disk, the red line traveled several times
I don't get it. I created users with the
System - Administration - Users and Groups
menu.
I thought every user would get his own ZFS filesystem? But when I do
# zfs list
I cannot see a filesystem for each user. I only see this:
rpool/export         60,4G   131G    32K   /export
cat /etc/passwd
meme:x:1000:1000:Michael:/export/home/meme:/bin/bash
amme:x:1001:1000:Amme:/export/home/amme:/bin/bash
utwww:x:1002:1001:ut admin web server user:/tmp:/bin/sh
$ ls -l /export/home
total 11
drwxr-xr-x 36 amme user 55 2011-07-24 18:05 amme
drwxr-xr-x 39 meme user 77
Ah, ok. That explains it all. Thanks.
And yes, df did the trick.
So, if I want each user to have his own ZFS filesystem, I just create the
filesystem, copy everything over from /export/home, and then fire up the
GUI and point it to the new ZFS filesystem? That is all? I don't have to edit
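If it helps, the manual procedure could look something like this. This is a
rough sketch only, assuming the default S11E layout (rpool/export/home) and
using the user "meme" from the passwd listing above:

```shell
# Create a dedicated filesystem for the user (dataset names assume the
# default rpool/export/home hierarchy; "meme" is taken from the thread).
zfs create rpool/export/home/meme_fs
# Copy the existing home directory into it, preserving permissions:
rsync -a /export/home/meme/ /export/home/meme_fs/
# Swap names so the new filesystem mounts at the original path:
mv /export/home/meme /export/home/meme.old
zfs rename rpool/export/home/meme_fs rpool/export/home/meme
```

Check with zfs list and df afterwards; once the old directory is verified
copied, /export/home/meme.old can be removed.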
Have you tried to boot from LiveCD in Solaris 11 Express and compare?
--
You could buy an LSI2008-based JBOD SATA card. It typically has 8 SATA ports.
The LSI2008 works on S11E out of the box. That card gives very good
performance, typically close to 1 GB/sec transfer speed. And when you switch
mobos, just bring the LSI2008 card to the new mobo, and you are set.
Did your x4500 cope with 3TB disks without any modifications? I heard the BIOS
does not support disks larger than 2TB?
--
So, what is the story about 4KB disk sectors? Should such disks be avoided with
ZFS? Or is there no problem? Or do I need to modify some config file before use?
--
I am now using S11E and an OCZ Vertex 3, 240GB SSD. I am using it on a SATA
2 port (not the new 6 Gbps SATA).
The PC seems to work better now; the worst lag is gone. For instance, I am
using a SunRay, and if my girlfriend was using the PC while I was bit
torrenting, the PC could lock up
The LSI2008 chipset is supported and works very well.
I would actually use 2 vdevs with 8 disks in each, and configure each vdev
as raidz2. Maybe use one hot spare.
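For what it's worth, that layout could be created in one command; a sketch
only, where "tank" and the disk names are pure placeholders:

```shell
# Hypothetical pool: two 8-disk raidz2 vdevs plus one hot spare.
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  spare c2t0d0
```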
And I also have personal, subjective reasons: I like the number 8 in
computers. 7 is an ugly number. Everything is
I have already formatted one disk, so I cannot try this anymore.
(But importing the zpool under the name rpool and then exporting it again
was successful; I can now use the disk as usual. This did not work on the
other disk, so I formatted it.)
--
The problem is more clearly stated here. Look, 700GB is gone (the correct
number is 620GB)!
First I run zfs list on TempStorage/Backup, which reports 800GB. This is
correct.
Then I run df -h, which reports only 180GB. That is not correct: there should
be 800GB of data, but df reports only
PS. I do not have any snapshots:
root@frasse:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
TempStorage          916G  45,1G  37,3G  /mnt/TempStorage
TempStorage/Backup   799G  45,1G   177G
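In case someone wants to reproduce the investigation, these are the commands
I would try next (pool and dataset names as in the listing above):

```shell
# Where did the space go? Break usage down per dataset:
zfs list -o space -r TempStorage
# Double-check that no snapshots are holding space:
zfs list -t snapshot -r TempStorage
# And look for quotas/reservations on the dataset in question:
zfs get refquota,quota,reservation TempStorage/Backup
```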
I have created some threads here about possible bugs in ZFS or in
beadm (the PC reboots when I try to boot - why? A bug in beadm?). But now it
seems that maybe there are no problems with ZFS. I will update my threads with
a SOLVED tag when/if I find the solution.
Here is my problem:
I
I am using 64-bit S11E. Everything worked fine earlier. But now I suspect the
disk is breaking down; it behaves strangely. I have several partitions:
1) OpenSolaris b134 upgraded to S11E
2) WinXP
3) FAT32
4) ZFS storage pool of 900GB
Earlier, everything was fine. But suddenly OpenSolaris does not
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I
can only see 300GB. Where is the rest? Is there a command I can run to reach
the rest of the data? Will a scrub help?
--
The NetApp lawsuit is settled. No conflicts there.
Regarding ZFS, it is open under the CDDL license. The source code that has
already been released remains open. Nexenta is using the open-sourced version
of ZFS. Oracle might close future ZFS versions, but Nexenta's ZFS is open and
cannot be closed.
--
I have a fresh install of Solaris 11 Express on a new SSD. I have inserted the
old hard disk and tried to import it with:
# zpool import -f <long-id-number> Old_rpool
but the computer reboots. Why is that? On my old hard disk I have 10-20 BEs,
starting with OpenSolaris 2009.06 and upgraded to
Yes, you create three groups as you described and add them to your zpool
(the ZFS raid). So you have only one ZFS raid, consisting of three groups. You
don't have three different ZFS raids (unless you configure it that way).
You can also later swap one disk for a larger one and resilver the group. Then
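The disk-swap step could be sketched like this ("tank" and the device names
are placeholders):

```shell
# Replace one disk of a vdev with a larger one; ZFS resilvers onto it.
zpool replace tank c0t3d0 c0t9d0
zpool status tank            # wait for the resilver to finish
# After every disk in the vdev has been swapped, the extra capacity can
# be used (on newer builds via the autoexpand pool property).
```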
Ok, so can we say that the conclusion for a home user is:
1) Using an SSD without TRIM is acceptable. The only drawback is that without
TRIM the SSD will write much more, which affects its lifetime, because when
the SSD has written enough, it will break.
I don't have high demands on my OS disk, so
Will this not ruin the zpool? If you overwrite one of the discs in the zpool,
won't the zpool break, so that you need to repair it?
--
Heh. My bad. Didn't read the command. Yes, that should be safe.
--
Roy, I read your question on the OpenIndiana mailing lists: how can you
rebalance your huge raid without implementing block pointer rewrite? You have
an old vdev full of data, and now you have added a new vdev - and you want the
data to be spread evenly across all vdevs.
I answer here because it is
If you use drives of varying sizes, ZFS will use the smallest capacity. Say
you have 1TB + 2TB + 2TB; then ZFS creates a raid of 1TB drives. The result
will be a 3 x 1TB raid.
One ZFS raid consists of vdevs, that is, groups of drives. A vdev can be
configured as raidz1 (raid-5) or
Ok, I read a bit more about TRIM. It seems that without TRIM there will be
more unnecessary reads and writes on the SSD, with the result that writes can
take a long time.
A) So, how big of a problem is it? Sun has long sold SSDs (for L2ARC and
ZIL), and they don't use TRIM? So, is TRIM not a
So... Sun's SSDs used for ZIL and L2ARC do not use TRIM, so how big a problem
is the lack of TRIM in ZFS, really? It should not stop anyone from running
without TRIM?
I didn't really understand the answer to this question. Because Sun's SSDs do
not use TRIM - and that is not considered a hindrance? A home
So, the bottom line is that Solaris 11 Express cannot use TRIM with an SSD? Is
that the conclusion? So it might not be a good idea to use an SSD?
--
100TB storage? Cool! What is the hardware? How many discs? Gief me ze hardware!
:oP
--
...If this is a general rule, maybe it would be worth considering using
SHA512 truncated to 256 bits to get more speed...
Doesn't it need more investigation whether truncating a 512-bit hash to 256
bits gives security equivalent to a plain 256-bit hash? Maybe truncation will
introduce some bias?
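As a side note, the truncation itself is trivial to express. A sketch using
openssl (note that the standardized SHA-512/256 uses different initial values,
so plain truncation is a related but not identical construction):

```shell
# Hash with SHA-512, then keep the first 64 hex characters (= 256 bits).
full=$(printf 'example block' | openssl dgst -sha512 -hex | awk '{print $NF}')
trunc=$(printf '%s' "$full" | cut -c1-64)
printf '%s\n' "$trunc"
```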
--
Totally Off Topic:
Very interesting. Did you publish any papers on this? Where do you work? It
seems like a very fun place to work!
BTW, I thought about this. What do you say?
Assume I want to compress data and I succeed in doing so. And then I transfer
the compressed data. So all the information
Maybe a cable is loose? Reseat all the cables on all the drives? And on the
controller card?
Yes, ZFS detects such problems.
--
There are problems with SandForce controllers, according to forum posts:
buggy firmware. And in practice, SandForce performs far below its theoretical
values. I expect Intel to have fewer problems.
--
A noob question:
These drives that people talk about - can you use them as a system disc too?
Install Solaris 11 Express on them? Or can you only use them as L2ARC or ZIL?
--
Your system drive on a Solaris system generally doesn't see enough I/O
activity to require the kind of IOPS you can get out of most modern SSDs.
My system drive sees a lot of activity, to the degree that everything is
going slowly. I have a SunRay that my girlfriend uses, and I have 5-10 torrents
I am waiting for the next-gen Intel SSD drives, G3. They are arriving very
soon. And from what I can infer from reading here, I can use one without
issues. Solaris will recognize the Intel SSD drive without needing any extra
drivers, or whatever?
Intel's new SSD should work with Solaris 11 Express, yes?
You can upgrade with Update Manager to b134, which is the last build from Sun.
You can also upgrade to b147 if you switch to OpenIndiana. Read more on the
OpenIndiana web site.
--
Sometimes you read about people getting low performance from dedup: it is
because they have too little RAM.
--
Does it support 3TB drives?
--
budy,
here are some links. Remember, the reason you get corrupted files is that
ZFS detects the corruption. You probably had corruption earlier as well, but
your hardware did not notice it. This is called silent corruption. But ZFS is
designed to detect and correct silent corruption. Which no normal
Budy, if you are using raid-5 or raid-6 underneath ZFS, then you should know
that raid-5/6 might corrupt data. See here for lots of technical articles on
why raid-5 is bad:
http://www.baarf.com/
raid-6 is not better. I can show you links about raid-6 not being safe either.
It is a good thing you run ZFS,
Now this is a testament to the power of ZFS. Only ZFS is sensitive enough to
report these errors to you. Had you run another filesystem, you would never
have been notified that your data was slowly being corrupted by some faulty
hardware. :o)
--
There was a guy doing exactly that: Windows as host and OpenSolaris as guest
with raw access to his disks. He lost his 12TB of data. It turned out that
VirtualBox doesn't honor the write flush flag (or something similar).
In other words, I would never ever do that. Your data is safer with Windows
only and
Did you see this thread?
http://opensolaris.org/jive/thread.jspa?messageID=500659#500659
He had problems with ZFS. It turned out to be faulty RAM. ZFS is so sensitive
that it detects and reports problems to you. No other filesystem does that, so
you think ZFS is the problem and switch. But the other
That sounds strange. What happened? Did you use raidz1?
You can roll your zpool back to an earlier snapshot. Have you tried that? Or
you can mount your pool as it was within the last 30 seconds or so, I think.
--
I was thinking of deleting all ZFS snapshots before doing a zfs send/receive
to a new zpool. Then everything would be defragmented, I thought.
(I assume snapshots work this way: I snapshot once and make some changes, say
delete file A and edit file B. When I delete the snapshot, file A is
To summarize:
A) Resilver does not defrag.
B) zfs send/receive to a new zpool means the data will be defragged.
Correctly understood?
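For reference, the copy in (B) could be done roughly like this ("tank" and
"newtank" are placeholder pool/dataset names):

```shell
# Snapshot recursively, then stream the whole tree to the new pool.
zfs snapshot -r tank/data@migrate
zfs send -R tank/data@migrate | zfs receive -F newtank/data
```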
--
No replies. Does this mean that you should avoid large drives with 4KB
sectors, that is, new drives? Does ZFS not handle new drives?
--
I am not really worried about fragmentation. I was just wondering whether
attaching new drives and doing a zfs send/receive to a new zpool would count
as defragmentation. But apparently not.
Anyway, thank you for your input!
--
A) Resilver = defrag. True/false?
B) If I buy larger drives and resilver, does defrag happen?
C) Does zfs send | zfs receive defrag the data?
--
ZFS does not handle 4K-sector drives well; you need to create a new zpool
with the 4K property (ashift) set.
http://www.solarismen.de/archives/5-Solaris-and-the-new-4K-Sector-Disks-e.g.-WDxxEARS-Part-2.html
Are there plans to make resilver handle 4K-sector drives?
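For later readers: on more recent OpenZFS releases (not stock S11E of this
era, where the patched-binary workaround from the linked article was needed),
the sector size can be forced at pool creation. A hedged sketch with
placeholder disk names:

```shell
# Force 4K (2^12) sectors when creating the pool.
zpool create -o ashift=12 tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
zdb -C tank | grep ashift     # verify the vdevs got ashift: 12
```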
--
Can you add another disk? Then you have three 7-disc vdevs. (Always use raidz2.)
--
Otherwise you can keep 2 discs as hot spares: three 6-disc vdevs.
--
And by the way: wasn't there a comment from Linus Torvalds recently that
people should move their low-quality code into the codebase? ;)
Anyone know the link? Good against the Linux fanboys. :o)
--
Someone posted about CERN having a bad network card that injected faulty bits
into the data stream. And ZFS detected it, thanks to its end-to-end checksums.
Does anyone have more information on this?
--
Have you posted on the FreeBSD forums?
--
Ok, so the bandwidth will be cut in half, and some people use this
configuration. But how bad is it to have the bandwidth cut in half? Will I
hardly notice it?
(Just an ordinary home server: some media files, ebooks, etc.)
--
Are there any drawbacks to partitioning an SSD into two parts and using L2ARC
on one partition and ZIL on the other? Any thoughts?
--
Something like this, maybe:
http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect
--
If you have one zpool consisting of only one large raidz2, then you have a
slow raid. To reach high speed, you want at most 8 drives in each raidz2. So
one of the reasons it takes so long is that you have too many drives in your
raidz2. Everything would be much faster if you split your zpool
You can't expand a normal RAID, either, anywhere I've ever seen.
Is this true?
A vdev is a group of discs configured as raidz1/mirror/etc. A ZFS raid
consists of several vdevs. You can add a new vdev whenever you want.
--
Great! Please report here so we can read about your impressions.
--
You do know that OpenSolaris + VirtualBox can trash your ZFS raid? You can
lose your data. There is a post about write caching with ZFS and VirtualBox;
I think you need to disable it?
--
Great! Dominik, Oracle needs to silence FUD immediately. A proactive
initiative. :o)
--
ONStor sells a ZFS-based machine:
http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1354658,00.html
It seems more like FreeNAS or something?
--
Have the ZFS data-corruption researchers not been in touch with Jeff Bonwick
and the ZFS team?
--
Speaking of long boot times, I've heard that IBM POWER servers can take 90
minutes or more to boot.
--
I can strongly recommend this series of articles:
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
Very good! :o)
--
Yes, if you value your data you should switch from USB drives to normal
drives. I heard that USB does some strange things? A normal connection such as
SATA is more reliable.
--
A dumb question:
I see 24 drives in an external chassis. I presume that chassis only holds
drives; it does not hold a motherboard.
How do you connect all the drives to your OpenSolaris server? Do you place the
chassis next to each other, and then you have three 8-port SATA cards in your
OpenSolaris
Ok, I see that the chassis contains a motherboard. So never mind that question.
Another question:
If it is possible to have a large chassis with lots of drives and the
OpenSolaris box in another chassis, how do you connect the two?
--
This reminds me of an attorney who charged a lot for a contract template he
copied and gave to a client. He responded:
-You don't pay me for finding this template and copying it for you, which took
me 5 minutes. You pay me because I sat 5 years in university and have 15 years
of
100% uptime for 20 years?
So what makes OpenVMS so much more stable than Unix? What is the difference?
--
1) A SAS HBA seems to be an I/O card with SAS cable connections. It sits in
the OSol server. It is basically just a simple I/O card, right? I hope these
cards are cheap?
2) So I can buy a disk chassis with 24 disks, connect all the disks to one SAS
cable, and connect that SAS cable to my OSol
It seems there is more info on this issue here:
http://opensolaris.org/jive/thread.jspa?threadID=121568tstart=0
--
raidz2 is recommended. As discs get larger, it can take a long time to repair
a raidz; maybe several days. With raidz1, if another disc blows during the
repair, you are screwed.
--
As I understand it from reading Jeff Bonwick's blog, async dedup is not
supported. The reason is that async is good if you have constraints on CPU and
RAM, but today's modern CPUs can dedup in real time, so async is not needed.
Async allows dedup when you have spare clock cycles to burn (in the
I think it should work. I have seen blog posts about ZFS, iSCSI, and Macs.
Just google a bit.
--
Lustre is coming in a year(?). It will then use ZFS.
--
I would suggest a CPU with a small L2 cache, as a large L2 cache will not help
a file server. This lets you use AMD's new 45W CPUs. And 64-bit, with 2-4
cores.
And use raidz2.
--
Have you tried Webmin? I think it lets you handle ZFS pools and such in a
simple manner?
--
There is a new PSARC case in b126(?) that allows rolling back to the latest
functioning uberblock. Maybe it can help you?
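If I remember the putback correctly, the recovery options look roughly like
this (the pool name is a placeholder; check the man page on your build):

```shell
# For a pool with a damaged head: try rewinding to the last good
# uberblock/txg:
zpool clear -F tank
# For a pool that refuses to import at all:
zpool import -F tank
# -n can be added for a dry run that only reports whether recovery
# would succeed.
```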
--
And I like the cut of your jib, my young fellow-me-lad!
--
Yes, that might be the cause. Thanks for identifying it. So I would gain
bandwidth if I put some drives on the mobo SATA ports and some drives on the
AOC card, instead of having all the drives on the AOC card.
--
I use an Intel Q9450 + Gigabyte EP45-DS3P (P45). I put the AOC card into a
PCI slot, not PCI-X. About the HBA, I have no idea.
So I had half of the drives on the AOC card and the other half on the mobo
SATA ports. Now I have all the drives on the AOC card, and suddenly a scrub
takes 15h instead of 8h.
I have a raidz2 and did a scrub; it took 8h. Then I reconnected some drives to
other SATA ports, and now it takes 15h to scrub??
Why is that?
--
Yes, I am doing fine. How do you do-be-do-be-do?
I have OpenSolaris b125 and filled a zpool with data. I ran a scrub on it,
which took 8 hours. Some of the drives were connected to the mobo, and some
were connected to the AOC-MV8 (Marvell) card which is used in the Thumper.
Then I connected
Other drivers in the stack? Which drivers? And have any of them changed
between b125 and b126?
--
So he actually did hit a bug? But the bug is not dangerous, as it doesn't
destroy data?
But I did not replace any devices, and it still showed checksum errors. I
think I did a zfs send | zfs receive? I don't remember. But I just copied
things back and forth, and the checksum errors showed up. So
Does this mean that there are no driver changes in marvell88sx2 between b125
and b126? If there are no driver changes, then it means that we both had
extremely bad luck with our drives, because we both had checksum errors? And
my discs were brand new.
How probable is this? Something is weird here. What is
This new PSARC putback that allows rolling back to an earlier valid uberblock
is good.
This immediately raises a question: could we use this PSARC functionality to
recover deleted files? Or some variation? I don't need that functionality now;
I am just curious...
--
I had the same problem recently on b125. I had a one-disc zpool, Movies, and
shut down the computer. I removed the Movies disc and inserted another
one-disc zpool, Misc. I booted and imported the Misc zpool. But the Movies
zpool showed exactly the same behaviour as you report. The Movies zpool would
I can't boot into an older version, because the last version I had was b118,
which doesn't have zfs version 19 support. I've been looking to see if there's
a way to downgrade via IPS, but that's turned up a lot of nothing.
If someone can tell me which files are needed for the driver, I can extract
A zpool consists of vdevs (groups of discs). You can create a mirror of your
250GB discs. And later you can add another group of discs to your zpool, on
the fly. Each group of discs should have redundancy, for instance mirror,
raidz1 or raidz2. So you can add a vdev to a zpool on the fly, but you can
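The procedure above, as a minimal sketch ("tank" and the device names are
placeholders):

```shell
# Start with one mirrored pair...
zpool create tank mirror c0t0d0 c0t1d0
# ...and later grow the pool, on the fly, with a second mirror vdev:
zpool add tank mirror c0t2d0 c0t3d0
```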
Great! So if I want another build, for instance b125, I just change step 10?
10) pkg -R /mnt install ent...@0.5.11-0.125
Yes?
What is this 0.5.11 thing? Should that be changed too if I try to install
b125? Like 0.5.12-0.125?
--
This is from build 125.
--