I'm just uploading all my data to my server and the space used is much more
than what I'm uploading:
Documents = 147MB
Videos = 11G
Software = 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being
allocated:
NAME   SIZE   ALLOC   FREE   CAP   DEDUP   HEALTH
Thanks fj.
Should have realized that when it showed 27T available, which is the raw total
size before raid-z2!
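For anyone else who hits this: as I understand it, zpool list reports raw pool
capacity including raid-z2 parity, while zfs list reports usable space after
parity, so comparing the two views shows the difference (pool name is just an
example):
  zpool list tank    # SIZE/ALLOC are raw capacity, parity included
  zfs list tank      # USED/AVAIL are usable space after parity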
Hiya,
I am trying to add shares to my Win7 libraries but Windows won't let me add
them due to them not being indexed.
Does S11E have any server-side indexing feature?
Hiya,
My S11E server is needed to serve Windows clients. I read a while ago (last
year!) about 'fudging' it so that Everyone has read/write access.
Is it possible for me to lock this down to users? I only have a single user on
my Windows clients and in some cases (htpc) this user is logged on
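Something like this is what I am hoping is possible (an untested sketch with
made-up dataset/user names, assuming the CIFS service is already configured):
  zfs set sharesmb=on tank/media
  # grant only the 'htpc' user read/write, inherited by new files and dirs
  chmod A+user:htpc:read_data/write_data/execute/add_file/add_subdirectory:file_inherit/dir_inherit:allow /tank/media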
Oh no, I am not bothered at all about the target ID numbering. I just wondered
if there was a problem in the way it was enumerating the disks.
Can you elaborate on the dd command, LaoTsao? Is the 's' you refer to a
parameter of the command or the slice of a disk? None of my 'data' disks have
Thanks Andrew, Fajar.
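For the archives: I gather the 's' is the slice suffix in the device path
rather than a dd flag, so something like this (device name is just an example):
  dd if=/dev/rdsk/c8t1d0s0 of=/dev/null bs=512 count=10000   # read from slice 0
  dd if=/dev/rdsk/c8t1d0p0 of=/dev/null bs=512 count=10000   # or p0 = whole disk on x86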
Hiya,
Now I have figured out how to read disks using dd to make LEDs blink, I want to
write a little script that iterates through all drives, dd's them with a few
thousand counts, stops, then dd's them again with another few thousand counts,
so I end up with maybe 5 blinks.
I don't want
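Something along these lines is what I had in mind - a rough, untested sketch
that assumes the data disks all show up as /dev/rdsk/c*t*d0p0:
  #!/bin/sh
  # blink each data disk's LED by reading it with dd in short bursts
  for disk in /dev/rdsk/c*t*d0p0; do
      echo "blinking $disk"
      i=0
      while [ $i -lt 5 ]; do
          dd if="$disk" of=/dev/null bs=512 count=2000 2>/dev/null
          sleep 1
          i=`expr $i + 1`
      done
  done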
Hiya,
Is there any reason (and anything to worry about) if disk target IDs don't
start at 0 (zero). For some reason mine are like this (3 controllers - 1
onboard and 2 PCIe);
AVAILABLE DISK SELECTIONS:
0. c8t0d0 ATA-ST9160314AS-SDM1 cyl 19454 alt 2 hd 255 sec 63
Hiya,
I am using S11E Live CD to install. The install wouldn't let me select 2 disks
for a mirrored rpool so I did this post-install using this guide:
http://darkstar-solaris.blogspot.com/2008/09/zfs-root-mirror.html
Before I go ahead and continue building my server (zpools) I want to make
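For anyone following along, the guide boils down to something like this (device
names are from my system - double-check yours before running anything):
  prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2   # copy the disk label
  zpool attach rpool c8t0d0s0 c8t1d0s0                           # attach the mirror
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0  # make disk 2 bootable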
Thanks Trond.
I am aware of this, but to be honest I will not be upgrading very often (my
current WHS setup has lasted 5 years without a single change!) and certainly
not at each iteration of TB size increase, so by the time I do upgrade, say in
the next 5 years, PCIe will have probably been
The testing was utilizing a portion of our drives; we have 120 x 750 SATA
drives in J4400s, dual pathed. We ended up with 22 vdevs, each a raidz2 of 5
drives, with one drive in each of the J4400, so we can lose two complete
J4400 chassis and not lose any data.
Thanks pk.
You know I never
OK, I have finally settled on hardware;
2x LSI SAS3081E-R controllers
2x Seagate Momentus 5400.6 rpool disks
15x Hitachi 5K3000 'data' disks
I am still undecided as to how to group the disks. I have read elsewhere that
raid-z1 is best suited with either 3 or 5 disks and raid-z2 is better suited
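To make the two options concrete, the layouts I am weighing for the 15 data
disks would look roughly like this (controller/target names are placeholders):
  # Option A: three 5-disk raid-z1 vdevs, no spare
  zpool create tank \
      raidz c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 \
      raidz c9t5d0 c9t6d0 c9t7d0 c10t0d0 c10t1d0 \
      raidz c10t2d0 c10t3d0 c10t4d0 c10t5d0 c10t6d0
  # Option B: two 7-disk raid-z2 vdevs plus a hot spare
  zpool create tank \
      raidz2 c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 c9t5d0 c9t6d0 \
      raidz2 c9t7d0 c10t0d0 c10t1d0 c10t2d0 c10t3d0 c10t4d0 c10t5d0 \
      spare c10t6d0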
Thanks.
I ruled out the SAS2008 controller as my motherboard is only PCIe 1.0 so would
not have been able to make the most of the difference in increased bandwidth.
I can't see myself upgrading every few months (my current WHS build has lasted
over 4 years without a single change) so by the
Hiya,
I've been doing a lot of research surrounding this and ZFS, including some
posts on here, though I am still left scratching my head.
I am planning on using slow RPM drives for a home media server, and it's these
that seem to 'suffer' from a few problems:
Seagate Barracuda LP - Looks to
Sorry to pester, but is anyone able to say if the Marvell 9480 chip is now
supported in Solaris?
The article I read saying it wasn't supported was dated May 2010 so over a year
ago.
Thanks for all the replies.
I have a pretty good idea how the disk enclosure assigns slot locations so
should be OK.
One last thing - I see that Supermicro has just released a newer version of the
card I mentioned in the first post that supports SATA 6Gbps. From what I can
see it uses the
1 - are the 2 vdevs in the same pool, or two separate pools?
I was planning on having the 2 z2 vdevs in one pool. Although having 2 pools
and having them sync'd sounds really good, I fear it may be overkill for the
intended purpose.
3 - spare temperature for levels raidz2 and
Thanks Richard.
How does ZFS enumerate the disks? In terms of listing them, does it do them
logically, i.e.:
controller #1 (motherboard)
|
|--- disk1
|--- disk2
controller #3
|--- disk3
|--- disk4
|--- disk5
|--- disk6
|--- disk7
|--- disk8
|--- disk9
I was planning on using one of these:
http://www.scan.co.uk/products/icy-dock-mb994sp-4s-4in1-sas-sata-hot-swap-backplane-525-raid-cage
Imagine if 2.5" 2TB disks were price neutral compared to 3.5" equivalents.
I could have 40 of the buggers in my system giving 80TB raw storage! I'd
4 - the 16th port
Can you find somewhere inside the case for an SSD as L2ARC on your last port?
Although saying that, if we are saying hot spares may be bad in my scenario, I
could ditch it and use a 3.5" SSD in the 15th drive's place?
Thanks guys.
I have decided to bite the bullet and change to 2TB disks now rather than go
through all the effort using 1TB disks and then maybe changing in 6-12 months
time or whatever. The price difference between 1TB and 2TB disks is marginal
and I can always re-sell my 6x 1TB disks.
I
Thanks Edward.
In that case what 'option' would you choose - smaller raid-z vdevs or larger
raid-z2 vdevs?
I do like the idea of having a hot spare so 2x 7 disk raid-z2 may be the better
option rather than 3x 5 disk raid-z with no hot spare. 2TB loss in the former
could be acceptable I
That's how I understood autoexpand - the pool doesn't grow until all the disks
have been done.
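In zpool terms I take that to mean something like this (pool/device names are
examples only):
  zpool set autoexpand=on tank
  zpool replace tank c9t0d0     # swap in the bigger disk, wait for the resilver
  # ...repeat for each disk in the vdev; the extra space only
  # appears once the last disk has been replaced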
I do indeed rip from disc rather than grab torrents - to VIDEO_TS folders and
not ISO - on my laptop then copy the whole folder up to WHS in one go. So while
they're not one large single file, they are lots
Hiya,
I am just in the planning stages for my ZFS Home Media Server build at the
moment (to replace WHS v1).
I plan to use 2x motherboard ports and 2x Supermicro AOC-SASLP-MV8 8 port SATA
cards to give 17* drive connections; 2 disks (120GB SATA 2.5) will be used for
the ZFS install using the
Thanks Edward.
I'm in two minds with mirrors. I know they provide the best performance and
protection, and if this was a business critical machine I wouldn't hesitate.
But as it is for a home media server, which is mainly WORM access and will be
storing (legal!) DVD/Bluray rips, I'm not so sure I
Thanks martysch.
That is what I meant about adding disks to vdevs - not adding disks to vdevs
but adding vdevs to pools.
If the geometry of the vdevs should ideally be the same, it would make sense to
buy one more disk now and have a 7 disk raid-z2 to start with, then buy disks
as and when
It's worse on raidzN than on mirrors, because the number of items which must
be read is higher in raidzN, assuming you're using larger vdevs and therefore
more items exist scattered about inside that vdev. You therefore have a higher
number of things which must be randomly read before
Thanks Edward.
I do agree about mirrored rpool (equivalent to Windows OS volume); not doing it
goes against one of my principles when building enterprise servers.
Is there any argument against using the rpool for all data storage as well as
being the install volume?
Say for example I chucked
Oh, does anyone know if resilvering efficiency is improved or fixed in Solaris
11 Express, as that is what I'm using.
I believe Oracle is aware of the problem, but most of the core ZFS team has
left. And of course, a fix for Oracle Solaris no longer means a fix for the
rest of us.
OK, that is a bit concerning then. As good as ZFS may be, I'm not sure I want
to commit to a file system that is 'broken' and
Thanks relling.
I suppose at the end of the day any file system/volume manager has its flaws
so perhaps it's better to look at the positives of each and decide based on
them.
So, back to my question above, is there a deciding argument *against*
putting data on the install volume
On the subject of where to install ZFS, I was planning to use either Compact
Flash or USB drive (both of which would be mounted internally); using up 2 of
the drive bays for a mirrored install is possibly a waste of physical space,
considering it's a) a home media server and b) the config can
Thanks for all the replies.
The bit about combining zpools came from this command on the southbrain
tutorial:
  zpool create mail \
      mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0 c6t600D0230006B66680C50AB7821F0E900d0 \
      mirror c6t600D0230006B66680C50AB0187D75000d0
OK cool.
One last question. Reading the Admin Guide for ZFS, it says:
A more complex conceptual RAID-Z configuration would look similar to the
following:
  raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0
  raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0
If you are creating a
Thanks!
By 'single drive mirrors', I assume, in a 14 disk setup, you mean 7 sets of
2-disk mirrors - I am thinking of traditional RAID1 here.
Or do you mean 1 massive mirror with all 14 disks?
This is always a tough one for me. I too prefer RAID1 where redundancy is king,
but the trade off for
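If it is the former, I guess the pool would be built something like this
(placeholder device names; the remaining pairs follow the same pattern):
  zpool create tank \
      mirror c9t0d0 c9t1d0 \
      mirror c9t2d0 c9t3d0 \
      mirror c9t4d0 c9t5d0
  # ...and four more mirror pairs for the full 14 disks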
Hiya,
I have been playing with ZFS for a few days now on a test PC, and I plan to use
it for my home media server after being very impressed!
I've got the basics of creating zpools and zfs filesystems with compression and
dedup etc, but I'm wondering if there's a better way to handle security.
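For context, the kind of thing I have been doing so far is just this (pool and
dataset names are made up for the example):
  zpool create tank raidz2 c9t0d0 c9t1d0 c9t2d0 c9t3d0 c9t4d0 c9t5d0
  zfs create -o compression=on -o dedup=on tank/media
  zfs set sharesmb=on tank/media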
Thanks for the reply.
In that case, wouldn't it be better to, as you say, start with a 6 drive Z2,
then just keep adding drives until the case is full, for a single Z2 zpool?
Or even Z3, if that's available now?
I have an 11x 5.25" bay case, with 3x 5-in-3 hot swap caddies giving me 15
drive
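My understanding from earlier in the thread is that a raid-z vdev can't be
widened after creation, so 'keep adding drives' would really mean adding whole
vdevs as the case fills up, e.g. (placeholder names):
  zpool add tank raidz2 c10t0d0 c10t1d0 c10t2d0 c10t3d0 c10t4d0 c10t5d0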