On Wed, Jul 22, 2009 at 02:45:52PM -0500, Bob Friesenhahn wrote:
On Wed, 22 Jul 2009, t. johnson wrote:
Let's say I have a simple-ish setup that uses VMware files for
virtual disks on an NFS share from ZFS. I'm wondering how ZFS's
variable block size comes into play? Does it make the alignment
On 22.07.09 10:45, Adam Leventhal wrote:
which gap?
'RAID-Z should mind the gap on writes' ?
I believe this is in reference to the RAID-5 write hole, described here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
It's not.
So I'm not
Follow-up : happy end ...
It took quite some tinkering but... I have my data back...
I ended up starting without the troublesome ZFS storage array, uninstalled the
iscsitarget software and reinstalled it... just to have Solaris boot without
complaining about missing modules...
That left me
Thanks for posting this solution.
But I would like to point out that bug 6574286 (removing a slog doesn't work)
still isn't resolved. A solution is under way, according to George Wilson.
But in the meantime, if something happens you might be in a lot of trouble.
Even without some unfortunate
Hi,
I'm using Asus M3A78 boards (with the SB700) for OpenSolaris and M2A* boards
(with the SB600) for Linux, some of them with 4*1GB and others with 4*2GB ECC
memory. ECC faults will be detected and reported. I tested it with a small
tungsten light. By moving the light source slowly towards the
On Thu, Jul 23, 2009 at 10:28:38AM -0400, Kyle McDonald wrote:
In my case the slog slice wouldn't be the slog for the root pool, it
would be the slog for a second data pool.
I didn't think you could add a slog to the root pool anyway. Or has that
changed in recent builds? I'm a little
I think it is a great idea, assuming the SSD has good write performance.
This one claims up to 230MB/s read and 180MB/s write and it's only $196.
http://www.newegg.com/Product/Product.aspx?Item=N82E16820609393
Compared to this one (250MB/s read and 170MB/s write) which is $699.
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
A.
--
Adam Sherman
+1.613.797.6819
Adam Sherman wrote:
In the context of a low-volume file server, for a few users, is the
low-end Intel SSD sufficient?
You're right, it supposedly has less than half the write speed, and
that probably won't matter for me, but I can't find a 64GB version of it
for sale, and the 80GB
I've upgraded my OpenSolaris 2008.11 to 2009.06. During that process it created
a new boot environment:
BE          Active Mountpoint Space Policy Created
--          ------ ---------- ----- ------ -------
opensolaris NR     /          7.53G static 2009-01-03 13:18
I don't think this is limited to root pools. None of my pools (root or
non-root) seem to have the write cache enabled. Now that I think about
it, all my disks are hidden behind an LSI1078 controller so I'm not
sure what sort of impact that would have on the situation.
I have a few of those
Ok, I've found the solution to my problem on the Internet, here:
http://sigtar.com/2009/03/17/troubleshooting-time-slider-zfs-snapshots/
This was indeed caused by the old boot environment. This is how to solve it:
- disable snapshots on the old boot environment:
pfexec zfs set
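The command above is cut off in the archive; judging from the linked post, the fix amounts to clearing time-slider's auto-snapshot property on the old boot environment's datasets and then removing the BE. A sketch — the dataset and BE names below are examples, not from the original mail:

```shell
# Disable time-slider auto-snapshots on the old boot environment
# (the dataset name is hypothetical; substitute your old BE).
pfexec zfs set com.sun:auto-snapshot=false rpool/ROOT/opensolaris-1

# Once nothing references it, the old BE can be removed outright:
pfexec beadm destroy opensolaris-1
```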
The Asus M4N78-VM uses an Nvidia GeForce 8200 chipset (this board only has 1
PCIe x16 slot though; I should look at those that have 2 slots).
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Oh, and another unrelated question:
Would I better off using OpenSolaris or Solaris Community Edition?
I suspect SCE has more drivers (though maybe in a more beta state?), but its
huge download size (several days in backward New Zealand, thanks Telecom NZ!)
means I would only try if there is
c == chris no-re...@opensolaris.org writes:
c do you know what the ECC BIOS modes mean?
It's about the hardware scrubbing feature I mentioned.
I didn't mean using a slog for the root pool. I meant using a slog for a data
pool, where the data pool consists of (rotating) hard disks complemented with
an SSD-based slog. But instead of a dedicated SSD for the slog, I want the
root pool to share the SSD with the slog. Both can be mirrored to
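That layout can be sketched as follows (the pool, device, and slice names are assumptions, not from the thread): each SSD is split into two slices, one joining the root pool mirror and one serving as half of a mirrored slog for the data pool:

```shell
# s0 of each SSD: root pool mirror; s1 of each SSD: mirrored slog.
zpool attach rpool c2t0d0s0 c2t1d0s0          # mirror the root slice
zpool add tank log mirror c2t0d0s1 c2t1d0s1   # mirrored slog slices
```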
I'm going the other route here, and using a Intel small server
motherboard.
I'm currently trying the Supermicro X7SBE, which supports a non-Xeon
CPU, and _should_ actually use the (unbuffered) ECC RAM I have in it.
It can also support a network KVM IPMI board, which is nice (though not
cheap -
Adam Leventhal wrote:
Hey Bob,
MTTDL analysis shows that given normal environmental conditions, the
MTTDL of RAID-Z2 is already much longer than the life of the computer
or the attendant human. Of course sometimes one encounters unusual
conditions where additional redundancy is desired.
To
Robert,
On Fri, Jul 24, 2009 at 12:59:01AM +0100, Robert Milkowski wrote:
To what analysis are you referring? Today the absolute fastest you can
resilver a 1TB drive is about 4 hours. Real-world speeds might be half
that. In 2010 we'll have 3TB drives meaning it may take a full day to
Adam Leventhal wrote:
I just blogged about triple-parity RAID-Z (raidz3):
http://blogs.sun.com/ahl/entry/triple_parity_raid_z
As for performance, on the system I was using (a max config Sun Storage
7410), I saw about a 25% improvement to 1GB/s for a streaming write
workload. YMMV, but I'd
Ok, so it seems that with DiskSuite, detaching a mirror does nothing to
the disk you detached.
However, zpool detach appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
but no amount of work that we have found will bring the
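A minimal way to observe the behaviour described above (device and pool names are hypothetical):

```shell
zpool detach tank c1t1d0    # detach one half of the mirror

zdb -l /dev/rdsk/c1t1d0s0   # labels are still printed from the device...

zpool import -d /dev/dsk    # ...but no importable pool is found on it
zpool import -D             # nor is it seen as a destroyed pool
```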
chris wrote:
Ok, so the choice for a MB boils down to:
- Intel desktop MB, no ECC support
This is mostly true. The exceptions are some implementations of the
Socket T LGA 775 (i.e. late Pentium 4 series, and Core 2) D975X and X38
chipsets, and possibly some X48 boards as well. Intel's
Looking at this external array by HP:
http://h18006.www1.hp.com/products/storageworks/600mds/index.html
70 disks in 5U, which could probably be configured in JBOD.
Has anyone attempted to connect this to a box running opensolaris to
create a 70 disk pool?
--
Brent Jones
br...@servuhome.net
On 07/21/09 01:21 PM, Richard Elling wrote:
I never win the lottery either :-)
Let's see. Your chance of winning a 49 ball lottery is apparently
around 1 in 14*10^6, although it's much better than that because of
submatches (smaller payoffs for matching fewer than 6 balls).
There are about
Jorgen Lundman wrote:
However, zpool detach appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
off the first 1024 1K blocks from the disk, ran zpool detach on it,
Cheers Miles, and thanks also for the tip to look in the BIOS options to see if
ECC is actually used.
Which mode would you use? Max seems the most appealing; why would anyone use
something called basic? But there must be a catch if they provided several ECC
support modes.
I am glad this
More choice is good!
It seems Intel's server boards sometimes accept desktop CPUs, but don't support
SpeedStep. Is all OK with those?
On Fri, Jul 24, 2009 at 9:24 AM, Jorgen Lundman lund...@gmo.jp wrote:
However, zpool detach appears to mark the disk as blank, so nothing will
find any pools (import, import -D etc). zdb -l will show labels,
If both disks are bootable (with installboot or installgrub), removing
the mirror and
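For reference, the boot blocks mentioned are installed with installboot on SPARC or installgrub on x86; the device name below is an example:

```shell
# SPARC: put a ZFS boot block on the second half of the root mirror
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t1d0s0

# x86: install the GRUB stages instead
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```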
Note that the 'ecccheck.pl' script depends on the 'pcitweak' utility
which is no longer present in OpenSolaris 2009.06 and Ubuntu 8.10
because of Xorg changes.
This is exactly the kind of hidden trap I fear. One does everything right and
then discovers that xx is missing or has been changed
That is an interesting bit of kit. I wish a white box manufacturer would
create something like this (hint hint supermicro)