On Sun, Mar 18, 2012 at 9:03 AM, Jim Klimov jimkli...@cos.ru wrote:
Hello, while browsing around today I stumbled across the
Seagate Pipeline HD HDD lineup (e.g. ST2000VM002).
Do any ZFS users have experience with them?
http://www.seagate.com/www/en-us/products/consumer_electronics/pipeline/
On Wed, Jul 13, 2011 at 6:32 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
If you go the LSI2008 route, avoid the RAID functionality, as it messes up ZFS.
Flash the BIOS to JBOD mode.
You don't even have to do that with the LSI SAS2 cards. They no
longer ship alternate IT-mode firmware
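For anyone who does end up cross-flashing a SAS2008-based HBA to IT firmware, the usual tool is LSI's sas2flash utility. A rough sketch; the firmware and BIOS file names below are placeholders and vary by card and firmware release:

# sas2flash -listall
# sas2flash -o -f 2118it.bin -b mptsas2.rom

The first command confirms the adapter is visible and shows the current firmware; the second writes the IT-mode firmware image plus the optional boot BIOS (omit -b if you never boot from the controller).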
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High bh...@freaks.com wrote:
On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul espr...@omniti.com wrote:
Interesting-- what is the suspected impact of not having TRIM support?
There shouldn't be much, since zfs isn't changing data in place. Any
drive
On Tue, Jul 12, 2011 at 1:35 PM, Brandon High bh...@freaks.com wrote:
Most enterprise SSDs use something like 30% for spare area. So a
drive with 128GiB (base 2) of flash will have 100GB (base 10) of
available storage. A consumer-level drive will have ~6% spare, or
128GiB of flash and 128GB
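The rough arithmetic behind those figures, assuming 128 GiB of raw flash:

128 GiB of flash  ~ 137 GB (base 10)
~30% spare:  137 GB * 0.70 ~  96 GB exposed (the "100 GB" enterprise class)
~ 6% spare:  137 GB * 0.94 ~ 129 GB exposed (the "128 GB" consumer class)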
On Sat, Jul 9, 2011 at 2:19 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
Most drives should work well for a pure SSD pool. I have a postgresql
database on a linux box on a mirrored set of C300s. AFAIK ZFS doesn't yet
support TRIM, so that can be an issue. Apart from that, it should work
On Wed, Jun 15, 2011 at 4:33 PM, Nomen Nescio nob...@dizum.com wrote:
Has there been any change to server hardware with respect to the number of
drives since ZFS came out? Many of the servers around still have an even
number of drives (2, 4, etc.), and it seems far from optimal from a ZFS
On Tue, Jun 14, 2011 at 10:09 PM, Ding Honghui ding_hong...@vobile.cn wrote:
I expect to have 14*931/1024 = 12.7TB of zpool space, but actually it only has
12.6TB of zpool space:
# zpool list
NAME       SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
datapool  12.6T  9.96T   2.66T  78%  ONLINE  -
#
On Sat, Jun 4, 2011 at 3:51 PM, Harry Putnam rea...@newsguy.com wrote:
Apparently my OS is new enough (b147 OpenIndiana)... since the command
is known. Very nice... but where is the documentation?
`man zfs' has no hits on a grep for "diff" (except "different"...)
Ahh never mind... I found:
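For anyone else hunting for it, zfs diff takes one or two snapshots of the same dataset. A sketch with invented dataset, snapshot, and file names; the first output column is M/+/-/R for modified, added, removed, renamed:

# zfs snapshot tank/home@before
# zfs snapshot tank/home@after
# zfs diff tank/home@before tank/home@after
M       /tank/home/notes.txt
+       /tank/home/new-file.txt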
On Fri, Jun 3, 2011 at 11:22 AM, Paul Kraus p...@kraus-haus.org wrote:
So is there a way to read these real IOPS numbers?
iostat is reporting 600-800 IOPS peak (1-second sample) for these
7200 RPM SATA drives. If the drives are doing aggregation, then how do you
tell what is really going on?
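One way to watch the per-device numbers directly is extended iostat at a short interval; the device name and figures below are only illustrative:

# iostat -xn 1
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  120.0  680.0  960.0 5400.0  0.0  4.0    0.0    6.2   0  85 c7t2d0

Note that this still counts what the OS issues to the drive, not how the drive internally reorders or coalesces those commands.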
On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
Hi list,
I've got a pool that's got a single raidz1 vdev. I've just put some more disks in
and I want to replace that raidz1 with a three-way mirror. I was thinking
I'd just make a new pool and copy everything
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
Thanks Eric. However, seeing as I can't have two pools named 'tank', I'll
have to name the new one something else. I believe I will be able to rename
it afterwards, but I just wanted to check first. I'd have to
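For the record, the sequence most people use for that kind of migration is snapshot, send/recv into the new pool, then rename via export/import. A rough sketch with invented pool and device names; verify the copy before destroying the original:

# zpool create newtank mirror c1t2d0 c1t3d0 c1t4d0
# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | zfs recv -F newtank
# zpool destroy tank
# zpool export newtank
# zpool import newtank tank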
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function,
On 12/ 4/09 02:06 AM, Erik Trimble wrote:
Hey folks.
I've looked around quite a bit, and I can't find something like this:
I have a bunch of older systems which use Ultra320 SCA hot-swap
connectors for their internal drives. (e.g. v20z and similar)
I'd love to be able to use modern
Erin wrote:
The issue that we have is that the first two vdevs were almost full, so we
will quickly be in a state where all writes land on the 3rd vdev. It would
also be useful to have better read performance, but I figured that solving the
write performance problem would also help
Matthias Appel wrote:
I am using 2x Gbit Ethernet and 4 GB of RAM.
4 GB of RAM for the iRAM should be more than sufficient (0.5 times RAM and
10s worth of IO).
I am aware that this RAM is non-ECC, so I plan to mirror the ZIL device.
Any considerations for this setup? Will it work as I
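Mirroring the log device itself is just a matter of adding it as a mirrored vdev; a minimal sketch, with placeholder device names:

# zpool add tank log mirror c2t0d0 c2t1d0

or, if a single log device is already in the pool, attaching a second one to it:

# zpool attach tank c2t0d0 c2t1d0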
better
supported than the AMD stuff, but even the AMD boards work well.
Scott Meilicke wrote:
So what happens during the txg commit?
For example, if the ZIL is a separate device (an SSD, say), does it
not work like:
1. A sync operation commits the data to the SSD
2. A txg commit happens, and the data from the SSD are written to the
spinning disk
casper@sun.com wrote:
Most of the IntelliPower drives are just 5400 rpm; I suppose that this
drive can deliver 150MB/s on sequential access.
I have the earlier generation of the 2TB WD RE4 drive in one of my systems.
With Bonwick's diskqual script I saw an average of 119 MB/s across 14
Adam Leventhal wrote:
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying
RAID-Z
until this issue is resolved. I anticipate having it fixed in build 124.
Adam,
Is it known approximately when this bug was introduced? I have a system running
snv_111 with a large
interested in system administration issues.
+1 to those items. I'd also like to hear about how people are maintaining
offsite DR copies of critical data with ZFS. Just send/recv, or something a
little more live?
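Mostly periodic incremental send/recv over ssh here; roughly, with invented host, pool, and snapshot names (this assumes @yesterday has already been received on the far side):

# zfs snapshot tank/data@today
# zfs send -i tank/data@yesterday tank/data@today | \
      ssh dr-host zfs recv -F backup/data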
the disks with EFI labels when you
give a whole disk (no 's#') as an argument.
Hope this helps,
mirror of two disks (RAID1).