On Jul 1, 2009, at 10:58 PM, Brent Jones br...@servuhome.net wrote:
On Wed, Jul 1, 2009 at 7:29 PM, HUGE | David
Stahl dst...@hugeinc.com wrote:
The real benefit of using a separate zvol for each VM is the
instantaneous cloning of a machine, and the clone will take almost no
additional
On Fri, Jul 3, 2009 at 9:47 PM, James Lever j...@jamver.id.au wrote:
On 04/07/2009, at 10:42 AM, Ross Walker wrote:
XFS on LVM or EVMS volumes can't do barrier writes due to the lack of
barrier support in LVM and EVMS, so it doesn't do a hard cache sync like it
would on a raw disk partition
On Jul 5, 2009, at 6:06 AM, James Lever j...@jamver.id.au wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called enterprise SSDs. SSDs with
On Jul 5, 2009, at 9:20 PM, Richard Elling richard.ell...@gmail.com
wrote:
Ross Walker wrote:
Thanks for the info. SSD is still very much a moving target.
I worry about SSD drives' long-term reliability. If I mirror two of
the same drives what do you think the probability of a double
On Jul 9, 2009, at 4:22 AM, Jim Klimov no-re...@opensolaris.org wrote:
To tell the truth, I expected zvols to be faster than filesystem
datasets. They seem
to have less overhead without inodes, POSIX, ACLs and so on. So I'm
puzzled by test
results.
I'm now considering the dd i/o block
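A minimal sketch of that kind of dd comparison, assuming a hypothetical
pool named tank (zvols default to an 8k volblocksize, filesystems to a
128k recordsize, so the chosen block size matters to the result):
$ pfexec zfs create -V 1g tank/testvol
$ pfexec dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=128k count=8192   # raw zvol
$ pfexec zfs create tank/testfs
$ pfexec dd if=/dev/zero of=/tank/testfs/ddtest bs=128k count=8192           # plain file on a dataset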
On Jul 11, 2009, at 5:41 PM, Galen gal...@zinkconsulting.com wrote:
On Jul 11, 2009, at 2:24 PM, Andrew Gabriel wrote:
Galen wrote:
I have a situation where my zpool (with two raidz2s) is
resilvering and reaches a certain point, then starts over.
There are no read, write or checksum errors.
On Jul 12, 2009, at 11:45 PM, Harry Putnam rea...@newsguy.com wrote:
Reading various bits of google output about send/receive I'm starting
to wonder if that process is maybe the wrong way to go at what I want
to do.
I have a filesystem z1/projects. I want to remove it from the z1 pool
and put
Maybe it's the disks' firmware that is bad, or maybe they're jumpered
for 1.5Gbps on a 3.0Gbps-only bus? Or maybe it's a problem with the disk
cable/bay/enclosure/slot?
It sounds like there is more than ZFS in the mix here. I wonder if the
drive's status keeps flapping online/offline and
On Jul 13, 2009, at 11:33 AM, Ross no-re...@opensolaris.org wrote:
Gaaah, looks like I spoke too soon:
$ zpool status
pool: rc-pool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
On Jul 13, 2009, at 2:54 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Mon, 13 Jul 2009, Brad Diggs wrote:
You might want to have a look at my blog on filesystem cache
tuning... It will probably help
you to avoid memory contention between the ARC and your apps.
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik no-re...@opensolaris.org
wrote:
Does anybody have some numbers on speed of SATA vs 15k SAS? Is it
really a big difference?
For random I/O the number of IOPS is 1000/(mean access time + avg
rotational latency), with both in ms.
Avg rotational latency
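As a rough worked example using typical datasheet figures: a 15k RPM SAS
drive has an average rotational latency of about 2 ms (half a revolution)
and an average seek of about 3.5 ms, so roughly 1000/(3.5+2) = ~180 IOPS;
a 7200 RPM SATA drive at about 4.2 ms rotational plus 8.5 ms seek gives
roughly 1000/(8.5+4.2) = ~80 IOPS.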
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling richard.ell...@gmail.com wrote:
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
On Aug 4, 2009, at 7:26 AM, Joachim Sandvik no-re...@opensolaris.org
wrote:
Does anybody have some numbers on speed of SATA vs 15k SAS? Is it really
a big
On Tue, Aug 4, 2009 at 10:40 AM, erik.ableson eable...@mac.com wrote:
You're running into the same problem I had with 2009.06, as they have
corrected a bug where the iSCSI target prior to
2009.06 didn't completely honor SCSI sync commands issued by the initiator.
Some background :
Discussion:
On Tue, Aug 4, 2009 at 9:57 AM, Charles Baker no-re...@opensolaris.org wrote:
My testing has shown some serious problems with the
iSCSI implementation for OpenSolaris.
I setup a VMware vSphere 4 box with RAID 10
direct-attached storage and 3 virtual machines:
- OpenSolaris 2009.06 (snv_111b)
On Tue, Aug 4, 2009 at 11:21 AM, Ross Walker rswwal...@gmail.com wrote:
On Tue, Aug 4, 2009 at 9:57 AM, Charles Baker no-re...@opensolaris.org wrote:
My testing has shown some serious problems with the
iSCSI implementation for OpenSolaris.
I setup a VMware vSphere 4 box with RAID 10
On Aug 4, 2009, at 1:35 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
But this MUST happen. If it doesn't then you are playing Russian
Roulette with your data, as a kernel panic can cause a loss of up to
1/8 of the size of your system's RAM
On Aug 4, 2009, at 2:11 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Aug 4, 2009, at 8:01 AM, Ross Walker wrote:
On Tue, Aug 4, 2009 at 10:33 AM, Richard Elling richard.ell...@gmail.com
wrote:
On Aug 4, 2009, at 7:01 AM, Ross Walker wrote:
For random io the number of IOPS
On Aug 4, 2009, at 8:36 PM, Carson Gaspar car...@taltos.org wrote:
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential
write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog device with NVRAM speed. It would be even
better to have
On Aug 4, 2009, at 9:18 PM, James Lever j...@jamver.id.au wrote:
On 05/08/2009, at 10:36 AM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yep, it's a MegaRAID device.
I have been using one with a Samsung SSD in RAID0 mode (to avail
On Aug 4, 2009, at 9:55 PM, Carson Gaspar car...@taltos.org wrote:
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar car...@taltos.org wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support
recently.
Yes, but the LSI support of SSDs is on later controllers
On Aug 4, 2009, at 10:17 PM, James Lever j...@jamver.id.au wrote:
On 05/08/2009, at 11:41 AM, Ross Walker wrote:
What is your recipe for these?
There wasn't one! ;)
The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.
So the key is the drive needs to have the Dell badging
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD? The data is indeed
pushed closer to the disks, but there may be considerably more
latency associated with getting that data
On Aug 6, 2009, at 11:09 AM, Scott Meilicke no-re...@opensolaris.org
wrote:
You can use a separate SSD ZIL.
Yes, but to see if a separate ZIL will make a difference the OP should
try his iSCSI workload first with the ZIL, then temporarily disable the
ZIL and re-try his workload.
Nothing worse
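For reference, a sketch of how the ZIL was typically disabled for such a
test on builds of that era (a global, unsafe-for-production tunable; it
takes effect when datasets are remounted, or after a reboot for the
/etc/system setting):
$ pfexec bash -c 'echo zil_disable/W0t1 | mdb -kw'   # live, via the kernel debugger
or add the following line to /etc/system and reboot:
set zfs:zil_disable = 1
Set it back to 0 afterwards to restore normal synchronous semantics.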
On Wed, Aug 12, 2009 at 2:15 PM, Mike Gerdts mger...@gmail.com wrote:
On Wed, Aug 12, 2009 at 11:53 AM, Damjan
Perenic damjan.pere...@guest.arnes.si wrote:
On Tue, Aug 11, 2009 at 11:04 PM, Richard
Elling richard.ell...@gmail.com wrote:
On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:
I suspect
On Wed, Aug 12, 2009 at 6:49 PM, Mattias Pantzare pant...@ludd.ltu.se wrote:
It would be nice if ZFS had something similar to VxFS File Change Log.
This feature is very useful for incremental backups and other
directory walkers, provided they support FCL.
I think this tangent deserves its own
On Aug 13, 2009, at 1:37 AM, James Hess no-re...@opensolaris.org
wrote:
The real benefit of using a
separate zvol for each vm is the instantaneous
cloning of a machine, and the clone will take almost
no additional space initially. In our case we build a
You don't have to use ZVOL
On Aug 14, 2009, at 2:31 PM, Joseph L. Casale jcas...@activenetwerx.com
wrote:
I have setup a pool called vmstorage and mounted it as nfs storage
in esx4i.
The pool in FreeNAS contains 4 SATA2 disks in raidz. I have 6 VMs:
5 Linux and 1 Windows, and performance is terrible.
Any
On Aug 21, 2009, at 5:46 PM, Ron Mexico no-re...@opensolaris.org
wrote:
I'm in the process of setting up a NAS for my company. It's going to
be based on Open Solaris and ZFS, running on a Dell R710 with two
SAS 5/E HBAs. Each HBA will be connected to a 24 bay Supermicro JBOD
chassis.
On Aug 22, 2009, at 5:21 PM, Neil Perrin neil.per...@sun.com wrote:
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a
lot of small files. Also, many small, quick writes are also one of
the many workloads our users have.
Real-world
On Aug 22, 2009, at 7:33 PM, Ross Walker rswwal...@gmail.com wrote:
On Aug 22, 2009, at 5:21 PM, Neil Perrin neil.per...@sun.com wrote:
On 08/20/09 06:41, Greg Mason wrote:
Something our users do quite a bit of is untarring archives with a
lot of small files. Also, many small, quick
On Aug 23, 2009, at 12:11 AM, Tristan Ball tristan.b...@leica-microsystems.com
wrote:
Ross Walker wrote:
[snip]
We turned up our X4540s, and this same tar unpack took over 17
minutes! We disabled the ZIL for testing, and we dropped this to
under 1 minute. With the X25-E as a slog
On Aug 23, 2009, at 9:59 AM, Ross Walker rswwal...@gmail.com wrote:
On Aug 23, 2009, at 12:11 AM, Tristan Ball tristan.b...@leica-microsystems.com
wrote:
Ross Walker wrote:
[snip]
We turned up our X4540s, and this same tar unpack took over 17
minutes! We disabled the ZIL
On Aug 24, 2009, at 10:02 PM, LEES, Cooper c...@ansto.gov.au wrote:
Hi Duncan,
I also do the same with my Mac for Time Machine and get the same WOEFUL
performance to my x4500 filer.
I have mounted iSCSI zvols on a Linux machine and it performs as
expected
(50 MB/s) as opposed to
On Aug 27, 2009, at 4:30 AM, David Bond david.b...@tag.no wrote:
Hi,
I was directed here after posting in CIFS discuss (as I first
thought that it could be a CIFS problem).
I posted the following in CIFS:
When using iometer from Windows to the file share on OpenSolaris
snv_101 and snv_111
On Aug 27, 2009, at 11:29 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Thu, 27 Aug 2009, David Bond wrote:
I just noticed that if the server hasn't hit its target ARC size,
the pauses are for maybe .5 seconds, but as soon as it hits its ARC
target, the IOPS drop to around
On Sep 3, 2009, at 1:25 PM, Ross myxi...@googlemail.com wrote:
Yeah, I wouldn't mind knowing that too. With the old snv builds I
just downloaded the appropriate image; with OpenSolaris and the
development repository, is there any way to pick a particular build?
I just do a 'pkg list
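For what it's worth, my understanding is that the development repository
could be pinned to a specific build by installing that version of the
entire incorporation; build 111 below is only an example:
$ pfexec pkg refresh
$ pfexec pkg install entire@0.5.11-0.111
$ pkg list entire   # confirm which build the image now reflects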
On Sep 4, 2009, at 2:22 PM, Scott Meilicke scott.meili...@craneaerospace.com
wrote:
So, I just re-read the thread, and you can forget my last post. I
had thought the argument was that the data were not being written to
disk twice (assuming no separate device for the ZIL), but it was
just
On Sep 4, 2009, at 4:33 PM, Scott Meilicke scott.meili...@craneaerospace.com
wrote:
Yes, I was getting confused. Thanks to you (and everyone else) for
clarifying.
Sync or async, I see the txg flushing to disk starve read IO.
Well, try the kernel setting and see how it helps.
Honestly
On Sep 4, 2009, at 6:33 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Fri, 4 Sep 2009, Scott Meilicke wrote:
I only see the blocking while load testing, not during regular
usage, so I am not so worried. I will try the kernel settings to
see if that helps if/when I see the
On Sep 4, 2009, at 5:25 PM, Scott Meilicke scott.meili...@craneaerospace.com
wrote:
I only see the blocking while load testing, not during regular
usage, so I am not so worried. I will try the kernel settings to see
if that helps if/when I see the issue in production.
For what it is
On Sep 4, 2009, at 8:59 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Fri, 4 Sep 2009, Ross Walker wrote:
I guess one can find a silver lining in any grey cloud, but for
myself I'd just rather see a more linear approach to writes. Anyway
I have never seen any reads happen
On Sep 4, 2009, at 10:02 PM, David Magda dma...@ee.ryerson.ca wrote:
On Sep 4, 2009, at 21:44, Ross Walker wrote:
Though I have only heard good comments from my ESX admins since
moving the VMs off iSCSI and on to ZFS over NFS, so it can't be
that bad.
What's your pool configuration
On Sun, Sep 6, 2009 at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags, so no
directory service lookups). There is a delay of between 2 and 30 seconds
but no
On Sep 6, 2009, at 3:32 PM, Thomas Burgess wonsl...@gmail.com wrote:
yes, but it stripes across the vdevs, and when it needs to read data
back, it will absolutely be limited.
During reads the raidz will be the fastest vdev, during writes it
should have about the same write performance
Sorry for my earlier post, I responded prematurely.
On Sep 6, 2009, at 9:15 AM, James Lever j...@jamver.id.au wrote:
I’m experiencing occasional slow responsiveness on an OpenSolaris b118
system typically noticed when running an ‘ls’ (no extra flags,
so no directory service lookups).
On Tue, Sep 8, 2009 at 3:09 PM, Jon
Whitehouse jonathan.whiteho...@zimmer.com wrote:
I'm new to ZFS and a scenario recently came up that I couldn't figure out.
We are used to using Veritas Volume Manager, so that may affect our thinking
on this approach.
Here it is.
1. ServerA was
On Fri, Sep 11, 2009 at 12:53 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Sep 11, 2009, at 5:05 AM, Markus Kovero wrote:
Hi, I was just wondering following idea, I guess somebody mentioned
something similar and I’d like some thoughts on this.
1. create iscsi volume on Node-A
On Sep 16, 2009, at 4:29 PM, Marty Scholes martyscho...@yahoo.com
wrote:
Yes. This is a mathematical way of saying
lose any P+1 of N disks.
I am hesitant to beat this dead horse, yet it is a nuance that
either I have completely misunderstood or many people I've met have
completely
On Sep 16, 2009, at 6:50 PM, Marion Hakanson hakan...@ohsu.edu wrote:
rswwal...@gmail.com said:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path and the other on another, then a failure of
On Sep 16, 2009, at 6:43 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Wed, 16 Sep 2009, Ross Walker wrote:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path
On Thu, Sep 24, 2009 at 11:29 PM, James Lever j...@jamver.id.au wrote:
On 25/09/2009, at 11:49 AM, Bob Friesenhahn wrote:
The commentary says that normally the COMMIT operations occur during
close(2) or fsync(2) system call, or when encountering memory pressure. If
the problem is slow
On Fri, Sep 25, 2009 at 5:24 PM, James Lever j...@jamver.id.au wrote:
On 26/09/2009, at 1:14 AM, Ross Walker wrote:
By any chance do you have copies=2 set?
No, only 1. So the double data going to the slog (as reported by iostat) is
still confusing me and clearly potentially causing
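For reference, the property in question is easy to confirm per dataset
(the dataset name below is only a placeholder):
$ zfs get copies tank/nfs
$ pfexec zfs set copies=1 tank/nfs   # only affects blocks written afterwards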
On Fri, Sep 25, 2009 at 1:39 PM, Richard Elling
richard.ell...@gmail.com wrote:
On Sep 25, 2009, at 9:14 AM, Ross Walker wrote:
On Fri, Sep 25, 2009 at 11:34 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
As an aside, a slog device
On Fri, Sep 25, 2009 at 5:47 PM, Marion Hakanson hakan...@ohsu.edu wrote:
j...@jamver.id.au said:
For a predominantly NFS server purpose, it really looks like a case where the
slog has to outperform your main pool for continuous write speed as well as
instant response time as the primary
On Sep 25, 2009, at 6:19 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Fri, 25 Sep 2009, Ross Walker wrote:
Problem is most SSD manufacturers list sustained throughput with large
IO sizes, say 4MB, and not 128K, so it is tricky buying a good SSD
that can handle the throughput
On Sep 27, 2009, at 3:19 AM, Paul Archer p...@paularcher.org wrote:
So, after *much* wrangling, I managed to take one of my drives
offline, relabel/repartition it (because I saw that the first sector
was 34, not 256, and realized there could be an alignment issue),
and get it back into the
On Sep 27, 2009, at 11:49 AM, Paul Archer p...@paularcher.org wrote:
Problem is that while it's back, the performance is horrible. It's
resilvering at about (according to iostat) 3.5MB/sec. And at some
point, I was zeroing out the drive (with 'dd if=/dev/zero
of=/dev/dsk/c7d0'), and iostat
On Sep 27, 2009, at 1:44 PM, Paul Archer p...@paularcher.org wrote:
My controller, while normally a full RAID controller, has had its
BIOS turned off, so it's acting as a simple SATA controller. Plus,
I'm seeing this same slow performance with dd, not just with ZFS.
And I wouldn't think
On Sep 27, 2009, at 8:41 PM, Ron Watkins rwa...@gmail.com wrote:
I have a box with 4 disks. It was my intent to place a mirrored root
partition on 2 disks on different controllers, then use the
remaining space and the other 2 disks to create a raid-5
configuration from which to export
On Sep 27, 2009, at 10:05 PM, Ron Watkins rwa...@gmail.com wrote:
My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another
mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across
c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
There is no need for the 2 mirrors both on c1t0 and
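A sketch of that layout with zpool commands, keeping the slice names from
the original post; the pool names apps/data are only placeholders, and the
root mirror would normally be made by attaching the second slice to the
installer-created rpool:
$ pfexec zpool attach rpool c1t0d0s0 c2t0d0s0
$ pfexec zpool create apps mirror c1t0d0s1 c2t0d0s1
$ pfexec zpool create data raidz c1t0d0s7 c1t1d0s7 c2t0d0s7 c2t1d0s7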
On Tue, Sep 29, 2009 at 10:35 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Sep 29, 2009, at 2:03 AM, Bernd Nies wrote:
Hi,
We have a Sun Storage 7410 with the latest release (which is based upon
opensolaris). The system uses a hybrid storage pool (23 1TB SATA disks in
RAIDZ2 and 1
On Tue, Sep 29, 2009 at 5:30 PM, David Stewart despasad...@onebox.com wrote:
Before I try these options you outlined, I do have a question. I went into
VMware Fusion and removed one of the drives from the virtual machine that was
used to create a RAIDZ pool (there were five drives, one for
On Sep 30, 2009, at 10:40 AM, Brian Hubbleday b...@delcam.com wrote:
Just realised I missed a rather important word out there, that could
confuse.
So the conclusion I draw from this is that the --incremental--
snapshot simply contains every written block since the last snapshot
But this is concerning reads, not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty trevor_pre...@eagle.co.nz
wrote:
Gary
Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very unsafe mode and report the write
complete when it is
export option 'async' which
is unsafe.
-Ross
Ross Walker wrote:
But this is concerning reads, not writes.
-Ross
On Oct 20, 2009, at 4:43 PM, Trevor Pretty
trevor_pre...@eagle.co.nz wrote:
Gary
Were you measuring the Linux NFS write performance? It's well
known that Linux can use
On Nov 2, 2009, at 2:38 PM, Paul B. Henson hen...@acm.org wrote:
On Sat, 31 Oct 2009, Al Hopper wrote:
Kudos to you - nice technical analysis and presentation. Keep
lobbying
your point of view - I think interoperability should win out if it
comes
down to an arbitrary decision.
Thanks;
On Nov 6, 2009, at 11:23 PM, Paul B. Henson hen...@acm.org wrote:
NFSv3 gss:
damien cfservd # mount -o sec=krb5p ike.unx.csupomona.edu:/export/
user/henson /mnt
hen...@damien /mnt/sgid_test $ ls -ld
drwx--s--x+ 2 henson iit 2 Nov 6 20:14 .
hen...@damien /mnt/sgid_test $ mkdir gss
On Nov 8, 2009, at 12:09 PM, Tim Cook t...@cook.ms wrote:
Why not just convert the VMs to run in VirtualBox and run Solaris
directly on the hardware?
Or use OpenSolaris xVM (Xen) with either qemu img files on zpools for
the VMs or zvols.
-Ross
On Nov 27, 2009, at 12:55 PM, Carsten Aulbert carsten.aulb...@aei.mpg.de
wrote:
On Friday 27 November 2009 18:45:36 Carsten Aulbert wrote:
I was too fast, now it looks completely different:
scrub: resilver completed after 4h3m with 0 errors on Fri Nov 27
18:46:33
2009
[...]
s13:~# zpool
On Dec 2, 2009, at 6:57 AM, Brian McKerr br...@datamatters.com.au
wrote:
Hi all,
I have a home server based on SNV_127 with 8 disks;
2 x 500GB mirrored root pool
6 x 1TB raidz2 data pool
This server performs a few functions;
NFS : for several 'lab' ESX virtual machines
NFS : mythtv
On Dec 11, 2009, at 4:17 AM, Alexander Skwar alexanders.mailinglists+nos...@gmail.com
wrote:
Hello Jeff!
Could you (or anyone else, of course *G*) please show me how?
Situation:
There shall be 2 snapshots of a ZFS called rpool/rb-test
Let's call those snapshots 01 and 02.
$ sudo zfs
On Dec 11, 2009, at 3:26 PM, Alexander Skwar alexanders.mailinglists+nos...@gmail.com
wrote:
Hi!
On Fri, Dec 11, 2009 at 15:55, Fajar A. Nugraha fa...@fajar.net
wrote:
On Fri, Dec 11, 2009 at 4:17 PM, Alexander Skwar
alexanders.mailinglists+nos...@gmail.com wrote:
$ sudo zfs create
On Dec 21, 2009, at 4:09 PM, Michael Herf mbh...@gmail.com wrote:
Anyone who's lost data this way: were you doing weekly scrubs, or
did you find out about the simultaneous failures after not touching
the bits for months?
Scrubbing on a routine basis is good for detecting problems early,
On Dec 21, 2009, at 11:56 PM, Roman Naumenko ro...@naumenko.ca wrote:
On Dec 21, 2009, at 4:09 PM, Michael Herf
mbh...@gmail.com wrote:
Anyone who's lost data this way: were you doing
weekly scrubs, or
did you find out about the simultaneous failures
after not touching
the bits for
On Dec 22, 2009, at 11:46 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
Raid10 provides excellent performance and if performance is a
priority then I recommend it, but I was under the impression that
resiliency was the priority
On Dec 22, 2009, at 8:40 PM, Charles Hedrick hedr...@rutgers.edu
wrote:
It turns out that our storage is currently being used for
* backups of various kinds, run daily by cron jobs
* saving old log files from our production application
* saving old versions of java files from our production
On Dec 22, 2009, at 8:58 PM, Richard Elling richard.ell...@gmail.com
wrote:
On Dec 22, 2009, at 5:40 PM, Charles Hedrick wrote:
It turns out that our storage is currently being used for
* backups of various kinds, run daily by cron jobs
* saving old log files from our production
On Dec 22, 2009, at 9:08 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
I think zil_disable may actually make sense.
How about a ZIL comprised of two mirrored iSCSI vdevs formed from an
SSD on each box?
I would not have believed
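Assuming the two remote SSDs were already exported over iSCSI and visible
as local disks, the mirrored slog would be added roughly like this (device
names are hypothetical):
$ pfexec zpool add tank log mirror c4t1d0 c5t1d0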
On Dec 25, 2009, at 6:01 PM, Jeroen Roodhart j.r.roodh...@uva.nl
wrote:
Hi Freddie, list,
Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.
On Dec 29, 2009, at 7:55 AM, Brad bene...@yahoo.com wrote:
Thanks for the suggestion!
I have heard mirrored vdev configurations are preferred for Oracle,
but what's the difference between a raidz mirrored vdev vs a raid10
setup?
A mirrored raidz provides redundancy at a steep cost to
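To make the raid10-vs-raidz comparison concrete, with hypothetical disk
names on four spindles:
$ pfexec zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0   # striped mirrors ("RAID-10")
$ pfexec zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0           # one raidz vdev instead
Each mirror vdev contributes roughly one disk's worth of random write IOPS
(and more for reads), whereas a single raidz vdev behaves like roughly one
disk for small random I/O.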
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
A mirrored raidz provides redundancy at a steep cost to performance
and might I add a high monetary cost.
I am not sure what a mirrored raidz is. I have never heard
On Wed, Dec 30, 2009 at 12:35 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 29 Dec 2009, Ross Walker wrote:
Some important points to consider are that every write to a raidz vdev
must be synchronous. In other words, the write needs to complete on all the
drives
On Dec 30, 2009, at 11:55 PM, Steffen Plotner
swplot...@amherst.edu wrote:
Hello,
I was doing performance testing, validating zvol performance in
particular, and found zvol write performance to be slow:
~35-44MB/s at 1MB blocksize writes. I then tested the underlying zfs
file
On Sun, Jan 3, 2010 at 1:59 AM, Brent Jones br...@servuhome.net wrote:
On Wed, Dec 30, 2009 at 9:35 PM, Ross Walker rswwal...@gmail.com wrote:
On Dec 30, 2009, at 11:55 PM, Steffen Plotner swplot...@amherst.edu
wrote:
Hello,
I was doing performance testing, validating zvol performance
On Jan 11, 2010, at 2:23 PM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
wrote:
On Mon, 11 Jan 2010, bank kus wrote:
Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris
if there is no fair I/O scheduling between
On Jan 14, 2010, at 10:44 AM, Mr. T Doodle tpsdoo...@gmail.com
wrote:
Hello,
I have played with ZFS but not deployed any production systems using
ZFS and would like some opinions
I have a T-series box with 4 internal drives and would like to
deploy ZFS with availability and
On Jan 21, 2010, at 6:47 PM, Daniel Carosone d...@geek.com.au wrote:
On Thu, Jan 21, 2010 at 02:54:21PM -0800, Richard Elling wrote:
+ support file systems larger than 2GiB, include 32-bit UIDs and GIDs
file systems, but what about individual files within?
I think the original author meant
On Jan 30, 2010, at 2:53 PM, Mark white...@gmail.com wrote:
I have a 1U server that supports 2 SATA drives in the chassis. I
have 2 750 GB SATA drives. When I install OpenSolaris, I assume it
will want to use all or part of one of those drives for the install.
That leaves me with the
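One common approach after such an install, sketched with hypothetical
device names: attach the second disk to the root pool and make it
bootable, then wait for the resilver to finish:
$ pfexec bash -c 'prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2'   # copy the slice table
$ pfexec zpool attach rpool c7d0s0 c7d1s0
$ pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0
$ zpool status rpool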
On Feb 3, 2010, at 9:53 AM, Henu henrik.he...@tut.fi wrote:
Okay, so first of all, it's true that send is always fast and 100%
reliable because it uses blocks to see differences. Good, and thanks
for this information. If everything else fails, I can parse the
information I want from send
On Feb 3, 2010, at 12:35 PM, Frank Cusack
frank+lists/z...@linetwo.net wrote:
On February 3, 2010 12:19:50 PM -0500 Frank Cusack frank+lists/z...@linetwo.net
wrote:
If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or
On Feb 3, 2010, at 8:59 PM, Frank Cusack frank+lists/z...@linetwo.net
wrote:
On February 3, 2010 6:46:57 PM -0500 Ross Walker
rswwal...@gmail.com wrote:
So was there a final consensus on the best way to find the difference
between two snapshots (files/directories added, files/directories
this manually, using basic file system
functions offered by OS. I scan every byte in every file manually
and it
^^^
On February 3, 2010 10:11:01 AM -0500 Ross Walker rswwal...@gmail.com
wrote:
Not a ZFS method, but you could use rsync
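A sketch of that approach using the hidden .zfs directory, with
hypothetical dataset and snapshot names; -n makes it a dry run that only
reports differences:
$ rsync -rvn --delete /tank/data/.zfs/snapshot/snap2/ /tank/data/.zfs/snapshot/snap1/
This lists files added or changed in snap2 and, via --delete, files that
were removed since snap1, without copying anything.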
On Feb 5, 2010, at 10:49 AM, Robert Milkowski mi...@task.gda.pl wrote:
Actually, there is.
One difference is that when writing to a raid-z{1|2} pool compared
to a raid-10 pool you should get better throughput if at least 4
drives are used. Basically it is due to the fact that in RAID-10 the
On Feb 8, 2010, at 4:58 PM, Edward Ned Harvey macenterpr...@nedharvey.com
wrote:
How are you managing UIDs on the NFS server? If user eharvey
connects to
server from client Mac A, or Mac B, or Windows 1, or Windows 2, or
any of
the linux machines ... the server has to know it's eharvey,
On Feb 9, 2010, at 1:55 PM, matthew patton patto...@yahoo.com wrote:
The cheapest solution out there that isn't a Supermicro-like server
chassis, is DAS in the form of HP or Dell MD-series which top out at
15 or 16 3 drives. I can only chain 3 units per SAS port off an HBA
in either case.
On Feb 19, 2010, at 4:57 PM, Ragnar Sundblad ra...@csc.kth.se wrote:
On 18 feb 2010, at 13.55, Phil Harman wrote:
...
Whilst the latest bug fixes put the world to rights again with
respect to correctness, it may be that some of our performance
workarounds are still unsafe (i.e. if my iSCSI
On Feb 25, 2010, at 9:11 AM, Giovanni Tirloni gtirl...@sysdroid.com
wrote:
On Thu, Feb 25, 2010 at 9:47 AM, Jacob Ritorto jacob.rito...@gmail.com
wrote:
It's a kind gesture to say it'll continue to exist and all, but
without commercial support from the manufacturer, it's relegated to
On Mar 8, 2010, at 11:46 PM, ольга крыжановская
olga.kryzhanov...@gmail.com wrote:
tmpfs lacks features like quota and NFSv4 ACL support. May not be the
best choice if such features are required.
True, but if the OP is looking for those features they are more than
unlikely looking for an
On Mar 9, 2010, at 1:42 PM, Roch Bourbonnais
roch.bourbonn...@sun.com wrote:
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K ZFS records and
16K reads/writes from the application.
Next
On Mar 11, 2010, at 8:27 AM, Andrew acmcomput...@hotmail.com wrote:
Ok,
The fault appears to have occurred regardless of the attempts to
move to vSphere as we've now moved the host back to ESX 3.5 from
whence it came and the problem still exists.
Looks to me like the fault occurred as a