From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 10:27 PM
Is it still the case, as it once was, that allocating anything other
than whole disks as vdevs forces NCQ / write cache off on the drive
(either or both, forget which, guess write cache)?
I will only
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
or is it completely random, leaving me with some trial and error to work out
what disk is on what port?
It's highly desirable to have drives with lights on them. So you can
From: Daniel Carosone [mailto:d...@geek.com.au]
Sent: Thursday, June 16, 2011 11:05 PM
the [sata] channel is idle, blocked on command completion, while
the heads seek.
I'm interested in proving this point, because I believe it's false.
Just hand waving for the moment ... Presenting the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Marty Scholes
On a busy array it is hard even to use the leds as indicators.
Offline the disk. Light stays off.
Use dd to read the disk. Light stays on.
That should make it easy enough.
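For anyone trying this later, a minimal sketch of the suggested trick (pool and
device names here are made up, substitute your own):
  zpool offline tank c5t3d0
  dd if=/dev/rdsk/c5t3d0s0 of=/dev/null bs=1M
  zpool online tank c5t3d0
The offline stops pool I/O so the activity light goes dark, the dd keeps it lit
solidly while you look at the chassis, and the online puts the disk back when
you're done.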
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Saturday, June 18, 2011 7:47 PM
Actually, all of the data I've gathered recently shows that the number of
IOPS does not significantly increase for HDDs running random workloads.
However, the response time does :-(
Could you
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Sunday, June 19, 2011 11:03 AM
I was planning, in the near
future, to go run iozone on some system with, and without the disk cache
enabled according to format -e. If my hypothesis is right, it shouldn't
significantly affect
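If anyone wants to reproduce that comparison, something along these lines is
what I have in mind (the iozone flags and test file path are just an example,
not a prescription):
  iozone -i 0 -i 1 -r 128k -s 8g -f /tank/iozone.tmp
Run it once with the disk write cache enabled and once with it disabled via
format -e, and compare the write numbers.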
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave U.Random
My personal preference, assuming 4 disks, since the OS is mostly reads and
only a little bit of writes, is to create a 4-way mirrored 100G partition
for the OS, and the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nomen Nescio
Hello Bob! Thanks for the reply. I was thinking about going with a 3 way
mirror and a hot spare. But I don't think I can upgrade to larger drives
unless I do it all at once, is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Dave U.Random
If I am going to make a new install of Solaris 10, does it give me the option
to slice and dice my disks and to issue zpool commands?
No way that I know of, to install Solaris
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Carsten John
Now I'm wondering about the best option to replace the HDD with the SSD:
What version of zpool are you running? If it's >= 19, then you could
actually survive a complete ZIL
From: Ross Walker [mailto:rswwal...@gmail.com]
Sent: Friday, June 17, 2011 9:48 PM
The on-disk buffer is there so data is ready when the hard drive head lands;
without it, the drive's average rotational latency will trend higher due to
missed landings because the data wasn't in the buffer at the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Conclusion: Yes it matters to enable the write_cache.
Now the question of whether or not it matters to use the whole disk versus
partitioning, and how to enable
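For reference, the write cache state can be checked and flipped from the
expert menu of format, roughly like this (menu names from memory, so treat it
as a sketch; exact prompts vary by disk type):
  format -e
    (select the disk)
    format> cache
    cache> write_cache
    write_cache> display
    write_cache> enable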
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
Here is my problem:
I have a 1.5TB disk with OpenSolaris (b134, b151a) using non-AHCI.
I then changed to AHCI in BIOS, which results in severe problems: I cannot
boot the
Given the abysmal performance, I have to assume there is a significant
number of overhead reads or writes in order to maintain the DDT for each
actual block write operation. Something I didn't mention in the other
email is that I also tracked iostat throughout the whole operation. It's
all
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
When you read back duplicate data that was previously written with
dedup, then you get a lot more cache hits, and as a result, the reads go
faster. Unfortunately
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Saturday, July 09, 2011 2:33 PM
Could you test with some SSD SLOGs and see how well or bad the system
performs?
These are all async writes, so slog won't be used. Async writes that have a
single fflush() and fsync() at the end
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
--- Performance loss:
I ran one more test that is rather enlightening. I repeated test #2 (tweak
arc_meta_limit, use the default primarycache=all) but this time I wrote
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Saturday, July 09, 2011 3:44 PM
Could you test with some SSD SLOGs and see how well or bad the
system
performs?
These are all async writes, so slog won't be used. Async writes that
have a single fflush() and fsync() at
From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
Sent: Saturday, July 09, 2011 3:44 PM
Sorry, my bad, I meant L2ARC to help buffer the DDT
Also, bear in mind, the L2ARC is only for reads. So it can't help accelerate
writing updates to the DDT. Those updates need to hit the pool,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
By the way, did you estimate how much dedup's overhead is
in terms of metadata blocks? For example, it was often said
on the list that you shouldn't bother with dedup unless you
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Tuesday, July 12, 2011 9:58 AM
You know what? A year ago I would have said dedup still wasn't stable
enough for production. Now I would say it's plenty stable enough...
But it
needs performance enhancement before it's
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
I understand the argument: the DDT must be stored in the primary storage pool
so you can increase the size of the storage pool without running out of space
to hold the DDT
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tiernan OToole
This might be a stupid question, but here goes... Would adding, say, four 4GB
or 8GB USB keys as a ZIL make enough of a difference for writes on an iSCSI
shared vol?
I am
From: Tiernan OToole [mailto:lsmart...@gmail.com]
Sent: Saturday, July 16, 2011 7:46 AM
I have 2 500GB internal drives and 2 300GB USB drives. If I were to create
2 pools, with a 300GB and a 500GB drive in each, and then mirror over them,
would that work? Is it even possible? Or what would you
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, in terms of mirroring over stripes, if any component of any stripe breaks,
the whole half of the mirror is degraded. If another drive from the other half
also breaks,
I recently installed a system, but it seems like the system update process
isn't working right, or else I have wrong expectations. What I really want
is to ensure I have the latest...
It says zpool version 31 and zfs version 5. Can anybody please confirm or
deny that this is the absolute
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Maurice R Volaski
You need to point to the support repository and install a certificate.
/usr/bin/pkg set-publisher -k
/var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem \
-c
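For completeness, the usual full form of that command is something like the
following; the certificate filename and repository URL are my recollection of
the standard Oracle support-repo setup, not quoted from the message above:
  pkg set-publisher \
    -k /var/pkg/ssl/Oracle_Solaris_11_Express_Support.key.pem \
    -c /var/pkg/ssl/Oracle_Solaris_11_Express_Support.certificate.pem \
    -O https://pkg.oracle.com/solaris/support/ solaris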
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
if the OP were so inclined,
he could craft a couple of striped pools (300+500) and
then make a ZFS pool over these two.
Actually, you can't do that. You can't make a vdev from
From: Edward Ned Harvey
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Intuitive. ;-)
Thank you.
Kidding aside, for anyone finding this thread at a later time, here's the
answer. It sounds unnecessarily complex at first, but then I went through
it ... Only took like a minute
From: Edward Ned Harvey
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
It says zpool version 31 and zfs version 5. Can anybody please confirm or
deny that this is the absolute latest version available to the public in
any
way?
After applying all updates, it's still zpool 31
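For anyone wanting to check the same thing on their own box, these are the
commands I'd use (the pool name is just an example):
  zpool upgrade -v | head -3
  zfs upgrade -v | head -3
  zpool get version rpool
The first two show the highest pool and filesystem versions the installed bits
support; the last shows what the pool is actually running.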
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Gordon Ross
Anyone have experience with either one? (good or bad)
Opinions whether the lower capacity and higher cost of
the SSD is justified in terms of performance for things
like
In my new oracle server, sol11exp, it's using multipath device names...
Presently I have two disks attached: (I removed the other 10 disks for now,
because these device names are so confusing. This way I can focus on *just*
the OS disks.)
0. c0t5000C5003424396Bd0
From: Ian Collins [mailto:i...@ianshome.com]
Sent: Saturday, July 23, 2011 4:02 AM
Can you provide more details of your tests?
Here's everything:
http://dl.dropbox.com/u/543241/dedup%20tests/dedup%20tests.zip
In particular:
Under the work server directory.
The basic concept goes like
From: Edward Ned Harvey
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
Disk 0 is the one where the OS is installed. During installation, I
opted to install the OS into a partition. Now I'm trying to replicate
the fdisk partition tables (and partition slice tables) onto
From: Edward Ned Harvey
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com]
sudo zpool attach -f rpool ${firstdisk}s0 ${seconddisk}s0
I assume this is still important too:
sudo installgrub /boot/grub/stage1 /boot/grub/stage2
/dev/rdsk/${seconddisk}s0
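Putting the pieces together, the whole dance looks roughly like this (the
device variables are placeholders, and if the second disk also needs a matching
fdisk partition you'd create that first with fdisk):
  prtvtoc /dev/rdsk/${firstdisk}s2 | fmthard -s - /dev/rdsk/${seconddisk}s2
  zpool attach -f rpool ${firstdisk}s0 ${seconddisk}s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${seconddisk}s0
Then wait for the resilver to finish before testing a boot off the second disk.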
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
So boot is much quicker.
In everyday use, I don't notice anything. Every application starts quickly, and I
don't think about application boot time anymore.
OT but...
At work, we
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
Add to that: if running dedup, get plenty of RAM and cache.
Add plenty of RAM. And tweak your arc_meta_limit. You can at least get dedup
performance that's on the same order of
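For the archives, the knob I mean is the /etc/system tunable; the name below is
as I remember it (double-check it exists in your release) and the value is only
an example that should be sized to your RAM:
  * allow more of the ARC to hold metadata (e.g. the DDT)
  set zfs:zfs_arc_meta_limit = 0x200000000
A reboot is needed for /etc/system changes to take effect.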
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Erik Trimble
Honestly, I think TRIM isn't really useful for anyone.
I'm going to have to disagree.
There are only two times when TRIM isn't useful:
1) Your demand of the system is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
Shouldn't modern SSD controllers be smart enough already that they know:
- if there's a request to overwrite a sector, then the old data on
that sector is no longer needed
After a certain rev, I know you can set the sync property, and it takes
effect immediately, and it's persistent across reboots. But that doesn't
apply to Solaris 10.
My question: Is there any way to make a disabled ZIL the normal mode of
operations in Solaris 10? Particularly:
If I do
From: Darren J Moffat [mailto:darr...@opensolaris.org]
Sent: Friday, August 05, 2011 10:14 AM
echo set zfs:zil_disable = 1 > /etc/system
This is a great way to cure /etc/system viruses :-)
LOL!
:-)
Thank you.
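To spell out the two approaches side by side (the dataset name is made up):
  zfs set sync=disabled tank/scratch
on releases new enough to have the sync property (it takes effect immediately
and persists), versus the old global switch on Solaris 10:
  set zfs:zil_disable = 1
in /etc/system, which needs a reboot and disables the ZIL for every pool on
the system.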
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
On 08/ 6/11 11:48 AM, stuart anderson wrote:
After upgrading to zpool version 29/zfs version 5 on an S10 test system via
the kernel patch 144501-19, it will now boot only as far
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Orvar Korvar
Ok, so mirrors resilver faster.
But it is not uncommon that another disk shows problems during resilver (for
instance r/w errors); this scenario would mean your entire raid is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lanky Doodle
Before I go ahead and continue building my server (zpools), I want to make
sure the above guide is correct for S11E.
You should simply boot from each disk, while the other disk
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I am facing an issue with zfs destroy; it takes almost 3 hours to
delete a snapshot of size 150G.
Do you have dedup enabled?
I have always found, zfs destroy takes some
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
Unfortunately, if dedup was previously enabled, the damage was already
done since dedup is baked into your pool. The situation may improve
over time as dedup blocks are
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you using the Intel 320 as ZIL? It's MLC based, but I
understand its wear and performance characteristics can be bumped up
significantly by increasing the
From: Ian Collins [mailto:i...@ianshome.com]
Sent: Friday, August 12, 2011 11:24 PM
For ZIL, I
suppose we could get the 300GB drive and overcommit to 95%!
What kind of benefit does that offer? I suppose, if you have a 300G
drive
and the OS can only see 30G of it, then the drive can
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
zpool list output shows raw disk usage, including all redundant copies of
metadata, all redundant copies of data blocks, all redundancy accounted for
(mirror, raidz), etc.
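A quick way to see the difference in practice (the pool name is hypothetical):
  zpool list tank
  zfs list tank
zpool list reports SIZE/ALLOC/FREE in raw device space before any redundancy,
while zfs list reports USED/AVAIL as usable space after mirror/raidz overhead.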
From: Ray Van Dolson [mailto:rvandol...@esri.com]
Sent: Monday, August 15, 2011 12:26 PM
On the Intel SSD 320 Series, the spare capacity reserved at the
factory is 7% to 11% (depending on the SKU) of the full NAND
capacity. For better random write performance and endurance, the
From: Richard Elling [mailto:richard.ell...@gmail.com]
What do you expect to happen if you're in the middle of a zfs send, and
then simultaneously do a zfs destroy of the snapshot you're sending?
It depends on the release. For modern implementations, a hold is placed on
the snapshot and
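The user-visible side of that mechanism looks like this, if anyone wants to
poke at it (snapshot and tag names are made up):
  zfs hold mysend tank/data@snap1
  zfs holds tank/data@snap1
  zfs release mysend tank/data@snap1
While the hold is in place, zfs destroy on that snapshot fails instead of
pulling the rug out from under the send.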
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Daniel Carosone
On Sat, Aug 27, 2011 at 08:44:13AM -0700, Richard Elling wrote:
I'm getting a bit tired of people designing for fast resilvering.
It is a design consideration, regardless,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jesus Cea
1. Is the L2ARC data stored in the SSD checksummed? If so, can I
expect that ZFS goes directly to the disk if the checksum is wrong?
Yup.
2. Can I import a POOL if one/both
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Richard Elling
For example, does 'zfs send -D' use the same DDT as the pool?
No.
Or does it require more memory for its own DDT, thus impacting
performance of both?
Yes, no.
How
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Freddie Cash
It will be interesting to see whether or not -D works with ZFSv28 in FreeBSD
8-STABLE/9-BETA. And whether or not zfs send is faster/better/easier/more
reliable than rsyncing
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fred Liu
Due to my carelessness, I added two disks into a raid-z2 zpool as normal data
disks, but in fact
I wanted to make them ZIL devices.
That's a huge bummer, and it's the main reason why
From: Fred Liu [mailto:fred_...@issi.com]
Yeah, I also realized this when I sent out this message. In NetApp, it is so
easy to change the raid group size. There is still a long way for ZFS to go.
I hope I can see that in the future.
This one missing feature of ZFS, IMHO, does not result in a long
From: Krunal Desai [mailto:mov...@gmail.com]
On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu fred_...@issi.com wrote:
Yes. I have connected them back to server. But it does not help.
I am really sad now...
I'll tell you what does not help. This email. Now that you know what you're
trying to
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Raúl Valencia
I must configure a small file server. It only has two disk drives, and
they are
(forcibly) destined to be used in a mirrored, hot-spare configuration.
I think you just
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tony MacDoodle
Original:
  mirror-0  ONLINE  0  0  0
    c1t2d0  ONLINE  0  0  0
    c1t3d0  ONLINE  0  0  0
  mirror-1  ONLINE
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Matt Banks
Am I crazy for putting something like this into production using Solaris
10/11?
On paper, it really seems ideal for our needs.
Do you have an objection to solaris 10/11 for some
From: Richard Elling [mailto:richard.ell...@gmail.com]
Also, the default settings for the resilver throttle are set for HDDs. For SSDs,
it is a good idea to change the throttle to be more aggressive.
You mean...
Be more aggressive, resilver faster?
or Be more aggressive, throttling the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?
In update 10: pool version 29, ZFS version 5.
I don't know what the other
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha
Does anyone know of good commercial ZFS-based storage replication
software that runs on Solaris (i.e. not an appliance, not another OS
based on Solaris)?
Kinda like Amanda,
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of deg...@free.fr
I'm not familiar with ZFS stuff, so I'll try to give you as much info as I can
about our environment.
We are using a ZFS pool as a VLS for a backup server (Sun V445 Solaris
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cindy Swearingen
In the steps below, you're missing a zpool import step.
I would like to see the error message when the zpool import
step fails.
I see him doing this...
# truss -t open
From: casper@oracle.com [mailto:casper@oracle.com]
What is the partition table?
He also said this...
-Original Message-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of John D Groenveld
# zpool create foo c1t0d0
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I guess Richard was correct about the usecase description -
I should detail what I'm thinking about, to give some illustration.
After reading all this, I'm still unclear on what
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
The idea is you would dedicate one of the servers in the chassis to be a
Solaris system, which then presents NFS out to the rest of the hosts.
Actually, I looked into a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
In my example - probably not a completely clustered FS.
A clustered ZFS pool with datasets individually owned by
specific nodes at any given time would suffice for such
VM farms.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
Hmm, of course the *latency* of Ethernet has always been much less, but
I did not see it reaching the *throughput* of a single direct attached
disk until gigabit.
Nobody runs a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Harry Putnam
FreeNAS and freebsd.
Maybe you can give a little synopsis of those too. I mean, when it
comes to utilizing ZFS, is it much the same as if running it on
Solaris?
For somebody
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
I have done a poor man's rebalance by copying data after adding
devices. I know this is not a substitute for a real online rebalance,
but it gets the job done (if you can take the
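One way to do that copy within the pool, sketched with made-up dataset names
(not necessarily how Paul did it):
  zfs snapshot tank/data@move
  zfs send tank/data@move | zfs receive tank/data.new
  zfs rename tank/data tank/data.old
  zfs rename tank/data.new tank/data
  zfs destroy -r tank/data.old
The newly received copy gets written across all the vdevs, including the ones
you just added.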
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
I had and have redundant storage, and it has *NEVER* automatically fixed
it. You're the first person I've heard of who has had it automatically fix
it.
That's probably just because it's
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
On Wed, 19 Oct 2011, Peter Jeremy wrote:
Doesn't a scrub do more than what 'fsck' does?
It does different things. I'm not sure about more.
Zfs scrub validates user
From: Fajar A. Nugraha [mailto:w...@fajar.net]
Sent: Tuesday, October 18, 2011 7:46 PM
* In btrfs, there is no equivalent or alternative to zfs send | zfs
receive
Planned. No actual working implementation yet.
In fact, I saw that actual work started on this task about a month ago. So it's
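For comparison, the zfs pattern being talked about is just (host and dataset
names are made up):
  zfs snapshot tank/data@monday
  zfs send tank/data@monday | ssh backuphost zfs receive backup/data
  zfs snapshot tank/data@tuesday
  zfs send -i @monday tank/data@tuesday | ssh backuphost zfs receive backup/data
The second pair sends only the blocks changed since monday.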
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ian Collins
I just tried sending from an oi151a system to a Solaris 10 backup
server and the server barfed with
zfs_receive: stream is unsupported version 17
I can't find any
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
new CKSUM errors
are being found. There are zero READ or WRITE error counts,
though.
Should we be worried about replacing the ex-hotspare drive
ASAP as well?
You should not be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Watzlavick
What failure scenario could have caused this? The file was obviously
initially good on the raidz because it got backed up to the USB drive
and that matches the good
From: Robert Watzlavick [mailto:rob...@watzlavick.com]
Sent: Sunday, October 23, 2011 4:36 PM
Now on to find out why the 3 Acronis Backup files got modified. This is
good news so far...
I expect you'll find the same thing for Acronis. Acronis updates those
individual files to make them
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Watzlavick
I have two external USB hard drives
that I use to back up the contents of the ZFS raidz on alternating
months. The USB hard drives use EXT3 so they are connected to a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of weiliam.hong
3. All 4 drives are connected to a single HBA, so I assume the mpt_sas driver
is used. Are SAS and SATA drives handled differently?
If they're all on the same HBA, they may be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Mark Wolek
Still kicking around this idea and didn't see it addressed in any of the
threads before the forum closed.
If one made an all-SSD pool, would a log/cache drive just slow you
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
summer, and came up with a new question. In short, is it
possible to add restartability to ZFS SEND, for example
Rather than building something new and special into the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Rossing
On 10/28/2011 01:04 AM, Mark Wolek wrote:
before the forum closed.
Did I miss something?
Yes. The forums no longer exist. It's only mailman email now.
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
I notice that the mail activity has diminished substantially since the
forums were shut down. Apparently they were still in use.
I'm sure nobody thought they were unused. I'm sure it was a cost saving
measure. Jive forums start
From: Daniel Carosone [mailto:d...@geek.com.au]
On Tue, Nov 01, 2011 at 06:17:57PM -0400, Edward Ned Harvey wrote:
You can do both poorly for free, or you can do both very well for big
bucks.
That's what opensolaris was doing.
That mess was costing someone money and considered very well
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lachlan Mulcahy
I have been having issues with Solaris kernel-based systems locking up and
am wondering if anyone else has observed a similar symptom before.
...
Dell R710 / 80G Memory
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Lachlan Mulcahy
* Recommendation from Sun (Oracle) to work around a bug:
* 6958068 - Nehalem deeper C-states cause erratic scheduling
behavior
set idle_cpu_prefer_mwait = 0
set
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of sbre...@hotmail.com
However, snapshots seem to be read-only:
Is there any way to force the file removal?
You need to destroy the snapshot completely. But if you want to selectively
delete
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Didier Rebeix
From the ZFS documentation it appears unclear to me whether a zpool
scrub will blacklist any bad blocks it finds so they won't be used
anymore.
If there are any physically bad
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Evaldas Auryla
I'm trying to evaluate the risks of running an NFS share of a ZFS
dataset with the sync=disabled property. The clients are vmware hosts in our
environment and the server is
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
1) Use a ZFS mirror of two SSDs
- seems too pricey
2) Use an HDD with redundant data (copies=2 or mirroring
over two partitions), and an SSD for L2ARC (+maybe ZIL)
-
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ding Honghui
But now, as shown below, the usage of the first 2 raidz1 vdevs is about 78% and
the usage of the last 2 raidz1 vdevs is about 93%.
In this case, when you write, it should be writing to the first
From: Evaldas Auryla [mailto:evaldas.aur...@edqm.eu]
Sent: Wednesday, November 09, 2011 8:55 AM
I was thinking about the STEC ZeusRAM, but unfortunately it's a SAS-only
device, and it won't go into an X4540 (SATA ports only), so another
option could be the STEC MACH16iops (50GB SLC SATA SSD).
Perhaps
From: Gregg Wonderly [mailto:gregg...@gmail.com]
There is no automatic way to do it.
For me, this is a key issue. If there were an automatic rebalancing mechanism,
that same mechanism would work perfectly to allow pools to have disk sets
removed. It would provide the needed basic
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit
Also, not a good idea for
performance to partition the disks as you suggest.
Not totally true. By default, if you partition the disks, then the disk write
cache gets disabled.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1TB hard disks
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis
I just want to say, this isn't
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue
Why would you want your root pool to be on the SSD? Do you expect an
extremely high I/O rate for the OS disks? Also, not a good idea for
performance to partition the disks as you