but it doesn't seem to change the behaviour
Again - I'm looking for thoughts here - as I have only really just
started looking into this. Should I happen across anything interesting,
I'll follow up on this post.
Cheers,
Nathan. :)
___
zfs-discuss mailing list
* configuration, as I have just the one SSD, but I'll
persist and see what I can get out of it.
Thanks for the thoughts thus far!
Cheers,
Nathan.
On 21/11/2012 8:33 AM, Fajar A. Nugraha wrote:
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris
the current
ones...)
I might just have to bite the bullet and try something with current SW. :).
Nathan.
On 05/29/12 08:54 PM, John Martin wrote:
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB
On 29/05/2012 11:10 PM, Jim Klimov wrote:
2012-05-29 16:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4K) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the Seagate Green Barracudas, IIRC
called 'advanced format'
drives (which, as far as I can tell, are in no way actually advanced, and
only benefit HDD makers, not the end user).
Cheers!
Nathan.
Jim Klimov wrote:
It is hard enough already to justify to an average wife that...snip
That made my night. Thanks, Jim. :)
On 03/20/12 10:29 PM, Jim Klimov wrote:
2012-03-18 23:47, Richard Elling wrote:
...
Yes, it is wrong to think that.
Ok, thanks, we won't try that :)
copy out,
the stack.
Hope this helps somewhat. Let us know how you go.
Cheers!
Nathan.
On 02/01/12 04:52 AM, Mohammed Naser wrote:
Hi list!
I have seen less-than-stellar ZFS performance on a setup of one main
head connected to a JBOD (using SAS, but drives are SATA). There are
16 drives (8 mirrors
Do note that, though Frank is correct, you have to be a little careful
about what might happen should you drop your original disk and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached
considering something different ;)
Cheers!
Nathan.
On 12/19/11 09:05 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugrahaw...@fajar.net wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since
sequentially), I'd have thought it should be a lot faster than 12x.
Can we really only pull stuff from cache at only a little over one
gigabyte per second if it's dedup data?
Cheers!
Nathan.
to claim more available space for the same device, and to be lazy
in the CRC generation/checking arena. And to profoundly impact the time
it takes to read or update anything less than 4K. But - then again,
maybe I'm missing something.
Cheers!
Nathan
this helps at least a little.
Cheers,
Nathan.
On 06/14/11 03:20 PM, Maximilian Sarte wrote:
Hi,
I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2TB WD EARS HDDs.
All was doing OK until I decided to up the RAM to 4GB since
Hi Karl,
Is there any chance at all that some other system is writing to the
drives in this pool? You say other things are writing to the same JBOD...
Given that the amount flagged as corrupt is so small, I'd imagine not,
but thought I'd ask the question anyways.
Cheers!
Nathan.
On 04
when you disable the disk cache.
Nathan.
On 8/03/2011 11:53 PM, Edward Ned Harvey wrote:
From: Jim Dunham [mailto:james.dun...@oracle.com]
ZFS only uses system RAM for read caching,
If your email address didn't say oracle, I'd just simply come out and say
you're crazy, but I'm trying to keep
something administratively silly... ;)
Nathan.
On 7/03/2011 12:14 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
We're heading into the 3rd hour of the zpool destroy on others.
The system isn't locked up
Actually, I find that tremendously encouraging. Lots of internal
Oracle folks still subscribed to the list!
Much better than none... ;)
Nathan.
On 02/26/11 03:29 PM, Yaverot wrote:
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they're out
pretty much flat out on a PCI-X 133
3124-based card. (Note that there was a PCI and a PCI-X version of the
3124, so watch out.)
Cheers!
Nathan.
On 02/24/11 02:10 AM, Andrew Gabriel wrote:
Krunal Desai wrote:
On Wed, Feb 23, 2011 at 8:38 AM, Mauricio Tavares
raubvo...@gmail.com wrote:
I
- using eSATA.
Note: All of this is with the 'cheap' view... You can most certainly buy
much better hardware... But bang for buck - I have been happy with the
above.
Cheers!
Nathan.
On 02/26/11 01:58 PM, Brandon High wrote:
On Fri, Feb 25, 2011 at 4:34 PM, Rich Teerrich.t...@rite-group.com
zfs_vdev_max_pending...
Nonetheless, I'm now at a far more balanced point than when I started,
so that's a good thing. :)
Cheers,
Nathan.
On 15/02/2011 6:44 AM, Richard Elling wrote:
Hi Nathan,
comments below...
On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote:
On 14/02/2011 4:31 AM, Richard
observing is not great...
I'm also happy to supply lockstats / dtrace output etc if it'll help.
Thoughts?
Cheers!
Nathan.
RAID controller is actually *slowing* my reads and writes to
disk! ;)
Cheers!
Nathan.
On 14/02/2011 4:08 AM, gon...@comcast.net wrote:
Hi Nathan,
Maybe it is buried somewhere in your email, but I did not see what
zfs version you are using.
This is rather important, because the 145
On 14/02/2011 4:31 AM, Richard Elling wrote:
On Feb 13, 2011, at 12:56 AM, Nathan Kroenertnat...@tuneunix.com wrote:
Hi all,
Exec summary: I have a situation where I'm seeing lots of large reads starving
writes from being able to get through to disk.
snip
What is the average service time
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Is there a way except for buying enterprise (RAID specific) drives for a array
to use normal drives?
Does anyone have any success stories regarding a particular model?
The TLER cannot be edited on newer drives from Western Digital
Sorry, I probably didn't make myself exactly clear.
Basically, drives without particular TLER settings drop out of RAID randomly.
* Error Recovery - This is called various things by various manufacturers
(TLER, ERC, CCTL). In a Desktop drive, the goal is to do everything possible to
recover the
http://www.stringliterals.com/?p=77
This guy talks about it too under Hard Drives.
--
This message posted from opensolaris.org
While I am about to embark on building a home NAS box using OpenSolaris with
ZFS.
Currently I have a chassis that will hold 16 hard drives, although not in
caddies - down time doesn't bother me if I need to switch a drive, probably
could do it running anyways just a bit of a pain. :)
I am
What is the best way to use an external HDD for initial replication of a large
ZFS filesystem?
System1 had filesystem; System2 needs to have a copy of filesystem.
Used send/recv on System1 to put filesys...@snap1 on connected external HDD.
Exported external HDD pool and connected/imported on
I figured out what I did wrong. The filesystem as received on the external HDD
had multiple snapshots, but I failed to check for them. So I had created a
snapshot in order to send/recv on System2. That doesn't work, obviously.
A new local send/recv of the filesystem's correct snapshot did the trick.
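For anyone following along, the round trip described above can be sketched roughly as below. The pool and dataset names (tank/data, extpool) are made up for illustration and were not in the original posts, and it obviously needs live ZFS pools to run, so treat it as an outline rather than a tested recipe:

```shell
# On System1: snapshot, then seed a pool on the external HDD.
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | zfs recv extpool/data
zpool export extpool              # cleanly detach the external HDD

# Move the HDD to System2, then:
zpool import extpool
# Send the snapshot that arrived on the external pool. Taking a *new*
# snapshot of the received copy and sending that is the mistake described
# above - it breaks the chain back to System1's snapshots.
zfs send extpool/data@snap1 | zfs recv tank/data

# Later incrementals can then go over the wire directly, e.g.:
#   zfs send -i @snap1 tank/data@snap2 | ssh system2 zfs recv tank/data
```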
I have not carried out any research into this area, but when I was
building my home server I wanted to use a Promise SATA-PCI card, but
alas (Open)Solaris has no support at all for the Promise chipsets.
Instead I used a rather old card based on the sil3124 chipset.
On Mon, Aug 3, 2009 at 9:35
this before. What you have to do is ask James each question 3
times and on the third time he will tell the truth. ;)
I know it's not in the preview of 2010.2 (build 118).
On a serious note, James - do you know the status of the presentation recording
on ZFS deduplication?
Many thanks,
Nathan
,
Nathan
Yes, please write more about this. The photos are terrific and I
appreciate the many useful observations you've made. For my home NAS I
chose the Chenbro ES34069 and the biggest problem was finding a
SATA/PCI card that would work with OpenSolaris and fit in the case
(technically impossible without
I'll maintain hope for seeing/hearing the presentation until you guys announce
that you had NASA store the tape for safe-keeping.
Bump'd.
Regarding the SATA card and the mainboard slots, make sure that
whatever you get is compatible with the OS. In my case I chose
OpenSolaris which lacks support for Promise SATA cards. As a result,
my choices were very limited since I had chosen a Chenbro ES34069 case
and Intel Little Falls 2
Score one more for ZFS! This box has a measly 300GB mirrored, and I have
already seen dud data. (heh... It's also got non-ECC memory... ;)
Cheers!
Nathan.
Dennis Clarke wrote:
On Tue, 24 Mar 2009, Dennis Clarke wrote:
You would think so eh?
But a transient problem that only occurs after
definitely time to bust out some mdb -k and see what it's moaning about.
I did not see the screenshot earlier... sorry about that.
Nathan.
Blake wrote:
I start the cp, and then, with prstat -a, watch the cpu load for the
cp process climb to 25% on a 4-core machine.
Load, measured for example
definitely time to bust out some mdb -K or boot -k and see what it's
moaning about.
I did not see the screenshot earlier... sorry about that.
Nathan.
Blake wrote:
I start the cp, and then, with prstat -a, watch the cpu load for the
cp process climb to 25% on a 4-core machine.
Load, measured
!
Nathan.
On 13/03/09 09:21 AM, Dave wrote:
Tim wrote:
On Thu, Mar 12, 2009 at 2:22 PM, Blake blake.ir...@gmail.com
mailto:blake.ir...@gmail.com wrote:
I've managed to get the data transfer to work by rearranging my disks
so that all of them sit on the integrated SATA controller
your memory, and your
physical backing storage is taking a while to catch up?
Nathan.
Blake wrote:
My dump device is already on a different controller - the motherboards
built-in nVidia SATA controller.
The raidz2 vdev is the one I'm having trouble with (copying the same
files
ZIL and L2ARC would be interesting, though, given the propensity for
SSD's to be either fast read or fast write at the moment, you may well
require some whacky knobs to get it to do what you actually want it to...
hm.
Nathan.
Bill Sommerfeld wrote:
On Wed, 2009-03-04 at 12:49 -0800, Richard
to one minute to undo. That will catch 80% of the mistakes?
device...
Seems a little pricey for what it is though.
It's going onto my list of what I'd buy if I had the money... ;)
Nathan.
On 01/30/09 12:10, Janåke Rönnblom wrote:
ACARD have launched a new RAM disk which can take up to 64 GB of ECC RAM
while still looking like a standard SATA drive
You could be the first...
Man up! ;)
Nathan.
Will Murnane wrote:
On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert nathan.kroen...@sun.com
wrote:
Seems a little pricey for what it is though.
For what it's worth, there's also a 9010B model that has only one sata
port and room for six dimms
, but it would have its merits...
Cheers!
Nathan.
Jacob Ritorto wrote:
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
to test the *actual* disk performance, you should just
use the underlying disk device like /dev/rdsk/c0t0d0s0
Beware, however, that any writes to these devices will indeed result in
the loss of the data on those devices, zpools or other.
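To make that warning concrete, a hedged sketch: sequential *reads* from a raw device are harmless, while writes destroy whatever is on it, so the example below points dd at a scratch file instead of a real /dev/rdsk path. The file path and sizes are illustrative only:

```shell
# Stand-in for a raw device like /dev/rdsk/c0t0d0s0. Substitute the real
# device only for READ tests; writing to it would destroy any zpool on it.
DEV=/tmp/scratch.img
dd if=/dev/zero of="$DEV" bs=1024k count=64 2>/dev/null   # 64MB scratch "disk"

# Sequential read pass; dd's summary line (on stderr) reports throughput.
dd if="$DEV" of=/dev/null bs=1024k
```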
Cheers.
Nathan.
Richard Elling wrote:
Rob Brown wrote
--
//
// Nathan Kroenert nathan.kroen...@sun.com //
// Systems Engineer Phone: +61 3 9869-6255 //
// Sun Microsystems Fax:+61 3 9869-6288 //
// Level 7, 476 St. Kilda Road
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience
any issues.
Nathan.
Brandon High wrote:
On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
Several people reported this same problem
Interesting. I'll have a poke...
Thanks!
Nathan.
Brandon High wrote:
On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
nathan.kroen...@sun.com wrote:
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience any
issues.
There's a lot
An interesting interpretation of using hot spares.
Could it be that the hot-spare code only fires if the disk goes down
whilst the pool is active?
hm.
Nathan.
Scot Ballard wrote:
I have configured a test system with a mirrored rpool and one hot spare.
I powered the systems off, pulled one
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
I'm not sure how you can class it a ZFS fail when the Disk subsystem has
failed...
Or - did I miss something? :)
Nathan.
Tom Bird wrote:
Morning,
For those of you who
...
It would be interesting to see if you see the same issues using a
Solaris or other OS client.
Hope this helps somewhat. Let us know how it goes.
Nathan.
fredrick phol wrote:
I'm currently experiencing exactly the same problem and it's been driving me
nuts. Tried OpenSolaris and am currently
compression, which
might, on the slower Atom style chips, get in the way.
Looking forward to any reports.
Nathan.
On 13/01/09 01:47 PM, JZ wrote:
ok, was I too harsh on the list?
sorry folks, as I said, I have the biggest ego.
no one can hurt that by trying to fight me, but yes, it can be hurt
I have moved the zpool image file to an OpenSolaris machine running 101b.
root@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris
Here I am able to attempt an import of the pool and at least the OS does not
panic.
root@opensolaris:~# zpool import -d /mnt
pool:
I don't know if this is relevant or merely a coincidence but the zdb command
fails an assertion in the same txg_wait_synced function.
root@opensolaris:~# zdb -p /mnt -e zones
Assertion failed: tx->tx_threads == 2, file ../../../uts/common/fs/zfs/txg.c,
line 423, function txg_wait_synced
Abort
Thanks for the reply. I tried the following:
$ zpool import -o failmode=continue -d /mnt -f zones
But the situation did not improve. It still hangs on the import.
I've had some success.
I started with the ZFS on-disk format PDF.
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
The uberblocks all have magic value 0x00bab10c. Used od -x to find that value
in the vdev.
root@opensolaris:~# od -A x -x /mnt/zpool.zones | grep 'b10c 00ba'
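As a sanity check of that od incantation, here's a tiny synthetic demonstration. It assumes a little-endian machine (like the i86pc box in the thread), where od -x groups bytes into host-order 16-bit words, so the magic 0x00bab10c prints as "b10c 00ba"; the /tmp/fake_vdev file name is made up:

```shell
# Write the 4 magic bytes of a ZFS uberblock, little-endian, using
# portable octal escapes: 0x0c 0xb1 0xba 0x00.
printf '\014\261\272\000' > /tmp/fake_vdev

# On a little-endian host the magic shows up as the words "b10c 00ba".
od -A x -x /tmp/fake_vdev | grep 'b10c 00ba'
```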
I have a ZFS pool that has been corrupted. The pool contains a single device
which was actually a file on UFS. The machine was accidentally halted and now
the pool is corrupt. There are (of course) no backups and I've been asked to
recover the pool. The system panics when trying to do anything
once every month or so, depending on the system.
So, in direct answer to your question, No - You don't *need* to scrub.
But - It's better if you do. ;)
My 2c.
Nathan.
On 10/11/08 11:38 AM, Douglas Walker wrote:
Hi,
I'm running a 3Tb RAIDZ2 array and was wondering about the zfs scrub
A quick google shows that it's not so much about the mirror, but the BE...
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/
Might help?
Nathan.
On 7/11/08 02:39 PM, Krzys wrote:
What am I doing wrong? I have sparc V210 and I am having difficulty with boot
-L, I was under
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
Nathan.
Jens Elkner wrote:
On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
Hi
are available in that current zfs / zpool version...
That way, you would never need to do anything to bash/zfs once it was
done the first time... do it once, and as ZFS changes, the prompts
change automatically...
Or - is this old hat, and how we do it already? :)
Nathan.
On 10/10/08 05:06 PM, Boyd
the utilities
that work on and in it to perform at a reasonable rate... which for the
most part is around the 100K files or less...
Perhaps you are using larger hardware than I am for some of this stuff? :)
Nathan.
On 1/10/08 07:29 AM, Toby Thain wrote:
On 30-Sep-08, at 7:50 AM, Ram Sharma
I second that question, and also ask what brand folks like for
performance and compatibility?
eBay is killing me with vast choice and no detail... ;)
Nathan.
Al Hopper wrote:
On Wed, Aug 20, 2008 at 12:57 PM, Neal Pollack [EMAIL PROTECTED] wrote:
Ian Collins wrote:
Brian Hechinger wrote
It starts with Z, which makes it the one of the last to be considered if
it's listed alphabetically?
Nathan.
Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
of hassle getting it working, but in
the ZFS space, it works great pretty much out of the box (plus ethernet
address change if the nvidia driver is still busted... ;)
Cheers!
Nathan.
*Going like stink means going like a hairy goat - like lightning - like
s*it off a shovel - like a zyrtec - fast
experiment :) so I'll be watching this thread with renewed interest to
see who else is doing what...
Nathan.
Bob Friesenhahn wrote:
On Thu, 17 Jul 2008, Ben Rockwood wrote:
zfs list is mighty slow on systems with a large number of objects,
but there is no foreseeable plan that I'm aware
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
Nathan.
Charles Soto wrote:
A really smart nexus for dedup is right when
...
Awesome. Now to work on audio...
heh.
Nathan.
Nathan Kroenert wrote:
Hey all -
Just spent quite some time trying to work out why my 2 disk mirrored ZFS
pool was running so slow, and found an interesting answer...
System: new Gigabyte M750sli-DS4, AMD 9550, 4GB memory and 2 X Seagate
. With 4 cores @ 2.2GHz (Phenom 9550) it's looking
like it'll do what I wanted quite nicely.
Later...
Nathan.
but cannot get 'em in Australia any more... :)
Cheers!
Nathan.
a couple of 'better' USB hubs (Mine
are pretty much the cheapest I could buy) and see how that goes.
For gags, take ZFS out of the equation and validate that your hardware
is actually providing a stable platform for ZFS... Mine wasn't...
Nathan.
Evan Geller wrote:
So, I've been stuck in kind
to ring true.
Not at all sure about SAS.
If I'm wrong here, hopefully someone else will provide the complete set
of logic for determining cache enabling semantics.
:)
Nathan.
Brian Hechinger wrote:
On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
The FAQ document (
http
know many don't consider
this an issue these days, but I'd still be inclined to keep /var (and
especially /var/tmp) separated from /
In ZFS, this is, of course, just two filesystems in the same pool, with
differing quotas...
:)
Nathan.
Rich Teer wrote:
On Wed, 4 Jun 2008, Bob Friesenhahn wrote
, when you get the chance, deliberately panic the box to
make sure you can actually capture a dump...
dumpadm is your friend as far as checking where you are going to dump
to, and if it's one side of your swap mirror, that's bad, M'Kay?
:)
Nathan.
Jorgen Lundman wrote:
OK, this is a pretty damn
.
Does any of this make you feel any better (or worse)?
Nathan.
Mark A. Carlson wrote:
fmd(1M) can log faults to syslogd that are already diagnosed. Why
would you want the random spew as well?
-- mark
Carson Gaspar wrote:
[EMAIL PROTECTED] wrote:
It's not safe to jump to this conclusion
caffeine... :)
Nathan
Vic Engle wrote:
I'm hoping someone can help me understand a zfs data corruption symptom. We
have a zpool with checksum turned off. Zpool status shows that data
corruption occurred. The application using the pool at the time reported a
read error and zpool status (see below
. :)
Nathan.
Nicolas Williams wrote:
On Wed, Apr 09, 2008 at 11:38:03PM -0400, Jignesh K. Shah wrote:
Can zfs send utilize multiple-streams of data transmission (or some sort
of multipleness)?
Interesting read for background
http://people.planetpostgresql.org/xzilla/index.php?/archives/338-guid.html
Did you do anything specific with the drive caches?
How is your ZFS performance?
Nathan. :)
Rich Teer wrote:
On Wed, 19 Mar 2008, Terence Ng wrote:
I am new to Solaris. I have Sun X2100 with 2 x 80G harddisks (run as
email server, run tomcat, jboss and postgresql) and want to run
sort of issues.
An external 500GB disk + external USB enclosure runs for what - $150?
That's what I use anyways. :)
Nathan.
Paul Kraus wrote:
On Thu, Mar 6, 2008 at 10:22 AM, Brian D. Horn [EMAIL PROTECTED] wrote:
ZFS is not 32-bit safe. There are a number of places in the ZFS code where
Bob Friesenhahn wrote:
On Tue, 4 Mar 2008, Nathan Kroenert wrote:
It does seem that some of us are getting a little caught up in disks
and their magnificence in what they write to the platter and read
back, and overlooking the potential value of a simple (though
potentially
going to shut up now. I think I have done this to death, and I
don't want to end up in everyone's kill filter.
Cheers!
Nathan.
Bob Friesenhahn wrote:
On Tue, 4 Mar 2008, Nathan Kroenert wrote:
The circus trick can be handled via a user-contributed utility. In fact,
people can compete
Hm -
Based on this detail from the page:
Change lever for switching between Rotation
+ Hammering, Neutral and Hammering only
I'd hope it could still hammer... Though I'd suspect the size of nails
it would hammer would be somewhat limited... ;)
Nathan.
Boyd Adamson wrote:
Richard Elling
minute
snapshots.
Nathan.
Uwe Dippel wrote:
atomic view?
Your post was on the gory details of how ZFS writes. Atomic view here is,
that 'save' of a file is an 'atomic' operation: at one moment in time you
click 'save', and some other moment in time it is done. It means indivisible
?
Nathan.
Nicolas Williams wrote:
On Tue, Feb 26, 2008 at 06:34:04PM -0800, Uwe Dippel wrote:
The rub is this: how do you know when a file edit/modify has completed?
Not to me, I'm sorry, this is task of the engineer, the implementer.
(See 'atomic', as above.) It would be a shame if a file
for it, just in case it's one for which
there was a known problem. (which was worked around in the driver)
I *think* there was an issue with at least one or two...
Cheers!
Nathan.
Sandro wrote:
hi folks
I've been running my fileserver at home with linux for a couple of years and
last week I
And would drive storage requirements through the roof!!
I like it!
;)
Nathan.
Jonathan Loran wrote:
David Magda wrote:
On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
In some circles, CDP is big business. It would be a great ZFS offering.
ZFS doesn't have it built-in, but AVS made
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
Mattias Pantzare wrote:
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with 128K recordsize, then it'll
keep that
files are updated as well...
hm.
Cheers!
Nathan.
Richard Elling wrote:
Nathan Kroenert wrote:
And something I was told only recently - It makes a difference if you
created the file *before* you set the recordsize property.
Actually, it has always been true for RAID-0, RAID-5, RAID-6
I understand correctly.
Hopefully someone else on the list will be able to confirm.
Cheers!
Nathan.
Richard Elling wrote:
Anton B. Rang wrote:
Create a pool [ ... ]
Write a 100GB file to the filesystem [ ... ]
Run I/O against that file, doing 100% random writes with an 8K block size
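Those steps can be mimicked at toy scale with dd (file size scaled down from 100GB to 16MB so it runs anywhere; the block count, loop length, and file name are arbitrary). This is only a shape-of-the-workload sketch, not a real benchmark:

```shell
# Preallocate the target file (2048 blocks of 8K = 16MB).
F=/tmp/bench.dat
dd if=/dev/zero of="$F" bs=8k count=2048 2>/dev/null

# 100% random 8K writes: pick a block, overwrite it in place.
# conv=notrunc keeps dd from truncating the file on each write.
i=0
while [ $i -lt 32 ]; do
  off=$(( ${RANDOM:-$i} % 2048 ))        # random block if the shell has $RANDOM
  dd if=/dev/urandom of="$F" bs=8k count=1 seek=$off conv=notrunc 2>/dev/null
  i=$((i + 1))
done
```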
. If anyone has
ideas on specific incantations I should use or some specific D or
anything else, I'd be most appreciative.
Cheers!
Nathan.
.
(A single thread of an N2 is only so fast... Just think of what you
could do with 64 of them ;)
I'll be interested to see what the others have to say. :)
Hope this helps.
Nathan.
Michael Stalnaker wrote:
We're looking at building out several ZFS servers, and are considering an
x86 platform vs
Any chance the disks are being powered down, and you are waiting for
them to power back up?
Nathan. :)
Neal Pollack wrote:
I'm running Nevada build 81 on x86 on an Ultra 40.
# uname -a
SunOS zbit 5.11 snv_81 i86pc i386 i86pc
Memory size: 8191 Megabytes
I started with this zfs pool many
I see a business opportunity for someone...
Backups for the masses... of Unix / VMS and other OS/s out there.
any takers? :)
Nathan.
Jonathan Loran wrote:
eric kustarz wrote:
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
www.mozy.com appears to have unlimited backups for 4.95
format -e
then from there, re-label using SMI label, versus EFI.
Cheers
Al Slater wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
What is the quickest way of clearing the label information on a disk
that has been previously used in a zpool?
regards
- --
Al Slater
of these would require non-sparse file creation for the
DB etc, but would it be plausible?
For very read intensive and position sensitive applications, I guess
this sort of capability might make a difference?
Just some stabs in the dark...
Cheers!
Nathan.
Louwtjie Burger wrote:
Hi
After a clean
on
occasion...
Maybe it's not just me... Unfortunately, I'm still running old nv and
xen bits, so I can't speak to the 'current' situation...
Cheers.
Nathan.
Martin wrote:
Hello
I've got Solaris Express Community Edition build 75 (75a) installed on an
Asus P5K-E/WiFI-AP (ip35/ICH9R based
Hey all -
Time for my silly question of the day, and before I bust out vi and
dtrace...
If there a simple, existing way I can observe the read / write / IOPS on
a per-zvol basis?
If not, is there interest in having one?
Cheers!
Nathan.
pool...
I'm ok(ish) with the panic on a failed write to a non-redundant storage.
I expect it by now...
Cheers!
Nathan.
Victor Engle wrote:
Wouldn't this be the known feature where a write error to zfs forces a panic?
Vic
On 10/4/07, Ben Rockwood [EMAIL PROTECTED] wrote:
Dick Davies
Erik -
Thanks for that, but I know the pool is corrupted - That was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
Client A
- import pool make
step. :)
Cheers.
Nathan.
Eric Schrock wrote:
On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
Erik -
Thanks for that, but I know the pool is corrupted - That was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import