Not X86?
:(
(Yes - I know there are lots of other things that need to happen first,
but :( nonetheless... )
Nathan.
On Wed, 2006-05-31 at 01:51, Lori Alt wrote:
Roland Mainz wrote:
Hi!
It is our intention to support system suspend on SPARC
when booted off a zfs root file system.
And, this is a worst case, no?
If the device itself also does some funky stuff under the covers, and
ZFS only writes an update if there is *actually* something to write,
then it could be much much longer than 4 years.
Actually - That's an interesting point. I assume ZFS only writes something when
Just some random thoughts on this...
One of the initial design criteria of ZFS is that it's simple. If it's
not, that was a bug...
If we need tutorials to use the zfs commands, has something missed the
mark?
If the information that is needed to do the work is NOT in the man
pages, perhaps we
Hey all -
Was playing a little with zfs today and noticed that when untarring a
2.5gb archive both from and onto the same spindle in my laptop, the
bytes read and written over time were seesawing between approximately
23MB/s and 0MB/s.
It seemed like we read and read and read
Jeff -
That sounds like a great idea...
Another idea might be to have zpool create announce the 'availability'
of any given configuration, and output the single points of failure.
# zpool create mypool a b c
NOTICE: This pool has no redundancy.
Without hardware
Something I often do when I'm a little suspicious of this sort of
activity is to run something that steals vast quantities of memory...
eg: something like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/* truncated tail reconstructed as a sketch: grab 100MB chunks until malloc fails */
int main(void)
{
    int memsize = 0;    /* MB allocated so far */
    char *memory;
    while ((memory = malloc(100 * 1024 * 1024)) != NULL) {
        memset(memory, 1, 100 * 1024 * 1024);  /* touch it so it's really taken */
        printf("%d MB\n", memsize += 100);
    }
    return (0);
}
Hey, Bob -
It might be worth exploring where your data stream for the writes was
coming from. Moreover, it might be worth exploring how fast it was
filling up caches for writing.
Were you delivering enough data to keep the disks busy 100% of the time?
I have been tricked by this before... :)
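A quick way to sanity-check that (device names will differ, of course) is
to watch the busy column while the test runs:
# iostat -xnz 1
If %b sits well under 100 for your data disks, the disks weren't the
bottleneck.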
I might be wrong here, but I think it's telling you that there are no
errors.
Something like:
errors: none
or
errors: None that we know of, but we'll let you know if there are any.
At least that is how I'd read it.
:)
Do you have an actual problem other than the text?
Nathan.
On Tue,
I'll take a crack at this.
First off, I'm assuming that the RAID you are talking about is provided
by the hardware and not by ZFS.
If that's the case, then it will depend on the way you created the raid
set, the bios of the controller, and whether or not these two things
match up with any
On Thu, 2006-11-09 at 10:21, Richard Elling - PAE wrote:
One way to populate
an ABE is to mirror slices. However, you cannot mirror between a
device that starts at cylinder 0 and one that does not.
Where is this restriction documented? It doesn't make sense to me.
Maybe you have a
For me, it came down to - Do I want to patch, or upgrade?
My gateway to the internet is a solaris 10 box, patched whenever
required. I like that as soon as a security patch is available, I can
apply it and reboot. Simple.
My laptop runs nevada. I upgrade from network / dvd when I see a new
Hm.
If the system is hung, it's unlikely that a reboot -d will help.
You want to be booting into kmdb, then using the F1-A interrupt sequence,
then dumping using $<systemdump at the kmdb prompt.
See the following documents:
Index of lots of useful stuff:
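For reference, a rough sketch of the sequence (x86 with GRUB assumed here;
on SPARC you'd boot -k at the OBP and use STOP-A instead of F1-A):
kernel /platform/i86pc/multiboot -k      (load kmdb at boot)
F1-A                                     (drop into kmdb once the box is hung)
[0]> $<systemdump                        (force the dump)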
Hm. If the disk has no label, why would it have an s0?
Or, did you mean p0?
Nathan.
On Wed, 2006-12-06 at 04:45, Krzys wrote:
Does not work :(
dd if=/dev/zero of=/dev/rdsk/c3t6d0s0 bs=1024k count=1024
dd: opening `/dev/rdsk/c3t6d0s0': I/O error
That is so strange... it seems like I
On a recent journey of pain and frustration, I had to recover a UFS
filesystem from a broken disk. The disk had many bad blocks and more
were going bad over time. Sadly, there were just a few files that I
wanted, but I could not mount the disk without it killing my system.
(PATA disks... PITA
Random thoughts:
If we were to use some intelligence in the design, we could perhaps have
a monitor that profiles the workload on the system (a pool, for example)
over a [week|month|whatever] and selects a point in time, based on
history, that it would expect the disks to be quiet, and can
Urk!
Where is this documented? And - is it something you can do nothing
about, or are we ultimately trying to address it somewhere / somehow?
Thanks!!
Nathan.
Bill Moore wrote:
On Wed, Jan 31, 2007 at 05:01:19AM -0800, Tom Buskey wrote:
As a followup, the system I'm trying to use this on
begin crackly, broken record :)
I, for one, would love to have functionality similar to what we had in
good old netware, where we could 'salvage' deleted files.
The concept was that when the files were deleted, they were not actually
removed, nor were the all-important references to the files
I'd usually agree with that, but - if we have an opportunity to make
users love ZFS even more, why not at least investigate it.
A perfect example might be exactly what I did on one occasion, where I
copied a bunch of photos off a CF card. I then reformatted the CF card,
and cleaned up the
Simple test - mkfile 8gb now and see where the data goes... :)
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some
Some time ago I encountered issues using the odd numbered ports on my
SIL3114 based card.
I currently use ports 0 and 2 without issue.
I never did get ports 1 and 3 working...
If I have a disk connected to ports 1 or 3, it just conks out on the way
up when it's initializing the disks.
Hey all -
Just saw something really weird.
I have been playing with my box for a little while now, and just noticed
something whilst checking how fast / slow my IDE ports were on a newish
motherboard...
I had been copying around an image. Not a particularly large one - 500M
ISO...
I had
For what it's worth, I bought a Gigabyte GA-M57SLI-S4 a couple of months
ago and it rocks on a reasonably current Nevada.
Certainly not the cheapest or most expensive, but I felt a good choice
for multiple PCI-E slots and a couple of PCI slots.
And if there is a rubbish file somewhere, I *think* you should be able
to cat /dev/null > thatfile
Which would free up its blocks.
Assuming you don't have snapshots... ;)
Nathan.
Anton B. Rang wrote:
At least three alternatives --
1. If you don't have the latest patches installed, apply
Some people are just dumb. Take me, for instance... :)
Was just looking into ZFS on iscsi and doing some painful and unnatural
things to my boxes and dropped a panic I was not expecting.
Here is what I did.
Server: (S10_u4 sparc)
- zpool create usb /dev/dsk/c4t0d0s0
(on a 4gb USB stick,
wrote:
On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote:
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice] assertion
failed: dmu_read
Erik -
Thanks for that, but I know the pool is corrupted - That was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
Client A
- import pool make
step. :)
Cheers.
Nathan.
Eric Schrock wrote:
On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
Erik -
Thanks for that, but I know the pool is corrupted - That was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
Hey all -
Time for my silly question of the day, and before I bust out vi and
dtrace...
Is there a simple, existing way I can observe the read / write / IOPS on
a per-zvol basis?
If not, is there interest in having one?
Cheers!
Nathan.
I observed something like this a while ago, but assumed it was something
I did. (It usually is... ;)
Tell me - If you watch with an iostat -x 1, do you see bursts of I/O
then periods of nothing, or just a slow stream of data?
I was seeing intermittent stoppages in I/O, with bursts of data on
This question triggered some silly questions in my mind:
Lots of folks are determined that the whole COW-to-different-locations
thing is a Bad Thing(tm), and in some cases, I guess it might actually be...
What if ZFS had a pool / filesystem property that caused zfs to do a
journaled, but non-COW
format -e
then from there, re-label using SMI label, versus EFI.
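From memory, the interaction looks something like this (menu text may
differ between releases):
# format -e          (select the disk first)
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0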
Cheers
Al Slater wrote:
Hi,
What is the quickest way of clearing the label information on a disk
that has been previously used in a zpool?
regards
Al Slater
I see a business opportunity for someone...
Backups for the masses... of Unix / VMS and other OSes out there.
any takers? :)
Nathan.
Jonathan Loran wrote:
eric kustarz wrote:
On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
www.mozy.com appears to have unlimited backups for 4.95 a
Any chance the disks are being powered down, and you are waiting for
them to power back up?
Nathan. :)
Neal Pollack wrote:
I'm running Nevada build 81 on x86 on an Ultra 40.
# uname -a
SunOS zbit 5.11 snv_81 i86pc i386 i86pc
Memory size: 8191 Megabytes
I started with this zfs pool many
For what it's worth, I configured a T5220 this week with a 6 disk, three
mirror zpool. (three top level mirror vdevs...).
Used only internal disks...
When pushing to disk, I was seeing bursts of 70 odd MB/s per spindle,
with all 6 spindles making the 70MB/s, so 350MB/s ish.
Read performance
Hey all -
I'm working on an interesting issue where I'm seeing ZFS being quite
cranky about writing O_SYNC written blocks.
Bottom line is that I have a small test case that does essentially this:
open file for writing -- O_SYNC
loop(
write() 8KB of random data
print time taken
And something I was told only recently - It makes a difference if you
created the file *before* you set the recordsize property.
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with 128K recordsize, then it'll
keep that forever...
Assuming
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
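To make the ordering concrete, a hedged illustration (pool and file names
are made up):
# zfs create mypool/db
# zfs set recordsize=8k mypool/db     (set it *before* creating files)
# mkfile 1g /mypool/db/datafile       (new file picks up 8K records)
Files that already existed when you changed the property keep the record
size they were created with.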
Mattias Pantzare wrote:
If you created them after, then no worries, but if I understand
correctly, if the *file* was created with 128K recordsize, then it'll
keep that
files are updated as well...
hm.
Cheers!
Nathan.
Richard Elling wrote:
Nathan Kroenert wrote:
And something I was told only recently - It makes a difference if you
created the file *before* you set the recordsize property.
Actually, it has always been true for RAID-0, RAID-5, RAID-6
My guess is that you have some defective hardware in the system that's
causing bit flips in the checksum or the data payload.
I'd suggest running some sort of system diagnostics for a few hours to
see if you can locate the bad piece of hardware.
My suspicion would be your memory or CPU, but
And would drive storage requirements through the roof!!
I like it!
;)
Nathan.
Jonathan Loran wrote:
David Magda wrote:
On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
In some circles, CDP is big business. It would be a great ZFS offering.
ZFS doesn't have it built-in, but AVS may be
Are you indicating that the filesystem knows or should know what an
application is doing??
It seems to me that to achieve what you are suggesting, that's exactly
what it would take.
Or, you are assuming that there are no co-dependent files in
applications that are out there...
Whichever the
It occurred to me that we are likely missing the point here because Uwe
is thinking of this as a One User on a System sort of perspective,
whereas most of the rest of us are thinking of it from a 'Solaris'
perspective, where we are typically expecting the system to be running
many applications
Bob Friesenhahn wrote:
On Tue, 4 Mar 2008, Nathan Kroenert wrote:
It does seem that some of us are getting a little caught up in disks
and their magnificence in what they write to the platter and read
back, and overlooking the potential value of a simple (though
potentially
going to shut up now. I think I have done this to death, and I
don't want to end up in everyone's kill filter.
Cheers!
Nathan.
Bob Friesenhahn wrote:
On Tue, 4 Mar 2008, Nathan Kroenert wrote:
The circus trick can be handled via a user-contributed utility. In fact,
people can compete
Paul -
Don't substitute redundancy for backup...
if your data is important to you, for the love of steak, make sure you
have a backup that would not be destroyed by, say, a lightning strike,
fire or stray 747.
For what it's worth, I'm also using ZFS on 32 bit and am yet to
experience any
Did you do anything specific with the drive caches?
How is your ZFS performance?
Nathan. :)
Rich Teer wrote:
On Wed, 19 Mar 2008, Terence Ng wrote:
I am new to Solaris. I have Sun X2100 with 2 x 80G harddisks (run as
email server, run tomcat, jboss and postgresql) and want to run as
but upwards of 24 hours.
Nico
the
whole disk where it automatically turns the disk cache on)?
Also, how can you check if the disk's cache has been enabled or not?
Thanks,
-brian
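For what it's worth, one way to check on Solaris is format's expert-mode
cache menu (SCSI/SAS disks; menu text from memory):
# format -e          (then select the disk)
format> cache
cache> write_cache
write_cache> display
Write Cache is enabled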
without one
inheriting limits from the other?
Why would one do that? Just keep an eye on the root pool and all is good.
Tim wrote:
PCI or PCI-X. Yes, you might see *SOME* loss in speed from a PCI
interface, but let's be honest, there aren't a whole lot of users on
this list that have the infrastructure to use greater than 100MB/sec who
are asking this sort of question. A PCI bus should have no
. With 4 cores @ 2.2GHz (Phenom 9550) it's looking
like it'll do what I wanted quite nicely.
Later...
Nathan.
...
Awesome. Now to work on audio...
heh.
Nathan.
Nathan Kroenert wrote:
Hey all -
Just spent quite some time trying to work out why my 2 disk mirrored ZFS
pool was running so slow, and found an interesting answer...
System: new Gigabyte M750sli-DS4, AMD 9550, 4GB memory and 2 X Seagate
Even better would be using the ZFS block checksums (assuming we are only
summing the data, not its position or time :)...
Then we could have two files that have 90% the same blocks, and still
get some dedup value... ;)
Nathan.
Charles Soto wrote:
A really smart nexus for dedup is right when
In one of my prior experiments, I included the names of the snapshots I
created in a plain text file.
I used this file, and not the zfs list output to determine which
snapshots I was going to remove when it came time.
I don't even remember *why* I did that in the first place, but it
certainly
It starts with Z, which makes it one of the last to be considered if
it's listed alphabetically?
Nathan.
Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
And I can certainly vouch for that series of chipsets... I have a
750a-sli chipset (the one below the 790) and the SATA ports (in AHCI
mode) Just Work(tm) under nevada / opensolaris.
I'm yet to give it a whirl on S10, mostly as I pretty much run nevada
everywhere... As S10 does indeed have an
Pollack
Any further information welcome.
Ian
Regards,
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
Nathan.
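One approach that has worked for some of us - a sketch only, and it
assumes a controller that luxadm can drive:
# luxadm stop /dev/rdsk/c1t2d0      (spin the drive down)
# luxadm start /dev/rdsk/c1t2d0     (spin it back up)
The drive will spin up again on the next I/O, so it only helps for truly
idle disks.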
Jens Elkner wrote:
On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
Hi,
a
overlap with my
nightly rsyncs causing yet more I/O. Wouldn't this stress the disks more?
If it is necessary - how often are people running a manual scrub? Once
a week? month?
regards
D
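For what it's worth, a weekly scrub is easy enough to automate - a hedged
example for root's crontab (pool name made up):
0 3 * * 0 /usr/sbin/zpool scrub mypool
That kicks one off at 3am each Sunday; pick a window that misses your
rsync runs.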
2C from Oz:
Windows (at least XP - I have thus far been lucky enough to avoid
running vista on metal) has packet schedulers, quality of service
settings and other crap that can severely impact windows performance on
the network.
I have found that setting the following made a difference to me:
in df output.
thanks
Interesting. I'll have a poke...
Thanks!
Nathan.
Brandon High wrote:
On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
nathan.kroen...@sun.com wrote:
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience any
issues.
There's a lot
to one minute to undo. That will catch 80% of the mistakes?
You could be the first...
Man up! ;)
Nathan.
Will Murnane wrote:
On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert nathan.kroen...@sun.com
wrote:
Seems a little pricey for what it is though.
For what it's worth, there's also a 9010B model that has only one sata
port and room for six dimms
zvol, but fails to do so. Only sometimes do I see an
error on the machine's local console - most times, it simply reboots.
On Thu, Mar 12, 2009 at 1:55 AM, Nathan Kroenert
nathan.kroen...@sun.com wrote:
Hm -
Crashes, or hangs? Moreover - how do you know a CPU is pegged?
Seems like we could do
.
--
Dave
faults *after* I go to init 3.
That does seem odd.
Dennis
Hi all,
Exec summary: I have a situation where I'm seeing lots of large reads
starving writes from being able to get through to disk.
Some detail:
I have a newly constructed box (was an old box, but blew the mobo -
different story - sigh).
Anyhoo - It's a Gigabyte 890GPA-UD3H - with lots
Hi Steve,
Thanks for the thoughts - I think that everything you asked about is in
the original email - but for reference again, it's 151a (s11 express).
Are you really suggesting that, for a single-user system, I need 16GB of
memory just to get ZFS to be able to write when it's reading? (and even
On 14/02/2011 4:31 AM, Richard Elling wrote:
On Feb 13, 2011, at 12:56 AM, Nathan Kroenert nat...@tuneunix.com wrote:
Hi all,
Exec summary: I have a situation where I'm seeing lots of large reads starving
writes from being able to get through to disk.
snip
What is the average service time
zfs_vdev_max_pending...
Nonetheless, I'm now at a far more balanced point than when I started,
so that's a good thing. :)
Cheers,
Nathan.
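For reference, that tunable can be set in /etc/system (the value here is
only an example, and a reboot is needed for it to take effect):
set zfs:zfs_vdev_max_pending = 10
It can also be poked live with mdb -kw while experimenting to find a
value that balances reads and writes.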
On 15/02/2011 6:44 AM, Richard Elling wrote:
Hi Nathan,
comments below...
On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote:
On 14/02/2011 4:31 AM, Richard
I can confirm that on *at least* 4 different cards - from different
board OEMs - I have seen single bit ZFS checksum errors that went away
immediately after removing the 3114 based card.
I stepped up to the 3124 (PCI-X up to 133MHz) and 3132 (PCI-E) and have
never looked back.
I now throw
I'm with the gang on this one as far as USB being the spawn of the
devil for mass storage you want to depend on. I'd rather scoop my eyes
out with a red hot spoon than depend on permanently attached USB
storage... And - don't even start me on SPARC and USB storage... It's
like watching pitch
Actually, I find that tremendously encouraging. Lots of internal
Oracle folks still subscribed to the list!
Much better than none... ;)
Nathan.
On 02/26/11 03:29 PM, Yaverot wrote:
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they're out of
Why wouldn't they try a reboot -d? That would at least get some data in
the form of a crash dump if at all possible...
A power cycle seems a little medieval to me... At least in the first
instance.
The other thing I have noted is that sometimes things do get wedged, and
if you can find
Ed -
Simple test. Get onto a system where you *can* disable the disk cache,
disable it, and watch the carnage.
Until you do that, you can pose as many interesting theories as you like.
Bottom line is that 75 IOPS per spindle won't impress many people,
and that's the sort of rate you get
Hi Karl,
Is there any chance at all that some other system is writing to the
drives in this pool? You say other things are writing to the same JBOD...
Given that the amount flagged as corrupt is so small, I'd imagine not,
but thought I'd ask the question anyways.
Cheers!
Nathan.
On
Hi Max,
Unhelpful questions about your CPU aside, what else is your box doing?
Can you run up a second or third shell (ssh or whatever) and watch if
the disks / system are doing any work?
Were it Solaris, I'd run:
iostat -x
prstat -a
vmstat
mpstat (Though as discussed, you
On 12/11/11 01:05 AM, Pawel Jakub Dawidek wrote:
On Wed, Dec 07, 2011 at 10:48:43PM +0200, Mertol Ozyoney wrote:
Unfortunately the answer is no. Neither L1 nor L2 cache is dedup aware.
The only vendor I know that can do this is Netapp
And you really work at Oracle?:)
The answer is
Do note, that though Frank is correct, you have to be a little careful
around what might happen should you drop your original disk, and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached
I know some others may already have pointed this out - but I can't see
it and not say something...
Do you realise that losing a single disk in that pool could pretty much
render the whole thing busted?
At least for me - the rate at which _I_ seem to lose disks, it would be
worth
Hey there,
Few things:
- Using /dev/zero is not necessarily a great test. I typically use
/dev/urandom to create an initial block-o-stuff - something like a gig
or so worth, in /tmp, then use dd to push that to my zpool. (/dev/zero
will return dramatically different results depending on
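Something along these lines (sizes and paths are only an example):
# dd if=/dev/urandom of=/tmp/blob bs=1024k count=1024    (~1GB of incompressible data)
# dd if=/tmp/blob of=/mypool/testfile bs=1024k           (then push it at the pool)
/dev/urandom is too slow to feed a pool directly, which is why the blob
gets staged in /tmp first.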
Jim Klimov wrote:
It is hard enough already to justify to an average wife that...snip
That made my night. Thanks, Jim. :)
On 03/20/12 10:29 PM, Jim Klimov wrote:
2012-03-18 23:47, Richard Elling wrote:
...
Yes, it is wrong to think that.
Ok, thanks, we won't try that :)
copy out,
Hi folks,
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now (which
are 512 byte sector).
Anyone offer up suggestions of either 3 or preferably 4TB drives that
actually work well with ZFS out of the box? (And not