On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jonathan Loran
Because you're at pool v15, it does not matter if the log device fails while
you're running, or you're offline and trying
?
I'm going to perform a full backup of this guy (not so easy on my
budget), and I would rather only get the good files.
Thanks,
Jon
, Paul Choi wrote:
zpool clear just clears the list of errors (and # of checksum
errors) from its stats. It does not modify the filesystem in any
manner. You run zpool clear to make the zpool forget that it ever
had any issues.
-Paul
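For example, something like this (the pool name "tank" is just a placeholder):

  # zpool status -v tank    (shows the accumulated read/write/cksum error counts)
  # zpool clear tank        (resets those counters; nothing on disk is modified)
  # zpool clear tank c1t2d0 (or clear the counters for a single device only)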
Jonathan Loran wrote:
Hi list,
First off:
# cat /etc
is not good from a ZFS perspective. How many SATA
plugs are there on the MB in this guy?
Jon
on this list! :-)
Thanks!
Fe = 46% failures/month * 12 months = 5.52 expected failures per year
Jon
Jorgen Lundman wrote:
# /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 'vendor 0x11ab device 0x6081'
pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081
Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
But it claims resolved for our version:
SunOS
Miles Nordin wrote:
s == Steve [EMAIL PROTECTED] writes:
s http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
no ECC:
http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
This MB will take these:
be considerably different since NFS requests that its
data be committed to disk.
Bob
profile is just like Tim's: terabytes of satellite
data. I'm going to guess that the d11p ratio won't be fantastic for
us. I sure would like to measure it, though.
Jon
/erickustarz/entry/how_dedupalicious_is_your_pool
Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that
will dump those checksums?
Jon
reference count. If a block has few references, it should expire
first, and vice versa: blocks with many references should be the last
out. With all the savings on disks, think how much RAM you could buy ;)
Jon
be very excited to see block-level ZFS deduplication
roll out, especially since we already have the infrastructure in place
using Solaris/ZFS.
Cheers,
Jon
, but
make sure your power supply is running clean. I can't tell you how many
times I've seen very strange and intermittent system errors caused by a
flaky power supply.
Jon
Jonathan Loran wrote:
Since no one has responded to my thread, I have a question: is zdb
suitable to run on a live pool, or should it only be run on an exported
or destroyed pool? In fact, I see that this has been asked before on this
forum, but is there a user's guide to zdb
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other
the Solaris map, thus:
auto_home:
*zfs-server:/home/
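In case that wildcard entry got mangled above, the usual form is roughly as
follows (server name and path are whatever your site uses):

  # cat /etc/auto_home
  *   zfs-server:/home/&

so each user's key is substituted for the & on the server side.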
Sorry to be so off (ZFS) topic.
Jon
Dominic Kay wrote:
Hi
Firstly apologies for the spam if you got this email via multiple aliases.
I'm trying to document a number of common scenarios where ZFS is used
as part of the solution such as email server, $homeserver, RDBMS and
so forth but taken from real implementations where
Bob Friesenhahn wrote:
The problem here is that by putting the data away from your machine,
you lose the chance to scrub
it on a regular basis, i.e. there is always the risk of silent
corruption.
Running a scrub is pointless since the media is not writeable. :-)
But that's the
Bob Friesenhahn wrote:
On Tue, 22 Apr 2008, Jonathan Loran wrote:
But that's the point. You can't correct silent errors on write once
media because you can't write the repair.
Yes, you can correct the error (at time of read) due to having both
redundant media, and redundant blocks
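On bits new enough to have the copies property, ZFS can also keep extra copies
of each block even without a mirror, which gives it something to repair from at
read time. A rough sketch (the dataset name is just an example):

  # zfs set copies=2 tank/archive   (two copies of every data block from now on)
  # zfs get copies tank/archive

Only newly written data gets the extra copies, so it has to be set before the
data lands there.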
Luke Scharf wrote:
Maurice Volaski wrote:
Perhaps providing the computations rather than the conclusions would
be more persuasive on a technical list ;
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over
Chris Siebenmann wrote:
| What your saying is independent of the iqn id?
Yes. SCSI objects (including iSCSI ones) respond to specific SCSI
INQUIRY commands with various 'VPD' pages that contain information about
the drive/object, including serial number info.
Some Googling turns up:
Just to report back to the list... Sorry for the lengthy post
So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more
or less work as expected. If I unplug one side of the mirror - unplug
or power down one of the iSCSI targets - I/O to the zpool stops for a
while, perhaps a
kristof wrote:
If you have a mirrored iSCSI zpool, it will NOT panic when one of the
submirrors is unavailable.
zpool status will hang for some time, but after, I think, 300 seconds it will
mark the device as unavailable.
The panic was the default in the past, and it only occurs if all
This guy seems to have had lots of fun with iSCSI :)
http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
This is scaring the heck out of me. I have a project to create a zpool
mirror out of two iSCSI targets, and if the failure of one of them will
panic my system, that will
Bob Friesenhahn wrote:
On Tue, 25 Mar 2008, Robert Milkowski wrote:
As I wrote before - it's not only about RAID config - what if you have
hundreds of file systems, with some share{nfs|iscsi|cifs} enabled with
specific parameters, then specific file system options, etc.
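One partial mitigation is to capture the locally-set properties and the pool
history so they can be replayed after a rebuild; a rough sketch (pool name is
an example):

  # zfs get -rH -s local -o name,property,value all tank > /var/tmp/tank.props
  # zpool history tank >> /var/tmp/tank.props

It's not a real config backup, but it at least records the sharenfs/shareiscsi
settings and dataset options in one place.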
Some zfs-related
CIFS compatibility, and it is the way the industry will be moving.
Jon
Robert Milkowski wrote:
Hello Jonathan,
Friday, March 14, 2008, 9:48:47 PM, you wrote:
Carson Gaspar wrote:
Bob Friesenhahn wrote:
On Fri, 14 Mar 2008, Bill Shannon wrote:
What's the best way to back up a zfs filesystem to tape, where the size
of the filesystem is
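The simple-minded route is zfs send piped to the tape device; a rough sketch
(dataset, snapshot and tape device names are examples):

  # zfs snapshot tank/fs@tapebackup
  # zfs send tank/fs@tapebackup | dd of=/dev/rmt/0n obs=1048576

and to restore:

  # dd if=/dev/rmt/0n ibs=1048576 | zfs receive tank/fs-restored

with the usual caveats: the stream isn't an archive format, and a single bad
block on the tape makes the whole receive fail.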
Patrick Bachmann wrote:
Jonathan,
On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote:
I'm not sure I follow how this would work.
The keyword here is thin provisioning. The sparse zvol only uses
as much space as the actual data needs. So, if you use a sparse
zvol, you
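For reference, a sparse zvol is just the -s flag at creation time; a minimal
sketch (name and size are examples):

  # zfs create -s -V 500g tank/iscsivol   (no reservation; space is only consumed as data is written)
  # zfs get volsize,reservation tank/iscsivol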
Quick question:
If I create a ZFS mirrored pool, will the read performance get a boost?
In other words, will the data/parity be read round robin between the
disks, or do both mirrored sets of data and parity get read off of both
disks? The latter case would have a CPU expense, so I would
Roch Bourbonnais wrote:
Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
Quick question:
If I create a ZFS mirrored pool, will the read performance get a boost?
In other words, will the data/parity be read round robin between the
disks, or do both mirrored sets of data and parity get
Roch Bourbonnais wrote:
Le 28 févr. 08 à 21:00, Jonathan Loran a écrit :
Roch Bourbonnais wrote:
Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
Quick question:
If I create a ZFS mirrored pool, will the read performance get a
boost?
In other words, will the data/parity be read
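One way to see the read behaviour for yourself on a test pool: stream a large
file that isn't already sitting in the ARC and watch whether both sides of the
mirror stay busy (file, pool and block size are examples):

  # dd if=/testpool/bigfile of=/dev/null bs=1024k &
  # iostat -xn 2   (both mirror disks should show roughly half the reads each)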
David Magda wrote:
On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
In some circles, CDP is big business. It would be a great ZFS offering.
ZFS doesn't have it built-in, but AVS may be an option in some cases:
http://opensolaris.org/os/project/avs/
Point in time copy (as AVS offers
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The
be a good provider to hit up for the VFS layer.
I'd also check syscall latencies - it might be too obvious, but it can be
worth checking (eg, if you discover those long latencies are only on the
open syscall)...
Brendan
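A rough, untested sketch of that open() check (probe names may need adjusting
for your build):

  # dtrace -n '
  syscall::open*:entry { self->ts = timestamp; }
  syscall::open*:return /self->ts/ {
      @["open(2) latency (ns)"] = quantize(timestamp - self->ts);
      self->ts = 0;
  }'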
[EMAIL PROTECTED] wrote:
On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote:
Thanks for any help anyone can offer.
I have faced a similar problem (although not exactly the same) and was going to
monitor the disk queue with DTrace but couldn't find any docs/URLs about
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
...
I know, I know, I should have gone with a JBOD setup, but it's too late for
that in this iteration of this server. When we set this up, I had the gear
already, and it's not in my budget to get new stuff right now.
What kind of
Hi List,
I'm wondering if one of you expert DTrace gurus can help me. I want to
write a DTrace script to print out a histogram of how long I/O requests
sit in the service queue. I can output the results with the quantize
method. I'm not sure which provider I should be using for this.
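The io provider is probably the place to start. The sketch below (the usual
io-provider timing pattern) measures the whole start-to-done time rather than
just queue wait, so treat it as a starting point only:

  # dtrace -n '
  io:::start { ts[args[0]->b_edev, args[0]->b_blkno] = timestamp; }
  io:::done /ts[args[0]->b_edev, args[0]->b_blkno]/ {
      @["disk I/O time (ns)"] = quantize(timestamp - ts[args[0]->b_edev, args[0]->b_blkno]);
      ts[args[0]->b_edev, args[0]->b_blkno] = 0;
  }'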
Anton B. Rang wrote:
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5, for
that matter) will break down under highly randomized write loads.
Oh? What precisely do you mean by "break down"? RAID 5's write performance is
well-understood and it's used successfully in
Richard Elling wrote:
Nick wrote:
Using the RAID cards capability for RAID6 sounds attractive?
Assuming the card works well with Solaris, this sounds like a
reasonable solution.
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5,
for that matter)
is that the
requirement for this very stability is why we haven't seen the features
in the ZFS code we need in Solaris 10.
Thanks,
Jon
Mike Gerdts wrote:
On Jan 30, 2008 2:27 PM, Jonathan Loran [EMAIL PROTECTED] wrote:
Before ranting any more, I'll do the test of disabling the ZIL. We may
have to build out
10 U? as a preferred
method.
Jon
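For the record, the usual (unsupported, test-only) way to do that on S10 is the
zil_disable tunable; roughly:

  # echo zil_disable/W0t1 | mdb -kw   (takes effect for filesystems mounted afterwards)

or persistently in /etc/system followed by a reboot:

  set zfs:zil_disable = 1

Obviously only for measuring, not for production.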
Neil Perrin wrote:
Roch - PAE wrote:
Jonathan Loran writes:
Is it true that Solaris 10 u4 does not have any of the nice ZIL
controls that exist in the various recent OpenSolaris flavors? I
would like to move my ZIL to solid state storage, but I fear I
can't do it until I
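On bits that do have separate log device support it's a one-liner, e.g.
(device name is an example):

  # zpool add tank log c4t2d0
  # zpool status tank   (the new device shows up under a "logs" section)

but as far as I know that hadn't made it back to S10u4 at the time.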
off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone know when we will see this in Solaris 10?
Thanks,
Jon
Joerg Schilling wrote:
Carsten Bormann [EMAIL PROTECTED] wrote:
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap. I know
with the semantics. It's not just a path change as in a directory mv.
Jon
Gary Mills wrote:
On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote:
This is the same configuration we use on 4 separate servers (T2000, two
X4100, and a V215). We do use a different iSCSI solution, but we have
the same multi path config setup with scsi_vhci. Dual GigE
Jonathan Loran wrote:
Gary Mills wrote:
On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote:
This is the same configuration we use on 4 separate servers (T2000, two
X4100, and a V215). We do use a different iSCSI solution, but we have
the same multi path config setup
devices,
of course, but by two different paths. Is this a correct configuration
for ZFS? I assume it's safe, but I thought I should check.
Richard Elling wrote:
Jonathan Loran wrote:
snip...
Do not assume that a compressed file system will send compressed.
IIRC, it
does not.
Let's say, if it were possible to detect the remote compression support,
couldn't we send it compressed? With higher compression rates, wouldn't
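Until something like that exists, the stream can of course be compressed in
transit by hand; a rough sketch (host and dataset names are examples):

  # zfs send tank/fs@snap | gzip -c | ssh backuphost 'gunzip -c | zfs receive backup/fs'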
Nicolas Williams wrote:
On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
I can envision a highly optimized, pipelined system, where writes and
reads pass through checksum, compression, encryption ASICs, that also
locate data properly on disk. ...
I've argued before
Paul B. Henson wrote:
On Sat, 22 Sep 2007, Jonathan Loran wrote:
My gut tells me that you won't have much trouble mounting 50K file
systems with ZFS. But who knows until you try. My question for you is:
can you lab this out?
Yeah, after this research phase has been completed
a user's
files are when they want to access them :(.
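A crude way to lab part of it: create a few thousand filesystems in a scratch
pool and time an export/import cycle (counts and names are examples):

  # i=0; while [ $i -lt 5000 ]; do zfs create tank/home/u$i; i=`expr $i + 1`; done
  # time zpool export tank
  # time zpool import tank

It won't capture the NFS/automount side, but it gives a feel for the mount scaling.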