Hi list,
I've got a system with 3 WD and 3 Seagate drives. Today I got an email
that zpool status indicated one of the Seagate drives as REMOVED.
I've tried clearing the error but the pool becomes faulted again. I've taken
the offending drive out and plugged it into a Windows box with SeaTools
On 11/09/2011 18:32, Krunal Desai wrote:
On Sep 11, 2011, at 13:01 , Richard Elling wrote:
The removed state can be the result of a transport issue. If this is a
Solaris-based
OS, then look at fmadm faulty for a diagnosis leading to a removal. If none,
then look at fmdump -eV for errors
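For anyone wanting to try those as-is, both run without further arguments on a
Solaris-based box:

  # fmadm faulty
  # fmdump -eV | more

fmadm faulty lists whatever FMA has diagnosed as faulty or removed; fmdump -eV
prints each error report in the log verbosely.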
Hi list,
I want to monitor the read and write ops/bandwidth for a couple of pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.
According to the ZFS admin guide, running zpool iostat without any
parameters should show the
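As a quick aside, per-interval figures come from passing an interval argument; a
minimal sketch, the pool name 'tank' being an assumption (note the first line
printed is the average since boot, not a live sample):

  $ zpool iostat tank 10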
On 28/06/2011 16:44, Tomas Ögren wrote:
Matt Harrison iwasinnamuk...@genestate.com wrote:
Hi list,
I want to monitor the read and write ops/bandwidth for a couple of
pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.
According
Hi list,
I've got a pool that's got a single raidz1 vdev. I've just got some more
disks in and I want to replace that raidz1 with a three-way mirror. I
was thinking I'd just make a new pool and copy everything across, but
then of course I've got to deal with the name change.
Basically, what is
On 01/06/2011 20:45, Eric Sproul wrote:
On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
Hi list,
I've got a pool that's got a single raidz1 vdev. I've just got some more disks in
and I want to replace that raidz1 with a three-way mirror. I was thinking
I'd just
On 01/06/2011 20:52, Eric Sproul wrote:
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
Thanks Eric, however seeing as I can't have two pools named 'tank', I'll
have to name the new one something else. I believe I will be able to rename
it afterwards, but I
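The usual rename trick, for reference, is an export followed by an import under
the new name; a minimal sketch, with 'newtank' and 'tank' as assumed names:

  # zpool export newtank
  # zpool import newtank tank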
On 01/06/2011 20:53, Cindy Swearingen wrote:
Hi Matt,
You have several options in terms of migrating the data but I think the
best approach is to do something like I have described below.
Thanks,
Cindy
1. Create snapshots of the file systems to be migrated. If you
want to capture the file
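A minimal sketch of that first step and the send/receive that typically follows,
assuming the old pool is 'tank' and the new one is 'tank2':

  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate | zfs receive -d tank2

The -R flag carries the whole dataset tree and its properties; receive -d keeps
the dataset names, just rooted under the new pool.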
On 13/04/2011 00:36, David Magda wrote:
On Apr 11, 2011, at 17:54, Brandon High wrote:
I suspect that the minimum memory for most moderately sized pools is
over 16GB. There has been a lot of discussion regarding how much
memory each dedup'd block requires, and I think it was about 250-270
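A rough worked example with that 250-270 byte figure, assuming ~270 bytes per
DDT entry and an average 128K block size: 1 TB of unique data is about 8 million
blocks, so the dedup table alone wants roughly 8M x 270 bytes, a little over
2 GB of ARC/L2ARC before anything else gets cached.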
On 11/04/2011 10:04, Brandon High wrote:
On Sun, Apr 10, 2011 at 10:01 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
The machine only has 4G RAM I believe.
There's your problem. 4G is not enough memory for dedup, especially
without a fast L2ARC device.
It's time I should be heading
greatly appreciated,
thanks
Matt Harrison
On 11/04/2011 05:25, Brandon High wrote:
On Sun, Apr 10, 2011 at 9:01 PM, Matt Harrison
iwasinnamuk...@genestate.com wrote:
I had a de-dup dataset and tried to destroy it. The command hung and so did
anything else zfs related. I waited half an hour or so, the dataset was
only 15G
Richard Elling wrote:
On Jul 21, 2009, at 12:49 PM, Bob Friesenhahn wrote:
On Tue, 21 Jul 2009, Andrew Gabriel wrote:
The X25-M drives referred to are Intel's Mainstream drives, using MLC
flash.
The Enterprise grade drives are X25-E, which currently use SLC flash
(less dense, more
I know this may have been discussed before but my google-fu hasn't
turned up anything directly related.
My girlfriend had some files stored in a zfs dataset on my home server.
She assured me that she didn't need them any more so I destroyed the
dataset (I know I should have kept it anyway for
dick hoogendijk wrote:
On Mon, 22 Jun 2009 21:42:23 +0100
Matt Harrison iwasinnamuk...@genestate.com wrote:
She's now desperate to get it back as she's realised there's some
important work stuff hidden away in there.
Without snapshots you're lost.
Ok, thanks. It was worth a shot. Guess
Simon Breden wrote:
Hi Matt!
As kim0 says, that s/w PhotoRec looks like it might work, if it can work with
ZFS... would be interested to hear if it works.
Good luck,
Simon
I'll give it a go as soon as I get a chance. I've had a very quick look
and ZFS isn't in the list of supported
Christine Tran wrote:
There was a very long discussion about this a couple of weeks ago on
one of the lists. Apparently the decision was made to put the GNU
utilities in default system wide path before the native Sun utilities
in order to make it easier to attract Linux users by making the
Brandon High wrote:
On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
Several people reported this same problem. They changed their
ethernet adaptor to an Intel ethernet interface and the performance
problem went away. It was not ZFS's fault.
It may not
JZ wrote:
Beloved Jonny,
I am just like you.
There was a day, I was hungry, and went for a job interview for sysadmin.
They asked me - what is a protocol?
I could not give a definition, and they said, no, not qualified.
But they did not ask me about CICS and mainframe. Too bad.
Jonny Gerold wrote:
Meh, this is retarded. It looks like zpool list shows an incorrect
calculation? Can anyone agree that this looks like a bug?
r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T 27K 2.7T 1% /ambry
r...@fsk-backup:~# zpool list
NAME   SIZE   USED
Bob Friesenhahn wrote:
On Sun, 28 Dec 2008, Robert Bauer wrote:
It would be nice if gnome could notify me automatically when one of
my zpools is degraded or if any kind of ZFS error occurs.
Yes. It is a weird failing of Solaris to have an advanced fault
detection system without a
Jeff Waddell wrote:
I'm having a permission issue with a ZFS fs over CIFS. I'm new to
OpenSolaris, and fairly new to *nix (only real experience being OS
X), so any help would be appreciated.
Sounds like you're falling foul of the same thing I did when I started.
Child filesystems are not accessible via
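The usual fix is to share each child filesystem in its own right; a minimal
sketch, the dataset name 'tank/docs' being an assumption:

  # zfs set sharesmb=name=docs tank/docs

Over CIFS a child dataset otherwise shows up as an empty directory inside the
parent's share.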
Ok, I have received a new set of NICs and a new switch and the problem still
remains.
Just for something to do I ran some tests:
Copying a 200Mb file over scp from the main problem workstation to a totally
unrelated gentoo linux box. Absolutely no problems.
So I thought it was down to the zfs
Well, somehow it's fixed:
Since putting in the new Intel card, the transfer from the box dropped so
badly, I couldn't even copy a snoop from it.
So I removed the dohwcksum line from /etc/system and rebooted. Then just to
clean up a bit I disabled the onboard NICs in the bios.
Now I'm still
Nigel Smith wrote:
Hi Matt
Well this time you have filtered out any SSH traffic on port 22 successfully.
But I'm still only seeing half of the conversation!
Grr, this is my day. I think I know what the problem was... user error, as
I'm not used to snoop.
I see packets sent from client to
On Tue, Oct 28, 2008 at 05:30:55PM -0700, Nigel Smith wrote:
Hi Matt.
Ok, got the capture and successfully 'unzipped' it.
(Sorry, I guess I'm using old software to do this!)
I see 12840 packets. The capture is a TCP conversation
between two hosts using the SMB aka CIFS protocol.
On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
I replied to Matt directly, but didn't hear back. It may be a driver issue
with checksum offloading. Certainly the symptoms are consistent.
To test with a workaround see
http://bugs.opensolaris.org/view_bug.do?bug_id=6686415
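If memory serves, the workaround behind that bug amounts to disabling hardware
checksum offload via /etc/system (reboot required); a sketch, with the tunable
name from recollection rather than from the bug report itself:

  set ip:dohwcksum = 0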
On Wed, Oct 29, 2008 at 10:01:09AM -0700, Nigel Smith wrote:
Hi Matt
Can you just confirm if that Ethernet capture file, that you made available,
was done on the client, or on the server. I'm beginning to suspect you
did it on the client.
That capture was done from the client
You can get a
On Wed, Oct 29, 2008 at 05:32:39PM -0700, Nigel Smith wrote:
Hi Matt
In your previous capture (which you have now confirmed was done
on the Windows client), all those 'Bad TCP checksum' packets sent by the
client are explained, because you must be doing hardware TCP checksum offloading
On Mon, Oct 27, 2008 at 06:18:59PM -0700, Nigel Smith wrote:
Hi Matt
Unfortunately, I'm having problems un-compressing that zip file.
I tried with 7-zip and WinZip reports this:
skipping _1_20081027010354.cap: this file was compressed using an unknown
compression method.
Please
On Sat, Oct 25, 2008 at 06:50:46PM -0700, Nigel Smith wrote:
Hi Matt
What chipset is your PCI network card?
(obviously it's not Intel, but what is it?)
Do you know which driver the card is using?
I believe it's some sort of Realtek (8139 probably). It's coming up as rtls0
You say '..The
Nigel Smith wrote:
Ok on the answers to all my questions.
There's nothing that really stands out as being obviously wrong.
Just out of interest, what build of OpenSolaris are you using?
One thing you could try on the Ethernet capture file, is to set
the WireShark 'Time' column like this:
Nigel Smith wrote:
Ok on the answers to all my questions.
There's nothing that really stands out as being obviously wrong.
Just out of interest, what build of OpenSolaris are you using?
Damn, forgot to add that. I'm running SXCE snv_97.
Thanks
Matt
Bob Friesenhahn wrote:
Other people on this list who experienced the exact same problem
ultimately determined that the problem was with the network card. I
recall that Intel NICs were the recommended solution.
Note that 100MBit is now considered to be a slow link and PCI is also
considered
On Sat, Oct 25, 2008 at 11:10:42AM -0500, Bob Friesenhahn wrote:
Hmmm, this may indicate that there is an ethernet cable problem. Use
'netstat -I interface' (where interface is the interface name shown by
'ifconfig -a') to see if the interface error count is increasing. If you
are using a
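As a usage example, assuming the interface from ifconfig -a turns out to be
rtls0, a rolling 5-second view looks like:

  $ netstat -I rtls0 5

and the errs columns should stay flat between samples on a healthy link.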
Hi all,
I've got a lot of video files on a zfs/cifs fileserver running SXCE. A
little while ago the dual onboard NICs died and I had to replace them with a
PCI 10/100 NIC. The system was fine for a couple of weeks but now the
performance when viewing a video file from the cifs share is appalling.
Todd H. Poole wrote:
But you're not attempting hotswap, you're doing hot plug
Do you mean hot UNplug? Because I'm not trying to get this thing to recognize
any new disks without a restart... Honest. I'm just trying to prevent the
machine from freezing up when a drive fails. I have no
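For what it's worth, the supported way to detach a SATA disk on Solaris goes
through cfgadm; a minimal sketch, the attachment point 'sata1/3' being an
assumption (cfgadm with no arguments lists the real ones):

  # cfgadm -c unconfigure sata1/3
  ... pull the disk, insert the replacement ...
  # cfgadm -c configure sata1/3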
Ross wrote:
Hi,
First of all, I really should warn you that I'm very new to Solaris, I'll
happily share my thoughts but be aware that there's not a lot of experience
backing them up.
From what you've said, and the logs you've posted, I suspect you're hitting
recoverable read errors. ZFS
Miles Nordin wrote:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
If you can download smartctl, you can use the approach described here:
http://web.Ivy.NET/~carton/rant/ml/raid
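A sketch of that smartctl approach, the device path being an assumption for a
Solaris box:

  # smartctl -H -A /dev/rdsk/c1t0d0s0

-H gives the overall health verdict and -A the attribute table; Reallocated_Sector_Ct
and Current_Pending_Sector climbing across the whole batch would be the worrying sign.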
Johan Hartzenberg wrote:
On Sun, Aug 3, 2008 at 8:48 PM, Matt Harrison
[EMAIL PROTECTED]wrote:
Miles Nordin wrote:
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
Matt, can you
Richard Elling wrote:
Matt Harrison wrote:
Aug  2 14:46:06 exodus    Error for Command: read_defect_data    Error Level: Informational
key here: Informational
Aug  2 14:46:06 exodus scsi: [ID 107833 kern.notice]    Requested Block: 0    Error Block: 0
Aug  2
Hi everyone,
I've been running a zfs fileserver for about a month now (on snv_91) and
it's all working really well. I'm scrubbing once a week and nothing has
come up as a problem yet.
I'm a little worried as I've just noticed these messages in
/var/adm/messages and I don't know if they're bad
Ross wrote:
What does zpool status say?
zpool status says everything's fine. I've run another scrub and it hasn't
found any errors, so can I just consider this harmless? It's filling up
my log quickly though.
thanks
Matt
Matt Harrison wrote:
Ross wrote:
What does zpool status say?
zpool status says everything's fine. I've run another scrub and it hasn't
found any errors, so can I just consider this harmless? It's filling up
my log quickly though.
I've just checked past logs and I'm getting up to about
Steve wrote:
| I've been a fan of ZFS since I read about it last year.
|
| Now I'm on the way to building a home fileserver and I'm thinking of going
with OpenSolaris and eventually ZFS!!
|
| Apart from the other components, the main problem is to choose the
Hi gurus,
I like zpool iostat and I like system monitoring, so I set up a script
within sma to let me get the zpool iostat figures through snmp.
The problem is that as zpool iostat is only run once for each snmp
query, it always reports a static set of figures, like so:
[EMAIL PROTECTED]:snmp
Mike Gerdts wrote:
$ kstat -p ::vopstats_zfs:{nread,read_bytes,nwrite,write_bytes}
unix:0:vopstats_zfs:nread         418787
unix:0:vopstats_zfs:read_bytes    612076305
unix:0:vopstats_zfs:nwrite        163544
unix:0:vopstats_zfs:write_bytes   255725992
Thanks Mike, that's exactly what I was
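For anyone wiring that kstat into rrdtool, a minimal polling sketch; the RRD
path and data-source layout are assumptions:

  #!/bin/sh
  # The vopstats_zfs counters only ever increase, so define the RRD
  # data sources as COUNTER and let rrdtool derive per-second rates.
  NREAD=`kstat -p unix:0:vopstats_zfs:nread | awk '{print $2}'`
  NWRITE=`kstat -p unix:0:vopstats_zfs:nwrite | awk '{print $2}'`
  rrdtool update /var/rrd/zfs_ops.rrd N:$NREAD:$NWRITE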
Hi gurus,
Just wanted some input on this for the day when an upgrade is necessary.
Let's say I have a simple pool made up of 3 750GB SATA disks in raidz1,
giving around 1.3TB usable space. If we wanted to upgrade the disks,
what is the accepted procedure? There are 6 SATA ports in the machine in
Tomas Ögren wrote:
| On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
|
| Hi gurus,
|
| Just wanted some input on this for the day when an upgrade is necessary.
|
| Let's say I have a simple pool made up of 3 750GB SATA disks in raidz1
Matt Harrison wrote:
| Tomas Ögren wrote:
| | On 28 June, 2008 - Matt Harrison sent me these 0,6K bytes:
| |
| | Hi gurus,
| |
| | Just wanted some input on this for the day when an upgrade is
necessary.
| |
| | Let's say I have a simple pool made up
James C. McPherson wrote:
| Matt Harrison wrote:
|
| I seem to have overlooked the first part of your reply, I can just
| replace the disks one at a time, and of course the pool would rebuild
| itself onto the new disk. Would this automatically
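A sketch of that one-at-a-time replacement, the pool and device names being
assumptions; wait for each resilver to finish before starting the next:

  # zpool replace tank c1t1d0 c2t1d0
  # zpool status tank

Once every member has been swapped for a larger disk, the pool can grow into
the new capacity.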
access individual zfs's
under one share, I guess I will have to have 6 network drives instead of
one, but this will of course confuse the users no end.
Thanks
--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org
as it would with individual zfs's.
I hope this is a better explanation,
Thanks
--
Matt Harrison
[EMAIL PROTECTED]
http://mattharrison.org