I have a ZFS dataset that I use for network home directories. The box is
running OpenSolaris 2008.11 with the auto-snapshot service enabled. To help
debug some mysterious file-deletion issues, I've enabled NFS logging (all my
clients are NFSv3 Linux boxes).
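For reference, this is roughly how logging was turned on (a sketch; the
dataset name tank/home and the "global" tag are assumptions, and the tag has
to exist in /etc/nfs/nfslog.conf):

  # pass the log= share option through the sharenfs property
  zfs set sharenfs=rw,log=global tank/home
  # restart the NFS server so nfslogd picks up the logged share
  svcadm restart svc:/network/nfs/server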
I keep seeing lines like this in the nfslog:
On Apr 15, 2009, at 8:28 AM, Nicholas Lee emptysa...@gmail.com wrote:
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane
will.murn...@gmail.com wrote:
Has anyone done any specific testing with SSD devices and Solaris other than
the Fishworks stuff? Which is better for what - SLC and
This is really great information, though most of the controllers
mentioned aren't on the OpenSolaris HCL. Seems like that should be
corrected :)
My thanks to the community for their support.
On Mar 12, 2009, at 10:42 PM, James C. McPherson james.mcpher...@sun.com
wrote:
On Thu, 12 Mar
Check out http://www.sun.com/bigadmin/hcl/data/os
On Feb 28, 2009, at 2:20 AM, Harry Putnam rea...@newsguy.com wrote:
Brian Hechinger wo...@4amlunch.net writes:
[...]
I think it would be better to answer this question than it would be to
attempt to answer the VirtualBox
Shrinking pools would also solve the right-sizing dilemma.
On Feb 28, 2009, at 3:37 AM, Joe Esposito j...@j-espo.com wrote:
I'm using OpenSolaris and ZFS at my house for my photography storage as well
as for an offsite backup location for my employer and several side web
You can upgrade live. 'zfs upgrade' with no arguments shows the ZFS version
status of the filesystems present without upgrading anything.
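A quick sketch of the difference (nothing here is destructive except -a):

  # report-only: prints the on-disk ZFS version of each filesystem
  zfs upgrade
  # actually upgrade every filesystem to the current version
  zfs upgrade -a
  # pool versions are tracked separately; 'zpool upgrade' reports those
  zpool upgrade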
On Jan 24, 2009, at 10:19 AM, Ben Miller mil...@eecis.udel.edu wrote:
We haven't done 'zfs upgrade ...' at all. I'll give that a try the next time
the system
I would in this case also immediately export the pool (to prevent any
write attempts) and see about a firmware update for the failed drive
(probably need Windows for this).
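Something like this, assuming the pool is called tank:

  # stop all I/O to the pool's disks before touching firmware
  zpool export tank
  # ...update the drive firmware from the vendor's tool, then...
  zpool import tank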
On Jan 20, 2009, at 3:22 AM, zfs user zf...@itsbeen.sent.com wrote:
I would get a new 1.5 TB and
I'm having a very similar issue. I just updated to Solaris 10 u6 and upgraded
my zpools. They are fine (all 3-way mirrors), but I've lost the machine around
12:30am two nights in a row.
I'm booting ZFS root pools, if that makes any difference.
I also don't see anything in dmesg, and nothing on the console.
I am directly on the console. cde-login is disabled, so I'm dealing with
direct entry.
Are you directly on the console, or is the console on a serial port? If you
are running over X windows, the input might still get in, but X may not be
displaying.
If keyboard input is not
Isn't upstream much slower than downstream when using DSL?
Blake
On Dec 1, 2008, at 7:42 PM, Francois Dion [EMAIL PROTECTED]
wrote:
Source is local to rsync, copying from a zfs file system,
destination is remote over a dsl connection. Takes forever to just
go through the unchanged files.
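If the destination also runs ZFS (an assumption), incremental send/receive
avoids the per-file scan entirely; a sketch, with made-up snapshot names @mon
and @tue and a pool named backup on the remote host:

  # send only the blocks that changed between the two snapshots
  zfs snapshot tank/data@tue
  zfs send -i tank/data@mon tank/data@tue | ssh remotehost zfs receive backup/data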
I've used that tool only with the Marvell chipset that ships with the
Thumpers (in a Supermicro HBA).
Have you looked at cfgadm?
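For example (the sata1/3 attachment point is made up; take the real IDs from
the listing):

  # list attachment points and their occupant/configuration state
  cfgadm -al
  # take a disk offline before pulling it, then bring the replacement up
  cfgadm -c unconfigure sata1/3
  cfgadm -c configure sata1/3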
Blake
On Dec 1, 2008, at 7:49 PM, [EMAIL PROTECTED] wrote:
(http://cuddletech.com/blog/pivot/entry.php?id=993). Will the SUNWhd
can't dump all SMART data, but
As jritorto notes, I think the issue here is whether the fix has been
backported to Solaris 10 5/08 or 10/08. It's a nasty problem to run into on a
production machine. In my case, I'm restoring from tape because my pool went
corrupt while waiting for resilvers to finish, which were getting
Did you follow the instructions for updating grub after the image-update?
http://opensolaris.org/jive/thread.jspa?messageID=277115&tstart=0
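Roughly this, though the slice c0t0d0s0 is only an example; use whatever
device your root pool actually lives on:

  # reinstall the boot blocks that match the updated boot environment
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0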
I've confirmed the problem with automatic resilvers as well. I will see about
submitting a bug.
Looks like there is a closed bug for this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6655927
It's been closed as 'not reproducible', but I can reproduce it consistently
on Solaris 10 5/08. How can I re-open this bug?
I'm using a pair of Supermicro AOC-SAT2-MV8 on a fully patched install of
I'm also very interested in this. I'm having a lot of pain with status
requests killing my resilvers. In the example below I was trying to see
whether timf's auto-snapshot service was killing my resilver, only to find
that calling zpool status seems to be the issue.
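For anyone trying to reproduce this, the test amounts to something like the
following (pool and device names are assumed):

  # kick off a resilver
  zpool replace tank c2t0d0 c2t1d0
  # as an unprivileged user: progress keeps advancing
  zpool status tank
  # as root: on the affected builds the resilver restarts from 0%
  zpool status tank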
On 10/13/08, Richard Elling [EMAIL PROTECTED] wrote:
Blake Irvin wrote:
I'm also very interested in this. I'm having a lot of pain with status
requests killing my resilvers. In the example below I was trying to test
to see if timf's auto-snapshot service was killing my resilver, only
I'm using Neelakanth's arcstat tool to troubleshoot performance problems with
a ZFS filer we have, sharing home directories to a CentOS frontend Samba box.
Output shows an ARC target size of 1G, which I find odd, since I haven't tuned
the ARC, and the system has 4G of RAM. prstat -a tells me
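The same numbers are visible directly via kstat, which is handy for
sanity-checking what arcstat reports:

  kstat -p zfs:0:arcstats:size    # current ARC size
  kstat -p zfs:0:arcstats:c       # current ARC target
  kstat -p zfs:0:arcstats:c_max   # ceiling the target can grow to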
I think I need to clarify a bit.
I'm wondering why the ARC size is staying so low, when I have 10 NFS clients
and about 75 SMB clients accessing the store via resharing (on one of the 10
Linux NFS clients) of the ZFS/NFS export. Or is it normal for the ARC target
and ARC size to match? Of note, I
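For what it's worth, if the ARC had been capped somewhere it would show up in
/etc/system; a cap is normally set with a line like this (the 3 GB value is
only an example):

  * cap the ARC at 3 GB; value is in bytes
  set zfs:zfs_arc_max=0xC0000000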
I was doing a manual resilver, not one triggered by spares. I still suspect
the issue comes from your script running as root, which is common for
reporting scripts.
Is there a bug for the behavior noted in the subject line of this post?
Running 'zpool status' or 'zpool status -xv' during a resilver as a
non-privileged user has no adverse effect, but if I do the same as root, the
resilver restarts.
While I'm not running OpenSolaris here, I feel this is a
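A simple workaround until it's fixed is to run periodic checks as a non-root
user (the 'monitor' account and pool name here are assumptions):

  # from root's crontab: drop privileges just for the status call
  su - monitor -c '/usr/sbin/zpool status -xv tank'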
Hmm. That's kind of sad. I grabbed the latest Areca drivers and haven't had a
speck of trouble. Was the driver revision specified in the docs you read
actually the latest one?
Flash boot does seem nice in a way, since Solaris writes to the boot volume so
seldom on a machine that has enough RAM to
This Areca card is Solaris Certified (so says the HCL) and not that expensive:
http://www.sun.com/bigadmin/hcl/data/components/details/1179.html
We are currently using the 2-port Areca card SilMech offers for boot, and 2 of
the Supermicro/Marvell cards for our array. Silicon Mechanics gave us great
support and burn-in testing for Solaris 10. Talk to a sales rep there and I
don't think you will be disappointed.
cheers,
Blake
Truly :)
I was planning something like 3 pools concatenated. But we are only populating
12 bays at the moment.
Blake
The only supported controller I've found is the Areca ARC-1280ML. I want to
put it in one of the 24-disk Supermicro chassis that Silicon Mechanics builds.
Has anyone had success with this card and this kind of chassis/number of drives?
cheers,
Blake