You could list by inode, then use find with rm:
# ls -i
7223 -O
(the file named '-O' has inode 7223)
# find . -inum 7223 -exec rm {} \;
David
On 11/23/11 2:00 PM, Jason King (Gmail) jason.brian.k...@gmail.com
wrote:
Did you try rm -- filename?
Sent from my iPhone
On Nov 23, 2011, at 1:43 PM, Harry Putnam
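For completeness, both suggestions handle names that begin with a dash; a minimal sketch, assuming the file is the '-O' from the ls -i output above:
# rm -- -O
# rm ./-O
The '--' ends option parsing; the './' prefix avoids the option parser entirely.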
Cindy,
I gave your suggestion a try. I did the zpool clear and then did another zpool
scrub and all is happy now. Thank you for your help.
David
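For anyone searching the archives later, the sequence amounts to this (pool name 'tank' assumed here, it isn't given in the thread):
# zpool clear tank
# zpool scrub tank
# zpool status tank
zpool clear resets the error counters, the scrub re-reads and verifies every block, and zpool status should then show no known data errors.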
Cindy,
Thanks for the reply. I'll give that a try and then send an update.
Thanks,
David
I recently had an issue with my LUNs from our storage unit going offline. This
caused the zpool to log numerous errors on the LUNs. The pool is online, and
I did a scrub, but one of the raidz sets is degraded:
raidz2-3 DEGRADED 0 0 0
On 6/22/11 10:28 PM, Fajar A. Nugraha w...@fajar.net wrote:
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith smith...@llnl.gov wrote:
When I tried out Solaris 11, I just exported the pool prior to the install of
Solaris 11. I was lucky in that I had mirrored the boot drive, so after I
had
/pci10de,376@e/pci1000,3150@0/sd@3c,0:a'
whole_disk=1
create_txg=269718
rewind_txg_ts=1308690257
bad config type 7 for seconds_of_rewind
verify_data_errors=0
Please let me know if you need more info...
Thanks,
David W. Smith
I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11. Now when I
try to do an import I'm getting the following:
# /home/dws# zpool import
An update:
I had mirrored my boot drive when I installed Solaris 10U9 originally, so I
went ahead and rebooted the system to this disk instead of my Solaris 11
install. After getting the system up, I imported the zpool, and everything
worked normally.
So I guess there is some sort of
On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:
On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
# /home/dws# zpool import
pool: tank
id: 13155614069147461689
state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot
Disk /dev/zvol/rdsk/pool/dcpool: 4295GB
Sector size (logical/physical): 512B/512B
Just to check, did you already try:
zpool import -d /dev/zvol/rdsk/pool/ poolname
?
thanks Andy.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Still I wonder what Gartner means by Oracle monetizing ZFS...
It simply means that Oracle want to make money from ZFS (as is normal
for technology companies with their own technology). The reason this
might cause uncertainty for ZFS is that maintaining or helping make
the open source
Hi,
see the seeksize script on this URL:
http://prefetch.net/articles/solaris.dtracetopten.html
Not used it but looks neat!
cheers Andy.
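It should run like any other DTrace script; a sketch, assuming it is saved as seeksize.d (DTrace needs root):
# dtrace -s seeksize.d
then Ctrl-C once you have gathered enough, which should print the seek-distance distribution.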
Hi,
I am using FreeBSD 8.2 in production with ZFS. Although I have had
one issue with it in the past, I would recommend it and I consider
it production ready. That said, if you can wait for FreeBSD 8.3 or 9.0
to come out (a few months away) you will get a better system, as these
will
On Feb 16, 2011, at 7:38 AM, whitetr6 at gmail.com wrote:
My question is about the initial seed of the data. Is it possible
to use a portable drive to copy the initial zfs filesystem(s) to the
remote location and then make the subsequent incrementals over the
network? If so, what would I
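In outline, yes. A minimal sketch with invented names (tank/data locally, a pool called 'portable' on the external drive, 'backup' at the remote site):
# zfs snapshot tank/data@seed
# zfs send tank/data@seed | zfs receive portable/data
Ship the drive, import 'portable' at the remote site and receive the seed into the remote pool; later incrementals can then go over the network:
# zfs send -i @seed tank/data@next | ssh remotehost zfs receive backup/data
The incremental only works while the receiving side still has the @seed snapshot, so don't destroy it.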
It is a 4k sector drive, but I thought zfs recognised those drives and didn't
need any special configuration...?
4k drives are a big problem for ZFS, much has been posted/written
about it. Basically, if the 4k drives report 512 byte blocks, as they
almost all do, then ZFS does not detect
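You can check what alignment a pool actually got with zdb (pool name assumed here; the output line is illustrative):
# zdb -C tank | grep ashift
ashift: 9
ashift=9 means 512-byte-aligned I/O; a pool built for 4k sectors would show ashift=12.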
Basically I think yes you need to add all the vdevs you require in the
circumstances you describe.
You just have to consider what ZFS is able to do with the disks that
you give it. If you have 4x mirrors to start with then all writes will
be spread across all disks and you will get nice
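For example, growing the 4x-mirror pool described above by one more mirror vdev (device names invented):
# zpool add tank mirror c5t0d0 c5t1d0
New writes are then striped across all five mirrors.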
Quoting Bob Friesenhahn bfrie...@simple.dallas.tx.us:
What function is the system performing when it is so busy?
The work load of the server is SMTP mail server, with associated spam
and virus scanning, and serving maildir email via POP3 and IMAP.
Wrong conclusion. I am not sure what
Ok, I think I have found the biggest issue. The drives are 4k sector drives,
and I wasn't aware of that. My fault, I should have checked this. I'd had
the disks for ages and they are sub-1TB, so I assumed they wouldn't
be 4k drives...
I will obviously have to address this, either by creating a pool
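One workaround that gets mentioned a lot, at least on FreeBSD, is the gnop trick: create a transparent provider that advertises 4k sectors, build the pool on it, and the pool keeps the 4k ashift afterwards. A sketch with an invented device name, and it may not carry over to other platforms:
# gnop create -S 4096 /dev/ada1
# zpool create tank /dev/ada1.nop
# zpool export tank
# gnop destroy /dev/ada1.nop
# zpool import tank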
, there is little indication of any progress being made.
Maybe some other 'zfs-discuss' readers could try zdb on their pools,
if using a recent dev build, and see if they get a similar problem...
Thanks
Nigel Smith
# mdb core
Loading modules: [ libumem.so.1 libc.so.1 libzpool.so.1 libtopo.so.1
libavl.so.1
?
And what device driver is the controller using?
Thanks
Nigel Smith
Hello Carsten
Have you examined the core dump file with mdb ::stack
to see if this gives a clue to what happened?
Regards
Nigel
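For reference, a minimal sketch of that, using the core file from the post above:
# mdb core
> ::stack
::status in the same session is sometimes worth a look too.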
The iSCSI COMSTAR Port Provider is not installed by default.
What release of OpenSolaris are you running?
If pre snv_133 then:
$ pfexec pkg install SUNWiscsit
For snv_133, I think it will be:
$ pfexec pkg install network/iscsi/target
Regards
Nigel Smith
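After installing the package the target service still has to be enabled; this is from memory, so verify against your build:
$ pfexec svcadm enable -r svc:/network/iscsi/target:default
$ svcs | grep iscsi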
-for-iscsi-and-nfs-over-1gb-ethernet
BTW, what sort of network card are you using,
as this can make a difference.
Regards
Nigel Smith
http://www.cuddletech.com/blog/pivot/entry.php?id=820
Regards
Nigel Smith
Another thing you could check, which has been reported to
cause a problem, is if network or disk drivers share an interrupt
with a slow device, like say a usb device. So try:
# echo ::interrupts -d | mdb -k
... and look for multiple driver names on an INT#.
Regards
Nigel Smith
Hi Robert
Have a look at these links:
http://delicious.com/nwsmith/opensolaris-nas
Regards
Nigel Smith
If Native IDE is selected, the ICH10 SATA interface should
appear as two controllers: the first for ports 0-3,
and the second for ports 4-5.
Regards
Nigel Smith
high %b.
And it's strange that you have c7, c8, c9, c10 and c11,
which looks like FIVE controllers!
Regards
Nigel Smith
More ZFS goodness putback before close of play for snv_128.
http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010768.html
http://hg.genunix.org/onnv-gate.hg/rev/216d8396182e
Regards
Nigel Smith
to raise the priority on
his todo list.
Thanks
Nigel Smith
/src/uts/common/io/sata/adapters/
Regards
Nigel Smith
Hi Robert
I think you mean snv_128 not 126 :-)
6667683 need a way to rollback to an uberblock from a previous txg
http://bugs.opensolaris.org/view_bug.do?bug_id=6667683
http://hg.genunix.org/onnv-gate.hg/rev/8aac17999e4d
Regards
Nigel Smith
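For anyone finding this later: as I understand the putback, the user-visible side is the -F (rewind) option to zpool import (pool name assumed):
# zpool import -Fn tank
# zpool import -F tank
The -n form is a dry run that reports what would be done; the real -F import discards the last few transactions to reach a good txg, so treat it as a last resort.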
Hi Gary
I will let 'website-discuss' know about this problem.
They normally fix issues like that.
Those pages always seemed to just update automatically.
I guess it's related to the website transition.
Thanks
Nigel Smith
Ok, thanks everyone then (but still thanks to Victor for the heads up) :-)
On Mon, Nov 2, 2009 at 4:03 PM, Victor Latushkin
victor.latush...@sun.com wrote:
On 02.11.09 18:38, Ross wrote:
Double WOHOO! Thanks Victor!
Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)
the dev
repository will be updated to snv_128.
Then we see if any bugs emerge as we all rush to test it out...
Regards
Nigel Smith
This is opensolaris on a Tecra M5 using an 128GB SSD as the boot device. This
device is partitioned into two roughly 60GB partitions.
I installed opensolaris 2009.06 into the first partition then did an image
update to build 124 from the dev repository. All went well so then I created a
Hi, I'm setting up a ZFS environment running on a Sun x4440 + J4400 arrays
(similar to 7410 environment) and I was trying to figure out the best way to
map a disk drive physical location (tray and slot) to the Solaris device
c#t#d#. Do I need to install the CAM software to do this, or is
I am just a simple home user. When I was using linux, I backed up my home
directory (which contained all my critical data) using tar. I backed up my
linux partition using partimage. These backups were put on dvd's. That way I
could restore (and have) even if the hard drive completely went belly
Let me try rephrasing this. I would like the ability to restore so my system
mirrors its state at the time when I backed it up given the old hard drive is
now a door stop.
Cork
that anyone
using raidz, raidz2, raidz3, should not upgrade to that release?
For the people who have already upgraded, presumably the
recommendation is that they should revert to a pre 121 BE.
Thanks
Nigel Smith
server by using
zpool history oradata
That's awesome - thank you very much!
S.
--
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com
you.
I'm not sure about the remote mount. It appears to be a local SMB resource
mounted as NFS? I've never seen that before.
Ah that's just a Sharity mount - it's a red herring. u0[1-4] will be the same.
Thanks very much,
S.
--
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
made,
or to actively help with code reviews or testing.
Best Regards
Nigel Smith
held back on
announcing the work on deduplication, as it just seems to
have ramped up frustration, now that it seems no
more news is forthcoming. It's easy to be wise after the event
and time will tell.
Thanks
Nigel Smith
Yup, somebody pointed that out to me last week and I can't wait :-)
On Wed, Jul 29, 2009 at 7:48 PM, Dave dave-...@dubkat.com wrote:
Anyone (Ross?) creating ZFS pools over iSCSI connections will want to pay
attention to snv_121 which fixes the 3 minute hang after iSCSI disk
problems:
David Magda wrote:
This is also (theoretically) why a drive purchased from Sun is more
expensive than a drive purchased from your neighbourhood computer
shop: Sun (and presumably other manufacturers) takes the time and
effort to test things to make sure that when a drive says I've
I have the following configuration.
My storage:
12 luns from a Clariion 3x80. Each LUN is a whole 6 disk raid-6.
My host:
Sun t5240 with 32 hardware threads and 16gig of ram.
My zpool:
all 12 luns from the clariion in a simple pool
My test data:
A 1 gig backup file of a ufsdump from /opt on a
options are there, and what advice/experience can you share?
Thanks,
S.
--
Stephen Nelson-Smith
Technical Director
Atalanta Systems Ltd
www.atalanta-systems.com
/message.jspa?messageID=318009
On Fri, Feb 13, 2009 at 11:09 PM, Richard Elling
richard.ell...@gmail.com wrote:
Tim wrote:
On Fri, Feb 13, 2009 at 4:21 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross Smith wrote
On Fri, Feb 13, 2009 at 7:41 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross wrote:
Something like that will have people praising ZFS' ability to safeguard
their data, and the way it recovers even after system crashes or when
hardware has gone wrong. You
On Fri, Feb 13, 2009 at 8:24 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross Smith wrote:
You have to consider that even with improperly working hardware, ZFS
has been checksumming data, so if that hardware has been working for
any length of time, you *know
be needed.
On Fri, Feb 13, 2009 at 8:59 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Fri, 13 Feb 2009, Ross Smith wrote:
Thinking about this a bit more, you've given me an idea: Would it be
worth ZFS occasionally reading previous uberblocks from the pool, just
to check
Heh, yeah, I've thought the same kind of thing in the past. The
problem is that the argument doesn't really work for system admins.
As far as I'm concerned, the 7000 series is a new hardware platform,
with relatively untested drivers, running a software solution that I
know is prone to locking
That would be the ideal, but really I'd settle for just improved error
handling and recovery for now. In the longer term, disabling write
caching by default for USB or Firewire drives might be nice.
On Thu, Feb 12, 2009 at 8:35 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Thu, Feb 12, 2009
I can check on Monday, but the system will probably panic... which
doesn't really help :-)
Am I right in thinking failmode=wait is still the default? If so,
that should be how it's set as this testing was done on a clean
install of snv_106. From what I've seen, I don't think this is a
problem
the cache should be writing).
On Fri, Feb 6, 2009 at 7:04 PM, Brent Jones br...@servuhome.net wrote:
On Fri, Feb 6, 2009 at 10:50 AM, Ross Smith myxi...@googlemail.com wrote:
I can check on Monday, but the system will probably panic... which
doesn't really help :-)
Am I right in thinking
It's not intuitive because when you know that -o sets options, an
error message saying that it's not a valid property makes you think
that it's not possible to do what you're trying.
Documented and intuitive are very different things. I do appreciate
that the details are there in the manuals,
That's my understanding too. One (STEC?) drive as a write cache,
basically a write-optimised SSD. And cheaper, larger, read-optimised
SSDs for the read cache.
I thought it was an odd strategy until I read into SSDs a little more
and realised you really do have to think about your usage cases
What does the 'verbose information' reported by zfs send -v snapshot contain?
Also on Solaris 10u6 I don't get any output at all - is this a bug?
Regards,
Nick
Hmm... that's a tough one. To me, it's a trade off either way, using
a -r parameter to specify the depth for zfs list feels more intuitive
than adding extra commands to modify the -r behaviour, but I can see
your point.
But then, using -c or -d means there's an optional parameter for zfs
list
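To make the depth idea concrete, a sketch with an invented pool name, showing only the pool and its immediate children rather than the whole tree:
# zfs list -d 1 tank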
I was wondering if anyone has any experience with how long a zfs destroy of
about 40 TB should take? So far, it has been about an hour... Is there any
good way to tell if it is working or if it is hung?
Doing a zfs list just hangs. If you do a more specific zfs list, then it is
okay... zfs
A few more details:
The system is a Sun x4600 running Solaris 10 Update 4.
On Thu, 2009-01-08 at 13:26 -0500, Brian H. Nelson wrote:
David Smith wrote:
I was wondering if anyone has any experience with how long a zfs destroy
of about 40 TB should take? So far, it has been about an hour... Is there
any good way to tell if it is working or if it is hung
On Fri, Dec 19, 2008 at 6:47 PM, Richard Elling richard.ell...@sun.com wrote:
Ross wrote:
Well, I really like the idea of an automatic service to manage
send/receives to backup devices, so if you guys don't mind, I'm going to
share some other ideas for features I think would be useful.
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
intelligently, not as cXtYdZ!
Yup, and that's easily achieved by simply prompting for a user
friendly name as devices are attached. Now you could
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Thu, Dec 18, 2008 at 07:05:44PM +, Ross Smith wrote:
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
Of course, you'll need some settings for this so it's not annoying if
people don't want to use it. A simple tick box on that pop up dialog
allowing people to say don't ask me again would probably do.
I would like something better than that. Don't ask me again sucks
when much, much later
I was thinking more something like:
- find all disk devices and slices that have ZFS pools on them
- show users the devices and pool names (and UUIDs and device paths in
case of conflicts)..
I was thinking that device pool names are too variable, you need to
be reading serial numbers
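The closest existing behaviour is a bare zpool import, which already scans devices for pool labels and prints names, GUIDs and device paths; the output shape, borrowing the example from earlier in this digest:
# zpool import
  pool: tank
    id: 13155614069147461689
 state: ONLINE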
Forgive me for not understanding the details, but couldn't you also
work backwards through the blocks with ZFS and attempt to recreate the
uberblock?
So if you lost the uberblock, could you (memory and time allowing)
start scanning the disk, looking for orphan blocks that aren't
referenced
I'm not sure I follow how that can happen, I thought ZFS writes were
designed to be atomic? They either commit properly on disk or they
don't?
On Mon, Dec 15, 2008 at 6:34 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 15 Dec 2008, Ross wrote:
My concern is that ZFS has all
Ahhh...I missed the difference between a volume and a FS. That was it...thanks.
When I create a volume I am unable to mount it locally. I'm pretty sure it has
something to do with the other volumes in the same ZFS pool being shared out as
iSCSI LUNs. For some reason ZFS thinks the base volume is iSCSI. Is there a
flag that I am missing? Thanks in advance for the help.
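The 'Ahhh' reply above gives the answer: it's the volume/filesystem distinction. A minimal sketch (names invented): a ZFS volume is a block device under /dev/zvol and is never mounted locally, while a filesystem is what gets a mountpoint:
# zfs create -V 10g tank/myvol
# zfs create tank/myfs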
Hi Dan, replying in line:
On Fri, Dec 5, 2008 at 9:19 PM, David Anderson [EMAIL PROTECTED] wrote:
Trying to keep this in the spotlight. Apologies for the lengthy post.
Heh, don't apologise, you should see some of my posts... o_0
I'd really like to see features as described by Ross in his
Yeah, thanks Maurice, I just saw that one this afternoon. I guess you
can't reboot with iscsi full stop... o_0
And I've seen the iscsi bug before (I was just too lazy to look it up
lol), I've been complaining about that since February.
In fact it's been a bad week for iscsi here, I've managed
Hey folks,
I've just followed up on this, testing iSCSI with a raided pool, and
it still appears to be struggling when a device goes offline.
I don't see how this could work except for mirrored pools. Would that
carry enough market to be worthwhile?
-- richard
I have to admit, I've not
Hi Richard,
Thanks, I'll give that a try. I think I just had a kernel dump while
trying to boot this system back up though, I don't think it likes it
if the iscsi targets aren't available during boot. Again, that rings
a bell, so I'll go see if that's another known bug.
Changing that setting
On Fri, Nov 28, 2008 at 5:05 AM, Richard Elling [EMAIL PROTECTED] wrote:
Ross wrote:
Well, you're not alone in wanting to use ZFS and iSCSI like that, and in
fact my change request suggested that this is exactly one of the things that
could be addressed:
The idea is really a two stage RFE,
Hey Jeff,
Good to hear there's work going on to address this.
What did you guys think to my idea of ZFS supporting a waiting for a
response status for disks as an interim solution that allows the pool
to continue operation while it's waiting for FMA or the driver to
fault the drive?
I do
PS. I think this also gives you a chance at making the whole problem
much simpler. Instead of the hard question of "is this faulty?",
you're just trying to say "is it working right now?".
In fact, I'm now wondering if the "waiting for a response" flag
wouldn't be better as "possibly faulty". That way
No, I count that as "doesn't return data ok", but my post wasn't very
clear at all on that.
Even for a write, the disk will return something to indicate that the
action has completed, so that can also be covered by just those two
scenarios, and right now ZFS can lock the whole pool up if it's
Hmm, true. The idea doesn't work so well if you have a lot of writes,
so there needs to be some thought as to how you handle that.
Just thinking aloud, could the missing writes be written to the log
file on the rest of the pool? Or temporarily stored somewhere else in
the pool? Would it be an
The shortcomings of timeouts have been discussed on this list before. How do
you tell the difference between a drive that is dead and a path that is just
highly loaded?
A path that is dead is either returning bad data, or isn't returning
anything. A highly loaded path is by definition reading
of that.
On Tue, Nov 25, 2008 at 3:57 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Tue, 25 Nov 2008, Ross Smith wrote:
Good to hear there's work going on to address this.
What did you guys think to my idea of ZFS supporting a waiting for a
response status for disks as an interim solution
/zfs-discuss/2008-May/047270.html
Regards
Nigel Smith
Snapshots are not replacements for traditional backup/restore features.
If you need the latter, use what is currently available on the market.
-- richard
I'd actually say snapshots do a better job in some circumstances.
Certainly they're being used that way by the desktop team:
If the file still existed, would this be a case of redirecting the
file's top level block (dnode?) to the one from the snapshot? If the
file had been deleted, could you just copy that one block?
Is it that simple, or is there a level of interaction between files
and snapshots that I've
Hi Darren,
That's storing a dump of a snapshot on external media, but files
within it are not directly accessible. The work Tim et al. are doing
is actually putting a live ZFS filesystem on external media and
sending snapshots to it.
A live ZFS filesystem is far more useful (and reliable) than
any good while that is happening.
I think you need to try a different network card in the server.
Regards
Nigel Smith
is closed source :-(
Regards
Nigel Smith
be interesting to do two separate captures - one on the client
and one on the server, at the same time, as this would show if the
switch was causing disruption. Try to have the clocks on the client and
server synchronised as closely as possible.
Thanks
Nigel Smith
' for the network card, just in case it turns out to be a driver bug.
Regards
Nigel Smith
, that's my conclusion for now.
Maybe you could get some more snoop captures with other clients, and
with a different switch, and do a similar analysis.
Regards
Nigel Smith
If you're using Solaris, maybe try 'prtvtoc':
http://docs.sun.com/app/docs/doc/819-2240/prtvtoc-1m?a=view
(Unless someone knows a better way?)
Thanks
Nigel Smith
# prtvtoc /dev/rdsk/c1t1d0
* /dev/rdsk/c1t1d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 1465149168 sectors
* 1465149101 accessible
/2007/660/onepager/
http://bugs.opensolaris.org/view_bug.do?bug_id=5044205
Regards
Nigel Smith
'status' of your zpool on Server2?
(You have not provided a 'zpool status')
Thanks
Nigel Smith
'smartctl' (fully) working with PATA and
SATA drives on x86 Solaris.
I've done a quick search on PSARC 2007/660 and it was
closed approved fast-track 11/28/2007.
I did a quick search, but I could not find any code that had been
committed to 'onnv-gate' that references this case.
Regards
Nigel Smith