Modify Your Subscription:
https://www.listbox.com/member/?member_id=21175057id_secret=21175057-02786781
Powered by Listbox: http://www.listbox.com
--
Eric Schrock
Delphix
http://blog.delphix.com/eschrock
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
every day :-)
http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/mkdir.c
Thanks,
- Eric
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
it with the 'compression'
property on a per-filesystem level, and is fundamentally per-block. Dedup
is also controlled per-filesystem, though the DDT is global to the pool.
If you think there are compelling features lurking here, then by all means
grab the code and run with it :-)
- Eric
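For reference, both properties are managed with the standard zfs(1M)/zpool(1M) commands. A minimal sketch, assuming a hypothetical pool 'tank' with a dataset 'tank/data':

```shell
# Compression is a per-filesystem property, applied per-block at write time:
zfs set compression=on tank/data

# Dedup is likewise enabled per-filesystem:
zfs set dedup=on tank/data

# ...but the dedup table (DDT) is shared across the whole pool:
zpool get dedupratio tank
```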
Webrev has been updated:
http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
- Eric
will not reuse blocks for 3 transaction groups. This is why uberblock
rollback will normally only attempt a rollback of up to two previous txgs.
- Eric
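This txg headroom is what makes recovery imports feasible. A sketch of how the rewind is exposed through zpool import's recovery option, assuming a hypothetical pool name and a build recent enough to have -F:

```shell
# Recovery import: discard the last few txgs and roll back to an
# earlier uberblock if the most recent one points at damaged state.
zpool import -F tank

# Dry run: report whether the rewind would succeed, without doing it.
zpool import -Fn tank
```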
--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock
the right thing for checksum
errors. That is a very small subset of possible device failure modes.
- Eric
block. If it's even possible to implement this
paranoid ZIL tunable, are you willing to take a 2-5x performance hit to be
able to detect this failure mode?
- Eric
would
certainly be appreciated. This possibility was originally why the 'snapdir'
property was named as it was, so we could someday support 'snapdir=every' to
export .zfs in every directory.
- Eric
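For reference, the property as shipped only takes two values; the 'every' value above was an envisioned extension that was never implemented. A sketch with a hypothetical dataset:

```shell
# snapdir currently controls only the root-level .zfs directory:
zfs set snapdir=visible tank/home   # expose tank/home/.zfs
zfs set snapdir=hidden tank/home    # hide it again (the default)

# 'snapdir=every' (export .zfs in every directory) does not exist today.
```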
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote:
On 18/06/2010 00:18, Garrett D'Amore wrote:
On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote:
On the SS7000 series, you get an alert that the enclosure has been detached
from the system. The fru-monitor code (generalization
is very bad (fault = broken hardware = service call = $$$).
- Eric
P.S. the bug in the ZFS scheme module is legit, we just haven't fixed it yet
for a new
class of HBAs/drivers, that'd be more than welcome. If you choose to represent
missing devices as faulted in your own third party system, that's your own
prerogative, but it's not the current Solaris FMA model.
Hope that helps,
- Eric
with I/O
errors in zfs-diagnosis that gives some grace period to detect removal before
declaring a disk faulted.
- Eric
, not 30 hours). Expect
to see fixes for these remaining issues in the near future.
- Eric
On Apr 5, 2010, at 11:43 AM, Garrett D'Amore wrote:
I see ereport.fs.zfs.io_failure, and ereport.fs.zfs.probe_failure. Also,
ereport.io.service.lost and ereport.io.device.inval_state. There is indeed a
fault.fs.zfs.device in the list as well.
The ereports are not interesting, only the
a 'zpool
replace c2t3d1 c2t3d2' by hand succeed?
- Eric
that the spares should be re-evaluated when they
become available at a later point in time. Certainly a reasonable RFE, but not
something ZFS does today.
You can 'zpool attach' the spare like a normal device - that's all that the
retire agent is doing under the hood.
Hope that helps,
- Eric
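A sketch of that manual equivalent, with hypothetical pool and device names (c2t5d0 standing in as the spare, c2t0d0 as the device it protects):

```shell
# Attach the spare alongside the existing device; it resilvers and
# then behaves like an ordinary mirror member.
zpool attach tank c2t0d0 c2t5d0

zpool status tank            # watch resilver progress

# Once resilvered, the old device can be dropped if desired:
zpool detach tank c2t0d0
```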
that free operations used to be in-memory
only but with dedup enabled can result in synchronous I/O to disks in
syncing context.
- Eric
if it is indeed just missing.
#2 is being worked on, but also does not affect the standard reboot case.
- Eric
On 01/11/10 17:42, Paul B. Henson wrote:
On Sat, 9 Jan 2010, Eric Schrock wrote:
No, it's fine. DEGRADED just means the pool is not operating at the
ideal state. By definition a hot spare is always DEGRADED. As long as
the spare itself is ONLINE it's fine.
One more question on this; so
On Jan 11, 2010, at 6:35 PM, Paul B. Henson wrote:
On Mon, 11 Jan 2010, Eric Schrock wrote:
No, there is no way to tell if a pool has DTL (dirty time log) entries.
Hmm, I hadn't heard that term before, but based on a quick search I take it
that's the list of data in the pool
Hope that helps,
- Eric
On 11/09/09 12:58, Brent Jones wrote:
Are these recent developments due to help/support from Oracle?
No.
Or is it business as usual for ZFS developments?
Yes.
- Eric
= 0x179e471c0732582
(end resource)
(end fault-list[0])
So, I'm still stumped.
a better job of staying in sync
We've followed Eric's work on ZFS device enumeration for the Fishwork project
with great interest - hopefully this will eventually get extended to the fmdump
output as suggested.
Yep, we're working on it ;-)
- Eric
to include this use case.
- Eric
0
c0t7d0 ONLINE 0 0 0 48.5K resilvered
errors: No known data errors
On 10/14/09 15:23, Eric Schrock wrote:
On 10/14/09 14:17, Cindy Swearingen wrote:
Hi Jason,
I think you are asking how do you tell ZFS that you want to replace the
failed disk c8t7d0
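The replacement itself is a single command. A sketch using the disk named above and a hypothetical pool 'tank':

```shell
# Same-slot replacement: the new disk went in where c8t7d0 was.
zpool replace tank c8t7d0

# Or, if the replacement sits in a different slot:
zpool replace tank c8t7d0 c8t8d0

# Watch the resilver:
zpool status tank
```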
disks, this can end
up taking a very long time.
The ZFS team is actively working on improvements in this area.
- Eric
the translation via SATL is done in
hardware, not software.
- Eric
be cleaned up to bail when
processing an invalid record. I can file a CR for you if you haven't
already done so. Also, I'd encourage any developers out there with
one of these drives to take a shot at fixing the issue via the
OpenSolaris sponsor process.
- Eric
, while we (via SATL) keep
going. But any way you slice it, the drive is returning invalid data.
- Eric
and the translated SCSI data. My guess is that it just gives up
at the first invalid version record, something we should probably be
doing.
- Eric
some correspondence with Eric Schrock who
indicated it looked like a combination of buggy Intel firmware and a
bug in
the Solaris SATL driver, but haven't heard back from him as to
whether they
might fix it.
It's clearly bad firmware - there's no bug in the sata driver. That
drive basically
on-disk format.
The pool can
still be used, but some features are unavailable.
The system pool will never be upgraded - there is no point.
- Eric
devices or not.
- Eric
context which won't
stop the txg train and admin commands should continue to work.
- Eric
, proftpd
isn't actually at fault (and presumably does the right thing). It was
ultimately a kernel bug.
- Eric
drwx-- 2 root sys 512 13 dic 2007 xprt
the content of the file is not printable.
Maurilio.
matter if reads are fast for slogs.
With the txg being a working set of the active commit, so might be a
set of NFS iops?
If the NFS ops are synchronous, then yes. Async operations do not use
the ZIL and therefore don't have anything to do with slogs.
- Eric
such a pool from being imported.
- Eric
#!/sbin/dtrace -s
#pragma D option quiet
will find
more experts in this area.
for
the software to function correctly.
- Eric
without NSPF, we have
redundant cables, HBAs, power supplies, and controllers, so this is only
required if you are worried about disk backplane failure (a very rare
failure mode).
Can you point to the literature that suggests this is not possible?
- Eric
that the absence of the above (missing or broken disks) meant
supported, but I admit that I did not state that explicitly, and not in
the context of adding storage.
Thanks,
- Eric
There will be much more information throughout the day and in the coming
weeks. If you want to give it a spin, be sure to check out the freely
available VM images.
- Eric
, or is there
something that can be done to get the pool back online at least in degraded
mode?
Thanks in advance,
--Terry.
. one that could repair things automatically) would
actually *do*, let alone how it would work in the variety of situations
it needs to (compressed RAID-Z?) where the standard ZFS infrastructure
fails.
- Eric
[1]
http://mbruning.blogspot.com/2008/08/recovering-removed-file-on-zfs-disk.html
, this is bug 6667208 fixed in build 100 of nevada.
- Eric
understand how this works. Imagine two I/Os, just
with different sd timeouts and retry logic - that's B_FAILFAST. It's
quite simple, and independent of any hardware implementation.
- Eric
On Fri, Aug 15, 2008 at 02:14:02PM -0700, Eric Schrock wrote:
The fact that it's DEGRADED and not FAULTED indicates that it thinks the
DTL (dirty time logs) for the two sides of the mirrors overlap in some
way, so detaching it would result in loss of data. In the process of
doing
(such as sata).
- Eric
=205000c0ff086b4a:server-id=/ses-enclosure=0/bay=11
- Bill
PCI-E half-height slots on
the X4540.
- Eric
On Wed, Jul 09, 2008 at 03:52:27PM -0500, Tim wrote:
Is the 4540 still running a rageXL? I find that somewhat humorous if it's
an Nvidia chipset with ATI video :)
According to SMBIOS there is an on-board device of type AST2000 VGA.
- Eric
are broken, at
least up to B85. I haven't seen code changes in this space so I
presume this is likely an unaddressed problem.
provide more information about how to reproduce this problem?
Perhaps without rebooting into B70 in the middle?
Thanks,
- Eric
On Tue, May 27, 2008 at 01:50:04PM -0700, Eric Schrock wrote:
Yeah, I noticed this the other day while I was working on an unrelated
problem. The basic problem
On Wed, May 21, 2008 at 04:59:54PM -0400, Chris Siebenmann wrote:
[Eric Schrock:]
| Look at alternate cachefiles ('zpool set cachefile', 'zpool import -c
| cachefile', etc). This avoids scanning all devices in the system
| and instead takes the config from the cachefile.
This sounds great
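A sketch of the alternate-cachefile workflow described above, with a hypothetical pool name and path:

```shell
# Record this pool's configuration in an alternate cachefile:
zpool set cachefile=/etc/zfs/alt.cache tank

# Import from the cachefile instead of scanning every device
# in the system:
zpool import -c /etc/zfs/alt.cache tank
```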
this property will be helpful.
Thank you
Ajay
device? I've noticed the former, but not the latter.
The 'failmode' property only applies when writes fail, or
read-during-write dependies, such as the spacemaps. It does not affect
normal reads.
- Eric
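For reference, the property is inspected and set like any other pool property (pool name hypothetical):

```shell
# failmode takes three values:
#   wait     - block I/O until the device recovers (the default)
#   continue - return EIO to new synchronous requests
#   panic    - panic the host
zpool get failmode tank
zpool set failmode=continue tank
```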
On Mon, Feb 18, 2008 at 11:15:34AM -0800, Eric Schrock wrote:
The 'failmode' property only applies when writes fail, or
read-during-write dependies, such as the spacemaps. It does not affect
^
That should read 'dependencies', obviously ;-)
- Eric
is the remainder of the larger disk.
I see no documentation mentioning how to scrub, then
wait-until-completed. I'm happy to be pointed at any such
documentation. I'm also happy to be otherwise clued-in if no such
documentation exists, or if no such feature exists.
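There was no built-in wait option at the time; a common workaround was to poll zpool status from a shell loop. A sketch, assuming a hypothetical pool name (the exact status text varies across releases):

```shell
zpool scrub tank

# Poll until the scrub line no longer reports an in-progress scan:
while zpool status tank | grep -q "in progress"; do
    sleep 60
done

zpool status -x tank   # summarize pool health once the scrub completes
```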
. This is one of those
times when an "Are you sure you...?" would be helpful. :(
benr.
/usr/bin (or have every script export its own PATH)
Yes, please take any philosophical discussions about the choice of PATH
(or the compatibility of GNU utilities) to indiana-discuss. I was just
pointing out the solution to this particular problem ;-)
- Eric
, with no luck. Are these scattered about or
is there some errors.c file I don't know about?
Thanks in advance.
Asa
the FMA sensor
framework to create a unified view through libtopo. Until that's
complete, you'll be stuck using ad hoc methods.
- Eric