Re: [zfs-discuss] opensolaris lightweight install

2010-01-07 Thread William D. Hathaway
The OpenSolaris "Just enough OS" (JeOS) project has been working on making 
stripped down images available for virtual machines as well as automated 
installer profiles.

See: http://hub.opensolaris.org/bin/view/Project+jeos/WebHome
for the project home page.

Also, a frequently updated blog on the topic is:
http://blogs.sun.com/VirtualGuru/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [osol-help] zfs destroy stalls, need to hard reboot

2009-12-30 Thread William D. Hathaway
I know dedup is on the roadmap for the 7000 series, but I don't think it is 
officially supported yet, since we would have seen a note about the software 
release on the FishWorks Wiki:
http://wikis.sun.com/display/FishWorks/Software+Updates
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs pool Configuration "calculator"?

2009-12-24 Thread William D. Hathaway
There is a calculator at Corporate Strategies: 
http://ctistrategy.com/resources/sun-7000-calculator/


Note that if the ctistrategy site is unavailable for some reason, you can also 
just download the free 7000 series virtual appliance, which will run happily in 
VMWare or VirtualBox.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance issues over 2.5 years.

2009-12-16 Thread William D. Hathaway
Hi Yariv -
   It is hard to say without more data, but you might be a victim of "Stop 
looking and start ganging":
http://bugs.opensolaris.org/view_bug.do?bug_id=6596237

It looks like this was fixed in S10u8, which was released last month. 

If you open a support ticket (or search for this bug id on the web), I think 
you should be able to get some DTrace scripts to determine if that bug is 
impacting you.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread William D. Hathaway
If you are using (3) 3511s, isn't it possible that your 3GB workload will be 
largely or entirely served out of RAID controller cache?

Also, a question about your production backups (millions of small files): do 
you have atime=off set on those filesystems?  That might help.
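For example, to check and then disable it (the filesystem name here is just a 
placeholder):
zfs get atime backuppool/smallfiles
zfs set atime=off backuppool/smallfiles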
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slow ls or slow zfs

2009-06-26 Thread William D. Hathaway
As others have mentioned, it would be easier to take a stab at this if there is 
some more data to look at.

Have you done any ZFS tuning?  If so, please provide the details (/etc/system 
settings, adb/mdb tweaks, zfs properties, etc.).

Can you provide zpool status output?

As far as checking ls performance goes, let's use the '-n' option instead of 
'-l' just to remove name service lookups from the possibilities.  I know you 
mentioned it was unlikely to be a problem, but the fewer variables the better.


Can you characterize what your 'ls -an' output looks like?  Is it 100 files or 
100,000?

How about some sample output like:
for run in  1 2 3 4
do
  echo run $run
  truss -c ls -an | wc -l
  echo ""
  echo
done
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread William D. Hathaway
I don't understand your statement/questions.  This wasn't a response to "ZFS 
versus every possible storage platform in the world".  The original poster was 
asking about comparing ZFS versus hardware RAID on the specific machines 
mentioned in the title.  AFAIK you don't get compression, snapshots, and clones 
with standard hardware RAID cards.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-04 Thread William D. Hathaway
Keep in mind that if you use ZFS you get a lot of additional functionality, 
such as snapshots, compression, and clones.
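For example (the pool/filesystem names here are made up for illustration):
zfs set compression=on tank/data
zfs snapshot tank/data@before-change
zfs clone tank/data@before-change tank/data-test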
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RE: rsync using 100% of a cpu

2008-12-02 Thread William D. Hathaway
How are the two sides different?  If you run something like 'openssl md5' on 
both sides, is it much faster on one side?

Does one machine have a lot more memory/ARC and allow it to skip the physical 
reads?  Is the dataset compressed on one side?
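For example, timing a checksum of the same file on each host (the path here is 
hypothetical) gives a rough comparison of raw read speed, keeping in mind that 
a repeat run may be served from the ARC:
time openssl md5 /tank/data/bigfile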
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-10-01 Thread William D. Hathaway
You might also want to try toggling the Nagle TCP setting to see if that helps 
with your workload:
ndd -get /dev/tcp tcp_naglim_def 
(save that value, default is 4095)
ndd -set /dev/tcp tcp_naglim_def 1

If there is no (or a negative) difference, set it back to the original value:
ndd -set /dev/tcp tcp_naglim_def 4095 (or whatever it was)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance degradation when backups are running

2008-09-30 Thread William D. Hathaway
Gary -
   Besides the network questions...

   What does your zpool status look like?


   Are you using compression on the file systems?
   (Compression was single-threaded until it was fixed in s10u4 or equivalent patches)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool error: must be a block device or regular file

2008-09-30 Thread William D. Hathaway
The ZFS kernel modules handle the caching/flushing of data across all the 
devices in the zpools.  ZFS uses a different method for this than the "standard" 
virtual memory system used by traditional file systems like UFS.  Try defining 
your NVRAM card as a ZFS log device using the /dev/dsk/xyz path and let us know 
how it goes.
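For example, assuming the card shows up as c3d1p0 and 'mypool' stands in for 
your pool name:
zpool add mypool log /dev/dsk/c3d1p0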
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool error: must be a block device or regular file

2008-09-29 Thread William D. Hathaway
/dev/rdsk/* devices are character-based devices, not block-based.  In general, 
character-based devices have to be accessed serially (and don't do buffering), 
whereas block devices buffer and allow random access to the data.  If you use:
ls -lL /dev/*dsk/c3d1p0
you should see that /dev/dsk/c3d1p0 is a block device and /dev/rdsk/c3d1p0 is a 
character device (via the first letter of the 'ls' output).
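For illustration, the output should look something like this (permissions and 
the other columns will vary), with a leading 'b' for the block device and 'c' 
for the character device:
# ls -lL /dev/dsk/c3d1p0 /dev/rdsk/c3d1p0
brw-r-----   1 root     sys      ...       /dev/dsk/c3d1p0
crw-r-----   1 root     sys      ...       /dev/rdsk/c3d1p0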

So while /dev/rdsk/xxx and /dev/dsk/xxx point to the same hardware, the access 
methods that are available via the two interfaces are very different.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practices for ZFS plaiding

2008-03-26 Thread William D. Hathaway
If you are using 6 Thumpers via iSCSI to provide storage to your zpool and 
don't use either mirroring or RAIDZ/RAIDZ2 across the Thumpers, then your 
storage pool becomes unavailable whenever any one Thumper goes down.  I think 
you want some form of RAID at both levels.
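For example, a layout along these lines (device names are hypothetical iSCSI 
LUNs, with the c2* LUNs coming from one Thumper and the c3* LUNs from another) 
keeps the pool available if either Thumper fails:
zpool create tank mirror c2t1d0 c3t1d0 mirror c2t2d0 c3t2d0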
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS write time performance question

2007-11-30 Thread William D. Hathaway
In addition to Brendan's advice about benchmarking, it would be a good idea to 
use the newer Solaris release (Solaris 10 08/07), which has a lot of ZFS 
improvements, both in performance and functionality.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS usb keys

2007-06-27 Thread William D. Hathaway
It would be really handy if whoever was responsible for the message at:
http://www.sun.com/msg/ZFS-8000-A5
could add data about which zpool versions  are supported at specific OS/patch 
releases.

The current message doesn't help the user figure out how to accomplish their 
implied task, which is to import the pool on a different system.


Adding the version number of the pool that couldn't be imported to the zpool 
import error message would be nice too.


> > Shouldn't S10u3 just see the newer on-disk format and
> > report that fact, rather than complain it is corrupt?
> 
> Yep, I just tried it, and it refuses to "zpool import" the newer pool,
> telling me about the incompatible version.  So I guess the pool format
> isn't the correct explanation for the Dick Davies' (number9) problem.
> 
> 
> 
> On a S-x86 box running snv_68, ZFS version 7:
> 
> # mkfile 256m /home/leo.nobackup/tmp/zpool_test.vdev
> # zpool create test_pool /home/leo.nobackup/tmp/zpool_test.vdev
> # zpool export test_pool
> 
> 
> On a S-sparc box running snv_61, ZFS version 3
> (I get the same error on S-x86, running S10U2, ZFS version 2):
> 
> # zpool import -d /home/leo.nobackup/tmp/
>   pool: test_pool
>     id: 6231880247307261822
>  state: FAULTED
> status: The pool is formatted using an incompatible version.
> action: The pool cannot be imported.  Access the pool on a system
>         running newer software, or recreate the pool from backup.
>    see: http://www.sun.com/msg/ZFS-8000-A5
> config:
> 
>         test_pool                                  UNAVAIL  newer version
>           /home/leo.nobackup/tmp//zpool_test.vdev  ONLINE
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Reading a ZFS Snapshot

2007-05-18 Thread William D. Hathaway
An example would be if you had a raw snapshot on tape.  A single file or subset 
of files could be restored from it without needing the space to load the full 
snapshot into a zpool.  This would be handy if you have a zpool with 500GB of 
space and 300GB used.  If you had a snapshot that was 250GB and wanted to load 
it back up to restore a file, you wouldn't have sufficient space.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Reading a ZFS Snapshot

2007-05-18 Thread William D. Hathaway
I think it would be handy if a utility could read a full zfs snapshot and 
restore subsets of files or directories, similar to tar -xf or ufsrestore -i.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Status Update before Reinstall?

2007-04-24 Thread William D. Hathaway
I've only used Lori Alt's patch for b62 boot images via jumpstart
(http://www.opensolaris.org/jive/thread.jspa?threadID=28725&tstart=15)
which made it an easy process with mirrored boot ZFS drives and no UFS 
partitions required.  If you have a jumpstart server, I think that is the best 
way to go.

--
William Hathaway
http://www.williamhathaway.com
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs boot image conversion kit is posted

2007-04-21 Thread William D. Hathaway
Hi Lori,
  Thanks to you and your team for posting the zfs boot image kit.  I was able 
to jumpstart a VMWare virtual machine using a Nevada b62 image patched with 
your conversion kit and it went very smoothly.

Here is the profile that I used:
# Jumpstart profile for VMWare image w/ two emulated IDE drives
# ZFS boot settings based off Nevada b62 patched install image
install_type initial_install
cluster SUNWCreq
cluster SUNWCssh
package SUNWbash add
filesys c0d0s1 auto swap
pool bootpool free / mirror c0d0s0 c0d1s0
dataset bootpool/BE1 auto /
dataset bootpool/BE1/usr auto /usr
dataset bootpool/BE1/opt auto /opt
dataset bootpool/BE1/var auto /var
dataset bootpool/BE1/export auto /export

# uname -a
SunOS zfsboot 5.11 snv_62 i86pc i386 i86pc
# df -k
Filesystem            kbytes    used    avail capacity  Mounted on
bootpool/BE1         7676928  353874  7153830     5%    /
/devices                   0       0        0     0%    /devices
/dev                       0       0        0     0%    /dev
ctfs                       0       0        0     0%    /system/contract
proc                       0       0        0     0%    /proc
mnttab                     0       0        0     0%    /etc/mnttab
swap                  300032     364   299668     1%    /etc/svc/volatile
objfs                      0       0        0     0%    /system/object
sharefs                    0       0        0     0%    /etc/dfs/sharetab
bootpool/BE1/usr     7676928  158180  7153830     3%    /usr
/usr/lib/libc/libc_hwcap1.so.1
                     7312010  158180  7153830     3%    /lib/libc.so.1
fd                         0       0        0     0%    /dev/fd
bootpool/BE1/var     7676928    9672  7153830     1%    /var
swap                  299668       0   299668     0%    /tmp
swap                  299692      24   299668     1%    /var/run
bootpool/BE1/export  7676928      18  7153830     1%    /export
bootpool/BE1/opt     7676928      18  7153830     1%    /opt
#
# zpool list
NAME       SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
bootpool  7.44G   511M   6.94G    6%  ONLINE  -
# zpool status
  pool: bootpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bootpool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0d0s0  ONLINE       0     0     0
            c0d1s0  ONLINE       0     0     0

errors: No known data errors

--
William Hathaway
http://williamhathaway.com
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-16 Thread William D. Hathaway
Why are you using software-based RAID 5/RAIDZ for the tests?  I didn't think 
this was a common setup in cases where file system performance was the primary 
consideration.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS improvements

2007-04-10 Thread William D. Hathaway
There was some discussion on the "always panic for fatal pool failures" issue 
in April 2006, but I haven't seen if an actual RFE was generated.
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/017276.html
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: simple Raid-Z question

2007-04-08 Thread William D. Hathaway
One option is to replace all the existing devices in a raidz vdev with larger 
devices and then export/import the pool; the vdev will then grow in size.  I 
agree that you simply can't add a single device to grow a raidz vdev.
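A minimal sketch of that process, assuming a three-disk raidz vdev and 
hypothetical device names (wait for each resilver to finish, checking with 
'zpool status tank', before moving on):
zpool replace tank c1t1d0 c2t1d0
zpool replace tank c1t2d0 c2t2d0
zpool replace tank c1t3d0 c2t3d0
zpool export tank
zpool import tank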
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] misleading zpool state and panic -- nevada b60 x86

2007-04-07 Thread William D. Hathaway
I'm running Nevada build 60 inside VMWare; it is a test rig with no data of 
value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc
I wanted to check out the FMA handling of a serious zpool error, so I did the 
following:

2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
2007-04-07.15:21:37 zpool scrub tank
(inserted some errors with dd on one device to see if it showed up, which it 
did, but healed fine)
2007-04-07.15:22:12 zpool scrub tank
2007-04-07.15:22:46 zpool clear tank c1d1
(added a single device without any redundancy)
2007-04-07.15:28:29 zpool add -f tank /var/500m_file
(then I copied data into /tank and removed the /var/500m_file; a panic 
resulted, which was expected)

I created a new /var/500m_file and then decided to destroy the pool and start 
over again.  This caused a panic, which I wasn't expecting.  On reboot, I did a 
'zpool status -x', which shows:
  pool: tank
 state: ONLINE
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        tank              ONLINE       0     0     0
          mirror          ONLINE       0     0     0
            c0d1          ONLINE       0     0     0
            c1d1          ONLINE       0     0     0
          /var/500m_file  UNAVAIL      0     0     0  corrupted data

errors: No known data errors

Since there was no redundancy for the /var/500m_file vdev, I don't see how a 
replace will help (unless I still had the original device/file with the data 
intact).

When I try to destroy the pool with "zpool destroy tank", I get a panic with:
Apr  7 16:00:17 b60 genunix: [ID 403854 kern.notice] assertion failed:
vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa.c, line: 2910
Apr  7 16:00:17 b60 unix: [ID 10 kern.notice]
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cd0c genunix:assfail+5a (f9e87e74, f9e87e58,)
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cd6c zfs:spa_sync+6c3 (da89cac0, 1363, 0)
Apr  7 16:00:17 b60 genunix: [ID 353471 kern.notice] d893cdc8 zfs:txg_sync_thread+1df (d4678540, 0)
Apr  7 16:00:18 b60 genunix: [ID 353471 kern.notice] d893cdd8 unix:thread_start+8 ()
Apr  7 16:00:18 b60 unix: [ID 10 kern.notice]
Apr  7 16:00:18 b60 genunix: [ID 672855 kern.notice] syncing file systems...

My questions/comments boil down to:
1) Should the pool state really be 'online' after losing a non-redundant vdev?
2) It seems like a bug if I get a panic when trying to destroy a pool (although 
this clearly may be related to #1).

Am I hitting a known bug (or do I have misconceptions about how the pool should function)?
I will happily provide any debugging info that I can.

I haven't tried a 'zpool destroy -f tank' yet since I didn't know if there was 
any debugging value in my current state.
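If it would help, I can also poke at the saved crash dump with mdb along these 
lines (assuming savecore wrote unix.0/vmcore.0 under /var/crash/b60):
cd /var/crash/b60
mdb -k unix.0 vmcore.0
> ::status
> ::stack
> ::spa -v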

Thanks,
William Hathaway
www.williamhathaway.com
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: today panic ...

2007-03-29 Thread William D. Hathaway
If the fix is put into Solaris 10 update 4 (as Matt expects), it should trickle 
into the Recommended & Security (R&S) patch cluster as well.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-11 Thread William D. Hathaway
Hi Matt,
   Interesting proposal.  Has there been any consideration of whether the free 
space reported for a ZFS filesystem would take the copies setting into account?

Example:
zfs create mypool/nonredundant_data
zfs create mypool/redundant_data
df -h /mypool/nonredundant_data /mypool/redundant_data 
(shows same amount of free space)
zfs set copies=3 mypool/redundant_data

Would a new df of /mypool/redundant_data now show a different amount of free 
space (presumably 1/3 as much, if it differs at all) than /mypool/nonredundant_data?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: SPEC SFS97 benchmark of ZFS,UFS,VxFS

2006-08-07 Thread William D. Hathaway
If this is reproducible, can you force a panic so it can be analyzed?
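For example (assuming 'dumpadm' shows a dump device and savecore directory 
configured):
savecore -L     # capture a dump of the live system without rebooting
reboot -d       # or force a crash dump and reboot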
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss