Re: [zfs-discuss] At what level does the zfs directory exist?

2010-06-16 Thread Thommy M. Malmström
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen

   Don't host 50k filesystems on a single pool. It's more pain than it's worth.

  I assume Michael has reached this conclusion due to factors which are not
  necessary to discuss here.  He has a problem, asked for help.  The above is
  not helpful.

Actually, it is helpful advice. Arne was talking about a single pool, i.e. a zpool, 
not a single system. 
It's well known that having many ZFS filesystems in one pool hurts boot time, 
since every dataset has to be mounted at startup.

Have a look here: 
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
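
If you want to gauge the cost on an existing pool, here is a quick sketch
(ptime is the standard Solaris timing tool; the mount pass is only meaningful
while the datasets are still unmounted, e.g. right after a fresh import):

# zfs list -H -o name -t filesystem | wc -l    (how many datasets the pool carries)
# ptime zfs mount -a                           (how long mounting them all takes)

With tens of thousands of datasets, that mount pass alone can dominate boot time.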



 Michael, I assume you reached this conclusion (must host 50k users on a
 single system) due to some logical process and/or resource constraint,
 right?

 Did you consider the possibility of something like DFS, and/or shadow
 copies, which are included in Windows Server, and allow for snapshots and
 load distribution across multiple servers, using the same UNC space?
 \\domainname\users\eharvey could be hosted by any number of servers, and I
 as a user, would have no idea and no care which one(s) were serving the
 requests.

 The problem with previous versions is that it's only available to Windows
 clients.  If you happen to be using OSX or Ubuntu or whatever, as your cifs
 client, you won't have access to previous versions (AFAIK).

 So Solaris & ZFS certainly have a place here.  In some environments,
 Windows might be the better server solution.  In some environments, osol
 might be better.  And by logical process, you might be stuck putting it
 all on a single server.  Or the average workload might be small enough
 that a single server is perfectly adequate.
 

-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Compellant announces zNAS

2010-04-29 Thread Thommy M. Malmström
What operating system does it run?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] new disk but zfs/zpool commands hangs

2009-08-19 Thread Thommy M. Malmström
I bought a 1 TB external USB disk from Western Digital (1) and put it in my 
2008.11 machine. The machine discovered the disk directly and I did a 
'zpool create xpool c11t0d0' command.

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
[...]
xpool   928G    81K   928G     0%  ONLINE  -
# zpool status
[...]
  pool: xpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
xpool   ONLINE   0 0 0
  c11t0d0   ONLINE   0 0 0

errors: No known data errors


So far, so good. Then I tried to create a ZFS filesystem and copy some data, 
like this:

# zfs create -o casesensitivity=mixed xpool/data
# rsync -a /data/ /xpool/data

After that, all commands involving disk access just hung (zfs, zpool, df); I 
can't even end them with ctrl-c.

Then I pulled the disk's USB cable and inserted it again. The commands then 
worked again, but I got errors like this (just for this disk):

# zpool status -v xpool
  pool: xpool
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: resilver completed after 0h0m with 0 errors on Sat Aug 15 00:23:35 2009
config:

NAME        STATE     READ WRITE CKSUM
xpool       UNAVAIL      0    18     0  insufficient replicas
  c11t0d0   UNAVAIL     12    27     0  experienced I/O failures

errors: Permanent errors have been detected in the following files:

metadata:0x0
metadata:0x11
metadata:0x15
metadata:0x16
metadata:0x2a

Trying to scrub makes all commands hang again...
# zpool scrub xpool (no output for 3 hours...)

So, is my disk dead or can I just try to create a new pool on it? Ideas???



1) http://www.wdc.com/en/products/products.asp?driveid=563
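
(Editorial note: the "action" line in the status output above already points at
the usual recovery sequence. A sketch, assuming the disk itself is healthy and
only the USB path glitched:

# zpool clear xpool        (reset the error counters after reconnecting)
# zpool scrub xpool        (re-verify the on-disk data)
# zpool status -v xpool

If the metadata errors survive a clean scrub, destroying and re-creating the
pool, and restoring the data from elsewhere, is the remaining option.)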
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Thommy M.
Bob Friesenhahn wrote:
 On Mon, 15 Jun 2009, Shannon Fiume wrote:

  I just installed 2009.06 and found that compression isn't enabled by
  default when filesystems are created. Does it make sense to have an
  RFE open for this? (I'll open one tonight if need be.) We keep telling
  people to turn on compression. Are there any situations where turning
  on compression doesn't make sense, like rpool/swap? What about
  rpool/dump?

 In most cases compression is not desirable.  It consumes CPU and
 results in uneven system performance.

IIRC there was a blog post about I/O performance with ZFS stating that it was
actually faster with compression ON, since less data has to come off the
disks and the CPU is fast at unpacking it. But sure, it uses more CPU (and
probably memory).
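
It's also cheap to measure on your own data instead of arguing in the
abstract; a sketch, with a hypothetical dataset name, and noting that
compression only applies to blocks written after it is enabled:

# zfs set compression=on tank/test
# (copy a representative set of files in)
# zfs get compression,compressratio tank/test

A compressratio near 1.00x means you're paying CPU for nothing; text and
office documents usually do noticeably better.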

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to mount rpool and edit vfstab from LiveCD?

2009-01-04 Thread Thommy M. Malmström
My son (15 years old) installed OpenSolaris 2008.11 on disk on his system
and everything was OK until he made a newbie mistake and edited the
/etc/vfstab file incorrectly, which now prevents him from booting.
(I think he had done too much Linux...)
It just hangs on the splash screen.

My idea was to do a live CD boot, import the rpool on the disk, mount the
ZFS root filesystem somewhere, and then edit the misconfigured vfstab,
save it and reboot from disk.

Here are the commands he tried (after some ideas from me)...

1. zpool import

   Shows the rpool with a long id number

2. zpool import -f <long id number> xpool

3. mkdir /b

4. zfs set mountpoint=/b xpool/ROOT/opensolaris

5. zfs mount xpool/ROOT/opensolaris

6. cat /b/etc/vfstab

But he can't see the edits that he made.

So, how does he get the real file mounted?
He made a _LOT_ of additions and upgrades after the initial install
and really wants to recover...
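
(Editorial note: the usual live-CD recipe is to import with an altroot
instead of changing the dataset's mountpoint property. Two things to watch:
renaming the root pool on import, as in step 2 above, sticks after export
and will confuse GRUB, so keep the rpool name; and since he has run
upgrades, the active boot environment may no longer be named 'opensolaris',
because image-update creates new ones. A sketch, with a hypothetical BE name:

# zpool import -f -R /a <long id number> rpool
# zfs list -r rpool/ROOT                 (find the active BE)
# zfs mount rpool/ROOT/opensolaris-1     (root BEs are canmount=noauto)
# vi /a/etc/vfstab
# zpool export rpool

With -R, everything lands under /a and the on-disk mountpoint properties
are left untouched.)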
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for ZFS on boot disk - use HW RAID? or ZFS Mirroring?

2008-11-19 Thread Thommy M. Malmström
Raymond Scott wrote:
 I'm very glad to see ZFS for boot available now. We have begun to use
 X4150 servers and had settled on using the built-in HW RAID for mirroring
 the drives in pairs. Two for boot, two for data, etc...

 Is it a good idea to first create a HW RAID mirror and then install the OS
 using ZFS on that RAID device?  Or, would it be better to not use the HW
 RAID and just go with ZFS mirroring?

It depends (as usual). If you're on a support contract with Sun then you can
opt for the HW RAID, but if you want to run something rock solid with only
your own support, I'd say go for ZFS.
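
One practical upside of the ZFS route is that the mirror is trivial to set
up and verify yourself. A sketch with hypothetical device names; on an x86
box like the X4150 the second disk also needs boot blocks installed:

# zpool attach rpool c0t0d0s0 c0t1d0s0
# zpool status rpool       (wait for the resilver to finish)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

And unlike a HW RAID mirror, ZFS checksums tell you which side of the
mirror returned bad data.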

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Subversion repository on ZFS

2008-08-28 Thread Thommy M. Malmström
Toby Thain wrote:
 On 27-Aug-08, at 5:47 PM, Ian Collins wrote:

  Tim writes:

   On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins [EMAIL PROTECTED] wrote:

    Does anyone have any tuning tips for a Subversion repository on ZFS?
    The repository will mainly be storing binary (MS Office) documents.

    It looks like a vanilla, uncompressed file system is the best bet.

   I believe this is called sharepoint :D

  Don't mention that abomination!

 Amen.

Don't mention _that_ abomination!
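
(Editorial aside: "vanilla" here just means dataset defaults, since
compression is already off by default. A sketch, with a hypothetical pool
name:

# zfs create tank/svn
# svnadmin create /tank/svn/repo

An FSFS-backed Subversion repository is just ordinary files, so nothing
special is needed on the ZFS side.)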

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS questions

2008-07-23 Thread Thommy M.
Richard Gilmore wrote:
 Hello ZFS Community,

 I am trying to find out if ZFS has a tool comparable to Veritas's vxbench.
 Any ideas? I see a tool called vdbench that looks close, but it is not a
 Sun tool. Does Sun recommend something to customers moving from Veritas to
 ZFS who like vxbench and its capabilities?


filebench

http://sourceforge.net/projects/filebench/
http://www.solarisinternals.com/wiki/index.php/FileBench
http://blogs.sun.com/dom/entry/filebench:_a_zfs_v_vxfs
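
filebench is script-driven, and the bundled workload personalities make a
quick comparison easy. A sketch of an interactive run (target directory
hypothetical):

# filebench
filebench> load fileserver
filebench> set $dir=/tank/fbtest
filebench> run 60

Run the same workload on the VxFS side and on the ZFS side, then compare
the reported ops/s and latency figures.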

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic - was it zfs related?

2008-07-16 Thread Thommy M.
Michael Hale wrote:
 Around 9:45 this morning, our mailserver (SunOS 5.11 snv_91 i86pc i386
 i86pc) rebooted.
[...]
 dumping to /dev/zvol/dsk/rootpool/dump, offset 65536, content: kernel

 Is there a way to tell if ZFS caused the kernel panic?  I notice that it
 says imapd: in the middle of the msgbuffer, does that mean imapd caused
 the kernel panic?  I'm just trying to figure out what to do here and
 determine if a bug caused the panic so that I can submit the proper
 information to get it fixed :^)

Is it really true that you run your company's mailserver on snv_91, and
with root on ZFS? No offense, but in that case I think the proper thing
to do is to switch to Solaris 10 5/08.
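
(Editorial note on the actual question: the panic string and stack trace are
in the saved crash dump, and that is what goes into a bug report. A sketch,
assuming savecore wrote the dump to the default /var/crash/<hostname>
location:

# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status        (shows the panic message)
> ::stack         (shows the panicking thread's stack)

A stack full of zfs_/zio_ frames would point at ZFS; imapd merely appearing
in the message buffer does not mean it caused the panic.)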

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] More USB Storage Issues

2008-06-05 Thread Thommy M. Malmström
Nathan Kroenert wrote:
 For what it's worth, I started playing with USB + flash + ZFS and was 
 most unhappy for quite a while.
 
 I was suffering with things hanging, going slow or just going away and 
 breaking, and thought I was witnessing something zfs was doing as I was 
 trying to do mirror recovery and all that sort of stuff.
 
 On a hunch, I tried doing UFS and RAW instead and saw the same issues.
 
 It's starting to look like my USB hubs. Once they are under any 
 reasonable read/write load, they just make bunches of things go offline.
 
 Yep - They are powered and plugged in.
 
 So, at this stage, I'll be grabbing a couple of 'better' USB hubs (Mine 
 are pretty much the cheapest I could buy) and see how that goes.
 
 For gags, take ZFS out of the equation and validate that your hardware 
 is actually providing a stable platform for ZFS... Mine wasn't...

That's my experience too. Most USB hubs are cheap shit...
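
(For anyone wanting to run Nathan's take-ZFS-out-of-the-equation test, a
minimal raw-device read pass, with a hypothetical device name; if this
stalls or errors, the problem is below ZFS:

# dd if=/dev/rdsk/c10t0d0p0 of=/dev/null bs=1024k count=1000
)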

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Hardware Check, OS X Compatibility, NEWBIE!!

2008-06-03 Thread Thommy M.
Darryl wrote:
 This thread really messed me up, posts don't follow a chronological
 order... so sorry for all the extra posts!

That's what you get when you don't use working tools like Usenet news.
NNTP forever!!!

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-02 Thread Thommy M.
Paulo Soeiro wrote:
 Greetings,

 I was experimenting with ZFS, and I made the following test: I shut down
 the computer during a write operation on a mirrored USB storage
 filesystem.

 Here is my configuration:

 NGS USB 2.0 Minihub 4
 3 USB Silicon Power Storage Pens, 1 GB each

 These are the ports:

 hub devices
 +------------+------------+
 | port 2     | port 1     |
 | c10t0d0p0  | c9t0d0p0   |
 +------------+------------+
 | port 4     | port 4     |
 | c12t0d0p0  | c11t0d0p0  |
 +------------+------------+
 
 Here is the problem:
 
 1) First I created a mirror with the port 2 and port 1 devices:
 
 zpool create myPool mirror c10t0d0p0 c9t0d0p0
 -bash-3.2# zpool status
   pool: myPool
  state: ONLINE
  scrub: none requested
 config:
 
 NAME   STATE READ WRITE CKSUM
 myPool ONLINE   0 0 0
   mirror   ONLINE   0 0 0
 c10t0d0p0  ONLINE   0 0 0
 c9t0d0p0   ONLINE   0 0 0
 
 errors: No known data errors
 
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:
 
 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   c5t0d0s0  ONLINE   0 0 0
 
 errors: No known data errors
 
 2) zfs create myPool/myfs
 
 3) Created a random file (file.txt, roughly 100 MB in size):
 
 digest -a md5 file.txt
 3f9d17531d6103ec75ba9762cb250b4c
 
 4) While making a second copy of the file:
 
 cp file.txt test 
 
 I shut down the computer while the file was being copied, then restarted
 it. Here is the result:
 
 
 -bash-3.2# zpool status
   pool: myPool
  state: UNAVAIL
 status: One or more devices could not be used because the label is missing
 or invalid.  There are insufficient replicas for the pool to continue
 functioning.
 action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-5E
  scrub: none requested
 config:
 
 NAME   STATE READ WRITE CKSUM
 myPool UNAVAIL  0 0 0  insufficient replicas
   mirror   UNAVAIL  0 0 0  insufficient replicas
 c12t0d0p0  OFFLINE  0 0 0
 c9t0d0p0   FAULTED  0 0 0  corrupted data
 
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:
 
 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   c5t0d0s0  ONLINE   0 0 0
 
 errors: No known data errors
 
 ---
 
 I was expecting that only one of the files would be corrupted, not the
 whole filesystem.

This looks exactly like the problem I had (thread "USB stick unavailable
after restart") and the answer I got was that you can't rely on the hub...

I haven't tried another hub yet but will eventually test the Adaptec
XHub 4 (AUH-4000), which is on the HCL...




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-02 Thread Thommy M.
Justin Vassallo wrote:
 Thommy,

 If I read correctly, your post stated that the pools did not automount on
 startup, not that they would go corrupt. It seems to me that Paulo is
 actually experiencing a corrupt fs.

Nah, I also had indications of corrupted data, if you read my posts.
But the data was there after I fiddled with the sticks and
exported/imported the pool.
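
(For reference, that sequence is just the following, using the pool name
from Paulo's example:

# zpool export myPool
(reseat the USB sticks)
# zpool import myPool
# zpool status -v myPool

Export/import forces ZFS to re-read the device labels, which helps when
devices come back under new controller numbers, as c10t0d0p0 apparently
did above, reappearing as c12t0d0p0.)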


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NFS4-sharing-ZFS issues

2008-05-22 Thread Thommy M. Malmström
Bob Friesenhahn wrote:
 On Wed, 21 May 2008, Will Murnane wrote:

  So, my questions are:
  * Are there options I can set server- or client-side to make Solaris
    child mounts happen automatically (i.e., match the Linux behavior)?
  * Will this behave with automounts?  What I'd like to do is list
    /export/home in the automount master file, but not all the child
    filesystems.

 Here is the answer you were looking for:

 In /etc/auto_home:
 # Home directory map for automounter
 #
 *   server:/home/&

 This works on Solaris 9, Solaris 10, and OS-X Leopard.

Shouldn't that be

*   server:/export/home/&
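
(For completeness, the matching master-map entry; the server name is
hypothetical, and -nobrowse avoids listing every key:

/etc/auto_master:
/home   auto_home   -nobrowse

/etc/auto_home:
*   server:/export/home/&

With one ZFS filesystem per user under /export/home on the server, each one
is then mounted on demand as /home/<username>.)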


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss