Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-16 Thread Robert Milkowski
Hello Richard,

Thursday, June 12, 2008, 6:54:29 AM, you wrote:


RE> Oracle bails out after 10 minutes (ORA-27062) ask me how I know... :-P


So how do you know?


-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays

2008-06-16 Thread Erik Trimble
One thing I should mention on this is that I've had a _very_ bad 
experience with using single-LUN ZFS filesystems over FC.

That is, using an external SAN box to create a single LUN, exporting that 
LUN to an FC-connected host, then creating a pool as follows:

zpool create tank LUN_ID

It works fine, up until something bad happens to the array or the FC 
connection (like, say, losing power to the whole system) and the host 
computer cannot talk to the LUN.

This will corrupt the zpool permanently, and there is no way to fix the 
pool (and, without some magic in /etc/system, it will leave the host in a 
permanent kernel panic loop).  This is a known bug, and the fix isn't 
looking to be available anytime soon.

This problem doesn't seem to manifest itself if the zpool has redundant 
members, even if they are on the same array (and thus, the host loses 
contact with both LUNs at the same time).

So, for FC or iSCSI targets, I would HIGHLY recommend that ZFS _ALWAYS_ 
be configured in a redundant setup.
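
For example - just a sketch, with LUN_A and LUN_B standing in for two
LUNs, ideally from different arrays:

# zpool create tank mirror LUN_A LUN_B

That way ZFS has a redundant copy it can fall back on (and repair from)
when one LUN drops off the fabric.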

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and Automount/Hal/fstyp

2008-06-16 Thread Matthew Gardiner
Hi,

I've got an external hard disk and I've done the stuff with zpool - so
it's all working.

The problem I have, however, is whether it is possible to actually set
it up so that ZFS devices mount just like CDs and drives formatted as
FAT.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread Matthew Gardiner
  I've got a couple of identical old sparc boxes running nv90 - one
  on ufs, the other zfs. Everything else is the same. (SunBlade
  150 with 1G of RAM, if you want specifics.)

  The zfs root box is significantly slower all around. Not only is
  initial I/O slower, but it seems much less able to cache data.

 Exactly the same here, though with different hardware (Netra T1 200
 with 1 GB RAM and 2x 36 GB SCSI).  If you put the UFS on top of
 an SVM mirror the difference is less noticeable but still there.

I think if you notice the common thread, those who run SPARC
are having performance issues vs. those who are running x86. I know
from my experience: I have a P4 3.2 GHz Prescott desktop with 2.5 GB
RAM and a Lenovo T61p laptop with 4 GB, and neither of them has any
performance issues with ZFS; in fact, with ZFS, the performance has
gone up.

Matthew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to identify zpool version

2008-06-16 Thread Brian H. Nelson
Peter Hawkins wrote:
 Can zpool on U3 be patched to V4? I've applied the latest cluster and it 
 still seems to be V3.

   
Yes, you can patch your way up to the Sol 10 U4 kernel (or even U5 
kernel) which will give you zpool v4 support. The particular patch you 
need is 120011-14 or 120012-14 (sparc or x86). There is at least one 
dependency patch that is obsolete (122660-10/122661-10) but must still 
be installed before the kernel patch will go in, so you may need to 
install one or two patches manually to get it working.
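
Roughly, on SPARC, the sequence would be something like this (patch
directories are wherever you unpacked the downloads; on x86 use
122661-10 and 120012-14 instead):

# patchadd /var/tmp/122660-10
# patchadd /var/tmp/120011-14
# init 6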

http://mail.opensolaris.org/pipermail/zfs-discuss/2007-October/043331.html

-Brian
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Problems under vmware

2008-06-16 Thread Anthony Worrall
I am seeing the same problem using a separate virtual disk for the pool.
This is happening with Solaris 10 U3, U4 and U5.


SCSI reservations are known to be an issue with clustered Solaris: 
http://blogs.sun.com/SC/entry/clustering_solaris_guests_that_run

I wonder if this is the same problem. Maybe we have to use Raw Device Mapping 
(RDM) to get zfs to work under vmware.

Anthony Worrall
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread Peter Tribble
On Mon, Jun 16, 2008 at 12:05 PM, Matthew Gardiner
[EMAIL PROTECTED] wrote:

 I think that if you notice the common thread; those who run SPARC's
 are having performance issues vs. those who are running x86.

Not that simple. I'm seeing performance issues on x86 just as
much as sparc. My sparc comparison was simply that the only
pair of identical machines I could do testing on just happened
to be sparc.

The *real* common thread is that you need ridiculous amounts
of memory to get decent performance out of ZFS, whereas UFS
gives reasonable performance on much smaller systems. On my
servers where 16G minimum is reasonable, ZFS is fine. But the
bulk of the installed base of machines accessed by users is still
in the 512M-1G range - and Sun are still selling 512M machines.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Accessing zfs partitions on HDD from LiveCD

2008-06-16 Thread Fat Ted
Answer is:

# zpool import

(which will pick up the zpool on the HDD and lists its name and id)

# zpool import rpool 

(rpool is the default OpenSolaris zpool)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Mirror Problem

2008-06-16 Thread Matthew C Aycock
Well, I have a zpool created that contains four vdevs. Each vdev is a mirror of 
a T3B LUN and a corresponding LUN of an SE3511 brick. I did this since I was new 
with ZFS and wanted to ensure that my data would survive an array failure. It 
turns out that I was smart for doing this :)

I had a hardware failure on the SE3511 that caused the complete RAID5 LUN on 
the SE3511 to die. (The first glance showed 6 drives failed :( ) However, I 
would have expected that ZFS would detect the failed mirror halves and offline 
them, as ODS and VxVM would. To my shock, it basically hung the server. I 
eventually had to unmap the SE3511 LUNs and replace them with space I had 
available from another brick in the SE3511. I then did a zpool replace and ZFS 
resilvered the data.

So, why did ZFS hang my server?

This is on Solaris 10 11/06, kernel patch 127111-05, and ZFS version 4.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 16:21:26 +0100
Peter Tribble [EMAIL PROTECTED] wrote:

 The *real* common thread is that you need ridiculous amounts
 of memory to get decent performance out of ZFS

That's FUD. Older systems might not have enough memory, but newer ones
can hardly be bought with less than 2 GB. Read the specs before you
write such nonsense about ridiculous memory amounts.

 bulk of the installed base of machines accessed by users is still
 in the 512M-1G range

True, but those systems don't qualify for Vista nor for OpenSolaris,
nor for a good ZFS based system. That's normal. Those machines are old.
Not too old for ancient filesystems and lightweight desktops, but they
-are- too old for modern software.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Deferred Frees

2008-06-16 Thread Torrey McMahon
I'm doing some simple testing of ZFS block reuse and was wondering when 
deferred frees kick in. Is it on some sort of timer to ensure data 
consistency? Does another routine call it? Would something as simple as 
sync(1M) get the free block list written out so future allocations could 
use the space?

... or am I way off in the weeds? :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Problems under vmware

2008-06-16 Thread Anthony Worrall
Added a vdev using RDM and that seems to be stable over reboots.

However, the pools based on a virtual disk now also seem to be stable after 
doing an export and import -f.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius
Hi,

zpool does not want to create a pool on a USB disk (formatted in FAT32).

# /usr/sbin/zpool create alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy

or

# /usr/sbin/zpool create alpha /dev/rdsk/c5t0d0p0
cannot use '/dev/rdsk/c5t0d0p0': must be a block device or regular file

What should I do to create a pool on this disk, please?

Regards.

Andrius
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:

 zpool does not to create a pool on USB disk (formatted in FAT32).

It's already been formatted.
Try zpool create -f alpha c5t0d0p0

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius
dick hoogendijk wrote:
 On Mon, 16 Jun 2008 18:10:14 +0100
 Andrius [EMAIL PROTECTED] wrote:
 
 zpool does not to create a pool on USB disk (formatted in FAT32).
 
 It's already been formatted.
 Try zpool create -f alpha c5t0d0p0
 

The same story

# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy

Regards,
Andrius
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 18:23:35 +0100
Andrius [EMAIL PROTECTED] wrote:

 The same story
 
 # /usr/sbin/zpool create -f alpha c5t0d0p0
 cannot open '/dev/dsk/c5t0d0p0': Device busy

Are you sure you're not on that device?
Are you also sure your usb stick is called c5t0d0p0?
What does rmformat (as root) say?

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

dick hoogendijk wrote:

On Mon, 16 Jun 2008 18:23:35 +0100
Andrius [EMAIL PROTECTED] wrote:


The same story

# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy


Are you sure you're not on that device?
Are you also sure your usb stick is called c5t0d0p0?
What does rmformat (as root) say?



The device is on, but it is empty. It is not a stick, it is a mobile 
hard disk Iomega 160 GB.

# rmformat
Looking for devices...
 1. Volmgt Node: /vol/dev/aliases/cdrom0
Logical Node: /dev/rdsk/c1t0d0s2
Physical Node: /[EMAIL PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
Connected Device: PIONEER  DVD-RW  DVR-112D 1.21
Device Type: DVD Reader/Writer
 2. Volmgt Node: /vol/dev/aliases/rmdisk0
Logical Node: /dev/rdsk/c5t0d0p0
Physical Node: /[EMAIL PROTECTED],0/pci1106,[EMAIL PROTECTED],4/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0
Connected Device: Ext Hard  Disk
Device Type: Removable


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Neal Pollack
Andrius wrote:
 dick hoogendijk wrote:
   
 On Mon, 16 Jun 2008 18:10:14 +0100
 Andrius [EMAIL PROTECTED] wrote:

 
 zpool does not to create a pool on USB disk (formatted in FAT32).
   
 It's already been formatted.
 Try zpool create -f alpha c5t0d0p0

 

 The same story

 # /usr/sbin/zpool create -f alpha c5t0d0p0
 cannot open '/dev/dsk/c5t0d0p0': Device busy
   

When you insert a USB stick into a running Solaris system, and it is 
FAT32 formatted, it may be automatically mounted as a filesystem, 
read/write.

The command above fails since it is already mounted and busy.
You may wish to use the df command to verify this.
If it is mounted, try unmounting it first, and then using the command:

# /usr/sbin/zpool create -f alpha c5t0d0p0



 Regards,
 Andrius
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-16 Thread Steffen Weiberle
Has anybody stored 1/2 billion small (< 50 KB) files in a ZFS data store? 
If so, any feedback on how many file systems [and sub-file systems, if 
any] you used?

How were ls times? Any insights on snapshots, clones, send/receive, or 
restores in general?

How about NFS access?

Thanks
Steffen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays

2008-06-16 Thread Vincent Fox
I'm not sure why people obsess over this issue so much.  Disk is cheap.

We have a fair number of 3510 and 2540 on our SAN.  They make RAID-5 LUNs 
available to various servers.

On the servers we take RAID-5 LUNs from different arrays and ZFS mirror them.  
So if any array goes away we are still operational.

VERY ROBUST!

If you are trying to be cheap, then you could (rough sketches below):
1) Use copies=2 to make sure data is duplicated
2) Advertise individual disks as LUNs and build RAIDZ2 on them.
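
For example (sketches only - pool, dataset and disk names are made up):

# zfs set copies=2 tank/data
# zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0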

The advantage of an intelligent array is that I have low-level control of matching a 
hot-spare in array #1 to the LUN in array #1.  ZFS does not have this 
fine-grained hot-spare capability yet, so I just don't use ZFS sparing.  Also, 
the array has SAN connectivity and caching and dual controllers that just don't 
exist in the JBOD world.

I am hosting mailboxes for 50K people; we cannot afford lengthy downtimes.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

Neal Pollack wrote:

Andrius wrote:

dick hoogendijk wrote:
 

On Mon, 16 Jun 2008 18:10:14 +0100
Andrius [EMAIL PROTECTED] wrote:

   

zpool does not to create a pool on USB disk (formatted in FAT32).
  

It's already been formatted.
Try zpool create -f alpha c5t0d0p0




The same story

# /usr/sbin/zpool create -f alpha c5t0d0p0
cannot open '/dev/dsk/c5t0d0p0': Device busy
  


When you insert a USB stick into a running Solaris system, and it is 
FAT32 formatted,

it may be automatically mounted as a filesystem, read/write.

The command above fails since it is already mounted and busy.
You may wish to use the df command to verify this.
If it is mounted, try unmounting it  fist, and then using the command;


That is true, the disk is detected automatically. But

# umount /dev/rdsk/c5t0d0p0
umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
umount: /dev/rdsk/c5t0d0p0 not mounted


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problem with missing disk in RaidZ

2008-06-16 Thread Peter Hawkins
Thanks to the help in a previous post I have imported my pool. However I would 
appreciate some help with my next problem.

This all arose because my motherboard failed while my zpool was resilvering 
from a failed disk. I moved the disks to a new motherboard and imported the 
pool with the help of the posters here. However, once imported, the new system 
spawned regular error messages regarding the new disk and eventually the system 
would hang after about a minute - really hang, completely locked. I tried 
killing the resilver with scrub -s but it just said that no scrub was in 
progress. Eventually I detached the replacement disk and the system stayed 
running with the pool imported.

However my pool is now in this state:

  pool: rz1500
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver completed with 0 errors on Mon Jun 16 18:37:07 2008
config:

NAMESTATE READ WRITE CKSUM
rz1500  DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c2d0p0  ONLINE   0 0 0
c3d0p0  ONLINE   0 0 0
c6d0p0  ONLINE   0 0 0
c7d0UNAVAIL  0 0 0  cannot open

errors: No known data errors

The missing device was c7d0p0 and I now have another brand new disk to replace 
it. I can't attach the device as there seems to be no RaidZ attach syntax, and 
onlining the device makes no difference.  I need to add back the device that I 
detached to this RaidZ pool.

I'm on S10 x86 U3 patched to zpool V4.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 18:38:11 +0100
Andrius [EMAIL PROTECTED] wrote:

 The device is on, but it is empty. It is not a stick, it is a mobile 
 hard disk Iomega 160 GB.

Like Neal writes: check if the drive is mounted. Do a df -h.
Unmount it if necessary (umount /dev/dsk/c5t0d0) and then do a zpool
create alpha c5t1d0
Afaik the p0 is not needed.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Neal Pollack
Andrius wrote:
 Neal Pollack wrote:
 Andrius wrote:
 dick hoogendijk wrote:
  
 On Mon, 16 Jun 2008 18:10:14 +0100
 Andrius [EMAIL PROTECTED] wrote:

   
 zpool does not to create a pool on USB disk (formatted in FAT32).
   
 It's already been formatted.
 Try zpool create -f alpha c5t0d0p0

 

 The same story

 # /usr/sbin/zpool create -f alpha c5t0d0p0
 cannot open '/dev/dsk/c5t0d0p0': Device busy
   

 When you insert a USB stick into a running Solaris system, and it is 
 FAT32 formatted,
 it may be automatically mounted as a filesystem, read/write.

 The command above fails since it is already mounted and busy.
 You may wish to use the df command to verify this.
 If it is mounted, try unmounting it  fist, and then using the command;

 That is true, disc is detected automatically. But

 # umount /dev/rdsk/c5t0d0p0
 umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab
 umount: /dev/rdsk/c5t0d0p0 not mounted

The umount command works best with a filesystem name.
The mount command will show what filesystems are mounted.
For example, if I stick in a USB thumb-drive:

#mount
...
/media/LEXAR MEDIA on /dev/dsk/c9t0d0p0:1 
read/write/nosetuid/nodevices/hidden/nofoldcase/clamptime/noatime/timezone=28800/dev=e01050
 
on Mon Jun 16 11:01:37 2008

#df -hl
/dev/dsk/c9t0d0p0:1    991M   923M    68M    94%    /media/LEXAR MEDIA

#umount /media/LEXAR MEDIA
#

And then it no longer shows up in the df or the mount command.

Neal








 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:
 Unmount it if neccessary (umount /dev/dsk/c5t0d0)
Should be /dev/dsk/c5t1d0  --

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 20:04:08 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:
 Should be /dev/dsk/c5t1d0  --
Sh***t! No it should not. rmformat showed c5t0d0, didn't it?
So be careful. A typo is quickly made (see my msgs) ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

Miles Nordin wrote:

a == Andrius  [EMAIL PROTECTED] writes:


 a # umount /dev/rdsk/c5t0d0p0

Maybe there is another problem, too, but this is wrong.  Type 'df -k'
as he suggested and use the device or pathname listed there.


This is the end of df -k:
/vol/dev/dsk/c5t0d0/unnamed_rmdisk:c
 156250144  96 156250048 1% 
/rmdisk/unnamed_rmdisk




--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

dick hoogendijk wrote:

On Mon, 16 Jun 2008 18:54:04 +0100
Andrius [EMAIL PROTECTED] wrote:


That is true, disc is detected automatically. But
# umount /dev/rdsk/c5t0d0p0
umount: warning: /dev/rdsk/c5t0d0p0 not in mnttab


umount /dev/dsk/c5t0d0 should do it.



The same

# umount /dev/dsk/c5t0d0
umount: warning: /dev/dsk/c5t0d0 not in mnttab
umount: /dev/dsk/c5t0d0 no such file or directory

--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with missing disk in RaidZ

2008-06-16 Thread Eric Schrock
Try 'zpool replace'.
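
Roughly, using the names from your zpool status output (new_device being
whatever the replacement disk shows up as; it can be omitted if the new
disk takes over the old c7d0 name):

# zpool replace rz1500 c7d0 new_device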

- Eric

On Mon, Jun 16, 2008 at 10:57:40AM -0700, Peter Hawkins wrote:
 Thanks to the help in a previous post I have imported my pool. However I 
 would appreciate some help with my next problem.
 
 This all arose because my motherboard failed while my zpool was resilvering 
 from a failed disk. I moved the disks to a new motherboard and imported the 
 pool with the help of the posters here. However when imported the new system 
 spawned regular error messages regarding the new disk and eventually the 
 system would hang after about a minute, really hang - completeley locked. I 
 tried killing the resilver with scrub -s but it just said that no scrub was 
 in progress. Eventaully I detached the replacement disk and the system stayed 
 running with the pool imported.
 
 However my pool is now in this state:
 
   pool: rz1500
  state: DEGRADED
 status: One or more devices could not be opened.  Sufficient replicas exist 
 for
 the pool to continue functioning in a degraded state.
 action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-D3
  scrub: resilver completed with 0 errors on Mon Jun 16 18:37:07 2008
 config:
 
 NAMESTATE READ WRITE CKSUM
 rz1500  DEGRADED 0 0 0
   raidz1DEGRADED 0 0 0
 c2d0p0  ONLINE   0 0 0
 c3d0p0  ONLINE   0 0 0
 c6d0p0  ONLINE   0 0 0
 c7d0UNAVAIL  0 0 0  cannot open
 
 errors: No known data errors
 
 The missing device was c7d0p0 and I now have another brand new disk to 
 replace it. I can't attach the device as there seems to be no RaidZ attach 
 syntax, and onlining the device makes no difference.  I need to add back the 
 device that I detached to this RaidZ pool.
 
 I'm on S10 x86 U3 patched to zpool V4.
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:

 /rmdisk/unnamed_rmdisk
umount /rmdisk/unnamed_rmdisk should do the trick

It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk too.
-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

dick hoogendijk wrote:

On Mon, 16 Jun 2008 20:00:59 +0200
dick hoogendijk [EMAIL PROTECTED] wrote:

Unmount it if neccessary (umount /dev/dsk/c5t0d0)

Should be /dev/dsk/c5t1d0  --



Still the same
# umount /dev/rdsk/c5t1d0
umount: warning: /dev/rdsk/c5t1d0 not in mnttab
umount: /dev/rdsk/c5t1d0 no such file or directory


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

dick hoogendijk wrote:

On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:


/rmdisk/unnamed_rmdisk

umount /rmdisk/unnamed_rmdisk should do the trick

It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk too.


It is mounted on /rmdisk/unnamed_rmdisk. It is Solaris 10.

#umount /rmdisk/unnamed_rmdisk
umount: warning: /rmdisk/unnamed_rmdisk not in mnttab
umount: /rmdisk/unnamed_rmdisk not mounted


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RFE 4852783

2008-06-16 Thread Miles Nordin
Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within the next year?

My use-case is home user.  I have 16 disks spinning, two towers of
eight disks each, exporting some of them as iSCSI targets.  Four disks
are 1TB disks already in ZFS mirrors, and 12 disks are 180 - 320GB and
contain 12 individual filesystems.

If RFE 4852783 will happen in a year, I can move the smaller disks and
their data into the ZFS mirror.  As they die I will replace them with
pairs of ~1TB disks.

I worry the RFE won't happen because it looks 5 years old with no
posted ETA.  If it won't be closed within a year, some of those 12
disks will start failing and need replacement.  We find we lose one or
two each year.  If I added them to ZFS, I'd have to either waste
money, space, power on buying undersized replacement disks, or else do
silly and dangerously confusing things with slices.  Therefore in that
case I will leave the smaller disks out of ZFS and add only 1TB
devices to these immutable vdev's.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Martin Winkelman
On Mon, 16 Jun 2008, Andrius wrote:

 dick hoogendijk wrote:
 On Mon, 16 Jun 2008 19:10:18 +0100
 Andrius [EMAIL PROTECTED] wrote:
 
 /rmdisk/unnamed_rmdisk
 umount /rmdisk/unnamed_rmdisk should do the trick
 
 It's probably also mounted on /media depending on your solaris version.
 If so, umount /media/unnamed_rmdisk unmounts the disk too.

 It is mounted on /rmdisk/unnamed_rmdisk. It is Solaris 10.

 #umount /rmdisk/unnamed_rmdisk
 umount: warning: /rmdisk/unnamed_rmdisk not in mnttab
 umount: /rmdisk/unnamed_rmdisk not mounted

This disk is probably under volume manager control. Try running eject 
unnamed_rmdisk.

--
Martin Winkelman  -  [EMAIL PROTECTED]  -  303-272-3122
http://www.sun.com/solarisready/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: 1/2 Billion files in ZFS

2008-06-16 Thread Peter Tribble
On Mon, Jun 16, 2008 at 6:42 PM, Steffen Weiberle
[EMAIL PROTECTED] wrote:
 Has anybody stored 1/2 billion small ( 50KB) files in a ZFS data store?
 If so, any feedback in how many file systems [and sub-file systems, if
 any] you used?

I'm not quite there yet, although I have a thumper with about 110 million
files on it. That's across a couple of dozen filesystems, one has 27 million
files (and is going to get to one or two hundred million on its own before
it's done), and several have over 10 million. So while we're not there yet, it's
only a question of time.

 How were ls times? And insights in snapshots, clones, send/receive, or
 restores in general?

Directory listings aren't quick. Snapshots are easy to create; we have seen
destroying a snapshot take hours. Using send/receive (or anything else, like
tar) isn't quick. I suspect that using raidz is less than ideal for this sort of
workload (our workload has changed somewhat over the last year); I haven't
got anything like the resources to try alternatives, as I suspect
we're being bitten
by the relatively poor performance of raidz for random reads (basically you
only get one disk's worth of I/O per vdev).
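(For example, a pool built as five 8-disk raidz vdevs gives you roughly
five disks' worth of random-read IOPS, not forty.)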

Backups are slow. We seem to be able to do about 10 million files a day. I'm
hoping I never have to tell you what restore times are like ;-)

I think that you need some way of breaking the data up - either by filesystem
or just by directory hierarchy - into digestible chunks. For us that's
at about the
1Tbyte/10 million file point at the most - we're looking at restructuring the
directory hierarchy for the filesystems that are beyond this so we can back them
up in pieces.

 How about NFS access?

Seems to work fine.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

Martin Winkelman wrote:

On Mon, 16 Jun 2008, Andrius wrote:


dick hoogendijk wrote:

On Mon, 16 Jun 2008 19:10:18 +0100
Andrius [EMAIL PROTECTED] wrote:


/rmdisk/unnamed_rmdisk

umount /rmdisk/unnamed_rmdisk should do the trick

It's probably also mounted on /media depending on your solaris version.
If so, umount /media/unnamed_rmdisk unmounts the disk too.


It is mounted on /rmdisk/unnamed_rmdisk. It is Solaris 10.

#umount /rmdisk/unnamed_rmdisk
umount: warning: /rmdisk/unnamed_rmdisk not in mnttab
umount: /rmdisk/unnamed_rmdisk not mounted


This disk is probably under volume manager control. Try running eject 
unnamed_rmdisk.


--
Martin Winkelman  -  [EMAIL PROTECTED]  -  303-272-3122
http://www.sun.com/solarisready/


# eject /rmdisk/unnamed_rmdisk
No such file or directory
# eject /dev/rdsk/c5t0d0s0
/dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
# eject rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
# eject /vol/dev/rdsk/c5t0d0/unnamed_rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: No such file or directory



--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Martin Winkelman
On Mon, 16 Jun 2008, Andrius wrote:

 # eject /rmdisk/unnamed_rmdisk
 No such file or directory
 # eject /dev/rdsk/c5t0d0s0
 /dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
 # eject rmdisk
 /vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
 # eject /vol/dev/rdsk/c5t0d0/unnamed_rmdisk
 /vol/dev/rdsk/c5t0d0/unnamed_rmdisk: No such file or directory

# mount |grep rmdisk
/rmdisk/unnamed_rmdisk on /vol/dev/dsk/c2t0d0/unnamed_rmdisk:c 
read/write/setuid/devices/nohidden/nofoldcase/dev=16c1003 on Mon Jun 16 
12:51:07 2008
# eject unnamed_rmdisk
# mount |grep rmdisk
#


--
Martin Winkelman  -  [EMAIL PROTECTED]  -  303-272-3122
http://www.sun.com/solarisready/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Andrius

Martin Winkelman wrote:

On Mon, 16 Jun 2008, Andrius wrote:


# eject /rmdisk/unnamed_rmdisk
No such file or directory
# eject /dev/rdsk/c5t0d0s0
/dev/rdsk/c5t0d0s0 is busy (try 'eject floppy' or 'eject cdrom'?)
# eject rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: Inappropriate ioctl for device
# eject /vol/dev/rdsk/c5t0d0/unnamed_rmdisk
/vol/dev/rdsk/c5t0d0/unnamed_rmdisk: No such file or directory


# mount |grep rmdisk
/rmdisk/unnamed_rmdisk on /vol/dev/dsk/c2t0d0/unnamed_rmdisk:c 
read/write/setuid/devices/nohidden/nofoldcase/dev=16c1003 on Mon Jun 16 
12:51:07 2008

# eject unnamed_rmdisk
# mount |grep rmdisk
#


--
Martin Winkelman  -  [EMAIL PROTECTED]  -  303-272-3122
http://www.sun.com/solarisready/



Sorry, what should the second row be, please?

--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RFE 4852783

2008-06-16 Thread Neil Perrin
This is actually quite a tricky fix, as obviously data and metadata have
to be relocated. Although there's been no visible activity in this bug,
there has been substantial design activity to allow the RFE to be easily
fixed.

Anyway, to answer your question, I would fully expect this RFE would
be fixed within a year, but can't guarantee it.

Neil.

Miles Nordin wrote:
 Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
 happen within the next year?
 
 My use-case is home user.  I have 16 disks spinning, two towers of
 eight disks each, exporting some of them as iSCSI targets.  Four disks
 are 1TB disks already in ZFS mirrors, and 12 disks are 180 - 320GB and
 contain 12 individual filesystems.
 
 If RFE 4852783 will happen in a year, I can move the smaller disks and
 their data into the ZFS mirror.  As they die I will replace them with
 pairs of ~1TB disks.
 
 I worry the RFE won't happen because it looks 5 years old with no
 posted ETA.  If it won't be closed within a year, some of those 12
 disks will start failing and need replacement.  We find we lose one or
 two each year.  If I added them to ZFS, I'd have to either waste
 money, space, power on buying undersized replacement disks, or else do
 silly and dangerously confusing things with slices.  Therefore in that
 case I will leave the smaller disks out of ZFS and add only 1TB
 devices to these immutable vdev's.
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread Peter Tribble
On Mon, Jun 16, 2008 at 5:20 PM, dick hoogendijk [EMAIL PROTECTED] wrote:
 On Mon, 16 Jun 2008 16:21:26 +0100
 Peter Tribble [EMAIL PROTECTED] wrote:

 The *real* common thread is that you need ridiculous amounts
 of memory to get decent performance out of ZFS

 That's FUD. Older systems might not have enough memory, but newer ones
 can't hardly be bought with less then 2Gb. Read the specs before you
 write such nonsense about ridiculous memory amounts.

Hogwash. What is the reasonable minimum? I'm suspecting it's well
over 2G.

And as for being unable to get machines with less than 2G, just look at
Sun's price list - plenty of 1G, and the X2100, Ultra 20, and Ultra 24 all
come in 512M configurations. Yes, it's not very smart, but it sets the
target range not just for now but for the working lifetime of the machines,
which is at least 3 years.

 bulk of the installed base of machines accessed by users is still
 in the 512M-1G range

 True, buth those systems don't qualify for Vista nor for OpenSolaris,
 nor for a good ZFS based system. That's normal. Those machines are old.
 Not too old for ancient filesystems and lightweight desktops, but the
 -are- too old for modern software.

So you're saying that if people want to even try OpenSolaris then they need
to throw away their perfectly functional hardware and buy something new?
Hardly a strategy for success. 1G is more than enough to run a modern
desktop (although heavier use and more apps will drive the requirement up
beyond that).

(And it's not just a case of looking at the memory in the hardware - as
virtualization becomes more and more widely used that memory allocation
gets split up into smaller chunks that get allocated to virtual systems.)

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB hard to ZFS

2008-06-16 Thread Paul Gress
Since Volume Management has control and eject didn't work, just turning 
off Volume Management will do the trick.

# svcadm disable volfs

Now you can remove it safely.

Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Andrius

Paul Gress wrote:
Since Volume Management has control and eject didn't work, just turning 
off Volume Management will do the trick.


# svcadm disable volfs

Now you can remove it safely.

Paul



Thanks! It works. Volume management is that thing that perhaps does not 
exist in ZFS and that made disk management easier. Thanks to everybody 
for the advice.


Volume Manager should be off before creating pools on removable disks.
--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Bob Friesenhahn
On Mon, 16 Jun 2008, Andrius wrote:
 Thanks! It works. Volume managagement is that thing that does not exist in 
 zfs perhaps and made disk managemet more easy. Thanks for everybody for 
 advices.

 Volume Manager should be off before creating pools in removable disks.

Probably it will work to edit /etc/vold.conf and comment out the line

use rmdisk drive /dev/rdsk/c*s2 dev_rmdisk.so rmdisk%d

Then do

kill -HUP `pgrep vold`

Otherwise cdroms and other valuable devices won't be mounted.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] questions about ZFS Send/Receive

2008-06-16 Thread Stefano Pini

Hi guys,
we are proposing to a customer a couple of X4500s (24 TB) used as NAS 
(i.e. NFS servers).
Both servers will contain the same files and should be accessed by 
different clients at the same time (i.e. they should both be active).

So we need to guarantee that both X4500s contain the same files.
We could simply copy the contents onto both X4500s, which is an option 
because the new files are limited in number and rate, but we 
would really like to use the ZFS send/receive commands.


AFAIK the commands work fine, but generally speaking are there any 
known limitations?
And, in detail, it is not clear if the receiving ZFS file system 
could be used regularly while it is in receiving mode:
in plain words, is it possible to read, and export over NFS, files from 
a ZFS file system while it is receiving updates from another ZFS 
send?


Clearly, until the new updates are received and applied, the old copy 
would be used.
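
For reference, the kind of pipeline we have in mind is roughly this
(pool, dataset and host names are only placeholders):

# zfs snapshot tank/data@today
# zfs send -i tank/data@yesterday tank/data@today | ssh x4500-b zfs receive tank/data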


TIA
Stefano



Sun Microsystems Spa
Viale Fulvio testi 327
20162 Milano ITALY
STEFANO PINI
Senior Technical Specialist at Sun Microsystems Italy
contact | [EMAIL PROTECTED] | +39 02 64152150

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread dick hoogendijk
On Mon, 16 Jun 2008 20:04:47 +0100
Peter Tribble [EMAIL PROTECTED] wrote:

 Hogwash. What is the reasonable minimum? I'm suspecting it's well
 over 2G.

2 GB is perfectly alright.

 And as for being unable to get machines with less than 2G, just look
 at Sun's price list
I'm not saying you can't buy machines w/ 512 MB-1 GB. I'm saying that the
majority of computers offered in stores come w/ a minimum of 2 GB. At
least in the Netherlands.

 So you're saying that if people want to even try OpenSolaris then
 they need to throw away their perfectly functional hardware and buy
 something new?

512 MB is the bare minimum for OpenSolaris. Take it or leave it. That
doesn't mean people have to throw their machines away. They could try
to add RAM. I -do- say that 512 MB of RAM is stone age.

 1G is more than enough to run a modern desktop (although heavier use
 and more apps will drive the requirement up beyond that).

1 GB is the minimum for a modern desktop and a few apps like the GIMP /
OpenOffice. That leaves hardly any room for modern filesystems, nor
does it leave room for virtualization.

 (And it's not just a case of looking at the memory in the hardware -
 as virtualization becomes more and more widely used that memory
 allocation gets split up into smaller chunks that get allocated to
 virtual systems.)

That's why a modern machine needs at least 2 GB of RAM. That way you can
have a modern desktop, a modern FS like ZFS, and one xVM guest.

Below that, all you have is a modern desktop. No room to play with the
modern goodies like xVM / ZFS.
Given the fact that 2 GB sells for about 30 euro, that's cheap.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv90 ++
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] RFE 4852783

2008-06-16 Thread Tim
Why would you have to buy smaller disks?  You can replace the 320s
with 1 TB drives, and after the last 320 is out of the raid group, it
will grow automatically.
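
That is, one disk at a time, something like (device names made up):

# zpool replace tank c1t2d0 c1t9d0

and once the last small member of the vdev has been swapped out, the
extra capacity becomes available.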





On 6/16/08, Miles Nordin [EMAIL PROTECTED] wrote:
 Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
 happen within the next year?

 My use-case is home user.  I have 16 disks spinning, two towers of
 eight disks each, exporting some of them as iSCSI targets.  Four disks
 are 1TB disks already in ZFS mirrors, and 12 disks are 180 - 320GB and
 contain 12 individual filesystems.

 If RFE 4852783 will happen in a year, I can move the smaller disks and
 their data into the ZFS mirror.  As they die I will replace them with
 pairs of ~1TB disks.

 I worry the RFE won't happen because it looks 5 years old with no
 posted ETA.  If it won't be closed within a year, some of those 12
 disks will start failing and need replacement.  We find we lose one or
 two each year.  If I added them to ZFS, I'd have to either waste
 money, space, power on buying undersized replacement disks, or else do
 silly and dangerously confusing things with slices.  Therefore in that
 case I will leave the smaller disks out of ZFS and add only 1TB
 devices to these immutable vdev's.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Andrius

Bob Friesenhahn wrote:

On Mon, 16 Jun 2008, Andrius wrote:
Thanks! It works. Volume managagement is that thing that does not 
exist in zfs perhaps and made disk managemet more easy. Thanks for 
everybody for advices.


Volume Manager should be off before creating pools in removable disks.


Probably it will work to edit /etc/vold.conf and comment out the line

use rmdisk drive /dev/rdsk/c*s2 dev_rmdisk.so rmdisk%d

Then do

kill -HUP `pgrep vold`

Otherwise cdroms and other valuable devices won't be mounted.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/




After commenting
# kill -HUP 'pgrep vold'
kill: invalid id


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Rich Teer
On Mon, 16 Jun 2008, Andrius wrote:

 After commenting
 # kill -HUP 'pgrep vold'
 kill: invalid id

We're in the 21st century, so

# pkill -HUP vold

should work just fine.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Lida Horn
Andrius wrote:
 Bob Friesenhahn wrote:
 On Mon, 16 Jun 2008, Andrius wrote:
 Thanks! It works. Volume managagement is that thing that does not 
 exist in zfs perhaps and made disk managemet more easy. Thanks for 
 everybody for advices.

 Volume Manager should be off before creating pools in removable disks.

 Probably it will work to edit /etc/vold.conf and comment out the line

 use rmdisk drive /dev/rdsk/c*s2 dev_rmdisk.so rmdisk%d

 Then do

 kill -HUP `pgrep vold`

 Otherwise cdroms and other valuable devices won't be mounted.

 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], 
 http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



 After commenting
 # kill -HUP 'pgrep vold'
 kill: invalid id
You used forward quotes, not back quotes.  Use ` not '.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Andrius

Bob Friesenhahn wrote:

On Mon, 16 Jun 2008, Andrius wrote:




After commenting
# kill -HUP 'pgrep vold'
kill: invalid id


It looks like you used forward quotes rather than backward quotes.

I did just try this procedure myself with my own USB drive and it works 
fine.


Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/




That is true, but
# kill -HUP `pgrep vold`
usage: kill [ [ -sig ] id ... | -l ]


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Brian H. Nelson


Andrius wrote:

 That is true, but
 # kill -HUP `pgrep vold`
 usage: kill [ [ -sig ] id ... | -l ]



I think you already did this as per a previous message:

# svcadm disable volfs

As such, vold isn't running. Re-enable the service and you should be fine.
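
i.e.:

# svcadm enable volfs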


-Brian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [SOLVED] USB hard to ZFS

2008-06-16 Thread Andrius

Brian H. Nelson wrote:



Andrius wrote:


That is true, but
# kill -HUP `pgrep vold`
usage: kill [ [ -sig ] id ... | -l ]




I think you already did this as per a previous message:

# svcadm disable volfs

As such, vold isn't running. Re-enable the service and you should be fine.


-Brian




Cool! Thanks. Another question arose: how to transfer (or copy) file 
systems from one pool to another - but I hope to find that in the manuals.


--
Regards,
Andrius Burlega

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread Chris Siebenmann
| I guess I find it ridiculous you're complaining about ram when I can
| purchase 4gb for under 50 dollars on a desktop.
|
| Its not like were talking about a 500 dollar purchase.

 'On a desktop' is an important qualification here. Server RAM is
more expensive, and then you get to multiply it by the number of
servers you are buying. It does add up.

- cks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] memory hog

2008-06-16 Thread Tim
Remind me again how much a Veritas license is.  If you can't find RAM for
less than that, you need to find a new VAR/disti.





On 6/16/08, Chris Siebenmann [EMAIL PROTECTED] wrote:
 | I guess I find it ridiculous you're complaining about ram when I can
 | purchase 4gb for under 50 dollars on a desktop.
 |
 | Its not like were talking about a 500 dollar purchase.

  'On a desktop' is an important qualification here. Server RAM is
 more expensive, and then you get to multiply it by the number of
 servers you are buying. It does add up.

   - cks
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Mirror Problem

2008-06-16 Thread Richard Elling
Matthew C Aycock wrote:
 Well, I have a zpool created that contains four vdevs. Each Vdev is a mirror 
 of a T3B lun and a corresponding lun of a SE3511 brick. I did this since I 
 was new with ZFS and wanted to ensure that my data would survive an array 
 failure. It turns out that I was smart for doing this :)

 I had a hardware failure on the SE3511 that caused the complete RAID5 lun on 
 the se3511 to die. (The first glance showed 6 drives failed :( ) However, I 
 would have expected that ZFS would detect the failed mirror halves and 
 offline them as would ODS and VxVM. To my shock, it basically hung the 
 server. I eventually had to unmap the SE3511 luns and replace them space I 
 had available from another brick in the SE3511. I then did a zpool replace 
 and ZFS reslivered the data.

 So, why did ZFS hang my server?
   

It was patiently waiting.

 This is on Solaris 11/06 kernel patch 127111-05 and ZFS version 4.
  
   

Additional failure management improvements were integrated
into NV b72 (IIRC).  I'm not sure when or if those changes will
make it into Solaris 10, but update 6 would be a good guess.
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with missing disk in RaidZ

2008-06-16 Thread Peter Hawkins
Tried zpool replace. Unfortunately that takes me back into the cycle where as 
soon as the resilver starts the system hangs, not even CAPS Lock works. When I 
reset the system I have about a 10 second window to detach the device again to 
get the system back before it freezes. Finally detached it so I'm back where I 
started.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Mirror Problem

2008-06-16 Thread Erik Trimble
Richard Elling wrote:
 Matthew C Aycock wrote:
   
 Well, I have a zpool created that contains four vdevs. Each Vdev is a mirror 
 of a T3B lun and a corresponding lun of a SE3511 brick. I did this since I 
 was new with ZFS and wanted to ensure that my data would survive an array 
 failure. It turns out that I was smart for doing this :)

 I had a hardware failure on the SE3511 that caused the complete RAID5 lun on 
 the se3511 to die. (The first glance showed 6 drives failed :( ) However, I 
 would have expected that ZFS would detect the failed mirror halves and 
 offline them as would ODS and VxVM. To my shock, it basically hung the 
 server. I eventually had to unmap the SE3511 luns and replace them space I 
 had available from another brick in the SE3511. I then did a zpool replace 
 and ZFS reslivered the data.

 So, why did ZFS hang my server?
   
 

 It was patiently waiting.

   
 This is on Solaris 11/06 kernel patch 127111-05 and ZFS version 4.
  
   
 

 Additional failure management improvements were integrated
 into NV b72 (IIRC).  I'm not sure when or if those changes will
 make it into Solaris 10, but update 6 would be a good guess.
  -- richard
   

My understanding talking with the relevant folks is that the fix will be 
in 10 Update 6, but not likely available as a patch beforehand.

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] LSI SAS/SATA card and MB compatibility questions?

2008-06-16 Thread Aaron Moore
Hello,

I am new to OpenSolaris and am trying to set up a ZFS-based storage solution.

I am looking at setting up a system with the following specs:

Intel BOXDG33FBC 
Intel Core 2 Duo 2.66Ghz
2 or 4 GB ram

For the drives I am looking at using a 
LSI SAS3081E-R 

I've been reading around and it sounds like LSI solutions work well in terms of 
compatibility with Solaris. Could someone help me verify this?

Or are there any alternate cards I should be looking at?

I'm looking at having a max of 12 HDs, so I'd use this card in conjunction with 
another 2- or 4-port card.

My other option is to get 3 PCI- or PCIe-based 4-port cards, which I am open to. 
I'm just trying to keep the cost low.

Thank you
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LSI SAS/SATA card and MB compatibility questions?

2008-06-16 Thread James C. McPherson
Aaron Moore wrote:
 I am new to open solaris and am trying to setup a ZFS based storage
 solution.
 
 I am looking at setting up a system with the following specs:
 
 Intel BOXDG33FBC Intel Core 2 Duo 2.66Ghz 2 or 4 GB ram
 
 For the drives I am looking at using a LSI SAS3081E-R
 
 I've been reading around and it sounds like LSI solutions work well in
 terms of compatability with solaris. Could someone help me verify this?
 
 Or are there any alternate cards I should be looking at?
 
 I'm looking at having a max of 12 HDs so I'd use this card in conjunction
 with another 2 or 4 port card.
 
 My other option is to get 3 PCI or PCIE based 4 port cards which I am
 open to. I'm just trying to keep the cost low.

In general, LSI cards based on the 1064 or 1068 chips should
work out of the box with the Sun-supplied mpt(7d) driver.
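
Once the card is in, one quick sanity check (just a suggestion) is to
confirm it got claimed by mpt:

# prtconf -D | grep -i mpt

and then the disks behind it should show up in format(1M).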

I think your motherboard and CPU choice are fine, and I encourage
you to stuff as much RAM as possible onto the board ;)


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss