[zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Lars-Gunnar Persson
I'm trying to implement a Nexsan SATABeast (an external disk array,  
read more: http://www.nexsan.com/satabeast.php, 14 disks available)  
with a Sun Fire X4100 M2 server running Solaris 10 u6 (connected via  
fiber) and have a couple of questions:


(My motivation for this is the corrupted ZFS volume discussion I had  
earlier with no result, and this time I'm trying to make a more robust  
implementation)


1. On the external disk array, I'm not able to configure JBOD or RAID 0
or 1 with just one disk, and I can't find any way for my Solaris server
to access the disks directly, so I have to configure some RAID sets on
the SATABeast. I was thinking of striping two disks in each RAID set and
then adding all 7 of them to one zpool as a raidz. The problem with this
is that if one disk breaks down I'll lose a whole RAID 0 set, but maybe
ZFS can handle that? Should I instead implement RAID 5 sets on the
SATABeast and export those to the Solaris machine? 14 disks would give
me 4 RAID 5 volumes and 2 spare disks, but I'd lose a lot of disk space.
What about creating larger RAID volumes on the SATABeast, say 3 RAID
volumes: two with 5 disks each and one with 4? I'm really not sure what
to choose ... At the moment I've striped two disks in one RAID volume.


2. After reading about cache flushes in the ZFS Evil Tuning Guide
(http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes)
I checked the cache configuration on the SATABeast. I can change these
settings:


System Admin
Configure Cache

Cache Configuration
Current write cache state: Enabled, FUA ignored - 495 MB
Manually override current write cache status: [ ] Force write cache to  
Disabled

Desired write cache state: [X] Enabled [ ] Disabled
Allow attached host to override write cache configuration: [ ]
Ignore force unit access (FUA) bit: [X]
Write cache streaming mode: [ ]
Cache optimization setting:
 [ ] Random access
 [X] Mixed sequential/random
 [ ] Sequential access

And from the help section:

Write cache will normally speed up host writes: data is buffered in
the RAID controller's memory when the installed disk drives are not
ready to accept the write data. The RAID controller's write cache
memory is battery backed, which allows any unwritten array data to be
kept intact during a power failure. When power is restored, this
battery-backed data is flushed out to the RAID array.


Current write cache state - This is the current state of the write  
cache that the RAID system is using.


Manually override current write cache status - This allows the write
caching to be forced on or off by the user; the change takes effect
immediately.


Desired write cache state - This is the state of the write cache the  
user wishes to have after boot up.


Allow attached host to override write cache configuration - This  
allows the host system software to issue commands to the RAID system  
via the host interface that will either turn off or on the write  
caching.


Ignore force unit access (FUA) bit - When the force unit access
(FUA) bit is set by a host system on a per-command basis, data is
written/read directly to/from the disks without using the onboard
cache. This incurs a time overhead but guarantees that the data is on
the media. Set this option to force the controller to ignore the FUA
bit so that command execution times are more consistent.


Write cache streaming mode - When the write cache is configured in  
streaming mode (check box ticked), the system continuously flushes  
the cache (it runs empty). This provides maximum cache buffering to  
protect against raid system delays adversely affecting command  
response times to the host.
When the write cache operates in non-streaming mode (check box not  
ticked) the system runs with a full write cache to maximise cache  
hits and maximise random IO performance.


Cache optimization setting - The cache optimization setting adjusts  
the cache behaviour to maximize performance for the expected host I/ 
O pattern.


Note that the write cache will be flushed 5 seconds after the last  
host write. It is recommended that all host activity is stopped 30  
seconds before powering the system off.
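
For reference, the tuning that the Evil Tuning Guide describes for
arrays whose write cache is truly non-volatile (battery backed, as
above) is to stop ZFS from issuing cache-flush requests; a minimal
sketch:

* /etc/system entry, takes effect after reboot; only appropriate when
* the array's write cache is genuinely non-volatile
set zfs:zfs_nocacheflush = 1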


Any thoughts about this?

Regards,

Lars-Gunnar Persson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool vs df

2009-03-09 Thread Lars-Gunnar Persson
I have an interesting situation. I've created two pools, one named Data
and another named raid5. Check the details here:


bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T    892G  91%  ONLINE  -
raid5  10.9T   145K   10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df  
command, it reports:


bash-3.00# df -h /Data
Filesystem size   used  avail capacity  Mounted on
Data          11T   108M   154G     1%    /Data
bash-3.00# df -h /raid5
Filesystem size   used  avail capacity  Mounted on
raid5        8.9T    40K   8.9T     1%    /raid5

You see that Data shows 11 TB here while zpool reported 10.7 TB, and
raid5 shows 10.9 TB in zpool but only 8.9 TB with df. That's a
difference of 2 TB. Where did it go?


Any explanation would be appreciated.

Regards,

Lars-Gunnar Persson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool vs df

2009-03-09 Thread Tomas Ögren
On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:

 I have an interesting situation. I've created two pools, one named Data
 and another named raid5. Check the details here:

 bash-3.00# zpool list
 NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
 Data   10.7T  9.82T    892G  91%  ONLINE  -
 raid5  10.9T   145K   10.9T   0%  ONLINE  -

 As you see, the sizes are approximately the same. If I run the df  
 command, it reports:

 bash-3.00# df -h /Data
 Filesystem size   used  avail capacity  Mounted on
 Data          11T   108M   154G     1%    /Data
 bash-3.00# df -h /raid5
 Filesystem size   used  avail capacity  Mounted on
 raid5        8.9T    40K   8.9T     1%    /raid5

 You see that Data shows 11 TB here while zpool reported 10.7 TB, and
 raid5 shows 10.9 TB in zpool but only 8.9 TB with df. That's a
 difference of 2 TB. Where did it go?

To your raid5 (raidz) parity.

Check 'zpool status' to see how your two pools differ: zpool list shows
the raw disk space you have; zfs list/df show how much you can store there.

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS GSoC ideas page rough draft

2009-03-09 Thread C.


Here's my rough draft of GSoC ideas

http://www.osunix.org/docs/DOC-1022

Also want to thank everyone for their feedback.

Please keep in mind that we only have a few days to put together a
stronger application.


We still need to:

1) Find more mentors.  (Please add your name to the doc or confirm via 
email and which idea you're most interested in)
2) Add contacts from each organization that may be interested 
(OpenSolaris, FreeBSD...)
3) Finalize the application, student checklist, mentor checklist and 
template
4) Start to give ideas for very accurate project descriptions/details 
(We have some time for this)


Thanks

./Christopher

---
Community driven OpenSolaris Technology - http://www.osunix.org
blog: http://www.codestrom.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool vs df

2009-03-09 Thread Lars-Gunnar Persson

Here is what zpool status reports:

bash-3.00# zpool status
  pool: Data
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
Data ONLINE   0 0 0
  c4t5000402001FC442Cd0  ONLINE   0 0 0

errors: No known data errors

  pool: raid5
 state: ONLINE
 scrub: none requested
config:

NAME                               STATE     READ WRITE CKSUM
raid5                              ONLINE       0     0     0
  raidz1                           ONLINE       0     0     0
    c7t6000402001FC442C609DCA22d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCA4Ad0  ONLINE       0     0     0
    c7t6000402001FC442C609DCAA2d0  ONLINE       0     0     0
    c7t6000402001FC442C609DCABFd0  ONLINE       0     0     0
    c7t6000402001FC442C609DCADBd0  ONLINE       0     0     0
    c7t6000402001FC442C609DCAF8d0  ONLINE       0     0     0


errors: No known data errors
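
A rough sanity check on the numbers, given the six-device raidz1 shown
above:

   10.9 TB raw / 6 devices    ~= 1.8 TB per device
   raidz1 parity               = one device's worth of space
   usable ~= 5/6 * 10.9 TB    ~= 9.1 TB

which is close to the 8.9 TB that zfs/df report; the remaining gap is
metadata, reservations and rounding. In short, zpool list counts the raw
space including parity, while df shows the estimated usable space.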


On 9. mars. 2009, at 14.29, Tomas Ögren wrote:


On 09 March, 2009 - Lars-Gunnar Persson sent me these 1,1K bytes:


I have an interesting situation. I've created two pools, one named Data
and another named raid5. Check the details here:

bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
Data   10.7T  9.82T    892G  91%  ONLINE  -
raid5  10.9T   145K   10.9T   0%  ONLINE  -

As you see, the sizes are approximately the same. If I run the df
command, it reports:

bash-3.00# df -h /Data
Filesystem size   used  avail capacity  Mounted on
Data          11T   108M   154G     1%    /Data
bash-3.00# df -h /raid5
Filesystem size   used  avail capacity  Mounted on
raid5        8.9T    40K   8.9T     1%    /raid5

You see that Data shows 11 TB here while zpool reported 10.7 TB, and
raid5 shows 10.9 TB in zpool but only 8.9 TB with df. That's a
difference of 2 TB. Where did it go?


To your raid5 (raidz) parity.

Check 'zpool status' to see how your two pools differ: zpool list shows
the raw disk space you have; zfs list/df show how much you can store there.

/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Lars-Gunnar Persson
IT-sjef
Nansen senteret for miljø og fjernmåling
Adresse  : Thormøhlensgate 47, 5006 Bergen
Direkte  : 55 20 58 31, sentralbord: 55 20 58 00, fax: 55 20 58 01
Internett: http://www.nersc.no, e-post: lars-gunnar.pers...@nersc.no

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Other zvols for swap and dump?

2009-03-09 Thread Casper . Dik


Can you use a different zvol for dump and swap rather than using the swap
and dump zvol created by liveupgrade?

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Other zvols for swap and dump?

2009-03-09 Thread Darren J Moffat

casper@sun.com wrote:


Can you use a different zvol for dump and swap rather than using the swap
and dump zvol created by liveupgrade?


Yes you can.

Swap just uses a normal ZVOL.  Dump uses a special one.

When you run dumpadm to change/set the dump device to a zvol it will 
dumpify it so it can be used for dumping too.
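
A minimal sketch of doing that by hand (the zvol names and the 4G size
here are only placeholders):

# zfs create -V 4G rpool/dump2           # ordinary zvol; dumpadm will dumpify it
# dumpadm -d /dev/zvol/dsk/rpool/dump2
# zfs create -V 4G rpool/swap2
# swap -a /dev/zvol/dsk/rpool/swap2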


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Bob Friesenhahn

On Mon, 9 Mar 2009, Lars-Gunnar Persson wrote:


1. On the external disk array, I'm not able to configure JBOD or RAID 0 or 1
with just one disk, and I can't find any way for my Solaris server to access
the disks directly, so I have to configure some RAID sets on the SATABeast. I
was thinking of striping two disks in each RAID set and then adding all 7 of
them to one zpool as a raidz. The problem with this is that if one disk breaks
down I'll lose a whole RAID 0 set, but maybe ZFS can handle that? Should I
instead implement RAID 5 sets on the SATABeast and export those to the Solaris
machine? 14 disks would give me 4 RAID 5 volumes and 2 spare disks, but I'd
lose a lot of disk space. What about creating larger RAID volumes on the
SATABeast, say 3 RAID volumes: two with 5 disks each and one with 4? I'm
really not sure what to choose ... At the moment I've striped two disks in one
RAID volume.


Your idea to stripe two disks per LUN should work.  Make sure to use 
raidz2 rather than plain raidz for the extra reliability.  This 
solution is optimized for high data throughput from one user.


An alternative is to create individual RAID 0 LUNs which actually 
only contain a single disk.  Then implement the pool as two raidz2s 
with six LUNs each, and two hot spares.  That would be my own 
preference.  Due to ZFS's load sharing across vdevs this should provide better 
performance (perhaps 2X) for multi-user loads.  Some testing may be 
required to make sure that your hardware is happy with this.
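
A rough sketch of that layout, with placeholder names standing in for
the 14 single-disk LUNs (ZFS then stripes writes across the two raidz2
vdevs):

# zpool create tank \
    raidz2 lun0 lun1 lun2 lun3 lun4 lun5 \
    raidz2 lun6 lun7 lun8 lun9 lun10 lun11 \
    spare lun12 lun13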


Avoid RAID5 if you can because it is not as reliable with today's 
large disks and the resulting huge LUN size can take a long time to 
resilver if the RAID5 should fail (or be considered to have failed). 
There is also the issue that a RAID array bug might cause transient 
wrong data to be returned and this could cause confusion for ZFS's own 
diagnostics/repair and result in useless repairs.  If ZFS reports a 
problem but the RAID array says that the data is fine, then there is 
confusion, finger-pointing, and likely a post to this list.  If you 
are already using ZFS, then you might as well use ZFS for most of the 
error detection/correction as well.


These are my own opinions and others will surely differ.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread Richard Morris - Sun Microsystems - Burlington United States

On 03/08/09 23:16, Blake wrote:

I think it's filesystems, not snapshots, that take a long time to
enumerate.  (If I'm wrong, somebody correct me :)


The time needed to iterate through the same number of snapshots and
filesystems should be about the same.  However, whenever any of the
ZFS filesystems or snapshots to be listed are not already cached in
memory, it does take time for zfs list to load them from disk.  Some
prefetching has been added to help speed this up.


No extra time is needed to boot a system with thousands of filesystems
or snapshots.  However, ZFS mounts filesystems by default and it does
take time for thousands of filesystems to be mounted.  Changes have
been made to speed this up by reducing the number of mnttab lookups.

And zfs list has been changed to no longer show snapshots by default.
But it still might make sense to limit the number of snapshots saved:
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
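
For reference, on builds with that change the commands look roughly
like this (on older builds the first form lists snapshots as well):

# zfs list                # file systems and volumes only
# zfs list -t snapshot    # list snapshots explicitly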

-- Rich


On Sun, Mar 8, 2009 at 10:10 PM, mike mike...@gmail.com wrote:
  

I do a daily snapshot of two filesystems, and over the past few months
it's obviously grown to a bunch.

zfs list shows me all of those.

I can change it to use the -t flag to not show them, so that's good.
However, I'm worried about boot times and other things.

Will it get to a point with 1000's of snapshots that it takes a long
time to boot, or do any sort of sync or scrub activities?

Thanks :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to make a ZFS pool with discs of the other machines of the LAN?

2009-03-09 Thread Thiago Martins
I'll drop AoE in favor of iSCSI to export my 20 disks from Linux Debian
Lenny to OpenSolaris 2008.11. I believe this setup will be more compatible
with OpenSolaris.

Thanks!
Thiago
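
For what it's worth, the OpenSolaris (initiator) side of such a setup
would look roughly like this once the Linux boxes export their disks as
iSCSI targets; the address and device names below are placeholders:

# iscsiadm add discovery-address 192.168.1.10:3260   # one per Linux target host
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi                                  # create device nodes for the LUNs
# format                                             # note the new c*t*d* names
# zpool create bigpool raidz2 c2t...d0 c2t...d1 ...  # then build one pool from them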

- Thiago Martins thiago.mart...@worldweb.com.br escreveu:

 Sriram,
 
 - Sriram Narayanan sri...@belenix.org escreveu:
 
  On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web
  thiago.mart...@worldweb.com.br wrote:
   Hi!
  
    Today I have ten computers with Xen and Linux, each with 2 discs
 of
  500G in raid1, each node sees only its own raid1 volume, I do not
 have
  live motion of my virtual machines... and moving the data from one
  hypervisor to another is a pain task...
  
    Now that I discovered this awesome file system! I want that the
 ZFS
  manages all my discs in a network environment.
  
    But I don't know the best way to make a pool using all my 20
 discs
  in one big pool with 10T of capacity.
  
    My first contact with Solaris, was with the OpenSolaris 2008.11,
 as
  a virtual machine (paravirtual domU) on a Linux (Debian 5.0) dom0. I
  also have more opensolaris on real machines to make the tests...
  
    I'm thinking in export all my 20 discs, through the AoE protocol,
  and in my dom0 that I'm running the opensolaris domU (in HA through
  the Xen), I will make the configuration file for it (zfs01.cfg) with
  20 block devices of 500G and inside the opensolaris domu, I will
 share
  the pool via iSCSI targets and/or NFS back to the domUs of my
  cluster...  Is this a good idea?
 
  I share a three disk pool over NFS for some VMWare ESXi based
  hosting.
 Are these three discs on the same machine or on three distinct ones?
 I mean, does your pool have three local discs or remote ones?
 
  There is considerably high disk I/O caused by the apps that run on
  these VMs. ZFS + NFS is working fine for me.
 
  I intend to experiment with iSCSI later when I free up some machines
  for such an experiment.
 
  -- Sriram
 
 Thanks!
 -
 Thiago
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
Well, I could just use the same script that creates my daily snapshot
to also remove the snapshot with the same prefix but an older date
(say, keep only 30 days or something).

My hope was to just keep a running archive indefinitely. But I guess
snapshots are only as good as needed, and I doubt I will realize I
need a file I lost 6+ months ago...

On Mon, Mar 9, 2009 at 11:49 AM, Richard Morris - Sun Microsystems -
Burlington United States richard.mor...@sun.com wrote:
 On 03/08/09 23:16, Blake wrote:

 I think it's filesystems, not snapshots, that take a long time to
 enumerate.  (If I'm wrong, somebody correct me :)

 The time needed to iterate through the same number of snapshots and
 filesystems should be about the same.  However, whenever any of the
 ZFS filesystems or snapshots to be listed are not already cached in
 memory, then it does take time for zfs list to load them from disk.
 However, some prefetching has been added to help speed this up.

 No extra time is needed to boot a system with thousands of filesystems
 or snapshots.  However, ZFS mounts filesystems by default and it does
 take time for thousands of filesystems to be mounted.  Changes have
 been made to speed this up by reducing the number of mnttab lookups.

 And zfs list has been changed to no longer show snapshots by default.
 But it still might make sense to limit the number of snapshots saved:
 http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10

 -- Rich

 On Sun, Mar 8, 2009 at 10:10 PM, mike mike...@gmail.com wrote:


 I do a daily snapshot of two filesystems, and over the past few months
 it's obviously grown to a bunch.

 zfs list shows me all of those.

 I can change it to use the -t flag to not show them, so that's good.
 However, I'm worried about boot times and other things.

 Will it get to a point with 1000's of snapshots that it takes a long
 time to boot, or do any sort of sync or scrub activities?

 Thanks :)
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread Richard Elling

mike wrote:

Well, I could just use the same script to create my daily snapshot to
remove a snapshot with the same prefix, just different date (say, keep
30 days only or something)
  


NB the autosnapshot feature (aka Time Slider Manager) already
has this capability.
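
A minimal sketch of leaning on it instead of a hand-rolled script (the
service names match the svcprop output later in this thread; the value
30 is just an example):

# svcadm enable auto-snapshot:daily
# svccfg -s auto-snapshot:daily setprop zfs/keep = astring: 30
# svcadm refresh auto-snapshot:daily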


My hope was to just keep a running archive indefinitely. But I guess
snapshots are only as good as needed, and I doubt I will realize I
need a file I lost 6+ months ago...
  


famous last words... :-)
-- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jan Hlodan
Hello,

I am desperate. Today I realized that my OpenSolaris build 108 system
doesn't want to boot. I have no idea what I screwed up; I upgraded to
build 108 last week without any problems.
Here is where I'm stuck:

Reading ZFS config: done.
Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is
not empty (17/17)

$ svcs -x
svc:/system/filesystem/local:default (local file system mounts)
State: maintenance since ...
Impact: 45 dependent services are not running.

svc:/network/rpc/smserver:default (removable media management)
State: uninitialized since...
Impact: 2 dependent services are not running.

$ zfs mount

rpool/ROOT/opensolaris-2        /
rpool/export/home               /export/home/wewek
rpool                           /rpool
tank                            /tank
tank/projects                   /tank/projects
.
.
.
.

$ pfexec zfs mount -a
cannot mount '/export': directory is not empty

I can mount rpool/export with the -O option, but then I lose my home
directory and Gnome can't come up.
$ pfexec zfs mount -O rpool/export
$ svcadm clear filesystem/local:default
Mounting ZFS filesystems: (17/17)

GDM login comes up, after log in:

Your home directory is listed as: '/export/home/wewek' but it doesn't
appear to exist. Do you want to log in with the / (root) directory as
your home directory? It's unlikely anything will work unless you use a
failsafe session.

Can you help me please? I don't want to lose all my configurations.

Thank you!



Regards,

Jan Hlodan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
Bear in mind I am not that comfortable with smf and manifests and
Solaris userland and such... I wouldn't want to be messing up anything
or not setting it up correctly.

My little PHP script (yes, I use PHP for shell scripting) does a
perfect job of what I need and I got a neat idea from Brad Stone about
rolling up daily snapshots into monthly snapshots, which would roll up
into yearly snapshots...

On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling richard.ell...@gmail.com wrote:
 mike wrote:

 Well, I could just use the same script to create my daily snapshot to
 remove a snapshot with the same prefix, just different date (say, keep
 30 days only or something)


 NB the autosnapshot feature (aka Time Slider Manager) already
 has this capability.

 My hope was to just keep a running archive indefinitely. But I guess
 snapshots are only as good as needed, and I doubt I will realize I
 need a file I lost 6+ months ago...


 famous last words... :-)
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Tomas Ögren
On 09 March, 2009 - Jan Hlodan sent me these 1,7K bytes:

 Hello,
 
 I am desperate. Today I realized that my OS 108 doesn't want to boot.
 I have no idea what I screwed up. I upgraded on 108 last week without
 any problems.
 Here is where I'm stuck:
 
 Reading ZFS config: done.
 Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is
 not empty (17/17)
 
 $ svcs -x
 svc:/system/filesystem/local:default (local file system mounts)
 State: maintenance since ...
 Impact: 45 dependent services are not running.
 
 svc:/network/rpc/smserver:default (removable media management)
 State: uninitialized since...
 Impact: 2 dependent services are not running.
 
 $ zfs mount
 
 rpool/ROOT/opensolaris-2        /
 rpool/export/home               /export/home/wewek
 rpool                           /rpool
 tank                            /tank
 tank/projects                   /tank/projects
 .
 .
 .
 .
 
 $ pfexec zfs mount -a
 cannot mount '/export': directory is not empty
 
 I can mount rpool/export by -O option but I'll lost my home directory
 and Gnome can't come up.
 $ pfexec zfs mount -O rpool/export
 $ svcadm clear filesystem/local:default
 Mounting ZFS filesystems: (17/17)
 
 GDM login comes up, after log in:
 
 Your home directory is listed as: '/export/home/wewek' but it doesn't
 appear to exist. Do you want to log in with the / (root) directory as
 your home directory? It's unlikely anything will work unless you use a
 failsafe session.
 
 Can you help me please? I don't want to loose all my configurations.

It seems like you have some stuff in /export which does not belong to
the filesystem that should be mounted on /export.

That is, you have some /export/somefileordirectory that belongs to the /
filesystem. Try this:

pfexec mv /export /oldexport
pfexec mkdir /export
pfexec zfs mount -a
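
Before moving things around it may also be worth checking what is
actually sitting in the directory and which dataset is supposed to own
the mountpoint, for example:

ls -lA /export
zfs list -o name,mountpoint,mounted -r rpool/export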

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread Richard Elling

mike wrote:

Bear in mind I am not that comfortable with smf and manifests and
Solaris userland and such... I wouldn't want to be messing up anything
or not setting it up correctly.
  


For the benefit of others who may be lurking, the policies are:
    svc                      interval   keep
    -----------------------------------------
    auto-snapshot:frequent   15 mins       4
    auto-snapshot:hourly      1 hours     24
    auto-snapshot:daily       1 days      31
    auto-snapshot:weekly      7 days       4
    auto-snapshot:monthly     1 months    12


My little PHP script (yes, I use PHP for shell scripting) does a
perfect job of what I need and I got a neat idea from Brad Stone about
rolling up daily snapshots into monthly snapshots, which would roll up
into yearly snapshots...
  


For scripting wizards, the same table is available from:

$ for i in frequent hourly daily weekly monthly; do
    echo $i $(svcprop -p zfs/period auto-snapshot:$i) $(svcprop -p zfs/interval auto-snapshot:$i) keep $(svcprop -p zfs/keep auto-snapshot:$i)
  done
frequent 15 minutes keep 4
hourly 1 hours keep 24
daily 1 days keep 31
weekly 7 days keep 4
monthly 1 months keep 12

-- richard



On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling richard.ell...@gmail.com wrote:
  

mike wrote:


Well, I could just use the same script to create my daily snapshot to
remove a snapshot with the same prefix, just different date (say, keep
30 days only or something)

  

NB the autosnapshot feature (aka Time Slider Manager) already
has this capability.



My hope was to just keep a running archive indefinitely. But I guess
snapshots are only as good as needed, and I doubt I will realize I
need a file I lost 6+ months ago...

  

famous last words... :-)
-- richard





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Frank Cusack
On March 9, 2009 12:06:40 PM +0100 Lars-Gunnar Persson 
lars-gunnar.pers...@nersc.no wrote:

I'm trying to implement a Nexsan SATABeast

...

1. On the external disk array, I'm not able to configure JBOD or RAID 0 or
1 with just one disk.


Exactly why I didn't buy this product.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jan Hlodan
Hi Tomas,

thanks for the answer.
Unfortunately, it didn't help much.
I can now mount all the file systems, but the system is broken - the
desktop won't come up:

Could not update ICEauthority file /.ICEauthority
There is a problem with the configuration server.
(/usr/lib/gconf-check-2 exited with status 256)

Then I can see the wallpaper and the cursor. That's it, nothing more.

Regards,

Jan Hlodan


Tomas Ögren wrote:
 On 09 March, 2009 - Jan Hlodan sent me these 1,7K bytes:

   
 Hello,

 I am desperate. Today I realized that my OS 108 doesn't want to boot.
 I have no idea what I screwed up. I upgraded on 108 last week without
 any problems.
 Here is where I'm stuck:

 Reading ZFS config: done.
 Mounting ZFS filesystems: (1/17) cannot mount '/export': directory is
 not empty (17/17)

 $ svcs -x
 svc:/system/filesystem/local:default (local file system mounts)
 State: maintenance since ...
 Impact: 45 dependent services are not running.

 svc:/network/rpc/smserver:default (removable media management)
 State: uninitialized since...
 Impact: 2 dependent services are not running.

 $ zfs mount

  rpool/ROOT/opensolaris-2        /
  rpool/export/home               /export/home/wewek
  rpool                           /rpool
  tank                            /tank
  tank/projects                   /tank/projects
 .
 .
 .
 .

 $ pfexec zfs mount -a
 cannot mount '/export': directory is not empty

 I can mount rpool/export by -O option but I'll lost my home directory
 and Gnome can't come up.
 $ pfexec zfs mount -O rpool/export
 $ svcadm clear filesystem/local:default
 Mounting ZFS filesystems: (17/17)

 GDM login comes up, after log in:

 Your home directory is listed as: '/export/home/wewek' but it doesn't
 appear to exist. Do you want to log in with the / (root) directory as
 your home directory? It's unlikely anything will work unless you use a
 failsafe session.

 Can you help me please? I don't want to loose all my configurations.
 

 It seems like you have some stuff in /export  which does not belong to
 the filesystem that should be mounted in /export

 That is, you have /export/somefileordirectory  that belongs to the /
 filesystem.. Try this:

 pfexec mv /export /oldexport
 pfexec mkdir /export
 pfexec zfs mount -a

 /Tomas
   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/export' directory is not empty

2009-03-09 Thread Jason King
On Mon, Mar 9, 2009 at 5:31 PM, Jan Hlodan jan.hlo...@sun.com wrote:
 Hi Tomas,

 thanks for the answer.
 Unfortunately, it didn't help much.
 However I can mount all file systems, but system is broken - desktop
 wont come up.

 Could not update ICEauthority file /.ICEauthority
 There is a problem with the configuration serve.
 (/usr/lib/gconf-check-2-exited with status 256)

 Then I can see wallpaper and cursor. That's it, nothing more.

There's a bug with mounting hierarchical mounts (i.e. trying to mount
/export/home before /export or such); you might be hitting that
(unfortunately the bug ID escapes me at the moment).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to unwind raidz1 or zpool

2009-03-09 Thread Harry Putnam
I'm probably overlooking a lot of functionality in man zfs, but as
always it's difficult to really understand the various commands and
properties when lacking real experience.

After having a created zpool raidz1 from various parts of installed
disks, is there a command available to quickly see the underlying
architecture again?

For example:
Format shows these two disks
 0. c3d0 [...]
 1. c3d1 [...]


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to unwind raidz1 or zpool

2009-03-09 Thread Will Murnane
On Mon, Mar 9, 2009 at 18:48, Harry Putnam rea...@newsguy.com wrote:
 I'm probably overlooking a lot of functionality in man zfs but as
 always its difficult to really understand the various cmds and
 properties when lacking real experience.

 After having a created zpool raidz1 from various parts of installed
 disks, is there a command available to quickly see the underlying
 architecture again?
Does zpool status do what you want?

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to unwind raidz1 or zpool

2009-03-09 Thread Harry Putnam
[ Sorry to have inadvertently hit send on the earlier duplicate ]

I'm probably overlooking a lot of functionality in man zpool, but as
always it's difficult to really understand the various commands and
properties when lacking real experience.

After having created zpool raidz1 from various parts of installed
disks, is there a command available to quickly see the underlying
architecture again?

For example:
Format shows these two disks
 0. c3d0 [...]
 1. c3d1 [...]

c3d0 is divided into 2 fdisk partitions.
  (c3d0p1 c3d0p2)

c3d1 is divided into 3 fdisk partitions with the rest unpartitioned.
  (c3d1p1  c3d1p2  c3d1p3) 

rpool is on c3d0p1

Just as a learning exercise... I created a zpool raidz1:

  zpool create t1 raidz1 c3d0p2 c3d1p1 c3d1p2

I just divided up the disks to have several mock discs to work with.

It's nice that I don't really need to concern myself with that
underlying structure, but if I ever want to see it again, how can I
make zpool show it?
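
(As the reply earlier in this thread notes, zpool status shows it; for
the t1 pool above the output would look roughly like this, modulo
states and error counters:)

$ zpool status t1
  pool: t1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        t1          ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c3d0p2  ONLINE       0     0     0
            c3d1p1  ONLINE       0     0     0
            c3d1p2  ONLINE       0     0     0

errors: No known data errors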

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to unwind raidz1 or zpool

2009-03-09 Thread Harry Putnam
Will Murnane will.murn...@gmail.com writes:

 After having a created zpool raidz1 from various parts of installed
 disks, is there a command available to quickly see the underlying
 architecture again?

 Does zpool status do what you want?

It sure does... sorry about the line noise.  It wasn't obvious from
the examples I saw, since they were only run against one underlying
structure, and I just overlooked the disk names in the output...
Thanks.

ps- I tried to cancel that post since I let it get away before having
finished it... so there is another, unnecessarily windier semi-clone
in the thread.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Nexsan SATABeast and ZFS

2009-03-09 Thread Kees Nuyt
On Mon, 9 Mar 2009 12:06:40 +0100, Lars-Gunnar Persson
lars-gunnar.pers...@nersc.no wrote:

1. On the external disk array, I'm not able to configure JBOD or RAID 0
or 1 with just one disk.

In some arrays it seems to be possible to configure separate
disks by offering the array just one disk in one slot at a
time and, very importantly, leaving all other slots empty(!).

Repeat for as many disks as you have, seating each disk in
its own slot with all other slots empty.

(OK, it's just hearsay, but it might be worth a try with
the first 4 disks or so.)
-- 
  (  Kees Nuyt
  )
c[_]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss