Re: [zfs-discuss] Please trim posts

2010-06-20 Thread James C. McPherson

On 19/06/10 07:58 AM, Marion Hakanson wrote:

doug.lin...@merchantlink.com said:

Apparently, before Outlook there WERE no meetings, because it's clearly
impossible to schedule one without it.


Don't tell my boss, but I use Outlook for the scheduling, and fetchmail
plus procmail to download email out of Exchange and into my favorite
email client.  Thankfully, Exchange listens to incoming SMTP when I need
to send messages.



And please don't mail me with your favorite OSS solution.  I've tried them
all.  None of them integrate with Exchange *smoothly* and *cleanly*.  They're
all workarounds and kludges that are as annoying in the end as Outlook.


Hmm, what I'm doing doesn't _integrate_ with Exchange; it just bypasses
it for the email portion of my needs.  Non-OSS: Mac OS X 10.6 claims to
integrate with Exchange, although I have not yet tried it myself.



Could we all please STOP RESPONDING to this thread?

It's not about ZFS at all.


James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-20 Thread Roy Sigurd Karlsbakk
- Original Message -
 Thanks guys - I will take a look at those clustered file systems.
 
 My goal is not to stick with Windows - I would like to have a Storage
 pool for XenServer (free) so that I can have guests, but using a
 storage server (Opensolaris - ZFS) as the iSCSI storage pool.
 
 Any suggestions for added redundancy or failover? Also, I am not sure
 whether Shared Storage on XenServer has the same problem you mentioned
 NTFS has, where only one host can control the storage at a time.

You might want to look into glusterfs if you want a redundant storage system.
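
For reference, a minimal COMSTAR sketch for exposing a zvol to XenServer as an
iSCSI LUN; the pool and volume names below are made up, and this is not from
the original poster's setup:

  # Hypothetical pool/volume names; adjust the size to taste.
  zfs create -V 500G tank/xen-sr1
  svcadm enable -r svc:/network/iscsi/target:default
  sbdadm create-lu /dev/zvol/rdsk/tank/xen-sr1
  stmfadm add-view <GUID-printed-by-sbdadm>
  itadm create-target

Note that the LUN itself is still single-writer at the block level; concurrent
access from several hosts needs a cluster-aware layer on top, which is what
XenServer's shared LVM-over-iSCSI storage repository is designed to provide
for members of a XenServer pool.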

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] One dataset per user?

2010-06-20 Thread Roy Sigurd Karlsbakk
Hi all

We're working on replacing our current fileserver with something based on 
either Solaris or NexentaStor. We have about 200 users with variable needs. 
There will also be a few common areas for each department and perhaps a backup 
area. I think these should be separated with datasets, for simplicity and 
overview, but I'm not sure if it's a good idea.

I have read people are having problems with lengthy boot times with lots of 
datasets. We're planning to do extensive snapshotting on this system, so there 
might be close to a hundred snapshots per dataset, perhaps more. With 200 users 
and perhaps 10-20 shared department datasets, the number of filesystems, 
snapshots included, will be around 20k or more.

Will trying such a setup be betting on help from some god, or is it doable? The 
box we're planning to use will have 48 gigs of memory and about 1TB L2ARC 
(shared with SLOG, we just use some slices for that).
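
For concreteness, here is roughly what that layout could look like; the device
names, pool name, user names, and quota value are invented for illustration:

  # SSD split into two slices: one for the slog, one for L2ARC.
  zpool add tank log c4t0d0s0
  zpool add tank cache c4t0d0s1

  # One dataset per user (user list and quota are illustrative):
  for u in alice bob carol; do
      zfs create -o quota=20G tank/home/$u
  done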

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases, adequate and relevant synonyms exist
in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-20 Thread Bob Friesenhahn

On Sat, 19 Jun 2010, Richard Jahnel wrote:

For a certain brand of Indilinx-based drive I calculated the life span in
the following way. Based on the maximum sustained write speed of the
drive and the size of the drive (256GB, by the way), it would take 9
months to overwrite the entire drive 1 times at 100% busy
writing.


Did you consider the 'write amplification' factor?  For example, if 
the SSD uses a 4K erasure block but only one 512-byte sector is 
updated per write?  Tiny writes can wear out the SSD much faster than 
bulk write rates would suggest.
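
To put numbers on that worst case, using only the figures Bob mentions:

  # Every 512-byte host write forces a 4K flash rewrite:
  #   write amplification = flash bytes written / host bytes written
  echo '4096 / 512' | bc        # -> 8
  # i.e. wear can accrue up to 8x faster than an estimate based on the
  # drive's bulk sequential write rate would suggest.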



Just some food for thought.


More food!

MLC drives also generally seem to have higher write latency than SLC
drives.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] One dataset per user?

2010-06-20 Thread Arne Jansen

Roy Sigurd Karlsbakk wrote:


I have read people are having problems with lengthy boot times with lots of 
datasets. We're planning to do extensive snapshotting on this system, so there 
might be close to a hundred snapshots per dataset, perhaps more. With 200 users 
and perhaps 10-20 shared department datasets, the number of filesystems, 
snapshots included, will be around 20k or more.


In my experience the boot time mainly depends on the number of datasets, not the
number of snapshots. 200 datasets is fairly easy (we have 7000, but did
some boot-time tuning).



Will trying such a setup be betting on help from some god, or is it doable? The 
box we're planning to use will have 48 gigs of memory and about 1TB L2ARC 
(shared with SLOG, we just use some slices for that).


Try it. The main problem with having many snapshots is the time needed for
zfs list, because it has to scrape all the information from disk, but with
that much RAM/L2ARC it shouldn't be a problem here.
Another thing to consider is how frequently you plan to take the snapshots
and whether you want individual schedules for each dataset. Taking a
snapshot is a heavy-weight operation, as it terminates the current txg.
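
If the per-user datasets can share a schedule, a recursive snapshot keeps that
cost down to one atomic operation per run rather than one per dataset; the
dataset and snapshot names below are illustrative:

  zfs snapshot -r tank/home@hourly-201006201200
  zfs destroy  -r tank/home@hourly-201006131200   # expire an old run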

Btw, what did you plan to use as L2ARC/slog?

--Arne








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] One dataset per user?

2010-06-20 Thread David Magda

On Jun 20, 2010, at 11:55, Roy Sigurd Karlsbakk wrote:

There will also be a few common areas for each department and  
perhaps a backup area.


The backup area should be on a different set of disks.

IMHO, a backup isn't a backup unless it is an /independent/ copy of
the data. The copy can be made via ZFS send/recv, tar, rsync,
Legato/NetBackup, etc., but it needs to be on independent media.
Otherwise, if the original copy goes, so does the backup.
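
As a rough sketch of the send/recv route (hostnames, pool names, and snapshot
names are placeholders):

  zfs snapshot -r tank/home@backup-20100620
  zfs send -R tank/home@backup-20100620 | \
      ssh backuphost zfs receive -d backuppool

  # Subsequent runs only need to ship the delta since the last backup:
  zfs send -R -I @backup-20100613 tank/home@backup-20100620 | \
      ssh backuphost zfs receive -d backuppool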


I have read people are having problems with lengthy boot times with  
lots of datasets. We're planning to do extensive snapshotting on  
this system, so there might be close to a hundred snapshots per  
dataset, perhaps more. With 200 users and perhaps 10-20 shared  
department datasets, the number of filesystems, snapshots included,  
will be around 20k or more.


You may also want to consider breaking things up into different pools.
There seems to be an implicit assumption in this conversation that
everything will be in one pool, and that may not be the best course of
action.


Perhaps one pool for users' homedirs, and another for the departmental
stuff? Or perhaps even two different pools for homedirs, with users
'randomly' distributed between the two (though definitely don't
distribute alphabetically, which will be uneven, or by department,
since people transfer).


This could add a bit of overhead, but I don't think having two or three
pools would be much more of a big deal than one.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-20 Thread Richard Jahnel
TBH, write amplification was not considered, but since I've never heard of a
write amplification factor over 1.5, for my purposes the 256GB drives still
last well over the required 5-year life span.

Again, it hurts a lot more when you're using smaller drives, since there is
less space available for wear leveling.

I suppose for cache drives it will only be a minor annoyance when you have to
replace the drive, seeing as a cache failure won't lead to data loss. In my
mind it would be more of a concern for a slog drive.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] One dataset per user?

2010-06-20 Thread Ian Collins

On 06/21/10 03:55 AM, Roy Sigurd Karlsbakk wrote:

Hi all

We're working on replacing our current fileserver with something based on 
either Solaris or NexentaStor. We have about 200 users with variable needs. 
There will also be a few common areas for each department and perhaps a backup 
area. I think these should be separated with datasets, for simplicity and 
overview, but I'm not sure if it's a good idea.

I have read people are having problems with lengthy boot times with lots of 
datasets. We're planning to do extensive snapshotting on this system, so there 
might be close to a hundred snapshots per dataset, perhaps more. With 200 users 
and perhaps 10-20 shared department datasets, the number of filesystems, 
snapshots included, will be around 20k or more.

   
200 user filesystems isn't too big.  One of the systems I look after has 
about 1100 user filesystems with up to 20 snapshots each.  The impact on 
boot time is minimal.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs periodic writes on idle system [Re: Getting desktop to auto sleep]

2010-06-20 Thread Jürgen Keil
Why does zfs produce a batch of writes every 30 seconds on OpenSolaris b134
(every 5 seconds on a post-b142 kernel) when the system is idle?

On an idle OpenSolaris 2009.06 (b111) system,  /usr/demo/dtrace/iosnoop.d
shows no i/o activity for at least 15 minutes.

The same dtrace test on an idle b134 system shows a batch of writes every 30 
seconds.

And on current opensolaris bits, on an idle system, I see writes every 5 
seconds.


The periodic writes prevent the disk from entering power-save mode,
which breaks the /etc/power.conf autoS3 feature.  Why does zfs have
to write something to disk when the system is idle?
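
One way to attribute those writes is a sketch like the following, using the
stock DTrace io provider; ZFS's own txg-sync writes typically show up under
the kernel (execname 'sched'), often with no file pathname:

  dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/
      { @[execname, args[2]->fi_pathname] = count(); }'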



  Putting the flag does not seem to do anything to the
  system. Here is my power.conf file: 
 ...
  autopm  enable
  autoS3  enable
  S3-support  enable
 
 Problem seems to be that all power managed devices
 must be at their lowest power level, otherwise autoS3
 won't suspend the system.  And somehow one or more
 device does not reach the lowest power level.
...
 The laptop still does not power down, because every
 30 seconds there is a batch of writes to the hdd drive,
 apparently from zfs, and that keeps the hdd powered
 up.
 
 The periodic writes can be monitored with:
 
 dtrace -s /usr/demo/dtrace/iosnoop.d
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Scrub time dramatically increased

2010-06-20 Thread bonso
Hello all,
 I recently noticed that my storage pool has started to take a lot of time
finishing a scrub: approximately the final 10% takes 30m to finish, while the
previous 90% are done in as many minutes. The 'zpool status' command does not,
however, update its estimated remaining time. Currently 61% of the capacity
is in use, and I didn't observe this behaviour when it was ~58%.
 Monitoring the output of 'zpool iostat', I've learnt that during the later
part of a scrub the read speed drops to a tenth of what it is when the scrub
starts. Sure, disks perform differently on different parts of the platter, and
then there are seek times, but a tenth?? According to 'iostat' all looks good:
no errors or congestion.

 I'm considering moving data out of the pool to see if that affects the scrub
behaviour, but first I would like to ask the following. Has anyone here
experienced anything similar, and if so, what caused it? And is it possible to
monitor which file is currently being scrubbed?
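
Some standard ways to narrow this down while the slow phase is running (the
pool name below is a placeholder); as far as I know there is no supported way
to see the individual file being scrubbed:

  zpool iostat -v tank 10   # per-vdev throughput: is one disk lagging?
  iostat -xn 10             # look for a disk with much higher asvc_t / %b
  zpool status -v tank      # scrub progress and per-device error counts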

Thank you!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] One dataset per user?

2010-06-20 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
 
 Will trying such a setup be betting on help from some god, or is it
 doable? The box we're planning to use will have 48 gigs of memory and

There's nothing difficult about it.  Go ahead and test.

Personally, I don't see much value in using lots of separate filesystems.  
They're all in the same pool, right?  I use one big filesystem.

There are legitimate specific reasons to use separate filesystems in some 
circumstances.  But if you can't name one reason why it's better ... then it's 
not better for you.
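
For what it's worth, the sort of per-filesystem knobs that usually justify
the split look like this; the dataset names are invented:

  zfs set quota=10G tank/home/alice
  zfs set compression=on tank/dept/engineering
  zfs snapshot tank/home/alice@before-migration   # per-user snapshot/rollback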

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] One dataset per user?

2010-06-20 Thread James C. McPherson

On 21/06/10 12:58 PM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk

Will trying such a setup be betting on help from some god, or is it
doable? The box we're planning to use will have 48 gigs of memory and


There's nothing difficult about it.  Go ahead and test.

Personally, I don't see much value in using lots of separate filesystems.
They're all in the same pool, right?  I use one big filesystem.

There are legitimate specific reasons to use separate filesystems in some
circumstances.  But if you can't name one reason why it's better ...
then it's not better for you.

On the build systems that I maintain inside the firewall,
we mandate one filesystem per user, which is a great boon
for system administration. My management scripts run
considerably faster when I don't have to traverse whole
directory trees (as with ufs).
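
A hedged illustration of the difference (paths and pool names are made up):
with one filesystem per user, per-user accounting is a property lookup
instead of a tree walk.

  zfs list -o name,used,avail,quota -r tank/home   # near-instant
  du -sk /export/home/*                            # walks every file, ufs-style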



James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss