Re: [zfs-discuss] 100% kernel usage

2009-06-15 Thread arjun
Some more insight:

I have the following zpools set up:

aaa_zvol: 2 x 250 GB IDE drives in RAID-0

storage (raidz1):
   1 x 500 GB IDE
   1 x 500 GB SATA
   aaa_zvol/aaa_zvol (the zvol exported from the aaa_zvol pool)

When I run the array in a degraded mode, i.e. place one of the drives in the 
offline state, the kernel doesn't seem to spike. When I put the offline drive 
back online and resilver, the 100% kernel usage appears when transferring from 
the network to the fully operational array. 

I removed each of the drives in turn and tried this experiment, so it doesn't 
seem to come down to a specific drive/interface. 

I wonder if using a zvol as part of a raidz is an issue?
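For anyone wanting to reproduce the layering, this is roughly how such a pool gets built (a sketch only: the device names are made up for illustration, and the run wrapper just prints each command instead of executing it):

```shell
#!/bin/sh
# Sketch: a raidz1 vdev mixing physical disks with a zvol from another pool.
# Device names (c0d0, c0d1, c1d0, c2d0) are made up for illustration.
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

run zpool create aaa_zvol c0d0 c0d1            # 2 x 250 GB IDE stripe
run zfs create -V 250g aaa_zvol/aaa_zvol       # zvol carved from that pool
# The zvol's block device then joins real disks in a raidz1 vdev:
run zpool create storage raidz1 c1d0 c2d0 /dev/zvol/dsk/aaa_zvol/aaa_zvol
```

Layering one pool on top of another pool's zvol on the same host has been reported to cause deadlocks and erratic performance under memory pressure, so it is a plausible suspect here.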

Also, cross-posting to ZFS list.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-15 Thread Simon Breden
Sorry, I don't know what happened, but it seems I was not subscribed to 
receive replies for this thread, so I never saw people's replies to my original 
post... user error, probably. :)

jone, I think you hit the nail on the head: I *do* seem to remember issuing a 
'zfs mount -a' at some point, but unfortunately I don't remember why -- 
presumably as some panacea for file systems that failed to mount due to the 
problem mentioned in this thread.

For Richard Elling, this info might be useful:
- I originally had ZFS file systems as follows:
/tank/home
/tank/home/simon
/tank/home/simon/projects
/tank/home/simon/photo
/tank/home/simon/video

As part of the boot disk of SXCE (not OpenSolaris with its rpool zpool), I had 
the following:
/export/home/simon/...

Therefore, when I tried to change the mountpoint of /tank/home/simon/... to 
/export/home/simon/..., it failed as /export/home/simon etc. already existed.

It was at that point that the trouble started, but I don't remember if the 
problem occurred immediately at this point, or immediately afterwards when I 
tried to restore the mountpoint back to /tank/home/simon/...

I think 'jone' is correct in stating that there is some kind of 'race' 
condition occurring when ZFS tries to remount the file systems after this kind 
of error occurs (can't mount, blah blah directory not empty). 

If I were a betting man, I would bet that ZFS is trying to mount child file 
systems before parent file systems. I think I came to this conclusion by 
scanning various log files as root when I was in single-user mode after boot 
errors, and I saw that it was trying to mount e.g. 
/tank/home/simon/projects before /tank/home/simon. So it looks like some table 
containing the file system mount order got screwed up, if that's possible.

If you need any more help, I'll try to help, but I can't scan back through log 
files for clues, as I broke the pin on that IDE boot drive whilst re-inserting 
the cable... oh... *%!£$$!  :) 

Cheers,
Simon

http://breden.org.uk/2009/05/01/home-fileserver-a-year-in-zfs/


Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-15 Thread Simon Breden
No probs, glad it worked for you too. It gave me quite a fright too when it 
happened :)


Re: [zfs-discuss] eon or nexentacore or opensolaris

2009-06-15 Thread Bogdan M. Maryniuk
On Mon, Jun 15, 2009 at 12:45 PM, Andre Lue no-re...@opensolaris.org wrote:
 Hi Bogdan,

 I'd recommend the following RAM minimums for a fair balance of performance:
 700 MB  32-bit
 1 GB    64-bit

OK, in practice that probably means 2 GB. :-) Thanks!

--
Kind regards, bm


[zfs-discuss] pkg.opensolaris.org dead ?

2009-06-15 Thread Udo Grabowski
Cannot refresh the package catalog today : 
Unable to contact valid package server
Encountered the following error(s):
Unable to contact any configured publishers. 
This is likely a network configuration problem.

Any known outage at opensolaris.org? Or is it the network in between?

The network here (Germany) is OK; traceroute ends at bbnet.
...
9  sl-bb20-cop-15-0-0.sprintlink.net (80.77.64.33)  24.927 ms  24.886 ms  
24.981 ms
10  144.232.24.12 (144.232.24.12)  110.464 ms  104.632 ms  110.111 ms
11  sl-crs1-rly-0-8-5-0.sprintlink.net (144.232.20.73)  109.393 ms  108.952 ms  
109.313 ms
12  sl-crs2-sj-0-5-0-0.sprintlink.net (144.232.20.186)  176.372 ms  176.715 ms  
176.464 ms
13  sl-gw19-sj-15-0-0.sprintlink.net (144.232.0.250)  169.439 ms  169.585 ms  
169.491 ms
14  144.232.191.202 (144.232.191.202)  162.901 ms  167.966 ms  163.025 ms
15  border2.te7-1-bbnet1.sfo002.pnap.net (63.251.63.17)  168.038 ms  162.928 ms 
 163.028 ms
16  * * *
17  * * *
...


Re: [zfs-discuss] pkg.opensolaris.org dead ?

2009-06-15 Thread Udo Grabowski
Damn, wrong list. Sorry!


[zfs-discuss] zfs replication via zfs send/recv dead after 2009.06 update

2009-06-15 Thread Daniel Liebster
Hello,

I had two Thumpers replicating via ZFS incremental send/recv, cronned over ssh 
with blowfish enabled, under 2008.11. The 2009.06 update nuked blowfish; my 
cron job failed and then deleted the snapshots on the master and slave servers.

Now if I try to run the job I get the error:

cannot receive incremental stream: most recent snapshot of slave/slavezfsvol 
does not match incremental source

Is there any way to recover from this error without re-syncing the zfs volumes 
from scratch?

Here is the cron job:

/root/zync dataPool/wigler r...@bhstore11 dataPool/wiglerBHStore11

Here is the salient part of the cron script:

ENCRYPT="-c blowfish"
DATE=`/usr/gnu/bin/date +%s`
HOSTNAME=`hostname`

# $1 = local dataset, $2 = remote host, $3 = remote dataset
# Datafile is found, creating incremental.
echo "Incremental started at `date`"
zfs snapshot ${1}@${DATE}
zfs send -i ${1}@`cat /root/run/zynk` ${1}@${DATE} | ssh $ENCRYPT ${2} zfs recv -F ${3}
zfs destroy ${1}@`cat /root/run/zynk`
ssh ${2} zfs destroy ${3}@`cat /root/run/zynk`
echo ${DATE} > /root/run/zynk
echo "Incremental complete at `date`"



Is there a way to force a re-sync of the zfs volumes, as there has been no 
change in data on the master volume that needs to be synced to the slave server?
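For what it's worth, the usual escape hatch (a sketch only; pool/vol, slave/vol and HOST are placeholders, not the real names from my setup) is to take a fresh snapshot and re-seed the slave with one full, non-incremental stream; recv -F rolls the target back so later incrementals can resume from the new baseline:

```shell
#!/bin/sh
# Sketch: re-seed a broken send/recv chain with one full stream.
# pool/vol, slave/vol and HOST are placeholder names.
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

DATE=`date +%s`
run zfs snapshot pool/vol@${DATE}
# Full send; recv -F forces the slave dataset back to a matching state.
run "zfs send pool/vol@${DATE} | ssh HOST zfs recv -F slave/vol"
run "echo ${DATE} > /root/run/zynk"   # record the new common baseline
```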

Thanks,

Dan


[zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Shannon Fiume
Hi,

I just installed 2009.06 and found that compression isn't enabled by default 
when filesystems are created. Does it make sense to have an RFE open for this? 
(I'll open one tonight if need be.) We keep telling people to turn on 
compression. Are there any situations where turning on compression doesn't make 
sense, like rpool/swap? What about rpool/dump?
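For reference, compression is a per-dataset property, so it can be turned on selectively rather than globally; a sketch (tank/data is a placeholder dataset name, and the run wrapper only prints the commands):

```shell
#!/bin/sh
# Sketch: enable compression on one dataset and check what it buys you.
# tank/data is a placeholder dataset name.
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

run zfs set compression=on tank/data
run zfs get compression,compressratio tank/data
# Swap and dump are usually left uncompressed at the zfs level; the dump
# subsystem compresses its own stream anyway.
```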

Thanks,
~~sa

Shannon A. Fiume
System Administrator, Infrastructure and Lab Management,  Cloud Computing
shannon dot fiume at sun dot com



Re: [zfs-discuss] zfs list -t snapshots

2009-06-15 Thread Cindy . Swearingen

Hi Harry,

I use this stuff every day and I can't figure out the right syntax
either. :-)

Reviewing the zfs man page syntax, it looks like you should be able
to use this syntax:

# zfs list -t snapshot dataset

But it doesn't work:

# zfs list -t snapshot rpool/export
cannot open 'rpool/export': operation not applicable to datasets of this type


Instead, use the -r (recursive) option, like this:

# zfs list -rt snapshot z3/www

If you modified the auto-snapshot feature, then check this section for
where that information is stored:

http://wikis.sun.com/display/OpenSolarisInfo/How+to+Manage+the+Automatic+ZFS+Snapshot+Service
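Putting that together, a quick way to check all three pools from Harry's output (pool names rpool, z2, z3 come from his listing; the run wrapper just prints the commands):

```shell
#!/bin/sh
# Sketch: list snapshots recursively for each pool named in this thread.
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

for pool in rpool z2 z3; do
    run zfs list -r -t snapshot "$pool"
done
```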

Cindy

Harry Putnam wrote:

I've been very inactive on OpenSolaris for a while and have forgotten a
discouraging amount of what little I knew.

I want to get back to using the snapshot capability of zfs and am having a
hard time figuring out how to use 'zfs list -t snapshot'.

man zfs shows:
 zfs list [-rH] [-o property[,...]] [-t type[,...]]
     [-s property] ... [-S property] ...
     [filesystem|volume|snapshot] ...


So I guess I can give a final argument of a filesystem, but I'm not getting
it right.

zfs list -t snapshot  (with no more args) shows only one pool and
filesystem.

I have several but all I see is a list like this:
[...]
  z3/www/reader@zfs-auto-snap:hourly-2009-06-14-20:00[...]
  z3/www/z...@zfs-auto-snap:frequent-2009-06-14-20:0 [...]
[...]

Everything in the list is under z3/www[...]

But plain 'zfs list' shows 3 different pools with filesystems under them:
rpool, z2, and z3.

Does it mean no snapshots are being taken anywhere else?

I may have set something up but can't remember... and not sure where
to look and find out.

Also, what is a legal name to give to 'zfs list -t snapshot'?

  zfs list -t snapshot <name>

None of
  z3/www
  z3/www/reader
  rpool/exports
  rpool/
  /rpool
works.

The man page specifies 'filesystem|volume|snapshot', so what notation works?



[zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Orvar Korvar
According to this webpage, there are some errors that make ZFS unusable under 
certain conditions. That is not really optimal for an enterprise file system. 
In my opinion the ZFS team should focus on bug correction instead of adding new 
functionality. The functionality that exists far surpasses any other file 
system, so it is better to fix bugs. In my opinion. Read those error reports, 
complaints, and accounts of data corruption:
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Joerg Schilling
Orvar Korvar no-re...@opensolaris.org wrote:

 According to this webpage, there are some errors that makes ZFS unusable 
 under certain conditions. That is not really optimal for an Enterprise file 
 system. In my opinion the ZFS team should focus on bug correction instead of 
 adding new functionality. The functionality that exists far surpass any other 
 file system, therefore it is better to fix bugs. In my opinion. Read those 
 error reports and complaints and data corruption:
 http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS

Could you help?

I cannot see any reference to data corruption on this page.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] replication issue

2009-06-15 Thread Thomas Maier-Komor
Hi,

I just tried replicating a zfs dataset, which failed because the dataset
has a mountpoint set and zfs received tried to mount the target dataset
to the same directory.

I.e. I did the following:
$ zfs send -R mypool/hg@20090615 | zfs receive -d backup
cannot mount '/var/hg': directory is not empty

Is this a known issue or is this a user error because of -d on the
receiving side?
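One common workaround (a sketch only; whether your zfs receive supports -u depends on the release you are running) is to receive without mounting and then give the copy its own mountpoint:

```shell
#!/bin/sh
# Sketch: avoid the mountpoint collision on the receiving side.
# Assumes a zfs receive new enough to support -u (do not mount on receipt).
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

run "zfs send -R mypool/hg@20090615 | zfs receive -d -u backup"
run zfs set mountpoint=/backup/hg backup/hg   # give the copy its own path
```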

This happened on:
% uname -a
SunOS azalin 5.10 Generic_139555-08 sun4u sparc SUNW,Sun-Blade-2500

- Thomas


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Orvar Korvar
In the comments there are several people complaining of losing data. That 
doesn't sound too good. It takes a long time to build a good reputation, and 5 
minutes to ruin it. We don't want ZFS to lose its reputation as an uber file 
system.


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Bob Friesenhahn

On Mon, 15 Jun 2009, Orvar Korvar wrote:

In the comments there are several people complaining of loosing 
data. That doesnt sound to good. It takes a long time to build a 
good reputation, and 5 minutes to ruin it. We dont want ZFS to loose 
it's reputation of an uber file system.


I recognize the fellow who griped the most on Slashdot.  He wasted 
quite a lot of time here because he was not willing to read any of the 
zfs documentation.  His PC had failing memory chips which resulted in 
data corruption.  He did not use any ZFS RAID features.


Basically this Slashdot discussion is typical Apple discussion with 
lots of people who don't know anything at all talking about what Apple 
may or may not do.  Anyone who did learn what Apple is planning to do 
can't say anything since they had to sign an NDA to learn it.  As 
usual, the users will learn what Apple decided to do at midnight on 
the day the new OS is released.


If Apple dumps ZFS it would be most likely due to not having developed 
sufficient GUIs to make it totally user friendly.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Tim Cook
On Mon, Jun 15, 2009 at 12:57 PM, Joerg Schilling 
joerg.schill...@fokus.fraunhofer.de wrote:

 Orvar Korvar no-re...@opensolaris.org wrote:

  According to this webpage, there are some errors that makes ZFS unusable
 under certain conditions. That is not really optimal for an Enterprise file
 system. In my opinion the ZFS team should focus on bug correction instead of
 adding new functionality. The functionality that exists far surpass any
 other file system, therefore it is better to fix bugs. In my opinion. Read
 those error reports and complaints and data corruption:
 
 http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS

 Could you help,

 I cannot see any reference to data corruption in this page

 Jörg


Did you actually search the page?

http://opensolaris.org/jive/message.jspa?messageID=318457#318457
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/027748.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/027765.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-January/025601.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027629.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027365.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-March/027257.html


--Tim


Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread roland
so, besides performance, there COULD be some stability issues.

thanks for the answers - i think i'll stay with 32bit, even if there COULD be 
issues. (i'm happy to report and help fix those)

i don't have free 64bit hardware around for building storage boxes.


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Sean Sprague

Orvar Korvar wrote:

In the comments there are several people complaining of loosing data. That 
doesnt sound to good. It takes a long time to build a good reputation, and 5 
minutes to ruin it. We dont want ZFS to loose it's reputation of an uber file 
system.
  


With due respect, I recommend that no-one waste the same five minutes 
that I have just done reading the comments section on Slashdot. It is a 
complete load of subjective claptrap. Do something sensible instead, like 
microwaving a curry or calling your Mom (well, maybe not the latter...).

Bob F got it absolutely right about a possible lack of GUI being a 
stumbling block for potential users of ZFS in the Apple camp; but to be 
able to manipulate a filesystem with the underlying power that ZFS has via 
just two commands (or a few more if you include the SMF bits) is 
mindblowing. Try comparing that with the mess that is VxVM/VxFS.



Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Thommy M.
Bob Friesenhahn wrote:
 On Mon, 15 Jun 2009, Shannon Fiume wrote:
 
 I just installed 2009.06 and found that compression isn't enabled by
 default when filesystems are created. Does is make sense to have an
 RFE open for this? (I'll open one tonight if need be.) We keep telling
 people to turn on compression. Are there any situations where turning
 on compression doesn't make sense, like rpool/swap? what about
 rpool/dump?
 
 In most cases compression is not desireable.  It consumes CPU and
 results in uneven system performance.

IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks and that the CPU was fast at unpacking data. But sure, it
uses more CPU (and probably memory).



Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Glenn Lagasse
* Shannon Fiume (shannon.fi...@sun.com) wrote:
 Hi,
 
 I just installed 2009.06 and found that compression isn't enabled by
 default when filesystems are created. Does is make sense to have an
 RFE open for this? (I'll open one tonight if need be.) We keep telling
 people to turn on compression. Are there any situations where turning
 on compression doesn't make sense, like rpool/swap? what about
 rpool/dump?

That would be enhancement request #86.

http://defect.opensolaris.org/bz/show_bug.cgi?id=86

Cheers,

-- 
Glenn


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread dick hoogendijk
On Mon, 15 Jun 2009 22:51:12 +0200
Thommy M. thommy.m.malmst...@gmail.com wrote:

 IIRC there was a blog about I/O performance with ZFS stating that it
 was faster with compression ON as it didn't have to wait for so much
 data from the disks and that the CPU was fast at unpacking data. But
 sure, it uses more CPU (and probably memory).

IF at all, it certainly should not be the DEFAULT.
Compression is a choice, nothing more.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | nevada / OpenSolaris 2009.06 release
+ All that's really worth doing is what we do for others (Lewis Carrol)


[zfs-discuss] clones and sub-datasets

2009-06-15 Thread Todd Stansell
I had a zpool and was using the implicit zfs filesystem in it:

data                  10.2G   124G  8.53G  /var/tellme
data@hotbackup.h23    83.4M      -  8.52G  -
data@hotbackup.h00    25.9M      -  8.52G  -
data@hotbackup.h01    16.2M      -  8.52G  -
...

These contained hourly zfs snapshots that I preferred to preserve.
However, I was also trying to convert this to follow our standard naming,
which meant that this filesystem should have been called data/var_tellme.

I ran the following:

# zfs snapshot data@clean
# zfs clone data@clean data/var_tellme
# zfs promote data/var_tellme

This worked as expected and now I have:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
data                           9.70G   124G  8.53G  legacy
data/var_tellme                9.70G   124G  8.10G  legacy
data/var_tellme@clean           717M      -  8.53G  -
data/var_tellme@hotbackup.h14  24.4M      -  8.09G  -
data/var_tellme@hotbackup.h15  10.0M      -  8.09G  -
data/var_tellme@hotbackup.h16  6.14M      -  8.09G  -
...

However, now I cannot remove the data/var_tellme@clean snapshot because it
is now labelled as the 'origin' for data itself:

# zfs get origin data
NAME  PROPERTY  VALUE                  SOURCE
data  origin    data/var_tellme@clean  -

I don't care about the 'data' filesystem anymore; I just want to be able
to nuke the data/var_tellme@clean snapshot so it doesn't end up filling my
zpool with changes.

Any thoughts on how this can be done?  I do have other systems I can use
to test this procedure, but ideally it would not introduce any downtime,
but that can be arranged if necessary.
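For anyone following along, the dependency that blocks the destroy is visible directly (a dry-run sketch; the run wrapper only prints the commands):

```shell
#!/bin/sh
# Sketch: inspect the clone/origin dependency that blocks the destroy.
run() { echo "+ $*"; }   # dry-run wrapper: print commands, don't execute

run zfs get -r origin data              # shows 'data' originating from the @clean snapshot
run zfs destroy data/var_tellme@clean   # refused while 'data' still depends on it
# A second 'zfs promote data' would swap the dependency back the other way,
# but a pool's root dataset itself can never be destroyed.
```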

Thanks,

Todd


Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread Orvar Korvar
I've asked the same question about 32-bit. I created a thread and asked; it was 
titled something like "does 32bit ZFS fragment RAM?" or similar. As I remember 
it, 32-bit had some issues, mostly due to RAM fragmentation or something 
similar. The result was that you had to restart your server after a while. But 
I shut down my desktop PC every night, so I never had any issues.


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Rich Teer
On Mon, 15 Jun 2009, dick hoogendijk wrote:

 IF at all, it certainly should not be the DEFAULT.
 Compression is a choice, nothing more.

I respectfully disagree somewhat.  Yes, compression should be a
choice, but I think the default should be for it to be enabled.

-- 
Rich Teer, SCSA, SCNA, SCSECA

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Bob Friesenhahn

On Mon, 15 Jun 2009, Thommy M. wrote:


In most cases compression is not desireable.  It consumes CPU and
results in uneven system performance.


IIRC there was a blog about I/O performance with ZFS stating that it was
faster with compression ON as it didn't have to wait for so much data
from the disks and that the CPU was fast at unpacking data. But sure, it
uses more CPU (and probably memory).


I'll believe this when I see it. :-)

With really slow disks and a fast CPU it is possible that reading data 
the first time is faster.  However, Solaris is really good at caching 
data so any often-accessed data is highly likely to be cached and 
therefore read just one time.  The main point of using compression for 
the root pool would be so that the OS can fit on an abnormally small 
device such as a FLASH disk.  I would use it for a read-mostly device 
or an archive (backup) device.


On desktop systems the influence of compression on desktop response is 
quite noticeable when writing, even with very fast CPUs and multiple 
cores.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Rich Teer
On Mon, 15 Jun 2009, Bob Friesenhahn wrote:

 In most cases compression is not desireable.  It consumes CPU and results in
 uneven system performance.

You actually have that backwards.  :-)  In most cases, compression is very
desirable.  Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.
That is, the time to read or write compressed data plus the time to compress
or decompress it is less than the time to read or write the uncompressed data.

Such is the difference between CPUs and I/O!

You are correct that the compression/decompression uses CPU, but most systems
have an abundance of CPU, especially when performing I/O.

-- 
Rich Teer, SCSA, SCNA, SCSECA

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer


Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Dennis Clarke

 On Mon, 15 Jun 2009, dick hoogendijk wrote:

 IF at all, it certainly should not be the DEFAULT.
 Compression is a choice, nothing more.

 I respectfully disagree somewhat.  Yes, compression shuould be a
 choice, but I think the default should be for it to be enabled.

I agree that Compression is a choice and would add :

   Compression is a choice and it is the default.

Just my feelings on the issue.

Dennis Clarke



Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-15 Thread Bob Friesenhahn

On Mon, 15 Jun 2009, Rich Teer wrote:


You actually have that backwards.  :-)  In most cases, compression is very
desirable.  Performance studies have shown that today's CPUs can compress
data faster than it takes for the uncompressed data to be read or written.


Do you have a reference for such an analysis based on ZFS?  I would be 
interested in linear read/write performance rather than random synchronous 
access.


Perhaps you are going to make me test this for myself.


You are correct that the compression/decompression uses CPU, but most systems
have an abundance of CPU, especially when performing I/O.


I assume that you are talking about single-user systems with little 
else to do?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread Keith Bierman

I had a 32-bit zfs server up for months with no such issue.

Performance is not great, but it's no buggier than anything else. War 
stories from the initial zfs drops notwithstanding.


khb...@gmail.com | keith.bier...@quantum.com
Sent from my iPod

On Jun 15, 2009, at 3:59 PM, Orvar Korvar no-re...@opensolaris.org  
wrote:


Ive asked the same question about 32bit. I created a thread and  
asked. It were something like does 32bit ZFS fragments RAM? or  
something similar. As I remember it, 32 bit had some issues. Mostly  
due to RAM fragmentation or something similar. The result was that  
you had to restart your server after a while. But I shuts down my  
desktopPC every night so I never had any issues.



Re: [zfs-discuss] zfs on 32 bit?

2009-06-15 Thread milosz
one of my disaster recovery servers has been running on 32-bit hardware
(ancient Northwood chip) for about a year.  the only problems i've run into
are: it's slow (duh) and it will not take disks bigger than 1TB.  that is
kind of a bummer and means i'll have to switch to a 64-bit base soon.
everything else has been fine.


Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-15 Thread Bogdan M. Maryniuk
On Tue, Jun 16, 2009 at 2:45 AM, Orvar Korvarno-re...@opensolaris.org wrote:
 According to this webpage, there are some errors that makes ZFS unusable 
 under certain conditions.
 That is not really optimal for an Enterprise file system. In my opinion the 
 ZFS team should focus
 on bug correction instead of adding new functionality. The functionality that 
 exists far surpass
 any other file system, therefore it is better to fix bugs. In my opinion. 
 Read those error reports
 and complaints and data corruption:
 http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS

Slashdot? C'mon, Orvar... you've found quite the resource to reference, LOL.
Try saying something really reasonable on Slashdot, like that GNOME
(GUI No One Might Enjoy) actually sucks in its integration and is
still horrible on small resolutions (e.g. you get OK/Cancel off the
screen on a netbook), and you will be an enemy of the whole world. And
if you say that the latest KDE (Kids Desktop Environment) is actually
even more terrible than Windows 95 — you're just simply dead. :-)

Personally, I have tried to get scared about ZFS, but every time yet
another slashdotter (read: teenager) screams about dramatic data
loss, I am unable to reproduce the problem. So I think it would be
much better for the community if we actually found real step-by-step
reproducible crashes (VirtualBox is our friend here) and filed real
bug reports; then it would be much more reasonable to talk about a
particular case, rather than spreading stupid FUD taken from
useless Slashdot commenters.

P.S. I mean, let's not waste our time on Slashdot; let's find
something actually bad, reproduce it, file a bug, and then report here.
:-)

--
Kind regards, bm


Re: [zfs-discuss] zpool import hangs

2009-06-15 Thread Brad Reese
Hi Victor,

'zdb -e -bcsv -t 2435913 tank' ran for about a week with no output. We had yet 
another brownout and then the computer shut down (I have a UPS on the way). A 
few days before that I had started the following commands, which also produced 
no output:

zdb -e -bcsv -t 2435911 tank
zdb -e -bcsv -t 2435897 tank

I've given up on these because I don't think they'll finish... should I try 
again?

Right now I am trying the following commands which so far have no output:

zdb -e -bcsvL -t 2435913 tank
zdb -e -bsvL -t 2435913 tank
zdb -e -bb -t 2435913 tank

'zdb -e - -t 2435913 tank' has output and is very long... is there anything 
I should be looking for? Without -t 243... this command failed on dmu_read; 
now it just keeps going forever.

Your help is much appreciated.

Thanks,

Brad