[zfs-discuss] problem in recovering back data solaris zfs

2010-04-13 Thread MstAsg
Hello,

I have a problem regarding ZFS. After installing Solaris 10 x86, it worked for a 
while, and then something went wrong with Solaris and it could not be loaded! 
Even failsafe mode didn't resolve the problem. I put in an OpenSolaris CD, booted 
from it, and ran the commands below:
zpool create raidz c2t0d0 c2t1d0 c2t2d0
zfs create indexes/db1 (&/db2)
mount -F zfs /dev/dsk/c2t0d0 /name


Then, after cd /name, I didn't see the data I had. When I retried to boot, all I 
saw was a 'no active partition' error. I don't know how this happened; I chose 
the disks while installing, and after installation I copied the boot block to the 
other disks so that booting would not be a problem. I hope that someone can help 
me with this.



Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-13 Thread Brandon High
On Tue, Apr 13, 2010 at 7:03 AM, Harry Putnam  wrote:
> Apparently you are not disagreeing with Daniel C's comment above, so I
> guess you are talking about disk partitions here?

I'm not disagreeing, but the use case for a server is different than
for a laptop that can only hold one drive.

> Currently two 500gb IDE drives make up rpool.  So to have mirror
> redundancy of rpool, do you mean to create identical partitions on
> each disk for mirrored rpool, then use the other partition for data?

Er, not exactly.

There are a few constraints on rpools and on zpools that limit what
you can do for the rpool.

The first constraint is that an rpool supports only one top-level vdev, and
that vdev can only be non-redundant or a mirror. You can't grow the pool by
adding more disks, which limits your future growth. You could replace the
drives with larger ones, but that's about it. Once the pool fills up, you
have to start deleting.
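
(As an aside, the one supported way to add redundancy to an existing single-disk
rpool is to attach a mirror. A minimal sketch on x86, with hypothetical device
names:

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

The second command makes the newly attached disk bootable as well.)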

By default the write cache is disabled when ZFS doesn't have access to
the whole disk, and because of the way the BIOS boots, the boot drive is
partitioned. The rpool will have slightly worse performance as a
result. Partitioning out a smaller boot volume and using the rest in a
second zpool will just wind up with the second pool having poor
performance too. You could add more vdevs to the second pool (or add
them to one that already exists), but performance will still be unbalanced,
since the partitioned devices will be slower. It's not a pretty picture. It'll
work, it just won't offer the performance that you probably want.

> I thought I remembered being told it was better not to partition disks
> but I'm probably just confused about it.

You're right, it's best not to. If you want to keep your rpool
separate, dedicate a drive (or two) as the boot drives and leave it at
that. Move any user data to your larger (and better performing) pools
and call it a day.
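
(For example, a minimal sketch of a separate mirrored data pool; the pool and
device names are placeholders:

  zpool create tank mirror c1t0d0 c1t1d0
  zfs create tank/home
)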

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-13 Thread Victor Latushkin

On Apr 14, 2010, at 2:42 AM, Ragnar Sundblad wrote:

> 
> On 12 apr 2010, at 19.10, Kyle McDonald wrote:
> 
>> On 4/12/2010 9:10 AM, Willard Korfhage wrote:
>>> I upgraded to the latest firmware. When I rebooted the machine, the pool 
>>> was back, with no errors. I was surprised.
>>> 
>>> I will work with it more, and see if it stays good. I've done a scrub, so 
>>> now I'll put more data on it and stress it some more.
>>> 
>>> If the firmware upgrade fixed everything, then I've got  a question about 
>>> which I am better off doing: keep it as-is, with the raid card providing 
>>> redundancy, or turn it all back into pass-through drives and let ZFS handle 
>>> it, making the Areca card just a really expensive way of getting a bunch of 
>>> SATA interfaces?
>>> 
>> 
>> As one of the other posters mentioned, there may be a third way that
>> might give you something close to "the best of both worlds".
>> 
>> Try using the Areca card to make 12 single-disk RAID 0 LUNs, and then
>> use those in ZFS.
>> I'm not sure of the definition of 'passthrough', but if it disables any
>> battery-backed cache that the card may have, then setting up 12 HW
>> RAID LUNs instead should give you an improvement by allowing the
>> card to cache writes.
>> 
>> The one downside of doing this vs. something more like 'jbod' is that if
>> the controller dies you will need to move the disks to another Areca
>> controller, whereas with 12 'jbod' connections you could move them to
>> pretty much any controller you wanted.
> 
> And another downside is that if you use the write cache in the controller and
> the controller dies, parts of your recently written data are only in the dead
> controller, and your pool may be more or less corrupt and may have to be rolled
> back a few versions to be rescued, or may not be rescuable at all.
> This may or may not be acceptable.

There was a successful recovery of what seemed to be the result of a lost cache 
on an Areca controller; see this thread:

http://opensolaris.org/jive/thread.jspa?threadID=109007

That was a manual recovery, but these days we have 'zpool import -fFX', 
which does the same thing in a much more user-friendly manner.
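
(For reference, a minimal sketch of that invocation on a hypothetical pool name;
-f forces the import, -F attempts rewind recovery to an earlier consistent state,
and -X, as I understand it, extends the rewind search further back:

  zpool import -fFX tank
)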

--
regards
victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-13 Thread Willard Korfhage
These are all good reasons to switch back to letting ZFS handle it. I did put 
about 600GB of data on the pool as configured with RAID 6 on the card, verified 
the data, and scrubbed it a couple of times in the process, and there are no 
problems, so it appears that the firmware upgrade fixed my problems. However, I'm 
going to switch it back to passthrough disks, remake the pool, and try it again.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] b134 panic in ddt_sync_entry()

2010-04-13 Thread Victor Latushkin

On Apr 13, 2010, at 9:52 PM, Cyril Plisko wrote:

> Hello !
> 
> I've had a laptop that crashed a number of times during last 24 hours
> with this stack:
> 
> panic[cpu0]/thread=ff0007ab0c60:
> assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
> file: ../../common/fs/zfs/ddt.c, line: 968
> 
> 
> ff0007ab09a0 genunix:assfail+7e ()
> ff0007ab0a20 zfs:ddt_sync_entry+2f1 ()
> ff0007ab0a80 zfs:ddt_sync_table+dd ()
> ff0007ab0ae0 zfs:ddt_sync+136 ()
> ff0007ab0ba0 zfs:spa_sync+41f ()
> ff0007ab0c40 zfs:txg_sync_thread+24a ()
> ff0007ab0c50 unix:thread_start+8 ()
> 
> 
> Is that a known issue ?

There is CR 6912741 with a similar stack reported. It is now closed, as the 
problem was seen on a custom kernel and was not reproducible.

> I have vmdump files available in case people want to have a look.


If you can pack and upload your dumps to e.g. supportfiles.sun.com (or provide 
a link to download them), it would definitely be interesting to have a look and 
reopen the bug (or even file a new one).

--
regards
victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why would zfs have too many errors when underlying raid array is fine?

2010-04-13 Thread Ragnar Sundblad

On 12 apr 2010, at 19.10, Kyle McDonald wrote:

> On 4/12/2010 9:10 AM, Willard Korfhage wrote:
>> I upgraded to the latest firmware. When I rebooted the machine, the pool was 
>> back, with no errors. I was surprised.
>> 
>> I will work with it more, and see if it stays good. I've done a scrub, so 
>> now I'll put more data on it and stress it some more.
>> 
>> If the firmware upgrade fixed everything, then I've got  a question about 
>> which I am better off doing: keep it as-is, with the raid card providing 
>> redundancy, or turn it all back into pass-through drives and let ZFS handle 
>> it, making the Areca card just a really expensive way of getting a bunch of 
>> SATA interfaces?
>> 
> 
> As one of the other posters mentioned, there may be a third way that
> might give you something close to "the best of both worlds".
> 
> Try using the Areca card to make 12 single-disk RAID 0 LUNs, and then
> use those in ZFS.
> I'm not sure of the definition of 'passthrough', but if it disables any
> battery-backed cache that the card may have, then setting up 12 HW
> RAID LUNs instead should give you an improvement by allowing the
> card to cache writes.
> 
> The one downside of doing this vs. something more like 'jbod' is that if
> the controller dies you will need to move the disks to another Areca
> controller, whereas with 12 'jbod' connections you could move them to
> pretty much any controller you wanted.

And another downside is that if you use the write cache in the controller and
the controller dies, parts of your recently written data are only in the dead
controller, and your pool may be more or less corrupt and may have to be rolled
back a few versions to be rescued, or may not be rescuable at all.
This may or may not be acceptable.

/ragge

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots taking too much space

2010-04-13 Thread Paul Archer

Yesterday, Arne Jansen wrote:


> Paul Archer wrote:
>
>> Because it's easier to change what I'm doing than what my DBA does, I
>> decided that I would put rsync back in place, but locally. So I changed
>> things so that the backups go to a staging FS, and then are rsync'ed
>> over to another FS that I take snapshots on. The only problem is that
>> the snapshots are still in the 500GB range.
>>
>> So, I need to figure out why these snapshots are taking so much more
>> room than they were before.
>>
>> This, BTW, is the rsync command I'm using (and essentially the same
>> command I was using when I was rsync'ing from the NetApp):
>>
>> rsync -aPH --inplace --delete /staging/oracle_backup/
>> /backups/oracle_backup/
>
> Try adding --no-whole-file to rsync. rsync disables block-by-block
> comparison if used locally by default.
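
(For reference, the command from the original post with that flag added would be:

  rsync -aPH --inplace --no-whole-file --delete /staging/oracle_backup/ /backups/oracle_backup/
)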



Thanks for the tip. I didn't realize rsync had that behavior. It looks like 
that got my snapshots back to the 50GB range. I'm going to try dedup on the 
staging FS as well, so I can do a side-by-side comparison of which gives me the 
better space savings.
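
(A minimal sketch of the dedup experiment, assuming the staging filesystem is a
dataset such as tank/staging — the pool and dataset names are placeholders, and
the pool must be on a build recent enough to support dedup:

  zfs set dedup=on tank/staging
  zpool list tank        # the DEDUP column shows the achieved dedup ratio
)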


Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Eric Andersen
> Hi all.
> 
> Im pretty new to the whole OpenSolaris thing, i've
> been doing a bit of research but cant find anything
> on what i need.
> 
> I am thinking of making myself a home file server
> running OpenSolaris with ZFS and utilizing Raid/Z
> 
> I was wondering if there is anything i can get that
> will allow Windows Media Center based hardware (HTPC
> or XBOX 360) to stream from my new fileserver?
> 
> Any help is appreciated and remember im new :)
> 
> Message was edited by: cloudz

If whatever you are streaming to will read CIFS (or NFS) shares, you're golden. 
 Getting set up is literally one command.
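
(For example, a minimal sketch assuming a dataset named tank/media and that the
CIFS/NFS server services are already installed and enabled:

  zfs set sharesmb=on tank/media     # CIFS share
  zfs set sharenfs=on tank/media     # or an NFS share
)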

If you are looking for uPnP streaming, the easiest solution out there (and the 
only thing I ever got to work) is PS3mediaserver.  It says it has Xbox 360 
support.  It depends on mplayer and ffmpeg, which are both available in the 
Blastwave community repository (or you can try building them from source if you 
want).

There are a couple of howtos on getting TwonkyMedia and MediaTomb running under 
Solaris if you google for them.  I never could get either one to compile, but I 
haven't tried in quite some time.

I've heard of people running uPnP servers from linux branded zones as well, so 
that might be an option for you.  I have no experience whatsoever with that, so 
I can't tell you much else about it.

Personally, I gave up on trying to stream to my PS3.  That is mainly because I 
don't have ethernet run to it, and trying to stream any media over wireless-g, 
especially the HD stuff, is frustrating to say the least.  I dropped $100 on an 
xtreamer media player, and it's great.  Plays any format/container I can throw 
at it.  Works real well for me.  Good luck!

Eric
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tuning the ARC towards LRU

2010-04-13 Thread Peter Schuller
I realized I forgot to follow up on this thread. Just to be clear, I
have confirmed that I am seeing what to me is undesirable behavior
even with the ARC being 1500 MB in size on an almost idle system (<
0.5 MB/sec read load, almost zero write load). Observe these recursive
searches through /usr/src/sys:

% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats  2.74s user 1.19s system 20% cpu 19.143 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.45s user 0.51s system 99% cpu
2.986 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.41s user 0.62s system 53% cpu
5.667 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.37s user 0.68s system 50% cpu
6.025 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.45s user 0.61s system 45% cpu
6.694 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.45s user 0.59s system 53% cpu
5.651 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.32s user 0.72s system 46% cpu
6.503 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.41s user 0.66s system 44% cpu
6.843 total
% time ack arcstats 2>/dev/null 1>/dev/null
ack arcstats 2> /dev/null > /dev/null  2.37s user 0.67s system 49% cpu
6.119 total

The first was entirely cold. For some reason the second was close to
CPU-bound, while the remainder were significantly disk-bound even if
not to the extent of the initial run. I correlated with 'iostat -x 1'
to confirm that I am in fact generating I/O (but no, I do not have
dtrace output).

Anyway, presumably the answer to my original question is no, and the
above isn't really very interesting other than to show that under some
circumstances you can see behavior that is decidedly non-optimal for
interactive desktop use of certain kinds. Whether this is the ARC in
general or something FreeBSD-specific, I don't know. But at this point
it does not appear to be a matter of ARC sizing, since the ARC is
sensibly large.

(I realize I should investigate properly and report back, but I'm not
likely to have time to dig into this now.)

-- 
/ Peter Schuller
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-13 Thread J James
I was finally able to generate syslog messages, thanks to the clue given by timh

cat /usr/lib/fm/fmd/plugins/syslog-msgs.conf
setprop console true
setprop facility LOG_LOCAL0    - log as facility local0
setprop syslogd true

svcadm restart fmd             - restart FMD
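
(In addition, syslogd needs a rule routing facility local0 somewhere. A minimal
sketch, where the log file name is a placeholder; the selector and action in
/etc/syslog.conf must be separated by a tab:

  /etc/syslog.conf entry:
      local0.debug        /var/adm/zfs-fmd.log
  then:
      touch /var/adm/zfs-fmd.log
      svcadm refresh system-log      - make syslogd re-read its config
)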

Note that some of the error messages are generated only once.

Joji James
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] b134 panic in ddt_sync_entry()

2010-04-13 Thread Cyril Plisko
Hello !

I've had a laptop that crashed a number of times during last 24 hours
with this stack:

panic[cpu0]/thread=ff0007ab0c60:
assertion failed: ddt_object_update(ddt, ntype, nclass, dde, tx) == 0,
file: ../../common/fs/zfs/ddt.c, line: 968


ff0007ab09a0 genunix:assfail+7e ()
ff0007ab0a20 zfs:ddt_sync_entry+2f1 ()
ff0007ab0a80 zfs:ddt_sync_table+dd ()
ff0007ab0ae0 zfs:ddt_sync+136 ()
ff0007ab0ba0 zfs:spa_sync+41f ()
ff0007ab0c40 zfs:txg_sync_thread+24a ()
ff0007ab0c50 unix:thread_start+8 ()


Is that a known issue ?

I have vmdump files available in case people want to have a look.

-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-13 Thread J James
Thanks for the clue.

Still not successful, but some hope is there.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS RAID-Z Degraded Array won't import

2010-04-13 Thread Peter Tripp
Hi folks,

At home I run OpenSolaris x86 with a 4 drive Raid-Z (4x1TB) zpool and it's not 
in great shape.  A fan stopped spinning and soon after the top disk failed 
(cause you know, heat rises).  Naturally, OpenSolaris and ZFS didn't skip a 
beat; I didn't even notice it was dead until I saw the disk activity LED stuck 
on nearly a week later.

So I decided I would attach the disks to 2nd system (with working fans) where I 
could backup the data to tape. So here's where I got dumb...I ran 'zpool 
export'.  Of course, I never actually ended up attaching the disks to another 
machine, but ever since that export I've been unable to import the pool at all. 
I've ordered a replacement 1TB disk, but it hasn't arrived yet. Since I got no 
errors from the scrub I ran while the array was degraded, I'm pretty confident 
that the remaining 3 disks have valid data.

* Should I be able to import a degraded pool?
* If not, shouldn't there be a warning when exporting a degraded pool?
* If I replace the dead 1TB disk with a blank disk, might the import work?
* Are there any tools (or commercial services) for ZFS recovery?

I read a blog post (which naturally now I can't find) where someone in similar 
circumstances was able to import his pool after restoring /etc/zfs/zpool.cache 
from a backup taken before the 'zpool export'. Naturally this guy was doing it 
with ZFS-FUSE under Linux, so it's another step removed, but can someone explain 
to me the logic & risks of trying such a thing?  Will it work if the zpool.cache 
comes from a 1-day/1-week/1-month-old backup?

So here's what I get...
pe...@pickle:~$ pfexec zpool import
  pool: andre
id: 5771661786439152324
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

andre   FAULTED  corrupted data
  raidz1DEGRADED
c5t0d0  ONLINE
c5t1d0  ONLINE
c5t2d0  ONLINE
c5t3d0  UNAVAIL  cannot open

Any constructive suggestions would be greatly appreciated.
Thanks
--Peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions about current ZFS setup

2010-04-13 Thread Bob Friesenhahn

On Tue, 13 Apr 2010, Christian Molson wrote:


> Now I would like to add my 4 x 2TB drives, I get a warning message
> saying that: "Pool uses 5-way raidz and new vdev uses 4-way raidz"
> Do you think it would be safe to use the -f switch here?


It should be "safe" but chances are that your new 2TB disks are 
considerably slower than the 1TB disks you already have.  This should 
be as much cause for concern (or more so) than the difference in raidz 
topology.


Maybe you should create a second temporary pool with these drives and 
exercise them under load for a while to see how they behave.  If they 
behave well, then destroy the temporary pool and add the drives to 
your main pool.
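
(A minimal sketch of that approach; the pool name and device names are
placeholders:

  zpool create testpool raidz c10t0d0 c10t1d0 c10t2d0 c10t3d0
  # ...exercise the drives for a while, e.g. with large copies...
  zpool scrub testpool
  zpool destroy testpool
  zpool add -f tank raidz c10t0d0 c10t1d0 c10t2d0 c10t3d0
)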


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Bob Friesenhahn

On Tue, 13 Apr 2010, Joerg Schilling wrote:


> I believe you make a mistake with this assumption.

I see that you make some mistakes with your own assumptions. :-)

> -   The SSD cannot know which blocks are currently not in use.

It does know that blocks in its spare pool are not in use.

> -   Especially with a COW filesystem, after some time all net space
>     may have been written to, but the SSD does not know whether it
>     is still used or not. So you see a mainly empty filesystem
>     while the SSD does not know this fact.

You are assuming that a COW filesystem will tend to overwrite all disk 
blocks.  That is not a good assumption, since filesystems are often 
optimized to prefer disk "outer tracks".  FLASH does not have any 
tracks, but it is likely that existing optimizations remain.

Filesystems are not of great value unless they store data, so 
optimizing for the empty case is not useful in the real world.  The 
natives grow restless (and look for scalps) if they find that 
write performance goes away once the device contains data.

> -   If you do not write too much to an SSD, it may be that the spare space
>     for defect management is sufficient in order to have sufficient
>     prepared erased space.
>
> -   Once you write more, I see no reason why a COW filesystem should
>     be any better than a non-COW filesystem.

The main reason why a COW filesystem like zfs may perform better is 
that zfs sends a fully defined block to the device.  This reduces the 
probability that the FLASH device will need to update an existing 
FLASH block.  COW increases the total amount of data written, but it 
also reduces the FLASH read/update/re-write cycle.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Bob Friesenhahn

On Mon, 12 Apr 2010, Eric D. Mudama wrote:


> The advantage of TRIM, even in high end SSDs, is that it allows you to
> effectively have additional "considerable extra space" available to
> the device for garbage collection and wear management when not all
> sectors are in use on the device.
>
> For most users, with anywhere from 5-15% of their device unused, this
> difference is significant and can improve performance greatly in some
> workloads.  Without TRIM, the device has no way to use this space for
> anything but tracking the data that is no longer active.
>
> Based on the above, I think TRIM has the potential to help every SSD,
> not just the "cheap" SSDs.

It seems that the "above" was missing.  What concrete evidence were 
you citing?

The value should be clearly demonstrated in fact (with many months of 
prototype testing with various devices) before the feature becomes a 
pervasive part of the operating system.  Every article I have read 
about the value of TRIM is pure speculation.

Perhaps it will be found that TRIM has more value for SAN storage (to 
reclaim space for accounting purposes) than for SSDs.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Suggestions about current ZFS setup

2010-04-13 Thread Christian Molson
Hi,

(Main questions at bottom of post)

I recently discovered the joys of ZFS. I have a home file server for 
backups+media, which also hosts some virtual machines (over the LAN).

I was wondering if I could get some feedback as to whether I have set things up 
properly.

Drives:
20 x 1TB (Mix of seagate and Hitachi)
4 x 2TB (WD20EARS)

Also, the server has 24GB of RAM (for the virtual machines, but I can leave lots 
free for ZFS).

I deleted the partitions off all of the drives, then created 4 vdevs, each with 
5 1TB drives in raidz.

I get some errors at bootup about no valid partition table or something on the 
drives; is this to be expected?

Now I would like to add my 4 x 2TB drives, but I get a warning message saying: 
"Pool uses 5-way raidz and new vdev uses 4-way raidz".  Do you think it would be 
safe to use the -f switch here?

My chassis is at its limit in terms of hdd, 24(hot swap bays).. So I would have 
to stick one inside somewhere..



In case I lost you guys up there, I have 2 main questions:

1 - While booting, dmesg shows errors like "Corrupt or Invalid GPT detected" and 
"The secondary GPT is corrupt or invalid".  Can these be ignored, given that the 
drives giving the errors are all part of the pool?

2 - I would like to add a 4-drive raidz to my pool, although zfs warns that my 
current vdevs are all 5-way raidz.  Is it safe (and recommended) to use the -f 
switch and add a 4x2TB raidz vdev to the pool?
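
(For reference, the command in question would be something like the following; 
the pool name and device names are placeholders:

  zpool add -f tank raidz c20t0d0 c20t1d0 c20t2d0 c20t3d0
)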


Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-13 Thread Richard Elling
On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote:

> I was wondering if any data was lost while doing a snapshot on a running 
> system?

ZFS will not lose data during a snapshot.

> Does it flush everything to disk or would some stuff be lost?

Yes, all ZFS data will be committed to disk and then the snapshot
is taken.

Applications can take advantage of this and there are services available
to integrate ZFS snapshots with Oracle databases, Windows clients, etc.
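
(For example, a recursive snapshot of a pool and all of its datasets is taken as
a single atomic operation; the pool name and snapshot name here are placeholders:

  zfs snapshot -r tank@nightly-20100413
)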
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread R.G. Keen
Offhand, I'd say EON  

http://sites.google.com/site/eonstorage/

This is probably the best answer right now. It will be even better when they get 
a web administration GUI running. Some variant of FreeNAS on FreeBSD is also 
possible. 

OpenSolaris is missing a good opportunity to expand its user base on two fronts.
1. It's *hard* to figure out which motherboards will just work, and which will 
have problems.
2. There's no better-packaged solution to this particular question. EON is 
very, very close to solving this one.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Create 1 pool from 3 exising pools in mirror configuration

2010-04-13 Thread Harry Putnam
Brandon High  writes:

[...]
Harry wrote:

>> So having some data on rpool (besides the OS I mean) is not
>> necessarily a bad thing then?

Daniel C answered:

>> Not at all; laptops would be screwed otherwise. 

Brandon H. responded:

> The pool will resilver faster if it's got less data on it, which may
> be important to you.
> 
> The rpool only supports mirrored redundancy, and you can't add more
> vdevs to it, so the ability to grow it is limited.
> 
> It's also good practice to keep your OS install separate from your
> data for maintenance reasons. For my home server, the rpool contains
> only the OS install and everything else is in a separate pool.

Apparently you are not disagreeing with Daniel C's comment above, so I
guess you are talking about disk partitions here?

Currently two 500gb IDE drives make up rpool.  So to have mirror
redundancy of rpool, do you mean to create identical partitions on
each disk for mirrored rpool, then use the other partition for data?

I thought I remembered being told it was better not to partition disks
but I'm probably just confused about it. 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Secure delete?

2010-04-13 Thread James Van Artsdalen
If you're concerned about someone reading the charge level of a Flash cell to 
infer the value of the cell before being erased, then overwrite with random 
data twice before issuing TRIM (remapping in an SSD probably makes this 
ineffective).

Most people needing a secure erase feature need it to satisfy legal 
requirements, not national security requirements.  Anyone needing a strong 
TINFOIL_HAT_ERASE feature is going to be encrypting the data anyway.  A 
SECURE_ERASE for them is mainly to satisfy legal and statutory language 
requiring that the data actually be erased (when it's not worth the lawyer's 
fees to convince a court that loss of a key makes encrypted data unrecoverable).
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Snapshots and Data Loss

2010-04-13 Thread Erik Ableson
A snapshot is a picture of the storage at a point in time, so  
everything depends on the applications using the storage. If you're  
running a DB with lots of cache, it's probably a good idea to stop the  
service or force a flush to disk before taking the snapshot to ensure  
the integrity of the data. That said, rolling back to a snapshot would  
be roughly the same thing as stopping the application brutally, and  
it's up to the application to evaluate the data. Some will handle it  
better than others.


If you're running virtual machines, the ideal solution is to take a VM  
snapshot, followed by the filesystem snapshot, then delete the VM  
snapshot.
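
(A minimal sketch of that sequence, assuming the VM disks live on a dataset named
tank/vmstore; the VM-level snapshot steps are hypervisor-specific and are shown
only as comments:

  # 1. take a VM-level (hypervisor) snapshot so in-flight state is quiesced
  zfs snapshot tank/vmstore@consistent-20100413
  # 2. once the ZFS snapshot exists, delete the VM-level snapshot again
)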


ZFS snapshots are very reliable, but their scope is limited to the disks  
that ZFS manages, so if there's unflushed data living at a higher level,  
ZFS won't be aware of it.


Cordialement,

Erik Ableson

On 13 avr. 2010, at 14:22, Tony MacDoodle  wrote:

> I was wondering if any data was lost while doing a snapshot on a
> running system? Does it flush everything to disk or would some stuff
> be lost?
>
> Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Snapshots and Data Loss

2010-04-13 Thread Tony MacDoodle
I was wondering if any data was lost while doing a snapshot on a running
system? Does it flush everything to disk or would some stuff be lost?

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Daniel
> 
> Im pretty new to the whole OpenSolaris thing, i've been doing a bit of
> research but cant find anything on what i need.
> 
> I am thinking of making myself a home file server running OpenSolaris
> with ZFS and utilizing Raid/Z
> 
> I was wondering if there is anything i can get that will allow Windows
> Media Center based hardware (HTPC or XBOX 360) to stream from my new
> fileserver?

I have a question, in response to this question.  For everyone else here to
answer, other than Daniel.  ;-)

For a newbie such as the above, does some other distribution have an easier
setup?  Would you recommend Opensolaris, or one of the various derivative
distributions?  I've never looked at those distros; I don't know what
they're designed for.

Would you recommend something like FreeNAS?  I don't know what Daniel's
opinion would be, but a lot of people asking similar questions might prefer
not to learn about opensolaris, and simply have a drop-in appliance with
simple gui configuration.  I never looked myself, because that's not
attractive to me.  But I wonder if something like that might exist, to
satisfy Daniel and others.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Post crash - what to do - update

2010-04-13 Thread Bruno Sousa
Hi all,

Since Google can be your friend, and thanks to this good article by Ben Rockwood
at http://cuddletech.com/blog/pivot/entry.php?id=965, I have new information, and
hopefully someone might be able to see something interesting in it.
Based on what I can understand, a thread ff001f7f3c60 running on cpu 4
caused a panic (freeing a free IOMMU page: paddr=0xccca2000), and this thread
belongs to a process called zpool-TEST.

Now things get murkier to me, since the pools available in the
system are:

zpool list (filtered info)

NAME
RAID10  
RAIDZ2 
rpool  

So I have no zpool called TEST; however, I had one in the past, and
basically I exported the zpool and imported it under a different name:
2010-02-23.08:33:47 zpool export TEST
2010-02-23.08:34:05 zpool import TEST RAID10

Now, can this renaming lead to this type of error, or am I
completely wrong?

Thanks in advance for all your time,

Bruno



Detailed info :

mdb -k unix.0 vmcore.0

mdb: warning: dump is from SunOS 5.11 snv_132; dcmds and macros may not
match kernel implementation
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc
pcplusmp scsi_vhci zfs mpt sd sockfs ip hook neti sctp arp usba uhci
fctl stmf md lofs idm nfs random sppp fcip cpc crypto logindmux ptm
nsctl ufs ipc ]

::status

debugging crash dump vmcore.0 (64-bit) from san01
operating system: 5.11 snv_132 (i86pc)
panic message: Freeing a free IOMMU page: paddr=0xccca2000
dump content: kernel pages only

::stack

vpanic()
iommu_page_free+0xcb(ff04e3da5000, ccca2000)
iommu_free_page+0x15(ff04e3da5000, ccca2000)
iommu_setup_level_table+0xa0(ff054406d000, ff0543b99000, 8)
iommu_setup_page_table+0xa0(ff054406d000, 100c000)
iommu_map_page_range+0x6a(ff054406d000, 100c000, 3c2329000,
3c2329000, 2)
iommu_map_dvma+0x50(ff054406d000, 100c000, 3c2329000, 1000,
ff001f7f31d0)
intel_iommu_map_sgl+0x22f(ff0553b43e00, ff001f7f31d0, 41)
rootnex_coredma_bindhdl+0x11e(ff04e3ef5cb0, ff04e607f540,
ff0553b43e00, ff001f7f31d0, ff0553efdc50, ff0553efdbf8)
rootnex_dma_bindhdl+0x36(ff04e3ef5cb0, ff04e607f540,
ff0553b43e00, ff001f7f31d0, ff0553efdc50, ff0553efdbf8)
ddi_dma_buf_bind_handle+0x117(ff0553b43e00, ff055860cd00, a, 0,
0, ff0553efdc50)
scsi_dma_buf_bind_attr+0x48(ff0553efdb90, ff055860cd00, a, 0, 0)
scsi_init_cache_pkt+0x2d0(ff05456302e0, 0, ff055860cd00, a, 20, 0)
scsi_init_pkt+0x5c(ff05456302e0, 0, ff055860cd00, a, 20, 0)
vhci_bind_transport+0x54d(ff0543191c58, ff055d2f8968, 4, 0)
vhci_scsi_init_pkt+0x160(ff0543191c58, 0, ff055860cd00, a, 20, 0)
scsi_init_pkt+0x5c(ff0543191c58, 0, ff055860cd00, a, 20, 0)
sd_setup_rw_pkt+0x12a(ff0543b9d080, ff001f7f3688,
ff055860cd00, 4, f7a91b80, ff0543b9d080)
sd_initpkt_for_buf+0xad(ff055860cd00, ff001f7f36f8)
sd_start_cmds+0x197(ff0543b9d080, 0)
sd_core_iostart+0x186(4, ff0543b9d080, ff055860cd00)
sd_mapblockaddr_iostart+0x306(3, ff0543b9d080, ff055860cd00)
sd_xbuf_strategy+0x50(ff055860cd00, ff0544cf0a00, ff0543b9d080)
xbuf_iostart+0x1e5(ff04f21cce80)
ddi_xbuf_qstrategy+0xd3(ff055860cd00, ff04f21cce80)
sdstrategy+0x101(ff055860cd00)
bdev_strategy+0x75(ff055860cd00)
ldi_strategy+0x59(ff04f29a4df8, ff055860cd00)
vdev_disk_io_start+0xd0(ff055c2379a0)
zio_vdev_io_start+0x17d(ff055c2379a0)
zio_execute+0x8d(ff055c2379a0)
vdev_queue_io_done+0x92(ff055c2fe680)
zio_vdev_io_done+0x62(ff055c2fe680)
zio_execute+0x8d(ff055c2fe680)
taskq_thread+0x248(ff0543a086a0)
thread_start+8()


::msgbuf

panic[cpu4]/thread=ff001f7f3c60:
Freeing a free IOMMU page: paddr=0xccca2000


ff001f7f2e90 rootnex:iommu_page_free+cb ()
ff001f7f2eb0 rootnex:iommu_free_page+15 ()
ff001f7f2f10 rootnex:iommu_setup_level_table+a0 ()
ff001f7f2f50 rootnex:iommu_setup_page_table+a0 ()
ff001f7f2fd0 rootnex:iommu_map_page_range+6a ()
ff001f7f3020 rootnex:iommu_map_dvma+50 ()
ff001f7f30e0 rootnex:intel_iommu_map_sgl+22f ()
ff001f7f3180 rootnex:rootnex_coredma_bindhdl+11e ()
ff001f7f31c0 rootnex:rootnex_dma_bindhdl+36 ()
ff001f7f3260 genunix:ddi_dma_buf_bind_handle+117 ()
ff001f7f32c0 scsi:scsi_dma_buf_bind_attr+48 ()
ff001f7f3350 scsi:scsi_init_cache_pkt+2d0 ()
ff001f7f33d0 scsi:scsi_init_pkt+5c ()
ff001f7f3480 scsi_vhci:vhci_bind_transport+54d ()
ff001f7f3500 scsi_vhci:vhci_scsi_init_pkt+160 ()
ff001f7f3580 scsi:scsi_init_pkt+5c ()
ff001f7f3660 sd:sd_setup_rw_pkt+12a ()
ff001f7f36d0 sd:sd_initpkt_for_buf+ad ()
ff001f7f3740 sd:sd_start_cmds+197 ()


::panicinfo

 cpu4
  thread ff001f7f3c60
 message Freeing a free IOMMU page: paddr=0xccca2000
 rdi f78ede80
 rsi ff001f7f2e10
 rdx ccca2000
 rcx1
  r8 ff

Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Joerg Schilling
"David Magda"  wrote:

> Given that ZFS probably would not have to go back to "old" blocks until
> it's reached the end of the disk, that should give the SSDs' firmware
> plenty of time to do block-remapping and background erasing--something
> that's done now anyway regardless of whether an SSD supports TRIM or not.
> You don't need TRIM to make ZFS go fast, though it doesn't hurt.

This is only true as long as the filesystem is far away from being full and
as long as you write less than the spare size of the SSD.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Joerg Schilling
Bob Friesenhahn  wrote:

> Yes of course.  Properly built SSDs include considerable extra space 
> to support wear leveling, and this same space may be used to store 
> erased blocks.  A block which is "overwritten" can simply be written 
> to a block allocated from the extra free pool, and the existing block 
> can be re-assigned to the free pool and scheduled for erasure.  This 
> is a fairly simple recirculating algorithm which just happens to 
> also assist with wear management.

I believe you make a mistake with this assumption.

-   The SSD cannot know which blocks are currently not in use.

-   Especially with a COW filesystem, after some time all net space
	may have been written to, but the SSD does not know whether it
	is still used or not. So you see a mainly empty filesystem
	while the SSD does not know this fact.

-   If you do not write too much to an SSD, it may be that the spare space
	for defect management is sufficient in order to have sufficient
	prepared erased space.

-   Once you write more, I see no reason why a COW filesystem should
	be any better than a non-COW filesystem.



Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Secure delete?

2010-04-13 Thread Joerg Schilling
Bob Friesenhahn  wrote:

> On Sun, 11 Apr 2010, James Van Artsdalen wrote:
>
> > OpenSolaris needs support for the TRIM command for SSDs.  This 
> > command is issued to an SSD to indicate that a block is no longer in 
> > use and the SSD may erase it in preparation for future writes.
>
> There does not seem to be very much `need' since there are other ways 
> that a SSD can know that a block is no longer in use so it can be 
> erased.  In fact, ZFS already uses an algorithm (COW) which is 
> friendly for SSDs.

Could you please explain what "other ways" you have in mind and how they work?

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Thomas Maier-Komor
On 13.04.2010 10:12, Ian Collins wrote:
> On 04/13/10 05:47 PM, Daniel wrote:
>> Hi all.
>>
>> Im pretty new to the whole OpenSolaris thing, i've been doing a bit of
>> research but cant find anything on what i need.
>>
>> I am thinking of making myself a home file server running OpenSolaris
>> with ZFS and utilizing Raid/Z
>>
>> I was wondering if there is anything i can get that will allow Windows
>> Media Center based hardware (HTPC or XBOX 360) to stream from my new
>> fileserver?
>>
>> Any help is appreciated and remember im new :)
>>
> OpenSolaris has a native CIFS service, which enables sharing filesystems
> to Windows clients.
> 
> I used this blog entry to setup my windows shares:
> 
> http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode
> 
> With OpenSolaris, you can get the SMB server with the package manager GUI.
> 

I guess Daniel is rather looking for a UPnP Media Server [1]  like
ushare or coherence that is able to transcode media files and hand them
out to streaming clients.

I have been trying to get this up and running on a Solaris 10 based
SPARC box, but I had no luck. I am not sure if the problem is my
streaming client (Philips TV), because my FritzBox, which has a
streaming server is also not always visible on the Philips TV. But the
software running on the Solaris box never showed up as a service
provider on the TV...

Anyway, this was on Solaris 10, and I didn't bother too much to get it
set up and running on OpenSolaris. There might even be a package
available in the repository. Just look for candidates like ushare,
coherence, and of course libupnp. If those aren't available you'll have
to build them by hand, and I guess this will also require some portability work.
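
(A quick way to check the OpenSolaris package repository for those candidates;
the names below are just the search terms from above, not confirmed package
names:

  pkg search -r ushare
  pkg search -r coherence
  pkg search -r libupnp
)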

Cheers,
Thomas


[1] http://en.wikipedia.org/wiki/UPnP_AV_MediaServers
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Post crash - what to do

2010-04-13 Thread Bruno Sousa
Hi all,

Recently one of the servers, a Dell R710 attached to 2 J4400s, started
to crash quite often.
Finally I got a message in /var/adm/messages that might point to
something useful, but I don't have the expertise to start
troubleshooting this problem, so any help would be highly valuable.

Best regards,
Bruno


The significant messages are :

Apr 13 11:12:04 san01 savecore: [ID 570001 auth.error] reboot after
panic: Freeing a free IOMMU page: paddr=0xccca2000
Apr 13 11:12:04 san01 savecore: [ID 385089 auth.error] Saving compressed
system crash dump in /var/crash/san01/vmdump.0

I also noticed other "interesting" messages like :

Apr 13 11:11:10 san01 unix: [ID 378719 kern.info] NOTICE: cpu_acpi: _PSS
package evaluation failed for with status 5 for CPU 0.
Apr 13 11:11:10 san01 unix: [ID 388705 kern.info] NOTICE: cpu_acpi:
error parsing _PSS for CPU 0
Apr 13 11:11:10 san01 unix: [ID 928200 kern.info] NOTICE: SpeedStep
support is being disabled due to errors parsing ACPI P-state objects
exported by BIOS

Apr 13 11:10:50 san01 scsi: [ID 243001 kern.info]
/p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0):
Apr 13 11:10:50 san01   DMA restricted below 4GB boundary due to errata

Apr 13 11:11:32 san01 scsi: [ID 243001 kern.info]
/p...@0,0/pci8086,3...@9/pci1000,3...@0 (mpt2):
Apr 13 11:11:32 san01   DMA restricted below 4GB boundary due to errata



Relevant specs of the machine :

SunOS san01 5.11 snv_134 i86pc i386 i86pc Solaris

rpool boot drives attached to a Dell SAS6/iR Integrated RAID Controller
(mpt0 Firmware version v0.25.47.0 (IR) )
2 HBA LSI 1068E, each connect to a J4400 jbod (mpt1 Firmware version
v1.26.0.0 (IT) )

multipath enabled and working

2 Quad-Cores, 16Gb ram





smime.p7s
Description: S/MIME Cryptographic Signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to add gpxe,dhcpd to EON

2010-04-13 Thread TienDoan
Hi All,

I'm researching diskless boot over iSCSI. I want to add gpxe and dhcpd to EON.
Could you help me?

Thanks and Regards
Tien Doan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-13 Thread Peter Tripp
>> * Should I be able to import a degraded pool?
> In general, yes. But it is complaining about corrupted data, which can 
> be due to another failure.
Any suggestions on how to discover what that failure might be?

>> * If not, shouldn't there be a warning when exporting a degraded pool?
> What should the warning say?
"You're exporting a degraded pool, it's recommended you replace address this 
issue by replacing missing/failing/failed disks and allowing resilver to 
complete before exporting, otherwise this pool may subsequently fail to import. 
Use -f to export anyways."

>> * If replace 1TB dead disk with a blank disk, might the import work?
> Have you tried simply removing the dead drive?  
Yep. No help.

> Also, the ZFS Troubleshooting Guide has procedures
> that might help.
I've been reading this document, but it seems to cover either zpools that are 
already imported or rpools (mirrored, not raidz) that cause the system to fail 
to boot, but maybe I'm misreading some of it. 
 
> Versions of OpenSolaris after b128 have additional recovery capability
> using the "zpool import -F" option.
Ooo...that sounds promising, is this PSARC 2009/479 or something different?
http://www.c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html

Since 2010.03 (aka 2010.someday) isn't coming anytime soon, can anyone 
recommend another distro with a LiveCD based on snv128 or later so I can try 
and give this a shot?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fileserver help.

2010-04-13 Thread Ian Collins

On 04/13/10 05:47 PM, Daniel wrote:

> Hi all.
>
> Im pretty new to the whole OpenSolaris thing, i've been doing a bit of research 
> but cant find anything on what i need.
>
> I am thinking of making myself a home file server running OpenSolaris with ZFS 
> and utilizing Raid/Z
>
> I was wondering if there is anything i can get that will allow Windows Media 
> Center based hardware (HTPC or XBOX 360) to stream from my new fileserver?
>
> Any help is appreciated and remember im new :)
OpenSolaris has a native CIFS service, which enables sharing filesystems 
to Windows clients.


I used this blog entry to setup my windows shares:

http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode

With OpenSolaris, you can get the SMB server with the package manager GUI.

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots taking too much space

2010-04-13 Thread Khyron
Now is probably a good time to mention that dedupe likes LOTS of RAM, based on
experiences described here.  8 GiB minimum is a good start.  And to avoid those
obscenely long removal times due to updating the DDT, an SSD-based L2ARC device
seems to be highly recommended as well.
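
(A minimal sketch of adding such a device, assuming a pool named tank and an SSD
at a hypothetical c3t0d0:

  zpool add tank cache c3t0d0
  zpool iostat -v tank      # the SSD now shows up under a 'cache' heading
)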

That is, of course, if the OP decides to go the dedupe route.  I get the
feeling there is an actual solution to, or at least an intelligent reason for,
the symptoms he's experiencing.  I'm just not sure what either of those might be.

On Tue, Apr 13, 2010 at 03:09, Peter Tripp  wrote:

> Oops, I meant SHA256.  My mind just maps SHA->SHA1, totally forgetting that
> ZFS actually uses SHA256 (a SHA-2 variant).
>
> More on ZFS dedup, checksums and collisions:
> http://blogs.sun.com/bonwick/entry/zfs_dedup
> http://www.c0t0d0s0.org/archives/6349-Perceived-Risk.html
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


-- 
"You can choose your friends, you can choose the deals." - Equity Private

"If Linux is faster, it's a Solaris bug." - Phil Harman

Blog - http://whatderass.blogspot.com/
Twitter - @khyron4eva
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots taking too much space

2010-04-13 Thread Peter Tripp
Oops, I meant SHA256.  My mind just maps SHA->SHA1, totally forgetting that ZFS 
actually uses SHA256 (a SHA-2 variant).

More on ZFS dedup, checksums and collisions:
http://blogs.sun.com/bonwick/entry/zfs_dedup
http://www.c0t0d0s0.org/archives/6349-Perceived-Risk.html
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] select a qlogic fc HBA port to a qlc/qlt mode?

2010-04-13 Thread likaijun
Hello all, I want to add a dual-port FC HBA (QLogic 2462) to my OpenSolaris 
snv_133 system.
My purpose is to set one port to initiator mode and the other to target mode.

#luxadm -e port
/device/pci...@...  connected
/device/pci...0,1...@..  connected
I know these are both in initiator mode.
Then I ran update_drv -a -i 'pci0,1@' qlt.
After a reboot, the change had not taken effect. That is to say, I can only put 
both ports into qlc or qlt mode.
update_drv -a -i 'pci0@' qlt
behaves the same.
I can only add or remove the binding for the whole device: update_drv -a(-d) -i 'pciex1077,2432' qlc(qlt)
I read /etc/path_to_inst:
"/p...@0,0/pci8086,3...@6/pci1077,1...@0" 0 "qlc"
"/p...@0,0/pci8086,3...@6/pci1077,1...@0" 0 "qlt"
"/p...@0,0/pci8086,3...@6/pci1077,1...@0" 1 "qlc"
"/p...@0,0/pci8086,3...@6/pci1077,1...@0" 1 "qlt"

I found a basic doc here:
http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+Fibre+Channel+Ports
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS RAID-Z1 Degraded Array won't import

2010-04-13 Thread Peter Tripp
> Did you try with -f?  I doubt it will help.
Yep, no luck with -f, -F or -fF. 

> > * If replace 1TB dead disk with a blank disk, might
> the import work?
> 
> Only if the import is failing because the dead disk
> is nonresponsive in a way that makes the import hang.
> Otherwise, you'd import the pool first then replace the drive.
That's what I thought. The bad disk isn't recognized by the SATA card BIOS, so 
it's not half-gone... it's totally missing (I also tried with the disk removed; 
no difference).

> If you have auto-snapshots of your running BE (/etc) from before the import,
> that should work fine. Note that you can pass import an argument
> "-c cachefile" so you don't have to interfere with the current system one.
> 
> You'd have to do this on the original system, I think.
Just checked and didn't have automatic snapshots/time slider enabled.  My rpool 
only has an initial snapshot from install, and sadly I didn't have these four 
disks attached at that point.  And naturally I don't have any backups. FML.

> The logic is that the cachefile contains copies of the labels of the missing
> devices, and can substitute for the devices themselves when importing a
> degraded pool (typically at boot).
> 
> This is useful enough that i'd like to see some of the reserved area
> between the on-disk labels and the first metaslab on each disk, used
> to store a copy of the cache file / same data.  That way every pool
> member has the information about other members necessary to import a
> degraded pool.   Even if it had to be extracted first with zdb to be
> used as a separate zpool.cache as above, it would be helpful for this
> scenario. 
Yeah, I'd totally give up another 256KB/disk or whatever to make a degraded 
array easier to import.  Anyone know if there's an RFE for this?

I may have a disk lying around from a previous OpenSolaris install or from 
Linux ZFS-FUSE where this zpool was originally created.  Maybe I'll get lucky 
and find an old zpool.cache to try.
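
(If such a file turns up, the import attempt would look something like this; the 
cachefile path is a placeholder, and the pool name comes from the earlier 'zpool 
import' output:

  zpool import -c /path/to/old/zpool.cache -f andre
)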

Any other ideas? (Besides the obvious 'BACKUP YOUR F**KING /ETC ONCE IN A 
WHILE!')
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss