Re: [zfs-discuss] Help! OS drive lost, can I recover data?

2012-08-27 Thread eXeC001er
If your data disks are OK, then don't worry: just reinstall the OS and
import the data pool.
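
For reference, a minimal sketch of what that looks like once the new OS can see
the four data disks (the pool name here is only an example - use whatever your
FreeNAS install called it):

  zpool import            # scans attached disks and lists importable pools
  zpool import -f tank    # -f is usually needed since the pool was last used by the dead install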

Thanks.

2012/8/28 Adam adamthek...@gmail.com

 Hi All,

 Bit of a newbie here, in desperate need of help.

 I had a fileserver based on FreeNAS/ZFS - 4 SATA drives in RaidZ, with the
 OS on a USB stick (actually, a spare MicroSD card in a USB adapter).
 Yesterday we had a power outage - that seems to have fried the MicroSD
 card. The other disks *appear* to be OK (although I've done nothing much to
 check them yet - they're being recognised on boot), but the OS is gone -
 the MicroSD is completely unreadable. I think I was using FreeNAS 0.7 - I
 honestly can't remember.

 Can I recover the data? Can anyone talk me through the process?

 Thanks - Adam...




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool resize

2011-04-04 Thread eXeC001er
Try exporting and then re-importing your pool.
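
If that alone doesn't help, a rough sequence to try (pool and device names are
taken from the 'format' output quoted below; 'zpool online -e' is only
available on releases that already have the autoexpand feature):

  zpool export TEST
  zpool import TEST
  # or, without exporting, ask ZFS to expand onto the grown LUN:
  zpool online -e TEST c2t1d0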

2011/4/4 For@ll for...@stalowka.info

 W dniu 01.04.2011 14:50, Richard Elling pisze:

 On Apr 1, 2011, at 4:23 AM, For@ll wrote:

  Hi,

  The LUN is connected to Solaris 10u9 from a NetApp FAS2020A over iSCSI. I
  changed the LUN size on the NetApp, and 'format' on Solaris sees the new
  size, but the zpool still shows the old size.
  I tried zpool export and zpool import, but that didn't resolve the problem.

 bash-3.00# format
 Searching for disks...done


 AVAILABLE DISK SELECTIONS:
       0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
       1. c2t1d0 <NETAPP-LUN-7340-22.00GB>
          /iscsi/d...@iqn.1992-08.com.netapp%3Asn.13510595203E9,0
 Specify disk (enter its number): ^C
 bash-3.00# zpool list
  NAME   SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
  TEST  9,94G    93K  9,94G     0%  ONLINE  -



  What can I do to make zpool show the new size?


 zpool set autoexpand=on TEST
 zpool set autoexpand=off TEST
  -- richard


  I tried your suggestion, but it had no effect.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copy complete zpool via zfs send/recv

2010-12-18 Thread eXeC001er
2010/12/18 Stephan Budach stephan.bud...@jvm.de

  Am 18.12.10 15:14, schrieb Edward Ned Harvey:

  From: Stephan Budach [mailto:stephan.bud...@jvm.de stephan.bud...@jvm.de]

 Ehh. well. you answered it. sort of. ;)
 I think I simply didn't dare to overwrite the root zfs on the destination

  zpool

  with -F, but of course you're right, that this is the way to go.

  What are you calling the root zfs on the destination?
 You're not trying to overwrite / are you?  That would ... admittedly ... not
 be so straightforward.  But I don't think it's impossible.


  The root zfs, to me, is the fs that gets created once you create the
 zpool. So, if I create the zpool tank, I also get the zfs fs tank, no?

Yes, but zfs receive can only put the received data into another pool (or a
dataset within one). You cannot zfs receive onto a raw disk.
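
As an illustration, a commonly used whole-pool copy between two pools looks
roughly like this (pool names are hypothetical):

  zfs snapshot -r tank@copy
  zfs send -R tank@copy | zfs receive -F -d backup   # -F rolls back / overwrites the destination's root fs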




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Live resize/grow of iscsi shared ZVOL

2010-10-03 Thread eXeC001er
correct link:

http://kornax.org/wordpress/2010/10/zfs-iscsi-how-to-do-it-and-my-journey-part-2/
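
For anyone skimming the thread later, the usual live-grow sequence is roughly
the following (names and sizes are only examples; 'stmfadm modify-lu -s'
assumes a COMSTAR build that supports it):

  zfs set volsize=200g tank/iscsivol      # grow the zvol
  stmfadm modify-lu -s 200g <lu-guid>     # update the COMSTAR LU size
  # then rescan the disk on the Windows initiator and extend the partition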

2010/10/3 Steve stwag...@prg.com

 I am certainly a little late to this post, but I recently began using ZFS
 and had to figure this all out.

  There are ways to do this without disturbing the volume or removing and
  re-connecting it on the Windows side.  It took a bit of research, and I put
  up a blog post about it.  I plan to share most of my OpenSolaris and future
  ZFS tidbits there.


 http://kornax.org/wordpress/2010/10/zfs-iscsi-how-to-do-it-and-my-jouney-part-2/

 Hope this helps people!
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 'sync' properties and write operations.

2010-08-28 Thread eXeC001er
Hi.

Can you explain to me:

1. A dataset has 'sync=always'.

If I write to a file on this dataset without requesting synchronous I/O, does
the system perform the write synchronously or asynchronously?

2. A dataset has 'sync=disabled'.

If I write to a file on this dataset with synchronous I/O requested, does the
system perform the write synchronously or asynchronously?



Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs lists discrepancy after added a new vdev to pool

2010-08-28 Thread eXeC001er
 On Sat, Aug 28, 2010 at 02:54, Darin Perusich
 darin.perus...@cognigencorp.com wrote:
  Hello All,
 
  I'm sure this has been discussed previously but I haven't been able to find
  an answer to this. I've added another raidz1 vdev to an existing storage
  pool and the increased available storage isn't reflected in the 'zfs list'
  output. Why is this?
 
  The system in question is running Solaris 10 5/09 s10s_u7wos_08, kernel
  Generic_139555-08. The system does not have the latest patches, which might
  be the cure.
 
  Thanks!
 
  Here's what I'm seeing.
  zpool create datapool raidz1 c1t50060E800042AA70d0  c1t50060E800042AA70d1
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
 NAME   STATE READ WRITE CKSUM
 datapool   ONLINE   0 0 0
   raidz1   ONLINE   0 0 0
 c1t50060E800042AA70d0  ONLINE   0 0 0
 c1t50060E800042AA70d1  ONLINE   0 0 0
 
  zfs list
  NAME   USED  AVAIL  REFER  MOUNTPOINT
  datapool   108K   196G18K  /datapool
 
  zpool add datapool raidz1 c1t50060E800042AA70d2 c1t50060E800042AA70d3
 
  zpool status
   pool: datapool
   state: ONLINE
   scrub: none requested
  config:
 
 NAME   STATE READ WRITE CKSUM
 datapool   ONLINE   0 0 0
   raidz1   ONLINE   0 0 0
 c1t50060E800042AA70d0  ONLINE   0 0 0
 c1t50060E800042AA70d1  ONLINE   0 0 0
   raidz1   ONLINE   0 0 0
 c1t50060E800042AA70d2  ONLINE   0 0 0
 c1t50060E800042AA70d3  ONLINE   0 0 0
 
  zfs list
  NAME   USED  AVAIL  REFER  MOUNTPOINT
  datapool   112K   392G18K  /datapool


Darin, your pool is now striped across two raidz1 top-level vdevs, so its size
is roughly 2 x the usable size of one raidz1 vdev. Each two-disk raidz1 vdev
gives about one disk's worth of usable space (~196G here), so after adding the
second vdev 'zfs list' correctly reports ~392G available.





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs iostat - which unit bit vs. byte

2010-06-17 Thread eXeC001er
The answer is in the zpool source: the bandwidth columns are computed from the
vs_bytes[] counters, so the values are bytes (scaled to per-second), not bits:

usr/src/cmd/zpool/zpool_main.c

        } else {
                print_one_stat(newvs->vs_alloc);
                print_one_stat(newvs->vs_space - newvs->vs_alloc);
        }

        /* operations completed during the interval, scaled to per-second */
        print_one_stat((uint64_t)(scale * (newvs->vs_ops[ZIO_TYPE_READ] -
            oldvs->vs_ops[ZIO_TYPE_READ])));

        print_one_stat((uint64_t)(scale * (newvs->vs_ops[ZIO_TYPE_WRITE] -
            oldvs->vs_ops[ZIO_TYPE_WRITE])));

        /* bandwidth: bytes transferred during the interval, scaled to per-second */
        print_one_stat((uint64_t)(scale * (newvs->vs_bytes[ZIO_TYPE_READ] -
            oldvs->vs_bytes[ZIO_TYPE_READ])));

        print_one_stat((uint64_t)(scale * (newvs->vs_bytes[ZIO_TYPE_WRITE] -
            oldvs->vs_bytes[ZIO_TYPE_WRITE])));

usr/src/uts/common/sys/fs/zfs.h
/*
 * Vdev statistics.  Note: all fields should be 64-bit because this
 * is passed between kernel and userland as an nvlist uint64 array.
 */
typedef struct vdev_stat {
        hrtime_t        vs_timestamp;           /* time since vdev load */
        uint64_t        vs_state;               /* vdev state           */
        uint64_t        vs_aux;                 /* see vdev_aux_t       */
        uint64_t        vs_alloc;               /* space allocated      */
        uint64_t        vs_space;               /* total capacity       */
        uint64_t        vs_dspace;              /* deflated capacity    */
        uint64_t        vs_rsize;               /* replaceable dev size */
        uint64_t        vs_ops[ZIO_TYPES];      /* operation count      */
        uint64_t        vs_bytes[ZIO_TYPES];    /* bytes read/written   */
        uint64_t        vs_read_errors;         /* read errors          */
        uint64_t        vs_write_errors;        /* write errors         */
        uint64_t        vs_checksum_errors;     /* checksum errors      */
        uint64_t        vs_self_healed;         /* self-healed bytes    */
        uint64_t        vs_scan_removing;       /* removing?            */
        uint64_t        vs_scan_processed;      /* scan processed bytes */
} vdev_stat_t;


2010/6/17 pitutek maciej.pl...@gmail.com

 Guys,

  # zpool iostat pool1
                 capacity     operations    bandwidth
  pool         used  avail   read  write   read  write
  ----------  -----  -----  -----  -----  -----  -----
  pool1        822M   927G      0      0    435  28.2K


  In which units is bandwidth measured?
  I suppose the capital K means kilobytes, but I'm not sure. (And the only
  abbreviation documented for mega is a capital M.)

 Docs just say:
 WRITE BANDWIDTH The bandwidth of all write operations, expressed as units
 per second.

 Thanks!

 /M
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Disable ZFS ACL

2010-06-16 Thread eXeC001er
Hi All.

Can you explain to me how to disable ACLs on ZFS?

The 'aclmode' property does not show up among the properties of my zfs
dataset, but it is documented in the zfs man page (
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?l=ena=viewq=zfs )
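
For context, what I am trying amounts to something like this (dataset name is
only an example; as far as I can tell aclinherit still exists, while aclmode
seems to have been dropped in recent builds):

  zfs get aclinherit,aclmode tank/fs
  zfs set aclinherit=discard tank/fs   # discard ACL entries instead of inheriting them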

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Component Naming Requirements

2010-06-07 Thread eXeC001er
Hi All!

Can I create a pool or dataset with a name that contains non-Latin letters
(Russian letters, German-specific letters, etc.)?

I tried to create a pool with non-Latin letters, but could not.
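
For example, an attempt like this is rejected (hypothetical device name; the
exact error message may differ):

  zpool create пул c1t0d0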

In the ZFS User Guide I see the following:

 Each ZFS component must be named according to the following rules:

 - Empty components are not allowed.

 - Each component can only contain alphanumeric characters in addition to
   the following four special characters:

      - Underscore (_)
      - Hyphen (-)
      - Colon (:)
      - Period (.)

 - Pool names must begin with a letter, except for the following
   restrictions:

      - The beginning sequence c[0-9] is not allowed.
      - The name "log" is reserved.
      - A name that begins with "mirror", "raidz", or "spare" is not allowed
        because these names are reserved.

   In addition, pool names must not contain a percent sign (%).

 - Dataset names must begin with an alphanumeric character. Dataset names
   must not contain a percent sign (%).


As you can see, the guide says nothing about the letters having to be Latin.

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-25 Thread eXeC001er
Try 'zdb -l /dev/rdsk/c1d0s0' (the s0 slice rather than the whole-disk device).

2010/5/25 h bajsadb...@pleasespam.me

 eon:1:~#zdb -l /dev/rdsk/c1d0
 
 LABEL 0
 
 failed to unpack label 0
 
 LABEL 1
 
 failed to unpack label 1
 
 LABEL 2
 
 failed to unpack label 2
 
 LABEL 3
 
 failed to unpack label 3


 same for the other five drives in the pool
 what now?
 --
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] spares bug: explain to me status of bug report.

2010-05-18 Thread eXeC001er
Hi.

In Bugster I found a bug about spares.
I can reproduce the problem, but the developer set the status to "Not a
defect". Why?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905317

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] spares bug: explain to me status of bug report.

2010-05-18 Thread eXeC001er
6887163 (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6887163) -
11-Closed: Duplicate
6945634 (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6945634) -
11-Closed: Duplicate


2010/5/18 Cindy Swearingen cindy.swearin...@oracle.com

 Hi--

 The scenario in the bug report below is that the pool is exported.

  The spare can't kick in if the pool is exported. It looks like the issue
  reported in this CR's See Also section, CR 6887163, is still open.

 Thanks,

 Cindy


 On 05/18/10 11:19, eXeC001er wrote:

 Hi.

  In Bugster I found a bug about spares. I can reproduce the problem, but the
  developer set the status to "Not a defect". Why?

 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905317

 Thanks.


 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I/O statistics for each file system

2010-05-17 Thread eXeC001er
Good, but that utility shows statistics only for mounted file systems.
How can I view statistics for a dataset shared over iSCSI?

Thanks.

2010/5/17 Darren J Moffat darr...@opensolaris.org

 On 17/05/2010 12:41, eXeC001er wrote:

  I know that I can view statistics for the pool (zpool iostat).
  I want to view statistics for each file system in the pool. Is that possible?


 See fsstat(1M)

 --
 Darren J Moffat
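
For reference, typical fsstat usage looks like this (mount point and interval
are only examples):

  fsstat /tank/myfs 5    # per-mountpoint statistics every 5 seconds
  fsstat zfs 5           # aggregate statistics for all mounted zfs file systems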

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I/O statistics for each file system

2010-05-17 Thread eXeC001er
perfect!

I found info about kstat for Perl.

Where can I find the meaning of each field?

r...@atom:~# kstat stmf:0:stmf_lu_io_ff00d1c2a8f8
1274100947
module: stmf                            instance: 0
name:   stmf_lu_io_ff00d1c2a8f8         class:    io
crtime  2333040.65018394
nread   9954962
nwritten5780992
rcnt0
reads   599
rlastupdate 2334856.48028583
rlentime2.792307252
rtime   2.453258966
snaptime2335022.3396771
wcnt0
wlastupdate 2334856.43951113
wlentime0.103487047
writes  510
wtime   0.069508209
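
For rough per-LU bandwidth you can also let kstat sample the counters at an
interval and diff them yourself (the LU kstat name below is a placeholder for
the full name shown above):

  kstat -p stmf:0:stmf_lu_io_<lu>:nread 10 2
  # two samples 10 seconds apart; (second - first) / 10 gives average read bytes/s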

2010/5/17 Henrik Johansen hen...@scannet.dk

 Hi,


 On 05/17/10 01:57 PM, eXeC001er wrote:

  Good, but that utility shows statistics only for mounted file systems.
  How can I view statistics for a dataset shared over iSCSI?


  fsstat(1M) relies on certain kstat counters for its operation -
  last I checked, I/O against zvols does not update those counters.

  If you are using newer builds and COMSTAR, you can use the stmf kstat
  counters to get I/O details per target and per LUN.

  Thanks.

 2010/5/17 Darren J Moffat darr...@opensolaris.org


On 17/05/2010 12:41, eXeC001er wrote:

 I know that I can view statistics for the pool (zpool iostat).
 I want to view statistics for each file system in the pool. Is that
 possible?


See fsstat(1M)

--
Darren J Moffat




 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] I/O statistics for each file system

2010-05-17 Thread eXeC001er
Good! I found all the necessary information.
Thanks.

2010/5/17 Henrik Johansen hen...@scannet.dk

 On 05/17/10 03:05 PM, eXeC001er wrote:

 perfect!

 I found info about kstat for Perl.

 Where can I find the meaning of each field?


 Most of them are described here, under the "I/O kstat" section:

 http://docs.sun.com/app/docs/doc/819-2246/kstat-3kstat?a=view


  r...@atom:~# kstat stmf:0:stmf_lu_io_ff00d1c2a8f8
 1274100947
 module: stmf                            instance: 0
 name:   stmf_lu_io_ff00d1c2a8f8         class:    io
 crtime  2333040.65018394
 nread   9954962
 nwritten5780992
 rcnt0
 reads   599
 rlastupdate 2334856.48028583
 rlentime2.792307252
 rtime   2.453258966
 snaptime2335022.3396771
 wcnt0
 wlastupdate 2334856.43951113
 wlentime0.103487047
 writes  510
 wtime   0.069508209

 2010/5/17 Henrik Johansen hen...@scannet.dk


Hi,


On 05/17/10 01:57 PM, eXeC001er wrote:

 Good, but that utility shows statistics only for mounted file systems.
 How can I view statistics for a dataset shared over iSCSI?


 fsstat(1M) relies on certain kstat counters for its operation -
 last I checked, I/O against zvols does not update those counters.

 If you are using newer builds and COMSTAR, you can use the stmf
 kstat counters to get I/O details per target and per LUN.

Thanks.

2010/5/17 Darren J Moffat darr...@opensolaris.org



On 17/05/2010 12:41, eXeC001er wrote:

 I know that I can view statistics for the pool (zpool iostat).
 I want to view statistics for each file system in the pool.
 Is that possible?


See fsstat(1M)

--
Darren J Moffat




--
Med venlig hilsen / Best Regards

Henrik Johansen
 hen...@scannet.dk

Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet




 --
 Med venlig hilsen / Best Regards

 Henrik Johansen
 hen...@scannet.dk
 Tlf. 75 53 35 00

 ScanNet Group
 A/S ScanNet

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] dedup ratio for iscsi-shared zfs dataset

2010-05-06 Thread eXeC001er
Hi.

How can I get this information (the dedup ratio for a dataset shared over iSCSI)?
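
If it helps anyone searching later: as far as I can tell the ratio is only
tracked per pool, not per dataset (pool name is an example):

  zpool get dedupratio tank    # overall dedup ratio for the pool
  zdb -DD tank                 # more detailed DDT statistics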

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread eXeC001er
Perhaps the problem is that the old pool version had shareiscsi, but the new
version does not have this option, and to share a LUN over iSCSI you now need
to set up the LUN mapping yourself (under COMSTAR).
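
Roughly, that re-sharing under COMSTAR looks like this (the GUID is a
placeholder printed by sbdadm; target creation is only needed if none exists
yet):

  sbdadm create-lu /dev/zvol/rdsk/vol01/zvol01   # register the zvol as a logical unit
  stmfadm add-view <lu-guid>                     # map the LU (default host/target groups)
  itadm create-target                            # create an iSCSI target if needed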



2010/5/4 Przemyslaw Ceglowski prze...@ceglowski.net

 Jim,

 On May 4, 2010, at 3:45 PM, Jim Dunham wrote:

 
  On May 4, 2010, at 2:43 PM, Richard Elling wrote:
 
  On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote:
  
   It does not look like it is:
  
   r...@san01a:/export/home/admin# svcs -a | grep iscsi
   online May_01   svc:/network/iscsi/initiator:default
   online May_01   svc:/network/iscsi/target:default
  
  This is COMSTAR.
 
  Thanks Richard, I am aware of that.
 
  Since you upgraded to b134, not b136, the iSCSI Target Daemon is still
  around - it is just not installed on your system.
 
 IPS packaging changes have not installed the iSCSI Target Daemon (among
 other things) by default. It is contained in IPS package known as either
 SUNWiscsitgt or network/iscsi/target/legacy. Visit your local package
 repository for updates: http://pkg.opensolaris.org/dev/
 
 Of course starting with build 136..., iSCSI Target Daemon (and ZFS
 shareiscsi) are gone, so you will need to reconfigure your two ZVOLs
 'vol01/zvol01' and 'vol01/zvol02', under COMSTAR soon.
 
 
 http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports
 http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration
 
 - Jim

  The migrated zvols were originally running under COMSTAR on b104, which
  makes me wonder even more. Is there any way I can get rid of those messages?

 
 
 
  _
  Przem
 
 
 
  
  From: Rick McNeal [ramcn...@gmail.com]
  Sent: 04 May 2010 13:14
  To: Przemyslaw Ceglowski
  Subject: Re: [storage-discuss] iscsitgtd failed request to share on
  zpool import after upgrade from b104 to b134
 
   Look and see if the target daemon service is still enabled. COMSTAR
   has been the official SCSI target project for a while now. In fact, the
   old iscsitgtd was removed in build 136.
 
 For Nexenta, the old iscsi target was removed in 3.0 (based on b134).
  -- richard
 
  It does not answer my original question.
  -- Przem
 
 
 
  Rick McNeal
 
 
  On May 4, 2010, at 5:38 AM, Przemyslaw Ceglowski
 prze...@ceglowski.net wrote:
 
  Hi,
 
  I am posting my question to both storage-discuss and zfs-discuss
 as I am not quite sure what is causing the messages I am receiving.
 
   I have recently migrated my zfs volume from b104 to b134 and
  upgraded it from zfs version 14 to 22. It consists of two zvols,
  'vol01/zvol01' and 'vol01/zvol02'.
   During zpool import I get a non-zero exit code; however, the
  volume is imported successfully. Could you please help me understand
  what could be the reason for those messages?
 
  r...@san01a:/export/home/admin#zpool import vol01
  r...@san01a:/export/home/admin#cannot share 'vol01/zvol01':
 iscsitgtd failed request to share
  r...@san01a:/export/home/admin#cannot share 'vol01/zvol02':
 iscsitgtd failed request to share
 
  Many thanks,
  Przem
  ___
  storage-discuss mailing list
  storage-disc...@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/storage-discuss
 
 --
 ZFS storage and performance consulting at http://www.RichardElling.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recovering data

2010-04-20 Thread eXeC001er
Hi All.

I have a pool (3 disks, raidz1). I recabled the disks, and now some of the
disks in the pool are unavailable (cannot open). Going back to the old cabling
is not possible. Can I recover the data from this pool?
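
(A commonly suggested first step in this situation - pool name is an example -
is an export/import so ZFS rescans the devices under their new paths:

  zpool export tank
  zpool import -d /dev/dsk tank
)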

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Performance

2010-04-14 Thread eXeC001er
Hi All.

How much disk space do I need to keep free to preserve ZFS performance?

Is there an official doc on this?

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance

2010-04-14 Thread eXeC001er
20% free is a lot of space on large volumes, right?
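
For what it's worth, the quickest way to keep an eye on this is the CAP column
(pool name is an example):

  zpool list tank    # CAP shows how full the pool is, as a percentage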


2010/4/14 Yariv Graf ya...@walla.net.il

 Hi
 Keep below 80%

 10

 On Apr 14, 2010, at 6:49 PM, eXeC001er execoo...@gmail.com wrote:

 Hi All.

  How much disk space do I need to keep free to preserve ZFS performance?

  Is there an official doc on this?

 Thanks.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss