Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-04 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi

maybe check out STEC SSDs,
or check the service manual of the Sun ZFS Storage Appliance
to see which read and write SSDs are used in that system.
regards
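
Once you have picked the devices, attaching them to an existing pool is a 
one-liner each. A minimal sketch, assuming a pool named tank and purely 
hypothetical device names -- the SLC SSD with power-loss protection goes in 
as the slog, the larger SSD as L2ARC:

# zpool add tank log c4t2d0
# zpool add tank cache c4t3d0

The slog can also be mirrored: zpool add tank log mirror c4t2d0 c4t4d0.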


Sent from my iPad

On Aug 3, 2012, at 22:05, Hung-Sheng Tsao (LaoTsao) Ph.D laot...@gmail.com 
wrote:

 Intel 311 Series Larsen Creek 20GB 2.5 SATA II SLC Enterprise Solid State 
 Disk SSDSA2VP020G201
 
 
 Sent from my iPad
 
 On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:
 
 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
Intel 311 Series Larsen Creek 20GB 2.5 SATA II SLC Enterprise Solid State Disk 
SSDSA2VP020G201


Sent from my iPad

On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao Ph.D.

http://docs.oracle.com/cd/E19963-01/html/821-1448/gbbwl.html

what is the output of
*zpool import -nF tXstpool*

On 8/2/2012 2:21 AM, Suresh Kumar wrote:

Hi Hung-sheng,
Thanks for your response.
I tried to import the zpool using *zpool import -nF tXstpool*
please consider the below output.
*bash-3.2#  zpool import -nF tXstpool
bash-3.2#
bash-3.2# zpool status tXstpool
cannot open 'tXstpool': no such pool
*
 I got these messages when I ran the command under *truss*:
* truss -aefo /zpool.txt zpool import -F tXstpool*

  742  14582:  ioctl(3, ZFS_IOC_POOL_STATS, 0x08041F40) Err#2 ENOENT
  743  14582:  ioctl(3, ZFS_IOC_POOL_TRYIMPORT, 0x08041F90)= 0
  744  14582:  sysinfo(SI_HW_SERIAL, 75706560, 11)   = 9
  745  14582:  ioctl(3, ZFS_IOC_POOL_IMPORT, 0x08041C40) Err#6 ENXIO
  746  14582:  fstat64(2, 0x08040C70)  = 0
  747  14582:  write(2,  c a n n o t   i m p o r.., 24)  = 24
  748  14582:  write(2,  :  , 2) = 2
  749  14582:  write(2,  o n e   o r   m o r e  .., 44)  = 44
  750  14582:  write(2, \n, 1)   = 1

Thanks & Regards,
Suresh


--

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
so zpool import -F ...
and zpool import -f ...
are both not working?
regards


Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
can you post the output of zpool history?
regards

Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-01 Thread Hung-sheng Tsao
Hi,
try
zpool import -nF tXstpool
to see if it can roll back to some good state.
If you can afford to lose some data:
zpool import -F tXstpool



Sent from my iPhone

On Aug 1, 2012, at 3:21 AM, Suresh Kumar sachinnsur...@gmail.com wrote:

 Dear ZFS-Users,
 
 I am using Solaris x86 10u10. All the devices which belong to my zpool 
 are in the available state.
 But I am unable to import the zpool.
 
 #zpool import tXstpool
 cannot import 'tXstpool': one or more devices is currently unavailable
 ==
 bash-3.2# zpool import
   pool: tXstpool
 id: 13623426894836622462
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
 devices and try again.
see: http://www.sun.com/msg/ZFS-8000-6X
 config:
 
 tXstpool UNAVAIL  missing device
   mirror-0   DEGRADED
 c2t210100E08BB2FC85d0s0  FAULTED  corrupted data
 c2t21E08B92FC85d2        ONLINE
 
 Additional devices are known to be part of this pool, though their
 exact configuration cannot be determined.
 =
 Any suggestion regarding this case is very helpful.
 
 Regards,
 Suresh.
 
  
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-26 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
IMHO, 147440-21 does not list the bugs that are solved by 148098,
even though it obsoletes 148098.



Sent from my iPad

On Jul 25, 2012, at 18:14, Habony, Zsolt zsolt.hab...@hp.com wrote:

 Thank you for your replies.
 
 First, sorry for the misleading info.  Patch 148098-03 is indeed not included in 
 the recommended set, but trying to download it shows that 147440-15 obsoletes it
 and that 147440-19 is included in the latest recommended patch set.
 So time has solved the problem elsewhere.
 
 Just for fun, my case was:
 
 A standard LUN is used as a ZFS filesystem, with no redundancy (the storage 
 array already provides it) and no partitioning; the disk is given directly to zpool.
 # zpool status -oraarch
  pool: -oraarch
 state: ONLINE
 scan: none requested
 config:
 
NAME STATE READ WRITE CKSUM
xx-oraarch   ONLINE   0 0 0
  c5t60060E800570B90070B96547d0  ONLINE   0 0 0
 
 errors: No known data errors
 
 Partitioning shows this.  
 
 partition pr
 Current partition table (original):
 Total disk sectors available: 41927902 + 16384 (reserved sectors)
 
 Part  TagFlag First SectorSizeLast Sector
  0usrwm   256  19.99GB 41927902
  1 unassignedwm 0  0  0
  2 unassignedwm 0  0  0
  3 unassignedwm 0  0  0
  4 unassignedwm 0  0  0
  5 unassignedwm 0  0  0
  6 unassignedwm 0  0  0
  8   reservedwm  41927903   8.00MB 41944286
 
 
 As I mentioned, I did not partition it; zpool create did.  I had absolutely 
 no idea how to resize these partitions, where to get the available number of 
 sectors, or how many should be skipped and reserved ...
 Thus I backed up the 10G, destroyed the zpool, created the zpool again (the size 
 was fine now), and restored the data.
 
 The partition looks like this now; I do not think I could have created it easily 
 by hand.
 
 partition pr
 Current partition table (original):
 Total disk sectors available: 209700062 + 16384 (reserved sectors)
 
 Part  TagFlag First Sector Size Last Sector
  0usrwm   256   99.99GB  209700062
  1 unassignedwm 0   0   0
  2 unassignedwm 0   0   0
  3 unassignedwm 0   0   0
  4 unassignedwm 0   0   0
  5 unassignedwm 0   0   0
  6 unassignedwm 0   0   0
  8   reservedwm 2097000638.00MB  209716446
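 
 (For what it's worth, on releases that support automatic LUN expansion the 
 destroy/recreate step can usually be avoided. A rough sketch, assuming the 
 whole-disk vdev shown above and that the autoexpand feature is present at 
 your patch level:
 
 # zpool set autoexpand=on xx-oraarch
 # zpool online -e xx-oraarch c5t60060E800570B90070B96547d0
 
 ZFS then grows the EFI label to cover the resized LUN and the extra space 
 shows up in zpool list.)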
 
 Thank you for your help.
 Zsolt Habony
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any company willing to support a 7410 ?

2012-07-19 Thread Hung-Sheng Tsao Ph.D.

hi
you have two issues here:
1) HW support
2) SW support
No one but Oracle can provide SW support, even if you find someone for HW 
support.

regards
The others you mentioned are all OpenSolaris forks; some do provide a GUI, 
but the pricing models are very different.

AFAIK Nexenta charges by raw capacity :-(
Not sure any of these are in the HW support business.


On 7/19/2012 5:38 AM, sol wrote:
Other than Oracle do you think any other companies would be willing to 
take over support for a clustered 7410 appliance with 6 JBODs?


(Some non-Oracle names which popped out of google: 
Joyent/Coraid/Nexenta/Greenbytes/NAS/RackTop/EraStor/Illumos/???)




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris derivate with the best long-term future

2012-07-11 Thread Hung-Sheng Tsao Ph.D.

hi
if you have not checked this page, please do:
http://en.wikipedia.org/wiki/ZFS
It has interesting info about the status of ZFS in the various OSes.
regards
my 2c:
1) if you have the money, buy the ZFS appliance
2) if you want to build it yourself with napp-it, get Solaris 11 support; it 
is charged per SW socket and not by storage capacity

The Nexenta Enterprise platform charges you $$ for raw capacity.
On 7/11/2012 7:51 AM, Eugen Leitl wrote:

As a napp-it user who needs to upgrade from NexentaCore, I recently saw
"preferred for OpenIndiana live but running under Illumian, NexentaCore and Solaris 
11 (Express)"
given as a system recommendation for napp-it.

I wonder about the future of OpenIndiana and Illumian, which
fork is likely to see the most continued development, in your opinion?

Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Hung-Sheng Tsao (LaoTsao) Ph.D


Sent from my iPad

On Jul 11, 2012, at 13:11, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Wed, 11 Jul 2012, Richard Elling wrote:
 The last studio release suitable for building OpenSolaris is available in 
 the repo.
 See the instructions at 
 http://wiki.illumos.org/display/illumos/How+To+Build+illumos
 
 Not correct as far as I can tell.  You should re-read the page you 
 referenced.  Oracle rescinded (or lost) the special Studio releases needed to 
 build the OpenSolaris kernel.  

hi
you can still download Studio 12, 12.1, and 12.2, AFAIK through OTN


 The only way I can see to obtain these releases is illegally.
 
 However, Studio 12.3 (free download) produces user-space executables which 
 run fine under Illumos.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Hung-Sheng Tsao Ph.D.


On 7/11/2012 3:16 PM, Bob Friesenhahn wrote:

On Wed, 11 Jul 2012, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:


Not correct as far as I can tell.  You should re-read the page you 
referenced.  Oracle rescinded (or lost) the special Studio releases 
needed to build the OpenSolaris kernel.


you can still download 12 12.1 12.2, AFAIK through OTN


That is true (and I have done so).  Unfortunately the versions offered 
are not the correct ones to build the OpenSolaris kernel. Special 
patched versions with particular date stamps are required.  The only 
way that I see to obtain these files any more is via distribution
channels primarily designed to perform copyright violations.
One can still download the patches through MOS, but one needs to pay for 
support for the development tools :-(


Bob


--



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots slow on sol11?

2012-06-27 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
just wondering: can you switch from Samba to the native (in-kernel) SMB server?
regards

Sent from my iPad

On Jun 27, 2012, at 2:46, Carsten John cj...@mpi-bremen.de wrote:

 -Original message-
 CC:ZFS Discussions zfs-discuss@opensolaris.org; 
 From:Jim Klimov jimkli...@cos.ru
 Sent:Tue 26-06-2012 22:34
 Subject:Re: [zfs-discuss] snapshots slow on sol11?
 2012-06-26 23:57, Carsten John wrote:
 Hello everybody,
 
 I recently migrated a file server (NFS & Samba) from OpenSolaris (Build 
 111) 
 to Sol11.
 (After?) the move we are facing random (or random looking) outages of 
 our Samba...
 
 As for the timeouts, check that your tuning (i.e. migrated files
 like /etc/system) doesn't enforce long TXG syncs (the default was 30 sec)
 or something like that.
 
 Find some DTrace scripts to see if ZIL is intensively used during
 these user-profile writes, and if these writes are synchronous -
 maybe an SSD/DDR logging device might be useful for this scenario?
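 
 (A minimal sketch of such a check, counting zil_commit() calls per process 
 for ten seconds -- zil_commit is the kernel entry point for synchronous 
 ZIL writes:
 
 # dtrace -n 'fbt::zil_commit:entry { @[execname] = count(); } tick-10s { exit(0); }'
 
 A high count for the smbd/nfsd processes here would suggest a separate log 
 device is worth testing.)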
 
 Regarding the zfs-auto-snapshot, it is possible to install the old
 scripted package from OpenSolaris onto Solaris 10 at least; I did
 not have much experience with newer releases yet (timesliderd) so
 can't help better.
 
 HTH,
 //Jim Klimov
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 Hi everybody,
 
 in the meantime I was able to rule out the snapshots: I disabled snapshotting, 
 but the issue still persists. I will now check Jim's suggestions.
 
 thx so far
 
 
 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (fwd) Re: ZFS NFS service hanging on Sunday

2012-06-25 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
in Solaris, ZFS caches many things, so you should have more RAM;
if you set up 18 GB of swap, IMHO RAM should be higher than 4 GB.
regards
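
A quick way to see how much RAM the ARC is actually holding (these are the 
standard arcstats kstats, shown here as a sketch):

# kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max

Comparing arcstats:size with the free column in vmstat usually makes it 
obvious whether the box is simply short on RAM.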

Sent from my iPad

On Jun 25, 2012, at 5:58, tpc...@mklab.ph.rhul.ac.uk wrote:

 
 2012-06-14 19:11, tpc...@mklab.ph.rhul.ac.uk wrote:
 
 In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, 
 tpc...@mklab.ph.r
 hul.ac.uk writes:
 Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
 My WAG is that your zpool history is hanging due to lack of
 RAM.
 
 Interesting.  In the problem state the system is usually quite responsive, 
 e.g. not memory thrashing.  Under Linux, which I'm more
 familiar with, 'used memory' = 'total memory' - 'free memory' refers to 
 physical memory being used for data caching by
 the kernel (still available for processes to allocate as needed) 
 together with memory allocated to processes, as opposed to
 only physical memory already allocated and therefore really 'used'.  Does 
 this mean something different under Solaris?
 
 Well, it is roughly similar. In Solaris there is a general notion
 
 [snipped]
 
 Dear Jim,
    Thanks for the detailed explanation of ZFS memory usage.  Special 
 thanks also to John D Groenveld for the initial suggestion of a lack-of-RAM
 problem.  Since upping the RAM from 2GB to 4GB the machine has sailed through 
 the last two Sunday mornings without problems.  I was interested to
 subsequently discover the Solaris command 'echo ::memstat | mdb -k', which 
 reveals just how much memory ZFS can use.
 
 Best regards
 Tom.
 
 --
 Tom Crane, Dept. Physics, Royal Holloway, University of London, Egham Hill,
 Egham, Surrey, TW20 0EX, England.
 Email:  T.Crane@rhul dot ac dot uk
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-16 Thread Hung-Sheng Tsao Ph.D.
Maybe so, but in all x86 installations, if one chooses the default or uses a 
VirtualBox image,
S11, S11 Express, OI, and S10u10 all have the ZFS rpool disk partition start 
from cylinder 1.
On this list I have even come across users having issues with zpool create 
when the disk partition starts from cylinder 0.

Rather be safe than sorry.
regards



On 6/16/2012 12:23 PM, Richard Elling wrote:

On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:


by the way
when you format start with cylinder 1 donot use 0

There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
  -- richard



--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Hung-Sheng Tsao Ph.D.


one possible way (a rough command sketch follows below):
1) break the mirror
2) install one new HDD, format the HDD
3) create a new zpool on the new HDD with 4k blocks
4) create a new BE on the new pool with the old root pool as the source (not 
sure which version of Solaris or OpenSolaris you are using; the 
procedure differs depending on the version)

5) activate the new BE
6) boot the new BE
7) destroy the old zpool
8) replace the remaining old HDD with the second new HDD
9) format that HDD
10) attach it to the new root pool
regards
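
A rough command-level sketch of the above, assuming Solaris 11 (beadm), an 
old mirror of c0t0d0s0/c0t1d0s0 and the first new 4k disk at c0t2d0s0 -- all 
device and pool names here are illustrative only:

# zpool detach rpool c0t1d0s0          (step 1: break the mirror)
# zpool create rpool2 c0t2d0s0         (step 3: new pool on the 4k disk)
# beadm create -p rpool2 be-4k         (step 4: new BE in the new pool)
# beadm activate be-4k                 (step 5)
# init 6                               (step 6: boot the new BE)

Boot blocks still have to be installed on the new disk (installgrub/installboot, 
or bootadm install-bootloader on newer S11), and the swap/dump zvols recreated 
in the new pool.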



On 6/15/2012 8:14 AM, Hans J Albertsson wrote:

I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata connector, 
though, and a suitable external cabinet for connecting one extra disk.


How would I go about migrating/expanding the root pool to the larger 
disks so I can then use the larger disks for booting?


I have no extra machine to use.



Sent from my Android Mobile


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Hung-Sheng Tsao Ph.D.

yes
Which version of Solaris or BSD are you using?
For BSD I do not know the steps to create a new BE (boot environment).
For S10, OpenSolaris, and Solaris Express (and maybe other OpenSolaris 
forks) you use Live Upgrade;
for S11 you use beadm.
regards
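
For the Live Upgrade case, the key step looks roughly like this (pool and BE 
names are just examples):

# lucreate -n newBE -p newrpool
# luactivate newBE
# init 6

lucreate -p puts the copy of the current BE into the new root pool; luactivate 
plus a reboot then switches over to it.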



On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
I suppose I must start by labelling the new disk properly, and give 
the s0 partition to zpool, so the new zpool can be booted?





Sent from my Android Mobile

Hung-Sheng Tsao Ph.D. laot...@gmail.com skrev:

one possible way:
1)break the mirror
2)install new hdd, format the HDD
3)create new zpool on new hdd with 4k block
4)create new BE  on the new pool with the old root pool as source (not 
sure  which version of solaris or openSolaris ypu are using the 
procedure may be different depend on version

5)activate the new BE
6)boot the new BE
7)destroy the old zpool
8)replace old HDD with new HDD
9)format the HDD
10)attach the HDD to the new root pool
regards



On 6/15/2012 8:14 AM, Hans J Albertsson wrote:

I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata connector, 
though, and a suitable external cabinet for connecting one extra disk.


How would I go about migrating/expanding the root pool to the larger 
disks so I can then use the larger disks for booting?


I have no extra machine to use.



Sent from my Android Mobile


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Hung-Sheng Tsao Ph.D.

by the way,
when you format, start with cylinder 1; do not use 0.
Depending on the version of Solaris you may not be able to use 2TB as a root disk.
regards


On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph.D. wrote:

yes
which version of solaris or bsd you are using?
for bsd I donot know the steps for create new BE (boot env)
for s10 and opensolaris and solaris express (may be other opensolaris 
fork) , you use the liveupgrade

for s11 you use beadm
regards



On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
I suppose I must start by labelling the new disk properly, and give 
the s0 partition to zpool, so the new zpool can be booted?





Sent from my Android Mobile

Hung-Sheng Tsao Ph.D. laot...@gmail.com skrev:

one possible way:
1)break the mirror
2)install new hdd, format the HDD
3)create new zpool on new hdd with 4k block
4)create new BE  on the new pool with the old root pool as source 
(not sure  which version of solaris or openSolaris ypu are using 
the procedure may be different depend on version

5)activate the new BE
6)boot the new BE
7)destroy the old zpool
8)replace old HDD with new HDD
9)format the HDD
10)attach the HDD to the new root pool
regards



On 6/15/2012 8:14 AM, Hans J Albertsson wrote:

I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata 
connector, though, and a suitable external cabinet for connecting 
one extra disk.


How would I go about migrating/expanding the root pool to the larger 
disks so I can then use the larger disks for booting?


I have no extra machine to use.



Sent from my Android Mobile


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



--



--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating 512 byte block zfs root pool to 4k disks

2012-06-15 Thread Hung-Sheng Tsao Ph.D.

hi
what is the version of Solaris?
uname -a output?
regards


On 6/15/2012 10:37 AM, Hung-Sheng Tsao Ph.D. wrote:

by the way
when you format start with cylinder 1 donot use 0
depend on the version of Solaris you may not be able to use 2TB as root
regards


On 6/15/2012 9:53 AM, Hung-Sheng Tsao Ph.D. wrote:

yes
which version of solaris or bsd you are using?
for bsd I donot know the steps for create new BE (boot env)
for s10 and opensolaris and solaris express (may be other opensolaris 
fork) , you use the liveupgrade

for s11 you use beadm
regards



On 6/15/2012 9:13 AM, Hans J Albertsson wrote:
I suppose I must start by labelling the new disk properly, and give 
the s0 partition to zpool, so the new zpool can be booted?





Sent from my Android Mobile

Hung-Sheng Tsao Ph.D. laot...@gmail.com skrev:

one possible way:
1)break the mirror
2)install new hdd, format the HDD
3)create new zpool on new hdd with 4k block
4)create new BE  on the new pool with the old root pool as source 
(not sure  which version of solaris or openSolaris ypu are using 
the procedure may be different depend on version

5)activate the new BE
6)boot the new BE
7)destroy the old zpool
8)replace old HDD with new HDD
9)format the HDD
10)attach the HDD to the new root pool
regards



On 6/15/2012 8:14 AM, Hans J Albertsson wrote:

I've got my root pool on a mirror on 2 512 byte blocksize disks.
I want to move the root pool to two 2 TB disks with 4k blocks.
The server only has room for two disks. I do have an esata 
connector, though, and a suitable external cabinet for connecting 
one extra disk.


How would I go about migrating/expanding the root pool to the 
larger disks so I can then use the larger disks for booting?


I have no extra machine to use.



Sent from my Android Mobile


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



--



--



--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk failure chokes all the disks attached to thefailingdisk HBA

2012-05-31 Thread Hung-Sheng Tsao Ph.D.

just FYI,
this is from Intel:
http://www.intel.com/support/motherboards/server/sb/CS-031831.htm

Another observation:
Oracle/Sun has moved away from SATA to SAS in the ZFS Storage Appliance.

If you want to go deeper, take a look at these presentations
http://www.scsita.org/sas_library/tutorials/
and the other presentations on the site.
regards


On 5/31/2012 12:45 PM, Antonio S. Cofiño wrote:

Markus,

 After Jim's answer I have started to read about the well-known issue.


Is it just mpt causing the errors or also mpt_sas?


Both drivers are causing the reset storm (See my answer to Jim's e-mail).

General consensus from various people: don't use SATA drives on SAS 
back-

planes. Some SATA drives might work better, but there seems to be no
guarantee. And even for SAS-SAS, try to avoid SAS1 backplanes.


 In Paul Kraus's answer it is mentioned that Oracle support says (among 
 other things):

4. the problem happens with SAS as well as SATA drives, but is much
less frequent




 That means that using SAS drives will reduce the probability of 
 the issue, but no guarantee exists.



General consensus from various people: don't use SATA drives on SAS 
back-

planes. Some SATA drives might work better, but there seems to be no
guarantee. And even for SAS-SAS, try to avoid SAS1 backplanes.


 Yes, maybe the 'general consensus' is right, but the 'general consensus' 
 also told me to use hardware-based RAID solutions. But I started to do 
 'risky business' (as some vendors told me) using ZFS, and have ended up 
 discovering how robust ZFS is against this kind of protocol error.


 From my completely naive point of view it appears to be more an issue with the 
 HBA's FW than an issue with the SATA drives.


 With your answers I have done a lot of research, helping me to learn 
 new things.


Please more comments and help are welcome (from some SAS expert?).

Antonio

--
Antonio S. Cofiño


El 31/05/2012 18:04, Weber, Markus escribió:

Antonio S. Cofiño wrote:

[...]
The system is a supermicro motherboard X8DTH-6F in a 4U chassis
(SC847E1-R1400LPB) and an external SAS2 JBOD (SC847E16-RJBOD1).
It makes a system with a total of 4 backplanes (2x SAS + 2x SAS2)
each of them connected to a 4 different HBA (2x LSI 3081E-R (1068
chip) + 2x LSI SAS9200-8e (2008 chip)).
This system is has a total of 81 disk (2x SAS (SEAGATE ST3146356SS)
+ 34 SATA3 (Hitachi HDS722020ALA330) + 45 SATA6 (Hitachi 
HDS723020BLA642))

 The issue arises when one of the disks starts to fail, making long
 accesses. After some time (minutes, but I'm not sure) all the disks
 connected to the same HBA start to report errors. This situation
 produces a general failure in ZFS, making the whole pool unavailable.
[...]


Have been there and gave up at the end[1]. Could reproduce (even though
it took a bit longer) under most Linux versions (incl. using latest LSI
drivers) and LSI 3081E-R HBA.

Is it just mpt causing the errors or also mpt_sas?

In a lab environment the LSI 9200 HBA behaved better - I/O only dropped
shortly and then continued on the other disks without generating errors.

 Had a lengthy Oracle case on this, but all the proposed workarounds did
 not work for me at all; these had been (some also from other forums):

- disabling NCQ
- allow-bus-device-reset=0; to /kernel/drv/sd.conf
- set zfs:zfs_vdev_max_pending=1
- set mpt:mpt_enable_msi=0
- keep usage below 90%
 - no FM services running, and temporarily did fmadm unload disk-transport
    or other disk-access stuff (smartd?)
 - tried changing retries/timeouts via sd.conf for the disks, without any
    success, and ended up doing it via mdb

 In the end I knew the bad sector of the bad disk, and by simply dd'ing
 this sector once or twice to /dev/zero I could easily bring down the
 system/pool without any load on the disk system.


General consensus from various people: don't use SATA drives on SAS 
back-

planes. Some SATA drives might work better, but there seems to be no
guarantee. And even for SAS-SAS, try to avoid SAS1 backplanes.

Markus



 [1] Search for "What's wrong with LSI 3081 (1068) + expander + (bad) SATA 
 disk?"

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs_arc_max values

2012-05-17 Thread Hung-Sheng Tsao Ph.D.


hi
it will depend on how ZFS is used in the system:
1) root zpool only?
2) DB files also?
3) Is this a RAC configuration?
regards

On 5/17/2012 3:19 PM, Paynter, Richard wrote:


I have a customer who wants to set zfs_arc_max to 1G for a 16G system, 
and 2G for a 32G system.  Both of these are SAP and/or Oracle db 
servers.  They apparently want to maximize the amount of memory 
available for the applications.


My question, now, is whether it would be advisable to do this, and -- 
if not -- what the impact might be.


The 32G system is an M3000; the 16G system is a V490.  Both are 
running Solaris 10 Update 8, patched to 147440-09.


I have a feeling that setting zfs_arc_max this low is not a good idea, 
but would like to get some corroborating information/references, if 
possible.


Thanks

Rick

*From:*zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of *Richard Elling

*Sent:* Thursday, May 17, 2012 11:24 AM
*To:* Paul Kraus
*Cc:* ZFS Discussions
*Subject:* Re: [zfs-discuss] zfs_arc_max values

On May 17, 2012, at 5:28 AM, Paul Kraus wrote:



On Wed, May 16, 2012 at 3:35 PM, Paynter, Richard
richard.payn...@infocrossing.com wrote:



Does anyone know what the minimum value for zfs_arc_max should be set to?

Does it depend on the amount of memory on the system, and -- if so
-- is there

a formula, or percentage, to use to determine what the minimum
value is?


   I would assume that it should not be set any lower than `kstat
zfs::arcstats:c_min`, which I assume you can set with zfs_arc_min in
/etc/system, but that does not really answer your question.

arc_c_max and arc_c_min are used for different things. If you make 
significant


changes there, then it is a good idea to set both zfs_arc_min, 
zfs_arc_max, and


zfs_arc_meta_limit as a set.

NB, the minimum size of these is 64MB.
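
(As an /etc/system sketch of setting them as a set -- the values below are 
purely illustrative, sized for a box where roughly a 1 GB ARC is wanted:

* illustrative ARC sizing only; adjust for the actual workload
set zfs:zfs_arc_min = 0x20000000
set zfs:zfs_arc_max = 0x40000000
set zfs:zfs_arc_meta_limit = 0x20000000

A reboot is needed for /etc/system changes to take effect.)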

 -- richard

--

ZFS Performance and Training

richard.ell...@richardelling.com

+1-760-896-4422







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs_arc_max values

2012-05-17 Thread Hung-Sheng Tsao Ph.D.

sorry,
what is the data storage?
I ASS-U-ME the root is a local HDD.
regards


On 5/17/2012 3:31 PM, Hung-Sheng Tsao Ph.D. wrote:


hi
it  will depend on how zfs is used in the system
1)root zpool only?
2)Db files also?
3)Is this RAC configuration?
regards

On 5/17/2012 3:19 PM, Paynter, Richard wrote:


I have a customer who wants to set zfs_arc_max to 1G for a 16G 
system, and 2G for a 32G system.  Both of these are SAP and/or Oracle 
db servers.  They apparently want to maximize the amount of memory 
available for the applications.


My question, now, is whether it would be advisable to do this, and – 
if not – what the impact might be.


The 32G system is an M3000; the 16G system is a V490.  Both are 
running Solaris 10 Update 8, patched to 147440-09.


I have a feeling that setting zfs_arc_max this low is not a good 
idea, but would like to get some corroborating 
information/references, if possible.


Thanks

Rick

*From:*zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of *Richard 
Elling

*Sent:* Thursday, May 17, 2012 11:24 AM
*To:* Paul Kraus
*Cc:* ZFS Discussions
*Subject:* Re: [zfs-discuss] zfs_arc_max values

On May 17, 2012, at 5:28 AM, Paul Kraus wrote:



On Wed, May 16, 2012 at 3:35 PM, Paynter, Richard
richard.payn...@infocrossing.com 
mailto:richard.payn...@infocrossing.com wrote:



Does anyone know what the minimum value for zfs_arc_max should be set to?

Does it depend on the amount of memory on the system, and – if so
– is there

a formula, or percentage, to use to determine what the minimum
value is?


   I would assume that it should not be set any lower than `kstat
zfs::arcstats:c_min`, which I assume you can set with zfs_arc_min in
/etc/system, but that does not really answer your question.

arc_c_max and arc_c_min are used for different things. If you make 
significant


changes there, then it is a good idea to set both zfs_arc_min, 
zfs_arc_max, and


zfs_arc_meta_limit as a set.

NB, the minimum size of these is 64MB.

 -- richard

--

ZFS Performance and Training

richard.ell...@richardelling.com 
mailto:richard.ell...@richardelling.com


+1-760-896-4422







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--



--


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and zpool for NetApp FC LUNs

2012-05-16 Thread Hung-sheng Tsao
IMHO,
just use the whole Veritas stack with VCS:
VxVM and VxFS.



Sent from my iPhone

On May 16, 2012, at 5:20 AM, Bruce McGill brucemcgill@gmail.com wrote:

 Hi All,
 
 I have FC LUNs from NetApp FAS 3240 mapped to two SPARC T4-2 clustered
 nodes running Veritas Cluster Server software. For now, the
 configuration on NetApp is as follows:
 
 /vol/EBUSApp/EBUSApp  100G
online   MBSUN04 : 0 MBSUN05 : 0
 /vol/EBUSBinry/EBUSBinry  200G
online   MBSUN04 : 1 MBSUN05 : 1
 /vol/EBUSCtrlog/EBUSCtrlog   5G
 online   MBSUN04 : 2 MBSUN05 : 2
 /vol/EBUSDB/EBUSDB
 300.0G online   MBSUN04 : 3 MBSUN05 : 3
 /vol/EBUSIO_FENC1/EBUSIO_FENC1  5G  online
 MBSUN04 : 4 MBSUN05 : 4
 /vol/EBUSIO_FENC2/EBUSIO_FENC2  5G  online
 MBSUN04 : 5 MBSUN05 : 5
 /vol/EBUSIO_FENC3/EBUSIO_FENC3  5G  online
 MBSUN04 : 6 MBSUN05 : 6
 /vol/EBUSRDlog/EBUSRDlog   5G
   online   MBSUN04 : 7 MBSUN05 : 7
 
 I will be mapping the above volumes on the hosts and creating the zpool and ZFS
 file system. More volumes will be created in the future.
 
 Is it certified by Symantec to use ZFS and Zpool for SnapMirror?
 
 At the moment, Veritas Cluster Server software is installed on 2
 servers at the primary site and there is no cluster software on the DR
 site. NetApp FAS 3240 provides FC LUNs to the 2 clustered nodes at the
 primary site and the same filer also provides different LUNs to the
 server at the DR site. The two clustered nodes at primary site are
 “mbsun4” and “mbsun5”. The server at the DR site is “mbsun6”.
 
 To list the steps that were carried out:
 
 On mbsun4:
 
 We create a ZFS pool from the LUN carved out of NetApp filer:
 
 # zpool create -f orapool c0t60A98000646E2F6F67346A79642D6570d0
 
 We then create the file sytem.
 
 # zfs create orapool/vol01
 
 We bring the mount under Veritas Cluster Server control:
 
 # zfs set mountpoint=legacy orapool/vol01
 
 # zpool export orapool
 
 We then execute the following commands on the other cluster node:
 
 On mbsun5:
 
 # zpool import orapool
 # zfs create orapool/vol01
 # zfs set mountpoint=legacy orapool/vol01
 
 Once configured under Veritas Cluster Server, the ZFS mount and Zpool
 will failover among clustered nodes.
 
 At the DR site (there is no cluster software), the server is called
 “mbsun6”, we execute the following commands to create a different
 Zpool and ZFS file system:
 
 # zpool create -f orapool c0t60A98000646E2F6F67346B3145653874d0
 # zfs create orapool/vol01
 # zfs set mountpoint=/vol01 orapool/vol01
 
 NetApp Snapmirror will be used to replicate the volumes from the
 primary to DR site and we want know if we can use Zpool and ZFS
 instead of the old UFS file system.
 
 My question is:
 
 Is it a good idea to use Zpool for these devices and then create ZFS
 file system or just use ZFS file system? Will replication through
 NetApp Snapmirror work when we use Zpools and ZFS?
 Is it certified by Symantec to use ZFS and Zpool for SnapMirror?
 If Zpools can be used, is it a good idea to create a single ZFS zpool
 and add all the devices or create multiple ZFS zpools and map each
 device to the individual zpool?
 
 
 Regards,
 Bruce
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-04 Thread Hung-Sheng Tsao Ph.D.


hi
S11 comes with its own drivers for some LSI SAS HBAs,
but on the HCL I only see:
LSI SAS 9200-8e
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi_logic/sol_11_11_11/9409.html 

LSI MegaRAID SAS 9260-8i 
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi/sol_10_10_09/3264.html 

LSI 6Gb SAS2008 daughtercard 
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi/sol_10_10_09/3263.html 


regards


On 5/4/2012 8:25 AM, Roman Matiyenko wrote:

Hi all,

I have a bad bad problem with our brand new server!

The lengthy details are below but to cut the story short, on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6gb HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and
only 200-240MB/s on latest Solaris 11.11 (same zpool config). By
writing directly to raw disks I found that in S10 the speed is 140MB/s
sequential writes per disk (consistent with combined 1.4GB/s for my
zpool) whereas only 24MB/s in Solaris 11 (consistent with 240MB/s
zpool, 10 mirrors 24MB/s each).

This must be the controller drivers, right? I downloaded drivers
version 4.7 off LSI site (says for Solaris 10 and later) - they
failed to attach on S11. Version 3.03 worked but the system would
 randomly crash, so I moved my experiments off S11 to S10. However, S10
 has only the old implementation of iSCSI, which gives me other problems,
 so I decided to give S11 another go.

Would there be any advice in this community?

Many thanks!

Roman

==


root@carbon:~# echo | format | grep Hitachi
   1. c5t8d1ATA-Hitachi HUA72303-A5C0-2.73TB
   2. c5t9d1ATA-Hitachi HUA72303-A5C0-2.73TB
   3. c5t10d1ATA-Hitachi HUA72303-A5C0-2.73TB
   4. c5t11d1ATA-Hitachi HUA72303-A5C0-2.73TB
   5. c5t13d1ATA-Hitachi HUA72303-A5C0-2.73TB
   6. c5t14d1ATA-Hitachi HUA72303-A5C0-2.73TB
   7. c5t15d1ATA-Hitachi HUA72303-A5C0-2.73TB
   9. c6t9d1ATA-Hitachi HUA72303-A5C0-2.73TB
  10. c6t10d1ATA-Hitachi HUA72303-A5C0-2.73TB
  11. c6t11d1ATA-Hitachi HUA72303-A5C0-2.73TB
  12. c6t13d1ATA-Hitachi HUA72303-A5C0-2.73TB
  13. c6t14d1ATA-Hitachi HUA72303-A5C0-2.73TB
  14. c6t15d1ATA-Hitachi HUA72303-A5C0-2.73TB
  15. c7t8d1ATA-Hitachi HUA72303-A5C0-2.73TB
  17. c7t10d1ATA-Hitachi HUA72303-A5C0-2.73TB
  18. c7t11d1ATA-Hitachi HUA72303-A5C0-2.73TB
  19. c7t12d1ATA-Hitachi HUA72303-A5C0-2.73TB
  20. c7t13d1ATA-Hitachi HUA72303-A5C0-2.73TB
  21. c7t14d1ATA-Hitachi HUA72303-A5C0-2.73TB
  22. c7t15d1ATA-Hitachi HUA72303-A5C0-2.73TB



Reading DD from all disks:
(dd of=/dev/null bs=1024kb if=/dev/rdsk/c7t9d1)

# Iostat –xznM 2

extended device statistics
r/sw/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  614.50.0  153.60.0  0.0  1.00.01.6   0  98 c5t8d1
  595.50.0  148.90.0  0.0  1.00.01.7   0  99 c7t8d1
  1566.50.0  391.60.0  0.0  1.00.00.6   1  96 c6t8d1 # (SSD)
  618.50.0  154.60.0  0.0  1.00.01.6   0  99 c6t9d1
  616.50.0  154.10.0  0.0  1.00.01.6   0  99 c5t9d1
  1564.00.0  391.00.0  0.0  1.00.00.6   1  96 c7t9d1# (SSD)
  616.00.0  154.00.0  0.0  1.00.01.6   0  98 c7t10d1
  554.00.0  138.50.0  0.0  1.00.01.8   0  99 c6t10d1
  598.50.0  149.60.0  0.0  1.00.01.7   0  99 c5t10d1
  588.50.0  147.10.0  0.0  1.00.01.7   0  98 c6t11d1
  590.50.0  147.60.0  0.0  1.00.01.7   0  98 c7t11d1
  591.50.0  147.90.0  0.0  1.00.01.7   0  99 c5t11d1
  600.50.0  150.10.0  0.0  1.00.01.6   0  98 c6t13d1
  617.50.0  154.40.0  0.0  1.00.01.6   0  99 c7t12d1
  611.00.0  152.80.0  0.0  1.00.01.6   0  99 c5t13d1
  625.00.0  156.30.0  0.0  1.00.01.6   0  99 c6t14d1
  592.50.0  148.10.0  0.0  1.00.01.7   0  99 c7t13d1
  596.00.0  149.00.0  0.0  1.00.01.7   0  99 c5t14d1
  598.50.0  149.60.0  0.0  1.00.01.6   0  98 c6t15d1
  618.50.0  154.60.0  0.0  1.00.01.6   0  98 c7t14d1
  606.50.0  151.60.0  0.0  1.00.01.6   0  98 c5t15d1
  625.00.0  156.30.0  0.0  1.00.01.6   0  98 c7t15d1
extended device statistics
r/sw/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
  620.50.0  155.10.0  0.0  1.00.01.6   0  99 c5t8d1
  620.50.0  155.10.0  0.0  1.00.01.6   0  99 c7t8d1
  1581.00.0  395.20.0  0.0  1.00.00.6   1  96 c6t8d1
  611.50.0  152.90.0  0.0  1.00.01.6   0  99 c6t9d1
  587.50.0  146.90.0  0.0  1.00.01.7   0  99 c5t9d1
  1580.00.0  395.00.0  0.0  1.00.00.6   1  97 c7t9d1
  593.00.0  148.20.0  0.0  1.00.01.7   0  99 c7t10d1
  616.00.0  154.00.0  0.0  1.00.01.6   0  99 c6t10d1
  

Re: [zfs-discuss] Replacing root pool disk

2012-04-12 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

hi
by default the disk partition s2 cover the whole disk
this is fine for ufs  for LONG time.
Now zfs  does not like this overlap so you just need to run format then 
delete s2

or use s2 and delete all other partitions
(by default  when you run format/fdisk it create s2 whole disk and s7 
for boot, so there is also overlap between s2 and s7:-()


ZFS root need to fix this problem that require partition and not the 
whole disk (without s?)

my 2c
regards
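
A rough way to reset such a disk completely on x86 and start over -- the 
device name is the one from this thread, and the operation is destructive, 
so double-check it:

# fdisk -B /dev/rdsk/c2t5000CCA369C89636d0p0
# format c2t5000CCA369C89636d0

fdisk -B recreates a single Solaris fdisk partition spanning the whole disk; 
format is then used to put an SMI label on it and size s0 for the root pool. 
After that, zpool attach/replace with the s0 slice should no longer complain 
about overlaps.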


On 4/12/2012 1:35 PM, Peter Wood wrote:

Hi,

I was following the instructions in ZFS Troubleshooting Guide on how 
to replace a disk in the root pool on x86 system. I'm using 
OpenIndiana, ZFS pool v.28 with mirrored system rpool. The replacement 
disk is brand new.


root:~# zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are 
unaffected.

action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: resilvered 17.6M in 0h0m with 0 errors on Wed Apr 11 17:45:16 2012
config:

NAME STATE READ WRITE CKSUM
rpoolDEGRADED 0 0 0
  mirror-0   DEGRADED 0 0 0
c2t5000CCA369C55DB8d0s0  OFFLINE  0   126 0
c2t5000CCA369D5231Cd0s0  ONLINE   0 0 0

errors: No known data errors
root:~#

 I'm not very familiar with Solaris partitions and slices, so somewhere 
 in the format/partition commands I must have made a mistake, because 
 when I try to replace the disk I'm getting the following error:


root:~# zpool replace rpool c2t5000CCA369C55DB8d0s0 
c2t5000CCA369C89636d0s0

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2t5000CCA369C89636d0s0 overlaps with 
/dev/dsk/c2t5000CCA369C89636d0s2

root:~#

I used -f and it worked but I was wondering is there a way to 
completely reset the new disk? Remove all partitions and start from 
scratch.


Thank you
Peter


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] webserver zfs root lock contention under heavy load

2012-03-27 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

hi
You did not answer the questions: what is the RAM of the server? How many 
sockets and cores, etc.?

What is the ZFS block size (recordsize)?
What is the cache RAM of your SAN array?
What is the block/stripe size of the RAID in the SAN array? RAID 5 or 
what?

What is your test program, and how is it driven (from what kind of client)?
regards
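
Also, since the subject mentions lock contention: a quick generic check of 
which kernel locks are actually hot while the load is running (a sketch; the 
numbers are arbitrary):

# lockstat -D 15 sleep 30

That should at least show whether the kernel time is spent spinning on ZFS 
locks or somewhere else.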



On 3/26/2012 11:13 PM, Aubrey Li wrote:

On Tue, Mar 27, 2012 at 1:15 AM, Jim Klimovj...@cos.ru  wrote:

Well, as a further attempt down this road, is it possible for you to rule
out
ZFS from swapping - i.e. if RAM amounts permit, disable the swap at all
(swap -d /dev/zvol/dsk/rpool/swap) or relocate it to dedicated slices of
same or better yet separate disks?


Thanks Jim for your suggestion!



If you do have lots of swapping activity (that can be seen in vmstat 1 as
si/so columns) going on in a zvol, you're likely to get much fragmentation
in the pool, and searching for contiguous stretches of space can become
tricky (and time-consuming), or larger writes can get broken down into
many smaller random writes and/or gang blocks, which is also slower.
At least such waiting on disks can explain the overall large kernel times.

I took swapping activity into account, even when the CPU% is 100%, si
(swap-ins) and so (swap-outs) are always ZEROs.


You can also see the disk wait times ratio in iostat -xzn 1 column %w
and disk busy times ratio in %b (second and third from the right).
I dont't remember you posting that.

If these are accounting in tens, or even close or equal to 100%, then
your disks are the actual bottleneck. Speeding up that subsystem,
including addition of cache (ARC RAM, L2ARC SSD, maybe ZIL
SSD/DDRDrive) and combatting fragmentation by moving swap and
other scratch spaces to dedicated pools or raw slices might help.

My storage system is not quite busy, and there are only read operations.
=
# iostat -xnz 3
 extended device statistics
 r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   112.40.0 1691.50.0  0.0  0.50.04.8   0  41 c11t0d0
 extended device statistics
 r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   118.70.0 1867.00.0  0.0  0.50.04.5   0  42 c11t0d0
 extended device statistics
 r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   127.70.0 2121.60.0  0.0  0.60.04.7   0  44 c11t0d0
 extended device statistics
 r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   141.30.0 2158.50.0  0.0  0.70.04.6   0  48 c11t0d0
==

Thanks,
-Aubrey
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-23 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well,
check this link:

https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2server&p2=&p3=&p4=&sc=ocom_x86_SunFireX4270M2server&tz=-4:00

you may not like the price, though



Sent from my iPad

On Mar 23, 2012, at 17:16, The Honorable Senator and Mrs. John 
Blutarskybl...@nymph.paranoici.org wrote:

 On Fri Mar 23 at 10:06:12 2012 laot...@gmail.com wrote:
 
 well,
 use the components of the X4170 M2 as an example and you will be OK:
 Intel CPU,
 LSI SAS controller (non-RAID),
 SAS 7200 rpm HDDs.
 my 2c
 
 That sounds too vague to be useful unless I could afford an X4170M2. I
 can't build a custom box and I don't have the resources to go over the parts
 list and order something with the same components. Thanks though.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-22 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well,
use the components of the X4170 M2 as an example and you will be OK:
Intel CPU,
LSI SAS controller (non-RAID),
SAS 7200 rpm HDDs.
my 2c

Sent from my iPad

On Mar 22, 2012, at 14:41, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Thu, 22 Mar 2012, The Honorable Senator and Mrs. John Blutarsky wrote:
 
 This will be a do-everything machine. I will use it for development, hosting
 various apps in zones (web, file server, mail server etc.) and running other
 systems (like a Solaris 11 test system) in VirtualBox. Ultimately I would
 like to put it under Solaris support so I am looking for something
 officially approved. The problem is there are so many systems on the HCL I
 don't know where to begin. One of the Supermicro super workstations looks
 
 Almost all of the systems listed on the HCL are defunct and no longer 
 purchasable except for on the used market.  Obtaining an approved system 
 seems very difficult. In spite of this, Solaris runs very well on many 
 non-approved modern systems.
 
 I don't know what that means as far as the ability to purchase Solaris 
 support.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
are the disks/SAS controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:

 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the old 
 one I'm attempting to import it on the new server. Both servers are running 
 OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
 config:
 
NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 ST3000DM- W1F07HW-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 ST3000DM- W1F05H2-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 ST3000DM- W1F032R-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 ST3000DM- W1F07HZ-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
 
 (c5d1 was previously used as a hot spare, but I removed it as an attempt to 
 export and import the zpool without the spare)
 
 # zpool export storage
 
 # zpool list
 (shows only rpool)
 
 # zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE
 
 (check to see if it is importable to the old server, this has also been 
 verified since I moved back the disks to the old server yesterday to have it 
 available during the night)
 
 zdb -l output in attached files.
 
 ---
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE
 
 The problem is that all the disks are there and online, but the pool is 
 showing up as unavailable.
 
 Any ideas on what I can do more in order to solve this problem ?
 
 Regards,
  PeO
 
 
 
 zdb_l_c4d0s0.txt
 zdb_l_c4d1s0.txt
 zdb_l_c5d0s0.txt
 zdb_l_c5d1s0.txt
 zdb_l_c7t5000C50044A30193d0s0.txt
 zdb_l_c7t5000C50044E0F316d0s0.txt
 zdb_l_c7t5000C50044760F6Ed0s0.txt
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-sheng Tsao
IMHO,
ZFS is smart, but not smart enough when you deal with two different controllers.


Sent from my iPhone

On Mar 13, 2012, at 3:32 PM, P-O Yliniemi p...@bsd-guide.net wrote:

 Jim Klimov wrote, 2012-03-13 15:24:
 2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:
 hi
 are the disk/sas controller the same on both server?
 
 Seemingly no. I don't see the output of format on Server2,
 but for Server1 I see that the 3TB disks are used as IDE
 devices (probably with motherboard SATA-IDE emulation?)
 while on Server2 addressing goes like SAS with WWN names.
 
 Correct, the servers are all different.
 Server1 is a HP xw8400, and the disks are connected to the first four SATA 
 ports (the xw8400 has both SAS and SATA ports, of which I use the SAS ports 
 for the system disks).
 On Server2, the disk controller used for the data disks is a LSI SAS 9211-8i, 
 updated with the latest IT-mode firmware (also tested with the original 
 IR-mode firmware)
 
 The output of the 'format' command on Server2 is:
 
 AVAILABLE DISK SELECTIONS:
   0. c2t0d0 ATA-OCZ-VERTEX3-2.11-55.90GB
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
   1. c2t1d0 ATA-OCZ-VERTEX3-2.11-55.90GB
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
   2. c3d1 Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c4d0 Unknown-Unknown-0001 cyl 38910 alt 2 hd 255 sec 63
  /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
   4. c7t5000C5003F45CCF4d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c5003f45ccf4
   5. c7t5000C50044E0F0C6d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c50044e0f0c6
   6. c7t5000C50044E0F611d0 ATA-ST3000DM001-9YN1-CC46-2.73TB
  /scsi_vhci/disk@g5000c50044e0f611
 
 Note that this is what it looks like now, not at the time I sent the 
 question. The difference is that I have set up three other disks (items 4-6) 
 on the new server, and are currently transferring the contents from Server1 
 to this one using zfs send/receive.
 
 I will probably be able to reconnect the correct disks to the Server2 
 tomorrow when the data has been transferred to the new disks (problem 
 'solved' at that moment), if there is anything else that I can do to try to 
 solve it the 'right' way.
 
 It may be possible that on one controller disks are used
 natively while on another they are attached as a JBOD
 or a set of RAID0 disks (so the controller's logic or its
 expected layout intervenes), as recently discussed on-list?
 
 On the HP, on a reboot, I was reminded that the 3TB disks were displayed as 
 800GB-something by the BIOS (although correctly identified by OpenIndiana and 
 ZFS). This could be a part of the problem with the ability to export/import 
 the pool.
 
 On Mar 13, 2012, at 6:10, P-O Yliniemip...@bsd-guide.net  wrote:
 
 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the 
 old one I'm attempting to import it on the new server. Both servers are 
 running OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 ST3000DM- W1F07HW-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 ST3000DM- W1F05H2-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 ST3000DM- W1F032R-0001-2.73TB
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE
 
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-08 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
just note that you can have a different zpool name but keep the same old 
mount point for the export purpose
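
for example, a minimal sketch (assuming the new raidz pool is named export2 
and the old mount point was /export):

  zpool import export2
  zfs set mountpoint=/export export2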

-LT


On 3/8/2012 8:40 AM, Paul Kraus wrote:

Lots of suggestions (not included here), but ...

 With the exception of Cindy's suggestion of using 4 disks and
mirroring (zpool attach two new disks to existing vdevs), I would
absolutely NOT do anything unless I had a known good backup of the
data! I have seen too many cases described here on this list of people
trying complicated procedures with ZFS and making one small mistake
and loosing their data, or spending weeks or months trying to recover
it.

 Regarding IMPORT / EXPORT, these functions are have two real
purposes in my mind:

1. you want to move a zpool from one host to another. You EXPORT from
the first host, physically move the disks, then IMPORT on the new
host.

2. You want (or need) to physically change the connectivity between
the disks and the host, and implicit in that is that the device paths
will change. EXPORT, change connectivity, IMPORT. Once again I have
seen many cases described on this list of folks who moved disks
around, which ZFS is _supposed_ to handle, but then had a problem.
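
     For example, case 1 is just the following (the pool name tank is
hypothetical):

   # on the old host
   zpool export tank
   # physically move the disks, then on the new host
   zpool import          # lists pools found on the attached disks
   zpool import tank     # import by name (or by the numeric pool id)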

 I use ZFS first for reliability and second for performance. With
that in mind, one of my primary rules for ZFS is to NOT move disks
around without first exporting the zpool. I have done some pretty rude
things regarding devices underlying vdev disappearing and then much
later reappearing (mostly in test, but occasionally in production),
and I have yet to lose any data, BUT none of the devices changed path
in the process.

On Wed, Mar 7, 2012 at 4:38 PM, Bob Doolittlebob.doolit...@oracle.com  wrote:

Hi,

I had a single-disk zpool (export) and was given two new disks for expanded
storage. All three disks are identically sized, no slices/partitions. My
goal is to create a raidz1 configuration of the three disks, containing the
data in the original zpool.

However, I got off on the wrong foot by doing a zpool add of the first
disk. Apparently this has simply increased my storage without creating a
raidz config.

Unfortunately, there appears to be no simple way to just remove that disk
now and do a proper raidz create of the other two. Nor am I clear on how
import/export works and whether that's a good way to copy content from one
zpool to another on a single host.

Can somebody guide me? What's the easiest way out of this mess, so that I
can move from what is now a simple two-disk zpool (less than 50% full) to a
three-disk raidz configuration, starting with one unused disk? In the end I
want the three-disk raidz to have the same name (and mount point) as the
original zpool. There must be an easy way to do this.


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

IMHO, there is no easy way out for you:
1) tape backup and restore
2) find a larger USB/SATA disk, copy the data over, then restore it later 
after the raidz1 is set up

-LT


On 3/7/2012 4:38 PM, Bob Doolittle wrote:

Hi,

I had a single-disk zpool (export) and was given two new disks for 
expanded storage. All three disks are identically sized, no 
slices/partitions. My goal is to create a raidz1 configuration of the 
three disks, containing the data in the original zpool.


However, I got off on the wrong foot by doing a zpool add of the 
first disk. Apparently this has simply increased my storage without 
creating a raidz config.


Unfortunately, there appears to be no simple way to just remove that 
disk now and do a proper raidz create of the other two. Nor am I clear 
on how import/export works and whether that's a good way to copy 
content from one zpool to another on a single host.


Can somebody guide me? What's the easiest way out of this mess, so 
that I can move from what is now a simple two-disk zpool (less than 
50% full) to a three-disk raidz configuration, starting with one 
unused disk? In the end I want the three-disk raidz to have the same 
name (and mount point) as the original zpool. There must be an easy 
way to do this.


Thanks for any assistance.

-Bob

P.S. I would appreciate being kept on the CC list for this thread to 
avoid digest mailing delays.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
read the link please
it seems that after you create the raidz1 zpool
you need to take the fake disk offline (destroy the file) so it does not have to hold data when you do 
the copy
copy the data by following the steps in the link

then replace the fake disk with the real disk

this is a good approach that i did not know before
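
for reference, a rough sketch of that fake-device trick (file path, size and 
disk names here are just examples):

  mkfile -n 3000g /var/tmp/fakedisk               # sparse file, same size as the real disks
  zpool create -f newpool raidz c0t1d0 c0t2d0 /var/tmp/fakedisk   # -f because a file is mixed with disks
  zpool offline newpool /var/tmp/fakedisk         # run degraded so nothing lands in the file
  # ... copy the data in with zfs send | zfs receive ...
  zpool replace newpool /var/tmp/fakedisk c0t3d0  # swap in the real disk once it is free
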
-LT

Sent from my iPad

On Mar 7, 2012, at 17:48, Bob Doolittle bob.doolit...@oracle.com wrote:

 Wait, I'm not following the last few steps you suggest. Comments inline:
 
 On 03/07/12 17:03, Fajar A. Nugraha wrote:
 - use the one new disk to create a temporary pool
 - copy the data (zfs snapshot -r + zfs send -R | zfs receive)
 - destroy old pool
 - create a three-disk raidz pool using two disks and a fake device,
 something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
 
 Don't I need to copy the data back from the temporary pool to the new raidz 
 pool at this point?
 I'm not understanding the process beyond this point, can you clarify please?
 
 - destroy the temporary pool
 
 So this leaves the data intact on the disk?
 
 - replace the fake device with now-free disk
 
 So this replicates the data on the previously-free disk across the raidz pool?
 
 What's the point of the following export/import steps? Renaming? Why can't I 
 just give the old pool name to the raidz pool when I create it?
 
 - export the new pool
 - import the new pool and rename it in the process: zpool import
 temp_pool_name old_pool_name
 
 Thanks!
 
 -Bob
 
 
 
 In the end I
 want the three-disk raidz to have the same name (and mount point) as the
 original zpool. There must be an easy way to do this.
 Nope.
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Hung-Sheng Tsao (laoTsao)
my 2c
1 just do mirrors of 2 devices across the 20 hdd, with 1 spare
2 raidz2 with 5 devices per vdev for the 20 hdd, with one spare
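
roughly like this, a small sketch with hypothetical device names (extend the 
pattern to all 20 disks):

  # option 1: striped mirrors plus a hot spare
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 spare c0t4d0

  # option 2: raidz2 vdevs of 5 disks each plus a hot spare
  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
                    raidz2 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 spare c0t10d0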

Sent from my iPad

On Feb 1, 2012, at 3:49, Thomas Nau thomas@uni-ulm.de wrote:

 Hi
 
 On 01/31/2012 10:05 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
 what is your main application for ZFS? e.g. just NFS or iSCSI  for home dir 
 or VM? or Window client?
 
 Yes, fileservice only using CIFS, NFS, Samba and maybe iSCSI
 
 Is performance important? or space is more important?
 
 a good balance ;)
 
 what is the memory of your server?
 
 96G
 
 do you want to use ZIL or L2ARC?
 
 ZEUS STECRAM as ZIL (mirrored); maybe SSDs and L2ARC
 
 what is your backup  or DR plan?
 
 continuous rolling snapshot plus send/receive to remote site
 TSM backup at least once a week to tape; depends on how much
 time the TSM client needs to walk the filesystems
 
 You need to answer all these question first
 
 did so
 
 Thomas
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need hint on pool setup

2012-01-31 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
what is your main application for ZFS? e.g. just NFS or iSCSI  for home 
dir or VM? or Windows clients?

Is performance important? or space is more important?
what is the memory of your server?
do you want to use ZIL or L2ARC?
what is your backup  or DR plan?
You need to answer all these question first
my 2c

On 1/31/2012 3:44 PM, Thomas Nau wrote:

Dear all
We have two JBODs with 20 or 21 drives available per JBOD hooked up
to a server. We are considering the following setups:

RAIDZ2 made of 4 drives
RAIDZ2 made of 6 drives

The first option wastes more disk space but can survive a JBOD failure
whereas the second is more space effective but the system goes down when
a JBOD goes down. Each of the JBOD comes with dual controllers, redundant
fans and power supplies so do I need to be paranoid and use option #1?
Of course it also gives us more IOPs but high end logging devices should take
care of that

Thanks for any hint
Thomas
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Hung-Sheng Tsao (laoTsao)
which server do you attach to the D2700?
the hp spec for the d2700 does not list solaris, so not sure how you get support 
from hp:-(

Sent from my iPad

On Jan 31, 2012, at 20:25, Ragnar Sundblad ra...@csc.kth.se wrote:

 
 Just to follow up on this, in case there are others interested:
 
 The D2700s seems to work quite ok for us. We have four issues with them,
 all of which we will ignore for now:
 - They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
 - It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
 - The LEDs on the disks: On the first bay it is turned off, on the rest
  it is turned on. They all flash at activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
 - In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.
 
 (We may buy a HP raid card just to be able to upgrade their firmware.)
 
 If we have had the time we probably would have tested some other jbods
 too, but we need to get those rolling soon, and these seem good enough.
 
 We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
 HBA and connecting the two ports on the HBA to the two controllers in the
 D2700.
 
 To get multipathing, you need to configure the scsi_vhci driver, in
 /kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
 sol11-x86. To get better performance, you probably want to use
 load-balance=logical-block instead of load-balance=round-robin.
 See examples below.
 
 You may also need to run stmsboot -e to enable multipathing. I still haven't
 figured out what that does (more than updating /etc/vfstab and /etc/dumpdates
  which you typically don't use with zfs), maybe nothing.
 
 Thanks to all that have helped with input!
 
 /ragge
 
 
 -
 
 
 For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
 ###
 ...
 device-type-scsi-options-list =
  HP  D2700 SAS AJ941A, symmetric-option,
  HP  EG, symmetric-option;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 symmetric-option = 0x100;
 
 device-type-mpxio-options-list =
  device-type=HP  D2700 SAS AJ941A, 
 load-balance-options=logical-block-options,
  device-type=HP  EG, load-balance-options=logical-block-options;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 logical-block-options=load-balance=logical-block, region-size=20;
 ...
 ###
 
 
 For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
 (in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
 ###
 ...
 #load-balance=round-robin;
 load-balance=logical-block;
 region-size=20;
 ...
 scsi-vhci-failover-override =
   HP  D2700 SAS AJ941A, f_sym,
   HP  EG,   f_sym;
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 ###
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and iscsi performance help

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
hi
IMHO, upgrade to s11 if possible
use the COMSTAR based iSCSI target
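
for example, a minimal sketch of serving a zvol over COMSTAR (pool/volume 
names are hypothetical, and the view/GUID step is abbreviated):

  zfs create -V 100g tank/lun0
  stmfadm create-lu /dev/zvol/rdsk/tank/lun0
  stmfadm add-view <lu-guid-printed-by-create-lu>
  svcadm enable -r svc:/network/iscsi/target:default
  itadm create-target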

Sent from my iPad

On Jan 26, 2012, at 23:25, Ivan Rodriguez ivan...@gmail.com wrote:

 Dear fellows,
 
 We have a backup server with a zpool size of 20 TB, we transfer
 information using zfs snapshots every day (we have around 300 fs on
 that pool),
 the storage is a dell md3000i connected by iscsi, the pool is
 currently version 10, the same storage is connected
 to another server with a smaller pool of 3 TB(zpool version 10) this
 server is working fine and speed is good between the storage
 and the server, however  in the server with 20 TB pool performance is
 an issue  after we restart the server
 performance is good but with the time lets say a week the performance
 keeps dropping until we have to
 bounce the server again (same behavior with new version of solaris in
 this case performance drops in 2 days), no errors in logs or storage
 or the zpool status -v
 
 We suspect that the pool has some issues probably there is corruption
 somewhere, we tested solaris 10 8/11 with zpool 29,
 although we haven't update the pool itself, with the new solaris the
 performance is even worst and every time
 that we restart the server we get stuff like this:
 
 SOURCE: zfs-diagnosis, REV: 1.0
 EVENT-ID: 0168621d-3f61-c1fc-bc73-c50efaa836f4
 DESC: All faults associated with an event id have been addressed.
 Refer to http://sun.com/msg/FMD-8000-4M for more information.
 AUTO-RESPONSE: Some system components offlined because of the
 original fault may have been brought back online.
 IMPACT: Performance degradation of the system due to the original
 fault may have been recovered.
 REC-ACTION: Use fmdump -v -u EVENT-ID to identify the repaired components.
 [ID 377184 daemon.notice] SUNW-MSG-ID: FMD-8000-6U, TYPE: Resolved,
 VER: 1, SEVERITY: Minor
 
 And we need to export and import the pool in order to be  able to  access it.
 
 Now my question is do you guys know if we upgrade the pool does this
 process  fix some issues in the metadata of the pool ?
 We've been holding back the upgrade because we know that after the
 upgrade there is no way to return to version 10.
 
 Does anybody has experienced corruption in the pool without a hardware
 failure ?
 Is there any tools or procedures to find corruption on the pool or
 File systems inside the pool ? (besides scrub)
 
 So far we went through the connections cables, ports and controllers
 between the storage and the server everything seems fine, we've
 swapped network interfaces, cables, switch ports etc etc.
 
 
 Any ideas would be really appreciate it.
 
 Cheers
 Ivan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
it seems that you will need to work  with oracle support:-(

Sent from my iPad

On Jan 27, 2012, at 3:49, sureshkumar sachinnsur...@gmail.com wrote:

 Hi Christian ,
 
 I was disabled the MPXIO   zpool clear is working for some times  its 
 failed in few iterations.
 
 I am using one Sparc machine [with the same OS level of x-86 ] I didn't 
 faced any issue with the Sparc architecture.
 Is it was the problem with booting sequence?
 
 Please help me.
 
 Thanks,
 Sudheer.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-24 Thread Hung-Sheng Tsao (laoTsao)
how did you issue  reboot, try 
shutdown -i6  -y -g0 

Sent from my iPad

On Jan 24, 2012, at 7:03, sureshkumar sachinnsur...@gmail.com wrote:

 Hi all,
 
 
 I am new to Solaris  I am facing an issue with the dynapath [multipath s/w] 
 for Solaris10u10 x86 .
 
 I am facing an issue with the zpool.
 
 Whats my problem is unable to access the zpool after issue a reboot.
 
 I am pasting the zpool status below.
 
 ==
 bash-3.2# zpool status
   pool: test
  state: UNAVAIL
  status: One or more devices could not be opened.  There are insufficient
 replicas for the pool to continue functioning.
  action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
  config:
 
 NAME STATE READ WRITE CKSUM
 test UNAVAIL  0 0 0  insufficient 
 replicas
 = 
 
 But all my devices are online  I am able to access them.
 when I export  import the zpool , the zpool comes to back to available state.
 
 I am not getting whats the problem with the reboot.
 
 Any suggestions regarding this was very helpful.
 
 Thanks Regards,
 Sudheer.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] patching a solaris server with zones on zfs file systems

2012-01-21 Thread Hung-Sheng Tsao (laoTsao)
which version of solaris?
s10u?: use live upgrade - take a zfs snapshot, halt the zones, back them up, zoneadm detach each zone, 
then zoneadm attach -U each zone after the os upgrade (done via the zfs snapshot plus live upgrade, or just 
an upgrade from dvd)
or s11?: beadm for the new root BE, upgrade the os, treat the zones as above
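
roughly, per zone (the zone name is hypothetical):

  zoneadm -z myzone halt
  zoneadm -z myzone detach
  # patch / live upgrade the global zone and boot the new BE
  zoneadm -z myzone attach -U
  zoneadm -z myzone boot
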
regards



Sent from my iPad

On Jan 21, 2012, at 5:46, bhanu prakash bhanu.sys...@gmail.com wrote:

 Hi All,
 
 Please let me know the procedure how to patch a server which is having 5 
 zones on zfs file systems.
 
 Root file system exists on internal disk and zones are existed on SAN.
 
 Thank you all,
 Bhanu
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Do the disks in a zpool have a private region that I can read to get a zpool name or id?

2012-01-12 Thread Hung-Sheng Tsao (laoTsao)
if the disks are assigned to two hosts,
maybe just running zpool import on the other host should show the zpool? not sure

as for an AIX-controlled hdd: a zpool will need a partition/label that solaris can understand;
i do not know what AIX uses for partitioning, but it should not be the same as for solaris
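
on the private region question: yes, zfs keeps labels on each vdev, and you can 
read the pool name and guid straight off the shared LUN, e.g. (device name is 
hypothetical):

  zdb -l /dev/dsk/c1t1d0s0 | egrep 'name|guid|host'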



Sent from my iPad

On Jan 12, 2012, at 9:51, adele@oracle.com adele@oracle.com wrote:

 Hi all,
 
 My cu has following question.
 
 
 Assume I have allocated a LUN from external storage to two hosts ( by mistake 
 ). I create a zpool with this LUN on host1 with no errors. On host2 when I 
 try to create a zpool by
 using the same disk ( which is allocated to host2 as well ), zpool create - 
 comes back with an error saying   /dev/dsk/cXtXdX is part of exported or 
 potentially active ZFS pool test.
 Is there a way for me to check what zpool disk belongs to from 'host2'. Do 
 the disks in a zpool have a private region that I can read to get a zpool 
 name or id?
 
 
 Steps required to reproduce the problem
 
 Disk doubly allocated to host1, host2
 host1 sees disk as disk100
 host2 sees disk as disk101
 host1# zpool create host1_pool disk1 disk2 disk100
 returns success ( as expected )
 host2# zpool create host2_pool disk1 disk2 disk101
 invalid dev specification
 use '-f' to overrite the following errors:
 /dev/dsk/disk101 is part of exported or potentially active ZFS pool test. 
 Please see zpool
 
 zpool did catch that the disk is part of an active pool, but since it's not 
 on the same host I am not getting the name of the pool to which disk101 is 
 allocated. It's possible we might go ahead and use '-f' option to create the 
 zpool and start using this filesystem. By doing this we're potentially 
 destroying filesystems on host1, host2 which could lead to severe downtime.
 
 Any way to get the pool name to which the disk101 is assigned ( with a 
 different name on a different host )? This would aid us tremendously in 
 avoiding a potential issue. This has happened with Solaris 9, UFS once before 
 taking out two Solaris machines.
 
 What happens if diskis assigned to a AIX box and is setup as part of a Volume 
 manager on AIX and we try to create 'zpool' on Solaris host. Will ZFS catch 
 this, by saying something wrong with the disk?
 
 Regards,
 Adele
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-11 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph. D.



On 1/10/2012 9:44 PM, Ray Van Dolson wrote:

On Tue, Jan 10, 2012 at 06:23:50PM -0800, Hung-Sheng Tsao (laoTsao) wrote:

how is the ram size what is the zpool setup and what is your hba and
hdd size and type

Hmm, actually this system has only 6GB of memory.  For some reason I
though it had more.

IMHO,  you will need more RAM
did you cap the ARC in /etc/system?
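
for example, a line like this in /etc/system caps the ARC (the 2 GB value here 
is just an illustration):

  * cap the ZFS ARC at 2 GB
  set zfs:zfs_arc_max = 0x80000000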



The controller is an LSISAS2008 (which oddly enough dose not seem to be
recognized by lsiutil).

There are 23x1TB disks (SATA interface, not SAS unfortunately) in the
system.  Three RAIDZ2 vdevs of seven disks each and one spare comprises
a single zpool with two zfs file systems mounted (no deduplication or
compression in use).

There are two internally mounted Intel X-25E's -- these double as the
rootpool and ZIL devices.

There is an 80GB X-25M mounted to the expander along with the 1TB
drives operating as L2ARC.


On Jan 10, 2012, at 21:07, Ray Van Dolsonrvandol...@esri.com  wrote:


Hi all;

We have a Solaris 10 U9 x86 instance running on Silicon Mechanics /
SuperMicro hardware.

Occasionally under high load (ZFS scrub for example), the box becomes
non-responsive (it continues to respond to ping but nothing else works
-- not even the local console).  Our only solution is to hard reset
after which everything comes up normally.

Logs are showing the following:

  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
extra SGL.
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2Unable to allocate dma memory for 
extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:11 prodsys-dmz-zfs2 rpcmod: [ID 851375 kern.warning] WARNING: 
svc_cots_kdup no slots free

I am able to resolve the last error by adjusting upwards the duplicate
request cache sizes, but have been unable to find anything on the MPT
SGL errors.

Anyone have any thoughts on what this error might be?

At this point, we are simply going to apply patches to this box (we do
see an outstanding mpt patch):

147150 --  01 R-- 124 SunOS 5.10_x86: mpt_sas patch
147702 --  03 R--  21 SunOS 5.10_x86: mpt patch

But we have another identically configured box at the same patch level
(admittedly with slightly less workload, though it also undergoes
monthly zfs scrubs) which does not experience this issue.

Ray

Thanks,
Ray
attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-10 Thread Hung-Sheng Tsao (laoTsao)
how big is the ram?
what is the zpool setup, and what are your hba and hdd size and type?


Sent from my iPad

On Jan 10, 2012, at 21:07, Ray Van Dolson rvandol...@esri.com wrote:

 Hi all;
 
 We have a Solaris 10 U9 x86 instance running on Silicon Mechanics /
 SuperMicro hardware.
 
 Occasionally under high load (ZFS scrub for example), the box becomes
 non-responsive (it continues to respond to ping but nothing else works
 -- not even the local console).  Our only solution is to hard reset
 after which everything comes up normally.
 
 Logs are showing the following:
 
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:11 prodsys-dmz-zfs2 rpcmod: [ID 851375 kern.warning] WARNING: 
 svc_cots_kdup no slots free
 
 I am able to resolve the last error by adjusting upwards the duplicate
 request cache sizes, but have been unable to find anything on the MPT
 SGL errors.
 
 Anyone have any thoughts on what this error might be?
 
 At this point, we are simply going to apply patches to this box (we do
 see an outstanding mpt patch):
 
 147150 --  01 R-- 124 SunOS 5.10_x86: mpt_sas patch
 147702 --  03 R--  21 SunOS 5.10_x86: mpt patch
 
 But we have another identically configured box at the same patch level
 (admittedly with slightly less workload, though it also undergoes
 monthly zfs scrubs) which does not experience this issue.
 
 Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-07 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

it seems that s11 shadow migration can help:-)
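
for example, something like this on s11 pulls the old data over in the 
background as it is accessed (dataset names and path are hypothetical, and the 
source should be mounted read-only):

  zfs create -o shadow=file:///oldpool/olddata newpool/newdata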


On 1/7/2012 9:50 AM, Jim Klimov wrote:

Hello all,

  I understand that relatively high fragmentation is inherent
to ZFS due to its COW and possible intermixing of metadata
and data blocks (of which metadata path blocks are likely
to expire and get freed relatively quickly).

  I believe it was sometimes implied on this list that such
fragmentation for static data can be currently combatted
only by zfs send-ing existing pools data to other pools at
some reserved hardware, and then clearing the original pools
and sending the data back. This is time-consuming, disruptive
and requires lots of extra storage idling for this task (or
at best - for backup purposes).

  I wonder how resilvering works, namely - does it write
blocks as they were or in an optimized (defragmented)
fashion, in two usecases:
1) Resilvering from a healthy array (vdev) onto a spare drive
   in order to replace one of the healthy drives in the vdev;
2) Resilvering a degraded array from existing drives onto a
   new drive in order to repair the array and make it redundant
   again.

Also, are these two modes different at all?
I.e. if I were to ask ZFS to replace a working drive with
a spare in the case (1), can I do it at all, and would its
data simply be copied over, or reconstructed from other
drives, or some mix of these two operations?

  Finally, what would the gurus say - does fragmentation
pose a heavy problem on nearly-filled-up pools made of
spinning HDDs (I believe so, at least judging from those
performance degradation problems writing to 80+%-filled
pools), and can fragmentation be effectively combatted
on ZFS at all (with or without BP rewrite)?

  For example, can(does?) metadata live separately
from data in some dedicated disk areas, while data
blocks are written as contiguously as they can?

  Many Windows defrag programs group files into several
zones on the disk based on their last-modify times, so
that old WORM files remain defragmented for a long time.
There are thus some empty areas reserved for new writes
as well as for moving newly discovered WORM files to
the WORM zones (free space permitting)...

  I wonder if this is viable with ZFS (COW and snapshots
involved) when BP-rewrites are implemented? Perhaps such
zoned defragmentation can be done based on block creation
date (TXG number) and the knowledge that some blocks in
certain order comprise at least one single file (maybe
more due to clones and dedup) ;)

What do you think? Thanks,
//Jim Klimov
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840

http://laotsao.blogspot.com/
http://laotsao.wordpress.com/
http://blogs.oracle.com/hstsao/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-06 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


may be one can do the following (assume c0t0d0 and c0t1d0)
1)split rpool mirror: zpool split rpool newpool c0t1d0s0
1b)zpool destroy newpool
2)partition 2nd hdd c0t1d0s0 into two slice (s0 and s1)
3)zpool create rpool2 c0t1d0s1
4)use lucreate  -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0
5)lustatus
c0t0d0s0
new-zfsbe
6)luactivate new-zfsbe
7)init 6
now you have two BE old and new
you can create dpool on slice 1, add L2ARC and zil, and repartition c0t0d0
if you want you can create rpool on c0t0d0s0 and a new BE so everything 
will be named rpool for the root pool


SWAP and DUMP can be on different rpool
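
for example, swap can be a zvol on the data pool (names and size are just 
examples):

  zfs create -V 32g -b $(pagesize) dpool/swap
  swap -a /dev/zvol/dsk/dpool/swap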

good luck


On 1/6/2012 12:32 AM, Jesus Cea wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sorry if this list is inappropriate. Pointers welcomed.

Using Solaris 10 Update 10, x86-64.

I have been a ZFS heavy user since available, and I love the system.
My servers are usually small (two disks) and usually hosted in a
datacenter, so I usually create a ZPOOL used both for system and data.
That is, the entire system contains an unique two-disk zpool.

This have worked nice so far.

But my new servers have SSD too. Using them for L2ARC is easy enough,
but I can not use them as ZIL because no separate ZIL device can be
used in root zpools. Ugh, that hurts!.

So I am thinking about splitting my full two-disk zpool in two zpools,
one for system and other for data. Both using both disks for
mirroring. So I would have two slices per disk.

I have the system in production in a datacenter I can not access, but
I have remote KVM access. Servers are in production, I can't reinstall
but I could be allowed to have small (minutes) downtimes for a while.

My plan is this:

1. Do a scrub to be sure the data is OK in both disks.

2. Break the mirror. The A disk will keep working, B disk is idle.

3. Partition B disk with two slices instead of current full disk slice.

4. Create a system zpool in B.

5. Snapshot zpool/ROOT in A and zfs send it to system in B.
Repeat several times until we have a recent enough copy. This stream
will contain the OS and the zones root datasets. I have zones.

6. Change GRUB to boot from system instead of zpool. Cross fingers
and reboot. Do I have to touch the bootfs property?

Now ideally I would be able to have system as the zpool root. The
zones would be mounted from the old datasets.

7. If everything is OK, I would zfs send the data from the old zpool
to the new one. After doing a few times to get a recent copy, I would
stop the zones and do a final copy, to be sure I have all data, no
changes in progress.

8. I would change the zone manifest to mount the data in the new zpool.

9. I would restart the zones and be sure everything seems ok.

10. I would restart the computer to be sure everything works.

So fair, it this doesn't work, I could go back to the old situation
simply changing the GRUB boot to the old zpool.

11. If everything works, I would destroy the original zpool in A,
partition the disk and recreate the mirroring, with B as the source.

12. Reboot to be sure everything is OK.

So, my questions:

a) Is this workflow reasonable and would work?. Is the procedure
documented anywhere?. Suggestions?. Pitfalls?

b) *MUST* SWAP and DUMP ZVOLs reside in the root zpool or can they
live in a nonsystem zpool? (always plugged and available). I would
like to have a quite small(let say 30GB, I use Live Upgrade and quite
a fez zones) system zpool, but my swap is huge (32 GB and yes, I use
it) and I would rather prefer to have SWAP and DUMP in the data zpool,
if possible  supported.

c) Currently Solaris decides to activate write caching in the SATA
disks, nice. What would happen if I still use the complete disks BUT
with two slices instead of one?. Would it still have write cache
enabled?. And yes, I have checked that the cache flush works as
expected, because I can only do around one hundred write+sync per
second.

Advices?.

- -- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/

j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBTwaHW5lgi5GaxT1NAQLe/AP9EIK0tckVBhqzrTHWbNzT2TPUGYc7ZYjS
pZYX1EXkJNxVOmmXrWApmoVFGtYbwWeaSQODqE9XY5rUZurEbYrXOmejF2olvBPL
zyGFMnZTcmWLTrlwH5vaXeEJOSBZBqzwMWPR/uv2Z/a9JWO2nbidcV1OAzVdT2zU
kfboJpbxONQ=
=6i+A
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC

Re: [zfs-discuss] Thinking about spliting a zpool in system and data

2012-01-06 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


correction
On 1/6/2012 3:34 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:


may be one can do the following (assume c0t0d0 and c0t1d0)
1)split rpool mirror: zpool split rpool newpool c0t1d0s0
1b)zpool destroy newpool
2)partition 2nd hdd c0t1d0s0 into two slice (s0 and s1)
3)zpool create rpool2 c0t1d0s1 ===should be c0t1d0s0
4)use lucreate  -c c0t0d0s0 -n new-zfsbe -p c0t1d0s0 ==rpool2
5)lustatus
c0t0d0s0
new-zfsbe
6)luactivate new-zfsbe
7)init 6
now you have two BE old and new
you can create dpool on slice 1, add L2ARC and zil, and repartition c0t0d0
if you want you can create rpool on c0t0d0s0 and a new BE so everything 
will be named rpool for the root pool


SWAP and DUMP can be on different rpool

good luck


On 1/6/2012 12:32 AM, Jesus Cea wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Sorry if this list is inappropriate. Pointers welcomed.

Using Solaris 10 Update 10, x86-64.

I have been a ZFS heavy user since available, and I love the system.
My servers are usually small (two disks) and usually hosted in a
datacenter, so I usually create a ZPOOL used both for system and data.
That is, the entire system contains an unique two-disk zpool.

This have worked nice so far.

But my new servers have SSD too. Using them for L2ARC is easy enough,
but I can not use them as ZIL because no separate ZIL device can be
used in root zpools. Ugh, that hurts!.

So I am thinking about splitting my full two-disk zpool in two zpools,
one for system and other for data. Both using both disks for
mirroring. So I would have two slices per disk.

I have the system in production in a datacenter I can not access, but
I have remote KVM access. Servers are in production, I can't reinstall
but I could be allowed to have small (minutes) downtimes for a while.

My plan is this:

1. Do a scrub to be sure the data is OK in both disks.

2. Break the mirror. The A disk will keep working, B disk is idle.

3. Partition B disk with two slices instead of current full disk slice.

4. Create a system zpool in B.

5. Snapshot zpool/ROOT in A and zfs send it to system in B.
Repeat several times until we have a recent enough copy. This stream
will contain the OS and the zones root datasets. I have zones.

6. Change GRUB to boot from system instead of zpool. Cross fingers
and reboot. Do I have to touch the bootfs property?

Now ideally I would be able to have system as the zpool root. The
zones would be mounted from the old datasets.

7. If everything is OK, I would zfs send the data from the old zpool
to the new one. After doing a few times to get a recent copy, I would
stop the zones and do a final copy, to be sure I have all data, no
changes in progress.

8. I would change the zone manifest to mount the data in the new zpool.

9. I would restart the zones and be sure everything seems ok.

10. I would restart the computer to be sure everything works.

So fair, it this doesn't work, I could go back to the old situation
simply changing the GRUB boot to the old zpool.

11. If everything works, I would destroy the original zpool in A,
partition the disk and recreate the mirroring, with B as the source.

12. Reboot to be sure everything is OK.

So, my questions:

a) Is this workflow reasonable and would work?. Is the procedure
documented anywhere?. Suggestions?. Pitfalls?

b) *MUST* SWAP and DUMP ZVOLs reside in the root zpool or can they
live in a nonsystem zpool? (always plugged and available). I would
like to have a quite small(let say 30GB, I use Live Upgrade and quite
a fez zones) system zpool, but my swap is huge (32 GB and yes, I use
it) and I would rather prefer to have SWAP and DUMP in the data zpool,
if possible  supported.

c) Currently Solaris decides to activate write caching in the SATA
disks, nice. What would happen if I still use the complete disks BUT
with two slices instead of one?. Would it still have write cache
enabled?. And yes, I have checked that the cache flush works as
expected, because I can only do around one hundred write+sync per
second.

Advices?.

- -- Jesus Cea Avion _/_/  _/_/_/
_/_/_/

j...@jcea.es - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:j...@jabber.org _/_/_/_/  _/_/_/_/_/
.  _/_/  _/_/_/_/  _/_/  _/_/
Things are not so easy  _/_/  _/_/_/_/  _/_/_/_/  _/_/
My name is Dump, Core Dump   _/_/_/_/_/_/  _/_/  _/_/
El amor es poner tu felicidad en la felicidad de otro - Leibniz
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQCVAwUBTwaHW5lgi5GaxT1NAQLe/AP9EIK0tckVBhqzrTHWbNzT2TPUGYc7ZYjS
pZYX1EXkJNxVOmmXrWApmoVFGtYbwWeaSQODqE9XY5rUZurEbYrXOmejF2olvBPL
zyGFMnZTcmWLTrlwH5vaXeEJOSBZBqzwMWPR/uv2Z/a9JWO2nbidcV1OAzVdT2zU
kfboJpbxONQ=
=6i+A
-END PGP SIGNATURE-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
what is the zpool layout (zpool status)?
are you using the default 128k record size for the zpool?
is your server x86? or a sparc t3? how many sockets?
IMHO, a t3 for oracle needs careful tuning,
since many oracle ops need fast single-thread cpu


Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
one still does not understand your setup
1 what is the hba in the T3-2?
2 did you set up raid6 (how) in the ramsan array, or present the ssds as jbod to the zpool?
3 which model of RAMSAN?
4 is there any other storage behind the RAMSAN?
5 did you set up the zpool with a zil and/or cache (L2ARC)?
6 IMHO, the hybrid approach to ZFS is the most cost effective: 7200rpm SAS with 
zil and L2ARC and a mirrored zpool

the problem with raid6 at 8k stripes and oracle's 8k blocks is the mismatch of stripe size;
we know a zpool uses a dynamic stripe size in raidz, not the same as in hw raid,
but a similar consideration still exists
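
for what it is worth, the usual starting point is to match the data-file 
dataset to oracle's block size, e.g. (dataset name is hypothetical):

  zfs create -o recordsize=8k tank/oradata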



Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
i just took a look at the ramsan web site;
there are many whitepapers on oracle, none on ZFS


Sent from my iPad

On Jan 5, 2012, at 12:58, Hung-Sheng Tsao (laoTsao) laot...@gmail.com wrote:

 one still does not understand your setup
 1 what is hba in T3-2
 2 did u setup raid6 (how) in ramsan array? or present the ssd as jbod to zpool
 3 which model of RAMSAN
 4 are there any other storage behind RAMSAN
 5 do you set up zpool with zil and or ARC?
 6 IMHO, the hybre approach to ZFS is the most cost effective, 7200rpm SAS 
 with zil and ARC and mirror Zpool
 
 the problem with raid6 with 8k and oracle 8k is the mismatch of stripsize
 we know the zpool use dynamic stripsize in raid, not the same as in hw raid
 but similar consideration still exist
 
 
 
 Sent from my iPad
 
 On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:
 
 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 Disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing with with 
 the the zpool iostat output. It's with the zpool iostat I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-04 Thread Hung-Sheng Tsao (laoTsao)
what is your storage?
internal sas or external array
what is  your zfs setup?


Sent from my iPad

On Jan 4, 2012, at 17:59, grant lowe glow...@gmail.com wrote:

 Hi all,
 
 I've got a solaris 10 running 9/10 on a T3. It's an oracle box with 128GB 
 memory RIght now oracle . I've been trying to load test the box with 
 bonnie++. I can seem to get 80 to 90 K writes, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-30 Thread Hung-Sheng Tsao (laoTsao)
s11 now supports shadow migration, just for this purpose, AFAIK;
not sure nexentaStor supports shadow migration



Sent from my iPad

On Dec 30, 2011, at 2:03, Ray Van Dolson rvandol...@esri.com wrote:

 On Thu, Dec 29, 2011 at 10:59:04PM -0800, Fajar A. Nugraha wrote:
 On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote:
 Is there a non-disruptive way to undeduplicate everything and expunge
 the DDT?
 
 AFAIK, no
 
  zfs send/recv and then back perhaps (we have the extra
 space)?
 
 That should work, but it's disruptive :D
 
 Others might provide better answer though.
 
 Well, slightly _less_ disruptive perhaps.  We can zfs send to another
 file system on the same system, but different set of disks.  We then
 disable NFS shares on the original, do a final zfs send to sync, then
 share out the new undeduplicated file system with the same name.
 Hopefully the window here is short enough that NFS clients are able to
 recover gracefully.
 
 We'd then wipe out the old zpool, recreate and do the reverse to get
 data back onto it..
 
 Thanks,
 Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
what is the ram size?
are there many snapshots? created and then deleted?
did you run a scrub?

Sent from my iPad

On Dec 18, 2011, at 10:46, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:

 Hi,
 
 On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
 laot...@gmail.com wrote:
 what are the output of zpool status pool1 and pool2
 it seems that you have mix configuration of pool3 with disk and mirror
 
 The other two pools show very similar outputs:
 
 root@stor:~# zpool status pool1
  pool: pool1
 state: ONLINE
 scan: resilvered 1.41M in 0h0m with 0 errors on Sun Dec  4 17:42:35 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool1  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t12d0   ONLINE   0 0 0
c1t13d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t24d0   ONLINE   0 0 0
c1t25d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t30d0   ONLINE   0 0 0
c1t31d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t32d0   ONLINE   0 0 0
c1t33d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p6  ONLINE   0 0 0
c2t3d0p6  ONLINE   0 0 0
cache
  c2t2d0p10   ONLINE   0 0 0
  c2t3d0p10   ONLINE   0 0 0
 
 errors: No known data errors
 root@stor:~# zpool status pool2
  pool: pool2
 state: ONLINE
 scan: scrub canceled on Wed Dec 14 07:51:50 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool2 ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t14d0   ONLINE   0 0 0
c1t15d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t18d0   ONLINE   0 0 0
c1t19d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t20d0   ONLINE   0 0 0
c1t21d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t22d0   ONLINE   0 0 0
c1t23d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p7  ONLINE   0 0 0
c2t3d0p7  ONLINE   0 0 0
cache
  c2t2d0p11   ONLINE   0 0 0
  c2t3d0p11   ONLINE   0 0 0
 
 The affected pool does indeed have a mix of straight disks and
 mirrored disks (due to running out of vdevs on the controller),
 however it has to be added that the performance of the affected pool
 was excellent until around 3 weeks ago, and there have been no
 structural changes nor to the pools neither to anything else on this
 server in the last half year or so.
 
 -jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
not sure oi supports shadow migration,
or you may have to zfs send the pool to another server and then send it back to defrag it
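
roughly like this (pool, dataset and host names are hypothetical; scratch is a 
spare pool on the other host):

  zfs snapshot -r pool3@move
  zfs send -R pool3@move | ssh otherhost zfs receive -d -F scratch
  # then recreate pool3 and send the data back the same way
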
regards

Sent from my iPad

On Dec 19, 2011, at 8:15, Gary Mills gary_mi...@fastmail.fm wrote:

 On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
 
 2011/12/19 Hung-Sheng Tsao (laoTsao) laot...@gmail.com:
 did you run a scrub?
 
 Yes, as part of the previous drive failure. Nothing reported there.
 
 Now, interestingly - I deleted two of the oldest snapshots yesterday,
 and guess what - the performance went back to normal for a while. Now
 it is severely dropping again - after a good while on 1.5-2GB/s I am
 again seeing write performance in the 1-10MB/s range.
 
 That behavior is a symptom of fragmentation.  Writes slow down
 dramatically when there are no contiguous blocks available.  Deleting
 a snapshot provides some of these, but only temporarily.
 
 -- 
 -Gary Mills--refurb--Winnipeg, Manitoba, Canada-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA hardware advice

2011-12-19 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
AFAIK, most ZFS based storage appliances have moved to SAS with 7200 rpm or 
15k rpm drives

most SSDs are SATA and are connected to the on-board SATA I/O chips


On 12/19/2011 9:59 AM, tono wrote:

Thanks for the sugestions, especially all the HP info and build
pictures.

Two things crossed my mind on the hardware front. The first is regarding
the SSDs you have pictured, mounted in sleds. Any Proliant that I've
read about connects the hotswap drives via a SAS backplane. So how did
you avoid that (physically) to make the direct SATA connections?

The second is regarding a conversation I had with HP pre-sales. A rep
actually told me, in no uncertain terms, that using non-HP HBAs, RAM, or
drives would completely void my warranty. I assume this is BS but I
wonder if anyone has ever gotten resistance due to 3rd party hardware.
In the States, at least, there is the Magnuson–Moss act. I'm just not
sure if it applies to servers.

Back to SATA though. I can appreciate fully about not wanting to take
unnecessary risks, but there are a few things that don't sit well with
me.

A little background: this is to be a backup server for a small/medium
business. The data, of course, needs to be safe, but we don't need
extreme HA.

I'm aware of two specific issues with SATA drives: the TLER/CCTL
setting, and the issue with SAS expanders. I have to wonder if these
account for most of the bad rap that SATA drives get. Expanders are
built into nearly all of the JBODs and storage servers I've found
(including the one in the serverfault post), so they must be in common
use.

So I'll ask again: are there any issues when connecting SATA drives
directly to a HBA? People are, after all, talking left and right about
using SATA SSDs... as long as they are connected directly to the MB
controller.

We might just do SAS at this point for peace of mind. It just bugs me
that you can't use inexpensive disks in a R.A.I.D. I would think that
RAIDZ and AHCI could handle just about any failure mode by now.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-18 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

What is the output of zpool status for pool1 and pool2?
It seems that you have a mixed configuration in pool3, with single disks and mirrors.
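
To narrow down a single slow disk, per-device latency is usually the giveaway.
Something along these lines should show it (interval and count are arbitrary):

   # iostat -xn 5 5
   # zpool iostat -v pool3 5

A drive showing a much higher asvc_t or %b than its neighbours, without any
error counters, is a good candidate for the slow member.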


On 12/18/2011 9:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:

Dear List,

I have a storage server running OpenIndiana with a number of storage
pools on it. All the pools' disks come off the same controller, and
all pools are backed by SSD-based l2arc and ZIL. Performance is
excellent on all pools but one, and I am struggling greatly to figure
out what is wrong.

A very basic test shows the following - pretty much typical
performance at the moment:

root@stor:/# for a in pool1 pool2 pool3; do dd if=/dev/zero of=$a/file
bs=1M count=10; done
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00772965 s, 1.4 GB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.00996472 s, 1.1 GB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 71.8995 s, 146 kB/s

The zpool status of the affected pool is:

root@stor:/# zpool status pool3
   pool: pool3
  state: ONLINE
  scan: resilvered 222G in 24h2m with 0 errors on Wed Dec 14 15:20:11 2011
config:

 NAME  STATE READ WRITE CKSUM
 pool3 ONLINE   0 0 0
   c1t0d0  ONLINE   0 0 0
   c1t1d0  ONLINE   0 0 0
   c1t2d0  ONLINE   0 0 0
   c1t3d0  ONLINE   0 0 0
   c1t4d0  ONLINE   0 0 0
   c1t5d0  ONLINE   0 0 0
   c1t6d0  ONLINE   0 0 0
   c1t7d0  ONLINE   0 0 0
   c1t8d0  ONLINE   0 0 0
   c1t9d0  ONLINE   0 0 0
   c1t10d0 ONLINE   0 0 0
   mirror-12   ONLINE   0 0 0
 c1t26d0   ONLINE   0 0 0
 c1t27d0   ONLINE   0 0 0
   mirror-13   ONLINE   0 0 0
 c1t28d0   ONLINE   0 0 0
 c1t29d0   ONLINE   0 0 0
   mirror-14   ONLINE   0 0 0
 c1t34d0   ONLINE   0 0 0
 c1t35d0   ONLINE   0 0 0
 logs
   mirror-11   ONLINE   0 0 0
 c2t2d0p8  ONLINE   0 0 0
 c2t3d0p8  ONLINE   0 0 0
 cache
   c2t2d0p12   ONLINE   0 0 0
   c2t3d0p12   ONLINE   0 0 0

errors: No known data errors

Ditto for the disk controller - MegaCli reports zero errors, be that
on the controller itself, on this pool's disks or on any of the other
attached disks.

I am pretty sure I am dealing with a disk-based problem here, i.e. a
flaky disk that is just slow without exhibiting any actual data
errors, holding the rest of the pool back, but I am at a loss as to how
to pinpoint what is going on.

Would anybody on the list be able to give me any pointers on how to
dig up more detailed information about the pool's/hardware's
performance?

Thank you in advance for your kind assistance.

Best regards
Jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Hung-Sheng Tsao (laoTsao)
IMHO, if possible pick SAS 7200 rpm HDDs,
no HW RAID for ZFS,
mirrors, with a ZIL device and a good amount of memory.


Sent from my iPad

On Dec 16, 2011, at 17:36, t...@ownmail.net wrote:

 I could use some help with choosing hardware for a storage server. For
 budgetary and density reasons, we had settled on LFF SATA drives in the
 storage server. I had closed in on models from HP (DL180 G6) and IBM
 (x3630 M3), before discovering warnings against connecting SATA drives
 with SAS expanders.
 
 So I'd like to ask what's the safest way to manage SATA drives. We're
looking for a 12-bay (ideally 14-bay) LFF server, 2-3U, similar to the above
 models. The HP and IBM models both come with SAS expanders built into
 their backplanes. My questions are:
 
 1. Kludginess aside, can we build a dependable SMB server using
 integrated HP or IBM expanders plus the workaround
 (allow-bus-device-reset=0) presented here: 
 http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?
 
 2. Would it be better to find a SATA card with lots of ports, and make
 1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
 support, for example, but I don't know how reliable they are or whether
 they support a clean JBOD mode.
 
 3. Assuming native SATA is the way to go, where should we look for
 hardware? I'd like the IBM & HP options because of the LOM & warranty,
 but I wouldn't think the hot-swap backplane offers any way to bypass the
 SAS expanders (correct me if I'm wrong here!). I found this JBOD:
 http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
 about SuperMicro. Are there any other vendors or models worth
 considering?
 
 Thanks!
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
Please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
ZIL (SSD).

Maybe try the ZFS simulator SW.
regards




On 12/12/2011 2:28 PM, Albert Chin wrote:

We're preparing to purchase an X4170M2 as an upgrade for our existing
X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for CPU, some
more expensive than others. Our current system has a dual-core 1.8Ghz
Opteron 2210 CPU with 8GB. Seems like either a 6-core Intel E5649
2.53Ghz CPU or 4-core Intel E5620 2.4Ghz CPU would be more than
enough. Based on what we're using the system for, it should be more
I/O bound than CPU bound. We are doing compression in ZFS but that
shouldn't be too CPU intensive. Seems we should be caring more about
more cores than high Ghz.

Recommendations?



--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

4c@2.4ghz

On 12/12/2011 2:44 PM, Albert Chin wrote:

On Mon, Dec 12, 2011 at 02:40:52PM -0500, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. 
wrote:

please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and
ZIL(SSD)
may be try the ZFS simulator SW

Good point. Thanks.


regards

On 12/12/2011 2:28 PM, Albert Chin wrote:

We're preparing to purchase an X4170M2 as an upgrade for our existing
X4100M2 server for ZFS, NFS, and iSCSI. We have a choice for CPU, some
more expensive than others. Our current system has a dual-core 1.8Ghz
Opteron 2210 CPU with 8GB. Seems like either a 6-core Intel E5649
2.53Ghz CPU or 4-core Intel E5620 2.4Ghz CPU would be more than
enough. Based on what we're using the system for, it should be more
I/O bound than CPU bound. We are doing compression in ZFS but that
shouldn't be too CPU intensive. Seems we should be caring more about
more cores than high Ghz.

Recommendations?


--
Hung-Sheng Tsao Ph D.
Founder   Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/






--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU sizing for ZFS/iSCSI/NFS server

2011-12-12 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.



On 12/12/2011 3:02 PM, Gary Driggs wrote:

On Dec 12, 2011, at 11:42 AM, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D." wrote:


please check out the ZFS appliance 7120 spec 2.4Ghz /24GB memory and ZIL(SSD)

Do those appliances also use the F20 PCIe flash cards?
No, these controllers need the slots for SAS HBAs for the HA-cluster
configuration, or for FCoE, FC, or 10GbE HBAs.

The 7120 only supports Logzilla (ZIL).
The 7320 (x4170M2 head) supports Readzilla (L2ARC) and Logzilla (ZIL, 18GB).
The 7420 (x4470M2 head) supports Readzilla (500GB or 1TB) and Logzilla.

I know the
Exadata storage cells use them but they aren't utilizing ZFS in the
Linux version of the X2-2. Has that changed with the Solaris x86
versions of the appliance? Also, does OCZ or someone make an
equivalent to the F20 now?

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS not starting

2011-12-01 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

FYI
http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-113-size-zfs-dedup-1354231.html
never too late :-(


On 12/1/2011 5:19 PM, Freddie Cash wrote:


The system has 6GB of RAM and a 10GB swap partition. I added a 30GB
swap file but this hasn't helped.


ZFS doesn't use swap for the ARC (it's wired aka unswappable memory). 
 And ZFS uses the ARC for dedupe support.


You will need to find a lot of extra RAM to stuff into that machine in 
order for it to boot correctly, load the dedupe tables into ARC, 
process the intent log, and then import the pool.


And, you'll need that extra RAM in order to destroy the ZFS filesystem 
that has dedupe enabled.


Basically, your DDT (dedupe table) is running you out of ARC space and 
livelocking (or is it deadlocking, never can keep those terms 
straight) the box.


You can remove the RAM once you have things working again.  Just don't 
re-enable dedupe until you have at least 16 GB of RAM in the box that 
can be dedicated to ZFS.  And be sure to add a cache device to the pool.
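
If you want to size this up front, zdb can report (or simulate) the dedup
table; the in-core figure is roughly what the ARC has to hold (pool name is
just a placeholder here):

   # zdb -DD tank     (DDT statistics for a pool with dedup already enabled)
   # zdb -S tank      (simulated DDT for a pool without dedup)

Each DDT entry costs on the order of a few hundred bytes of ARC, so millions
of unique blocks add up quickly.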


I just went through something similar with an 8 GB ZFS box (RAM is on 
order, but purchasing dept ordered from wrong supplier so we're stuck 
waiting for it to arrive) where I tried to destroy dedupe'd 
filesystem.  Exact same results as you.  Stole RAM out of a different 
server temporarily to get things working on this box again.


# sysctl hw.physmem
hw.physmem: 6363394048

# sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 5045088256

(I lowered arc_max to 1GB but it hasn't helped)


DO NOT LOWER THE ARC WHEN DEDUPE IS ENABLED!!

--
Freddie Cash
fjwc...@gmail.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS forensics

2011-11-23 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

did you see this link
http://www.solarisinternals.com/wiki/index.php/ZFS_forensics_scrollback_script
may be out of date already
regards
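
Plain zdb still goes quite deep on its own; a couple of starting points, with
placeholder pool/dataset names and offsets:

   # zdb -dddd tank/fs            (dump object metadata, including block pointers)
   # zdb -R tank 0:400000:200     (read a raw block from vdev 0 at offset:size)

Max Bruning's write-ups walk through exactly this kind of session.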


On 11/23/2011 11:14 AM, Gary Driggs wrote:

Is zdb still the only way to dive into the file system? I've seen the 
extensive work by Max Bruning on this but wonder if there are any tools that 
make this easier...?

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle releases Solaris 11 for Sparc and x86 servers

2011-11-10 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


AFAIK, there is no change in open source policy for Oracle Solaris

On 11/9/2011 10:34 PM, Fred Liu wrote:

... so when will zfs-related improvement make it to solaris-
derivatives :D ?


I am also very curious about Oracle's policy about source code. ;-)


Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] sd_max_throttle

2011-11-03 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

For a ZFS appliance used as a file server over NFS or SMB (CIFS),
sd_max_throttle does not come into play.
For FC or iSCSI targets it may.
regards
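
When it does matter, it is normally set in /etc/system, for example (the value
here is only an example; the right number comes from the storage vendor):

   set sd:sd_max_throttle=32
   set ssd:ssd_max_throttle=32

followed by a reboot. The second line is for targets driven by ssd (typically
FC on SPARC) rather than sd.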


On 11/3/2011 5:29 PM, Gary wrote:

Hi folks,

I'm reading through some I/O performance tuning documents and am
finding some older references to sd_max_throttle kernel/project
settings. Have there been any recent books or documentation written
that talks about this more in depth? It seems to be more appropriate
for FC or DAS but I'm wondering if anyone has had to touch this or
other settings with ZFS appliances they've built...?

-Gary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Hung-Sheng Tsao Ph D.
Founder  Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is there any implementation of VSS for a ZFS iSCSI snapshot on Solaris?

2011-09-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

http://download.oracle.com/docs/cd/E22471_01/html/820-4167/application_integration__microsoft.html#application_integration__microsoft__sun_storage_7000_provider_for_microsoft_vs

On 9/15/2011 9:19 AM, S Joshi wrote:
By iirc do you mean 'if I remember correctly', or is there a company 
called iirc? Which ZFS appliance are you referring to?


Thanks


CC: zfs-discuss@opensolaris.org
From: laot...@gmail.com
Subject: Re: [zfs-discuss] Is there any implementation of VSS for a 
ZFS iSCSI snapshot on Solaris?

Date: Wed, 14 Sep 2011 18:01:37 -0400
To: bit05...@hotmail.com

IIRC the ZFS appliance has VSS support

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Sep 14, 2011, at 17:02, S Joshi bit05...@hotmail.com wrote:


I am using a Solaris + ZFS environment to export a iSCSI block
layer device and use the snapshot facility to take a snapshot of
the ZFS volume. Is there an existing Volume Shadow Copy (VSS)
implementation on Windows for this environment?


Thanks

S Joshi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

Maybe try the following:
1) boot the s10u8 CD into single user mode (when booting from the cdrom,
choose Solaris, then choose single user mode (6))

2) when asked to mount rpool, just say no
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool
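
If the plain import still panics and the build you boot from supports pool
recovery, a dry run first shows whether rewinding to an older txg would help,
without changing anything on disk:

6) zpool import -nF tank

and only if that reports a recoverable state, repeat it without the -n.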


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
swhitef...@yahoo.com  wrote:

  # zpool import -f tank

  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

I encourage you to open a support case and ask for an escalation on CR 7056738.

--
Mike Gerdts

Hi Mike,

Unfortunately I don't have a support contract. I've been trying to set up a 
development system on Solaris and learn it.
Until this happened, I was pretty happy with it. Even so, I don't have 
supported hardware so I couldn't buy a contract
until I bought another machine and I really have enough machines so I cannot 
justify the expense right now. And I
refuse to believe Oracle would hold people hostage in a situation like this, 
but I do believe they could generate a lot of
goodwill by fixing this for me and whoever else it happened to and telling us 
what level of Solaris 10 this is fixed at so
this doesn't continue happening. It's a pretty serious failure and I'm not the 
only one who it happened to.

It's incredible but in all the years I have been using computers I don't ever 
recall losing data due to a filesystem or OS issue.
That includes DOS, Windows, Linux, etc.

I cannot believe ZFS on Intel is so fragile that people lose hundreds of gigs 
of data and that's just the way it is. There
must be a way to recover this data and some advice on preventing it from 
happening again.

Thanks,
Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.



On 8/15/2011 11:25 AM, Stu Whitefish wrote:


Hi. Thanks, I have tried this on update 8 and Sol 11 Express.

The import always results in a kernel panic as shown in the picture.

I did not try an alternate mountpoint though. Would it make that much 
difference?

try it



- Original Message -

From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.laot...@gmail.com
To: zfs-discuss@opensolaris.org
Cc:
Sent: Monday, August 15, 2011 3:06:20 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
inaccessible!

may be try the following
1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris
then choose single user mode(6))
2)when ask to mount rpool just say no
3)mkdir /tmp/mnt1 /tmp/mnt2
4)zpool  import -f -R /tmp/mnt1 tank
5)zpool import -f -R /tmp/mnt2 rpool


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

  On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
  swhitef...@yahoo.com   wrote:

# zpool import -f tank

   http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

  I encourage you to open a support case and ask for an escalation on CR

7056738.
  -- 
  Mike Gerdts

  Hi Mike,

  Unfortunately I don't have a support contract. I've been trying to

set up a development system on Solaris and learn it.

  Until this happened, I was pretty happy with it. Even so, I don't have

supported hardware so I couldn't buy a contract

  until I bought another machine and I really have enough machines so I

cannot justify the expense right now. And I

  refuse to believe Oracle would hold people hostage in a situation like

this, but I do believe they could generate a lot of

  goodwill by fixing this for me and whoever else it happened to and telling

us what level of Solaris 10 this is fixed at so

  this doesn't continue happening. It's a pretty serious failure and

I'm not the only one who it happened to.

  It's incredible but in all the years I have been using computers I

don't ever recall losing data due to a filesystem or OS issue.

  That includes DOS, Windows, Linux, etc.

  I cannot believe ZFS on Intel is so fragile that people lose hundreds of

gigs of data and that's just the way it is. There

  must be a way to recover this data and some advice on preventing it from

happening again.

  Thanks,
  Jim
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Scripting

2011-08-10 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


hi
most modern servers have a separate ILOM that supports ipmitool, which can
talk to the HDDs

what is your server? does it have a separate remote management port?

On 8/10/2011 8:36 AM, Lanky Doodle wrote:

Hiya,

Now that I have figured out how to read disks using dd to make LEDs blink, I want to 
write a little script that iterates through all drives, dd's each one with a few 
thousand counts, stops, then dd's it again with another few thousand counts, so I end 
up with maybe 5 blinks.

I don't want somebody to write something for me, I'd like to be pointed in the 
right direction so I can build one myself :)

Thanks
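
A minimal sketch of that kind of loop, with made-up device names (adjust the
list, block size and counts to taste):

#!/bin/sh
# read each drive twice, with a pause in between, to make its LED blink
for d in c1t0d0 c1t1d0 c1t2d0
do
        echo "blinking $d"
        dd if=/dev/rdsk/${d}s2 of=/dev/null bs=64k count=5000 2>/dev/null
        sleep 1
        dd if=/dev/rdsk/${d}s2 of=/dev/null bs=64k count=5000 2>/dev/null
done

The disk list could also be generated on the fly, e.g. from 'echo | format'
or 'ls /dev/rdsk/*s2', rather than hard-coded.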
attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Adding mirrors to an existing zfs-pool

2011-07-26 Thread hung-sheng tsao
Hi
It is better to just create a new pool on the new array (Clariion B),
then use cpio to copy the data.
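
For reference, the in-place migration asked about below would look roughly
like this for each of the 4 disks (device names are placeholders; the new
disk must be at least as large as the old one):

   # zpool attach pool old_disk_from_A new_disk_from_B
     (turns the single disk into a 2-way mirror; wait for the resilver
      to finish - watch zpool status)
   # zpool detach pool old_disk_from_A
     (drops the Clariion A side again once the resilver is done)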

On 7/26/11, Bernd W. Hennig consult...@hennig-consulting.com wrote:
 G'Day,

 - zfs pool with 4 disks (from Clariion A)
 - must migrate to Clariion B (so I created 4 disks with the same size,
   avaiable for the zfs)

 The zfs pool has no mirrors, my idea was to add the new 4 disks from
 the Clariion B to the 4 disks which are still in the pool - and later
 remove the original 4 disks.

 In all the examples I only found how to create a new pool with mirrors,
 but no example of how to add a mirror disk to each disk in a pool
 without mirrors.

 - is it possible to add disks to each disk in the pool (they have different
   sizes, so I have to add exactly the right disk from Clariion B to each
   original disk from Clariion A)
 - can I later remove the disks from Clariion A, with the pool intact and
   users able to keep working with the pool


 ??

 Sorry for the beginner questions

 Tnx for help
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Sent from my mobile device

Hung-Sheng Tsao, Ph.D. laot...@gmail.com
laot...@gmail.com
http://laotsao.wordpress.com
cell:9734950840
gvoice:8623970640
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import -R /tpools zpool hangs

2011-06-29 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

hi
I tried to import a zpool at a different mount root
and it hangs forever.
How do I recover?
Can one kill the import job?
 1 S root 5 0   0   0 SD?  0?   Jun 27 
?   8:58 zpool-rootpool
 1 S root 16786 0   0   0 SD?  0? 16:11:15 
?   0:00 zpool-as_as
 0 S root 16866 16472   0  40 20?   1261? 16:13:09 
pts/4   0:01 zpool import -R /tpools ora_as_arch
 1 S root 16856 0   0   0 SD?  0? 16:12:57 
?   0:00 zpool-ora_asdb_new
 1 S root 16860 0   0   0 SD?  0? 16:13:02 
?   0:00 zpool-as_wc_new
 1 S root 16865 0   0   0 SD?  0? 16:13:09 
?   0:00 zpool-ora_ppl_arch
 1 S root 16867 0   0   0 SD?  0? 16:13:11 
?   0:00 zpool-ora_as_arch
 1 S root 16858 0   0   0 SD?  0? 16:12:59 
?   0:00 zpool-as_search_new
 1 S root 16854 0   0   0 SD?  0? 16:12:55 
?   0:00 zpool-ora_as_arch_new
 1 S root 16863 0   0   0 SD?  0? 16:13:06 
?   0:00 zpool-ora_herm_arch

what are these other jobs?
TIA



attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Have my RMA... Now what??

2011-05-28 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

Yes, good idea. Another thing to keep in mind:
technology changes so fast that, by the time you want a replacement, the
HDD may not exist any more,
or the supplier has changed, so the drives are not exactly like your
original drive.
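
On the original question: with raidz2 you can take the flaky disk out of
service now and swap it in place when the replacement arrives, roughly like
this (pool and disk names are placeholders):

   # zpool offline tank c1t5d0      (stop using the failing drive)
     ... pull it, send it in, install the replacement in the same bay ...
   # zpool replace tank c1t5d0      (resilver onto the new drive in that slot)

zpool detach only applies to mirror members and hot spares, so it is not the
right tool for a raidz2 vdev.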





On 5/28/2011 6:05 PM, Michael DeMan wrote:

Always pre-purchase one extra drive to have on hand.  When you get it, confirm 
it was not dead-on-arrival by hooking it up to a workstation via an external USB adapter 
and running whatever your favorite tools are to validate it is okay.  Then put 
it back in its original packaging, and put a label on it about what it is, and 
that it is a spare for box(s) XYZ disk system.

When a drive fails, use that one off the shelf to do your replacement 
immediately then deal with the RMA, paperwork, and snailmail to get the bad 
drive replaced.

Also, depending how many disks you have in your array - keeping multiple spares 
can be a good idea as well to cover another disk dying while waiting on that 
replacement one.

In my opinion, the above goes whether you have your disk system configured with 
hot spare or not.  And the technique is applicable to both personal/home-use 
and commercial uses if your data is important.


- Mike

On May 28, 2011, at 9:30 AM, Brian wrote:


I have a raidz2 pool with one disk that seems to be going bad, several errors 
are noted in iostat.  I have an RMA for the drive, however - now I am wondering 
how I proceed.  I need to send the drive in and then they will send me one 
back.  If I had the drive on hand, I could do a zpool replace.

Do I do a zpool offline? zpool detach?
Once I get the drive back and put it in the same drive bay...  Is it just a zpool 
replace <device>?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Hung-Sheng Tsao (LaoTsao) Ph. D.


IMHO, ZFS needs to run on all kinds of HW.
T-series CMT servers have been able to help with SHA calculation since the T1
days, but I did not see any work in ZFS to take advantage of it.



On 5/10/2011 11:29 AM, Anatoly wrote:

Good day,

I think ZFS can take advantage of using a GPU for sha256 calculation, 
encryption and maybe compression. Modern video cards, like the ATI HD 5xxx 
or 6xxx series, can do sha256 calculation 50-100 times faster than a 
modern 4-core CPU.


The kgpu project for Linux shows nice results.

'zfs scrub' would then run freely on high-performance ZFS pools.

The only problem is that there are no AMD/Nvidia drivers for Solaris that 
support hardware-assisted OpenCL.


Is anyone interested in it?

Best regards,
Anatoly Legkodymov.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
attachment: laotsao.vcf___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss