Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-04 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi

maybe check out STEC SSDs
or check the service manual of the Sun ZFS Storage Appliance
to see which read and write SSDs are used in that system
regards
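
for reference, a minimal sketch of adding such devices to an existing pool (the
pool name tank and the device names are hypothetical; the slog is mirrored so a
single failed SSD cannot lose committed sync writes):

# zpool add tank log mirror c4t0d0 c4t1d0
# zpool add tank cache c4t2d0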


Sent from my iPad

On Aug 3, 2012, at 22:05, Hung-Sheng Tsao (LaoTsao) Ph.D laot...@gmail.com 
wrote:

 Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise Solid State 
 Disk SSDSA2VP020G201
 
 
 Sent from my iPad
 
 On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us 
 wrote:
 
 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-03 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
Intel 311 Series Larsen Creek 20GB 2.5" SATA II SLC Enterprise Solid State Disk 
SSDSA2VP020G201


Sent from my iPad

On Aug 3, 2012, at 21:39, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Fri, 3 Aug 2012, Karl Rossing wrote:
 
 I'm looking at 
 http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-ssd.html
  wondering what I should get.
 
 Are people getting intel 330's for l2arc and 520's for slog?
 
 For the slog, you should look for a SLC technology SSD which saves unwritten 
 data on power failure.  In Intel-speak, this is called Enhanced Power Loss 
 Data Protection.  I am not running across any Intel SSDs which claim to 
 match these requirements.
 
 Extreme write IOPS claims in consumer SSDs are normally based on large write 
 caches which can lose even more data if there is a power failure.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
so zpool import -F ..
and zpool import -f ...
are all not working?
regards
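
a minimal sketch of the usual escalation, using the pool name from the thread
(-f forces an import of a pool that looks active; -F rewinds to an earlier txg,
discarding the last few seconds of writes; -n makes -F a dry run):

# zpool import tXstpool
# zpool import -f tXstpool
# zpool import -nF tXstpool
# zpool import -F tXstpool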


Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import the zpool

2012-08-02 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
can you post the output of zpool history? (example below)
regards
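
for example, with -i and -l to include internal events plus user and host detail:

# zpool history -il tXstpool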

Sent from my iPad

On Aug 2, 2012, at 7:47, Suresh Kumar sachinnsur...@gmail.com wrote:

 Hi Hung-sheng,
  
 It is not displaying any output, like the following.
  
 bash-3.2#  zpool import -nF tXstpool
 bash-3.2#
  
  
 Thanks & Regards,
 Suresh.
  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] online increase of zfs after LUN increase ?

2012-07-26 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
imho, patch 147440-21 does not list the bugs that were solved by 148098-
even though it obsoletes 148098



Sent from my iPad

On Jul 25, 2012, at 18:14, Habony, Zsolt zsolt.hab...@hp.com wrote:

 Thank you for your replies.
 
 First, sorry for the misleading info.  Patch 148098-03 is indeed not included in the 
 recommended set, but trying to download it shows that 147440-15 obsoletes it 
 and 147440-19 is included in the latest recommended patch set. 
 Thus time has solved the problem elsewhere.
 
 Just for fun, my case was:
 
 A standard LUN used as a zfs filesystem, no redundancy (as the storage array 
 already provides it), and no partitioning is used; the disk is given directly to zpool.
 # zpool status xx-oraarch
  pool: xx-oraarch
 state: ONLINE
 scan: none requested
 config:
 
NAME STATE READ WRITE CKSUM
xx-oraarch   ONLINE   0 0 0
  c5t60060E800570B90070B96547d0  ONLINE   0 0 0
 
 errors: No known data errors
 
 Partitioning shows this.  
 
 partition> pr
 Current partition table (original):
 Total disk sectors available: 41927902 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     19.99GB           41927902
  1 unassigned    wm                  0          0                   0
  2 unassigned    wm                  0          0                   0
  3 unassigned    wm                  0          0                   0
  4 unassigned    wm                  0          0                   0
  5 unassigned    wm                  0          0                   0
  6 unassigned    wm                  0          0                   0
  8   reserved    wm           41927903      8.00MB           41944286
 
 
 As I mentioned, I did not partition it; zpool create did.  I had absolutely 
 no idea how to resize these partitions, where to get the available number of 
 sectors, or how many should be skipped and reserved ...
 Thus I backed up the 10G, destroyed the zpool, created the zpool again (the size 
 was fine now), and restored the data.
 
 The partition table looks like this now; I do not think I could have created 
 it easily by hand.
 
 partition> pr
 Current partition table (original):
 Total disk sectors available: 209700062 + 16384 (reserved sectors)
 
 Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                256     99.99GB          209700062
  1 unassigned    wm                  0          0                   0
  2 unassigned    wm                  0          0                   0
  3 unassigned    wm                  0          0                   0
  4 unassigned    wm                  0          0                   0
  5 unassigned    wm                  0          0                   0
  6 unassigned    wm                  0          0                   0
  8   reserved    wm          209700063      8.00MB          209716446
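 
 (for reference: on releases with the autoexpand pool property, the 
 destroy/recreate can be avoided entirely; a minimal sketch, assuming the pool 
 and disk names from above:
 
 # zpool set autoexpand=on xx-oraarch
 # zpool online -e xx-oraarch c5t60060E800570B90070B96547d0
 
 the -e asks ZFS to expand the vdev onto the newly grown LUN while the pool 
 stays online.)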
 
 Thank you for your help.
 Zsolt Habony
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New fast hash algorithm - is it needed?

2012-07-11 Thread Hung-Sheng Tsao (LaoTsao) Ph.D


Sent from my iPad

On Jul 11, 2012, at 13:11, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Wed, 11 Jul 2012, Richard Elling wrote:
 The last studio release suitable for building OpenSolaris is available in 
 the repo.
 See the instructions at 
 http://wiki.illumos.org/display/illumos/How+To+Build+illumos
 
 Not correct as far as I can tell.  You should re-read the page you 
 referenced.  Oracle rescinded (or lost) the special Studio releases needed to 
 build the OpenSolaris kernel.  

hi
you can still download Studio 12, 12.1, and 12.2 through OTN, AFAIK


 The only way I can see to obtain these releases is illegally.
 
 However, Studio 12.3 (free download) produces user-space executables which 
 run fine under Illumos.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snapshots slow on sol11?

2012-06-27 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
just wondering: can you switch from Samba to the native SMB (CIFS) service?
regards

Sent from my iPad

On Jun 27, 2012, at 2:46, Carsten John cj...@mpi-bremen.de wrote:

 -Original message-
 CC:ZFS Discussions zfs-discuss@opensolaris.org; 
 From:Jim Klimov jimkli...@cos.ru
 Sent:Tue 26-06-2012 22:34
 Subject:Re: [zfs-discuss] snapshots slow on sol11?
 2012-06-26 23:57, Carsten John wrote:
 Hello everybody,
 
 I recently migrated a file server (NFS & Samba) from OpenSolaris (Build 
 111) 
 to Sol11.
 (After?) the move we are facing random (or random looking) outages of 
 our Samba...
 
 As for the timeouts, check whether your tuning (i.e. the migrated files 
 like /etc/system) doesn't enforce long TXG syncs (the default was 30 sec) 
 or something like that.
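 
 (a quick way to inspect the current value on a live kernel, and the matching 
 /etc/system override; the 5-second figure is just an illustration:
 
 # echo zfs_txg_timeout/D | mdb -k
 set zfs:zfs_txg_timeout = 5
 
 the mdb line prints the current sync interval in seconds; the set line goes in 
 /etc/system and takes effect at the next boot.)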
 
 Find some DTrace scripts to see if ZIL is intensively used during
 these user-profile writes, and if these writes are synchronous -
 maybe an SSD/DDR logging device might be useful for this scenario?
 
 Regarding the zfs-auto-snapshot, it is possible to install the old
 scripted package from OpenSolaris onto Solaris 10 at least; I did
 not have much experience with newer releases yet (timesliderd) so
 can't help better.
 
 HTH,
 //Jim Klimov
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 Hi everybody,
 
 in the meantime I was able to rule out the snapshots: I disabled snapshotting, 
 but the issue still persists. I will now check Jim's suggestions.
 
 thx so far
 
 
 Carsten
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (fwd) Re: ZFS NFS service hanging on Sunday

2012-06-25 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
in solaris, zfs caches many things, so you should have more ram
if you set up 18gb of swap, imho, ram should be higher than 4gb
regards

Sent from my iPad

On Jun 25, 2012, at 5:58, tpc...@mklab.ph.rhul.ac.uk wrote:

 
 2012-06-14 19:11, tpc...@mklab.ph.rhul.ac.uk wrote:
 
 In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, 
 tpc...@mklab.ph.r
 hul.ac.uk writes:
 Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
 My WAG is that your zpool history is hanging due to lack of
 RAM.
 
 Interesting.  In the problem state the system is usually quite responsive, 
 e.g. not memory thrashing.  Under Linux, which I'm more 
 familiar with, 'used memory' = 'total memory' - 'free memory' refers to 
 physical memory being used for data caching by 
 the kernel (which is still available for processes to allocate as needed) 
 together with memory allocated to processes, as opposed to 
 only physical memory already allocated and therefore really 'used'.  Does 
 this mean something different under Solaris ?
 
 Well, it is roughly similar. In Solaris there is a general notion
 
 [snipped]
 
 Dear Jim,
    Thanks for the detailed explanation of ZFS memory usage.  Special 
 thanks also to John D Groenveld for the initial suggestion of a lack-of-RAM 
 problem.  Since upping the RAM from 2GB to 4GB the machine has sailed through 
 the last two Sunday mornings w/o problem.  I was interested to 
 subsequently discover the Solaris command 'echo ::memstat | mdb -k', which 
 reveals just how much memory ZFS can use.
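 
 (for reference, that command plus a companion ARC check, both run as root:
 
 # echo ::memstat | mdb -k
 # kstat -p zfs:0:arcstats:size
 
 the first breaks physical memory down by consumer, including the ZFS file data 
 bucket; the second prints the current ARC size in bytes.)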
 
 Best regards
 Tom.
 
 --
 Tom Crane, Dept. Physics, Royal Holloway, University of London, Egham Hill,
 Egham, Surrey, TW20 0EX, England.
 Email:  T.Crane@rhul dot ac dot uk
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-23 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well
check  this link

https://shop.oracle.com/pls/ostore/product?p1=SunFireX4270M2serverp2=p3=p4=sc=ocom_x86_SunFireX4270M2servertz=-4:00

you may not like the price



Sent from my iPad

On Mar 23, 2012, at 17:16, The Honorable Senator and Mrs. John 
Blutarskybl...@nymph.paranoici.org wrote:

 On Fri Mar 23 at 10:06:12 2012 laot...@gmail.com wrote:
 
 well
 use the components of the x4170m2 as an example and you will be ok:
 intel cpu
 lsi sas controller, non-raid
 sas 7200rpm hdd
 my 2c
 
 That sounds too vague to be useful unless I could afford an X4170M2. I
 can't build a custom box and I don't have the resources to go over the parts
 list and order something with the same components. Thanks though.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good tower server for around 1,250 USD?

2012-03-22 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
well
use the components of the x4170m2 as an example and you will be ok:
intel cpu
lsi sas controller, non-raid
sas 7200rpm hdd
my 2c

Sent from my iPad

On Mar 22, 2012, at 14:41, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote:

 On Thu, 22 Mar 2012, The Honorable Senator and Mrs. John Blutarsky wrote:
 
 This will be a do-everything machine. I will use it for development, hosting
 various apps in zones (web, file server, mail server etc.) and running other
 systems (like a Solaris 11 test system) in VirtualBox. Ultimately I would
 like to put it under Solaris support so I am looking for something
 officially approved. The problem is there are so many systems on the HCL I
 don't know where to begin. One of the Supermicro super workstations looks
 
 Almost all of the systems listed on the HCL are defunct and no longer 
 purchasable except for on the used market.  Obtaining an approved system 
 seems very difficult. In spite of this, Solaris runs very well on many 
 non-approved modern systems.
 
 I don't know what that means as far as the ability to purchase Solaris 
 support.
 
 Bob
 -- 
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
hi
are the disks and sas controllers the same on both servers?
-LT

Sent from my iPad

On Mar 13, 2012, at 6:10, P-O Yliniemi p...@bsd-guide.net wrote:

 Hello,
 
 I'm currently replacing a temporary storage server (server1) with the one 
 that should be the final one (server2). To keep the data storage from the old 
 one I'm attempting to import it on the new server. Both servers are running 
 OpenIndiana server build 151a.
 
 Server 1 (old)
 The zpool consists of three disks in a raidz1 configuration:
 # zpool status
  pool: storage
 state: ONLINE
  scan: none requested
 config:
 
NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0
 
 errors: No known data errors
 
 Output of format command gives:
 # format
 AVAILABLE DISK SELECTIONS:
   0. c2t1d0 <LSILOGIC-LogicalVolume-3000 cyl 60785 alt 2 hd 255 sec 126>
  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 <ST3000DM- W1F07HW-0001-2.73TB>
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 <ST3000DM- W1F05H2-0001-2.73TB>
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 <ST3000DM- W1F032R-0001-2.73TB>
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 <ST3000DM- W1F07HZ-0001-2.73TB>
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0
 
 (c5d1 was previously used as a hot spare, but I removed it as an attempt to 
 export and import the zpool without the spare)
 
 # zpool export storage
 
 # zpool list
 (shows only rpool)
 
 # zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:
 
storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE
 
 (check to see if it is importable to the old server, this has also been 
 verified since I moved back the disks to the old server yesterday to have it 
 available during the night)
 
 zdb -l output in attached files.
 
 ---
 
 Server 2 (new)
 I have attached the disks on the new server in the same order (which 
 shouldn't matter as ZFS should locate the disks anyway)
 zpool import gives:
 
 root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:
 
storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE
 
 The problem is that all the disks are there and online, but the pool is 
 showing up as unavailable.
 
 Any ideas on what I can do more in order to solve this problem ?
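 
 (two low-risk checks worth trying from the new server, using a device name from 
 the listing above: zdb -l dumps the four on-disk ZFS labels, including the pool 
 name, pool GUID, and txg of each child, and zpool import -d points the scan at 
 an explicit device directory:
 
 # zdb -l /dev/dsk/c7t5000C50044E0F316d0s0
 # zpool import -d /dev/dsk storage  )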
 
 Regards,
  PeO
 
 
 
 zdb_l_c4d0s0.txt
 zdb_l_c4d1s0.txt
 zdb_l_c5d0s0.txt
 zdb_l_c5d1s0.txt
 zdb_l_c7t5000C50044A30193d0s0.txt
 zdb_l_c7t5000C50044E0F316d0s0.txt
 zdb_l_c7t5000C50044760F6Ed0s0.txt
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Advice for migrating ZFS configuration

2012-03-07 Thread Hung-Sheng Tsao (LaoTsao) Ph.D
read the link please
it seems that after you create the raidz1 zpool
you need to offline and destroy the fake disk, so it does not receive data when you do the
copy
copy the data by following the steps in the link

then replace the fake disk with the real disk, as in the sketch below
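
a minimal sketch of the whole sequence (the pool, file, and disk names are
hypothetical; the fake device is a sparse file, so it takes no real space):

# zpool create temp c3t0d0
# zfs snapshot -r tank@move
# zfs send -R tank@move | zfs receive -F -d temp
# zpool destroy tank
# mkfile -n 2000g /tmp/fake
# zpool create tank raidz c1t0d0 c1t1d0 /tmp/fake
# zpool offline tank /tmp/fake
(copy the data back from temp with another send/receive, then:)
# zpool destroy temp
# zpool replace tank /tmp/fake c3t0d0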

this is a good approach that i did not know before
-LT

Sent from my iPad

On Mar 7, 2012, at 17:48, Bob Doolittle bob.doolit...@oracle.com wrote:

 Wait, I'm not following the last few steps you suggest. Comments inline:
 
 On 03/07/12 17:03, Fajar A. Nugraha wrote:
 - use the one new disk to create a temporary pool
 - copy the data (zfs snapshot -r + zfs send -R | zfs receive)
 - destroy old pool
 - create a three-disk raidz pool using two disks and a fake device,
 something like http://www.dev-eth0.de/creating-raidz-with-missing-device/
 
 Don't I need to copy the data back from the temporary pool to the new raidz 
 pool at this point?
 I'm not understanding the process beyond this point, can you clarify please?
 
 - destroy the temporary pool
 
 So this leaves the data intact on the disk?
 
 - replace the fake device with now-free disk
 
 So this replicates the data on the previously-free disk across the raidz pool?
 
 What's the point of the following export/import steps? Renaming? Why can't I 
 just give the old pool name to the raidz pool when I create it?
 
 - export the new pool
 - import the new pool and rename it in the process: zpool import
 temp_pool_name old_pool_name
 
 Thanks!
 
 -Bob
 
 
 
 In the end I
 want the three-disk raidz to have the same name (and mount point) as the
 original zpool. There must be an easy way to do this.
 Nope.
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need hint on pool setup

2012-02-01 Thread Hung-Sheng Tsao (laoTsao)
my 2c
1 with 20 hdds, just do 10 two-way mirrors, with 1 spare
2 or raidz2 with 5 devices per vdev (4 vdevs for 20 hdds), with one spare; see the sketch below
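
a sketch of both layouts, scaled down to fewer disks for brevity (device names
are hypothetical; extend the pattern to all 20 disks):

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 spare c0t4d0
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 raidz2 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 spare c0t10d0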

Sent from my iPad

On Feb 1, 2012, at 3:49, Thomas Nau thomas@uni-ulm.de wrote:

 Hi
 
 On 01/31/2012 10:05 PM, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. wrote:
 what is your main application for ZFS? e.g. just NFS or iSCSI for home dirs 
 or VMs? or Windows clients?
 
 Yes, file service only, using CIFS, NFS, Samba and maybe iSCSI
 
 Is performance important? or space is more important?
 
 a good balance ;)
 
 what is the memory of your server?
 
 96G
 
 do you want to use ZIL or L2ARC?
 
 STEC ZeusRAM as ZIL (mirrored); maybe SSDs as L2ARC
 
 what is your backup  or DR plan?
 
 continuous rolling snapshot plus send/receive to remote site
 TSM backup at least once a week to tape; depends on how much
 time the TSM client needs to walk the filesystems
 
 You need to answer all these questions first
 
 did so
 
 Thomas
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP JBOD D2700 - ok?

2012-01-31 Thread Hung-Sheng Tsao (laoTsao)
which server do you attach to the D2700?
the hp spec for the d2700 does not include solaris, so i am not sure how you get support
from hp :-(

Sent from my iPad

On Jan 31, 2012, at 20:25, Ragnar Sundblad ra...@csc.kth.se wrote:

 
 Just to follow up on this, in case there are others interested:
 
 The D2700s seems to work quite ok for us. We have four issues with them,
 all of which we will ignore for now:
 - They hang when I insert an Intel SSD SATA (!) disk (I wanted to test,
  both for log device and cache device, and I had those around).
  This could probably be fixed with a firmware upgrade, but:
 - It seems the firmware can't be upgraded if you don't have one of a few
  special HP raid cards! Silly!
 - The LEDs on the disks: on the first bay it is turned off, on the rest
  they are turned on. They all flash on activity. I have no idea why this
  is, and I know too little about SAS chassis to even guess. This could
  possibly change with a firmware upgrade of the chassis controllers, but
  maybe not.
 - In Solaris 11, the /dev/chassis/HP-D2700-SAS-AJ941A.xx.../Drive_bay_NN
  is supposed to contain a soft link to the device for the disk in the bay.
  This doesn't seem to work for bay 0. It may be related to the previous
  problem, but maybe not.
 
 (We may buy a HP raid card just to be able to upgrade their firmware.)
 
 If we have had the time we probably would have tested some other jbods
 too, but we need to get those rolling soon, and these seem good enough.
 
 We have tested them with multipathed SAS, using a single LSI SAS 9205-8e
 HBA and connecting the two ports on the HBA to the two controllers in the
 D2700.
 
 To get multipathing, you need to configure the scsi_vhci driver, in
 /kernel/drv/scsi_vhci.conf for sol10 or /etc/driver/drv/scsi_vhci.conf for
 sol11-x86. To get better performance, you probably want to use
 load-balance=logical-block instead of load-balance=round-robin.
 See examples below.
 
 You may also need to run stmsboot -e to enable multipathing. I still haven't
 figured out what that does (more than updating /etc/vfstab and /etc/dumpdates,
 which you typically don't use with zfs), maybe nothing.
 
 Thanks to all that have helped with input!
 
 /ragge
 
 
 -
 
 
 For solaris 10u8 and later, in /kernel/drv/scsi_vhci.conf.DIST:
 ###
 ...
 device-type-scsi-options-list =
  "HP      D2700 SAS AJ941A", "symmetric-option",
  "HP      EG", "symmetric-option";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 symmetric-option = 0x100;

 device-type-mpxio-options-list =
  "device-type=HP      D2700 SAS AJ941A", "load-balance-options=logical-block-options",
  "device-type=HP      EG", "load-balance-options=logical-block-options";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 logical-block-options="load-balance=logical-block", "region-size=20";
 ...
 ###
 
 
 For solaris 11, in /etc/driver/drv/scsi_vhci.conf on x86
 (in /kernel/drv/scsi_vhci.conf.DIST on sparc?):
 ###
 ...
 #load-balance="round-robin";
 load-balance="logical-block";
 region-size=20;
 ...
 scsi-vhci-failover-override =
   "HP      D2700 SAS AJ941A", "f_sym",
   "HP      EG", "f_sym";
 # HP 600 GB 2.5 inch SAS disks: EG0600FBDBU, EG0600FBDSR
 ###
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and iscsi performance help

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
hi
IMHO, upgrade to s11 if possible
and use the COMSTAR-based iscsi stack
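
for reference, the initiator side is configured the same way on s10 and s11; a
minimal sketch against an array like the md3000i (the portal address is
hypothetical):

# iscsiadm add discovery-address 192.168.10.5:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi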

Sent from my iPad

On Jan 26, 2012, at 23:25, Ivan Rodriguez ivan...@gmail.com wrote:

 Dear fellows,
 
 We have a backup server with a zpool size of 20 TB; we transfer
 information using zfs snapshots every day (we have around 300 filesystems on
 that pool).
 The storage is a dell md3000i connected by iscsi, and the pool is
 currently version 10. The same storage is connected
 to another server with a smaller pool of 3 TB (zpool version 10); this
 server is working fine and speed is good between the storage
 and the server. However, on the server with the 20 TB pool, performance is
 an issue: after we restart the server
 performance is good, but with time, let's say a week, the performance
 keeps dropping until we have to
 bounce the server again (same behavior with the new version of solaris; in
 that case performance drops in 2 days), with no errors in the logs, the storage,
 or zpool status -v
 
 We suspect that the pool has some issues; probably there is corruption
 somewhere. We tested solaris 10 8/11 with zpool 29,
 although we haven't updated the pool itself; with the new solaris the
 performance is even worse, and every time
 that we restart the server we get stuff like this:
 
 SOURCE: zfs-diagnosis, REV: 1.0
 EVENT-ID: 0168621d-3f61-c1fc-bc73-c50efaa836f4
 DESC: All faults associated with an event id have been addressed.
 Refer to http://sun.com/msg/FMD-8000-4M for more information.
 AUTO-RESPONSE: Some system components offlined because of the
 original fault may have been brought back online.
 IMPACT: Performance degradation of the system due to the original
 fault may have been recovered.
 REC-ACTION: Use fmdump -v -u EVENT-ID to identify the repaired components.
 [ID 377184 daemon.notice] SUNW-MSG-ID: FMD-8000-6U, TYPE: Resolved,
 VER: 1, SEVERITY: Minor
 
 And we need to export and import the pool in order to be  able to  access it.
 
 Now my question is do you guys know if we upgrade the pool does this
 process  fix some issues in the metadata of the pool ?
 We've been holding back the upgrade because we know that after the
 upgrade there is no way to return to version 10.
 
 Has anybody experienced corruption in the pool without a hardware
 failure ?
 Are there any tools or procedures to find corruption in the pool or the
 filesystems inside the pool ? (besides scrub)
 
 So far we went through the connections cables, ports and controllers
 between the storage and the server everything seems fine, we've
 swapped network interfaces, cables, switch ports etc etc.
 
 
 Any ideas would be really appreciate it.
 
 Cheers
 Ivan
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-27 Thread Hung-Sheng Tsao (laoTsao)
it seems that you will need to work with oracle support :-(

Sent from my iPad

On Jan 27, 2012, at 3:49, sureshkumar sachinnsur...@gmail.com wrote:

 Hi Christian ,
 
 I disabled MPXIO & zpool clear works for some time, but it 
 failed after a few iterations.
 
 I am using a Sparc machine [with the same OS level as the x86 one] and I didn't 
 face any issue with the Sparc architecture.
 Could the problem be with the boot sequence?
 
 Please help me.
 
 Thanks,
 Sudheer.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-24 Thread Hung-Sheng Tsao (laoTsao)
how did you issue the reboot? try 
shutdown -i6 -y -g0 

Sent from my iPad

On Jan 24, 2012, at 7:03, sureshkumar sachinnsur...@gmail.com wrote:

 Hi all,
 
 
 I am new to Solaris & I am facing an issue with the dynapath [multipath s/w] 
 for Solaris 10u10 x86.
 
 I am facing an issue with the zpool.
 
 My problem is that I am unable to access the zpool after issuing a reboot.
 
 I am pasting the zpool status below.
 
 ==
 bash-3.2# zpool status
   pool: test
  state: UNAVAIL
  status: One or more devices could not be opened.  There are insufficient
 replicas for the pool to continue functioning.
  action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-3C
  scan: none requested
  config:
 
 NAME STATE READ WRITE CKSUM
 test UNAVAIL  0 0 0  insufficient 
 replicas
 = 
 
 But all my devices are online & I am able to access them.
 When I export & import the zpool, it comes back to the available state.
 
 I am not getting whats the problem with the reboot.
 
 Any suggestions regarding this was very helpful.
 
 Thanks & Regards,
 Sudheer.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] patching a solaris server with zones on zfs file systems

2012-01-21 Thread Hung-Sheng Tsao (laoTsao)
which version of solaris?
s10u?: use live upgrade: zfs snap, halt the zone, back up the zone, zoneadm detach the zone, 
then zoneadm attach -U the zone after the os upgrade (done via zfs snap plus live upgrade, or 
just an upgrade from dvd); see the sketch below
or s11?: beadm for a new boot environment, upgrade the os, treat the zones as above
regards
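
a minimal sketch of the s10u path (the BE, zone, and install-image names are
hypothetical; note that older s10 updates spell the update-on-attach flag -u):

# zoneadm -z zone1 halt
# zoneadm -z zone1 detach
# lucreate -n newBE
# luupgrade -u -n newBE -s /net/installserver/s10u10
# luactivate newBE
# init 6
(after booting into the new BE:)
# zoneadm -z zone1 attach -U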



Sent from my iPad

On Jan 21, 2012, at 5:46, bhanu prakash bhanu.sys...@gmail.com wrote:

 Hi All,
 
 Please let me know the procedure for patching a server which has 5 
 zones on zfs file systems.
 
 The root file system is on an internal disk and the zones are on SAN.
 
 Thank you all,
 Bhanu
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Do the disks in a zpool have a private region that I can read to get a zpool name or id?

2012-01-12 Thread Hung-Sheng Tsao (laoTsao)
if the disks are assigned to two hosts,
maybe just running zpool import on the other host will show the zpool? not sure

as for AIX controlling the hdd: a zpool needs a partition/label that solaris can understand
i do not know what AIX uses for partitioning, but it should not be the same as solaris
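
for the direct question: yes, the on-disk ZFS label carries the pool name and
guid, and zdb can dump it from the second host (the device name is hypothetical):

# zdb -l /dev/dsk/c1t2d0s0

look for the name: and pool_guid: fields in the label output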



Sent from my iPad

On Jan 12, 2012, at 9:51, adele@oracle.com adele@oracle.com wrote:

 Hi all,
 
 My customer has the following question.
 
 
 Assume I have allocated a LUN from external storage to two hosts ( by mistake 
 ). I create a zpool with this LUN on host1 with no errors. On host2 when I 
 try to create a zpool by
 using the same disk ( which is allocated to host2 as well ), zpool create 
 comes back with an error saying "/dev/dsk/cXtXdX is part of exported or 
 potentially active ZFS pool test".
 Is there a way for me to check which zpool a disk belongs to from 'host2'? Do 
 the disks in a zpool have a private region that I can read to get a zpool 
 name or id?
 
 
 Steps required to reproduce the problem
 
 Disk doubly allocated to host1, host2
 host1 sees disk as disk100
 host2 sees disk as disk101
 host1# zpool create host1_pool disk1 disk2 disk100
 returns success ( as expected )
 host2# zpool create host2_pool disk1 disk2 disk101
 invalid vdev specification
 use '-f' to override the following errors:
 /dev/dsk/disk101 is part of exported or potentially active ZFS pool test. 
 Please see zpool
 
 zpool did catch that the disk is part of an active pool, but since it's not 
 on the same host I am not getting the name of the pool to which disk101 is 
 allocated. It's possible we might go ahead and use '-f' option to create the 
 zpool and start using this filesystem. By doing this we're potentially 
 destroying filesystems on host1, host2 which could lead to severe downtime.
 
 Any way to get the pool name to which the disk101 is assigned ( with a 
 different name on a different host )? This would aid us tremendously in 
 avoiding a potential issue. This has happened once before with Solaris 9 and UFS, 
 taking out two Solaris machines.
 
 What happens if a disk is assigned to an AIX box and is set up as part of a volume 
 manager on AIX, and we try to create a zpool on a Solaris host? Will ZFS catch 
 this by saying something is wrong with the disk?
 
 Regards,
 Adele
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Unable to allocate dma memory for extra SGL

2012-01-10 Thread Hung-Sheng Tsao (laoTsao)
what is the ram size?
what is the zpool setup, and what are your hba and hdd size and type?


Sent from my iPad

On Jan 10, 2012, at 21:07, Ray Van Dolson rvandol...@esri.com wrote:

 Hi all;
 
 We have a Solaris 10 U9 x86 instance running on Silicon Mechanics /
 SuperMicro hardware.
 
 Occasionally under high load (ZFS scrub for example), the box becomes
 non-responsive (it continues to respond to ping but nothing else works
 -- not even the local console).  Our only solution is to hard reset
 after which everything comes up normally.
 
 Logs are showing the following:
 
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:08 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:08 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2Unable to allocate dma memory for 
 extra SGL.
  Jan  8 09:44:10 prodsys-dmz-zfs2 scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci8086,3410@9/pci1000,72@0 (mpt_sas0):
  Jan  8 09:44:10 prodsys-dmz-zfs2MPT SGL mem alloc failed
  Jan  8 09:44:11 prodsys-dmz-zfs2 rpcmod: [ID 851375 kern.warning] WARNING: 
 svc_cots_kdup no slots free
 
 I am able to resolve the last error by adjusting upwards the duplicate
 request cache sizes, but have been unable to find anything on the MPT
 SGL errors.
 
 Anyone have any thoughts on what this error might be?
 
 At this point, we are simply going to apply patches to this box (we do
 see an outstanding mpt patch):
 
 147150 --  01 R-- 124 SunOS 5.10_x86: mpt_sas patch
 147702 --  03 R--  21 SunOS 5.10_x86: mpt patch
 
 But we have another identically configured box at the same patch level
 (admittedly with slightly less workload, though it also undergoes
 monthly zfs scrubs) which does not experience this issue.
 
 Ray
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
what does zpool status show?
are you using the default 128k recordsize for the zpool?
is your server x86? or sparc t3? how many sockets?
IMHO, t3 for oracle needs careful tuning
since many oracle ops need a fast single-thread cpu
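
a sketch of watching the pool while the test runs, plus a bonnie++ invocation
sized at twice RAM so the ARC cannot absorb the whole working set (the pool and
directory names are hypothetical):

# zpool iostat -v dbpool 5
# bonnie++ -d /dbpool/bench -s 256g -u root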


Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 The disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing them with 
 the zpool iostat output. It's with zpool iostat that I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got solaris 10 9/10 running on a T3. It's an oracle box with 128GB of 
 memory. I've been trying to load test the box with 
 bonnie++. I seem to get 80 to 90 K reads, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
one still does not understand your setup
1 what is the hba in the T3-2?
2 did you set up raid6 (how?) in the ramsan array, or present the ssds as jbod to the zpool?
3 which model of RAMSAN?
4 is there any other storage behind the RAMSAN?
5 did you set up the zpool with a zil and/or l2arc?
6 IMHO, the hybrid approach to ZFS is the most cost effective: 7200rpm SAS with 
zil and l2arc and a mirrored zpool

the problem with raid6 with 8k stripes and oracle's 8k blocks is the mismatch of stripe size
we know zpool uses a dynamic stripe size in raidz, not the same as in hw raid
but similar considerations still exist



Sent from my iPad

On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:

 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 The disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing them with 
 the zpool iostat output. It's with zpool iostat that I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got solaris 10 9/10 running on a T3. It's an oracle box with 128GB of 
 memory. I've been trying to load test the box with 
 bonnie++. I seem to get 80 to 90 K reads, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-05 Thread Hung-Sheng Tsao (laoTsao)
i just took a look at the ramsan web site
there are many whitepapers on oracle, none on ZFS


Sent from my iPad

On Jan 5, 2012, at 12:58, Hung-Sheng Tsao (laoTsao) laot...@gmail.com wrote:

 one still does not understand your setup
 1 what is the hba in the T3-2?
 2 did you set up raid6 (how?) in the ramsan array, or present the ssds as jbod to the zpool?
 3 which model of RAMSAN?
 4 is there any other storage behind the RAMSAN?
 5 did you set up the zpool with a zil and/or l2arc?
 6 IMHO, the hybrid approach to ZFS is the most cost effective: 7200rpm SAS 
 with zil and l2arc and a mirrored zpool
 
 the problem with raid6 with 8k stripes and oracle's 8k blocks is the mismatch of stripe size
 we know zpool uses a dynamic stripe size in raidz, not the same as in hw raid
 but similar considerations still exist
 
 
 
 Sent from my iPad
 
 On Jan 5, 2012, at 11:40, grant lowe glow...@gmail.com wrote:
 
 Ok. I blew it. I didn't add enough information. Here's some more detail:
 
 The disk array is a RAMSAN array, with RAID6 and 8K stripes. I'm measuring 
 performance with the results of the bonnie++ output and comparing them with 
 the zpool iostat output. It's with zpool iostat that I'm not seeing a lot 
 of writes.
 
 Like I said, I'm new to this and if I need to provide anything else I will. 
 Thanks, all.
 
 
 On Wed, Jan 4, 2012 at 2:59 PM, grant lowe glow...@gmail.com wrote:
 Hi all,
 
 I've got solaris 10 9/10 running on a T3. It's an oracle box with 128GB of 
 memory. I've been trying to load test the box with 
 bonnie++. I seem to get 80 to 90 K reads, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Stress test zfs

2012-01-04 Thread Hung-Sheng Tsao (laoTsao)
what is your storage?
internal sas or external array
what is  your zfs setup?


Sent from my iPad

On Jan 4, 2012, at 17:59, grant lowe glow...@gmail.com wrote:

 Hi all,
 
 I've got solaris 10 9/10 running on a T3. It's an oracle box with 128GB of 
 memory. I've been trying to load test the box with 
 bonnie++. I seem to get 80 to 90 K reads, but can't seem to get more 
 than a couple K for writes. Any suggestions? Or should I take this to a 
 bonnie++ mailing list? Any help is appreciated. I'm kinda new to load 
 testing. Thanks.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Resolving performance issue w/ deduplication (NexentaStor)

2011-12-30 Thread Hung-Sheng Tsao (laoTsao)
s11 now supports shadow migration, just for this purpose, AFAIK; see the sketch below
not sure nexentaStor supports shadow migration
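
a minimal sketch with hypothetical names: the new (non-dedup) filesystem is
created with the shadow property pointing at the old one, which must stay
mounted read-only while the background migration runs:

# zfs set readonly=on oldpool/fs
# zfs create -o shadow=file:///oldpool/fs newpool/fs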



Sent from my iPad

On Dec 30, 2011, at 2:03, Ray Van Dolson rvandol...@esri.com wrote:

 On Thu, Dec 29, 2011 at 10:59:04PM -0800, Fajar A. Nugraha wrote:
 On Fri, Dec 30, 2011 at 1:31 PM, Ray Van Dolson rvandol...@esri.com wrote:
 Is there a non-disruptive way to undeduplicate everything and expunge
 the DDT?
 
 AFAIK, no
 
  zfs send/recv and then back perhaps (we have the extra
 space)?
 
 That should work, but it's disruptive :D
 
 Others might provide better answer though.
 
 Well, slightly _less_ disruptive perhaps.  We can zfs send to another
 file system on the same system, but different set of disks.  We then
 disable NFS shares on the original, do a final zfs send to sync, then
 share out the new undeduplicated file system with the same name.
 Hopefully the window here is short enough that NFS clients are able to
 recover gracefully.
 
 We'd then wipe out the old zpool, recreate and do the reverse to get
 data back onto it..
 
 Thanks,
 Ray
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
what is the ram size?
are there many snap? create then delete?
did you run a scrub?

Sent from my iPad

On Dec 18, 2011, at 10:46, Jan-Aage Frydenbø-Bruvoll j...@architechs.eu wrote:

 Hi,
 
 On Sun, Dec 18, 2011 at 15:13, Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.
 laot...@gmail.com wrote:
 what is the output of zpool status for pool1 and pool2?
 it seems that you have a mixed configuration in pool3, with plain disks and mirrors
 
 The other two pools show very similar outputs:
 
 root@stor:~# zpool status pool1
  pool: pool1
 state: ONLINE
 scan: resilvered 1.41M in 0h0m with 0 errors on Sun Dec  4 17:42:35 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool1  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t12d0   ONLINE   0 0 0
c1t13d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t24d0   ONLINE   0 0 0
c1t25d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t30d0   ONLINE   0 0 0
c1t31d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t32d0   ONLINE   0 0 0
c1t33d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p6  ONLINE   0 0 0
c2t3d0p6  ONLINE   0 0 0
cache
  c2t2d0p10   ONLINE   0 0 0
  c2t3d0p10   ONLINE   0 0 0
 
 errors: No known data errors
 root@stor:~# zpool status pool2
  pool: pool2
 state: ONLINE
 scan: scrub canceled on Wed Dec 14 07:51:50 2011
 config:
 
NAME  STATE READ WRITE CKSUM
pool2 ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t14d0   ONLINE   0 0 0
c1t15d0   ONLINE   0 0 0
  mirror-1ONLINE   0 0 0
c1t18d0   ONLINE   0 0 0
c1t19d0   ONLINE   0 0 0
  mirror-2ONLINE   0 0 0
c1t20d0   ONLINE   0 0 0
c1t21d0   ONLINE   0 0 0
  mirror-3ONLINE   0 0 0
c1t22d0   ONLINE   0 0 0
c1t23d0   ONLINE   0 0 0
logs
  mirror-4ONLINE   0 0 0
c2t2d0p7  ONLINE   0 0 0
c2t3d0p7  ONLINE   0 0 0
cache
  c2t2d0p11   ONLINE   0 0 0
  c2t3d0p11   ONLINE   0 0 0
 
 The affected pool does indeed have a mix of straight disks and
 mirrored disks (due to running out of vdevs on the controller),
 however it has to be added that the performance of the affected pool
 was excellent until around 3 weeks ago, and there have been no
 structural changes nor to the pools neither to anything else on this
 server in the last half year or so.
 
 -jan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Hung-Sheng Tsao (laoTsao)
not sure oi supports shadow migration
or you may have to send the zpool to another server and then send it back, to defragment it (sketch below)
regards
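
a sketch of the round trip (host and pool names hypothetical):

# zfs snapshot -r pool1@evac
# zfs send -R pool1@evac | ssh otherhost zfs receive -d -F backup
(destroy and recreate pool1, then reverse the send/receive; rewriting the data
lays the blocks out contiguously again)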

Sent from my iPad

On Dec 19, 2011, at 8:15, Gary Mills gary_mi...@fastmail.fm wrote:

 On Mon, Dec 19, 2011 at 11:58:57AM +, Jan-Aage Frydenbø-Bruvoll wrote:
 
 2011/12/19 Hung-Sheng Tsao (laoTsao) laot...@gmail.com:
 did you run a scrub?
 
 Yes, as part of the previous drive failure. Nothing reported there.
 
 Now, interestingly - I deleted two of the oldest snapshots yesterday,
 and guess what - the performance went back to normal for a while. Now
 it is severely dropping again - after a good while on 1.5-2GB/s I am
 again seeing write performance in the 1-10MB/s range.
 
 That behavior is a symptom of fragmentation.  Writes slow down
 dramatically when there are no contiguous blocks available.  Deleting
 a snapshot provides some of these, but only temporarily.
 
 -- 
 -Gary Mills--refurb--Winnipeg, Manitoba, Canada-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA hardware advice

2011-12-16 Thread Hung-Sheng Tsao (laoTsao)
imho, if possible pick sas 7200rpm hdds
no hw raid for ZFS
mirrors, with a zil and a good amount of memory


Sent from my iPad

On Dec 16, 2011, at 17:36, t...@ownmail.net wrote:

 I could use some help with choosing hardware for a storage server. For
 budgetary and density reasons, we had settled on LFF SATA drives in the
 storage server. I had closed in on models from HP (DL180 G6) and IBM
 (x3630 M3), before discovering warnings against connecting SATA drives
 with SAS expanders.
 
 So I'd like to ask what's the safest way to manage SATA drives. We're
 looking for a 12- (ideally 14-) bay LFF server, 2-3U, similar to the above
 models. The HP and IBM models both come with SAS expanders built into
 their backplanes. My questions are:
 
 1. Kludginess aside, can we build a dependable SMB server using
 integrated HP or IBM expanders plus the workaround
 (allow-bus-device-reset=0) presented here: 
 http://gdamore.blogspot.com/2010/12/update-on-sata-expanders.html ?
 
 2. Would it be better to find a SATA card with lots of ports, and make
 1:1 connections? I found some cards (arc-128, Adaptec 2820SA) w/Solaris
 support, for example, but I don't know how reliable they are or whether
 they support a clean JBOD mode.
 
 3. Assuming native SATA is the way to go, where should we look for
 hardware? I'd like the IBM & HP options because of the LOM & warranty,
 but I wouldn't think the hot-swap backplane offers any way to bypass the
 SAS expanders (correct me if I'm wrong here!). I found this JBOD:
 http://www.pc-pitstop.com/sata_enclosures/sat122urd.asp  I also know
 about SuperMicro. Are there any other vendors or models worth
 considering?
 
 Thanks!
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Hung-Sheng Tsao (LaoTsao) Ph. D.


IMHO, zfs needs to run on all kinds of HW
the T-series CMT servers have been able to help with sha calculation since the 
T1 days, but I did not see any work in ZFS to take advantage of it



On 5/10/2011 11:29 AM, Anatoly wrote:

Good day,

I think ZFS could take advantage of the GPU for sha256 calculation, 
encryption, and maybe compression. Modern video cards, like the ATI HD 5xxx 
or 6xxx series, can calculate sha256 50-100 times faster than a modern 
4-core CPU.


The kgpu project for linux shows nice results.

'zfs scrub' would run freely on high-performance ZFS pools.

The only problem is that there are no AMD/Nvidia drivers for Solaris that 
support hardware-assisted OpenCL.


Is anyone interested in it?

Best regards,
Anatoly Legkodymov.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss