Re: [zfs-discuss] stupid ZFS question - floating point operations

2010-12-22 Thread Angelo Rajadurai
If I remember correctly, Solaris, like most other operating systems, does not save
or restore the floating-point registers when context-switching from user to kernel
mode, so doing any floating-point operations in the kernel would corrupt the user's
floating-point state. This means ZFS cannot be doing any floating-point operations in
kernel context.

Others wiser than I may be able to assert this with more certainty.

-Angelo


On Dec 22, 2010, at 2:44 PM, Jerry Kemp wrote:

 I have a coworker whose primary expertise is in another flavor of Unix.
 
 This coworker lists floating-point operations as one of ZFS's detriments.
 
 I'm not really sure what he means specifically, or where he got this
 reference from.
 
 In an effort to refute what I believe is an error or misunderstanding on
 his part, I have spent time on Yahoo, Google, the ZFS section of
 OpenSolaris.org, etc. I really haven't turned up much of anything that
 would prove or disprove his comments. The one thing I haven't done is
 to go through the ZFS source code, but it's been years since I have done
 any serious programming.
 
 If someone from Oracle, or anyone on this mailing list, could point me
 towards any documentation, or give me a definitive word, I would sure
 appreciate it. If there were floating-point operations going on within
 ZFS, at this point I am uncertain as to what they would be.
 
 TIA for any comments,
 
 Jerry

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to put Solaris 11 into continuous reboot

2010-12-16 Thread Angelo Rajadurai
Why not write an SMF service that reboots the system? Make it dependent on all 
the services that you need to start before the reboot. This would be equivalent 
to creating an rc-script. 
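
A minimal sketch of what that could look like (untested; the FMRI, file path, and
the single stand-in dependency on the multi-user-server milestone are placeholders
you would adapt to the services you actually need up before each reboot):

# Write a minimal manifest for a transient "reboot loop" service.
cat > /var/tmp/reboot-loop.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='reboot-loop'>
  <service name='site/reboot-loop' type='service' version='1'>
    <create_default_instance enabled='false'/>
    <single_instance/>
    <!-- Stand-in dependency: wait for whatever must be running before the reboot. -->
    <dependency name='multi_user' grouping='require_all' restart_on='none'
        type='service'>
      <service_fmri value='svc:/milestone/multi-user-server:default'/>
    </dependency>
    <!-- Start method simply reboots the box. -->
    <exec_method type='method' name='start'
        exec='/usr/sbin/shutdown -y -g 0 -i 6' timeout_seconds='120'/>
    <exec_method type='method' name='stop' exec=':true' timeout_seconds='60'/>
    <property_group name='startd' type='framework'>
      <propval name='duration' type='astring' value='transient'/>
    </property_group>
  </service>
</service_bundle>
EOF

svccfg validate /var/tmp/reboot-loop.xml
svccfg import /var/tmp/reboot-loop.xml
svcadm enable site/reboot-loop   # persistent, so it fires on every boot

Since the enable is persistent, the machine reboots every time it comes back up; to
break the loop, boot with the milestone set to none (or single-user) and disable the
service.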

-Angelo

On Dec 16, 2010, at 12:07 AM, rachana wrote:

 Hi, 
   Can anybody tell me how to put my Solaris 11 VM into a continuous reboot
 loop? I need to perform a boot/halt test.
 -- 
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS storage server hardware

2009-11-17 Thread Angelo Rajadurai
Also, if you are a startup, there are some ridiculously sweet deals on Sun 
hardware through the Sun Startup Essentials program:
http://sun.com/startups

This way you do not need to worry about compatibility and you get all the 
Enterprise RAS features at a pretty low price point.

-Angelo


On Nov 17, 2009, at 4:14 PM, Bruno Sousa wrote:

 Hi,
 
 I currently have a 1U server (Sun X2200) with 2 LSI HBAs, each attached to a
 Supermicro JBOD chassis with 24 1TB SATA disks, and so far so good.
 So I have 48TB of raw capacity, with a mirrored configuration for NFS
 usage (Xen VMs), and I feel that for the price I paid I have a very nice
 system.
 
 
 Bruno
 
 Ian Allison wrote:
 Hi,
 
 I know (from the zfs-discuss archives and other places [1,2,3,4]) that
 a lot of people are looking to use ZFS as a storage server in the
 10-100TB range.
 
 I'm in the same boat, but I've found that hardware choice is the
 biggest issue. I'm struggling to find something which will work nicely
 under Solaris and which meets my expectations in terms of hardware.
 Because of the compatibility issues, I thought I should ask here to see
 what solutions people have already found.
 
 
 I'm learning as I go here, but as far as I've been able to determine,
 the basic choices for attaching drives seem to be:
 
 1) SATA port multipliers
 2) SAS multilane enclosures
 3) SAS expanders
 
 In option 1, the controller can only talk to one device at a time; in
 option 2, each miniSAS connector can talk to 4 drives at a time; in
 option 3, the expander allows communication with up to 128 drives. I'm
 thinking about having ~8-16 drives on each controller (PCIe card), and
 because I might get greedier in the future and add more drives to each
 controller, I think option 3 is the best way to go. I can have a
 motherboard with a lot of PCIe slots and one controller card for each
 expander.
 
 
 Cases like the Supermicro 846E1-R900B have 24 hot swap bays accessible
 via a single (4u) LSI SASX36 SAS expander chip, but I'm worried about
 controller death and having the backplane as a single point of failure.
 
 I guess, ideally, I'd like a 4U enclosure with two 2U SAS expanders. If
 I wanted hardware redundancy, I could then use mirrored vdevs with one
 side of each mirror on one controller/expander pair and the other
 side on a separate pair. This would allow me to survive controller or
 expander death as well as hard drive failure.
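
A minimal sketch of that layout (hypothetical device names; c1* behind one
controller/expander pair, c2* behind the other):

# Each mirror takes one disk from each controller/expander pair, so either
# path can die without taking the pool down (device names are made up).
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  mirror c1t3d0 c2t3d0

zpool status tank   # check that each mirror spans both paths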
 
 
 Replace motherboard: ~500
 Replace backplane: ~500
 Replace controller: ~300
 Replace disk (SATA): ~100
 
 
 Does anyone have any example systems they have built or any thoughts
 on what I could do differently?
 
 Best regards,
Ian.
 
 
 [1] http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg27234.html
 [2] http://www.avsforum.com/avs-vb/showthread.php?p=17543496
 [3] http://www.stringliterals.com/?p=53
 [4] http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg22761.html
 
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Angelo Rajadurai
Just FYI, I ran a slightly different version of the test: I used SSDs
(for log and cache), 3 x 32GB, two mirrored for the log and one for the
cache. The system is a 4150 with 12GB of RAM. Here are the results:
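
(For reference, a pool with that layout -- mirrored data disks, a mirrored SSD log,
and one SSD cache device -- is put together roughly like this, using the device
names from the status output below; not necessarily the exact commands used here:)

zpool create sdpool mirror c7t1d0 c7t3d0
zpool add sdpool log mirror c7t2d0 c8t5d0
zpool add sdpool cache c8t4d0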


$ pfexec ./zfs-cache-test.ksh sdpool
System Configuration:
System architecture: i386
System release level: 5.11 snv_111b
CPU ISA list: amd64 pentium_pro+mmx pentium_pro pentium+mmx pentium  
i486 i386 i86


Pool configuration:
  pool: sdpool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jul 10  
11:33:01 2009

config:

        NAME        STATE     READ WRITE CKSUM
        sdpool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
        logs        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c8t5d0  ONLINE       0     0     0
        cache
          c8t4d0    ONLINE       0     0     0

errors: No known data errors

zfs unmount sdpool/zfscachetest
zfs mount sdpool/zfscachetest

Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    3m27.06s
user    0m2.05s
sys     0m30.14s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    2m47.32s
user    0m2.09s
sys     0m32.32s

Feel free to clean up with 'zfs destroy sdpool/zfscachetest'.

-Angelo


On Jul 14, 2009, at 12:09 PM, Bob Friesenhahn wrote:


On Tue, 14 Jul 2009, Jorgen Lundman wrote:

I have no idea. I downloaded the script from Bob without  
modifications and ran it specifying only the name of our pool.  
Should I have changed something to run the test?


If your system has quite a lot of memory, the number of files should  
be increased to at least match the amount of memory.


We have two kinds of x4500/x4540: those with Sol 10 10/08, and two
running snv_117 for ZFS quotas. Worth trying on both?


It is useful to test as much as possible in order to fully  
understand the situation.


Since results often get posted without system details, the script has been
updated to dump some system info and the pool configuration. Refresh your
copy from


http://www.simplesystems.org/users/bfriesen/zfs-discuss/zfs-cache-test.ksh

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Thumper Origins Q

2007-01-24 Thread Angelo Rajadurai


On 24 Jan 2007, at 13:04, Bryan Cantrill wrote:



On Wed, Jan 24, 2007 at 09:46:11AM -0800, Moazam Raja wrote:

Well, he did say fairly cheap. The ST 3511 is about $18.5k. That's
about the same price as the low-end NetApp FAS250 unit.


Note that the 3511 is being replaced with the 6140:

  http://www.sun.com/storagetek/disk_systems/midrange/6140/

Also, don't read too much into the prices you see on the website -- that's
the list price, and doesn't reflect any discounting.  If you're interested
in what it _actually_ costs, you should talk to a Sun rep or one of our
channel partners to get a quote.  (And lest anyone attack the messenger:
I'm not defending this system of getting an accurate price, I'm just
describing it.)



If your company can qualify as a start-up (4 years old or less, with fewer
than 150 employees) you may want to look at the Sun Startup Essentials
program. It provides Sun hardware at big discounts for startups.

http://www.sun.com/emrkt/startupessentials/

For an idea of the discount levels, see
http://kalsey.com/2006/11/sun_startup_essentials_pricing/

-Angelo



- Bryan

------------------------------------------------------------------------
Bryan Cantrill, Solaris Kernel Development.
http://blogs.sun.com/bmc



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss