Re: [zfs-discuss] OpenStorage GUI

2008-11-11 Thread Ed Saipetch
Can someone clarify Sun's approach to open-sourcing projects and
software?  I was under the impression the strategy was to charge for
hardware, maintenance, and professional services (PS).  If not, some
clarification would be nice.

On Nov 11, 2008, at 12:38 PM, Bryan Cantrill wrote:

  4.  If we do make something available, it won't be free.

 If you are willing/prepared(/eager?) to abide by these constraints, please
 let us ([EMAIL PROTECTED]) know -- that will help us build the business
 case for doing this...

   - Bryan



Re: [zfs-discuss] OpenStorage GUI

2008-11-11 Thread Ed Saipetch
Boyd,

That's exactly what I was getting at.  This list probably isn't the
place to discuss it, but this is the first real instance, aside from
maybe xVM Ops Center, where it was pretty much put out in the open that
you can expect to pay to get the goods.

Fishworks seems to be much more than just a nice wrapper put around
Solaris, ZFS, NFS, FMA, AVS, etc.  A lot of my ability to evangelize
the benefits of Solaris in the storage world to my customers hinges on
being able to say "Try it... you'll like it."  I know try-and-buy
exists, but in the grand scheme of things, adoption of Solaris hinges
on easy accessibility.

I apologize for the tangent.  The VM instance is a good start, but the
stance on open-sourcing, right or wrong, seems to have changed.

On Nov 11, 2008, at 8:30 PM, Boyd Adamson wrote:

 Bryan Cantrill [EMAIL PROTECTED] writes:

 On Tue, Nov 11, 2008 at 02:21:11PM -0500, Ed Saipetch wrote:
 Can someone clarify Sun's approach to open-sourcing projects and
 software?  I was under the impression the strategy was to charge for
 hardware, maintenance and PS.  If not, some clarification would be nice.

 There is no single answer -- we use open source as a business strategy,
 not as a checkbox or edict.  For this product, open source is an option
 going down the road, but not a priority.  Will our software be open
 sourced in the fullness of time?  My Magic 8-Ball tells me "signs
 point to yes" (or is that "ask again later"?) -- but it's certainly
 not something that we have concrete plans for at the moment...

 I think that's fair enough.  What Sun chooses to do is, of course, up to
 Sun.

 One can, however, understand that people might have expected otherwise
 given statements like this:

 "With our announced intent to open source the entirety of our software
 offerings, every single developer across the world now has access to
 the most sophisticated platform available for web 1.0, 2.0 and beyond."

 - Jonathan Schwartz
 http://www.sun.com/smi/Press/sunflash/2005-11/sunflash.20051130.1.xml

 -- 
 Boyd



Re: [zfs-discuss] ZFS with Traditional SAN

2008-08-21 Thread Ed Saipetch


 That's the one that's been an issue for me and my customers - they
 get billed back for GB allocated to their servers by the back-end
 arrays.
 To be more explicit about the 'self-healing properties' -
 To deal with any fs corruption situation that would traditionally
 require an fsck on UFS (SAN switch crash, multipathing issues,
 cables going flaky or getting pulled, server crash that corrupts
 filesystems), ZFS needs some disk redundancy in place (raidz, ZFS
 mirror, etc.) so it has parity and can recover.
 Which means that to use ZFS, a customer has to pay more to get the
 back-end storage redundancy they need to recover from anything that
 would cause an fsck on UFS.  I'm not saying it's a bad implementation
 or that the gains aren't worth it, just that cost-wise, ZFS is more
 expensive in this particular bill-back model.

 cheers,
 Brian



Why would the customer need to use raidz or ZFS mirroring if the array
is doing it for them?  As someone else posted, metadata is already
redundant by default and doesn't consume a ton of space.  Some people
may disagree, but the first thing I like about ZFS is the ease of pool
management, and the second is the checksumming.
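
For the redundancy angle: ZFS can still self-heal on top of array LUNs by
keeping extra copies of data blocks via the copies property.  A rough sketch
(pool and device names below are made up for illustration):

  # simple pool striped across two array-provided LUNs
  zpool create tank c2t0d0 c2t1d0

  # keep two copies of user data so checksum errors can be repaired
  # even without raidz or a ZFS mirror underneath
  zfs create -o copies=2 tank/data

  # confirm the setting
  zfs get copies tank/data

The trade-off is that copies=2 roughly doubles the space that dataset
consumes, which matters in a bill-back-by-GB model.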

When a customer had issues with Solaris 10 x86, VxFS, and EMC
PowerPath, I took them down the road of using PowerPath and ZFS.  Made
some tweaks so we didn't tell the array to flush to rust, and they're
happy as clams.
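
For anyone curious, the kind of tweak involved is the cache-flush tunable
from the Evil Tuning Guide.  Treat this as a sketch -- it's only sensible
when the array cache is nonvolatile (battery-backed), and the exact syntax
can vary by release:

  # /etc/system -- stop ZFS from sending SYNCHRONIZE CACHE commands to
  # the array; only safe when the array cache is battery-backed
  set zfs:zfs_nocacheflush = 1

A reboot is needed for /etc/system changes to take effect.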


Re: [zfs-discuss] J4200/J4400 Array

2008-07-02 Thread Ed Saipetch
This array has not been formally announced yet and information on  
general availability is not available as far as I know.  I saw the  
docs last week and the product was supposed to be launched a couple of  
weeks ago.

Unofficially, this is Sun's continued push to develop cheaper storage
options that can be combined with Solaris and the Open Storage
initiative to provide customers with options they don't have today.
I'd expect the price point to be quite a bit lower than the LC 24XX
series of arrays.

On Jul 2, 2008, at 7:49 AM, Ben B. wrote:

 Hi,

 According to the Sun Handbook, there is a new array:
 SAS interface
 12 SAS or SATA disks

 ZFS could be used nicely with this box.

 There is another version, called the J4400, with 24 disks.

 Doc is here :
 http://docs.sun.com/app/docs/coll/j4200

 Does someone know price and availability for these products ?

 Best Regards,
 Ben




Re: [zfs-discuss] ZFS problem with oracle

2008-04-01 Thread Ed Saipetch

Wiwat,

You should make sure that you have read the Best Practices Guide and the
Evil Tuning Guide for helpful information on optimizing ZFS for Oracle.
There are some things you can do to tweak ZFS to get better performance,
such as using a separate filesystem for logs and separating the ZFS intent
log (ZIL) from the main pool.


They can be found here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
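
As a rough sketch of the kind of layout those guides describe (pool and
device names below are made up for illustration):

  # put the ZFS intent log on its own fast device
  zpool add orapool log c3t0d0

  # datafiles: match recordsize to the Oracle db_block_size (often 8 KB)
  zfs create -o recordsize=8k orapool/oradata

  # redo logs on a separate filesystem, left at the default 128 KB recordsize
  zfs create orapool/oralog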

Also, what kind of disk subsystem do you have (number of disks, is it an
array?, etc.), and how are your ZFS pools configured (RAID type, separate
ZIL, etc.)?
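
If it's easier, the output from something like the following would show
most of that (substitute your own pool and dataset names):

  zpool status
  zfs list -r <pool>
  zfs get recordsize,compression <pool>/<dataset>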


Hope this gives you a start.

-Ed

Wiwat Kiatdechawit wrote:

I implemented ZFS with Oracle, but it is much slower than UFS.  Do you
have any solution?

Can I fix this problem with ZFS direct I/O?  If so, how do I set it?

Wiwat



Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Ed Saipetch
Tried that... completely different cases with different power supplies.

On Oct 30, 2007, at 10:28 AM, Al Hopper wrote:

 On Mon, 29 Oct 2007, MC wrote:

 Here's what I've done so far:

 The obvious thing to test is the drive controller, so maybe you  
 should do that :)


 Also - while you're doing swapTronics - don't forget the Power Supply
 (PSU).  Ensure that your PSU has sufficient capacity on its 12-volt
 rails (older PSUs didn't even tell you how much current they can push
 out on the 12V outputs).

 See also: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta

 Regards,

 Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
 OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
 http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
 Graduate from sugar-coating school?  Sorry - I never attended! :)


[zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Ed Saipetch
Hello,

I'm experiencing major checksum errors when using a Syba Silicon Image 3114
based PCI SATA controller w/ non-RAID firmware.  I've tested by copying data
via SFTP and SMB.  With everything I've swapped out, I can't fathom this being
a hardware problem.  There have been quite a few blog posts out there from
people with a similar config who aren't having any problems.

Here's what I've done so far:
1. Changed solaris releases from S10 U3 to NV 75a
2. Switched out motherboards and cpus from AMD sempron to a Celeron D
3. Switched out memory to use completely different dimms
4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 3x400GB 
seagates RAIDZ and 1x250GB hitachi with no raid)

Here's the output of a scrub and the status (ignore the date and time; I haven't
reset it on this new motherboard).  Please point me in the right direction if
I'm barking up the wrong tree.

# zpool scrub tank
# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0   293
          c0d1      ONLINE       0     0   293

errors: 140 data errors, use '-v' for a list
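
For reference, the checking I'm doing after each swap is basically just
listing the affected files and re-running the scrub:

  # list the individual files hit by the checksum errors
  zpool status -v tank

  # reset the error counters and scrub again after swapping parts
  zpool clear tank
  zpool scrub tank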
 
 