Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Robert Milkowski

>So, the only supported (or even possible) way is indeed to use it
>as NAS for file or block IO from another head running the database
>or application servers?..

Technically speaking, you can get access to a standard shell and do whatever
you want - this would essentially void the support contract, though.

-- 
Robert Milkowski
http://milek.blogspot.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Ian Collins

On 11/23/12 05:50, Jim Klimov wrote:

On 2012-11-22 17:31, Darren J Moffat wrote:

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?

No, it is a true appliance; it might look like it has Solaris underneath,
but it is only based on Solaris.

You can script administrative tasks, but not using bash/ksh-style
scripting; you use the ZFSSA's own scripting language.

So, the only supported (or even possible) way is indeed to use it
as NAS for file or block IO from another head running the database
or application servers?..


Yes.


I wonder if it would make weird sense to get the boxes, forfeit the
cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...
Or, rather, shop for the equivalent non-appliance servers...


As Tim Cook says, that would be a very expensive option.

I'm sure Oracle dropped the Thumper line because they competed head on 
with the appliances and gave way more flexibility.


If you are experienced with Solaris and ZFS, you will find using the 
appliances very frustrating! You can't use the OS as you would like, and 
you have to go through support when you would otherwise fix things 
yourself.  In my part of the world, that isn't much fun.


Buy an equivalent JBOD and head unit and pretend you have a new Thumper.

--
Ian.



Re: [zfs-discuss] Intel DC S3700

2012-11-22 Thread David Magda
On Wed, November 21, 2012 16:06, Jim Klimov wrote:
> On 2012-11-21 21:55, Ian Collins wrote:
>> I can't help thinking these drives would be overkill for an L2ARC device.
>> All of the expensive controller hardware is geared to boosting random
>> write IOPS, which is somewhat wasted on a write-slowly, read-often device.
>> The enhancements would be good for a ZIL, but the smallest drive is at
>> least an order of magnitude too big...
>
> I think, given the write-endurance and powerloss protection, these
> devices might make for good pool devices - whether for an SSD-only
> pool, or for an rpool+zil(s) mirrors with main pools (and likely
> L2ARCs, yes) being on different types of devices.

Or partition them.

While general best practices encourage using the whole device for either
L2ARC or ZIL, that doesn't always have to be the case.
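For example, a partitioned SSD pair might be attached roughly like this (a sketch only: the pool name "tank" and the device/slice names are hypothetical, and the slices would first have to be laid out with format(1M)):

```shell
# Slice s0 (small, a few GB) on each SSD as a mirrored slog (ZIL),
# slice s1 (the remainder) on each as L2ARC cache devices.
zpool add tank log mirror c0t0d0s0 c0t1d0s0
zpool add tank cache c0t0d0s1 c0t1d0s1

# Verify the new log and cache vdevs appeared
zpool status tank
```

Note that cache vdevs are never mirrored (L2ARC contents are disposable), while the slog is mirrored because losing an unmirrored slog with outstanding transactions can cost data.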



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Tim Cook
On Thu, Nov 22, 2012 at 10:50 AM, Jim Klimov  wrote:

> On 2012-11-22 17:31, Darren J Moffat wrote:
>
>> Is it possible to use the ZFS Storage appliances in a similar
>>> way, and fire up a Solaris zone (or a few) directly on the box
>>> for general-purpose software; or to shell-script administrative
>>> tasks such as the backup archive management in the global zone
>>> (if that concept still applies) as is done on their current
>>> Solaris-based box?
>>>
>>
>> No, it is a true appliance; it might look like it has Solaris underneath,
>> but it is only based on Solaris.
>>
>> You can script administrative tasks, but not using bash/ksh-style
>> scripting; you use the ZFSSA's own scripting language.
>>
>
> So, the only supported (or even possible) way is indeed to use it
> as NAS for file or block IO from another head running the database
> or application servers?..
>
> In the Datasheet I read that "Cloning" and "Remote replication" are
> separately licensed features; does this mean that the capability
> for "zfs send|zfs recv" backups from remote Solaris systems should
> be purchased separately? :(
>
> I wonder if it would make weird sense to get the boxes, forfeit the
> cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
> get the most flexibility and bang for a buck from the owned hardware...
> Or, rather, shop for the equivalent non-appliance servers...
>
> //Jim
>



You'd be paying a massive premium to buy them and then install some other
OS on them.  You'd be far better off buying equivalent servers.

--Tim


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Jim Klimov

On 2012-11-22 17:31, Darren J Moffat wrote:

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?


No, it is a true appliance; it might look like it has Solaris underneath,
but it is only based on Solaris.

You can script administrative tasks, but not using bash/ksh-style
scripting; you use the ZFSSA's own scripting language.


So, the only supported (or even possible) way is indeed to use it
as NAS for file or block IO from another head running the database
or application servers?..

In the Datasheet I read that "Cloning" and "Remote replication" are
separately licensed features; does this mean that the capability
for "zfs send|zfs recv" backups from remote Solaris systems should
be purchased separately? :(
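For reference, plain zfs send/recv between Solaris hosts looks like the sketch below (hostnames and dataset names are made up); whether the appliance's separately licensed "Remote replication" feature gates the equivalent functionality on the ZFSSA side is exactly the open question here:

```shell
# Initial full replication of a dataset to a backup host
zfs snapshot tank/home@monday
zfs send tank/home@monday | ssh backuphost zfs recv -F backup/home

# Later, send only the changes since the previous snapshot
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | \
    ssh backuphost zfs recv backup/home
```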

I wonder if it would make weird sense to get the boxes, forfeit the
cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
get the most flexibility and bang for a buck from the owned hardware...
Or, rather, shop for the equivalent non-appliance servers...

//Jim


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Darren J Moffat



On 11/22/12 16:24, Jim Klimov wrote:

A customer is looking to replace or augment their Sun Thumper
with a ZFS appliance like 7320. However, the Thumper was used
not only as a protocol storage server (home dirs, files, backups
over NFS/CIFS/Rsync), but also as a general-purpose server with
unpredictably-big-data programs running directly on it (such as
corporate databases, Alfresco for intellectual document storage,
etc.) in order to avoid the networking transfer of such data
between pure-storage and compute nodes - this networking was
seen as both a bottleneck and a possible point of failure.

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?


No, it is a true appliance; it might look like it has Solaris underneath, 
but it is only based on Solaris.


You can script administrative tasks, but not using bash/ksh-style 
scripting; you use the ZFSSA's own scripting language.
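For the curious: the appliance CLI embeds an ECMAScript-like environment entered with the `script` command. A rough sketch over ssh follows (untested here; the appliance hostname and project name are hypothetical, and the exact built-ins should be checked against the ZFSSA scripting documentation):

```shell
# Run a small script in the appliance's CLI scripting context
ssh root@zfssa <<'EOF'
script
  // Select a project and print the names of its shares
  run('shares select default');
  var shares = list();
  for (var i = 0; i < shares.length; i++)
      printf("%s\n", shares[i]);
EOF
```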



Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)


No.

--
Darren J Moffat


[zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Jim Klimov

A customer is looking to replace or augment their Sun Thumper
with a ZFS appliance like 7320. However, the Thumper was used
not only as a protocol storage server (home dirs, files, backups
over NFS/CIFS/Rsync), but also as a general-purpose server with
unpredictably-big-data programs running directly on it (such as
corporate databases, Alfresco for intellectual document storage,
etc.) in order to avoid the networking transfer of such data
between pure-storage and compute nodes - this networking was
seen as both a bottleneck and a possible point of failure.

Is it possible to use the ZFS Storage appliances in a similar
way, and fire up a Solaris zone (or a few) directly on the box
for general-purpose software; or to shell-script administrative
tasks such as the backup archive management in the global zone
(if that concept still applies) as is done on their current
Solaris-based box?

Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)

Thanks,
//Jim Klimov



Re: [zfs-discuss] Woeful performance from an iSCSI pool

2012-11-22 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
> 
> I look after a remote server that has two iSCSI pools.  The volumes for
> each pool are sparse volumes and a while back the target's storage
> became full, causing weird and wonderful corruption issues until they
> managed to free some space.
> 
> Since then, one pool has been reasonably OK, but the other has terrible
> performance receiving snapshots.  Despite both iSCSI devices using the
> same IP connection, iostat shows one with reasonable service times while
> the other shows really high (up to 9 seconds) service times and 100%
> busy.  This kills performance for snapshots with many random file
> removals and additions.
> 
> I'm currently zero filling the bad pool to recover space on the target
> storage to see if that improves matters.
> 
> Has anyone else seen similar behaviour with previously degraded iSCSI
> pools?
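The zero-filling Ian describes amounts to something like the following on the initiator side (a sketch; the pool name is made up, and it only helps reclaim backing-store space if the target side can recognize or compress zeroed blocks):

```shell
# Write zeros into the pool's free space, flush, then delete the file
# so the freed blocks can be reclaimed by the sparse backing volume.
dd if=/dev/zero of=/badpool/zerofile bs=1M
sync
rm /badpool/zerofile
```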

This sounds exactly like the behavior I was seeing with my attempt at two 
machines zpool-mirroring each other via iSCSI.  In my case, I had two machines 
that are both targets and initiators.  I made the initiator service dependent 
on the target service, the zpool mount dependent on the initiator 
service, and the VirtualBox guest start dependent on the zpool mount.
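For anyone wanting to reproduce that dependency chain, the SMF side looks roughly like this (a sketch using svccfg; the exact FMRIs and property-group name are assumptions and may differ between releases):

```shell
# Make the iSCSI initiator service wait for the local target service
svccfg -s svc:/network/iscsi/initiator:default <<'EOF'
addpg local-target dependency
setprop local-target/entities = fmri: svc:/network/iscsi/target:default
setprop local-target/grouping = astring: require_all
setprop local-target/restart_on = astring: restart
setprop local-target/type = astring: service
EOF
svcadm refresh svc:/network/iscsi/initiator:default
```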

Everything seemed fine for a while, including some reboots.  But then on one 
reboot, one of my systems stayed down too long, and when it finally came back 
up, both machines started choking.

So far I haven't found any root cause, and so far the only solution I've found 
was to reinstall the OS.  I tried everything I know in terms of removing, 
forgetting, recreating the targets, initiators, and pool, but somehow none of 
that was sufficient.

I recently (yesterday) got budgetary approval to dig into this more, so 
hopefully maybe I'll have some insight before too long, but don't hold your 
breath.  I could fail, and even if I don't, it's likely to be weeks or months.

What I want to know from you is:

Which machines are your solaris machines?  Just the targets?  Just the 
initiators?  All of them?

You say you're having problems just with snapshots.  Are you sure you're not 
having trouble with all sorts of IO, rather than just snapshots?  What about 
import / export?

In my case, I found I was able to zfs send, zfs receive, and zpool status, all fine.  
But when I launched a guest VM, there would be a massive delay - you said up to 
9 seconds - I was sometimes seeing over 30s - sometimes crashing the host 
system.  And the guest OS was acting as if it was getting IO errors, without 
actually displaying an error message indicating an IO error.  I would attempt, and 
sometimes fail, to power off the guest VM (kill -KILL VirtualBox).  After the 
failure began, zpool status still worked (and reported no errors), but if I tried 
to do things like export/import, they would hang indefinitely, and I needed to power 
cycle the host.  While in the failure mode, I can run zpool iostat, and I sometimes 
see 0 transactions with nonzero bandwidth, which defies my understanding.

Did you ever see the iscsi targets "offline" or "degraded" in any way?  Did you 
do anything like "online" or "clear"?

My systems are OpenIndiana - the latest; I forget if that's 151a5 or a6.
