Re: [OmniOS-discuss] OWC Mercury Accelsior

2013-06-06 Thread Uwe Reh

Hi,

I'm using the cheaper but quite comparable OCZ RevoDrive 3 X2 and I like 
it. Building a Lucene index is twice as fast as on 15K SAS drives. 
Unfortunately the card has an unsupported Marvell chip, so I have to 
live with one evil Ubuntu node in my Solaris zoo. ;-)


Since I don't have to deal with big data (less than 1 TB), I decided not 
to do any ZIL or L2ARC tricks.


Uwe

On 06.06.2013 22:19, Fábio Rabelo wrote:

Has anyone tested one of these babies with OmniOS?

http://www.storagereview.com/owc_mercury_accelsior_pcie_ssd_review

Specs are a little sketchy, but they mention an LSI chipset ...

Thinking about ZIL and/or cache


Fábio Rabelo


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss





Re: [OmniOS-discuss] OWC Mercury Accelsior

2013-06-06 Thread Saso Kiselkov
Forwarding to CC the list:

On 06/06/2013 21:19, Fábio Rabelo wrote:
> Has anyone tested one of these babies with OmniOS?
> 
> http://www.storagereview.com/owc_mercury_accelsior_pcie_ssd_review
> 
> Specs are a little sketchy, but they mention an LSI chipset ...
> 
> Thinking about ZIL and/or cache

PCIe SSDs are difficult to use for slog because, if the storage head dies,
you can't fail over to a backup node. PCIe doesn't support multiple
hosts on the same bus.

Cheers,
-- 
Saso



[OmniOS-discuss] OWC Mercury Accelsior

2013-06-06 Thread Fábio Rabelo
Has anyone tested one of these babies with OmniOS?

http://www.storagereview.com/owc_mercury_accelsior_pcie_ssd_review

Specs are a little sketchy, but they mention an LSI chipset ...

Thinking about ZIL and/or cache


Fábio Rabelo


Re: [OmniOS-discuss] statistics

2013-06-06 Thread Richard Elling
On Jun 6, 2013, at 12:34 PM, Michael Palmer  wrote:

> I guess the main question was graphical statistics.  Besides buying
> Oracle storage, I would suggest a network management system like
> OpenNMS.  There are lots of choices out there, but I'm not sure that is
> the best solution.
> 
> Now for non-graphical, I would say DTrace.  Google nicstat, zilstat,
> arc_summary.  Also, there is 'zpool iostat -v 1' and 'iostat -x 1'.

These tend to be useful for general-purpose bottleneck identification.
But in a storage-server situation, you need something more specific
to the protocol. Fortunately, there are nfssvrtop and friends:
https://github.com/richardelling/tools

These show the internal latency of serving NFS, iSCSI, and CIFS.
See the presentation there to understand where the data is being collected.
 -- richard

> 
> Thanks,
> Michael
> Sent from my iPhone
> 
> On Jun 6, 2013, at 11:26 AM, "Fábio Rabelo"  wrote:
> 
>> Hi to all
>> 
>> First, thanks a lot for the product, it is really great!
>> 
>> My question now:
>> 
>> Is there any way to make graphical statistics of ZFS performance?
>> 
>> I have some bottleneck in my system (OmniOS v0.9a9 Nightly Mar 04 2013 + 
>> Napp-it) that I cannot identify ...
>> 
>> It stores all VMs in a Proxmox cluster with 5 nodes and 14 mixed VMs (3 
>> W2K12 Server, 2 Win7 Pro, and all the others Linux Debian or Ubuntu).
>> 
>> There is a 10 GbE switch to interconnect all nodes and the storage.
>> 
>> Almost all applications are running fine and fast.
>> 
>> The exception is a Debian VM running Oracle 11g.
>> 
>> I just do not know how to find out what I have to do ...
>> 
>> 
>> Fábio Rabelo

--

richard.ell...@richardelling.com
+1-760-896-4422





Re: [OmniOS-discuss] statistics

2013-06-06 Thread Michael Palmer
I guess the main question was graphical statistics.  Besides buying
Oracle storage, I would suggest a network management system like
OpenNMS.  There are lots of choices out there, but I'm not sure that is
the best solution.

Now for non-graphical, I would say DTrace.  Google nicstat, zilstat,
arc_summary.  Also, there is 'zpool iostat -v 1' and 'iostat -x 1'.
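To get from those text-mode stats to something graphical, one option is to post-process the sampled output into plain numbers and feed them to whatever plotting tool you like. A minimal sketch (the column order and the pool name "tank" are assumptions based on typical 'zpool iostat' output, not something confirmed in this thread):

```python
import re

# Multipliers for the size suffixes 'zpool iostat' typically prints.
SUFFIX = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def to_number(field):
    """Convert a field like '1.5M' or '12' to a plain number."""
    m = re.fullmatch(r"([\d.]+)([KMGT]?)", field)
    if not m:
        raise ValueError("unparseable field: %r" % field)
    return float(m.group(1)) * SUFFIX.get(m.group(2), 1)

def parse_sample(line):
    """Parse one pool line: name, alloc, free, read/write ops, read/write bandwidth."""
    name, alloc, free, rops, wops, rbw, wbw = line.split()
    return {
        "pool": name,
        "alloc": to_number(alloc),
        "free": to_number(free),
        "read_ops": to_number(rops),
        "write_ops": to_number(wops),
        "read_bw": to_number(rbw),
        "write_bw": to_number(wbw),
    }

# A line shaped like 'zpool iostat tank 1' output (values are made up).
print(parse_sample("tank  1.2G  98.8G  12  34  1.5M  2.3M"))
```

Collect one dict per sampling interval and you have a time series that any graphing frontend (gnuplot, RRDtool, a spreadsheet) can draw.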

Thanks,
Michael
Sent from my iPhone

On Jun 6, 2013, at 11:26 AM, "Fábio Rabelo"  wrote:

> Hi to all
>
> First, thanks a lot for the product, it is really great!
>
> My question now:
>
> Is there any way to make graphical statistics of ZFS performance?
>
> I have some bottleneck in my system (OmniOS v0.9a9 Nightly Mar 04 2013 + 
> Napp-it) that I cannot identify ...
>
> It stores all VMs in a Proxmox cluster with 5 nodes and 14 mixed VMs (3 
> W2K12 Server, 2 Win7 Pro, and all the others Linux Debian or Ubuntu).
>
> There is a 10 GbE switch to interconnect all nodes and the storage.
>
> Almost all applications are running fine and fast.
>
> The exception is a Debian VM running Oracle 11g.
>
> I just do not know how to find out what I have to do ...
>
>
> Fábio Rabelo


Re: [OmniOS-discuss] statistics

2013-06-06 Thread Michael Palmer
Are you using iSCSI or NFS?
How is your zpool configured?
What RAID level are you using?
Do you have a ZIL and/or cache device?
What hard drives or SSDs are you using, and how many of each?


Sent from my iPhone

On Jun 6, 2013, at 11:26 AM, "Fábio Rabelo"  wrote:

> Hi to all
>
> First, thanks a lot for the product, it is really great!
>
> My question now:
>
> Is there any way to make graphical statistics of ZFS performance?
>
> I have some bottleneck in my system (OmniOS v0.9a9 Nightly Mar 04 2013 + 
> Napp-it) that I cannot identify ...
>
> It stores all VMs in a Proxmox cluster with 5 nodes and 14 mixed VMs (3 
> W2K12 Server, 2 Win7 Pro, and all the others Linux Debian or Ubuntu).
>
> There is a 10 GbE switch to interconnect all nodes and the storage.
>
> Almost all applications are running fine and fast.
>
> The exception is a Debian VM running Oracle 11g.
>
> I just do not know how to find out what I have to do ...
>
>
> Fábio Rabelo


[OmniOS-discuss] statistics

2013-06-06 Thread Fábio Rabelo
Hi to all

First, thanks a lot for the product, it is really great!

My question now:

Is there any way to make graphical statistics of ZFS performance?

I have some bottleneck in my system (OmniOS v0.9a9 Nightly Mar 04 2013 +
Napp-it) that I cannot identify ...

It stores all VMs in a Proxmox cluster with 5 nodes and 14 mixed VMs (3
W2K12 Server, 2 Win7 Pro, and all the others Linux Debian or Ubuntu).

There is a 10 GbE switch to interconnect all nodes and the storage.

Almost all applications are running fine and fast.

The exception is a Debian VM running Oracle 11g.

I just do not know how to find out what I have to do ...


Fábio Rabelo


Re: [OmniOS-discuss] some problems with last update to bloody omnios-8d266aa 64-bit

2013-06-06 Thread Theo Schlossnagle
Ah, you're using the omnios-build system? It does use /tmp/ (to be fast),
but all the boxes we run it on have a lot of room there. There are some
other scenarios where it makes sense to change that to /var/tmp/ as well.
Maybe we should change the default to /var/tmp/.
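For anyone hitting the same thing before any default changes, a minimal sketch of redirecting scratch space to disk-backed /var/tmp. Whether omnios-build honors a TMPDIR-style override is an assumption here; if it does not, the equivalent path would need to be changed in the build configuration itself.

```shell
#!/bin/sh
# Point temporary files at disk-backed /var/tmp instead of swap-backed /tmp.
# NOTE: omnios-build respecting TMPDIR is an assumption, not confirmed above.
TMPDIR=${TMPDIR:-/var/tmp}
export TMPDIR
mkdir -p "$TMPDIR"
echo "building with TMPDIR=$TMPDIR"
```

The ${TMPDIR:-/var/tmp} form keeps any value the caller already exported, so the override composes with existing environments.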


On Thu, Jun 6, 2013 at 3:34 AM, Richard PALO  wrote:

> BTW, I believe I found the issue with my build on "release", which appears
> to be more an issue with tmp & swap (/tmp was being overloaded); switching
> to /var/tmp seems to get past the resource-allocation problems I experienced.
> Testing with both "release" and "bloody" now...
>
>



-- 

Theo Schlossnagle

http://omniti.com/is/theo-schlossnagle


Re: [OmniOS-discuss] some problems with last update to bloody omnios-8d266aa 64-bit

2013-06-06 Thread Richard PALO
BTW, I believe I found the issue with my build on "release", which 
appears to be more an issue with tmp & swap (/tmp was being overloaded); 
switching to /var/tmp seems to get past the resource-allocation problems 
I experienced. Testing with both "release" and "bloody" now...

