Hello,
Any comments/suggestions about this would be very welcome.
Thanks!
-- Pasi
On Fri, Feb 08, 2013 at 05:09:56PM +0200, Pasi Kärkkäinen wrote:
>
> I'm seeing weird output as well:
>
> # zpool list foo
> NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
>
On Fri, Feb 08, 2013 at 09:47:38PM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
> > From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> >
> > What's the correct way of finding out what actually uses/reserves that 1023G
> > of FREE in the zpool?
On Wed, Feb 06, 2013 at 08:03:13PM -0700, Jan Owoc wrote:
> On Wed, Feb 6, 2013 at 4:26 PM, Edward Ned Harvey
> (opensolarisisdeadlongliveopensolaris)
> wrote:
> >
> > When I used "zpool status" after the system crashed, I saw this:
> > NAME   SIZE  ALLOC  FREE  EXPANDSZ  CAP  DEDUP  HEALTH
On Sun, Jan 20, 2013 at 07:51:15PM -0800, Richard Elling wrote:
>
> 2. VAAI support.
>
>VAAI has 4 features, 3 of which have been in illumos for a long time. The
>remaining
>feature (SCSI UNMAP) was done by Nexenta and exists in their NexentaStor
>product,
>but the CEO ma
On Tue, Jan 08, 2013 at 06:36:18AM -0500, Ray Arachelian wrote:
> On 01/07/2013 04:16 PM, Sašo Kiselkov wrote:
> > PERC H200 are well behaved cards that are easy to reflash and work
> > well (even in JBOD mode) on Illumos - they are essentially a LSI SAS
> > 9211. If you can get them, they're one h
On Thu, Nov 29, 2012 at 09:42:21AM +0100, Grégory Giannoni wrote:
>
> Le 29 nov. 2012 à 09:27, Pasi Kärkkäinen a écrit :
> >> The LSI 9240-4I was not able to connect to the 25-drives bay ; Not tested
> >> LSI 9260-16I or LSI 9280-24i.
> >>
> >
> >
On Tue, Nov 27, 2012 at 08:52:06AM +0100, Grégory Giannoni wrote:
>
> The LSI 9240-4I was not able to connect to the 25-drives bay ; Not tested
> LSI 9260-16I or LSI 9280-24i.
>
What was the problem connecting the LSI 9240-4i to the 25-drive bay?
-- Pasi
On Tue, Sep 18, 2012 at 05:30:56PM +0200, Erik Ableson wrote:
>
> If you're running ESXi with a vSphere license, I'd recommend looking at VDR
> (free with the vCenter license) for backing up the VMs to the little HPs
> since you get compressed and deduplicated backups that will minimize the
> r
On Wed, Jun 27, 2012 at 01:42:27AM +0300, Pasi Kärkkäinen wrote:
> On Fri, Jun 15, 2012 at 06:23:42PM -0500, Timothy Coalson wrote:
> > Sorry, if you meant distinguishing between true 512 and emulated
> > 512/4k, I don't know, it may be vendor-specific as to whether they
&
On Fri, Jun 15, 2012 at 06:23:42PM -0500, Timothy Coalson wrote:
> Sorry, if you meant distinguishing between true 512 and emulated
> 512/4k, I don't know, it may be vendor-specific as to whether they
> expose it through device commands at all.
>
At least on Linux you can see the info from:
/sys
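The snippet is cut off at the path, but the idea can be sketched as a small shell helper (the function name and the optional base-dir argument are mine, added so it can be exercised against a fake tree; `queue/logical_block_size` and `queue/physical_block_size` are the standard Linux sysfs attributes):

```shell
# Print logical vs. physical sector size for each block device.
# A 512e drive (4K physical, emulated 512-byte logical) shows 512/4096;
# a true 512n drive shows 512/512. The optional base-dir argument lets
# the function run against a fake tree; it defaults to /sys/block.
sector_sizes() {
    base="${1:-/sys/block}"
    for dev in "$base"/*/; do
        [ -r "$dev/queue/logical_block_size" ] || continue
        printf '%s: logical=%s physical=%s\n' "$(basename "$dev")" \
            "$(cat "$dev/queue/logical_block_size")" \
            "$(cat "$dev/queue/physical_block_size")"
    done
}
```

Note this only reports what the drive advertises; a drive that lies about its physical sector size will still show up as 512/512.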
On Sun, Jan 08, 2012 at 06:59:57AM +0400, Jim Klimov wrote:
> 2012-01-08 5:37, Richard Elling wrote:
>> The big question is whether they are worth the effort. Spares solve a
>> serviceability
>> problem and only impact availability in an indirect manner. For single-parity
>> solutions, spares
On Sat, Nov 12, 2011 at 10:08:04AM -0800, Richard Elling wrote:
>
> On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote:
>
> > On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote:
> >> On Nov 12, 2011, at 00:55, Richard Elling wrote:
> >>
> >>&g
; on. Might as well enable the functionality now, when 4K is rarer, so you have
> more time to test and tune things out, rather than later when you can
> potentially be left scrambling.
>
> As Pasi Kärkkäinen mentions, there's not much you can do if the disks lie
> (just as has
On Fri, Nov 11, 2011 at 09:55:29PM -0800, Richard Elling wrote:
> On Nov 10, 2011, at 7:47 PM, David Magda wrote:
>
> > On Nov 10, 2011, at 18:41, Daniel Carosone wrote:
> >
> >> On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
> >>> Under both Solaris 10 and Solaris 11x, I recei
On Sat, Aug 06, 2011 at 07:45:31PM +0200, Roy Sigurd Karlsbakk wrote:
> > Might this be the SATA drives taking too long to reallocate bad
> > sectors? This is a common problem "desktop" drives have, they will
> > stop and basically focus on reallocating the bad sector as long as it
> > takes, which
On Sat, Jun 18, 2011 at 09:49:44PM +0200, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I have a few machines set up with OI 148, and I can't make the LEDs on the
> drives work when something goes bad. The chassis are Supermicro ones, and
> work well, normally. Any idea how to make drive LEDs work wi
On Sat, Jun 11, 2011 at 08:26:34PM +0400, Jim Klimov wrote:
> 2011-06-11 19:15, Pasi Kärkkäinen wrote:
>> On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>>> I've had two incidents where performance tanked suddenly, leaving the VM
>>> gue
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>Posted in greater detail at Server Fault
>- [1]http://serverfault.com/q/277966/13325
>
>I have an HP ProLiant DL380 G7 system running NexentaStor. The server has
>36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expander
On Fri, Mar 18, 2011 at 06:26:37PM -0700, Michael DeMan wrote:
> ZFSv28 is in HEAD now and will be out in 8.3.
>
> ZFS + HAST in 9.x means being able to cluster off different hardware.
>
> In regards to OpenSolaris and Indiana - can somebody clarify the relationship
> there? It was clear with O
On Sat, Feb 12, 2011 at 08:54:26PM +0100, Roy Sigurd Karlsbakk wrote:
> > I see that Pinguy OS, an uber-Ubuntu o/s, includes native ZFS support.
> > Any pointers to more info on this?
>
> There is some work in progress at http://zfsonlinux.org/, but the POSIX
> layer was still lacking last I c
On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
> Brandon High wrote:
>
> > On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> > wrote:
> > > What is the status of ZFS support for TRIM?
> >
> > I believe it's been supported for a while now.
> > http://www.c0t0d0s0.org/archives
On Tue, Jan 25, 2011 at 11:53:49AM -0800, Rocky Shek wrote:
> Philip,
>
> You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sas-sata-jbod
> -storage.html
>
> It is the best fit for ZFS Storage application. It can be
On Sat, Jan 08, 2011 at 12:33:50PM -0500, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Garrett D'Amore
> >
> > When you purchase NexentaStor from a top-tier Nexenta Hardware Partner,
> > you get a product that
On Wed, Dec 22, 2010 at 01:43:35PM +, Jabbar wrote:
>Hello,
>
>I was thinking of buying a couple of SSD's until I found out that Trim is
>only supported with SATA drives.
>
Yes, because TRIM is an ATA command. SATA means Serial ATA.
SCSI (SAS) drives have "WRITE SAME" command, which
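The message is cut off, but the practical upshot on Linux can be checked from sysfs: once the kernel has negotiated a discard mechanism for a device (ATA TRIM, or SCSI UNMAP/WRITE SAME), it exposes a non-zero discard granularity. A sketch (the helper name and the base-dir test argument are mine; `queue/discard_granularity` is the standard sysfs attribute):

```shell
# Report whether the Linux kernel will issue discards (ATA TRIM or
# SCSI UNMAP/WRITE SAME) to a device: a non-zero discard_granularity
# means discard is supported. The base-dir argument only exists so
# the function can be tested against a fake sysfs tree.
supports_discard() {
    dev="$1"
    base="${2:-/sys/block}"
    g=$(cat "$base/$dev/queue/discard_granularity" 2>/dev/null || echo 0)
    g=${g:-0}
    if [ "$g" -gt 0 ]; then
        echo "$dev: discard supported (granularity ${g}B)"
    else
        echo "$dev: discard not supported"
    fi
}
```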
On Wed, Dec 22, 2010 at 11:36:48AM +0100, Stephan Budach wrote:
>Hello all,
>
>I am shopping around for 3.5" SSDs that I can mount into my storage and
>use as ZIL drives.
>As of yet, I have only found 3.5" models with the Sandforce 1200, which
>was not recommended on this list.
On Thu, Dec 16, 2010 at 08:43:02PM +0100, Alexander Lesle wrote:
> Hello All,
>
> I want to build a home file and media server now. After experimenting with
> an Asus board and running into unsolved problems I have bought this
> Supermicro Board X8SIA-F with Intel i3-560 and 8 GB Ram
> http://www.supermi
On Tue, Nov 09, 2010 at 04:18:17AM -0800, Andreas Koppenhoefer wrote:
> From Oracle Support we got the following info:
>
> Bug ID: 6992124 reboot of Sol10 u9 host makes zpool FAULTED when zpool uses
> iscsi LUNs
> This is a duplicate of:
> Bug ID: 6907687 zfs pool is not automatically fixed when
On Wed, Nov 17, 2010 at 10:14:10AM +, Bruno Sousa wrote:
>Hi all,
>
>Let me tell you all that the MC/S *does* make a difference...I had a
>    Windows fileserver using an iSCSI connection to a host running snv_134
>    with an average speed of 20-35 MB/s...After the upgrade to snv_151a
On Sat, Oct 16, 2010 at 08:38:28AM -0700, Richard Elling wrote:
>On Oct 15, 2010, at 6:18 AM, Stephan Budach wrote:
>
> So, what would you suggest, if I wanted to create really big pools? Say
> in the 100 TB range? That would be quite a number of single drives then,
> especially
On Tue, Sep 14, 2010 at 08:08:42AM -0700, Ray Van Dolson wrote:
> On Tue, Sep 14, 2010 at 06:59:07AM -0700, Wolfraider wrote:
> > We are looking into the possibility of adding a dedicated ZIL and/or
> > L2ARC devices to our pool. We are looking into getting 4 x 32GB
> > Intel X25-E SSD drives. Wo
On Sat, Jul 17, 2010 at 12:57:40AM +0200, Richard Elling wrote:
>
> > Because of BTRFS for Linux, Linux's popularity itself and also thanks
> > to the Oracle's help.
>
> BTRFS does not matter until it is a primary file system for a dominant
> distribution.
> From what I can tell, the dominant
On Tue, Jun 15, 2010 at 10:57:53PM +0530, Anil Gulecha wrote:
> Hi All,
>
> On behalf of NexentaStor team, I'm happy to announce the release of
> NexentaStor Community Edition 3.0.3. This release is the result of the
> community efforts of Nexenta Partners and users.
>
> Changes over 3.0.2 includ
On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote:
> 40MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2,
> and 6 almost 10x as many times as I see 40MB/sec. It really only bumps up to
> 40 very rarely.
>
> As far as random vs. sequential. Correct me if I'm wrong, b
On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote:
>On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen <[1]pa...@iki.fi> wrote:
>
> On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> > Well, I've searched my brains out and I can't s
, in the worst case of small random IO.
(the parity needs to be written, and that limits the performance of raidz/z2/z3
to the performance of a single disk).
This is not really ZFS specific; it's the same with any RAID
implementation.
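A rough back-of-envelope for the point above (the per-disk IOPS figure and the pool layout are assumed numbers, not from this thread):

```shell
# Small random writes to a raidz/raidz2 vdev cost roughly one
# whole-vdev operation, so each vdev delivers about one disk's
# worth of write IOPS regardless of how wide it is.
# Assumed figures:
disk_iops=150   # typical 7200 rpm SATA drive
vdevs=3         # e.g. 24 disks arranged as 3 x 8-disk raidz2
echo "expected pool random-write IOPS: ~$((disk_iops * vdevs))"
```

So it's the number of vdevs (stripes), not the number of disks, that buys random IOPS: the same 24 disks as one wide raidz3 would sit near a single disk's ~150.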
-- Pasi
> On Fri, Jun 18, 2010 at 4:42 AM, Pasi
On Thu, Jun 17, 2010 at 09:58:25AM -0700, Ray Van Dolson wrote:
> On Thu, Jun 17, 2010 at 09:54:59AM -0700, Ragnar Sundblad wrote:
> >
> > On 17 jun 2010, at 18.17, Richard Jahnel wrote:
> >
> > > The EX specs page does list the supercap
> > >
> > > The pro specs page does not.
> >
> > They do
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
> Well, I've searched my brains out and I can't seem to find a reason for this.
>
> I'm getting bad to medium performance with my new test storage device. I've
> got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the
On Fri, Jun 11, 2010 at 03:30:26PM -0400, Miles Nordin wrote:
> >>>>> "pk" == Pasi Kärkkäinen writes:
>
> >>> You're really confused, though I'm sure you're going to deny
> >>> it.
>
> >> I don
On Tue, Jun 08, 2010 at 08:33:40PM -0500, Bob Friesenhahn wrote:
> On Tue, 8 Jun 2010, Miles Nordin wrote:
>
>>> "re" == Richard Elling writes:
>>
>>re> Please don't confuse Ethernet with IP.
>>
>> okay, but I'm not. seriously, if you'll look into it.
>>
>> Did you misread where I said FC
On Thu, Jun 10, 2010 at 05:46:19AM -0700, Peter Eriksson wrote:
> Just a quick followup that the same issue still seems to be there on our
> X4500s with the latest Solaris 10 with all the latest patches and the
> following SSD disks:
>
> Intel X25-M G1 firmware 8820 (80GB MLC)
> Intel X25-M G2 f
On Fri, Jun 04, 2010 at 08:43:32AM -0400, Cassandra Pugh wrote:
>Thank you, when I manually mount using the "mount -t nfs4" option, I am
>able to see the entire tree, however, the permissions are set as
>nfsnobody.
>"Warning: rpc.idmapd appears not to be running.
> All u
On Tue, May 25, 2010 at 01:52:47PM +0100, Karl Pielorz wrote:
>
> --On 25 May 2010 15:28 +0300 Pasi Kärkkäinen wrote:
>
>>> I've tried contacting Intel to find out if it's true their "enterprise"
>>> SSD has no cache protection on it, and what the
On Tue, May 25, 2010 at 10:08:57AM +0100, Karl Pielorz wrote:
>
>
> --On 24 May 2010 23:41 -0400 rwali...@washdcmail.com wrote:
>
>> I haven't seen where anyone has tested this, but the MemoRight SSD (sold
>> by RocketDisk in the US) seems to claim all the right things:
>>
>> http://www.rocketdisk.
On Mon, May 17, 2010 at 03:12:44PM -0700, Erik Trimble wrote:
> On Mon, 2010-05-17 at 12:54 -0400, Dan Pritts wrote:
> > On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> > > Resilver does a whole lot of random io itself, not bulk reads.. It reads
> > > the filesystem tree, not "block
On Sat, May 15, 2010 at 11:01:00AM +, Marc Bevand wrote:
> I have done quite some research over the past few years on the best (ie.
> simple, robust, inexpensive, and performant) SATA/SAS controllers for ZFS.
> Especially in terms of throughput analysis (many of them are designed with an
> i
On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Robert Milkowski
> >
> > if you can disable ZIL and compare the performance to when it is off it
> > will give you an estim