I have a zpool on a 128GB SATA SSD behind an LSI 9211-8i that I use for builds, and I have no problems with it. I'm running DilOS with the mpt_sas updates from illumos-nexenta, but I'm using only one SSD drive in the zpool. It is configured with lz4 compression.
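For reference, a pool like that can be set up with something along these lines (a sketch only; the pool name "builds" and the device c2t0d0 are placeholders, not the actual names on this system):

    # Create a single-disk pool on the SSD, then enable lz4 pool-wide.
    # "builds" and c2t0d0 are placeholder names.
    zpool create builds c2t0d0
    zfs set compression=lz4 builds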
I think illumos builds load it with a lot of IOPS: I'm running parallel builds from several different source directories.

--
Best regards,
Igor Kozhukhov

From: "Andrew M. Hettinger" <[email protected]>
Reply-To: OpenIndiana Developer mailing list <[email protected]>
Date: Tuesday 14 October 2014 03:12
To: OpenIndiana Developer mailing list <[email protected]>
Subject: Re: [oi-dev] SSD-based pools

> Ok,
>
> I've now tried to run this on another machine (no hardware in common,
> save the SSDs), with interlopers, and it is still generating checksum
> errors. I'm using iozone to load it up for testing (I was initially
> asked to do this to demonstrate how much of an improvement we could
> expect).
>
> I was wondering if you guys had any more thoughts, or could do a
> similar test?
>
> Andrew Hettinger
> http://Prominic.NET | Skype: AndrewProminic
> Tel: 866.339.3169 (toll free) -or- 1.217.356.2888 x. 110 (int'l)
> Fax: 866.372.3356 (toll free) -or- 1.217.356.3356 (int'l)
>
> "Schweiss, Chip" <[email protected]> wrote on 09/30/2014 07:49:44 AM:
>
>> From: "Schweiss, Chip" <[email protected]>
>> To: OpenIndiana Developer mailing list <[email protected]>
>> Date: 09/30/2014 07:58 AM
>> Subject: Re: [oi-dev] SSD-based pools
>>
>> On Mon, Sep 29, 2014 at 6:14 PM, Andrew M. Hettinger
>> <[email protected]> wrote:
>>
>>> Bob Friesenhahn <[email protected]> wrote on
>>> 09/29/2014 05:57:26 PM:
>>>
>>>> How would ZFS know if the data stored is "incorrect" from the
>>>> user's perspective?
>>>
>>> Presumably because the checksum is wrong.
>>
>> Exactly: if the data returned from the SSD is incorrect, ZFS will
>> detect it via the checksum. It will then rebuild the block from the
>> raidz1 parity; if that fails, it will return a data read error.
>>
>> In relation to the original topic: with 163 days of uptime, the
>> scratch pool has had zero checksum errors. It gets completely
>> rewritten about twice a week on average. At its peak usage it was
>> rewritten daily for about 45 days.
>>
>> root@hcp-iops1:~# uptime
>>   07:44am  up 163 days 20:57,  1 user,  load average: 0.13, 0.15, 0.16
>> root@hcp-iops1:~# zpool status scratch
>>   pool: scratch
>>  state: ONLINE
>>   scan: scrub repaired 0 in 7h59m with 0 errors on Sat Sep  6 00:59:21 2014
>> config:
>>
>>         NAME                         STATE     READ WRITE CKSUM
>>         scratch                      ONLINE       0     0     0
>>           raidz1-0                   ONLINE       0     0     0
>>             c1t500253855035D1B1d0s0  ONLINE       0     0     0
>>             c1t500253855035D12Fd0s0  ONLINE       0     0     0
>>             c1t500253855035D114d0s0  ONLINE       0     0     0
>>             c1t500253855035D10Ed0s0  ONLINE       0     0     0
>>             c1t500253855035D109d0s0  ONLINE       0     0     0
>>           raidz1-1                   ONLINE       0     0     0
>>             c1t500253855035D1C1d0s0  ONLINE       0     0     0
>>             c1t500253855035D1C0d0s0  ONLINE       0     0     0
>>             c1t500253855035D1BFd0s0  ONLINE       0     0     0
>>             c1t500253855035D1BEd0s0  ONLINE       0     0     0
>>             c1t500253855035D1B5d0s0  ONLINE       0     0     0
>>           raidz1-2                   ONLINE       0     0     0
>>             c1t500253855035D1E3d0s0  ONLINE       0     0     0
>>             c1t500253855035D1E1d0s0  ONLINE       0     0     0
>>             c1t500253855035D1C8d0s0  ONLINE       0     0     0
>>             c1t500253855035D1C6d0s0  ONLINE       0     0     0
>>             c1t500253855035D1C3d0s0  ONLINE       0     0     0
>>           raidz1-3                   ONLINE       0     0     0
>>             c1t500253855035D8C0d0s0  ONLINE       0     0     0
>>             c1t500253855035D8BDd0s0  ONLINE       0     0     0
>>             c1t500253855035D1F6d0s0  ONLINE       0     0     0
>>             c1t500253855035D1E6d0s0  ONLINE       0     0     0
>>             c1t500253855035D1E5d0s0  ONLINE       0     0     0
>>           raidz1-4                   ONLINE       0     0     0
>>             c1t500253855035D8C7d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C6d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C3d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C2d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C1d0s0  ONLINE       0     0     0
>>           raidz1-5                   ONLINE       0     0     0
>>             c1t500253855035E2F6d0s0  ONLINE       0     0     0
>>             c1t500253855035E2F5d0s0  ONLINE       0     0     0
>>             c1t500253855035E2ECd0s0  ONLINE       0     0     0
>>             c1t500253855035E2EBd0s0  ONLINE       0     0     0
>>             c1t500253855035E2D7d0s0  ONLINE       0     0     0
>>           raidz1-6                   ONLINE       0     0     0
>>             c1t500253855035F484d0s0  ONLINE       0     0     0
>>             c1t500253855035F483d0s0  ONLINE       0     0     0
>>             c1t500253855035F480d0s0  ONLINE       0     0     0
>>             c1t500253855035F472d0s0  ONLINE       0     0     0
>>             c1t500253855035F46Fd0s0  ONLINE       0     0     0
>>           raidz1-7                   ONLINE       0     0     0
>>             c1t5002538550363742d0s0  ONLINE       0     0     0
>>             c1t500253855036373Ed0s0  ONLINE       0     0     0
>>             c1t50025385503633BDd0s0  ONLINE       0     0     0
>>             c1t5002538550363164d0s0  ONLINE       0     0     0
>>             c1t500253855035F489d0s0  ONLINE       0     0     0
>>           raidz1-8                   ONLINE       0     0     0
>>             c1t500253855036378Ad0s0  ONLINE       0     0     0
>>             c1t5002538550363789d0s0  ONLINE       0     0     0
>>             c1t5002538550363786d0s0  ONLINE       0     0     0
>>             c1t500253855036374Cd0s0  ONLINE       0     0     0
>>             c1t500253855036374Bd0s0  ONLINE       0     0     0
>>           raidz1-9                   ONLINE       0     0     0
>>             c1t500253855035D1F4d0s0  ONLINE       0     0     0
>>             c1t500253855035D1ECd0s0  ONLINE       0     0     0
>>             c1t500253855035D1E2d0s0  ONLINE       0     0     0
>>             c1t500253855035D1DAd0s0  ONLINE       0     0     0
>>             c1t500253855035D1B2d0s0  ONLINE       0     0     0
>>           raidz1-10                  ONLINE       0     0     0
>>             c1t500253855035D12Dd0s0  ONLINE       0     0     0
>>             c1t500253855035D8C8d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C5d0s0  ONLINE       0     0     0
>>             c1t500253855035D8C4d0s0  ONLINE       0     0     0
>>             c1t500253855035D1F8d0s0  ONLINE       0     0     0
>>           raidz1-11                  ONLINE       0     0     0
>>             c1t5002538550363793d0s0  ONLINE       0     0     0
>>             c1t500253855035E2DBd0s0  ONLINE       0     0     0
>>             c1t500253855035E2DAd0s0  ONLINE       0     0     0
>>             c1t500253855035E2D9d0s0  ONLINE       0     0     0
>>             c1t500253855035D12Ed0s0  ONLINE       0     0     0
>>         spares
>>           c1t5002538550363794d0s0    AVAIL
>>           c1t5002538550363797d0s0    AVAIL
>>           c1t500253855035E2D8d0s0    AVAIL
>>
>> errors: No known data errors
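To exercise the checksum path described above on demand, a scrub re-reads every allocated block, verifies it against its checksum, and repairs from raidz1 parity where it can (using the pool name from the output above):

    # Re-read and verify every block in the background.
    zpool scrub scratch
    # The CKSUM column counts blocks that failed verification.
    zpool status -v scratch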
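The iozone load mentioned in the quoted thread can be approximated with an invocation along these lines (a sketch only; the file size, record size, and target path are placeholders, not the parameters actually used above):

    # -i 0 = write/rewrite test, -i 1 = read/reread test
    # -s file size, -r record size, -f test file on the pool under test
    iozone -i 0 -i 1 -s 1g -r 128k -f /scratch/iozone.tmp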
_______________________________________________
oi-dev mailing list
[email protected]
http://openindiana.org/mailman/listinfo/oi-dev
