As you can see, we use Toshiba PX04SHB020 drives as SLOG devices. Each is 200 GB,
but we reduce the usable size of the disk to extend its life (increase its DWPD).
We do it as follows:

root@znstor1-n1:~# sg_format /dev/rdsk/c0t500003979C88FF79d0
    TOSHIBA   PX04SHB020        0106   peripheral_type: disk [0x0]
      << supports protection information>>
Mode Sense (block descriptor) data, prior to changes:
  Number of blocks=390721968 [0x1749f1b0]
  Block size=512 [0x200]
Read Capacity (10) results:
   Number of logical blocks=390721968
   Logical block size=512 bytes
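The --count value used in the resize step below, 0x1749f1b, is simply the
original 512-byte block count (0x1749f1b0) divided by 16: the new 4096-byte
blocks are 8 times larger, so 16 times fewer of them leaves only half of the
raw 200 GB (about 100 GB) visible, with the rest held back as extra
over-provisioning. A minimal sketch of the arithmetic, assuming a shell with
64-bit arithmetic (bash or ksh93); the 50% target is just our choice, and the
numbers are for this particular drive:

# sketch only: compute a resize count that exposes 50% of the raw capacity
ORIG_BLOCKS=390721968   # 512-byte blocks reported by sg_format above
ORIG_BS=512             # current logical block size
NEW_BS=4096             # block size after 'sg_format --size 4096 --format'
TARGET_PCT=50           # fraction of raw capacity to keep visible
NEW_COUNT=$(( ORIG_BLOCKS * ORIG_BS * TARGET_PCT / 100 / NEW_BS ))
printf 'resize count: %d (0x%x)\n' "$NEW_COUNT" "$NEW_COUNT"
# prints: resize count: 24420123 (0x1749f1b), i.e. about 100 GB visible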
root@znstor1-n1:~# sg_format --size 4096 --format /dev/rdsk/c0t500003979C88FF79d0
    TOSHIBA   PX04SHB020        0106   peripheral_type: disk [0x0]
      << supports protection information>>
Mode Sense (block descriptor) data, prior to changes:
  Number of blocks=390721968 [0x1749f1b0]
  Block size=512 [0x200]

A FORMAT will commence in 10 seconds
    ALL data on /dev/rdsk/c0t500003979C88FF79d0 will be DESTROYED
        Press control-C to abort

A FORMAT will commence in 5 seconds
    ALL data on /dev/rdsk/c0t500003979C88FF79d0 will be DESTROYED
        Press control-C to abort

Format has started
FORMAT Complete

root@znstor1-n1:~# sg_format --resize --count=0x1749f1b /dev/rdsk/c0t500003979C88FF79d0
    TOSHIBA   PX04SHB020        0106   peripheral_type: disk [0x0]
      << supports protection information>>
Mode Sense (block descriptor) data, prior to changes:
  Number of blocks=48840246 [0x2e93e36]
  Block size=4096 [0x1000]
Resize operation seems to have been successful

root@znstor1-n1:~# update_drv -f sd

Every JBOD has the following layout (24 disks total):
  2 disks for a mirrored SLOG
  1 disk for L2ARC
  1 disk as a spare
  2 x raidz2 (8D + 2P)
(an illustrative zpool layout for this arrangement is sketched at the end of this message)

I have not observed more than 3 GB of used space on any SLOG mirror group.

On Fri, Sep 29, 2017 at 23:14, sergey ivanov <serge...@gmail.com> wrote:
> Thanks, Artem, it is a very good list of suitable hardware!
> I have a question about the ZIL.
> I read that the SSD for it should be about 8 GB. I did not see more than one
> gigabyte allocated for the ZIL on our servers. Can you comment on that?
> --
> Sergey.
>
> Regards,
> Sergey Ivanov
>
>
> On Thu, Sep 28, 2017 at 3:25 AM, Artem Penner <apenner...@gmail.com> wrote:
> > Hi, Sergey.
> > Maybe the following information will be useful for you.
> > We use Solaris as an HA NAS in the following configuration:
> >
> > DATA_DISKS: HUC101818CS4204 / HUH728080AL5204
> > SLOG: PX04SHB020 / PX05SMB040 / HUSMH8020BSS204 (PX04SHB020 has the best
> > performance, see
> > https://dhelios.blogspot.ru/2016/11/ssd-hgst-husmh8020bss204_11.html)
> > L2ARC: HUSMH8080BSS204 / PX05SMB080
> >
> > Servers:
> > 2 x Cisco UCS 240M4 (2 x E5-2697A, 768 GB RAM, 3 local hard disks for the OS,
> > 2 x UCSC-SAS9300-8E, 2 x Intel X520 Dual Port 10Gb SFP+ Adapter)
> >
> > JBODs:
> > 216BE2C-R741JBOD - for SFF disks
> > SC847E2C-R1K28JBOD - for LFF disks
> >
> > We have two HA variants:
> > 1) Solaris 11.3 + Solaris Cluster 4.3 (for VMware)
> > 2) OmniOS + RSF-1 (as Cinder storage for OpenStack)
> >
> > If you need any additional info, I'll be glad to share any information
> > that I have.
> >
> > On Wed, Sep 27, 2017 at 22:39, sergey ivanov <serge...@gmail.com> wrote:
> >>
> >> Hi,
> >> as the end-of-life of r151014 approaches, we are planning an upgrade of
> >> our NFS servers.
> >> I'm thinking about 2 servers providing iSCSI targets, and 2 other
> >> OmniOS servers using these iSCSI block devices in a mirrored zpool
> >> setup. The IP address for the NFS service can be a floating IP between
> >> those 2 servers.
> >> I have the following questions:
> >> 1. Are there any advantages to having separate iSCSI target servers and
> >> NFS servers, or should I rather combine an iSCSI target and an NFS
> >> server on each of the 2 hosts?
> >> 2. I do not want snapshots, checksums, and other ZFS features for the
> >> block devices at the level where they are exported as iSCSI targets;
> >> I would prefer these features at the level where these block devices
> >> are combined into mirrored zpools. Maybe it's better to have these iSCSI
> >> target servers running some less advanced OS, and have 2 Linux boxes?
> >> 3. But if I have SSDs for the intent log and for the cache, maybe they
> >> can improve performance for the zvols used as block devices for the
> >> iSCSI targets?
> >>
> >> Does anybody have experience setting up such redundant NFS servers?
> >> --
> >> Regards,
> >> Sergey Ivanov
> >> _______________________________________________
> >> OmniOS-discuss mailing list
> >> OmniOS-discuss@lists.omniti.com
> >> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
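As mentioned above, here is an illustrative zpool layout for one such 24-disk
JBOD (2 x raidz2 of 10 disks each, a mirrored SLOG, one L2ARC device, one
spare). This is only a sketch: the cXtYdZ device names are placeholders, not
our real device paths, and the actual pool and disk names will differ:

zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
  raidz2 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 \
  log mirror c1t20d0 c1t21d0 \
  cache c1t22d0 \
  spare c1t23d0

After that, 'zpool iostat -v' (or 'zpool list -v') shows per-vdev allocation,
including the log mirror, which is one way to check how much SLOG space is
actually in use.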
_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss