Hello,

We've set up local replication between two zsan storage servers over a direct
1 Gb connection:

/dev/rdsk/c3t0d0s0      ->      zsanback.local:/dev/rdsk/c3t0d0s0
{6 more}
/dev/rdsk/c3t7d0s0      ->      zsanback.local:/dev/rdsk/c3t7d0s0

There is a ZFS filesystem being accessed over iSCSI. Recently, users have
started to complain about delays when accessing files.

Can somebody look into this magic number and explain why async_throttle_delay
slowly grows over time, and whether it might be related to the delays?

# kstat sndr:1:setinfo 15 | grep async_throttle_delay
        async_throttle_delay            17232313
        async_throttle_delay            17235445
        async_throttle_delay            17235445
        async_throttle_delay            17240204
        async_throttle_delay            17240204
        async_throttle_delay            17245600
        async_throttle_delay            17245600
        async_throttle_delay            17251474
        async_throttle_delay            17251474
        async_throttle_delay            17257441
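
Since kstat counters are cumulative, the absolute number matters less than how
fast it grows. A minimal sketch to watch the per-interval growth (kstat -p
prints one parseable "name value" line per sample, so awk can diff consecutive
samples; the 15-second interval just matches the run above):

# kstat -p sndr:1:setinfo:async_throttle_delay 15 | \
      awk '{ if (prev != "") print $2 - prev; prev = $2 }'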

# kstat sndr:1:setinfo
module: sndr                            instance: 1
name:   setinfo                         class:    storedge
        async_block_hwm         21069
        async_item_hwm          2439
        async_queue_blocks      17982
        async_queue_items       215
        async_queue_type        memory
        async_throttle_delay    17271404
        autosync                0
        bitmap                  /dev/md/rdsk/bmp1
        bitsset                 332
        bmpflags                0
        bmp_size                5713920
        crtime                  1301557.77839719
        disk_status             0
        flags                   6150
        if_down                 0
        if_rpc_version          7
        maxqfbas                16384
        maxqitems               4096
        primary_host            mainzsan.local
        primary_vol             /dev/rdsk/c3t1d0s0
        secondary_host          zsanback.local
        secondary_vol           /dev/rdsk/c3t1d0s0
        snaptime                2247328.51220685
        syncflags               0
        syncpos                 2925489887
        type_flag               5
        volsize                 2925489887
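
To see whether the async queue actually runs into the configured limits while
users see the delays, it may help to watch the live queue depth next to the
limits. A sketch (stat names are taken straight from the output above;
kstat -p and egrep are standard tools, the 15-second interval is arbitrary):

# kstat -p sndr:1:setinfo 15 | \
      egrep 'async_queue_(blocks|items)|maxqfbas|maxqitems'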

About these values:
maxqfbas        16384
maxqitems       4096

If I set them to higher values, I see async_block_hwm and async_item_hwm
increase respectively.
Does it make sense to raise them on a 1 Gb local connection?
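
For reference, I've been changing them per replication set with sndradm, the
set being named by its secondary host and volume. The values below are only
examples and the flags are from memory, so please check the sndradm man page
before relying on them:

# sndradm -F 32768 zsanback.local:/dev/rdsk/c3t1d0s0
# sndradm -W 8192 zsanback.local:/dev/rdsk/c3t1d0s0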

Typical read/write load is about 5 Mb/s, and sometimes it goes up to 40 Mb/s.
Right now I can't see a relationship between zpool I/O load spikes and the
access delays.
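
In case it helps, this is roughly how I'm trying to correlate the two,
watching replication and disk statistics side by side (dsstat is the AVS
statistics tool; I'm assuming the -m sndr mode selector is right, and the
5-second interval is arbitrary):

# dsstat -m sndr 5      (per-set SNDR throughput and service times)
# iostat -xn 5          (device-level service times on the zpool disks)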

-- 
Best regards,
Roman Naumenko
Network Administrator

[email protected]