I have a customer who is currently looking at standing up four 8-node RHCS
clusters, each of which will have an 8TB GFS2 file system.  RHEL 5.3 32-bit;
VMware host virtualization; fence_vmware_vi.py fencing scripts.  The clusters
and fencing are all working.

I am looking to get some GFS2 tuning recommendations for these file systems.
They will contain directory structures and files similar to the following
configurations; the figures below are rough estimates from the application
vendor.  Currently we are looking at GFS2 partitions set up with the default
settings: the default i386 block size and 8 journals.
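
For reference, here is roughly how each file system would be created with
those defaults (the device path and cluster/file system names below are
placeholders, not the actual values):

mkfs.gfs2 -p lock_dlm -t <clustername>:<fsname> -j 8 /dev/<vg>/<lv>

Omitting -b leaves the default 4096-byte block size, and -j 8 gives one
journal per node for an 8-node cluster.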

Chiliad Raw Data (html/xml) files:
Estimated # of directories: up to 100k
Typical FS Layout: /datastore/chiliad/extract/<form>/<year>/<month>/<day of month>/<bin>/*.html
Number of files in a directory: min=1,000, max=10,000, avg=1,000
File size: min=1KB, max=2MB, avg=5KB
Directory depth: min=1, max=25, avg=5

Chiliad Index files:
Estimated # of directories: thousands
Number of files in a directory: min=5, max=30, avg=15-20
File size: min=1KB, max=2GB, avg=1-2GB
Directory depth: min=1, max=10, avg=5

XXXXX:(/root)# gfs2_tool gettune /datastore/
new_files_directio = 0
new_files_jdata = 0
quota_scale = 1.0000   (1, 1)
quotad_secs = 5
logd_secs = 1
recoverd_secs = 60
statfs_quantum = 30
stall_secs = 600
quota_cache_secs = 300
quota_simul_sync = 64
statfs_slow = 0
reclaim_limit = 5000
complain_secs = 10
max_readahead = 262144
atime_quantum = 3600
quota_quantum = 60
quota_warn_period = 10
jindex_refresh_secs = 60
log_flush_secs = 60
incore_log_blocks = 1024
demote_secs = 300
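
If we end up adjusting any of these, my understanding is that they can be
changed at runtime with settune, e.g. (the tunable and value here are just
illustrative, not a recommendation):

gfs2_tool settune /datastore demote_secs 600

As far as I know these settings do not persist across a remount, so whatever
values we settle on would need to be reapplied each time the file system is
mounted.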

Thank you,

Michael Hayes 
Red Hat 
[email protected] 

