Ben wrote:
 Jim wrote:
  
You can try to improve the performance by:

1) Defragmenting the jBASE files by jrf'ing them, or, if they are being
resized too big (which seems to be the case lately), writing a
script to dd them to new copies and then moving them back over the
originals, using a large block size with dd (to give a good chance that
the filesystem will allocate contiguous disk).
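That dd step could be sketched roughly like this (the file name, demo
contents, and block size are illustrative, not from Ben's system):

```shell
# Hedged sketch: copy a jBASE file with a large dd block size so the
# filesystem is more likely to allocate contiguous extents, then move
# the copy back over the original. "CUSTOMER" is a stand-in name.
set -e
F=CUSTOMER
printf 'demo record data\n' > "$F"    # demo contents, for illustration only
dd if="$F" of="$F.tmp" bs=1048576     # 1 MB blocks, written as a byte count
mv "$F.tmp" "$F"                      # replace the original with the copy
cat "$F"
```

Run it per file (with jBASE shut down, so nothing has the file open).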
    

Ben> Before I run COB, I refresh the environment, so I think we don't
have this problem.

  
Be careful here, because that assumes that your environment has:

a) A properly defragmented file system (within the bounds that this can be done);
b) Correctly sized files;
c) Files that are themselves contiguously allocated on the disk.

If by refresh you mean you untar from a save, then c) will be taken care of. If you don't mean that, then I would:

a) Resize all the files;
b) Make a tar (maybe two, just in case ;-)
c) rm all the files in your environment file system;
d) Untar those files (gives contiguous disk allocation for the jBASE files);
e) Run the file system defrag tool (AIX, right?);
f) Save the environment.

This way you know it is all good.
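The tar/rm/untar cycle in steps b) to d) might look like the sketch
below (the directory name and demo file are hypothetical; jrf and the
AIX defrag tool are left as comments since they are jBASE/AIX specific):

```shell
set -e
ENVDIR=./envdemo                      # stand-in for the real environment path
mkdir -p "$ENVDIR"
printf 'demo\n' > "$ENVDIR/CUSTOMER"  # stand-in jBASE file for the demo
# a) resize the files first (jrf, from inside jBASE) - not shown here
# b) make a tar (maybe two, just in case)
tar -cf env1.tar -C "$ENVDIR" .
cp env1.tar env2.tar
# c) rm all the files in the environment file system
rm -rf "$ENVDIR"/*
# d) untar - the fresh writes give contiguous disk allocation
tar -xf env1.tar -C "$ENVDIR"
# e) on AIX, run the filesystem defrag tool here (e.g. defragfs)
# f) then take a fresh save of the environment
ls "$ENVDIR"
```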

  
2) Make sure the files are sized correctly, of course;
3) Take into account the fact that there is probably transaction
journaling going on;
    

Ben> I checked the account and customer.account tables. They are set to
use LOG. When I run jlogstatus to check, it reports that we don't have
the license to run it on our benchmark server, so that means we don't
use the jBASE transaction journal, right? In production, they are using
the transaction journal. The transaction journal log file uses a
separate LV and PV.

  
Yes, you are not running logging (sounds like "yes, we have no bananas", I guess).

  
4) Ask for advice from TEMENOS and the SAN supplier on tuning the SAN
for jBASE access patterns;
    

  
I would still do this bit, BTW - the SAN guys will know how to set it up for random read and write, with some files being read sequentially.

  
First though, make your benchmarks with a local array, to give you a
base point for comparison. Then make one change at a time and re-run the
benchmarks.
    

Ben> I have tested it. If I use the file system in rootvg, the
throughput is only 20 MB/s. Using the SAN, I can get 400 MB/s
(random write) with the disk IO simulation tool.
  
Is this iostone? Did you make the test file big enough to make sure the SAN was being accessed? The rootvg isn't a good test because it is likely just a single disk. I was hoping you had enough disks to make a local array for comparison.

Also, what do you get with:

1) jBASE sequential on rootvg;
2) jBASE random on rootvg;
3) IO test random on rootvg;
4) IO test random on the SAN?

Without seeing all the results, including jBASE on the SAN, and the test program you are using for jBASE and IO tests, I am guessing. You need to have every configuration of every test documented with results. You may be just seeing the difference between what COB can achieve with jBASE files and what raw IO can do (there is usually quite a huge difference).
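For the sequential cases, a crude number is easy to get with dd (the
scratch file name and size here are illustrative; in a real run use a
file much larger than RAM so you measure the disk and not the cache,
and use a proper IO tool for the random cases):

```shell
set -e
TESTFILE=./seqtest.tmp              # hypothetical scratch file
# Sequential write of 64 MB in 1 MB blocks; 'time' this run (or read
# dd's own summary line) for a rough MB/s figure on the filesystem
# under test - repeat once on rootvg and once on the SAN.
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=64
wc -c < "$TESTFILE"                 # confirm the full 64 MB was written
# remember to rm "$TESTFILE" when you are done
```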

Ben> This weekend, we will run the benchmark again. God bless
us.  :)
  
Sorry about your woes!

Jim

--~--~---------~--~----~------------~-------~--~----~
Please read the posting guidelines at: http://groups.google.com/group/jBASE/web/Posting%20Guidelines

IMPORTANT: Type T24: at the start of the subject line for questions specific to Globus/T24

To post, send email to [email protected]
To unsubscribe, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/jBASE?hl=en
-~----------~----~----~----~------~----~------~--~---