Jim,

> Well, you are now performing reads over 25 individual files, so your
> performance will be nothing like dd from a pseudo device. In this case it is
> not the sizing so much as you won't have that file in memory. Also, is that
> just a straight SELECT with no criteria, as you imply? Is the select list
> coming back in the natural order of the part files, and the natural order of
> the keys in each part file?
>



Yeah, that's a straight SELECT FBNK.CATEG.ENTRY query I'm issuing.
The records are distributed according to one digit of the ID, I think; I'll
ask the programmer about that.
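Just to make sure I understand the idea, here is a sketch of what that routing might look like. The digit position and part-file naming are my assumptions, not the actual FBNK.CATEG.ENTRY distribution code:

```shell
#!/bin/sh
# Assumption: each record ID is routed to a part file by its last digit.
# The real FBNK.CATEG.ENTRY distribution may use a different digit/naming.
part_for_id() {
    last_digit=$(printf '%s' "$1" | tail -c 1)
    echo "FBNK.CATEG.ENTRY.$last_digit"
}
part_for_id "2010012345"   # prints FBNK.CATEG.ENTRY.5
```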
Jim, I also tried:

> > jstat -v FBNK.CATEG.ENTRY.NEW.19
> > File ../bnk.data/ac/FBNK.CATE038
> > Type=J4 , Hash method = 5
>
> Try hashmethod=2

I'm not familiar with hash methods, so I'll look it up and give it a try.

> > 3) "how many ranks?"i m not familiar with the ranks term.
> > 4)i have been looking for the stripe size for some time but i cannot find
> > anything like that on the storage.

> Look at the physical volumes and so on using smitty, it will tell you.

Apparently I can't find anything related to the term "ranks" in smitty.

> > you are probably asking about the raid
> > stripe size of the array. (segment size is 128KB which is also another
> relevant
> > parameter but adjusting it to 256KB some time ago didnt really gain a
> > significant advantage.)
>
> jBASE is reading in 4096 byte blocks.


So 4K is the logical block size of jBASE?
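If so, it would be worth timing the SAN at 4 KB reads directly, since dd at a big block size won't show that behaviour. A rough sketch (the /tmp paths are placeholders and should point at the SAN filesystem; two separate files are used, and freshly written files may still be cached, so remount first for a cold-cache number):

```shell
#!/bin/sh
# Compare sequential read throughput at 4 KB (jBASE's read size) vs 1 MB.
# /tmp paths are placeholders - point them at the SAN filesystem.
A=/tmp/blk_test_a; B=/tmp/blk_test_b
dd if=/dev/zero of="$A" bs=1024k count=100 2>/dev/null
dd if=/dev/zero of="$B" bs=1024k count=100 2>/dev/null
sync
echo "4 KB blocks:"; time dd if="$A" of=/dev/null bs=4k    2>/dev/null
echo "1 MB blocks:"; time dd if="$B" of=/dev/null bs=1024k 2>/dev/null
rm -f "$A" "$B"
```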


> Have you tuned AIX and the filesystem?

Not really, but it seems that MALLOCTYPE=buckets may be giving a
minor advantage.

>
> Turn off journaling on your jfs file system - this is doing a lot of work
> that is of no benefit to your jBASE files.
>


I'll look into this as well; an IBM guy will be visiting tomorrow :)
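In case it helps for that visit: on jfs2 the filesystem log can apparently be disabled per mount, something like the sketch below. The device and mount point are placeholders for whatever our data filesystem really is, and I'd check with the IBM engineer before doing it on a live system:

```shell
# Placeholders: substitute the real logical volume and mount point.
umount /bnk
mount -o log=NULL /dev/datalv /bnk   # jfs2 mounted with no journal log
```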

>
> I suspect that the combination of your tests is just exposing that your SAN
> is either misconfigured or just not very good. I have yet to meet a SAN
> system that I didn't hate. The only people that get them to work nicely,
> seem to employ someone full time to tune it and rebalance it and things. It
> is like the billionaire with two Lamborghinis because after driving one of
> them for a day it needs a day's worth of tune up.


It's probably the fact that it's not a top-range IBM SAN, or there isn't
really a lot to configure.
The IBM guys say it works fine and there's nothing else to be done: they
try dd with a large block size, it hits around 500+ MB/sec, so all good.
It's easy to blame jBASE for 5 MB/sec after that :|
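Next time they visit, it might be worth showing them that a streaming dd and a SELECT are very different workloads. Something like this sketch (placeholder path; pseudo-random 4 KB reads stand in for the database access pattern) should make the gap visible on their own terms:

```shell
#!/bin/sh
# Sequential 1 MB reads vs scattered 4 KB reads over the same 128 MB file.
# /tmp/seek_test is a placeholder - create it on the SAN filesystem.
dd if=/dev/zero of=/tmp/seek_test bs=1024k count=128 2>/dev/null
sync
echo "sequential, 1 MB blocks:"
time dd if=/tmp/seek_test of=/dev/null bs=1024k 2>/dev/null
echo "scattered, 4 KB blocks (500 pseudo-random seeks):"
time sh -c 'i=0
while [ $i -lt 500 ]; do
    # 7919 is coprime with 32768, so this walks the file pseudo-randomly
    dd if=/tmp/seek_test of=/dev/null bs=4k count=1 skip=$(( (i * 7919) % 32768 )) 2>/dev/null
    i=$((i + 1))
done'
rm -f /tmp/seek_test
```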


> > But if speeds of like 10Mb/sec were mentioned i would worry a bit less :).
>
> That is slow, and I can get about 70MBs on my 4x SATA II physical hard
> drives on my Linux box (that's not the SSD drives)


So Jim, you get 70 MB/sec through topas just by running a SELECT on a
file? That sounds amazing to me.
By the way, just to make sure... I'm using both topas and nmon to take
the performance readings, so I'm looking at the relevant KB-read figures
in the disk usage lines.


> > i just want have a view about hows other people running in general cause i
> > havent got really anybody else to compare and talk about it.
>
> Sure - this is the place for that. I think you need to find a way to do the
> same test on a local array (you probably have some local disks on your
> system - see if you can hijack enough space to do the same test on a local
> system. My guess is that without doing anything, you will find it is much
> faster and that the SAN is just good at dd ;-)


OK, I did that and the results were bad again. It was only one internal
SAS HDD, I think 10K RPM (mirrored to another one).
The average read speeds were under 2 MB/sec with a maximum of around
4 MB/sec. I also ran a COB and it was painfully slow :)
So the local disk doesn't seem like an option. I'm sure that with 5-6
HDDs it would be a lot better, but I wasn't able to get more disks.


> Get some results on the same machine without the SAN involved. I bet that it
> shows you that the SAN needs configuring for jBASE (without fail, that has
> been the case at every [well, both -SANS were not around in MDIS days ;)]
> database company I worked at.
>
> Also, remove the distributed file overhead and go for a straight SELECT on a
> single file. Start at say 100MB and go up in increasing sizes of 100MB. I
> also bet that at some point the SAN array drops off like a stone.


I also tried selecting one part file (SELECT FBNK.CATEG.ENTRY.19) and
the difference from selecting FBNK.CATEG.ENTRY is minor.
In fact I get an average read speed of 2.4 MB/sec when selecting
FBNK.CATEG.ENTRY and 2.9 MB/sec when selecting only the single part
file, so removing the distributed-file overhead seems to make little
difference.
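I'll also try your size sweep, something like this sketch (placeholder path; 4 KB reads to match the jBASE block size; the size list extends upward in 100 MB steps until throughput collapses):

```shell
#!/bin/sh
# Time a 4 KB-block sequential read over files of increasing size
# and watch where the SAN's throughput drops off. /tmp is a placeholder.
for mb in 100 200 300; do   # extend the list upward as needed
    dd if=/dev/zero of=/tmp/sweep.$mb bs=1024k count=$mb 2>/dev/null
    sync
    echo "=== ${mb} MB ==="
    time dd if=/tmp/sweep.$mb of=/dev/null bs=4k 2>/dev/null
    rm -f /tmp/sweep.$mb
done
```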

Thanks.

-- 
Please read the posting guidelines at: 
http://groups.google.com/group/jBASE/web/Posting%20Guidelines

IMPORTANT: Type T24: at the start of the subject line for questions specific to 
Globus/T24

To post, send email to [email protected]
To unsubscribe, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/jBASE?hl=en
