To take advantage of SMT performance optimisations on AIX 5.3 you
require a minimum of

    a)  the jBASE 5.0.13 patch release (the current jBASE 5.0 patch
release is jBASE 5.0.16),

and

    b)  to specifically request the AIX 5.3 SMT version of jBASE 5.0
from Temenos Distribution.
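
A quick way to confirm the operating system side is in place is AIX's
smtctl utility, which reports and sets the SMT state; for example:

    # show the current SMT status for each processor
    smtctl

    # turn SMT on for the current boot and for subsequent reboots
    smtctl -m on

Note that smtctl only controls the OS; points a) and b) above are still
needed before jBASE itself will exploit the second hardware thread.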

Pat.

On 17 May, 03:44, aarcee74 aar <[email protected]> wrote:
> Thanks Jim for the reply. Yes, I did have a look at the settings, but I
> assume they are geared more towards AIX 5.2, and 5.3 has many changes
> compared to 5.2, especially with memory handling. So I was just asking
> whether any other changes are required with 5.3, related to jBASE.
>
> In the case of the SAN, we use EMC Symmetrix and CLARiiON disks, and I
> am sure that the performance is pretty good if we take jBASE out of the
> picture. A simple tar and compress gives me 60-70 MB/s, and plain "dd"
> commands give me a similar throughput. I need to verify everything from
> the server side before going to EMC for clarifications...!
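
A raw sequential test of the sort described above might look like the
following; the mount point and sizes are only illustrative:

    # time a 4 GB sequential write to the SAN file system
    time dd if=/dev/zero of=/san/jbase/ddtest bs=1024k count=4096

    # time a sequential read of the same file back
    time dd if=/san/jbase/ddtest of=/dev/null bs=1024k

    rm /san/jbase/ddtest

AIX's dd does not report throughput itself, so divide the bytes moved
by the elapsed time reported by time(1).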
>
> Is there any direct relationship between the modulo used to create a
> file and the I/O it generates when the file is accessed?
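
Broadly, yes. In a hashed file the record key hashes modulo the file's
modulo to select a group, so a well-sized modulo gives roughly one group
read per record access, while an undersized modulo lets groups overflow
and forces extra reads on every access. If your release includes the
jstat utility you can inspect the layout; the file name here is
hypothetical:

    # show hashed-file statistics: modulo, record counts, oversized groups
    jstat -v CUSTOMER

A file whose modulo is far too small for its record count is a prime
suspect for unexplained read amplification.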
>
> Also, one more query: we have found that jBASE/T24 is not making use of
> SMT, the second logical thread on each CPU. Does something need to be
> recompiled?
>
> Regards
> RC
>
> On Fri, May 15, 2009 at 1:33 AM, Jim Idle <[email protected]> wrote:
>
> > rc wrote:
> > > Hi,
>
> > > I would be interested to know if there are any specific performance
> > > settings at the server level for jBASE 5.0.1.5.
> > Did you look at the AIX tuning guide on the web site for this group?
>
> > > We use AIX 5.3 and we
> > > have jBASE on p570s, p590s and p595s. On some of the benchmarking
> > > tests we did find the I/O to the SAN storage horribly slow.
> > See many past posts, but SAN arrays are usually pathetic in my
> > experience and you are better off with a really good local array. But
> > if you are seeing really bad performance, look to tuning the SAN access
> > patterns rather than the local system per se.
> > > The
> > > throughput is not even going beyond 1 MB/s for some of the jobs. The
> > > environment is on T24 and the logical volume is striped across 12 hard
> > > disks with a 4K stripe width.
>
> > A 4K stripe does not sound very good; usually you want something like
> > 8MB because you will benefit from track reads on individual disks. Who
> > is advising you about your SAN array? Also, on SANs you usually cannot
> > know how the disks are organized when viewing from the OS - do you mean
> > you have configured your SAN to do that with physical disks?
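
For comparison, a striped logical volume with a larger strip size could
be created on AIX 5.3 along these lines; the volume group, LV name, size
and disk list are illustrative, and you should confirm the strip-size
limits on your maintenance level:

    # 12-way striped LV with an 8 MB strip size, 512 logical partitions
    mklv -y t24datalv -S 8M datavg 512 hdisk1 hdisk2 hdisk3 hdisk4 \
         hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11 hdisk12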
> > > If not I/O, where else can the bottlenecks be?
>
> > > Appreciate a reply to this.
> > You need to supply a lot more information to get any help, basically.
> > Sounds like your SAN is poorly configured, but you don't say what SAN
> > you are using (or why you think a SAN is a good idea ;-), or what tests
> > you ran before trying jBASE on the SAN - I presume you have run
> > some standard disk IO benchmarks, right? Take jBASE out of the equation
> > and get your disk IO performing - you want a balance of sequential and
> > random performance, with a lean towards sequential read if you are
> > concerned about batch jobs (but generally the problem lies with people's
> > batch jobs and the use of SSELECT over SELECT and so on).
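
To illustrate the last point: in jQL, SELECT builds an unsorted active
list, whereas SSELECT must read the candidate items and sort them before
the list exists, which costs far more I/O on large files. The file and
field names below are hypothetical:

    SELECT INVOICES WITH STATUS EQ "OPEN"
    SSELECT INVOICES WITH STATUS EQ "OPEN" BY CUST.NO

Where the subsequent processing does not depend on order, the plain
SELECT avoids the sort entirely.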
>
> > Jim