This might be more related to IP processing of the connection and
transfer of the data between z/OS and Linux. If you are using z/VM or
HiperSockets, put the link between Linux and z/OS on its own network
segment with a large MTU size.  There should be fewer trips to move the
data.  With a separate segment, other network traffic (e.g. telnet)
shouldn't be affected.
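
If you want to confirm that the large MTU actually took effect on the
Linux side, a quick check is easy. What follows is just a sketch in
Python; hsi0 is a guess at the HiperSockets interface name, so
substitute whatever yours is really called:

    # Report an interface's MTU via sysfs to verify the large-MTU
    # segment is really in use.
    IFACE = "hsi0"  # hypothetical HiperSockets interface name

    with open("/sys/class/net/%s/mtu" % IFACE) as f:
        mtu = int(f.read().strip())
    print("%s MTU is %d" % (IFACE, mtu))
    if mtu < 8192:
        print("MTU looks small; bigger frames mean fewer trips.")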

On Tue, 2004-01-06 at 08:56, Herve Bonvin wrote:
> We are also trying to save some money by moving data from z/OS DB2 to z/Linux DB2.
>
> I did some tests, and in some cases (when a lot of data is selected), z/OS
> uses more CPU for a remote access than for a local access to z/OS DB2. We
> think this is because of the EBCDIC/ASCII conversion.
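>
> As a rough illustration of where that CPU can go, here is a small Python
> sketch using the cp037 EBCDIC codec as a stand-in (the code page and the
> row contents are invented for the example; this is not our actual test):
>
>     import time
>
>     # Round-trip a typical row's worth of bytes between EBCDIC (cp037)
>     # and ASCII, the kind of conversion a cross-codepage flow performs.
>     row = ("some typical row contents, padded out " * 3).encode("cp037")
>
>     start = time.time()
>     for _ in range(100000):
>         text = row.decode("cp037")       # EBCDIC -> Unicode
>         text.encode("ascii", "replace")  # Unicode -> ASCII
>     print("100k round trips: %.2f seconds" % (time.time() - start))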
>
> Is it possible to improve this situation by tuning something?
>
> Regards,
> Herve
>
>
>
> -----Original Message-----
> From: David Boyes [mailto:[EMAIL PROTECTED]
> Sent: Monday, December 22, 2003 8:31 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Thoughts on DB2...on Linux/390
>
>
> > Do you have any feel for any performance impacts (if any) we'd pay
> > going to a 'remote' database environment. Granted, same box,
> > HiperSockets, etc., but it is remote nonetheless.
>
> I think it would depend a lot on how your application uses the database
> and how good your programmers are.
> We've observed about a 1-3 sec increase in time to complete a transaction
> from CICS with that customer's specific workload, but haven't really done
> any exhaustive measurements (their comment: "for $310K in savings, we'll
> just buy another IFL if we don't like the performance!"). The time-critical
> tables in this customer's app are still on z/OS; they chose to distribute
> the less time-sensitive stuff, which only gets hit a few times during a
> transaction, to the UDB instances as a gradual commitment to the
> environment. New apps are going direct to the UDB instances as they come
> online, and we haven't observed any problems yet.
>
> > >   b) working within the technical restrictions
> >
> >   I'm more curious about this item; I believe it concerns database
> > sizes, but I'm not educated enough yet on this.
>
> Mostly. There are small differences in some of the management tools, too,
> but nothing earth-shattering. The biggest argument I hear at the moment
> seems to be 31-bit vs. 64-bit support, but in the environment you
> describe, I suspect this may not be a huge problem. Unless you've got some
> tables with bazillions of rows, it's a matter of who caches what, which
> can be solved by systematic tuning.
>
> > >   c) overcoming your IBM rep's reluctance to losing z/OS MIPS and
> > >corresponding revenue.
> > We would actually stay stable rather than reduce; most likely not
> > growing.
>
> Which, to a sales guy, is almost equivalent to reducing -- the "what does
> not grow, dies" phrase comes to mind. Some IBMers get really upset about
> this; the ones that used to be customers themselves are a bit more flexible.
>
>
> > > You should also price HA software for L/390 and network HA hardware
> > > as part of the mix.
> >
> > In our cost case we did include the cost of Linux support and training
> > (for Linux, DB2, and VM), but what HA items are you talking about?
>
> For production use, you'll want to have multiple clustered instances
> serving the data to allow you to do concurrent maintenance on one or more
> of the instances. You're probably going to want something like Veritas
> Cluster Manager or LVS to make sure that the production server is always
> there.
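>
> To make that concrete: the following is not LVS itself, just a hedged
> Python sketch of the sort of liveness probe a director or cluster manager
> runs against each UDB instance (the hostnames are invented, and 50000 is
> just the common DB2 default port):
>
>     import socket
>
>     # Hypothetical instance list; LVS/Veritas would manage this for real.
>     INSTANCES = [("linuxdb1", 50000), ("linuxdb2", 50000)]
>
>     def alive(host, port, timeout=3.0):
>         # A TCP connect is a crude but serviceable "is DB2 up" check.
>         try:
>             socket.create_connection((host, port), timeout).close()
>             return True
>         except socket.error:
>             return False
>
>     for host, port in INSTANCES:
>         state = "up" if alive(host, port) else "DOWN"
>         print("%s:%d is %s" % (host, port, state))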
>
> I missed the bit about it being the same machine, so disregard the HA
> hardware comment, other than to think about having multiple paths between
> z/OS and the Linuxen (i.e. more than one HiperSockets link between the
> LPARs).
>
> > We have a standby 9672-R26 that would be capable of running VM, and
> > Linux in a pinch; it is our D/R box. I'm thinking shared DASD, and if
> > the server is not available in its normal home, we can boot on the
> > 9672-R26. We were not thinking of continuous availability, as it may
> > be overkill for us, but I would be curious what you are thinking.
>
> I figured that production = continuous availability, but if you can live
> without that, it's lots cheaper.
>
> If you have both boxes nearby and you can cable both boxes to the same
> devices, seriously think about setting up the Cross-System Extensions
> (CSE) function of CP. That would allow switching systems to be as simple
> as logging on the Linux id on the other system (assuming the network is
> sufficiently virtualized), but it does take some preparation to make CSE
> work.
>
> Caveat emptor: CSE is not very well documented. Practicing on second-level
> systems first is highly recommended.
>
> > The FUD from the DBAs is that they've 'heard' that DB2 on Linux on the
> > mainframe will be a dog.
>
> That's curious. I wonder where? Most of the buzz I hear from DBAs about UDB
> *anywhere* is positive. I've seen some claims that it's within 10% of the
> z/OS DB/2 performance -- not sure I believe them, but it's nice to hear.
>
> > Your idea of selling them on their importance is a good one, and I
> > guess I would have thought it would have been obvious. I'll have to
> > work on that.
>
> It *is* obvious, but if you say it, that's really what they want to hear, so
> you get less static on other more important issues.
>
> It's stupid politics, but it's very common that the root of their objections
> isn't technical, but is basic fear of needing new skills or that their place
> in the grand scheme of things will somehow be diminished. A little ego
> massage goes a long way in this kind of situation, and will buy you all
> kinds of stuff down the road.
>
> -- db
--
Rich Smrcina
Sr. Systems Engineer
Sytek Services - A Division of DSG
Milwaukee, WI
rsmrcina at wi.rr.com
rsmrcina at dsgroup.com

Catch the WAVV! Stay for requirements and the free-for-all.
Update your zSeries skills in 4 days for a very reasonable price.
WAVV 2004 in Chattanooga, TN
April 30-May 4, 2004
For details see http://www.wavv.org
