Rao,

Basically, this is what I was saying to do.  My solution was a tad different
in that I proposed to use something that already exists whereby an Excel
file is translated into a .TXT file and the .TXT file is FTP'd to the
mainframe and a GDG is created on the mainframe.  When the file is
catalogued, a job gets kicked off that processes the file and updates the
database.  A report is created along the way that is e-mailed to the user
who initiated the process.

The so-called 'problems' with doing it this way were (a) the application I
speak of is a Java package that processes about 15 or so spreadsheets and
it's a pain to keep everybody that uses it up-to-date, and (b) people are
saying that FTP may 'lose' records while MQ guarantees 100% delivery.

My counterpoint to the first 'problem' is to get rid of that Java package
that all the users load onto their PCs and instead use the server they are
already using to transmit the messages via MQ.  There is no reason the
application they use cannot simply FTP a file instead of using MQ.  That
would eliminate the problem of keeping individual machines up-to-date (which
is virtually impossible because the machines we are talking about belong to
outside users in many cases, and 100% of the time for this specific
application).

As for the second 'problem,' I do not agree with the assumption that FTP
'loses' records: we DBAs run a set of processes every night that easily
send hundreds of thousands (and sometimes over a million) records to our
data warehouse, and we have never experienced any sort of data loss.

Like you said, this solution is so, so 80's-ish, and people seem to want
spiffy, new, flashy solutions.  I am more the type of person who wants to
see a solid solution, which your suggestion is.

Thanks a lot!  I have gotten more out of discussions on this group than all
the pages of the MQ manuals I have read.




-----Original Message-----
From: Adiraju, Rao [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 02, 2004 6:08 PM
To: [EMAIL PROTECTED]
Subject: Re: A novice question--THE SAGA CONTINUES


Raymond

If you want to keep things SIMPLE and also want high throughput, try this
alternative:

1) On the Windows platform, create the .TXT file and leave it there (DON'T
put it in MQ).

2) Just write one message containing the file name to your so-called
"trigger" queue.

3) This single message gets transferred to the mainframe queue manager.  Use
either CICS triggering or batch triggering, but the bottom line is to
achieve the following:

        a) write a batch JCL job whose first step does an FTP receive and
creates a QSAM file
        b) a second step reads this QSAM file and does whatever your backend
processing is

The only assumption is that the box where you leave the file has to have an
FTP listener - if it is a SERVER box, it does by default.
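The mainframe side of steps 2)-3) can be sketched roughly as below.  This is only a minimal sketch in Python (standing in for the batch step) under stated assumptions: the trigger-message layout (file name only) and the host/credential names are hypothetical placeholders, not anything from this thread; the FTP receive uses the standard ftplib client.

```python
from ftplib import FTP

def parse_trigger(message_body: bytes) -> str:
    # The single trigger message carries nothing but the file name.
    # (Assumed layout -- adjust to whatever your application actually writes.)
    return message_body.decode("ascii").strip()

def fetch_file(file_name: str, local_path: str) -> None:
    # Step (a): PULL the file from the server box where it was left.
    # Host name and proxy credentials below are placeholders.
    ftp = FTP("windows-server.example.com")
    ftp.login("proxyuser", "proxypass")  # proxy user-id granted one folder
    with open(local_path, "wb") as out:
        ftp.retrbinary("RETR " + file_name, out.write)
    ftp.quit()
    # Step (b) would then read this local file and run the backend processing.
```

Because only the one-line trigger flows through MQ, the bulk data moves over a single FTP transfer instead of thousands of individual messages.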

Because this is a PULL from the mainframe side, you don't have additional
mainframe user-id and password issues.  If you want, you can set up a proxy
user-id and password on the server box and grant that user-id authority over
a particular folder on the server.

This, I assure you, will be lightning fast compared to your MQ solution.
Some people may not like it because it is not a fancy solution.  If you want
any further details, or FTP JCLs with all the error handling, get back to me
directly (because I guess that doesn't fit the MQ forum domain).

Cheers

Rao


-----Original Message-----
From: Adiraju, Rao
Sent: 3 March 2004 11:42 AM
To: 'MQSeries List'
Subject: RE: A novice question--THE SAGA CONTINUES

We had a similar problem at one of my previous client sites.  I am not sure
what your average row/record size in the text file is, but in my experience
our row was 80 bytes, and to put an 80-byte message, MQ adds 3 to 4 times
that size in message headers and other overhead.  So what we noticed is that
the sheer amount of data transferred for a 1MB flat file turned out to be 5
to 10 times that volume.

So what I did was start packing (or blocking, if you are from a mainframe
background) the rows.  Instead of putting one row per message, I defined a
message buffer of 8K and moved the rows into it with a two-byte row-length
prefix (so that I could support variable-length files - all .TXT files are
nothing but variable-length records), kept accumulating until 8K, and then
issued a single PUT.  Do the opposite on the other end.  I tried various
combinations of message size, and if I remember correctly, after 8 or 9K the
performance graph flattened.  That was a few years back and this threshold
might not hold water in today's context - but the principle will work even
today.  Along with this, fine-tune your channel batch size definition as
well.
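The packing scheme described above can be sketched in Python.  This is a minimal sketch of the blocking principle only - the actual MQPUT/MQGET calls are left out; the 8K limit and the two-byte row-length prefix come straight from the description above, and the sketch assumes every row fits in one buffer (true for 80-byte rows).

```python
import struct

MAX_MSG = 8192  # the ~8K threshold where the performance graph flattened

def pack_rows(rows):
    """Pack variable-length text rows into <=8K message buffers.
    Each row is prefixed with a 2-byte big-endian length so the
    receiver can split the buffer back into rows."""
    buffers, buf = [], bytearray()
    for row in rows:
        data = row.encode("ascii")
        rec = struct.pack(">H", len(data)) + data
        # Flush the current buffer when the next record would overflow it.
        if buf and len(buf) + len(rec) > MAX_MSG:
            buffers.append(bytes(buf))
            buf = bytearray()
        buf.extend(rec)
    if buf:
        buffers.append(bytes(buf))
    return buffers  # each element would be one MQPUT

def unpack_rows(buffers):
    """Do the opposite on the other end: walk each buffer, reading the
    2-byte length prefix and then that many bytes of row data."""
    rows = []
    for buf in buffers:
        i = 0
        while i < len(buf):
            (n,) = struct.unpack_from(">H", buf, i)
            i += 2
            rows.append(buf[i:i + n].decode("ascii"))
            i += n
    return rows
```

With ~80-byte rows, each buffer holds roughly a hundred rows, so a 10,000-row file needs on the order of a hundred PUTs instead of 10,000.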

Let me know, if this helps in your case.

Cheers

Rao




-----Original Message-----
From: Kinzler, Raymond C [mailto:[EMAIL PROTECTED]
Sent: 3 March 2004 8:56 AM
To: [EMAIL PROTECTED]
Subject: A novice question--THE SAGA CONTINUES

Hello,

Thanks for all the help, by the way.  We have been successful in addressing
the 2033 errors and are now onto the next stage of our project.  Actually,
by trying to do this project, we found the aforementioned 2033 problem.
Basically, this is what we are trying to do...

The applications people want to parse various Excel spreadsheets into text
files on a server and pass those files to the mainframe utilizing MQ Series.
This seems to suck up a HUGE amount of resources and is very slow.  I don't
know if it is a setting someplace, the way we are extracting the data, or
what it is.

The file gets translated into a .TXT file and every 'row' becomes a record
on the ECL.DATA.REQUEST queue on the server.  Once the LAST row has been PUT
on the ECL.DATA.REQUEST queue, a record is written to the trigger queue
(ECL.DATA.REQUEST).  The proper program is kicked off on the mainframe side
which will perform an MQGET on each ECL.DATA.REQUEST record and post that
record to a file on the mainframe.

That's it.  We even tweaked this program to simply perform MQGETs and
nothing else to try and achieve maximum throughput.  BUT...the absolute best
we can process these records is 35 records per second.  This is EXTREMELY
slow because the user will frequently have upwards of 10,000 rows on the
spreadsheet.  That comes out to almost five minutes--too long for something
like this, because the user will exit the web screen before that amount of
time (we have very impatient users, and most of them, by far, are outside
distributors over whose habits we have little to no control).
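To put numbers on that, here is a quick back-of-the-envelope calculation.  It assumes, optimistically, that the per-message cost dominates and that the observed ~35 messages per second would carry over to larger messages; the 80-byte row and 8K buffer figures come from Rao's blocking suggestion in this thread, and the helper name is just illustrative.

```python
def seconds_per_file(rows: int, rows_per_message: int,
                     messages_per_second: float) -> float:
    # Rough elapsed time, assuming per-message cost dominates.
    messages = -(-rows // rows_per_message)  # ceiling division
    return messages / messages_per_second

# One message per row at the observed 35 messages/second:
# 10,000 messages -> ~286 seconds, i.e. the "almost five minutes".
row_per_message = seconds_per_file(10_000, 1, 35.0)

# 80-byte rows with a 2-byte prefix packed into 8K buffers:
# 8192 // 82 = 99 rows per message, so only ~102 messages per file.
blocked = seconds_per_file(10_000, 8192 // 82, 35.0)
```

Under those assumptions the blocked transfer finishes in a few seconds rather than minutes, which is why the packing approach (or a single FTP transfer) scales so much better than row-per-message.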

I say this is a BATCH process and should be treated like a BATCH process.
Our current MQ process is set up to mimic on-line screens, and we would
abuse everything if we used it for this process, too.

On the other hand, we have a person here who says she knows MQ Series and it
works MUCH faster than 35-records-per-second.

I agree it seems somewhat slow and it makes sense that we could improve upon
it, but this process, in general, is batch-oriented (in my opinion).

Is there anything we can do to increase the throughput?  Or should we stick
to FTPing the file to a GDG on the mainframe?

Many thanks!

Ray Kinzler

Instructions for managing your mailing list subscription are provided in the
Listserv General Users Guide available at http://www.lsoft.com
Archive: http://vm.akh-wien.ac.at/MQSeries.archive

This communication is confidential and may contain privileged material.
If you are not the intended recipient you must not use, disclose, copy or
retain it.
If you have received it in error please immediately notify me by return
email
and delete the emails.
Thank you.


