BTW: for another customer (also z/OS based), we put all the data in a
data space, using a third-party system that the other customer provided.
No problem there.
For Windows server, the application runs multi-threaded, and access to the
cache data of the table system needs to be serialized; this is done by
acquiring a semaphore when entering the cache handling routines. This works
without problems, so one thread can use the cache data that another thread
read.
For Linux server, we first tried the same approach. But the semaphore calls
in pthread turned out to be so expensive (much more expensive than the code
to examine the cache) that the application didn't run well on Linux. So in
this case we mapped the whole containers into memory using mmap at the
beginning of the process and simply didn't use file I/O at all.
The z/OS applications are batch regions, running all day long. With DB2,
the cache is the DB2 DBM1 address space, but there is some overhead in the
communication from the batch regions to DB2. With file I/O, there is no
DB2 usage and no such overhead, but now the cache sits in the batch
regions. It amounts to only a few MB, so this is no big problem. The
problem is that the CPU reduction (which is significant) leads to an
increase in elapsed time, because of the file I/O waits.
What we will try next:
- of course: look for optimizations in file I/O
- increase cache size to reduce file I/O (can be set using environment
variables)
- experiment with data space solutions (like above)
Kind regards
Bernd
On 21.05.2013 16:24, Bernd Oppolzer wrote:
We already have improvements to cut the open/close/free overhead.
The ca. 800 little tables are put into 8 large containers, which have
a (sort of) directory at the beginning, so that there is only one open
for the container. The open is done only once, and the containers stay
open throughout the whole process (all day long). So open/close/free
is no concern. Same goes for Windows/Linux/Unix.
I still have to examine whether the z/OS record sizes fit the fread sizes
well.
I did not do the customization for z/OS, only the original design for the
other platforms. z/OS Unix is an option, but there are customers out there
who want the files to be "classic" OS files.
(This application runs on 18 different platforms, including Solaris and
BS/2000.)
Kind regards
Bernd
On 21.05.2013 14:38, Paul Gilmartin wrote:
On Tue, 21 May 2013 09:55:51 +0200, Bernd Oppolzer wrote:
Slightly drifting topic:
We use fseek / ftell / fread to do the file I/O. The files are normal
sequential OS files.
Sounds like the worst of both worlds. Have you tried it with normal
z/OS UNIX files? The kernel may do the caching for you.
I've found that for large numbers of small files z/OS UNIX files
vastly outperform legacy data sets. The allocate/open/close/free
overhead is brutal.
-- gil
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
----------------------------------------------------------------------