You have to redesign.  The 16MB is indeed a restriction, and when you have
very many files, an ACCESS of the dirid takes ages.

You could use a Directory Control directory (DIRC) and place it in a
dataspace.  The 16MB limit still applies, but you get the full 16MB (no space
is required for other resident stuff, like parts of the CMS nucleus).  But a
DIRC can be accessed R/W by only one virtual machine at a time.  If you place
the DIRC in a dataspace, and if you make sure at least one user always has it
accessed, an ACCESS command is very fast, and data and FST space are shared
amongst all users.
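As an illustration, a minimal sketch of the commands involved (the dirid is
hypothetical, DATASPACE ASSIGN needs file pool administrator authority, and
the file pool server must be allowed to create dataspaces):

  /* Minimal sketch - the dirid is hypothetical, adapt it to your file pool */
  dirid = 'VMSYSU:MASTER.REPORTS'
  'CREATE DIRECTORY' dirid '(DIRCONTROL'  /* a DIRC rather than FILECONTROL */
  'DATASPACE ASSIGN' dirid                /* let the server map it into a
                                             dataspace (admin authority)    */
  'ACCESS' dirid 'G'                      /* accessors now share the FSTs   */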
But your users won't see new files until they reACCESS the DIRC, and if you
have many updates throughout the day, that would create many copies of the
dataspace.  For my former customer we had a structure like this:
  - a normal, small DIR (FILECONTROL) accessed as e.g. F, where updates
during the day were placed
  - a big DIRC, with a dataspace, accessed behind F.
Each night, the files from the small DIR were moved into the DIRC.  Then the
service machine accessed it R/O, so we always had at least this one user
accessing the DIRC (and keeping the dataspace alive).  Then the users of the
DIRC were recycled.  We had another service machine checking the number of
dataspaces for that DIRC (using QUERY ACCESSORS dirid (DATASPACE and checking
the number of levels; if there were more than x, we sent out an email).
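As an illustration, a minimal REXX sketch of such a check.  The dirid, the
threshold, the parsing of the QUERY ACCESSORS reply and the use of TELL
instead of a real email are all assumptions to adapt to your setup:

  /* DSPCHECK EXEC - count the dataspace levels of a DIRC (sketch)         */
  dirid  = 'VMSYSU:MASTER.REPORTS'     /* hypothetical DIRC dirid           */
  maxlvl = 3                           /* alert when more copies than this  */
  'PIPE CMS QUERY ACCESSORS' dirid '(DATASPACE | STEM line.'
  if rc <> 0 then exit rc
  /* Assumption: line 1 is a header and the last word of each data line     */
  /* identifies the dataspace copy the accessor uses.                       */
  levels = 0; known = ''
  do i = 2 to line.0
     dsp = word(line.i, words(line.i))
     if wordpos(dsp, known) = 0 then do
        known = known dsp
        levels = levels + 1
     end
  end
  if levels > maxlvl then
     'TELL MAINT Directory' dirid 'has' levels 'dataspace levels'
  exit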

A few months ago I redesigned a bad setup for another customer: they had two
directories with batch job listings, 'this month' and 'previous month', with
a monthly cleanup.  But near the end of the month ACCESS became slow, and
sometimes impossible due to a lack of storage.  Now we have a dirid per day.
The problems are gone and it is much faster.  To make such a transition
invisible, the use of temporary ALIASes can help: the code storing the files
was changed at time x, the code reading the files still found its stuff via
the ALIASes, and that gave me some extra days to fix the read code.
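As an illustration of that trick, a minimal REXX sketch; the two dirids are
hypothetical (a per-day directory and the old monthly directory the read
code still expects):

  /* Sketch: give the old directory an ALIAS for each file of the new one  */
  newdir = 'VMSYSU:BATCH.LISTINGS.D20110428'   /* hypothetical per-day dir  */
  olddir = 'VMSYSU:BATCH.LISTINGS.THISMONTH'   /* old dir the readers use   */
  'PIPE CMS LISTFILE * *' newdir '(NOHEADER | STEM f.'
  if rc <> 0 then exit rc
  do i = 1 to f.0
     parse var f.i fn ft .
     'CREATE ALIAS' fn ft newdir '= =' olddir
  end
  exit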


2011/4/28 clifford jackson <cliffordjackson...@msn.com>

>  Hello All, I have an SFS database with a hierarchical directory structure
> of File Control.  My tree structure currently has one top-level directory;
> this directory is used as a report repository.  This directory has grown
> very large.  I have an application that accesses this directory for
> customers (through worker machines) to view reports.  I’ve run into a
> problem where it seems that upon each request to the SFS database for a
> report, the complete top-level directory is loaded into the requester’s
> virtual machine storage, and it seems there is a 16M restriction.
>
>
>
> Is there a solution that I can implement without interfering with my
> production environment too much, or can I possibly redesign the SFS
> database structure and use Directory Control and possibly Data Spaces?
>
>
>
> Cliff Jackson
>
> Senior Systems Programmer
>



-- 
Kris Buelens,
IBM Belgium, VM customer support
