On Mon, 2 Feb 2004, Wiggins d Anconia wrote:

> 
> Sounds pretty good to me.  One concern, do the sub record types always
> have the same number of fields?  Using your array to unpack into may
> turn into a maintenance nightmare with respect to indexing into it to
> get values if the record formats are significantly different, etc. 

Actually, that was only an example. What I really want is something 
more like:

if ($subrec eq '0100') {
    ($name, $address, $city) = unpack $template{$subrec}, $_;
} elsif ($subrec eq '0101') {
    ($some1, $some2) = unpack $template{$subrec}, $_;
}

and so on for each defined $subrec.
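To make that concrete, here is a minimal runnable sketch of the dispatch. The templates, field widths, and the sample line are made up for illustration; the real RACF unload record layouts will differ:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical unpack templates keyed by subrecord type.
my %template = (
    '0100' => 'A8 A10 A6',   # name, address, city
    '0101' => 'A4 A6',       # some1, some2
);

my $line   = '0100JSMITH  12 OAK AVEDALLAS';   # made-up sample record
my $subrec = unpack 'A4', $line;               # subrecord type: first 4 bytes
my $rest   = substr $line, 4;                  # the fields that follow it

if ($subrec eq '0100') {
    my ($name, $address, $city) = unpack $template{$subrec}, $rest;
    print "$name / $address / $city\n";        # prints: JSMITH / 12 OAK AVE / DALLAS
} elsif ($subrec eq '0101') {
    my ($some1, $some2) = unpack $template{$subrec}, $rest;
    print "$some1 / $some2\n";
}
```

Note that `eq`, not `=`, is needed in the comparisons; `=` would assign and always be true.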

> Second concern, are you processing the records completely within the
> loop or needing to parse them all before doing anything with them?  In
> the latter case you may need to store them to an array based on type
> rather than directly to a 'values' temporary array, etc.

I will be processing the records one at a time and putting them in some 
"persistent storage" for retrieval later by a reporting program. I have 
not yet determined what sort of "persistent storage" I want. Perhaps 
DBM, perhaps PostgreSQL, perhaps MySQL, <whatever>.

I may end up not even doing this, since PostgreSQL, at least, has a way to 
load records from a "flat file". I just like to leave my options open. And 
I'm looking at Perl solutions right now mainly because I'm trying to learn 
Perl.

<off-topic>
Also, if I find a "nice" Perl solution, I may implement it "in production" 
on our mainframe (IBM zSeries) at work. The actual data being parsed is a 
RACF (security system) database unload. If I can ftp that data from z/OS 
to our Linux/390 system and do all my reporting there, I can save z/OS CPU 
utilization. That's because Linux/390 on our zSeries runs on a separate 
processor from the z/OS work. The z/OS work cannot use this processor due 
to licensing restrictions. So, any work that I can "offload" from z/OS is 
a net gain because the IFL (Linux processor) is basically idle right now. 
I would then use Perl to create reports which would then be ftp'ed back to 
the z/OS system. This gets me "brownie points" by offloading z/OS 
processing. We are critically short of z/OS processor power and the next 
upgrade would cost 1.5 million dollars in software "upgrade" fees.

If this works for the database unload, I can use a similar system for RACF 
reports run against the "reformatted audit logs". Again, getting "brownie 
points" for offloading work.

This is why I'm considering a Perl-only solution. I have Perl on our SuSE 
Linux/390 system. I do not have any SQL database and am not really good 
enough to try to port something like PostgreSQL or MySQL.

</off-topic>

> 
> For the first concern you may consider using a hash slice with the keys
> being associated with the subtype stored in the original hash where you
> retrieve the record format from.
> 

Good idea. I'll keep it in mind.
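For the record, a sketch of that hash-slice idea: the field names for each subtype live in a hash alongside the templates, and one slice assignment replaces the whole if/elsif chain. Templates, field names, and the sample line are assumptions for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical templates and field-name lists, both keyed by subtype.
my %template = ( '0100' => 'A8 A10 A6' );
my %fields   = ( '0100' => [ qw(name address city) ] );

my $line   = '0100JSMITH  12 OAK AVEDALLAS';   # made-up sample record
my $subrec = unpack 'A4', $line;               # subtype: first 4 bytes

# Hash slice: the unpacked values land under their field names,
# so no code needs to know the field positions per subtype.
my %rec;
@rec{ @{ $fields{$subrec} } } = unpack $template{$subrec}, substr($line, 4);

print "$_ => $rec{$_}\n" for @{ $fields{$subrec} };
```

Adding a new subrecord type then means adding one entry to `%template` and one to `%fields`, with no new branch in the code.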

thanks much!

--
Maranatha!
John McKown



-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>