John D Groenveld wrote:
>
> I have several large CSV files which I need to import into Oracle tables.
> I've written code using DBD::CSV and DBD::Oracle to do the actual data
> transfer, but I'd like to do some of the validation of the data (does
> a particular data item conform to the Oracle data dictionary? does it
> follow the basic business rules) inside Perl. Before I grow my code much
> more, has this problem already been generalized?
>
> %columns = (
> last_name => {
> nullable=>"NO",
> size=>32,
> regex=>'...',
> }
> );
The new SQL::Parser (part of the SQL::Statement distribution) breaks SQL
into data structures similar to that:
    'column_defs' => {
        'sample' => {
            'constraints' => [ 'UNIQUE' ],
            'data_length' => '40',
            'data_type'   => 'VARCHAR'
        }
    }
Of course it can't currently handle a lot of the Oracle syntax yet, but some
of that will be added in later releases. Maybe we can pool resources: you
could build your structures to fit the SQL::Parser structures, and we could
eventually merge them. Write me off-list if you'd like to collaborate.
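For example, here's a rough sketch of how the SQL::Parser 'column_defs'
structure could be flattened into a %columns-style validation hash like
yours. The key names on the right-hand side (nullable, size) are taken from
your example, and the 'NOT NULL' constraint handling is an assumption about
what a future release might emit, not a published API:

    use strict;
    use warnings;

    # A column_defs entry shaped like the SQL::Parser output above
    my %column_defs = (
        sample => {
            constraints => [ 'UNIQUE' ],
            data_length => '40',
            data_type   => 'VARCHAR',
        },
    );

    # Flatten into the simpler %columns validation shape
    my %columns;
    for my $name ( keys %column_defs ) {
        my $def = $column_defs{$name};
        my $not_null =
            grep { $_ eq 'NOT NULL' } @{ $def->{constraints} || [] };
        $columns{$name} = {
            nullable => $not_null ? 'NO' : 'YES',
            size     => $def->{data_length},
        };
    }

    print "$columns{sample}{nullable}\n";    # YES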
To see the structures, do something like:

    use strict;
    use warnings;
    use Data::Dumper;
    use SQL::Parser;

    my $parser = SQL::Parser->new( 'ANSI', { PrintError => 1 } );
    my $sql    = "CREATE TABLE foo (sample VARCHAR(40) UNIQUE)";
    if ( my $rv = $parser->parse($sql) ) {
        print Dumper $parser->structure;
    }
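And once you have a %columns hash in the shape you posted, the validation
pass itself is pretty mechanical. Here's a sketch of one way to do it; the
validate_row routine and the particular regex are my own illustration, not
anything shipped with SQL::Statement:

    use strict;
    use warnings;

    # A %columns spec in the shape from the original post
    my %columns = (
        last_name => {
            nullable => "NO",
            size     => 32,
            regex    => qr/^[A-Za-z' -]+$/,
        },
    );

    # Check one row (hash of field => value) against the spec,
    # returning a list of human-readable problems.
    sub validate_row {
        my ( $row, $cols ) = @_;
        my @errors;
        for my $name ( keys %$cols ) {
            my $spec  = $cols->{$name};
            my $value = $row->{$name};
            if ( !defined $value || $value eq '' ) {
                push @errors, "$name is null"
                    if $spec->{nullable} eq 'NO';
                next;
            }
            push @errors, "$name exceeds $spec->{size} chars"
                if length($value) > $spec->{size};
            push @errors, "$name fails pattern"
                if $spec->{regex} && $value !~ $spec->{regex};
        }
        return @errors;
    }

    my @errs = validate_row( { last_name => '' }, \%columns );
    print "@errs\n";    # last_name is null

You'd run each CSV row through validate_row before handing it to
DBD::Oracle, and either reject or log the rows that come back with errors.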
--
Jeff
P.S. is that "psu.edu" as in Puddlecity State University, the one I can
see out my rainy S.E. Portland window right now?