I think I got this stuff from wotsit.org... The header & record sizes are
stored as little-endian binary fields in pos 9-10 & 11-12 of the file
header. Here is some D3 code that I use to read dbf files. (You still need
to know the record layout, i.e. the fields and their widths, in advance;
I'm sure that information is encoded in the header as well, but this code
just figures out where the header ends so you can start parsing the data.)
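If you want to sanity-check those two header fields outside of D3, here is a quick Python sketch (the function name `dbf_sizes` is just for illustration) that reads the same little-endian values: header length at bytes 8-9 and record length at bytes 10-11, counting from zero.

```python
import struct

def dbf_sizes(path):
    """Read a .dbf header: bytes 8-9 hold the header length and bytes
    10-11 the record length, both little-endian unsigned shorts (the
    same fields the BASIC code reads at pos 9-10 and 11-12, 1-indexed)."""
    with open(path, "rb") as f:
        head = f.read(12)
    header_size, rec_size = struct.unpack("<HH", head[8:12])
    return header_size, rec_size
```

Note this returns the raw stored values; the BASIC code below adds its own fudge factor on top.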

----------------------- begin basic-------------------
*  [42] Sometimes headers are one char smaller than expected,
*         which shifts the entire file.  To watch for this, check
*         the first record's RC code -- if that doesn't start
*         with an "R", then we're shifted, so subtract one
*         from the header size.

block.size = 10000
execute "cd /path/to/files"
execute "!exec ls *dbf" capturing dbf.files

max = dcount(dbf.files,@am)
for n = 1 to max
  dbf.file = dbf.files<n>
  print "now processing ":dbf.file:"..."
  fv = %open(dbf.file,o$rdonly)
  char new.block[block.size]

  first.time = 1
  done = 0
  loop until done do

    r = %read(fv,new.block,block.size)

    if r lt block.size then
      done = 1
      new.block = new.block[1,r]
    end

    if first.time then
      * header.size & rec.size can be determined by reading the
      *  header data, but make sure that block size is greater
      *  than the header size.
      h1 = seq(new.block[9,1])
      h2 = seq(new.block[10,1])
      header.size = (256*h2)+h1 + 2
      r1 = seq(new.block[11,1])
      r2 = seq(new.block[12,1])
      rec.size = (256*r2)+r1

      ** [42]
      rc.index = header.size + 100
      rc.char  = new.block[rc.index,1]
      if rc.char ne "R" then
        header.size = header.size - 1
      end

      first.time = 0
      block = new.block[header.size,999999]
    end else
      block = block:new.block
    end

    loop while len(block) ge rec.size do
      rec = block[1,rec.size]
      block = block[rec.size+1,999999]
      gosub 1000  ;* parse the rec
    repeat

  repeat

  * last one may be partial record
  rec = block
  gosub 1000

  x = %close(fv)

next n
-------------------------- end basic -----------------------------

I have also seen a freeware DOS program that converts dbase files to csv
(google for convert file?), but you have to have a DOS environment
available to shell out to...
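If a DOS box isn't handy, the same conversion only takes a few lines in something like Python. This is just a sketch (function name `dbf_to_csv` is illustrative) assuming a plain dBASE III-style file: 32-byte field descriptors after the 32-byte header, a 1-byte delete flag on each record, and no memo fields.

```python
import csv
import struct

def dbf_to_csv(dbf_path, csv_path):
    """Dump a dBASE III-style .dbf to CSV.  Field names and widths live
    in the 32-byte descriptors that follow the 32-byte file header;
    each record begins with a 1-byte deletion flag ('*' = deleted)."""
    with open(dbf_path, "rb") as f:
        data = f.read()
    header_size, rec_size = struct.unpack("<HH", data[8:12])
    # Field descriptors run from byte 32 up to the 0x0D terminator.
    names, widths = [], []
    pos = 32
    while pos < len(data) and data[pos] != 0x0D:
        desc = data[pos:pos+32]
        names.append(desc[:11].split(b"\x00")[0].decode("ascii", "replace"))
        widths.append(desc[16])   # field length is 1 byte at offset 16
        pos += 32
    with open(csv_path, "w", newline="") as out:
        w = csv.writer(out)
        w.writerow(names)
        pos = header_size
        while pos + rec_size <= len(data):
            rec = data[pos:pos+rec_size]
            pos += rec_size
            if rec[0:1] == b"*":          # skip deleted records
                continue
            row, off = [], 1              # byte 0 is the delete flag
            for width in widths:
                row.append(rec[off:off+width].decode("ascii", "replace").strip())
                off += width
            w.writerow(row)
```

It trusts the stored header length, so a file with the off-by-one shift described in the BASIC comments above would need the same "R" check bolted on.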

Good Luck,

Scott Ballinger
Pareto Corporation
Edmonds WA USA

-----Original Message-----
On Behalf Of Richard A. Wilson
Sent: Wednesday, April 07, 2004 3:35 PM
To: U2 Users Discussion List
Subject: Re: Extracting data directly from dbf files...

I'm fairly certain that the cedarville download utility handles dbf 
files. Since it comes with source code, perhaps you can change the write 
logic to read.


Lee Messenger wrote:

> Has anyone developed any processes to natively read '.dbf' type files 
> from within Universe basic?
> I know we could use ODBC or UniObjects but prefer a more direct 
> approach.