I use the tilde '~' character a lot.
On Thu, 15 Mar 2007, Griggs, Donald wrote:
> Regarding: "What is the default separator?" [for using with .import in
> the command line utility]
>
> It is the vertical bar ("|", the pipe character).
>
> By the way, I don't *think* that .import honors quoting of fields that
> contain the separator, does it?
> E.g., don't try to import
> 5,238,"Cohen, Jr.",Rafi
> expecting the "Cohen, Jr." to be considered a single field.
>
> For that reason, depending on your data, sometimes the pipe or tab makes
> a better separator.
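>
> For example (a rough sketch, using a made-up four-column table named
> people and an input file people.txt):
>
>    create table people(id, num, name, first);
>    .separator ","
>    .import people.txt people
>
> Here the line 5,238,"Cohen, Jr.",Rafi is split into five fields, so
> .import will complain about the column count. With the default pipe as
> separator, the same record in the file is simply
>
>    5|238|Cohen, Jr.|Rafi
>
> and imports as four fields.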
>
>
>
> -----Original Message-----
> From: Rafi Cohen [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 15, 2007 12:42 PM
> To: [email protected]
> Subject: RE: [sqlite] Format of csv file RE: [sqlite] date/time
> implementation question
>
> Dennis, without knowing that we are talking here about a non-standard
> csv format, my client, for whom I'm preparing this project, decided to
> change the file extension to .txt so that it is not confused with the
> standard .csv format.
> Anyway, time for this project is getting tight, so I'll still try today
> to implement the SQL approach, but if I fail, at least for this version,
> I'll proceed with the first approach.
> Now, I understand that .import accepts any kind of file, on the
> condition that each line represents a valid row of the table and the
> separator is either the default one or the one given explicitly by the
> .separator command.
> What is the default separator? I could not discover this by looking into
> the code.
> My idea is to read the file sent by the client within the C application,
> check what needs to be checked for validation and store the relevant
> parts of it formatted appropriately in another file and then import from
> this file into a table and proceed as you suggested yesterday.
> Do you see any problem with this approach, or have any other suggestion?
> Concerning the separator, I can add the .separator "," command to the
> script before .import, but I may also use the default, if I know what it
> is.
> I tried this manually in sqlite3 and I know that a comma is not the
> default separator, as .import failed.
> But after .separator "," the .import succeeded and I could indeed select
> the rows from the table.
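>
> As a minimal sketch, the script my C code would feed to the sqlite3
> shell (e.g. sqlite3 mydb.db < import.sql; the file and table names are
> only placeholders) could then be as simple as:
>
>    .separator ","
>    .import data.txt events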
> I hope this is a good sign that I can implement this approach in my C
> code as you suggested.
> Thanks, Rafi.
>
> -----Original Message-----
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 14, 2007 11:11 PM
> To: [email protected]
> Subject: Re: [sqlite] Format of csv file RE: [sqlite] date/time
> implementation question
>
>
> Rafi Cohen wrote:
> > Hi Dennis, the first approach is clear now and I may proceed with it.
> >
> Good to hear.
> > The second approach is interesting and challenging, but leaves some
> > issues to clarify, and if I find solutions to those issues I may well
> > adopt it.
> > 1. Format of the csv file: I have no idea how this csv file is created
> > or which database engine is used. I do know that I receive such a file
> > once or twice a day.
> > While reading the file into the list of structures, I also validate
> > the consistency of the data in the file.
> > Before the data lines (starting with "d,") there is a header line
> > "hdr,". This line contains the sequential number of the file, the
> > number of records in the file and a checksum on a specific field of
> > the data (say column 2).
> > As I knew nothing up to now about .import, I wonder if there is a way
> > to include those checks in the second approach?
> >
> Firstly, this does not sound like a standard CSV format file (see
> http://www.creativyst.com/Doc/Articles/CSV/CSV01.htm ).
>
> Secondly, your validation checks cannot be added to the normal csv
> .import command using the sqlite shell, but they could be added to a
> customized copy of the import routine in your own code. It might also be
> possible to convert some of the validation tests into SQL check
> constraints on the table columns.
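>
> As a very rough sketch (the column names and rules below are invented,
> purely for illustration), such constraints might look like:
>
>    create table imported(
>      id     integer check (id > 0),
>      date   text    check (date like '____-__-__'),
>      time   text    check (time like '__:__%'),
>      amount integer check (amount >= 0)
>    );
>
> The header's record count and checksum can't be expressed as column
> constraints, but after an import your code could compare the "hdr,"
> values against a query along the lines of:
>
>    select count(*), sum(amount) from imported;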
> > 2. The deletion of the future dates is incorrect. On the contrary: in
> > the first approach, I re-examine the remaining structures every half a
> > minute until any of them becomes past-dated, then I process it just
> > like any other past-date structure and then free it. When a new .csv
> > file arrives, I add the new list of structures to the remaining ones
> > and continue to examine them every half a minute. I could do the same
> > with the sqlite3_exec call in the second approach, but I need another
> > approach for the case of the future records. I hope you have
> > satisfying answers for those 2 issues, and then I'll be glad to
> > proceed with the second approach.
> >
> In that case you could split the imported data into two tables using
> the date test, and then process only the table that contains the old
> records. The future records would remain in a second table. This second
> table would probably be the same one you import your new csv file
> records into. The processing of the old records would proceed as before.
>
> -- assumes the table imported contains the imported records
> -- select records to process based on date and time
> create table process as
>   select * from imported
>    where (date || ' ' || time) <= datetime('now');
> delete from imported where id in (select id from process);
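>
> Presumably, once your code has finished with the rows in process, you
> would drop (or empty) that staging table before the next cycle, since
> the create table above will fail if process already exists:
>
>    drop table process;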
>
> You could also skip the concatenation by splitting the date and time
> test, if you replace the condition above with
>
>     date <= date('now') and time <= time('now')
>
> It's probably a case of six of one or half a dozen of the other.
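>
> In context, that variant of the statement above would read:
>
>    create table process as
>      select * from imported
>       where date <= date('now') and time <= time('now');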
>
> There really are a lot of different possibilities for processing the
> data once you have them in a table in the database.
>
> HTH
> Dennis Cote
>
You have to be BRAVE to grow OLD.
There are no old CARELESS pilots or electricians.
-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------