Re: Load data from a csv file without using COPY

2018-06-19 Thread Nicolas Paris
hi

AFAIK you can use COPY from a JDBC connection, since COPY can stream data
(the FROM STDIN variant).
However, while faster than INSERT INTO, this might lock the target table
during the process.
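
For illustration, a minimal sketch of the stdin variant (table and file names
are made up; a JDBC client would send the same statement through the driver's
CopyManager):

COPY target_table FROM STDIN WITH (FORMAT csv);

-- or client-side from psql:
\copy target_table FROM 'data.csv' WITH (FORMAT csv)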

2018-06-19 22:16 GMT+02:00 Ravi Krishna :

> In order to test a real life scenario (and use it for benchmarking) I want
> to load large number of data from csv files.
> The requirement is that the load should happen like an application writing
> to the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and
> insert it to the database row by row.
>
> thanks


Re: Using COPY to import large xml file

2018-06-25 Thread Nicolas Paris
2018-06-25 16:25 GMT+02:00 Anto Aravinth :

> Thanks a lot. But I've got a lot of challenges! Looks like the SO data
> contains lots of tabs within itself, so the tab delimiter didn't work for me.
> I thought I could give a special delimiter, but it looks like PostgreSQL COPY
> allows only one character as delimiter :(
>
> Sad; I guess the only way is to insert, or do a thorough serialization of my
> data into something that COPY can understand.
>

easiest way would be:
xml -> csv -> \copy

By csv, I mean regular quoted csv: simply wrap each csv field in double
quotes, and escape any contained quotes with another double quote.

PostgreSQL's COPY csv parser is one of the most robust I have ever tested.
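
For example, a minimal sketch (file name, table and content are made up). Each
field is wrapped in double quotes, embedded quotes are doubled, and newlines
may stay inside the quotes:

1,"He said ""hello""","a multi-line
value"

Then:

\copy mytable FROM 'file.csv' WITH (FORMAT csv, QUOTE '"', ESCAPE '"')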


Re: Using COPY to import large xml file

2018-06-25 Thread Nicolas Paris
2018-06-25 17:30 GMT+02:00 Anto Aravinth :

>
>
> On Mon, Jun 25, 2018 at 8:54 PM, Anto Aravinth <
> anto.aravinth@gmail.com> wrote:
>
>> I tried but no luck. Here is the sample csv I wrote from my xml
>> converter:
>>
>> 1   "Are questions about animations or comics inspired by Japanese
>> culture or styles considered on-topic?"  "pExamples include a href=""
>> http://www.imdb.com/title/tt0417299/"; rel=""nofollow""Avatar/a, a
>> href=""http://www.imdb.com/title/tt1695360/"; rel=""nofollow""Korra/a
>> and, to some extent, a href=""http://www.imdb.com/title/tt0278238/";
>> rel=""nofollow""Samurai Jack/a. They're all widely popular American
>> cartoons, sometimes even referred to as ema href=""
>> https://en.wikipedia.org/wiki/Anime-influenced_animation";
>> rel=""nofollow""Amerime/a/em./p
>>
>>
>> pAre questions about these series on-topic?/p
>>
>> "   "pExamples include a href=""http://www.imdb.com/title/tt0417299/";
>> rel=""nofollow""Avatar/a, a href=""http://www.imdb.com/title/tt1695360/";
>> rel=""nofollow""Korra/a and, to some extent, a href=""
>> http://www.imdb.com/title/tt0278238/"; rel=""nofollow""Samurai Jack/a.
>> They're all widely popular American cartoons, sometimes even referred to as
>> ema href=""https://en.wikipedia.org/wiki/Anime-influenced_animation";
>> rel=""nofollow""Amerime/a/em./p
>>
>>
>> pAre questions about these series on-topic?/p
>>
>> "   "null"
>>
>> the schema of my table is:
>>
>>   CREATE TABLE so2 (
>> id  INTEGER NOT NULL PRIMARY KEY,
>> title varchar(1000) NULL,
>> posts text,
>> body TSVECTOR,
>> parent_id INTEGER NULL,
>> FOREIGN KEY (parent_id) REFERENCES so1(id)
>> );
>>
>> and when I run:
>>
>> COPY so2 from '/Users/user/programs/js/node-mbox/file.csv';
>>
>>
>> I get:
>>
>>
>> ERROR:  missing data for column "body"
>> CONTEXT:  COPY so2, line 1: "1 "Are questions about animations or comics
>> inspired by Japanese culture or styles considered on-top..."
>>
>> Not sure what I'm missing. Perhaps the above csv is breaking because I
>> have newlines within my content. But the error message is very hard to
>> debug.
>>
>>

What you are missing is the configuration of the COPY statement (please refer
to https://www.postgresql.org/docs/9.2/static/sql-copy.html),
such as format, delimiter, quote and escape.
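
For instance, a hedged sketch assuming the sample above is tab-delimited and
only carries the first columns (a guess from the sample; adjust to the real
layout):

COPY so2 (id, title, posts)
FROM '/Users/user/programs/js/node-mbox/file.csv'
WITH (FORMAT csv, DELIMITER E'\t', QUOTE '"', ESCAPE '"');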


Re: Foreign Data Wrapper

2017-12-21 Thread Nicolas Paris
On 21 Dec 2017 at 13:24, Virendra Shaktawat - Quipment India wrote:
> 
> Hello,
> 
> I am stuck at the foreign data wrapper. I am accessing data from MS SQL
> Server. A foreign table has been created with data. Unfortunately I am
> unable to perform DML operations like insert, update and delete on the
> foreign table. Whenever I try to perform a DML operation on the foreign
> table I get the error: "ERROR: cannot insert into foreign table "test12"
> 
> SQL state: 0A000"
> 
> Kindly give me a response soon because I am stuck in the middle of the project.

Hi,

I would give a try to a specific MSSQL FDW, such as:
https://www.openscg.com/bigsql/docs/tds_fdw/

and the github:
https://github.com/tds-fdw/tds_fdw

While insert is not yet available, apparently some work has already
been done and could be improved:
https://github.com/tds-fdw/tds_fdw/issues/9
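
A minimal setup sketch (server address, credentials and columns are made up):

CREATE EXTENSION tds_fdw;

CREATE SERVER mssql_svr FOREIGN DATA WRAPPER tds_fdw
  OPTIONS (servername 'mssql.example.com', port '1433', database 'mydb');

CREATE USER MAPPING FOR current_user SERVER mssql_svr
  OPTIONS (username 'appuser', password 'secret');

CREATE FOREIGN TABLE test12 (id int, label varchar(100))
  SERVER mssql_svr OPTIONS (schema_name 'dbo', table_name 'test12');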



psql format result as markdown tables

2018-01-13 Thread Nicolas Paris
Hello

I wonder if someone knows how to configure psql to output results as
markdown tables. 
Then instead of:

SELECT * FROM (values(1,2),(3,4)) as t;
 column1 | column2 
---------+---------
       1 |       2
       3 |       4

Get the result as:
SELECT * FROM (values(1,2),(3,4)) as t;
| column1 | column2 |
|---------|---------|
|       1 |       2 |
|       3 |       4 |

Thanks in advance



Re: New Copy Formats - avro/orc/parquet

2018-02-11 Thread Nicolas Paris
> > That is true, but the question is how significant the overhead is. If
> > it's 50% then reducing it would make perfect sense. If it's 1% then no
> > one is going to be bothered by it.
> 
> I think it's pretty clear that it's going to be way way much more than
> 1%. 

Good news, but I am not sure I understand why.

> It's trivial to construct cases where input parsing / output
> formatting takes the majority of the time. 

Binary -> ORC
       ^
       |
   PROGRAM parsing/output formatting on the fly

> And a lot of that you're going to be able to avoid with binary formats.

Still, the above diagram shows both the parsing and formatting steps, doesn't it?






Re: New Copy Formats - avro/orc/parquet

2018-02-11 Thread Nicolas Paris
On 11 Feb 2018 at 21:03, Andres Freund wrote:
> 
> 
> On February 11, 2018 12:00:12 PM PST, Nicolas Paris <nipari...@gmail.com> 
> wrote:
> >> > That is true, but the question is how significant the overhead is. If
> >> > it's 50% then reducing it would make perfect sense. If it's 1% then no
> >> > one is going to be bothered by it.
> >> 
> >> I think it's pretty clear that it's going to be way way much more than
> >> 1%. 
> >
> > Good news, but I am not sure I understand why.
> 
> I think you might have misunderstood my reply? I'm saying that going through 
> PROGRAM will have significant overhead. I can't quite make sense of the rest 
> of your reply otherwise?

True, I misunderstood. Then I agree the computational overhead should be
non-negligible.

I also have the storage and network transfer overhead in mind:
all those new formats are compressed; this is not true for the current
postgres BINARY format, nor obviously for the text-based format. In my
experience, the binary format is 10 to 30% larger than the text one. By
contrast, an ORC file can be up to 10 times smaller than a text-based
format.



Re: New Copy Formats - avro/orc/parquet

2018-02-11 Thread Nicolas Paris
On 11 Feb 2018 at 21:53, Andres Freund wrote:
> On 2018-02-11 21:41:26 +0100, Nicolas Paris wrote:
> > I also have the storage and network transfer overhead in mind:
> > all those new formats are compressed; this is not true for the current
> > postgres BINARY format, nor obviously for the text-based format. In my
> > experience, the binary format is 10 to 30% larger than the text one. By
> > contrast, an ORC file can be up to 10 times smaller than a text-based
> > format.
> 
> That seems largely irrelevant when arguing about using PROGRAM though,
> right?
> 

Indeed, those storage and network transfer savings are only relative to the
CSV/BINARY formats. They have no link with the PROGRAM aspect.



Re: New Copy Formats - avro/orc/parquet

2018-02-11 Thread Nicolas Paris
On 11 Feb 2018 at 22:19, Adrian Klaver wrote:
> On 02/11/2018 12:57 PM, Nicolas Paris wrote:
> 
> Just wondering what your time frame is on this? Asking because this would be
> considered a new feature and so would need to be added to a major release of
> Postgres. Currently work is going on for Postgres version 11 to be
> released(just a guess) late Fall 2018/early Winter 2019. The
> CommitFest(https://commitfest.postgresql.org/) for this release is currently
> approximately 3/4 of the way through. Not sure that new code could make it
> in at this point. This means it would be bumped to version 12 for 2019/2020.
> 

Right now, exporting (billions of rows * hundreds of columns) from postgres to
distributed tools such as spark is feasible, but relies on parsing, transfers,
tooling and workarounds, with the corresponding overhead.

Waiting until 2020 to get the opportunity to write COPY extensions would
mean using this feature around 2022. I mean, writing the ORC COPY
extension, extending the postgres JDBC driver, extending the spark jdbc
connector, all from different communities: this will be a long process.

But again, postgres would be the most advanced RDBMS, because AFAIK no
DB deals with those distributed formats for the moment. Keeping in mind
that such a feature will be released one day lets one plan the place of
postgres in a data warehouse architecture accordingly.



New Copy Formats - avro/orc/parquet

2018-02-10 Thread Nicolas Paris
Hello

I'd find it useful to be able to import/export from postgres to those modern
data formats:
- avro (c writer=https://avro.apache.org/docs/1.8.2/api/c/index.html)
- parquet (c++ writer=https://github.com/apache/parquet-cpp)
- orc (all writers=https://github.com/apache/orc)

Something like:
COPY table TO STDOUT ORC;

Would be lovely.

This would greatly enhance how postgres integrates in big-data ecosystem.

Any thoughts?

Thanks



Re: New Copy Formats - avro/orc/parquet

2018-02-10 Thread Nicolas Paris
> > I'd find it useful to be able to import/export from postgres to those
> > modern data formats:
> > - avro (c writer=https://avro.apache.org/docs/1.8.2/api/c/index.html)
> > - parquet (c++ writer=https://github.com/apache/parquet-cpp)
> > - orc (all writers=https://github.com/apache/orc)
> > 
> > Something like:
> > COPY table TO STDOUT ORC;
> > 
> > Would be lovely.
> > 
> > This would greatly enhance how postgres integrates in big-data ecosystem.
> > 
> > Any thoughts?
> 
> https://www.postgresql.org/docs/10/static/sql-copy.html
> 
> "PROGRAM
> 
> A command to execute. In COPY FROM, the input is read from standard
> output of the command, and in COPY TO, the output is written to the standard
> input of the command.
> 
> Note that the command is invoked by the shell, so if you need to pass
> any arguments to shell command that come from an untrusted source, you must
> be careful to strip or escape any special characters that might have a
> special meaning for the shell. For security reasons, it is best to use a
> fixed command string, or at least avoid passing any user input in it.
> "
>

PROGRAM would involve the overhead of transforming data from CSV or BINARY
to AVRO, for example.
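
That is, something along these lines, where the external program does all the
conversion work (csv2avro stands for a hypothetical converter, not a real
tool):

COPY mytable TO PROGRAM 'csv2avro > /tmp/mytable.avro' WITH (FORMAT csv);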

Here, I am talking about native format exports/imports for performance
considerations.



Re: Multiple COPY on the same table

2018-08-20 Thread Nicolas Paris
> Can I split a large file into multiple files and then run copy using
> each file?

AFAIK, the COPY command locks the table[1], though there is no mention of this
in the documentation[2].

> Will the performance boost be close to 4x?

You might be interested in the PgBulkInsert tool[3], which allows parallel
copy with some success according to benchmarks. However, that tool does
not handle multiline csv. Because of that limitation I have been using
the standard copy command with the binary format, with some success.
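
For what it's worth, a sketch of the split approach (file and table names are
made up): split the csv beforehand, then run one \copy per psql session in
parallel:

-- session 1
\copy target FROM 'part_01.csv' WITH (FORMAT csv)
-- session 2
\copy target FROM 'part_02.csv' WITH (FORMAT csv)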


[1] https://grokbase.com/t/postgresql/pgsql-general/01597pv3qs/copy-locking
[2] https://www.postgresql.org/docs/current/static/sql-copy.html
[3] https://github.com/bytefish/PgBulkInsert



COPY error when \. char

2018-03-20 Thread Nicolas Paris
Hello

I get an error when loading this kind of csv:

> test.csv:
"hello ""world"" "
"\."
"this
works
"
"this
\.
does
not"

> table:
create table test (field text);

> sql:
\copy test (field) from 'test.csv' CSV  quote '"' ESCAPE '"';
ERROR:  unterminated CSV quoted field
CONTEXT:  COPY test, line 7: ""this
"

Apparently, having the \. string alone on a line makes it break.
Is this normal?

Thanks



Re: Full-text Search - Thesaurus relationships

2018-10-31 Thread Nicolas Paris
On Wed, Oct 31, 2018 at 10:49:04AM +0100, Laurenz Albe wrote:
> Nicolas Paris wrote:
> > > > The documentation[1] says a thesaurus can include information about term
> > > > relationships, such as broader terms, preferred terms ...
> > > > I haven't been able to find out how to exploit those relationships in
> > > > postgres. Is there any keyword and associated syntax to make use of
> > > > them?
> > 
> > If "broader than" or "narrower than" have the same behavior than "is
> > equivalent to" I cannot figure out what's the purpose of them.
> Can you come up with a clear question?

Actually no, because I finally understood the behavior.

Thanks,

-- 
nicolas



Full-text Search - Thesaurus relationships

2018-10-30 Thread Nicolas Paris
Hi,

The documentation[1] says a thesaurus can include information about term
relationships, such as broader terms, preferred terms ...

I haven't been able to find out how to exploit those relationships in
postgres. Is there any keyword and associated syntax to make use of
them?
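
For context, this is how a thesaurus dictionary gets wired up (names follow
the documentation's example; the mythesaurus file lives under
$SHAREDIR/tsearch_data):

CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
    TEMPLATE = thesaurus,
    DictFile = mythesaurus,
    Dictionary = pg_catalog.english_stem
);

-- with mythesaurus.ths containing lines such as:
-- supernovae stars : sn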

Thanks,


[1]: 
https://www.postgresql.org/docs/current/static/textsearch-dictionaries.html#TEXTSEARCH-THESAURUS
-- 
nicolas



Re: Full-text Search - Thesaurus relationships

2018-10-31 Thread Nicolas Paris
On Wed, Oct 31, 2018 at 07:56:28AM +0100, Laurenz Albe wrote:
> > The documentation[1] says a thesaurus can include information about term
> > relationships, such as broader terms, preferred terms ...
> > I haven't been able to find out how to exploit those relationships in
> > postgres. Is there any keyword and associated syntax to make use of
> > them?
> No, it should happen automatically.

If "broader than" or "narrower than" have the same behavior than "is
equivalent to" I cannot figure out what's the purpose of them.

-- 
nicolas



Re: Default Privilege Table ANY ROLE

2018-11-15 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 03:19:00PM +0100, Nicolas Paris wrote:
> Hi
> 
> I'd like my user to be able to select on any new table created by other users.
> 
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT 
> >  select ON TABLES TO "myuser"
> > ...
> 
> 
> Do I really have to repeat the command for all users?
> 
> The problem is I have many users able to create tables, and all of them
> have to read each other's tables.
> 

There is apparently no trivial solution; could the Postgres DCL be
extended with this syntax in the future?

> ALTER DEFAULT PRIVILEGES  FOR  ALL ROLE  IN SCHEMA "myschema" GRANT select ON 
> TABLES TO "myuser"




-- 
nicolas



Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
Hi

I'd like my user to be able to select on any new table created by other users.

> ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" GRANT  
> select ON TABLES TO "myuser"
> ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" GRANT  
> select ON TABLES TO "myuser"
> ...


Do I really have to repeat the command for all users?

The problem is I have many users able to create tables, and all of them
have to read each other's tables.

Thanks



-- 
nicolas



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 09:04:44PM +0100, Laurenz Albe wrote:
> Nicolas Paris wrote:
> > I'd like my user to be able to select on any new table created by other users.
> > 
> > > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner1"  IN SCHEMA "myschema" 
> > > GRANT  select ON TABLES TO "myuser"
> > > ALTER DEFAULT PRIVILEGES  FOR  ROLE "theowner2"  IN SCHEMA "myschema" 
> > > GRANT  select ON TABLES TO "myuser"
> > > ...
> > 
> > 
> > Do I really have to repeat the command for all users?
> > 
> > The problem is I have many users able to create tables, and all of them
> > have to read each other's tables.
> 
> Now whenever "alice" has to create a table, she runs
> SET ROLE tableowner;
> Then all these tables belong to "tableowner", and each user in group 
> "tablereader"
> can SELECT from them:

Yes, this step is overhead to me:
> SET ROLE tableowner;

In my mind, both bob and alice inherit from the same group, so they should
share the tables they create according to this:

> ALTER DEFAULT PRIVILEGES FOR ROLE tableowner IN SCHEMA myschema GRANT SELECT 
> ON TABLES TO tablereader;
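
For reference, a minimal sketch of the suggested workaround (role names follow
the example above):

CREATE ROLE tableowner NOLOGIN;
CREATE ROLE tablereader NOLOGIN;
GRANT tableowner TO alice, bob;   -- the creators
GRANT tablereader TO myuser;      -- the readers
ALTER DEFAULT PRIVILEGES FOR ROLE tableowner IN SCHEMA myschema
    GRANT SELECT ON TABLES TO tablereader;
-- then, before creating a table:
SET ROLE tableowner;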




-- 
nicolas



Re: Default Privilege Table ANY ROLE

2018-11-14 Thread Nicolas Paris
On Wed, Nov 14, 2018 at 03:53:39PM -0500, Tom Lane wrote:
> Maybe I'm missing something, but doesn't this solve your problem
> as stated?
> 
> ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO public;


Not sure that's equivalent to what I am looking for below (which is not allowed):

> ALTER DEFAULT PRIVILEGES  FOR  ROLE  *.* IN SCHEMA "myschema" GRANT  select 
> ON TABLES TO "myuser"

-- 
nicolas



Re: Default Privilege Table ANY ROLE

2018-11-21 Thread Nicolas Paris
On Fri, Nov 16, 2018 at 03:17:59PM -0500, Tom Lane wrote:
> Stephen Frost  writes:
> > There was much discussion of being able to have 'FOR ALL ROLES' or
> > similar for ALTER DEFAULT PRIVILEGES when it went in, but there was a
> > lot of concern about one user getting to define the default privileges
> > for objects created by some other user.
> 
> Yeah, it's hard to see how you could allow such a command to anybody
> but a superuser.
> 

I have some applications using a specific schema. I don't want them to be
superuser, but I want them to be able to access any table in that
schema.

Because many users are able to create tables in that schema, I have to
write one ALTER DEFAULT PRIVILEGES statement for each user.
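
A hedged way around that is to generate the statements, e.g. a sketch that
loops over the creating roles (the rolname filter is a guess; it must run as
superuser or as a member of each role):

DO $$
DECLARE
    r text;
BEGIN
    FOR r IN SELECT rolname FROM pg_roles WHERE rolname LIKE 'app_%'
    LOOP
        EXECUTE format('ALTER DEFAULT PRIVILEGES FOR ROLE %I '
                       'IN SCHEMA myschema GRANT SELECT ON TABLES TO myuser', r);
    END LOOP;
END $$;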


Any chance to have a superuser per schema?


-- 
nicolas



announce: spark-postgres 3 released

2019-11-10 Thread Nicolas Paris
Hello postgres users,

Spark-postgres is designed for reliable and performant ETL in big-data
workloads and offers read/write/SCD capabilities to better bridge spark and
postgres. Version 3 introduces a datasource API. It outperforms
sqoop by a factor of 8 and the apache spark core jdbc by infinity.

Features:
- use of pg COPY statements
- parallel reads/writes
- use of hdfs to store intermediate csv files
- reindex after bulk-loading
- SCD1 computations done on the spark side
- use unlogged tables when needed
- handle arrays and multiline string columns
- useful jdbc functions (ddl, updates...)

The official repository:
https://framagit.org/parisni/spark-etl/tree/master/spark-postgres

And its mirror on microsoft github:
https://github.com/EDS-APHP/spark-etl/tree/master/spark-postgres

-- 
nicolas




Re: How to import Apache parquet files?

2019-11-10 Thread Nicolas Paris
> I would like to import (lots of) Apache parquet files to a PostgreSQL 11

you might be interested in the spark-postgres library. Basically the library
allows you to bulk load parquet files in one spark command:

> spark
> .read.format("parquet")
> .load(parquetFilesPath) // read the parquet files
> .write.format("postgres")
> .option("host","yourHost")
> .option("partitions", 4) // 4 threads
> .option("table","theTable")
> .option("user","theUser")
> .option("database","thePgDatabase")
> .option("schema","thePgSchema")
> .save() // bulk load into postgres

more details at https://github.com/EDS-APHP/spark-etl/tree/master/spark-postgres

On Tue, Nov 05, 2019 at 03:56:26PM +0100, Softwarelimits wrote:
> Hi, I need to come and ask here, I did not find enough information so I hope I
> am just having a bad day or somebody is censoring my search results for fun...
> :)
> 
> I would like to import (lots of) Apache parquet files to a PostgreSQL 11
> cluster - yes, I believe it should be done with the Python pyarrow module, but
> before digging into the possible traps I would like to ask here if there is
> some common, well understood and documented tool that may be helpful with that
> process?
> 
> It seems that the COPY command can import binary data, but I am not able to
> allocate enough resources to understand how to implement a parquet file import
> with that.
> 
> I really would like follow a person with much more knowledge than me about
> either PostgreSQL or Apache parquet format instead of inventing a bad wheel.
> 
> Any hints very welcome,
> thank you very much for your attention!
> John

-- 
nicolas




Re: ERROR: COPY escape must be a single one-byte character (multi-delimiter appears to work on Postgres 9.0 but does not on Postgres 9.2)

2019-11-16 Thread Nicolas Paris
> I am unable to edit this Talend job, as it's very old and we do not have the
> source code for the job anymore. I am unable to see what the actual delimiter

Compiled talend jobs produce jar files with java .class files in which
the SQL statements are in plain text. You should at least be able to get
the copy statement (which is plain SQL text), and also be able to
modify it.

On Wed, Nov 13, 2019 at 07:40:59PM -0500, Brandon Ragland wrote:
> Hello,
> 
> I have a Talend enterprise job that loads data into a PostgreSQL database via
> the COPY command. When migrating to a new server this command fails with the
> following error message: org.postgresql.util.PSQLException:ERROR: COPY escape
> must be a single one-byte character
> 
> The thing is, I looked over the documentation for both Postgres 9.0 and 9.2.
> Both versions of the documentation say that multi-byte delimiters are not
> allowed. So I'm very confused why this job works perfectly on Postgres 9.0
> but not on 9.2.
> 
> I am unable to edit this Talend job, as it's very old and we do not have the
> source code for the job anymore. I am unable to see what the actual delimiter
> is. I am also unable to see exactly how the COPY command is being run, such as
> whether it's pushing directly to the server via the Postgres driver, or if 
> it's
> created a temporary CSV file somewhere and then loading the data into the
> server. I believe the reason we have multi byte delimiters setup is due to the
> use of various special characters in a few of the columns for multiple tables.
> 
> I am not aware of any edits to the source code of the old 9.0 Postgres server.
> 
> The reason we are migrating servers is due to the end of life for CentOS 5. 
> The
> new server runs CentOS 7. I believe that both servers are using the default
> Postgres versions that come in the default CentOS repositories. I know for 
> sure
> that the CentOS 7 server is indeed running the default Postgres version, as I
> installed it myself through yum.
> 
> Any help would be greatly appreciated.
> 
> Also, is there a way to copy the old Postgres server, dependencies, and
> executables to our new server, in case the source was modified?
> 
> Brandon Ragland
> Software Engineer
> BREAKFRONT SOFTWARE
> Office: 704.688.4085 | Mobile: 240.608.9701 | Fax: 704.973.0607

-- 
nicolas