[sqlite] Quotation handling bug?

2006-09-18 Thread He Shiming

Hi,

I think I found a bug in SQLite version 3.3.7. The steps to reproduce it are 
as follows. I've tested it on Windows only.


C:\Something>sqlite3 newdb.db
CREATE TABLE 'MYTABLE' ('ID' INTEGER PRIMARY KEY);
INSERT INTO 'MYTABLE' ('ID') VALUES(1);
INSERT INTO 'MYTABLE' ('ID') VALUES(2);
INSERT INTO 'MYTABLE' ('ID') VALUES(3);

This is pretty straightforward. But when I try to fetch the data out...
SELECT 'ID' FROM 'MYTABLE' WHERE 'ID'=2;  -- no result
SELECT 'ID' FROM 'MYTABLE' WHERE ID=2; -- result is ID
SELECT ID FROM 'MYTABLE' WHERE ID=2; -- result is 2
SELECT 'MYTABLE'.'ID' FROM 'MYTABLE' WHERE 'MYTABLE'.'ID'=2; -- result is 2

I guess, to make it safer, I'll have to use the last one. However, the 
behavior of the first one and the second one looks like a malfunction. The 
four queries should produce completely equivalent results, which is "2". Or 
is it something I did wrong?
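For what it's worth, this matches documented SQL quoting rules rather than a bug: single quotes delimit string literals, while double quotes delimit identifiers, and SQLite only accepts a single-quoted token as an identifier in places where a string literal cannot occur (such as a table name). A small sketch using Python's sqlite3 bindings (not from the original post) shows the difference:

```python
import sqlite3

# Illustration: single quotes make a string literal, double quotes name an
# identifier. 'MYTABLE' in the FROM clause still works because no string
# literal can appear there, so SQLite falls back to treating it as a name.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO mytable (id) VALUES (?)", [(1,), (2,), (3,)])

# 'id' here is the literal string "id"; comparing it to 2 is never true.
assert con.execute("SELECT 'id' FROM mytable WHERE 'id' = 2").fetchall() == []

# Double-quoting names the column, so the query behaves as intended.
assert con.execute('SELECT "id" FROM mytable WHERE "id" = 2').fetchall() == [(2,)]
```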


Best regards,
He Shiming 



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Performance of "Group BY" and "Distinct"

2006-09-18 Thread PY

Hi all,

Which one has the better performance: "GROUP BY" or "DISTINCT"?

I need to retrieve the distinct values of a specific column. Either "GROUP BY"
or "DISTINCT" can accomplish that.
But I want to know which one has the better performance, or whether I can use
another SQL command to accomplish that with the best performance.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


SELECT Distinct x from foo;
or
SELECT x from foo group by x;
or
others.

Would you please help to tell me how to improve that?
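The dummy script above can be sanity-checked in Python's sqlite3 bindings (an illustration, not from the thread): both query forms return the same set of distinct values, and with the index on x the engine can typically satisfy either one from the index alone.

```python
import sqlite3

# The dummy script from the post, rebuilt in memory.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
con.execute("CREATE INDEX idx_foo ON foo(x)")
con.executemany("INSERT INTO foo (x) VALUES (?)",
                [("text%03d" % i,) for i in range(1, 301)])

# The two forms are equivalent for this query.
distinct_rows = con.execute("SELECT DISTINCT x FROM foo").fetchall()
group_rows = con.execute("SELECT x FROM foo GROUP BY x").fetchall()
assert sorted(distinct_rows) == sorted(group_rows)
assert len(distinct_rows) == 300
```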

Thanks for your great help.


Thanks,
VK


[sqlite] How to speed up the performance in LIMIT & OFFSET?

2006-09-18 Thread PY

Hi All,
I have a problem with LIMIT & OFFSET performance.
Due to the limitation of memory, I could not get all of the query result at
one time.
In our solution, we use LIMIT and OFFSET to avoid the memory issue.

We observed the performance of LIMIT & OFFSET; the response time appears to
grow linearly with the offset. Our table only has 300~500 records.



Here is the dummy script
Create Table foo(id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT);
create index idx_foo on foo(x);

insert into foo(x) values('text001');
:
:
insert into foo(x) values('text300');


Select distinct x from foo LIMIT 20 OFFSET 0;
Select distinct x from foo LIMIT 20 OFFSET 20;
Select distinct x from foo LIMIT 20 OFFSET 40;
Select distinct x from foo LIMIT 20 OFFSET 60;
:
:
Select distinct x from foo LIMIT 20 OFFSET 280;



Would you please help to tell me how to improve that?
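One common workaround (a hedged suggestion, not from the thread) is keyset pagination: remember the last value returned and seek past it, so every page costs roughly the same, whereas OFFSET must generate and discard all the skipped rows. A sketch in Python's sqlite3 bindings:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
con.execute("CREATE INDEX idx_foo ON foo(x)")
con.executemany("INSERT INTO foo (x) VALUES (?)",
                [("text%03d" % i,) for i in range(1, 301)])

def page(last_x, size=20):
    # Seek past the last value already seen instead of discarding OFFSET
    # rows; the index on x makes this a direct lookup.
    return [row[0] for row in con.execute(
        "SELECT DISTINCT x FROM foo WHERE x > ? ORDER BY x LIMIT ?",
        (last_x, size))]

first = page("")          # "" sorts before every value, so this is page one
second = page(first[-1])  # resume from the last value of the previous page
assert first[0] == "text001" and len(first) == 20
assert second[0] == "text021"
```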

Thanks for your great help.


Thanks,
VK


Re: [sqlite] sqlite_exec query string maximum length?

2006-09-18 Thread Kervin L. Pierre
Hello,

Thanks for the explanation.  Looks like
this isn't going to help me after all.
I thought sqlite3_exec() would compile
the query only once.

Best regards,
Kervin

--- Dennis Cote <[EMAIL PROTECTED]> wrote:

> Kervin L. Pierre wrote:
> > and for performance, I would like to execute as
> > few sqlite_exec() calls as possible.
> >
> >   
> 
> Kervin,
> 
> While you can pass several SQL statements to
> sqlite3_exec in one sql 
> string, each statement is compiled and executed
> separately, so the 
> performance increase over separate calls to
> sqlite3_exec is not 
> substantial. If you add 1000 inserts into a string
> and pass that to 
> sqlite3_exec, it will repeat the following steps
> 1000 times; parse 
> insert statement sql, generate insert statement,
> execute insert 
> statement, and destroy insert statement.
> 
> However, if you use a prepared statement, and then
> simply bind new 
> values to it for each insert you eliminate the
> parse, generate, and 
> destroy steps for 999 out of the 1000 statements.
> This will provide a 
> much more substantial performance boost.
> 
> Note, it is also extremely important to wrap your
> block of inserts with 
> "begin transaction" and "end transaction" commands.
> 
> HTH
> Dennis Cote
> 





Re: [sqlite] sqlite_exec query string maximum length?

2006-09-18 Thread Kervin L. Pierre
Hello,

Thanks.

Best regards,
Kervin


--- [EMAIL PROTECTED] wrote:

> "Kervin L. Pierre" <[EMAIL PROTECTED]> wrote:
> > what is the maximum number
> > of characters there can be in a query
> > string sent to sqlite_exec()? 
> 
> 2147483647 bytes
> 
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
> 





Re: [sqlite] sqlite_exec query string maximum length?

2006-09-18 Thread Dennis Cote

Kervin L. Pierre wrote:

and for performance, I would like to execute as
few sqlite_exec() calls as possible.

  


Kervin,

While you can pass several SQL statements to sqlite3_exec in one sql 
string, each statement is compiled and executed separately, so the 
performance increase over separate calls to sqlite3_exec is not 
substantial. If you add 1000 inserts into a string and pass that to 
sqlite3_exec, it will repeat the following steps 1000 times: parse the 
insert statement SQL, generate the insert statement, execute the insert 
statement, and destroy the insert statement.


However, if you use a prepared statement, and then simply bind new 
values to it for each insert you eliminate the parse, generate, and 
destroy steps for 999 out of the 1000 statements. This will provide a 
much more substantial performance boost.


Note, it is also extremely important to wrap your block of inserts with 
"begin transaction" and "end transaction" commands.
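The advice above can be sketched in Python's sqlite3 bindings (an illustration, not the poster's actual code), where executemany reuses one prepared statement and the connection context manager supplies the surrounding transaction:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")

# One prepared statement bound 1000 times inside one transaction: the SQL
# is parsed and generated once, and each row pays only the execute step.
with con:  # commits at the end, giving the single begin/end transaction
    con.executemany("INSERT INTO t (v) VALUES (?)",
                    ((i,) for i in range(1000)))

assert con.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1000
```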


HTH
Dennis Cote





Re: [sqlite] Index Optimisation

2006-09-18 Thread Dennis Cote

Da Martian wrote:


I cannot tell if sqlite uses my indexes or not. Is there a way
to tell what indexes are used in a query? 


Use the explain query plan command. Simply add these keywords before 
your query.


   EXPLAIN QUERY PLAN SELECT 


Instead of the query results you will get a table listing the tables and 
indexes that will be used to implement your query.
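A quick sketch in Python's sqlite3 bindings (table and index names are made up for illustration; the exact wording of the plan rows varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER, y TEXT)")
con.execute("CREATE INDEX idx_t_x ON t(x)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, "row") for i in range(10)])

# EXPLAIN QUERY PLAN returns the chosen strategy instead of query results;
# the plan should mention the index used to satisfy the WHERE clause.
plan = con.execute("EXPLAIN QUERY PLAN SELECT y FROM t WHERE x = 5").fetchall()
assert any("idx_t_x" in str(row) for row in plan)
```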


HTH
Dennis Cote




Re: Re[4]: [sqlite] reg:blob data reading

2006-09-18 Thread Thomas . L
On Mon, 18 Sep 2006 10:20:36 -0400, you wrote:

>There's no right or wrong way.

There is a technical report from June 2006 with the title: 
   "To BLOB or Not To BLOB:
Large Object Storage in a Database or a Filesystem?"

at
http://research.microsoft.com/research/pubs/view.aspx?type=technical%20report=1089

Maybe it turns a light on... ;-)


Best Regards
Thomas

www.thlu.de




Re[2]: [sqlite] reg:blob data reading

2006-09-18 Thread Teg
Hello Dennis,

Monday, September 18, 2006, 11:50:03 AM, you wrote:

DJ> Jay Sprenkle wrote:
>> On 9/18/06, Teg <[EMAIL PROTECTED]> wrote:
>>> Hello Jay,

DJ> Everyone has different needs.  We like keeping all of the data (blobs
DJ> included) in one data file.  We also use the encryption extension, and
DJ> it is mandatory that our blobs be encrypted.  I don't need "read 
DJ> arbitrary byte ranges from a blob" for my work project, but I could use
DJ> them in a personal project that involves sqlite (no encryption here; but
DJ> it is important to keep all data in one data file).




I'm probably going to be offering optional encryption too. Why did
you choose to use the SQLite encryption extension versus just
encrypting the blobs after you read them back in and before you write
them out?

-- 
Best regards,
 Teg   mailto:[EMAIL PROTECTED]





Re: [sqlite] reg:blob data reading

2006-09-18 Thread Dennis Jenkins

Jay Sprenkle wrote:

On 9/18/06, Teg <[EMAIL PROTECTED]> wrote:

Hello Jay,

The whole reason I store files in the DB in the first place is to have
a single "package" to move around and backup when needed. My
application is storing whole series of PNG and JPG files in the
DB with meta data describing where the images came from.


My technique won't help you then. I use it for things like scanning
images of documents and using the database to keep track of the documents.
I never have to search a picture using a select statement so it would
be silly for me to put them into the database. I just back up the file
system using off the shelf backup software and it works fine.



Everyone has different needs.  We like keeping all of the data (blobs 
included) in one data file.  We also use the encryption extension, and 
it is mandatory that our blobs be encrypted.  I don't need "read 
arbitrary byte ranges from a blob" for my work project, but I could use 
them in a personal project that involves sqlite (no encryption here; but 
it is important to keep all data in one data file).






Re: Re[4]: [sqlite] reg:blob data reading

2006-09-18 Thread Jay Sprenkle

On 9/18/06, Teg <[EMAIL PROTECTED]> wrote:

Hello Jay,

There's no right or wrong way. I was just suggesting that there are
cases where you want to store the whole file in the DB. I have an
application that generates 1000's of 150K compressed files. I've been
toying with the idea of shoving them all onto a DB because of the way
Windows groans when you have to enumerate folders with many small
files.


The scanner I use takes care of that by making a directory for each
batch of documents scanned. I've seen that same thing on Windows
systems for other projects. I ended up having to create a manager that would
store files into numbered sub directories to avoid it.


The downside of course is I'd have to vacuum the tables from time to
time.


That's why we get paid the 'big bucks'   ;)




Re[4]: [sqlite] reg:blob data reading

2006-09-18 Thread Teg
Hello Jay,

Monday, September 18, 2006, 10:05:19 AM, you wrote:

JS> On 9/18/06, Teg <[EMAIL PROTECTED]> wrote:
>> Hello Jay,
>>
>> The whole reason I store files in the DB in the first place is to have
>> a single "package" to move around and backup when needed. My
>> application is storing whole series of PNG and JPG files in the
>> DB with meta data describing where the images came from.

JS> My technique won't help you then. I use it for things like scanning images
JS> of documents and using the database to keep track of the documents.
JS> I never have to search a picture using a select statement so it would
JS> be silly for me to put them into the database. I just back up the file
JS> system using off the shelf backup software and it works fine.

JS> --
JS> SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
JS> http://www.reddawn.net/~jsprenkl/Sqlite

JS> Cthulhu Bucks!
JS> http://www.cthulhubucks.com



There's no right or wrong way. I was just suggesting that there are
cases where you want to store the whole file in the DB. I have an
application that generates 1000's of 150K compressed files. I've been
toying with the idea of shoving them all onto a DB because of the way
Windows groans when you have to enumerate folders with many small
files.

In the case of these small files, performance is dominated by
enumerating and decompressing so, even if it's a bit slower selecting
the files out of the DB, any improvement in enumeration speed would make
a noticeable performance boost.

The downside of course is I'd have to vacuum the tables from time to
time.

-- 
Best regards,
 Teg   mailto:[EMAIL PROTECTED]





Re: [sqlite] reg:blob data reading

2006-09-18 Thread sandhya
Thank you very much; I just need confirmation on this.
If you don't mind, could you please tell me: if I want to perform that kind of
operation, is there any way other than storing the blob in some temp file and
reading it using normal fopen() calls?

Please do the needful.
Thanks a lot
Sandhya R




- Original Message - 
From: <[EMAIL PROTECTED]>
To: ; "Teg" <[EMAIL PROTECTED]>
Sent: Monday, September 18, 2006 7:32 PM
Subject: Re: [sqlite] reg:blob data reading


> SQLite does not (at this time) have the ability to incrementally
> read or write BLOBs.  You have to read and write the whole blob
> all at once.
> --
> D. Richard Hipp   <[EMAIL PROTECTED]>
>
>






Re: [sqlite] INSERT into a whole table at once

2006-09-18 Thread Jay Sprenkle

On 9/18/06, Richard Stern <[EMAIL PROTECTED]> wrote:

I have a big table and I want to change all the values in a row to the same
thing but I'm not sure what key word would allow me to do this.



Nothing built into the SQL language.
It would be something like this:
update mytable
set column1 = 'test', column2 = 'test', column3 = 'test'
where rowid = something

You can write a program to do this fairly simply.
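If the goal is to set the same value across every row of the table, an UPDATE with no WHERE clause touches all rows (each column that should change is still listed explicitly). A sketch in Python's sqlite3 bindings with made-up names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a TEXT, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [("x", "y"), ("p", "q")])

# No WHERE clause, so every row is updated in one statement.
con.execute("UPDATE t SET a = 'test', b = 'test'")
assert con.execute("SELECT * FROM t").fetchall() == [("test", "test"),
                                                     ("test", "test")]
```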

--
--
SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com




Re: Re[2]: [sqlite] reg:blob data reading

2006-09-18 Thread Jay Sprenkle

On 9/18/06, Teg <[EMAIL PROTECTED]> wrote:

Hello Jay,

The whole reason I store files in the DB in the first place is to have
a single "package" to move around and backup when needed. My
application is storing whole series of PNG and JPG files in the
DB with meta data describing where the images came from.


My technique won't help you then. I use it for things like scanning images
of documents and using the database to keep track of the documents.
I never have to search a picture using a select statement so it would
be silly for me to put them into the database. I just back up the file
system using off the shelf backup software and it works fine.

--
SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com




Re: [sqlite] reg:blob data reading

2006-09-18 Thread drh
SQLite does not (at this time) have the ability to incrementally
read or write BLOBs.  You have to read and write the whole blob
all at once.
--
D. Richard Hipp   <[EMAIL PROTECTED]>





[sqlite] INSERT into a whole table at once

2006-09-18 Thread Richard Stern
I have a big table and I want to change all the values in a row to the same
thing but I'm not sure what key word would allow me to do this.

I tried things like:

INSERT INTO Table ALL VALUES ("test");

But it has an error on the ALL. I tried putting the ALL in lots of different
places but it didn't work. I also tried * as a wildcard in the place where
you normally specify columns but that didn't work either.

There is a way to do this, right?


Richard






Re[2]: [sqlite] reg:blob data reading

2006-09-18 Thread Teg
Hello Jay,

Monday, September 18, 2006, 9:23:27 AM, you wrote:

JS> On 9/18/06, sandhya <[EMAIL PROTECTED]> wrote:
>> I think too, if they are Big-Blobs, it is better to store only a Reference
>> to a File.
>>
>> May I know a little more clearly about this? What does it mean, actually?

JS> Store the path to the file in the database ( C:\somefile.dat   or
JS> /tmp/somefile.dat ).
JS> Then open the file using regular file handing routines (  fopen() etc ).



The whole reason I store files in the DB in the first place is to have
a single "package" to move around and backup when needed. My
application is storing whole series of PNG and JPG files in the
DB with meta data describing where the images came from.

I like the concept of being able to set an upper limit on the number
of retrieved bytes of a blob. I don't see any easy way to do it
though. My image files tend to be 50-500K, so performance-wise it's
pretty quick.

-- 
Best regards,
 Teg   mailto:[EMAIL PROTECTED]





Re: [sqlite] reg:blob data reading

2006-09-18 Thread Jay Sprenkle

On 9/18/06, sandhya <[EMAIL PROTECTED]> wrote:

I think too, if they are Big-Blobs, it is better to store only a Reference
to a File.

May I know a little more clearly about this? What does it mean, actually?


Store the path to the file in the database ( C:\somefile.dat   or
/tmp/somefile.dat ).
Then open the file using regular file handing routines (  fopen() etc ).




Re: [sqlite] reg:blob data reading

2006-09-18 Thread sandhya
You mean,
  I have to get the entire BLOB from the DB, store it in some
temp file, and read it from there?

Is there no way we can read the required number of bytes of the data from the
DB directly, setting the pointer or handle to the current position?
Please tell me whether the way I am thinking is wrong.

Help me
Thank you
Sandhya






- Original Message - 
From: "Jay Sprenkle" <[EMAIL PROTECTED]>
To: 
Sent: Monday, September 18, 2006 6:53 PM
Subject: Re: [sqlite] reg:blob data reading


> On 9/18/06, sandhya <[EMAIL PROTECTED]> wrote:
> > I think too, if they are Big-Blobs, it is better to store only a
Reference
> > to a File.
> >
> > May i know litlle more clearly about this?What it mean actually?
>
> Store the path to the file in the database ( C:\somefile.dat   or
> /tmp/somefile.dat ).
> Then open the file using regular file handing routines (  fopen() etc ).
>






Re: [sqlite] Multiuser on LAN, how good/bad is it ?

2006-09-18 Thread Nikki Locke
Martin Alfredsson wrote:
> Reading the documentation is a bit contradicting: 
>  
> >A good rule of thumb is that you should avoid using SQLite in 
> situations where 
> >the same database will be accessed simultaneously from many computers 
> over a network filesystem. 
>  
> >Most SQL database engines are client/server based. Of those that are 
> serverless, SQLite is the only 
> >one that this author knows of that allows multiple applications to 
> access the same database at the same time. 
>  
> Though I can interpret the second statement as "on a single machine" 
> I'd like to hear about what 
> your experience is with using it as multiuser on a LAN. 
>  
> Does it crash due to network problem, does it work better with WinXP 
> than Win9x/2k/linux  etc. 

All SQLite write operations obtain an exclusive lock on the whole database. 
This 
is fine, but obviously a bit limiting if you have thousands of tasks.

The exclusive lock is a file system lock, which is fine except where the 
database file is on certain NFS-mounted file systems - there are known bugs in 
some implementations of NFS which mean locking does not work as expected (two 
people can end up holding the same exclusive lock at the same time!). If you 
hold a SQLite database on such an NFS filesystem and try to use it multi-user, 
it may well break and corrupt the database. I repeat that this is not a bug in 
SQLite, but in some implementations of NFS.

Hope this explanation is clear and correct (I'm just an SQLite user who gets 
most of his knowledge from reading this list).

-- 
Nikki Locke, Trumphurst Ltd.  PC & Unix consultancy & programming
http://www.trumphurst.com/






Re: [sqlite] time is off

2006-09-18 Thread drh
TiNo <[EMAIL PROTECTED]> wrote:
> example:
> 
> sqlite> select datetime('now');
> 2006-09-18 10:11:48
> sqlite> select datetime('now','utc');
> 2006-09-18 08:11:52
> sqlite> select datetime('now','localtime');
> 2006-09-18 12:11:58
> 
> 
> select datetime('now') shows utc time, Localtime shows my localtime,
> but now,utc is off. It should be the same as datetime('now'). Why is this?

The 'utc' modifier interprets the time to the left as localtime
and converts it to UTC.  Since in the example above, the time to
the left was already in UTC, you are doing a double-conversion.
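The double conversion can be checked directly: since in current SQLite versions 'now' is fixed for the duration of a statement, applying 'localtime' and then 'utc' should round-trip exactly (a sketch in Python's sqlite3 bindings):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# datetime('now') is already UTC; 'localtime' converts UTC -> local and
# 'utc' converts local -> UTC, so chaining the two is an exact round trip.
now_utc, round_trip = con.execute(
    "SELECT datetime('now'), datetime('now', 'localtime', 'utc')").fetchone()
assert now_utc == round_trip
```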

--
D. Richard Hipp   <[EMAIL PROTECTED]>





Re: [sqlite] Multiuser on LAN, how good/bad is it ?

2006-09-18 Thread Jay Sprenkle

On 9/17/06, Martin Alfredsson <[EMAIL PROTECTED]> wrote:

Reading the documentation is a bit contradicting:

 >A good rule of thumb is that you should avoid using SQLite in
situations where
 >the same database will be accessed simultaneously from many computers
over a network filesystem.


The problem is usually that locking does not work correctly.
The database gets corrupted.




Re: [sqlite] time is off

2006-09-18 Thread Jay Sprenkle

On 9/18/06, TiNo <[EMAIL PROTECTED]> wrote:

example:

sqlite> select datetime('now');
2006-09-18 10:11:48
sqlite> select datetime('now','utc');
2006-09-18 08:11:52
sqlite> select datetime('now','localtime');
2006-09-18 12:11:58


select datetime('now') shows utc time, Localtime shows my localtime,
but now,utc is off. It should be the same as datetime('now'). Why is this?


Did you set your locale on your machine?
It can't convert to localtime correctly without being set for the
right time zone.




[sqlite] time is off

2006-09-18 Thread TiNo

example:

sqlite> select datetime('now');
2006-09-18 10:11:48
sqlite> select datetime('now','utc');
2006-09-18 08:11:52
sqlite> select datetime('now','localtime');
2006-09-18 12:11:58


select datetime('now') shows utc time, Localtime shows my localtime,
but now,utc is off. It should be the same as datetime('now'). Why is this?


[sqlite] reg:blob data reading

2006-09-18 Thread sandhya
I think too, if they are Big-Blobs, it is better to store only a Reference
to a File.

May I know a little more clearly about this? What does it mean, actually?

Right now what I am doing is: I have a directory in which there are many
files, and I have loaded all the files into the database (i.e. the content
of the files as blobs).
But my application reads only 512 bytes at a time. If that is the case,
how can I handle this data?
Please suggest me your views.

Thanks a lot
Sandhya R


Re: [sqlite] sqlite_blob

2006-09-18 Thread sandhya
I think too, if they are Big-Blobs, it is better to store only a Reference
to a File.
May I know a little more clearly about this? What does it mean, actually?

Right now what I am doing is: I have a directory in which there are many
files, and I have loaded all the files into the database (i.e. the content
of the files as blobs).
But my application reads only 512 bytes at a time. If that is the case,
how can I handle this data?
Please suggest me your views.
Thanks a lot
Sandhya R




- Original Message - 
From: <[EMAIL PROTECTED]>
To: 
Sent: Monday, September 18, 2006 12:51 PM
Subject: Re: [sqlite] sqlite_blob


> Hello
>
> sandhya wrote:
> > Hi,
> > Is there any way of reading Blob data for the given no.of bytes?
> > I mean is there any lseek kind of  function call in sqlite to handle
> > reading BLOB data.
>
> http://www.sqlite.org/capi3ref.html says:
> "If the result is a BLOB then the sqlite3_column_bytes() routine returns
> the number of bytes in that BLOB."
>
> If you fetch the Blob into RAM, you can do with it with your given number
> of Bytes whatever you want
>
> I think too, if they are Big-Blobs, it is better to store only a
> Reference to a File.
>
> Best Regards
> Thomas
>
> -- 
> www.thlu.de
>
>






Re: [sqlite] sqlite_blob

2006-09-18 Thread thomas . l
Hello

sandhya wrote:
> Hi,
> Is there any way of reading Blob data for the given no.of bytes?
> I mean is there any lseek kind of  function call in sqlite to handle
> reading BLOB data.

http://www.sqlite.org/capi3ref.html says:
"If the result is a BLOB then the sqlite3_column_bytes() routine returns
the number of bytes in that BLOB."

If you fetch the Blob into RAM, you can do whatever you want with your
given number of bytes.

I think too, if they are Big-Blobs, it is better to store only a
Reference to a File.
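The fetch-then-slice approach described above looks roughly like this in Python's sqlite3 bindings (file name and sizes are made up for illustration; incremental BLOB I/O via sqlite3_blob_read() was only added to SQLite in a later release, 3.4.0):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (name TEXT, data BLOB)")
payload = bytes(range(256)) * 8  # 2048-byte dummy blob
con.execute("INSERT INTO files VALUES (?, ?)", ("dummy.bin", payload))

# The whole BLOB comes back as a single value; 512-byte chunking then
# happens in application code by slicing the in-memory buffer.
blob = con.execute("SELECT data FROM files WHERE name = ?",
                   ("dummy.bin",)).fetchone()[0]
chunks = [blob[i:i + 512] for i in range(0, len(blob), 512)]
assert len(chunks) == 4 and b"".join(chunks) == payload
```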

Best Regards
Thomas

-- 
www.thlu.de

