Re: [sqlite] How to truncate table?

2005-07-19 Thread chan wilson

Dear Puneet Kishor,

  I never thought it did, since the SQLite documentation I checked 
does not mention it.


  Many thanks for your very kind help!

  Cheers!

  Yours,
  Wilson Chan


From: Puneet Kishor <[EMAIL PROTECTED]>
Reply-To: sqlite-users@sqlite.org
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to truncate table?
Date: Wed, 20 Jul 2005 00:40:33 -0500


On Jul 20, 2005, at 12:33 AM, chan wilson wrote:


Hi Puneet Kishor,

 Yes, you are right. I have to check whether "DELETE FROM 
table;" will really set the auto-increment primary key seed back to 
zero.


Yes, it will. Here you go --

sqlite> create table t (a integer primary key, b);
sqlite> select * from t;
sqlite> insert into t (b) values ('foo');
sqlite> insert into t (b) values ('bar');
sqlite> select * from t;
1|foo
2|bar
sqlite> delete from t;
sqlite> select * from t;
sqlite> insert into t (b) values ('qux');
sqlite> insert into t (b) values ('baz');
sqlite> select * from t;
1|qux
2|baz
sqlite>






From: Puneet Kishor <[EMAIL PROTECTED]>
Reply-To: sqlite-users@sqlite.org
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to truncate table?
Date: Wed, 20 Jul 2005 00:29:27 -0500


On Jul 19, 2005, at 11:50 PM, chan wilson wrote:


Hi,

 In MS SQL, there is a "TRUNCATE TABLE" statement that will remove all 
the records of a table and set the identity back to zero.


 Is there any way to accomplish this in SQLite?



How about

DELETE FROM table;

I am assuming, by 'identity' you refer to the auto-incrementing 
primary key. If so, yes, SQLite will do the same thing.


Fwiw, I believe TRUNCATE TABLE does a bit more than what you are 
implying above. It also truncates the logs and whatnot. But, that 
may be neither here nor there.



--
Puneet Kishor






--
Puneet Kishor






Re: [sqlite] How to truncate table?

2005-07-19 Thread Puneet Kishor


On Jul 20, 2005, at 12:33 AM, chan wilson wrote:


Hi Puneet Kishor,

 Yes, you are right. I have to check whether "DELETE FROM table;" 
will really set the auto-increment primary key seed back to zero.


Yes, it will. Here you go --

sqlite> create table t (a integer primary key, b);
sqlite> select * from t;
sqlite> insert into t (b) values ('foo');
sqlite> insert into t (b) values ('bar');
sqlite> select * from t;
1|foo
2|bar
sqlite> delete from t;
sqlite> select * from t;
sqlite> insert into t (b) values ('qux');
sqlite> insert into t (b) values ('baz');
sqlite> select * from t;
1|qux
2|baz
sqlite>
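
A side note on the example above: column a is a plain INTEGER PRIMARY KEY, 
so once the table is empty the numbering starts over at 1, as shown. If the 
column were declared INTEGER PRIMARY KEY AUTOINCREMENT instead, the counter 
would persist in the internal sqlite_sequence table and survive the DELETE. 
A minimal sketch of the extra step needed in that case, reusing the table 
name t from the transcript above:

delete from t;
delete from sqlite_sequence where name = 't';  -- resets the AUTOINCREMENT counter for t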






From: Puneet Kishor <[EMAIL PROTECTED]>
Reply-To: sqlite-users@sqlite.org
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to truncate table?
Date: Wed, 20 Jul 2005 00:29:27 -0500


On Jul 19, 2005, at 11:50 PM, chan wilson wrote:


Hi,

 In MS SQL, there is a "TRUNCATE TABLE" statement that will remove all the 
records of a table and set the identity back to zero.


 Is there any way to accomplish this in SQLite?



How about

DELETE FROM table;

I am assuming, by 'identity' you refer to the auto-incrementing 
primary key. If so, yes, SQLite will do the same thing.


Fwiw, I believe TRUNCATE TABLE does a bit more than what you are 
implying above. It also truncates the logs and whatnot. But, that may 
be neither here nor there.



--
Puneet Kishor





--
Puneet Kishor



Re: [sqlite] How to truncate table?

2005-07-19 Thread chan wilson

Hi Puneet Kishor,

 Yes, you are right. I have to check whether "DELETE FROM table;" will 
really set the auto-increment primary key seed back to zero.



From: Puneet Kishor <[EMAIL PROTECTED]>
Reply-To: sqlite-users@sqlite.org
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to truncate table?
Date: Wed, 20 Jul 2005 00:29:27 -0500


On Jul 19, 2005, at 11:50 PM, chan wilson wrote:


Hi,

 In MS SQL, there is a "TRUNCATE TABLE" statement that will remove all 
the records of a table and set the identity back to zero.


 Is there any way to accomplish this in SQLite?



How about

DELETE FROM table;

I am assuming, by 'identity' you refer to the auto-incrementing 
primary key. If so, yes, SQLite will do the same thing.


Fwiw, I believe TRUNCATE TABLE does a bit more than what you are 
implying above. It also truncates the logs and whatnot. But, that 
may be neither here nor there.



--
Puneet Kishor






Re: [sqlite] How to truncate table?

2005-07-19 Thread Puneet Kishor


On Jul 19, 2005, at 11:50 PM, chan wilson wrote:


Hi,

 In MS SQL, there is a "TRUNCATE TABLE" statement that will remove all the 
records of a table and set the identity back to zero.


 Is there any way to accomplish this in SQLite?



How about

DELETE FROM table;

I am assuming, by 'identity' you refer to the auto-incrementing primary 
key. If so, yes, SQLite will do the same thing.


Fwiw, I believe TRUNCATE TABLE does a bit more than what you are 
implying above. It also truncates the logs and whatnot. But, that may 
be neither here nor there.



--
Puneet Kishor



[sqlite] How to truncate table?

2005-07-19 Thread chan wilson

Hi,

 In MS SQL, there is a "TRUNCATE TABLE" statement that will remove all the 
records of a table and set the identity back to zero.


 Is there any way to accomplish this in SQLite?

 Thanks!




[sqlite] Question about temp table performance

2005-07-19 Thread K. Haley

I have two versions of the same algorithm.  The first operates directly
on the main db table.  The second operates on a temp table containing
only the working set.  The problem is that the second version is about
20x slower, 1.5 sec versus 30 sec.  If the EXISTS line in the second
version is commented out the execution time drops to 9 sec.  Any ideas?


Version 1:
  /* &stmt1 .. &stmt6 are placeholder statement handles; the original
     variable names were not preserved in the archive */
  sq_res=sqlite3_prepare(db,"UPDATE group_article SET parent=null "
      "WHERE group_id=?;",-1,&stmt1,NULL);
  sq_res=sqlite3_prepare(db,"UPDATE group_article SET parent="
      "( SELECT article.id FROM refs ,article "
      "WHERE refs.article_id=group_article.article_id "
      "AND reference=hash "
      "AND EXISTS (SELECT id FROM group_article WHERE "
      "group_id=?1 AND article_id=article.id) "
      "ORDER BY refs.id DESC LIMIT 1 ) "
      "WHERE group_id=?1;",-1,&stmt2,NULL);

Version 2:
  sq_res=sqlite3_prepare(db,"CREATE TEMP TABLE thrd(aid UNIQUE, "
      "parent);",-1,&stmt3,NULL);
  sq_res=sqlite3_prepare(db,"INSERT INTO thrd(aid) SELECT article_id "
      "FROM group_article "
      "WHERE group_id=?;",-1,&stmt4,NULL);
  sq_res=sqlite3_prepare(db,"UPDATE thrd SET parent="
      "( SELECT article.id FROM refs ,article "
      "WHERE refs.article_id=thrd.aid "
      "AND reference=hash "
      "AND EXISTS (SELECT aid FROM thrd WHERE aid=article.id) "
      "ORDER BY refs.id DESC LIMIT 1 ) "
      ";",-1,&stmt5,NULL);
  sq_res=sqlite3_prepare(db,"UPDATE group_article SET parent="
      "( SELECT parent FROM thrd "
      "WHERE aid=article_id ) "
      "WHERE group_id=? ;",-1,&stmt6,NULL);



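One way to dig into the difference, offered only as a diagnostic sketch 
rather than a diagnosis, is to run the slow UPDATE through EXPLAIN in the 
sqlite3 shell and compare the generated program with Version 1's, for example:

EXPLAIN UPDATE thrd SET parent=
  ( SELECT article.id FROM refs, article
    WHERE refs.article_id=thrd.aid
      AND reference=hash
      AND EXISTS (SELECT aid FROM thrd WHERE aid=article.id)
    ORDER BY refs.id DESC LIMIT 1 );

If the EXISTS probe turns out to scan thrd rather than use the implicit 
index created by the UNIQUE constraint on aid, that would point at the 
subquery as the culprit.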


Re: [sqlite] Issue with Mac OS X and database file metadata (file size/disk free)

2005-07-19 Thread Aaron Burghardt


On Jul 19, 2005, at 7:50 PM, Aaron Burghardt wrote:


On Jul 19, 2005, at 4:21 PM, Doug Currie wrote:



[...] If I have exceeded the amount of free space,
though, attempting to commit the transaction will fail.


That is also as expected. SQLite will cache modified pages in RAM, and
attempt to write them to disk at commit time.


OK, that's for the explanation.

So, to avoid running out of disk space, I should stop inserting  
records when the amount of free disk space falls below (PRAGMA  
cache_size;) * 1.5 KB? Is there a better guideline?


Sorry, I meant "thanks for the explanation", of course.

Also, I didn't look closely enough at the caching pragmas. The best  
threshold to avoid running out of disk space appears to be:


(PRAGMA cache_size;) * (PRAGMA page_size;)

Does anyone recommend anything different?
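
As a rough worked example, assuming the usual defaults of cache_size = 2000 
pages and page_size = 1024 bytes:

PRAGMA cache_size;   -- 2000 (pages)
PRAGMA page_size;    -- 1024 (bytes)
-- threshold = 2000 * 1024 bytes, i.e. about 2 MB of free space

so with default settings the danger zone starts at roughly 2 MB free, and a 
larger cache or page size raises it proportionally.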

Thanks for the help,

Aaron



Re: [sqlite] Issue with Mac OS X and database file metadata (file size/disk free)

2005-07-19 Thread Aaron Burghardt


On Jul 19, 2005, at 4:21 PM, Doug Currie wrote:





[...] If I have exceeded the amount of free space,
though, attempting to commit the transaction will fail.




That is also as expected. SQLite will cache modified pages in RAM, and
attempt to write them to disk at commit time.




OK, that's for the explanation.

So, to avoid running out of disk space, I should stop inserting  
records when the amount of free disk space falls below (PRAGMA  
cache_size;) * 1.5 KB? Is there a better guideline?


Regards,

Aaron





RE: [sqlite] Efficient record insertion techniques?

2005-07-19 Thread Griggs, Donald

Regarding inserting log records to sqlite database:


Re: OK then I guess I need to batch them to improve performance. Temp tables
best way to go?
 Yes, if, after leaving the database open, you still need more
performance improvement, then batching multiple inserts per transaction
should help a lot (a rough sketch follows below). You may decide you don't
need a temporary table; just commit and begin a new transaction whenever either:
  a) You (temporarily) run out of items to be written, or
  b) One second (for example) has elapsed.
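
A minimal sketch of what that batching could look like in plain SQL (the 
log table and its columns are invented here just for illustration):

BEGIN;
INSERT INTO log(ts, host, msg) VALUES('2005-07-19 14:00:01', 'lab01', 'first line');
INSERT INTO log(ts, host, msg) VALUES('2005-07-19 14:00:01', 'lab02', 'second line');
-- ... keep appending whatever has queued up ...
COMMIT;   -- commit when the queue drains or roughly a second has passed
BEGIN;    -- and immediately start the next batch

In application code you would normally keep one prepared INSERT and just 
bind, step, and reset it inside the loop, so the SQL is not re-parsed for 
every record.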



Re: My concern here is that leaving the DB open for long leaves the DB file
susceptible to corruption during power failures, spikes etc. Opening,
writing and closing reduces this window.

To my understanding, simply having your database on a spinning disk
opens it to spikes, etc.  Because sqlite is a journaled database,
partially-completed transactions are handled during the next database open.
To ensure against catastrophic disk failures (or software run amok that
writes in the middle of your files) you would want to use UPS's,
integrity-promoting levels of RAID drives, backups, etc.  Many of the
world's most critical applications run on systems which keep databases open
for long periods.


Re:  Could you clarify how the DB is unlocked if my app that obtained a
handle on it got killed before releasing the lock?

I won't attempt (and couldn't) to personally give you the nitty gritty
details, but:

   A) If by "lock" you mean the database lock, then remember that the
database is ONLY locked during your transactions -- not simply because the
database is open.   If your process was killed during the middle of a
transaction, then the journalling code comes into play when someone next
opens the database.   (You may want to read up on database journalling, and
on "ACID" -- atomic, consistent, isolated, and durable -- transactions.)   

B) If by "lock" you instead mean your unix file system's lock on the
database file, then unix should clear that lock when your process is killed
(isn't this right?).



Re:  When the DB got locked the ext2 partition usage was just 6%!
Someone more knowledgeable may be able to help you here.   Might your
defined allotment of disk space be only that 6% of the partition by chance?
What version of sqlite are you using, and under what environment?  (i.e.
what language, etc.)


Re: [sqlite] Issue with Mac OS X and database file metadata (file size/disk free)

2005-07-19 Thread Doug Currie
Tuesday, July 19, 2005, 12:19:48 PM, Aaron wrote:

> We are inserting records into SQLite databases, and in our testing
> have discovered that in some circumstances it is possible to be
> inserting records inside a transaction without the growth of the
> database journal file being reflected accurately by the file system.

The journal records the state of database pages before the
transaction. So, if you have an empty database, there will be nothing
in the journal no matter how many records you insert. In general, the
maximum size of the journal will be proportional to the size of the
database at the start of the transaction.
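
A small sketch of that behaviour, assuming an ordinary journaled database:

BEGIN;
CREATE TABLE t(x);
INSERT INTO t VALUES(1);   -- appends new pages, so there is little to save in the journal
COMMIT;

BEGIN;
UPDATE t SET x = 2;        -- an existing page is modified, so its old contents go into the journal
COMMIT;

In other words, the journal grows with the number of pre-existing pages a 
transaction touches, not with the number of rows inserted.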

> [...] If I have exceeded the amount of free space,
> though, attempting to commit the transaction will fail.

That is also as expected. SQLite will cache modified pages in RAM, and
attempt to write them to disk at commit time.

e




Re: [sqlite] Efficient record insertion techniques?

2005-07-19 Thread R S
Thanks, Donald. Replies inline.


On 7/19/05, Griggs, Donald <[EMAIL PROTECTED]> wrote:
> I'm no expert, but I hope the following are accurate.
> 
> Regarding: 1) Should I use a transaction even for a single record?
>   You already are, since your inserts automatically become one-line
> transactions.
OK then I guess I need to batch them to improve performance. Temp
tables best way to go?
 
> 
> Re: 2) I open the DB, insert the record and close the DB for every record
> inserted. What if the process crashed before I closed the DB. Would the DB
> get locked? If so how do I unlock it?
> 
> You only need to worry about crashing *during* the insert, and even then
> the sqlite journal capability should do glorious things when you next open
> the database, restoring it to a consistent state.
> 
> If you left the database open during your logging sessions then it should be
> much more efficient -- at least in terms of disk activity and CPU time.

My concern here is that leaving the DB open for long leaves the DB
file susceptible to corruption during power failures, spikes etc.
Opening, writing and closing reduces this window.

Could you clarify how the DB is unlocked if my app that obtained a
handle on it got killed before releasing the lock?


> 
> Re:  3) Six million and (not) counting:
>I don't really know, but are you perhaps on a FAT filesystem with a
> 2GByte filesize limit?
No. It's ext2 :-(
When the DB got locked the partition usage was just 6%!


> 
> 
> Donald Griggs
> 
> Opinions are not necessarily those of Misys Healthcare Systems nor its board
> of directors.
> 
> 
> 
> -Original Message-
> From: R S [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 19, 2005 2:23 PM
> To: sqlite-users@sqlite.org
>


RE: [sqlite] Efficient record insertion techniques?

2005-07-19 Thread Griggs, Donald
I'm no expert, but I hope the following are accurate.

Regarding: 1) Should I use a transaction even for a single record?
  You already are, since your inserts automatically become single-statement
transactions.


Re: 2) I open the DB, insert the record and close the DB for every record
inserted. What if the process crashed before I closed the DB. Would the DB
get locked? If so how do I unlock it?

You only need to worry about crashing *during* the insert, and even then
the sqlite journal capability should do glorious things when you next open
the database, restoring it to a consistent state.

If you left the database open during your logging sessions then it should be
much more efficient -- at least in terms of disk activity and CPU time.


Re:  3) Six million and (not) counting:
   I don't really know, but are you perhaps on a FAT filesystem with a
2GByte filesize limit?



Donald Griggs

Opinions are not necessarily those of Misys Healthcare Systems nor its board
of directors.



-Original Message-
From: R S [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 19, 2005 2:23 PM
To: sqlite-users@sqlite.org


[sqlite] Efficient record insertion techniques?

2005-07-19 Thread R S
Hi,
  I wrote a real-time logging app that inserts logs from various Unix
machines in our lab into an SQLite database.
The insertions are not batched as of now; maybe batching them is itself an optimization.
1) Should I use a transaction even for a single record?
2) I open the DB, insert the record and close the DB for every record
inserted. What if the process crashed before I closed the DB. Would
the DB get locked? If so how do I unlock it?
3) I got the app running over the last week and noticed it worked well
till it reached 6 million records. The DB then suddenly got locked w/o
the app crashing or anything. I then killed the app and could run
queries via the Command Line tool. Restarting the app locked the DB
again.
Any ideas? :-(
Thanks!


RE: [sqlite] Newbie Help Please

2005-07-19 Thread Wood, Lee
Dennis and Tim-
 
TY VM! VM! VM! VM!
 
I was about to use MySQL because I was too stupid to figure this out!
 
TY VM
 
Lee

-Original Message- 
From: Tim McDaniel [mailto:[EMAIL PROTECTED] 
Sent: Mon 7/18/2005 1:48 PM 
To: sqlite-users@sqlite.org 
Cc: 
Subject: RE: [sqlite] Newbie Help Please




> -Original Message-
> From: Dennis Cote [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 18, 2005 11:55 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Newbie Help Please
>
> Wood, Lee wrote:
>
> >I tried to do the quick-start example and I could not get it
> to work. It is not explicit enough for some like me. First
> off, the quickstart doesn't specify that you need to include
> the source (just the external interface header). But since it
> does not come with a *.lib I tried to include the source
> files and I get a bunch of unresolved external dependencies
> (can't find some function definitions). I even removed the
> tcl*.c files and still got the errors. I'm compiling in
> Visual C++ 7.1.
> >
> >Also, I tried to build with the *.dll but the *.dll file
> doesn't come with a header for it or static *.lib to link
> against. If I use the *.dll do I have to fn* every function I
> wish to use?
> >
> > 
> >
> Lee,
>
> You need to use the LIB command line utility that comes with
> VC++ to generate an sqlite3.lib file which will let you use
> the sqlite3.dll library. You need to feed the sqlite3.def
> file into the LIB utility and it will generate an sqlite3.lib
> which you can link with your project.
> The following command is used in the Makefile.
>
>   lib /machine:i386 /def:sqlite3.def
>
> HTH
> Dennis Cote
>

This might help...
I created a Wiki page with a VS.NET 2003 Solution to compile SQLite.
http://www.sqlite.org/cvstrac/wiki?p=VsNetSolution





[sqlite] Issue with Mac OS X and database file metadata (file size/disk free)

2005-07-19 Thread Aaron Burghardt

Hello,

We are inserting records into SQLite databases, and in our testing  
have discovered that in some circumstances it is possible to be  
inserting records inside a transaction without the growth of the  
database journal file being reflected accurately by the file  
system. In other words, these steps will reproduce the issue:


1. open an affected database on the command line: sqlite3  
myAffectedDatabase.db

2. begin a transaction: sqlite3> begin;
3. insert a record: insert into table values ();
4. in a separate Terminal window, check the disk free and size  
of the database and its journal:

df -k /path/to/myAffectedDatabase.db
ls -l /path/to/myAffectedDatabase*
5. repeat steps 3 and 4 until you pass the point where the  
volume should have run out of space.


I expect to see df and ls reflect changes to the filesystem and I  
expect SQLite to report a disk full error. Instead, df and ls output  
show changes during the first few inserts, then stop changing no  
matter how many records I insert, and I never receive a disk full  
error from SQLite. If I have exceeded the amount of free space,  
though, attempting to commit the transaction will fail.


Some additional relevant information:

I am working with Mac OS X 10.2 (Jaguar) and 10.4 (Tiger). I have  
reproduced the problem with SQLite 3.0.8 and 3.1.3.


The problem only manifests when using certain databases, but if the  
problem appears with a given database, then it will consistently do  
so for that database. This includes copying the database to another  
volume.


We detected the problem on a volume with little free space  
(100-200KB), but the symptom also appears with many GB free.


I have tried watching activity with fs_usage, but I don't see  
anything unusual, except for the fact that the database path isn't  
listed in the fs_usage output.


I have reproduced the problem on HFS+ and FAT16 filesystems.

I suspect this is an OS X-specific issue, but I'm hoping to get some  
insight from the list. Any help would be greatly appreciated.


Thanks,

-
Aaron Burghardt
Booz Allen Hamilton
13200 Woodland Park Drive
Suite 5035
Herndon, VA 20171
703-984-3112




Re: [sqlite] How to store ' ?

2005-07-19 Thread Edwin Knoppert

Gosh, sorry, just re-read the FAQ:
INSERT INTO xyz VALUES('5 O''clock');


- Original Message - 
From: "Edwin Knoppert" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, July 19, 2005 1:49 PM
Subject: [sqlite] How to store ' ?



If I read correctly, BLOB data can now handle chr(0) as well.
But another issue I solved with Access is the embedding of ' characters.
I'm using single quotes for field data during INSERT.
How can a field contain single quotes in its data?
Access allows you to use BASIC code and lets you insert strings built with Chr().



[sqlite] How to store ' ?

2005-07-19 Thread Edwin Knoppert
If I read correctly, BLOB data can now handle chr(0) as well.
But another issue I solved with Access is the embedding of ' characters.
I'm using single quotes for field data during INSERT.
How can a field contain single quotes in its data?
Access allows you to use BASIC code and lets you insert strings built with Chr().


Re: [sqlite] make fails on Solaris

2005-07-19 Thread Christian Smith
On Fri, 15 Jul 2005, Dan Kennedy wrote:

>See if your installation has "nawk" or "gawk". Then search through
>Makefile.in and change the "awk" invocations to "nawk" or "gawk"
>(three changes). It might work then.


Solaris /usr/bin/awk is horrible and not nawk compatible.


>
>
>--- H S <[EMAIL PROTECTED]> wrote:
>
>> You are right.  opcodes.h is corrupted.  It is 0 bytes.


Put /usr/xpg4/bin in your path before /usr/bin. The awk there is nawk
compatible and should work.


>>
>> I have all other files:  lemon exists in the build directory as well
>> as parse.c and parse.h
>>
>> Here is the messages from configure
>>
>> gums2-sun% ../sqlite-3.2.2/configure --prefix=/dssweb/sqlite
>> checking build system type... sparc-sun-solaris2.8
>> checking host system type... sparc-sun-solaris2.8
>> checking for gcc... gcc
>> checking for C compiler default output file name... a.out
>> checking whether the C compiler works... yes
>> checking whether we are cross compiling... no
>> checking for suffix of executables...
>> checking for suffix of object files... o
>> checking whether we are using the GNU C compiler... yes
>> checking whether gcc accepts -g... yes
>> checking for gcc option to accept ANSI C... none needed
>> checking for a sed that does not truncate output... /usr/bin/sed
>> checking for egrep... grep -E
>> checking for ld used by gcc...
>> /prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris2.\
>> 8/bin/ld
>> checking if the linker
>> (/prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris2.8/bin/l\
>> d) is GNU ld... yes
>> checking for 
>> /prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris2.8/bin/ld
>> option to\
>>  reload object files... -r
>> checking for BSD-compatible nm... /usr/ccs/bin/nm -p
>> checking whether ln -s works... yes
>> checking how to recognise dependent libraries... pass_all
>> checking how to run the C preprocessor... gcc -E
>> checking for ANSI C header files... yes
>> checking for sys/types.h... yes
>> checking for sys/stat.h... yes
>> checking for stdlib.h... yes
>> checking for string.h... yes
>> checking for memory.h... yes
>> checking for strings.h... yes
>> checking for inttypes.h... yes
>> checking for stdint.h... no
>> checking for unistd.h... yes
>> checking dlfcn.h usability... yes
>> checking dlfcn.h presence... yes
>> checking for dlfcn.h... yes
>> checking for g++... g++
>> checking whether we are using the GNU C++ compiler... yes
>> checking whether g++ accepts -g... yes
>> checking how to run the C++ preprocessor... g++ -E
>> checking for g77... no
>> checking for f77... no
>> checking for xlf... no
>> checking for frt... no
>> checking for pgf77... no
>> checking for fort77... no
>> checking for fl32... no
>> checking for af77... no
>> checking for f90... no
>> checking for xlf90... no
>> checking for pgf90... no
>> checking for epcf90... no
>> checking for f95... no
>> checking for fort... no
>> checking for xlf95... no
>> checking for ifc... no
>> checking for efc... no
>> checking for pgf95... no
>> checking for lf95... no
>> checking for gfortran... no
>> checking whether we are using the GNU Fortran 77 compiler... no
>> checking whether  accepts -g... no
>> checking the maximum length of command line arguments... 262144
>> checking command to parse /usr/ccs/bin/nm -p output from gcc object... ok
>> checking for objdir... .libs
>> checking for ar... ar
>> checking for ranlib... ranlib
>> checking for strip... strip
>> checking if gcc static flag  works... yes
>> checking if gcc supports -fno-rtti -fno-exceptions... yes
>> checking for gcc option to produce PIC... -fPIC
>> checking if gcc PIC flag -fPIC works... yes
>> checking if gcc supports -c -o file.o... yes
>> checking whether the gcc linker
>> (/prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris\
>> 2.8/bin/ld) supports shared libraries... yes
>> checking whether -lc should be explicitly linked in... yes
>> checking dynamic linker characteristics... solaris2.8 ld.so
>> checking how to hardcode library paths into programs... immediate
>> checking whether stripping libraries is possible... no
>> checking if libtool supports shared libraries... yes
>> checking whether to build shared libraries... yes
>> checking whether to build static libraries... yes
>> configure: creating libtool
>> appending configuration tag "CXX" to libtool
>> checking for ld used by g++...
>> /prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris2.\
>> 8/bin/ld
>> checking if the linker
>> (/prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris2.8/bin/l\
>> d) is GNU ld... yes
>> checking whether the g++ linker
>> (/prod/cygnus/gnupro-03r1/H-sparc-sun-solaris2.8/sparc-sun-solaris\
>> 2.8/bin/ld) supports shared libraries... yes
>> checking for g++ option to produce PIC... -fPIC
>> checking if g++ PIC flag -fPIC works... yes
>> checking if g++ supports -c -o file.o... yes
>> checking whether the g++ linker
>>