Poul L. Christiansen writes:
> Isn't it easier to reduce the table every day and make a daily vacuum which only
> lasts a few seconds?
I doubt that it would last just a few seconds. In my experience, VACUUM
on large tables can saturate your I/O subsystem, slowing down overall
performance for everyone.
Hi All.
Shouldn't Postgres block while vacuuming, and then
continue inserting where it left off? Or is the
time lag too long?
I am curious because I am going to build a similar app
soon, basically parsing and inserting log file
entries.
W
--- Stephan Szabo <[EMAIL PROTECTED]>
wrote:
>
Bernie Huang wrote:
>
> Hi,
>
> I bet people have asked this question several times, but oh well, please
> do answer again. Thanks. =)
>
> I have a product table and a log file.
>
> product_tb
> ---
> prod_id
> prod_name
> ...
>
> log_tb
> -
> log_id
> prod_id
> cust_id
> tr
Hi,
I bet people have asked this question several times, but oh well, please
do answer again. Thanks. =)
I have a product table and a log file.
product_tb
---
prod_id
prod_name
...
log_tb
-
log_id
prod_id
cust_id
transact_date
...
How do I fetch the latest log for each product?
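The question above is the classic "latest row per group" problem. A minimal sketch, using SQLite in place of PostgreSQL so it runs self-contained (the table contents are invented; the portable approach is a correlated subquery against the per-product maximum date):

```python
import sqlite3

# Hypothetical data for the log_tb schema from the post.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log_tb (
    log_id INTEGER PRIMARY KEY,
    prod_id INTEGER,
    cust_id INTEGER,
    transact_date TEXT
);
INSERT INTO log_tb VALUES
    (1, 10, 100, '2000-08-01'),
    (2, 10, 101, '2000-08-15'),
    (3, 20, 102, '2000-08-10');
""")

# Latest log row per product: keep rows whose transact_date equals the
# per-product maximum (portable SQL, no vendor extensions).
rows = conn.execute("""
    SELECT l.prod_id, l.log_id, l.transact_date
    FROM log_tb l
    WHERE l.transact_date = (SELECT MAX(transact_date)
                             FROM log_tb
                             WHERE prod_id = l.prod_id)
    ORDER BY l.prod_id
""").fetchall()
print(rows)  # [(10, 2, '2000-08-15'), (20, 3, '2000-08-10')]
```

In PostgreSQL specifically, `SELECT DISTINCT ON (prod_id) ... ORDER BY prod_id, transact_date DESC` expresses the same thing more compactly.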
On Thu, 17 Aug 2000, Joerg Hessdoerfer wrote:
> Hi!
>
> I have an application, where I have to insert data into a table at several
> rows per second, 24 hours a day, 365 days a year.
>
> After some period (a week, maybe a month) the data will be reduced to some
> degree and deleted from the t
On Thu, 17 Aug 2000, Andreas Tille wrote:
> On Wed, 16 Aug 2000, Stephan Szabo wrote on [EMAIL PROTECTED]:
> (sorry for the crossposting, just to tell the list that I now switched to
> the right one hopefully)
>
> > I think the thing is that most people don't have basic examples, they
> Perhaps
Hi!
At 15:50 17.08.00 +0100, you wrote:
> Isn't it easier to reduce the table every day and make a daily vacuum which only
> lasts a few seconds?
Well, sounds simple, but I still have some headaches here:
a) Full data must be available for a month or so (OK, that could be done by COPYing
the insert
I am trying to make an administration web page for PostgreSQL users. The main
purpose of this web page is to add, remove, and modify pgsql users. To do
this I am connecting to a database as the postgres user. The following is
some of the code being used.
$dataSource="dbi:Pg:dbname=alidb";
$dbh
Isn't it easier to reduce the table every day and make a daily vacuum which only
lasts a few seconds?
Joerg Hessdoerfer wrote:
> Hi!
>
> I have an application, where I have to insert data into a table at several
> rows per second, 24 hours a day, 365 days a year.
>
> After some period (a week, mayb
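The suggestion above (delete the old rows daily, then vacuum while the dead space is still small) can be sketched self-contained. SQLite stands in for PostgreSQL here so the example actually runs; the table name and cutoff date are invented:

```python
import os
import sqlite3
import tempfile

# On-disk database so we can observe the file shrinking.
path = os.path.join(tempfile.mkdtemp(), "log.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE log_tb (log_id INTEGER, transact_date TEXT)")
conn.executemany(
    "INSERT INTO log_tb VALUES (?, ?)",
    [(i, "2000-07-01" if i < 9000 else "2000-08-17") for i in range(10000)],
)
conn.commit()

# Daily reduction: delete everything older than the retention cutoff ...
conn.execute("DELETE FROM log_tb WHERE transact_date < '2000-08-01'")
conn.commit()

size_before = os.path.getsize(path)
conn.execute("VACUUM")  # ... then compact the file to reclaim the dead space
size_after = os.path.getsize(path)

remaining = conn.execute("SELECT COUNT(*) FROM log_tb").fetchone()[0]
print(remaining, size_after <= size_before)
```

The point of the daily schedule is exactly what the delete shows: a vacuum over one day's worth of dead rows is far cheaper than one over a month's worth.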
Hi!
I have an application, where I have to insert data into a table at several
rows per second, 24 hours a day, 365 days a year.
After some period (a week, maybe a month) the data will be reduced to some
degree and deleted from the table.
As far as I understood, I would have to use VACUUM to r
On Thu, Aug 17, 2000 at 01:55:19PM +0200, Maarten Boekhold wrote:
>
> Now that's a bit stupid. Why create this view when you can do the same
> thing with a table alias (am I calling this by the right name?).
Now now, let's not call it stupid: how about 'not optimal'?
>
> select m.miles, m.dat
On Wed, 16 Aug 2000, Stephan Szabo wrote on [EMAIL PROTECTED]:
(sorry for the crossposting, just to tell the list that I now switched to
the right one hopefully)
> I think the thing is that most people don't have basic examples, they
Perhaps someone knows one nice doc. I only found some hints fo
This solution isn't good when there are many tuples in the table; it's
slow...
anybody can help me ? :
string = "SELECT service, noeud, rubrique FROM table";
res = PQexec(conn, string.data());
if (!res || (status = PQresultStatus(res)) != PGRES_TUPLES_OK)
Use default now()
--
Jesus Aneiros Sosa
mailto:[EMAIL PROTECTED]
http://jagua.cfg.sld.cu/~aneiros
On Wed, 16 Aug 2000, Ang Sei Heng wrote:
> Hello to all the SQL gurus...
>
> I have this little table:
>
> test1 (
> id char(8) primary key,
> name char(20),
> create_date
> View definition: SELECT mileage.miles, mileage.date FROM mileage;
> detail=# select mileage.miles, mileage.date, sum(dup.miles) from
> mileage, dup
> where dup.date <= mileage.date
> group by mileage.date, mileage.miles
> order by mileage.date;
Now that's a bit stupid
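The quoted query computes a running total of miles by date through a self-join: for each row, it sums every row whose date is earlier or equal. A minimal reproduction, with SQLite standing in for PostgreSQL and invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mileage (miles INTEGER, date TEXT)")
conn.executemany("INSERT INTO mileage VALUES (?, ?)",
                 [(10, '2000-08-01'), (20, '2000-08-02'), (5, '2000-08-03')])

# Self-join running total: 'dup' is just the same table under an alias,
# which is the thread's point -- no view is needed for this.
rows = conn.execute("""
    SELECT m.miles, m.date, SUM(dup.miles) AS total
    FROM mileage m, mileage dup
    WHERE dup.date <= m.date
    GROUP BY m.date, m.miles
    ORDER BY m.date
""").fetchall()
print(rows)  # [(10, '2000-08-01', 10), (20, '2000-08-02', 30), (5, '2000-08-03', 35)]
```

Note the self-join is quadratic in the number of rows, which matches the thread's complaint that the approach gets slow on larger tables.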
Hi,
You can specify a default value for the create_date
column like:
create_date timestamp default 'now'
In the insert statement just omit that field and you
will get the default, as:
insert into test1 (id, name) values ('1', 'xxx');
Regards,
--
Guo Bin
--- Ang Sei Heng <[EMAIL PROTECTE
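Guo Bin's pattern (omit the defaulted column from the INSERT and let the database fill it in) can be sketched self-contained. SQLite is used here, where the default is spelled `CURRENT_TIMESTAMP` rather than `'now'`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test1 (
        id CHAR(8) PRIMARY KEY,
        name CHAR(20),
        create_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

# create_date is omitted, so the column default supplies it.
conn.execute("INSERT INTO test1 (id, name) VALUES ('1', 'xxx')")
row = conn.execute("SELECT id, name, create_date FROM test1").fetchone()
print(row)  # create_date is filled in by the database
```

As the other reply in this thread suggests, in PostgreSQL `DEFAULT now()` is the safer spelling, since the function is evaluated at insert time.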
In C, I work on a database (4 tables).
COPY FROM file;
SELECT, INSERT, UPDATE, DELETE for a result in the last
table.
COPY TO file;
The results are stored in the file.
-> It's slow!!!
Can you help me optimize this?
I
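Without the full message it is hard to say where the time goes, but the usual first optimization for many small statements is batching them into one transaction instead of committing row by row. A sketch of the idea, with SQLite standing in for PostgreSQL and an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, "x") for i in range(1000)]

# One transaction for the whole batch: the context manager commits once
# at the end instead of paying a commit per row.
with conn:
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

In PostgreSQL the same principle applies: wrap the per-row statements in BEGIN/COMMIT, and prefer COPY over individual INSERTs for bulk loads.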