Re: [PERFORM] Regarding COPY command from Postgres 8.2.0

2007-09-17 Thread soni de
We have upgraded postgres from 7.2.4 to 8.2.0.
We have a program which executes the COPY command, and our new database has
changed: some tables now have some extra columns.
Because of this, the COPY commands are failing.
So, we wanted an option to COPY the data without specifying column names.
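For reference, in 8.x the usual fix is the opposite of what is asked: name the target columns in the COPY command itself; columns left out of the list get their default (or NULL). A minimal sketch, assuming a hypothetical table with six columns of which the data file supplies five:

```sql
-- Data file has 5 fields; the table has 6 columns.
-- Name the 5 columns the file supplies; col6 gets its default/NULL.
COPY mytable (col1, col2, col3, col4, col5) FROM '/tmp/data.txt';
```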

Thanks,
Sonal


On 9/14/07, Ansgar -59cobalt- Wiechers [EMAIL PROTECTED] wrote:

 On 2007-09-14 soni de wrote:
 In Postgres 7.2.4, the COPY command works fine even if a table has 6
 fields but we are copying only 5 fields from the file.

 But in Postgres 8.2.0, if a table has 6 fields and we need to copy data
 for only 5 fields, then we need to specify the column names in the
 COPY command as well.

 Is there any configurable option so that, without specifying column
 names in the COPY command, we can copy records into the table as
 happened in Postgres 7.2.4?

 I don't know if it is possible, but even if it were I'd strongly
 recommend against it, as you'd be relying on the order the columns were
 created in. That's a rather bad idea IMHO. Why would you want to avoid
 giving the names of the columns in the first place?

 Regards
 Ansgar Wiechers
 --
 The Mac OS X kernel should never panic because, when it does, it
 seriously inconveniences the user.
 --http://developer.apple.com/technotes/tn2004/tn2118.html

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match



[PERFORM] Regarding COPY command from Postgres 8.2.0

2007-09-14 Thread soni de
Hello,


In Postgres 7.2.4, the COPY command works fine even if a table has 6 fields
but we are copying only 5 fields from the file.

But in Postgres 8.2.0, if a table has 6 fields and we need to copy data for
only 5 fields, then we need to specify the column names in the COPY command
as well.

Is there any configurable option so that, without specifying column names in
the COPY command, we can copy records into the table as happened in Postgres
7.2.4?



Please provide us with some help regarding the above query.



Thanks,

Soni


[PERFORM] Regarding Timezone

2007-06-19 Thread soni de

Hello,

We have installed postgres 8.2.0.

The default time zone which the postgres server is using is:

template1=# SHOW timezone;
 TimeZone
-----------
 ETC/GMT-5
(1 row)


But we want to set this timezone parameter to IST.
Our system timezone is also IST. We are using Solaris.

Please provide me some help regarding this.
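A sketch of how the zone can be set, assuming the zoneinfo name 'Asia/Calcutta' (the usual spelling for Indian Standard Time in the 8.2-era zone database; newer databases also accept 'Asia/Kolkata'):

```sql
-- Per session:
SET TIME ZONE 'Asia/Calcutta';

-- Server-wide: edit postgresql.conf and reload:
--   timezone = 'Asia/Calcutta'
```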



Thanks,

Soni


Re: [PERFORM] client crashes in PQfinish

2006-10-30 Thread soni de
Any response?

On 10/27/06, soni de [EMAIL PROTECTED] wrote:

Hello,

My client crashed in PQfinish. Below are the core file details:

=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4
  [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at 0xfded0c64
  [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at 0xfdec5ac0
  [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0, 0xfb0718dc), at 0xfdec5cc4
  [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at 0xfb071aec

The crash is in DLRemHead. It is not easily reproducible.

Can anybody please tell me whether the above problem is related to postgres or not?

Thanks,
Sonal


[PERFORM] client crashes in PQfinish

2006-10-27 Thread soni de
Hello,

My client crashed in PQfinish. Below are the core file details:

=>[1] DLRemHead(0x2b7780, 0xfb6bc008, 0x319670, 0xfb6bc008, 0x21c40, 0x3106f8), at 0xfded10e4
  [2] DLFreeList(0x2b7780, 0x0, 0x417b48, 0xfdec5aa4, 0x21c18, 0x0), at 0xfded0c64
  [3] freePGconn(0x371ea0, 0x0, 0x289f48, 0xfbfb61b8, 0x21c18, 0x0), at 0xfdec5ac0
  [4] PQfinish(0x371ea0, 0x289ce8, 0x289ce8, 0xf9a0b65c, 0x20fa0, 0xfb0718dc), at 0xfdec5cc4
  [5] abc(0x289ce0, 0xfafec000, 0xfb5b1d88, 0x0, 0xf9a0ba8c, 0x7), at 0xfb071aec

The crash is in DLRemHead. It is not easily reproducible.

Can anybody please tell me whether the above problem is related to postgres or not?

Thanks,
Sonal


Re: [PERFORM] Regarding Bitmap Scan

2006-10-27 Thread soni de
Thanks a lot for your help.

Thanks,
Soni
On 10/17/06, Dawid Kuroczko [EMAIL PROTECTED] wrote:
On 10/17/06, soni de 
[EMAIL PROTECTED] wrote: 
 


I didn't understand the Bitmap Scan and the sentence "indexes will be dynamically converted to bitmaps in memory". What is meant by a Bitmap Scan in a database?


Can anybody help us regarding above query?

Assume you have a table:

CREATE TABLE foo (
    some_key  int,
    some_time timestamp with time zone,
    some_data text
);

And two indexes:

CREATE INDEX foo_key  ON foo (some_key);
CREATE INDEX foo_time ON foo (some_time);

Now, you make a query:

SELECT * FROM foo
 WHERE some_key > 10 AND some_time > '2006-10-01'::timestamptz;

...originally the planner would choose only one index to use -- the one it
thinks is best. The 8.1 version does it differently: it will scan the
foo_key index and make a bitmap out of it, scan the foo_time index and make
another bitmap out of it, binary AND these bitmaps, and read the data from
the table using the combined bitmap. It could as well use OR if you used OR
in your query. Hence it can be faster, especially for large tables and
selective queries.

Regards,
Dawid
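To illustrate, on 8.1+ an EXPLAIN of such a two-condition query would show a plan of roughly this shape (a sketch with costs omitted, not real output):

```sql
EXPLAIN SELECT * FROM foo
 WHERE some_key > 10 AND some_time > '2006-10-01'::timestamptz;
-- Bitmap Heap Scan on foo
--   Recheck Cond: ...
--   ->  BitmapAnd
--         ->  Bitmap Index Scan on foo_key
--         ->  Bitmap Index Scan on foo_time
```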


Re: [PERFORM] Related to Inserting into the database from XML file

2006-08-27 Thread soni de

I am a little bit confused about whether to insert the XML file as-is, or to insert the data from the XML file into particular fields of the table.

I will decide depending upon the performance factor.

For storing the XML file as-is, will there be any performance cost compared to storing values in particular fields?

If there is no performance issue with the XML format: we have around 12 to 13 tables; if we store the XML data as-is in all tables, then is there any generic format for the select query?



Thanks
Soni
On 8/27/06, George Pavlov [EMAIL PROTECTED] wrote:
 On Fri, 2006-08-25 at 21:23 +0530, soni de wrote:
  Hello,
  I want to ask, Is there any way to insert records from XML
  file to the postgres database?

 Try the contrib/xml2 module.

Alas, that module will not help you much with the insertion of records.
It is more about querying XML that is stored within the database.

A basic question is whether you want to store XML in the DB or you just
have data that is in XML now and you want it loaded into a table
structure. The xml2 module will only be useful in the first case.

In either case the approach is to transform the data into a form that
PGSQL's COPY understands or into a bunch of INSERT statements (much less
performant). To that end you probably want to become familiar with XSLT,
unless the data is so simple that processing with regular tools (perl,
sed, awk) will suffice.

George
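To make the COPY route concrete: assuming the XSLT (or perl/sed/awk) step emits a tab-delimited text file, the load might look like this (table name and path are hypothetical; the 7.2-era COPY syntax shown here differs from the later 8.x WITH syntax):

```sql
-- Bulk load a tab-delimited file produced from the XML:
COPY mytable FROM '/tmp/data.txt' USING DELIMITERS '\t';

-- The (much slower) INSERT alternative, one statement per record:
INSERT INTO mytable (col1, col2) VALUES ('a', 'b');
```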


[PERFORM] Related to Inserting into the database from XML file

2006-08-25 Thread soni de
Hello,

I want to ask: is there any way to insert records from an XML file into the postgres database?

Please provide me some help regarding the above query.

The Postgres version we are using is 7.2.4.

Thanks,
Sonal


[PERFORM] Regarding ERROR: fmgr_info: function 2720768: cache lookup failed

2006-06-21 Thread soni de
Hello,

I am getting the following error while inserting a row into the "abc" table:
ERROR: fmgr_info: function 2720768: cache lookup failed


Table "abc" has one trigger called "abct"
Definition is as follows:

BEGIN;
LOCK TABLE abc IN SHARE ROW EXCLUSIVE MODE;

CREATE TRIGGER abct
    AFTER INSERT OR DELETE ON abc
    FOR EACH ROW EXECUTE PROCEDURE abc_function();

COMMIT;

abc_function() updates an entry in the "xyz" table for every insert and delete operation on table "abc".

The "xyz" table maintains the count of the total number of rows in table "abc".

Currently the "abc" table contains 190 rows, and the same count is available in table "xyz".
But now I am not able to insert any records into the "abc" table because of the above-mentioned error.
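This error usually indicates that the trigger still references the OID of a function that has since been dropped and recreated. A possible remedy, sketched under that assumption, is to recreate the trigger so it binds to the current abc_function():

```sql
BEGIN;
DROP TRIGGER abct ON abc;
CREATE TRIGGER abct
    AFTER INSERT OR DELETE ON abc
    FOR EACH ROW EXECUTE PROCEDURE abc_function();
COMMIT;
```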

Please provide me some help regarding this.

Thanks,
Soni


Re: [PERFORM] Regarding ALTER Command

2006-06-08 Thread soni de
Hello,

We are planning to use the latest postgres version. I have one query, as below:

One more thing I have to mention is that we are using 2 postmasters running on different machines, and both are accessing the same data directory (i.e. both machines use the same tables and databases).

In that case, if continuous INSERT operations on a table are going on from the first machine, and from the second machine we have to update the same table using the ALTER command:

Would this create any problem, given that the INSERT and ALTER operations are executed from two different postmasters but against the same data directory?

Would there be any data loss, or in this case also will ALTER block all new accesses to the table?

Thanks,
Soni
On 6/7/06, Jim C. Nasby [EMAIL PROTECTED] wrote:

On Wed, Jun 07, 2006 at 06:13:11PM +0530, soni de wrote:
 Hello,
 We have database on which continueous operations of INSERT, DELETE, UPDATE
 are going on, In the mean time irrespective of INSERT and UPDATE we want
 to ALTER some filelds from the table can we do that?
 Would the ALTER command on heavily loaded database create any perfomance
 problem?
 Is it feasible to do ALTER when lots of INSERT operations are going on?

The problem you'll run into is that ALTER will grab an exclusive table
lock. If *all* the transactions hitting the table are very short, this
shouldn't be too big of an issue; the ALTER will block all new accesses
to the table while it waits for all the pending ones to complete, but if
all the pending ones complete quickly it shouldn't be a big issue.
If one of the pending statements takes a long time though...

 Postgresql version we are using is -- PostgreSQL 7.2.4

You very badly need to upgrade. 7.2 is no longer supported, and there
have been over a half-dozen data loss bugs fixed since then.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com  work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf  cell: 512-569-9461


[PERFORM] Regarding pg_dump utility

2006-06-08 Thread soni de


Hello,

We have to take a backup of the database, and we know about the pg_dump utility of postgresql.
But may I know, is there any API for this pg_dump utility so that we can call it from a C program? Or is only script support possible for this?

I think script support is a bit risky, because if anything goes wrong while taking the backup using pg_dump, the user will not understand the cause of the failure.

If only script support is possible, then what should we prefer: perl or shell?

Please provide me some help regarding this.

Thanks,
Soni


[PERFORM] Regarding ALTER Command

2006-06-07 Thread soni de


Hello,

We have a database on which continuous operations of INSERT, DELETE, and UPDATE are going on. In the meantime, irrespective of the INSERTs and UPDATEs, we want to ALTER some fields of the table; can we do that?

Would the ALTER command on a heavily loaded database create any performance problem?
Is it feasible to do ALTER when lots of INSERT operations are going on?

The Postgresql version we are using is PostgreSQL 7.2.4.

Please provide me some help regarding this.

Thanks,
Soni


Re: [PERFORM] Takes too long to fetch the data from database

2006-05-08 Thread soni de
Hello,

I have tried the query SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50; and it works great.

EXPLAIN ANALYZE of the above query:

pdb=# EXPLAIN ANALYZE select * from wan order by stime desc limit 50;
NOTICE: QUERY PLAN:

Limit (cost=0.00..12.10 rows=50 width=95) (actual time=24.29..50.24 rows=50 loops=1)
  -> Index Scan Backward using wan_pkey on wan (cost=0.00..19983.31 rows=82586 width=95) (actual time=24.28..50.14 rows=51 loops=1)
Total runtime: 50.55 msec
EXPLAIN

Now I am facing another problem: if I use a WHERE clause in the select query, it takes too much time. Can you please help me with this?

The EXPLAIN ANALYZE outputs follow:

pdb=# EXPLAIN ANALYZE select count(1) from wan where kname = 'pluto';
NOTICE: QUERY PLAN:

Aggregate (cost=3507.84..3507.84 rows=1 width=0) (actual time=214647.53..214647.54 rows=1 loops=1)
  -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=0) (actual time=13.65..214599.43 rows=18306 loops=1)
Total runtime: 214647.87 msec
EXPLAIN

pdb=# EXPLAIN ANALYZE select * from wan where kname = 'pluto' order by stime limit 50;
NOTICE: QUERY PLAN:

Limit (cost=3515.32..3515.32 rows=50 width=95) (actual time=230492.69..230493.07 rows=50 loops=1)
  -> Sort (cost=3515.32..3515.32 rows=208 width=95) (actual time=230492.68..230493.00 rows=51 loops=1)
    -> Seq Scan on wan (cost=0.00..3507.32 rows=208 width=95) (actual time=0.44..229217.38 rows=18306 loops=1)
Total runtime: 230631.62 msec
EXPLAIN

pdb=# EXPLAIN ANALYZE SELECT * FROM wan WHERE stime >= 20123 AND stime <= 24000 ORDER BY stime limit 50;
NOTICE: QUERY PLAN:

Limit (cost=0.00..2519.70 rows=50 width=95) (actual time=7346.74..7351.42 rows=50 loops=1)
  -> Index Scan using wan_pkey on wan (cost=0.00..20809.17 rows=413 width=95) (actual time=7346.73..7351.32 rows=51 loops=1)
Total runtime: 7351.71 msec
EXPLAIN
For the above queries, if I use descending order then the queries take too much time.
I am not getting how to increase the speed of these queries.

The Postgresql version is 7.2.3.
Total no. of records: 5700300
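The slow plans above are sequential scans driven by the kname filter. A hedged suggestion (the index name is made up): a multicolumn index covering both the WHERE column and the ORDER BY column should let the planner avoid the full-table scan:

```sql
-- Matches WHERE kname = '...' ORDER BY stime:
CREATE INDEX wan_kname_stime ON wan (kname, stime);
```

Whether the sort step is also avoided depends on the (rather old) planner version in use; running EXPLAIN ANALYZE again after creating the index would show the difference.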
On 4/21/06, Dave Dutcher [EMAIL PROTECTED] wrote:



I've never used a cursor in Postgres, but I don't think it will help you a lot. In theory cursors make it easier to do paging, but your main problem is that getting the first page is slow. A cursor isn't going to be any faster at getting the first page than OFFSET/LIMIT is.

Did you try Bruno's suggestion of:

SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;

You should run an EXPLAIN ANALYZE on that query to see if it is using an index scan. Also, what version of Postgres are you using? You can run select version(); to check.





-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of soni de
Sent: Thursday, April 20, 2006 11:42 PM
To: Merlin Moncure
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Takes too long to fetch the data from database



I don't want to query exactly 81900 rows into a set. I just want to fetch 50 or 100 rows at a time, in decreasing order of stime (i.e. 50 or 100 rows starting from the last and moving toward the first).

If we fetch sequentially, there is also a problem in fetching all the records (select * from wan where kname='pluto' order by stime): it takes more than 4~5 minutes. I tried it on the same table having more than 326054 records.




On 4/20/06, Merlin Moncure [EMAIL PROTECTED] wrote:

 SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;

you need to try and solve the problem without using 'offset'. you could do:

BEGIN;
DECLARE crs CURSOR FOR SELECT * FROM wan ORDER BY stime;
FETCH ABSOLUTE 81900 in crs;
FETCH 49 in crs;
CLOSE crs;
COMMIT;

this may be a bit faster but will not solve the fundamental problem. the
more interesting question is why you want to query exactly 81900 rows into
a set. This type of thinking will always get you into trouble; absolute
positioning will not really work in a true sql sense. if you are browsing
a table sequentially, there are much better methods.

merlin




Re: [PERFORM] Takes too long to fetch the data from database

2006-04-19 Thread soni de


Please provide me some help regarding how I could use a cursor in the following case:

I want to fetch 50 records at a time, starting from the largest stime.


Total no. of records in the wan table: 82019

pdb=# \d wan
              Table "wan"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 stime  | bigint                | not null
 kname  | character varying(64) |
 eid    | smallint              |
 rtpe   | smallint              |
 taddr  | character varying(16) |
 ntime  | bigint                |
Primary key: wan_pkey

stime is the primary key.

SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;


pdb=# explain analyze SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;
NOTICE: QUERY PLAN:

Limit (cost=17995.15..17995.15 rows=50 width=95) (actual time=9842.92..9843.20 rows=50 loops=1)
  -> Sort (cost=17995.15..17995.15 rows=82016 width=95) (actual time=9364.56..9793.00 rows=81951 loops=1)
    -> Seq Scan on wan (cost=0.00..3281.16 rows=82016 width=95) (actual time=0.11..3906.29 rows=82019 loops=1)
Total runtime: 10010.76 msec

EXPLAIN
pdb=#


SELECT * FROM wan where kname='pluto' ORDER BY stime LIMIT 50 OFFSET 81900;

pdb=# explain analyze SELECT * from wan where kname='pluto' order by stime limit 50 offset 81900;
NOTICE: QUERY PLAN:

Limit (cost=3494.13..3494.13 rows=1 width=95) (actual time=9512.85..9512.85 rows=0 loops=1)
  -> Sort (cost=3494.13..3494.13 rows=206 width=95) (actual time=9330.74..9494.90 rows=27485 loops=1)
    -> Seq Scan on wan (cost=0.00..3486.20 rows=206 width=95) (actual time=0.28..4951.76 rows=27485 loops=1)
Total runtime: 9636.96 msec

EXPLAIN

SELECT * FROM wan where kname='pluto' and rtpe=20 ORDER BY stime LIMIT 50 OFFSET 81900;

pdb=# explain analyze SELECT * from wan where kname='pluto' and rtpe = 20 order by stime limit 50 offset 81900;
NOTICE: QUERY PLAN:

Limit (cost=3691.25..3691.25 rows=1 width=95) (actual time=7361.50..7361.50 rows=0 loops=1)
  -> Sort (cost=3691.25..3691.25 rows=1 width=95) (actual time=7361.50..7361.50 rows=0 loops=1)
    -> Seq Scan on wan (cost=0.00..3691.24 rows=1 width=95) (actual time=7361.30..7361.30 rows=0 loops=1)
Total runtime: 7361.71 msec

EXPLAIN
pdb=#

All the above queries take around 7~10 sec to fetch the last 50 records. I want to reduce this time, because the table is growing; when the table contains more than 1 GB of data, the above queries will take too much time.

I am not getting how to use a cursor to fetch records starting from the last record; in the above case the offset can be any number (less than the total no. of records).

I have used the following cursor, but it takes the same time as the query does:


BEGIN;
DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;

FETCH ALL in crs;
CLOSE crs;
COMMIT;
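An alternative to cursors and large OFFSETs, sketched here on the assumption that stime values are unique (stime is the primary key): read the newest rows via a backward index scan and page by remembering the last key seen, rather than by offset:

```sql
-- Newest 50 rows, served by a backward scan of wan_pkey (no full sort):
SELECT * FROM wan ORDER BY stime DESC LIMIT 50;

-- Next page: :last_stime is a placeholder for the smallest stime
-- returned by the previous page.
SELECT * FROM wan WHERE stime < :last_stime
ORDER BY stime DESC LIMIT 50;
```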


On 4/11/06, Merlin Moncure [EMAIL PROTECTED] wrote:

 pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE
 (( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND
 (( sdate >= '2004-07-21' ) AND ( sdate <= '2004-07-21' )))
 ORDER BY sdate, stime ;

this query would benefit from an index on bname, cno, pno, sdate:

create index lan_idx on lan(bname, cno, pno, sdate);

 pdb=# explain analyze SELECT ALL sdate, stime, rbts from lan WHERE
 (( bname = 'neptune' ) AND ( cno = 17 ) AND ( pno = 1 ) AND
 (( sdate >= '2004-07-21' ) AND ( sdate <= '2004-07-21' )))
 ORDER BY sdate, stime ;

ditto above. Generally, the closer the fields in the where clause are
matched by the index, the more it will speed up your query.

Merlin


Re: [PERFORM] Takes too long to fetch the data from database

2006-04-11 Thread soni de

I have flushed the database, so currently the lan table contains 665280 records,
but the table can grow to more than 1 GB of data, and in that case it takes more than 1 hour.

Below is the EXPLAIN ANALYZE output taken from the table having 665280 records:

pdb=# explain analyze SELECT sdate, stime, rbts from lan WHERE (( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND (( sdate >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ))) ORDER BY sdate, stime ;
NOTICE: QUERY PLAN:

Sort (cost=17.13..17.13 rows=1 width=16) (actual time=619140.18..619140.29 rows=288 loops=1)
  -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16) (actual time=7564.44..619121.61 rows=288 loops=1)
Total runtime: 619140.76 msec

EXPLAIN

bsdb=# explain analyze SELECT DISTINCT sdate, stime, rbts from lan WHERE (( bname = 'pluto' ) AND ( cno = 17 ) AND ( pno = 1 ) AND (( sdate >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ))) ORDER BY sdate, stime ;
NOTICE: QUERY PLAN:

Unique (cost=17.13..17.14 rows=1 width=16) (actual time=610546.66..610564.31 rows=288 loops=1)
  -> Sort (cost=17.13..17.13 rows=1 width=16) (actual time=610546.65..610546.75 rows=288 loops=1)
    -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16) (actual time=7524.47..610533.50 rows=288 loops=1)
Total runtime: 610565.51 msec

EXPLAIN

pdb=# explain analyze SELECT ALL sdate, stime, rbts from lan WHERE (( bname = 'neptune' ) AND ( cno = 17 ) AND ( pno = 1 ) AND (( sdate >= '2004-07-21' ) AND ( sdate <= '2004-07-21' ))) ORDER BY sdate, stime ;
NOTICE: QUERY PLAN:

Sort (cost=17.13..17.13 rows=1 width=16) (actual time=1260756.66..1260756.76 rows=288 loops=1)
  -> Index Scan using lan_pkey on lan (cost=0.00..17.12 rows=1 width=16) (actual time=7725.97..1260752.47 rows=288 loops=1)
Total runtime: 1260757.09 msec


pdb=# \d lan
               Table "lan"
  Column  |         Type          | Modifiers
----------+-----------------------+-----------
 bname    | character varying(64) | not null
 sdate    | date                  | not null
 stime    | integer               | not null
 cno      | smallint              | not null
 pno      | smallint              | not null
 rbts     | bigint                |
 tbts     | bigint                |
 u_inpkt  | bigint                |
 u_outpkt | bigint                |
 m_inpkt  | bigint                |
 m_outpkt | bigint                |
 b_inpkt  | bigint                |
 b_outpkt | bigint                |
Primary key: lan_pkey
Check constraints: lan_stime ((stime >= 0) AND (stime < 86400))
On 4/10/06, Joshua D. Drake [EMAIL PROTECTED] wrote:

Rajesh Kumar Mallah wrote:
 what is the query? use LIMIT or a restricting where clause.

You could also use a cursor.

Joshua D. Drake

 regds
 mallah.

 On 4/10/06, soni de [EMAIL PROTECTED] wrote:
  Hello,
  I have difficulty in fetching the records from the database.
  Database table contains more than 1 GB data.
  For fetching the records it is taking more the 1 hour and that's why
  it is slowing down the performance.
  please provide some help regarding improving the performance and how
  do I run query so that records will be fetched in a less time.

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/


[PERFORM] Takes too long to fetch the data from database

2006-04-10 Thread soni de
Hello,

I have difficulty in fetching records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than 1 hour, and that's why it is slowing down the performance.
Please provide some help regarding improving the performance: how do I run the query so that records are fetched in less time?