Re: [GENERAL] MySQL -> PostgreSQL conversion issue

2011-08-20 Thread DM
Thank you for your reply, and sorry for the late response; it really
helped me.

Thanks
Deepak

On Fri, Aug 19, 2011 at 11:39 AM, Alban Hertroys <
dal...@solfertje.student.utwente.nl> wrote:

> On 19 Aug 2011, at 14:50, DM wrote:
>
> > Hi All,
> >
> > How do I insert MySQL (datetime interval) data into the Postgres interval
> > datatype?
> >
> > mysql
> >  | test_interval   | datetime | YES  | | 1970-01-02 00:00:00 |
>  |
> >
> > psql
> >  test_interval   | interval   |
> >
> >
> > any solution for this?
>
> I think you want something like this?:
>
> development=> select '1970-01-02 00:00:00'::timestamp without time zone -
> '1970-01-01 00:00:00'::timestamp without time zone;
>  ?column?
> ----------
>  1 day
> (1 row)
>
> Midnight 1970-01-01 being the baseline for their calculations is a bit of a
> guess, but that seems to make sense with the example you gave.
> You probably don't want postgres to be smart with your client's time zone
> settings, hence the "without time zone".
>
> Alban Hertroys
>
> --
> Screwing up is an excellent way to attach something to the ceiling.
>
>
>
>
>
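Applied at load time, the same idea might look like this (just a sketch; the
staging table and column names below are made up, not from the thread):

insert into target_table (test_interval)
select test_interval_raw::timestamp - timestamp '1970-01-01 00:00:00'
from mysql_staging;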


[GENERAL] MySQL -> PostgreSQL conversion issue

2011-08-19 Thread DM
Hi All,

How do I insert MySQL (datetime interval) data into the Postgres interval datatype?

mysql
 | test_interval   | datetime | YES  | | 1970-01-02 00:00:00 |
 |

psql
 test_interval   | interval   |


any solution for this?

thanks
Deepak


[GENERAL] SHMMAX and SHMALL question

2011-01-21 Thread DM
First of all, sorry for posting a Linux kernel question in pgsql-general, but I
am trying to figure out what values to set for a PostgreSQL DB server.

RAM = 16GB; what value should I set for shmall?

If I set shmall to 4294967296 (4GB) and the current PAGE_SIZE is 4096, do I need
to set my shmmax = 4294967296*4096 = 17592186044416?


cat /proc/sys/kernel/shmall
4294967296

getconf PAGE_SIZE
4096

Current Shmmax -
==> cat /proc/sys/kernel/shmmax
68719476736

Do I need to increase my shmmax to 17592186044416?

Also, what value should I set for shmall?

What happens if I set shmmax lower than the expected size?


Thanks
Deepak
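For reference, my understanding (a sketch, not an authoritative answer): shmmax
is the maximum size of a single shared memory segment in bytes, while shmall is
the system-wide total expressed in pages, so the two are related by PAGE_SIZE
rather than multiplied together. For roughly 4GB of shared memory with a
4096-byte page size, that would look something like:

# /etc/sysctl.conf (example values only)
kernel.shmmax = 4294967296     # 4 GB, in bytes
kernel.shmall = 1048576        # 4294967296 / 4096 = 1048576 pages

sysctl -p    # reload the settings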


Re: [GENERAL] How can I find the schema that a table belongs to?

2011-01-19 Thread DM
Or you could use the query below:


* QUERY **
SELECT n.nspname as "Schema",
  c.relname as "Name",
  CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN
'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",
  pg_catalog.pg_get_userbyid(c.relowner) as "Owner"
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r','v','S','')
  AND n.nspname <> 'pg_catalog'
  AND n.nspname <> 'information_schema'
  AND n.nspname !~ '^pg_toast'
 AND pg_catalog.pg_table_is_visible(c.oid)
 AND c.relname like '%my_table_name%'
ORDER BY 1,2;
**



On Wed, Jan 19, 2011 at 1:54 PM, DM  wrote:

> If you're looking for views, then you could use pg_views ==> select * from
> pg_views limit 1;
>
> Here is one more: there was a recent post with the same request as yours. Please see
> the email below; hope this helps.
>
> Tom Lane =>
> Thomas Kellerer  writes:
> > Jerry LeVan, 19.01.2011 17:35:
>
> >> So I guess the question is:
> >> Given a bare table name, how can I recover the schema
> >> qualified name with whatever the current search path happens
> >> to be?
>
> > SELECT table_schema
> > FROM information_schema.tables
> > WHERE table_name = 'your_table'
> > ;
>
> That's not going to work, at least not in the interesting case where you
> have more than one candidate table --- that SELECT will list all of 'em.
>
> In most cases the answer to this type of problem is "use regclass",
> but regclass doesn't quite solve Jerry's problem because it won't
> schema-qualify the name if the table is visible in the search path.
> The best solution I can think of is
>
> select nspname from pg_namespace n join pg_class c on n.oid =
> c.relnamespace
>  where c.oid = 'my_table_name'::regclass;
>
> which works but seems a bit brute-force.
>
>regards, tom lane
> - Hide quoted text -
>
> ==>
>
>
> On Wed, Jan 19, 2011 at 12:58 PM, Jerry LeVan  wrote:
>
>>
>> On Jan 19, 2011, at 3:12 PM, DM wrote:
>>
>> > One way to find out the schema name is like below:
>> >
>> > select * from pg_tables where tablename like '%xyz%';
>> >  schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers
>> > ------------+-----------+------------+------------+------------+----------+-------------
>> > (0 rows)
>> >
>> > ~Deepak
>> >
>>
>> That does not work if the user entered a 'view' in the sql box.
>>
>> Also a table/view could be in several schemas...
>>
>> Jerry
>>
>
>


Re: [GENERAL] How can I find the schema that a table belongs to?

2011-01-19 Thread DM
If you're looking for views, then you could use pg_views ==> select * from
pg_views limit 1;

Here is one more: there was a recent post with the same request as yours. Please see
the email below; hope this helps.

Tom Lane =>
Thomas Kellerer  writes:
> Jerry LeVan, 19.01.2011 17:35:
>> So I guess the question is:
>> Given a bare table name, how can I recover the schema
>> qualified name with whatever the current search path happens
>> to be?

> SELECT table_schema
> FROM information_schema.tables
> WHERE table_name = 'your_table'
> ;

That's not going to work, at least not in the interesting case where you
have more than one candidate table --- that SELECT will list all of 'em.

In most cases the answer to this type of problem is "use regclass",
but regclass doesn't quite solve Jerry's problem because it won't
schema-qualify the name if the table is visible in the search path.
The best solution I can think of is

select nspname from pg_namespace n join pg_class c on n.oid = c.relnamespace
 where c.oid = 'my_table_name'::regclass;

which works but seems a bit brute-force.

   regards, tom lane
- Hide quoted text -

======>

On Wed, Jan 19, 2011 at 12:58 PM, Jerry LeVan  wrote:

>
> On Jan 19, 2011, at 3:12 PM, DM wrote:
>
> > One way to find out the schema name is like below:
> >
> > select * from pg_tables where tablename like '%xyz%';
> >  schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers
> > ------------+-----------+------------+------------+------------+----------+-------------
> > (0 rows)
> >
> > ~Deepak
> >
>
> That does not work if the user entered a 'view' in the sql box.
>
> Also a table/view could be in several schemas...
>
> Jerry
>


Re: [GENERAL] How can I find the schema that a table belongs to?

2011-01-19 Thread DM
One way to find out the schema name is like below:

select * from pg_tables where tablename like '%xyz%';
 schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers
------------+-----------+------------+------------+------------+----------+-------------
(0 rows)

~Deepak


On Wed, Jan 19, 2011 at 8:26 AM, Jerry LeVan  wrote:

> Hi,
>
> I am trying to tidy up my perl script that runs as a cgi and allows
> remote users (aka me) to interact with my Pg database.
>
> I primarily want to use this tool as a way for my iPad to
> browse my database. It seems to be working quite well but
> one certainly does not want to 'select * from ginormous_table'.
>
> I am trying to build a 'describe' function. ie if the user
> types "describe tablename" in the sql box I want to display
> some interesting attributes for that table, mainly names and
> data types for each of the columns.
>
> I have done this in other programs, but I forced the user
> to specify a schema-qualified name (and mimicked what psql -E
> did).
>
> So I guess the question is:
>   Given a bare table name, how can I recover the schema
>   qualified name with whatever the current search path happens
>   to be?
>
>   This task has to be done using  simple sql from the perl dbi.
>
> Thanks,
>
> Jerry
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Pg_upgrade question

2010-11-30 Thread DM
We are planning to upgrade to 9.0.1 from version 8.4/8.3. On my production
server, there isn't much space left to do a pg_upgrade (copy).

Can I execute pg_upgrade from the new box by pointing to the data directory and
binaries on the old box?


Thanks
Deepak
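For reference, a typical invocation looks roughly like this (a sketch only;
paths are placeholders, and as far as I know pg_upgrade expects both clusters'
binaries and data directories to be reachable as local paths on the box it
runs on):

pg_upgrade -b /usr/local/pgsql/8.4/bin -B /usr/local/pgsql/9.0/bin \
           -d /old/data -D /new/data \
           -k    # link mode: hard-links files instead of copying, saving space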


Re: [GENERAL] Postgres 9.01 and WAL files issue

2010-11-24 Thread DM
Hi Matthew,

Thanks for your reply.

It's a test machine; I am testing streaming replication with archiving.

I figured out the problem: /mnt/nfs didn't have enough space.

Thanks for looking into this issue.

Thanks
Deepak


On Wed, Nov 24, 2010 at 10:36 AM, Matthew Walden wrote:

> Deepak,
>
> Does your backup script exclude the pg_xlog directory?  There is no point
> backing up the WAL files but you will want to make sure the archived WAL
> files are added to the backup set afterwards.
>
> How many files are in pg_xlog at the point where you stop the backup?  It
> may actually be that it takes a while to copy them all to the NFS if your
> database has been active overnight.  I would hope that it is a low traffic
> system if you are leaving it in backup mode all night though.
>
> Ideally you would build the stop and start SQL into a backup script which
> handles the rsync and then also configure the script to add the archived WAL
> files to the backup set afterwards.
>
> Another thing I would try is to run your exact archive command manually
> (using one of the completed WAL files as an example) to the NFS.  See what
> feedback you get as to why it isn't returning either a positive or negative
> return.
>
>
> On Wed, Nov 24, 2010 at 5:52 PM, DM  wrote:
>
>> Hi All,
>>
>> pg_stop_backup is not stopping...
>>
>> Postgresql version 9.01
>> OS: Centos
>> Postgresql.conf ==> default configuration did not change anything
>>
>> Steps
>>
>> 1. Initiated psql -c "SELECT pg_start_backup('label', true)"
>> 2. Started RSYNC job to sync slave box
>>  rsync -a /var/lib/pgsql/data/ dbpg9-test-02:/var/lib/pgsql/data
>>
>> Left overnight to rsync - it was only around 10 GB of data to be rsynced,
>> but I left it overnight...
>>
>> 3. psql -c "SELECT pg_stop_backup()"
>>
>>
>> *Getting error message as below*
>>
>> NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to
>> be archived
>> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
>> archived (60 seconds elapsed)
>> HINT:  Check that your archive_command is executing properly.
>> pg_stop_backup can be cancelled safely, but the database backup will not be
>> usable without all the WAL segments.
>> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
>> archived (120 seconds elapsed)
>> HINT:  Check that your archive_command is executing properly.
>> pg_stop_backup can be cancelled safely, but the database backup will not be
>> usable without all the WAL segments.
>>
>> WARNING:  pg_stop_backup still waiting for all required WAL segments to be
>> archived (240 seconds elapsed)
>> HINT:  Check that your archive_command is executing properly.
>> pg_stop_backup can be cancelled safely, but the database backup will not be
>> usable without all the WAL segments.
>>
>> I waited quite a long time and it was not stopping; I had to press Ctrl+C to
>> cancel it.
>>
>>
>> 
>>
>> I tried to start backup and stop backup again, same problem.
>>
>> Here is my Archive Command:
>> archive_command = 'cp -i %p /mnt/nfs/primary/%f < /dev/null'
>>
>> Directory /mnt/nfs/primary ==> is an NFS mount.
>>
>>
>> Any solution for how to fix this?
>>
>>
>> Thanks
>> Deepak Murthy
>>
>>
>>
>>
>>
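As Matthew suggests, testing the archive command by hand usually narrows this
down quickly; a minimal sketch (the WAL file name is just an example):

# run as the postgres user, from the data directory
cp -i pg_xlog/000000010000000000000042 /mnt/nfs/primary/000000010000000000000042 < /dev/null
echo $?                    # non-zero means archiving would fail the same way
df -h /mnt/nfs/primary     # check for a full or unmounted NFS share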


Re: [GENERAL] diagram tools?

2010-11-24 Thread DM
There are many of them; I use SchemaSpy, which is Java-based and easy to generate diagrams with.

http://schemaspy.sourceforge.net/


Thanks
Deepak

On Wed, Nov 24, 2010 at 8:08 AM, Dan Armbrust <
daniel.armbrust.l...@gmail.com> wrote:

> Can anyone recommend a PostgreSQL compatible free tool that I can use
> to generate some schema diagrams of an existing database?
>
> Thanks
>
> Dan
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Postgres 9.01 and WAL files issue

2010-11-24 Thread DM
Hi All,

pg_stop_backup is not stopping...

Postgresql version 9.01
OS: Centos
Postgresql.conf ==> default configuration did not change anything

Steps

1. Initiated psql -c "SELECT pg_start_backup('label', true)"
2. Started RSYNC job to sync slave box
 rsync -a /var/lib/pgsql/data/ dbpg9-test-02:/var/lib/pgsql/data

Left overnight to rsync - it was only around 10 GB of data to be rsynced,
but I left it overnight...

3. psql -c "SELECT pg_stop_backup()"


*Getting error message as below*

NOTICE:  pg_stop_backup cleanup done, waiting for required WAL segments to
be archived
WARNING:  pg_stop_backup still waiting for all required WAL segments to be
archived (60 seconds elapsed)
HINT:  Check that your archive_command is executing properly.
pg_stop_backup can be cancelled safely, but the database backup will not be
usable without all the WAL segments.
WARNING:  pg_stop_backup still waiting for all required WAL segments to be
archived (120 seconds elapsed)
HINT:  Check that your archive_command is executing properly.
pg_stop_backup can be cancelled safely, but the database backup will not be
usable without all the WAL segments.

WARNING:  pg_stop_backup still waiting for all required WAL segments to be
archived (240 seconds elapsed)
HINT:  Check that your archive_command is executing properly.
pg_stop_backup can be cancelled safely, but the database backup will not be
usable without all the WAL segments.

I waited quite a long time and it was not stopping; I had to press Ctrl+C to
cancel it.




I tried to start backup and stop backup again, same problem.

Here is my Archive Command:
archive_command = 'cp -i %p /mnt/nfs/primary/%f < /dev/null'

Directory /mnt/nfs/primary ==> is an NFS mount.


Any solution for how to fix this?


Thanks
Deepak Murthy


Re: [GENERAL] Query precision issue

2010-10-25 Thread DM
Hello Sairam,

What is the type of the column, and what version of Postgres are you using?

I didn't have any problem retrieving data either way... check out the test below:
=================================================================

test=# create temp table test (col1 float);
CREATE TABLE
test=# insert into test values(-107.689878);
INSERT 0 1
test=# select * from test where col1=-107.689878;
    col1
-------------
 -107.689878
(1 row)

test=# select * from test where col1=-107.689878000;
    col1
-------------
 -107.689878
(1 row)

test=# select * from version();
                                                version
--------------------------------------------------------------------------------------------------
 PostgreSQL 8.4.1 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42), 64-bit
(1 row)




Thanks
Deepak

On Mon, Oct 25, 2010 at 10:11 AM, Sairam Krishnamurthy
wrote:

>  All,
>
> I have simple query like 'select * from table1 where
> column1=-107.689878'. This returns an empty set. But there is data
> corresponding to this value of column.
>
> When I looked more into it, it seems like a precision issue. The value for
> column1 is -107.689878.
>
> More interesting is that the following query fetches the row,
>
> 'select * from table1 where column1=-107.689878000'
>
> Note that there are only three trailing zeros in the second query while
> there were four in the first.
>
> Can somebody help me to find out the problem? I can very well truncate the
> trailing zeros when querying, but I am interested in finding why an
> additional trailing zero returns an empty set.
>
>  --
> Thanks,
> Sairam Krishnamurthy  
> +1 612 859 8161
>
>
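If exact decimal matching matters, a numeric column sidesteps binary float
rounding entirely; a minimal sketch (hypothetical table, not from the thread):

test=# create temp table t_exact (col1 numeric);
test=# insert into t_exact values (-107.689878);
test=# select * from t_exact where col1 = -107.689878000;  -- trailing zeros don't matter for numeric
    col1
-------------
 -107.689878
(1 row)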


Re: [GENERAL] Question on Explain : Index Scan

2010-10-21 Thread DM
*Why the difference in query plan and total runtime? Both tables
have the same btree index.*


test=# explain analyze select * from test_seqindex1 where sid='AA023';
                                                             QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using test_seqindex1_pkey on test_seqindex1  (cost=0.00..8.27 rows=1 width=28) (actual time=0.017..0.018 rows=1 loops=1)
   Index Cond: ((sid)::text = 'AA023'::text)
 Total runtime: 0.035 ms
(3 rows)

test=# explain analyze select * from test_seqindex2 where sid='AA023';
                                                            QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test_seqindex2  (cost=4.95..275.53 rows=73 width=30) (actual time=57.833..71.577 rows=2 loops=1)
   Recheck Cond: ((sid)::text = 'AA023'::text)
   ->  Bitmap Index Scan on idx_test_seqindex2_sid  (cost=0.00..4.93 rows=73 width=0) (actual time=34.374..34.374 rows=2 loops=1)
         Index Cond: ((sid)::text = 'AA023'::text)
 Total runtime: 71.599 ms
(5 rows)


test=# \d test_seqindex1
        Table "public.test_seqindex1"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 sid    | character varying(13) | not null
 name   | character varying(80) |
Indexes:
    "test_seqindex1_pkey" PRIMARY KEY, btree (sid)

test=# \d test_seqindex2
        Table "public.test_seqindex2"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 eid    | integer               | not null
 sid    | character varying(13) |
 ename  | character varying(80) |
Indexes:
    "test_seqindex2_pkey" PRIMARY KEY, btree (eid)
    "idx_test_seqindex2_sid" btree (sid)


===


On Thu, Oct 21, 2010 at 11:09 AM, DM  wrote:

> perfecto, thank you for the explanation.
>
> - Deepak
>
>
> On Thu, Oct 21, 2010 at 3:20 AM, Mathieu De Zutter 
> wrote:
>
>> On Thu, Oct 21, 2010 at 3:47 AM, DM  wrote:
>> > I was hoping the optimizer would do a join using index scan.
>> >
>> > Could some one please explain me why its not doing an index scan rather
>> than
>> > sequential scan .
>>
>>
>> An index scan would probably be slower here because you're asking for a
>> lot of rows. A lot of rows means a lot of I/O, and an index scan is
>> more I/O intensive (since it has to read the index too). If you limit
>> the result (by being more selective in your where clause, just like
>> you do in the first two queries), postgres will most likely switch to
>> index scan.
>>
>> You can see for yourself if index-scan would be faster in your case by
>> running the following command before "explain (analyze)":
>>
>> set enable_seqscan = off;
>>
>> BTW, try to use explain analyze instead of explain, that way you'll
>> see the actual timings too instead of just the planner estimates.
>>
>> Kind regards,
>> Mathieu
>>
>
>


Re: [GENERAL] Question on Explain : Index Scan

2010-10-21 Thread DM
perfecto, thank you for the explanation.

- Deepak

On Thu, Oct 21, 2010 at 3:20 AM, Mathieu De Zutter wrote:

> On Thu, Oct 21, 2010 at 3:47 AM, DM  wrote:
> > I was hoping the optimizer would do a join using an index scan.
> >
> > Could someone please explain why it's not doing an index scan rather than
> > a sequential scan?
>
>
> An index scan would probably be slower here because you're asking for a
> lot of rows. A lot of rows means a lot of I/O, and an index scan is
> more I/O intensive (since it has to read the index too). If you limit
> the result (by being more selective in your where clause, just like
> you do in the first two queries), postgres will most likely switch to
> index scan.
>
> You can see for yourself if index-scan would be faster in your case by
> running the following command before "explain (analyze)":
>
> set enable_seqscan = off;
>
> BTW, try to use explain analyze instead of explain, that way you'll
> see the actual timings too instead of just the planner estimates.
>
> Kind regards,
> Mathieu
>
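Following Mathieu's suggestion, the comparison can be done in one session; a
quick sketch using the table names from the related post:

set enable_seqscan = off;
explain analyze select * from test_seqindex1 t1, test_seqindex2 t2 where t1.sid = t2.sid;
reset enable_seqscan;   -- back to the planner's default for the rest of the session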


Re: [GENERAL] Composite Index question

2010-10-21 Thread DM
Thank you for all your suggestions and answers.

- Deepak

On Wed, Oct 20, 2010 at 7:41 PM, mark  wrote:

>
> On 10/20/2010 05:43 PM, DM wrote:
> > Composite Index question:
> >
> > I have composite index on 3 columns on a table, by mistake the composite
> > index was created twice on the table.
> >
> > Will there any performance issues on this table because of the 2 same
> > composite indexes?
> >
> > Thanks
> > Deepak
>
> Are the indices of the same type (e.g. both BTree) but with different index
> names?
>
> Does the second composite index have its columns in the same order as the first? If
> not in the same column order, you might be seeing some benefit for some
> queries, but this is dependent on the queries' filter clauses.  If so, you
> might consider augmenting one or both of the indices to better suit your
> queries.
>
> From my experience, it appears to degrade performance because two indices
> have to be maintained (not to mention that it also appears to be a waste of disk
> space). I am hopeful someone will correct me if I am wrong.
>
> Ours were from people explicitly creating indexes on columns that had
> indices implicitly created on them when the table was created.  Cleanup was
> pretty easy and painless on our production systems.
>
> It's pretty easy to check for exact duplicates all over a given database as
> well as how often each index is being used.
> Check some of the queries here:
> http://www.xzilla.net/blog/2008/Jul/Index-pruning-techniques.html
>
> ..:Mark
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Question on Explain : Index Scan

2010-10-20 Thread DM
Question on Index scan:
--->
test=# \d test_seqindex1
        Table "public.test_seqindex1"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 sid    | character varying(13) | not null
 name   | character varying(80) |
Indexes:
    "test_seqindex1_pkey" PRIMARY KEY, btree (sid)

test=# explain select * from test_seqindex1 where sid='AA023';
                                         QUERY PLAN
--------------------------------------------------------------------------------------------
 Index Scan using test_seqindex1_pkey on test_seqindex1  (cost=0.00..8.27 rows=1 width=28)
   Index Cond: ((sid)::text = 'AA023'::text)


test=# \d test_seqindex2
        Table "public.test_seqindex2"
 Column |         Type          | Modifiers
--------+-----------------------+-----------
 eid    | integer               | not null
 sid    | character varying(13) |
 ename  | character varying(80) |
Indexes:
    "test_seqindex2_pkey" PRIMARY KEY, btree (eid)
    "idx_test_seqindex2_sid" btree (sid)

test=# explain select * from test_seqindex2 where sid='AA023';
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Bitmap Heap Scan on test_seqindex2  (cost=4.95..275.53 rows=73 width=30)
   Recheck Cond: ((sid)::text = 'AA023'::text)
   ->  Bitmap Index Scan on idx_test_seqindex2_sid  (cost=0.00..4.93 rows=73 width=0)
         Index Cond: ((sid)::text = 'AA023'::text)



test=# explain select * from test_seqindex1 t1, test_seqindex2 t2 where t1.sid=t2.sid;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Hash Join  (cost=1231.55..46386.19 rows=920544 width=58)
   Hash Cond: ((t2.sid)::text = (t1.sid)::text)
   ->  Seq Scan on test_seqindex2 t2  (cost=0.00..16225.97 rows=920697 width=30)
   ->  Hash  (cost=581.80..581.80 rows=33580 width=28)
         ->  Seq Scan on test_seqindex1 t1  (cost=0.00..581.80 rows=33580 width=28)
(5 rows)



*I was hoping the optimizer would do a join using an index scan.*

*Could someone please explain why it's not doing an index scan rather
than a sequential scan?*


Thanks
Deepak


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread DM
create temp table dup_test (nm1 varchar(2),nm2 varchar(3));
insert into dup_test values ('A','A'),('A','B'),('A','C'),('B','A'),('B',
'B'),('B','C');

CREATE SEQUENCE
dup_test_seq
  INCREMENT 1
  MINVALUE 1
  MAXVALUE 9223372036854775807
  START 1
  CACHE 1;

alter table dup_test add column dup_id integer;

alter table dup_test alter column dup_id SET DEFAULT
nextval('dup_test_seq'::regclass);

update dup_test set dup_id = nextval('dup_test_seq'::regclass);


select * from dup_test;
 nm1 | nm2 | dup_id
-----+-----+--------
 A   | A   |      1
 A   | B   |      2
 A   | C   |      3
 B   | A   |      4
 B   | B   |      5
 B   | C   |      6
(6 rows)

Hope this helps
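If the numbering is only needed in the query output rather than stored in the
table, a temporary sequence avoids the ALTER TABLE; a small sketch that should
also work on 8.3, which predates window functions:

create temporary sequence rownum_seq;
select nextval('rownum_seq') as row_num, nm1, nm2 from dup_test;
-- row_num comes out 1..6, one per row; wrap an ORDER BY in a subquery first
-- if a specific ordering matters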


On Wed, Oct 20, 2010 at 4:07 PM, David Kerr  wrote:

> On Wed, Oct 20, 2010 at 03:47:19PM -0700, DM wrote:
> - select generate_series(1,(select count(*) from tax)), country from tax;
> -
> - you should use parentheses around the sub-select.
> -
> - Thanks
> - Deepak
>
>   Table "public.test"
>  Column |         Type         | Modifiers
> --------+----------------------+-----------
>  col1   | character varying(2) |
>  col2   | character varying(2) |
>
>
> select * from test;
>  col1 | col2
> ------+------
>  A    | A
>  A    | B
>  A    | C
>  B    | A
>  B    | B
>  B    | C
> (6 rows)
>
> select generate_series(1,(select count(*) from test)), col1, col2 from
> test;
>  generate_series | col1 | col2
> -+--+--
>   1 | A| A
>   2 | A| A
>   3 | A| A
>   4 | A| A
>   5 | A| A
>   6 | A| A
>   1 | A| B
>   2 | A| B
>   3 | A| B
>   4 | A| B
>   5 | A| B
>   6 | A| B
>   1 | A| C
>   2 | A| C
>   3 | A| C
>   4 | A| C
>   5 | A| C
>   6 | A| C
>   1 | B| A
>   2 | B| A
>   3 | B| A
>   4 | B| A
>   5 | B| A
>   6 | B| A
>   1 | B| B
>   2 | B| B
>   3 | B| B
>   4 | B| B
>   5 | B| B
>   6 | B| B
>   1 | B| C
>   2 | B| C
>   3 | B| C
>   4 | B| C
>   5 | B| C
>   6 | B| C
> (36 rows)
>
> when what I want is:
> 1  | A| A
> 2  | A| B
> 3  | A| C
> 4  | B| A
> 5  | B| B
> 6  | B| C
>
>
> thanks
>
> Dave
>
> -
> - On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:
> -
> - > On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
> - > - On 20/10/2010 23:22, David Kerr wrote:
> - > - >I know I've seen posts on how to do this, but i can't seem to find
> them.
> - > - >
> - > - >I've got a data set
> - > - >
> - > - >A, B
> - > - >A, C
> - > - >A, D
> - > - >[...]
> - > - >
> - > - >and so on
> - > - >
> - > - >and I'd like to be able to write a query that would result in
> - > - >
> - > - >1,A,B
> - > - >2,A,C
> - > - >3,A,D
> - > - >[...]
> - > - >
> - > - >PG version is 8.3.
> - > - >
> - > - >Any ideas?
> - > -
> - > - You probably want generate_series():
> - > -
> - > -http://www.postgresql.org/docs/8.3/static/functions-srf.html
> - > -
> - > - Ray.
> - >
> - > I thought, so. what would that look like?
> - >
> - > select generate_series(1,select count(*) from table), field1, field2
> from
> - > table
> - > doesn't work..
> - >
> - >
> - > thanks
> - >
> - > Dave
> - >
> - > --
> - > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> - > To make changes to your subscription:
> - > http://www.postgresql.org/mailpref/pgsql-general
> - >
>


Re: [GENERAL] Composite Index question

2010-10-20 Thread DM
I know there is no benefit to having duplicate indexes.
In order for me to make a change on production, it requires a lot of approvals and
stuff.

I want to know if there is any major performance drawback to having a
duplicate composite index, so that I can push hard for the change. Let me
know.

Thanks for looking into this.


On Wed, Oct 20, 2010 at 5:10 PM, Rob Sargent  wrote:

> If you can think of one benefit from having the redundant index, then by
> all means keep it.  It certainly eludes me.  It seems to me that removing an
> unnecessary index on a huge table can only be a good thing.
>
> On 10/20/2010 06:02 PM, DM wrote:
> > It's a huge table in production; I don't want to take any risk.
> >
> > I can simulate and test this, but I was checking to see if anyone
> > knows offhand about this.
> >
> > On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent  > <mailto:robjsarg...@gmail.com>> wrote:
> >
> > Hm. Run some queries; drop the second version of the index
> definition;
> > re-run the same queries; report to the group.  The redundant index
> isn't
> > helping, that much is certain.
> >
> > On 10/20/2010 05:43 PM, DM wrote:
> > > Composite Index question:
> > >
> > > I have composite index on 3 columns on a table, by mistake the
> > composite
> > > index was created twice on the table.
> > >
> > > Will there any performance issues on this table because of the 2
> same
> > > composite indexes?
> > >
> > > Thanks
> > > Deepak
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org
> > <mailto:pgsql-general@postgresql.org>)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> >
>


Re: [GENERAL] Composite Index question

2010-10-20 Thread DM
It's a huge table in production; I don't want to take any risk.

I can simulate and test this, but I was checking to see if anyone knows
offhand about this.
On Wed, Oct 20, 2010 at 4:57 PM, Rob Sargent  wrote:

> Hm. Run some queries; drop the second version of the index definition;
> re-run the same queries; report to the group.  The redundant index isn't
> helping, that much is certain.
>
> On 10/20/2010 05:43 PM, DM wrote:
> > Composite Index question:
> >
> > I have composite index on 3 columns on a table, by mistake the composite
> > index was created twice on the table.
> >
> > Will there any performance issues on this table because of the 2 same
> > composite indexes?
> >
> > Thanks
> > Deepak
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Composite Index question

2010-10-20 Thread DM
Composite Index question:

I have composite index on 3 columns on a table, by mistake the composite
index was created twice on the table.

Will there be any performance issues on this table because of the 2 identical
composite indexes?

Thanks
Deepak


Re: [GENERAL] Generate a dynamic sequence within a query

2010-10-20 Thread DM
select generate_series(1,(select count(*) from tax)), country from tax;

You should use parentheses around the sub-select.

Thanks
Deepak

On Wed, Oct 20, 2010 at 3:30 PM, David Kerr  wrote:

> On Wed, Oct 20, 2010 at 11:28:18PM +0100, Raymond O'Donnell wrote:
> - On 20/10/2010 23:22, David Kerr wrote:
> - >I know I've seen posts on how to do this, but i can't seem to find them.
> - >
> - >I've got a data set
> - >
> - >A, B
> - >A, C
> - >A, D
> - >[...]
> - >
> - >and so on
> - >
> - >and I'd like to be able to write a query that would result in
> - >
> - >1,A,B
> - >2,A,C
> - >3,A,D
> - >[...]
> - >
> - >PG version is 8.3.
> - >
> - >Any ideas?
> -
> - You probably want generate_series():
> -
> -http://www.postgresql.org/docs/8.3/static/functions-srf.html
> -
> - Ray.
>
> I thought, so. what would that look like?
>
> select generate_series(1,select count(*) from table), field1, field2 from
> table
> doesn't work..
>
>
> thanks
>
> Dave
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Enforcing password standards

2010-06-11 Thread DM
Thanks everyone,

I will wait for Postgres 9.0 to implement this feature then. Thanks

Thanks
Deepak

On Fri, Jun 11, 2010 at 10:30 AM, Joshua Tolley  wrote:

> On Thu, Jun 10, 2010 at 06:01:24PM -0700, DM wrote:
> >How to force postgres users to follow password standards and renewal
> >policies?
> >Thanks
> >Deepak
>
> 9.0 will ship with a contrib module called "passwordcheck" which will
> enforce
> some of these things, FWIW.
>
> --
> Joshua Tolley / eggyknap
> End Point Corporation
> http://www.endpoint.com
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
>
> iEYEARECAAYFAkwScpkACgkQRiRfCGf1UMMOzgCfW1P8SpFR53OSjm/og3hQFjba
> 0dIAoJK9mkm07XCAyfnPeiygBgrKuFG2
> =XESJ
> -END PGP SIGNATURE-
>
>
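For reference, once 9.0 is out, enabling the module should be roughly this (a
sketch, assuming the contrib module is installed):

# postgresql.conf
shared_preload_libraries = 'passwordcheck'
# restart the server; passwords set via CREATE ROLE / ALTER ROLE are then checked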


[GENERAL] Enforcing password standards

2010-06-10 Thread DM
How to force postgres users to follow password standards and renewal
policies?

Thanks
Deepak


Re: [GENERAL] Move data from DB2 to Postgres any software/solutions/approach?

2010-06-04 Thread DM
Thank you so much for all your inputs.

On Fri, Jun 4, 2010 at 3:27 PM, John R Pierce  wrote:

> DM wrote:
>
>> Thank you so much for every ones inputs.
>>
>> It is not real time, updates every 5 mins should be fine. But the DB2
>> database is real busy and its real performance based.
>>
>
> well, you might look over
> http://www.redbooks.ibm.com/abstracts/sg246828.html   which discusses DB2
> replication.  "The Appendix C provides information about configuring
> federated access to Informix, which can be used as a model for federated
> access to other non-DB2 database, such as Oracle, MS SQL Server, Sybase, and
> more, using DB2 Information Integrator V8. "
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Move data from DB2 to Postgres any software/solutions/approach?

2010-06-04 Thread DM
Thank you so much for every ones inputs.

It is not real time; updates every 5 minutes should be fine.
But the DB2 database is really busy, and it's very performance-driven.

Thanks
Deepak

On Fri, Jun 4, 2010 at 2:49 PM, John R Pierce  wrote:

> DM wrote:
>
>> Sorry i didnt frame my question properly earlier, we are looking for
>> solution to do real time replication from db2 to postgres, its different
>> from migration. Eventually we want to move away from DB2. Intention is to
>> create a subset of a db2 database on postgres and allow users to access the
>> postgres database.
>>
>
> *real* realtime, as in transaction by transaction?  or sorta-realtime, as
> in updates every X interval where X is a minute or few?
>
> wild guess says, you'll need to roll that yourself, probably on the DB2
> side using triggers, and I have no idea how you'd connect to PG from the DB2
> procedures (as I know very little about DB2 specifically)
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Move data from DB2 to Postgres any software/solutions/approach?

2010-06-04 Thread DM
Sorry, I didn't frame my question properly earlier. We are looking for a
solution to do real-time replication from DB2 to Postgres, which is different
from migration.
Eventually we want to move away from DB2. The intention is to create a subset of
a DB2 database on Postgres and allow users to access the Postgres database.

Thanks
Deepak

On Fri, Jun 4, 2010 at 2:23 PM, DM  wrote:

> Thanks Robert,
>
> Is there any tools available.
>
> Thanks
> Deepak
>
>
> On Fri, Jun 4, 2010 at 2:19 PM, Richard Broersma <
> richard.broer...@gmail.com> wrote:
>
>> On Fri, Jun 4, 2010 at 2:13 PM, DM  wrote:
>> > We want to replicate /move data form db2 to postgres is there any
>> software /
>> > solutions / approach available to do this?
>>
>>
>> Here is a link on the postgresql wiki.
>>
>> Hopefully it has some useful information.
>>
>>
>> http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#IBM_DB2
>>
>> --
>> Regards,
>> Richard Broersma Jr.
>>
>> Visit the Los Angeles PostgreSQL Users Group (LAPUG)
>> http://pugs.postgresql.org/lapug
>>
>
>


Re: [GENERAL] Move data from DB2 to Postgres any software/solutions/approach?

2010-06-04 Thread DM
Thanks Richard,

Are there any tools available?

Thanks
Deepak

On Fri, Jun 4, 2010 at 2:19 PM, Richard Broersma  wrote:

> On Fri, Jun 4, 2010 at 2:13 PM, DM  wrote:
> > We want to replicate /move data form db2 to postgres is there any
> software /
> > solutions / approach available to do this?
>
>
> Here is a link on the postgresql wiki.
>
> Hopefully it has some useful information.
>
>
> http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#IBM_DB2
>
> --
> Regards,
> Richard Broersma Jr.
>
> Visit the Los Angeles PostgreSQL Users Group (LAPUG)
> http://pugs.postgresql.org/lapug
>


[GENERAL] Move data from DB2 to Postgres any software/solutions/approach?

2010-06-04 Thread DM
Hi All,

We want to replicate/move data from DB2 to Postgres. Is there any software /
solution / approach available to do this?

Thanks
Deepak


Re: [GENERAL] PostgreSQL Performance issue

2010-04-29 Thread DM
Hello there,

1. Try using the COPY command; you will see a significant decrease in the loading
time (see the sketch below).
2. Turn off autocommit and remove foreign key constraints if it is only a one-time
load - this will also help in decreasing the load time.

Try these options and let us know how it went.

We load around 6M rows of data into a table using the COPY command; it takes a few
minutes to load the data, and the system configuration is not that high either. Also,
one more thing: we use a Linux box over here.

Do a small test of how long it takes to do 1000 inserts into a
similar table, and send us your timings and the definition of the table. I will
compare against mine.
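For example, a minimal sketch of a COPY-based load (table and file names are
placeholders):

-- server-side; the file must be readable by the postgres server process
COPY mytable FROM '/tmp/data.csv' WITH CSV;

-- or client-side through psql, reading the file from the client machine
\copy mytable from 'data.csv' with csv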

Thanks
Deepak


On Tue, Apr 27, 2010 at 10:09 PM,  wrote:

>  I am curious to know how much of your delay is due to PostgreSQL and how
> much to your Java batch program.  If you comment out the call to the
> database function, so that you are reading your input file but not doing
> anything with the data, how long does your batch program take to run?
>
>
>
> RobR
>
>
>
> --
>
> The Java program hardly takes a minute to process all the flat files, but when
> inserting the records into the db, the entire process takes more
> than 4 hours.
>
>
>
> Many thanks
>
>
>


Re: [GENERAL] pg_dump issues

2010-02-02 Thread DM
You were right; after zipping the dump file, it came out to 6.9G.

Thanks for your help.

thanks
Deepak

On Tue, Feb 2, 2010 at 3:20 PM, Scott Marlowe wrote:

> On Tue, Feb 2, 2010 at 4:07 PM, DM  wrote:
> > Hi All,
> > I have a database with only one schema with 5 tables
> > nspname |   relname   |  size
> >  -+-+-
> >  sch | job1  | 211 MB
> >  sch | job2  | 5611 MB
> >  sch | job3  | 6584 kB
> >  sch | job4  | 89 MB
> >  sch | job5  | 39 MB
> > Total sum of sizes of the table is less than 6 GB.
> > When I do a pg_dump of this database it is 20GB (I also did a pg_dump of
> > only schema data and the size was still the same). I am using postgres
> 8.4.1
> > version.
> > Could any one explain why the pg_dump size of the file is 3 times the
> size
> > of the tables?
>
> Your database is likely full of easily compressed text, which
> postgresql automagically compresses inline when storing it and
> decompresses when it retrieves it.  To see if this is the case, try
> zipping or gzipping or bzip2ing the dump file and see if it comes out
> to something less than or about 6Gigs.
>


Re: [GENERAL] pg_dump issues

2010-02-02 Thread DM
Hi All,

I have a database with only one schema with 5 tables

 nspname | relname |  size
---------+---------+---------
 sch     | job1    | 211 MB
 sch     | job2    | 5611 MB
 sch     | job3    | 6584 kB
 sch     | job4    | 89 MB
 sch     | job5    | 39 MB

Total sum of sizes of the table is less than 6 GB.

When I do a pg_dump of this database it is 20GB (I also did a pg_dump of
only schema data and the size was still the same). I am using postgres 8.4.1
version.

Could any one explain why the pg_dump size of the file is 3 times the size
of the tables?

Thanks
Deepak


[GENERAL] pg_dump issues

2010-02-02 Thread DM
Hi All,

I have a database with only one schema with 5 tables

 nspname | relname |  size
---------+---------+---------
 sch     | job1    | 211 MB
 sch     | job2    | 5611 MB
 sch     | job3    | 6584 kB
 sch     | job4    | 89 MB
 sch     | job5    | 39 MB

Total sum of sizes of the table is less than 6 GB.

When I do a pg_dump of this database it is 20GB (I also did a pg_dump of
only schema data and the size was still the same). I am using postgres 8.4.1
version.

Could any one explain why the pg_dump size of the file is 3 times the size
of the tables?

Thanks
Deepak


Re: [GENERAL] port question

2010-01-25 Thread DM
Hello Amy,

Trying to understand your situation:

*"I have installed a v8.4 and first port using localhost is ok"* - the default
port is 5432.

You can only use one port for one instance of Postgres; the default port 5432
can be overridden by any other port number.

Thanks
Deepak

On Sun, Jan 24, 2010 at 11:23 AM, John R Pierce  wrote:

> Amy Smith wrote:
>
>> I have installed a v8.4 and the first port using localhost is ok, but the
>> second one using a different port gets an error when connecting using the IP
>> address.
>> Is only one port allowed for one server?
>> Please help - I need an expert's advice.
>>
>
> you can run different instances of postgresql on different ports as long as
> their $PGDATA directories are different.
>
>
> is this second server configured to LISTEN_ADDRESSES='*' ?
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
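A minimal sketch of the second-instance setup John describes (paths and port
are just examples):

initdb -D /var/lib/pgsql/data2
# in /var/lib/pgsql/data2/postgresql.conf:
#   port = 5433
#   listen_addresses = '*'   # answer on the box's IP address, not just localhost
pg_ctl -D /var/lib/pgsql/data2 start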


Re: [GENERAL] When is the release date for Postgres 8.5?

2010-01-22 Thread DM
It looks like an alpha version of 8.5 is already out; here is the link:

http://www.postgresql.org/about/news.1172

Thanks
Deepak
On Fri, Jan 22, 2010 at 12:05 PM, Larry Rosenman  wrote:

> On Fri, 22 Jan 2010, John R Pierce wrote:
>
>  Larry Rosenman wrote:
>>
>>> Yeah.  The question of "when do we call it 9.0" has come up multiple
>>> times over the past few release cycles, and "when we get built-in
>>> replication" has always been one of the more popular answers.  If HS+SR
>>> aren't enough to justify a major version bump, I'm not sure what would
>>> be.
>>>
>>
>> works for me, heh!   what I was hoping to hear :)
>>
>> and, gotcha, re X.0...
>>
> to be clear the quote about dot-oh is from Tom Lane, and not me.
>
> I screwed up the cut/paste.
>
> LER
>
>
>
>>
>>
>>
> --
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 512-248-2683 E-Mail: l...@lerctr.org
> US Mail: 430 Valona Loop, Round Rock, TX 78681-3893
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] When is the release date for Postgres 8.5?

2010-01-22 Thread DM
Thank you for the update.

Thanks
Deepak

On Fri, Jan 22, 2010 at 5:45 AM, Bruce Momjian  wrote:

> A. Kretschmer wrote:
> > In response to DM :
> > > Hello All,
> > >
> > > When is the release date for Postgres 8.5?
> >
> > Never.
> >
> > The next version will be 9.0, released when it is ready for production.
>
> Yea, we are months away from a new major release.
>
> --
>  Bruce Momjian  http://momjian.us
>  EnterpriseDB http://enterprisedb.com
>
>  + If your life is a hard drive, Christ can be your backup. +
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Fragmentation/Vacuum, Analyze, Re-Index

2010-01-22 Thread DM
Hello All,

How to identify if a table requires full vacuum? How to identify when to do
re-index on an existing index of a table?

Is there any tool for the above?

Thanks
Deepak Murthy


[GENERAL] When is the release date for Postgres 8.5?

2010-01-22 Thread DM
Hello All,

When is the release date for Postgres 8.5?

Thanks
Deepak


Re: [GENERAL] Comparing 2 databases

2009-12-29 Thread DM
Also check this out; very interesting – it can compare data between the DBs
(tables/views):

http://www.zidsoft.com/

http://www.zidsoft.com/screenshots.html


Thanks

Deepak

On Tue, Dec 29, 2009 at 4:37 PM, Joshua Tolley  wrote:

> On Tue, Dec 29, 2009 at 03:21:18PM -0500, akp geek wrote:
> >Thanks for the response. I appreciate it. Are there any limitations on
> >using this one?  Does it mean that we have to have the same user on both databases
> >and the same passwords?
> >
> >I have used the command following way
> >
> >  check_postgres.pl --action=same_schema -H 172.  -p 1550
> >--db=myProdDB  --dbuser=prodUser  --dbpass=prodPwd
>  --dbhost2=172.x
> >--db=testDB  --dbuser=testUser  --dbpass=testPwd  --verbose >
> >difference.txt
> >
> >What happened was, it complained about the password, so I tried
> >replacing the testPwd with prodPwd, and then it started executing. But it
> >prompted for a password for testuser; that's where I got confused.
>
> You might try a pgpass file[1] and skip providing the passwords on the
> command
> line.
>
> >One question I have is, is there an option to specify schema also
>
> Check the docs under BASIC FILTERING[2].  You can tell it to ignore objects
> with certain names, or to include only those objects with the given names.
>
> [1] http://www.postgresql.org/docs/current/interactive/libpq-pgpass.html
> [2]
> http://bucardo.org/check_postgres/check_postgres.pl.html#basic_filtering
>
> --
> Joshua Tolley / eggyknap
> End Point Corporation
> http://www.endpoint.com
>
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.9 (GNU/Linux)
>
> iEYEARECAAYFAks6oNcACgkQRiRfCGf1UMOQVgCghRaU2VCwyXNg0KbkqI/FhA9J
> xpoAn2RJRSmJmbgybRytNjo0ZiPNruL4
> =Lk0m
> -END PGP SIGNATURE-
>
>
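A sketch of the ~/.pgpass format Joshua mentions (hosts, ports, and credentials
below are placeholders; the file must not be group/world readable):

# ~/.pgpass  --  hostname:port:database:username:password
172.x.x.x:1550:myProdDB:prodUser:prodPwd
172.x.x.x:5432:testDB:testUser:testPwd

chmod 600 ~/.pgpass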


[GENERAL] Install compat-postgresql-libs-debuginfo on postgres 8.3.8 without using RPM

2009-12-09 Thread DM
All

How do I install compat-postgresql-libs-debuginfo on Postgres 8.3.8 by
compiling the source code?
The way I have installed Postgres is by compiling the Postgres sources.


Thanks
Deepak


Re: [GENERAL] [ADMIN] Best way to install postgres? RPM/Source Code Build

2009-04-28 Thread DM
Thank you guys for the answers.

On Tue, Apr 28, 2009 at 6:06 PM, Joshua D. Drake wrote:

>
> >
> > Ops team want to install postgres as (usr/local/bin) using RPM, and
> > want to keep the data directory under (/var/lib/data/pgsql) and they
> > say that we cannot have two versions at the same time using RPM.
> > They say that red hat standard is to install all binaries of the
> > database under /usr/local/bin and by using RPM only.
>
> They are wrong. Redhat standard is /usr/bin for
> binaries /var/lib/pgsql/data for the data directory.
>
> Use www.pgsqlrpms.org and make your life easy.
>
> Joshua D. Drake
>
> Joshua D. Drake
> >
> --
> PostgreSQL - XMPP: jdr...@jabber.postgresql.org
>   Consulting, Development, Support, Training
>   503-667-4564 - http://www.commandprompt.com/
>   The PostgreSQL Company, serving since 1997
>
>


[GENERAL] Best way to install postgres? RPM/Source Code Build

2009-04-28 Thread DM
Hi All,
What is the best way to install the Postgres database?

We have installed two versions of Postgres on the server using source
code (compiling and building) (/usr/local/pgsql/8.3.3/ and
/usr/local/pgsql/8.3.6/), and our data directory is /mnt/data/pgsql/.
Our database size is more than 100GB. Currently we are running 8.3.6. When
we upgraded from 8.3.3 to 8.3.6, we pointed the data directory of 8.3.3 to
8.3.6.
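For reference, a versioned source build like that is usually just a per-version
--prefix (a sketch; configure options beyond the prefix are omitted):

./configure --prefix=/usr/local/pgsql/8.3.6
make
make install
/usr/local/pgsql/8.3.6/bin/pg_ctl -D /mnt/data/pgsql start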

The Ops team wants to install Postgres as (usr/local/bin) using RPM, and wants to
keep the data directory under (/var/lib/data/pgsql), and they say that we
cannot have two versions at the same time using RPM.
They say that the Red Hat standard is to install all binaries of the database
under /usr/local/bin and by using RPM only.

What is the best way to install Postgres? Is it by RPM or by using a source code
build? Could Postgres experts comment on this, please?

Thanks
Deepak


[GENERAL] Re: [ADMIN] how to revoke multiple users permission from multiple tables at the same time?

2009-04-22 Thread DM
Thanks Scott.
Good answer. I was consolidating the schemas here; too many users
were granted permissions to tables. I wanted to consolidate/optimize to bring
it down to one role and grant this role to the users (the same way as you
mentioned).

Thanks for the solution.

Thanks
Deepak


On Wed, Apr 22, 2009 at 4:02 PM, Scott Marlowe wrote:

> On Wed, Apr 22, 2009 at 5:01 PM, Scott Marlowe 
> wrote:
> > On Wed, Apr 22, 2009 at 4:19 PM, DM  wrote:
> >> how to revoke multiple users permission from multiple tables at the same
> >> time?
> >> Or in simple is there a way to revoke multiple users grant access from
> >> multiple tables under a schema.?
> >
> > Best way is to NOT grant multiple users permissions, but to grant a
> > role the permissions and grant that role to users.  That way you only
> > have to revoke persmissions from the role to revoke it from all the
> > users.
> >
> >> I use Revoke below command to execute on each table one by one.
> >> revoke SELECT/ALL on testtable from user1;
> >
> > Note that you can build a set of revoke commands by using selects and
> > concatenations if you need them.  Something like this (use psql -E to
> > see the queries \ commands invoke in psql)
> >
> > SELECT 'revoke all from somename on '||n.nspname||'.'||  c.relname ||';'
> > FROM pg_catalog.pg_class c
> > JOIN pg_catalog.pg_roles r ON r.oid = c.relowner
> > LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
> > WHERE c.relkind IN ('r','v','S','')
> >  AND n.nspname <> 'pg_catalog'
> >  AND n.nspname !~ '^pg_toast'
> >  AND pg_catalog.pg_table_is_visible(c.oid)
> > ORDER BY 1;
> >
> >   ?column?
> > --
> >  revoke all from somename on public.colony;
> >  revoke all from somename on public.delegate;
> >  revoke all from somename on public.jt1;
> >  revoke all from somename on public.jt2;
> >  revoke all from somename on public.mytable;
> >  revoke all from somename on public.test1;
> >  revoke all from somename on public.test2;
> >  revoke all from somename on public.tmp;
> > (8 rows)
>
> In my hurry I got the order wrong, you want the revoke to look like this:
>
> revoke all on tablename from somename;
>
> I leave it to you to rebuild the query to get what ya need.
>
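A minimal sketch of the role-based approach Scott describes (role, table, and
user names are hypothetical):

create role app_readers;
grant select on testtable to app_readers;
grant app_readers to user1, user2;   -- users pick up the table privileges via the role

-- later, one statement instead of one revoke per table:
revoke app_readers from user1;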


[GENERAL] how to revoke multiple users permission from multiple tables at the same time?

2009-04-22 Thread DM
How do I revoke multiple users' permissions from multiple tables at the same
time?
Or, put simply, is there a way to revoke multiple users' granted access from
multiple tables under a schema?

I use Revoke below command to execute on each table one by one.

revoke SELECT/ALL on testtable from user1;

Thanks for taking your time to read this.

thanks
Deepak


Re: [GENERAL] how to search for relation by name?

2009-04-22 Thread DM
Here is the SQL:

SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname LIKE ('%dt%')
  AND pg_catalog.pg_table_is_visible(c.oid)

replace dt with your sequence name

pg_catalog has the information.

Thanks
Deepak


On Wed, Apr 22, 2009 at 2:36 PM, zach cruise  wrote:

> when i try to copy database (into another database), i get "relation
> does not exist" errors for 'super objects' like sequences. (that is
> fine since i am using pg_dump, not pg_dumpall) but there is one
> relation i can't find to recreate in the new database.
>
> how can i search database for relation by name (some catalog table)?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Standards for Postgres Installation Question?

2009-04-15 Thread DM
Hi All,

Are there any standards for installing Postgres in a certain directory? Or,
what is the standard installation path for installing Postgres?

If I have to run two different versions of Postgres on the same machine, how
should I install Postgres? What are the standards for this?
Using RPM, it always defaults to /usr/local/pgsql/. Is there an RPM
that I can use where I can specify my installation directory path?

Thanks
DM


[GENERAL] Can we load all database objects in memory?

2009-03-25 Thread DM
Hi All,

I have a database of 10GB.
My Database Server has a RAM of 16GB

Is there a way that I can load all the database objects to memory?

Thanks for your time and taking a look at this question.


Thanks
Deepak


Re: [GENERAL] pg_restore error - Any Idea?

2009-03-23 Thread DM
Erik,

There was no change in the version; we are using Postgres v8.3.5.

Thanks
Deepak

On Sun, Mar 22, 2009 at 11:17 PM, Erik Jones  wrote:

>
> On Mar 22, 2009, at 10:44 PM, DM wrote:
>
>  Hi All,
>>
>> I am facing an error on executing the below command
>>
>> dump name: pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing
>> databae name: pnqd_test
>>
>> $pg_restore -U postgres -p 5433 -d pnqd_test
>> pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing
>>
>> pg_restore: [archiver (db)] Error while PROCESSING TOC:
>> pg_restore: [archiver (db)] Error from TOC entry 3715; 0 0 ACL monitor
>> postgres
>> WARNING: errors ignored on restore: 1
>>
>> I am not able to figure out this issue. Any idea guys.
>>
>
>
> TOC -> Table of Contents
>
> A dump made with pg_dump's -Fc will contain a table of contents of all of
> the database objects in the dump file.  Something in that is causing an
> error for pg_restore.  Does the version of pg_restore match up with the
> version of pg_dump that you used to make the dump?
>
> Erik Jones, Database Administrator
> Engine Yard
> Support, Scalability, Reliability
> 866.518.9273 x 260
> Location: US/Pacific
> IRC: mage2k
>
>
>
>
>
>
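A sketch of one way to dig into the TOC Erik mentions (file names as in the
original message):

pg_restore -l pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing > toc.list
# comment out (with a leading ';') the offending ACL entry in toc.list, then:
pg_restore -U postgres -p 5433 -d pnqd_test -L toc.list \
    pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing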


[GENERAL] pg_restore error - Any Idea?

2009-03-22 Thread DM
Hi All,

I am facing an error on executing the below command

dump name: pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing
database name: pnqd_test

$pg_restore -U postgres -p 5433 -d pnqd_test
pg_dump_FcZ0.pnps_200903041201_1.2.1.0_base_testing

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 3715; 0 0 ACL monitor
postgres
WARNING: errors ignored on restore: 1

I am not able to figure out this issue. Any ideas, guys?

Thanks
Deepak


Re: [GENERAL] max_fsm_relations and max_fsm_pages problem

2009-03-18 Thread DM
Question: How many database objects or relations do you have?
Thanks
Deepak

On Wed, Mar 18, 2009 at 8:12 AM, Gauthier, Dave wrote:

>  Hi:
>
>
>
> I have max_fsm_relations set to 1200 and max_fsm_pages set to 20 in
> postgres.conf (well over 16x max_fsm_relations) but still get...
>
>
>
> FATAL:   max_fsm_pages must exceed max_fsm_relations * 16
>
>
>
> What’s up ?
>
>
>
> (v8.3 on Linux)
>
>
>
>
>