On Fri, Jan 10, 2014 at 12:02 AM, George Weaver gwea...@shaw.ca wrote:
Thanks David,
I found that if the whole expression is made a sub-select it works:
I too eventually got there. :-)
Check the plans for the two queries that you have.
Best Regards,
*Sameer Kumar | Database Consultant*
Hi all,
I want to use PostgreSQL to help me calculate the cardinality/selectivity of
some queries, but I do not want to insert any data into these tables in
PostgreSQL (since the data size is huge). So I plan to calculate the
statistics myself (outside PostgreSQL) and manually specify the
On Thu, Jan 9, 2014 at 11:42 AM, Willy-Bas Loos willy...@gmail.com wrote:
It doesn't seem logical to me that pg_dump should generate WAL, but I
haven't been able to find a different explanation so far.
So to make sure, I want to ask you people: can it be that running pg_dump
creates a lot of
Hey List,
kind of a simple question:
I'm using the postgis_topology extension,
and I'm trying to figure out where the slowness comes from when importing data.
It involves plpgsql functions calling other plpgsql functions, inserts,
updates, etc.
I know I can use EXPLAIN ANALYZE for one query, but
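Per-statement timing inside nested plpgsql calls is usually gathered with the auto_explain contrib module. A minimal session-level sketch (loading the module typically requires superuser rights):

```sql
-- Load auto_explain for this session and log every statement,
-- including those executed inside plpgsql functions:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;        -- log all statements
SET auto_explain.log_analyze = on;            -- include actual run times
SET auto_explain.log_nested_statements = on;  -- statements inside functions
-- Now run the import; per-statement plans and timings appear in the server log.
```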
On 08/01/2014 16:09, gator...@yahoo.de wrote:
For machines running database systems, this means
that I need some way to get a consistent state of some point in
time. It does not particularly matter, which time exactly (in
Unfortunately, it does not look like there is any direct
to be specific, this is the SQL.
SELECT to_number((SELECT
array_to_string((SELECT xpath('/attributes/duration/text()',(SELECT XMLPARSE
(CONTENT
Hi All,
I am facing one problem. I want to read the log files of Postgres.
Our customer is facing a problem in the database: one error
exception is continuously raised by the system.
Error is:
2014-01-09 22:08:12.003, SEVERE, manager.Data Manager - Could not execute JDBC
batch update; SQL
Hi:
On Thu, Jan 9, 2014 at 7:50 PM, Tirthankar Barari tbar...@verizon.com wrote:
We have a table where we insert about 10 million rows every day. We keep 14
days' worth of entries (i.e. 140 mil). A scheduled task wakes up every day
and deletes all entries past the 14 day window (i.e. deletes
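A minimal sketch of the purge described above (table and column names are assumptions, not from the thread):

```sql
-- Delete everything older than the retention window:
DELETE FROM measurements
WHERE  created_at < now() - interval '14 days';
```

For deletes of this size, partitioning by day and dropping expired partitions with DROP TABLE is a common alternative, since it avoids the table bloat and VACUUM cost of daily bulk DELETEs.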
On Fri, Jan 10, 2014 at 6:00 PM, ygnhzeus ygnhz...@gmail.com wrote:
Hi all,
I want to use PostgreSQL to help me calculate the cardinality/selectivity of
some queries, but I do not want to insert any data into these tables in
PostgreSQL (since the data size is huge). So I plan to calculate the
Thanks for your reply.
So correlation is not related to the calculation of selectivity, right? If I
force PostgreSQL not to optimize the join order (by setting join_collapse_limit
and from_collapse_limit to 1), is there any other factor that may affect the
structure of the execution plan
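The setting described above can be sketched like this; with both limits at 1, the planner keeps the join order exactly as written (table names are placeholders):

```sql
-- Disable join reordering for this session:
SET join_collapse_limit = 1;
SET from_collapse_limit = 1;

EXPLAIN
SELECT *
FROM   a
JOIN   b ON b.a_id = a.id   -- a is joined to b first, as written
JOIN   c ON c.b_id = b.id;  -- then the result is joined to c
```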
Sent from my iPad
On 10-Jan-2014, at 19:42, ygnhzeus ygnhz...@gmail.com wrote:
Thanks for your reply.
So correlation is not related to the calculation of selectivity, right? If I
force PostgreSQL not to optimize the join order (by setting
join_collapse_limit and from_collapse_limit to
On Fri, Jan 10, 2014 at 11:19 PM, Atri Sharma atri.j...@gmail.com wrote:
Sent from my iPad
On 10-Jan-2014, at 19:42, ygnhzeus ygnhz...@gmail.com wrote:
Thanks for your reply.
So correlation is not related to the calculation of selectivity, right? If I
force PostgreSQL not to optimize the
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rémi Cura
Sent: Friday, January 10, 2014 4:10 AM
To: PostgreSQL General
Subject: [GENERAL] execution time for plpgsql function and subfunction
Hey List,
kind of a simple question:
I'm using the
Panneerselvam Posangu wrote
to be specific, this is the SQL.
SELECT to_number((SELECT
array_to_string((SELECT xpath('/attributes/duration/text()',(SELECT
XMLPARSE
(CONTENT
'<attributes>
<duration>2</duration>
<maxlicenses>2</maxlicenses>
<paymentrequired>true</paymentrequired>
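A minimal, self-contained version of the pattern above (the XML literal is reconstructed for illustration): xpath() returns an xml array, so the usual idiom is to take the first element and cast it through text to numeric.

```sql
SELECT (xpath('/attributes/duration/text()',
              '<attributes><duration>2</duration><maxlicenses>2</maxlicenses><paymentrequired>true</paymentrequired></attributes>'::xml)
       )[1]::text::numeric AS duration;
-- duration = 2
```

This avoids the array_to_string/to_number round trip in the original query.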
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
despite the server now having 32 GB instead of 4 GB of RAM and the workload
and number of clients remaining the same.
Details:
We have been using Postgresql for some time internally with much success.
Recently, we completed
Just as a follow-up to this, the process that I am using to do the upgrade
is as follows:
1. Install Postgres 9.3 in /opt dir.
2. In 9.0 instance update spclocation in pg_tablespace.
3. Update the symlinks in the pg_tblspace folder.
4. Move the tablespace folders to new location.
5. Run
On 01/10/2014 08:40 AM, Joseph Kregloh wrote:
Just as a followup to this. The process that I am using to do the
upgrade is as follows:
1. Install Postgres 9.3 in /opt dir.
2. In 9.0 instance update spclocation in pg_tablespace.
3. Update the symlinks in the pg_tblspace folder.
4. Move the
On 01/08/14 19:55, Jeff Janes wrote:
I think it would be easier to just exclude the database from the
system-wide backup and use a different method for it, rather than
engineer the necessary before/after hooks onto the system-wide backup.
Thanks for your comments!
I really thought it would
On Jan 10, 2014, at 8:35 AM, Preston Hagar prest...@gmail.com wrote:
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors despite
the server now having 32 GB instead of 4 GB of RAM and the workload and
number of clients remaining the same.
Details:
We have been
Any way to add a PK under the covers for PostgreSQL version 8.3?
On 01/06/2014 03:53 PM, Thomas Kellerer wrote:
CS DBA wrote on 06.01.2014 23:30:
We have a few very large tables with unique indexes on a column but
the column is not defined as the Primary Key. Can we add a PK
constraint
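For reference: there is no supported way to do this on 8.3, but PostgreSQL 9.1 added the ability to promote an existing unique index to a primary key in place. A sketch with placeholder names:

```sql
-- PostgreSQL 9.1+ only; the indexed column(s) must already be NOT NULL:
ALTER TABLE big_table
  ADD CONSTRAINT big_table_pkey PRIMARY KEY USING INDEX big_table_unique_idx;
```

This reuses the existing index rather than building a new one, which matters for very large tables.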
On Fri, Jan 10, 2014 at 11:09 AM, Steve Atkins st...@blighty.com wrote:
On Jan 10, 2014, at 8:35 AM, Preston Hagar prest...@gmail.com wrote:
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
despite the server now having 32 GB instead of 4 GB of RAM and the workload
and
Preston Hagar prest...@gmail.com writes:
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
despite the server now having 32 GB instead of 4 GB of RAM and the workload
and number of clients remaining the same.
Here are a couple of examples from the incident we had this
Thanks all for your suggestions. Looks like disabling transparent huge
pages fixed this issue for us. We haven't had it occur in two days now
after the change.
Thanks,
Karthik
On Fri, Jan 10, 2014 at 9:03 AM, gator...@yahoo.de wrote:
On 01/08/14 19:55, Jeff Janes wrote:
I think it would be easier to just exclude the database from the
system-wide backup and use a different method for it, rather than
engineer the necessary before/after hooks onto the system-wide
On 10 January 2014, 19:19, Tom Lane wrote:
Preston Hagar prest...@gmail.com writes:
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
despite the server now having 32 GB instead of 4 GB of RAM and the
workload
and number of clients remaining the same.
Here are a couple of
History question:
Why does select round(3,3) work,
select round(3.0,3) work,
but select round(3.0::real,1) not work?
There's a utility cast in the integer case (described here
http://www.postgresql.org/docs/9.3/static/typeconv-func.html), but not in the
real case.
Is this on
On Fri, Jan 10, 2014 at 12:19 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Preston Hagar prest...@gmail.com writes:
tl;dr: Moved from 8.3 to 9.3 and are now getting out of memory errors
despite the server now having 32 GB instead of 4 GB of RAM and the
workload
and number of clients remaining
Paul Ramsey pram...@cleverelephant.ca writes:
History question:
Why does select round(3,3) work,
     select round(3.0,3) work,
but  select round(3.0::real,1) not work?
The 2-argument round() function actually takes (numeric, integer).
There's an implicit cast from int to
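A small illustration of the behavior being discussed; the explicit cast works because round(numeric, integer) exists while round(real, integer) does not:

```sql
SELECT round(3, 3);                  -- ok: int is implicitly cast to numeric
SELECT round(3.0, 3);                -- ok: the literal is already numeric
SELECT round(3.0::real::numeric, 1); -- ok with an explicit cast to numeric
-- SELECT round(3.0::real, 1);       -- fails: function round(real, integer)
--                                      does not exist
```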
What surprises do you think would come from a
round(real, integer) returns real
function?
Just asking the question, I guess I can see the answer, since though round() is
usually used to reduce precision, it’s also possible to use it to increase it
arbitrarily… bah.
It does bug me a fair
Paul Ramsey pram...@cleverelephant.ca writes:
What surprises do you think would come from a
round(real, integer) returns real
function?
People might expect that rounding to, say, 6 digits produces an exact
decimal answer. Even if you're not exceeding 6 digits overall, it's
unlikely that the
Adrian.
Based on your earlier remarks and further investigation, I find that the
restoration of a schema ( -n ) goes smoothly if there are no foreign key
references to the tables being restored from a schema that is not part of the
restoration. I had a couple of those that I had not initially
True (?), though I’m guessing the real test for most folks is whether printf renders
it as expected. Anything else is icing on the cake, no?
P
--
Paul Ramsey
http://cleverelephant.ca
http://postgis.net
On January 10, 2014 at 1:09:24 PM, Tom Lane (t...@sss.pgh.pa.us) wrote:
to, say, 6 digits
@Sameer Khan, I got the query for all except *Number of cached blocks read,
Number of cached index blocks read, Number of cached sequence blocks read*.
Can you tell me the query for these three counters only?
Hi Kevin,
I will use whatever techniques you have mentioned.
The situation is unique: there was no backup for the 300+ GB database.
If I give the command
select * from pg_largeobject where loid=141066;
it shows 3 rows.
But whenever I want to export it with lo_export, it says the loid is missing; it
sramay nic.sr...@gmail.com wrote:
select * from pg_largeobject where loid=141066;
it shows 3 rows.
But whenever I want to export it with lo_export, it says the loid is missing
Perhaps pageno = 0 is missing for that object? Perhaps you need
something in pg_largeobject_metadata for the object
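A sketch of the checks suggested here (the OID is the one from the thread; the output path is a placeholder). On 9.0 and later, a large object needs a row in pg_largeobject_metadata in addition to its data pages in pg_largeobject:

```sql
-- Does the metadata row exist for this object?
SELECT * FROM pg_largeobject_metadata WHERE oid = 141066;

-- If it does, a server-side export should work:
SELECT lo_export(141066, '/tmp/lo_141066.bin');
```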
On Fri, Dec 27, 2013 at 04:10:25PM -0800, Adrian Klaver wrote:
On 12/27/2013 02:52 PM, Jeff Janes wrote:
On Friday, December 27, 2013, Joseph Kregloh wrote:
FYI, some testing showed that playing around with spclocation in
pg_tablespace is not recommended.
Do you