Laurenz Albe wrote on 02.11.2017 09:30:
> Finally, even though the official line of PostgreSQL is to *not* have
> query hints, and for a number of good reasons, this is far from being
> a unanimous decision. The scales may tip at some point, though I
> personally hope that this point is not
Jim Nasby wrote on 11.03.2016 17:37:
> If the blob is in the database then you have nothing extra to do. It's
> handled just like all your other data.
>
> If it's a file in a file system then you need to:
>
> - Have application code that knows how and where to get at the file
> - Have a
Hello,
I stumbled over this answer: http://stackoverflow.com/a/9717125/330315 and this
sounded quite strange to me.
So I ran this on my Windows laptop with Postgres 9.4.5, 64bit and indeed
now()::date is much faster than current_date:
explain analyze
select current_date
from
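The fragment breaks off before the FROM list, but the comparison can be sketched in a self-contained way with generate_series. The per-row cost explanation is the one given in the linked answer, not something verified here: in those versions current_date was reportedly rewritten into a per-row text-to-date cast, while now()::date casts a single timestamptz value.

```sql
-- Sketch of the comparison from the linked answer (self-contained):
explain analyze select current_date from generate_series(1, 1000000);
explain analyze select now()::date  from generate_series(1, 1000000);
```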
Jonathan Rogers wrote on 17.10.2015 04:14:
>>> Yes, I have been looking at both plans and can see where they diverge.
>>> How could I go about figuring out why Postgres fails to see the large
>>> difference in plan execution time? I use exactly the same parameters
>>> every time I execute the
Graeme B. Bell wrote on 09.07.2015 11:44:
I don't recall seeing a clear statement telling me I should mark pl/pgsql
functions nonvolatile wherever possible or throw all performance and
scalability out the window.
From: http://www.postgresql.org/docs/current/static/xfunc-volatility.html
Tomas Vondra wrote on 17.03.2015 15:43:
On 17.3.2015 15:19, Thomas Kellerer wrote:
Tomas Vondra wrote on 17.03.2015 14:55:
(2) using window functions, e.g. like this:
SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
Tomas Vondra wrote on 17.03.2015 14:55:
(2) using window functions, e.g. like this:
SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
ORDER BY max_creation_dt) AS rn
FROM
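The quoted sketch breaks off at FROM; completed under assumptions it would look like the following (the source table name and the sort direction are guesses, since neither appears in the fragment):

```sql
SELECT *
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
                            ORDER BY max_creation_dt DESC) AS rn
  FROM touchpoint_executions   -- placeholder table name
) t
WHERE rn = 1;  -- keep only the newest row per touchpoint_execution_id
```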
Robert Kaye wrote on 16.03.2015 13:59:
However, I am glad to report that our problems are fixed and that our
server is back to humming along nicely.
And as I said to Josh earlier: Postgres rocks our world. I’m
immensely pleased that once again the problems were our own stupidity
and
Sven R. Kunze wrote on 26.02.2015 13:23:
If you think Reverse Key Indexes have no usage here in PostgreSQL, you should
not support convenience features
for easily improving performance without breaking the querying API
It's also unclear to me which performance you are referring to.
Sven R. Kunze wrote on 26.02.2015 12:04:
I just thought about btree indexes here mainly because they are well-known and
well-used in ORM frameworks.
If your ORM framework needs to know about the internals of an index definition
or even requires a certain index type, then you should ditch
Magers, James, 14.07.2014 04:20:
Thank you for your feedback. I am attaching the requested information.
While I do not think the query is necessarily inefficient, I believe a
sequence scan would be more efficient.
You can try
set enable_indexscan = off;
set enable_bitmapscan = off;
Magers, James, 14.07.2014 15:18:
Thank you. I executed the query this morning after disabling the scan types.
I am including links to explain.depesz output for each of the three
variations that I executed.
indexscan and bitmapscan off: http://explain.depesz.com/s/sIx
seqscan and
bsreejithin wrote on 29.08.2013 18:13:
PostgreSQL version was *8.2*.
8.2 has long been deprecated.
For a new system you should use 9.2 (or at least 9.1)
Thomas
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
Shaun Thomas wrote on 15.05.2013 17:31:
Hi!
This has been a pain point for quite a while. While we've had several
discussions in the area, it always seems to just kinda trail off and
eventually vanish every time it comes up.
A really basic example of how bad the planner is here:
CREATE TABLE
mark.kirkw...@catalyst.net.nz wrote on 03.05.2013 00:19:
I think the idea of telling postgres that we are doing a load is probably
the wrong way to go about this. We have a framework that tries to
automatically figure out the best plans...I think some more thought about
how to make that
Hi,
I have a self-referencing table that defines a hierarchy of projects and
sub-projects.
This is the table definition:
CREATE TABLE project
(
project_id integer primary key,
project_name text,
pl_name text,
parent_id integer
);
ALTER TABLE project
ADD CONSTRAINT
Tom Lane wrote on 19.10.2012 16:20:
Thomas Kellerer spam_ea...@gmx.net writes:
This is the execution plan without index: http://explain.depesz.com/s/ecCT
When I create an index on parent_id execution time increases to something
between 110ms and 130ms
This is the execution plan with index
Kiriakos Tsourapas, 25.09.2012 13:01:
Thank you,
I will take this into consideration, since upgrading to 9 will be much harder I
assume...
I think an upgrade from 8.3 to 8.4 was harder due to the removal of a lot of
implicit type casts.
8.4 to 9.x shouldn't be that problematic after all
Tom Lane, 24.07.2012 19:12:
Well, it would only help if you're running a PG version that's new
enough to recognize the NOT EXISTS as an anti-join; and even then,
it's possible that joining on a tid column forecloses enough plan
types that you don't get any real benefit. But I'm just guessing.
Hi,
I was testing a query to delete duplicates to see how well using ctid works if
the table doesn't have a unique identifier available.
The table definition is:
create table dupes
(
id integer primary key,
first_name text,
last_name text
);
My test table has 100,000 rows with
Tom Lane, 24.07.2012 16:23:
Thomas Kellerer spam_ea...@gmx.net writes:
DELETE FROM dupes
WHERE id NOT IN (SELECT min(b.id)
FROM dupes b
GROUP BY first_name, last_Name
HAVING count(*) > 1);
Doesn't that kill the non-duplicates too?
Ah
Tom Lane wrote on 24.07.2012 17:55:
Joins on tid columns just aren't supported very well at the moment.
Partly that's from lack of round tuits, and partly it's because it
doesn't seem all that wise to encourage people to use them. There
are gotchas if any of the rows receive concurrent updates.
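With the comparison operator restored (count(*) > 1), the NOT IN variant still deletes the unique rows, as Tom points out: their ids never make it into the subquery. A rewrite that keeps exactly one row per name pair, touches no unique rows, and needs no tid join is a plain self-anti-join; a sketch against the quoted table:

```sql
-- Delete every row for which an earlier (smaller id) row with the same
-- (first_name, last_name) pair exists; unique rows are left alone.
DELETE FROM dupes d
WHERE EXISTS (SELECT 1
              FROM dupes b
              WHERE b.first_name = d.first_name
                AND b.last_name  = d.last_name
                AND b.id < d.id);
```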
Craig Ringer, 04.07.2012 07:43:
I'm not sure what the best option for getting a 9.2 beta build for Windows is.
Download the ZIP from here:
http://www.enterprisedb.com/products-services-training/pgbindownload
Unzip, initdb, pg_ctl start
Regards
Thomas
MauMau, 10.05.2012 13:34:
Today, they told me that they ran the test on two virtual machines on
a single physical machine.
Which means that both databases shared the same I/O system (hard disks).
Therefore it's not really surprising that the overall performance goes down if
you increase the
Robert Klemme, 07.05.2012 14:03:
Alternative tools for JDBC tests:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
SQL Developer does not support PostgreSQL
This page:
http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
also lists
Robert Klemme, 07.05.2012 15:44:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
SQL Developer does not support PostgreSQL
Last time I checked (quite a while ago) you could use arbitrary JDBC
drivers. There's also
Walker, James Les wrote on 01.05.2012 16:44:
I installed the enterprisedb distribution and immediately saw a 400%
performance increase.
What exactly is the enterprisedb distribution?
Are you talking about the Advanced Server?
I would be very surprised if the code base would differ so
Merlin Moncure wrote on 30.04.2012 23:43:
Try turning off fsync in postgresql.conf to be sure.
This is dangerous advice.
Turning off fsync can potentially corrupt the database in case of a system
failure (e.g. power outage).
Tom Lane wrote on 26.04.2012 21:17:
Richard Kojedzinszky kri...@tvnetwork.hu writes:
Dear list,
We have a database schema, which looks the same as the attached script.
When filling the tables with data, and skipping analyze on the table (so
pg_stats contains no records for table 'a'), the
Kevin Kempter wrote on 03.04.2012 19:29:
Hi All;
I have a query that wants to update a table based on a join like this:
update test_one
set f_key = t.f_key
from
upd_temp1 t,
test_one t2
where
t.id_number = t2.id_number
upd_temp1 has 248,762 rows
test_one has 248,762 rows
To extend on what
Stefan Keller wrote on 26.02.2012 01:16:
2. Are there any hints on how to tell Postgres to read in all table
contents into memory?
What about creating a tablespace on a RAM file system (tmpfs), then creating a second schema in
your database where all tables are located in that temp tablespace?
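A sketch of that idea; the mount point and table names are placeholders, and the mount must already exist as tmpfs. Anything placed there is gone after a reboot, so it only suits rebuildable copies of data:

```sql
CREATE TABLESPACE ram_ts LOCATION '/mnt/pg_ram';  -- placeholder tmpfs path
CREATE SCHEMA ram;
CREATE TABLE ram.lookup_copy (LIKE public.lookup INCLUDING ALL)
  TABLESPACE ram_ts;  -- public.lookup is a placeholder source table
```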
sridhar bamandlapally, 27.01.2012 05:31:
SQL explain plan for select * from hr.emp ;
Explained.
PLAN
--
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
Stefan Keller, 17.09.2011 22:01:
I'm also interested in such proposals or ideas!
Here's some list of topics:
* Time in PostgreSQL
* Fast Bulk Data Inserting in PostgreSQL with Unlogged tables
I don't understand these two items. Postgres does have a time data type and it
has unlogged tables
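Both features are indeed built in; a minimal sketch (table and column names made up here). Unlogged tables (9.1+) skip WAL, which makes bulk loads fast, at the price of being truncated after a crash and not being replicated:

```sql
CREATE UNLOGGED TABLE staging_import (
  id      integer,
  loaded  time,     -- the built-in time-of-day type
  payload text
);
```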
Robert Klemme, 19.09.2011 13:13:
On Sun, Sep 18, 2011 at 9:31 PM, Stefan Keller sfkel...@gmail.com wrote:
I'm simply referring to literature (like the intro by Ramakrishnan & Gehrke).
I just know that Oracle and MySQL actually do have them too and use it
without those current implementation specific
Adarsh Sharma, 25.03.2011 07:51:
Thanks Andreas, I was about to print the output but it takes too much time.
Below is the output of explain analyze command :
pdc_uima=# explain analyze select c.clause, s.* from clause2 c, svo2 s where
c.clause_id=s.clause_id and s.doc_id=c.source_id and c.
Anssi Kääriäinen, 18.03.2011 08:15:
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using
PostgreSQL 8.4.7. The basic problem is that when joining multiple
times different entities the planner thinks that there are vastly fewer
rows to join than there are in reality and decides
Divakar Singh, 25.11.2010 12:37:
Hello Friends,
I have many instances of my software running on a server (Solaris SPARC). Each
software instance requires some DB tables (same DDL for all instances' tables)
to store data.
It essentially means that some processes from each instance of the
Kevin Grittner, 08.11.2010 18:01:
shaiju.ck shaiju...@gmail.com wrote:
The table have 200 records now.
Select * from employee takes 15 seconds to fetch the data!!!
Which seems to be very slow.
But when I say select id,name from employee it executes in 30ms.
Same performance if I say select
Kenneth Marshall, 27.10.2010 22:41:
Different algorithms have been discussed before. A quick search turned
up:
quicklz - GPL or commercial
fastlz - MIT works with BSD okay
zippy - Google - no idea about the licensing
lzf - BSD-type
lzo - GPL or commercial
zlib - current algorithm
Of these lzf
Matthew Wakeling wrote on 10.08.2010 18:03:
On Tue, 10 Aug 2010, Thomas Kellerer wrote:
No. It's built in (8.4) and it's called Windowing functions:
http://www.postgresql.org/docs/8.4/static/tutorial-window.html
http://www.postgresql.org/docs/8.4/static/functions-window.html
SELECT group
Craig James wrote on 27.05.2010 23:13:
It would be nice if Postgres had a way to assign a cost to every
function.
Isn't that what the COST parameter is intended to be:
http://www.postgresql.org/docs/current/static/sql-createfunction.html
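The COST parameter in action on a made-up function; the number is in units of cpu_operator_cost and steers the planner, e.g. toward evaluating cheaper predicates first:

```sql
CREATE FUNCTION expensive_check(integer) RETURNS boolean AS
  'SELECT $1 > 0'
LANGUAGE sql IMMUTABLE
COST 10000;  -- tell the planner each call is expensive
```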
Thomas
akp geek, 28.04.2010 16:37:
We have 8.4, which of AUTOVACUUM PARAMETERS can be set to handle individual
table?
All documented here:
http://www.postgresql.org/docs/current/static/sql-createtable.html
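The per-table autovacuum settings are storage parameters, so besides CREATE TABLE they can also be applied afterwards; a sketch with a placeholder table name:

```sql
ALTER TABLE busy_table
  SET (autovacuum_vacuum_scale_factor = 0.05,
       autovacuum_vacuum_threshold    = 1000);
```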
Nickolay wrote on 09.01.2010 11:24:
it would be pretty hard with one table because of blocking
What do you mean by "because of blocking"?
Thomas
Michael Clemmons wrote on 11.12.2009 23:52:
Thanks all this has been a good help.
I don't have control (or easy control) over unit tests creating/deleting
databases since I'm using the Django framework for this job. Createdb
takes 12secs on my system (9.10, pg 8.4 and ext4), which is impossibly slow
Andreas Hartmann wrote on 29.10.2009 21:52:
Hi everyone,
I want to model the following scenario for an online marketing application:
Users can create mailings. The list of recipients can be uploaded as
spreadsheets with arbitrary columns (each row is a recipient). I expect
the following
Shaul Dar, 13.10.2009 17:17:
Also PG does not have a concept of an auto-increment pseudo-column
like Oracle's rownum. Any suggestions?
Yes it does (at least in 8.4):
SELECT row_number() over(), the_other_columns...
FROM your_table
So you could do something like:
SELECT *
FROM (
SELECT
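The fragment above, completed as a sketch; with an empty OVER () the numbering order is arbitrary, and the table name is a placeholder:

```sql
SELECT *
FROM (
  SELECT row_number() OVER () AS rownum,
         t.*
  FROM your_table t   -- placeholder
) numbered
WHERE rownum <= 10;
```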
ramasubramanian, 27.05.2009 08:42:
How do I insert or update a file in a table using a query in Postgres?
CREATE TABLE excel_file_upload
(
user_id integer,
excel_file bytea
)
example
insert into excel_file_upload values(1,file1)
file1 can be any file *.doc,*.xls
How can I do
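The usual answer is to read the file in the client application and pass it as a bytea parameter. Purely in SQL, one option on newer versions (9.1+) is pg_read_binary_file(), which reads a file on the *server*, needs superuser rights, and by default only accepts paths inside the data directory; the file name here is a placeholder:

```sql
INSERT INTO excel_file_upload (user_id, excel_file)
VALUES (1, pg_read_binary_file('file1.xls'));

-- updating works the same way
UPDATE excel_file_upload
   SET excel_file = pg_read_binary_file('file1.xls')
 WHERE user_id = 1;
```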
Stephen Frost wrote on 22.04.2009 23:51:
What about 4 individual prepared inserts? Just curious about it.
4 inserts, one prepared statement each (constructing the prepared
statement only once), in a single transaction: 1.68s
I'm surprised that there's any win here at all.
For a