On 18 March 2010 14:31, Corin wrote:
> Hi all,
>
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking about
> switching from mysql 5.1.37 innodb to postgresql 8.4.2. The database server
> is a dual dualcore
2010/5/14 Piotr Legiecki
> Hi
>
> I have a situation at my work which I simply don't understand and hope
> that here I can find some explanations.
>
> What is on the scene:
> A - old 'server' PC AMD Athlon64 3000+, 2GB RAM, 1 ATA HDD 150GB, Debian
> etch, postgresql 8.1.19
> B - new server HP DL
On 20 May 2010 06:06, David Jarvis wrote:
> Hi,
>
> I recently switched to PostgreSQL from MySQL so that I can use PL/R for data
> analysis. The query in MySQL form (against a more complex table structure)
> takes ~5 seconds to run. The query in PostgreSQL I have yet to let finish,
> as it takes o
On 20 May 2010 17:36, David Jarvis wrote:
> Hi, Thom.
>
> The query is given two items:
>
> Range of years
> Range of days
>
> I need to select all data between the range of days (e.g., Dec 22 - Mar 22)
> over the range of years (e.g., 1950 - 1970), such as shown here:
>
> http://i.imgur.com/MUkuZ
On 20 May 2010 19:36, Thom Brown wrote:
> On 20 May 2010 17:36, David Jarvis wrote:
>> Hi, Thom.
>>
>> The query is given two items:
>>
>> Range of years
>> Range of days
>>
>> I need to select all data between the range of days (e.g., Dec
On 20 May 2010 20:02, Tom Lane wrote:
> Thom Brown writes:
>> On 20 May 2010 17:36, David Jarvis wrote:
>> Okay, get your app to convert the month-date to a day of year, so we
>> have year_start, year_end, day_of_year_start, day_of_year_end
>
>> and your where cl
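Presumably the suggested WHERE clause would look something like this sketch (table and column names "measurement", "taken" and "amount" are illustrative, not from the thread):

```sql
-- Day 356 is roughly Dec 22 and day 81 roughly Mar 22 in non-leap
-- years, so the day-of-year range wraps across the year boundary and
-- needs an OR rather than BETWEEN.
SELECT extract(year FROM m.taken) AS yr, avg(m.amount)
FROM measurement m
WHERE extract(year FROM m.taken) BETWEEN 1950 AND 1970
  AND (extract(doy FROM m.taken) >= 356
       OR extract(doy FROM m.taken) <= 81)
GROUP BY yr;
```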
2010/5/25 Joachim Worringen:
> On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
>>
>> temporary tables are handled pretty much like regular tables. The
>> magic happens at the schema level: a new schema is set up for each
>> connection, so that it can access its own temporary tables.
>> Temporary tables a
I'm using PostgreSQL 9.0 beta 1. I've got the following table definition:
# \d parts_2576
     Table "public.parts_2576"
 Column |  Type  | Modifiers
--------+--------+-----------
On 28 May 2010 19:54, Tom Lane wrote:
> Thom Brown writes:
>> I get this:
>
>> Limit (cost=0.00..316895.11 rows=400 width=211) (actual
>> time=3.880..1368.936 rows=400 loops=1)
>> -> GroupAggregate (cost=0.00..41843621.95 rows=52817 width=211)
>> (act
On 31 March 2010 15:23, Bruce Momjian wrote:
> James Mansion wrote:
>> Hannu Krosing wrote:
>> > Pulling the plug should not corrupt a postgreSQL database, unless it was
>> > using disks which lie about write caching.
>> >
>> Didn't we recently put the old wives' 'the disks lied' tale to bed in
>>
On 1 July 2010 06:19, Srikanth Kata wrote:
>
> Please tell me what is the best way to optimize this query
>
> select
> s.*,a.actid,a.phone,d.domid,d.domname,d.domno,a.actno,a.actname,p.descr
> as svcdescr from vwsubsmin s inner join packages p on s.svcno=p.pkgno inner
> join
> account a on a.actno
On 12 July 2010 14:11, Rob Wultsch wrote:
> On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer
> wrote:
>> In response to atul.g...@globaldatapoint.com :
>>> Hi,
>>>
>>>
>>>
>>> I need to log the start and end time of the procedures in a table. But the
>>> start and end time are the same. This is how I r
On 15 July 2010 15:41, Patrick Donlin wrote:
> I have two servers with equal specs, one of them running 8.3.7 and the new
> server running 8.4.4. The only tweak I have made from the default install
> (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both
> servers are running 64-bi
The last_value function is a window
function. Give it a try.
--
Thom Brown
Registered Linux user: #516935
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
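The last_value hint above might be used like this sketch ("t", "id" and "val" are placeholder names; note the explicit frame clause, since the default frame stops at the current row and would make last_value return the current row's value):

```sql
SELECT id, val,
       last_value(val) OVER (
         ORDER BY id
         ROWS BETWEEN UNBOUNDED PRECEDING
                  AND UNBOUNDED FOLLOWING
       ) AS final_val
FROM t;
```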
On 10 August 2010 17:06, Thom Brown wrote:
> On 10 August 2010 17:03, Matthew Wakeling wrote:
>> On Tue, 10 Aug 2010, Thomas Kellerer wrote:
>>>
>>> No. It's built in (8.4) and it's called Windowing functions:
>>> http://www.postgresql.org
Add an index to your views column to prevent
the need for a sequential scan to sort. Also, ILIKE won't be able to
use an index, so if you wish to match against title, you may wish to
change your query to use:
WHERE lower(title) LIKE
And then create an index on lower(title).
Regards
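The lower(title) suggestion might look like this sketch (the "posts" table name is assumed; in a non-C locale the index needs the text_pattern_ops operator class for LIKE to use it, and only left-anchored patterns qualify):

```sql
CREATE INDEX posts_lower_title_idx
    ON posts (lower(title) text_pattern_ops);

SELECT * FROM posts
WHERE lower(title) LIKE 'postgres%';  -- left-anchored, so indexable
```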
the list.
>
I don't think pgAdmin will display indexes created by primary keys;
it only shows indexes where indisprimary is false.
this thread:
> http://www.jitterbit.com/PhpBB/viewtopic.php?f=5&t=869
>
> The specs of postres environment are:
>
> Windows XP SP3
> PostgreSQL 9.0.0, compiled by Visual C++ build 1500, 32-bit
> pgAdmin III 1.12.0 (Sep17 2010, rev: REL-1_12_0)
> PostGIS 2.0SVN
>
26 loops=1)
> Total runtime: 85875.591 ms
> (13 rows)
If you look at your latest explain, it shows that it's merging the
results of a full sequential scan of both crm and crm_active. Is
crm_active a child table of crm?
Do you no longer have the index "crm_pkey" on the parent tab
full_page_writes (i.e. running with scissors).
Also depends on what you mean by "as fast as possible". Fast at doing
what? Bulk inserts, selecting from massive tables?
--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
On 5 November 2010 11:14, Thom Brown wrote:
> On 5 November 2010 10:59, A B wrote:
>
>> Hi there.
>>
>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
>> care for your data (you accept 100% data loss and data corruption if any
>> error sho
> > The server will just boot, load data, run, hopefully not crash but if
> > it would, just start over with load and run.
>
> If you want fast read queries then changing
> fsync/full_page_writes/synchronous_commit won't help you.
Yes, those will be for write-performance
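For reference, the settings under discussion are plain postgresql.conf switches; a sketch of a throw-away-data configuration (every one of these risks losing or corrupting data on a crash):

```
fsync = off                 # skip forcing WAL to disk
synchronous_commit = off    # commit returns before WAL is flushed
full_page_writes = off      # no torn-page protection
```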
tain? For instance, if you have 10 columns each returning massive XML
documents, each hundreds of megs, the bottleneck would be I/O bandwidth.
lified and also that
> there are many implementation details and difficulties that I am not
> aware.
>
> I strongly believe that the outcome of the discussion regarding this
> issue will be helpful.
>
>
Which version of PostgreSQL are you basing this on?
esql.org/docs/8.3/static/release-8-3.html#AEN87319
because the ORDER BY and LIMIT would need to be applied to the
>> subqueries and then re-applied after the APPEND,
>
> PostgreSQL 9.1 will know how to do this, FWIW.
Out of curiosity, what was the commit for this?
On 2 March 2011 19:52, Robert Haas wrote:
> On Wed, Mar 2, 2011 at 9:11 AM, Thom Brown wrote:
>> On 2 March 2011 19:38, Robert Haas wrote:
>>> On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen
>>> wrote:
>>>> I also just noticed that an ORDER BY x LIMIT
2009/10/14 Scott Marlowe:
>
> If what you're trying to do is emulate a real world app which randomly
> grabs rows, then you want to setup something ahead of time that has a
> pseudo random order and not rely on using anything like order by
> random() limit 1 or anything like that. Easiest way is
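A sketch of the "set it up ahead of time" idea (table and column names are illustrative):

```sql
-- Pre-assign a pseudo-random ordering once, then index it.
ALTER TABLE items ADD COLUMN randval integer;
UPDATE items SET randval = (random() * 2000000000)::integer;
CREATE INDEX items_randval_idx ON items (randval);

-- A cheap "random row": seek to a random point in the ordering
-- instead of sorting the whole table with ORDER BY random().
SELECT * FROM items
WHERE randval >= (random() * 2000000000)::integer
ORDER BY randval
LIMIT 1;
```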
2009/11/20 Richard Neill
>
>> Greg Williamson wrote:
>>
>>> Richard --
>>>
>>> You might post the results of "EXPLAIN ANALYZE <query>;" ... be
>>> sure to run it in a transaction if you want to be able to roll it back. Perhaps
>>> try "EXPLAIN <query>;" first as it is faster, but EXPLAIN ANALYZE shows
>>> what t
2009/11/20 Richard Neill
>
>
> Thom Brown wrote:
> >
>
>> It looks like your statistics are way out of sync with the real data.
>>
>> > Nested Loop (cost=885367.03..1123996.87 rows=8686 width=12) (actual
>> time=248577.879..253168.466 rows=347308 l
2009/11/20 Richard Neill
>
>
> It might also help if you posted your postgresql.conf too.
>>
>
> Below (have removed the really non-interesting bits).
>
> Thanks,
>
> Richard
>
>
> I can't actually see anything in your config that would cause this problem.
:/
As for seeing the progress of an upd
2009/11/22 Jonathan Blitz
> I have a table with a number of columns.
>
> I perform
>
> Select *
> from table
> order by a,b
>
> There is an index on a,b which is clustered (as well as indexes on a and b
> alone).
> I have issued the cluster and analyze commands.
>
> Nevertheless, PostgreSQL perf
2009/11/23 Faheem Mitha
>
> Hi everybody,
>
> I've got two queries that need optimizing. Actually, there are others, but
> these are pretty representative.
>
> You can see the queries and the corresponding plans at
>
> http://bulldog.duhs.duke.edu/~faheem/snpdb/opt.pdf
>
> or
>
> http://bulldog.
2009/11/24 Luca Tettamanti
> On Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin
> wrote:
> > You may want to consider using partitioning. That way you can drop the
> > appropriate partition and never have the overhead of a delete.
>
> Hum, I don't think it's doable in my case; the partitioning is
2009/12/7 Kevin Grittner
> Ben Brehmer wrote:
>
> > -7.5 GB memory
> > -4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units
> >each)
> > -64-bit platform
>
> What OS?
>
> > (PostgreSQL 8.1.3)
>
> Why use such an antiquated, buggy version? Newer versions are
> faster.
>
> -Kevin
>
2009/12/8 Schmitz, David
> Hi Andres,
>
> EXPLAIN ANALYZE
> select ah.ORDER8_ID, ah.BUILTUP_ID, rl.LINK_ID, la.SIDE,
>rl.ROAD_NAME_ID, rl.LEFT_ADDRESS_RANGE_ID,
> rl.RIGHT_ADDRESS_RANGE_ID,
>rl.IS_EXIT_NAME, rl.EXPLICATABLE, rl.IS_JUNCTION_NAME,
>
Had a quick look at a benchmark someone put together of MySQL vs PostgreSQL,
and while PostgreSQL is generally faster, I noticed the bulk delete was very
slow: http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html
Is this normal?
Thom
On 9 February 2010 21:46, Dimi Paun wrote:
> From what I've read on the net, these should be very similar,
> and should generate equivalent plans, in such cases:
>
> SELECT DISTINCT x FROM mytable
> SELECT x FROM mytable GROUP BY x
>
> However, in my case (postgresql-server-8.1.18-2.el5_4.1),
> t
On 18 February 2010 11:55, AI Rumman wrote:
> A "not like" operation does not use an index.
>
> select * from vtiger_contactscf where lower(cf_1253) not like
> lower('Former%')
>
> I created an index on lower(cf_1253).
>
> How can I ensure index usage in a not like operation?
> Anyone please help.
>
How ma
> On Thu, Feb 18, 2010 at 6:00 PM, Thom Brown wrote:
>>
>> On 18 February 2010 11:55, AI Rumman wrote:
>> > A "not like" operation does not use an index.
>> >
>> > select * from vtiger_contactscf where lower(cf_1253) not like
>> >
On 18 February 2010 12:18, A. Kretschmer
wrote:
> In response to Thom Brown :
>> On 18 February 2010 11:55, AI Rumman wrote:
>> > A "not like" operation does not use an index.
>> >
>> > select * from vtiger_contactscf where lower(cf_1253) not like
>
Seq Scan on indextest (cost=0.00..143386.48 rows=5606 width=9)
(actual time=164.321..1299.794 rows=8000 loops=1)
Filter: (stuff = 'bark'::text)
Total runtime: 1300.267 ms
(3 rows)
The index doesn't get used. There's probably a logical explanation,
which is what I'm curi
On 6 April 2011 05:44, Tom Lane wrote:
> Thom Brown writes:
>> The index doesn't get used. There's probably a logical explanation,
>> which is what I'm curious about.
>
> Er ... it's broken?
>
> It looks like the index predicate expression i
On 6 April 2011 00:02, Kenneth Marshall wrote:
> The stats seem off. Are you certain that an analyze has run?
>
> Cheers,
> Ken
>
Yes, an ANALYZE was definitely run against the table.
On 7 April 2011 07:37, Tom Lane wrote:
> Thom Brown writes:
>> On 6 April 2011 05:44, Tom Lane wrote:
>>> It looks like the index predicate expression isn't getting the right
>>> collation assigned, so predtest.c decides the query doesn't imply the
>>&
On 7 April 2011 08:10, Thom Brown wrote:
> On 7 April 2011 07:37, Tom Lane wrote:
>> Thom Brown writes:
>>> On 6 April 2011 05:44, Tom Lane wrote:
>>>> It looks like the index predicate expression isn't getting the right
>>>> collation assigned
Would an index containing data records make sense?
Yes, it's called a covering index, where the data required to produce
results for the query are entirely contained in the index. That
should be hopefully coming in 9.2.
See http://wiki.postgresql.org/wiki/Index-only_scans
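A minimal sketch of the covering-index idea (table and column names assumed): with index-only scans, a query that reads only indexed columns can avoid visiting the heap for all-visible pages.

```sql
CREATE INDEX orders_covering_idx
    ON orders (customer_id, order_date);

-- Both output columns live in the index, so with index-only scans
-- (9.2+) the table itself need not be touched.
SELECT customer_id, order_date
FROM orders
WHERE customer_id = 42;
```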
IMMUTABLE is only appropriate when there is no access to table
data from within the function. An example of IMMUTABLE functions
would be mathematical operations, where only the inputs and/or
function constants are used to produce a result.
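A minimal example of a function that genuinely qualifies as IMMUTABLE, since its result depends only on its argument and built-in constants:

```sql
-- Area of a circle: same input always yields the same output,
-- and no table data or session state is consulted.
CREATE FUNCTION circle_area(double precision)
RETURNS double precision AS $$
  SELECT pi() * $1 * $1;
$$ LANGUAGE sql IMMUTABLE;
```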
On 22 February 2012 23:50, Alessandro Gagliardi wrote:
> I have a database where I virtually never delete and almost never do
> updates. (The updates might change in the future but for now it's okay to
> assume they never happen.) As such, it seems like it might be worth it to
> set autovacuum=off
On 23 February 2012 17:35, Alessandro Gagliardi wrote:
> I should have been more clear. I virtually never delete or do updates, but I
> insert a lot. So the table does change quite a bit, but only in one
> direction.
The same thing applies. VACUUM cleans up dead tuples, which INSERTs
don't create.
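If disabling autovacuum is still wanted for a single insert-only table, it can be done per table rather than globally (the "events" table name is assumed); a periodic manual ANALYZE is then still needed so the planner keeps up-to-date statistics:

```sql
ALTER TABLE events SET (autovacuum_enabled = off);
ANALYZE events;  -- run periodically to refresh planner statistics
```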
Hi all,
I've created a test table containing 21 million random dates and
times, but I get wildly different results when I introduce a
functional index then ANALYSE again, even though it doesn't use the
index:
postgres=# CREATE TABLE test (id serial, sampledate timestamp);
CREATE TABLE
postgres=#
On 18 October 2012 17:11, Thom Brown wrote:
> Hi all,
>
> I've created a test table containing 21 million random dates and
> times, but I get wildly different results when I introduce a
> functional index then ANALYSE again, even though it doesn't use the
> index:
>
On 18 October 2012 17:24, Peter Geoghegan wrote:
> On 18 October 2012 17:11, Thom Brown wrote:
>> The estimate is down to almost a 10th of what it was before. What's going
>> on?
>
> Even though the index isn't used, the pg_statistic entries that the
>
On 18 October 2012 17:44, Tom Lane wrote:
> Thom Brown writes:
>> And as a side note, how come it's impossible to get the planner to use
>> an index-only scan to satisfy the query (disabling sequential and
>> regular index scans)?
>
> Implementation restriction -
On 18 October 2012 18:00, Peter Geoghegan wrote:
> On 18 October 2012 17:52, Tom Lane wrote:
>> I forgot to mention that there is a klugy workaround: add the required
>> variable(s) as extra index columns. That is,
>>
>> create index i on t (foo(x), x);
>
> Is there a case to be made for
On 18 October 2012 17:52, Tom Lane wrote:
> Thom Brown writes:
>> On 18 October 2012 17:44, Tom Lane wrote:
>>> Thom Brown writes:
>>>> And as a side note, how come it's impossible to get the planner to use
>>>> an index-only scan to satisfy t
On 4 April 2014 17:29, Nicolas Paris wrote:
> Hello,
>
> My question is about multiprocess and materialized View.
> http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
> I (will) have something like 3600 materialised views, and I would like to
> know the way to refresh them i
On 4 April 2014 20:49, PARIS Nicolas wrote:
> Thanks,
>
> "The only thing that immediately comes to mind would be running a
> rather hacky DO function in 4 separate sessions:"
> You mean 8 sessions I guess.
Yes, typo.
> 8 separate sessions?
> Have you any idea how to manage sessions? Is it po
On 4 April 2014 21:07, PARIS Nicolas wrote:
> Ok thanks,
>
> And what about triggers? 8 triggers based on the same event won't be
> multithreaded?
I'm not clear on how triggers come into this. You can't have triggers
on materialized views, and they don't fire triggers on tables or views
that th
On 4 April 2014 21:26, PARIS Nicolas wrote:
> this postgres documentation :
> http://www.postgresql.org/docs/9.3/static/ecpg-connect.html
> says it is actually possible to manage connection in C stored procedure.
>
> I may be wrong...
That page doesn't refer to triggers at all, so I'm still not s