Carlo Stonebanks wrote:
> The whole topic of messing with stats makes my head spin but I am concerned
> about some horridly performing queries that have had bad row estimates and
> others which always choose seq scans when indexes are available. Reading up
> on how to improve planner estimates,
On a database (PostgreSQL 8.2.4 on 64-bit Linux 2.6.18 on 8 AMD Opterons)
that is under high load, I observe the following:
- About 200 database sessions concurrently issue queries, most of them small,
but I have found one that touches 38000 table and index blocks.
- "vmstat" shows that CPU time is divided between "idle" and "iowait",
  with user and sys time practically zero.
Tom Lane wrote:
>> On a database (PostgreSQL 8.2.4 on 64-bit Linux 2.6.18 on 8 AMD Opterons)
>> that is under high load, I observe the following: ...
>> - "vmstat" shows that CPU time is divided between "idle" and "iowait",
>> with user and sys time practically zero.
>> - "sar" says that the disk
Laurent Raufaste wrote:
> I have a big PG server dedicated to serve only SELECT queries.
> The database is updated permanently using Slony.
>
> [...] load is huge.
>
> In order to locate the problem, I stopped Slony (no updates anymore),
> mounted the database and index partitions with the sync option
Laurent Raufaste wrote:
> The problem was that the optimiser didn't know how to run the queries
> well and used millions of tuples for simple queries. For each tuple
> used it was updating some bit in the table file, resulting in a huge
> writing activity to that file.
Good that you solved your problem.
This refers to the performance problem reported in
http://archives.postgresql.org/pgsql-performance/2008-04/msg00052.php
After some time of trial and error we found that changing the I/O scheduling
algorithm to "deadline" improved I/O performance by a factor 4 (!) for
this specific load test.
It
Gregory Stark wrote:
>> After some time of trial and error we found that changing the I/O scheduling
>> algorithm to "deadline" improved I/O performance by a factor 4 (!) for
>> this specific load test.
>
> What was the algorithm before?
The default algorithm, CFQ I think it is.
Yours,
Laurenz Albe
Vlad Arkhipov wrote:
> I found strange issue in very simple query.
You forgot to mention what your problem is.
Yours,
Laurenz Albe
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
I've tried to communicate to the list admin address about this, so far
without any reaction.
Sorry to waste bandwidth here, but I don't know where else to turn:
Whenever I post to the -performance list, I get spammed by a
challenge-response bot from [EMAIL PROTECTED]:
> The email message sent to [
Antonio Perez wrote:
[wonders why his online backup / recovery test didn't work]
> 1. A PostgreSQL instance is created
>
> 2. A directory $PGDATA/walback is created where the old WALs will be
> stored
>
> 3. A variable $PGDATA2 is exported, which is the location of the backup of the
Gauri Kanekar wrote:
> Is it possible to put a statement timeout at user level?
> If I have a user like 'guest', can I put a statement
> timeout on it?
If only all problems were that easily solved!
ALTER ROLE guest SET statement_timeout = 10000;
This will cause all statements that run longer than 10 seconds to be
canceled with an error (the value is in milliseconds).
> If that's what it looks like your server is running just fine. Load
> of 1.31, 85+% idle, no wait time. Or is that top and vmstat output
> from when the server is running fine?
Don't forget that there are 8 CPUs, and the backend will only run on one
of them.
But I concur that this seems ok.
H
kiki wrote:
> The number of rows returned by the query varies, right now is:
>
> 49 row(s)
> Total runtime: 3,965.718 ms
> The table currently has 971582 rows.
>
> But the problem is that when database server is restarted everything works
> fine and fast. No heavy loads of the processor and as time
kiki wrote:
> First I have increased shared_buffers from 2000 to 8000. Since the
> postgresql is on Debian I had to increase SHMMAX kernel value.
> Everything is working much faster now.
Good to hear that the problem is gone.
> There is still heavy load of postmaster process (up to 100%) for a si
Please try to avoid top-posting where inappropriate.
kiki wrote:
>>> There is still heavy load of postmaster process (up to 100%) for a simple
>>> query
>>>
>>> EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND
>>> confirmed='false' AND dismissed='false' ORDER BY date DESC, time
kiki wrote:
> The speed of the query is not a problem but the strange thing is the
> processor load with postmaster when the query is executed.
> I don't know how to reduce the processor load.
Did you try without the ORDER BY?
Where are the execution plans?
Yours,
Laurenz Albe
kiki wrote:
> I expanded work_mem to 256 Mb and created index on table
>
> create index xxx on system_alarm (id_camera, date, time) where confirmed =
> 'false' and dismissed = 'false';
That index is not used for the query (as could be expected).
You had better remove it.
> the processor load now exe
Rogatzki Rainer wrote:
> I'm having problems with the following bad performing select-statement
> in a trigger-function (on update before):
>
> ...
> for c in
> select id_country, sum(cost) as sum_cost
> from costs
> where id_user = p_id_user
> and id_state = 1
> and date(request) between p_begin and p_until
Rogatzki Rainer wrote:
> > > in a trigger-function (on update before):
> > >
> > > ...
> > > for c in
> > > select id_country, sum(cost) as sum_cost
> > > from costs
> > > where id_user = p_id_user
> > > and id_state = 1
> > > and date(request) between p_begin and p_until
>
Jaime Casanova wrote:
> we have a some bad queries (developers are working on that), some of
> them run in 17 secs and that is the average but when analyzing logs i
> found that from time to time some of them took up to 3 mins (the same
> query that normally runs in 17secs).
>
> so my question is:
roopasatish wrote:
> I have an issue with the add foreign key constraint which
> goes for waiting and locks other queries as well.
>
> ALTER TABLE ONLY holding_positions ADD CONSTRAINT
> holding_positions_stock_id_fkey FOREIGN KEY (stock_id)
> REFERENCES stocks (stock_id) MATCH SIMPLE
>
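Adding the constraint has to lock both tables while it checks every existing row. On PostgreSQL 9.1 and later (an assumption relative to this thread), the check can be split off so the strong lock is held only briefly; a sketch using the names from the post:

```sql
-- Add the constraint without checking existing rows; this still locks
-- the tables, but only very briefly.
ALTER TABLE holding_positions
    ADD CONSTRAINT holding_positions_stock_id_fkey
    FOREIGN KEY (stock_id) REFERENCES stocks (stock_id) NOT VALID;

-- Check the existing rows afterwards (on recent versions this takes a
-- weaker lock that does not block normal reads and writes).
ALTER TABLE holding_positions
    VALIDATE CONSTRAINT holding_positions_stock_id_fkey;
```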
Dimitri wrote:
> I've run several tests before and now going in depth to understand if
> there is nothing wrong. Due to such a single query time difference InnoDB
> is doing a 2-3 times better TPS level compared to PostgreSQL..
Why don't you use MySQL then?
Or tune PostgreSQL?
Yours,
Laurenz Albe
ramasubramanian wrote:
> How to insert or update a file in a table using the query in postgres
> CREATE TABLE excel_file_upload
> (
> user_id integer,
> excel_file bytea
> )
>
> example
> insert into excel_file_upload values(1,file1)
>
> file1 can be any file *.doc,*.xls
> How i ca
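The question above is cut off; one server-side way to load a file into a bytea column is pg_read_binary_file (a sketch: it requires PostgreSQL 9.1 or later and superuser rights, the file must live on the database server, and the path here is only an example):

```sql
INSERT INTO excel_file_upload (user_id, excel_file)
VALUES (1, pg_read_binary_file('/tmp/file1.xls'));
```

From a client application, the usual route is instead a parameterized INSERT with the file contents passed as a binary parameter.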
Alberto Dalmaso wrote:
[...]
> in the explanation I'll see that the db use nasted loop.
[...]
Sorry for the remark off topic, but I *love* the term
"nasted loop". It should not go to oblivion unnoticed.
Yours,
Laurenz Albe
Matthew Hartman wrote:
> To determine the available slots, the algorithm finds the earliest slot
> that has an available chair and a count of the required concurrent
> intervals afterwards. So a 60 minute regimen requires 12 concurrent
> rows. This is accomplished by joining the table on itself. A
Mike Ivanov wrote:
> Please help me to make a decision on how to manage users.
>
> For some reason it is easier in the project I'm working on to split data
> by schemes and assign them to Postgres' users (I mean those created with
> CREATE USER) rather than support 'owner' fields referring to a
Robert Haas wrote:
> > You cannot keep the connection and change users.
> > A change of database user always means a new connection and a new
> > backend process.
>
> I don't think this is true. You can use SET SESSION AUTHORIZATION,
> right?
You are right, I overlooked that.
It is restricted to superusers, though.
Eric Haszlakiewicz wrote:
> I noticed a bit of a performance regression in embedded sql queries when
> moving from the client libraries in version 8.2.4 to 8.3.7. My
> application does a whole lot of queries, many of which don't return any
> data. When we moved to the new libraries the time of ru
Eric Haszlakiewicz wrote:
>> The log is misleading; the first statement is not really executed,
>> it is only prepared (parsed). If you set the log level to DEBUG2, it
>> will look like:
>
> Yes, but it's still incurring the overhead of sending the message to the
> server, isn't it?
Yes.
>> Mayb
Raji Sridar wrote:
> We use a typical counter within a transaction to generate
> order sequence number and update the next sequence number.
> This is a simple next counter - nothing fancy about it. When
> multiple clients are concurrently accessing this table and
> updating it, under extremely
SHIVESH WANGRUNGVICHAISRI wrote:
> The main question is: how do I configure Postgresql such that
> it's most efficient for storing large BLOB at a high-rate?
Referring to what you wrote on the web site you quoted,
I would guess that neither tuning WAL nor tuning logging
will have much effect.
My
waldomiro wrote:
> I need to know how much the postgres is going to disk to get
> blocks and how much it is going to cache? Which is the
> statistics table and what is the field that indicates blocks
> read from the disk and from the memory cache?
The view pg_statio_all_tables will show you the number of blocks read
from disk and the number found in the buffer cache.
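A sketch of such a query (a hit_ratio close to 1 means the table is served mostly from shared buffers; note that heap_blks_read also counts blocks that came from the OS file system cache rather than from physical disk):

```sql
SELECT relname,
       heap_blks_read,   -- blocks not found in shared buffers
       heap_blks_hit,    -- blocks found in shared buffers
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_all_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```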
Robert Haas wrote:
> Just for kicks I tried this out and the behavior is as the OP
> describes: after a little poking around, it sees that the INSERT grabs
> a share-lock on the referenced row so that a concurrent update can't
> modify the referenced column.
>
> It's not really clear how to get ar
Robert Haas wrote:
[explanation of how Oracle locks on Updates involving foreign keys]
>
> Yeah, that seems odd. I assume they know what they're doing; they're
> Oracle, after all. It does sound, too, like they have column level
> locks based on your comment about "an EXCLUSIVE lock on the modif
Greg Smith wrote:
> Recently I've made a number of unsubstantiated claims that the deadline
> scheduler on Linux does bad things compared to CFQ when running
> real-world mixed I/O database tests. Unfortunately every time I do one
> of these I end up unable to release the results due to client
I wrote:
> > One idea that occurs to me is that it might be possible to add to PG
> > some tuple lock modes that are intended to cover updates that don't
> > touch indexed columns. So, say:
> >
> > SHARED NONINDEX - conflicts only with EXCLUSIVE locks
> > SHARED - conflicts with EXCLUSIVE or EXCL
We selected a 30MB bytea with psql connected with
"-h localhost" and found that it makes a huge
difference whether we have SSL encryption on or off.
Without SSL the SELECT finished in about a second,
with SSL it took over 23 seconds (measured with
\timing in psql).
During that time, the CPU is at 100%
Heikki Linnakangas wrote:
>> We selected a 30MB bytea with psql connected with
>> "-h localhost" and found that it makes a huge
>> difference whether we have SSL encryption on or off.
>>
>> Without SSL the SELECT finished in about a second,
>> with SSL it took over 23 seconds (measured with
>> \tim
Merlin Moncure wrote:
>> We selected a 30MB bytea with psql connected with
>> "-h localhost" and found that it makes a huge
>> difference whether we have SSL encryption on or off.
>> Without SSL the SELECT finished in about a second,
>> with SSL it took over 23 seconds (measured with
Marti Raudsepp wrote:
>> Disabling OpenSSL compression in the source (which
>> is possible since OpenSSL 1.0.0) does not give me any performance
>> improvement.
>
> If it doesn't give you any performance improvement then you haven't
> disabled compression. Modern CPUs can easily saturate 1 GbitE w
Heikki Linnakangas wrote:
>>>> Disabling OpenSSL compression in the source (which
>>>> is possible since OpenSSL 1.0.0) does not give me any performance
>>>> improvement.
>>> If it doesn't give you any performance improvement then you haven't
>>> disabled compression. Modern CPUs can easily satura
Marti Raudsepp wrote:
>> I can't get oprofile to run on this RHEL6 box, it doesn't record
>> anything, so all I can test is total query duration.
> Maybe this helps you with OProfile?
>
> http://people.planetpostgresql.org/andrew/index.php?/archives/224-The-joy-of-Vx.html
Dang, you're right, I w
Marti Raudsepp wrote:
>> Disabling OpenSSL compression in the source (which
>> is possible since OpenSSL 1.0.0) does not give me any performance
>> improvement.
> If it doesn't give you any performance improvement then you haven't
> disabled compression. Modern CPUs can easily saturate 1 GbitE wit
Rafael Martinez wrote:
> I am wondering why the time reported by \timing in psql is not the same
> as the time reported by duration in the log file when log_duration or
> log_min_duration_statement are on? I cannot find any information about
> this in the documentation.
\timing measures the time
Martin Grotzke wrote:
> we want to see if we can gain better performance with our postgresql
> database. In the last year the amount of data grew from ~25G to now
> ~140G and we're currently developing a new feature that needs to get
> data faster from the database. The system is both read and wr
Martin Grotzke wrote:
>> You could try different kernel I/O elevators and see if that improves
>> something.
>>
>> I have made good experiences with elevator=deadline and elevator=noop.
> Ok, great info.
>
> I'm not sure at which device to look honestly to check the current
> configuration.
>
>
Could somebody confirm or refute the following statements, please?
- The statistics gathered by ANALYZE are independent of the tablespace
containing the table.
- The tablespace containing the table has no influence on query planning
unless seq_page_cost or random_page_cost has been set on the
Ivan Voras wrote:
> I have a SQL function (which I've pasted below) and while testing its
> code directly (outside a function), this is the "normal", default plan:
>
> http://explain.depesz.com/s/vfP (67 ms)
>
> and this is the plain with enable_seqscan turned off:
>
> http://explain.depesz.com/
Cédric Villemain wrote:
> > - The statistics gathered by ANALYZE are independent of the tablespace
> > containing the table.
>
> yes.
>
> > - The tablespace containing the table has no influence on query planning
> > unless seq_page_cost or random_page_cost has been set on the
> > tablespace.
Josh Berkus wrote:
> On 6/20/12 3:27 PM, Midge Brown wrote:
>> I need to move a postgres 9.0 database -- with tables, indexes, and wals
>> associated with 16
>> tablespaces on 12 logical drives -- to an existing raid 10 drive in another
>> volume on the same server.
>> Once I get the data off the
Yan Chunlu wrote:
> I have grabbed one day's slow query log and analyzed it with pgfouine;
> to my surprise, the slowest query is just a simple select statement:
>
> select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,
> diggcontent_data_message.value, diggcontent_data_message.kind
Hermann Matthes wrote:
> I want to implement a "paged query" feature, where the user can enter in
> a dialog how many rows he wants to see. After displaying the first page
> of rows, he can push a button to display the next/previous page.
> On database level I could use "limit" to implement th
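LIMIT with OFFSET rescans and throws away all skipped rows, so late pages get slower and slower. An alternative is keyset ("seek") pagination; a sketch with hypothetical table and column names, assuming a unique, indexed sort key:

```sql
-- First page:
SELECT id, payload FROM items ORDER BY id LIMIT 50;

-- Next page: remember the last id shown and seek past it.
-- This stays fast regardless of how deep the user pages.
SELECT id, payload FROM items
WHERE id > 12345          -- last id of the previous page
ORDER BY id LIMIT 50;
```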
Stanislaw Pankevich wrote:
> PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the
> fastest way to clean each
> non-empty table and reset unique identifier column of empty ones
>
> I wonder, what is the fastest way to accomplish this kind of task in
> PostgreSQL. I am in
Stanislaw Pankevich wrote:
>>> PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the
>>> fastest way to clean each
>>> non-empty table and reset unique identifier column of empty ones
>>>
>>> I wonder, what is the fastest way to accomplish this kind of task in
>>> PostgreS
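For emptying small tables quickly, the usual answer is TRUNCATE, which is much faster than DELETE because it does not scan the rows; the table names below are placeholders:

```sql
-- Empties all listed tables in one statement, resets sequences owned by
-- their columns (e.g. serial columns) to their start values, and also
-- truncates tables that reference these via foreign keys.
TRUNCATE TABLE tbl1, tbl2, tbl3 RESTART IDENTITY CASCADE;
```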
Yan Chunlu wrote:
> I have logged one day's data and found the checkpoints are rather
> frequent (detail: https://gist.github.com/3088338). Not sure if it is
> normal, but the average time of a checkpoint is about 100sec~200sec,
> it seems related with my settings:
>
> 574 checkpoint_segments = 64
> 575
codevally wrote:
> I have a question regarding PostgreSQL 9.1 indexing.
>
> I have a table and want to create an index for a column, and I want to
> store the data with time zone for that column. The questions are:
>
> 1. Can I create an index for a column which stores time stamp with time
> zone?
Jayadevan M wrote:
> I have a plpgsql function that takes a few seconds (less than 5) when
> executed from psql. The same function, when invoked from java via a
> prepared statement, takes a few minutes. There are a few queries in the
> function. Out of these, the first query takes input parameters f
Markus Innerebner wrote:
> I am doing some runtime experiments in my implementation, which is
> computing multi-modal range queries for a query point (if you want to
> know details check the website: www.isochrones.inf.unibz.it).
> The network is explored using the Dijkstra Shortest Path algorithm that s
Eileen wrote:
> I have written some Java code which builds a postgresql function. That
> function calls approximately 6 INSERT statements with a RETURNING
> clause. I recreate and re-run the function about 900,000 times. I use
> JDBC to execute these functions on postgresql 8.3 on Windows. When I t
Aryan Ariel Rodriguez Chalas wrote:
> I'm working with an application that connects to a remote server
> database using the "libpq" library over the internet, but making a
> simple query is really slow even though I've done PostgreSQL tuning and
> the tables are indexed, so I want to know:
>
> -Why is p
AI Rumman wrote:
> I executed the query:
> ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';
>
> The db is stuck. The entity table has 2064740 records;
>
> Watching locks:
[all locks are granted]
> Any idea for the db stuck?
To add the column with its non-null default, PostgreSQL has to modify all
rows in the table.
Jon Leighton wrote:
> I'm one of the developers of the Ruby on Rails web framework.
>
> In some situations, the framework generates an empty transaction block.
> I.e. we sent a BEGIN and then later a COMMIT, with no other queries in
> the middle.
>
> We currently can't avoid doing this, because a
henk de wit wrote:
> I'm using Postgres 9.1 on Debian Lenny and via a Java server (JBoss AS
> 6.1) I'm executing a simple "select ... for update" query:
>
> SELECT importing
> FROM customer
> WHERE id = :customer_id
> FOR UPDATE NOWAIT
>
> Once every 10 to 20 times P
John Nash wrote:
> We have been doing some testing with an ISD transaction and we had
> some problems that we posted here.
>
> The answers we got were very kind and useful but we couldn't solve the
> problem.
Could you refer to the threads so that you don't get the same advice again?
> We have
Strahinja Kustudic wrote:
>> I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and RAID10
>> 15K SCSI drives
>> which is running Centos 6.2 x64. This server is mainly used for
>> inserting/updating large amounts of
>> data via copy/insert/update commands, and seldom for running sele
Sam Wong wrote:
> I am investigating a performance issue involved with LIKE '%'
> on an index in a complex query with joins.
> Q1.
> SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'
>
> Q2.
> SELECT * FROM shipments WHERE shipment_id >= '12345678' AND
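Q2 is cut off above; the standard range-predicate rewrite of a prefix LIKE presumably looks like this (a sketch; the upper bound replaces the last character of the prefix with its successor, and unlike LIKE the range form can use an ordinary b-tree index without text_pattern_ops or the C collation):

```sql
SELECT * FROM shipments
WHERE shipment_id >= '12345678'
  AND shipment_id <  '12345679';   -- prefix with its last digit incremented
```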
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took for ever:
it took the system more than 80 minutes to replay 48 WAL files
and connect to
Alvaro Herrera wrote:
>> I am configuring streaming replication with hot standby
>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
>> PostgreSQL was compiled from source.
>>
>> It works fine, except that starting the standby took for ever:
>> it took the system more than 80 minutes
Andy wrote:
> I have been pulling my hair out over the last few days trying to get
> any useful performance out of the following painfully slow query.
> The query is JPA-created; I've just cleaned the aliases to make it
> more readable.
> Using 'distinct' or 'group by' delivers about the same result
>> On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz wrote:
>>> I am configuring streaming replication with hot standby
>>> with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
>>> PostgreSQL was compiled from source.
>>>
>>> It works f
Sean Woolcock wrote:
> I have a large (3 million row) table called "tape" that represents
> files, which I join to a small (100 row) table called "filesystem"
> that represents filesystems. I have a web interface that allows you
> to sort by a number of fields in the tape table and
Heikki Linnakangas wrote:
>> Why does WAL replay read much more than it writes?
>> I thought that pretty much every block read during WAL
>> replay would also get dirtied and hence written out.
>
> Not necessarily. If a block is modified and written out of the buffer
> cache before next checkpoint
k...@rice.edu wrote:
>>> If you do not have good random io performance log replay is nearly
>>> unbearable.
>>>
>>> also, what io scheduler are you using? if it is cfq change that to
>>> deadline or noop.
>>> that can make a huge difference.
>>
>> We use the noop scheduler.
>> As I said, an identic
AndyG wrote:
> A marginal improvement.
>
> http://explain.depesz.com/s/y63
That's what I thought.
Increasing the statistics for test_result.id_recipe_version
had no effect?
> I am going to normalize the table some more before partitioning.
How do you think that partitioning will help?
Yours,
AndyG wrote:
>> Increasing the statistics for test_result.id_recipe_version
>> had no effect?
> I increased the statistics in steps up to 5000 (with vacuum analyse) -
> seems to be as good as it gets.
>
> http://explain.depesz.com/s/z2a
Just out of curiosity, do you get a better plan with
enable
> But why? Is there a way to force the planner into this?
I don't know enough about the planner to answer the "why",
but the root of the problem seems to be the mis-estimate
for the join between test_result and recipe_version
(1348 instead of 21983 rows).
That makes the planner think that a neste
Pedro Jiménez Pérez wrote:
> Sent: Friday, November 02, 2012 1:14 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] help with too slow query
>
> Hello,
> I have this table definition:
> CREATE TABLE ism_floatvalues
> (
> id_signal bigint NOT NULL, -- indicates which signal it belongs to
K P Manoj wrote:
> I am facing a query performance issue on one of my test servers.
> How can I create an index with a table column name?
> EXPLAIN select xxx.* from xxx xxx where exists (select 1 from tmp
> where mdc_domain_reverse like xxx.reverse_pd || '.%');
> QUER
K P Manoj wrote:
> Please find the details of the table description:
>
> test=# \d xxx
>           Table "public.xxx"
>    Column   |  Type  | Modifiers
> ------------+--------+-----------
>  crawler_id
Eildert Groeneveld wrote:
> I am currently implementing a compressed binary storage scheme for
> genotyping data. These are basically vectors of binary data which may be
> megabytes in size.
>
> Our current implementation uses the data type bit varying.
>
> What we want to do is very simple: we
Wu Ming wrote:
> I had installed postgreSQL v9.2 in Windows XP SP3.
>
> My PC specs:
> Processor: Pentium Dual Core 2.09 GHz
> RAM: 2GB
>
> The postgreSQL is run as windows service (manual).
>
> The problem is the postgreSQL service uses a lot of memory and lags
> the OS if running for a long time
David Popiashvili wrote:
> I have a database with a few hundred million rows. I'm running the
> following query:
>
> select * from "Payments" as p
> inner join "PaymentOrders" as po
> on po."Id" = p."PaymentOrderId"
> inner join "Users" as u
> On u."Id" = po."UserId"
> INNER JOIN "Roles" as r
> on u
suhas.basavaraj12 wrote:
> This folder (temporary tablespace) is getting filled and its size
> increases during the day when there are lots of sorting operations.
> But after some time the data in it is deleted automatically. Can
> anyone explain what is going on?
Must be temporary files created by the sor
suhas.basavaraj12 wrote:
> Can I delete the content of this folder? I have observed a couple of
> times that this folder got cleaned automatically.
These files are in use and you should not delete them.
If you need them to go right now, cancel the queries that
create temporary files.
If there are any
classical_89 wrote:
> Hi everyone, I have a question. I have a table with large data (I used
> the bytea datatype and inserted binary content into the table), so
> Postgres gave me a TOAST table to store out-of-line values.
> Assume that my table is "tbl_test" and the toast table oid is 16816
AJ Weber wrote:
> What is the unit-of-measure used for default_statistics_target?
Number of entries in pg_stats.histogram_bounds or pg_stats.most_common_vals.
Yours,
Laurenz Albe
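So default_statistics_target = 100 means up to 100 most-common values and 100 histogram buckets per column. The target can also be raised for a single column; a sketch with placeholder table and column names:

```sql
-- Collect more detailed statistics for one problematic column:
ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 500;
ANALYZE mytable;

-- Inspect what was gathered:
SELECT most_common_vals, histogram_bounds
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'mycol';
```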
Tory M Blue wrote:
> My postgres db ran out of space. I have 27028 files in the pg_xlog directory.
> I'm unclear what happened; this has been running flawlessly for years.
> I do have archiving turned on and run an archive command every 10
> minutes.
>
> I'm not sure how to go about cleaning this
Jayadevan M wrote:
> If a table takes 100 MB while on disk, approximately how much space will it
> take in RAM/database
> buffer?
100 MB. A block in memory has the same layout as a block on disk.
Yours,
Laurenz Albe
Niels Kristian Schjødt wrote:
> I am experiencing a similar issue as the one mentioned in this post
> http://stackoverflow.com/questions/3100072/postgresql-slow-on-a-large-table-with-arrays-and-lots-of-
> updates/3100232#3100232
> However the post is written for a 8.3 installation, so I'm wondering
amul sul wrote:
> I am trying to run DBT5 to test performance of PG9.2.4,
>
> But execution failed due to undefined symbol: PQescapeLiteral error in
> /bh/bh.out
>
> Full error as follow:
[...]
> BrokerageHouseMain: symbol lookup error: BrokerageHouseMain: undefined
> symbol: PQescapeLiteral
>
Johan Loubser wrote:
> I am tasked with getting specs for a postgres database server for the
> core purpose of running moodle at our university.
> The main question is at the moment is 12core AMD or 6/8core (E Series)
> INTEL.
>
> What would be the most important metric in planning an enterprise
monalee_dba wrote:
> Eg. SELECT col1, col2, col3,col10 FROM table1;
>
> For the above query, if I don't mention an ORDER BY clause, then I want
> to know in which order the selected data will appear.
>
> Because I have huge size table, and when I applied ORDER BY col1, col2..in
> quer
kosalram Babu Chellappa wrote:
> We have a requirement to store images/documents with an average size of
> 1-2MB on PostgreSQL database.
> We have PostgreSQL 9.2.4 running on Red hat linux 64 bit. We decided to setup
> a stand alone postgreSQL
> server without streaming replication to host the i
desmodemone wrote:
> max is 1024mb.
That must be a typo.
It can surely be much higher.
Yours,
Laurenz Albe
Hans Drexler wrote:
> We are porting an application to PostgreSQL. The application already
> runs with DB2 (LUW version) and Oracle. One query in particular executes
> slower on Postgres than it does on other Database platforms, notably DB2
> LUW and Oracle. (Please understand, we are not comparin
Karl Denninger wrote:
> I've been doing a bit of benchmarking and real-world performance
> testing, and have found some curious results.
[...]
> The odd thing is that I am getting better performance with a 128k record
> size on this application than I get with an 8k one!
[...]
> What I am curio
I just learned that NFS does not use a file system cache on the client side.
On the other hand, PostgreSQL relies on the file system cache for performance,
because beyond a certain amount of shared_buffers performance will suffer.
Together these things seem to indicate that you cannot get good pe
Stephen Frost wrote:
> All that said, there has always been a recommendation of caution around
> using NFS as a backing store for PG, or any RDBMS..
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
Yours,
Laurenz Albe
John Melesky wrote:
>> I just learned that NFS does not use a file system cache on the client side.
>
> That's ... incorrect. NFS is cache-capable. NFSv3 (I think? It may have been
> v2) started sending
> metadata on file operations that was intended to allow for client-side
> caches. NFSv4 adde
Jeff Janes wrote:
>>> All that said, there has always been a recommendation of caution around
>>> using NFS as a backing store for PG, or any RDBMS..
>>
>> I know that Oracle recommends it - they even built an NFS client
>> into their database server to make the most of it.
>
> Last I h