On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:
> I have an application which stores a large amount of hex-encoded hash
> strings (nearly 100 GB of them), which means:
Why do you keep them hex encoded, and not use bytea?
I made a sample table with 1 million rows, looking like this:
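A minimal sketch of the bytea approach being suggested, with hypothetical table and column names: decode() turns hex text into bytea, roughly halving the storage, and encode() renders it back as hex when needed.
-- hypothetical schema: hashes(hash_hex text)
ALTER TABLE hashes ADD COLUMN hash_raw bytea;
UPDATE hashes SET hash_raw = decode(hash_hex, 'hex');
-- display as hex again on the way out:
SELECT encode(hash_raw, 'hex') FROM hashes LIMIT 10;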
On Tue, Apr 22, 2014 at 01:16:15AM +0000, Verghese, Riya wrote:
I am going to add a new column to a table for modify_date that needs
to be updated every time the table is updated. Is it better to just
update application code to set the modify_date to current_time, or
create a Before-Update
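The trigger variant has the advantage that no application code path can forget it. A minimal sketch, assuming a hypothetical table some_table with a modify_date column:
CREATE OR REPLACE FUNCTION set_modify_date() RETURNS trigger AS $$
BEGIN
    -- overwrite whatever the application sent
    NEW.modify_date := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER some_table_modify_date
    BEFORE UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE set_modify_date();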
On Sun, Dec 15, 2013 at 04:18:18PM +0000, Yuri Levinsky wrote:
Dear ALL,
I am running a PL/pgSQL procedure with SQL statements that take a long
time. I am able to see them in the log just after their completion. How
can I see the currently running SQL statement? I am able to see in
pg_stat_activity
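For reference, a sketch of the usual pg_stat_activity query (column names per 9.2+; older versions use procpid and current_query). Note it shows only the top-level statement, so individual statements inside a running PL/pgSQL function do not appear, just the outer function call:
SELECT pid, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY query_start;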
On Sun, Sep 29, 2013 at 02:09:07 -0700, Ken Tanzer wrote:
p.s., This script runs fine on my computer (Ubuntu 13.04), but on a Fedora
11 machine it dies with
pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'
pg_analyze_info.sh: line 57: syntax error: unexpected end
On Sat, Apr 06, 2013 at 09:59:16PM -0700, Nik Tek wrote:
Could someone tell me how to measure Postgres memory usage.
Is there a pg_* view to measure?
http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/
Best regards,
depesz
On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
When I disable nested loop, I get a cost of 2,535,992.34 which runs in
only 133,447.790 ms. We have run queries on our database with a cost
of 200K cost
On Tue, Sep 11, 2012 at 07:20:28PM +0600, AI Rumman wrote:
I have a table as follows:
I executed the query:
ALTER TABLE entity ADD COLUMN owner_type char(1) NOT NULL default 'U';
The db is stuck. The entity table has 2064740 records;
such an alter table has to rewrite the whole table. So it will
On Tue, Sep 11, 2012 at 07:55:24PM +0600, AI Rumman wrote:
I added the excel file for locks data.
well, it worked, but why didn't you just make it a text file, in notepad or
something like that?
I was surprised to see that while I was updating a single column value for
all records in a table,
On Tue, Sep 11, 2012 at 08:04:06PM +0600, AI Rumman wrote:
Table size is 1186 MB.
if it takes long, it just means that your IO is slow.
I split the command into three steps as you said, but the result was the same
during the update operation.
three? I was showing four steps, and one of them is usually
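The four steps were presumably along these lines (a sketch, using the entity table from this thread; the backfill in step 3 is best done in batches, e.g. by primary-key ranges):
-- 1. add the column as nullable, without a default: no table rewrite
ALTER TABLE entity ADD COLUMN owner_type char(1);
-- 2. set the default; this affects only future rows
ALTER TABLE entity ALTER COLUMN owner_type SET DEFAULT 'U';
-- 3. backfill existing rows (shown as one statement for brevity)
UPDATE entity SET owner_type = 'U' WHERE owner_type IS NULL;
-- 4. finally forbid NULLs (requires a scan, but no rewrite)
ALTER TABLE entity ALTER COLUMN owner_type SET NOT NULL;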
On Sat, Aug 11, 2012 at 12:15:11AM +0200, Strahinja Kustudić wrote:
Is there a way to make the autovacuum daemon more aggressive, since I'm not
exactly sure how to do that in this case? Would that even help? Is there
another way to remove this index bloat?
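Two directions commonly combined here (a sketch; the table, index, and threshold names are hypothetical): make autovacuum visit the table more often, and rebuild the already-bloated index concurrently, since plain vacuum never shrinks an existing index:
-- vacuum this table after ~2% of rows change instead of the default 20%
ALTER TABLE some_table SET (autovacuum_vacuum_scale_factor = 0.02);
-- rebuild a bloated index without blocking writes
CREATE INDEX CONCURRENTLY some_index_new ON some_table (some_column);
DROP INDEX some_index;
ALTER INDEX some_index_new RENAME TO some_index;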
On Mon, May 28, 2012 at 07:24:13PM +0100, Alejandro Carrillo wrote:
How can I recover a deleted row from a table that wasn't vacuumed?
I have PostgreSQL 9.1 on Windows XP/7.
http://www.depesz.com/2012/04/04/lets-talk-dirty/
Best regards,
depesz
On Mon, Oct 03, 2011 at 02:48:10PM -0300, Soporte @ TEKSOL S.A. wrote:
Hi,
I need help understanding an issue on a production database with a select
that takes more time than expected.
1- On a development database I ran the query (select) and I can see on
Explain Analyze
On Fri, Feb 04, 2011 at 03:46:35PM +0100, felix wrote:
directly after REINDEX and ANALYZE:
Aggregate (cost=62348.70..62348.71 rows=1 width=0) (actual
time=15830.000..15830.000 rows=1 loops=1)
-> Seq Scan on fastadder_fastadderstatus (cost=0.00..61613.16
rows=294216 width=0) (actual
On Mon, Jan 24, 2011 at 01:29:01PM -0500, Dimi Paun wrote:
Hi folks,
I have a table like so:
create table tagRecord (
uid varchar(60) primary key,
[bunch of other fields]
location varchar(32),
creationTS timestamp
);
create index
On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
)
the index definition is
CREATE INDEX PK_AT2
ON ABC
USING btree
(event, tableindex)
TABLESPACE sample;
Indexing the same column twice is useless.
On Tue, Nov 02, 2010 at 12:04:42PM +0100, Cédric Villemain wrote:
2010/11/2 hubert depesz lubaczewski dep...@depesz.com:
On Mon, Nov 01, 2010 at 02:57:56PM +0100, Cédric Villemain wrote:
CONSTRAINT tableindex_pkey PRIMARY KEY (tableindex)
)
the index definition is
CREATE INDEX
On Wed, Aug 18, 2010 at 03:51:22PM +0200, Clemens Eisserer wrote:
Hi,
Are indices for columns marked with PRIMARY KEY automatically generated by
postgresql, or do I have to do it manually?
The question might seem dumb; I ask because I remember from working with
MySQL that it generates indices
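Yes: PostgreSQL automatically creates a unique index to enforce every PRIMARY KEY (and UNIQUE) constraint; only things like foreign key columns need manual indexing. Easy to verify:
CREATE TABLE t (id int PRIMARY KEY);
SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 't';
-- returns "t_pkey" with a CREATE UNIQUE INDEX definition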
On Tue, Aug 10, 2010 at 04:40:16PM +0100, Matthew Wakeling wrote:
I'm trying to eke a little bit more performance out of an
application, and I was wondering if there was a better way to do the
following:
I am trying to retrieve, for many sets of rows grouped on a couple
of fields, the
On Thu, Jun 10, 2010 at 10:50:40AM -0700, Anne Rosset wrote:
Any advice on how to make it run faster?
First, let me ask a simple question - what runtime for this query will
be satisfactory for you?
Best regards,
depesz
On Thu, Jun 03, 2010 at 06:45:30PM -0700, Anj Adu wrote:
http://explain.depesz.com/s/kHa
can you please show us \d dev4_act_dy_fact_2010_05_t3 ?
depesz
On Mon, Apr 05, 2010 at 04:28:35PM +0200, Oliver Kindernay wrote:
Hi, I have a table with just one column named url (it's not a real url,
just a random string for testing purposes), type text. I have lots of
entries in it (it's dynamic, I add and remove them on the fly),
100 000 and more. I've
On Thu, Jan 07, 2010 at 01:38:41PM +0100, Lefteris wrote:
airtraffic=# EXPLAIN ANALYZE SELECT DayOfWeek, count(*) AS c FROM
ontime WHERE Year BETWEEN 2000 AND 2008 GROUP BY DayOfWeek ORDER
BY c DESC;
Well, this query basically has to be slow. The correct approach to this
problem is to add
Should I try a different approach to solve this issue?
Yes. Ask yourself if you *really* need 180k rows.
Best regards,
depesz
On Thu, Jul 09, 2009 at 09:26:42AM -0700, Craig James wrote:
Suppose I have a large table with a small-cardinality CATEGORY column (say,
categories 1..5). I need to sort by an arbitrary (i.e. user-specified)
mapping of CATEGORY, something like this:
1 => 'z'
2 => 'a'
3 => 'b'
4 => 'w'
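One built-in way to express an arbitrary mapping like this is ORDER BY on a CASE expression (a sketch; the table name is hypothetical, and the mapping uses the values quoted above):
SELECT *
FROM some_table
ORDER BY CASE category
             WHEN 1 THEN 'z'
             WHEN 2 THEN 'a'
             WHEN 3 THEN 'b'
             WHEN 4 THEN 'w'
         END;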
On Wed, Mar 11, 2009 at 12:28:56PM -0700, Scott Carey wrote:
Google “linux drop_caches” first result:
http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
To be sure a test is going to disk and not file system cache for everything
in linux, run:
‘sync; echo 3 > /proc/sys/vm/drop_caches’
On Thu, Feb 05, 2009 at 02:31:24PM +0100, Ivan Voras wrote:
The problem is how to do it efficiently. I see there isn't a built-in
function that counts character occurrences so I'd have to write it
myself. An additional constraint is that it must be implemented with
built-in capabilities, i.e.
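A common built-ins-only trick for counting occurrences of a character: strip the character with replace() and compare lengths. A sketch with hypothetical names, counting 'x':
SELECT length(some_column) - length(replace(some_column, 'x', '')) AS occurrences
FROM some_table;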
On Wed, Dec 03, 2008 at 04:01:48PM +, Matthew Wakeling wrote:
The work_mem setting on this machine is 1000MB, running Postgres 8.3.0.
Check bug report from 2008-11-28, by Grzegorz Jaskiewicz:
query failed, not enough memory on 8.3.5
On Wed, Nov 12, 2008 at 07:02:10PM +0200, Andrus wrote:
explain analyze select max(kuupaev) from bilkaib where
kuupaev<=date'2008-11-01' and (cr='00' or db='00')
do you always have this: (cr='00' or db='00')? or do the values (00)
change?
if they don't change, or *most* of the queries have
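This was presumably heading toward a partial index: if (cr='00' or db='00') appears in most of the queries, the condition can be baked into the index itself (index name hypothetical):
CREATE INDEX bilkaib_kuupaev_00 ON bilkaib (kuupaev)
    WHERE cr = '00' OR db = '00';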
On Mon, Sep 29, 2008 at 10:29:45AM +0200, [EMAIL PROTECTED] wrote:
EXPLAIN ANALYSE SELECT * FROM system_alarm WHERE id_camera='3' AND
confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC
LIMIT 1;
Sorry, without LIMIT it returns around 70 rows.
Tried to index date column
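An index on the date column alone rarely helps here; the usual fix is a composite index matching the equality columns plus the sort, so the LIMIT 1 is answered by a single (backward) index scan. A sketch using the columns from the query (index name hypothetical):
CREATE INDEX system_alarm_latest
    ON system_alarm (id_camera, confirmed, dismissed, date, time);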
On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:
Does anyone know what will cause this behavior for autovacuum?
http://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html
- autovacuum_freeze_max_age
depesz
On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:
This makes sense. What queries can I run to see how close to the limit
we are? We need to determine if we should stop the process which
updates and inserts into this table until after the critical time this
afternoon when
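The standard queries for that, comparing against autovacuum_freeze_max_age (200 million by default):
-- per database:
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;
-- per table, worst offenders first:
SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY 2 DESC
LIMIT 10;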
On Tue, Jun 03, 2008 at 09:57:15AM +0200, Marcin Citowicki wrote:
I'm not a dba so I'm not sure if the time it takes to execute this query
is OK or not, it just seems a bit long to me.
This is perfectly OK. count(*) from table is generally slow. There are
some ways to make it faster
On Mon, Apr 07, 2008 at 11:01:18PM -0400, Christian Bourque wrote:
I have a performance problem with a script that does massive bulk
insert in 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
how do you do
On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
Any Idea on this ???
yes. don't use count(*).
if you want a whole-table row count, use triggers to store the count.
it will be slow regardless of whether it's in ram or on hdd.
depesz
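A minimal sketch of the trigger-maintained counter (all names hypothetical; note the single counter row serializes concurrent writers, which can matter on busy tables):
CREATE TABLE row_counts (tbl text PRIMARY KEY, cnt bigint NOT NULL);
INSERT INTO row_counts SELECT 'some_table', count(*) FROM some_table;
CREATE OR REPLACE FUNCTION some_table_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET cnt = cnt + 1 WHERE tbl = 'some_table';
        RETURN NEW;
    END IF;
    UPDATE row_counts SET cnt = cnt - 1 WHERE tbl = 'some_table';
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER some_table_count
    AFTER INSERT OR DELETE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE some_table_count();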
On Mon, Feb 18, 2008 at 02:41:50PM +0530, Kathirvel, Jeevanandam wrote:
I want to disable Write Ahead Log (WAL) completely because
of the following reasons,
basically, you can't disable it.
regards,
depesz
On Mon, Feb 18, 2008 at 03:00:47PM +0530, Kathirvel, Jeevanandam wrote:
Is there a way to minimize I/O operations on the disk/CF?
Can I create a RAM file system and point the pg_xlog files to the RAM
location instead of CF? Will this work?
it will, but in case you lose power you
On Fri, Nov 16, 2007 at 10:40:43AM +0100, Gábor Farkas wrote:
we are moving one database from postgresql-7.4 to postgresql-8.2.4.
any particular reason why not 8.2.5?
my question is: is it recommended to use it? or in other words, should i
only use autovacuum? or is it better to use
On Thu, Nov 01, 2007 at 02:07:55PM +0100, Palle Girgensohn wrote:
I have a table login with approx 600,000 tuples, a person table with
approx 10 tuples.
When running
select max(when) from login where userid='userid'
it takes a second or two, but when adding group by userid the planner
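The max-per-user case is usually served by a composite index; "when" is a reserved word, so it has to be quoted. A sketch (index name hypothetical):
CREATE INDEX login_userid_when ON login (userid, "when");
-- then: SELECT max("when") FROM login WHERE userid = 'userid';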
On Tue, Sep 25, 2007 at 07:08:42AM -0400, Dave Cramer wrote:
ERROR: deadlock detected
DETAIL: Process 23063 waits for ExclusiveLock on tuple (20502,48) of
relation 48999028 of database 14510214; blocked by process 23110.
Process 23110 waits for ShareLock on transaction 1427023217; blocked
On Wed, Sep 05, 2007 at 12:30:21PM +0100, Gregory Stark wrote:
SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id,
the_date ASC
SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id,
the_date DESC
I think the first of these can actually use your
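That is, an index whose order matches the DISTINCT ON / ORDER BY clause (index name hypothetical):
CREATE INDEX my_table_id_date ON my_table (id, the_date);
-- the max_date variant can use the same index scanned backwards,
-- or a dedicated (id, the_date DESC) index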
On Thu, Aug 16, 2007 at 06:14:02PM +0200, Frank Schoep wrote:
The (example) query:
SELECT * FROM movies WHERE letter = 'T' ORDER BY name ASC LIMIT 100
OFFSET 1900;
try to change the query to:
SELECT * FROM movies WHERE letter = 'T' ORDER BY letter ASC, name ASC LIMIT 100
OFFSET 1900;
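The rewrite only pays off if the new ORDER BY then matches an existing index, presumably something like:
CREATE INDEX movies_letter_name ON movies (letter, name);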
On 3/18/07, Vincenzo Romano [EMAIL PROTECTED] wrote:
And these are the EXPLAINs for the queries:
please provide output of explain analyze of the queries. otherwise -
it is not really useful.
depesz
On 3/18/07, Barry Moore [EMAIL PROTECTED] wrote:
Does anyone know how I can repeatedly run the same query in the
worst case scenario of no postgres data in the disk cache (e.g.,
clear the page cache or force it to be ignored)?
try to disconnect from postgresql, reconnect, rerun the query.
if
On 3/13/07, femski [EMAIL PROTECTED] wrote:
I have a batch application that writes approx. 4 million rows into a narrow
table. I am using JDBC addBatch/ExecuteBatch with auto commit turned off.
Batch size is 100. So far I am seeing Postgres take roughly five times the
time it takes to do this in
hi, i'm waiting for a new server to arrive. for the server i will get an hp msa1000, with 14 discs (72 GB, 15000 rpm). what kind of partitioning do you suggest, to get maximum performance? for system things i will have separate discs, so the whole array is only for postgresql.
data processing is oltp, but with large
On 6/14/06, Sven Geisler [EMAIL PROTECTED] wrote:
You should configure your discs to RAID 10 volumes. You should set up a separate volume for WAL. A volume for an additional table space may also be useful. In your case I would do 2 partitions: 1. RAID 10 with 8 discs for general data
raid 10 is of course
On 4/11/06, Simon Dale [EMAIL PROTECTED] wrote:
I'm trying to evaluate PostgreSQL as a database that
will have to store a high volume of data and access that data frequently. One
of the features on our wish list is to be able to use stored procedures to
access the data and I was
On 3/4/06, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
Below is what I actually have. Given the fact that it takes forever to get
a result (> 6 seconds), there must be something wrong with my solution or
my expectation. Can anyone tell what I should do to make this query go
faster ( or
On 3/4/06, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
how many records do you have in the customers table?
368915 of which 222465 actually meet the condition.
From what I understand from the mailing list, PostgreSQL prefers a table
scan whenever it expects that the number of records in the
On 1/29/06, Luke Lonergan [EMAIL PROTECTED] wrote:
Oh - and about RAID 10 - for large data work it's more often a waste of
disk performance-wise compared to RAID 5 these days. RAID5 will almost
double the performance on a reasonable number of drives.
how many is reasonable?
depesz
On 1/28/06, Luke Lonergan [EMAIL PROTECTED] wrote:
You should check your disk performance, I would
expect you'll find it lacking, partly because you are running RAID10, but
mostly because I expect you are using a hardware RAID adapter.
hmm .. do i understand correctly that you're suggesting
On 1/17/06, Michael Riess [EMAIL PROTECTED] wrote:
about the FSM: You say that increasing the FSM is fairly cheap - how should I know that? comment from original postgresql.conf file seems pretty obvious: #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000 # min
On 11/11/05, DW [EMAIL PROTECTED] wrote:
I'm perplexed. I'm trying to find out why some queries are taking a long time, and have found that after running analyze, one particular query becomes slow.
i have had exactly the same problem very recently.
what helped? increasing statistics on some column.
On 10/7/05, Cestmir Hybl [EMAIL PROTECTED] wrote:
No, I can't speed-up evaluation of generic
count(*) where () queries this way.
no you can't speed up generic where(), *but* you can check what are the
most common where's (like usually i do where on one column like:
select count(*) from table
On 10/7/05, Cestmir Hybl [EMAIL PROTECTED] wrote:
Isn't it possible (and reasonable) for these
environments to keep track of whether there is a transaction in progress with
update to given table and if not, use an index scan (count(*) where) or cached
value (count(*)) to perform this kind of
hi
setup:
postgresql 8.0.3 put on debian on dual xeon, 8GB ram, hardware raid.
database just after recreation from dump takes 15 gigabytes.
after some time (up to 3 weeks) it gets really slow and has to be dumped and restored.
as for fsm:
end of vacuum info:
INFO: free space map: 248 relations,
On 6/24/05, Yves Vindevogel [EMAIL PROTECTED] wrote:
So, when I want the last page, which is: 600k / 25 = page 24000 - 1 =
23999, I issue the offset of 23999 * 25
improving this is hard, but not impossible.
if you have the right index created, try to reverse the order and fetch
the first adverts, and
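Presumably the reverse-order trick looks like this (a sketch; the table and key are hypothetical): sort descending so the last page becomes the first, take it with a cheap small LIMIT, then restore the original order in an outer query:
SELECT *
FROM (
    SELECT * FROM adverts ORDER BY id DESC LIMIT 25
) AS last_page
ORDER BY id ASC;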
On 6/13/05, Saranya Sivakumar [EMAIL PROTECTED] wrote:
2 x 2.4 Ghz Intel Xeon CPU with HT(4 virtual CPUs)
switch to amd opteron (dual cpu). for the same price you get 2x
performance - compared to xeon boxes.
RAM - 1GB
you definitely could use more ram. the more the better.
HDD - 34GB