Hi,
I think telegram_id's type should be integer. Please change telegram_id to numeric and try to run the following SQL; the index should be used:
EXPLAIN SELECT md.*
FROM measure_data md
WHERE telegram_id IN (trunc(66484.2), trunc(132362.1));
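For reference, a hedged sketch of the two ways to make that predicate indexable (the index name here is an assumption; trunc() returns numeric, so comparing it against an integer column casts the column and skips a plain btree index):

-- Option 1: change the column type so it matches what trunc() returns
ALTER TABLE measure_data ALTER COLUMN telegram_id TYPE numeric;
CREATE INDEX measure_data_telegram_id_idx ON measure_data (telegram_id);  -- assumed name

-- Option 2: keep the integer column and cast the values instead
EXPLAIN SELECT md.*
FROM measure_data md
WHERE telegram_id IN (trunc(66484.2)::int, trunc(132362.1)::int);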
2014-05-15 17:28 GMT+09:00 changchao :
> explain analyze
> select a.ID, a.provider, a.hostname, a.username, a.eventTimeStamp,
> a.AIPGUID, a.submissionGUID, a.parentSubmissionGUID, a.sizeArchived,
> a.addedContentString, a.addedContentSizesString, a.removedContentString,
> a.removedContentSizesString, a.modifiedContentStr
On 11/07/2011 2:26 AM, Radhya sahal wrote:
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
this executionTime is not the actual time for the query;
it includes the time to access the PostgreSQL server
over JDBC.
The pg_stat_st
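The snippet cuts off at what is presumably pg_stat_statements, which reports query time as measured on the server, without the JDBC round trip. A minimal sketch, assuming the contrib module is loaded via shared_preload_libraries (the total_time column became total_exec_time in PostgreSQL 13):

CREATE EXTENSION pg_stat_statements;

SELECT query, calls, total_time, total_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;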
- Forwarded Message
From: Radhya sahal
To: Samuel Gendler
Sent: Sun, July 10, 2011 11:25:46 AM
Subject: Re: [PERFORM] query total time im milliseconds
Thanks
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
- Forwarded Message -
>From: Denis de Bernardy
>To: Jens Hoffrichter
>Sent: Tuesday, June 28, 2011 12:59 AM
>Subject: Re: [PERFORM] Getting rid of a seq scan in query on a large table
>
>
>> Hash Cond: (posts.poster_id = posters.poster_id)
>
>> -> Seq Scan on post
My problem is that, in fact, I don't know which tag to index, since I'm
running a web admin application where users can enter arbitrary
queries.
For a tag cloud, try this:
- table tags ( tag_id, tag_name )
- table articles ( article_id )
- table articles_to_tags( article_id, tag_id )
now this
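The message truncates at "now this"; a sketch of how that layout might look in DDL, with the per-tag count query a tag cloud needs (all constraints beyond the three listed tables are assumptions):

CREATE TABLE tags (
    tag_id   serial PRIMARY KEY,
    tag_name text NOT NULL UNIQUE
);

CREATE TABLE articles (
    article_id serial PRIMARY KEY
);

CREATE TABLE articles_to_tags (
    article_id int NOT NULL REFERENCES articles,
    tag_id     int NOT NULL REFERENCES tags,
    PRIMARY KEY (article_id, tag_id)
);

-- a tag cloud is just per-tag usage counts
SELECT t.tag_name, count(*) AS uses
FROM articles_to_tags att
JOIN tags t USING (tag_id)
GROUP BY t.tag_name
ORDER BY uses DESC;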
Hi all
Thank you to all who answered. That worked:
CREATE INDEX planet_osm_point_tags_amenity
ON planet_osm_point ((tags->'amenity'))
WHERE (tags->'amenity') IS NOT NULL;
My problem is that, in fact, I don't know which tag to index, since I'm
running a web admin application where users can enter a
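For reference, the partial index above only helps queries that use the same expression; a sketch, with the amenity value assumed:

SELECT *
FROM planet_osm_point
WHERE (tags->'amenity') = 'pub';
-- equality to a constant implies IS NOT NULL, so the partial index qualifies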
On Wed, May 25, 2011 at 11:59 AM, Pierre C wrote:
>> You wrote
>>>
>>> Try to create a btree index on "(bench_hstore->bench_id) WHERE
>>> (bench_hstore->bench_id) IS NOT NULL".
>>
>> What do you mean exactly?
>> => CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE
>> ??? IS NOT
Hi Pierre
You wrote
> Try to create a btree index on "(bench_hstore->bench_id) WHERE
> (bench_hstore->bench_id) IS NOT NULL".
What do you mean exactly?
=> CREATE INDEX myhstore_kps_gin_idx ON myhstore USING gin(kvps) WHERE
??? IS NOT NULL;
My table's def is:
> CREATE TABLE myhstore ( id bigint PRIMARY KEY,
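What Pierre is presumably describing is a btree index on the extracted value, not a GIN index over the whole hstore; a sketch using the kvps column from the attempt above (key and sample value follow the thread's bench_id example):

CREATE INDEX myhstore_bench_id_idx
ON myhstore ((kvps->'bench_id'))
WHERE (kvps->'bench_id') IS NOT NULL;

-- an equality lookup on the same expression can then use the index:
SELECT * FROM myhstore WHERE kvps->'bench_id' = '20_20';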
On Tue, May 17, 2011 at 11:10 AM, wrote:
> For Hstore I'm using a GIST index.
I would have thought that GIN would be a better choice for this workload.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
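A sketch of the GIN alternative being suggested, again assuming the kvps hstore column; note that GIN on hstore serves containment (@>) and key-existence (?) lookups rather than btree-style equality on one extracted value:

CREATE INDEX myhstore_kvps_gin_idx ON myhstore USING gin (kvps);

-- a containment query that this index can serve:
SELECT * FROM myhstore WHERE kvps @> 'bench_id=>20_20'::hstore;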
Hi Merlin
The analyze command gave the following result:
On the KVP table:
Index Scan using kvpidx on bench_kvp (cost=0.00..8.53 rows=1 width=180) (actual
time=0.037..0.038 rows=1 loops=1)
Index Cond: (bench_id = '20_20'::text)
Total runtime: 0.057 ms
And on the Hstore table:
Bitmap Hea
> Sent: Friday, January 28, 2011 7:31 AM
> To: Anne Rosset
> Cc: pgsql-performance@postgresql.org
> Subject: Re: FW: [PERFORM] Queries becoming slow under heavy load
>
> On 01/27/2011 11:12 PM, Anne Rosset wrote:
>
>> Thanks for your response.
>> We are over NFS for our storage ...
>
> NFS? I
-Original Message-
From: Shaun Thomas [mailto:stho...@peak6.com]
Sent: Friday, January 28, 2011 7:31 AM
To: Anne Rosset
Cc: pgsql-performance@postgresql.org
Subject: Re: FW: [PERFORM] Queries becoming slow under heavy load
On 01/27/2011 11:12 PM, Anne Rosset wrote:
> Thanks for your response.
>
Shaun Thomas wrote:
On 01/27/2011 11:12 PM, Anne Rosset wrote:
Thanks for your response.
We are over NFS for our storage ...
NFS? I'm not sure you know this, but NFS has major locking issues
that would make it a terrible candidate for hosting a database.
That depends on the implem
On 01/27/2011 11:12 PM, Anne Rosset wrote:
> Thanks for your response.
> We are over NFS for our storage ...
NFS? I'm not sure you know this, but NFS has major locking issues
that would make it a terrible candidate for hosting a database.
> and it's not until around the 221 second mark that we s
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Wednesday, January 26, 2011 8:19 PM
To: Anne Rosset
Cc: pgsql-performance@postgresql.org
Subject: Re: FW: [PERFORM] Queries becoming slow under heavy load
On Wed, Jan 26, 2011 at 9:04 AM, Anne Rosset wrote:
PLEASE post just the settings you changed. I'm
On Wed, Jan 26, 2011 at 10:16 AM, Shaun Thomas wrote:
> Worse however, is your checkpoints. Lord. Increase checkpoint_segments to
> *at least* 20, and increase your checkpoint_completion_target to 0.7 or 0.8.
> Check your logs for checkpoint warnings, and I'll bet it's constantly
> complaining abo
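As a sketch, the settings Shaun is pointing at would look like this in postgresql.conf (the two values are from his message; log_checkpoints is an added assumption so each checkpoint shows up in the logs):

checkpoint_segments = 20            # at least; the old default was 3
checkpoint_completion_target = 0.8  # spread checkpoint I/O over more of the interval
log_checkpoints = on                # assumption: log every checkpoint, not just warnings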
On Wed, Jan 26, 2011 at 9:04 AM, Anne Rosset wrote:
PLEASE post just the settings you changed. I'm not searching through
a list that big for the interesting bits.
> Today we did more analysis and observed postgress processes that
> continually reported status 'D' in top.
Full stop. The mos
On 01/26/2011 10:04 AM, Anne Rosset wrote:
We've been able to match long running database queries to such
processes. This occurs under relatively low load average (say 4 out of
8) and can involve as little as 1 single sql query.
The D state means the process is blocked, waiting for... someth
Sorry, it seems like the postgres configuration didn't come through the
first time.
name                    | setting
------------------------+------------
add_missing_from        | off
allow_system_table_mods | off
archive_command         | (disabled)
archive_mode            |
On Sun, Aug 2, 2009 at 10:04 PM, Chris Dunn wrote:
> The database is 8gb currently. Used to be a lot bigger, but we removed all
> large objects and developed a file server storage for them, and we're using
> default page costs for 8.4; I did have it changed in 8.1.4.
You might want to play with lowerin
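Robert's advice is cut off, but given the mention of default page costs it presumably concerns the planner cost settings; a hedged sketch of experimenting per session:

-- assumption: the truncated suggestion is to lower random_page_cost
SET random_page_cost = 2.0;  -- default is 4.0; lower values favor index scans
-- then re-run EXPLAIN ANALYZE on the slow query and compare plans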
The database is 8gb currently. Used to be a lot bigger, but we removed all
large objects and developed a file server storage for them, and we're using
default page costs for 8.4; I did have it changed in 8.1.4.
-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com]
Sent: Sunday, 2
A bad address kept this from going to the list on my first try ... apologies to the
moderators.
-Original Message-
From: Gregory Williamson
Sent: Wed 9/5/2007 4:59 AM
To: JS Ubei; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] optimize query with a maximum(date) extraction
In order
<[EMAIL PROTECTED]> writes:
> with a standard 7200 rpm drive ~150 transactions/sec sounds about right
>
> to really speed things up you want to get a disk controller with a battery
> backed cache so that the writes don't need to hit the disk to be safe.
Note that this is only if you're counting t
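To make the arithmetic concrete: a 7200 rpm platter completes 120 rotations a second, and a safe commit must wait for the platter, so ~120-150 commits/sec is the ceiling without a battery-backed cache. Batching rows into one transaction pays that cost once per batch; a sketch with an assumed table:

-- hypothetical table, for illustration only
CREATE TABLE t (id int, payload text);

-- one commit per row is bounded by disk rotation; one commit per batch is not
BEGIN;
INSERT INTO t
SELECT g, 'row ' || g::text
FROM generate_series(1, 10000) AS g;
COMMIT;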
On Tue, 8 May 2007, Orhan Aglagul wrote:
No, it is one transaction per insert.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 5:38 PM
To: Orhan Aglagul
Subject: RE: [PERFORM]
On Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:
But 10,000 rec
No, it is one transaction per insert.
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 5:38 PM
To: Orhan Aglagul
Subject: RE: [PERFORM]
On Tue, 2007-05-08 at 19:36, Orhan Aglagul wrote:
> But 10,000 records in 65 sec comes to ~153 records per
-Original Message-
From: Orhan Aglagul
Sent: Tuesday, May 08, 2007 5:37 PM
To: 'Scott Marlowe'
Subject: RE: [PERFORM]
But 10,000 records in 65 sec comes to ~153 records per second. On a dual
3.06 Xeon
What range is acceptable?
-Original Message-
From: Scott Marlowe [mailto:
Yes, I did not do it in one transaction.
All 3 machines are configured with the same OS and same version
postgres.
No kernel tweaking and no postgres tweaking done (except the fsync)...
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 08, 2007 5:23 PM
Jim C. Nasby wrote:
Ok, now that I've actually looked at the release notes, I take that back
and apologize. But while there's a lot of improvements that have been
made, there's still some seriously tough problems that have been talked
about for a long time and there's still no "light at the end
Tom Lane wrote:
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
If someone's going to commit to putting effort into improving the
planner then that's wonderful. But I can't recall any significant
planner improvements since min/max (which I'd argue was more of a bug
fix than an improvement).
Hmph.
On Thu, 2006-10-12 at 09:44, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > If someone's going to commit to putting effort into improving the
> > planner then that's wonderful. But I can't recall any significant
> > planner improvements since min/max (which I'd argue was more of a
On Thu, Oct 12, 2006 at 10:44:20AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > If someone's going to commit to putting effort into improving the
> > planner then that's wonderful. But I can't recall any significant
> > planner improvements since min/max (which I'd argue
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> If someone's going to commit to putting effort into improving the
> planner then that's wonderful. But I can't recall any significant
> planner improvements since min/max (which I'd argue was more of a bug
> fix than an improvement).
Hmph. Apparently I
On Thu, Oct 12, 2006 at 10:59:23PM +1300, Mark Kirkwood wrote:
> H.J. Sanders wrote:
>
> > why not just like in some other (commercial) databases:
> >
> > a statement to say: use index
> >
> > I know this is against all thought, but if even the big ones cannot resist
> > the pressure
H.J. Sanders wrote:
why not just like in some other (commercial) databases:
a statement to say: use index
I know this is against all thought, but if even the big ones cannot resist
the pressure of their users, why not?
Yeah - some could not (e.g. Oracle), but some did (e
Hello.
Simply jumping on the bandwagon, just my 2 cents:
why not just like in some other (commercial) databases:
a statement to say: use index
I know this is against all thought, but if even the big ones cannot resist
the pressure of their users, why not?
Henk Sanders
Sorry, forgot to ask:
What is the recommended/best PG block size for DWH database? 16k, 32k, 64k ?
What should be the relation between XFS/RAID stripe size and PG block size?
Best Regards.
Milen Kulev
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Forwarding to -performance
From: Alan Hodgson [mailto:[EMAIL PROTECTED]
On Friday 09 June 2006 12:41, "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> Has anyone actually done any testing on this? Specifically, I'm
> wondering if the benefit of adding 2 more drives to a RAID10 outweighs
> whatever pen
Hello list.
Reading my own e-mail I notice I made a very important mistake.
The X206 has 1 x ATA 7200 RPM
The X226 has 2 x SCSI RAID1 1RPM
I corrected it below.
Sorry.
Henk Sanders
-Original Message- From: [EMAIL PROTECTED] [mailto:[E
Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60
shared_buffers = 48000
sort_mem = 32168
fsync = false
Do you think this is enough? Or
Anu,
> Thanks a lot for your quick response. Which version do you think is the
> more stable one that we should upgrade to?
8.0.3
> Please provide us with the Upgrade instructions/documentation to be
> followed for both red hat and PostgreSQL.
See the PostgreSQL documentation for upgrade instru
Hello Tom,
Thanks a lot for your quick response. Which version do you think is the
more stable one that we should upgrade to?
Please provide us with the Upgrade instructions/documentation to be
followed for both red hat and PostgreSQL.
Thanks and Best Regards,
Anu
- Original Message -
From: "Jean-Pierre Pelletier" <[EMAIL PROTECTED]>
To:
Sent: Thursday, September 22, 2005 6:37 PM
Subject: Re: [PERFORM] Queries 15 times slower on 8.1 beta 2 than on 8.0
How do I produce an "Index scan plan"?
- Original Message -
From: "Josh Berkus"
T
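One common answer to Jean-Pierre's question is to make sequential scans unattractive for the session, so the planner picks an index plan if one is possible; a sketch (a diagnostic toggle, not a production setting):

SET enable_seqscan = off;
-- re-run EXPLAIN on the query; an index scan plan should appear if one exists
RESET enable_seqscan;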
On Mon, 18 Apr 2005, Joel Fradkin wrote:
> Another odd thing: when I tried turning off merge joins on the XP desktop,
> it took 32 secs to run compared to the 6 secs it was taking.
> On the Linux (4proc box) it is now running in 3 secs with the mergejoins
> turned off.
>
> Unfortunately it takes
Sorry if this posts twice; I posted and did not see it hit the list.
What are the statistics
for tbljobtitle.id and tbljobtitle.clientnum
I added default_statistics_target = 250 to the config and reloaded the
database; is that what you mean?
--- how many distinct values of each,
tbljobtitl
What are the statistics
for tbljobtitle.id and tbljobtitle.clientnum
I added default_statistics_target = 250 to the config and reloaded the
database; is that what you mean?
--- how many distinct values of each,
tbljobtitle.id 6764 for all clients 1018 for SAKS
tbljobtitle.clientnum 237 di
Another odd thing: when I tried turning off merge joins on the XP desktop,
it took 32 secs to run compared to the 6 secs it was taking.
On the Linux (4proc box) it is now running in 3 secs with the mergejoins
turned off.
Unfortunately it takes over 2 minutes to actually return the 160,000+ rows.
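For reference, the merge-join switch being discussed is a per-session planner toggle of the same family; a sketch:

SET enable_mergejoin = off;
-- re-run the query under EXPLAIN ANALYZE and compare timings
RESET enable_mergejoin;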
BBI Edwin Punzalan wrote:
Hi.
1) chatlogs rows increase every now and then (it's in a live environment)
and the table currently has 538,696 rows
OK, so as a rule of thumb I'd say if you were fetching less than 5000
rows it's bound to use an index. If more than 50,000 always use a
seqscan, otherwise it'll
reSQL except setting fsync to false.
Thanks for taking a look at our problem. :D
-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 24, 2004 6:17 PM
To: BBI Edwin Punzalan
Cc: [EMAIL PROTECTED]
Subject: Re: FW: [PERFORM] FW: Index usage
BBI Edwin Pun
Thanks, Tim.
I tried adding an upper limit and it's still the same as follows:
==
db=# explain analyze select date from chatlogs where date>='11/24/04';
NOTICE: QUERY PLAN:
Index Scan using chatlogs_date_idx on chatlogs (cost=0.00..145.72 rows=37
width=4) (actual time=0.18..239.69
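The snippet truncates before the upper-bounded version, but presumably "adding an upper limit" means a closed range like this sketch (date literals in the thread's US-style format):

EXPLAIN ANALYZE
SELECT date FROM chatlogs
WHERE date >= '11/24/04' AND date < '11/25/04';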
>>Eliminate that contention point, and you will have solved your problem.
I agree. If your updates are slow, then you will get a queue building up.
Make sure that:
1) all your indexing is optimised.
2) you are doing regular vacuuming (bloated tables will cause a slowdown
due to swapping).
3)
- Original Message -
From: "Stefano Bonnin" <[EMAIL PROTECTED]>
To: "Josh Berkus" <[EMAIL PROTECTED]>
Sent: Monday, August 30, 2004 4:13 PM
Subject: Re: [PERFORM] Query performance issue with 8.0.0beta1
> This is my postgres.conf, I have changed only the work_mem and
> shared_buffers pa
[forwarded to performance]
> The result is that for "short queries" (Q1 and Q2) it runs in a few
> seconds on both Oracle and PG. The difference becomes important with
> Q3 : 8 seconds with oracle
> 80 sec with PG
> and too much with Q4 : 28s with oracle
>17m20s with P
The best way to set it is to let the machine run under normal load for a
while, then look at the cache / buffer usage using top (or some other
program, top works fine for this).
My server with 1.5 gig ram shows 862328K cache right now. So, just divide
by page size (usually 8192) and I get ~ 10
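Worked through with the numbers in this message: 862328 KB of cache divided by 8 KB per page is roughly 107,000 pages, so as a sketch the 7.3-era postgresql.conf entry (which took a page count) would be about:

# 862328 KB cached / 8 KB per page ~= 107000 pages
effective_cache_size = 107000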
What would be the best value range for effective_cache_size
on Postgres 7.3.2, assuming say 1.5 GB of RAM and
shared_buffers set to 8192, and shmmax set to 750mb?
And what are the most important factors one should take
into account in determining the value?
> -Original Message-
> From: