On 21-4-2007 1:42 Mark Kirkwood wrote:
I don't think that will work for the vector norm i.e:
|x - y| = sqrt(sum over j ((x[j] - y[j])^2))
I don't know if this is useful here, but I was able to rewrite that
algorithm for a set of very sparse vectors (i.e. they had very little
overlapping
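The norm above can be specialized for sparse vectors so that only non-zero entries are touched. A minimal sketch of that idea (my own illustration, not the poster's code), with vectors stored as index-to-value dicts:

```python
import math

def sparse_distance(x, y):
    """Euclidean distance |x - y| for sparse vectors stored as dicts.

    Only indices present in either vector contribute to the sum,
    so the cost is proportional to the number of non-zero entries,
    not the nominal dimension."""
    total = 0.0
    for j in set(x) | set(y):
        d = x.get(j, 0.0) - y.get(j, 0.0)
        total += d * d
    return math.sqrt(total)

# Two mostly-empty vectors with little overlap:
x = {2: 3.0, 17: 4.0}
y = {17: 1.0}
print(sparse_distance(x, y))  # sqrt(3^2 + 3^2) ~ 4.2426
```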
* Bill Moran:
To clarify my viewpoint:
To my knowledge, there is no Unix filesystem that _suffers_ from
fragmentation. Specifically, all filesystems have some degree of
fragmentation that occurs, but every Unix filesystem that I am aware of
has built-in mechanisms to mitigate this and
On Fri, Apr 27, 2007 at 04:43:06AM +, Andres Retzlaff wrote:
Hi,
I have pg 8.1.4 running in
Windows XP Pro
with a Pentium D,
and I notice that I cannot use more than 50% of the CPUs (the Pentium D has 2
CPUs). How can I change the settings to use 100% of them?
A single query will
Hi Magnus,
in this case each CPU goes up to 50%, giving me 50% total usage. I was
expecting, as you say, 1 query to use 100% CPU.
Any ideas?
Andrew
On Fri, Apr 27, 2007 at 04:43:06AM +, Andres Retzlaff wrote:
Hi,
I have pg 8.1.4 running in
Windows XP Pro
with a Pentium D
and I notice that
On Fri, Apr 27, 2007 at 08:10:48AM +, Andres Retzlaff wrote:
Hi Magnus,
in this case each CPU goes up to 50%, giving me 50% total usage. I was
expecting, as you say, 1 query to use 100% CPU.
Any ideas?
No. A single query will only use 100% of *one* CPU, which means 50% total usage.
You need at least
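The point above -- one backend, one core -- also suggests the workaround: spread the work over two connections so each backend gets its own CPU. A hypothetical sketch, with plain Python processes standing in for two database connections (none of this is from the thread):

```python
from multiprocessing import Pool

def run_query(bounds):
    """Stand-in for one backend executing one half of the workload.
    Each worker is a separate process, so it can occupy its own core."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Split the job the way you would split one big query across two
    # connections (e.g. two range scans over different halves of a table).
    halves = [(0, 500_000), (500_000, 1_000_000)]
    with Pool(2) as pool:  # two workers ~ two backends ~ both CPUs busy
        parts = pool.map(run_query, halves)
    print(sum(parts) == sum(i * i for i in range(1_000_000)))  # True
```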
Magnus Hagander wrote:
On Fri, Apr 27, 2007 at 08:10:48AM +, Andres Retzlaff wrote:
Hi Magnus,
in this case each CPU goes up to 50%, giving me 50% total usage. I was
expecting, as you say, 1 query to use 100% CPU.
Any ideas?
No. A single query will only use 100% of *one* CPU, which means 50% total
On Fri, Apr 27, 2007 at 07:23:41PM +0930, Shane Ambler wrote:
I would think that as you are sitting and watching the cpu usage, your
query would seem to be taking a while to run, leading me to wonder if you
are getting a full table scan that is causing pg to wait for disk response?
If so, you
Tom Lane wrote:
Carlos Moreno [EMAIL PROTECTED] writes:
... But, wouldn't it make sense that the configure script determines the
amount of physical memory, and perhaps even does an HD speed estimate, to
set up defaults that are closer to a performance-optimized configuration?
No. Most
Adding -performance back in so others can learn.
On Apr 26, 2007, at 9:40 AM, Paweł Gruszczyński wrote:
Jim Nasby wrote:
On Apr 25, 2007, at 8:51 AM, Paweł Gruszczyński wrote:
where u6 stores the Fedora Core 6 operating system, and u0 stores 3
partitions with ext2, ext3 and jfs filesystem.
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have additional
switches to initdb so that the user can tell it about estimates on how the DB
will be used: estimated size of the DB, estimated percentage of activity
Carlos Moreno [EMAIL PROTECTED] writes:
Tom Lane wrote:
But
the fundamental problem remains that we don't know that much about
how the installation will be used.
Notice that the second part of my suggestion covers this --- have
additional switches to initdb
That's been proposed and
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage-- sort of a magic
auto-configurator wizard doohickey? I could see that sort of thing being
nice for the casual user or newbie who otherwise would have a horribly
mis-tuned database.
Hello.
Just my 2 cents, and not looking to the technical aspects:
setting up PostgreSQL is the weakest point of PostgreSQL, as we have
experienced ourselves; once it is running it is great.
I can imagine that a lot of people stop after their first trials, after
they have experienced the troubles and
On Fri, Apr 27, 2007 at 07:36:52AM -0700, Mark Lewis wrote:
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage-- sort of a magic
auto-configurator wizard doohickey? I could see that sort of thing being
nice for the casual user or
Hi!
I read the link below and am puzzled by or curious about something.
http://www.postgresql.org/docs/8.1/interactive/datatype-character.html
The Tip below is intriguing
Tip: There are no performance differences between these three types,
apart from the increased storage size when using the
On Apr 27, 2007, at 3:30 PM, Michael Stone wrote:
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have additional
switches to initdb so that the user can tell it about estimates on how the DB
will be used: estimated
Siddharth Anand [EMAIL PROTECTED] writes:
How can a field that doesn't have a limit like text perform similarly to
char varying(128), for example? At some point, we need to write data to
disk. The more data that needs to be written, the longer the disk write
will take, especially when it
Siddharth Anand wrote:
Hi!
I read the link below and am puzzled by or curious about something.
http://www.postgresql.org/docs/8.1/interactive/datatype-character.html
The Tip below is intriguing
Tip: There are no performance differences between these three types,
apart from the increased
I think the manual is implying that if you store a value like 'Sid' in a
field of either type varchar(128) or type text, there is no performance
difference. The manual is not saying that you get the same performance
storing a 500k text field as when you store the value 'Sid'.
Dave
-Original
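Dave's point can be made concrete with a toy model: what reaches the disk is the value itself plus a small length header, regardless of the declared limit. (The 4-byte header below approximates the varlena length word in 8.1; TOAST compression and out-of-line storage for large values are ignored here.)

```python
# Toy storage model: stored size tracks the value, not the declared type.
HEADER = 4  # approximate varlena length word in PostgreSQL 8.1

for value in ("Sid", "x" * 500_000):
    stored = HEADER + len(value.encode("utf-8"))
    print(f"{len(value):>7} chars -> ~{stored:,} bytes before compression")
```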
Hi Tom,
My question wasn't phrased clearly. Oracle exhibits a performance
degradation for very large-sized fields (CLOB types that I equate to
Postgres' text type) when compared with the performance of field types
like varchar that handle a max character limit of a few thousand bytes in
Oracle.
Siddharth Anand [EMAIL PROTECTED] writes:
My question wasn't phrased clearly. Oracle exhibits a performance
degradation for very large-sized fields (CLOB types that I equate to
Postgres' text type) when compared with the performance of field types
like varchar that handle a max character limit
Hi,
We're facing some performance problems with the database for a web site with
very specific needs. First of all, we're using version 8.1 on a server with
1GB of RAM. I know there normally should be more memory, but as our tables
are not so big (as a matter of fact, they are small) I think the
Michael Stone wrote:
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have additional
switches to initdb
snip
If the person knows all that, why wouldn't they know to just change the
config parameters?
Exactly..
Dan,
Exactly.. What I think would be much more productive is to use the
great amount of information that PG tracks internally and auto-tune the
parameters based on it. For instance:
*Everyone* wants this. The problem is that it's very hard code to write
given the number of variables. I'm
In response to Dan Harris [EMAIL PROTECTED]:
Michael Stone wrote:
On Fri, Apr 27, 2007 at 09:27:49AM -0400, Carlos Moreno wrote:
Notice that the second part of my suggestion covers this --- have additional
switches to initdb
snip
If the person knows all that, why wouldn't they know
At 10:36a -0400 on 27 Apr 2007, Tom Lane wrote:
That's been proposed and rejected before, too; the main problem being
that initdb is frequently a layer or two down from the user (eg,
executed by initscripts that can't pass extra arguments through, even
assuming they're being invoked by hand in
On Fri, Apr 27, 2007 at 02:40:07PM -0400, Kevin Hunter wrote:
out that many run multiple postmasters or have other uses for the
machines in question), but perhaps it could send a message (email?)
along the lines of Hey, I'm currently doing this many of X
transactions, against this much
Bill,
The only one that seems practical (to me) is random_page_cost. The
others are all configuration options that I (as a DBA) want to be able
to decide for myself.
Actually, random_page_cost *should* be a constant 4.0 or 3.5, which
represents the approximate ratio of seek/scan speed
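A back-of-the-envelope sketch of where such a ratio comes from (the drive numbers below are invented assumptions, not measurements from this thread): a random page fetch pays the seek, while a sequentially read page costs only its share of the streaming bandwidth. Note that the raw-hardware ratio comes out far above 4.0; the planner's value is much lower because a large fraction of pages is expected to be found in cache.

```python
# Hypothetical drive characteristics -- illustrative assumptions only.
page_size_bytes = 8192            # PostgreSQL block size
seek_time_s = 0.008               # average seek + rotational latency
seq_read_bytes_per_s = 60e6       # sustained sequential throughput

seq_page_cost_s = page_size_bytes / seq_read_bytes_per_s
rand_page_cost_s = seek_time_s + seq_page_cost_s

raw_ratio = rand_page_cost_s / seq_page_cost_s
print(f"raw random/sequential ratio: {raw_ratio:.0f}")  # far above 4.0
```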
Bill Moran wrote:
In response to Dan Harris [EMAIL PROTECTED]:
snip
Why does the user need to manually track max_fsm_pages and max_fsm_relations?
I bet there are many users who have never taken the time to understand what
this means and are wondering why performance still stinks after vacuuming
Mauro N. Infantino [EMAIL PROTECTED] writes:
What we basically have is a site where each user has a box with links to
other randomly selected users. Whenever a box from a user is shown, an SP is
executed: a credit is added to that user and a credit is subtracted from
the accounts of the shown
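One plausible reading of the (truncated) bookkeeping described above, as an in-memory Python stand-in for the stored procedure; every name here is invented and the exact crediting rule is cut off in the preview:

```python
def record_impression(credits, owner, shown_users):
    """Mimic the per-impression stored procedure: the user whose page
    showed the box earns one credit, and each displayed user pays one.
    `credits` maps user id -> balance (stands in for a table row)."""
    credits[owner] = credits.get(owner, 0) + 1
    for u in shown_users:
        credits[u] = credits.get(u, 0) - 1
    return credits

balances = {"alice": 10, "bob": 10, "carol": 10}
record_impression(balances, "alice", ["bob", "carol"])
print(balances)  # {'alice': 11, 'bob': 9, 'carol': 9}
```

Since every impression issues several small writes, this pattern makes the site write-heavy, which fits the performance trouble described in the thread.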
Dan,
Yes, this is the classic problem. I'm not demanding anyone pick up the
ball and jump on this today, tomorrow, etc. I just think it would be
good for those who *could* make a difference to keep those goals in mind
when they continue. If you have the right mindset, this problem will
On Fri, 27 Apr 2007, Josh Berkus wrote:
Dan,
Yes, this is the classic problem. I'm not demanding anyone pick up the
ball and jump on this today, tomorrow, etc. I just think it would be
good for those who *could* make a difference to keep those goals in mind
when they continue. If you have