I tried to use partitioning and am having problems with it,
as I get very bad performance. I cannot understand what I am doing wrong.
I set up master and child tables via inheritance, with a range CHECK by date
and with a trigger on INSERT, as described in the documentation.
I was happy with insertion s
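For reference, the inheritance-based scheme the documentation describes looks roughly like this. This is only a sketch; the table and column names are hypothetical:

```sql
-- Empty master table; queries are issued against it.
CREATE TABLE measurement (
    id      bigserial,
    logdate date NOT NULL,
    value   numeric
);

-- One child per month, each with a non-overlapping range CHECK
-- so constraint exclusion can prune it at plan time.
CREATE TABLE measurement_2017_07 (
    CHECK (logdate >= DATE '2017-07-01' AND logdate < DATE '2017-08-01')
) INHERITS (measurement);

CREATE TABLE measurement_2017_08 (
    CHECK (logdate >= DATE '2017-08-01' AND logdate < DATE '2017-09-01')
) INHERITS (measurement);
```

Pruning only happens when the query restricts logdate with plan-time constants and constraint_exclusion is enabled.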
On Fri, Jul 7, 2017 at 10:12 AM, Shaun Thomas
wrote:
>> I don't think the downstream dependencies will let that work without
>> rebuilding them as well. The drop fails (without a cascade), and the other
>> views and matviews that are built off of this all simply point to x_old.
>
> Wow, ouch. Ye
> I don't think the downstream dependencies will let that work without
> rebuilding them as well. The drop fails (without a cascade), and the other
> views and matviews that are built off of this all simply point to x_old.
Wow, ouch. Yeah, I'd neglected to consider dependent objects. Your
only "
>
>
> If you _can't_ do
>> that due to cloud restrictions, you'd actually be better off doing an
>> atomic swap.
>>
>> CREATE MATERIALIZED VIEW y AS ...;
>>
>> BEGIN;
>> ALTER MATERIALIZED VIEW x RENAME TO x_old;
>> ALTER MATERIALIZED VIEW y RENAME TO x;
>> DROP MATERIALIZED VIEW x_old;
>> COMMIT;
On Thu, Jul 6, 2017 at 11:25 AM, Shaun Thomas
wrote:
> > I'm curious if I'm overlooking other possible architectures or tools
> that might make this simpler to manage.
>
> One of the issues with materialized views is that they are based on
> views... For a concurrent update, it essentially perfor
> I'm curious if I'm overlooking other possible architectures or tools that
> might make this simpler to manage.
One of the issues with materialized views is that they are based on
views... For a concurrent update, it essentially performs a looped
merge, which can be pretty ugly. That's the price
I'm pondering approaches to partitioning large materialized views and was
hoping for some feedback and thoughts on it from the [perform] minds.
PostgreSQL 9.6.3 on Ubuntu 16.04 in the Google Cloud.
I have a foreign table with 250M or so rows and 50 or so columns, with a
UUID as the primary key.
On 5/28/15 9:31 AM, Ravi Krishna wrote:
explain select count(*) from tstesting.account where account_row_inst = 101 ;
Aggregate (cost=8.16..8.17 rows=1 width=0)
-> Index Only Scan using account_pkey on account (cost=0.14..8.16
rows=1 width=0)
Index Cond: (account_row_inst = 101)
EXPLAIN only sh
On Thu, May 28, 2015 at 10:31 AM, Ravi Krishna wrote:
> I am testing partitioning of a large table. I am doing a range
Sorry, I forgot to clarify. I am using INHERITS for partitioning, with
check constraints built for range partitioning.
--
Sent via pgsql-performance mailing list (pgsql-performance
I am testing partitioning of a large table. I am doing a range
partitioning based on a sequence col, which also acts as the primary
key. For inserts I am using a trigger which will redirect insert to
the right table based on the value of the primary key.
Based on my testing, I see that the insert
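The redirect trigger described above usually looks something like the following sketch (the table names and boundary values here are made up):

```sql
-- Routes each insert to the child whose range covers NEW.id.
CREATE OR REPLACE FUNCTION t_insert_redirect() RETURNS trigger AS $$
BEGIN
    IF NEW.id < 1000000 THEN
        INSERT INTO t_p0 VALUES (NEW.*);
    ELSIF NEW.id < 2000000 THEN
        INSERT INTO t_p1 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'id % out of range', NEW.id;
    END IF;
    RETURN NULL;  -- suppress the insert into the empty parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_insert_trig
BEFORE INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE t_insert_redirect();
```

The per-row plpgsql overhead is one reason inserts through the parent are noticeably slower than inserting directly into the correct child.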
Postgres Performance Wizards,
I am a few years into developing and operating a system underpinned by
Postgres that sees the arrival a significant number of events around
the clock, not an epic amount of data, but enough to be challenging to
me, in particular when system downtime is not very palata
On Sat, Jun 9, 2012 at 7:58 PM, Kevin Kempter
wrote:
> Hi All;
>
> We have a client that has a table where large blobs (bytea) are stored. the
> table has a key column that is numbers (like 112362) but unfortunately it's
> a varchar column so the blobs are accessed via queries like:
>
> select * f
Hi All;
We have a client that has a table where large blobs (bytea) are stored.
the table has a key column that is numbers (like 112362) but
unfortunately it's a varchar column so the blobs are accessed via
queries like:
select * from bigtable where keycol = '217765'
The primary reason we w
Thanks for pointing me to that article. I totally forgot that the postgres wiki
existed.
Updating is not an option at the moment, but we'll probably do so in the
future. Until then I can
live with the workaround.
Kind regards,
Marc
On 5 March 2012, 16:11, Marc Schablewski wrote:
> We have an optimizer problem regarding partitioned tables on 8.4.11.
...
> gdw=> explain select min( emg_id ) from edifactmsgpart;
> QUERY PLAN
>
We have an optimizer problem regarding partitioned tables on 8.4.11.
We started partitioning a large table containing approx. 1 billion records.
So far, there is only the master table, called edifactmsgpart (which is empty)
and 1 partition,
called edifactmsgpart_pact.
There is a bigint column ca
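On 8.4 the planner cannot push min()/max() down into the partitions, so the aggregate walks every child. A common workaround is an ORDER BY ... LIMIT 1 per partition, which can use each child's index; sketched here with the partition named in the thread:

```sql
-- Index-assisted minimum from one child; with several partitions,
-- UNION ALL one such subselect per child and take min() of the results.
SELECT min(m)
FROM (
    SELECT (SELECT emg_id
              FROM edifactmsgpart_pact
             ORDER BY emg_id
             LIMIT 1) AS m
) AS per_partition;
```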
Em 13-01-2012 17:05, Josh Berkus escreveu:
On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:
Also, (2) only really works if you're going to obsolesce (remove)
archive records after a certain period of time. Otherwise the
sub-partitioning hurts performance.
Is there any moves to include
On 1/13/12 2:44 AM, alexandre - aldeia digital wrote:
>>
>> Also, (2) only really works if you're going to obsolesce (remove)
>> archive records after a certain period of time. Otherwise the
>> sub-partitioning hurts performance.
>>
>
> Is there any moves to include the "easy" table partitioning
Also, (2) only really works if you're going to obsolesce (remove)
archive records after a certain period of time. Otherwise the
sub-partitioning hurts performance.
Is there any moves to include the "easy" table partitioning in the 9.2
version ?
Mike,
> Is it practical to partition on the status column and, eg, use triggers to
> move a row between the two partitions when status is updated? Any
> surprises to watch for, given the status column is actually NULL for active
> data and contains a value when archived?
When I've done this befo
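The move-on-update approach asked about above is typically sketched like this; table and column names are invented, and it glosses over concurrency details (concurrent updates of the same row, triggers firing on the DELETE/INSERT pair):

```sql
-- When status flips from NULL (active) to a value (archived),
-- re-home the row in the archive child and suppress the UPDATE.
CREATE OR REPLACE FUNCTION move_to_archive() RETURNS trigger AS $$
BEGIN
    IF OLD.status IS NULL AND NEW.status IS NOT NULL THEN
        DELETE FROM big_active WHERE id = OLD.id;
        INSERT INTO big_archived VALUES (NEW.*);
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER big_active_move
BEFORE UPDATE ON big_active
FOR EACH ROW EXECUTE PROCEDURE move_to_archive();
```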
Mike Blackwell wrote:
> We have a set of large tables. One of the columns is a status indicator
> (active / archived). The queries against these tables almost always include
> the status, so partitioning against that seems to make sense from a logical
> standpoint, especially given most of the
We have a set of large tables. One of the columns is a status indicator
(active / archived). The queries against these tables almost always
include the status, so partitioning against that seems to make sense from
a logical standpoint, especially given most of the data is "archived" and
most of
> -Original Message-
> From: Ben [mailto:midfi...@gmail.com]
> Sent: Friday, October 29, 2010 12:16 PM
> To: Igor Neyman
> Cc: pgsql-performance@postgresql.org
> Subject: Re: partitioning question 1
>
> On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:
>
> >> is my intuition completely
On Oct 29, 2010, at 7:38 AM, Igor Neyman wrote:
>> is my intuition completely off on this?
>>
>> best regards, ben
>>
>
> If your SELECT retrieves substantial amount of records, table scan could
> be more efficient than index access.
>
> Now, if while retrieving large amount of records "WHERE
> -Original Message-
> From: Ben [mailto:midfi...@gmail.com]
> Sent: Thursday, October 28, 2010 12:37 PM
> To: pgsql-performance@postgresql.org
> Subject: partitioning question 1
>
> hello --
>
> my last email was apparently too long to respond to so i'll
> split it up into shorter piec
whoops, didn't see the i=9 (linebreak! linebreak!)
nonetheless that is a static constant constraint on the column i, and i was
asking if constraint exclusions would work for dynamic constraints (like those
derived from a table joined against.) so for example the bar table has only
0-9 in its h
On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:
> On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:
> >
> > My tests show you are incorrect:
> >
> >
> > part_test=# explain analyze select * from foo join bar using (i) where
> > i=9;
> >QUERY
>
On Thu, 2010-10-28 at 12:59 -0700, Ben wrote:
> explain analyze select * from foo join bar using (i);
vs
explain analyze select * from foo join bar using (i) where i=9;
On Oct 28, 2010, at 12:44 PM, Joshua D. Drake wrote:
>
> My tests show you are incorrect:
>
>
> part_test=# explain analyze select * from foo join bar using (i) where
> i=9;
>QUERY
> PLAN
> -
On Thu, 2010-10-28 at 12:25 -0700, Ben wrote:
> i think we are talking about two different things here: the constraints on
> the table, and the where-clause constraints in a query which may or may not
> trigger constraint exclusion. i understand that table constraints have to be
> constants --
On Oct 28, 2010, at 11:50 AM, Joshua D. Drake wrote:
>>> Yes the constraints have to be static. Not sure about the operator
>>> question honestly.
>>
>> this seems to severely restrict their usefulness -- our queries are data
>> warehouse analytical -type queries, so the constraints are usually
On Thu, 2010-10-28 at 11:44 -0700, Ben wrote:
> > Yes the constraints have to be static. Not sure about the operator
> > question honestly.
>
> this seems to severely restrict their usefulness -- our queries are data
> warehouse analytical -type queries, so the constraints are usually
> data-d
thanks for the prompt response. some comments / questions below :
On Oct 28, 2010, at 10:31 AM, Joshua D. Drake wrote:
>> ...constraint exclusion is able to eliminate table partitions. the I/O
>> advantages of having queries target small subtables are the same as the I/O
>> advantages of clust
On Thu, 2010-10-28 at 09:36 -0700, Ben wrote:
> hello --
>
> my last email was apparently too long to respond to so i'll split it up into
> shorter pieces. my first question :
>
> my understanding of how range partitioning and constraint exclusion works
> leads me to believe that it does not b
hello --
my last email was apparently too long to respond to so i'll split it up into
shorter pieces. my first question :
my understanding of how range partitioning and constraint exclusion works leads
me to believe that it does not buy any query performance that a clustered index
doesn't alr
On Wednesday 09 September 2009 07:56:53 Kenneth Cox wrote:
> In case you aren't comfortable running unreleased planner patches from
> pgsql-hackers, a workaround was discussed on this list recently:
>
> http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
>
> On Wed, 09 Sep 2009 06
In case you aren't comfortable running unreleased planner patches from
pgsql-hackers, a workaround was discussed on this list recently:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
On Wed, 09 Sep 2009 06:05:22 -0400, Heikki Linnakangas
wrote:
Kevin Kempter wrote:
Kevin Kempter wrote:
> Hi all I have a large table (>2billion rows) that's partitioned by date based
> on an epoch int value. We're running a select max(id) where id is the PK. I
> have a PK index on each of the partitions, no indexes at all on the base
> table.
>
> If I hit a partition table
Hi all I have a large table (>2billion rows) that's partitioned by date based
on an epoch int value. We're running a select max(id) where id is the PK. I
have a PK index on each of the partitions, no indexes at all on the base
table.
If I hit a partition table directly I get an index scan as e
If I can't find an answer in the next day or so I'll crack open OpenJPA and
disable that check. Its a very simple, if ugly, hack.
--Nik
On 5/12/08, Neil Peter Braggio <[EMAIL PROTECTED]> wrote:
>
> I have the same problem in PG 8.2
>
> To resolve this issue I had to create a new table with the
I have the same problem in PG 8.2
To resolve this issue I had to create a new table with the same
structure than the partitioned table with a trigger for insert and
update. All the operations the application have to do are directed to
this new table.
When a new record is inserted in the new table
I figure this subject belongs on the performance mailing list because it is
about partitioning, which is a performance issue.
I'm working on partitioning some of the tables used by an application that
uses OpenJPA. It turns out that OpenJPA is sensitive to the numbers
returned when you do an inse
Tore Lukashaugen wrote:
Hi,
I am new to postgres having worked with Oracle in the past. I am interested
in understanding Postgres's table partition functionality better.
Specifically, I have a third party application running against my postgres
database, but the database is becoming rather la
Hi,
I am new to postgres having worked with Oracle in the past. I am interested
in understanding Postgres's table partition functionality better.
Specifically, I have a third party application running against my postgres
database, but the database is becoming rather large to maintain. I am
thi
Abu Mushayeed wrote:
I have partitioned a table based on period (e.g., cdate >=
'2007-01-01'::date and cdate <= '2007-03-31'::date).
Now, I am issuing a query like cdate >= CURRENT_DATE - 1 and cdate <=
CURRENT_DATE, and it scans all the partitions. But if I do cdate >=
'2007-01-01'::date and cdate <=
> Can anybody help me out
>
> I just wanted to know what the configuration settings would be for a
> partitioned table, so as to make inserts faster on the partitioned tables.
Well, that depends. Many questions are important here. Will you insert
into several partitions or only to a single one? Do yo
I have partitioned a table based on period (e.g., cdate >= '2007-01-01'::date
and cdate <= '2007-03-31'::date).
Now, I am issuing a query like cdate >= CURRENT_DATE - 1 and cdate <=
CURRENT_DATE, and it scans all the partitions. But if I do cdate >=
'2007-01-01'::date and cdate <= '2007-03-31'::date it p
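This is expected behaviour on these releases: constraint exclusion happens at plan time, and CURRENT_DATE is only stable, not immutable, so its value is unknown when partitions are pruned. The usual workaround is to get a literal date into the query text (mytable below is a hypothetical stand-in for the partitioned table):

```sql
-- Literal dates are plan-time constants, so exclusion can prune:
SELECT count(*)
  FROM mytable
 WHERE cdate >= DATE '2007-01-01'
   AND cdate <= DATE '2007-03-31';

-- Inside a plpgsql function, EXECUTE re-plans with the current
-- value folded in as a constant:
--   EXECUTE 'SELECT count(*) FROM mytable WHERE cdate >= '
--        || quote_literal(CURRENT_DATE - 1)
--        || ' AND cdate <= ' || quote_literal(CURRENT_DATE);
```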
Can anybody help me out
I just wanted to know what the configuration settings would be for a
partitioned table, so as to make inserts faster on the partitioned tables.
--
Regards
Gauri
On Thu, 11 Jan 2007 08:18:39 -0600, "Adam Rich" <[EMAIL PROTECTED]> wrote:
>
> Subpartitions are just a way to break (parent) partitions up into
> smaller pieces. Those of course can be moved to other disks
> just like the main partitions.
Ah, didn't know that (i just wondered why i need a s
o other disks
just like the main partitions.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bernd
Helmle
Sent: Thursday, January 11, 2007 6:51 AM
To: Scott Marlowe
Cc: Jim C. Nasby; Jeremy Haile; pgsql-performance@postgresql.org
Subject: Re: [PERF
Well - whether or not MySQL's implementation of partitioning has some
deficiency, it sure is a lot easier to set up than PostgreSQL. And I
don't think there is any technical reason that setting up partitioning
on Postgres couldn't be very easy and still be robust.
On Thu, 11 Jan 2007 13:59:20 +01
> On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
> >> Take a look at the set of partitioning functions I wrote shortly
after
> >> the 8.1 release:
> >>
> >> http://www.studenter.hb.se/~arch/files/part_functions.sql
> >>
> >> You could probably work something out using those funct
On Wed, 10 Jan 2007 15:30:16 -0600, Scott Marlowe <[EMAIL PROTECTED]> wrote:
[...]
>
> And I don't think the mysql partition supports tablespaces either.
>
MySQL supports distributing partitions over multiple disks via the SUBPARTITION
clause [1].
I leave it to you whether their syntax is
On Wed, 2007-01-10 at 15:15, Jeremy Haile wrote:
> You can do list partitioning in MySQL:
> http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html
>
> My comment was not meant as a criticism of PostgreSQL's current state -
> I'm glad that it has partitioning. I'm simply wondering if there
You can do list partitioning in MySQL:
http://dev.mysql.com/doc/refman/5.1/en/partitioning-list.html
My comment was not meant as a criticism of PostgreSQL's current state -
I'm glad that it has partitioning. I'm simply wondering if there are
any plans of adopting a more user-friendly syntax in th
On Wed, 2007-01-10 at 15:09, Jim C. Nasby wrote:
> On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
> > This seems so much more intuitive and simpler than what is required to
> > set it up in PostgreSQL. Does PostgreSQL's approach to table
> > partitioning have any advantage over MySQ
On Wed, Jan 10, 2007 at 03:28:00PM -0500, Jeremy Haile wrote:
> This seems so much more intuitive and simpler than what is required to
> set it up in PostgreSQL. Does PostgreSQL's approach to table
> partitioning have any advantage over MySQL? Is a "nicer" syntax planned
> for Postgres?
The focu
> > > From: [EMAIL PROTECTED]
> > [mailto:pgsql-performance-
> > > [EMAIL PROTECTED] On Behalf Of Arnau
> > > Sent: den 5 januari 2007 12:02
> > > To: pgsql-performance@postgresql.org
> > > Subject: [PERFORM] Partitioning
> > >
> > >
On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
Take a look at the set of partitioning functions I wrote shortly after
the 8.1 release:
http://www.studenter.hb.se/~arch/files/part_functions.sql
You could probably work something out using those functions (as-is, or
as inspirati
> > -Original Message-
> > From: [EMAIL PROTECTED]
> [mailto:pgsql-performance-
> > [EMAIL PROTECTED] On Behalf Of Arnau
> > Sent: den 5 januari 2007 12:02
> > To: pgsql-performance@postgresql.org
> > Subject: [PERFORM] Partitioning
> &g
/pgagent.html)
/Mikael
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
> [EMAIL PROTECTED] On Behalf Of Arnau
> Sent: den 5 januari 2007 12:02
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Partitioning
>
> Hi all,
>
>I
Hi all,
I'm not sure if this question fits in the topic of this list.
I'm interested in partitioning and it's the first time I'd use it.
There is an issue I don't know how you'd handle. Let's say I'm
interested in storing monthly-based statistical data like the example of
http://www.postgresql
Tom Lane <[EMAIL PROTECTED]> writes:
> It's usually better to use partitioning rules that have something to
> do with the WHERE-clauses you'd be using anyway. For instance, try
> to partition on ranges.
I agree and tried to create new partitioned tables. But now I ran into
some other performance
Martin Lesser <[EMAIL PROTECTED]> writes:
> I try to partition a large table (~120 million rows) into 50 smaller
> tables but using the IMO immutable %-function constraint exclusion
> does not work as expected:
The constraint exclusion mechanism is not as bright as you think.
There are some very lim
I try to partition a large table (~120 million rows) into 50 smaller
tables but using the IMO immutable %-function constraint exclusion
does not work as expected:
CREATE TABLE tt_m (id1 int, cont varchar);
CREATE TABLE tt_0 (check (id1 % 50 = 0)) INHERITS (tt_m);
CREATE TABLE tt_1 (check (id1 % 50 = 1)) INHERITS (tt_m);
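Note also that, even where such CHECK constraints are accepted, exclusion can only skip a child when the query repeats the same expression with a contradictory constant; a plain equality on id1 gives the planner nothing to refute. A hedged sketch, continuing the tt_* naming above:

```sql
-- In principle refutable: "id1 % 50 = 37" contradicts the CHECK
-- "id1 % 50 = 0" on tt_0, so children other than tt_37 can be skipped.
SELECT * FROM tt_m WHERE id1 % 50 = 37 AND id1 = 1237;

-- Not refutable: the planner does not derive "id1 % 50 = 37"
-- from "id1 = 1237", so all 50 children are scanned.
SELECT * FROM tt_m WHERE id1 = 1237;
```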
On Sun, Mar 19, 2006 at 01:31:42PM +0100, Antoine wrote:
> Hi,
> Is there any work on the cards for implementing other partitioning
> strategies? I see mysql 5.1 will have support for hashes and stuff but
> didn't see anything in the todos for postgres.
You'd have to provide a pretty convincing ar
Hi,
Is there any work on the cards for implementing other partitioning
strategies? I see mysql 5.1 will have support for hashes and stuff but
didn't see anything in the todos for postgres.
Cheers
Antoine
--
This is where I should put some witty comment.
---(end of broadcast
On Tue, 2006-02-07 at 18:59 -0600, Jim C. Nasby wrote:
> I'm honestly somewhat surprised someone hasn't run into this problem
> with partitioning yet; or maybe everyone who needs to do long
> transactions just shoves those off to slony slaves...
All DDL takes locks, on all DBMS.
Best Regards, Si
On Tue, Feb 07, 2006 at 10:09:02PM +, Simon Riggs wrote:
> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
>
> > > > 1- long running report is running on view
> > > > 2- continuous inserters into view into a table via a rule
> > > > 3- truncate or rule change occur
ql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
>
> > > > 1- long running report is running on view
> > > > 2- continuous inserters into view into a table
> via a rule
&
At 05:09 PM 2/7/2006, Simon Riggs wrote:
I'd be disinclined to using the locking system as a scheduling tool.
I Agree with Simon. Using the locking system for scheduling feels
like a form of Programming by Side Effect.
Ron
On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:
> > > 1- long running report is running on view
> > > 2- continuous inserters into view into a table via a rule
> > > 3- truncate or rule change occurs, taking an exclusive lock.
> > > Must wait for #1 to finish.
> > > 4- new reports and
On Wed, Feb 01, 2006 at 10:20:21AM -0500, Tom Lane wrote:
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Do you mean it would be impossible to change the code so that existing
> > selects continue to use the pre-truncated table until they commit?
>
> Yes, because that table won't exist any more (a
, 2006 7:44 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> Hi, Marc,
>
> Marc Morin wrote:
>
> > 1- long running report is running on view
> > 2- continuous inserters into v
Hi, Marc,
Marc Morin wrote:
> 1- long running report is running on view
> 2- continuous inserters into view into a table via a rule
> 3- truncate or rule change occurs, taking an exclusive lock.
> Must wait for #1 to finish.
> 4- new reports and inserters must now wait for
"Marc Morin" <[EMAIL PROTECTED]> writes:
> Do you mean it would be impossible to change the code so that existing
> selects continue to use the pre-truncated table until they commit?
Yes, because that table won't exist any more (as in the file's been
unlinked) once the TRUNCATE commits.
> The upd
m: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 01, 2006 12:50 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Would l
Marc Morin wrote:
Under both these circumstances (truncate and create / replace rule) the
locking behaviour of these commands can cause locking problems for us.
The scenario is best illustrated as a series of steps:
1- long running report is running on view
2- continuous inser
"Marc Morin" <[EMAIL PROTECTED]> writes:
> Would like to understand the implications of changing postgres'
> code/locking for rule changes and truncate to not require locking out
> select statements?
It won't work...
regards, tom lane
We have a large database system designed around partitioning. Our
application is characterized with
- terabytes of data
- billions of rows in dozens of base tables (and 100s of paritions)
- 24x7 insert load of new data that cannot be stopped, data is time
sensitive.
- periodic reports that can h
Yes, that's how I solved it... and I totally agree that it's hard for the
planner to guess what to do on the partitions. But maybe there should be
something in the docs explaining the limitations ...
I'm only asking for the biggest 100 ids from the table, so I thought maybe the
planner would ta
I just saw that there is no where clause in the query, that you had
fed to explain plan.
you need to include a where clause based on id_machine column to see the effect.
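Concretely, assuming id_machine is the column the partitions' CHECK constraints are defined on (and with parent_table as a stand-in name), the effect should be visible like this:

```sql
SET constraint_exclusion = on;

-- No restriction on the partition key: every child appears in the plan.
EXPLAIN SELECT * FROM parent_table;

-- With a constant restriction on the partition key, children whose
-- CHECK contradicts it are dropped from the plan.
EXPLAIN SELECT * FROM parent_table WHERE id_machine = 12;
```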
On 12/13/05, Pandurangan R S <[EMAIL PROTECTED]> wrote:
> Did you set constraint_exclusion = true in postgresql.conf file?
>
> O
Did you set constraint_exclusion = true in postgresql.conf file?
On 12/13/05, Marc Cousin <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've been working on trying to partition a big table (I've never partitioned a
> table in any other database till now).
> Everything went ok, except one query that didn't
Hi,
I've been working on trying to partition a big table (I've never partitioned a
table in any other database till now).
Everything went ok, except one query that didn't work afterwards.
I've put the partition description, indexes, etc ..., and the explain plan
attached.
The query is extremel
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an RDBMS) and the
app server than you are between the app server and the client, an out
of band 100Mb network for session information is plenty of bandwidth.
So if you count on a m
Ross,
> Memcached is a PG memory store, I gather,
Nope. It's a hyperfast resident-in-memory hash that allows you to stash stuff
like user session information and even materialized query set results.
Thanks to SeanC, we even have a plugin, pgmemcached.
> but...what is squid, lighttpd?
> anyt
On 12 May 2005, at 18:33, Josh Berkus wrote:
People,
In general I think your point is valid. Just remember that it
probably
also matters how you count page views. Because technically images
are a
separate page (and this thread did discuss serving up images). So if
there are 20 graphics on a sp
People,
> In general I think your point is valid. Just remember that it probably
> also matters how you count page views. Because technically images are a
> separate page (and this thread did discuss serving up images). So if
> there are 20 graphics on a specific page, that is 20 server hits just
100 hits a second = 8,640,000 hits a day. I work on a site which does >
100 million dynamic pages a day. In comparison Yahoo probably does >
100,000,000,000 (100 billion) views a day
if I am interpreting Alexa's charts correctly. Which is about
1,150,000 a second.
Read the help on Alexa
Alex Turner wrote:
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there ar
Ok - my common sense alarm is going off here...
There are only 6.446 billion people worldwide. 100 Billion page views
would require every person in the world to view 18 pages of yahoo
every day. Not very likely.
http://www.internetworldstats.com/stats.htm
suggests that there are around 1 billio
On 12 May 2005, at 15:08, Alex Turner wrote:
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the
internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the internet.
If you make the assertion that you are transferring equal or less
session data between your session server (let's say an RDBMS) and the
app server than you are between
> machines. Which has its own set of issues entirely. I am not entirely
> sure that memcached actually does serialize data when it's committed into
> memcached either, although I could be wrong, I have not looked at the
I think it does, ie. it's a simple mapping of [string key] => [string
value].
On 11 May 2005, at 23:35, PFC wrote:
However, memcached (and for us, pg_memcached) is an excellent way
to improve
horizontal scalability by taking disposable data (like session
information)
out of the database and putting it in protected RAM.
So, what is the advantage of such a system ve
However, memcached (and for us, pg_memcached) is an excellent way to
improve
horizontal scalability by taking disposable data (like session
information)
out of the database and putting it in protected RAM.
So, what is the advantage of such a system versus, say, a "sticky
sessions" system wh
On Wed, May 11, 2005 at 08:57:57AM +0100, David Roussel wrote:
> For an interesting look at scalability, clustering, caching, etc for a
> large site have a look at how livejournal did it.
> http://www.danga.com/words/2004_lisa/lisa04.pdf
>
> They have 2.6 Million active users, posting 200 new blog
On Wed, 2005-05-11 at 17:13 +0800, Christopher Kings-Lynne wrote:
> > Alex Stapleton wrote
> > Be more helpful, and less arrogant please.
>
> Simon told you all the reasons clearly and politely.
Thanks Chris for your comments.
PostgreSQL can always do with one more developer and my sole intent
Alex Stapleton <[EMAIL PROTECTED]> writes:
> Acceptable Answers to 'So, when/is PG meant to be getting a decent
> partitioning system?':
...
> 3. You're welcome to take a stab at it, I expect the community would
> support your efforts as well.
As long as we're being curt all around, this one'
David,
> It's interesting that the solution livejournal have arrived at is quite
> similar in ways to the way google is set up.
Yes, although again, they're using memcached as pseudo-clustering software,
and as a result are limited to what fits in RAM (RAM on 27 machines, but it's
still RAM).