On 03/28/2016 02:55 PM, Mat Arye wrote:
This will run on EC2 (or other cloud service) machines and on SSDs.
Right now it runs on an m4.4xlarge with 64 GiB of RAM.
I am willing to pay for beefy instances if it means better performance.
On Mon, Mar 28, 2016 at 4:49 PM, Rob Sargent wrote:
On 03/28/2016 02:41 PM, Mat Arye wrote:
Hi All,
I am writing a program with a time-series-based, insert-mostly workload.
I need to make the system scalable to many thousands of inserts per second. One of
the techniques I plan to use is time-based table partitioning and I am
trying to figure out how large to make my time tables.
Does
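For later readers: a minimal sketch of time-based partitioning, using the declarative syntax added in PostgreSQL 10 (at the time of this thread the usual approach was inheritance plus CHECK constraints); table and column names here are illustrative, not from the thread:

```sql
-- Parent table, partitioned by a timestamp range (PostgreSQL 10+ syntax).
CREATE TABLE measurements (
    ts     timestamptz NOT NULL,
    device int         NOT NULL,
    value  double precision
) PARTITION BY RANGE (ts);

-- One partition per day; inserts into "measurements" are routed automatically.
CREATE TABLE measurements_2016_03_28 PARTITION OF measurements
    FOR VALUES FROM ('2016-03-28') TO ('2016-03-29');
```

Keeping each partition small enough that its indexes fit in RAM is the usual rule of thumb for insert-heavy time-series workloads.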
I deleted 7 rows from a table and then executed
VACUUM ANALYZE on the table,
but the table size has not changed.
I am using PostgreSQL 8.1.
Could anyone please tell me what the problem is?
Try reindexing the table.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
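For the record: plain VACUUM only marks dead row versions as reusable; it rarely returns space to the operating system, so the file size staying the same after deleting 7 rows is expected. A sketch, assuming a throwaway table named t:

```sql
-- Plain VACUUM makes the dead space reusable by future inserts,
-- but the file on disk usually stays the same size.
VACUUM ANALYZE t;
SELECT pg_relation_size('t');  -- typically unchanged after a small delete

-- VACUUM FULL rewrites the table and returns free space to the OS
-- (note: it takes an exclusive lock while it runs).
VACUUM FULL t;
SELECT pg_relation_size('t');  -- smaller, if there was reclaimable space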
On 25/03/2008, chuckee [EMAIL PROTECTED] wrote:
Thanks but I still get the error 'ERROR: relation capture does not exist'
when trying these two alternative functions you mention above. There is
definitely a table called 'capture' in my database!
Are you sure you're connected to the right database?
Hi,
I have two questions:
1) how do I find out the size, in MB, of a particular table (called
'capture' in this case).
I tried entering the SQL query SELECT (pg_tablespace_size('capture'));
The result was the following:
ERROR: tablespace capture does not exist
2) how do I find out where the
chuckee wrote:
1) how do I find out the size, in MB, of a particular table (called
'capture' in this case).
I tried entering the SQL query SELECT (pg_tablespace_size('capture'));
The result was the following:
ERROR: tablespace capture does not exist
You're looking for pg_relation_size, not pg_tablespace_size.
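The distinction trips people up often enough to be worth spelling out; a sketch using the 'capture' table from this thread (these functions exist from 8.1 on):

```sql
-- pg_tablespace_size() expects a *tablespace* name (e.g. pg_default),
-- which is why passing a table name raises "tablespace ... does not exist".
SELECT pg_tablespace_size('pg_default');

-- For a single table, use pg_relation_size() (main fork only)
-- or pg_total_relation_size() (table plus indexes and TOAST):
SELECT pg_size_pretty(pg_relation_size('capture'));
SELECT pg_size_pretty(pg_total_relation_size('capture'));
```

If these raise "relation capture does not exist", check which database and schema you are connected to (`\c` and `\dt` in psql).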
I have two questions.
How can I enter comments into a table? Where are the comments stored?
In psql, how can I know the size of a single table?
If you know, please reply.
Thanks in advance
On Fri, Mar 21, 2008 at 3:03 PM, lak [EMAIL PROTECTED] wrote:
I have two questions.
How can I enter comments into a table? Where the comments are stored?
What do you mean by comments in a table?
In psql How can I know the size of a single table?
SELECT pg_relation_size('mytable');
On Mar 21, 2008, at 4:33 AM, lak wrote:
I have two questions.
How can I enter comments into a table? Where the comments are stored?
Comments are created with the COMMENT sql command and, in pg, are
stored in pg_description.
In psql How can I know the size of a single table?
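To read a comment back without digging through pg_description by hand, the obj_description() and col_description() helpers do the OID lookup for you; a short sketch assuming a table named sometable:

```sql
-- Comments live in pg_description, keyed by the object's OID.
-- Table comment:
SELECT obj_description('sometable'::regclass, 'pg_class');

-- Comment on the first column:
SELECT col_description('sometable'::regclass, 1);
```

In psql, `\dd sometable` and `\d+ sometable` also display these comments.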
lak wrote:
I have two questions.
How can I enter comments into a table? Where the comments are stored?
Assuming you want comments on the table schema definitions, use COMMENT ON.
CREATE TABLE sometable (
-- definition
);
COMMENT ON TABLE sometable IS 'This is a table';
If that's not what
Pavan Deolasee wrote:
On Fri, Mar 21, 2008 at 3:03 PM, lak [EMAIL PROTECTED] wrote:
I have two questions.
How can I enter comments into a table? Where the comments are stored?
What do you mean by comments in a table ?
I think what you are referring to is detailed in
Pavan Deolasee [EMAIL PROTECTED] wrote:
On Fri, Mar 21, 2008 at 3:03 PM, lak [EMAIL PROTECTED] wrote:
I have two questions.
How can I enter comments into a table? Where the comments are stored?
What do you mean by comments in a table ?
Comments on a table or a column or on other
On Fri, Mar 21, 2008 at 10:12 PM, Andreas Kretschmer
[EMAIL PROTECTED] wrote:
Comments on objects can be set by:
comment on ... is 'comment';
Oh cool.. I did not such facility exists.
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
On Fri, Mar 21, 2008 at 10:25 PM, Pavan Deolasee
[EMAIL PROTECTED] wrote:
Oh cool.. I did not such facility exists.
I meant, I did not know such facility exists
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
I meant, I did not know such facility exists
When you use pgautodoc, it automatically grabs those comments and puts
them in the web page it creates... more coolness!
Guys,
I've created 2 sample tables with 1 column each - one of type
char(1) and one of type integer. After inserting an equal number
of rows (4M or more) the table sizes are exactly the same, while
I would expect the table with char(1) to be slightly smaller...
What's causing it? Thanks!
Server version is 8.3.
Best
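A likely explanation, sketched below: per-row overhead and alignment padding dominate, so a table with a single char(1) column and one with a single int4 column produce identically sized rows. pg_column_size() makes the datum sizes visible:

```sql
-- The datum sizes themselves differ:
SELECT pg_column_size('a'::char(1)) AS char1_bytes,  -- typically 2 on 8.3+ (short varlena header + 1 byte)
       pg_column_size(1::int4)      AS int4_bytes;   -- 4

-- ...but each heap row also carries a ~23-byte tuple header plus a
-- 4-byte line pointer, and the whole tuple is padded to a multiple of
-- MAXALIGN (usually 8 bytes), so both tables end up the same size on disk.
```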
How many rows does it take for select performance on a
table to degrade? I hope this question isn't too
ambiguous (i.e. lollipop licks). But seriously, 100,000?
1,000,000? 10,000,000? With just a regular lookup on
a unique index. Nothing crazy or aggregate.
EX: select * from bigtable where id =
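For what it's worth: a b-tree lookup on a unique index grows roughly with the logarithm of the table size, so a single-row fetch stays fast well past tens of millions of rows. EXPLAIN ANALYZE is the way to check on real data; a sketch with an assumed bigtable:

```sql
-- With a unique index on id, a point lookup should show an Index Scan
-- whose cost grows only logarithmically with the row count.
EXPLAIN ANALYZE
SELECT * FROM bigtable WHERE id = 12345;

-- If this shows a Seq Scan instead, check that the index exists
-- and that the table has been ANALYZEd recently.
```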
Dennis Gearon wrote:
Gaetano Mendola wrote:
Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following characteristics:
Great Idea! When I get that far, I will try it.
Gaetano Mendola wrote:
snip
By "partition in some way" I don't mean only splitting it into more tables. You
can use some tools available in Postgres and continue to see this table
as one, implemented behind the scenes with more tables.
One useful and
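The "one table implemented behind the scenes with more tables" approach from that era combined inheritance with CHECK constraints (planner-side constraint exclusion arrived a bit later, in 8.1); a minimal sketch with illustrative names:

```sql
-- Parent table that queries see as one relation.
CREATE TABLE big_m2m (a int, b int, created date);

-- Child tables inherit the columns; a CHECK constraint describes each
-- slice so the planner (with constraint_exclusion on) can skip
-- children that cannot match the WHERE clause.
CREATE TABLE big_m2m_2004_10 (
    CHECK (created >= '2004-10-01' AND created < '2004-11-01')
) INHERITS (big_m2m);

-- SELECT * FROM big_m2m scans the parent and all children as one table.
```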
Dennis Gearon wrote:
Google probably is much bigger, and on mainframes, and probably Oracle
or DB2.
Google uses a Linux cluster and their database is HUGE. I do not know
which database they use. I bet they built their own specifically for what they do.
Sincerely,
Joshua D. Drake
But the table
Actually, now that I think about it, they use a special table type where the INDEX is
also the DATUM. It is possible to recover the data out of the index listing. So go
down the index, then decode the indexed value - voila, a whole step saved. I have no
idea what engine these table types are
On 21. okt 2004, at 01:30, Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay,
etc.
Grrr. Geek wet-dream.
Just ONE many to many table could possibly have the following
characteristics:
3,600,000,000 records
each record is 9 fields of INT4/DATE
I
Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following
characteristics:
3,600,000,000 records
This is a really huge monster one, and if you don't partition that
table in some way I think
I am designing something that may be the size of yahoo, google, ebay, etc.
Just ONE many to many table could possibly have the following
characteristics:
3,600,000,000 records
each record is 9 fields of INT4/DATE
Other tables will have about 5 million records of about the same size.
There
Google probably is much bigger, and on mainframes, and probably Oracle or DB2.
But the table I am worried about is the one with 3.6 billion records.
Tino Wildenhain wrote:
Hi,
On Thu, 21 Oct 2004 at 1:30, Dennis Gearon wrote:
I am designing something that may be the size of yahoo, google, ebay,
Hi David,
I'd say that if it is a new app, develop it with 7.4 and use statement-level
triggers; otherwise you could use normal triggers and perform a count each
time, but that will slow things down dramatically.
Another option is to use cron and write a daemon/script to periodically check
the
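A sketch of the trigger-maintained count idea, with illustrative names (dollar-quoting as shown needs 8.0+; adjust the quoting for 7.4): a row-level trigger keeps a one-row counter table in sync so you never run a full COUNT(*):

```sql
CREATE TABLE items (id serial PRIMARY KEY, payload text);
CREATE TABLE items_count (n bigint NOT NULL);
INSERT INTO items_count VALUES (0);

CREATE FUNCTION bump_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE items_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE items_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger; the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_trg
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE bump_count();
```

Reading the count is then `SELECT n FROM items_count;`, at the cost of some contention on the counter row under heavy concurrent writes.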
Hi,
I have a table in my database which can grow very quickly. Is
there some way to partition the table so that when it reaches a certain
size the information in it is copied to a temporary table and the
original table is free again?
You can create a view and update the view definition
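The view-based variant of that suggestion, as a sketch with illustrative names: current rows go to a small active table, older rows are moved to an archive table, and a UNION ALL view presents both as one:

```sql
CREATE TABLE log_active  (ts timestamp, msg text);
CREATE TABLE log_archive (ts timestamp, msg text);

-- Readers query the view; its definition can be updated as tables rotate.
CREATE VIEW log_all AS
    SELECT * FROM log_active
    UNION ALL
    SELECT * FROM log_archive;

-- Periodically move old rows out of the active table:
INSERT INTO log_archive
    SELECT * FROM log_active WHERE ts < now() - interval '30 days';
DELETE FROM log_active WHERE ts < now() - interval '30 days';
```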
Is there a way to figure out which file represents which table?
IE: I have a file 21691 and I want to know what table it is.
Also, I've heard that pg splits tables when they get to about 1 gig.
I have a table that could grow to that. It is 700+ megs now. Will
performance/indexes be affected
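Mapping a data file back to a table goes through pg_class: the filename is the relation's relfilenode, and yes, tables over 1 GB are split into segment files named 21691, 21691.1, 21691.2, and so on:

```sql
-- Which relation does file 21691 belong to?
SELECT relname FROM pg_class WHERE relfilenode = 21691;

-- Going the other way: which file holds 'mytable'?
SELECT relfilenode FROM pg_class WHERE relname = 'mytable';
```

The segmenting is transparent to queries and indexes; it exists only to cope with filesystem file-size limits.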