Ronan McGlue writes:
> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way to estimate the size of a mysqldump such that the estimate
>> is always larger than the real size?
>>
>> So far, I have fou
On 28.11.18 at 10:00, Olivier wrote:
> Is there a way to estimate the size of a mysqldump such that the estimate
> is always larger than the real size?
keep in mind that a dump contains tons of SQL statements that do not exist
in that form in the data itself
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
Hi Olivier,
On 28/11/2018 8:00 pm, Olivier wrote:
Hello,
Is there a way to estimate the size of a mysqldump such that the estimate
is always larger than the real size?
So far, I have found:
mysql -s -u root -e "SELECT SUM(data_length) Data_BB
Hello,
Is there a way to estimate the size of a mysqldump such that the estimate
is always larger than the real size?
So far, I have found:
mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
information_schema.tables WHERE table_schema N
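The truncated query above can be paired with a padding factor so the estimate always stays above the real dump size. A minimal sketch; the reconstructed query text and the 2x overhead factor are assumptions, not from the thread:

```python
# Sketch: upper-bound estimate for a mysqldump. The idea is that the dump
# is roughly bounded by table data plus per-row INSERT statement overhead,
# so we pad the raw byte count from information_schema with a safety factor.

ESTIMATE_QUERY = """
SELECT SUM(data_length + index_length) AS total_bytes
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'performance_schema');
"""

def dump_size_upper_bound(total_bytes: int, overhead_factor: float = 2.0) -> int:
    """Pad the raw data size so the estimate stays above the real dump size.

    The factor is a hypothetical safety margin: numeric and binary columns
    expand when serialized as SQL text, and each row carries INSERT syntax.
    Tune it empirically against a real dump of your data.
    """
    return int(total_bytes * overhead_factor)
```

For a 600MB database this would budget 1.2GB of dump space, which errs on the safe side as the original question asks.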
Not sure about the size of your dump, but have you tried to set the new
value on both the server and client side? You can increase max_allowed_packet
up to 1G. Let us know after you've tried that; maybe others have another
solution to share...
--
*Wagner Bianchi, +55.31.8654.9510*
Oracle ACE
Hi,
When we tried to restore the dump file, we got an error like "Got a
packet bigger than max_allowed_packet". We then increased the max_allowed_packet
variable and passed it along with the MySQL restore command.
mysql --max_allowed_packet=128M -u -p < /path/file.sql
After i
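Before rerunning a restore, the dump can be pre-checked for oversized statements; a sketch, where the helper names and the 128M default are assumptions mirroring the value used above:

```python
def longest_statement_bytes(dump_lines) -> int:
    """Return the byte length of the longest line in a dump.

    mysqldump emits one multi-row INSERT per line by default, so a line
    longer than the server's max_allowed_packet will fail on restore with
    "Got a packet bigger than 'max_allowed_packet'".
    """
    return max(len(line.encode("utf-8")) for line in dump_lines)

def fits_in_packet(dump_lines, max_allowed_packet: int = 128 * 1024 * 1024) -> bool:
    """True if every statement line would fit within the packet limit."""
    return longest_statement_bytes(dump_lines) <= max_allowed_packet
```

Streaming the file line by line keeps memory flat even for multi-gigabyte dumps.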
- Original Message -
> From: "Rick James"
Hey Rick,
Thanks for your thoughts.
> * Smells like some huge LONGTEXTs were INSERTed, then DELETEd.
> Perhaps just a single one of nearly 500M.
I considered that, too; but I can see the on-disk size grow over a period of a
InnoDB, the LONGTEXT will usually be stored separately, thereby making a
full table scan relatively efficient.
> -Original Message-
> From: Johan De Meersman [mailto:vegiv...@tuxera.be]
> Sent: Friday, February 15, 2013 4:21 AM
> To: mysql.
> Subject: MyISAM table size vs a
egradation isn't perfectly traceable to a single point in
time; the slowlog does show that query being slow on occasion; however it seems
that it is intermittent until it reaches a point of no return, when the queries
get slow enough that a cascade of pending connections happens until we ru
application, causing it to compose large SQL statements to MySQL.
> Statements over a meg in size on one line. We disabled those devices and the
> problems have gone away.
>
> Thanks.
>
> Kent.
> - Original Message -
> From: Rick James
> Sent: 10/17/12 04:50 PM
> To
Thanks for the replies.
After examining the logs carefully, we found several devices sending SNMP
traps to the application, causing it to compose large SQL statements to MySQL:
statements over a meg in size on a single line. We disabled those devices and
the problems have gone away.
Thanks.
Kent
om
> Subject: Unexpected gradual replication log size increase.
>
> Hi,
>
> I have a MySQL replication setup that has been running for over 6 months and
> recently we had an outage. We fixed it, brought the server back up and we
> spotted something peculiar and worrying. The replication log
Hi,
I have a MySQL replication setup that has been running for over 6 months and
recently we had an outage. We fixed it, brought the server back up, and we
spotted something peculiar and worrying. The replication logs are growing in
size, all of a sudden, since Tuesday 9th Oct, based on clues from monitoring
ivanna...@spanservices.com]
> Sent: Monday, May 21, 2012 6:04 AM
> To: mysql@lists.mysql.com
> Subject: Reducing ibdata1 file size
>
> Hi ,
>
> I am trying to reduce the ibdata1 data file in MySQL.
> In MySQL data directory the ibdata1 data file is always increasing
> when
Okay, my mistake. I should write precisely when communicating with precise
people. :-)
What I meant was, dumping and importing is the "common knowledge" way of
"virtually" shrinking innodb files.
So, now that I've conceded the meta-argument, what do you think of the linked
procedure for reduci
Despite the conventional wisdom, converting to innodb_file_per_table will not
necessarily help you. It depends on your situation. If most of your growth is
in a single table, you will only have transferred the problem from the ibdata1
file to a new file. The ibdata1 file may also continue to
Jan,
that's not common wisdom; InnoDB datafiles ***never*** shrink.
What's in the blog from the 22nd of May is a workaround, one of many.
If you ask me, my favourite is to use a standby instance and work on that.
Claudio
2012/5/22 Jan Steinman
> > From: Claudio Nanni
> >
> > No, as already expl
- Original Message -
> From: "Jan Steinman"
>
> That's been the common wisdom for a long time.
>
> However, this just popped up on my RSS reader. I haven't even looked
> at it, let alone tried it.
In brief: convert all your tables to MyISAM, delete the ibdata file during a
restart, convert
> From: Claudio Nanni
>
> No, as already explained, it is not possible, Innodb datafiles *never* shrink.
That's been the common wisdom for a long time.
However, this just popped up on my RSS reader. I haven't even looked at it, let
alone tried it.
I'm interested in what the experts think...
Or it could be that your buffer size is too small, as MySQL is spending a lot
of CPU time compressing and uncompressing.
On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar wrote:
> Is your system READ intensive or WRITE intensive?
> If you have enabled compression for WRITE-intensive data, then CP
Is your system READ intensive or WRITE intensive?
If you have enabled compression for WRITE-intensive data, then the CPU cost
will be higher.
On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman wrote:
>
>
> - Original Message -
> > From: "Reindl Harald"
> >
> > interesting because i have here a d
- Original Message -
> From: "Reindl Harald"
>
> interesting because i have here a dbmail-server with no CPU load and
> innodb with compression enabled since 2009 (innodb plugin in the past)
Ah, this is a mixed-use server that also receives data from several Cacti
installs.
> [--] Da
Am 22.05.2012 13:59, schrieb Johan De Meersman:
> - Original Message -
>> From: "Reindl Harald"
>>
>> 95% of mysqld-installations have no problem with
>> innodb_file_per_table so DEFAULTS should not be for 5%
>
> There is "no problem", and there is "better practice"
> and if your syste
- Original Message -
> From: "Reindl Harald"
>
> 95% of mysqld-installations have no problem with
> innodb_file_per_table so DEFAULTS should not be for 5%
There is "no problem", and there is "better practice" - and if your system is
I/O bound it makes sense to minimize on-disk fragmenta
lled RAM)
[OK] Slow queries: 0% (3/455M)
[OK] Highest usage of available connections: 18% (93/500)
[OK] Key buffer size / total MyISAM indexes: 128.0M/76.4M
[OK] Key buffer hit rate: 98.6% (40M cached / 559K reads)
- Original Message -
> From: "Ananda Kumar"
> yes, Barracuda is limited to FILE_PER_TABLE.
Ah, I didn't realise that. Thanks :-)
> Yes, it's true there is a CPU cost, but it is small.
> To gain some you have to lose some.
I've only got it enabled on a single environment, but enabling it added
Am 22.05.2012 13:40, schrieb Johan De Meersman:
> - Original Message -
>> From: "Reindl Harald"
>> Subject: Re: Reducing ibdata1 file size
>>
>> well but for what price?
>> the problem is the DEFAULT
>>
>> users with enough knowl
- Original Message -
> From: "Reindl Harald"
> Subject: Re: Reducing ibdata1 file size
>
> well but for what price?
> the problem is the DEFAULT
>
> users with enough knowledge could easily change the default
> currently what is happening is that mostly
yes, Barracuda is limited to FILE_PER_TABLE.
Yes, it's true there is a CPU cost, but it is small.
To gain some you have to lose some.
On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman wrote:
> --
>
> *From: *"Ananda Kumar"
>
>
> yes, there some new features you can use to im
- Original Message -
> From: "Ananda Kumar"
> yes, there are some new features you can use to improve performance.
> If you are using MySQL 5.5 and above, with file per table, you can
> enable the Barracuda file format, which in turn provides data compression
> and a dynamic row format, which will
In regards to why the file grows large, you may wish to read some of
the posts on the MySQL Performance Blog, which has quite a bit of
information on this, such as
http://www.mysqlperformanceblog.com/2010/06/10/reasons-for-run-away-main-innodb-tablespace/
--
MySQL General Mailing List
For list ar
Am 22.05.2012 13:19, schrieb Johan De Meersman:
> - Original Message -
>> From: "Reindl Harald"
>>
>> as multiple said the default of a single table space
>> is idiotic in my opinion, but however this is well
>> known over years
>
> I suppose there's a certain logic to favouring one-sho
Yes, there are some new features you can use to improve performance.
If you are using MySQL 5.5 and above, with file per table, you can enable
the Barracuda file format, which in turn provides data compression
and a dynamic row format, which will reduce IO.
For more benefits read the doc
On Tue, May 22, 20
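What enabling that might look like on a 5.5-era server, as a sketch; the table name and KEY_BLOCK_SIZE are illustrative, and innodb_file_per_table must already be on:

```python
# Sketch: statements to enable Barracuda compression on MySQL 5.5.
# The setup statements and the ALTER form follow the 5.5-era syntax;
# table names and the default block size here are hypothetical.

SETUP = [
    "SET GLOBAL innodb_file_format = 'Barracuda';",
    "SET GLOBAL innodb_file_per_table = 1;",
]

def compress_table_sql(table: str, key_block_size: int = 8) -> str:
    """Build the ALTER that rebuilds a table with a compressed row format.

    Rebuilding copies the table, so expect downtime proportional to its size.
    """
    return (f"ALTER TABLE `{table}` "
            f"ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE={key_block_size};")
```

As the thread notes, compression trades CPU for IO, so it pays off mainly on read-heavy, IO-bound workloads.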
- Original Message -
> From: "Reindl Harald"
>
> as multiple said the default of a single table space
> is idiotic in my opinion, but however this is well
> known over years
I suppose there's a certain logic to favouring one-shot allocation and never
giving up free space, in that it red
- Original Message -
> From: "Pothanaboyina Trimurthy"
>
> hi sir,
Please keep the list in CC, others may benefit from your questions, too.
> can we see any performance related improvements if we use
> "innodb_file_per_table" other than using a single ibdatafile for all
> inn
at if we have a single tablespace with file per table and
> doing the optimization will reduce the
> size of the datafile? If yes, then why is this not possible on the
> datafile (one single file) too?
> On Tue, May 22, 2012 at 3:07 PM, Reindl Harald <mailto:h.rei...@thelou
table and
> doing the optimization will reduce the size of the datafile? If yes,
> then why is this not possible on the datafile (one single file) too?
> thanks & regards,
> Kishore Kumar Vaishnav
> On Tue, May 22, 2012
B per day
>>> with only 1 DB (apart from mysql / information_schema / test) and the
>>> size
>>> of the DB is just 600MB, where records get updated / deleted / added and
>>> on
>>> an average it maintains 600MB only. Now the datafile is increased to
Hi Reindl Harald,
Does this mean that if we have a single tablespace with file per table,
doing the optimization will reduce the size of the datafile? If yes,
then why is this not possible on the datafile (one single file) too?
thanks & regards,
Kishore K
As multiple people answered, yes it matters!
There is no way to reduce the size of a single tablespace.
With file per table you can shrink the files with
"OPTIMIZE TABLE", which is in fact an "ALTER TABLE"
without real changes.
Am 22.05.2012 11:28, schrieb Kishore Vaishnav:
> R
, "Kishore Vaishnav"
wrote:
> Thanks for the reply, but in my case the datafile is growing 1 GB per day
> with only 1 DB (apart from mysql / information_schema / test) and the size
> of the DB is just 600MB, where records get updated / deleted / added and on
> an average it mai
n Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav <
> kish...@railsfactory.org> wrote:
>
>> Thanks for the reply, but in my case the datafile is growing 1 GB per day
>> with only 1 DB (apart from mysql / information_schema / test) and the size
>> of the DB is just 600MB,
Do you have one file per table or just one system tablespace datafile?
On Tue, May 22, 2012 at 2:20 PM, Kishore Vaishnav
wrote:
> Thanks for the reply, but in my case the datafile is growing 1 GB per day
> with only 1 DB (apart from mysql / information_schema / test) and the size
> of
Thanks for the reply, but in my case the datafile is growing 1 GB per day
with only 1 DB (apart from mysql / information_schema / test) and the size
of the DB is just 600MB, where records get updated / deleted / added and on
average it maintains 600MB only. Now the datafile has increased to 30GB
e -
> > > From: "Manivannan S."
> > >
> > > How to reduce the ibdata1 file size in both LINUX and WINDOWS
> > > machine.
> >
> > This is by design - you cannot reduce it, nor can you remove added
> > datafiles.
> >
> >
3 PM, Johan De Meersman wrote:
> - Original Message -
> > From: "Manivannan S."
> >
> > How to reduce the ibdata1 file size in both LINUX and WINDOWS
> > machine.
>
> This is by design - you cannot reduce it, nor can you remove added
> datafile
- Original Message -
> From: "Manivannan S."
>
> How to reduce the ibdata1 file size in both LINUX and WINDOWS
> machine.
This is by design - you cannot reduce it, nor can you remove added datafiles.
If you want to shrink the ibdata files, you must stop all conn
ta still exist in the ibdata1 data file.
>
> How to reduce the ibdata1 file size in both LINUX and WINDOWS machine.
>
> Do you have any idea how to solve this problem. Thanks for any feedback.
>
>
>
> Thanks
> Manivannan S
>
> DISCLAIMER: This email message and all atta
server but data
still exists in the ibdata1 data file.
How to reduce the ibdata1 file size on both LINUX and WINDOWS machines?
Do you have any idea how to solve this problem? Thanks for any feedback.
Thanks
Manivannan S
du reports how much space the file takes on the disk. This depends on the
block size of each file system.
On Aug 11, 2011 9:13 PM, "Feng He" wrote:
Hello DBAs,
Though this is not exactly a MySQL problem, I think this list may
be helpful for my question.
I have dumped a mysql
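The block-size arithmetic behind that answer can be made concrete; a sketch, with example sizes:

```python
import math

def du_bytes(file_size: int, block_size: int) -> int:
    """Space a file occupies on disk: its size rounded up to whole blocks.

    Two byte-identical files (same md5sum) can therefore report different
    sizes under `du` when the filesystems use different block sizes.
    """
    return math.ceil(file_size / block_size) * block_size
```

A 5000-byte file occupies 8192 bytes on a 4K-block filesystem but only 5120 bytes on a 1K-block one, which is exactly the kind of discrepancy described in the thread.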
417672 fcm.0812.sql.gz
Though the files on the two hosts have the same md5sum, why do they
show different sizes with 'du -k'?
Thanks.
Dear all,
I am researching several commands through which I can monitor the size
of specific tables in different databases. I want to write a script that
fetches the sizes of different database tables and databases daily and
writes them into a file.
Is there any way or commands to ach
Hi Geoff,
> This server has 6GB of RAM and no swap. According to some research I was
> doing, I found this formula for calculating memory size:
>
> key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections =
> (in your case) 384M + (64M + 2M)*1000 = 66384M
>
found this formula for calculating memory size:
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = (in
your case) 384M + (64M + 2M)*1000 = 66384M. That comes directly from this old
post: http://bugs.mysql.com/bug.php?id=5656 . In our case, the result is just
below 6GB and then acco
annot create thread" errors.
This server has 6GB of RAM and no swap. According to some research I was
doing, I found this formula for calculating memory size:
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections =
(in your case) 384M + (64M + 2M)*1000 = 66384M
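Plugging the numbers from the post into that formula, as a sketch:

```python
def mysql_memory_upper_bound_mb(key_buffer_mb, read_buffer_mb,
                                sort_buffer_mb, max_connections):
    """Worst-case memory estimate from the formula quoted above:
    key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections.

    It is a theoretical ceiling: every connection would have to allocate its
    full per-session buffers simultaneously, which rarely happens in practice,
    but a ceiling far above physical RAM explains "cannot create thread"
    failures under load.
    """
    return key_buffer_mb + (read_buffer_mb + sort_buffer_mb) * max_connections
```

With the thread's values, 384 + (64 + 2) * 1000 = 66384 MB, roughly 65 GB of worst-case demand on a 6 GB box.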
Hello,
We are having issues with one of our servers sometimes hanging up and when
attempting to shutdown the DB, we get "cannot create thread" errors.
This server has 6GB of RAM and no swap. According to some research I was
doing, I found this formula for calculating m
Well, it wouldn't exactly limit the size of your tables, but you may want to
look into creating a partitioned table to store your data. You could define
your partition ranges to store a single day's worth of data or whatever
granularity works best for you. Then, when you need to re
Hello everyone,
I actually have a database (MyISAM) which is growing very quickly (1.3GB/hour).
I would like to limit the size of the database, but with a log rotation after
the size is reached. Do you know a way to do it?
I thought of maybe a script that would delete the oldest entries when it re
On Fri, 25 Jun 2010 06:31:11 -0500, Jim Lyons
wrote:
> I think you're confusing table size with database size. The original post
> grouped by schema, so it appears the question concerns database size. I
> don't believe MySQL imposes any limits on that. Is there a lim
I think you're confusing table size with database size. The original post
grouped by schema, so it appears the question concerns database size. I
don't believe MySQL imposes any limits on that. Is there a limit on the
number of tables you can have in a schema imposed by MySQL?
On F
On Fri, Jun 25, 2010 at 7:11 AM, Prabhat Kumar wrote:
> In case of MyISAM it will grow up to the space on your data drive or the max
> file size limited by the OS.
>
Not entirely correct. There is some kind of limit to a MyISAM file that has
to do with pointer size - I've encountered it
There are 2 ways to check database size:
A. OS level: you can do #du -hs of the data dir; it will show current usage
of your database size at the file system level.
B. You can also check on the database level; check details
here<http://adminlinux.blogspot.com/2009/12/mysql-tips-calculate-database-
What do you mean "time to increase"? What tells you that?
A database's size is determined by the amount of available diskspace. If
you need more than the filesystem that it is currently on has, then you can
either move the entire schema (which is synonymous with "database&quo
What is the InnoDB file size that you have specified in my.cnf?
If the last file is autoextend, then it will grow to the size of the disk
space available.
regards
anandkl
On Thu, Jun 24, 2010 at 7:43 PM, Sarkis Karayan wrote:
> I feel like I am missing something, because I am not able to f
I feel like I am missing something, because I am not able to find the
answer to this simple question.
How can I increase the size of a database?
I am using the following query to check the available space and notice
that it is time to increase.
SELECT
table_schema AS 'Db Name',
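A common full form of that query (the original is truncated, so this is a guess at a typical version), with a byte-to-MB helper:

```python
# Sketch: per-schema size report from information_schema. The column list
# beyond 'Db Name' is an assumption; data_length + index_length is the
# usual basis for such reports.

SCHEMA_SIZE_QUERY = """
SELECT table_schema AS 'Db Name',
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables
GROUP BY table_schema;
"""

def bytes_to_mb(n: int) -> float:
    """Convert a data_length/index_length byte count to megabytes."""
    return round(n / 1024 / 1024, 2)
```

Note this reports allocated space, which for InnoDB can differ from the space the files occupy on disk.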
Are there any publicly available data on how the size of some (or better
yet, many) particular "real" database(s) changed over time (for a longish
period of time)? How about data on how the throughput (in any interesting
terms) varied over time?
Thanks,
Mike Spreitzer
thing goes very badly.
If you search for "buffer pool size" on mysqlperformanceblog.com, you
will get good advice. You should also get a copy of High Performance
MySQL, Second Edition. (I'm the lead author.) In short: ignore
advice about ratios, and ignore advice about the size o
In infinite wisdom "Machiel Richards" wrote:
> The current Innodb buffer pool size is at 4Gb for instance, and the
> innodb tables then grow to be about 8Gb in size.
InnoDB manages the pool as a list, using a least recently used (LRU) algorithm
incorporating a midpoint in
said
> that, in this case increasing buffer pool size is still advisable as per my
> understanding. Your swap consumption will go up in that case which is not
> very good either. But giving only 4 GB to Innodb is even worse for the
> performance. It is subjective though. You should
Hi,
First thing that comes to my mind is that it is probably the best time to put
your application server and database server on different hosts. Having said
that, in this case increasing buffer pool size is still advisable as per my
understanding. Your swap consumption will go up in that case
Hi Guys
I just have a quick question.
I have done some research into how to determine the size of your Innodb
buffer pool.
All of the sources I used specified that the InnoDB buffer pool size
should be the same size as your database + 10%.
However, as far as I
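That rule of thumb reduces to simple arithmetic; a sketch, noting that whether the rule itself is sound is exactly what the rest of the thread debates:

```python
def buffer_pool_rule_of_thumb_mb(db_size_mb: int) -> int:
    """'Database size + 10%' sizing rule quoted in the question.

    Other replies in the thread argue the working set, not the total
    database size, is what actually matters, so treat this as an upper
    bound rather than a recommendation.
    """
    return int(db_size_mb * 1.10)
```

For an 8 GB database this yields roughly 9 GB of buffer pool, which is why the rule breaks down as soon as data outgrows RAM.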
Google oom_adj and oom_score. You can control which process is most
likely to be killed.
On Mon, Apr 19, 2010 at 12:53 AM, Johan De Meersman wrote:
>
>
> On Sun, Apr 18, 2010 at 9:04 PM, Eric Bergen wrote:
>>
>> Usually I prefer to have linux kill processes rather than excessively
>> swapping. I
On Sun, Apr 18, 2010 at 9:04 PM, Eric Bergen wrote:
> Usually I prefer to have linux kill processes rather than excessively
> swapping. I've worked on machines before that have swapped so badly
>
I guess you never had the OOM killer randomly shooting down your SSH daemon
on a machine hundreds of
The impact of swap activity on performance is dependent on the rate at
which things are being swapped and the speed of swapping. A few pages
per second probably won't kill things but in this case it was swapping
hundreds of pages per second which killed performance. Disks are much
slower than ram.
On Sun, Apr 18, 2010 at 12:04 PM, Eric Bergen wrote:
> Linux will normally swap out a few pages of rarely used memory so it's
> a good idea to have some swap around. 2G seems excessive though.
> Usually I prefer to have linux kill processes rather than excessively
> swapping. I've worked on machin
Linux will normally swap out a few pages of rarely used memory so it's
a good idea to have some swap around. 2G seems excessive though.
Usually I prefer to have linux kill processes rather than excessively
swapping. I've worked on machines before that have swapped so badly
that it took minutes just
--- On Wed, 14/4/10, Dan Nelson wrote:
> Hammerman said:
> > My organization has a dedicated MySQL server. The
> system has 32Gb of
> > memory, and is running CentOS 5.3. The default
> engine will be InnoDB.
> > Does anyone know how much space should be dedicated to
> swap?
>
> I say zero swap
Correct, but when something *does* go amiss, some swap may give you the time
you need to fix things before you really go down :-)
So, yeah, a gig or two should be fine. There's also no real need for an
actual swap partition, these days - just use a swap file. Performance is
only marginally less th
Yeah. One of the telltale signs of something amiss is excessive swap activity.
You're not going to be happy with the performance when the swap space
is actually in use heavily.
Kyong
On Tue, Apr 13, 2010 at 8:15 PM, Dan Nelson wrote:
> In the last episode (Apr 13), Joe Hammerman said:
>> My organ
In the last episode (Apr 13), Joe Hammerman said:
> My organization has a dedicated MySQL server. The system has 32Gb of
> memory, and is running CentOS 5.3. The default engine will be InnoDB.
> Does anyone know how much space should be dedicated to swap?
I say zero swap, or if for some reason y
Hello all,
My organization has a dedicated MySQL server. The system has
32Gb of memory, and is running CentOS 5.3. The default engine will be InnoDB.
Does anyone know how much space should be dedicated to swap?
Thanks!
Alex,
You need to follow the directions given in the section titled "Installing from
the Development Source Tree" in the manual with the following change.
bzr init-repo --trees --1.9 mysql-server
Hope this helps,
Hiromichi
--- On Sun, 3/21/10, Alex wrote:
> From: Alex
> Su
I type
bzr branch lp:mysql-server
and now 986582KB are downloaded.
What size of repo must I download with this command?
>-Original Message-
>From: machiel.richards [mailto:machiel.richa...@gmail.com]
>Sent: Friday, December 18, 2009 12:33 AM
>To: mysql@lists.mysql.com
>Subject: RE: Innodb buffer pool size filling up
>
>Good Morning all
>
> QUOTE: "We
Thank you very much.
This now explains a lot.
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: 18 December 2009 10:05 AM
To: machiel.richards
Cc: mysql@lists.mysql.com
Subject: Re: RE: Innodb buffer pool size filling up
Machiel,
That is how it is supposed to
Machiel,
That is how it is supposed to work.
You assign a certain amount of memory(RAM) to it and the database engine
then manages it. It is highly desirable that this buffer is fully used, and
if the growth curve is slow, it is because it is not undersized. If you
really need more RAM for other us
yone in advance.
Regards
Machiel
-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: 01 December 2009 10:04 PM
To: 'machiel.richards'; 'Claudio Nanni'
Cc: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
>-Original Message-
>From: machiel.richards [mailto:machiel.richa...@gmail.com]
>Sent: Tuesday, December 01, 2009 6:17 AM
>To: 'Claudio Nanni'
>Cc: mysql@lists.mysql.com
>Subject: RE: Innodb buffer pool size filling up
>
>The size was at 2Gb and was rece
The InnoDB buffer pool usually follows a growth curve over time that resembles
a horizontal asymptote (
http://www.maecla.it/bibliotecaMatematica/go_file/MONE_BESA/grafico.gif)
This is to leverage its full size!
So it should not be a problem!
Cheers
Claudio
2009/12/1 machiel.richards
> The size was at
The size was at 2Gb and was recently changed to 3Gb in size during the last
week of November (around the 23rd / 24th) and as of this morning was already
sitting at 2.3gb used.
The total database size is about 750Mb.
Regards
Machiel
From: Claudio Nanni [mailto:claudio.na
ecember 2009 08:55 AM
> To: mysql@lists.mysql.com
> Subject: RE: Innodb buffer pool size filling up
>
> Machiel:
>
> > We have a MySQL database where the
> > INNODB_BUFFER_POOL_SIZE
> > keeps on filling up.
>
> Are you getting any errors or just
...@jammconsulting.com]
Sent: 01 December 2009 08:55 AM
To: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
Machiel:
> We have a MySQL database where the
> INNODB_BUFFER_POOL_SIZE
> keeps on filling up.
Are you getting any errors or just noticing the buffer
pool is
Machiel:
> We have a MySQL database where the
> INNODB_BUFFER_POOL_SIZE
> keeps on filling up.
Are you getting any errors or just noticing the buffer
pool is full?
I saw some error messages about the buffer pool size
becoming a problem if the fsync is slow. Do yo
memory I need?
Consider a simple case: a MyISAM table is 10GB in size, with a 2GB
index. How much memory do I need?
Thanks.
It's not the size of the table, it's the size of the index that you
need to watch. MyISAM keeps the table and index separate, so the
memory requirements can be co
how much memory I need?
>
> Consider a simple case: a MyISAM table is 10GB in size, with a 2GB
> index. How much memory do I need?
If by "table scan" you mean a full table scan with no index usage, your RAM
is irrelevant unless you have at least 10GB (enough to cache the entire
table)
table is 10GB in size, with 2GB
index, how much memory I need?
Thanks.
at I was saying, is that VARCHAR takes up space "l" (= length)
> of the data plus 1 or 2 bytes to store the length, while CHAR takes
> up the full space of the -defined- column size.
>
> This is rather wasteful when storing CHAR data that doesn't take up
> the full av
Your mail suggests that you *are* seeing a difference, though. What
are you seeing?
What I was saying is that VARCHAR takes up space "l" (= length)
of the data plus 1 or 2 bytes to store the length, while CHAR takes
up the full space of the -defined- column size.
This is rathe
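That storage difference can be modelled directly, assuming a single-byte character set (multi-byte charsets change the arithmetic):

```python
def char_storage(defined_len: int) -> int:
    """CHAR(n) always occupies the full defined column size."""
    return defined_len

def varchar_storage(value: str, defined_len: int) -> int:
    """VARCHAR stores the actual data length plus a 1- or 2-byte
    length prefix (2 bytes once the column can exceed 255 bytes)."""
    prefix = 1 if defined_len <= 255 else 2
    return len(value) + prefix
```

Storing "ab" in a CHAR(40) column costs 40 bytes, but only 3 in a VARCHAR(40), which is the waste being discussed.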
>
> > Note: as you can see in the above, CHAR data DOES take up room for its
> > full size, stupidly enough.
> >
> > On Tue, Nov 10, 2009 at 6:37 PM, Waynn Lue wrote:
> >> Hey all,
> >>
> >> I was building a table for storing email addresse