*please* don't use reply-all on mailing-lists
the list by definition distributes your message
On 30.06.2014 13:14, Antonio Fernández Pérez wrote:
> Thanks for your reply. Theoretically, fragmented tables do not offer the best
> performance with the InnoDB engine;
> is that correct or not?
practical
Hi Johan,
Thanks for your reply. Theoretically, fragmented tables do not offer the best
performance with the InnoDB engine; is that correct or not?
I don't know whether it is a problem or not; it is an open question for me. I'm not
sure whether this is atypical behaviour.
Thanks in advance.
Regards,
Antonio.
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I would like to know, if possible, why, after executing an ANALYZE TABLE
> command on a fragmented table, it appears fragmented again.
Simple question
Hello Antonio,
On 6/27/2014 9:31 AM, Antonio Fernández Pérez wrote:
Hi Reindl,
Thanks for your attention.
Following the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented tables in InnoDB
Hi Reindl,
Thanks for your attention.
Following the previous mail, I have checked my MySQL configuration and
innodb_file_per_table is enabled, so I think that this parameter does not
directly affect fragmented tables in InnoDB (in this case).
I would like to know, if possible, why, after executing an ANALYZE TABLE command on a fragmented table, it appears fragmented again.
On 27.06.2014 09:48, Antonio Fernández Pérez wrote:
> Thanks for your reply. I have checked the link and my configuration.
> innodb_file_per_table is enabled, and in the data directory a set of
> files appears for each table.
>
> Any ideas?
ideas for what?
* which files don't get shrunk (ls -lha)
Hi Andre,
Thanks for your reply. I have checked the link and my configuration.
innodb_file_per_table is enabled, and in the data directory a set of
files appears for each table.
Any ideas?
Thanks in advance.
Regards,
Antonio.
Have a look at this:
https://rtcamp.com/tutorials/mysql/enable-innodb-file-per-table/
--
Andre Matos
andrema...@mineirinho.org
On Jun 25, 2014, at 2:22 AM, Antonio Fernández Pérez
wrote:
> Hi again,
>
> I have enabled innodb_file_per_table (its value is ON).
> I'm not clear on what I should do ...
- Original Message -
> From: "Antonio Fernández Pérez"
> Subject: Re: Optimizing InnoDB tables
>
> I have enabled innodb_file_per_table (its value is ON).
> I'm not clear on what I should do ...
Then all new tables will be created in their own tablespace.
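A minimal sketch of that step, using a hypothetical table mydb.mytable: once innodb_file_per_table is ON, a null rebuild moves an existing table out of the shared system tablespace into its own .ibd file.
-- Confirm the setting is active:
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- Rebuild an existing table so it lands in its own mydb/mytable.ibd
-- (the shared ibdata1 file itself never shrinks):
ALTER TABLE mydb.mytable ENGINE=InnoDB;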
Hi again,
I have enabled innodb_file_per_table (its value is ON).
I'm not clear on what I should do ...
Thanks in advance.
Regards,
Antonio.
Hello Reindl,
On 6/24/2014 3:29 PM, Reindl Harald wrote:
On 24.06.2014 21:07, shawn l.green wrote:
It makes a huge difference if the tables you are trying to optimize have their
own tablespace files or if they live
inside the common tablespace.
http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
On 24.06.2014 21:07, shawn l.green wrote:
> It makes a huge difference if the tables you are trying to optimize have
> their own tablespace files or if they live
> inside the common tablespace.
>
> http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_file_per_table
whi
Hello Antonio,
On 6/24/2014 7:03 AM, Antonio Fernández Pérez wrote:
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND Data_free > 0
Hi Wagner,
I'm running
MySQL Percona Server 5.5.30 64Bits. No, I haven't tried executing
ALTER TABLE (does ANALYZE TABLE on InnoDB tables do that, or not?).
Thanks in advance.
Regards,
Antonio.
Hi Antonio, how are you?
What's the mysql version you're running? Have you tried to ALTER TABLE x
ENGINE=InnoDB?
-- WB, MySQL Oracle ACE
> On 24/06/2014, at 08:03, Antonio Fernández Pérez
> wrote:
>
> Hi list,
>
> I was trying to optimize the InnoDB tables. I have executed the following query
Hi list,
I was trying to optimize the InnoDB tables. I have executed the following query
to detect which tables are fragmented.
SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND
Data_free > 0
After that, I have seen that there are 49 fragmented tables.
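For reference, a runnable version of that detection query, assuming it is meant to run against information_schema; note that for InnoDB, DATA_FREE includes extents the engine reserves for itself, so a nonzero value does not automatically mean harmful fragmentation (which may be why tables look "fragmented" again right after a rebuild).
SELECT TABLE_SCHEMA, TABLE_NAME,
       ROUND(DATA_FREE / 1024 / 1024, 2) AS free_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND DATA_FREE > 0;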
* Dan Nelson [110618 16:33]:
> In the last episode (Jun 18), Tim Johnson said:
> > Is there an optimal 'alignment' for column widths for
> > varchar types?
> >
> > I.E., divisible by 8 or 10 or by powers of 2?
>
> No. Varchar fields are stored using only as many bytes as are in that
> particular entry.
In the last episode (Jun 18), Tim Johnson said:
> Is there an optimal 'alignment' for column widths for
> varchar types?
>
> I.E., divisible by 8 or 10 or by powers of 2?
No. Varchar fields are stored using only as many bytes as are in that
particular entry. The size in the column definition is only an upper limit.
Is there an optimal 'alignment' for column widths for
varchar types?
I.E., divisible by 8 or 10 or by powers of 2?
URLs to documentation would be more than sufficient.
Version 5+, linux platforms.
thanks
--
Tim
tim at johnsons-web dot com or akwebsoft dot com
http://www.akwebsoft.com
Thanks for the suggestions everybody.
I added columns to store the day, month and year of the created_at value,
and then added an index on (newsletter_id, created_month, created_day),
and the slow queries dropped from around 20 seconds to 0.5 seconds! I
also removed the redundant indexes
On 1/18/11 10:22 AM, Simon Wilkinson wrote:
SELECT articles.* FROM articles INNER JOIN newsletters ON
articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) = '12'
AND DAY(articles.created_at) = '5' ORDER BY YEA
I concur. In addition to the suggested index, I would add a new column in the articles
table called body_length, which is going to be updated every time the body
column is updated. Add that column to the composite index mentioned below.
This should speed up the query a lot.
Cheers,
Mihail
On Jan 18, 20
Hi Simon,
once you apply functions to a field, an index on that field is pretty
much useless. For this particular query, I would be tempted to create
additional fields to store the values of MONTH(articles.created_at)
and DAY(articles.created_at). Create an index on (month_created,
day_created)
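A sketch of that suggestion, with hypothetical column and index names (and a made-up newsletter_id value in the final SELECT); the two extra columns have to be kept in sync with created_at by the application or by triggers:
ALTER TABLE articles
  ADD COLUMN created_month TINYINT UNSIGNED,
  ADD COLUMN created_day   TINYINT UNSIGNED;
UPDATE articles
  SET created_month = MONTH(created_at),
      created_day   = DAY(created_at);
ALTER TABLE articles
  ADD INDEX idx_nl_month_day (newsletter_id, created_month, created_day);
-- The WHERE clause now compares bare columns, so the index is usable:
SELECT articles.* FROM articles
WHERE newsletter_id = 12 AND created_month = 12 AND created_day = 5;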
Hi,
I am trying to optimize the following query:
SELECT articles.* FROM articles INNER JOIN newsletters ON
articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) = '12'
AND DAY(articles.created_at) = '5' ORDER B
In general you need to adjust your server settings to actually use the new RAM.
This depends on various things: storage engines used, what else runs on the
box, etc.
In addition you should look at your schema, see if you have the right indices
for what you want to do.
E.g. All columns that you u
Which MySQL server do you use?
What is your storage engine?
-Original Message-
From: Ken D'Ambrosio [mailto:k...@jots.org]
Sent: Friday, February 05, 2010 5:26 PM
To: mysql@lists.mysql.com
Subject: Help optimizing settings?
I've got a fairly large -- 100+ GB -- MySQL dat
Using "load index" might help, but you'll have to make sure you allocate
plenty of space to your keycache. If the queries are identical, the query
cache might also be a good candidate, but that seems unlikely.
The memory will also be used by your OS to cache often-used parts of the
filesystem, inc
I've got a fairly large -- 100+ GB -- MySQL database. It isn't accessed
often -- it's acting more as an archive right now than anything else.
That being said, when it does get accessed, the indexes seem to take
forever to load. Being as I just bumped the RAM from 2 GB to 6 GB, what,
generically, should I look at tuning?
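Purely as an illustration of the kind of change being suggested, a my.cnf fragment for a mostly-MyISAM, rarely-accessed archive box; the numbers are placeholders to adapt, not recommendations:
[mysqld]
# MyISAM caches only index blocks here; table data is cached by the
# OS page cache, so leave a good share of RAM unallocated for it.
key_buffer_size = 1G
# Per-connection buffers; keep these modest, they multiply per thread.
sort_buffer_size = 4M
read_buffer_size = 1M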
MySQL University: Optimizing Queries with EXPLAIN
http://forge.mysql.com/wiki/Optimizing_Queries_with_Explain
This Thursday (February 4th, 14:00 UTC), Morgan Tocker will talk about
Optimizing Queries with Explain. Morgan was a technical instructor at
MySQL and works for Percona today.
For MySQL
Mikesz,
The best way to approach this is to discover how it has been
implemented by others. There is no point re-inventing the wheel. I'm sure
you can get some ideas from this web site:
http://www.databaseanswers.org/data_models/ and the tutorial at
http://www.databaseanswers.org/tuto
Hello and Greetings,
I have an idea for a project that I am putting together and was
wondering about the best way to store the information. I am looking
for suggestions concerning the formatting and storing of information.
I have three different groups that I need to track and use
information. Th
@gmail.com]
Sent: 06 October 2009 04:57
To: Rob Wultsch
Cc: mysql@lists.mysql.com
Subject: Re: Optimizing my.cnf
As you can see in my my.cnf, I skip innodb and federated, so I just use
MyISAM in this case. TIA.
Willy
On Mon, 2009-10-05 at 20:47 -0700, Rob Wultsch wrote:
>
> On Mon, Oct 5,
As you can see in my my.cnf, I skip innodb and federated, so I just use
MyISAM in this case. TIA.
Willy
On Mon, 2009-10-05 at 20:47 -0700, Rob Wultsch wrote:
>
> On Mon, Oct 5, 2009 at 6:12 PM, sangprabv wrote:
> I have Dell PE2950iii with 16GB of RAM, and 1 Quadcore
> processor @
On Mon, Oct 5, 2009 at 6:12 PM, sangprabv wrote:
> I have Dell PE2950iii with 16GB of RAM, and 1 Quadcore processor @2.00G.
> Installed with MySQL 5.0.75 on 64bit Ubuntu Jaunty. I have these
> parameters in my.cnf:
>
> blah blah blah...
>
>
This heavily depends on workload. Are you using innodb? e
I have Dell PE2950iii with 16GB of RAM, and 1 Quadcore processor @2.00G.
Installed with MySQL 5.0.75 on 64bit Ubuntu Jaunty. I have these
parameters in my.cnf:
[mysqld]
key_buffer = 512M
max_allowed_packet = 512M
thread_stack= 4096K
thread_cache_size = 256
myisa
On Tue, Apr 21, 2009 at 2:52 PM, Milan Andric wrote:
> Hello,
>
> I have a rather burly Drupal based site that seems to be causing some
> problems, today we had a major outage. There are many slow queries
> and also mysql related iowait that causes server processes to hang, at
> least that is the
Hello,
I have a rather burly Drupal based site that seems to be causing some
problems, today we had a major outage. There are many slow queries
and also mysql related iowait that causes server processes to hang, at
least that is the theory.
Here you can examine some of the stats on the server:
h
0, but not millions), it took MySQL 26
> seconds to reply on my dev box.
>
> Can someone suggest what I can look at to speed this up? The
> section of the manual that talked about optimizing range
> queries spent a lot of time explaining how they work but very
> little on how to spee
26
seconds to reply on my dev box.
Can someone suggest what I can look at to speed this up? The
section of the manual that talked about optimizing range
queries spent a lot of time explaining how they work but very
little on how to speed them up. The EXPLAIN didn't really
help--only one co
Hi all
I am new to stored procedures and have just started using them. Is there
any Web site you could suggest which discusses common optimization tips
for MySQL SPs?
Vikram
--
Operator: "So what do you need? Besides a miracle."
Neo: "Guns. Lots of guns."
-- The Matrix
Hi,
I am pretty new to optimizing tables with indexes and may need some help.
This is my query:
EXPLAIN SELECT timestamp
FROM Meting_INT_COPY
WHERE blockid = '200811252000'
ORDER BY timestamp DESC
LIMIT 1
If I have an index(blockid),
EXPLAIN will return the following information
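A sketch of the usual fix for this query shape: a composite index whose first column matches the equality test and whose second matches the ORDER BY lets MySQL seek straight to the last matching entry instead of sorting (the index name is hypothetical):
ALTER TABLE Meting_INT_COPY ADD INDEX idx_blockid_ts (blockid, timestamp);
-- With that index, ORDER BY timestamp DESC LIMIT 1 becomes a single
-- backward read within the blockid = '200811252000' range:
SELECT timestamp
FROM Meting_INT_COPY
WHERE blockid = '200811252000'
ORDER BY timestamp DESC
LIMIT 1;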
Hi,
I am getting copies of your email... I don't know why... Can you fix this
problem?
Kind regards,
Matthew
- Original Message -
From: "Mark Goodge" <[EMAIL PROTECTED]>
To: "mysql"
Sent: Monday, November 17, 2008 10:14 AM
Subject: Re: Optim
Norman Elton wrote:
I've got a table that tracks expiration dates. Currently, if the
record hasn't expired, the column is stored as NULL. In order to find
records that haven't expired, I search for "WHERE expire_at > NOW() OR
expire_at IS NULL". This seems dirty, and I suspect it makes my
index
I've got a table that tracks expiration dates. Currently, if the
record hasn't expired, the column is stored as NULL. In order to find
records that haven't expired, I search for "WHERE expire_at > NOW() OR
expire_at IS NULL". This seems dirty, and I suspect it makes my
indexes very angry.
I suppos
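One common workaround for the OR ... IS NULL pattern above, sketched with a hypothetical table name: store a far-future sentinel date instead of NULL, so the check collapses to a single range predicate that an index on expire_at can serve.
-- Replace the NULL 'never expires' markers with a sentinel:
UPDATE tracker SET expire_at = '9999-12-31 23:59:59'
WHERE expire_at IS NULL;
-- The OR disappears and a plain indexed range scan remains:
SELECT * FROM tracker WHERE expire_at > NOW();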
On Fri, Jul 25, 2008 at 12:35 PM, Arthur Fuller <[EMAIL PROTECTED]> wrote:
> ORDER BY implies a sort of the result set. I don't think there is any way
> around that.
I guess so. What I am doing is to just run the query once per day and
store the results in memcache.
Michael
>
> Arthur
>
> On Fr
On Fri, Jul 25, 2008 at 12:27 AM, Michael Stearne <[EMAIL PROTECTED]> wrote:
> I have a query:
>
> SELECT Country, COUNT( Country ) AS Cnt
> FROM properties WHERE (
> Country != 'USA' AND
> Country != 'US' AND
> Country != 'Unit' AND
> Country != 'United States'
> AND
I have a query:
SELECT Country, COUNT( Country ) AS Cnt FROM properties WHERE (
Country != 'USA' AND Country != 'US' AND Country != 'Unit' AND Country
!= 'United States' AND Country != ' ' AND Country IS NOT NULL ) GROUP
BY Country ORDER BY Cnt DESC LIMIT 8
that gets the top 8 non-US countries from the table.
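A slightly tidier equivalent, for reference; NOT IN behaves the same as the chain of != tests here, and an index on Country (the index name is hypothetical) gives the GROUP BY an ordered source to scan:
ALTER TABLE properties ADD INDEX idx_country (Country);
SELECT Country, COUNT(Country) AS Cnt
FROM properties
WHERE Country NOT IN ('USA', 'US', 'Unit', 'United States', ' ')
  AND Country IS NOT NULL
GROUP BY Country
ORDER BY Cnt DESC
LIMIT 8;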
On Tue, 10 Jun 2008, Martin wrote:
Lucio
So the net effect is to produce a cartesian join where ALL results from
Query1 (are combined with) ALL results from Query2
In order to prevent the cartesian join, can you use a more narrowly defined
predicate, such as what is defined at
http://www.mysqlperf
(mysql 5.0.27 on SuSE Linux)
I recently thought to use UNIONs to allow me to concatenate "vertically"
some database tables with statements like this
create or replace view combo as
(select * from nov06) union
(select *,ra_corr as ra_cor2,dec_corr as dec_cor2 from jul07) union
(select *
Charles Lambach schrieb:
Hi.
My hosting provider recommended that I optimize my 200,000-record table in
order to save resources.
I _always_ run this query:
SELECT * FROM books WHERE isbn='foo' LIMIT 1
The primary key of this table was 'id', and 'isbn' was an INDEX field.
I've modified this:
ALTER TABLE books DROP PRIMARY KEY
On Tue, Apr 29, 2008 at 5:09 AM, Charles Lambach
<[EMAIL PROTECTED]> wrote:
> Thank you all for your suggestions.
>
> So it's very important to make primary key fields as small as possible,
> right? I'm going to change 'isbn' from VARCHAR(100) to VARCHAR(25) and, if
> possible (I might change m
Thank you all for your suggestions.
So it's very important to make primary key fields as small as possible,
right? I'm going to change 'isbn' from VARCHAR(100) to VARCHAR(25) and, if
possible (I might change my code), from VARCHAR to BIGINT.
By the way, what are optimal values for the "key_len" parameter?
On Mon, April 28, 2008 09:44, Rob Wultsch wrote:
> On Mon, Apr 28, 2008 at 6:49 AM, Rob Wultsch <[EMAIL PROTECTED]> wrote:
>> I am going to assume that you are asking this question because
>> performance has not improved from this change. Is this correct?
>>
>> I don't think that your surrogate ke
On Mon, Apr 28, 2008 at 6:49 AM, Rob Wultsch <[EMAIL PROTECTED]> wrote:
> I am going to assume that you are asking this question because
> performance has not improved from this change. Is this correct?
>
> I don't think that your surrogate key (id) is useful, but that is
> probably minor. I thin
I am going to assume that you are asking this question because
performance has not improved from this change. Is this correct?
I don't think that your surrogate key (id) is useful, but that is
probably minor. I think that your hosting company's suggestion is
probably a good idea, but will also proba
Hi Rob.
Thank you very much for your answer.
CREATE TABLE `books` (
`id` int(11) unsigned NOT NULL auto_increment,
`title` varchar(200) NOT NULL,
`author_name` varchar(100) NOT NULL,
`category_name` varchar(100) NOT NULL,
`description` varchar(200) NOT NULL,
`isbn` varchar(100) NOT NULL,
On Sun, Apr 27, 2008 at 3:59 AM, Charles Lambach
<[EMAIL PROTECTED]> wrote:
> I do _always_ this query:
> SELECT * FROM books WHERE isbn='foo' LIMIT 1
>
> The primary key of this table was 'id', and 'isbn' was an INDEX field.
This sentence could have been better written. If you have a primary
Hi.
My hosting provider recommended that I optimize my 200,000-record table in
order to save resources.
I _always_ run this query:
SELECT * FROM books WHERE isbn='foo' LIMIT 1
The primary key of this table was 'id', and 'isbn' was an INDEX field.
I've modified this:
ALTER TABLE books DROP PRIMARY KEY
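Filling in the likely shape of that change (a guess, since the message is cut off), plus the key-shrinking step discussed elsewhere in the thread; this assumes every isbn value is unique and under 25 characters:
-- Shrink the lookup key first:
ALTER TABLE books MODIFY isbn VARCHAR(25) NOT NULL;
-- Then drop the surrogate id and promote isbn to primary key:
ALTER TABLE books DROP COLUMN id, ADD PRIMARY KEY (isbn);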
Chris, this should already be pretty fast as it is using a primary key in
its entirety, and as long as the index size remains manageable MySQL will be
able to keep it in memory for fast access.
That said, doing away with the aggregate function might speed things up just
slightly.
You don't care h
I was wondering if someone could tell me what things I might need to do
to make this query as fast as possible.
I am developing a web site where users will have access to certain
things based on what groups they are in and what groups have access to
certain things. There are several different
On 6/21/07, Guillermo <[EMAIL PROTECTED]> wrote:
Hello,
I have a query that I could write in 2 different ways... I want to know which one
is recommended:
* Select T1.field1,T1.field2, (select sum(T2.field4) from Table2 T2 where
T2.field1 = T1.field1) from Table1 T1
or
* Select P1.fie
Hello,
I have a query that I could write in 2 different ways... I want to know which one
is recommended:
* Select T1.field1,T1.field2, (select sum(T2.field4) from Table2 T2 where
T2.field1 = T1.field1) from Table1 T1
or
* Select P1.field1,P1.field2,P2.field2 from (Select T1.field1,
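A guess at the completed second form, using the same table names (the alias "total" is mine): the aggregate is computed once in a derived table and then joined, rather than re-run per outer row:
SELECT P1.field1, P1.field2, P2.total
FROM Table1 P1
JOIN (SELECT T2.field1, SUM(T2.field4) AS total
      FROM Table2 T2
      GROUP BY T2.field1) P2
  ON P2.field1 = P1.field1;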
> Dan
>
> On 1/8/07, Michael Gargiullo <[EMAIL PROTECTED]> wrote:
>>
>>
>> -Original Message-
>> From: Brian Dunning [mailto:[EMAIL PROTECTED]
>> Sent: Sunday, January 07, 2007 1:12 PM
>> To: mysql
Subject: Help optimizing this query?
a problem, sometimes not.
Dan
On 1/8/07, Michael Gargiullo <[EMAIL PROTECTED]> wrote:
-Original Message-
From: Brian Dunning [mailto:[EMAIL PROTECTED]
Sent: Sunday, January 07, 2007 1:12 PM
To: mysql
Subject: Help optimizing this query?
This is the query that's killing
-Original Message-
From: Brian Dunning [mailto:[EMAIL PROTECTED]
Sent: Sunday, January 07, 2007 1:12 PM
To: mysql
Subject: Help optimizing this query?
This is the query that's killing me in the slow query log, usually
taking around 20 seconds:
select count(ip) as counted,stamp
Brian Dunning wrote:
This is the query that's killing me in the slow query log, usually
taking around 20 seconds:
select count(ip) as counted,stamp from ip_addr where
stamp>=NOW()-interval 14 day and source='sometext' group by stamp
order by stamp desc;
Here is the table:
CREATE TABLE `ip_
This is the query that's killing me in the slow query log, usually
taking around 20 seconds:
select count(ip) as counted,stamp from ip_addr where stamp>=NOW()-
interval 14 day and source='sometext' group by stamp order by stamp
desc;
Here is the table:
CREATE TABLE `ip_addr` (
`ip` int(
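A sketch of the index this query usually wants: equality column first, range/group column second, so the 14-day slice for one source is a single index range (the index name is hypothetical):
ALTER TABLE ip_addr ADD INDEX idx_source_stamp (source, stamp);
-- Same query as above; the (source, stamp) index now serves both the
-- WHERE filter and the GROUP BY/ORDER BY on stamp:
SELECT COUNT(ip) AS counted, stamp
FROM ip_addr
WHERE stamp >= NOW() - INTERVAL 14 DAY
  AND source = 'sometext'
GROUP BY stamp
ORDER BY stamp DESC;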
On Tue, 2006-10-17 at 09:26 -0700, Robert DiFalco wrote:
> I have an unavoidable filesort in a very large query. Can someone point
> me to references for optimizing filesort? I'm assuming this is going to
> be changes to my.ini or the hardware.
Well, one method to *eliminate* "
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
and
http://dev.mysql.com/doc/refman/5.0/en/temporary-files.html
On 10/17/06, Robert DiFalco <[EMAIL PROTECTED]> wrote:
I have an unavoidable filesort in a very large query. Can someone point
me to references for optimizing filesort? I'm assuming
I have an unavoidable filesort in a very large query. Can someone point
me to references for optimizing filesort? I'm assuming this is going to
be changes to my.ini or the hardware.
TIA,
R.
Btw, this is using the InnoDB engine.
-Original Message-
From: Robert DiFalco
Sent: Tuesday, October 17, 2006 9:26 AM
To: mysql@lists.mysql.com
Subject: References on Optimizing File Sort
I have an unavoidable filesort in a very large query. Can someone point
me to references for
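Since the actual query never appears in the thread, only an illustrative sketch is possible: for InnoDB, a filesort disappears when an index supplies the rows already in ORDER BY order, so with hypothetical table and column names:
ALTER TABLE big_table ADD INDEX idx_filter_sort (filter_col, sort_col);
SELECT *
FROM big_table
WHERE filter_col = 'x'   -- equality on the index prefix ...
ORDER BY sort_col;       -- ... so rows stream back pre-sorted, no filesort
Failing that, the sort_buffer_size variable controls how much of the sort happens in memory.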
I have an update operation where I am able to update 10 million records in
approx
2.5 mins.
But when I tried to do the same update on, say, 40-50 million records, MySQL
takes forever to finish. It's the same table and the same update operation; I am
just changing the range of ids using the WHERE clause.
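A common way to keep an update of that size manageable, sketched with hypothetical table and column names: walk the id range in bounded chunks, so each statement keeps its lock set and undo log small and locks are released between batches:
UPDATE t SET col = 0 WHERE id BETWEEN 1 AND 1000000;
UPDATE t SET col = 0 WHERE id BETWEEN 1000001 AND 2000000;
-- ... continue in fixed-size slices until the top of the range.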
>Also, when a query takes too long I kill it by shutting down the
server, is this safe or is there a better alternative (MyISAM tables)?
Use SHOW PROCESSLIST to see which query is taking too long, and use
"KILL <thread_id>;" to terminate just that query instead of the whole server.
--
Praj
Surendra Singhi wrote:
Hello Chris,
C
Hello Chris,
Chris <[EMAIL PROTECTED]> writes:
> Surendra Singhi wrote:
>> I am using MySQL 5.0 and I have 2 tables with a few hundred million
>> records. To optimize things, I am using MyISAM tables, using the
>> smallest possible data
>> types, and have set indexes.
>> Now, the problem which
Surendra Singhi wrote:
Hi,
I am using MySQL 5.0 and I have 2 tables with a few hundred million
records.
To optimize things, I am using MyISAM tables, using the smallest possible data
types, and have set indexes.
Now, the problem I am facing is that the MySQL process is wasting a lot of
time
Hi,
I am using MySQL 5.0 and I have 2 tables with a few hundred million
records.
To optimize things, I am using MyISAM tables, using the smallest possible data
types, and have set indexes.
Now, the problem I am facing is that the MySQL process is wasting a lot of
time on disk access; the CPU an
Hi Jesse,
Have you tried the following:
1) ordering by only part.d and seeing how long the query takes
2) putting an index on (part.d, cwGroup.stripped_cw) and seeing how
long the query takes.
1 will help pinpoint the problem, and 2 might actually help.
-Sheeri
SELECT part.d, quotation.qt,
Is there any way to optimize a range query that includes
an ORDER BY with keys from two different tables? I'm running
MySQL 4.1.18 on FreeBSD.
I've been struggling with some queries that are _incredibly_
slow--from 1-5 minutes on slowish but decent hardware. When I
try versions without the ORDER
Hi All,
Our current setup has the mysql server and the client app run on the
same host. We are using mysql++ (ports/mysql++-1.7.9) for the database
client api. The application returns huge data sets in the order of 50K. When
we do a show processlist most of the threads spend their time in th
Sent: Monday, May 01, 2006 1:27 PM
To: mysql@lists.mysql.com
Subject: Re: Optimizing DISTINCT searches
On 2006-05-01 1:14 PM, "Robert DiFalco" <[EMAIL PROTECTED]> wrote:
> Would you need the DISTINCT if you change the query like so?
>
> SELECT Site.Site_ID, Site, Status, Type FROM Site JOIN Project ON Site.Site_ID = Project.Site_ID ORDER BY Site;
On 2006-05-01 1:14 PM, "Robert DiFalco" <[EMAIL PROTECTED]> wrote:
> Would you need the DISTINCT if you change the query like so?
>
> SELECT Site.Site_ID, Site, Status, Type
> FROM Site
> JOIN Project ON Site.Site_ID = Project.Site_ID
> ORDER BY Site;
>
> You may also want to just try your ini
To: mysql@lists.mysql.com
Subject: FW: Optimizing DISTINCT searches
On 2006-05-01 11:55 AM, "Robert DiFalco" <[EMAIL PROTECTED]> wrote:
> Well, normally a DISTINCT has to do a type of sort and is slower than
> non-DISTINCT queries. Each field of the result set is considered in the
> DISTINCT logic.
On 2006-05-01 11:55 AM, "Robert DiFalco" <[EMAIL PROTECTED]> wrote:
> Well, normally a DISTINCT has to do a type of sort and is slower than
> non-DISTINCT queries. Each field of the result set is considered in the
> DISTINCT logic. Can you modify the query so that it does not require the
> DISTINCT?
From: Stephen P. Fracek, Jr. [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 01, 2006 7:52 AM
To: mysql@lists.mysql.com
Subject: Optimizing DISTINCT searches
Several of my DISTINCT searches are frequently showing up in the slow
query log. These queries use multiple table joins. Using EXPLAIN shows
that the queries
Several of my DISTINCT searches are frequently showing up in the slow query
log. These queries use multiple table joins. Using EXPLAIN shows that the
queries are using the appropriate keys, as far as I know. Are DISTINCT
searches using multiple joins slow?
TIA.
Stephen P. Fracek, Jr.
[EMAIL PR
First of all, thanks all for your answers.
Second, I'll explain what I managed to do about this issue,
in case someone else needs it in the future.
So, first I started by changing the way I store the `ip`
field from varchar(15) to int unsigned and populated this field with
the 4-byte integer form of each address
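In sketch form, that conversion is what the standard INET_ATON/INET_NTOA helpers do (table and column names here are hypothetical):
ALTER TABLE traffic ADD COLUMN ip_num INT UNSIGNED;
UPDATE traffic SET ip_num = INET_ATON(ip);   -- '10.0.0.1' -> 167772161
-- Reading it back in dotted form:
SELECT INET_NTOA(ip_num) FROM traffic LIMIT 1;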
Hi all!
(Sorry for the late reply.)
Puiu Hrenciuc wrote (re-ordered):
"Barry" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
Puiu Hrenciuc wrote:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network
traffic data :
record_time datetime - time when recording
2006/4/13, Puiu Hrenciuc <[EMAIL PROTECTED]>:
> Hi,
>
> I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network traffic
> data :
>
> record_time datetime - time when recording was added
> ip char(15) - ip that generated the traffic
> type tinyint(3) - traffic type ( 1 - local, 2 - internet )
Hmmm, I have omitted that:
I also have an index on (ip,type), in that order.
"Barry" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Puiu Hrenciuc wrote:
>> Hi,
>>
>> I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network
>> traffic data :
>>
>> record_time datetime -
Puiu Hrenciuc wrote:
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network traffic
data :
record_time datetime - time when recording was added
ip char(15) - ip that generated the traffic
type tinyint(3) - traffic type ( 1 - local, 2 - internet )
inbound int(10) - in bytes
outbound int(10) - out bytes
Hi,
I have a table ( MyISAM, MySQL 5.0.20, FreeBSD ) containing network traffic
data :
record_time datetime - time when recording was added
ip char(15) - ip that generated the traffic
type tinyint(3) - traffic type ( 1 - local, 2 - internet )
inbound int(10) - in bytes
outbound int(10) - out bytes
Hello.
Taking into account the nature of your queries, I can point you to
these links:
http://dev.mysql.com/doc/refman/5.0/en/query-speed.html
http://dev.mysql.com/doc/refman/5.0/en/limit-optimization.html
If some of your fields have only two values ('Yes', 'No') you can
switch to ENUM
Sorry guys,
here is a table description, query and result:
SHOW CREATE TABLE Properties;
Properties |CREATE TABLE `Properties` (
`Property_ID` int(11) NOT NULL auto_increment,
`Insertion_Date` date default NULL,
`Status` varchar(20) default NULL,
`Uploader_ID` int(11) default NULL,
`Ap
Hello.
You will get much more help from the list if you provide the current
table structure, a short description of the information that is stored
in those fields, and the query you want to optimize (most probably it
takes much more time than the others).
> If point 1 is true, is there a way to trim all the whitespace from the existing data?
Hi,
I am trying to optimize my data and am currently looking at indexes etc.
Someone has suggested the following as well:
1. Ensure that there is no whitespace at the beginning or end of entries
2. Use 1 or 0 instead of yes or no.
If point 1 is true, is there a way to trim all the whitespace from the existing data?
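For point 1, a sketch with hypothetical table and column names; TRIM strips leading and trailing spaces, and the same expression in the WHERE clause limits the update to the rows that need it:
-- Preview the affected rows first:
SELECT col FROM t WHERE col <> TRIM(col);
-- Then clean them in place:
UPDATE t SET col = TRIM(col) WHERE col <> TRIM(col);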
>> From: Anthony Ettinger <[EMAIL PROTECTED]>
>> To: mysql@lists.mysql.com
>> Subject: optimizing mysqldump
>> Date: Wed, 30 Nov 2005 16:44:02 -0800
>>
>> I'm using the typical --opt with mysqldump, over the internet.
If your server is Linux-based, try running mysqldump locally on the
server via ssh. Then you can zip it up and transfer it over.
=C=
|
| Cal Evans
| http://www.calevans.com
|
Anthony Ettinger wrote:
I'm using the typical --opt with mysqldump, over the internet.
The problem I'm having is that the dump takes 30 mins or so, and during that
time, every table in the db is locked (so as not to throw it out of sync).
From: Anthony Ettinger <[EMAIL PROTECTED]>
To: mysql@lists.mysql.com
Subject: optimizing mysqldump
I'm using the typical --opt with mysqldump, over the internet.
The problem I'm having is that the dump takes 30 mins or so, and during that
time, every table in the db is locked (so as not to throw it out of sync).
How can I optimize this? I'm trying to come up with a centralized database
backup tool,
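For what it's worth, a sketch of the usual way around the long lock, assuming the big tables are InnoDB (for pure MyISAM data, --single-transaction does not help and table locks remain the trade-off):
# Take a consistent snapshot instead of locking every table for the
# whole dump; compress before it crosses the wire:
mysqldump --single-transaction --quick dbname | gzip > dbname.sql.gz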