Thanks for your information; we are sorry about this.
- Original Message -
From: Martin Gainty
To: pagong...@gmail.com ; haidarpes...@gmail.com
Cc: mysql@lists.mysql.com
Sent: Wednesday, August 04, 2010 6:20 AM
Subject: RE: from excel to the mySQL
Hi!
Data Wizard for MySQL allows you to import data from Excel (as well as
from CSV, DBF, XML and text files) in a few mouse clicks. The app also
provides a flexible task scheduler.
http://www.sqlmaestro.com/products/mysql/datawizard/
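If a GUI tool is not an option, the same Excel-to-MySQL import can be scripted: save the sheet as CSV and load it with parameterized INSERTs. A minimal sketch in Python; the table name `products` and the columns are invented for illustration, and the demo runs against an in-memory sqlite3 database so it is self-contained (with mysql-connector-python the code would be the same apart from the connect call and `%s` placeholders):

```python
import csv
import io
import sqlite3

# Demo uses an in-memory SQLite database; for MySQL you would connect with
# mysql-connector-python instead and use %s placeholders in the INSERT.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")

# Stand-in for a CSV file exported from Excel ("Save As -> CSV").
csv_data = io.StringIO("name,price\nwidget,9.99\ngadget,14.50\n")

reader = csv.DictReader(csv_data)
rows = [(r["name"], float(r["price"])) for r in reader]

# Parameterized INSERTs avoid quoting/escaping problems with cell contents.
conn.executemany("INSERT INTO products (name, price) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 2
```

For large files, MySQL's LOAD DATA INFILE is usually much faster than row-by-row INSERTs, at the cost of needing file access on or near the server.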
- Original Message -
From: HaidarPesebe
Thanks for your information; we have tried it and it solved my problems.
Thanks.
- Original Message -
From: Miguel Vaz
To: HaidarPesebe
Cc: MySQL Lists
Sent: Wednesday, August 04, 2010 1:40 AM
Subject: Re: from excel to the mySQL
Hi,
I've always used Navicat for MySQL.
With the following query, if it returns 2 results it's fast (0.04s); if it has fewer results than the limit, it takes 1 minute.
Query:
select * from hub_dailies_sp where active='1' and date='2010-08-04'
order by id desc LIMIT 2;
Show create table:
http://pastebin.org/447171
27,000 rows in the table.
Isn't it the case that it first orders the rows by id (indexed?) and then scans them to pick the rows that satisfy the WHERE clause?
It stops when the result reaches the limit; otherwise it scans the whole table (a 27,000-row scan).
Then the response time with LIMIT 2 can really vary.
Because you are sorting the results, the LIMIT clause has to be applied after
all of the eligible rows have been retrieved. There shouldn't be a big
difference between 2 and 3, but there would be between 2 and 2.
Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
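Jerry's point, that with an ORDER BY the LIMIT can only be applied after all eligible rows have been gathered, can be seen in a small experiment. A sketch using Python's sqlite3 (the table and column names mirror the thread's query, but the data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hub_dailies_sp (id INTEGER PRIMARY KEY, active TEXT, date TEXT)"
)

# Fabricated data: 10 rows for the day, half of them active.
rows = [(i, "1" if i % 2 == 0 else "0", "2010-08-04") for i in range(1, 11)]
conn.executemany("INSERT INTO hub_dailies_sp VALUES (?, ?, ?)", rows)

# ORDER BY ... LIMIT: every WHERE-matching row is gathered and sorted first,
# and only then is the result cut down to 2 -- the LIMIT cannot short-circuit
# the filtering step.
result = conn.execute(
    "SELECT id FROM hub_dailies_sp "
    "WHERE active='1' AND date='2010-08-04' "
    "ORDER BY id DESC LIMIT 2"
).fetchall()
print(result)  # [(10,), (8,)]
```

A composite index covering (active, date, id) would typically let MySQL satisfy both the WHERE clause and the ORDER BY from the index; that suggestion is ours, not from the original posts.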
Hello,
I would like to hire a seasoned MySQL DBA for a short consulting job. I need
to migrate a website with a peak traffic of ~3k pageviews/second from an
Oracle database running on big iron to a MySQL setup. The 3k is of course a
ballpark figure, and we cache content aggressively, but as
Hello Gurus :-) I was running a simple load generator against our 16GB dual
quad-core server and it was pretty much brought to its knees within two hours of
running tests. The customer DOES NOT WANT to change any code; they just want to
throw hardware at it, since it took them a year to create
On 8/4/2010 12:40 PM, Nunzio Daveri wrote:
it was pretty much brought to its knees within two hours of running tests.
Can you clarify what happened in those 2 hours, exactly?
If you mean it took 2 hours of running a single test for performance to
collapse, I'm not sure this means anything.
On Wed, August 4, 2010 11:40, Nunzio Daveri wrote:
Hello Gurus :-) I was running a simple load generator against our 16GB dual
quad-core server and it was pretty much brought to its knees within two hours of
running tests. The customer DOES NOT WANT to change any code; they just want to
1. Set up a single master and 2 slaves. The question is: how do we tell the web
servers to get all the read data from the slaves and to write only to the
master?
Replication is not an answer to all performance problems. Although updates
on the slave are more optimized than if you ran the updates
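One common answer to question 1 (this is our sketch, not from the thread) is to do the read/write split in the application layer: send statements that modify data to the master and everything else to a slave. A minimal, hypothetical router in Python; the host names are made up, and real deployments often use a proxy or connector-level support instead:

```python
import itertools

# Hypothetical host names for a 1-master/2-slave setup.
MASTER = "db-master.example.com"
SLAVES = ["db-slave1.example.com", "db-slave2.example.com"]

# Statements that modify data must go to the master so replication stays correct.
WRITE_VERBS = ("insert", "update", "delete", "replace", "create", "alter", "drop")

_slave_cycle = itertools.cycle(SLAVES)

def route(sql: str) -> str:
    """Return the host a statement should be sent to.

    Writes go to the master; reads are round-robined across the slaves.
    """
    verb = sql.lstrip().split(None, 1)[0].lower()
    if verb in WRITE_VERBS:
        return MASTER
    return next(_slave_cycle)

print(route("INSERT INTO t VALUES (1)"))  # db-master.example.com
print(route("SELECT * FROM t"))           # db-slave1.example.com
print(route("SELECT * FROM t LIMIT 5"))   # db-slave2.example.com
```

A caveat the replies hint at: slaves lag behind the master, so a read that must see a just-committed write still has to go to the master.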
My experience with replication:
Most of the time it is good enough, fast enough... I have just reworked part
of an application to split the reporting module from all the other modules.
We are still using PHP 4.3 with the PEAR::DB module (what? legacy software is
hard to kill! we are trying!),