The order in which records are returned is not guaranteed unless you
specify an ORDER BY. You could run the same query multiple times and
get the records back in a different order each time, although in
practice that rarely happens, especially with caching enabled.
Always do an ORDER BY with
level, which is based on your RAM and InnoDB settings.
MyISAM performance is usually fairly steady as the size of the table
increases.
--
Brent Baisley
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
checking, etc. That way
you don't have to rewrite the same lines every time you want to run a
query.
Brent Baisley
On Fri, Sep 4, 2009 at 6:51 AM, AndrewJames <andrewhu...@gmail.com> wrote:
is there a better way (hopefully simpler) to code this?
i want to get the user id of the logged in user to use
unlikely all of them will be running a query at the same time.
Unless your queries are really slow.
I'm using a server with 2GB RAM that gets 300K hits a day, adds 250K
records per day to the databases, with one table having almost 50
million records.
Brent Baisley
On Fri, Sep 4, 2009 at 12:48 PM
on a.cat_id = b.cat_id where
a.user_id=1 and a.acc_id=3 order by a.tran_date ASC
Brent Baisley
On Thu, Sep 3, 2009 at 1:56 PM, John
Daisley <john.dais...@butterflysystems.co.uk> wrote:
Hi,
Hoping someone can help me with this little issue! It seems really
simple but my brain is refusing to work
MySQL doesn't support timezones (I think Postgres does). I usually
just store dates as Greenwich Mean Time and store the time zone hours
offset in a separate field.
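As a rough sketch (table and column names are invented here), that layout might look like:

```sql
CREATE TABLE events (
  event_id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  happened_at DATETIME NOT NULL,  -- stored as GMT
  tz_offset   TINYINT  NOT NULL   -- hours offset from GMT, e.g. -5
);
```

Converting back to local time is then just DATE_ADD(happened_at, INTERVAL tz_offset HOUR).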
Brent Baisley
On Tue, Jul 28, 2009 at 7:59 AM, Manoj Singh <manojsingh2...@gmail.com> wrote:
Hi All,
Is it possible to store
orders USE INDEX (index_a) WHERE ...
Brent Baisley
On Tue, Jul 21, 2009 at 5:52 AM, Morten <my.li...@mac.com> wrote:
Hi, I have a table orders with the columns
item_id INT FK items(id)
customer_id INT FK customers(id)
status_id TINYINT -- Between 1 and 4 always
ordered_at DATETIME
value.
SELECT tablename.*,
IF(tablename.head_id IS NULL,
CONCAT(tablename.name, tablename.member_id),
CONCAT(heads.name, tablename.head_id)
) AS SortValue
FROM tablename LEFT JOIN tablename AS heads ON tablename.head_id=heads.member_id
ORDER BY SortValue
Brent Baisley
of all the dups.
But I agree, that is the best way to remove duplicates in place
provided the table is not too large.
Brent Baisley
On Tue, Jul 14, 2009 at 11:52 AM, Marcus
Bointon <mar...@synchromedia.co.uk> wrote:
You can combine the two queries you have in option 3 (you'll need to change
field
and deletes).
Brent Baisley
On Fri, Jun 26, 2009 at 11:25 AM, fa so <fak...@yahoo.com> wrote:
I have a website where my database is continuously growing. And I started
being worried about performance.
I have a couple of questions, and I would appreciate it very much if you can
elaborate on them.
- I
(radians(39.0788994))*cos(radians(latitude))*cos(radians(-77.1227036-longitude*60*1.1515
5) AS zips
ON custzip=zip
Often that simple change speeds things up considerably in MySQL.
An EXPLAIN should show a DERIVED table, if I recall correctly.
Brent Baisley
On Thu, Jun 18, 2009 at 9:06
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 5 ), ',', -1 ) AS end5
FROM events GROUP BY event_id;
I think that will give the format you specified, but I am not
recommending you do it this way.
Hope that helps.
Brent Baisley
a list of values that need
to be saved and when you hit 100 (or some other batch size), bulk
insert into the database and bulk write to the file.
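The bulk-insert half of that can be sketched like this (table and column names are made up):

```sql
-- one statement for the whole batch instead of 100 single-row INSERTs
INSERT INTO log (msg) VALUES
  ('first line'),
  ('second line'),
  ('third line');
```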
Brent Baisley
On Wed, May 6, 2009 at 12:02 PM, Andrew Carlson <naclos...@gmail.com> wrote:
Any ideas about this? It's very aggravating and I have no idea
are in a different state. One of those may be the culprit.
Brent Baisley
On Mon, Apr 20, 2009 at 10:28 AM, living liquid | Christian Meisinger
c.meisin...@livingliquid.com wrote:
Hi there.
I have a small table with my daily banner hits.
The first version used MyISAM, but with a lot of concurrent queries
has been resolved.
Interesting, your temp1 attached file shows mysql switched from using
the org_date index to the organization index.
Brent Baisley
2009/3/12 Carl c...@etrak-plus.com:
Brent,
After a delay while I was busy killing alligators, I did as you suggested
(added a composite index
an auto increment column, order it by that value. That
field will have the order the records were imported in.
Brent Baisley
On Mar 6, 2009, at 9:10 PM, revDAVE wrote:
Hi folks,
I'm trying to use MySQL Migration Toolkit 1.1 with MS SQL server 2005
http://dev.mysql.com/downloads/gui-tools/5.0
locking the table.
Is that how many records you want to return? That seems like a lot.
Maybe reworking your query may help. Heck, post the sizeable query.
You've been spending weeks on it.
Brent Baisley
On Tue, Mar 3, 2009 at 10:53 AM, Carl c...@etrak-plus.com wrote:
I have been wrestling
Be careful with using InnoDB with large tables. Performance drops
quickly and quite a bit once the size exceeds your RAM capabilities.
On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote:
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave on
It's actually a very simple solution, and you should do it all in a
single INSERT. Putting INSERTs in a loop will kill your performance
when you try to scale.
$sql4 = 'INSERT INTO temp (example) VALUES (' . implode('),(',
$_POST['categoriesIN']) . ')';
$result4 = mysql_query($sql4, $db);
That
), this will not clean them all out. You can keep re-running the
query to remove records that are duplicated more than once.
Test the query first to make sure it's working properly. Just replace
DELETE tableName with SELECT fieldName(s).
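One common shape for that kind of duplicate-removal query (not necessarily the exact one quoted; it assumes an id primary key and a dupField column that defines what counts as a duplicate):

```sql
-- keeps the lowest id in each duplicate group;
-- swap DELETE t1 for SELECT t1.* to test it first
DELETE t1 FROM tableName t1
JOIN tableName t2
  ON t1.dupField = t2.dupField
 AND t1.id > t2.id;
```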
Brent Baisley
.
Brent Baisley
On Mon, Feb 23, 2009 at 6:58 AM, Andy Smith a.sm...@ukgrid.net wrote:
What RAID level to use, whether to use SCSI or SATA, etc. are all pretty much
"how long is a piece of string?" questions. If you have a really high end
hardware array, RAID 5 may be faster than RAID 1+0
AND quotation.id IN (107037, 304650,
508795, 712723, 1054653))
JOIN part ON ( part.id = quotation.part_id )
That may or may not help; check whether the EXPLAIN changes.
Brent Baisley
On Mon, Jan 26, 2009 at 6:16 AM, Jesse Sheidlower jes...@panix.com wrote:
I have an app that joins results from a MySQL query
the query as if it was a regular table.
As I mentioned, this will break if more than one supplier has the same
price. You'll get an arbitrary supplier ID out of those with the
minimum price. This is because there is no unique value to join on.
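A sketch of that join-on-the-minimum pattern, with invented table and column names:

```sql
SELECT q.supplier_id, q.item_id, q.price
FROM quotes q
JOIN (SELECT item_id, MIN(price) AS min_price
      FROM quotes
      GROUP BY item_id) m
  ON q.item_id = m.item_id
 AND q.price  = m.min_price;  -- on a price tie, every tying supplier comes back
```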
Hope that points you in the right direction.
Brent
The ratings field would be NULL. You could also add a count in your
query to tell how many ratings there were. If count is 0, you know
there are no ratings.
SELECT count(ratings.rating_id) AS rate_count, ...
Brent Baisley
On Mon, Dec 22, 2008 at 12:39 PM, Brian Dunning br...@briandunning.com
.country='dk'
where m.active = 'on' and m.tdataon = 'on'
order by m.code;
That may not be completely correct. What you are doing is getting 2
copies of the data field and conditionally adding the one you need to
the retrieved record.
Brent Baisley
That's because RAND() is a decimal (0.37689672). Try score*RAND().
Brent Baisley
On Nov 30, 2008, at 2:03 AM, sangprabv wrote:
Hi,
Thanks for the reply. I have tried it, but I don't see RAND() working;
the list is always the same. TIA
Willy
-Original Message-
From: mos
for any help.
Andre
Create a MERGE table that is all those tables combined. Then you just
need to do 1 select as if it was one table. Just be sure to update the
MERGE table description whenever you add a table.
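A sketch of that MERGE setup, assuming identically-defined underlying MyISAM tables (names invented):

```sql
CREATE TABLE all_logs (
  id  INT NOT NULL,
  msg VARCHAR(255)
) ENGINE=MERGE UNION=(logs_2007, logs_2008) INSERT_METHOD=LAST;

-- when a new underlying table is created, update the description:
ALTER TABLE all_logs UNION=(logs_2007, logs_2008, logs_2009);
```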
Brent Baisley
, Nov 21, 2008 at 2:12 PM, Andre Matos [EMAIL PROTECTED] wrote:
Sounds interesting, but does the MERGE table support complex SELECT statements
and LEFT JOIN?
Andre
On 21-Nov-08, at 1:45 PM, Brent Baisley wrote:
On Fri, Nov 21, 2008 at 12:44 PM, Andre Matos [EMAIL PROTECTED]
wrote:
Hello
another subquery (i.e.
LEFT JOIN on the dup select WHERE table.uniqueId IS NULL) to that to
filter so you can delete all duplicates in 1 shot. This has always
been something I had to do very infrequently, so I never bothered
taking it further.
Hope that helps!
Brent Baisley
while also removing the oldest table from the MERGE
list. You still have all the data, but you've removed it from normal
use with virtually no overhead.
Brent Baisley
On Nov 17, 2008, at 9:53 PM, Micah Stevens wrote:
I don't think this is indicative of a design issue. Some tables need
data
On Mon, Nov 17, 2008 at 7:56 PM, sangprabv [EMAIL PROTECTED] wrote:
Hi,
I just want to know what things cause table/db overhead, because my
tables always develop overhead and I must run an OPTIMIZE query
every morning. Is there any other solution? TIA.
Willy
What is happening
to another table. While that will speed
up searches, the speed improvement likely won't be noticeable for the
searches you listed.
Make sure query cache is enabled. That will help a lot since the
result of the search will be cached until the table changes.
Brent Baisley
Why are you creating a subquery/derived table?
Just change your limit to 1,2
ORDER BY updates.AcctSessionTime DESC LIMIT 1,2
Like you did in the outer query.
Brent
On Fri, Oct 17, 2008 at 5:12 AM, Ian Christian [EMAIL PROTECTED] wrote:
Hi all,
I'm trying to work out the difference in a
Both times seem a bit long, even if your database has millions of rows.
Can you post an EXPLAIN of your query? That they are in different
databases should have minimal effect on your query.
Brent
On Oct 3, 2008, at 12:14 PM, mos wrote:
I have two indexed MyISAM tables, each in a separate
It doesn't work because you are counting values. An empty string is
still a value. Perhaps you are trying to SUM instead? If you are
really looking for a count, you also should use sum.
sum(if(a.Type = 'Signature Based Return', 1, 0))
That will return a count of those records where a.Type =
pconnect.
Brent Baisley
On Fri, Sep 19, 2008 at 3:51 PM, Jaime Fuentes [EMAIL PROTECTED] wrote:
You have to use 64-bit MySQL on a 64-bit OS.
--Original message--
From: Martin Gainty
To: Kinney, Gail
To: 'mysql@lists.mysql.com'
Sent: 19 Sep 2008 10:51
Subject: RE: too many
label+value. The table will grow quickly as far as
number of records, but MySQL handles millions of records without a
problem.
Hope that helps
Brent Baisley
size for varchar. TEXT is like VARCHAR, but with a fixed maximum
size of 65,535 bytes.
Brent Baisley
On Tue, Sep 9, 2008 at 8:24 AM, Krishna Chandra Prajapati
[EMAIL PROTECTED] wrote:
Hi,
I would like to know the difference between char, varchar and text.
char limit 255 character fixed
MySQL has no idea how you are presenting the data (HTML, RTF, etc.),
so it couldn't highlight the words for you. It shouldn't really be that
tricky using grep and PHP.
Brent
On Sep 8, 2008, at 10:58 AM, Fco. Mario Barcala Rodríguez wrote:
Hi all:
I was reading documentation and searching into
BY LOC
That should add a sequential number to LOC and DATA that will reset to
0 whenever the value of LOC changes. Some of the IFs in there are just
to suppress output of variable assignment.
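That variable-based numbering can be sketched like so (LOC and DATA as in the example, the rest invented; relying on user variables inside a SELECT like this is an informal MySQL trick, not guaranteed behavior):

```sql
SELECT LOC, DATA,
       @seq  := IF(@prev = LOC, @seq + 1, 0) AS seq,  -- resets to 0 per LOC
       @prev := LOC AS prev_loc                        -- just carries state forward
FROM t, (SELECT @seq := 0, @prev := NULL) init
ORDER BY LOC;
```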
Hope that helps
Brent Baisley
On Sep 5, 2008, at 5:44 PM, Dan Tappin wrote:
I have an existing data
should probably be records with a column indicating what
type of data it is.
Brent Baisley
On Sep 4, 2008, at 5:11 AM, drflxms wrote:
Dear MySQL specialists,
this is a MySQL-newbie question: I want to create a view of a table,
where all NULL-values are substituted by 0. Therefore I tried
.
Hope that helps or points you in the right direction.
Brent Baisley
On Sep 4, 2008, at 4:26 PM, Josh Miller wrote:
Good afternoon,
I have recently converted a large table from MyISAM to InnoDB and am
experiencing severe performance issues because of it. HTTP response
times have gone
queries.
Divide and conquer, it will scale better.
Brent Baisley
On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:
Greetings List,
We have a medium-large size database application which we are trying to
optimize and I have a few questions.
Server Specs
1 Dual Core 2.6 Ghz
2GB
It might be because you are comparing user_id to a string when
the field type is a decimal. Drop the quotes around the user_id search
value and see if that works.
Brent Baisley
On Sep 1, 2008, at 3:59 PM, Krishna Chandra Prajapati wrote:
Hi,
In the query below explain gives
re-enable the
InnoDB stuff and you should be alright. You can leave your default
engine as MyISAM and, if you like, you can use ALTER TABLE to convert
your InnoDB tables to MyISAM.
--
Brent Baisley
On Aug 15, 2008, at 1:01 AM, [EMAIL PROTECTED] wrote:
Hello mysql,
As I have previously
Just do a left join with the delete query.
DELETE feed_tag FROM feed_tag LEFT JOIN feed ON
feed_tag.feed_id=feed.id WHERE feed.id IS NULL
That should do it. You can change DELETE feed_tag to SELECT and
test it first.
--
Brent Baisley
On Aug 13, 2008, at 4:51 PM, Daevid Vincent wrote
you start back up.
--
Brent Baisley
On Tue, Aug 12, 2008 at 8:29 AM, [EMAIL PROTECTED] wrote:
Hi all,
I try to generate a unique id for each row in a Mysql-InnoDB Table. Because
of many deletes I can't use an auto_increment column.
After a Mysql restart, the next value for an auto_increment
multiple
tables in the same query.
Hope that helps.
Brent Baisley
On Wed, Aug 6, 2008 at 4:31 AM, Magnus Smith
[EMAIL PROTECTED] wrote:
I have the following two tables
ACCOUNTACTION
+---+--+--+-+-+---+
| Field | Type | Null
will actually be helpful.
Hope that helps.
Brent Baisley
On Aug 11, 2008, at 8:26 AM, Jonathan Terhorst wrote:
I have this query:
SELECT DISTINCT t1.string FROM t1
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a
LEFT JOIN t4 ON t1.int_b=t4.int_b
Copying 5GB files shows you what kind of performance you would get for
working with say video, or anything with large contiguous files.
Database access tends to be random, so you want a drive with faster
random access, not streaming speed. Try copying thousands of small
files and compare the
on cust_full.name=mx_cust_full.name AND
cust_full.acq_date=mx_cust_full.mx_acq_date
The name+acq_date is going to be your unique string to join on. You're
finding out the max, then finding out which record is associated with
the max.
Brent Baisley
I write code.
On Jun 20, 2008, at 10:50 PM
posts.post_id) as counted
That will count the number of unique posts. I don't know what your
unique field name is for the posts table.
Brent Baisley
Systems Architect
On Sat, May 3, 2008 at 9:00 PM, Patrick Aljord [EMAIL PROTECTED] wrote:
hey all,
I have my query that counts posts per user
returns all
records, then a limit is imposed.
Brent Baisley
Systems Architect
On Apr 26, 2008, at 7:22 AM, j's mysql general wrote:
Hi Guys,
Firstly, this is the only time I have ever encountered this problem
and
searching archives or google shed no luck since yesterday so here I
am
and the slave becomes the master.
Very simple in theory, a bit more complicated in practice.
Brent Baisley
Systems Architect
On Apr 23, 2008, at 2:28 PM, Paul Choi wrote:
Does anyone have experience with upgrading large databases (~500GB
each)
from MySQL 4.1 to 5.0? The tables are in InnoDB format. We
a database is setup as a slave, that doesn't mean you
can't use it like a typical database. You can insert, delete, update,
etc. just like any other DB. Something or someone is likely adding
records directly to the slave, which is then generating its own
auto-increment value.
Brent Baisley
Does the plus query return more than 50% of the records? If so, MySQL
won't return anything, since the result set isn't considered relevant.
Brent Baisley
Systems Architect
On Apr 11, 2008, at 8:08 AM, Barry wrote:
I am confused ( nothing new there), what I thought was a simple
search is proving
is just the opposite.
An outer join doesn't filter the table, it just finds any matching
content if it's present. Anything without matching content has a
null where normal matched content would be.
Brent Baisley
PHP, MySQL, Linux, Mac
I write code
On Mar 1, 2008, at 4:16 AM, Thufir wrote
When you establish a connection, it's a connection to a server, not a
specific DB. You can set a default db so that you don't always have
to specify the db name you are working with. So to answer your
question, no, a new connection is not established.
Brent
On Jan 19, 2008, at 10:19 AM,
Your biggest problem is probably the subquery/IN you are performing.
You should change that to a join. And I don't know about using
SQL_CALC_FOUND_ROWS in a full text query that's not boolean, and you
shouldn't use it unless you have a LIMIT clause.
SELECT SQL_CALC_FOUND_ROWS
Yes, that is the correct behavior of a LEFT JOIN. A left join keeps
all the records from the original/left table and will link up any
related data from the right table; if there is no related data,
it sets it to NULL. If you want the join to act as a filter, then
just use a regular JOIN.
The query_cache TimeToLive is variable. The query will stay in the
cache as long as the data does not change. Once a table's data changes,
the query cache entries for that table are cleared. It's not the best
implementation, but it's way better than nothing.
MySQL 5 does have an on demand query cache
Sounds like you should create a MERGE table that links all the
underlying tables together. Then you just query the merge table and
MySQL handles which tables it needs to pull data from. You also then
don't need to query for the tables.
On Jan 9, 2008, at 9:12 AM, Cx Cx wrote:
Hi List,
I
Taking it step by step, this query will give you all the lowest ids,
for those records with duplicates.
SELECT min(id), email, count(*) AS cnt
FROM addressbook
GROUP BY email
HAVING cnt > 1
Now think of that query as an already existing table, which you can
do, you just need to name the query
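Naming that query as a derived table, the one-shot delete might look like this (still assuming the addressbook table above, with id and email columns):

```sql
-- removes every duplicate except the lowest id per email
DELETE a FROM addressbook a
JOIN (SELECT MIN(id) AS keep_id, email
      FROM addressbook
      GROUP BY email
      HAVING COUNT(*) > 1) dups
  ON a.email = dups.email
WHERE a.id <> dups.keep_id;
```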
A function on any column does not negate the use of the index. That
only happens if you use a function in a filter part (join, where, etc.).
You may want to run optimize table on the 2 tables involved. That
will update the table stats that mysql uses to optimize the queries.
MySQL may
You would need to have an auto_increment field and your primary key
would be the item id (or what you call it) field + the auto_increment
field. MySQL will then handle incrementing the field if there is
already a record with the same item id. Usually the auto_increment
field will contain
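A sketch of that composite-key layout (names invented; per-group AUTO_INCREMENT like this only works with MyISAM):

```sql
CREATE TABLE order_items (
  item_id INT NOT NULL,
  seq     INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (item_id, seq)
) ENGINE=MyISAM;
-- inserting two rows with the same item_id numbers seq within that item_id
```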
It sounds to me like you might be trying to find the standard
deviation or the variance, which are functions you can use right in
your query.
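Using the symbol/value columns from the question below, that could be as simple as (a sketch; the table name is assumed):

```sql
SELECT symbol, STDDEV(value) AS sd, VARIANCE(value) AS var
FROM prices
GROUP BY symbol;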
On Nov 2, 2007, at 7:37 AM, Octavian Rasnita wrote:
Hi,
I have a table with the following columns:
symbol
date
value
I want to select all the
You are using a correlated subquery, which MySQL is terrible at.
Whenever you find yourself doing a correlated subquery, see if you
can switch it to a derived table with a join, which MySQL is far
better at. A derived table is like a virtual table you create on
the fly. It's very simple,
That is a string comparison, so they will never be equal. You don't
have to put quotes around field names unless you are using reserved
words, which you shouldn't. If you do use quotes around field
names, you need to use `backticks`.
On Oct 10, 2007, at 1:15 PM, Martijn Tonies wrote:
I'm not sure why you say 30 or more inserts will take too long. As
long as you do a bulk insert, it's just a single command; 30
individual inserts will take their toll.
You are really looking for a logging system. You're not going to be
querying the table all that much, just doing a lot of inserts.
As others have mentioned, MySQL doesn't handle IN queries efficiently.
You can try changing it to use derived tables/subqueries. I did some
quick tests, and the EXPLAIN shows a different analysis.
select comment, gid, date_posted from tbl
JOIN
(select max(id) as mid
from tbl where gid in (
You should read up on the SHOW STATUS and SHOW VARIABLES output.
That will give you a start as to perhaps some obvious issues. For
instance, your opened_tables and threads_created should not be
large. Using the right table types is also a very big issue, although
you may have gone down
You can do it as long as there is only a single record with the max
value. If there is more than 1 record with the same max value, there
isn't a single record to pull.
To do it, you would need to join on the results of the max query, and
part of the join condition would be the max value
I assume each part can be associated with multiple projects, which is
a many to many relation. In that case you need to create a join
table that holds the relation. Table like that are typically just 2
fields, one for the project id and one for the part id. You may want
to add other fields
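The join table being described can be sketched as (names invented):

```sql
CREATE TABLE project_parts (
  project_id INT NOT NULL,
  part_id    INT NOT NULL,
  PRIMARY KEY (project_id, part_id)  -- one row per project/part pairing
);
```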
One thing to check is to make sure you are not quoting your NULL
value for your insert statement. MySQL will try to convert that to a
numeric value, which may end up as 0.
On Aug 8, 2007, at 12:55 PM, Mahmoud Badreddine wrote:
Hello
I have a table which contain a few numerical values.
I
If you are going to implement real security, it shouldn't matter if
someone knows the unique id of a record. You should be checking
whether they have the right to see that record.
But regardless, there is an easy way to set random ids as your
unique identifier. Setup 2 fields, one being the
You have your count in the wrong spot. I'm not even sure how that
query would work. The count(*) should be part of a select field. By
putting it in the HAVING clause it's calculating it after the query
runs.
SELECT jobs.*, count(*) AS Cnt FROM jobs GROUP BY customer_number,
job_number
I would use replication with multiple masters. You could setup 2, 3
or however many servers, that all replicate to 1 server (which you
may be doing). I would just set the auto_increment increment larger
than you need to allow for growth, like 10. If you are using
auto_increment.
You then
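The increment/offset setup being described looks roughly like this (values assumed; each master gets a distinct offset):

```sql
-- on master #1, leaving room for up to 10 masters:
SET GLOBAL auto_increment_increment = 10;
SET GLOBAL auto_increment_offset    = 1;
-- master #2 would use auto_increment_offset = 2, and so on;
-- generated ids then interleave (1, 11, 21, ... vs 2, 12, 22, ...)
```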
I would say caching, on multiple levels (CPU, DB, File System). By
splitting at least some of the load, it's possible for parts of the
cache to become old and get flushed. When everything is on one
machine, the box has a complete picture of the traffic patterns and
can optimize better.
The most obvious is to make sure you are doing bulk inserts, which
you may already be doing.
MyISAM tables use table locking, so you usually can't insert while a
search is occurring. There are a few exceptions and v5 (5.1?) has
another option you can set so inserts are always added to the
Wallace is right, Data Warehousing shouldn't delete any data. MySQL
isn't as robust as say, Oracle, for partitioning so you need to fudge
things a little. I think partitioning is the way to go and you should
use MERGE tables to handle your partitions. Really what you are
looking to do is
MySQL's regular expression support only returns true/false,
indicating whether the expression matched. I had to do something
similar to what you want to do, although I needed to count how many
digits there were.
You can use the REPLACE() function to strip out the numbers. Of
course, this means you need
a) You setup a special index (full text).
b) Full text indexes can only be created on MyISAM table types.
c) MyISAM does not support transactions; it uses table locking. If
you are not specifically using transactions, you don't need to worry
about it. Not transaction safe just means that
That's quite a query. You may not be able to optimize it well with
those nested selects. You may want to think about changing your query
around a little, perhaps joining pieces of data using whatever
programming language you're using on the front end. You have MySQL
doing a lot of work and
As Dan mentioned, you're searching on the 'tag' field, which has no
index. But since that field is in the table you're joining on, adding
an index on it might not help. You're actually searching on the tag_id
in the join field, not the 'tag'.
Add an index on 'object_type' in the
If you don't want to change any code, you can look into using
federated tables. But if your tables are local, you're adding
unnecessary overhead. You can reference tables in other databases on
the local machine by simply adding the database name before the table
name:
SELECT * FROM
You probably want to look at the group_concat function. It doesn't work
as a subselect, but it allows you to group a set of records and rollup
the different values in the grouping.
Christian Hansel wrote:
I'd like to accomplish something like:
set @myvar = concat_ws(',', (SELECT column from
Yes, that will lock up the table while the change is being made. One
technique you can use is to rename the table and create a new one to
catch the incoming data.
RENAME TABLE x TO y;CREATE TABLE x LIKE y;
By putting both commands on 1 line, it will execute almost immediately.
Then you can alter
I think you're missing the concept of a transaction in the database sense.
The idea behind a transaction is that you can perform multiple steps and if
you don't complete all steps, any changes are reversed. The reversal process
is handled by the database.
A good example is moving money from bank
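The bank-transfer example being set up here is the classic sketch (table and amounts invented; requires a transactional engine such as InnoDB):

```sql
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- if either UPDATE fails, ROLLBACK reverses both
```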
with nothing and determine how the string length changed.
Any ideas? I'm using v4.1.
I'd rather do it in a sql statement rather than using a scripting language.
--
Brent Baisley
Systems Specialist
CoverClicks, LLC.
Yeah, I was sort of heading that route. But I would also like to determine a count of the numbers in a string too. Certainly the
query is doable, but it's unwieldy.
What I have so far:
SELECT fld, @FLDLEN := char_length(fld) AS fld_len,
@FLDLEN - char_length(replace(fld, 'o', ''))
to boil down to a very easy grep statement, but a complicated SQL statement.
- Original Message -
From: Reinhardt Christiansen [EMAIL PROTECTED]
To: Brent Baisley [EMAIL PROTECTED]; mysql@lists.mysql.com
Sent: Thursday, May 31, 2007 2:41 PM
Subject: Re: Determining number of vowels
As Jerry mentioned, you can use temporary tables. Temp tables are unique to the login session, so each user logged in could create a
table called data without conflict. But temp tables are just that: temporary. Once the session is done, the temp table is dropped.
If you need persistent tables, which I think
I think what you want to do is look into creating a spatial index on the raw data. Then just searching on that index will allow you
to judge distances between things. You won't need to create a table with every single possible combination. There are quite a few
examples in the manual and online.
between zip codes.
- Original Message -
From: Chris Prakoso [EMAIL PROTECTED]
To: Brent Baisley [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, May 23, 2007 9:41 AM
Subject: Re: Help on selecting a View with 3 Billions rows !
Brent,
Thanks for the reply. I knew that MySQL has
Normalization is about using ids to minimize change, which also eliminates repetition. It's fine to have the color red repeated
throughout your table as long as it will never change. But if you suddenly have two shades of red, you'll need to update all the
records that say red. If you used id's,
You would need to find out the reason for the crash to prevent or minimize it.
The reason may be external to mysql.
InnoDB can get really, really slow when tables get physically large if you
don't have a similar amount of RAM.
MyISAM doesn't support transactions, so no, that wouldn't help.
If
create table x like y
Creates an exact copy of the table without the data; indexes are included.
- Original Message -
From: J Trahair [EMAIL PROTECTED]
To: MySQL General mysql@lists.mysql.com
Sent: Wednesday, May 23, 2007 3:58 PM
Subject: Replicating an existing table
Hi Everyone
It seems to me that you are asking about MERGE tables. A merge table allows you to combine 1 or more tables so they appear as a single
virtual table. Which tables make up the merge table can be modified quickly and easily, regardless of size. Then your code only needs
to reference 1 table name.
There
You may be running into file system file size limits. You would need to make sure the file system you are using is set to handle
files larger than 4GB; in addition, you need to check that the account mysqld is running under is allowed to create files larger
than 4GB. Just because the OS and file
In a nutshell, one way to do subqueries is to just name the query and join on
it as if it was a regular table.
SELECT field1,field2,... FROM table1
INNER JOIN table2 ON field1=fieldT2
INNER JOIN (SELECT fieldA, fieldB FROM tableA WHERE ...) AS table3
ON fieldA=field1
...
More commonly people