Hi,
I'm writing a stats script that monitors, among other things, the MyISAM key
cache usage. As per the mysql.com docs, I plot Key_blocks_used as the
high-watermark line, but on some of my servers, this is quite a bit lower
than current usage.
Googling reveals that I'm not the only one to see
On Wed, Jan 7, 2009 at 2:26 PM, Richard Heyes rich...@php.net wrote:
I still think a CHAR field would be faster than a VARCHAR because of
the fixed row length (assuming everything else is fixed). Perhaps
someone from the MySQL list could clarify...?
Say that your column length goes up to
On Wed, Jan 7, 2009 at 6:20 PM, David Scott
critt...@desktopcreatures.com wrote:
When we run a large query, other queries start to back up when the large one
gets to the 'creating sort index' phase. This lock seems to affect the whole
server, all databases... does anyone know what may be
Replying to list, as one suggestion does involve the database :-)
On Fri, Jan 9, 2009 at 3:03 AM, xufeng xuf...@yuanjie.net wrote:
Hi all,
Recently I am redesigning our website architecture.
I have two servers located at Location_A and Location_B serving the same
http://www.ourexample.com
The error is not in the join, but in the fact that you have two invoices
with the same invoicecode. The items are retrieved and displayed for both
invoices.
If this is correct, select distinct should solve your problem.
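A minimal sketch of what that would look like (table and column names are my assumption, not from the original thread):

```sql
-- Duplicate invoicecodes make each item row join against both
-- matching invoices; DISTINCT collapses the repeated result rows.
SELECT DISTINCT i.invoicecode, it.item_id, it.description
FROM invoices i
JOIN invoice_items it ON it.invoicecode = i.invoicecode;
```

Of course, the cleaner fix is to remove the duplicate invoice rows (or enforce a unique key on invoicecode) so DISTINCT isn't needed at all.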
On Mon, Jan 12, 2009 at 5:59 PM, Ron r...@silverbackasp.com wrote:
Hi All,
You need to set that in the script that starts your mysql daemon, probably
/etc/init.d/mysql or something similar.
On Tue, Jan 27, 2009 at 3:44 PM, Xristos Karvouneas
kalamat...@hotmail.com wrote:
Dear All,
I would like to be able to do that from the cnf file, as I want to have a
different
Bah, annoying person :-p
TBH, when I think on it, it'd probably not even work, because the env
variable you need to set is that of the client, not the server.
I suppose the simplest way of doing this would be to write a wrapper script
around the mysql client that sets the editor. On a per-user
* check on your filesystem whether the directory actually still exists - rmdir
on a nonexistent dir might return a non-zero exit status
* Take your db offline and do an fsck. If you want, you could flush tables
with read lock, then mount -oremount,ro and then fsck - that'll keep your db
up for reads, at least.
Obviously doesn't work for extremely large datasets, but nothing stops you
from stuffing a server full of memory, assigning a huge block to ramfs, and
using that as the second leg of a mirror, with the first leg a real disk
device set to write-mostly.
Obviously you'll need to create an init
On Mon, Feb 9, 2009 at 6:10 PM, Lucio Chiappetti lu...@lambrate.inaf.it wrote:
I used to make Unix soft links of mysql tables back on mysql 3.23.
[...]
But I wonder if there is anything intrinsically wrong in the usage of
soft-links, or whether this might be an indication instead of
On Tue, Feb 10, 2009 at 12:39 PM, Michael Addyman
michael.addy...@googlemail.com wrote:
I have now thought of having 1 table type per database (i.e. ~30
databases).
This would be easier and cheaper to manage than hundreds of databases, and
would also allow databases to be finely tuned to
On Tue, Feb 10, 2009 at 2:57 PM, Michael Addyman
michael.addy...@googlemail.com wrote:
Hooray! http://code.google.com/p/mysql-master-master/
Am I crazy to be considering replicating 500+ databases? I think so...
I don't think the number of databases is an issue - the main point is the
What you just tested, on the other hand, was the limit of your
max_allowed_packet :-)
Up that to something unlikely and try again :-)
On Tue, Feb 10, 2009 at 9:24 PM, Baron Schwartz ba...@xaprb.com wrote:
On Tue, Feb 10, 2009 at 1:29 PM, Cantwell, Bryan
bcantw...@firescope.com wrote:
I am trying to
In SQL, the correct syntax is IS NULL or IS NOT NULL.
Random programming languages more often than not have no decent support for
NULL content, although your DB library might have an isnull() function or
something similar. Once you've exported a field into a regular variable,
however, most often
I know this doesn't answer any of your questions directly, but I
imagine that if you run the load data on the insert table of the merge
set, the rest might still be available for operations through the
actual merge alias.
not sure, though, feel free to report test results :-)
On Wed, Feb 25,
On Wed, Mar 4, 2009 at 8:42 PM, Morten Primdahl my.li...@mac.com wrote:
around while trying to figure out why the first query was slow and the
subsequent snappy.
Given that you posted that a MySQL restart does not change anything,
but a system restart does, I put my money on the filesystem
It's a good thing, then, that we've got your experience to rely on.
Woe is us, for not having any, and not seeing how obviously right you
are.
You're entitled to your opinion on NULLs, but kindly stop spamming
*my* mailbox with it. I was aware of your dislike for them after the
first mail.
On
Either HAVING, or an additional GROUP BY field of
person_postal_address.person_id should do, I think.
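A hedged sketch of both options (the person table and column names beyond person_postal_address.person_id are assumptions):

```sql
-- Option 1: an extra GROUP BY field collapses to one row per person
SELECT p.person_id, MIN(a.address_id) AS any_address
FROM person p
JOIN person_postal_address a ON a.person_id = p.person_id
GROUP BY p.person_id;

-- Option 2: HAVING filters on the aggregated rows themselves
SELECT person_id, COUNT(*) AS address_count
FROM person_postal_address
GROUP BY person_id
HAVING COUNT(*) > 1;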
On Sat, Mar 14, 2009 at 8:38 PM, Nigel Peck nigel.p...@miswebdesign.com wrote:
Arthur Fuller wrote:
Won't a simple LIMIT 1 do what you want? Or am I missing something?
Thanks for getting
AFAIK, repeated LIKEs.
On Tue, Mar 24, 2009 at 6:24 AM, Yariv Omer yar...@jungo.com wrote:
Hi
when I am using a query for several field's values I am using the following
query:
Select field from table where field in ('11', '22')
I need to do a LIKE search (not exact match but like match)
How
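Assuming the intent above, repeated LIKEs would look something like this (the '%' placement depends on whether a substring or prefix match is wanted):

```sql
-- The IN() list becomes a chain of LIKEs; '%' wildcards give
-- a "like match" instead of an exact match.
SELECT field
FROM table_name
WHERE field LIKE '%11%'
   OR field LIKE '%22%';
```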
you *could* go with if-statements, returning a numerical weight for each
criterion if match and 0 if not; summing those and sorting by the sum
column.
I would do it in code, though - it may or may not be less efficient, but
it'll be easier to maintain and read.
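The if-statement idea, sketched with invented car columns and weights:

```sql
-- Each IF() contributes its weight when the criterion matches, 0 when
-- it doesn't; sorting by the sum ranks rows by how well they match.
SELECT car_id,
       IF(colour = 'red', 10, 0)
     + IF(doors  = 4,      5, 0)
     + IF(year  >= 2005,   2, 0) AS score
FROM car
ORDER BY score DESC;
```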
On Tue, May 12, 2009 at 5:50 PM,
but doing it in code would require me to pull in the entire car
table
and process it. With potentially tons of rows, seems like I should be able
to use the db to get those.
On Tue, May 12, 2009 at 12:23 PM, Johan De Meersman vegiv...@tuxera.be
wrote:
you *could* go with if-statements, returning
*shrugs* I, for one, appreciate a tool that doesn't try to be smarter than I
am. If I want to be treated like an idiot, I'll use microsoft software.
On Wed, May 27, 2009 at 2:38 AM, Daevid Vincent dae...@daevid.com wrote:
I'm a little concerned and disappointed that the GRANT command doesn't
On Wed, May 27, 2009 at 2:05 AM, Daevid Vincent dae...@daevid.com wrote:
So why mySQL is putting back ticks in there even though I didn't,
Because it doesn't save your original statements, but recreates an
appropriate set from the grant tables.
and more importantly why doesn't the second
* Undelete the file
* Restore from backup
* Apply the infinite monkey theorem
On Wed, May 27, 2009 at 8:41 AM, Sebastien MORETTI
sebastien.more...@unil.ch wrote:
Hi,
Is there a way to recover data and/or database schema when ibdata1 file has
been removed (for InnoDB databases) ?
(MySQL
Mind sharing ? :-) I don't really have the issue, but it sounds like a
useful trick.
On Fri, Jun 5, 2009 at 11:44 AM, Moon's Father yueliangdao0...@gmail.com wrote:
Ok. I have already gotten the answer.
On Wed, Jun 3, 2009 at 4:15 PM, Moon's Father yueliangdao0...@gmail.com
wrote:
Hi.
Have a look at create table as select and the csv engine.
On Thu, Jun 25, 2009 at 4:18 PM, Lin Chun franks1...@gmail.com wrote:
Hi
In mysql, how can we store the query result in a text file?
the table is too large, I don't want to dump it all, but select a part of it;
I wrote a Perl script
To be honest, this sounds like more of a filesystem thing, given that you
only ever need to select the full set of an individual user. Just build up
an FS structure with one file per user.
On Wed, Jul 1, 2009 at 7:57 PM, Marcus Bointon mar...@synchromedia.co.uk wrote:
I need to log fairly large
On Fri, Jul 3, 2009 at 12:09 PM, Marcus Bointon
mar...@synchromedia.co.uk wrote:
On 3 Jul 2009, at 09:42, Johan De Meersman wrote:
To be honest, this sounds like more of a filesystem thing, given that you
only ever need to select the full set of an individual user. Just build up
an FS
Redesign. Do NOT store files in a database. The kind of database that's good
at storing files is more commonly known as a filesystem.
On Mon, Jul 13, 2009 at 2:08 PM, Daniele Development-ML
daniele@googlemail.com wrote:
Hello,
I'm developing a web application that requires to store large
On Fri, Jul 31, 2009 at 8:14 PM, Milan Andric mand...@gmail.com wrote:
Hello,
I'm serving a burly Drupal install and at some points throughout the
day the mysql threads go way up and iowait peaks. I'm not sure which
is causing which but during this time the server is unresponsive. I
would like
You can easily get around that behaviour by specifying 127.0.0.1, though :-)
On Tue, Aug 11, 2009 at 8:56 PM, Gavin Towey gto...@ffn.com wrote:
Andrew,
Yes it's true, because when you specify localhost, you're using the local
socket file. The port only has meaning for TCP connections.
You can't make the database, table or field names dynamic with
prepare, only values. Prepare allows you to cache the execution plan,
which wouldn't be valid on a different table.
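The usual workaround is to build the statement string first and prepare that; only the values stay as bound parameters. A sketch (table and column names invented):

```sql
-- Identifiers can't be bound as parameters, but the statement text
-- itself can be assembled dynamically before PREPARE.
SET @tbl := 'mytable';
SET @sql := CONCAT('SELECT COUNT(*) FROM `', @tbl, '` WHERE id = ?');
PREPARE stmt FROM @sql;
SET @id := 1;
EXECUTE stmt USING @id;
DEALLOCATE PREPARE stmt;
```

Note that each distinct statement string gets its own execution plan, so this loses the plan-reuse benefit across tables.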
On Mon, Sep 14, 2009 at 7:09 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi All,
I have a query about
On Thu, Sep 17, 2009 at 3:46 AM, John Meyer john.l.me...@gmail.com wrote:
Alternatively, you can skip the A_ID and have a compound key of USER_ID and
A_NUMBER on the ASSOC_NUMBERS table. I prefer the A_ID, though.
Note that this would be marginally faster, because all your data is in
the index,
Hey,
Currently, afaik, when you update a table, MySQL will flush all query
cache entries for that table. I've heard that work was ongoing to
change this behavior to only the affected rows, but I can't seem to
find much information on it.
Does anyone know the status of this ?
Thx,
Johan
--
You can do that at runtime no problem, you just need to do OS calls for it.
r...@soulcake-duck:/var/lib/mysql# mysqlshow
+--------------------+
| Databases          |
+--------------------+
| information_schema |
| mysql              |
+--------------------+
r...@soulcake-duck:/var/lib/mysql#
On Fri, Sep 18, 2009 at 12:54 PM, Claudio Nanni claudio.na...@gmail.com wrote:
Nice!
But are you sure that database is nowhere in the information schema?
didn't think of that :-) Just checked, and the table in remotedb appears and
vanishes in information_schema.tables as you create and remove
The 'recent' module in iptables allows you to automatically block IPs that
open more than x connections in y seconds. As long as the ddos doesn't
saturate your line, that'll help a lot.
On Thu, Sep 24, 2009 at 10:56 AM, Claudio Nanni claudio.na...@gmail.com wrote:
and in case it is feasible
Have a look at the Merge engine.
On Sun, Oct 18, 2009 at 10:31 PM, Allen Fowler allen.fow...@yahoo.com wrote:
Hello,
I have a Python application that is using MySQL to store records of
transactions about 3 tables with ~1k records each.
How can I periodically copy the records off the
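A hedged sketch of the Merge approach (table names and columns are invented; MERGE only works over identically-structured MyISAM tables):

```sql
-- Two underlying MyISAM tables plus a MERGE "view" over both.
CREATE TABLE trans_2009_09 (id INT, amount DECIMAL(10,2)) ENGINE=MyISAM;
CREATE TABLE trans_2009_10 (id INT, amount DECIMAL(10,2)) ENGINE=MyISAM;
CREATE TABLE trans_all (id INT, amount DECIMAL(10,2))
  ENGINE=MERGE
  UNION=(trans_2009_09, trans_2009_10)
  INSERT_METHOD=LAST;
```

Archiving then becomes a matter of adding a fresh table and redefining the UNION list, rather than copying rows around.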
Something in the ilk of
update table set field = concat('prefix_', field) where condition
should do the trick.
On Mon, Oct 19, 2009 at 4:56 PM, lejeczek pelj...@yahoo.co.uk wrote:
dear all, a novice here
quickie regarding query syntax - is it possible to take fields values from
one
Well, it's not exactly a clean approach, but if those are the limits you
have to work within, so be it :-)
I suspect you could gain some more performance by using MySQL's multiple-row
insert syntax. It's roughly like this:
insert into table (field, field, field) values (value, value,
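The full shape of a multiple-row insert, with invented names:

```sql
-- One statement, several rows: fewer round-trips and less
-- per-statement overhead than separate INSERTs.
INSERT INTO mytable (col_a, col_b, col_c) VALUES
  (1, 'x', NOW()),
  (2, 'y', NOW()),
  (3, 'z', NOW());
```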
Signal 15 is pretty much equal to a regular shutdown, except that if your
shutdown script doesn't run, you may be left with lockfiles, pidfiles and
the like.
A crash would most likely be visible in the logfile, and even if it isn't
(machine loses power), your log should show innodb running a
Presumably because you are removing 189 gigabytes of data and 549 gigabytes of
indexes, all of which need to be marked as deleted in your innodb file. I/O
is rather expensive :-)
On MyISAM this would have been close to instantaneous (as you probably
expected), because the datafile is used only for
I think triggers are your only option, here.
On 11/9/09, Sebastiaan van Erk sebs...@sebster.com wrote:
Hi,
I have the following model:
Domain (*)-(1) Account (*)-(1) User
That is, each user belongs to exactly 1 account, an account can have
multiple users; each account belongs to a single
dykman
On Mon, Nov 9, 2009 at 3:41 AM, Johan De Meersman vegiv...@tuxera.be
wrote:
Presumably because you are removing 189 gigabytes of data and 549 gigabytes of
indexes, all of which need to be marked as deleted in your innodb file. I/O
is rather expensive :-)
On MyISAM
Your mail suggests that you *are* seeing a difference, though. What
are you seeing?
On 11/11/09, Martijn Tonies m.ton...@upscene.com wrote:
You have stumbled across the secret. No, there is no difference at
all as the calculations suggested here confirm.
Aware of that, but char brings you certain performance benefits that
are good to know about if you need every last drop of juice :-)
On 11/11/09, Martijn Tonies m.ton...@upscene.com wrote:
Your mail suggests that you *are* seeing a difference, though. What
are you seeing?
What I was saying,
The query cache lays at the server level, above individual engine
implementations, and thus affects all of them.
It (case-sensitively!) compares the current query with the queries in the
query cache, and if there's an EXACT match (including all parameter
positions, wheres, and whatnot), it
Hey all,
I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
workload based on standard Mysql full query log files. The idea is to verify
performance of real production loads on various database setups.
Does anyone know of such a tool, free or paying ?
Thx,
Johan
On Fri, Nov 13, 2009 at 22:33, Johan De Meersman vegiv...@tuxera.be
wrote:
Hey all,
I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
workload based on standard Mysql full query log files. The idea is to verify
performance of real production loads on various
genlog
Cheers,
Ewen
On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman vegiv...@tuxera.be
wrote:
Hey all,
I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
workload based on standard Mysql full query log files. The idea is to verify
performance of real
File permissions ? SE Linux ? AppArmor ?
On Mon, Nov 16, 2009 at 7:48 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Thanks Shawn for the quick response.
But then What I am doing is I am doing copy paste of one database and
rename it to another. but I cannot read the tables
You don't even need to stop the server afaik. As mentioned previously,
though, works for MyISAM only.
On Mon, Nov 23, 2009 at 3:17 PM, Robinson, Eric eric.robin...@psmnv.com wrote:
RENAME TABLE
olddb.table1 TO newdb.table1,
olddb.table2 TO newdb.table2
put the whole list in here, the
MySQL is already a multithreaded process, even though you only see a single
process. Note that it doesn't scale very well above eight or so cores,
especially InnoDB iirc.
If you are wondering about parallel query execution (that is, splitting a
single query over multiple cores for faster
the command 'top -H' will show you the individual threads with their CPU
use, but I'm afraid I don't know how to link that up with a MySQL 'show
processlist'.
On Tue, Nov 24, 2009 at 12:22 PM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi Johan,
Thanks for the quick response.
The amount and type of data is less of an issue than the amount and type of
queries :-) The machine you've described should be able to handle quite a
bit of load, though, if well-tuned.
On Tue, Nov 24, 2009 at 4:45 PM, Manish Ranjan (Stigasoft)
manish.ran...@stigasoft.com wrote:
Hi,
I am
- first to get the
count using search conditions and then to get data from some columns based
on those search conditions. The fields will be indexed as per search
requirements.
_
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Tuesday
of log to --split (default slowlog). The permitted types are
binlog
Split a binary log file.
slowlog
Split a log file in any variation of MySQL slow-log format.
Cheers,
Ewen
On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman vegiv...@tuxera.be
wrote:
Ewen,
Do
You don't need to move any databases. Look at this structure:
/data/disk1/mysql/db1 (directory)
/db2 (directory)
/db3 (directory)
/db4 (symlink to /data/disk2/mysql/db4)
/db5 (symlink to /data/disk2/mysql/db5)
On Wed, Nov 25, 2009 at 11:55 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi Johan,
I am sorry if I have complicated the scenario, but this still does not fix my
purpose.
What I want is - From your example :-
/data/disk1/mysql/db1 (directory)
/db2 (directory)
On Wed, Nov 25, 2009 at 12:05 PM, Krishna Chandra Prajapati
prajapat...@gmail.com wrote:
At a time mysql can point to one data directory. For your task you can have
n number of mysql installation with different data directory. After that
you
can use federated storage engine to perform your
Decidedly RAID 10 - that is, a stripe of mirrors. You get a lot more
redundancy that way.
Raid 01:
[A+B+C+D]
[E+F+G+H]
Here, a single disk lost in each stripe (so two in total) loses you the
set. Probability is 1/8 * 4/7 (0.0714).
Raid 10:
[A] [B] [C] [D]
[E] [F] [G] [H]
Here,
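Conditioning on one disk already having failed, the comparison can be sketched as follows (this is not the same arithmetic as the 1/8 × 4/7 figure above, just the conditional second-failure risk):

```latex
% RAID 01: any of the 4 disks in the surviving stripe is fatal
P(\text{2nd failure fatal} \mid \text{RAID 01}) = \frac{4}{7} \approx 0.571
% RAID 10: only the failed disk's mirror partner is fatal
P(\text{2nd failure fatal} \mid \text{RAID 10}) = \frac{1}{7} \approx 0.143
```

Either way you slice the numbers, the stripe-of-mirrors layout tolerates a random second failure far better, which is the usual argument for RAID 10.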
I'm not really clear on what you hope to accomplish by putting two virtual
servers on the same host ?
On Wed, Nov 25, 2009 at 1:35 PM, Götz Reinicke - IT-Koordinator
goetz.reini...@filmakademie.de wrote:
majk.sko...@eventim.de wrote:
-----Original Message-----
From: Götz Reinicke
On Wed, Nov 25, 2009 at 3:42 PM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Dear Johan,
Need your help again in understanding How mysql reads symlink.
As you said below, I have created symlinks in default mysql directory.
and try to read that symlink file as a database. But
They're not data modification statements, so no, they're not replicated.
On Sun, Nov 29, 2009 at 2:06 AM, Tom Worster f...@thefsb.org wrote:
Are SQL statements like CACHE INDEX or LOAD INDEX INTO CACHE replicated?
If so, is there a way to prevent that replication?
If a slave mysqld
Have you considered Reading The *Fine* Manual at
http://dev.mysql.com/doc/#manual ?
On Sat, Dec 5, 2009 at 4:59 PM, Victor Subervi victorsube...@gmail.com wrote:
Hi;
I remember vaguely how to do this but don't know how to google it:
show tables like categories$;
such that it will return
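For what it's worth, SHOW TABLES LIKE takes an SQL pattern, not a regex, so the '$' anchor won't work. Something like this is probably what was meant:

```sql
-- '%' is the SQL wildcard, and the pattern must be quoted.
SHOW TABLES LIKE 'categories%';
```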
The FEDERATED engine is what you're looking for, but it probably doesn't
quite do what you expect. Full documentation is at
http://dev.mysql.com/doc/refman/5.0/en/federated-storage-engine.html .
The major gotcha - in my opinion - is that it doesn't actually use indices.
If you only need to run
Posted this before, but beware: federated tables do NOT use indices. Every
select is a full table scan, and if you're talking about a logging table
that could become very expensive very fast.
On Wed, Dec 9, 2009 at 4:13 PM, Neil Aggarwal n...@jammconsulting.com wrote:
Is this possible to do?
This only works for MyISAM :-)
However, there's another solution where you don't need to shut down, and
that works for any engine afaik:
rename table oldschema.table to newschema.table;
I agree that it's a silly thing to not have, but I can't say that I've
encountered a whole lot of instances
On Fri, Dec 11, 2009 at 11:19 AM, Mark Goodge m...@good-stuff.co.uk wrote:
Jørn Dahl-Stamnes wrote:
On Friday 11 December 2009 10:38, Victor Subervi wrote:
Hi;
mysql update products set sizes=('Small', 'Large') where ID=0;
Query OK, 0 rows affected, 1 warning (0.00 sec)
Rows matched: 1
On Fri, Dec 11, 2009 at 1:56 PM, Ken D'Ambrosio k...@jots.org wrote:
rename table oldschema.table to newschema.table;
Just to be 100% clear -- I assume you have to first create the destination
database, and then do this for all the tables in the source database?
Yep. Easily scriptable,
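One way to script it, sketched against information_schema (schema names are placeholders):

```sql
-- Emits one RENAME TABLE statement per table in the source schema,
-- ready to paste back into the client (or pipe through mysql -N).
SELECT CONCAT('RENAME TABLE oldschema.`', table_name,
              '` TO newschema.`', table_name, '`;')
FROM information_schema.tables
WHERE table_schema = 'oldschema';
```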
On Fri, Dec 11, 2009 at 6:40 PM, Victor Subervi victorsube...@gmail.com wrote:
I'm lost. I set up this database originally with auto_increment and the
first value was 0. I thought that was always the case. Is there a problem
here?
Yes, that should not have happened. For autoincrement fields,
On Fri, Dec 11, 2009 at 10:40 PM, Daevid Vincent dae...@daevid.com wrote:
Will this work in 5.0?
Yes.
If I'm reading this right, it seems like this is some kind of trick or
loophole then right? If it works and solves my dilemma, I'm fine with that,
but I'm just curious.
Not really, this
Gods. What is this, a crèche ?
*plonk*
On Sun, Dec 13, 2009 at 6:44 PM, Victor Subervi victorsube...@gmail.com wrote:
On Sun, Dec 13, 2009 at 12:21 PM, Pinter Tibor tib...@tibyke.hu wrote:
Victor Subervi wrote:
Hi;
mysql insert into *tem126072414516* (ProdID, Quantity) values (2,
Heh. You'll need to just create an empty .frm file, and then issue the drop
table statement.
On Mon, Dec 14, 2009 at 6:11 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi All,
I really don't know how to elaborate this problem because it is quite
strange-
I have deleted .frm
I don't think there's an actual problem as such, the optimiser is just
making a decision to merge the lastname and firstname indices for the second
query. At a guess, I'd say that the cardinality of clark in your lastname
index is too high, so it uses both; the cardinality for clarke will be
--
Regards,
Manasi Save
Quoting Johan De Meersman vegiv...@tuxera.be:
Heh. You'll need to just create an empty .frm file, and then issue the drop
table statement.
On Mon, Dec 14, 2009 at 6:11 AM, Manasi Save
manasi.s...@artificialmachines.com wrote:
Hi All,
I really don't know how
...@stigasoft.com wrote:
Thanks Johan.
More to add here: execution time for the query with 'clarke' is much higher
than for the query with 'clark'. This is why the behavior looks a bit strange.
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Monday, December 14
Presumably someone deleted it :-) You can try an 'lsof -U |grep mysql' to
see if any processes still have it open. If the mysqld process still has it
open, it's probably not the server.
Why do you keep it in /tmp, btw ? My guess would be that someone (or some
process) decided it was time to clean
If the aim is purely to find the progs without events, it might be more
efficient to use something like
select * from progs where not exists (select id_prog from events where
id_prog = progs.id_prog);
My syntax might be off, check the NOT EXISTS documentation for more info.
On Tue, Dec 15, 2009 at
I think what he's trying to accomplish is not truly random, but to use the
probability that's indicated in the second field of the table:
1, Aaron, 0.240
3, Abe, 0.006
13, Adrian, 0.069
So there would be a probability of 0.240 that the call returns Aaron, a
probability of 0.006 that it
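One common trick for that kind of weighted pick (my suggestion, not something from the thread; assumes the weight column is positive) is exponential weighting:

```sql
-- Each row's chance of sorting first is proportional to its
-- probability column, so LIMIT 1 draws a weighted random row.
SELECT id, name
FROM names
ORDER BY -LOG(RAND()) / probability
LIMIT 1;
```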
Try backticks.
On Wed, Jan 13, 2010 at 2:44 PM, Intell! Soft h.ba...@intelli-soft.at wrote:
Hey
Nobody any ideas?
-----Original Message-----
From: Intell! Soft [mailto:intellis...@fachoptiker.net]
Posted: Friday, 08 January 2010 11:06
Posted in:
De Meersman vegiv...@tuxera.be
wrote:
Try backticks.
On Wed, Jan 13, 2010 at 2:44 PM, Intell! Soft h.ba...@intelli-soft.at
wrote:
Hey
Nobody any ideas?
-Ursprüngliche Nachricht-
Von: Intell! Soft [mailto:intellis...@fachoptiker.net]
Bereitgestellt
in: gmane.comp.db.mysql.general
Unterhaltung: Show Tables not working
Betreff: Re: Show Tables not working
I don't think my design is valid.
There should not be any SPACE in a db name.
On Wed, Jan 13, 2010 at 7:26 PM, Johan De Meersman
vegiv...@tuxera.be wrote:
Try backticks.
On Wed, Jan 13, 2010 at 2:44 PM
You *should* be using a package manager (perfectly fine RPMs available for
all your needs), but if you must do this, it's a reasonably safe bet to
right-align and zero-pad all your numbers to 4 digits, at which point you're
free to concatenate them and treat them as a single number.
14.12.5.0.19
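A sketch of the padding trick applied to that version string:

```sql
-- '14.12.5.0.19' becomes one fixed-width string; padded components
-- compare correctly as plain strings (or as one big number).
SELECT CONCAT(LPAD('14', 4, '0'), LPAD('12', 4, '0'), LPAD('5', 4, '0'),
              LPAD('0', 4, '0'),  LPAD('19', 4, '0')) AS sortable_version;
-- yields '00140012000500000019'
```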
no RedHat/CentOS man, but if there's also init.1 through init.6,
it should be pretty safe to ignore those - the scripts in there should be
symlinks to the init.d ones.
-Original Message-
From: Johan De Meersman [mailto:vegiv...@tuxera.be]
Sent: Thursday, January 14, 2010 09:44 AM
The best way is to keep track of all individual changes to your
staging environment, including fire-and-forget style scripts; and
apply those to your production environment as needed. This is part of
the process of change management, and generally a very good idea :-)
Lacking that, there are
between
two live databases. I've used it before to get a DEV and PROD server in
sync, so that I could then implement the above methodology.
-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On
Behalf Of Johan De Meersman
Sent: Thursday, January 21, 2010 10:35
More precisely, you'll be better off always using explicit joins - that way
you not only prevent this, but also unexpected problems and the need to
review code when something happens to your DB schema.
On Fri, Jan 22, 2010 at 6:42 PM, Scott Swaim sc...@tmcclinic.com wrote:
I ran into this when
if you will.
ÐÆ5ÏÐ
*Light travels faster than sound. This is why some people appear bright
until you hear them speak.*
--
*From:* vegiv...@gmail.com [mailto:vegiv...@gmail.com] *On Behalf Of *Johan
De Meersman
*Sent:* Friday, January 22, 2010 1:06 AM
On Mon, Jan 25, 2010 at 10:08 PM, Yong Lee yong@gogoants.com wrote:
yah, mysql only allows one auto increment field n that's used as the
primary key in tables. I don't think it has to be the primary key as
long as it is a unique key i think that's okay.
so u should be able to do :
On Tue, Jan 26, 2010 at 12:21 AM, Daevid Vincent dae...@daevid.com wrote:
* at
* /etc/crontab
* /var/spool/crontabs/root
* /var/spool/crontabs/joeblow
* /etc/cron.d/
* /etc/cron.daily/
* /etc/cron.hourly/
* /etc/cron.monthly/
That entire list represents exactly two tools: at
On Tue, Jan 26, 2010 at 10:43 PM, Daevid Vincent dae...@daevid.com wrote:
12+ notebooks, netbooks, desktops, etc. But for any LAMP dev work, I either
have a dedicated Linux box setup or a Virtual Machine.
Well... It wouldn't be *Linux*-Apache-Mysql-PHP otherwise, would it :-)
--
Bier met
On Wed, Jan 27, 2010 at 5:52 PM, Jerry Schwartz
jschwa...@the-infoshop.com wrote:
[JS] I second this. Instead of using a LAMP development environment, I went
with WAMP -- even though our production environment was LAMP.
Generally a bad idea - you keep running into annoying minor differences
On Wed, Jan 27, 2010 at 6:10 PM, Dan Nelson dnel...@allantgroup.com wrote:
WWJD - Windows, Websphere, Java, Derby? :)
You are an evil, *evil* little man. I just spilled juice all over my
keyboard.
--
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't
For MyISAM that's not a problem (with some caveats if you wanna do it
online). For InnoDB you can't do that, but you can create a new database,
and rename from olddb.table to newdb.table.
On Thu, Jan 28, 2010 at 10:34 AM, Cui Shijun rancp...@gmail.com wrote:
hi,
Is there an easier way to
Yes: YMMV. Caveat emptor. Don't switch to a product you don't know.
If you need nothing that MySQL doesn't offer, it may be a good fit for you.
If you need features that it doesn't offer, it may not be a good fit for
you. News at eleven.
On Thu, Jan 28, 2010 at 11:21 AM, changuno
Given that you're talking about quite a few folders, I think it'd be good
to check up on the theoretical and practical limits for your filesystem of
choice, though - ext2 for example starts getting noticeably slower when you
have a lot more than 10.000 entries in the same directory. One way of
On Fri, Jan 29, 2010 at 9:52 AM, Cui Shijun rancp...@gmail.com wrote:
I'm using innodb engine. There are more and more rows in my table
recently, and
I've planned to extend the structure.
Why do you want to split your table ? What problems are you trying to solve
?
--
Bier met grenadyn
in a suitable range. This is certainly not a way to solve the problem
completely, but might be a way that works before I figure out what had
happened to mysql.
2010/1/29 Johan De Meersman vegiv...@tuxera.be:
On Fri, Jan 29, 2010 at 9:52 AM, Cui Shijun rancp...@gmail.com
wrote:
I'm using
On Mon, Feb 1, 2010 at 2:21 PM, Cui Shijun rancp...@gmail.com wrote:
To minimize the problem with multiple client sessions having
different states on the same table, the table is opened independently
by each concurrent session.
That literally means that every time a table gets referenced, it