Hi List,
I would appreciate your help on the following.
When using LOAD DATA INFILE 'inputfile.txt' into a MyISAM table,
it creates mysql-bin.nn files under my database directory
with the size of 'inputfile.txt' (about 200 MB).
Since I have to load 12 inputfiles, I get about 2.5 GB of binary logs.
I think this is normal, as the binary log contains a record of all
changes made to the data; so if you are loading large files
regularly, the bin logs will be quite large. If you do not want
binary logging, edit the my.cnf file, comment out the line log-bin
(#log-bin) and restart the server.
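For example — a minimal sketch, assuming a stock my.cnf with logging enabled:

```ini
[mysqld]
# comment out this line to disable the binary log, then restart mysqld:
# log-bin
```

If you only want to skip logging for the load itself rather than disable it server-wide, a session with the SUPER privilege can run SET sql_log_bin=0; before the LOAD DATA INFILE.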
Thanks Adrian, Dilipkumar, Dhandapani,
I changed my.ini file, restarted the server and now it's okay.
Regards, Cor
- Original Message -
From: Adrian Bruce [EMAIL PROTECTED]
To: C.R.Vegelin [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Thursday, March 30, 2006 9:48 AM
Subject: Re:
I submitted this yesterday and was not sure whether it got out
to folks. How would I put an expiration date on a MySQL field so that
it would match a RADIUS entry?
Also, is there a way that I can call up a web-based screen and have all
the information at my fingertips for inputting user
Sheeri is correct. Rich's statement should have worked. What Rich is
looking for is the syntax for doing what the manual calls extended
inserts.
quoting TFM (http://dev.mysql.com/doc/refman/5.0/en/insert.html)
INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE]
[INTO] tbl_name
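The manual's point, sketched with a hypothetical table — one INSERT statement carrying several VALUES lists:

```sql
-- one statement, multiple rows (the "extended insert" form);
-- table and column names here are made up for illustration
INSERT INTO tbl_name (col_a, col_b) VALUES
  (1, 'first'),
  (2, 'second'),
  (3, 'third');
```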
We got the question. However, what you ask isn't purely a database issue.
How does your authentication program (freeradius?) handle expiration
dates? If it doesn't, then adding those to the database won't help a bit.
If it does, then there should already be a date column (or two) in the
I'm using the full-text function to run searches on a table. All seems to be
working fine, except that I'm looking for ways to fine-tune the relevance ranking. I
found out that the function ranks a row higher by the number of keywords
found in it, and not by whether all the keywords are present in the row...
for
Ravi Prasad LR wrote:
Luke,
Yup. Basically if I do a particular query from the command line,
I get the following error:
===
InnoDB: Error: tried to read 16384 bytes at offset 1 3469819904.
InnoDB: Was only able to read -1.
060327 8:25:41 InnoDB: Operating
We're having some serious problems with concurrent queries.
This is a dual-processor amd64 machine with 16GB RAM, running NetBSD
and MySQL 4.0.25. key_buffer_size is 3GB.
When I have a long running query going, otherwise short queries take
a very very long time to execute. For example, I have
Can you post the output of SHOW FULL PROCESSLIST during the time when
both sets of queries are running?
Also what storage engine are you using for your tables?
Chris Kantarjiev wrote:
We're having some serious problems with concurrent queries.
This is a dual-processor amd64 machine with 16GB
Can you post the output of SHOW FULL PROCESSLIST during the time when
both sets of queries are running?
mysql> show full processlist;
Chris Kantarjiev wrote:
Can you post the output of SHOW FULL PROCESSLIST during the time when
both sets of queries are running?
That throws out my first theory about table locks.
What do vmstat and top say? Is it CPU bound? I/O bound?
Also you might want to do a show status before and
That throws out my first theory about table locks.
That's what I thought, too.
What do vmstat and top say? Is it CPU bound? I/O bound?
Certainly not CPU bound. Maybe I/O bound, not conclusive. My current
theory is that there is some thrashing on key buffer blocks.
Also you might want to
Is there any way to make this the default behaviour? I did a Google
search, and it was suggested I put the following line in /etc/my.cnf:
[mysqld]
init_connect='set autocommit=0'
This works fine, but I worry that this will affect all incoming
connections regardless of whether or not they are
I've confirmed that this does affect ALL incoming connections.
On 3/30/06, patrick [EMAIL PROTECTED] wrote:
Is there any way to make this the default behaviour? I did a Google
search, and it was suggested I put the following line in /etc/my.cnf:
[mysqld]
init_connect='set autocommit=0'
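A narrower alternative, if the worry is scope: standard MySQL syntax, issued per session by only the clients that actually need it, instead of server-wide via init_connect:

```sql
-- disable autocommit for the current connection only
SET autocommit = 0;
-- ... run statements ...
COMMIT;  -- changes are not visible to other sessions until committed
```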
It doesn't really answer your question, but have you tried INSERT
DELAYED as a workaround?
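A sketch of that workaround — table and column names here are hypothetical:

```sql
-- INSERT DELAYED queues the row server-side and returns immediately;
-- it only works on MyISAM/MEMORY tables in this MySQL version
INSERT DELAYED INTO log_table (user_id, note) VALUES (42, 'example');
```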
Also, the updated status is strange, because that generally indicates
it's looking for the record to be updated; but since the record is
new, there is no record to be updated. Could it be checking
I think I've seen this complaint posted before, but I ignored it; now I
realize that in some of my db tables the last_updated field is
automatically updating on UPDATEs to records, while in other tables the
last_updated fields for some strange reason aren't automatically updating.
I'll
I think I've seen this complaint posted before but I ignored but now I
realize that in some of my db tables' last_updated field the value is
automatically updating on UPDATEs to records while in other tables the
last_updated fields for some strange reason aren't automatically updating.
I'll
do you have two timestamp fields in the table (i.e. a created and a
last_updated)?
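If so, that would explain it: in MySQL of this era (before 5.6), only one TIMESTAMP column per table can use CURRENT_TIMESTAMP for its default or on-update value. A sketch with a hypothetical table:

```sql
-- only the column given ON UPDATE CURRENT_TIMESTAMP auto-updates;
-- pre-5.6 MySQL allows CURRENT_TIMESTAMP on just one TIMESTAMP column,
-- so 'created' is set explicitly at insert time instead
CREATE TABLE example (
  id INT NOT NULL PRIMARY KEY,
  created DATETIME,                 -- populate with NOW() on INSERT
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
               ON UPDATE CURRENT_TIMESTAMP
);
```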
-j
On Mar 30, 2006, at 5:17 PM, Ferindo Middleton Jr wrote:
I think I've seen this complaint posted before but I ignored but
now I realize that in some of my db tables' last_updated field the
value is
jonathan wrote:
are you having two timestamp fields in a table (ie a created and a
last_updated)?
-j
On Mar 30, 2006, at 5:17 PM, Ferindo Middleton Jr wrote:
I think I've seen this complaint posted before but I ignored but now
I realize that in some of my db tables' last_updated field the
Hi there. Any quick way of killing duplicate records?
Cheers
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Mike Wexler wrote:
It doesn't really answer your question, but have you tried INSERT
DELAYED as a work around?
We've not had a lot of luck with this in the past, but it's worth a try.
Also, the updated status is strange, because that generally indicates
it's looking for the record to be
I have about 25 databases with the same structure and occasionally need
to update the table structure. For example, I recently found a mistake
in a field that was of type SET and needed to be VARCHAR. I will now
need to edit each table. Is there an easy method to alter table
structure
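One approach is to have MySQL generate the statements for you from information_schema (MySQL 5.0+); the table and column names below are placeholders:

```sql
-- emit one ALTER statement per database that contains the table;
-- 'mytable' and the column change are hypothetical
SELECT CONCAT('ALTER TABLE `', table_schema,
              '`.`mytable` MODIFY mycol VARCHAR(255);') AS stmt
FROM information_schema.tables
WHERE table_name = 'mytable';
```

Run the client with -N -B, save the output to a file, and source it back in; a shell loop over the database names works just as well.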
Rich wrote:
Hi there. Any quick way of killing duplicate records?
Cheers
Subqueries probably.
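For example, a common MySQL idiom, assuming the table has a surrogate key — all names here are hypothetical:

```sql
-- delete every row whose 'name' duplicates a row with a smaller id,
-- keeping the earliest copy of each (multi-table DELETE, MySQL 4.0+)
DELETE t1 FROM mytable t1
  JOIN mytable t2
    ON t1.name = t2.name
   AND t1.id > t2.id;
```

Adding a UNIQUE index on the column afterwards keeps new duplicates out.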
--
Smileys rule (cX.x)C --o(^_^o)
Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)
You ought to use *Boolean full-text search*.
You would then do a:
SELECT title, Comment FROM table_name WHERE MATCH (Comment) AGAINST ('+foo
+bar' IN BOOLEAN MODE);
This way the rows that contain both words have higher relevance... those
that have only one... will have lower relevance.
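Note that for this to use an index, the table needs a FULLTEXT index on the searched column (MyISAM tables only in this MySQL version); a sketch reusing the names from the query above:

```sql
-- boolean-mode MATCH can run without an index, but it will scan
-- the whole table; a FULLTEXT index makes it fast (MyISAM only here)
ALTER TABLE table_name ADD FULLTEXT (Comment);
```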
Or
Subqueries will help you.
--Praj
On Thu, 30 Mar 2006 21:11:56 -0500
Rich [EMAIL PROTECTED] wrote:
Hi there. Any quick way of killing duplicate records?
Cheers
Is that query the problem?
Then turn on your slow query log and try optimizing those slow queries.
Post your queries and table descriptions for further help :)
--Praj
On Wed, 29 Mar 2006 12:33:20 -0500
Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:
After 23 days of running mysql, I have