Re: Data security - help required

2007-05-15 Thread Brent Baisley
The only way to keep the data secure so that ONLY the user can see it is to have the user come up with a pass phrase that is used to encrypt the data. That pass phrase should not be stored in the database or on any of your systems. For them to see the data, they need to enter the proper pass phrase.
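
A minimal sketch of that approach using MySQL's AES functions, assuming a hypothetical `secrets` table; the pass phrase is supplied by the user at query time and never stored:

```sql
-- hypothetical table; the pass phrase never touches the server's storage
CREATE TABLE secrets (user_id INT, payload BLOB);

-- encrypt on the way in, keyed by the user-supplied pass phrase
INSERT INTO secrets VALUES (42, AES_ENCRYPT('private data', 'user pass phrase'));

-- decrypt only when the user supplies the same pass phrase again
SELECT AES_DECRYPT(payload, 'user pass phrase') FROM secrets WHERE user_id = 42;
```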

Re: finding next and prev record in mysql

2007-05-11 Thread Brent Baisley
You actually do want to use the LIMIT clause, but with two parameters. The first number is the zero-based offset of the record you want to start at, the second is how many records you want. So LIMIT 1,1 will give you the second record. It's the same query every time; you're just incrementing the first number of the LIMIT
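
As a sketch, assuming a hypothetical `items` table, stepping through one record at a time (the offset is zero-based):

```sql
SELECT * FROM items ORDER BY id LIMIT 0,1;  -- first record
SELECT * FROM items ORDER BY id LIMIT 1,1;  -- second record
SELECT * FROM items ORDER BY id LIMIT 2,1;  -- third record
```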

Re: mysql connections - how to stop flooding?

2007-04-30 Thread Brent Baisley
My first guess would be that you have a MyISAM table that gets hit with a long running query which locks the table. When that happens, all other queries start queueing up and connections will rise as new queries come in. Once the long running query finishes, all the other queued queries run and

Re: FW: MySQL patches from Google

2007-04-27 Thread Brent Baisley
I've read a few technical papers on how large companies handle massive amounts of data (e.g. search engines, phone companies). Really it comes down to the fact that no general-purpose database (including Oracle) is strong enough to handle the amount of data involved. Which is why phone companies, Google, etc

Re: Problem on millions of records in one table?

2007-04-18 Thread Brent Baisley
It all depends on how complicated your data and searches are. I've got tables that add 2-3 million records per day and I don't have performance problems. Although we only retain at most 500 million records, not a full year's worth. That said, you can get horrible performance out of MySQL with tables as

Re: Why Is Autonumber Not Working With Following Query

2007-04-13 Thread Brent Baisley
You probably should specify the fields for the insert. MySQL is probably inserting into the ID field because you didn't specify which fields to use for inclusion. INSERT INTO jobs (field1,field2,...) (SELECT...)

Re: Can I create a index on a column of Type datetime?

2007-04-12 Thread Brent Baisley
, create_date and create_time. Then I can use just the create_date as part of a compound index, which wouldn't be unique.

Re: Query needed for this sol.

2007-04-09 Thread Brent Baisley
Not sure how to do it in a single query, but if you run this query repeatedly, eventually you'll have only 10 left of each category. DELETE tableName FROM tableName, (SELECT MAX(ID) deleteID, COUNT(ID) categCount FROM tableName GROUP BY Category HAVING categCount > 10) AS Smry WHERE

Re: Tricky Sorting

2007-04-02 Thread Brent Baisley
That should be fairly easy. What you are looking to do is have your sort field conditional. The ORDER BY doesn't have to specify a database field, it can be any field in your query. So you could do something like this: SELECT IF( issuemonth IS NULL, issuenum, issuedate) AS sortfield, issuedate,
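
A completed sketch of that idea, assuming hypothetical `issues` table and column names:

```sql
-- when issuemonth is missing, fall back to the issue number for sorting
SELECT IF(issuemonth IS NULL, issuenum, issuedate) AS sortfield,
       issuedate, issuenum
FROM issues
ORDER BY sortfield;
```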

Re: Querying large table

2007-03-30 Thread Brent Baisley
It's taking a long time because your filter is external to the table, so you can't use an index. You want to focus your search on the category table, where you can quickly narrow down the number of records to search. SELECT ctg, count(itemid) FROM categories JOIN items ON ctgID=itemCtgID WHERE

Re: a 'safe' way to move data?

2007-03-30 Thread Brent Baisley
No, you can't assure the same data will be addressed without at least including an order by. Even then you would need to make sure that the first X records in the order would not change. For instance, if you order by entered_date DESC, then the data set would change because any new records would

Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Brent Baisley
You can try changing the myisam_data_pointer_size setting, but the max_rows/avg_row_length should have worked. Unless it's your OS restricting the mysql user to a 4GB file size. You can use ulimit or getrlimit to determine OS limits for a user. If a user hits a limit imposed by the OS, I think

Re: Database creation question

2007-03-21 Thread Brent Baisley
You might try issuing a FLUSH PRIVILEGES command instead of restarting. This is supposed to be implicit when you use the GRANT statement. If you do a direct insert into the user table you have to issue this command.
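
For example (the mysql.user column layout varies by version, so treat this as a sketch):

```sql
-- after editing the grant tables directly...
INSERT INTO mysql.user (Host, User, Password)
VALUES ('localhost', 'denise', PASSWORD('secret'));

-- ...reload them; implicit with GRANT, required after direct inserts
FLUSH PRIVILEGES;
```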

Re: max_rows query + SegFaulting at inopportune times

2007-03-15 Thread Brent Baisley
You probably did not change the max_rows setting when you created the table. If you read the manual under AVG_ROW_LENGTH for create table it says: When you create a MyISAM table, MySQL uses the product of the MAX_ROWS and AVG_ROW_LENGTH options to decide how big the resulting table is. If you do

Re: Finding a Solution To A Distinct Problem of Mine

2007-03-12 Thread Brent Baisley
If you are looking for the latest created_at date, then you want to be grabbing the max value of that field. SELECT DISTINCT from_user_id, max(created_at) FROM messages WHERE to_user_id = 1 GROUP BY from_user_id; In your original query I think you meant to select from, not to, since to will

Re: Function in Query has Terrible Impact on Indexes Used

2007-03-12 Thread Brent Baisley
The SYSDATE() function is not evaluated once, it's evaluated for each record. An index can't be used because the value may change at any time. Just as if you were using RAND() instead, it can't be preparsed. You can put the result of SYSDATE() into a variable and then use the variable in the
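
A sketch of that workaround, assuming a hypothetical `log` table with an index on `created_at`:

```sql
SET @now = SYSDATE();       -- evaluated once, result frozen in the variable
SELECT * FROM log
WHERE created_at < @now;    -- constant comparison value, so the index can be used
```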

Re: INSERT ... SELECT Challenge

2007-03-12 Thread Brent Baisley
Skip the whole SELECT part and create a unique index on the fields you want unique (AttributeID, Attribute_Value). Then just do an INSERT IGNORE. The index will prevent a new non-unique row from being entered and the IGNORE will prevent an error.
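
A sketch against hypothetical `attribs` and `staging` tables:

```sql
ALTER TABLE attribs ADD UNIQUE KEY uniq_attr (AttributeID, Attribute_Value);

-- duplicate rows are silently skipped instead of raising an error
INSERT IGNORE INTO attribs (AttributeID, Attribute_Value)
SELECT AttributeID, Attribute_Value FROM staging;
```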

Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley
Enabling journaling isn't going to halve your performance. Remember, a journal is a record of what happened. It is only added to, not updated, so while there is overhead, performance is fairly good. ext3 also has a few different journaling options. Journaling is mainly for quick recovery and

Re: Diagnosing i/o thrashing

2007-03-09 Thread Brent Baisley
When you say 70% iowait are you referring to vmstat results? There are a lot of things that can be causing iowait, the most obvious being the disks are busy. In which case giving MySQL more memory won't really help, unless it's something that can be solved with caching. What does your context

Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley
Yes, the article did lack a lot of methodology information. Since you already have things separated and it seems your data isn't critical, meaning losing it would be a pain as opposed to life threatening, you can turn off journaling for those file systems. You may actually want to look at

Re: what journal options should I use on linux?

2007-03-09 Thread Brent Baisley
Would using merge tables be an option? You would have to split your data, but then each individual table could be within MySQL memory limits. Divide and conquer. You would need a way to work around your insert ignore requirement. But the end result could be much faster. The solution I use for

Re: running sum with a @variable

2007-02-06 Thread Brent Baisley
Have you tried using the WITH ROLLUP option after the GROUP BY clause? It seems to me that might give you something close to what you are looking for.

Re: How find duplicate entries

2007-02-06 Thread Brent Baisley
That actually should be HAVING ( NumOfFoos > 1 ) in the query: SELECT foobar, COUNT(foobar) AS NumOfFoos FROM bar GROUP BY foobar

Re: select on multiple fields in several tables?

2007-02-05 Thread Brent Baisley
You can also use the following syntax if you want to pull all the fields from only certain tables: SELECT table1.*,table2.*,table3.field1,table3.field2 FROM table1, table2, table3 ...

Re: Joins with sums

2007-01-30 Thread Brent Baisley
You're left joining invoices, then left joining receipts. A left join will replicate rows on the left side to match the number of rows it found in the join, or just leave 1 row with NULL values (as you probably know). This is where your problem is. You were correct to try to use left joins

Re: Innodb, why not?

2007-01-25 Thread Brent Baisley
Size is an issue with InnoDB and deleting records does not reduce the size of the file. In my experience, the performance drop off is considerable once the table reaches a certain size. And it's not a slight drop off over time. If your table is going to get very large, I would recommend using

Re: speeding up a join COUNT

2007-01-24 Thread Brent Baisley
You should create indexes on the fields you search on most. In this case, you are searching on the user_type field, so create an index on that field. Otherwise you need to scan the entire table to find out which users are of the type you are searching for.

Re: Formatting a subquery?

2007-01-17 Thread Brent Baisley
Just so I'm straight on this: you want to select 20 random people, then select one random picture for each of those 20 people, ending up with 20 random pictures, each from a different account. Your query, I'm pretty sure, gives you all pictures from the 20 random accounts, ordered randomly. If we

Re: Formatting a subquery?

2007-01-17 Thread Brent Baisley
Sorry about the last email, sent it before I finished it. As I was saying... Just so I'm straight on this: you want to select 20 random people, then select one random picture for each of those 20 people, ending up with 20 random pictures, each from a different account. Your query I'm pretty

Re: Insert ... Select troubles

2007-01-15 Thread Brent Baisley
The t1 table has a RecID field that already is my primary key and set to auto_increment. Is there another way that this can be done? Thanks. Brent Baisley wrote (1/12/07 1:10 PM): Since ItemCount is part of your unique key, you should just make ItemCount an auto_increment field. MySQL

Re: Index and Order By

2007-01-15 Thread Brent Baisley
It's possible MySQL optimizes queries incorrectly. I've had to correct it on a few occasions. If you look at the results of the EXPLAIN for Q1, you'll notice that in both queries the key_len is 1, meaning MySQL thinks it only needs to look at the first character of the index. Based on that

Re: Insert ... Select troubles

2007-01-12 Thread Brent Baisley
Since ItemCount is part of your unique key, you should just make ItemCount an auto_increment field. MySQL will then handle providing sequential numbering within the group. For example: CREATE TABLE t1 ( GroupCount int, ItemCount int auto_increment, PRIMARY KEY (GroupCount,ItemCount) ) When you
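
A completed version of that sketch (the per-group counter is MyISAM-specific behavior and does not work with InnoDB):

```sql
CREATE TABLE t1 (
  GroupCount INT NOT NULL,
  ItemCount  INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (GroupCount, ItemCount)
) ENGINE=MyISAM;

INSERT INTO t1 (GroupCount) VALUES (1), (1), (2);
-- ItemCount numbers 1, 2 within group 1 and restarts at 1 for group 2
```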

Re: Select where the column names are not fully known

2007-01-11 Thread Brent Baisley
Your best bet would be to dynamically build the field string using whatever you are coding the front end in. A simple configuration parameter would tell you how many fields there are. Now, if you don't need to retain your current table structure, I would recommend switching it to a name/value
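
A minimal sketch of such a name/value layout (table and column names are illustrative):

```sql
CREATE TABLE record_values (
  record_id INT NOT NULL,
  name      VARCHAR(64) NOT NULL,
  value     VARCHAR(255),
  KEY (record_id, name)
);
-- any number of "columns" per record without ever altering the schema
```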

Re: Multiple table updates (Was: Does Update allow for aliases)

2007-01-11 Thread Brent Baisley
The key part of the documentation for me was:

Re: Multiple table updates (Was: Does Update allow for aliases)

2007-01-11 Thread Brent Baisley
The key part of the UPDATE documentation for me was: The table_references clause lists the tables involved in the join. Its syntax is described in Section 13.2.7.1, JOIN Syntax. Basically you can take almost any SELECT statement with JOINs and convert it to an UPDATE. There are certain

Re: Date v. DateTime index performance

2007-01-10 Thread Brent Baisley
Splitting out your values will cause problems when doing greater-than/less-than searching. If you search on year_number=2000 and month_number=6, that's not going to give you everything from 6/2000 on. It will really only return the second half of each year from 2000 on. To include 2/2002,

Re: Strange query.

2007-01-10 Thread Brent Baisley
You can't join on the result of calculations in the field selection. The result is not associated with any table. So the problem isn't so much with the date_format statement, but that you are joining on HOURS.hour the timestamp, not HOURS.hour the DATE_FORMAT. I would think you would be getting

Re: high load, tons of data

2007-01-08 Thread Brent Baisley
First, I think you may need to migrate off of OSX for your server if you are under heavy load. I love OSX and have set up many servers with it, but not for systems with high contention. Many reviews show that MySQL under OSX doesn't scale, most likely because of the lack of fine grained locking

Re: How to access an array variables in a single query

2007-01-08 Thread Brent Baisley
If this is in PHP, just use implode to create the string. Then there is no need to strip trailing commas and no loops. $instring = implode(',', $a); As for the query, where are you getting your 1000 values from? You can do joins on queries if your values are the result of some complex

Re: select statement question

2007-01-05 Thread Brent Baisley
You want to use a LEFT JOIN, which will select all the records from A and link them with records in B if there are any. If there are no matches in B, then the fields from B will be NULL. You then just check for the NULL value (no match) in a B field and use that as your filter. SELECT A.*,B.D
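
A sketch of that anti-join pattern, with hypothetical tables A and B:

```sql
SELECT A.*
FROM A LEFT JOIN B ON A.id = B.a_id
WHERE B.a_id IS NULL;  -- keeps only the rows in A with no match in B
```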

Re: sequences and auto_increment

2007-01-02 Thread Brent Baisley
I don't think MySQL has exactly what you are looking for, but you may be able to get the behavior you want. The auto_increment value is actually based on an index and doesn't have to be unique. So you could create a compound index that has one or more fields plus the auto_increment field. The

Re: How many colums should a index contain?

2006-11-03 Thread Brent Baisley
I think you want to create separate indexes. Indexes are basically a sorted list. So a single index on all those fields would sort the data first by id, then bid, then title,... If id is unique, then there is absolutely no reason to add other fields to the index. Think of a compound index as a

Re: Speed of DECIMAL

2006-10-26 Thread Brent Baisley
Did you leave your queries searching on ASCII instead of decimal? Depending on how you structured your queries, MySQL may actually be converting the fields to ASCII in order to do the comparison. Which would probably take 3x longer since it's not using an index.

Re: How can I do a SELECT without locking the table against updates?

2006-10-19 Thread Brent Baisley
You would need to convert the table to InnoDB as George mentioned. Alternatively, break up your SELECT into many smaller selects. If your query is running slow now, it's only going to get slower and slower as your table grows. I've noticed with MySQL that large result queries don't slow down

Re: Finding duplicates

2006-10-12 Thread Brent Baisley
You're kind of heading down the right road. And this was discussed on the list not too long ago, how to delete duplicates. Here's one solution that will find the oldest duplicate(s): SELECT prod.prod_num,prod.prod_title,prod.updated FROM prod JOIN ( SELECT prod_title,max(updated) maxdate FROM

Re: How to speed up query of indexed column with 5M rows?

2006-10-09 Thread Brent Baisley
For some reason it's not using the index on the owner field (flow_fk_owner), it's doing a full table scan. Which also explains the i/o results from vmstat. Since MySQL is not using the index, that means it determined a full table scan would be quicker (MySQL doesn't always get this right). You

Re: Low priority copy?

2006-09-27 Thread Brent Baisley
I'm guessing what's happening is that your import is locking the table, putting everything else on hold. People keep connecting, getting put on hold until you run out of connections. It's not that your machine is so busy, it just can't do two things at once. One of the limitations of MyISAM

Re: COUNT question

2006-09-18 Thread Brent Baisley
You might try changing it to distinct if you are looking for a unique count of ids from each. SELECT a.a,aa,COUNT(DISTINCT b.id),COUNT(DISTINCT c.id) FROM... Since you are doing a left join, there's always going to be something for b.id and c.id, even if the value is NULL. Distinct may work to

Re: Link request from a Hiking Forums

2006-09-18 Thread Brent Baisley
Hmmm, I have the urge to tell this guy to take a hike...

Re: How to sort last n entries?

2006-09-15 Thread Brent Baisley
This might work, I've used this syntax to select and sort from a UNION. SELECT * FROM (SELECT * FROM table ORDER BY id DESC LIMIT n) AS ltable ORDER BY datefield DESC According to the documentation you can use limit in subqueries: A subquery can contain any of the keywords or clauses that an

Re: Like inside in clause

2006-09-14 Thread Brent Baisley
You probably want to look into using RLIKE or REGEXP. SELECT id,name FROM tablename WHERE name RLIKE '^[abc]'

Re: query to find duplicate rows

2006-09-13 Thread Brent Baisley
You should always have a field that is a unique id for the record (i.e. autoincrement). It makes it easier for differentiating and deleting duplicates if needed. This query only works if you have a unique id field and it will only delete one duplicate record at a time. So if you have 4 records
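
One common variant of this self-join approach, assuming an auto_increment `id` column and duplicates that share a `prod_title` (this form removes all but the lowest id in a single pass):

```sql
DELETE t1
FROM prod t1
JOIN prod t2
  ON t1.prod_title = t2.prod_title
 AND t1.id > t2.id;  -- keep only the row with the smallest id per title
```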

Re: MySQL tuning advice

2006-09-12 Thread Brent Baisley
Really you need to look at what your server is doing, the traffic patterns, then you can tune. There is a lot you can determine by just looking at the output of SHOW STATUS and SHOW VARIABLES. Threads_created number high and growing? Increase the thread_cache_size variable. Opened_tables number

Re: Adding and Removing tables from MERGE tables dynamically

2006-09-06 Thread Brent Baisley
I've got a similar setup, total records across about 8 tables hovers around 200 million. To change a merge table just issue an ALTER TABLE with a new union. ALTER TABLE mergetable UNION=(table1, table2, table3,...);

Re: Insane execution time for JOIN query

2006-08-31 Thread Brent Baisley
How about posting the results of: EXPLAIN SELECT * FROM products LEFT JOIN items ON products.product_id = items.product_id WHERE items.product_id IS NULL; Your query looks fine to me. 6 hours is a very long time for tables that small. Are these MyISAM or InnoDB?

Re: limit clause on join results

2006-08-31 Thread Brent Baisley
How about something like this: SELECT * FROM states, cities, (SELECT id FROM states LIMIT 2) AS stlimit WHERE states.id = cities.state AND states.id = stlimit.id I tried it in 4.1 and it works.

Re: Complex SQL for multiple joins

2006-08-31 Thread Brent Baisley
You should be able to do it 1 query, I can think of two ways. I'm not sure which one will run faster. I use table aliasing to keep the lines shorter. The first possibility would have user permissions in one column and group permissions in another: SELECT

Re: Insane execution time for JOIN query

2006-08-31 Thread Brent Baisley
SHOW VARIABLES LIKE '%buffer_size%'

Re: Seperating Application server and Database server

2006-08-28 Thread Brent Baisley
In addition to what the others told you, security would be a big reason. If the application server is compromised, whether through your application code or some other service on the box, then they also have local access to the database files. If you setup a database server, you can open just

Re: file i/o operations...

2006-08-25 Thread Brent Baisley
Just getting that number of processes running I think would be a challenge. A setup I recently worked on runs a few hundred processes per box, and that kind of maxes out the CPU. Approach 1, been there, done that. Too messy. Approach 2, considered it, but you may end up with processes that

Re: limitations of mySQL i.e. number of rows

2006-08-22 Thread Brent Baisley
If you are going to be storing only 2 months of data at a time, I would suggest using merge tables. Use one table per month. This makes it very easy to delete data, simply redeclare what the union is. Then you can also retain previous months without affecting performance. One of the databases I

Re: Joining result sets into 1 row

2006-08-14 Thread Brent Baisley
I think what you are looking for is GROUP_CONCAT. Without more info I can't really tell though. http://dev.mysql.com/doc/refman/4.1/en/group-by-functions.html

Re: Group by with an IF

2006-08-14 Thread Brent Baisley
The problem is your GROUP BY on celeb_id. MySQL is doing the grouping and thus only grabbing the first season_week_date value within the grouping. I'm not sure what end result you are looking for. Grouping additionally by season_week_date might be what you are looking for. GROUP BY r.celeb_id,

Re: insert/replace question...

2006-08-11 Thread Brent Baisley
REPLACE is a special INSERT/UPDATE combination where you don't specify a filter, it uses the primary key. If no existing record exists, it INSERTs a new one, otherwise it UPDATEs the existing one. What you are looking for is the UPDATE command.

Re: Get a Numeric Zero instead of NULL in SELECT

2006-08-10 Thread Brent Baisley
The only way to force a numeric zero instead of a NULL is to alter the table. Don't allow NULLs in the field and set a default value of 0. However, you then lose the ability to determine if a value was actually entered or if 0 was entered. Don't know if that matters to you. Otherwise, you can
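
A sketch of both options against a hypothetical `payments` table:

```sql
-- table-level: disallow NULL and default to 0
ALTER TABLE payments MODIFY amount INT NOT NULL DEFAULT 0;

-- or substitute the zero at read time and keep NULLs in storage
SELECT IFNULL(amount, 0) AS amount FROM payments;
```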

Re: Another question on Cardinality??

2006-08-04 Thread Brent Baisley
It depends on the data, not the situation. How's that? FLD_4 is doubtful that you would want/need an index on it. This assumes an even distribution of both values (i.e. male/female). Since you would be scanning half the table anyway, an index won't really help. Now if it's not an even

Re: Another question on Cardinality??

2006-08-04 Thread Brent Baisley
of records a day, performance is a concern. Indexing everything will have a noticeable slowdown on inserts, updates and deletes because all the indexes also need to be updated.

Re: Table analysis - Help required urgently

2006-08-03 Thread Brent Baisley
You should look at what MySQL is doing, start by looking at SHOW STATUS and SHOW VARIABLES. The SHOW STATUS will give you an idea how your indexes are performing, thread management, open file management, etc. You can find a lot of info in there and some variables you can adjust on the fly. There

Re: Running Totals?

2006-08-03 Thread Brent Baisley
You might look into WITH ROLLUP. That could easily give you cumulative totals for the year, but off the top of my head I can't think of a way to get it for the months.
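
A sketch of WITH ROLLUP subtotals, assuming a hypothetical `ledger` table:

```sql
SELECT year, month, SUM(amount)
FROM ledger
GROUP BY year, month WITH ROLLUP;
-- adds a subtotal row per year and a grand-total row,
-- with NULL in the rolled-up grouping columns
```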

Re: Is this query possible?

2006-08-02 Thread Brent Baisley
I'm not sure why you split out track, track is really kind of an attribute of a song. Especially since you have artist and album with the song. Wouldn't album be the same as cd title? I'm not quite getting the logic of your schema. It would have been helpful if you provided your current query.

Re: MySQL performing too badly under heavy load - urgent hlp needed

2006-07-27 Thread Brent Baisley
Do a show status and check on what mysql is doing. I would start by looking at: threads_created - if this is high, increase your thread_cache_size. This means MySQL is busy creating and destroying threads instead of reusing them. This can take a toll on the OS. Opened_tables - if this number

Re: Replication vs. Clustering

2006-07-27 Thread Brent Baisley
It's important to read the How much RAM part if you are running any version lower than 5.1. In 5.0 and lower clusters store all information in memory, which can be a very limiting factor. My experience with replication is that it is fairly quick, in seconds at most rather than minutes. One

Re: Vertical Tab problem

2006-07-14 Thread Brent Baisley
Aaaah, the joys of importing data from FileMaker. Repeating fields, grrr. You need to do an ASCII/CHAR conversion since vertical tab is a non-printable character. SELECT * FROM accounts WHERE billing_address_street LIKE CONCAT('%', CHAR(11), '%') Since this is probably a one-shot deal and you are

Re: fulltext search optimization

2006-07-14 Thread Brent Baisley
Most of the queries you listed below probably aren't taking advantage of the full text index. MySQL doesn't consider song.mp3 or document.pdf words, they would be considered two words because the period is considered a word separator. Keep in mind MySQL indexes words not character strings.

Re: which is better long rows in table or two short row tables

2006-07-13 Thread Brent Baisley
Not sure what your question is, but remember you can use indexes to speed up access to rows. Thus, usually the fewer fields you need to search in, the better; avoid OR searches across fields.

Re: Trying to understand why Select running out of memory if table not used

2006-07-12 Thread Brent Baisley
It's your MySQL client that's run out of memory, not the server. I don't know how many rows MySQL is trying to return. Probably an enormous amount since you're not specifying a join criteria on the data table. The number of records in the event table between '2006-05-01' AND '2006-05-15' times

Re: Self Join Performance

2006-07-12 Thread Brent Baisley
Interesting setup. You're using one more join than you need to. Your query should look like this: SELECT DISTINCT(fvr.DocumentID) FROM FieldValueRelation fvr INNER JOIN FieldValueRelation fvr2 ON fvr.DocumentID = fvr2.DocumentID AND fvr2.FieldValueID = '1569' WHERE fvr.FieldValueID = '1344'

Re: Tuning High Loads MySQL Server

2006-07-12 Thread Brent Baisley
Cover the basics first by looking at the SHOW STATUS results. You should make yourself familiar with what these variables are telling you. It'll help in determining your bottleneck. http://dev.mysql.com/doc/refman/4.1/en/server-status-variables.html Right off the bat, look at the Threads_created

Re: Speedup of database changes

2006-07-11 Thread Brent Baisley
Changes to the database structure should be few and far between. If you are finding you need to change the structure regularly, you should probably look at a redesign. MyISAM locks the table when an insert/delete is occurring, during which time other requests are queued up. You can switch it to

Re: How does one speed up delete-Again

2006-07-10 Thread Brent Baisley
I've had to delete large amounts of records from tables with over 200 millions records. You are correct in that you want to do it in chunks. How you divide the deletes is up to you to decide. You would want to do it based on a indexed column, like a date, and specify a range. DELETE data FROM
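
A sketch of range-based chunked deletes on a hypothetical indexed `logged` column:

```sql
-- delete one day's worth at a time instead of everything at once
DELETE FROM data
WHERE logged >= '2006-01-01' AND logged < '2006-01-02';
-- repeat for each successive range; each statement holds locks only briefly
```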

Re: Date functions

2006-07-07 Thread Brent Baisley
The INTERVAL command is what you are looking for. It doesn't have to be SECOND (with no S), you could use DAY, HOUR, WEEK, etc. SELECT * FROM t WHERE TimeCol > (NOW() - INTERVAL X SECOND) http://dev.mysql.com/doc/refman/4.1/en/date-and-time-functions.html

Re: special char in full-text search

2006-07-06 Thread Brent Baisley
MySQL doesn't index the special characters, some of which are considered word separators, like the period. MySQL indexes words, not strings (sort of). I had a similar problem on one of my projects where I needed to index resumes for searching. Typical searches would include C++, C#, .NET, etc.

Re: Client still reports table full

2006-06-30 Thread Brent Baisley
If I understand the results from SHOW TABLE STATUS LIKE 'data', my avg_row_length = 497. Why would decreasing it to 50 have a positive effect? I would assume I should increase it? Thank you/Raymond

Re: howto set mysql to readonly

2006-06-30 Thread Brent Baisley
Instead of locking the table, why not just change the privileges for the specific accounts so they have select-only privileges? Then you still have write access through admin accounts if need be.

Re: Multiple joins

2006-06-30 Thread Brent Baisley
Yes, it's legal to do multiple joins. No, the way you declared your joins is not legal. At the very least it's confusing. Do you want to left join discussion, users and topics, or just members? I'm not sure if MySQL would accept things in that order. I always specifically declare my joins (left or

Re: MySQL Denormalized

2006-06-30 Thread Brent Baisley
A specific character that would occupy less space? You mean like using lower case instead of upper case? All characters use the same amount of space (unless you're using Chinese or something). I don't think reducing the number of columns will help. Usually you take other performance enhancing

Re: Client still reports table full

2006-06-29 Thread Brent Baisley
Oops, left out an important part. You should change the Avg_row_length also: ALTER TABLE ... AVG_ROW_LENGTH = 50. You need to specify an average row length if you have dynamic length fields in the table (blob, text, etc.). Also, perhaps a silly question which you may have answered earlier, but

Re: concurrency problem

2006-06-28 Thread Brent Baisley
On the extremely rare occasion when I couldn't use an auto increment, mainly for performance reasons, I've used an id+subid. Usually I've only done this for long running scripts that process/add a lot of records. In this case I wanted the script to generate its own ids. To keep them unique,

Re: How can I enable big-table option ?

2006-06-28 Thread Brent Baisley
It may not be the big-table option you are looking for. MySQL defaults to a 4GB table limit based on a calculation. From CREATE TABLE in the manual, under AVG_ROW_LENGTH: MySQL uses the product of the MAX_ROWS and AVG_ROW_LENGTH options to decide how big the resulting table is. If you do not

Re: Sorry for the dumb question how do I fix table is full?

2006-06-28 Thread Brent Baisley
MySQL by default limits tables to 4GB; it looks like you hit that limit. It's fairly easy to change that limit, ideally when you create the table. Before MySQL 5.0.6, the default pointer size was 4 bytes, which limits you to 4GB. That's the default size; if you specify a max_rows, that size will
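A sketch of checking and raising the limit (table name is a placeholder; MySQL sizes the row pointer from the product of MAX_ROWS and AVG_ROW_LENGTH):

```sql
-- Look at Max_data_length in the output to see the current ceiling
SHOW TABLE STATUS LIKE 'mytable';

-- Raise the ceiling; this rebuilds the table, so it can take a while
ALTER TABLE mytable MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 50;
```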

Re: multi-column indexes on InnoDB tables

2006-06-27 Thread Brent Baisley
Sounds like you have more potential problems than is typical. It also sounds like you may want to setup replication, even if the data is replicated to an old, old computer that you were going to throw out. It's not a big deal if it falls behind in the replication at times. And if you could get a

Re: Problem searching in grouped rows

2006-06-26 Thread Brent Baisley
I'll give it a shot. First, select the people that got the first advertisement: SELECT c_id,aa_id FROM adverticelink WHERE aa_id=4 From that result, you want to additionally filter out who didn't get the second advertisement. Since that information is contained in the same table, you want to
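One way to do that second filtering step in a single grouped query, using the column names from the thread (the advertisement ids 4 and 5 are just example values):

```sql
-- People who got advertisement 4 but not advertisement 5
SELECT c_id
FROM adverticelink
WHERE aa_id IN (4, 5)
GROUP BY c_id
HAVING SUM(aa_id = 5) = 0;
```

MySQL treats the comparison `aa_id = 5` as 0 or 1, so the SUM counts how many of the grouped rows matched the second ad.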

Re: PHP mysql_connect

2006-06-23 Thread Brent Baisley
I assume you are using PHP. It has to do with how the password in MySQL is encrypted. On some accounts, the ones that work, it's encrypted in the old way that PHP can use. New accounts default to the new format, which PHP 4 can't use. Here's the part of the manual that explains it and how to fix it:
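The usual fix from the manual, sketched here with placeholder account details, is to reset the affected account to the pre-4.1 hash format:

```sql
-- Store the password in the old hash format that the old client library understands
SET PASSWORD FOR 'someuser'@'localhost' = OLD_PASSWORD('secret');
```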

Re: Left Join Help

2006-06-23 Thread Brent Baisley
Here is your query rephrased a bit. I find this query structure easier to debug, especially when there are lots of joins. This is also the preferred structure in mysql 5 as I recall. Notice the ON ? part of the join. You didn't specify any join condition, so you're doing a full join, very very
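The difference, sketched with made-up table names:

```sql
-- A comma join with no condition pairs every row with every row (cartesian product)
SELECT * FROM orders o, customers c;

-- Declaring the join condition with ON matches rows properly
SELECT o.id, c.name
FROM orders o
LEFT JOIN customers c ON c.id = o.customer_id;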

Re: Need to speed up deletes

2006-06-22 Thread Brent Baisley
You should try deleting in smaller batches if you can. I've found things go a lot quicker doing it that way and you don't lock up your table for long durations if you want to stick with MyISAM. - Original Message - From: mos [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Wednesday,
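A sketch of the batched approach (table, column and cutoff are placeholders): run a bounded DELETE repeatedly until it affects zero rows, so no single statement holds the table lock for long.

```sql
-- Repeat until the affected-row count comes back as 0
DELETE FROM log_table
WHERE created < '2006-01-01'
LIMIT 10000;
```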

Re: Lots of threads in opening tables and closing tables state

2006-06-22 Thread Brent Baisley
You should also up your thread cache, currently it's set to 0. MySQL won't reuse threads if this is set to 0. MySQL has created 13781740 new threads so far. You can change the value while the server is running (example below), but may want to also add it to your conf file. set global
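For example (the value 8 is just a starting point, not a recommendation for your workload):

```sql
-- Takes effect immediately for new connections
SET GLOBAL thread_cache_size = 8;

-- And in my.cnf so it survives a restart:
-- [mysqld]
-- thread_cache_size = 8
```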

Re: Math problem

2006-06-22 Thread Brent Baisley
dollar/currency/money, call it whatever you like, but if it's got a $ in front of it, then it's a string, not a number. Your price field should have been created as a decimal. I'm guessing it's a char type. You should also set your fields to default to 0, not NULL. The problem is with your
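A sketch of the cleanup, assuming the table is called products and the column price (both placeholders):

```sql
-- Strip the '$' while the column is still a string type
UPDATE products SET price = REPLACE(price, '$', '');

-- Then convert it to a proper numeric type with a 0 default
ALTER TABLE products MODIFY price DECIMAL(10,2) NOT NULL DEFAULT 0;
```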

Re: Full-Text problems

2006-06-21 Thread Brent Baisley
Perhaps the searches that return nothing are actually matching more than 50% of the record in the table. From the manual: In addition, words that are present in more than 50% of the rows are considered common and do not match. - Original Message - From: Taco Fleur [EMAIL PROTECTED]
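One quick way to test whether the 50% threshold is the culprit: boolean mode searches ignore it, so a word that disappears in a natural language search but shows up in boolean mode is being treated as too common. Table and column names here are placeholders.

```sql
-- Natural language mode: words in over 50% of rows match nothing
SELECT * FROM articles WHERE MATCH(body) AGAINST('someword');

-- Boolean mode: the 50% threshold does not apply
SELECT * FROM articles WHERE MATCH(body) AGAINST('+someword' IN BOOLEAN MODE);
```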

Re: Full-Text problems

2006-06-21 Thread Brent Baisley
To: mysql@lists.mysql.com Cc: 'Brent Baisley' [EMAIL PROTECTED] Sent: Wednesday, June 21, 2006 8:51 AM Subject: RE: Full-Text problems Is there any way to test this? I doubt it is 50%, some of these words only appear once or twice within the content. Kind regards, Taco Fleur Free Call 1800 032 982
