Re: Recover dropped database

2012-10-26 Thread Derek Downey
That's rough. The only thing I could suggest is to try out Percona's data recovery
tool ( https://launchpad.net/percona-data-recovery-tool-for-innodb )

They have a blog on how to use it in a specific scenario (deleted rows from a 
single table) here: 
http://www.mysqlperformanceblog.com/2012/02/20/how-to-recover-deleted-rows-from-an-innodb-tablespace/

I've never used it (yay backups!), so I can't tell you if it will work for an 
entire dropped database.

Regards,
Derek Downey

On Oct 26, 2012, at 10:15 AM, Lorenzo Milesi max...@ufficyo.com wrote:

 innodb will not be consistent if there are parts overwritten
 in the meantime or small pieces are not recovered 100%
 
 I took an LVM snapshot a few minutes after it happened, and the SQL server is
 barely used so it shouldn't be overwritten.
 
 -- 
 Lorenzo Milesi - lorenzo.mil...@yetopen.it
 
 GPG/PGP Key-Id: 0xE704E230 - http://keyserver.linux.it
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 



Re: Recover dropped database

2012-10-26 Thread Derek Downey
I do agree with Reindl that it is highly unlikely to work, but without
specifics of how busy your DB is and how much write activity you have, it will
at least give you a shot at recovering some of the data.

Derek Downey


On Oct 26, 2012, at 10:29 AM, Reindl Harald h.rei...@thelounge.net wrote:

 
 
 On 26.10.2012 16:15, Lorenzo Milesi wrote:
 innodb will not be consistent if there are parts overwritten
 in the meantime or small pieces are not recovered 100%
 
 I took an LVM snapshot a few minutes after it happened, and the SQL server is
 barely used so it shouldn't be overwritten.
 
 this does not help you in any way
 
 making an LVM snapshot while mysqld is running
 produces an inconsistent backup by design
 
 so after dropping a database ANY change and write-back of innodb
 buffers will reuse the table space and overwrite data
 
 innodb is a large file with internal structures and you said:
 Storage is innodb, no separate files for tables, everything is in ibdata1
 
 so how do you imagine restoring PARTIAL contents of a large
 file which also has to be consistent with ib_logfile*
 
 sorry, but there is no tool doing magic if someone is
 missing at least daily backups to restore in the case
 of mistakes like yours
 
 
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: (real) silly question about variables...

2012-10-04 Thread Derek Downey
Hrm, what version of MySQL? I just ran the query on 5.5.24 and it worked as 
expected.

- Derek Downey

On Oct 4, 2012, at 9:52 AM, MAS! wrote:

 Hi
 
 I know there'd be a reason, but I can't understand that..
 
 mysql> select @valore:=rand(), @valore, @valore:='ciao', @valore;
 +-------------------+-------------------+-----------------+---------+
 | @valore:=rand()   | @valore           | @valore:='ciao' | @valore |
 +-------------------+-------------------+-----------------+---------+
 | 0.483624490428366 | 0.483624490428366 | ciao            |       0 |
 +-------------------+-------------------+-----------------+---------+
 1 row in set (0.00 sec)
 
 mysql> select @valore:=rand(), @valore, @valore:='ciao', @valore;
 +-------------------+-------------------+-----------------+---------+
 | @valore:=rand()   | @valore           | @valore:='ciao' | @valore |
 +-------------------+-------------------+-----------------+---------+
 | 0.747058809499311 | 0.747058809499311 | ciao            | ciao    |
 +-------------------+-------------------+-----------------+---------+
 1 row in set (0.00 sec)
 
 why, in the first execution, is the last value 0 and not 'ciao'?
 and why do the variables in the first 2 columns seem to work as expected!?
 
 thank you in advance
 
 bye
 MAS!
 
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Need Help Converting Character Sets

2012-09-27 Thread Derek Downey
To go along with what Rick is saying, this link might help you: 
http://dba.stackexchange.com/questions/10467/how-to-convert-control-characters-in-mysql-from-latin1-to-utf-8

I remember doing a bunch of HEX() conversions of control characters (such as an
apostrophe copied from a Word document) before attempting the SET NAMES.
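
For illustration, the kind of cleanup I mean looks roughly like this. The
Articles/body names are from Mark's table; the article_id column and the 0x92
byte are just examples of what HEX() might reveal in your data:

-- Look at the raw bytes of one suspect row to see what is actually stored.
SELECT HEX(body) FROM Articles WHERE article_id = 42;

-- Replace the cp1252 "smart" apostrophe (0x92) with a plain ASCII one
-- before attempting any character-set conversion of the column.
UPDATE Articles SET body = REPLACE(body, 0x92, '''');

Once the stray bytes are gone, the SET NAMES / CONVERT route is much less likely
to turn characters into question marks.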

Derek Downey


On Sep 24, 2012, at 7:53 PM, Rick James wrote:

 If you have a mixture of encodings, you are in deep doodoo.
 
 This page describes some debugging techniques and some issues:
   http://mysql.rjweb.org/doc.php/charcoll
 
 That apostrophe might be MicroSquish's smart quote.
 
 Can you provide SELECT HEX(the_field) FROM... ?  We (or the above page) might 
 be able to interpret the character.
 
 To prevent future char set issues, you must know what encoding the source is. 
  Then, with SET NAMES (etc), you tell mysqld that the bytes you have in hand 
 are encoded that way.  mysqld will then convert those bytes to the character
 set declared for the column they go in.  (Presumably, all the text columns
 will be declared utf8 or utf8mb4.)
 
 -Original Message-
 From: Mark Phillips [mailto:m...@phillipsmarketing.biz]
 Sent: Monday, September 24, 2012 4:28 PM
 To: Mysql List
 Subject: Need Help Converting Character Sets
 
 I have a table, Articles, of news articles (in English) with three text
 columns for the intro, body, and caption. The data came from a web
 page, and the content was cut and pasted from other sources. I am
 finding that there are some non utf-8 characters in these three text
 columns. I would like to (1) convert these text fields to be strict
 utf-8 and then (2) fix the input page to keep all new submissions utf-
 8.
 
 (1) For the first step, fixing the current database, I tried:
 
 update Articles set body = CONVERT(body USING ASCII);
 
 However, when I checked one of the articles I found an apostrophe had
 been converted into a question mark. (FWIW, the apostrophe was one of
 those offending non utf-8 characters):
 
 Before conversion: I stepped into the observatory's control room ...
 
 After conversion: I stepped into the observatory?s control room...
 
 Is there a better way to accomplish my first goal, without reading each
 article and manually making the changes?
 
 (2) For the second goal, ensuring that all future articles are utf-8,
 do I need to change the table structure or the insert query to ensure I
 get the correct utf-8 characters into the database?
 
 Thanks,
 
 Mark
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 



Re: Deleting Records in Big tables

2011-11-04 Thread Derek Downey
Be careful deleting with LIMIT. If you're replicating, you're not guaranteed
the same order for the rows you've deleted.

Perhaps a better way to delete in smaller chunks is to adjust the id cutoff in
steps:

DELETE FROM my_big_table WHERE id > 5000;
DELETE FROM my_big_table WHERE id > 4000;
etc
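
If you want the stored-procedure variant Ananda mentions below, a rough
(untested) sketch of the same range-stepping idea would be something like this;
the 6000 ceiling and the 1000 step size are placeholders for whatever fits your
actual id range:

DELIMITER //
CREATE PROCEDURE purge_metadata_chunks(IN floor_id BIGINT, IN ceiling_id BIGINT)
BEGIN
  DECLARE cutoff BIGINT;
  SET cutoff = ceiling_id;
  -- each pass removes only the slice above the current cutoff, so every
  -- statement commits a bounded amount of work
  WHILE cutoff > floor_id DO
    DELETE FROM metadata WHERE id > cutoff;
    SET cutoff = cutoff - 1000;
  END WHILE;
  DELETE FROM metadata WHERE id > floor_id;
END //
DELIMITER ;

CALL purge_metadata_chunks(2474, 6000);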

-- Derek

On Nov 4, 2011, at 12:47 PM, Andy Wallace wrote:

 I've had some luck in the past under similar restrictions deleting in
 chunks:
 
 delete from my_big_table where id > 2474 limit 1000
 
 But really, the best way is to buy some more disk space and use the
 new table method
 
 On 11/4/11 1:44 AM, Adarsh Sharma wrote:
 Thanks Anand,
 
 
 Ananda Kumar wrote:
 Why don't you create a new table where id < 2474, rename the original table
 to _old and the new table to the actual table name.
 I need to delete rows from 5 tables, each over 50 GB, and I don't have sufficient
 space to store the extra data.
 My application loads 2 GB of data daily into my databases.
 
 or
 You need to write a stored proc to loop through rows and delete, which will 
 be faster.
 
 Can you provide me a simple example of a stored proc?
 
 Doing just a simple delete statement for deleting huge amounts of data will
 take ages.
 
 Even the Create Index command on ID takes hours to complete.
 
 I think there is no easy way to delete those rows from mysql tables.
 
 
 
 
 
 
 regards
 anandkl
 
 On Fri, Nov 4, 2011 at 12:52 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 
 
 Dear all,
 
 Today I need to delete some records from tables of over 70 GB.
 I have 4 tables in my mysql database.
 
 my delete command is :-
 
 delete from metadata where id > 2474;
 
 but it takes hours to complete.
 
 One of my table structure is as :-
 
 CREATE TABLE `metadata` (
 `meta_id` bigint(20) NOT NULL AUTO_INCREMENT,
 `id` bigint(20) DEFAULT NULL,
 `url` varchar(800) DEFAULT NULL,
 `meta_field` varchar(200) DEFAULT NULL,
 `meta_value` varchar(2000) DEFAULT NULL,
 `dt_stamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`meta_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=388780373 ;
 
 
 Please let me know the quickest way to do this.
 I tried to create indexes on id in these tables, but this too
 takes time.
 
 
 
 Thanks
 
 -- MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=anan...@gmail.com
 
 
 
 
 
 -- 
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 There are two ways to build software:
 Make it so simple that there are obviously no bugs,
 or make it so complex that there are no obvious bugs.
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=de...@orange-pants.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: How to select the id of 2+ records for given user?

2011-10-19 Thread Derek Downey
You could do a GROUP_CONCAT to get you close:

SELECT userID, SUBSTRING_INDEX(GROUP_CONCAT(messageID), ',', 1) messageList 
FROM table
GROUP BY userID

| userID | messageList |
|--------|-------------|
| 71     | 984         |
| 73     | 441, 489    |
| 74     | 483, 723    |

Or some such. Note I haven't tested the actual query. It's just a general 
direction.
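
If you literally need everything after the first messageID in one statement, one
(equally untested) variant would be the following; 'messages' is a stand-in for
the real table name:

SELECT userID,
       SUBSTRING(GROUP_CONCAT(messageID ORDER BY ID),
                 LOCATE(',', GROUP_CONCAT(messageID ORDER BY ID)) + 1) AS messageList
FROM messages
GROUP BY userID
HAVING COUNT(*) > 1;  -- users with a single message have nothing after the first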

- Derek Downey

On Oct 19, 2011, at 1:32 PM, Michael Dykman wrote:

 I'm afraid that what you are looking for simply cannot be done with MySQL
 alone.  You will need to pare your results at the application layer.
 Remember that rows have no inherent order except for conforming to any
 ORDER BY clause contained within the query.
 
 - md
 
 On Wed, Oct 19, 2011 at 1:27 PM, Dotan Cohen dotanco...@gmail.com wrote:
 
 Assuming a table such this:
 | ID | messageID | userID |
 |----|-----------|--------|
 | 1  | 345       | 71     |
 | 2  | 984       | 71     |
 | 3  | 461       | 72     |
 | 4  | 156       | 73     |
 | 5  | 441       | 73     |
 | 6  | 489       | 73     |
 | 7  | 483       | 74     |
 | 8  | 523       | 74     |
 | 9  | 723       | 74     |
 
 I need the second, third, fourth, etc messageID for each userID. So I
 would get a results table such as:
 | ID | messageID | userID |
 |----|-----------|--------|
 | 2  | 984       | 71     |
 | 5  | 441       | 73     |
 | 6  | 489       | 73     |
 | 7  | 483       | 74     |
 | 9  | 723       | 74     |
 
 I've tried playing with count and group by and limit, but I've not
 found a solution. I can easily get all the rows and then remove the
 rows that I don't need in PHP, but I'd still like to know if an
 all-MySQL solution is possible.
 
 Thanks!
 
 --
 Dotan Cohen
 
 http://gibberish.co.il
 http://what-is-what.com
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
 
 
 
 
 -- 
 - michael dykman
 - mdyk...@gmail.com
 
 May the Source be with you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: How to select the id of 2+ records for given user?

2011-10-19 Thread Derek Downey
Ah-hah! :)

Actually, I did something similar to that a month or so ago. I ran into a speed
limitation on a not-small database (~3 million rows). So be careful.

Luckily, in my case I put all the 'minimum' ids in a memory table with an index,
and that solved it. It was also a cleanup script, not something I need to run
every day.
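
Roughly, that looked like the following (a sketch from memory, with 'messages'
standing in for the real table and using Dotan's column names):

-- Stage the first row id per user in an indexed MEMORY table.
CREATE TABLE first_ids (
  id INT NOT NULL PRIMARY KEY
) ENGINE=MEMORY;

INSERT INTO first_ids
SELECT MIN(ID) FROM messages GROUP BY userID;

-- Then anti-join to exclude those rows from the result.
SELECT m.ID, m.messageID, m.userID
FROM messages m
LEFT JOIN first_ids f ON f.id = m.ID
WHERE f.id IS NULL;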

-- Derek Downey


On Oct 19, 2011, at 6:06 PM, Basil Daoust wrote:

 For me given the sample data the following worked.
 The inner select says find all first messages, the outer says give me all
 messages that are thus not first messages.
 
 select * from table1 where messageID NOT IN (
 select messageID from table1
 group by userID
 )
 
 Sometimes just playing with the data will result in an aha moment.
 I'm assuming where you show row 7 you meant row 8?
 
 
 On Wed, Oct 19, 2011 at 12:43 PM, Derek Downey de...@orange-pants.com wrote:
 
 You could do a GROUP_CONCAT to get you close:
 
 SELECT userID, SUBSTRING_INDEX(GROUP_CONCAT(messageID), ',', 1) messageList
 FROM table
 GROUP BY userID
 
 | userID | messageList |
 |--------|-------------|
 | 71     | 984         |
 | 73     | 441, 489    |
 | 74     | 483, 723    |
 
 Or some such. Note I haven't tested the actual query. It's just a general
 direction.
 
 - Derek Downey
 
 On Oct 19, 2011, at 1:32 PM, Michael Dykman wrote:
 
 I'm afraid that what you are looking for simply cannot be done with MySQL
 alone.  You will need to pare your results at the application layer.
 Remember that rows have no inherent order except for conforming to any
 ORDER BY clause contained within the query.
 
 - md
 
 On Wed, Oct 19, 2011 at 1:27 PM, Dotan Cohen dotanco...@gmail.com
 wrote:
 
 Assuming a table such this:
 | ID | messageID | userID |
 |----|-----------|--------|
 | 1  | 345       | 71     |
 | 2  | 984       | 71     |
 | 3  | 461       | 72     |
 | 4  | 156       | 73     |
 | 5  | 441       | 73     |
 | 6  | 489       | 73     |
 | 7  | 483       | 74     |
 | 8  | 523       | 74     |
 | 9  | 723       | 74     |
 
 I need the second, third, fourth, etc messageID for each userID. So I
 would get a results table such as:
 | ID | messageID | userID |
 |----|-----------|--------|
 | 2  | 984       | 71     |
 | 5  | 441       | 73     |
 | 6  | 489       | 73     |
 | 7  | 483       | 74     |
 | 9  | 723       | 74     |
 
 I've tried playing with count and group by and limit, but I've not
 found a solution. I can easily get all the rows and then remove the
 rows that I don't need in PHP, but I'd still like to know if an
 all-MySQL solution is possible.
 
 Thanks!
 
 --
 Dotan Cohen
 
 http://gibberish.co.il
 http://what-is-what.com
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mdyk...@gmail.com
 
 
 
 
 --
 - michael dykman
 - mdyk...@gmail.com
 
 May the Source be with you.
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=bdao...@lemonfree.com
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: replication between two tables in same database

2011-09-29 Thread Derek Downey
You could look into the sphinx engine http://sphinxsearch.com/about/sphinx/

No experience with this personally though
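
If Sphinx is overkill, the script-and-cronjob copy Harald suggests further down
can be as small as this (table names here are placeholders for your own):

-- rebuild the MyISAM search copy from the InnoDB source table
TRUNCATE TABLE articles_search;
INSERT INTO articles_search (id, title, body)
SELECT id, title, body FROM articles;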

- Derek
On Sep 29, 2011, at 1:07 PM, Tompkins Neil wrote:

 Yes, unless I can set-up some sort of replication between the two tables.
 
 On Thu, Sep 29, 2011 at 6:05 PM, Reindl Harald h.rei...@thelounge.net wrote:
 
 please do NOT post off-list!
 
 so your only workaround is LIKE '%whatever%' currently
 
 On 29.09.2011 19:04, Tompkins Neil wrote:
 The reason I'm using InnoDB is the use of foreign keys; in short, however,
 we need a couple of the tables to support FULLTEXT searching, which InnoDB
 does not support.
 
 
 On Thu, Sep 29, 2011 at 6:01 PM, Reindl Harald h.rei...@thelounge.net wrote:
 
 
 
 On 29.09.2011 18:56, Tompkins Neil wrote:
 Hi
 
 I've an InnoDB and a MyISAM table in the SAME database, and I wish to replicate
 the data between the two because I need to use FULLTEXT searching on
 the MyISAM table.  Is this possible? If so, how do I do it?
 
 in short: no
 
 at more length:
 why in the world are you using the wrong engine if you need
 fulltext search?
 mysql 5.6 seems to start supporting this in innodb, but currently it does not
 
 replication is based on binlogs and contains database/table, so there is
 no dirty trick to do this on one server except running a script that copies
 the table contents via cronjob
 
 
 
 --
 
 With best regards, Reindl Harald
 the lounge interactive design GmbH
 A-1060 Vienna, Hofmühlgasse 17
 CTO / software-development / cms-solutions
 p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
 icq: 154546673, http://www.thelounge.net/
 http://www.thelounge.net/signature.asc.what.htm
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Stored Procedure Question

2011-09-21 Thread Derek Downey
SELECT id INTO @row_id FROM myTable WHERE  blah blah LIMIT 1;

Source 
http://dev.mysql.com/doc/refman/5.5/en/select-into-statement.html
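
Fleshing that out against your sketch below, the whole procedure could look
roughly like this (untested; the parameter, column, and SET/VALUES lists are
placeholders for your four fields):

DELIMITER //
CREATE PROCEDURE upsert_thing(IN p1 INT, IN p2 INT, IN p3 INT, IN p4 INT)
BEGIN
  DECLARE row_id INT DEFAULT NULL;

  -- grab the matching row's id, if any
  SELECT id INTO row_id
  FROM myTable
  WHERE col1 = p1 AND col2 = p2 AND col3 = p3 AND col4 = p4
  LIMIT 1;

  IF row_id IS NOT NULL THEN
    UPDATE myTable SET col4 = p4 WHERE id = row_id;
  ELSE
    INSERT INTO anotherTable (col1, col2, col3, col4) VALUES (p1, p2, p3, p4);
  END IF;
END //
DELIMITER ;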

On Sep 21, 2011, at 2:23 PM, Brandon Phelps wrote:

 Hello all,
 
 I would like to create a stored procedure that does the following:
 
 1. Accepts 4 values as parameters
 2. SELECTS 1 record (LIMIT 1) from a table where the 4 parameters match 
 fields in that table
   a. If a record was returned then UPDATE the table
   b. If a record was not returned then INSERT into a different table
 
 My main question here is how can I execute a SELECT id FROM ... LIMIT 1 
 statement within a stored procedure then use the returned id field later in 
 the procedure?
 
 Something like this:
 
 @row_id = SELECT id FROM myTable WHERE blah blah LIMIT 1;
 
 IF @row_id != nothing THEN
   UPDATE myTable ...
 ELSE
   INSERT INTO anotherTable ...
 END IF
 
 So if no rows were returned from the select I perform the ELSE block, 
 otherwise I perform the main IF block.
 
 Thanks!
 
 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=de...@orange-pants.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: strange mysql update ..

2011-09-09 Thread Derek Downey
Try searching for a row that has a login_date of '0000-00-00 00:00:00'
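
That is what your statement will have produced: it assigned the result of the
boolean expression ('2011-08-05 04:15:05' AND user_id = 16078845) to login_date
(hence the Truncated incorrect DOUBLE value warning), and that 0/1 result stored
into a DATETIME column becomes the zero date. Something like this should locate
the row that actually got changed:

SELECT user_id, login_date
FROM user_info
WHERE login_date = '0000-00-00 00:00:00';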

- Derek

On Sep 9, 2011, at 8:52 AM, umapathi b wrote:

 Any update from anybody ?
 
 
 -- Forwarded message --
 From: umapathi b umapath...@gmail.com
 Date: Thu, Sep 8, 2011 at 4:28 AM
 Subject: Re: strange mysql update ..
 To: Ananda Kumar anan...@gmail.com
 Cc: mysql@lists.mysql.com
 
 
 Here is the o/p after the update ..
 
 
   user_id: 16078845
   drivers_license: TEST1140DL
login_date: 2011-06-19 11:20:07
 course_id: 1011
  regulator_id: 10840
 test_info: 
   completion_date: 2011-06-19 11:37:16
print_date: NULL
  password: test1140dl
 certificate_number: NULL
 login: test114...@1140dl.com
 
 
 - Umapathi
 
 
 On Thu, Sep 8, 2011 at 4:23 AM, Ananda Kumar anan...@gmail.com wrote:
 
 Can you let us know what is the output of
 
 select * from user_info where user_id=16078845;
 
 
 On Thu, Sep 8, 2011 at 1:02 PM, umapathi b umapath...@gmail.com wrote:
 
 I wanted to change the login_date of one user . The original data of that
 user is like this ..
 
 select * from user_info where user_id = 16078845 \G
 *** 1. row ***
  user_id: 16078845
  drivers_license: TEST1140DL
   login_date: 2011-06-19 11:20:07
course_id: 1011
 regulator_id: 10840
test_info: 
  completion_date: 2011-06-19 11:37:16
   print_date: NULL
 password: test1140dl
 certificate_number: NULL
login: test114...@1140dl.com
 
 I fired the update statement in a wrong way ..like this ..
 
 update user_info set login_date='2011-08-05 04:15:05' and user_id
 =16078845
 limit 1 ;
 ( I forgot to use WHERE; instead of WHERE I used AND )
 update user_info set login_date='2011-08-05 04:15:05' where user_id
 =16078845 limit 1 ; ( this is the query I intended )
 
 after the update I got this message:
 mysql> update user_info set login_date='2011-08-05 04:15:05' and user_id
 =16078845 limit 1;
 Query OK, 1 row affected, 1 warning (0.02 sec)
 Rows matched: 1  Changed: 1  Warnings: 0
 
 It shows that one record is affected and one row changed ..
 I did show warnings; the output is like this:
 
 mysql> show warnings;
 
 +---------+------+----------------------------------------------------------+
 | Level   | Code | Message                                                  |
 +---------+------+----------------------------------------------------------+
 | Warning | 1292 | Truncated incorrect DOUBLE value: '2011-08-05 04:15:05'  |
 +---------+------+----------------------------------------------------------+
 
 But I could not get any record in the table with the updated login_date:
 mysql> select * from user_info where login_date like '2011-08-05%' ;
 Empty set (0.67 sec)
 
 So my question is: what happened exactly?
 Why were no records updated?
 
 Help is highly appreciated in this regard ..
 
 - Umapathi
 umapath...@gmail.com
 
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2011-09-08 Thread Derek Downey
Correct me if I'm wrong. You're wanting to get all records that have an
open_dt or a close_dt between two times.

If that's correct, you might be able to get an index_merge by doing a query 
like:

WHERE ((starting time) <= open_dt <= (ending time)) OR ((starting
time) <= close_dt <= (ending time))

and creating two indexes (one on 'open_dt' and the other on 'close_dt')

http://dev.mysql.com/doc/refman/5.1/en/index-merge-optimization.html
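
As a concrete sketch (sc_table is a placeholder for whatever table the sc alias
points at), that means something like:

CREATE INDEX ndx_open_dt  ON sc_table (open_dt);
CREATE INDEX ndx_close_dt ON sc_table (close_dt);

SELECT *
FROM sc_table
WHERE open_dt  BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53'
   OR close_dt BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53';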

Regards,
Derek

On Sep 8, 2011, at 2:50 PM, Brandon Phelps wrote:

 Andy,
 
 The queries take minutes to run.  MySQL is 5.1.54 and it's running on Ubuntu 
 server 11.04.  Unfortunately the machine only has 2GB of RAM but no other 
 major daemons are running on the machine.  We are running RAID 1 (mirroring) 
 with 1TB drives.  The tables in question here are all MyISAM.  When running 
 with the LIMIT 10 my EXPLAIN is:
 
 +----+-------------+-------+--------+---------------------+---------------------+---------+--------------------+----------+-----------------------------+
 | id | select_type | table | type   | possible_keys       | key                 | key_len | ref                | rows     | Extra                       |
 +----+-------------+-------+--------+---------------------+---------------------+---------+--------------------+----------+-----------------------------+
 |  1 | SIMPLE      | sc    | range  | ndx_open_close_rcvd | ndx_open_close_rcvd | 8       | NULL               | 32393316 | Using where; Using filesort |
 |  1 | SIMPLE      | spm   | eq_ref | PRIMARY             | PRIMARY             | 2       | syslog.sc.src_port |        1 |                             |
 |  1 | SIMPLE      | dpm   | eq_ref | PRIMARY             | PRIMARY             | 2       | syslog.sc.dst_port |        1 |                             |
 +----+-------------+-------+--------+---------------------+---------------------+---------+--------------------+----------+-----------------------------+
 
 When I remove the LIMIT 10 I get:
 
 +----+-------------+-------+--------+---------------------+---------+---------+--------------------+----------+-----------------------------+
 | id | select_type | table | type   | possible_keys       | key     | key_len | ref                | rows     | Extra                       |
 +----+-------------+-------+--------+---------------------+---------+---------+--------------------+----------+-----------------------------+
 |  1 | SIMPLE      | sc    | ALL    | ndx_open_close_rcvd | NULL    | NULL    | NULL               | 32393330 | Using where; Using filesort |
 |  1 | SIMPLE      | spm   | eq_ref | PRIMARY             | PRIMARY | 2       | syslog.sc.src_port |        1 |                             |
 |  1 | SIMPLE      | dpm   | eq_ref | PRIMARY             | PRIMARY | 2       | syslog.sc.dst_port |        1 |                             |
 +----+-------------+-------+--------+---------------------+---------+---------+--------------------+----------+-----------------------------+
 
 Thanks for all your help thus far.
 
 On 09/08/2011 02:38 PM, Andrew Moore wrote:
 I don't think I saw any query timings in the emails (maybe I missed them).
 
 What version of MySQL are you currently using?
 What does the explain look like when your remove the limit 10?
 Is your server tuned for MyISAM or InnoDB?
 What kind of disk setup is in use?
 How much memory is in your machine?
 
 
 On Thu, Sep 8, 2011 at 7:27 PM, Brandon Phelps bphe...@gls.com wrote:
 
 Thanks for the reply Andy.  Unfortunately the users will be selecting
 varying date ranges and new data is constantly coming in, so I am not sure
 how I could archive/cache the necessary data that would be any more
 efficient than simply using the database directly.
 
 
 
 On 09/08/2011 02:16 PM, Andrew Moore wrote:
 
 Thinking outside the query, is there any archiving that could happen to
 make
 your large tables kinder in the range scan?
 
 Andy
 
 On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps bphe...@gls.com wrote:
 
  On 09/01/2011 01:32 PM, Brandon Phelps wrote:
 
  On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
 
  On 9/1/2011 09:42, Brandon Phelps wrote:
 
  On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
 
 ...
 
 
  WHERE
      (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
      AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 12:36:53')
 
  In that case your logic here simplifies to:
  WHERE
      open_dt >= '2011-08-30 00:00:00'
      AND
      close_dt <= '2011-08-30 12:36:53'
 
 
  Now add an index over open_dt and close_dt and see what happens.
 
 
  Jochem
 
 
 Jochem,
 
 I can't really use your WHERE logic because I also need to retrieve
 results where the open_dt time is out of the range specified. For
 example, a very large file download might span multiple days so given
 your logic if the connection was started 2 days ago and I want to pull
 1
 days worth of connections, I would miss that entry. Basically I want
 to
 SELECT all of the records that were opened 

Re: Build from bazaar source on Mac 10.7 (Lion) fails

2011-09-03 Thread Derek Downey
MacPorts is nice, but I want to start tinkering with the source code.

I was able to get it to build with the following steps on Lion:

$ mkdir mysql-repo 
$ cd mysql-repo/ 
$ bzr init-repo . 
$ bzr branch lp:mysql-server/5.5 5.5 
$ cd 5.5 
$ ./BUILD/compile-pentium64-debug-max --prefix=/usr/local/mysql5.5 
$ sudo make install 

I think my cmake step was missing a flag to specify x86_64 arch

Derek Downey

On Sep 3, 2011, at 2:38 PM, Jan Steinman wrote:

 From: Derek Downey de...@orange-pants.com
 
 I am trying to setup a development machine to start delving into some of the 
 internal code of the MySQL DB. I'm off to a great start, since I can't even 
 get it to build properly...
 
 My machine is: 
 
 $ uname -a 
 Darwin DDMac 11.1.0 Darwin Kernel Version 11.1.0: Tue Jul 26 16:07:11 PDT 
 2011; root:xnu-1699.22.81~1/RELEASE_X86_64 x86_64
 
 Since you're on a Mac, have you considered using MacPorts? MySQL builds and 
 installs flawlessly on 10.6.8 with sudo port -f install mysql5 here. 
 (Haven't tried it on 10.7 yet.)
 
http://www.macports.org/
 
 
 I can remember when a good politician had to be 75 percent ability and 25 
 percent actor, but I can well see the day when the reverse could be true. -- 
 Harry S. Truman
  Jan Steinman, EcoReality Co-op 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=de...@orange-pants.com
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Build from bazaar source on Mac 10.7 (Lion) fails

2011-09-02 Thread Derek Downey
Hello, 

I am trying to setup a development machine to start delving into some of the 
internal code of the MySQL DB. I'm off to a great start, since I can't even get 
it to build properly. I'm sure I'm missing an important step, but am following 
the instructions from 
http://dev.mysql.com/doc/refman/5.5/en/installing-development-tree.html and 
http://dev.mysql.com/doc/refman/5.5/en/installing-source-distribution.html 

Below are my steps, but I do have another instance of mysql running that I use
for work development, so I was trying to set this one up on a different
port.

$ mkdir mysql-repo 
$ cd mysql-repo/ 
$ bzr init-repo . 
$ bzr branch lp:mysql-server/5.5 5.5 
$ cd 5.5 
$ cmake . -DCMAKE_INSTALL_PREFIX=/usr/local/mysql-5.5 -DMYSQL_TCP_PORT=3310 
-DMYSQL_UNIX_ADDR=/tmp/mysql-devel.sock 
$ make VERBOSE=1 

The Make step fails at 95%, here's the last few lines: 

[ 95%] Building C object unittest/mysys/CMakeFiles/lf-t.dir/lf-t.c.o 
cd /Users/derekd/devel/mysql-repo/5.5/unittest/mysys && /usr/bin/gcc 
-DHAVE_CONFIG_H -Wall -O2 -g -DDBUG_OFF 
-I/Users/derekd/devel/mysql-repo/5.5/include 
-I/Users/derekd/devel/mysql-repo/5.5/sql 
-I/Users/derekd/devel/mysql-repo/5.5/regex 
-I/Users/derekd/devel/mysql-repo/5.5/extra/yassl/include 
-I/Users/derekd/devel/mysql-repo/5.5/unittest/mytap -o 
CMakeFiles/lf-t.dir/lf-t.c.o -c 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/lf-t.c 
Linking C executable lf-t 
cd /Users/derekd/devel/mysql-repo/5.5/unittest/mysys && /Applications/CMake 
2.8-5.app/Contents/bin/cmake -E cmake_link_script CMakeFiles/lf-t.dir/link.txt 
--verbose=1 
/usr/bin/gcc -Wall -O2 -g -DDBUG_OFF -Wl,-search_paths_first 
-Wl,-headerpad_max_install_names CMakeFiles/lf-t.dir/lf-t.c.o -o lf-t -lpthread 
../mytap/libmytap.a ../../mysys/libmysys.a ../../strings/libstrings.a 
../../dbug/libdbug.a ../../mysys/libmysys.a ../../dbug/libdbug.a 
../../strings/libstrings.a -lz -lpthread 
/Applications/CMake 2.8-5.app/Contents/bin/cmake -E cmake_progress_report 
/Users/derekd/devel/mysql-repo/5.5/CMakeFiles 
[ 95%] Built target lf-t 
make -f unittest/mysys/CMakeFiles/my_atomic-t.dir/build.make 
unittest/mysys/CMakeFiles/my_atomic-t.dir/depend 
cd /Users/derekd/devel/mysql-repo/5.5 && /Applications/CMake 
2.8-5.app/Contents/bin/cmake -E cmake_depends Unix Makefiles 
/Users/derekd/devel/mysql-repo/5.5 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys 
/Users/derekd/devel/mysql-repo/5.5 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/CMakeFiles/my_atomic-t.dir/DependInfo.cmake
 --color= 
Dependee 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/CMakeFiles/my_atomic-t.dir/DependInfo.cmake
 is newer than depender 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/CMakeFiles/my_atomic-t.dir/depend.internal.
 
Dependee 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/CMakeFiles/CMakeDirectoryInformation.cmake
 is newer than depender 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/CMakeFiles/my_atomic-t.dir/depend.internal.
 
Scanning dependencies of target my_atomic-t 
make -f unittest/mysys/CMakeFiles/my_atomic-t.dir/build.make 
unittest/mysys/CMakeFiles/my_atomic-t.dir/build 
/Applications/CMake 2.8-5.app/Contents/bin/cmake -E cmake_progress_report 
/Users/derekd/devel/mysql-repo/5.5/CMakeFiles 
[ 95%] Building C object 
unittest/mysys/CMakeFiles/my_atomic-t.dir/my_atomic-t.c.o 
cd /Users/derekd/devel/mysql-repo/5.5/unittest/mysys && /usr/bin/gcc 
-DHAVE_CONFIG_H -Wall -O2 -g -DDBUG_OFF 
-I/Users/derekd/devel/mysql-repo/5.5/include 
-I/Users/derekd/devel/mysql-repo/5.5/sql 
-I/Users/derekd/devel/mysql-repo/5.5/regex 
-I/Users/derekd/devel/mysql-repo/5.5/extra/yassl/include 
-I/Users/derekd/devel/mysql-repo/5.5/unittest/mytap -o 
CMakeFiles/my_atomic-t.dir/my_atomic-t.c.o -c 
/Users/derekd/devel/mysql-repo/5.5/unittest/mysys/my_atomic-t.c 
/var/folders/1q/0bfhzdp923vf_856z7dzsncmgn/T//ccROezHa.s:596:suffix or 
operands invalid for `add' 
make[2]: *** [unittest/mysys/CMakeFiles/my_atomic-t.dir/my_atomic-t.c.o] Error 
1 
make[1]: *** [unittest/mysys/CMakeFiles/my_atomic-t.dir/all] Error 2 
make: *** [all] Error 2 


My machine is: 

$ uname -a 
Darwin DDMac 11.1.0 Darwin Kernel Version 11.1.0: Tue Jul 26 16:07:11 PDT 2011; 
root:xnu-1699.22.81~1/RELEASE_X86_64 x86_64 


Thanks for any help.

P.S. I have posted this on the Builds/Binary Forums, so I apologize for the 
duplicate content.