Re: extract year-month-day-hour

2014-09-18 Thread Mihail Manolov
Try this:

LEFT('2014-09-17 12:22:16', 13)
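To roll that straight into the hourly usage report you describe below, a rough
sketch (the usage_log table and its user_id/bytes/logged_at columns are just
assumed names, not from your schema):

SELECT user_id,
       LEFT(logged_at, 13) AS usage_hour,   -- e.g. '2014-09-17 12'
       SUM(bytes)          AS bytes_used
FROM usage_log
GROUP BY user_id, usage_hour;

DATE_FORMAT(logged_at, '%Y-%m-%d %H') produces the same grouping key and is a
bit more explicit about the intent.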


Mihail Manolov
Senior Manager, Database
Liquidity Services, Inc.
1920 L Street NW
Washington DC 20036
202-467-6868 x3227 (Office)
202-256-9044 (Mobile)


mihail.mano...@liquidityservices.com
LiquidityServicesInc.com
 On Sep 18, 2014, at 18:31, Jopoy Solano m...@jopoy.com wrote:

 Hi!

 I want to say sorry in advance for my ignorance. I was wondering how I can
 extract year-month-day-hour in one go. For example:

 From 2014-09-17 12:22:16 to 2014-09-17 12

 The reason for this is that I'm trying to generate a report of how much data
 users have historically consumed each hour of each day.

 Thank you.

 Jopoy

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Mihail Manolov
You could jump from 5.0 directly to 5.5 and skip 5.1; I have, without any
issues. There are some configuration file changes, which you may want to
consider checking. I definitely recommend upgrading your development servers
first for extensive testing. Some queries _may_ run slower or not work at all,
and you may have to rearrange how you join tables in your queries.

The upgrade from 5.5 to 5.6 should be smoother, though.


On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:

 Great, thanks for the info. I guess the best way to do this is to take a
 spare server, set it up with our standard setup, and then start the
 upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6
 and test.
 
 
 
 
 
 
 
 
 On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
 akshay.suryavansh...@gmail.com wrote:
 Mike,
 
 5.6 is GA now, so it's a stable release. Also, you should not jump to 5.6
 directly, at least not from 5.0.

 There are many bug fixes and changes in 5.1, so you should consider going
 this way:
 
 5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)

 And further 5.5 -> 5.6 (again all slaves first and then the master)
 
 Hope this helps.
 
 Cheers!
 
 On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon kongfra...@gmail.com wrote:
 
 I have 1 master with many slaves, using the master only for inserts
 and the rest are readers.
 
 
 Is 5.6 stable?  Or better off to go to 5.5?
 
 If so, do I need to do it in steps, or can I go straight from 5.0 to 5.6?
 
 
 Any best practices and recommendations?
 
 Thanks
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6

2013-02-14 Thread Mihail Manolov
The ones that didn't work for me required table rearrangement in the query. 
MySQL 5.5 was very particular about the table join order.
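To illustrate the kind of rearrangement involved, a rough sketch with made-up
table names (not from the affected queries): STRAIGHT_JOIN tells MySQL to join
the tables in the order they are written, which is one way to pin down a join
order that the 5.5 optimizer picks badly on its own.

SELECT o.id, c.name
FROM orders AS o
STRAIGHT_JOIN customers AS c ON c.id = o.customer_id
WHERE o.created_at >= '2013-01-01';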

On Feb 14, 2013, at 6:11 PM, Rick James wrote:

 Singer, do you have some examples?
 
 -Original Message-
 From: Singer Wang [mailto:w...@singerwang.com]
 Sent: Thursday, February 14, 2013 2:59 PM
 To: Mihail Manolov
 Cc: Mike Franon; Akshay Suryavanshi; mysql@lists.mysql.com
 Subject: Re: Upgrading from mysql 5.0.90 to 5.5 or 5.6
 
 There are queries that work with 5.1/5.0 that do not work with 5.5; I
 would test extensively.
 
 S
 
 
 On Thu, Feb 14, 2013 at 5:22 PM, Mihail Manolov 
 mihail.mano...@liquidation.com wrote:
 
 You could jump from 5.0 directly to 5.5 and skip 5.1; I have, without
 any issues. There are some configuration file changes, which you may
 want to consider checking. I definitely recommend upgrading your
 development servers first for extensive testing. Some queries _may_ run
 slower or not work at all, and you may have to rearrange how you join
 tables in your queries.

 The upgrade from 5.5 to 5.6 should be smoother, though.
 
 
 On Feb 14, 2013, at 4:28 PM, Mike Franon wrote:
 
 Great, thanks for the info. I guess the best way to do this is to take a
 spare server, set it up with our standard setup, and then start the
 upgrade as you said, 5.0 -> 5.1 -> 5.5, test, and then upgrade to 5.6
 and test.
 
 
 
 
 
 
 
 
 On Thu, Feb 14, 2013 at 4:22 PM, Akshay Suryavanshi
 akshay.suryavansh...@gmail.com wrote:
 Mike,
 
 5.6 is GA now, so it's a stable release. Also, you should not jump to
 5.6 directly, at least not from 5.0.

 There are many bug fixes and changes in 5.1, so you should consider going
 this way:
 
 5.0 -> 5.1 -> 5.5 (all slaves first, and then the master)

 And further 5.5 -> 5.6 (again all slaves first and then the master)
 
 Hope this helps.
 
 Cheers!
 
 On Fri, Feb 15, 2013 at 2:38 AM, Mike Franon
 kongfra...@gmail.com
 wrote:
 
 I have 1 master with many slaves, using the master only for
 inserts and the rest are readers.
 
 
 Is 5.6 stable?  Or better off to go to 5.5?
 
 If so, do I need to do it in steps, or can I go straight from 5.0
 to 5.6?
 
 
 Any best practices and recommendations?
 
 Thanks
 
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Replication question: How to make a slave a master?

2012-08-24 Thread Mihail Manolov
Are you trying to promote a slave to be the new master and replace the current
master, or to create an intermediate slave? If it is the latter, all you need
to do is:

1. Stop the slave
2. Add log_slave_updates = 1 to the slave's config file
3. Copy the slave's files to the new slave(s)
4. Start your intermediate slave
5. Add a replication user on your intermediate slave and allow the new slaves
to replicate
6. Start your slave(s) of the intermediate slave
7. Issue the following on your new slaves (see the sketch below): CHANGE MASTER
TO master_log_file = 'mysql-bin.01', master_log_pos = 4;
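A sketch of steps 5 and 7, with made-up host names and credentials; take the
actual log file and position from SHOW MASTER STATUS on the intermediate slave:

-- On the intermediate slave (step 5): create a user the new slaves can use.
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.%' IDENTIFIED BY 'secret';

-- On each new slave (step 7): point it at the intermediate slave and start it.
CHANGE MASTER TO
    MASTER_HOST = 'intermediate-slave.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.01',
    MASTER_LOG_POS = 4;
START SLAVE;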

On Aug 24, 2012, at 11:25 AM, Richard Reina wrote:

 I have a couple of mysql database slaves and would like to make one of them
 be a master as well so that I can set another machine to replicate from it.
 Can anyone tell me how I should go about it or know of any howtos for this
 specific task?
 
 Thanks,
 
 Richard


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: GA download reverted back to 5.5.24?

2012-06-30 Thread Mihail Manolov
Noel,

I am really sorry for those high-profile sites which, in your scenario,
haven't tested their database in dev/test/stage before upgrading. Bugs happen.
Big organizations have slower communication.

Cheers,
Mihail

On Jun 30, 2012, at 0:16, Noel Butler noel.but...@ausics.net wrote:

 I wonder if you would have the same opinion about, say, your Operating System
 environment, Apache, PHP, or any mainstream server daemon - how about they pull
 the current version for a serious bug, but don't tell anyone...

 Oracle have been quick to announce new releases of MySQL, but failed to issue
 a notice saying "uhoh, you better not use it", instead putting a small
 notice, where, on a fricken manual page FFS. Who the hell reads that! And
 they say use version a which does not even exist. I'd hate to think of how
 many high profile sites are at risk of being screwed over by yet MORE Oracle
 incompetence.
 No one would think any less of them if they sent that notice; many would be
 appreciative, but to hide such a serious issue that was enough for them to
 withdraw and remove that version is outright despicable.
 
 
 
 
 
 On Fri, 2012-06-29 at 22:58 -0400, Govinda wrote:
 
  That was nice of oracle to announce this wasn't it ...(/sarcasm)
  
 
 I am not aligned with any side.. and I am also not known/qualified/respected 
 in this group enough to make much of a statement...  but:
 IMHO, In almost all matters, *appreciation* is the only approach that will 
 serve... let alone sustain happiness...  
 ...and especially when we consider what little we must give to have right to 
 use MySQL.
 
 Sure, desire for better communication/usability makes total sense.. but I am 
 just also observing/suggesting: please add (positively) to the atmosphere.. 
 for everyone's sake.  Just us humans under the hood.
 
 -Govinda
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How to quickly detect if there are any crashed tables

2012-05-10 Thread Mihail Manolov
You can enable check/recovery automatically by using myisam_recover. Look it 
up in the documentation.

There is no way to repair them faster, though.
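For reference, a minimal my.cnf sketch of that option; the exact spelling
depends on the server version (myisam_recover on older servers,
myisam-recover-options from 5.5.3 on), so check your manual first:

[mysqld]
# Check each MyISAM table when it is first opened and repair it if it was not
# closed properly; BACKUP keeps a copy of the data file if the repair changes it.
myisam-recover = BACKUP,FORCE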

On May 10, 2012, at 2:40 PM, Adrian Fita wrote:

 Hi.
 
 I have several hundred databases with MyISAM tables on a server, and
 after a power outage I have to manually repair them with mysqlcheck.
 Sometimes I forget, and it's not nice. So I am thinking of making a
 Nagios plugin to check whether there are any crashed tables that need repairing.
 
 I tried using mysqlcheck --all-databases --quick --check
 --check-only-changed --fast, but it still took around 2h to run.
 
 I also tried using myisamchk with find /var/lib/mysql -name '*.MYI'
 -exec myisamchk --silent --fast '{}' \;, but after 15 minutes it still
 hadn't finished.
 
 I am looking for a solution that will give me an answer in at most 3-4
 minutes.
 
 I appreciate any suggestions you might have.
 
 Thanks,
 -- 
 Fita Adrian
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: How to quickly detect if there are any crashed tables

2012-05-10 Thread Mihail Manolov
AFAIK the tables will be locked one by one until checked/repaired.

On May 10, 2012, at 3:07 PM, Adrian Fita wrote:

 On 10/05/12 21:51, Mihail Manolov wrote:
 You can enable check/recovery automatically by using
 myisam_recover. Look it up in the documentation.
 
 There is no way to repair them faster, though.
 
 Thanks for the quick response. This definitely looks like a usable
 solution. Do you know whether, during the auto-repair at startup, the server
 with all the tables will be available and answering queries? Or will
 it make the tables available as it progresses with the repair?
 
 -- 
 Fita Adrian
 
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Group_Concat help...

2012-03-19 Thread Mihail Manolov
Try this

SELECT `user_id`, `login_ip`,
COUNT(`id`) AS 'connections'
FROM `mysql_test`
WHERE `login_datetime` BETWEEN '2012-03-19 00:00:00' AND '2012-03-19
23:59:59'
GROUP BY `user_id`, `login_ip`
HAVING COUNT(`id`) > 2
ORDER BY COUNT(`id`) DESC 
LIMIT 0, 15;

On Mar 19, 2012, at 12:06 PM, Steven Staples wrote:

 SELECT `user_id`, GROUP_CONCAT(DISTINCT `login_ip`) AS 'login_ips',
 COUNT(`id`) AS 'connections'
 FROM `mysql_test`
 WHERE `login_datetime` BETWEEN '2012-03-19 00:00:00' AND '2012-03-19
 23:59:59'
 GROUP BY `user_id`
 HAVING COUNT(`id`) > 2
 ORDER BY COUNT(`id`) DESC 
 LIMIT 0, 15;


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Query Optimization

2011-09-08 Thread Mihail Manolov
If you're running version 5.1+ you may want to take a look at the table
partitioning options you have.
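A rough sketch of what that could look like here, with a trimmed-down,
hypothetical version of the connection table from this thread; range
partitioning on the open date lets the server prune whole partitions when a
query is bounded on open_dt:

CREATE TABLE firewall_connections (
  open_dt  DATETIME NOT NULL,
  close_dt DATETIME NOT NULL,
  rcvd     INT UNSIGNED NOT NULL,
  KEY ndx_open_close_dt (open_dt, close_dt)
) ENGINE=MyISAM
PARTITION BY RANGE (TO_DAYS(open_dt)) (
  PARTITION p201107 VALUES LESS THAN (TO_DAYS('2011-08-01')),
  PARTITION p201108 VALUES LESS THAN (TO_DAYS('2011-09-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);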

On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:

 Thanks for the reply Andy.  Unfortunately the users will be selecting varying 
 date ranges and new data is constantly coming in, so I am not sure how I 
 could archive/cache the necessary data that would be any more efficient than 
 simply using the database directly.
 
 
 On 09/08/2011 02:16 PM, Andrew Moore wrote:
 Thinking outside the query, is there any archiving that could happen to make
 your large tables kinder in the range scan?
 
 Andy
 
 On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelpsbphe...@gls.com  wrote:
 
 On 09/01/2011 01:32 PM, Brandon Phelps wrote:
 
 On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
 
 On 9/1/2011 09:42, Brandon Phelps wrote:
 
 On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
 ...
 
 WHERE
 (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
 AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 12:36:53')

 In that case your logic here simplifies to:
 WHERE
 open_dt >= '2011-08-30 00:00:00'
 AND
 close_dt <= '2011-08-30 12:36:53'
 
 Now add an index over open_dt and close_dt and see what happens.
 
 Jochem
 
 Jochem,
 
 I can't really use your WHERE logic because I also need to retrieve
 results where the open_dt time is out of the range specified. For
 example, a very large file download might span multiple days, so given
 your logic, if the connection was started 2 days ago and I want to pull 1
 day's worth of connections, I would miss that entry. Basically I want to
 SELECT all of the records that were opened OR closed during the
 specified time period, ie. if any activity happened between my start and
 end dates, I need to see that record.
 
 Any other ideas?
 
 
 I believe Jochem was on the right track but he got his dates reversed.
 
 Let's try a little ASCII art to show the situation. I will set up a query
 window with two markers (s) and (e). Events will be marked by |---| markers
 showing their durations.

 a)         (s)            (e)
 b) |---|
 c)     |-------|
 d)             |---|
 e)     |--------------------------|
 f)                 |------------|
 g)                              |---|
 
 To describe these situations:
 a) is the window for which you want to query (s) is the starting time and
 (e) is the ending time for the date range you are interested in.
 b) the event starts and stops before your window exists. It won't be part
 of your results.
 c) the event starts before the window but ends within the window -
 include this
 d) the event starts and ends within the window - include this
 e) the event starts before the window and ends after the window - include
 this
 f) the event starts inside the window but ends beyond the window -
 include this.
 g) the event starts and ends beyond the window - exclude this.
 
 In order to get every event in the range of c-f, here is what you need
 for a WHERE clause
 
 WHERE start <= (ending time) AND end >= (starting time)
 
 Try that and let us know the results.
 
 
 Thanks Jochem and Shawn, however the following two queries result in the
 exact same EXPLAIN output: (I hope the tables don't wrap too early for you)
 
 Old method:
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
 (open_dt >= '2011-08-31 09:53:31' OR close_dt >= '2011-08-31 09:53:31')
 AND (open_dt <= '2011-09-01 09:53:31' OR close_dt <= '2011-09-01 09:53:31')
 ORDER BY rcvd DESC
 LIMIT 0, 10;
 
 New method with BTREE index on open_dt, close_dt (index name is
 ndx_open_close_dt):
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
 open_dt <= '2011-09-01 09:53:31' AND close_dt >= '2011-08-31 09:53:31'
 ORDER BY rcvd DESC
 LIMIT 0, 10;
 
 EXPLAIN output for old method:
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 | 1 | SIMPLE | sc | index | open_dt,ndx_open_close_dt | ndx_rcvd | 4 | NULL | 10 | Using where |
 | 1 | SIMPLE | spm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.src_port | 1 | |
 | 1 | SIMPLE | dpm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.dst_port | 1 | |
 

Re: Query Optimization

2011-09-08 Thread Mihail Manolov
How about:

SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
dpm.desc AS dst_port_desc
FROM sonicwall_connections AS sc
LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
WHERE open_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58'

UNION

SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
dpm.desc AS dst_port_desc
FROM sonicwall_connections AS sc
LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
WHERE close_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58'



On Sep 8, 2011, at 3:25 PM, Brandon Phelps wrote:

 Thanks for the idea Derek, however given the following query my EXPLAIN 
 output is identical:
 
 SELECT
   sc.open_dt,
   sc.close_dt,
   sc.protocol,
   INET_NTOA(sc.src_address) AS src_address,
   sc.src_port,
   INET_NTOA(sc.dst_address) AS dst_address,
   sc.dst_port,
   sc.sent,
   sc.rcvd,
   spm.desc AS src_port_desc,
   dpm.desc AS dst_port_desc
 FROM sonicwall_connections AS sc
   LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
   LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE ('2011-09-07 13:18:58' <= open_dt <= '2011-09-08 13:18:58') OR 
 ('2011-09-07 13:18:58' <= close_dt <= '2011-09-08 13:18:58');
 
 
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 | 1 | SIMPLE | sc | ALL | NULL | NULL | NULL | NULL | 32393330 | Using where |
 | 1 | SIMPLE | spm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.src_port | 1 | |
 | 1 | SIMPLE | dpm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.dst_port | 1 | |
 
 I did create indexes on open_dt and close_dt (2 separate indexes).
 
 
 
 On 09/08/2011 02:55 PM, Derek Downey wrote:
 Correct me if I'm wrong. You're wanting to get all records that have an 
 open_date or a close_date between two times.
 
 If that's correct, you might be able to get an index_merge by doing a query 
 like:
 
 WHERE ((starting time) <= open_dt <= (ending time)) OR ((starting 
 time) <= close_dt <= (ending time))
 
 and creating two indexes (one on 'open_dt' and the other on 'close_dt')
 
 http://dev.mysql.com/doc/refman/5.1/en/index-merge-optimization.html
 
 Regards,
 Derek
 
 On Sep 8, 2011, at 2:50 PM, Brandon Phelps wrote:
 
 Andy,
 
 The queries take minutes to run.  MySQL is 5.1.54 and it's running on 
 Ubuntu server 11.04.  Unfortunately the machine only has 2GB of RAM but no 
 other major daemons are running on the machine.  We are running RAID 1 
 (mirroring) with 1TB drives.  The tables in question here are all MyISAM.  
 When running with the LIMIT 10 my EXPLAIN is:
 
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 | 1 | SIMPLE | sc | range | ndx_open_close_rcvd | ndx_open_close_rcvd | 8 | NULL | 32393316 | Using where; Using filesort |
 | 1 | SIMPLE | spm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.src_port | 1 | |
 | 1 | SIMPLE | dpm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.dst_port | 1 | |
 
 When I remove the LIMIT 10 I get:
 
 

Re: Query Optimization

2011-09-08 Thread Mihail Manolov
From the manual: The default behavior for UNION is that duplicate rows are 
removed from the result.

On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:

 Mihail,
 
 Thanks so much!  I modified your example to include the proper ORDER BY and 
 LIMIT clauses and this, so far, is running super fast (0.0007 seconds).  
 Question: if a record's open_dt is within the range AND the close_dt is 
 within the range as well, will the UNION output the record twice?  If so, is 
 there any way to prevent that?
 
 (SELECT
   sc.open_dt,
   sc.close_dt,
   sc.protocol,
   INET_NTOA(sc.src_address) AS src_address,
   sc.src_port,
   INET_NTOA(sc.dst_address) AS dst_address,
   sc.dst_port,
   sc.sent,
   sc.rcvd,
   spm.desc AS src_port_desc,
   dpm.desc AS dst_port_desc
 FROM sonicwall_connections AS sc
   LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
   LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE open_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58')
 UNION
 (SELECT
   sc.open_dt,
   sc.close_dt,
   sc.protocol,
   INET_NTOA(sc.src_address) AS src_address,
   sc.src_port,
   INET_NTOA(sc.dst_address) AS dst_address,
   sc.dst_port,
   sc.sent,
   sc.rcvd,
   spm.desc AS src_port_desc,
   dpm.desc AS dst_port_desc
 FROM sonicwall_connections AS sc
   LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
   LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE close_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58')
 ORDER BY rcvd DESC LIMIT 10;
 
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 | 1 | PRIMARY | sc | range | open_dt,ndx_open_close_rcvd,ndx_open_close | open_dt | 8 | NULL | 1057 | Using where |
 | 1 | PRIMARY | spm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.src_port | 1 | |
 | 1 | PRIMARY | dpm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.dst_port | 1 | |
 | 2 | UNION | sc | range | ndx_close_dt | ndx_close_dt | 8 | NULL | 1131 | Using where |
 | 2 | UNION | spm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.src_port | 1 | |
 | 2 | UNION | dpm | eq_ref | PRIMARY | PRIMARY | 2 | syslog.sc.dst_port | 1 | |
 | NULL | UNION RESULT | <union1,2> | ALL | NULL | NULL | NULL | NULL | NULL | Using filesort |
 
 
 
 On 09/08/2011 03:45 PM, Mihail Manolov wrote:
 How about:
 
 SELECT
  sc.open_dt,
  sc.close_dt,
  sc.protocol,
  INET_NTOA(sc.src_address) AS src_address,
  sc.src_port,
  INET_NTOA(sc.dst_address) AS dst_address,
  sc.dst_port,
  sc.sent,
  sc.rcvd,
  spm.desc AS src_port_desc,
  dpm.desc AS dst_port_desc
 FROM sonicwall_connections AS sc
  LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
  LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE open_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58'
 
 UNION
 
 SELECT
  sc.open_dt,
  sc.close_dt,
  sc.protocol,
  INET_NTOA(sc.src_address) AS src_address,
  sc.src_port,
  INET_NTOA(sc.dst_address) AS dst_address,
  sc.dst_port,
  sc.sent,
  sc.rcvd,
  spm.desc AS src_port_desc,
  dpm.desc AS dst_port_desc
 FROM sonicwall_connections AS sc
  LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
  LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE close_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58'
 
 
 
 On Sep 8, 2011, at 3:25 PM, Brandon Phelps wrote:
 
 Thanks for the idea Derek, however given the following query my EXPLAIN 
 output is identical:
 
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM

Re: optimizing query

2011-01-18 Thread Mihail Manolov
I concur. In addition to the suggested index I would add a new column to the
articles table called body_length, which would be updated every time the body
column is updated. Add that column to the composite index mentioned below.

This should speed up the query a lot.
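A sketch of the combined idea; the column and index names are made up, and the
application (or triggers, where available) has to keep the derived columns in
sync with created_at and body:

ALTER TABLE articles
  ADD COLUMN month_created TINYINT UNSIGNED NOT NULL,
  ADD COLUMN day_created   TINYINT UNSIGNED NOT NULL,
  ADD COLUMN body_length   INT UNSIGNED NOT NULL,
  ADD INDEX idx_month_day_length (month_created, day_created, body_length);

-- Backfill once; after that, set all three columns on every INSERT/UPDATE.
UPDATE articles
SET month_created = MONTH(created_at),
    day_created   = DAY(created_at),
    body_length   = LENGTH(body);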

Cheers,
Mihail

On Jan 18, 2011, at 13:03, Michael Dykman mdyk...@gmail.com wrote:

 Hi Simon,
 
 once you apply functions to a field, an index on that field is pretty
 much useless.  For this particular query, I would be tempted to create
 additional fields to store the values of MONTH(articles.created_at)
 and DAY(articles.created_at).  Create an index on (month_created,
 day_created)  You could just sort by articles.created_at; no need for
 the YEAR function, the result will be the same given your other
 selectors.
 
 Given those adjustments, the query looks righteous enough.
 
 - michael dykman
 
 On Tue, Jan 18, 2011 at 12:22 PM, Simon Wilkinson
 simon.wilkin...@gmail.com wrote:
 Hi,
 
 I am trying to optimize the following query:
 
 SELECT articles.* FROM articles INNER JOIN newsletters ON
 articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
 newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) = '12'
 AND DAY(articles.created_at) = '5' ORDER BY YEAR(articles.created_at),
 LENGTH(articles.body);
 
 I am trying to retrieve all the articles created on a specific day of a
 specific month that belong to a user, ordered by the oldest and then longest
 article.
 
 I have run explain on the query, and get the following:
 
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 | 1 | SIMPLE | users | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort |
 | 1 | SIMPLE | newsletters | ref | PRIMARY,index_newsletters_on_user_id | index_newsletters_on_user_id | 4 | const | 1 | Using index |
 | 1 | SIMPLE | articles | ref | index_articles_on_newsletter_id,index_articles_on_newsletter_id_and_created_at | index_articles_on_newsletter_id_and_created_at | 4 | my_db.newsletters.id | 3 | Using where |
 3 rows in set (0.00 sec)
 
 This seems pretty decent, and does perform pretty well for some users (~0.5
 - 1 sec), but for some users (seemingly those with large numbers of
 articles) the query can take 20 - 30 seconds to run.  This seems really slow
 to me.  I tried adding in the index
 'index_articles_on_newsletter_id_and_created_at' but the performance doesn't
 seem to be any different then when it uses just the
 'index_articles_on_newsletter_id' index.  I think this might be because of
 the functions I am using on the created_at column to get the day and month
 from it, making an index on created_at useless in this instance.
 
 Running both an 'optimize table entries' and 'analyze table entries' also
 didn't seem to have any real impact on the performance.
 
 I was wondering if anybody had any suggestions for what else I might be able
 to try, or if there is a better way to search on dates in this manner.  Any
 ideas would be greatly appreciated.
 
 Thanks,
 
 Simon
 
 
 
 
 -- 
  - michael dykman
  - mdyk...@gmail.com
 
  May the Source be with you.
 
 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Upgrading

2009-03-09 Thread Mihail Manolov
I would strongly suggest logging all your 4.0 queries for at least 24
hours and then running them on your new 5.x server to avoid any
surprises, such as incompatible queries.
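One way to capture them, as a sketch (the path is just an example; on 4.0 the
general query log can only be switched on at startup, via the log option or
--log on the command line):

[mysqld]
# Write every statement the server receives to this file.
log = /var/log/mysql/general-query.log

The resulting file can then be replayed with the mysql client against the 5.x
test server.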


Good luck!

Mihail

On Mar 9, 2009, at 1:42 PM, Matthew Stuart wrote:

Hi all, I am on... wait for it... version 4.0.25 and I want to  
upgrade to MySQL 5.x


Is there anything special I should do in order to upgrade? Do I need
to uninstall v4, or can I just download the most current version and
double-click to upgrade? I am not particularly hardcore MySQL
minded, and quite honestly, "if it ain't broke don't fix it" is a good
mantra of mine... but I need to move on now, as I have received an
SQL file that is v5 compatible but not v4 compatible.





Once I have it up and running I'll be fine, it's just that I am  
nervous about upgrading and consequently breaking it, and at that  
point, I'll be struggling to put it right.


Any advice on how I can best do this / best practices etc will be  
very much appreciated.


Many thanks.

Mat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mmano...@liquidation.com





Re: Migration from 32-bit to 64-bit MySQL

2008-04-29 Thread Mihail Manolov

Hi all,

I feel obliged to report on my success with migration from 32bit to  
64bit platform.


Last Sunday I braced myself and migrated 3 servers (one master and two  
slaves) with approximately 100GB data each by simply rsyncing the data  
files. It took about 1 hour total downtime.


Everything looks great so far. I ran lots of tests, especially on
currency columns, and all tests were successful.


It is fair to note that I don't have any FLOAT columns in my databases.

I have mixed table environment (MyISAM and InnoDB tables). Running  
MySQL 4.1.24, Linux binaries.


Let me know if you have any questions.

Mihail

On Apr 25, 2008, at 12:48 PM, Mihail Manolov wrote:


I am in the process of planning a 32-to-64 migration as well. I googled the
following, but it may only be relevant to a specific application:

It should be noted that, when switching between 32bit and 64bit server
using
the same data-files, all the current major storage engines
(with one exception) are architecture neutral, both in endian-ness and
bit size.
You should be  able to copy a 64-bit or 32-bit DB either way,
and even between platforms without problems for MyISAM, InnoDB and  
NDB.
For other  engines it doesn't matter (CSV, MEMORY, MERGE, BLACKHOLE  
and

FEDERATED) either the engine doesn't have a disk storage format or
the format they use is text based (CSV) or based on
MyISAM (MERGE; and therefore not an issue). The only exception is
Falcon, which is only available in MySQL 6.0.

It is generally recommended from MySQL that a dump and reload of
data for absolute compatibility for any engine and major migration.
The googled link: http://wikis.sun.com/display/WebStack/MySQL64bitARC

Any comments on this?


Mihail

On Apr 25, 2008, at 12:03 PM, Olaf Stein wrote:

 Probably not

 AFAIK it should work in theory if you have no floating point columns
 but I
 would not try it.
 Why can't you take a dump? You can do it table by table; you will
 have some downtime though.

 One option might be to use a 64bit slave and make that the master
 and then
 add more 64 slaves.


 On 4/25/08 11:57 AM, Mike [EMAIL PROTECTED] wrote:

  On Fri, Apr 25, 2008 at 11:45 AM, Olaf Stein
  [EMAIL PROTECTED] wrote:
  As long as you use dumps to restore your databases on the new
 64bit system
  (instead of the binary files) you should be fine
 
  Olaf
 
 I have so much data that we can't take a mysqldump of our database. The
 directory tarred is about 18GB.  I just use the other method of just copying
 over the data directory.  Do you think the data will be intact if I just copy
 over the data directory?
 







Re: Migration from 32-bit to 64-bit MySQL

2008-04-25 Thread Mihail Manolov
I am in the process of planning a 32-to-64 migration as well. I googled the
following, but it may only be relevant to a specific application:


It should be noted that, when switching between 32bit and 64bit server  
using

the same data-files, all the current major storage engines
(with one exception) are architecture neutral, both in endian-ness and  
bit size.

You should be  able to copy a 64-bit or 32-bit DB either way,
and even between platforms without problems for MyISAM, InnoDB and NDB.
For other  engines it doesn't matter (CSV, MEMORY, MERGE, BLACKHOLE and
FEDERATED) either the engine doesn't have a disk storage format or
the format they use is text based (CSV) or based on
MyISAM (MERGE; and therefore not an issue). The only exception is
Falcon, which is only available in MySQL 6.0.

It is generally recommended from MySQL that a dump and reload of
data for absolute compatibility for any engine and major migration.
The googled link: http://wikis.sun.com/display/WebStack/MySQL64bitARC

Any comments on this?


Mihail

On Apr 25, 2008, at 12:03 PM, Olaf Stein wrote:


Probably not

AFAIK it should work in theory if you have no floating point columns  
but I

would not try it.
Why can't you take a dump? You can do it table by table; you will
have some downtime though.

One option might be to use a 64bit slave and make that the master  
and then

add more 64 slaves.


On 4/25/08 11:57 AM, Mike [EMAIL PROTECTED] wrote:

 On Fri, Apr 25, 2008 at 11:45 AM, Olaf Stein
 [EMAIL PROTECTED] wrote:
 As long as you use dumps to restore your databases on the new  
64bit system

 (instead of the binary files) you should be fine

 Olaf

 I have so much data that we can't take a mysqldump of our database. The
 directory tarred is about 18GB.  I just use the other method of just copying
 over the data directory.  Do you think the data will be intact if I just copy
 over the data directory?






Re: Disable engines

2005-10-28 Thread Mihail Manolov

Flavio Gonçalves wrote:

I want to permit only MyISAM tables to be created in my server. How can I
disable all the other engines?


Put:

skip-innodb
skip-bdb

in your my.cnf file, [mysqld] section.
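After a restart you can double-check which engines are left; a quick sketch
(on servers that support SHOW ENGINES, the skipped ones show up as DISABLED or
NO):

SHOW ENGINES;
SHOW VARIABLES LIKE 'have_%';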


Mihail


--
Mihail Manolov
Government Liquidation, LLC
Special Projects Leader
202 558 6227


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



ARCHIVE storage engine and INSERT DELAY in MySQL 4.1

2005-10-19 Thread Mihail Manolov

Apparently ARCHIVE tables do not support INSERT DELAYED. Why?
In the documentation 
(http://dev.mysql.com/doc/refman/4.1/en/archive-storage-engine.html) it 
says that it should be possible.



Example of the error that I am getting:

INSERT DELAYED INTO audit_log VALUES 
('db','user','549220','address_id','757812','5214');


ERROR 1031 (HY000): Table storage engine for 'audit_log' doesn't have 
this option



Mihail



--
Mihail Manolov
Government Liquidation, LLC
Special Projects Leader
202 558 6227


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ARCHIVE storage engine and INSERT DELAY in MySQL 4.1

2005-10-19 Thread Mihail Manolov

Jeff Smelser wrote:

On Wednesday 19 October 2005 01:18 pm, Mihail Manolov wrote:


Apparently ARCHIVE tables do not support INSERT DELAYED. Why?
In documentation
(http://dev.mysql.com/doc/refman/4.1/en/archive-storage-engine.html) it
says that it should be possible.


Example of the error that I am getting:

INSERT DELAYED INTO audit_log VALUES
('db','user','549220','address_id','757812','5214');

ERROR 1031 (HY000): Table storage engine for 'audit_log' doesn't have
this option



do show create table audit_log, and post, plz.

Jeff


There you go:

CREATE TABLE `audit_log` (
  `db_host` varchar(64) NOT NULL default '',
  `table_name` varchar(255) NOT NULL default '',
  `record_id` int(11) NOT NULL default '0',
  `field_name` varchar(255) NOT NULL default '',
  `field_value` varchar(255) NOT NULL default '',
  `user` varchar(255) NOT NULL default ''
) ENGINE=ARCHIVE DEFAULT CHARSET=latin1


Mihail


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ARCHIVE storage engine and INSERT DELAY in MySQL 4.1

2005-10-19 Thread Mihail Manolov

Jeff Smelser wrote:


I would highly suspect this is a bug.. I would submit one.. unless someone 
else knows better..


Not really sure why you need delayed; archive is supposed to be much 
faster on inserts than even MyISAM.


Jeff


Thanks Jeff!

It's the way our code is written, and I just changed the table type and 
began getting this error. We have lots of data to insert into this 
table, therefore the optimal option is to use DELAYED and insert them in 
blocks.


Not sure why they say that ARCHIVE storage engine is a new feature in 5.0?


Mihail


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How to sort Query Browser's connection list?

2005-09-30 Thread Mihail Manolov

Hi all,

It's not a big deal, but I can't seem to find how to sort or order my 
connection entries in Options->Connections view tree in MySQL Query 
Browser. (See attached screenshot.)


Any ideas?


Thanks!

Mihail


--
Mihail Manolov
Government Liquidation, LLC
Special Projects Leader
202 558 6227


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query help - Joining adjacent rows?

2005-06-21 Thread Mihail Manolov

Will this work?

GROUP BY maxtime, user
ORDER BY maxtime DESC

comex wrote:


I have a table:
create table example(time datetime, username varchar(255));

time                username
2005-06-21 15:58:02 user1
2005-06-21 14:58:02 user1
2005-06-21 11:57:51 user2
2005-06-21 10:57:51 user1
2005-06-21 09:57:51 user1

The query:
select COUNT(*), username, MAX(time) as maxtime from example group by
username order by maxtime desc;

groups by username, and returns:

COUNT(*)    username    maxtime
4   user1   2005-06-21 15:58:02
1   user2   2005-06-21 11:57:51

I want it, however, to return:

COUNT(*)    username    maxtime
2   user1   2005-06-21 15:58:02
1   user2   2005-06-21 11:57:51
2   user1   2005-06-21 10:57:51

That is, do not group entries by the same user together if another
user visited in between.  Is this possible?

 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mysqldump: Got errno 5 on write

2005-03-22 Thread Mihail Manolov
Have you seen this before:
mysqldump: Got errno 5 on write
I have started getting this error recently in my email inbox - the message is 
the result of my daily backup cron job.
Ideas?
rgrdz,
Mihail Manolov


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Getting weird error when updating table

2005-03-07 Thread Mihail Manolov
Check the size of the partition that contains your /tmp directory.
Just in case, symlink it to a bigger partition. Your temporary table may 
be bigger than the available space in /tmp.
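An alternative to the symlink is to point MySQL's temporary directory at a
bigger filesystem and restart the server; a sketch, with an example path:

[mysqld]
tmpdir = /bigdisk/mysql-tmp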

Luck,
Mihail
Homam S.A. wrote:
Whenever I try to update an ISAM table (through a join
with another small lookup table, also ISAM), I get the
following error:
Error Code : 1114
The table '#sql_910_0' is full
I read the following in the documentation:
http://dev.mysql.com/doc/mysql/en/full-table.html
But none of the reasons listed apply. It seems that
#sql_910_0 is some temp table that MySQL is using to
temporarily spool intermediate results. But it doesn't
exist anywhere in my file system.
Have you guys gotten such an error?

	
		

 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


MySQL Privileges - table privileges question

2004-12-02 Thread Mihail Manolov
Greetings,
I am sorry if I am asking a question that has already been answered 
somewhere. If it was and you know where - please let me know.

I am trying to give access to a user, who should have access to a 
specific database with over 200 tables and should not have access to one 
or two of them.

Reading MySQL's manual only suggests that I have to add ALL tables that 
I want the user to have access to into tables_priv and exclude the 
ones that the user should not have access to.

Any better solutions than this one?
Thanks!
Mihail Manolov
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL/InnoDB-4.0.20 is released

2004-05-18 Thread Mihail Manolov
Hmm... I didn't have to upgrade our DBD drivers when we moved from
3.23.57 to 4.0.18. Strange you had to. Do  you remember your old DBD
driver's version? Or was that only Win32 problem?

- Original Message - 
From: Mark [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED];
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, May 18, 2004 2:21 PM
Subject: Re: MySQL/InnoDB-4.0.20 is released


 Heikki Tuuri wrote:

  Release 4.0.20 is mainly a bugfix release, but there are also
some
  important functional changes. Release 4.0.19 was completely
skipped
  over because Bug #3596 might have caused segmentation faults on
some
  platforms. The changelog below lists all the changes since
4.0.18.

 Will that require an upgrade of the Perl DBD drivers as well? (like
the one I did when I moved from 3.23.58 to 4.0.18). I really hope not.
:)

 - Mark


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: upgrading a server to latest MySQL

2004-05-13 Thread Mihail Manolov
Piece of cake :)
http://dev.mysql.com/doc/mysql/en/Upgrading-from-3.23.html

I've recently upgraded our servers from 3.23.57 to 4.0.18. No
problems at all.

Luck!

- Original Message - 
From: Joe Harkins [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, May 13, 2004 2:23 PM
Subject: upgrading a server to latest MySQL


 I have MySQL 3.23.41 running on an Apache server with Red Hat.
It's hosted at EV1 (formerly Rackshack), but they do not provide any
support for the server.

 I'd like to upgrade to the latest MySQL. Does anyone know of a
bulletproof, step-by-step, online guide to doing this? Perhaps a
reasonably priced resource that will do it?

 Thank you


 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Select month from Date

2004-05-10 Thread Mihail Manolov
Is this what you're looking for?

SELECT id, datecol, text FROM table WHERE MONTH(datecol)
=$mySelectedMonth;

- Original Message - 
From: H Bartel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, May 10, 2004 11:34 AM
Subject: Select month from Date


 Hi,

 I am trying to start using the mySQL Date functions instead of the
unix
 timestamp from PHP. I have read the man page several times now,
but
 still haven't come up with a solution for this simple query:

 SELECT id, datecol, text FROM table WHERE month-in-date-col =
 $mySelectedMonth;

 This shouldn't be too hard, I guess, but it seems I'm missing
something
 here? Any help greatly appreciated.

 Regards,
 Holger



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Date Question 2

2004-05-10 Thread Mihail Manolov
There are different ways to accomplish this.

You can use either:
SELECT id, datecol, text FROM table WHERE YEAR(datecol) = 2004 AND
MONTH(datecol) =$mySelectedMonth;

or:
SELECT id, datecol, text FROM table WHERE LEFT(datecol,7) =
'2004-$mySelectedMonth';

As for the formatted date... use the DATE_FORMAT function. Monday,
10.05.2004 should be something like DATE_FORMAT(datecol, '%W
%d.%c.%Y').

Check out this page:
http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html for
more information.
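Putting the two together, a sketch that keeps the placeholder table/column
names and the $mySelectedMonth variable from your message:

SELECT id,
       DATE_FORMAT(datecol, '%W, %d.%m.%Y') AS pretty_date,  -- e.g. Monday, 10.05.2004
       text
FROM table
WHERE YEAR(datecol) = 2004
  AND MONTH(datecol) = $mySelectedMonth;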


- Original Message - 
From: H Bartel [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, May 10, 2004 1:56 PM
Subject: Date Question 2


 Hi again,

 I got the MONTH() selection thing working now, but it seems like
I'm not
 getting the hang of the concept with the sql date..

 With PHP and a UNIX timestamp I would just insert a timestamp, then
 select it and, after selection, go ahead and manipulate it to fit my
 needs, like extracting day names, months, etc.

 With the sql date it seems that all manipulation with dates has to
be
 done during selection. Which would make sense somehow, since it's
a
 query language.. I'm just wondering, why I should use readable
dates, if
 it's so much harder to get where I need to, instead of sticking to
unix
 timestamps and manipulate these with PHP functions?

 What I want to do is:

 Select every row from the current month and year with:

 SELECT id, datecol, text FROM table WHERE MONTH(datecol)
 =$mySelectedMonth;

 then output the different parts of the date like: Monday,
10.05.2004

 or do I have to get all this within the query?

 Thanks for your patience and time reading this rather long post
and
 maybe offering some enlightening explanation?

 Regards,
 Holger

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL User Conference - Session Presentations?

2004-04-28 Thread Mihail Manolov
Hello,

Does anybody know if the User Conference presentations were published
somewhere on the web? I was told that it would be done shortly after
the conference, and it is now more than a week since the conference
ended...

Thank you,


Mihail Manolov
___
Solutions Architect
Government Liquidation, LLC - Your Direct Source for Government
Surplus
2131 K Street NW, 4th Floor
Washington, DC  20037
Telephone:  1 (202) 467-6868 x 227
Fax Number: 1 (202) 467-5475
Cell Phone: 1 (202) 256-9044
[EMAIL PROTECTED]
www.govliquidation.com
www.liquidityservicesinc.com

sql query server



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: granting privileges using wildcards

2003-03-26 Thread Mihail Manolov
Is there a reason for not using tables_priv table for this purpose?


Mihail

- Original Message - 
From: Dimitar Haralanov [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, March 26, 2003 2:29 PM
Subject: Re: granting privileges using wildcards

On Wed, 26 Mar 2003 13:14:09 -0600
Black, Kelly W [PCS] wrote:

I think this might do what you want, but then you will be required
to log
in with the -p syntax...

GRANT SELECT on *.* TO yourlogin@'%' IDENTIFIED BY somepassword;

FLUSH PRIVILEGES;

mysql -u youruserid -p -h hostname dbasename

 Well, as far as I know SQL and MySQL, this will grant the select privilege to
user 'yourlogin' from any host on any database and any table.

 What I wanted is for the user to have select privileges on tables
'table_*' from database db.
 Basically, is something like the following possible?

 GRANT SELECT ON db.table_% TO user@'%' IDENTIFIED BY 'pass';

 Thanx
  -- Mitko



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: db dump question

2002-09-26 Thread Mihail Manolov

mysql < datadump.sql

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, September 26, 2002 4:44 PM
Subject: db dump question


 I need to copy a database and all its tables and contents to a backup
 server. The server
 they live on is at an ISP, and the server I want to back them up on is in my
 office in another town.
 I can access the ISP's server via phpMyAdmin and dump the data to a .sql
 file, but then
 what do I do to recreate the database and all the tables on my local mysql
 server? What
 do I look for in the manual?

 --
 Chip

 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: SQL question

2002-09-26 Thread Mihail Manolov

Try:

SELECT fieldname FROM table ORDER BY 0+fieldname;


Hope it helps.


Mihail

- Original Message -
From: John Almberg [EMAIL PROTECTED]
To: Mysql [EMAIL PROTECTED]
Sent: Thursday, September 26, 2002 8:04 PM
Subject: SQL question


 I'm trying to sort a table on a character-type field that contains mostly
 numbers. This field always contains either a number or a number followed
by
 a character. Like '57' or '57a'.

 I'd like to sort the table *numerically* on this field, not
*alphabetically*
 on this field. That is, I'd like the table to be sorted like:

 1 ...
 2 ...
 2a ...
 3 ...
 4d ...

 NOT like:

 1 ...
 11 ...
 111a ...
 2a ...
 22 ...

 See what I mean? This is a common problem, I think, when you sort an
 character type field that contains numbers. The sort comes out all wrong.

 Is there anyway I can achieve this sort using SQL? The target server is
 running 3.22.32. Any ideas greatly appreciated!

 -- John



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: mysqlhotcopy problem - Segmentation fault

2002-08-29 Thread Mihail Manolov

Yes, both run 3.23.49a

- Original Message -
From: Jeremy Zawodny [EMAIL PROTECTED]
To: Mihail Manolov [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, August 29, 2002 3:24 AM
Subject: Re: mysqlhotcopy problem - Segmentation fault


 On Wed, Aug 28, 2002 at 04:50:22PM -0400, Mihail Manolov wrote:
  I have a strange problem - I get a Segmentation fault error when I am trying
to
  execute mysqlhotcopy.
 
  When I remove the [mysqlhotcopy] section in my /etc/my.cnf
  configuration it works. Meanwhile, on another server it works just
  fine with the same configuration file.

 Do both servers have the same MySQL version installed?



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




mysqlhotcopy problem - Segmentation fault

2002-08-28 Thread Mihail Manolov

I have a strange problem - I get a Segmentation fault error when I am trying to
execute mysqlhotcopy.

When I remove the [mysqlhotcopy] section in my /etc/my.cnf configuration it
works. Meanwhile, on another server it works just fine with the same
configuration file.

There is nothing fancy in the my.cnf file - I have the default configuration
for mysqlhotcopy:

[mysqlhotcopy]
interactive-timeout

Any ideas? Has anyone had such a problem? It's not critical, but it's
strange.

Mihail Manolov
Government Liquidation, LLC
202 467 6868 x.227
[EMAIL PROTECTED]
www.govliquidation.com

sql query


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




mysqlhotcopy problem - Segmentation fault

2002-08-28 Thread Mihail Manolov

I have a strange problem - I get a Segmentation fault error when I am trying to
execute mysqlhotcopy.

When I remove the [mysqlhotcopy] and [client] sections in my /etc/my.cnf
configuration it works. Meanwhile, on another server it works just fine with
the same configuration file.

There is nothing fancy in the my.cnf file - I have the default configuration
for mysqlhotcopy:

[mysqlhotcopy]
interactive-timeout

Any ideas? Has anyone had such a problem? It's not critical, but it's
strange.

Mihail Manolov
Government Liquidation, LLC
202 467 6868 x.227
[EMAIL PROTECTED]
www.govliquidation.com

sql, query, mysql


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




myisampack/myisamchk question

2002-08-15 Thread Mihail Manolov

I got an error while executing myisamchk -rq on a compressed table. Here's the
output:

[root@db prod_rel5_1_0]# myisampack buyer
Compressing buyer.MYD: (3687550 records)
- Calculating statistics
- Compressing file
62.69%
Remember to run myisamchk -rq on compressed tables

[root@db prod_rel5_1_0]# myisamchk -rq buyer
- check key delete-chain
- check record delete-chain
- recovering (with sort) MyISAM-table 'buyer.MYI'
Data records: 3687550
- Fixing index 1
myisamchk: error: Found too many records; Can`t continue
MyISAM-table 'buyer' is not fixed because of errors
Try fixing it by using the --safe-recover (-o) or the --force (-f) option

[root@db prod_rel5_1_0]# myisamchk -roq buyer
- check key delete-chain
- check record delete-chain
- recovering (with keycache) MyISAM-table 'buyer.MYI'
Data records: 3668074
Data records: 3687550

So, using myisamchk -roq solved the problem.

My question is, what is the cause of the error that I get, and is it safe?


Mihail Manolov
Government Liquidation, LLC
202 467 6868 x.227
[EMAIL PROTECTED]
www.govliquidation.com

sql,query


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




What is the difference between thread and connection

2002-08-02 Thread Mihail Manolov

Could someone explain to me what the difference is between a thread and a
connection?

I guess one connection can have many threads? Am I right?


Thanks,

Mihail Manolov

sql,query




-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: What is the difference between thread and connection

2002-08-02 Thread Mihail Manolov

Thanks Paul,

I am confused by these two variables in the server status:

Connections: 114577
Threads_created: 12759

Could you explain the difference between these two variables?

Connections should be the number of connection attempts to the MySQL
server. Does this count slave connections? Because they are usually permanent
connections.

Threads_created - Number of threads created to handle connections.
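(I am reading both counters straight from the server status:

SHOW STATUS LIKE 'Connections';
SHOW STATUS LIKE 'Threads_%';
)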

Thank you very much!

Mihail

- Original Message -
From: Paul DuBois [EMAIL PROTECTED]
To: Mihail Manolov [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, August 02, 2002 1:45 PM
Subject: Re: What is the difference between thread and connection


 At 13:29 -0400 8/2/02, Mihail Manolov wrote:
 Could someone explain to me what is the difference between thread and
 connection?

 No difference.

 
 I guess one connection can have many threads? Am I right?

 The server is multi-threaded.  It associates a thread with each
 connection.  Connection ID and thread ID are synonymous.

 
 
 Thanks,
 
 Mihail Manolov
 
 sql,query


 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Help - query suggestion needed - interesting case

2002-07-12 Thread Mihail Manolov

:) Is this some sort of a joke?

I am grouping by event_id, which makes your query useless because it will
return just one time value for each event_id.

Thanks anyway. I may have to use a second query... :-(


Mihail
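
P.S. On second thought, a self-join might do it in one statement
after all. Untested sketch: it keeps a row only if the same event_id
also appears with a different time, so events with a single distinct
time drop out.

SELECT DISTINCT e1.event_id, e1.time
FROM events e1, events e2
WHERE e1.event_id = e2.event_id
  AND e1.time <> e2.time
ORDER BY e1.event_id, e1.time;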


- Original Message -
From: Bhavin Vyas [EMAIL PROTECTED]
To: Mihail Manolov [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, July 11, 2002 10:51 PM
Subject: Re: Help - query suggestion needed - interesting case


 How about:

  SELECT
  event_id, time,
  count(DISTINCT time) AS Ranges
  FROM
  events
  GROUP BY
  event_id HAVING Ranges > 1


 - Original Message -
 From: Mihail Manolov [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Thursday, July 11, 2002 2:58 PM
 Subject: Help - query suggestion needed - interesting case


  Greetings,
 
  I am stuck with this problem:
 
  I have the following table:
 
  event_id   time
  100        2000-10-23
  100        2000-10-23
  101        2000-10-24
  101        2000-10-25
 
  I need to know all event_id's that have multiple times + time columns.
  Is it possible to get that result in just one query?
  The result should be something like this:
 
  event_id   time
  101        2000-10-24
  101        2000-10-25
 
 
  I managed to get all event_id's that have multiple times, but I don't
  know how to get the time column in the same query.
  Here is my current query:
 
  SELECT
  event_id,
  count(DISTINCT time) AS Ranges
  FROM
  events
  GROUP BY
   event_id HAVING Ranges > 1
 
  Please help me to find a single query that will return the time column
  as well.
 
 
  Mihail







Help - query suggestion needed - interesting case

2002-07-11 Thread Mihail Manolov

Greetings,

I am stuck with this problem:

I have the following table:

event_id   time
100        2000-10-23
100        2000-10-23
101        2000-10-24
101        2000-10-25

I need to know all event_id's that have multiple times + time columns. Is it
possible to get that result in just one query?
The result should be something like this:

event_id   time
101        2000-10-24
101        2000-10-25


I managed to get all event_id's that have multiple times, but I don't know
how to get the time column in the same query.
Here is my current query:

SELECT
event_id,
count(DISTINCT time) AS Ranges
FROM
events
GROUP BY
event_id HAVING Ranges > 1

Please help me to find a single query that will return the time column as
well.


Mihail









Replication (possible) problems | 'show binlog' command is broken?

2002-05-21 Thread Mihail Manolov

Greetings,

We got the following lines in our error logfile on one of our slave servers:

020521 13:19:29  Error reading packet from server: Lost connection to MySQL
server during query (read_errno 0,server_errno=2013)
020521 13:19:29  Slave: Failed reading log event, reconnecting to retry, log
'db-bin.024' position 833365852
020521 13:19:29  Slave: reconnected to master
'[EMAIL PROTECTED]:3306',replication resumed in log 'db-bin.024' at
position 833365852

It isn't the first one, and I was wondering what the cause could be.
A network problem? A large data chunk, or something else?

I tried to see which event caused the problem, but the following
command only returns an error:

show binlog events in 'db-bin.024' from 833365852 limit 5;
ERROR 1064: You have an error in your SQL syntax near 'binlog events in
'db-bin.024' from 833365852 limit 5' at line 1
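
My guess is that our server version simply does not support SHOW
BINLOG EVENTS yet, so the syntax error may be a red herring. In the
meantime this is what I am checking (standard statements only, just a
sketch):

SHOW SLAVE STATUS;                         -- on the slave: current position and last error
SHOW MASTER STATUS;                        -- on the master: current binlog file and position
SHOW VARIABLES LIKE 'max_allowed_packet';  -- a very large event can drop the connection mid-read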


Any help is appreciated!



Mihail Manolov
Government Liquidation, LLC
202 467 6868 x.227
[EMAIL PROTECTED]
www.govliquidation.com









funny truncate problem

2002-05-10 Thread Mihail Manolov

Hey guys,

did you spot this problem?

mysql> select truncate(199.20,2);
+--------------------+
| truncate(199.20,2) |
+--------------------+
|             199.19 |
+--------------------+
1 row in set (0.00 sec)

Bug?
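
My guess is that it is the usual floating-point story: 199.20 cannot
be stored exactly, so the value ends up a hair below 199.20 and
TRUNCATE() chops it down to 199.19. For comparison, on the same value
(quick sketch):

SELECT TRUNCATE(199.20, 2), ROUND(199.20, 2);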


Mihail Manolov
Government Liquidation, LLC
202 467 6868 x.227
[EMAIL PROTECTED]
www.govliquidation.com









Lost connection to MySQL server during query

2002-03-11 Thread Mihail Manolov

Greetings,

We use MySQL 3.23.47 running on a machine with 4 Pentium III processors
and 16GB of RAM.
The site is hosted on 6 web servers using Alteon for load balancing.


If anyone has had this kind of problem and knows how to fix it, any
ideas regarding the following are welcome...

Here's what we know about the two problems:

First Scenario: auction_list.pl
---

The problem can be reproduced on every run

Approximate runtime is 20-40 minutes

It's a standard, non-mod_perl script, so there is no way of clobbering
a handle through the kind of memory leak that can occur under
mod_perl.

The program is very database-intensive, with many multi-table join
SELECT queries.

All SELECT/INSERT/UPDATE statements occur over one DB handle.

The two handles that fail are prepared at the beginning of
runtime:

my $sth_update = $LIQ::DB->prepare(qq{
    UPDATE auction_list SET
        auction_count    = ?,
        auction_data     = ?,
        last_update_time = NOW()
    WHERE
        owner_id = ?
        AND language_code = ?
});

my $sth_insert = $LIQ::DB->prepare(qq{
    INSERT INTO auction_list (
        auction_count,
        auction_data,
        owner_id,
        language_code,
        last_update_time
    ) VALUES (
        ?, ?, ?, ?, NOW()
    )
});

mysql> desc auction_list;
+------------------+-------------+------+-----+---------------------+-------+
| Field            | Type        | Null | Key | Default             | Extra |
+------------------+-------------+------+-----+---------------------+-------+
| owner_id         | varchar(32) |      | PRI |                     |       |
| language_code    | char(2)     |      | PRI |                     |       |
| auction_count    | int(11)     |      |     | 0                   |       |
| auction_data     | mediumblob  |      |     |                     |       |
| last_update_time | datetime    |      |     | 0000-00-00 00:00:00 |       |
+------------------+-------------+------+-----+---------------------+-------+
5 rows in set (0.00 sec)

The program runs for about 20 minutes, then breaks on both queries
sequentially (i.e., if the update fails, it falls back to the insert),
and both fail.

The error reported by the statement handle is the following:

MySQL server has gone away

The loop continues and all subsequent rows are written correctly.

This error occurs on one specific row update/insertion, and then
processing continues. The row it breaks on _can_ change: I ran the
script again with the lengthy part of the initial run commented out,
and halfway through it broke on a different row than usual, then
merrily continued. Usually, though, the same row breaks from one run
to the next. I managed to change which row it breaks on by cutting
the number of initial SELECTs in half.

Neither the database handle nor the prepared statement is clobbered,
since subsequent rows are written correctly.

One of the values inserted with each statement is always a blob.
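
(If that blob occasionally grows past max_allowed_packet, the server
drops the connection and the client sees exactly this "gone away"
error, so these are the first two settings I am checking; just a
sketch:)

SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'wait_timeout';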

I have included a Dumper of the bind variables passed to the
statement handles in error.txt.gz; please note it is rather large.

Second scenario: various CGIs, diff machines, same time
---

Runtime breakage of the following programs:

auction/bid
auction/list
auction/search

This happens during normal execution: the software runs correctly for
hours, then the following message is seen in debug emails:

Lost connection to MySQL server during query

Usually multiple CGIs experience the error at the same time (within
seconds of each other, or concurrently). The error then simply goes
away and processing returns to normal.

This error can be seen with any type of query
(UPDATE/INSERT/DELETE/SELECT), maybe once or twice a day, from
various mod_perl CGIs, at the same or nearly the same time, for 2-5
separate requests, and then goes away. I've attached example debug
output emails containing this message.

This message can be encountered on any of our machines, and usually
occurs at the same time from various machines in the loop. It then
simply fixes itself.
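
(For what it is worth, these two status counters tick up whenever a
connection dies abnormally, so they are a cheap way to confirm the
server noticed the drops too; just a sketch:)

SHOW STATUS LIKE 'Aborted_clients';   -- connections closed without a proper quit
SHOW STATUS LIKE 'Aborted_connects';  -- failed attempts to connect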

My concern is that this behavior has been occurring in our bid CGI.
The others are not as important, but I'm scared to see what kind of
effect this may be having on our data.

Thanks.







Re: Myisamchk can't repair table help help help

2002-02-27 Thread Mihail Manolov

Steve Rapaport wrote:
 
 Arrrggghhh!
 
 This is definitely a problem!
 Can't access table White in mysql 4.0.1:  table is read-only.
 why?  because it's corrupt!
 Try to fix:  takes a long time (about an hour) and seems to work.
 But it's still corrupt and read-only:
 
[cut]

 [root@db1 elenco4_fb02]# ls -l
 total 7001668
 -rwxrwxr-x    1 root     root            0 Feb 27 01:30 Counts.MYD
 -rwxrwxr-x    1 root     root         1024 Feb 27 01:30 Counts.MYI
 -rwxrwxr-x    1 root     root         8578 Feb 27 01:30 Counts.frm
 -rwxrwxr-x    1 root     root    409097196 Feb 27 01:30 Invfile.MYD
 -rwxrwxr-x    1 root     root    314644480 Feb 27 01:28 Invfile.MYI
 -rwxrwxr-x    1 root     root         8612 Feb 27 01:27 Invfile.frm
 -rwxrwxr-x    1 root     root   1964830232 Feb 27 19:19 White.MYD
 -rwxrwxr-x    1 root     root   2079708160 Feb 27 22:11 White.MYI
 -rwxrwxr-x    1 root     root         9159 Feb 27 01:18 White.frm
[cut]

Does your system have support for files larger than 2GB?
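
If it does, the other thing I would check is the table's own maximum
data file length; raising MAX_ROWS rebuilds the table with larger row
pointers (a sketch, with made-up numbers):

SHOW TABLE STATUS LIKE 'White';   -- look at Max_data_length
ALTER TABLE White MAX_ROWS=200000000 AVG_ROW_LENGTH=200;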

-- 
Mihail Manolov
Solutions Architect
Government Liquidation, LLC
www.govliquidation.com
