On Mon, Mar 5, 2012 at 6:25 PM, Claudio Valderrama C. <[email protected]>
 wrote:

> (Thorny issue, I hope Ann Harrison will comment.)
>
>
And now, finally, a month later she does.  Included after my signature is a
bit of MySQL code which may explain my desire to keep things as simple as
possible.


Hello, currently the engine supports BLR4 (legacy) and BLR 5. All FB
> versions generate BLR 5. But we are hitting some limits and I think we
> should increase it again (this would be for the first time for FB).
>

My understanding and memory is that within a blr_rse, each stream has a
context which is expressed as a single byte.  My first reaction was to
suggest adding a blr_rse2 using sixteen bit contexts.  Obviously, even if
that didn't trigger a BLR6 it would not be backward compatible, but there
are other extensions to blr that haven't caused a version change and are
also not backward compatible.

Forward compatibility (i.e. the ability of a new engine to handle an old
database) is critically important.  If someone restores an old database, it
may well have significant amounts of old blr in it.  People really dislike
a database that won't let them at their data - even if the data is old.

Claudio has proposed increasing the blr version and changing those routines
that manage blr to recognize that BLR4 and BLR5 have eight bit contexts but
BLR6 has sixteen bit contexts. That's at least as clean as recognizing that
blr_rse has eight bit contexts while blr_rse2 has sixteen bit contexts.

What bothered me was the follow-on idea of fixing lots of housekeeping
issues:



> Things we can do in the new BLR version:
> - enlarge some values that are currently held in a single bit
>

Which means testing for blr version on every reference to them.


> - allow for reuse of holes in the BLR namespace without risk of
> misinterpreting a deprecated verb
>

Which means testing for blr version on every reference to them.
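The per-reference version test being objected to would look something like this. A sketch only; the `BlrDecoder` type and its layout are invented, not Firebird's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Invented for illustration: a decoder that remembers the stream's
// declared BLR version and branches on it at every reference.
struct BlrDecoder
{
    uint8_t version;  // 4, 5, or a future 6

    // Under the proposal, BLR4/BLR5 keep 8-bit contexts while BLR6
    // uses 16-bit ones, so every read site pays this test.
    unsigned context_width() const
    {
        return version >= 6 ? 2u : 1u;  // bytes per context value
    }
};
```

Every verb whose meaning or operand size changes between versions needs a branch of this shape at every place it is decoded.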


> - allow for BLR streams bigger than 64K thus supporting procedure BLR that
> will be stored in multiple blob segments if necessary (AFAIK, gbak is
> prepared to handle that).
>

Fixing the problem that gbak always gets or puts blr in a single read or
write is something that has been on my list since the first
gds-Galaxy release.  I hope it's finally been done.
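A fix along those lines reads the blob segment by segment and concatenates, instead of assuming one call fetches the whole BLR. Sketch only; the segment source is abstracted as a plain vector rather than the real blob API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Assemble BLR stored across several blob segments (each capped at
// 64K) into one contiguous buffer. The segment list stands in for
// the real blob-read loop.
static std::string assemble_blr(const std::vector<std::string>& segments)
{
    std::string blr;
    for (const std::string& seg : segments)
        blr.append(seg);  // order matters: segments are sequential
    return blr;
}
```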



> Also, somewhat related to this, I propose that for 64-bit FB, the limit
> MAX_REQUESTS_SIZE should be raised, too or estimated on the fly or put in
> the config file.
>

That would also be a good thing.

Best wishes,

Ann


Here's the code ...


int
mysql_execute_command(THD *thd)
{
 int res= FALSE;
 bool need_start_waiting= FALSE; // have protection against global read lock
 int  up_result= 0;
 LEX  *lex= thd->lex;
 /* first SELECT_LEX (have special meaning for many of non-SELECT commands) */
 SELECT_LEX *select_lex= &lex->select_lex;
 /* first table of first SELECT_LEX */
 TABLE_LIST *first_table= (TABLE_LIST*) select_lex->table_list.first;
 /* list of all tables in query */
 TABLE_LIST *all_tables;
 /* most outer SELECT_LEX_UNIT of query */
 SELECT_LEX_UNIT *unit= &lex->unit;
#ifdef HAVE_REPLICATION
 /* have table map for update for multi-update statement (BUG#37051) */
 bool have_table_map_for_update= FALSE;
#endif
 /* Saved variable value */
 DBUG_ENTER("mysql_execute_command");
#ifdef WITH_PARTITION_STORAGE_ENGINE
 thd->work_part_info= 0;
#endif

 /*
   In many cases the first table of the main SELECT_LEX has special meaning =>
   check that it is the first table in the global list and relink it first in
   the query_tables list if necessary (we need such relinking only
   for queries with subqueries in the select list; in this case tables of
   subqueries will go to the global list first).

   all_tables will differ from first_table only if the most upper SELECT_LEX
   does not contain tables.

   Because of the above, in places where there should be at least one table
   in the most outer SELECT_LEX we have the following check:
   DBUG_ASSERT(first_table == all_tables);
   DBUG_ASSERT(first_table == all_tables && first_table != 0);
 */
 lex->first_lists_tables_same();
 /* should be assigned after making first tables same */
 all_tables= lex->query_tables;
 /* set context for commands which do not use setup_tables */
 select_lex->
   context.resolve_in_table_list_only((TABLE_LIST*)select_lex->
                                      table_list.first);

 /*
   Reset warning count for each query that uses tables
   A better approach would be to reset this for any commands
   that is not a SHOW command or a select that only access local
   variables, but for now this is probably good enough.
 */
 if ((sql_command_flags[lex->sql_command] & CF_DIAGNOSTIC_STMT) != 0)
   thd->warning_info->set_read_only(TRUE);
 else
 {
   thd->warning_info->set_read_only(FALSE);
   if (all_tables)
     thd->warning_info->opt_clear_warning_info(thd->query_id);
 }

#ifdef HAVE_REPLICATION
 if (unlikely(thd->slave_thread))
 {
   if (lex->sql_command == SQLCOM_DROP_TRIGGER)
   {
     /*
       When dropping a trigger, we need to load its table name
       before checking slave filter rules.
     */
     add_table_for_trigger(thd, thd->lex->spname, 1, &all_tables);

     if (!all_tables)
     {
       /*
         If table name cannot be loaded,
         it means the trigger does not exist, possibly because
         CREATE TRIGGER was previously skipped for this trigger
         according to slave filtering rules.
         Returning success without producing any errors in this case.
       */
       DBUG_RETURN(0);
     }

     // force searching in slave.cc:tables_ok()
     all_tables->updating= 1;
   }

   /*
     For fix of BUG#37051, the master stores the table map for update
     in the Query_log_event, and the value is assigned to
     thd->variables.table_map_for_update before executing the update
     query.

     If thd->variables.table_map_for_update is set, then we are
     replicating from a new master and we can use this value to apply
     filter rules without opening all the tables. However, if
     thd->variables.table_map_for_update is not set, then we are
     replicating from an old master, so we just skip this and
     continue with the old method. And of course, the bug would still
     exist for old masters.
   */
   if (lex->sql_command == SQLCOM_UPDATE_MULTI &&
       thd->table_map_for_update)
   {
     have_table_map_for_update= TRUE;
     table_map table_map_for_update= thd->table_map_for_update;
     uint nr= 0;
     TABLE_LIST *table;
     for (table=all_tables; table; table=table->next_global, nr++)
     {
       if (table_map_for_update & ((table_map)1 << nr))
         table->updating= TRUE;
       else
         table->updating= FALSE;
     }

     if (all_tables_not_ok(thd, all_tables))
     {
       /* we warn the slave SQL thread */
        my_message(ER_SLAVE_IGNORED_TABLE, ER(ER_SLAVE_IGNORED_TABLE),
                   MYF(0));
       if (thd->one_shot_set)
         reset_one_shot_variables(thd);
       DBUG_RETURN(0);
     }

     for (table=all_tables; table; table=table->next_global)
       table->updating= TRUE;
   }

   /*
     Check if statement should be skipped because of slave filtering
     rules

     Exceptions are:
     - UPDATE MULTI: For this statement, we want to check the filtering
       rules later in the code
     - SET: we always execute it (not that many SET commands exist in
       the binary log anyway -- only 4.1 masters write SET statements;
       in 5.0 there are no SET statements in the binary log)
     - DROP TEMPORARY TABLE IF EXISTS: we always execute it (otherwise we
       have stale files on slave caused by exclusion of one tmp table).
   */
    if (!(lex->sql_command == SQLCOM_UPDATE_MULTI) &&
        !(lex->sql_command == SQLCOM_SET_OPTION) &&
        !(lex->sql_command == SQLCOM_DROP_TABLE &&
          lex->drop_temporary && lex->drop_if_exists) &&
        all_tables_not_ok(thd, all_tables))
   {
     /* we warn the slave SQL thread */
      my_message(ER_SLAVE_IGNORED_TABLE, ER(ER_SLAVE_IGNORED_TABLE),
                 MYF(0));
     if (thd->one_shot_set)
     {
       /*
         It's ok to check thd->one_shot_set here:

          The charsets in a MySQL 5.0 slave can change by both a binlogged
          SET ONE_SHOT statement and the event-internal charset setting,
          and these two ways to change charsets do not seem to work
          together.

          At least there seem to be problems in the rli cache for
          charsets if we are using ONE_SHOT.  Note that this is normally no
          problem because either the >= 5.0 slave reads a 4.1 binlog (with
          ONE_SHOT) or a 5.0 binlog (without ONE_SHOT) but never both.
       */
       reset_one_shot_variables(thd);
     }
     DBUG_RETURN(0);
   }
 }
 else
 {
#endif /* HAVE_REPLICATION */
   /*
     When option readonly is set deny operations which change non-temporary
     tables. Except for the replication thread and the 'super' users.
   */
    if (deny_updates_if_read_only_option(thd, all_tables))
   {
      my_error(ER_OPTION_PREVENTS_STATEMENT, MYF(0), "--read-only");
     DBUG_RETURN(-1);
   }
#ifdef HAVE_REPLICATION
 } /* endif unlikely slave */
#endif

 status_var_increment(thd->status_var.com_stat[lex->sql_command]);

 DBUG_ASSERT(thd->transaction.stmt.modified_non_trans_table == FALSE);

 /*
   End an active transaction so that this command will have its
   own transaction and will also sync the binary log. If a DDL is
   not run in its own transaction it may simply never appear on
   the slave in case the outside transaction rolls back.
 */
 if (opt_implicit_commit(thd, CF_IMPLICT_COMMIT_BEGIN))
   goto error;

 switch (lex->sql_command) {

 case SQLCOM_SHOW_EVENTS:
#ifndef HAVE_EVENT_SCHEDULER
   my_error(ER_NOT_SUPPORTED_YET, MYF(0), "embedded server");
   break;
#endif
 case SQLCOM_SHOW_STATUS_PROC:
 case SQLCOM_SHOW_STATUS_FUNC:
   if (!(res= check_table_access(thd, SELECT_ACL, all_tables, FALSE, FALSE,
                                 UINT_MAX)))
     res= execute_sqlcom_select(thd, all_tables);
   break;
 case SQLCOM_SHOW_STATUS:
 {
   system_status_var old_status_var= thd->status_var;
   thd->initial_status_var= &old_status_var;
   if (!(res= check_table_access(thd, SELECT_ACL, all_tables, FALSE, FALSE,
                                 UINT_MAX)))
     res= execute_sqlcom_select(thd, all_tables);
   /* Don't log SHOW STATUS commands to slow query log */
    thd->server_status&= ~(SERVER_QUERY_NO_INDEX_USED |
                           SERVER_QUERY_NO_GOOD_INDEX_USED);
   /*
     restore status variables, as we don't want 'show status' to cause
     changes
   */
    pthread_mutex_lock(&LOCK_status);
    add_diff_to_status(&global_status_var, &thd->status_var,
                       &old_status_var);
    thd->status_var= old_status_var;
    pthread_mutex_unlock(&LOCK_status);
   break;
 }
 case SQLCOM_SHOW_DATABASES:
 case SQLCOM_SHOW_TABLES:
 case SQLCOM_SHOW_TRIGGERS:
 case SQLCOM_SHOW_TABLE_STATUS:
 case SQLCOM_SHOW_OPEN_TABLES:
 case SQLCOM_SHOW_PLUGINS:
 case SQLCOM_SHOW_FIELDS:
 case SQLCOM_SHOW_KEYS:
 case SQLCOM_SHOW_VARIABLES:
 case SQLCOM_SHOW_CHARSETS:
 case SQLCOM_SHOW_COLLATIONS:
 case SQLCOM_SHOW_STORAGE_ENGINES:
 case SQLCOM_SHOW_PROFILE:
 case SQLCOM_SELECT:
 {
    thd->status_var.last_query_cost= 0.0;

   /*
     lex->exchange != NULL implies SELECT .. INTO OUTFILE and this
     requires FILE_ACL access.
   */
   ulong privileges_requested= lex->exchange ? SELECT_ACL | FILE_ACL :
     SELECT_ACL;
   if (all_tables)
     res= check_table_access(thd,
                             privileges_requested,
                             all_tables, FALSE, FALSE, UINT_MAX);
   else
     res= check_access(thd,
                       privileges_requested,
                       any_db, 0, 0, 0, 0);
   if (!res)
     res= execute_sqlcom_select(thd, all_tables);
   break;
 }
 case SQLCOM_PREPARE:
 {
   mysql_sql_stmt_prepare(thd);
   break;
 }
 case SQLCOM_EXECUTE:
 {
   mysql_sql_stmt_execute(thd);
   break;
 }
 case SQLCOM_DEALLOCATE_PREPARE:
 {
   mysql_sql_stmt_close(thd);
   break;
 }
 case SQLCOM_DO:
    if (check_table_access(thd, SELECT_ACL, all_tables, FALSE, FALSE,
                           UINT_MAX) ||
       open_and_lock_tables(thd, all_tables))
     goto error;

   res= mysql_do(thd, *lex->insert_list);
   break;

 case SQLCOM_EMPTY_QUERY:
   my_ok(thd);
   break;

 case SQLCOM_HELP:
    res= mysqld_help(thd, lex->help_arg);
   break;

#ifndef EMBEDDED_LIBRARY
 case SQLCOM_PURGE:
 {
   if (check_global_access(thd, SUPER_ACL))
     goto error;
   /* PURGE MASTER LOGS TO 'file' */
   res = purge_master_logs(thd, lex->to_log);
   break;
 }
 case SQLCOM_PURGE_BEFORE:
 {
   Item *it;

   if (check_global_access(thd, SUPER_ACL))
     goto error;
   /* PURGE MASTER LOGS BEFORE 'data' */
   it= (Item *)lex->value_list.head();
   if ((!it->fixed && it->fix_fields(lex->thd, &it)) ||
       it->check_cols(1))
   {
     my_error(ER_WRONG_ARGUMENTS, MYF(0), "PURGE LOGS BEFORE");
     goto error;
   }
   it= new Item_func_unix_timestamp(it);
    /*
      it is OK to only emulate fix_fields, because we need only
      the value of the constant
    */
    it->quick_fix_field();
    res = purge_master_logs_before_date(thd, (ulong)it->val_int());
   break;
 }
 /*
   Purge backup logs command.
 */
 case SQLCOM_PURGE_BACKUP_LOGS:
 {
   char buff[256];
   int num= 0;
   res= 0;

   if (check_global_access(thd, SUPER_ACL))
     goto error;

   /*
     If we are attempting to purge to a specified date or backup_id, we
     must ensure the backup history log is turned on and is
     being written to a table.
   */
    if (((lex->type == TYPE_ENUM_PURGE_BACKUP_LOGS_ID) ||
         (lex->type == TYPE_ENUM_PURGE_BACKUP_LOGS_DATE)) &&
         (opt_backup_history_log && !(log_backup_output_options &
                                      LOG_TABLE)))
   {
     my_error(ER_BACKUP_LOG_OUTPUT, MYF(0), ER(ER_BACKUP_LOG_OUTPUT));
     goto error;
   }

   /*
     Check the type of purge command and process accordingly.
   */
   switch (lex->type) {
   case TYPE_ENUM_PURGE_BACKUP_LOGS:
   {
      if (sys_var_backupdir.value_length > 0)
       res= logger.purge_backup_logs(thd);
     break;
   }
    case TYPE_ENUM_PURGE_BACKUP_LOGS_ID:
   {
      res= logger.purge_backup_logs_before_id(thd, thd->lex->backup_id,
                                              &num);
     break;
   }
    case TYPE_ENUM_PURGE_BACKUP_LOGS_DATE:
   {
     Item *it;

     /*
       Perform additional error checking for the
       PURGE BACKUP LOGS BEFORE <date> command.
     */
     it= (Item *)lex->value_list.head();
     if ((!it->fixed && it->fix_fields(lex->thd, &it)) ||
         it->check_cols(1))
     {
       my_error(ER_WRONG_ARGUMENTS, MYF(0), "PURGE BACKUP LOGS BEFORE");
       goto error;
     }
     it= new Item_func_unix_timestamp(it);

     /*
       it is OK to only emulate fix_fields, because we need only
       value of constant
     */
     it->quick_fix_field();

     if ((ulong)it->val_int() == 0)
     {
        my_error(ER_BACKUP_PURGE_DATETIME, MYF(0),
                 "PURGE BACKUP LOGS BEFORE");
       goto error;
     }

     my_time_t t= (ulong)it->val_int();

      res= logger.purge_backup_logs_before_date(thd, t, &num);
     break;
   }
   }

   /*
     Check result.
   */
   if (res)
     goto error;
   if (lex->type == TYPE_ENUM_PURGE_BACKUP_LOGS)
      my_sprintf(buff, (buff, "%s.", ER(ER_BACKUP_LOGS_TRUNCATED)));
   else
     my_sprintf(buff, (buff, "%s %d.", ER(ER_BACKUP_LOGS_DELETED), num));
   my_ok(thd, num, 0, buff);
   break;
 }
#endif
 case SQLCOM_SHOW_WARNS:
 {
   res= mysqld_show_warnings(thd, (ulong)
           ((1L << (uint) MYSQL_ERROR::WARN_LEVEL_NOTE) |
            (1L << (uint) MYSQL_ERROR::WARN_LEVEL_WARN) |
            (1L << (uint) MYSQL_ERROR::WARN_LEVEL_ERROR)
            ));
   break;
 }
 case SQLCOM_SHOW_ERRORS:
 {
    res= mysqld_show_warnings(thd, (ulong)
            (1L << (uint) MYSQL_ERROR::WARN_LEVEL_ERROR));
   break;
 }
 case SQLCOM_SHOW_PROFILES:
 {
#if defined(ENABLED_PROFILING)
    thd->profiling.discard_current_query();
    res= thd->profiling.show_profiles();
   if (res)
     goto error;
#else
    my_error(ER_FEATURE_DISABLED, MYF(0), "SHOW PROFILES",
             "enable-profiling");
   goto error;
#endif
   break;
 }
 case SQLCOM_SHOW_NEW_MASTER:
 {
   if (check_global_access(thd, REPL_SLAVE_ACL))
     goto error;
    /* This query doesn't work now. See comment in repl_failsafe.cc */
#ifndef WORKING_NEW_MASTER
   my_error(ER_NOT_SUPPORTED_YET, MYF(0), "SHOW NEW MASTER");
   goto error;
#else
   res = show_new_master(thd);
   break;
#endif
 }

#ifdef HAVE_REPLICATION
 case SQLCOM_SHOW_SLAVE_HOSTS:
 {
   if (check_global_access(thd, REPL_SLAVE_ACL))
     goto error;
   res = show_slave_hosts(thd);
   break;
 }
 case SQLCOM_SHOW_BINLOG_EVENTS:
 {
   if (check_global_access(thd, REPL_SLAVE_ACL))
     goto error;
   res = mysql_show_binlog_events(thd);
   break;
 }
#endif


#ifdef BACKUP_TEST
 case SQLCOM_BACKUP_TEST:
#ifdef EMBEDDED_LIBRARY
   my_error(ER_NOT_SUPPORTED_YET, MYF(0), "BACKUP");
   goto error;
#else
    /*
      Note: execute_backup_test_command() sends a correct response to the
      client (either ok, result set or error message).
     */
    if (execute_backup_test_command(thd, &lex->db_list))
     goto error;
   break;
#endif
#endif
Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel
