Hi,

I've got a question about the number of writes a MySQL server can
handle in a second. I've written a program which contains the following
code:

typedef struct {
    MYSQL *connection;
    char  *database;
} con;

int resolvequery(char *vraag, MYSQL_RES &toReturn, int &affected, int control)
{
    MYSQL_RES *result;
    MYSQL_ROW row;
    int state, counter;
    char *query;
    char *db;
    MYSQL *connect;
    int *temporal;

    connect = standardcon.connection;
    db = standardcon.database;

    state = mysql_real_query(connect, vraag, 1024);
    if (state != 0) {
        printf(mysql_error(connect));
        return 1;
    }
    if ((strncmp(vraag, "SEL", 3) == 0) || (strncmp(vraag, "SHO", 3) == 0)
        || (strncmp(vraag, "DES", 3) == 0)) {
        result = mysql_store_result(connect);
        affected = mysql_num_rows(result);
        toReturn = *result;
    } else {
        affected = 0;
    }
    return 0;
}
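
In case the intent of that prefix test is unclear, here is the same SEL/SHO/DES check pulled out into a tiny standalone helper. This is just an illustration of what the strncmp chain above decides; the function name is made up:

#include <string.h>

/* Returns 1 if the query string starts with a prefix of a statement
 * that produces a result set (SELECT / SHOW / DESCRIBE), mirroring
 * the strncmp chain in resolvequery; returns 0 otherwise. */
static int returns_result_set(const char *q)
{
    return strncmp(q, "SEL", 3) == 0
        || strncmp(q, "SHO", 3) == 0
        || strncmp(q, "DES", 3) == 0;
}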

And when I call this procedure 50 times/second (from one client)
or 100 times/second (two clients at the same time) it stores all the data
without generating an error, but as soon as I start a third client it
crashes with the usual segmentation fault. In case you need it, the calls
are made from this function:

CORBA::Short i_DBM_impl::storeData(const DINA::t_Table &table,
                                   const char *e) throw(CORBA::SystemException)
{
    int state;
    char *mission;
    MYSQL_RES temp;

    mission = (char *) malloc(256 * sizeof(char));
    sprintf(mission, "INSERT DELAYED INTO %s VALUES ('%s',CURRENT_TIMESTAMP())",
            (char *) table.name, e);
    cout << mission << endl;
    if (resolvequery(mission, temp, state, 0) == 0)
        return 1;
    else
        return 0;
}
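
By the way, one thing I wasn't sure about myself: the mission buffer is a fixed 256 bytes, so a long table name or value could overrun it with sprintf. Here is a safer sketch of just that formatting step, with no MySQL calls involved (build_mission is a made-up name for this illustration):

#include <stdio.h>

/* Build the INSERT statement into a fixed-size buffer without ever
 * writing past its end. Returns 0 on success, -1 if the statement
 * would not fit (snprintf reports the length it wanted to write). */
static int build_mission(char *buf, size_t buflen,
                         const char *table, const char *value)
{
    int n = snprintf(buf, buflen,
                     "INSERT DELAYED INTO %s VALUES ('%s',CURRENT_TIMESTAMP())",
                     table, value);
    if (n < 0 || (size_t) n >= buflen)
        return -1;   /* truncated: the query would be cut off mid-statement */
    return 0;
}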

So is it possible that some buffer fills up when more than 100
inserts per second are requested? What can I do to tune the performance, or
did I make other mistakes? I'm using mysql Ver 9.38 Distrib 3.22.32, for
pc-linux-gnu (i586) on a Debian Linux machine.
Please can you also mail me ([EMAIL PROTECTED]) and not only the
list (I'm receiving the digest version and otherwise I have to wait too long
:-)

Greetings,
Raf


---------------------------------------------------------------------
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/           (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
