-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviewboard.asterisk.org/r/3557/#review11964
-----------------------------------------------------------


Thanks for picking this up.


/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21872>

    Why did you remove the comment about replacing the empty string with a 
single space?



/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21873>

    So, when I'm fetching "abcdefg" (7), and my buffer is sized 4, I would 
first fetch "abc\0" and then make_space for 7.
    
    The documentation I've looked at doesn't say whether the indicator includes 
the terminating NUL, but we do need room for it. (8 bytes needed.)
    
    ast_str_make_space(&rowdata, indicator + 1); // the ast_str does not take 
the NUL into account on its own afaics



/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21875>

    When dealing with continue/break, I prefer this:
    
      if (x) {
          continue;
      }
      if (y) {
          ...
    
    That extra brace+LF increases the visibility of the jump.



/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21874>

    Same remark about indicator+1.



/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21876>

    SQL_C_ULONG is an "unsigned long int" according to 
http://msdn.microsoft.com/en-us/library/ms714556(v=vs.85).aspx
    
    Not sure if that's necessarily the same as a size_t.



/branches/1.8/res/res_config_odbc.c
<https://reviewboard.asterisk.org/r/3557/#comment21877>

    (A) I don't like calloc. We can just malloc it. If you feel like it, set 
the first character to 0. After a couple of iterations, it will be filled with 
garbage anyway. Why initialize it to zeroes?
    
    (B) I believe you just fetched MAX(LENGTH(var_val)). Shouldn't we increase 
that by one to fit in the terminating NUL? q.var_val_size += 1
    
    (And for a second there, I was worried that MAX(LENGTH(var_val)) would 
return the number of characters, instead of bytes. But they're bytes. That's a 
relief.)


- wdoekes


On May 22, 2014, 5:48 p.m., Joshua Colp wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviewboard.asterisk.org/r/3557/
> -----------------------------------------------------------
> 
> (Updated May 22, 2014, 5:48 p.m.)
> 
> 
> Review request for Asterisk Developers and wdoekes.
> 
> 
> Bugs: ASTERISK-23582
>     https://issues.asterisk.org/jira/browse/ASTERISK-23582
> 
> 
> Repository: Asterisk
> 
> 
> Description
> -------
> 
> This change removes fixed size buffers in ODBC related code for reading in 
> row data and func_odbc configuration. For func_odbc the configured queries 
> are duplicated instead of being stored in a fixed size buffer. For dynamic 
> realtime a thread-local string is enlarged as needed as row data is read in. 
> For static realtime the maximum size of a configuration value is read in and 
> a buffer created accordingly.
> 
> 
> Diffs
> -----
> 
>   /branches/1.8/res/res_config_odbc.c 414399 
>   /branches/1.8/funcs/func_odbc.c 414399 
> 
> Diff: https://reviewboard.asterisk.org/r/3557/diff/
> 
> 
> Testing
> -------
> 
> Configured func_odbc within MySQL (via ODBC) using extconfig with a query of 
> length 3000 and confirmed it was read in completely. This used static 
> realtime and func_odbc.
> Configured chan_sip to use peers stored in MySQL (via ODBC) and stored very 
> long values. Confirmed they were read in completely.
> 
> Also ran these scenarios under valgrind to confirm no memory insanity.
> 
> 
> Thanks,
> 
> Joshua Colp
> 
>
