Hi~

     I wrote an echo server as follows:

     one accept thread and one I/O thread; I use uv_write2 to pass the TCP 
handle to the I/O thread, and everything works fine ~

     and I use a uv_timer_t to record the read count/sec and write count/sec 
in the client. When I test on a Win7 x64 system, using a single TCP connection:

    condition 1>  with a read/write buffer size of 16 or 1024 bytes, the 
output is as follows:

2017-12-26 16:53:55.501 INFO  [16572] [accept_server::run@77] tcp server is 
runing on 0.0.0.0:9999 ...
2017-12-26 16:53:55.502 INFO  [16572] [pipe_server::run@51] pipe server is 
runing on \\?\pipe\pipe_trans_handle_test ...
2017-12-26 16:53:58.962 INFO  [9164] [session_io::run@25] session io is 
runing, instance: 0244FB5C ...
2017-12-26 16:53:58.963 INFO  [16572] [pipe_server::pipe_connection_cb@72] 
enter pipe_connection_cb, status = 0
2017-12-26 16:53:58.963 INFO  [9164] [pipe_client::pipe_connect_cb@48] pipe 
client connect ok!
2017-12-26 16:53:58.964 INFO  [16572] [pipe_server::pipe_connection_cb@128] 
pipe client connected, address: \\?\pipe\pipe_trans_handle_test, fd: 400
2017-12-26 16:53:59.504 INFO  [17600] [client::on_connect_cb@70] client 
connect to server ok!
2017-12-26 16:53:59.504 INFO  [16572] [accept_server::connection_cb@109] 
client connected, address: 127.0.0.1:32603
2017-12-26 16:53:59.505 DEBUG [16572] [accept_server::connection_cb@119] 
dispatch_handle through pipe ok!
2017-12-26 16:53:59.505 DEBUG [9164] [pipe_client::pipe_read_cb@103] 
uv_pipe_pending_count: 1
2017-12-26 16:53:59.505 DEBUG [9164] [pipe_client::pipe_read_cb@111] this 
pipe transfer for tcp handle!
2017-12-26 16:53:59.565 DEBUG [9164] [pipe_client::pipe_read_cb@146] pipe 
client accept fd: 460
2017-12-26 16:53:59.565 DEBUG [9164] [session_io::process_new_handle@53] 
call process_new_handle!
2017-12-26 16:54:00.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
27852, write count: 27852
2017-12-26 16:54:01.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
31363, write count: 31364
2017-12-26 16:54:02.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
30302, write count: 30302
2017-12-26 16:54:03.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
30879, write count: 30878
2017-12-26 16:54:04.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
31278, write count: 31278
2017-12-26 16:54:05.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
31566, write count: 31566
2017-12-26 16:54:06.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
30883, write count: 30883
2017-12-26 16:54:07.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
31020, write count: 31020
2017-12-26 16:54:08.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
30830, write count: 30830
2017-12-26 16:54:09.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
31016, write count: 31016
2017-12-26 16:54:10.503 DEBUG [17600] [client::on_timer_cb@164] read count: 
30634, write count: 30634
2017-12-26 16:54:11.504 DEBUG [17600] [client::on_timer_cb@164] read count: 
31165, write count: 31165


  about 30K ops/sec, and very stable.


  condition 2>  then I changed the read/write buffer size to 1024 * 4 bytes; 
the output is as follows:

2017-12-26 16:57:15.077 INFO  [14272] [accept_server::run@77] tcp server is 
runing on 0.0.0.0:9999 ...

2017-12-26 16:57:15.077 INFO  [14272] [pipe_server::run@51] pipe server is 
runing on \\?\pipe\pipe_trans_handle_test ...
2017-12-26 16:57:18.424 INFO  [16784] [session_io::run@25] session io is 
runing, instance: 024DF80C ...
2017-12-26 16:57:18.425 INFO  [16784] [pipe_client::pipe_connect_cb@48] 
pipe client connect ok!
2017-12-26 16:57:18.425 INFO  [14272] [pipe_server::pipe_connection_cb@72] 
enter pipe_connection_cb, status = 0
2017-12-26 16:57:18.425 INFO  [14272] [pipe_server::pipe_connection_cb@128] 
pipe client connected, address: \\?\pipe\pipe_trans_handle_test, fd: 400
2017-12-26 16:57:19.079 INFO  [17404] [client::on_connect_cb@70] client 
connect to server ok!
2017-12-26 16:57:19.079 INFO  [14272] [accept_server::connection_cb@109] 
client connected, address: 127.0.0.1:33063
2017-12-26 16:57:19.080 DEBUG [14272] [accept_server::connection_cb@119] 
dispatch_handle through pipe ok!
2017-12-26 16:57:19.081 DEBUG [16784] [pipe_client::pipe_read_cb@103] 
uv_pipe_pending_count: 1
2017-12-26 16:57:19.081 DEBUG [16784] [pipe_client::pipe_read_cb@111] this 
pipe transfer for tcp handle!
2017-12-26 16:57:19.081 DEBUG [16784] [pipe_client::pipe_read_cb@146] pipe 
client accept fd: 460
2017-12-26 16:57:19.082 DEBUG [16784] [session_io::process_new_handle@53] 
call process_new_handle!
2017-12-26 16:57:20.079 DEBUG [17404] [client::on_timer_cb@164] read count: 
21890, write count: 21891
2017-12-26 16:57:21.079 DEBUG [17404] [client::on_timer_cb@164] read count: 
16166, write count: 16159
2017-12-26 16:57:22.079 DEBUG [17404] [client::on_timer_cb@164] read count: 
16133, write count: 16140
2017-12-26 16:57:23.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
15766, write count: 15766
2017-12-26 16:57:24.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
18521, write count: 18520
2017-12-26 16:57:25.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
19493, write count: 19494
2017-12-26 16:57:26.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
18410, write count: 15758
2017-12-26 16:57:27.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
18884, write count: 13194
2017-12-26 16:57:28.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
13686, write count: 8342
2017-12-26 16:57:29.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
5706, write count: 0
2017-12-26 16:57:30.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
3437, write count: 0
2017-12-26 16:57:31.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
2357, write count: 0
2017-12-26 16:57:32.080 DEBUG [17404] [client::on_timer_cb@164] read count: 
2063, write count: 0


 the recorded data is not stable, and it looks like memory usage keeps 
increasing.


    I didn't reuse a single uv_write_t object; instead I use an object pool 
like this:

uv_write_t* write_req = m_write_req_pool.require();


    and then recycle it like this:


m_write_req_pool.recycle(write_req);
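    In sketch form, the pool behaves roughly like this (a simplified, 
self-contained stand-in for my real class; the real pool holds uv_write_t 
objects):

```cpp
#include <cassert>
#include <vector>

// Simplified freelist-style object pool: require() reuses a recycled
// object when one is available, otherwise allocates a fresh one.
template <typename T>
class object_pool {
public:
    ~object_pool() {
        for (T* p : m_free) delete p;   // free whatever was recycled
    }
    T* require() {
        if (m_free.empty())
            return new T();             // pool empty: allocate a new object
        T* p = m_free.back();           // otherwise pop one off the freelist
        m_free.pop_back();
        return p;
    }
    void recycle(T* p) {
        m_free.push_back(p);            // push back onto the freelist
    }
private:
    std::vector<T*> m_free;
};
```

    so a recycled request is handed out again on the next require(), and 
nothing is allocated on the hot path once the pool has warmed up.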


    and I reuse the read buffer: after a read completes, I use the same 
buffer again in the I/O thread. My session context looks like this:

struct PEER_SESSION_CONTEXT
{
    session_io* self;
    size_t ctx_key;
    std::shared_ptr<handle_storage_t> peer_handle;
    std::string peer_ip;
    unsigned short peer_port;
    char cache_read_buf[TEST_MSG_SIZE];
};
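    The alloc callback just hands libuv the cached buffer every time instead 
of allocating. A self-contained sketch of that idea (buf_t here is a stand-in 
for libuv's uv_buf_t so it compiles on its own, and the context is trimmed to 
the relevant field; TEST_MSG_SIZE = 4096 is an assumed value):

```cpp
#include <cassert>
#include <cstddef>

constexpr std::size_t TEST_MSG_SIZE = 4096;  // assumed buffer size

// Stand-in for libuv's uv_buf_t so the sketch is self-contained.
struct buf_t { char* base; std::size_t len; };

struct PEER_SESSION_CONTEXT {
    char cache_read_buf[TEST_MSG_SIZE];      // reused for every read
};

// Shape of an alloc callback: return the cached buffer, never allocate.
void on_alloc(PEER_SESSION_CONTEXT* ctx, std::size_t /*suggested*/, buf_t* buf) {
    buf->base = ctx->cache_read_buf;         // same buffer on every call
    buf->len  = sizeof(ctx->cache_read_buf);
}
```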

  
    My thinking was: since I don't allocate read or write buffers inside the 
I/O thread's loop, the loop shouldn't be delayed much while processing I/O 
tasks.

    I know uv_write puts the data into a queue, but I don't think that should 
matter much.

    
    I need help, thanks.



-- 
You received this message because you are subscribed to the Google Groups 
"libuv" group.