[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Leif Hedstrom updated TS-4613:
------------------------------
    Fix Version/s:     (was: 7.0.0)
                   7.1.0

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>            Priority: Minor
>              Labels: Optimization
>             Fix For: 7.1.0
>
>
> "thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
> EThread::thread_private[ ] stores thread-specific data, e.g.:
> - the stat system arrays
> - the NetHandler object
> - the PollCont object
> However, each thread group has different private data, so sharing a single thread_data_used wastes space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
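[Editor's note] The proposal above is to keep a separate usage counter for each thread group rather than one counter shared by all groups, so a slot reserved for (say) the net threads does not also consume space in every other group's EThread::thread_private[ ]. The sketch below is only an illustration of that idea, not the actual Traffic Server patch or its API; names such as MAX_THREAD_GROUPS, PER_THREAD_DATA and allocate_private_slot are hypothetical.

```cpp
// Illustrative sketch: one thread_data_used counter per thread group.
#include <cassert>
#include <cstddef>

constexpr int    MAX_THREAD_GROUPS = 16;          // e.g. ET_CALL, ET_NET, ET_TASK, ...
constexpr size_t PER_THREAD_DATA   = 1024 * 1024; // size of EThread::thread_private[]

// One offset counter per thread group instead of a single shared one.
static size_t thread_data_used[MAX_THREAD_GROUPS] = {0};

// Reserve `size` bytes of per-thread private storage for every thread in
// group `group`; returns the offset into thread_private[] or -1 on overflow.
ptrdiff_t
allocate_private_slot(int group, size_t size)
{
  assert(group >= 0 && group < MAX_THREAD_GROUPS);
  if (thread_data_used[group] + size > PER_THREAD_DATA) {
    return -1; // this group's private area is exhausted
  }
  ptrdiff_t offset = static_cast<ptrdiff_t>(thread_data_used[group]);
  thread_data_used[group] += size;
  return offset;
}

int
main()
{
  // A slot reserved for group 1 (say, the net threads) does not consume
  // space in group 2's private area, which is the point of the ticket.
  ptrdiff_t net_slot  = allocate_private_slot(1, sizeof(void *) * 64);
  ptrdiff_t task_slot = allocate_private_slot(2, sizeof(void *) * 8);
  assert(net_slot == 0 && task_slot == 0); // both groups start at offset 0
  return 0;
}
```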
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Priority: Minor  (was: Major)

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>            Priority: Minor
>              Labels: Optimization
>             Fix For: 7.0.0
>
>
> "thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
> EThread::thread_private[ ] stores thread-specific data, e.g.:
> - the stat system arrays
> - the NetHandler object
> - the PollCont object
> However, each thread group has different private data, so sharing a single thread_data_used wastes space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Issue Type: Improvement  (was: Bug)

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>              Labels: Optimization
>             Fix For: 7.0.0
>
>
> "thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
> EThread::thread_private[ ] stores thread-specific data, e.g.:
> - the stat system arrays
> - the NetHandler object
> - the PollCont object
> However, each thread group has different private data, so sharing a single thread_data_used wastes space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Labels: Optimization  (was: )

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>              Labels: Optimization
>             Fix For: 7.0.0
>
>
> "thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
> EThread::thread_private[ ] stores thread-specific data, e.g.:
> - the stat system arrays
> - the NetHandler object
> - the PollCont object
> However, each thread group has different private data, so sharing a single thread_data_used wastes space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Description: 
"thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
EThread::thread_private[ ] stores thread-specific data, e.g.:
- the stat system arrays
- the NetHandler object
- the PollCont object
However, each thread group has different private data, so sharing a single thread_data_used wastes space.

  was:
NetHandler has a method, _close_vc, which is called by InactivityCop.
First, it creates a dummy Event on the stack, then it calls UnixNetVConnection::mainEvent via vc->handleEvent(EVENT_IMMEDIATE, );
(handleEvent resolves to mainEvent here).

In the UnixNetVConnection::mainEvent code:

```
int
UnixNetVConnection::mainEvent(int event, Event *e)
{
  ink_assert(event == EVENT_IMMEDIATE || event == EVENT_INTERVAL);
  ink_assert(thread == this_ethread());
  MUTEX_TRY_LOCK(hlock, get_NetHandler(thread)->mutex, e->ethread);
  MUTEX_TRY_LOCK(rlock, read.vio.mutex ? read.vio.mutex : e->ethread->mutex, e->ethread);
  MUTEX_TRY_LOCK(wlock, write.vio.mutex ? write.vio.mutex : e->ethread->mutex, e->ethread);
  if (!hlock.is_locked() || !rlock.is_locked() || !wlock.is_locked() ||
      (read.vio.mutex && rlock.get_mutex() != read.vio.mutex.get()) ||
      (write.vio.mutex && wlock.get_mutex() != write.vio.mutex.get())) {
#ifdef INACTIVITY_TIMEOUT
    if (e == active_timeout)
#endif
      e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
    return EVENT_CONT;
  }
```

The dummy Event is scheduled into the Event System by e->schedule_in(HRTIME_MSECONDS(net_retry_delay));

I think we should move the schedule_in inside the INACTIVITY_TIMEOUT macro:

```
#ifdef INACTIVITY_TIMEOUT
  if (e == active_timeout)
    e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
#endif
```

I tried allocating an Event instead of using the dummy Event, but then the Event System called back into a deallocated UnixNetVConnection, because NetHandler had already called close_UnixNetVConnection before the Event System ran the Event scheduled by schedule_in.

In NetHandler::_close_vc, whether ++handle_event; is executed depends on the return value (EVENT_DONE or EVENT_CONT) from UnixNetVConnection::mainEvent:

```
if (vc->handleEvent(EVENT_IMMEDIATE, ) == EVENT_DONE)
  ++handle_event;
```

The three MUTEX_TRY_LOCKs do not always succeed on the InactivityCop callback, because the mutex of the ServerSession VC may differ from that of the ClientSession VC: only the ServerSession's mutex is set to the HttpSM's when the HttpSM picks up a server session from the session pool; the ServerSession VC still keeps its old mutex.

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>             Fix For: 7.0.0
>
>
> "thread_data_used" indicates how much of EThread::thread_private[ ] is in use.
> EThread::thread_private[ ] stores thread-specific data, e.g.:
> - the stat system arrays
> - the NetHandler object
> - the PollCont object
> However, each thread group has different private data, so sharing a single thread_data_used wastes space.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Summary: Set an independent thread_data_used for each thread group instead of sharing one thread_data_used  (was: In UnixNetVConnection::mainEvent should not do e->schedule_in for dummy event callback )

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>             Fix For: 7.0.0
>
>
> NetHandler has a method, _close_vc, which is called by InactivityCop.
> First, it creates a dummy Event on the stack, then it calls UnixNetVConnection::mainEvent via vc->handleEvent(EVENT_IMMEDIATE, );
> (handleEvent resolves to mainEvent here).
> In the UnixNetVConnection::mainEvent code:
> ```
> int
> UnixNetVConnection::mainEvent(int event, Event *e)
> {
>   ink_assert(event == EVENT_IMMEDIATE || event == EVENT_INTERVAL);
>   ink_assert(thread == this_ethread());
>   MUTEX_TRY_LOCK(hlock, get_NetHandler(thread)->mutex, e->ethread);
>   MUTEX_TRY_LOCK(rlock, read.vio.mutex ? read.vio.mutex : e->ethread->mutex, e->ethread);
>   MUTEX_TRY_LOCK(wlock, write.vio.mutex ? write.vio.mutex : e->ethread->mutex, e->ethread);
>   if (!hlock.is_locked() || !rlock.is_locked() || !wlock.is_locked() ||
>       (read.vio.mutex && rlock.get_mutex() != read.vio.mutex.get()) ||
>       (write.vio.mutex && wlock.get_mutex() != write.vio.mutex.get())) {
> #ifdef INACTIVITY_TIMEOUT
>     if (e == active_timeout)
> #endif
>       e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
>     return EVENT_CONT;
>   }
> ```
> The dummy Event is scheduled into the Event System by e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
> I think we should move the schedule_in inside the INACTIVITY_TIMEOUT macro:
> ```
> #ifdef INACTIVITY_TIMEOUT
>   if (e == active_timeout)
>     e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
> #endif
> ```
> I tried allocating an Event instead of using the dummy Event, but then the Event System called back into a deallocated UnixNetVConnection, because NetHandler had already called close_UnixNetVConnection before the Event System ran the Event scheduled by schedule_in.
> In NetHandler::_close_vc, whether ++handle_event; is executed depends on the return value (EVENT_DONE or EVENT_CONT) from UnixNetVConnection::mainEvent:
> ```
> if (vc->handleEvent(EVENT_IMMEDIATE, ) == EVENT_DONE)
>   ++handle_event;
> ```
> The three MUTEX_TRY_LOCKs do not always succeed on the InactivityCop callback, because the mutex of the ServerSession VC may differ from that of the ClientSession VC: only the ServerSession's mutex is set to the HttpSM's when the HttpSM picks up a server session from the session pool; the ServerSession VC still keeps its old mutex.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
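[Editor's note] The original description above reports that replacing the stack-allocated dummy Event with a heap-allocated one led to the Event System calling back into an already deallocated UnixNetVConnection. The minimal, self-contained model below is not Traffic Server code; it only illustrates the hazard and the conventional cure of cancelling the pending event in the close path before the connection is destroyed. The names (MiniEvent, MiniVC, run_scheduled) are hypothetical.

```cpp
// Minimal model: a scheduled event holding a raw pointer to a connection
// must be cancelled before that connection is freed, or the callback runs
// on freed memory.
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

struct MiniEvent {
  bool cancelled = false;
  std::function<void()> callback;
};

static std::vector<std::shared_ptr<MiniEvent>> scheduled;

// Stand-in for e->schedule_in(...): queue the event for a later callback.
std::shared_ptr<MiniEvent> schedule(std::function<void()> cb) {
  auto e = std::make_shared<MiniEvent>();
  e->callback = std::move(cb);
  scheduled.push_back(e);
  return e;
}

struct MiniVC {
  std::shared_ptr<MiniEvent> pending; // retry event, analogous to the heap-allocated Event

  void retry_later() {
    pending = schedule([this] { std::cout << "retrying vc " << this << "\n"; });
  }
  void close() {
    if (pending) pending->cancelled = true; // cancel before the VC goes away
  }
};

// Stand-in for the event system draining its queue.
void run_scheduled() {
  for (auto &e : scheduled) {
    if (!e->cancelled) e->callback();
  }
  scheduled.clear();
}

int main() {
  auto vc = std::make_unique<MiniVC>();
  vc->retry_later();
  vc->close();     // the close path cancels the pending event ...
  vc.reset();      // ... so freeing the VC here is safe.
  run_scheduled(); // the cancelled event is skipped: no use-after-free
  return 0;
}
```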
[jira] [Updated] (TS-4613) Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
     [ https://issues.apache.org/jira/browse/TS-4613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oknet Xu updated TS-4613:
-------------------------
    Component/s:     (was: Network)

> Set an independent thread_data_used for each thread group instead of sharing one thread_data_used
> --------------------------------------------------------------------------------------------------
>
>                 Key: TS-4613
>                 URL: https://issues.apache.org/jira/browse/TS-4613
>             Project: Traffic Server
>          Issue Type: Bug
>          Components: Core
>            Reporter: Oknet Xu
>            Assignee: Oknet Xu
>             Fix For: 7.0.0
>
>
> NetHandler has a method, _close_vc, which is called by InactivityCop.
> First, it creates a dummy Event on the stack, then it calls UnixNetVConnection::mainEvent via vc->handleEvent(EVENT_IMMEDIATE, );
> (handleEvent resolves to mainEvent here).
> In the UnixNetVConnection::mainEvent code:
> ```
> int
> UnixNetVConnection::mainEvent(int event, Event *e)
> {
>   ink_assert(event == EVENT_IMMEDIATE || event == EVENT_INTERVAL);
>   ink_assert(thread == this_ethread());
>   MUTEX_TRY_LOCK(hlock, get_NetHandler(thread)->mutex, e->ethread);
>   MUTEX_TRY_LOCK(rlock, read.vio.mutex ? read.vio.mutex : e->ethread->mutex, e->ethread);
>   MUTEX_TRY_LOCK(wlock, write.vio.mutex ? write.vio.mutex : e->ethread->mutex, e->ethread);
>   if (!hlock.is_locked() || !rlock.is_locked() || !wlock.is_locked() ||
>       (read.vio.mutex && rlock.get_mutex() != read.vio.mutex.get()) ||
>       (write.vio.mutex && wlock.get_mutex() != write.vio.mutex.get())) {
> #ifdef INACTIVITY_TIMEOUT
>     if (e == active_timeout)
> #endif
>       e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
>     return EVENT_CONT;
>   }
> ```
> The dummy Event is scheduled into the Event System by e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
> I think we should move the schedule_in inside the INACTIVITY_TIMEOUT macro:
> ```
> #ifdef INACTIVITY_TIMEOUT
>   if (e == active_timeout)
>     e->schedule_in(HRTIME_MSECONDS(net_retry_delay));
> #endif
> ```
> I tried allocating an Event instead of using the dummy Event, but then the Event System called back into a deallocated UnixNetVConnection, because NetHandler had already called close_UnixNetVConnection before the Event System ran the Event scheduled by schedule_in.
> In NetHandler::_close_vc, whether ++handle_event; is executed depends on the return value (EVENT_DONE or EVENT_CONT) from UnixNetVConnection::mainEvent:
> ```
> if (vc->handleEvent(EVENT_IMMEDIATE, ) == EVENT_DONE)
>   ++handle_event;
> ```
> The three MUTEX_TRY_LOCKs do not always succeed on the InactivityCop callback, because the mutex of the ServerSession VC may differ from that of the ClientSession VC: only the ServerSession's mutex is set to the HttpSM's when the HttpSM picks up a server session from the session pool; the ServerSession VC still keeps its old mutex.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)