Hi all,
I am new here (as well as new to Google Groups).
We are trying to do something similar to what "christo" is doing.
In our case we are creating automated C++ tests (using gtest):
In our SetUp() we want to create the server and register the services.
In our TearDown() we want to shut the server down and delete everything.
-------------
That way every test starts from exactly the same precondition as far as the server is concerned.
We run the server and its Wait() call (the event loop) in a separate thread: one thread is the client, i.e. the test cases, and another thread is the server, which waits on the port.
All of this is also aimed at making the server-side test cases debuggable.
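For concreteness, a rough sketch of the kind of fixture we have in mind is below (MyServiceImpl and the port are placeholders, and it assumes the synchronous API):

    #include <memory>
    #include <thread>

    #include <grpcpp/grpcpp.h>
    #include <gtest/gtest.h>

    // Sketch only: MyServiceImpl is a placeholder for a synchronous
    // implementation of the generated service.
    class ServerFixture : public ::testing::Test
    {
    protected:
        void SetUp() override
        {
            grpc::ServerBuilder builder;
            builder.AddListeningPort("localhost:50051",
                                     grpc::InsecureServerCredentials());
            builder.RegisterService(&m_service);
            m_server = builder.BuildAndStart();

            // Run the blocking event loop on its own thread so the test
            // thread is free to act as the client.
            m_serverThread = std::thread([this] { m_server->Wait(); });
        }

        void TearDown() override
        {
            m_server->Shutdown();   // unblocks Wait()
            m_serverThread.join();
            m_server.reset();
        }

        MyServiceImpl                 m_service;
        std::unique_ptr<grpc::Server> m_server;
        std::thread                   m_serverThread;
    };

Every test case then runs against a freshly built server and tears it down afterwards, which is exactly the precondition we want.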
As christo mentioned, there are tickets that were closed without resolution, e.g.
https://github.com/grpc/grpc-go/issues/928
If anyone has any ideas or suggestions, that would be very helpful.

On Wednesday, July 21, 2021 at 6:32:04 PM UTC+1 [email protected] wrote:

> What assert are you seeing exactly? We have tests that explicitly create 
> the same service multiple times within the same process, so this is 
> supposed to be a supported use case:
>
>
> https://github.com/grpc/grpc/blob/master/test/cpp/server/server_builder_test.cc
>
> On Wednesday, July 14, 2021 at 11:57:18 AM UTC-7 [email protected] 
> wrote:
>
>> I've got a server class that has a `start` method and a `stop` method. 
>> The listening port is unknown until a user calls start.
>>
>> As such, I build the server there, something like:
>>
>>     void GrpcStreamingServerImpl::start(QUrl const& endpoint)
>>     {
>>         stop();
>>
>>         ::grpc::ServerBuilder builder;
>>         builder.AddListeningPort(endpoint.toString().toStdString(),
>>                                  grpc::InsecureServerCredentials());
>>         builder.RegisterService(&m_service);
>>         m_completionQueueCalls         = builder.AddCompletionQueue();
>>         m_completionQueueNotifications = builder.AddCompletionQueue();
>>         m_server                       = builder.BuildAndStart();
>>
>>         m_running = true;
>>         ....
>>
>> and then I tear everything down when the server stops. Something like:
>>
>>     if(!m_running)
>>     {
>>         return;
>>     }
>>
>>     m_running = false;
>>
>>     {
>>         std::lock_guard<std::mutex> local_lock_guard {m_mutexSessions};
>>         for(const auto& it : m_sessions)
>>         {
>>             it.second->finish();
>>         }
>>     }
>>
>>     if(m_server)
>>     {
>>         m_server->Shutdown();
>>         m_completionQueueCalls->Shutdown();
>>         m_completionQueueNotifications->Shutdown();
>>
>>         // Wait for the threads to exit before allowing the destruction
>>         // of the completion queues they use
>>         if(m_callQueueThread && m_callQueueThread->isRunning())
>>         {
>>             m_callQueueThread->wait();
>>         }
>>
>>         if(m_notificationQueueThread && m_notificationQueueThread->isRunning())
>>         {
>>             m_notificationQueueThread->wait();
>>         }
>>
>>         delete m_completionQueueNotifications;
>>         delete m_completionQueueCalls;
>>         delete m_server;
>> ....
>>
>>
>> The problem is that if the user wants to start, stop, and start again, 
>> the RegisterService call results in abort being called, complaining that 
>> a service is already registered.
>>
>> I tried moving the registration into the constructor and only adding the 
>> listening port when start() is called, but that results in a segfault. 
>> The builder isn't a singleton, so I don't understand how or why the 
>> registration is retained after the server object it returned was destroyed.
>>
>> A little googling turned up issues that were filed about this, but there 
>> are no plans to implement an "unregisterService" method. So what does one 
>> do to work around this problem?
>>
>>
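One workaround we are planning to try on our side (just a sketch, not tested against the exact abort described above; MyService::AsyncService and the member names are placeholders, not the original code) is to create a fresh service instance every time the server is started, so RegisterService never sees an object that was registered with an earlier server:

    #include <memory>
    #include <string>

    #include <grpcpp/grpcpp.h>

    // Sketch only: MyService::AsyncService stands in for the real generated
    // service type; member names are placeholders, not the original code.
    class RestartableServer
    {
    public:
        void start(const std::string& endpoint)
        {
            stop();

            // Fresh service instance per start(), so it has never been
            // registered with any previous server.
            m_service = std::make_unique<MyService::AsyncService>();

            grpc::ServerBuilder builder;
            builder.AddListeningPort(endpoint, grpc::InsecureServerCredentials());
            builder.RegisterService(m_service.get());
            m_queue  = builder.AddCompletionQueue();
            m_server = builder.BuildAndStart();
        }

        void stop()
        {
            if(!m_server)
            {
                return;
            }

            m_server->Shutdown();
            m_queue->Shutdown();

            // Drain the completion queue before destroying it.
            void* tag = nullptr;
            bool  ok  = false;
            while(m_queue->Next(&tag, &ok))
            {
            }

            m_server.reset();
            m_queue.reset();
            m_service.reset();
        }

    private:
        std::unique_ptr<MyService::AsyncService>     m_service;
        std::unique_ptr<grpc::ServerCompletionQueue> m_queue;
        std::unique_ptr<grpc::Server>                m_server;
    };

The obvious drawback is that any state the service object carries has to live outside it and be handed back in after a restart, but for test setups like ours that is usually acceptable.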
