More data: I can stop the service on the server, which I assume releases 
port 8091 entirely, and then restart it, all the while leaving the client 
service running with ccChannel & ccClient still defined as before. If I then 
ask the client to talk to the server, the rpc works fine - it does not 
generate a 'Stream removed' exception. So it appears that re-initializing 
ccClient is not needed even after the server has been shut down and 
restarted.
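
To be concrete, the reuse pattern I am describing is just the following 
(a sketch only - Charge and request stand in for my actual rpc and message, 
which I have omitted):

    // ccChannel and ccClient are created once at client-service startup and never re-created.
    var reply1 = ccClient.Charge(request);      // normal call, works

    // ... stop the gRPC service on the server machine, then start it again ...

    var reply2 = ccClient.Charge(request);      // same ccClient, still works - no 'Stream removed'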

On Wednesday, January 13, 2021 at 12:30:51 PM UTC-6 Jim Thomas wrote:

>
> My Windows 10 gRPC client reports a 'Stream removed' exception when the 
> channel has not been used for over an hour. (Not sure why it says 'Stream', 
> because I am not using streaming rpcs!) Here is the exception; port :8091 is 
> the port on the Windows 10 server. Both the client and the server run as 
> Windows 10 services (on different computers), and I am using Protobuf 3.12.4 
> and Grpc 2.30.0.
>
> -----------------------------------------------------------------------------------------------
> DebugException="Grpc.Core.Internal.CoreErrorDetailException:
> {"created":"@1610556972.490000000",
>  "description":"Error received from peer ipv4:192.168.55.4:8091",
>  "file":"T:\src\github\grpc\workspace_csharp_ext_windows_x64\src\core\lib\surface\call.cc",
>  "file_line":1055,
>  "grpc_message":"Stream removed",
>  "grpc_status":2}"
>
> -----------------------------------------------------------------------------------------------
> Here is my C# gRPC code (approximately) which creates the client:
>
>     ccChannel = new Channel(serverNamePort, ChannelCredentials.Insecure);
>     ccClient = new cc.Client(ccChannel);
>     
> where cc means CreditCard, as this is a credit card service. I am using 
> .Insecure because I encrypt the message myself, and only the destination app 
> (which runs on a web server beyond the gRPC server) decrypts it. I believe 
> my architecture is more secure than using TLS 1.3. I only mention this 
> because some support forums suggest the exception is caused by not using 
> TLS, but I have never seen a definitive resolution.
>
> I have a second test environment where I left the channel idle for 6 hours 
> and the rpc still worked, but in that case both the client and the server 
> were running on localhost, so it is not really an apples-to-apples 
> comparison. Hence I cannot say idleness is the cause - unless there is some 
> configurable difference between the two systems that causes idle channels to 
> be dropped on one but not the other? 
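>
> Purely to illustrate what a per-channel 'configurable difference' could look 
> like on the Grpc.Core side, channel arguments are passed as ChannelOptions 
> when the channel is built. To be clear, I do NOT set any of these on either 
> system; this is only a sketch:
>
>     // Sketch only: standard gRPC core channel args exposed via Grpc.Core.ChannelOption.
>     // I do not actually set these; they only show the kind of idle/keepalive
>     // configuration that could differ between environments.
>     var options = new[]
>     {
>         new ChannelOption("grpc.keepalive_time_ms", 60000),      // ping an idle connection every 60 s
>         new ChannelOption("grpc.keepalive_timeout_ms", 10000),   // wait 10 s for the ping ack
>     };
>     ccChannel = new Channel(serverNamePort, ChannelCredentials.Insecure, options);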
>
> For what it's worth, the client does report that the channel is 'Ready' 
> just prior to the exception.
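>
> Concretely, the check is just reading Channel.State right before the call 
> (sketch; Charge and request stand in for my actual rpc and message):
>
>     // The channel reports Ready immediately before the failing call.
>     Console.WriteLine("Channel state before call: " + ccChannel.State);   // prints: Ready
>     var reply = ccClient.Charge(request);   // this is the call that throws 'Stream removed'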
>
> My work-around is to detect the exception on the client and try the call 
> again - it ALWAYS works the second time.
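>
> In code, the work-around is roughly this (a sketch - Charge, ChargeRequest 
> and ChargeReply stand in for my actual rpc and message types):
>
>     // Retry once when the call fails with 'Stream removed', which surfaces as
>     // grpc_status 2, i.e. StatusCode.Unknown, in the RpcException.
>     ChargeReply CallWithRetry(ChargeRequest request)
>     {
>         try
>         {
>             return ccClient.Charge(request);
>         }
>         catch (RpcException ex) when (ex.StatusCode == StatusCode.Unknown)
>         {
>             // First attempt failed; in my experience the second attempt always succeeds.
>             return ccClient.Charge(request);
>         }
>     }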
>
> I wonder if re-using ccChannel & ccClient is part of the cause. Do the 
> creators of gRPC recommend creating a new channel and client before each 
> rpc? 
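>
> That is, would they rather see something like this (sketch; Charge stands in 
> for my rpc) instead of the long-lived ccChannel & ccClient above?
>
>     // A fresh channel and client for every call, torn down immediately afterwards.
>     var channel = new Channel(serverNamePort, ChannelCredentials.Insecure);
>     var client = new cc.Client(channel);
>     var reply = client.Charge(request);
>     channel.ShutdownAsync().Wait();   // release the connection once the single rpc is done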
>
>
