Here are the print statements. If we look at the time difference between 
consecutive yields for Thread-15, it is more than 2 seconds. Eventually the 
difference grows beyond 2 minutes.


test:<Thread(*Thread-15*, started daemon 140166928791296)>, 2018-08-15 *23:32:27*.009203
test:<Thread(Thread-16, started daemon 140166882019072)>, 2018-08-15 23:32:27.311508

test:<Thread(*Thread-15*, started daemon 140166928791296)>, 2018-08-15 *23:32:29*.069680
test:<Thread(Thread-16, started daemon 140166882019072)>, 2018-08-15 23:32:29.449455
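
For reference, a minimal sketch of how the gap between consecutive yields could 
be logged per thread, instead of comparing the printed wall-clock timestamps by 
hand (the helper name log_yield_gap and the _last_yield dict are hypothetical, 
not part of the code below); it could be called just before the yield in 
dataSubscribe:

import threading
import time

# Timestamp of the previous yield, keyed by thread ident.
_last_yield = {}


def log_yield_gap():
    # Print how long this thread has gone since it last yielded a data point.
    thread = threading.current_thread()
    now = time.time()
    previous = _last_yield.get(thread.ident)
    if previous is not None:
        print('{}: {:.3f} s since last yield'.format(thread.name, now - previous))
    _last_yield[thread.ident] = now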


On Wednesday, 15 August 2018 14:02:15 UTC-7, ranjith wrote:
>
> Thanks, Nathaniel, for suggesting the optimizations. I made the changes in my 
> code and noticed that 101 threads are spawned (each thread mimics a server 
> running on its own port). However, the data is not streamed every second by 
> the server. Here is my entire code:
>
>
>
> from concurrent import futures
> from multiprocessing import Process, Queue
> import time
> import math
> import grpc
> import plugin_pb2
> import plugin_pb2_grpc
> import data_streamer
> import threading
> import datetime
> import sys
> import get_sensor_data
>
>
>
> _ONE_DAY_IN_SECONDS = 60 * 60 * 24
>
>
> class OpenConfigServicer(plugin_pb2_grpc.OpenConfigServicer):
>
>     def dataSubscribe(self, request, context):
>         try:
>             path = '/data-sensor'
>             metadata = None
>             # Fetch the sensor data once and stream the first data point.
>             data = get_sensor_data(path, metadata)
>             data_point = data[0]
>             while True:
>                 print 'test:{}, {}'.format(threading.current_thread(),
>                                            datetime.datetime.now())
>                 yield data_point
>                 # Each server should stream data every second
>                 time.sleep(1)
>         except Exception as e:
>             import traceback
>             print 'Exception in streaming data: {}, {}'.format(
>                 e, traceback.format_exc())
>
>
> def serve():
>     server = grpc.server(futures.ThreadPoolExecutor(max_workers=101))
>     plugin_pb2_grpc.add_OpenConfigServicer_to_server(OpenConfigServicer(),
>                                                      server)
>     for i in range(50051, 50152):
>         server.add_insecure_port('[::]:' + str(i))
>     server.start()
>     try:
>         while True:
>             time.sleep(_ONE_DAY_IN_SECONDS)
>     except KeyboardInterrupt:
>         server.stop(0)
>
>
> if __name__ == '__main__':
>     serve()
>
> On Wednesday, 15 August 2018 02:07:37 UTC-7, Nathaniel Manista wrote:
>>
>> On Tue, Aug 14, 2018 at 2:55 AM ranjith <[email protected]> wrote:
>>
>>> I have a gRPC service running fine on a server. I have a limited number of 
>>> servers, so what I want to do is run this service on the same server but on 
>>> different ports (basically faking the number of gRPC servers). The service 
>>> has a single rpc which sends data every 1 second.
>>>
>>> I am running this service on 100 different ports starting from 50000 to 
>>> 50100. Now there are 100 different clients making requests to their 
>>> corresponding server. What I noticed is that the data is not sent by these 
>>> servers every 1 second.
>>>
>>>
>>> Example:
>>>
>>> Servers are running on localhost:50000, localhost:50001, localhost:50002 
>>> .... localhost:50100 
>>>
>>>
>>>  
>>>
>>> class OpenConfigServicer(plugin_pb2_grpc.OpenConfigServicer):
>>>
>>>     def dataSubscribe(self, request, context):
>>>         # this rpc yields data every 1s
>>>
>>>
>>> def serve():
>>>     servers = []
>>>     for i in range(50000, 50101):
>>>         server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
>>>         servers.append(server)
>>>     i = 50000
>>>     for server in servers:
>>>         plugin_pb2_grpc.add_OpenConfigServicer_to_server(
>>>             OpenConfigServicer(), server)
>>>         server.add_insecure_port('[::]:' + str(i))
>>>         server.start()
>>>         i += 1
>>>
>>>
>>> Can someone tell me if we can optimize this?
>>>
>>
>> Probably not that important: why have 101 OpenConfigServicer instances 
>> rather than one that is shared among all your grpc.Server instances?
>>
>> Probably more important: why have 101 grpc.Server instances each serving 
>> on one port rather than one serving on 101 ports? Why don't you construct 
>> one server outside your loop and only call add_insecure_port on that one 
>> grpc.Server instance inside the loop?
>> -Nathaniel
>>
>
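
For completeness, the shape Nathaniel describes is roughly the sketch below: a 
minimal outline, not a drop-in replacement, assuming OpenConfigServicer is the 
servicer class defined elsewhere in this thread (the updated code at the top of 
this message follows the same structure): one shared servicer registered once 
on a single grpc.Server that listens on all of the ports.

from concurrent import futures
import grpc
import plugin_pb2_grpc


def serve():
    # One grpc.Server with one thread pool (Nathaniel's second point).
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=101))
    # One servicer instance shared by every port (Nathaniel's first point).
    plugin_pb2_grpc.add_OpenConfigServicer_to_server(OpenConfigServicer(), server)
    # A single server can listen on all 101 ports.
    for port in range(50000, 50101):
        server.add_insecure_port('[::]:' + str(port))
    server.start()
    # Block the main thread as before (e.g. a sleep loop).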
