I am having the following problem:

I have implemented a protobuf message to send data from a Go client to a 
Python server over a gRPC stream. The data I am sending needs to be loaded 
quickly on the Python server and processed. It is a composite message whose 
largest part is a repeated sint32 field defined as: *repeated 
sint32 array = 4 [packed=true];*
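
For context, the server side receives the serialized message over the stream 
and parses it roughly like this (the module and message names below are 
placeholders for my real generated code):

    import message_pb2  # placeholder: module generated by protoc from the .proto above

    def handle_message(request_bytes: bytes):
        msg = message_pb2.Payload()  # placeholder message name
        msg.ParseFromString(request_bytes)
        # msg.array is a google.protobuf RepeatedScalarContainer holding
        # the ~18,000,000 sint32 values from field 4.
        return msg.array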

This field contains around 18,000,000 entries, and when I try to load them 
into numpy with *data = np.array(array_obj, dtype=np.int8)*, that single call 
takes around 1.5 seconds. I have tried alternatives: first reading the data 
into a plain list (not faster) and passing copy=False to numpy (no effect). 
I just want to access the memory where these values are already stored.
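
A rough repro of what I am doing now (msg is the placeholder message from the 
sketch above; in my data the values happen to fit in int8):

    import time
    import numpy as np

    array_obj = msg.array  # the repeated sint32 field from the sketch above

    t0 = time.perf_counter()
    data = np.array(array_obj, dtype=np.int8)  # values fit in int8 in my case
    print(time.perf_counter() - t0)            # ~1.5 s for ~18,000,000 entries

    # Also not faster: materializing a plain list first.
    data = np.array(list(array_obj), dtype=np.int8)

    # copy=False / np.asarray change nothing, because the container does not
    # expose a buffer that numpy could view without copying element by element.
    data = np.asarray(array_obj, dtype=np.int8)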

I would like to try something such as numba or Cython, but both of those 
would require me to implement the complete container type defined in 
https://github.com/protocolbuffers/protobuf/blob/main/python/google/protobuf/internal/containers.py. 
Is there some way this process could be accelerated?
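
To illustrate the obstacle, this is roughly what happens when the container 
is handed to numba directly (a sketch; msg is the placeholder message from 
above):

    import numba

    @numba.njit
    def checksum(values):
        total = 0
        for v in values:
            total += v
        return total

    # Raises a numba TypingError: numba cannot infer a type for the protobuf
    # RepeatedScalarContainer, so the loop is never compiled.
    checksum(msg.array)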

Thankful for any help
