Are you using the protobuf Python C++ extension or pure Python?

We do have a design for "Optimized Python NumPy Bindings for Repeated 
Fields" in the C++ extension, but the implementation broke too many users, 
so it was blocked. I will check with the owner to see whether we can go 
forward.
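
In the meantime, a common workaround (just a sketch, not an official API) is 
to bypass the repeated-field container entirely: if you can change the 
.proto, send the raw little-endian bytes in a bytes field and wrap them with 
np.frombuffer on the Python side, which shares memory with the payload 
instead of converting 18M elements one by one. The field name array_raw 
below is hypothetical:

    import numpy as np

    # Hypothetical .proto change: replace
    #     repeated sint32 array = 4 [packed=true];
    # with
    #     bytes array_raw = 4;
    # and have the Go client serialize the values as little-endian int32
    # (e.g. with encoding/binary) before setting the field.

    def decode_array(msg):
        # np.frombuffer wraps the existing bytes without copying; the
        # resulting array is read-only and shares memory with msg.array_raw.
        return np.frombuffer(msg.array_raw, dtype='<i4')

The trade-off is that you lose protobuf's varint/zigzag encoding for the 
array, so the wire size may grow for small values, but decoding on the 
Python side becomes effectively free.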
On Wednesday, June 15, 2022 at 14:59:14 UTC-7, Daniel wrote:

> I am having the following problem:
>
> I have implemented a protobuf to send data from a Go client to a 
> Python server using a gRPC stream. The data I am sending needs to be loaded 
> quickly on the Python server and processed. It is a composite message 
> whose largest part is a packed repeated sint32 field, defined as: repeated 
> sint32 array = 4 [packed=true];
>
> This field contains around 18,000,000 entries, and when I try to load it 
> into my code with the line data = np.array(array_obj, dtype=np.int8), the 
> call takes around 1.5 seconds. I have tried alternative methods, such as 
> first reading the data as a list (which is no faster) and passing 
> copy=False to numpy... I just want to access the memory where 
> these values are stored...
>
> I would like to try something such as Numba or Cython, but both of those 
> would require me to implement the complete container type defined here: 
> https://github.com/protocolbuffers/protobuf/blob/main/python/google/protobuf/internal/containers.py
> Is there some way this process could be accelerated?
>
> Thankful for any help
>
