gRPC enforces a default 4MB limit on message size (4194304 bytes, which is 
exactly the cap in your error), and ProtoBuf has a 2GB hard limit.

I wonder what the real size of `chunks` is; sys.getsizeof 
<https://docs.python.org/3/library/sys.html#sys.getsizeof> only reports the 
container's own overhead, not the size of the objects it references.
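For illustration, here is how sys.getsizeof undercounts a container. This is a generic sketch with a made-up list of strings, not your actual `chunks`; entity objects would need a deeper traversal, but the shallow-vs-deep gap is the same:

```python
import sys

# Roughly 10 MB of payload spread over 10 strings.
chunks = ["x" * 1_000_000 for _ in range(10)]

# sys.getsizeof sees only the list object itself (pointers + header).
shallow = sys.getsizeof(chunks)

# Adding each element's size gives a much more realistic estimate.
deep = shallow + sum(sys.getsizeof(c) for c in chunks)

print(shallow)  # a few hundred bytes
print(deep)     # roughly 10 MB
```

So a reported size of 200 bytes says nothing about how large the serialized request actually is.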

If possible, can you break the data into multiple messages?
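One way to do that, sketched with a hypothetical `batched` helper (the batch size of 100 is an assumption; tune it so each batch serializes well under the 4MB cap):

```python
def batched(items, batch_size):
    """Yield successive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Hypothetical usage with google.cloud.ndb (not executed here):
# for batch in batched(chunks, 100):
#     ndb.put_multi(batch)
```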

On Thursday, February 27, 2020 at 6:11:52 AM UTC-8 [email protected] 
wrote:

> I am getting an error in Python 3.6.6 while trying to write data using 
> ndb.put_multi(chunks), where *ndb* is *google.cloud.ndb* from 
> google-cloud-ndb==1.0.1 <https://pypi.org/project/google-cloud-ndb/>
>
>    from google.cloud import ndb
>
> We are getting the following error while doing *ndb.put_multi(chunks)*, 
> even though sys.getsizeof(chunks) reports only 200 *bytes*:
>
>> [datastore] Feb 27, 2020 6:53:21 PM io.grpc.netty.NettyServerStream$TransportState deframeFailed
>> [datastore] WARNING: Exception processing message
>> [datastore] io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: *gRPC message exceeds maximum size 4194304: 13208641*
>> [datastore]     at io.grpc.Status.asRuntimeException(Status.java:521)
>> [datastore]     at io.grpc.internal.MessageDeframer.processHeader(MessageDeframer.java:387)
>> [datastore]     at io.grpc.internal.MessageDeframer.deliver(MessageDeframer.java:267)
>> [datastore]     at io.grpc.internal.MessageDeframer.request(MessageDeframer.java:161)
>> [datastore]     at io.grpc.internal.AbstractStream$TransportState.requestMessagesFromDeframer(AbstractStream.java:205)
>> [datastore]     at io.grpc.netty.NettyServerStream$Sink$1.run(NettyServerStream.java:100)
>> [datastore]     at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>> [datastore]     at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
>> [datastore]     at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
>> [datastore]     at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
>> [datastore]     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>> [datastore]     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>> [datastore]     at java.base/java.lang.Thread.run(Thread.java:830)
>
> I am running the Datastore emulator locally, started with the gcloud beta 
> emulators datastore start command.
>
> Thanks in advance for your help.
>

To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/d4e183e1-63c0-461e-a49c-c9b23c0c36a5%40googlegroups.com.
