[ 
https://issues.apache.org/jira/browse/BEAM-13777?focusedWorklogId=718201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-718201
 ]

ASF GitHub Bot logged work on BEAM-13777:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 31/Jan/22 21:14
            Start Date: 31/Jan/22 21:14
    Worklog Time Spent: 10m 
      Work Description: lukecwik merged pull request #16652:
URL: https://github.com/apache/beam/pull/16652


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 718201)
    Time Spent: 20m  (was: 10m)

> confluent schema registry cache capacity
> ----------------------------------------
>
>                 Key: BEAM-13777
>                 URL: https://issues.apache.org/jira/browse/BEAM-13777
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-core
>            Reporter: Mostafa Aghajani
>            Assignee: Mostafa Aghajani
>            Priority: P2
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> The cache capacity should be specified as an input parameter instead of 
> defaulting to the maximum integer value. Usage varies considerably from case 
> to case, and a default of Integer.MAX_VALUE can lead to errors like this, 
> depending on the setup:
> {{Exception in thread "main" java.lang.OutOfMemoryError: Java heap space}}
> Documentation for the corresponding parameter: 
> [https://docs.confluent.io/5.4.2/clients/confluent-kafka-dotnet/api/Confluent.SchemaRegistry.CachedSchemaRegistryClient.html#Confluent_SchemaRegistry_CachedSchemaRegistryClient_DefaultMaxCachedSchemas]
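For illustration, here is a minimal Java sketch of what a bounded cache looks like with the Confluent Java client's CachedSchemaRegistryClient(baseUrl, identityMapCapacity) constructor. The registry URL and the chosen capacity are placeholders, and the comment about how Beam's ConfluentSchemaRegistryDeserializerProvider would consume such a client is an assumption about the fix in PR #16652, not a description of its exact API.

{code:java}
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class BoundedSchemaCacheExample {
  public static void main(String[] args) {
    // Hypothetical registry endpoint, used only for illustration.
    String registryUrl = "http://localhost:8081";

    // The issue: hard-coding Integer.MAX_VALUE as the cache capacity lets the
    // client's schema cache grow until the JVM runs out of heap space.
    // Instead, let the caller choose a bound that fits their workload.
    int maxCachedSchemas = 100; // placeholder value, tuned per setup

    // CachedSchemaRegistryClient(baseUrl, identityMapCapacity) caps how many
    // schemas the client caches.
    SchemaRegistryClient client =
        new CachedSchemaRegistryClient(registryUrl, maxCachedSchemas);

    // A client built this way could then back a deserializer provider
    // (e.g. Beam's ConfluentSchemaRegistryDeserializerProvider) instead of
    // one constructed with an effectively unbounded cache -- assumption about
    // how the capacity parameter is plumbed through, not the actual patch.
  }
}
{code}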



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
