shibd commented on PR #955:
URL: https://github.com/apache/pulsar-client-go/pull/955#issuecomment-1449284692
> > This semaphore can only support acquire or release 1 signal.
>
> Yes. But we can adopt a similar way to handle this case.
>
> For example, if we add a `ch chan bool` field to `memoryLimitController`, we can implement a blocking `ReserveMemory` method like:
>
> ```go
> func (m *memoryLimitController) ReserveMemory(ctx context.Context, size int64) bool {
>     // NOTE: maybe we need to check if m.currentUsage > m.limit first
>     currentUsage := atomic.AddInt64(&m.currentUsage, size)
>     for currentUsage > m.limit {
>         select {
>         case <-m.ch:
>             currentUsage = atomic.LoadInt64(&m.currentUsage)
>         case <-ctx.Done(): // NOTE: Not sure if we need to reset some fields here
>             return false
>         }
>     }
>     return true
> }
>
> func (m *memoryLimitController) ReleaseMemory(size int64) {
>     newUsage := atomic.AddInt64(&m.currentUsage, -size)
>     // newUsage+size > m.limit means m was blocked in the ReserveMemory method
>     if newUsage+size > m.limit && newUsage <= m.limit {
>         m.ch <- true
>     }
> }
> ```
>
> The code above is not verified yet, but with the channel the code looks simpler and clearer.
This implementation has a problem: `broadcasting` cannot be implemented with a plain channel.
For example:
1. `currentUsage` = 100, `limit` = 100
2. Goroutine `1` calls `ReserveMemory(10)`; it will block.
3. Goroutine `2` calls `ReserveMemory(10)`; it will block.
4. Goroutine `3` calls `ReleaseMemory(20)`; it only wakes one goroutine (`1` or `2`). The expectation is that both are woken up and return `true`.
> // NOTE: maybe we need to check if m.currentUsage > m.limit first

Regarding this note: if we want to handle this case, we may need to introduce a variable such as `waitNum`. So I feel it is complicated to implement `mem_controller_limit` this way, and it would be better to introduce `channel_cond`.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]