On Wednesday, 7 October 2015 at 13:15:11 UTC, Paulo Pinto wrote:
On Wednesday, 7 October 2015 at 12:56:32 UTC, bitwise wrote:
On Wednesday, 7 October 2015 at 07:24:03 UTC, Paulo Pinto wrote:
On Tuesday, 6 October 2015 at 20:43:42 UTC, bitwise wrote:
[...]

That no, but this yes (at least in C#):

// LevelManager implements IDisposable; Dispose() runs when the block exits.
using (LevelManager mgr = new LevelManager())
{
     //....
     // Somewhere in the call stack
     Texture tex = mgr.getTexture();
}
--> All level resources that require manual management are gone
--> Ask the GC to collect the remaining memory right now

If not level-wide, then maybe scene/section-wide.

However, I do get that not all architectures are amenable to being rewritten in a GC-friendly way.

But the approach is similar to RAII in C++: reduce new to a minimum and allocate via factory functions that work together with handle-manager classes.
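For comparison, a rough D analogue of that C# block might look like the sketch below. It is only a sketch: LevelManager, Texture, getTexture, and dispose are made-up names, and scope(exit) plays the role of using.

import core.memory : GC;

class Texture { }

class LevelManager
{
    // Hand out a handle that the manager tracks.
    Texture getTexture() { return new Texture; }

    // Free the level's manually managed resources deterministically.
    void dispose() { }
}

void loadLevel()
{
    auto mgr = new LevelManager;
    scope (exit)
    {
        mgr.dispose(); // all manually managed level resources gone
        GC.collect();  // ask the GC to collect the remaining memory right now
    }

    // ....
    // Somewhere in the call stack
    Texture tex = mgr.getTexture();
}

void main() { loadLevel(); }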

--
Paulo

Still no ;)

It's a Texture. It's meant to be seen on the screen for a while, not destroyed in the same scope in which it was created.

In games, though, we have a scene graph. When things happen, we often chip off a large part of it while the game is running, discard it, and load something new. When we're heavily constrained by memory, we need to know that what we just discarded has been destroyed completely before we start loading new stuff. And even in cases where we aren't that constrained by memory, we need to know things have been destroyed, period, for non-memory resources.

Also, when using graphics APIs like OpenGL, we need control over which thread an object is destroyed in, because you can't access OpenGL resources from just any thread. Now, you could set up some complicated queue where you send textures and so on to be destroyed later, but this is just complicated. Picture a Hello OpenGL app in D and the hoops some noob would have to jump through. It's bad news.
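To make that concrete, such a queue might look roughly like the sketch below. The names are made up, and deleteTexture merely stands in for the real glDeleteTextures call, which must run on the thread that owns the GL context.

import core.sync.mutex : Mutex;

alias GLuint = uint;

// Stand-in for glDeleteTextures(1, &id); must run on the GL thread.
void deleteTexture(GLuint id) { }

final class GLDestructionQueue
{
    private GLuint[] pending;
    private Mutex mtx;

    this() { mtx = new Mutex; }

    // Any thread (e.g. a destructor) may enqueue a handle for deletion.
    void enqueue(GLuint handle)
    {
        synchronized (mtx) pending ~= handle;
    }

    // Drained once per frame by the thread that owns the GL context.
    void drain()
    {
        GLuint[] batch;
        synchronized (mtx)
        {
            batch = pending;
            pending = null;
        }
        foreach (id; batch)
            deleteTexture(id);
    }
}

Every GL-owning object then has to know about the queue, which is exactly the kind of ceremony a newcomer shouldn't need for a Hello OpenGL app.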

Also, I should add that a better example of the Texture thing would be a regular Texture and a RenderTexture. You can only draw to the RenderTexture, but you should be able to apply both to a primitive for drawing. You need polymorphism for this; a struct will not do.
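Roughly like this, with hypothetical names:

class Texture
{
    void bind() { /* bind for sampling, e.g. glBindTexture */ }
}

class RenderTexture : Texture
{
    void bindAsTarget() { /* bind as draw target, e.g. an FBO */ }
}

// A primitive accepts any Texture, so both kinds can be applied to it.
void applyToPrimitive(Texture tex)
{
    tex.bind();
}

Structs in D don't support inheritance, so only classes give you this substitutability.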

    Bit

I guess you misunderstood the "// Somewhere in the call stack" comment.

It is meant as the logical region where that scene graph block you refer to is valid.

Anyway, I was just explaining what is possible when one embraces the tools GC languages offer.

I still don't think your example exists in real-world applications. Typically, you don't have that kind of control over the application's control flow, and you don't really have the option of unwinding the stack when you want to clean up. Most applications these days are event-based. When things are loaded or unloaded, it's usually the result of some event callback originating from either an input event or a display-link callback. To clarify: on iOS, you don't have a game loop. You register a display link or timer which calls your 'draw' or 'update' function at a fixed interval. On top of this, you just can't rely on a strict hierarchical ownership of resources like this. Large bundles of resources may be loaded/unloaded in any order, at any time.

And both Java and .NET do offer support for that kind of queue as well.

I was actually thinking about this.

If D had a standard run loop of some sort (like NSRunLoop/performSelectorOnThread: on iOS/OS X), it would make queueing things to other threads a little easier. I suppose D's receive() API could be used to build something a little more specialized. But although this would allow classes to delegate the destruction of resources to the correct thread, it wouldn't resolve the problem that those destruction commands are still only issued if/when a class's destructor is actually called.
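As a sketch of what that might look like with std.concurrency (the DestroyTexture message and the renderThread loop are hypothetical):

import std.concurrency;

// Hypothetical message asking the render thread to delete a GL texture.
struct DestroyTexture { uint handle; }

void renderThread()
{
    for (bool running = true; running; )
    {
        receive(
            (DestroyTexture msg) {
                // glDeleteTextures(1, &msg.handle) in a real renderer
            },
            (OwnerTerminated e) { running = false; }
        );
    }
}

void main()
{
    auto tid = spawn(&renderThread);
    // From any thread that drops a texture:
    send(tid, DestroyTexture(42));
}

This moves the deletion onto the thread that owns the GL context, but the send still only happens if the destructor that issues it ever runs.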

In general, I advocate any form of automatic memory/resource management.

+1 :)


