Post by Jürgen Beck
The process of figuring out which objects must be deleted
works differently, though.
The GC does _not_ figure out which objects must be deleted.
Instead, during a collection every object is initially slated for
deletion, and the GC then figures out which objects are still
referenced from the code.
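A minimal C# sketch of this reachability model (whether the object
really disappears on the very first collection can depend on build
settings and the runtime, so the output is only typically "False"):

using System;

class Program
{
    static void Main()
    {
        object obj = new object();
        WeakReference weak = new WeakReference(obj);

        obj = null;     // no root references the object anymore
        GC.Collect();   // everything is treated as garbage first;
                        // only still-reachable objects survive

        // The object was not referenced from code, so it is gone.
        Console.WriteLine(weak.IsAlive);  // typically: False
    }
}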
Post by Markus Sandheide
But how is that supposed to work if, for example,
B accesses C in its destructor and
C accesses B in its destructor?
There is no destructor anymore.
Instead there is only a finalizer, which (unfortunately) looks
syntactically identical.
Only unmanaged resources may be released there.
Accessing managed objects in order to free them makes no sense here,
since they are freed automatically anyway.
Instead, you should use the Dispose design pattern (see the
.NET documentation) to release resources explicitly.
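A minimal sketch of that pattern (the IntPtr field and the
commented-out NativeMethods.CloseHandle call are just placeholders
for whatever unmanaged resource you actually hold):

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr handle;   // the unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);  // disposed explicitly, so no
                                    // finalization is needed anymore
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
            return;

        if (disposing)
        {
            // Reached only via Dispose(): managed members may be
            // released here (by calling Dispose on them).
        }

        // The unmanaged resource is released on both paths.
        if (handle != IntPtr.Zero)
        {
            // NativeMethods.CloseHandle(handle);  // placeholder
            handle = IntPtr.Zero;
        }

        disposed = true;
    }

    // Finalizer: a safety net only; it must not touch managed objects.
    ~ResourceHolder()
    {
        Dispose(false);
    }
}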
On that note, see also:
http://blogs.msdn.com/ricom/archive/2003/12/02/40780.aspx
Rico Mariani is a CLR Performance Architect at Microsoft and should
know what he is talking about.
Excerpt:
"Almost-rule #2: Never have finalizers
(except on leaf objects which hold a single unmanaged resource and nothing
else)
In C++ it's common to use destructors to release memory. It's tempting to do
that with finalizers but you must not. The GC will collect the memory
associated with the members of a dead object just fine without those members
needing to be nulled. You only need to null the members if you want part of
the state to go away early, while the object is still alive (see above). So
really the only reason to finalize is if you're holding on to an unmanaged
resource (like some kind of operating system handle) and you need to Close
the handle or something like that.
Why is this so important? Well, objects that need to be finalized don't die
right away. When the GC discovers they are dead they are temporarily brought
back to life so that they can get queued up for finalization by the
finalizer thread. Since the object is now alive so is everything that it
points to! So if you were to put a finalizer on all tree nodes in an
application for instance, when you released the root of the tree, no memory
would be reclaimed at first because the root of the tree holds on to
everything else. Yuck! If those tree nodes need to be finalized because they
might hold an unmanaged resource it would be much better* to wrap that
unmanaged resource in an object that does nothing else but hold the resource
and let that wrapper object be the finalized thing. Then your tree nodes are
just normal and the only thing that's pending finalization is the one
wrapper object (a leaf), which doesn't keep any other objects alive.
*Note: Whenever I say "it would be much better" that's special
performance-guy-techno-jargon, what I really mean is: "It probably would be
much better but of course you have to measure to know for sure because I can
never predict anything."
Actually, the situation is even worse than I made it out to be above, when
an object has a finalizer it will necessarily survive at least one
collection (because of being brought back to life) which means it might very
well get promoted. If that happens, even the next collect won't reclaim the
memory, you need the next collect for the next bigger generation to reclaim
the memory, and if things are going well the next higher level of collect
will be happening only 1/10th as often, so that could be a long time. All
the more reason to have as little memory as possible tied up in finalizable
objects and all the more reason to use the Dispose pattern whenever possible
so that finalization is not necessary.
Of course if you never have finalizers, you won't have to worry about these
problems."
--
Jürgen Beck
MCSD.NET, MCDBA, MCT
www.Juergen-Beck.de