
RFC: Proposal: Deterministic Object Destruction


On Sun, 04 Mar 2018 03:37:38 -0800, Ooomzay wrote:

> Please consider the case of a composite resource: You need to implement
> __enter__, __exit__ and track the open/closed state at every level in
> your component hierarchy - even if some levels hold no resources
> directly.

I'm sorry, that description is too abstract for me to understand. Can you 
give a simple example?


> This is burdensome, breaks encapsulation, breaks invariance and is error
> prone ...very unpythonic.

Without a more concrete example, I cannot comment on these claims.


[...]
> My PEP is about improving the linguistic integrity and fitness for
> resource management purpose of the language.

So you claim, but *requiring* reference counting semantics does not 
improve the integrity or fitness of the language. And the required 
changes to programming styles and practices (no cycles, no globals, put 
all resources inside their own scope) demonstrate that this is a step 
backwards.
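For instance, a single reference cycle is enough to defeat refcount-based destruction in CPython today: the objects are only reclaimed when the cyclic garbage collector happens to run. A small sketch (class and variable names are illustrative), using weakref.finalize to observe when collection actually happens:

```python
import gc
import weakref

class Holder:
    """Stands in for an object that owns a resource."""
    pass

a = Holder()
b = Holder()
a.other = b
b.other = a  # reference cycle: refcounts can never reach zero

collected = []
weakref.finalize(a, collected.append, True)

del a, b
# The last names are gone, but the cycle keeps both objects alive,
# so the finalizer has not fired yet:
print(collected)  # []

gc.collect()      # only the cycle collector can reclaim them
print(collected)  # [True]
```

Under a scheme that *requires* destruction at the moment the refcount drops, code like this simply has no well-defined cleanup point.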

Right now I can write reliable code that uses external resources (such as 
a database connection or file) and put it in my application's module 
scope, and still easily manage the resource lifetime. I cannot do that by 
relying only on RAII. That's a step backwards as far as language fitness 
goes.
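A minimal sketch of that pattern (the log-file resource and the use of a temp file are illustrative assumptions): the resource lives at module scope, and its lifetime is managed explicitly rather than inferred from how many references to it remain.

```python
import atexit
import os
import tempfile

# A module-scope resource. Its lifetime is managed explicitly,
# not deduced from reference counts.
fd, path = tempfile.mkstemp()
LOG = os.fdopen(fd, 'w')

def shutdown():
    # Deterministic, explicit cleanup at interpreter exit,
    # regardless of how many references to LOG still exist.
    if not LOG.closed:
        LOG.close()

atexit.register(shutdown)

LOG.write("application started\n")
```

Nothing here depends on predicting when the last reference to LOG disappears.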


>> In any case, you might not like with statements, but I think they're
>> infinitely better than:
>> 
>> def meaningless_function_that_exists_only_to_manage_resource():
>>     x = open_resource()
>>     process(x)
> 
>> def function():
>>     meaningless_function_that_exists_only_to_manage_resource()
>>     sleep(10000)  # simulate a long-running function
> 
> Why would you prefer a new construct?

I don't prefer a new construct. The "with" statement isn't "new". It goes 
back to Python 2.5 (`from __future__ import with_statement`) which is 
more than eleven years old now. That's about half the lifetime of the 
language!

I prefer the with statement because it is *explicit*, simple to use, and 
clear to read. I can read some code and instantly see that when the with 
block ends, the resource will be closed, regardless of how many 
references to the object still exist.

I don't have to try to predict (guess!) when the last reference will go 
out of scope, because that's irrelevant.

RAII conflates the lifetime of the object with the lifetime of the 
resource held by the object. They are not the same, and the object can 
outlive the resource.
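Both points fit in a few lines: when the with block ends, the *resource* is released, even though the *object* is still perfectly reachable (a sketch using a temporary file):

```python
import tempfile

refs = []
with tempfile.TemporaryFile('w+') as f:
    refs.append(f)        # keep a second reference to the file object
    f.write("hello")
# The with block has ended: the resource is closed ...
print(refs[0].closed)     # True
# ... but the object lives on, still reachable through refs.
```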

Your position is:

"RAII makes it really elegant to close the file! All you need to do is 
make sure that when you want to close the file, you delete all the 
references to the file, so that it goes out of scope, and the file will 
be closed."

My position is:

"If I want to close the file, I'll just close the file. Why should I care 
that there are zero or one or a million references to it?"


>> - the with block is equivalent to a try...finally, and so it is
>>   guaranteed to close the resource even if an exception occurs; your
>>   solution isn't.
> 
>> If process(x) creates a non-local reference to x, and then raises an
>> exception, and that exception is caught elsewhere, x will not go out of
>> scope and won't be closed.
>> A regression in the reliability of the code.
> 
> This PEP does not affect existing code. Peeps who are familiar with RAII
> understand that creating a global reference to an RAII resource is
> explicitly saying "I want this kept open at global scope" and that is
> the behaviour that they will be guaranteed.

I'm not talking about global scope. Any persistent reference to the 
object will prevent the resource from being closed. 

Here's a proof of concept which demonstrates the problem with conflating 
object scope and resource lifetime. Notice that there are no globals used.


def main():
    from time import sleep
    values = []
    def process():
        f = open('/tmp/foo', 'w')
        values.append(f)
        f.write("Hello world!")
        f.read()  # oops! the file was opened write-only, so this raises
        del values[-1]
    try:
        process()
    except IOError:
        pass
    # The file should be closed now. But it isn't.
    sleep(10)  # simulate doing a lot of work
    g = open('/tmp/foo', 'r')
    assert g.read() == "Hello world!"


The assertion fails, because the file hasn't been closed in a timely 
manner. On the other hand:
manner. On the other hand:

def main2():
    from time import sleep
    values = []
    def process():
        with open('/tmp/bar', 'w') as f:
            values.append(f)
            f.write("Hello world!")
            f.read()  # oops! the file was opened write-only, so this raises
            del values[-1]
    try:
        process()
    except IOError:
        pass
    sleep(10)  # simulate doing a lot of work
    g = open('/tmp/bar', 'r')
    assert g.read() == "Hello world!"


The assertion here passes.

Now, these are fairly contrived examples, but in real code the resource 
owner might be passed into an iterator, or bound to a class attribute, or 
anything else that holds onto a reference to it. As soon as that happens, 
and there's another reference to the object anywhere, RAII will be unable 
to close the resource in a timely manner.
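The class-attribute case can be sketched in a few lines (the names Resource and Cache are purely illustrative): once something stashes a reference on a class, deleting every local name leaves the refcount above zero, so destruction-based cleanup never fires.

```python
class Resource:
    def __init__(self):
        self.open = True
    def close(self):
        self.open = False
    def __del__(self):
        # Under an RAII scheme, cleanup lives here ...
        self.close()

class Cache:
    last = None          # class attribute that quietly retains a reference

def process(r):
    Cache.last = r       # stash a reference as a side effect
    return r.open

r = Resource()
process(r)
del r                    # the "owning" name is gone ...
print(Cache.last.open)   # True: __del__ never ran, resource still open
```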



-- 
Steve