On Sat, Apr 18, 2009 at 12:16 AM, erik quanstrom <quans...@quanstro.net> wrote:
>> On Fri, Apr 17, 2009 at 11:37 PM, erik quanstrom <quans...@quanstro.net> 
>> wrote:
>> >> I can imagine a lot of problems stemming from open files could be
>> >> resolved by first attempting to import the process's namespace at the
>> >> time of checkpoint and, upon that failing, using cached copies of the
>> >> file made at the time of checkpoint, which could be merged later.
>> >
>> > there's no guarantee to a process running in a conventional
>> > environment that files won't change underfoot.  why would
>> > condor extend a new guarantee?
>> >
>> > maybe i'm suffering from lack of vision, but i would think that
>> > to get to 100% one would need to think in terms of transactions
>> > and have a fully transactional operating system.
>> >
>> > - erik
>> >
>>
>> There's a much lower chance of files changing out from under you in a
>> conventional environment. If the goal is to make the "unconventional"
>> environment look and act like the conventional one, it will probably
>> have to try to do some of these things to be useful.
>
> * you can get the same effect by increasing the scale of your system.
>
> * the reason conventional systems work is not, in my opinion, because
> the collision window is small, but because one typically doesn't do
> conflicting edits to the same file.
>
> * saying that something "isn't likely" in an unquantifiable way is
> not a recipe for success in computer science, in my experience.
>
> - erik
>

I don't see how any of that relates to having to do more work to
ensure that C/R and process migration across nodes work and keep
things as consistent as possible.
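
For concreteness, here is a rough sketch in C of the checkpoint-time
fallback mentioned earlier in the thread: at restore, try to re-attach
to the file the process had open and, if it is gone or has changed
underfoot, fall back to a snapshot cached at checkpoint time, to be
merged back later. The struct, the field names, and the size/mtime
freshness check are my own illustrative assumptions, not Condor's
actual interface.

/*
 * try the original file first; if it is missing or has changed
 * since checkpoint, hand back the cached snapshot instead.
 */
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

struct ckptfile {
	char	path[1024];	/* path the process had open */
	char	cache[1024];	/* snapshot taken at checkpoint */
	off_t	size;		/* size recorded at checkpoint */
	time_t	mtime;		/* mtime recorded at checkpoint */
	off_t	offset;		/* file offset to restore */
};

int
restorefd(struct ckptfile *cf)
{
	struct stat st;
	int fd;

	fd = open(cf->path, O_RDWR);
	if(fd >= 0 && fstat(fd, &st) == 0
	&& st.st_size == cf->size && st.st_mtime == cf->mtime){
		lseek(fd, cf->offset, SEEK_SET);
		return fd;		/* original file is intact */
	}
	if(fd >= 0)
		close(fd);

	/* original missing or modified: use the checkpoint-time copy */
	fd = open(cf->cache, O_RDWR);
	if(fd >= 0)
		lseek(fd, cf->offset, SEEK_SET);
	return fd;
}

Whether the cached copy can later be merged back cleanly is exactly
the transactional question erik raises; the sketch only decides which
copy to hand to the restored process.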
