On 2/02/20 1:00 AM, R.Wieser wrote:
As sent to the OP. I appreciate these discussions, in the expectation of
learning something new. (and with rust-removal paints at the ready!)
Indeed. Even if it's just a different POV which makes you rethink the
reasons for your own one.
+1
--
Regards =dn
DL,
>> While I agree with you there, I've been searching for other ways to
>> detect a
>> keypress (in a console-based script) and have found none.
>
> Color me disappointed!
I was disappointed too, but realized that being able to just capture any
keypress (how does the 'puter know the
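Detecting a single keypress in a console script, as discussed above, might be sketched like this. This is a minimal sketch, not from the thread itself: it assumes `msvcrt` on Windows and `termios`/`tty`/`select` on POSIX, and simply returns `None` when no key is waiting (or when stdin is not a terminal).

```python
import sys

def key_pressed():
    """Return a pending keypress as a 1-char string, or None (non-blocking)."""
    if not sys.stdin.isatty():
        return None  # no terminal attached, nothing to poll
    if sys.platform == "win32":
        import msvcrt
        return msvcrt.getwch() if msvcrt.kbhit() else None
    else:
        import select, termios, tty
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setcbreak(fd)  # deliver keys immediately, without Enter
            ready, _, _ = select.select([sys.stdin], [], [], 0)
            return sys.stdin.read(1) if ready else None
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
```

Calling this inside the download loop lets the script poll for a quit key between transfers instead of relying on Ctrl-C.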
On 31/01/20 9:53 PM, R.Wieser wrote:
Using ctrl+c is a VERY BAD idea.
To have it just exit the program ? Yes, indeed.
Though you /could/ keep track of what needs to be finished and have the
ctrl-c handler do that for you (barf).
Another possibility is to capture the ctrl-c and set a flag,
jkn,
> I'm happy to consider the risk and choose (eg.) the hash function
> accordingly, thanks.
No problem, just wanted you to be aware and (thus) able to choose.
Regards,
Rudy Wieser
--
https://mail.python.org/mailman/listinfo/python-list
jkn,
> I think a combination of hashing the URL,
I hope you're not thinking of saving the hash (into the "done" list) instead
of the URL itself. While hash collisions do not happen often (especially
not in a small list), you cannot rule them out. And when that happens that
would mean you
DL,
>> Nothing that can't be countered by keeping copies of the last X number of
>> to-be-downloaded-URLs files.
>
> That's a good idea, but how would the automated system 'know' to give-up
> on the current file and utilise generation n-1? Unable to open the file or
> ???
Well, that would be one
Dennis,
> A full client/server RDBM should never be affected by an abort
> of a client program.
What you describe is on the single query level. What I was thinking of was
having several queries that /should/ work as a single unit, but could get
interrupted (because of the OP's ctrl-c).
Yes,
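The concern above — several queries that should work as a single unit being interrupted mid-way — is what an explicit transaction addresses. A sketch with `sqlite3`; the table and column names are hypothetical, not from the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (url TEXT PRIMARY KEY, state TEXT)")
conn.execute("CREATE TABLE log (url TEXT, note TEXT)")
conn.execute("INSERT INTO urls VALUES ('http://example.com/a', 'pending')")
conn.commit()

def mark_done(conn, url):
    # Both statements commit together, or neither does: any exception
    # raised inside the block (including KeyboardInterrupt) rolls back.
    with conn:
        conn.execute("UPDATE urls SET state = 'done' WHERE url = ?", (url,))
        conn.execute("INSERT INTO log VALUES (?, 'downloaded')", (url,))

mark_done(conn, "http://example.com/a")
```

With the statements grouped this way, a ctrl-c between the UPDATE and the INSERT leaves the database as it was before the unit started, not half-updated.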
On 30/01/20 9:35 PM, R.Wieser wrote:
MRAB's scheme does have the disadvantages to me that Chris has pointed
out.
Nothing that can't be countered by keeping copies of the last X number of
to-be-downloaded-URLs files.
That's a good idea, but how would the automated system 'know' to give-up
on
Err, well, thanks for that discussion gents...
As it happens I do know how to use a database, but I regard it as overkill for
what I am trying to do here. I think a combination of hashing the URL,
and using a suffix to indicate the result of previous downloaded attempts, will
work adequately for
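The hash-plus-suffix scheme jkn describes might be sketched like this. The status values and filename layout are assumptions for illustration; note R.Wieser's caveat elsewhere in the thread that storing only the hash means a (rare) collision would silently conflate two URLs, so keeping the full URL recorded somewhere as well is safer.

```python
import hashlib

def url_key(url):
    # Hash the URL to get a fixed-length, filesystem-safe key.
    return hashlib.sha256(url.encode("utf-8")).hexdigest()

def status_filename(url, status):
    # The suffix records the outcome of the last attempt,
    # e.g. 'ok' or 'failed' (hypothetical values).
    return f"{url_key(url)}.{status}"
```

A download loop could then glob for `*.ok` markers on startup to decide what is already done and what needs retrying.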
On Fri, Jan 31, 2020 at 4:11 AM R.Wieser wrote:
> But, do you remember what the OP said ?
> [quote]
> want to download these as a 'background task'. ... you can CTRL-C out,
> [/quote]
>
> Why now do I think that, when such a background process is forgotten and the
> 'puter switched off, the file,
Chris,
> Yes, and then you backpedalled furiously when I showed that
> proper transactions prevent this.
You're a fool, out for a fight.
/You/ might know exactly how to handle a database to make sure its
/transactions/ will not leave the database in a corrupt state, but as I
mentioned a few
Chris,
>> I think that a database is /definitely/ overcomplicating stuff,
>
> Okay, sure... but you didn't say that.
I'm sorry ? In my first reply I described a file-based approach and
mentioned that the folder approach is a rather good one. What do you think
I meant there ?
> You said that
On Thu, 30 Jan 2020 23:34:59 +1100
Chris Angelico wrote:
> ... I wasn't advocating for the use of a database; my first and
> strongest recommendation was, and still is, a stateless system wherein
> the files themselves are the entire indication of which documents have
> been downloaded.
Yes, I
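Chris's stateless recommendation — the downloaded files themselves are the entire record — might be sketched like this. A minimal sketch under assumed naming: the local filename is derived from the last path component of the URL, which is a simplification.

```python
from pathlib import Path
from urllib.parse import urlparse

def already_downloaded(url, dest=Path("downloads")):
    # Stateless check: the presence of the file IS the record.
    # No separate "done" list to keep in sync or to corrupt on Ctrl-C.
    name = Path(urlparse(url).path).name or "index.html"
    return (dest / name).exists()
```

On restart the script simply skips any URL whose file already exists, so an interrupt at any point costs at most one partial download (which can be written to a temp name and renamed on completion to avoid even that).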
Chris,
> That's what transactions are for.
Again,
>> I guess that that went right over your head. :-) /You/ might know
>> exactly
>> what should and shouldn't be done, what makes you think the OP currently
>> does ?
> I don't understand why you're denigrating databases,
Am I denigrating a
Chris,
> Uhh
>
> Proper databases don't HAVE non-atomic operations. That's kinda their job.
Uhh... yes, /singular/ operations are considered to be atomic. A series of
operations /meant/ to be executed as a single one on the other hand aren't.
> Unless you mean that there's a non-atomic
On Thu, Jan 30, 2020 at 7:41 PM R.Wieser wrote:
> Also think of the old adagio:
BTW, the word you want here is "adage", unless you mean that it's a
piece of music being played slowly :)
ChrisA
On Thu, Jan 30, 2020 at 7:41 PM R.Wieser wrote:
> A database /sounds/ good, but what happens when you ctrl-c out of a
> non-atomic operation ? How do you fix that ? IOW: Databases can be
> corrupted for pretty-much the same reason as for a simple datafile (but with
> much worse consequences).
jkn,
> MRAB's scheme does have the disadvantages to me that Chris has pointed
> out.
Nothing that can't be countered by keeping copies of the last X number of
to-be-downloaded-URLs files.
As for rewriting every time, you will /have/ to write something for every
action (and flush the file!),
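The write-and-flush-per-action point above might be sketched as an append-only "done" log. The filenames and helper names are hypothetical; the key detail is flushing (and fsync-ing) after every entry so a ctrl-c cannot lose it.

```python
import os

def record_done(url, log_path="done.txt"):
    # Append-only: each completed URL is one line, flushed immediately
    # so an interrupt right after the write can't lose the entry.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(url + "\n")
        f.flush()
        os.fsync(f.fileno())

def load_done(log_path="done.txt"):
    # On restart, re-read the log to know what to skip.
    try:
        with open(log_path, encoding="utf-8") as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()
```

Appending avoids rewriting the whole URL list on every action; the worst case after an abort is one duplicated download, never a lost record.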
On Thu, Jan 30, 2020 at 8:59 AM DL Neil via Python-list
wrote:
> * NB I don't use SQLite (in favor of going 'full-fat') and thus cannot
> vouch for its behavior under load/queuing mechanism/concurrent
> accesses... but I'm biased and probably think/write SQL more readily
> than Python - oops!
I
On Wednesday, January 29, 2020 at 8:27:03 PM UTC, Chris Angelico wrote:
> On Thu, Jan 30, 2020 at 7:06 AM jkn wrote:
> >
> > Hi all
> > I'm almost embarrassed to ask this as it's "so simple", but thought I'd
> > give
> > it a go...
>
> Hey, nothing wrong with that!
>
> > I want to be a
On Thu, 30 Jan 2020 07:26:36 +1100
Chris Angelico wrote:
> On Thu, Jan 30, 2020 at 7:06 AM jkn wrote:
> > The situation is this - I have a long list of file URLs and want to
> > download these as a 'background task'. I want this to process to be
> > 'crudely persistent' - you can CTRL-C out,
On Thu, Jan 30, 2020 at 7:49 AM MRAB wrote:
>
> On 2020-01-29 20:00, jkn wrote:
> > I could have a file with all the URLs listed and work through each line in
> > turn.
> > But then I would have to rewrite the file (say, with the
> > previously-successful
> > lines commented out) as I go.
> >
>
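The rewrite-with-commented-out-lines idea jkn describes might be sketched like this. A sketch only: it writes the updated list to a temp file and renames it into place, so an interrupt mid-rewrite can never truncate the original.

```python
import os
import tempfile

def comment_out(list_path, done_url):
    # Rewrite the URL list with the completed line commented out,
    # via a temp file + atomic rename (os.replace) so a Ctrl-C
    # during the rewrite leaves the original list intact.
    dirname = os.path.dirname(os.path.abspath(list_path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "w", encoding="utf-8") as out, \
         open(list_path, encoding="utf-8") as src:
        for line in src:
            if line.strip() == done_url:
                out.write("# " + line)
            else:
                out.write(line)
    os.replace(tmp, list_path)
```

This addresses the "would have to rewrite the file as I go" cost safely, though as noted in the thread it still rewrites the whole file per completed URL.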
On Thu, Jan 30, 2020 at 7:06 AM jkn wrote:
>
> Hi all
> I'm almost embarrassed to ask this as it's "so simple", but thought I'd
> give
> it a go...
Hey, nothing wrong with that!
> I want to be able to use a simple 'download manager' which I was going to
> write
> (in Python), but then
Hi all
I'm almost embarrassed to ask this as it's "so simple", but thought I'd give
it a go...
I want to be able to use a simple 'download manager' which I was going to
write
(in Python), but then wondered if there was something suitable already out
there.
I haven't found it, but thought