Re: Evaluation of variable as f-string

2023-01-29 Thread Johannes Bauer

Am 29.01.23 um 05:27 schrieb Thomas Passin:

Well, yes, we do see that.  What we don't see is what you want to 
accomplish by doing it, and why you don't seem willing to accept some 
restrictions on the string fragments so that they will evaluate correctly.


I'll have to accept the restrictions. That's a good enough answer for 
me, actually. I was just thinking that possibly there's something like 
(made-up code):


x = { "foo": "bar" }
fstr = string.fstring_compile(s)
fstr.eval(x = x)

Which I didn't know about. It would make sense to me, but possibly not 
enough of a use case to make it into Python. The format() flavors do not 
cover this.


IOW, perhaps there is a more practical way to accomplish what you want. 
Except that we don't know what that is.


Well, I don't know. I pretty much want a generic Python mechanism that 
allows for exactly what f-strings do: execute arbitrary Python snippets 
of code and format them in one go. In other words, I want to be able to 
do things like the following, given an *arbitrary* dictionary x and a 
string s (whose only restriction is that its content needs to be valid 
f-string grammar):


x = {
"d": 12,
"t": 12345,
"dt": datetime.datetime,
"td": datetime.timedelta
}
s = "{x['d']:09b} {'->' * (x['d'] // 3)} {(x['dt'](2000, 1, x['d']) + 
x['td'](120)).strftime('%y.%m.%d')} {'<-' * (x['d'] // 4)}"

q = magic_function(s, x = x)

and have "q" then be

'000001100 ->->->-> 00.05.11 <-<-<-'

I believe the closest solution would be using a templating mechanism 
(like Mako), but that has slightly different syntax and doesn't do 
string formatting as nicely as f-strings do. f-strings really are the 
perfect syntax for what I want to do.
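
(For illustration, a minimal sketch of what such a magic_function could
look like when built on top of eval() -- with all the quoting and security
caveats discussed elsewhere in this thread; magic_function is of course a
made-up name:)

import datetime

def magic_function(template, **namespace):
    # Wrap the template in a triple-quoted f-string and eval() it.
    # Caveat: this breaks as soon as the template itself contains triple
    # quotes, and it executes arbitrary code from the template.
    return eval('f"""' + template + '"""', dict(namespace))

x = {
    "d": 12,
    "dt": datetime.datetime,
    "td": datetime.timedelta,
}
s = ("{x['d']:09b} {'->' * (x['d'] // 3)} "
     "{(x['dt'](2000, 1, x['d']) + x['td'](120)).strftime('%y.%m.%d')} "
     "{'<-' * (x['d'] // 4)}")
print(magic_function(s, x=x))   # 000001100 ->->->-> 00.05.11 <-<-<-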


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-29 Thread Johannes Bauer

Am 29.01.23 um 02:09 schrieb Chris Angelico:


The exact same points have already been made, but not listened to.
Sometimes, forceful language is required in order to get people to
listen.


An arrogant bully's rationale. Personally, I'm fine with it. I've been
on Usenet for a long time, where this way of "educating" people was
considered normal. But I do think it creates a deterring, toxic
environment and reflects negatively back on you as a person.


Arrogant bully? Or someone who has tried *multiple times* to explain
to you that what you're asking for is IMPOSSIBLE, and you need to ask
a better question if you want a better answer?


In literally your first answer you resorted to aggressive language and 
implied that what I asked wasn't what I actually wanted. It was.


Also note that in your first answer you did not answer "sorry, this is 
not possible", which would have been completely sufficient as an answer. 
Instead you tried your best at guesswork, implying I didn't know what I 
was doing.


So, yes, absolutely toxic behavior. I fully stand by that judgement of mine.

I'll go a step further and again repeat that THIS sort of behavior is 
what gives open source forums a bad rep. There's always a Lennart 
Poettering, an Ulrich Drepper or maybe a Chris Angelico around who may 
have great technical skill but think they can treat people like shit.



If that's "bullying", then fine, ban me for bullying, and go find
somewhere else where you'll be coddled and told that your question is
fine, it's awesome, and yes, wouldn't it be nice if magic were a
thing.


LOL, "ban you"? What the heck are you talking about, my friend?

I don't need to be coddled by you. I'm trying to give you the favor of 
honest feedback, which is that you sound like an utter bully. If you 
don't care, that is totally fine by me.



They're not different things, because what you asked for is NOT
POSSIBLE without the caveats that I gave. It is *fundamentally not
possible* to "evaluate a string as if it were an f-string", other than
by wrapping it in an f-string and evaluating it - with the
consequences of that.


Yeah that sucks, unfortunately. But I'll live.


In other words, if there were a magic function:

evalfstring(s, x = x)

That would have been the ideal answer. There does not seem to be one,
however. So I'm back to silly workarounds.


Right. Exactly. Now if you'd asked for what you REALLY need, maybe
there'd be a solution involving format_map, but no, you're so utterly
intransigent that you cannot adjust your question to fit reality.


Does format_map do exactly what f-strings can do? Can I pass arbitrary 
functions and Python expressions inside a format_map? No? Of course not. 
Then it does not answer the question.
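
(For reference, a small illustration of the difference I mean --
format_map() resolves plain lookups, but does not evaluate expressions
inside the braces; this snippet is mine, not from any earlier mail:)

x = {"t": 12345}

# A plain lookup works with the format()/format_map() field syntax:
print("{x[t]}".format_map({"x": x}))            # 12345

# But an expression is not evaluated -- the field parser rejects it:
try:
    print("{x[t] // 60}".format_map({"x": x}))
except ValueError as exc:
    print(exc)   # Only '.' or '[' may follow ']' in format field specifier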



If that makes me a bad guy, then fine. I'll be the bad guy.


Awww, it's adorable how you're trying to frame yourself as the victim. 
I'll be here if you need a hug, buddy.



But you're not going to change the laws of physics.


Yeah we'll have to disagree about the fact that it's the "laws of 
physics" preventing a specific implementation of a Python function.


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-28 Thread Johannes Bauer

Am 27.01.23 um 23:10 schrieb Christian Gollwitzer:

Am 27.01.23 um 21:43 schrieb Johannes Bauer:
I don't understand why you fully ignore literally the FIRST example I 
gave in my original post and angrily claim that your solution works 
when it does not:


x = { "y": "z" }
s = "-> {x['y']}"
print(s.format(x = x))
Traceback (most recent call last):
   File "", line 1, in 
KeyError: "'y'"

This. Does. Not. Work.


It's because "you're holding it wrong!". Notice the error message; it 
says that the key 'y' does not exist.


Ah, that is neat! I didn't know that. Thanks for the info.
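
(To spell it out for the archives: with str.format(), dictionary keys
inside a replacement field are written without quotes -- a quick sketch of
my own:)

x = {"y": "z"}

print("-> {x[y]}".format(x=x))      # -> z   (unquoted key works)
# "-> {x['y']}".format(x=x) raises KeyError: "'y'" -- the quotes become part of the key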

In my case, I do also however want to have some functionality that 
actually does math or even calls functions. That would be possible with 
templates or f-strings, but not format:



x = { "t": 12345 }
s = "{x['t'] // 60:02d}:{x['t'] % 60:02d}"
print(s.format(x = x))
Traceback (most recent call last):
  File "", line 1, in 
KeyError: "'t'"

and

s = "{x[t] // 60:02d}:{x[t] % 60:02d}"
print(s.format(x = x))

Traceback (most recent call last):
  File "", line 1, in 
ValueError: Only '.' or '[' may follow ']' in format field specifier

but of course:

print(f"{x['t'] // 60:02d}:{x['t'] % 60:02d}")
205:45

Best,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-28 Thread Johannes Bauer

Am 28.01.23 um 00:41 schrieb Chris Angelico:

On Sat, 28 Jan 2023 at 10:08, Rob Cliffe via Python-list
 wrote:


Whoa! Whoa! Whoa!
I appreciate the points you are making, Chris, but I am a bit taken
aback by such forceful language.


The exact same points have already been made, but not listened to.
Sometimes, forceful language is required in order to get people to
listen.


An arrogant bully's rationale. Personally, I'm fine with it. I've been 
on Usenet for a long time, where this way of "educating" people was 
considered normal. But I do think it creates a deterring, toxic 
environment and reflects negatively back on you as a person.



Addressing your points specifically:
  1) I believe the quote character limitation could be overcome. It
would need a fair amount of work, for which I haven't (yet) the time or
inclination.


No problem. Here, solve it for this string:

eval_me = ' f"""{f\'\'\'{f"{f\'{1+2}\'}"}\'\'\'}""" '

F-strings can be nested, remember.


Exactly. This is precisely what I want to avoid. Essentially, proper 
quotation of such a string requires writing a fully fledged f-string 
parser, at which point the whole problem solves itself.



Don't ask how to use X to do Y. Ask how to do Y.

Good advice.


Exactly. As I have shown, asking how to use f-strings to achieve this
is simply not suitable, and there's no useful way to discuss other
than to argue semantics. If we had a GOAL to discuss, we could find
much better options.


I was not asking how to use f-strings. I was asking to evaluate a string 
*as if it were* an f-string. Those are two completely different things 
which you entirely ignored.


In other words, if there were a magic function:

evalfstring(s, x = x)

That would have been the ideal answer. There does not seem to be one, 
however. So I'm back to silly workarounds.


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-28 Thread Johannes Bauer

Am 28.01.23 um 02:51 schrieb Thomas Passin:

This is literally the version I described myself, except using triple 
quotes. It only modifies the underlying problem, but doesn't solve it.


Ok, so now we are in the territory of "Tell us what you are trying to 
accomplish". And part of that is why you cannot put some constraints on 
what your string fragments are.  The example I gave, copied out of your 
earlier message, worked and now you are springing triple quotes on us.


It works in this particular case, yes. Just like the example I gave in 
my original post:


eval("f'" + s + "'")

"works" if there are no apostrophes used. And just like

eval("f\"" + s + "\"")

"works" if there are no quotation marks used.

I don't want to have to care about what quotation is used inside the 
string, as long as it could successfully evaluate using the f-string 
grammar.


Stop with the rock management already and explain (briefly if possible) 
what you are up to.


I have a string. I want to evaluate it as if it were an f-string. I.e., 
there *are* obviously restrictions that apply (namely, the syntax and 
semantics of f-strings), but that's it.


Best,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-27 Thread Johannes Bauer

Am 27.01.23 um 20:18 schrieb Chris Angelico:


All you tell us is
what you're attempting to do, which there is *no good way to achieve*.


Fair enough, that is the answer. It's not possible.


Perhaps someone will be inspired to write a function to do it. 😎


See, we don't know what "it" is, so it's hard to write a function
that's any better than the ones we've seen. Using eval() to construct
an f-string and then parse it is TERRIBLE because:

1) It still doesn't work in general, and thus has caveats like "you
can't use this type of quote character"


Exactly my observation as well, which is why I was thinking there's 
something else I missed.



2) You would have to pass it a dictionary of variables, which also
can't be done with full generality


Nonsense. I am only passing a SINGLE variable to eval, called "x". That 
is fully general.



3) These are the exact same problems, but backwards, that led to
f-strings in the first place


I don't know what you mean by that.


4) eval is extremely slow and horrifically inefficient.


Let me worry about it.


For some reason, str.format() isn't suitable,


I don't understand why you fully ignore literally the FIRST example I 
gave in my original post and angrily claim that your solution works when 
it does not:


x = { "y": "z" }
s = "-> {x['y']}"
print(s.format(x = x))
Traceback (most recent call last):
  File "", line 1, in 
KeyError: "'y'"

This. Does. Not. Work.

I want to pass a single variable as a dictionary and access its members 
inside the expression.



 but *you haven't said
why*,


Yes I have, see above.


Well, yes. If you asked "how can I do X", hoping the answer would be
"with a runtime-evaluated f-string", then you're quite right - the
answer might not be what you were hoping for. But since you asked "how
can I evaluate a variable as if it were an f-string", the only
possible answer is "you can't, and that's a horrible idea".


"You can't" would have been sufficient. Pity. Your judgement is 
unnecessary and, frankly, uncalled for as well. Multiple instances you 
claim that you have no idea what I am doing so how would you even begin 
to judge a solution as fit or unfit?



Don't ask how to use X to do Y. Ask how to do Y.


You don't have to be angry that my question does not have a solution. I 
will manage and so might you.


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-27 Thread Johannes Bauer

Am 23.01.23 um 17:43 schrieb Stefan Ram:

Johannes Bauer  writes:

x = { "y": "z" }
s = "-> {x['y']}"
print(s.format(x = x))


x = { "y": "z" }
def s( x ): return '-> ' + x[ 'y' ]
print( s( x = x ))


Except this is not at all what I asked for. The string "s" in my example 
is just that, an example. I want to render *arbitrary* strings "s" 
together with arbitrary dictionaries "x".


Cheers,
Johannes

--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-27 Thread Johannes Bauer

Am 25.01.23 um 20:38 schrieb Thomas Passin:


x = { "y": "z" }
s = "-> {target}"
print(s.format(target = x['y']))


Stack overflow to the rescue:


No.


Search phrase:  "python evaluate string as fstring"

https://stackoverflow.com/questions/47339121/how-do-i-convert-a-string-into-an-f-string

def effify(non_f_str: str):
     return eval(f'f"""{non_f_str}"""')

print(effify(s))  # prints as expected: "-> z"


Great.

s = '"""'

> def effify(non_f_str: str):
>  return eval(f'f"""{non_f_str}"""')
>
> print(effify(s))  # prints as expected: "-> z"

>>> print(effify(s))
Traceback (most recent call last):
  File "", line 1, in 
  File "", line 2, in effify
  File "", line 1
f"
   ^
SyntaxError: unterminated triple-quoted string literal (detected at line 1)

This is literally the version I described myself, except using triple 
quotes. It only modifies the underlying problem, but doesn't solve it.


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Re: Evaluation of variable as f-string

2023-01-27 Thread Johannes Bauer

Am 23.01.23 um 19:02 schrieb Chris Angelico:


This is supposedly for security reasons. However, when trying to emulate
this behavior that I wanted (and know the security implications of), my
solutions will tend to be less secure. Here is what I have been thinking
about:


If you really want the full power of an f-string, then you're asking
for the full power of eval(),


Exactly.


and that means all the security
implications thereof,


Precisely, as I had stated myself.


not to mention the difficulties of namespacing.


Not an issue in my case.


Have you considered using the vanilla format() method instead?


Yes. It does not provide the functionality I want. Not even the utterly 
trivial example that I gave. To quote myself again, let's say I have an 
arbitrary dictionary x (with many nested data structures), I want an 
expression to be evaluated that can access any members in there.


x = { "y": "z" }
s = "-> {x['y']}"
print(s.format(x = x))
Traceback (most recent call last):
  File "", line 1, in 
KeyError: "'y'"

I also want to be able to say things like {'x' * 100}, which .format() 
also does not do.


In other words: I want the evaluation of a variable as an f-string.



But if you really REALLY know what you're doing, just use eval()
directly.


I do, actually, but I hate it. Not because of the security issue, not 
because of namespaces, but because it does not reliably work:


>>> s = "{\"x\" * 4}"
>>> eval("f'" + s + "'")
''

As I mentioned, it depends on the exact quoting. Triple quotes only 
shift the problem. Actually replacing/escaping the relevant quotation 
marks is also not trivial.


I don't really see what you'd gain from an f-string. 


The full power of eval.


At very
least, work with a well-defined namespace and eval whatever you need
in that context.


That's what I'm doing.


Maybe, rather than asking for a way to treat a string as code, ask for
what you ACTUALLY need, and we can help?


I want to render data from a template using an easily understandable 
syntax (like an f-string), ideally using native Python. I want the 
template to make use of Python code constructs AND formatting (e.g. 
{x['time']['runtime']['seconds'] // 60:02d}).


Cheers,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Evaluation of variable as f-string

2023-01-23 Thread Johannes Bauer

Hi there,

is there an easy way to evaluate a string stored in a variable as if it 
were an f-string at runtime?


I.e., what I want is to be able to do this:

x = { "y": "z" }
print(f"-> {x['y']}")

This prints "-> z", as expected. But consider:

x = { "y": "z" }
s = "-> {x['y']}"
print(s.format(x = x))
Traceback (most recent call last):
  File "", line 1, in 
KeyError: "'y'"

Even though

s = "-> {x}"
print(s.format(x = x))

Prints the expected "-> {'y': 'z'}".

This is supposedly for security reasons. However, when trying to emulate 
this behavior that I wanted (and know the security implications of), my 
solutions will tend to be less secure. Here is what I have been thinking 
about:


1. Somehow wrap "s" into an f-string, then eval. E.g.:

eval("f'" + s + "'")

This is a pain in the ass because you have to know what kind of 
quotation marks are used inside the expression. In the given case, this 
wouldn't work (but an 'f"' prefix and '"' suffix would).


2. Parse the expression (regex?), then eval() the individual arguments, 
then run the result through format(). Pain in the ass to get the exact 
same behavior as f-strings. Probably not even guaranteed to be parsable 
by regex alone (especially corner cases with escaped '{' signs or ':' or 
'{' included inside the expression as a literal).
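
(For what it's worth, a rough sketch of this second approach using
string.Formatter().parse() to split the template and eval() each
replacement field -- it ignores conversions, nested format specs and
escaped braces, so it is only an illustration:)

import string

def render(template, namespace):
    # Split the template into (literal, field, format_spec, conversion)
    # chunks and evaluate each field as a Python expression.
    out = []
    for literal, field, spec, conversion in string.Formatter().parse(template):
        out.append(literal)
        if field is not None:
            value = eval(field, dict(namespace))
            out.append(format(value, spec or ""))
    return "".join(out)

x = {"t": 12345}
print(render("{x['t'] // 60:02d}:{x['t'] % 60:02d}", {"x": x}))   # 205:45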


3. Somehow compile the bytecode representing an actual f-string 
expression, then execute it. Sounds like a royal pain in the butt, have 
not tried it.


All solutions are extremely undesirable and come with heavy drawbacks. 
Is there any standard solution (Py3.10+) that does what I want? 
Anything I'm missing?


Thanks,
Johannes
--
https://mail.python.org/mailman/listinfo/python-list


Creating lambdas inside generator expression

2022-06-29 Thread Johannes Bauer
Hi list,

I've just encountered something that I found extremely unintuitive and
would like your feedback. This bit me *hard*, causing me to question my
sanity for a moment. Consider this minimal example code (Py 3.10.4 on
Linux x64):


class Msg():
    def hascode(self, value):
        print("Check for", value)
        return False

conds = [
    lambda msg: msg.hascode("foo"),
    lambda msg: msg.hascode("bar"),
]

msg = Msg()
print(conds[0](msg))
print(conds[1](msg))



It works perfectly and does exactly what it looks like. The output is:

Check for foo
False
Check for bar
False

But now consider what happens when we create the lambdas inside a list
comprehension (in my original code I used a generator expression, but the
result is the same). Can you guess what happens when we create conds
like this?

conds = [ lambda msg: msg.hascode(z) for z in ("foo", "bar") ]

I certainly could not. Here's what it outputs:

Check for bar
False
Check for bar
False

I.e., the iteration variable "z" somehow gets bound inside the lambda
not by its value, but by reference. All checks therefore reference
only the last value.

This totally blew my mind. I can understand why it's happening, but is
this the behavior we would expect? And how can I create lambdas inside a
generator expression and tell the expression to use the *value* and not
pass the "z" variable by reference?

Cheers,
Joe
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Creating lambdas inside generator expression

2022-06-29 Thread Johannes Bauer
Aha!

conds = [ lambda msg, z = z: msg.hascode(z) for z in ("foo", "bar") ]

is what I was looking for, explicitly using the value of z. What a
caveat -- I didn't see that coming.

Learning something new every day.
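
(Another way to bind the value at creation time, noted here for the
archives, is functools.partial -- a small sketch, equivalent in effect to
the default-argument trick:)

import functools

class Msg():
    def hascode(self, value):
        print("Check for", value)
        return False

def check(z, msg):
    return msg.hascode(z)

# partial() captures the current value of z when the callable is created:
conds = [functools.partial(check, z) for z in ("foo", "bar")]

msg = Msg()
print(conds[0](msg))   # Check for foo / False
print(conds[1](msg))   # Check for bar / False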

Cheers,
Joe


Am 29.06.22 um 11:50 schrieb Johannes Bauer:
> Hi list,
> 
> I've just encountered something that I found extremely unintuitive and
> would like your feedback. This bit me *hard*, causing me to question my
> sanity for a moment. Consider this minimal example code (Py 3.10.4 on
> Linux x64):
> 
> 
> class Msg():
>   def hascode(self, value):
>   print("Check for", value)
>   return False
> 
> conds = [
>   lambda msg: msg.hascode("foo"),
>   lambda msg: msg.hascode("bar"),
> ]
> 
> msg = Msg()
> print(conds[0](msg))
> print(conds[1](msg))
> 
> 
> 
> It works perfectly and does exactly what it looks like. The output is:
> 
> Check for foo
> False
> Check for bar
> False
> 
> But now consider what happens when we create the lambdas inside a list
> comprehension (in my original code I used a generator expression, but the
> result is the same). Can you guess what happens when we create conds
> like this?
> 
> conds = [ lambda msg: msg.hascode(z) for z in ("foo", "bar") ]
> 
> I certainly could not. Here's what it outputs:
> 
> Check for bar
> False
> Check for bar
> False
> 
> I.e., the iteration variable "z" somehow gets bound inside the lambda
> not by its value, but by reference. All checks therefore reference
> only the last value.
> 
> This totally blew my mind. I can understand why it's happening, but is
> this the behavior we would expect? And how can I create lambdas inside a
> generator expression and tell the expression to use the *value* and not
> pass the "z" variable by reference?
> 
> Cheers,
> Joe

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: threading and multiprocessing deadlock

2021-12-06 Thread Johannes Bauer
Am 06.12.21 um 13:56 schrieb Martin Di Paola:
> Hi!, in short your code should work.
> 
> I think that the join-joined problem is just an interpretation problem.
> 
> In pseudo code the background_thread function does:
> 
> def background_thread()
>   # bla
>   print("join?")
>   # bla
>   print("joined")
> 
> When running this function in parallel using threads, you will probably
> get a few "join?" first before receiving any "joined?". That is because
> the functions are running in parallel.
> 
> The order "join?" then "joined" is preserved within a thread but not
> preserved globally.

Yes, completely understood and really not the issue. That these pairs
are not in sequence is fine.

> Now, I see another issue in the output (and perhaps you was asking about
> this one):
> 
> join?
> join?
> myfnc
> myfnc
> join?
> join?
> joined.
> joined.
> 
> So you have 4 "join?" that correspond to the 4 background_thread
> function calls in threads but only 2 "myfnc" and 2 "joined".

Exactly that is the issue. Then it hangs. Deadlocked.

> Could be possible that the output is truncated by accident?

No. This is it. The exact output varies, but when it hangs, it always
also does not execute the function (note the lack of "myfnc"). For example:

join?
join?
myfnc
join?
myfnc
join?
myfnc
joined.
joined.
joined.

(only three threads get started there)

join?
myfnc
join?
join?
join?
joined.

(this time only a single one made it)

join?
join?
join?
myfnc
join?
myfnc
joined.
myfnc
joined.
joined.

(three get started)

> I ran the same program and I got a reasonable output (4 "join?", "myfnc"
> and "joined"):
> 
> join?
> join?
> myfnc
> join?
> myfnc
> join?
> joined.
> myfnc
> joined.
> joined.
> myfnc
> joined.

This happens to me occasionally, but most of the time one of the
processes deadlocks. Did you consistently get four of each? What
OS/Python version were you using?

> Another issue that I see is that you are not joining the threads that
> you spawned (background_thread functions).

True, I kind of assumed those would be detached threads.

> I hope that this can guide you to fix or at least narrow the issue.

Depending on what OS/Python version you're using, that points in that
direction and kind of reinforces my belief that the code is correct.

Very curious.
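
One thing I still want to try -- purely an assumption on my part at this
point -- is forcing the "spawn" start method, since mixing fork-based
multiprocessing with already-running threads is known to be fragile.
A minimal sketch:

import multiprocessing

if __name__ == "__main__":
    # Assumption: with "fork", the child can inherit locks held by other
    # threads at fork time; "spawn" starts a fresh interpreter instead.
    multiprocessing.set_start_method("spawn")
    # ... then start the threads and processes as in the minimal example ...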

Thanks & all the best,
Joe
-- 
https://mail.python.org/mailman/listinfo/python-list


threading and multiprocessing deadlock

2021-12-05 Thread Johannes Bauer
Hi there,

I'm a bit confused. In my scenario I am mixing threading with
multiprocessing. Threading by itself would be nice, but for GIL reasons
I need both, unfortunately. I've encountered a weird situation in which
multiprocessing Process()es which are started in a new thread don't
actually start and so they deadlock on join.

I've created a minimal example that demonstrates the issue. I'm running
on x86_64 Linux using Python 3.9.5 (default, May 11 2021, 08:20:37)
([GCC 10.3.0] on linux).

Here's the code:


import time
import multiprocessing
import threading

def myfnc():
    print("myfnc")

def run(result_queue, callback):
    result = callback()
    result_queue.put(result)

def start(fnc):
    def background_thread():
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target = run, args = (queue, fnc))
        proc.start()
        print("join?")
        proc.join()
        print("joined.")
        result = queue.get()
    threading.Thread(target = background_thread).start()

start(myfnc)
start(myfnc)
start(myfnc)
start(myfnc)
while True:
    time.sleep(1)


What you'll see is that "join?" and "joined." nondeterministically do
*not* appear in pairs. For example:

join?
join?
myfnc
myfnc
join?
join?
joined.
joined.

What's worse is that when this happens and I Ctrl-C out of Python, the
started Thread is still running in the background:

$ ps ax | grep minimal
 370167 pts/0S  0:00 python3 minimal.py
 370175 pts/2S+ 0:00 grep minimal

Can someone figure out what is going on there?

Best,
Johannes
-- 
https://mail.python.org/mailman/listinfo/python-list


'%Y' in strftime() vs. strptime()

2019-12-29 Thread Johannes Bauer
Hi list,

I've just stumbled upon a strange phenomenon and I'm wondering if it's
a bug. Short and sweet:

Python 3.7.3 (default, Oct  7 2019, 12:56:13)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from datetime import datetime as d

>>> x = d(1, 1, 1)

>>> x.strftime("%Y-%m-%d")
'1-01-01'

>>> d.strptime(x.strftime("%Y-%m-%d"), "%Y-%m-%d")
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python3.7/_strptime.py", line 577, in _strptime_datetime
tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/usr/lib/python3.7/_strptime.py", line 359, in _strptime
(data_string, format))
ValueError: time data '1-01-01' does not match format '%Y-%m-%d'

>>> d.strptime("0001-01-01", "%Y-%m-%d")
datetime.datetime(1, 1, 1, 0, 0)

I.e. for years that are not 4 digits long, strftime() produces no
leading zeros for the '%Y' replacement, but strptime() requires leading
zeros.
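
(A workaround I can live with, sketched below, is to format the year by
hand instead of relying on '%Y':)

import datetime

x = datetime.datetime(1, 1, 1)
s = "{:04d}-{:02d}-{:02d}".format(x.year, x.month, x.day)
print(s)                                            # 0001-01-01
print(datetime.datetime.strptime(s, "%Y-%m-%d"))    # 0001-01-01 00:00:00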

Is this expected behavior? Shouldn't %Y be consistent across both?

All the best,
Johannes
-- 
https://mail.python.org/mailman/listinfo/python-list


Handling of disconnecting clients in asyncio

2019-09-24 Thread Johannes Bauer
Hi group,

I'm trying to get into async programming using Python. Concretely I open
a UNIX socket server in my application. The UNIX socket server generates
events and also receives commands/responds to them.

I do this by:

async def _create_local_server(self):
    await asyncio.start_unix_server(self._local_server_tasks, path = "foo")

And then gather the command/response and event tasks:

async def _local_server_tasks(self, reader, writer):
    await asyncio.gather(
        self._local_server_commands(reader, writer),
        self._local_server_events(reader, writer),
    )

I believe so far this is okay, right? If not, please tell me. Anyways,
the event loop as an example:

async def _local_server_events(self, reader, writer):
    while True:
        await asyncio.sleep(1)
        writer.write(b"event\n")

And the command/response loop, obviously simplified:

async def _local_server_commands(self, reader, writer):
    while True:
        msg = await reader.readline()
        writer.write(msg)

Now I'm having the following issue: A client connects to my server and
then properly disconnects (shutdown/RDWR, close). This causes the await
reader.readline() to return an empty message (after which I can properly
end the _local_server_commands loop).

However, the _local_server_events loop gets no such notification.
Nor does writer.write() throw an exception that I could catch (and exit
as a consequence). Instead, I get this on stderr:

socket.send() raised exception.
socket.send() raised exception.
socket.send() raised exception.
socket.send() raised exception.
[...]

My questions are:

1. Is the design generally sane or is this usually done differently?
I.e., am I making any obvious beginner mistakes?

2. What is the proper way of discovering a peer has disconnected and
exiting cleanly?
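
(What I have been experimenting with so far, under the assumption that
writer.drain() is the right tool for noticing the disconnect -- a sketch
of a modified event loop, Python 3.7+ because of is_closing():)

async def _local_server_events(self, reader, writer):
    try:
        while True:
            await asyncio.sleep(1)
            writer.write(b"event\n")
            # drain() actually pushes the data out and raises once the
            # peer has gone away
            await writer.drain()
            if writer.is_closing():
                break
    except ConnectionResetError:
        pass
    finally:
        writer.close()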

Thanks in advance,
All the best,
Johannes

-- 
"Performance ist nicht das Problem, es läuft ja nachher beides auf der
selben Hardware." -- Hans-Peter Diettrich in d.s.e.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: GPG wrapper, ECC 25519 compatible?

2019-09-02 Thread Johannes Bauer
On 03.09.19 05:28, rmli...@riseup.net wrote:

> But I just don't understand how to get
> and pass information back to the gpg command line prompts at all, not to
> mention automating the process.

The manpage describes how to enable the machine-parsable interface,
which is exactly what you want. Then:

import subprocess

inputs = [
    "11",
    "Q",
    "1",
    "0",
    "y",
    "username",
    "usern...@user.net",
    "none",
    "O",
]
input_data = ("\n".join(inputs) + "\n").encode()
subprocess.check_output(["gpg2", "--expert", "--full-gen-key",
                         "--with-colons", "--command-fd", "0",
                         "--status-fd", "1"], input = input_data)

Cheers,
Joe


-- 
"Performance ist nicht das Problem, es läuft ja nachher beides auf der
selben Hardware." -- Hans-Peter Diettrich in d.s.e.
-- 
https://mail.python.org/mailman/listinfo/python-list


Pythonic custom multi-line parsers

2019-07-10 Thread Johannes Bauer
Hi list,

I'm looking for ideas as to a pretty, Pythonic solution for a specific
problem that I keep solving over and over but where I'm never happy with
the solution in the end. It always works, but it is never pretty. So see
this as an open-ended brainstorming question.

Here's the task: There's a custom file format. Each line can be parsed
individually and, given the current context, the meaning of each
individual line is always clearly distinguishable. I'll give an easy
example to demonstrate:


moo = koo
bar = foo
foo :=
   abc
   def
baz = abc

Let's say the root context knows only two regexes and give them names:

keyvalue: \w+ = \w+
start-multiblock: \w+ :=

The keyvalue is self-contained: when the line is successfully
parsed, all the information is present. The start-multiblock however
gives us only part of the puzzle, namely the name of the following
block. In the multiblock context, there are different regexes that can
match (actually only one):

multiblock-item: \s\w+

Now obviously when the block is finished, there's no delimiter. It's
implicit in the multiblock-item regex not matching, and therefore we
backtrack to the previous parser (the root parser) and can successfully
parse the last line baz = abc.

Especially consider that even though this is a simple example, generally
you'll have multiple contexts, many more regexes and especially nesting
inside these contexts.

Without having to use a parser generator (for those, the examples I deal
with are usually too much overhead), what I usually end up doing is
building a state machine by hand. I.e., I memorize the context, match
against its rules, and upon no match manually delegate the input data to
the backtracked matchers.

This results in AWFULLY ugly code. I'm wondering what your ideas are to
solve this neatly in a Pythonic fashion without having to rely on
third-party dependencies.
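
To make the question more concrete, here is roughly the kind of
hand-rolled state machine I keep ending up with -- a compressed sketch of
my usual approach, not production code:

import re

ROOT = [
    ("keyvalue",         re.compile(r"(\w+) = (\w+)")),
    ("start-multiblock", re.compile(r"(\w+) :=")),
]
MULTIBLOCK = [
    ("multiblock-item",  re.compile(r"\s+(\w+)")),
]

def parse(lines):
    stack = [ROOT]
    for line in lines:
        while True:
            for name, regex in stack[-1]:
                match = regex.fullmatch(line)
                if match:
                    break
            else:
                # no rule in the current context matched: backtrack
                if len(stack) == 1:
                    raise ValueError("cannot parse line: %r" % line)
                stack.pop()
                continue
            if name == "start-multiblock":
                stack.append(MULTIBLOCK)
            yield (name, match.groups())
            break

text = "moo = koo\nbar = foo\nfoo :=\n   abc\n   def\nbaz = abc"
for item in parse(text.splitlines()):
    print(item)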

Cheers,
Joe
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Curious case of UnboundLocalError

2018-03-31 Thread Johannes Bauer
On 30.03.2018 16:46, Ben Bacarisse wrote:

>> Yup, but why? I mean, at the point of definition of "z", the only
>> definition of "collections" that would be visible to the code would be
>> the globally imported module, would it not? How can the code know of the
>> local declaration that only comes *after*?
> 
> Why questions can be hard.  The language definition says what's supposed
> to happen.  Is that enough of an answer to why?

Absolutely. Don't get me wrong, I don't doubt the correctness of
your answer, nor do I question the design choice. I just found it
surprising and cool.

Thanks for clearing it up.

Cheers,
Joe



-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Curious case of UnboundLocalError

2018-03-30 Thread Johannes Bauer
On 30.03.2018 13:25, Johannes Bauer wrote:

>> This mention of collections refers to ...
>>
>>> }
>>> for (_, collections) in z.items():
>>
>> ... this local variable.
> 
> Yup, but why? I mean, at the point of definition of "z", the only
> definition of "collections" that would be visible to the code would be
> the globally imported module, would it not? How can the code know of the
> local declaration that only comes *after*?

Now that I understand what's going on, this is a much clearer example:

import collections

def foo():
    print(collections)
    collections = "BAR"

foo()

I would have thought that during the "print", collections (being
locally undefined) would fall back to the global scope and refer to
the module, and that only after the assignment overwrites the binding in
the local scope would collections be "BAR".

But that's not the case. Huh! I wonder if I'm the last one to notice
that -- it's never come up before for me, I think :-)

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Curious case of UnboundLocalError

2018-03-30 Thread Johannes Bauer
On 30.03.2018 13:13, Ben Bacarisse wrote:

>> import collections
>>
>> class Test(object):
>>  def __init__(self):
>>  z = {
>>  "y": collections.defaultdict(list),
> 
> This mention of collections refers to ...
> 
>>  }
>>  for (_, collections) in z.items():
> 
> ... this local variable.

Yup, but why? I mean, at the point of definition of "z", the only
definition of "collections" that would be visible to the code would be
the globally imported module, would it not? How can the code know of the
local declaration that only comes *after*?

>> Interestingly, when I remove the class:
> 
> The significant change is removing the function that creates a local scope.

Hmmm yes, there's definitely something that I wasn't aware of when
dealing with local scopes.

>> It works as expected (doesn't throw).
> 
> ... except now collections is bound to a list (from the for) and no
> longer refers to the module.

Yup -- which was why my naming was stupid in the first place. However, I
really wasn't aware of this and was certainly surprised -- even though
the workaround is obvious (just don't call it "collections", duh) I
wanted to investigate.

Thanks for the clarification, most interesting!

Cheers,
Joe

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Curious case of UnboundLocalError

2018-03-30 Thread Johannes Bauer
Hey group,

I stumbled upon something that I cannot quite explain while doing some
stupid naming of variables in my code, in particular using "collections"
as an identifier. However, the result is strange. I've created a
minimal example. Consider this:

import collections

class Test(object):
    def __init__(self):
        z = {
            "y": collections.defaultdict(list),
        }
        for (_, collections) in z.items():
            pass

Test()


In my opinion, this should run. However, this is what happens on Python
3.6.3 (default, Oct  3 2017, 21:45:48) [GCC 7.2.0] on linux):

Traceback (most recent call last):
  File "x.py", line 11, in 
Test()
  File "x.py", line 6, in __init__
"y": collections.defaultdict(list),
UnboundLocalError: local variable 'collections' referenced before assignment

Interestingly, when I remove the class:

import collections

z = {
    "y": collections.defaultdict(list),
}
for (_, collections) in z.items():
    pass

It works as expected (doesn't throw).

Have I found a bug in the interpreter or am I doing something incredibly
stupid? I honestly cannot tell right now.

Cheers,
Joe

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Entering a very large number

2018-03-23 Thread Johannes Bauer
On 23.03.2018 14:01, ast wrote:

> It is not beautiful and not very readable. It is better to
> have a fixed number of digits per line (eg 50)

Oh yes, because clearly a 400-digit number becomes VERY beautiful and
readable once you add line breaks to it.

Cheers,
Joe

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Brainstorming on recursive class definitions

2017-09-12 Thread Johannes Bauer
By the way, here's my work in progress:
https://gist.github.com/johndoe31415/7e432b4f47f0030f0903dbd6a401e5dc

I really really love the look & feel, but am unsure if there's a better
way for this?

Cheers,
Joe


-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Brainstorming on recursive class definitions

2017-09-12 Thread Johannes Bauer
Hi group,

so I'm having a problem that I'd like to solve *nicely*. I know plenty
of ways to solve it, but am curious if there's a solution that allows me
to write the solution in a way that is most comfortable for the user.

I'm trying to map registers of a processor. So assume you have an n-bit
address space; the processor might have duplicate units of identical
functionality mapped at different places in memory. For example, assume
there's a GPIO unit that has registers FOO, BAR and KOO and two GPIO
ports GPIOA and GPIOB. I'd like to write code along the lines of this:

class GpioMap(BaseRegisterMap):
    FOO = 0x0
    BAR = 0x4
    KOO = 0x8

class CPURegisterMap(BaseRegisterMap):
    GPIOA = GpioMap(0x1)
    GPIOB = GpioMap(0x2)

cpu = CPURegisterMap(0x8000)

assert(cpu.addr == 0x8000)
assert(cpu.GPIOA.addr == 0x8000 + 0x1)
assert(cpu.GPIOB.addr == 0x8000 + 0x2)
assert(cpu.GPIOA.FOO.addr == 0x8000 + 0x1 + 0x0)
assert(cpu.GPIOA.KOO.addr == 0x8000 + 0x1 + 0x8)
assert(cpu.GPIOB.BAR.addr == 0x8000 + 0x2 + 0x4)

So, obviously, FOO, BAR and KOO are of type "int" without any "addr"
property, so there would need to be some magic there. Additionally, the
instantiation of GpioMap() would somehow need knowledge of its parent's
base, which I'm not sure is even possible. Maybe (that's what I'm
currently trying to get right) __getattribute__ could propagate the
accumulated parent base address to the child during lookup.

Anyways, I'm looking for your ideas on how to solve such a thing
"nicely". Note that "BaseRegisterMap" is allowed to do dirty things as
long as the definition code has a clean look & feel.
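
To make the __getattribute__ idea a bit more concrete, here is a rough
sketch of the direction I mean -- _Register and the exact mechanics are
made up for illustration, and BaseRegisterMap here is not any existing
library class:

class _Register:
    # wrapper so that plain int offsets grow an .addr property
    def __init__(self, addr):
        self.addr = addr

class BaseRegisterMap:
    def __init__(self, offset):
        self._offset = offset
        self._base = 0

    @property
    def addr(self):
        return self._base + self._offset

    def __getattribute__(self, name):
        value = super().__getattribute__(name)
        if name.startswith("_") or name == "addr":
            return value
        base = super().__getattribute__("addr")
        if isinstance(value, BaseRegisterMap):
            # hand out a copy of the child that knows the accumulated base
            child = type(value)(value._offset)
            child._base = base
            return child
        if isinstance(value, int):
            return _Register(base + value)
        return value

class GpioMap(BaseRegisterMap):
    FOO = 0x0
    BAR = 0x4
    KOO = 0x8

class CPURegisterMap(BaseRegisterMap):
    GPIOA = GpioMap(0x1)
    GPIOB = GpioMap(0x2)

cpu = CPURegisterMap(0x8000)
assert cpu.addr == 0x8000
assert cpu.GPIOA.FOO.addr == 0x8000 + 0x1 + 0x0
assert cpu.GPIOB.BAR.addr == 0x8000 + 0x2 + 0x4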

Cheers,
Joe


-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Lexer/Parser question: TPG

2016-11-14 Thread Johannes Bauer
Hi group,

this is not really a Python question, but I use Python to lex/parse some
input. In particular, I use the amazing TPG (http://cdsoft.fr/tpg/).
However, I'm now stuck at a point and am sure I'm not doing something
correctly -- since there's a bunch of really smart people here, I hope
to get some insights. Here we go:

I've created a minimal example in which I'm trying to parse some tokens
(strings and ints in the minimal example). Strings are delimited by
braces (). Therefore

(Foo) -> "Foo"

Braces inside braces are taken literally when balanced. If not balanced,
it's a parsing error.

(Foo (Bar)) -> "Foo (Bar)"

Braces may be escaped:

(Foo \)Bar) -> "Foo )Bar"

In my first (naive) attempt, I ignored the escaping and went with lexing
and then these rules:

token string_token '[^()]*';

[...]

String/s -> start_string   $ s = ""
   (
   string_token/e  $ s += e
   | String/e  $ s += "(" + e + ")"
   )*
   end_string
   ;

While this worked a little bit (with some erroneous parsing,
admittedly), at least it *somewhat* worked. In my second attempt, I
tried to do it properly. I omitted the tokenization and instead used
inline terminals (which have precedence in TPG):

String/s -> start_string   $ s = ""
   (
   '\\.'/e $ s += "ESCAPED[" + e + "]"
   | '[^\\()]+'/e  $ s += e
   | String/e  $ s += "(" + e + ")"
   )*
   end_string
   ;

(the "ESCAPED" part is just for demonstration to get the idea).

While the latter parser parses all strings perfectly, it now isn't able
to parse anything else anymore (including integer values!). Instead, it
appears to match the inline terminal '[^\\()]+' to my integer and then
dies (when trying, for example, to parse "12345"):

[  1][ 3]START.Expression.Value: (1,1) _tok_2 12345 != integer
[  2][ 3]START.Expression.String: (1,1) _tok_2 12345 != start_string
Traceback (most recent call last):
  File "example.py", line 56, in 
print(Parser()(example))
  File "example/tpg.py", line 942, in __call__
return self.parse('START', input, *args, **kws)
  File "example/tpg.py", line 1125, in parse
return Parser.parse(self, axiom, input, *args, **kws)
  File "example/tpg.py", line 959, in parse
value = getattr(self, axiom)(*args, **kws)
  File "", line 3, in START
  File "", line 14, in Expression
UnboundLocalError: local variable 'e' referenced before assignment

"_tok_2" seems to correspond to one of the inline terminal symbols, the
only one that fits would be '[^\\()]+'. But why would that *ever* match?
I thought it'd only match once a "start_string" was encountered (which
it isn't).

Since I'm the parsing noob, I don't think TPG (which is FREAKING
AMAZING, seriously!) is at fault but rather my understanding of TPG. Can
someone help me with this?

I've uploaded a complete working example to play around with here:

http://wikisend.com/download/642120/example.tar.gz
(if it's not working, please tell me and I'll look for some place else).

Thank you so much for your help,
Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Magic UTF-8/Windows-1252 encodings

2016-08-30 Thread Johannes Bauer
On 29.08.2016 17:59, Chris Angelico wrote:

> Fair enough. If this were something that a lot of programs wanted,
> then yeah, there'd be good value in stdlibbing it. Character encodings
> ARE hard to get right, and this kind of thing does warrant some help.
> But I think it's best not done in core - at least, not until we see a
> lot more people doing the same :)

I hope this kind of botchery never makes it in the stdlib. It directly
contradicts "In the face of ambiguity, refuse the temptation to guess."

If you don't know what the charset is, don't guess. It'll introduce
subtle ambiguities and ugly corner cases and will make the life for the
rest of us -- who are trying to get their charsets straight and correct
-- a living hell.

Having such silly "magic" guessing stuff is actually detrimental to the
whole concept of properly identifying and using character sets.
Everything about the thought makes me shiver.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Zero runtime impact tracing

2016-07-30 Thread Johannes Bauer
Hi group,

I'm using CPython 3.5.1. Currently I'm writing some delicate code that
is doing the right thing in 99% of the cases and screws up on the other 1%.

I would like to have tracing in some very inner loops:

if self._debug:
   print("Offset %d foo bar" % (self._offset))

However, this incurs a hefty performance penalty even when tracing is disabled.

What I want is for the if clause to completely disappear during bytecode
compilation if self._debug is not set. Is there any way that I can tell
the optimizer that this variable will be set once, never
change during runtime, and that it can go ahead and completely remove the
code when self._debug is False?

Any other means of signalling that it should compile the tracing code in
would also be fine by me (e.g., calling Python with some command line
options or such). As long as during normal operation, there is no
performance impact.
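
The closest built-in mechanism I'm aware of is the __debug__ constant: as
far as I know, CPython treats it as a compile-time constant and removes
"if __debug__:" blocks entirely when running with -O. A small sketch (not
my actual inner loop):

class Parser:
    def parse(self):
        self._offset = 1234
        if __debug__:
            # removed by the compiler when running "python3 -O ..."
            print("Offset %d foo bar" % (self._offset))

import dis
dis.dis(Parser.parse)   # under -O, no trace of the print remains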

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: value of pi and 22/7

2016-06-18 Thread Johannes Bauer
On 18.06.2016 01:12, Lawrence D’Oliveiro wrote:

> I’m not sure how you can write “30” with one digit...

3e1 has one significant digit.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


pytz and Python timezones

2016-06-11 Thread Johannes Bauer
Hi there,

first off, let me admit that I have a hard time comprehensively wrapping
my head around timezones. Everything around them is much more
complicated than it seems, IMO. That said, I'm trying to do things the
"right" way and stumbled upon some weird issue which I can't explain.
I'm unsure what is happening here. I try to create a localized timestamp
in the easiest possible way. So, intuitively, I did this:

datetime.datetime(2016,1,1,0,0,0,tzinfo=pytz.timezone("Europe/Berlin"))

Which gives me:

datetime.datetime(2016, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' LMT+0:53:00 STD>)

Uh... what?

This here:

pytz.timezone("Europe/Berlin").localize(datetime.datetime(2016,1,1))

Gives me the expected result of:

datetime.datetime(2016, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>)

Can someone explain what's going on here and why I end up with the weird
"00:53" timezone? Is this a bug or am I doing things wrong?

Thanks,
Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: What is heating the memory here? hashlib?

2016-02-15 Thread Johannes Bauer
On 15.02.2016 03:21, Paulo da Silva wrote:

> So far I tried the program twice and it ran perfectly.

I think you measured your RAM consumption wrong.

Linux uses all free RAM as HDD cache. That's what shows up under "buffers".
That is, it's not "free", but it would be freed if any process called
sbrk(). My guess is that you only looked at the "free" number going down
and concluded your program was eating your RAM. Which it wasn't.

Cheers
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: raise None

2016-01-02 Thread Johannes Bauer
On 31.12.2015 21:18, Ben Finney wrote:

> As best I can tell, Steven is advocating a way to obscure information
> from the traceback, on the assumption the writer of a library knows that
> I don't want to see it.

How do you arrive at that conclusion? The line that raises the exception
is exactly the line where you would expect the exception to be raised,
i.e., the one containing the "raise" statement.

What you seem to advocate against is a feature that is ALREADY part of
the language, i.e. raising exceptions by reference to a variable rather
than constructing them on the fly. Your argumentation therefore makes no
sense in this context.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: raise None

2015-12-31 Thread Johannes Bauer
On 31.12.2015 01:09, Steven D'Aprano wrote:

> Obviously this doesn't work now, since raise None is an error, but if it did
> work, what do you think?

I really like the idea. I've approached a similar problem with a similar
solution (also experimented with decorators), but the tracebacks really
are unintuitive.

Unless I missed something, this seems like a nice feature.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Understanding WSGI together with Apache

2015-10-10 Thread Johannes Bauer
Hi there,

I'm running an Apache 2.4 webserver using mod_wsgi 4.3.0. There are two
different applications running in there, on two completely separate
vhosts.

I'm seeing some weird crosstalk between them which I do not understand.
In particular, crosstalk concerning the locales of the two. One
application needs to output, e.g., date information using a German
locale. It uses locale.setlocale to set its LC_ALL to de_DE.UTF-8.

Now the second application doesn't need nor want to be German. It wants
to see the C locale everywhere, in particular because at some point it
uses datetime.datetime.strptime() to parse a datetime.

Here's where things get weird: Sometimes, my "C" locale process throws
exceptions because it's unable to parse a date. When looking at why this
fails, the string looks like de_DE's "Sa, 10 Okt 2015" instead of C's
"Sat, 10 Oct 2015". This seems to happen depending on which worker
thread is currently serving the request, i.e. nondeterministically.

So all in all, this is very weird and I must admit that I don't seem to
fully understand how WSGI applications are run and served within a
mod_wsgi framework altogether. In the past it all "just worked" and I
didn't need to understand it all in-depth. But I think to be able to
debug such a weird issue, in-depth knowledge of what happens under the
hood would be helpful.

So if someone could shed some light on how it works in general or what
could cause the described issue in particular, I'd really be grateful.
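
What I have verified so far in a plain interpreter (so the effect is not
mod_wsgi-specific) is that setlocale() is process-global and immediately
affects strptime() in every thread. A small sketch, assuming the
de_DE.UTF-8 locale is generated on the system:

import locale
import threading
import datetime

def worker():
    # strptime() in this thread sees whatever locale the *process* has
    try:
        print(datetime.datetime.strptime("Sat, 10 Oct 2015", "%a, %d %b %Y"))
    except ValueError as exc:
        print("worker failed:", exc)

locale.setlocale(locale.LC_ALL, "de_DE.UTF-8")
t = threading.Thread(target=worker)
t.start()
t.join()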

Thanks,
Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RPI.GPIO Help

2015-09-01 Thread Johannes Bauer
On 31.08.2015 19:41, John McKenzie wrote:

>  Still checking here and am discussing all this in the Raspberry pi 
> newsgroup. Thanks to the several people who mentioned it.
> 
>  Again, still listening here if anyone has any more to add.

I had the problem of using interrupt-driven GPIOs on the Pi about two
years back. Here's how I solved it:

http://pastebin.com/gdJaJByU

To explain the message you're getting: If you want to handle GPIOs in
the most resource-efficient way, you use interrupt-driven handling.
Interrupts for GPIOs can be configured to be off, level-triggered or
edge-triggered. For edge-triggering I'm also pretty sure that the type
of edge (rising, falling, both) can be specified.

IIRC (and I might not, been quite some time), these interrupts are
bundled together in GPIO ports ("channels"). All GPIOs in one channel
need to have the same configuration. You cannot have conflicting
configuration between two pins which belong to the same GPIO channel (and
apparently, your framework is trying to do just that).

The code I posted does it all by hand (and it's not really hard, as you
can see). I used input and output functionality and do the interrupt
configuration myself (this works through the /proc filesystem on the Pi).

Hope this helps,
Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sometimes bottle takes a lot of time

2015-08-23 Thread Johannes Bauer
On 23.08.2015 18:47, Michael Torrie wrote:

> Since this is an ajax thing, I can entirely
> understand that Firefox introduces random delays.  Practically all
> ajax-heavy sites I've ever used has had random slowdowns in Firefox.

This would imply that random six-second delays have somehow passed the
Firefox QA effortlessly. It's something that is entirely possible, but
also something that I would consider orders of magnitude less likely than other
explanations. Six seconds is *huge* for regular web applications.

> Name resolution could be an issue, but the script he wrote to simulate
> the browser requests does not show the slowdown at all.  Firefox could
> be doing name resolution differently than the rest of the system and
> Chrome of course, which wouldn't surprise me as Firefox seems to more
> and more stupid stuff.

Proxy settings are another thing that could influence behavior. Maybe
the proxy of his Chrome is differently configured than Firefox and this
is causing issues. SOCKS proxies can emulate DNS as well. So there is a
plethora of possible causes; no need to shift blame before evidence is
presented, no need to jump to conclusions.

> But your bashing on him is inappropriate and
> unhelpful.
[...]
> What is annoying to me is how you have done nothing but jump all over
> him this whole thread, and several other times.  
[...]
> In fact I can find very few of your posts to this list where you
> aren't bashing Cecil in recent months.  This does not reflect well on
> the list community and drives away people who would otherwise want to
> learn Python.

I think you're right about this. I had some run-in with Cecil some
months ago - I don't even remember what it was about. The thing I do
remember is that I was particularly annoyed by the whole discussion
back then. This probably led to me being a lot more aggressive in my
choice of tone than I should have been.

You're entirely right that this kind of personal feud and immature
mockery is inappropriate for a mailing list and you're also right that
it does create a toxic atmosphere. Since Python is the lanauge I'm most
passionate about a detrimental effect on the Python community is
something that is surely the exact opposite of what I want.

> If Cecil's posts annoy you, please ignore them (I wouldn't even respond
> to this post of yours, but I feel like something has to be said).

I'll follow your advice and thank you for your honest words.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sometimes bottle takes a lot of time

2015-08-23 Thread Johannes Bauer
On 22.08.2015 16:15, Christian Gollwitzer wrote:

> Probably yes. You should take a look at the OP again and compare the
> time stamps. It says that in between two consecutive calls of the same
> program, the request was served once in a second, and once with serious
> delays. Despite that the server is localhost. In between both trials
> there are 20 seconds. I do not see, how git bisect would help here.

I do completely understand that in two consecutive runs one time the
problem occurs and the other time it doesn't.

It's highly unlikely that such a bug would ever have passed the bottle
QA, and if it did, it would affect thousands of users (who would report
this issue, since it's very severe). It is much more likely that the bug is
somewhere within the OP's program. With git bisect he can find out where
he introduced the bug.

> Note that this says nothing about the location of the bug, in can still
> be either in the OPs code or in the framework.

Yup. Note that he has now shifted from blaming bottle to blaming
Firefox. Same thing with that claim. If somehow website delivery was
delayed 6 seconds reproducibly, people would have noticed.

I suspect that either the OP's program is at fault or the OP's setup
(name resolution or some other weird stuff going on). But instead of
tackling this problem systematically, like a Software Engineer would
(Wireshark, debugger, profiler) he just blames other people's software.
This, in my humble opinion, is annoying as fuck.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sometimes bottle takes a lot of time

2015-08-22 Thread Johannes Bauer
On 22.08.2015 15:09, Cecil Westerhof wrote:

>> So let me get your story straight:
> 
> I wish you really meant that.

I really do, did I get it wrong at all? I really don't think that I did.

> Also: take a course in reading.

Maybe you, oh very wise Senior Software Engineer, should take a course
in Software Engineering. You wouldn't otherwise ask embarrassingly stupid
questions over and over and over again. Really eats away at your
seniority if you ask me.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sometimes bottle takes a lot of time

2015-08-22 Thread Johannes Bauer
On 22.08.2015 13:28, Cecil Westerhof wrote:

> If you would have taken a little more time you would have seen that
> there were 20 seconds between both logs. I am fast, but not that
> fast. It is exactly the same code. I suppose it has to do something
> with bottle. Something I use since yesterday. Is it that strange to
> look if someone else has had the same problem and maybe a solution?

So let me get your story straight:

You're new to bottle and, apparently, web-programming in Python as well.
You're new to sqlite.
You've built some web application using both.
Yesterday it worked perfectly.
Today it doesn't.
Obviously, you suspect bottle is at fault.

Being the very wise Senior Software Engineer that you are, do you really
think that mature software, programmed by people who actually know what
they're doing is at fault? Or do you rather think it maybe could be the
piece-of-junk demo application written by someone who has proven his
utter incompetence comprehensively time and time again?

Since you're the very wise Senior Software Engineer here, why don't you
use an approach that every schoolkid learns in a programming class?
Namely, narrow down the error and reproduce it reliably. Change your machine
and network setup, and switch between your software versions. Create a
minimal example that demonstrates the issue. Then, should you find one,
blame bottle. Not sooner, very wise Senior Software Engineer, not sooner.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Every character of a string becomes a binding

2015-08-22 Thread Johannes Bauer
On 21.08.2015 19:04, Cecil Westerhof wrote:

>> Because the execute method expects the bindings to be passed as a
>> sequence,
>
> Yeah, I found that. I solved it a little differently:
> urls = c.execute('SELECT URL FROM links WHERE URL = ?', [url]).fetchall()

You continuously ask more than amateurish questions here and, frequently,
after having received dozens of helpful and quick replies, you respond
with "yeah, I found that" or similar.

Wouldn't it be advisable to tinker with your code roughly 10 more
minutes the next time you run into an amateur's problem instead of
firing off a Usenet post, oh very wise Senior Software Engineer?

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Sometimes bottle takes a lot of time

2015-08-22 Thread Johannes Bauer
On 21.08.2015 23:22, Cecil Westerhof wrote:

> Just before everything was done in a second:

Since you're on GitHub, why don't you git bisect and find out where you
screwed up instead of trying to get people to remotely debug and profile
your broken code?

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Searching for a usable X509 implementation

2015-07-06 Thread Johannes Bauer
On 05.07.2015 07:33, Laura Creighton wrote:

> For an RSA key in PEM format you can do:
> from OpenSSL.crypto import _new_mem_buf, _lib, _bio_to_string
> 
> def dump_rsa_public_key(pkey):
>     bio = _new_mem_buf()
>     result = _lib.PEM_write_bio_RSAPublicKey(bio,
>                                              _lib.EVP_PKEY_get1_RSA(pkey._pkey))
>     # if result == 0: ERROR!  Figure out what you want to do here ...
>     return _bio_to_string(bio)

Oh, hacky :-)

> The original version of PyOpenSSL was written by Martin Sjögren, when
> he was working for me, and we had no need for such a thing at the time,
> since we just saved full certificates.  You are right that it is very
> odd that nobody else has needed them since then, and this probably
> should be added to PyOpenSSL.

Sadly my impression is that pyOpenSSL development is slow at best. I've
had an issue with it a while back and was missing some feature which
someone else had already suggested. It kindof was some back and forth in
their bugtracker and then all discussion died.

IIRC (and my memory may be wrong) it was about the ability to check
signatures of one certificate against a well-defined truststore
(especially against only one to identify parent certificates by crypto).
I was frustrated back then about the indecisiveness and wrote my own
wrapper around the functions I needed and was done with it.

Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-27 Thread Johannes Bauer
On 27.06.2015 12:16, Chris Angelico wrote:

> Okay, Johannes, NOW you're proving that you don't have a clue what
> you're talking about. D-K effect doesn't go away...

:-D

It does in some people. I've seen it happen; with knowledge comes
humility. Not saying Jon is a lost cause just yet. He's just in
intellectual puberty right now. I'm giving him a few years to re-judge.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-27 Thread Johannes Bauer
On 27.06.2015 11:17, Chris Angelico wrote:

> Good, so this isn't like that episode of Yes Minister when they were
> trying to figure out whether to allow a chemical factory to be built.

I must admit that I have no clue about that show or that episode in
particular and needed to read up on it:
https://en.wikipedia.org/wiki/The_Greasy_Pole

>> I must admit that I haven't seen your ideas in this thread?
> 
> No, the proposal I'm putting together is unrelated. You'll see the
> *vast* extent of my security skills here:
> 
> https://github.com/Rosuav/ThirdSquare
> 
> My contribution to this thread has been fairly minor, just suggesting
> one attack that doesn't even work any more, not much else.

Well, if people already have a solution ready there's a good chance that
any criticism falls on deaf ears. In any case, it's something that others
have to be responsible for; their party, their choice.

I've looked at your code even though I don't know Pike. That's the
typesafe JavaScript derivative, isn't it?

The only thing that I found horrible was the ssh key format to PKCS
parsing. Man that's hacky :-) You're creating a DER structure on-the-fly
that you fill with the key and that you then have parsed back. I've only
seen ssh-keygen used to generate keys (not to initiate actual ssh
connections), so why don't you use openssl to generate the keys? I think
you can generate an RSA keypair with openssl (also valid for ssh should you
need it) and I'm pretty sure that you can generate an ssh public key with
ssh-keygen from that private keypair file. That would eliminate the need
to do this kind of parsing, but it's just a PoC as I understand it.

It appears to be online-only, is that correct? Is Internet coverage so
good down under? I wish this were the case in Germany :-/

Not 100% sure about it, but I think that the bus concepts that are active in
Germany (locally in some cities) either use asymmetric transponders
(i.e. SmartMX), which gives a beautiful, decentralized, secure and
offline solution at the cost of being comparatively expensive. The
others use symmetric transponders which have limited offline
functionality: i.e. monotonic counters which are reset in a
cryptographically secured way by backend systems every time an
online connection is available and which are counted down in the offline case.

In any case, interesting. Thanks for sharing.
Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-27 Thread Johannes Bauer
On 27.06.2015 11:27, Jon Ribbens wrote:

> Johannes might have all the education in the world, but he's
> demonstrated quite comprehensively in this thread that he doesn't
> have a clue what he's talking about.

Oh, how hurtful. I might even shed a tear or two, but it's pretty clear
to me that you're just suffering under the Dunning-Kruger effect. No
worries, champ, it's just a phase that'll go away eventually.

Hugs and kisses,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-27 Thread Johannes Bauer
On 27.06.2015 10:38, Steven D'Aprano wrote:

> Can you say "timing attack"?
> 
> http://codahale.com/a-lesson-in-timing-attacks/
> 
> Can you [generic you] believe that attackers can *reliably* attack remote
> systems based on a 20µs timing differences? If you say "No", then you fail
> Security 101 and should step away from the computer until a security expert
> can be called in to review your code.

Yes, as people do more and more proper crypto (in contrast to crappy
stuff like LFSR-based custom keystream generators and such), side
channels become of great importance.
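
As an aside, the standard-library remedy for the specific comparison-timing
attack the linked article describes is a constant-time compare. A minimal
sketch (hmac.compare_digest has been in the stdlib since 3.3; the argument
names are made up):

import hmac

def token_ok(supplied, expected):
    # '==' returns as soon as the first byte differs and thereby leaks
    # timing; compare_digest takes (nearly) constant time instead.
    return hmac.compare_digest(supplied, expected)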

> I'm not a security expert. I'm not even a talented amateur. *Every time* I
> suggest that "X is secure", the security guy at work shoots me down in
> flames. But nicely, because I pay his wages 

:-)

Being shot down in flames is the way to become a security expert,
probably the *only* way. I don't know anyone who is an expert who hasn't
had that horrible experience at least a dozen times.

It is amazing how many holes you can poke in designs if you look at them
from enough angles. Having holes poked in your own designs gives you a
thorough appreciation for the true crypto experts (i.e. people doing
theoretical cryptography).

Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-27 Thread Johannes Bauer
On 27.06.2015 10:53, Chris Angelico wrote:
> On Sat, Jun 27, 2015 at 6:38 PM, Steven D'Aprano  wrote:
>> I'm not a security expert. I'm not even a talented amateur. *Every time* I
>> suggest that "X is secure", the security guy at work shoots me down in
>> flames. But nicely, because I pay his wages 
> 
> Just out of interest, is _anybody_ active in this thread an expert on
> security?

Yes. I've done a good 10 years of work in the field doing security
(mostly applied cryptography on embedded systems with a focus on side
channels like DPA, but also security concepts and threat/risk analysis)
and spent the last 3-4 years working on my PhD in the field of IT
security. My thesis is almost(tm) finished. I would claim to be an
expert, yes.

> I certainly am not, which means that the proposal I'm
> currently putting together probably has a whole bunch of
> vulnerabilities that I haven't thought of. (Though there's no emphasis
> on encryption anywhere, just signing. I'm *hoping* that RSA public key
> verification is sufficient, but if it isn't, it would be possible for
> a malicious user to make a serious mess of stuff.) But I'm under no
> delusions. I don't say "this is secure" - all I'm saying is "this
> works in proof-of-concept".

I must admit that I haven't seen your ideas in this thread?

Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-26 Thread Johannes Bauer
On 27.06.2015 02:55, Randall Smith wrote:

> No the attacker does not have access to the ciphertext.  What would lead
> you to think they did?

Years of practical experience in the field of applied cryptography.
Knowledge of how side channels work and how easily they can be
constructed for bad schemes.

Rest snipped, explanation futile.
Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-26 Thread Johannes Bauer
On 26.06.2015 23:29, Jon Ribbens wrote:

>> While you seem to think that Steven is rampaging about nothing, he does
>> have a fair point: You consistently were vague about whether you want to
>> have encryption, authentication or obfuscation of data. This suggests
>> that you may not be so sure yourself what it is you actually want.
> 
> He hasn't been vague, you and Steven just haven't been paying
> attention.

Bullshit. Even the topic indicates that he doesn't know what he wants:
"data mangling" or "encryption", which one is it?

>> You always play around with the 256! which would be a ridiculously high
>> security margin (1684 bits of security, w!). You totally ignore that
>> the system can be broken in a linear fashion.
> 
> No, it can't, because the attacker does not have access to the
> ciphertext.

Or so you claim.

I could go into detail about how the assumption that the ciphertext is
secret is not a smart one in the context of cryptography. And how side
channels and other leakage may affect overall system security. But I'm
going to save my time on that. I do get paid to review cryptographic
systems and part of the job is dealing with belligerent people who have
read Schneier's blog and think they can outsmart anyone else. Since I
don't get paid to convince you, it's absolutely fine that you think your
substitution scheme is the grand prize.

>> Nobody assumes you're a moron. But it's safe to assume that you're a
>> crypto layman, because only laymen have no clue on how difficult it is
>> to get cryptography even remotely right.
> 
> Amateur crypto is indeed a bad idea. But what you're still not getting
> is that what he's doing here *isn't crypto*. 

So the topic says "Encrypting". If you look really closely at the word,
the part "crypt" might give away to you that cryptography is involved.

> He's just trying to avoid
> letting third parties write completely arbitrary data to the disk.

There's your requirement. Then there's obviously some kind of
implication when a third party *can* write arbitrary data to disk. And
your other solution to that problem...

> You
> know what would be a perfectly good solution to his problem? Base 64
> encoding. That would solve the issue pretty much completely, the only
> reason it's not an ideal solution is that it of course increases the
> size of the data.

...wow.

That's a nice interpretation of not letting a third party write
completely arbitrary data. According to your definition, this would be:
It's okay if the attacker can control 6 of 8 bits.

>> That people in 2015 actually defend inventing a substitution-cipher
>> "crypto"system sends literally shivers down my spine.
> 
> Nobody is defending such a thing, you just haven't understood what
> problem is being solved here.

Oh I understand your "solutions" plenty well. The only thing I don't
understand is why you don't own a Fields medal yet for your
groundbreaking work on bulletproof obfuscation.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-26 Thread Johannes Bauer
On 26.06.2015 22:09, Randall Smith wrote:

> And that's why we're having this discussion.  Do you know of an attack
> in which you can control the output (say at least 100 consecutive bytes)
> for data which goes through a 256 byte translation table, chosen
> randomly from 256! permutations after the data is sent.  If you do, I'm
> all ears!  But at this point you're just setting up straw men and
> knocking them down.

Oh and I wanted to comment on this as well, but sent my reply too soon.

You misunderstand. This is not how it works, this is not how any of this
works. Steven does not *at all* have to prove to you your system is
breakable or show actual attacks. YOU have to prove that your system is
secure. Either analytically or you wait until you have peer review and
cryptanalysis by actual experts.

It's *very* easy to set up a badly flawed obfuscation system that can't
be broken by laymen in a Python newsgroup and which appears to be secure.
This does not imply one bit that it is even remotely secure.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Pure Python Data Mangling or Encrypting

2015-06-26 Thread Johannes Bauer
On 26.06.2015 22:09, Randall Smith wrote:

> You've gone on a rampage about nothing.  My original description said
> the client was supposed to encrypt the data, but you want to assume the
> opposite for some unknown reason.

While you seem to think that Steven is rampaging about nothing, he does
have a fair point: You consistently were vague about whether you want to
have encryption, authentication or obfuscation of data. This suggests
that you may not be so sure yourself what it is you actually want.

All Steven is doing is pointing out that people do good crypto for a
reason. It's 2015 and we're still discussing "substitution ciphers",
really? Good crypto is available, it's fast, it has awesome
cryptanalysis. All Steven is pointing out is that when ten crypto-laymen
meet in a Python newsgroup and think they have invented a soooper secure
scheme, it may still be complete and utter crap. Just not everyone can
see it.

You always play around with the 256! which would be a ridiculously high
security margin (1684 bits of security, w!). You totally ignore that
the system can be broken in a linear fashion. I don't need to know all
256 characters to do damage, sometimes even a handful will already give
me part of what I need and the option to crack more and more. This is
something that would ultimately and instantly disqualify your
"crypto"system as utterly insecure.

Nobody assumes you're a moron. But it's safe to assume that you're a
crypto layman, because only laymen have no clue on how difficult it is
to get cryptography even remotely right. Everyone who knows the trade
uses proven constructions not because it's convenient, but because
it's one of the very few ways to achieve a secure system.

That said, for your solution this type of obfuscation may be fine. And
chances are that nobody will ever notice. But don't claim you weren't
warned about the abyss when you designed your solution and people break
this stuff. Because then you might *look* like a moron (even if you're
not), since the first question people will ask will be: "Why? Why on
earth?" It's a blatantly obvious bad idea(tm).

That people in 2015 actually defend inventing a substitution-cipher
"crypto"system sends literally shivers down my spine.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Jython and can't write cache file

2015-06-21 Thread Johannes Bauer
On 21.06.2015 11:40, Cecil Westerhof wrote:

> Thanks. Good that I asked it. :-D

Good for you that you found someone able to enter words into a Google
query. That's a skill you might want to acquire some time in the future.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Using ssl module over custom sockets

2015-06-08 Thread Johannes Bauer
On 08.06.2015 09:22, jbauer.use...@gmail.com wrote:

> Something that I could always use as a workaround would be to open up a 
> listening port locally in one thread and connecting to that local port in a 
> different thread, then forward packets. But that's pretty ugly and I'd like 
> to avoid it.

Didn't actually have to do that. My solution now is to use
socket.socketpair() and forward traffic from there. Works reasonably
well and isn't racy (unlike opening up a local TCP/IP port would be).

So I think this solves the problem (although it'd still be cool to have
an alternative way of specifying a send/recv function IMHO).
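
For the archive, a rough sketch of the shape it takes (custom_send and
custom_recv stand in for whatever the real transport is, the hostname is
made up, and all error handling is omitted):

import socket
import ssl
import threading

def wrap_custom_transport(custom_send, custom_recv, hostname):
    ssl_end, pump_end = socket.socketpair()

    def pump_outgoing():
        # TLS records produced by the ssl module go out via the custom transport
        while True:
            data = pump_end.recv(4096)
            if not data:
                break
            custom_send(data)

    def pump_incoming():
        # bytes arriving on the custom transport are fed back to the ssl module
        while True:
            data = custom_recv(4096)
            if not data:
                break
            pump_end.sendall(data)

    threading.Thread(target = pump_outgoing, daemon = True).start()
    threading.Thread(target = pump_incoming, daemon = True).start()

    ctx = ssl.create_default_context()
    return ctx.wrap_socket(ssl_end, server_hostname = hostname)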

Cheers,
Johannes


-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Function to show time to execute another function

2015-06-07 Thread Johannes Bauer
On 07.06.2015 22:35, Cecil Westerhof wrote:

>> And you also posted your solution. I fail to find any question in
>> your original posting at all.
> 
> That is because there was no question: I just wanted to share
> something I thought that could be useful. If you would have taken the
> trouble to read a little, you would have known that.

You actually didn't write that. You wrote "I have a problem. Here is my
solution. This is how it works." Okay, so what? If you had taken
the time to write "Just wanted to share this", your intention would have
been obvious.

>>> Sadly the quality of the answers on this list is going down. 
>>
>> Maybe you should start asking questions that people are able to
>> comprehend so that you get an answer that you like.
> 
> I do not want answer I like, but an answer that is useful.

Sorry to break it to you, but this group does not revolve around you. An
answer that might not be useful to you might be useful for someone else
and it is surely useful for getting a discussion going.

>>> Here I get an alternative that does only half what I want and when
>>> writing an alternative for ‘!find’ I am told I could use ‘!find’
>>> (which only works in ipython, not python and which also not works
>>> with Windows).
>>
>> Protip: Ditch the shitty attitude. I don't know if you're a jerk or
>> not but I know for sure that you sound like one. Makes it also much
>> less likely to get the answers you'd like.
> 
> So if someone gives an answer that is completely useless it is a
> shitty attitude when I point this out? Very interesting indeed.

Precisely. Especially the tone you used is proof of your really shitty
attitude.

> I do not think I have to expect something useful from you, it looks
> like you prefer name calling. Luckily there are a lot of people that
> have a different attitude.

Indeed you can expect nothing more from me and I purposely omitted the
way I do timing from this discussion. I do not aid people who fail to
recognize their major social dysfunction. Not even when they are coding
geniuses. Which, judging from your code snippet, you clearly aren't.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Function to show time to execute another function

2015-06-07 Thread Johannes Bauer
On 07.06.2015 10:22, Cecil Westerhof wrote:

> That only times the function. I explicitly mentioned I want both the
> needed time AND the output.

And you also posted your solution. I fail to find any question in your
original posting at all.

> Sadly the quality of the answers on this list is going down. 

Maybe you should start asking questions that people are able to
comprehend so that you get an answer that you like.

> Here I
> get an alternative that does only half what I want and when writing an
> alternative for ‘!find’ I am told I could use ‘!find’ (which only
> works in ipython, not python and which also not works with Windows).

Protip: Ditch the shitty attitude. I don't know if you're a jerk or not
but I know for sure that you sound like one. Makes it also much less
likely to get the answers you'd like.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Ah Python, you have spoiled me for all other languages

2015-05-24 Thread Johannes Bauer
On 23.05.2015 19:05, Marko Rauhamaa wrote:
> Johannes Bauer :
> 
>> I think the major flaw of the X.509 certificate PKI we have today is
>> that there's no namespacing whatsoever. This is a major problem, as
>> the Government of Untrustworthia may give out certificates for
>> google.de if they wish to do so.
> 
> But you're fine with the Government of Germany, I take it? Or any
> accredited German CA?

Of course not. But namespacing *enables* separation of trusted entities
where we currently have none whatsoever.

>> Sounds like it's trivial to implement, I wonder why it's not in place.
>> It must have some huge drawback that I can't think of right now.
> 
> How would your scheme address .com, .net, .org etc?

I don't see any problem; why do you see one?

The thing was that I was just giving an example of how nesting could
work. Whether those are domain names, nested OIDs or any other form of
unique identifier does not matter at all. de, org, fudis, it's all the same.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Ah Python, you have spoiled me for all other languages

2015-05-23 Thread Johannes Bauer
On 23.05.2015 13:21, Tim Daneliuk wrote:

> Trust has context.  You're going to that site to read an article.  This
> is rather different than, say, going somewhere to transact commerce or
> move money.

Sure, for your site it doesn't really make a difference. And, as I said
before, having a self-signed CA certificate doing https is still WAY
better than not having it. Especially if you have PFS-only ciphersuites
configured (I didn't check, but you should if you're unsure). Because
this effectively means that you're protected against passive
eavesdropping, no matter what.

> So, there is increasing thought that we should all just
> run https everywhere all the time.  But then we run into the signing problem.
> I am hoping that we will soon see free or inexpensive CAs to make that
> problem go away.  See:

Running TLS everywhere is an awesome idea and I'm all for it. So good
that you've already made the switch :-)

But I don't think inexpensive CAs would make the signing problem go away.

I think the major flaw of the X.509 certificate PKI we have today is
that there's no namespacing whatsoever. This is a major problem, as the
Government of Untrustworthia may give out certificates for google.de if
they wish to do so.

In my opinion, it would be great to have a suffix-option in X.509 (maybe
there's even an extension for this already and I'm not aware of it -
regardless, nobody is using it if there is such a thing). For example,
there'd be root certificates in the certificate store:

CA1: PF=.com signs -> CA2: PF=.google.com
CA3: PF=.de

So CA1 could give out certificates for
foo.com
www.google.com

And CA2 could give out certificates for
www.google.com

And CA3 could give out certificates for
google.de

But CA1 could never sign any .de domain webserver certificate. It would
only ever get more restrictive down the chain.

Sounds like it's trivial to implement, I wonder why it's not in place.
It must have some huge drawback that I can't think of right now.
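
Just to illustrate the check itself, a toy sketch of the idea (plain suffix
string matching, not actual X.509 machinery):

def may_issue(ca_suffix, hostname):
    # A CA restricted to ".de" may sign google.de but not www.google.com.
    return hostname.endswith(ca_suffix)

assert may_issue(".com", "www.google.com")
assert may_issue(".de", "google.de")
assert not may_issue(".de", "www.google.com")
assert ".google.com".endswith(".com")   # a sub-CA can only ever narrow the suffix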

Cheers,
Johannes


-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Ah Python, you have spoiled me for all other languages

2015-05-23 Thread Johannes Bauer
On 23.05.2015 14:44, Marko Rauhamaa wrote:
> Johannes Bauer :
> 
>> I dislike CAs as much as the next guy. But the problem of distributing
>> trust is just not easy to solve; a TTP is a way out. Do you have an
>> alternative that does not, while providing a solution, also open up
>> an obvious attack surface?
> 
> Here's an idea: an authentication is considered valid if it is vouched
> for by the United States, China, Russia *and* the European Union. Those
> governments are the only entities that would have the right to delegate
> their respective certification powers to private entities. The
> governments would also offer to certify anybody in the world free of
> charge.

You propose that a set of multiple CA signatures (TTPs) is required and
that those CAs work for free.

Multiple problems with that.

Firstly, who is going to choose the TTPs? In your example you
arbitrarily chose four instances. Japan is missing from there, why?
Because you made arbitrary rules. Good luck convincing everyone
(especially the Japanese) that your choice is the "right" one. There is
never going to be agreement.

Secondly, any of the "chosen" TTPs can effectively DoS every other
country in your scenario. If the US and Russia have a conflict, each
party can become sloppy at their certifications and slow things down a
bit. Suddenly bank-of-russia.ru doesn't have a valid certificate
anymore, ooops.

Thirdly, the more TTPs you have, the less well the whole thing scales.
The whole idea of a trusted third party is that you can TRUST that party
and don't have to do additional checks (like checking agreement with
other TTPs).

Fourthly and lastly: How would this work? If I have a website running
https, how would I get my identity certified by Russia or China? I
should maybe mention that I speak neither Russian nor Chinese. And even
if I did or maybe if their CAs provided service in English, how would
they certify me? For personal identification purposes you often have to
appear in person, something that is impossible if you distribute the
scheme around the whole world.

All in all, the current CA system is shitty and has numerous problems,
but it's not like it's been designed by monkeys. Every alternative has
new problems, some of which may be even worse than the problems we have now.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Ah Python, you have spoiled me for all other languages

2015-05-23 Thread Johannes Bauer
On 23.05.2015 05:31, Michael Torrie wrote:

> Sigh. I blame this as much on the browser.  There's no inherent reason
> why a connection to a site secured with a self-signed certificate is
> insecure.

The problem is *not* that the certificate is self-signed.

It's that it's unknown prior to being encountered within the TLS
handshake. And that *does* make it inherently insecure.

Not algorithmically, obviously.  I can still do a DH-handshake with the
remote peer that will generate a shared secret no eavesdropper will
know. The browser just can't be sure that whoever it negotiated the DH
with is really the endpoint (i.e. the webserver). That is the problem.

I dislike CAs as much as the next guy. But the problem of distributing
trust is just not easy to solve; a TTP is a way out. Do you have an
alternative that does not, while providing a solution, also open up an
obvious attack surface?

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


textwrap.wrap() breaks non-breaking spaces

2015-05-17 Thread Johannes Bauer
Hey there,

so, textwrap.wrap() breaks non-breaking spaces; is this a bug or
intended behavior? For example:

Python 3.4.0 (default, Apr 11 2014, 13:05:11)
[GCC 4.8.2] on linux

>>> import textwrap
>>> for line in textwrap.wrap("foo dont\xa0break " * 20): print(line)
...
foo dont break foo dont break foo dont break foo dont break foo dont
break foo dont break foo dont break foo dont break foo dont break foo
dont break foo dont break foo dont break foo dont break foo dont break
foo dont break foo dont break foo dont break foo dont break foo dont
break foo dont break

Apparently it does recognize that \xa0 is a kind of space, but it thinks
it can break on any space. The point of \xa0 is exactly to avoid this
kind of thing.

Any remedy or ideas?
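
The best workaround I've come up with so far is to hide the \xa0 from the
wrapper and put it back afterwards, using a placeholder character that is
assumed not to occur in the input:

import textwrap

text = "foo dont\xa0break " * 20
placeholder = "\uf8ff"      # arbitrary private-use codepoint
protected = text.replace("\xa0", placeholder)
for line in textwrap.wrap(protected):
    print(line.replace(placeholder, "\xa0"))

But that is obviously a hack, not a fix.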

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is this unpythonic?

2015-05-10 Thread Johannes Bauer
On 10.05.2015 10:58, Frank Millman wrote:

> It is then a simple extra step to say -
> 
> EMPTY_LIST = ()
> 
> and if required -
> 
> EMPTY_DICT = ()
> 
> and expand the explanation to show why a tuple is used instead.
> 
> So if there was a situation where the overhead of testing for None became a 
> problem, this solution offers the following -
> 
> 1. it solves the 'overhead' problem
> 2. it reads reasonably intuitively in the body of the program
> 3. it is safe
> 4. it should not be difficult to write a suitable self-explanatory comment

I do understand what you're trying to do, but it is my gut feeling that
you're overengineering this and, as a side effect, introducing new problems.

With the above declaration as you describe, the code becomes weird:

foo = EMPTY_LIST
foo.append(123)
Traceback (most recent call last):
  File "", line 1, in 
AttributeError: 'tuple' object has no attribute 'append'

and

foo = EMPTY_DICT
foo["bar"] = "moo"
Traceback (most recent call last):
  File "", line 1, in 
TypeError: 'tuple' object does not support item assignment

So instead, the user of this construct would have to

foo = list(EMPTY_LIST)

or

foo = dict(EMPTY_DICT)

but, coincidentially, this is easier (and more pythonic) by doing

foo = list()
foo = dict()

to which there are the obvious (pythonic) shortcuts

foo = [ ]
foo = { }

All in all, I'd be more confused about why someone would introduce
"EMPTY_LIST" in the first place and suspect there's some strange
reason behind it. Explaining the reason in the comments doesn't really
help in my opinion.

Best regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Is this unpythonic?

2015-05-10 Thread Johannes Bauer
On 08.05.2015 14:04, Dave Angel wrote:

> It might be appropriate to define the list at top-level, as
> 
> EMPTY_LIST=[]
> 
> and in your default argument as
> def x(y, z=EMPTY_LIST):
> 
> and with the all-caps, you're thereby promising that nobody will modify
> that list.
> 
> (I'd tend to do the None trick, but I think this alternative would be
> acceptable)

I think it's a really bad idea to use a module-global mutable
"EMPTY_LIST". It's much too easy this happens:

# Globally
>>> EMPTY_LIST = [ ]

# At somewhere in the code at some point in time
>>> foo = EMPTY_LIST
>>> foo.append(123)
>>> print(foo)
[123]

# Some other place in code
>>> bar = EMPTY_LIST
>>> print(bar)
[123]

Regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: sqlite3 and dates

2015-02-18 Thread Johannes Bauer
On 18.02.2015 13:14, Chris Angelico wrote:
> On Wed, Feb 18, 2015 at 10:57 PM, Johannes Bauer  wrote:
>> SQLite and Postgres are so vastly different in their setup,
>> configuration, capabilities and requirements that the original developer
>> has to have made a MAJOR error in judgement so that a change from one to
>> the other would not be ill-advised.
> 
> On Wed, Feb 18, 2015 at 6:49 PM, Frank Millman  wrote:
>> My accounting software supports three databases - MS Sql Server, PostgreSQL,
>> and sqlite3.
> 
> Johannes, are you saying that Frank made three major errors of judgement? :)

I'm totally pleading the Fifth! :-P

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: sqlite3 and dates

2015-02-18 Thread Johannes Bauer
On 18.02.2015 12:21, Chris Angelico wrote:

> SQLite3 is fine for something that's basically just a more structured
> version of a flat file. You assume that nobody but you has the file
> open, and you manipulate it just the same as if it were a big fat blob
> of JSON, but thanks to SQLite, you don't have to rewrite the whole
> file every time you make a small change. That's fine. But it's the
> wrong tool for any job involving multiple users over a network, and
> quite probably the wrong tool for a lot of other jobs too.

Your assessment that some tools fit certain problems and don't fit
different problems is entirely correct. SQLite does the job that it is
supposed to do and it fills that niche well.

> It's the
> smallest-end piece of software that can truly be called a database. I
> would consider it to be the wrong database for serious accounting
> work, and that's based on the ranting of a majorly-annoyed accountant
> who had to deal with issues in professional systems that had made
> similar choices in back-end selection.

It probably is the wrong database for serious accounting work, and it's
probably also the wrong database for doing multivariate statistical
analysis on sparse matrices that you store in tables.

You could similarly argue that a hammer is the wrong tool to drive in a
screw and you'd be correct in that assessment. But it's completely
beside the point.

SQLite and Postgres are so vastly different in their setup,
configuration, capabilities and requirements that the original developer
has to have done a MAJOR error in judgement so that a change from one to
the other would not be ill-advised.

> You're welcome to disagree, but since PostgreSQL doesn't cost any
> money and (on Linux at least; can't speak for other platforms) doesn't
> take significant effort to set up, I will continue to recommend it.

I work with Postgres on a professional, day-to-day basis. And while it's
free, it *does* take a significant effort to set up and it *does* take a
significant effort to maintain. Especially in comparison with something
like SQLite that literally has no setup at all.

PostgreSQL is great. It's an incredible database and that it's free is
amazing. But in very few settings will it be a replacement for SQLite.

Cheers,
Johannes


-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: sqlite3 and dates

2015-02-18 Thread Johannes Bauer
On 18.02.2015 08:05, Chris Angelico wrote:

> But if you need more facilities than SQLite3 can offer, maybe it's
> time to move up to a full database server, instead of local files.
> Switching to PostgreSQL will give you all those kinds of features,
> plus a lot of other things that I would have thought pretty basic -
> like ALTER TABLE. It was quite a surprise to learn that SQLite3 didn't
> support that.

I see you're running a lawnmower. Maybe you should switch to a combine
harvester. That'll get you extra features like a reciprocating knife
cutter bar. I was quite surprised that regular lawnmowers don't support
those.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Idiomatic backtracking in Python

2015-01-25 Thread Johannes Bauer
Hi folks,

I have a problem at hand that needs code for backtracking as a solution.
And I have no problem coding it, but I can't get rid of the feeling that
I'm always solving backtracking problems in a non-Pythonic
(non-idiomatic) way. So, I would like to ask if you have a Pythonic
approach to backtracking problems? If so, I'd love to hear your solutions!
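
For concreteness, the generator-based style I keep ending up with, sketched
here with N-queens as a stand-in problem (just to show the shape):

def solve(n, partial = ()):
    row = len(partial)
    if row == n:
        yield partial
        return
    for col in range(n):
        # place a queen in 'row' at column 'col' if it conflicts with nothing
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(partial)):
            yield from solve(n, partial + (col,))

for solution in solve(6):
    print(solution)     # a tuple of column indices, one per row

Is this roughly what you'd call idiomatic, or is there a nicer pattern?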

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Obscuring Python source from end users

2014-10-02 Thread Johannes Bauer
On 29.09.2014 16:53, Sturla Molden wrote:
> Chris Angelico  wrote:
> 
>>> I have a project that involves distributing Python code to users in an
>>> organisation. Users do not interact directly with the Python code; they
>>> only know this project as an Excel add-in.
>>>
>>> Now, internal audit takes exception in some cases if users are able to
>>> see the source code.
>>
>> The solution is to fix your internal audit.
> 
> +1

You two have obviously never worked at a large corporation or you would
both realize how tremendously naive and unhelpful this "solution" is.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: GIL detector

2014-08-17 Thread Johannes Bauer
On 17.08.2014 16:21, Steven D'Aprano wrote:
> Coincidentally after reading Armin Ronacher's criticism of the GIL in
> Python:
> 
> http://lucumr.pocoo.org/2014/8/16/the-python-i-would-like-to-see/

Sure that's the right one? The article you linked doesn't mention the GIL.

> I stumbled across this "GIL detector" script:
> 
> http://yuvalg.com/blog/2011/08/09/the-gil-detector/
> 
> Running it on a couple of my systems, I get these figures:
> 
> CPython 2.7: 0.8/2 cores
> CPython 3.3: 1.0/2 cores
> 
> Jython 2.5:  2.3/4 cores
> CPython 2.6: 0.7/4 cores
> CPython 3.3: 0.7/4 cores

CPython 3.4: 0.9/4 cores

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: python small task

2014-08-15 Thread Johannes Bauer
On 15.08.2014 11:18, ngangsia akumbo wrote:
> i have this piece of code 
> 
> file1 = open('text.txt, w)
> try:
> text = file1.read()
> finally:
> file1.close()
> 
> i wish to manage an office task using this small code , how can i implemetn 
> it to function. 

import random
import troll
import socket

t = troll.GenericTroll(maxiq = 75)
try:
t.connect("comp.lang.python")
t.spout(random.randdrivel())
except socket.error:
print("so good try. many wow. fail sad")
finally:
t.close()

> how can i pars it in a webpage ?

t.pars_in_a_webpage()

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to get the ordinal number in list

2014-08-09 Thread Johannes Bauer
On 09.08.2014 19:22, luofeiyu wrote:
 x=["x1","x3","x7","x5"]
 y="x3"
> 
>  how can i get the ordinal number by some codes?
> 
> for id ,value in enumerate(x):
> if y==value : print(id)
> 
> Is more simple way to do that?

print(x.index(y))

HTH,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Proposal: === and !=== operators

2014-07-12 Thread Johannes Bauer
On 12.07.2014 18:35, Steven D'Aprano wrote:

> If you said, "for many purposes, one should not compare floats for 
> equality, but should use some sort of fuzzy comparison instead" then I 
> would agree with you. But your insistence that equality "always" is wrong 
> takes it out of good advice into the realm of superstition.

Bullshit. Comparing floats by their representation is *generally* a bad
idea because of portability issues. You don't know if IEEE754 is used to
represent floats on the systems that your code is used on.

You're hairsplitting: if I had said "in 99.9% of cases" you'd agree
with me, but since I said "always" you disagree. Don't lawyer out.
Comparing binary representation of floats is a crappy idea.

Even more so in the age of cloud computing, where your code is executed
on who knows which architecture and where the exact same high-level
code might lead to vastly different results. Not to mention
high performance computing, where specialized FPUs which don't give a
shit about IEEE754 are commonplace.

Another reason why it's good to NEVER compare floats with regard to
their binary representation: Do you know exactly how your FPU is
configured by your operating system? Do you know that the FPUs on a
multiprocessor system are all configured identically with regard to
754? Rounding modes, etc.?

Just don't fall in the pit. Don't compare floats via equals.
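
For the record, the kind of thing I mean instead (tolerances made up and
obviously problem-dependent):

a = 0.1 + 0.2
b = 0.3
print(a == b)                   # False with IEEE754 doubles

def almost_equal(x, y, rel_tol = 1e-9, abs_tol = 0.0):
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

print(almost_equal(a, b))       # True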

>> when x < x -> False
> 
> Because not all types are ordered:
> 
> py> x = 1+3j
> py> x < x
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: unorderable types: complex() < complex()

Oh, so then you also don't want reflexivity of equals, I think.
Because, obviously, not all types support comparison for equality:

#!/usr/bin/python3
class Yeah(object):
    def __eq__(self, other):
        raise TypeError("Booya")
Yeah() == Yeah()

You cherrypick your logic and hairsplit in your reasoning. It's not
consistent.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Proposal: === and !=== operators

2014-07-12 Thread Johannes Bauer
On 09.07.2014 11:17, Steven D'Aprano wrote:

> People are already having problems, just listen to Anders. He's 
> (apparently) not doing NAN-aware computations on his data, he just wants 
> to be able to do something like
> 
> this_list_of_floats == that_list_of_floats

This is a horrible example.

There's no pretty way of saying this: Comparing floats using equals
operators has always been and will always be an incredibly dumb idea. The
same applies obviously to containers containing floats.

I also agree with Chris that an additional operator will make
things worse rather than better. It'll add confusion with no tangible benefit.
The current operators might have the deficiency that they're not
reflexive, but then again: Why should == always be reflexive while the
other operators aren't? Why should I be able to assume that

x == x -> True

but not

when x < x -> False

If you're arguing from a mathematical/logical standpoint then if you
want the former you'll also have to want the latter.
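
For concreteness, the non-reflexive case this whole thread revolves around:

nan = float("nan")
print(nan == nan)   # False, IEEE754 NaN compares unequal to everything
print(nan < nan)    # False
print(nan != nan)   # True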

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Off-Topic: The Machine

2014-06-24 Thread Johannes Bauer
On 24.06.2014 03:23, Steven D'Aprano wrote:
> http://www.iflscience.com/technology/new-type-computer-capable-
> calculating-640tbs-data-one-billionth-second-could
> 
> Relevance: The Machine uses *eighty times less power* for the same amount 
> of computing power as conventional architectures. If they could shrink it 
> down to a mobile phone, your phone might last 2-3 months on a single 
> charge.

The article is highly unscientific and unspecific. It does not elaborate
on what it means to "calculate" a terabyte of data nor does it specify
what it means to "handle" a petabyte of data. They're terms used by
the ignorant, written for the ignorant.

So all in all I think it's safe to discard the article.

Also, mobile phones don't waste most of their power doing "calculating"
and "handling" terabytes of data, but the RF and display consumes the
most of power. Therefore, even if you could scale the CPU down your
phone would still not go 2-3 months on a single charge.

Cheers,
Joe

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-15 Thread Johannes Bauer
On 05.06.2014 23:53, Marko Rauhamaa wrote:

> or:
> 
>    def make_street_address_map(info_list):
>        return dict((info.get_street_address(), info.get_zip_code())
>                    for info in info_list)
> 

or, what I think is even clearer than your last one:

def make_street_address_map(info_list):
    return { info.get_street_address(): info.get_zip_code()
             for info in info_list }

Regards,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: OT: This Swift thing

2014-06-15 Thread Johannes Bauer
On 07.06.2014 11:54, Alain Ketterlin wrote:

> No. Cost is the issue (development, maintenance, operation,
> liability...). Want an example? Here is one:
> 
> http://tech.slashdot.org/story/14/06/06/1443218/gm-names-and-fires-engineers-involved-in-faulty-ignition-switch

Yeah this is totally believable. One rogue engineer who clearly did it
all by himself. He just wanted to save the company a few dollars out of
pure love for it. Clearly it's his and only his fault with no boundary
conditions that could have influenced his decision in any meaningful
ways. In fact, there's even a GM company memo that states "Hey Ray, just
do what is sensible engineering-wise and don't worry about cost. It's
kewl." But no, Ray just had to go rogue. Just had to do it his way. Man.
Typical Ray thing.

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-06 Thread Johannes Bauer
On 05.06.2014 22:18, Ian Kelly wrote:

> Personally I tend toward rstrip('\r\n') so that I don't have to worry
> about files with alternative line terminators.

Hm, I was under the impression that Python already took care of removing
the \r at a line ending. Checking that right now:

(DOS encoded file "y")
>>> for line in open("y", "r"): print(line.encode("utf-8"))
...
b'foo\n'
b'bar\n'
b'moo\n'
b'koo\n'

Yup, the \r was removed automatically. Are there cases when it isn't?
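
The only cases I can think of would be when newline translation is switched
off, i.e. newline="" in text mode, or binary mode (a quick sketch, same
DOS-encoded file "y" as above):

with open("y", "r", newline = "") as f:
    print([line.encode("utf-8") for line in f])    # the b'...\r\n' endings survive
with open("y", "rb") as f:
    print(f.readlines())                           # likewise in binary mode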

Cheers,
Johannes

-- 
>> Wo hattest Du das Beben nochmal GENAU vorhergesagt?
> Zumindest nicht öffentlich!
Ah, der neueste und bis heute genialste Streich unsere großen
Kosmologen: Die Geheim-Vorhersage.
 - Karl Kaos über Rüdiger Thomas in dsa 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-06 Thread Johannes Bauer
On 05.06.2014 20:52, Ryan Hiebert wrote:
> 2014-06-05 13:42 GMT-05:00 Johannes Bauer :
> 
>> On 05.06.2014 20:16, Paul Rubin wrote:
>>> Johannes Bauer  writes:
>>>> line = line[:-1]
>>>> Which truncates the trailing "\n" of a textfile line.
>>>
>>> use line.rstrip() for that.
>>
>> rstrip has different functionality than what I'm doing.
> 
> How so? I was using line=line[:-1] for removing the trailing newline, and
> just replaced it with rstrip('\n'). What are you doing differently?

Ah, I didn't know rstrip() accepted parameters, and since you wrote
line.rstrip(), that would also have stripped whitespace (which sadly is
significant in odd cases).
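To make the difference concrete, a quick comparison (the sample line is
made up):

line = "data with trailing spaces   \n"

print(repr(line[:-1]))            # 'data with trailing spaces   '  (drops only the last char)
print(repr(line.rstrip()))        # 'data with trailing spaces'     (also eats the spaces)
print(repr(line.rstrip("\n")))    # 'data with trailing spaces   '  (newline only)
print(repr(line.rstrip("\r\n")))  # same, and also copes with DOS \r\n endings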

Thanks for the clarification, I'll definitely introduce that.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-05 Thread Johannes Bauer
On 05.06.2014 20:16, Paul Rubin wrote:
> Johannes Bauer  writes:
>> line = line[:-1]
>> Which truncates the trailing "\n" of a textfile line.
> 
> use line.rstrip() for that.

rstrip has different functionality than what I'm doing.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode and Python - how often do you index strings?

2014-06-05 Thread Johannes Bauer
On 04.06.2014 02:39, Chris Angelico wrote:

> I know the collective experience of python-list can't fail to bring up
> a few solid examples here :)

I also just grepped through lots of code and found surprisingly few
instances of index-based access. Most are with constant indices. One particular example
that comes up a lot is

line = line[:-1]

Which truncates the trailing "\n" of a textfile line.

Then some indexing in the form of

negative = (line[0] == "-")

All in all I'm actually a bit surprised this isn't more common.

Cheers,
Johannes


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-06-02 Thread Johannes Bauer
On 02.06.2014 18:21, Roy Smith wrote:

> Are we talking Tolkien trolls, Pratchett trolls, Rowling trolls, D&D 
> trolls, WoW trolls, or what?  Details matter.

Monkey Island trolls, obviously.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-05-31 Thread Johannes Bauer
On 31.05.2014 12:07, Steve Hayes wrote:

> So I bought this book, and decided that whatever version of Python it deals
> with, that's the one I will download and use.

This sounds like remarkably bad advice. That's like saying "I bought a
can of motor oil at my department store, and whatever engine it is good
for, that's the car I'll buy and pour it into!"

> The book is:
> 
> Cunningham, Katie. 2014. Teach yourself Python in 24 hours.
>Indianapolis: Sams.
>ISBN: 978-0-672-33687-4
>For Python 2.7.5
> 
> I'll leave Python 3.2 on my computer, but 2.7.5 will be the one I'm installing
> now. Even if I could *find* a book that deals with Python 3.x, couldn't afford
> to but yet another Python book. 

Lucky for you 2.7.5 isn't all that different from Py3 and most of it
will apply. You'll be missing out on a bunch of cool features (arbitrary
precision ints, int division operator, real Unicode support) but that's
no big deal.

Regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3 is killing Python

2014-05-28 Thread Johannes Bauer
On 28.05.2014 21:23, Larry Martell wrote:
> Somthing I came across in my travels through the ether:
> 
> https://medium.com/@deliciousrobots/5d2ad703365d/

Sub-headline "The Python community should fork Python 2". Which could
also read "Someone else should REALLY fork Py2 because I'm mad about Py3
yet too lazy to fork Py2 myself".

I wish all these ridiculous dumb whiners would finally shut up and fork
Python away. That would be win-win: They could use their fork of 2.4
forever and ever, maybe fork 1.4 too while they're at it. Then maintain
it. Above all: They would complain to each other and stay away from the
mailing lists of people who actually *embrace* progress and who
appreciate the wonderful features Py3 has given us.

What a wonderful world it would be. So, I agree with the above blogpost.
Some lazy blogwriting bum should fork Py2!

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python is horribly slow compared to bash!!

2014-05-26 Thread Johannes Bauer
On 22.05.2014 15:43, wxjmfa...@gmail.com wrote:

> I can take the same application and replace 'z' by ..., and
> ... No, I do not win :-( . Python fails.

That's nothing. I can make an application a THOUSAND times slower by
changing the constant 1 to a 2. Python is such utter garbage!

import time

def myfunction(constant):
    if constant == 1:
        time.sleep(1)
    else:
        time.sleep(1000)

constant = 1
myfunction(constant)

Now let's all code Itanium assembler, yes?

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: PEP 8 : Maximum line Length :

2014-05-15 Thread Johannes Bauer
On 15.05.2014 04:43, Ben Finney wrote:
> Rustom Mody  writes:
> 
>> Until then may we relegate '79' to quaint historical curiosities
> 
> Not until the general capacity of human cognition advances to make
> longer lines easier to read.

I find it surprising how you can make such a claim about the whole of
humanity (!) without even feeling the need to have a pro forma study to
back it up. Also, not everything that applies to prose also equally
applies to code.

Personally I find overly narrow code (80 cols) to be much *harder* to
read than code that is 100 cols wide. Keep in mind that even if the
break is at 100 cols, lines will rarely exceed that limit. And even when
they do, the details towards the end of a long line are usually the
least important ones for *understanding* the code.

I don't know why anyone would force a display issue onto everyone. It
implies the arrogant stance that every human being has the exact same
way of reading and writing code. Everyone can configure her editor to
what she wants (including line breaks and such).

If people were to force pixel sizes of editor fonts, everyone would
immediately recognize what a stupid idea this would be. Even though I
could claim that the vertical formatting is all messed up when you don't
display my code with the correct font size! Ridiculous, right?

Regards,
Johannes


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Everything you did not want to know about Unicode in Python 3

2014-05-13 Thread Johannes Bauer
On 13.05.2014 10:25, Marko Rauhamaa wrote:

> Based on my background (network and system programming), I'm a bit
> suspicious of strings, that is, text. For example, is the stuff that
> goes to syslog bytes or text? Does an XML file contain bytes or
> (encoded) text? The answers are not obvious to me. Modern computing is
> full of ASCII-esque binary communication standards and formats.

Traditional Unix programs (syslog for example) are notorious for being
unclear, ambiguous and/or ignorant of character encodings altogether.
And this works, unfortunately, most of the time, because many encodings
share a common subset. If they didn't, the problems would be VERY
apparent and people would be forced to handle the issue less sloppily.

Which is the route that Py3 chose. Don't be sloppy; make a clear
distinction between "text" (which is handled naturally as strings) and
its respective encoding.

The only people who are angered by this now are people who always treated
encodings sloppily and it "just worked". Well, there's a good chance it
has worked by pure chance so far. It's a good thing that Python is now
stricter about this, as it gives developers *guarantees* about what they
can and cannot do with the text datatype without having to deal with
encoding issues all over the place. There's just one place: the interface
where text is read or written, just as it should be.
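As a rough sketch of that "just one place" idea (file names and encoding
are of course made up):

# Decode exactly once, where bytes enter the program ...
with open("report.txt", "rb") as f:
    text = f.read().decode("utf-8")

# ... work on real str objects in between ...
text = text.upper().replace("FOO", "BAR")

# ... and encode exactly once, where bytes leave the program again.
with open("report-out.txt", "wb") as f:
    f.write(text.encode("utf-8"))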

Regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Everything you did not want to know about Unicode in Python 3

2014-05-13 Thread Johannes Bauer
On 13.05.2014 10:38, Chris Angelico wrote:

>> Python 2's ambiguity allows me not to answer the tough philosophical
>> questions. I'm not saying it's necessarily a good thing, but it has its
>> benefits.
> 
> It's not a good thing. It means that you have the convenience of
> pretending there's no problem, which means you don't notice trouble
> until something happens... and then, in all probability, your app is
> in production and you have no idea why stuff went wrong.

Exactly. With Py2 "strings" you never know what encoding they are in, or
whether they have already been decoded, or something like that. And it's
entirely possible to mix already-decoded strings with other, still-encoded
strings. What a mess!

All these issues are avoided by Py3. There is a very clear distinction
between strings and string representation (data bytes), which is
beautiful. Accidental mixing is not possible. And you have some things
*guaranteed* for the string type which aren't guaranteed for the bytes
type (for example when doing string manipulation).
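A tiny illustration of the "accidental mixing is not possible" point:

s = "Grüße"            # text (str)
b = s.encode("utf-8")  # its representation (bytes)

print(s + "!")    # fine: str + str
print(b + b"!")   # fine: bytes + bytes
print(s + b)      # TypeError in Python 3 -- str and bytes never mix silently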

Regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Everything you did not want to know about Unicode in Python 3

2014-05-13 Thread Johannes Bauer
On 13.05.2014 03:18, Steven D'Aprano wrote:

> Armin Ronacher is an extremely experienced and knowledgeable Python 
> developer, and a Python core developer. He might be wrong, but he's not 
> *obviously* wrong.

He's correct about file name encodings. Which can be fixed really easily
without messing everything up (a binary variant of sys.argv, open()
accepting binary filenames). But his suggestion that Go would be superior:

> Which uses an even simpler model than Python 2: everything is a byte string. 
> The assumed encoding is UTF-8. End of the story.

Is just a horrible idea. An obviously horrible idea, too.

Having dealt with the UTF-8 problems on Python2 I can safely say that I
never, never ever want to go back to that freaky hell. If I deal with
strings, I want to be able to sanely manipulate them and I want to be
sure that after manipulation they're still valid strings. Manipulating
the bytes representation of unicode data just doesn't work.
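A minimal example of what goes wrong when you manipulate the encoded
representation instead of the text:

text = "Grüße"
data = text.encode("utf-8")      # b'Gr\xc3\xbc\xc3\x9fe'

print(text[:3])                  # 'Grü' -- slicing the str respects characters
print(data[:3])                  # b'Gr\xc3' -- slicing the bytes cuts 'ü' in half
print(data[:3].decode("utf-8"))  # UnicodeDecodeError: truncated multi-byte sequence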

And I'm very very glad that some people felt the same way and
implemented a sane, consistent way of dealing with Unicode in Python3.
It's one of the reasons why I switched to Py3 very early and I love it.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: checking if two things do not equal None

2014-03-29 Thread Johannes Bauer
On 29.03.2014 22:55, Johannes Bauer wrote:

>>> if (a is not None) or (b is not None):
> 
> Yes, probably. I liked the original, too. If I were writing the code,
> I'd probably try to aim to invert the condition though and simply do
> 
> if (a is None) and (b is None)
> 
> Which is pretty easy to understand for even a rookie programmer.

Let me expand on that thought with one or two more sentences: Although it
may seem really trivial, inversions ("not") in my opinion can really make
code unreadable. One thing that I regularly see when peer-reviewing code
is something like:

if not feature_disabled:

or one that I've seen in-field (modulo the programming language and the
variable names):

if (not no_delayed_commit) and (not data_unchanged):

instead of:

if immediate_commit and data_changed:

Enough of my two cents for today :-)
Cheers,
Johannes


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: checking if two things do not equal None

2014-03-29 Thread Johannes Bauer
On 29.03.2014 22:07, Roy Smith wrote:

> I agree with that.  But
> 
>> if (a, b) != (None, None):
> 
> seems pretty straight-forward to me too.  In fact, if anything, it seems 
> easier to understand than
> 
>> if (a is not None) or (b is not None):

Yes, probably. I liked the original, too. If I were writing the code,
I'd probably try to aim to invert the condition though and simply do

if (a is None) and (b is None)

Which is pretty easy to understand for even a rookie programmer.

> I certainly agree that things like
> 
>> if a is not b is not None: ...
> 
> belong in an obfuscated coding contest.  Code gets read a lot more often 
> than it get written.  Make it dead-ass simple to understand, and future 
> generations of programmers who inherit your code will thank you for it.

Absolutely.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: checking if two things do not equal None

2014-03-29 Thread Johannes Bauer
On 29.03.2014 20:05, Steven D'Aprano wrote:
> On Sat, 29 Mar 2014 11:56:50 -0700, contact.trigon wrote:
> 
>> if (a, b) != (None, None):
>> or
>> if a != None != b:
>>
>> Preference? Pros? Cons? Alternatives?
>
> if not (a is b is None): ...
> 
> Or if you prefer:
> 
> if a is not b is not None: ...

Is this an obfuscated coding contest? Why do you opt for a solution that
one has to at least think 2 seconds about when the simplest solution:

if (a is not None) or (b is not None):

is immediately understandable by everyone?

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: YADTR (Yet Another DateTime Rant)

2014-03-27 Thread Johannes Bauer
On 27.03.2014 11:44, Chris Angelico wrote:

> It's not "equally braindead", it follows a simple and logical rule:
> Only the day portion is negative.

The more I think about it, the sillier this rule seems to me.

A timedelta is a *whole* object. Either the whole delta is negative or
it is not. It doesn't make sense to split it up into two parts and
arbitrarily define one to be always nonnegative and the other to have no
restrictions.

The only logical reasoning behind this could be to argue that "-2 days"
makes more sense than "-15 minutes". Which it doesn't.

Worse: Negating a timedelta (which I would argue is a fairly common
operation) makes the whole thing unstable representation-wise:

>>> str(datetime.timedelta(2, 30))
'2 days, 0:00:30'
>>> str(-datetime.timedelta(2, 30))
'-3 days, 23:59:30'

And it makes it extremely error-prone to the reader:

>>> str(datetime.timedelta(0, -1))
'-1 day, 23:59:59'

This looks MUCH more like "almost two days ago" than

'-00:00:01'

does.

In any case, the str() function is *only* about the representation that
can be read by humans. Therefore its highest priority should be to
output something that can, in fact, be easily parsed by humans. The
current format is nothing of the sort.

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: YADTR (Yet Another DateTime Rant)

2014-03-27 Thread Johannes Bauer
On 27.03.2014 11:44, Chris Angelico wrote:
> On Thu, Mar 27, 2014 at 9:22 PM, Johannes Bauer  wrote:
>> Besides, there's an infinite amount of (braindead) timedelta string
>> representations. For your -30 hours, it is perfectly legal to say
>>
>> 123 days, -2982 hours
>>
>> Yet Python doesn't (but chooses an equally braindead representation).
> 
> It's not "equally braindead", it follows a simple and logical rule:
> Only the day portion is negative. That might not be perfectly suited
> to all situations, but it does mean that adding and subtracting whole
> days will never change the representation of the time. That's a
> reasonable promise.

Why would the stability of the *string* output of the time
representation be of any interest whatsoever? Do you have any even
halfway reasonable use case for that?

> What you propose is completely arbitrary, 

No. What I propose is that for t > 0 this holds:

"-" + str(t) == str(-t)

Which is far from arbitrary. It follows "natural" rules of inverting
something (-abs(x) == -x), and it yields a (truly) human-readable form
of showing a timedelta.
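To sketch what I mean (format_timedelta is a made-up helper, nothing from
the stdlib; microseconds are ignored for brevity):

import datetime

def format_timedelta(t):
    # Satisfies: "-" + format_timedelta(t) == format_timedelta(-t) for t > 0
    sign = "-" if t < datetime.timedelta(0) else ""
    t = abs(t)
    hours, rest = divmod(t.seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return "%s%d days, %d:%02d:%02d" % (sign, t.days, hours, minutes, seconds)

print(format_timedelta(datetime.timedelta(2, 30)))    # 2 days, 0:00:30
print(format_timedelta(-datetime.timedelta(2, 30)))   # -2 days, 0:00:30
print(format_timedelta(datetime.timedelta(0, -1)))    # -0 days, 0:00:01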

Please don't mix this up with my own, very obviously braindead proposal.
In case you didn't notice, that was irony at work. The word "braindead" I
chose to describe the format should have tipped you off to that.

>> Where can I submit a PEP that proposes that all timedelta strings are
>> fixed at 123 days (for a positive, non-prime number of seconds) and fixed
>> at -234 days (for any negative or positive prime number of seconds)?
> 
> Doesn't need a PEP. Just subclass it or monkey-patch it and use it as
> you will. :)

Nonono, you misunderstand: I want everyone to suffer under the braindead
representation, just as it is now!

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: YADTR (Yet Another DateTime Rant)

2014-03-27 Thread Johannes Bauer
On 27.03.2014 01:16, Steven D'Aprano wrote:

> py> divmod(30, 24)
> (1, 6)
> 
> That makes perfect intuitive sense: 30 hours is 1 day with 6 hours 
> remaining. In human-speak, we'll say that regardless of whether the 
> timedelta is positive or negative: we'll say "1 day and 6 hours from now" 
> or "1 day and 6 hours ago". But when we specify the sign:
> 
> py> divmod(-30, 24)
> (-2, 18)
> 
> If an event happened 30 hours ago, it is correct to say that it occurred 
> "18 hours after 2 days ago", but who talks that way?

Well, no matter what a timedelta's internal representation is, and
whether or not it uses modular division (it probably does, I agree), this
shouldn't affect the display at all. Internally it can use any arbitrary
representation, but for the external representation I think a very sane
requirement would be that for a given timedelta t with t > 0, its
string representation str(t) satisfies:

"-" + str(t) == str(-t)

Besides, there's an infinite amount of (braindead) timedelta string
representations. For your -30 hours, it is perfectly legal to say

123 days, -2982 hours

Yet Python doesn't (but chooses an equally braindead representation).

Where can I submit a PEP that proposes that all timedelta strings are
fixed at 123 days (for a positive, non-prime number of seconds) and fixed
at -234 days (for any negative or positive prime number of seconds)?

Cheers,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: YADTR (Yet Another DateTime Rant)

2014-03-27 Thread Johannes Bauer
On 26.03.2014 10:53, Jean-Michel Pichavant wrote:

> Note : I don't see what's wrong in your example, however I have the feeling 
> the term "stupiditie" is a little bit strong ;)

The problem is that for a given timedelta t with t > 0 it is intuitive
to think that its string representation str(t) would follow the rule

"-" + str(t) == str(-t)

But it doesn't.

> -- IMPORTANT NOTICE: 
> 
> The contents of this email and any attachments are confidential and may also 
> be privileged. If you are not the intended recipient, please notify the 
> sender immediately and do not disclose the contents to any other person, use 
> it for any purpose, or store or copy the information in any medium. Thank you.

I hereby inform you that I - probably by accident - received your very
confidential and privileged message! All hardware has already been
thrown into the incinerator; is that good enough for you? Or do I have
to fulfill some ISO-9001 certified privileged mail destruction
procedure? Please advise on how I shall continue my life.

Cheers,
Johannes

IMPORTANT NOTICE:

If you write mails to newsgroups with a ridiculous confidentiality
footer, you are obliged under section 14.9a of the Imbecile Act to take
a blunt object of no less than 12 kg weight and hit yourself repeatedly
in the head until the email footer disappears. Failure to comply might
lead to criminal prosecution and/or permanent constipation! Thank you.


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Possible bug with stability of mimetypes.guess_* function output

2014-02-07 Thread Johannes Bauer
On 07.02.2014 20:09, Asaf Las wrote:

> it might be you could try to query using sequence below : 
> 
> import mimetypes
> mimetypes.init()
> mimetypes.guess_extension("text/html")
> 
> i got only 'htm' for 5 consequitive attempts

Doesn't change anything. With this:

#!/usr/bin/python3
import mimetypes
mimetypes.init()
print(mimetypes.guess_extension("application/msword"))

And a call like this:

$ for i in `seq 100`; do ./x.py ; done | sort | uniq -c

I get

 35 .doc
 24 .dot
 41 .wiz
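For what it's worth, a workaround that should give deterministic results
(assuming the instability really comes from hash randomization) is to
sort the candidates yourself instead of relying on guess_extension():

import mimetypes

mimetypes.init()
candidates = mimetypes.guess_all_extensions("application/msword")
extension = sorted(candidates)[0] if candidates else None
print(extension)   # always the same pick on a given installation (here '.doc')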

Regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Possible bug with stability of mimetypes.guess_* function output

2014-02-07 Thread Johannes Bauer
Hi group,

I'm using Python 3.3.2+ (default, Oct  9 2013, 14:50:09) [GCC 4.8.1] on
linux and have found what is very peculiar behavior at best and a bug at
worst. It regards the mimetypes module and in particular the
guess_all_extensions and guess_extension functions.

I've found that these do not return stable output. When running the
following commands, it returns one of:

$ python3 -c 'import mimetypes;
print(mimetypes.guess_all_extensions("text/html"),
mimetypes.guess_extension("text/html"))'
['.htm', '.html', '.shtml'] .htm

$ python3 -c 'import mimetypes;
print(mimetypes.guess_all_extensions("text/html"),
mimetypes.guess_extension("text/html"))'
['.html', '.htm', '.shtml'] .html

So guess_extension(x) seems to always return guess_all_extensions(x)[0].

Curiously, "shtml" is never the first element. The other two are mixed
with a probability of around 50% which leads me to believe they're
internally managed as a set and are therefore affected by the
(relatively new) nondeterministic hashing function initialization.

I don't know if stable output is guaranteed for these functions, but it
sure would be nice. Messes up a whole bunch of things otherwise :-/

Please let me know if this is a bug or expected behavior.
Best regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Blog "about python 3"

2014-01-05 Thread Johannes Bauer
On 31.12.2013 10:53, Steven D'Aprano wrote:
> Mark Lawrence wrote:
> 
>> http://blog.startifact.com/posts/alex-gaynor-on-python-3.html.
> 
> I quote:
> 
> "...perhaps a brave group of volunteers will stand up and fork Python 2, and
> take the incremental steps forward. This will have to remain just an idle
> suggestion, as I'm not volunteering myself."
> 
> I expect that as excuses for not migrating get fewer, and the deadline for
> Python 2.7 end-of-life starts to loom closer, more and more haters^W
> Concerned People will whine about the lack of version 2.8 and ask for
> *somebody else* to fork Python.
> 
> I find it, hmmm, interesting, that so many of these Concerned People who say
> that they're worried about splitting the Python community[1] end up
> suggesting that we *split the community* into those who have moved forward
> to Python 3 and those who won't.

Exactly. I don't know what exactly their problem is. I've pushed the
migration of *large* projects at work to Python3 when support was pretty
early and it really wasn't a huge deal.

Specifically because I love pretty much every single aspect that Python3
introduced. The codec support is so good that I've never seen anything
like it in any other programming language, and then there are tons of
beautiful changes (true vs. integer division, functools.lru_cache, print(),
datetime.timedelta.total_seconds(), int.bit_length(), bytes/bytearray).
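Just to illustrate two of those (quick, made-up snippets):

import functools

@functools.lru_cache(maxsize=None)
def fib(n):
    # Memoized naive Fibonacci; without the cache this would take exponential time.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))             # 354224848179261915075, computed instantly
print((1024).bit_length())  # 11 -- number of bits needed to represent 1024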

Regards,
Joe

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Programming puzzle with boolean circuits

2013-12-11 Thread Johannes Bauer
On 09.12.2013 14:25, Chris Angelico wrote:

>> I found this puzzle again and was thinking about: How would I code a
>> brute-force approach to this problem in Python?
> 
> Ooooh interesting!

Ha, I thought so too :-)

> Well, here's a start: There's no value in combining the same value in
> an AND or an OR, ergo every gate you add must bring together two
> different values.
> 
> To start with, you have three values (the three inputs). Every time
> you combine two of them, with either type of gate, you create a new
> value. You can also combine a single value with a NOT to create its
> inverse, but only if you have done so no more than once.
> 
> The goal is to produce something which is provably the opposite of
> each of the three inputs.
> 
> I'm not sure if this helps or not, but one thing I learned from
> geometry is that setting down everything you know and need to know is
> a good basis for the search!

Absolutely.

> 
> The hardest part, so far, is proving a result. The algorithm that's
> coming to mind is this:
> 
> def find_solution(inputs, not_count):
>     # TODO: First, see if inputs contains three values that are the inverses
>     # of the three values i1,i2,i3. If they are, throw something, that's
>     # probably the easiest way to unwind the stack.
>     if not_count < 2:
>         for val in inputs:
>             find_solution(inputs + [not val], not_count + 1)
>     for val1 in inputs:
>         for val2 in inputs:
>             if val1 is not val2:
>                 find_solution(inputs + [val1 and val2], not_count)
>                 find_solution(inputs + [val1 or val2], not_count)
> 
> find_solution([i1, i2, i3], 0)

I understand your approach, it has given me some ideas too. Thanks for this!

> So, here's a crazy idea: Make i1, i2, i3 into objects of a type with
> an __eq__ that actually does the verification. Schrodinger's Objects:
> they might be True, might be False, and until you call __eq__, they're
> in both states. This probably isn't the best way, but I think it's the
> most fun!

Haha, it surely is a very cool idea!

Thanks for the ideas and your very cool approach. I'll try to tackle it
myself (I think I have a good point to start) and will post the code
once I'm finished.

Best regards,
Joe

-- 
https://mail.python.org/mailman/listinfo/python-list


Programming puzzle with boolean circuits

2013-12-09 Thread Johannes Bauer
Hi group,

it's somewhat OT here, but I have a puzzle to which I would like a
solution -- but I'm unsure how I should tackle the problem with Python.
But it's a fun puzzle, so maybe it'll be appreciated here.

The question is: How do you design a boolean circuit that inverts three
inputs using at most 2 NOT gates, but as many AND or OR gates as you
like? IOW: Build three inverters by using only two inverters (and an
unlimited number of ANDs/ORs).

Surprisingly, this is possible (and I even know the solution, but won't
give it away just yet).

I found this puzzle again and was thinking about: How would I code a
brute-force approach to this problem in Python? And to my surprise, it
isn't as easy as I thought. So I'm looking for some advice from you guys
(it never hurts to improve one's coding skills).
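The building block I'd probably start from (just a sketch, not a full
search): represent every signal by its truth table over the three inputs,
encoded as an 8-bit integer, so that AND, OR and NOT become plain bit
operations and checking a candidate circuit is a simple comparison:

MASK = 0xFF            # 8 input combinations of (a, b, c) -> 8 truth-table bits

A = 0b11110000         # bit k = value of the signal for inputs k = (a << 2) | (b << 1) | c
B = 0b11001100
C = 0b10101010

def AND(x, y):
    return x & y

def OR(x, y):
    return x | y

def NOT(x):
    return ~x & MASK   # invert, keep only the 8 relevant bits

def is_solution(signals):
    # A circuit solves the puzzle if it produces all three inverted inputs.
    return {NOT(A), NOT(B), NOT(C)} <= set(signals)

print(is_solution([NOT(A), NOT(B), NOT(C)]))   # True, but this used three NOTs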

Best regards,
Johannes

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python3 doc, operator reflection

2013-10-28 Thread Johannes Bauer
On 28.10.2013 13:23, Chris Angelico wrote:
> On Mon, Oct 28, 2013 at 11:00 PM, Johannes Bauer  wrote:
>>> There are no swapped-argument versions of these methods (to be used when 
>>> the left argument does not support the operation but the right argument 
>>> does); rather, __lt__() and __gt__() are each other’s reflection, __le__() 
>>> and __ge__() are each other’s reflection, and __eq__() and __ne__() are 
>>> their own reflection.
>>
>> But shouldn't __lt__ be the reflection of __ge__ and __gt__ the
>> reflection of __le__?
> 
> lt is the negation of ge, but it's the reflection of gt. Consider this:
> 
> 1 < 2
> 2 > 1
> 
> If Python can't ask 1 if it's less than 2, it'll ask 2 if it's greater than 1.

Ah, I see. Thanks for clearing that up!
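A little experiment that makes the reflection visible (Num is just a
made-up class):

class Num:
    def __init__(self, value):
        self.value = value
    def __gt__(self, other):
        print("asked", self.value, "whether it is greater than", other)
        return self.value > other

# int.__lt__(1, Num(2)) returns NotImplemented, so Python asks the
# reflected question Num(2).__gt__(1) instead.
print(1 < Num(2))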

Best regards,
Joe

-- 
https://mail.python.org/mailman/listinfo/python-list

