On Sun, Oct 16, 2016 at 05:02:49PM +0200, Mikhail V wrote:
> In this discussion yes, but layout aspects can also
> be improved, and I suppose the special purpose of a
> language does not always dictate the layout of
> code; it is up to you, who can define that also.
> And glyphs are not very narrow
Mikhail V wrote:
Those things cannot be easily measured, if at all.
If you can't measure something, you can't be sure
it exists at all.
> In my case I am looking at what I've achieved
> during years of my work on it, and indeed there are some
> interesting things there.
Have you *measured*
On 16 October 2016 at 04:10, Steve Dower wrote:
>> I posted output with Python2 and Windows 7
>> BTW, in Windows 10 'print' won't work in the cmd console at all by default
>> with Unicode, but that's another story; let us not go into that.
>> I think you get my idea right, it
On 16 October 2016 at 17:16, Todd wrote:
>Even if you were right that your approach is somehow inherently easier,
>it is flat-out wrong that other approaches lead to "brain impairment".
>On the contrary, it is well-established that challenging
>the brain prevents or at least
On Thu, Oct 13, 2016 at 1:46 AM, Mikhail V wrote:
> Practically all this notation does is reduce the time
> before you as a programmer
> develop visual and brain impairments.
>
>
Even if you were right that your approach is somehow inherently easier, it
is flat-out wrong
On 16 October 2016 at 02:58, Greg Ewing wrote:
>> even if it is assembler or whatever,
>> it can be made readable without much effort.
>
>
> You seem to be focused on a very narrow aspect of
> readability, i.e. fine details of individual character
> glyphs. That's
"python-ideas@python.org" <python-ideas@python.org>
Subject: Re: [Python-ideas] Proposal for default character representation
Forgot to reply to all, duping my message...
On 12 October 2016 at 23:48, M.-A. Lemburg <m...@egenix.com> wrote:
> Hmm, in Python3, I get:
>
>>
On Sun, Oct 16, 2016 at 12:06 AM, Mikhail V wrote:
> But I can bravely claim that it is better than *any*
> hex notation; it just follows from what I have here
> on paper on my table, namely that it is physically
> impossible to make up a highly effective glyph system
> of
On 14 October 2016 at 11:36, Greg Ewing wrote:
>but bash wasn't designed for that.
>(The fact that some people use it that way says more
>about their dogged persistence in the face of
>adversity than it does about bash.)
I cannot judge what bash is good for, since
On 14.10.2016 10:26, Serhiy Storchaka wrote:
> On 13.10.16 17:50, Chris Angelico wrote:
>> Solution: Abolish most of the control characters. Let's define a brand
>> new character encoding with no "alphabetical garbage". These
>> characters will be sufficient for everyone:
>>
>> * [2] Formatting
Steven D'Aprano wrote:
That's because some sequence of characters
is being wrongly interpreted as an emoticon by the client software.
The only thing wrong here is that the client software
is trying to interpret the emoticons.
Emoticons are for *humans* to interpret, not software.
Subtlety and
On Fri, Oct 14, 2016 at 07:56:29AM -0400, Random832 wrote:
> On Fri, Oct 14, 2016, at 01:54, Steven D'Aprano wrote:
> > Good luck with that last one. Even if you could convince the Chinese and
> > Japanese to swap to ASCII, I'd like to see you pry the emoji out of the
> > young folk's phones.
>
On Fri, Oct 14, 2016 at 8:36 PM, Greg Ewing wrote:
>> I know people who can read bash scripts
>> fast, but would you claim that bash syntax can be
>> any good compared to Python syntax?
>
>
> For the things that bash was designed to be good for,
> yes, it can. Python
Mikhail V wrote:
if "\u1230" <= c <= "\u123f":
and:
o = ord(c)
if 100 <= o <= 150:
Note that, if need be, you could also write that as
if 0x64 <= o <= 0x96:
So yours is valid code, but for me it's freaky,
and surely I stick to the second variant.
The thing is, where did you get
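A quick, illustrative sketch (bounds taken from the snippets above) confirming that the decimal and hex range tests are the same check:

```python
# The decimal bounds 100..150 and the hex bounds 0x64..0x96 quoted
# above denote exactly the same range of code points.
def in_range_dec(c):
    return 100 <= ord(c) <= 150

def in_range_hex(c):
    return 0x64 <= ord(c) <= 0x96

assert 0x64 == 100 and 0x96 == 150
assert all(in_range_dec(chr(i)) == in_range_hex(chr(i)) for i in range(256))
```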
On 13.10.16 17:50, Chris Angelico wrote:
Solution: Abolish most of the control characters. Let's define a brand
new character encoding with no "alphabetical garbage". These
characters will be sufficient for everyone:
* [2] Formatting characters: space, newline. Everything else can go.
* [8]
On Fri, Oct 14, 2016 at 7:18 PM, Cory Benfield wrote:
> The many glyphs that exist for writing various human languages are not
> inefficiency to be optimised away. Further, I should note that most places do
> not legislate about what character sets are acceptable to
> On 14 Oct 2016, at 08:53, Mikhail V wrote:
>
> What keeps people from using the same characters?
> I will tell you what - it is local law. If you go to school you *have* to
> write in what is prescribed by big daddy. If you're in Europe or America, you
> are more lucky.
On Fri, Oct 14, 2016 at 6:53 PM, Mikhail V wrote:
> On 13 October 2016 at 16:50, Chris Angelico wrote:
>> On Fri, Oct 14, 2016 at 1:25 AM, Steven D'Aprano wrote:
>>> On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
and
On 13 October 2016 at 16:50, Chris Angelico wrote:
> On Fri, Oct 14, 2016 at 1:25 AM, Steven D'Aprano wrote:
>> On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
>>> and in long perspective when the world's alphabetical garbage will
>>> disappear,
On Fri, Oct 14, 2016 at 08:05:40AM +0200, Mikhail V wrote:
> Any critics on it? Besides not following the unicode consortium.
Besides the other remarks on "tradition", I think this is where a big
problem lies: We should not deviate from a common standard (without
very good cause).
There are
On 13 October 2016 at 12:05, Cory Benfield wrote:
>
> integer & 0x00FF # Hex
> integer & 16777215 # Decimal
> integer & 0o # Octal
> integer & 0b # Binary
>
> The octal representation is infuriating because one octal digit refers to
>
On Fri, Oct 14, 2016 at 1:54 AM, Steven D'Aprano wrote:
>> and:
>>
>> o = ord(c)
>> if 100 <= o <= 150:
>
> Which is clearly not the same thing, and better written as:
>
> if "d" <= c <= "\x96":
> ...
Or, if you really want to use ord(), you can use hex literals:
o =
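The snippet breaks off here; presumably the suggestion continues along these lines (a sketch, with an illustrative test character):

```python
# Hedged sketch: ord() combined with hex literals, equivalent to the
# string comparison  if "d" <= c <= "\x96":  quoted above.
c = "e"                 # illustrative character under test
o = ord(c)
if 0x64 <= o <= 0x96:   # 0x64 == ord("d"), 0x96 == 150
    print("in range")   # prints, since ord("e") == 101
```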
On 13 October 2016 at 10:18, M.-A. Lemburg wrote:
> I suppose you did not intend everyone to have to write
> \u010 just to get a newline code point to avoid the
> ambiguity.
Ok, there are different usage cases.
So in short without going into detail,
for example if I need to
On Fri, Oct 14, 2016 at 07:21:48AM +0200, Mikhail V wrote:
> I'll explain what I mean with an example.
> This is an example which I would make to
> support my proposal. Compare:
>
> if "\u1230" <= c <= "\u123f":
For an English-speaker writing that, I'd recommend:
if "\N{ETHIOPIC SYLLABLE SA}"
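Completed along the lines Steven suggests, the comparison might read as follows (a sketch; the upper bound is left as \u123f since its name is not given in the message):

```python
# \N{...} names the lower bound explicitly; U+1230 is the code point
# used in the original \u1230 example.
c = "\u1235"
if "\N{ETHIOPIC SYLLABLE SA}" <= c <= "\u123f":
    print("Boo!")

assert "\N{ETHIOPIC SYLLABLE SA}" == "\u1230"
```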
On 10/13/16 2:42 AM, Mikhail V wrote:
> On 13 October 2016 at 08:02, Greg Ewing wrote:
>> Mikhail V wrote:
>>> Consider unicode table as an array with glyphs.
>>
>> You mean like this one?
>>
>> http://unicode-table.com/en/
>>
>> Unless I've miscounted, that one has
On 10/12/2016 07:13 PM, Mikhail V wrote:
On 12 October 2016 at 23:50, Thomas Nyberg wrote:
Since when was decimal notation "standard"?
Depends on what planet you live on. I live on planet Earth. And you?
If you mean that decimal notation is the standard used for
On Fri, Oct 14, 2016 at 1:25 AM, Steven D'Aprano wrote:
> On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
>> and in long perspective when the world's alphabetical garbage will
>> disappear,
> Talking about "alphabetical garbage" like that
On Thu, Oct 13, 2016 at 03:56:59AM +0200, Mikhail V wrote:
> > How many decimal digits would you use to denote a single character?
>
> for text, three decimal digits would be enough for me personally,
Well, if it's enough for you, why would anyone need more?
> and in long perspective when the
On Thu, Oct 13, 2016 at 9:05 PM, Cory Benfield wrote:
> Binary notation seems like the solution, but note the above case: the only
> way to work out how many bits are being masked out is to count them, and
> there can be quite a lot. IIRC there’s some new syntax coming for
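The new syntax alluded to is presumably PEP 515, underscores in numeric literals (landed in Python 3.6), which lets the masked-out bits be grouped in nibbles and counted at a glance:

```python
# PEP 515 (Python 3.6+): underscores may group digits in numeric
# literals, so the width of a binary mask can be read off per nibble.
mask = 0b1111_1111_0000_0000
assert mask == 0xFF00           # same mask, hex spelling
assert mask.bit_length() == 16
```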
Mikhail V wrote:
Eee, how would I find if the character lies in a certain range?
>>> c = "\u1235"
>>> if "\u1230" <= c <= "\u123f":
... print("Boo!")
...
Boo!
--
Greg
___
Python-ideas mailing list
Python-ideas@python.org
Mikhail V wrote:
Ok, but if I write string filtering in Python, for example, then
obviously I use decimal everywhere to compare index ranges, etc.,
so what is the use for me of that label? Just redundant
conversions back and forth.
I'm not sure what you mean by that. If by "index ranges"
Mikhail V wrote:
I am not against base-16 itself in the first place,
but rather against the character set, which is simply visually
inconsistent and not readable.
Now you're talking about inventing new characters, or
at least new glyphs for existing ones, and persuading
everyone to use them.
Mikhail V wrote:
Did you see much code written with hex literals?
From /usr/include/sys/fcntl.h:
/*
* File status flags: these are used by open(2), fcntl(2).
* They are also used (indirectly) in the kernel file structure f_flags,
* which is a superset of the open/fcntl flags. Open flags
On 13.10.2016 01:06, Mikhail V wrote:
> On 12 October 2016 at 23:48, M.-A. Lemburg wrote:
>> The hex notation for \u is a standard also used in many other
>> programming languages, it's also easier to parse, so I don't
>> think we should change this default.
>
> In
On 13 October 2016 at 08:02, Greg Ewing wrote:
> Mikhail V wrote:
>>
>> Consider unicode table as an array with glyphs.
>
>
> You mean like this one?
>
> http://unicode-table.com/en/
>
> Unless I've miscounted, that one has the characters
> arranged in rows of 16, so
On 13 October 2016 at 04:49, Emanuel Barry <vgr...@live.ca> wrote:
>> From: Mikhail V
>> Sent: Wednesday, October 12, 2016 9:57 PM
>> Subject: Re: [Python-ideas] Proposal for default character representation
>
> Hello, and welcome to Python-ideas, where only a small
Mikhail V wrote:
And decimal is objectively way more readable than the standard hex
character set, regardless of how strong your habits are.
That depends on what you're trying to read from it. I can
look at a hex number and instantly get a mental picture
of the bit pattern it represents. I can't
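That mental picture works because every hex digit maps to exactly four bits regardless of position, e.g.:

```python
# Each hex digit corresponds to one 4-bit group, so a hex literal can
# be read off as a bit pattern digit by digit.
n = 0xA5
assert format(n, "08b") == "10100101"   # A -> 1010, 5 -> 0101
```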
On 13 October 2016 at 04:18, Brendan Barnwell wrote:
> On 2016-10-12 18:56, Mikhail V wrote:
>>
>> Please don't mix readability and personal habit, which previous
>> repliers seem to do as well. Those two things have nothing
>> to do with each other.
>
>
> You
> From: Mikhail V
> Sent: Wednesday, October 12, 2016 9:57 PM
> Subject: Re: [Python-ideas] Proposal for default character representation
Hello, and welcome to Python-ideas, where only a small portion of ideas go
further, and where most newcomers that wish to improve the languag
On Oct 12, 2016 9:25 PM, "Chris Angelico" wrote:
>
> On Thu, Oct 13, 2016 at 12:56 PM, Mikhail V wrote:
> > But as said, I find this Unicode only some temporary happening;
> > it will become history at some point and be
> > used only to study extinct
On Oct 12, 2016 4:33 PM, "Mikhail V" wrote:
>
> Hello all,
>
> *snip*
>
> PROPOSAL:
> 1. Remove all hex notation from printing functions, typing,
> documention.
> So for printing functions leave the hex as an "option",
> for example for those who feel the need for hex
On Thu, Oct 13, 2016 at 12:56 PM, Mikhail V wrote:
> But as said, I find this Unicode only some temporary happening;
> it will become history at some point and be
> used only to study extinct glyphs.
And what will we be using instead?
Morbid curiosity trumping a plonking,
On 2016-10-12 18:56, Mikhail V wrote:
Please don't mix readability and personal habit, which previous
repliers seem to do as well. Those two things have nothing
to do with each other.
You keep saying this, but it's quite incorrect. The usage of decimal
notation is itself just a
On 13 October 2016 at 01:50, Chris Angelico wrote:
> On Thu, Oct 13, 2016 at 10:09 AM, Mikhail V wrote:
>> On 12 October 2016 at 23:58, Danilo J. S. Bellini
>> wrote:
>>
>>> Decimal notation is hardly
>>> readable when we're
On Thu, Oct 13, 2016 at 10:09 AM, Mikhail V wrote:
> On 12 October 2016 at 23:58, Danilo J. S. Bellini
> wrote:
>
>> Decimal notation is hardly
>> readable when we're dealing with stuff designed in base 2 (e.g. due to the
>> visual separation of
On 12 October 2016 at 23:50, Thomas Nyberg wrote:
> Since when was decimal notation "standard"?
Depends on what planet you live on. I live on planet Earth. And you?
> opposite. For unicode representations, byte notation seems standard.
How does this make it a good idea?
On 12 October 2016 at 23:58, Danilo J. S. Bellini
wrote:
> Decimal notation is hardly
> readable when we're dealing with stuff designed in base 2 (e.g. due to the
> visual separation of distinct bytes).
Hmm, what keeps you from separating the logical units to be
Forgot to reply to all, duping my message...
On 12 October 2016 at 23:48, M.-A. Lemburg wrote:
> Hmm, in Python3, I get:
>
> >>> s = "абв.txt"
> >>> s
> 'абв.txt'
I posted output with Python2 and Windows 7
BTW, in Windows 10 'print' won't work in the cmd console at all by default
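For reference, Python 3's repr keeps printable non-ASCII characters literal, while ascii() still shows the escaped form from the Python 2 output; a quick check:

```python
# Python 3 repr() leaves printable non-ASCII characters unescaped;
# ascii() reproduces the \uXXXX form shown in the Python 2 output.
s = "абв.txt"
assert repr(s) == "'абв.txt'"
assert ascii(s) == "'\\u0430\\u0431\\u0432.txt'"
```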
I'm -1 on this.
Just type "0431 unicode" on your favorite search engine. U+0431 is the
codepoint, not whatever digits 0x431 has in decimal. That's a tradition and
something external to Python.
As a related concern, I think using decimal/octal on raw data is a terrible
idea (e.g. On Linux, I
On 12.10.2016 23:33, Mikhail V wrote:
> Hello all,
>
> I want to share my thoughts about syntax improvements regarding
> character representation in Python.
> I am new to the list so if such a discussion or a PEP exists already,
> please let me know.
>
> So in short:
>
> Currently Python uses
On 10/12/2016 05:33 PM, Mikhail V wrote:
Hello all,
Hello! New to this list so not sure if I can reply here... :)
Now printing it we get:
u'\u0430\u0431\u0432.txt'
By "printing it", do you mean "this is the string representation"? I
would presume printing it would show characters
Hello all,
I want to share my thoughts about syntax improvements regarding
character representation in Python.
I am new to the list so if such a discussion or a PEP exists already,
please let me know.
So in short:
Currently Python uses hexadecimal notation
for characters for input and output.