On Tue, Mar 24, 2015 at 1:13 PM, gdot...@gmail.com wrote:
SyntaxError: Missing parentheses in call to 'print'
It appears you are attempting to use a Python 2.x print statement with
Python 3.x. Try changing the last line to:
print(line.rstrip())
Skip
--
Changes by Evgeny Kapun abacabadabac...@gmail.com:
--
nosy: +abacabadabacaba
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
___
,5.,4.5686274500,3.7272727272727271,3.3947368421052630,5.7307692307692308,5.7547169811320753,4.9423076923076925,5.7884615384615383,5.13725490196
I want to end up with:
14S,5.0,4.56862745,3.7272727272727271,3.394736842105263,5.7307692307692308,5.7547169811320753,4.9423076923076925,5.7884615384615383,5.13725490196
I have a regex to remove the zeros:
'0+[,$]', ''
But I can't figure out
On 2015-03-13 12:05, Larry Martell wrote:
I need to remove all trailing zeros to the right of the decimal
point, but leave one zero if it's a whole number.
But I can't figure out how to get the 5. to be 5.0.
I've been messing with the negative lookbehind, but I haven't found
one that works for this.
Search: (\.\d+?)0+\b
Replace: \1
which is:
re.sub(r'(\.\d+?)0+\b', r'\1', line)
Larry Martell wrote:
I need to remove all trailing zeros to the right of the decimal point,
but leave one zero if it's a whole number.
def strip_zero(s):
    if '.' not in s:
        return s
    s = s.rstrip('0')
    if s.endswith('.'):
        s += '0'
    return s
And in use:
py
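The usage example is truncated in the archive; a hedged sketch of how it would look, with the function repeated so the snippet runs on its own (the sample line is taken from the thread):

```python
def strip_zero(s):
    # Leave non-numeric fields untouched.
    if '.' not in s:
        return s
    s = s.rstrip('0')
    # Keep one zero for whole numbers: "5." -> "5.0".
    if s.endswith('.'):
        s += '0'
    return s

line = "14S,5.,4.5686274500,3.7272727272727271"
print(",".join(strip_zero(f) for f in line.split(",")))
# 14S,5.0,4.56862745,3.7272727272727271
```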
Serhiy Storchaka added the comment:
Could anyone please make a review? This patch is a prerequisite of other
patches.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
___
Matthew Barnett added the comment:
Not quite all. POSIX regexes will always look for the longest match, so the
order of the alternatives doesn't matter, i.e. x|xy would give the same result
as xy|x.
--
___
Python tracker rep...@bugs.python.org
Changes by Rick Otten rottenwindf...@gmail.com:
--
components: Regular Expressions
nosy: Rick Otten, ezio.melotti, mrabarnett
priority: normal
severity: normal
status: open
title: regex "|" behavior differs from documentation
type: behavior
versions: Python 2.7
Mark Shannon added the comment:
This looks like the expected behaviour to me.
re.sub matches the leftmost occurrence, and the regular expression is greedy,
so (x|xy) will always match xy if it can.
--
nosy: +Mark.Shannon
___
Python tracker
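A quick check of the "first match wins" behaviour discussed in these comments. Python's re takes the first alternative that matches at each position, whereas (as noted above) POSIX engines prefer the longest match, so the order of alternatives matters in re:

```python
import re

# At each position, re tries the alternatives left to right and keeps
# the first one that matches, so 'x' wins over 'xy' here:
print(re.sub("x|xy", "-", "xy"))   # -y
# Reordering the alternatives changes the result:
print(re.sub("xy|x", "-", "xy"))   # -
```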
Rick Otten added the comment:
Can the documentation be updated to make this more clear?
I see now where the clause "As the target string is scanned, ..." describes
what you have listed here.
I and a coworker both read the description several times and missed that. I
thought it first tried
Matthew Barnett added the comment:
@Mark is correct, it's not a bug.
In the first example:
It tries to match each alternative at position 0. Failure.
It tries to match each alternative at position 1. Failure.
It tries to match each alternative at position 2. Failure.
It tries to match each
New submission from Rick Otten:
The documentation states that "|" parsing goes from left to right. This
doesn't seem to be true when spaces (or \s) are involved.
Example:
In [40]: mystring
Out[40]: 'rwo incorporated'
In [41]: re.sub('incorporated| inc|llc|corporation|corp| co', '',
-Regular Expressions
nosy: +docs@python, r.david.murray
title: regex "|" behavior differs from documentation -> add example of 'first
match wins' to regex "|" documentation?
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23532
Agree. I'll change this in re. What message is better in case of overflow: "the
repetition number is too large" (in re) or "repeat count too big" (in regex)?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
Serhiy Storchaka added the comment:
Here is a patch for regex which makes some error messages be the same as in re
with re_errors_2.patch. You could apply it to regex if new error messages look
better than old error messages. Otherwise we could change re error messages to
match regex
Matthew Barnett added the comment:
Some error messages use the indefinite article:
"expected a bytes-like object, %.200s found"
"cannot use a bytes pattern on a string-like object"
"cannot use a string pattern on a bytes-like object"
but others don't:
"expected string instance,
Serhiy Storchaka added the comment:
Updated patch addresses Ezio's comments.
--
Added file: http://bugs.python.org/file38080/re_errors_2.patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
Serhiy Storchaka added the comment:
Here is a patch which unifies and improves the re error messages. Added tests
for all parsing errors. Now the error message always points to the start of the
affected component, i.e. the start of a bad escape, group name or unterminated
subpattern.
--
stage:
Serhiy Storchaka added the comment:
re_errors_diff.txt contains differences for all tested error messages.
--
Added file: http://bugs.python.org/file38036/re_errors_diff.txt
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
Changes by Serhiy Storchaka storch...@gmail.com:
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23191
___
Roundup Robot added the comment:
New changeset fe12c34c39eb by Serhiy Storchaka in branch '2.7':
Issue #23191: fnmatch functions that use caching are now threadsafe.
https://hg.python.org/cpython/rev/fe12c34c39eb
--
nosy: +python-dev
___
Python
Changes by Serhiy Storchaka storch...@gmail.com:
--
assignee: - serhiy.storchaka
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23191
___
(repr(ll))
print( 'Try again with revised RegEx' )
splitter = re.compile( r'(?:(?:\s*[+/;,]\s*)|(?:\s+and\s+))' )
ll = splitter.split( 'Dave Sam, Jane and Zoe' )
print(repr(ll))
Results:
['Dave', ' ', None, 'Sam', ', ', None, 'Jane', None, ' and ', 'Zoe']
Try again with revised RegEx
['Dave
SilentGhost added the comment:
Looks like it works exactly as the docs[1] describe:
re.split(r'\s*[+/;,]\s*|\s+and\s+', string)
['Dave', 'Sam', 'Jane', 'Zoe']
You're using capturing groups (parentheses) in your original regex, which
return the separators as part of the result.
[1] https
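A minimal illustration of the point about capturing groups in re.split; the names are invented for the example:

```python
import re

s = "Dave, Sam and Jane"
# Capturing groups make re.split return the separators too:
print(re.split(r"(,\s*|\s+and\s+)", s))
# ['Dave', ', ', 'Sam', ' and ', 'Jane']
# Non-capturing groups drop them:
print(re.split(r"(?:,\s*|\s+and\s+)", s))
# ['Dave', 'Sam', 'Jane']
```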
Thomas 'PointedEars' Lahn wrote:
wxjmfa...@gmail.com wrote:
[...]
And, why not? compare Py3.2 and Py3.3+ !
What are you getting at?
Don't waste your time with JMF. He is obsessed with a trivial performance
regression in Python 3.3. Unicode strings can be slightly more expensive to
create
On Tuesday, January 13, 2015 at 10:06:50 AM UTC+5:30, Steven D'Aprano wrote:
On Mon, 12 Jan 2015 19:48:18 +, Ian wrote:
My recommendation would be to write a recursive descent parser for your
files.
That way it will be easier to write,
I know that writing parsers is a solved problem
wxjmfa...@gmail.com wrote:
Le mardi 13 janvier 2015 03:53:43 UTC+1, Rick Johnson a écrit :
[...]
you should find Python's text processing Nirvana
[...]
I recommend, you write a small application
I recommend you get a real name and do not post using the troll and spam-infested Google
On Tuesday, January 13, 2015 at 11:09:17 AM UTC-6, Rick Johnson wrote:
[...]
DO YOU NEED ME TO DRAW YOU A PICTURE?
I don't normally do this, but in the interest of education
I feel I must bear the burdens for which all professional
educators like myself are responsible.
On Tuesday, January 13, 2015 at 12:39:55 AM UTC-6, Steven D'Aprano wrote:
On Mon, 12 Jan 2015 15:47:08 -0800, Rick Johnson wrote:
[...]
[...]
#Ironic Twist (Reformatted)#
that writing parsers is a solved problem in computer
science, and that doing so is allegedly one of the more trivial
things computer scientists are supposed to be able to do, but the
learning curve to write parsers is if anything even higher than
the learning curve to write a regex.
I
network called My-Network-FECO
from the above config file stored in the variable 'filebody'.
First I have my variable 'shared_network' which contains the string
My-Network-FECO.
I compile my regex:
m = re.compile(r"^(shared\-network (" + re.escape(shared_network) +
               r") \{((\n|.|\r\n)*?)(^\}))", re.MULTILINE|re.UNICODE)
Thomas 'PointedEars' Lahn wrote:
Jason Bailey wrote:
shared-network My-Network-MOHE {
[…] {
I compile my regex:
m = re.compile(r"^(shared\-network (" + re.escape(shared_network) +
               r") \{((\n|.|\r\n)*?)(^\}))", re.MULTILINE|re.UNICODE)
This code does not run as posted. Applying Occam’s
On Monday, January 12, 2015 at 11:34:57 PM UTC-6, Mark Lawrence wrote:
You snipped the bit that says [normal cobblers snipped].
Oh my, where are my *manners* today? Tell you what, next time when
you're sneaking up behind me with a knife in hand, do be a
friend and tap me on the shoulder first, so
On 12/01/2015 18:03, Jason Bailey wrote:
Hi all,
I'm working on a Python _3_ project that will be used to parse ISC
DHCPD configuration files for statistics and alarming purposes (IP
address pools, etc). Anyway, I'm hung up on this one section and was
hoping someone could provide me with
On Tue, Jan 13, 2015 at 5:03 AM, Jason Bailey jbai...@emerytelcom.com wrote:
Unfortunately, I get no matches. From output on the command line, I can see
that Python is adding extra backslashes to my re.compile string. I have
added the raw 'r' in front of the strings to prevent it, but to no
On 01/12/2015 01:20 PM, Jason Bailey wrote:
Hi all,
What changed between 1:03 and 1:20 that made you post a nearly identical
second message, as a new thread?
Unfortunately, I get no matches. From output on the command line, I can
see that Python is adding extra backslashes to my
On Tue, Jan 13, 2015 at 6:48 AM, Ian hobso...@gmail.com wrote:
My recommendation would be to write a recursive descent parser for your
files.
That way it will be easier to write, much easier to modify and almost certainly
faster than an RE solution - and it can easily give you all the information
- Original Message -
From: Jason Bailey jbai...@emerytelcom.com
To: python-list@python.org
Cc:
Sent: Monday, January 12, 2015 7:20 PM
Subject: Python 3 regex woes (parsing ISC DHCPD config)
Hi all,
I'm working on a Python _3_ project that will be used to parse ISC DHCPD
'Some people, when confronted with a problem, think I
know, I'll use regular expressions. Now they have two
problems.' - Jamie Zawinski.
This statement is one of my favorite examples of powerful
propaganda, which has scared more folks away from regexps
than even the Upright Citizens Brigade
On Tue, Jan 13, 2015 at 10:47 AM, Rick Johnson
rantingrickjohn...@gmail.com wrote:
WHO'S LAUGHING NOW? -- YOU MINDLESS ROBOTS!
It's very satisfying when mindless robots laugh.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
any sleep over it. Meanwhile I'll get on with writing code,
and for the simple jobs that can be completed with string methods I'll
carry on using them. When that gets too complicated I'll reach for the
regex manual, knowing full well that there's enough data in books and
online to help even
computer
scientists are supposed to be able to do, but the learning curve to write
parsers is if anything even higher than the learning curve to write a
regex.
I wish that Python made it as easy to use EBNF to write a parser as it
makes to use a regex :-(
http://en.wikipedia.org/wiki
.
[snip]
If you wish to use a hydrogen bomb instead of a tooth pick
feel free, I won't lose any sleep over it. Meanwhile I'll
get on with writing code, and for the simple jobs that can
be completed with string methods I'll carry on using them.
When that gets too complicated I'll reach for the regex
On Mon, 12 Jan 2015 15:47:08 -0800, Rick Johnson wrote:
'Some people, when confronted with a problem, think I know, I'll use
regular expressions. Now they have two problems.' - Jamie Zawinski.
I wonder if Jamie's conclusions are a result of careful study, or
merely, an attempt to resolve
'shared_network' which contains the string
My-Network-FECO.
I compile my regex:
m = re.compile(r"^(shared\-network (" + re.escape(shared_network) +
               r") \{((\n|.|\r\n)*?)(^\}))", re.MULTILINE|re.UNICODE)
I search for regex matches in my config file:
m.search(filebody)
Unfortunately, I get
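A hedged, simplified sketch of the extraction this thread is after; the config text is invented for illustration, and the pattern drops the extra grouping and re.UNICODE flag from the original post:

```python
import re

# Invented sample config, standing in for the poster's 'filebody'.
filebody = """\
shared-network My-Network-FECO {
  subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.10 10.0.0.99;
  }
}
"""
shared_network = "My-Network-FECO"

# ^ and the closing-brace anchor need re.MULTILINE; the lazy body
# match stops at the first line that is just a closing brace.
m = re.compile(
    r"^shared-network " + re.escape(shared_network) + r" \{((?:\n|.)*?)^\}",
    re.MULTILINE,
).search(filebody)
print(m.group(1))
```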
New submission from M. Schmitzer:
The way the fnmatch module uses its regex cache is not threadsafe. When
multiple threads use the module in parallel, there is a race condition between
retrieving a presumed-present item from the cache and clearing the cache
(because the maximum size has been
STINNER Victor added the comment:
I guess that a lot of stdlib modules are not thread safe :-/ A workaround is to
protect calls to fnmatch with your own lock.
--
nosy: +haypo
___
Python tracker rep...@bugs.python.org
M. Schmitzer added the comment:
Ok, if that is the attitude in such cases, feel free to close this.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23191
___
STINNER Victor added the comment:
It would be nice to fix the issue, but I don't know how it is handled in other
stdlib modules.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue23191
___
Serhiy Storchaka added the comment:
It is easy to make fnmatch caching thread safe without locks. Here is a patch.
The problem with fnmatch is that the caching is implicit, so a user doesn't
know that any lock is needed. So either the need for the lock should be
explicitly documented, or
M. Schmitzer added the comment:
@serhiy.storchaka: My thoughts exactly, especially regarding the caching being
implicit. From the outside, fnmatch really doesn't look like it could have
threading issues.
The patch also looks exactly like what I had in mind.
--
I remember seeing here (couple of weeks ago??) a mention of a regex
debugging/editing tool hidden away in the python source tree.
Does someone remember the name/path?
There are of course dozens of online ones...
Looking for a python native tool
On Saturday, December 20, 2014 12:01:10 PM UTC+5:30, Rustom Mody wrote:
I remember seeing here (couple of weeks ago??) a mention of a regex
debugging/editing tool hidden away in the python source tree.
Does someone remember the name/path?
There are of course dozens of online ones
Mateon1 added the comment:
Well, I am reporting it here, is this not the correct place? Sorry if it is.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
___
Changes by Brett Cannon br...@python.org:
--
nosy: +brett.cannon
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
___
Matthew Barnett added the comment:
The page on PyPI says where the project's homepage is located:
Home Page: https://code.google.com/p/mrab-regex-hg/
The bug was fixed in the last release.
--
___
Python tracker rep...@bugs.python.org
http
Mateon1 added the comment:
Well, I found a bug in this module on Python 2.7(.5), on Windows 7 64-bit:
when you try to compile a regex with the flags V1|DEBUG, the module crashes as
if it wanted to call a builtin called ascii.
The bug happened to me several times, but this is the regexp when
Matthew Barnett added the comment:
@Mateon1: I hope it's fixed? Did you report it?
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
___
Terry J. Reedy added the comment:
I already said we should either stick with what we have where it's better (and
I gave examples, including sticking with 'cannot'), or possibly combine the
best of both if we can improve on both. 13 should use 'bytes-like' (already
changed?). There is no review button.
Serhiy Storchaka added the comment:
Here is a patch which makes re error messages match regex. It doesn't look to
me like all of these changes are enhancements.
--
keywords: +patch
Added file: http://bugs.python.org/file37167/re_errors_regex.patch
Nick Coghlan added the comment:
Thanks for pushing this one forward Serhiy! Your approach sounds like a
fine plan to me.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
___
Grouping. If we therefore wish to bring re up to the
regex standard, we could start with those features.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2636
Serhiy Storchaka added the comment:
Here is my (slowly implemented) plan:
0. Recommend regex as advanced replacement of re (issue22594).
1. Fix all obvious bugs in the re module if this doesn't break backward
compatibility (issue12728, issue14260, and many already closed issues).
2
Antoine Pitrou added the comment:
Here is my (slowly implemented) plan:
Exciting. Perhaps you should post your plan on python-dev.
In any case, huge thanks for your work on the re module.
--
___
Python tracker rep...@bugs.python.org
Serhiy Storchaka added the comment:
Exciting. Perhaps you should post your plan on python-dev.
Thank you Antoine. I think all interested core developers are already aware
of this issue. A disadvantage of posting on python-dev is that it would
require manually copying links and may be
Ezio Melotti added the comment:
So you are suggesting to fix bugs in re to make it closer to regex, and then
replace re with a forked subset of regex that doesn't include advanced
features, or just to fix/improve re until it matches the behavior of regex?
If you are suggesting the former, I
Serhiy Storchaka added the comment:
So you are suggesting to fix bugs in re to make it closer to regex, and then
replace re with a forked subset of regex that doesn't include advanced
features, or just to fix/improve re until it matches the behavior of regex?
Depends on what will be easier
Ezio Melotti added the comment:
Ok, regardless of what will happen, increasing test coverage is a worthy goal.
We might start by looking at the regex test suite to see if we can import some
tests from there.
--
___
Python tracker rep
massi_...@msn.com wrote:
Hi everyone,
I'm not really sure if this is the right place to ask about regular
expressions, but since I'm using Python I thought I could give it a try
:-)
Here is the problem: I'm trying to write a regex in order to
substitute
all the occurrences of the form
Raymond Hettinger added the comment:
+1
--
nosy: +rhettinger
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
___
Changes by Serhiy Storchaka storch...@gmail.com:
--
dependencies: +Add additional attributes to re.error, Other mentions of the
buffer protocol
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
Ezio Melotti added the comment:
+1 on the idea.
--
stage: - needs patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22364
___
in Python, or file I/O in Python, or
anything like that. Not a problem!
Here is the problem: I'm trying to write a regex in order to substitute all
the occurrences of the form $somechars with another string. This is what I
wrote:
newstring = re.sub(ur"(?u)(\$\"[\s\w]+\")", subst, oldstring)
Hi Chris, thanks for the reply. I tried to use look ahead assertions, in
particular I modified the regex this way:
newstring = re.sub(ur(?u)(\$\[\s\w(?=\\)\]+\), subst, oldstring)
but it does not work. I'm absolutely not a regex guru so I'm surely missing
something. The strings I'm dealing
Yeah, I'm not a high-flying regex programmer either, so I'll leave the
specifics
On 2014-10-28 12:28, massi_...@msn.com wrote:
Hi Chris, thanks for the reply. I tried to use look ahead assertions, in
particular I modified the regex this way:
newstring = re.sub(ur(?u)(\$\[\s\w(?=\\)\]+\), subst, oldstring)
but it does not work. I'm absolutely not a regex guru so I'm surely
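A hedged, runnable guess at the substitution this thread is after, in Python 3 syntax. It assumes the tokens actually looked like $"some chars" and that the quote characters were stripped by the archive, which would explain the odd-looking patterns quoted above:

```python
import re

oldstring = 'load $"first part" then $"second part" now'
# \$ matches a literal $, then a double-quoted run of word chars
# and whitespace; 'X' stands in for the poster's replacement string.
newstring = re.sub(r'\$"[\s\w]+"', 'X', oldstring)
print(newstring)  # load X then X now
```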
anupama srinivas murthy added the comment:
I have modified the patch and listed the points I know. Could you review it?
--
versions: -Python 3.4, Python 3.5
Added file: http://bugs.python.org/file37052/regex-link.patch
___
Python tracker rep
Changes by Serhiy Storchaka storch...@gmail.com:
--
components: +Regular Expressions
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22594
___
Changes by Serhiy Storchaka storch...@gmail.com:
--
components: +Regular Expressions
versions: +Python 3.4, Python 3.5 -Python 3.1
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10076
anupama srinivas murthy added the comment:
I have added the link and attached the patch below. Could you review it?
Thank you
--
components: -Regular Expressions
keywords: +patch
nosy: +anupama.srinivas.murthy
Added file: http://bugs.python.org/file36900/regex-link.patch
Georg Brandl added the comment:
currently more bugfree and intended to replace re
The first part is spreading FUD if not explained in more detail. The second is
probably never going to happen :(
--
nosy: +georg.brandl
___
Python tracker
Ezio Melotti added the comment:
+1
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22594
___
Changes by Berker Peksag berker.pek...@gmail.com:
--
stage: - needs patch
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22594
___
Changes by Tshepang Lekhonkhobe tshep...@gmail.com:
--
nosy: +tshepang
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue22594
___
New submission from Serhiy Storchaka:
The regex module is proposed as a replacement for the standard re module. Of
course we fix re bugs, but for now regex is more bug-free. Even after fixing
all open re bugs, regex will remain more fully featured. It would be good to
add a link to regex in the re
James Smith wrote:
I want the last 1
I can't get this to work:
pattern = re.compile(r"(\d+)$")
match = pattern.match("LINE: 235 : Primary Shelf Number (attempt 1): 1")
print match.group()
pattern = re.compile(r"(\d+)$")
match = pattern.search("LINE: 235 : Primary Shelf Number (attempt 1): 1")
Peter Otten __pete...@web.de writes:
pattern = re.compile(r"(\d+)$")
match = pattern.search("LINE: 235 : Primary Shelf Number (attempt 1): 1")
match.group()
'1'
An alternative way to accomplish the above using the 'match' method::
import re
pattern = re.compile(r"^.*:(?: *)(\d+)$")
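A short, hedged demonstration of why the original match() attempt failed: match() anchors at position 0 of the string, so a pattern ending in (\d+)$ only succeeds via search(), or via match() with a leading .* as shown in this thread:

```python
import re

line = "LINE: 235 : Primary Shelf Number (attempt 1): 1"
print(re.match(r"(\d+)$", line))                    # None: no digits at pos 0
print(re.search(r"(\d+)$", line).group(1))          # 1
print(re.match(r".*:(?: *)(\d+)$", line).group(1))  # 1
```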