Kay Schluehr:
Don't understand your Cython complaint. The only tricky part of Cython is the
doublethink regarding Python types and C types. I once attempted to write a
ShedSkin-like code transformer from Python to Cython based on type recordings
but never found the time for this because I have
s...@pobox.com:
Why not just write extension modules in C then?
In the past I have used some C for that purpose, but have you tried
the D language (used from Python with Pyd)? It's way better,
especially if, for example, you use libraries similar to the itertools
functions, etc. :-)
Bye,
bearophile
--
R. David Murray:
Given your description, I don't see any reason to prefer any alternate
data structure. 1000 small CSV files should fit in a modern computer's
memory with no problem...and if it does become an issue, worry about it
then.
The OP can also try the diff command that can be found
Greg:
Can you elaborate on those problems?
I can't, I am sorry, I don't remember the details anymore.
Feel free to ignore what I have written about Pyrex; lots of people
appreciate it, so it must be good enough, even if I was not smart/
expert enough to use it well. I have even failed in using it
pataphor:
The problem is posting *this*
function would kill my earlier repeat for sure. And it already had a
problem with parameters 0 (Hint: that last bug has now become a
feature in the unpostable repeat implementation)
Be bold, kill your previous ideas, and post the Unpostable :-)
Luis M. González:
it seems they intend to do upfront
compilation. How?
Unladen Swallow developers want to try everything (except black magic
and necromancy) to increase the speed of CPython. So they will try to
compile up-front if/where they can (for example most regular
expressions are known at
Paul Rubin:
IMHO the main problem with the Unladen Swallow approach is that it would
surprise me if CPython really spends that much of its time interpreting byte
code.
Note that Py3 already has a way to speed up byte code interpretation
when compiled by GCC or the Intel compiler (it's a very old
samwyse:
Always saying print(','.join(x)) gets tiresome in a hurry. I've
thought about defining my own function prnt that wraps print and
fixes generators, but that requires me to get their type,
Why do you need to know their type?
Isn't something like this enough?
def pr(it):
    txt =
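A guess at how that truncated helper might be completed (the separator and the str() conversion are my assumptions, not the original code):

```python
def pr(it, sep=", "):
    # Join the items of any iterable into one string; str() handles
    # ints, floats, generator items, etc. uniformly.
    txt = sep.join(str(x) for x in it)
    print(txt)

pr(range(3))  # prints: 0, 1, 2
```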
Carl Banks:
What about print(list(x))
Right, better than mine :-)
Bye,
bearophile
--
http://mail.python.org/mailman/listinfo/python-list
akindo:
So, it seems I want the best of both worlds: specific indexing using
my own IDs/keys (not just by list element location), sorting and the
ability to start iterating from a specific location.
A sorted associative map may fit. A data structure based on a
search tree, like a
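Even without a search tree, a dict plus a sorted key list and the stdlib bisect module give you keyed lookup, sorted order, and iteration from an arbitrary key (a minimal sketch; the class and method names are mine):

```python
import bisect

class SortedMap:
    """Keyed lookup plus in-order iteration from any start key."""

    def __init__(self, items=()):
        self._data = dict(items)
        self._keys = sorted(self._data)

    def __getitem__(self, key):
        return self._data[key]

    def items_from(self, start_key):
        # Yield (key, value) pairs in sorted key order, starting
        # at the first key >= start_key.
        i = bisect.bisect_left(self._keys, start_key)
        for k in self._keys[i:]:
            yield k, self._data[k]

m = SortedMap({"banana": 2, "apple": 1, "cherry": 3})
assert list(m.items_from("b")) == [("banana", 2), ("cherry", 3)]
```

This sketch doesn't support insertion after construction; a real sorted map (or a tree) would.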
Jared.S., even if a regex doesn't look like a program, it's like a
small program written in a strange language. And you have to test and
comment your programs.
So I suggest you program in a tidier way, and add unit tests
(doctests may suffice here) to your regexes; you can also use the
Terry Reedy:
a,b,*rest = list(range(10))
a,b,rest
(0, 1, [2, 3, 4, 5, 6, 7, 8, 9])
a,*rest,b = 'abcdefgh'
a,rest,b
('a', ['b', 'c', 'd', 'e', 'f', 'g'], 'h')
For the next few years I generally suggest specifying the Python
version too (whether it's 2.x or 3.x).
This is Python 3.x.
Bye,
Marius Retegan:
parameters1
key1 value1
key2 value2
end
parameters2
key1 value1
key2 value2
end
So I want to create two dictionaries parameters1={key1:value1,
key2:value2} and the same for parameters2.
I have wasted some time trying to create a regex for that. But
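A regex isn't really needed for this format; a line-by-line parser is simpler (a sketch, assuming each block is a name line, key/value lines, then "end"):

```python
def parse_blocks(text):
    # Parse "name / key value ... / end" sections into nested dicts.
    blocks = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if current is None:
            current = line           # section name, e.g. "parameters1"
            blocks[current] = {}
        elif line == "end":
            current = None
        else:
            key, value = line.split(None, 1)
            blocks[current][key] = value
    return blocks

text = """parameters1
key1 value1
key2 value2
end
parameters2
key1 value1
key2 value2
end"""
assert parse_blocks(text)["parameters1"] == {"key1": "value1",
                                             "key2": "value2"}
```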
Sumitava Mukherjee:
I need to randomly sample from a list where all choices have weights attached
to them.
Like this?
http://code.activestate.com/recipes/498229/
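The usual technique behind such recipes: bisect a running total of the weights with a uniform random sample (a sketch in Python 3 syntax; since 3.6, random.choices does this for you):

```python
import random
import bisect
from itertools import accumulate

def weighted_choice(items, weights):
    # Bisect the running total of the weights with a uniform sample:
    # an item's chance is proportional to its weight.
    cumulative = list(accumulate(weights))
    x = random.random() * cumulative[-1]
    return items[bisect.bisect_right(cumulative, x)]
```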
Bye,
bearophile
yadin, understanding what you want is probably 10 times harder than
writing down the code :-)
I have a a table, from where I can extract a column.
You can extract it? Or do you want to extract it? Or do you want to
process it? Etc.
I wanna go down through that column made of numbers
examine
yadin:
How can I build up a program that tells me that this sequence
128706
128707
128708
is repeated somewhere in the column, and how can i know where?
Can such patterns nest? That is, can you have a repeated pattern made
of an already seen pattern plus something else?
If you
Jeremy Martin, nowadays a parallelfor can be useful, and in future
I'll try to introduce similar things in D too, but syntax isn't
enough. You need a way to run things in parallel. But Python has the
GIL.
To implement a good parallel for your language may also need more
immutable data structures
Gediminas Kregzde:
map function is slower than
for loop for about 5 times, when using huge amounts of data.
It is needed to perform some operations, not to return data.
Then you are using map() for the wrong purpose. map()'s purpose is to
build a list of things. Use a for loop.
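The difference in code (Python 3 syntax): map() materializes a useless list of None results when the function is called only for its side effects.

```python
items = [1, 2, 3]
out = []

# Wrong tool: map() builds a list of None results just to run
# out.append for its side effect (in Python 3 it must even be
# forced with list() to run at all).
wasted = list(map(out.append, items))
assert wasted == [None, None, None]

# Right tool: a plain for loop, no useless result list.
out2 = []
for x in items:
    out2.append(x)
assert out2 == [1, 2, 3]
```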
Bye,
bearophile
flam...@gmail.com:
I am wondering if it's possible to get the return value of a method
*without* calling it using introspection?
Python is dynamically typed, so you can create a function like this:
foo = lambda x: "x" if x else 1
foo(1)
'x'
foo(0)
1
The return type of foo() changes according
rump...@web.de:
Eventually the rope data structure (that the compiler uses heavily)
will become a proper part of the library:
Ropes are a complex data structure, and they have some downsides too.
Python tries to keep its implementation simple; this avoids lots of
troubles (and is one of the
On the other hand, good programming practice generally suggests
writing functions that have a constant return type. And in most
programs most functions are like this. This is why ShedSkin can indeed
infer the return type of functions in well-behaved programs. To do this
ShedSkin uses a quite
Piet van Oostrum:
You may not have seen it, but Fortran and Algol 60 belong to that
category.
I see. It seems my ignorance is unbounded, even for the things I like.
I am very sorry.
Bye,
bearophile
kk:
I am sure I am missing something here.
This instruction creates a new diky dict on every iteration:
diky={chr(a):a}
What you want is to add items to the same dict, that you later output.
The normal way to do it is:
diky[chr(a)] = a
Your fixed code:
def values(x):
    diky = {}
    for i
godshorse, you may use the shortestPaths method of this graph class
of mine:
http://sourceforge.net/projects/pynetwork/
(It uses the same Dijkstra code by Eppstein).
(Once you have all distances from a node to the other ones, it's not
too difficult to find the tree you talk about).
Also see
Martin Vilcans:
Nice, a new language designed for high performance. It seems like a
direct competitor to D, i.e. a high-level language with low-level
abilities.
The Python-like syntax is a good idea.
There is Delight too:
http://delight.sourceforge.net/
But I agree, one
noydb:
I have not worked with the %.
Can you provide a snippet of your idea in code form?
Then it's a very good moment to learn using it:
http://en.wikipedia.org/wiki/Modulo_operator
10 % 3
1
10 % 20
10
-10 % 3
2
-10 % -3
-1
Something like that
Implement your first version, run it,
Looking for this, Kevin D. Smith?
http://code.activestate.com/recipes/502295/
Bye,
bearophile
I appreciate the tables Infinite Iterators and Iterators
terminating on the shortest input sequence at the top of the
itertools module, they are quite handy. I'd like to see similar
summary tables at the top of other docs pages too (such pages are
often quite long), for example the collections
Francis Carr:
I don't know who you are talking to, but I can give you a few answers
anyway.
collections of multiply-recursive functions (which get used very frequently --
by no means is it an uncommon situation, as you suggest in your initial post),
They may be frequent in Scheme (because it's
Terry Reedy:
bearophile:
Well, I'd like function call semantics pass-in keyword arguments to
use OrderedDicts then... :-)
[...]
It would require a sufficiently fast C implementation.
Right.
Such a dict is usually small, so if people want it ordered, it may be
better to just use an array of
John O'Hagan:
li=['a', 'c', 'a', 'c', 'c', 'g', 'a', 'c']
for i in range(len(li)):
    if li[i:i + 2] == ['a', 'c']:
        li[i:i + 2] = ['6']
Oh well, I have made a mistake, it seems.
Another solution then:
'acaccgac'.replace('ac', chr(6))
'\x06\x06cg\x06'
Bye,
bearophile
--
wolfram.hinde...:
It is easy to change all references to the function name, except for
those in the function body itself? That needs some explanation.
I can answer this. If I have a recursive function, I may want to
create a similar function, so I copy and paste it, to later modify the
copied
Aaron Brady:
def auto(f):
    def _inner(*ar, **kw):
        return f(g, *ar, **kw)
    g = _inner
    return g
Looks nice, I'll try the following variant to see if it's usable:
def thisfunc(fun):
    Decorator to inject a default name of a
    function inside
Arnaud Delobelle:
def bindfunc(f):
    def boundf(*args, **kwargs):
        return f(boundf, *args, **kwargs)
    return boundf

@bindfunc
def fac(self, n):
    return 1 if n <= 1 else n * self(n - 1)

fac(5)
120
This is cute, now I have two names to take care
Matthias Gallé:
My problem is to replace all occurrences of a sublist with a new element.
Example:
Given ['a','c','a','c','c','g','a','c'] I want to replace all
occurrences of ['a','c'] by 6 (result [6,6,'c','g',6]).
There are several ways to solve this problem. Representing a string as
a
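One straightforward way (a sketch of mine, scanning left to right and jumping past each matched span):

```python
def replace_sublist(lst, sub, repl):
    # Scan left to right; on a match, emit repl once and jump
    # past the matched span, so its items are not re-matched.
    out = []
    i, n = 0, len(sub)
    while i < len(lst):
        if n and lst[i:i + n] == sub:
            out.append(repl)
            i += n
        else:
            out.append(lst[i])
            i += 1
    return out

assert replace_sublist(['a','c','a','c','c','g','a','c'],
                       ['a','c'], 6) == [6, 6, 'c', 'g', 6]
```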
Matthias Gallé:
the int that can replace a sublist can be 255,
You didn't specify your integer ranges.
Probably there are many other solutions for your problem, but you have
to give more information, like the typical array size, the typical
range of the numbers, and how important total memory
Steve Howell:
two methods with almost identical names, where one function is the public
interface and then another method that does most of the recursion.
Thanks to Guido and Walter, both Python and D support nested functions,
so in such situations I put the recursive function inside the public
Arnaud Delobelle:
def fac(n):
    def rec(n, acc):
        if n <= 1:
            return acc
        else:
            return rec(n - 1, n * acc)
    return rec(n, 1)
Right, that's another way to partially solve the problem I was talking
about. (Unfortunately the performance in Python is
Aahz:
When have you ever had a binary tree a thousand levels deep?
Yesterday.
Consider how big 2**1000 is...
You are thinking just about complete binary trees.
But consider that a topology like a singly linked list (every node has
one child, and they are chained) is still a true binary tree.
Carl Banks:
1. Singly-linked lists can and should be handled with iteration.
I was talking about a binary tree with list-like topology, of course.
All recursion does is make what you're doing a lot less readable for
almost all programmers.
I can't agree. If the data structure is recursive
Sometimes I rename recursive functions, or I duplicate/modify them,
and they stop working because inside them there's one or more copies
of their old name.
This happens to me more than one time every year.
So I have written this:
from inspect import getframeinfo, currentframe
def
Esmail:
Is there a Python construct to allow me to do something like
this:
for i in range(-10.5, 10.5, 0.1):
Sometimes I use an improved version of this:
http://code.activestate.com/recipes/66472/
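The core idea of such a float-range recipe: compute start + i*step instead of repeatedly adding step, so floating-point error doesn't accumulate (a minimal sketch, not the recipe verbatim; it assumes stop lies on the step grid):

```python
def frange(start, stop, step):
    # Compute start + i*step rather than accumulating additions,
    # so floating-point error doesn't build up.
    n = int(round((stop - start) / step))
    return [start + i * step for i in range(n)]

assert frange(-1.0, 1.0, 0.5) == [-1.0, -0.5, 0.0, 0.5]
```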
Bye,
bearophile
dineshv:
Thanks for that about Python3. My integers range from 0 to 9,999,999
and I have loads of them. Do you think Python3 will help?
Nope.
Bye,
bearophile
mikefromvt:
I am very very unfamiliar with Python and need to update a Python
script. What I need to do is to replace three variables (already
defined in the script) within a string. The present script correctly
replaces two of the three variables. I am unable to add a third
variable.
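Since the actual script isn't shown, here is a generic sketch of substituting three variables into a string (the variable names are hypothetical):

```python
name, count, path = "job1", 3, "/tmp/out"   # hypothetical variables

# Old-style formatting, common in existing scripts:
s1 = "name=%s count=%d path=%s" % (name, count, path)

# str.format, clearer when there are several substitutions:
s2 = "name={0} count={1} path={2}".format(name, count, path)

assert s1 == s2 == "name=job1 count=3 path=/tmp/out"
```

Adding a third variable is just a matter of extending both the placeholders and the value tuple in step.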
Zealalot, probably there are some ways to do that, but a simple one is
the following (not tested):
def function2(self, passed_function=None):
    if passed_function is None:
        passed_function = self.doNothing
    ...
Bye,
bearophile
dineshv:
Yes, integer compression as in Unary, Golomb, and there are a few
other schemes.
OK. Currently Python doesn't use Golomb and similar compression
schemes.
But in Python3 all integers are multi-precision ones (I don't know yet
what's bad with the design of Python2.6 integers), and a
Sion Arrowsmith:
The keys aren't integers, though, they're strings.
You are right, sorry. I need to add an int() there.
Bye,
bearophile
Dinesh:
If you store a large number of integers (keys and values) in a
dictionary, do the Python internals perform integer compression to
save memory and enhance performance? Thanks
Define what you mean with integer compression please. It has several
meanings according to the context. For
On Apr 28, 2:54 pm, forrest yang gforrest.y...@gmail.com wrote:
i try to load a big file into a dict, which is about 9,000,000 lines,
something like
1 2 3 4
2 2 3 4
3 4 5 6
code
for line in open(file):
    arr = line.strip().split('\t')
    dict[line.split(None, 1)[0]] = arr
but, the dict is
Arnaud Delobelle:
Some people would write it as:
def leniter(iterable):
    if hasattr(iterable, '__len__'):
        return len(iterable)
    return sum(1 for _ in iterable)
That's slower than my version.
def xpairwise(iterable):
    return izip(iterable, islice(iterable, 1, None))
Paul Rubin:
Arnaud Delobelle:
Do you mean imap(comp, a, b)?
Oh yes, I forgot you can do that. Thanks.
That works and is nice and readable:
import operator
from itertools import imap
def equal_sequences(a, b, comp=operator.eq):
    a and b must have __len__
equal_sequences([1,
You can also use quite less code, but this is less efficient:
def equal_items(iter1, iter2, key=lambda x: x):
    class Guard(object): pass
    try:
        for x, y in izip_longest(iter1, iter2, fillvalue=Guard()):
            if key(x) != key(y):
                return False
    except
Peter Otten:
[...] I think Raymond Hettinger posted
an implementation of this idea recently, but I can't find it at the moment.
[...]
class Grab:
    def __init__(self, value):
        self.search_value = value
    def __hash__(self):
        return hash(self.search_value)
    def
Arnaud Delobelle:
You don't want to silence TypeErrors that may arise from with key() when
x or y is not a Guard, as it could hide bugs in key(). So I would write
something like this:
def equal_items(iter1, iter2, key=lambda x: x, _fill=object()):
    for x, y in izip_longest(iter1, iter2,
Some idioms are so common that I think they deserve to be written in C
into the itertools module.
1) leniter(iterator)
It returns the length of a given iterator, consuming it, without
creating a list. I have discussed this twice in the past.
Like itertools.izip_longest, don't use it with infinite
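The function I mean is tiny (a sketch, in Python 3 syntax):

```python
def leniter(iterator):
    # Count the items by consuming the iterator; no list is built.
    return sum(1 for _ in iterator)

assert leniter(iter("abcd")) == 4
```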
Ciprian Dorin, Craciun:
Python way:
def eq(a, b):
    return a == b

def compare(a, b, comp=eq):
    if len(a) != len(b):
        return False
    for i in xrange(len(a)):
        if not comp(a[i], b[i]):
            return False
    return True
That's not
Esmail:
oh, I forgot to mention that each list may contain duplicates.
Comparing the sorted lists is a possible O(n ln n) solution:
a.sort()
b.sort()
a == b
Another solution is to use frequency dicts, O(n):
from collections import defaultdict
d1 = defaultdict(int)
for el in a:
    d1[el] += 1
Arnaud Delobelle:
Thanks to the power of negative numbers, you only need one dict:
d = defaultdict(int)
for x in a:
    d[x] += 1
for x in b:
    d[x] -= 1
# a and b are equal if d[x]==0 for all x in d:
not any(d.itervalues())
Very nice, I'll keep this for future use.
Someday I'll have
casevh:
Testing 2 digits. This primarily measures the overhead of calling GMP
via an extension module.
...
Thank you for adding some actual data to the whole discussion :-)
If you perform similar benchmarks with Bigints of Java you will see
how much slower they are compared to the Python ones.
MRAB:
I think I might have cracked it:
...
print n, sums
Nice.
If you don't want to use dynamic programming, then add a @memoize
decorator before the function, using for example mine:
http://code.activestate.com/recipes/466320/
And you will see an interesting speed increase, even if
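The essence of such a memoize decorator (a minimal sketch in Python 3 syntax, not necessarily the linked recipe):

```python
import functools

def memoize(func):
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        # Only hashable positional arguments, for simplicity.
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

assert fib(30) == 832040   # near-instant: each value computed once
```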
Emmanuel Surleau:
On an unrelated note, it would be *really* nice to have a length property on
strings. Even Java has that!
Once you have written a good amount of Python code you can understand
that a len() function, which calls the __len__ method of objects, is
better. It allows you to write:
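For example (my guess at the kind of code meant), a plain function is itself a value you can pass around, which a .length property can't be:

```python
words = ["a", "ccc", "bb"]

# len is a first-class function: usable as a sort key...
assert sorted(words, key=len) == ["a", "bb", "ccc"]
# ...or mapped over a sequence.
assert list(map(len, words)) == [1, 3, 2]
```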
per:
in other words i want the list of random numbers to be arbitrarily
different (which is why i am using rand()) but as different from other
tuples in the list as possible.
This is more or less the problem of packing n equal spheres in a cube.
There is a lot of literature on this. You can
zaheer.ag...:
I am asking for free advice. The program is not very complex, it is
around 500 lines, with most of the code being reused,
500 lines is not a small Python program :-)
If you don't want to show it, then you can write another program, a
smaller one, for the purpose of letting people review it.
Paul McGuire:
xrange is not really intended for in testing,
Let's add the semantics of a good and fast "in" test to xrange (and to
the range of Python3). It hurts no one, allows a natural idiom
(especially when you have a stride and don't want to re-invent the
logic of skipping absent numbers), and
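That membership logic is just arithmetic, O(1) with no iteration (a sketch of the semantics; later Python 3 versions' range() actually does this for ints):

```python
def in_range(x, start, stop, step=1):
    # Constant-time membership: x must lie in the half-open
    # interval and fall exactly on the stride grid.
    if step > 0:
        return start <= x < stop and (x - start) % step == 0
    return stop < x <= start and (start - x) % (-step) == 0

assert in_range(7, 1, 10, 3)        # 1, 4, 7
assert not in_range(8, 1, 10, 3)
assert in_range(4, 10, 0, -3)       # 10, 7, 4, 1
```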
Ravi:
Which is a better approach.
My personal view is that I should create a module with functions.
When in doubt, use the simplest solution that works well enough. In
this case, module functions are simple and probably enough.
But there can be a situation where you want to keep functions even
Hyunchul Kim:
Following script do exactly what I want but I want to improve the speed.
This may be a bit faster, especially if sequences are long (code
untested):
import re
from collections import deque
def scanner1(deque=deque):
    result_seq = deque()
    cp_regular_expression =
bearophile:
cp_regular_expression = re.compile("^a complex regular expression here$")
for line in file(inputfile):
    if cp_regular_expression.match(line) and result_seq:
Sorry, you can replace that with:
cp_regular_expression = re.compile(^a complex regular expression
activescott:
BTW: I decided to go with 'scottsappengineutil'.
scottsappengineutil is hard to read and understand. The name split
with underscores is more readable:
scott_appengine_util
Or just:
app_engine_util
Bye,
bearophile
grkunt...:
If I am writing in Python, since it is dynamically, but strongly
typed, I really should check that each parameter is of the expected
type, or at least can respond to the method I plan on calling (duck
typing). Every call should be wrapped in a try/except statement to
prevent the
Here an informal list in random order of things that I may like to add
or to remove to/from Python3.x+.
The things I list here don't come from fifty hours of thinking, and
they may often be wrong. But I use Python2.x often enough, so these
things aren't totally random either.
To remove:
Lada Kugis:
(you have 1 apple, you start counting from 1 ...
To little children I now show how to count starting from zero: apple
number zero, apple number one, etc, and they find it natural
enough :-)
Bye,
bearophile
Ross:
How should I go about starting this problem... I feel like this is a
really simple problem, but I'm having writer's/coder's block. Can you
guys help?
There are refined ways to design a program, but this sounds like a
simple and small one, so you probably don't need many formal things to
Apollo:
my question is how to use 'heapq' to extract the biggest item from the heap?
is it possible?
This wrapper allows you to give a key function:
http://code.activestate.com/recipes/502295/
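Without a wrapper, the standard trick for a max-heap is to push negated values (and heapq.nlargest works for one-off queries):

```python
import heapq

values = [5, 1, 9, 3]

# Trick: negate values so the min-heap behaves as a max-heap.
heap = [-v for v in values]
heapq.heapify(heap)
biggest = -heapq.heappop(heap)
assert biggest == 9

# For a one-off query, nlargest avoids the negation dance.
assert heapq.nlargest(1, values) == [9]
```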
Bye,
bearophile
An interval map maybe?
http://code.activestate.com/recipes/457411/
A programmer has to know the name of many data structures :-)
Bye,
bearophile
Kent:
Now I just deal with my little application exactly in Java style:
package: gui, service, dao, entity, util
If those things are made of a small enough number of sub-things, and
such sub-things are small enough, then you may use a single module for
each of those Java packages (or even fewer).
srinivasan srinivas:
For ex: to check list 'A' is empty or not..
Empty collections are false:
if somelist:
    ...  # somelist isn't empty
else:
    ...  # somelist is empty
Bye,
bearophile
CinnamonDonkey:
It is neither constructive nor educational.
It's a bit like saying "If you don't know what a function is, then
maybe you don't need it." ... have you tried having a single block of
code?
The point of people coming to these forums is to LEARN and share
knowledge. Perhaps it's
Peter Waller:
Is there any better way to attach code?
This is a widely used place (but read the contract/disclaimer
first):
http://code.activestate.com/recipes/langs/python/
Bye,
bearophile
CinnamonDonkey:
what makes something a package?
If you don't know what a package is, then maybe you don't need
packages.
In your project is it possible to avoid using packages and just use
modules in the same directory?
Bye,
bearophile
Carl Banks:
The slow performance is most likely due to the poor performance of
Python 3's IO, which is caused by [...]
My suggestion for the Original Poster is just to try using Python 2.x,
if possible :-)
Bye,
bearophile
Kottiyath:
How do we decide whether a level of complexity is Ok or not?
I don't understand your question, but here are better ways to do what
you do:
a = {'a': 2, 'c': 4, 'b': 3}
for k, v in a.iteritems():
    a[k] = v + 1
a
{'a': 3, 'c': 5, 'b': 4}
b = dict((k, v+1) for k, v in
mattia:
Now, some ideas (apart from the double loop to aggregate each element of
l1 with each element of l2):
from itertools import product
list(product([1,2,3], [4,5]))
[(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
Bye,
bearophile
John Nagle:
I gave up on this when C came in; the C crowd was so casual about integer
overflow that nobody cared about this level of correctness. Today, of course,
buffer overflows are a way of life.
Experience shows that integer overflows are a very common bug. One of
the huge advantages of
JanC:
In most modern Pascal dialects the overflow checks can be (locally)
enabled or disabled with compiler directives in the source code,
I think that was possible in somewhat older versions of Pascal-like
languages too (like old Delphi versions, and maybe TurboPascal too).
so the speed
Maxim Khitrov:
When the types are immutable, there is no difference.
But you may want different instances to have different immutable data.
Generally, if the data (immutable or not) is the same for all
instances, use class attributes; otherwise use instance attributes.
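A small example of the distinction (the names are mine):

```python
class Config:
    retries = 3                  # class attribute: one shared value

    def __init__(self, name):
        self.name = name         # instance attribute: per-object data

a = Config("a")
b = Config("b")
assert a.retries == b.retries == 3   # shared through the class
a.name = "changed"
assert b.name == "b"                 # instances stay independent
```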
Bye,
bearophile
--
MRAB:
sorted(range(9), def key(n): n % 3)
I am fine with the current lambda syntax, but another possibility:
sorted(range(9), n = n % 3)
Bye,
bearophile
ZikO, if you know C++, then knowing one scripting language is useful,
it can be Ruby, Python (or even Lua, etc).
Note that learning a language isn't a binary thing, so I suggest you
spend a week learning Python and use it to try to solve some of your
practical problems. After a week you will be able
Raymond Hettinger:
In your experiences with xsplit(), do most use cases involve removing the
separators?
Unfortunately I am not able to tell you how often I remove them. But
regarding strings, I usually want to remove separators:
"aXcdXfg".split("X")
['a', 'cd', 'fg']
So sometimes I want to do
See here Daniel Fetchinson:
http://groups.google.com/group/comp.lang.python/browse_thread/thread/a973de8f3562675c
But be quite careful in using that stuff, it has some traps.
Bye,
bearophile
Are the computed gotos used in the future pre-compiled Windows binary
(of V.3.1) too?
Is such optimization going to be backported to the 2.x series too,
like Python 2.7?
Bye and thank you for your work,
bearophile
Benjamin Peterson:
It provides a good incentive for people to upgrade. :)
Sometimes at work you are forced to use Python 2.x, so incentives
aren't very relevant.
Christian Heimes:
No, the MS Visual C compiler doesn't support labels as values [1]. The
feature is only supported by some
Raymond Hettinger, maybe it can be useful to add an optional argument
flag to tell such split_on to keep the separators or not? This is the
xsplit I usually use:
def xsplit(seq, key=bool, keepkeys=True):
    xsplit(seq, key=bool, keepkeys=True): given an iterable seq and
    a predicate key,
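A guess at the body from that signature (the exact semantics of the real version may differ; this splits on elements matching the predicate, optionally keeping them as their own groups):

```python
def xsplit(seq, key=bool, keepkeys=True):
    # Split an iterable on elements matching the predicate `key`;
    # with keepkeys=True, separators are yielded as 1-element groups.
    group = []
    for el in seq:
        if key(el):
            if group:
                yield group
                group = []
            if keepkeys:
                yield [el]
        else:
            group.append(el)
    if group:
        yield group

assert list(xsplit([1, 2, 0, 3, 0, 4], key=lambda x: x == 0)) == \
    [[1, 2], [0], [3], [0], [4]]
```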
This is an interesting post; it shows me that the fitness plateau
where the design of Python syntax lives is really small, you can't
design something just similar:
http://unlimitednovelty.com/2009/03/indentation-sensitivity-post-mortem.html
Living on a small fitness plateau isn't good, even if it's very
odeits:
Although this is true, that is more of an answer to the question How
do i remove duplicates from a huge list in Unix?.
Don't you like cygwin?
Bye,
bearophile
Raymond Hettinger:
Paul Rubin:
another (messy) approach would be to write a C
extension that uses a doubly linked list some day.
That seems like an ideal implementation to me.
This was my Python implementation, where the delete too is O(1), but
it's slow:
Chris Rebert:
That seems to just be an overly complicated way of writing:
spaces = bool(form.has_key('spaces') and form.getvalue('spaces') == 1)
Better:
spaces = bool(('spaces' in form) and form.getvalue('spaces') == 1)
Bye,
bearophile
odeits:
How big of a list are we talking about? If the list is so big that the
entire list cannot fit in memory at the same time this approach wont
work e.g. removing duplicate lines from a very large file.
If the data are lines of a file, and keeping the original order isn't
important, then
Paul Rubin:
I don't see how to delete a randomly chosen node if you use that trick, since
the hash lookup doesn't give you two consecutive nodes in the linked list to
xor together.
Thank you, I think you are right, I am sorry.
So on 32-bit CPUs you need to add 8 bytes to each value.
On 64-bit
Steve Holden:
A sort of premature pessimization, then.
Maybe not: the memory savings may lead to higher speed anyway. So you
need to test it to know the overall balance. And in general-purpose
data structures you want all the speed you can get.
Bye,
bearophile
--
Brett Hedges:
How would I keep track of the absolute position of the lines?
You may have to do everything manually (tell, seek, looking for
newlines, iterating over chars); that's why I have said it's not
handy. The other solutions are simpler.
Bye,
bearophile
--