Re: Fallback for operator and other dunder methods

2023-08-04 Thread Edmondo Giovannozzi via Python-list
Il giorno mercoledì 26 luglio 2023 alle 20:35:53 UTC+2 Dom Grigonis ha scritto:
> Tried exactly that and didn’t work. Neither __getattr__, nor __getattribute__ 
> of meta is being invoked.
> > On 26 Jul 2023, at 10:01, Chris Angelico via Python-list 
> >  wrote: 
> > 
> > On Wed, 26 Jul 2023 at 16:52, Dom Grigonis  wrote: 
> >> 
> >> Could you give an example? Something isn’t working for me. 
> >> 
> > 
> > This is a metaclass: 
> > 
> > class Meta(type): 
> > ... 
> > class Demo(metaclass=Meta): 
> > ... 
> > 
> > In order to catch those kinds of attribute lookups, you'll need the 
> > metaclass to hook them. And you might need to use __getattribute__ 
> > rather than __getattr__. However, there may also be some checks that 
> > simply look for the presence of the attribute (see: slots), so you may 
> > find that it's even more complicated. It's usually easiest to just 
> > create the slots you want. 
> > 
> > ChrisA
> > -- 
> > https://mail.python.org/mailman/listinfo/python-list


For numpy arrays you can find some suggestions at: 
https://numpy.org/doc/stable/user/basics.dispatch.html
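For reference, a minimal sketch of the kind of hook described on that page (the
class name and the example values are my own invention, not taken from the
thread):

import numpy as np

class Wrapped(np.lib.mixins.NDArrayOperatorsMixin):
    # The mixin supplies the arithmetic dunders (__add__, __mul__, ...) and
    # forwards them to __array_ufunc__, so one hook covers all the operators.
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # unwrap Wrapped inputs, run the real ufunc, wrap the result again
        inputs = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        return Wrapped(getattr(ufunc, method)(*inputs, **kwargs))

    def __repr__(self):
        return f"Wrapped({self.data!r})"

w = Wrapped([1, 2, 3])
print(w + 1)      # __add__ comes from the mixin -> Wrapped(array([2, 3, 4]))
print(np.sin(w))  # ufunc calls go through the same hook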
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Debugging reason for python running unreasonably slow when adding numbers

2023-03-21 Thread Edmondo Giovannozzi
Il giorno lunedì 20 marzo 2023 alle 19:10:26 UTC+1 Thomas Passin ha scritto:
> On 3/20/2023 11:21 AM, Edmondo Giovannozzi wrote: 
> > 
> >>> def sum1(): 
> >>> s = 0 
> >>> for i in range(100): 
> >>> s += i 
> >>> return s 
> >>> 
> >>> def sum2(): 
> >>> return sum(range(100)) 
> >> Here you already have the numbers you want to add. 
> > 
> > Actually using numpy you'll be much faster in this case: 
> > 
> > § import numpy as np 
> > § def sum3(): 
> > § return np.arange(1_000_000, dtype=np.int64).sum() 
> > 
> > On my computer sum1 takes 44 ms, while the numpy version just 2.6 ms 
> > One problem is that sum2 gives the wrong result. This is why I used 
> > np.arange with dtype=np.int64.
> On my computer they all give the same result. 
> 
> Python 3.10.9, PyQt version 6.4.1 
> Windows 10 AMD64 (build 10.0.19044) SP0 
> Processor: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, 1690 Mhz, 4 
> Core(s), 8 Logical Processor(s)
> > sum2 evidently doesn't uses the python "big integers" e restrict the result 
> > to 32 bits.
> What about your system? Let's see if we can figure the reason for the 
> difference.

I'm using WinPython on Windows 11 and the Python version is, well, 3.11.

But it is my fault, sorry: I realise now that IPython is importing the numpy
namespace, and the numpy sum function was shadowing the built-in sum.
The built-in sum behaves correctly and is faster than the numpy version when
used as sum(range(1_000_000)).
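
Just to illustrate the mix-up (a small sketch, not the exact session; the
overflow remark only applies where numpy's default integer is 32-bit):

import timeit
import numpy as np

# built-in sum: exact Python integers, no intermediate array
print(sum(range(1_000_000)))                       # 499999500000
print(timeit.timeit("sum(range(1_000_000))", number=10))

# numpy's sum first converts the range into an array; after "from numpy import *"
# (or in an IPython session that does it for you) the name sum is rebound to this
# function, which is slower here and, with a 32-bit default integer type, can
# silently overflow
print(np.sum(range(1_000_000)))
print(timeit.timeit("np.sum(range(1_000_000))", globals=globals(), number=10))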


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Debugging reason for python running unreasonably slow when adding numbers

2023-03-20 Thread Edmondo Giovannozzi

> > def sum1(): 
> > s = 0 
> > for i in range(100): 
> > s += i 
> > return s 
> > 
> > def sum2(): 
> > return sum(range(100))
> Here you already have the numbers you want to add. 

Actually using numpy you'll be much faster in this case:

import numpy as np
def sum3():
    return np.arange(1_000_000, dtype=np.int64).sum()

On my computer sum1 takes 44 ms, while the numpy version takes just 2.6 ms.
One problem is that sum2 gives the wrong result; this is why I used np.arange
with dtype=np.int64.

sum2 evidently doesn't use the Python "big integers" and restricts the result to
32 bits.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Line continuation and comments

2023-02-22 Thread Edmondo Giovannozzi
Il giorno mercoledì 22 febbraio 2023 alle 09:50:14 UTC+1 Robert Latest ha 
scritto:
> I found myself building a complicated logical condition with many ands and 
> ors 
> which I made more manageable by putting the various terms on individual lines 
> and breaking them with the "\" line continuation character. In this context 
> it 
> would have been nice to be able to add comments to lines terms which of 
> course 
> isn't possible because the backslash must be the last character on the line. 
> 
> Question: If the Python syntax were changed to allow comments after 
> line-ending 
> backslashes, would it break any existing code? I can't think of an example.

Well, you can if you use parentheses, as in:

x = 5
a = (x > 3 and
     # x < 21 or
     x > 100
     )

You don't need the "\" to continue a line in this case.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fast lookup of bulky "table"

2023-01-17 Thread Edmondo Giovannozzi
Il giorno martedì 17 gennaio 2023 alle 00:18:04 UTC+1 Dino ha scritto:
> On 1/16/2023 1:18 PM, Edmondo Giovannozzi wrote: 
> > 
> > As a comparison with numpy. Given the following lines: 
> > 
> > import numpy as np 
> > a = np.random.randn(400,100_000) 
> > ia = np.argsort(a[0,:]) 
> > a_elem = a[56, ia[0]] 
> > 
> > I have just taken an element randomly in a numeric table of 400x10 
> > elements 
> > To find it with numpy: 
> > 
> > %timeit isel = a == a_elem 
> > 35.5 ms ± 2.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 
> > 
> > And 
> > %timeit a[isel] 
> > 9.18 ms ± 371 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) 
> > 
> > As data are not ordered it is searching it one by one but at C level. 
> > Of course it depends on a lot of thing...
> thank you for this. It's probably my lack of experience with Numpy, 
> but... can you explain what is going on here in more detail? 
> 
> Thank you 
> 
> Dino

Sorry,
I was just creating an array of 400x100_000 elements that I fill with random
numbers:

  a = np.random.randn(400,100_000) 

Then I pick one element randomly: it is just a stupid sort on a row, after which
I take an element from another row, but it doesn't matter, I'm just taking a
random element. I could have used other ways to get that, but this was the first
that came to my mind.

 ia = np.argsort(a[0,:]) 
 a_elem = a[56, ia[0]] 

Then I look for that element in the whole matrix a (of course I know where it
is, but I want to test the speed of a linear search done at the C level):

%timeit isel = a == a_elem 

Actually isel is a boolean array that is True where a[i,j] == a_elem and False
where a[i,j] != a_elem. It may find more than one element but, of course, in
our case it will find only the element that we have selected at the beginning.
So it gives the speed of a linear search plus the time needed to allocate the
boolean array. The search is over the whole matrix of 40 million elements, not
just over one of its rows of 100k elements.

On the single row (which, I should say, I have chosen to be contiguous) it is
much faster.

%timeit isel = a[56,:] == a_elem
26 µs ± 588 ns per loop (mean ± std. dev. of 7 runs, 1 loops each)

The matrix holds double precision numbers, that is 8 bytes each; I haven't
tested it on strings of characters.

This was meant to be an estimate of the speed that one can get by going down to
the C level.
You lose, of course, the possibility of having a relational database, you need
to have everything in memory, etc...

A package that implements tables based on numpy is pandas: 
https://pandas.pydata.org/
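
Just as a rough sketch of how such a table lookup could look with pandas (the
column names and the query below are invented for the example):

import numpy as np
import pandas as pd

# a toy table: 100k rows and a few typed columns
n = 100_000
df = pd.DataFrame({
    "country": np.random.choice(["IT", "DE", "FR"], n),
    "price":   np.random.rand(n) * 100,
    "qty":     np.random.randint(0, 50, n),
})

# a query such as "country == 'IT' and price > 30 and qty <= 10" becomes a
# boolean mask evaluated column by column at C speed
sel = (df["country"] == "IT") & (df["price"] > 30) & (df["qty"] <= 10)
print(df[sel].head())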

I hope that it can be useful.


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fast lookup of bulky "table"

2023-01-16 Thread Edmondo Giovannozzi
Il giorno domenica 15 gennaio 2023 alle 05:26:50 UTC+1 Dino ha scritto:
> Hello, I have built a PoC service in Python Flask for my work, and - now 
> that the point is made - I need to make it a little more performant (to 
> be honest, chances are that someone else will pick up from where I left 
> off, and implement the same service from scratch in a different language 
> (GoLang? .Net? Java?) but I am digressing). 
> 
> Anyway, my Flask service initializes by loading a big "table" of 100k 
> rows and 40 columns or so (memory footprint: order of 300 Mb) and then 
> accepts queries through a REST endpoint. Columns are strings, enums, and 
> numbers. Once initialized, the table is read only. The endpoint will 
> parse the query and match it against column values (equality, 
> inequality, greater than, etc.) Finally, it will return a (JSON) list of 
> all rows that satisfy all conditions in the query. 
> 
> As you can imagine, this is not very performant in its current form, but 
> performance was not the point of the PoC - at least initially. 
> 
> Before I deliver the PoC to a more experienced software architect who 
> will look at my code, though, I wouldn't mind to look a bit less lame 
> and do something about performance in my own code first, possibly by 
> bringing the average time for queries down from where it is now (order 
> of 1 to 4 seconds per query on my laptop) to 1 or 2 milliseconds on 
> average). 
> 
> To be honest, I was already able to bring the time down to a handful of 
> microseconds thanks to a rudimentary cache that will associate the 
> "signature" of a query to its result, and serve it the next time the 
> same query is received, but this may not be good enough: 1) queries 
> might be many and very different from one another each time, AND 2) I am 
> not sure the server will have a ton of RAM if/when this thing - or 
> whatever is derived from it - is placed into production. 
> 
> How can I make my queries generally more performant, ideally also in 
> case of a new query? 
> 
> Here's what I have been considering: 
> 
> 1. making my cache more "modular", i.e. cache the result of certain 
> (wide) queries. When a complex query comes in, I may be able to restrict 
> my search to a subset of the rows (as determined by a previously cached 
> partial query). This should keep the memory footprint under control. 
> 
> 2. Load my data into a numpy.array and use numpy.array operations to 
> slice and dice my data. 
> 
> 3. load my data into sqlite3 and use SELECT statement to query my table. 
> I have never used sqllite, plus there's some extra complexity as 
> comparing certain colum requires custom logic, but I wonder if this 
> architecture would work well also when dealing with a 300Mb database. 
> 
> 4. Other ideas? 
> 
> Hopefully I made sense. Thank you for your attention 
> 
> Dino

As a comparison with numpy. Given the following lines:

import numpy as np
a = np.random.randn(400,100_000)
ia = np.argsort(a[0,:])
a_elem = a[56, ia[0]]

I have just taken an element randomly in a numeric table of 400x100_000 elements.
To find it with numpy:

%timeit isel = a == a_elem
35.5 ms ± 2.79 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

And 
%timeit a[isel]
9.18 ms ± 371 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

As the data are not ordered it searches them one by one, but at the C level.
Of course it depends on a lot of things... 


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fwd: Installation hell

2022-12-20 Thread Edmondo Giovannozzi
Personally I use WinPython: https://winpython.github.io/
It has all the scientific packages already available.
It can run without being installed and uses Spyder as an IDE (for small
projects it's OK).
And I can import pygame (even though I have not tested whether everything works)
in Python 3.11.
As I'm using it for science projects I find it perfect.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Byte arrays and DLLs

2022-07-02 Thread Edmondo Giovannozzi
Il giorno venerdì 1 luglio 2022 alle 00:46:13 UTC+2 ery...@gmail.com ha scritto:
> On 6/30/22, Rob Cliffe via Python-list  wrote: 
> > 
> > AKAIK it is not possible to give ctypes a bytearray object and persuade 
> > it to give you a pointer to the actual array data, suitable for passing 
> > to a DLL.
> You're overlooking the from_buffer() method. For example: 
> 
> >>> ba = bytearray(10) 
> >>> ca = (ctypes.c_char * len(ba)).from_buffer(ba) 
> >>> ca.value = b'spam' 
> >>> ba 
> bytearray(b'spam\x00') 
> 
> Note that the bytearray can't be resized while a view of the data is 
> exported. For example: 
> 
> >>> ba.append(97) 
> Traceback (most recent call last): 
> File "", line 1, in  
> BufferError: Existing exports of data: object cannot be re-sized 
> 
> >>> del ba[-1] 
> Traceback (most recent call last): 
> File "", line 1, in  
> BufferError: Existing exports of data: object cannot be re-sized

Have you had a look at numpy (https://numpy.org/)?
Typically, it is used for all scientific applications; it supports several
different kinds of arrays, fast linear algebra, etc.
And of course you can pass an array to a dynamic library with ctypes
(https://numpy.org/doc/stable/reference/routines.ctypeslib.html).
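
A small sketch of that route (the library name and the C signature below are
invented for the example):

import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

lib = ctypes.CDLL("./mylib.so")   # hypothetical DLL / shared library
# suppose the C side is:  void fill(double *buf, size_t n);
lib.fill.argtypes = [ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),
                     ctypes.c_size_t]
lib.fill.restype = None

buf = np.zeros(10, dtype=np.float64)
lib.fill(buf, buf.size)           # the library writes straight into buf's memory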

 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: C is it always faster than nump?

2022-02-28 Thread Edmondo Giovannozzi
Il giorno sabato 26 febbraio 2022 alle 19:41:37 UTC+1 Dennis Lee Bieber ha 
scritto:
> On Fri, 25 Feb 2022 21:44:14 -0800, Dan Stromberg  
> declaimed the following:
> >Fortran, (still last I heard) did not support pointers, which gives Fortran 
> >compilers the chance to exploit a very nice class of optimizations you 
> >can't use nearly as well in languages with pointers. 
> >
> Haven't looked much at Fortran-90/95 then... 
> 
> Variable declaration gained a POINTER qualifier, and there is an 
> ALLOCATE intrinsic to obtain memory. 
> 
> And with difficulty one could get the result in DEC/VMS FORTRAN-77 
> since DEC implemented (across all their language compilers) intrinsics 
> controlling how arguments are passed -- overriding the language native 
> passing: 
> CALL XYZ(%val(M)) 
> would actually pass the value of M, not Fortran default address-of, with 
> the result that XYZ would use that value /as/ the address of the actual 
> argument. (Others were %ref() and %descr() -- descriptor being a small 
> structure with the address reference along with, say, upper/lower bounds; 
> often used for strings).
> -- 
> Wulfraed Dennis Lee Bieber AF6VN 
> wlf...@ix.netcom.com http://wlfraed.microdiversity.freeddns.org/

The latest Fortran revision is Fortran 2018.
A variable can also have the VALUE attribute, even though nowhere in the
standard is it written that this means passing the data by value. It just means
that if a variable is changed in a procedure the changes don't propagate back to
the caller.
With iso_c_binding one can directly call a C function or let a Fortran procedure
appear as a C function. There is C_LOC, which gives the C address of a variable
if needed. Of course, from Fortran 2003 the language is fully object oriented.
The claim that it was faster than C is mostly related to aliasing, which is
forbidden in Fortran; C introduced the "restrict" qualifier for the same reason.
In Fortran you also have array operations like you have in numpy. 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Extracting dataframe column with multiple conditions on row values

2022-01-09 Thread Edmondo Giovannozzi
Il giorno sabato 8 gennaio 2022 alle 23:01:13 UTC+1 Avi Gross ha scritto:
> I have to wonder if when something looks like HOMEWORK, if it should be 
> answered in detail, let alone using methods beyond what is expected in class. 
> The goal of this particular project seems to be to find one (or perhaps more) 
> columns in some data structure like a dataframe that match two conditions 
> (containing a copy of two numbers in one or more places) and then KNOW what 
> column it was in. The reason I say that is because the next fairly 
> nonsensical request is to then explicitly return what that column has in the 
> row called 2, meaning the third row. 
> Perhaps stated another way: "what it the item in row/address 2 of the column 
> that somewhere contains two additional specified contents called key1 and 
> key2"  
> My guess is that if the instructor wanted this to be solved using methods 
> being taught, then loops may well be a way to go. Python and numpy/pandas 
> make it often easier to do things with columns rather than in rows across 
> them, albeit many things allow you to specify an axis. So, yes, transposing 
> is a way to go that transforms the problem in a way easier to solve without 
> thinking deeply. Some other languages allow relatively easy access in both 
> directions of horizontally versus vertically. And this may be an example 
> where solving it as a list of lists may also be easier.  
> Is the solution at the bottom a solution? Before I check, I want to see if I 
> understand the required functionality and ask if it is completely and 
> unambiguously specified.  
> For completeness, the question being asked may need to deal with a uniqueness 
> issue. Is it possible multiple columns match the request and thus more than 
> one answer is required to be returned? Is the row called 2 allowed to 
> participate in the match or must it be excluded and the question becomes to 
> find one (or more) columns that contain key1 somewhere else than row 2 and 
> key2 (which may have to be different than key1 or not) somewhere else and 
> THEN provide the corresponding entry from row 2 and that (or those) 
> column(s)? 
> So in looking at the solution offered, what exactly was this supposed to do 
> when dft is the transpose?
> idt = (dft[0] == 1) & (dft[1] == 5)
> Was the code (way below in this message) tried out or just written for us to 
> ponder? I tried it. I got an answer of: 0 1 2 
>V2 1 5 6 
> That is not my understanding of what was requested. Row 2 (shown transposed 
> as a column) is being shown as a whole. The request was for item "2" which 
> would be just 6. Something more like this: 
> print(dft[idt][2]) 
> 
> But the code makes no sense to me.  seems to explicitly test the first column 
> (0) to see if it contains a 1 and then the second column (1) to see if it 
> contains a 5. Not sure who cares about this hard-wired query as this is not 
> my understanding of the question. You want any of the original three rows 
> (now transposed)  tested to see if it contains BOTH.  
> I may have read the requirements wrong or it may not be explained well. Until 
> I am sure what is being asked and whether there is a good reason someone 
> wants a different solution, I see no reason to provide yet another 
> solution.But just for fund, assuming dft contains the transpose of the 
> original data, will this work? 
> first = dft[dft.values == key1 ]
> second = first[first.values == key2 ]
> print(second[2]) 
> I get a 6 as an answer and suppose it could be done in one more complex 
> expression if needed! LOL!
> -Original Message- 
> From: Edmondo Giovannozzi  
> To: pytho...@python.org 
> Sent: Sat, Jan 8, 2022 8:00 am 
> Subject: Re: Extracting dataframe column with multiple conditions on row 
> values 
> 
> Il giorno sabato 8 gennaio 2022 alle 02:21:40 UTC+1 dn ha scritto: 
> > Salaam Mahmood, 
> > On 08/01/2022 12.07, Mahmood Naderan via Python-list wrote: 
> > > I have a csv file like this 
> > > V0,V1,V2,V3 
> > > 4,1,1,1 
> > > 6,4,5,2 
> > > 2,3,6,7 
> > > 
> > > And I want to search two rows for a match and find the column. For 
> > > example, I want to search row[0] for 1 and row[1] for 5. The 
> > > corresponding 
> > > column is V2 (which is the third column). Then I want to return the value 
> > > at row[2] and the found column. The result should be 6 then. 
> > Not quite: isn't the "found column" also required? 
> > > I can manually extract the specified rows (with index 0 and 1 which are 
> > > fixed) and manually iterate over them like arrays to find a match. Then I 
> > Perhaps this idea has been influenced by a s

Re: Extracting dataframe column with multiple conditions on row values

2022-01-08 Thread Edmondo Giovannozzi
Il giorno sabato 8 gennaio 2022 alle 02:21:40 UTC+1 dn ha scritto:
> Salaam Mahmood,
> On 08/01/2022 12.07, Mahmood Naderan via Python-list wrote: 
> > I have a csv file like this 
> > V0,V1,V2,V3 
> > 4,1,1,1 
> > 6,4,5,2 
> > 2,3,6,7 
> > 
> > And I want to search two rows for a match and find the column. For 
> > example, I want to search row[0] for 1 and row[1] for 5. The corresponding 
> > column is V2 (which is the third column). Then I want to return the value 
> > at row[2] and the found column. The result should be 6 then.
> Not quite: isn't the "found column" also required?
> > I can manually extract the specified rows (with index 0 and 1 which are 
> > fixed) and manually iterate over them like arrays to find a match. Then I
> Perhaps this idea has been influenced by a similar solution in another 
> programming language. May I suggest that the better-answer you seek lies 
> in using Python idioms (as well as Python's tools)...
> > key1 = 1 
> > key2 = 5
> Fine, so far - excepting that this 'problem' is likely to be a small 
> part of some larger system. Accordingly, consider writing it as a 
> function. In which case, these two "keys" will become 
> function-parameters (and the two 'results' become return-values).
> > row1 = df.iloc[0] # row=[4,1,1,1] 
> > row2 = df.iloc[1] # row=[6,4,5,2]
> This is likely not native-Python. Let's create lists for 'everything', 
> just-because: 
> 
> >>> headings = [ "V0","V1","V2","V3" ] 
> >>> row1 = [4,1,1,1] 
> >>> row2 = [6,4,5,2] 
> >>> results = [ 2,3,6,7 ] 
> 
> 
> Note how I'm using the Python REPL (in a "terminal", type "python" (as 
> appropriate to your OpSys) at the command-line). IMHO the REPL is a 
> grossly under-rated tool, and is a very good means towards 
> trial-and-error, and learning by example. Highly recommended! 
> 
> 
> > for i in range(len(row1)): 
> 
> This construction is very much a "code smell" for thinking that it is 
> not "pythonic". (and perhaps the motivation for this post) 
> 
> In Python (compared with many other languages) the "for" loop should 
> actually be pronounced "for-each". In other words when we pair the 
> code-construct with a list (for example): 
> 
> for each item in the list the computer should perform some suite of 
> commands. 
> 
> (the "suite" is everything 'inside' the for-each-loop - NB my 
> 'Python-betters' will quickly point-out that this feature is not limited 
> to Python-lists, but will work with any :iterable" - ref: 
> https://docs.python.org/3/tutorial/controlflow.html#for-statements) 
> 
> 
> Thus: 
> 
> > for item in headings: print( item ) 
> ... 
> V0 
> V1 
> V2 
> V3 
> 
> 
> The problem is that when working with matrices/matrixes, a math 
> background equips one with the idea of indices/indexes, eg the 
> ubiquitous subscript-i. Accordingly, when reading 'math' where a formula 
> uses the upper-case Greek "sigma" character, remember that it means "for 
> all" or "for each"! 
> 
> So, if Python doesn't use indexing or "pointers", how do we deal with 
> the problem? 
> 
> Unfortunately, at first glance, the pythonic approach may seem 
> more-complicated or even somewhat convoluted, but once the concepts 
> (and/or the Python idioms) are learned, it is quite manageable (and 
> applicable to many more applications than matrices/matrixes!)...
> > if row1[i] == key1: 
> > for j in range(len(row2)): 
> > if row2[j] == key2: 
> > res = df.iloc[:,j] 
> > print(res) # 6 
> > 
> > Is there any way to use built-in function for a more efficient code?
> This is where your idea bears fruit! 
> 
> There is a Python "built-in function": zip(), which will 'join' lists. 
> NB do not become confused between zip() and zip archive/compressed files! 
> 
> Most of the time reference book and web-page examples show zip() being 
> used to zip-together two lists into a single data-construct (which is an 
> iterable(!)). However, zip() will actually zip-together multiple (more 
> than two) "iterables". As the manual says: 
> 
> «zip() returns an iterator of tuples, where the i-th tuple contains the 
> i-th element from each of the argument iterables.» 
> 
> Ah, so that's where the math-idea of subscript-i went! It has become 
> 'hidden' in Python's workings - or putting that another way: Python 
> looks after the subscripting for us (and given that 'out by one' errors 
> in pointers is a major source of coding-error in other languages, 
> thank-you very much Python!) 
> 
> First re-state the source-data as Python lists, (per above) - except 
> that I recommend the names be better-chosen to be more meaningful (to 
> your application)! 
> 
> 
> Now, (in the REPL) try using zip(): 
> 
> >>> zip( headings, row1, row2, results ) 
>  
> 
> Does that seem a very good illustration? Not really, but re-read the 
> quotation from the manual (above) where it says that zip returns an 
> iterator. If we want to see the values an iterator will produce, then 
> turn it into an iterable data-structure, eg: 
> 
> >>> list( zip( 

Re: Negative subscripts

2021-11-26 Thread Edmondo Giovannozzi
Il giorno venerdì 26 novembre 2021 alle 10:23:46 UTC+1 Frank Millman ha scritto:
> Hi all 
> 
> In my program I have a for-loop like this - 
> 
> >>> for item in x[:-y]: 
> ...[do stuff] 
> 
> 'y' may or may not be 0. If it is 0 I want to process the entire list 
> 'x', but of course -0 equals 0, so it returns an empty list. 
> 
> In theory I can say 
> 
> >>> for item in x[:-y] if y else x: 
> ...[do stuff] 
> 
> But in my actual program, both x and y are fairly long expressions, so 
> the result is pretty ugly. 
> 
> Are there any other techniques anyone can suggest, or is the only 
> alternative to use if...then...else to cater for y = 0? 
> 
> Thanks 
> 
> Frank Millman

First assign your long expressions to variables x and y, and then you may write:

for item in x[:len(x) - y]:
    ...

When y == 0 this slices up to len(x), i.e. the whole list is processed.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pyinstaller wrong classified as Windows virus

2021-11-26 Thread Edmondo Giovannozzi
Il giorno venerdì 26 novembre 2021 alle 08:13:50 UTC+1 Ulli Horlacher ha 
scritto:
> Avi Gross  wrote: 
> 
> > I am not sure what your real problem is, Ulli, but many antivirus programs 
> > can be TEMPORARILY shut off.
> Meanwhile I found this configuration option. 
> But this does not help me much, because my programs must run on other 
> Windows PCs of other users and they cannot disable the default Windows 
> virus scanner. 
> 
> I for myself does not use Windows at all, I just use it to compile my 
> programs.
> > If one recognizes your code a potentially having a virus, it may be for an 
> > assortment of reasons such as a table it contains to look at position N in 
> > the executable for an exact match with some bit-string. If so, one 
> > potential 
> > fix is a slight change in the code that compiles a bit differently like 
> > x=sin(30) or other filler.
> I do not know what and where the virus scanning is complaining about. 
> It simple says virus threat found and then it deletes my executables. 
> It is the default virus scanner of Windows 10, I have not installed any 
> additional software (besides Python and cygwin).
> > But consider another possibility that your compiler software is compromised
> Then https://www.python.org/ftp/python/3.10.0/python-3.10.0-amd64.exe 
> is infected. I doubt this.
> > Is this happening to only one set of code?
> This is happening SOMETIMES, not always. With the SAME source code. When I 
> call pyinstaller often enough, then the virus scanner is quiet. In about 1 
> of 20 compile runs.
> -- 
> Ullrich Horlacher Server und Virtualisierung 
> Rechenzentrum TIK 
> Universitaet Stuttgart E-Mail: horl...@tik.uni-stuttgart.de 
> Allmandring 30a Tel: ++49-711-68565868 
> 70569 Stuttgart (Germany) WWW: http://www.tik.uni-stuttgart.de/


You can try to download WinPython:
https://github.com/winpython/winpython/releases
It is an executable, but you don't need to execute it, as it is a 7zip
compressed archive.
You may run it or use 7zip directly to decompress it; the result will be the
same.

Then you have a full Python installation that doesn't need to be installed
system-wide.
You may try to put your program there and give the users that directory.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to pass a method as argument?

2021-09-30 Thread Edmondo Giovannozzi
Il giorno giovedì 30 settembre 2021 alle 07:11:28 UTC+2 anila...@gmail.com ha 
scritto:
> I want to write a python calculator program that has different methods to 
> add, subtract, multiply which takes 2 parameters. I need to have an execute 
> method when passed with 3 parameters, should call respective method and 
> perform the operation. How can I achieve that? 
> 
> 
> 
> class calc(): 
> def __init__(self,a,b): 
> self.a=a 
> self.b=b 
> 
> def ex(self,fun): 
> self.fun=fun 
> if fun=="add": 
> self.add() 
> 
> def add(self): 
> return self.a+self.b 
> def sub(self): 
> return self.a-self.b 
> def mul(self): 
> return self.a*self.b 
> def div (self): 
> return self.a/self.b 
> def execu( 
> obj1=calc() 
> obj1.execu("add",1,,2)

For example:

def ex(self, fun):
    getattr(self, fun)()
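
A slightly fuller sketch of that hint (the class is a trimmed-down version of
the one in the question; the argument handling is left to the original poster):

class Calc:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def add(self):
        return self.a + self.b

    def sub(self):
        return self.a - self.b

    def ex(self, fun):
        # look the method up by name on the instance and call it
        return getattr(self, fun)()

print(Calc(1, 2).ex("add"))   # 3
print(Calc(5, 2).ex("sub"))   # 3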
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Data structure for plotting monotonically expanding data set

2021-05-27 Thread Edmondo Giovannozzi
Il giorno giovedì 27 maggio 2021 alle 11:28:31 UTC+2 Loris Bennett ha scritto:
> Hi, 
> 
> I currently a have around 3 years' worth of files like 
> 
> home.20210527 
> home.20210526 
> home.20210525 
> ... 
> 
> so around 1000 files, each of which contains information about data 
> usage in lines like 
> 
> name kb 
> alice 123 
> bob 4 
> ... 
> zebedee 999 
> 
> (there are actually more columns). I have about 400 users and the 
> individual files are around 70 KB in size. 
> 
> Once a month I want to plot the historical usage as a line graph for the 
> whole period for which I have data for each user. 
> 
> I already have some code to extract the current usage for a single from 
> the most recent file: 
> 
> for line in open(file, "r"): 
> columns = line.split() 
> if len(columns) < data_column: 
> logging.debug("no. of cols.: %i less than data col", len(columns)) 
> continue 
> regex = re.compile(user) 
> if regex.match(columns[user_column]): 
> usage = columns[data_column] 
> logging.info(usage) 
> return usage 
> logging.error("unable to find %s in %s", user, file) 
> return "none" 
> 
> Obviously I will want to extract all the data for all users from a file 
> once I have opened it. After looping over all files I would naively end 
> up with, say, a nested dict like 
> 
> {"20210527": { "alice" : 123, , ..., "zebedee": 999}, 
> "20210526": { "alice" : 123, "bob" : 3, ..., "zebedee": 9}, 
> "20210525": { "alice" : 123, "bob" : 1, ..., "zebedee": 999}, 
> "20210524": { "alice" : 123, ..., "zebedee": 9}, 
> "20210523": { "alice" : 123, ..., "zebedee": 999}, 
> ...} 
> 
> where the user keys would vary over time as accounts, such as 'bob', are 
> added and latter deleted. 
> 
> Is creating a potentially rather large structure like this the best way 
> to go (I obviously could limit the size by, say, only considering the 
> last 5 years)? Or is there some better approach for this kind of 
> problem? For plotting I would probably use matplotlib. 
> 
> Cheers, 
> 
> Loris 
> 
> -- 
> This signature is currently under construction.

Have you tried to use pandas to read the data?
Then you may try to add a column with the date and then join the datasets.
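
A rough sketch of that idea (the file pattern and the name/kb columns are taken
from your post, everything else is assumed):

import glob
import pandas as pd
import matplotlib.pyplot as plt

frames = []
for path in sorted(glob.glob("home.*")):
    date = path.split(".")[-1]                    # e.g. "20210527"
    df = pd.read_csv(path, sep=r"\s+")            # columns: name, kb, ...
    df["date"] = pd.to_datetime(date, format="%Y%m%d")
    frames.append(df)

usage = pd.concat(frames, ignore_index=True)
# one column per user, one row per day; users missing on a day become NaN
history = usage.pivot(index="date", columns="name", values="kb")
history.plot()                                    # one line per user
plt.show()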
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: GUI (tkinter) popularity and job prospects for

2020-10-26 Thread Edmondo Giovannozzi
Il giorno venerdì 23 ottobre 2020 alle 18:55:53 UTC+2 john... ha scritto:
> On 23/10/2020 05:47, Grant Edwards wrote: 
> > 
> >> I think that commercial desktop applications with a python 
> >> compatible GUI would likely use QT or a Python binding thereof. 
> > Agreed. If you want to improve you "hirability" for GUI application 
> > development, I would probably put Qt first. Then gobject or 
> > wx. Tkinter would probably be last.
> I've used tkinter and wxPython occasionally in the past for 1 off test 
> tasks (and interest). What's the advantage of Qt? 
> 
> John

I use PyQt in research applications; there is a library, pyqtgraph
(http://www.pyqtgraph.org/), based on Qt, that is very fast: you can zoom and
pan even complicated plots, and it is much faster than matplotlib.
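
A minimal example with a recent pyqtgraph version (nothing here is taken from a
specific project):

import numpy as np
import pyqtgraph as pg

x = np.linspace(0, 100, 100_000)
pg.plot(x, np.sin(x), title="pyqtgraph demo")   # interactive pan/zoom window
pg.exec()                                       # start the Qt event loop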

For me, the graphics make the difference.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: newbie

2020-09-10 Thread edmondo . giovannozzi
You can also have a look at www.scipy.org, where you can find some packages used
for scientific programming, like numpy, scipy and matplotlib.
The last one is a plotting package that may be useful for making some initial plots.


Il giorno martedì 8 settembre 2020 22:57:36 UTC+2, Don Edwards ha scritto:
> Purchased the book python-basics-2020-05-18.pdf a few days ago.
> To direct my learning I have a project in mind as per below;
> 
> Just started learning python 3.8. At 76 I will be a bit slow but
> fortunately decades ago l learnt pascal. I am not asking programming help
> just guidance toward package(s) I may need. My aim is to write a program
> that simulates croquet - 2 balls colliding with the strikers (cue) ball
> going into the hoop (pocket), not the target ball. I want to be able to
> move the balls around and draw trajectory lines to evaluate different
> positions. Is there a package for this or maybe one to draw and one to
> move? Any help much appreciated.
> 
> -- 
> Regards, Don Edwards
> I aim to live forever - or die in the attempt. So far so good!
> Perth Western Australia

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Can't download Pygame and Pgzero

2020-06-06 Thread edmondo . giovannozzi
Il giorno venerdì 5 giugno 2020 18:35:10 UTC+2, Lily Sararat ha scritto:
> To whom it may concern,
> I have trouble installing the Pygame and Pgzero on Window.  I based on the 
> instruction on the "Computer Coding Python Games for Kids" by Carol 
> Vorderman.  Nothing works.  
> I tried Python 3.6.2 as describe in the book and the latest version 3.8.3 
> still encounter on the same problem.  I really need to get this sorted so my 
> kid can spend his summer break mastering the coding.
> Brgs,

Hi Lily,
on Windows I prefer to start from a complete Python distribution where most of
the packages are already installed.
One possibility is WinPython:

http://winpython.github.io/

You can install it in a directory without a full system installation.
There you will find a "WinPython Command Prompt"; if you open it, you can issue
the command:

pip install pgzero

pygame should already be installed.

Cheers

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Strange use of Lambda arrow

2020-06-06 Thread edmondo . giovannozzi
Have a look at:

https://docs.python.org/3/library/typing.html
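
In short, it is a return type annotation (PEP 3107 / PEP 484); a tiny, made-up
illustration of the syntax:

def average(values: list[float]) -> float:
    # the annotations only document the types; Python does not enforce them
    return sum(values) / len(values)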


Il giorno venerdì 5 giugno 2020 18:35:10 UTC+2, Agnese Camellini ha scritto:
> Hello to everyone, lately i building up an open source project, with some
> collaborator, but one of them cannot contribute any more. He is a solution
> architect so he is very skilled (much more than me!). I am now analysing
> his code to finish the job but i don't get this use of the lambda arrow,
> it's like he is deplaring the returned tipe in the function signature (as
> you would do in Java). I have never seen something like this in python..
> 
> Can someone please explain to me this usage (the part regarding the
> question is highlighted in yellow):
> 
> @classmethod
> def extract_document_data(cls, file_path : str) -> DocumentData:
> """
> Entry point of the module, it extracts the data from the document
> whose path is passed as input.
> The extraction strategy is automatically chosen based on the MIME
> type
> of the file.
> 
> @type file_path: str
> @param file_path: The path of the document to be parsed.
> @rtype: DocumentData
> @returns: An object containing the data of the parsed document.
> """
> 
> mime = magic.Magic(mime=True)
> mime_type = mime.from_file(file_path)
> document_type = DocumentType.get_instance(mime_type)
> strategy = cls.strategies[document_type]
> return strategy.extract_document_data(file_path)
> 
> 
> To be more verbose, this is the whole script:
> 
> from enum import Enum
> import json
> import magic
> 
> import docx
> from pdfminer.converter import PDFPageAggregator
> from pdfminer.layout import LAParams, LTContainer, LTTextContainer
> from pdfminer.pdfdocument import PDFDocument, PDFNoOutlines
> from pdfminer.pdfinterp import PDFPageInterpreter
> from pdfminer.pdfinterp import PDFResourceManager
> from pdfminer.pdfpage import PDFPage
> from pdfminer.pdfparser import PDFParser
> 
> 
> class DocumentType(Enum):
> """
> Defines the handled document types.
> Each value is associated to a MIME type.
> """
> 
> def __init__(self, mime_type):
> self.mime_type = mime_type
> 
> @classmethod
> def get_instance(cls, mime_type : str):
> values = [e for e in cls]
> for value in values:
> if value.mime_type == mime_type:
> return value
> raise MimeNotValidError(mime_type)
> 
> PDF = 'application/pdf'
> DOCX =
> 'application/vnd.openxmlformats-officedocument.wordprocessingml.document'
> 
> 
> class MimeNotValidError(Exception):
> """
> Exception to be raised when a not valid MIME type is processed.
> """
> 
> pass
> 
> 
> class DocumentData:
> """
> Wrapper for the extracted document data (TOC and contents).
> """
> 
> def __init__(self, toc : list = [], pages : list = [], document_text :
> str = None):
> self.toc = toc
> self.pages = pages
> if document_text is not None:
> self.document_text = document_text
> else:
> self.document_text = ' '.join([page.replace('\n', ' ') for page
> in pages])
> 
> def toc_as_json(self) -> str:
> return json.dumps(self.toc)
> 
> 
> class ExtractionStrategy:
> """
> Base class for the extraction strategies.
> """
> 
> @staticmethod
> def extract_document_data(file_path : str) -> DocumentData:
> pass
> 
> 
> class DOCXExtractionStrategy(ExtractionStrategy):
> """
> It implements the TOC and contents extraction from a DOCX document.
> """
> 
> @staticmethod
> def extract_document_data(file_path : str) -> DocumentData:
> document = docx.Document(file_path)
> body_elements = document._body._body
> # Selecting only the <w:t> elements from DOCX XML,
> # as they're the only to contain some text.
> text_elems = body_elements.xpath('.//w:t')
> return DocumentData(document_text = ' '.join([elem.text for elem in
> text_elems]))
> 
> 
> class PDFExtractionStrategy(ExtractionStrategy):
> """
> It implements the TOC and contents extraction from a PDF document.
> """
> 
> @staticmethod
> def parse_toc(doc : PDFDocument) -> list:
> raw_toc = []
> try:
> outlines = doc.get_outlines()
> for (level, title, dest, a, se) in outlines:
> raw_toc.append((level, title))
> except PDFNoOutlines:
> pass
> return PDFExtractionStrategy.build_toc_tree(raw_toc)
> 
> @staticmethod
> def build_toc_tree(items : list) -> list:
> """
> Builds the TOC tree from a list of TOC items.
> 
> @type items: list
> @param items: The TOC items.
> Each item must have the following format: (<level>, <description>).
> E.g: [(1, 'Contents'), (2, 'Chapter 1'), (2, 'Chapter 2')]
> @rtype: list
>  

Re: python and numpy

2020-04-22 Thread edmondo . giovannozzi
Il giorno martedì 21 aprile 2020 21:04:17 UTC+2, Derek Vladescu ha scritto:
> I’ve just begun a serious study of using Python as an aspiring 
> programmer/data scientist.
> Can someone please walk me through how to download Python, SO THAT I will be 
> able to import numpy?
> 
> Thanks,
> Derek
> 
> Sent from Mail for Windows 10

If you're on Windows, also have a look at WinPython.
You can run it without a system installation, which in some cases can be nice.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: numpy array question

2020-04-02 Thread edmondo . giovannozzi
Il giorno giovedì 2 aprile 2020 06:30:22 UTC+2, jagmit sandhu ha scritto:
> python newbie. I can't understand the following about numpy arrays:
> 
> x = np.array([[0, 1],[2,3],[4,5],[6,7]])
> x
> array([[0, 1],
>[2, 3],
>[4, 5],
>[6, 7]])
> x.shape
> (4, 2)
> y = x[:,0]
> y
> array([0, 2, 4, 6])
> y.shape
> (4,)
> 
> Why is the shape for y reported as (4,) ? I expected it to be a (4,1) array.
> thanks in advance

Because it is not Matlab, where everything is at least a 2D array.
If you fix a dimension, that dimension disappears. It is the same behaviour as
in Fortran.

Personally I think that the Python behaviour is more natural and obvious. As
always, it is a choice of whoever has written the library what will happen with
a slice.
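
A small illustration of the difference (my own example, not from the original
post):

import numpy as np

x = np.array([[0, 1], [2, 3], [4, 5], [6, 7]])
print(x[:, 0].shape)    # (4,)   an integer index drops that dimension
print(x[:, 0:1].shape)  # (4, 1) a slice keeps it
print(x[:, [0]].shape)  # (4, 1) so does a list of indices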

  
 
  
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Difference between array( [1,0,1] ) and array( [ [1,0,1] ] )

2019-06-21 Thread edmondo . giovannozzi
Keep also in mind that numpy is quite different from Matlab.
In Matlab every variable is a matrix of at least 2 dimensions.

This is not the case in numpy (and it is not the case in Fortran either):
every array can have a different number of dimensions. The transposition of an
array with just 1 dimension is not really meaningful.
On the other hand, most of the time it is not needed.
For example, let's have a matrix:
a = np.array([[1,2],[1,1]])
and an array:
b = np.array([0.1,0.2])
You can left multiply or right multiply it with this matrix without any need to
transpose it (as you have to do in Matlab):
a @ b
array([0.5, 0.3])
b @ a
array([0.3, 0.4])
Cheers,


 

 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Difference between array( [1,0,1] ) and array( [ [1,0,1] ] )

2019-06-21 Thread edmondo . giovannozzi
Every array in numpy has a number of dimensions;
"np.array" is a function that can create a numpy array given a list.

When you write
vector_1 = np.array([1,2,1])
you are passing a list of numbers to the function array, which will create a 1D
array.
As you are showing:
vector_1.shape
will return a tuple with the sizes of each dimension of the array, that is:
(3,)
Note the comma that indicates it is a tuple.
Whereas if you write:
vector_2 = np.array([[1,2,3]])
you are passing a list of lists to the function array, which will instruct it to
create a 2D array, even though the size of the first dimension is 1:
vector_2.shape
(1, 3)
It is still a tuple, as you can see.
Try:
vector_3 = np.array([[1,2,3],[4,5,6]])
and you'll see that it returns a 2D array with a shape:
vector_3.shape
(2, 3)
The external list has 2 elements, that is, two sublists each with 3 elements.
The vector_2 case is just when the external list has only 1 element.

I hope it is clearer now.
Cheers,

  

  
  

Il giorno venerdì 21 giugno 2019 08:29:36 UTC+2, Markos ha scritto:
> Hi,
> 
> I'm studying Numpy and I don't understand the difference between
> 
> >>> vector_1 = np.array( [ 1,0,1 ] )
> 
> with 1 bracket and
> 
> >>> vector_2 = np.array( [ [ 1,0,1 ] ] )
> 
> with 2 brackets
> 
> The shape of vector_1 is:
> 
> >>> vector_1.shape
> (3,)
> 
> But the shape of vector_2 is:
> 
> >>> vector_2.shape
> (1, 3)
> 
> The transpose on vector_1 don't work:
> 
> >>> vector_1.T
> array([1, 0, 1])
> 
> But the transpose method in vector_2 works fine:
> 
> >>> vector_2.T
> array([[1],
>     [0],
>     [1]])
> 
> 
> I thought that both vectors would be treated as an matrix of 1 row and 3 
> columns.
> 
> Why this difference?
> 
> Any tip?
> 
> Thank you,
> Markos

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Levenberg-Marquardt non-linear least-squares fitting in Python [follow-on]

2019-03-29 Thread edmondo . giovannozzi



> ltemp = [ydata[i] - ydata[0] for i in range(ll)]
> ytemp = [ltemp[i] * .001 for i in range(ll)]
> ltemp = [xdata[i] - xdata[0] for i in range(ll)]
> xtemp = [ltemp[i] * .001 for i in range(ll)]

Use the vectorization given by numpy:

  ytemp = (ydata - ydata[0]) * 0.001
  xtemp = (xdata - xdata[0]) * 0.001

  fitted = popt[0] - popt[1] * np.exp(-popt[2] * xtemp)

or better

  fitted = func2fit(xtemp, *popt)
  

> #
> #  popt is a list of the three optimized fittine parameters [a, b, c]
> #  we are interested in the value of a.
> #  cov is the 3 x 3 covariance matrix, the standard deviation (error) of the 
> fit is
> #  the square root of the diagonal.
> #
> popt,cov = curve_fit(func2fit, xtemp, ytemp)
> #
> #  Here is what the fitted line looks like for plotting
> #
> fitted = [popt[0] - popt[1] * np.exp(-popt[2] * xtemp[i]) for i in 
> range(ll)]
> #
> #  And now plot the results to check the fit
> #
> fig1, ax1 = plt.subplots()
> plt.title('Normalized Data ' + str(run_num))
> color_dic = {0: "red", 1: "green", 2: "blue", 3: "red", 4: "green", 5: 
> "blue"}
> ax1.plot(xtemp, ytemp, marker = '.', linestyle  = 'none', color = 
> color_dic[run_num])
> ax1.plot(xtemp, fitted, linestyle = '-', color = color_dic[run_num])
> plt.savefig('Normalized ' + str(run_num))
> perr = np.sqrt(np.diag(cov))
> return popt, cov, xdata[0], ydata[0], fitted, perr[0]

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: moving to Python from Java/C++/C

2018-06-25 Thread edmondo . giovannozzi
Il giorno lunedì 25 giugno 2018 12:40:53 UTC+2, itai...@gmail.com ha scritto:
> Hey, 
> I already have quite an experience in programming, and I wish to study Python 
> as well. I need to study it before I continue with my comp. science academic 
> studies. 
> How do you recommend studying it? As mentioned in the headline, I already 
> know Java, C++ and C.
> Thanks!

Well, I have used the tutorial at www.python.org.

Python 3 of course.


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Learning

2017-12-16 Thread edmondo . giovannozzi
Il giorno venerdì 15 dicembre 2017 12:50:08 UTC+1, Varun R ha scritto:
> Hi All,
> 
> I'm new to programming, can anyone guide me, how to start learning python 
> programming language,...plz suggest some books also.
> 
> Thanks all

Personally I learnt python from the tutorial that you can find in:

https://docs.python.org/3/tutorial/index.html

But I should say that I already knew other programming languages. 
By the way, I would give that tutorial a try and ask here if you cannot
understand something.
:-)

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python homework

2017-12-14 Thread edmondo . giovannozzi
Il giorno martedì 12 dicembre 2017 00:30:24 UTC+1, jlad...@itu.edu ha scritto:
> On Thursday, December 7, 2017 at 4:49:52 AM UTC-8, edmondo.g...@gmail.com 
> wrote:
> 
> >   import numpy
> 
> I teach Python to students at varying levels.  As much as I love and use 
> Numpy in my regular work, I try to avoid showing beginning Python students 
> solutions that require third-party packages.  Here are my reasons:
> 
> 1. Not every programming novice needs to understand things at the 
> bits-and-bytes level, but they should learn the inner workings of algorithms. 
>  You won't always have a ready-made algorithm to solve your problem in a 
> library function call, so you should learn to write your own.
> 
> 2. Package maintenance can be its own headache.  Sure, Anaconda can help, but 
> it's a heavyweight distribution.  And not every student is working on a 
> computer where they have the rights to install software.

I understand your points; on the other hand, as we are not his teachers, we can
also suggest other ways to solve his problem.

Note that I haven't posted a complete solution: I just stopped at the same point
he had already reached, and then gave him only hints.

I think there is also another lesson that a student should learn: not to
reinvent the wheel. But it may just be my feeling about this issue.

I also think that whether or not to introduce a third-party package depends on
the student's environment. If he is studying to become an engineer he may have
already been exposed to other proprietary software (the famous one). It could be
useful to show him that most of the same things can be done in Python with just
an almost-standard third-party package.

Cheers,
Edmondo


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python homework

2017-12-07 Thread edmondo . giovannozzi
Il giorno mercoledì 6 dicembre 2017 02:33:52 UTC+1, nick martinez ha scritto:
> I have a question on my homework. My homework is to write a program in which 
> the computer simulates the rolling of a die 50
> times and then prints
> (i). the most frequent side of the die
> (ii). the average die value of all rolls. 
> I wrote the program so it says the most frequent number out of all the rolls 
> for example (12,4,6,14,10,4) and will print out "14" instead of 4 like I need.
> This is what I have so far:
> import random
> 
> def rollDie(number):
> rolls = [0] * 6
> for i in range(0, number):
> roll=int(random.randint(1,6))
> rolls[roll - 1] += 1
> return rolls
> 
> if __name__ == "__main__":
> result = rollDie(50)
> print (result)
> print(max(result))

Another way to get an answer is to use the numpy package. It is freely
available and it is the "de facto" standard for numerical problems.
If you get Python from one of the scientific distributions (Anaconda, Enthought,
etc.) it will be installed automatically.

  import numpy

  Nrolls = 50 # but you can easily have millions of rolls 
  a = numpy.random.randint(1,7,Nrolls)

And then to count the rolls for each number you can use the histogram function 

  frequency , _ = numpy.histogram(a, bins=[1,2,3,4,5,6,7])

You can now use the "argmax" method to find the number that appeared most
frequently, and the "mean" method to calculate the average. Look at the numpy
help pages to see why I used randint(1,7) instead of randint(1,6), and the same
for the bins in the histogram function.

Have a look at: www.scipy.org

Cheers :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: What use is of this 'cast=float ,'?

2017-10-27 Thread edmondo . giovannozzi
Il giorno venerdì 27 ottobre 2017 22:35:45 UTC+2, Robert ha scritto:
> Hi,
> 
> I read below code snippet on line. I am interested in the second of the last 
> line.
> 
>  cast=float ,
> 
> 
> I've tried it in Python. Even simply with 
> 
> float
> 
> 
> it has no error, but what use is it?
> 
> 
> I do see a space before the comma ','. Is it a typo or not?
> 
> 
> Thanks,
> 
> 
> 
> self.freqslider=forms.slider(
>  parent=self.GetWin( ),
>  sizer=freqsizer,
>  value=self.freq,
>  callback= self.setfreq,
>  minimum=−samprate/2,
>  maximum=samprate/2,
>  num_steps=100,
>  style=wx.SL_HORIZONTAL,
>  cast=float ,
>  proportion=1,
> )

cast is the name of a keyword argument of the function slider. It likely means
that the slider value should be returned as a float. Quite likely inside the
function "slider" there will be something like

return cast(...)

which, if you pass float, becomes equivalent to

return float(...)

Of course I don't know that function, so take it as just a likely possibility.

By the way, forms can be an object (and slider one of its methods) or a module
(and slider a function defined in it).
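
Just to illustrate the pattern with a made-up function (this is not the real
forms.slider):

def slider(value, cast=float):
    # whatever callable is passed as `cast` converts the raw slider value
    return cast(value)

print(slider("3.5"))            # 3.5  (float is the default cast)
print(slider("3", cast=int))    # 3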

cheers,
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python for Dummies exaple

2015-10-14 Thread edmondo . giovannozzi
Il giorno mercoledì 14 ottobre 2015 12:04:30 UTC+2, Chris Angelico ha scritto:
> On Wed, Oct 14, 2015 at 8:55 PM, NewsLeecher User wrote:
> > But with this script i get an error:
> > But i have not so many experience to see what the error is.
> > Good someone help me with this ?
> 
> You need to tell us what the error is :)
> 
> But, looking in my crystal ball, I find a hint that you're using
> Python 3 and a book that's teaching an older version of Python.
> Solution: Get a better book.
> 
> ChrisA

And using the same crystal ball, assuming you are using Python 3.x, put
parentheses around the arguments of print, like:

print(countdown)
...
print("Hello, my name is", self.myname)

etc.
Forget for the moment about the ending comma.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Strong typing implementation for Python

2015-10-12 Thread edmondo . giovannozzi
Il giorno lunedì 12 ottobre 2015 10:51:50 UTC+2, John Michael Lafayette ha 
scritto:
> Now that Python has static type checking and support for IDE auto-complete 
> (PEP 484?), I beg you to please use it. In your standard library, in your 
> production code, in everywhere. I cannot type without auto-complete. 
> 
> I know that sounds ridiculous, but I have been coding on a daily basis for 
> the last four years and I cannot recall the last time I actually typed out a 
> variable or function name without auto-complete typing it for me. To me, 
> java.net.DatagramSocket isn't "DatagramSocket". It is "Da" + Ctrl + Space + 
> Enter (auto complete). I literally memorized the number of characters I have 
> to type for auto-complete to guess the variable name and then I only type 
> that many characters. For me, typing without auto-complete is like doing 
> surgery with a kitchen knife. It's messy and error prone and I make lots of 
> mistakes and have to try twice as hard to remember the actual names of 
> variables instead of just whatever they start with. 
> 
> You don't understand because you actually know what all the function names 
> are and you don't have to constantly hover over them in auto-complete and 
> pull up their documentation to figure out how to use them. But I do. And for 
> me, without auto-complete, the learning process goes from actively querying 
> the IDE for one documentation after another to having to minimize the IDE and 
> Google search for each and every function and module that I"m not familiar 
> with. Auto-complete literally cuts the learning time by more than half. 
> 
> So please please please use PEP 484 to document all your standard library 
> functions. Not for static compilation. Not even for catching mistakes caused 
> by bad function input (although I like that). For Christ's sake, do it for 
> the auto-complete. I gave up on JavaScript in favor of TypeScript for a 
> reason god dammit.
> 
> On Oct 11, 2015 3:45 PM, "Matt Wheeler"  wrote:
> On 9 October 2015 at 17:26, John Michael Lafayette
> 
>  wrote:
> 
> > I would like Python to have a strong typing feature that can co-exist with
> 
> > the current dynamic typing system. Currently Python is like this:
> 
> >
> 
> >     var animal = Factory.make("dog")  # okay.
> 
> >     var dog = Factory.make("dog")       # okay.
> 
> >     var cat = Factory.make("dog")        # are you sure?
> 
> >
> 
> > I would like Python to also be able to also do this:
> 
> >
> 
> >     Animal a = Factory.make("dog")    # okay. Dog is Animal.
> 
> >     Dog d = Factory.make("dog")         # okay. Dog is Dog.
> 
> >     Cat c = Factory.make("cat")           # Runtime error. Dog is not Cat.
> 
> 
> 
> Though it's intended for performance optimisation rather than simply
> 
> static typing for static typing's sake, you could probably use Cython
> 
> to achieve what you want...
> 
> 
> 
> ...but then you might start to see the benefits of dynamic typing :)
> 
> 
> 
> 
> 
> --
> 
> Matt Wheeler
> 
> http://funkyh.at

As far as I know, the IPython shell (if I can call it a shell) has a sort of
autocomplete: given an object, if you type a tab after the dot it will give you
the list of all the methods that it can retrieve from the dir() function.

Geany and Spyder also have a sort of autocomplete.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Fast 12 bit to 16 bit sample conversion?

2015-07-20 Thread edmondo . giovannozzi
Il giorno lunedì 20 luglio 2015 15:10:22 UTC+2, Peter Heitzer ha scritto:
 I am currently writing a python script to extract samples from old Roland 12 
 bit sample
 disks and save them as 16 bit wav files.
 
 The samples are layouted as follows
 
 0 [S0 bit 11..4] [S0 bit 3..0|S1 bit 3..0] [S1 bit 11..4]
 3 [S2 bit 11..4] [S2 bit 3..0|S3 bit 3..0] [S3 bit 11..4]
 
 In other words 
 sample0=(data[0]<<4)|(data[1]>>4)
 sample1=(data[2]<<4)|(data[1] & 0x0f)
 
 I use this code for the conversion (using the struct module)
 
 import struct
 from array import array
 
 def getWaveData(diskBuffer):
   offset=0
   words=array('H')
   for i in range(len(diskBuffer)/3):
 h0=struct.unpack_from('h',diskBuffer,offset)
 h1=struct.unpack_from('h',diskBuffer,offset+1)
 words.append(h0[0] & 0xfff0)
 words.append(h1[0] & 0xfff0)
 offset+=3
   return words
 
 I unpack the samples in an array of unsigned shorts for I later can use the 
 byteswap() method
 if the code is running on a big endian machine.
 
 What options using pure python do I have to make the conversion faster?
 I thought of unpacking more bytes at once e.g. using a format 'hxhxhxhx' for 
 4 even samples
 and 'xhxhxhxh' for 4 odd samples vice versa.
 Can I map the '& 0xfff0' to the whole array?

I'd try to read the binary data with numpy.fromfile, reshape the array into an
[n, 3] matrix, and then operate on the columns to get what you want.
:-)
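
Something along these lines (just a sketch: the file name is made up and the bit
layout is taken from your description above):

import numpy as np

raw = np.fromfile("roland_disk.dat", dtype=np.uint8)
raw = raw[:raw.size // 3 * 3].reshape(-1, 3).astype(np.uint16)

# even samples: byte 0 = bits 11..4, high nibble of byte 1 = bits 3..0
even = (raw[:, 0] << 8) | (raw[:, 1] & 0xF0)
# odd samples: byte 2 = bits 11..4, low nibble of byte 1 = bits 3..0
odd = (raw[:, 2] << 8) | ((raw[:, 1] & 0x0F) << 4)

# interleave them back into one array of left-aligned 16-bit samples
samples = np.empty(even.size + odd.size, dtype=np.uint16)
samples[0::2] = even
samples[1::2] = odd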
 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python and fortran Interface suggestion

2015-04-22 Thread edmondo . giovannozzi
Il giorno domenica 19 aprile 2015 22:26:58 UTC+2, Dave Angel ha scritto:
 On 04/19/2015 11:56 AM, pauld11718 wrote:
  I shall provide with further details
 
  Its about Mathematical modelling of a system. Fortran does one part and 
  python does the other part (which I am suppose to provide).
  For a time interval tn -- t_n+1, fortran code generates some values, for 
  which my python code accepts it as an input. It integrates some data for 
  the same time step tn -- tn+1 and fortran computes based on this for the 
  next time step t_n+1 -- t_n+2..and this is how loop continues.
 
  Its the fortran code calling my Python executable at all the time interval.
 
 Then you'd better find out how it's calling your executable.  Calling it 
 is very different than starting it.  The whole import x, y,z is done 
 only upon initial startup of the python code.  You can then call the 
 Python code as many times as you like without incurring that overhead again.
 
 Naturally, if the guy who designed the Fortran code didn't think the 
 same way, and is unavailable for further tweaking, you've got a problem.
 
 
  So,
  1. Is it about rebuilding every thing using cx_freeze kind of stuff in 
  windows?
 
 Don't worry about how you get installed until after you figure out how 
 you're going to get those calls and data back and forth between the 
 Fortran code and your own.
 
 
  2. As interfacing between fortran/python is via .dat file, so on every 
  time-step python executable is called it shall open()/close() .dat file. 
  This along with all those
  from  import *'s
  are unnecessary.
 
 Have you got specs on the whole dat file thing?  How will you know it's 
 time to examine the file again?  As long as you get notified through 
 some other mechanism, there's no problem in both programs having an open 
 handle on the shared file.
 
 For that matter, the notification can be implicit in the file as well.
 
 But is there a designer behind this, or is the program intending to deal 
 with something else and you're just trying to sneak in the back door?
 
 
 
  3. I dont have access to fortran code. I shall provide the python 
  executable only, which will take input from .dat file and write to another 
  .dat file. So, what is your suggestion regarding socket/pipe and python 
  installation on RAM? I have no idea with those.
 
 
 Not much point in opening a pipe if the other end isn't going to be 
 opened by the Fortran code.
 
  Isn't there a better way to handle such issues?
 
 
 Sure, there are lots of ways.  But if the Fortran code is a closed book, 
 you'll have to find out how it was designed, and do all the adapting at 
 your end.
 
 If it becomes open again, then you have the choice between having one of 
 your languages call functions in the other (ie. single process), having 
 some mechanism (like queues, or sockets) between two separately started 
 processes, and having one program launch the other repeatedly.  That 
 last is probably the least performant choice.
 
 If you google python Fortran you'll get lots of hits.  You could start 
 reading to see what your choices are.  But if the Fortran designer has 
 left the room, you'll be stuck with whatever he put together.
 
 And chances are very high that you'll need to develop, and certainly 
 test, some parts of the project on the Windows machine, and particularly 
 with the Windows C/Fortran compilers.
 
 
 -- 
 DaveA

Generally I have used f2py to link Fortran subroutines to Python; Python then
calls these Fortran subroutines. Who is calculating what is not really
important, except that Python is the driver. You can of course also call a
Python function from Fortran (again, look at the documentation of f2py).
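
As a purely hypothetical sketch of that workflow (module and subroutine names
are invented): given a file mysub.f90 containing a subroutine mysub(x) that
declares x as intent(inout), built with "f2py -c -m fmodule mysub.f90", the
Python side is simply:

import numpy as np
import fmodule                       # name chosen on the f2py command line

x = np.zeros(100, dtype=np.float64)
fmodule.mysub(x)                     # the Fortran code updates x in place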


When you say that you don't have access to the Fortran program, what does that
mean?
Is somebody else developing it, or is it an already written, complete Fortran
program?
If the latter, I would always use Python as a driver that prepares the input
files for the Fortran program, then reads back the data, does some processing,
and prepares the data for the following step.

But if somebody else is writing the Fortran part, or you have access to the
source, ask him to give you some subroutines that you can call from Python.
  
 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: New to Python - block grouping (spaces)

2015-04-20 Thread edmondo . giovannozzi

I work in research and mainly use Fortran and Python.

I haven't had any problem with the python indentation. I like it, I find it 
simple and easy.

Well, sometimes I may forget to close an IF block with an ENDIF in Fortran, so
used am I to ending a block by just decreasing the indentation; not a big
problem actually (it is always spotted by the compiler).


 
-- 
https://mail.python.org/mailman/listinfo/python-list