Hey Steven,

Thank you for taking the time to comment thoughtfully on my response.

For the first part:

I think there is a way to map x ∈ [1, +inf) to y ∈ (0, 1] by setting y =
1/x.

I hadn't paid attention to the point made by @Richard Damon
<rich...@damon-family.org> that the shape of the distribution won't be
preserved: not only does the resulting distribution live on (0, 1] instead
of [0, 1), but the Pareto shape itself changes, so the proposed shifted,
scaled, and truncated Pareto makes more sense.

For the second part:

But I think there is another issue to consider. For example, if we want to
sort with:

random.choice(['a', 'b', 'c'], random=lambda: random.paretovariate(1.75))

we would need a mapping between the elements of the list (here 'a', 'b',
'c') and [0, 1], plus the reverse mapping. Having such a mapping implies a
notion of order and distance, a "measurement", between the elements, and it
is not trivial to always have a notion of "order" or "distance" between the
elements we manipulate, but it could be fun to have this possibility.


To answer the following questions:

   - What are you sorting ?
   - What is this reverser mapping ?
   - Why are you switching from probability concepts to metric theory
   halfway through the paragraph?
   - Maybe it would help if you could explain what you expected this to do.
   What should the odds of a, b, and c be in your function?

To the best of my understanding, we define a probability space by
(Ω, T, P), where T is a σ-algebra ("tribu" in French), and X, the random
variable, as the measurable function that goes from (Ω, T, P) to the
measurable space (E, Eta).

The space Ω could be anything; for example it could be ['a','b','c']. But
we need a function that knows how to go from a subset s of Ω (where s ∈ T)
to a subset s2 of E (where s2 ∈ Eta). And actually, by measuring P('a') we
are measuring Px(b) = P(X^-1(b)), where b = X('a') ∈ E.

As an example, for the Bernoulli law (flipping a coin, say):
Ω = {ω1, ω2};
E = {0, 1};
T (the σ-algebra) = P(Ω);
P(ω1) = p, P(ω2) = 1 − p, where p ∈ (0, 1);
X(ω1) = 1, X(ω2) = 0;
Px(1) = P(X^-1(1)) = P(ω1) = p and Px(0) = P(X^-1(0)) = P(ω2) = 1 − p.
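The Bernoulli example above can be sketched in a few lines of Python. The
dict-based encoding of Ω, X, and P is just an illustrative assumption:

```python
# Bernoulli example: Omega = {w1, w2}, E = {0, 1},
# X(w1) = 1, X(w2) = 0, and Px is the pushforward of P by X.
p = 0.3  # any p in (0, 1)

P = {'w1': p, 'w2': 1 - p}   # probability measure on Omega
X = {'w1': 1, 'w2': 0}       # random variable Omega -> E

def X_inverse(b):
    # Preimage X^-1({b}): all outcomes in Omega mapped to b.
    return [w for w, e in X.items() if e == b]

def Px(b):
    # Pushforward measure: Px(b) = P(X^-1(b)).
    return sum(P[w] for w in X_inverse(b))
```

Here Px(1) comes out as p and Px(0) as 1 − p, matching the formulas above.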

So the mapping function is X and the reverse mapping function is X^-1.

To summarize the idea: the mapping X allows us to go from abstract elements
('a', 'b', 'c') that we cannot measure directly (P('a')) to elements we can
measure (Px(1)), and by having X we will be able to get the odds of a, b,
and c. So my idea was to have the user define this function X as a helper
that knows how to map subsets of the starting space (for example, subsets
of ASCII) to subsets of a measurable space (the easiest one would be
[0, 1], where the measure would be the identity).
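As a minimal sketch of this idea: `choice_with_mapping` and this particular
`X_inverse` are hypothetical names I'm making up to illustrate the shape of
the API, not a concrete proposal for the `random` module:

```python
import random

def choice_with_mapping(X_inverse, random_=random.random):
    # Hypothetical helper: draw u in [0, 1) from any nullary
    # random-style callable, then let the user-supplied reverse
    # mapping X^-1 turn u back into an element of the population.
    return X_inverse(random_())

# Example: map [0, 1) onto ['a', 'b', 'c'] by splitting it into
# three equal sub-intervals (this X_inverse is just one possible
# choice of mapping).
def X_inverse(u):
    return ['a', 'b', 'c'][min(int(u * 3), 2)]
```

With this shape, any [0, 1)-valued distribution (such as the truncated
Pareto above) could be plugged in as `random_`, and the user's `X_inverse`
carries the "order/distance" knowledge about the elements.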

This is why I brought in some metric concepts.

I'm sorry, I couldn't follow the last part:

 It sounds like a good idea on the face of it. The only obvious problems
are the limitations of only having a double rather than as many bits as
needed, and coming from a PRNG with a limited period compared to 2^N for
large N, both of which are if anything even bigger problems for shuffle,
which already allows a random argument. So, why not choice and sample too?

I don't know if we are better off without it, but it does seem too complex
for practical daily use!

Best regards,

-- 
SENHAJI RHAZI Hamza




On Sun, Aug 18, 2019 at 11:16 PM, Andrew Barnert <abarn...@yahoo.com> wrote:

> On Aug 18, 2019, at 06:20, Senhaji Rhazi hamza <
> hamza.senhajirh...@gmail.com> wrote:
>
> Hey Steven,
>
> I think there is a way to map x <- [1; + inf] to y <- [0;1] by putting y =
> 1/x
>
>
> Well, that gives you a distribution from (0, 1] instead of [0. 1), which
> is technically not legal as a substitute for random, even though it’ll
> actually only matter 1 in about 2^42 runs. You could pass this to shuffle
> today and probably get away with it, so asking for the same in choice, etc.
> isn’t too outrageous. But it’s still hard to judge whether it’s a good
> suggestion, if we don’t actually know what you’re trying to accomplish.
>
> First, there’s no way random could even know that you needed anything
> transformed. The distribution functions don’t come with metadata describing
> their support, and, even if they did, a new lambda that you pass in
> wouldn’t. As far as it could possibly tell, you passed in something that
> claims to be a nullary function that returns values in [0, 1), and it is a
> nullary function, and that’s all it knows.
>
> More importantly, even if that weren’t a problem, 1/x is hardly the one
> and only one obvious guess at how to turn Pareto into something with the
> appropriate support. In fact, I suspect more people would probably want a
> shifted, scaled, and truncated Pareto if they asked for Pareto. (Much as
> people often talk about things like the mean of a Cauchy distribution,
> which doesn’t exist but can be approximated very well with the mean of a
> Cauchy distribution truncated to some very large maximum.)
>
> But i think there is another issue to consider :
> for example if we want to sort :
>
> random.choice(['a', 'b', 'c'], random=lambda: random.paretovariate(1.75)),
> we should have a mapping
> between the elements of the list (here :a,b,c) to [0, 1] and the reverser
> mapping, having this mapping underline
> that we have a notion of order and distance, "measurement" between the
> elements, and it is not trivial to always
> have a notion of "order" or "distance" between the elements we manipulate,
> but it could be fun
> to have this possibility.
>
>
> I’m not sure what all of this means. What are you sorting? Why are you
> passing paretovariate instead of 1/ that after you just talked about that?
> What is this reverser mapping? Why are you switching from probability
> concepts to metric theory halfway through the paragraph?
>
> Maybe it would help if you could explain what you expected this to do.
> What should the odds of a, b, and c be in your function?
>
> If we understood what you wanted, people could probably (a) explain the
> best way to do it today, (b) come up with a specific proposal for a way to
> make it easier, and (c) evaluate whether that proposal is a good idea. Or,
> alternatively, if you have a different example, one where you’d obviously
> want to use some existing [0, 1) distribution with choice or sample and
> don’t need to go through all this to explain it, that would help.
>
> The thing is, your proposal—to add a random argument to choice, choices,
> and sample—makes sense, but if your only example doesn’t make sense, it’s
> very hard to judge the proposal.
>
> It sounds like a good idea on the face of it. The only obvious problems
> are the limitations of only having a double rather than as many bits as
> needed, and coming from a PRNG with a limited period compared to 2^N for
> large N, both of which are if anything even bigger problems for shuffle,
> which already allows a random argument. So, why not choice and sample too?
>
> But if the only thing anyone wants from it is something that doesn’t make
> sense, then maybe we’re better off without it?
>
>
_______________________________________________
Python-ideas mailing list -- python-ideas@python.org
To unsubscribe send an email to python-ideas-le...@python.org
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Message archived at 
https://mail.python.org/archives/list/python-ideas@python.org/message/TX2EGFZGMRWSSKMTXDDPWG7463AWUTTP/
Code of Conduct: http://python.org/psf/codeofconduct/
