On Wednesday, January 17, 2018 at 1:40:35 AM UTC-6, Ralf Stephan wrote:
>
> On Tuesday, January 16, 2018 at 3:23:16 AM UTC+1, saad khalid wrote:
>>
>> Hello everyone:
>>
>> So, I was just messing around with the assume command, and did:
>>
>> var('i')
>> assume(abs(x) < 1)
>> f(x) = sum(x^i, i, 0, oo )
>>
>> This is just 1/(1-x). I wanted to see what would happen when I tried 
>> using x > 1, and it still evaluates properly
>>
>
> I cannot confirm that, I get:
> sage: forget()
> sage: assume(x-1>0)
> sage: f(x) = sum(x^i, i, 0, oo )
> ...
> ValueError: Sum is divergent.
>  
>
>

Sorry, what I meant was: even when I use assume(abs(x) < 1), I can still plug 
a value of x greater than 1 into the function. For example, f(1.5) runs 
fine, even with assume(abs(x) < 1) in effect.

Here is a cocalc sheet with the code:
https://cocalc.com/share/a112503d-d6e0-4d8c-8775-5b48c369255d/Saad%20Main/Personal/QM%20HW%201.sagews?viewer=share


> I cannot confirm that either for f(x) = 1/(1-x) because I get
> sage: integrate(1/(1-x),x,0,2)
> ...
> ValueError: Integral is divergent.
> regardless of assumptions.
>

Yes, exactly! This is what was confusing me. When I run my original code 
with assume(abs(x) < 1) and then define my function, Sage automatically 
defines it as f(x) = -1/(x-1). Originally I was just curious how it knew to 
make this substitution (though I assumed it was some built-in 
simplification). But I didn't understand why it let me plug in values of x 
greater than 1, when the assumption restricts the inputs to |x| < 1. I 
don't think this is a bad thing; I just found it interesting and was 
curious about it.

But then I tried integrating across the divergence at x = 1, from 0 to 2. 
I expected it to tell me the integral was divergent, but the result was 
-I*pi. This intrigued me even more, because I assume some sort of analytic 
continuation is being substituted in, but I wasn't sure how, since the 
function I originally defined as an infinite sum had clearly been replaced 
by f(x) = -1/(x-1), which has a divergent integral over these limits. Why 
is it that integrating 1/(1-x) over these limits is divergent, but 
integrating the function I defined originally is not? Why does the 
substitution happen in the second case and not the first, and how is this 
substitution happening?
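For what it's worth, the -I*pi looks like what you get by naively applying the fundamental theorem of calculus across the singularity, using a complex branch of the logarithm. A SymPy sketch (not Sage/Maxima, so the branch sign may differ; SymPy's principal branch gives +I*pi where Maxima apparently returns -I*pi):

```python
# The antiderivative of -1/(x - 1) is -log(x - 1). Evaluating it
# blindly at the endpoints 0 and 2 crosses the pole at x = 1 and
# forces log of a negative number, so a complex branch of log
# sneaks in: on the principal branch log(-1) = I*pi.
from sympy import symbols, log, I, pi

x = symbols('x')
F = -log(x - 1)  # antiderivative of -1/(x - 1) = 1/(1 - x)

# F(2) - F(0) = -log(1) + log(-1) = I*pi on the principal branch.
naive = F.subs(x, 2) - F.subs(x, 0)
print(naive)  # I*pi
```

So the finite answer is not a real (Riemann) integral at all; it is one branch-dependent complex value, which would explain why changing how the integrand is presented to the CAS changes whether you get an error or a finite result.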

Thanks!
