> Hi Jon, Is it ok to ask a question about this? No problem if you don't
> have time to answer.
Of course :)

> Say I am on a beach and there are a number of ice cream stands. I create
> an agent whose job it is to take my position and output the direction to
> the nearest ice cream stand. If I am equidistant between two stands, won't
> I see large changes in the output based on small changes in the input? For
> instance, if I take a tiny step to the left I am pointed to continue left,
> and if I take a tiny step to the right I am pointed to continue right?

Well, in this scenario the large change in the output is a side effect of "making" the decision; it is not a property of the function used in the decision-making process. The difference is like getting the "output" of a classifier versus the "prediction" of a classifier. The output of the classifier is a real number between -1 and 1, but to get the prediction you take the sign of the output, which is either -1, 0 or 1. The scenario here is more like getting the prediction of your agent than observing its output, IMO.

I think the problem would go away if you consider the confidence of your agent as well. The confidence of your agent should not change drastically if you take an infinitesimal step in either direction, and at the equidistant point it should be at its minimum (which would be 0 in the case of a classifier). If you take a tiny step to the right of the equidistant point and suddenly your agent becomes 100% sure that you should go right, I would say that your agent is not stable.

> In more mathematical terms, I think all holomorphic functions are
> continuous?

Have you considered functions with branch cuts, like the square root or the complex logarithm?

> Whereas in this situation wouldn't an agent want a discontinuous map made
> of several "attraction basins"? Would you see a similar issue in the
> "travelling salesman problem", where if you moved one of the cities a
> little you might see a radical change in the shortest overall route?
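The output-versus-prediction distinction above can be sketched with a toy one-dimensional version of the beach (a hypothetical illustration, not anyone's actual agent): the raw score is continuous in the position, only its sign jumps, and the confidence vanishes exactly where the sign flips.

```python
# Hypothetical 1-D "ice cream" agent: two stands at x = -1 and x = +1.
# The raw output (difference of distances) is continuous in the position;
# only the *prediction* (its sign) jumps at the equidistant point x = 0.

def output(x):
    """Continuous score: positive means the right-hand stand is closer."""
    return abs(x + 1.0) - abs(x - 1.0)

def prediction(x):
    """Discrete decision: -1 go left, +1 go right, 0 undecided."""
    s = output(x)
    return (s > 0) - (s < 0)

def confidence(x):
    """Magnitude of the score; it vanishes at the equidistant point."""
    return abs(output(x))

eps = 1e-6
print(prediction(-eps), prediction(eps))   # the decision flips: -1 1
print(confidence(-eps), confidence(eps))   # but the confidence stays ~0
```

So the "large change" lives entirely in `prediction`; `output` and `confidence` move by an amount proportional to the step taken.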
I have not taken this approach, but I will give you the best answer that I have, which is fueled by Wikipedia articles, so don't take it too seriously! :p

If I want to analyze the stability of an agent from the point of view of "attraction basins", I would say that an intelligent agent would partition the input space into three regions: the stable region (the Fatou set), the chaotic region (the Julia set) and the essentially singular region (the Baker set). I am using the terminology of "complex dynamics <https://en.wikipedia.org/wiki/Complex_dynamics>" here, in case you're wondering.

So, if I suppose that my agent is a classifier, the Fatou set would be the regions of the input where the classifier gives the correct label and is robust. The margin of the decision boundary would be the Julia set; even though the output of the classifier is not stable in this region, that does not mean that it has adversarial examples there. The output SHOULD be unstable in this region. Then there is the Baker set, in which an essential singularity exists. I'm not sure, but I think in this region the classifier would output all the possible labels infinitely many times, due to Picard's theorem <https://en.wikipedia.org/wiki/Picard_theorem>:

*Great Picard's Theorem:* If an analytic function *f* has an essential singularity <https://en.wikipedia.org/wiki/Essential_singularity> at a point *w*, then on any punctured neighborhood <https://en.wikipedia.org/wiki/Punctured_neighborhood> of *w*, *f*(*z*) takes on all possible complex values, with at most a single exception, infinitely often.

So in the case of TSP, I would say that it depends on whether the graph is in the Julia or the Baker set of the agent, but I have already stretched my knowledge dangerously far. I will follow the advice of Linas and stop giving you my baseless opinion!
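The Fatou/Julia intuition above can be probed numerically with the standard quadratic family from complex dynamics (a toy example borrowed from the linked Wikipedia material, not a model of the classifier itself): points deep inside an attraction basin are robust to perturbation, and sensitivity appears only near the Julia set, the boundary between basins.

```python
# Toy probe of the Fatou/Julia picture for the quadratic map
# f(z) = z*z + c with c = -1 (which has a superattracting 2-cycle {0, -1}).
# Points deep inside a basin keep their long-run behaviour under a small
# perturbation; only near the Julia set does the "decision" become fragile.

def escapes(z, c=-1.0, max_iter=200, radius=2.0):
    """True if the orbit of z under z -> z*z + c leaves |z| <= radius."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return True
        z = z * z + c
    return False

# Deep inside the filled Julia set: the verdict is robust to perturbation.
print(escapes(0.0), escapes(0.001))      # False False
# Far outside: equally robust, both orbits escape.
print(escapes(3.0), escapes(3.001))      # True True
```

In the classifier analogy, `escapes` plays the role of the prediction: it is locally constant on the Fatou set and changes only across the Julia-set boundary.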
:D

On Tue, Jun 15, 2021 at 3:30 AM Linas Vepstas <[email protected]> wrote:

> Hi Ramin,
>
> On Sun, Jun 13, 2021 at 5:27 PM Ramin Barati <[email protected]> wrote:
>
>> Right now I am trying to provide for myself a stable job and to set my
>> foot on firm ground financially. I think that I am living that part of a
>> man's life in which one needs to bear the fruits of his first endeavors.
>> The geopolitical situation in the Middle East, and especially Iran, is of
>> no help though. Nevertheless, I am always interested in the discussions
>> on the mailing list and try to follow them as much as possible.
>
> Food, a place to live, income, savings are vital. Spread the word. Talk to
> people with political views opposite of your own. Befriend them, even. Talk
> them over to your side. Gently; don't get shot. Geopolitics should not
> prevent you from being a good and active citizen.
>
>> On another note, I have also been reading about quantum probability
>> recently. While the subject is certainly out of my reach right now, I
>> think that I have found something interesting that I would like to share
>> with you and Linas, and ask if you see any potential there. Before that I
>> would like to give my thanks to Linas for introducing the reading
>> materials on these subjects, and to tell you that I will surely look them
>> up. On the subject of Riemann surfaces, I had a hunch that the subject is
>> important, but I lack the math to read the literature. I figured that I
>> need a better understanding of vector fields and geometric algebra, and I
>> am reading a book called "Geometric Algebra for Computer Science". I
>> would be glad if you could suggest an introductory book on the subject of
>> Riemann surfaces itself.
>
> The more you can read, the better. I would normally recommend "Compact
> Riemann Surfaces" by Jurgen Jost. It's a Springer textbook. If you look
> hard enough, you can find a PDF online.
> My only concern is that it might be a bit too advanced for you. Try it
> anyway, see how far you can get. Skip the proofs on first reading.
>
>> The idea is that the output of a classifier is a quantum probability
>> distribution. So a classifier is something like a Dirichlet process, but
>> for quantum probability distributions. The output of a k-class classifier
>> is a pure complex antisymmetric k-by-k matrix, and using the matrix
>> exponential we can map that matrix to a matrix in SU(k).
>
> Yuck. Stop right there. I know that you don't know the theory of Lie
> algebras, so down this path you will only find trouble and flawed thinking.
>
> In grad school, I had a professor, P.G.O. Freund, and one day, instead of
> lecturing, he went on a tirade. I did not like it much; it felt like a
> waste of my time. It took me 2-3 decades to understand what he was saying.
> I hope it won't take you that long.
>
> He drew three symbols on the blackboard: the delta, the nabla and the
> D'Alembertian (a square). He said: "The people who use a nabla are like
> that symbol - precariously balanced on its tip, using a tiny amount of
> knowledge at their base, to reach up into the clouds to explain
> everything. You don't want to be like that. Stay away from people like
> that. They are no good. The people who use the delta have a broad base of
> knowledge and a sharp pointy tip: they can use their extensive base
> knowledge to make precise, pointed observations. You want to be like that.
> The people who use the D'Alembertian are the best: not only do they have a
> proper foundation on which to build, but they are able to accomplish many
> things with their knowledge."
>
> See what the problem is? I thought to myself, "I came to class to hear
> about this? What a waste of time!" -- but he was right. It took me a few
> decades to develop a broad base of knowledge. Alas, I am now old, as I
> misspent my youth. If you want to be good at stuff, read widely.
> But, more importantly, establish a firm foundation. Study the basics. Be
> careful about getting tangled up in fancy-pants theories before you first
> have complete mastery of the basics. Once you know the basics, the fancy
> stuff will come easily and quickly, without a struggle.
>
> I listened to another famous mathematician proclaim that research should
> be like paddling a canoe: mostly a leisurely paddle downstream, with
> occasional furious paddles upstream. (Maybe this was Raoul Bott? I don't
> recall.)
>
> -- Linas
>
> --
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHmauB6Gpt77Fo5edN-BoAxzHXXq%2Bxy25SUj%3D4YhrG1wz56r2g%40mail.gmail.com.
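P.S. A footnote on the matrix-exponential remark quoted in the thread: the one piece that can be checked without any Lie theory is the real special case. This is a sketch assuming NumPy and SciPy are available; the exponential of a real antisymmetric matrix lands in SO(k), which sits inside SU(k).

```python
# Numerical sanity check: the matrix exponential of a real antisymmetric
# (skew-symmetric) matrix is orthogonal with determinant 1, i.e. it lies
# in SO(k), a subgroup of SU(k). det(expm(a)) = exp(trace(a)) = exp(0) = 1.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
k = 3
m = rng.standard_normal((k, k))
a = m - m.T                      # antisymmetric: a.T == -a, zero diagonal

u = expm(a)                      # matrix exponential

print(np.allclose(u @ u.T, np.eye(k)))     # u is orthogonal
print(np.isclose(np.linalg.det(u), 1.0))   # with determinant 1
```

Whether the full "pure complex antisymmetric" construction really lands in SU(k) is exactly the kind of question Linas's Lie-algebra warning applies to, so I will leave it at this special case.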
