I have the following test code:
var test = "á'bé"; // Latin-1 bytes: E1, 27, 62, E9
print(test.charCodeAt(0));
print(test.charCodeAt(1));
print(test.charCodeAt(2));
print(test.charCodeAt(3));

test = "\u00e1'b\u00e9";
print(test.charCodeAt(0));
print(test.charCodeAt(1));
print(test.charCodeAt(2));
print(test.charCodeAt(3));

In the first case the result is:

65533 39 98 65533

and in the second case:

225 39 98 233

What I expected is that, because "á" still fits in a single byte (E1 in Latin-1 / extended ASCII), charCodeAt would return that character code for it, not the value 65533. Is this a misuse or a misunderstanding on my part?
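Out of curiosity I also checked what 65533 corresponds to. This is just a quick sketch, assuming the script runs in the d8 shell (where print is available, as above):

// 65533 is 0xFFFD, the Unicode REPLACEMENT CHARACTER that UTF-8
// decoders substitute for byte sequences they cannot decode.
print((65533).toString(16));    // prints "fffd"
print("\uFFFD".charCodeAt(0));  // prints 65533
// The Latin-1 bytes E1 and E9 are not valid stand-alone UTF-8 sequences,
// which would explain why both accented characters come back as 65533
// when the source file is saved as Latin-1 but decoded as UTF-8.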
