https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=32401
Marcel de Rooy <[email protected]> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |[email protected]

--- Comment #9 from Marcel de Rooy <[email protected]> ---
(In reply to David Cook from comment #0)
> If you try to search pending orders in acquisitions using Arabic like خمسة
> it will silently fail. Nothing changes on the screen. No "Processing" or
> obvious Javascript errors.
>
> However, if you turn on "Pause on caught exceptions", you'll see the
> following:
>
> TypeError: Failed to execute 'setRequestHeader' on 'XMLHttpRequest': String
> contains non ISO-8859-1 code point.

Just a dumb question, but since Koha is multilingual, filtering on UTF-8
characters does not really sound like asking too much? Why don't we use
encodeURIComponent on the header string and call url_decode_utf8 on the
receiving side? This could become a convention for x-koha headers: check for
percent encoding, evaluate the result of decoding, and be done with it quite
early in the process, at a general level, when processing (PSGI) headers.

--
You are receiving this mail because:
You are the assignee for the bug.
You are watching all bug changes.
_______________________________________________
Koha-bugs mailing list
[email protected]
https://lists.koha-community.org/cgi-bin/mailman/listinfo/koha-bugs
website : http://www.koha-community.org/
git : http://git.koha-community.org/
bugs : http://bugs.koha-community.org/
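Marcel's encodeURIComponent suggestion can be sketched as below. This is only an illustration of the idea, not the actual Koha patch: the "x-koha-query" header name and the pairing with Perl's URI::Escape on the PSGI side are assumptions. The key point is that percent-encoding turns the UTF-8 value into pure ASCII, which is always valid ISO-8859-1, so setRequestHeader no longer throws.

```javascript
// Sketch: make a UTF-8 header value safe for XMLHttpRequest.setRequestHeader(),
// which rejects strings containing non ISO-8859-1 code points.
const raw = "خمسة"; // the Arabic search term from the bug report

// encodeURIComponent percent-encodes the UTF-8 bytes, producing pure ASCII.
const encoded = encodeURIComponent(raw);
console.log(encoded); // prints "%D8%AE%D9%85%D8%B3%D8%A9"

// The client would then send (header name is a hypothetical example):
//   xhr.setRequestHeader("x-koha-query", encoded);
//
// The server (Perl/PSGI) would reverse it, e.g. with
// URI::Escape::uri_unescape() followed by Encode::decode("UTF-8", ...).
// In JavaScript the round trip looks like:
const decoded = decodeURIComponent(encoded);
console.log(decoded === raw); // prints "true"
```

The convention Marcel proposes would apply this decoding step once, centrally, while PSGI headers are processed, rather than in each controller that reads an x-koha header.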
