It seems that special characters (umlauts etc.) get mangled when the
metadata is uploaded to TEN:
I was worried about this too, but it might just be getting mangled in the
log as well. Once I realised it recognised "Züri West" as well as
"Zueri West" or "Zuri West", I stopped worrying about it.
{
  "song_name": "Ozean und Brandung",
  "artist_id": "ARVQ0YD1187B9BA5B4",
  "artist_name": "Einstürzende Neubauten",
  ..
  "request": {
    "item_id": "00b843ac0e6b4b3daa11baca795019ff",
    "release": "Perpetuum Mobile",
    "song_name": "Ozean und Brandung",
    "artist_name": "Einstürzende Neubauten",
Where did you get that output from? It seems to be using two different
encodings. But as I said: if the first is the result of the analysis, and
the second is the request as it was sent, then it was still successful.
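As a sketch of how one log line can end up showing two different encodings (my own illustration, not taken from the actual log): when UTF-8 bytes are decoded as Latin-1, each umlaut turns into a two-character sequence.

```python
# Hypothetical example: the same string, once correct and once
# double-decoded the wrong way.
s = "Einstürzende Neubauten"

utf8_bytes = s.encode("utf-8")          # the bytes actually on the wire
mojibake = utf8_bytes.decode("latin-1") # misread as Latin-1 by a logger

print(mojibake)  # EinstÃ¼rzende Neubauten
```

So a log that prints both the analysed result and the raw request can easily show "two encodings" even though only one of the two display paths is wrong.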
"request": {
  "song_name": "Les Orgues De Staline",
  "artist_name": "Dernière Volonté",
  "track_number": 10,
  "release": "Obeir Et Mourir",
Do you know whether this failure is due to the encoding? I mean: it
_could_ all be a problem at the presentation layer only, with the
underlying data still intact. But yeah, I'll probably have to review this.
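One quick way to test the presentation-layer theory (a sketch with a hypothetical mangled string, not the actual log output): if the bytes underneath are intact, the mojibake round-trips back to the original text.

```python
# Hypothetical: a string as it appears mangled in a log viewer.
logged = "EinstÃ¼rzende Neubauten"

# Undo the wrong display decoding: back to bytes via Latin-1,
# then decode those bytes as the UTF-8 they presumably were.
recovered = logged.encode("latin-1").decode("utf-8")

print(recovered)  # Einstürzende Neubauten
```

If that round-trip works on the real log lines, the data sent to TEN was fine and only the log display is at fault; if it raises a UnicodeDecodeError, the string really was corrupted before it was sent.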
What platform is your LMS running on?
--
Michael
_______________________________________________
plugins mailing list
[email protected]
http://lists.slimdevices.com/mailman/listinfo/plugins