Smalyshev added a comment.

> I am looking for a use case we are trying to solve.
The use case is users with a SPARQL query tool that only does POST. Unfortunately, such tools exist, and in non-negligible numbers.

> At first glance, it seems like this means mangling all POST to GET would
> result in an incompatibility with the SPARQL protocol.

@chasemp I think you're missing an important point here - the lack of SPARQL Update support on the query endpoint is **intentional**. That's the whole point of the exercise - how we allow sending POST without allowing SPARQL Update. We do not want to be compatible with SPARQL Update on the query endpoint - in fact, we want to explicitly forbid it, since we don't want the whole internet to mess with our database (there's the wikidata.org site for that ;)

That's why we do not want to allow anybody from outside to POST to Blazegraph. However, we do want to allow tools that use POST to do SPARQL Query to do so. Unfortunately, distinguishing SPARQL Query from SPARQL Update and other update requests may be non-trivial to do from something like nginx, so it is much safer and easier to never send POST to Blazegraph at all, thus ensuring we never produce an update.

We could, of course, take the stance that since REST dictates retrieval queries should go through GET, we only support GET. However, this stance sounds impractical and user-unfriendly to me, as most users do not care about the purity of our REST track record (in fact, most of them have only the vaguest idea of REST and its requirements about POST/GET) and just want their SPARQL tool (which they probably use with a dozen other SPARQL endpoints by now) to work with the WDQS endpoint.

Answering @BBlack's question, we do not know why these tools only use POST, but more importantly, it's completely beside the point **why** they do it - whatever the reasons are, they do it, and we're not going to change that. So we can either support them or refuse to support them. I do not see any practical reason to refuse support given that we can provide it.
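For context on why this is non-trivial at the proxy layer: the SPARQL 1.1 Protocol allows a query to arrive via POST either as a `query=` form parameter or as an `application/sparql-query` body, while updates arrive as an `update=` form parameter or an `application/sparql-update` body. A minimal Python sketch of the classification a proxy would have to perform (the function name is mine; note it requires parsing the request body, which nginx does not normally do):

```python
from urllib.parse import parse_qs

def classify_sparql_post(content_type: str, body: str) -> str:
    """Illustrative only: classify a SPARQL 1.1 Protocol POST request
    as 'query', 'update', or 'unknown' from its Content-Type and body."""
    # Strip any parameters like "; charset=UTF-8" from the media type.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type == "application/sparql-query":
        return "query"
    if media_type == "application/sparql-update":
        return "update"
    if media_type == "application/x-www-form-urlencoded":
        # Form-encoded requests carry the operation as a body parameter.
        params = parse_qs(body)
        if "update" in params:
            return "update"
        if "query" in params:
            return "query"
    return "unknown"
```

Doing this correctly means buffering and parsing every POST body before proxying, which is exactly the complexity (and risk of getting it wrong) that simply never forwarding POST to Blazegraph avoids.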
As for cookies and the multi-DC setup, many non-browser clients would just ignore cookies completely anyway. For browser-based clients, I don't think in the foreseeable timeframe we'd get traffic numbers that would make this a problem. The traffic numbers now are low, and if they rise, most of the traffic will be bots, tools and widgets feeding data from SPARQL, not browser traffic. If it ever does become a problem, it should be trivial to make an exception for requests coming to query.wikidata.org - they are easily identifiable.

Also, we will never have all or a significant part of the overall traffic produced by these tools - these tools are user-friendly frontends, but the bulk of the work will be done by automated tools, just as a typical SQL database serves most of its traffic via programmatic connections, not via a web interface.

TASK DETAIL
https://phabricator.wikimedia.org/T112151

_______________________________________________
Wikidata-bugs mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikidata-bugs
