Machine-friendly wiki interface


For a client-side reader/editor and for legitimate bots, it would be useful to be able to bypass some of the variability of the for-humans web interface.

  • Retrieve raw wikicode source of a page without parsing the edit page
    • e.g. http://www.wikipedia.org/wiki/Foobar?action=raw
    • Should we be able to get some metadata along with that -- revision date, name, etc.? Or should all of that be fetched separately?
    • How best to deal with old revisions? The 'oldid' parameter as at present, or something potentially more robust? A revision timestamp ought to be unique, but may not always be: timestamps have only second resolution, and a bug in February '02 wiped out some old timestamps, leaving multiple revisions with the same time.
    • At some future point the preferred URLs may change and UTF-8 may come into wider use; a client should be able to handle 301 & 302 redirects and the charset specified in the Content-Type header. If your bot won't handle UTF-8, it should say so explicitly in an Accept-Charset header so the server can treat it like a broken web browser and work around it. (See the fetch sketch after this list.)
  • Fuller RDF-based Recentchanges
    • Also page history and incoming/outgoing link lists? Watchlist? (See the feed-parsing sketch after this list.)
  • A cleaner save interface and login?
  • Look into wasabii (web application standard API [for] bi-directional information interchange). It's meant as a general API for CMSes, weblogs, etc. The spec may be rich enough for it to work with Wikipedia. The plus side of supporting wasabii is that any wasabii-compliant end-user application should be able to interface with Wikipedia.
    • In the blog world, at least, wasabii seems to be positioning itself as the next generation standard API (replacing bloggerAPI as the popular interface), which means lots of end-user applications will be created. All we'd have to do is support wasabii at some URL and we'd automatically inherit crap loads of functionality.
      • The specs at the site aren't very clear. Are there any implementations you can point to that would give a better idea of how it would actually operate? (The mailing list drops off in September, with people saying it's too bad there are no implementations, so no one's really sure how it works, so there are no implementations...) Additionally, it's not clear how the recursive node model maps onto a wiki: is a title a parent node with old versions as subnodes? Or are new versions subnodes? Or...??? How would categories, schemes, and taxonomies map to languages/sections and namespaces? --Brion VIBBER
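
As a concrete illustration of the raw-fetch points above, here is a minimal client sketch in Python. It assumes the proposed ?action=raw interface and an 'oldid' parameter for old revisions (neither is settled); it follows 301 & 302 redirects, honors the charset given in the Content-Type header, and sends an Accept-Charset header on behalf of a bot that can't handle UTF-8:

  import urllib.parse
  import urllib.request

  def fetch_raw(title, oldid=None):
      # Build a ?action=raw URL; the oldid parameter for old revisions
      # is an assumption here, not a settled interface.
      params = {"action": "raw"}
      if oldid is not None:
          params["oldid"] = str(oldid)
      url = ("http://www.wikipedia.org/wiki/"
             + urllib.parse.quote(title)
             + "?" + urllib.parse.urlencode(params))
      req = urllib.request.Request(url)
      # A bot that can't handle UTF-8 should say so explicitly, so the
      # server can treat it like a broken browser and work around it.
      req.add_header("Accept-Charset", "ISO-8859-1")
      # urlopen follows 301 & 302 redirects automatically, so a future
      # change of preferred URLs is handled transparently.
      with urllib.request.urlopen(req) as resp:
          # Honor the charset from the Content-Type header; fall back
          # to ISO-8859-1 (the HTTP default) when none is given.
          charset = resp.headers.get_content_charset() or "iso-8859-1"
          return resp.read().decode(charset)

  print(fetch_raw("Foobar"))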
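
For the RDF-based Recentchanges idea, a client-side sketch follows. It assumes, purely hypothetically, that the feed is published as RSS 1.0 (which is RDF-based) with Dublin Core dates, at a ?action=rdf URL; both the vocabulary and the URL are open questions:

  import urllib.request
  import xml.etree.ElementTree as ET

  # RSS 1.0 is RDF-based; Dublin Core supplies per-item dates.
  NS = {
      "rss": "http://purl.org/rss/1.0/",
      "dc": "http://purl.org/dc/elements/1.1/",
  }

  def recent_changes(feed_url):
      # In RSS 1.0 the <item> elements are direct children of the
      # <rdf:RDF> root, so scan the root rather than the channel.
      with urllib.request.urlopen(feed_url) as resp:
          root = ET.parse(resp).getroot()
      for item in root.findall("rss:item", NS):
          yield (item.findtext("rss:title", default="", namespaces=NS),
                 item.findtext("rss:link", default="", namespaces=NS),
                 item.findtext("dc:date", default="", namespaces=NS))

  # Hypothetical feed URL; the real location isn't decided.
  feed = "http://www.wikipedia.org/wiki/Special:Recentchanges?action=rdf"
  for title, link, date in recent_changes(feed):
      print(date, title, link)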

Comments, suggestions?