Machine-friendly wiki interface

For a client-side reader/editor and for legitimate bots, it would be useful to be able to bypass some of the variability of the human-oriented web interface.

  • Retrieve raw wikicode source of a page without parsing the edit page
    • e.g. http://www.wikipedia.org/wiki/Foobar?action=raw (see the sketch after this list)
    • Should we be able to get some meta-data along with that (revision date, name, etc.), or should it all be separate?
    • How best to deal with old revisions? The 'oldid' parameter as at present, or something potentially more robust? A revision timestamp should be unique, but may not always be: it has only second resolution, and some old timestamps were wiped out by a bug in February '02, leaving multiple revisions with the same time.
    • At some future point, preferred URLs may change and UTF-8 may be used more widely; a client should be able to handle 301 & 302 redirects, and the charset specified in the Content-Type header. If your bot won't handle UTF-8, it should explicitly say so in an Accept-Charset header so the server can treat you like a broken web browser and work around it.
  • Fuller RDF-based Recentchanges (see the second sketch after this list)
    • Also page history and incoming/outgoing links lists? Watchlist?
  • A cleaner save interface and login?
  • Look into wasabii (web application standard API (for) bi-directional information interchange). It's meant as a general API for CMSes, weblogs, etc. The spec may be rich enough for it to work with Wikipedia. The plus side of supporting wasabii is that any wasabii-compliant end-user application should be able to interface with Wikipedia.
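
Below is a minimal sketch, in Python, of what a client fetch via action=raw might look like, following the example URL above. The URL pattern, the 'oldid' parameter, and the assumption that the server follows ordinary redirect and charset conventions are illustrative only, not a settled interface.

  # Minimal sketch of a client fetching raw wikicode via action=raw.
  # The URL pattern follows the example above; 'oldid' and the exact
  # server behaviour (redirects, charset) are assumptions, not a
  # documented contract.

  import urllib.parse
  import urllib.request


  def fetch_raw(title, oldid=None, base="http://www.wikipedia.org/wiki/"):
      """Fetch the raw wikicode of a page, optionally a specific old revision."""
      params = {"action": "raw"}
      if oldid is not None:
          params["oldid"] = str(oldid)  # hypothetical: mirrors the oldid used on history pages
      url = base + urllib.parse.quote(title) + "?" + urllib.parse.urlencode(params)

      # Declare which charsets we can cope with; a bot that cannot handle
      # UTF-8 should say so here, as suggested above.
      req = urllib.request.Request(url, headers={
          "Accept-Charset": "utf-8, iso-8859-1",
          "User-Agent": "example-bot/0.1",
      })

      # urlopen follows 301/302 redirects automatically, so a future change
      # of preferred URLs should be transparent to this client.
      with urllib.request.urlopen(req) as resp:
          charset = resp.headers.get_content_charset() or "utf-8"
          return resp.read().decode(charset)


  if __name__ == "__main__":
      print(fetch_raw("Foobar")[:200])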

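And a similarly speculative sketch of a client consuming an RDF-based Recentchanges feed. The feed URL and the RSS 1.0 / Dublin Core vocabulary used here are assumptions for illustration; the real feed, if it is implemented, may look quite different.

  # Sketch of a client for a hypothetical RDF-based Recentchanges feed.
  # The feed URL and vocabulary (RSS 1.0 + Dublin Core) are assumptions.

  import urllib.request
  import xml.etree.ElementTree as ET

  RSS = "{http://purl.org/rss/1.0/}"
  DC = "{http://purl.org/dc/elements/1.1/}"

  FEED_URL = "http://www.wikipedia.org/wiki/Special:Recentchanges?action=rdf"  # hypothetical


  def recent_changes(url=FEED_URL):
      """Yield (title, link, contributor, date) tuples from an RSS 1.0 style feed."""
      with urllib.request.urlopen(url) as resp:
          tree = ET.parse(resp)
      for item in tree.iter(RSS + "item"):
          yield (
              item.findtext(RSS + "title"),
              item.findtext(RSS + "link"),
              item.findtext(DC + "creator"),  # contributor, if the feed carries it
              item.findtext(DC + "date"),     # revision timestamp, if present
          )


  if __name__ == "__main__":
      for title, link, who, when in recent_changes():
          print(when, title, who, link)
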
Comments, suggestions?