Machine-friendly wiki interface

{{MovedToMediaWiki}}

35 revisions exported. &mdash;<strong>[[User:Anonymous Dissident|<span style="font-family:Script MT Bold;color:DarkSlateGray">Anonymous Dissident</span>]]</strong>[[User_talk:Anonymous Dissident|<sup><span style="font-family:Verdana;color:Gray">Talk</span></sup>]] 23:42, 2 December 2009 (UTC)

For a [[Dedicated Wikipedia editor|client-side reader/editor]] and [[w:Wikipedia:Bots|legitimate bots]], it would be useful to be able to bypass some of the variability of the human-oriented web interface.

* Retrieve raw wikicode source of a page without parsing the edit page (see the sketch after this list)
**e.g. http://www.wikipedia.org/wiki/Foobar?action=raw
**Should we be able to get some meta-data along with that -- revision date, name, etc? Or all separate...
**How best to deal with old revisions? The 'oldid' as at present, or something potentially more robust; a revision timestamp ''should'' be unique, but may not always be (timestamps only have second resolution, and some old timestamps were wiped out by a bug in February '02, leaving multiple revisions at the same time)
**At some future point, preferred URLs may change and UTF-8 may be used more widely; a client should be able to handle 301 & 302 redirects, and the charset specified in the Content-Type header. If your bot won't handle UTF-8, it should explicitly say so in an Accept-Charset header so the server can treat you like a broken web browser and work around it.
* Fuller [[RDF spool|RDF-based Recentchanges]]
** Also page history and incoming/outgoing links lists? Watchlist?
* A cleaner save interface and login?
* Look into [http://wasabii.org/root/ wasabii] (<i><b>w</b>eb <b>a</b>pplication <b>s</b>tandard <b>A</b>PI [for] <b>b</b>i-directional <b>i</b>nformation <b>i</b>nterchange</i>). It's meant as a general API for CMSes, weblogs, etc. The spec may be rich enough for it to work with Wikipedia. The plus side of supporting <i>wasabii</i> is that any wasabii-compliant end-user application should be able to interface with Wikipedia.
**In the blog world, at least, <i>wasabii</i> seems to be positioning itself as the next-generation standard API (replacing bloggerAPI as the popular interface), which means lots of end-user applications will be created. All we'd have to do is support <i>wasabii</i> at some URL and we'd automatically inherit a great deal of functionality.
***The specs at the site aren't very clear. Are there any implementations you can point to that would give a better idea of how it would actually operate? (The mailing list drops off in September, with people saying that it's too bad there are no implementations so no one's really sure how it works, so there are no implementations...) Additionally, it's not clear how the recursive node model maps onto a wiki (is a title a parent node, with old versions as subnodes? Or are new versions subnodes? Or...???) How would categories, schemes and taxonomies map to languages/sections and namespaces? --[[User:Brion VIBBER|Brion VIBBER]]
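
A minimal sketch of how a client might use the raw-source URL above: it follows 301/302 redirects, decodes the body using the charset from the Content-Type header, and declares what it can handle via Accept-Charset. The <code>fetch_raw</code> helper, the base URL, and the User-Agent string are assumptions for illustration, not part of this proposal.

<syntaxhighlight lang="python">
# Illustrative sketch only: fetch_raw, the base URL, and the User-Agent string
# are assumptions for this example, not an agreed interface.
import urllib.parse
import urllib.request


def fetch_raw(title, oldid=None, base="https://en.wikipedia.org/w/index.php"):
    """Fetch the raw wikitext of a page; oldid selects an old revision."""
    params = {"title": title, "action": "raw"}
    if oldid is not None:
        params["oldid"] = str(oldid)
    url = base + "?" + urllib.parse.urlencode(params)
    request = urllib.request.Request(url, headers={
        "User-Agent": "example-bot/0.1",  # identify the client politely
        "Accept-Charset": "utf-8",        # declare which charsets we can decode
    })
    # urlopen follows 301/302 redirects by default, so a moved or
    # canonicalised URL still yields the page source.
    with urllib.request.urlopen(request) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        return response.read().decode(charset)


print(fetch_raw("Foobar")[:200])  # first 200 characters of the raw wikitext
</syntaxhighlight>

The same pattern covers old revisions by passing an <code>oldid</code>; any metadata (revision date, author) would have to come from a separate request, per the open question above.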

Comments, suggestions?
