The first poster pointed out that we already have XML. I wanted to add to this in case anyone was wondering where to go from there. XSLT is similar to the per-element replacement rules the author is suggesting you do in REBOL. But XSLT is a W3C standard, free, and multiple companies and individuals make processors for it.
XSLT is understood natively by both Internet Explorer 5 and Mozilla, so you can deploy it on the browser side, the client side (your development machine, making it spit out static HTML ready-to-be-hosted), or the server side. In fact, since it's a standard, you can begin with a client-side system and then use exactly the same templates should you upgrade to a hosting service that offers an XSLT processor on the server (and there are now several pieces of software which do this, like Apache Cocoon and Apache AxKit).
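For the browser-side option, for instance, all it takes is one processing instruction at the top of the XML file to tell IE5 or Mozilla which stylesheet to apply (style.xsl here is a placeholder name, not anything standard):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="style.xsl"?>
<entry>
  <title>Hello, world</title>
</entry>
```

The browser fetches style.xsl, runs the transformation, and renders the resulting HTML, all without the server doing any work.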
XSLT is the _only_ templating system that can live comfortably on all three points of the development/hosting/browsing pipeline.
Throw in a browser sniffer, and you can even deliver raw XML to Internet Explorer/Mozilla and have it format your page into HTML on your visitor’s CPU time instead of yours.
XSLT is as simple to learn as REBOL, and is more likely to be supported by advanced web site editors (maybe future versions of Dreamweaver and FrontPage, if they don't already) than a proprietary system.
But if that’s not enough, a critical advantage of XSLT is that it comes with XPath. With XPath you can look anywhere in the source document from anywhere in the template using a simple, URL-ish addressing scheme. So, for example, say you want a button at the top of the page to turn on “Director’s Commentary” for your weblog/essay/novella, but you don’t want that button to appear if the document doesn’t contain any <commentary> tags in the XML. All you need to do is use XPath to see if there are any <commentary> tags up ahead before you actually include the button. The same can be applied to a dozen conditional menu links or UI elements, set both ahead of and after the body of the content itself.
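In XSLT 1.0 that check is a one-line test. A quick sketch, assuming the <commentary> elements can appear anywhere in the source document:

```xml
<xsl:if test="//commentary">
  <a href="#commentary">Director's Commentary</a>
</xsl:if>
```

The `//commentary` path matches <commentary> elements at any depth, so the button only gets emitted when there is actually some commentary in the document to turn on.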
In my practice, I’ve also found that XPath is invaluable for filling the TITLE attribute on links. When I get to an <a> element I just look ahead to the glossary section I’ve included in the XML source and suck that information out.
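A sketch of that trick, assuming a made-up glossary structure like <glossary><term name="..."><def>...</def></term></glossary> at the document root (your element names will differ):

```xml
<xsl:template match="a">
  <a href="{@href}">
    <!-- look up the glossary entry whose name matches this link's text -->
    <xsl:attribute name="title">
      <xsl:value-of select="/doc/glossary/term[@name = string(current())]/def"/>
    </xsl:attribute>
    <xsl:apply-templates/>
  </a>
</xsl:template>
```

The predicate compares each term's name attribute against the text of the link being processed (current()), and the matching definition lands in the link's TITLE tooltip.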
XSLT’s document() function also understands full URLs. If there’s an XHTML or XML resource on a web page elsewhere, you can suck in all or fragments of it while you’re building the template and include its contents in your own document’s XML tree. It’s as simple as:
Temperature in New York: <xsl:value-of select="document('http://weather.boygenius.com/new_york-new_york.xml')/city/temperatures/current_farenheit"/>
That is ease and power I think anyone can respect.
There’s more about XSLT and XPath at http://www.w3.org/Style/XSL/