Cross-Browser Scripting with importNode()

While building a browser slideshow object for a demonstration on dynamically pulling image information from a web server, I ran into difficulty with the DOM-compliant approach I had envisioned. A two-day journey into the world of XML DOM support for web browsers lay between me and a satisfactory solution.


My plan was to pass an XMLHttpRequest (XHR) with the name of an image to the server, which would return, from a combination of a database and the image itself, a title for the image, a description, and all meta data stored for the image. This data would be sent in XHTML format so that the client could simply import the XML response and append it to some container elements, thus speeding up the slideshow application. The data would look something like this:

<image_info id="image_number">
  <title>image_title</title>

  <description>image_description</description>
  <meta>image_meta</meta>
</image_info>

where

image_title:
  <div id="imageTitle">The title of the image</div>

image_description:
  <div id="imageDescription">
    <p class="para">Paragraph 1</p>
    <p class="para">Paragraph 2</p>
    ...
    <p class="para">Paragraph n</p>

  </div>

image_meta:    
  <div id="imageMeta">
    <!-- formatted meta information here -->
  </div>
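The request itself is not the interesting part, but for context, here is a rough sketch of the kind of call the slideshow would make. The server URL and query-string parameter are made up for illustration, and the ActiveX fallback that older Internet Explorer needs is shown later in this article.

/* a sketch only: send the image name and expect the <image_info> data back */
var xhr = new XMLHttpRequest();
xhr.open('get', 'imageinfo.php?image=sunset01', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState == 4 && xhr.status == 200) {
    /* xhr.responseXML now holds the image_info document discussed below */
  }
};
xhr.send(null);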

Even the best-laid plans can go awry, and this one certainly did. I’d planned to take the XHR response from the server, grab its responseXML property, parse out the different sections using the document.getElementsByTagName() method, and put them where they needed to go. Simple, right?

Nope.

The frustration of the DOM

Though I wouldn’t call myself a standards-compliance zealot, I believe standards have their place in development, and I wanted to do this using W3C DOM standards. I know a big question is “Why use the DOM when we have the handy innerHTML property?” The answer, in three parts:

  1. The innerHTML property is not standards-compliant.
  2. To avoid the XML aspect of this application, I would have to either make a separate call for each part of the data I wanted or parse the responseText property that came with the XHR response.
  3. Depending on what was contained in the response, innerHTML might not even work correctly.

The first point is self-evident, so let’s turn our attention to the other two. I could make a separate XHR call for the title, description, and meta data, but this could be slower, as there are three requests to the server instead of one, and as you scale an application upwards, even a minor speed discrepancy will grow. Parsing the responseText is no speedier, as you’d have to wade through an unknown amount of text to get what you needed. Which leads to the third point: innerHTML is read-only on TBODY and other table elements in Internet Explorer, so if the container element is part of a table you may encounter problems. innerHTML also has problems rendering SELECT elements in Internet Explorer (see the bug report from Microsoft on this issue).
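To make the third point concrete, here is a minimal illustration of the kind of assignment that trips over that limitation; the table markup and the element id are my own, not from the slideshow application.

/* a sketch: assume resultsBody is the id of a tbody element already in the page */
var tbody = document.getElementById('resultsBody');
try {
  /* standards-based browsers accept this; older IE throws a runtime error,
     because innerHTML is read-only on table-section elements there */
  tbody.innerHTML = '<tr><td>new row</td></tr>';
} catch (e) {
  /* fall back to DOM methods such as insertRow() and insertCell() instead */
}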

I needed to use DOM functionality, thus opening the door to the not-so-standard world of JavaScript where cross-browser compatibility is but a tantalizing dream. Importing nodes between documents with different ownerDocument properties (which is what I needed to do in my slideshow application) requires the DOM Level 2 method importNode(), since in these cases the DOM will not allow a simple appendChild().
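A quick sketch shows why a plain appendChild() is not enough; it reuses the xhrResponse object from the examples that follow.

/* a sketch: newNode still belongs to the XHR response document, not to this
   page's document, so its ownerDocument is wrong for appendChild() */
var newNode = xhrResponse.responseXML.getElementsByTagName('title')[0];
try {
  document.getElementById('divTitleContainer').appendChild(newNode);
} catch (e) {
  /* DOM-compliant browsers raise a WRONG_DOCUMENT_ERR here; the node
     must be imported into this document first */
}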

Unfortunately, even in Internet Explorer 7, Microsoft has not yet implemented any of the missing DOM methods that are sorely needed. So the development community must wait for the next release and hope that the DOM is eventually upgraded. That does not help us now, though. We need solutions for the problems we face when writing a web application for all browsers, and importing nodes is no exception.

The W3C DOM Level 2 import approach

To import a DOM document into an existing DOM document, there is the handy importNode() method, introduced in DOM Level 2 as part of the Document Object Model Core. It imports a node from one document, A, into an existing document, B, by creating a copy of the node whose ownerDocument property is set to B. The copy can then be appended to the existing document using appendChild().

var newNode = null, importedNode = null;

/* Throughout the examples, our new document will come from an XHR response */
newNode = xhrResponse.responseXML.getElementsByTagName('title')[0].childNodes[0];
if (newNode.nodeType != document.ELEMENT_NODE)
  newNode = newNode.nextSibling;
if (newNode) {
  importedNode = document.importNode(newNode, true);
  document.getElementById('divTitleContainer').appendChild(importedNode);
}

The problem with implementing this code is that although it may work in Firefox, Opera, Netscape, and Safari (to name a few), it doesn’t work in any version of Internet Explorer. Why? Because Internet Explorer does not understand the DOM Level 2 method importNode().

Trying a different W3C method

I figured there had to be some way around this issue that Microsoft had already provided, so I looked. I didn’t have to dig long before I remembered the cloneNode() method. So I tried the following:

var newNode = null, importedNode = null;

newNode = xhrResponse.responseXML.getElementsByTagName('title')[0].childNodes[0];
if (newNode.nodeType != document.ELEMENT_NODE)
  newNode = newNode.nextSibling;
if (newNode) {
  importedNode = newNode.cloneNode(true);
  document.getElementById('divTitleContainer').appendChild(importedNode);
}

I received the following error: No such interface supported. Besides which, there would have been an issue with the differences in ownerDocument, right?

What about a Microsoft hack?

Convinced that the answer did not lie somewhere else in the W3C DOM standards, I began looking for a Microsoft-friendly workaround. It was not long before I found a post from James Edwards, co-author of The JavaScript Anthology: 101 Essential Tips, Tricks & Hacks, with a solution to the dilemma.

var newNode = null, importedNode = null;

newNode = xhrResponse.responseXML.getElementsByTagName('title')[0].childNodes[0];
if (newNode.nodeType != document.ELEMENT_NODE)
  newNode = newNode.nextSibling;
if (newNode) {
  importedNode = document.cloneNode(true);
  document.getElementById('divTitleContainer').innerHTML += importedNode.outerHTML;
}

So why does this work, when the other cloneNode() solution did not? Actually, in Internet Explorer 6, it doesn’t, though it may have back in 2004 when the blog post was made. After debugging the code for a while I noticed what should have been an obvious problem, probably a typo in the post: the node being cloned was wrong; cloning the document itself was never the intent. I changed the cloneNode() call in the above code to clone newNode instead of document:

importedNode = newNode.cloneNode(true);

With this change, the innerHTML of document.getElementById('divTitleContainer') was undefined. It looked like a Microsoft hack was not a viable solution. I was not completely sorry that I wouldn’t be using innerHTML, and was left with the option of implementing an IE version of importNode().

A new importNode() method

Following the W3C DOM Level 2 standard for document.importNode(), I wanted to make sure that my method could handle the different node types it might encounter. This was when I noticed one more inconsistency: Internet Explorer did not define document.ELEMENT_NODE and the other node-type constants as part of its DOM implementation. Cripes! I quickly rectified this situation with:

if (document.ELEMENT_NODE == null) {
  document.ELEMENT_NODE = 1;
  document.ATTRIBUTE_NODE = 2;
  document.TEXT_NODE = 3;
  document.CDATA_SECTION_NODE = 4;
  document.ENTITY_REFERENCE_NODE = 5;
  document.ENTITY_NODE = 6;
  document.PROCESSING_INSTRUCTION_NODE = 7;
  document.COMMENT_NODE = 8;
  document.DOCUMENT_NODE = 9;
  document.DOCUMENT_TYPE_NODE = 10;
  document.DOCUMENT_FRAGMENT_NODE = 11;
  document.NOTATION_NODE = 12;
}

Every piece of code I tried had a test for document.ELEMENT_NODE, so I went back and tested all of my previous attempts to see if this was the problem with Internet Explorer. It wasn’t—even with this addition, none of the previous attempts worked. Writing my own method was the way I would have to go.

My importNode() method had to be able to create a new node, and if necessary any child nodes as well. It also needed to import all attributes associated with the node. The result was the following:

if (!document.importNode) {
  document.importNode = function(node, allChildren) {
    switch (node.nodeType) {
      case document.ELEMENT_NODE:
        var newNode = document.createElement(node.nodeName);
        /* does the node have any attributes to add? */
        if (node.attributes && node.attributes.length > 0)
          for (var i = 0, il = node.attributes.length; i < il;)
            newNode.setAttribute(node.attributes[i].nodeName, node.getAttribute(node.attributes[i++].nodeName));
        /* are we going after children too, and does the node have any? */
        if (allChildren && node.childNodes && node.childNodes.length > 0)
          for (var i = 0, il = node.childNodes.length; i < il;)
            newNode.appendChild(document.importNode(node.childNodes[i++], allChildren));
        return newNode;
        break;
      case document.TEXT_NODE:
      case document.CDATA_SECTION_NODE:
      case document.COMMENT_NODE:
        return document.createTextNode(node.nodeValue);
        break;
    }
  };
}

Using the format from the beginning of this article and importing from this xhrResponse.responseXML:

<image_info id="002">
  <title>Looking Through the Window</title>

  <description>
    <p class="para">
      This image was taken looking at my backyard from inside the kitchen of my house. It reminds me of <strong>something</strong> from a <a href="dummy.html" onclick="return openNewWindow(this.href);">fantasy world</a>.
    </p>
  </description>

  <meta>image_meta</meta>
</image_info>

The results in Internet Explorer looked as I had expected them to! I felt relief wash over me and I was very content…until a code review uncovered another problem.

Please remember the events

The onclick handler that I had on the <a> element wouldn’t work, and I couldn’t figure out why. I went back to the other browsers for reassurance, and none of them—not a single Gecko browser, Opera, or anything else—would fire the event either. It seems that when nodes are imported, the DOM does not register the elements’ event handlers properly. This became clear after searching blogs and documentation; event handlers are not activated in imported elements.

I found confirmation in a Microsoft article called “Faster DHTML in 12 Steps.” It stated: “If you are applying a block of HTML text, as opposed to accessing individual elements, then the HTML parser must be invoked.” What a disappointment.

On a whim, I decided to try my importNode() method against Firefox, just to see what would happen…and what happened was that it worked—even the event handlers! The native importNode() implementation found in browsers does not import event handlers or the default styles attached to elements like strong, as in my example. I should have realized this earlier; the word “something” was never made bold in any of the browsers. Apparently, in all browsers, elements must be passed through the HTML parser before events and styles are activated.

Internet Explorer’s event handling troubles still loomed. I wanted the solution to be completely cross-browser compatible, and I knew Internet Explorer implemented the event object differently than the W3C recommendation. A solution was not forthcoming, but just as I was ready to give up, a silly solution came to me. I remembered seeing in a blog long ago that a lot of life’s little problems with Internet Explorer could be solved by simply setting an element’s innerHTML property to itself. So I tried it.

document.getElementById('divTitleContainer').innerHTML = document.getElementById('divTitleContainer').innerHTML;

It was silly, I know. But it worked.

My final solution

The solution to all of my problems was not to use the native DOM method after all, but to use my own implementation instead. Here, in all of its glory, is my final solution to the importNode() problem, coded in a cross-browser compliant way:

if (!document.ELEMENT_NODE) {
  document.ELEMENT_NODE = 1;
  document.ATTRIBUTE_NODE = 2;
  document.TEXT_NODE = 3;
  document.CDATA_SECTION_NODE = 4;
  document.ENTITY_REFERENCE_NODE = 5;
  document.ENTITY_NODE = 6;
  document.PROCESSING_INSTRUCTION_NODE = 7;
  document.COMMENT_NODE = 8;
  document.DOCUMENT_NODE = 9;
  document.DOCUMENT_TYPE_NODE = 10;
  document.DOCUMENT_FRAGMENT_NODE = 11;
  document.NOTATION_NODE = 12;
}

document._importNode = function(node, allChildren) {
  switch (node.nodeType) {
    case document.ELEMENT_NODE:
      var newNode = document.createElement(node.nodeName);
      /* does the node have any attributes to add? */
      if (node.attributes && node.attributes.length > 0)
        for (var i = 0, il = node.attributes.length; i < il;)
          newNode.setAttribute(node.attributes[i].nodeName, node.getAttribute(node.attributes[i++].nodeName));
      /* are we going after children too, and does the node have any? */
      if (allChildren && node.childNodes && node.childNodes.length > 0)
        for (var i = 0, il = node.childNodes.length; i < il;)
          newNode.appendChild(document._importNode(node.childNodes[i++], allChildren));
      return newNode;
      break;
    case document.TEXT_NODE:
    case document.CDATA_SECTION_NODE:
    case document.COMMENT_NODE:
      return document.createTextNode(node.nodeValue);
      break;
  }
};

Here it is in use:

var newNode = null, importedNode = null;

newNode = xhrResponse.responseXML.getElementsByTagName('title')[0].childNodes[0];
if (newNode.nodeType != document.ELEMENT_NODE)
  newNode = newNode.nextSibling;
if (newNode) {
  importedNode = document._importNode(newNode, true);
  document.getElementById('divTitleContainer').appendChild(importedNode);
  if (!document.importNode)
    document.getElementById('divTitleContainer').innerHTML = document.getElementById('divTitleContainer').innerHTML;
}

Let’s get practical

This is all well and good in theory, but is it worth using this solution in the real world? For developers creating Ajax web applications or websites, I believe it is. An Ajax application can obviously take advantage of the document._importNode() solution if it receives chunks of XHTML from the server in response to a client request. In these situations, it is important that any events built into the chunks of markup coming from the server fire correctly, and that any styles built into the markup display properly.

We can assume, for the sake of this example, that the client is requesting new data to place within a <DIV> element with id="xhrText". The server will send the response as a chunk of XHTML to be placed directly into this element, surrounded by a parent XML node that can be effectively ignored.
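The response, then, might look something like the following. The markup inside the wrapper is purely illustrative; only the response element itself is assumed by the code below.

<response>
  <div class="newsItem">
    <h2>Fresh content</h2>
    <p>This chunk of XHTML gets dropped into the xhrText container.</p>
  </div>
</response>

The client-side handler imports the first element child of that wrapper: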

var newNode = null, importedNode = null;

newNode = xhrResponse.responseXML.getElementsByTagName('response')[0].childNodes[0];
if (newNode.nodeType != document.ELEMENT_NODE)
  newNode = newNode.nextSibling;
if (newNode) {
  importedNode = document._importNode(newNode, true);
  document.getElementById('xhrText').innerHTML = '';
  document.getElementById('xhrText').appendChild(importedNode);
  if (!document.importNode)
    document.getElementById('xhrText').innerHTML = document.getElementById('xhrText').innerHTML;
}

The method described above will ensure that events attached to elements contained in the server response fire when needed, and that any style associated with any elements will be applied to the imported markup. Example one shows it in action.

You might also use this method in a web page that uses Ajax and XHTML to replace frames or iframes. This is useful when the main page contains large graphics, style sheets, or JavaScript that the developer would rather not make the client load again and again if the client’s cache is not set up to handle it all. Ajax fetches the contents of an entire page and then places the contents of the <body> into a predetermined “frame” element. An entire page must still exist to be retrieved, so that the website remains accessible to browsers that have JavaScript disabled. A link, for example, would look like this:

<a href="page2.xhtml" onclick="return gotoPage(this.href);">Page 2</a>

The gotoPage() function will always return false in order to stop the browser from moving to page2.xhtml. If JavaScript is disabled, this link still works: the browser simply goes to page2.xhtml, although it then has to reload all of the large files the developer is trying to avoid. The gotoPage() function would make an Ajax call for the new page:

var xhr = false;

function gotoPage(p_url) {
  if (window.XMLHttpRequest) {
    xhr = new XMLHttpRequest();
  } else {
    try {
      xhr = new ActiveXObject('Msxml2.XMLHTTP');
    } catch (ex) {
      try {
        xhr = new ActiveXObject('Microsoft.XMLHTTP');
      } catch (ex) {
        xhr = false;
      }
    }
  }
  if (!xhr)
    return (false);
  else {
    xhr.open('get', p_url, true);
    xhr.onreadystatechange = showPageContent;
    xhr.send(null);
  }
  return (false);
}

The response handler receives the whole page but imports only the part it needs. For my pages, I keep the content separate from headers and footers by placing it in a <DIV> element with id="documentBodyContent".

function showPageContent() {
  if (xhr.readyState == 4 && xhr.status == 200) {
    var newNode = null, tempNode = null, importedNode = null;

    tempNode = xhr.responseXML.getElementsByTagName('div');
    for (var i = 0, il = tempNode.length; i < il; i++)
      if (tempNode[i].getAttribute('id') == 'documentBodyContent') {
        newNode = tempNode[i];
        break;
      }
    if (newNode.nodeType != document.ELEMENT_NODE)
      newNode = newNode.nextSibling;
    if (newNode) {
      importedNode = document._importNode(newNode, true);
      document.getElementById('xhrFrame').innerHTML = '';
      document.getElementById('xhrFrame').appendChild(importedNode);
      if (!document.importNode)
        document.getElementById('xhrFrame').innerHTML = document.getElementById('xhrFrame').innerHTML;
    }
  }
}

Example two demonstrates the code in action. There is a snag with Internet Explorer, because IE does not handle documents served as XHTML correctly and will not hand back a usable responseXML for them. A workaround provided in the example code takes the responseText and loads the string into an XML document that can then be used.
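The workaround itself lives in the downloadable code, but the idea is roughly this; the sketch below uses the standard MSXML ProgID and omits error handling.

/* a sketch of the IE workaround: when responseXML is empty because the page
   was not served as XML, rebuild a DOM document from the raw responseText */
var responseDoc = xhr.responseXML;
if (!responseDoc || !responseDoc.documentElement) {
  responseDoc = new ActiveXObject('Microsoft.XMLDOM');
  responseDoc.async = false;
  responseDoc.loadXML(xhr.responseText);
}
/* use responseDoc wherever xhr.responseXML was used before */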

In summary

So where do we stand? It’s complicated. If you need to import nodes between two documents with different owners, you’ll discover that no browser gets it completely right, whether you use cloneNode(), importNode(), or save the outerHTML of the new node into the innerHTML of the existing node. Using document._importNode() works for me, but your mileage may vary.

Please note: The document._importNode() that I presented cannot handle the complete set of node types, only the most common ones found when importing a DOM document. By following the W3C definition for importNode(), it would not be too much trouble to add the missing types into this method. Remember, though, that types document.DOCUMENT_NODE and document.DOCUMENT_TYPE_NODE cannot be imported and may be excluded. Also the types document.ENTITY_NODE and document.NOTATION_NODE cannot be completely imported due to DocumentType being read-only in the current implementation of the DOM.
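For example, support for document fragments could be bolted on with one more case in the switch, following the same pattern as the element branch. This is a sketch only, not part of the tested code above.

case document.DOCUMENT_FRAGMENT_NODE:
  /* a fragment has no attributes of its own; just import its children */
  var newNode = document.createDocumentFragment();
  if (allChildren && node.childNodes && node.childNodes.length > 0)
    for (var i = 0, il = node.childNodes.length; i < il;)
      newNode.appendChild(document._importNode(node.childNodes[i++], allChildren));
  return newNode;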

Download the final solution in this document and try it out for yourself. I have tested this code in Mozilla 1.7.13, Firefox 2.0 and 1.5, Internet Explorer 7.0, 6.0, 5.5, and 5.0, Opera 9.10 and 8.53, Netscape 8.1, Flock 0.7, K-Meleon 1.02, and SeaMonkey 1.0; it worked for me in all cases. Please let me know if you run into any problems with other browsers or discover problems in the code!

About the Author

Anthony Holdener

Anthony T. Holdener III is a web programming consultant working in St. Louis, MO, where he concentrates on building business applications for the web. He is currently writing his first book for O’Reilly Media, Inc. When he is not writing, he enjoys spending time with his wife and family and finding new things to teach his two-year-old twins.

34 Reader Comments

  1. From my experience over the past years I have to say that I won’t go back to XML handling in XmlHttpRequests ever. And, .innerHTML being standards-compliant or not, de facto it works everywhere and it is fast. And speed is a concern, especially if your application gets bigger and bigger.

    Same for harvesting through XML data islands. Your browser’s native scripting language is JS, so JSON just seems more natural to me for any data exchange.

    As long as you don’t have to provide the XML to another place where it is required I don’t see a justification for it.

  2. ThinkelMan summed up my first thoughts exactly when reading this article. I have to admit your solution is nice, but I can’t help thinking you caused yourself a lot of extra work when you could have accomplished much the same thing using JSON rather than XML as the response format.

    Was there any particular reason for choosing XML over JSON in the application?

  3. You could also write some IE innerHTML workarounds, e.g. setting the innerHTML of a )/, ‘$1’ + content + ‘$2’);
    } else {
    selectNode.innerHTML=content;
    }

  4. There are generally two approaches to retrieving data by asynchronous request:
    * Retrieve an XML (or JSON) object containing data nodes, then parse out that data and build the HTML client-side.
    * Retrieve raw text or HTML to insert into an element using innerHTML (or whatever).
    These two approaches are meant for different uses, but the author of the article seems to want to use both at the same time.

    If you want data, parse it out and manipulate it as you like. If you want prebuilt HTML, ask the server for that and don’t try to parse it yourself.

    In other words, I think the whole problem can be avoided, at least as far as the provided example goes.

  5. JSON is a very viable solution if I were taking data, say, from a database and sending that asynchronously to the client from the server. If I want to strip out sections from a page that is already formatted as XHTML, though, why go through the hassle of:

    1. converting it to JSON on the server from wherever it came from
    2. sending it to the client
    3. converting it back to XHTML on the client

    XML works well in these cases and has no additional processing costs.

    I’m not an XML zealot by any means, but I like to have solutions for every scenario out there just in case it comes up. I understand where Markus and Tim are coming from.

  6. I understand what people are saying about importing then parsing XML or alternately using innerHTML, and why some people think this _importNode script is pointless. However, I think this script has potential.

    My scenario: I have pre-existing XHTML pages I’d like to grab data from. This eliminates JSON because the data is already in XHTML format. I can import the pages as XML, but then I run into trouble with formatting markup (such as ). I can use innerHTML, but that means giving up DOM scripting control over the imported data.

    It appears this script will let me import a chunk of XHTML from an existing page, retain existing XHTML markup such as and CSS selectors, and also be able to manipulate it via the DOM. And as a plus, it does it with a very small script that is cross-browser and reusable.

    What more can I ask for? 🙂

  7. We took a similar approach at my work, where we had an XML response containing an HTML “payload” among other things. We wanted to inject the HTML into the page without resorting to parsing the responseText with a regular expression and using innerHTML. What we came up with was a copyNodes function:

    http://www.stringify.com/misc/copynodes.js

    This function allows you to specify a source node in the responseXML, a destination node in the current document, and an optional filter to weed out unwanted nodes.

  8. I can see this being useful in some very specific cases, but most of the time it would seem to be easier to pull the XHTML text using XHR, and then use Regular Expressions to find the appropriate data within.

  9. XML is not worth the trouble.

    This kind of “solution” to work around the X in Ajax just helps to convince me that Json and innerHTML is the best way to go, standards or not. The relatively few problems that you run into using innerHTML are pretty minor compared to the headaches of the badly designed and inconsistently implemented XML Dom.

    >>>
    1. converting it to JSON on the server from wherever it came from 2. sending it to the client 3. converting it back to XHTML on the client
    >>>

    Yeah…
    1. php_json, or any of the tools available at http://www.json.org/. Even without handy extensions, outputting Json is at least as simple as outputting XML. (In my opinion, it’s easier.)
    2. XHR – Just use the responseText property
    3. eval() or http://www.json.org/json.js, and you’ve got a native Javascript object to work with, instead of a klunky Dom.

    It’s much faster for the browser, the code that parses it is easier to understand and simpler to read. In the real world, we spend about 80% of our time maintaining code, not writing it; simplicity and readability == $$. On top of that, Json is typically fewer bytes than XML, and with innerHTML, you can update the page with a single redraw/reflow, which saves perceived and actual time for the user.

    And, Json isn’t just serialized Javascript any more. It’s extensible and simple enough that it’s fast becoming a form of serialized *anything* – php, java, c++, and so on.

    If the best, fastest, easiest, most readable and extensible and maintainable method of asynchronous data transfer and DHTML updating is not supported by the standards, does that mean that the method is broken, or that the standards are?

    Thankfully, standards change, albeit painfully slowly.
    http://www.google.com/search?q=jsonrequest

    It’s a nicely written article, though. Thankfully we all get to learn from your headache instead of having our own 🙂

  10. The problem I see with innerHTML is that it destroys previous DOM objects.

    Say you have a list. You want to apply a behaviour to each list item without using any inline JS. A response comes back with HTML for a new list item. Presumably you now write something like:

    ul.innerHTML += response.responseText;

    You’ve now lost all your event listeners for the other items, since those DOM objects have gone.

  12. I have been wracking my brain for a couple of days trying to solve an issue with IE. If I use this method to import my XML response (containing HTML form markup), all the elements lose their names in IE.

    The offending line is:
    document.getElementById('xhrFrame').innerHTML = document.getElementById('xhrFrame').innerHTML;

    An example XML:

    American Express Cards



    User ID
    Password

    Before the innerHTML = innerHTML line, all the input elements have names in IE. After that line, they lose their names. I have tried retrieving the names every way I could think of, and looked at those discussed in this article:

    http://tobielangel.com/2007/1/11/attribute-nightmare-in-ie

    In the end, the only solution I have is to retrieve the input names before the innerHTML = innerHTML and then set them again afterwards. However, this is verbose and ugly.

    Has anyone encountered this problem? Is there a better solution?

    thanks
    -matt

  13. Thanks, sir; your approach to the subject is really appreciable. Actually, your presentation compels me to share a similar, related theme, as I am a fresher in this field. Two weeks ago I sent an initial JS loader to the browser, namely htmlOutput:

    function htmlOutput(xmlUri, xslUri) {
      var xmlHttp = XmlHttp.create(); // XmlHttp is a cross-browser httpRequest class. We load the xml file
      var async = true;
      xmlHttp.open("GET", xmlUri, async);
      xmlHttp.onreadystatechange = function () {
        if (xmlHttp.readyState == 4) { // Once loaded, the xsl stylesheet is loaded too, then imported into the xslt processor
          // TODO error control
          var processor = new XSLTProcessor();
          var xslHttp = XmlHttp.create();
          var xslasync = true;
          xslHttp.open("GET", xslUri, xslasync);
          xslHttp.onreadystatechange = function () {
            if (xslHttp.readyState == 4) {
              processor.importStylesheet(xslHttp.responseXML);
              var newDocument = processor.transformToDocument(xmlHttp.responseXML); // xslt produces a DomDoc
              var bodyChildren = new Array();
              bodyChildren = newDocument.getElementsByTagName("body")[0].childNodes;
              for (var i = 0; i < bodyChildren.length; i++) {
                // each part of the body is imported into the displayed doc, namely document
                document.getElementsByTagName("body")[0].appendChild(document.importNode(bodyChildren[i], true));
              }
              var headChildren = new Array();
              headChildren = newDocument.getElementsByTagName("head")[0].childNodes;
              for (var i = 0; i < headChildren.length; i++) {
                // same for each part of the head, imported into the displayed doc, namely document
                document.getElementsByTagName("head")[0].appendChild(document.importNode(headChildren[i], true));
              }
            }
          };
          xslHttp.send(null);
        }
      };
      xmlHttp.send(null);
    }

    This is very classical: the XML is loaded, then the XSL; an XSLTProcessor is instantiated and filled with the stylesheet, and the transformation is made. Finally, I import into the displayed document every node that interests me, in the body and then in the head section; in this case, every node of the XHTML output produced. It works. I was happy, because everything is loaded just once, and my xmlHttp.responseXML is ready to be worked on, while the nodes are displayed and present in the DOM inspector. But there is a problem: the script node I add via XSLT does not seem to be evaluated. Its functions are not taken into consideration. Any ideas?

  14. It’s truly right, to the point. See, I have pre-existing XHTML pages I’d like to grab data from. This eliminates JSON because the data is already in XHTML format. I can import the pages as XML, but then I run into trouble with formatting markup. I can use innerHTML, but that means giving up DOM scripting control over the imported data.
    It appears this script will let me import a chunk of XHTML from an existing page, retain existing XHTML markup and CSS selectors, and also be able to manipulate it via the DOM. That’s how it works.

  15. It seems to me that the “default styles” won’t be used if an element is not in the XHTML namespace. I saw this behavior myself when I removed the xmlns attribute from the XHTML document that I was receiving via XMLHttpRequest in one of my test scripts. (As far as I know, IE7 doesn’t support XML namespaces.)

    Try this:

    image_description:

    Paragraph 1

    Paragraph 2

    Paragraph n

  16. Importing XML can have its issues; I don’t think the format will ever be 100% perfect, although I haven’t tried what you’re doing there, Matt. Wish I could help in some way 🙂 I have spent several days doing what seems like the simplest thing when working with importing XML.

  17. Great work.

    But I have a question.

    If your code were modified into a class, would it work correctly if several objects were placed on a single page? For example, would XMLHttpRequest still transfer data correctly?

    Thanks, AlexeyGfi

  18. I use server-side code as a graceful-failure option, using the same XSLT as the JavaScript uses. The most convenient DOM method for me is XSLTProcessor.transformToFragment(xml, targetDocument); however, I’ve had to enhance Anthony’s importNode() to import the transformed XHTML as a DocumentFragment. Here’s the additional case (looks familiar, doesn’t it?):

    case document.DOCUMENT_NODE:
      var newNode = document.createDocumentFragment();
      if (allChildren && node.childNodes && node.childNodes.length > 0)
        for (var i = 0, il = node.childNodes.length; i < il;)
          newNode.appendChild(document._importNode(node.childNodes[i++], allChildren));
      return newNode;
      break;

  19. Thanks for the article and comments, I found it very useful.

    Check out the following post to see how to create pure HTML templates that are compatible with IE:

    http://ccsoftwarefactory.com/blog/index.php/2009/10/26/html-templates-with-javascript-and-external-xml

    The method uses a similar implementation of the adoptNode() function that includes tweaks for the Missing TBody Problem, the Special IE Attribute Names (className, CSSText, HTMLFor) and the Attribute Case Sensitiveness Problem.

    Comments welcome.

    Cheers!

  20. Firstly I buy 100% into your requirement and philosophy Anthony and I would like to thank you for taking the time to write such an amazing article.

    Secondly, I would like to point the following out to the narrow-minded JSON zealots in your post, and also hopefully assist those who as a result are being exposed to their comments: while JSON absolutely has its place, you and your programming will always benefit by broadening your outlook. Don’t assume that any one solution is best all the time, every time.

    There are many reasons why XHTML has a place. One reason for example is that XML is adopted right into the core of many databases and at a simplistic level this means that using XHTML internally (or a stripped down version) allows one to pass data from the core to the user with minimal/ even no transformation (and please can I ask any commenters not to labour the “gee, but that is inefficient in a database” point as the details of this aspect are well known to me) . Another is the massive supporting framework that exists for XML, a quick example being XSLT; which allows simple, elegant data transformations to XML data. Combining simplicity and elegance in my book makes for great programmatic results.

    So whether you choose JSON or XHTML for your particular purpose, no matter. In my case I will for sure be taking advantage of this wonderful post.

    Thank you again

  21. Just a quick update that thanks to your article, from now on it will take just 2 lines of code for me to take any subtree of a response XHTML document and append the entire subtree to any branch of the current document – and that I have tested this successfully on IE 6,7,8, FF3, current Opera, current Chrome and current Safari

    Of particular benefit was

    if (!document.importNode) {
      parentNode.innerHTML = parentNode.innerHTML;
    }

    for IE 6 and 7, which would have taken me an unknown amount of time to identify, as IE6 and IE7 throw no errors if this is omitted and simply display “nothing”

    And one last note is that I highly recommend that people stay away from innerHTML, as quirks that I saw with it, even in the latest Firefox, were what started me on the DOM investigation in the first place – you simply cannot be sure when innerHTML will not work…

    Thanks again and good luck to all others out there grappling with cross browser compatibility – I highly recommend taking advantage of this fantastic article.

