Cross-Browser Scripting with importNode()

by Anthony Holdener

34 Reader Comments

  1. Why not just use JSON? It’s faster and smaller.

  2. From my experience over the past years, I have to say that I won’t go back to XML handling in XmlHttpRequests ever. And innerHTML, standards-compliant or not, de facto works everywhere, and it is fast. And speed is a concern, especially as your application gets bigger and bigger.

    The same goes for harvesting XML data islands. Your browser’s native scripting language is JS, so JSON seems more natural to me for any data exchange.

    As long as you don’t have to provide the XML to another place where it is required, I don’t see a justification for it.
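    A minimal sketch (not from the article) of the contrast this commenter is drawing: extracting the same list of names from a JSON response versus an XML response. The payload shape (`{"items": [...]}` / `<item name="..."/>`) and both function names are hypothetical.

```javascript
// With JSON, one native call yields a plain JavaScript object;
// no DOM walking is needed afterwards.
function namesFromJson(responseText) {
  return JSON.parse(responseText).items;
}

// With XML, you must walk the response document node by node
// to extract the same data.
function namesFromXml(responseXML) {
  var items = responseXML.getElementsByTagName('item');
  var names = [];
  for (var i = 0; i < items.length; i++) {
    names.push(items[i].getAttribute('name'));
  }
  return names;
}
```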

  3. Broken link: «Download the “final”:http://alistapart.com/articles/xbImportNode.js solution»

  4. ThinkelMan summed up my first thoughts exactly when reading this article. I have to admit your solution is nice, but I can’t help thinking you caused yourself a lot of extra work when you could have accomplished much the same thing using JSON rather than XML as the response format.

    Was there any particular reason for choosing XML over JSON in the application?

  5. You could also write some IE innerHTML workarounds, e.g. setting the innerHTML of a <SELECT> box using the IE outerHTML property:

    if (selectNode.outerHTML && selectNode.nodeName == "SELECT") {
      selectNode.outerHTML = selectNode.outerHTML.replace(
        /(<SELECT[^<]*>).*(<\/SELECT>)/, '$1' + content + '$2');
    } else {
      selectNode.innerHTML = content;
    }

  6. For full-featured XML processing support check out Taconite (http://sourceforge.net/projects/taconite/) or its jQuery equivalent (http://malsup.com/jquery/taconite).  These libs have long since solved the importNode problem in a manner that hides all the details from the user.

  7. There are generally two approaches to retrieving data by asynchronous request:

    • Retrieve an XML (or JSON) object containing data nodes, then parse out that data and build the HTML client-side.
    • Retrieve raw text or HTML to insert into an element using innerHTML (or whatever).

    These two approaches are meant for different uses, but the author of the article seems to want to use both at the same time.

    If you want data, parse it out and manipulate it as you like.  If you want prebuilt HTML, ask the server for that and don’t try to parse it yourself.

    In other words, I think the whole problem can be avoided, at least as far as the provided example is concerned.
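    The first approach above, sketched as a client-side HTML builder (the `buildList` helper is hypothetical, not from the article):

```javascript
// Data-first approach: the server sends data, the client builds the markup.
function buildList(items) {
  var html = '<ul>';
  for (var i = 0; i < items.length; i++) {
    html += '<li>' + items[i] + '</li>';
  }
  return html + '</ul>';
}
// The second approach would instead take prebuilt HTML from responseText
// and assign it to an element's innerHTML unchanged.
```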

  8. JSON would be a very viable solution if I were taking data, say, from a database and sending it asynchronously from the server to the client. If I want to strip out sections from a page that is already formatted as XHTML, though, why go through the hassle of:

    1. converting it to JSON on the server from wherever it came from
    2. sending it to the client
    3. converting it back to XHTML on the client

    XML works well in these cases and has no additional processing costs.

    I’m not an XML zealot by any means, but I like to have solutions for every scenario out there just in case it comes up. I understand where Markus and Tim are coming from.

  9. I understand what people are saying about importing then parsing XML or alternately using innerHTML, and why some people think this _importNode script is pointless.  However, I think this script has potential.

    My scenario: I have pre-existing XHTML pages I’d like to grab data from.  This eliminates JSON because the data is already in XHTML format.  I can import the pages as XML, but then I run into trouble with formatting markup (such as <em>). I can use innerHTML, but that means giving up DOM scripting control over the imported data.

    It appears this script will let me import a chunk of XHTML from an existing page, retain existing XHTML markup such as <em> and CSS selectors, and also be able to manipulate it via the DOM.  And as a plus, it does it with a very small script that is cross-browser and reusable.

    What more can I ask for?  :)

  10. We took a similar approach at my work, where we had an XML response containing an HTML “payload” among other things. We wanted to inject the HTML into the page without resorting to parsing the responseText with a regular expression and using innerHTML. What we came up with was a copyNodes function:

    “http://www.stringify.com/misc/copynodes.js”:http://www.stringify.com/misc/copynodes.js

    This function allows you to specify a source node in the responseXML, a destination node in the current document, and an optional filter to weed out unwanted nodes.

  11. Hmm, seems interesting. Ajaxian has an interesting story if anyone wants to read up on the debate:

    “JSON vs. XML”:http://ajaxian.com/archives/json-vs-xml-the-debate
    “The debate in AjaxWorld Mag”:http://ajaxian.com/archives/ajaxworld-magazine-json-versus-xml

    Thanks for the post, I’ll be spending more time w/ JSON.

  12. … “E4X”:http://en.wikipedia.org/wiki/E4X to be widely supported, then we’re in business!

  13. The preview showed the proper characters, not the entities!

  14. I can see this being useful in some very specific cases, but most of the time it would seem to be easier to pull the XHTML text using XHR, and then use Regular Expressions to find the appropriate data within.

  15. The preview showed the proper characters, not the entities!

    Yup, sorry. Should be fixed in the New CMS™ (coming soon).

  16. (The preview showed the ™ symbol, but the system printed the entity. Whee.)

  17. XML is not worth the trouble.

    This kind of “solution” to work around the X in Ajax just helps convince me that JSON and innerHTML are the best way to go, standards or not. The relatively few problems you run into using innerHTML are pretty minor compared to the headaches of the badly designed and inconsistently implemented XML DOM.

    >>>
    1. converting it to JSON on the server from wherever it came from
    2. sending it to the client
    3. converting it back to XHTML on the client
    >>>

    Yeah…
    1. php_json, or any of the tools available at http://www.json.org/. Even without handy extensions, outputting JSON is at least as simple as outputting XML. (In my opinion, it’s easier.)
    2. XHR: just use the responseText property.
    3. eval() or http://www.json.org/json.js, and you’ve got a native JavaScript object to work with, instead of a clunky DOM.
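    Step 3 above, sketched with the native `JSON.parse` that later browsers ship: a safer drop-in for the `eval()` the commenter mentions, since it never executes the payload. The response text here is a made-up example.

```javascript
// Hypothetical responseText; in practice this comes from an XHR object.
var responseText = '{"name": "widget", "count": 3}';

// JSON.parse yields a native object without running arbitrary code,
// unlike eval(), which executes whatever the response contains.
var data = JSON.parse(responseText);
// data.name -> "widget", data.count -> 3
```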

    It’s much faster for the browser, and the code that parses it is simpler and easier to read. In the real world, we spend about 80% of our time maintaining code, not writing it; simplicity and readability == $$. On top of that, JSON is typically fewer bytes than XML, and with innerHTML you can update the page with a single redraw/reflow, which saves perceived and actual time for the user.

    And JSON isn’t just serialized JavaScript any more. It’s extensible and simple enough that it’s fast becoming a form of serialized anything: PHP, Java, C++, and so on.

    If the best, fastest, easiest, most readable and extensible and maintainable method of asynchronous data transfer and DHTML updating is not supported by the standards, does that mean that the method is broken, or that the standards are?

    Thankfully, standards change, albeit painfully slowly.
    http://www.google.com/search?q=jsonrequest

    It’s a nicely written article, though.  Thankfully we all get to learn from your headache instead of having our own :)

  18. I’m just curious if any Safari users out there have tried this solution and found that it worked or didn’t work?

  19. “Internet Explorer did not define document.ELEMENT_NODE and the other node types as part of its DOM implementation.”

    These constants are defined on the Node interface (Node.ELEMENT_NODE, …):
    http://www.w3.org/TR/DOM-Level-2-Core/core.html#ID-1950641247

    AnyNode.*_NODE works AFAIK only in Firefox.
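    For IE, one workaround is to define those constants yourself. A sketch follows; the `NODE_TYPE` object name is ours, but the numeric values are fixed by DOM Level 2 Core, so hard-coding them is safe.

```javascript
// Node-type values as specified by DOM Level 2 Core; keeping them in a
// plain object sidesteps IE's missing Node interface constants.
var NODE_TYPE = {
  ELEMENT_NODE: 1,
  ATTRIBUTE_NODE: 2,
  TEXT_NODE: 3,
  CDATA_SECTION_NODE: 4,
  ENTITY_REFERENCE_NODE: 5,
  ENTITY_NODE: 6,
  PROCESSING_INSTRUCTION_NODE: 7,
  COMMENT_NODE: 8,
  DOCUMENT_NODE: 9,
  DOCUMENT_TYPE_NODE: 10,
  DOCUMENT_FRAGMENT_NODE: 11,
  NOTATION_NODE: 12
};
```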

  20. The problem I see with innerHTML is that it destroys previous DOM objects.

    Say you have a list. You want to apply a behavior to each list item without using any inline JS. A response comes back with HTML for a new <li>. Presumably you now write something like:

    ul.innerHTML += response.responseText;

    You’ve now lost all your event listeners for the other items, since those DOM objects have gone.
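    A sketch of the alternative: append the new item with DOM methods so the existing siblings, and any listeners bound to them, survive. The `appendListItem` helper is hypothetical; it takes the document explicitly so it is easy to exercise outside a browser.

```javascript
// Appending via createElement/appendChild leaves the list's existing child
// nodes (and their event listeners) untouched, unlike rewriting
// ul.innerHTML, which recreates every node from scratch.
function appendListItem(doc, ul, text) {
  var li = doc.createElement('li');
  li.appendChild(doc.createTextNode(text));
  ul.appendChild(li);
  return li;
}
```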

  21. I have been wracking my brain for a couple of days trying to solve an issue with IE. If I use this method to import my XML response (containing HTML form markup), all the elements lose their names in IE.

    The offending line is:
    document.getElementById('xhrFrame').innerHTML = document.getElementById('xhrFrame').innerHTML;

    An example XML:

    <response type="object" id="RequestFIDetail">
    <h1>American Express Cards</h1>
    <div class="content">
      <form class="yodleeForm" acti>
      <input type="hidden" name="fiId" value="12"/>
      <input type="hidden" name="filoginId" value="0"/>
      <input type="hidden" name="refresh" value="true"/>

      <div class="row">
      <span class="label">User ID</span>
      <span class="ff"><input class="ssl_search" type="text" name="LOGIN" value="" size="20" maxlength="40" /></span>
      </div>

      <div class="row">
      <span class="label">Password</span>
      <span class="ff"><input class="ssl_search" type="password" name="PASSWORD" value="" size="20" maxlength="40" /></span>
      </div>

      <div class="row">
      <span class="ff"><input type="submit" name="submit" value="Login" class="submit" /></span>
      </div>
      </form>
    </div>
    </response>

    Before the innerHTML = innerHTML line, all the input elements have names in IE. After that line, they lose their names. I have tried retrieving the names every way I could think of, and looked at those discussed in this article:

    http://tobielangel.com/2007/1/11/attribute-nightmare-in-ie

    In the end, the only solution I have is to retrieve the input names before the innerHTML = innerHTML line and then set them again afterwards. However, this is verbose and ugly.

    Has anyone encountered this problem? Is there a better solution?

    thanks
    -matt

  22. Maybe someone could convince “Dean Edwards”:http://dean.edwards.name to add this to his base2 project?

  23. Thank you, sir; your approach to the subject is much appreciated. Your presentation compels me to share a related experience, as I am new to this field. Two weeks ago I sent an initial JS loader to the browser, namely htmlOutput:

    function htmlOutput(xmlUri, xslUri) {
      // XmlHttp is a cross-browser httpRequest class; load the XML file
      var xmlHttp = XmlHttp.create();
      var async = true;
      xmlHttp.open("GET", xmlUri, async);
      xmlHttp.onreadystatechange = function () {
        // Once the XML is loaded, load the XSL stylesheet too,
        // then import it into the XSLT processor
        if (xmlHttp.readyState == 4) {
          // TODO error control
          var processor = new XSLTProcessor();
          var xslHttp = XmlHttp.create();
          var xslAsync = true;
          xslHttp.open("GET", xslUri, xslAsync);
          xslHttp.onreadystatechange = function () {
            if (xslHttp.readyState == 4) {
              processor.importStylesheet(xslHttp.responseXML);
              // The XSLT transformation produces a DOM document
              var newDocument = processor.transformToDocument(xmlHttp.responseXML);
              // Each child of the new body is imported into the
              // displayed doc, namely document
              var bodyChildren = newDocument.getElementsByTagName("body")[0].childNodes;
              for (var i = 0; i < bodyChildren.length; i++) {
                document.getElementsByTagName("body")[0].appendChild(
                  document.importNode(bodyChildren[i], true));
              }
              // Same for each child of the head
              var headChildren = newDocument.getElementsByTagName("head")[0].childNodes;
              for (var i = 0; i < headChildren.length; i++) {
                document.getElementsByTagName("head")[0].appendChild(
                  document.importNode(headChildren[i], true));
              }
            }
          }
          xslHttp.send(null);
        }
      }
      xmlHttp.send(null);
    }


    This is very classical: the XML is loaded, then the XSL; the XSLTProcessor is instantiated, filled with the stylesheet, and the transformation is made.
    Finally, I import into the displayed document every node that interests me, first in the body and then in the head section; in this case, every node of the XHTML output produced. It works. I was happy, because everything is loaded just once, and my xmlHttp.responseXML is ready to be worked on, while the nodes are displayed and present in the DOM inspector.
    But there is a problem: the script node I add via XSLT does not seem to be evaluated. Its functions are not taken into consideration. Any ideas?
    Yours

     

  24. What is the license of this code? I’m assuming some kind of GPL…?

  25. It’s truly right to the point. See, I have pre-existing XHTML pages I’d like to grab data from. This eliminates JSON, because the data is already in XHTML format. I can import the pages as XML, but then I run into trouble with formatting markup (such as <em>). I can use innerHTML, but that means giving up DOM scripting control over the imported data.
    It appears this script will let me import a chunk of XHTML from an existing page, retain existing XHTML markup such as <em> and CSS selectors, and also be able to manipulate it via the DOM. That’s how it works.

  26. As per “ALA’s Copyright”:http://www.alistapart.com/copyright/ all of the source code is free to use—no questions asked.

  28. It seems to me that the “default styles” won’t be used if an element is not in the XHTML namespace. I saw this behavior myself when I removed the xmlns attribute from the XHTML document that I was receiving via XMLHttpRequest in one of my test scripts. (As far as I know, IE7 doesn’t support XML namespaces.)

    Try this:

    image_description:
      <div id="imageDescription">
      Paragraph 1
      Paragraph 2
      …
      Paragraph n
      </div>

  29. Importing XML can have its issues; I don’t think the format will ever be 100% perfect, although I haven’t tried what you’re doing there, Matt. Wish I could help in some way :) I have spent several days doing what seems like the simplest thing when working with imported XML.

  30. Great work.

    But i have a question.

    If your code were modified into a class, would it still work correctly if several objects were placed on a single page? For example, would XMLHttpRequest transfer the data correctly?

    Thanks, AlexeyGfi

  31. I use server-side code as a graceful-failure option, using the same XSLT as the JavaScript uses. The most convenient DOM method for me is XSLTProcessor.transformToFragment(xml, targetDocument); however, I’ve had to enhance Anthony’s importNode() to import the transformed XHTML as a DocumentFragment. Here’s the additional case (looks familiar, doesn’t it?):

    case document.DOCUMENT_NODE:
      var newNode = document.createDocumentFragment();
      if (allChildren && node.childNodes && node.childNodes.length > 0)
        for (var i = 0, il = node.childNodes.length; i < il;)
          newNode.appendChild(document._importNode(node.childNodes[i++], allChildren));
      return newNode;
      break;

  32. Thanks for the article and comments, I found it very useful.

    Check out the following post to see how to create pure HTML templates that are compatible with IE:

    http://ccsoftwarefactory.com/blog/index.php/2009/10/26/html-templates-with-javascript-and-external-xml

    The method uses a similar implementation of the adoptNode() function that includes tweaks for the missing-tbody problem, the special IE attribute names (className, cssText, htmlFor), and the attribute case-sensitivity problem.

    Comments welcome.

    Cheers!

  33. Firstly I buy 100% into your requirement and philosophy Anthony and I would like to thank you for taking the time to write such an amazing article.

    Secondly, I would like to point the following out to the narrow-minded JSON zealots in this thread, and hopefully assist those who are being exposed to their comments: while JSON absolutely has its place, you and your programming will always benefit from broadening your outlook. Don’t assume that any one solution is best all the time, every time.

    There are many reasons why XHTML has a place. One is that XML is adopted right into the core of many databases, which at a simplistic level means that using XHTML internally (or a stripped-down version of it) lets you pass data from the core to the user with minimal or even no transformation (and please, commenters, don’t labour the “gee, but that is inefficient in a database” point; the details of this aspect are well known to me). Another is the massive supporting framework that exists for XML, a quick example being XSLT, which allows simple, elegant transformations of XML data. Combining simplicity and elegance, in my book, makes for great programmatic results.

    So whether you choose JSON or XHTML for your particular purpose, no matter. In my case I will for sure be taking advantage of this wonderful post.

    Thank you again

  34. Just a quick update: thanks to your article, from now on it will take just two lines of code for me to take any subtree of a response XHTML document and append that entire subtree to any branch of the current document. I have tested this successfully on IE 6, 7, and 8, Firefox 3, and the current versions of Opera, Chrome, and Safari.

    Of particular benefit was

    if (!document.importNode) {
      parentNode.innerHTML = parentNode.innerHTML;
    }

    for IE 6 and 7, which would have taken me an unknown amount of time to identify, as IE 6 and IE 7 throw no errors if this is omitted and simply display “nothing”.

    One last note: I highly recommend that people stay away from innerHTML. The quirks I saw with it, even in the latest Firefox, were what started me on the DOM investigation in the first place; you simply cannot be sure when innerHTML will not work.

    Thanks again and good luck to all others out there grappling with cross browser compatibility – I highly recommend taking advantage of this fantastic article.
