A More Useful 404
Issue № 272


Encountering 404 errors is not new. Often, developers provide custom 404 pages to make the experience a little less frustrating. However, for a custom 404 page to be truly useful, it should not only provide relevant information to the user, but should also provide immediate feedback to the developer so that, when possible, the problem can be fixed.


To accomplish this, I developed a custom 404 page that can be adapted to the look and feel of the website it’s used on and uses server-side includes (SSI) to execute a Perl script that determines the cause of the 404 error and takes appropriate action.

Overall design

To provide useful and specific information to the user, it is necessary to define the possible causes of a 404 error. Here are four possible causes:

  1. The user mistyped the URL or followed an out-of-date bookmark. These are grouped together because we’ll see that it’s not possible to distinguish one from the other.
  2. The user encountered a 404 error because of a broken link within my site.
  3. The 404 error results from a broken link returned by a search engine.
  4. The 404 error was caused by a broken link on another website, but not a search engine.

In each of these cases, the 404 page provides information about the specific cause of the error. If the broken link is either on my website or on someone else’s website, but not returned via a search engine, the Perl script sends me, the developer, an e-mail about the broken link, including the URL of the page containing the link and the page the user was trying to reach.

Custom 404 page

SSI allows you to include common snippets of static HTML, such as a header and footer, throughout a site. SSI pages, which typically have an .shtml extension, are processed by the server before being sent to the browser.

When an SSI directive such as this one:

<!--#include virtual="/inc/header.html" -->

is encountered in the .shtml file, the server replaces that line with the contents of the file specified.

However, in addition to this rather simple function, SSI can execute programs such as Perl scripts. In this case, the output generated by the Perl script is sent to the browser.

Since I wanted my custom 404 page to provide specific information to the user as well as send information to me, it is an .shtml page in which I use SSI to execute a Perl script that does all the work. For my site, the SSI directive looks like this:

<!--#include virtual="/cgi-bin/404.pl" -->

The rest of the 404 page contains code to give the page the look and feel of the website that contains it.

Enabling custom 404 pages

The web server needs to be configured to use SSI. This can be done either by using an .htaccess file or by modifying the Apache httpd.conf file.

First, to have Apache serve my custom 404 page when a 404 error is encountered, I add the ErrorDocument directive to the httpd.conf file or the .htaccess file. It looks like this:

ErrorDocument 404 /errorpages/404.shtml

Second, to tell Apache to execute CGI scripts, I need to make sure the Options directive in the httpd.conf file includes the ExecCGI parameter. Alternatively, I can just add Options +ExecCGI to the .htaccess file.
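For reference, the combined .htaccess configuration might look like the sketch below. This is an illustrative Apache 2.x fragment, not the article's exact configuration; the paths are examples, and the server's AllowOverride setting must permit the Options and FileInfo directive groups for this to take effect:

```apacheconf
# Serve the custom SSI page when a 404 occurs
ErrorDocument 404 /errorpages/404.shtml

# Enable server-side includes and CGI execution
Options +Includes +ExecCGI

# Process .shtml files for SSI directives
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml

# Treat .pl files as CGI scripts
AddHandler cgi-script .pl
```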

Perl script

The Perl script does the processing to determine the appropriate action. To identify the source of the 404 error, the Perl script reads the HTTP_REFERER environment variable, which contains the URL of the page the user just came from. I realize that there are no guarantees that this is accurate, because it can be faked, but this isn’t really a concern for this application.

In general, the Perl code performs the following steps:

  1. Check HTTP_REFERER to determine the source of the 404 error.
  2. Display the appropriate message to the user.
  3. Send me an e-mail message, if needed for the particular error.
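The dispatch logic behind those steps can be sketched as a small classifier. This is illustrative code, not the article’s actual script: the classify_404 helper is my own invention, and the hard-coded search-engine list stands in for the searchengines.txt file the real script reads.

```perl
use strict;
use warnings;

# Stand-in for the article's searchengines.txt file (illustrative).
my @search_engines = ('google.', 'yahoo.', 'search.msn.');

# Hypothetical helper: return the case number (1-4) for a 404 error,
# given the HTTP_REFERER value and this server's name.
sub classify_404 {
    my ($referrer, $server_name) = @_;
    return 1 if !defined($referrer) || length($referrer) == 0; # mistyped URL or stale bookmark
    return 2 if index($referrer, $server_name) >= 0;           # broken link on my own site
    for my $engine (@search_engines) {
        return 3 if index($referrer, $engine) >= 0;            # stale search-engine result
    }
    return 4;                                                  # broken link on another site
}
```

For example, classify_404('http://www.google.com/search?q=foo', 'www.mydomain.com') falls into case 3, while an empty referrer is case 1.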

Case 1: Mistyped URL or out-of-date bookmark

In the case of a mistyped URL or an out-of-date bookmark, the HTTP_REFERER will be blank. In Perl, I check for this using the following code:

if (length($ENV{'HTTP_REFERER'}) == 0)

The Perl script displays a message in the custom 404 page that tells the user what the problem is. In the messages displayed to the user, as well as any e-mail messages sent to me, I provide the URL of the requested page using this code:

my $requested = "http://$ENV{'SERVER_NAME'}$ENV{'REQUEST_URI'}";

Case 2: Broken link on my website

When HTTP_REFERER is not blank, I check to see if it refers to my site, somebody else’s site, or a search engine. If it contains my domain name, then I know the user followed a link from one of my pages. I use the following Perl snippet to check for this:

if (index($ENV{'HTTP_REFERER'}, $ENV{'SERVER_NAME'}) >= 0)

The index function returns the position of SERVER_NAME within the HTTP_REFERER string, or -1 if it isn’t found. So if index returns zero or greater, I know that the user was on a page on my site.

In this case, I present a message to the user stating that I have a broken link on my page. However, rather than ask the user to send me an e-mail telling me this, the Perl script sends me an e-mail containing all of the necessary information. At the same time, I let the user know that an e-mail has just been sent and the broken link will be corrected shortly.

In the e-mail message, I set the subject of the message to clearly identify that there is a broken link on my site and provide the domain name using $ENV{'SERVER_NAME'}. This allows me to use this script on multiple sites while simplifying the sorting of any incoming messages. The body of the e-mail tells me the URL of the page the user was on, as well as the URL of the requested page.
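As a sketch of how that message might be assembled, the helper below builds the e-mail text from the pieces described above. The helper name is my own invention, and the sendmail path in the comment is an assumption that varies by system; this is not the article’s actual code.

```perl
use strict;
use warnings;

# Hypothetical helper: build the notification e-mail for a broken
# link on my own site (case 2). The wording mirrors Table 1 below.
sub broken_link_mail {
    my ($referrer, $requested, $host) = @_;
    return "From: $host 404 script\n"
         . "Subject: Broken link on my site, $host.\n\n"
         . "There appears to be a broken link on my page, $referrer. "
         . "Someone was trying to get to $requested from that page. "
         . "Why don't you take a look at it and see what's wrong?\n";
}

# To actually send it, one common approach is to pipe the message to
# sendmail (path is an assumption; adjust for your system):
#   open(my $mail, '|-', '/usr/sbin/sendmail webmaster@example.com')
#       or die "cannot run sendmail: $!";
#   print $mail broken_link_mail($referrer, $requested, $host);
#   close($mail);
```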

Case 3: Broken link from a search engine

To determine whether the user came from a search engine results page, I check HTTP_REFERER against a list of search engine URLs. This list is stored in a simple text file that the Perl script reads. By using an external file containing the list of URLs, I can update the list at any time without having to modify the Perl script.

Here are the Perl snippets for this case:

my $SEARCHENGINE = "false";
my $referrer = $ENV{'HTTP_REFERER'};

open(FILE, "searchengines.txt") or die "cannot open the file";
while (<FILE>) {
  chomp;
  if (index($referrer, $_) >= 0) {
    $SEARCHENGINE = "true";
    last;
  }
}
close(FILE);

if ($SEARCHENGINE eq "true")

In this case, I let the user know that the search engine returned an old link. Since there really isn’t anything I can do about it, I don’t need an e-mail message; however, I may want one just so I know about the problem.

Case 4: Broken link on somebody else’s website

If the 404 was not the result of any of the three previous situations, then I know it was caused by a broken link on somebody else’s page. So again, the Perl script displays the appropriate information to the user and sends me an e-mail message. I can then go to the page with the broken link and if the page owner has provided contact information, I can notify them of the problem.

Other than Apache

There is no reason this won’t work on web servers running Microsoft IIS. The server needs to be configured to allow scripts to be executed, and of course Perl needs to be available.


Implementing this custom 404 page improves the usability of my site by helping the user, and it keeps me informed of broken links. For clarity, the table below shows the four cases I discussed, along with the message displayed to the user and any e-mail message that is sent.

Table 1: The four cases discussed

Case 1: Mistyped URL or out-of-date bookmark

Message to user:
Sorry, but the page you were trying to get to, http://www.mydomain.com/no-such-page.shtml, does not exist.
It looks like this was the result of either a mistyped address or an out-of-date bookmark in your web browser.
You may want to try searching this site or using our site map to find what you were looking for.

E-mail message: None.

Case 2: Broken link on one of my pages

Message to user:
Sorry, but the page you were trying to get to, http://www.mydomain.com/no-such-page.shtml, does not exist.
Apparently, we have a broken link on our page. An e-mail has just been sent to the person who can fix this and it should be corrected shortly. No further action is required on your part.

E-mail message:
From: www.mydomain.com 404 script
Subject: Broken link on my site, www.mydomain.com.
There appears to be a broken link on my page, http://www.mydomain.com/badlink.shtml. Someone was trying to get to http://www.mydomain.com/no-such-page.shtml from that page. Why don’t you take a look at it and see what’s wrong?

Case 3: Broken link on a search engine results page

Message to user:
Sorry, but the page you were trying to get to, http://www.mydomain.com/no-such-page.shtml, does not exist.
It looks like the search engine has returned a link to an old page. These old links should eventually be removed from their indexes, but since these are automatically generated there is no one to contact to try to correct the problem.
You may want to try searching this site or using our site map to find what you were looking for.

E-mail message: Optional. An e-mail message is not needed because there isn’t much I can do about the broken link, but I may go ahead and have the script send me one just so I know about it.

Case 4: Broken link on somebody else’s page

Message to user:
Sorry, but the page you were trying to get to, http://www.mydomain.com/no-such-page.shtml, does not exist.
Apparently, there is a broken link on the page you just came from. We have been notified and will attempt to contact the owner of that page and let them know about it.
You may want to try searching this site or using our site map to find what you were looking for.

E-mail message:
From: www.mydomain.com 404 script
Subject: Broken link on somebody else’s site.
There appears to be a broken link on the page, http://www.somedomain.com/badlink.shtml. Someone was trying to get to http://www.mydomain.com/no-such-page.shtml from that page. Why don’t you take a look at it and see if you can contact the page owner and let them know about it?

About the Author

Dean Frickey

Dean Frickey has been involved with the internet since 1995, teaching classes at the local university. His interests are in designing to web standards and web usability. He lives and works in Idaho Falls, Idaho.

56 Reader Comments

  1. It’s true, a 404 page isn’t often something that is in the forefront of the designer/developer’s mind when building a site, but it really is important. One error or bad link is often enough to make the user leave and never look back, but if your 404 page soothes them, makes them feel like you’re sorry for any inconvenience, and shows you want to help them find what they’re looking for, then you stand a good chance of keeping the visitor.

    All good stuff!

  2. Hi Dean,

    of course you can do something about wrong links in search results. The best solution would be to redirect the user to the correct page, assuming that just the link has changed.

    Otherwise, you can use the Google Webmaster Tools, Yahoo! Site Explorer, or any of the other search engines’ webmaster tools to remove that page from their index and allow users to have a better experience on the web.


    Olaf Offick

  3. Has anyone used the Google 404 code from Webmaster Tools?

    It’s a JavaScript snippet you put in the body of the page, and it will try to suggest other pages in your site (that are in the Google index) that match the bad URL the user typed.

    Just wondering how well it works.

  4. Sometimes it is also a good idea to redirect a 404 to the index page. Especially if you totally changed your site structure and can’t redirect each old URL to the new one, this might be an option. Otherwise you may lose a lot of link power.

  5. I’m sorry, but I just don’t subscribe to this notion at all.

    For one, the execution could easily result in lots of emails from any number of badly configured web spiders. I mean, we’ve all seen the number of 404s our sites get.

    It’s certainly not unique to get reports on the location of outdated links. But the article is just a taster of what is possible. It would be much more worthwhile to see an article on how to utilize something similar as part of a 500, with a complete debug/trace going to the developer. Perhaps that’s the coder in me speaking out, and it has little place on ALA.

  6. I’m currently using Ruby on Rails as my default web application framework, and it makes it incredibly easy to handle these missing requests. Simply create a “catch-all” controller that will log what the user requested and the number of times it has been requested, and you can put in the logic for directing them to a proper page (say you’ve analyzed where multiple users are going, and you know what they are trying to get to).

  7. This is helpful since handling errors is often forgotten about during the rush to go live.

    If you are scripting this type of thing, it may be useful to log all 404 errors per session/IP address/IP range and choose some sort of threshold to terminate a session or temporarily ban the IP address. The threshold level will depend on the sensitivity of data on the site and say whether a user is logged in. If there are many ‘not founds’ in a short period of time, this can be an indicator of someone scanning the site. But if you have opted in to something that scans in this way (perhaps a remote vulnerability assessment tool), you’ll need to exclude that from any filtering. 404 logging should also be correlated with server error logging (as Peter alludes to above).

    When using any data that can be modified by a user such as HTTP_REFERER or the REQUEST_URI be very careful about using it in scripts, writing it to your database, including it in an email or displaying it back to screen. If you are not careful, these could lead to added vulnerabilities in the web site.

    The 404 page/script should also return a 404 ‘Not found’ HTTP status code. Interestingly on ALA, the link to your (Dean Frickey’s) details:


    returns a ‘not found’ type of page, but the status code is ‘200 OK’ like other ‘not found’ errors on ALA. Reference:


  8. bq. Interestingly on ALA, the link to your (Dean Frickey’s) details:

    Temporary CMS hiccup. Sorry about that. Dean’s bio is of course online and the link works.

    bq. The status code is ‘200 OK’ like other ‘not found’ errors on ALA

    Thanks for alerting us to the issue.

  9. This solution is a duplication of effort, more complex than it needs to be, and opens up a potential attack vector that could otherwise be closed.

    Web servers log the HTTP referer for every request, in addition to the user agent string, originating IP, etc. The same thing could be done with a script to pull out all 404s from the access log and analyze them the same way. If you want the script to e-mail you, it can be run in a cron job.

    Using the logs means not needing to run extra (interpreted) code for every 404 request to pull the same information from the environment that’s already available in the log. In addition, you can turn off server-side includes, which removes a potential exploit vector for your server.

    The goal of informing the developer when users are seeing 404s is laudable; the method proposed here is inelegant.

  10. Kevin, your comment regarding duplication of effort is absolutely correct; however, it only deals with half of the problem. This solution gives you the flexibility to handle the error message that appears to users, which is more important.

    Honestly, the e-mail part of this is likely unnecessary, as most basic server-based stats software will list these pages. But the increased usability for the end user is stellar, and more sites should be implementing this thought process where possible.

  11. I agree with Chris. The idea here is not only to alert the webmaster (which certainly can be done in other ways) but also to provide better feedback to the user. In this light, I’d also be interested in experimenting with the Google 404 script that Jeremy mentioned. If it’s possible to include search results for the most likely page the user would have been looking for, that would add even more value.

  12. While I understand the comments above, I think this technique has great value to clients – especially those who never go ‘under the hood’ of their CMS. I can see myself coding this option in as a standard feature.

    I also appreciate the ever-present consideration of a well maintained site. (there are a lot out there that still are not.)

    Good show.

  13. One thing that wasn’t mentioned is the slew of personal firewalls/security programs that strip HTTP referer headers from all requests for privacy reasons. I would make note of that in the 404’s content for the first case, but still wouldn’t send an email.

  14. Olaf: You’re certainly correct in that the developer can make some effort to help resolve bad links from search engines, and my statement that “there really isn’t anything I can do about it” is technically not accurate. I have made attempts to remove URLs from search engines, but in my opinion, working through the search engine’s web site, looking for a link to add/remove URLs, then actually going through the process takes too much time and effort. Not to mention that the list of search engines I check against already numbers 150.

    Stefan: Personally, I would never automatically redirect a user from a 404 page back to the home page (I’m assuming that you were referring to doing this automatically). If users don’t realize that they’ve been redirected, they’ll think the index page has the information they’re looking for, and this probably won’t be the case. However, the message provided to the user could easily contain a link to the site’s home page in addition to the search engine and site map that I showed in my article.

    Peter: I can appreciate your concerns. I have had discussions with other developers who suggest that this will generate an inordinate amount of e-mail; however, from my experience on a relatively large site, this has not been the case. If a spider follows a link to a page that doesn’t exist, then the e-mail message from that will allow me to correct the link and the 404 goes away. If the error is the result of a missing file that should be available for download, either because it’s moved or was never uploaded, this becomes very helpful in identifying and resolving those issues. However, if someone, or something, is just hitting the site looking for pages that don’t exist, no e-mail is even generated.

    Clerkendweller: Excellent comments. The original intent was to provide immediate feedback to the user; then I thought “well, why not inform the developer of the problem?” and that led to what I have. The idea of logging these errors has been discussed for exactly the reasons you mention. I’m leaving it for phase II.

    Kevin: Thanks for your input. I realize that server logs contain this same information, however, I don’t want to look through any log report, and more importantly, I want to know right now that I have a problem on my site. A simple e-mail does the trick.

    Daniel, Chris, George: Thanks for your kind words. Yes, a large part of this effort was driven by trying to provide the user with accurate and specific information with links that will help them find what they’re looking for.

  15. I’ve seen quite a few sites (mostly content focused) that take some of the query string parameters and use them as a search to produce a list of pages that the user might have been trying to reach. Not exactly guaranteed to pull the right page, but might save the user some time, and keep them from leaving the site altogether.

  16. First, as other people have said, you ought to give the user what they want: a way to find that content.

    Whether it’s “a simple search box”:http://www.mediauk.com/asjuhsdhiufds or a “simple site map”:http://www.absoluteradio.co.uk/askjhdskjhfds or the “Google 404 script”:http://james.cridland.net/blog-and-a-404 you’re missing a trick if you simply create a “nice looking 404 page”.

    Second, you forgot to stress that it should still return a 404 HTTP header (if not, you’re causing a LOAD of issues with Google). And you might also want to use custom Google Analytics code on the page too, to enable logging of everything in a viewable way.

    All this is for nothing if your custom 404 page is 512 bytes or less: for visitors running Google Toolbar, Google serves its own rather better 404 error anyway, overwriting yours.

    And I strongly recommend against firing off automated emails: if your site is even mildly attacked by bots trying to find a way into your SQL/membership data/credit card data, then you’ve made your problem a whole lot worse.

  17. Fine, I’m not a professional; I’m not even old enough to be classed as one. But personally, and from my point of view, can I congratulate the author on another great ALA article. It appeals to my more practical mind, and even made me update my 404 page.
    In response to everyone else’s comments, I would like to add my own thoughts: firstly, that I rewrote this in PHP, hence reducing the security concerns, I think. (If someone more experienced would like to comment on this, please do; I love being proved wrong.) Secondly, I think that automated emails do have their advantages – having received 4 about one link motivated me to do something about it; logs are very good for statistics but don’t give me the imperative to do something (stress on the “me” there).
    And if James is following these comments, it would be interesting to know what harm emails cause.

  18. bq. If a spider follows a link to a page that doesn’t exist, then the e-mail message from that will allow me to correct the link and the 404 goes away.

    Except that Slurp goes around deliberately making up URLs that it expects not to exist, so that it can check the site is correctly sending a 404 for non-existent pages (so it knows it can assume that a 200 page really is A-OK). You don’t want to be notified of every instance of this. I’m sure there is something in the user-agent string that you could look for, with some sort of trickery to filter those out.

  19. A couple of readers have written, concerned that spiders and bots could result in a large number of e-mails being sent. But spiders and bots that are guessing at URLs will not generate e-mails, because they are not following bad links and therefore will (probably) not have an HTTP_REFERER. But as I mentioned, HTTP_REFERER can be faked, so I’m not going to say for certain that this is always the case with all spiders or bots. However, I have been using the ideas presented here for the past few years and have yet to experience any problems with spiders or bots accessing the site.

  20. I have taken this stuff into consideration when doing redesigns for a number of sites. Instead of sending emails, I created a database to capture that data and then allowed the client to provide a correct URL for common issue pages and turn it into a redirect page. It also allows the user to see a list grouped by common pages and know how often it comes up.

    Much more useful than an email every time something comes up. Also more scalable than the method in the article.

  21. Just one thing.

    If a user is reading a certain page on your site, and manually types a new URL but gets a 404 error, wouldn’t that count as “a bad link on your site”?

    For example, someone that’s on http://www.domain.com and types in the address bar http://www.domain.com/contact.

    Wouldn’t that send an HTTP_REFERER with your domain? Then you would get an email saying there’s a bad link on the index page when there really isn’t.

    Maybe I’m getting confused here, but I wanted to ask to make sure.

  22. Apache is more than happy to use a CGI as your ErrorDocument:

    ErrorDocument 404 /cgi-bin/404.pl

    If you don’t want that cgi-bin in the URL, just go for

    Alias /404-not-found /cgi-bin/404.pl
    ErrorDocument 404 /404-not-found

  23. Kevin Selles: It’s a good question, so thanks for asking it. The referer header is only sent by the browser when a link is followed, so, no, manually entering an incorrect URL will not generate an e-mail, regardless of the page you’re currently viewing.

  24. Dick Davies: You are correct in that Apache could be configured to call the Perl script directly. But when doing this, the Perl script would be responsible for building the complete 404 page, with all of the elements and styles necessary to have the look and feel of the website. And it would be more difficult to access the styles and shared elements, which would be located somewhere under document root.

    By executing the Perl script from within the .shtml page, the design of the 404 page (i.e. headers, footers, navigation, etc.) is easy. If you have a template for your site, the 404 page is simply a template file with the line

    <!--#include virtual="/cgi-bin/404.pl" -->

    inserted at the point where the content needs to appear.

  25. Hi, I tried this with some of my sites and they got better SERPs in only five days. I don’t know if this is the result of making my 404 page more friendly; I only know that this was the only change I made to my site. I also tried the plugin for WP from Marcus and it works fine. Thanks.

  26. Several readers have commented that I need to be sure to send the correct HTTP header in the response generated by my script. I’ll admit I hadn’t given this any thought, so I started looking into it. I used the Live HTTP Headers Firefox extension to watch the HTTP traffic. When I select a link or type in a URL to a page that doesn’t exist, I receive “HTTP/1.x 404 Not Found.” So it seems that Apache is sending the proper header. If anyone has more to add to this, please speak up.

  27. If you are getting a 404 from search engine referrals, that’s because you’ve forgotten to set up a 301 redirect to the new URL. Otherwise you should have a custom 410 page saying that the resource was removed for good. See “RFC 2616”:www.w3.org/Protocols/rfc2616/rfc2616.html for the HTTP 1.1 specification.


  29. Jeremy Flint,

    We use google 404 with our University website in combination with analytics so we know when pages are broken. I would like to combine our current setup with some of the things that were discussed in this article. Google 404 works really well for us…here is an example: http://www.uwgb.edu/asdf

  30. I’ve been looking for some good code for this for a while, and I’m especially impressed how this takes all of the error scenarios into account in order to address the issue. What I didn’t see was an easy download link for the Perl script; am I just supposed to copy and paste all the code snippets on the page together?

    Also, I’ve noticed when you get a 404 from sites like Google.com, it doesn’t display the actual 404 page URL, but displays the 404 page ON the URL you typed.



    Even though there is no oops404 page, it looks like there is. If someone wanted to integrate that functionality into this code, is that doable, and how would you go about it?

  31. Hi, this is a great article; I have converted most of the stuff to PHP.

    However, I can’t use $_SERVER['SERVER_NAME'] . $_SERVER['REQUEST_URI'] to determine the page the user was trying to get to. I just get the .php error page that the user was forwarded onto.

    Anyone have the same problem? How did you solve it?


  32. What is actually quite fun to play with is using the PHP similar_text() function (or anything similar) to match the desired URL against a list of valid URLs for a site and redirect to the closest match. Using certain limits as to how close the match must be, I use it to fix problems when a user mistypes one letter in a longish URL, or does a bad copy-and-paste job where they add or drop a letter.

  33. I didn’t take the time to read through all of the comments, so I don’t know if this was already suggested. For my 404 pages I utilize my site’s search functionality. If a URL is mistyped, I have a little message that says “We couldn’t find the page you requested. Did you mean one of these:” Then I display the search results from my site search.

  34. In reply to the Google 404 widget post, another similar idea – the “Linkgraph”:http://linkgraph.net/ widget, was released in December 2008. It’s a tool like the “Google 404 widget”:http://googlewebmastercentral.blogspot.com/2008/08/make-your-404-pages-more-useful.html only the Linkgraph widget uses a database of all previous URLs of a site’s pages to get the right URL when you click a broken link. Provided you got to the page through a broken link of course.

  35. I feel that 404 pages are useless, but mainly for SEO. What I would do, and what we suggest doing, is putting a PHP 301 redirect above the header of your 404 page. While this does not give a great user experience (unless you rework the page it’s directing to, to signal that the page was not found), it does 301 any 404 page that comes up before the search engines can see that it’s a 404, thus preserving a % of the link juice and pushing it on to the page it redirects to.

  36. I agree with bill that 404s are bad for SEO – however, 301-ing all 404s to a single other page isn’t quite optimal either. The optimal (if sometimes unattainable) experience would be for any old page/URL to get 301 redirected to the new/working page that is most applicable to the old page. This is really common when our clients move to a new CMS, for example. The old pages (the content of them) are still on the new site, but all at new URLs. We’ve actually done this so many times we built a tool called http://www.errorlytics.com that does exactly this quickly and easily for sites running PHP, JS and Rails…and has WordPress and Drupal plugins/modules. The idea of the tool is to 1) make you aware of the often ignored 404 and 2) make it so you can get rid of them via 301 redirects, thus preserving SEO and the end user experience.

    The key edge that Errorlytics offers over tools like, for example, linkgraph is that it allows the webmaster not only to ultimately get the user who has requested a bad/dead URL to get to a page that has some content on it – but Errorlytics does this via an SEO friendly 301 so even the spiders are happy. Using a frameset and a meta refresh is far from optimal on the SEO side.

  37. Steven, I just tried to mimic .htaccess and PHP, and for me $_SERVER["SCRIPT_URI"] returned the URL of the page the user asked for. You can temporarily send your 404 to a .php page with

    then go to any non-existent webpage on your website; the 404 page will show the PHP info page. Just examine the variables it shows and I’m sure you will find the webpage you entered, not only the name of the 404 script. Just do not forget to restore your original 404 script after fixing the problem.

  38. I have been developing web sites for a number of years now, and I am always surprised at the lack of attention paid to 404 pages by both developers and designers. I have always harvested 404 data and was very pleased to find this article and see that there are web developers and designers out there who understand how the “404” should be used to teach us about issues our end users are encountering. This technique has saved me shame and insult because I can proactively find issues.
