By Rich Bowen
In the first sections of this series, I've talked about what goes into the standard log files, and how you can change the contents of those files.
This week, we're looking at how to get meaningful information back out of those log files.
The problem is that although there is an enormous amount of information in the log files, it's not much good to the people who pay your salary. They want to know how many people visited your site, what they looked at, how long they stayed, and where they found out about your site. All of that information is (or might be) in your log files.
They also want to know the names, addresses, and shoe sizes of those people, and, hopefully, their credit card numbers. That information is not in there, and you need to know how to explain to your employer that not only is it not in there, but the only way to get it is to explicitly ask your visitors for it, and be willing to be told 'no.'
There is a lot of information available to put in your log files, including the address of the remote machine that made the request (buglet.rcbowen.com or proxy01.aol.com, for example), the time of the request, the resource that was requested, the status of that request, and, if you are logging them, the referer and user agent.
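To make that concrete, a single record in the common log format might look something like the following (the hostname, timestamp, and URL here are invented for illustration):

    proxy01.aol.com - - [18/Sep/2000:14:31:06 -0400] "GET /index.html HTTP/1.0" 200 5432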
Single records, of course, give you very little useful information, but across several thousand 'hits', you can start to gather useful statistics.
HTTP is a stateless, anonymous protocol. This is by design, and is not, at least in my opinion, a shortcoming of the protocol. If you want to know more about your visitors, you have to be polite and actually ask them. And be prepared not to get reliable answers. This is amazingly frustrating for marketing types. They want to know the average income, number of kids, and hair color of their target demographic. Or something like that. And they don't like to be told that that information is not available in the log files. It is, however, quite beyond your control to get it out of the log files. Explain to them that HTTP is anonymous.
And even what the log files do tell you is occasionally suspect. For example, I have numerous entries in my log files indicating that a machine called cache-mtc-am05.proxy.aol.com
visited my web site today. I can tell that this is a machine on the AOL network. But because of the way that AOL works, this might be one person visiting my site many times, or it might be many people visiting my site one time each. AOL does something called proxying, and you can see from the machine's name that it is a proxy server. A proxy server is one that one or more people sit behind. They type an address into their browser, and the browser makes that request to the proxy server. The proxy server gets the page (generating the log file entry on my web site) and then passes that page back to the requesting machine. This means that I never see the request from the originating machine, only the request from the proxy.
Another implication of this is that if, 10 minutes later, someone else sitting behind that same proxy requests the same page, they don't generate a log file entry at all. They type in the address, and that request goes to the proxy server. The proxy sees the request and thinks, "I already have that document in memory. There's no point asking the web site for it again." And so instead of asking my web site for the page, it hands the copy that it already has to the client. So not only is the address field suspect, but the number of requests is also suspect.
It might sound like the data that you receive is so suspect as to be useless. This is in fact not the case. It should just be taken with a grain of salt. The number of hits that your site receives is almost certainly not really the number of visitors that came to your site. But it's a good indication. And it still gives you some useful information. Just don't rely on it for exact numbers.
So, to the real meat of all of this. How do you actually generate statistics from your Web-server logs?
There are two main approaches that you can take here. You can either do it yourself, or you can get one of the existing applications that are available to do it for you.
Unless you have custom log files that don't look anything like the Common
log format, you should probably get one of the available apps out there. There are some excellent commercial products, and some really good free ones, so you just need to decide what features you are looking for.
So, without further ado, here are some of the great apps out there that can help you with this task.
Analog is one of the free log-analysis tools available. The example report, which you can see on the Analog web site, seemed very thorough, and contained all of the stats that I might want. In addition to the pages and pages of detailed statistics, there was a very useful executive summary, which will probably be the only part that your boss will really care about.
WebTrends, one of the commercial offerings, has, in my opinion, two counts against it.
The first is that it is really expensive. You can look up the actual price on their web site (http://www.webtrends.com/default.htm).
The other is that it is painfully slow. A 50MB log file from one site for which I am responsible (one month's traffic) took about 3 hours to grind through and generate the report. Admittedly, it's doing a heck of a lot of stuff. But, for the sake of comparison, the same log file took about 10 minutes using WWWStat. Some of this is just the difference between Perl's ability to grind through text files and C's ability. But 3 hours seemed a little excessive.
WWWStat, a free log-analysis program written in Perl, is very easy to automate so that it generates your log statistics every night at midnight, and then generates monthly reports at the end of each month.
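If your system has cron, one way to set that up is a crontab entry along the following lines. This is only a sketch: the wwwstat path, the log location, and the output file are assumptions for illustration, and the exact invocation and options are described in the wwwstat documentation.

    # Hypothetical crontab entry: rebuild the usage report every night at midnight.
    # Paths and arguments are illustrative; adjust for your own installation.
    0 0 * * * /usr/local/bin/wwwstat /usr/local/apache/logs/access_log > /usr/local/apache/htdocs/usage/index.html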
It may not be as full-featured as WebTrends, but it has given me all the stats that I've ever needed.
Wusage is another log-analysis package worth a look. You can get Wusage at http://www.boutell.com/wusage/
If you want to do your own log parsing and reporting, the best tool for the task is going to be Perl. In fact, Perl's name (Practical Extraction and Report Language) is a tribute to its ability to extract useful information from logs and generate reports. (In reality, the name "Perl" came before the expansion of it, but I suppose that does not detract from my point.)
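To give a feel for what that looks like, here is a minimal sketch that reads a common-log-format file and counts requests per status code and per remote host. The log path is an assumption for illustration, and the regular expression only handles the basic common log format, so treat this as a starting point rather than a finished reporting tool.

    #!/usr/bin/perl -w
    # Minimal common-log-format report: requests per status code and per host.
    # The log file path below is illustrative; point it at your own access_log.
    use strict;

    my $logfile = "/usr/local/apache/logs/access_log";
    my (%hosts, %status);

    open my $log, "<", $logfile or die "Can't open $logfile: $!";
    while (my $line = <$log>) {
        # Fields: host ident user [date] "request" status bytes
        if ($line =~ /^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+/) {
            $hosts{$1}++;
            $status{$2}++;
        }
    }
    close $log;

    print "Requests by status code:\n";
    printf "  %s: %d\n", $_, $status{$_} for sort keys %status;

    print "Top ten hosts by number of requests:\n";
    my @top = sort { $hosts{$b} <=> $hosts{$a} } keys %hosts;
    splice(@top, 10) if @top > 10;
    printf "  %s: %d\n", $_, $hosts{$_} for @top;

It won't tell you anything that WWWStat doesn't, but once the fields are broken out you can count, filter, and cross-reference them however you like.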
The Apache::ParseLog
module, available from your favorite CPAN mirror, makes parsing log files simple, and so takes all the work out of generating useful reports from those logs.
For detailed information about how to use this module, install it and read the documentation. Once you have installed the module, you can get at the documentation by typing perldoc Apache::ParseLog.
Trolling through the source code for WWWStat is another good way to learn about Perl log file parsing.
Not much more to say here. I'm sure that I've missed someone's favorite log-parsing tool, and that's to be expected. There are hundreds of them on the market. It's really a question of how much you want to pay, and what sort of reports you need.
Thanks for reading. Let me know if there are any subjects that you'd like to see articles on in the future. You can contact me at
--Rich
Related Stories:
E-Commerce Solutions: Template-Driven Pages, Part 2 (Sep 13, 2000)
Apache Guide: Logging, Part 3 -- Custom Logs (Sep 05, 2000)
Apache Guide: Logging, Part II -- Error Logs (Aug 28, 2000)
Apache Guide: Logging with Apache--Understanding Your access_log (Aug 21, 2000)
Apache Guide: Apache Authentication, Part 3 (Aug 07, 2000)
Apache Guide: Apache Authentication, Part 1 (Jul 24, 2000)