Have you ever wanted a report of every web page on a website? That’s exactly what a friend of mine wanted. I’m no stranger to writing web crawlers, so I had no trouble whipping something up for him. It’s simple: give it a web page to start on, and it finds all of the links on that page and branches out from there. Only pages on the same domain name are crawled, and dynamic pages are capped at 100 hits. When it’s done crawling, it outputs a nice text file with all of the web page URLs listed alphabetically.
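If you're curious how a crawler like that works, here's a minimal sketch in Python (the actual tool is a .NET program; all names here are made up for illustration). It does a breadth-first walk from the start page, skips links that leave the starting domain, treats URLs with a query string as "dynamic" and caps them, and returns the collected URLs sorted alphabetically. The `fetch_links` callback stands in for the real page-downloading and link-extraction step.

```python
from urllib.parse import urljoin, urlparse

def crawl(start_url, fetch_links, max_dynamic_hits=100):
    """Breadth-first crawl that stays on start_url's domain.

    fetch_links(url) -> iterable of hrefs found on that page
    (a stand-in for downloading the page and parsing its <a> tags).
    URLs with a query string are treated as "dynamic" and capped
    at max_dynamic_hits in total.
    """
    domain = urlparse(start_url).netloc
    seen = {start_url}          # URLs already discovered
    queue = [start_url]         # URLs still to visit
    dynamic_hits = 0
    while queue:
        url = queue.pop(0)
        for href in fetch_links(url):
            link = urljoin(url, href)       # resolve relative links
            parts = urlparse(link)
            if parts.netloc != domain or link in seen:
                continue                    # off-domain or already seen
            if parts.query:                 # "?x=y" marks a dynamic page
                if dynamic_hits >= max_dynamic_hits:
                    continue
                dynamic_hits += 1
            seen.add(link)
            queue.append(link)
    return sorted(seen)                     # alphabetical, like the report
```

Feeding it a tiny fake site shows the behavior: off-domain links are ignored, revisits are deduplicated, and the result comes back sorted.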
Today I am releasing it for public consumption. It requires .NET 4.5, which should already be installed if you keep up with Windows Update.
[sdm_download id="4591" fancy="0"] * 6KB