Using sitemaps to crawl websites

In order to gather web documents, it can be useful to download portions of a website programmatically. This post shows ways to find URLs within a website and to work with URL lists on the command line.

For general information on command-line operations, please refer to Command Prompt (a tutorial for Windows systems), How to use the Terminal command line in macOS, or An introduction to the Linux Terminal.

Download of sitemaps and extraction of URLs

A sitemap is a file that lists the visible URLs for a given site, the main goal being to reveal where machines can look for content. The retrieval and download of documents within a website is often called crawling. The sitemaps protocol allows a webmaster to inform search engines about URLs on a website that are available for crawling. Sitemaps follow the XML format, so each sitemap is or should be a valid XML file.
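For reference, a minimal sitemap following the protocol consists of a urlset element with one url entry per document; the address and date below are placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.example.com/page.html</loc>
        <lastmod>2024-01-01</lastmod>
      </url>
    </urlset>

Only the loc element is required for each entry; fields such as lastmod are optional hints for crawlers.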

Download and filtering

A sitemap.xml file is usually located at the root of a website. If it is present, it can almost always be found by appending the file name to the domain name, separated by a slash: https://www.sitemaps.org becomes https://www.sitemaps.org/sitemap.xml.
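As a rough sketch of the steps described above, the following commands download the file and keep only the addresses enclosed in loc tags. They assume that wget and GNU grep (for the Perl-compatible -P option, not available in the default macOS grep) are installed, and they write the result to a hypothetical urls.txt file:

    # fetch the sitemap, extract the URLs between <loc> tags,
    # and store the de-duplicated list in a file
    wget -qO- "https://www.sitemaps.org/sitemap.xml" \
        | grep -oP '(?<=<loc>)[^<]+' \
        | sort -u > urls.txt
    # count how many URLs were found
    wc -l urls.txt

The resulting list can then be filtered further (for instance with grep on a path pattern) or passed to a downloader of your choice.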
