The books I'm reading these days come with examples of code, saved on associated web sites. Sometimes that code is neatly packaged into a zip archive or tarball, with every piece of code sitting in a directory named after the chapter it was referenced in. But other times these web sites have the code sitting in directories that you're expected to browse by hand. One example is here.

Suppose you don't want to do that. Suppose you just want to get the code all in the same directory structure on your local machine, and only the code, not the associated html files that make it render in a browser. In that situation, you can do this:

wget -r -np -nH -R "*.html*" http://www.sobell.com/UB2/code/

Recursive downloads with wget are discouraged because they can hammer a web server with a fast succession of requests. But I think the example above is a case of polite wget use: it only downloads what the original host of the code intended to make available for downloading. The -r option means "recursive". The -np option (--no-parent) means "but never ascend to the parent directory". The -nH option (--no-host-directories) means "don't prefix the local directory tree with the host name", so the files land in UB2/code/ rather than www.sobell.com/UB2/code/. Finally, the -R "*.html*" option (--reject) means "and I don't want any file with .html in its name". Note that wget still has to fetch the HTML pages in order to discover the links inside them; it just deletes the rejected files after parsing them.
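For reference, here is the same command spelled out with wget's long options, which read better in scripts, plus two extra politeness flags. --wait and --limit-rate are standard wget options, but the one-second delay and the rate cap are my own arbitrary choices, not something the original one-liner used:

```shell
# Mirror just the code directory, skipping the HTML wrapper pages.
# --wait and --limit-rate slow the crawl down to be gentler on the server.
wget --recursive \
     --no-parent \
     --no-host-directories \
     --reject "*.html*" \
     --wait=1 \
     --limit-rate=200k \
     http://www.sobell.com/UB2/code/
```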

As it turns out, wget can do a lot of things, so its man page is rather long. I googled around for a more concise reference, and I found this. Enjoy.