If wget doesn't work or is not installed, curl -O can be used as a replacement.
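For example, curl -O saves the download under its remote filename, much like plain wget (shown here with the same URL as the first wget example below):
curl -O http://www.cameraangle.co.uk/index.php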
wget can be used to get a single file from a website, or to download an entire site. Here are some examples of using wget.
To get a single file from a website and save it under its default name, use:
wget http://www.cameraangle.co.uk/index.php
This will download the file index.php from www.cameraangle.co.uk.
To get a single file from a website and save it under a new name, use:
wget http://www.cameraangle.co.uk/index.php -O index.txt
This will download the file index.php from www.cameraangle.co.uk and save it as index.txt.
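The curl equivalent of saving under a new name uses a lowercase -o instead of -O:
curl -o index.txt http://www.cameraangle.co.uk/index.php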
Resume an interrupted download. This does depend on the site supporting resuming.
wget --continue example.com/big.file.iso
Download a file, but only if the version on the server is newer than your local copy.
wget --continue --timestamping wordpress.org/latest.zip
Download a web page with all assets – like stylesheets and inline images – that are required to properly display the web page offline.
wget --page-requisites --span-hosts --convert-links --adjust-extension http://example.com/dir/file
Download an entire website including all the linked pages and files
wget --execute robots=off --recursive --no-parent --continue --no-clobber http://example.com/
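To avoid hammering the server during a full-site crawl, wget can also pause between requests and cap the download speed; an illustrative variant (the wait and rate values below are arbitrary choices, not from the original article):
wget --execute robots=off --recursive --no-parent --continue --no-clobber --wait=1 --limit-rate=200k http://example.com/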
Download all the MP3 files from a subdirectory
wget --level=1 --recursive --no-parent --accept mp3,MP3 http://example.com/mp3/
Download all images from a website into a common folder
wget --directory-prefix=files/pictures --no-directories --recursive --no-clobber --accept jpg,gif,png,jpeg http://example.com/images/
Download all PDF documents from a website recursively, but stay within specific domains.
wget --mirror --domains=abc.com,files.abc.com,docs.abc.com --accept=pdf http://abc.com/
Original article at http://www.labnol.org/software/wget-command-examples/28750/