wget Examples


For instances where wget doesn't work or is not installed, curl -O can be used as a replacement.
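For example, assuming curl is available, a file can be fetched and saved under its remote name (using the same URL as the wget examples below):

  # -O tells curl to save the file using its remote name, the same as plain wget
  curl -O http://www.cameraangle.co.uk/index.php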

wget can be used to download a single file from a website, or to download an entire site. Here are some examples of using wget.


Download a single file

To get a single file from a website and save it under its default name, use:

  wget http://www.cameraangle.co.uk/index.php
  
  This will download the file index.php from www.cameraangle.co.uk



Download a single file and rename it

To get a single file from a website and save it under a new name, use:

  wget http://www.cameraangle.co.uk/index.php -O index.txt
  
  This will download the file index.php from www.cameraangle.co.uk and save it as index.txt



Resume an interrupted download previously started by wget itself

This depends on the server supporting resumed downloads.

  wget --continue example.com/big.file.iso



Download a file only if the version on the server is newer than your local copy

  wget --continue --timestamping wordpress.org/latest.zip



Download a web page with all assets

Download a web page with all the assets (such as stylesheets and inline images) that are required to display it properly offline.

  # --span-hosts fetches requisites hosted on other domains, --adjust-extension adds suitable file extensions (e.g. .html)
  wget --page-requisites --span-hosts --convert-links --adjust-extension http://example.com/dir/file



Download an entire website

Download an entire website including all the linked pages and files

  # robots=off ignores robots.txt, --no-parent stays below the starting URL, --no-clobber skips files that already exist
  wget --execute robots=off --recursive --no-parent --continue --no-clobber http://example.com/



Download all the MP3 files

Download all the MP3 files from a subdirectory

  # --level=1 limits recursion to one level deep, --accept keeps only .mp3/.MP3 files
  wget --level=1 --recursive --no-parent --accept mp3,MP3 http://example.com/mp3/



Download all images from a website

Download all the images from a website into a single common folder

  # --directory-prefix saves everything under files/pictures, --no-directories flattens the site's directory structure
  wget --directory-prefix=files/pictures --no-directories --recursive --no-clobber --accept jpg,gif,png,jpeg http://example.com/images/



Download the PDF documents from a website

Recursively download the PDF documents from a website, staying within the specified domains.

  # --mirror enables infinite recursion with timestamping, --domains limits which hosts may be followed
  wget --mirror --domains=abc.com,files.abc.com,docs.abc.com --accept=pdf http://abc.com/



Original article at http://www.labnol.org/software/wget-command-examples/28750/
