Why are Backup and Recovery Important for Linux Servers?

On Linux servers, data is the lifeline: system configurations, user files, databases, website code, and more. Losing it can mean service outages, business disruption, or significant financial loss. For example, accidentally deleting an important configuration file or suffering a hard drive failure can render data irrecoverable without a backup. Backup is therefore the first line of defense against data disasters, and recovery is the key to minimizing losses when disaster inevitably strikes.

1. Common Linux Backup Methods (Simple Tools + Examples)

You don’t need complex tools—master these three basic ones to handle most scenarios:

1. tar: Basic File Archiving Tool (System Configs, User Data)

tar packages multiple files/directories into one archive and supports compression with gzip/bzip2. It’s one of the most widely used backup tools in Linux.

Example: Full Backup of a Directory
To back up /var/www/html (website files):

tar -czvf backup.tar.gz /var/www/html
  • Parameters:
  • -c: Create a new archive.
  • -z: Compress with gzip (use -j instead for bzip2).
  • -v: Verbose output.
  • -f: Specify the output filename.
  • backup.tar.gz: Output filename (.tar.gz = compressed tar archive).
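
To check what went into the archive without extracting it, the -t flag lists its contents:

tar -tzvf backup.tar.gz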

Example: Incremental Backup (Only Changed Files)
To back up only files modified since the last backup:

tar -czvf backup_$(date +%Y%m%d).tar.gz --newer-mtime "$(date -d '1 day ago' +%Y-%m-%d)" /var/www/html

(--newer-mtime includes only files modified after the given date; date -d '1 day ago' +%Y-%m-%d produces yesterday's date in a format tar understands, which gives simple day-over-day incremental logic.)
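
The --newer-mtime approach is really a simple differential backup. For a true incremental chain, GNU tar offers --listed-incremental, which records state in a snapshot file between runs (a sketch; the snapshot file path /backup/html.snar is an assumption):

# First run: full (level-0) backup, state recorded in the snapshot file
tar -czvf /backup/full.tar.gz --listed-incremental=/backup/html.snar /var/www/html
# Later runs with the same snapshot file archive only files changed since the previous run
tar -czvf /backup/incr_$(date +%Y%m%d).tar.gz --listed-incremental=/backup/html.snar /var/www/html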

2. rsync: Incremental Sync Tool (Multi-Directory/Cross-Server Backups)

rsync not only backs up data but also keeps directories synchronized, making it ideal for local multi-disk setups or remote server backups. It transfers only changed files, saving space and time.

Example: Local Incremental Backup
Sync /data to /backup/data (only new/changed files):

rsync -av /data/ /backup/data
  • Parameters:
  • -a: Archive mode (preserves permissions, timestamps, and directory structure).
  • -v: Verbose output (for debugging).
  • Note the trailing slash on /data/: it syncs the contents of /data into /backup/data; without it, rsync creates a data/ subdirectory inside the target.

Example: Remote Sync (Disaster Recovery)
Sync to a remote server (off-site backup):

rsync -avz --delete /data/ user@remote-server:/backup/data

(--delete ensures the target directory matches the source exactly by removing extra files.)
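
Two options worth knowing for remote syncs: --dry-run previews what would be transferred or deleted without changing anything, and -e passes SSH options such as a non-standard port (the server name and port are examples):

rsync -avz --delete --dry-run /data/ user@remote-server:/backup/data  # Preview only, nothing is copied
rsync -avz -e "ssh -p 2222" /data/ user@remote-server:/backup/data    # Sync over SSH on port 2222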

3. cp: Simple Copy (Quick Small File Backups)

For small datasets or quick copies, cp works, though less efficient than tar/rsync.

Example: Copy Directory to Backup

cp -r /source/dir /backup/dir
  • -r: Recursively copy directories (use -a instead to also preserve permissions and timestamps).
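
A handy quick-backup pattern is a dated copy of a config directory; -a keeps ownership and timestamps intact (the /etc/nginx path is just an example):

cp -a /etc/nginx /backup/nginx_$(date +%Y%m%d)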

2. Practical Data Recovery Tips (From Backup to Restoration)

Avoid the pitfall of “backup exists but fails to restore.” Follow these steps:

1. Pre-Recovery Preparation

  • Stop Services: Halt services (e.g., systemctl stop apache2, systemctl stop mysql) to prevent data modification during restoration.
  • Verify Backup Integrity: Use md5sum (or sha256sum) to check for corruption; save the hash at backup time and recheck it before restoring (see the example after this list).
  • Create a Recovery Directory: Use a temporary path (e.g., /tmp/restored) to avoid overwriting existing data.
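
A minimal checksum workflow (filenames are examples): record the hash when the backup is created, then let md5sum -c verify it before any restore.

md5sum backup.tar.gz > backup.tar.gz.md5  # At backup time: save the hash
md5sum -c backup.tar.gz.md5               # Before restoring: prints "backup.tar.gz: OK" if the archive is intact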

2. Recovery by Backup Type

  • From a tar Archive:
  tar -xzvf backup.tar.gz -C /tmp/restored

(-x extracts the archive; -C sets the target directory, which must already exist, so create it first with mkdir -p /tmp/restored.)

  • From rsync Sync:
    Reverse the sync direction so the backup copy becomes the source (for a remote backup, swap the remote path to the source side):
  rsync -av /backup/data/ /data/
  • From LVM Snapshot (Filesystem-Level Recovery):
    For LVM-managed disks, create a snapshot before risky changes; if something goes wrong, mount it and copy the affected files back (the vg0/lv0 volume names are examples):
  # 1. Before the change: create a snapshot (10 GB of change-tracking space, for example)
  lvcreate -L 10G -s -n lv0_snapshot vg0/lv0
  # 2. After a mishap: mount the snapshot (create the mount point first)
  mkdir -p /mnt/snapshot && mount /dev/vg0/lv0_snapshot /mnt/snapshot
  # 3. Copy the needed data back (-a preserves permissions and timestamps)
  cp -a /mnt/snapshot/* /var/www/html
  # 4. Unmount, then remove the snapshot once it is no longer needed
  umount /mnt/snapshot && lvremove /dev/vg0/lv0_snapshot

3. Database-Specific Recovery

For databases like MySQL, distinguish between “cold” and “hot” backups:
  • Cold Backup: Stop the database service and copy the data directory (e.g., /var/lib/mysql) with cp -a so file ownership is preserved (MySQL may not start if ownership is lost):

  systemctl stop mysql
  cp -a /var/lib/mysql /backup/mysql_backup
  • Hot Backup: Use mysqldump (no service downtime):
  mysqldump -u root -p --databases mydb > backup.sql  # Backup
  mysql -u root -p < backup.sql  # Restore (the dump includes CREATE DATABASE, so no database name is needed)
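
For InnoDB tables, a common way to get a consistent hot backup without blocking writes is mysqldump's --single-transaction option, optionally compressed on the fly (a sketch; the mydb name and /backup path are examples):

mysqldump -u root -p --single-transaction --databases mydb | gzip > /backup/mydb_$(date +%Y%m%d).sql.gz  # Consistent dump
gunzip -c /backup/mydb_YYYYMMDD.sql.gz | mysql -u root -p  # Restore the chosen dump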

3. Automation and Long-Term Backup Strategies

Backup reliability hinges on automation + regular testing—avoid manual errors or forgotten backups:

1. Schedule Backups with crontab

Automate daily backups (e.g., 3 AM):

# 1. Create backup script (backup.sh)
#!/bin/bash
tar -czvf /backup/$(date +%Y%m%d).tar.gz /var/www/html /etc/nginx

# 2. Make executable
chmod +x backup.sh

# 3. Add to crontab (edit with `crontab -e`)
0 3 * * * /path/to/backup.sh >> /var/log/backup.log 2>&1

(This runs the script daily at 3:00 AM and logs output.)
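
After the first scheduled run, confirm the job is registered and actually producing backups:

crontab -l                 # The backup line should appear here
tail /var/log/backup.log   # Output from the most recent run
ls -lh /backup/            # A new dated archive should appear each day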

2. Backup Storage Best Practices

  • Local + Remote: Back up locally (for immediate access) and to a remote server/cloud (for disaster recovery).
  • Full + Incremental: Weekly full backups + daily incremental backups (saves space).
  • Retention Policy: Keep recent backups for 30+ days, then thin out older ones (e.g., retain only monthly full backups); old archives can be pruned automatically, as shown below.
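
A simple pruning rule can run from the same cron job; this sketch deletes archives older than 30 days (the /backup path and .tar.gz pattern follow the earlier examples):

find /backup -name "*.tar.gz" -mtime +30 -delete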

3. Test Restorations Regularly

Validate backups monthly:

# 1. Extract the backup on a test server (never over live data)
mkdir -p /test && tar -xzvf /backup/backup.tar.gz -C /test

# 2. Verify the expected files are there (tar strips the leading /, so content lands under /test)
ls /test/var/www/html/

# 3. Restore a single file if needed
cp /test/var/www/html/important.txt /var/www/html/

Only tested backups are truly reliable!

4. Pitfall Avoidance (For Beginners)

  • Permissions: Ensure backup directories have correct ownership (e.g., chown root:root /backup).
  • Open Files: Avoid backing up files that are being actively written (e.g., databases, busy logs); stop the service first, use a dump tool like mysqldump, or take an LVM snapshot so the copy is consistent.
  • Dry Runs: Test restoration to a temporary directory before touching live data, and preview rsync operations with --dry-run (see the example below).
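
A preview run before a real restore, reusing the earlier paths:

rsync -avn --delete /backup/data/ /data/  # -n (--dry-run) lists what would change without copying or deleting anything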

Summary

Linux server backup and recovery boil down to “simplicity, reliability, and automation.” Master tar and rsync, pair with crontab for scheduling, and test regularly. Remember: Data security has no “what-if”—only “prepared” or “unprepared”.

Xiaoye