Introduction
In our previous articles, we set up a LEMP-stack WordPress site and extended it with the multisite feature. As the site's content and user data grow, it becomes crucial to regularly back up and maintain site data. Backing up not only prevents the loss of important information but also enables quick recovery from hardware failures, software bugs, security breaches, and other unforeseen events. It is also an indispensable step during migrations, upgrades, or other significant changes. Regular backups ensure long-term stability and keep all vital information safely stored for any emergency.
The main components for backup and maintenance of a WordPress site include:
- MySQL Database: Stores all website data, including posts, comments, and user information, and is the core component of the site.
- WordPress Files: Include all core code, themes, plugins, and uploaded files, forming the foundation of the website's operation.
- SSL Certificates: Ensure the security of website data transmission. When using HTTPS, these certificates are crucial for maintaining user trust and establishing website credibility.
- iptables Rules: Protect the server from unauthorized access and various network attacks, constituting an essential part of server security.
- System and Application Configuration Files: This mainly includes web server (like Nginx or Apache) and PHP configuration files, which determine the operational behavior and performance of the website and have a direct impact on its stable operation and security.
By regularly backing up and maintaining these key components, we can ensure the data security, stability, and performance of our WordPress website. Next, we will delve into each section, explaining how to perform specific operations and maintenance steps.
MySQL Database Backup
In maintaining any website or large system, database backup is one of the most critical steps. Especially for content management systems like WordPress, all content data is stored in the MySQL database. Without a database backup, any data corruption or loss can lead to significant setbacks, disastrously affecting website operation and user experience. Therefore, regularly backing up the MySQL database is a necessary measure to ensure data safety and business stability.
Using the mysqldump Tool for Backup
- SSH into your server.
- Back up using the following command:
sudo mysqldump -u [username] -p [database_name] > [backup_file_name].sql
After execution, enter your MySQL user password. Replace the bracketed parts with your actual username, database name, and desired backup file name.
- After the backup completes, check the .sql file to confirm the data was backed up successfully.
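The check in the last step can be sketched as follows. By default, mysqldump ends a successful dump with a "-- Dump completed" comment line, so a missing last line suggests the dump was truncated. The file name below is a stand-in, and the snippet creates a fake dump just so it is self-contained; with a real backup, skip that line.

```shell
# Sanity-check a mysqldump output file (file name is a placeholder).
backup=wordpress_db_backup.sql

# Stand-in dump so this snippet runs on its own; omit with a real file.
printf -- '-- MySQL dump 10.13\nCREATE TABLE `wp_posts` (`ID` bigint);\n-- Dump completed on 2024-01-01\n' > "$backup"

# mysqldump writes "-- Dump completed on ..." as its final line by default.
if tail -n 1 "$backup" | grep -q 'Dump completed'; then
    echo "backup looks complete"
else
    echo "backup may be truncated" >&2
fi
```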
Using crontab for Scheduled Backup
We don't have to spend time manually backing up every day, as setting up a scheduled task (cron job) makes it easier. For cron, you need to understand some basic concepts to arrange scheduled tasks properly. In brief, its structure is as follows:
* * * * * command to be executed
The parts of this structure represent the scheduled time for execution:
- The first * represents minutes (0 - 59).
- The second * represents hours (0 - 23).
- The third * represents the day of the month (1 - 31).
- The fourth * represents the month (1 - 12).
- The fifth * represents the day of the week (0 - 7), where both 0 and 7 denote Sunday.
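To make the five fields concrete, here are a few example schedules (the command paths are placeholders):

```
30 4 * * *  /path/to/command   # every day at 04:30
0 */6 * * * /path/to/command   # every six hours, on the hour
15 2 * * 1  /path/to/command   # every Monday at 02:15
0 3 1 * *   /path/to/command   # the 1st of every month at 03:00
```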
Now, the steps for setting up:
- Open the terminal and enter
crontab -e
to edit the crontab as the current user, or use
sudo crontab -e
to edit as root.
- At the end of the file, add a line for the automatic backup. For example, to perform a backup every day at 2 AM (usually scheduled during off-peak hours):
0 2 * * * sudo /usr/bin/mysqldump -u [username] -p[password] [database_name] > /home/user/db_backup/[database_name]_$(date +\%Y\%m\%d).sql
With this command, we back up the database to the
/home/user/db_backup/
directory. Note that the password follows -p directly, with no space, and the backup file is named [database_name]_YYYYMMDD.sql. If you added the line to root's crontab (via sudo crontab -e), the leading sudo is unnecessary.
- Save and close the editor (the operations are the same as in vim), and the system will automatically apply the new cron job. To test, set a time a few minutes ahead of now; once you have confirmed it works, change it to the schedule you actually want. If your data changes infrequently and your needs are modest, backing up once a day is sufficient.
- Restoring the database is relatively simple. After creating the target database and granting the user the necessary permissions, run
mysql -u [username] -p[password] [database_name] < [backup_file_name].sql
to restore it.
Note:
This approach stores your database password in plaintext in the crontab, and it may also be visible in the process list. If you need stronger security, you can move the credentials into a script or an option file instead. I will write an article about using bash scripts to complete these operations later.
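One way to keep the password out of the crontab is a MySQL option file, sketched below. The file path (~/.backup.my.cnf) and user name (backup_user) are placeholders for your own setup; mysqldump reads credentials from a file passed via --defaults-extra-file, which must be the first option on the command line.

```shell
# Sketch: store credentials in an option file readable only by you,
# then reference it from the cron job instead of embedding -p[password].
# ~/.backup.my.cnf and backup_user are placeholder names.
cat > "$HOME/.backup.my.cnf" <<'EOF'
[client]
user=backup_user
password=your_password_here
EOF
chmod 600 "$HOME/.backup.my.cnf"

# The cron line would then need no password at all, for example:
# 0 2 * * * /usr/bin/mysqldump --defaults-extra-file=$HOME/.backup.my.cnf [database_name] > /home/user/db_backup/[database_name]_$(date +\%Y\%m\%d).sql
```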
Offsite Storage and Backup File Management
Storing backups solely on the local server is still risky, as data could be lost if the server itself encounters problems. Therefore, we also need to consider offsite storage and management of backup files, such as using cloud storage services or uploading to another backup server, or even downloading to your PC (if information security is not a concern).
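As a sketch of what the offsite step can look like: the snippet below only demonstrates picking the newest backup file locally; backup-host and all paths are placeholders, and the real transfer commands are shown as comments.

```shell
# Pick the most recent .sql backup and show what would be uploaded.
# /tmp/db_backup stands in for your real backup directory.
mkdir -p /tmp/db_backup
printf 'dummy dump\n' > /tmp/db_backup/site_20240101.sql

# newest file first, take the first entry
latest=$(ls -t /tmp/db_backup/*.sql | head -n 1)
echo "would upload: $latest"

# A real transfer could then be, for example:
#   scp "$latest" user@backup-host:/backups/
#   rsync -a /tmp/db_backup/ user@backup-host:/backups/
```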
Introduction to MySQL Real-Time Incremental Backup
The full backup method above works as a simple backup strategy. MySQL also supports incremental backups, which record every change made to the database since the last backup. This allows more precise recovery: you can restore right up to the most recent moment, whereas scheduled full backups only capture data as of the backup time. This article does not delve into the specifics (I am not a database expert :P), because implementing incremental backups usually requires specialized tools or enabling MySQL's binary logs (binlog), which record all database changes. Here is how to enable binlog (as for restoring data from it, I will study that and update later):
- Add the following configuration to MySQL's config file (usually
/etc/mysql/my.cnf
or
/etc/my.cnf
):
[mysqld]
log_bin=mysql-bin
server_id=1
max_binlog_size=100M    # each binlog file is at most 100M
expire_logs_days=7      # keep binlogs for 7 days
- Restart the MySQL service:
sudo systemctl restart mysql
- Log in to MySQL and verify that binary logging is enabled:
SHOW BINARY LOGS;
If binary logging is enabled, you will see a list of log files.
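To give a feel for how binlog-based recovery works (the exact commands depend on your setup, and mysql-bin.000001 is an example file name): the mysqlbinlog tool turns a binary log back into SQL statements, which you replay on top of your last full backup.

```
# Inspect a binary log as SQL (file name is an example):
sudo mysqlbinlog /var/lib/mysql/mysql-bin.000001

# After a full restore, replay later changes, optionally bounded by time:
sudo mysqlbinlog --start-datetime="2024-01-01 02:00:00" /var/lib/mysql/mysql-bin.000001 | mysql -u [username] -p
```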
With these steps, your MySQL database backup work should be proceeding smoothly. Regularly performing full and incremental backups is key to ensuring data security.
WordPress Files Backup
When maintaining a WordPress site, regularly backing up website files is very important. This not only protects your site from data loss, hacker attacks, failed updates, and other unexpected events but also enables quick restoration when needed, such as during site migration or upgrades. WordPress files include the core code, themes, plugins, and uploaded media files, which together keep the entire site running.
Backing Up WordPress Files with tar
- Log into your server.
- Switch to the root directory of WordPress, for example:
cd /var/www/blog.yourdomain.com
- Use the
tar
command (usually pre-installed) to create a compressed archive of the WordPress directory:
sudo tar -czvf /home/user/wordpress_backup_$(date +%Y%m%d).tar.gz .
This command compresses the entire WordPress directory into a file named wordpress_backup_YYYYMMDD.tar.gz. Writing the archive outside the directory being archived (here, to /home/user/) keeps tar from trying to include its own output file.
Scheduled WordPress Files Backup Using crontab
Scheduled backups can be implemented by setting the following cron job:
- As before, edit the crontab.
- At the end of the file, add a line to set your automatic backup task. For example, to back up WordPress files every day at 3 AM (set your backup time wisely, so several backup tasks don't run simultaneously, which might affect performance or cause backup failure):
0 3 * * * sudo tar -czvf /home/user/wp_backup/wordpress_backup_$(date +\%Y\%m\%d).tar.gz -C /var/www/blog.yourdomain.com .
One thing to note, and it applies to the earlier MySQL backup as well: make sure the specified backup directory exists first. If it does not, create it with
mkdir -p /home/user/xx_backup
.
- If you need to restore WordPress files, simply extract the backup archive into the WordPress root directory:
sudo tar -xzvf wordpress_backup_YYYYMMDD.tar.gz -C /var/www/blog.yourdomain.com/
Backup File Storage and Management
Similarly, don't put all your eggs in one basket. The best practice is to store backup files in locations other than your local server to ensure backup files remain accessible in case of server anomalies.
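Alongside offsite copies, it is worth pruning old archives so backups don't slowly fill the disk. A minimal sketch using find follows; the directory and the 30-day retention period are assumptions, so try it against a scratch directory first.

```shell
# Delete tar.gz backups older than 30 days.
# /tmp/wp_backup stands in for your real backup directory.
mkdir -p /tmp/wp_backup
touch /tmp/wp_backup/wordpress_backup_new.tar.gz
touch -d '40 days ago' /tmp/wp_backup/wordpress_backup_old.tar.gz

# -mtime +30 matches files not modified in the last 30 days
find /tmp/wp_backup -name 'wordpress_backup_*.tar.gz' -mtime +30 -delete

ls /tmp/wp_backup/
```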
With the steps mentioned above, you should have effectively backed up your WordPress files and ensured data security and disaster recovery capabilities through scheduled tasks and offsite storage management.
SSL Certificate Maintenance with Certbot
SSL certificates are crucial for securing your website. They encrypt communications between the website and users, ensuring the safety of data transmission. However, SSL certificates typically have a validity period, and they will no longer provide protection once expired. Hence, regularly maintaining and updating SSL certificates is an important part of keeping your site secure. We have already covered this in the first article, but let's quickly go over it again.
- Verify Automatic Renewal:
After installing Certbot, it automatically sets up a scheduled task to regularly check and renew the certificates. You can test the automatic renewal process by executing the following command:
sudo certbot renew --dry-run
Setting Up systemd Timer for Automatic Renewal
Certbot installation typically automatically sets up a systemd timer to manage the certificate's automatic renewal. You can check the status of this timer using the following commands:
sudo systemctl list-timers | grep certbot
sudo systemctl status certbot.timer
If this timer is active, it runs the renewal check periodically, typically twice a day, and no further setup is needed; just verify that automatic renewal works.
Although the timer checks twice a day, Certbot only replaces a certificate when it is close to its expiry date, so there is no need to worry.
If this timer is not activated, you can activate it with the following commands:
sudo systemctl start certbot.timer
sudo systemctl enable certbot.timer
Manually Setting Up cron job for Automatic Renewal
If the timer can't be set or if you're more accustomed to using cron jobs, you can also set up a cron job for automatic renewal by following these steps:
- Edit the crontab.
- At the end of the file, add a line to set the automatic renewal task. For example, to check and renew any certificates that are about to expire at 1 AM every day:
0 1 * * * sudo certbot renew --quiet
The --quiet
option suppresses Certbot's normal output, so it only prints messages when something goes wrong; this keeps cron from emailing you on every successful check.
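If your web server does not automatically pick up renewed certificates, Certbot's --deploy-hook option runs a command only after a successful renewal. For example, assuming Nginx:

```
0 1 * * * sudo certbot renew --quiet --deploy-hook "systemctl reload nginx"
```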
Verifying Automatic Renewal
You can test the automatic renewal process by executing the following command:
sudo certbot renew --dry-run
Before running Certbot, make sure your server's port 80 is open and that the firewall (ufw/iptables) allows access to it, as Let's Encrypt usually needs to verify domain ownership through this port.
With these steps, you should be able to ensure your site continues to use valid SSL certificates, thereby maintaining the site's security and trustworthiness. Even after setting up automatic renewal, it's important to regularly check your renewal task logs or look out for related notifications in your email to ensure renewals continue to proceed normally.
iptables Backup and Boot-Time Loading
Since I previously used iptables as an access control tool, regularly backing up iptables rules and automatically loading these rules at system startup are important steps to ensure system security.
How to Backup iptables Rules
- Log into the server.
- Use the following command to save the current iptables rules to a file:
sudo iptables-save > /home/user/iptables.backup
This command outputs all current active iptables rules to the specified file, ensuring you can quickly restore these rules when needed.
- You can set up a cron job to regularly execute this command, ensuring your iptables rule backup files remain up-to-date:
0 0 * * * sudo iptables-save > /home/user/iptables_$(date +%Y%m%d).backup
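The file produced by iptables-save is plain text, so you can inspect it, or diff two days' backups, before trusting it for a restore. A small self-contained sketch with a stand-in rules file:

```shell
# A stand-in iptables-save file; a real one comes from `sudo iptables-save`.
cat > /tmp/iptables.backup <<'EOF'
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF

# Count the rules (lines starting with -A) in the backup.
grep -c '^-A' /tmp/iptables.backup
```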
How to Restore iptables Rules
- Ensure you have a backup file of iptables rules.
- Use the following command to restore these rules to iptables:
sudo iptables-restore < /home/user/iptables.backup
If you are using a specific day's backup, change the file name to iptables_YYYYMMDD.backup.
Automatically Loading iptables Using systemd at Boot Time
To have iptables rules loaded automatically when the system (Ubuntu 20.04 here) starts, you can run the
iptables-restore
command from a systemd service:
- Create a systemd service file:
sudo vim /etc/systemd/system/iptables-load.service
- Add the following content to the file:
[Unit]
Description=IPTABLES RESTORE
DefaultDependencies=no
Before=network-pre.target
Wants=network-pre.target
[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /home/user/iptables.backup
[Install]
WantedBy=multi-user.target
- Reload the systemd configuration and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable iptables-load
sudo systemctl start iptables-load
- If you want the service to restore the most recent dated backup instead, note that systemd's ExecStart does not perform shell expansions such as $(date ...), so you'll need a small script:
sudo vim /home/user/load-iptables.sh
Add the following content:
#!/bin/bash
/sbin/iptables-restore < /home/user/iptables_$(date +%Y%m%d).backup
(This expects a backup file dated today; if the machine was off when the midnight backup cron ran, that file may not exist.)
Save and exit, then make the script executable:
sudo chmod +x /home/user/load-iptables.sh
Modify the
/etc/systemd/system/iptables-load.service
file, changing the
ExecStart
value to
/home/user/load-iptables.sh
:
[Unit]
Description=Load iptables rules
Before=network-pre.target
Wants=network-pre.target
[Service]
Type=oneshot
ExecStart=/home/user/load-iptables.sh
[Install]
WantedBy=multi-user.target
Again, you'll need to reload the systemd configuration and start the service:
sudo systemctl daemon-reload
sudo systemctl enable iptables-load
sudo systemctl start iptables-load
Conclusion
Regular backups and maintenance are indispensable measures to keep your WordPress site running smoothly and securely. In today's era, data is an extremely valuable resource. Even though cloud services provide high reliability, the online environment is still full of unpredictability. Hence, by regularly performing backup and maintenance tasks, not only can you ensure data security, but you can also quickly restore services in case of problems, minimizing your losses and impact on user experience.
This should conclude our articles about WordPress, though there may be more to come sharing experiences and tips from usage. In this article, we have introduced how to back up MySQL databases and WordPress files, how to maintain SSL certificates with Certbot automatically, and how to back up and auto-load iptables rules. I hope this content helps you build a more stable and secure website environment.
References:
https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html
https://dev.mysql.com/doc/refman/8.0/en/mysqlbinlog.html
https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units