Restore a dump file from mysqldump?
To get a list of all MySQL user accounts, use a SELECT statement against the mysql.user table:
SELECT User, Host FROM mysql.user;
Backup a Single MySQL Database:
mysqldump -u root -p database_name > database_name.sql
mysqldump database_name > database_name.sql
mysqldump --single-transaction --quick --skip-lock-tables some_database > some_database.sql
Backup all MySQL databases to separate files:
for DB in $(mysql -e 'SHOW DATABASES' -s --skip-column-names); do
    mysqldump "$DB" > "$DB.sql"
done
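A variant of the same loop that skips the built-in system schemas, so only application databases are dumped; the grep exclusion list is an assumption about which schemas you want to leave out:
for DB in $(mysql -e 'SHOW DATABASES' -s --skip-column-names | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
    mysqldump "$DB" > "$DB.sql"
done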
Create a Compressed MySQL Database Backup:
mysqldump database_name | gzip > database_name.sql.gz
mysqldump --single-transaction --quick --skip-lock-tables some_database | gzip > some_database.sql.gz
Create a Backup with Timestamp:
mysqldump database_name > database_name-$(date +%Y%m%d).sql
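The compression and timestamp ideas combine naturally; this line simply chains the two commands shown above:
mysqldump database_name | gzip > database_name-$(date +%Y%m%d).sql.gz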
Restoring a MySQL dump (the target database must already exist):
mysql database_name < file.sql
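If the dump was compressed with gzip as shown above, it can be decompressed on the fly while restoring (again assuming the target database already exists):
gunzip < database_name.sql.gz | mysql database_name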
Restoring from the MySQL prompt, using source:
On the command-line, if you’re in the same directory that contains the dumped file, use these commands (with appropriate substitutions):
mysql -u root -p
mysql> create database mydb;
mysql> use mydb;
mysql> source db_backup.dump;
In the following example, the first command creates a database named database_name and the second imports the dump database_name.sql into it:
mysql -u root -p -e "create database database_name";
mysql -u root -p database_name < database_name.sql
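To restore the per-database files produced by the backup loop above, a similar loop works; a sketch that assumes each file is named after its database and that the database names need no quoting:
for FILE in *.sql; do
    DB="${FILE%.sql}"
    mysql -e "CREATE DATABASE IF NOT EXISTS $DB"
    mysql "$DB" < "$FILE"
done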
A Few MySQL Commands:
mysql> SHOW ENGINE INNODB STATUS\G
mysqldump --single-transaction --quick --skip-lock-tables some_database > some_database.sql
mysqldump --single-transaction --quick --skip-lock-tables some_database | gzip > some_database.sql.gz
shell> mysqldump -uUserName -pPassword -hHost --single-transaction db_name > latest_backup.sql
mysqldump -h hostname-of-the-server -u mysql_user -p database_name > file.sql
# Or with the "pv" tool, which lets us know how much data is
# flowing through the pipe - useful for knowing if the mysqldump
# has stalled
mysqldump some_database | pv | gzip > some_database.sql.gz
# 102kB 0:01:23 [1.38MB/s] [ <=>
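pv is just as useful on the restore side; because the file size is known up front, it can show a percentage and an ETA (assuming the target database already exists):
pv some_database.sql | mysql some_database
# For a gzip-compressed dump:
pv some_database.sql.gz | gunzip | mysql some_database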
Note: When you run mysqldump, only the data is dumped as SQL statements. Indexes are not copied; they are rebuilt when the SQL is loaded into another server:
Export mysql database: mysqldump -u mysql_user -p database1 > dump.sql
Import mysql database (into the same mysql server, on the same host): mysql -u mysql_user -p database2 < dump.sql
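The export and import can also be chained into a single pipe to copy one database into another on the same server; a sketch that assumes credentials come from ~/.my.cnf (see the cron section below) so neither end prompts for a password, and that database2 already exists:
mysqldump database1 | mysql database2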
Check the database size using:
SELECT table_schema "Database Name",
sum( data_length + index_length ) / 1024 / 1024 "database size in MB",
sum( data_free )/ 1024 / 1024 "free space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
Which then gives:
+--------------------+---------------------+------------------+
| Database Name | database size in MB | free space in MB |
+--------------------+---------------------+------------------+
| information_schema | 0.01074219 | 0.00000000 |
| database2 | 978.73327637 | 120.00000000 |
| database1 | 284.76471329 | 217.00003815 |
+--------------------+---------------------+------------------+
3 rows in set (0.52 sec)
Note how database2 is roughly 3.5 times bigger than database1.
If I run a checksum test, I get the same checksum, so the data seems to be valid; it is just taking up much more space on the disk!
mysql> use database1;
Database changed
mysql> checksum table wp_site;
+--------------------+------------+
| Table | Checksum |
+--------------------+------------+
| database1.wp_site | 2263483605 |
+--------------------+------------+
1 row in set (0.00 sec)
mysql> use database2;
Database changed
mysql> checksum table wp_site;
+---------------------+------------+
| Table | Checksum |
+---------------------+------------+
| database2.wp_site | 2263483605 |
+---------------------+------------+
1 row in set (0.00 sec)
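The data_free column usually reflects unreclaimed space inside the InnoDB tablespace, so if that is the cause here, rebuilding the tables should shrink database2. This is only a suggestion beyond the original notes, and space is only returned to the filesystem when innodb_file_per_table is in effect:
mysql> OPTIMIZE TABLE wp_site;
Or, for every table in the database, from the shell:
mysqlcheck -o database2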
Automate Backups with Cron
Automating database backups is as simple as creating a cron job that runs the mysqldump command at a specified time.
To set up automated backups of a MySQL database using a cron job, follow the steps below:
- Create a file named .my.cnf in your user home directory:
sudo nano ~/.my.cnf
Copy and paste the following text into the .my.cnf file:
[client]
user = dbuser
password = dbpasswd
- Do not forget to replace dbuser and dbpasswd with the database user and the user's password.
- Restrict permissions of the credentials file so that only your user has access to it:
chmod 600 ~/.my.cnf
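Before relying on the credentials file from cron, it is worth a quick check that the client picks it up (the standard mysql client reads ~/.my.cnf by default):
mysql -e "SELECT 1"
If this returns a result without prompting for a password, the file is being read.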
- Create a directory to store the backups:
mkdir ~/db_backups
- Open your user crontab file:
crontab -e
- Add the following cron job that will create a backup of a database named mydb every day at 3am:
0 3 * * * /usr/bin/mysqldump -u dbuser mydb > /home/username/db_backups/mydb-$(date +\%Y\%m\%d).sql
You can also create another cron job to delete any backups older than 30 days:
find /path/to/backups -type f -name "*.sql" -mtime +30 -delete
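If the cleanup should also run unattended, the same find command can be scheduled from the crontab; a sketch that assumes the backups live in /home/username/db_backups and a daily 4am run:
0 4 * * * find /home/username/db_backups -type f -name "*.sql" -mtime +30 -delete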