The Complete Guide to MySQL Data Transfer: Using INI Configuration Files for Database Import, Export, Synchronization, and Backup
Introduction: Why MySQL Data Transfer Matters
MySQL, one of the world's most popular open-source relational database management systems, plays a critical role in applications of every size. As a business grows, data migration, backup, and synchronization become part of a database administrator's daily work. Traditional MySQL data-transfer methods, however, often require memorizing complex command-line parameters, which is error-prone and inefficient. This article shows how to use INI configuration files to simplify MySQL data transfer: importing, exporting, synchronizing, and backing up databases, and solving the common problems that come up during data migration, so that routine work gets significantly faster.
INI Configuration File Basics
What Is an INI Configuration File?
An INI configuration file is a simple text format for storing application configuration. It consists of sections, keys, and values, with a structure that is clear and easy to read and edit. For MySQL data transfer, an INI file can hold connection details, credentials, and other parameters, so the same information does not have to be retyped for every operation.
Basic Structure of an INI Configuration File
A typical INI configuration file for MySQL data transfer looks like this:

```ini
[client]
host = localhost
user = root
password = your_password
port = 3306

[export]
database = my_database
output_file = /path/to/backup.sql
tables = table1,table2,table3
where_condition = id > 100

[import]
database = target_database
input_file = /path/to/backup.sql
ignore_errors = true

[sync]
source_database = source_db
target_database = target_db
sync_tables = users,products,orders
```

Note that MySQL's own client programs only read the [client] group (plus their program-specific group, such as [mysqldump]); custom sections like [export], [import], and [sync] are ignored by the tools themselves and are parsed by the helper scripts shown later in this article.
Creating and Using an INI Configuration File
Creating an INI configuration file is simple: open any text editor (Notepad++, VS Code, and so on), create a text file, and fill it in following the structure above. Save the file with an .ini or .cnf extension.
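Since the helper scripts later in this article read values out of these files with shell tools, it helps to have a small, reliable reader. Below is a minimal sketch (the file path, section, and key names are illustrative) that is more robust than a bare `grep | cut`, because it only matches a key inside the requested section:

```shell
#!/bin/bash
# ini_get FILE SECTION KEY -- print the value of KEY inside [SECTION].
ini_get() {
  local file=$1 section=$2 key=$3
  awk -F ' *= *' -v s="[$section]" -v k="$key" '
    $0 == s          { in_s = 1; next }   # entered the wanted section
    /^\[/            { in_s = 0 }         # any other section header ends it
    in_s && $1 == k  { print $2; exit }   # first match wins
  ' "$file"
}

# Example: write a small config and read values back.
cat > /tmp/demo.ini <<'EOF'
[client]
host = localhost
port = 3306
[export]
database = company_db
EOF

ini_get /tmp/demo.ini client host      # prints: localhost
ini_get /tmp/demo.ini export database  # prints: company_db
```

The same-named key can then safely appear in several sections, which the plain `grep 'key'` approach used in the scripts below cannot distinguish.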
MySQL command-line tools can then read the file through the --defaults-file option, which must be the first option on the command line:

```bash
mysqldump --defaults-file=/path/to/config.ini [other options]
```
Exporting a Database with an INI Configuration File
Basic Database Export
Using an INI configuration file for database exports greatly simplifies the command line. Here is a complete example.
First, create a configuration file named export.ini:

```ini
[client]
host = localhost
user = root
password = securepassword
port = 3306

[export]
database = company_db
output_file = /backups/company_db_$(date +%Y%m%d).sql
single-transaction = true
routines = true
triggers = true
events = true
```

Here $(date +%Y%m%d) is a literal placeholder in the file name that the commands below substitute with the current date.
Then run the export with the following command (the values are pulled out of the INI file with grep and cut, and the date placeholder in the file name is expanded with sed):

```bash
mysqldump --defaults-file=export.ini \
  --databases $(grep 'database' export.ini | cut -d' ' -f3) \
  > $(grep 'output_file' export.ini | cut -d' ' -f3 | sed "s/\$(date +%Y%m%d)/$(date +%Y%m%d)/g")
```
Exporting Specific Tables or Data
If you only need to export specific tables, or rows that match a condition, adjust the INI configuration file:

```ini
[client]
host = localhost
user = root
password = securepassword
port = 3306

[export]
database = company_db
output_file = /backups/users_active_$(date +%Y%m%d).sql
tables = users
where_condition = status = 'active' AND last_login > '2023-01-01'
```
The corresponding export command (where_condition contains spaces, so the key is stripped with sed rather than cut):

```bash
mysqldump --defaults-file=export.ini \
  $(grep 'database' export.ini | cut -d' ' -f3) \
  $(grep 'tables' export.ini | cut -d' ' -f3) \
  --where="$(grep 'where_condition' export.ini | sed 's/^where_condition *= *//')" \
  > $(grep 'output_file' export.ini | cut -d' ' -f3 | sed "s/\$(date +%Y%m%d)/$(date +%Y%m%d)/g")
```
An Automated Export Script
To go one step further, a script can combine the INI configuration file with a scheduler for recurring exports:

```bash
#!/bin/bash
# auto_export.sh

CONFIG_FILE="/path/to/export.ini"
DATE=$(date +%Y%m%d_%H%M%S)

# Read parameters from the configuration file
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep 'database' $CONFIG_FILE | cut -d' ' -f3)
# Replace the literal $(date +%Y%m%d) placeholder in output_file with a timestamp
OUTPUT_FILE=$(grep 'output_file' $CONFIG_FILE | cut -d' ' -f3 | sed "s/\$(date +%Y%m%d)/$DATE/g")

# Run the export
mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT \
  --single-transaction --routines --triggers --events \
  $DB_NAME > $OUTPUT_FILE

# Compress the backup file
gzip $OUTPUT_FILE

echo "Database export completed: $OUTPUT_FILE.gz"
Schedule this script as a recurring job (for example with cron) and the export runs automatically.
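For example, a crontab entry (path and schedule are illustrative) that runs the script above every night at 02:00 and keeps a log of its output:

```
# m h dom mon dow  command
0 2 * * * /path/to/auto_export.sh >> /var/log/auto_export.log 2>&1
```

Edit the table with `crontab -e`; the `>> … 2>&1` redirection captures both normal output and errors, which is useful when diagnosing a failed overnight export.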
Importing a Database with an INI Configuration File
Basic Database Import
Database imports can be simplified with an INI configuration file in the same way. Create a configuration file named import.ini:

```ini
[client]
host = localhost
user = root
password = securepassword
port = 3306

[import]
database = new_company_db
input_file = /backups/company_db_20231101.sql
create_database = true
ignore_errors = false
```
The corresponding import commands (^database anchors the pattern so that the create_database key is not matched as well):

```bash
# Create the database first if requested
if [ "$(grep 'create_database' import.ini | cut -d' ' -f3)" = "true" ]; then
  mysql --defaults-file=import.ini \
    -e "CREATE DATABASE IF NOT EXISTS $(grep '^database' import.ini | cut -d' ' -f3);"
fi

# Import the data; --force tells mysql to keep going after SQL errors
if [ "$(grep 'ignore_errors' import.ini | cut -d' ' -f3)" = "true" ]; then
  mysql --defaults-file=import.ini --force \
    $(grep '^database' import.ini | cut -d' ' -f3) \
    < $(grep 'input_file' import.ini | cut -d' ' -f3) 2>/dev/null
else
  mysql --defaults-file=import.ini \
    $(grep '^database' import.ini | cut -d' ' -f3) \
    < $(grep 'input_file' import.ini | cut -d' ' -f3)
fi
```
Importing into a Specific Table
If the data only needs to go into one table, adjust the INI configuration file:

```ini
[client]
host = localhost
user = root
password = securepassword
port = 3306

[import]
database = company_db
input_file = /backups/users_data.sql
target_table = users
truncate_first = true
```
The corresponding import script:

```bash
#!/bin/bash
# import_to_table.sh

CONFIG_FILE="/path/to/import.ini"

# Read parameters from the configuration file
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep '^database' $CONFIG_FILE | cut -d' ' -f3)
INPUT_FILE=$(grep 'input_file' $CONFIG_FILE | cut -d' ' -f3)
TARGET_TABLE=$(grep 'target_table' $CONFIG_FILE | cut -d' ' -f3)
TRUNCATE_FIRST=$(grep 'truncate_first' $CONFIG_FILE | cut -d' ' -f3)

# Empty the table first if requested
if [ "$TRUNCATE_FIRST" = "true" ]; then
  mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT $DB_NAME \
    -e "TRUNCATE TABLE $TARGET_TABLE;"
fi

# Import the data
mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT $DB_NAME < $INPUT_FILE

echo "Data imported to table $TARGET_TABLE successfully."
```
Importing Large SQL Files
For large SQL files, a direct import with the mysql client can run into memory or performance problems, and gives no feedback on progress. The following script handles large files more gracefully:

```bash
#!/bin/bash
# import_large_sql.sh

CONFIG_FILE="/path/to/import.ini"

# Read parameters from the configuration file
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep '^database' $CONFIG_FILE | cut -d' ' -f3)
INPUT_FILE=$(grep 'input_file' $CONFIG_FILE | cut -d' ' -f3)

# Make sure the input file exists
if [ ! -f "$INPUT_FILE" ]; then
  echo "Error: Input file $INPUT_FILE not found."
  exit 1
fi

# Use pv to show a progress bar if it is installed
if command -v pv >/dev/null 2>&1; then
  pv "$INPUT_FILE" | mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT $DB_NAME
else
  echo "pv command not found, importing without progress indicator..."
  mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT $DB_NAME < "$INPUT_FILE"
fi

echo "Large SQL file import completed."
```
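A common failure when importing very large dumps is the server rejecting an oversized statement ("Got a packet bigger than 'max_allowed_packet' bytes"). Raising the limit on both the server and the client can help; a hedged example for the MySQL option file (the 256M value is illustrative, not a recommendation):

```ini
[mysqld]
max_allowed_packet = 256M

[mysql]
max_allowed_packet = 256M
```

The server value requires a restart (or SET GLOBAL) to take effect; the client value can also be passed per invocation as --max_allowed_packet=256M.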
Synchronizing Databases with an INI Configuration File
One-Way Database Synchronization
One-way synchronization copies data from one database (the source) to another (the target). Create a configuration file named sync.ini:

```ini
[source]
host = source.db.com
user = sync_user
password = source_password
port = 3306
database = source_db

[target]
host = target.db.com
user = sync_user
password = target_password
port = 3306
database = target_db

[sync]
tables = users,products,orders
sync_method = replace
ignore_tables = logs,temp_data
where_condition = updated_at > '2023-01-01'
```
A script implementing one-way synchronization (the ^\[section\] anchors keep grep from treating the brackets as a character class):

```bash
#!/bin/bash
# sync_databases.sh

CONFIG_FILE="/path/to/sync.ini"
TEMP_DIR="/tmp/sync_$(date +%Y%m%d_%H%M%S)"
mkdir -p $TEMP_DIR

# Read parameters from the configuration file
SOURCE_HOST=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
SOURCE_USER=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
SOURCE_PASS=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
SOURCE_PORT=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
SOURCE_DB=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

TARGET_HOST=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
TARGET_USER=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
TARGET_PASS=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
TARGET_PORT=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
TARGET_DB=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

# ^tables avoids also matching ignore_tables; where_condition contains spaces,
# so the key is stripped with sed rather than cut
SYNC_TABLES=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep '^tables' | cut -d' ' -f3)
SYNC_METHOD=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep 'sync_method' | cut -d' ' -f3)
IGNORE_TABLES=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep 'ignore_tables' | cut -d' ' -f3)
WHERE_CONDITION=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep 'where_condition' | sed 's/^where_condition *= *//')

# Turn the comma-separated lists into arrays
IFS=',' read -ra TABLES_ARRAY <<< "$SYNC_TABLES"
IFS=',' read -ra IGNORE_ARRAY <<< "$IGNORE_TABLES"

# Sync each table
for TABLE in "${TABLES_ARRAY[@]}"; do
  # Skip tables on the ignore list
  IGNORE=false
  for IGNORE_TABLE in "${IGNORE_ARRAY[@]}"; do
    if [ "$TABLE" = "$IGNORE_TABLE" ]; then
      IGNORE=true
      break
    fi
  done
  if [ "$IGNORE" = true ]; then
    echo "Skipping table: $TABLE"
    continue
  fi

  echo "Syncing table: $TABLE"

  # Dump the source table
  if [ -n "$WHERE_CONDITION" ]; then
    mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
      $SOURCE_DB $TABLE --where="$WHERE_CONDITION" > $TEMP_DIR/$TABLE.sql
  else
    mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
      $SOURCE_DB $TABLE > $TEMP_DIR/$TABLE.sql
  fi

  # Load into the target according to the sync method
  case $SYNC_METHOD in
    "replace")
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        -e "DROP TABLE IF EXISTS $TABLE; SOURCE $TEMP_DIR/$TABLE.sql;"
      ;;
    "update")
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        -e "DELETE FROM $TABLE; SOURCE $TEMP_DIR/$TABLE.sql;"
      ;;
    "insert_ignore")
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        -e "SET FOREIGN_KEY_CHECKS=0; SOURCE $TEMP_DIR/$TABLE.sql; SET FOREIGN_KEY_CHECKS=1;"
      ;;
    *)
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        < $TEMP_DIR/$TABLE.sql
      ;;
  esac

  echo "Table $TABLE synced successfully."
done

# Clean up temporary files
rm -rf $TEMP_DIR

echo "Database synchronization completed."
```
Two-Way Database Synchronization
Two-way synchronization is more complex because data conflicts must be resolved. Below is a timestamp-based two-way scheme:

```ini
[source]
host = source.db.com
user = sync_user
password = source_password
port = 3306
database = source_db

[target]
host = target.db.com
user = sync_user
password = target_password
port = 3306
database = target_db

[sync]
tables = users,products,orders
timestamp_column = updated_at
conflict_resolution = newer_wins
sync_interval = 3600
```
The two-way synchronization script:

```bash
#!/bin/bash
# bidirectional_sync.sh
#
# NOTE: the queries below reference tables on the *other* server through
# host.database.table names. Plain MySQL cannot join across servers, so this
# assumes those remote tables have been made reachable locally (for example
# through FEDERATED tables or a proxy). Treat the script as a sketch of the
# algorithm rather than a drop-in solution.

CONFIG_FILE="/path/to/sync.ini"
TEMP_DIR="/tmp/bidir_sync_$(date +%Y%m%d_%H%M%S)"
mkdir -p $TEMP_DIR

# Read parameters from the configuration file
SOURCE_HOST=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
SOURCE_USER=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
SOURCE_PASS=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
SOURCE_PORT=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
SOURCE_DB=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

TARGET_HOST=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
TARGET_USER=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
TARGET_PASS=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
TARGET_PORT=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
TARGET_DB=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

SYNC_TABLES=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep '^tables' | cut -d' ' -f3)
TIMESTAMP_COLUMN=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep 'timestamp_column' | cut -d' ' -f3)
CONFLICT_RESOLUTION=$(grep -A10 '^\[sync\]' $CONFIG_FILE | grep 'conflict_resolution' | cut -d' ' -f3)

# Turn the comma-separated table list into an array
IFS=',' read -ra TABLES_ARRAY <<< "$SYNC_TABLES"

for TABLE in "${TABLES_ARRAY[@]}"; do
  echo "Bidirectional syncing table: $TABLE"

  # 1. Rows that exist only in the source database
  mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB -e "
    SELECT s.* FROM $TABLE s
    LEFT JOIN $TARGET_HOST.$TARGET_DB.$TABLE t ON s.id = t.id
    WHERE t.id IS NULL" > $TEMP_DIR/source_only_$TABLE.tsv

  # 2. Rows that exist only in the target database
  mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB -e "
    SELECT t.* FROM $TABLE t
    LEFT JOIN $SOURCE_HOST.$SOURCE_DB.$TABLE s ON t.id = s.id
    WHERE s.id IS NULL" > $TEMP_DIR/target_only_$TABLE.tsv

  # 3. Rows present on both sides where one copy is newer
  mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB -e "
    SELECT s.* FROM $TABLE s
    JOIN $TARGET_HOST.$TARGET_DB.$TABLE t ON s.id = t.id
    WHERE s.$TIMESTAMP_COLUMN > t.$TIMESTAMP_COLUMN" > $TEMP_DIR/source_newer_$TABLE.tsv

  mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB -e "
    SELECT t.* FROM $TABLE t
    JOIN $SOURCE_HOST.$SOURCE_DB.$TABLE s ON t.id = s.id
    WHERE t.$TIMESTAMP_COLUMN > s.$TIMESTAMP_COLUMN" > $TEMP_DIR/target_newer_$TABLE.tsv

  # Apply the differences. mysql's tab-separated batch output matches the LOAD
  # DATA defaults; IGNORE 1 LINES skips the column-header line, and LOAD DATA
  # LOCAL requires --local-infile=1 on the client and local_infile=ON on the server.
  if [ -s $TEMP_DIR/source_only_$TABLE.tsv ]; then
    mysql --local-infile=1 -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB -e "
      LOAD DATA LOCAL INFILE '$TEMP_DIR/source_only_$TABLE.tsv' INTO TABLE $TABLE
      FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES;"
  fi

  if [ -s $TEMP_DIR/target_only_$TABLE.tsv ]; then
    mysql --local-infile=1 -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB -e "
      LOAD DATA LOCAL INFILE '$TEMP_DIR/target_only_$TABLE.tsv' INTO TABLE $TABLE
      FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES;"
  fi

  # Resolve rows that differ according to the configured strategy.
  # REPLACE INTO overwrites the existing row by primary key.
  if [ "$CONFLICT_RESOLUTION" = "newer_wins" ]; then
    # Push the newer source rows to the target
    if [ -s $TEMP_DIR/source_newer_$TABLE.tsv ]; then
      mysql --local-infile=1 -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB -e "
        CREATE TEMPORARY TABLE temp_updates LIKE $TABLE;
        LOAD DATA LOCAL INFILE '$TEMP_DIR/source_newer_$TABLE.tsv' INTO TABLE temp_updates
        FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES;
        REPLACE INTO $TABLE SELECT * FROM temp_updates;
        DROP TEMPORARY TABLE temp_updates;"
    fi

    # Push the newer target rows to the source
    if [ -s $TEMP_DIR/target_newer_$TABLE.tsv ]; then
      mysql --local-infile=1 -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB -e "
        CREATE TEMPORARY TABLE temp_updates LIKE $TABLE;
        LOAD DATA LOCAL INFILE '$TEMP_DIR/target_newer_$TABLE.tsv' INTO TABLE temp_updates
        FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n' IGNORE 1 LINES;
        REPLACE INTO $TABLE SELECT * FROM temp_updates;
        DROP TEMPORARY TABLE temp_updates;"
    fi
  fi

  echo "Table $TABLE bidirectional sync completed."
done

# Clean up temporary files
rm -rf $TEMP_DIR

echo "Bidirectional database synchronization completed."
```
Backing Up Databases with an INI Configuration File
Full Database Backup
A full database backup is the most basic backup strategy, and an INI configuration file simplifies it as well. Create a configuration file named backup.ini:

```ini
[client]
host = localhost
user = backup_user
password = backup_password
port = 3306

[backup]
database = my_database
backup_dir = /backups/daily
backup_type = full
compression = true
retention_days = 30
```
The full backup script:

```bash
#!/bin/bash
# full_backup.sh

CONFIG_FILE="/path/to/backup.ini"
DATE=$(date +%Y%m%d_%H%M%S)

# Read parameters from the configuration file
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep 'database' $CONFIG_FILE | cut -d' ' -f3)
BACKUP_DIR=$(grep 'backup_dir' $CONFIG_FILE | cut -d' ' -f3)
BACKUP_TYPE=$(grep 'backup_type' $CONFIG_FILE | cut -d' ' -f3)
COMPRESSION=$(grep 'compression' $CONFIG_FILE | cut -d' ' -f3)
RETENTION_DAYS=$(grep 'retention_days' $CONFIG_FILE | cut -d' ' -f3)

# Create the backup directory
mkdir -p $BACKUP_DIR

# Run the backup
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${BACKUP_TYPE}_$DATE.sql"
echo "Starting $BACKUP_TYPE backup of database $DB_NAME to $BACKUP_FILE..."

mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT \
  --single-transaction --routines --triggers --events \
  --databases $DB_NAME > $BACKUP_FILE

# Check whether the backup succeeded
if [ $? -eq 0 ]; then
  echo "Backup completed successfully."

  # Compress the backup file
  if [ "$COMPRESSION" = "true" ]; then
    gzip $BACKUP_FILE
    echo "Backup compressed to $BACKUP_FILE.gz"
    BACKUP_FILE="$BACKUP_FILE.gz"
  fi

  # Remove old backups
  if [ -n "$RETENTION_DAYS" ] && [ "$RETENTION_DAYS" -gt 0 ]; then
    echo "Cleaning up backups older than $RETENTION_DAYS days..."
    find $BACKUP_DIR -name "${DB_NAME}_${BACKUP_TYPE}_*.sql*" -type f -mtime +$RETENTION_DAYS -delete
    echo "Old backups cleaned up."
  fi
else
  echo "Backup failed!"
  exit 1
fi

echo "Backup process completed."
```
Incremental Database Backup
An incremental backup captures only the data that has changed since the previous backup, saving storage space and backup time. Adjust the INI configuration file:

```ini
[client]
host = localhost
user = backup_user
password = backup_password
port = 3306

[backup]
database = my_database
backup_dir = /backups/incremental
backup_type = incremental
compression = true
retention_days = 30
binary_log_dir = /var/lib/mysql
binary_log_prefix = mysql-bin
```
The incremental backup script:

```bash
#!/bin/bash
# incremental_backup.sh

CONFIG_FILE="/path/to/backup.ini"
DATE=$(date +%Y%m%d_%H%M%S)

# Read parameters from the configuration file
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep 'database' $CONFIG_FILE | cut -d' ' -f3)
BACKUP_DIR=$(grep 'backup_dir' $CONFIG_FILE | cut -d' ' -f3)
BACKUP_TYPE=$(grep 'backup_type' $CONFIG_FILE | cut -d' ' -f3)
COMPRESSION=$(grep 'compression' $CONFIG_FILE | cut -d' ' -f3)
RETENTION_DAYS=$(grep '^retention_days' $CONFIG_FILE | cut -d' ' -f3)
BINLOG_DIR=$(grep 'binary_log_dir' $CONFIG_FILE | cut -d' ' -f3)
BINLOG_PREFIX=$(grep 'binary_log_prefix' $CONFIG_FILE | cut -d' ' -f3)

# Create the backup directory
mkdir -p $BACKUP_DIR

# Current binary log file and position (\G prints one "Field: value" per line)
BINLOG_INFO=$(mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT -e "SHOW MASTER STATUS\G")
CURRENT_LOG=$(echo "$BINLOG_INFO" | awk '$1 == "File:" {print $2}')
CURRENT_POS=$(echo "$BINLOG_INFO" | awk '$1 == "Position:" {print $2}')

# Position recorded by the previous backup, if any
LAST_POS_FILE="$BACKUP_DIR/last_position.txt"

if [ -f "$LAST_POS_FILE" ]; then
  LAST_LOG=$(head -n1 $LAST_POS_FILE)
  LAST_POS=$(tail -n1 $LAST_POS_FILE)

  echo "Performing incremental backup from $LAST_LOG:$LAST_POS to $CURRENT_LOG:$CURRENT_POS"

  # Pull the binary log events since the last backup
  # (with --read-from-remote-server the log is addressed by name, not by path)
  BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${BACKUP_TYPE}_$DATE.sql"
  mysqlbinlog --read-from-remote-server --host=$DB_HOST --user=$DB_USER \
    --password=$DB_PASS --port=$DB_PORT \
    --start-position=$LAST_POS $LAST_LOG > $BACKUP_FILE

  if [ $? -eq 0 ]; then
    echo "Incremental backup completed successfully."

    # Compress the backup file
    if [ "$COMPRESSION" = "true" ]; then
      gzip $BACKUP_FILE
      echo "Backup compressed to $BACKUP_FILE.gz"
      BACKUP_FILE="$BACKUP_FILE.gz"
    fi

    # Record the new position
    echo $CURRENT_LOG > $LAST_POS_FILE
    echo $CURRENT_POS >> $LAST_POS_FILE

    # Remove old backups
    if [ -n "$RETENTION_DAYS" ] && [ "$RETENTION_DAYS" -gt 0 ]; then
      echo "Cleaning up backups older than $RETENTION_DAYS days..."
      find $BACKUP_DIR -name "${DB_NAME}_${BACKUP_TYPE}_*.sql*" -type f -mtime +$RETENTION_DAYS -delete
      echo "Old backups cleaned up."
    fi
  else
    echo "Incremental backup failed!"
    exit 1
  fi
else
  echo "No previous backup position found. Performing full backup first..."

  # Full backup; --master-data=2 records the binlog position as a comment
  BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_full_$DATE.sql"
  mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT \
    --single-transaction --routines --triggers --events --master-data=2 \
    --databases $DB_NAME > $BACKUP_FILE

  if [ $? -eq 0 ]; then
    echo "Full backup completed successfully."

    # Extract the binary log position from the dump
    BINLOG_LINE=$(grep "CHANGE MASTER TO" $BACKUP_FILE)
    LAST_LOG=$(echo $BINLOG_LINE | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
    LAST_POS=$(echo $BINLOG_LINE | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")

    # Record the position
    echo $LAST_LOG > $LAST_POS_FILE
    echo $LAST_POS >> $LAST_POS_FILE

    # Compress the backup file
    if [ "$COMPRESSION" = "true" ]; then
      gzip $BACKUP_FILE
      echo "Backup compressed to $BACKUP_FILE.gz"
      BACKUP_FILE="$BACKUP_FILE.gz"
    fi
  else
    echo "Full backup failed!"
    exit 1
  fi
fi

echo "Backup process completed."
```
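The incremental approach above only works when the server writes binary logs. If SHOW MASTER STATUS returns an empty set, binary logging is disabled; it can be enabled in the server configuration (the values are illustrative):

```ini
[mysqld]
log_bin = mysql-bin
server_id = 1
```

The server must be restarted for the change to take effect, and server_id is required whenever log_bin is set.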
A Scheduled Backup Strategy
Combining cron with an INI configuration file yields a fully automated backup schedule. Create a configuration file named cron_backup.ini:

```ini
[schedule]
full_backup_day = sunday
full_backup_time = 02:00
incremental_backup_time = 03:00

[client]
host = localhost
user = backup_user
password = backup_password
port = 3306

[backup]
database = my_database
backup_dir = /backups/automated
compression = true
full_retention_days = 30
incremental_retention_days = 7
binary_log_dir = /var/lib/mysql
binary_log_prefix = mysql-bin
```
The automated backup script:

```bash
#!/bin/bash
# automated_backup.sh

CONFIG_FILE="/path/to/cron_backup.ini"
DATE=$(date +%Y%m%d_%H%M%S)
DAY_OF_WEEK=$(date +%A | tr '[:upper:]' '[:lower:]')

# Read parameters from the configuration file
FULL_BACKUP_DAY=$(grep 'full_backup_day' $CONFIG_FILE | cut -d' ' -f3)
FULL_BACKUP_TIME=$(grep 'full_backup_time' $CONFIG_FILE | cut -d' ' -f3)
INCREMENTAL_BACKUP_TIME=$(grep 'incremental_backup_time' $CONFIG_FILE | cut -d' ' -f3)
DB_HOST=$(grep 'host' $CONFIG_FILE | cut -d' ' -f3)
DB_USER=$(grep 'user' $CONFIG_FILE | cut -d' ' -f3)
DB_PASS=$(grep 'password' $CONFIG_FILE | cut -d' ' -f3)
DB_PORT=$(grep 'port' $CONFIG_FILE | cut -d' ' -f3)
DB_NAME=$(grep 'database' $CONFIG_FILE | cut -d' ' -f3)
BACKUP_DIR=$(grep 'backup_dir' $CONFIG_FILE | cut -d' ' -f3)
COMPRESSION=$(grep 'compression' $CONFIG_FILE | cut -d' ' -f3)
FULL_RETENTION_DAYS=$(grep 'full_retention_days' $CONFIG_FILE | cut -d' ' -f3)
INCREMENTAL_RETENTION_DAYS=$(grep 'incremental_retention_days' $CONFIG_FILE | cut -d' ' -f3)
BINLOG_DIR=$(grep 'binary_log_dir' $CONFIG_FILE | cut -d' ' -f3)
BINLOG_PREFIX=$(grep 'binary_log_prefix' $CONFIG_FILE | cut -d' ' -f3)

# Create the backup directory
mkdir -p $BACKUP_DIR

# Current time, compared against the schedule below
CURRENT_TIME=$(date +%H:%M)

# Is it time for a full backup?
if [ "$DAY_OF_WEEK" = "$FULL_BACKUP_DAY" ] && [ "$CURRENT_TIME" = "$FULL_BACKUP_TIME" ]; then
  echo "Performing full backup..."

  # Full backup; --master-data=2 records the binlog position as a comment
  BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_full_$DATE.sql"
  mysqldump -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT \
    --single-transaction --routines --triggers --events --master-data=2 \
    --databases $DB_NAME > $BACKUP_FILE

  if [ $? -eq 0 ]; then
    echo "Full backup completed successfully."

    # Extract the binary log position from the dump
    BINLOG_LINE=$(grep "CHANGE MASTER TO" $BACKUP_FILE)
    LAST_LOG=$(echo $BINLOG_LINE | sed -n "s/.*MASTER_LOG_FILE='\([^']*\)'.*/\1/p")
    LAST_POS=$(echo $BINLOG_LINE | sed -n "s/.*MASTER_LOG_POS=\([0-9]*\).*/\1/p")

    # Record the position for the next incremental run
    echo $LAST_LOG > $BACKUP_DIR/last_position.txt
    echo $LAST_POS >> $BACKUP_DIR/last_position.txt

    # Compress the backup file
    if [ "$COMPRESSION" = "true" ]; then
      gzip $BACKUP_FILE
      echo "Backup compressed to $BACKUP_FILE.gz"
      BACKUP_FILE="$BACKUP_FILE.gz"
    fi

    # Remove old full backups
    if [ -n "$FULL_RETENTION_DAYS" ] && [ "$FULL_RETENTION_DAYS" -gt 0 ]; then
      echo "Cleaning up full backups older than $FULL_RETENTION_DAYS days..."
      find $BACKUP_DIR -name "${DB_NAME}_full_*.sql*" -type f -mtime +$FULL_RETENTION_DAYS -delete
      echo "Old full backups cleaned up."
    fi
  else
    echo "Full backup failed!"
    exit 1
  fi

# Or time for an incremental backup?
elif [ "$CURRENT_TIME" = "$INCREMENTAL_BACKUP_TIME" ]; then
  echo "Performing incremental backup..."

  # Position recorded by the previous backup, if any
  LAST_POS_FILE="$BACKUP_DIR/last_position.txt"
  if [ -f "$LAST_POS_FILE" ]; then
    LAST_LOG=$(head -n1 $LAST_POS_FILE)
    LAST_POS=$(tail -n1 $LAST_POS_FILE)

    # Current binary log file and position
    BINLOG_INFO=$(mysql -h $DB_HOST -u $DB_USER -p$DB_PASS -P $DB_PORT -e "SHOW MASTER STATUS\G")
    CURRENT_LOG=$(echo "$BINLOG_INFO" | awk '$1 == "File:" {print $2}')
    CURRENT_POS=$(echo "$BINLOG_INFO" | awk '$1 == "Position:" {print $2}')

    echo "Performing incremental backup from $LAST_LOG:$LAST_POS to $CURRENT_LOG:$CURRENT_POS"

    # Pull the binary log events since the last backup
    BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_incremental_$DATE.sql"
    mysqlbinlog --read-from-remote-server --host=$DB_HOST --user=$DB_USER \
      --password=$DB_PASS --port=$DB_PORT \
      --start-position=$LAST_POS $LAST_LOG > $BACKUP_FILE

    if [ $? -eq 0 ]; then
      echo "Incremental backup completed successfully."

      # Compress the backup file
      if [ "$COMPRESSION" = "true" ]; then
        gzip $BACKUP_FILE
        echo "Backup compressed to $BACKUP_FILE.gz"
        BACKUP_FILE="$BACKUP_FILE.gz"
      fi

      # Record the new position
      echo $CURRENT_LOG > $LAST_POS_FILE
      echo $CURRENT_POS >> $LAST_POS_FILE

      # Remove old incremental backups
      if [ -n "$INCREMENTAL_RETENTION_DAYS" ] && [ "$INCREMENTAL_RETENTION_DAYS" -gt 0 ]; then
        echo "Cleaning up incremental backups older than $INCREMENTAL_RETENTION_DAYS days..."
        find $BACKUP_DIR -name "${DB_NAME}_incremental_*.sql*" -type f -mtime +$INCREMENTAL_RETENTION_DAYS -delete
        echo "Old incremental backups cleaned up."
      fi
    else
      echo "Incremental backup failed!"
      exit 1
    fi
  else
    echo "No previous backup position found. Skipping incremental backup."
  fi
else
  echo "Not a scheduled backup time. Current time: $DAY_OF_WEEK $CURRENT_TIME"
  exit 0
fi

echo "Backup process completed."
```
Then add the following crontab entry so that the script runs once every hour and fires only at the configured times:

```
0 * * * * /path/to/automated_backup.sh
```
Common Data Migration Problems and Their Solutions
Character Set and Collation Issues
Mismatched character sets and collations are a common source of trouble during data migration. Create a dedicated INI configuration file to handle them:

```ini
[source]
host = source.db.com
user = migration_user
password = source_password
port = 3306
database = source_db
charset = utf8mb4
collation = utf8mb4_unicode_ci

[target]
host = target.db.com
user = migration_user
password = target_password
port = 3306
database = target_db
charset = utf8mb4
collation = utf8mb4_unicode_ci

[migration]
convert_charset = true
handle_collation_conflicts = true
```
A script that handles character set conversion (the table structure is copied with mysqldump --no-data and the CREATE TABLE options are rewritten with sed):

```bash
#!/bin/bash
# handle_charset.sh

CONFIG_FILE="/path/to/migration.ini"

# Read parameters (each connection section has seven keys, hence -A7)
SOURCE_HOST=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
SOURCE_USER=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
SOURCE_PASS=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
SOURCE_PORT=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
SOURCE_DB=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)
SOURCE_CHARSET=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'charset' | cut -d' ' -f3)
SOURCE_COLLATION=$(grep -A7 '^\[source\]' $CONFIG_FILE | grep 'collation' | cut -d' ' -f3)

TARGET_HOST=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
TARGET_USER=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
TARGET_PASS=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
TARGET_PORT=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
TARGET_DB=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)
TARGET_CHARSET=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'charset' | cut -d' ' -f3)
TARGET_COLLATION=$(grep -A7 '^\[target\]' $CONFIG_FILE | grep 'collation' | cut -d' ' -f3)

CONVERT_CHARSET=$(grep -A2 '^\[migration\]' $CONFIG_FILE | grep 'convert_charset' | cut -d' ' -f3)
HANDLE_COLLATION=$(grep -A2 '^\[migration\]' $CONFIG_FILE | grep 'handle_collation_conflicts' | cut -d' ' -f3)

# Convert only when the character sets differ and conversion is enabled
if [ "$SOURCE_CHARSET" != "$TARGET_CHARSET" ] && [ "$CONVERT_CHARSET" = "true" ]; then
  echo "Character sets differ. Converting from $SOURCE_CHARSET to $TARGET_CHARSET..."

  # All tables in the source database (-N suppresses the column-header line)
  TABLES=$(mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
    $SOURCE_DB -e "SHOW TABLES;")

  for TABLE in $TABLES; do
    echo "Processing table: $TABLE"

    # Recreate the table on the target with the new charset and collation:
    # dump the structure only and rewrite the CREATE TABLE options in transit
    mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
      --no-data $SOURCE_DB $TABLE \
      | sed "s/DEFAULT CHARSET=$SOURCE_CHARSET/DEFAULT CHARSET=$TARGET_CHARSET/g; \
             s/COLLATE=$SOURCE_COLLATION/COLLATE=$TARGET_COLLATION/g" \
      | mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB

    # Copy the data
    if [ "$HANDLE_COLLATION" = "true" ]; then
      # Stream tab-separated rows and let the server convert from the source charset
      mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
        $SOURCE_DB -e "SELECT * FROM $TABLE" \
        | mysql --local-infile=1 -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT \
          $TARGET_DB -e "SET FOREIGN_KEY_CHECKS=0;
            LOAD DATA LOCAL INFILE '/dev/stdin' INTO TABLE $TABLE CHARACTER SET $SOURCE_CHARSET;
            SET FOREIGN_KEY_CHECKS=1;"
    else
      # Plain dump-and-load of the rows
      mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
        $SOURCE_DB $TABLE --no-create-info \
        | mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB
    fi

    echo "Table $TABLE processed successfully."
  done

  echo "Character set conversion completed."
else
  echo "Character sets match or conversion is disabled. Proceeding with normal migration."

  # Standard migration
  mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
    --default-character-set=$SOURCE_CHARSET \
    --single-transaction --routines --triggers --events $SOURCE_DB \
    | mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT \
      --default-character-set=$TARGET_CHARSET $TARGET_DB

  echo "Migration completed."
fi
```
Performance Tuning for Large Data Migrations
When migrating a large volume of data, performance matters. Create a dedicated INI configuration file:

```ini
[source]
host = source.db.com
user = migration_user
password = source_password
port = 3306
database = large_db

[target]
host = target.db.com
user = migration_user
password = target_password
port = 3306
database = large_db_target

[migration]
batch_size = 10000
delay_between_batches = 1
parallel_threads = 4
disable_indexes = true
disable_foreign_keys = true
```
The optimized migration script for large data volumes:

```bash
#!/bin/bash
# large_migration.sh

CONFIG_FILE="/path/to/large_migration.ini"

# Read parameters from the configuration file
SOURCE_HOST=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
SOURCE_USER=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
SOURCE_PASS=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
SOURCE_PORT=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
SOURCE_DB=$(grep -A5 '^\[source\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

TARGET_HOST=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
TARGET_USER=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
TARGET_PASS=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
TARGET_PORT=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
TARGET_DB=$(grep -A5 '^\[target\]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

BATCH_SIZE=$(grep -A5 '^\[migration\]' $CONFIG_FILE | grep 'batch_size' | cut -d' ' -f3)
DELAY_BETWEEN_BATCHES=$(grep -A5 '^\[migration\]' $CONFIG_FILE | grep 'delay_between_batches' | cut -d' ' -f3)
# parallel_threads is read for completeness; the loop below runs sequentially
PARALLEL_THREADS=$(grep -A5 '^\[migration\]' $CONFIG_FILE | grep 'parallel_threads' | cut -d' ' -f3)
DISABLE_INDEXES=$(grep -A5 '^\[migration\]' $CONFIG_FILE | grep 'disable_indexes' | cut -d' ' -f3)
DISABLE_FOREIGN_KEYS=$(grep -A5 '^\[migration\]' $CONFIG_FILE | grep 'disable_foreign_keys' | cut -d' ' -f3)

# Create the target database
mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT \
  -e "CREATE DATABASE IF NOT EXISTS $TARGET_DB;"

# All tables in the source database (-N suppresses the column-header line)
TABLES=$(mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
  $SOURCE_DB -e "SHOW TABLES;")

# Recreate the table structure on the target
echo "Creating table structure in target database..."
mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
  --no-data --single-transaction $SOURCE_DB \
  | mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB

# FOREIGN_KEY_CHECKS is session-scoped, so it is toggled inside each session
# that actually loads data below rather than in a separate mysql invocation.
if [ "$DISABLE_FOREIGN_KEYS" = "true" ]; then
  echo "Foreign key checks will be disabled during data load."
fi

# Migrate each table
for TABLE in $TABLES; do
  echo "Migrating table: $TABLE"

  # Row count of the source table
  ROW_COUNT=$(mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
    $SOURCE_DB -e "SELECT COUNT(*) FROM $TABLE;")
  echo "Table $TABLE has $ROW_COUNT rows."

  # First column of the primary key, used to order the batches
  PRIMARY_KEY=$(mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
    $SOURCE_DB -e "SHOW KEYS FROM $TABLE WHERE Key_name = 'PRIMARY';" \
    | awk '{print $5}' | head -n1)

  if [ -z "$PRIMARY_KEY" ]; then
    echo "Warning: Table $TABLE has no primary key. Using full table migration."
    mysqldump -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT \
      --no-create-info --single-transaction $SOURCE_DB $TABLE \
      | mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB
    echo "Table $TABLE migrated without batching."
  else
    echo "Using primary key $PRIMARY_KEY for batched migration."

    # Suspend non-unique index maintenance during the load
    # (ALTER TABLE ... DISABLE KEYS only affects MyISAM tables)
    if [ "$DISABLE_INDEXES" = "true" ]; then
      echo "Disabling indexes for table $TABLE..."
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        -e "ALTER TABLE $TABLE DISABLE KEYS;"
    fi

    # Number of batches: ceiling division
    BATCH_COUNT=$(( (ROW_COUNT + BATCH_SIZE - 1) / BATCH_SIZE ))
    echo "Table $TABLE will be migrated in $BATCH_COUNT batches of $BATCH_SIZE rows each."

    # Migrate the data batch by batch
    for ((i = 0; i < BATCH_COUNT; i++)); do
      OFFSET=$((i * BATCH_SIZE))
      echo "Migrating batch $((i+1))/$BATCH_COUNT (rows $OFFSET-$((OFFSET + BATCH_SIZE - 1)))..."

      # Stream one batch as tab-separated rows into LOAD DATA on the target
      mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
        -e "SELECT * FROM $TABLE ORDER BY $PRIMARY_KEY LIMIT $OFFSET, $BATCH_SIZE" \
        | mysql --local-infile=1 -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
          -e "SET FOREIGN_KEY_CHECKS=0;
              LOAD DATA LOCAL INFILE '/dev/stdin' REPLACE INTO TABLE $TABLE;
              SET FOREIGN_KEY_CHECKS=1;"

      # Pause between batches to limit load on the servers
      if [ $i -lt $((BATCH_COUNT - 1)) ] && [ "$DELAY_BETWEEN_BATCHES" -gt 0 ]; then
        echo "Waiting $DELAY_BETWEEN_BATCHES seconds before next batch..."
        sleep $DELAY_BETWEEN_BATCHES
      fi
    done

    # Rebuild the indexes
    if [ "$DISABLE_INDEXES" = "true" ]; then
      echo "Re-enabling indexes for table $TABLE..."
      mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
        -e "ALTER TABLE $TABLE ENABLE KEYS;"
    fi

    echo "Table $TABLE migrated successfully in batches."
  fi
done

echo "Large database migration completed."
```
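The batching above rests on one small piece of integer arithmetic: ceiling division turns a row count into a batch count, and each batch gets an offset. A standalone sketch of just that logic (the row count and batch size are illustrative):

```shell
#!/bin/bash
# Ceiling division: the "+ BATCH_SIZE - 1" rounds any partial batch up.
ROW_COUNT=25
BATCH_SIZE=10

BATCH_COUNT=$(( (ROW_COUNT + BATCH_SIZE - 1) / BATCH_SIZE ))
echo "batches: $BATCH_COUNT"          # prints: batches: 3

for ((i = 0; i < BATCH_COUNT; i++)); do
  OFFSET=$((i * BATCH_SIZE))
  echo "batch $((i + 1)): OFFSET $OFFSET LIMIT $BATCH_SIZE"
done
```

One caveat on the design: LIMIT/OFFSET paging forces the server to scan and discard all skipped rows, so deep offsets get progressively slower on very large tables; seeking on the primary key instead (WHERE pk > last_seen ORDER BY pk LIMIT n) keeps every batch equally cheap.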
Verifying Data Consistency and Integrity
Verifying the consistency and integrity of the migrated data is a critical step in any migration. Create a dedicated INI configuration file:
[source]
host = source.db.com
user = verification_user
password = source_password
port = 3306
database = source_db

[target]
host = target.db.com
user = verification_user
password = target_password
port = 3306
database = target_db

[verification]
check_row_counts = true
check_checksums = true
sample_percentage = 10
max_differences = 100
report_file = /var/log/migration_verification.log
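The scripts in this guide read INI values with grep and cut. One pitfall: in a pattern such as `grep -A5 '[source]'`, the unescaped brackets form a regex character class (matching any line containing those letters) rather than the literal header, and `-A5` silently breaks if a section grows past five keys. A section-aware awk helper sidesteps both problems; a minimal sketch (the demo file is throwaway):

```shell
#!/bin/sh
# Section-aware INI lookup: print the value of <key> in [<section>] of <file>
ini_get() {
    awk -F' = ' -v section="[$2]" -v key="$3" '
        $0 == section { in_section = 1; next }  # entered the wanted section
        /^\[/         { in_section = 0 }        # any other header closes it
        in_section && $1 == key { print $2; exit }
    ' "$1"
}

# Demo against a throwaway copy of the verification config
cat > /tmp/verification_demo.ini <<'EOF'
[source]
host = source.db.com
port = 3306

[target]
host = target.db.com
port = 3306
EOF

ini_get /tmp/verification_demo.ini source host
ini_get /tmp/verification_demo.ini target host
```

Because the helper tracks which section it is inside, duplicate keys in different sections (both `[source]` and `[target]` define `host`) resolve correctly.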
The verification script:
#!/bin/bash
# verify_migration.sh

CONFIG_FILE="/path/to/verification.ini"

# Read parameters from the config file (-F makes grep match the [section]
# headers literally instead of treating the brackets as a regex class)
SOURCE_HOST=$(grep -A5 -F '[source]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
SOURCE_USER=$(grep -A5 -F '[source]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
SOURCE_PASS=$(grep -A5 -F '[source]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
SOURCE_PORT=$(grep -A5 -F '[source]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
SOURCE_DB=$(grep -A5 -F '[source]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

TARGET_HOST=$(grep -A5 -F '[target]' $CONFIG_FILE | grep 'host' | cut -d' ' -f3)
TARGET_USER=$(grep -A5 -F '[target]' $CONFIG_FILE | grep 'user' | cut -d' ' -f3)
TARGET_PASS=$(grep -A5 -F '[target]' $CONFIG_FILE | grep 'password' | cut -d' ' -f3)
TARGET_PORT=$(grep -A5 -F '[target]' $CONFIG_FILE | grep 'port' | cut -d' ' -f3)
TARGET_DB=$(grep -A5 -F '[target]' $CONFIG_FILE | grep 'database' | cut -d' ' -f3)

CHECK_ROW_COUNTS=$(grep -A5 -F '[verification]' $CONFIG_FILE | grep 'check_row_counts' | cut -d' ' -f3)
CHECK_CHECKSUMS=$(grep -A5 -F '[verification]' $CONFIG_FILE | grep 'check_checksums' | cut -d' ' -f3)
SAMPLE_PERCENTAGE=$(grep -A5 -F '[verification]' $CONFIG_FILE | grep 'sample_percentage' | cut -d' ' -f3)
MAX_DIFFERENCES=$(grep -A5 -F '[verification]' $CONFIG_FILE | grep 'max_differences' | cut -d' ' -f3)
REPORT_FILE=$(grep -A5 -F '[verification]' $CONFIG_FILE | grep 'report_file' | cut -d' ' -f3)

# Create the report directory
mkdir -p $(dirname $REPORT_FILE)

# Initialize the report file
echo "Migration Verification Report - $(date)" > $REPORT_FILE
echo "=====================================" >> $REPORT_FILE
echo "" >> $REPORT_FILE

# List all tables in the source and target databases
SOURCE_TABLES=$(mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
    -e "SHOW TABLES;" | awk '{print $1}' | grep -v "Tables_in")
TARGET_TABLES=$(mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
    -e "SHOW TABLES;" | awk '{print $1}' | grep -v "Tables_in")

# Compare the table counts
SOURCE_TABLE_COUNT=$(echo "$SOURCE_TABLES" | wc -l)
TARGET_TABLE_COUNT=$(echo "$TARGET_TABLES" | wc -l)

echo "Source database has $SOURCE_TABLE_COUNT tables." >> $REPORT_FILE
echo "Target database has $TARGET_TABLE_COUNT tables." >> $REPORT_FILE
echo "" >> $REPORT_FILE

if [ "$SOURCE_TABLE_COUNT" -ne "$TARGET_TABLE_COUNT" ]; then
    echo "WARNING: Table count mismatch between source and target databases!" >> $REPORT_FILE
    # Tables present in the source but missing from the target
    echo "Tables in source but not in target:" >> $REPORT_FILE
    for TABLE in $SOURCE_TABLES; do
        if ! echo "$TARGET_TABLES" | grep -q "^$TABLE$"; then
            echo "- $TABLE" >> $REPORT_FILE
        fi
    done
    # Tables present in the target but missing from the source
    echo "Tables in target but not in source:" >> $REPORT_FILE
    for TABLE in $TARGET_TABLES; do
        if ! echo "$SOURCE_TABLES" | grep -q "^$TABLE$"; then
            echo "- $TABLE" >> $REPORT_FILE
        fi
    done
    echo "" >> $REPORT_FILE
fi

# Verify each table
TOTAL_DIFFERENCES=0
for TABLE in $SOURCE_TABLES; do
    echo "Verifying table: $TABLE" >> $REPORT_FILE

    # Make sure the table exists in the target database
    if ! echo "$TARGET_TABLES" | grep -q "^$TABLE$"; then
        echo "ERROR: Table $TABLE does not exist in target database!" >> $REPORT_FILE
        TOTAL_DIFFERENCES=$((TOTAL_DIFFERENCES + 1))
        continue
    fi

    # Compare row counts
    if [ "$CHECK_ROW_COUNTS" = "true" ]; then
        SOURCE_ROW_COUNT=$(mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
            -e "SELECT COUNT(*) FROM $TABLE;" | tail -n1)
        TARGET_ROW_COUNT=$(mysql -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
            -e "SELECT COUNT(*) FROM $TABLE;" | tail -n1)
        echo " - Source row count: $SOURCE_ROW_COUNT" >> $REPORT_FILE
        echo " - Target row count: $TARGET_ROW_COUNT" >> $REPORT_FILE
        if [ "$SOURCE_ROW_COUNT" -ne "$TARGET_ROW_COUNT" ]; then
            echo " - WARNING: Row count mismatch for table $TABLE!" >> $REPORT_FILE
            TOTAL_DIFFERENCES=$((TOTAL_DIFFERENCES + 1))
        else
            echo " - Row counts match." >> $REPORT_FILE
        fi
    fi

    # Compare checksums on a random sample of rows
    if [ "$CHECK_CHECKSUMS" = "true" ] && [ "$SOURCE_ROW_COUNT" -gt 0 ]; then
        echo " - Calculating checksums..." >> $REPORT_FILE
        # Get the table's primary key
        PRIMARY_KEY=$(mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
            -e "SHOW KEYS FROM $TABLE WHERE Key_name = 'PRIMARY';" | awk '{print $5}' | head -n1)
        if [ -z "$PRIMARY_KEY" ]; then
            echo " - No primary key found for table $TABLE. Skipping checksum verification." >> $REPORT_FILE
        else
            # Compute the sample size
            SAMPLE_ROWS=$((SOURCE_ROW_COUNT * SAMPLE_PERCENTAGE / 100))
            if [ "$SAMPLE_ROWS" -lt 1 ]; then
                SAMPLE_ROWS=1
            fi
            echo " - Sampling $SAMPLE_ROWS rows ($SAMPLE_PERCENTAGE%) for checksum verification..." >> $REPORT_FILE
            # Pick the primary key values of the sampled rows
            SOURCE_KEYS=$(mysql -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
                -e "SELECT $PRIMARY_KEY FROM $TABLE ORDER BY RAND() LIMIT $SAMPLE_ROWS;" | tail -n +2)
            # Hash each sampled row on the client: CHECKSUM TABLE does not
            # accept a WHERE clause, so fetch the row and pipe it through md5sum
            CHECKSUM_MISMATCH=0
            for KEY in $SOURCE_KEYS; do
                SOURCE_CHECKSUM=$(mysql -N -h $SOURCE_HOST -u $SOURCE_USER -p$SOURCE_PASS -P $SOURCE_PORT $SOURCE_DB \
                    -e "SELECT * FROM $TABLE WHERE $PRIMARY_KEY = $KEY;" | md5sum | awk '{print $1}')
                TARGET_CHECKSUM=$(mysql -N -h $TARGET_HOST -u $TARGET_USER -p$TARGET_PASS -P $TARGET_PORT $TARGET_DB \
                    -e "SELECT * FROM $TABLE WHERE $PRIMARY_KEY = $KEY;" | md5sum | awk '{print $1}')
                if [ "$SOURCE_CHECKSUM" != "$TARGET_CHECKSUM" ]; then
                    echo " - WARNING: Checksum mismatch for row with $PRIMARY_KEY = $KEY!" >> $REPORT_FILE
                    CHECKSUM_MISMATCH=$((CHECKSUM_MISMATCH + 1))
                    TOTAL_DIFFERENCES=$((TOTAL_DIFFERENCES + 1))
                    # Stop early if there are too many differences
                    if [ "$TOTAL_DIFFERENCES" -ge "$MAX_DIFFERENCES" ]; then
                        echo " - Stopping verification as maximum number of differences ($MAX_DIFFERENCES) reached." >> $REPORT_FILE
                        break 2
                    fi
                fi
            done
            if [ "$CHECKSUM_MISMATCH" -eq 0 ]; then
                echo " - All sampled rows have matching checksums." >> $REPORT_FILE
            else
                echo " - Found $CHECKSUM_MISMATCH rows with checksum mismatches." >> $REPORT_FILE
            fi
        fi
    fi

    echo "" >> $REPORT_FILE

    # Stop early if there are too many differences
    if [ "$TOTAL_DIFFERENCES" -ge "$MAX_DIFFERENCES" ]; then
        break
    fi
done

# Summary
echo "Verification Summary" >> $REPORT_FILE
echo "==================" >> $REPORT_FILE
echo "Total differences found: $TOTAL_DIFFERENCES" >> $REPORT_FILE
echo "Maximum allowed differences: $MAX_DIFFERENCES" >> $REPORT_FILE

if [ "$TOTAL_DIFFERENCES" -eq 0 ]; then
    echo "SUCCESS: No differences found between source and target databases." >> $REPORT_FILE
    EXIT_CODE=0
elif [ "$TOTAL_DIFFERENCES" -lt "$MAX_DIFFERENCES" ]; then
    echo "PARTIAL SUCCESS: Found some differences, but within acceptable limits." >> $REPORT_FILE
    EXIT_CODE=1
else
    echo "FAILURE: Too many differences found between source and target databases." >> $REPORT_FILE
    EXIT_CODE=2
fi

echo "" >> $REPORT_FILE
echo "Verification completed at: $(date)" >> $REPORT_FILE

# Print a summary to the console
echo "Verification completed. Summary:"
echo "- Total differences found: $TOTAL_DIFFERENCES"
echo "- Maximum allowed differences: $MAX_DIFFERENCES"
echo "- Detailed report saved to: $REPORT_FILE"

exit $EXIT_CODE
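MySQL's CHECKSUM TABLE statement operates on a whole table and accepts no WHERE clause, so comparing individual rows means hashing them on the client side. The idea in isolation, hashing a row exactly as `mysql -N` prints it (tab-separated, newline-terminated; md5sum from coreutils is assumed):

```shell
#!/bin/sh
# Hash one three-column row the way `mysql -N ... | md5sum` sees it
row_checksum() {
    printf '%s\t%s\t%s\n' "$1" "$2" "$3" | md5sum | awk '{print $1}'
}

A=$(row_checksum 42 alice active)
B=$(row_checksum 42 alice active)
C=$(row_checksum 42 alice inactive)

# Identical rows hash identically; any column drift changes the digest
[ "$A" = "$B" ] && echo "rows match"
[ "$A" != "$C" ] && echo "column drift detected"
```

One caveat worth hedging: the digest depends on the textual rendering, so differences in how each server formats NULLs, timestamps, or floats can show up as false mismatches.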
Best Practices for Improving Efficiency
Using Script Templates and a Configuration Library
To further improve efficiency, build a library of script templates and configuration files so that common data transfer tasks can be set up quickly. An example directory layout:
/mysql_data_transfer/
├── templates/
│   ├── export_template.ini
│   ├── import_template.ini
│   ├── sync_template.ini
│   ├── backup_template.ini
│   └── verify_template.ini
├── scripts/
│   ├── export.sh
│   ├── import.sh
│   ├── sync.sh
│   ├── backup.sh
│   └── verify.sh
├── configs/
│   ├── production_export.ini
│   ├── staging_import.ini
│   ├── prod_staging_sync.ini
│   └── daily_backup.ini
└── logs/
    ├── export.log
    ├── import.log
    ├── sync.log
    └── backup.log
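The skeleton above can be bootstrapped in one shot; a sketch using a /tmp base path for illustration (point BASE at the real location in practice):

```shell
#!/bin/sh
# Base directory for the transfer toolkit (placeholder path for the demo)
BASE=/tmp/mysql_data_transfer

# Create the four top-level directories
mkdir -p "$BASE/templates" "$BASE/scripts" "$BASE/configs" "$BASE/logs"

# Seed empty template files so new configs can be copied from them
for kind in export import sync backup verify; do
    touch "$BASE/templates/${kind}_template.ini"
done

ls "$BASE"
```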
Create a quick deployment script that generates new configuration files from the templates:
#!/bin/bash
# create_config.sh

TEMPLATE_DIR="/mysql_data_transfer/templates"
CONFIG_DIR="/mysql_data_transfer/configs"

# Show the available templates
echo "Available templates:"
ls $TEMPLATE_DIR
echo ""

# Pick a template
read -p "Enter template name: " TEMPLATE_NAME
if [ ! -f "$TEMPLATE_DIR/$TEMPLATE_NAME" ]; then
    echo "Error: Template $TEMPLATE_NAME not found."
    exit 1
fi

# Name the new config file
read -p "Enter new config name: " CONFIG_NAME
if [ -f "$CONFIG_DIR/$CONFIG_NAME" ]; then
    echo "Warning: Config $CONFIG_NAME already exists."
    read -p "Overwrite? (y/n): " OVERWRITE
    if [ "$OVERWRITE" != "y" ]; then
        echo "Operation cancelled."
        exit 0
    fi
fi

# Copy the template to the new config file
cp "$TEMPLATE_DIR/$TEMPLATE_NAME" "$CONFIG_DIR/$CONFIG_NAME"

# Open it for editing
${EDITOR:-vi} "$CONFIG_DIR/$CONFIG_NAME"

echo "Config file $CONFIG_NAME created successfully."
Centralized Management and Monitoring
For larger organizations, centralized management and monitoring can significantly improve efficiency. Below is a simple example of a centralized management setup:
[management]
central_server = management.db.com
central_user = admin
central_password = admin_password
central_port = 3306
central_database = data_transfer_management

[notification]
email_on_success = false
email_on_failure = true
email_recipients = admin@example.com,dba@example.com
slack_webhook = https://hooks.slack.com/services/XXXXX
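A common failure mode in shell notifiers is the Slack payload: unescaped double quotes inside a double-quoted curl `--data` string mangle the JSON before it ever leaves the machine. Assembling the payload in one printf against an already-valid JSON skeleton avoids the quoting tangle; a minimal sketch (it assumes the message itself contains no quote characters — a production version would serialize with a tool such as jq):

```shell
#!/bin/sh
# Build the Slack attachment JSON for a job notification
build_slack_payload() {
    job_name=$1; status=$2; message=$3
    color="good"
    if [ "$status" = "failed" ]; then
        color="danger"
    fi
    # printf substitutes the values into a JSON skeleton whose quotes
    # are fixed in the format string, so nothing needs escaping here
    printf '{"attachments":[{"color":"%s","title":"Data Transfer Job: %s","text":"%s"}]}\n' \
        "$color" "$job_name" "$message"
}

build_slack_payload daily_backup failed "Job exited with code 2"
# The result can then be POSTed as-is:
#   curl -X POST -H 'Content-type: application/json' --data "$payload" "$SLACK_WEBHOOK"
```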
The centralized management script:
#!/bin/bash
# central_management.sh

CONFIG_FILE="/path/to/central_management.ini"

# Read parameters from the config file
CENTRAL_HOST=$(grep 'central_server' $CONFIG_FILE | cut -d' ' -f3)
CENTRAL_USER=$(grep 'central_user' $CONFIG_FILE | cut -d' ' -f3)
CENTRAL_PASS=$(grep 'central_password' $CONFIG_FILE | cut -d' ' -f3)
CENTRAL_PORT=$(grep 'central_port' $CONFIG_FILE | cut -d' ' -f3)
CENTRAL_DB=$(grep 'central_database' $CONFIG_FILE | cut -d' ' -f3)
EMAIL_ON_SUCCESS=$(grep 'email_on_success' $CONFIG_FILE | cut -d' ' -f3)
EMAIL_ON_FAILURE=$(grep 'email_on_failure' $CONFIG_FILE | cut -d' ' -f3)
EMAIL_RECIPIENTS=$(grep 'email_recipients' $CONFIG_FILE | cut -d' ' -f3)
SLACK_WEBHOOK=$(grep 'slack_webhook' $CONFIG_FILE | cut -d' ' -f3)

# Initialize the central management database
mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT -e "
CREATE DATABASE IF NOT EXISTS $CENTRAL_DB;
USE $CENTRAL_DB;
CREATE TABLE IF NOT EXISTS transfer_jobs (
    id INT AUTO_INCREMENT PRIMARY KEY,
    job_name VARCHAR(100) NOT NULL,
    job_type ENUM('export', 'import', 'sync', 'backup') NOT NULL,
    config_file VARCHAR(255) NOT NULL,
    script_file VARCHAR(255) NOT NULL,
    schedule VARCHAR(100),
    active BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS transfer_logs (
    id INT AUTO_INCREMENT PRIMARY KEY,
    job_id INT NOT NULL,
    start_time TIMESTAMP NOT NULL,
    end_time TIMESTAMP,
    status ENUM('running', 'completed', 'failed') NOT NULL,
    output TEXT,
    error_message TEXT,
    FOREIGN KEY (job_id) REFERENCES transfer_jobs(id)
);
"

# Function: send notifications
send_notification() {
    local job_name=$1
    local status=$2
    local message=$3

    # Email notification
    if ([ "$status" = "completed" ] && [ "$EMAIL_ON_SUCCESS" = "true" ]) || \
       ([ "$status" = "failed" ] && [ "$EMAIL_ON_FAILURE" = "true" ]); then
        echo "$message" | mail -s "Data Transfer Job $job_name: $status" $EMAIL_RECIPIENTS
    fi

    # Slack notification (the inner quotes must be escaped inside the
    # double-quoted --data string, or the JSON payload is mangled)
    if [ -n "$SLACK_WEBHOOK" ]; then
        local color="good"
        if [ "$status" = "failed" ]; then
            color="danger"
        fi
        curl -X POST -H 'Content-type: application/json' --data "{
            \"attachments\": [
                {
                    \"color\": \"$color\",
                    \"title\": \"Data Transfer Job: $job_name\",
                    \"text\": \"$message\",
                    \"fields\": [
                        { \"title\": \"Status\", \"value\": \"$status\", \"short\": true },
                        { \"title\": \"Time\", \"value\": \"$(date)\", \"short\": true }
                    ]
                }
            ]
        }" $SLACK_WEBHOOK
    fi
}

# Function: run a transfer job
run_transfer_job() {
    local job_id=$1
    local job_name=$2
    local job_type=$3
    local config_file=$4
    local script_file=$5

    echo "Starting job: $job_name (ID: $job_id)"

    # Record the job start
    local log_id=$(mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
        INSERT INTO transfer_logs (job_id, start_time, status)
        VALUES ($job_id, NOW(), 'running');
        SELECT LAST_INSERT_ID();
    " | tail -n1)

    # Run the job
    local temp_log="/tmp/transfer_job_${job_id}_$(date +%Y%m%d_%H%M%S).log"
    local start_time=$(date +%s)
    $script_file --config $config_file > $temp_log 2>&1
    local exit_code=$?
    local end_time=$(date +%s)
    local duration=$((end_time - start_time))

    # Read the captured output
    local output=$(cat $temp_log)

    # Update the job status
    local status="completed"
    local error_message=""
    if [ $exit_code -ne 0 ]; then
        status="failed"
        error_message="Job exited with code $exit_code"
    fi
    mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
        UPDATE transfer_logs
        SET end_time = NOW(), status = '$status',
            output = '$(echo "$output" | sed "s/'/''/g")',
            error_message = '$error_message'
        WHERE id = $log_id;
    "

    # Send notifications
    local notification_message="Job $job_name ($job_type) has $status. Duration: $duration seconds."
    send_notification "$job_name" "$status" "$notification_message"

    # Clean up the temporary log file
    rm -f $temp_log

    echo "Job $job_name completed with status: $status"
}

# Main menu
while true; do
    echo ""
    echo "Central Data Transfer Management System"
    echo "======================================"
    echo "1. List all jobs"
    echo "2. Add a new job"
    echo "3. Run a job"
    echo "4. View job logs"
    echo "5. Exit"
    echo ""
    read -p "Select an option: " option

    case $option in
        1)
            echo ""
            echo "All Transfer Jobs:"
            mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                SELECT id, job_name, job_type, config_file, schedule, active
                FROM transfer_jobs ORDER BY id;
            "
            ;;
        2)
            echo ""
            read -p "Enter job name: " job_name
            read -p "Enter job type (export/import/sync/backup): " job_type
            read -p "Enter config file path: " config_file
            read -p "Enter script file path: " script_file
            read -p "Enter schedule (cron format, optional): " schedule
            mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                INSERT INTO transfer_jobs (job_name, job_type, config_file, script_file, schedule)
                VALUES ('$job_name', '$job_type', '$config_file', '$script_file', '$schedule');
            "
            echo "Job added successfully."
            ;;
        3)
            echo ""
            mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                SELECT id, job_name, job_type, config_file, script_file
                FROM transfer_jobs WHERE active = TRUE ORDER BY id;
            "
            read -p "Enter job ID to run: " job_id
            # Fetch the job details
            job_details=$(mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                SELECT job_name, job_type, config_file, script_file
                FROM transfer_jobs WHERE id = $job_id AND active = TRUE;
            ")
            if [ -z "$job_details" ]; then
                echo "Error: Job not found or inactive."
            else
                job_name=$(echo "$job_details" | tail -n1 | awk '{print $1}')
                job_type=$(echo "$job_details" | tail -n1 | awk '{print $2}')
                config_file=$(echo "$job_details" | tail -n1 | awk '{print $3}')
                script_file=$(echo "$job_details" | tail -n1 | awk '{print $4}')
                run_transfer_job $job_id "$job_name" "$job_type" "$config_file" "$script_file"
            fi
            ;;
        4)
            echo ""
            mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                SELECT l.id, j.job_name, l.start_time, l.end_time, l.status,
                       CASE WHEN l.status = 'running'
                            THEN TIMESTAMPDIFF(SECOND, l.start_time, NOW())
                            ELSE TIMESTAMPDIFF(SECOND, l.start_time, l.end_time)
                       END as duration_seconds
                FROM transfer_logs l
                JOIN transfer_jobs j ON l.job_id = j.id
                ORDER BY l.start_time DESC LIMIT 20;
            "
            read -p "Enter log ID to view details (or 0 to cancel): " log_id
            if [ "$log_id" -gt 0 ]; then
                mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                    SELECT j.job_name, l.start_time, l.end_time, l.status, l.error_message
                    FROM transfer_logs l
                    JOIN transfer_jobs j ON l.job_id = j.id
                    WHERE l.id = $log_id;
                "
                echo ""
                echo "Output:"
                mysql -h $CENTRAL_HOST -u $CENTRAL_USER -p$CENTRAL_PASS -P $CENTRAL_PORT $CENTRAL_DB -e "
                    SELECT output FROM transfer_logs WHERE id = $log_id;
                " | tail -n +2
            fi
            ;;
        5)
            echo "Exiting..."
            exit 0
            ;;
        *)
            echo "Invalid option. Please try again."
            ;;
    esac
done
Integrating with Automated Workflows
Integrating MySQL data transfer into your existing automated workflows can improve efficiency further. Below is an example of integrating with a CI/CD system:
[cicd]
jenkins_url = https://jenkins.example.com
jenkins_user = ci_user
jenkins_api_token = api_token_xyz
jenkins_job = database-migration

[git]
repo_url = https://github.com/example/database-configs.git
repo_branch = main
local_path = /tmp/database-configs

[deployment]
environments = dev,staging,prod
pre_migration_script = /scripts/pre_migration_checks.sh
post_migration_script = /scripts/post_migration_validation.sh
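Jenkins' buildWithParameters endpoint expects its parameters percent-encoded, which is what curl's `--data-urlencode` does. The encoding itself is simple to reproduce when curl is not available; a minimal POSIX sketch (single-byte characters only; the URL and job name are the placeholders from the config above):

```shell
#!/bin/sh
# Percent-encode one argument, character by character
urlencode() {
    string=$1
    out=""
    while [ -n "$string" ]; do
        ch=${string%"${string#?}"}   # first character
        string=${string#?}           # rest of the string
        case $ch in
            [a-zA-Z0-9._~-]) out="$out$ch" ;;           # unreserved: copy as-is
            *) out="$out$(printf '%%%02X' "'$ch")" ;;   # everything else: %XX
        esac
    done
    echo "$out"
}

JENKINS_URL="https://jenkins.example.com"
JENKINS_JOB="database-migration"
body="ENVIRONMENT=$(urlencode staging)&CONFIG_FILE=$(urlencode /tmp/database-configs/staging_import.ini)"
echo "${JENKINS_URL}/job/${JENKINS_JOB}/buildWithParameters"
echo "$body"
```

The `printf '%%%02X' "'$ch"` idiom relies on POSIX printf: an argument starting with a quote yields the character's numeric code.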
The CI/CD integration script:
#!/bin/bash
# cicd_integration.sh

CONFIG_FILE="/path/to/cicd_integration.ini"

# Read parameters from the config file
JENKINS_URL=$(grep 'jenkins_url' $CONFIG_FILE | cut -d' ' -f3)
JENKINS_USER=$(grep 'jenkins_user' $CONFIG_FILE | cut -d' ' -f3)
JENKINS_API_TOKEN=$(grep 'jenkins_api_token' $CONFIG_FILE | cut -d' ' -f3)
JENKINS_JOB=$(grep 'jenkins_job' $CONFIG_FILE | cut -d' ' -f3)
REPO_URL=$(grep 'repo_url' $CONFIG_FILE | cut -d' ' -f3)
REPO_BRANCH=$(grep 'repo_branch' $CONFIG_FILE | cut -d' ' -f3)
LOCAL_PATH=$(grep 'local_path' $CONFIG_FILE | cut -d' ' -f3)
ENVIRONMENTS=$(grep 'environments' $CONFIG_FILE | cut -d' ' -f3)
PRE_MIGRATION_SCRIPT=$(grep 'pre_migration_script' $CONFIG_FILE | cut -d' ' -f3)
POST_MIGRATION_SCRIPT=$(grep 'post_migration_script' $CONFIG_FILE | cut -d' ' -f3)

# Function: trigger the Jenkins job
trigger_jenkins_job() {
    local environment=$1
    local config_file=$2

    echo "Triggering Jenkins job for environment: $environment"

    # Build the Jenkins API URL
    local job_url="${JENKINS_URL}/job/${JENKINS_JOB}/buildWithParameters"

    # Send the request
    local response=$(curl -s -X POST -u "${JENKINS_USER}:${JENKINS_API_TOKEN}" \
        --data-urlencode "ENVIRONMENT=${environment}" \
        --data-urlencode "CONFIG_FILE=${config_file}" \
        "$job_url")

    # Check the response
    if [ -n "$response" ]; then
        echo "Jenkins job triggered successfully."
        echo "Response: $response"
        # Extract the queue ID
        local queue_id=$(echo "$response" | grep -o 'queue/item/[0-9]*' | cut -d'/' -f3)
        if [ -n "$queue_id" ]; then
            echo "Job queued with ID: $queue_id"
            # Look up the build URL
            local job_api_url="${JENKINS_URL}/queue/item/${queue_id}/api/json"
            local job_info=$(curl -s -u "${JENKINS_USER}:${JENKINS_API_TOKEN}" "$job_api_url")
            # The URL is the 10th quote-delimited field of the matched fragment
            local executable_url=$(echo "$job_info" | \
                grep -o '"executable":{"_class":"hudson.model.FreeStyleBuild","url":"[^"]*"' | \
                cut -d'"' -f10)
            if [ -n "$executable_url" ]; then
                echo "Job URL: ${executable_url}"
            fi
        fi
    else
        echo "Failed to trigger Jenkins job."
        return 1
    fi
    return 0
}

# Function: clone the configuration repository
clone_config_repo() {
    echo "Cloning configuration repository..."
    # Clean the local path
    rm -rf $LOCAL_PATH
    # Clone the repository
    git clone -b $REPO_BRANCH $REPO_URL $LOCAL_PATH
    if [ $? -eq 0 ]; then
        echo "Repository cloned successfully."
        return 0
    else
        echo "Failed to clone repository."
        return 1
    fi
}

# Function: run pre-migration checks
run_pre_migration_checks() {
    local environment=$1
    local config_file=$2
    echo "Running pre-migration checks for environment: $environment"
    if [ -f "$PRE_MIGRATION_SCRIPT" ]; then
        $PRE_MIGRATION_SCRIPT --environment $environment --config $config_file
        return $?
    else
        echo "Pre-migration script not found: $PRE_MIGRATION_SCRIPT"
        return 1
    fi
}

# Function: run post-migration validation
run_post_migration_validation() {
    local environment=$1
    local config_file=$2
    echo "Running post-migration validation for environment: $environment"
    if [ -f "$POST_MIGRATION_SCRIPT" ]; then
        $POST_MIGRATION_SCRIPT --environment $environment --config $config_file
        return $?
    else
        echo "Post-migration script not found: $POST_MIGRATION_SCRIPT"
        return 1
    fi
}

# Main menu
while true; do
    echo ""
    echo "CI/CD Integration for Database Migration"
    echo "======================================"
    echo "1. Clone configuration repository"
    echo "2. List available configurations"
    echo "3. Trigger migration for environment"
    echo "4. Run pre-migration checks"
    echo "5. Run post-migration validation"
    echo "6. Exit"
    echo ""
    read -p "Select an option: " option

    case $option in
        1)
            clone_config_repo
            ;;
        2)
            if [ ! -d "$LOCAL_PATH" ]; then
                echo "Configuration repository not cloned. Please clone it first."
                continue
            fi
            echo ""
            echo "Available configurations:"
            find $LOCAL_PATH -name "*.ini" | sort
            ;;
        3)
            if [ ! -d "$LOCAL_PATH" ]; then
                echo "Configuration repository not cloned. Please clone it first."
                continue
            fi
            echo ""
            echo "Available environments: $ENVIRONMENTS"
            read -p "Enter environment: " environment
            # Validate the environment (tr splits the comma-separated
            # list into one environment per line; note '\n', not 'n')
            if ! echo "$ENVIRONMENTS" | tr ',' '\n' | grep -q "^${environment}$"; then
                echo "Error: Invalid environment."
                continue
            fi
            echo ""
            echo "Available configurations:"
            find $LOCAL_PATH -name "*.ini" | sort
            read -p "Enter configuration file path: " config_file
            # Make sure the config file exists
            if [ ! -f "$config_file" ]; then
                echo "Error: Configuration file not found."
                continue
            fi
            # Run the pre-migration checks
            if ! run_pre_migration_checks $environment $config_file; then
                echo "Error: Pre-migration checks failed."
                continue
            fi
            # Trigger the Jenkins job
            if trigger_jenkins_job $environment $config_file; then
                echo "Migration job triggered successfully."
            else
                echo "Error: Failed to trigger migration job."
            fi
            ;;
        4)
            if [ ! -d "$LOCAL_PATH" ]; then
                echo "Configuration repository not cloned. Please clone it first."
                continue
            fi
            echo ""
            echo "Available environments: $ENVIRONMENTS"
            read -p "Enter environment: " environment
            if ! echo "$ENVIRONMENTS" | tr ',' '\n' | grep -q "^${environment}$"; then
                echo "Error: Invalid environment."
                continue
            fi
            echo ""
            echo "Available configurations:"
            find $LOCAL_PATH -name "*.ini" | sort
            read -p "Enter configuration file path: " config_file
            if [ ! -f "$config_file" ]; then
                echo "Error: Configuration file not found."
                continue
            fi
            if run_pre_migration_checks $environment $config_file; then
                echo "Pre-migration checks completed successfully."
            else
                echo "Error: Pre-migration checks failed."
            fi
            ;;
        5)
            if [ ! -d "$LOCAL_PATH" ]; then
                echo "Configuration repository not cloned. Please clone it first."
                continue
            fi
            echo ""
            echo "Available environments: $ENVIRONMENTS"
            read -p "Enter environment: " environment
            if ! echo "$ENVIRONMENTS" | tr ',' '\n' | grep -q "^${environment}$"; then
                echo "Error: Invalid environment."
                continue
            fi
            echo ""
            echo "Available configurations:"
            find $LOCAL_PATH -name "*.ini" | sort
            read -p "Enter configuration file path: " config_file
            if [ ! -f "$config_file" ]; then
                echo "Error: Configuration file not found."
                continue
            fi
            # Run the post-migration validation
            if run_post_migration_validation $environment $config_file; then
                echo "Post-migration validation completed successfully."
            else
                echo "Error: Post-migration validation failed."
            fi
            ;;
        6)
            echo "Exiting..."
            exit 0
            ;;
        *)
            echo "Invalid option. Please try again."
            ;;
    esac
done
Conclusion
This article has shown how INI configuration files make MySQL import/export, synchronization, and backup straightforward, and how to solve the common problems that arise during data migration. We covered everything from basic export and import to complex synchronization strategies, from simple backup schemes to advanced incremental backups, as well as migrating large data volumes and ensuring data consistency.
Key takeaways:
Advantages of INI configuration files: centralizing complex command-line parameters and options in INI files makes operations simpler, repeatable, and less error-prone.
Flexible transfer strategies: depending on business needs, you can choose full backups, incremental backups, one-way synchronization, or two-way synchronization.
Automation and monitoring: combining cron, scripts, and a centralized management system automates the execution and monitoring of transfer jobs and significantly improves efficiency.
Problem prevention and resolution: identifying and addressing common migration problems up front, such as character set mismatches, performance issues with large data volumes, and data consistency, keeps migrations running smoothly.
Integration with existing workflows: plugging MySQL data transfer into existing CI/CD and automation pipelines further improves efficiency and reliability.
By adopting the methods and best practices described here, database administrators and developers can significantly improve the efficiency and reliability of MySQL data transfer, reduce human error, and keep data secure and consistent.