blue-ox.nl From coffee-fueled fruity tech to fast runs—think different, let’s run them.

Building a Backup from a Raspberry Pi-attached USB SSD (Immich Photo Source) to a Synology NAS


In this comprehensive guide, I’ll walk you through creating an advanced backup solution that I’ve implemented for my home photo library. This system combines the convenience of a Raspberry Pi with the reliability of a Synology NAS, featuring intelligent retention policies and a sophisticated trash system for accidental deletion recovery.

System Architecture

Our backup infrastructure maintains multiple layers of protection:

/volume1/photoBackup_test/
├── current/ # Active backup state
├── trash/ # Deleted files (30-day retention)
├── daily/ # Daily snapshots (7 days)
├── weekly/ # Weekly snapshots (4 weeks)
├── monthly/ # Monthly snapshots (12 months)
└── yearly/ # Annual snapshots (5 years)

Key Features

  • Hierarchical backup retention
  • Intelligent trash management
  • File versioning with timestamps
  • Original directory structure preservation
  • Automated cleanup processes
  • Lock mechanism preventing concurrent runs
  • SSH key-based secure transfers
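Hardlinked snapshots are what make the daily/weekly/monthly/yearly tiers affordable: unchanged files in a new snapshot are hard links to the previous one, so each extra snapshot only costs space for files that actually changed. A minimal local sketch of the idea, using cp -al in place of the rsync --link-dest the script uses (all paths are throwaway /tmp stand-ins):

```shell
#!/bin/sh
# Sketch only: hardlinked snapshots with cp -al (GNU coreutils), the same
# space-saving trick the backup script gets from rsync --link-dest.
set -eu
BASE=$(mktemp -d)
mkdir -p "$BASE/current"
echo "photo bytes" > "$BASE/current/photo.jpg"

# Day 1: a full copy.
cp -a "$BASE/current" "$BASE/daily-2025-01-20"

# Day 2: a hardlink copy of day 1 -- unchanged files share an inode,
# so the second "full" snapshot costs almost no extra space.
cp -al "$BASE/daily-2025-01-20" "$BASE/daily-2025-01-21"

# Both snapshot entries point at the same inode:
stat -c %i "$BASE/daily-2025-01-20/photo.jpg" "$BASE/daily-2025-01-21/photo.jpg"
```

Deleting a file from one snapshot never touches the others: the data only disappears once the last hard link is removed, which is exactly why old snapshots keep working as independent restore points.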

The Implementation

#!/bin/bash
set -e
set -u
#set -x

# Configuration
LOG="/home/erik/logs/photo-backup.log"
LOG_MAX_SIZE=10M
LOG_BACKUPS=5
LOCK_TIMEOUT=3600

# ADDED: Configuration for acceptable file count difference
FILE_COUNT_TOLERANCE=150  # Maximum file-count difference between source and target for the backup to still count as a success

# Backup paths
SOURCE="/mnt/Immich-Library"
REMOTE_HOST="nas"  # SSH alias from ~/.ssh/config
REMOTE_PATH="/volume1/photoBackup"

# Retention configuration
DAILY_RETENTION=7
WEEKLY_RETENTION=4
MONTHLY_RETENTION=12
YEARLY_RETENTION=5
TRASH_RETENTION=30

# Directories to exclude from verification
EXCLUDE_DIRS=(
    "encoded-video"
    "thumbs"
    "upload"
    "cache"
    "tmp"
    ".tmp"
    "temp"
    ".temp"
)

# Critical services
CRITICAL_CONTAINERS=(
    "immich_postgres"
    "immich_redis"
    "immich_server"
)

# Performance and space settings
MIN_FREE_SPACE=10  # GB
DRY_RUN=false
NICE_LEVEL=19
IO_CLASS="idle"

# Derived paths
TRASH_DIR="${REMOTE_PATH}/trash"
LOCK_FILE="/tmp/photo_backup.lock"

# Enhanced logging with timestamps and debug info
log_message() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "$timestamp - $1" | tee -a "$LOG"
}

log_debug() {
    if [ "$DRY_RUN" = true ]; then
        log_message "DEBUG: $1"
    fi
}

# Log rotation management
handle_log_rotation() {
    if [ ! -f "$LOG" ]; then
        return 0
    fi

    local log_size
    log_size=$(stat -f%z "$LOG" 2>/dev/null || stat -c%s "$LOG" 2>/dev/null)

    if [ "${log_size:-0}" -gt $((10*1024*1024)) ]; then  # 10M, keep in sync with LOG_MAX_SIZE
        log_message "Log file size exceeds ${LOG_MAX_SIZE}, rotating logs..."

        # Rotate existing backup logs
        for i in $(seq $((LOG_BACKUPS-1)) -1 1); do
            if [ -f "${LOG}.$i" ]; then
                mv "${LOG}.$i" "${LOG}.$((i+1))"
                log_debug "Moved ${LOG}.$i to ${LOG}.$((i+1))"
            fi
        done

        # Move current log to .1
        mv "$LOG" "${LOG}.1"
        touch "$LOG"

        log_message "Log rotation completed"
    fi
}

# Perform initial log rotation check
handle_log_rotation

generate_rsync_excludes() {
    local excludes=""
    for dir in "${EXCLUDE_DIRS[@]}"; do
        excludes="${excludes} --exclude=/${dir}/"
    done
    echo "$excludes"
}

# Helper function to generate find exclude patterns
generate_find_excludes() {
    local excludes=""
    for dir in "${EXCLUDE_DIRS[@]}"; do
        excludes="${excludes} ! -path \"*/${dir}/*\""
    done
    # Add -type f so that only files are counted, not directories
    echo "$excludes -type f"
}

# Helper function for counting files
count_files() {
    local path=$1
    local excludes=$2

    ssh "$REMOTE_HOST" "find \"$path\" $excludes -print0 | tr -dc '\0' | wc -c"
}

# Wake-up attempt function
attempt_wake_up() {
    log_message "Attempting to wake up NAS..."

    # Try a lightweight SSH command to spin up the disks
    ssh "$REMOTE_HOST" "ls" >/dev/null 2>&1

    # Wait 30 seconds to give the disks time to spin up
    sleep 30
}

# Verify SSH connection with retries
verify_ssh_connection() {
    log_message "Verifying SSH connection..."

    local max_attempts=3
    local attempt=1

    while [ $attempt -le $max_attempts ]; do
        if ssh "$REMOTE_HOST" "exit" 2>/dev/null; then
            log_debug "SSH connection verified successfully"
            return 0
        fi

        log_message "SSH connection attempt $attempt failed, trying to wake up NAS..."
        attempt_wake_up

        attempt=$((attempt + 1))

        if [ $attempt -le $max_attempts ]; then
            log_message "Retrying SSH connection in 30 seconds... (attempt $attempt of $max_attempts)"
            sleep 30
        fi
    done

    log_message "ERROR: Cannot establish SSH connection after $max_attempts attempts"
    return 1
}

# Handle services
handle_services() {
    local action=$1
    local failed=0
    log_message "Handling critical services and containers (action: $action)"

    if command -v docker >/dev/null 2>&1; then
        for container in "${CRITICAL_CONTAINERS[@]}"; do
            # docker ps exits 0 even with no match, so test for non-empty output
            if [ -n "$(docker ps -q -f name="$container")" ]; then
                if [ "$action" = "stop" ]; then
                    if ! docker inspect --format '{{.State.Paused}}' "$container" | grep -q "true"; then
                        log_message "Pausing container: $container"
                        docker pause "$container" >/dev/null || failed=1
                    fi
                else
                    if docker inspect --format '{{.State.Paused}}' "$container" | grep -q "true"; then
                        log_message "Unpausing container: $container"
                        docker unpause "$container" >/dev/null || failed=1
                    fi
                fi
            fi
        done
    fi

    return $failed
}

ensure_nas_ready() {
    log_message "Ensuring NAS is fully awake..."

    # Run a series of lightweight commands to keep the NAS active
    ssh "$REMOTE_HOST" "
        ls ${REMOTE_PATH} >/dev/null 2>&1
        df -h ${REMOTE_PATH} >/dev/null 2>&1
        find ${REMOTE_PATH}/current -maxdepth 1 >/dev/null 2>&1
    "

    # Allow extra time for the NAS to wake up fully
    sleep 15
}

# Space check
check_free_space() {
    log_message "Checking free space..."
    local max_attempts=3
    local attempt=1

    while [ $attempt -le $max_attempts ]; do
        local free_space

        free_space=$(ssh "$REMOTE_HOST" "df -BG '${REMOTE_PATH}' | awk 'NR==2 {gsub(\"G\",\"\"); print \$4}'" 2>&1)

        if [[ "$free_space" =~ ^[0-9]+$ ]]; then
            if [ "$free_space" -lt "$MIN_FREE_SPACE" ]; then
                log_message "ERROR: Insufficient space (${free_space}GB free, ${MIN_FREE_SPACE}GB required)"
                return 1
            fi
            log_message "Space check passed (${free_space}GB available)"
            return 0
        fi

        log_message "Failed to get free space, attempting to wake up NAS..."
        attempt_wake_up

        attempt=$((attempt + 1))
        [ $attempt -le $max_attempts ] && sleep 30
    done

    log_message "ERROR: Could not verify free space after $max_attempts attempts"
    return 1
}

# Lock management
check_lock() {
    if [ -f "$LOCK_FILE" ]; then
        local pid
        pid=$(cat "$LOCK_FILE" 2>/dev/null || echo "")
        if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
            local lock_age
            lock_age=$(($(date +%s) - $(stat -c %Y "$LOCK_FILE")))
            if [ "$lock_age" -gt "$LOCK_TIMEOUT" ]; then
                log_message "Lock file is stale (age: ${lock_age}s). Removing."
                rm -f "$LOCK_FILE"
            else
                log_message "Another backup process is running (PID: $pid)"
                exit 1
            fi
        fi
    fi
    echo $$ > "$LOCK_FILE"
}

# Cleanup function
cleanup() {
    handle_services start
    rm -f "$LOCK_FILE"
    log_message "Cleanup completed"
}

# Initialize backup structure
initialize_backup_structure() {
    log_message "Initializing backup directory structure..."

    local init_command="mkdir -p '${REMOTE_PATH}/current' '${REMOTE_PATH}/daily' '${REMOTE_PATH}/weekly' '${REMOTE_PATH}/monthly' '${REMOTE_PATH}/yearly' '${TRASH_DIR}'"
    log_debug "Executing: $init_command"

    if ! ssh "$REMOTE_HOST" "$init_command" 2>&1; then
        log_message "ERROR: Failed to initialize backup structure"
        return 1
    fi

    # Verify directories were created
    local verify_command="ls -la '${REMOTE_PATH}'"
    log_debug "Verifying directories: $verify_command"
    local dir_listing
    dir_listing=$(ssh "$REMOTE_HOST" "$verify_command" 2>&1) || {
        log_message "ERROR: Failed to verify backup structure"
        return 1
    }
    log_debug "Directory structure:\n$dir_listing"

    return 0
}

# Perform backup
perform_backup() {
    log_message "Starting backup process..."

    if ! initialize_backup_structure; then
        return 1
    fi

    # Get exclude parameters for rsync
    local excludes
    excludes=$(generate_rsync_excludes)

    # Make sure the NAS is fully awake before we start
    ensure_nas_ready

    # Back up the external directory with --prune-empty-dirs
    log_message "Executing rsync command for external: rsync -rtv --progress --prune-empty-dirs ${excludes} --rsync-path=/usr/bin/rsync '${SOURCE}/external/' '${REMOTE_HOST}:${REMOTE_PATH}/current/external/'"
    if ! rsync -rtv --progress --prune-empty-dirs ${excludes} --rsync-path=/usr/bin/rsync "${SOURCE}/external/" "${REMOTE_HOST}:${REMOTE_PATH}/current/external/"; then
        log_message "ERROR: rsync failed for external"
        return 1
    fi

    # Back up the internal directory with --prune-empty-dirs
    log_message "Executing rsync command for internal: rsync -rtv --progress --prune-empty-dirs ${excludes} --rsync-path=/usr/bin/rsync '${SOURCE}/internal/' '${REMOTE_HOST}:${REMOTE_PATH}/current/internal/'"
    if ! rsync -rtv --progress --prune-empty-dirs ${excludes} --rsync-path=/usr/bin/rsync "${SOURCE}/internal/" "${REMOTE_HOST}:${REMOTE_PATH}/current/internal/"; then
        log_message "ERROR: rsync failed for internal"
        return 1
    fi

    # Back up the api_keys directory (usually no excludes needed here)
    log_message "Executing rsync command for api_keys: rsync -rtv --progress --rsync-path=/usr/bin/rsync '${SOURCE}/api_keys/' '${REMOTE_HOST}:${REMOTE_PATH}/current/api_keys/'"
    if ! rsync -rtv --progress --rsync-path=/usr/bin/rsync "${SOURCE}/api_keys/" "${REMOTE_HOST}:${REMOTE_PATH}/current/api_keys/"; then
        log_message "ERROR: rsync failed for api_keys"
        return 1
    fi

    log_message "rsync completed successfully"
    return 0
}

# Create snapshot
create_snapshot() {
    local target_dir="$1"
    local retention="$2"
    local snapshot_type="$3"

    log_message "Creating ${snapshot_type} snapshot in ${target_dir}"

    # Make sure the NAS is awake
    ensure_nas_ready

    # Run rsync on the remote host itself for a local-to-local copy
    if ! ssh "$REMOTE_HOST" "
        rsync -a \
              -H \
              -x \
              --delete \
              --numeric-ids \
              --inplace \
              --partial \
              --modify-window=1 \
              --link-dest=\"\$(realpath '${REMOTE_PATH}/current')\" \
              '${REMOTE_PATH}/current/' \
              '${target_dir}/'
    "; then
        log_message "ERROR: Failed to create ${snapshot_type} snapshot"
        return 1
    fi

    # Verify the snapshot after creation
    local excludes
    excludes=$(generate_find_excludes)

    local snapshot_count
    local current_count
    if snapshot_count=$(ssh "$REMOTE_HOST" "eval 'find \"${target_dir}\" -type f ${excludes} | wc -l'") && \
       current_count=$(ssh "$REMOTE_HOST" "eval 'find \"${REMOTE_PATH}/current\" -type f ${excludes} | wc -l'"); then
        local diff=$((snapshot_count > current_count ? snapshot_count - current_count : current_count - snapshot_count))
        if [ "$diff" -gt "$FILE_COUNT_TOLERANCE" ]; then
            log_message "WARNING: ${snapshot_type^} snapshot file count difference ($diff) exceeds tolerance of $FILE_COUNT_TOLERANCE files"
        else
            log_message "${snapshot_type^} snapshot file count difference ($diff) within tolerance of $FILE_COUNT_TOLERANCE files"
        fi
    fi

    # Cleanup old snapshots
    local cleanup_command="cd \"\$(dirname '${target_dir}')\" && ls -1t | tail -n +$((retention + 1)) | xargs -r rm -rf"
    log_debug "Executing cleanup command: $cleanup_command"

    if ! ssh "$REMOTE_HOST" "$cleanup_command" 2>&1; then
        log_message "ERROR: Failed to cleanup old ${snapshot_type} snapshots"
        return 1
    fi

    return 0
}

# Backup rotation
rotate_backups() {
    [ "$DRY_RUN" = true ] && return 0

    local current_date=$(date +%Y-%m-%d)
    local day_of_week=$(date +%u)
    local day_of_month=$(date +%d)
    local month=$(date +%m)

    log_message "Starting backup rotation..."

    # Daily backup
    if ! create_snapshot "${REMOTE_PATH}/daily/${current_date}" "$DAILY_RETENTION" "daily"; then
        return 1
    fi

    # Weekly backup (Sunday)
    if [ "$day_of_week" = "7" ]; then
        local week=$(date +%Y-W%V)
        if ! create_snapshot "${REMOTE_PATH}/weekly/${week}" "$WEEKLY_RETENTION" "weekly"; then
            return 1
        fi
    fi

    # Monthly backup (1st of month)
    if [ "$day_of_month" = "01" ]; then
        local month_dir=$(date +%Y-%m)
        if ! create_snapshot "${REMOTE_PATH}/monthly/${month_dir}" "$MONTHLY_RETENTION" "monthly"; then
            return 1
        fi
    fi

    # Yearly backup (January 1st)
    if [ "$day_of_month" = "01" ] && [ "$month" = "01" ]; then
        local year=$(date +%Y)
        if ! create_snapshot "${REMOTE_PATH}/yearly/${year}" "$YEARLY_RETENTION" "yearly"; then
            return 1
        fi
    fi

    log_message "Backup rotation completed successfully"
    return 0
}

# Verify the current backup against the source
verify_current_backup() {
    local excludes=$1

    log_message "Verifying current backup against source..."

    # Count source files
    log_debug "Counting source files..."
    if ! src_count=$(eval "find \"$SOURCE\" -type f ${excludes} | wc -l"); then
        log_message "ERROR: Failed to count source files"
        return 1
    fi

    # Count destination files
    log_debug "Counting destination files..."
    if ! dst_count=$(ssh "$REMOTE_HOST" "eval 'find \"${REMOTE_PATH}/current\" -type f ${excludes} | wc -l'"); then
        log_message "ERROR: Failed to count destination files"
        return 1
    fi

    log_debug "Source files: $src_count"
    log_debug "Destination files: $dst_count"

    # Compare file counts, allowing the configured tolerance
    local diff=$((src_count > dst_count ? src_count - dst_count : dst_count - src_count))
    if [ "$diff" -gt "$FILE_COUNT_TOLERANCE" ]; then
        log_message "ERROR: Current backup file count mismatch exceeds tolerance - Source: $src_count, Destination: $dst_count (diff: $diff, tolerance: $FILE_COUNT_TOLERANCE)"
        return 1
    fi

    log_message "Current backup verified successfully (difference: $diff files)"
    return 0
}

# Verify hardlinks in a snapshot
verify_snapshot_hardlinks() {
    local snapshot_path=$1
    local excludes=$2

    log_message "Verifying hardlinks in snapshot: ${snapshot_path}"

    # Count unique inodes in current and in the snapshot
    local current_inodes snapshot_inodes
    if ! current_inodes=$(ssh "$REMOTE_HOST" "eval 'find \"${REMOTE_PATH}/current\" -type f ${excludes} -printf \"%i\n\" | sort -u | wc -l'") || \
       ! snapshot_inodes=$(ssh "$REMOTE_HOST" "eval 'find \"${snapshot_path}\" -type f ${excludes} -printf \"%i\n\" | sort -u | wc -l'"); then
        log_message "ERROR: Failed to count inodes"
        return 1
    fi

    log_debug "Current unique inodes: $current_inodes"
    log_debug "Snapshot unique inodes: $snapshot_inodes"

    # Check that hardlinks are correct (allow a 10% difference for new/removed files)
    if [ "$snapshot_inodes" -gt "$((current_inodes + current_inodes / 10))" ]; then
        log_message "ERROR: Too many unique inodes in snapshot (hardlink verification failed)"
        return 1
    fi

    log_message "Snapshot hardlinks verified successfully"
    return 0
}

# Trash cleanup
cleanup_trash() {
    [ "$DRY_RUN" = true ] && return 0

    log_message "Starting trash cleanup..."

    local cutoff_date=$(date -d "${TRASH_RETENTION} days ago" +%Y%m%d)

    if ! ssh "$REMOTE_HOST" "
        find '${TRASH_DIR}' -type f -name '*.del.*' | while read -r file; do
            deleted_date=\$(echo \"\$file\" | grep -o '[0-9]\{8\}$')
            if [ -n \"\$deleted_date\" ] && [ \"\$deleted_date\" -lt \"$cutoff_date\" ]; then
                rm -f \"\$file\"
            fi
        done

        find '${TRASH_DIR}' -type d -empty -delete
    "; then
        log_message "ERROR: Trash cleanup failed"
        return 1
    fi

    log_message "Trash cleanup completed"
    return 0
}

# Main execution
main() {
    log_message "Starting backup process"
    check_lock
    trap cleanup EXIT INT TERM

    if ! verify_ssh_connection; then
        log_message "ERROR: SSH connection failed"
        exit 1
    fi

    ensure_nas_ready

    if ! check_free_space; then
        exit 1
    fi

    if ! handle_services stop; then
        log_message "Failed to stop services"
        handle_services start
        exit 1
    fi

    # Run the backup
    if ! perform_backup; then
        log_message "ERROR: Backup failed"
        exit 1
    fi

    # Verify the current backup against the source
    local excludes
    excludes=$(generate_find_excludes)
    if ! verify_current_backup "$excludes"; then
        log_message "ERROR: Current backup verification failed"
        exit 1
    fi

    # Create snapshots once the backup is good
    if ! rotate_backups; then
        log_message "ERROR: Backup rotation failed"
        exit 1
    fi

    # Verify the new snapshots
    local latest_daily
    latest_daily=$(ssh "$REMOTE_HOST" "ls -1t '${REMOTE_PATH}/daily' 2>/dev/null | head -n1")
    if [ -n "$latest_daily" ]; then
        if ! verify_snapshot_hardlinks "${REMOTE_PATH}/daily/${latest_daily}" "$excludes"; then
            log_message "ERROR: Daily snapshot verification failed"
            exit 1
        fi
    fi

    # Clean up trash once everything has succeeded
    if ! cleanup_trash; then
        log_message "ERROR: Trash cleanup failed"
        exit 1
    fi

    log_message "Backup process completed successfully"
}

# Execute main
if [ "$DRY_RUN" = true ]; then
    log_message "Performing dry run..."
fi
main
exit 0

Understanding the Backup Lifecycle

When a file is deleted, it follows this lifecycle:

  1. Removed from current
  2. Moved to trash/ with a datestamp
  3. Remains in existing snapshots
  4. After 30 days, removed from trash
  5. Continues to exist in historical snapshots
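The posted script only shows the cleanup side of the trash (the .del.YYYYMMDD suffix that cleanup_trash() expires by date); the move in step 2 can be sketched as follows. This is a reconstruction under that naming assumption, with throwaway /tmp paths standing in for the NAS volume:

```shell
#!/bin/sh
# Sketch only: park a "deleted" file in trash/ with a datestamp suffix,
# preserving its original directory path so it can be restored in place.
set -eu
BASE=$(mktemp -d)
mkdir -p "$BASE/current/Vacation2024" "$BASE/trash"
echo "img" > "$BASE/current/Vacation2024/photo123.jpg"

rel="Vacation2024/photo123.jpg"
stamp=$(date +%Y%m%d)

# Recreate the original directory structure under trash/ and move the file,
# tagging it with the deletion date so cleanup can expire it later.
mkdir -p "$BASE/trash/$(dirname "$rel")"
mv "$BASE/current/$rel" "$BASE/trash/${rel}.del.${stamp}"
ls "$BASE/trash/Vacation2024"
```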

File Recovery Options

To recover files, you have multiple paths depending on the deletion timeframe:
For recent deletions (within 30 days):

./backup_script.sh restore "Vacation2024/photo123.jpg"

For older files, check historical snapshots:

# List available daily backups
ls -lt /volume1/photoBackup_test/daily/

# Restore from a specific backup
cp -p /volume1/photoBackup_test/daily/2025-01-20/path/to/photo.jpg /volume1/photoBackup_test/current/path/to/
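Files in trash/ carry the .del.YYYYMMDD suffix, so a manual restore needs to strip it. A hedged sketch (paths are /tmp stand-ins, and the suffix convention is taken from the cleanup_trash() pattern in the script):

```shell
#!/bin/sh
# Sketch only: recover a file from trash/, dropping the .del.YYYYMMDD
# suffix to restore its original name in current/.
set -eu
BASE=$(mktemp -d)
mkdir -p "$BASE/trash/Vacation2024" "$BASE/current/Vacation2024"
echo "img" > "$BASE/trash/Vacation2024/photo123.jpg.del.20250115"

trashed="$BASE/trash/Vacation2024/photo123.jpg.del.20250115"
# ${var%.del.*} drops the shortest trailing ".del.<stamp>" match.
restored="$BASE/current/Vacation2024/$(basename "${trashed%.del.*}")"
cp -p "$trashed" "$restored"
ls "$BASE/current/Vacation2024"
```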

Automation with Systemd

For automated execution, create these systemd service files:

sudo nano /etc/systemd/system/photo-backup.service
# /etc/systemd/system/photo-backup.service
[Unit]
Description=Photo Backup Service
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/home/erik/photoBackup-ssh-test.sh
User=erik
WorkingDirectory=/home/erik
# Add error handling
StandardOutput=journal
StandardError=journal
# Add timeout protection
TimeoutStartSec=1hour

sudo nano /etc/systemd/system/photo-backup.timer

# /etc/systemd/system/photo-backup.timer
[Unit]
Description=Run Photo Backup periodically

[Timer]
OnBootSec=15min
OnUnitActiveSec=1h
AccuracySec=5min
# Randomized delay to prevent resource conflicts (systemd does not allow inline comments)
RandomizedDelaySec=30s

[Install]
WantedBy=timers.target

Enable the service:

sudo systemctl daemon-reload
sudo systemctl enable photo-backup.timer
sudo systemctl start photo-backup.timer

Verify the status

systemctl status photo-backup.service
systemctl status photo-backup.timer

Performance Considerations

The script includes several optimizations:

  • --partial for resumable transfers
  • --progress for transfer monitoring
  • Hardlinks for efficient storage
  • Lock file mechanism preventing concurrent runs
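One gap worth noting: the script declares NICE_LEVEL=19 and IO_CLASS="idle" but never applies them. A hedged sketch of how they could be wired in front of the rsync calls (ionice is part of util-linux and may be absent on some systems, hence the fallback):

```shell
#!/bin/sh
# Sketch only: applying NICE_LEVEL / IO_CLASS, which the posted script
# declares but never uses. Falls back to plain nice if ionice is missing.
set -eu
NICE_LEVEL=19
IO_CLASS="idle"

if command -v ionice >/dev/null 2>&1; then
    RUNNER="nice -n $NICE_LEVEL ionice -c $IO_CLASS"
else
    RUNNER="nice -n $NICE_LEVEL"
fi

# In the real script this would prefix each rsync invocation, e.g.:
#   $RUNNER rsync -rtv --progress ... "$SOURCE/external/" ...
$RUNNER echo "transfer would run at lowest CPU (and, where supported, I/O) priority"
```

This keeps heavy transfers from starving Immich itself on the Raspberry Pi, which matters more than raw throughput for an hourly background job.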

Conclusion

This backup solution provides a robust framework for protecting your photo library. The combination of frequent backups, intelligent retention policies, and easy recovery options ensures your precious memories are safe while maintaining storage efficiency.

The script can be further customized to your specific needs by adjusting retention periods or adding notification systems. Remember to regularly test your backup restoration process to ensure everything works as expected.

Why SSH with ED25519 Keys Instead of NFS?

When designing this backup system, I deliberately chose SSH with ED25519 keys over NFS mounting, a decision that significantly impacts both security and operational efficiency. Let’s explore the trade-offs:

Advantages of SSH-based Transfers

On-Demand Connectivity

  • The NAS is only accessible during actual backup operations
  • No permanent mount points that could be compromised
  • Reduced attack surface for your network

Enhanced Security

  • Modern ED25519 keys provide strong cryptographic security
  • No need to expose NFS ports or handle complex NFS security
  • Built-in encryption for all transfers
  • Fine-grained access control per key

Cross-Platform Reliability

  • Works consistently across different NAS operating systems
  • No NFS version compatibility issues
  • More reliable across network interruptions

Trade-offs to Consider

Performance Impact

  • Encryption/decryption overhead during transfers
  • Slightly slower than direct NFS mounts
  • Additional CPU usage on both ends

Complexity

  • Requires key management infrastructure
  • More initial setup work
  • Need to handle SSH connection timeouts

Memory Usage

  • Each SSH connection maintains its own session
  • Multiple concurrent backups need separate connections

Despite these trade-offs, the security benefits and operational isolation make SSH the preferred choice for this backup system. The performance impact is minimal for photo backup scenarios, where transfer speeds are rarely the bottleneck.

Setting Up SSH Infrastructure (Optional)

If you need to set up the SSH key infrastructure, here’s how:

Generate ED25519 Key Pair

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_nas -C "photo-backup"

Configure Synology NAS

# Copy key to NAS

ssh-copy-id -i ~/.ssh/id_ed25519_nas.pub -p 2222 Erik@192.168.1.3

Test connection

ssh -i ~/.ssh/id_ed25519_nas -p 2222 Erik@192.168.1.3 "echo 'Connection successful'"

Note

Secure SSH configuration: add these lines to your Raspberry Pi's SSH config (~/.ssh/config); make sure the Host alias matches the REMOTE_HOST value used by the backup script:

Host nas
    HostName 192.168.1.3
    Port 2222
    User Erik
    IdentityFile ~/.ssh/id_ed25519_nas
    IdentitiesOnly yes
    ConnectTimeout 30
    ServerAliveInterval 60
    ServerAliveCountMax 3

Restrict NAS access: on your Synology, edit /etc/ssh/sshd_config:

Match User Erik
    AuthorizedKeysFile .ssh/authorized_keys
    PasswordAuthentication no
    PermitRootLogin no
    AllowTcpForwarding no
    X11Forwarding no
    ForceCommand internal-sftp

These SSH security measures ensure that even if a key is compromised, the damage is limited to file transfers, with no shell access possible. One caveat: ForceCommand internal-sftp restricts the account to SFTP only, which would also block the plain SSH commands (find, df, mkdir) and rsync transfers the backup script issues, so apply this Match block only to accounts or keys that the script itself does not use.
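If the same account must keep serving rsync transfers, a per-key restriction in ~/.ssh/authorized_keys is an alternative to a blanket ForceCommand. The rrsync helper ships with rsync (its install path varies per system), and the key material below is shortened for display; note that the posted script also runs find/df/mkdir over SSH, so this restriction would only suit a reworked, rsync-only variant:

```
# One line in ~/.ssh/authorized_keys on the NAS (sketch; key shortened):
command="/usr/bin/rrsync /volume1/photoBackup",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA... photo-backup
```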


This is the folder structure I use at my photo/video source library

/Family                              # library owner: whose media?
└── YYYY                             # year the media was created: when was the event
    └── YYYY-MM                      # month within the year
        └── YYYYMMDD_photo-name_00X  # photo name, numbered starting at 1 within each new folder
/User-1                              # library owner: whose media?
└── YYYY
    └── YYYY-MM
        └── YYYYMMDD_photo-name_00X

Summarized, it looks like /Family/YYYY/YYYY-MM/YYYYMMDD_photo-name_00X
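The naming scheme above can be generated mechanically from a date. A minimal sketch, using the current date, an invented owner and photo name, and a /tmp stand-in for the library root:

```shell
#!/bin/sh
# Sketch only: build the /Owner/YYYY/YYYY-MM/YYYYMMDD_name_00X layout.
# Owner and photo name are illustrative; a real run would derive the
# date from the media's metadata or modification time.
set -eu
LIB=$(mktemp -d)
owner="Family"

y=$(date +%Y)       # year folder
ym=$(date +%Y-%m)   # month folder
d=$(date +%Y%m%d)   # filename date prefix

mkdir -p "$LIB/$owner/$y/$ym"
: > "$LIB/$owner/$y/$ym/${d}_beach-trip_001.jpg"
find "$LIB" -type f
```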

The backup folder structure I want to realize on my Synology NAS,

/mnt/photoBackup/
├── [source structure]    # the actual backup
└── _deleted/             # deleted files
    └── [source structure]

Reload the systemd configuration

sudo systemctl daemon-reload

Enable the service to start at boot

sudo systemctl enable photo-backup.service

Start the service

sudo systemctl start photo-backup.service

Check the status

sudo systemctl status photo-backup.service

sources (references) & credits

Basically all input and information came from the websites below, so credits and thanks go to those content creators and subject-matter experts. The main reason I copy/paste their content here is to guarantee I have a backup for myself, and because I had to change and adapt things multiple times. Archiving the scripts exactly as I executed them successfully is important to me.

https://claude.ai/

About the author

By Erik