The HyperExport TUI now includes comprehensive cloud storage support, allowing you to export VMs and automatically upload them to Amazon S3, Azure Blob Storage, Google Cloud Storage, or SFTP servers directly from the interactive interface.
# Start interactive TUI
hyperexport --interactive
# Or use the alias
hyperexport --tui
Navigate the VM list using keyboard shortcuts:
↑/k Move up
↓/j Move down
Space Select/deselect VM
Enter Continue to confirmation
u Configure cloud upload (shortcut)
Quick Filters:
1 Powered ON VMs only
2 Powered OFF VMs only
3 Linux VMs
4 Windows VMs
5 High CPU (8+ cores)
6 High Memory (16GB+)
7 Large Storage (500GB+)
Press 'u' to open the cloud provider selection screen.
☁️ Cloud Storage Provider
Select a cloud storage provider for backup:
▶ 💾 Skip Cloud Upload
Export to local storage only
☁️ Amazon S3
AWS S3 or S3-compatible storage
🔷 Azure Blob Storage
Microsoft Azure Blob Storage
🌩️ Google Cloud Storage
Google Cloud Platform Storage
🔐 SFTP Server
Secure File Transfer Protocol
⚙️ Upload Options
s: Stream upload (no local copy): ✗
l: Keep local copy: ✓
↑/↓: Navigate | Enter: Select | Esc: Back | q: Quit
Options:
- Stream upload: uploads data directly during export without writing a local copy
- Keep local copy: retains the exported files on local storage after the upload completes
The TUI will guide you through entering credentials step-by-step.
🔧 Configure Amazon S3
S3 Bucket Name:
Enter the S3 bucket name (without s3:// prefix)
my-vm-backups▌
Example: my-backup-bucket
Step 1 of 5
Required Information:
- S3 Bucket Name (e.g., my-vm-backups)
- AWS Region (e.g., us-east-1, eu-west-1)
- Access Key ID (e.g., AKIAIOSFODNN7EXAMPLE)
- Secret Access Key
- Key Prefix (e.g., prod/vms)
Environment Variables:
# Alternatively, set credentials via environment:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="us-east-1"
🔧 Configure Azure Blob Storage
Container Name:
Enter the Azure container name
vm-backups▌
Example: vm-backups
Step 1 of 4
Required Information:
- Container Name (e.g., vm-backups)
- Storage Account Name
- Account Key
- Blob Prefix (e.g., exports/prod)
Environment Variables:
export AZURE_STORAGE_ACCOUNT="mystorageaccount"
export AZURE_STORAGE_KEY="your-account-key"
🔧 Configure Google Cloud Storage
GCS Bucket Name:
Enter the Google Cloud Storage bucket name
my-gcs-bucket▌
Example: my-gcs-bucket
Step 1 of 2
Required Information:
- GCS Bucket Name (e.g., my-gcs-bucket)
- Object Prefix (e.g., vm-exports)
Service Account Authentication:
# Set service account credentials:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account-key.json"
🔧 Configure SFTP Server
SFTP Host:
Enter the SFTP server hostname or IP
sftp.example.com▌
Example: sftp.example.com
Step 1 of 5
Required Information:
- SFTP Host (e.g., sftp.example.com)
- Port (default: 22)
- Username
- Password or SSH key
- Remote Path
Key-Based Authentication:
- Default key path: ~/.ssh/id_rsa
- Or supply a key explicitly with the --keyfile flag

Review your selection and cloud configuration before proceeding.
📋 Confirm Export
📦 web-server-01 | 4 CPU | 8.0 GB | 100.0G
📦 db-server-01 | 8 CPU | 16.0 GB | 500.0G
📊 Summary
VMs: 2 | CPUs: 12 | Memory: 24.0 GB | Storage: 600.0G
☁️ Cloud Upload
✓ Provider: s3 | Bucket: my-vm-backups | Prefix: prod/vms
✓ Disk space OK: 2.5T available
y/Y/Enter: Start export | u: Cloud upload | n/Esc: Go back | q: Quit
If cloud upload is not configured, you'll see:
☁️ Cloud upload: Not configured (press 'u' to configure)
Monitor real-time progress during export and upload.
Local Export Phase:
📦 Exporting VMs
web-server-01
[██████████████████████████░░░░░░░░░░░░░░] 65.3%
85.3 GB / 130.6 GB
Speed: 125.4 MB/s
Files: 12 / 18
Elapsed: 11m 23s
Export in progress... Press q to cancel
Cloud Upload Phase:
☁️ Uploading to Cloud
Uploading: web-server-01
[██████████████████████████████░░░░░░░░░░] 75.0%
98.0 GB / 130.6 GB
Speed: 45.2 MB/s
Files: 14 / 18
Upload in progress... Press q to cancel
✅ Export complete!
Local: /exports/web-server-01
Cloud: s3://my-vm-backups/prod/vms/web-server-01
Press q to quit
Browse and download previously uploaded exports.
# Launch cloud browser (future feature)
hyperexport --browse-cloud s3://my-bucket/exports
Browser Interface:
☁️ Cloud Storage Browser - Amazon S3
Found 15 files:
▶ 📄 web-server-01/web-server-01.ovf 2.5 GB 2026-01-20 14:30
  📄 web-server-01/web-server-01-disk1.vmdk 125.0 GB 2026-01-20 14:30
  📄 db-server-01/db-server-01.ovf 3.2 GB 2026-01-19 09:15
  📄 db-server-01/db-server-01-disk1.vmdk 500.0 GB 2026-01-19 09:15
↑/↓: Navigate | Enter/d: Download | x: Delete | r: Refresh | Esc: Back | q: Quit
Save export configurations including cloud settings.
# Create profile with cloud upload
hyperexport --save-profile prod-backup \
--provider vsphere \
--format ova \
--compress \
--upload s3://my-bucket/prod \
--stream-upload
# Use saved profile
hyperexport --interactive --profile prod-backup
Export multiple VMs and upload to cloud in one operation.
# Using batch file
cat vms.txt
web-server-01
web-server-02
db-server-01
hyperexport --batch vms.txt \
--upload s3://my-bucket/weekly-backup \
--parallel 4
Export directly to cloud without local storage:
# Stream mode (no local copy)
hyperexport --interactive \
--upload s3://my-bucket/backups \
--stream-upload
Benefits:
- No local disk space needed for the export
- Single pass: data is uploaded as it is exported
- Reduced disk I/O on the export host
Considerations:
- No local copy remains for fast restores
- A failed upload requires re-running the export
- Throughput is limited by network bandwidth
# Create the S3 bucket
aws s3 mb s3://my-vm-backups --region us-east-1
IAM policy granting the required bucket access (save as policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::my-vm-backups",
"arn:aws:s3:::my-vm-backups/*"
]
}
]
}
# Create IAM user
aws iam create-user --user-name hyperexport
# Attach policy
aws iam put-user-policy --user-name hyperexport \
--policy-name S3Access --policy-document file://policy.json
# Generate access keys
aws iam create-access-key --user-name hyperexport
# Create the storage account
az storage account create \
--name mystorageaccount \
--resource-group myresourcegroup \
--location eastus \
--sku Standard_LRS
# Create the blob container
az storage container create \
--name vm-backups \
--account-name mystorageaccount
# Retrieve the account keys
az storage account keys list \
--account-name mystorageaccount \
--resource-group myresourcegroup
# Create the GCS bucket
gsutil mb -l us-east1 gs://my-gcs-bucket/
# Create service account
gcloud iam service-accounts create hyperexport \
--display-name="HyperExport Service Account"
# Grant permissions
gsutil iam ch serviceAccount:hyperexport@PROJECT_ID.iam.gserviceaccount.com:objectAdmin \
gs://my-gcs-bucket
# Generate key file
gcloud iam service-accounts keys create key.json \
--iam-account=hyperexport@PROJECT_ID.iam.gserviceaccount.com
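With the key file in place, point the environment variable mentioned earlier at it:
# Point hyperexport's GCS authentication at the freshly created key file
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/key.json"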
# Generate SSH key pair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/hyperexport_key
# Copy public key to SFTP server
ssh-copy-id -i ~/.ssh/hyperexport_key.pub user@sftp.example.com
# Use with hyperexport
hyperexport --interactive --keyfile ~/.ssh/hyperexport_key
↑/k Move cursor up
↓/j Move cursor down
Space Select/deselect VM
Enter Continue to confirmation
u/U Configure cloud upload
a Select all visible VMs
n Deselect all
A Regex pattern selection
1-7 Quick filters
t/T Export templates
s Cycle sort mode
c Clear all filters
h/? Toggle help
q Quit
Esc Go back
↑/k Navigate up
↓/j Navigate down
Enter Select provider
s Toggle stream upload
l Toggle keep local copy
Esc Back to VM selection
q Quit
Type Enter characters
Backspace Delete last character
Enter Continue to next field
Esc Back to provider selection
q Quit
y/Y/Enter Start export
u/U Configure cloud upload
n/Esc Go back to VM selection
q Quit
↑/k Navigate up
↓/j Navigate down
Enter/d Download selected file
x/Del Delete selected file
r Refresh file list
Esc Exit browser
q Quit
S3: "InvalidAccessKeyId"
Check:
- Access key ID is correct
- Secret access key matches
- IAM user has necessary permissions
- Region is correct
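To rule out HyperExport itself, you can verify the same credentials with the AWS CLI first, for example:
# Confirm the credentials resolve to the expected IAM identity
aws sts get-caller-identity
# Confirm the identity can list the target bucket
aws s3 ls s3://my-vm-backups --region us-east-1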
Azure: "AuthenticationFailed"
Check:
- Storage account name is correct
- Account key is valid
- Container exists
- Network connectivity to Azure
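A quick way to test the account name and key outside HyperExport:
# Verify the key can reach the container
az storage container show \
--name vm-backups \
--account-name mystorageaccount \
--account-key "$AZURE_STORAGE_KEY"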
GCS: "PermissionDenied"
Check:
- Service account JSON file path
- GOOGLE_APPLICATION_CREDENTIALS environment variable
- Service account has storage.objects.create permission
- Bucket exists and is accessible
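You can validate the service account independently of HyperExport, for example:
# Authenticate as the service account and list the bucket
gcloud auth activate-service-account --key-file="$GOOGLE_APPLICATION_CREDENTIALS"
gsutil ls gs://my-gcs-bucket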
SFTP: "Permission denied"
Check:
- Username is correct
- Password/key is correct
- SSH key permissions (chmod 600)
- Server allows password/key authentication
- Network connectivity on port 22 (or custom port)
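Testing the connection with a standalone SFTP client often pinpoints the failing step, for example:
# Verbose connection test with the same key and port
sftp -v -P 22 -i ~/.ssh/hyperexport_key user@sftp.example.com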
"Connection timeout"
Solutions:
- Check network connectivity
- Verify firewall rules
- Try different region/endpoint
- Increase timeout settings
"Insufficient storage space"
Solutions:
- Check cloud storage quota
- Verify billing is active
- Check bucket/container limits
- Contact cloud provider support
"File too large"
Solutions:
- Enable multipart upload (automatic for >5GB)
- Use stream upload mode
- Split large disks (manual)
- Check provider limits (S3: 5TB, Azure: 190.7TB, GCS: 5TB)
Slow upload speeds
Optimization:
- Use nearest region
- Enable parallel uploads (--parallel)
- Check network bandwidth
- Use stream upload mode
- Enable compression (trade CPU for bandwidth)
High memory usage
Solutions:
- Use stream upload (no local buffering)
- Reduce parallel uploads
- Export fewer VMs at once
- Enable compression
Never hardcode credentials:
# ❌ BAD - credentials in script
hyperexport --upload s3://bucket \
--access-key AKIAIOSFODNN7EXAMPLE \
--secret-key wJalrXUtnFEMI
# ✅ GOOD - use environment variables
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
hyperexport --upload s3://bucket
Use credential files:
# AWS credentials file
~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI
# Azure connection string
~/.azure/storage_connection_string
Use IAM roles (AWS):
# When running on EC2 with IAM role, no credentials needed
hyperexport --upload s3://bucket
Use encryption in transit:
All supported providers encrypt traffic by default: HTTPS/TLS for S3, Azure Blob Storage, and GCS, and SSH for SFTP.
Restrict network access:
# S3 bucket policy - IP restriction
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "203.0.113.0/24"
}
}
}]
}
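Note that the policy only takes effect once attached to the bucket:
# Apply the IP-restriction policy (saved as policy.json)
aws s3api put-bucket-policy \
--bucket my-bucket \
--policy file://policy.json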
Use VPN/Private Links:
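On AWS, for instance, a gateway VPC endpoint keeps S3 traffic on the provider's private network; a sketch with placeholder VPC and route-table IDs:
# Route S3 traffic through a gateway endpoint instead of the internet
# (vpc-0abc123 and rtb-0def456 are placeholders for your own IDs)
aws ec2 create-vpc-endpoint \
--vpc-id vpc-0abc123 \
--service-name com.amazonaws.us-east-1.s3 \
--route-table-ids rtb-0def456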
Enable versioning:
# S3 versioning
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# Azure blob versioning
az storage account blob-service-properties update \
--account-name mystorageaccount \
--enable-versioning true
Enable encryption at rest:
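Azure and GCS encrypt stored data by default; for S3 you can enforce a bucket-wide default, for example:
# Default all new objects to SSE-S3 (AES-256) server-side encryption
aws s3api put-bucket-encryption \
--bucket my-bucket \
--server-side-encryption-configuration \
'{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'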
Use lifecycle policies:
# S3 lifecycle - delete after 30 days
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://lifecycle.json
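The referenced lifecycle.json might look like this minimal sketch, expiring objects under backups/ after 30 days:
{
  "Rules": [
    {
      "ID": "expire-old-exports",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Expiration": { "Days": 30 }
    }
  ]
}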
Use meaningful prefixes:
s3://my-bucket/
├── prod/
│   ├── daily/2026-01-20/web-server-01/
│   ├── daily/2026-01-19/web-server-01/
│   └── weekly/2026-01-15/web-server-01/
├── dev/
│   └── snapshots/web-server-dev/
└── test/
    └── backups/test-vm-01/
Include metadata:
# Encode the date and source host into the prefix for searchability
--prefix "backups/$(date +%Y-%m-%d)/$(hostname)"
Use appropriate storage classes:
S3:
- Standard: Frequent access
- Infrequent Access: Monthly access
- Glacier: Long-term archive
Azure:
- Hot: Frequent access
- Cool: Infrequent access
- Archive: Long-term storage
GCS:
- Standard: Frequent access
- Nearline: Monthly access
- Coldline: Quarterly access
- Archive: Yearly access
Enable compression:
# Reduce storage costs by 30-70%
hyperexport --interactive --compress
Cleanup old exports:
# Delete exports older than 30 days
aws s3 ls s3://my-bucket/backups/ --recursive | \
awk '$1 < "'$(date -d '30 days ago' +%Y-%m-%d)'" {print $4}' | \
xargs -I {} aws s3 rm s3://my-bucket/{}
Track upload costs:
# AWS Cost Explorer API
aws ce get-cost-and-usage \
--time-period Start=2026-01-01,End=2026-01-31 \
--granularity MONTHLY \
--metrics BlendedCost \
--filter file://s3-filter.json
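The referenced s3-filter.json could scope the query to S3 charges; one possible form:
{
  "Dimensions": {
    "Key": "SERVICE",
    "Values": ["Amazon Simple Storage Service"]
  }
}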
Set up budget alerts:
Configure spending alerts in your provider's billing console (e.g., AWS Budgets, Azure Cost Management, GCP budget alerts) so unexpected upload costs surface early.
Monitor upload success:
# Check for failed uploads
grep "upload failed" /var/log/hyperexport.log
#!/bin/bash
# daily-backup.sh
export AWS_ACCESS_KEY_ID="$(cat ~/.aws/access_key)"
export AWS_SECRET_ACCESS_KEY="$(cat ~/.aws/secret_key)"
export AWS_REGION="us-east-1"
DATE=$(date +%Y-%m-%d)
BUCKET="s3://my-backups/daily/$DATE"
hyperexport \
--batch /etc/hyperexport/production-vms.txt \
--format ova \
--compress \
--upload "$BUCKET" \
--stream-upload \
--parallel 4 \
--quiet
# Cleanup old backups (keep last 7 days)
aws s3 ls s3://my-backups/daily/ | \
awk '$2 < "'$(date -d '7 days ago' +%Y-%m-%d)'/" {print $2}' | \
xargs -I {} aws s3 rm s3://my-backups/daily/{} --recursive
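To run this nightly, a crontab entry along these lines would work (the script path is an assumption):
# Run the daily backup at 02:00, logging output
0 2 * * * /usr/local/bin/daily-backup.sh >> /var/log/daily-backup.log 2>&1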
#!/bin/bash
# multi-cloud-backup.sh
VM_LIST="web-server-01 db-server-01"
for vm in $VM_LIST; do
# Primary backup to S3
hyperexport --vm "$vm" \
--upload s3://primary-backups/prod \
--compress
# Secondary backup to Azure
hyperexport --vm "$vm" \
--upload azure://secondary-backups/prod \
--compress
# Tertiary backup to GCS
hyperexport --vm "$vm" \
--upload gs://tertiary-backups/prod \
--compress
done
#!/bin/bash
# dr-backup.sh
# Export critical VMs to multiple regions
CRITICAL_VMS="db-master web-lb auth-server"
REGIONS="us-east-1 us-west-2 eu-west-1"
for vm in $CRITICAL_VMS; do
for region in $REGIONS; do
export AWS_REGION="$region"
hyperexport --vm "$vm" \
--format ova \
--compress \
--verify \
--upload "s3://dr-backups-$region/critical" \
--stream-upload
done
done
# Send completion notification
echo "DR backup completed for $CRITICAL_VMS" | \
mail -s "DR Backup Complete" ops@example.com
For issues, questions, or feature requests related to cloud storage integration, open an issue on the project's tracker.