
AWS Marketplace AMI

Deploy SpatialFlow to your own AWS account using our pre-built Amazon Machine Image (AMI) from the AWS Marketplace.

Overview

The SpatialFlow AMI provides a complete, production-ready deployment on a single EC2 instance:

  • Pre-configured Stack: Django API, React frontend, PostgreSQL/PostGIS, Valkey (Redis-compatible)
  • Quick Launch: CloudFormation template for automated deployment
  • Flexible Architecture: Optional RDS PostgreSQL and ElastiCache for high availability
  • Enterprise Ready: TLS support, SSO authentication, CloudWatch integration

Architecture Options

All-in-One (Default)

Perfect for development, testing, or small deployments:

┌─────────────────────────────────────────────┐
│                EC2 Instance                 │
│ ┌─────────┐ ┌──────────┐ ┌──────────────┐   │
│ │  Nginx  │ │  Django  │ │    Celery    │   │
│ │         │ │   API    │ │   Workers    │   │
│ └─────────┘ └──────────┘ └──────────────┘   │
│ ┌─────────────────┐ ┌──────────────────┐    │
│ │   PostgreSQL    │ │      Valkey      │    │
│ │   + PostGIS     │ │  (Cache/Queue)   │    │
│ └─────────────────┘ └──────────────────┘    │
└─────────────────────────────────────────────┘

External Managed Services

For production workloads, use external managed services:

┌─────────────────────────────────────────────┐
│                EC2 Instance                 │
│ ┌─────────┐ ┌──────────┐ ┌──────────────┐   │
│ │  Nginx  │ │  Django  │ │    Celery    │   │
│ │         │ │   API    │ │   Workers    │   │
│ └─────────┘ └──────────┘ └──────────────┘   │
└─────────────────────────────────────────────┘
         │                        │
         ▼                        ▼
┌────────────────┐      ┌────────────────┐
│   Amazon RDS   │      │  ElastiCache   │
│   PostgreSQL   │      │ (Valkey/Redis) │
└────────────────┘      └────────────────┘
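With external services enabled, the instance's environment file points at the managed endpoints instead of the local daemons. A minimal sketch, assuming placeholder hostnames and credentials (the variable names come from the Configuration Reference section):

```shell
# /etc/spatialflow/.env (excerpt) -- example values, not real endpoints
EXTERNAL_DB=1
DATABASE_URL=postgresql://spatialflow:PASSWORD@your-rds-endpoint.rds.amazonaws.com:5432/spatialflow
EXTERNAL_VALKEY=1
REDIS_SSL=1   # ElastiCache in-transit encryption uses TLS
REDIS_URL=rediss://your-cache-endpoint.cache.amazonaws.com:6379/0
```

When the CloudFormation stack is launched with UseExternalDB=true or UseExternalCache=true, these values are populated automatically on first boot.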

Quick Start

Prerequisites

  • AWS account with permissions to create EC2, RDS, ElastiCache resources
  • VPC with at least 2 subnets in different Availability Zones
  • SSH key pair for instance access

Deploy with CloudFormation

  1. Subscribe to the AMI on AWS Marketplace

  2. Launch CloudFormation Stack:

    • Go to AWS CloudFormation Console
    • Click "Create stack" → "With new resources"
    • Upload the SpatialFlow QuickStart template
    • Fill in parameters:
      • InstanceType: EC2 instance size (default: t3.medium)
      • KeyName: SSH key pair for access (required)
      • VpcId: Your VPC ID (required)
      • SubnetId1: Primary subnet, public recommended (required)
      • SubnetId2: Secondary subnet in a different AZ; required for RDS/ElastiCache (required)
      • AllowedCIDR: CIDR block for HTTP/HTTPS access, e.g. 0.0.0.0/0 for public or your corporate CIDR (required, no default)
      • SSHAllowedCIDR: CIDR block for SSH access; leave empty to disable SSH and use SSM Session Manager instead (default: empty, SSH disabled)
      • AmiIdOverride: Optional AMI ID override; leave empty to use the latest AMI via SSM parameter (default: empty, uses SSM)
      • AdminEmail: Email for the admin account (required)
      • SenderEmailDomain: Verified SES domain or email for outbound emails (optional)
      • Environment: Environment name for logging (default: production)
      • UseExternalDB: Use RDS PostgreSQL instead of local (default: false)
      • DBInstanceClass: RDS instance class if UseExternalDB=true (default: db.t3.micro)
      • DBAllocatedStorage: RDS storage in GB (default: 20)
      • UseExternalCache: Use ElastiCache instead of local Valkey (default: false)
      • CacheNodeType: ElastiCache node type if UseExternalCache=true (default: cache.t3.micro)
  3. Wait for deployment (~10-15 minutes)

  4. Access your instance:

    • Application URL: Check CloudFormation Outputs for ApplicationURL
    • SSH (if SSHAllowedCIDR is configured): ssh -i your-key.pem ec2-user@<public-ip>
    • SSM Session Manager (recommended): aws ssm start-session --target <instance-id>
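If you prefer the CLI to the console, the same stack can be launched with aws cloudformation create-stack. A sketch under stated assumptions: the template filename, stack name, and all parameter values are placeholders, while the parameter keys match the table above:

```shell
# Launch the QuickStart stack from the CLI (template path is a placeholder)
aws cloudformation create-stack \
  --stack-name spatialflow \
  --template-body file://spatialflow-quickstart.yaml \
  --parameters \
      ParameterKey=InstanceType,ParameterValue=t3.medium \
      ParameterKey=KeyName,ParameterValue=my-key \
      ParameterKey=VpcId,ParameterValue=vpc-0123456789abcdef0 \
      ParameterKey=SubnetId1,ParameterValue=subnet-aaaa \
      ParameterKey=SubnetId2,ParameterValue=subnet-bbbb \
      ParameterKey=AllowedCIDR,ParameterValue=203.0.113.0/24 \
      ParameterKey=AdminEmail,ParameterValue=admin@example.com \
  --capabilities CAPABILITY_IAM

# Block until creation finishes, then read ApplicationURL from the Outputs
aws cloudformation wait stack-create-complete --stack-name spatialflow
aws cloudformation describe-stacks --stack-name spatialflow \
  --query 'Stacks[0].Outputs'
```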

First Boot Process

On first launch, SpatialFlow automatically:

  1. Generates secure passwords for PostgreSQL and Valkey
  2. Runs database migrations
  3. Creates an admin user with the provided email
  4. Writes credentials to ~/SPATIALFLOW_CREDENTIALS.txt
  5. Starts all services

Retrieve your admin credentials:

ssh -i your-key.pem ec2-user@<public-ip>
cat ~/SPATIALFLOW_CREDENTIALS.txt

Configuring HTTPS

HTTPS is required for production deployments. The AMI includes a helper script:

sudo spatialflow-enable-tls --domain your-domain.com --email admin@your-domain.com

This will:

  • Obtain a certificate from Let's Encrypt
  • Configure Nginx for HTTPS
  • Set up automatic certificate renewal

Using Custom Certificate

sudo spatialflow-enable-tls --custom \
  --cert /path/to/fullchain.pem \
  --key /path/to/privkey.pem

Enabling SSO Authentication

SpatialFlow supports Single Sign-On with Google, Microsoft, and GitHub.

HTTPS Required

OAuth providers require HTTPS for callback URLs. Configure TLS before enabling SSO.

Provider Setup

Google OAuth

  1. Go to Google Cloud Console
  2. Create OAuth 2.0 Client ID (Web application)
  3. Add redirect URI: https://your-domain.com/api/v1/auth/oauth/google/callback
  4. Copy Client ID and Client Secret

Microsoft OAuth

  1. Go to Azure Portal
  2. Create App Registration
  3. Add redirect URI: https://your-domain.com/api/v1/auth/oauth/microsoft/callback
  4. Create client secret

GitHub OAuth

  1. Go to GitHub Developer Settings
  2. Create OAuth App
  3. Set callback URL: https://your-domain.com/api/v1/auth/oauth/github/callback

Configuration

Add credentials to the environment file:

sudo nano /etc/spatialflow/.env

# Add your OAuth credentials (only providers you want to enable)
GOOGLE_CLIENT_ID=your-google-client-id
GOOGLE_CLIENT_SECRET=your-google-client-secret

MICROSOFT_CLIENT_ID=your-microsoft-client-id
MICROSOFT_CLIENT_SECRET=your-microsoft-client-secret

GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret

# Set to your domain
OAUTH_REDIRECT_BASE_URL=https://your-domain.com

Restart services:

sudo systemctl restart spatialflow-api spatialflow-celery spatialflow-celerybeat

Service Management

SpatialFlow uses systemd for service management:

# View service status
sudo systemctl status spatialflow-api
sudo systemctl status spatialflow-celery
sudo systemctl status spatialflow-celerybeat

# View logs
sudo journalctl -u spatialflow-api -f
sudo journalctl -u spatialflow-celery -f

# Restart services after configuration changes
sudo systemctl restart spatialflow-api spatialflow-celery spatialflow-celerybeat

Configuration Reference

All configuration is managed through /etc/spatialflow/.env:

Core Settings

  • SECRET_KEY: Django secret key (default: auto-generated)
  • DEBUG: Debug mode; never enable in production (default: False)
  • ALLOWED_HOSTS: Comma-separated allowed hostnames (default: *)
  • LOG_LEVEL: Logging level (default: INFO)

Database Settings

  • DATABASE_URL: Database connection URL (default: local PostgreSQL)
  • EXTERNAL_DB: Set to 1 for RDS (default: 0)

Cache Settings

  • REDIS_URL: Redis/Valkey connection URL (default: local Valkey)
  • EXTERNAL_VALKEY: Set to 1 for ElastiCache (default: 0)
  • REDIS_SSL: Set to 1 for TLS connections (default: 0)

SSO Settings

  • GOOGLE_CLIENT_ID: Google OAuth client ID
  • GOOGLE_CLIENT_SECRET: Google OAuth client secret
  • MICROSOFT_CLIENT_ID: Microsoft/Azure AD client ID
  • MICROSOFT_CLIENT_SECRET: Microsoft/Azure AD client secret
  • GITHUB_CLIENT_ID: GitHub OAuth app client ID
  • GITHUB_CLIENT_SECRET: GitHub OAuth app client secret
  • OAUTH_REDIRECT_BASE_URL: Base URL for OAuth callbacks

Backup and Recovery

Local PostgreSQL Backup (All-in-One Setup)

If using the default local PostgreSQL:

# Create backup
sudo -u postgres pg_dump spatialflow > spatialflow_backup_$(date +%Y%m%d).sql

# Restore from backup
sudo -u postgres psql spatialflow < spatialflow_backup_20250101.sql
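Before trusting a dump, it is worth sanity-checking it. A minimal sketch using a stand-in file (in practice you would substitute the dump produced by the command above); gzip -t verifies archive integrity the same way for a real compressed backup:

```shell
# Create a stand-in dump to illustrate the checks (replace with your real dump)
printf 'CREATE TABLE demo (id int);\n' > /tmp/spatialflow_demo.sql

# 1. Refuse an empty dump
[ -s /tmp/spatialflow_demo.sql ] && echo "dump is non-empty"

# 2. Compress as the S3 pipeline does, then verify archive integrity
gzip -f /tmp/spatialflow_demo.sql
gzip -t /tmp/spatialflow_demo.sql.gz && echo "archive OK"
```

A periodic test restore into a scratch database remains the only complete proof that a backup is usable.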

RDS Backup (High Availability Setup)

If using external RDS (UseExternalDB=true), RDS handles automated backups. You can also create manual snapshots:

# Create RDS snapshot via AWS CLI
aws rds create-db-snapshot \
  --db-instance-identifier your-stack-name-db \
  --db-snapshot-identifier spatialflow-manual-$(date +%Y%m%d) \
  --region us-east-1

# Or manually export using pg_dump with RDS credentials
# DATABASE_URL contains embedded credentials, so pg_dump can use it directly
source /etc/spatialflow/.env
pg_dump "$DATABASE_URL" > spatialflow_backup_$(date +%Y%m%d).sql

RDS Automated Backups

RDS automatically creates daily snapshots with configurable retention (default: 7 days). Access these via the AWS Console under RDS → Snapshots.
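Recovering from a snapshot restores into a new DB instance rather than in place. A sketch with placeholder identifiers; after the restore you would point DATABASE_URL at the new endpoint and restart services:

```shell
# Restore a snapshot into a fresh DB instance (identifiers are placeholders)
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier spatialflow-restored \
  --db-snapshot-identifier spatialflow-manual-20250101 \
  --region us-east-1

# Wait for it to come up, then fetch the new endpoint for DATABASE_URL
aws rds wait db-instance-available --db-instance-identifier spatialflow-restored
aws rds describe-db-instances --db-instance-identifier spatialflow-restored \
  --query 'DBInstances[0].Endpoint.Address' --output text
```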

Backup to S3 (Local PostgreSQL)

For local PostgreSQL deployments, automate backups to S3. This requires an IAM role attached to the EC2 instance with S3 write permissions.

# Create and upload encrypted backup (replace with your values)
sudo -u postgres pg_dump spatialflow | gzip | \
  aws s3 cp - s3://your-backup-bucket/spatialflow_$(date +%Y%m%d).sql.gz \
  --region us-east-1 \
  --sse AES256

# Set up daily cron job (2 AM daily)
# Replace YOUR_BUCKET and YOUR_REGION with your actual values
# Note: The cron runs as postgres user; ensure aws-cli is installed and in PATH
cat <<'EOF' | sudo tee /etc/cron.d/spatialflow-backup
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 2 * * * postgres pg_dump spatialflow | gzip | aws s3 cp - s3://YOUR_BUCKET/spatialflow_$(date +\%Y\%m\%d).sql.gz --region YOUR_REGION --sse AES256
EOF

# Then edit the file to set your bucket and region
sudo nano /etc/cron.d/spatialflow-backup

Bucket Security

Enable S3 bucket versioning and configure a lifecycle policy to expire old backups. Use a bucket policy to enforce encryption (aws:SecureTransport) and restrict access to the EC2 instance role.
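The versioning and lifecycle settings described above can also be applied from the CLI. A sketch with a placeholder bucket name and a 90-day expiry; tune the prefix and retention to match your backup naming:

```shell
# Keep overwritten or deleted objects recoverable (bucket name is a placeholder)
aws s3api put-bucket-versioning \
  --bucket your-backup-bucket \
  --versioning-configuration Status=Enabled

# Expire backups after 90 days so the bucket does not grow without bound
aws s3api put-bucket-lifecycle-configuration \
  --bucket your-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-spatialflow-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": "spatialflow_"},
      "Expiration": {"Days": 90}
    }]
  }'
```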

Configuration Backup

Also back up your configuration:

sudo cp /etc/spatialflow/.env /path/to/backup/.env.backup

Upgrading

To upgrade to a new AMI version:

  1. Create a backup of your database and configuration
  2. Launch a new instance from the updated AMI
  3. Restore your data to the new instance
  4. Update DNS/Elastic IP to point to the new instance
  5. Terminate the old instance after verification

# Check current version
cat /opt/spatialflow/VERSION

# Back up before upgrading
sudo -u postgres pg_dump spatialflow > pre_upgrade_backup.sql
sudo cp /etc/spatialflow/.env /home/ec2-user/.env.backup

Blue-Green Deployment

For zero-downtime upgrades, run the new instance in parallel and switch traffic using an Elastic IP or load balancer.
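With an Elastic IP, the cutover is a single reassociation, and pointing back at the old instance is the rollback. A sketch with placeholder allocation and instance IDs:

```shell
# Move the Elastic IP from the old instance to the new one (IDs are placeholders)
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --instance-id i-0newinstance1234567

# Rollback is the same call with the old instance ID
```

DNS-based cutover works too, but an Elastic IP avoids waiting out TTLs on cached records.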

Security Best Practices

  1. Restrict SSH access - Limit security group SSH rule to your IP
  2. Enable HTTPS - Configure TLS before production use
  3. Use IAM roles - Attach IAM role instead of access keys
  4. Delete credentials file - Remove ~/SPATIALFLOW_CREDENTIALS.txt after noting values
  5. Configure ALLOWED_HOSTS - Set specific domains, not *
  6. Enable CloudWatch - Monitor logs and set up alarms

Health Checks

Configure load balancer health checks:

  • /health: returns 200 OK (full application health)
  • /nginx-health: returns 200 OK (Nginx only, faster)
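The same endpoints are convenient for manual smoke tests. A sketch against a placeholder domain; curl's -f flag makes it exit non-zero on HTTP errors, which suits scripts and alarms:

```shell
# Probe both endpoints; prints the HTTP status code for each
for path in /health /nginx-health; do
  printf '%s -> ' "$path"
  curl -fsS -o /dev/null -w '%{http_code}\n' "https://your-domain.com$path"
done
```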

Troubleshooting

Common Issues

"DisallowedHost" errors:

sudo nano /etc/spatialflow/.env
# Set ALLOWED_HOSTS to your domain (a leading dot matches all subdomains)
ALLOWED_HOSTS=your-domain.com,.your-domain.com

Services not starting:

# Check service status
sudo systemctl status spatialflow-api
# View detailed logs
sudo journalctl -u spatialflow-api --no-pager -n 100

Database connection issues:

# Test connection
sudo -u spatialflow /opt/spatialflow/.venv/bin/python \
  /opt/spatialflow/api/manage.py dbshell

SSO buttons not appearing:

  • Verify credentials are set in .env
  • Credentials must be at least 20 characters
  • Restart services after configuration changes

Getting Help

  • View first-boot logs: sudo journalctl -u spatialflow-first-boot
  • Application logs: sudo journalctl -u spatialflow-api -f
  • Email: support@spatialflow.io

Cost Optimization

  • Development: t3.small (2 vCPU, 2 GB)
  • Small Production: t3.medium (2 vCPU, 4 GB)
  • Medium Production: t3.large (2 vCPU, 8 GB)
  • Large Production: m5.xlarge (4 vCPU, 16 GB)

Cost-Saving Tips

  1. Use Reserved Instances for predictable workloads
  2. Enable RDS/ElastiCache only when needed for HA
  3. Right-size instances based on actual usage
  4. Use Savings Plans for long-term commitments