EC2 only lets you export instances as VMware-compatible OVA files if you originally imported that instance from an OVA. Presumably it preserves the metadata and XML gubbins from the import, and just wraps the disk up again using that metadata on export.
To provision arbitrary VMs in an OVA-exportable way, we abuse the volume snapshots on one imported VM:
- Make a fresh install of Ubuntu Server (or whatever your base distro is) in VMware, and export it as an OVA file. (Single disk only!)
- Untar the OVA and import the VMDK into EC2 using `ec2-import-instance`, onto an HVM instance type (i.e. no Xen kernel needed).
- Snapshot the volume and make a note of the snapshot ID. This is the "freshly installed Ubuntu Server" snapshot.
- Stop the instance (`ec2-stop-instances`).
- Detach the existing volume from the instance (`ec2-detach-volume`).
- Delete the existing volume (`ec2-delete-volume`).
- Make a new volume from `$INITIAL_SNAPSHOT_ID` (`ec2-create-volume`).
- Attach the new volume to the instance (`ec2-attach-volume`).
- Boot the instance (`ec2-start-instances`).
- Provision (e.g. run Packer with the null builder, run Chef, etc.).
- Export the instance using `ec2-create-instance-export-task`.
- Download your OVA from S3, import it into VMware / VirtualBox.
- Repeat for the next provisioning job.
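The stop / swap / start cycle above can be sketched as a single shell function. This is a hedged sketch, not my actual script: the `reset_and_boot` name, the `$EC2` command prefix, and the `/dev/sda1` device are inventions for illustration, and the awk parsing assumes `ec2-create-volume` prints a tab-separated `VOLUME vol-xxxx ...` line. Set `EC2=echo` to dry-run the calls instead of making them.

```shell
# Sketch of the stop / swap / start cycle. EC2 is a command prefix so the
# whole thing can be dry-run with EC2=echo (prints the ec2-* invocations
# instead of executing them). Arguments: instance ID, current volume ID,
# base snapshot ID, availability zone.
reset_and_boot() {
  local instance_id="$1" old_volume_id="$2" snapshot_id="$3" zone="$4"

  $EC2 ec2-stop-instances "$instance_id"    # (block until stopped)
  $EC2 ec2-detach-volume  "$old_volume_id"  # (block until detached)
  $EC2 ec2-delete-volume  "$old_volume_id"

  # Assumed output format: ec2-create-volume prints a "VOLUME vol-xxxx ..."
  # line, whose second field is the new volume's ID. (Empty in a dry run.)
  local new_volume_id
  new_volume_id=$($EC2 ec2-create-volume --snapshot "$snapshot_id" -z "$zone" \
                    | awk '$1 == "VOLUME" {print $2}')

  $EC2 ec2-attach-volume "$new_volume_id" -i "$instance_id" -d /dev/sda1
  $EC2 ec2-start-instances "$instance_id"
}

# Dry run: print the calls rather than executing them.
EC2=echo reset_and_boot i-deadbeef vol-old snap-fresh us-east-1a
```

Each real call has to be followed by polling until the action completes, which the sketch only marks in comments.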
I have a ~100-line shell script as part of a Jenkins job that does this to build exportable VM images. You have to block until each action has completed, e.g. after calling `ec2-create-volume`, poll `ec2-describe-volumes` until the volume is available.
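That blocking step can be factored into a small helper. A hedged sketch, assuming the tab-separated `VOLUME ...` lines the old EC2 API tools print (the status field position can shift, so treat the awk as illustrative):

```shell
# Run a status command repeatedly until it prints the value we want.
wait_for_status() {
  local want="$1"; shift
  until [ "$("$@")" = "$want" ]; do
    sleep 5
  done
}

# Assumed parsing of ec2-describe-volumes' "VOLUME ..." line; the status
# ("creating" / "available" / "in-use" ...) is typically the sixth field.
volume_status() {
  ec2-describe-volumes "$1" | awk '$1 == "VOLUME" {print $6}'
}

# e.g. after ec2-create-volume, block until the volume can be attached:
# wait_for_status available volume_status "$NEW_VOLUME_ID"
```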
Have fun.
In case anyone cares, here's the (lightly sanitised) bash monstrosity I currently use for this.