Genvid Forum

Genvid Technologies

Bastion migration


Hi, is there a way to either:

  1. Migrate an existing (remote AWS) cluster to be managed by Bastion on another machine - I suspect creating a new cluster on the new bastion and then doing a terraform refresh would not connect to the existing AWS machines, based on previous issues

  2. Start services/jobs on a remote cluster directly without using Bastion at all (e.g. through the HTTP REST APIs for Nomad/Consul)?




Hi Adrian!

Our team will look into this and we’ll provide you an answer as soon as possible!




Hi Adrian,

You can use the new backup functionality in genvid-bastion to migrate (see our upgrade notes for details). I have never tried it for this purpose, but it should work. Let us know if you run into any problems.

Although the Nomad API is available, we currently have no standard way of managing jobs other than cluster-api. The reason cluster-api doesn’t run on the cluster is partly security (limiting our attack surface), but also that it needs access to additional information held in the bastion itself (like CA private keys, SDK versions, and Terraform information).
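For completeness, the Nomad agent does expose a documented HTTP API. A minimal sketch of querying it directly, assuming you have network access to the agent (the host name below is a placeholder, and 4646 is Nomad’s default HTTP port):

```shell
# Sketch only: talk to a remote Nomad agent over its HTTP API.
# The address is a placeholder for your cluster's Nomad endpoint.
NOMAD_ADDR="http://nomad.example.com:4646"

# List all registered jobs as JSON (requires network access to the agent).
list_jobs() {
  curl -s "$NOMAD_ADDR/v1/jobs"
}
```

The same API also accepts job registrations (POST to /v1/jobs), but as noted above, Genvid jobs are templates, so submitting them raw will not work without the rendering step described below.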

To run a Nomad job from a template, you first need to render the template through consul-template and then submit the resulting job to Nomad. I could give you more instructions on how to do this, but it is really not a path I suggest you take, since you would be on your own.
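Those two steps could be sketched roughly like this, using the stock HashiCorp consul-template and nomad CLIs; the file names and addresses are placeholders, and again this is an unsupported path:

```shell
# Unsupported sketch: render a job template against Consul, then submit it.
# Addresses and file names are placeholders for your cluster's values.
export CONSUL_HTTP_ADDR="consul.example.com:8500"  # cluster's Consul agent
export NOMAD_ADDR="http://nomad.example.com:4646"  # cluster's Nomad agent

render_and_run() {
  # Render the template once (no long-running watch), then submit the job.
  consul-template -once -template "myjob.nomad.tpl:myjob.nomad" &&
  nomad job run myjob.nomad
}
```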


Is there a way to manually back up from a bastion still running 1.13? If not, is it possible to upgrade the bastion to 1.15 (which I believe keeps the settings?) and still start jobs on clusters that were created in 1.13?




Hi Adrian,

The backup option should work with a 1.13 bastion without problems; just install the newer SDK next to it.

However, the restoration would have to be done in 1.15. The cluster UI will be upgraded during restoration, but since the versions of the cluster processes are actually stored in the cluster’s Consul KV store, they will stay at the same version. Just don’t run genvid-sdk setup or load-config-sdk on it, since that would actually change the service versions.


On a related note, whilst trying to migrate, after a reboot I’m seeing the following message when I do a genvid-install (still on 1.13, not on 1.15 yet):

“INFO:service-vault:Unsealing vault
Unseal failed, invalid key”

Is there a way to find the valid key? Where are these keys stored?




The keys are stored in “~/.vault-keys” and are saved during the backup process. The keys are used to decrypt the content of the Vault data in Consul, so they must move along with it.
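If you do have the right keys, replaying them manually would look roughly like this. This is a hedged sketch: it assumes the vault CLI is on the PATH, the Vault address below is only a placeholder, and the one-key-per-line reading of ~/.vault-keys is an assumption about the file format:

```shell
# Sketch: unseal a Vault instance by replaying saved unseal keys.
# Assumes one key per line in the key file (format not confirmed) and
# a placeholder Vault address.
VAULT_ADDR="${VAULT_ADDR:-http://127.0.0.1:8200}"

unseal_from_file() {
  # Feed each key to Vault; it unseals once enough valid keys are given.
  while read -r key; do
    vault operator unseal "$key"
  done < "$1"
}
# usage: unseal_from_file ~/.vault-keys
```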

Fortunately, there is almost no information saved in the bastion vault prior to 1.15, so you could probably just remove the ~/.genvid/vault directory by hand, as well as the /vault folder in Consul (using the Consul UI).
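The same cleanup could be scripted with the consul CLI instead of the UI. A sketch only, and destructive, so only appropriate because (as noted above) almost nothing is stored in the pre-1.15 bastion vault; the paths are the ones mentioned above:

```shell
# Destructive sketch: wipe the bastion's old Vault state.
wipe_bastion_vault() {
  rm -rf ~/.genvid/vault               # local vault data directory
  consul kv delete -recurse vault/     # Vault's backend data in Consul's KV
}
```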


Thanks, removing the vault worked!

Stepping back up to the general point: if the cluster API is intended to run locally to avoid security holes, what is your recommended way for more than one person to access the bastion UI? I imagine Bastion could be installed on a Windows machine in the cloud (it’s a Windows EXE currently; I don’t think you provide a Linux-based install method), however it would itself be insecure, since Bastion has no kind of login mechanism. We could limit access by IP, of course, but that gets quite unmanageable when logging in from different locations.

My next (more involved) idea would be to set up some kind of secure proxy web server on that Windows machine that requires a login but forwards calls to the local bastion/cluster APIs.



Actually, that’s the only method we propose: a shared computer on a private network that people connect to. The original design calls for a container-based version of the bastion services, but it is still a work in progress. I hope we will be able to ship it by the end of summer.

If you are using the cloud, it is also possible to bind the bastion only to the local address and access it through a remote desktop application. It is a bit less practical but more secure.
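In the same spirit, a lighter-weight variant of that idea is an SSH port forward, so the UI never listens on a public address. This is only a sketch: the host, user, and port below are placeholders, not confirmed Genvid defaults, and it assumes an SSH server is running on the bastion machine:

```shell
# Sketch: forward the bastion UI port over SSH instead of exposing it.
# Host, user, and port 8092 are placeholders.
tunnel_bastion_ui() {
  ssh -N -L 8092:127.0.0.1:8092 "user@bastion-host.example.com"
}
# then browse http://localhost:8092 on the local machine
```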

Rest assured, however, that this will not be the ultimate solution.