Avoid issues when migrating Puppet Application Manager to a new machine, such as during disaster recovery, by using these steps to build a Puppet Application Manager installer that creates a new cluster with the same components and versions as the original cluster.
These steps are not for legacy architectures. To migrate to a supported architecture, use the steps in Migrate to a supported architecture of Puppet Application Manager.
Version and installation information
Product: Continuous Delivery for Puppet Enterprise
Version: 4.x
Puppet Application Manager version: 1.40.0 and later
Installation type: non-legacy
Solution
Get information from the original installation
Before you build the installer, you need to get kURL and spec information from the original installation. Run the following commands on the original installation.
1. Get the kURL version:
kubectl get configmap -n kurl kurl-current-config -o jsonpath="{.data.kurl-version}" && echo
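The output is a single kURL version string, for example (illustrative; your version will differ):
v2022.03.11-0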
2. Get the installer spec. The installer name (and the command needed to get the spec) varies based on your architecture. Run one of the following commands. The information you need is output in the spec section.
- If you have an HA installation, run:
kubectl get installers puppet-application-manager -o yaml
- If you have a standalone installation, run:
kubectl get installers puppet-application-manager-standalone -o yaml
Note: If you see Error from server (NotFound), check that you used the correct command for your architecture. You can view all installers by running kubectl get installers; you're targeting the one with the shortest age.
Example output:
# kubectl get installers puppet-application-manager-standalone -o yaml
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cluster.kurl.sh/v1beta1","kind":"Installer","metadata":{"annotations":{},"creationTimestamp":null,"name":"puppet-application-manager-standalone","namespace":"default"},"spec":{"containerd":{"version":"1.4.12"},"contour":{"version":"1.18.0"},"ekco":{"version":"0.16.0"},"kotsadm":{"applicationSlug":"puppet-application-manager","version":"1.64.0"},"kubernetes":{"version":"1.21.8"},"metricsServer":{"version":"0.4.1"},"minio":{"version":"2020-01-25T02-50-51Z"},"openebs":{"isLocalPVEnabled":true,"localPVStorageClassName":"default","version":"2.6.0"},"prometheus":{"version":"0.49.0-17.1.1"},"registry":{"version":"2.7.1"},"velero":{"version":"1.6.2"},"weave":{"podCidrRange":"/22","version":"2.8.1"}},"status":{}}
  creationTimestamp: "2021-06-04T00:05:08Z"
  generation: 4
  labels:
    velero.io/exclude-from-backup: "true"
  name: puppet-application-manager-standalone
  namespace: default
  resourceVersion: "102061068"
  uid: 4e7f1196-5fab-4072-9399-15d18dcc5137
spec:
  containerd:
    version: 1.4.12
  contour:
    version: 1.18.0
  ekco:
    version: 0.16.0
  kotsadm:
    applicationSlug: puppet-application-manager
    version: 1.64.0
  kubernetes:
    version: 1.21.8
  metricsServer:
    version: 0.4.1
  minio:
    version: 2020-01-25T02-50-51Z
  openebs:
    isLocalPVEnabled: true
    localPVStorageClassName: default
    version: 2.6.0
  prometheus:
    version: 0.49.0-17.1.1
  registry:
    version: 2.7.1
  velero:
    version: 1.6.2
  weave:
    podCidrRange: /22
    version: 2.8.1
status: {}
The spec is this section:
spec:
  containerd:
    version: 1.4.12
  contour:
    version: 1.18.0
  ekco:
    version: 0.16.0
  kotsadm:
    applicationSlug: puppet-application-manager
    version: 1.64.0
  kubernetes:
    version: 1.21.8
  metricsServer:
    version: 0.4.1
  minio:
    version: 2020-01-25T02-50-51Z
  openebs:
    isLocalPVEnabled: true
    localPVStorageClassName: default
    version: 2.6.0
  prometheus:
    version: 0.49.0-17.1.1
  registry:
    version: 2.7.1
  velero:
    version: 1.6.2
  weave:
    podCidrRange: /22
    version: 2.8.1
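Rather than copying the spec out of the full output by hand, you can slice it out with sed. A minimal sketch, assuming the standalone installer name (swap in puppet-application-manager for HA):
# Print everything from the "spec:" line up to the "status:" line,
# then drop that final "status:" line -- what remains is the <SPEC> block
kubectl get installers puppet-application-manager-standalone -o yaml \
  | sed -n '/^spec:/,/^status:/p' | sed '$d'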
Build the installer and migrate the cluster
3. Create a file on a new machine called installer.yaml with the following contents, replacing <SPEC> and <KURL VERSION> with the spec and kURL version information that you got in earlier steps. If you're using Puppet Application Manager 1.68.0 or later, the <KURL VERSION> might already be in the spec; if so, you don't need to add it again.
Note: The spacing is critical in YAML files. Use a YAML linter to confirm that the format is correct (see the example check after the sample file below).
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
<SPEC>
  kurl:
    installerVersion: "<KURL VERSION>"
For example:
apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
spec:
  containerd:
    version: 1.4.12
  contour:
    version: 1.18.0
  ekco:
    version: 0.16.0
  kotsadm:
    applicationSlug: puppet-application-manager
    version: 1.64.0
  kubernetes:
    version: 1.21.8
  metricsServer:
    version: 0.4.1
  minio:
    version: 2020-01-25T02-50-51Z
  openebs:
    isLocalPVEnabled: true
    localPVStorageClassName: default
    version: 2.6.0
  prometheus:
    version: 0.49.0-17.1.1
  registry:
    version: 2.7.1
  velero:
    version: 1.6.2
  weave:
    podCidrRange: /22
    version: 2.8.1
  kurl:
    installerVersion: "v2022.03.11-0"
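One quick way to confirm the spacing is to feed the file to any YAML parser. For example, if Python 3 with PyYAML happens to be available on the machine (an assumption, not a requirement of this process):
# Exits with a traceback if installer.yaml has indentation or syntax errors
python3 -c 'import yaml; yaml.safe_load(open("installer.yaml")); print("installer.yaml parses cleanly")'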
4. Build an installer using installer.yaml. Run:
curl -s -X POST -H "Content-Type: text/yaml" --data-binary "@installer.yaml" https://kurl.sh/installer | grep -o "[^/]*$"
The output is a hash. Make a note of it; you need it to complete later steps.
For example:
# curl -s -X POST -H "Content-Type: text/yaml" --data-binary "@installer.yaml" https://kurl.sh/installer | grep -o "[^/]*$"
78501f1
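The POST returns the full installer URL (for example, https://kurl.sh/78501f1); the grep -o "[^/]*$" keeps only the final path segment, which is the hash. If you want to inspect the generated installer before running anything, you can download the script without executing it; a small sketch using the example hash above (yours will differ):
# Fetch the generated install script for review instead of piping it to bash
curl -sSL -o install-78501f1.sh https://kurl.sh/78501f1
less install-78501f1.sh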
5. As you work through the appropriate Puppet Application Manager documentation to create a new installation for the new cluster, use the hash from the previous step to create a customized command that replaces the installation script command in our documentation. The installer script command and step number vary based on your architecture.
For online installations
In the documentation appropriate to your installation (HA online installation or standalone online installation), work through the installation steps, replacing the installer script command
curl -sSL https://k8s.kurl.sh/puppet-application-manager...
with
curl https://kurl.sh/<HASH> | sudo bash
For offline installations
In the documentation appropriate to your installation (HA offline installation or standalone offline installation), work through the installation steps, replacing the installer bundle command
https://k8s.kurl.sh/bundle/puppet-application-manager...
with
curl -LO https://k8s.kurl.sh/bundle/<HASH>.tar.gz
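For example, with the hash from the earlier output (78501f1; yours will differ), the replacement commands look like this:
# Online installation: run the generated installer directly
curl https://kurl.sh/78501f1 | sudo bash
# Offline installation: download the matching airgap bundle instead
curl -LO https://k8s.kurl.sh/bundle/78501f1.tar.gz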
6. Once the new cluster is installed, you can restore snapshots from the original cluster as outlined in our documentation on migration and disaster recovery using a snapshot, starting from the step for setting a retention schedule (currently step 6). For offline installations, make sure to get the kurl-registry-ip from the original installation before restoring, as outlined in our documentation on configuring snapshots.
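If you need a quick way to capture that address, one option is to read the cluster IP of the in-cluster registry Service; a sketch that assumes the default kURL layout, where the Service is named registry in the kurl namespace:
# Run on the ORIGINAL cluster and record the value before restoring
kubectl get svc registry -n kurl -o jsonpath='{.spec.clusterIP}' && echo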