Legacy architecture for Puppet Application Manager is no longer supported. Use the steps in this article to migrate to a supported architecture of Puppet Application Manager. There are two versions of this article: this one is for Continuous Delivery for Puppet Enterprise, and the other is for Puppet Comply.
As of June 2022, the Puppet Application Manager legacy architecture is no longer available for download. The legacy architecture uses Rook 1.0, which is incompatible with Kubernetes version 1.20 and newer. Kubernetes version 1.19 no longer receives security updates. Puppet continued to update legacy architecture components other than Kubernetes until 30 June 2022.
Version and installation information
Product: Continuous Delivery for Puppet Enterprise
Version: All supported
Installation type: Puppet Application Manager legacy architecture
Solution
Before you begin:
- A step in this solution requires the jq utility to be installed. We cannot provide support for third-party software.
- If you're not sure whether you're using legacy architecture, you can check using the steps in our documentation (or see the example check after this list).
- Your legacy Puppet Application Manager cluster and the new cluster that you migrate to must have the same connection status (online or offline). Migrating from an offline cluster to an online cluster, or vice versa, is not supported.
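If you want a quick way to confirm this from the command line, the following check is a convenience sketch only, not the documented procedure: because the legacy architecture uses Rook 1.0, a Rook operator image tagged v1.0.x indicates a legacy installation. It assumes the Rook operator deployment uses the default name rook-ceph-operator in the rook-ceph namespace.
# Print the Rook operator image; a v1.0.x tag suggests the legacy architecture
kubectl -n rook-ceph get deployment rook-ceph-operator -o jsonpath='{.spec.template.spec.containers[0].image}'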
Use the following steps to migrate your Puppet Application Manager installation to a supported architecture:
- Part one: Snapshot your legacy installation and upgrade it
- Part two: Snapshot the successful upgrade, install Puppet Application Manager on a new machine
- Part three: Restore the successful upgrade snapshot on a new target cluster
Part one: Snapshot your legacy installation and upgrade it
- In the Puppet Application Manager UI, at the bottom of the screen, note the version number you're currently running.
- Take a full snapshot of the legacy installation to external storage (an S3 bucket or NFS share). You can use this snapshot to revert if there are any issues during the upgrade in the next step.
- Using our documentation, upgrade the legacy installation to the latest version, making sure to use the force-reapply-addons flag. For example:
  If you have a standalone installation:
  curl -sSL https://k8s.kurl.sh/puppet-application-manager-legacy | sudo bash -s force-reapply-addons
  If you have a high availability (HA) installation:
  curl -sSL https://k8s.kurl.sh/puppet-application-manager-legacy | sudo bash -s ha force-reapply-addons
- Confirm that the upgraded installation looks good (see the verification sketch after this list). Make sure that:
  - Pods are up and running.
  - The application is responding.
  - In the Puppet Application Manager UI, at the bottom of the screen, the version number has changed to match the new version. If you don't see the new number, force refresh the page in your browser.
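As a quick way to verify that pods are up and running, the following one-liner (a convenience sketch using standard kubectl and grep, not a command from our documentation) lists any pods that are not in the Running or Completed state. Other than the header row, the output should be empty on a healthy cluster:
# List pods in any namespace that are not Running or Completed
kubectl get pods --all-namespaces | grep -vE 'Running|Completed'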
Continue to part two.
Part two: Snapshot the successful upgrade, install Puppet Application Manager on a new machine
- After the upgrade succeeds:
  - Take a new full snapshot of the upgraded installation to external storage (an S3 bucket or NFS share). You will use this snapshot to migrate.
  - If you have an offline (air-gapped) installation, check the registry IP address used in the cluster; you will need it in the next step:
    kubectl -n kurl get svc registry -o jsonpath='{.spec.clusterIP}'
- On a new machine, install the latest version of Puppet Application Manager using the steps in our documentation. If the legacy installation was offline (air-gapped), make sure to include the kurl-registry-ip=<IP> flag during installation, setting <IP> to the registry IP address from the previous step.
- To configure the new cluster to restore from your upgrade snapshot, point Puppet Application Manager at the upgrade snapshot storage.
  For example, if you're using NFS storage and are migrating to an HA installation:
  kubectl kots -n default velero configure-nfs --nfs-path "<PATH TO THE UPGRADE SNAPSHOT>" --nfs-server <IP ADDRESS OF NFS SERVER>
  Note: If you are using NFS for snapshot storage and are migrating to a standalone installation, make sure to add the --with-minio=false flag. For example:
  kubectl kots -n default velero configure-nfs --with-minio=false --nfs-path "<PATH TO THE UPGRADE SNAPSHOT>" --nfs-server <IP ADDRESS OF NFS SERVER>
  Note: To get the command for other types of storage, run kubectl kots -n default velero configure-{hostpath,nfs,aws-s3,other-s3,gcp} --help
- To check whether the snapshot is complete, run kubectl kots get backup. It can take time for the snapshots to populate, so you might need to run the command more than once (see the polling sketch after this list).
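If you would rather not re-run the command by hand, the following loop is a convenience sketch (not part of our documentation) that polls the backup list until a completed backup appears. It assumes that kubectl kots get backup reports Completed in its status column once the snapshot is ready:
# Poll every 30 seconds until a completed backup is listed (Ctrl+C to stop early)
until kubectl kots get backup | grep -q Completed; do
  echo "Waiting for the upgrade snapshot to appear..."
  sleep 30
done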
Continue to part three.
Part three: Restore the successful upgrade snapshot on a new target cluster
- On the new cluster, restore from the upgrade snapshot:
  kubectl kots restore --from-backup <NAME OF UPGRADE BACKUP>
  For example:
  kubectl kots restore --from-backup instance-9m8xw
- Monitor the restoration process to ensure that it completes:
  kubectl get pod -o json | jq -r '.items[] | select(.metadata.annotations."backup.velero.io/backup-volumes") | .metadata.name' | xargs kubectl wait --for=condition=Ready pod --timeout=20m
- On the new installation, go to the Puppet Application Manager UI at port 8800 (https://<HOSTNAME>:8800). Select the Config tab and update the hostname to match the new cluster. Click Save config, wait for the preflight checks to complete, and then click Deploy.
  Wait for the deploy to complete. You can monitor it by running kubectl get pods -w, or see the sketch after this list for a command that exits on its own when the rollout finishes.
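As an alternative to watching kubectl get pods -w, the following command (a convenience sketch using standard kubectl, not taken from our documentation) blocks until every deployment has finished rolling out. It assumes the application is deployed in the default namespace, as in the earlier kots commands:
# Wait up to 20 minutes for each deployment in the default namespace to finish rolling out
kubectl get deploy -o name | xargs -n1 kubectl rollout status --timeout=20m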