Learn about components and settings that affect plan and task performance so that you can tune Puppet Enterprise to complete tasks and plans more efficiently.
Version and installation information
PE version: All supported
Solution
Tasks and plans are both run by the orchestrator. You can improve plan performance by upgrading Puppet Enterprise (PE), by adjusting the orchestrator's JRuby configuration, or both. You can improve task performance by configuring orchestrator task concurrency, balancing it against the RAM available on your primary server.
Plan performance
Upgrade to take advantage of a change to plan and task management
In PE 2019.8.7 through 2019.8.10 and in 2021.2 through 2021.5, the orchestrator uses all available JRubies for lockless code deployments, pausing scheduled plans until the deployment completes. In PE 2019.8.11 and later 2019.8.x releases, and in 2021.6 and later, code deployment does not block plans.
Adjust the number of concurrent plans
You can customize the number of concurrent plans by changing the optional orchestrator parameter puppet_enterprise::profile::orchestrator::jruby_max_active_instances. Learn more about the parameter and how to change it in our documentation.
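As a sketch, the parameter can be set like other PE profile parameters via Hiera on the primary server (the file path and the value 4 below are illustrative; choose a value appropriate for your primary server's resources, then run Puppet on the primary server to apply the change):

```
# data/common.yaml (illustrative Hiera layer; adjust for your installation)
puppet_enterprise::profile::orchestrator::jruby_max_active_instances: 4
```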
Run multiple plans and code deployments simultaneously
To scale tasks and plans further, use the workaround documented in Patterns and Tactics. The module used in this workaround is not officially supported, but it is actively developed by our colleagues.
Task performance
When running tasks at scale, performance decreases. The orchestrator executes tasks in batches. To configure task concurrency, increase or decrease the puppet_enterprise::profile::orchestrator::task_concurrency parameter (250 by default) using the steps in our documentation.
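As with other PE profile parameters, one way to set it is via Hiera on the primary server (the file path and the value 500 below are illustrative, not a recommendation):

```
# data/common.yaml (illustrative; adjust path and value for your installation)
puppet_enterprise::profile::orchestrator::task_concurrency: 500
```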
For example, suppose you run a task targeting 1000 nodes and task_concurrency is set to 250 (the default):
The orchestrator divides the 1000 target nodes into four batches of 250 nodes. In this example, each batch takes 3 minutes to complete, so execution looks like this:
- The first batch of 250 takes 3 minutes.
- The second batch of 250 takes 3 minutes.
- The third batch of 250 takes 3 minutes.
- The fourth batch of 250 takes 3 minutes.
- Total execution time is 12 minutes.
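The batch arithmetic above can be sketched as a short calculation (the 3-minute batch duration is the figure assumed in this example; actual batch times vary by task and environment):

```python
import math

def total_runtime_minutes(target_count, task_concurrency, minutes_per_batch=3):
    """Estimate sequential batch execution time for a task run."""
    batches = math.ceil(target_count / task_concurrency)  # e.g. 1000 / 250 -> 4 batches
    return batches * minutes_per_batch

print(total_runtime_minutes(1000, 250))  # 4 batches x 3 minutes = 12
print(total_runtime_minutes(1000, 500))  # 2 batches x 3 minutes = 6
```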
To decrease total execution time, you could change task_concurrency to 500 so that the orchestrator runs two batches of 500 nodes, reducing total execution time to 6 minutes. However, because the orchestrator is part of the primary server, increasing task_concurrency consumes more primary server RAM.
There is a 1:1 relationship between the size of the ThreadPool and task_concurrency: the orchestrator sets the ThreadPool size equal to the value of the task_concurrency parameter. Roughly speaking, each thread in the ThreadPool reserves about 1 MB of RAM, so in the example above, setting task_concurrency to 500 consumes about 500 MB of RAM.
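Using the rough 1 MB-per-thread figure above, the RAM cost of a given task_concurrency setting can be estimated (the 1 MB figure is this article's approximation, not an exact measurement):

```python
def threadpool_ram_mb(task_concurrency, mb_per_thread=1):
    """Approximate RAM reserved by the orchestrator's ThreadPool."""
    return task_concurrency * mb_per_thread

print(threadpool_ram_mb(500))  # ~500 MB at task_concurrency = 500
```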