To investigate issues with Puppet Comply scans, Support might ask you to manually run CIS-CAT® Pro Assessor on the command line to generate debug logs and other diagnostic output. Depending on your issue, you can gather more information by adding debugging flags to the command.
Version and installation information
Comply version: All
Solution
Use these steps to find the command that executes CIS-CAT® Pro Assessor on agent nodes during Puppet Comply scans, then run the scan manually.
- Log into the PE console and click Tasks. In the filter by task name field, enter `comply::ciscat_scan` and click the ID of the node you want to debug.
- In the Node run result section, next to the node name you want to debug, click the triangle. The command is in the `scan_cmd` entry. The values following the `-b` and `-p` flags in the `scan_cmd` change depending on the operating system benchmark and profile level assigned to each node.

  For a Linux node, `scan_cmd` is similar to this:

  ```
  java -Xmx2048M -jar /opt/puppetlabs/comply/Assessor-CLI/Assessor-CLI.jar -q \
    -rp puppet-compliance -rd /opt/puppetlabs/comply/tmp/ \
    -p xccdf_org.cisecurity.benchmarks_profile_Level_1_-_Server \
    -b /opt/puppetlabs/comply/Assessor-CLI/benchmarks/CIS_CentOS_Linux_7_Benchmark_v3.0.0-xccdf.xml
  ```
  For a Windows node, `scan_cmd` is similar to this:

  ```
  java -Xmx2048M -jar C:/ProgramData/PuppetLabs/comply/Assessor-CLI/Assessor-CLI.jar -q `
    -rp puppet-compliance -rd C:/ProgramData/PuppetLabs/comply/tmp/ `
    -p xccdf_org.cisecurity.benchmarks_profile_Level_1_-_Member_Server `
    -b C:/ProgramData/PuppetLabs/comply/Assessor-CLI/benchmarks/CIS_Microsoft_Windows_Server_2016_RTM_(Release_1607)_Benchmark_v1.2.0-xccdf.xml
  ```
- To manually run the scan, execute the `scan_cmd` on the command line as the `root` user on a Linux node or the Administrator user on a Windows node. To generate additional debugging output, you can add flags to the end of the `scan_cmd`:
  - If the scanner detects the wrong operating system or halts with an error, increase the log level of CIS-CAT® Pro Assessor to info level by using the `--info` flag. Logs are written to the `./logs/assessor-cli.log` file created in the directory where the `scan_cmd` is run.
  - To gather the findings that caused a rule in a benchmark to pass or fail, generate an HTML report by using the `-html` flag. The report is saved to the `/opt/puppetlabs/comply/tmp/` directory on Linux and the `C:/ProgramData/PuppetLabs/comply/tmp/` directory on Windows.
- Attach the file with your debugging information to your Support ticket.
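To run the scan with both debug options in one pass, the flags can simply be appended to the copied command before executing it. A minimal shell sketch for a Linux node, assuming the example `scan_cmd` shown above (the command stored in `scan_cmd` here is illustrative; paste the exact `scan_cmd` copied from the PE console for your node):

```shell
# Paste the exact scan_cmd copied from the PE console (Linux example shown).
scan_cmd='java -Xmx2048M -jar /opt/puppetlabs/comply/Assessor-CLI/Assessor-CLI.jar -q -rp puppet-compliance -rd /opt/puppetlabs/comply/tmp/ -p xccdf_org.cisecurity.benchmarks_profile_Level_1_-_Server -b /opt/puppetlabs/comply/Assessor-CLI/benchmarks/CIS_CentOS_Linux_7_Benchmark_v3.0.0-xccdf.xml'

# Append the debug flags: --info raises the log level, -html writes a report.
debug_cmd="$scan_cmd --info -html"

# Print the augmented command so you can confirm the flags before running it.
echo "$debug_cmd"

# Run as root; logs are written to ./logs/assessor-cli.log under the
# directory where the command is run (uncomment to execute):
# sudo sh -c "$debug_cmd"
```

Printing the command first lets you verify the benchmark path and flags before executing with root privileges.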