diff --git a/exercises/rhdp_auto_satellite/1-compliance/README.md b/exercises/rhdp_auto_satellite/1-compliance/README.md
index 3b5425d2f..771965be4 100644
--- a/exercises/rhdp_auto_satellite/1-compliance/README.md
+++ b/exercises/rhdp_auto_satellite/1-compliance/README.md
@@ -55,7 +55,7 @@ Exercise

Now we will start configuring a compliance policy that we can use to scan our RHEL nodes.

-- In the Satellite UI, click on the 'Hosts' dropdown menu pane on the left, then click on the 'Compliance' dropdown, followed by clicking on 'Policies'
+- In the Satellite UI, click on the 'Hosts' dropdown menu pane on the left, then click on the 'Compliance' dropdown, followed by clicking on 'Policies'.

![satellite_policy](images/1-compliance-aap2-Satellite_Policies.png)

@@ -65,25 +65,28 @@ Now we will start configuring a compliance policy that we can use to scan our RH

#### 3\. Configuring a new compliance policy

-Now we will start configuring our Satellite server to be able to manage a compliance policy
+Now we will start configuring our Satellite server to be able to manage a compliance policy.

-- Select "Manual" from the deployment options and click "Next"
+- Select "Manual" from the deployment options and click "Next".
+
+> **NOTE:**
+> There is an "Ansible" radio button selection; why aren't we using that? Selecting the "Ansible" radio button here would utilize the Ansible engine built into Satellite to execute the automation for the scan. In this case, we are going to be utilizing Ansible Automation Platform (AAP) to automate the execution of the OpenSCAP client scan on the managed host, providing the means to expand the capabilities of the scan and to take advantage of the broader automation capabilities that AAP provides.

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP1.png)

-- Create the policy name "PCI_Compliance" and provide any description you like. Then click "Next"
+- Create the policy name "PCI_Compliance" and provide any description you like. Then click "Next".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP2.png)

-- Select the "Red Hat rhel7 default content" and "PCI-DSS v4.0 Control Baseline for Red Hat Enterprise Linux 7". There is no tailoring file. Then click "Next"
+- Select the "Red Hat rhel7 default content" and "PCI-DSS v4.0 Control Baseline for Red Hat Enterprise Linux 7". There is no tailoring file. Then click "Next".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP3.png)

-- It is necessary to set a schedule when creating a new compliance policy. You can select "Monthly" and "1" for Day of Month for the purposes of this exercise. Then click "Next"
+- It is necessary to set a schedule when creating a new compliance policy. You can select "Monthly" and "1" for Day of Month for the purposes of this exercise. Then click "Next".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP4.png)

-- Steps 5, 6, and 7 as part of the New Compliance Policy can use default values. Click "Next" through "Locations", and "Organizations". For "Hostgroups" click "Submit"
+- Steps 5, 6, and 7 as part of the New Compliance Policy can use default values. Click "Next" through "Locations", and "Organizations". For "Hostgroups" click "Submit".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP5.png)

@@ -105,7 +108,7 @@ Now we will start configuring our Satellite server to be able to manage a compli

This step will allow us to scan a single RHEL 7 host with the ```PCI_Compliance``` policy that we configured on Satellite.
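+
+> **NOTE:**
+> The job template we are about to create passes the compliance policy name to the OpenSCAP client automation as an extra variable. As a preview, here is a minimal sketch of the format, assuming the ```policy_name``` list variable used later in this exercise (the exact "Variables" content to use is provided in the template details below):
+
+```yaml
+---
+policy_name:
+  - PCI_Compliance   # must match the policy name created in the Satellite UI
+```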
-- In Ansible Automation Platform click 'Templates' from the menu pane on the left side
+- In Ansible Automation Platform click 'Templates' from the menu pane on the left side.

- Click the BLUE 'Add' drop-down icon and select 'Add job template' from the drop-down selection menu. Fill out the details as follows:

@@ -187,7 +190,7 @@ This step will allow us to scan a single RHEL 7 host with the ```PCI_Compliance`

- Click on the 'Full Report' button, under Actions, for 'node1.example.com' to see the report (This may take a few seconds). The Openscap Capsule field will reflect your workshop Satellite host.

-- Scroll down to the **Rule Overview** section. You can filter by "Pass", "Fail", "Fixed", or any number of qualifiers as well as group rules by "Severity"
+- Scroll down to the **Rule Overview** section. You can filter by "Pass", "Fail", "Fixed", or any number of qualifiers as well as group rules by "Severity".

![aap_arf](images/1-compliance-aap2-Satellite_ARF.png)

@@ -210,23 +213,26 @@ Click "Activate to reveal" arrow next to the 'Remediation Ansible snippet', whic

This step will expand our OpenSCAP policy scan to add another XCCDF compliance profile called ```STIG_Compliance```. We will also expand to include all systems in the 'RHEL7 Development' inventory by leaving the job run ```limit survey``` blank instead of specifying a single system.

-- In Satellite, hover over "Hosts" from the menu on the left side of the screen, and then click on "Policies".
+- In the Satellite UI, click on the 'Hosts' dropdown menu pane on the left, then click on the 'Compliance' dropdown, followed by clicking on 'Policies'.

-- Click on the "New Compliance Policy" button
+- Click on the "New Compliance Policy" button on the top right of the UI.

-- Select "Manual" from the deployment options and click "Next"
+- Select "Manual" from the deployment options and click "Next".
+
+> **NOTE:**
+> Remember, selecting the "Ansible" radio button here would utilize the Ansible engine built into Satellite to execute the automation for the scan. We are going to be utilizing Ansible Automation Platform (AAP) to automate the execution of the OpenSCAP client scan on the managed host, so selecting "Manual" for the SCAP policy provides a means to integrate AAP for the scan automation.

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP6.png)

-- Create the policy name "STIG_Compliance" and provide any description you like. Then click "Next"
+- Create the policy name "STIG_Compliance" and provide any description you like. Then click "Next".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP7.png)

-- Select the "Red Hat rhel7 default content" and "DISA STIG for Red Hat Enterprise Linux 7". There is no tailoring file. Then click "Next"
+- Select the "Red Hat rhel7 default content" and "DISA STIG for Red Hat Enterprise Linux 7". There is no tailoring file. Then click "Next".

![satellite_policy](images/1-compliance-aap2-Satellite_SCAP8.png)

-- It is necessary to set a schedule when creating a new compliance policy. You can select "Monthly" and "1" for Day of Month for the purposes of this exercise. Then click "Next"
+- It is necessary to set a schedule when creating a new compliance policy. You can select "Monthly" and "1" for Day of Month for the purposes of this exercise. Then click "Next".
![satellite_policy](images/1-compliance-aap2-Satellite_SCAP9.png)

@@ -240,7 +246,7 @@ This step will expand our OpenSCAP policy scan to add another XCCDF compliance p

- Now, we will update our OpenSCAP Scan job template in Ansible Automation Platform and run another PCI compliance scan, plus the STIG compliance scan.

- Navigate back to the Ansible Automation Platform UI and click 'Templates' from the left side pane menu
-- Select the OpenSCAP Scan job template, and click edit at the bottom of the template to modify the "Variables" section and add the ```STIG_Compliance``` policy to the ```policy_name``` list:
+- Find the `SATELLITE / Compliance - OpenSCAP Scan` job template and select it by clicking on its name. Next, click edit at the bottom of the template to modify the "Variables" section and add the ```STIG_Compliance``` policy to the ```policy_name``` list:

Variables (Keep the exact spacing provided below. Note that the extra-vars that we are supplying need to be

@@ -253,6 +259,8 @@ This step will expand our OpenSCAP policy scan to add another XCCDF compliance p

![aap_template](images/1-compliance-aap2-template2-fix.png)

+- Notice that we have listed the policy names `PCI_Compliance` and `STIG_Compliance` exactly as we named the policies in the Satellite UI. By configuring the `policy_name` variable in this format, we are providing a list of the policies to utilize each time we execute this job template.
+
- Leave the rest of the fields blank or as they are, and click 'Save'. You can then select 'Launch' to deploy the job template.

- On the survey, leave the Limit field empty, as we are going to target all instances in the inventory group and click Next. For "Select inventory group", leave the default selection for "RHEL7_Dev" and click Next. Review the entries on the launch Preview and notice scrolling down confirms the entries made during the survey. Click "Launch".

@@ -268,7 +276,7 @@ This step will expand our OpenSCAP policy scan to add another XCCDF compliance p

![aap_arf](images/1-compliance-aap2-Satellite_ARF-Final.png)

-- Each report can be reviewed independent of other node scans and remediations for rule findings can be completed according to the requirements of your own internal policies.
+- Each report can be reviewed independently of other node scans, and automation remediations for rule findings can be compiled according to the requirements of internal organizational policies.

#### 9\. End of Exercise

diff --git a/exercises/rhdp_auto_satellite/2-patching/README.md b/exercises/rhdp_auto_satellite/2-patching/README.md
index 185d874da..69324ee80 100644
--- a/exercises/rhdp_auto_satellite/2-patching/README.md
+++ b/exercises/rhdp_auto_satellite/2-patching/README.md
@@ -130,16 +130,16 @@ Save and exit the workflow template editor by clicking on "Save" on the top righ

#### 4\. Exploring the Satellite host configuration

-* In the Satellite UI on the left menu pane, hover over 'Hosts' and select 'Content Hosts'.
+* In the Satellite UI on the left menu pane, click on 'Hosts' and then select 'Content Hosts'.
* Observe the multiple security, bug fix, enhancements and package updates available for each server, which will vary depending on the date of when the workshop takes place.
* Further, take note of the life cycle environment: RHEL7_Dev.

![Satellite content hosts](images/2-patching-aap2-Satellite-contenthosts.png)

-* In the Satellite UI on the left menu pane, navigate to 'Content' and select 'Content Views'.
+* In the Satellite UI on the left menu pane, click on 'Content', followed by clicking on 'Lifecycle', and then select 'Content Views'.
* Since the servers that we are working with are RHEL7 select the 'RHEL7' content view.
* We may need to publish a new content view version, however, we set that up as part of our workflow!
- * Note: your content view version may differ from this example, that is OK
+ * Note: your content view version may differ from this example; that is OK.

![Satellite RHEL7 CV](images/2-patching-aap-Satellite-CV-RHEL7.png)

@@ -149,7 +149,7 @@ Save and exit the workflow template editor by clicking on "Save" on the top righ

#### 5\. Navigate back to Ansible Automation Platform and launch workflow job

-* Click on Templates to locate the 'SATELLITE / Patching Workflow' template.
+* Click on Templates and locate the 'SATELLITE / Patching Workflow' template.
* You can either click on the rocketship to the right of the template or select the template and select LAUNCH. (they do the same thing).
* Observe the job kicking off in Ansible.
* You need to wait for this workflow to complete before moving on to the next step.

@@ -161,7 +161,7 @@ Save and exit the workflow template editor by clicking on "Save" on the top righ

#### 6\. Navigate back to Satellite to examine automation effects

-* In the Satellite UI on the left menu pane, navigate to 'Content' then 'Content Views' and select RHEL7.
+* In the Satellite UI on the left menu pane, navigate to 'Content', then 'Lifecycle', then 'Content Views' and select RHEL7.
* Notice the new content view version.
* In the Satellite UI on the left menu pane, navigate to Hosts > All Hosts and select node1.example.com.
* Select the 'content' tab under Details.

@@ -172,8 +172,8 @@ Save and exit the workflow template editor by clicking on "Save" on the top righ

* You may notice that not all issues are remediated.
* This is to showcase that you can exclude updates based on type.
- * In this case we're not pushing out updates for kernel changes.
- * Of course this can be configurable through use of the exclude definition for ```ansible.builtin.yum``` module in the server_patch.yml playbook.
+ * In this case, we are not pushing out updates for kernel changes.
+ * Of course, this can be configured through use of the exclude definition for the ```ansible.builtin.yum``` module in the server_patch.yml playbook, as sketched below.
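+
+A minimal sketch of such a task, assuming patching is done with the ```ansible.builtin.yum``` module and its `exclude` option (the actual task in server_patch.yml may differ in naming and options):
+
+```yaml
+- name: Apply all available updates, excluding kernel packages
+  ansible.builtin.yum:
+    name: '*'          # update every installed package...
+    state: latest
+    exclude:
+      - kernel*        # ...except anything matching kernel*
+```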
![kernel patches excluded](images/2-patching-aap2-server-patching-kernel-exclude.png) diff --git a/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-dashboard.png b/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-dashboard.png index fe3481a72..c370a37b3 100644 Binary files a/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-dashboard.png and b/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-dashboard.png differ diff --git a/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-login.png b/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-login.png index 252aba1de..07239958c 100644 Binary files a/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-login.png and b/exercises/rhdp_auto_satellite/2-patching/images/2-patching-aap2-Satellite-login.png differ diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/README.md index 37b967eaa..2b4720ec4 100644 --- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/README.md +++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/README.md @@ -27,16 +27,16 @@ The workshop is provisioned with a pre-configured lab environment. You will have access to a host deployed with Ansible Automation Platform (AAP) which you will use to control the playbook and workflow jobs that automate the CentOS conversion workflow steps. You will also have access to three CentOS hosts. These are the hosts where we will be converting the CentOS operating system (OS) to Red Hat Enterprise Linux. -| Role | Inventory name | -| ------------------------------------| ---------------| -| Automation controller | ansible-1 | -| Satellite Server | satellite | -| Managed Host 1 - RHEL | node1 | -| Managed Host 2 - RHEL | node2 | -| Managed Host 3 - RHEL | node3 | -| Managed Host 4 - CentOS/OracleLinux | node4 | -| Managed Host 5 - CentOS/OracleLinux | node5 | -| Managed Host 6 - CentOS/OracleLinux | node6 | +| Role | Inventory name | +| ---------------------------------------| ---------------| +| Ansible Automation Platform controller | ansible-1 | +| Satellite Server | satellite | +| Managed Host 1 - RHEL | node1 | +| Managed Host 2 - RHEL | node2 | +| Managed Host 3 - RHEL | node3 | +| Managed Host 4 - CentOS/OracleLinux | node4 | +| Managed Host 5 - CentOS/OracleLinux | node5 | +| Managed Host 6 - CentOS/OracleLinux | node6 | ### Step 1 - Access the AAP Web UI @@ -48,7 +48,7 @@ The AAP Web UI is where we will go to submit and check the status of the Ansible - Enter the username `admin` and the password provided. This will bring you to your AAP Web UI dashboard like the example below: - ![Example AAP Web UI dashboard](images/aap_console_example.svg) + ![Example AAP Web UI dashboard](images/aap_console_example.png) - Let's use the AAP Web UI to make a couple of preparations for the exercise. First, let's ensure our CentOS nodes are up and running. In the AAP Web UI browser tab, navigate to Resources > Templates by clicking on "Templates" under the "Resources" group in the navigation menu. 
Browse the list of job templates and click on the template `EC2 / Instance action`:

diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.png b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.png
new file mode 100644
index 000000000..67493866e
Binary files /dev/null and b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.png differ
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.svg b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.svg
deleted file mode 100644
index c97421583..000000000
--- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/aap_console_example.svg
+++ /dev/null
@@ -1,2931 +0,0 @@
(deleted SVG markup omitted)
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/satellite_console_example.png b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/satellite_console_example.png
index 6ee3030e6..474947e4f 100644
Binary files a/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/satellite_console_example.png and b/exercises/rhdp_auto_satellite/3-convert2rhel/1.1-setup/images/satellite_console_example.png differ
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md
index 87158d76d..3e6bca4a4 100644
--- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md
+++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.2-three-tier-app/README.md
@@ -23,13 +23,13 @@

This use-case will focus on conversion from CentOS (though this could be another RHEL derivative) to RHEL while maintaining a three tier application stack (do no harm). We will utilize an additional project in Ansible Automation Platform, "Three Tier App / Prod", which will allow us to install a three tier application stack, consisting of HAProxy, Tomcat, and PostgreSQL, across the three CentOS nodes. Additionally, the project also provides a means to test/verify functionality of the application components, which we will perform before and after CentOS to RHEL conversions.

-| Role | Inventory name |
-| ---------------------------------------| ---------------|
-| Automation controller | ansible-1 |
-| Satellite Server | satellite |
-| CentOS/OracleLinux Host 4 - HAProxy | node4 |
-| CentOS/OracleLinux Host 5 - Tomcat | node5 |
-| CentOS/OracleLinux Host 6 - PostgreSQL | node6 |
+| Role | Inventory name |
+| ------------------------------------------| ---------------|
+| Ansible Automation Platform controller | ansible-1 |
+| Satellite Server | satellite |
+| CentOS/OracleLinux Host 4 - HAProxy | node4 |
+| CentOS/OracleLinux Host 5 - Tomcat | node5 |
+| CentOS/OracleLinux Host 6 - PostgreSQL | node6 |

| **A Note about using Satellite vs. Ansible Automation Platform for this...** |
| ------------- |

@@ -37,6 +37,11 @@ This use-case will focus on conversion from CentOS (though this could be another

### Step 1 - Set Instance Tags

+ > **Note**
+ >
+ > - This workshop is utilizing virtual machines/servers running on Amazon Web Services (AWS). [AWS nomenclature utilizes the term `instance`](https://aws.amazon.com/what-is/cloud-instances/) to designate a virtual machine/server.
+ > - [In AWS, a tag is a key-value pair applied to a resource to hold metadata about that resource](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/what-are-tags.html). Each tag is a label consisting of a key and an optional value. For example, if we wanted to designate a `name` tag for an instance named `node4.example.com` within AWS, we could assign key: `name`, value: `node4.example.com`. Utilizing tags is a suggested practice for grouping resources logically (not just in AWS, but in other hyperscaler clouds, and even in vSphere). We will see that when utilized properly, tags are a powerful means for constructing dynamic infrastructure inventories that can be used within Ansible Automation Platform to target the correct components in a hybrid cloud infrastructure.
+
- Return to the AAP Web UI browser tab you opened in step 3 of the previous exercise. Navigate to Resources > Templates by clicking on "Templates" under the "Resources" group in the navigation menu. This will bring up a list of job templates that can be used to run playbook jobs on target hosts:

![Job templates filtered list](images/set_instance_tags_01.png)

@@ -69,7 +74,7 @@ This use-case will focus on conversion from CentOS (though this could be another

### Step 2 - Update Ansible Inventory

-- Now that we have application tier group tags set, we can update the Ansible inventory to include inventory groups associated with our application stack tiers. However, before we do, let's review how the Ansible inventory update works behind the scenes.
+- Now that we have application tier group tags set for each instance, we can update the Ansible inventory to include inventory groups associated with our application stack tiers. However, before we do, let's review how the Ansible inventory update works behind the scenes.

![Controller inventories](images/update_controller_inventory_01.png)

@@ -89,11 +94,11 @@ This use-case will focus on conversion from CentOS (though this could be another

![Controller inventories keyed_groups](images/update_controller_inventory_inventory_filter.png)

-- Looking at the Source variables, first let's look at `filters` and `hostnames`. The `filters` section will allow definining which instances should be selected for inclusion within the given inventory. In this case, the tags `ContentView`, `Environment`, `Student`, and `guid` will be utilized...all instances with tags matching the current values defined for each tag, will be selected. The `hostnames` sections allows defining how names of filtered resources will be definined in the inventory. In this case, the value currently defined with tag `NodeName` will be utilized for the name within the inventory.
+- Looking at the Source variables, first let's look at `filters` and `hostnames`. The `filters` section allows defining which instances should be selected for inclusion within the given inventory. In this case, the tags `ContentView`, `Environment`, `Student`, and `guid` will be utilized...all instances with tags matching the current values defined for each tag will be selected.
The `hostnames` section allows defining how the names of filtered resources will be defined in the inventory. In this case, the value currently defined with tag `NodeName` will be utilized for the name populated within the `CentOS7 Development` inventory.

![Controller inventories keyed_groups](images/update_controller_inventory_05.png)

-- Scroll down the source variables section until you see "keyed_groups". [Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags. In this case, given the instances that are selected via the filters in the previous section, if any of these instances are currently tagged with "app_stack_name" and "AnsibleGroup" tags, then it will create an inventory group with the name beginning with the value assigned to the "app_stack_name" tag, an "_" (underscore) and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to `stack02` and the "AnsibleGroup" tag is set to `appdbs`, then the inventory group `stack02_appdbs` will be created (or confirmed if already existing) and that instance will be assigned to the `stack02_appdbs` group.
+- Scroll down the source variables section until you see "keyed_groups". [Keyed groups](https://docs.ansible.com/ansible/latest/plugins/inventory.html#:~:text=with%20the%20constructed-,keyed_groups,-option.%20The%20option) are where you can define dynamic inventory groups based on instance tags. In this case, given the instances that are selected via the filters in the previous section, if any of these instances are currently tagged with "app_stack_name" and "AnsibleGroup" tags, then an inventory group will be created whose name begins with the value assigned to the "app_stack_name" tag, followed by an "_" (underscore), and then the value assigned to the "AnsibleGroup" tag...so in this case, if the "app_stack_name" tag is currently set to `stack02` and the "AnsibleGroup" tag is set to `appdbs`, then the inventory group `stack02_appdbs` will be created (or confirmed if already existing) and that instance will be assigned to the `stack02_appdbs` inventory group (a condensed sketch of this inventory source configuration appears below).

- Click on "Done" in the Source variables expanded view.

@@ -169,7 +174,7 @@ This use-case will focus on conversion from CentOS (though this could be another

![EC2 Instance Action Launch](images/ec2_instance_action_launch.png)

-- On the `Launch | EC2 / Instance action - Preview` dialog, review and then click **Launch**. Once the `EC2 / Instance action` job has completed, our CentOS7 instances will be started and available for installing the three tier application stack.
+- On the `Launch | EC2 / Instance action - Preview` dialog, review and then click **Launch**. Once the `EC2 / Instance action` job has completed, our CentOS7 instances will be started (or confirmed as started if they are already up and running) and available for installing the three tier application stack.

- In the AAP Web UI, navigate to Resources > Templates by clicking on "Templates" under the "Resources" group in the navigation menu. This will bring up a list of job templates.
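+
+ > **Note**
+ >
+ > Pulling together the pieces reviewed in Step 2, a condensed sketch of an `amazon.aws.aws_ec2` inventory source combining `filters`, `hostnames`, and `keyed_groups` might look like the following. The tag values shown here are illustrative placeholders, not the workshop's actual values:
+
+```yaml
+plugin: amazon.aws.aws_ec2
+filters:
+  # select only instances whose tags match the current deployment (illustrative values)
+  tag:Student: student1
+  tag:guid: abcde
+hostnames:
+  # name each selected instance in the inventory after its NodeName tag
+  - tag:NodeName
+keyed_groups:
+  # build groups like stack02_appdbs from the app_stack_name and AnsibleGroup tags
+  - key: tags.app_stack_name + '_' + tags.AnsibleGroup
+    prefix: ''
+    separator: ''
+```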
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md
index 4f266142d..82c1b2d38 100644
--- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md
+++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.3-analysis/README.md
@@ -28,7 +28,7 @@

### Step 1 - CentOS Conversion Automation Workflow

-Red Hat provides the Convert2RHEL utility, a tool to convert RHEL-like systems to their RHEL counterparts. The [Convert2RHEL documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/converting_from_an_rpm-based_linux_distribution_to_rhel/index) guides users on how to use the Convert2RHEL utility to manually convert a RHEL host. This is fine if there only a few CentOS hosts to convert, but what if you are a large enterprise with tens, hundreds, or even thousands of CentOS hosts? The manual process does not scale. Using automation, the end-to-end process for converting a RHEL host is reduced to a matter of days and the total downtime required for the actual conversion is measured in hours or less.
+Red Hat provides the Convert2RHEL utility, a tool to convert RHEL-like systems to their RHEL counterparts. The [Convert2RHEL documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/converting_from_an_rpm-based_linux_distribution_to_rhel/index) guides users on how to utilize the Convert2RHEL utility to manually convert a host to RHEL. This is fine if there are only a few CentOS hosts to convert, but what if you are a large enterprise with tens, hundreds, or even thousands of CentOS hosts? The manual process does not scale. Using automation, the end-to-end process for converting a host to RHEL is reduced to a matter of days and the total downtime required for the actual conversion is measured in hours or less.

Our CentOS conversion automation approach follows a workflow with three phases:

@@ -49,7 +49,7 @@ During the analysis phase, theoretically, no changes should be made to the syste

#### Convert

-After the analysis phase is complete and the report indicates acceptable risk, a maintenance window can be scheduled and the conversion phase can begin. It is during this phase that the conversion playbooks are executed using a workflow job template. The first playbook creates a snapshot that can be used for rolling back if anything goes wrong with the conversion. After the snapshot is created, the second playbook uses the [convert role from the infra.convert2rhel Ansible collection](https://github.com/redhat-cop/infra.convert2rhel/tree/main/roles/convert), where the automation is a convenience wrapper around the Convert2RHEL utility, to perform the operation where the CentOS host is converted to RHEL. The host should not be accessed via login or application access during the conversion, unless working through remediation development activities. When the conversion is finished, the host will reboot under the newly converted RHEL system. Now the ops and app teams can assess if the conversion was successful by verifying all application services are working as expected.
+After the analysis phase is complete and the report indicates acceptable risk, a maintenance window can be scheduled and the conversion phase can begin. It is during this phase that the conversion playbooks are executed using a workflow job template. The first playbook creates a snapshot that can be used for rolling back if anything goes wrong with the conversion.
After the snapshot is created, the second playbook utilizes the [convert role from the infra.convert2rhel Ansible collection](https://github.com/redhat-cop/infra.convert2rhel/tree/main/roles/convert), where the automation is a convenience wrapper around the Convert2RHEL utility, to perform the actual conversion of the CentOS host to RHEL. The host should not be accessed via login or application access during the conversion, unless working through remediation development activities. When the conversion is finished, the host will reboot under the newly converted RHEL system. Now the ops and app teams can assess if the conversion was successful by verifying all application services are working as expected.

#### Commit

@@ -116,17 +116,19 @@ If you would like to review the content view configuration that we will be utili

>
> A composite content view in Satellite is a content view that is composed of other content views, typically multiple content views.

+- Click on the `CentOS7_to_RHEL7` composite content view.
+
- On the `CentOS7_to_RHEL7` content view page, click on the `Content views` tab.

![CentOS7_to_RHEL7 Content View](images/composite_content_view_for_convert2rhel.png)

-- Notice that the CentOS7 and RHEL7 content views have been added to the CentOS7_to_RHEL7 composite content view. Click on either of the CentOS7 or RHEL7 content views and then the `Repositories` tab in each view.
+- Notice that the CentOS7 and RHEL7 content views have been added to the CentOS7_to_RHEL7 composite content view (the versions and environments depicted may vary in your workshop deployment; this is OK). Click on either of the CentOS7 or RHEL7 content views and then the `Repositories` tab in each view.

![CentOS7 Content View Repositories](images/composite_content_view_centos_repos.png)

![RHEL7 Content View Repositories](images/composite_content_view_rhel_repos.png)

-Currently, our CentOS7 nodes are configured to utilize the `CentOS7` content view, with associated `CentOS7_Dev` lifecycle environment. We will now change our CentOS7 nodes to instead consume the `CentOS7_to_RHEL7` composite content view via the associated `CentOS7_to_RHEL7_Dev` lifecycle environment during the conversion process.
+Currently, our CentOS7 nodes are configured to utilize the `CentOS7` content view, with the associated `CentOS7_Dev` lifecycle environment, for software packages. We will now change our CentOS7 nodes to instead consume the `CentOS7_to_RHEL7` composite content view via the associated `CentOS7_to_RHEL7_Dev` lifecycle environment for access to the requisite software packages during the conversion process.

- Return to the AAP Web UI browser tab and navigate to Resources > Templates by clicking on "Templates" under the "Resources" group in the navigation panel and click on `SATELLITE / Change content source for content host`.

![Satellite Change content source for content host](images/content_host_template_launch.png)

-- On the survey dialog, for `Select inventory group`, select `CentOS7_Dev` from the drop down. Leave the specific content hosts limit field blank. for `Select target lifecycle environment for the content host`, select `CentOS7_to_RHEL7_Dev`, then click "Next".
+- On the survey dialog, for `Select inventory group`, select `CentOS7_Dev` from the drop down. Leave the specific content hosts limit field blank.
For `Select target lifecycle environment for the content host`, select `CentOS7_to_RHEL7_Dev`, then click "Next".

![Satellite Change content source for content host survey](images/content_host_template_survey.png)

@@ -182,7 +184,11 @@ The first step in converting our three tier app hosts will be executing the anal

![Analysis job survey prompt on AAP Web UI](images/analysis_survey_prompt.png)

-- For this workflow job template, the survey allows for choosing a group of hosts on which the workflow will execute against. For `Select EL Group to analyze` choose `CentOS_Dev` from the drop-down and click the "Next" button. This will bring you to a preview of the selected job options and variable settings.
+- For this workflow job template, the survey allows for choosing an Ansible inventory group of hosts that the workflow will execute against. For `Select EL Group to analyze`, choose `CentOS_Dev` from the drop-down and click the "Next" button. This will bring you to a preview of the selected job options and variable settings.
+
+ > **Note**
+ >
+ > While we did change the lifecycle environment that the CentOS nodes are assigned to in Satellite to `CentOS7_to_RHEL7_Dev`, this was only for selecting the requisite software package repositories for the conversion process. We are still utilizing the `CentOS_Dev` inventory group in the Ansible Automation Platform inventory for specifying the proper instances to launch conversion automation against.

![Analysis job preview on AAP Web UI](images/analysis_preview.png)

@@ -251,7 +257,7 @@ Can you find the upstream source repo and playbook code?

```
By checking the `collections/requirements.yml` file in the `redhat-partner-tech/automated-satellite` git repo, we can discover that this role comes from another git repo at [https://github.com/heatmiser/infra.convert2rhel](https://github.com/heatmiser/infra.convert2rhel). It is the `analysis` role under this second git repo that provides all the automation tasks that ultimately run the Convert2RHEL analysis scan and generate the report.

- *NOTE* We are utilizing a fork of the upstream infra.convert2rhel Ansible collection [https://github.com/redhat-cop/infra.convert2rhel](https://github.com/redhat-cop/infra.convert2rhel). Because the upstream collections is a fast moving project, we utilize a fork where we can closely manage the state of the code base to ensure optimal stability for the lab/workshop/demo environment.
+ > **NOTE:** We are utilizing a fork of the upstream infra.convert2rhel Ansible collection [https://github.com/redhat-cop/infra.convert2rhel](https://github.com/redhat-cop/infra.convert2rhel). Because the upstream collection is a fast-moving project, we utilize a fork where we can closely manage the state of the code base to ensure optimal stability for the lab/workshop/demo environment.

- In a new browser tab/instance, open the [https://github.com/heatmiser/infra.convert2rhel](https://github.com/heatmiser/infra.convert2rhel) URL. Drill down to the `roles/analysis` directory in this git repo to review the README and yaml source files.
diff --git a/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md b/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md
index be21b50c5..fb744f978 100644
--- a/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md
+++ b/exercises/rhdp_auto_satellite/3-convert2rhel/1.4-report/README.md
@@ -48,7 +48,7 @@ For this workshop, we will be using the CentOS Web Console to access the Convert

![Remote host menu listing all workbench app servers](images/remote_host_menu_with_pets.png)

-- You can use the remote host menu to navigate to the web consoles of each of your CentOS 7 app servers. Try selecting one of your CentOS 7 app servers now. The Web Console system overview page will show the operating system version installed. For example, we can see node4 is confirmed as running CentOS 7:
+- You can use the remote host menu to navigate to the web consoles of each of your CentOS 7 app servers. Try selecting one of your CentOS 7 app servers now (node4, node5, or node6). The Web Console system overview page will show the operating system version installed. For example, we can see node4 is confirmed as running CentOS 7:

![node4 running CentOS Linux 7 (Core)](images/centos7_os.png)

@@ -58,7 +58,7 @@ For this workshop, we will be using the CentOS Web Console to access the Convert

![Web console is running in limited access mode](images/limited_access.svg)

- If you see this, use the button to switch to administrative access mode before proceeding. A confirmation will appear like this:
+ **If** you see this, use the button to switch to administrative access mode before proceeding. A confirmation will appear like this:

![You now have administrative access](images/administrative_access.svg)

@@ -104,7 +104,7 @@ less /var/log/convert2rhel/convert2rhel-pre-conversion.txt

### Challenge Lab: What if we were to experience warnings we are unsure of?

-You may be wondering: what if there are many warning issues listed in the report? Why would we be going forward with attempting a conversion without first resolving all the findings on the report? It's a fair question.
+You may be wondering: what if there are many warning issues listed in the report, beyond those mentioned above? Why would we be going forward with attempting a conversion without first resolving all the findings on the report? It's a fair question.

> **Tip**
>

Of course, the answer is our automated snapshot/rollback capability.

## Conclusion

-In this exercise, we learned about the different options for managing Convert2RHEL pre-conversion analysis reports. We used the CentOS Web Console to look at the reports we generated in the previous exercise. In the challenge lab, we reviewed the importance of snapshots and learned to embrace failure.
+In this exercise, we learned about the different options for managing Convert2RHEL pre-conversion analysis reports. We used the CentOS Web Console to look at the reports we generated in the previous exercise. In the challenge lab, we reviewed the importance of snapshots and learned to embrace potential failures, treating the lessons learned from failed conversions as inputs to improved conversion remediations and automation.

---