linux_wiki:spacewalk [2015/12/30 14:58] billdozor [Jabber Database Cleanup Script]
linux_wiki:spacewalk [2019/05/25 23:50]

====== Spacewalk ======

**General Information**

Spacewalk is a centralized system update and config server.\\
Official Site: https://

**Checklist**
  * Spacewalk server installed

----

====== Spacecmd ======

Spacecmd is the command line interface to Spacewalk.\\
Details here: [[https://

----

====== Register System with Spacewalk ======

A [[linux_wiki:

----

===== Re-Register =====

If you need to re-register a client for any reason, you need the "--force" option.

  * Delete the system from Spacewalk
  * Register the system with the --force option<code bash>
sw_activation_key="
rhnreg_ks --force --serverUrl=https://
</code>

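The activation key and server URL above were cut off in this revision of the page. A full invocation would have the shape sketched below; the server name, key value, and CA cert path are hypothetical placeholders (the path shown is the usual client-side CA location), and the command is only echoed as a dry run rather than executed:

```shell
# Hypothetical placeholder values -- substitute your own server and key.
sw_activation_key="1-dev-key"
sw_server="spacewalk.example.com"

# Assemble the re-registration command; echoed here as a dry run.
# Remove the echo to actually re-register a client. The --sslCACert path
# is the usual client CA location; adjust if yours differs.
cmd="rhnreg_ks --force --serverUrl=https://${sw_server}/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=${sw_activation_key}"
echo "${cmd}"
```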
----

====== rhn_check ======

By default, a system checks into Spacewalk via rhn_check every 4 hours.

If systems are not picking up scheduled actions from the Spacewalk portal in a timely manner via the osad service (such as a config deploy, package upgrade, etc.), you can force a group of systems to check in by running the "rhn_check" command.

To loop through a group of systems and have them check in:

Example: Loop through the dev system group and have them check in
<code bash>
for NODE in $(spacecmd group_listsystems dev); do echo "${NODE}"; ssh ${NODE} "rhn_check"; done
</code>

----

====== Channel Management ======

About Channels:
  * Systems are subscribed to "channels"
  * Channel subscriptions can be changed at any time in the Spacewalk portal, across any number of systems.
  * Channels have Repositories assigned to them
    * This allows a single repo to back multiple channels
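On the spacecmd side, channel subscriptions can be inspected from the command line. The snippet below is an echo dry run so it is runnable anywhere; `centos6_x86-64_base` is the example channel label used later on this page, and the subcommand names should be verified against `spacecmd help` for your version:

```shell
# Dry-run sketch: spacecmd subcommands for inspecting channels.
# "centos6_x86-64_base" is the example channel label used elsewhere on
# this page. Drop the echo loop and run the commands on a real server.
channel="centos6_x86-64_base"

for cmd in \
    "spacecmd softwarechannel_list" \
    "spacecmd softwarechannel_listsystems ${channel}"
do
    echo "${cmd}"
done
```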

----

===== Channel Freezing/Cloning =====

In order to facilitate the same updates being applied to the Development and later environments in stages, the base Channel tree is cloned.\\
This creates a "frozen" copy of the Channels at a point in time.\\
**Note: This copies metadata of the Channel and does not duplicate repo packages**

To Clone an entire Channel tree:
  * Login to a system with spacecmd installed
  * Clone the original base tree to a "frozen" copy

  * Clone can be performed with spacecmd in batch or interactive mode:
    * Batch Clone Example => Clone the CentOS 6 tree, giving it the prefix "ss-20151215_"
      * The above will clone the entire tree (base and child channels), give the shown prefix, copy GPG data, and copy errata data.
    * Interactive Clone Example<code>
Source Channels:
centos6_x86-64_base
centos7_x86-64_base

Select source channel: centos6_x86-64_base
Prefix: ss-20151215_

Copy source channel GPG details? [y/N]: y

Original State (No Errata) [y/N]: N</code>

----

===== Errata Setup =====

As of 12/15/2015, CentOS does not generate an "errata" feed.

For a workaround, use a script to scrape the CentOS mailing list archives for the errata.

  * GitHub project: https://
    * This is a bash-based project that is a wrapper for the perl-based project, making it easy to implement.
  * Original perl project'
  * Original perl project'

The project installs the following files:
  * Main Dir: /
  * com.redhat.rhsa-all.xml => File downloaded by the "
  * errata-import.pl => main perl script that does the work
  * errata.latest.xml => File downloaded by the "
  * **errata-sync.sh** => Configuration file and parent script that launches "errata-import.pl"
    * **Edit this file to make login credential changes or to include other channels for inclusion in errata scanning.**
  * install.sh => Downloads the latest "
  * Cron Job installed to:
    * /<code>
00 01 * * * root /bin/bash /
</code>

----

====== Config Management ======

A system is automatically subscribed to the proper configuration channels when it is registered via its Activation Key.
  * Configuration is NOT pushed to the system automatically.
  * The config files can be deployed while on the client system, or pushed to the client using the Spacewalk server portal or spacecmd.

----

===== Compare Configs =====

To compare the centrally managed files to a system's local files:
  * Login to the Spacewalk Web Portal
  * Find the target system via one of these methods:
    * Searching in the top right
    * Browsing all systems by clicking "Systems"
    * Browsing system groups
  * Click the system's name
  * On the system's Overview page, click on the "Configuration" tab
  * On the right under "
  * Click "
  * Refresh the Configuration Overview page or click on the system's "
  * On the system's Configuration > Overview page, at the bottom under "
    * Click the "View Details"
  * Under the Config Files list, click on the "

----

===== Download (Pull) Configs =====

The various ways to download config files while on the client system.

Download all config files, from all subscribed config channels
<code bash>
rhncfg-client get
</code>

Download a specific managed config file
<code bash>
rhncfg-client get /
</code>

Download all config files from a specific Config Channel ID
<code bash>
for FILE in $(rhncfg-client list | awk /
</code>
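The loop above was truncated in this revision of the page. As a hedged sketch of the same idea, the snippet below filters `rhncfg-client list`-style output for a channel label and fetches each matching file. The stub function, its column layout, and the `my_channel` label are hypothetical stand-ins; real `rhncfg-client list` output differs, so adjust the awk pattern and field accordingly:

```shell
# Hypothetical stand-in for `rhncfg-client list` output; the real output
# format differs, so adjust the awk pattern/field number to match.
rhncfg_client_list_stub() {
    printf '%s\n' \
        'my_channel     /etc/motd' \
        'other_channel  /etc/issue'
}

# Fetch every file belonging to the "my_channel" config channel.
# The leading echo makes this a dry run; drop it to download for real.
for FILE in $(rhncfg_client_list_stub | awk '/my_channel/ {print $2}'); do
    echo rhncfg-client get "${FILE}"
done
```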

----

===== Deploy (Push) Configs =====

To deploy configs from the server to a client.

==== Portal Deploy ====

  * Login to the Spacewalk Web Portal.
  * At the top, click on the "Systems" tab.
  * Find the target system via one of these methods:
    * Searching in the top right
    * Browsing all systems by clicking "Systems"
    * Browsing system groups
  * Click the system's name
  * On the system's Overview page, click on the "Configuration" tab
  * Click "
  * Check the config files to deploy, then click "
  * On the "

==== Spacecmd Deploy ====

List config channels a system is subscribed to
<code bash>
spacecmd system_listconfigchannels <SYSTEMS>
</code>

List config files that a system is subscribed to
<code bash>
spacecmd system_listconfigfiles <SYSTEMS>
</code>

Deploy all of those config files
<code bash>
spacecmd system_deployconfigfiles <SYSTEMS>
</code>
  * <SYSTEMS> can be:
    * a single system name
    * multiple system names, space separated
    * "ssm" => use the systems currently loaded in the System Set Manager

----

===== Create a Local Managed File Override =====

Some systems will need to have different config files than the centrally managed ones.
\\

To create exceptions, or local managed overrides:
  * Login to the Spacewalk Web Portal.
  * Find the system (Systems tab at the top)
  * Click on the system name to go to its Overview page.

On the system's Overview page:
  * Click the "Configuration" tab
  * To the right of the file name to override, click "
  * Click "
  * Check the file to override that exists on the system, click "
  * After the file has been successfully imported, click "
  * Check the file > click "Copy Latest to System Channel"
  * The file will now show up under "

----

====== Server Services ======

Normal Status of Spacewalk Services
<code bash>
spacewalk-service status

postmaster (pid 29875) is running...
router (pid 31614) is running...
sm (pid 31622) is running...
c2s (pid 31630) is running...
s2s (pid 31638) is running...
tomcat6 (pid 29992) is running...
httpd (pid 30115) is running...
osa-dispatcher (pid 31659) is running...
rhn-search is running (30168).
cobblerd (pid 30204) is running...
RHN Taskomatic is running (30236).
</code>

----

===== osa-dispatcher dead but pid file exists =====

If osa-dispatcher shows the following:
<code bash>
service osa-dispatcher status

osa-dispatcher dead but pid file exists
</code>

And the following error messages are in its log file:
<code bash>
tail /

2015/11/03 07:38:05 -05:00 30144 0.0.0.0: osad/
2015/11/03 07:38:05 -05:00 30144 0.0.0.0: osad/
2015/11/03 07:38:05 -05:00 30144 0.0.0.0: osad/
</code>

Fix this by stopping jabberd and osa-dispatcher (osa-dispatcher will probably show "
<code bash>
service jabberd stop
service osa-dispatcher stop
</code>

Remove jabberd database files:
<code bash>
rm -rf /
</code>

Start jabberd and osa-dispatcher
<code bash>
service jabberd start
service osa-dispatcher start
</code>

Logs should now show the "
<code bash>
tail /

2015/11/03 08:19:43 -05:00 31657 0.0.0.0: osad/
2015/11/03 08:19:43 -05:00 31657 0.0.0.0: osad/
2015/11/03 08:19:43 -05:00 31657 0.0.0.0: osad/
2015/11/03 08:19:43 -05:00 31657 0.0.0.0: osad/
</code>

**Warning**
  * After recovering the jabberd database in this way, the osad clients on each system need to re-establish a connection. This is done by stopping the osad service on the clients, removing the osad-auth.conf file, and starting osad again.
  * From a system that has spacecmd installed:<code bash>
for NODE in $(spacecmd system_list); do ssh ${NODE} "service osad stop; rm -f /etc/sysconfig/rhn/osad-auth.conf; service osad start"; done
</code>

----

===== Jabber Database Cleanup Script =====

A useful cron job that executes weekly to clean up the jabber database.

/
<code bash>
# Clean up jabber database logs weekly

# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed
00 00 * * sun root /
</code>

/
<code bash>
#!/bin/bash
###############################################################################
# Name: jabberdb_cleanup-logs
# Description:
###############################################################################

echo -e "
echo -e "==== Jabber Database Log Clean ====" 
echo -e "

echo -e "
sudo -u jabber db_checkpoint -1 -h /

echo -e "
db_archive -a -h /

echo -e "
db_archive -d -h /
db_archive_status=$?

if [[ ${db_archive_status} -eq 0 ]]; then
    echo -e "
else
    echo -e "
fi
</code>

  * **Note**: This requires that /<code>
#
</code>

----

===== Jabberd Timeout Tuning =====

Jabber osad clients were not checking in until the following server timeout changes were made:

Set jabberd server timeout intervals
<code bash>
sed -i '
sed -i '
sed -i '
</code>
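The three `sed` expressions above were truncated in this revision, so the exact keys and target file are unknown. The snippet below only illustrates the in-place edit pattern on a scratch jabberd-style XML file; the `<keepalive>` and `<idle-timeout>` element names and values are hypothetical:

```shell
# Illustration only: the real jabberd config keys/file were lost from
# this page. Edit hypothetical timeout values in place with GNU sed -i.
cfg="$(mktemp)"
printf '%s\n' '<keepalive>0</keepalive>' '<idle-timeout>0</idle-timeout>' > "${cfg}"

sed -i 's|<keepalive>0</keepalive>|<keepalive>60</keepalive>|' "${cfg}"
sed -i 's|<idle-timeout>0</idle-timeout>|<idle-timeout>300</idle-timeout>|' "${cfg}"

cat "${cfg}"
```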

Restart the Spacewalk services
<code bash>
spacewalk-service restart
</code>

Clear out the jabberdb
<code bash>
/
</code>

Re-establish osad client connections
<code bash>
for NODE in $(spacecmd system_list); do ssh ${NODE} "service osad stop; rm -f /etc/sysconfig/rhn/osad-auth.conf; service osad start"; done
</code>

----

====== Spacewalk SSL Certificates ======

Updating the SSL Certificates on the Spacewalk server is more complex than just updating Apache, as the SSL certs are used for:
  * Spacewalk Portal (Apache httpd server)
  * Jabber local daemon component communication
  * Jabber Spacewalk client to Spacewalk server communication

Using the following RPM method will allow you to update all applications correctly at the same time.

**Before manipulating either the client or CA cert**
  * SSH to the Spacewalk server and switch to root
  * Backup the current ssl-build directory (if it already exists)
    * <code bash>cp -R /</code>

----

===== Client Certificate =====

Client Certificate locations:
  * /
  * /
  * /

Client Certificate Update Procedure
  * Order certificate renewal from certificate provider
  * Download certificate,
  * SSH to the Spacewalk server and switch to root
  * Copy the current CA cert in use to the ssl-build directory
    * <code bash>cp /</code>
  * Copy the NEW client certificate into ssl-build/
    * <code bash>cp server.crt /</code>
  * Copy the existing client key and CSR into ssl-build/
    * <code bash>cp /
cp /</code>
  * Verify that the NEW client cert will work with the CA cert
  * Generate the new client cert RPM
  * Remove the old SSL key pair package
    * <code bash>rpm -e rhn-org-httpd-ssl-key-pair-my-spacewalk-server-1.0-1.noarch</code>
  * Install the new SSL key pair package
    * <code bash>rpm -ivh /</code>
  * Stop Spacewalk services, clear jabberd's database, start services
    * <code bash>spacewalk-service stop
rm -rf /
spacewalk-service start</code>
  * Force an OSAD client re-authentication on each client<code bash>
for NODE in $(spacecmd system_list); do ssh ${NODE} "service osad stop; rm -f /etc/sysconfig/rhn/osad-auth.conf; service osad start"; done
</code>
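The verification step above lost its command in this revision. One standard way to check a server certificate against a CA cert is `openssl verify`; the sketch below generates a throwaway CA and server cert (all subject names hypothetical) purely to demonstrate the check, which is the same shape you would run against server.crt and the Spacewalk CA chain:

```shell
# Demo only: build a throwaway CA and a server cert signed by it, then
# verify the pair. All subject names are hypothetical placeholders.
cd "$(mktemp -d)"

# Throwaway CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Test CA" \
    -keyout ca.key -out ca.crt 2>/dev/null

# Server key + CSR, signed by the throwaway CA
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=spacewalk.example.com" \
    -keyout server.key -out server.csr 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 1 -out server.crt 2>/dev/null

# The actual check: prints "server.crt: OK" and exits 0 on success
openssl verify -CAfile ca.crt server.crt
```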

----

===== CA Certificate =====

CA Chain Certificate locations
  * RPM build location: /
  * Locally installed location: /
  * Publicly available for clients to download: /
  * Also packaged in: /

Updating the CA certificate will not have to be done very often; only when:
  * The CA cert expires
  * You change certificate providers

**WARNING**
  * Updating the CA certificate on the Spacewalk server will break all communication between the server and the clients.
  * Each client will need to update to the new CA cert individually before communication can be restored.

CA Certificate Update Procedure
  * Download the new single .pem file containing all the certs from the certificate provider.
  * Copy the PEM file to the Spacewalk server
  * SSH to the Spacewalk server and switch to root
  * Cat/view the contents of the PEM file
    * The top BEGIN/END block is the client cert (server.crt)
    * The rest is the certificate chain
      * Copy this into a new file; "
  * Copy into the ssl-build directory
  * Verify the CA cert with the server cert
  * Generate the CA chain RPM
  * Copy the new CA chain cert and RPM into Spacewalk's public directory
    * <code bash>cp /
cp ssl-build/</code>
  * Install the new CA chain cert on the Spacewalk server
    * <code bash>rpm -ivh /</code>
  * Update the database
  * Stop the Spacewalk services, clear the jabberd scratch database, start services
    * <code bash>spacewalk-service stop
rm -rf /
spacewalk-service start</code>
  * **Login to each client and update the CA chain**
    * <code bash>rpm -ivh https://</code>
    * Each client will have no communication with the Spacewalk server until this is complete.
  * Force an OSAD client re-authentication on each client<code bash>
for NODE in $(spacecmd system_list); do ssh ${NODE} "service osad stop; rm -f /etc/sysconfig/rhn/osad-auth.conf; service osad start"; done
</code>
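When inspecting the provider's PEM bundle, `openssl x509` can print each block's subject and validity dates, which helps tell the leaf cert (server.crt) from the chain. In this sketch a self-signed throwaway cert (hypothetical name) stands in for one block of the bundle:

```shell
# Demo: print subject and validity dates for one certificate, as you
# would when deciding which PEM block is server.crt. The cert here is a
# hypothetical self-signed stand-in generated on the fly.
pem="$(mktemp)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=spacewalk.example.com" \
    -keyout /dev/null -out "${pem}" 2>/dev/null

openssl x509 -in "${pem}" -noout -subject -dates
```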

----