Developer(s) | Michael DeHaan |
---|---|
Initial release | February 20th, 2012 |
Stable release | 1.6.3 / 9 June 2014 |
Written in | Python |
Operating system | GNU/Linux, Unix-like |
Type | Configuration management, Orchestration engine |
License | GNU General Public License |
Website | http://www.ansible.com/ |
Initially Ansible was commercially supported and sponsored by Ansible, Inc. It was named by DeHaan after the fictional instantaneous hyperspace communication system featured in Orson Scott Card's Ender's Game, and originally invented by Ursula K. Le Guin for her 1966 novel Rocannon's World. This software application was created by Michael DeHaan, the author of the provisioning server application Cobbler and co-author of the Func framework for remote administration.
Red Hat acquired Ansible, Inc. in October 2015 and actively maintains the project. As of this writing, the latest version is 2.3.0 (March 15, 2017).
Ansible is written in Python and requires Python on all managed nodes. Nodes are managed by a controlling machine over SSH.
In its simplest form, a plain list of hostnames (in the style of /etc/hosts) can serve as an inventory, much as in many parallel execution tools such as Pdsh.
From a technology standpoint, Ansible looks a lot like JCL on a new level, with some bells and whistles such as parallel execution of scripts, called playbooks, on multiple nodes. Programs are executed in special wrappers, called modules. Ansible deploys modules to nodes over SSH. Modules are temporarily stored in the nodes and communicate with the controlling machine through a JSON protocol over the standard output. When Ansible is not managing nodes, it does not consume resources because no daemons or programs are executing for Ansible in the background.
While the supposed design goals of Ansible (minimalism, consistency, security, reliability -- see the list further below) may have been admirable, they were violated in the process of development, and modern Ansible is a very complex tool, unless its use is limited to the pdsh-like role.
As a pdsh replacement, Ansible can work both as a parallel execution tool and as a parallel scp command (in Ansible terminology these are called "ad hoc" operations):
ansible atlanta -a "/sbin/reboot"
ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"
ansible webservers -m file -a "dest=/srv/foo/b.txt mode=600 owner=joeuser group=joeuser"
This is probably overkill, as simpler tools that do the same more efficiently exist (and Pdsh is just one example; see Parallel command execution), but the sysadmin field is driven by fashion more than logic ;-)
Much like JCL, Ansible is perfectly suitable for performing installation tasks on multiple nodes. It provides some unified reporting and checks after each step, which can be useful and, while trivial to implement, simplify the code.
Control node: the host on which Ansible is installed and from which you use Ansible to execute tasks on the managed nodes
Managed node: a host that is configured by the control node
Host inventory: a list of managed nodes
Ad-hoc command: a simple one-off task
Playbook: a set of repeatable tasks for more complex configurations
Module: code that performs a particular common task such as adding a user, installing a package, etc.
Idempotency: an operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions
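To see idempotency in practice, run the same ad hoc command twice (the target path below is just an illustration):

$ ansible all -m file -a "dest=/tmp/demo state=directory"

The first run reports changed: true because the directory is created; the second run reports changed: false because the system is already in the desired state.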
You can use the root account, but typically a special user (for example, ansible) is created on all nodes and configured for privilege escalation via the following entry in the /etc/sudoers file:
%wheel ALL=(ALL) NOPASSWD: ALL
The default host inventory file is /etc/ansible/hosts, but the location can be changed via the Ansible configuration file or by using the -i option on the ansible command.
For example
[webservers]
vm2
vm3

[dbservers]
vm4

[logservers]
vm5

[lamp:children]
webservers
dbservers
Let’s confirm that all hosts can be located using this configuration file:
[curtis@vm1 ~]$ ansible all --list-hosts
  hosts (4):
    vm5
    vm2
    vm3
    vm4
Similarly for individual groups, such as the webservers group:
[curtis@vm1 ~]$ ansible webservers --list-hosts
  hosts (2):
    vm2
    vm3
Now that we have validated our host inventory, let’s do a quick check to make sure all our hosts are up and running. We will do this using an ad-hoc command that uses the ping module:
[curtis@vm1 ~]$ ansible all -m ping
vm4 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
vm5 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
vm3 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
vm2 | SUCCESS => {
    "changed": false,
    "failed": false,
    "ping": "pong"
}
We can see from the above output that all systems returned a successful result, nothing changed, and the result of each "ping" was "pong".
You can obtain a list of available modules using:

$ ansible-doc -l

To do a "dry run" of a playbook without making any changes, use check mode:

$ ansible-playbook --check myplaybook.yml
Stepping through a playbook may also be useful:
$ ansible-playbook --step myplaybook.yml
Similar to a shell script, you can make your Ansible playbook executable and add the following to the top of the file:
#!/bin/ansible-playbook
To execute arbitrary ad-hoc shell commands, use the command module (the default module if -m is not specified). If you need to use things like redirection, pipelines, etc., then use the shell module instead.
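For instance (the host group and commands are illustrative):

$ ansible webservers -m command -a "uptime"
$ ansible webservers -m shell -a "dmesg | grep -i error >> /tmp/kernel-errors.txt"

The first runs fine with the default command module; the second needs the shell module because it uses a pipeline and redirection.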
Logging is disabled by default. To enable logging, use the log_path parameter in the Ansible configuration file.
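For example, a minimal sketch of the relevant section of the configuration file (the log location is an assumption; any writable path works):

[defaults]
log_path = /var/log/ansible.log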
From Wikipedia, the free encyclopedia
As with most configuration management software, Ansible distinguishes two types of servers: controlling machines and nodes. First, there is a single controlling machine which is where orchestration begins. Nodes are managed by a controlling machine over SSH. The controlling machine describes the location of nodes through its inventory.
To orchestrate nodes, Ansible deploys modules to nodes over SSH. Modules are temporarily stored in the nodes and communicate with the controlling machine through a JSON protocol over the standard output.[8] When Ansible is not managing nodes, it does not consume resources because no daemons or programs are executing for Ansible in the background.[9]
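As an illustration, a module's reply on standard output is a single JSON object; the ping module shown earlier on this page returns essentially this (field set trimmed for brevity):

{"changed": false, "ping": "pong"}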
In contrast with popular configuration management software such as Chef and Puppet, Ansible uses an agentless architecture.[9] With an agent-based architecture, nodes must have a locally installed daemon that communicates with a controlling machine. With an agentless architecture, nodes are not required to install and run background daemons to connect with a controlling machine. This type of architecture reduces the overhead on the network by preventing the nodes from polling the controlling machine.[9]
The design goals of Ansible[8] include:

- Minimal in nature: management systems should not impose additional dependencies on the environment.
- Consistent.
- Secure: Ansible does not deploy agents to nodes; only OpenSSH is required.
- Highly reliable: the idempotent resource model makes repeated application safe.
- Low learning curve: playbooks use an easy, descriptive language based on YAML.
Modules are considered to be the units of work in Ansible. Each module is mostly standalone and can be written in a standard scripting language such as Python, Perl, Ruby, bash, etc. One of the guiding properties of modules is idempotency which means that no operations are performed once an operation has placed a system into a desired state.[8]
The Inventory is a description of the nodes that can be accessed by Ansible. The Inventory is described by a configuration file, in INI format, whose default location is /etc/ansible/hosts. The configuration file lists either the IP address or hostname of each node that is accessible by Ansible. In addition, nodes can be assigned to groups.[10]

An example configuration file:
192.168.6.1

[webservers]
foo.example.com
bar.example.com
This configuration file specifies three nodes. The first node is specified by an IP address and the latter two nodes are specified by hostnames. Additionally, the latter two nodes are grouped under the webservers group name.
Playbooks express configurations, deployment, and orchestration in Ansible.[11] The Playbook format is YAML. Each Playbook maps a group of hosts to a set of roles. Each role is represented by calls to Ansible tasks.
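A minimal playbook mapping a host group to roles could look like this sketch (the group and role names are hypothetical):

---
- hosts: webservers
  roles:
    - common
    - apache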
Control machines must have Python 2.6 or later. Operating systems supported on control machines include most Linux and Unix distributions, such as Red Hat, Debian, CentOS, OS X, and BSD, among others; Windows is not supported as a control machine.[12]
Managed nodes must have Python 2.4 or later. For managed nodes with Python 2.5 or earlier, the python-simplejson package is also required.[12]
Ansible can deploy to virtualization environments and public and private cloud environments including VMware, OpenStack, AWS, Eucalyptus Cloud, KVM, and CloudStack.[8]
Ansible can deploy big data, storage, and analytics environments including Hadoop, Riak, and Aerospike. The problems addressed by Ansible in these environments include the management of each node's resource consumption. Specifically, big data, storage, and analytics environments need to be resource efficient, wasting as little CPU time and memory as possible. Furthermore, Ansible provides monitoring capabilities that measure quantities such as available CPU resources, which can help in the supervision of these nodes.[8]
Ansible is used by Atlassian, Twitter, OneKingsLane, Evernote, TrunkClub, edX, hootsuite, GoPro and Care.com, among others.[13]
Filename extension | .yaml, .yml |
---|---|
Initial release | 11 May 2001 |
Latest release | 1.2 (Third Edition) / 1 October 2009 |
Type of format | Data interchange |
Open format? | Yes |
Website | yaml.org |
YAML (/ˈjæməl/, rhymes with camel) is a human-readable data serialization format that takes concepts from programming languages such as C, Perl, and Python, and ideas from XML and the data format of electronic mail (RFC 2822). YAML was first proposed by Clark Evans in 2001,[1] who designed it together with Ingy döt Net[2] and Oren Ben-Kiki.[2] It is available for several programming languages.
YAML is a recursive acronym for "YAML Ain't Markup Language". Early in its development, YAML was said to mean "Yet Another Markup Language",[3] but it was then reinterpreted (backronyming the original acronym) to distinguish its purpose as data-oriented, rather than document markup.
YAML syntax was designed to be easily mapped to data types common to most high-level languages: list, associative array, and scalar.[4] Its familiar indented outline and lean appearance make it especially suited for tasks where humans are likely to view or edit data structures, such as configuration files, dumping during debugging, and document headers (e.g. the headers found on most e-mails are very close to YAML). Although well-suited for hierarchical data representation, it also has a compact syntax for relational data.[5] Its line and whitespace delimiters make it friendly to ad hoc grep/Python/Perl/Ruby operations. A major part of its accessibility comes from eschewing the use of enclosures such as quotation marks, brackets, braces, and open/close-tags, which can be hard for the human eye to balance in nested hierarchies.
Data structure hierarchy is maintained by outline indentation.
---
receipt: Oz-Ware Purchase Invoice
date: 2012-08-06
customer:
    given: Dorothy
    family: Gale
items:
    - part_no: A4786
      descrip: Water Bucket (Filled)
      price: 1.47
      quantity: 4
    - part_no: E1628
      descrip: High Heeled "Ruby" Slippers
      size: 8
      price: 100.27
      quantity: 1
bill-to: &id001
    street: |
        123 Tornado Alley
        Suite 16
    city: East Centerville
    state: FL
ship-to: *id001
specialDelivery: >
    Follow the Yellow Brick
    Road to the Emerald City.
    Pay no attention to the
    man behind the curtain.
...
Notice that strings do not require enclosure in quotations. The specific number of spaces in the indentation is unimportant as long as parallel elements have the same left justification and the hierarchically nested elements are indented further. This sample document defines an associative array with 7 top level keys: one of the keys, "items", contains a 2-element array (or "list"), each element of which is itself an associative array with differing keys. Relational data and redundancy removal are displayed: the "ship-to" associative array content is copied from the "bill-to" associative array's content as indicated by the anchor (&) and reference (*) labels. Optional blank lines can be added for readability. Multiple documents can exist in a single file/stream and are separated by "---". An optional "..." can be used at the end of a file (useful for signaling an end in streamed communications without closing the pipe).
YAML offers both an indented and an "in-line" style for denoting associative arrays and lists. Here is a sample of the components.
Conventional block format uses a hyphen+space to begin a new item in a list.
--- # Favorite movies
- Casablanca
- North by Northwest
- The Man Who Wasn't There
Optional inline format is delimited by comma+space and enclosed in brackets (similar to JSON).[6]
--- # Shopping list
[milk, pumpkin pie, eggs, juice]
Keys are separated from values by a colon+space. Indented Blocks, common in YAML data files, use indentation and new lines to separate the key: value pairs. Inline Blocks, common in YAML data streams, use comma+space to separate the key: value pairs between braces.
--- # Indented Block
name: John Smith
age: 33
--- # Inline Block
{name: John Smith, age: 33}
Strings do not require quotation.
--- |
  There once was a short man from Ealing
  Who got on a bus to Darjeeling
    It said on the door
    "Please don't spit on the floor"
  So he carefully spat on the ceiling
By default, the leading indent (of the first line) and trailing white space is stripped, though other behavior can be explicitly specified.
--- >
  Wrapped text
  will be folded
  into a single
  paragraph

  Blank lines denote
  paragraph breaks
Folded text converts newlines to spaces and removes leading whitespace.
- {name: John Smith, age: 33}
- name: Mary Smith
  age: 27
men: [John Smith, Bill Jones]
women:
  - Mary Smith
  - Susan Williams
Two features that distinguish YAML from the capabilities of other data serialization languages are Structures[7] and Data Typing.
YAML structures enable storage of multiple documents within a single file, usage of references for repeated nodes, and usage of arbitrary nodes as keys.[7]
For clarity, compactness, and avoiding data entry errors, YAML provides node anchors ( & ) and references ( * ). References to the anchor work for all data types (see the ship-to reference in the example above).
Below is an example of a queue in an instrument sequencer in which two steps are reused repeatedly without being fully described each time.
# sequencer protocols for Laser eye surgery
---
- step: &id001             # defines anchor label &id001
    instrument:    Lasik 2000
    pulseEnergy:   5.4
    pulseDuration: 12
    repetition:    1000
    spotSize:      1mm
- step: &id002
    instrument:    Lasik 2000
    pulseEnergy:   5.0
    pulseDuration: 10
    repetition:    500
    spotSize:      2mm
- step: *id001             # refers to the first step (with anchor &id001)
- step: *id002             # refers to the second step
- step: *id001
- step: *id002
Explicit data typing is seldom seen in the majority of YAML documents since YAML autodetects simple types. Data types can be divided into three categories: core, defined, and user-defined. Core are ones expected to exist in any parser (e.g. floats, ints, strings, lists, maps, ...). Many more advanced data types, such as binary data, are defined in the YAML specification but not supported in all implementations. Finally YAML defines a way to extend the data type definitions locally to accommodate user-defined classes, structures or primitives (e.g. quad-precision floats).
YAML autodetects the datatype of the entity, but sometimes one wants to cast the datatype explicitly. The most common situation is a single-word string that looks like a number, boolean, or tag, which may need disambiguation by surrounding it with quotes or by use of an explicit datatype tag.
---
a: 123                     # an integer
b: "123"                   # a string, disambiguated by quotes
c: 123.0                   # a float
d: !!float 123             # also a float via explicit data type prefixed by (!!)
e: !!str 123               # a string, disambiguated by explicit type
f: !!str Yes               # a string via explicit type
g: Yes                     # a boolean True
h: Yes we have No bananas  # a string, "Yes" and "No" disambiguated by context.
Not every implementation of YAML has every specification-defined data type. These built-in types use a double exclamation sigil prefix ( !! ). Particularly interesting ones not shown here are sets, ordered maps, timestamps, and hexadecimal. Here's an example of base64 encoded binary data.
---
picture: !!binary |
  R0lGODlhDAAMAIQAAP//9/X
  17unp5WZmZgAAAOfn515eXv
  Pz7Y6OjuDg4J+fn5OTk6enp
  56enmleECcgggoBADs=mZmE
Many implementations of YAML can support user defined data types. This is a good way to serialize an object. Local data types are not universal data types but are defined in the application using the YAML parser library. Local data types use a single exclamation mark ( ! ).
---
myObject: !myClass { name: Joe, age: 15 }
A compact cheat sheet as well as a full specification are available at the official site.[8] The following is a synopsis of the basic elements.
- List members are denoted by a leading hyphen+space ( - ) with one member per line, or enclosed in square brackets ( [ ] ) and separated by comma space ( , ).
- Documents are separated by three hyphens ( --- ). Three periods ( ... ) optionally end a file within a stream.
- Nodes may be labeled with a type or tag using the double exclamation sigil ( !! ) followed by a string, which can be expanded into a URI.

YAML requires that colons and commas used as list separators be followed by a space so that scalar values containing embedded punctuation (such as 5,280 or http://www.wikipedia.org) can generally be represented without needing to be enclosed in quotes.
Two additional sigil characters are reserved in YAML for possible future standardisation: the at sign ( @ ) and accent grave ( ` ).
While YAML shares similarities with JSON, XML and SDL (Simple Declarative Language), it also has characteristics that are unique in comparison to many other similar format languages.
JSON syntax is a subset of YAML version 1.2, which was promulgated with the express purpose of bringing YAML "into compliance with JSON as an official subset".[10] Though prior versions of YAML were not strictly compatible,[11] the discrepancies were rarely noticeable, and most JSON documents can be parsed by some YAML parsers such as Syck.[12] This is because JSON's semantic structure is equivalent to the optional "inline-style" of writing YAML. While extended hierarchies can be written in inline-style like JSON, this is not a recommended YAML style except when it aids clarity.
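For instance, the following JSON document is, unchanged, also a valid YAML 1.2 document describing the same mapping:

{"name": "John Smith", "age": 33}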
YAML has many additional features lacking in JSON, including comments, extensible data types, relational anchors, strings without quotation marks, and mapping types preserving key order.
YAML lacks the notion of tag attributes that are found in XML and SDL.[13] For data structure serialization, tag attributes are, arguably, a feature of questionable utility, since the separation of data and meta-data adds complexity when represented by the natural data structures (associative arrays, lists) in common languages.[14] Instead YAML has extensible type declarations (including class types for objects).
YAML itself does not have XML's language-defined document schema descriptors that allow, for example, a document to self-validate. However, there are several externally defined schema descriptor languages for YAML (e.g. Doctrine, Kwalify and Rx) that fulfill that role. Moreover, the semantics provided by YAML's language-defined type declarations in the YAML document itself frequently relaxes the need for a validator in simple, common situations. Additionally, YAXML, which represents YAML data structures in XML, allows XML schema importers and output mechanisms like XSLT to be applied to YAML.
Because YAML primarily relies on outline indentation for structure, it is especially resistant to delimiter collision. YAML's insensitivity to quotes and braces in scalar values means one may embed XML, SDL, JSON or even YAML documents inside a YAML document by simply indenting it in a block literal:
---
example: >
        HTML goes into YAML without modification
message: |
        <blockquote style="font: italic 12pt Times">
        <p>"Three is always greater than two,
           even for large values of two"</p>
        <p>--Author Unknown</p>
        </blockquote>
date: 2007-06-01
YAML may be placed in JSON and SDL by quoting and escaping all interior quotes. YAML may be placed in XML by escaping reserved characters (<, >, &, ', "), and converting whitespace; or by placing it in a CDATA-section.
Unlike SDL and JSON, which can only represent data in a hierarchical model with each child node having a single parent, YAML also offers a simple relational scheme that allows repeats of identical data to be referenced from two or more points in the tree rather than entered redundantly at those points. This is similar to the facility IDREF built into XML.[15] The YAML parser then expands these references into the fully populated data structures they imply when read in, so whatever program is using the parser does not have to be aware of a relational encoding model, unlike XML processors, which do not expand references. This expansion can enhance readability while reducing data entry errors in configuration files or processing protocols where many parameters remain the same in a sequential series of records while only a few vary. An example is that "ship-to" and "bill-to" records in an invoice are nearly always the same data.
YAML is line-oriented and thus it is often simple to convert the unstructured output of existing programs into YAML format while having them retain much of the look of the original document. Because there are no closing tags, braces, or quotation marks to balance, it is generally easy to generate well-formed YAML directly from distributed print statements within unsophisticated programs. Likewise, the whitespace delimiters facilitate quick-and-dirty filtering of YAML files using the line-oriented commands in grep, awk, perl, ruby, and python.
In particular, unlike mark-up languages, chunks of consecutive YAML lines tend to be well-formed YAML documents themselves. This makes it very easy to write parsers that do not have to process a document in its entirety (e.g. balancing opening and closing tags and navigating quoted and escaped characters) before they begin extracting specific records within. This property is particularly expedient when iterating in a single, stateless pass, over records in a file whose entire data structure is too large to hold in memory, or for which reconstituting the entire structure to extract one item would be prohibitively expensive.
Counterintuitively, although its indented delimiting might seem to complicate deeply nested hierarchies, YAML handles indents as small as a single space, and this may achieve better compression than markup languages. Additionally, extremely deep indentation can be avoided entirely by either: 1) reverting to "inline style" (i.e. JSON-like format) without the indentation; or 2) using relational anchors to unwind the hierarchy to a flat form that the YAML parser will transparently reconstitute into the full data structure.[citation needed]
Jan 26, 2021 | www.redhat.com
Nodes
In Ansible architecture, you have a controller node and managed nodes. Ansible is installed on only the controller node. It's an agentless tool and doesn't need to be installed on the managed nodes. Controller and managed nodes are connected using the SSH protocol. All tasks are written into a "playbook" using the YAML language. Each playbook can contain multiple plays, which contain tasks, and tasks contain modules. Modules are reusable standalone scripts that manage some aspect of a system's behavior. Ansible modules are also known as task plugins or library plugins.
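A minimal sketch of that hierarchy -- a playbook containing one play, which contains one task, which calls one module (the host group and package name are illustrative):

---
- name: Example play
  hosts: webservers
  tasks:
    - name: Example task calling the package module
      package:
        name: httpd
        state: present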
Roles

Playbooks for complex tasks can become lengthy and therefore difficult to read and understand. The solution to this problem is Ansible roles. Using roles, you can break long playbooks into multiple files, making each playbook simple to read and understand. Roles are a collection of templates, files, variables, modules, and tasks. The primary purpose behind roles is to reuse Ansible code. DevOps engineers and sysadmins should always try to reuse their code. An Ansible role can contain multiple playbooks. It can easily reuse code written by anyone if the role is suitable for a given case. For example, you could write a playbook for Apache hosting and then reuse this code by changing the content of index.html to alter options for some other application or service.

The following is an overview of the Ansible role structure. It consists of many subdirectories, such as:
|-- README.md
|-- defaults
|-------main.yml
|-- files
|-- handlers
|-------main.yml
|-- meta
|-------main.yml
|-- tasks
|-------main.yml
|-- templates
|-- tests
|-------inventory
|-- vars
|-------main.yml

Initially, all files are created empty by using the ansible-galaxy command. So, depending on the task, you can use these directories. For example, the vars directory stores variables. In the tasks directory, you have main.yml, which is the main playbook. The templates directory is for storing Jinja templates. The handlers directory is for storing handlers.

Advantages of Ansible roles:
- Allow for content reusability
- Make large projects manageable
Ansible roles are structured directories containing sub-directories.
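A role skeleton like the one shown above can be generated with the ansible-galaxy command mentioned earlier (the role name is illustrative):

$ ansible-galaxy init myrole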
But did you know that Red Hat Enterprise Linux also provides some Ansible System Roles to manage operating system tasks?
System roles

The rhel-system-roles package is available in the Extras (EPEL) channel. The rhel-system-roles package is used to configure RHEL hosts. There are seven default rhel-system-roles available:
- rhel-system-roles.kdump - This role configures the kdump crash recovery service. Kdump is a feature of the Linux kernel and is useful when analyzing the cause of a kernel crash.
- rhel-system-roles.network - This role is dedicated to network interfaces. This helps to configure network interfaces in Linux systems.
- rhel-system-roles.selinux - This role manages SELinux. This helps to configure the SELinux mode, files, port-context, etc.
- rhel-system-roles.timesync - This role is used to configure NTP or PTP on your Linux system.
- rhel-system-roles.postfix - This role is dedicated to managing the Postfix mail transfer agent.
- rhel-system-roles.firewall - As the name suggests, this role is all about managing the host system's firewall configuration.
- rhel-system-roles.tuned - Tuned is a system tuning service in Linux to monitor connected devices. So this role is to configure the tuned service for system performance.
The rhel-system-roles package is derived from the open source Linux system-roles, which are available on Ansible Galaxy. The rhel-system-roles are supported by Red Hat, so you can think of rhel-system-roles as downstream of the Linux system-roles. To install rhel-system-roles on your machine, use:

$ sudo yum -y install rhel-system-roles
or
$ sudo dnf -y install rhel-system-roles

These roles are located in the /usr/share/ansible/roles/ directory. This is the default path, so whenever you use playbooks to reference these roles, you don't need to explicitly include the absolute path. You can also refer to the documentation for using Ansible roles. The path for the documentation is /usr/share/doc/rhel-system-roles.

The documentation directory for each role has detailed information about that role. For example, the README.md file contains an example of that role. The documentation is self-explanatory.
The following is an example of a role.
Example

If you want to change the SELinux mode of the localhost machine or any host machine, then use the system roles. For this task, use rhel-system-roles.selinux. For this task the ansible-playbook looks like this:

---
- name: a playbook for SELinux mode
  hosts: localhost
  roles:
    - rhel-system-roles.selinux
  vars:
    - selinux_state: disabled

After running the playbook, you can verify whether the SELinux mode changed or not.
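For example, a quick check on the target host (the expected output is an assumption based on the playbook above; a reboot may be required before a full disable takes effect):

$ getenforce
Disabled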
Nov 29, 2020 | opensource.com
We've gone over several things you can do with Ansible on your system, but we haven't yet discussed how to provision a system. Here's an example of provisioning a virtual machine (VM) with the OpenStack cloud solution.
- name: create a VM in openstack
  os_server:
    name: cloudera-namenode
    state: present
    cloud: openstack
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor_ram: 20146
    flavor: big
    auto_ip: yes
    volumes: cloudera-namenode

All OpenStack modules start with os, which makes it easier to find them. The above configuration uses the os_server module, which lets you add or remove an instance. It includes the name of the VM, its state, its cloud options, and how it authenticates to the API. More information about cloud.yml is available in the OpenStack docs, but if you don't want to use cloud.yml, you can use a dictionary that lists your credentials using the auth option. If you want to delete the VM, just change state: to absent.

Say you have a list of servers you shut down because you couldn't figure out how to get the applications working, and you want to start them again. You can use os_server_action to restart them (or rebuild them if you want to start from scratch).

Here is an example that starts the server and tells the modules the name of the instance:

- name: restart some servers
  os_server_action:
    action: start
    cloud: openstack
    region_name: andromeda
    server: cloudera-namenode

Most OpenStack modules use similar options. Therefore, to rebuild the server, we can use the same options but change the action to rebuild and add the image we want it to use:

os_server_action:
  action: rebuild
  image: 923569a-c777-4g52-t3y9-cxvhl86zx345
Nov 29, 2020 | opensource.com
For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:
$ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk

The script makes use of Ansible's raw module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:
---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes

  tasks:
    - name: Add child 1 account
      user:
        state: present
        name: child1
        password: "{{ passwd | password_hash('sha512') }}"
        comment: Child One
        uid: 888
        group: users
        shell: /bin/bash
        generate_ssh_key: yes
        ssh_key_bits: 2048
        update_password: always
        create_home: yes

The vars_prompt section prompts me for a password, which is put to a Jinja2 transformation to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file and can run it to change passwords as needed.
The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:
- name: Install kids software
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy

I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.
For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.
The moment of truth

A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.
I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.
I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.
Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from USB stick took about 15 minutes. The subsequent shape up of the system using Ansible for end-user deployment only took another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).
Conclusion

Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and -- thanks to Ansible's idempotency -- easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.
Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move onto other, newer fun things!
Mar 04, 2019 | opensource.com
Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled.
These modules are mapped to resources and their respective states , which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration.
With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it.
The contract for building modules is simple: JSON in the stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM -- or any other connection plugin -- as small scripts to be executed in the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except for Windows PowerShell) are written in Python using the Ansible API (this eases the development of new modules).
Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other.
Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:
- Connection plugins: These implement a way to communicate with servers in your inventory (e.g., SSH, WinRM, Telnet); in other words, how automation code is transported over the network to be executed.
- Filters plugins: These allow you to manipulate data inside your playbook. This is a Jinja2 feature that is harnessed by Ansible to solve infrastructure-as-code problems.
- Lookup plugins: These fetch data from an external source (e.g., env, file, Hiera, database, HashiCorp Vault); see the sketch after this list.
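A minimal sketch of a lookup plugin in use, here the built-in env lookup (the task itself is purely illustrative):

- name: Show the control node's HOME directory
  debug:
    msg: "{{ lookup('env', 'HOME') }}"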
Ansible's official docs are a good resource on developing plugins .
When should you develop a module?

Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific -- for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on developing modules .
IMPORTANT: Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the development list and/or existing working groups to see if a module exists or is in development.
Signs that you need a new module instead of using an existing one include:
- Conventional configuration management methods (e.g., templates, file, get_url, lineinfile) do not solve your problem properly.
- You have to use a complex combination of commands, shells, filters, text processing with magic regexes, and API calls using curl to achieve your goals.
- Your playbooks are complex, imperative, non-idempotent, and even non-deterministic.
In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML).
Identifying good and bad playbooks

"Make love, but don't make a shell script in YAML."

So, what makes a bad playbook?
- name: Read a remote resource
  command: "curl -v http://xpto/resource/abc"
  register: resource
  changed_when: False

- name: Create a resource in case it does not exist
  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
  when: "resource.stdout | 404"

# Leave it here in case I need to remove it hehehe
#- name: Remove resource
#  command: "curl -X DELETE http://xpto/resource/abc"
#  when: resource.stdout == 1

Aside from being very fragile -- what if the resource state includes a 404 somewhere? -- and demanding extra code to be idempotent, this playbook can't update the resource when its state changes.
Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state.
Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files.
Here's how to rewrite this example to follow infrastructure-as-code principles.
- name: XPTO
  xpto:
    name: abc
    state: present
    config:
      client: xyz
      url: http://beta
      pattern: "*.*"

The benefits of this approach, based on custom modules, include:
- It's declarative -- resources are properly represented in YAML.
- It's idempotent.
- It converges from the declared state to the current state.
- It's readable by human beings.
- It's easily parameterized or reused.

Implementing a custom module
Let's use WildFly , an open source Java application server, as an example to introduce a custom module for our not-so-good playbook:
- name: Read datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'"
  register: datasource

- name: Create datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=.jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE..)'"
  when: 'datasource.stdout | outcome => failed'

Problems:
- It's not declarative.
- JBoss-CLI returns plaintext in a JSON-like syntax; therefore, this approach is very fragile, since we need a type of parser for this notation. Even a seemingly simple parser can be too complex to treat many exceptions .
- JBoss-CLI is just an interface to send requests to the management API (port 9990).
- Sending an HTTP request is more efficient than opening a new JBoss-CLI session, connecting, and sending a command.
- It does not converge to the desired state; it only creates the resource when it doesn't exist.
A custom module for this would look like:
- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      driver-name: h2
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      jndi-name: "java:jboss/datasources/DemoDS"
      user-name: sa
      password: sa
      min-pool-size: 20
      max-pool-size: 40

This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state.
Why learn to build custom modules?

Good reasons to learn how to build custom modules include:
- Improving existing modules
- You have bad playbooks and want to improve them, or
- You don't, but want to avoid having bad playbooks.
- Knowing how to build a module considerably improves your ability to debug problems in playbooks, thereby increasing your productivity.
" abstractions save us time working, but they don't save us time learning." -- Joel Spolsky, The Law of Leaky AbstractionsCustom Ansible modules 101The Ansible way
- JSON (JavaScript Object Notation) in stdout : that's the contract!
- They can be written in any language, but
- Python is usually the best option (or the second best)
- Most modules delivered with Ansible ( lib/ansible/modules ) are written in Python and should support compatible versions.
- First step: git clone https://github.com/ansible/ansible.git
- Navigate in lib/ansible/modules/ and read the existing modules code.
- Your tools are: Git, Python, virtualenv, pdb (Python debugger)
- For comprehensive instructions, consult the official docs .

An alternative: drop it in the library directory

library/            # if any custom modules, put them here (optional)
module_utils/       # if any custom module_utils to support modules, put them here (optional)
filter_plugins/     # if any custom filter plugins, put them here (optional)

site.yml            # master playbook
webservers.yml      # playbook for webserver tier
dbservers.yml       # playbook for dbserver tier

roles/
    common/             # this hierarchy represents a "role"
        library/        # roles can also include custom modules
        module_utils/   # roles can also include custom module_utils
        lookup_plugins/ # or other types of plugins, like lookup in this case

- It's easier to start.
- Doesn't require anything besides Ansible and your favorite IDE/text editor.
- This is your best option if it's something that will be used internally.

TIP: You can use this directory layout to overwrite existing modules if, for example, you need to patch a module.
First steps

You could do it on your own -- including using another language -- or you could use the AnsibleModule class, as it makes it easier to put JSON in the stdout (exit_json(), fail_json()) in the way Ansible expects (msg, meta, has_changed, result), and it's also easier to process the input (params[]) and log its execution (log(), debug()).
def main():
    arguments = dict(
        name=dict(required=True, type='str'),
        state=dict(choices=['present', 'absent'], default='present'),
        config=dict(required=False, type='dict')
    )

    module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)

    try:
        if module.check_mode:
            # Do not do anything, only verify the current state and report it
            module.exit_json(changed=has_changed, meta=result, msg='Did something or not...')

        if module.params['state'] == 'present':
            # Verify the presence of a resource
            # Is the desired state `module.params['param_name']` equal to the current state?
            module.exit_json(changed=has_changed, meta=result)

        if module.params['state'] == 'absent':
            # Remove the resource in case it exists
            module.exit_json(changed=has_changed, meta=result)

    except Error as err:
        module.fail_json(msg=str(err))

NOTES: The check_mode ("dry run") allows a playbook to be executed or just verifies if changes are required, but doesn't perform them. Also, the module_utils directory can be used for shared code among different modules.
For the full Wildfly example, check this pull request .
Running tests: the Ansible way

The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, Shippable, which includes linting, unit tests, and integration tests.
For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. Here is a test case (written in Ansible) for our custom module's sample code:
- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      ...
  register: result

- name: assert output message that datasource was created
  assert:
    that:
      - "result.changed == true"
      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"

An alternative: bundling a module with your role

Here is a full example inside a simple role:
Molecule + Vagrant + pytest: molecule init (inside roles/)

It offers greater flexibility to choose:

- Simplified setup
- How to spin up your infrastructure: e.g., Vagrant, Docker, OpenStack, EC2
- How to verify your infrastructure tests: Testinfra and Goss
But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about using Molecule .
Nov 25, 2019 | opensource.com
5. authorized_key

Secure shell (SSH) is at the heart of Ansible, at least for almost everything besides Windows. Key (no pun intended) to using SSH efficiently with Ansible is keys! Slight aside -- there are a lot of very cool things you can do for security with SSH keys. It's worth perusing the authorized_keys section of the sshd manual page. Managing SSH keys can become laborious if you're getting into the realms of granular user access, and although we could do it with either of my next two favourites, I prefer to use the authorized_key module because it enables easy management through variables.
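A minimal sketch of that module in use (the user name and key path are assumptions):

- name: Ensure the deploy user's public key is authorized
  authorized_key:
    user: deploy
    state: present
    key: "{{ lookup('file', '/home/deploy/.ssh/id_rsa.pub') }}"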
4. file

Besides the obvious function of placing a file somewhere, the file module also sets ownership and permissions. I'd say that's a lot of bang for your buck with one module. I'd proffer a substantial portion of security relates to setting permissions too, so the file module plays nicely with authorized_keys.
3. template

There are so many ways to manipulate the contents of files, and I see lots of folk use lineinfile. I've used it myself for small tasks. However, the template module is so much clearer because you maintain the entire file for context. My preference is to write Ansible content in such a way that anyone can understand it easily -- which to me means not making it hard to understand what is happening. Use of template means being able to see the entire file you're putting into place, complete with the variables you are using to change pieces.
2. uri

Many modules in the current distribution leverage Ansible as an orchestrator. They talk to another service, rather than doing something specific like putting a file into place. Usually, that talking is over HTTP too. In the days before many of these modules existed, you could program an API directly using the uri module. It's a powerful access tool, enabling you to do a lot. I wouldn't be without it in my fictitious Ansible shed.
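A minimal sketch of the uri module probing a service (the URL and expected status are assumptions):

- name: Check that the application health endpoint responds
  uri:
    url: http://localhost:8080/health
    method: GET
    status_code: 200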
1. shell

The joker card in our pack. The Swiss Army Knife. If you're absolutely stuck for how to control something else, use shell. Some will argue we're now talking about making Ansible a Bash script -- but, I would say it's still better because with the use of the name parameter in your plays and roles, you document every step. To me, that's as big a bonus as anything. Back in the days when I was still consulting, I once helped a database administrator (DBA) migrate to Ansible. The DBA wasn't one for change and pushed back at changing working methods. So, to ease into the Ansible way, we called some existing DB management scripts from Ansible using the shell module. With an informative name statement to accompany the task.
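A minimal sketch of that pattern (the script path is an assumption), with a name statement documenting the step:

- name: Run the DBA's existing nightly maintenance script
  shell: /opt/dba/scripts/nightly_maintenance.sh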
You can achieve a lot with these five modules. Yes, modules designed to do a specific task will make your life even easier. But with a smidgen of engineering simplicity, you can achieve a lot with very little. Ansible developer Brian Coca is a master at it, and his tips and tricks talk is always worth a watch.
Nov 25, 2020 | opensource.com
10 Ansible modules for Linux system automation

These handy modules save time and hassle by automating many of your daily tasks, and they're easy to implement with a few commands. (26 Oct 2020, Ricardo Gerardi, Red Hat)
Ansible is a complete automation solution for your IT environment. You can use Ansible to automate Linux and Windows server configuration, orchestrate service provisioning, deploy cloud environments, and even configure your network devices.
Ansible modules abstract actions on your system so you don't need to worry about implementation details. You simply describe the desired state, and Ansible ensures the target system matches it.
This module availability is one of Ansible's main benefits, and it is often referred to as Ansible having "batteries included." Indeed, you can find modules for a great number of tasks, and while this is great, I frequently hear from beginners that they don't know where to start.
Although your choice of modules will depend exclusively on your requirements and what you're trying to automate with Ansible, here are the top ten modules you need to get started with Ansible for Linux system automation.
1. copy

The copy module allows you to copy a file from the Ansible control node to the target hosts. In addition to copying the file, it allows you to set ownership, permissions, and SELinux labels to the destination file. Here's an example of using the copy module to copy a "message of the day" configuration file to the target hosts:

- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

For less complex content, you can copy the content directly to the destination file without having a local file, like this:
- name: Ensure MOTD file is in place
  copy:
    content: "Welcome to this system."
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

This module works idempotently, which means it will only copy the file if the same file is not already in place with the same content and permissions.
The copy module is a great option to copy a small number of files with static content. If you need to copy a large number of files, take a look at the synchronize module. To copy files with dynamic content, take a look at the template module next.

2. template

The template module works similarly to the copy module, but it processes content dynamically using the Jinja2 templating language before copying it to the target hosts.

For example, define a "message of the day" template that displays the target system name, like this:

$ vi templates/motd.j2
Welcome to {{ inventory_hostname }}.

Then, instantiate this template using the template module, like this:

- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

Before copying the file, Ansible processes the template and interpolates the variable, replacing it with the target host system name. For example, if the target system name is rh8-vm03, the result file is:

Welcome to rh8-vm03.

While the copy module can also interpolate variables when using the content parameter, the template module allows additional flexibility by creating template files, which enable you to define more complex content, including for loops, if conditions, and more. For a complete reference, check the Jinja2 documentation.

This module is also idempotent, and it will not copy the file if the content on the target system already matches the template's content.
3. userThe user module allows you to create and manage Linux users in your target system. This module has many different parameters, but in its most basic form, you can use it to create a new user.
For example, to create the user
- name: Ensure user ricardo existsricardo
with UID 2001, part of the groupsusers
andwheel
, and passwordmypassword
, apply theuser
module with these parameters:
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    state: present
Notice that this module tries to be idempotent, but it cannot guarantee that for all its options. For instance, if you execute the previous example again, it will reset the password to the defined value, changing the user in the system with every execution. To make this example idempotent, use the parameter update_password: on_create, which ensures Ansible only sets the password when creating the user and not on subsequent runs.
You can also use this module to delete a user by setting the parameter state: absent.
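To make that idempotent variant concrete, here is a minimal sketch (same illustrative user as above; update_password is a real parameter of the user module):
- name: Ensure user ricardo exists (password set only on creation)
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    update_password: on_create   # repeated runs now report "ok" instead of "changed"
    state: present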
The user module has many options for you to manage multiple user aspects. Make sure you take a look at the module documentation for more information.
4. package
The package module allows you to install, update, or remove software packages from your target system using the operating system's standard package manager.
For example, to install the Apache web server on a Red Hat Linux machine, apply the module like this:
- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present
This module is distribution agnostic, and it works by using the underlying package manager, such as yum/dnf for Red Hat-based distributions and apt for Debian. Because of that, it only does basic tasks like installing and removing packages. If you need more control over the package manager options, use the specific module for the target distribution.
Also, keep in mind that, even though the module itself works on different distributions, the package name for each can be different. For instance, on Red Hat-based distributions, the Apache web server package name is httpd, while on Debian, it is apache2. Ensure your playbooks deal with that.
This module is idempotent, and it will not act if the current system state matches the desired state.
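One way to deal with that, sketched here rather than taken from the article, is to pick the package name from the gathered os_family fact:
# Sketch: choose the Apache package name based on the distribution family.
- name: Ensure Apache is installed on either distribution family
  package:
    name: "{{ 'apache2' if ansible_facts['os_family'] == 'Debian' else 'httpd' }}"
    state: present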
5. service
Use the service module to manage the target system's services using the required init system; for example, systemd.
In its most basic form, all you have to do is provide the service name and the desired state. For instance, to start the sshd service, use the module like this:
- name: Ensure SSHD is started
  service:
    name: sshd
    state: started
You can also ensure the service starts automatically when the target system boots up by providing the parameter enabled: yes.
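Combined with the state parameter, that gives a minimal sketch like this:
# Sketch: start sshd now and also enable it at boot.
- name: Ensure SSHD is started and enabled at boot
  service:
    name: sshd
    state: started
    enabled: yes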
As with the package module, the service module is flexible and works across different distributions. If you need fine-tuning over the specific target init system, use the corresponding module; for example, the systemd module.
Similar to the other modules you've seen so far, the service module is also idempotent.
6. firewalld
Use the firewalld module to control the system firewall with the firewalld daemon on systems that support it, such as Red Hat-based distributions.
For example, to open the HTTP service on port 80, use it like this:
- name: Ensure port 80 (http) is open
  firewalld:
    service: http
    state: enabled
    permanent: yes
    immediate: yes
You can also specify custom ports instead of service names with the port parameter. In this case, make sure to specify the protocol as well. For example, to open TCP port 3000, use this:
- name: Ensure port 3000/TCP is open
  firewalld:
    port: 3000/tcp
    state: enabled
    permanent: yes
    immediate: yes
You can also use this module to control other firewalld aspects, like zones or complex rules. Make sure to check the module's documentation for a comprehensive list of options.
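For instance, a zone-based rule might look like this sketch (the source network and zone choice are illustrative):
# Sketch: place an illustrative management subnet into the trusted zone.
- name: Ensure the management network is in the trusted zone
  firewalld:
    source: 192.168.122.0/24
    zone: trusted
    state: enabled
    permanent: yes
    immediate: yes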
7. file
The file module allows you to control the state of files and directories -- setting permissions, ownership, and SELinux labels.
For instance, use the file module to create a directory /app owned by the user ricardo, with read, write, and execute permissions for the owner and the group users:
- name: Ensure directory /app exists
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770
You can also use this module to set file properties on directories recursively by using the parameter recurse: yes, or delete files and directories with the parameter state: absent.
This module works idempotently for most of its parameters, but some of them may make it change the target path every time. Check the documentation for more details.
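As a sketch, recursive ownership on the same illustrative directory looks like this:
# Sketch: apply owner and group to /app and everything beneath it.
- name: Ensure /app and its contents belong to ricardo
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    recurse: yes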
8. lineinfile
The lineinfile module allows you to manage single lines in existing files. It's useful for updating targeted configuration in existing files without changing the rest of the file or copying the entire configuration file.
For example, add a new entry to your hosts file like this:
- name: Ensure host rh8-vm03 in hosts file
  lineinfile:
    path: /etc/hosts
    line: 192.168.122.236 rh8-vm03
    state: present
You can also use this module to change an existing line by applying the parameter regexp to look for an existing line to replace. For example, update the sshd_config file to prevent root login by modifying the line PermitRootLogin yes to PermitRootLogin no:
- name: Ensure root cannot login via ssh
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: PermitRootLogin no
    state: present
Note: Use the service module to restart the SSHD service to enable this change.
This module is also idempotent but, when modifying a line, make sure the regular expression matches both the original and the updated states to avoid unnecessary changes.
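A common companion pattern, sketched below rather than taken from the article, is to pair the task with a handler so sshd restarts only when the line actually changes:
# Sketch: notify a handler; the handler runs only if the task reports "changed".
tasks:
  - name: Ensure root cannot login via ssh
    lineinfile:
      path: /etc/ssh/sshd_config
      regexp: '^PermitRootLogin'
      line: PermitRootLogin no
    notify: Restart sshd

handlers:
  - name: Restart sshd
    service:
      name: sshd
      state: restarted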
9. unarchive
Use the unarchive module to extract the contents of archive files such as tar or zip files. By default, it copies the archive file from the control node to the target machine before extracting it. Change this behavior by providing the parameter remote_src: yes.
For example, extract the contents of a .tar.gz file that has already been downloaded to the target host with this syntax:
- name: Extract contents of app.tar.gz
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes
Some archive technologies require additional packages to be available on the target system; for example, the package unzip to extract .zip files.
Depending on the archive format used, this module may or may not work idempotently. To prevent unnecessary changes, you can use the parameter creates to specify a file or directory that this module would create when extracting the archive contents. If this file or directory already exists, the module does not extract the contents again.
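Sketched with an illustrative marker path (reusing the installer script that appears in the next section), that looks like:
# Sketch: skip extraction when the archive has already been unpacked.
- name: Extract contents of app.tar.gz only once
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes
    creates: /app/install.sh   # illustrative marker file created by extraction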
10. command
The command module is a flexible one that allows you to execute arbitrary commands on the target system. Using this module, you can do almost anything on the target system as long as there's a command for it.
Even though the command module is flexible and powerful, it should be used with caution. Avoid using the command module to execute a task if there's another appropriate module available for that. For example, you could create users by using the command module to execute the useradd command, but you should use the user module instead, as it abstracts many details away from you, taking care of corner cases and ensuring the configuration only changes when necessary.
For cases where no modules are available, or to run custom scripts or programs, the command module is still a great resource. For instance, use this module to run a script that is already present on the target machine:
- name: Run the app installer
  command: "/app/install.sh"
By default, this module is not idempotent, as Ansible executes the command every single time. To make the command module idempotent, you can use when conditions to only execute the command if the appropriate condition exists, or the creates argument, similar to the unarchive module example.
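A minimal sketch of the creates variant (the marker file is illustrative):
# Sketch: run the installer only while the marker file is absent.
- name: Run the app installer once
  command: "/app/install.sh"
  args:
    creates: /app/.installed   # illustrative file the installer is assumed to create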
What's next?
Using these modules, you can configure entire Linux systems by copying, templating, or modifying configuration files, creating users, installing packages, starting system services, updating the firewall, and more.
If you are new to Ansible, make sure you check the documentation on how to create playbooks to combine these modules to automate your system. Some of these tasks require running with elevated privileges to work. For more details, check the privilege escalation documentation.
As of Ansible 2.10, modules are organized in collections. Most of the modules in this list are part of the ansible.builtin collection and are available by default with Ansible, but some of them are part of other collections. For a list of collections, check the Ansible documentation.
Nov 02, 2020 | www.upguard.com
Ansible has no notion of state. Since it doesn't keep track of dependencies, the tool simply executes a sequential series of tasks, stopping when it finishes, fails, or encounters an error. For some, this simplistic mode of automation is desirable; however, many prefer their automation tool to maintain an extensive catalog for ordering (à la Puppet), allowing them to reach a defined state regardless of any variance in environmental conditions.
Nov 02, 2020 | www.redhat.com
YAML Ain't Markup Language (YAML) is, as configuration formats go, easy on the eyes. It has an intuitive visual structure, and its logic is pretty simple: indented bullet points inherit properties of parent bullet points.
But this apparent simplicity can be deceptive.
It's easy (and misleading) to think of YAML as just a list of related values, no more complex than a shopping list. There is a heading and some items beneath it. The items below the heading relate directly to it, right? Well, you can test this theory by writing a little bit of valid YAML.
Open a text editor and enter this text, retaining the dashes at the top of the file and the leading spaces for the last two items:
---
Store: Bakery
  Sourdough loaf
  Bagels
Save the file as shop.yaml (or similar).
If you don't already have yamllint installed, install it:
$ sudo dnf install -y yamllint
A linter is an application that verifies the syntax of a file. The yamllint command is a great way to ensure your YAML is valid before you hand it over to whatever application you're writing YAML for (Ansible, for instance).
Use yamllint to validate your YAML file:
$ yamllint --strict shop.yaml || echo "Fail"
$
The lint passes, so the file is valid YAML. But when converted to JSON with a simple converter script, the data structure of this simple YAML becomes clearer:
$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery Sourdough loaf Bagels"}
Parsed without the visual context of line breaks and indentation, the actual scope of your data looks a lot different. The data is mostly flat, almost devoid of hierarchy. There's no indication that the sourdough loaf and bagels are children of the name of the store.
How data is stored in YAML
YAML can contain different kinds of data blocks:
- Sequence: values listed in a specific order. A sequence starts with a dash and a space (- ). You can think of a sequence as a Python list or an array in Bash or Perl.
- Mapping: key and value pairs. Each key must be unique, and the order doesn't matter. Think of a Python dictionary or a variable assignment in a Bash script.
There's a third type called scalar, which is arbitrary data (encoded in Unicode) such as strings, integers, dates, and so on. In practice, these are the words and numbers you type when building mapping and sequence blocks, so you won't think about these any more than you ponder the words of your native tongue.
When constructing YAML, it might help to think of YAML as either a sequence of sequences or a map of maps, but not both.
YAML mapping blocks
When you start a YAML file with a mapping statement, YAML expects a series of mappings. A mapping block in YAML doesn't close until it's resolved, and a new mapping block is explicitly created. A new block can only be created either by increasing the indentation level (in which case, the new block exists inside the previous block) or by resolving the previous mapping and starting an adjacent mapping block.
The reason the original YAML example in this article fails to produce data with a hierarchy is that it's actually only one data block: the key Store has a single value of Bakery Sourdough loaf Bagels. YAML ignores the whitespace because no new mapping block has been started.
Is it possible to fix the example YAML by prepending each sequence item with a dash and space?
---
Store: Bakery
  - Sourdough loaf
  - Bagels
Again, this is valid YAML, but it's still pretty flat:
$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery - Sourdough loaf - Bagels"}
The problem is that this YAML file opens a mapping block and never closes it. To close the Store block and open a new one, you must start a new mapping. The value of the mapping can be a sequence, but you need a key first.
Here's the correct (and expanded) resolution:
---
Store:
  Bakery:
    - 'Sourdough loaf'
    - 'Bagels'
  Cheesemonger:
    - 'Blue cheese'
    - 'Feta'
In JSON, this resolves to:
{"Store": {"Bakery": ["Sourdough loaf", "Bagels"], "Cheesemonger": ["Blue cheese", "Feta"]}}As you can see, this YAML directive contains one mapping (
YAML sequence blocksStore
) to two child values (Bakery
andCheesemonger
), each of which is mapped to a child sequence.The same principles hold true should you start a YAML directive as a sequence. For instance, this YAML directive is valid:
---
- Flour
- Water
- Salt
Each item is distinct when viewed as JSON:
["Flour", "Water", "Salt"]But this YAML file is not valid because it attempts to start a mapping block at an adjacent level to a sequence block :
---
- Flour
- Water
- Salt
Sugar: caster
It can be repaired by moving the mapping block into the sequence:
---
- Flour
- Water
- Salt
- Sugar: caster
You can, as always, embed a sequence into your mapping item:
---
- Flour
- Water
- Salt
- Sugar:
    - caster
    - granulated
    - icing
Viewed through the lens of explicit JSON scoping, that YAML snippet reads like this:
["Flour", "Salt", "Water", {"Sugar": ["caster", "granulated", "icing"]}][ A free guide from Red Hat: 5 steps to automate your business . ]
YAML syntax
If you want to comfortably write YAML, it's vital to be aware of its data structure. As you can tell, there's not much you have to remember. You know about mapping and sequence blocks, so you know everything you need to work with. All that's left is to remember how they do and do not interact with one another. Happy coding!
Oct 21, 2020 | www.redhat.com
This article describes the different parts of an Ansible playbook starting with a very broad overview of what Ansible is and how you can use it. Ansible is a way to use easy-to-read YAML syntax to write playbooks that can automate tasks for you. These playbooks can range from very simple to very complex and one playbook can even be embedded in another.
Installing httpd with a playbook
Now that you have that base knowledge, let's look at a basic playbook that will install the httpd package. I have an inventory file with two hosts specified, and I placed them in the web group:
[root@ansible test]# cat inventory
[web]
ansibleclient.usersys.redhat.com
ansibleclient2.usersys.redhat.com
Let's look at the actual playbook to see what it contains:
[root@ansible test]# cat httpd.yml
---
- name: this playbook will install httpd
  hosts: web
  tasks:
    - name: this is the task to install httpd
      yum:
        name: httpd
        state: latest
Breaking this down, you see that the first line in the playbook is ---. This lets you know that it is the beginning of the playbook. Next, I gave a name for the play. This is just a simple playbook with only one play, but a more complex playbook can contain multiple plays. Next, I specify the hosts that I want to target. In this case, I am selecting the web group, but I could have specified either ansibleclient.usersys.redhat.com or ansibleclient2.usersys.redhat.com instead if I didn't want to target both systems. The next line tells Ansible that you're going to get into the tasks that do the actual work. In this case, my playbook has only one task, but you can have multiple tasks if you want. Here I specify that I'm going to install the httpd package. The next line says that I'm going to use the yum module. I then tell it the name of the package, httpd, and that I want the latest version to be installed.
When I run the httpd.yml playbook twice, I get this on the terminal:
[root@ansible test]# ansible-playbook httpd.yml

PLAY [this playbook will install httpd] ********************************

TASK [Gathering Facts] *************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install httpd] *******************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]

PLAY RECAP *************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ansible test]# ansible-playbook httpd.yml

PLAY [this playbook will install httpd] ********************************

TASK [Gathering Facts] *************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install httpd] *******************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

PLAY RECAP *************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ansible test]#
Note that in both cases, I received an ok=2, but in the second run of the playbook, nothing was changed. The latest version of httpd was already installed at that point.
To get information about the various modules you can use in a playbook, you can use the ansible-doc command. For example:
[root@ansible test]# ansible-doc yum
> YUM (/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/yum.py)
  Installs, upgrade, downgrades, removes, and lists packages and groups with the `yum' package manager. This module only works on Python 2. If you require Python 3 support, see the [dnf] module.
  * This module is maintained by The Ansible Core Team
  * note: This module has a corresponding action plugin.
< output truncated >
It's nice to have a playbook that installs httpd, but to make it more flexible, you can use variables instead of hardcoding the package as httpd. To do that, you could use a playbook like this one:
[root@ansible test]# cat httpd.yml
---
- name: this playbook will install {{ myrpm }}
  hosts: web
  vars:
    myrpm: httpd
  tasks:
    - name: this is the task to install {{ myrpm }}
      yum:
        name: "{{ myrpm }}"
        state: latest
Here you can see that I've added a section called "vars" and declared a variable myrpm with the value of httpd. I then can use that myrpm variable in the playbook and adjust it to whatever I want to install. Also, because I've specified the RPM to install by using a variable, I can override what I have written in the playbook by specifying the variable on the command line by using -e:
[root@ansible test]# ansible-playbook httpd.yml -e "myrpm=at"

PLAY [this playbook will install at] ***********************************

TASK [Gathering Facts] *************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]

TASK [this is the task to install at] **********************************
changed: [ansibleclient2.usersys.redhat.com]
changed: [ansibleclient.usersys.redhat.com]

PLAY RECAP *************************************************************
ansibleclient.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansibleclient2.usersys.redhat.com : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ansible test]#
Another way to make the tasks more dynamic is to use loops. In this snippet, you can see that I have declared rpms as a list containing mailx and postfix. To use them, I use loop in my task:
vars:
  rpms:
    - mailx
    - postfix

tasks:
  - name: this will install the rpms
    yum:
      name: "{{ item }}"
      state: installed
    loop: "{{ rpms }}"
You might have noticed that when these plays run, facts about the hosts are gathered:
TASK [Gathering Facts] *************************************************
ok: [ansibleclient.usersys.redhat.com]
ok: [ansibleclient2.usersys.redhat.com]
These facts can be used as variables when you run the play. For example, you could have a motd.yml file that sets content like:
"This is the system {{ ansible_facts['fqdn'] }}. This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system."
For any system where you run that playbook, the correct fully-qualified domain name (FQDN), operating system distribution, and distribution version would get set, even without you manually defining those variables.
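A minimal sketch of what such a motd.yml play could look like (the play wrapper and copy usage are illustrative, not from the article):
---
# Sketch: write an MOTD built from gathered facts.
- name: Set a fact-based MOTD
  hosts: web
  tasks:
    - name: Write /etc/motd from facts
      copy:
        dest: /etc/motd
        content: "This is the system {{ ansible_facts['fqdn'] }}. This is a {{ ansible_facts['distribution'] }} version {{ ansible_facts['distribution_version'] }} system.\n"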
Wrap up
This was a quick introduction to how Ansible playbooks look, what the different parts do, and how you can get more information about the modules. Further information is available from the Ansible documentation.
Oct 07, 2020 | perlmonks.org
I want to make it easier to modify configuration files. For example, let's say I want to edit a postfix config file according to the directions here.
So I started writing simple code in a file that could be interpreted by perl to make the changes for me, with one command per line:
uc mail_owner            # "uc" is the command for "uncomment"
uc hostname
cv hostname {{fqdn}}     # "cv" is the command for "change value"; {{fqdn}} is replaced with the appropriate value
...
You get the idea. I started writing some code to interpret my config file modification commands and then realized someone had to have tackled this problem before. I did a search on metacpan but came up empty. Anyone familiar with this problem space who can help point me in the right direction?
by likbez on Oct 05, 2020 at 03:16 UTC Reputation: 2
There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors
Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses vi, which is the simplest but probably not the optimal choice, unless your primary editor is VIM.
FixHostsEquiv() {
    if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
        t_echo 2 "  /etc/hosts.equiv exists and is not empty. Saving a copy..."
        /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG
        if grep -s "^+$" /etc/hosts.equiv
        then
            ed - /etc/hosts.equiv <<- !
            g/^+$/d
            w
            q
            !
        fi
    else
        t_echo 2 "  No /etc/hosts.equiv - PASSES CHECK"
        exit 1
    fi
}
For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.
Perl can also be used. But I would recommend slurping the file into an array and operating on lines as you would in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just that. But we seldom acquire skills we can do without :-)
On the other hand, that gives you a chance to learn the splice function ;-)
If the files are basically identical and need some slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was also written by Larry Wall, and it is a very flexible tool for such tasks. You first need to collect files from your servers into some central directory with pdsh/pdcp (which, I think, is a standard RPM on RHEL and other Linuxes) or another tool, then create diffs against one server to which you have already applied the change (diff is your command language at this point), verify on another server that this diff produces the right results, apply it, and then distribute the resulting files back to each server using pdsh/pdcp again. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the common filesystem.
The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.
Nov 07, 2019 | opensource.com
Get the details on what's inside your computer from the command line. 16 Sep 2019, Howard Fosdick
The easiest way to do that is with one of the standard Linux GUI programs:
- i-nex collects hardware information and displays it in a manner similar to the popular CPU-Z under Windows.
- HardInfo displays hardware specifics and even includes a set of eight popular benchmark programs you can run to gauge your system's performance.
- KInfoCenter and Lshw also display hardware details and are available in many software repositories.
Alternatively, you could open up the box and read the labels on the disks, memory, and other devices. Or you could enter the boot-time panels -- the so-called UEFI or BIOS panels. Just hit the proper program function key during the boot process to access them. These two methods give you hardware details but omit software information.
Or, you could issue a Linux line command. Wait a minute, that sounds difficult. Why would you do this?
The Linux Terminal
Sometimes it's easy to find a specific bit of information through a well-targeted line command. Perhaps you don't have a GUI program available or don't want to install one.
Probably the main reason to use line commands is for writing scripts. Whether you employ the Linux shell or another programming language, scripting typically requires coding line commands.
Many line commands for detecting hardware must be issued under root authority. So either switch to the root user ID, or issue the command under your regular user ID preceded by sudo :
sudo <the_line_command>
and respond to the prompt for the root password.
This article introduces many of the most useful line commands for system discovery. The quick reference chart at the end summarizes them.
Hardware overview
There are several line commands that will give you a comprehensive overview of your computer's hardware.
The inxi command lists details about your system, CPU, graphics, audio, networking, drives, partitions, sensors, and more. Forum participants often ask for its output when they're trying to help others solve problems. It's a standard diagnostic for problem-solving:
inxi -Fxz
The -F flag means you'll get full output, x adds details, and z masks out personally identifying information like MAC and IP addresses.
The hwinfo and lshw commands display much of the same information in different formats:
hwinfo --short
or
lshw -short
The long forms of these two commands spew out exhaustive -- but hard to read -- output:
hwinfo
or
lshw
CPU details
You can learn everything about your CPU through line commands. View CPU details by issuing either the lscpu command or its close relative lshw:
lscpu
or
lshw -C cpu
In both cases, the last few lines of output list all the CPU's capabilities. Here you can find out whether your processor supports specific features.
With all these commands, you can reduce verbiage and narrow any answer down to a single detail by parsing the command output with the grep command. For example, to view only the CPU make and model:
lshw -C cpu | grep -i product
To view just the CPU's speed in megahertz:
lscpu | grep -i mhz
or its BogoMips power rating:
lscpu | grep -i bogo
The -i flag on the grep command simply ensures your search ignores whether the output it searches is upper or lower case.
Memory
Linux line commands enable you to gather all possible details about your computer's memory. You can even determine whether you can add extra memory to the computer without opening up the box.
To list each memory stick and its capacity, issue the dmidecode command:
dmidecode -t memory | grep -i size
For more specifics on system memory, including type, size, speed, and voltage of each RAM stick, try:
lshw -short -C memory
One thing you'll surely want to know is the maximum memory you can install on your computer:
dmidecode -t memory | grep -i max
Now find out whether there are any open slots to insert additional memory sticks. You can do this without opening your computer by issuing this command:
lshw -short -C memory | grep -i empty
A null response means all the memory slots are already in use.
Determining how much video memory you have requires a pair of commands. First, list all devices with the lspci command and limit the output displayed to the video device you're interested in:
lspci | grep -i vga
The output line that identifies the video controller will typically look something like this:
00:02.0 VGA compatible controller: Intel Corporation 82Q35 Express Integrated Graphics Controller (rev 02)
Now reissue the lspci command, referencing the video device number as the selected device:
lspci -v -s 00:02.0
The output line identified as prefetchable is the amount of video RAM on your system:
...
Memory at f0100000 (32-bit, non-prefetchable) [size=512K]
I/O ports at 1230 [size=8]
Memory at e0000000 (32-bit, prefetchable) [size=256M]
Memory at f0000000 (32-bit, non-prefetchable) [size=1M]
...
Finally, to show current memory use in megabytes, issue:
free -m
This tells how much memory is free, how much is in use, the size of the swap area, and whether it's being used. For example, the output might look like this:
              total    used    free    shared    buff/cache    available
Mem:          11891    1326    8877       212          1687        10077
Swap:          1999       0    1999
The top command gives you more detail on memory use. It shows current overall memory and CPU use and also breaks it down by process ID, user ID, and the commands being run. It displays full-screen text output:
top
Disks, filesystems, and devices
You can easily determine whatever you wish to know about disks, partitions, filesystems, and other devices.
To display a single line describing each disk device:
lshw -short -C disk
Get details on any specific SATA disk, such as its model and serial numbers, supported modes, sector count, and more with:
hdparm -i /dev/sda
Of course, you should replace sda with sdb or another device mnemonic if necessary.
To list all disks with all their defined partitions, along with the size of each, issue:
lsblk
For more detail, including the number of sectors, size, filesystem ID and type, and partition starting and ending sectors:
fdisk -l
To start up Linux, you need to identify mountable partitions to the GRUB bootloader. You can find this information with the blkid command. It lists each partition's unique identifier (UUID) and its filesystem type (e.g., ext3 or ext4):
blkid
To list the mounted filesystems, their mount points, and the space used and available for each (in megabytes):
df -m
Finally, you can list details for all USB and PCI buses and devices with these commands:
lsusb
or
lspci
Network
Linux offers tons of networking line commands. Here are just a few.
To see hardware details about your network card, issue:
lshw -C network
Traditionally, the command to show network interfaces was ifconfig:
ifconfig -a
But many people now use:
ip link show
or
netstat -i
In reading the output, it helps to know common network abbreviations:
Abbreviation | Meaning
---|---
lo | Loopback interface
eth0 or enp* | Ethernet interface
wlan0 | Wireless interface
ppp0 | Point-to-Point Protocol interface (used by a dial-up modem, PPTP VPN connection, or USB modem)
vboxnet0 or vmnet* | Virtual machine interface
The asterisks in this table are wildcard characters, serving as a placeholder for whatever series of characters appear from system to system.
To show your default gateway and routing tables, issue either of these commands:
ip route | column -t
or
netstat -r
Software
Let's conclude with two commands that display low-level software details. For example, what if you want to know whether you have the latest firmware installed? This command shows the UEFI or BIOS date and version:
dmidecode -t bios
What is the kernel version, and is it 64-bit? And what is the network hostname? To find out, issue:
uname -a
Quick reference chart
This chart summarizes all the commands covered in this article:
Display info about all hardware: inxi -Fxz, or hwinfo --short, or lshw -short
Display all CPU info: lscpu, or lshw -C cpu
Show CPU features (e.g., PAE, SSE2): lshw -C cpu | grep -i capabilities
Report whether the CPU is 32- or 64-bit: lshw -C cpu | grep -i width
Show current memory size and configuration: dmidecode -t memory | grep -i size, or lshw -short -C memory
Show maximum memory for the hardware: dmidecode -t memory | grep -i max
Determine whether memory slots are available: lshw -short -C memory | grep -i empty (a null answer means no slots available)
Determine the amount of video memory: lspci | grep -i vga, then reissue with the device number, for example: lspci -v -s 00:02.0 (the VRAM is the prefetchable value)
Show current memory use: free -m, or top
List the disk drives: lshw -short -C disk
Show detailed information about a specific disk drive: hdparm -i /dev/sda (replace sda if necessary)
List information about disks and partitions: lsblk (simple), or fdisk -l (detailed)
List partition IDs (UUIDs): blkid
List mounted filesystems, their mount points, and megabytes used and available for each: df -m
List USB devices: lsusb
List PCI devices: lspci
Show network card details: lshw -C network
Show network interfaces: ifconfig -a, or ip link show, or netstat -i
Display routing tables: ip route | column -t, or netstat -r
Display UEFI/BIOS info: dmidecode -t bios
Show kernel version, network hostname, more: uname -a
Do you have a favorite command that I overlooked? Please add a comment and share it.
Nov 07, 2019 | tunnelix.com
09/16/2018
Building an agentless inventory system for Linux servers from scratch is a very time-consuming task. To get precise information about your servers' inventory, Ansible comes in very handy, especially if you are restricted from installing an agent on the servers. However, there are some pieces of information that Ansible's default inventory mechanism cannot retrieve. In that case, a playbook needs to be created to retrieve those pieces of information. Examples are the VMware tools version and other application versions which you might want to include in your inventory system. Since Ansible makes it easy to create JSON files, this output can easily be manipulated for other interesting tasks, say an HTML static page. For that I would recommend Ansible-CMDB, which is very handy for such conversion. Ansible-CMDB allows you to create a pure HTML file based on the JSON file that was generated by Ansible. Ansible-CMDB is another amazing tool created by Ferry Boender.
Let's have a look how the agentless servers inventory with Ansible and Ansible-CMDB works. It's important to understand the prerequisites needed before installing Ansible. There are other articles which I published on Ansible:
- Some tips with Ansible for managing OS and applications
- Configure your LVM via Ansible
- Update GLIBC and restart BIND using Ansible
- Some fun with Ansible playbooks
- Getting started with Ansible deployment
Ansible Basics and Pre-requisites
1. In this article, you will get an overview of what Ansible inventory is capable of. Start by gathering the information that you will need for your inventory system. The goal is to make a plan first.
2. As explained in the article Getting started with Ansible deployment, you have to define a group and record the names of your servers (which can be resolved through the hosts file or a DNS server) or their IPs. Let's assume that the name of the group is "test".
3. Launch the following command to see a JSON output which describes the inventory of the machine. As you may notice, Ansible has fetched all the data:
ansible -m setup test
4. You can also append the output to a specific directory for future use with Ansible-cmdb. I would advise creating a specific directory (I created /home/Ansible-Workdesk) to prevent confusion about where the files are appended:
ansible -m setup --tree out/ test
5. At this point, you will have several files created in a tree format, i.e., a specific file named after each server, containing JSON information about the server's inventory.
Getting Hands-on with Ansible-cmdb
6. Now, you will have to install Ansible-cmdb which is pretty fast and easy. Do make sure that you follow all the requirements before installation:
git clone https://github.com/fboender/ansible-cmdb
cd ansible-cmdb && make install
7. To convert the JSON files into HTML, use the following command:
ansible-cmdb -t html_fancy_split out/
8. You should notice a directory called "cmdb" which contains some HTML files. Open index.html to view your server inventory system.
Tweaking the default template
9. As mentioned previously, some information is not available by default in the index.html template. You can tweak the /usr/local/lib/ansible-cmdb/ansiblecmdb/data/tpl/html_fancy_defs.html page and add more content, for example, the 'uptime' of the servers. To make the "Uptime" column visible, add the following line in the "Column definitions" section:
{"title": "Uptime", "id": "uptime", "func": col_uptime, "sType": "string", "visible": True},
Also, add the following lines in the "Column functions" section:
<%def name="col_uptime(host, **kwargs)">
${jsonxs(host, 'ansible_facts.uptime', default='')}
</%def>
Whatever comes after the dot just after ansible_facts.<xxx> is the parent value in the JSON file. Repeat step 7. Here is how the end result looks.
Sep 19, 2019 | www.redhat.com
It's easier than you think to get started automating your tasks with Ansible. This gentle introduction gives you the basics you need to begin streamlining your administrative life.
Posted by Jörg Kastning (Red Hat Accelerator)
At the end of 2015 and the beginning of 2016, we decided to use Red Hat Enterprise Linux (RHEL) as our third operating system, next to Solaris and Microsoft Windows. I was part of the team that tested RHEL, among other distributions, and would engage in the upcoming operation of the new OS. Thinking about a fast-growing number of Red Hat Enterprise Linux systems, it came to my mind that I needed a tool to automate things, because without automation the number of hosts I can manage is limited.
I had experience with Puppet back in the day but did not like that tool because of its complexity. We had more modules and classes than hosts to manage back then. So, I took a look at Ansible version 2.1.1.0 in July 2016.
What I liked about Ansible and still do is that it is push-based. On a target node, only Python and SSH access are needed to control the node and push configuration settings to it. No agent needs to be removed if you decide that Ansible isn't the right tool for you. The YAML syntax is easy to read and write, and the option to use playbooks as well as ad hoc commands makes Ansible a flexible solution that helps save time in our day-to-day business. So, it was at the end of 2016 when we decided to evaluate Ansible in our environment.
First stepsAs a rule of thumb, you should begin automating things that you have to do on a daily or at least a regular basis. That way, automation saves time for more interesting or more important things. I followed this rule by using Ansible for the following tasks:
- Set a baseline configuration for newly provisioned hosts (set DNS, time, network, sshd, etc.)
- Set up patch management to install Red Hat Security Advisories (RHSAs).
- Test how useful the ad hoc commands are, and where we could benefit from them.
Baseline Ansible configuration
For us, baseline configuration is the configuration every newly provisioned host gets. This practice makes sure the host fits into our environment and is able to communicate on the network. Because the same configuration steps have to be made for each new host, this is an awesome step to get started with automation.
The following are the tasks I started with:
- Register Red Hat Enterprise Linux and attach a subscription with Ansible
- Configure DNS with Ansible
- Synchronize time across hosts
- Configure the repos our hosts would use
- Make sure a certain set of packages were installed
- Configure Postfix to be able to send mail in our environment
- Configure firewalld
- Configure SELinux
(Some of these steps are already published here on Enable Sysadmin, as you can see, and others might follow soon.)
All of these tasks have in common that they are small and easy to start with, letting you gather experience with using different kinds of Ansible modules, roles, variables, and so on. You can run each of these roles and tasks standalone, or tie them all together in one playbook that sets the baseline for your newly provisioned system.
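As a sketch, such a tie-together playbook might look like this (the role names are illustrative, not the author's):
---
# Sketch: apply baseline roles to newly provisioned hosts.
- name: Apply baseline configuration
  hosts: new_hosts
  become: yes
  roles:
    - dns
    - time
    - repos
    - packages
    - postfix
    - firewalld
    - selinux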
Red Hat Enterprise Linux Server patch management with Ansible
As I explained on my GitHub page for ansible-role-rhel-patchmanagement, in our environment, we deploy Red Hat Enterprise Linux Servers for our operating departments to run their applications.
This role was written to provide a mechanism to install Red Hat Security Advisories on target nodes once a month. In our special use case, only RHSAs are installed to ensure a minimum security limit. The installation is enforced once a month. The advisories are summarized in "Patch-Sets." This way, it is ensured that the same advisories are used for all stages during a patch cycle.
The Ansible Inventory nodes are summarized in one of the following groups, each of which defines when a node is scheduled for patch installation:
- [rhel-patch-phase1] - On the second Tuesday of a month.
- [rhel-patch-phase2] - On the third Tuesday of a month.
- [rhel-patch-phase3] - On the fourth Tuesday of a month.
- [rhel-patch-phase4] - On the fourth Wednesday of a month.
In case packages were updated on target nodes, the hosts will reboot afterward.
Because the production systems are most important, they are divided into two separate groups (phase3 and phase4) to decrease the risk of failure and service downtime due to advisory installation.
You can find more about this role in my GitHub repo: https://github.com/Tronde/ansible-role-rhel-patchmanagement .
Updating and patch management are tasks every sysadmin has to deal with. With these roles, Ansible helped me get this task done every month, and I don't have to care about it anymore. Only when a system is not reachable, or yum has a problem, do I get an email report telling me to take a look. But, I got lucky, and have not yet received any mail report for the last couple of months, now. (Yes, of course, the system is able to send mail.)
Ad hoc commands
The possibility to run ad hoc commands for quick (and dirty) tasks was one of the reasons I chose Ansible. You can use these commands to gather information when you need it or to get things done without the need to write a playbook first.
I used ad hoc commands in cron jobs until I found the time to write playbooks for them. But, with time comes practice, and today I try to use playbooks and roles for every task that has to run more than once.
Here are small examples of ad hoc commands that provide quick information about your nodes.
Query package version:
ansible all -m command -a '/usr/bin/rpm -qi <PACKAGE NAME>' | grep 'SUCCESS\|Version'
Query OS release:
ansible all -m command -a '/usr/bin/cat /etc/os-release'
Query running kernel version:
ansible all -m command -a '/usr/bin/uname -r'
Query DNS servers in use by nodes:
ansible all -m command -a '/usr/bin/cat /etc/resolv.conf' | grep 'SUCCESS\|nameserver'
Hopefully, these samples give you an idea of what ad hoc commands can be used for.
Summary
It's not hard to start with automation. Just look for small and easy tasks you do every single day, or even more than once a day, and let Ansible do these tasks for you.
Eventually, you will be able to solve more complex tasks as your automation skills grow. But keep things as simple as possible. You gain nothing when you have to troubleshoot a playbook for three days when it solves a task you could have done in an hour.
Sep 16, 2019 | opensource.com
10 Ansible modules you need to know
See examples and learn the most important modules for automating everyday tasks with Ansible. 11 Sep 2019, DirectedSoul (Red Hat)
Ansible is an open source IT configuration management and automation platform. It uses human-readable YAML templates so users can program repetitive tasks to happen automatically without having to learn an advanced programming language.
Ansible is agentless, which means the nodes it manages do not require any software to be installed on them. This eliminates potential security vulnerabilities and makes overall management smoother.
Ansible modules are standalone scripts that can be used inside an Ansible playbook. A playbook consists of a play, and a play consists of tasks. These concepts may seem confusing if you're new to Ansible, but as you begin writing and working more with playbooks, they will become familiar.
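For orientation, here is a minimal sketch of those three layers (the play and task names are illustrative):
---
# Sketch: one playbook containing one play, which contains two tasks.
- name: Example play targeting all hosts
  hosts: all
  tasks:
    - name: First task, check connectivity
      ping:

    - name: Second task, print a message
      debug:
        msg: "Tasks run in order within a play."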
There are some modules that are frequently used in automating everyday tasks; those are the ones that we will cover in this article.
Ansible has three main files that you need to consider:
- Host/inventory file: Contains the entry of the nodes that need to be managed
- Ansible.cfg file: Located by default at /etc/ansible/ansible.cfg, it has the necessary privilege escalation options and the location of the inventory file
- Main file: A playbook that has modules that perform various tasks on a host listed in an inventory or host file
Module 1: Package management
There is a module for most popular package managers, such as DNF and APT, to enable you to install any package on a system. Functionality depends entirely on the package manager, but usually these modules can install, upgrade, downgrade, remove, and list packages. The names of relevant modules are easy to guess. For example, the DNF module is dnf_module , the old YUM module (required for Python 2 compatibility) is yum_module , while the APT module is apt_module , the Slackpkg module is slackpkg_module , and so on.
Example 1:
- name: install the latest version of Apache and MariaDB
  dnf:
    name:
      - httpd
      - mariadb-server
    state: latest
This installs the Apache web server and the MariaDB SQL database.
Example 2:
- name: Install a list of packages
  yum:
    name:
      - nginx
      - postgresql
      - postgresql-server
    state: present
This installs the list of packages and helps download multiple packages.
Module 2: Service
After installing a package, you need a module to start it. The service module enables you to start, stop, and reload installed packages; this comes in pretty handy.
Example 1:
- name: Start service foo, based on running process /usr/bin/foo
  service:
    name: foo
    pattern: /usr/bin/foo
    state: started
This starts the service foo.
Example 2:
- name: Restart network service for interface eth0
  service:
    name: network
    state: restarted
    args: eth0
This restarts the network service for the interface eth0.
Module 3: Copy
The copy module copies a file from the local or remote machine to a location on the remote machine.
Example 1:
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
  copy:
    src: /mine/ntp.conf
    dest: /etc/ntp.conf
    owner: root
    group: root
    mode: '0644'
    backup: yes
Example 2:
- name: Copy file with owner and permission, using symbolic representation
  copy:
    src: /srv/myfiles/foo.conf
    dest: /etc/foo.conf
    owner: foo
    group: foo
    mode: u=rw,g=r,o=r
Module 4: Debug
The debug module prints statements during execution and can be useful for debugging variables or expressions without having to halt the playbook.
Example 1:
- name: Display all variables/facts known for a host
  debug:
    var: hostvars[inventory_hostname]
    verbosity: 4
This displays all the variable information for a host that is defined in the inventory file.
Example 2:
- name: Write some content in a file /tmp/foo.txt
  copy:
    dest: /tmp/foo.txt
    content: |
      Good Morning!
      Awesome sunshine today.
  register: display_file_content

- name: Debug display_file_content
  debug:
    var: display_file_content
    verbosity: 2
This registers the content of the copy module output and displays it only when you specify verbosity as 2. For example:
ansible-playbook demo.yaml -vv
Module 5: File
The file module manages the file and its properties:
- It sets attributes of files, symlinks, or directories.
- It also removes files, symlinks, or directories.
Example 1:
- name: Change file ownership, group and permissions
  file:
    path: /etc/foo.conf
    owner: foo
    group: foo
    mode: '0644'
This sets the owner and group of foo.conf to foo and its permissions to 0644.
Example 2:
- name: Create a directory if it does not exist
  file:
    path: /etc/some_directory
    state: directory
    mode: '0755'
This creates a directory named some_directory and sets the permission to 0755.
Module 6: Lineinfile
The lineinfile module manages lines in a text file:
- It ensures a particular line is in a file or replaces an existing line using a back-referenced regular expression.
- It's primarily useful when you want to change just a single line in a file.
Example 1:
- name: Ensure SELinux is set to enforcing mode
  lineinfile:
    path: /etc/selinux/config
    regexp: '^SELINUX='
    line: SELINUX=enforcing
This sets the value of SELINUX=enforcing.
Example 2:
- name: Add a line to a file if the file does not exist, without passing regexp
  lineinfile:
    path: /etc/resolv.conf
    line: 192.168.1.99 foo.lab.net foo
    create: yes
This adds an entry for the IP and hostname in the resolv.conf file.
Module 7: Git
The git module manages git checkouts of repositories to deploy files or software.
Example 1:
# Create a git archive from a repo
- git:
    repo: https://github.com/ansible/ansible-examples.git
    dest: /src/ansible-examples
    archive: /tmp/ansible-examples.zip
Example 2:
- git:
    repo: https://github.com/ansible/ansible-examples.git
    dest: /src/ansible-examples
    separate_git_dir: /src/ansible-examples.git
This clones a repo with a separate Git directory.
Module 8: Cli_command
The cli_command module, first available in Ansible 2.7, provides a platform-agnostic way of pushing text-based configurations to network devices over the network_cli connection plugin. (Note that the examples below use the related cli_config module.)
Example 1:
- name: commit with comment
  cli_config:
    config: set system host-name foo
    commit_comment: this is a test
This sets the hostname for a switch and exits with a commit message.
Example 2:
- name: configurable backup path
  cli_config:
    config: "{{ lookup('template', 'basic/config.j2') }}"
    backup: yes
    backup_options:
      filename: backup.cfg
      dir_path: /home/user
This backs up a config to a different destination file.
Module 9: Archive
The archive module creates a compressed archive of one or more files. By default, it assumes the compression source exists on the target.
Example 1:
- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
  archive:
    path: /path/to/foo
    dest: /path/to/foo.tgz
Example 2:
- name: Create a bz2 archive of multiple files, rooted at /path
  archive:
    path:
      - /path/to/foo
      - /path/wong/foo
    dest: /path/file.tar.bz2
    format: bz2
Module 10: Command
One of the most basic but useful modules, the command module takes the command name followed by a list of space-delimited arguments.
Example 1:
- name: return motd to registered var
  command: cat /etc/motd
  register: mymotd
Example 2:
- name: Change the working directory to somedir/ and run the command as db_owner if /path/to/database does not exist.
  command: /usr/bin/make_database.sh db_user db_name
  become: yes
  become_user: db_owner
  args:
    chdir: somedir/
    creates: /path/to/database
Conclusion
There are tons of modules available in Ansible, but these ten are the most basic and powerful ones you can use for an automation job. As your requirements change, you can learn about other useful modules by entering ansible-doc <module-name> on the command line or referring to the official documentation.
Sep 13, 2019 | www.redhat.com
Ansible is a multiplier, a tool that automates and scales infrastructure of every size. It is considered to be a configuration management, orchestration, and deployment tool. It is easy to get up and running with Ansible. Even a new sysadmin could start automating with Ansible in a matter of a few hours.
Ansible automates using the SSH protocol. The control machine uses an SSH connection to communicate with its target hosts, which are typically Linux hosts. If you're a Windows sysadmin, you can still use Ansible to automate your Windows environments using WinRM as opposed to SSH. Presently, though, the control machine still needs to run Linux.
As a new sysadmin, you might start with just a few playbooks. But as your automation skills continue to grow, and you become more familiar with Ansible, you will learn best practices and further realize that, as your playbooks increase, using Ansible Galaxy becomes invaluable.
In this article, you will learn a bit about Ansible Galaxy, its structure, and how and when you can put it to use.
What Ansible does

Common sysadmin tasks that can be performed with Ansible include patching, updating systems, user and group management, and provisioning. Ansible presently has a huge footprint in IT automation -- if not the largest -- and is considered to be the most popular and widely used configuration management, orchestration, and deployment tool available today.
One of the main reasons for its popularity is its simplicity. It's simple, powerful, and agentless, which means a new or entry-level sysadmin can hit the ground automating in a matter of hours. Ansible allows you to scale quickly, efficiently, and cross-functionally.
Create roles with Ansible Galaxy

Ansible Galaxy is essentially a large public repository of Ansible roles. Roles ship with READMEs detailing the role's use and available variables. Galaxy contains a large number of roles that are constantly evolving and increasing.

Galaxy can use git to add other role sources, such as GitHub. You can initialize a new Galaxy role using ansible-galaxy init, or you can install a role directly from the Ansible Galaxy role store by executing the command ansible-galaxy install <name of role>.

Here are some helpful ansible-galaxy commands you might use from time to time:

- ansible-galaxy list displays a list of installed roles, with version numbers.
- ansible-galaxy remove <role> removes an installed role.
- ansible-galaxy info provides a variety of information about Ansible Galaxy.
- ansible-galaxy init can be used to create a role template suitable for submission to Ansible Galaxy.
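As a minimal sketch of where this leads, a playbook consumes an installed role by name; this assumes a role such as geerlingguy.nginx (from the role examples referenced below) has already been installed with ansible-galaxy install:

---
- name: Apply a Galaxy role to the web servers
  hosts: webservers
  become: yes
  roles:
    - geerlingguy.nginx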
To create an Ansible role using Ansible Galaxy, we need to use the ansible-galaxy command and its templates. Roles must be downloaded before they can be used in playbooks, and they are placed into the default directory /etc/ansible/roles. You can find role examples at https://galaxy.ansible.com/geerlingguy.

Create collections

While Ansible Galaxy has been the go-to tool for constructing and managing roles, with new iterations of Ansible you are bound to see changes or additions. In Ansible version 2.8 you get the new feature of collections.
What are collections and why are they worth mentioning? As the Ansible documentation states:
Collections are a distribution format for Ansible content. They can be used to package and distribute playbooks, roles, modules, and plugins.
Collections follow a simple structure:
collection/
├── docs/
├── galaxy.yml
├── plugins/
│   ├── modules/
│   │   └── module1.py
│   ├── inventory/
│   └── .../
├── README.md
├── roles/
│   ├── role1/
│   ├── role2/
│   └── .../
├── playbooks/
│   ├── files/
│   ├── vars/
│   ├── templates/
│   └── tasks/
└── tests/

The ansible-galaxy collection command implements the following subcommands. Notably, a few of them are the same as those used with ansible-galaxy:

- init creates a basic collection skeleton based on the default template included with Ansible, or your own template.
- build creates a collection artifact that can be uploaded to Galaxy, or your own repository.
- publish publishes a built collection artifact to Galaxy.
- install installs one or more collections.

In order to determine what can go into a collection, the official collections documentation is a great resource.
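The galaxy.yml file at the collection root is what the build subcommand reads when producing the artifact. A minimal sketch with placeholder values:

namespace: my_namespace      # placeholder Galaxy namespace
name: my_collection          # placeholder collection name
version: 1.0.0
readme: README.md
authors:
  - Jane Doe <jane@example.com>
description: An example collection skeleton
license:
  - GPL-3.0-or-later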
Conclusion

Establish yourself as a stellar sysadmin with an automation solution that is simple, powerful, and agentless, and that scales your infrastructure quickly and efficiently. Using Ansible Galaxy to create roles is superb thinking, and an ideal way to stay organized and thoughtful in managing your ever-growing playbooks.
The only way to improve your automation skills is to work with a dedicated tool and prove the value and positive impact of automation on your infrastructure.
Sep 13, 2019 | github.com
1. Introduction
Dell EMC OpenManage Ansible Modules provide customers the ability to automate the out-of-band configuration management, deployment, and updates of Dell EMC PowerEdge Servers using Ansible, by leveraging the management automation built into the iDRAC with Lifecycle Controller. iDRAC provides both REST APIs based on the DMTF Redfish industry standard and WS-Management (WS-MAN) for management automation of PowerEdge Servers.
With OpenManage Ansible modules, you can:
- Server administration
- Configure iDRAC's settings such as:
- iDRAC Network Settings
- SNMP and SNMP Alert Settings
- Timezone and NTP Settings
- System settings such as server topology
- LC attributes such as CSIOR etc.
- Perform User administration
- BIOS and Boot Order configuration
- RAID Configuration
- OS Deployment
- Firmware Updates
1.1 How do OpenManage Ansible Modules work?
OpenManage Ansible modules extensively use the Server Configuration Profile (SCP) for most of the configuration management, deployment, and updating of PowerEdge Servers. Lifecycle Controller 2 version 1.4 and later adds support for SCP. An SCP contains all BIOS, iDRAC, Lifecycle Controller, network, and storage settings of a PowerEdge server and can be applied to multiple servers, enabling rapid, reliable, and reproducible configuration.
An SCP operation can be performed using any of the following methods:
- Export/Import to/from a remote network share via CIFS, NFS
- Export/Import to/from a remote network share via HTTP, HTTPS (iDRAC firmware 3.00.00.00 and above)
- Export/Import via local file streaming (iDRAC firmware 3.00.00.00 and above)
NOTE: This BETA release of the OpenManage Ansible Modules supports only the first option listed above for SCP operations, i.e., export/import to/from a remote network share via CIFS or NFS. Future releases will support all of the options.
Setting up a local mount point for a remote network share
Since OpenManage Ansible modules extensively use SCP to automate and orchestrate configuration, deployment, and updates on PowerEdge servers, you must locally mount the remote network share (CIFS or NFS) on the Ansible server where you will be executing the playbooks or modules. The local mount point must also have read-write privileges so that the OpenManage Ansible modules can write an SCP file to the remote network share, which is then imported by the iDRAC.
You can use either of the following ways to setup a local mount point:
- Use the mount command to mount the remote network share:

  # Mount a remote CIFS network share on the local Ansible machine.
  # Here 192.168.10.10 is the IP address of the CIFS file server (a hostname
  # works as well), Share is the directory being shared, and /mnt/CIFS is
  # the location where the file system is mounted on the local machine.
  sudo mount -t cifs \\\\192.168.10.10\\Share -o username=user1,password=password,dir_mode=0777,file_mode=0666 /mnt/CIFS

  # Mount a remote NFS network share on the local Ansible machine.
  # Here 192.168.10.11 is the IP address of the NFS file server (a hostname
  # works as well), Share is the directory being exported, and /mnt/NFS is
  # the location where the file system is mounted. Note that NFS checks
  # access permissions against user IDs (UIDs): for read-write privileges on
  # the local mount point, the UID and GID of the user on your local machine
  # need to match the UID and GID of the owner of the folder you are trying
  # to access on the server. Another option for granting rw privileges is
  # the all_squash export option.
  sudo mount -t nfs 192.168.10.11:/Share /mnt/NFS -o rw,user,auto

- An alternate and preferred way is to use /etc/fstab for mounting the remote network share. That way, you won't have to re-mount the network share after a reboot or remember all the options. The general syntax for mounting the network share in /etc/fstab is as follows:

  # Mounting a CIFS network share:
  //192.168.10.10/Share /mnt/CIFS cifs username=user,password=pwd,domain=domain_name,dir_mode=0777,file_mode=0666,iocharset=utf8 0 0

  # Mounting an NFS network share:
  192.168.10.11:/Share /mnt/NFS nfs rw,user,auto 0 0
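Given the Ansible theme of this page, the mount itself can also be managed by Ansible rather than by hand. A minimal sketch using the core mount module against the control machine itself, assuming the NFS share used above (the module also writes the /etc/fstab entry):

---
- name: Ensure the NFS share for SCP files is mounted
  hosts: localhost
  connection: local
  become: yes
  tasks:
    - name: Mount the remote NFS share and persist it in /etc/fstab
      mount:
        path: /mnt/NFS
        src: 192.168.10.11:/Share
        fstype: nfs
        opts: rw,user,auto
        state: mounted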
1.2 What is included in this BETA release?
Use cases included in this BETA release (protocol support: WS-Management):

- Server Administration
  - Power and Thermal: power control
  - iDRAC reset
- iDRAC Configuration
  - User and password management (local): create user, change password, change user privileges, remove a user
  - iDRAC network configuration: NIC selection, Zero-Touch Auto-Config settings, IPv4 address settings (enable/disable IPv4, static IPv4 address settings with IPv4 address, gateway, and netmask, enable/disable DNS from DHCP, preferred/alternate DNS server), VLAN configuration
  - SNMP and SNMP alert configuration: SNMP agent configuration; SNMP alert destination configuration (add, modify, and delete an alert destination)
  - Server Configuration Profile (SCP): export SCP to a remote network share (CIFS, NFS); import SCP from a remote network share (CIFS, NFS)
  - iDRAC services: iDRAC web server configuration (enable/disable web server, TLS protocol version, SSL encryption bits, HTTP/HTTPS port, timeout period)
  - Lifecycle Controller (LC) attributes: enable/disable CSIOR (Collect System Inventory on Restart)
- BIOS Configuration
  - Boot order settings: change boot mode (BIOS, UEFI); change BIOS/UEFI boot sequence; one-time BIOS/UEFI boot configuration settings
- Deployment
  - OS deployment from a remote network share (CIFS, NFS)
- Storage
  - Virtual drives: create and delete virtual drives
- Update
  - Firmware update from a remote network share (CIFS, NFS)
- Monitor
  - Logs: export Lifecycle Controller (LC) logs and Tech Support Report (TSR) to a remote network share (CIFS, NFS)
2. Requirements
- Ansible >= '2.3'
- Python >= '2.7.9'
- Dell EMC OpenManage Python SDK
Aug 28, 2019 | www.redhat.com
We take our first glimpse at the Ansible documentation on the official website. While Ansible can be overwhelming with so many immediate options, let's break down what is presented to us here. Putting our attention on the page's main pane, we are given five offerings from Ansible. This pane is a central location, or one-stop shop, to maneuver through the documentation for products like Ansible Tower, Ansible Galaxy, and Ansible Lint.

We can even dive into Ansible Network for specific module documentation that extends the power and ease of Ansible automation to network administrators. The focal point of the rest of this article will be the Ansible Project documentation, to give us a great starting point on our automation journey.

Once we click the Ansible Documentation tile under the Ansible Project section, the first action we should take is to ensure we are viewing the correct version of the documentation. We can get our current version of Ansible from our control node's command line by running ansible --version. Armed with the version information provided by the output, we can select the matching version in the site's upper-left corner using the drop-down menu, which by default says "latest".
Aug 04, 2019 | www.redhat.com
10 YAML tips for people who hate YAML

Do you hate YAML? These tips might ease your pain.

Posted June 10, 2019 | by Seth Kenlon (Red Hat)

There are lots of formats for configuration files: a list of values, key and value pairs, INI files, YAML, JSON, XML, and many more. Of these, YAML sometimes gets cited as a particularly difficult one to handle for a few different reasons. While its ability to reflect hierarchical values is significant and its minimalism can be refreshing to some, its Python-like reliance upon syntactic whitespace can be frustrating.
However, the open source world is diverse and flexible enough that no one has to suffer through abrasive technology, so if you hate YAML, here are 10 things you can (and should!) do to make it tolerable. Starting with zero, as any sensible index should.
0. Make your editor do the work

Whatever text editor you use probably has plugins to make dealing with syntax easier. If you're not using a YAML plugin for your editor, find one and install it. The effort you spend on finding a plugin and configuring it as needed will pay off tenfold the very next time you edit YAML.
For example, the Atom editor comes with a YAML mode by default, and while GNU Emacs ships with minimal support, you can add additional packages like yaml-mode to help.
If your favorite text editor lacks a YAML mode, you can address some of your grievances with small configuration changes. For instance, the default text editor for the GNOME desktop, Gedit, doesn't have a YAML mode available, but it does provide YAML syntax highlighting by default and features configurable tab width.
With the drawspaces Gedit plugin package, you can make white space visible in the form of leading dots, removing any question about levels of indentation.

Take some time to research your favorite text editor. Find out what the editor, or its community, does to make YAML easier, and leverage those features in your work. You won't be sorry.
1. Use a linter

Ideally, programming languages and markup languages use predictable syntax. Computers tend to do well with predictability, so the concept of a linter was invented in 1978. If you're not using a linter for YAML, then it's time to adopt this 40-year-old tradition and use yamllint.

You can install yamllint on Linux using your distribution's package manager. For instance, on Red Hat Enterprise Linux 8 or Fedora:

$ sudo dnf install yamllint

Invoking yamllint is as simple as telling it to check a file. Here's an example of yamllint's response to a YAML file containing an error:

$ yamllint errorprone.yaml
errorprone.yaml
23:10     error    syntax error: mapping values are not allowed here
23:11     error    trailing spaces (trailing-spaces)

That's not a time stamp on the left. It's the error's line and column number. You may or may not understand what error it's talking about, but now you know the error's location. Taking a second look at the location often makes the error's nature obvious. Success is eerily silent, so if you want feedback based on the lint's success, you can add a conditional second command with a double ampersand (&&). In a POSIX shell, the command after && runs only if the preceding command exits with 0, so upon success, your echo command makes that clear. This tactic is somewhat superficial, but some users prefer the assurance that the command did run correctly, rather than failing silently. Here's an example:

$ yamllint perfect.yaml && echo "OK"
OK

The reason yamllint is so silent when it succeeds is that it returns 0 errors when there are no errors.

2. Write in Python, not YAML

If you really hate YAML, stop writing in YAML, at least in the literal sense. You might be stuck with YAML because that's the only format an application accepts, but if the only requirement is to end up in YAML, then work in something else and then convert. Python, along with the excellent pyyaml library, makes this easy, and you have two methods to choose from: self-conversion or scripted.

Self-conversion

In the self-conversion method, your data files are also Python scripts that produce YAML. This works best for small data sets. Just write your JSON data into a Python variable, prepend an import statement, and end the file with a simple three-line output statement:

#!/usr/bin/python3
import yaml

d={
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

f=open('output.yaml','w')
f.write(yaml.dump(d))
f.close()

Run the file with Python to produce a file called output.yaml:

$ python3 ./example.json
$ cat output.yaml
glossary:
  GlossDiv:
    GlossList:
      GlossEntry:
        Abbrev: ISO 8879:1986
        Acronym: SGML
        GlossDef:
          GlossSeeAlso: [GML, XML]
          para: A meta-markup language, used to create markup languages such as DocBook.
        GlossSee: markup
        GlossTerm: Standard Generalized Markup Language
        ID: SGML
        SortAs: SGML
    title: S
  title: example glossary

This output is perfectly valid YAML, although yamllint does issue a warning that the file is not prefaced with ---, which is something you can adjust either in the Python script or manually.

Scripted conversion

In this method, you write in JSON and then run a Python conversion script to produce YAML. This scales better than self-conversion, because it keeps the converter separate from the data.

Create a JSON file and save it as example.json. Here is an example from json.org:

{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "S",
            "GlossList": {
                "GlossEntry": {
                    "ID": "SGML",
                    "SortAs": "SGML",
                    "GlossTerm": "Standard Generalized Markup Language",
                    "Acronym": "SGML",
                    "Abbrev": "ISO 8879:1986",
                    "GlossDef": {
                        "para": "A meta-markup language, used to create markup languages such as DocBook.",
                        "GlossSeeAlso": ["GML", "XML"]
                    },
                    "GlossSee": "markup"
                }
            }
        }
    }
}

Create a simple converter and save it as json2yaml.py. This script imports both the YAML and JSON Python modules, loads a JSON file defined by the user, performs the conversion, and then writes the data to output.yaml:

#!/usr/bin/python3
import yaml
import sys
import json

OUT=open('output.yaml','w')
IN=open(sys.argv[1], 'r')

JSON = json.load(IN)
IN.close()
yaml.dump(JSON, OUT)
OUT.close()

Save this script in your system path, and execute as needed:

$ ~/bin/json2yaml.py example.json

3. Parse early, parse often

Sometimes it helps to look at a problem from a different angle. If your problem is YAML, and you're having a difficult time visualizing the data's relationships, you might find it useful to restructure that data, temporarily, into something you're more familiar with.

If you're more comfortable with dictionary-style lists or JSON, for instance, you can convert YAML to JSON in two commands using an interactive Python shell. Assume your YAML file is called mydata.yaml:

$ python3
>>> import yaml
>>> f=open('mydata.yaml','r')
>>> yaml.load(f)
{'document': 34843, 'date': datetime.date(2019, 5, 23), 'bill-to': {'given': 'Seth', 'family': 'Kenlon', 'address': {'street': '51b Mornington Road\n', 'city': 'Brooklyn', 'state': 'Wellington', 'postal': 6021, 'country': 'NZ'}}, 'words': 938, 'comments': 'Good article. Could be better.'}

There are many other examples, and there are plenty of online converters and local parsers, so don't hesitate to reformat data when it starts to look more like a laundry list than markup.
4. Read the spec

After I've been away from YAML for a while and find myself using it again, I go straight back to yaml.org to re-read the spec. If you've never read the specification for YAML and you find YAML confusing, a glance at the spec may provide the clarification you never knew you needed. The specification is surprisingly easy to read, with the requirements for valid YAML spelled out with lots of examples in chapter 6.
5. Pseudo-config

Before I started writing my book, Developing Games on the Raspberry Pi (Apress, 2019), the publisher asked me for an outline. You'd think an outline would be easy. By definition, it's just the titles of chapters and sections, with no real content. And yet, out of the 300 pages published, the hardest part to write was that initial outline.
YAML can be the same way. You may have a notion of the data you need to record, but that doesn't mean you fully understand how it's all related. So before you sit down to write YAML, try doing a pseudo-config instead.
A pseudo-config is like pseudo-code. You don't have to worry about structure or indentation, parent-child relationships, inheritance, or nesting. You just create iterations of data in the way you currently understand it inside your head.
Once you've got your pseudo-config down on paper, study it, and transform your results into valid YAML.
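A minimal sketch of what that transformation might look like, reusing the employee-record motif that appears later in this article - free-form notes first, valid YAML second:

# pseudo-config, as scribbled on paper:
#   martin - developer - knows python and perl
#   tabitha - developer - knows lisp

---
- martin:
    job: Developer
    skills:
      - python
      - perl
- tabitha:
    job: Developer
    skills:
      - lisp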
6. Resolve the spaces vs. tabs debate

OK, maybe you won't definitively resolve the spaces-vs-tabs debate, but you should at least resolve the question within your project or organization. Whether you resolve it with a post-process sed script, a text editor configuration, or a blood oath to respect your linter's results, anyone on your team who touches a YAML project must agree to use spaces (in accordance with the YAML spec).

Any good text editor allows you to define a number of spaces instead of a tab character, so the choice shouldn't negatively affect fans of the Tab key.
Tabs and spaces are, as you probably know all too well, essentially invisible. And when something is out of sight, it rarely comes to mind until the bitter end, when you've tested and eliminated all of the "obvious" problems. An hour wasted to an errant tab or group of spaces is your signal to create a policy to use one or the other, and then to develop a fail-safe check for compliance (such as a Git hook to enforce linting).
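One way to build that fail-safe check is a shared linter configuration committed to the repository. A minimal sketch of a .yamllint file (yamllint reads this format; the rule value here is an assumption to adapt to your policy):

---
extends: default

rules:
  indentation:
    spaces: 2   # enforce consistent two-space indents everywhere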
7. Less is more (or more is less)

Some people like to write YAML to emphasize its structure. They indent vigorously to help themselves visualize chunks of data. It's a sort of cheat to mimic markup languages that have explicit delimiters.
Here's a good example from Ansible's documentation:

# Employee records
- martin:
        name: Martin D'vloper
        job: Developer
        skills:
            - python
            - perl
            - pascal
- tabitha:
        name: Tabitha Bitumen
        job: Developer
        skills:
            - lisp
            - fortran
            - erlang

For some users, this approach is a helpful way to lay out a YAML document, while other users miss the structure for the void of seemingly gratuitous white space.
If you own and maintain a YAML document, then you get to define what "indentation" means. If blocks of horizontal white space distract you, then use the minimal amount of white space required by the YAML spec. For example, the same YAML from the Ansible documentation can be represented with fewer indents without losing any of its validity or meaning:
---
- martin:
   name: Martin D'vloper
   job: Developer
   skills:
   - python
   - perl
   - pascal
- tabitha:
   name: Tabitha Bitumen
   job: Developer
   skills:
   - lisp
   - fortran
   - erlang

8. Make a recipe

I'm a big fan of repetition breeding familiarity, but sometimes repetition just breeds repeated stupid mistakes. Luckily, a clever peasant woman experienced this very phenomenon back in 396 AD (don't fact-check me), and invented the concept of the recipe.
If you find yourself making YAML document mistakes over and over, you can embed a recipe or template in the YAML file as a commented section. When you're adding a section, copy the commented recipe and overwrite the dummy data with your new real data. For example:
---
# - <common name>:
#     name: Given Surname
#     job: JOB
#     skills:
#       - LANG
- martin:
    name: Martin D'vloper
    job: Developer
    skills:
      - python
      - perl
      - pascal
- tabitha:
    name: Tabitha Bitumen
    job: Developer
    skills:
      - lisp
      - fortran
      - erlang

9. Use something else

I'm a fan of YAML, generally, but sometimes YAML isn't the answer. If you're not locked into YAML by the application you're using, then you might be better served by some other configuration format. Sometimes config files outgrow themselves and are better refactored into simple Lua or Python scripts.
YAML is a great tool and is popular among users for its minimalism and simplicity, but it's not the only tool in your kit. Sometimes it's best to part ways. One of the benefits of YAML is that parsing libraries are common, so as long as you provide migration options, your users should be able to adapt painlessly.
If YAML is a requirement, though, keep these tips in mind and conquer your YAML hatred once and for all!
Aug 04, 2019 | www.redhat.com
Ansible: IT automation for everybody

Kick the tires with Ansible and start automating with these simple tasks.
Posted July 31, 2019 | by Jörg Kastning
Ansible is an open source tool for software provisioning, application deployment, orchestration, configuration, and administration. Its purpose is to help you automate your configuration processes and simplify the administration of multiple systems. Thus, Ansible essentially pursues the same goals as Puppet, Chef, or Saltstack.
What I like about Ansible is that it's flexible, lean, and easy to start with. In most use cases, it keeps the job simple.
I chose to use Ansible back in 2016 because no agent has to be installed on the managed nodes -- a node is what Ansible calls a managed remote system. All you need to start managing a remote system with Ansible is SSH access to the system, and Python installed on it. Python is preinstalled on most Linux systems, and I was already used to managing my hosts via SSH, so I was ready to start right away. And if the day comes where I decide not to use Ansible anymore, I just have to delete my Ansible controller machine (control node) and I'm good to go. There are no agents left on the managed nodes that have to be removed.
Ansible offers two ways to control your nodes. The first uses playbooks: simple ASCII files written in YAML (Yet Another Markup Language), which is easy to read and write. And second, there are the ad-hoc commands, which allow you to run a command or module without having to create a playbook first.
You organize the hosts you would like to manage and control in an inventory file, which offers flexible format options. For example, this could be an INI-like file that looks like:
mail.example.com

[webservers]
foo.example.com
bar.example.com

[dbservers]
one.example.com
two.example.com
three.example.com

[site1:children]
webservers
dbservers
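The same groups can also be expressed in YAML inventory format, which some prefer for consistency with playbooks; a minimal sketch of the equivalent structure:

all:
  hosts:
    mail.example.com:
  children:
    webservers:
      hosts:
        foo.example.com:
        bar.example.com:
    dbservers:
      hosts:
        one.example.com:
        two.example.com:
        three.example.com:
    site1:
      children:
        webservers:
        dbservers: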
Examples

I would like to give you two small examples of how to use Ansible. I started with these really simple tasks before I used Ansible to take control of more complex tasks in my infrastructure.

Ad-hoc: Check if Ansible can remotely manage a system

As you might recall from the beginning of this article, all you need to manage a remote host is SSH access to it, and a working Python interpreter on it. To check if these requirements are fulfilled, run the following ad-hoc command against a host from your inventory:
[jkastning@ansible]$ ansible mail.example.com -m ping
mail.example.com | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Playbook: Keep installed packages up to date

This example shows how to use a playbook to keep installed packages up to date. The playbook is an ASCII text file which looks like this:
---
# Make sure all packages are up to date
- name: Update your system
  hosts: mail.example.com
  tasks:
    - name: Make sure all packages are up to date
      yum:
        name: "*"
        state: latest

Now, we are ready to run the playbook:
[jkastning@ansible]$ ansible-playbook yum_update.yml

PLAY [Update your system] **************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [mail.example.com]

TASK [Make sure all packages are up to date] *******************************************************
ok: [mail.example.com]

PLAY RECAP *****************************************************************************************
mail.example.com : ok=2 changed=0 unreachable=0 failed=0

Here everything is ok and there is nothing else to do. All installed packages are already the latest version.
It's simple: Try it and use it

The examples above are quite simple and should only give you a first impression. But, from the start, it did not take me long to use Ansible for more complex tasks like the Poor Man's RHEL Mirror or the Ansible Role for RHEL Patchmanagement.
Today, Ansible saves me a lot of time and supports my day-to-day work tasks quite well. So what are you waiting for? Try it, use it, and feel a bit more comfortable at work.
Jörg Kastning has been a sysadmin for over ten years. He is a member of the Red Hat Accelerators and runs his own blog at https://www.my-it-brain.de.
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment, and undeployment.

Contents

- Objective
- Operating System and Software Versions
- Requirements
- Difficulty
- Conventions
- Introduction
- Distributions, major and minor versions
- Setting up the building environment
- Building the first version of the package
- Building another version of the package
- Conclusion

Operating System and Software Versions

- Operating system: Red Hat Enterprise Linux 7.5
- Software: rpm-build 4.11.3+

Requirements

Privileged access to the system for install, normal access for build.

Difficulty

MEDIUM

Conventions

- # - requires given linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command
- $ - given linux commands to be executed as a regular non-privileged user

Introduction

One of the core features of any Linux system is that it is built for automation. If a task may need to be executed more than once - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typo errors, or saving some keyboard hits) to complex scripted systems where tasks run from cron at specified times, interacting with each other, working with the results of other scripts, perhaps controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature, and to complete the task it was originally written for as well. Now you have two versions of the script: the first on the first two systems, the second on the third system.
You have 1024 computers running in the datacenter, and 256 of them need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded into some version, but which one? And on which systems is it deployed?
On RPM-based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing but the tools the admin wrote for convenience.
In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh, to provide a way for all systems to have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.
Distributions, major and minor versions

In general, the major and minor version of the build machine should be the same as those of the systems the package is to be deployed on, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and they only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm, and on any version - we only need to ensure that the build machine's rpm-build package is on the oldest version in the environment.

Setting up the building environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build

From now on, we do not use the root user, and for a good reason: building packages does not require root privilege, and you don't want to break your building machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS

Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents, and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi. The previously installed rpmbuild package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below, called admin-scripts-1.0.spec:

Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q

%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe
- release 1.0 - initial release

Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case, the two shell scripts. Let's create the directory for the sources (named as the package name appended with the main version):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts

And copy/move the scripts into it:

$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh

As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:

#!/bin/bash
echo "news pulled"
exit 0

Do not forget to add the appropriate rights to the files in the source - in our case, execution rights:

chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh

Now we create a tar.gz archive from the source in the same directory:

cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1

We are ready to build the package:

rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec

We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, a missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):

$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm

We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:

$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug. 1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.

And of course we can install it (with root privileges).

As we installed the scripts into a directory that is on every user's $PATH, you can run them as any user in the system, from any directory:

$ pullnews.sh
news pulled

The package can be distributed as it is, and can be pushed into repositories available to any number of systems. To do so is out of the scope of this tutorial - however, building another version of the package certainly is not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. There will soon be many requests for improvements - in this example, many votes come from happy users asking that pullnews.sh print another line on execution; this feature would save the whole company. We need to build another version of the package, as we don't want to install another script, but a new version of it with the same name and path, since the sysadmins in our organization already rely on it heavily.

First we change the source of pullnews.sh in SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0

We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (and so the Source0 reference will still be valid). Note that we delete the previous archive first:

cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1

Now we create another specfile with a higher release number:

cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec

We don't change much on the package itself, so we simply administrate the new version as shown below:

Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q

%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe
- release 1.0 - initial release

All done, we can build another version of our package containing the updated script. Note that we reference the specfile with the higher version as the source of the build:

rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec

If the build is successful, we now have two versions of the package under our RPMS directory:

ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm

And now we can install the "advanced" script, or upgrade it if it is already installed. Our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe
- release 1.0 - initial release

Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, on the version we installed or upgraded to. RPM gives us the ability to replace old stuff needed only in previous versions, to add custom dependencies, or to provide some tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages, and distribute it across our environment not only with ease, but with consistency.
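Returning to the theme of this page: once the rpm is pushed into a repository your hosts can reach, distributing it with Ansible is a one-task playbook. A minimal sketch, assuming the repository is already configured on the managed nodes:

---
- name: Distribute the custom admin-scripts package
  hosts: all
  become: yes
  tasks:
    - name: Install or upgrade admin-scripts to the latest packaged version
      yum:
        name: admin-scripts
        state: latest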
Mar 20, 2017 | www.informit.com
Usage of Metadata

When performing the change process, metadata is used for analytical purposes. This may be in the form of reports or a direct search in the database or databases where metadata is maintained. Trace information is often used - for instance, to determine in which configuration items changes are required due to an event. Information about variants or branches belonging to a configuration item is also used to determine if a change has effects in several places.
Finally metadata may be used to determine if a configuration item has other outstanding event registrations, such as whether other changes are in the process of being implemented or are awaiting a decision about implementation.
Consequence Analysis

When analyzing an event, you must consider the cost of implementing changes. This is not always a simple matter. The following checklists, adapted from a list by Karl Wiegers, may help in analyzing the effects of a proposed change. The lists are not exhaustive and are meant only as inspiration.
Identify
- All requirements affected by or in conflict with the proposed change
- The consequences of not introducing the proposed change
- Possible adverse effects and other risks connected with implementation
- How much of what has already been invested in the product will be lost if the proposed change is implemented-or if it is not
Check if the proposed change
- Has an effect on nonfunctional requirements, such as performance requirements (ISO 9126, a standard for quality characteristics, defines six characteristics: functional, performance, availability, usability, maintainability, and portability. The latter five are typically referred to as nonfunctional.)
- May be introduced with known technology and available resources
- Will cause unacceptable resource requirements in development or test
- Will entail a higher unit price
- Will affect marketing, production, services, or support
Follow-on effects may be additions, changes, or removals in:

- User interfaces or reports, internal or external interfaces, or data storage
- Designed objects, source code, build scripts, include files
- Test plans and test specifications
- Help texts, user manuals, training material, or other user documentation
- Project plan, quality plan, configuration management plan, and other plans
- Other systems, applications, libraries, or hardware components

Roles
The configuration (or change) control board (CCB) is responsible for change control. A configuration control board may consist of a single person, such as the author or developer when a document or a piece of code is first written, or an agile team working in close contact with users and sponsors, if work can be performed in an informal way without bureaucracy and heaps of paper. It may also-and will typically, for most important configuration items-consist of a number of people, such as the project manager, a customer representative, and the person responsible for quality assurance.
Process Descriptions

The methods, conventions, and procedures necessary for carrying out the activities in change control may be:

- Description of the change control process structure
- Procedures in the life cycles of events and changes
- Convention(s) for forming different types of configuration control boards
- Definition of responsibility for each type of configuration control board
- Template(s) for event registration
- Template(s) for change request

Connection with Other Activities
Change control is clearly delimited from other activities in configuration management, though all activities may be implemented in the same tool in an automated system. Whether change control is considered a configuration management activity may differ from company to company. Certainly it is tightly coupled with project management, product management, and quality assurance, and in some cases is considered part of quality assurance or test activities. Still, when defining and distributing responsibilities, it's important to keep the boundaries clear, so change control is part of configuration management and nothing else.
Example

Figure 1–10 shows an example of a process diagram for change control. A number of processes are depicted in the diagram as boxes with input and output sections (e.g., "Evaluation of event registration"). All these processes must be defined and, preferably, described.

1.5 Status Reporting
Status reporting makes available, in a useful and readable way, the information necessary to effectively manage a product's development and maintenance. Other activity areas in configuration management deliver the data foundation for status reporting, in the form of metadata and change control data. Status reporting entails extraction, arrangement, and formation of these data according to demand. Figure 1–11 shows how status reporting is influenced by its surroundings .
Figure 1–11 Status Reporting in Context
Inputs

Status reporting can take place at any time.
Outputs

The result of status reporting is the generation of status report(s). Each company must define the reports it should be possible to produce. This may be a release note, an item list (by status, history, or composition), or a trace matrix. It should also be possible to extract ad hoc information on the basis of a search in the available data.
Process Descriptions

The methods, conventions, and procedures necessary for the activities in status reporting may be:

- Procedure(s) for the production of available status reports
- Procedure(s) for ad hoc extraction of information
- Templates for status reports that the configuration management system should be able to produce

Roles
The librarian is responsible for ensuring that data for and information in status reports are correct, even when reporting is fully automated. Users themselves should be able to extract as many status reports as possible. Still, it may be necessary to involve a librarian, especially if metadata and change data are spread over different media.
Connection with Other Activities

Status reporting depends on correct and sufficient data from other activity areas in configuration management. It's important to understand what information should be available in status reports, so it can be specified early on. It may be too late to get information in a status report if the information was requested late in the project and wasn't collected. Status reports from the configuration management system can be used within almost all process areas in a company. They may be an excellent source of metrics for other process areas, such as helping to identify which items have had the most changes made to them, so these items can be the target of further testing or redesign.

1.6 False Friends: Version Control and Baselines
The expression "false friends" is used in the world of languages. When learning a new language, you may falsely think you know the meaning of a specific word, because you know the meaning of a similar word in your own or a third language. For example, the expression faire exprs in French means "to do something on purpose," and not, as you might expect, "to do something fast." There are numerous examples of "false friends"-some may cause embarrassment, but most "just" cause confusion.
This section discusses the concepts of "version control" and "baseline." These terms are frequently used when talking about configuration management, but there is no common and universal agreement on their meaning. They may, therefore, easily become "false friends" if people in a company use them with different meanings. The danger is even greater between a company and a subcontractor or customer, where the possibility of cultural differences is greater than within a single company. It is hoped that this section will help reduce misunderstandings.
Version Control"Version control" can have any of the following meanings:
- Configuration management as such
- Configuration management of individual items, as opposed to configuration management of deliveries
- Control of versions of an item (identification and storage of items) without the associated change control (which is a part of configuration management)
- Storage of intermediate results (backup of work carried out over a period of time for the sole benefit of the producer)
It's common but inadvisable to use the terms "configuration management" and "version control" indiscriminately. A company must make up its mind as to which meaning it will attach to "version control" and define the term relative to the meaning of configuration management. The term "version control" is not used in this book unless its meaning is clear from the context. Nor does the concept exist in IEEE standards referred to in this book, which use "version" in the sense of "edition."
Baseline"Baseline" can have any of the following meanings:
- An item approved and placed in storage in a controlled library
- A delivery (a collection of items released for usage)
- A configuration item, usually a delivery, connected to a specific milestone in a project
"Configuration item" as used in this book is similar to the first meaning of "baseline" in the previous list. "Delivery" is used in this book in the sense of a collection of configuration items (in itself a configuration item), whether or not such a delivery is associated with a milestone or some other specific event-similar to either the second or third meaning in the list, depending on circumstances.
The term "baseline" is not used in this book at all, since misconceptions could result from the many senses in which it's used. Of course, nothing prevents a company from using the term "baseline," as long as the sense is clear to everyone involved.
Feb 04, 2017 | www.cyberciti.biz
The diff command compares files line by line. It can also compare two directories:

# Compare two folders using diff
diff /etc /tmp/etc_old

Rafal Matczak (September 29, 2015) adds a quicker way to find differences between two directories:

diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})
Prerequisites
- Distro: RHEL/CentOS/Debian/Ubuntu Linux
- Jinja2: A modern and designer-friendly templating language for Python
- PyYAML: A YAML parser and emitter for the Python programming language
- paramiko: Native Python SSHv2 protocol library
- httplib2: A comprehensive HTTP client library
- Most of the actions listed in this post are written with the assumption that they will be executed by the root user running bash or any other modern shell
[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS Published on Aug 24, 2018 | linuxconfig.org
Linux Install Ansible Configuration Management And IT Automation Tool
A system administrator's guide to getting started with Ansible - FAST! by Curtis Rempel, a Red Hat intro for sysadmins
Ansible by Gabor Szabo
In this series we are going to look at various features of Ansible.