Cloud computing is dynamic, with many fast-moving parts. This, plus users' lack of access to the underlying cloud infrastructure, makes dependencies more complex and difficult to map and manage.
These obstacles make it harder to keep dependency records accurate: IT teams can't see what changes the vendor makes under the hood that could affect their systems.
Configuration management database (CMDB) systems, when wrapped in an appropriate framework, such as ITIL, reduce errors, increase accuracy and store information. But how does this work with today’s dynamic cloud environments?
CMDBs and the cloud
A CMDB acts as a centralized store of known-good information about the infrastructure and changes applied to it. For example, an admin accessing a CMDB would expect to see information about a machine, its configuration details and any dependencies on other systems.
CMDBs tend to be among those tools that don't change much, and many of the well-known products integrate poorly with cloud platforms and services. CMDBs themselves haven't kept up with dynamic cloud infrastructure and its short-lived resources.
For classic on-premises infrastructure, a CMDB should be easy to keep up to date. However, that sedate update cadence wasn't designed with cloud infrastructure in mind, where a specific VM can live for minutes rather than months or years.
A CMDB can't simply add cloud components and carry on as before; it needs enhanced processes to commission, decommission and map interdependencies.
How to use CMDBs with the cloud
Consistency and standards are just as important now as ever, irrespective of cloud vendor or location. Any cloud deployment, as a point of standard operating policy, should not be provisioned ad hoc from the cloud vendor’s own catalog.
Anyone who wants to deploy cloud infrastructure should, where possible, use a self-service environment. Consistency wins every time. However, that is easier said than done, as self-service comes with limitations on what items can be included.
Using cloud systems in a self-service environment essentially cuts off the ability to deploy unapproved configurations, such as sizing and OSes. It can also force users into a predefined workflow, which provides pertinent information that the user can capture and subsequently use to populate the CMDB. With this approach, the sizing options are a known factor and easier to manage, as opposed to the potential chaos of machines of all types and sizes.
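As a rough sketch, that self-service gate can be expressed as a simple catalog check performed before provisioning. The catalog entries and field names below are hypothetical, not tied to any vendor:

```python
# Hypothetical sketch: validate a self-service VM request against an
# approved catalog before provisioning. Catalog contents and request
# field names are illustrative assumptions.
APPROVED_CATALOG = {
    "sizes": {"small", "medium", "large"},
    "oses": {"ubuntu-22.04", "windows-2022"},
}

def validate_request(request: dict) -> list[str]:
    """Return a list of policy violations; an empty list means approved."""
    errors = []
    if request.get("size") not in APPROVED_CATALOG["sizes"]:
        errors.append(f"Unapproved size: {request.get('size')}")
    if request.get("os") not in APPROVED_CATALOG["oses"]:
        errors.append(f"Unapproved OS: {request.get('os')}")
    return errors
```

Because every request passes through the same check, the sizing and OS options that later land in the CMDB are a known, bounded set.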
This approach also subjects each request to an appropriate approval mechanism. An approval and sign-off workflow is a key part of service management frameworks such as ITIL.
A correctly built workflow also enables useful outcomes, such as ensuring VMs — both in the cloud and on premises — end up in the correct resource groups, with the correct access levels for the correct individuals, along with other programmatic settings.
Most cloud vendors provide a way to export information about the built VMs, such as IP addresses, unique identifiers and other key details. Users can import this information into their local CMDBs. The uniformity this system enforces — by the nature of its restrictions — makes importing easier, as the required information has been collected as part of the workflow and can be mapped to existing fields. It also makes it easier to map the relationship.
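A minimal sketch of that import step, assuming a simplified export shape (real vendor exports differ in structure and field names):

```python
# Hypothetical sketch: map a cloud vendor's VM export onto the fields a
# CMDB configuration item (CI) expects. The export record shape below
# is an assumption for illustration only.
def export_to_cis(export: list[dict]) -> list[dict]:
    """Map exported VM records onto CMDB CI fields."""
    cis = []
    for vm in export:
        cis.append({
            "ci_id": vm["instance_id"],              # unique identifier
            "name": vm.get("name", vm["instance_id"]),
            "ip_address": vm["private_ip"],
            "size": vm["instance_type"],
            "source": "cloud-export",                # provenance marker
        })
    return cis
```

Because the self-service workflow constrains what gets built, the mapping stays a fixed, predictable set of fields rather than a per-machine special case.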
However, depending on how the admin sets up the import and reporting, it could take several hours — or even a day — before the CMDB relationships and related mapping update. The amount of data imported would not be huge, but excessive imports cause unwanted overhead. Prepare for a bit of experimentation to find the optimum cadence.
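One way to frame that experimentation is a due-check that weighs elapsed time against accumulated changes before triggering an import run. The thresholds below are placeholder assumptions to tune, not recommendations:

```python
# Hypothetical sketch: decide whether a CMDB import run is due, based on
# time since the last run and how many changes have accumulated.
# Both thresholds are assumptions to be tuned through experimentation.
def import_due(minutes_since_last: int, pending_changes: int,
               max_interval: int = 240, change_threshold: int = 50) -> bool:
    if pending_changes == 0:
        return False  # nothing to import, skip the run entirely
    # Run when either the interval has elapsed or changes have piled up.
    return minutes_since_last >= max_interval or pending_changes >= change_threshold
```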
There are other ways to approach cloud CMDBs, depending on the situation. For a cloud-only environment, tools such as the Microsoft Azure or AWS service catalogs might fare better. They are designed to interface with the major cloud providers via highly dynamic API interactions.
If an administrator uses multiple cloud providers, managing data export and ingest becomes more complex: each provider has different export capabilities and limitations, and each data set must be mapped separately.
Other complexities with non-cloud-aware CMDBs include cloud services that are difficult to translate into a standard CMDB system. For example, if cloud load balancers are in use and the relationship isn't picked up, it won't be documented.
In other words, classic CMDB systems might not be fully aware of cloud-based systems. The relationships in question can still be exported, but the data needs more transformation and manipulation before it can be pulled into the CMDB correctly. In-cloud storage or a shared SQL database, for example, requires more effort. All this can be tricky and time-consuming if the CMDB isn't extensible in terms of imports, APIs and automation.
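To illustrate the kind of transformation involved, a hypothetical load balancer export could be turned into CMDB relationship records like this (the record shapes and relationship type are assumptions):

```python
# Hypothetical sketch: derive CMDB relationship records from a cloud
# load balancer's target list, so the dependency is documented rather
# than lost. The export shape and relationship type are assumptions.
def lb_relationships(lb: dict) -> list[dict]:
    """Emit one parent/child relationship per load balancer target."""
    return [
        {
            "parent_ci": lb["lb_id"],
            "child_ci": target,
            "type": "balances_traffic_to",
        }
        for target in lb["targets"]
    ]
```

Each cloud service that a classic CMDB doesn't recognize natively tends to need its own small transformation of this kind, which is why extensible imports and APIs matter.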
While a company might use a shared SQL service, the metadata that sits behind that database won't be available. In short, the database is consumed as an external service, which creates a bit of a data black hole.
An old-school CMDB might not be the best fit for a few reasons, but the most important question is what type of environment is currently in use, if any. For those who currently have an older, non-cloud-aware CMDB, this could be the ideal time to upgrade, as data import and export and cloud awareness are now key requirements of a well-defined CMDB system.
A cloud-aware CMDB is much more likely to be API-driven and to integrate cloud objects into its own database, as opposed to a legacy closed system. The ability to easily import from the major cloud providers eases the whole process.
Those that only use the public cloud might find cloud-based options, such as CloudAware, CoreStack or ServiceNow, fit the bill better.