Generalists vs specialists: Different approaches to IT monitoring

When a company is considering a network monitoring solution, it must take the skill sets and capabilities of its own IT team into account to make sure it finds the right solution for its needs. By Martin Hodgson, Head of UK & Ireland, Paessler AG.


Quite often, IT monitoring is the unloved child of the IT administrator: it is necessary to ensure the smooth functioning of the company's IT estate, but few administrators enjoy doing it. As a consequence, a large number of highly qualified IT experts have surprising knowledge gaps when it comes to network monitoring. This usually comes to light when the monitoring solution in use is no longer adequate or, even worse, has become so complex over the years that it is barely operable.

What exactly is IT monitoring?

On a very basic level, classic IT monitoring is the monitoring of availability and performance in IT environments. IT monitoring answers questions such as "Is my server online?", "Does data in my network get to where it needs to go on time?" or "Is my firewall working reliably?".
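As a rough illustration (not taken from any particular product), a question like "Is my server online?" can come down to a timed connection attempt. The following Python sketch uses a placeholder hostname and port:

# Minimal sketch of an availability check: "Is my server online?"
# The hostname and port are placeholders; a real monitoring tool would run
# such probes on a schedule and record the results.
import socket
import time

def is_reachable(host, port, timeout=3.0):
    """Try to open a TCP connection and report success plus response time."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

up, elapsed = is_reachable("mail.example.com", 25)
print(f"reachable={up}, response_time={elapsed:.3f}s")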

The basic function of IT monitoring can be broken down into four tasks:

1. Determining and collecting data on the availability and performance of IT components

2. Storing that data

3. Sending notifications and alerts based on defined thresholds

4. Reporting on the data
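A minimal, deliberately simplified Python sketch of these four tasks - the probe, the in-memory storage and the threshold value are all placeholders - might look like this:

# Illustrative sketch of the four basic tasks: collect, store, alert, report.
import json
import time
from statistics import mean

THRESHOLD_MS = 200.0          # 3. defined threshold for alerting
history = []                  # 2. data storage (in memory for the sketch)

def collect_response_time():
    """1. Determine a performance value; a real tool would query SNMP, WMI, an API, etc."""
    start = time.monotonic()
    # ... probe the component here ...
    return (time.monotonic() - start) * 1000.0

for _ in range(5):
    value_ms = collect_response_time()
    history.append({"ts": time.time(), "response_ms": value_ms})
    if value_ms > THRESHOLD_MS:
        print(f"ALERT: response time {value_ms:.1f} ms exceeds {THRESHOLD_MS} ms")
    time.sleep(1)

# 4. data report: a simple summary of the stored measurements
print(json.dumps({"samples": len(history),
                  "avg_response_ms": round(mean(h["response_ms"] for h in history), 2)}))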

Of course, there are numerous more advanced tasks in the IT monitoring environment. These include root cause analysis to get to the heart of a problem, as well as recognising emerging trends and making predictions based on them. Monitoring can also cover the security of the network, such as checking that firewalls or virus scanners are functioning, or recognising unusual behaviour in the network through intrusion detection.

Logging, or event log management, is also often counted as monitoring. It refers to the analysis of log data such as syslog messages or SNMP traps and frequently falls under SIEM (Security Information and Event Management).
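As a rough illustration of what such log analysis involves, the following Python sketch parses RFC 3164-style syslog lines (the sample messages are invented) and flags high-severity events:

# Sketch of simple event log handling: parsing syslog-style lines and
# flagging high-severity messages. The sample lines below are invented.
import re

SYSLOG_PATTERN = re.compile(r"^<(?P<pri>\d{1,3})>(?P<rest>.*)$")

def severity(pri):
    """Severity is the lower three bits of the syslog priority value."""
    return pri % 8

for line in ["<34>Oct 11 22:14:15 fw01 kernel: blocked inbound tcp/445",
             "<190>Oct 11 22:14:16 app01 web: GET /status 200"]:
    match = SYSLOG_PATTERN.match(line)
    if match:
        if severity(int(match.group("pri"))) <= 3:  # 0-3 = emergency..error
            print("security-relevant event:", match.group("rest"))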

Generalists vs specialists

Time and again, generalists are compared with specialists, but a generalist cannot replace a specialist and vice versa.

Specialised solutions

Highly specialised solutions provide specialists with deep insights into narrowly defined areas of the IT estate. The larger a company or the deeper its IT structure, the greater the need for such specialised solutions. DevOps requires detailed information about applications, SecOps needs in-depth insights into security-relevant aspects of network traffic beyond classic tools such as virus scanners or firewalls, while NetOps relies mainly on in-depth analysis of network performance.

In the area of network performance, solutions such as Scrutinizer by Plixer, Flowmon by Kemp or Kentik provide these insights, sometimes even beyond the narrow boundaries of a specific application area. Flowmon, for example, claims to serve SecOps and NetOps in equal measure. Nevertheless, Flowmon remains a solution for specialists and does not offer a general overview of the entire IT estate - nor does it claim to.

Most specialised tools focus on only a few methods or protocols. In network and application monitoring, this is often flow analysis or what's called 'packet sniffing'. When it comes to security, it can be flow analysis or packet sniffing with deep packet inspection, but also event log monitoring. In these areas the tools usually deliver outstanding performance, scale even for larger environments, and offer in-depth data analyses beyond pure monitoring, some of which also rely on artificial intelligence or advanced algorithms.
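To give a rough idea of what packet sniffing looks like in practice, the sketch below uses the third-party scapy library - an assumption for illustration only; specialised products ship their own capture and deep packet inspection engines - and typically needs administrator privileges to run:

# Rough illustration of packet sniffing with the scapy library (assumed to be
# installed). Capturing packets usually requires root/administrator rights.
from scapy.all import sniff, IP

def summarise(pkt):
    # Print a one-line summary of each captured IP packet.
    if IP in pkt:
        print(f"{pkt[IP].src} -> {pkt[IP].dst}  {len(pkt)} bytes")

# Capture 10 packets on the default interface without keeping them in memory.
sniff(count=10, prn=summarise, store=False)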

On the other hand, this also requires IT teams to have a certain level of expertise when using the tools. Even if operation is optimised and designed to be as simple as possible, getting the maximum value from these tools requires the expertise to first configure and deploy the solution correctly, and then to use the collected data in a targeted manner.

However, if the user needs a central overview of the performance and availability of the entire IT estate - from infrastructure to network to cloud-based applications, and perhaps even to areas outside IT - then even the specialists quickly reach their limits. This is where generalists are needed.

Generalist network monitoring solutions

SNMP (Simple Network Management Protocol) is often the basis of generalist network monitoring. Even though the protocol is not technically up to date and is regularly declared dead, it is still so widespread that broad IT monitoring without SNMP is not really practicable even in 2021. After all, IT environments are usually not completely overhauled but continuously adapted to new requirements, so older devices and structures coexist with modern systems. At the same time, interface-based systems (API: Application Programming Interface) are becoming increasingly important due to advancing digitalisation. Monitoring tools must therefore still support traditional methods such as SNMP, Ping or WMI, but they must also support APIs and other contemporary methods such as MQTT or OPC UA.
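For illustration, a classic SNMP poll might look like the following Python sketch, which assumes the third-party pysnmp library and its synchronous high-level API; the device address and community string are placeholders:

# Sketch of a classic SNMP v2c poll using the pysnmp library (an assumption;
# any SNMP toolkit would do). It reads sysUpTime (OID 1.3.6.1.2.1.1.3.0)
# from a placeholder device address with a placeholder community string.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public"),                      # placeholder community
    UdpTransportTarget(("192.0.2.10", 161)),      # placeholder device
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0"))))

if error_indication:
    print("SNMP error:", error_indication)
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")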

A central overview with the help of a generalist can be sufficient in smaller companies or limited IT environments. With a few dozen or a hundred devices or IP addresses to monitor, the administrator or ITOps (IT Operations) team usually knows their IT so well that a simple error message is enough for them to be able to identify and solve the problem. They don't need highly specialised tools and usually don't have the time or expert knowledge to use them efficiently. They prefer a monitoring solution that is as broad as possible and provides all the necessary information, is easy to deploy and operate, and alerts when intervention is required.

In larger companies or IT environments with specialised teams that require correspondingly specialised tools, there is usually also a need for an overarching solution that provides a central overview and allows trends to be identified. This may be the case at the ITOps level or in higher-level management. This is where the generalists come into play. They provide information on all areas of IT without going into too much depth.

Generalists usually support a wide range of protocols and ideally also offer appropriate interfaces to enable the broadest possible collection of data. The collected data is stored and assigned threshold values, which are used to send notifications and alerts via a wide variety of channels. This includes monitoring traffic as well as devices, applications, storage systems, databases or cloud services.
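The following sketch is an invented, product-neutral example of how such a setup could be expressed in code - the sensor names, methods, thresholds and notification channels are purely illustrative:

# Illustrative (not product-specific) configuration for a generalist setup:
# what to monitor, which thresholds apply, and which channels receive alerts.
monitoring_config = {
    "sensors": [
        {"name": "core-switch uplink traffic", "method": "snmp",
         "warn": "80% bandwidth", "error": "95% bandwidth"},
        {"name": "mail-db free disk space", "method": "wmi",
         "warn": "20% free", "error": "10% free"},
        {"name": "webshop API response time", "method": "https",
         "warn": "500 ms", "error": "2000 ms"},
        {"name": "factory sensor gateway", "method": "mqtt",
         "warn": "no data for 5 min", "error": "no data for 15 min"},
    ],
    "notifications": {
        "warn":  ["email:itops@example.com"],
        "error": ["email:itops@example.com", "sms:+44-0000-000000",
                  "webhook:https://chat.example.com/hook"],
    },
}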

Some generalists offer the ability to relate data from different areas to each other, including predefined interfaces to collect and integrate information from specialist tools. For example, the performance of multiple redundant mail servers can be combined with traffic data, load balancers, firewalls, databases, storage systems and other components into a single service or process. This enables a management overview from a company perspective across a wide range of IT areas. Set up accordingly, the entire process is displayed as productive and functioning even if individual components report problems. As long as the process as a whole is running, only the responsible employees are informed, so that they can fix the affected components before the situation becomes critical. Only when the entire process is at risk are appropriate alarms triggered.
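A simplified sketch of this logic, using the mail example above with invented component names, might look like this:

# Sketch of relating component states to an overall business process: a single
# failed redundant node only notifies the responsible team, while a process-level
# alarm fires only when the whole service is at risk. All names are invented.
component_status = {
    "mail-server-1": "up",
    "mail-server-2": "down",     # one redundant node failing is not yet critical
    "load-balancer": "up",
    "firewall": "up",
    "mail-database": "up",
}

mail_servers_up = [name for name in ("mail-server-1", "mail-server-2")
                   if component_status[name] == "up"]
supporting_ok = all(component_status[name] == "up"
                    for name in ("load-balancer", "firewall", "mail-database"))

if mail_servers_up and supporting_ok:
    process_state = "productive"
    for name, state in component_status.items():
        if state != "up":
            print(f"notify responsible team: {name} is {state}")
else:
    process_state = "at risk"
    print("ALARM: mail process is at risk")

print("mail process:", process_state)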

When a company is considering a network monitoring solution, it must take the skill sets and capabilities of its own IT team into account to make sure it finds the right solution for its needs.
