Historically, most monitoring solutions rely on two components: a poller that collects information on one side, and a database that records results and their history on the other. It is this second part that feeds any user interface responsible for displaying data and generating the necessary reports.
As the number of infrastructures to supervise grows, pollers remain up to the task, but databases show their limits: local databases cap configurations at a certain number of checks, while centralized architectures quickly run up against the limits of conventional databases.
Before choosing your monitoring solution, it is therefore essential to ensure that it can meet your data volume requirements. Of course, many solutions can meet your needs in the short term, but what about the long term?
How will your platform perform once it is collecting data from several hundred devices, several thousand checkpoints and several thousand metrics? And how will it hold up over time, after several months or several years?
With our ServiceNav product, we decided in 2016 to adopt BigData technologies to ensure high scalability and to deliver strong performance and capabilities over the long term.
Today, ServiceNav's SaaS platform collects information from more than 50,000 pieces of equipment, 350,000 checkpoints and 750,000 metrics every day.
With a check every 2 to 3 minutes on average, several hundred million data points are collected every day, filling the databases while still allowing a fluid and dynamic display of the information.
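To put those figures in perspective, a quick back-of-envelope estimate reproduces the "several hundred million" order of magnitude. The metric count and check interval come from the figures above; the 2.5-minute average is an assumption (the midpoint of the stated 2-to-3-minute range):

```python
# Back-of-envelope estimate of daily data points collected.
# Figures from the article: 750,000 metrics, one check every 2 to 3 minutes.
# The 2.5-minute average interval is an assumed midpoint of that range.
MINUTES_PER_DAY = 24 * 60          # 1,440 minutes in a day
AVG_CHECK_INTERVAL_MIN = 2.5       # assumption: midpoint of 2-3 minutes
METRICS = 750_000                  # metrics collected on each check cycle

checks_per_metric_per_day = MINUTES_PER_DAY / AVG_CHECK_INTERVAL_MIN  # 576
daily_data_points = int(METRICS * checks_per_metric_per_day)

print(f"{daily_data_points:,} data points per day")  # 432,000,000
```

Around 432 million data points per day under these assumptions, which is consistent with the "several hundred million" figure and illustrates why a conventional single-node database struggles at this scale.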