Integrations¶
Integrations allow you to ingest vital metrics from every part of your application stack, so you can monitor the health of your entire infrastructure.
Metric exporters are libraries that expose integration metrics to an agent such as the Prometheus agent or Grafana agent, which can then send the data to one or more collection endpoints.
With these agents you can ingest this data into FusionReactor Cloud and visualise it within the Integrations dashboards.
Exporters are available in many forms and allow you to monitor many aspects of your infrastructure including:
- Databases
- Machine system metrics
- Nginx
- IIS
- Kafka and other message buses
- Kubernetes
- Docker
- Many more
Depending on your agent, an exporter may be built into the agent itself, or may need to be scraped periodically as a separate process.
Viewing your metrics¶
Scraped metrics will be available in both Explore and the Integrations dashboard within FusionReactor Cloud.
Generating API keys¶
To authenticate with our ingest endpoints you require an API key. If you have already created an API key for log ingest, you can reuse it.
To generate a new API key, go to your account settings page in FusionReactor Cloud: Account Settings.
Under the API keys tab, click Generate to create a key.
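The key is supplied as the remote_write authorization credentials in your agent configuration, as shown in the agent examples later on this page. A minimal fragment, with {API key} standing in for your generated key:
remote_write:
  - url: "https://api.fusionreactor.io/v1/metrics"
    authorization:
      credentials: {API key}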
Integrations¶
As part of Dashboards, we automatically provision dashboards that monitor external exporters out of the box.
Presently, we have dashboards available for:
- MySQL
- MSSQL
- NGINX
- Node Exporter
More integrations will follow over time, including:
- RabbitMQ
- Kafka
- MongoDB
- PostgreSQL
- IIS
- Blackbox
- Kubernetes
- Docker
- AWS
- GCP
- Azure
- Many more
If there is a specific exporter you would like us to consider, please contact support via the chat bubble and let us know.
Supported Agents¶
You can ingest data into FusionReactor Cloud using any exporter; we recommend either the Prometheus agent or the Grafana agent for the simplest and most time-effective installation.
Prometheus Agent¶
The Prometheus agent is an open-source agent developed by the team behind the Prometheus metric storage engine, designed to ingest data into Prometheus-based data sources.
With the Prometheus agent, you configure your required exporters and then scrape each one with the agent.
Agent mode must be enabled by starting Prometheus with the --enable-feature=agent flag.
To learn more about the Prometheus agent and its additional features / configuration see Prometheus agent
Below is an example prometheus.yaml file you can deploy that will scrape each of our currently supported exporters.
prometheus.yaml Example:
global:
  # We recommend a scrape interval between 15 and 60 seconds, depending on the resolution of data you require
  scrape_interval: 60s

scrape_configs:
  - job_name: "Node_Exporter"
    static_configs:
      - targets: [ "{node}:9100" ]
  - job_name: "MySQL_Collector"
    static_configs:
      - targets: [ "{mysql-collector}:9104" ]
  - job_name: "MSSQL_Collector"
    static_configs:
      - targets: [ "{mssql-collector}:4000" ]
  - job_name: "Nginx_Collector"
    static_configs:
      - targets: [ "{nginx-collector}:9113" ]

remote_write:
  - url: "https://api.fusionreactor.io/v1/metrics"
    authorization:
      credentials: {API key}
You can run Prometheus as a process on any machine, or spin up a simple Docker container using the Dockerfile below.
Prometheus Dockerfile example:
FROM prom/prometheus
ADD prometheus.yaml /etc/prometheus/prometheus.yaml
CMD ["--config.file=/etc/prometheus/prometheus.yaml","--enable-feature=agent"]
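Alternatively, if you prefer not to bake the configuration into an image, you can mount prometheus.yaml into the stock prom/prometheus image. A minimal Docker Compose sketch (the service name is illustrative):
prometheus-agent:
  image: prom/prometheus
  volumes:
    - ./prometheus.yaml:/etc/prometheus/prometheus.yaml
  command:
    - "--config.file=/etc/prometheus/prometheus.yaml"
    - "--enable-feature=agent"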
Grafana Agent¶
The Grafana agent is a telemetry collector for sending metrics, logs and traces to an observability stack.
The Grafana agent ships with a host of built-in integrations, including Node Exporter and MySQL.
MSSQL and NGINX exporters are not included at this time, so these exporters must be scraped.
To learn more about the Grafana agent and its additional features / configuration see Grafana agent
A working example of a Grafana agent configuration is below:
metrics:
  global:
    scrape_interval: 1m
    remote_write:
      - url: "https://api.fusionreactor.io/v1/metrics"
        authorization:
          credentials: 7f5e1598e67524aacf90da7d8479a16f1236fe01095b081f0b684eae7570e54c4c5660b2b8adae573f860c2bca3b98b5ffe4237de2980e26d8951324ed4a9ee1
  configs:
    - name: nginx
      scrape_configs:
        - job_name: nginx
          static_configs:
            - targets: ['{nginx-collector}:9113']
    - name: mssql
      scrape_configs:
        - job_name: mssql
          static_configs:
            - targets: ['{mssql-collector}:4000']

integrations:
  node_exporter:
    enabled: true
  mysqld_exporter:
    enabled: true
    data_source_name: "{user}:{pw}@({mysql-host}:3306)/"
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
        replacement: server-a
Grafana agent Dockerfile example:
FROM grafana/agent
ADD agent.yaml /etc/agent/agent.yaml
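Alternatively, you can mount the configuration into the stock grafana/agent image instead of building one. A minimal Docker Compose sketch (the service name and the ./agent-data directory used for the agent's WAL storage are illustrative):
grafana-agent:
  image: grafana/agent
  volumes:
    - ./agent.yaml:/etc/agent/agent.yaml
    - ./agent-data:/etc/agent/data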
Supported Exporters¶
Exporters are often community driven tools that allow you to export metrics out of common components. There are exporters available for almost any component within your infrastructure.
Node Exporter¶
The Prometheus Node_Exporter exposes hardware and OS metrics from *NIX kernels; it is written in Go with pluggable metric collectors.
To collect Node Exporter metrics, you simply need to run the process on the node you wish to monitor, where it can then be scraped by any agent.
To learn more see Prometheus Node_Exporter
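For example, a minimal Docker Compose sketch based on the upstream node_exporter documentation, which runs the exporter in the host's PID and network namespaces so that host-level metrics are visible (the service name is illustrative):
node-exporter:
  image: prom/node-exporter
  # run in the host's namespaces so host metrics are visible
  network_mode: host
  pid: host
  volumes:
    - "/:/host:ro,rslave"
  command:
    - "--path.rootfs=/host"
With network_mode: host the exporter listens on port 9100, matching the {node}:9100 target used in the agent examples above.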
MSSQL¶
The Prometheus exporter for Microsoft SQL Server (MSSQL) allows you to monitor all key metrics for an MSSQL server.
It is recommended to run the exporter on an external machine or in a Docker container, to monitor your database remotely.
This exporter requires a SERVER, USERNAME and PASSWORD environment configuration, allowing it to connect to your database and expose vital metrics.
To learn more, including additional configuration options see Prometheus exporter for Microsoft SQL Server (MSSQL).
Docker compose example:
mssql-exporter:
image: awaragi/prometheus-mssql-exporter
environment:
- SERVER={mssql-host}
- USERNAME={user}
- PASSWORD={password}
- DEBUG=app
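The exporter serves its metrics on port 4000, which is the port used for the {mssql-collector}:4000 scrape target in the agent examples above. If your agent scrapes the exporter over the network, you will likely also want to publish that port, for example:
mssql-exporter:
  # ...as above, plus:
  ports:
    - "4000:4000"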
MySQL¶
The Prometheus exporter for MySQL server metrics allows you to monitor all key metrics for a MySQL server.
This exporter requires a DATA_SOURCE_NAME environment variable, containing the authentication and connection details for the database, allowing it to connect to your database and expose vital metrics.
It is recommended that you create an additional restricted user, specifically for monitoring your MySQL database.
To learn more, including additional configuration options see Prometheus exporter for MySQL server metrics.
Docker compose example:
mysql-exporter:
  image: prom/mysqld-exporter
  ports:
    - "9104:9104"
  environment:
    - DATA_SOURCE_NAME={user}:{password}@({mysql-host}:3306)/
  command:
    - --collect.global_status
    - --collect.info_schema.query_response_time
    - --collect.info_schema.innodb_metrics
    - --collect.info_schema.processlist
    - --collect.info_schema.tablestats
    - --collect.info_schema.tables
    - --collect.info_schema.userstats
    - --collect.engine_innodb_status
    - --config.my-cnf=/etc/my.cnf
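Note that the --config.my-cnf flag above points at /etc/my.cnf inside the container. If you use it, mount a my.cnf file to that path (a sketch, assuming the file sits next to your compose file); if you rely on the DATA_SOURCE_NAME environment variable you can likely drop the flag instead:
mysql-exporter:
  # ...as above, plus:
  volumes:
    - ./my.cnf:/etc/my.cnf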
Nginx¶
The NGINX Prometheus exporter makes it possible to monitor NGINX or NGINX Plus using Prometheus.
This exporter requires different configuration depending on whether you are monitoring the NGINX community edition or NGINX Plus. Both require a scrape URI, and NGINX Plus support is not enabled by default.
To learn more, including additional configuration options see NGINX Prometheus exporter.
NGINX community edition example:
nginxexporter:
  image: nginx/nginx-prometheus-exporter
  environment:
    - SCRAPE_URI=http://{nginx-host}:80/basic_status
  ports:
    - "9113:9113"
NGINX Plus example:
nginxexporter:
  image: nginx/nginx-prometheus-exporter
  environment:
    - SCRAPE_URI=http://{nginx-host}:80/api
    - NGINX_PLUS=true
  ports:
    - "9113:9113"