Getting started

Basic

Data Model

Prometheus fundamentally stores all data as time series: streams of timestamped values belonging to the same metric and the same set of labeled dimensions.

# Notation
<metric name>{<label name>=<label value>, ...}
# Example
api_http_requests_total{method="POST", handler="/messages"}

This is the same notation that OpenTSDB uses.

Metric Types

The Prometheus client libraries offer four core metric types: Counter, Gauge, Histogram, and Summary.

Official client libraries are available for Go, Java, Python and Ruby.

See: https://prometheus.io/docs/concepts/metric_types/
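As a sketch, this is roughly how each of the four types appears in the text exposition format served by a client library (metric names and values here are illustrative, not taken from the text above):

# Counter: a cumulative value that only increases (resets to zero on restart)
# TYPE api_http_requests_total counter
api_http_requests_total{method="POST", handler="/messages"} 1027

# Gauge: a value that can go up and down
# TYPE queue_length gauge
queue_length 42

# Histogram: observations counted into cumulative buckets, plus _sum and _count
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{le="0.1"} 240
http_request_duration_seconds_bucket{le="0.5"} 310
http_request_duration_seconds_bucket{le="+Inf"} 320
http_request_duration_seconds_sum 53.4
http_request_duration_seconds_count 320

# Summary: streaming quantiles, plus _sum and _count
# TYPE rpc_duration_seconds summary
rpc_duration_seconds{quantile="0.9"} 0.0082
rpc_duration_seconds_sum 17560.5
rpc_duration_seconds_count 2693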

Jobs and Instances

For example, an API server job could have four replicated instances (e.g. job api-server with instances 1.2.3.4:5670, 1.2.3.4:5671, 5.6.7.8:5670 and 5.6.7.8:5671); a matching scrape configuration sketch follows the list of series below.

For each instance scrape, Prometheus stores a sample in the following time series:

up{job="<job-name>", instance="<instance-id>"}: 1  # the instance is healthy (the scrape succeeded)
up{job="<job-name>", instance="<instance-id>"}: 0  # the scrape failed (the instance is down or unreachable)
scrape_duration_seconds{job="<job-name>", instance="<instance-id>"}  # how long the scrape took
scrape_samples_post_metric_relabeling{job="<job-name>", instance="<instance-id>"}  # the number of samples remaining after metric relabeling was applied
scrape_samples_scraped{job="<job-name>", instance="<instance-id>"}  # the number of samples the target exposed
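A minimal scrape configuration sketch (not part of the original text) that would produce the job and instance labels for the example above:

scrape_configs:
  - job_name: 'api-server'
    static_configs:
      - targets: ['1.2.3.4:5670', '1.2.3.4:5671', '5.6.7.8:5670', '5.6.7.8:5671']

Prometheus sets the job label from job_name and the instance label from the <host>:<port> of each scraped target, unless those labels are already present.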

Configuration

Generic placeholders

In the configuration skeletons below, placeholders such as <duration>, <labelname>, <labelvalue>, <filepath_glob>, <path>, <scheme>, <boolean>, <string> and <secret> stand for concrete values to fill in; square brackets mark optional settings, with the default shown after the "|".

Global config

global:
  # How frequently to scrape targets by default.
  [ scrape_interval: <duration> | default = 1m ]

  # How long until a scrape request times out.
  [ scrape_timeout: <duration> | default = 10s ]

  # How frequently to evaluate rules.
  [ evaluation_interval: <duration> | default = 1m ]

  # The labels to add to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    [ <labelname>: <labelvalue> ... ]

# Rule files specify a list of globs. Rules and alerts are read from
# all matching files.
rule_files:
  [ - <filepath_glob> ... ]

# A list of scrape configurations.
scrape_configs:
  [ - <scrape_config> ... ]

# Alerting specifies settings related to the Alertmanager.
alerting:
  alert_relabel_configs:
    [ - <relabel_config> ... ]
  alertmanagers:
    [ - <alertmanager_config> ... ]

# Settings related to the experimental remote write feature.
remote_write:
  [ - <remote_write> ... ]

# Settings related to the experimental remote read feature.
remote_read:
  [ - <remote_read> ... ]

See https://prometheus.io/docs/prometheus/latest/configuration/configuration/
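For concreteness, a minimal working prometheus.yml sketch assembled from the skeleton above (the external label, rule-file glob and Alertmanager address are illustrative assumptions):

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    region: eu-west-1            # illustrative label

rule_files:
  - "rules/*.yml"                # illustrative glob

scrape_configs:
  - job_name: prometheus         # Prometheus scraping its own /metrics endpoint
    static_configs:
      - targets: ['localhost:9090']

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['localhost:9093']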

Scrape config

# The job name assigned to scraped metrics by default.
job_name: <job_name>

# How frequently to scrape targets from this job.
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Per-scrape timeout when scraping this job.
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# The HTTP resource path on which to fetch metrics from targets.
[ metrics_path: <path> | default = /metrics ]

# Controls how Prometheus handles conflicts between labels already present in
# scraped data and labels the server would attach ("job", "instance",
# manually configured target labels). If true, the scraped labels are kept.
[ honor_labels: <boolean> | default = false ]

# Configures the protocol scheme used for requests.
[ scheme: <scheme> | default = http ]

# Optional HTTP URL parameters.
params:
  [ <string>: [<string>, ...] ]

# Sets the `Authorization` header on every scrape request with the
# configured username and password.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]

# Sets the `Authorization` header on every scrape request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <secret> ]
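Putting the fields above together, a sketch of a single scrape job (job name, credentials, URL parameter and target address are all hypothetical):

scrape_configs:
  - job_name: 'node'
    scrape_interval: 30s
    scrape_timeout: 10s
    metrics_path: /metrics
    scheme: https
    params:
      module: ['default']        # hypothetical URL parameter
    basic_auth:
      username: 'prometheus'
      password: 'changeme'       # hypothetical; keep real secrets out of version control
    static_configs:
      - targets: ['10.0.0.5:9100']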

References