Grafana SQLite Migration to PostgreSQL
Introduction
Grafana is a monitoring and dashboard system that supports metrics backends like Prometheus. This post describes the process of migrating a Grafana installation from a SQLite backend to PostgreSQL. An initial Grafana installation (at least the Docker image) defaults to SQLite. While this is nice for getting up and running quickly and trying out Grafana, making Grafana highly available (HA) calls for a database backend like MySQL or PostgreSQL.
High Availability
Although it’s easy to dismiss HA for an internal monitoring service like Grafana, well-implemented monitoring becomes crucial to the health of an organization’s business and infrastructure. As Grafana becomes the pulse of the organization, even brief service interruptions become highly visible. If you are running a container orchestration service such as AWS Elastic Container Service (ECS) or Kubernetes (k8s), then running Grafana with an EBS-backed Docker volume on ECS or a Persistent Volume (PV) on k8s is an HA option that may be “good enough.” In this post, I assume that Grafana has been installed via an OS package or a Docker container and is running SQLite.
Aside: PostgreSQL HA
This talk of HA raises the question: is your database HA? There are quite a number of HA solutions for PostgreSQL, including multi-master (although I’ve never tried it) and hot standby, which allows reads on the standby node. If you’re running on AWS, you might as well run RDS with multi-AZ failover.
SQLite to PostgreSQL Migration
The migration from SQLite to PostgreSQL is made much easier by the pgloader project, which loads data into PostgreSQL from various sources. We can copy the SQLite database from Grafana’s data directory, /var/lib/grafana/grafana.db, and use that as the source for the migration.
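If Grafana is running as a Docker container, one way to get a copy of the database file is docker cp (the container name grafana here is an assumption; substitute your own):
# Copy the SQLite database out of the running Grafana container
docker cp grafana:/var/lib/grafana/grafana.db ./grafana.db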
Next, we create a main.load script like the one below, lifted almost directly from the SQLite example in the pgloader documentation.
load database
from sqlite:///path/to/grafana.db
into postgresql://username:password@hostname/grafana
with include drop, create tables, create indexes, reset sequences
set work_mem to '16MB', maintenance_work_mem to '512 MB';
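With main.load in place, running the migration is a single command (this assumes pgloader is installed on a machine that can reach both the SQLite file and the PostgreSQL server):
pgloader main.load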
Schema Issues
It turns out that this naive approach results in a schema that is incompatible with what Grafana expects. For example, the silenced field in the alert table ends up defined as a bigint rather than a boolean. We know this because we can let Grafana produce a reference schema by running its own migrations against a clean PostgreSQL database. Below, we diff the schema created by pgloader against the one created by Grafana. Notice the differences in integer versus bigint fields, varchar versus text fields, and in which fields are NULLable.
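One way to produce this diff is to dump both schemas with pg_dump and compare the files (a sketch; the database names grafana_pgloader and grafana are illustrative):
# Dump the pgloader-created and Grafana-created schemas, then compare
pg_dump --schema-only -U postgres grafana_pgloader > pgloader_schema.sql
pg_dump --schema-only -U postgres grafana > grafana_schema.sql
diff pgloader_schema.sql grafana_schema.sql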
2,15c2,15
< id integer NOT NULL,
< version bigint NOT NULL,
< dashboard_id bigint NOT NULL,
< panel_id bigint NOT NULL,
< org_id bigint NOT NULL,
< name character varying(255) NOT NULL,
< message text NOT NULL,
< state character varying(190) NOT NULL,
< settings text NOT NULL,
< frequency bigint NOT NULL,
< handler bigint NOT NULL,
< severity text NOT NULL,
< silenced boolean NOT NULL,
< execution_error text NOT NULL,
---
> id bigint NOT NULL,
> version bigint,
> dashboard_id bigint,
> panel_id bigint,
> org_id bigint,
> name text,
> message text,
> state text,
> settings text,
> frequency bigint,
> handler bigint,
> severity text,
> silenced bigint,
> execution_error text,
17,21c17,21
< eval_date timestamp without time zone,
< new_state_date timestamp without time zone NOT NULL,
< state_changes integer NOT NULL,
< created timestamp without time zone NOT NULL,
< updated timestamp without time zone NOT NULL
---
> eval_date timestamp with time zone,
> new_state_date timestamp with time zone,
> state_changes bigint,
> created timestamp with time zone,
> updated timestamp with time zone
Changing the Approach
This schema comparison suggests the correct procedure. Instead of allowing pgloader to create the PostgreSQL schema, we need to export data from SQLite into the PostgreSQL schema that Grafana itself generates. The first step is to spin up an instance of Grafana pointed at a PostgreSQL database. One easy way to do this is to use docker-compose to spin up a set of linked containers. We can define a docker-compose.yml with the following contents, which will start two containers on a shared network.
version: '3'
services:
  postgres:
    image: postgres:11
    environment:
      - POSTGRES_PASSWORD=dummy
  grafana:
    image: grafana/grafana:5.3.4
    container_name: grafana
    ports:
      - 3000:3000
    volumes:
      - ../volumes/grafana/data:/var/lib/grafana:z
    env_file:
      - ./grafana.env
In ./grafana.env, we configure Grafana to use the postgres service defined above.
GF_DATABASE_URL=postgres://postgres:dummy@postgres/grafana
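With both files in place, we can bring up the stack:
docker-compose up -d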
Note that the Grafana container will initially fail because the grafana database has not been created yet. Once the postgres service container initializes, we create the database. First, we start a psql prompt inside the postgres container:
docker-compose exec postgres psql -h 127.0.0.1 -U postgres
Then we issue the creation command:
psql> CREATE DATABASE grafana;
Next, we restart the Grafana container so that it can connect:
docker-compose restart grafana
The container should start and stay up, and Grafana’s migrations will create the schema and data in the grafana database. We can export that schema by running pg_dump in the postgres container.
docker-compose exec postgres \
pg_dump --schema-only -h 127.0.0.1 -U postgres grafana \
| tee schema.sql
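Before reloading, the dumped schema needs to be applied to the target database; if that target is a different PostgreSQL server, something like the following works (hostname and credentials are placeholders matching the connection string in main.load):
# Create the target database and load the Grafana-generated schema into it
psql -h hostname -U username -c 'CREATE DATABASE grafana;'
psql -h hostname -U username -d grafana -f schema.sql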
Retrying the Load
Then, we modify main.load to load data only, without recreating the schema.
load database
from sqlite:///data/grafana.db
into postgresql://username:password@hostname/grafana
with data only, reset sequences
set work_mem to '16MB', maintenance_work_mem to '512 MB';
Rerunning pgloader with this modified script, against a database initialized with the Grafana-generated schema, completes the migration.
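As a quick sanity check, we can spot-check row counts in the migrated database (table names come from Grafana’s schema; host and credentials are placeholders):
# Verify that data arrived, e.g. the dashboard table (counts will vary)
psql -h hostname -U username -d grafana -c 'SELECT count(*) FROM dashboard;'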
Conclusion
In this post, we discussed how to migrate a Grafana database from SQLite to PostgreSQL. While pgloader made the process easier, the naive approach produced a schema incompatible with Grafana. Relying on Grafana’s own migrations to generate the schema, and using pgloader to load data only, resolved the issue.