PostgreSQL vs. MariaDB vs. SQLite: A Performance Test
Which database is best for your application? A detailed performance benchmark comparison between PostgreSQL, MariaDB, and SQLite.

When setting up a self-hosted application, you inevitably face a crucial question: which database should I use? Gitea, for example, offers several options according to its documentation, including PostgreSQL, MySQL/MariaDB, and even the simple SQLite. But which one is right for your use case? Is a full-fledged client-server database like PostgreSQL really necessary, or is the lightweight SQLite perhaps sufficient?
To answer this question, I ran an experiment: I set up three absolutely identical servers and installed Gitea with a different database on each. Then I benchmarked all three. Some of the results surprised me.
The Test Environment: Three Identical Servers for a Fair Fight
To ensure a fair comparison, I needed an identical hardware base. I opted for three small cloud servers (model CX22) from Hetzner.
Transparency notice: If you create a new account via my affiliate link at Hetzner, you will receive a €20 starting credit, and I will get a small commission.
When creating the servers, I made sure they were all in the same location (Nuremberg) and running Ubuntu 24.04. You can find more about setting up and securing a VPS for beginners in this post.
After the servers were provisioned (which can take a few minutes), I connected to each via SSH and first installed Docker. The commands for this are identical on all three servers:
# Install Docker dependencies
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
# Install Docker Engine
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
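Before moving on, an optional sanity check confirms that the engine can actually pull and run containers (this is the standard hello-world smoke test, not part of the benchmark itself):
# Optional: verify the Docker installation
sudo docker run --rm hello-world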
Three Times Gitea, Three Databases: The Installation
I created a separate docker-compose.yml for each database. For the databases' persistent data, I opted for dedicated folders on the host instead of Docker volumes.
Server 1: Gitea with SQLite (The Standard Variant)
On the first server, I used the simplest configuration from the Gitea documentation, which uses SQLite as the database.
# Create directories
mkdir -p gitea
cd gitea
nano docker-compose.yml
# docker-compose.yml for SQLite
networks:
  gitea:
    external: false

services:
  server:
    image: gitea/gitea:1.23.8
    container_name: gitea
    environment:
      - USER_UID=1000
      - USER_GID=1000
    restart: always
    networks:
      - gitea
    volumes:
      - ./gitea:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
Server 2: Gitea with MySQL
On the second server, I extended the configuration with a separate MySQL instance. (MySQL rather than MariaDB here; the two are largely drop-in compatible, which is why the results below are labeled "MariaDB (MySQL)".)
# Create directories
mkdir -p gitea_mysql/{gitea,mysql}
cd gitea_mysql
nano docker-compose.yml
# docker-compose.yml for MySQL
version: "3"
networks:
gitea:
external: false
services:
server:
image: gitea/gitea:1.23.8
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
- GITEA__database__DB_TYPE=mysql
- GITEA__database__HOST=db:3306
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=gitea
restart: always
networks:
- gitea
volumes:
- ./gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "222:22"
depends_on:
- db
db:
image: mysql:9
restart: always
environment:
- MYSQL_ROOT_PASSWORD=gitea
- MYSQL_USER=gitea
- MYSQL_PASSWORD=gitea
- MYSQL_DATABASE=gitea
networks:
- gitea
volumes:
- ./mysql:/var/lib/mysql
Server 3: Gitea with PostgreSQL
On the third server, I used a PostgreSQL database.
# Create directories
mkdir -p gitea_postgres/{gitea,postgres}
cd gitea_postgres
nano docker-compose.yml
# docker-compose.yml for PostgreSQL
version: "3"
networks:
gitea:
external: false
services:
server:
image: gitea/gitea:1.23.8
container_name: gitea
environment:
- USER_UID=1000
- USER_GID=1000
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=gitea
restart: always
networks:
- gitea
volumes:
- ./gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "222:22"
depends_on:
- db
db:
image: postgres:14
restart: always
environment:
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=gitea
- POSTGRES_DB=gitea
networks:
- gitea
volumes:
- ./postgres:/var/lib/postgresql/data
After all three setups were started with sudo docker compose up -d, a first look at resource usage (sudo docker stats) already showed differences: the MySQL container consumed significantly more RAM at idle than PostgreSQL or the SQLite solution embedded in Gitea.
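If you prefer a one-off snapshot over the live-updating view, docker stats can be told not to stream and to format its output via the documented Go-template fields. Something like this gives a compact memory comparison:
# One-time snapshot of per-container memory and CPU usage
sudo docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.CPUPerc}}"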
Afterward, I went through the setup wizard on each instance, created an admin account, and generated an API token for the upcoming tests.
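Tokens can be generated in the web UI under Settings → Applications, or directly via the API. As a rough sketch (user name and password are placeholders, and the scope names follow recent Gitea versions; adjust as needed):
# Hypothetical example: create a scoped API token for the admin user
curl -s -X POST "http://<server-ip>:3000/api/v1/users/giteaadmin/tokens" \
  -u "giteaadmin:<password>" \
  -H "Content-Type: application/json" \
  -d '{"name": "benchmark-token", "scopes": ["write:repository", "write:issue"]}'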
Benchmark 1: The API Real-World Test – Gitea in Action
To test performance under realistic conditions, I wrote a Node.js script that uses the Gitea API to perform typical actions. A fourth server served as the test client, so that load generation would not steal resources from the instances themselves and all three were measured under the same network conditions.
The script performed the following operations and measured the time required for each:
- Repository Creation (150 Repos): A write-intensive operation that creates new entries in several database tables.
- Issue Creation (750 Issues): Simulates many small, consecutive write operations.
- Repository Listing (150 Fetches): A simple read operation.
- Issue Listing (5 Fetches): A more complex read operation that likely requires filters, sorting, and joins.
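The script's only dependency is axios, so a minimal setup on the test client might look like this (assuming Node.js and npm are already installed):
# Prepare the test client
mkdir gitea-benchmark && cd gitea-benchmark
npm init -y
npm install axios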
const axios = require("axios");
const { performance } = require("perf_hooks");

const GITEA_INSTANCES = [
  {
    name: "Gitea-SQLite",
    baseUrl: "http://12.34.123.123:3000",
    apiToken: "4f1c9c59ada75d793e4c6dbfab884768e5d52fe4",
  },
  {
    name: "Gitea-MariaDB",
    baseUrl: "http://191.199.123.123:3000",
    apiToken: "ee077f0b0cc665445c97f0bbd5aa814740279871",
  },
  {
    name: "Gitea-PostgreSQL",
    baseUrl: "http://138.199.123.123:3000",
    apiToken: "1377c5d0f6e48ed1b93a716f0b51807f2ca010fc",
  },
];

const NUM_REPO_CREATIONS = 150;
const NUM_ISSUE_CREATIONS_PER_REPO = 5;
const NUM_REPO_LISTINGS = 150;
const NUM_ISSUE_LISTINGS_PER_REPO = 5;

function generateRandomString(length = 8) {
  return Math.random()
    .toString(36)
    .substring(2, 2 + length);
}

async function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// --- Gitea test functions ---
async function createRepository(instanceConfig, repoName) {
  const url = `${instanceConfig.baseUrl}/api/v1/user/repos`;
  const headers = { Authorization: `token ${instanceConfig.apiToken}` };
  const data = {
    name: repoName,
    private: false,
    description: `Test repository ${repoName}`,
  };
  try {
    const response = await axios.post(url, data, { headers });
    return response.data;
  } catch (error) {
    console.error(`[${instanceConfig.name}] Error creating repo ${repoName}:`, error.response ? error.response.data : error.message);
    throw error;
  }
}

async function createIssue(instanceConfig, owner, repoName, issueTitle, issueBody) {
  const url = `${instanceConfig.baseUrl}/api/v1/repos/${owner}/${repoName}/issues`;
  const headers = { Authorization: `token ${instanceConfig.apiToken}` };
  const data = {
    title: issueTitle,
    body: issueBody,
  };
  try {
    await axios.post(url, data, { headers });
  } catch (error) {
    console.error(
      `[${instanceConfig.name}] Error creating issue "${issueTitle}" in ${owner}/${repoName}:`,
      error.response ? error.response.data : error.message,
    );
    throw error;
  }
}

async function listRepositories(instanceConfig) {
  const url = `${instanceConfig.baseUrl}/api/v1/user/repos`;
  const headers = { Authorization: `token ${instanceConfig.apiToken}` };
  try {
    await axios.get(url, { headers });
  } catch (error) {
    console.error(`[${instanceConfig.name}] Error listing repos:`, error.response ? error.response.data : error.message);
    throw error;
  }
}

async function listIssues(instanceConfig, owner, repoName) {
  const url = `${instanceConfig.baseUrl}/api/v1/repos/${owner}/${repoName}/issues`;
  const headers = { Authorization: `token ${instanceConfig.apiToken}` };
  try {
    await axios.get(url, { headers });
  } catch (error) {
    console.error(
      `[${instanceConfig.name}] Error listing issues in ${owner}/${repoName}:`,
      error.response ? error.response.data : error.message,
    );
    throw error;
  }
}

async function getAuthenticatedUser(instanceConfig) {
  const url = `${instanceConfig.baseUrl}/api/v1/user`;
  const headers = { Authorization: `token ${instanceConfig.apiToken}` };
  try {
    const response = await axios.get(url, { headers });
    return response.data.login;
  } catch (error) {
    console.error(`[${instanceConfig.name}] Error fetching user:`, error.response ? error.response.data : error.message);
    throw error;
  }
}

async function runGiteaInstanceBenchmark(instanceConfig) {
  console.log(`\n--- Starting benchmark for: ${instanceConfig.name} ---`);
  const results = {
    repoCreationTime: 0,
    issueCreationTime: 0,
    repoListingTime: 0,
    issueListingTime: 0,
    totalErrors: 0,
  };
  let owner;
  try {
    owner = await getAuthenticatedUser(instanceConfig);
    if (!owner) {
      console.error(`[${instanceConfig.name}] Could not determine user. Is the API token correct and active?`);
      results.totalErrors++;
      return results;
    }
    console.log(`[${instanceConfig.name}] Running tests as user "${owner}".`);

    // --- 1. Create repositories (write load) ---
    const createdRepoNames = [];
    let startTime = performance.now();
    for (let i = 0; i < NUM_REPO_CREATIONS; i++) {
      const repoName = `perf-test-repo-${generateRandomString()}`;
      try {
        await createRepository(instanceConfig, repoName);
        createdRepoNames.push(repoName);
      } catch (e) {
        results.totalErrors++;
      }
      await sleep(50);
    }
    results.repoCreationTime = performance.now() - startTime;
    console.log(`[${instanceConfig.name}] ${NUM_REPO_CREATIONS} repos created in ${results.repoCreationTime.toFixed(2)} ms`);

    // --- 2. Create issues (write load) ---
    if (createdRepoNames.length > 0) {
      startTime = performance.now();
      for (const repoName of createdRepoNames) {
        for (let i = 0; i < NUM_ISSUE_CREATIONS_PER_REPO; i++) {
          try {
            await createIssue(
              instanceConfig,
              owner,
              repoName,
              `Test Issue ${i + 1} ${generateRandomString(4)}`,
              "This is an automated test issue.",
            );
          } catch (e) {
            results.totalErrors++;
          }
          await sleep(30);
        }
      }
      results.issueCreationTime = performance.now() - startTime;
      const totalIssuesCreated = createdRepoNames.length * NUM_ISSUE_CREATIONS_PER_REPO;
      console.log(`[${instanceConfig.name}] ${totalIssuesCreated} issues created in ${results.issueCreationTime.toFixed(2)} ms`);
    } else {
      console.log(`[${instanceConfig.name}] No repos created, skipping issue creation.`);
    }

    // --- 3. Fetch the repository list (read load) ---
    startTime = performance.now();
    for (let i = 0; i < NUM_REPO_LISTINGS; i++) {
      try {
        await listRepositories(instanceConfig);
      } catch (e) {
        results.totalErrors++;
      }
      await sleep(20);
    }
    results.repoListingTime = performance.now() - startTime;
    console.log(`[${instanceConfig.name}] Repos listed ${NUM_REPO_LISTINGS} times in ${results.repoListingTime.toFixed(2)} ms`);

    // --- 4. Fetch the issues of one repo (read load) ---
    if (createdRepoNames.length > 0) {
      const repoToTestIssues = createdRepoNames[0];
      startTime = performance.now();
      for (let i = 0; i < NUM_ISSUE_LISTINGS_PER_REPO; i++) {
        try {
          await listIssues(instanceConfig, owner, repoToTestIssues);
        } catch (e) {
          results.totalErrors++;
        }
        await sleep(20);
      }
      results.issueListingTime = performance.now() - startTime;
      console.log(
        `[${instanceConfig.name}] Issues of '${repoToTestIssues}' listed ${NUM_ISSUE_LISTINGS_PER_REPO} times in ${results.issueListingTime.toFixed(2)} ms`,
      );
    } else {
      console.log(`[${instanceConfig.name}] No repos available for the issue listing test.`);
    }

    // --- Cleanup ---
    console.log(`[${instanceConfig.name}] Starting cleanup...`);
    for (const repoName of createdRepoNames) {
      try {
        const url = `${instanceConfig.baseUrl}/api/v1/repos/${owner}/${repoName}`;
        const headers = { Authorization: `token ${instanceConfig.apiToken}` };
        await axios.delete(url, { headers });
      } catch (error) {
        console.error(
          `[${instanceConfig.name}] Error deleting repo ${repoName}:`,
          error.response ? error.response.data : error.message,
        );
        results.totalErrors++;
      }
      await sleep(50);
    }
    console.log(`[${instanceConfig.name}] Cleanup finished.`);
  } catch (mainError) {
    console.error(`[${instanceConfig.name}] Fatal error in benchmark run:`, mainError.message);
    results.totalErrors++;
  }
  console.log(`[${instanceConfig.name}] Benchmark finished with ${results.totalErrors} errors.`);
  return results;
}

async function main() {
  console.log("Starting Gitea performance benchmark via API...");
  const allResults = {};
  for (const instance of GITEA_INSTANCES) {
    console.log(`\nInitializing test for ${instance.name}. Short pause...`);
    await sleep(5000);
    const instanceResults = await runGiteaInstanceBenchmark(instance);
    allResults[instance.name] = instanceResults;
    await sleep(2000);
  }
  console.log("\n\n--- GITEA API BENCHMARK RESULTS ---");
  for (const instanceName in allResults) {
    const res = allResults[instanceName];
    console.log(`\nResults for: ${instanceName}`);
    console.log(`  Repo creation (${NUM_REPO_CREATIONS} repos):`.padEnd(45) + `${res.repoCreationTime.toFixed(2)} ms`);
    const totalIssues = NUM_REPO_CREATIONS * NUM_ISSUE_CREATIONS_PER_REPO;
    console.log(`  Issue creation (${totalIssues} issues):`.padEnd(45) + `${res.issueCreationTime.toFixed(2)} ms`);
    console.log(`  Repo listing (${NUM_REPO_LISTINGS} fetches):`.padEnd(45) + `${res.repoListingTime.toFixed(2)} ms`);
    console.log(`  Issue listing (${NUM_ISSUE_LISTINGS_PER_REPO} fetches):`.padEnd(45) + `${res.issueListingTime.toFixed(2)} ms`);
    console.log(`  Total errors:`.padEnd(45) + `${res.totalErrors}`);
  }
  console.log("\nBenchmark finished.");
}

main().catch((err) => {
  console.error("Unexpected error in main execution:", err);
});
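Saved as, for example, gitea-api-benchmark.js (the filename is arbitrary), the benchmark is started with:
node gitea-api-benchmark.js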
API Test Results
After running the test multiple times to minimize outliers, the following picture emerged (times in milliseconds, lower is better). Note that the script's fixed sleep() pauses are included in the measured times; since they add the same constant offset to every instance, the relative differences are what matter:
Operation | SQLite | MariaDB (MySQL) | PostgreSQL |
---|---|---|---|
Repo Creation (150) | 18,969.62 | 23,659.65 | 18,006.23 |
Issue Creation (750) | 41,311.92 | 68,647.51 | 49,333.02 |
Repo Listing (150 fetches) | 5,143.53 | 7,551.34 | 5,867.49 |
Issue Listing (5 fetches) | 268.11 | 402.66 | 321.45 |
Analysis of API Results
The result is astounding:
- PostgreSQL was the fastest at creating repositories. This indicates that it handles complex, transaction-heavy write operations very efficiently.
- SQLite won the other three categories, in some cases by a significant margin! For many small, consecutive write operations and for read operations, the low overhead of the in-process, file-based database was a clear advantage.
Benchmark 2: Direct Database Showdown – Who is Fastest in Raw Performance?
The first test was practical but influenced by the Gitea application. I wanted to know how the databases compare directly. For this, I deleted the Gitea containers and only ran the MariaDB (version 11) and Postgres (version 17) containers, exposing their ports directly.
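Starting two standalone containers with exposed ports can look roughly like this (a sketch; the credentials are chosen to match the script configuration below):
# Sketch: standalone databases with their ports exposed directly
sudo docker run -d --name mariadb-bench -p 3306:3306 \
  -e MARIADB_ROOT_PASSWORD=einStarkesPasswort \
  -e MARIADB_USER=benchmarkuser \
  -e MARIADB_PASSWORD=einStarkesPasswort \
  -e MARIADB_DATABASE=test_datenbank \
  mariadb:11

sudo docker run -d --name postgres-bench -p 5432:5432 \
  -e POSTGRES_USER=benchmarkuser \
  -e POSTGRES_PASSWORD=einStarkesPasswort \
  -e POSTGRES_DB=test_datenbank \
  postgres:17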
On my test server, I wrote a second Node.js script that connects directly to the databases and performs two tests:
- Write Test: Insert 100,000 rows into a simple table in batches of 1000.
- Read Test: Read 100,000 rows individually by their ID.
A local SQLite database file on the test server served as a third comparison partner.
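The second script needs the three database driver packages, nothing else:
# Install the database drivers used by the script
npm install pg mysql2 sqlite3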
const { Client } = require("pg");
const mysql = require("mysql2/promise");
const sqlite3 = require("sqlite3").verbose();
const { performance } = require("perf_hooks");
const path = require("path");

// --- Configuration ---
const MARIA_DB_CONFIG = {
  host: "191.199.123.123",
  user: "benchmarkuser",
  password: "einStarkesPasswort",
  database: "test_datenbank",
  port: 3306,
  connectTimeout: 10000,
};

const POSTGRES_CONFIG = {
  host: "138.199.123.123",
  user: "benchmarkuser",
  password: "einStarkesPasswort",
  database: "test_datenbank",
  port: 5432,
  connectionTimeoutMillis: 10000,
};

const SQLITE_DB_PATH = path.join(__dirname, "benchmark_sqlite.db");
const TEST_TABLE_NAME = "benchmark_items";
const NUM_WRITES = 100000;
const NUM_READS = 100000;
const BATCH_SIZE_WRITE = 1000;

// --- Helper functions ---
function generateRandomString(length = 10) {
  return Math.random()
    .toString(36)
    .substring(2, 2 + length);
}

async function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// --- MariaDB test functions ---
async function runMariaDBBenchmark() {
  console.log("\n--- Starting MariaDB benchmark ---");
  let connection;
  const results = { writeTime: 0, readTime: 0, error: null };
  try {
    console.log(`Connecting to MariaDB at ${MARIA_DB_CONFIG.host}...`);
    connection = await mysql.createConnection(MARIA_DB_CONFIG);
    console.log("MariaDB connected.");

    // 1. Create the table (dropping any table left over from a previous run)
    console.log(`Creating table ${TEST_TABLE_NAME}...`);
    await connection.execute(`DROP TABLE IF EXISTS ${TEST_TABLE_NAME}`);
    await connection.execute(`
      CREATE TABLE ${TEST_TABLE_NAME} (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        description TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);
    console.log(`Table ${TEST_TABLE_NAME} created.`);

    // 2. Write test
    console.log(`Starting write test (${NUM_WRITES} rows in batches of ${BATCH_SIZE_WRITE})...`);
    let writeStartTime = performance.now();
    let itemsToWrite = [];
    for (let i = 0; i < NUM_WRITES; i++) {
      itemsToWrite.push([`ItemName ${generateRandomString(8)}`, `Description ${generateRandomString(50)}`]);
      if (itemsToWrite.length === BATCH_SIZE_WRITE || i === NUM_WRITES - 1) {
        const query = `INSERT INTO ${TEST_TABLE_NAME} (name, description) VALUES ?`;
        await connection.query(query, [itemsToWrite]);
        itemsToWrite = [];
      }
    }
    results.writeTime = performance.now() - writeStartTime;
    console.log(`Write test finished in ${results.writeTime.toFixed(2)} ms`);

    // Short pause
    await sleep(1000);

    // 3. Read test (fetch all IDs, then read random rows one by one)
    console.log(`Starting read test (${NUM_READS} rows)...`);
    const [rows] = await connection.execute(`SELECT id FROM ${TEST_TABLE_NAME}`);
    const ids = rows.map((row) => row.id);
    if (ids.length === 0) {
      console.log("No data available to read.");
      results.readTime = 0;
    } else {
      let readStartTime = performance.now();
      for (let i = 0; i < NUM_READS; i++) {
        const randomId = ids[Math.floor(Math.random() * ids.length)];
        await connection.execute(`SELECT * FROM ${TEST_TABLE_NAME} WHERE id = ?`, [randomId]);
      }
      results.readTime = performance.now() - readStartTime;
      console.log(`Read test finished in ${results.readTime.toFixed(2)} ms`);
    }
  } catch (err) {
    console.error("MariaDB benchmark error:", err.message);
    results.error = err.message;
  } finally {
    if (connection) {
      console.log("Closing MariaDB connection.");
      await connection.end();
    }
  }
  return results;
}

// --- PostgreSQL test functions ---
async function runPostgresBenchmark() {
  console.log("\n--- Starting PostgreSQL benchmark ---");
  const client = new Client(POSTGRES_CONFIG);
  const results = { writeTime: 0, readTime: 0, error: null };
  try {
    console.log(`Connecting to PostgreSQL at ${POSTGRES_CONFIG.host}...`);
    await client.connect();
    console.log("PostgreSQL connected.");

    // 1. Create the table
    console.log(`Creating table ${TEST_TABLE_NAME}...`);
    await client.query(`DROP TABLE IF EXISTS ${TEST_TABLE_NAME}`);
    await client.query(`
      CREATE TABLE ${TEST_TABLE_NAME} (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        description TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);
    console.log(`Table ${TEST_TABLE_NAME} created.`);

    // 2. Write test
    console.log(`Starting write test (${NUM_WRITES} rows in batches of ${BATCH_SIZE_WRITE})...`);
    let writeStartTime = performance.now();
    let itemsBuffer = [];
    let valuePlaceholders = [];
    for (let i = 0; i < NUM_WRITES; i++) {
      itemsBuffer.push(`ItemName ${generateRandomString(8)}`);
      itemsBuffer.push(`Description ${generateRandomString(50)}`);
      valuePlaceholders.push(`($${itemsBuffer.length - 1}, $${itemsBuffer.length})`);
      if (valuePlaceholders.length === BATCH_SIZE_WRITE || i === NUM_WRITES - 1) {
        const queryText = `INSERT INTO ${TEST_TABLE_NAME} (name, description) VALUES ${valuePlaceholders.join(", ")}`;
        await client.query(queryText, itemsBuffer);
        itemsBuffer = [];
        valuePlaceholders = [];
      }
    }
    results.writeTime = performance.now() - writeStartTime;
    console.log(`Write test finished in ${results.writeTime.toFixed(2)} ms`);

    // Short pause
    await sleep(1000);

    // 3. Read test
    console.log(`Starting read test (${NUM_READS} rows)...`);
    const res = await client.query(`SELECT id FROM ${TEST_TABLE_NAME}`);
    const ids = res.rows.map((row) => row.id);
    if (ids.length === 0) {
      console.log("No data available to read.");
      results.readTime = 0;
    } else {
      let readStartTime = performance.now();
      for (let i = 0; i < NUM_READS; i++) {
        const randomId = ids[Math.floor(Math.random() * ids.length)];
        await client.query(`SELECT * FROM ${TEST_TABLE_NAME} WHERE id = $1`, [randomId]);
      }
      results.readTime = performance.now() - readStartTime;
      console.log(`Read test finished in ${results.readTime.toFixed(2)} ms`);
    }
  } catch (err) {
    console.error("PostgreSQL benchmark error:", err.message);
    results.error = err.stack; // stack for more detail
  } finally {
    if (client) {
      console.log("Closing PostgreSQL connection.");
      await client.end();
    }
  }
  return results;
}

// --- SQLite test functions ---
async function runSQLiteBenchmark() {
  console.log("\n--- Starting SQLite benchmark ---");
  const results = { writeTime: 0, readTime: 0, error: null };

  // Small promise wrappers around the callback-based sqlite3 API
  const openDb = () =>
    new Promise((resolve, reject) => {
      const db = new sqlite3.Database(SQLITE_DB_PATH, (err) => {
        if (err) reject(err);
        else resolve(db);
      });
    });
  const dbRun = (db, sql, params = []) =>
    new Promise((resolve, reject) => {
      db.run(sql, params, function (err) {
        if (err) reject(err);
        else resolve(this);
      });
    });
  const dbAll = (db, sql, params = []) =>
    new Promise((resolve, reject) => {
      db.all(sql, params, (err, rows) => {
        if (err) reject(err);
        else resolve(rows);
      });
    });
  const dbClose = (db) =>
    new Promise((resolve, reject) => {
      db.close((err) => {
        if (err) reject(err);
        else resolve();
      });
    });

  let db;
  try {
    console.log(`Opening/creating SQLite database: ${SQLITE_DB_PATH}`);
    db = await openDb();
    console.log("SQLite DB opened.");
    await dbRun(db, "PRAGMA synchronous = OFF");
    await dbRun(db, "PRAGMA journal_mode = MEMORY;");

    // 1. Create the table
    console.log(`Creating table ${TEST_TABLE_NAME}...`);
    await dbRun(db, `DROP TABLE IF EXISTS ${TEST_TABLE_NAME}`);
    await dbRun(
      db,
      `
      CREATE TABLE ${TEST_TABLE_NAME} (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        description TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
      `,
    );
    console.log(`Table ${TEST_TABLE_NAME} created.`);

    // 2. Write test (for SQLite, single inserts inside one transaction are usually fastest)
    console.log(`Starting write test (${NUM_WRITES} rows)...`);
    let writeStartTime = performance.now();
    await dbRun(db, "BEGIN TRANSACTION");
    const stmt = await new Promise((resolve, reject) => {
      const prepStmt = db.prepare(`INSERT INTO ${TEST_TABLE_NAME} (name, description) VALUES (?, ?)`, (err) => {
        if (err) reject(err);
        else resolve(prepStmt);
      });
    });
    for (let i = 0; i < NUM_WRITES; i++) {
      await new Promise((resolve, reject) => {
        stmt.run([`ItemName ${generateRandomString(8)}`, `Description ${generateRandomString(50)}`], function (err) {
          if (err) reject(err);
          else resolve(this);
        });
      });
    }
    await new Promise((resolve, reject) => {
      stmt.finalize((err) => {
        if (err) reject(err);
        else resolve();
      });
    });
    await dbRun(db, "COMMIT");
    results.writeTime = performance.now() - writeStartTime;
    console.log(`Write test finished in ${results.writeTime.toFixed(2)} ms`);

    await sleep(1000);

    // 3. Read test
    console.log(`Starting read test (${NUM_READS} rows)...`);
    const rows = await dbAll(db, `SELECT id FROM ${TEST_TABLE_NAME}`);
    const ids = rows.map((row) => row.id);
    if (ids.length === 0) {
      console.log("No data available to read.");
      results.readTime = 0;
    } else {
      let readStartTime = performance.now();
      const readStmt = await new Promise((resolve, reject) => {
        const prep = db.prepare(`SELECT * FROM ${TEST_TABLE_NAME} WHERE id = ?`, (err) => {
          if (err) reject(err);
          else resolve(prep);
        });
      });
      for (let i = 0; i < NUM_READS; i++) {
        const randomId = ids[Math.floor(Math.random() * ids.length)];
        await new Promise((resolve, reject) => {
          readStmt.get([randomId], (err, row) => {
            if (err) reject(err);
            else resolve(row);
          });
        });
      }
      await new Promise((resolve, reject) => {
        readStmt.finalize((err) => {
          if (err) reject(err);
          else resolve();
        });
      });
      results.readTime = performance.now() - readStartTime;
      console.log(`Read test finished in ${results.readTime.toFixed(2)} ms`);
    }
  } catch (err) {
    console.error("SQLite benchmark error:", err.message);
    results.error = err.message;
  } finally {
    if (db) {
      console.log("Closing SQLite DB.");
      try {
        await dbClose(db);
      } catch (closeErr) {
        console.error("Error closing SQLite DB:", closeErr.message);
      }
    }
  }
  return results;
}

// --- Main execution ---
async function main() {
  console.log("Starting RAW database performance benchmark (Docker/local)...");
  console.log(`Parameters: ${NUM_WRITES} writes, ${NUM_READS} reads`);
  const allResults = {};

  // Test SQLite
  allResults.SQLite = await runSQLiteBenchmark();
  await sleep(2000);

  // Test MariaDB
  allResults.MariaDB = await runMariaDBBenchmark();
  await sleep(2000);

  // Test Postgres
  allResults.PostgreSQL = await runPostgresBenchmark();

  console.log("\n\n--- RAW DATABASE BENCHMARK RESULTS ---");
  const printResult = (dbName, res) => {
    console.log(`\n${dbName}:`);
    if (res.error) {
      console.log(`  Error: ${res.error}`);
    } else {
      console.log(`  Write time (${NUM_WRITES} rows):`.padEnd(45) + `${res.writeTime.toFixed(2)} ms`);
      console.log(`  Read time (${NUM_READS} rows):`.padEnd(45) + `${res.readTime.toFixed(2)} ms`);
    }
  };
  printResult("SQLite (local)", allResults.SQLite);
  printResult("MariaDB (Docker on VPS)", allResults.MariaDB);
  printResult("PostgreSQL (Docker on VPS)", allResults.PostgreSQL);
  console.log("\nBenchmark finished.");
}

main().catch((err) => {
  console.error("Unexpected error in main execution:", err);
});
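Saved as, say, db-benchmark.js, the direct benchmark runs with:
node db-benchmark.js
(Note that the sqlite3 package pulls in or compiles a native binding during npm install, so the install step can take a moment.)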
Direct Database Test Results
The results (again in milliseconds) show a different picture:
Operation | SQLite | MariaDB (MySQL) | PostgreSQL |
---|---|---|---|
Write Test (100k) | 23,386.67 | 2,095.29 | 2,801.28 |
Read Test (100k) | 32,989.38 | 106,746.85 | 105,027.99 |
Analysis of Direct Test Results
- Writing: Here, the "big" databases show their strength. MariaDB and PostgreSQL are roughly ten times faster than SQLite when bulk-inserting data; efficient batched inserts pay off here.
- Reading: When reading individual records, SQLite is again unbeatable. An important caveat, though: each of the 100,000 individual reads against MariaDB and PostgreSQL travels over the network from the test client to the VPS, while SQLite reads from a local file. At roughly 1 ms of round-trip latency per query, the network alone plausibly accounts for about 100 of the measured ~105 seconds.
Conclusion: Performance Isn’t Everything
This test clearly shows that there is no single “best” database. The performance heavily depends on the type of operations being performed. While SQLite shines in simple read accesses, PostgreSQL and MariaDB demonstrate their superiority in write-intensive bulk operations.
However, the choice of the right database should not be based solely on raw performance figures. Other, often more important factors are:
- Scalability: How many users and how much data growth do you expect? Here, PostgreSQL and MariaDB have a clear advantage.
- Data Integrity: Do you need advanced transaction controls and constraints that go beyond what SQLite offers?
- Feature set: Are special features like JSONB support (PostgreSQL) or specific replication mechanisms (MariaDB) important to you?
- Operational Overhead: Are you willing to manage and back up a separate database container, or do you prefer the simplicity of a file-based solution?
For most small to medium self-hosting projects where simplicity and good read performance are paramount, SQLite has proven to be a more than capable option. However, as soon as the requirements for concurrency and complex write operations increase, PostgreSQL or MariaDB are the more solid and future-proof choice.
FAQs
Is SQLite really 'production-ready'?
Yes, for many use cases, absolutely. For personal projects, small teams, or servers with limited resources, it is often the best choice because it is easy to manage and, as the test shows, can be very performant. For a high volume of concurrent write accesses or very large instances, PostgreSQL or MariaDB might have advantages.