22 Commits

Author SHA1 Message Date
e8c188debf [microvms] rm unused code
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m50s
2025-03-20 19:55:51 +01:00
1f559d93ba [microvms] initialize directories on microvm host
Some checks failed
Check flake syntax / flake-check (push) Has been cancelled
2025-03-20 19:51:52 +01:00
a03b7506c5 [run-vm] keep microvm.deployHosts on nestedMicrovms 2025-03-20 19:51:14 +01:00
3b2a7cedc5 [backup] add 24 hourly backups
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m36s
2025-03-17 18:34:03 +01:00
a48e271853 [docs] rm outdated 2025-03-17 16:02:57 +01:00
d202a3d0cb [user module] I love symlinks
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m23s
2025-03-16 14:16:52 +01:00
ef33833910 Add backup server to vpn
All checks were successful
Check flake syntax / flake-check (push) Successful in 6m18s
2025-03-16 13:38:37 +01:00
d73031e7f1 Merge pull request 'backup module' (#92) from sanoid into master
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m28s
Reviewed-on: #92
Reviewed-by: ahtlon <ahtlon@noreply.git.dynamicdiscord.de>
2025-03-16 13:13:55 +01:00
be0bb0b08b [backup] fix description
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m13s
2025-03-16 12:53:43 +01:00
026494c877 [backup] fix typo
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m12s
2025-03-16 11:25:37 +01:00
3021716640 [backup] update module descriptions
Some checks failed
Check flake syntax / flake-check (push) Failing after 2m16s
2025-03-16 11:15:52 +01:00
70ec63f213 [users] fix typo
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m13s
2025-03-16 10:24:17 +01:00
91d86c49a1 [fanny] enable automatic snapshots
Some checks failed
Check flake syntax / flake-check (push) Failing after 3m0s
2025-03-16 10:18:57 +01:00
96dee29595 [fanny] enable backup user 2025-03-16 10:18:39 +01:00
d5e94b50cb [backup] fix errors
All checks were successful
Check flake syntax / flake-check (push) Successful in 5m44s
2025-03-16 10:09:54 +01:00
286e03c853 [backup] WIP setup sanoid/syncoid module
All checks were successful
Check flake syntax / flake-check (push) Successful in 6m5s
2025-03-16 00:57:24 +01:00
766b738a6a [malovpn] change peers.nix address to string without CIDR notation
this way we can easily use ip by hostname in other modules
2025-03-16 00:54:31 +01:00
de600fe7c7 [docs] update create
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m34s
2025-03-13 16:50:15 +01:00
5731fc795e Merge pull request 'backups add pull user' (#89) from backups into master
All checks were successful
Check flake syntax / flake-check (push) Successful in 5m46s
Reviewed-on: #89
2025-03-12 20:22:13 +01:00
1083949c87 [user module] add backup usr
All checks were successful
Check flake syntax / flake-check (push) Successful in 5m57s
2025-03-12 20:21:47 +01:00
413202e940 Merge pull request 'More nextcloud fixes' (#90) from nextcloud_issue_2 into master
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m38s
Reviewed-on: #90
2025-03-12 12:21:17 +01:00
ec20c80251 add proxy to trusted_proxies
All checks were successful
Check flake syntax / flake-check (push) Successful in 5m47s
2025-03-11 20:40:12 +01:00
10 changed files with 163 additions and 69 deletions

View File

@@ -1,47 +1,19 @@
# Create host with disko-install
How to use disko-install is described here: https://github.com/nix-community/disko/blob/master/docs/disko-install.md
---
Here are the exact steps to get bakunin running:
First create machines/hostname/configuration.nix
Add the host's nixosConfiguration in machines/configurations.nix
Boot the NixOS installer on the machine.
# Create host with nixos-anywhere
We use a nixos-anywhere wrapper script to deploy new hosts.
The wrapper script takes care of copying persistent host keys before calling nixos-anywhere.
To accomplish that, boot the host from a NixOS image and set up a root password.
``` bash
# establish network connection
wpa_passphrase "network" "password" > wpa.conf
wpa_supplicant -B -i wlp3s0 -c wpa.conf
ping 8.8.8.8
# if that works continue
sudo su
passwd
```
# generate a base hardware config
nixos-generate-config --root /tmp/config --no-filesystems
After that, get the host's IP using `ip a` and start the deployment from your own machine:
# get the infra repo
nix-shell -p git
git clone https://git.dynamicdiscord.de/kalipso/infrastructure
cd infrastructure
# add the new generated hardware config (and import in hosts configuration.nix)
cp /tmp/config/etc/nixos/hardware-configuration.nix machines/bakunin/
# check which hard drive we want to install the system on
lsblk # choose hard drive, in this case /dev/sda
# run nixos-install on that harddrive
sudo nix --extra-experimental-features flakes --extra-experimental-features nix-command run 'github:nix-community/disko/latest#disko-install' -- --flake .#bakunin --disk main /dev/sda
# this failed with out of memory
# running again showed: no disk left on device
# it seems the USB stick I used for flashing is way too small
# it is only
# with a bigger one (more than 8 GB I guess) it should work
# instead of the disko-install tool I try the old method: first partitioning using disko and then installing the system
# for that I needed to adjust ./machines/modules/disko/btrfs-laptop.nix and set the disk to "/dev/sda"
sudo nix --extra-experimental-features "flakes nix-command" run 'github:nix-community/disko/latest' -- --mode format --flake .#bakunin
# failed with no space left on device.
# problem is that lots of data is written to the local /nix/store, which is mounted on tmpfs in RAM
# it seems that a workaround could be modifying the bootable stick to contain a swap partition to extend tmpfs storage
``` bash
# from infrastructure repository root dir:
nix develop .#
remote-install hostname 10.0.42.23
```
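The `remote-install` wrapper itself is not part of this changeset. As a rough, illustrative sketch only (the key location under `secrets/<host>/`, the packaging via `pkgs.writeShellApplication`, and the use of nixos-anywhere's `--extra-files` option are assumptions, not the repository's actual implementation), such a wrapper could look roughly like this:
```nix
# Rough sketch only – not the repository's real remote-install script.
# Assumes host keys live under secrets/<host>/ and pkgs.nixos-anywhere is available.
{ pkgs }:
pkgs.writeShellApplication {
  name = "remote-install";
  runtimeInputs = [ pkgs.nixos-anywhere pkgs.openssh ];
  text = ''
    host="$1"    # flake attribute, e.g. bakunin
    target="$2"  # IP of the booted installer, e.g. 10.0.42.23

    # stage the persistent SSH host keys so they end up on the fresh system
    extra="$(mktemp -d)"
    install -d -m 700 "$extra/etc/ssh"
    cp secrets/"$host"/ssh_host_ed25519_key* "$extra/etc/ssh/"

    # --extra-files copies the staged tree onto the target before activation
    nixos-anywhere --flake ".#$host" --extra-files "$extra" "root@$target"
  '';
}
```
Called as `remote-install bakunin 10.0.42.23`, it stages the host keys and hands off to nixos-anywhere in one step.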
# Testing Disko
@@ -49,18 +21,3 @@ Testing disko partitioning is working quite well. Just run the following and che
```bash
nix run -L .\#nixosConfigurations.fanny.config.system.build.vmWithDisko
```
The only problem is that encryption is not working, so it needs to be commented out. For testing host fanny, the following parts in ```./machines/modules/disko/fanny.nix``` need to be commented out (for both pools!):
```nix
datasets = {
  encrypted = {
    options = {
      encryption = "aes-256-gcm"; #THIS ONE
      keyformat = "passphrase"; #THIS ONE
      keylocation = "file:///tmp/root.key"; #THIS ONE
    };
    # use this to read the key during boot
    postCreateHook = '' #THIS ONE
      zfs set keylocation="prompt" "zroot/$name"; #THIS ONE
    ''; #THIS ONE
```

View File

@@ -18,6 +18,7 @@ in
inputs.self.nixosModules.malobeo.microvm
inputs.self.nixosModules.malobeo.metrics
inputs.self.nixosModules.malobeo.users
inputs.self.nixosModules.malobeo.backup
];
virtualisation.vmVariantWithDisko = {
@@ -42,6 +43,11 @@ in
cacheurl = "https://cache.dynamicdiscord.de";
};
malobeo.backup = {
enable = true;
snapshots = [ "storage/encrypted" "zroot/encrypted/var" ];
};
nix = {
settings.experimental-features = [ "nix-command" "flakes" ];
#always update microvms
@@ -53,6 +59,7 @@ in
malobeo.users = {
malobeo = true;
admin = true;
backup = true;
};
malobeo.disks = {

View File

@@ -195,8 +195,7 @@ rec {
vmNestedMicroVMOverwrites = host: sopsDummy: {
services.malobeo.microvm.deployHosts = pkgs.lib.mkForce [];
microvm.vms =
microvm.vms = pkgs.lib.mkForce (
let
# Map the values to each hostname to then generate an Attrset using listToAttrs
mapperFunc = name: { inherit name; value = {
@@ -216,7 +215,7 @@ rec {
};
}; };
in
builtins.listToAttrs (map mapperFunc self.nixosConfigurations.${host}.config.services.malobeo.microvm.deployHosts);
builtins.listToAttrs (map mapperFunc self.nixosConfigurations.${host}.config.services.malobeo.microvm.deployHosts));
};
buildVM = host: networking: sopsDummy: disableDisko: varPath: writableStore: fwdPort: (self.nixosConfigurations.${host}.extendModules {

View File

@@ -0,0 +1,102 @@
{ config, lib, pkgs, ... }:
with lib;
let
  cfg = config.malobeo.backup;
  hostToCommand = (hostname: datasetNames:
    (map (dataset: {
      name = "${hostname}_${dataset.sourceDataset}";
      value = {
        inherit hostname;
        inherit (dataset) sourceDataset targetDataset;
      };
    }) datasetNames));
  peers = import ./peers.nix;
  enableSnapshots = cfg.snapshots != null;
  enableBackups = cfg.hosts != null;
in
{
  options.malobeo.backup = {
    enable = mkOption {
      type = types.bool;
      default = false;
      description = "Enable sanoid/syncoid based backup functionality";
    };
    snapshots = mkOption {
      type = types.nullOr (types.listOf types.str);
      default = null;
      description = "Automatic snapshots will be created for the given datasets";
    };
    hosts = mkOption {
      default = null;
      type = types.nullOr (types.attrsOf (types.listOf (types.submodule {
        options = {
          sourceDataset = mkOption {
            type = types.str;
            description = "The source that needs to be backed up";
          };
          targetDataset = mkOption {
            type = types.str;
            description = "The target dataset where the backup should be stored";
          };
        };
      })));
      description = ''
        Hostname with list of datasets to backup. This option should be defined on hosts that will store backups.
        It is necessary to add the machines that get backed up to known hosts.
        This can be done for example systemwide using
        programs.ssh.knownHosts."10.100.0.101" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHqp2/YiiIhai7wyScGZJ20gtrzY+lp4N/8unyRs4qhc";
        Or set it for the syncoid user directly.
      '';
    };
    sshKey = mkOption {
      default = null;
      type = types.nullOr types.str;
      description = "Set path to ssh key used for pull backups. Otherwise default key is used";
    };
  };
  config = mkIf (cfg.enable) {
    services.sanoid = mkIf (enableSnapshots) {
      enable = true;
      templates."default" = {
        hourly = 24;
        daily = 30; #keep 30 daily snapshots
        monthly = 6; #keep 6 monthly backups
        yearly = 0;
        autosnap = true; #take snapshots automatically
        autoprune = true; #delete old snapshots
      };
      datasets = builtins.listToAttrs (map (name: { inherit name; value = {
        useTemplate = [ "default" ];
        recursive = true;
      }; }) cfg.snapshots);
    };
    services.syncoid = mkIf (enableBackups) {
      enable = true;
      sshKey = cfg.sshKey;
      commonArgs = [
        "--no-sync-snap"
      ];
      interval = "*-*-* 04:15:00";
      commands = builtins.mapAttrs (name: value: {
        source = "backup@${peers.${value.hostname}.address}:${value.sourceDataset}";
        target = "${value.targetDataset}";
        sendOptions = "w";
        recvOptions = "\"\"";
        recursive = true;
      }) (builtins.listToAttrs (builtins.concatLists (builtins.attrValues (builtins.mapAttrs hostToCommand cfg.hosts))));
    };
  };
}
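For reference, a minimal usage sketch of the pull side of this module (the target dataset name below is made up; the snapshot side is exactly what the fanny configuration earlier in this changeset sets):
```nix
{
  # On the host that stores the backups (e.g. backup0).
  # The machine being backed up only needs enable = true plus the snapshots list,
  # as shown for fanny above.
  malobeo.backup = {
    enable = true;
    hosts.fanny = [
      {
        sourceDataset = "storage/encrypted";
        targetDataset = "backups/fanny/storage"; # hypothetical target dataset
      }
    ];
  };
}
```
With fanny's peers.nix address of 10.100.0.101, this yields a syncoid command named `fanny_storage/encrypted` that pulls `backup@10.100.0.101:storage/encrypted` into `backups/fanny/storage` daily at 04:15, raw (`sendOptions = "w"`) and without extra sync snapshots (`--no-sync-snap`).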

View File

@@ -102,6 +102,22 @@ in
/run/current-system/sw/bin/microvm -Ru ${name}
'';
};
"microvm-init-dirs@${name}" = {
description = "Initialize microvm directories";
after = [ "zfs-mount.service" ];
wantedBy = [ "microvm@${name}.service" ];
unitConfig.ConditionPathExists = "!/var/lib/microvms/${name}/.is_initialized";
serviceConfig = {
Type = "oneshot";
};
script = ''
mkdir -p /var/lib/microvms/${name}/var
mkdir -p /var/lib/microvms/${name}/etc
mkdir -p /var/lib/microvms/data/${name}
touch /var/lib/microvms/${name}/.is_initialized
'';
};
}) {} (cfg.deployHosts);
systemd.timers = builtins.foldl' (timers: name: timers // {

View File

@@ -2,7 +2,7 @@
"vpn" = {
role = "server";
publicIp = "5.9.153.217";
address = [ "10.100.0.1/24" ];
address = "10.100.0.1";
allowedIPs = [ "10.100.0.0/24" ];
listenPort = 51821;
publicKey = "hF9H10Y8Ar7zvZXFoNM8LSoaYFgPCXv30c54SSEucX4=";
@@ -11,36 +11,43 @@
"celine" = {
role = "client";
address = [ "10.100.0.2/24" ];
address = "10.100.0.2";
allowedIPs = [ "10.100.0.2/32" ];
publicKey = "Jgx82tSOmZJS4sm1o8Eci9ahaQdQir2PLq9dBqsWZw4=";
};
"desktop" = {
role = "client";
address = [ "10.100.0.3/24" ];
address = "10.100.0.3";
allowedIPs = [ "10.100.0.3/32" ];
publicKey = "FtY2lcdWcw+nvtydOOUDyaeh/xkaqHA8y9GXzqU0Am0=";
};
"atlan-pc" = {
role = "client";
address = [ "10.100.0.5/24" ];
address = "10.100.0.5";
allowedIPs = [ "10.100.0.5/32" ];
publicKey = "TrJ4UAF//zXdaLwZudI78L+rTC36zEDodTDOWNS4Y1Y=";
};
"hetzner" = {
role = "client";
address = [ "10.100.0.6/24" ];
address = "10.100.0.6";
allowedIPs = [ "10.100.0.6/32" ];
publicKey = "csRzgwtnzmSLeLkSwTwEOrdKq55UOxZacR5D3GopCTQ=";
};
"fanny" = {
role = "client";
address = [ "10.100.0.101/24" ];
address = "10.100.0.101";
allowedIPs = [ "10.100.0.101/32" ];
publicKey = "3U59F6T1s/1LaZBIa6wB0qsVuO6pRR9jfYZJIH2piAU=";
};
"backup0" = {
role = "client";
address = "10.100.0.20";
allowedIPs = [ "10.100.0.20/32" ];
publicKey = "Pp55Jg//jREzHdbbIqTXc9N7rnLZIFw904qh6NLrACE=";
};
}
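This is the change described by the commit "[malovpn] change peers.nix address to string without CIDR notation": with the prefix length dropped, other modules can look up a peer's IP by hostname. Both consumption patterns appear later in this changeset; a small illustrative sketch (the attribute names on the left are just labels):
```nix
let
  # peers.nix as changed above: address is now a bare string, allowedIPs keeps CIDR
  peers = import ./peers.nix;
in
{
  # malovpn re-appends the prefix length where CIDR notation is still needed
  interfaceAddress = [ "${peers.fanny.address}/24" ]; # -> [ "10.100.0.101/24" ]

  # the backup module uses the bare address directly as an SSH target
  syncoidSource = "backup@${peers.fanny.address}:storage/encrypted"; # -> backup@10.100.0.101:...
}
```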

View File

@@ -68,7 +68,11 @@ in
users = [ "backup" ];
commands = [
{
command = "${pkgs.zfs-user}/bin/zfs";
command = "/run/current-system/sw/bin/zfs";
options = [ "NOPASSWD" ];
}
{
command = "/run/current-system/sw/bin/zpool";
options = [ "NOPASSWD" ];
}
];
@@ -94,4 +98,4 @@ in
];
}
];
}
}

View File

@@ -70,7 +70,7 @@ in
interfaces = {
malovpn = {
mtu = 1340; #seems to be necessary to proxypass nginx traffic through vpn
address = myPeer.address;
address = [ "${myPeer.address}/24" ];
autostart = cfg.autostart;
listenPort = mkIf (myPeer.role == "server") myPeer.listenPort;

View File

@@ -47,7 +47,7 @@ with lib;
};
extraAppsEnable = true;
extraApps = {
inherit (config.services.nextcloud.package.packages.apps) contacts calendar deck polls;
inherit (config.services.nextcloud.package.packages.apps) contacts calendar deck polls registration;
collectives = pkgs.fetchNextcloudApp {
sha256 = "sha256-cj/8FhzxOACJaUEu0eG9r7iAQmnOG62yFHeyUICalFY=";
url = "https://github.com/nextcloud/collectives/releases/download/v2.15.2/collectives-2.15.2.tar.gz";
@@ -56,6 +56,7 @@ with lib;
};
settings = {
trusted_domains = ["10.0.0.13"];
trusted_proxies = [ "10.0.0.1" ];
"maintenance_window_start" = "1";
"default_phone_region" = "DE";
};

View File

@@ -116,6 +116,7 @@ in (utils.lib.eachSystem (builtins.filter filter_system utils.lib.defaultSystems
metrics.imports = [ ./machines/modules/malobeo/metrics.nix ];
disko.imports = [ ./machines/modules/disko ];
users.imports = [ ./machines/modules/malobeo/users.nix ];
backup.imports = [ ./machines/modules/malobeo/backup.nix ];
};
hydraJobs = nixpkgs.lib.mapAttrs (_: nixpkgs.lib.hydraJob) (