17 Commits

Author SHA1 Message Date
7bfffb32f4 [fanny] set old ssh keys
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m10s
2025-02-25 17:49:58 +01:00
b792b738a9 [fanny] set old age key 2025-02-25 17:49:58 +01:00
8a2b948d11 [deployment] set hostname in pubkey 2025-02-25 17:49:58 +01:00
afd6444635 fix host_builder.nix tabs 2025-02-25 17:49:58 +01:00
b423efeaef Merge pull request 'Reproducible deployments new filestructure' (#85) from reproducible-deployments-filestructure into reproducible-deployments
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m11s
Reviewed-on: #85
Reviewed-by: Ahtlon <ahtlon@noreply.git.dynamicdiscord.de>
2025-02-23 13:34:38 +01:00
ahtlon
3bc74a3e80 [scripts] make pwpath consistant
All checks were successful
Check flake syntax / flake-check (push) Successful in 7m11s
2025-02-23 13:16:17 +01:00
251b0f0850 [fanny] generate deployment secrets on new location
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m11s
2025-02-22 19:15:36 +01:00
70fe179b5b [sops] rm deprecated host secrets 2025-02-22 19:15:10 +01:00
2eec2ed980 [sops] change reproducible secrets file structure
Some checks failed
Check flake syntax / flake-check (push) Has been cancelled
2025-02-22 19:10:44 +01:00
ahtlon
d00188f770 Add fanny keys and remove keepass
All checks were successful
Check flake syntax / flake-check (push) Successful in 4m14s
2025-02-22 12:51:22 +01:00
ahtlon
556cc3d423 Changed the rest of the scripts to sops encryption 2025-02-22 12:48:32 +01:00
ahtlon
edc754ee7f Changed the keepass db to sops in add_new_key script 2025-02-22 12:36:01 +01:00
ahtlon
ff673f0070 Change install script to use db
All checks were successful
Check flake syntax / flake-check (push) Successful in 8m2s
2025-02-14 07:10:09 +01:00
ahtlon
57c8e65917 move fanny to db 2025-02-12 20:08:57 +01:00
ahtlon
e4be136b64 Add age info after creation 2025-02-12 20:07:27 +01:00
ahtlon
aedf5ca0bf Add script for creating new hosts 2025-02-12 19:46:46 +01:00
ahtlon
923cbf4621 Add keepass db for hostkeys etc 2025-02-12 19:21:44 +01:00
32 changed files with 260 additions and 5349 deletions

View File

@@ -1,20 +1,44 @@
# malobeo infrastructure
this repository contains nixos configurations of the digital malobeo infrastructure. it should be used to setup, test, build and deploy different hosts in a reproducible manner.
the file structure is based on this [blog post](https://samleathers.com/posts/2022-02-03-my-new-network-and-deploy-rs.html)
## hosts
#### durruti
- nixos-container running on dedicated hetzner server
- login via ```ssh -p 222 malobeo@dynamicdiscord.de```
- if rebuild switch fails due to biglock do ```mount -o remount,rw /nix/var/nix/db```
- currently running tasklist in a detached tmux session
- [x] make a module with a systemd service out of that
## creating a new host
### setting up filesystem
currently nixos offers no declarative way of setting up filesystems and partitions. that means this has to be done manually for every new host. [to make it as easy as possible we can use this guide to set up an encrypted zfs filesystem](https://openzfs.github.io/openzfs-docs/Getting%20Started/NixOS/Root%20on%20ZFS.html)
*we could create a shell script out of that*
### deploying configuration
-hosts are deployed automatically from master. The [hydra build server](https://hydra.dynamicdiscord.de/jobset/malobeo/infrastructure) will build new commits and on success, hosts will periodically pull those changes.
+#### local deployment
-Big changes (like updating the flake lock) can be committed to the staging branch first. [Hydra builds staging separately](https://hydra.dynamicdiscord.de/jobset/malobeo/staging), and on success you can merge into master.
+``` shell
nixos-rebuild switch --use-remote-sudo
```
-### deploy fresh host
+#### remote deployment
if you want to deploy a completely new host refer to the [docs](https://docs.malobeo.org/anleitung/create.html)
-### testing configuration
+you need the hostname and ip address of the host:
``` shell
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip_address> --build-host localhost
```
in this case 'localhost' is used as the build host, which can be useful if the target host has low system resources
refer to https://docs.malobeo.org/anleitung/microvm.html#testing-microvms-locally
## development
### requirements
we use flake based configurations for our hosts. if you want to build configurations on your own machine you have to enable flakes first by adding the following to your *configuration.nix* or *nix.conf*
``` nix
@@ -31,13 +55,46 @@ a development shell with the correct environment can be created by running ```ni
If you're using direnv you can add flake support by following those steps: [link](https://nixos.wiki/wiki/Flakes#Direnv_integration)
### build a configuration
to build a configuration run the following command (replace ```<hostname>``` with the actual hostname):
``` shell
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```
### building raspberry image
for the raspberry it is possible to build the whole configuration as an sd-card image which can then be flashed directly. more information about building arm on nixos can be found [here](https://nixos.wiki/wiki/NixOS_on_ARM).
to be able to build the image you need to enable qemu emulation on the machine you are building with. therefore it is necessary to add the following to your configuration.nix:
``` nix
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
```
then you can build the image with:
``` shell
nix build .#nixosConfigurations.rpi1_base_image.config.system.build.sdImage
```
### run a configuration as vm
to run a vm we have to build it first using the following command (replace ```<hostname>``` with the actual hostname):
``` shell
nix build .#nixosConfigurations.<hostname>.config.system.build.vm
```
afterwards run the following command to start the vm:
``` shell
./result/bin/run-<hostname>-vm
```
### documentation
-documentation is automatically built from master and can be found here: docs.malobeo.org
+for documentation we currently just use README.md files.
locally you can run documentation using ```nix run .#docs``` or ```nix run .#docsDev```
the devshell provides the python package ['grip'](https://github.com/joeyespo/grip) which can be used to preview different README.md files in the browser.
the usage is simple, just run ```grip``` in the same folder as the README.md you want to preview. then open your browser at ```http://localhost:6419```.
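The flake-enablement snippet the requirements section refers to falls outside the captured hunk. As a reference sketch, consistent with the `nix.settings.experimental-features` line used by the hosts elsewhere in this diff (the *nix.conf* form is an equivalent assumption, not taken from the hunk):
``` nix
{
  # configuration.nix variant, same setting the hosts in this repository use
  nix.settings.experimental-features = [ "nix-command" "flakes" ];
  # nix.conf variant would be the single line:
  #   experimental-features = nix-command flakes
}
```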

View File

@@ -1,20 +1,26 @@
# malobeo infrastructure
this repository contains nixos configurations of the digital malobeo infrastructure. it should be used to setup, test, build and deploy different hosts in a reproducible manner.
the file structure is based on this [blog post](https://samleathers.com/posts/2022-02-03-my-new-network-and-deploy-rs.html)
### deploying configuration
#### local deployment
``` shell
nixos-rebuild switch --use-remote-sudo
```
-hosts are deployed automatically from master. The [hydra build server](https://hydra.dynamicdiscord.de/jobset/malobeo/infrastructure) will build new commits and on success, hosts will periodically pull those changes.
+#### remote deployment
-Big changes (like updating the flake lock) can be committed to the staging branch first. [Hydra builds staging separately](https://hydra.dynamicdiscord.de/jobset/malobeo/staging), and on success you can merge into master.
+you need the hostname and ip address of the host:
``` shell
nixos-rebuild switch --flake .#<hostname> --target-host root@<ip_address> --build-host localhost
```
-### deploy fresh host
+in this case 'localhost' is used as the build host, which can be useful if the target host has low system resources
if you want to deploy a completely new host refer to the [docs](https://docs.malobeo.org/anleitung/create.html)
### testing configuration
refer to https://docs.malobeo.org/anleitung/microvm.html#testing-microvms-locally
## development
### requirements
we use flake based configurations for our hosts. if you want to build configurations on your own machine you have to enable flakes first by adding the following to your *configuration.nix* or *nix.conf*
``` nix
@@ -31,13 +37,46 @@ a development shell with the correct environment can be created by running ```ni
If you're using direnv you can add flake support by following those steps: [link](https://nixos.wiki/wiki/Flakes#Direnv_integration)
### build a configuration
to build a configuration run the following command (replace ```<hostname>``` with the actual hostname):
``` shell
nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
```
### building raspberry image
for the raspberry it is possible to build the whole configuration as an sd-card image which can then be flashed directly. more information about building arm on nixos can be found [here](https://nixos.wiki/wiki/NixOS_on_ARM).
to be able to build the image you need to enable qemu emulation on the machine you are building with. therefore it is necessary to add the following to your configuration.nix:
``` nix
boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
```
then you can build the image with:
``` shell
nix build .#nixosConfigurations.rpi1_base_image.config.system.build.sdImage
```
### run a configuration as vm
to run a vm we have to build it first using the following command (replace ```<hostname>``` with the actual hostname):
``` shell
nix build .#nixosConfigurations.<hostname>.config.system.build.vm
```
afterwards run the following command to start the vm:
``` shell
./result/bin/run-<hostname>-vm
```
### documentation
-documentation is automatically built from master and can be found here: docs.malobeo.org
+for documentation we currently just use README.md files.
locally you can run documentation using ```nix run .#docs``` or ```nix run .#docsDev```
the devshell provides the python package ['grip'](https://github.com/joeyespo/grip) which can be used to preview different README.md files in the browser.
the usage is simple, just run ```grip``` in the same folder as the README.md you want to preview. then open your browser at ```http://localhost:6419```.
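As a concrete usage example of the remote deployment command above, deploying the host fanny over the wireguard address listed for it in peers.nix might look like this (a sketch; it assumes fanny is reachable at that address and that root ssh access is set up):
``` shell
# build locally, switch the remote host (hostname and address taken from this repository; adjust as needed)
nixos-rebuild switch --flake .#fanny --target-host root@10.100.0.101 --build-host localhost
```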

View File

@@ -1,19 +1,47 @@
-# Create host with nixos-anywhere
+# Create host with disko-install
-We use a nixos-anywhere wrapper script to deploy new hosts.
+How to use disko-install is described here: https://github.com/nix-community/disko/blob/master/docs/disko-install.md
-The wrapper script takes care of copying persistent host keys before calling nixos-anywhere.
+---
Here are the exact steps to get bakunin running:
-To accomplish that boot the host from a nixos image and set up a root password.
+First create machines/hostname/configuration.nix
Add the host's nixosConfiguration in machines/configurations.nix
Boot the nixos installer on the machine.
``` bash
-sudo su
+# establish network connection
-passwd
+wpa_passphrase "network" "password" > wpa.conf
-```
+wpa_supplicant -B -i wlp3s0 -c wpa.conf
ping 8.8.8.8
# if that works continue
-After that get the hosts ip using `ip a` and start deployment from your own machine:
+# generate a base hardware config
nixos-generate-config --root /tmp/config --no-filesystems
-``` bash
+# get the infra repo
-# from infrastructure repository root dir:
+nix-shell -p git
-nix develop .#
+git clone https://git.dynamicdiscord.de/kalipso/infrastructure
-remote-install hostname 10.0.42.23
+cd infrastructure
# add the new generated hardware config (and import it in the host's configuration.nix)
cp /tmp/config/etc/nixos/hardware-configuration.nix machines/bakunin/
# check which harddrive we want to install the system on
lsblk #choose harddrive, in this case /dev/sda
# run nixos-install on that harddrive
sudo nix --extra-experimental-features flakes --extra-experimental-features nix-command run 'github:nix-community/disko/latest#disko-install' -- --flake .#bakunin --disk main /dev/sda
# this failed with out of memory
# running again showed: no disk left on device
# it seems the usb stick i used for flashing is way too small
# it is only
# with a bigger one (more than 8 gig i guess) it should work
# instead of the disko-install tool i try the old method - first partitioning using disko and then installing the system
# for that i needed to adjust ./machines/modules/disko/btrfs-laptop.nix and set the disk to "/dev/sda"
sudo nix --extra-experimental-features "flakes nix-command" run 'github:nix-community/disko/latest' -- --mode format --flake .#bakunin
# failed with no space left on device.
# problem is lots of data is written to the local /nix/store which is mounted on tmpfs in ram
# it seems that a workaround could be modifying the bootable stick to contain a swap partition to extend tmpfs storage
```
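The last comment suggests extending the tmpfs-backed installer storage with swap. A minimal sketch of that workaround, assuming the installer stick is /dev/sdb and a spare partition /dev/sdb2 was created for swap (device names are hypothetical):
``` bash
# enable swap on the spare partition so the tmpfs-backed /nix/store can spill over to it
mkswap /dev/sdb2
swapon /dev/sdb2
# verify the extra space is visible before re-running disko / disko-install
free -h
```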
# Testing Disko
@@ -21,3 +49,18 @@ Testing disko partitioning is working quite well. Just run the following and che
```bash
nix run -L .\#nixosConfigurations.fanny.config.system.build.vmWithDisko
```
Only problem is that encryption is not working, so it needs to be commented out. For testing host fanny the following parts in ```./machines/modules/disko/fanny.nix``` need to be commented out (for both pools!):
```nix
datasets = {
encrypted = {
options = {
encryption = "aes-256-gcm"; #THIS ONE
keyformat = "passphrase"; #THIS ONE
keylocation = "file:///tmp/root.key"; #THIS ONE
};
# use this to read the key during boot
postCreateHook = '' #THIS ONE
zfs set keylocation="prompt" "zroot/$name"; #THIS ONE
''; #THIS ONE
```

95
flake.lock generated
View File

@@ -109,11 +109,11 @@
"spectrum": "spectrum" "spectrum": "spectrum"
}, },
"locked": { "locked": {
"lastModified": 1739104176, "lastModified": 1736905611,
"narHash": "sha256-bNvtud2PUcbYM0i5Uq1v01Dcgq7RuhVKfjaSKkW2KRI=", "narHash": "sha256-eW6SfZRaOnOybBzhvEzu3iRL8IhwE0ETxUpnkErlqkE=",
"owner": "astro", "owner": "astro",
"repo": "microvm.nix", "repo": "microvm.nix",
"rev": "d3a9b7504d420a1ffd7c83c1bb8fe57deaf939d2", "rev": "a18d7ba1bb7fd4841191044ca7a7f895ef2adf3b",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -160,11 +160,11 @@
}, },
"nixos-hardware": { "nixos-hardware": {
"locked": { "locked": {
"lastModified": 1738816619, "lastModified": 1736978406,
"narHash": "sha256-5yRlg48XmpcX5b5HesdGMOte+YuCy9rzQkJz+imcu6I=", "narHash": "sha256-oMr3PVIQ8XPDI8/x6BHxsWEPBRU98Pam6KGVwUh8MPk=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixos-hardware", "repo": "nixos-hardware",
"rev": "2eccff41bab80839b1d25b303b53d339fbb07087", "rev": "b678606690027913f3434dea3864e712b862dde5",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -192,11 +192,11 @@
}, },
"nixpkgs-unstable": { "nixpkgs-unstable": {
"locked": { "locked": {
"lastModified": 1739020877, "lastModified": 1737062831,
"narHash": "sha256-mIvECo/NNdJJ/bXjNqIh8yeoSjVLAuDuTUzAo7dzs8Y=", "narHash": "sha256-Tbk1MZbtV2s5aG+iM99U8FqwxU/YNArMcWAv6clcsBc=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "a79cfe0ebd24952b580b1cf08cd906354996d547", "rev": "5df43628fdf08d642be8ba5b3625a6c70731c19c",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -208,11 +208,11 @@
}, },
"nixpkgs_2": { "nixpkgs_2": {
"locked": { "locked": {
"lastModified": 1739206421, "lastModified": 1736916166,
"narHash": "sha256-PwQASeL2cGVmrtQYlrBur0U20Xy07uSWVnFup2PHnDs=", "narHash": "sha256-puPDoVKxkuNmYIGMpMQiK8bEjaACcCksolsG36gdaNQ=",
"owner": "NixOS", "owner": "NixOS",
"repo": "nixpkgs", "repo": "nixpkgs",
"rev": "44534bc021b85c8d78e465021e21f33b856e2540", "rev": "e24b4c09e963677b1beea49d411cd315a024ad3a",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -235,8 +235,7 @@
"nixpkgs-unstable": "nixpkgs-unstable", "nixpkgs-unstable": "nixpkgs-unstable",
"sops-nix": "sops-nix", "sops-nix": "sops-nix",
"tasklist": "tasklist", "tasklist": "tasklist",
"utils": "utils_3", "utils": "utils_3"
"zineshop": "zineshop"
} }
}, },
"sops-nix": { "sops-nix": {
@@ -246,11 +245,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1739262228, "lastModified": 1737107480,
"narHash": "sha256-7JAGezJ0Dn5qIyA2+T4Dt/xQgAbhCglh6lzCekTVMeU=", "narHash": "sha256-GXUE9+FgxoZU8v0p6ilBJ8NH7k8nKmZjp/7dmMrCv3o=",
"owner": "Mic92", "owner": "Mic92",
"repo": "sops-nix", "repo": "sops-nix",
"rev": "07af005bb7d60c7f118d9d9f5530485da5d1e975", "rev": "4c4fb93f18b9072c6fa1986221f9a3d7bf1fe4b6",
"type": "github" "type": "github"
}, },
"original": { "original": {
@@ -335,21 +334,6 @@
"type": "github" "type": "github"
} }
}, },
"systems_5": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"tasklist": { "tasklist": {
"inputs": { "inputs": {
"nixpkgs": [ "nixpkgs": [
@@ -357,11 +341,11 @@
] ]
}, },
"locked": { "locked": {
"lastModified": 1743458889, "lastModified": 1737548421,
"narHash": "sha256-eVTtsCPio3Wj/g/gvKTsyjh90vrNsmgjzXK9jMfcboM=", "narHash": "sha256-gmlqJdC+v86vXc2yMhiza1mvsqh3vMfrEsiw+tV5MXg=",
"ref": "refs/heads/master", "ref": "refs/heads/master",
"rev": "b61466549e2687628516aa1f9ba73f251935773a", "rev": "c5fff78c83959841ac724980a13597dcfa6dc26d",
"revCount": 30, "revCount": 29,
"type": "git", "type": "git",
"url": "https://git.dynamicdiscord.de/kalipso/tasklist" "url": "https://git.dynamicdiscord.de/kalipso/tasklist"
}, },
@@ -423,45 +407,6 @@
"repo": "flake-utils", "repo": "flake-utils",
"type": "github" "type": "github"
} }
},
"utils_4": {
"inputs": {
"systems": "systems_5"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"zineshop": {
"inputs": {
"nixpkgs": [
"nixpkgs"
],
"utils": "utils_4"
},
"locked": {
"lastModified": 1744626173,
"narHash": "sha256-DSuLVFGvmMUoStIs5ar4CLE8eD2dlFPUmPC7CODauts=",
"ref": "refs/heads/master",
"rev": "19ce41aca7d92bc8e02f97e7bdbca7ac7ba64090",
"revCount": 103,
"type": "git",
"url": "https://git.dynamicdiscord.de/kalipso/zineshop"
},
"original": {
"type": "git",
"url": "https://git.dynamicdiscord.de/kalipso/zineshop"
}
} }
}, },
"root": "root", "root": "root",

View File

@@ -22,11 +22,6 @@
    inputs.nixpkgs.follows = "nixpkgs";
  };
  zineshop = {
    url = "git+https://git.dynamicdiscord.de/kalipso/zineshop";
    inputs.nixpkgs.follows = "nixpkgs";
  };
  ep3-bs = {
    url = "git+https://git.dynamicdiscord.de/kalipso/ep3-bs.nix";
    inputs.nixpkgs.follows = "nixpkgs";

View File

@@ -8,12 +8,12 @@ keys:
  - &admin_atlan age1ljpdczmg5ctqyeezn739hv589fwhssjjnuqf7276fqun6kc62v3qmhkd0c
  - &machine_moderatio 3b7027ab1933c4c5e0eb935f8f9b3c058aa6d4c2
  - &machine_lucia 3474196f3adf27cfb70f8f56bcd52d1ed55033db
-  - &machine_durruti age1tc6aqmcl74du56d04wsz6mzp83n9990krzu4kuam2jqu8fx6kqpq038xuz
+  - &machine_durruti age1pd2kkscyh7fuvm49umz8lfhse4fpkmp5pa3gvnh4ranwxs4mz9nqdy7sda
-  - &machine_infradocs age1tesz7xnnq9e58n5qwjctty0lw86gzdzd5ke65mxl8znyasx3nalqe4x6yy
+  - &machine_infradocs age1decc74l6tm5sjtnjyj8rkxysr9j49fxsc92r2dcfpmzdcjv5dews8f03se
-  - &machine_overwatch age1hq75x3dpnfqat9sgtfjf8lep49qvkdgza3xwp7ugft3kd74pdfnqfsmmdn
+  - &machine_overwatch age1psj6aeu03s2k4zdfcte89nj4fw95xgk4e7yr3e6k6u2evq84ng3s57p6f0
  - &machine_vpn age1v6uxwej4nlrpfanr9js7x6059mtvyg4fw50pzt0a2kt3ahk7edlslafeuh
  - &machine_fanny age136sz3lzhxf74ryruvq34d4tmmxnezkqkgu6zqa3dm582c22fgejqagrqxk
-  - &machine_nextcloud age1g084sl230x94mkd2wq92s03mw0e8mnpjdjfx9uzaxw6psm8neyzqqwpnqe
+  - &machine_nextcloud age1z0cfz7l4vakjrte220h46fc05503506fjcz440na92pzgztlspmqc8vt6k
  #this dummy key is used for testing.
  - &machine_dummy age18jn5mrfs4gqrnv0e2sxsgh3kq4sgxx39hwr8z7mz9kt7wlgaasjqlr88ng
creation_rules:

View File

@@ -8,11 +8,12 @@ in
  [ # Include the results of the hardware scan.
    #./hardware-configuration.nix
    ../modules/xserver.nix
    ../modules/malobeo_user.nix
    ../modules/sshd.nix
    ../modules/minimal_tools.nix
    ../modules/autoupdate.nix
    inputs.self.nixosModules.malobeo.disko
    inputs.self.nixosModules.malobeo.initssh
    inputs.self.nixosModules.malobeo.users
  ];
  malobeo.autoUpdate = {
@@ -37,8 +38,6 @@ in
    ethernetDrivers = ["r8169"];
  };
  malobeo.users.malobeo = true;
  hardware.sane.enable = true; #scanner support
  nix.settings.experimental-features = [ "nix-command" "flakes" ];

View File

@@ -73,17 +73,6 @@ in
    };
  };
  services.nginx.virtualHosts."shop.malobeo.org" = {
    forceSSL = true;
    enableACME= true;
    locations."/" = {
      proxyPass = "http://10.0.0.10";
      extraConfig = ''
      '';
    };
  };
  services.nginx.virtualHosts."status.malobeo.org" = {
    forceSSL = true;
    enableACME= true;

View File

@@ -5,11 +5,11 @@ in
{
  sops.defaultSopsFile = ./secrets.yaml;
  sops.secrets.wg_private = {};
  sops.secrets.shop_auth = {};
  imports =
    [ # Include the results of the hardware scan.
      #./hardware-configuration.nix
      ../modules/malobeo_user.nix
      ../modules/sshd.nix
      ../modules/minimal_tools.nix
      ../modules/autoupdate.nix
@@ -18,8 +18,6 @@ in
      inputs.self.nixosModules.malobeo.disko
      inputs.self.nixosModules.malobeo.microvm
      inputs.self.nixosModules.malobeo.metrics
      inputs.self.nixosModules.malobeo.users
      inputs.self.nixosModules.malobeo.backup
    ];
  virtualisation.vmVariantWithDisko = {
@@ -44,11 +42,6 @@ in
    cacheurl = "https://cache.dynamicdiscord.de";
  };
  malobeo.backup = {
    enable = true;
    snapshots = [ "storage/encrypted" "zroot/encrypted/var" ];
  };
  nix = {
    settings.experimental-features = [ "nix-command" "flakes" ];
    #always update microvms
@@ -57,11 +50,6 @@ in
    '';
  };
  malobeo.users = {
    malobeo = true;
    admin = true;
    backup = true;
  };
  malobeo.disks = {
    enable = true;
@@ -94,13 +82,7 @@ in
  };
  services.malobeo.microvm.enableHostBridge = true;
-  services.malobeo.microvm.deployHosts = [
+  services.malobeo.microvm.deployHosts = [ "overwatch" "infradocs" "nextcloud" "durruti" ];
    "overwatch"
    "infradocs"
    "nextcloud"
    "durruti"
    "zineshop"
  ];
  networking = {
    nat = {
@@ -151,18 +133,6 @@ in
      '';
    };
  };
  virtualHosts."shop.malobeo.org" = {
    # created with: nix-shell --packages apacheHttpd --run 'htpasswd -B -c foo.txt malobeo'
    # then content of foo.txt put into sops
    basicAuthFile = config.sops.secrets.shop_auth.path;
    locations."/" = {
      proxyPass = "http://10.0.0.15:8080";
      extraConfig = ''
        proxy_set_header Host $host;
      '';
    };
  };
  services.tor = {
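The shop.malobeo.org vhost above documents in a comment how its basic-auth secret was produced. A sketch of regenerating it (the htpasswd command is taken verbatim from that comment; the sops file path is an assumption):
``` shell
# generate a bcrypt htpasswd entry for the malobeo user
nix-shell --packages apacheHttpd --run 'htpasswd -B -c foo.txt malobeo'
# then paste the contents of foo.txt into the shop_auth entry of the host's sops-encrypted secrets, e.g.
sops machines/fanny/secrets.yaml
```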

View File

@@ -1,6 +1,4 @@
wg_private: ENC[AES256_GCM,data:kFuLzZz9lmtUccQUIYiXvJRf7WBg5iCq1xxCiI76J3TaIBELqgbEmUtPR4g=,iv:0S0uzX4OVxQCKDOl1zB6nDo8152oE7ymBWdVkPkKlro=,tag:gg1n1BsnjNPikMBNB60F5Q==,type:str] wg_private: ENC[AES256_GCM,data:kFuLzZz9lmtUccQUIYiXvJRf7WBg5iCq1xxCiI76J3TaIBELqgbEmUtPR4g=,iv:0S0uzX4OVxQCKDOl1zB6nDo8152oE7ymBWdVkPkKlro=,tag:gg1n1BsnjNPikMBNB60F5Q==,type:str]
shop_cleartext: ENC[AES256_GCM,data:sifpX/R6JCcNKgwN2M4Dbflgnfs5CqB8ez5fULPohuFS6k36BLemWzEk,iv:1lRYausj7V/53sfSO9UnJ2OC/Si94JXgIo81Ld74BE8=,tag:5osQU/67bvFeUGA90BSiIA==,type:str]
shop_auth: ENC[AES256_GCM,data:0NDIRjmGwlSFls12sCb5OlgyGTCHpPQIjycEJGhYlZsWKhEYXV2u3g1RHMkF8Ny913jarjf0BgwSq5pBD9rgPL9t8X8=,iv:3jgCv/Gg93Mhdm4eYzwF9QrK14QL2bcC4wwSajCA88o=,tag:h8dhMK46hABv9gYW4johkA==,type:str]
sops: sops:
kms: [] kms: []
gcp_kms: [] gcp_kms: []
@@ -25,8 +23,8 @@ sops:
QVZyNWVOMTh3ejBha21Qb2xCRkFERGMKH9nMQUoS5bGcLUx2T1dOmKd9jshttTrP QVZyNWVOMTh3ejBha21Qb2xCRkFERGMKH9nMQUoS5bGcLUx2T1dOmKd9jshttTrP
SKFx7MXcjFRLKS2Ij12V8ftjL3Uod6be5zoMibkxK19KmXY/514Jww== SKFx7MXcjFRLKS2Ij12V8ftjL3Uod6be5zoMibkxK19KmXY/514Jww==
-----END AGE ENCRYPTED FILE----- -----END AGE ENCRYPTED FILE-----
lastmodified: "2025-04-14T10:34:55Z" lastmodified: "2025-01-14T12:41:07Z"
mac: ENC[AES256_GCM,data:vcDXtTi0bpqhHnL6XanJo+6a8f5LAE628HazDVaNO34Ll3eRyhi95eYGXQDDkVk2WUn9NJ5oCMPltnU82bpLtskzTfQDuXHaPZJq5gtOuMH/bAKrY0dfShrdyx71LkA4AFlcI1P5hchpbyY1FK3iqe4D0miBv+Q8lCMgQMVrfxI=,iv:1lMzH899K0CnEtm16nyq8FL/aCkSYJVoj7HSKCyUnPg=,tag:mEbkmFNg5VZtSKqq80NrCw==,type:str] mac: ENC[AES256_GCM,data:RJ4Fa8MmX8u8S3zrD/SaywTC3d2IfHQPBDy3C9u4GuXJ/ruEChAB1kN8rqMPvkmET8UUgHIEp7RpbzMtg/FOmKYKYTTx5t//3/VozvAEZurhG/4mnN3r6uaZ0R9+wSjym8IyOKsJ7p4XrfE5tRdzNyU4EqfkEiyf+jO751uSnYI=,iv:eiTdmbcrpUvyDPFmGawxJs/ehmD7KqulaoB+nfpC6ko=,tag:+TKr53cFS3wbLXNgcbZfJQ==,type:str]
pgp: pgp:
- created_at: "2025-02-11T18:32:49Z" - created_at: "2025-02-11T18:32:49Z"
enc: |- enc: |-
@@ -67,4 +65,4 @@ sops:
-----END PGP MESSAGE----- -----END PGP MESSAGE-----
fp: aef8d6c7e4761fc297cda833df13aebb1011b5d4 fp: aef8d6c7e4761fc297cda833df13aebb1011b5d4
unencrypted_suffix: _unencrypted unencrypted_suffix: _unencrypted
version: 3.9.4 version: 3.9.2

View File

@@ -67,14 +67,6 @@
    };
  };
  zineshop = {
    type = "microvm";
    network = {
      address = "10.0.0.15";
      mac = "D0:E5:CA:F0:D7:F1";
    };
  };
  testvm = {
    type = "host";
  };

View File

@@ -1,4 +1,4 @@
-{ config, pkgs, inputs, ... }:
+{ config, pkgs, ... }:
{
  imports =
@@ -9,7 +9,6 @@
    ../modules/sshd.nix
    ../modules/minimal_tools.nix
    ../modules/autoupdate.nix
    inputs.self.nixosModules.malobeo.printing
  ];
  malobeo.autoUpdate = {
@@ -51,8 +50,6 @@
  };
  services.printing.enable = true;
  services.malobeo.printing.enable = true;
  services.printing.drivers = [
    (pkgs.writeTextDir "share/cups/model/brother5350.ppd" (builtins.readFile ../modules/BR5350_2_GPL.ppd))
    pkgs.gutenprint

File diff suppressed because it is too large

View File

@@ -187,6 +187,7 @@ in
        postCreateHook = lib.mkIf cfg.encryption ''
          zfs set keylocation="prompt" zroot/encrypted;
        '';
      };
      "encrypted/root" = {
        type = "zfs_fs";
@@ -244,18 +245,16 @@ in
      };
      # use this to read the key during boot
      postCreateHook = lib.mkIf cfg.encryption ''
-        zfs set keylocation="prompt" storage/encrypted;
+        zfs set keylocation="file:///root/secret.key" storage/encrypted;
      '';
      };
      "encrypted/data" = {
        type = "zfs_fs";
        mountpoint = "/data";
        options.mountpoint = "legacy";
      };
      "encrypted/data/microvms" = {
        type = "zfs_fs";
        mountpoint = "/data/microvms";
        options.mountpoint = "legacy";
      };
      reserved = {
        # for cow delete if pool is full
@@ -272,7 +271,7 @@ in
  };
  boot.zfs.devNodes = lib.mkDefault cfg.devNodes;
  boot.zfs.extraPools = lib.mkIf cfg.storage.enable [ "storage" ];
  fileSystems."/".neededForBoot = true;
  fileSystems."/etc".neededForBoot = true;
  fileSystems."/boot".neededForBoot = true;

View File

@@ -195,7 +195,8 @@ rec {
  vmNestedMicroVMOverwrites = host: sopsDummy: {
-    microvm.vms = pkgs.lib.mkForce (
+    services.malobeo.microvm.deployHosts = pkgs.lib.mkForce [];
    microvm.vms =
      let
        # Map the values to each hostname to then generate an Attrset using listToAttrs
        mapperFunc = name: { inherit name; value = {
@@ -215,7 +216,7 @@ rec {
        };
      }; };
      in
-        builtins.listToAttrs (map mapperFunc self.nixosConfigurations.${host}.config.services.malobeo.microvm.deployHosts));
+        builtins.listToAttrs (map mapperFunc self.nixosConfigurations.${host}.config.services.malobeo.microvm.deployHosts);
  };
buildVM = host: networking: sopsDummy: disableDisko: varPath: writableStore: fwdPort: (self.nixosConfigurations.${host}.extendModules { buildVM = host: networking: sopsDummy: disableDisko: varPath: writableStore: fwdPort: (self.nixosConfigurations.${host}.extendModules {

View File

@@ -1,102 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.malobeo.backup;
hostToCommand = (hostname: datasetNames:
(map (dataset: {
name = "${hostname}_${dataset.sourceDataset}";
value = {
inherit hostname;
inherit (dataset) sourceDataset targetDataset;
};
} ) datasetNames));
peers = import ./peers.nix;
enableSnapshots = cfg.snapshots != null;
enableBackups = cfg.hosts != null;
in
{
options.malobeo.backup = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable sanoid/syncoid based backup functionality";
};
snapshots = mkOption {
type = types.nullOr (types.listOf types.str);
default = null;
description = "Automatic snapshots will be created for the given datasets";
};
hosts = mkOption {
default = null;
type = types.nullOr (types.attrsOf (types.listOf (types.submodule {
options = {
sourceDataset = mkOption {
type = types.str;
description = "The source that needs to be backed up";
};
targetDataset = mkOption {
type = types.str;
description = "The target dataset where the backup should be stored";
};
};
})));
description = ''
Hostname with list of datasets to backup. This option should be defined on hosts that will store backups.
It is necessary to add the machines that get backed up to known hosts.
This can be done for example systemwide using
programs.ssh.knownHosts."10.100.0.101" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHqp2/YiiIhai7wyScGZJ20gtrzY+lp4N/8unyRs4qhc";
Or set it for the syncoid user directly.
'';
};
sshKey = mkOption {
default = null;
type = types.nullOr types.str;
description = "Set path to ssh key used for pull backups. Otherwise default key is used";
};
};
config = mkIf (cfg.enable) {
services.sanoid = mkIf (enableSnapshots) {
enable = true;
templates."default" = {
hourly = 24;
daily = 30; #keep 30 daily snapshots
monthly = 6; #keep 6 monthly backups
yearly = 0;
autosnap = true; #take snapshots automatically
autoprune = true; #delete old snapshots
};
datasets = builtins.listToAttrs (map (name: { inherit name; value = {
useTemplate = [ "default" ];
recursive = true;
}; }) cfg.snapshots);
};
services.syncoid = mkIf (enableBackups) {
enable = true;
sshKey = cfg.sshKey;
commonArgs = [
"--no-sync-snap"
];
interval = "*-*-* 04:15:00";
commands = builtins.mapAttrs (name: value: {
source = "backup@${peers.${value.hostname}.address}:${value.sourceDataset}";
target = "${value.targetDataset}";
sendOptions = "w";
recvOptions = "\"\"";
recursive = true;
})(builtins.listToAttrs (builtins.concatLists (builtins.attrValues (builtins.mapAttrs hostToCommand cfg.hosts))));
};
};
}
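The hosts option of this backup module is only described in prose above; a minimal sketch of how a backup-receiving host might set it (the target dataset and ssh key path are hypothetical, the source dataset is taken from the fanny snapshots list in this diff):
``` nix
{
  malobeo.backup = {
    enable = true;
    # pull the listed dataset from fanny into a local backup pool via syncoid
    hosts.fanny = [
      { sourceDataset = "zroot/encrypted/var"; targetDataset = "backup/fanny/var"; }
    ];
    sshKey = "/etc/ssh/syncoid_ed25519";
  };
}
```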

View File

@@ -30,9 +30,7 @@ in
    loader.efi.canTouchEfiVariables = true;
    supportedFilesystems = [ "vfat" "zfs" ];
    zfs = {
      forceImportAll = true;
      requestEncryptionCredentials = true;
    };
    initrd = {
      availableKernelModules = cfg.ethernetDrivers;

View File

@@ -102,22 +102,6 @@ in
        /run/current-system/sw/bin/microvm -Ru ${name}
      '';
    };
    "microvm-init-dirs@${name}" = {
      description = "Initialize microvm directories";
      after = [ "zfs-mount.service" ];
      wantedBy = [ "microvm@${name}.service" ];
      unitConfig.ConditionPathExists = "!/var/lib/microvms/${name}/.is_initialized";
      serviceConfig = {
        Type = "oneshot";
      };
      script = ''
        mkdir -p /var/lib/microvms/${name}/var
        mkdir -p /var/lib/microvms/${name}/etc
        mkdir -p /var/lib/microvms/data/${name}
        touch /var/lib/microvms/${name}/.is_initialized
      '';
    };
  }) {} (cfg.deployHosts);
systemd.timers = builtins.foldl' (timers: name: timers // { systemd.timers = builtins.foldl' (timers: name: timers // {

View File

@@ -2,7 +2,7 @@
"vpn" = { "vpn" = {
role = "server"; role = "server";
publicIp = "5.9.153.217"; publicIp = "5.9.153.217";
address = "10.100.0.1"; address = [ "10.100.0.1/24" ];
allowedIPs = [ "10.100.0.0/24" ]; allowedIPs = [ "10.100.0.0/24" ];
listenPort = 51821; listenPort = 51821;
publicKey = "hF9H10Y8Ar7zvZXFoNM8LSoaYFgPCXv30c54SSEucX4="; publicKey = "hF9H10Y8Ar7zvZXFoNM8LSoaYFgPCXv30c54SSEucX4=";
@@ -11,43 +11,36 @@
"celine" = { "celine" = {
role = "client"; role = "client";
address = "10.100.0.2"; address = [ "10.100.0.2/24" ];
allowedIPs = [ "10.100.0.2/32" ]; allowedIPs = [ "10.100.0.2/32" ];
publicKey = "Jgx82tSOmZJS4sm1o8Eci9ahaQdQir2PLq9dBqsWZw4="; publicKey = "Jgx82tSOmZJS4sm1o8Eci9ahaQdQir2PLq9dBqsWZw4=";
}; };
"desktop" = { "desktop" = {
role = "client"; role = "client";
address = "10.100.0.3"; address = [ "10.100.0.3/24" ];
allowedIPs = [ "10.100.0.3/32" ]; allowedIPs = [ "10.100.0.3/32" ];
publicKey = "FtY2lcdWcw+nvtydOOUDyaeh/xkaqHA8y9GXzqU0Am0="; publicKey = "FtY2lcdWcw+nvtydOOUDyaeh/xkaqHA8y9GXzqU0Am0=";
}; };
"atlan-pc" = { "atlan-pc" = {
role = "client"; role = "client";
address = "10.100.0.5"; address = [ "10.100.0.5/24" ];
allowedIPs = [ "10.100.0.5/32" ]; allowedIPs = [ "10.100.0.5/32" ];
publicKey = "TrJ4UAF//zXdaLwZudI78L+rTC36zEDodTDOWNS4Y1Y="; publicKey = "TrJ4UAF//zXdaLwZudI78L+rTC36zEDodTDOWNS4Y1Y=";
}; };
"hetzner" = { "hetzner" = {
role = "client"; role = "client";
address = "10.100.0.6"; address = [ "10.100.0.6/24" ];
allowedIPs = [ "10.100.0.6/32" ]; allowedIPs = [ "10.100.0.6/32" ];
publicKey = "csRzgwtnzmSLeLkSwTwEOrdKq55UOxZacR5D3GopCTQ="; publicKey = "csRzgwtnzmSLeLkSwTwEOrdKq55UOxZacR5D3GopCTQ=";
}; };
"fanny" = { "fanny" = {
role = "client"; role = "client";
address = "10.100.0.101"; address = [ "10.100.0.101/24" ];
allowedIPs = [ "10.100.0.101/32" ]; allowedIPs = [ "10.100.0.101/32" ];
publicKey = "3U59F6T1s/1LaZBIa6wB0qsVuO6pRR9jfYZJIH2piAU="; publicKey = "3U59F6T1s/1LaZBIa6wB0qsVuO6pRR9jfYZJIH2piAU=";
}; };
"backup0" = {
role = "client";
address = "10.100.0.20";
allowedIPs = [ "10.100.0.20/32" ];
publicKey = "Pp55Jg//jREzHdbbIqTXc9N7rnLZIFw904qh6NLrACE=";
};
} }
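Following the list form of `address` shown in the `+` lines of this hunk, a new client entry would look roughly like this (peer name, address and public key are placeholders, not real values):
``` nix
  "new-laptop" = {
    role = "client";
    address = [ "10.100.0.42/24" ];
    allowedIPs = [ "10.100.0.42/32" ];
    publicKey = "<wireguard public key>";
  };
```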

View File

@@ -1,51 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.malobeo.printing;
driverFile = pkgs.writeTextDir "share/cups/model/konicaminoltac258.ppd" (builtins.readFile ../KOC658UX.ppd);
defaultPpdOptions = {
PageSize = "A4";
SelectColor = "Grayscale";
Finisher = "FS534";
SaddleUnit = "SD511";
Model = "C258";
InputSlot = "Tray1";
};
in
{
options.services.malobeo.printing = {
enable = mkOption {
type = types.bool;
default = false;
description = "Setup malobeo printers";
};
};
config = mkIf (cfg.enable) {
services.printing.enable = true;
services.printing.drivers = [
driverFile
];
hardware.printers.ensurePrinters = [ {
name = "KonicaDefault";
model = "konicaminoltac258.ppd";
location = "Zine Workshop";
deviceUri = "ipp://192.168.1.42/ipp";
ppdOptions = defaultPpdOptions;
}
{
name = "KonicaBooklet";
model = "konicaminoltac258.ppd";
location = "Zine Workshop";
deviceUri = "ipp://192.168.1.42/ipp";
ppdOptions = defaultPpdOptions // {
Fold = "Stitch";
Staple = "None";
};
}
];
};
}

View File

@@ -1,101 +0,0 @@
{config, lib, pkgs, inputs, ...}:
let
cfg = config.malobeo.users;
sshKeys = import ( inputs.self + /machines/ssh_keys.nix);
inherit (config.networking) hostName;
in
{
options.malobeo.users = {
malobeo = lib.mkOption {
type = lib.types.bool;
default = true;
description = "enable malobeo user, defaults to on, ";
};
admin = lib.mkOption {
type = lib.types.bool;
default = true;
description = "enable admin user, defaults to on to prevent lockouts, passwordless sudo access";
};
backup = lib.mkOption {
type = lib.types.bool;
default = false;
description = "enable backup user, ";
};
};
config = lib.mkMerge [
(lib.mkIf cfg.malobeo {
users.users.malobeo = {
isNormalUser = true;
description = "malobeo user, password and ssh access, no root";
extraGroups = [ "pipewire" "pulse-access" "scanner" "lp" ];
openssh.authorizedKeys.keys = sshKeys.admins;
hashedPassword = "$y$j9T$39oJwpbFDeETiyi9TjZ/2.$olUdnIIABp5TQSOzoysuEsomn2XPyzwVlM91ZsEkIz1";
};
environment.systemPackages = with pkgs; [];
})
(lib.mkIf cfg.admin {
users.users.admin = {
isNormalUser = true;
description = "admin user, passwordless sudo access, only ssh";
hashedPassword = null;
openssh.authorizedKeys.keys = sshKeys.admins;
extraGroups = [ "networkmanager" ];
};
environment.systemPackages = with pkgs; [];
nix.settings.trusted-users = [ "admin" ];
security.sudo.extraRules = [
{
users = [ "admin" ];
commands = [
{
command = "ALL";
options = [ "NOPASSWD" ];
}
];
}
];
})
(lib.mkIf cfg.backup {
users.users.backup = {
isNormalUser = true;
hashedPassword = null;
openssh.authorizedKeys.keys = sshKeys.backup;
description = "backup user for pull style backups, can only use zfs commands";
};
environment.systemPackages = with pkgs; [];
security.sudo.extraRules = [
{
users = [ "backup" ];
commands = [
{
command = "/run/current-system/sw/bin/zfs";
options = [ "NOPASSWD" ];
}
{
command = "/run/current-system/sw/bin/zpool";
options = [ "NOPASSWD" ];
}
];
}
];
})
{
users.mutableUsers = false;
services.openssh.hostKeys = [
{
path = "/etc/ssh/${hostName}";
type = "ssh-ed25519";
}
];
sops.age.sshKeyPaths = [ "/etc/ssh/${hostName}" ];
environment.systemPackages = with pkgs; [
nix-output-monitor
vim
htop
wget
git
pciutils
];
}
];
}

View File

@@ -70,7 +70,7 @@ in
    interfaces = {
      malovpn = {
        mtu = 1340; #seems to be necessary to proxypass nginx traffic through vpn
-        address = [ "${myPeer.address}/24" ];
+        address = myPeer.address;
        autostart = cfg.autostart;
        listenPort = mkIf (myPeer.role == "server") myPeer.listenPort;

View File

@@ -47,7 +47,7 @@ with lib;
      };
      extraAppsEnable = true;
      extraApps = {
-        inherit (config.services.nextcloud.package.packages.apps) contacts calendar deck polls registration;
+        inherit (config.services.nextcloud.package.packages.apps) contacts calendar deck polls;
        collectives = pkgs.fetchNextcloudApp {
          sha256 = "sha256-cj/8FhzxOACJaUEu0eG9r7iAQmnOG62yFHeyUICalFY=";
          url = "https://github.com/nextcloud/collectives/releases/download/v2.15.2/collectives-2.15.2.tar.gz";
@@ -56,7 +56,6 @@ with lib;
      };
      settings = {
        trusted_domains = ["10.0.0.13"];
        trusted_proxies = [ "10.0.0.1" ];
        "maintenance_window_start" = "1";
        "default_phone_region" = "DE";
      };

View File

@@ -8,60 +8,60 @@ sops:
- recipient: age1ljpdczmg5ctqyeezn739hv589fwhssjjnuqf7276fqun6kc62v3qmhkd0c - recipient: age1ljpdczmg5ctqyeezn739hv589fwhssjjnuqf7276fqun6kc62v3qmhkd0c
enc: | enc: |
-----BEGIN AGE ENCRYPTED FILE----- -----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBPT3ZxNEpRVktDWG9BR0Rv YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSB4dCt1ZFR0QnRqVFdiL0Zi
ZUZQTkJwQ0pSblNvTkFOT3BBdjVaSzJhVzBvCnVWc2xRUjBnRFFXSDgxczRMSFMy VTR6Zy9ZTy9YNDBZaDRTZzJnU2ZKcjJ0MG1vCldpRU5tTzc1YU5KbjlDbXlNRjBU
WFdaMGo4eE13b0RkZkphN2MvOUZtRmcKLS0tIDFHZU9tNjBNa0sveUYzN2dmYnM1 Sm8yc0oyNWU1WHJoYTRvK3o4aGtTY2MKLS0tIE9wY0R0V3Vkc3Y1T1YwTkFTY0J5
aDd0UlpMR3RNd3BDMmhqNmxhTFRoUlkK6Pni+cswKIU94WkP/fg5fzSmx/fhXjjl ZCtzbVdtNlh0cXpra2RWbEwzUDM0UjgKY3zZn5PUWuLBQgYxm9BUpLYWw3CdXYA8
mRG2o4ALCqcOxAxHBrKJppUCLjUgKG53wPF/jlIzkvbwHwnqVMfYsQ== 4U6OVdRF6foj4/GrKKyhVf8dMbLbkhPvxqZ5wg40o6bwHEw9QNM+5Q==
-----END AGE ENCRYPTED FILE----- -----END AGE ENCRYPTED FILE-----
- recipient: age1g084sl230x94mkd2wq92s03mw0e8mnpjdjfx9uzaxw6psm8neyzqqwpnqe - recipient: age1z0cfz7l4vakjrte220h46fc05503506fjcz440na92pzgztlspmqc8vt6k
enc: | enc: |
-----BEGIN AGE ENCRYPTED FILE----- -----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBRK2o2K2tPTFcvbXRkZ0lq YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBQbDZaYjRTTDc0SFU2U2xQ
bS9ZOUc3dG1JeERZYVNsc3k3RjcxQ0RsdkRJCkx1VFhBQXRDOElqakJ0eTd3NEJX cUhESStvKzM5Z0QyZlJldURtRUJZTHhvNEFrCmxReGJ6MU9qdkh6UFVPYmRuQThs
b0JxOUtSOGJWeXlqdE5DdC9qNHA2N1UKLS0tIEFiQ3ZQM0NOaXRhUHBjVFhRMFk4 VmVCMTQwc0xkR0gzemlSUVlnN0NCZE0KLS0tIDFtK041ZlF4VFBreHVacitSVEN5
VjBFeldXS1p0Zk1uSk02aHpJd3BPOHcKvCmnK/KttB4RgnID/fj2KOdjvNnV3EWU WXg4UkJtU2dTR3ZjeFYzR3lRODhLYzgKrO+NtT0Q3K8FgDwW0WiZJOUHwkEz+wp8
B9mW4yxbEqhoxtu+GFD3eR/8SvMPEsHl9xorT/ZygMG7hAzedSukWw== lgBkXy2QJuuJ11f2e9ZJ3hx1xgOm6SMBmgl3zQVfVpq88yZE8uDe2Q==
-----END AGE ENCRYPTED FILE----- -----END AGE ENCRYPTED FILE-----
lastmodified: "2024-11-26T20:00:50Z" lastmodified: "2024-11-26T20:00:50Z"
mac: ENC[AES256_GCM,data:qoY9SfpoU+8HfvD5v/1S6BOkbnZUmHIbtwr0tTSuPETjnFNgr1VVw9mnRatJKPYYFb9/rMZQWIqTY+iUIEkcTVyVXhd6ki5CHW+uxCeBIyMzq33rtEa/btkEUoii4iPieamBCIY21W0znE+edxfR04yRJtLxMICEbuW4Hjf6bwk=,iv:nG42fRgjpuIjPMYnn/6egEdzYolcUBsspaZ8zMv4888=,tag:C6apGoAvVLsWdLWSCwrx6w==,type:str] mac: ENC[AES256_GCM,data:qoY9SfpoU+8HfvD5v/1S6BOkbnZUmHIbtwr0tTSuPETjnFNgr1VVw9mnRatJKPYYFb9/rMZQWIqTY+iUIEkcTVyVXhd6ki5CHW+uxCeBIyMzq33rtEa/btkEUoii4iPieamBCIY21W0znE+edxfR04yRJtLxMICEbuW4Hjf6bwk=,iv:nG42fRgjpuIjPMYnn/6egEdzYolcUBsspaZ8zMv4888=,tag:C6apGoAvVLsWdLWSCwrx6w==,type:str]
pgp: pgp:
- created_at: "2025-03-05T08:24:30Z" - created_at: "2025-02-06T12:36:59Z"
enc: |- enc: |-
-----BEGIN PGP MESSAGE----- -----BEGIN PGP MESSAGE-----
hQGMA5HdvEwzh/H7AQv+Lr4ISzvM/IEkckNhOOYAeZ0XCJ3JviSwT5wh3nd5u7ZJ hQGMA5HdvEwzh/H7AQv8DLbU8OaQmYtAjTPlqeg1nv+/z3gA16MTZjz8rRBqK695
tXdLgLwGvFs0gXBf/R2kAoyEMyFziP3dqehvrjwTipuj/5lLdw73X9kddGkGOeQK JaEbWoCJ2Nv5Mnzj7owQSk/+f+Q/d00osr4KOhQWTNoq1442MyWgIXKGPDmHgXv8
EDq2+cW3ufuukpyRq+o4lJjMmbwQuqvhqeVOxohQ677e1Xy8q6DorfwOgEgHegK1 CxFT3hIKMEFFvFtkSdo+HlBSTQJZtHgDSGabd2xd4e45tLnHsPvWQ4ngGn+piUaw
t1H/DlVHritv54mPjr9hx0fZ5Auow17wteKD71KD/Y4s9JNB0DghcHAGiwYZzL7U qz5+YIpmFNlnL9ubsB8NivryXlIL6wBXL83FyfAPnY+qG0/7frVWwP1Cejg1CGYl
aFBY0itZyeJwH7rmFJDTQ+N+595t8dguTS/V1J0SKrIynMXVgTo6dpX10lSRubYe bOYxgb1uPYIIqvvU9bZ4r46DfojFFGur9pwG/wKGOgIQ867vsXtRnNm6+SJIHeyt
9TQRMO3bmnsdPlmVJ7SlCkcc8blpHSgeQYdaZgPjDmbs+cuAkUMBnR/aEFXIX5IC eNqil3tee++V4VVUrDTf+gWufx9YFS/afRgMKuf1pUvQGTBMbUJNhIp+PjpOSBCk
lohDRX5Vd7dUfVl5CiPNbG8hnvcp2lg7CUBV46fVQ5ZJ73jW1+1Bk3hxGdLklLlF Kk6uyMWrBhiCpAVU9GKFW1AbDBCgUig2sLIUGOrfb+RkzDLX4pEoa9DVVDC2pRVy
N3hLzBBUpF8U0mvGMXJYX2gQKivRurIoLHZgmjQTjhA1uBfhTI3ktyLABz04dt7I F2fjEEbPAZepsPFNbgDyaixv+FeA5oWWiBnA7qO/v8t142UOtqBcexUZjBYYgRmt
OQYOlpsAZ/qnxbt+37LD0lgBz5XoHDuGNS2nnVQnuvtzt69GDP8mR1QXnhZedTrM c0S+lTk//xEip9wYvY6W0lgBOLqEUEiLg1tw0xvt9H4R9aGNLkCyvUediwuAbfw4
kDxxTXV7tMIbqw55tr+qVVG4H8QGy1xat3LSTIxoMmz0MmkyjQhApPN2ipwjyNof bGha9PTckYpnKN589xxsDMqbQ0Vn/rxeSzC7RT+qtjUg1gDbDJQTZdYr0+//e0YV
X0Aw9hmAZV52 xRvlnfPW9voB
=A/yB =xqAk
-----END PGP MESSAGE----- -----END PGP MESSAGE-----
fp: c4639370c41133a738f643a591ddbc4c3387f1fb fp: c4639370c41133a738f643a591ddbc4c3387f1fb
- created_at: "2025-03-05T08:24:30Z" - created_at: "2025-02-06T12:36:59Z"
enc: |- enc: |-
-----BEGIN PGP MESSAGE----- -----BEGIN PGP MESSAGE-----
hQIMA98TrrsQEbXUAQ//UtocoQPpMZL71RyjSnJFZq1NT4H5KxbcvQb4LtPBbanK hQIMA98TrrsQEbXUARAAqGyBZLrJ1UpiJKIbQSTQpKA7bRD7olMczjh0Bx1fTN0U
/gPja40wOWXy/3plCHuZYQ/mR69k9URgwD6/0RSSmCCuVp/+FPbK3UbMYyTOtYBr bctdfIGVvdp5pM1C6xbvubNqAMEisQ1tMVozDkXCnLARTwcaq6lyE9vl3gJ1iF1Z
wKnq9eRP8URK6LINJMcNHzfQtgvTcBclJSY1kLfUVjl2RV2NqXVa6+B1sHJdse8A N8SbxVTYV1SXg3qokyBsZIggQ6gJqAr62Pyoansp4HfwwFwYohwR2zTfHJ8pFkkW
5NyJ/VNRRrNDtNnmIaIHPFUBySVKLqiXeex4gFNy02IcwwSiFZnBt/ReaFjZQTGE R2FfEI2Gw5nN4GaauIxUGFDPuvvZapCWZ/ejt4s/ezT9cYrwYfu9XIlqsivsi3yp
TKfyohQF4V4/e35wvhbqWSFYZmWHCwW94gnwTwnlEO/H/NvbNUNDSYL6WwrIFF7+ I03ohKS/pKhxlE7RV2ufRboG+m6TUCnyj5U5AzQa09hkSHd94s9A6M8I6M6zWebv
h/tP7nWbBxNqSByHGQuJnKSysndwUuSMuH+uj+uYdJDn1L1VEoErfAZcRSsDAz1Z pdX73sCjWZQdIZoeM5oXcyY/s/h4/w37loOUE/thh1+hIjybAG0CH31nJkjcdcLg
BVkEaPJM6ZmRM48up5eHQtPG2rklmoFw/O/bPd9QUWJ/ZRu2gcY8aLGczHygaq6W l/fqTLa89JVt37bU9c/hVsx2Bc1cTO7nqhG3kyahkMSLFrsb73yTNn4kOqSKZ7+z
gvZTCPrke7XZTd9S3mAcdlJ2iAYuKY4AmGqfmXbd7mRZXP3DQkRaA/IwCCNJVmhX 189oR0EjNySgRt+M20vjKzhPbjxxQTKlpTE0vho6fEHYRmzPQ3IQbVUbPEbZR64I
um5MbCa17I9Uwh58NBmDuejLcmJDWL5jLmN6S86t3k3jKxwaX56SRwAD92klMdQq S+Nk7m95ZV8djaUOwqqU9pwDTvuYIBwhGOY1kefDg1sCCTM8C9RI9sG02HeQpme3
8UX2dkUkwnnI+YjfOEeLGDAT655TsejpE97JKjI/m39Pwnn+MrDEfNDHhfjX+9QC bgkO+m4khXeiiIrTAODiyM+GCwx6UcwooUSpu8LZJmhiZtfgMsFdGF3P7ngtoOEQ
5k+fiP7yWc7hG3A7jt5vEadazPgDeqy0q7jbmTPtmdieqBs71P3j19l3VKAU8ZnS 4cxP231EI/zoMqRyXYrvAovxXndwghG0LGcCAZZL6mNN2xzE6z1gesVWRjXM8inS
WAGz21MFYjhESmU7fqhTyVE9Dsd4GqyB+ZSa8hHj2ISaOi0QQ9JXjjtQhkDGlsB4 WAFB7DgLTlY43D4QbhkyZfo6XltYe1g1tcJJraG/HICa7hq5BZn48t/BcacCvsrJ
/PrjZhyqd62wHS075YV1B9YqOhC1nAkC+s9mj9CrWIeH2JMOSK9EUyI= lIkEgOT8gn1SlQbDL+T+3pRNOixGKPNU6Ategoy+Eq0Im3AhE0XO8Ns=
=5u7o =Uvc2
-----END PGP MESSAGE----- -----END PGP MESSAGE-----
fp: aef8d6c7e4761fc297cda833df13aebb1011b5d4 fp: aef8d6c7e4761fc297cda833df13aebb1011b5d4
unencrypted_suffix: _unencrypted unencrypted_suffix: _unencrypted

View File

@@ -107,12 +107,6 @@ with lib;
          targets = [ "10.0.0.13:9002" ];
        }];
      }
      {
        job_name = "zineshop";
        static_configs = [{
          targets = [ "10.0.0.15:9002" ];
        }];
      }
      {
        job_name = "fanny";
        static_configs = [{

View File

@@ -5,8 +5,4 @@
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQg6a2EGmq+i9lfwU+SRMQ8MGN3is3VS6janzl9qOHo quaseb67@hzdr.de" "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINQg6a2EGmq+i9lfwU+SRMQ8MGN3is3VS6janzl9qOHo quaseb67@hzdr.de"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICKaEcGaSKU0xC5qCwzj2oCLLG4PYjWHZ7/CXHw4urVk atlan@nixos" "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICKaEcGaSKU0xC5qCwzj2oCLLG4PYjWHZ7/CXHw4urVk atlan@nixos"
]; ];
backup = [
"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIGryhnXdmbGObONG88K+c9zeduMrwtoLHwzMDZg3UXB+Cnq2FXWOrlLZxA+95VUgHpyGZiykJmzeWrKhldDlDeGGd8QCCLeuliiOgXADTYjWaVfhd+6arPZrK2VtqqsguvH40gW1xfoGAOmAT4WFnxWxIaEip0V2u6NOoKTiH9I3bz2Qe4lfJvPXig+XwCXcukXd6XUkDFYDpiw8XNV3X7pqTus5d2RYR97bAhIYZZQ6h50ZpTY8N0lFh4RY5yfx8BhxJW3tfoi9uZvVuPx7dGPsZsSniENFPMNz3UwHGitTJMr4088cJrAGgd5lyFuLouiHiM2JMGA0wx9wWTbWJEwTLaTVQK9gSJf857ndV2zCh9vKlfko4w9bQgqxKg4U/mY8dXX1E7D51nD2ci8Ed+ZG+NEneFLyZhLsD82GkBY+YovA+4xm/pcx+hBhlyqGNxI8v+Jh+JhEyD/ZLgJfq3ZMbGIGsTiDwZ2flLLxImHHEoDoT6PHU6hDhPDJu560="
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJKl5FWPskhlnzJs1+mMYrVTMNnRG92uFKUgGlteTPhL"
];
}

View File

@@ -24,7 +24,7 @@ in
  malobeo.disks = {
    enable = true;
-    encryption = false;
+    encryption = true;
    hostId = "83abc8cb";
    devNodes = "/dev/disk/by-path/";
    root = {

View File

@@ -66,15 +66,6 @@ with lib;
      '';
    };
  };
  virtualHosts."shop.malobeo.org" = {
    locations."/" = {
      proxyPass = "http://10.100.0.101";
      extraConfig = ''
        proxy_set_header Host $host;
      '';
    };
  };
  };
  system.stateVersion = "22.11"; # Did you read the comment?

View File

@@ -1,34 +0,0 @@
{ self, config, lib, pkgs, inputs, ... }:
with lib;
{
networking = {
hostName = mkDefault "zineshop";
useDHCP = false;
};
imports = [
inputs.malobeo.nixosModules.malobeo.metrics
inputs.malobeo.nixosModules.malobeo.printing
inputs.zineshop.nixosModules.zineshop
../modules/malobeo_user.nix
../modules/sshd.nix
];
malobeo.metrics = {
enable = true;
enablePromtail = true;
logNginx = true;
lokiHost = "10.0.0.14";
};
services.printing.enable = true;
services.malobeo.printing.enable = true;
services.zineshop.enable = true;
networking.firewall.allowedTCPPorts = [ 8080 ];
system.stateVersion = "22.11"; # Did you read the comment?
}

View File

@@ -115,9 +115,6 @@ in (utils.lib.eachSystem (builtins.filter filter_system utils.lib.defaultSystems
    initssh.imports = [ ./machines/modules/malobeo/initssh.nix ];
    metrics.imports = [ ./machines/modules/malobeo/metrics.nix ];
    disko.imports = [ ./machines/modules/disko ];
    users.imports = [ ./machines/modules/malobeo/users.nix ];
    backup.imports = [ ./machines/modules/malobeo/backup.nix ];
    printing.imports = [ ./machines/modules/malobeo/printing.nix ];
  };
hydraJobs = nixpkgs.lib.mapAttrs (_: nixpkgs.lib.hydraJob) ( hydraJobs = nixpkgs.lib.mapAttrs (_: nixpkgs.lib.hydraJob) (

View File

@@ -40,13 +40,15 @@ trap cleanup EXIT
# Create the directory where sshd expects to find the host keys
install -d -m755 "$temp/etc/ssh/"
install -d -m755 "$temp/root/"
diskKey=$(sops -d $pwpath/disk.key)
echo "$diskKey" > /tmp/secret.key
echo "$diskKey" > $temp/root/secret.key
sops -d "$pwpath/$hostkey" > "$temp/etc/ssh/$hostname"
-sops -d "$pwpath/$initrdkey" > "$temp/etc/ssh/initrd"
+sopd -d "$pwpath/$initrdkey" > "$temp/etc/ssh/initrd"
# # Set the correct permissions so sshd will accept the key
chmod 600 "$temp/etc/ssh/$hostname"

View File

@@ -25,13 +25,11 @@ echo
if [ $# = 1 ]
then
  echo "$diskkey" | ssh $sshoptions root@$hostname-initrd "systemd-tty-ask-password-agent" #root
  echo "$diskkey" | ssh $sshoptions root@$hostname-initrd "systemd-tty-ask-password-agent" #data
elif [ $# = 2 ]
then
  ip=$2
  echo "$diskkey" | ssh $sshoptions root@$ip "systemd-tty-ask-password-agent" #root
  echo "$diskkey" | ssh $sshoptions root@$ip "systemd-tty-ask-password-agent" #data
else
  echo