Compare commits
9 Commits
26829f9255...better-wor
| Author | SHA1 | Date |
|---|---|---|
| | 52824e39ee | |
| | 8793120436 | |
| | 950ada1e10 | |
| | 1e269966ff | |
| | 3861daaf76 | |
| | 3a332e77d1 | |
| | 79c311b45d | |
| | 850070f987 | |
| | d242562544 | |
```diff
@@ -1,9 +1,8 @@
-name: "Evaluate Hydra Jobs"
+name: "Check flake syntax"
 on:
-  pull_request:
   push:
 jobs:
-  eval-hydra-jobs:
+  flake-check:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
@@ -11,5 +10,5 @@ jobs:
         run: |
           apt update -y
           apt install sudo -y
-      - uses: cachix/install-nix-action@v27
-      - run: nix eval --no-update-lock-file --accept-flake-config .\#hydraJobs
+      - uses: cachix/install-nix-action@v30
+      - run: nix flake check --no-update-lock-file --accept-flake-config .
```
README.md
````diff
@@ -1,82 +1 @@
-# malobeo infrastructure
+# Index
-
-this repository contains nixos configurations of the digital malobeo infrastructure. it should be used to set up, test, build and deploy different hosts in a reproducible manner.
-
-the file structure is based on this [blog post](https://samleathers.com/posts/2022-02-03-my-new-network-and-deploy-rs.html)
-
-### deploying configuration
-#### local deployment
-``` shell
-nixos-rebuild switch --use-remote-sudo
-```
-
-#### remote deployment
-you need the hostname and ip address of the host:
-``` shell
-nixos-rebuild switch --flake .#<hostname> --target-host root@<ip_address> --build-host localhost
-```
-
-in this case 'localhost' is used as the build host, which can be useful if the target host has low system resources
-
-
-## development
-
-### requirements
-we use flake based configurations for our hosts. if you want to build configurations on your own machine you have to enable flakes first by adding the following to your *configuration.nix* or *nix.conf*
-``` nix
-nix.extraOptions = ''
-  experimental-features = nix-command flakes
-'';
-```
-
-More information about flakes can be found [here](https://nixos.wiki/wiki/Flakes)
-
-### dev shell
-a development shell with the correct environment can be created by running ```nix develop```
-
-If you're using direnv you can add flake support by following these steps: [link](https://nixos.wiki/wiki/Flakes#Direnv_integration)
-
-### build a configuration
-
-to build a configuration run the following command (replace ```<hostname>``` with the actual hostname):
-
-``` shell
-nix build .#nixosConfigurations.<hostname>.config.system.build.toplevel
-```
-
-### building raspberry image
-
-for the raspberry it is possible to build the whole configuration as an sd-card image which can then be flashed directly. more information about building arm on nixos can be found [here](https://nixos.wiki/wiki/NixOS_on_ARM).
-
-to be able to build the image you need to enable qemu emulation on the machine you are building with. therefore it is necessary to add the following to your configuration.nix:
-
-``` nix
-boot.binfmt.emulatedSystems = [ "aarch64-linux" ];
-```
-
-then you can build the image with:
-
-``` shell
-nix build .#nixosConfigurations.rpi1_base_image.config.system.build.sdImage
-```
-
-### run a configuration as vm
-
-to run a vm we have to build it first using the following command (replace ```<hostname>``` with the actual hostname):
-
-``` shell
-nix build .#nixosConfigurations.<hostname>.config.system.build.vm
-```
-
-afterwards run the following command to start the vm:
-
-``` shell
-./result/bin/run-<hostname>-vm
-```
-
-### documentation
-
-for documentation we currently just use README.md files.
-
-the devshell provides the python package ['grip'](https://github.com/joeyespo/grip) which can be used to preview different README.md files in the browser.
-the usage is simple: just run ```grip``` in the same folder as the README.md you want to preview, then open your browser at ```http://localhost:6419```.
````
```diff
@@ -3,6 +3,7 @@
 , nixpkgs
 , sops-nix
 , inputs
+, microvm
 , nixos-hardware
 , home-manager
 , ...
@@ -34,15 +35,14 @@ let
       };
     };
   })

   sops-nix.nixosModules.sops
+  microvm.nixosModules.microvm
   ];
   }
   ];
   defaultModules = baseModules;

   makeMicroVM = hostName: ipv4Addr: macAddr: modules: [
-    inputs.microvm.nixosModules.microvm
     {
       microvm = {
         hypervisor = "cloud-hypervisor";
@@ -170,16 +170,6 @@ in
     ];
   };

-  overwatch = nixosSystem {
-    system = "x86_64-linux";
-    specialArgs.inputs = inputs;
-    specialArgs.self = self;
-    modules = makeMicroVM "overwatch" "10.0.0.13" "D0:E5:CA:F0:D7:E9" [
-      ./overwatch/configuration.nix
-    ];
-  };
-
-
   testvm = nixosSystem {
     system = "x86_64-linux";
     specialArgs.inputs = inputs;
```
```diff
@@ -8,6 +8,7 @@ with lib;
   networking = {
     hostName = mkDefault "durruti";
     useDHCP = false;
+    nameservers = [ "1.1.1.1" ];
   };

   networking.firewall.allowedTCPPorts = [ 8080 ];
```
```diff
@@ -6,6 +6,7 @@ with lib;
   networking = {
     hostName = mkDefault "infradocs";
     useDHCP = false;
+    nameservers = [ "1.1.1.1" ];
   };

   imports = [
@@ -14,30 +15,6 @@ with lib;
     ../modules/sshd.nix
   ];

-  networking.firewall.allowedTCPPorts = [ 9002 ];
-
-  services.prometheus = {
-    exporters = {
-      node = {
-        enable = true;
-        enabledCollectors = [ "systemd" "processes" ];
-        port = 9002;
-      };
-    };
-  };
-
-  services.promtail = {
-    enable = true;
-    configFile = import ../modules/malobeo/promtail_config.nix {
-      lokiAddress = "10.0.0.13";
-      logNginx = true;
-      config = config;
-      pkgs = pkgs;
-    };
-  };
-
-  users.users.promtail.extraGroups = [ "nginx" "systemd-journal" ];
-
   system.stateVersion = "22.11"; # Did you read the comment?
 }
```
modules/malobeo/promtail_config.nix
```diff
@@ -1,49 +0,0 @@
-{ logNginx, lokiAddress, config, pkgs, ... }:
-
-let
-  basecfg = ''
-    server:
-      http_listen_port: 9080
-      grpc_listen_port: 0
-
-    positions:
-      filename: /tmp/positions.yaml
-
-    clients:
-      - url: http://${lokiAddress}:3100/loki/api/v1/push
-  '';
-
-  withNginx = ''
-    scrape_configs:
-    - job_name: journal
-      journal:
-        max_age: 12h
-        labels:
-          job: systemd-journal
-          host: ${config.networking.hostName}
-      relabel_configs:
-        - source_labels: ["__journal__systemd_unit"]
-          target_label: "unit"
-    - job_name: nginx
-      static_configs:
-      - targets:
-          - localhost
-        labels:
-          job: nginx
-          __path__: /var/log/nginx/*log
-  '';
-
-  withoutNginx = ''
-    scrape_configs:
-    - job_name: journal
-      journal:
-        max_age: 12h
-        labels:
-          job: systemd-journal
-          host: ${config.networking.hostName}
-      relabel_configs:
-        - source_labels: ["__journal__systemd_unit"]
-          target_label: "unit"
-  '';
-in
-pkgs.writeText "promtailcfg.yaml" (if logNginx then ''${basecfg}${withNginx}'' else ''${basecfg}${withoutNginx}'')
```
overwatch/configuration.nix
```diff
@@ -1,87 +0,0 @@
-{ config, lib, pkgs, inputs, ... }:
-
-with lib;
-
-{
-  networking = {
-    hostName = mkDefault "overwatch";
-    useDHCP = false;
-  };
-
-  imports = [
-    ../modules/malobeo_user.nix
-    ../modules/sshd.nix
-  ];
-
-  networking.firewall.allowedTCPPorts = [ 80 9080 9001 3100 ];
-
-  services.grafana = {
-    enable = true;
-    domain = "grafana.malobeo.org";
-    port = 2342;
-    addr = "127.0.0.1";
-  };
-
-  services.nginx = {
-    enable = true;
-    virtualHosts.${config.services.grafana.domain} = {
-      locations."/" = {
-        proxyPass = "http://127.0.0.1:${toString config.services.grafana.port}";
-        proxyWebsockets = true;
-
-        extraConfig = ''
-          proxy_set_header Host $host;
-        '';
-      };
-    };
-  };
-
-  services.prometheus = {
-    enable = true;
-    port = 9001;
-    exporters = {
-      node = {
-        enable = true;
-        enabledCollectors = [ "systemd" "processes" ];
-        port = 9002;
-      };
-    };
-
-    scrapeConfigs = [
-      {
-        job_name = "overwatch";
-        static_configs = [{
-          targets = [ "127.0.0.1:9002" ];
-        }];
-      }
-      {
-        job_name = "infradocs";
-        static_configs = [{
-          targets = [ "10.0.0.11:9002" ];
-        }];
-      }
-    ];
-  };
-
-  services.loki = {
-    enable = true;
-    configFile = ./loki.yaml;
-  };
-
-  services.promtail = {
-    enable = true;
-    configFile = import ../modules/malobeo/promtail_config.nix {
-      lokiAddress = "10.0.0.13";
-      logNginx = false;
-      config = config;
-      pkgs = pkgs;
-    };
-  };
-
-  users.users.promtail.extraGroups = [ "nginx" "systemd-journal" ];
-
-
-
-  system.stateVersion = "22.11"; # Did you read the comment?
-}
```
loki.yaml
```diff
@@ -1,60 +0,0 @@
-auth_enabled: false
-
-server:
-  http_listen_port: 3100
-  grpc_listen_port: 9096
-  log_level: debug
-  grpc_server_max_concurrent_streams: 1000
-
-common:
-  instance_addr: 127.0.0.1
-  path_prefix: /tmp/loki
-  storage:
-    filesystem:
-      chunks_directory: /tmp/loki/chunks
-      rules_directory: /tmp/loki/rules
-  replication_factor: 1
-  ring:
-    kvstore:
-      store: inmemory
-
-query_range:
-  results_cache:
-    cache:
-      embedded_cache:
-        enabled: true
-        max_size_mb: 100
-
-schema_config:
-  configs:
-    - from: 2020-10-24
-      store: tsdb
-      object_store: filesystem
-      schema: v13
-      index:
-        prefix: index_
-        period: 24h
-
-pattern_ingester:
-  enabled: true
-  metric_aggregation:
-    loki_address: localhost:3100
-
-ruler:
-  alertmanager_url: http://localhost:9093
-
-frontend:
-  encoding: protobuf
-
-# By default, Loki will send anonymous, but uniquely-identifiable usage and configuration
-# analytics to Grafana Labs. These statistics are sent to https://stats.grafana.org/
-#
-# Statistics help us better understand how Loki is used, and they show us performance
-# levels for most users. This helps us prioritize features and documentation.
-# For more information on what's sent, look at
-# https://github.com/grafana/loki/blob/main/pkg/analytics/stats.go
-# Refer to the buildReport method to see what goes into a report.
-#
-# If you would like to disable reporting, uncomment the following lines:
-analytics:
-  reporting_enabled: false
```
```diff
@@ -1,29 +0,0 @@
-server:
-  http_listen_port: 9080
-  grpc_listen_port: 0
-
-positions:
-  filename: /tmp/positions.yaml
-
-clients:
-  - url: http://10.0.0.13:3100/loki/api/v1/push
-
-
-scrape_configs:
-- job_name: journal
-  journal:
-    max_age: 12h
-    labels:
-      job: systemd-journal
-      host: overwatch
-  relabel_configs:
-    - source_labels: ["__journal__systemd_unit"]
-      target_label: "unit"
-- job_name: nginx
-  static_configs:
-  - targets:
-      - localhost
-    labels:
-      job: nginx
-      __path__: /var/log/nginx/*log
```
```diff
@@ -6,6 +6,7 @@ with lib;
   networking = {
     hostName = mkDefault "uptimekuma";
     useDHCP = false;
+    nameservers = [ "1.1.1.1" ];
   };

   imports = [
```
outputs.nix
```diff
@@ -20,6 +20,7 @@ in (utils.lib.eachSystem (builtins.filter filter_system utils.lib.defaultSystems
 let
   sops = sops-nix.packages."${pkgs.system}";
   microvmpkg = microvm.packages."${pkgs.system}";
+  installed = builtins.attrNames self.legacyPackages."${pkgs.system}".scripts;
 in
 pkgs.mkShell {
   sopsPGPKeyDirs = [
@@ -37,11 +38,14 @@ in (utils.lib.eachSystem (builtins.filter filter_system utils.lib.defaultSystems
     pkgs.mdbook
     microvmpkg.microvm
   ];
+  packages = builtins.map (pkgName: self.legacyPackages."${pkgs.system}".scripts.${pkgName}) installed;
+  shellHook = ''echo "Available scripts: ${builtins.concatStringsSep " " installed}"'';
+};
+
+legacyPackages = {
+  scripts.remote-install = pkgs.writeShellScriptBin "remote-install" (builtins.readFile ./scripts/remote-install-encrypt.sh);
+  scripts.boot-unlock = pkgs.writeShellScriptBin "boot-unlock" (builtins.readFile ./scripts/unlock-boot.sh);
 };

 packages = {
-  remote-install = pkgs.writeShellScriptBin "remote-install" (builtins.readFile ./scripts/remote-install-encrypt.sh);
-  boot-unlock = pkgs.writeShellScriptBin "boot-unlock" (builtins.readFile ./scripts/unlock-boot.sh);
   docs = pkgs.stdenv.mkDerivation {
     name = "malobeo-docs";
     phases = [ "buildPhase" ];
@@ -78,6 +82,11 @@ in (utils.lib.eachSystem (builtins.filter filter_system utils.lib.defaultSystems
   source = "/nix/store";
   mountPoint = "/nix/.ro-store";
 }];
+interfaces = pkgs.lib.mkForce [{
+  type = "user";
+  id = "eth0";
+  mac = "02:23:de:ad:be:ef";
+}];
 };
 boot.isContainer = pkgs.lib.mkForce false;
 users.users.root.password = "";
```
scripts/remote-install-encrypt.sh
```diff
@@ -1,5 +1,4 @@
 set -o errexit
-set -o nounset
 set -o pipefail

 if [ $# -lt 2 ]; then
@@ -9,6 +8,21 @@ if [ $# -lt 2 ]; then
   exit 1
 fi

+if [ ! -e flake.nix ]
+then
+  echo "flake.nix not found. Searching down."
+  while [ ! -e flake.nix ]
+  do
+    if [ $PWD = "/" ]
+    then
+      echo "Found root. Aborting."
+      exit 1
+    else
+      cd ..
+    fi
+  done
+fi
+
 hostname=$1
 ipaddress=$2
```
scripts/unlock-boot.sh
```diff
@@ -4,19 +4,33 @@ set -o pipefail
 sshoptions="-o StrictHostKeyChecking=no -o ServerAliveInterval=1 -o ServerAliveCountMax=1 -p 222 -T"
 HOSTNAME=$1

-echo
-diskkey=$(sops -d machines/$HOSTNAME/disk.key)
+if [ ! -e flake.nix ]
+then
+  echo "flake.nix not found. Searching down."
+  while [ ! -e flake.nix ]
+  do
+    if [ $PWD = "/" ]
+    then
+      echo "Found root. Aborting."
+      exit 1
+    else
+      cd ..
+    fi
+  done
+fi
+
+echo
 if [ $# = 1 ]
 then
+  diskkey=$(sops -d machines/$HOSTNAME/disk.key)
   echo "$diskkey" | ssh $sshoptions root@$HOSTNAME-initrd "systemd-tty-ask-password-agent" #storage

   echo "$diskkey" | ssh $sshoptions root@$HOSTNAME-initrd "systemd-tty-ask-password-agent" #root

 elif [ $# = 2 ]
 then
+  diskkey=$(sops -d machines/$HOSTNAME/disk.key)
   IP=$2

   echo "$diskkey" | ssh $sshoptions root@$IP "systemd-tty-ask-password-agent" #storage

   echo "$diskkey" | ssh $sshoptions root@$IP "systemd-tty-ask-password-agent" #root
```
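Both scripts gain the same loop that walks up from the current directory until a `flake.nix` is found. It can be exercised in isolation; a minimal sketch (the function name `find_flake_root` is illustrative, not from the repo):

```shell
#!/usr/bin/env sh
# Walk up from the current directory until a flake.nix is found,
# mirroring the search loop added to the repo's scripts.
find_flake_root() {
  while [ ! -e flake.nix ]; do
    if [ "$PWD" = "/" ]; then
      echo "Found root. Aborting." >&2
      return 1
    fi
    cd ..
  done
  pwd
}

# Demonstration in a throwaway tree: flake.nix at the top, cwd two levels below.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
tmp=$PWD                 # normalize possible symlinks in the temp path
touch flake.nix
mkdir -p machines/demo
cd machines/demo
find_flake_root          # prints the directory containing flake.nix
```

Note the function changes the caller's working directory as a side effect, which is exactly what the scripts rely on before calling `sops` with repo-relative paths.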