Compare commits


21 Commits

Author SHA1 Message Date
Martin Weinelt
bc6f08d233 Create eval-jobset role and guard /api/push route
(cherry picked from commit f730433789)
2024-08-27 21:53:17 +02:00
Janne Heß
88d9898588 api: Require POST for /api/push
(cherry picked from commit 916531dc9c)
2024-08-27 21:53:13 +02:00
Pierre Bourdon
b8d03adaf4 queue runner: attempt at slightly smarter scheduling criteria
Instead of just going for "whatever is the oldest build we know of",
use the following first:

- Is the step more constrained? If so, schedule it first to avoid
  filling up "more desirable" build slots with less constrained builds.

- Does the step have more dependents? If so, schedule it first to try
  and maximize open parallelism and breadth of scheduling options.
2024-04-21 17:36:16 +02:00
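
As a sketch (illustrative only, not Hydra's actual code), such an ordering boils down to a composite sort key over the runnable steps; the field names below are made up:

    #include <algorithm>
    #include <tuple>
    #include <vector>

    struct RunnableStep {
        int constraints;  // hypothetical: e.g. number of required system features
        int dependents;   // number of other steps waiting on this one
        long buildId;     // lower id ~= older build
    };

    void sortQueue(std::vector<RunnableStep> & steps)
    {
        std::sort(steps.begin(), steps.end(),
            [](const RunnableStep & a, const RunnableStep & b) {
                // More constrained first, then more dependents, then oldest.
                return std::make_tuple(-a.constraints, -a.dependents, a.buildId)
                     < std::make_tuple(-b.constraints, -b.dependents, b.buildId);
            });
    }
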
Pierre Bourdon
ee1a7a7813 web: serveFile: also serve a CSP putting served HTML in its own origin 2024-04-21 16:14:24 +02:00
Pierre Bourdon
5c3e508e55 queue-runner: release machine reservation while copying outputs
This allows for better builder usage when the queue runner is busy. To
avoid running into uncontrollable imbalances between builder/queue
runner, we only release the machine reservation after the local
throttler has found a slot to start copying the outputs for that build.
2024-04-21 01:55:19 +02:00
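
A hypothetical sketch of that ordering, with a counting semaphore standing in for the local throttler; the essential point is that the machine reservation is released only after a copy slot has been acquired, never before:

    #include <semaphore>

    std::counting_semaphore<64> copySlots{8};  // illustrative throttler size

    void finishBuildStep(/* MachineReservation & reservation */)
    {
        copySlots.acquire();       // block until we may start copying outputs
        // reservation.release();  // only now hand the builder to the next step
        // ... copy the outputs to the local store ...
        copySlots.release();
    }
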
Pierre Bourdon
026e3a3103 queue-runner: switch to pseudorandom ordering of builds processing
We don't rely on sequential / monotonic processing of build IDs anymore, so
randomizing actually has the advantage of mixing builds for different
systems together, to avoid only one chunk of builds for a single system
getting processed while builders for other systems are starved.
2024-04-20 23:05:26 +02:00
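
In its simplest form this is just a shuffle of the batch of new builds before processing (a sketch, not the actual implementation):

    #include <algorithm>
    #include <random>
    #include <vector>

    void shuffleNewBuilds(std::vector<long> & buildIds)
    {
        static thread_local std::mt19937 rng{std::random_device{}()};
        std::shuffle(buildIds.begin(), buildIds.end(), rng);
    }
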
Pierre Bourdon
6606a7f86e queue runner: introduce some parallelism for remote paths lookup
Each output for a given step being ingested is looked up in parallel,
which should basically multiply the speed of build ingestion by the
average number of outputs per derivation.
2024-04-20 22:28:18 +02:00
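
A rough sketch of that per-output parallelism using std::async; remoteStoreHasPath is a hypothetical stand-in for the real store query:

    #include <future>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for querying the destination store.
    static bool remoteStoreHasPath(const std::string & path) { (void) path; return false; }

    std::vector<bool> lookUpOutputs(const std::vector<std::string> & outPaths)
    {
        std::vector<std::future<bool>> futures;
        futures.reserve(outPaths.size());
        for (auto & path : outPaths)
            futures.push_back(std::async(std::launch::async,
                [&path] { return remoteStoreHasPath(path); }));
        std::vector<bool> present;
        for (auto & f : futures)
            present.push_back(f.get());  // latency is paid once per step, not per output
        return present;
    }
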
Pierre Bourdon
f31b95d371 queue-runner: reduce the time between queue monitor restarts
This will induce more DB queries (though these are fairly cheap), but
with the benefit of processing bumps within 1m instead of within 10m.
2024-04-20 16:58:10 +02:00
Pierre Bourdon
54f8daf6b1 queue-runner: remove id > X from new builds query
Running the query with/without it shows that it makes no difference to
postgres, since there's an index on finished=0 already. This allows a
few simplifications, but also paves the way towards running multiple
parallel monitor threads in the future.
2024-04-20 16:53:52 +02:00
Pierre Bourdon
cc6bafe538 queue-runner: add prom metrics to allow detecting internal bottlenecks
By looking at the ratio of running vs. waiting for the dispatcher and
the queue monitor, we should get better visibility into what hydra is
currently bottlenecked on.

There are other side effects we can try to measure to get to the same
result, but having a simple way doesn't cost us much.
2024-04-20 16:48:03 +02:00
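
Hydra links against prometheus-cpp (see the prometheus-cpp-core/pull dependencies in the meson diff below), so a running-vs-waiting counter could look roughly like this; the metric names are invented for illustration:

    #include <prometheus/counter.h>
    #include <prometheus/registry.h>

    void registerDispatcherMetrics(prometheus::Registry & registry)
    {
        auto & family = prometheus::BuildCounter()
            .Name("hydra_dispatcher_time_seconds")  // hypothetical name
            .Help("Time the dispatcher spent running vs. waiting")
            .Register(registry);
        auto & running = family.Add({{"state", "running"}});
        auto & waiting = family.Add({{"state", "waiting"}});
        running.Increment(0.25);  // accumulate per-iteration running time
        waiting.Increment(1.75);  // accumulate per-iteration waiting time
    }
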
Pierre Bourdon
6189ba9c5e web: replace 'errormsg' with 'errormsg IS NULL' in most cases
This is implemented in an extremely hacky way due to poor DBIx feature
support. Ideally, what we'd need is a way to tell DBIx to ignore the
errormsg column unless explicitly requested, and to automatically add a
computed 'errormsg IS NULL' column in others. Since it does not support
that, this commit instead hacks some support via method overrides while
taking care to not break anything obvious.
2024-04-12 20:14:09 +02:00
Pierre Bourdon
258e9314a9 web: include current step status on /machines 2024-04-11 17:15:58 +02:00
Pierre Bourdon
a51bd392a2 queue-runner: limit parallelism of CPU intensive operations
My current theory is that running more parallel xz processes than there
are CPU cores reduces our overall throughput by requiring more scheduling
overhead and more cache thrashing.
2024-04-11 16:43:01 +02:00
Maximilian Bosch
a596d6c3c1 Only show stepname if it doesn't equal the name of the drv
When building e.g. nixpkgs, the "Running builds" view will mostly look
like this

    hello.x86_64-linux (Build of hello-X.Y)
    exa.x86_64-linux (Build of exa-X.Y)
    ...

This doesn't provide any useful information. Showing the step name only
makes sense if it's not a child of the job's derivation. With this
patch, that information will only be shown if the drv name (i.e. w/o
`/nix/store/` prefix, .drv ext & hash) is not equal to the drv name of
the job itself (build.nixname).
2024-03-18 18:46:01 +01:00
Maximilian Bosch
415f9f2daa Running builds view: show build step names
When using Hydra to build machine configurations, you'll often see
"nixosConfigurations.foo" five times, i.e. for each build step being
run. This isn't very helpful I think because in such a case, a single
build step can also be compiling the Linux kernel.

This change also fetches the `drvpath` and `type` from the `buildsteps`
relation. We're already joining it, so this doesn't make much difference
(confirmed via query logging that this doesn't cause extra SQL queries).

Unfortunately build steps don't have a human readable name, so I'm
deriving it from the drvpath by stripping away the hash (assuming that
it'll never contain a `-` and that `/nix/store/` is used as prefix). I
decided against using the Nix bindings for that to avoid too much
overhead due to store operations for each build step.
2024-03-18 18:46:01 +01:00
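
The change itself lives in the Perl frontend, but the stripping logic described above is simple enough to sketch under the same assumptions (the hash contains no `-`, the prefix is `/nix/store/`):

    #include <string>

    std::string stepNameFromDrvPath(std::string path)
    {
        const std::string prefix = "/nix/store/";
        if (path.rfind(prefix, 0) == 0)
            path.erase(0, prefix.size());
        // "<hash>-name.drv" -> "name.drv": drop everything up to the first '-'.
        if (auto dash = path.find('-'); dash != std::string::npos)
            path.erase(0, dash + 1);
        // Drop the ".drv" extension if present.
        const std::string ext = ".drv";
        if (path.size() >= ext.size()
            && path.compare(path.size() - ext.size(), ext.size(), ext) == 0)
            path.erase(path.size() - ext.size());
        return path;
    }
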
Maximilian Bosch
9b465e7a67 Make "timed out" and "log limit exceeded" builds aborted
In 73694087a0 I gave builds that failed
because of a timeout or an exceeded log limit a stop sign, and I stand by
that reasoning: with that it's possible to distinguish between actual
build failures and rather transient things such as timeouts.

Back then I considered it a feature that these are shown in a different
tab, but I don't think that's a good idea anymore. When using a jobset to
e.g. track the regressions from a mass rebuild (like a compiler or gcc
update), "Newly failed builds" should exclusively display regressions (and
flaky builds of course, not much I can do about that).

Also, when a bunch of builds fail in such a jobset because of e.g. a
broken connection to a builder that results in a timeout, I want to be
able to restart them all w/o rebuilding actual regressions.

To make it clear that the tab doesn't only contain "Aborted" builds, I
renamed the label to "Aborted / Timed out".
2024-03-16 22:10:40 +01:00
Maximilian Bosch
9b62c52e5c hydra-queue-runner: drop broken connections from pool
Closes #1336

When restarting postgresql, the connections are still reused in
`hydra-queue-runner`, causing errors like this

    main thread: Lost connection to the database server.
    queue monitor: Lost connection to the database server.

and no more builds being processed.

`hydra-evaluator` doesn't have that issue since it crashes right away.
We could let it retry indefinitely as well (see below), but I don't
want to change too much.

If the DB is still unreachable 10s later, the process will stop with a
non-zero exit code because of a missing DB connection. This, however,
isn't such a big deal because it will be immediately restarted
afterwards. With the current configuration, Hydra will never give up,
but will restart (and retry) infinitely. That seems reasonable to me,
i.e. retrying DB connections indefinitely on a long-running process. If
this doesn't work out, the monitoring should fire anyway because the
queue fills up, but I'm open to discussing that.

Please note that this isn't reproducible with the DB and the queue
runner on the same machine when using `services.hydra-dev`, because of
the `Requires=` dependency `hydra-queue-runner.service` ->
`hydra-init.service` -> `postgresql.service` that causes the queue
runner to be restarted on `systemctl restart postgresql`.

Internally, Hydra uses Nix's pool data structure: it basically has N
slots (here DB connections) and whenever a new one is requested, an idle
slot is provided or a new one is created (when N slots are active, the
caller waits until one slot is free). The issue in the code here, however,
is that whenever an error is encountered the slot is released, but the
same broken connection will be reused the next time. By using
`Pool::Handle::markBad`, Nix will drop a broken slot. This is now done
when `pqxx::broken_connection` is caught.
2024-03-16 22:10:40 +01:00
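
A sketch of the fix, assuming Nix's `Pool` from `pool.hh` and some connection wrapper type; the essential part is marking the handle bad before rethrowing:

    #include <pqxx/pqxx>
    #include "pool.hh"  // nix::Pool

    template<typename Connection>
    void withConnection(nix::Pool<Connection> & dbPool)
    {
        auto conn(dbPool.get());
        try {
            // ... run queries on *conn ...
        } catch (pqxx::broken_connection &) {
            conn.markBad();  // drop the slot; the next get() opens a fresh connection
            throw;
        }
    }
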
Maximilian Bosch
ef6be80f54 Use submit event in login form
It's a pet peeve of mine when logging into my personal Hydra that I
always have to press the button rather than hitting Return after entering
my password.

The reason is that the form didn't have a "submit" handler; so far it
only listened to the "click" event on the button. The submit event covers
both clicking the button and hitting Return.
2024-03-16 22:10:40 +01:00
K900
969eb3eeac urlencode drv names when fetching logs
Otherwise names with special characters like + break things.
2024-03-16 22:10:40 +01:00
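
The fix is in the web code, but percent-encoding is easy to illustrate; without it a name containing `+` comes back as a space after URL decoding:

    #include <cctype>
    #include <cstdio>
    #include <string>

    std::string urlencode(const std::string & s)
    {
        std::string out;
        for (unsigned char c : s) {
            if (std::isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~')
                out += static_cast<char>(c);
            else {
                char buf[4];
                std::snprintf(buf, sizeof buf, "%%%02X", c);  // e.g. '+' -> "%2B"
                out += buf;
            }
        }
        return out;
    }
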
Pierre Bourdon
18466e8326 queue-runner: try larger pipe buffer sizes 2024-03-16 22:10:40 +01:00
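
On Linux, larger pipe buffers come down to fcntl with F_SETPIPE_SZ (sketch only; the requested size is arbitrary and capped by /proc/sys/fs/pipe-max-size):

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE  // F_SETPIPE_SZ is Linux-specific
    #endif
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) != 0) return 1;
        int sz = fcntl(fds[1], F_SETPIPE_SZ, 1024 * 1024);  // ask for 1 MiB
        if (sz < 0) perror("F_SETPIPE_SZ");
        else printf("pipe buffer is now %d bytes\n", sz);
        close(fds[0]);
        close(fds[1]);
        return 0;
    }
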
ajs124
6ed21490ee lazy-load evaluation errors
Closes #1362
2024-03-16 22:10:40 +01:00
111 changed files with 2419 additions and 1622 deletions

42
.gitignore vendored

@@ -1,8 +1,48 @@
/.pls_cache
*.o
*~
.test_info.*
Makefile
Makefile.in
.deps
.hydra-data
/config.guess
/config.log
/config.status
/config.sub
/configure
/depcomp
/libtool
/ltmain.sh
/autom4te.cache
/aclocal.m4
/missing
/install-sh
/src/sql/hydra-postgresql.sql
/src/sql/hydra-sqlite.sql
/src/sql/tmp.sqlite
/src/hydra-eval-jobs/hydra-eval-jobs
/src/root/static/bootstrap
/src/root/static/js/flot
/tests
/doc/manual/images
/doc/manual/manual.html
/doc/manual/manual.pdf
/t/.bzr*
/t/.git*
/t/.hg*
/t/nix
/t/data
/t/jobs/config.nix
t/jobs/declarative/project.json
/inst
hydra-config.h
hydra-config.h.in
result
result-*
outputs
config
stamp-h1
src/hydra-evaluator/hydra-evaluator
src/hydra-queue-runner/hydra-queue-runner
src/root/static/fontawesome/
src/root/static/bootstrap*/

2
.yath.rc Normal file

@@ -0,0 +1,2 @@
[test]
-I=rel(t/lib)

12
Makefile.am Normal file

@@ -0,0 +1,12 @@
SUBDIRS = src doc
if CAN_DO_CHECK
SUBDIRS += t
endif
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
EXTRA_DIST = nixos-modules/hydra.nix
install-data-local: nixos-modules/hydra.nix
$(INSTALL) -d $(DESTDIR)$(datadir)/nix
$(INSTALL_DATA) nixos-modules/hydra.nix $(DESTDIR)$(datadir)/nix/hydra-module.nix


@@ -39,16 +39,16 @@ In order to evaluate and build anything you need to create _projects_ that conta
#### Creating A Project
Log in as administrator, click "_Admin_" and select "_Create project_". Fill the form as follows:
- **Identifier**: `hello-project`
- **Identifier**: `hello`
- **Display name**: `hello`
- **Description**: `hello project`
Click "_Create project_".
#### Creating A Jobset
After creating a project you are forwarded to the project page. Click "_Actions_" and choose "_Create jobset_". Change **Type** to Legacy for the example below. Fill the form with the following values:
After creating a project you are forwarded to the project page. Click "_Actions_" and choose "_Create jobset_". Fill the form with the following values:
- **Identifier**: `hello-project`
- **Identifier**: `hello`
- **Nix expression**: `examples/hello.nix` in `hydra`
- **Check interval**: 60
- **Scheduling shares**: 1
@@ -57,7 +57,7 @@ We have to add two inputs for this jobset. One for _nixpkgs_ and one for _hydra_
- **Input name**: `nixpkgs`
- **Type**: `Git checkout`
- **Value**: `https://github.com/NixOS/nixpkgs nixos-24.05`
- **Value**: `https://github.com/nixos/nixpkgs-channels nixos-20.03`
- **Input name**: `hydra`
- **Type**: `Git checkout`
@@ -140,7 +140,7 @@ You can also interface with Hydra through a JSON API. The API is defined in [hyd
## Additional Resources
- [Hydra User's Guide](https://nixos.org/hydra/manual/)
- [Hydra on the NixOS Wiki](https://wiki.nixos.org/wiki/Hydra)
- [Hydra on the NixOS Wiki](https://nixos.wiki/wiki/Hydra)
- [hydra-cli](https://github.com/nlewo/hydra-cli)
- [Peter Simons - Hydra: Setting up your own build farm (NixOS)](https://www.youtube.com/watch?v=RXV0Y5Bn-QQ)

91
configure.ac Normal file

@@ -0,0 +1,91 @@
AC_INIT([Hydra], [m4_esyscmd([echo -n $(cat ./version.txt)$VERSION_SUFFIX])])
AC_CONFIG_AUX_DIR(config)
AM_INIT_AUTOMAKE([foreign serial-tests])
AC_LANG([C++])
AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_LN_S
AC_PROG_LIBTOOL
AC_PROG_CXX
AC_PATH_PROG([XSLTPROC], [xsltproc])
AC_ARG_WITH([docbook-xsl],
[AS_HELP_STRING([--with-docbook-xsl=PATH],
[path of the DocBook XSL stylesheets])],
[docbookxsl="$withval"],
[docbookxsl="/docbook-xsl-missing"])
AC_SUBST([docbookxsl])
AC_DEFUN([NEED_PROG],
[
AC_PATH_PROG($1, $2)
if test -z "$$1"; then
AC_MSG_ERROR([$2 is required])
fi
])
NEED_PROG(perl, perl)
NEED_PROG([NIX_STORE_PROGRAM], [nix-store])
AC_MSG_CHECKING([whether $NIX_STORE_PROGRAM is recent enough])
if test -n "$NIX_STORE" -a -n "$TMPDIR"
then
# This may be executed from within a build chroot, so pacify
# `nix-store' instead of letting it choke while trying to mkdir
# /nix/var.
NIX_STATE_DIR="$TMPDIR"
export NIX_STATE_DIR
fi
if NIX_REMOTE=daemon PAGER=cat "$NIX_STORE_PROGRAM" --timeout 123 -q; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
AC_MSG_ERROR([`$NIX_STORE_PROGRAM' doesn't support `--timeout'; please use a newer version.])
fi
PKG_CHECK_MODULES([NIX], [nix-main nix-expr nix-store])
testPath="$(dirname $(type -p expr))"
AC_SUBST(testPath)
CXXFLAGS+=" -include nix/config.h"
AC_CONFIG_FILES([
Makefile
doc/Makefile
doc/manual/Makefile
src/Makefile
src/hydra-evaluator/Makefile
src/hydra-eval-jobs/Makefile
src/hydra-queue-runner/Makefile
src/sql/Makefile
src/ttf/Makefile
src/lib/Makefile
src/root/Makefile
src/script/Makefile
])
# Tests might be filtered out
AM_CONDITIONAL([CAN_DO_CHECK], [test -f "$srcdir/t/api-test.t"])
AM_COND_IF(
[CAN_DO_CHECK],
[
jobsPath="$(realpath ./t/jobs)"
AC_SUBST(jobsPath)
AC_CONFIG_FILES([
t/Makefile
t/jobs/config.nix
t/jobs/declarative/project.json
])
])
AC_CONFIG_COMMANDS([executable-scripts], [])
AC_CONFIG_HEADER([hydra-config.h])
AC_OUTPUT


@@ -1,6 +1,6 @@
# The `default.nix` in flake-compat reads `flake.nix` and `flake.lock` from `src` and
# returns an attribute set of the shape `{ defaultNix, shellNix }`
(import (fetchTarball "https://github.com/edolstra/flake-compat/archive/master.tar.gz") {
(import (fetchTarball https://github.com/edolstra/flake-compat/archive/master.tar.gz) {
src = ./.;
}).defaultNix

4
doc/Makefile.am Normal file

@@ -0,0 +1,4 @@
SUBDIRS = manual
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)

6
doc/manual/Makefile.am Normal file

@@ -0,0 +1,6 @@
MD_FILES = src/*.md
EXTRA_DIST = $(MD_FILES)
install: $(MD_FILES)
mdbook build . -d $(docdir)


@@ -1,36 +0,0 @@
srcs = files(
'src/SUMMARY.md',
'src/about.md',
'src/api.md',
'src/configuration.md',
'src/hacking.md',
'src/installation.md',
'src/introduction.md',
'src/jobs.md',
'src/monitoring/README.md',
'src/notifications.md',
'src/plugins/README.md',
'src/plugins/RunCommand.md',
'src/plugins/declarative-projects.md',
'src/projects.md',
'src/webhooks.md',
)
manual = custom_target(
'manual',
command: [
mdbook,
'build',
'@SOURCE_ROOT@/doc/manual',
'-d', meson.current_build_dir() / 'html'
],
depend_files: srcs,
output: ['html'],
build_by_default: true,
)
install_subdir(
manual.full_path(),
install_dir: get_option('datadir') / 'doc/hydra',
strip_directory: true,
)


@@ -15,18 +15,12 @@ and dependencies can be found:
$ nix-shell
```
of when flakes are enabled:
```console
$ nix develop
```
To build Hydra, you should then do:
```console
[nix-shell]$ autoreconfPhase
[nix-shell]$ configurePhase
[nix-shell]$ make -j$(nproc)
[nix-shell]$ make
```
You start a local database, the webserver, and other components with
@@ -36,8 +30,6 @@ foreman:
$ foreman start
```
The Hydra interface will be available on port 63333, with an admin user named "alice" with password "foobar"
You can run just the Hydra web server in your source tree as follows:
```console


@@ -42,7 +42,7 @@ Sets CircleCI status.
## Compress build logs
Compresses build logs after a build with bzip2 or zstd.
Compresses build logs after a build with bzip2.
### Configuration options
@@ -50,14 +50,6 @@ Compresses build logs after a build with bzip2 or zstd.
Enable log compression
- `compress_build_logs_compression`
Which compression format to use. Valid values are bzip2 (default) and zstd.
- `compress_build_logs_silent`
Whether to compress logs silently.
### Example
```xml


@@ -1,12 +1,9 @@
# Webhooks
Hydra can be notified by github or gitea with webhooks to trigger a new evaluation when a
Hydra can be notified by github's webhook to trigger a new evaluation when a
jobset has a github repo in its input.
## GitHub
To set up a webhook for a GitHub repository go to `https://github.com/<yourhandle>/<yourrepo>/settings`
and in the `Webhooks` tab click on `Add webhook`.
To set up a github webhook go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab
click on `Add webhook`.
- In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
- In `Content type` switch to `application/json`.
@@ -14,14 +11,3 @@ and in the `Webhooks` tab click on `Add webhook`.
- For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`.
Then add the hook with `Add webhook`.
## Gitea
To set up a webhook for a Gitea repository go to the settings of the repository in your Gitea instance
and in the `Webhooks` tab click on `Add Webhook` and choose `Gitea` in the drop down.
- In `Target URL` fill in `https://<your-hydra-domain>/api/push-gitea`.
- Keep HTTP method `POST`, POST Content Type `application/json` and Trigger On `Push Events`.
- Change the branch filter to match the git branch hydra builds.
Then add the hook with `Add webhook`.


@@ -1,5 +1,5 @@
#
# jobset example file. This file can be referenced as Nix expression
# jobset example file. This file canbe referenced as Nix expression
# in a jobset configuration along with inputs for nixpkgs and the
# repository containing this file.
#

114
flake.lock generated

@@ -1,70 +1,114 @@
{
"nodes": {
"nix": {
"inputs": {
"flake-compat": [],
"flake-parts": [],
"git-hooks-nix": [],
"nixfmt": [],
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-23-11": [],
"nixpkgs-regression": []
},
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1739571938,
"narHash": "sha256-NlaLAed/xei6RWpU2HIIbDjILRC4l1NIfGeyrn7ALQs=",
"owner": "NixOS",
"repo": "nix",
"rev": "ffc649d2eabdd3e678b5bcc211dd59fd06debf3e",
"lastModified": 1673956053,
"narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "ssh-ng-extensions-for-hydra",
"repo": "nix",
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"nix-eval-jobs": {
"lowdown-src": {
"flake": false,
"locked": {
"lastModified": 1739499741,
"narHash": "sha256-dNMJY6+G3PwE8lIAhwetPJdA2DxCEKRXPY/EtHmdDh4=",
"owner": "Ericson2314",
"repo": "nix-eval-jobs",
"rev": "de345eb4518d952c2d86261b270f2c31edecd3de",
"lastModified": 1633514407,
"narHash": "sha256-Dw32tiMjdK9t3ETl5fzGrutQTzh2rufgZV4A/BbxuD4=",
"owner": "kristapsdz",
"repo": "lowdown",
"rev": "d2c2b44ff6c27b936ec27358a2653caaef8f73b8",
"type": "github"
},
"original": {
"owner": "Ericson2314",
"ref": "nix-2.27",
"repo": "nix-eval-jobs",
"owner": "kristapsdz",
"repo": "lowdown",
"type": "github"
}
},
"nix": {
"inputs": {
"flake-compat": "flake-compat",
"lowdown-src": "lowdown-src",
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-regression": "nixpkgs-regression"
},
"locked": {
"lastModified": 1706208340,
"narHash": "sha256-wNyHUEIiKKVs6UXrUzhP7RSJQv0A8jckgcuylzftl8k=",
"owner": "NixOS",
"repo": "nix",
"rev": "2c4bb93ba5a97e7078896ebc36385ce172960e4e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "2.19-maintenance",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1739461644,
"narHash": "sha256-1o1qR0KYozYGRrnqytSpAhVBYLNBHX+Lv6I39zGRzKM=",
"lastModified": 1701615100,
"narHash": "sha256-7VI84NGBvlCTduw2aHLVB62NvCiZUlALLqBe5v684Aw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "97a719c9f0a07923c957cf51b20b329f9fb9d43f",
"rev": "e9f06adb793d1cca5384907b3b8a4071d5d7cb19",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.11-small",
"ref": "nixos-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-for-fileset": {
"locked": {
"lastModified": 1706098335,
"narHash": "sha256-r3dWjT8P9/Ah5m5ul4WqIWD8muj5F+/gbCdjiNVBKmU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a77ab169a83a4175169d78684ddd2e54486ac651",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-regression": {
"locked": {
"lastModified": 1643052045,
"narHash": "sha256-uGJ0VXIhWKGXxkeNnq4TvV3CIOkUJ3PAoLZ3HMzNVMw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
"type": "github"
}
},
"root": {
"inputs": {
"nix": "nix",
"nix-eval-jobs": "nix-eval-jobs",
"nixpkgs": "nixpkgs"
"nixpkgs": "nixpkgs",
"nixpkgs-for-fileset": "nixpkgs-for-fileset"
}
}
},

394
flake.nix

@@ -1,45 +1,79 @@
{
description = "A Nix-based continuous build system";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11-small";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
inputs.nix.url = "github:NixOS/nix/2.19-maintenance";
inputs.nix.inputs.nixpkgs.follows = "nixpkgs";
inputs.nix = {
url = "github:NixOS/nix/ssh-ng-extensions-for-hydra";
inputs.nixpkgs.follows = "nixpkgs";
# TODO get rid of this once https://github.com/NixOS/nix/pull/9546 is
# mered and we upgrade or Nix, so the main `nixpkgs` input is at least
# 23.11 and has `lib.fileset`.
inputs.nixpkgs-for-fileset.url = "github:NixOS/nixpkgs/nixos-23.11";
# hide nix dev tooling from our lock file
inputs.flake-parts.follows = "";
inputs.git-hooks-nix.follows = "";
inputs.nixpkgs-regression.follows = "";
inputs.nixpkgs-23-11.follows = "";
inputs.flake-compat.follows = "";
inputs.nixfmt.follows = "";
};
inputs.nix-eval-jobs = {
url = "github:Ericson2314/nix-eval-jobs/nix-2.27";
# We want to control the deps precisely
flake = false;
};
outputs = { self, nixpkgs, nix, nix-eval-jobs, ... }:
outputs = { self, nixpkgs, nix, nixpkgs-for-fileset }:
let
systems = [ "x86_64-linux" "aarch64-linux" ];
forEachSystem = nixpkgs.lib.genAttrs systems;
overlayList = [ self.overlays.default nix.overlays.default ];
pkgsBySystem = forEachSystem (system: import nixpkgs {
inherit system;
overlays = overlayList;
});
# NixOS configuration used for VM tests.
hydraServer =
{ config, pkgs, ... }:
{
imports = [ self.nixosModules.hydraTest ];
virtualisation.memorySize = 1024;
virtualisation.writableStore = true;
environment.systemPackages = [ pkgs.perlPackages.LWP pkgs.perlPackages.JSON ];
nix = {
# Without this nix tries to fetch packages from the default
# cache.nixos.org which is not reachable from this sandboxed NixOS test.
binaryCaches = [ ];
};
};
in
rec {
# A Nixpkgs overlay that provides a 'hydra' package.
overlays.default = final: prev: {
nix-eval-jobs = final.callPackage nix-eval-jobs {};
# Add LDAP dependencies that aren't currently found within nixpkgs.
perlPackages = prev.perlPackages // {
PrometheusTiny = final.perlPackages.buildPerlPackage {
pname = "Prometheus-Tiny";
version = "0.007";
src = final.fetchurl {
url = "mirror://cpan/authors/id/R/RO/ROBN/Prometheus-Tiny-0.007.tar.gz";
sha256 = "0ef8b226a2025cdde4df80129dd319aa29e884e653c17dc96f4823d985c028ec";
};
buildInputs = with final.perlPackages; [ HTTPMessage Plack TestException ];
meta = {
homepage = "https://github.com/robn/Prometheus-Tiny";
description = "A tiny Prometheus client";
license = with final.lib.licenses; [ artistic1 gpl1Plus ];
};
};
};
hydra = final.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit (nixpkgs-for-fileset.lib) fileset;
rawSrc = self;
nix-perl-bindings = final.nixComponents.nix-perl-bindings;
};
};
hydraJobs = {
build = forEachSystem (system: packages.${system}.hydra);
buildNoTests = forEachSystem (system:
@@ -48,22 +82,297 @@
})
);
manual = forEachSystem (system: let
pkgs = nixpkgs.legacyPackages.${system};
hydra = self.packages.${pkgs.hostPlatform.system}.hydra;
in
pkgs.runCommand "hydra-manual-${hydra.version}" { }
manual = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
pkgs.runCommand "hydra-manual-${pkgs.hydra.version}" { }
''
mkdir -p $out/share
cp -prvd ${hydra.doc}/share/doc $out/share/
cp -prvd ${pkgs.hydra}/share/doc $out/share/
mkdir $out/nix-support
echo "doc manual $out/share/doc/hydra" >> $out/nix-support/hydra-build-products
'');
tests = import ./nixos-tests.nix {
inherit forEachSystem nixpkgs nixosModules;
};
tests.install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
''
machine.wait_for_job("hydra-init")
machine.wait_for_job("hydra-server")
machine.wait_for_job("hydra-evaluator")
machine.wait_for_job("hydra-queue-runner")
machine.wait_for_open_port(3000)
machine.succeed("curl --fail http://localhost:3000/")
'';
});
tests.notifications = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-notifications";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<influxdb>
url = http://127.0.0.1:8086
db = hydra
</influxdb>
'';
services.influxdb.enable = true;
};
testScript = ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
machine.succeed(
"""
su - hydra -c "hydra-create-user root --email-address 'alice@example.org' --password foobar --role admin"
mkdir /run/jobset
chmod 755 /run/jobset
cp ${./t/jobs/api-test.nix} /run/jobset/default.nix
chmod 644 /run/jobset/default.nix
chown -R hydra /run/jobset
"""
)
# Wait until InfluxDB can receive web requests
machine.wait_for_job("influxdb")
machine.wait_for_open_port(8086)
# Create an InfluxDB database where hydra will write to
machine.succeed(
"curl -XPOST 'http://127.0.0.1:8086/query' "
+ "--data-urlencode 'q=CREATE DATABASE hydra'"
)
# Wait until hydra-server can receive HTTP requests
machine.wait_for_job("hydra-server")
machine.wait_for_open_port(3000)
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${pkgs.hydra.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
# the InfluxDBNotification plugin uploaded its notification to InfluxDB
machine.wait_until_succeeds(
"curl -s -H 'Accept: application/csv' "
+ "-G 'http://127.0.0.1:8086/query?db=hydra' "
+ "--data-urlencode 'q=SELECT * FROM hydra_build_status' | grep success"
)
'';
});
tests.gitea = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<gitea_authorization>
root=d7f16a3412e01a43a414535b16007c6931d3a9c7
</gitea_authorization>
'';
nix = {
distributedBuilds = true;
buildMachines = [{
hostName = "localhost";
systems = [ system ];
}];
binaryCaches = [ ];
};
services.gitea = {
enable = true;
database.type = "postgres";
disableRegistration = true;
httpPort = 3001;
};
services.openssh.enable = true;
environment.systemPackages = with pkgs; [ gitea git jq gawk ];
networking.firewall.allowedTCPPorts = [ 3000 ];
};
skipLint = true;
testScript =
let
scripts.mktoken = pkgs.writeText "token.sql" ''
INSERT INTO access_token (id, uid, name, created_unix, updated_unix, token_hash, token_salt, token_last_eight) VALUES (1, 1, 'hydra', 1617107360, 1617107360, 'a930f319ca362d7b49a4040ac0af74521c3a3c3303a86f327b01994430672d33b6ec53e4ea774253208686c712495e12a486', 'XRjWE9YW0g', '31d3a9c7');
'';
scripts.git-setup = pkgs.writeShellScript "setup.sh" ''
set -x
mkdir -p /tmp/repo $HOME/.ssh
cat ${snakeoilKeypair.privkey} > $HOME/.ssh/privk
chmod 0400 $HOME/.ssh/privk
git -C /tmp/repo init
cp ${smallDrv} /tmp/repo/jobset.nix
git -C /tmp/repo add .
git config --global user.email test@localhost
git config --global user.name test
git -C /tmp/repo commit -m 'Initial import'
git -C /tmp/repo remote add origin gitea@machine:root/repo
GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no' \
git -C /tmp/repo push origin master
git -C /tmp/repo log >&2
'';
scripts.hydra-setup = pkgs.writeShellScript "hydra.sh" ''
set -x
su -l hydra -c "hydra-create-user root --email-address \
'alice@example.org' --password foobar --role admin"
URL=http://localhost:3000
USERNAME="root"
PASSWORD="foobar"
PROJECT_NAME="trivial"
JOBSET_NAME="trivial"
mycurl() {
curl --referer $URL -H "Accept: application/json" \
-H "Content-Type: application/json" $@
}
cat >data.json <<EOF
{ "username": "$USERNAME", "password": "$PASSWORD" }
EOF
mycurl -X POST -d '@data.json' $URL/login -c hydra-cookie.txt
cat >data.json <<EOF
{
"displayname":"Trivial",
"enabled":"1",
"visible":"1"
}
EOF
mycurl --silent -X PUT $URL/project/$PROJECT_NAME \
-d @data.json -b hydra-cookie.txt
cat >data.json <<EOF
{
"description": "Trivial",
"checkinterval": "60",
"enabled": "1",
"visible": "1",
"keepnr": "1",
"enableemail": true,
"emailoverride": "hydra@localhost",
"type": 0,
"nixexprinput": "git",
"nixexprpath": "jobset.nix",
"inputs": {
"git": {"value": "http://localhost:3001/root/repo.git", "type": "git"},
"gitea_repo_name": {"value": "repo", "type": "string"},
"gitea_repo_owner": {"value": "root", "type": "string"},
"gitea_status_repo": {"value": "git", "type": "string"},
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"}
}
}
EOF
mycurl --silent -X PUT $URL/jobset/$PROJECT_NAME/$JOBSET_NAME \
-d @data.json -b hydra-cookie.txt
'';
api_token = "d7f16a3412e01a43a414535b16007c6931d3a9c7";
snakeoilKeypair = {
privkey = pkgs.writeText "privkey.snakeoil" ''
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIHQf/khLvYrQ8IOika5yqtWvI0oquHlpRLTZiJy5dRJmoAoGCCqGSM49
AwEHoUQDQgAEKF0DYGbBwbj06tA3fd/+yP44cvmwmHBWXZCKbS+RQlAKvLXMWkpN
r1lwMyJZoSGgBHoUahoYjTh9/sJL7XLJtA==
-----END EC PRIVATE KEY-----
'';
pubkey = pkgs.lib.concatStrings [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHA"
"yNTYAAABBBChdA2BmwcG49OrQN33f/sj+OHL5sJhwVl2Qim0vkUJQCry1zFpKTa"
"9ZcDMiWaEhoAR6FGoaGI04ff7CS+1yybQ= sakeoil"
];
};
smallDrv = pkgs.writeText "jobset.nix" ''
{ trivial = builtins.derivation {
name = "trivial";
system = "${system}";
builder = "/bin/sh";
allowSubstitutes = false;
preferLocalBuild = true;
args = ["-c" "echo success > $out; exit 0"];
};
}
'';
in
''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.wait_for_open_port(3000)
machine.wait_for_open_port(3001)
machine.succeed(
"su -l gitea -c 'GITEA_WORK_DIR=/var/lib/gitea gitea admin user create "
+ "--username root --password root --email test@localhost'"
)
machine.succeed("su -l postgres -c 'psql gitea < ${scripts.mktoken}'")
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/repos "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"auto_init":false, "description":"string", "license":"mit", "name":"repo", "private":false}\'''
)
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/keys "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"key":"${snakeoilKeypair.pubkey}","read_only":true,"title":"SSH"}\'''
)
machine.succeed(
"${scripts.git-setup}"
)
machine.succeed(
"${scripts.hydra-setup}"
)
machine.wait_until_succeeds(
'curl -Lf -s http://localhost:3000/build/1 -H "Accept: application/json" '
+ '| jq .buildstatus | xargs test 0 -eq'
)
data = machine.succeed(
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show | head -n1 | awk "{print \\$2}")" '
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
)
response = json.loads(data)
assert len(response) == 2, "Expected exactly two status updates for latest commit!"
assert response[0]['status'] == "success", "Expected latest status to be success!"
assert response[1]['status'] == "pending", "Expected first status to be pending!"
machine.shutdown()
'';
});
tests.validate-openapi = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
pkgs.runCommand "validate-openapi"
{ buildInputs = [ pkgs.openapi-generator-cli ]; }
''
openapi-generator-cli validate -i ${./hydra-api.yaml}
touch $out
'');
container = nixosConfigurations.container.config.system.build.toplevel;
};
@@ -75,33 +384,18 @@
});
packages = forEachSystem (system: {
nix-eval-jobs = nixpkgs.legacyPackages.${system}.callPackage nix-eval-jobs {
nix = nix.packages.${system}.nix;
};
hydra = nixpkgs.legacyPackages.${system}.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit (self.packages.${system}) nix-eval-jobs;
rawSrc = self;
inherit (nix.packages.${system})
nix-util
nix-store
nix-main
nix-cli
;
nix-perl-bindings = nix.hydraJobs.perlBindings.${system};
};
default = self.packages.${system}.hydra;
hydra = pkgsBySystem.${system}.hydra;
default = pkgsBySystem.${system}.hydra;
});
nixosModules = import ./nixos-modules {
inherit self;
overlays = overlayList;
};
nixosConfigurations.container = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules =
[
self.nixosModules.hydra
self.nixosModules.hydraTest
self.nixosModules.hydraProxy
{


@@ -1,40 +0,0 @@
project('hydra', 'cpp',
version: files('version.txt'),
license: 'GPL-3.0',
default_options: [
'debug=true',
'optimization=2',
'cpp_std=c++20',
],
)
nix_util_dep = dependency('nix-util', required: true)
nix_store_dep = dependency('nix-store', required: true)
nix_main_dep = dependency('nix-main', required: true)
# Nix need extra flags not provided in its pkg-config files.
nix_dep = declare_dependency(
dependencies: [
nix_util_dep,
nix_store_dep,
nix_main_dep,
],
compile_args: [
'-include', 'nix/config-util.hh',
'-include', 'nix/config-store.hh',
'-include', 'nix/config-main.hh',
],
)
pqxx_dep = dependency('libpqxx', required: true)
prom_cpp_core_dep = dependency('prometheus-cpp-core', required: true)
prom_cpp_pull_dep = dependency('prometheus-cpp-pull', required: true)
mdbook = find_program('mdbook', native: true)
perl = find_program('perl', native: true)
subdir('doc/manual')
subdir('nixos-modules')
subdir('src')
subdir('t')


@@ -1,13 +1,14 @@
{ self }:
{ overlays }:
{
hydra = { pkgs, lib,... }: {
_file = ./default.nix;
rec {
hydra = {
imports = [ ./hydra.nix ];
services.hydra-dev.package = lib.mkDefault self.packages.${pkgs.hostPlatform.system}.hydra;
nixpkgs = { inherit overlays; };
};
hydraTest = { pkgs, ... }: {
imports = [ hydra ];
services.hydra-dev.enable = true;
services.hydra-dev.hydraURL = "http://hydra.example.org";
services.hydra-dev.notificationSender = "admin@hydra.example.org";
@@ -15,7 +16,7 @@
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_12;
services.postgresql.package = pkgs.postgresql_11;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"


@@ -68,6 +68,8 @@ in
package = mkOption {
type = types.path;
default = pkgs.hydra;
defaultText = literalExpression "pkgs.hydra";
description = "The Hydra package.";
};
@@ -231,7 +233,7 @@ in
gc-keep-outputs = true;
gc-keep-derivations = true;
};
services.hydra-dev.extraConfig =
''
using_frontend_proxy = 1
@@ -338,7 +340,6 @@ in
systemd.services.hydra-queue-runner =
{ wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ];
wants = [ "network-online.target" ];
after = [ "hydra-init.service" "network.target" "network-online.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
@@ -407,7 +408,6 @@ in
requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ];
restartTriggers = [ hydraConf ];
path = [ pkgs.zstd ];
environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-notify";
@@ -458,17 +458,10 @@ in
# logs automatically after a step finishes, but this doesn't work
# if the queue runner is stopped prematurely.
systemd.services.hydra-compress-logs =
{ path = [ pkgs.bzip2 pkgs.zstd ];
{ path = [ pkgs.bzip2 ];
script =
''
set -eou pipefail
compression=$(sed -nr 's/compress_build_logs_compression = ()/\1/p' ${baseDir}/hydra.conf)
if [[ $compression == "" ]]; then
compression="bzip2"
elif [[ $compression == zstd ]]; then
compression="zstd --rm"
fi
find ${baseDir}/build-logs -type f -name "*.drv" -mtime +3 -size +0c | xargs -r "$compression" --force --quiet
find ${baseDir}/build-logs -type f -name "*.drv" -mtime +3 -size +0c | xargs -r bzip2 -v -f
'';
startAt = "Sun 01:45";
};


@@ -1,4 +0,0 @@
install_data('hydra.nix',
install_dir: get_option('datadir') / 'nix',
rename: ['hydra-module.nix'],
)


@@ -1,307 +0,0 @@
{ forEachSystem, nixpkgs, nixosModules }:
let
# NixOS configuration used for VM tests.
hydraServer =
{ pkgs, ... }:
{
imports = [
nixosModules.hydra
nixosModules.hydraTest
];
virtualisation.memorySize = 1024;
virtualisation.writableStore = true;
environment.systemPackages = [ pkgs.perlPackages.LWP pkgs.perlPackages.JSON ];
nix = {
# Without this nix tries to fetch packages from the default
# cache.nixos.org which is not reachable from this sandboxed NixOS test.
settings.substituters = [ ];
};
};
in
{
install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
''
machine.wait_for_job("hydra-init")
machine.wait_for_job("hydra-server")
machine.wait_for_job("hydra-evaluator")
machine.wait_for_job("hydra-queue-runner")
machine.wait_for_open_port(3000)
machine.succeed("curl --fail http://localhost:3000/")
'';
});
notifications = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-notifications";
nodes.machine = {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<influxdb>
url = http://127.0.0.1:8086
db = hydra
</influxdb>
'';
services.influxdb.enable = true;
};
testScript = ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
machine.succeed(
"""
su - hydra -c "hydra-create-user root --email-address 'alice@example.org' --password foobar --role admin"
mkdir /run/jobset
chmod 755 /run/jobset
cp ${./t/jobs/api-test.nix} /run/jobset/default.nix
chmod 644 /run/jobset/default.nix
chown -R hydra /run/jobset
"""
)
# Wait until InfluxDB can receive web requests
machine.wait_for_job("influxdb")
machine.wait_for_open_port(8086)
# Create an InfluxDB database where hydra will write to
machine.succeed(
"curl -XPOST 'http://127.0.0.1:8086/query' "
+ "--data-urlencode 'q=CREATE DATABASE hydra'"
)
# Wait until hydra-server can receive HTTP requests
machine.wait_for_job("hydra-server")
machine.wait_for_open_port(3000)
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${config.services.hydra-dev.package.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
# the InfluxDBNotification plugin uploaded its notification to InfluxDB
machine.wait_until_succeeds(
"curl -s -H 'Accept: application/csv' "
+ "-G 'http://127.0.0.1:8086/query?db=hydra' "
+ "--data-urlencode 'q=SELECT * FROM hydra_build_status' | grep success"
)
'';
});
gitea = forEachSystem (system:
let pkgs = nixpkgs.legacyPackages.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<gitea_authorization>
root=d7f16a3412e01a43a414535b16007c6931d3a9c7
</gitea_authorization>
'';
nixpkgs.config.permittedInsecurePackages = [ "gitea-1.19.4" ];
nix = {
settings.substituters = [ ];
};
services.gitea = {
enable = true;
database.type = "postgres";
settings = {
service.DISABLE_REGISTRATION = true;
server.HTTP_PORT = 3001;
};
};
services.openssh.enable = true;
environment.systemPackages = with pkgs; [ gitea git jq gawk ];
networking.firewall.allowedTCPPorts = [ 3000 ];
};
skipLint = true;
testScript =
let
scripts.mktoken = pkgs.writeText "token.sql" ''
INSERT INTO access_token (id, uid, name, created_unix, updated_unix, token_hash, token_salt, token_last_eight, scope) VALUES (1, 1, 'hydra', 1617107360, 1617107360, 'a930f319ca362d7b49a4040ac0af74521c3a3c3303a86f327b01994430672d33b6ec53e4ea774253208686c712495e12a486', 'XRjWE9YW0g', '31d3a9c7', 'all');
'';
scripts.git-setup = pkgs.writeShellScript "setup.sh" ''
set -x
mkdir -p /tmp/repo $HOME/.ssh
cat ${snakeoilKeypair.privkey} > $HOME/.ssh/privk
chmod 0400 $HOME/.ssh/privk
git -C /tmp/repo init
cp ${smallDrv} /tmp/repo/jobset.nix
git -C /tmp/repo add .
git config --global user.email test@localhost
git config --global user.name test
git -C /tmp/repo commit -m 'Initial import'
git -C /tmp/repo remote add origin gitea@machine:root/repo
GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no' \
git -C /tmp/repo push origin master
git -C /tmp/repo log >&2
'';
scripts.hydra-setup = pkgs.writeShellScript "hydra.sh" ''
set -x
su -l hydra -c "hydra-create-user root --email-address \
'alice@example.org' --password foobar --role admin"
URL=http://localhost:3000
USERNAME="root"
PASSWORD="foobar"
PROJECT_NAME="trivial"
JOBSET_NAME="trivial"
mycurl() {
curl --referer $URL -H "Accept: application/json" \
-H "Content-Type: application/json" $@
}
cat >data.json <<EOF
{ "username": "$USERNAME", "password": "$PASSWORD" }
EOF
mycurl -X POST -d '@data.json' $URL/login -c hydra-cookie.txt
cat >data.json <<EOF
{
"displayname":"Trivial",
"enabled":"1",
"visible":"1"
}
EOF
mycurl --silent -X PUT $URL/project/$PROJECT_NAME \
-d @data.json -b hydra-cookie.txt
cat >data.json <<EOF
{
"description": "Trivial",
"checkinterval": "60",
"enabled": "1",
"visible": "1",
"keepnr": "1",
"enableemail": true,
"emailoverride": "hydra@localhost",
"type": 0,
"nixexprinput": "git",
"nixexprpath": "jobset.nix",
"inputs": {
"git": {"value": "http://localhost:3001/root/repo.git", "type": "git"},
"gitea_repo_name": {"value": "repo", "type": "string"},
"gitea_repo_owner": {"value": "root", "type": "string"},
"gitea_status_repo": {"value": "git", "type": "string"},
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"}
}
}
EOF
mycurl --silent -X PUT $URL/jobset/$PROJECT_NAME/$JOBSET_NAME \
-d @data.json -b hydra-cookie.txt
'';
api_token = "d7f16a3412e01a43a414535b16007c6931d3a9c7";
snakeoilKeypair = {
privkey = pkgs.writeText "privkey.snakeoil" ''
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIHQf/khLvYrQ8IOika5yqtWvI0oquHlpRLTZiJy5dRJmoAoGCCqGSM49
AwEHoUQDQgAEKF0DYGbBwbj06tA3fd/+yP44cvmwmHBWXZCKbS+RQlAKvLXMWkpN
r1lwMyJZoSGgBHoUahoYjTh9/sJL7XLJtA==
-----END EC PRIVATE KEY-----
'';
pubkey = pkgs.lib.concatStrings [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHA"
"yNTYAAABBBChdA2BmwcG49OrQN33f/sj+OHL5sJhwVl2Qim0vkUJQCry1zFpKTa"
"9ZcDMiWaEhoAR6FGoaGI04ff7CS+1yybQ= sakeoil"
];
};
smallDrv = pkgs.writeText "jobset.nix" ''
{ trivial = builtins.derivation {
name = "trivial";
system = "${system}";
builder = "/bin/sh";
allowSubstitutes = false;
preferLocalBuild = true;
args = ["-c" "echo success > $out; exit 0"];
};
}
'';
in
''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.wait_for_open_port(3000)
machine.wait_for_open_port(3001)
machine.succeed(
"su -l gitea -c 'GITEA_WORK_DIR=/var/lib/gitea gitea admin user create "
+ "--username root --password root --email test@localhost'"
)
machine.succeed("su -l postgres -c 'psql gitea < ${scripts.mktoken}'")
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/repos "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"auto_init":false, "description":"string", "license":"mit", "name":"repo", "private":false}\'''
)
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/keys "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"key":"${snakeoilKeypair.pubkey}","read_only":true,"title":"SSH"}\'''
)
machine.succeed(
"${scripts.git-setup}"
)
machine.succeed(
"${scripts.hydra-setup}"
)
machine.wait_until_succeeds(
'curl -Lf -s http://localhost:3000/build/1 -H "Accept: application/json" '
+ '| jq .buildstatus | xargs test 0 -eq'
)
data = machine.succeed(
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show | head -n1 | awk "{print \\$2}")" '
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
)
response = json.loads(data)
assert len(response) == 2, "Expected exactly three status updates for latest commit (queued, finished)!"
assert response[0]['status'] == "success", "Expected finished status to be success!"
assert response[1]['status'] == "pending", "Expected queued status to be pending!"
machine.shutdown()
'';
});
validate-openapi = forEachSystem (system:
let pkgs = nixpkgs.legacyPackages.${system}; in
pkgs.runCommand "validate-openapi"
{ buildInputs = [ pkgs.openapi-generator-cli ]; }
''
openapi-generator-cli validate -i ${./hydra-api.yaml}
touch $out
'');
}


@@ -8,16 +8,11 @@
, perlPackages
, nix-util
, nix-store
, nix-main
, nix-cli
, nix-perl-bindings
, nix
, git
, makeWrapper
, meson
, ninja
, autoreconfHook
, nukeReferences
, pkg-config
, mdbook
@@ -53,7 +48,6 @@
, xz
, gnutar
, gnused
, nix-eval-jobs
, rpm
, dpkg
@@ -65,7 +59,7 @@ let
name = "hydra-perl-deps";
paths = lib.closePropagation
([
nix-perl-bindings
nix.perl-bindings
git
] ++ (with perlPackages; [
AuthenSASL
@@ -93,10 +87,10 @@ let
DateTime
DBDPg
DBDSQLite
DBIxClassHelpers
DigestSHA1
EmailMIME
EmailSender
FileCopyRecursive
FileLibMagic
FileSlurper
FileWhich
@@ -144,28 +138,32 @@ stdenv.mkDerivation (finalAttrs: {
src = fileset.toSource {
root = ./.;
fileset = fileset.unions ([
./doc
./meson.build
./nixos-modules
./src
./t
./version.txt
./configure.ac
./Makefile.am
./src
./doc
./nixos-modules/hydra.nix
# These are always needed to appease Automake
./t/Makefile.am
./t/jobs/config.nix.in
./t/jobs/declarative/project.json.in
] ++ lib.optionals finalAttrs.doCheck [
./t
./.perlcriticrc
./.yath.rc
]);
};
outputs = [ "out" "doc" ];
strictDeps = true;
nativeBuildInputs = [
makeWrapper
meson
ninja
autoreconfHook
nukeReferences
pkg-config
mdbook
nix-cli
nix
perlDeps
perl
unzip
@@ -175,9 +173,7 @@ stdenv.mkDerivation (finalAttrs: {
libpqxx
openssl
libxslt
nix-util
nix-store
nix-main
nix
perlDeps
perl
boost
@@ -196,7 +192,6 @@ stdenv.mkDerivation (finalAttrs: {
openldap
postgresql_13
pixz
nix-eval-jobs
];
checkInputs = [
@@ -204,14 +199,13 @@ stdenv.mkDerivation (finalAttrs: {
glibcLocales
libressl.nc
python3
nix-cli
];
hydraPath = lib.makeBinPath (
[
subversion
openssh
nix-cli
nix
coreutils
findutils
pixz
@@ -226,22 +220,15 @@ stdenv.mkDerivation (finalAttrs: {
darcs
gnused
breezy
nix-eval-jobs
] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ]
);
OPENLDAP_ROOT = openldap;
mesonBuildType = "release";
postPatch = ''
patchShebangs .
'';
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-queue-runner:$PATH
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
@@ -251,11 +238,14 @@ stdenv.mkDerivation (finalAttrs: {
popd >/dev/null
'';
NIX_LDFLAGS = [ "-lpthread" ];
enableParallelBuilding = true;
doCheck = true;
mesonCheckFlags = [ "--verbose" ];
preCheck = ''
patchShebangs .
export LOGNAME=''${LOGNAME:-foo}
# set $HOME for bzr so it can create its trace file
export HOME=$(mktemp -d)
@@ -272,13 +262,12 @@ stdenv.mkDerivation (finalAttrs: {
--prefix PATH ':' $out/bin:$hydraPath \
--set HYDRA_RELEASE ${version} \
--set HYDRA_HOME $out/libexec/hydra \
--set NIX_RELEASE ${nix-cli.name or "unknown"} \
--set NIX_EVAL_JOBS_RELEASE ${nix-eval-jobs.name or "unknown"}
--set NIX_RELEASE ${nix.name or "unknown"}
done
'';
dontStrip = true;
meta.description = "Build of Hydra on ${stdenv.system}";
passthru = { inherit perlDeps; };
passthru = { inherit perlDeps nix; };
})

3
src/Makefile.am Normal file

@@ -0,0 +1,3 @@
SUBDIRS = hydra-evaluator hydra-eval-jobs hydra-queue-runner sql script lib root ttf
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)


@@ -0,0 +1,5 @@
bin_PROGRAMS = hydra-eval-jobs
hydra_eval_jobs_SOURCES = hydra-eval-jobs.cc
hydra_eval_jobs_LDADD = $(NIX_LIBS) -lnixcmd
hydra_eval_jobs_CXXFLAGS = $(NIX_CFLAGS) -I ../libhydra


@@ -0,0 +1,579 @@
#include <iostream>
#include <thread>
#include <optional>
#include <unordered_map>
#include "shared.hh"
#include "store-api.hh"
#include "eval.hh"
#include "eval-inline.hh"
#include "eval-settings.hh"
#include "signals.hh"
#include "terminal.hh"
#include "util.hh"
#include "get-drvs.hh"
#include "globals.hh"
#include "common-eval-args.hh"
#include "flake/flakeref.hh"
#include "flake/flake.hh"
#include "attr-path.hh"
#include "derivations.hh"
#include "local-fs-store.hh"
#include "hydra-config.hh"
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>
#include <nlohmann/json.hpp>
void check_pid_status_nonblocking(pid_t check_pid)
{
// Only check 'initialized' and known PID's
if (check_pid <= 0) { return; }
int wstatus = 0;
pid_t pid = waitpid(check_pid, &wstatus, WNOHANG);
// -1 = failure, WNOHANG: 0 = no change
if (pid <= 0) { return; }
std::cerr << "child process (" << pid << ") ";
if (WIFEXITED(wstatus)) {
std::cerr << "exited with status=" << WEXITSTATUS(wstatus) << std::endl;
} else if (WIFSIGNALED(wstatus)) {
std::cerr << "killed by signal=" << WTERMSIG(wstatus) << std::endl;
} else if (WIFSTOPPED(wstatus)) {
std::cerr << "stopped by signal=" << WSTOPSIG(wstatus) << std::endl;
} else if (WIFCONTINUED(wstatus)) {
std::cerr << "continued" << std::endl;
}
}
using namespace nix;
static Path gcRootsDir;
static size_t maxMemorySize;
struct MyArgs : MixEvalArgs, MixCommonArgs, RootArgs
{
Path releaseExpr;
bool flake = false;
bool dryRun = false;
MyArgs() : MixCommonArgs("hydra-eval-jobs")
{
addFlag({
.longName = "gc-roots-dir",
.description = "garbage collector roots directory",
.labels = {"path"},
.handler = {&gcRootsDir}
});
addFlag({
.longName = "dry-run",
.description = "don't create store derivations",
.handler = {&dryRun, true}
});
addFlag({
.longName = "flake",
.description = "build a flake",
.handler = {&flake, true}
});
expectArg("expr", &releaseExpr);
}
};
static MyArgs myArgs;
static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std::string & name, const std::string & subAttribute)
{
Strings res;
std::function<void(Value & v)> rec;
rec = [&](Value & v) {
state.forceValue(v, noPos);
if (v.type() == nString)
res.emplace_back(v.string_view());
else if (v.isList())
for (unsigned int n = 0; n < v.listSize(); ++n)
rec(*v.listElems()[n]);
else if (v.type() == nAttrs) {
auto a = v.attrs->find(state.symbols.create(subAttribute));
if (a != v.attrs->end())
res.push_back(std::string(state.forceString(*a->value, a->pos, "while evaluating meta attributes")));
}
};
Value * v = drv.queryMeta(name);
if (v) rec(*v);
return concatStringsSep(", ", res);
}
static void worker(
EvalState & state,
Bindings & autoArgs,
AutoCloseFD & to,
AutoCloseFD & from)
{
Value vTop;
if (myArgs.flake) {
using namespace flake;
auto flakeRef = parseFlakeRef(myArgs.releaseExpr);
auto vFlake = state.allocValue();
auto lockedFlake = lockFlake(state, flakeRef,
LockFlags {
.updateLockFile = false,
.useRegistries = false,
.allowUnlocked = false,
});
callFlake(state, lockedFlake, *vFlake);
auto vOutputs = vFlake->attrs->get(state.symbols.create("outputs"))->value;
state.forceValue(*vOutputs, noPos);
auto aHydraJobs = vOutputs->attrs->get(state.symbols.create("hydraJobs"));
if (!aHydraJobs)
aHydraJobs = vOutputs->attrs->get(state.symbols.create("checks"));
if (!aHydraJobs)
throw Error("flake '%s' does not provide any Hydra jobs or checks", flakeRef);
vTop = *aHydraJobs->value;
} else {
state.evalFile(lookupFileArg(state, myArgs.releaseExpr), vTop);
}
auto vRoot = state.allocValue();
state.autoCallFunction(autoArgs, vTop, *vRoot);
while (true) {
/* Wait for the master to send us a job name. */
writeLine(to.get(), "next");
auto s = readLine(from.get());
if (s == "exit") break;
if (!hasPrefix(s, "do ")) abort();
std::string attrPath(s, 3);
debug("worker process %d at '%s'", getpid(), attrPath);
/* Evaluate it and send info back to the master. */
nlohmann::json reply;
try {
auto vTmp = findAlongAttrPath(state, attrPath, autoArgs, *vRoot).first;
auto v = state.allocValue();
state.autoCallFunction(autoArgs, *vTmp, *v);
if (auto drv = getDerivation(state, *v, false)) {
// CA derivations do not have static output paths, so we
// have to defensively not query output paths in case we
// encounter one.
DrvInfo::Outputs outputs = drv->queryOutputs(
!experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
if (drv->querySystem() == "unknown")
throw EvalError("derivation must have a 'system' attribute");
auto drvPath = state.store->printStorePath(drv->requireDrvPath());
nlohmann::json job;
job["nixName"] = drv->queryName();
job["system"] =drv->querySystem();
job["drvPath"] = drvPath;
job["description"] = drv->queryMetaString("description");
job["license"] = queryMetaStrings(state, *drv, "license", "shortName");
job["homepage"] = drv->queryMetaString("homepage");
job["maintainers"] = queryMetaStrings(state, *drv, "maintainers", "email");
job["schedulingPriority"] = drv->queryMetaInt("schedulingPriority", 100);
job["timeout"] = drv->queryMetaInt("timeout", 36000);
job["maxSilent"] = drv->queryMetaInt("maxSilent", 7200);
job["isChannel"] = drv->queryMetaBool("isHydraChannel", false);
/* If this is an aggregate, then get its constituents. */
auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
if (a && state.forceBool(*a->value, a->pos, "while evaluating the `_hydraAggregate` attribute")) {
auto a = v->attrs->get(state.symbols.create("constituents"));
if (!a)
throw EvalError("derivation must have a constituents attribute");
NixStringContext context;
state.coerceToString(a->pos, *a->value, context, "while evaluating the `constituents` attribute", true, false);
for (auto & c : context)
std::visit(overloaded {
[&](const NixStringContextElem::Built & b) {
job["constituents"].push_back(b.drvPath->to_string(*state.store));
},
[&](const NixStringContextElem::Opaque & o) {
},
[&](const NixStringContextElem::DrvDeep & d) {
},
}, c.raw);
state.forceList(*a->value, a->pos, "while evaluating the `constituents` attribute");
for (unsigned int n = 0; n < a->value->listSize(); ++n) {
auto v = a->value->listElems()[n];
state.forceValue(*v, noPos);
if (v->type() == nString)
job["namedConstituents"].push_back(v->string_view());
}
}
/* Register the derivation as a GC root. !!! This
registers roots for jobs that we may have already
done. */
auto localStore = state.store.dynamic_pointer_cast<LocalFSStore>();
if (gcRootsDir != "" && localStore) {
Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
if (!pathExists(root))
localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
}
nlohmann::json out;
for (auto & [outputName, optOutputPath] : outputs) {
if (optOutputPath) {
out[outputName] = state.store->printStorePath(*optOutputPath);
} else {
// See the `queryOutputs` call above; we should
// not encounter missing output paths otherwise.
assert(experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
out[outputName] = nullptr;
}
}
job["outputs"] = std::move(out);
reply["job"] = std::move(job);
}
else if (v->type() == nAttrs) {
auto attrs = nlohmann::json::array();
StringSet ss;
for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
std::string name(state.symbols[i->name]);
if (name.find(' ') != std::string::npos) {
printError("skipping job with illegal name '%s'", name);
continue;
}
attrs.push_back(name);
}
reply["attrs"] = std::move(attrs);
}
else if (v->type() == nNull)
;
else throw TypeError("attribute '%s' is %s, which is not supported", attrPath, showType(*v));
} catch (EvalError & e) {
auto msg = e.msg();
// Transmit the error we got from the evaluation
// in the JSON output.
reply["error"] = filterANSIEscapes(msg, true);
// Also print it to the stderr log; this is
// what's shown in the Hydra UI.
printError(msg);
}
writeLine(to.get(), reply.dump());
/* If our RSS exceeds the maximum, exit. The master will
start a new process. */
struct rusage r;
getrusage(RUSAGE_SELF, &r);
if ((size_t) r.ru_maxrss > maxMemorySize * 1024) break;
}
writeLine(to.get(), "restart");
}
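/* Sketch of the line protocol implemented above (worker side here, master
   side in main() below):
     worker -> master: "next"            ready for work
     master -> worker: "do <attrPath>"   evaluate one attribute
     worker -> master: <JSON reply>      {"job":...}, {"attrs":[...]} or {"error":...}
     worker -> master: "restart"         RSS limit hit; master forks a new worker
     master -> worker: "exit"            queue drained, worker terminates */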
int main(int argc, char * * argv)
{
/* Prevent undeclared dependencies in the evaluation via
$NIX_PATH. */
unsetenv("NIX_PATH");
return handleExceptions(argv[0], [&]() {
auto config = std::make_unique<HydraConfig>();
auto nrWorkers = config->getIntOption("evaluator_workers", 1);
maxMemorySize = config->getIntOption("evaluator_max_memory_size", 4096);
initNix();
initGC();
myArgs.parseCmdline(argvToStrings(argc, argv));
auto pureEval = config->getBoolOption("evaluator_pure_eval", myArgs.flake);
/* FIXME: The build hook in conjunction with import-from-derivation is causing "unexpected EOF" during eval */
settings.builders = "";
/* Prevent access to paths outside of the Nix search path and
to the environment. */
evalSettings.restrictEval = true;
/* When building a flake, use pure evaluation (no access to
   'getEnv', 'currentSystem', etc.). */
evalSettings.pureEval = pureEval;
if (myArgs.dryRun) settings.readOnlyMode = true;
if (myArgs.releaseExpr == "") throw UsageError("no expression specified");
if (gcRootsDir == "") printMsg(lvlError, "warning: `--gc-roots-dir' not specified");
struct State
{
std::set<std::string> todo{""};
std::set<std::string> active;
nlohmann::json jobs;
std::exception_ptr exc;
};
std::condition_variable wakeup;
Sync<State> state_;
/* Start a handler thread per worker process. */
auto handler = [&]()
{
pid_t pid = -1;
try {
AutoCloseFD from, to;
while (true) {
/* Start a new worker process if necessary. */
if (pid == -1) {
Pipe toPipe, fromPipe;
toPipe.create();
fromPipe.create();
pid = startProcess(
[&,
to{std::make_shared<AutoCloseFD>(std::move(fromPipe.writeSide))},
from{std::make_shared<AutoCloseFD>(std::move(toPipe.readSide))}
]()
{
try {
EvalState state(myArgs.searchPath, openStore());
Bindings & autoArgs = *myArgs.getAutoArgs(state);
worker(state, autoArgs, *to, *from);
} catch (Error & e) {
nlohmann::json err;
auto msg = e.msg();
err["error"] = filterANSIEscapes(msg, true);
printError(msg);
writeLine(to->get(), err.dump());
// Don't forget to print it into the STDERR log, this is
// what's shown in the Hydra UI.
writeLine(to->get(), "restart");
}
},
ProcessOptions { .allowVfork = false });
from = std::move(fromPipe.readSide);
to = std::move(toPipe.writeSide);
debug("created worker process %d", pid);
}
/* Check whether the existing worker process is still there. */
auto s = readLine(from.get());
if (s == "restart") {
pid = -1;
continue;
} else if (s != "next") {
auto json = nlohmann::json::parse(s);
throw Error("worker error: %s", (std::string) json["error"]);
}
/* Wait for a job name to become available. */
std::string attrPath;
while (true) {
checkInterrupt();
auto state(state_.lock());
if ((state->todo.empty() && state->active.empty()) || state->exc) {
writeLine(to.get(), "exit");
return;
}
if (!state->todo.empty()) {
attrPath = *state->todo.begin();
state->todo.erase(state->todo.begin());
state->active.insert(attrPath);
break;
} else
state.wait(wakeup);
}
/* Tell the worker to evaluate it. */
writeLine(to.get(), "do " + attrPath);
/* Wait for the response. */
auto response = nlohmann::json::parse(readLine(from.get()));
/* Handle the response. */
StringSet newAttrs;
if (response.find("job") != response.end()) {
auto state(state_.lock());
state->jobs[attrPath] = response["job"];
}
if (response.find("attrs") != response.end()) {
for (auto & i : response["attrs"]) {
std::string path = i;
if (path.find(".") != std::string::npos){
path = "\"" + path + "\"";
}
auto s = (attrPath.empty() ? "" : attrPath + ".") + (std::string) path;
newAttrs.insert(s);
}
}
if (response.find("error") != response.end()) {
auto state(state_.lock());
state->jobs[attrPath]["error"] = response["error"];
}
/* Add newly discovered job names to the queue. */
{
auto state(state_.lock());
state->active.erase(attrPath);
for (auto & s : newAttrs)
state->todo.insert(s);
wakeup.notify_all();
}
}
} catch (...) {
check_pid_status_nonblocking(pid);
auto state(state_.lock());
state->exc = std::current_exception();
wakeup.notify_all();
}
};
std::vector<std::thread> threads;
for (size_t i = 0; i < nrWorkers; i++)
threads.emplace_back(std::thread(handler));
for (auto & thread : threads)
thread.join();
auto state(state_.lock());
if (state->exc)
std::rethrow_exception(state->exc);
/* For aggregate jobs that have named constituents
(i.e. constituents that are a job name rather than a
derivation), look up the referenced job and add it to the
dependencies of the aggregate derivation. */
auto store = openStore();
for (auto i = state->jobs.begin(); i != state->jobs.end(); ++i) {
auto jobName = i.key();
auto & job = i.value();
auto named = job.find("namedConstituents");
if (named == job.end()) continue;
std::unordered_map<std::string, std::string> brokenJobs;
auto getNonBrokenJobOrRecordError = [&brokenJobs, &jobName, &state](
const std::string & childJobName) -> std::optional<nlohmann::json> {
auto childJob = state->jobs.find(childJobName);
if (childJob == state->jobs.end()) {
printError("aggregate job '%s' references non-existent job '%s'", jobName, childJobName);
brokenJobs[childJobName] = "does not exist";
return std::nullopt;
}
if (childJob->find("error") != childJob->end()) {
std::string error = (*childJob)["error"];
printError("aggregate job '%s' references broken job '%s': %s", jobName, childJobName, error);
brokenJobs[childJobName] = error;
return std::nullopt;
}
return *childJob;
};
if (myArgs.dryRun) {
for (std::string jobName2 : *named) {
auto job2 = getNonBrokenJobOrRecordError(jobName2);
if (!job2) {
continue;
}
std::string drvPath2 = (*job2)["drvPath"];
job["constituents"].push_back(drvPath2);
}
} else {
auto drvPath = store->parseStorePath((std::string) job["drvPath"]);
auto drv = store->readDerivation(drvPath);
for (std::string jobName2 : *named) {
auto job2 = getNonBrokenJobOrRecordError(jobName2);
if (!job2) {
continue;
}
auto drvPath2 = store->parseStorePath((std::string) (*job2)["drvPath"]);
auto drv2 = store->readDerivation(drvPath2);
job["constituents"].push_back(store->printStorePath(drvPath2));
drv.inputDrvs.map[drvPath2].value = {drv2.outputs.begin()->first};
}
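/* Adding the constituents as inputs changed the aggregate derivation, so
   its input-addressed output path has to be recomputed below (via
   hashDerivationModulo) and the rewritten derivation stored again. */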
if (brokenJobs.empty()) {
std::string drvName(drvPath.name());
assert(hasSuffix(drvName, drvExtension));
drvName.resize(drvName.size() - drvExtension.size());
auto hashModulo = hashDerivationModulo(*store, drv, true);
if (hashModulo.kind != DrvHash::Kind::Regular) continue;
auto h = hashModulo.hashes.find("out");
if (h == hashModulo.hashes.end()) continue;
auto outPath = store->makeOutputPath("out", h->second, drvName);
drv.env["out"] = store->printStorePath(outPath);
drv.outputs.insert_or_assign("out", DerivationOutput::InputAddressed { .path = outPath });
auto newDrvPath = store->printStorePath(writeDerivation(*store, drv));
debug("rewrote aggregate derivation %s -> %s", store->printStorePath(drvPath), newDrvPath);
job["drvPath"] = newDrvPath;
job["outputs"]["out"] = store->printStorePath(outPath);
}
}
job.erase("namedConstituents");
/* Register the derivation as a GC root. !!! This
registers roots for jobs that we may have already
done. */
auto localStore = store.dynamic_pointer_cast<LocalFSStore>();
if (gcRootsDir != "" && localStore) {
auto drvPath = job["drvPath"].get<std::string>();
Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
if (!pathExists(root))
localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
}
if (!brokenJobs.empty()) {
std::stringstream ss;
for (const auto& [jobName, error] : brokenJobs) {
ss << jobName << ": " << error << "\n";
}
job["error"] = ss.str();
}
}
std::cout << state->jobs.dump(2) << "\n";
});
}


@@ -0,0 +1,5 @@
bin_PROGRAMS = hydra-evaluator
hydra_evaluator_SOURCES = hydra-evaluator.cc
hydra_evaluator_LDADD = $(NIX_LIBS) -lpqxx
hydra_evaluator_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations


@@ -38,7 +38,7 @@ class JobsetId {
friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs);
std::string display() const {
return boost::str(boost::format("%1%:%2% (jobset#%3%)") % project % jobset % id);
return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id);
}
};
bool operator==(const JobsetId & lhs, const JobsetId & rhs)


@@ -1,9 +0,0 @@
hydra_evaluator = executable('hydra-evaluator',
'hydra-evaluator.cc',
dependencies: [
libhydra_dep,
nix_dep,
pqxx_dep,
],
install: true,
)


@@ -0,0 +1,8 @@
bin_PROGRAMS = hydra-queue-runner
hydra_queue_runner_SOURCES = hydra-queue-runner.cc queue-monitor.cc dispatcher.cc \
builder.cc build-result.cc build-remote.cc \
hydra-build-result.hh counter.hh state.hh db.hh \
nar-extractor.cc nar-extractor.hh
hydra_queue_runner_LDADD = $(NIX_LIBS) -lpqxx -lprometheus-cpp-pull -lprometheus-cpp-core
hydra_queue_runner_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations


@@ -7,35 +7,180 @@
#include "build-result.hh"
#include "path.hh"
#include "ssh-store.hh"
#include "serve-protocol.hh"
#include "state.hh"
#include "current-process.hh"
#include "processes.hh"
#include "util.hh"
#include "serve-protocol.hh"
#include "serve-protocol-impl.hh"
#include "ssh.hh"
#include "finally.hh"
#include "url.hh"
using namespace nix;
bool ::Machine::isLocalhost() const
static void append(Strings & dst, const Strings & src)
{
return storeUri.params.empty() && std::visit(overloaded {
[](const StoreReference::Auto &) {
return true;
},
[](const StoreReference::Specified & s) {
return
(s.scheme == "local" || s.scheme == "unix") ||
((s.scheme == "ssh" || s.scheme == "ssh-ng") &&
s.authority == "localhost");
},
}, storeUri.variant);
dst.insert(dst.end(), src.begin(), src.end());
}
namespace nix::build_remote {
static Strings extraStoreArgs(std::string & machine)
{
Strings result;
try {
auto parsed = parseURL(machine);
if (parsed.scheme != "ssh") {
throw SysError("Currently, only (legacy-)ssh stores are supported!");
}
machine = parsed.authority.value_or("");
auto remoteStore = parsed.query.find("remote-store");
if (remoteStore != parsed.query.end()) {
result = {"--store", shellEscape(remoteStore->second)};
}
} catch (BadURL &) {
// We just try to continue with `machine->sshName` here for backwards compat.
}
return result;
}
static void openConnection(::Machine::ptr machine, Path tmpDir, int stderrFD, SSHMaster::Connection & child)
{
std::string pgmName;
Pipe to, from;
to.create();
from.create();
Strings argv;
if (machine->isLocalhost()) {
pgmName = "nix-store";
argv = {"nix-store", "--builders", "", "--serve", "--write"};
} else {
pgmName = "ssh";
auto sshName = machine->sshName;
Strings extraArgs = extraStoreArgs(sshName);
argv = {"ssh", sshName};
if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
if (machine->sshPublicHostKey != "") {
Path fileName = tmpDir + "/host-key";
auto p = machine->sshName.find("@");
std::string host = p != std::string::npos ? std::string(machine->sshName, p + 1) : machine->sshName;
writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
append(argv, {"-oUserKnownHostsFile=" + fileName});
}
append(argv,
{ "-x", "-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
, "--", "nix-store", "--serve", "--write" });
append(argv, extraArgs);
}
child.sshPid = startProcess([&]() {
restoreProcessContext();
if (dup2(to.readSide.get(), STDIN_FILENO) == -1)
throw SysError("cannot dup input pipe to stdin");
if (dup2(from.writeSide.get(), STDOUT_FILENO) == -1)
throw SysError("cannot dup output pipe to stdout");
if (dup2(stderrFD, STDERR_FILENO) == -1)
throw SysError("cannot dup stderr");
execvp(argv.front().c_str(), (char * *) stringsToCharPtrs(argv).data()); // FIXME: remove cast
throw SysError("cannot start %s", pgmName);
});
to.readSide = -1;
from.writeSide = -1;
child.in = to.writeSide.release();
child.out = from.readSide.release();
// XXX: determine the actual max value we can use from /proc.
int pipesize = 1024 * 1024;
fcntl(child.in.get(), F_SETPIPE_SZ, pipesize);
fcntl(child.out.get(), F_SETPIPE_SZ, pipesize);
}
static void copyClosureTo(
::Machine::Connection & conn,
Store & destStore,
const StorePathSet & paths,
SubstituteFlag useSubstitutes = NoSubstitute)
{
StorePathSet closure;
destStore.computeFSClosure(paths, closure);
/* Send the "query valid paths" command with the "lock" option
enabled. This prevents a race where the remote host
garbage-collects paths that are already there. Optionally, ask
the remote host to substitute missing paths. */
// FIXME: substitute output pollutes our build log
conn.to << ServeProto::Command::QueryValidPaths << 1 << useSubstitutes;
ServeProto::write(destStore, conn, closure);
conn.to.flush();
/* Get back the set of paths that are already valid on the remote
host. */
auto present = ServeProto::Serialise<StorePathSet>::read(destStore, conn);
if (present.size() == closure.size()) return;
auto sorted = destStore.topoSortPaths(closure);
StorePathSet missing;
for (auto i = sorted.rbegin(); i != sorted.rend(); ++i)
if (!present.count(*i)) missing.insert(*i);
printMsg(lvlDebug, "sending %d missing paths", missing.size());
std::unique_lock<std::timed_mutex> sendLock(conn.machine->state->sendLock,
std::chrono::seconds(600));
conn.to << ServeProto::Command::ImportPaths;
destStore.exportPaths(missing, conn.to);
conn.to.flush();
if (readInt(conn.from) != 1)
throw Error("remote machine failed to import closure");
}
// FIXME: use Store::topoSortPaths().
static StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths)
{
StorePaths sorted;
StorePathSet visited;
std::function<void(const StorePath & path)> dfsVisit;
dfsVisit = [&](const StorePath & path) {
if (!visited.insert(path).second) return;
auto info = paths.find(path);
auto references = info == paths.end() ? StorePathSet() : info->second.references;
for (auto & i : references)
/* Don't traverse into paths that don't exist. That can
happen due to substitutes for non-existent paths. */
if (i != path && paths.count(i))
dfsVisit(i);
sorted.push_back(path);
};
for (auto & i : paths)
dfsVisit(i.first);
return sorted;
}
static std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, const StorePath & drvPath)
{
std::string base(drvPath.to_string());
@@ -49,6 +194,28 @@ static std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, cons
return {std::move(logFile), std::move(logFD)};
}
/**
* @param conn is not fully initialized; it is this function's job to set
* the `remoteVersion` field after the handshake is completed.
* Therefore, no `ServeProto::Serialise` functions can be used until
* that field is set.
*/
static void handshake(::Machine::Connection & conn, unsigned int repeats)
{
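// Advertise serve protocol version 2.6: 0x206 encodes major 0x200 plus
// minor 6, matching the GET_PROTOCOL_MAJOR/MINOR checks below.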
conn.to << SERVE_MAGIC_1 << 0x206;
conn.to.flush();
unsigned int magic = readInt(conn.from);
if (magic != SERVE_MAGIC_2)
throw Error("protocol mismatch with nix-store --serve on %1%", conn.machine->sshName);
conn.remoteVersion = readInt(conn.from);
// Now `conn` is initialized.
if (GET_PROTOCOL_MAJOR(conn.remoteVersion) != 0x200)
throw Error("unsupported nix-store --serve protocol version on %1%", conn.machine->sshName);
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 3 && repeats > 0)
throw Error("machine %1% does not support repeating a build; please upgrade it to Nix 1.12", conn.machine->sshName);
}
static BasicDerivation sendInputs(
State & state,
Step & step,
@@ -93,18 +260,18 @@ static BasicDerivation sendInputs(
MaintainCount<counter> mc2(nrStepsCopyingTo);
printMsg(lvlDebug, "sending closure of %s to %s",
localStore.printStorePath(step.drvPath), conn.machine->storeUri.render());
localStore.printStorePath(step.drvPath), conn.machine->sshName);
auto now1 = std::chrono::steady_clock::now();
/* Copy the input closure. */
copyClosure(
destStore,
conn.machine->isLocalhost() ? localStore : *conn.store,
basicDrv.inputSrcs,
NoRepair,
NoCheckSigs,
Substitute);
if (conn.machine->isLocalhost()) {
StorePathSet closure;
destStore.computeFSClosure(basicDrv.inputSrcs, closure);
copyPaths(destStore, localStore, closure, NoRepair, NoCheckSigs, NoSubstitute);
} else {
copyClosureTo(conn, destStore, basicDrv.inputSrcs, Substitute);
}
auto now2 = std::chrono::steady_clock::now();
@@ -119,10 +286,21 @@ static BuildResult performBuild(
Store & localStore,
StorePath drvPath,
const BasicDerivation & drv,
const State::BuildOptions & options,
counter & nrStepsBuilding
)
{
auto kont = conn.store->buildDerivationAsync(drvPath, drv, bmNormal);
conn.to << ServeProto::Command::BuildDerivation << localStore.printStorePath(drvPath);
writeDerivation(conn.to, localStore, drv);
conn.to << options.maxSilentTime << options.buildTimeout;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 2)
conn.to << options.maxLogSize;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 3) {
conn.to
<< options.repeats // == build-repeat
<< options.enforceDeterminism;
}
conn.to.flush();
BuildResult result;
@@ -131,10 +309,7 @@ static BuildResult performBuild(
startTime = time(0);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = kont();
// Without proper call-once functions, we need to manually
// delete after calling.
kont = {};
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(0);
@@ -150,7 +325,7 @@ static BuildResult performBuild(
// If the protocol was too old to give us `builtOutputs`, initialize
// it manually by introspecting the derivation.
if (GET_PROTOCOL_MINOR(conn.store->getProtocol()) < 6)
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 6)
{
// If the remote is too old to handle CA derivations, we can't get this
// far anyway.
@@ -175,6 +350,91 @@ static BuildResult performBuild(
return result;
}
static std::map<StorePath, ValidPathInfo> queryPathInfos(
::Machine::Connection & conn,
Store & localStore,
StorePathSet & outputs,
size_t & totalNarSize
)
{
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
conn.to << ServeProto::Command::QueryPathInfos;
ServeProto::write(localStore, conn, outputs);
conn.to.flush();
while (true) {
auto storePathS = readString(conn.from);
if (storePathS == "") break;
auto deriver = readString(conn.from); // deriver
auto references = ServeProto::Serialise<StorePathSet>::read(localStore, conn);
readLongLong(conn.from); // download size
auto narSize = readLongLong(conn.from);
auto narHash = Hash::parseAny(readString(conn.from), htSHA256);
auto ca = ContentAddress::parseOpt(readString(conn.from));
readStrings<StringSet>(conn.from); // sigs
ValidPathInfo info(localStore.parseStorePath(storePathS), narHash);
assert(outputs.count(info.path));
info.references = references;
info.narSize = narSize;
totalNarSize += info.narSize;
info.narHash = narHash;
info.ca = ca;
if (deriver != "")
info.deriver = localStore.parseStorePath(deriver);
infos.insert_or_assign(info.path, info);
}
return infos;
}
static void copyPathFromRemote(
::Machine::Connection & conn,
NarMemberDatas & narMembers,
Store & localStore,
Store & destStore,
const ValidPathInfo & info
)
{
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
is already valid on the destination store. Since this
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
}
static void copyPathsFromRemote(
::Machine::Connection & conn,
NarMemberDatas & narMembers,
Store & localStore,
Store & destStore,
const std::map<StorePath, ValidPathInfo> & infos
)
{
auto pathsSorted = reverseTopoSortPaths(infos);
for (auto & path : pathsSorted) {
auto & info = infos.find(path)->second;
copyPathFromRemote(conn, narMembers, localStore, destStore, info);
}
}
}
/* using namespace nix::build_remote; */
@@ -234,9 +494,21 @@ void RemoteResult::updateWithBuildResult(const nix::BuildResult & buildResult)
}
/* Utility guard object to auto-release a semaphore on destruction. */
template <typename T>
class SemaphoreReleaser {
public:
SemaphoreReleaser(T* s) : sem(s) {}
~SemaphoreReleaser() { sem->release(); }
private:
T* sem;
};
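/* Intended use (see buildRemote below): acquire the semaphore first, then
   bind it to a SemaphoreReleaser on the stack so the slot is returned on
   every exit path, including exceptions:

     localWorkThrottler.acquire();
     SemaphoreReleaser releaser(&localWorkThrottler);
*/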
void State::buildRemote(ref<Store> destStore,
MachineReservation::ptr & reservation,
::Machine::ptr machine, Step::ptr step,
const BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers)
@@ -247,47 +519,26 @@ void State::buildRemote(ref<Store> destStore,
AutoDelete logFileDel(logFile, false);
result.logFile = logFile;
nix::Path tmpDir = createTempDir();
AutoDelete tmpDirDel(tmpDir, true);
try {
updateStep(ssConnecting);
// FIXME: rewrite to use Store.
::Machine::Connection conn {
.machine = machine,
.store = [&]{
auto * pSpecified = std::get_if<StoreReference::Specified>(&machine->storeUri.variant);
if (!pSpecified || pSpecified->scheme != "ssh-ng") {
throw Error("Currently, only ssh-ng:// stores are supported!");
}
auto remoteStore = machine->openStore().dynamic_pointer_cast<RemoteStore>();
auto remoteStoreConfig = std::dynamic_pointer_cast<SSHStoreConfig>(remoteStore);
assert(remoteStore);
if (machine->isLocalhost()) {
auto rp_new = remoteStoreConfig->remoteProgram.get();
rp_new.push_back("--builders");
rp_new.push_back("");
const_cast<nix::Setting<Strings> &>(remoteStoreConfig->remoteProgram).assign(rp_new);
}
remoteStoreConfig->extraSshArgs = {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
};
// TODO logging
//const_cast<nix::Setting<int> &>(remoteStore->logFD).assign(logFD.get());
return nix::ref{remoteStore};
}(),
};
SSHMaster::Connection child;
build_remote::openConnection(machine, tmpDir, logFD.get(), child);
{
auto activeStepState(activeStep->state_.lock());
if (activeStepState->cancelled) throw Error("step cancelled");
activeStepState->pid = child.sshPid;
}
Finally clearPid([&]() {
auto activeStepState(activeStep->state_.lock());
activeStepState->pid = -1;
/* FIXME: there is a slight race here with step
cancellation in State::processQueueChange(), which
@@ -297,13 +548,25 @@ void State::buildRemote(ref<Store> destStore,
process. Meh. */
});
::Machine::Connection conn {
.from = child.out.get(),
.to = child.in.get(),
.machine = machine,
};
Finally updateStats([&]() {
// TODO
//auto stats = conn.store->getConnectionStats();
//bytesReceived += stats.bytesReceived;
//bytesSent += stats.bytesSent;
bytesReceived += conn.from.read;
bytesSent += conn.to.written;
});
try {
build_remote::handshake(conn, buildOptions.repeats);
} catch (EndOfFile & e) {
child.sshPid.wait();
std::string s = chomp(readFile(result.logFile));
throw Error("cannot connect to %1%: %2%", machine->sshName, s);
}
{
auto info(machine->state->connectInfo.lock());
info->consecutiveFailures = 0;
@@ -332,7 +595,7 @@ void State::buildRemote(ref<Store> destStore,
/* Do the build. */
printMsg(lvlDebug, "building %s on %s",
localStore->printStorePath(step->drvPath),
machine->storeUri.render());
machine->sshName);
updateStep(ssBuilding);
@@ -341,6 +604,7 @@ void State::buildRemote(ref<Store> destStore,
*localStore,
step->drvPath,
resolvedDrv,
buildOptions,
nrStepsBuilding
);
@@ -354,11 +618,28 @@ void State::buildRemote(ref<Store> destStore,
get a build log. */
if (result.isCached) {
printMsg(lvlInfo, "outputs of %s substituted or already valid on %s",
localStore->printStorePath(step->drvPath), machine->storeUri.render());
localStore->printStorePath(step->drvPath), machine->sshName);
unlink(result.logFile.c_str());
result.logFile = "";
}
/* Throttle CPU-bound work. Opportunistically skip updating the current
* step, since this requires a DB roundtrip. */
if (!localWorkThrottler.try_acquire()) {
updateStep(ssWaitingForLocalSlot);
localWorkThrottler.acquire();
}
SemaphoreReleaser releaser(&localWorkThrottler);
/* Once we've started copying outputs, release the machine reservation
* so further builds can happen. We do not release the machine earlier
* to avoid situations where the queue runner is bottlenecked on
* copying outputs and we end up building more things than we have
* copy slots for. */
assert(reservation.unique());
reservation = 0;
wakeDispatcher();
StorePathSet outputs;
for (auto & [_, realisation] : buildResult.builtOutputs)
outputs.insert(realisation.outPath);
@@ -371,12 +652,19 @@ void State::buildRemote(ref<Store> destStore,
auto now1 = std::chrono::steady_clock::now();
size_t totalNarSize = 0;
auto infos = build_remote::queryPathInfos(conn, *localStore, outputs, totalNarSize);
if (totalNarSize > maxOutputSize) {
result.stepStatus = bsNarSizeLimitExceeded;
return;
}
/* Copy each path. */
printMsg(lvlDebug, "copying outputs of %s from %s",
localStore->printStorePath(step->drvPath), machine->storeUri.render());
copyClosure(*conn.store, *destStore, outputs);
printMsg(lvlDebug, "copying outputs of %s from %s (%d bytes)",
localStore->printStorePath(step->drvPath), machine->sshName, totalNarSize);
build_remote::copyPathsFromRemote(conn, narMembers, *localStore, *destStore, infos);
auto now2 = std::chrono::steady_clock::now();
result.overhead += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
@@ -397,11 +685,9 @@ void State::buildRemote(ref<Store> destStore,
}
}
/* Shut down the connection; previously done by RAII. The only
   difference is kill() instead of wait() (i.e. send a signal,
   then wait). */
/* Shut down the connection. */
child.in = -1;
child.sshPid.wait();
} catch (Error & e) {
/* Disable this machine until a certain period of time has
@@ -415,7 +701,7 @@ void State::buildRemote(ref<Store> destStore,
info->consecutiveFailures = std::min(info->consecutiveFailures + 1, (unsigned int) 4);
info->lastFailure = now;
int delta = retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30);
printMsg(lvlInfo, "will disable machine %1% for %2%s", machine->storeUri.render(), delta);
printMsg(lvlInfo, "will disable machine %1% for %2%s", machine->sshName, delta);
info->disabledUntil = now + std::chrono::seconds(delta);
}
throw;


@@ -41,15 +41,17 @@ void State::builder(MachineReservation::ptr reservation)
} catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building %s on %s: %s",
localStore->printStorePath(reservation->step->drvPath),
reservation->machine->storeUri.render(),
reservation->machine->sshName,
e.what());
}
}
/* Release the machine and wake up the dispatcher. */
assert(reservation.unique());
reservation = 0;
wakeDispatcher();
/* If the machine hasn't been released yet, release and wake up the dispatcher. */
if (reservation) {
assert(reservation.unique());
reservation = 0;
wakeDispatcher();
}
/* If there was a temporary failure, retry the step after an
exponentially increasing interval. */
@@ -72,11 +74,11 @@ void State::builder(MachineReservation::ptr reservation)
State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr reservation,
MachineReservation::ptr & reservation,
std::shared_ptr<ActiveStep> activeStep)
{
auto & step(reservation->step);
auto & machine(reservation->machine);
auto step(reservation->step);
auto machine(reservation->machine);
{
auto step_(step->state.lock());
@@ -98,6 +100,10 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
it). */
BuildID buildId;
std::optional<StorePath> buildDrvPath;
BuildOptions buildOptions;
buildOptions.repeats = step->isDeterministic ? 1 : 0;
buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic;
auto conn(dbPool.get());
@@ -132,19 +138,18 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto i = jobsetRepeats.find(std::make_pair(build2->projectName, build2->jobsetName));
if (i != jobsetRepeats.end())
warn("jobset repeats is deprecated; nix stopped supporting this correctly a long time ago.");
buildOptions.repeats = std::max(buildOptions.repeats, i->second);
}
}
if (!build) build = *dependents.begin();
buildId = build->id;
buildDrvPath = build->drvPath;
settings.maxLogSize = maxLogSize;
settings.maxSilentTime = build->maxSilentTime;
settings.buildTimeout = build->buildTimeout;
buildOptions.maxSilentTime = build->maxSilentTime;
buildOptions.buildTimeout = build->buildTimeout;
printInfo("performing step %s %d times on %s (needed by build %d and %d others)",
localStore->printStorePath(step->drvPath), 1, machine->storeUri.render(), buildId, (dependents.size() - 1));
localStore->printStorePath(step->drvPath), buildOptions.repeats + 1, machine->sshName, buildId, (dependents.size() - 1));
}
if (!buildOneDone)
@@ -172,7 +177,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
unlink(result.logFile.c_str());
}
} catch (...) {
ignoreExceptionInDestructor();
ignoreException();
}
}
});
@@ -190,7 +195,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto mc = startDbUpdate();
pqxx::work txn(*conn);
stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->storeUri.render(), bsBusy);
stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->sshName, bsBusy);
txn.commit();
}
@@ -205,7 +210,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
try {
/* FIXME: referring builds may have conflicting timeouts. */
buildRemote(destStore, machine, step, result, activeStep, updateStep, narMembers);
buildRemote(destStore, reservation, machine, step, buildOptions, result, activeStep, updateStep, narMembers);
} catch (Error & e) {
if (activeStep->state_.lock()->cancelled) {
printInfo("marking step %d of build %d as cancelled", stepNr, buildId);
@@ -247,7 +252,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Finish the step in the database. */
if (stepNr) {
pqxx::work txn(*conn);
finishBuildStep(txn, result, buildId, stepNr, machine->storeUri.render());
finishBuildStep(txn, result, buildId, stepNr, machine->sshName);
txn.commit();
}
@@ -255,7 +260,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
issue). Retry a number of times. */
if (result.canRetry) {
printMsg(lvlError, "possibly transient failure building %s on %s: %s",
localStore->printStorePath(step->drvPath), machine->storeUri.render(), result.errorMsg);
localStore->printStorePath(step->drvPath), machine->sshName, result.errorMsg);
assert(stepNr);
bool retry;
{
@@ -446,7 +451,7 @@ void State::failStep(
build->finishedInDB)
continue;
createBuildStep(txn,
0, build->id, step, machine ? machine->storeUri.render() : "",
0, build->id, step, machine ? machine->sshName : "",
result.stepStatus, result.errorMsg, buildId == build->id ? 0 : buildId);
}


@@ -2,7 +2,6 @@
#include <cmath>
#include <thread>
#include <unordered_map>
#include <unordered_set>
#include "state.hh"
@@ -40,13 +39,15 @@ void State::dispatcher()
printMsg(lvlDebug, "dispatcher woken up");
nrDispatcherWakeups++;
auto now1 = std::chrono::steady_clock::now();
auto t_before_work = std::chrono::steady_clock::now();
auto sleepUntil = doDispatch();
auto now2 = std::chrono::steady_clock::now();
auto t_after_work = std::chrono::steady_clock::now();
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
prom.dispatcher_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();
/* Sleep until we're woken up (either because a runnable build
is added, or because a build finishes). */
@@ -60,6 +61,10 @@ void State::dispatcher()
*dispatcherWakeup_ = false;
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
} catch (std::exception & e) {
printError("dispatcher: %s", e.what());
sleep(1);
@@ -128,6 +133,8 @@ system_time State::doDispatch()
comparator is a partial ordering (see MachineInfo). */
int highestGlobalPriority;
int highestLocalPriority;
size_t numRequiredSystemFeatures;
size_t numRevDeps;
BuildID lowestBuildID;
StepInfo(Step::ptr step, Step::State & step_) : step(step)
@@ -136,6 +143,8 @@ system_time State::doDispatch()
lowestShareUsed = std::min(lowestShareUsed, jobset->shareUsed());
highestGlobalPriority = step_.highestGlobalPriority;
highestLocalPriority = step_.highestLocalPriority;
numRequiredSystemFeatures = step->requiredSystemFeatures.size();
numRevDeps = step_.rdeps.size();
lowestBuildID = step_.lowestBuildID;
}
};
@@ -188,6 +197,8 @@ system_time State::doDispatch()
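// First difference wins: global priority, fair-share usage, local
// priority, number of required system features (more constrained steps
// first), reverse-dependency count (wider fan-out first), oldest build.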
a.highestGlobalPriority != b.highestGlobalPriority ? a.highestGlobalPriority > b.highestGlobalPriority :
a.lowestShareUsed != b.lowestShareUsed ? a.lowestShareUsed < b.lowestShareUsed :
a.highestLocalPriority != b.highestLocalPriority ? a.highestLocalPriority > b.highestLocalPriority :
a.numRequiredSystemFeatures != b.numRequiredSystemFeatures ? a.numRequiredSystemFeatures > b.numRequiredSystemFeatures :
a.numRevDeps != b.numRevDeps ? a.numRevDeps > b.numRevDeps :
a.lowestBuildID < b.lowestBuildID;
});
@@ -232,11 +243,11 @@ system_time State::doDispatch()
sort(machinesSorted.begin(), machinesSorted.end(),
[](const MachineInfo & a, const MachineInfo & b) -> bool
{
float ta = std::round(a.currentJobs / a.machine->speedFactor);
float tb = std::round(b.currentJobs / b.machine->speedFactor);
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat);
return
ta != tb ? ta < tb :
a.machine->speedFactor != b.machine->speedFactor ? a.machine->speedFactor > b.machine->speedFactor :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :
a.currentJobs > b.currentJobs;
});
@@ -256,7 +267,7 @@ system_time State::doDispatch()
/* Can this machine do this step? */
if (!mi.machine->supportsStep(step)) {
debug("machine '%s' does not support step '%s' (system type '%s')",
mi.machine->storeUri.render(), localStore->printStorePath(step->drvPath), step->drv->platform);
mi.machine->sshName, localStore->printStorePath(step->drvPath), step->drv->platform);
continue;
}

View File

@@ -70,10 +70,31 @@ State::PromMetrics::PromMetrics()
.Register(*registry)
.Add({})
)
, queue_max_id(
prometheus::BuildGauge()
.Name("hydraqueuerunner_queue_max_build_id_info")
.Help("Maximum build record ID in the queue")
, dispatcher_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_running")
.Help("Time (in micros) spent running the dispatcher")
.Register(*registry)
.Add({})
)
, dispatcher_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_waiting")
.Help("Time (in micros) spent waiting for the dispatcher to obtain work")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_running")
.Help("Time (in micros) spent running the queue monitor")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_waiting")
.Help("Time (in micros) spent waiting for the queue monitor to obtain work")
.Register(*registry)
.Add({})
)
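// The running/waiting counters come in complementary pairs; e.g. in
// PromQL, rate(hydraqueuerunner_dispatcher_time_spent_running[5m])
// compared against the matching _waiting series suggests whether the
// dispatcher (or queue monitor) is the bottleneck.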
@@ -85,6 +106,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2)))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
@@ -135,26 +157,67 @@ void State::parseMachines(const std::string & contents)
oldMachines = *machines_;
}
for (auto && machine_ : nix::Machine::parseConfig({}, contents)) {
auto machine = std::make_shared<::Machine>(std::move(machine_));
for (auto line : tokenizeString<Strings>(contents, "\n")) {
line = trim(std::string(line, 0, line.find('#')));
auto tokens = tokenizeString<std::vector<std::string>>(line);
if (tokens.size() < 3) continue;
tokens.resize(8);
if (tokens[5] == "-") tokens[5] = "";
auto supportedFeatures = tokenizeString<StringSet>(tokens[5], ",");
if (tokens[6] == "-") tokens[6] = "";
auto mandatoryFeatures = tokenizeString<StringSet>(tokens[6], ",");
for (auto & f : mandatoryFeatures)
supportedFeatures.insert(f);
using MaxJobs = std::remove_const<decltype(nix::Machine::maxJobs)>::type;
auto machine = std::make_shared<::Machine>(nix::Machine {
// `storeUri`, not yet used
"",
// `systemTypes`, not yet used
{},
// `sshKey`
tokens[2] == "-" ? "" : tokens[2],
// `maxJobs`
tokens[3] != ""
? string2Int<MaxJobs>(tokens[3]).value()
: 1,
// `speedFactor`, not yet used
1,
// `supportedFeatures`
std::move(supportedFeatures),
// `mandatoryFeatures`
std::move(mandatoryFeatures),
// `sshPublicHostKey`
tokens[7] != "" && tokens[7] != "-"
? base64Decode(tokens[7])
: "",
});
machine->sshName = tokens[0];
machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
machine->speedFactorFloat = atof(tokens[4].c_str());
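/* Hypothetical machines-file line parsed above (fields: name, system
   types, ssh key, max jobs, speed factor, supported features, mandatory
   features, base64 host key; "-" or a missing field means empty):
     root@builder1 x86_64-linux,i686-linux /var/lib/hydra/id 8 1 kvm,big-parallel - - */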
/* Re-use the State object of the previous machine with the
same name. */
auto i = oldMachines.find(machine->storeUri.variant);
auto i = oldMachines.find(machine->sshName);
if (i == oldMachines.end())
printMsg(lvlChatty, "adding new machine %1%", machine->storeUri.render());
printMsg(lvlChatty, "adding new machine %1%", machine->sshName);
else
printMsg(lvlChatty, "updating machine %1%", machine->storeUri.render());
printMsg(lvlChatty, "updating machine %1%", machine->sshName);
machine->state = i == oldMachines.end()
? std::make_shared<::Machine::State>()
: i->second->state;
newMachines[machine->storeUri.variant] = machine;
newMachines[machine->sshName] = machine;
}
for (auto & m : oldMachines)
if (newMachines.find(m.first) == newMachines.end()) {
if (m.second->enabled)
printInfo("removing machine %1%", m.second->storeUri.render());
printInfo("removing machine %1%", m.first);
/* Add a disabled ::Machine object to make sure stats are
maintained. */
auto machine = std::make_shared<::Machine>(*(m.second));
@@ -182,7 +245,7 @@ void State::monitorMachinesFile()
getEnv("NIX_REMOTE_SYSTEMS").value_or(pathExists(defaultMachinesFile) ? defaultMachinesFile : ""), ":");
if (machinesFiles.empty()) {
parseMachines("ssh-ng://localhost " +
parseMachines("localhost " +
(settings.thisSystem == "x86_64-linux" ? "x86_64-linux,i686-linux" : settings.thisSystem.get())
+ " - " + std::to_string(settings.maxBuildJobs) + " 1 "
+ concatStringsSep(",", StoreConfig::getDefaultSystemFeatures()));
@@ -600,7 +663,7 @@ void State::dumpStatus(Connection & conn)
json machine = {
{"enabled", m->enabled},
{"systemTypes", m->systemTypes},
{"systemTypes", m->systemTypesSet},
{"supportedFeatures", m->supportedFeatures},
{"mandatoryFeatures", m->mandatoryFeatures},
{"nrStepsDone", s->nrStepsDone.load()},
@@ -618,7 +681,7 @@ void State::dumpStatus(Connection & conn)
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
}
statusJson["machines"][m->storeUri.render()] = machine;
statusJson["machines"][m->sshName] = machine;
}
}


@@ -1,22 +0,0 @@
srcs = files(
'builder.cc',
'build-remote.cc',
'build-result.cc',
'dispatcher.cc',
'hydra-queue-runner.cc',
'nar-extractor.cc',
'queue-monitor.cc',
)
hydra_queue_runner = executable('hydra-queue-runner',
'hydra-queue-runner.cc',
srcs,
dependencies: [
libhydra_dep,
nix_dep,
pqxx_dep,
prom_cpp_core_dep,
prom_cpp_pull_dep,
],
install: true,
)


@@ -6,46 +6,7 @@
using namespace nix;
struct NarMemberConstructor : CreateRegularFileSink
{
NarMemberData & curMember;
HashSink hashSink = HashSink { HashAlgorithm::SHA256 };
std::optional<uint64_t> expectedSize;
NarMemberConstructor(NarMemberData & curMember)
: curMember(curMember)
{ }
void isExecutable() override
{
}
void preallocateContents(uint64_t size) override
{
expectedSize = size;
}
void operator () (std::string_view data) override
{
assert(expectedSize);
*curMember.fileSize += data.size();
hashSink(data);
if (curMember.contents) {
curMember.contents->append(data);
}
assert(curMember.fileSize <= expectedSize);
if (curMember.fileSize == expectedSize) {
auto [hash, len] = hashSink.finish();
assert(curMember.fileSize == len);
curMember.sha256 = hash;
}
}
};
struct Extractor : FileSystemObjectSink
struct Extractor : ParseSink
{
std::unordered_set<Path> filesToKeep {
"/nix-support/hydra-build-products",
@@ -54,41 +15,65 @@ struct Extractor : FileSystemObjectSink
};
NarMemberDatas & members;
std::filesystem::path prefix;
Path toKey(const CanonPath & path)
{
std::filesystem::path p = prefix;
// Conditional to avoid trailing slash
if (!path.isRoot()) p /= path.rel();
return p;
}
NarMemberData * curMember = nullptr;
Path prefix;
Extractor(NarMemberDatas & members, const Path & prefix)
: members(members), prefix(prefix)
{ }
void createDirectory(const CanonPath & path) override
void createDirectory(const Path & path) override
{
members.insert_or_assign(toKey(path), NarMemberData { .type = SourceAccessor::Type::tDirectory });
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tDirectory });
}
void createRegularFile(const CanonPath & path, std::function<void(CreateRegularFileSink &)> func) override
void createRegularFile(const Path & path) override
{
NarMemberConstructor nmc {
members.insert_or_assign(toKey(path), NarMemberData {
.type = SourceAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path.abs()) ? std::optional("") : std::nullopt,
}).first->second,
};
func(nmc);
curMember = &members.insert_or_assign(prefix + path, NarMemberData {
.type = SourceAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second;
}
void createSymlink(const CanonPath & path, const std::string & target) override
std::optional<uint64_t> expectedSize;
std::unique_ptr<HashSink> hashSink;
void preallocateContents(uint64_t size) override
{
members.insert_or_assign(toKey(path), NarMemberData { .type = SourceAccessor::Type::tSymlink });
expectedSize = size;
hashSink = std::make_unique<HashSink>(htSHA256);
}
void receiveContents(std::string_view data) override
{
assert(expectedSize);
assert(curMember);
assert(hashSink);
*curMember->fileSize += data.size();
(*hashSink)(data);
if (curMember->contents) {
curMember->contents->append(data);
}
assert(curMember->fileSize <= expectedSize);
if (curMember->fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(curMember->fileSize == len);
curMember->sha256 = hash;
hashSink.reset();
}
}
void createSymlink(const Path & path, const std::string & target) override
{
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tSymlink });
}
void isExecutable() override
{ }
void closeRegularFile() override
{ }
};


@@ -1,6 +1,7 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "globals.hh"
#include "thread-pool.hh"
#include <cstring>
@@ -37,16 +38,21 @@ void State::queueMonitorLoop(Connection & conn)
auto destStore = getDestStore();
unsigned int lastBuildId = 0;
bool quit = false;
while (!quit) {
auto t_before_work = std::chrono::steady_clock::now();
localStore->clearPathInfoCache();
bool done = getQueuedBuilds(conn, destStore, lastBuildId);
bool done = getQueuedBuilds(conn, destStore);
if (buildOne && buildOneDone) quit = true;
auto t_after_work = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
/* Sleep until we get notification from the database about an
event. */
if (done && !quit) {
@@ -56,12 +62,10 @@ void State::queueMonitorLoop(Connection & conn)
conn.get_notifs();
if (auto lowestId = buildsAdded.get()) {
lastBuildId = std::min(lastBuildId, static_cast<unsigned>(std::stoul(*lowestId) - 1));
printMsg(lvlTalkative, "got notification: new builds added to the queue");
}
if (buildsRestarted.get()) {
printMsg(lvlTalkative, "got notification: builds restarted");
lastBuildId = 0; // check all builds
}
if (buildsCancelled.get() || buildsDeleted.get() || buildsBumped.get()) {
printMsg(lvlTalkative, "got notification: builds cancelled or bumped");
@@ -71,6 +75,10 @@ void State::queueMonitorLoop(Connection & conn)
printMsg(lvlTalkative, "got notification: jobset shares changed");
processJobsetSharesChange(conn);
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
}
exit(0);
@@ -84,20 +92,18 @@ struct PreviousFailure : public std::exception {
bool State::getQueuedBuilds(Connection & conn,
ref<Store> destStore, unsigned int & lastBuildId)
ref<Store> destStore)
{
prom.queue_checks_started.Increment();
printInfo("checking the queue for builds > %d...", lastBuildId);
printInfo("checking the queue for builds...");
/* Grab the queued builds from the database, but don't process
them yet (since we don't want a long-running transaction). */
std::vector<BuildID> newIDs;
std::map<BuildID, Build::ptr> newBuildsByID;
std::unordered_map<BuildID, Build::ptr> newBuildsByID;
std::multimap<StorePath, BuildID> newBuildsByPath;
unsigned int newLastBuildId = lastBuildId;
{
pqxx::work txn(conn);
@@ -106,17 +112,12 @@ bool State::getQueuedBuilds(Connection & conn,
"jobsets.name as jobset, job, drvPath, maxsilent, timeout, timestamp, "
"globalPriority, priority from Builds "
"inner join jobsets on builds.jobset_id = jobsets.id "
"where builds.id > $1 and finished = 0 order by globalPriority desc, builds.id",
lastBuildId);
"where finished = 0 order by globalPriority desc, random()");
for (auto const & row : res) {
auto builds_(builds.lock());
BuildID id = row["id"].as<BuildID>();
if (buildOne && id != buildOne) continue;
if (id > newLastBuildId) {
newLastBuildId = id;
prom.queue_max_id.Set(id);
}
if (builds_->count(id)) continue;
auto build = std::make_shared<Build>(
@@ -298,7 +299,7 @@ bool State::getQueuedBuilds(Connection & conn,
try {
createBuild(build);
} catch (Error & e) {
e.addTrace({}, HintFmt("while loading build %d: ", build->id));
e.addTrace({}, hintfmt("while loading build %d: ", build->id));
throw;
}
@@ -318,15 +319,13 @@ bool State::getQueuedBuilds(Connection & conn,
/* Stop after a certain time to allow priority bumps to be
processed. */
if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) {
if (std::chrono::system_clock::now() > start + std::chrono::seconds(60)) {
prom.queue_checks_early_exits.Increment();
break;
}
}
prom.queue_checks_finished.Increment();
lastBuildId = newBuildsByID.empty() ? newLastBuildId : newBuildsByID.begin()->first - 1;
return newBuildsByID.empty();
}
@@ -405,6 +404,34 @@ void State::processQueueChange(Connection & conn)
}
std::map<DrvOutput, std::optional<StorePath>> State::getMissingRemotePaths(
ref<Store> destStore,
const std::map<DrvOutput, std::optional<StorePath>> & paths)
{
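/* Outputs with no known path are trivially missing; for the rest, query
   validity on the destination store in parallel, since each check may be
   a network round-trip. */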
Sync<std::map<DrvOutput, std::optional<StorePath>>> missing_;
ThreadPool tp;
for (auto & [output, maybeOutputPath] : paths) {
if (!maybeOutputPath) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
} else {
tp.enqueue([&] {
if (!destStore->isValidPath(*maybeOutputPath)) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
}
});
}
}
tp.process();
auto missing(missing_.lock());
return *missing;
}
Step::ptr State::createStep(ref<Store> destStore,
Connection & conn, Build::ptr build, const StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
@@ -485,16 +512,15 @@ Step::ptr State::createStep(ref<Store> destStore,
/* Are all outputs valid? */
auto outputHashes = staticOutputHashes(*localStore, *(step->drv));
bool valid = true;
std::map<DrvOutput, std::optional<StorePath>> missing;
std::map<DrvOutput, std::optional<StorePath>> paths;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
if (maybeOutputPath && destStore->isValidPath(*maybeOutputPath))
continue;
valid = false;
missing.insert({{outputHash, outputName}, maybeOutputPath});
paths.insert({{outputHash, outputName}, maybeOutputPath});
}
auto missing = getMissingRemotePaths(destStore, paths);
bool valid = missing.empty();
/* Try to copy the missing paths from the local store or from
substitutes. */
if (!missing.empty()) {
@@ -700,7 +726,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
product.fileSize = row[2].as<off_t>();
}
if (!row[3].is_null())
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), HashAlgorithm::SHA256);
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), htSHA256);
if (!row[4].is_null())
product.path = row[4].as<std::string>();
product.name = row[5].as<std::string>();


@@ -6,6 +6,8 @@
#include <map>
#include <memory>
#include <queue>
#include <regex>
#include <semaphore>
#include <prometheus/counter.h>
#include <prometheus/gauge.h>
@@ -20,7 +22,7 @@
#include "store-api.hh"
#include "sync.hh"
#include "nar-extractor.hh"
#include "ssh-store.hh"
#include "serve-protocol.hh"
#include "machines.hh"
@@ -55,6 +57,7 @@ typedef enum {
ssConnecting = 10,
ssSendingInputs = 20,
ssBuilding = 30,
ssWaitingForLocalSlot = 35,
ssReceivingOutputs = 40,
ssPostProcessing = 50,
} StepState;
@@ -238,6 +241,18 @@ struct Machine : nix::Machine
{
typedef std::shared_ptr<Machine> ptr;
/* TODO Get rid of this: `nix::Machine::storeUri` is normalized in a way
   we are not yet used to; once we are, this field is no longer needed. */
std::string sshName;
/* TODO Get rid once `nix::Machine::systemTypes` is a set not
vector. */
std::set<std::string> systemTypesSet;
/* TODO Get rid once `nix::Machine::speedFactor` is a `float` not
   an `int`. */
float speedFactorFloat = 1.0;
struct State {
typedef std::shared_ptr<State> ptr;
counter currentJobs{0};
@@ -264,7 +279,7 @@ struct Machine : nix::Machine
{
/* Check that this machine is of the type required by the
step. */
if (!systemTypes.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
if (!systemTypesSet.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
return false;
/* Check that the step requires all mandatory features of this
@@ -287,19 +302,41 @@ struct Machine : nix::Machine
return true;
}
bool isLocalhost() const;
bool isLocalhost()
{
std::regex r("^(ssh://|ssh-ng://)?localhost$");
return std::regex_search(sshName, r);
}
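// Matches "localhost", "ssh://localhost" and "ssh-ng://localhost".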
// A connection to a machine
struct Connection {
nix::FdSource from;
nix::FdSink to;
nix::ServeProto::Version remoteVersion;
// Backpointer to the machine
ptr machine;
// Opened store
nix::ref<nix::RemoteStore> store;
operator nix::ServeProto::ReadConn ()
{
return {
.from = from,
.version = remoteVersion,
};
}
operator nix::ServeProto::WriteConn ()
{
return {
.to = to,
.version = remoteVersion,
};
}
};
};
struct HydraConfig;
class HydraConfig;
class State
@@ -349,9 +386,13 @@ private:
/* The build machines. */
std::mutex machinesReadyLock;
typedef std::map<nix::StoreReference::Variant, Machine::ptr> Machines;
typedef std::map<std::string, Machine::ptr> Machines;
nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr
/* Throttler for CPU-bound local work. */
static constexpr unsigned int maxSupportedLocalWorkers = 1024;
std::counting_semaphore<maxSupportedLocalWorkers> localWorkThrottler;
/* Various stats. */
time_t startedAt;
counter nrBuildsRead{0};
@@ -429,7 +470,7 @@ private:
/* How often the build steps of a jobset should be repeated in
order to detect non-determinism. */
std::map<std::pair<std::string, std::string>, size_t> jobsetRepeats;
std::map<std::pair<std::string, std::string>, unsigned int> jobsetRepeats;
bool uploadLogsToBinaryCache;
@@ -449,7 +490,12 @@ private:
prometheus::Counter& queue_steps_created;
prometheus::Counter& queue_checks_early_exits;
prometheus::Counter& queue_checks_finished;
prometheus::Gauge& queue_max_id;
prometheus::Counter& dispatcher_time_spent_running;
prometheus::Counter& dispatcher_time_spent_waiting;
prometheus::Counter& queue_monitor_time_spent_running;
prometheus::Counter& queue_monitor_time_spent_waiting;
PromMetrics();
};
@@ -458,6 +504,12 @@ private:
public:
State(std::optional<std::string> metricsAddrOpt);
struct BuildOptions {
unsigned int maxSilentTime, buildTimeout, repeats;
size_t maxLogSize;
bool enforceDeterminism;
};
private:
nix::MaintainCount<counter> startDbUpdate();
@@ -493,8 +545,7 @@ private:
void queueMonitorLoop(Connection & conn);
/* Check the queue for new builds. */
bool getQueuedBuilds(Connection & conn,
nix::ref<nix::Store> destStore, unsigned int & lastBuildId);
bool getQueuedBuilds(Connection & conn, nix::ref<nix::Store> destStore);
/* Handle cancellation, deletion and priority bumps. */
void processQueueChange(Connection & conn);
@@ -502,6 +553,12 @@ private:
BuildOutput getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore,
const nix::StorePath & drvPath);
/* Returns paths missing from the remote store. Paths are processed in
* parallel to work around the possible latency of remote stores. */
std::map<nix::DrvOutput, std::optional<nix::StorePath>> getMissingRemotePaths(
nix::ref<nix::Store> destStore,
const std::map<nix::DrvOutput, std::optional<nix::StorePath>> & paths);
Step::ptr createStep(nix::ref<nix::Store> store,
Connection & conn, Build::ptr build, const nix::StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<nix::StorePath> & finishedDrvs,
@@ -537,11 +594,13 @@ private:
retried. */
enum StepResult { sDone, sRetry, sMaybeCancelled };
StepResult doBuildStep(nix::ref<nix::Store> destStore,
MachineReservation::ptr reservation,
MachineReservation::ptr & reservation,
std::shared_ptr<ActiveStep> activeStep);
void buildRemote(nix::ref<nix::Store> destStore,
MachineReservation::ptr & reservation,
Machine::ptr machine, Step::ptr step,
const BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers);


@@ -4,6 +4,7 @@ use strict;
use warnings;
use base 'Hydra::Base::Controller::REST';
use List::SomeUtils qw(any);
use Nix::Store;
use Hydra::Helper::Nix;
use Hydra::Helper::CatalystUtils;
@@ -29,7 +30,7 @@ sub getChannelData {
my $outputs = {};
foreach my $output (@outputs) {
my $outPath = $output->get_column("outpath");
next if $checkValidity && !$MACHINE_LOCAL_STORE->isValidPath($outPath);
next if $checkValidity && !isValidPath($outPath);
$outputs->{$output->get_column("outname")} = $outPath;
push @storePaths, $outPath;
# Put the system type in the manifest (for top-level


@@ -292,23 +292,6 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->response->body("");
}
sub push_gitea : Chained('api') PathPart('push-gitea') Args(0) {
my ($self, $c) = @_;
$c->{stash}->{json}->{jobsetsTriggered} = [];
my $in = $c->request->{data};
my $url = $in->{repository}->{clone_url} or die;
$url =~ s/.git$//;
print STDERR "got push from Gitea repository $url\n";
triggerJobset($self, $c, $_, 0) foreach $c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{ join => 'project'
, where => \ [ 'me.flake like ? or exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value like ?)', [ 'flake', "%$url%"], [ 'value', "%$url%" ] ]
});
$c->response->body("");
}
1;


@@ -10,6 +10,8 @@ use File::Basename;
use File::LibMagic;
use File::stat;
use Data::Dump qw(dump);
use Nix::Store;
use Nix::Config;
use List::SomeUtils qw(all);
use Encode;
use JSON::PP;
@@ -81,9 +83,9 @@ sub build_GET {
# false because `$_->path` will be empty
$c->stash->{available} =
$c->stash->{isLocalStore}
? all { $_->path && $MACHINE_LOCAL_STORE->isValidPath($_->path) } $build->buildoutputs->all
? all { $_->path && isValidPath($_->path) } $build->buildoutputs->all
: 1;
$c->stash->{drvAvailable} = $MACHINE_LOCAL_STORE->isValidPath($build->drvpath);
$c->stash->{drvAvailable} = isValidPath $build->drvpath;
if ($build->finished && $build->iscachedbuild) {
my $path = ($build->buildoutputs)[0]->path or undef;
@@ -310,7 +312,7 @@ sub output : Chained('buildChain') PathPart Args(1) {
error($c, "This build is not finished yet.") unless $build->finished;
my $output = $build->buildoutputs->find({name => $outputName});
notFound($c, "This build has no output named $outputName") unless defined $output;
gone($c, "Output is no longer available.") unless $MACHINE_LOCAL_STORE->isValidPath($output->path);
gone($c, "Output is no longer available.") unless isValidPath $output->path;
$c->response->header('Content-Disposition', "attachment; filename=\"build-${\$build->id}-${\$outputName}.nar.bz2\"");
$c->stash->{current_view} = 'NixNAR';
@@ -427,7 +429,7 @@ sub getDependencyGraph {
};
$$done{$path} = $node;
my @refs;
foreach my $ref ($MACHINE_LOCAL_STORE->queryReferences($path)) {
foreach my $ref (queryReferences($path)) {
next if $ref eq $path;
next unless $runtime || $ref =~ /\.drv$/;
getDependencyGraph($self, $c, $runtime, $done, $ref);
@@ -435,7 +437,7 @@ sub getDependencyGraph {
}
# Show in reverse topological order to flatten the graph.
# Should probably do a proper BFS.
my @sorted = reverse $MACHINE_LOCAL_STORE->topoSortPaths(@refs);
my @sorted = reverse topoSortPaths(@refs);
$node->{refs} = [map { $$done{$_} } @sorted];
}
@@ -448,7 +450,7 @@ sub build_deps : Chained('buildChain') PathPart('build-deps') {
my $build = $c->stash->{build};
my $drvPath = $build->drvpath;
error($c, "Derivation no longer available.") unless $MACHINE_LOCAL_STORE->isValidPath($drvPath);
error($c, "Derivation no longer available.") unless isValidPath $drvPath;
$c->stash->{buildTimeGraph} = getDependencyGraph($self, $c, 0, {}, $drvPath);
@@ -463,7 +465,7 @@ sub runtime_deps : Chained('buildChain') PathPart('runtime-deps') {
requireLocalStore($c);
error($c, "Build outputs no longer available.") unless all { $MACHINE_LOCAL_STORE->isValidPath($_) } @outPaths;
error($c, "Build outputs no longer available.") unless all { isValidPath($_) } @outPaths;
my $done = {};
$c->stash->{runtimeGraph} = [ map { getDependencyGraph($self, $c, 1, $done, $_) } @outPaths ];
@@ -483,7 +485,7 @@ sub nix : Chained('buildChain') PathPart('nix') CaptureArgs(0) {
if (isLocalStore) {
foreach my $out ($build->buildoutputs) {
notFound($c, "Path " . $out->path . " is no longer available.")
unless $MACHINE_LOCAL_STORE->isValidPath($out->path);
unless isValidPath($out->path);
}
}


@@ -364,6 +364,21 @@ sub evals_GET {
);
}
sub errors :Chained('jobsetChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
my $jobsetName = $c->stash->{params}->{name};
$c->stash->{jobset} = $c->stash->{project}->jobsets->find(
{ name => $jobsetName },
{ '+columns' => { 'errormsg' => 'errormsg' } }
);
$self->status_ok($c, entity => $c->stash->{jobset});
}
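The new `errors` route serves a jobset's evaluation errors as a standalone page rendered by the eval-error.tt template introduced later in this diff. A minimal request-level check, mirroring the test added near the end of this changeset (test-harness names assumed from Hydra's suite):

subtest "/jobset/PROJECT/JOBSET/errors" => sub {
    # request() and GET come from the Catalyst test harness used elsewhere.
    my $res = request(GET '/jobset/' . $project->name . '/' . $jobset->name . '/errors');
    ok($res->is_success, "the jobset errors page returns 200");
};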
# Redirect to the latest finished evaluation of this jobset.
sub latest_eval : Chained('jobsetChain') PathPart('latest-eval') {


@@ -86,6 +86,17 @@ sub view_GET {
);
}
sub errors :Chained('evalChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
$c->stash->{eval} = $c->model('DB::JobsetEvals')->find($c->stash->{eval}->id, { prefetch => 'evaluationerror' });
$self->status_ok($c, entity => $c->stash->{eval});
}
sub create_jobset : Chained('evalChain') PathPart('create-jobset') Args(0) {
my ($self, $c) = @_;


@@ -35,7 +35,6 @@ sub noLoginNeeded {
return $whitelisted ||
$c->request->path eq "api/push-github" ||
$c->request->path eq "api/push-gitea" ||
$c->request->path eq "google-login" ||
$c->request->path eq "github-redirect" ||
$c->request->path eq "github-login" ||
@@ -51,7 +50,6 @@ sub begin :Private {
$c->stash->{curUri} = $c->request->uri;
$c->stash->{version} = $ENV{"HYDRA_RELEASE"} || "<devel>";
$c->stash->{nixVersion} = $ENV{"NIX_RELEASE"} || "<devel>";
$c->stash->{nixEvalJobsVersion} = $ENV{"NIX_EVAL_JOBS_RELEASE"} || "<devel>";
$c->stash->{curTime} = time;
$c->stash->{logo} = defined $c->config->{hydra_logo} ? "/logo" : "";
$c->stash->{tracker} = defined $c->config->{tracker} ? $c->config->{tracker} : "";
@@ -82,7 +80,7 @@ sub begin :Private {
$_->supportedInputTypes($c->stash->{inputTypes}) foreach @{$c->hydra_plugins};
# XSRF protection: require POST requests to have the same origin.
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github" && $c->req->path ne "api/push-gitea") {
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github") {
my $referer = $c->req->header('Referer');
$referer //= $c->req->header('Origin');
my $base = $c->req->base;
@@ -162,7 +160,7 @@ sub status_GET {
{ "buildsteps.busy" => { '!=', 0 } },
{ order_by => ["globalpriority DESC", "id"],
join => "buildsteps",
columns => [@buildListColumns]
columns => [@buildListColumns, 'buildsteps.drvpath', 'buildsteps.type']
})]
);
}
@@ -331,7 +329,7 @@ sub nar :Local :Args(1) {
else {
$path = $Nix::Config::storeDir . "/$path";
gone($c, "Path " . $path . " is no longer available.") unless $MACHINE_LOCAL_STORE->isValidPath($path);
gone($c, "Path " . $path . " is no longer available.") unless isValidPath($path);
$c->stash->{current_view} = 'NixNAR';
$c->stash->{storePath} = $path;
@@ -369,7 +367,7 @@ sub realisations :Path('realisations') :Args(StrMatch[REALISATIONS_REGEX]) {
else {
my ($rawDrvOutput) = $realisation =~ REALISATIONS_REGEX;
my $rawRealisation = $MACHINE_LOCAL_STORE->queryRawRealisation($rawDrvOutput);
my $rawRealisation = queryRawRealisation($rawDrvOutput);
if (!$rawRealisation) {
$c->response->status(404);
@@ -398,7 +396,7 @@ sub narinfo :Path :Args(StrMatch[NARINFO_REGEX]) {
my ($hash) = $narinfo =~ NARINFO_REGEX;
die("Hash length was not 32") if length($hash) != 32;
my $path = $MACHINE_LOCAL_STORE->queryPathFromHashPart($hash);
my $path = queryPathFromHashPart($hash);
if (!$path) {
$c->response->status(404);


@@ -37,7 +37,16 @@ sub buildDiff {
my $n = 0;
foreach my $build (@{$builds}) {
my $aborted = $build->finished != 0 && ($build->buildstatus == 3 || $build->buildstatus == 4);
my $aborted = $build->finished != 0 && (
# aborted
$build->buildstatus == 3
# cancelled
|| $build->buildstatus == 4
# timeout
|| $build->buildstatus == 7
# log limit exceeded
|| $build->buildstatus == 10
);
my $d;
my $found = 0;
while ($n < scalar(@{$builds2})) {
@@ -79,4 +88,4 @@ sub buildDiff {
return $ret;
}
1;
1;
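The widened `$aborted` condition above now treats four terminal states as aborted-like. A compact sketch of the same classification (the helper name is hypothetical; the code-to-label pairs come straight from the comments in the hunk):

my %ABORTED_LIKE = (
    3  => 'aborted',
    4  => 'cancelled',
    7  => 'timeout',
    10 => 'log limit exceeded',
);

sub is_aborted_like {
    my ($build) = @_;
    return $build->finished != 0 && exists $ABORTED_LIKE{$build->buildstatus};
}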


@@ -40,11 +40,8 @@ our @EXPORT = qw(
registerRoot
restartBuilds
run
$MACHINE_LOCAL_STORE
);
our $MACHINE_LOCAL_STORE = Nix::Store->new();
sub getHydraHome {
my $dir = $ENV{"HYDRA_HOME"} or die "The HYDRA_HOME directory does not exist!\n";
@@ -174,9 +171,6 @@ sub getDrvLogPath {
for ($fn . $bucketed, $fn . $bucketed . ".bz2") {
return $_ if -f $_;
}
for ($fn . $bucketed, $fn . $bucketed . ".zst") {
return $_ if -f $_;
}
return undef;
}
@@ -193,10 +187,6 @@ sub findLog {
return undef if scalar @outPaths == 0;
# Filter out any NULLs. Content-addressed derivations
# that haven't built yet or failed to build may have a NULL outPath.
@outPaths = grep {defined} @outPaths;
my @steps = $c->model('DB::BuildSteps')->search(
{ path => { -in => [@outPaths] } },
{ select => ["drvpath"]
@@ -296,8 +286,7 @@ sub getEvals {
my @evals = $evals_result_set->search(
{ hasnewbuilds => 1 },
{ order_by => "$me.id DESC", rows => $rows, offset => $offset
, prefetch => { evaluationerror => [ ] } });
{ order_by => "$me.id DESC", rows => $rows, offset => $offset });
my @res = ();
my $cache = {};
@@ -504,7 +493,7 @@ sub restartBuilds {
$builds = $builds->search({ finished => 1 });
foreach my $build ($builds->search({}, { columns => ["drvpath"] })) {
next if !$MACHINE_LOCAL_STORE->isValidPath($build->drvpath);
next if !isValidPath($build->drvpath);
registerRoot $build->drvpath;
}


@@ -7,6 +7,7 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -37,9 +38,9 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedBazaarInputs')->search(
{uri => $uri, revision => $revision});
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
} else {
@@ -57,7 +58,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', $stdout;
# FIXME: time window between nix-prefetch-bzr and addTempRoot.
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedBazaarInputs')->create(


@@ -9,24 +9,11 @@ use Hydra::Helper::CatalystUtils;
sub stepFinished {
my ($self, $step, $logPath) = @_;
my $doCompress = $self->{config}->{'compress_build_logs'} // '1';
my $silent = $self->{config}->{'compress_build_logs_silent'} // '0';
my $compression = $self->{config}->{'compress_build_logs_compression'} // 'bzip2';
my $doCompress = $self->{config}->{'compress_build_logs'} // "1";
if (not -e $logPath or $doCompress ne "1") {
return;
}
if ($silent ne '1') {
print STDERR "compressing '$logPath' with $compression...\n";
}
if ($compression eq 'bzip2') {
system('bzip2', '--force', $logPath);
} elsif ($compression eq 'zstd') {
system('zstd', '--rm', '--quiet', '-T0', $logPath);
} else {
print STDERR "unknown compression type '$compression'\n";
if ($doCompress eq "1" && -e $logPath) {
print STDERR "compressing $logPath...\n";
system("bzip2", "--force", $logPath);
}
}
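For reference, the three settings read on the removed side of this hunk would be spelled as follows in hydra.conf (a sketch: the first two values are the code's defaults, and zstd is the alternative compressor it accepts):

compress_build_logs = 1
compress_build_logs_silent = 0
compress_build_logs_compression = zstd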


@@ -7,6 +7,7 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -57,7 +58,7 @@ sub fetchInput {
{uri => $uri, revision => $revision},
{rows => 1});
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$revision = $cachedInput->revision;
@@ -74,8 +75,8 @@ sub fetchInput {
die "darcs changes --count failed" if $? != 0;
system "rm", "-rf", "$tmpDir/export/_darcs";
$storePath = $MACHINE_LOCAL_STORE->addToStore("$tmpDir/export", 1, "sha256");
$sha256 = $MACHINE_LOCAL_STORE->queryPathHash($storePath);
$storePath = addToStore("$tmpDir/export", 1, "sha256");
$sha256 = queryPathHash($storePath);
$sha256 =~ s/sha256://;
$self->{db}->txn_do(sub {


@@ -186,9 +186,9 @@ sub fetchInput {
{uri => $uri, branch => $branch, revision => $revision, isdeepclone => defined($deepClone) ? 1 : 0},
{rows => 1});
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$revision = $cachedInput->revision;
@@ -217,7 +217,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', grab(cmd => ["nix-prefetch-git", $clonePath, $revision], chomp => 1);
# FIXME: time window between nix-prefetch-git and addTempRoot.
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedGitInputs')->update_or_create(


@@ -88,6 +88,10 @@ sub buildQueued {
common(@_, [], 0);
}
sub buildStarted {
common(@_, [], 1);
}
sub buildFinished {
common(@_, 2);
}


@@ -7,6 +7,7 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Nix;
use Hydra::Helper::Exec;
use Nix::Store;
use Fcntl qw(:flock);
sub supportedInputTypes {
@@ -67,9 +68,9 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedHgInputs')->search(
{uri => $uri, branch => $branch, revision => $revision});
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
} else {
@@ -84,7 +85,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', $stdout;
# FIXME: time window between nix-prefetch-hg and addTempRoot.
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedHgInputs')->update_or_create(


@@ -5,6 +5,7 @@ use warnings;
use parent 'Hydra::Plugin';
use POSIX qw(strftime);
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -29,7 +30,7 @@ sub fetchInput {
{srcpath => $uri, lastseen => {">", $timestamp - $timeout}},
{rows => 1, order_by => "lastseen DESC"});
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$timestamp = $cachedInput->timestamp;
@@ -45,7 +46,7 @@ sub fetchInput {
}
chomp $storePath;
$sha256 = ($MACHINE_LOCAL_STORE->queryPathInfo($storePath, 0))[1] or die;
$sha256 = (queryPathInfo($storePath, 0))[1] or die;
($cachedInput) = $self->{db}->resultset('CachedPathInputs')->search(
{srcpath => $uri, sha256hash => $sha256});


@@ -14,7 +14,6 @@ use Nix::Config;
use Nix::Store;
use Hydra::Model::DB;
use Hydra::Helper::CatalystUtils;
use Hydra::Helper::Nix;
sub isEnabled {
my ($self) = @_;
@@ -93,7 +92,7 @@ sub buildFinished {
my $hash = substr basename($path), 0, 32;
my ($deriver, $narHash, $time, $narSize, $refs) = queryPathInfo($path, 0);
my $system;
if (defined $deriver and $MACHINE_LOCAL_STORE->isValidPath($deriver)) {
if (defined $deriver and isValidPath($deriver)) {
$system = derivationFromPath($deriver)->{platform};
}
foreach my $reference (@{$refs}) {


@@ -7,6 +7,7 @@ use Digest::SHA qw(sha256_hex);
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use IPC::Run;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -44,9 +45,9 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedSubversionInputs')->search(
{uri => $uri, revision => $revision});
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
} else {
@@ -61,16 +62,16 @@ sub fetchInput {
die "error checking out Subversion repo at `$uri':\n$stderr" if $res;
if ($type eq "svn-checkout") {
$storePath = $MACHINE_LOCAL_STORE->addToStore($wcPath, 1, "sha256");
$storePath = addToStore($wcPath, 1, "sha256");
} else {
# Hm, if the Nix Perl bindings supported filters in
# addToStore(), then we wouldn't need to make a copy here.
my $tmpDir = File::Temp->newdir("hydra-svn-export.XXXXXX", CLEANUP => 1, TMPDIR => 1) or die;
(system "svn", "export", $wcPath, "$tmpDir/source", "--quiet") == 0 or die "svn export failed";
$storePath = $MACHINE_LOCAL_STORE->addToStore("$tmpDir/source", 1, "sha256");
$storePath = addToStore("$tmpDir/source", 1, "sha256");
}
$sha256 = $MACHINE_LOCAL_STORE->queryPathHash($storePath); $sha256 =~ s/sha256://;
$sha256 = queryPathHash($storePath); $sha256 =~ s/sha256://;
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedSubversionInputs')->update_or_create(


@@ -105,4 +105,6 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
1;


@@ -386,6 +386,8 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
sub supportsDynamicRunCommand {
my ($self) = @_;


@@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::EvaluationErrors;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}
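With this override in place, an ordinary query through the resultset returns rows carrying the cheap has_error flag instead of the potentially large errormsg text. A usage sketch (variable names hypothetical; the prefetch pattern matches the errors_GET action earlier in this diff):

my $eval = $db->resultset('JobsetEvals')->find($evalId, { prefetch => 'evaluationerror' });
if ($eval->evaluationerror->has_error) {
    # errormsg itself was not fetched; only its presence is known here.
}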


@@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::Jobsets;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}
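Callers that genuinely need the message opt back in by naming the column explicitly, which is exactly what the new errors_GET action and the updated eval-failed test in this diff do:

my $jobset = $project->jobsets->find(
    { name => $jobsetName },
    { '+columns' => { 'errormsg' => 'errormsg' } }
);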


@@ -8,7 +8,6 @@ use MIME::Base64;
use Nix::Manifest;
use Nix::Store;
use Nix::Utils;
use Hydra::Helper::Nix;
use base qw/Catalyst::View/;
sub process {
@@ -18,7 +17,7 @@ sub process {
$c->response->content_type('text/x-nix-narinfo'); # !!! check MIME type
my ($deriver, $narHash, $time, $narSize, $refs) = $MACHINE_LOCAL_STORE->queryPathInfo($storePath, 1);
my ($deriver, $narHash, $time, $narSize, $refs) = queryPathInfo($storePath, 1);
my $info;
$info .= "StorePath: $storePath\n";
@@ -29,8 +28,8 @@ sub process {
$info .= "References: " . join(" ", map { basename $_ } @{$refs}) . "\n";
if (defined $deriver) {
$info .= "Deriver: " . basename $deriver . "\n";
if ($MACHINE_LOCAL_STORE->isValidPath($deriver)) {
my $drv = $MACHINE_LOCAL_STORE->derivationFromPath($deriver);
if (isValidPath($deriver)) {
my $drv = derivationFromPath($deriver);
$info .= "System: $drv->{platform}\n";
}
}


@@ -16,10 +16,7 @@ sub process {
my $tail = int($c->stash->{tail} // "0");
if ($logPath =~ /\.zst$/) {
my $doTail = $tail ? "| tail -n '$tail'" : "";
open($fh, "-|", "zstd -dc < '$logPath' $doTail") or die;
} elsif ($logPath =~ /\.bz2$/) {
if ($logPath =~ /\.bz2$/) {
my $doTail = $tail ? "| tail -n '$tail'" : "";
open($fh, "-|", "bzip2 -dc < '$logPath' $doTail") or die;
} else {

src/lib/Makefile.am Normal file

@@ -0,0 +1,22 @@
PERL_MODULES = \
$(wildcard *.pm) \
$(wildcard Hydra/*.pm) \
$(wildcard Hydra/Helper/*.pm) \
$(wildcard Hydra/Model/*.pm) \
$(wildcard Hydra/View/*.pm) \
$(wildcard Hydra/Schema/*.pm) \
$(wildcard Hydra/Schema/Result/*.pm) \
$(wildcard Hydra/Schema/ResultSet/*.pm) \
$(wildcard Hydra/Controller/*.pm) \
$(wildcard Hydra/Base/*.pm) \
$(wildcard Hydra/Base/Controller/*.pm) \
$(wildcard Hydra/Script/*.pm) \
$(wildcard Hydra/Component/*.pm) \
$(wildcard Hydra/Event/*.pm) \
$(wildcard Hydra/Plugin/*.pm)
EXTRA_DIST = \
$(PERL_MODULES)
hydradir = $(libexecdir)/hydra/lib
nobase_hydra_DATA = $(PERL_MODULES)


@@ -1,5 +0,0 @@
libhydra_inc = include_directories('.')
libhydra_dep = declare_dependency(
include_directories: [libhydra_inc],
)


@@ -1,85 +0,0 @@
# Native code
subdir('libhydra')
subdir('hydra-evaluator')
subdir('hydra-queue-runner')
hydra_libexecdir = get_option('libexecdir') / 'hydra'
# Data and interpreted
foreach dir : ['lib', 'root']
install_subdir(dir,
install_dir: hydra_libexecdir,
)
endforeach
subdir('sql')
subdir('ttf')
# Static files for website
hydra_libexecdir_static = hydra_libexecdir / 'root' / 'static'
## Bootstrap
bootstrap_name = 'bootstrap-4.3.1-dist'
bootstrap = custom_target(
'extract-bootstrap',
input: 'root' / (bootstrap_name + '.zip'),
output: bootstrap_name,
command: ['unzip', '-u', '-d', '@OUTDIR@', '@INPUT@'],
)
custom_target(
'name-bootstrap',
input: bootstrap,
output: 'bootstrap',
command: ['cp', '-r', '@INPUT@' , '@OUTPUT@'],
install: true,
install_dir: hydra_libexecdir_static,
)
## Flot
custom_target(
'extract-flot',
input: 'root' / 'flot-0.8.3.zip',
output: 'flot',
command: ['unzip', '-u', '-d', '@OUTDIR@', '@INPUT@'],
install: true,
install_dir: hydra_libexecdir_static / 'js',
)
## Fontawesome
fontawesome_name = 'fontawesome-free-5.10.2-web'
fontawesome = custom_target(
'extract-fontawesome',
input: 'root' / (fontawesome_name + '.zip'),
output: fontawesome_name,
command: ['unzip', '-u', '-d', '@OUTDIR@', '@INPUT@'],
)
custom_target(
'name-fontawesome-css',
input: fontawesome,
output: 'css',
command: ['cp', '-r', '@INPUT@/css', '@OUTPUT@'],
install: true,
install_dir: hydra_libexecdir_static / 'fontawesome',
)
custom_target(
'name-fontawesome-webfonts',
input: fontawesome,
output: 'webfonts',
command: ['cp', '-r', '@INPUT@/webfonts', '@OUTPUT@'],
install: true,
install_dir: hydra_libexecdir_static / 'fontawesome',
)
# Scripts
install_subdir('script',
install_dir: get_option('bindir'),
exclude_files: [
'hydra-dev-server',
],
install_mode: 'rwxr-xr-x',
strip_directory: true,
)

src/root/Makefile.am Normal file

@@ -0,0 +1,39 @@
TEMPLATES = $(wildcard *.tt)
STATIC = \
$(wildcard static/images/*) \
$(wildcard static/css/*) \
static/js/bootbox.min.js \
static/js/popper.min.js \
static/js/common.js \
static/js/jquery/jquery-3.4.1.min.js \
static/js/jquery/jquery-ui-1.10.4.min.js
FLOT = flot-0.8.3.zip
BOOTSTRAP = bootstrap-4.3.1-dist.zip
FONTAWESOME = fontawesome-free-5.10.2-web.zip
ZIPS = $(FLOT) $(BOOTSTRAP) $(FONTAWESOME)
EXTRA_DIST = $(TEMPLATES) $(STATIC) $(ZIPS)
hydradir = $(libexecdir)/hydra/root
nobase_hydra_DATA = $(EXTRA_DIST)
all:
mkdir -p $(srcdir)/static/js
unzip -u -d $(srcdir)/static $(BOOTSTRAP)
rm -rf $(srcdir)/static/bootstrap
mv $(srcdir)/static/$(basename $(BOOTSTRAP)) $(srcdir)/static/bootstrap
unzip -u -d $(srcdir)/static/js $(FLOT)
unzip -u -d $(srcdir)/static $(FONTAWESOME)
rm -rf $(srcdir)/static/fontawesome
mv $(srcdir)/static/$(basename $(FONTAWESOME)) $(srcdir)/static/fontawesome
install-data-local: $(ZIPS)
mkdir -p $(hydradir)/static/js
cp -prvd $(srcdir)/static/js/* $(hydradir)/static/js
mkdir -p $(hydradir)/static/bootstrap
cp -prvd $(srcdir)/static/bootstrap/* $(hydradir)/static/bootstrap
mkdir -p $(hydradir)/static/fontawesome/{css,webfonts}
cp -prvd $(srcdir)/static/fontawesome/css/* $(hydradir)/static/fontawesome/css
cp -prvd $(srcdir)/static/fontawesome/webfonts/* $(hydradir)/static/fontawesome/webfonts


@@ -61,21 +61,7 @@ END;
<td>[% IF step.busy != 0 || ((step.machine || step.starttime) && (step.status == 0 || step.status == 1 || step.status == 3 || step.status == 4 || step.status == 7)); INCLUDE renderMachineName machine=step.machine; ELSE; "<em>n/a</em>"; END %]</td>
<td class="step-status">
[% IF step.busy != 0 %]
[% IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END %]
[% INCLUDE renderBusyStatus %]
[% ELSIF step.status == 0 %]
[% IF step.isnondeterministic %]
<span class="warn">Succeeded with non-determistic result</span>


@@ -91,6 +91,17 @@ BLOCK renderDuration;
duration % 60 %]s[%
END;
BLOCK renderDrvInfo;
drvname = step.drvpath
.substr(11) # strip `/nix/store/`
.split('-').slice(1).join("-") # strip hash part
.substr(0, -4); # strip `.drv`
IF drvname != releasename;
IF step.type == 0; action = "Build"; ELSE; action = "Substitution"; END;
IF drvname; %]<em> ([% action %] of [% drvname %])</em>[% END;
END;
END;
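A rough Perl equivalent of the renderDrvInfo transformation above, assuming the conventional store-path layout /nix/store/<32-char hash>-<name>.drv (sample path made up):

my $drvpath = "/nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-hello-2.12.1.drv";
(my $drvname = $drvpath) =~ s{^/nix/store/[a-z0-9]{32}-(.*)\.drv$}{$1};
# $drvname is now "hello-2.12.1"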
BLOCK renderBuildListHeader %]
<table class="table table-striped table-condensed clickable-rows">
@@ -131,7 +142,12 @@ BLOCK renderBuildListBody;
[% END %]
<td><a class="row-link" href="[% link %]">[% build.id %]</a></td>
[% IF !hideJobName %]
<td><a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</td>
<td>
<a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</a>
[% IF showStepName %]
[% INCLUDE renderDrvInfo step=build.buildsteps releasename=build.nixname %]
[% END %]
</td>
[% END %]
<td class="nowrap">[% t = showSchedulingInfo ? build.timestamp : build.stoptime; IF t; INCLUDE renderRelativeDate timestamp=(showSchedulingInfo ? build.timestamp : build.stoptime); ELSE; "-"; END %]</td>
<td>[% !showSchedulingInfo and build.get_column('releasename') ? build.get_column('releasename') : build.nixname %]</td>
@@ -245,6 +261,27 @@ BLOCK renderBuildStatusIcon;
END;
BLOCK renderBusyStatus;
IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 35 %]
<strong>Waiting to receive outputs</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END;
END;
BLOCK renderStatus;
IF build.finished;
buildstatus = build.buildstatus;
@@ -374,7 +411,7 @@ BLOCK renderInputDiff; %]
[% ELSIF bi1.uri == bi2.uri && bi1.revision != bi2.revision %]
[% IF bi1.type == "git" %]
<tr><td>
<b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 12) _ ' to ' _ bi2.revision.substr(0, 12)) %]</tt>
<b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 8) _ ' to ' _ bi2.revision.substr(0, 8)) %]</tt>
</td></tr>
[% ELSE %]
<tr><td>
@@ -476,7 +513,7 @@ BLOCK renderEvals %]
ELSE %]
-
[% END %]
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<span class="badge badge-warning">Eval Errors</span>
[% END %]
</td>
@@ -602,7 +639,7 @@ BLOCK renderJobsetOverview %]
<td>[% HTML.escape(j.description) %]</td>
<td>[% IF j.lastcheckedtime;
INCLUDE renderDateTime timestamp = j.lastcheckedtime;
IF j.errormsg || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
IF j.has_error || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
ELSE; "-";
END %]</td>
[% IF j.get_column('nrtotal') > 0 %]

src/root/eval-error.tt Normal file

@@ -0,0 +1,26 @@
[% PROCESS common.tt %]
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
[% INCLUDE style.tt %]
</head>
<body>
<div class="tab-content tab-pane">
<div id="tabs-errors" class="">
[% IF jobset %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
[% ELSIF eval %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
[% END %]
</div>
</div>
</body>
</html>


@@ -65,7 +65,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
[% IF aborted.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted Jobs ([% aborted.size %])</span></a></li>
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted / Timed out Jobs ([% aborted.size %])</span></a></li>
[% END %]
[% IF nowFail.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-now-fail" data-toggle="tab"><span class="text-warning">Newly Failing Jobs ([% nowFail.size %])</span></a></li>
@@ -90,7 +90,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-inputs" data-toggle="tab">Inputs</a></li>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
</ul>
@@ -108,13 +108,6 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
<div class="tab-content">
[% IF eval.evaluationerror.errormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
</div>
[% END %]
<div id="tabs-aborted" class="tab-pane">
[% INCLUDE renderSome builds=aborted tabname="#tabs-aborted" %]
</div>
@@ -172,10 +165,9 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
</div>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for(c.controller('JobsetEval').action_for('errors'), [eval.id], params) %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]
</div>


@@ -61,7 +61,7 @@
[% END %]
<li class="nav-item"><a class="nav-link active" href="#tabs-evaluations" data-toggle="tab">Evaluations</a></li>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-jobs" data-toggle="tab">Jobs</a></li>
@@ -79,7 +79,7 @@
<th>Last checked:</th>
<td>
[% IF jobset.lastcheckedtime %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.errormsg || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.has_error || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% ELSE %]
<em>never</em>
[% END %]
@@ -117,10 +117,9 @@
</div>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for('/jobset' project.name jobset.name "errors") %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]


@@ -10,31 +10,7 @@
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before bootstrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>
[% INCLUDE style.tt %]
[% IF c.config.enable_google_login %]
<meta name="google-signin-client_id" content="[% c.config.google_client_id %]">
@@ -93,7 +69,7 @@
<footer class="navbar">
<hr />
<small>
<em><a href="http://nixos.org/hydra" target="_blank" class="squiggle">Hydra</a> [% HTML.escape(version) %] (using [% HTML.escape(nixVersion) %] and [% HTML.escape(nixEvalJobsVersion) %]).</em>
<em><a href="http://nixos.org/hydra" target="_blank" class="squiggle">Hydra</a> [% HTML.escape(version) %] (using [% HTML.escape(nixVersion) %]).</em>
[% IF c.user_exists %]
You are signed in as <tt>[% HTML.escape(c.user.username) %]</tt>
[%- IF c.user.type == 'google' %] via Google[% END %].


@@ -6,10 +6,10 @@
<thead>
<tr>
<th>Job</th>
<th>System</th>
<th>Build</th>
<th>Step</th>
<th>What</th>
<th>Status</th>
<th>Since</th>
</tr>
</thead>
@@ -40,10 +40,10 @@
[% idle = 0 %]
<tr>
<td><tt>[% INCLUDE renderFullJobName project=step.project jobset=step.jobset job=step.job %]</tt></td>
<td><tt>[% step.system %]</tt></td>
<td><a href="[% c.uri_for('/build' step.build) %]">[% step.build %]</a></td>
<td>[% IF step.busy >= 30 %]<a class="row-link" href="[% c.uri_for('/build' step.build 'nixlog' step.stepnr 'tail') %]">[% step.stepnr %]</a>[% ELSE; step.stepnr; END %]</td>
<td><tt>[% step.drvpath.match('-(.*)').0 %]</tt></td>
<td>[% INCLUDE renderBusyStatus %]</td>
<td style="width: 10em">[% INCLUDE renderDuration duration = curTime - step.starttime %] </td>
</tr>
[% END %]


@@ -7,7 +7,7 @@ main() {
set -e
tmpDir=$(realpath "${TMPDIR:-/tmp}")/build-[% build.id +%]
tmpDir=${TMPDIR:-/tmp}/build-[% build.id +%]
declare -a args extraArgs


@@ -129,6 +129,12 @@ $(document).ready(function() {
el.addClass("is-local");
}
});
[...document.getElementsByTagName("iframe")].forEach((element) => {
element.contentWindow.addEventListener("DOMContentLoaded", (_) => {
element.style.height = element.contentWindow.document.body.scrollHeight + 'px';
})
})
});
var tabsLoaded = {};


@@ -7,7 +7,7 @@
[% ELSE %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 showStepName=1 %]
[% END %]

src/root/style.tt Normal file

@@ -0,0 +1,24 @@
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before bootstrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>

src/script/Makefile.am Normal file

@@ -0,0 +1,19 @@
EXTRA_DIST = \
$(distributable_scripts)
distributable_scripts = \
hydra-backfill-ids \
hydra-init \
hydra-eval-jobset \
hydra-server \
hydra-update-gc-roots \
hydra-s3-backup-collect-garbage \
hydra-create-user \
hydra-notify \
hydra-send-stats \
nix-prefetch-git \
nix-prefetch-bzr \
nix-prefetch-hg
bin_SCRIPTS = \
$(distributable_scripts)


@@ -17,7 +17,6 @@ use Hydra::Helper::Nix;
use Hydra::Model::DB;
use Hydra::Plugin;
use Hydra::Schema;
use IPC::Run;
use JSON::MaybeXS;
use Net::Statsd;
use Nix::Store;
@@ -86,14 +85,14 @@ sub attrsToSQL {
# Fetch a store path from 'eval_substituter' if not already present.
sub getPath {
my ($path) = @_;
return 1 if $MACHINE_LOCAL_STORE->isValidPath($path);
return 1 if isValidPath($path);
my $substituter = $config->{eval_substituter};
system("nix", "--experimental-features", "nix-command", "copy", "--from", $substituter, "--", $path)
if defined $substituter;
return $MACHINE_LOCAL_STORE->isValidPath($path);
return isValidPath($path);
}
@@ -144,7 +143,7 @@ sub fetchInputBuild {
, version => $version
, outputName => $mainOutput->name
};
if ($MACHINE_LOCAL_STORE->isValidPath($prevBuild->drvpath)) {
if (isValidPath($prevBuild->drvpath)) {
$result->{drvPath} = $prevBuild->drvpath;
}
@@ -234,7 +233,7 @@ sub fetchInputEval {
my $out = $build->buildoutputs->find({ name => "out" });
next unless defined $out;
# FIXME: Should we fail if the path is not valid?
next unless $MACHINE_LOCAL_STORE->isValidPath($out->path);
next unless isValidPath($out->path);
$jobs->{$build->get_column('job')} = $out->path;
}
@@ -358,32 +357,22 @@ sub evalJobs {
my @cmd;
if (defined $flakeRef) {
my $nix_expr =
"let " .
"flake = builtins.getFlake (toString \"$flakeRef\"); " .
"in " .
"flake.hydraJobs " .
"or flake.checks " .
"or (throw \"flake '$flakeRef' does not provide any Hydra jobs or checks\")";
@cmd = ("nix-eval-jobs", "--expr", $nix_expr);
@cmd = ("hydra-eval-jobs",
"--flake", $flakeRef,
"--gc-roots-dir", getGCRootsDir,
"--max-jobs", 1);
} else {
my $nixExprInput = $inputInfo->{$nixExprInputName}->[0]
or die "cannot find the input containing the job expression\n";
@cmd = ("nix-eval-jobs",
@cmd = ("hydra-eval-jobs",
"<" . $nixExprInputName . "/" . $nixExprPath . ">",
"--gc-roots-dir", getGCRootsDir,
"--max-jobs", 1,
inputsToArgs($inputInfo));
}
push @cmd, ("--gc-roots-dir", getGCRootsDir);
push @cmd, ("--max-jobs", 1);
push @cmd, "--meta";
push @cmd, "--constituents";
push @cmd, "--force-recurse";
push @cmd, ("--option", "allow-import-from-derivation", "false") if $config->{allow_import_from_derivation} // "true" ne "true";
push @cmd, ("--workers", $config->{evaluator_workers} // 1);
push @cmd, ("--max-memory-size", $config->{evaluator_max_memory_size} // 4096);
push @cmd, "--no-allow-import-from-derivation" if $config->{allow_import_from_derivation} // "true" ne "true";
if (defined $ENV{'HYDRA_DEBUG'}) {
sub escape {
@@ -395,40 +384,14 @@ sub evalJobs {
print STDERR "evaluator: @escaped\n";
}
my $evalProc = IPC::Run::start \@cmd,
'>', IPC::Run::new_chunker, \my $out,
'2>', \my $err;
(my $res, my $jobsJSON, my $stderr) = captureStdoutStderr(21600, @cmd);
die "hydra-eval-jobs returned " . ($res & 127 ? "signal $res" : "exit code " . ($res >> 8))
. ":\n" . ($stderr ? decode("utf-8", $stderr) : "(no output)\n")
if $res;
return sub {
while (1) {
$evalProc->pump;
if (!defined $out && !defined $err) {
$evalProc->finish;
if ($?) {
die "nix-eval-jobs returned " . ($? & 127 ? "signal $?" : "exit code " . ($? >> 8)) . "\n";
}
return;
}
print STDERR "$stderr";
if (defined $err) {
print STDERR "$err";
undef $err;
}
if (defined $out && $out ne '') {
my $job;
try {
$job = decode_json($out);
} catch {
warn "nix-eval-jobs sent invalid JSON.\n parse error: $_\n invalid json: $out\n";
};
undef $out;
if (defined $job) {
return $job;
}
}
}
};
return decode_json($jobsJSON);
}
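The two sides of this hunk differ in shape, not just in binary name: the nix-eval-jobs side streams results, yielding one decoded job at a time, while the hydra-eval-jobs side captures the whole JSON document and decodes it once. A condensed sketch of the streaming pattern, mirroring the removed code (error handling trimmed; decode_json comes from JSON::MaybeXS as imported above):

my $evalProc = IPC::Run::start \@cmd,
    '>', IPC::Run::new_chunker, \my $out,   # one JSON object per chunk
    '2>', \my $err;
while (1) {
    $evalProc->pump;
    if (!defined $out && !defined $err) {   # child exited, buffers drained
        $evalProc->finish;
        last;
    }
    if (defined $err) { print STDERR $err; undef $err; }
    next unless defined $out && $out ne '';
    my $job = eval { decode_json($out) };
    undef $out;
    # ... yield $job to the caller ...
}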
@@ -457,7 +420,7 @@ sub checkBuild {
my $firstOutputName = $outputNames[0];
my $firstOutputPath = $buildInfo->{outputs}->{$firstOutputName};
my $jobName = $buildInfo->{attr} or die;
my $jobName = $buildInfo->{jobName} or die;
my $drvPath = $buildInfo->{drvPath} or die;
my $build;
@@ -511,30 +474,9 @@ sub checkBuild {
my $time = time();
sub getMeta {
my ($s, $def) = @_;
return ($s || "") eq "" ? $def : $s;
}
sub getMetaStrings {
my ($v, $k, $acc) = @_;
my $t = ref $v;
if ($t eq 'HASH') {
push @$acc, $v->{$k} if exists $v->{$k};
} elsif ($t eq 'ARRAY') {
getMetaStrings($_, $k, $acc) foreach @$v;
} elsif (defined $v) {
push @$acc, $v;
}
}
sub getMetaConcatStrings {
my ($v, $k) = @_;
my @strings;
getMetaStrings($v, $k, \@strings);
return join(", ", @strings) || undef;
sub null {
my ($s) = @_;
return $s eq "" ? undef : $s;
}
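The removed getMeta* helpers exist because nixpkgs meta fields are polymorphic: a license may be a string, an attrset with a shortName, or a list of either. Given the definitions shown in this hunk, a sample run (data made up):

my $meta = {
    license     => [ { shortName => 'mit' }, { shortName => 'gpl3Plus' } ],
    maintainers => [ { email => 'alice@example.org' } ],
};
# getMetaConcatStrings($meta->{license}, "shortName")  => "mit, gpl3Plus"
# getMetaConcatStrings($meta->{maintainers}, "email")  => "alice@example.org"
# getMeta($buildInfo->{meta}->{timeout}, 36000)        => 36000 when unset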
# Add the build to the database.
@@ -542,19 +484,19 @@ sub checkBuild {
{ timestamp => $time
, jobset_id => $jobset->id
, job => $jobName
, description => getMeta($buildInfo->{meta}->{description}, undef)
, license => getMetaConcatStrings($buildInfo->{meta}->{license}, "shortName")
, homepage => getMeta($buildInfo->{meta}->{homepage}, undef)
, maintainers => getMetaConcatStrings($buildInfo->{meta}->{maintainers}, "email")
, maxsilent => getMeta($buildInfo->{meta}->{maxSilent}, 7200)
, timeout => getMeta($buildInfo->{meta}->{timeout}, 36000)
, nixname => $buildInfo->{name}
, description => null($buildInfo->{description})
, license => null($buildInfo->{license})
, homepage => null($buildInfo->{homepage})
, maintainers => null($buildInfo->{maintainers})
, maxsilent => $buildInfo->{maxSilent}
, timeout => $buildInfo->{timeout}
, nixname => $buildInfo->{nixName}
, drvpath => $drvPath
, system => $buildInfo->{system}
, priority => getMeta($buildInfo->{meta}->{schedulingPriority}, 100)
, priority => $buildInfo->{schedulingPriority}
, finished => 0
, iscurrent => 1
, ischannel => getMeta($buildInfo->{meta}->{isChannel}, 0)
, ischannel => $buildInfo->{isChannel}
});
$build->buildoutputs->create({ name => $_, path => $buildInfo->{outputs}->{$_} })
@@ -723,7 +665,7 @@ sub checkJobsetWrapped {
return;
}
# Hash the arguments to nix-eval-jobs and check the
# Hash the arguments to hydra-eval-jobs and check the
# JobsetInputHashes to see if the previous evaluation had the same
# inputs. If so, bail out.
my @args = ($jobset->nixexprinput // "", $jobset->nixexprpath // "", inputsToArgs($inputInfo));
@@ -745,12 +687,19 @@ sub checkJobsetWrapped {
# Evaluate the job expression.
my $evalStart = clock_gettime(CLOCK_MONOTONIC);
my $evalStop;
my $jobsIter = evalJobs($project->name . ":" . $jobset->name, $inputInfo, $jobset->nixexprinput, $jobset->nixexprpath, $flakeRef);
my $jobs = evalJobs($project->name . ":" . $jobset->name, $inputInfo, $jobset->nixexprinput, $jobset->nixexprpath, $flakeRef);
my $evalStop = clock_gettime(CLOCK_MONOTONIC);
if ($jobsetsJobset) {
my @keys = keys %$jobs;
die "The .jobsets jobset must only have a single job named 'jobsets'"
unless (scalar @keys) == 1 && $keys[0] eq "jobsets";
}
Net::Statsd::timing("hydra.evaluator.eval_time", int(($evalStop - $evalStart) * 1000));
if ($dryRun) {
while (defined(my $job = $jobsIter->())) {
my $name = $job->{attr};
foreach my $name (keys %{$jobs}) {
my $job = $jobs->{$name};
if (defined $job->{drvPath}) {
print STDERR "good job $name: $job->{drvPath}\n";
} else {
@@ -760,20 +709,36 @@ sub checkJobsetWrapped {
return;
}
die "Jobset contains a job with an empty name. Make sure the jobset evaluates to an attrset of jobs.\n"
if defined $jobs->{""};
$jobs->{$_}->{jobName} = $_ for keys %{$jobs};
my $jobOutPathMap = {};
my $jobsetChanged = 0;
my $dbStart = clock_gettime(CLOCK_MONOTONIC);
# Store the error messages for jobs that failed to evaluate.
my $evaluationErrorTime = time;
my $evaluationErrorMsg = "";
foreach my $job (values %{$jobs}) {
next unless defined $job->{error};
$evaluationErrorMsg .=
($job->{jobName} ne "" ? "in job $job->{jobName}" : "at top-level") .
":\n" . $job->{error} . "\n\n";
}
setJobsetError($jobset, $evaluationErrorMsg, $evaluationErrorTime);
my $evaluationErrorRecord = $db->resultset('EvaluationErrors')->create(
{ errormsg => $evaluationErrorMsg
, errortime => $evaluationErrorTime
}
);
my $jobOutPathMap = {};
my $jobsetChanged = 0;
my %buildMap;
$db->txn_do(sub {
my $prevEval = getPrevJobsetEval($db, $jobset, 1);
# Clear the "current" flag on all builds. Since we're in a
@@ -786,7 +751,7 @@ sub checkJobsetWrapped {
, evaluationerror => $evaluationErrorRecord
, timestamp => time
, checkouttime => abs(int($checkoutStop - $checkoutStart))
, evaltime => 0
, evaltime => abs(int($evalStop - $evalStart))
, hasnewbuilds => 0
, nrbuilds => 0
, flake => $flakeRef
@@ -794,24 +759,11 @@ sub checkJobsetWrapped {
, nixexprpath => $jobset->nixexprpath
});
my @jobsWithConstituents;
while (defined(my $job = $jobsIter->())) {
if ($jobsetsJobset) {
die "The .jobsets jobset must only have a single job named 'jobsets'"
unless $job->{attr} eq "jobsets";
}
$evaluationErrorMsg .=
($job->{attr} ne "" ? "in job $job->{attr}" : "at top-level") .
":\n" . $job->{error} . "\n\n" if defined $job->{error};
checkBuild($db, $jobset, $ev, $inputInfo, $job, \%buildMap, $prevEval, $jobOutPathMap, $plugins)
unless defined $job->{error};
if (defined $job->{constituents}) {
push @jobsWithConstituents, $job;
}
# Schedule each successfully evaluated job.
foreach my $job (permute(values %{$jobs})) {
next if defined $job->{error};
#print STDERR "considering job " . $project->name, ":", $jobset->name, ":", $job->{jobName} . "\n";
checkBuild($db, $jobset, $ev, $inputInfo, $job, \%buildMap, $prevEval, $jobOutPathMap, $plugins);
}
# Have any builds been added or removed since last time?
@@ -849,20 +801,21 @@ sub checkJobsetWrapped {
$drvPathToId{$x->{drvPath}} = $x;
}
foreach my $job (values @jobsWithConstituents) {
next unless defined $job->{constituents};
foreach my $job (values %{$jobs}) {
next unless $job->{constituents};
if (defined $job->{error}) {
die "aggregate job $job->{attr} failed with the error: $job->{error}\n";
die "aggregate job $job->{jobName} failed with the error: $job->{error}\n";
}
my $x = $drvPathToId{$job->{drvPath}} or
die "aggregate job $job->{attr} has no corresponding build record.\n";
die "aggregate job $job->{jobName} has no corresponding build record.\n";
foreach my $drvPath (@{$job->{constituents}}) {
my $constituent = $drvPathToId{$drvPath};
if (defined $constituent) {
$db->resultset('AggregateConstituents')->update_or_create({aggregate => $x->{id}, constituent => $constituent->{id}});
} else {
warn "aggregate job $job->{attr} has a constituent $drvPath that doesn't correspond to a Hydra build\n";
warn "aggregate job $job->{jobName} has a constituent $drvPath that doesn't correspond to a Hydra build\n";
}
}
}
@@ -904,15 +857,11 @@ sub checkJobsetWrapped {
$jobset->update({ enabled => 0 }) if $jobset->enabled == 2;
$jobset->update({ lastcheckedtime => time, forceeval => undef });
$evaluationErrorRecord->update({ errormsg => $evaluationErrorMsg });
setJobsetError($jobset, $evaluationErrorMsg, $evaluationErrorTime);
$evalStop = clock_gettime(CLOCK_MONOTONIC);
$ev->update({ evaltime => abs(int($evalStop - $evalStart)) });
});
Net::Statsd::timing("hydra.evaluator.eval_time", int(($evalStop - $evalStart) * 1000));
my $dbStop = clock_gettime(CLOCK_MONOTONIC);
Net::Statsd::timing("hydra.evaluator.db_time", int(($dbStop - $dbStart) * 1000));
Net::Statsd::increment("hydra.evaluator.evals");
Net::Statsd::increment("hydra.evaluator.cached_evals") unless $jobsetChanged;
}


@@ -5,6 +5,7 @@ use warnings;
use File::Path;
use File::stat;
use File::Basename;
use Nix::Store;
use Hydra::Config;
use Hydra::Schema;
use Hydra::Helper::Nix;
@@ -46,7 +47,7 @@ sub keepBuild {
$build->finished && ($build->buildstatus == 0 || $build->buildstatus == 6))
{
foreach my $path (split / /, $build->get_column('outpaths')) {
if ($MACHINE_LOCAL_STORE->isValidPath($path)) {
if (isValidPath($path)) {
addRoot $path;
} else {
print STDERR " warning: output ", $path, " has disappeared\n" if $build->finished;
@@ -54,7 +55,7 @@ sub keepBuild {
}
}
if (!$build->finished || ($keepFailedDrvs && $build->buildstatus != 0)) {
if ($MACHINE_LOCAL_STORE->isValidPath($build->drvpath)) {
if (isValidPath($build->drvpath)) {
addRoot $build->drvpath;
} else {
print STDERR " warning: derivation ", $build->drvpath, " has disappeared\n";


@@ -78,7 +78,7 @@ fi
init_remote(){
local url=$1;
git init --initial-branch=trunk;
git init;
git remote add origin $url;
}

src/sql/Makefile.am Normal file

@@ -0,0 +1,9 @@
sqldir = $(libexecdir)/hydra/sql
nobase_dist_sql_DATA = \
hydra.sql \
test.sql \
upgrade-*.sql \
update-dbix.pl
update-dbix: hydra.sql
./update-dbix-harness.sh


@@ -1,90 +0,0 @@
sql_files = files(
'hydra.sql',
'test.sql',
'update-dbix.pl',
'upgrade-2.sql',
'upgrade-3.sql',
'upgrade-4.sql',
'upgrade-5.sql',
'upgrade-6.sql',
'upgrade-7.sql',
'upgrade-8.sql',
'upgrade-9.sql',
'upgrade-10.sql',
'upgrade-11.sql',
'upgrade-12.sql',
'upgrade-13.sql',
'upgrade-14.sql',
'upgrade-15.sql',
'upgrade-16.sql',
'upgrade-17.sql',
'upgrade-18.sql',
'upgrade-19.sql',
'upgrade-20.sql',
'upgrade-21.sql',
'upgrade-22.sql',
'upgrade-23.sql',
'upgrade-24.sql',
'upgrade-25.sql',
'upgrade-26.sql',
'upgrade-27.sql',
'upgrade-28.sql',
'upgrade-29.sql',
'upgrade-30.sql',
'upgrade-31.sql',
'upgrade-32.sql',
'upgrade-33.sql',
'upgrade-34.sql',
'upgrade-35.sql',
'upgrade-36.sql',
'upgrade-37.sql',
'upgrade-38.sql',
'upgrade-39.sql',
'upgrade-40.sql',
'upgrade-41.sql',
'upgrade-42.sql',
'upgrade-43.sql',
'upgrade-44.sql',
'upgrade-45.sql',
'upgrade-46.sql',
'upgrade-47.sql',
'upgrade-48.sql',
'upgrade-49.sql',
'upgrade-50.sql',
'upgrade-51.sql',
'upgrade-52.sql',
'upgrade-53.sql',
'upgrade-54.sql',
'upgrade-55.sql',
'upgrade-56.sql',
'upgrade-57.sql',
'upgrade-58.sql',
'upgrade-59.sql',
'upgrade-60.sql',
'upgrade-61.sql',
'upgrade-62.sql',
'upgrade-63.sql',
'upgrade-64.sql',
'upgrade-65.sql',
'upgrade-66.sql',
'upgrade-67.sql',
'upgrade-68.sql',
'upgrade-69.sql',
'upgrade-70.sql',
'upgrade-71.sql',
'upgrade-72.sql',
'upgrade-73.sql',
'upgrade-74.sql',
'upgrade-75.sql',
'upgrade-76.sql',
'upgrade-77.sql',
'upgrade-78.sql',
'upgrade-79.sql',
'upgrade-80.sql',
'upgrade-81.sql',
'upgrade-82.sql',
'upgrade-83.sql',
'upgrade-84.sql',
)
install_data(sql_files, install_dir: hydra_libexecdir / 'sql')

src/ttf/Makefile.am Normal file

@@ -0,0 +1,4 @@
EXTRA_DIST = COPYING.LIB StayPuft.ttf
ttfdir = $(libexecdir)/hydra/ttf
nobase_ttf_DATA = $(EXTRA_DIST)


@@ -1,5 +0,0 @@
data_files = files(
'StayPuft.ttf',
'COPYING.LIB',
)
install_data(data_files, install_dir: hydra_libexecdir / 'ttf')


@@ -54,14 +54,13 @@ subtest "/job/PROJECT/JOBSET/JOB/shield" => sub {
subtest "/job/PROJECT/JOBSET/JOB/prometheus" => sub {
my $response = request(GET '/job/' . $project->name . '/' . $jobset->name . '/' . $build->job . '/prometheus');
ok($response->is_success, "The page showing the job's prometheus data returns 200.");
my $metrics = $response->content;
like($metrics, qr/hydra_job_failed\{.*\} 0/);
like($metrics, qr/hydra_job_completion_time\{.*\} [\d]+/);
like($metrics, qr/hydra_build_closure_size\{.*\} 96/);
like($metrics, qr/hydra_build_output_size\{.*\} 96/);
ok($metrics =~ m/hydra_job_failed\{.*\} 0/);
ok($metrics =~ m/hydra_job_completion_time\{.*\} [\d]+/);
ok($metrics =~ m/hydra_build_closure_size\{.*\} 96/);
ok($metrics =~ m/hydra_build_output_size\{.*\} 96/);
};
done_testing;


@@ -32,4 +32,9 @@ subtest "/jobset/PROJECT/JOBSET/evals" => sub {
ok($jobsetevals->is_success, "The page showing the jobset evals returns 200.");
};
subtest "/jobset/PROJECT/JOBSET/errors" => sub {
my $jobsetevals = request(GET '/jobset/' . $project->name . '/' . $jobset->name . '/errors');
ok($jobsetevals->is_success, "The page showing the jobset eval errors returns 200.");
};
done_testing;


@@ -186,7 +186,7 @@ subtest 'Update jobset "job" to have an invalid input type' => sub {
})
);
ok(!$jobsetupdate->is_success);
like($jobsetupdate->content, qr/Invalid input type.*valid types:/);
ok($jobsetupdate->content =~ m/Invalid input type.*valid types:/);
};


@@ -35,6 +35,10 @@ subtest "Fetching the eval's overview" => sub {
is($fetch->code, 200, "channel page is 200");
};
subtest "Fetching the eval's overview" => sub {
my $fetch = request(GET '/eval/' . $eval->id, '/errors');
is($fetch->code, 200, "errors page is 200");
};
done_testing;


@@ -24,7 +24,7 @@ my $cookie = $login->header("set-cookie");
my $my_jobs = request(GET '/dashboard/alice/my-jobs-tab', Accept => 'application/json', Cookie => $cookie);
ok($my_jobs->is_success);
my $content = $my_jobs->content();
like($content, qr/empty_dir/);
ok($content =~ /empty_dir/);
ok(!($content =~ /fails/));
ok(!($content =~ /succeed_with_failed/));
done_testing;

t/Makefile.am Normal file

@@ -0,0 +1,39 @@
TESTS_ENVIRONMENT = \
BZR_HOME="$(abs_builddir)/data" \
HYDRA_DBI="dbi:Pg:dbname=hydra-test-suite;port=6433" \
HYDRA_DATA="$(abs_builddir)/data" \
HYDRA_HOME="$(top_srcdir)/src" \
HYDRA_CONFIG= \
NIX_REMOTE= \
NIX_REMOTE_SYSTEMS= \
NIX_CONF_DIR="$(abs_builddir)/nix/etc/nix" \
NIX_STATE_DIR="$(abs_builddir)/nix/var/nix" \
NIX_STORE_DIR="$(abs_builddir)/nix/store" \
NIX_LOG_DIR="$(abs_builddir)/nix/var/log/nix" \
PGHOST=/tmp \
PERL5LIB="$(srcdir):$(abs_top_srcdir)/src/lib:$$PERL5LIB" \
PYTHONPATH= \
PATH=$(abs_top_srcdir)/src/hydra-evaluator:$(abs_top_srcdir)/src/script:$(abs_top_srcdir)/src/hydra-eval-jobs:$(abs_top_srcdir)/src/hydra-queue-runner:$$PATH \
perl -w
EXTRA_DIST = \
$(wildcard *.pm) \
$(wildcard jobs/*.nix) \
$(wildcard jobs/*.sh) \
$(TESTS)
TESTS = \
perlcritic.pl \
test.pl
check_SCRIPTS = repos
repos: dirs
dirs:
mkdir -p data
touch data/hydra.conf
mkdir -p nix
mkdir -p nix/etc/nix
mkdir -p nix/store
mkdir -p nix/var


@@ -115,7 +115,7 @@ subtest "evaluation" => sub {
my $build = decode_json(request_json({ uri => "/build/" . $evals->[0]->{builds}->[0] })->content());
is($build->{job}, "job", "The build's job name is job");
is($build->{finished}, 0, "The build isn't finished yet");
like($build->{buildoutputs}->{out}->{path}, qr/\/nix\/store\/[a-zA-Z0-9]{32}-job$/, "The build's outpath is in the Nix store and named 'job'");
ok($build->{buildoutputs}->{out}->{path} =~ /\/nix\/store\/[a-zA-Z0-9]{32}-job$/, "The build's outpath is in the Nix store and named 'job'");
subtest "search" => sub {
my $search_project = decode_json(request_json({ uri => "/search/?query=sample" })->content());


@@ -27,13 +27,13 @@ my $project = $db->resultset('Projects')->create({name => "tests", displayname =
my $jobset = createBaseJobset("content-addressed", "content-addressed.nix", $ctx{jobsdir});
ok(evalSucceeds($jobset), "Evaluating jobs/content-addressed.nix should exit with return code 0");
is(nrQueuedBuildsForJobset($jobset), 6, "Evaluating jobs/content-addressed.nix should result in 6 builds");
is(nrQueuedBuildsForJobset($jobset), 5, "Evaluating jobs/content-addressed.nix should result in 5 builds");
for my $build (queuedBuildsForJobset($jobset)) {
ok(runBuild($build), "Build '".$build->job."' from jobs/content-addressed.nix should exit with code 0");
my $newbuild = $db->resultset('Builds')->find($build->id);
is($newbuild->finished, 1, "Build '".$build->job."' from jobs/content-addressed.nix should be finished.");
-my $expected = $build->job eq "fails" ? 1 : $build->job =~ /with_failed/ ? 6 : $build->job =~ /FailingCA/ ? 2 : 0;
+my $expected = $build->job eq "fails" ? 1 : $build->job =~ /with_failed/ ? 6 : 0;
is($newbuild->buildstatus, $expected, "Build '".$build->job."' from jobs/content-addressed.nix should have buildstatus $expected.");
my $response = request("/build/".$build->id);
@@ -55,8 +55,6 @@ for my $build (queuedBuildsForJobset($jobset)) {
}
# XXX: deststoredir is undefined: Use of uninitialized value $ctx{"deststoredir"} in concatenation (.) or string at t/content-addressed/basic.t line 58.
# XXX: This test does not check what it appears to check. See documentation: https://metacpan.org/pod/Test2::V0#isnt($got,-$do_not_want,-$name)
isnt(<$ctx{deststoredir}/realisations/*>, "", "The destination store should have the realisations of the built derivations registered");
done_testing;
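If the intent of the glob check above is "at least one realisation was registered", a sketch of an assertion that tests that directly (a hypothetical replacement, assuming `$ctx{deststoredir}` is populated by the test setup as the first XXX note demands):

    my @realisations = glob($ctx{deststoredir} . "/realisations/*");
    ok(scalar @realisations > 0, "The destination store has registered realisations");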


@@ -18,14 +18,14 @@ isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
-qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
+qr/aggregate job mixed_aggregate failed with the error: constituentA: does not exist/,
"The stderr record includes a relevant error message"
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
-qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
+qr/aggregate job mixed_aggregate failed with the error: constituentA: does not exist/,
"The jobset records a relevant error message"
);
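The `discard_changes` attributes are needed because a plain refetch selects only the result class's default column set, and `errormsg` is evidently excluded from it here; the `'+columns'` attribute forces the column back into the SELECT. The pattern, as used above:

    # refetch the row, forcing the otherwise-omitted errormsg column into the query
    $jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} });
    my $msg = $jobset->errormsg; # now populated from the database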


@@ -5,58 +5,13 @@ use Test2::V0;
my $ctx = test_context();
my $expression = 'constituents.nix';
-my $jobsetCtx = $ctx->makeJobset(
-expression => $expression,
-);
-my $builds = $ctx->evaluateJobset(
-jobset => $jobsetCtx->{"jobset"},
-expression => $expression,
-build => 0,
+my $builds = $ctx->makeAndEvaluateJobset(
+expression => 'constituents.nix',
);
my $constituentA = $builds->{"constituentA"};
my $directAggregate = $builds->{"direct_aggregate"};
my $indirectAggregate = $builds->{"indirect_aggregate"};
my $mixedAggregate = $builds->{"mixed_aggregate"};
-# Ensure that we get exactly the aggregates we expect
-my %expected_constituents = (
-'direct_aggregate' => {
-'constituentA' => 1,
-},
-'indirect_aggregate' => {
-'constituentA' => 1,
-},
-'mixed_aggregate' => {
-# Note that `constituentA_alias` becomes `constituentA`, because
-# the shorter name is preferred
-'constituentA' => 1,
-'constituentB' => 1,
-},
-);
-my $rs = $ctx->db->resultset('AggregateConstituents')->search(
-{},
-{
-join => [ 'aggregate', 'constituent' ], # Use correct relationship names
-columns => [],
-'+select' => [ 'aggregate.job', 'constituent.job' ],
-'+as' => [ 'aggregate_job', 'constituent_job' ],
-}
-);
-my %actual_constituents;
-while (my $row = $rs->next) {
-my $aggregate_job = $row->get_column('aggregate_job');
-my $constituent_job = $row->get_column('constituent_job');
-$actual_constituents{$aggregate_job} //= {};
-$actual_constituents{$aggregate_job}{$constituent_job} = 1;
-}
-is(\%actual_constituents, \%expected_constituents, "Exact aggregate constituents as expected");
# Check that deleting these derivations from the store fails as expected
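# (Perl's system() returns the raw wait status, so a child exit code of 1 appears as 1 << 8 == 256.)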
is(system('nix-store', '--delete', $constituentA->drvpath), 256, "Deleting a constituent derivation fails");
is(system('nix-store', '--delete', $directAggregate->drvpath), 256, "Deleting the direct aggregate derivation fails");


@@ -1,67 +0,0 @@
use feature 'unicode_strings';
use strict;
use warnings;
use Setup;
use Test2::V0;
use File::Copy qw(cp);
my $ctx = test_context(
nix_config => qq|
experimental-features = nix-command flakes
|,
hydra_config => q|
<runcommand>
evaluator_pure_eval = false
</runcommand>
|
);
sub checkFlake {
my ($flake) = @_;
cp($ctx->jobsdir . "/basic.nix", $ctx->jobsdir . "/" . $flake);
cp($ctx->jobsdir . "/config.nix", $ctx->jobsdir . "/" . $flake);
cp($ctx->jobsdir . "/empty-dir-builder.sh", $ctx->jobsdir . "/" . $flake);
cp($ctx->jobsdir . "/fail.sh", $ctx->jobsdir . "/" . $flake);
cp($ctx->jobsdir . "/succeed-with-failed.sh", $ctx->jobsdir . "/" . $flake);
chmod 0755, $ctx->jobsdir . "/" . $flake . "/empty-dir-builder.sh";
chmod 0755, $ctx->jobsdir . "/" . $flake . "/fail.sh";
chmod 0755, $ctx->jobsdir . "/" . $flake . "/succeed-with-failed.sh";
my $builds = $ctx->makeAndEvaluateJobset(
flake => 'path:' . $ctx->jobsdir . "/" . $flake,
build => 1
);
subtest "Build: succeed_with_failed" => sub {
my $build = $builds->{"succeed_with_failed"};
is($build->finished, 1, "Build should be finished.");
is($build->buildstatus, 6, "succeeded-but-failed should have buildstatus 6.");
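# (buildstatus 6 is Hydra's "failed with output": the builder exited non-zero but still produced output.)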
};
subtest "Build: empty_dir" => sub {
my $build = $builds->{"empty_dir"};
is($build->finished, 1, "Build should be finished.");
is($build->buildstatus, 0, "Should have succeeded.");
};
subtest "Build: fails" => sub {
my $build = $builds->{"fails"};
is($build->finished, 1, "Build should be finished.");
is($build->buildstatus, 1, "Should have failed.");
};
}
subtest "Flake using `checks`" => sub {
checkFlake 'flake-checks'
};
subtest "Flake using `hydraJobs`" => sub {
checkFlake 'flake-hydraJobs'
};
done_testing;


@@ -1,22 +0,0 @@
use feature 'unicode_strings';
use strict;
use warnings;
use Setup;
use Test2::V0;
my $ctx = test_context();
my $builds = $ctx->makeAndEvaluateJobset(
expression => "meta.nix",
build => 1
);
my $build = $builds->{"full-of-meta"};
is($build->finished, 1, "Build should be finished.");
is($build->description, "This is the description of the job.", "Wrong description extracted from the build.");
is($build->license, "MIT, BSD", "Wrong licenses extracted from the build.");
is($build->homepage, "https://example.com/", "Wrong homepage extracted from the build.");
is($build->maintainers, 'alice@example.com, bob@not.found', "Wrong maintainers extracted from the build.");
done_testing;


@@ -31,10 +31,6 @@ if ($sd_res != 0) {
skip_all("`systemd-run` returned non-zero when executing `true` (expected 0)");
}
# XXX(Mindavi): We should think about how to fix this.
# Note that it was always skipped on ofborg/h.n.o (nixos hydra) since systemd-run is not present in the ambient environment there.
skip_all("Always fails: an error about 'oom' being a string is logged and the process never OOMs; the test needs a way to consume more memory.");
my $ctx = test_context();
# Limit memory usage to 25 megabytes using `systemd-run`


@@ -5,8 +5,6 @@ rec {
builder = ./empty-dir-builder.sh;
};
-constituentA_alias = constituentA;
constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
@@ -34,7 +32,7 @@ rec {
name = "mixed_aggregate";
_hydraAggregate = true;
constituents = [
"constituentA_alias"
"constituentA"
constituentB
];
builder = ./empty-dir-builder.sh;
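Note the mixed forms the change leaves in place: `constituentB` is referenced as a bare derivation, while `"constituentA"` is a string naming another job in the same evaluation; Hydra's aggregate handling accepts constituents in either form, which is what this hunk relies on after the alias binding is removed.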


@@ -25,13 +25,6 @@ rec {
FOO = empty_dir;
};
caDependingOnFailingCA =
cfg.mkContentAddressedDerivation {
name = "ca-depending-on-failing-ca";
builder = ./dir-with-file-builder.sh;
FOO = fails;
};
nonCaDependingOnCA =
cfg.mkDerivation {
name = "non-ca-depending-on-ca";


@@ -1,6 +0,0 @@
{
outputs = { ... }: {
checks =
import ./basic.nix;
};
}


@@ -1,6 +0,0 @@
{
outputs = { ... }: {
hydraJobs =
import ./basic.nix;
};
}


@@ -1,17 +0,0 @@
with import ./config.nix;
{
full-of-meta =
mkDerivation {
name = "full-of-meta";
builder = ./empty-dir-builder.sh;
meta = {
description = "This is the description of the job.";
license = [ { shortName = "MIT"; } "BSD" ];
homepage = "https://example.com/";
maintainers = [ "alice@example.com" { email = "bob@not.found"; } ];
outPath = "${placeholder "out"}";
};
};
}
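Cross-checking against the deleted meta test above: the `license` list mixing an attribute set (`{ shortName = "MIT"; }`) with a bare string is what flattened to the `"MIT, BSD"` the test expected, and the `maintainers` list mixing a plain address with an `{ email = ...; }` attrset likewise flattened to `"alice@example.com, bob@not.found"`.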

Some files were not shown because too many files have changed in this diff.