Compare commits


136 Commits

Author SHA1 Message Date
Jörg Thalheim
e41b487db2 Merge pull request #1507 from NixOS/compare-active-jobsets
jobset-eval: reduce compare options to active jobsets
2025-08-12 08:18:58 +00:00
Martin Weinelt
a0cc377863 jobset-eval: reduce compare options to active jobsets
The number of jobsets on hydra.nixos.org is very high, and in busy
projects the compare-to dropdown spans multiple full pages.

If we ignore disabled jobsets, this interface becomes more usable
again.
2025-08-11 01:29:14 +02:00
John Ericson
79ba8fdd04 Merge pull request #1505 from NixOS/no-built-scripts-meson-shell
package.nix: fix PATH for devshell
2025-08-05 14:35:14 +00:00
Jörg Thalheim
c645b7ff67 package.nix: fix PATH for devshell
We don't install scripts into the build directory, so this must point to src.
2025-08-05 00:22:46 +02:00
John Ericson
c12d0a66d8 Merge pull request #1503 from NixOS/libpqxx-and-ci
Libpqxx and ci
2025-08-04 22:13:09 +00:00
Jörg Thalheim
2f6ec150ec ci: also build on aarch64-linux 2025-08-04 17:44:16 -04:00
Jörg Thalheim
2b4f4cf6f4 cache build with the magic nix cache 2025-08-04 17:44:16 -04:00
Jörg Thalheim
e33b4f88dc queue-runner: Add missing signal.h include for SIGINT and kill() 2025-08-04 17:44:16 -04:00
Jörg Thalheim
a9b89ee779 Migrate from deprecated notification_receiver to connection::listen()
libpqxx 7.10.1 deprecates the notification_receiver class.
2025-08-04 17:44:16 -04:00
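A minimal sketch of the replacement API, assuming libpqxx >= 7.10; the channel name and handler body are illustrative, not Hydra's actual code:

```cpp
#include <pqxx/pqxx>
#include <iostream>

int main()
{
    pqxx::connection conn{"dbname=hydra"};

    // Old, deprecated style: subclass pqxx::notification_receiver and
    // override its operator()(payload, backend_pid).
    // New style: register a handler for a channel on the connection itself.
    conn.listen("builds_added", [](pqxx::notification n) {
        std::cout << "channel=" << n.channel
                  << " payload=" << n.payload << '\n';
    });

    // Block until at least one notification has been delivered.
    conn.await_notification();
}
```
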
Jörg Thalheim
84b4fe36b6 Fix libpqxx 7.10.1 API compatibility
- Replace deprecated exec_params/exec_params0 calls with exec()
- Wrap all parameterized queries with pqxx::params{}
- Add .no_rows()/.one_row() to exec calls that don't return results
2025-08-04 17:44:16 -04:00
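Condensed, the migration looks like this (the full change is visible in the hydra-evaluator.cc diff further below):

```cpp
#include <pqxx/pqxx>

// Sketch only; `id` and `now` stand in for Hydra's actual values.
void markStarted(pqxx::connection & conn, int id, long now)
{
    pqxx::work txn{conn};

    // Old (libpqxx < 7.10):
    //   txn.exec_params0("update Jobsets set startTime = $1 where id = $2",
    //                    now, id);
    // New (libpqxx 7.10.1): bundle the parameters in pqxx::params and
    // assert the expected result shape on the returned pqxx::result.
    txn.exec("update Jobsets set startTime = $1 where id = $2",
             pqxx::params{now, id})
       .no_rows();

    txn.commit();
}
```
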
Jörg Thalheim
081d0c079a hydra-eval-jobs: unset NIX_PATH 2025-08-04 17:44:16 -04:00
Jörg Thalheim
a75c5a405c docs/hacking: document how to run single tests 2025-08-04 17:44:16 -04:00
Janne Heß
957884d174 Merge pull request #1501 from NixOS/fix/useless-message
Remove useless previous eval message
2025-08-02 12:26:54 +00:00
Janne Heß
05a05667d8 Merge branch 'master' into fix/useless-message 2025-08-02 14:21:44 +02:00
Janne Heß
0527fddd6a Remove useless previous eval message
This message serves no purpose and looks like something went wrong.
There is nothing wrong, there is just no previous evaluation.
2025-08-02 14:20:59 +02:00
Janne Heß
0017a1d0f3 Merge pull request #1498 from NixOS/feat/new-q-runner-machine-status
machine-status: Render new queue runner details
2025-08-02 12:11:07 +00:00
Janne Heß
e9895e81af Merge branch 'master' into feat/new-q-runner-machine-status 2025-08-02 14:05:55 +02:00
Janne Heß
424a767035 Merge pull request #1500 from NixOS/feat/improve-developer-expercience
Improve general developer experience
2025-08-02 12:05:41 +00:00
Janne Heß
7096ae3a5b machine-status: Fixup double localhost during development 2025-08-02 14:05:23 +02:00
Janne Heß
ec3d0c696b Fix the evaluator not finding hydra-eval-jobset 2025-08-02 13:53:25 +02:00
Janne Heß
d2c10bf851 Fixup static libraries in development server 2025-08-02 13:53:22 +02:00
Janne Heß
80b9d82ea4 Fix meson and ninja commands and link bootstrap 2025-08-02 13:41:39 +02:00
Janne Heß
85ab735653 Add nix-direnv 2025-08-02 13:41:16 +02:00
Janne Heß
632a59172a machine-status: Make new runner status prettier
- Remove bottom margin
- Properly format memory in human format
- Calculate free memory
- Format the load with 2 digits after the comma
- Left-pad pressure percentages
- Use a macro to render pressure
- Score -> Scheduling Score
- More spacing in the load
- Add IRQ pressure
2025-08-01 11:25:14 +02:00
Janne Heß
95f5d331ee Merge pull request #1499 from NixOS/feat/document-pg-conncetion
Document how to connect to postgres
2025-07-31 16:54:32 +00:00
Janne Heß
6e9e13333f Document how to connect to postgres 2025-07-31 18:48:47 +02:00
Janne Heß
7b1968236d machine-status: Render new queue runner details 2025-07-31 18:45:04 +02:00
Janne Heß
b812bb5017 Merge pull request #869 from andir/patch-1
Add Queue Runner Status to the topbar
2025-07-17 21:31:27 +00:00
Janne Heß
61573c71d1 Merge pull request #1497 from helsinki-systems/feat/show-new-q-runner-status
Show queue runner v2 status
2025-07-17 21:30:36 +00:00
Janne Heß
f50263976c Merge branch 'master' into patch-1 2025-07-17 23:21:18 +02:00
Janne Heß
c413b275ff Merge pull request #1206 from iwanders/CORE-21733-add-link-to-raw-log
Add a link to the raw log.
2025-07-16 20:18:43 +00:00
John Ericson
f7a9113166 Merge pull request #1494 from SuperSandro2000/patch-2
module: sync with nixpkgs
2025-07-16 19:44:14 +00:00
Janne Heß
97ec796db5 Merge branch 'master' into CORE-21733-add-link-to-raw-log 2025-07-16 18:42:40 +02:00
Janne Heß
42400ef20c Merge pull request #1156 from helsinki-systems/fix/local-store-detection
Fix local store detection and related issues
2025-07-16 16:31:15 +00:00
Janne Heß
2fcfa969b8 Merge branch 'master' into fix/local-store-detection 2025-07-16 18:25:54 +02:00
Janne Heß
4f3b783d30 Merge pull request #1493 from NixOS/hostname-utility
Replace nettools with hostname-debian
2025-07-16 16:22:17 +00:00
Janne Heß
80980f8b32 Fix PATH for the foreman scripts 2025-07-16 17:39:19 +02:00
Janne Heß
d0008d4238 Show queue runner v2 status
This is guarded behind a setting and will overwrite everything that was
learned from the machines file. Also drops `sshKeys` since that wasn't
used anyway.
2025-07-16 17:39:06 +02:00
Janne Heß
3b89d2d6b5 Merge pull request #1495 from Erethon/fix-nix-download-url
fix: Update Nix download url
2025-07-15 19:16:32 +00:00
Dionysis Grigoropoulos
62fcacb7d2 fix: Update Nix download url 2025-07-15 19:45:13 +03:00
Sandro
b3b48bc237 module: sync with nixpkgs 2025-07-04 12:01:42 +02:00
Martin Weinelt
c544042051 Replace nettools with hostname-debian
As far as I understand, we include nettools for its hostname executable,
which is used by the Sys-Hostname-Long Perl package. If that is all we
need, the hostname-debian package provides a simpler and better
maintained version.
2025-07-04 06:46:35 +02:00
Jörg Thalheim
aa62c7f7db Merge pull request #1490 from NixOS/update-flakes
Update flake inputs
2025-06-24 23:19:28 +00:00
Mic92
605a0e9ce9 flake.lock: Update 2025-06-25 01:03:35 +02:00
Jörg Thalheim
6786e52eb5 Merge pull request #1489 from NixOS/ci
github: update test workflow to use latest nix & add update-flakes action
2025-06-24 16:51:54 +00:00
Jörg Thalheim
9efe38c60b add update-flakes action 2025-06-24 18:46:33 +02:00
Jörg Thalheim
c621f27482 test: bump used nix version 2025-06-24 18:45:14 +02:00
John Ericson
ed500ca434 Merge pull request #1202 from thejohncrafter/doc-request-base
docs: refine instructions for proxy setting
2025-06-15 22:14:38 +00:00
Julien Marquet
635aff50dd docs: refine instructions for proxy setting 2025-05-27 12:45:12 -04:00
Jörg Thalheim
2e3c168ec4 Merge pull request #1487 from tomjnixon/reverse_proxy_docs
doc/manual: correct nginx reverse proxy example
2025-05-27 04:58:53 +00:00
John Ericson
362524b563 Merge pull request #1485 from NixOS/nix-2.29
Nix 2.29
2025-05-26 05:17:23 +00:00
John Ericson
de3646cb13 Merge pull request #1488 from NixOS/nixpkgs-25.05
flake.lock: Update Nixpkgs to 25.05
2025-05-26 01:11:40 +00:00
John Ericson
278a3ebfd5 Fix build with Nix 2.29 2025-05-25 20:53:18 -04:00
John Ericson
dafa252d08 flake.lock: Update Nix and nix-eval-jobs to 2.29
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/70921714cb3b5e6041b7413459541838651079f3?narHash=sha256-ZbB3IH9OlJvo14GlQZbYHzJojf/HCDT38GzYTod8DaU%3D' (2025-04-23)
  → 'github:NixOS/nix/d761dad79c79af17aa476a29749bd9d69747548f?narHash=sha256-rCpANMHFIlafta6J/G0ILRd%2BWNSnzv/lzi40Y8f1AR8%3D' (2025-05-25)
• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/1260c6599d22dfd8c25fea6893c3d031996b20e1?narHash=sha256-n220U5pjzCtTtOJtbga4Xr/PyllowKw9anSevgCqJEw%3D' (2025-04-11)
  → 'github:nix-community/nix-eval-jobs/d9262e535e35454daebcebd434bdb9c1486bb998?narHash=sha256-AJ22q6yWc1hPkqssXMxQqD6QUeJ6hbx52xWHhKsmuP0%3D' (2025-05-25)
2025-05-25 20:52:39 -04:00
John Ericson
8a50488f6c flake.lock: Update Nixpkgs to 25.05
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/eea3403f7ca9f9942098f4f2756adab4ec924b2b?narHash=sha256-JT1wMjLIypWJA0N2V27WpUw8feDmTok4Dwkb0oYXDS4%3D' (2025-04-23)
  → 'github:NixOS/nixpkgs/db1aed32009f408e4048c1dd0beaf714dd34ed93?narHash=sha256-8A7HjmnvCpDjmETrZY1QwzKunR63LiP7lHu1eA5q6JI%3D' (2025-05-24)
2025-05-25 20:51:14 -04:00
Thomas Nixon
8bb7d27588 doc/manual: correct nginx reverse proxy example
- hydra does not remove the base URI from the request before processing
  it, so this must be done in the reverse proxy. in nginx this is done
  by giving proxy_pass a URI rather than a protocol/host/port; see:

  https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass

- proxy_redirect is not correct/required: hydra uses proxy headers to
  correctly form redirects in most cases, and where it doesn't it
  produces local redirects which aren't matched by this directive anyway
2025-05-23 23:39:44 +01:00
Martin Weinelt
35c9264306 Merge pull request #1484 from NixOS/nix-keep-options
Migrate from "gc-" prefixed nix options
2025-05-23 22:19:15 +00:00
Martin Weinelt
da1aebe970 Migrate from "gc-" prefixed nix options
These have been deprecated, e.g. gc-keep-outputs is now just
keep-outputs.
2025-05-15 04:08:57 +02:00
Jörg Thalheim
183bc39d1a Merge pull request #1483 from SuperSandro2000/patch-2
Add missing slash
2025-05-09 15:49:33 +00:00
Sandro
2ae27dd20d Add missing slash
error: access to absolute path '/nix/store/sai35xfsrba2a2vasmzxakmn54wdfa13-sourcepackaging' is forbidden in pure evaluation mode (use '--impure' to override)
2025-05-05 00:10:59 +02:00
Jörg Thalheim
1b5c2fb747 Merge pull request #1479 from qowoz/queue-runner
queue runner: attempt at slightly smarter scheduling criteria
2025-05-01 07:02:54 +00:00
Jörg Thalheim
8d068fea3e Merge pull request #1482 from NixOS/hydra-passthru
Expose nix package in hydra package
2025-04-29 18:35:50 +00:00
Jörg Thalheim
8218a9ad1b hydra: expose nix-cli package
This makes it easier in other packages to get the nix version used to
build Hydra.
2025-04-29 20:27:30 +02:00
John Ericson
455f1a0665 Merge pull request #1481 from NixOS/nix-flake-false
Use Nix without the flake
2025-04-23 22:04:54 +00:00
John Ericson
89fcb931ce Use Nix without the flake
This is what we do for `nix-eval-jobs` already. It allows for more
fine-grained control over dependencies.
2025-04-23 17:58:52 -04:00
Martin Weinelt
b023cc8f87 Merge pull request #1480 from NixOS/update-nix
flake.lock: Update
2025-04-23 18:05:14 +00:00
Martin Weinelt
23755bf001 flake.lock: Update
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/a4962f73b5fc874d4b16baef47921daf349addfc' (2025-04-07)
  → 'github:NixOS/nix/70921714cb3b5e6041b7413459541838651079f3' (2025-04-23)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/db8f4fe18ce772a9c8f3adf321416981c8fe9371' (2025-04-07)
  → 'github:NixOS/nixpkgs/eea3403f7ca9f9942098f4f2756adab4ec924b2b' (2025-04-23)
2025-04-23 18:27:14 +02:00
Pierre Bourdon
720db63d52 queue runner: attempt at slightly smarter scheduling criteria
Instead of just going for "whatever is the oldest build we know of",
use the following first:

- Is the step more constrained? If so, schedule it first to avoid
  filling up "more desirable" build slots with less constrained builds.

- Does the step have more dependents? If so, schedule it first to try
  and maximize open parallelism and breadth of scheduling options.

(cherry picked from commit b8d03adaf4)
2025-04-20 13:44:06 +10:00
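A minimal sketch of that ordering; the struct and its fields are hypothetical stand-ins for the queue runner's real step state:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct StepInfo
{
    unsigned requiredFeatures;  // how constrained the step is (more = pickier)
    std::size_t dependents;     // number of builds waiting on this step
    unsigned lowestBuildId;     // proxy for "oldest build we know of"
};

void sortRunnable(std::vector<StepInfo> & runnable)
{
    std::sort(runnable.begin(), runnable.end(),
        [](const StepInfo & a, const StepInfo & b) {
            // 1. More constrained steps first, so they don't lose the
            //    "more desirable" slots to less constrained builds.
            if (a.requiredFeatures != b.requiredFeatures)
                return a.requiredFeatures > b.requiredFeatures;
            // 2. Steps with more dependents first, to maximize open
            //    parallelism and breadth of scheduling options.
            if (a.dependents != b.dependents)
                return a.dependents > b.dependents;
            // 3. Fall back to the old criterion: oldest build first.
            return a.lowestBuildId < b.lowestBuildId;
        });
}
```
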
John Ericson
bdde73acbd Merge pull request #1478 from qowoz/fix-actions
jobset-eval: fix actions not showing up sometimes for new jobs
2025-04-16 16:49:31 +00:00
Pierre Bourdon
0ab357e435 jobset-eval: fix actions not showing up sometimes for new jobs
New jobs have their "new" status take precedence over them being
"failed" or "queued", which means actions that can act on "failed" or
"queued" jobs weren't shown to the user when they could only act on
"new" jobs.

(cherry picked from commit 9a4a5dd624)
2025-04-16 09:50:32 +10:00
Jörg Thalheim
6fcfa9e796 Merge commit from fork
Re-enable restrict-eval for non-flakes
2025-04-15 06:48:18 +02:00
Martin Weinelt
ffbde9c9e3 Merge pull request #1474 from NixOS/machine-status-colspan
web: increase colspan for machine row in machine status
2025-04-13 14:37:52 +00:00
Martin Weinelt
cf33a9158a web: increase colspan for machine row in machine status 2025-04-13 08:29:01 +02:00
John Ericson
5f6b075754 Merge pull request #1470 from qowoz/eval-view
Fix displaying eval errors in jobset eval view
2025-04-11 15:49:05 +00:00
Jörg Thalheim
8d75026513 re-enable restrict-eval for non-flakes 2025-04-11 13:42:55 +02:00
Maximilian Bosch
f1a976d3fd Fix displaying eval errors in jobset eval view
Quickfix for something that annoyed me once too often.

Specifically, I'm talking about `/eval/1#tabs-errors`.

To not fetch long errors on each request, this is only done on-demand.
I.e., when the tab is opened, an iframe is requested with the errors.
This iframe uses a template for both the jobset view and the jobset-eval
view. It is differentiated by checking if `jobset` or `eval` is defined.

However, the jobset-eval view also has a `jobset` variable in its stash
which means that in both cases the `if` path was used. Since
`jobset.fetcherrormsg` isn't defined in the eval case though, you always
got an empty error.

The band-aid fix is relatively simple: swap if and else: the `eval`
variable is not defined in the stash of the jobset view, so now this is
a useful condition to decide which view we're in.

(cherry picked from commit 70c3d75f73)
2025-04-11 09:03:11 +10:00
Jörg Thalheim
d5ad16abc2 Merge pull request #1472 from SuperSandro2000/without-aws-sdk
Fix compilation with a Nix that was compiled without the AWS SDK
2025-04-10 15:24:38 +00:00
Sandro Jäckel
7e0157e387 Fix compilation with a Nix that was compiled without the AWS SDK 2025-04-09 17:53:14 +02:00
John Ericson
c8de5b99e3 Merge pull request #1471 from NixOS/queue-runner-machines-json
Queue-runner: Always produce a machines JSON object
2025-04-08 21:44:23 +00:00
John Ericson
a5b17d0686 Queue-runner: Always produce a machines JSON object
Even if there are no machines, there should at least be an empty object.
2025-04-08 17:38:19 -04:00
John Ericson
1c52c4c0ed Merge pull request #1456 from NixOS/hydra.nixos.org-rebased
web: replace 'errormsg' with 'errormsg IS NULL' in most cases
2025-04-07 19:05:51 +00:00
Pierre Bourdon
b4322edd05 web: replace 'errormsg' with 'errormsg IS NULL' in most cases
This is implemented in an extremely hacky way due to poor DBIx feature
support. Ideally, what we'd need is a way to tell DBIx to ignore the
errormsg column unless explicitly requested, and to automatically add a
computed 'errormsg IS NULL' column in other cases. Since it does not
support that, this commit instead hacks in some support via method
overrides, while taking care not to break anything obvious.
2025-04-07 14:48:07 -04:00
John Ericson
8350f964ee Merge pull request #1469 from NixOS/release-reservations
queue-runner: release machine reservation while copying outputs
2025-04-07 18:19:16 +00:00
Pierre Bourdon
143a07bff0 queue-runner: release machine reservation while copying outputs
This allows for better builder usage when the queue runner is busy. To
avoid running into uncontrollable imbalances between builder/queue
runner, we only release the machine reservation after the local
throttler has found a slot to start copying the outputs for that build.

As opposed to asserting uniqueness to understand resource utilization,
we just switch to using `std::unique_ptr`.
2025-04-07 14:01:50 -04:00
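Reduced to its core, the ordering that matters looks like this (the real code is in the build-remote.cc diff further below; the stubs here exist only for the sketch):

```cpp
#include <memory>
#include <semaphore>

struct MachineReservation { /* holds a slot on one build machine */ };
std::counting_semaphore<> localWorkThrottler{4};  // local copy slots
void wakeDispatcher() { /* stub: tells the dispatcher to reschedule */ }

void finishBuild(std::unique_ptr<MachineReservation> reservation)
{
    localWorkThrottler.acquire();  // first wait for a slot to copy outputs
    reservation.reset();           // only now free the builder for other steps
    wakeDispatcher();
    // ... copy outputs, then localWorkThrottler.release() ...
}
```
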
John Ericson
cc4b206d85 Merge pull request #1466 from NixOS/bump-nixpkgs
Bump nixpkgs
2025-04-07 17:32:38 +00:00
John Ericson
e77444da98 Merge pull request #1468 from NixOS/steps-waiting-for-download-slot
Add metric for builds waiting for download slot
2025-04-07 17:28:00 +00:00
K900
8a6482bb1c Add metric for builds waiting for download slot
(cherry picked from commit f23ec71227911891807706b6b978836e4d80edde)
2025-04-07 13:16:49 -04:00
Jörg Thalheim
b3a433336e bump nixpkgs 2025-04-07 19:09:46 +02:00
Jörg Thalheim
68b2d6da0a Merge pull request #1467 from NixOS/merge-queue
Make github actions ci merge-queue friendly
2025-04-07 17:02:43 +00:00
Jörg Thalheim
c94ba404fd don't build hydra twice in a pull request + enable merge queue 2025-04-07 18:57:32 +02:00
Jörg Thalheim
56170dd117 Merge pull request #1464 from NixOS/more-hydra.nixos.org-changes
More hydra.nixos.org changes
2025-04-07 16:50:36 +00:00
Jörg Thalheim
d4b55f8190 Merge pull request #1465 from NixOS/gitea
test/gitea: fix eval
2025-04-07 18:48:57 +02:00
Jörg Thalheim
78687e23cf test/gitea: fix eval 2025-04-07 18:43:12 +02:00
Jörg Thalheim
f02fc5e2ff Merge pull request #1463 from NixOS/fix-nixos-tests
Fix evaluation of NixOS tests, avoid `with`
2025-04-07 18:37:50 +02:00
Pierre Bourdon
8e02589ac8 queue-runner: switch to pseudorandom ordering of builds processing
We don't rely on sequential / monotonic build IDs processing anymore, so
randomizing actually has the advantage of mixing builds for different
systems together, to avoid only one chunk of builds for a single system
getting processed while builders for other systems are starved.
2025-04-07 12:33:35 -04:00
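Illustratively (the real queue monitor works on database rows, not a vector of IDs):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Shuffle new builds instead of processing them in ascending-ID order,
// so builds for different systems are interleaved and no single system's
// backlog starves the others.
void processNewBuilds(std::vector<unsigned> buildIds)
{
    static std::mt19937 rng{std::random_device{}()};
    std::shuffle(buildIds.begin(), buildIds.end(), rng);
    for (unsigned id : buildIds) {
        (void) id;  // ... create and enqueue the build ...
    }
}
```
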
Pierre Bourdon
52a0199a9b queue runner: introduce some parallelism for remote paths lookup
Each output for a given step being ingested is looked up in parallel,
which should basically multiply the speed of build ingestion by the
average number of outputs per derivation.
2025-04-07 12:33:35 -04:00
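A sketch of the idea, with a hypothetical per-path query helper:

```cpp
#include <future>
#include <map>
#include <string>
#include <utility>
#include <vector>

bool remoteIsValidPath(const std::string & path);  // hypothetical helper

// Query every output of a step concurrently instead of sequentially.
std::map<std::string, bool> queryOutputs(const std::vector<std::string> & outputs)
{
    std::vector<std::future<std::pair<std::string, bool>>> futures;
    futures.reserve(outputs.size());
    for (const auto & path : outputs)
        futures.push_back(std::async(std::launch::async,
            [path] { return std::make_pair(path, remoteIsValidPath(path)); }));

    std::map<std::string, bool> result;
    for (auto & f : futures)
        result.insert(f.get());
    return result;
}
```
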
Pierre Bourdon
9265fc5002 queue-runner: reduce the time between queue monitor restarts
This will induce more DB queries (though these are fairly cheap), with
the benefit of processing bumps within 1m instead of within 10m.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
d8ffa6b56a queue-runner: remove id > X from new builds query
Running the query with/without it shows that it makes no difference to
postgres, since there's an index on finished=0 already. This allows a
few simplifications, but also paves the way towards running multiple
parallel monitor threads in the future.
2025-04-07 12:33:35 -04:00
Pierre Bourdon
efcf6815d9 queue-runner: add prom metrics to allow detecting internal bottlenecks
By looking at the ratio of running vs. waiting for the dispatcher and
the queue monitor, we should get better visibility into what hydra is
currently bottlenecked on.

There are other side effects we can try to measure to get to the same
result, but having a simple way doesn't cost us much.
2025-04-07 12:33:35 -04:00
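A sketch with prometheus-cpp, which is already a Hydra dependency (see the meson diff further below); the metric names are illustrative, not necessarily the ones this commit adds:

```cpp
#include <prometheus/counter.h>
#include <prometheus/registry.h>

prometheus::Registry registry;

// Two counters whose ratio shows whether the queue monitor is mostly
// working or mostly idle.
prometheus::Counter & monitorRunning = prometheus::BuildCounter()
    .Name("hydra_queue_monitor_time_spent_running_total")
    .Help("Time the queue monitor spent doing work, in microseconds")
    .Register(registry)
    .Add({});

prometheus::Counter & monitorWaiting = prometheus::BuildCounter()
    .Name("hydra_queue_monitor_time_spent_waiting_total")
    .Help("Time the queue monitor spent waiting for work, in microseconds")
    .Register(registry)
    .Add({});

// In the monitor loop:
//   monitorRunning.Increment(workMicros);
//   monitorWaiting.Increment(idleMicros);
```
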
Pierre Bourdon
1e2d3211d9 queue-runner: limit parallelism of CPU intensive operations
My current theory is that running more parallel xz processes than there
are CPU cores reduces our overall throughput by requiring more
scheduling overhead and more cache thrashing.
2025-04-07 12:33:35 -04:00
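A sketch of such a cap with a C++20 counting semaphore; the build-remote.cc diff further below wraps the same pattern in a small SemaphoreReleaser RAII guard:

```cpp
#include <algorithm>
#include <cstddef>
#include <semaphore>
#include <thread>

// Allow at most one CPU-heavy job (e.g. an xz compression) per core.
std::counting_semaphore<> cpuThrottler(
    std::max<std::ptrdiff_t>(1, std::thread::hardware_concurrency()));

void runCpuIntensive()
{
    cpuThrottler.acquire();   // wait for a CPU slot
    // ... run the expensive work ...
    cpuThrottler.release();   // hand the slot to the next job
}
```
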
Pierre Bourdon
5a9985f96c web: Skip System on /machines
It is redundant
2025-04-07 12:33:35 -04:00
John Ericson
0d0c4f278b Fix evaluation of NixOS tests, avoid with 2025-04-07 12:32:28 -04:00
John Ericson
3fdb18a4bc Merge pull request #1462 from NixOS/web-changes
A number of Perl-side changes from the hydra.nixos.org branch
2025-04-07 12:17:16 -04:00
Maximilian Bosch
6133693097 readIntoSocket: fix with store URIs containing an &
The third argument to `open()` in `-|` mode is passed to a shell if it's
a string. In my case the store URI contains
`?secret-key=${signingKey.directory}/secret&compression=zstd`

For the `nix store cat` case this means that

* everything up to `&` is started as a background process. This fails
  immediately because no path to cat is specified.
* `compression=zstd` becomes a variable assignment
* the `$path` argument to `store cat` is executed as another command

Passing just the list solves the problem.

(cherry picked from commit 3ee51dbe589458cc54ff753317bbc6db530bddc0)
2025-04-07 11:59:49 -04:00
git@71rd.net
abe35881e4 Stream files from store instead of buffering them
When an artifact is requested from hydra the output is first copied
from the nix store into memory and then sent as a response, delaying
the download and taking up significant amounts of memory.

As reported in https://github.com/NixOS/hydra/issues/1357

Instead of calling a command and blocking while reading in the entire
output, this adds read_into_socket(). The function takes a command,
starts a subprocess with that command, and returns a file descriptor
attached to its stdout.
This file descriptor is then used by Catalyst's response builder to
stream the output directly.

(cherry picked from commit 459aa0a5983a0bd546399c08231468d6e9282f54)
2025-04-07 11:59:49 -04:00
ajs124
99359c251a lazy-load evaluation errors
Closes #1362
2025-04-07 11:54:47 -04:00
Maximilian Bosch
9d8f30affe Only show stepname if it doesn't equal the name of the drv
When building e.g. nixpkgs, the "Running builds" view will mostly look
like this

    hello.x86_64-linux (Build of hello-X.Y)
    exa.x86_64-linux (Build of exa-X.Y)
    ...

This doesn't provide any useful information. Showing the step name only
makes sense if it's not a child of the job's derivation. With this
patch, that information will only be shown if the drv name (i.e. w/o
`/nix/store/` prefix, .drv ext & hash) is not equal to the drv name of
the job itself (build.nixname).
2025-04-07 11:54:47 -04:00
Maximilian Bosch
33b982f408 Running builds view: show build step names
When using Hydra to build machine configurations, you'll often see
"nixosConfigurations.foo" five times, i.e. once for each build step
being run. This isn't very helpful, because in such a case a single
build step can also be compiling the Linux kernel.

This change also fetches the `drvpath` and `type` from the `buildsteps`
relation. We're already joining it, so this doesn't make much difference
(confirmed via query logging that this doesn't cause extra SQL queries).

Unfortunately build steps don't have a human readable name, so I'm
deriving it from the drvpath by stripping away the hash (assuming that
it'll never contain a `-` and that `/nix/store/` is used as prefix). I
decided against using the Nix bindings for that to avoid too much
overhead due to store operations for each build step.
2025-04-07 11:54:47 -04:00
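The change itself is in Perl; the same string manipulation expressed in C++ for illustration, under exactly the assumptions the message states:

```cpp
#include <string>

// Derive a readable step name from a drv path, assuming a /nix/store/
// prefix, a hash that never contains '-', and a .drv extension.
std::string stepNameFromDrvPath(std::string path)
{
    const std::string prefix = "/nix/store/";
    if (path.rfind(prefix, 0) == 0)
        path.erase(0, prefix.size());
    auto dash = path.find('-');
    if (dash != std::string::npos)
        path.erase(0, dash + 1);                   // strip the hash
    const std::string ext = ".drv";
    if (path.size() > ext.size()
        && path.compare(path.size() - ext.size(), ext.size(), ext) == 0)
        path.resize(path.size() - ext.size());     // strip the extension
    return path;  // e.g. "/nix/store/<hash>-hello-2.12.drv" -> "hello-2.12"
}
```
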
Maximilian Bosch
a816e8e22c Make "timed out" and "log limit exceeded" builds aborted
In 73694087a0 I gave builds that failed
because of a timeout or exceeded log limit a stop sign and I stand by
that reasoning: with that it's possible to distinguish between actual
build failures and rather transient things such as timeouts.

Back then I considered it a feature that these are shown in a different
tab, but I don't think that's a good idea anymore. When using a jobset to
e.g. track the regressions from a mass rebuild (like a compiler or gcc
update), "Newly failed builds" should exclusively display regressions (and
flaky builds of course, not much I can do about that).

Also, when a bunch of builds fail in such a jobset because of e.g. a
broken connection to a builder that results in a timeout, I want to be
able to restart them all w/o rebuilding actual regressions.

To make it clear that the tab doesn't only contain "Aborted" builds, I
renamed the label to "Aborted / Timed out".
2025-04-07 11:54:47 -04:00
Pierre Bourdon
0159135fc7 web: include current step status on /machines 2025-04-07 11:54:47 -04:00
John Ericson
1d2d3ae6b7 Merge pull request #1461 from NixOS/nix-2.28
Nix 2.28
2025-04-07 11:52:45 -04:00
John Ericson
257b211832 Merge pull request #1460 from NixOS/nix-2.27
Nix 2.27
2025-04-07 11:37:43 -04:00
John Ericson
d6a5df25bf Fix the build 2025-04-07 11:36:59 -04:00
John Ericson
6534a54ee5 Fix Nix code
Can now at least enter dev shell, but build is still broken.
2025-04-07 11:28:34 -04:00
John Ericson
1595064bee flake.lock: Update to nix and nix-eval-jobs 2.28
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/d0f98c76f962147610489e84c10033ca92e9c532?narHash=sha256-u6RhBWQ1XohTZ4Ub5ml1PTcaxQgtqFNng6Sohy1rojw%3D' (2025-04-07)
  → 'github:NixOS/nix/a4962f73b5fc874d4b16baef47921daf349addfc?narHash=sha256-r%2BpsCOW77vTSTNbxTVrYHeh6OgB0QukbnyUVDwg8s4I%3D' (2025-04-07)
• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/62f9c9e8d00d2ff6ab27a6197ab459a8e0808e59?narHash=sha256-PypQspB7h7EENe4RQQUQj2Ay8J1%2BO49AKNO9JbAU4Ek%3D' (2025-04-07)
  → 'github:nix-community/nix-eval-jobs/cba718bafe5dc1607c2b6761ecf53c641a6f3b21?narHash=sha256-v5n6t49X7MOpqS9j0FtI6TWOXvxuZMmGsp2OfUK5QfA%3D' (2025-04-07)
2025-04-07 11:16:09 -04:00
John Ericson
1cb1e139c4 Fix build (due to C++ API changes) 2025-04-07 11:12:12 -04:00
John Ericson
6b97e3ab7b flake.lock: Update to nix and nix-eval-jobs 2.27
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/e310c19a1aeb1ce1ed4d41d5ab2d02db596e0918?narHash=sha256-q/RgA4bB7zWai4oPySq9mch7qH14IEeom2P64SXdqHs%3D' (2025-02-18)
  → 'github:NixOS/nix/d0f98c76f962147610489e84c10033ca92e9c532?narHash=sha256-u6RhBWQ1XohTZ4Ub5ml1PTcaxQgtqFNng6Sohy1rojw%3D' (2025-04-07)
• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/f7418fc1fa45b96d37baa95ff3c016dd5be3876b?narHash=sha256-Lo4KFBNcY8tmBuCmEr2XV0IUZtxXHmbXPNLkov/QSU0%3D' (2025-03-26)
  → 'github:nix-community/nix-eval-jobs/62f9c9e8d00d2ff6ab27a6197ab459a8e0808e59?narHash=sha256-PypQspB7h7EENe4RQQUQj2Ay8J1%2BO49AKNO9JbAU4Ek%3D' (2025-04-07)
2025-04-07 11:02:52 -04:00
Jörg Thalheim
cad08f87d2 Merge pull request #1458 from NixOS/meson
docs: fix contribution guide for new meson-based build
2025-03-29 15:37:06 +01:00
Jörg Thalheim
3fef32b364 gitignore hydra-data as created by foreman 2025-03-29 14:31:18 +00:00
Jörg Thalheim
ae18a7b3ae fix development workflow after switching to meson-based build 2025-03-29 14:31:18 +00:00
Jörg Thalheim
b657bcdfb7 Merge pull request #1457 from dermetfan/fix-1429
hydra-eval-jobset: do not wait on n-e-j inside transaction
2025-03-29 11:36:13 +01:00
Jörg Thalheim
3b4c4972c2 Merge pull request #1449 from knedlsepp/fix-metrics-rendering-with-special-characters
Fix rendering of metrics with special characters
2025-03-29 08:52:07 +01:00
John Ericson
b2fe3f5218 Merge pull request #1455 from qowoz/226-constituent
nix-eval-jobs + constituent globs
2025-03-27 23:18:39 -04:00
Maximilian Bosch
9911f0107f Reimplement (named) constituent jobs (+globbing) based on nix-eval-jobs
Depends on https://github.com/nix-community/nix-eval-jobs/pull/349 & #1421.

Almost equivalent to #1425, but with a small change: when e.g. an
aggregate job has a glob that matches nothing, the jobset evaluation now
fails. This was the intended behavior before (hydra-eval-jobset fails
hard if an aggregate is broken), but the code path was never reached,
since the aggregate was never marked as broken in this case.
2025-03-28 11:12:54 +10:00
zowoq
feebb61897 flake.lock: Update
Flake lock file updates:

• Updated input 'nix-eval-jobs':
    'github:nix-community/nix-eval-jobs/4b392b284877d203ae262e16af269f702df036bc?narHash=sha256-3wIReAqdTALv39gkWXLMZQvHyBOc3yPkWT2ZsItxedY%3D' (2025-02-14)
  → 'github:nix-community/nix-eval-jobs/f7418fc1fa45b96d37baa95ff3c016dd5be3876b?narHash=sha256-Lo4KFBNcY8tmBuCmEr2XV0IUZtxXHmbXPNLkov/QSU0%3D' (2025-03-26)
2025-03-28 11:12:54 +10:00
zowoq
4bcbed2f1b hydraTest: remove outdated postgresql version
error: postgresql_12 has been removed since it reached its EOL upstream
2025-03-28 11:12:48 +10:00
Robin Stumm
987dad3371 hydra-eval-jobset: do not wait on n-e-j inside transaction
fixes #1429
2025-03-26 20:23:26 +01:00
John Ericson
d2db3c7446 Merge pull request #1450 from NixOS/hydra-compress-race
Fix race condition in hydra-compress-logs
2025-03-16 14:37:39 -04:00
John Ericson
97dcdae068 Merge pull request #1451 from NixOS/revert-to-fix-hangs
Revert "Use `LegacySSHStore`"
2025-03-03 10:18:28 -05:00
John Ericson
9a5bd39d4c Revert "Use LegacySSHStore"
There were some hangs caused by this. Need to fix them, ideally
reproducing the issue in a test, before trying this again.

This reverts commit 4a4a0f901c.
2025-03-03 10:12:38 -05:00
Martin Weinelt
f1deb22c02 Fix race condition in hydra-compress-logs 2025-03-02 03:08:26 +01:00
Josef Kemetmüller
d22d030503 Fix rendering of metrics with special characters
My main motivation here is to get metrics with brackets to work in order
to support "pytest" test names:

- test_foo.py::test_bar[1]
- test_foo.py::test_bar[2]

I couldn't find an "HTML escape"-style function that would generate
valid html `id` attribute names from random strings, so I went with a
hash digest instead.
2025-02-27 09:25:42 +01:00
John Ericson
18c0d76210 Merge pull request #1444 from NixOS/use-legacy-ssh-store
Use `LegacySSHStore`
2025-02-18 14:37:17 -05:00
Ivor Wanders
cba85a6a19 Add a link to the raw log. 2022-05-04 13:32:47 -04:00
Janne Heß
54675a0d94 Fix local store detection and related issues
- Add localStore into the stash because it's used in templates
- Hide the Channels button for non-local stores because the link 404s
  anyway
- Fix a style issue when having popovers in dark mode
2022-02-13 14:24:36 +01:00
Andreas Rammhold
c35791fcc2 Add Queue Runner Status to the topbar
I've been searching for this waaay too often in the past and I simply do not see a reason not to include it in the topbar by default.
2021-02-09 14:10:08 +01:00
76 changed files with 1423 additions and 606 deletions

.envrc (new file)

@@ -0,0 +1 @@
use flake


@@ -1,14 +1,27 @@
name: "Test"
on:
pull_request:
merge_group:
push:
branches:
- master
jobs:
tests:
runs-on: ubuntu-latest
strategy:
matrix:
include:
- system: x86_64-linux
runner: ubuntu-latest
- system: aarch64-linux
runner: ubuntu-24.04-arm
runs-on: ${{ matrix.runner }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: cachix/install-nix-action@v17
#- run: nix flake check
- run: nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi
- uses: cachix/install-nix-action@v31
with:
extra_nix_config: |
extra-systems = ${{ matrix.system }}
- uses: DeterminateSystems/magic-nix-cache-action@main
- run: nix-build -A checks.${{ matrix.system }}.build -A checks.${{ matrix.system }}.validate-openapi

.github/workflows/update-flakes.yml (new file)

@@ -0,0 +1,28 @@
name: "Update Flakes"
on:
schedule:
# Run weekly on Monday at 00:00 UTC
- cron: '0 0 * * 1'
workflow_dispatch:
jobs:
update-flakes:
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v31
- name: Update flake inputs
run: nix flake update
- name: Create Pull Request
uses: peter-evans/create-pull-request@v5
with:
commit-message: "flake.lock: Update"
title: "Update flake inputs"
body: |
Automated flake input updates.
This PR was automatically created by the update-flakes workflow.
branch: update-flakes
delete-branch: true

.gitignore

@@ -1,8 +1,13 @@
*~
/.direnv/
.test_info.*
/src/root/static/bootstrap
/src/root/static/fontawesome
/src/root/static/js/flot
/src/sql/hydra-postgresql.sql
/src/sql/hydra-sqlite.sql
/src/sql/tmp.sqlite
.hydra-data
result
result-*
outputs


@@ -72,28 +72,32 @@ Make sure **State** at the top of the page is set to "_Enabled_" and click on "_
You can build Hydra via `nix-build` using the provided [default.nix](./default.nix):
```
$ nix-build
$ nix build
```
### Development Environment
You can use the provided shell.nix to get a working development environment:
```
$ nix-shell
$ autoreconfPhase
$ configurePhase # NOTE: not ./configure
$ make
$ nix develop
$ ln -svf ../../../build/src/bootstrap src/root/static/bootstrap
$ ln -svf ../../../build/src/fontawesome src/root/static/fontawesome
$ ln -svf ../../../../build/src/flot src/root/static/js/flot
$ meson setup build
$ ninja -C build
```
The development environment can also automatically be established using [nix-direnv](https://github.com/nix-community/nix-direnv).
### Executing Hydra During Development
When working on new features or bug fixes you need to be able to run Hydra from your working copy. This
can be done using [foreman](https://github.com/ddollar/foreman):
```
$ nix-shell
$ nix develop
$ # hack hack
$ make
$ ninja -C build
$ foreman start
```
@@ -101,7 +105,7 @@ Have a look at the [Procfile](./Procfile) if you want to see how the processes a
conflicts with services that might be running on your host, hydra and postgres are started on custom ports:
- hydra-server: 63333 with the username "alice" and the password "foobar"
- postgresql: 64444
- postgresql: 64444, can be connected to using `psql -p 64444 -h localhost hydra`
Note that this is only ever meant as an ad-hoc way of executing Hydra during development. Please make use of the
NixOS module for actually running Hydra in production.
@@ -115,22 +119,24 @@ Start by following the steps in [Development Environment](#development-environme
Then, you can run the tests and the perlcritic linter together with:
```console
$ nix-shell
$ make check
$ nix develop
$ ninja -C build test
```
You can run a single test with:
```
$ nix-shell
$ yath test ./t/foo/bar.t
$ nix develop
$ cd build
$ meson test --test-args=../t/Hydra/Event.t testsuite
```
And you can run just perlcritic with:
```
$ nix-shell
$ make perlcritic
$ nix develop
$ cd build
$ meson test perlcritic
```
### JSON API


@@ -51,10 +51,12 @@ base_uri example.com
`base_uri` should be your hydra server's proxied URL. If you are using
Hydra nixos module then setting `hydraURL` option should be enough.
If you want to serve Hydra with a prefix path, for example
[http://example.com/hydra]() then you need to configure your reverse
proxy to pass `X-Request-Base` to hydra, with prefix path as value. For
example if you are using nginx, then use configuration similar to
You also need to configure your reverse proxy to pass `X-Request-Base`
to hydra, with the same value as `base_uri`.
This also covers the case of serving Hydra with a prefix path,
as in [http://example.com/hydra]().
For example, if you are using nginx, then use configuration similar to
the following:
server {
@@ -63,8 +65,7 @@ following:
.. other configuration ..
location /hydra/ {
proxy_pass http://127.0.0.1:3000;
proxy_redirect http://127.0.0.1:3000 https://example.com/hydra;
proxy_pass http://127.0.0.1:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
@@ -74,6 +75,9 @@ following:
}
}
Note the trailing slash on the `proxy_pass` directive, which causes nginx to
strip off the `/hydra/` part of the URL before passing it to hydra.
Populating a Cache
------------------


@@ -11,12 +11,6 @@ $ cd hydra
To enter a shell in which all environment variables (such as `PERL5LIB`)
and dependencies can be found:
```console
$ nix-shell
```
or when flakes are enabled:
```console
$ nix develop
```
@@ -24,15 +18,15 @@ $ nix develop
To build Hydra, you should then do:
```console
[nix-shell]$ autoreconfPhase
[nix-shell]$ configurePhase
[nix-shell]$ make -j$(nproc)
$ mesonConfigurePhase
$ ninja
```
You start a local database, the webserver, and other components with
foreman:
```console
$ ninja -C build
$ foreman start
```
@@ -47,17 +41,20 @@ $ ./src/script/hydra-server
You can run Hydra's test suite with the following:
```console
[nix-shell]$ make check
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ make check YATH_JOB_COUNT=$NIX_BUILD_CORES
[nix-shell]$ # or run yath directly:
[nix-shell]$ yath test
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ yath test -j $NIX_BUILD_CORES
$ meson test
# to run as many tests as you have cores:
$ YATH_JOB_COUNT=$NIX_BUILD_CORES meson test
```
When using `yath` instead of `make check`, ensure you have run `make`
in the root of the repository at least once.
To run individual tests:
```console
# Run a specific test file
$ PERL5LIB=t/lib:$PERL5LIB perl t/test.pl t/Hydra/Controller/API/checks.t
# Run all tests in a directory
$ PERL5LIB=t/lib:$PERL5LIB perl t/test.pl t/Hydra/Controller/API/
```
**Warning**: Currently, the tests can fail
if run with high parallelism [due to an issue in
@@ -75,7 +72,7 @@ will reload the page every time you save.
To build Hydra and its dependencies:
```console
$ nix-build release.nix -A build.x86_64-linux
$ nix build .#packages.x86_64-linux.default
```
## Development Tasks


@@ -48,7 +48,7 @@ Getting Nix
If your server runs NixOS you are all set to continue with installation
of Hydra. Otherwise you first need to install Nix. The latest stable
version can be found on [the Nix web
site](http://nixos.org/nix/download.html), along with a manual, which
site](https://nixos.org/download/), along with a manual, which
includes installation instructions.
Installation

flake.lock (generated)

@@ -1,27 +1,18 @@
{
"nodes": {
"nix": {
"inputs": {
"flake-compat": [],
"flake-parts": [],
"git-hooks-nix": [],
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-23-11": [],
"nixpkgs-regression": []
},
"flake": false,
"locked": {
"lastModified": 1739899400,
"narHash": "sha256-q/RgA4bB7zWai4oPySq9mch7qH14IEeom2P64SXdqHs=",
"lastModified": 1750777360,
"narHash": "sha256-nDWFxwhT+fQNgi4rrr55EKjpxDyVKSl1KaNmSXtYj40=",
"owner": "NixOS",
"repo": "nix",
"rev": "e310c19a1aeb1ce1ed4d41d5ab2d02db596e0918",
"rev": "7bb200199705eddd53cb34660a76567c6f1295d9",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "2.26-maintenance",
"ref": "2.29-maintenance",
"repo": "nix",
"type": "github"
}
@@ -29,11 +20,11 @@
"nix-eval-jobs": {
"flake": false,
"locked": {
"lastModified": 1739500569,
"narHash": "sha256-3wIReAqdTALv39gkWXLMZQvHyBOc3yPkWT2ZsItxedY=",
"lastModified": 1748680938,
"narHash": "sha256-TQk6pEMD0mFw7jZXpg7+2qNKGbAluMQgc55OMgEO8bM=",
"owner": "nix-community",
"repo": "nix-eval-jobs",
"rev": "4b392b284877d203ae262e16af269f702df036bc",
"rev": "974a4af3d4a8fd242d8d0e2608da4be87a62b83f",
"type": "github"
},
"original": {
@@ -44,16 +35,16 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1739461644,
"narHash": "sha256-1o1qR0KYozYGRrnqytSpAhVBYLNBHX+Lv6I39zGRzKM=",
"lastModified": 1750736827,
"narHash": "sha256-UcNP7BR41xMTe0sfHBH8R79+HdCw0OwkC/ZKrQEuMeo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "97a719c9f0a07923c957cf51b20b329f9fb9d43f",
"rev": "b4a30b08433ad7b6e1dfba0833fb0fe69d43dfec",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.11-small",
"ref": "nixos-25.05-small",
"repo": "nixpkgs",
"type": "github"
}


@@ -1,18 +1,12 @@
{
description = "A Nix-based continuous build system";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11-small";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05-small";
inputs.nix = {
url = "github:NixOS/nix/2.26-maintenance";
inputs.nixpkgs.follows = "nixpkgs";
# hide nix dev tooling from our lock file
inputs.flake-parts.follows = "";
inputs.git-hooks-nix.follows = "";
inputs.nixpkgs-regression.follows = "";
inputs.nixpkgs-23-11.follows = "";
inputs.flake-compat.follows = "";
url = "github:NixOS/nix/2.29-maintenance";
# We want to control the deps precisely
flake = false;
};
inputs.nix-eval-jobs = {
@@ -30,11 +24,27 @@
# A Nixpkgs overlay that provides a 'hydra' package.
overlays.default = final: prev: {
nix-eval-jobs = final.callPackage nix-eval-jobs {};
nixDependenciesForHydra = final.lib.makeScope final.newScope
(import (nix + "/packaging/dependencies.nix") {
pkgs = final;
inherit (final) stdenv;
inputs = {};
});
nixComponentsForHydra = final.lib.makeScope final.nixDependenciesForHydra.newScope
(import (nix + "/packaging/components.nix") {
officialRelease = true;
inherit (final) lib;
pkgs = final;
src = nix;
maintainers = [ ];
});
nix-eval-jobs = final.callPackage nix-eval-jobs {
nixComponents = final.nixComponentsForHydra;
};
hydra = final.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit (final.lib) fileset;
rawSrc = self;
nix-perl-bindings = final.nixComponents.nix-perl-bindings;
nixComponents = final.nixComponentsForHydra;
};
};
@@ -73,21 +83,31 @@
validate-openapi = hydraJobs.tests.validate-openapi.${system};
});
packages = forEachSystem (system: {
nix-eval-jobs = nixpkgs.legacyPackages.${system}.callPackage nix-eval-jobs {
nix = nix.packages.${system}.nix;
packages = forEachSystem (system: let
inherit (nixpkgs) lib;
pkgs = nixpkgs.legacyPackages.${system};
nixDependencies = lib.makeScope pkgs.newScope
(import (nix + "/packaging/dependencies.nix") {
inherit pkgs;
inherit (pkgs) stdenv;
inputs = {};
});
nixComponents = lib.makeScope nixDependencies.newScope
(import (nix + "/packaging/components.nix") {
officialRelease = true;
inherit lib pkgs;
src = nix;
maintainers = [ ];
});
in {
nix-eval-jobs = pkgs.callPackage nix-eval-jobs {
inherit nixComponents;
};
hydra = nixpkgs.legacyPackages.${system}.callPackage ./package.nix {
hydra = pkgs.callPackage ./package.nix {
inherit (nixpkgs.lib) fileset;
inherit nixComponents;
inherit (self.packages.${system}) nix-eval-jobs;
rawSrc = self;
inherit (nix.packages.${system})
nix-util
nix-store
nix-main
nix-cli
;
nix-perl-bindings = nix.hydraJobs.perlBindings.${system};
};
default = self.packages.${system}.hydra;
});


@@ -1,5 +1,7 @@
#!/bin/sh
export PATH=$(pwd)/src/script:$PATH
# wait for hydra-server to listen
while ! nc -z localhost 63333; do sleep 1; done


@@ -1,5 +1,7 @@
#!/bin/sh
export PATH=$(pwd)/src/script:$PATH
# wait for postgresql to listen
while ! pg_isready -h $(pwd)/.hydra-data/postgres -p 64444; do sleep 1; done


@@ -1,5 +1,7 @@
#!/bin/sh
export PATH=$(pwd)/src/script:$PATH
# wait for hydra-server to listen
while ! nc -z localhost 63333; do sleep 1; done


@@ -12,20 +12,6 @@ nix_util_dep = dependency('nix-util', required: true)
nix_store_dep = dependency('nix-store', required: true)
nix_main_dep = dependency('nix-main', required: true)
# Nix need extra flags not provided in its pkg-config files.
nix_dep = declare_dependency(
dependencies: [
nix_util_dep,
nix_store_dep,
nix_main_dep,
],
compile_args: [
'-include', 'nix/config-util.hh',
'-include', 'nix/config-store.hh',
'-include', 'nix/config-main.hh',
],
)
pqxx_dep = dependency('libpqxx', required: true)
prom_cpp_core_dep = dependency('prometheus-cpp-core', required: true)


@@ -15,7 +15,6 @@
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_12;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"


@@ -228,8 +228,8 @@ in
nix.settings = {
trusted-users = [ "hydra-queue-runner" ];
gc-keep-outputs = true;
gc-keep-derivations = true;
keep-outputs = true;
keep-derivations = true;
};
services.hydra-dev.extraConfig =
@@ -340,7 +340,7 @@ in
requires = [ "hydra-init.service" ];
wants = [ "network-online.target" ];
after = [ "hydra-init.service" "network.target" "network-online.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
path = [ cfg.package pkgs.hostname-debian pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
@@ -364,7 +364,7 @@ in
requires = [ "hydra-init.service" ];
restartTriggers = [ hydraConf ];
after = [ "hydra-init.service" "network.target" ];
path = with pkgs; [ nettools cfg.package jq ];
path = with pkgs; [ hostname-debian cfg.package jq ];
environment = env // {
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-evaluator";
};
@@ -463,12 +463,12 @@ in
''
set -eou pipefail
compression=$(sed -nr 's/compress_build_logs_compression = ()/\1/p' ${baseDir}/hydra.conf)
if [[ $compression == "" ]]; then
compression="bzip2"
if [[ $compression == "" || $compression == bzip2 ]]; then
compressionCmd=(bzip2)
elif [[ $compression == zstd ]]; then
compression="zstd --rm"
compressionCmd=(zstd --rm)
fi
find ${baseDir}/build-logs -type f -name "*.drv" -mtime +3 -size +0c | xargs -r "$compression" --force --quiet
find ${baseDir}/build-logs -ignore_readdir_race -type f -name "*.drv" -mtime +3 -size +0c -print0 | xargs -0 -r "''${compressionCmd[@]}" --force --quiet
'';
startAt = "Sun 01:45";
};


@@ -27,8 +27,7 @@ in
{
install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
@@ -43,8 +42,7 @@ in
});
notifications = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).simpleTest {
name = "hydra-notifications";
nodes.machine = {
imports = [ hydraServer ];
@@ -56,7 +54,7 @@ in
'';
services.influxdb.enable = true;
};
testScript = ''
testScript = { nodes, ... }: ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
@@ -87,7 +85,7 @@ in
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${config.services.hydra-dev.package.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
"su - hydra -c 'perl -I ${nodes.machine.services.hydra-dev.package.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
@@ -101,9 +99,10 @@ in
});
gitea = forEachSystem (system:
let pkgs = nixpkgs.legacyPackages.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
let
pkgs = nixpkgs.legacyPackages.${system};
in
(import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; }).makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];


@@ -8,11 +8,7 @@
, perlPackages
, nix-util
, nix-store
, nix-main
, nix-cli
, nix-perl-bindings
, nixComponents
, git
, makeWrapper
@@ -65,7 +61,7 @@ let
name = "hydra-perl-deps";
paths = lib.closePropagation
([
nix-perl-bindings
nixComponents.nix-perl-bindings
git
] ++ (with perlPackages; [
AuthenSASL
@@ -93,6 +89,7 @@ let
DateTime
DBDPg
DBDSQLite
DBIxClassHelpers
DigestSHA1
EmailMIME
EmailSender
@@ -113,6 +110,7 @@ let
NetAmazonS3
NetPrometheus
NetStatsd
NumberBytesHuman
PadWalker
ParallelForkManager
PerlCriticCommunity
@@ -165,7 +163,7 @@ stdenv.mkDerivation (finalAttrs: {
nukeReferences
pkg-config
mdbook
nix-cli
nixComponents.nix-cli
perlDeps
perl
unzip
@@ -175,9 +173,9 @@ stdenv.mkDerivation (finalAttrs: {
libpqxx
openssl
libxslt
nix-util
nix-store
nix-main
nixComponents.nix-util
nixComponents.nix-store
nixComponents.nix-main
perlDeps
perl
boost
@@ -204,14 +202,14 @@ stdenv.mkDerivation (finalAttrs: {
glibcLocales
libressl.nc
python3
nix-cli
nixComponents.nix-cli
];
hydraPath = lib.makeBinPath (
[
subversion
openssh
nix-cli
nixComponents.nix-cli
coreutils
findutils
pixz
@@ -241,7 +239,7 @@ stdenv.mkDerivation (finalAttrs: {
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-queue-runner:$PATH
PATH=$(pwd)/build/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/build/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
@@ -272,7 +270,7 @@ stdenv.mkDerivation (finalAttrs: {
--prefix PATH ':' $out/bin:$hydraPath \
--set HYDRA_RELEASE ${version} \
--set HYDRA_HOME $out/libexec/hydra \
--set NIX_RELEASE ${nix-cli.name or "unknown"} \
--set NIX_RELEASE ${nixComponents.nix-cli.name or "unknown"} \
--set NIX_EVAL_JOBS_RELEASE ${nix-eval-jobs.name or "unknown"}
done
'';
@@ -280,5 +278,8 @@ stdenv.mkDerivation (finalAttrs: {
dontStrip = true;
meta.description = "Build of Hydra on ${stdenv.system}";
passthru = { inherit perlDeps; };
passthru = {
inherit perlDeps;
nix = nixComponents.nix-cli;
};
})


@@ -1,8 +1,8 @@
#include "db.hh"
#include "hydra-config.hh"
#include "pool.hh"
#include "shared.hh"
#include "signals.hh"
#include <nix/util/pool.hh>
#include <nix/main/shared.hh>
#include <nix/util/signals.hh>
#include <algorithm>
#include <thread>
@@ -180,10 +180,8 @@ struct Evaluator
{
auto conn(dbPool.get());
pqxx::work txn(*conn);
txn.exec_params0
("update Jobsets set startTime = $1 where id = $2",
now,
jobset.name.id);
txn.exec("update Jobsets set startTime = $1 where id = $2",
pqxx::params{now, jobset.name.id}).no_rows();
txn.commit();
}
@@ -234,7 +232,7 @@ struct Evaluator
pqxx::work txn(*conn);
if (jobset.evaluation_style == EvaluationStyle::ONE_AT_A_TIME) {
auto evaluation_res = txn.exec_params
auto evaluation_res = txn.exec
("select id from JobsetEvals "
"where jobset_id = $1 "
"order by id desc limit 1"
@@ -250,7 +248,7 @@ struct Evaluator
auto evaluation_id = evaluation_res[0][0].as<int>();
auto unfinished_build_res = txn.exec_params
auto unfinished_build_res = txn.exec
("select id from Builds "
"join JobsetEvalMembers "
" on (JobsetEvalMembers.build = Builds.id) "
@@ -420,21 +418,18 @@ struct Evaluator
/* Clear the trigger time to prevent this
jobset from getting stuck in an endless
failing eval loop. */
txn.exec_params0
txn.exec
("update Jobsets set triggerTime = null where id = $1 and startTime is not null and triggerTime <= startTime",
jobset.name.id);
jobset.name.id).no_rows();
/* Clear the start time. */
txn.exec_params0
txn.exec
("update Jobsets set startTime = null where id = $1",
jobset.name.id);
jobset.name.id).no_rows();
if (!WIFEXITED(status) || WEXITSTATUS(status) > 1) {
txn.exec_params0
("update Jobsets set errorMsg = $1, lastCheckedTime = $2, errorTime = $2, fetchErrorMsg = null where id = $3",
fmt("evaluation %s", statusToString(status)),
now,
jobset.name.id);
txn.exec("update Jobsets set errorMsg = $1, lastCheckedTime = $2, errorTime = $2, fetchErrorMsg = null where id = $3",
pqxx::params{fmt("evaluation %s", statusToString(status)), now, jobset.name.id}).no_rows();
}
txn.commit();
@@ -459,7 +454,7 @@ struct Evaluator
{
auto conn(dbPool.get());
pqxx::work txn(*conn);
txn.exec("update Jobsets set startTime = null");
txn.exec("update Jobsets set startTime = null").no_rows();
txn.commit();
}


@@ -2,7 +2,8 @@ hydra_evaluator = executable('hydra-evaluator',
'hydra-evaluator.cc',
dependencies: [
libhydra_dep,
nix_dep,
nix_util_dep,
nix_main_dep,
pqxx_dep,
],
install: true,

View File

@@ -5,17 +5,20 @@
#include <sys/stat.h>
#include <fcntl.h>
#include "build-result.hh"
#include "path.hh"
#include "legacy-ssh-store.hh"
#include "serve-protocol.hh"
#include <nix/store/build-result.hh>
#include <nix/store/path.hh>
#include <nix/store/legacy-ssh-store.hh>
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include "state.hh"
#include "current-process.hh"
#include "processes.hh"
#include "util.hh"
#include "ssh.hh"
#include "finally.hh"
#include "url.hh"
#include <nix/util/current-process.hh>
#include <nix/util/processes.hh>
#include <nix/util/util.hh>
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include <nix/store/ssh.hh>
#include <nix/util/finally.hh>
#include <nix/util/url.hh>
using namespace nix;
@@ -36,6 +39,38 @@ bool ::Machine::isLocalhost() const
namespace nix::build_remote {
static std::unique_ptr<SSHMaster::Connection> openConnection(
::Machine::ptr machine, SSHMaster & master)
{
Strings command = {"nix-store", "--serve", "--write"};
if (machine->isLocalhost()) {
command.push_back("--builders");
command.push_back("");
} else {
auto remoteStore = machine->storeUri.params.find("remote-store");
if (remoteStore != machine->storeUri.params.end()) {
command.push_back("--store");
command.push_back(escapeShellArgAlways(remoteStore->second));
}
}
auto ret = master.startCommand(std::move(command), {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
});
// XXX: determine the actual max value we can use from /proc.
// FIXME: Should this be upstreamed into `startCommand` in Nix?
int pipesize = 1024 * 1024;
fcntl(ret->in.get(), F_SETPIPE_SZ, &pipesize);
fcntl(ret->out.get(), F_SETPIPE_SZ, &pipesize);
return ret;
}
static void copyClosureTo(
::Machine::Connection & conn,
Store & destStore,
@@ -52,8 +87,8 @@ static void copyClosureTo(
// FIXME: substitute output pollutes our build log
/* Get back the set of paths that are already valid on the remote
host. */
auto present = conn.store->queryValidPaths(
closure, true, useSubstitutes);
auto present = conn.queryValidPaths(
destStore, true, closure, useSubstitutes);
if (present.size() == closure.size()) return;
@@ -68,7 +103,12 @@ static void copyClosureTo(
std::unique_lock<std::timed_mutex> sendLock(conn.machine->state->sendLock,
std::chrono::seconds(600));
conn.store->addMultipleToStoreLegacy(destStore, missing);
conn.to << ServeProto::Command::ImportPaths;
destStore.exportPaths(missing, conn.to);
conn.to.flush();
if (readInt(conn.from) != 1)
throw Error("remote machine failed to import closure");
}
@@ -188,7 +228,7 @@ static BuildResult performBuild(
counter & nrStepsBuilding
)
{
auto kont = conn.store->buildDerivationAsync(drvPath, drv, options);
conn.putBuildDerivationRequest(localStore, drvPath, drv, options);
BuildResult result;
@@ -197,10 +237,7 @@ static BuildResult performBuild(
startTime = time(0);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = kont();
// Without proper call-once functions, we need to manually
// delete after calling.
kont = {};
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(0);
@@ -216,7 +253,7 @@ static BuildResult performBuild(
// If the protocol was too old to give us `builtOutputs`, initialize
// it manually by introspecting the derivation.
if (GET_PROTOCOL_MINOR(conn.store->getProtocol()) < 6)
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 6)
{
// If the remote is too old to handle CA derivations, we cant get this
// far anyways
@@ -249,25 +286,26 @@ static void copyPathFromRemote(
const ValidPathInfo & info
)
{
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
is already valid on the destination store. Since this
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
conn.store->narFromPath(info.path, [&](Source & source) {
TeeSource tee{source, sink};
extractNarData(tee, conn.store->printStorePath(info.path), narMembers);
});
});
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
is already valid on the destination store. Since this
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
}
static void copyPathsFromRemote(
@@ -348,8 +386,19 @@ void RemoteResult::updateWithBuildResult(const nix::BuildResult & buildResult)
}
/* Utility guard object to auto-release a semaphore on destruction. */
template <typename T>
class SemaphoreReleaser {
public:
SemaphoreReleaser(T* s) : sem(s) {}
~SemaphoreReleaser() { sem->release(); }
private:
T* sem;
};
void State::buildRemote(ref<Store> destStore,
std::unique_ptr<MachineReservation> reservation,
::Machine::ptr machine, Step::ptr step,
const ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
@@ -366,39 +415,30 @@ void State::buildRemote(ref<Store> destStore,
updateStep(ssConnecting);
// FIXME: rewrite to use Store.
::Machine::Connection conn {
.machine = machine,
.store = [&]{
auto * pSpecified = std::get_if<StoreReference::Specified>(&machine->storeUri.variant);
if (!pSpecified || pSpecified->scheme != "ssh") {
throw Error("Currently, only (legacy-)ssh stores are supported!");
}
auto storeRef = machine->completeStoreReference();
auto remoteStore = machine->openStore().dynamic_pointer_cast<LegacySSHStore>();
assert(remoteStore);
auto * pSpecified = std::get_if<StoreReference::Specified>(&storeRef.variant);
if (!pSpecified || pSpecified->scheme != "ssh") {
throw Error("Currently, only (legacy-)ssh stores are supported!");
}
remoteStore->connPipeSize = 1024 * 1024;
if (machine->isLocalhost()) {
auto rp_new = remoteStore->remoteProgram.get();
rp_new.push_back("--builders");
rp_new.push_back("");
const_cast<nix::Setting<Strings> &>(remoteStore->remoteProgram).assign(rp_new);
}
remoteStore->extraSshArgs = {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
};
const_cast<nix::Setting<int> &>(remoteStore->logFD).assign(logFD.get());
return nix::ref{remoteStore};
}(),
LegacySSHStoreConfig storeConfig {
pSpecified->scheme,
pSpecified->authority,
storeRef.params
};
auto master = storeConfig.createSSHMaster(
false, // no SSH master yet
logFD.get());
// FIXME: rewrite to use Store.
auto child = build_remote::openConnection(machine, master);
{
auto activeStepState(activeStep->state_.lock());
if (activeStepState->cancelled) throw Error("step cancelled");
activeStepState->pid = conn.store->getConnectionPid();
activeStepState->pid = child->sshPid;
}
Finally clearPid([&]() {
@@ -413,12 +453,35 @@ void State::buildRemote(ref<Store> destStore,
process. Meh. */
});
::Machine::Connection conn {
{
.to = child->in.get(),
.from = child->out.get(),
/* Handshake. */
.remoteVersion = 0xdadbeef, // FIXME avoid dummy initialize
},
/*.machine =*/ machine,
};
Finally updateStats([&]() {
auto stats = conn.store->getConnectionStats();
bytesReceived += stats.bytesReceived;
bytesSent += stats.bytesSent;
bytesReceived += conn.from.read;
bytesSent += conn.to.written;
});
constexpr ServeProto::Version our_version = 0x206;
try {
conn.remoteVersion = decltype(conn)::handshake(
conn.to,
conn.from,
our_version,
machine->storeUri.render());
} catch (EndOfFile & e) {
child->sshPid.wait();
std::string s = chomp(readFile(result.logFile));
throw Error("cannot connect to %1%: %2%", machine->storeUri.render(), s);
}
{
auto info(machine->state->connectInfo.lock());
info->consecutiveFailures = 0;
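The handshake above advertises our version (0x206, i.e. serve protocol 2.6) and settles on something both ends speak; the EndOfFile handler exists because a failed ssh exec simply closes the pipe, leaving the real error only in the log file. A toy sketch of the negotiation, assuming the usual take-the-lower-version convention:

#include <algorithm>
#include <cstdio>

using Version = unsigned int;  // (major << 8) | minor, so 0x206 is 2.6

Version negotiate(Version ours, Version theirs)
{
    // The real handshake also exchanges magic numbers first; after that,
    // both sides proceed with the lower of the two advertised versions.
    return std::min(ours, theirs);
}

int main()
{
    Version agreed = negotiate(0x206, 0x205);
    std::printf("agreed on %u.%u\n", agreed >> 8, agreed & 0xff);
}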
@@ -475,6 +538,23 @@ void State::buildRemote(ref<Store> destStore,
result.logFile = "";
}
/* Throttle CPU-bound work. Opportunistically skip updating the current
* step, since this requires a DB roundtrip. */
if (!localWorkThrottler.try_acquire()) {
MaintainCount<counter> mc(nrStepsWaitingForDownloadSlot);
updateStep(ssWaitingForLocalSlot);
localWorkThrottler.acquire();
}
SemaphoreReleaser releaser(&localWorkThrottler);
/* Once we've started copying outputs, release the machine reservation
* so further builds can happen. We do not release the machine earlier,
* to avoid situations where the queue runner is bottlenecked on
* copying outputs and ends up building more things than it can
* allocate copy slots for. */
reservation.reset();
wakeDispatcher();
StorePathSet outputs;
for (auto & [_, realisation] : buildResult.builtOutputs)
outputs.insert(realisation.outPath);
@@ -487,7 +567,7 @@ void State::buildRemote(ref<Store> destStore,
auto now1 = std::chrono::steady_clock::now();
auto infos = conn.store->queryPathInfosUncached(outputs);
auto infos = conn.queryPathInfos(*localStore, outputs);
size_t totalNarSize = 0;
for (auto & [_, info] : infos) totalNarSize += info.narSize;
@@ -522,11 +602,9 @@ void State::buildRemote(ref<Store> destStore,
}
}
/* Shutting down the connection is handled by RAII; the only
difference is kill() instead of wait() (i.e. send a signal,
then wait()).
*/
/* Shut down the connection. */
child->in = -1;
child->sshPid.wait();
} catch (Error & e) {
/* Disable this machine until a certain period of time has

View File

@@ -1,7 +1,7 @@
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "util.hh"
#include "source-accessor.hh"
#include <nix/store/store-api.hh>
#include <nix/util/util.hh>
#include <nix/util/source-accessor.hh>
#include <regex>

View File

@@ -2,8 +2,8 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "finally.hh"
#include "binary-cache-store.hh"
#include <nix/util/finally.hh>
#include <nix/store/binary-cache-store.hh>
using namespace nix;
@@ -16,7 +16,7 @@ void setThreadName(const std::string & name)
}
void State::builder(MachineReservation::ptr reservation)
void State::builder(std::unique_ptr<MachineReservation> reservation)
{
setThreadName("bld~" + std::string(reservation->step->drvPath.to_string()));
@@ -35,22 +35,20 @@ void State::builder(MachineReservation::ptr reservation)
activeSteps_.lock()->erase(activeStep);
});
std::string machine = reservation->machine->storeUri.render();
try {
auto destStore = getDestStore();
res = doBuildStep(destStore, reservation, activeStep);
// Might release the reservation.
res = doBuildStep(destStore, std::move(reservation), activeStep);
} catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building %s on %s: %s",
localStore->printStorePath(reservation->step->drvPath),
reservation->machine->storeUri.render(),
localStore->printStorePath(activeStep->step->drvPath),
machine,
e.what());
}
}
/* Release the machine and wake up the dispatcher. */
assert(reservation.unique());
reservation = 0;
wakeDispatcher();
/* If there was a temporary failure, retry the step after an
exponentially increasing interval. */
Step::ptr step = wstep.lock();
@@ -72,11 +70,11 @@ void State::builder(MachineReservation::ptr reservation)
State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr reservation,
std::unique_ptr<MachineReservation> reservation,
std::shared_ptr<ActiveStep> activeStep)
{
auto & step(reservation->step);
auto & machine(reservation->machine);
auto step(reservation->step);
auto machine(reservation->machine);
{
auto step_(step->state.lock());
@@ -211,7 +209,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
try {
/* FIXME: referring builds may have conflicting timeouts. */
buildRemote(destStore, machine, step, buildOptions, result, activeStep, updateStep, narMembers);
buildRemote(destStore, std::move(reservation), machine, step, buildOptions, result, activeStep, updateStep, narMembers);
} catch (Error & e) {
if (activeStep->state_.lock()->cancelled) {
printInfo("marking step %d of build %d as cancelled", stepNr, buildId);
@@ -460,13 +458,12 @@ void State::failStep(
for (auto & build : indirect) {
if (build->finishedInDB) continue;
printError("marking build %1% as failed", build->id);
txn.exec_params0
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $4, isCachedBuild = $5, notificationPendingSince = $4 where id = $1 and finished = 0",
build->id,
txn.exec("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $4, isCachedBuild = $5, notificationPendingSince = $4 where id = $1 and finished = 0",
pqxx::params{build->id,
(int) (build->drvPath != step->drvPath && result.buildStatus() == bsFailed ? bsDepFailed : result.buildStatus()),
result.startTime,
result.stopTime,
result.stepStatus == bsCachedFailure ? 1 : 0);
result.stepStatus == bsCachedFailure ? 1 : 0}).no_rows();
nrBuildsDone++;
}
@@ -475,7 +472,7 @@ void State::failStep(
if (result.stepStatus != bsCachedFailure && result.canCache)
for (auto & i : step->drv->outputsAndOptPaths(*localStore))
if (i.second.second)
txn.exec_params0("insert into FailedPaths values ($1)", localStore->printStorePath(*i.second.second));
txn.exec("insert into FailedPaths values ($1)", pqxx::params{localStore->printStorePath(*i.second.second)}).no_rows();
txn.commit();
}
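The same mechanical translation recurs throughout these files: deprecated exec_params/exec_params0/exec_params1 calls become plain exec() with an explicit pqxx::params bundle, and the expected row count moves onto the result as .no_rows() or .one_row(). A before/after sketch against libpqxx >= 7.10, with the query modelled on the ones above:

#include <pqxx/pqxx>

void markAborted(pqxx::work & txn, int buildId, int status)
{
    // libpqxx < 7.10 (deprecated):
    //   txn.exec_params0("update Builds set buildStatus = $2 where id = $1",
    //                    buildId, status);

    // libpqxx >= 7.10: parameters travel in an explicit pqxx::params bundle,
    // and the "this statement returns no rows" check moves onto the result.
    txn.exec("update Builds set buildStatus = $2 where id = $1",
             pqxx::params{buildId, status}).no_rows();
}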

View File

@@ -40,13 +40,15 @@ void State::dispatcher()
printMsg(lvlDebug, "dispatcher woken up");
nrDispatcherWakeups++;
auto now1 = std::chrono::steady_clock::now();
auto t_before_work = std::chrono::steady_clock::now();
auto sleepUntil = doDispatch();
auto now2 = std::chrono::steady_clock::now();
auto t_after_work = std::chrono::steady_clock::now();
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
prom.dispatcher_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();
/* Sleep until we're woken up (either because a runnable build
is added, or because a build finishes). */
@@ -60,6 +62,10 @@ void State::dispatcher()
*dispatcherWakeup_ = false;
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
} catch (std::exception & e) {
printError("dispatcher: %s", e.what());
sleep(1);
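The three timestamps split every dispatcher iteration into a running span (doDispatch) and a waiting span (the sleep), each feeding a monotonic counter, so the ratio of the two rates shows how saturated the dispatcher is. A self-contained sketch with plain doubles standing in for the Prometheus counters:

#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using namespace std::chrono;
    double running_us = 0, waiting_us = 0;  // stand-ins for the counters

    for (int i = 0; i < 3; i++) {
        auto t_before_work = steady_clock::now();
        std::this_thread::sleep_for(milliseconds(5));   // "doDispatch()"
        auto t_after_work = steady_clock::now();
        running_us += duration_cast<microseconds>(t_after_work - t_before_work).count();

        std::this_thread::sleep_for(milliseconds(10));  // "sleep until woken"
        auto t_after_sleep = steady_clock::now();
        waiting_us += duration_cast<microseconds>(t_after_sleep - t_after_work).count();
    }
    std::printf("running: %.0fus, waiting: %.0fus\n", running_us, waiting_us);
}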
@@ -128,6 +134,8 @@ system_time State::doDispatch()
comparator is a partial ordering (see MachineInfo). */
int highestGlobalPriority;
int highestLocalPriority;
size_t numRequiredSystemFeatures;
size_t numRevDeps;
BuildID lowestBuildID;
StepInfo(Step::ptr step, Step::State & step_) : step(step)
@@ -136,6 +144,8 @@ system_time State::doDispatch()
lowestShareUsed = std::min(lowestShareUsed, jobset->shareUsed());
highestGlobalPriority = step_.highestGlobalPriority;
highestLocalPriority = step_.highestLocalPriority;
numRequiredSystemFeatures = step->requiredSystemFeatures.size();
numRevDeps = step_.rdeps.size();
lowestBuildID = step_.lowestBuildID;
}
};
@@ -188,6 +198,8 @@ system_time State::doDispatch()
a.highestGlobalPriority != b.highestGlobalPriority ? a.highestGlobalPriority > b.highestGlobalPriority :
a.lowestShareUsed != b.lowestShareUsed ? a.lowestShareUsed < b.lowestShareUsed :
a.highestLocalPriority != b.highestLocalPriority ? a.highestLocalPriority > b.highestLocalPriority :
a.numRequiredSystemFeatures != b.numRequiredSystemFeatures ? a.numRequiredSystemFeatures > b.numRequiredSystemFeatures :
a.numRevDeps != b.numRevDeps ? a.numRevDeps > b.numRevDeps :
a.lowestBuildID < b.lowestBuildID;
});
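The two new fields slot into an ordinary chained-ternary lexicographic comparator: steps that need more system features (and thus fit fewer machines) and steps with more reverse dependencies win ties, with the lowest build ID as the final tie-breaker. A reduced sketch of the idiom:

#include <algorithm>
#include <cstdio>
#include <vector>

struct StepInfo {
    int globalPriority;
    size_t numRequiredSystemFeatures;
    size_t numRevDeps;
    unsigned lowestBuildID;
};

int main() {
    std::vector<StepInfo> steps = {
        {1, 0, 5, 42}, {1, 2, 0, 43}, {2, 0, 0, 44},
    };
    std::sort(steps.begin(), steps.end(), [](const StepInfo & a, const StepInfo & b) {
        return
            a.globalPriority != b.globalPriority ? a.globalPriority > b.globalPriority :
            a.numRequiredSystemFeatures != b.numRequiredSystemFeatures ? a.numRequiredSystemFeatures > b.numRequiredSystemFeatures :
            a.numRevDeps != b.numRevDeps ? a.numRevDeps > b.numRevDeps :
            a.lowestBuildID < b.lowestBuildID;  // oldest build wins the final tie
    });
    for (auto & s : steps) std::printf("build %u\n", s.lowestBuildID);
}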
@@ -282,7 +294,7 @@ system_time State::doDispatch()
/* Make a slot reservation and start a thread to
do the build. */
auto builderThread = std::thread(&State::builder, this,
std::make_shared<MachineReservation>(*this, step, mi.machine));
std::make_unique<MachineReservation>(*this, step, mi.machine));
builderThread.detach(); // FIXME?
keepGoing = true;

View File

@@ -2,9 +2,9 @@
#include <memory>
#include "hash.hh"
#include "derivations.hh"
#include "store-api.hh"
#include <nix/util/hash.hh>
#include <nix/store/derivations.hh>
#include <nix/store/store-api.hh>
#include "nar-extractor.hh"
struct BuildProduct

View File

@@ -11,16 +11,16 @@
#include <nlohmann/json.hpp>
#include "signals.hh"
#include <nix/util/signals.hh>
#include "state.hh"
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "remote-store.hh"
#include <nix/store/store-open.hh>
#include <nix/store/remote-store.hh>
#include "globals.hh"
#include <nix/store/globals.hh>
#include "hydra-config.hh"
#include "s3-binary-cache-store.hh"
#include "shared.hh"
#include <nix/store/s3-binary-cache-store.hh>
#include <nix/main/shared.hh>
using namespace nix;
using nlohmann::json;
@@ -70,10 +70,31 @@ State::PromMetrics::PromMetrics()
.Register(*registry)
.Add({})
)
, queue_max_id(
prometheus::BuildGauge()
.Name("hydraqueuerunner_queue_max_build_id_info")
.Help("Maximum build record ID in the queue")
, dispatcher_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_running")
.Help("Time (in micros) spent running the dispatcher")
.Register(*registry)
.Add({})
)
, dispatcher_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_dispatcher_time_spent_waiting")
.Help("Time (in micros) spent waiting for the dispatcher to obtain work")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_running(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_running")
.Help("Time (in micros) spent running the queue monitor")
.Register(*registry)
.Add({})
)
, queue_monitor_time_spent_waiting(
prometheus::BuildCounter()
.Name("hydraqueuerunner_queue_monitor_time_spent_waiting")
.Help("Time (in micros) spent waiting for the queue monitor to obtain work")
.Register(*registry)
.Add({})
)
@@ -85,6 +106,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2)))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
@@ -254,17 +276,16 @@ void State::monitorMachinesFile()
void State::clearBusy(Connection & conn, time_t stopTime)
{
pqxx::work txn(conn);
txn.exec_params0
("update BuildSteps set busy = 0, status = $1, stopTime = $2 where busy != 0",
(int) bsAborted,
stopTime != 0 ? std::make_optional(stopTime) : std::nullopt);
txn.exec("update BuildSteps set busy = 0, status = $1, stopTime = $2 where busy != 0",
pqxx::params{(int) bsAborted,
stopTime != 0 ? std::make_optional(stopTime) : std::nullopt}).no_rows();
txn.commit();
}
unsigned int State::allocBuildStep(pqxx::work & txn, BuildID buildId)
{
auto res = txn.exec_params1("select max(stepnr) from BuildSteps where build = $1", buildId);
auto res = txn.exec("select max(stepnr) from BuildSteps where build = $1", buildId).one_row();
return res[0].is_null() ? 1 : res[0].as<int>() + 1;
}
@@ -275,9 +296,8 @@ unsigned int State::createBuildStep(pqxx::work & txn, time_t startTime, BuildID
restart:
auto stepNr = allocBuildStep(txn, buildId);
auto r = txn.exec_params
("insert into BuildSteps (build, stepnr, type, drvPath, busy, startTime, system, status, propagatedFrom, errorMsg, stopTime, machine) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) on conflict do nothing",
buildId,
auto r = txn.exec("insert into BuildSteps (build, stepnr, type, drvPath, busy, startTime, system, status, propagatedFrom, errorMsg, stopTime, machine) values ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12) on conflict do nothing",
pqxx::params{buildId,
stepNr,
0, // == build
localStore->printStorePath(step->drvPath),
@@ -288,17 +308,16 @@ unsigned int State::createBuildStep(pqxx::work & txn, time_t startTime, BuildID
propagatedFrom != 0 ? std::make_optional(propagatedFrom) : std::nullopt, // internal::params
errorMsg != "" ? std::make_optional(errorMsg) : std::nullopt,
startTime != 0 && status != bsBusy ? std::make_optional(startTime) : std::nullopt,
machine);
machine});
if (r.affected_rows() == 0) goto restart;
for (auto & [name, output] : getDestStore()->queryPartialDerivationOutputMap(step->drvPath, &*localStore))
txn.exec_params0
("insert into BuildStepOutputs (build, stepnr, name, path) values ($1, $2, $3, $4)",
buildId, stepNr, name,
txn.exec("insert into BuildStepOutputs (build, stepnr, name, path) values ($1, $2, $3, $4)",
pqxx::params{buildId, stepNr, name,
output
? std::optional { localStore->printStorePath(*output)}
: std::nullopt);
: std::nullopt}).no_rows();
if (status == bsBusy)
txn.exec(fmt("notify step_started, '%d\t%d'", buildId, stepNr));
@@ -309,11 +328,10 @@ unsigned int State::createBuildStep(pqxx::work & txn, time_t startTime, BuildID
void State::updateBuildStep(pqxx::work & txn, BuildID buildId, unsigned int stepNr, StepState stepState)
{
if (txn.exec_params
("update BuildSteps set busy = $1 where build = $2 and stepnr = $3 and busy != 0 and status is null",
(int) stepState,
if (txn.exec("update BuildSteps set busy = $1 where build = $2 and stepnr = $3 and busy != 0 and status is null",
pqxx::params{(int) stepState,
buildId,
stepNr).affected_rows() != 1)
stepNr}).affected_rows() != 1)
throw Error("step %d of build %d is in an unexpected state", stepNr, buildId);
}
@@ -323,29 +341,27 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,
{
assert(result.startTime);
assert(result.stopTime);
txn.exec_params0
("update BuildSteps set busy = 0, status = $1, errorMsg = $4, startTime = $5, stopTime = $6, machine = $7, overhead = $8, timesBuilt = $9, isNonDeterministic = $10 where build = $2 and stepnr = $3",
(int) result.stepStatus, buildId, stepNr,
txn.exec("update BuildSteps set busy = 0, status = $1, errorMsg = $4, startTime = $5, stopTime = $6, machine = $7, overhead = $8, timesBuilt = $9, isNonDeterministic = $10 where build = $2 and stepnr = $3",
pqxx::params{(int) result.stepStatus, buildId, stepNr,
result.errorMsg != "" ? std::make_optional(result.errorMsg) : std::nullopt,
result.startTime, result.stopTime,
machine != "" ? std::make_optional(machine) : std::nullopt,
result.overhead != 0 ? std::make_optional(result.overhead) : std::nullopt,
result.timesBuilt > 0 ? std::make_optional(result.timesBuilt) : std::nullopt,
result.timesBuilt > 1 ? std::make_optional(result.isNonDeterministic) : std::nullopt);
result.timesBuilt > 1 ? std::make_optional(result.isNonDeterministic) : std::nullopt}).no_rows();
assert(result.logFile.find('\t') == std::string::npos);
txn.exec(fmt("notify step_finished, '%d\t%d\t%s'",
buildId, stepNr, result.logFile));
if (result.stepStatus == bsSuccess) {
// Update the corresponding `BuildStepOutputs` row to add the output path
auto res = txn.exec_params1("select drvPath from BuildSteps where build = $1 and stepnr = $2", buildId, stepNr);
auto res = txn.exec("select drvPath from BuildSteps where build = $1 and stepnr = $2", pqxx::params{buildId, stepNr}).one_row();
assert(res.size());
StorePath drvPath = localStore->parseStorePath(res[0].as<std::string>());
// If we've finished building, all the paths should be known
for (auto & [name, output] : getDestStore()->queryDerivationOutputMap(drvPath, &*localStore))
txn.exec_params0
("update BuildStepOutputs set path = $4 where build = $1 and stepnr = $2 and name = $3",
buildId, stepNr, name, localStore->printStorePath(output));
txn.exec("update BuildStepOutputs set path = $4 where build = $1 and stepnr = $2 and name = $3",
pqxx::params{buildId, stepNr, name, localStore->printStorePath(output)}).no_rows();
}
}
@@ -356,23 +372,21 @@ int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t sto
restart:
auto stepNr = allocBuildStep(txn, build->id);
auto r = txn.exec_params
("insert into BuildSteps (build, stepnr, type, drvPath, busy, status, startTime, stopTime) values ($1, $2, $3, $4, $5, $6, $7, $8) on conflict do nothing",
build->id,
auto r = txn.exec("insert into BuildSteps (build, stepnr, type, drvPath, busy, status, startTime, stopTime) values ($1, $2, $3, $4, $5, $6, $7, $8) on conflict do nothing",
pqxx::params{build->id,
stepNr,
1, // == substitution
(localStore->printStorePath(drvPath)),
0,
0,
startTime,
stopTime);
stopTime});
if (r.affected_rows() == 0) goto restart;
txn.exec_params0
("insert into BuildStepOutputs (build, stepnr, name, path) values ($1, $2, $3, $4)",
build->id, stepNr, outputName,
localStore->printStorePath(storePath));
txn.exec("insert into BuildStepOutputs (build, stepnr, name, path) values ($1, $2, $3, $4)",
pqxx::params{build->id, stepNr, outputName,
localStore->printStorePath(storePath)}).no_rows();
return stepNr;
}
@@ -439,35 +453,32 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
{
if (build->finishedInDB) return;
if (txn.exec_params("select 1 from Builds where id = $1 and finished = 0", build->id).empty()) return;
if (txn.exec("select 1 from Builds where id = $1 and finished = 0", pqxx::params{build->id}).empty()) return;
txn.exec_params0
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $4, size = $5, closureSize = $6, releaseName = $7, isCachedBuild = $8, notificationPendingSince = $4 where id = $1",
build->id,
txn.exec("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $4, size = $5, closureSize = $6, releaseName = $7, isCachedBuild = $8, notificationPendingSince = $4 where id = $1",
pqxx::params{build->id,
(int) (res.failed ? bsFailedWithOutput : bsSuccess),
startTime,
stopTime,
res.size,
res.closureSize,
res.releaseName != "" ? std::make_optional(res.releaseName) : std::nullopt,
isCachedBuild ? 1 : 0);
isCachedBuild ? 1 : 0}).no_rows();
for (auto & [outputName, outputPath] : res.outputs) {
txn.exec_params0
("update BuildOutputs set path = $3 where build = $1 and name = $2",
build->id,
txn.exec("update BuildOutputs set path = $3 where build = $1 and name = $2",
pqxx::params{build->id,
outputName,
localStore->printStorePath(outputPath)
);
localStore->printStorePath(outputPath)}
).no_rows();
}
txn.exec_params0("delete from BuildProducts where build = $1", build->id);
txn.exec("delete from BuildProducts where build = $1", pqxx::params{build->id}).no_rows();
unsigned int productNr = 1;
for (auto & product : res.products) {
txn.exec_params0
("insert into BuildProducts (build, productnr, type, subtype, fileSize, sha256hash, path, name, defaultPath) values ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
build->id,
txn.exec("insert into BuildProducts (build, productnr, type, subtype, fileSize, sha256hash, path, name, defaultPath) values ($1, $2, $3, $4, $5, $6, $7, $8, $9)",
pqxx::params{build->id,
productNr++,
product.type,
product.subtype,
@@ -475,22 +486,21 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(HashFormat::Base16, false)) : std::nullopt,
product.path,
product.name,
product.defaultPath);
product.defaultPath}).no_rows();
}
txn.exec_params0("delete from BuildMetrics where build = $1", build->id);
txn.exec("delete from BuildMetrics where build = $1", pqxx::params{build->id}).no_rows();
for (auto & metric : res.metrics) {
txn.exec_params0
("insert into BuildMetrics (build, name, unit, value, project, jobset, job, timestamp) values ($1, $2, $3, $4, $5, $6, $7, $8)",
build->id,
txn.exec("insert into BuildMetrics (build, name, unit, value, project, jobset, job, timestamp) values ($1, $2, $3, $4, $5, $6, $7, $8)",
pqxx::params{build->id,
metric.second.name,
metric.second.unit != "" ? std::make_optional(metric.second.unit) : std::nullopt,
metric.second.value,
build->projectName,
build->jobsetName,
build->jobName,
build->timestamp);
build->timestamp}).no_rows();
}
nrBuildsDone++;
@@ -502,7 +512,7 @@ bool State::checkCachedFailure(Step::ptr step, Connection & conn)
pqxx::work txn(conn);
for (auto & i : step->drv->outputsAndOptPaths(*localStore))
if (i.second.second)
if (!txn.exec_params("select 1 from FailedPaths where path = $1", localStore->printStorePath(*i.second.second)).empty())
if (!txn.exec("select 1 from FailedPaths where path = $1", pqxx::params{localStore->printStorePath(*i.second.second)}).empty())
return true;
return false;
}
@@ -551,6 +561,7 @@ void State::dumpStatus(Connection & conn)
{"nrActiveSteps", activeSteps_.lock()->size()},
{"nrStepsBuilding", nrStepsBuilding.load()},
{"nrStepsCopyingTo", nrStepsCopyingTo.load()},
{"nrStepsWaitingForDownloadSlot", nrStepsWaitingForDownloadSlot.load()},
{"nrStepsCopyingFrom", nrStepsCopyingFrom.load()},
{"nrStepsWaiting", nrStepsWaiting.load()},
{"nrUnsupportedSteps", nrUnsupportedSteps.load()},
@@ -592,6 +603,7 @@ void State::dumpStatus(Connection & conn)
}
{
auto machines_json = json::object();
auto machines_(machines.lock());
for (auto & i : *machines_) {
auto & m(i.second);
@@ -618,8 +630,9 @@ void State::dumpStatus(Connection & conn)
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
}
statusJson["machines"][m->storeUri.render()] = machine;
machines_json[m->storeUri.render()] = machine;
}
statusJson["machines"] = machines_json;
}
{
@@ -678,6 +691,7 @@ void State::dumpStatus(Connection & conn)
: 0.0},
};
#if NIX_WITH_S3_SUPPORT
auto s3Store = dynamic_cast<S3BinaryCacheStore *>(&*store);
if (s3Store) {
auto & s3Stats = s3Store->getS3Stats();
@@ -703,14 +717,15 @@ void State::dumpStatus(Connection & conn)
+ s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
};
}
#endif
}
{
auto mc = startDbUpdate();
pqxx::work txn(conn);
// FIXME: use PostgreSQL 9.5 upsert.
txn.exec("delete from SystemStatus where what = 'queue-runner'");
txn.exec_params0("insert into SystemStatus values ('queue-runner', $1)", statusJson.dump());
txn.exec("delete from SystemStatus where what = 'queue-runner'").no_rows();
txn.exec("insert into SystemStatus values ('queue-runner', $1)", pqxx::params{statusJson.dump()}).no_rows();
txn.exec("notify status_dumped");
txn.commit();
}
@@ -775,7 +790,7 @@ void State::unlock()
{
pqxx::work txn(*conn);
txn.exec("delete from SystemStatus where what = 'queue-runner'");
txn.exec("delete from SystemStatus where what = 'queue-runner'").no_rows();
txn.commit();
}
}
@@ -805,7 +820,7 @@ void State::run(BuildID buildOne)
<< metricsAddr << "/metrics (port " << exposerPort << ")"
<< std::endl;
Store::Params localParams;
Store::Config::Params localParams;
localParams["max-connections"] = "16";
localParams["max-connection-age"] = "600";
localStore = openStore(getEnv("NIX_REMOTE").value_or(""), localParams);
@@ -853,11 +868,10 @@ void State::run(BuildID buildOne)
pqxx::work txn(*conn);
for (auto & step : steps) {
printMsg(lvlError, "cleaning orphaned step %d of build %d", step.second, step.first);
txn.exec_params0
("update BuildSteps set busy = 0, status = $1 where build = $2 and stepnr = $3 and busy != 0",
(int) bsAborted,
txn.exec("update BuildSteps set busy = 0, status = $1 where build = $2 and stepnr = $3 and busy != 0",
pqxx::params{(int) bsAborted,
step.first,
step.second);
step.second}).no_rows();
}
txn.commit();
} catch (std::exception & e) {

View File

@@ -13,7 +13,9 @@ hydra_queue_runner = executable('hydra-queue-runner',
srcs,
dependencies: [
libhydra_dep,
nix_dep,
nix_util_dep,
nix_store_dep,
nix_main_dep,
pqxx_dep,
prom_cpp_core_dep,
prom_cpp_pull_dep,

View File

@@ -1,6 +1,6 @@
#include "nar-extractor.hh"
#include "archive.hh"
#include <nix/util/archive.hh>
#include <unordered_set>

View File

@@ -1,9 +1,9 @@
#pragma once
#include "source-accessor.hh"
#include "types.hh"
#include "serialise.hh"
#include "hash.hh"
#include <nix/util/source-accessor.hh>
#include <nix/util/types.hh>
#include <nix/util/serialise.hh>
#include <nix/util/hash.hh>
struct NarMemberData
{

View File

@@ -1,8 +1,11 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "globals.hh"
#include <nix/store/globals.hh>
#include <nix/store/parsed-derivations.hh>
#include <nix/util/thread-pool.hh>
#include <cstring>
#include <signal.h>
using namespace nix;
@@ -37,16 +40,21 @@ void State::queueMonitorLoop(Connection & conn)
auto destStore = getDestStore();
unsigned int lastBuildId = 0;
bool quit = false;
while (!quit) {
auto t_before_work = std::chrono::steady_clock::now();
localStore->clearPathInfoCache();
bool done = getQueuedBuilds(conn, destStore, lastBuildId);
bool done = getQueuedBuilds(conn, destStore);
if (buildOne && buildOneDone) quit = true;
auto t_after_work = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
/* Sleep until we get notification from the database about an
event. */
if (done && !quit) {
@@ -56,12 +64,10 @@ void State::queueMonitorLoop(Connection & conn)
conn.get_notifs();
if (auto lowestId = buildsAdded.get()) {
lastBuildId = std::min(lastBuildId, static_cast<unsigned>(std::stoul(*lowestId) - 1));
printMsg(lvlTalkative, "got notification: new builds added to the queue");
}
if (buildsRestarted.get()) {
printMsg(lvlTalkative, "got notification: builds restarted");
lastBuildId = 0; // check all builds
}
if (buildsCancelled.get() || buildsDeleted.get() || buildsBumped.get()) {
printMsg(lvlTalkative, "got notification: builds cancelled or bumped");
@@ -71,6 +77,10 @@ void State::queueMonitorLoop(Connection & conn)
printMsg(lvlTalkative, "got notification: jobset shares changed");
processJobsetSharesChange(conn);
}
auto t_after_sleep = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
}
exit(0);
@@ -84,39 +94,31 @@ struct PreviousFailure : public std::exception {
bool State::getQueuedBuilds(Connection & conn,
ref<Store> destStore, unsigned int & lastBuildId)
ref<Store> destStore)
{
prom.queue_checks_started.Increment();
printInfo("checking the queue for builds > %d...", lastBuildId);
printInfo("checking the queue for builds...");
/* Grab the queued builds from the database, but don't process
them yet (since we don't want a long-running transaction). */
std::vector<BuildID> newIDs;
std::map<BuildID, Build::ptr> newBuildsByID;
std::unordered_map<BuildID, Build::ptr> newBuildsByID;
std::multimap<StorePath, BuildID> newBuildsByPath;
unsigned int newLastBuildId = lastBuildId;
{
pqxx::work txn(conn);
auto res = txn.exec_params
("select builds.id, builds.jobset_id, jobsets.project as project, "
auto res = txn.exec("select builds.id, builds.jobset_id, jobsets.project as project, "
"jobsets.name as jobset, job, drvPath, maxsilent, timeout, timestamp, "
"globalPriority, priority from Builds "
"inner join jobsets on builds.jobset_id = jobsets.id "
"where builds.id > $1 and finished = 0 order by globalPriority desc, builds.id",
lastBuildId);
"where finished = 0 order by globalPriority desc, random()");
for (auto const & row : res) {
auto builds_(builds.lock());
BuildID id = row["id"].as<BuildID>();
if (buildOne && id != buildOne) continue;
if (id > newLastBuildId) {
newLastBuildId = id;
prom.queue_max_id.Set(id);
}
if (builds_->count(id)) continue;
auto build = std::make_shared<Build>(
@@ -156,11 +158,10 @@ bool State::getQueuedBuilds(Connection & conn,
if (!build->finishedInDB) {
auto mc = startDbUpdate();
pqxx::work txn(conn);
txn.exec_params0
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3 where id = $1 and finished = 0",
build->id,
txn.exec("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3 where id = $1 and finished = 0",
pqxx::params{build->id,
(int) bsAborted,
time(0));
time(0)}).no_rows();
txn.commit();
build->finishedInDB = true;
nrBuildsDone++;
@@ -190,22 +191,20 @@ bool State::getQueuedBuilds(Connection & conn,
derivation path, then by output path. */
BuildID propagatedFrom = 0;
auto res = txn.exec_params1
("select max(build) from BuildSteps where drvPath = $1 and startTime != 0 and stopTime != 0 and status = 1",
localStore->printStorePath(ex.step->drvPath));
auto res = txn.exec("select max(build) from BuildSteps where drvPath = $1 and startTime != 0 and stopTime != 0 and status = 1",
pqxx::params{localStore->printStorePath(ex.step->drvPath)}).one_row();
if (!res[0].is_null()) propagatedFrom = res[0].as<BuildID>();
if (!propagatedFrom) {
for (auto & [outputName, optOutputPath] : destStore->queryPartialDerivationOutputMap(ex.step->drvPath, &*localStore)) {
constexpr std::string_view common = "select max(s.build) from BuildSteps s join BuildStepOutputs o on s.build = o.build where startTime != 0 and stopTime != 0 and status = 1";
auto res = optOutputPath
? txn.exec_params(
? txn.exec(
std::string { common } + " and path = $1",
localStore->printStorePath(*optOutputPath))
: txn.exec_params(
pqxx::params{localStore->printStorePath(*optOutputPath)})
: txn.exec(
std::string { common } + " and drvPath = $1 and name = $2",
localStore->printStorePath(ex.step->drvPath),
outputName);
pqxx::params{localStore->printStorePath(ex.step->drvPath), outputName});
if (!res[0][0].is_null()) {
propagatedFrom = res[0][0].as<BuildID>();
break;
@@ -214,12 +213,11 @@ bool State::getQueuedBuilds(Connection & conn,
}
createBuildStep(txn, 0, build->id, ex.step, "", bsCachedFailure, "", propagatedFrom);
txn.exec_params
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3, isCachedBuild = 1, notificationPendingSince = $3 "
txn.exec("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3, isCachedBuild = 1, notificationPendingSince = $3 "
"where id = $1 and finished = 0",
build->id,
pqxx::params{build->id,
(int) (ex.step->drvPath == build->drvPath ? bsFailed : bsDepFailed),
time(0));
time(0)}).no_rows();
notifyBuildFinished(txn, build->id, {});
txn.commit();
build->finishedInDB = true;
@@ -318,15 +316,13 @@ bool State::getQueuedBuilds(Connection & conn,
/* Stop after a certain time to allow priority bumps to be
processed. */
if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) {
if (std::chrono::system_clock::now() > start + std::chrono::seconds(60)) {
prom.queue_checks_early_exits.Increment();
break;
}
}
prom.queue_checks_finished.Increment();
lastBuildId = newBuildsByID.empty() ? newLastBuildId : newBuildsByID.begin()->first - 1;
return newBuildsByID.empty();
}
@@ -405,6 +401,34 @@ void State::processQueueChange(Connection & conn)
}
std::map<DrvOutput, std::optional<StorePath>> State::getMissingRemotePaths(
ref<Store> destStore,
const std::map<DrvOutput, std::optional<StorePath>> & paths)
{
Sync<std::map<DrvOutput, std::optional<StorePath>>> missing_;
ThreadPool tp;
for (auto & [output, maybeOutputPath] : paths) {
if (!maybeOutputPath) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
} else {
tp.enqueue([&] {
if (!destStore->isValidPath(*maybeOutputPath)) {
auto missing(missing_.lock());
missing->insert({output, maybeOutputPath});
}
});
}
}
tp.process();
auto missing(missing_.lock());
return *missing;
}
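getMissingRemotePaths trades N sequential isValidPath round-trips for concurrent ones: outputs with no known path are missing by definition, the rest are checked in parallel, and a lock guards the shared result map. A sketch of the same fan-out using std::async and std::mutex in place of Nix's ThreadPool and Sync (the slow predicate stands in for the remote store; all names are illustrative):

#include <chrono>
#include <cstdio>
#include <future>
#include <map>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>

// Stand-in for destStore->isValidPath(): pretend each check is a slow round-trip.
bool isValidOnRemote(const std::string & path) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return path == "present";
}

int main() {
    std::map<std::string, std::optional<std::string>> paths = {
        {"out", "present"}, {"dev", "absent"}, {"doc", std::nullopt},
    };
    std::map<std::string, std::optional<std::string>> missing;
    std::mutex missingMutex;  // stands in for nix::Sync
    std::vector<std::future<void>> jobs;

    for (const auto & entry : paths) {
        const auto name = entry.first;
        const auto maybePath = entry.second;
        if (!maybePath) {
            // No known output path at all: missing by definition.
            std::lock_guard<std::mutex> lock(missingMutex);
            missing.insert({name, maybePath});
        } else {
            jobs.push_back(std::async(std::launch::async, [&, name, maybePath] {
                if (!isValidOnRemote(*maybePath)) {
                    std::lock_guard<std::mutex> lock(missingMutex);
                    missing.insert({name, maybePath});
                }
            }));
        }
    }
    for (auto & j : jobs) j.get();  // like tp.process(): wait for all checks
    for (const auto & m : missing) std::printf("missing: %s\n", m.first.c_str());
}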
Step::ptr State::createStep(ref<Store> destStore,
Connection & conn, Build::ptr build, const StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
@@ -463,14 +487,23 @@ Step::ptr State::createStep(ref<Store> destStore,
it's not runnable yet, and other threads won't make it
runnable while step->created == false. */
step->drv = std::make_unique<Derivation>(localStore->readDerivation(drvPath));
step->parsedDrv = std::make_unique<ParsedDerivation>(drvPath, *step->drv);
{
auto parsedOpt = StructuredAttrs::tryParse(step->drv->env);
try {
step->drvOptions = std::make_unique<DerivationOptions>(
DerivationOptions::fromStructuredAttrs(step->drv->env, parsedOpt ? &*parsedOpt : nullptr));
} catch (Error & e) {
e.addTrace({}, "while parsing derivation '%s'", localStore->printStorePath(drvPath));
throw;
}
}
step->preferLocalBuild = step->parsedDrv->willBuildLocally(*localStore);
step->preferLocalBuild = step->drvOptions->willBuildLocally(*localStore, *step->drv);
step->isDeterministic = getOr(step->drv->env, "isDetermistic", "0") == "1";
step->systemType = step->drv->platform;
{
StringSet features = step->requiredSystemFeatures = step->parsedDrv->getRequiredSystemFeatures();
StringSet features = step->requiredSystemFeatures = step->drvOptions->getRequiredSystemFeatures(*step->drv);
if (step->preferLocalBuild)
features.insert("local");
if (!features.empty()) {
@@ -485,16 +518,15 @@ Step::ptr State::createStep(ref<Store> destStore,
/* Are all outputs valid? */
auto outputHashes = staticOutputHashes(*localStore, *(step->drv));
bool valid = true;
std::map<DrvOutput, std::optional<StorePath>> missing;
std::map<DrvOutput, std::optional<StorePath>> paths;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
if (maybeOutputPath && destStore->isValidPath(*maybeOutputPath))
continue;
valid = false;
missing.insert({{outputHash, outputName}, maybeOutputPath});
paths.insert({{outputHash, outputName}, maybeOutputPath});
}
auto missing = getMissingRemotePaths(destStore, paths);
bool valid = missing.empty();
/* Try to copy the missing paths from the local store or from
substitutes. */
if (!missing.empty()) {
@@ -617,10 +649,8 @@ Jobset::ptr State::createJobset(pqxx::work & txn,
if (i != jobsets_->end()) return i->second;
}
auto res = txn.exec_params1
("select schedulingShares from Jobsets where id = $1",
jobsetID);
if (res.empty()) throw Error("missing jobset - can't happen");
auto res = txn.exec("select schedulingShares from Jobsets where id = $1",
pqxx::params{jobsetID}).one_row();
auto shares = res["schedulingShares"].as<unsigned int>();
@@ -628,11 +658,10 @@ Jobset::ptr State::createJobset(pqxx::work & txn,
jobset->setShares(shares);
/* Load the build steps from the last 24 hours. */
auto res2 = txn.exec_params
("select s.startTime, s.stopTime from BuildSteps s join Builds b on build = id "
auto res2 = txn.exec("select s.startTime, s.stopTime from BuildSteps s join Builds b on build = id "
"where s.startTime is not null and s.stopTime > $1 and jobset_id = $2",
time(0) - Jobset::schedulingWindow * 10,
jobsetID);
pqxx::params{time(0) - Jobset::schedulingWindow * 10,
jobsetID});
for (auto const & row : res2) {
time_t startTime = row["startTime"].as<time_t>();
time_t stopTime = row["stopTime"].as<time_t>();
@@ -669,11 +698,10 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
pqxx::work txn(conn);
for (auto & [name, output] : derivationOutputs) {
auto r = txn.exec_params
("select id, buildStatus, releaseName, closureSize, size from Builds b "
auto r = txn.exec("select id, buildStatus, releaseName, closureSize, size from Builds b "
"join BuildOutputs o on b.id = o.build "
"where finished = 1 and (buildStatus = 0 or buildStatus = 6) and path = $1",
localStore->printStorePath(output));
pqxx::params{localStore->printStorePath(output)});
if (r.empty()) continue;
BuildID id = r[0][0].as<BuildID>();
@@ -685,9 +713,8 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
res.closureSize = r[0][3].is_null() ? 0 : r[0][3].as<uint64_t>();
res.size = r[0][4].is_null() ? 0 : r[0][4].as<uint64_t>();
auto products = txn.exec_params
("select type, subtype, fileSize, sha256hash, path, name, defaultPath from BuildProducts where build = $1 order by productnr",
id);
auto products = txn.exec("select type, subtype, fileSize, sha256hash, path, name, defaultPath from BuildProducts where build = $1 order by productnr",
pqxx::params{id});
for (auto row : products) {
BuildProduct product;
@@ -709,9 +736,8 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
res.products.emplace_back(product);
}
auto metrics = txn.exec_params
("select name, unit, value from BuildMetrics where build = $1",
id);
auto metrics = txn.exec("select name, unit, value from BuildMetrics where build = $1",
pqxx::params{id});
for (auto row : metrics) {
BuildMetric metric;

View File

@@ -6,6 +6,8 @@
#include <map>
#include <memory>
#include <queue>
#include <regex>
#include <semaphore>
#include <prometheus/counter.h>
#include <prometheus/gauge.h>
@@ -13,15 +15,18 @@
#include "db.hh"
#include "parsed-derivations.hh"
#include "pathlocks.hh"
#include "pool.hh"
#include "build-result.hh"
#include "store-api.hh"
#include "sync.hh"
#include <nix/store/derivations.hh>
#include <nix/store/derivation-options.hh>
#include <nix/store/pathlocks.hh>
#include <nix/util/pool.hh>
#include <nix/store/build-result.hh>
#include <nix/store/store-api.hh>
#include <nix/util/sync.hh>
#include "nar-extractor.hh"
#include "legacy-ssh-store.hh"
#include "machines.hh"
#include <nix/store/serve-protocol.hh>
#include <nix/store/serve-protocol-impl.hh>
#include <nix/store/serve-protocol-connection.hh>
#include <nix/store/machines.hh>
typedef unsigned int BuildID;
@@ -55,6 +60,7 @@ typedef enum {
ssConnecting = 10,
ssSendingInputs = 20,
ssBuilding = 30,
ssWaitingForLocalSlot = 35,
ssReceivingOutputs = 40,
ssPostProcessing = 50,
} StepState;
@@ -165,8 +171,8 @@ struct Step
nix::StorePath drvPath;
std::unique_ptr<nix::Derivation> drv;
std::unique_ptr<nix::ParsedDerivation> parsedDrv;
std::set<std::string> requiredSystemFeatures;
std::unique_ptr<nix::DerivationOptions> drvOptions;
nix::StringSet requiredSystemFeatures;
bool preferLocalBuild;
bool isDeterministic;
std::string systemType; // concatenation of drv.platform and requiredSystemFeatures
@@ -290,11 +296,9 @@ struct Machine : nix::Machine
bool isLocalhost() const;
// A connection to a machine
struct Connection {
struct Connection : nix::ServeProto::BasicClientConnection {
// Backpointer to the machine
ptr machine;
// Opened store
nix::ref<nix::LegacySSHStore> store;
};
};
@@ -352,6 +356,10 @@ private:
typedef std::map<nix::StoreReference::Variant, Machine::ptr> Machines;
nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr
/* Throttler for CPU-bound local work. */
static constexpr unsigned int maxSupportedLocalWorkers = 1024;
std::counting_semaphore<maxSupportedLocalWorkers> localWorkThrottler;
/* Various stats. */
time_t startedAt;
counter nrBuildsRead{0};
@@ -361,6 +369,7 @@ private:
counter nrStepsDone{0};
counter nrStepsBuilding{0};
counter nrStepsCopyingTo{0};
counter nrStepsWaitingForDownloadSlot{0};
counter nrStepsCopyingFrom{0};
counter nrStepsWaiting{0};
counter nrUnsupportedSteps{0};
@@ -391,7 +400,6 @@ private:
struct MachineReservation
{
typedef std::shared_ptr<MachineReservation> ptr;
State & state;
Step::ptr step;
Machine::ptr machine;
@@ -449,7 +457,12 @@ private:
prometheus::Counter& queue_steps_created;
prometheus::Counter& queue_checks_early_exits;
prometheus::Counter& queue_checks_finished;
prometheus::Gauge& queue_max_id;
prometheus::Counter& dispatcher_time_spent_running;
prometheus::Counter& dispatcher_time_spent_waiting;
prometheus::Counter& queue_monitor_time_spent_running;
prometheus::Counter& queue_monitor_time_spent_waiting;
PromMetrics();
};
@@ -493,8 +506,7 @@ private:
void queueMonitorLoop(Connection & conn);
/* Check the queue for new builds. */
bool getQueuedBuilds(Connection & conn,
nix::ref<nix::Store> destStore, unsigned int & lastBuildId);
bool getQueuedBuilds(Connection & conn, nix::ref<nix::Store> destStore);
/* Handle cancellation, deletion and priority bumps. */
void processQueueChange(Connection & conn);
@@ -502,6 +514,12 @@ private:
BuildOutput getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore,
const nix::StorePath & drvPath);
/* Returns paths missing from the remote store. Paths are processed in
* parallel to work around the possible latency of remote stores. */
std::map<nix::DrvOutput, std::optional<nix::StorePath>> getMissingRemotePaths(
nix::ref<nix::Store> destStore,
const std::map<nix::DrvOutput, std::optional<nix::StorePath>> & paths);
Step::ptr createStep(nix::ref<nix::Store> store,
Connection & conn, Build::ptr build, const nix::StorePath & drvPath,
Build::ptr referringBuild, Step::ptr referringStep, std::set<nix::StorePath> & finishedDrvs,
@@ -531,16 +549,17 @@ private:
void abortUnsupported();
void builder(MachineReservation::ptr reservation);
void builder(std::unique_ptr<MachineReservation> reservation);
/* Perform the given build step. Return true if the step is to be
retried. */
enum StepResult { sDone, sRetry, sMaybeCancelled };
StepResult doBuildStep(nix::ref<nix::Store> destStore,
MachineReservation::ptr reservation,
std::unique_ptr<MachineReservation> reservation,
std::shared_ptr<ActiveStep> activeStep);
void buildRemote(nix::ref<nix::Store> destStore,
std::unique_ptr<MachineReservation> reservation,
Machine::ptr machine, Step::ptr step,
const nix::ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,

View File

@@ -238,7 +238,7 @@ sub serveFile {
# XSS hole.
$c->response->header('Content-Security-Policy' => 'sandbox allow-scripts');
$c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
$c->stash->{'plain'} = { data => readIntoSocket(cmd => ["nix", "--experimental-features", "nix-command",
"store", "cat", "--store", getStoreUri(), "$path"]) };
# Detect MIME type.

View File

@@ -364,6 +364,21 @@ sub evals_GET {
);
}
sub errors :Chained('jobsetChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
my $jobsetName = $c->stash->{params}->{name};
$c->stash->{jobset} = $c->stash->{project}->jobsets->find(
{ name => $jobsetName },
{ '+columns' => { 'errormsg' => 'errormsg' } }
);
$self->status_ok($c, entity => $c->stash->{jobset});
}
# Redirect to the latest finished evaluation of this jobset.
sub latest_eval : Chained('jobsetChain') PathPart('latest-eval') {

View File

@@ -76,7 +76,9 @@ sub view_GET {
$c->stash->{removed} = $diff->{removed};
$c->stash->{unfinished} = $diff->{unfinished};
$c->stash->{aborted} = $diff->{aborted};
$c->stash->{failed} = $diff->{failed};
$c->stash->{totalAborted} = $diff->{totalAborted};
$c->stash->{totalFailed} = $diff->{totalFailed};
$c->stash->{totalQueued} = $diff->{totalQueued};
$c->stash->{full} = ($c->req->params->{full} || "0") eq "1";
@@ -86,6 +88,17 @@ sub view_GET {
);
}
sub errors :Chained('evalChain') :PathPart('errors') :Args(0) :ActionClass('REST') { }
sub errors_GET {
my ($self, $c) = @_;
$c->stash->{template} = 'eval-error.tt';
$c->stash->{eval} = $c->model('DB::JobsetEvals')->find($c->stash->{eval}->id, { prefetch => 'evaluationerror' });
$self->status_ok($c, entity => $c->stash->{eval});
}
sub create_jobset : Chained('evalChain') PathPart('create-jobset') Args(0) {
my ($self, $c) = @_;

View File

@@ -9,6 +9,7 @@ use Hydra::Helper::CatalystUtils;
use Hydra::View::TT;
use Nix::Store;
use Nix::Config;
use Number::Bytes::Human qw(format_bytes);
use Encode;
use File::Basename;
use JSON::MaybeXS;
@@ -57,6 +58,7 @@ sub begin :Private {
$c->stash->{tracker} = defined $c->config->{tracker} ? $c->config->{tracker} : "";
$c->stash->{flashMsg} = $c->flash->{flashMsg};
$c->stash->{successMsg} = $c->flash->{successMsg};
$c->stash->{localStore} = isLocalStore;
$c->stash->{isPrivateHydra} = $c->config->{private} // "0" ne "0";
@@ -162,7 +164,7 @@ sub status_GET {
{ "buildsteps.busy" => { '!=', 0 } },
{ order_by => ["globalpriority DESC", "id"],
join => "buildsteps",
columns => [@buildListColumns]
columns => [@buildListColumns, 'buildsteps.drvpath', 'buildsteps.type']
})]
);
}
@@ -188,8 +190,10 @@ sub machines :Local Args(0) {
my ($self, $c) = @_;
my $machines = getMachines;
# Add entry for localhost.
$machines->{''} //= {};
# Add entry for localhost. The implicit addition is not needed with queue runner v2
if (not $c->config->{'queue_runner_endpoint'}) {
$machines->{''} //= {};
}
delete $machines->{'localhost'};
my $status = $c->model('DB::SystemStatus')->find("queue-runner");
@@ -197,9 +201,11 @@ sub machines :Local Args(0) {
my $ms = decode_json($status->status)->{"machines"};
foreach my $name (keys %{$ms}) {
$name = "" if $name eq "localhost";
$machines->{$name} //= {disabled => 1};
$machines->{$name}->{nrStepsDone} = $ms->{$name}->{nrStepsDone};
$machines->{$name}->{avgStepBuildTime} = $ms->{$name}->{avgStepBuildTime} // 0;
my $outName = $name;
$outName = "" if $name eq "ssh://localhost";
$machines->{$outName} //= {disabled => 1};
$machines->{$outName}->{nrStepsDone} = $ms->{$name}->{nrStepsDone};
$machines->{$outName}->{avgStepBuildTime} = $ms->{$name}->{avgStepBuildTime} // 0;
}
}
@@ -212,6 +218,19 @@ sub machines :Local Args(0) {
"where busy != 0 order by machine, stepnr",
{ Slice => {} });
$c->stash->{template} = 'machine-status.tt';
$c->stash->{human_bytes} = sub {
my ($bytes) = @_;
return format_bytes($bytes, si => 1);
};
$c->stash->{pretty_load} = sub {
my ($load) = @_;
return sprintf('%.2f', $load);
};
$c->stash->{pretty_percent} = sub {
my ($percent) = @_;
my $ret = sprintf('%.2f', $percent);
return ('&nbsp;' x (6 - length($ret))) . $ret;
};
$self->status_ok($c, entity => $c->stash->{machines});
}

View File

@@ -32,12 +32,26 @@ sub buildDiff {
removed => [],
unfinished => [],
aborted => [],
failed => [],
# These summary counters cut across the categories to determine whether
# actions such as "Restart all failed" or "Bump queue" are available.
totalAborted => 0,
totalFailed => 0,
totalQueued => 0,
};
my $n = 0;
foreach my $build (@{$builds}) {
my $aborted = $build->finished != 0 && ($build->buildstatus == 3 || $build->buildstatus == 4);
my $aborted = $build->finished != 0 && (
# aborted
$build->buildstatus == 3
# cancelled
|| $build->buildstatus == 4
# timeout
|| $build->buildstatus == 7
# log limit exceeded
|| $build->buildstatus == 10
);
my $d;
my $found = 0;
while ($n < scalar(@{$builds2})) {
@@ -71,12 +85,19 @@ sub buildDiff {
} else {
push @{$ret->{new}}, $build if !$found;
}
if (defined $build->buildstatus && $build->buildstatus != 0) {
push @{$ret->{failed}}, $build;
if ($build->finished != 0 && $build->buildstatus != 0) {
if ($aborted) {
++$ret->{totalAborted};
} else {
++$ret->{totalFailed};
}
} elsif ($build->finished == 0) {
++$ret->{totalQueued};
}
}
return $ret;
}
1;
1;

View File

@@ -12,6 +12,8 @@ use Nix::Store;
use Encode;
use Sys::Hostname::Long;
use IPC::Run;
use LWP::UserAgent;
use JSON::MaybeXS;
use UUID4::Tiny qw(is_uuid4_string);
our @ISA = qw(Exporter);
@@ -36,6 +38,7 @@ our @EXPORT = qw(
jobsetOverview
jobsetOverview_
pathIsInsidePrefix
readIntoSocket
readNixFile
registerRoot
restartBuilds
@@ -296,8 +299,7 @@ sub getEvals {
my @evals = $evals_result_set->search(
{ hasnewbuilds => 1 },
{ order_by => "$me.id DESC", rows => $rows, offset => $offset
, prefetch => { evaluationerror => [ ] } });
{ order_by => "$me.id DESC", rows => $rows, offset => $offset });
my @res = ();
my $cache = {};
@@ -340,37 +342,68 @@ sub getEvals {
sub getMachines {
my %machines = ();
my $config = getHydraConfig();
my @machinesFiles = split /:/, ($ENV{"NIX_REMOTE_SYSTEMS"} || "/etc/nix/machines");
if ($config->{'queue_runner_endpoint'}) {
my $ua = LWP::UserAgent->new();
my $resp = $ua->get($config->{'queue_runner_endpoint'} . "/status/machines");
if (not $resp->is_success) {
print STDERR "Unable to ask queue runner for machines\n";
return \%machines;
}
for my $machinesFile (@machinesFiles) {
next unless -e $machinesFile;
open(my $conf, "<", $machinesFile) or die;
while (my $line = <$conf>) {
chomp($line);
$line =~ s/\#.*$//g;
next if $line =~ /^\s*$/;
my @tokens = split /\s+/, $line;
my $data = decode_json($resp->decoded_content) or return \%machines;
my $machinesData = $data->{machines};
if (!defined($tokens[5]) || $tokens[5] eq "-") {
$tokens[5] = "";
}
my @supportedFeatures = split(/,/, $tokens[5] || "");
if (!defined($tokens[6]) || $tokens[6] eq "-") {
$tokens[6] = "";
}
my @mandatoryFeatures = split(/,/, $tokens[6] || "");
$machines{$tokens[0]} =
{ systemTypes => [ split(/,/, $tokens[1]) ]
, sshKeys => $tokens[2]
, maxJobs => int($tokens[3])
, speedFactor => 1.0 * (defined $tokens[4] ? int($tokens[4]) : 1)
, supportedFeatures => [ @supportedFeatures, @mandatoryFeatures ]
, mandatoryFeatures => [ @mandatoryFeatures ]
foreach my $machineName (keys %$machinesData) {
my $machine = %$machinesData{$machineName};
$machines{$machineName} =
{ systemTypes => $machine->{systems}
, maxJobs => $machine->{maxJobs}
, speedFactor => $machine->{speedFactor}
, supportedFeatures => [ @{$machine->{supportedFeatures}}, @{$machine->{mandatoryFeatures}} ]
, mandatoryFeatures => [ @{$machine->{mandatoryFeatures}} ]
# New fields for the machine status
, primarySystemType => $machine->{systems}[0]
, hasCapacity => $machine->{hasCapacity}
, hasDynamicCapacity => $machine->{hasDynamicCapacity}
, hasStaticCapacity => $machine->{hasStaticCapacity}
, score => $machine->{score}
, stats => $machine->{stats}
, memTotal => $machine->{totalMem}
};
}
close $conf;
} else {
my @machinesFiles = split /:/, ($ENV{"NIX_REMOTE_SYSTEMS"} || "/etc/nix/machines");
for my $machinesFile (@machinesFiles) {
next unless -e $machinesFile;
open(my $conf, "<", $machinesFile) or die;
while (my $line = <$conf>) {
chomp($line);
$line =~ s/\#.*$//g;
next if $line =~ /^\s*$/;
my @tokens = split /\s+/, $line;
if (!defined($tokens[5]) || $tokens[5] eq "-") {
$tokens[5] = "";
}
my @supportedFeatures = split(/,/, $tokens[5] || "");
if (!defined($tokens[6]) || $tokens[6] eq "-") {
$tokens[6] = "";
}
my @mandatoryFeatures = split(/,/, $tokens[6] || "");
$machines{$tokens[0]} =
{ systemTypes => [ split(/,/, $tokens[1]) ]
, maxJobs => int($tokens[3])
, speedFactor => 1.0 * (defined $tokens[4] ? int($tokens[4]) : 1)
, supportedFeatures => [ @supportedFeatures, @mandatoryFeatures ]
, mandatoryFeatures => [ @mandatoryFeatures ]
};
}
close $conf;
}
}
return \%machines;
@@ -417,6 +450,16 @@ sub pathIsInsidePrefix {
return $cur;
}
sub readIntoSocket{
my (%args) = @_;
my $sock;
eval {
open($sock, "-|", @{$args{cmd}}) or die "failed to open socket from command: $!\n";
};
return $sock;
}

View File

@@ -105,4 +105,6 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
1;

View File

@@ -386,6 +386,8 @@ __PACKAGE__->add_column(
"+id" => { retrieve_on_insert => 1 }
);
__PACKAGE__->mk_group_accessors('column' => 'has_error');
sub supportsDynamicRunCommand {
my ($self) = @_;

View File

@@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::EvaluationErrors;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}

View File

@@ -0,0 +1,30 @@
package Hydra::Schema::ResultSet::Jobsets;
use strict;
use utf8;
use warnings;
use parent 'DBIx::Class::ResultSet';
use Storable qw(dclone);
__PACKAGE__->load_components('Helper::ResultSet::RemoveColumns');
# Exclude expensive error message values unless explicitly requested, and
# replace them with a summary field describing their presence/absence.
sub search_rs {
my ( $class, $query, $attrs ) = @_;
if ($attrs) {
$attrs = dclone($attrs);
}
unless (exists $attrs->{'select'} || exists $attrs->{'columns'}) {
$attrs->{'+columns'}->{'has_error'} = "errormsg != ''";
}
unless (exists $attrs->{'+columns'}->{'errormsg'}) {
push @{ $attrs->{'remove_columns'} }, 'errormsg';
}
return $class->next::method($query, $attrs);
}

View File

@@ -6,6 +6,7 @@ use base 'Catalyst::View::TT';
use Template::Plugin::HTML;
use Hydra::Helper::Nix;
use Time::Seconds;
use Digest::SHA qw(sha1_hex);
__PACKAGE__->config(
TEMPLATE_EXTENSION => '.tt',
@@ -25,8 +26,14 @@ __PACKAGE__->config(
makeNameTextForJobset
relativeDuration
stripSSHUser
metricDivId
/]);
sub metricDivId {
my ($self, $c, $text) = @_;
return "metric-" . sha1_hex($text);
}
sub buildLogExists {
my ($self, $c, $build) = @_;
return 1 if defined $c->config->{log_prefix};

View File

@@ -2,8 +2,8 @@
#include <pqxx/pqxx>
#include "environment-variables.hh"
#include "util.hh"
#include <nix/util/environment-variables.hh>
#include <nix/util/util.hh>
struct Connection : pqxx::connection
@@ -27,19 +27,20 @@ struct Connection : pqxx::connection
};
class receiver : public pqxx::notification_receiver
class receiver
{
std::optional<std::string> status;
pqxx::connection & conn;
public:
receiver(pqxx::connection_base & c, const std::string & channel)
: pqxx::notification_receiver(c, channel) { }
void operator() (const std::string & payload, int pid) override
: conn(static_cast<pqxx::connection &>(c))
{
status = payload;
};
conn.listen(channel, [this](pqxx::notification n) {
status = std::string(n.payload);
});
}
std::optional<std::string> get() {
auto s = status;

View File

@@ -2,8 +2,8 @@
#include <map>
#include "file-system.hh"
#include "util.hh"
#include <nix/util/file-system.hh>
#include <nix/util/util.hh>
struct HydraConfig
{

View File

@@ -57,20 +57,12 @@ fontawesome = custom_target(
command: ['unzip', '-u', '-d', '@OUTDIR@', '@INPUT@'],
)
custom_target(
'name-fontawesome-css',
'name-fontawesome',
input: fontawesome,
output: 'css',
command: ['cp', '-r', '@INPUT@/css', '@OUTPUT@'],
output: 'fontawesome',
command: ['cp', '-r', '@INPUT@' , '@OUTPUT@'],
install: true,
install_dir: hydra_libexecdir_static / 'fontawesome',
)
custom_target(
'name-fontawesome-webfonts',
input: fontawesome,
output: 'webfonts',
command: ['cp', '-r', '@INPUT@/webfonts', '@OUTPUT@'],
install: true,
install_dir: hydra_libexecdir_static / 'fontawesome',
install_dir: hydra_libexecdir_static,
)
# Scripts

View File

@@ -61,21 +61,7 @@ END;
<td>[% IF step.busy != 0 || ((step.machine || step.starttime) && (step.status == 0 || step.status == 1 || step.status == 3 || step.status == 4 || step.status == 7)); INCLUDE renderMachineName machine=step.machine; ELSE; "<em>n/a</em>"; END %]</td>
<td class="step-status">
[% IF step.busy != 0 %]
[% IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END %]
[% INCLUDE renderBusyStatus %]
[% ELSIF step.status == 0 %]
[% IF step.isnondeterministic %]
<span class="warn">Succeeded with non-determistic result</span>
@@ -577,7 +563,7 @@ END;
[% IF eval.flake %]
<p>If you have <a href='https://nixos.org/nix/download.html'>Nix
<p>If you have <a href='https://nixos.org/download/'>Nix
installed</a>, you can reproduce this build on your own machine by
running the following command:</p>
@@ -587,7 +573,7 @@ END;
[% ELSE %]
<p>If you have <a href='https://nixos.org/nix/download.html'>Nix
<p>If you have <a href='https://nixos.org/download/'>Nix
installed</a>, you can reproduce this build on your own machine by
downloading <a [% HTML.attributes(href => url) %]>a script</a>
that checks out all inputs of the build and then invokes Nix to

View File

@@ -91,6 +91,17 @@ BLOCK renderDuration;
duration % 60 %]s[%
END;
BLOCK renderDrvInfo;
drvname = step.drvpath
.substr(11) # strip `/nix/store/`
.split('-').slice(1).join("-") # strip hash part
.substr(0, -4); # strip `.drv`
IF drvname != releasename;
IF step.type == 0; action = "Build"; ELSE; action = "Substitution"; END;
IF drvname; %]<em> ([% action %] of [% drvname %])</em>[% END;
END;
END;
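
For reference, the same name extraction in plain Perl, using a made-up store path:

    my $drvpath = "/nix/store/zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz-hello-2.12.drv";  # hypothetical
    my $name = substr($drvpath, 11);   # strip "/nix/store/"
    my @parts = split /-/, $name;
    shift @parts;                      # drop the hash component
    $name = join '-', @parts;          # "hello-2.12.drv"
    $name =~ s/\.drv\z//;              # "hello-2.12"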
BLOCK renderBuildListHeader %]
<table class="table table-striped table-condensed clickable-rows">
@@ -131,7 +142,12 @@ BLOCK renderBuildListBody;
[% END %]
<td><a class="row-link" href="[% link %]">[% build.id %]</a></td>
[% IF !hideJobName %]
<td><a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</td>
<td>
<a href="[%link%]">[% IF !hideJobsetName %][%build.jobset.get_column("project")%]:[%build.jobset.get_column("name")%]:[% END %][%build.get_column("job")%]</a>
[% IF showStepName %]
[% INCLUDE renderDrvInfo step=build.buildsteps releasename=build.nixname %]
[% END %]
</td>
[% END %]
<td class="nowrap">[% t = showSchedulingInfo ? build.timestamp : build.stoptime; IF t; INCLUDE renderRelativeDate timestamp=(showSchedulingInfo ? build.timestamp : build.stoptime); ELSE; "-"; END %]</td>
<td>[% !showSchedulingInfo and build.get_column('releasename') ? build.get_column('releasename') : build.nixname %]</td>
@@ -245,6 +261,27 @@ BLOCK renderBuildStatusIcon;
END;
BLOCK renderBusyStatus;
IF step.busy == 1 %]
<strong>Preparing</strong>
[% ELSIF step.busy == 10 %]
<strong>Connecting</strong>
[% ELSIF step.busy == 20 %]
<strong>Sending inputs</strong>
[% ELSIF step.busy == 30 %]
<strong>Building</strong>
[% ELSIF step.busy == 35 %]
<strong>Waiting to receive outputs</strong>
[% ELSIF step.busy == 40 %]
<strong>Receiving outputs</strong>
[% ELSIF step.busy == 50 %]
<strong>Post-processing</strong>
[% ELSE %]
<strong>Unknown state</strong>
[% END;
END;
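
The busy codes rendered above, collected as a quick reference (a Perl sketch mirroring the template; not an existing API):

    my %busy_state = (
        1  => 'Preparing',
        10 => 'Connecting',
        20 => 'Sending inputs',
        30 => 'Building',
        35 => 'Waiting to receive outputs',
        40 => 'Receiving outputs',
        50 => 'Post-processing',
    );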
BLOCK renderStatus;
IF build.finished;
buildstatus = build.buildstatus;
@@ -476,7 +513,7 @@ BLOCK renderEvals %]
ELSE %]
-
[% END %]
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<span class="badge badge-warning">Eval Errors</span>
[% END %]
</td>
@@ -602,7 +639,7 @@ BLOCK renderJobsetOverview %]
<td>[% HTML.escape(j.description) %]</td>
<td>[% IF j.lastcheckedtime;
INCLUDE renderDateTime timestamp = j.lastcheckedtime;
IF j.errormsg || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
IF j.has_error || j.fetcherrormsg; %]&nbsp;<span class = 'badge badge-warning'>Error</span>[% END;
ELSE; "-";
END %]</td>
[% IF j.get_column('nrtotal') > 0 %]
@@ -648,6 +685,14 @@ BLOCK includeFlot %]
[% END;
BLOCK renderYesNo %]
[% IF value %]
<span class="text-success">Yes</span>
[% ELSE %]
<span class="text-danger">No</span>
[% END %]
[% END;
BLOCK createChart %]
<div id="[%id%]-chart" style="width: 1000px; height: 400px;"></div>

src/root/eval-error.tt Normal file
View File

@@ -0,0 +1,26 @@
[% PROCESS common.tt %]
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
[% INCLUDE style.tt %]
</head>
<body>
<div class="tab-content tab-pane">
<div id="tabs-errors" class="">
[% IF eval %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
[% ELSIF jobset %]
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
[% END %]
</div>
</div>
</body>
</html>

View File

@@ -18,8 +18,7 @@
<h3>Metric: <a [% HTML.attributes(href => c.uri_for('/job' project.name jobset.name job 'metric' metric.name)) %]><tt>[%HTML.escape(metric.name)%]</tt></a></h3>
[% id = "metric-" _ metric.name;
id = id.replace('\.', '_');
[% id = metricDivId(metric.name);
INCLUDE createChart dataUrl=c.uri_for('/job' project.name jobset.name job 'metric' metric.name); %]
[% END %]

View File

@@ -13,7 +13,7 @@
<a class="dropdown-item" href="?compare=-[% 31 * 24 * 60 * 60 %]&full=[% full ? 1 : 0 %]">This jobset <strong>one month</strong> earlier</a>
[% IF project.jobsets_rs.count > 1 %]
<div class="dropdown-divider"></div>
[% FOREACH j IN project.jobsets.sort('name'); IF j.name != jobset.name %]
[% FOREACH j IN project.jobsets.sort('name'); IF j.name != jobset.name && j.enabled == 1 %]
<a class="dropdown-item" href="?compare=[% j.name %]&full=[% full ? 1 : 0 %]">Jobset <tt>[% project.name %]:[% j.name %]</tt></a>
[% END; END %]
[% END %]
@@ -30,8 +30,6 @@ eval.checkouttime %]s and evaluation took [% eval.evaltime %]s.</p>
project=otherEval.jobset.project.name jobset=otherEval.jobset.name %] evaluation <a href="[%
c.uri_for(c.controller('JobsetEval').action_for('view'),
[otherEval.id]) %]">[% otherEval.id %]</a>.</p>
[% ELSE %]
<div class="alert alert-danger">Couldn't find an evaluation to compare to.</div>
[% END %]
<form>
@@ -48,16 +46,16 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
<a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#">Actions</a>
<div class="dropdown-menu">
<a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('create_jobset'), [eval.id]) %]">Create a jobset from this evaluation</a>
[% IF unfinished.size > 0 %]
[% IF totalQueued > 0 %]
<a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('cancel'), [eval.id]) %]">Cancel all scheduled builds</a>
[% END %]
[% IF aborted.size > 0 || stillFail.size > 0 || nowFail.size > 0 || failed.size > 0 %]
[% IF totalFailed > 0 %]
<a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_failed'), [eval.id]) %]">Restart all failed builds</a>
[% END %]
[% IF aborted.size > 0 %]
[% IF totalAborted > 0 %]
<a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_aborted'), [eval.id]) %]">Restart all aborted builds</a>
[% END %]
[% IF unfinished.size > 0 %]
[% IF totalQueued > 0 %]
<a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('bump'), [eval.id]) %]">Bump builds to front of queue</a>
[% END %]
</div>
@@ -65,7 +63,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
[% IF aborted.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted Jobs ([% aborted.size %])</span></a></li>
<li class="nav-item"><a class="nav-link" href="#tabs-aborted" data-toggle="tab"><span class="text-warning">Aborted / Timed out Jobs ([% aborted.size %])</span></a></li>
[% END %]
[% IF nowFail.size > 0 %]
<li class="nav-item"><a class="nav-link" href="#tabs-now-fail" data-toggle="tab"><span class="text-warning">Newly Failing Jobs ([% nowFail.size %])</span></a></li>
@@ -90,7 +88,7 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-inputs" data-toggle="tab">Inputs</a></li>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
</ul>
@@ -108,13 +106,6 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
<div class="tab-content">
[% IF eval.evaluationerror.errormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
</div>
[% END %]
<div id="tabs-aborted" class="tab-pane">
[% INCLUDE renderSome builds=aborted tabname="#tabs-aborted" %]
</div>
@@ -172,10 +163,9 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
[% END %]
</div>
[% IF eval.evaluationerror.errormsg %]
[% IF eval.evaluationerror.has_error %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(eval.evaluationerror.errortime || eval.timestamp) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(eval.evaluationerror.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for(c.controller('JobsetEval').action_for('errors'), [eval.id], params) %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]
</div>

View File

@@ -61,7 +61,7 @@
[% END %]
<li class="nav-item"><a class="nav-link active" href="#tabs-evaluations" data-toggle="tab">Evaluations</a></li>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<li class="nav-item"><a class="nav-link" href="#tabs-errors" data-toggle="tab"><span class="text-warning">Evaluation Errors</span></a></li>
[% END %]
<li class="nav-item"><a class="nav-link" href="#tabs-jobs" data-toggle="tab">Jobs</a></li>
@@ -79,7 +79,7 @@
<th>Last checked:</th>
<td>
[% IF jobset.lastcheckedtime %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.errormsg || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% INCLUDE renderDateTime timestamp = jobset.lastcheckedtime %], [% IF jobset.has_error || jobset.fetcherrormsg %]<em class="text-warning">with errors!</em>[% ELSE %]<em>no errors</em>[% END %]
[% ELSE %]
<em>never</em>
[% END %]
@@ -117,10 +117,9 @@
</div>
[% IF jobset.errormsg || jobset.fetcherrormsg %]
[% IF jobset.has_error || jobset.fetcherrormsg %]
<div id="tabs-errors" class="tab-pane">
<p>Errors occurred at [% INCLUDE renderDateTime timestamp=(jobset.errortime || jobset.lastcheckedtime) %].</p>
<div class="card bg-light"><div class="card-body"><pre>[% HTML.escape(jobset.fetcherrormsg || jobset.errormsg) %]</pre></div></div>
<iframe src="[% c.uri_for('/jobset' project.name jobset.name "errors") %]" loading="lazy" frameBorder="0" width="100%"></iframe>
</div>
[% END %]

View File

@@ -10,31 +10,7 @@
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge" />
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before boostrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>
[% INCLUDE style.tt %]
[% IF c.config.enable_google_login %]
<meta name="google-signin-client_id" content="[% c.config.google_client_id %]">

View File

@@ -11,7 +11,8 @@
[% ELSE %]
is
[% END %]
the build log of derivation <tt>[% IF step; step.drvpath; ELSE; build.drvpath; END %]</tt>.
the build log (<a href="[% step ? c.uri_for('/build' build.id 'nixlog' step.stepnr, 'raw')
: c.uri_for('/build' build.id 'log', 'raw') %]">raw</a>) of derivation <tt>[% IF step; step.drvpath; ELSE; build.drvpath; END %]</tt>.
[% IF step && step.machine %]
It was built on <tt>[% step.machine %]</tt>.
[% END %]

View File

@@ -6,10 +6,10 @@
<thead>
<tr>
<th>Job</th>
<th>System</th>
<th>Build</th>
<th>Step</th>
<th>What</th>
<th>Status</th>
<th>Since</th>
</tr>
</thead>
@@ -17,12 +17,48 @@
[% name = m.key ? stripSSHUser(m.key) : "localhost" %]
<thead>
<tr>
<th colspan="6">
<th colspan="7">
<tt [% IF m.value.disabled %]style="text-decoration: line-through;"[% END %]>[% INCLUDE renderMachineName machine=m.key %]</tt>
[% IF m.value.systemTypes %]
[% IF m.value.primarySystemType %]
<span class="muted" style="font-weight: normal;">
([% comma=0; FOREACH system IN m.value.systemTypes %][% IF comma; %], [% ELSE; comma = 1; END %]<tt>[% system %]</tt>[% END %])
(<tt>[% m.value.primarySystemType %]</tt>)
</span>
&nbsp;
[% WRAPPER makePopover title="Details" classes="btn-secondary btn-sm" %]
<ul class="list-unstyled mb-0">
<li><b>System types:&nbsp;</b>[% comma=0; FOREACH system IN m.value.systemTypes %][% IF comma; %], [% ELSE; comma = 1; END %]<tt>[% system %]</tt>[% END %]</li>
<li><b>Supported Features:&nbsp;</b>[% comma=0; FOREACH feat IN m.value.supportedFeatures %][% IF comma; %], [% ELSE; comma = 1; END %]<tt>[% feat %]</tt>[% END %]</li>
<li><b>Mandatory Features:&nbsp;</b>[% comma=0; FOREACH feat IN m.value.mandatoryFeatures %][% IF comma; %], [% ELSE; comma = 1; END %]<tt>[% feat %]</tt>[% END %]</li>
<li><b>Capacity:&nbsp;</b>[% INCLUDE renderYesNo value=m.value.hasCapacity %]&nbsp;<b>Static:&nbsp;</b>[% INCLUDE renderYesNo value=m.value.hasStaticCapacity %]&nbsp;<b>Dynamic:&nbsp;</b>[% INCLUDE renderYesNo value=m.value.hasDynamicCapacity %]</li>
<li><b>Scheduling Score:&nbsp;</b>[% m.value.score %]</li>
<li><b>Load:&nbsp;</b><tt>[% pretty_load(m.value.stats.load1) %]</tt>&nbsp;&nbsp;&nbsp;<tt>[% pretty_load(m.value.stats.load5) %]</tt>&nbsp;&nbsp;&nbsp;<tt>[% pretty_load(m.value.stats.load15) %]</tt></li>
<li><b>Memory:&nbsp;</b><tt>[% human_bytes(m.value.stats.memUsage) %]</tt> of <tt>[% human_bytes(m.value.memTotal) %]</tt> used (<tt>[% human_bytes(m.value.memTotal - m.value.stats.memUsage) %]</tt> free)</li>
[% pressure = m.value.stats.pressure %]
[% MACRO render_pressure(title, pressure) BLOCK %]
[% IF pressure %]
<tr><td><b>[% title %]:</b></td><td><tt>[% pretty_percent(pressure.avg10) %]%</tt></td><td><td><tt>[% pretty_percent(pressure.avg60) %]%</tt></td><td><td><tt>[% pretty_percent(pressure.avg300) %]%</tt></td><td>
[% END %]
[% END %]
[% IF pressure %]
<li><b>Pressure:&nbsp;</b>
<table class="pressureTable">
[% render_pressure('Some CPU', pressure.cpuSome) %]
[% render_pressure('Some IO', pressure.ioSome) %]
[% render_pressure('Full IO', pressure.ioFull) %]
[% render_pressure('Full IRQ', pressure.irqFull) %]
[% render_pressure('Some Memory', pressure.memSome) %]
[% render_pressure('Full Memory', pressure.memFull) %]
</table>
</li>
[% END %]
</ul>
[% END %]
[% ELSE %]
[% IF m.value.systemTypes %]
<span class="muted" style="font-weight: normal;">
([% comma=0; FOREACH system IN m.value.systemTypes %][% IF comma; %], [% ELSE; comma = 1; END %]<tt>[% system %]</tt>[% END %])
</span>
[% END %]
[% END %]
[% IF m.value.nrStepsDone %]
<span class="muted" style="font-weight: normal;">
@@ -40,10 +76,10 @@
[% idle = 0 %]
<tr>
<td><tt>[% INCLUDE renderFullJobName project=step.project jobset=step.jobset job=step.job %]</tt></td>
<td><tt>[% step.system %]</tt></td>
<td><a href="[% c.uri_for('/build' step.build) %]">[% step.build %]</a></td>
<td>[% IF step.busy >= 30 %]<a class="row-link" href="[% c.uri_for('/build' step.build 'nixlog' step.stepnr 'tail') %]">[% step.stepnr %]</a>[% ELSE; step.stepnr; END %]</td>
<td><tt>[% step.drvpath.match('-(.*)').0 %]</tt></td>
<td>[% INCLUDE renderBusyStatus %]</td>
<td style="width: 10em">[% INCLUDE renderDuration duration = curTime - step.starttime %] </td>
</tr>
[% END %]

View File

@@ -181,12 +181,20 @@ a.squiggle:hover {
padding-bottom: 1px;
}
table.pressureTable {
margin-left: 2em;
}
table.pressureTable td {
padding: 0 .4em;
}
@media (prefers-color-scheme: dark) {
/* Prevent some flickering */
html {
background-color: #1f1f1f;
}
body, div.popover {
body, div.popover, div.popover-body {
background-color: #1f1f1f;
color: #fafafa !important;
}

View File

@@ -129,6 +129,12 @@ $(document).ready(function() {
el.addClass("is-local");
}
});
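// Grow embedded error iframes to match their content once it loads,
// avoiding nested scrollbars.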
[...document.getElementsByTagName("iframe")].forEach((element) => {
element.contentWindow.addEventListener("DOMContentLoaded", (_) => {
element.style.height = element.contentWindow.document.body.scrollHeight + 'px';
})
})
});
var tabsLoaded = {};

View File

@@ -7,7 +7,7 @@
[% ELSE %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 %]
[% INCLUDE renderBuildList builds=resource showSchedulingInfo=1 hideResultInfo=1 busy=1 showStepName=1 %]
[% END %]

src/root/style.tt Normal file
View File

@@ -0,0 +1,24 @@
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-3.4.1.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/jquery/jquery-ui-1.10.4.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/js/moment/moment-2.24.0.min.js") %]"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link href="[% c.uri_for("/static/fontawesome/css/all.css") %]" rel="stylesheet" />
<script type="text/javascript" src="[% c.uri_for("/static/js/popper.min.js") %]"></script>
<script type="text/javascript" src="[% c.uri_for("/static/bootstrap/js/bootstrap.min.js") %]"></script>
<link href="[% c.uri_for("/static/bootstrap/css/bootstrap.min.css") %]" rel="stylesheet" />
<!-- hydra.css may need to be moved to before boostrap to make the @media rule work. -->
<link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" type="text/css" />
<link rel="stylesheet" href="[% c.uri_for("/static/css/rotated-th.css") %]" type="text/css" />
<style>
.popover { max-width: 40%; }
</style>
<script type="text/javascript" src="[% c.uri_for("/static/js/bootbox.min.js") %]"></script>
<link rel="stylesheet" href="[% c.uri_for("/static/css/tree.css") %]" type="text/css" />
<script type="text/javascript" src="[% c.uri_for("/static/js/common.js") %]"></script>

View File

@@ -34,6 +34,9 @@
[% INCLUDE menuItem
uri = c.uri_for(c.controller('Root').action_for('steps'))
title = "Latest steps" %]
[% INCLUDE menuItem
uri = c.uri_for(c.controller('Root').action_for('queue_runner_status'))
title = "Queue Runner Status" %]
[% END %]
[% IF project %]
@@ -42,7 +45,7 @@
<div class="dropdown-divider"></div>
[% INCLUDE menuItem uri = c.uri_for(c.controller('Project').action_for('project'), [project.name]) title = "Overview" %]
[% INCLUDE menuItem uri = c.uri_for(c.controller('Project').action_for('all'), [project.name]) title = "Latest builds" %]
[% INCLUDE menuItem uri = c.uri_for('/project' project.name 'channel' 'latest') title = "Channel" %]
[% IF localStore %][% INCLUDE menuItem uri = c.uri_for('/project' project.name 'channel' 'latest') title = "Channel" %][% END %]
[% END %]
[% END %]
@@ -59,7 +62,7 @@
[% INCLUDE menuItem
uri = c.uri_for(c.controller('Jobset').action_for('all'), [project.name, jobset.name])
title = "Latest builds" %]
[% INCLUDE menuItem uri = c.uri_for('/jobset' project.name jobset.name 'channel' 'latest') title = "Channel" %]
[% IF localStore %][% INCLUDE menuItem uri = c.uri_for('/jobset' project.name jobset.name 'channel' 'latest') title = "Channel" %][% END %]
[% END %]
[% END %]
@@ -73,7 +76,7 @@
[% INCLUDE menuItem
uri = c.uri_for(c.controller('Job').action_for('all'), [project.name, jobset.name, job])
title = "Latest builds" %]
[% INCLUDE menuItem uri = c.uri_for('/job' project.name jobset.name job 'channel' 'latest') title = "Channel" %]
[% IF localStore %][% INCLUDE menuItem uri = c.uri_for('/job' project.name jobset.name job 'channel' 'latest') title = "Channel" %][% END %]
[% END %]
[% END %]

View File

@@ -372,6 +372,7 @@ sub evalJobs {
or die "cannot find the input containing the job expression\n";
@cmd = ("nix-eval-jobs",
"--option", "restrict-eval", "true",
"<" . $nixExprInputName . "/" . $nixExprPath . ">",
inputsToArgs($inputInfo));
}
@@ -395,9 +396,14 @@ sub evalJobs {
print STDERR "evaluator: @escaped\n";
}
# Unset NIX_PATH for nix-eval-jobs to ensure reproducible evaluations
my %env = %ENV;
delete $env{'NIX_PATH'};
my $evalProc = IPC::Run::start \@cmd,
'>', IPC::Run::new_chunker, \my $out,
'2>', \my $err;
'2>', \my $err,
init => sub { %ENV = %env; };
return sub {
while (1) {
@@ -773,6 +779,9 @@ sub checkJobsetWrapped {
my $jobsetChanged = 0;
my %buildMap;
my @jobs;
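# Drain the evaluator iterator up front so the transaction below does not
# stay open for the duration of the (potentially long) evaluation.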
push @jobs, $_ while defined($_ = $jobsIter->());
$db->txn_do(sub {
my $prevEval = getPrevJobsetEval($db, $jobset, 1);
@@ -796,7 +805,7 @@ sub checkJobsetWrapped {
my @jobsWithConstituents;
while (defined(my $job = $jobsIter->())) {
foreach my $job (@jobs) {
if ($jobsetsJobset) {
die "The .jobsets jobset must only have a single job named 'jobsets'"
unless $job->{attr} eq "jobsets";

View File

@@ -32,4 +32,9 @@ subtest "/jobset/PROJECT/JOBSET/evals" => sub {
ok($jobsetevals->is_success, "The page showing the jobset evals returns 200.");
};
subtest "/jobset/PROJECT/JOBSET/errors" => sub {
my $jobsetevals = request(GET '/jobset/' . $project->name . '/' . $jobset->name . '/errors');
ok($jobsetevals->is_success, "The page showing the jobset eval errors returns 200.");
};
done_testing;

View File

@@ -35,6 +35,10 @@ subtest "Fetching the eval's overview" => sub {
is($fetch->code, 200, "channel page is 200");
};
subtest "Fetching the eval's overview" => sub {
my $fetch = request(GET '/eval/' . $eval->id, '/errors');
is($fetch->code, 200, "errors page is 200");
};
done_testing;

View File

@@ -25,7 +25,10 @@ subtest "empty diff" => sub {
removed => [],
unfinished => [],
aborted => [],
failed => [],
totalAborted => 0,
totalFailed => 0,
totalQueued => 0,
},
"empty list of jobs returns empty diff"
);
@@ -48,12 +51,7 @@ subtest "2 different jobs" => sub {
"succeed_with_failed is a new job"
);
is(scalar(@{$ret->{failed}}), 1, "list of failed jobs is 1 element long");
is(
$ret->{failed}[0]->get_column('id'),
$builds->{"succeed_with_failed"}->get_column('id'),
"succeed_with_failed is a failed job"
);
is($ret->{totalFailed}, 1, "total failed jobs is 1");
is(
$ret->{removed},
@@ -70,9 +68,9 @@ subtest "2 different jobs" => sub {
subtest "failed job with no previous history" => sub {
my $ret = buildDiff([$builds->{"fails"}], []);
is(scalar(@{$ret->{failed}}), 1, "list of failed jobs is 1 element long");
is($ret->{totalFailed}, 1, "total failed jobs is 1");
is(
$ret->{failed}[0]->get_column('id'),
$ret->{new}[0]->get_column('id'),
$builds->{"fails"}->get_column('id'),
"fails is a failed job"
);
@@ -93,7 +91,6 @@ subtest "not-yet-built job with no previous history" => sub {
is($ret->{removed}, [], "removed");
is($ret->{unfinished}, [], "unfinished");
is($ret->{aborted}, [], "aborted");
is($ret->{failed}, [], "failed");
is(scalar(@{$ret->{new}}), 1, "list of new jobs is 1 element long");
is(

View File

@@ -33,7 +33,6 @@ close $fh;
is(Hydra::Helper::Nix::getMachines(), {
'root@ip' => {
'systemTypes' => ["x86_64-darwin"],
'sshKeys' => '/sshkey',
'maxJobs' => 15,
'speedFactor' => 15,
'supportedFeatures' => ["big-parallel", "kvm", "nixos-test" ],
@@ -41,7 +40,6 @@ is(Hydra::Helper::Nix::getMachines(), {
},
'root@baz' => {
'systemTypes' => [ "aarch64-darwin" ],
'sshKeys' => '/sshkey',
'maxJobs' => 4,
'speedFactor' => 1,
'supportedFeatures' => ["big-parallel"],
@@ -49,7 +47,6 @@ is(Hydra::Helper::Nix::getMachines(), {
},
'root@bux' => {
'systemTypes' => [ "i686-linux", "x86_64-linux" ],
'sshKeys' => '/var/sshkey',
'maxJobs' => 1,
'speedFactor' => 1,
'supportedFeatures' => [ "kvm", "nixos-test", "benchmark" ],
@@ -57,7 +54,6 @@ is(Hydra::Helper::Nix::getMachines(), {
},
'root@lotsofspace' => {
'systemTypes' => [ "i686-linux", "x86_64-linux" ],
'sshKeys' => '/var/sshkey',
'maxJobs' => 1,
'speedFactor' => 1,
'supportedFeatures' => [ "kvm", "nixos-test", "benchmark" ],

View File

@@ -6,27 +6,55 @@ use Hydra::Helper::Exec;
my $ctx = test_context();
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-broken.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
subtest "broken constituents expression" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-broken.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
"The stderr record includes a relevant error message"
);
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
qr/aggregate job 'mixed_aggregate' references non-existent job 'constituentA'/,
"The stderr record includes a relevant error message"
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job mixed_aggregate failed with the error: "constituentA": does not exist/,
"The jobset records a relevant error message"
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job mixed_aggregate failed with the error: constituentA: does not exist/,
"The jobset records a relevant error message"
);
};
subtest "no matches" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-no-matches.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
isnt($res, 0, "hydra-eval-jobset exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
$stderr,
qr/aggregate job 'non_match_aggregate' references constituent glob pattern 'tests\.\*' with no matches/,
"The stderr record includes a relevant error message"
);
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job non_match_aggregate failed with the error: tests\.\*: constituent glob pattern had no matches/,
qr/in job non_match_aggregate:\ntests\.\*: constituent glob pattern had no matches/,
"The jobset records a relevant error message"
);
};
done_testing;

View File

@@ -0,0 +1,138 @@
use strict;
use warnings;
use Setup;
use Test2::V0;
use Hydra::Helper::Exec;
use Data::Dumper;
my $ctx = test_context();
subtest "general glob testing" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-glob.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
is($res, 0, "hydra-eval-jobset exits zero");
my $builds = {};
for my $build ($jobset->builds) {
$builds->{$build->job} = $build;
}
subtest "basic globbing works" => sub {
ok(defined $builds->{"ok_aggregate"}, "'ok_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"ok_aggregate"}->constituents->all;
is(2, scalar @constituents, "'ok_aggregate' has two constituents");
my @sortedConstituentNames = sort (map { $_->nixname } @constituents);
is($sortedConstituentNames[0], "empty-dir-A", "first constituent of 'ok_aggregate' is 'empty-dir-A'");
is($sortedConstituentNames[1], "empty-dir-B", "second constituent of 'ok_aggregate' is 'empty-dir-B'");
};
subtest "transitivity is OK" => sub {
ok(defined $builds->{"indirect_aggregate"}, "'indirect_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"indirect_aggregate"}->constituents->all;
is(1, scalar @constituents, "'indirect_aggregate' has one constituent");
is($constituents[0]->nixname, "direct_aggregate", "'indirect_aggregate' has 'direct_aggregate' as single constituent");
};
};
subtest "* selects all except current aggregate" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-glob-all.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
subtest "no eval errors" => sub {
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
ok(
$stderr !~ "aggregate job ok_aggregate has a constituent .* that doesn't correspond to a Hydra build",
"Catchall wildcard must not select itself as constituent"
);
$jobset->discard_changes; # refresh from DB
is(
$jobset->has_error,
0,
"eval-errors non-empty"
);
};
my $builds = {};
for my $build ($jobset->builds) {
$builds->{$build->job} = $build;
}
subtest "two constituents" => sub {
ok(defined $builds->{"ok_aggregate"}, "'ok_aggregate' is part of the jobset evaluation");
my @constituents = $builds->{"ok_aggregate"}->constituents->all;
is(2, scalar @constituents, "'ok_aggregate' has two constituents");
my @sortedConstituentNames = sort (map { $_->nixname } @constituents);
is($sortedConstituentNames[0], "empty-dir-A", "first constituent of 'ok_aggregate' is 'empty-dir-A'");
is($sortedConstituentNames[1], "empty-dir-B", "second constituent of 'ok_aggregate' is 'empty-dir-B'");
};
};
subtest "trivial cycle check" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-cycle.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
ok(
$stderr =~ "Found dependency cycle between jobs 'indirect_aggregate' and 'ok_aggregate'",
"Dependency cycle error is on stderr"
);
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/Dependency cycle: indirect_aggregate <-> ok_aggregate/,
"eval-errors non-empty"
);
is(0, $jobset->builds->count, "No builds should be scheduled");
};
subtest "cycle check with globbing" => sub {
my $jobsetCtx = $ctx->makeJobset(
expression => 'constituents-cycle-glob.nix',
);
my $jobset = $jobsetCtx->{"jobset"};
my ($res, $stdout, $stderr) = captureStdoutStderr(60,
("hydra-eval-jobset", $jobsetCtx->{"project"}->name, $jobset->name)
);
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
like(
$jobset->errormsg,
qr/aggregate job indirect_aggregate failed with the error: Dependency cycle: indirect_aggregate <-> packages.constituentA/,
"packages.constituentA error missing"
);
# on this branch of Hydra, hydra-eval-jobset fails hard if an aggregate
# job is broken.
is(0, $jobset->builds->count, "Zero jobs are scheduled");
};
done_testing;

t/jobs/config.nix Normal file
View File

@@ -0,0 +1,14 @@
rec {
path = "/nix/store/l9mg93sgx50y88p5rr6x1vib6j1rjsds-coreutils-9.1/bin";
mkDerivation = args:
derivation ({
system = builtins.currentSystem;
PATH = path;
} // args);
mkContentAddressedDerivation = args: mkDerivation ({
__contentAddressed = true;
outputHashMode = "recursive";
outputHashAlgo = "sha256";
} // args);
}

View File

@@ -0,0 +1,34 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [ "*_aggregate" ];
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"packages.*"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@@ -0,0 +1,21 @@
with import ./config.nix;
{
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"indirect_aggregate"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@@ -0,0 +1,22 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"*"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@@ -0,0 +1,31 @@
with import ./config.nix;
{
packages.constituentA = mkDerivation {
name = "empty-dir-A";
builder = ./empty-dir-builder.sh;
};
packages.constituentB = mkDerivation {
name = "empty-dir-B";
builder = ./empty-dir-builder.sh;
};
ok_aggregate = mkDerivation {
name = "direct_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"packages.*"
];
builder = ./empty-dir-builder.sh;
};
indirect_aggregate = mkDerivation {
name = "indirect_aggregate";
_hydraAggregate = true;
constituents = [
"ok_aggregate"
];
builder = ./empty-dir-builder.sh;
};
}

View File

@@ -0,0 +1,20 @@
with import ./config.nix;
{
non_match_aggregate = mkDerivation {
name = "mixed_aggregate";
_hydraAggregate = true;
_hydraGlobConstituents = true;
constituents = [
"tests.*"
];
builder = ./empty-dir-builder.sh;
};
# Without a second job, no evaluation would be created at all
# (the only job would be broken), so the constituent validation
# would never be reached.
dummy = mkDerivation {
name = "dummy";
builder = ./empty-dir-builder.sh;
};
}

View File

@@ -0,0 +1,24 @@
{
"enabled": 1,
"hidden": false,
"description": "declarative-jobset-example",
"nixexprinput": "src",
"nixexprpath": "declarative/generator.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"src": {
"type": "path",
"value": "/home/ma27/Projects/hydra-cppnix/t/jobs",
"emailresponsible": false
},
"jobspath": {
"type": "string",
"value": "/home/ma27/Projects/hydra-cppnix/t/jobs",
"emailresponsible": false
}
}
}

View File

@@ -14,7 +14,7 @@ our @EXPORT = qw(
sub evalSucceeds {
my ($jobset) = @_;
my ($res, $stdout, $stderr) = captureStdoutStderr(60, ("hydra-eval-jobset", $jobset->project->name, $jobset->name));
$jobset->discard_changes; # refresh from DB
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
if ($res) {
chomp $stdout; chomp $stderr;
utf8::decode($stdout) or die "Invalid unicode in stdout.";
@@ -29,7 +29,7 @@ sub evalSucceeds {
sub evalFails {
my ($jobset) = @_;
my ($res, $stdout, $stderr) = captureStdoutStderr(60, ("hydra-eval-jobset", $jobset->project->name, $jobset->name));
$jobset->discard_changes; # refresh from DB
$jobset->discard_changes({ '+columns' => {'errormsg' => 'errormsg'} }); # refresh from DB
if (!$res) {
chomp $stdout; chomp $stderr;
utf8::decode($stdout) or die "Invalid unicode in stdout.";

View File

@@ -22,11 +22,11 @@ is(nrQueuedBuildsForJobset($jobset), 0, "Evaluating jobs/broken-constituent.nix
like(
$jobset->errormsg,
qr/^"does-not-exist": does not exist$/m,
qr/^does-not-exist: does not exist$/m,
"Evaluating jobs/broken-constituent.nix should log an error for does-not-exist");
like(
$jobset->errormsg,
qr/^"does-not-evaluate": "error: assertion 'false' failed/m,
qr/^does-not-evaluate: error: assertion 'false' failed/m,
"Evaluating jobs/broken-constituent.nix should log an error for does-not-evaluate");
done_testing;

View File

@@ -13,7 +13,7 @@ my $constituentBuildA = $builds->{"constituentA"};
my $constituentBuildB = $builds->{"constituentB"};
my $eval = $constituentBuildA->jobsetevals->first();
is($eval->evaluationerror->errormsg, "");
is($eval->evaluationerror->has_error, 0);
subtest "Verifying the direct aggregate" => sub {
my $aggBuild = $builds->{"direct_aggregate"};