Compare commits


69 Commits

Author SHA1 Message Date
Jörg Thalheim
250780aaf2 tests: use like for testing regexes
This gives us better diagnostics when the test fails.
2024-08-21 08:34:25 +02:00
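A minimal sketch of what this buys, in Test2::V0 style (strings and test names are illustrative): `ok` with a regex match only reports pass/fail, while `like` also prints the value and the pattern when the test fails.

```perl
use Test2::V0;

my $output = "evaluation of jobset finished";

# Plain boolean check: a failure only logs "not ok", without showing
# $output or the pattern that was expected.
ok($output =~ /finished/, "evaluation finished");

# like(): a failure prints both the received string and the expected
# pattern, which is the better diagnostic this commit is after.
like($output, qr/finished/, "evaluation finished");

done_testing;
```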
Janne Heß
7de7122479 Merge pull request #1398 from marius851000/document_foreman_user
Document the default Hydra user and port in hacking.md
2024-08-01 09:58:30 +02:00
marius david
ada51d70fc Document the default user and port in hacking.md 2024-07-23 22:39:22 +02:00
John Ericson
d7986226f0 Merge pull request #1227 from SuperSandro2000/gitea-push-hook
Add gitea push hook
2024-07-09 14:31:10 -04:00
John Ericson
b3e0d9a8b7 Merge pull request #1387 from NixOS/pipe-buffer-size
queue-runner: try larger pipe buffer sizes
2024-05-23 11:50:15 -04:00
Pierre Bourdon
5728011da1 queue-runner: try larger pipe buffer sizes
(cherry picked from commit 18466e8326)
2024-05-23 11:42:35 -04:00
John Ericson
559376e907 Merge pull request #1377 from SuperSandro2000/fix-doi-1375
Fix doi resolution after #1375
2024-05-21 18:11:21 -04:00
Janne Heß
998df1657e Merge pull request #1382 from Mic92/patch-1
README: update wiki link
2024-05-08 21:37:18 +02:00
Jörg Thalheim
f99cdaf5fe README: update wiki link 2024-05-08 21:31:32 +02:00
John Ericson
3bf00e31c0 Merge pull request #1381 from NixOS/factor-out-tests
Try again to ensure hydra module is usable
2024-05-03 12:50:02 -04:00
John Ericson
e149da7b9b Try again to ensure hydra module is usable
Nixpkgs only contains a `hydra_unstable`, not `hydra`, package, so
adjust the default accordingly, and then override it to our package in
the separate module which does that.
2024-05-03 12:41:17 -04:00
John Ericson
e81c36ac92 Merge pull request #1380 from NixOS/factor-out-tests
Factor out NixOS tests, and clean up
2024-05-03 12:33:50 -04:00
John Ericson
743795b2b0 Factor out NixOS tests, and clean up
Due to newer nixpkgs, there were a number of things that could be
cleaned up in the process.
2024-05-03 12:26:06 -04:00
John Ericson
50378aef22 Merge pull request #1379 from NixOS/remove-unneeded-override
Remove `PrometheusTiny` from overlay
2024-05-03 12:06:29 -04:00
John Ericson
92155f9a07 Remove PrometheusTiny from overlay
It's in Nixpkgs for a good while now.
2024-05-03 11:41:48 -04:00
John Ericson
29ce5c603c Merge pull request #1378 from NixOS/nix-2.22
Update to Nix 2.22
2024-05-03 11:14:49 -04:00
John Ericson
4bd687e3e6 Update to Nix 2.22
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/60824fa97c588a0faf68ea61260a47e388b0a4e5' (2024-04-11)
  → 'github:NixOS/nix/1c8150ac312b5f9ba1b3f6768ff43b09867e5883' (2024-04-23)
• Added input 'nix/flake-parts':
    'github:hercules-ci/flake-parts/9126214d0a59633752a136528f5f3b9aa8565b7d' (2024-04-01)
• Added input 'nix/flake-parts/nixpkgs-lib':
    follows 'nix/nixpkgs'
• Added input 'nix/pre-commit-hooks':
    'github:cachix/pre-commit-hooks.nix/40e6053ecb65fcbf12863338a6dcefb3f55f1bf8' (2024-04-12)
• Added input 'nix/pre-commit-hooks/flake-compat':
    follows 'nix'
• Added input 'nix/pre-commit-hooks/flake-utils':
    'github:numtide/flake-utils/5aed5285a952e0b949eb3ba02c12fa4fcfef535f' (2022-11-02)
• Added input 'nix/pre-commit-hooks/gitignore':
    follows 'nix'
• Added input 'nix/pre-commit-hooks/nixpkgs':
    follows 'nix/nixpkgs'
• Added input 'nix/pre-commit-hooks/nixpkgs-stable':
    follows 'nix/nixpkgs'
2024-05-03 10:47:43 -04:00
Sandro Jäckel
1b8154e67f Fix doi resolution after #1375
This fixes:

> Caught exception in Hydra::Controller::Root->realisations "Undefined subroutine &Hydra::Controller::Root::queryRawRealisation called at /nix/store/v842xb35ph8ka1yi1xanjhk4xh1pn5nm-hydra-2024-04-22/libexec/hydra/lib/Hydra/Controller/Root.pm line 371."
2024-05-03 14:31:34 +02:00
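A hedged sketch of the shape of such a fix: after the bindings migration in #1375 the old bare `queryRawRealisation` import no longer exists, so the call presumably has to go through a store handle instead. The helper import and method receiver below are assumptions, not the verified patch.

```perl
use Hydra::Helper::Nix;   # assumption: makes the shared store handles available

my ($drvOutputId) = @ARGV;   # e.g. an output id passed to the realisations endpoint

# Before: queryRawRealisation() was a bare function imported from the old
# bindings and is now undefined:
#   my $rawRealisation = queryRawRealisation($drvOutputId);

# After (assumed shape): call it as a method on the machine-local store handle.
my $rawRealisation = $MACHINE_LOCAL_STORE->queryRawRealisation($drvOutputId);
```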
Pierre Bourdon
b72528be50 web: serveFile: also serve a CSP putting served HTML in its own origin 2024-04-22 16:28:50 +02:00
John Ericson
8b48579593 Merge pull request #1374 from Mindavi/bugfix/rendering-issue-content-addressed
ca-derivations: fix rendering issue
2024-04-18 13:08:30 -04:00
John Ericson
ef7bf1e67b Merge pull request #1375 from NixOS/nix-2.21
Nix 2.21
2024-04-12 17:28:37 -04:00
John Ericson
ab1f64aa4d flake.lock: Update
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/c4ebb82da4eade975e874da600dc50e9dec610cb' (2024-02-12)
  → 'github:NixOS/nix/60824fa97c588a0faf68ea61260a47e388b0a4e5' (2024-04-11)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/a1982c92d8980a0114372973cbdfe0a307f1bdea' (2024-01-12)
  → 'github:NixOS/nixpkgs/1d6a23f11e44d0fb64b3237569b87658a9eb5643' (2024-04-11)
• Removed input 'nixpkgs-for-fileset'
2024-04-12 12:26:11 -04:00
Rick van Schijndel
3f913a771d t: content-addressed: add a comment about a misleading testcase 2024-04-03 22:55:42 +02:00
Rick van Schijndel
71986632ce hydra-server: findLog: fix issue with ca-derivations enabled
When content addressed derivations are built on the hydra server,
one may run into an issue where some builds suddenly don't load anymore.

This seems to be caused by outPaths that are NULL (which is
allowed for ca-derivations). Filter them out to prevent querying the
database for them, which is not supported by the database abstraction
layer that's currently in use.

On my instance this appears to resolve the issue.
I feel like I might be doing this at the wrong abstraction layer, but on
the other hand -- it seems to resolve it and it also doesn't really look
like it will hurt anything.

The test added in a previous commit uncovers this issue, and this commit
resolves it. So I'm happy with this patch for now.

The issue I was seeing on my server:

hydra-server[2549]: [error] Couldn't render template "undef error - DBIx::Class::SQLMaker::ClassicExtensions::puke(): Fatal: NULL-within-IN not implemented: The upcoming SQL::Abstract::Classic 2.0 will emit the logically correct SQL instead of raising this exception. at /nix/store/<hash>-hydra-unstable-2024-03-08_nix_2_20/libexec/hydra/lib/Hydra/Helper/Nix.pm line 190

See also short discussion here: https://github.com/NixOS/nixpkgs/pull/297392#issuecomment-2035366263
2024-04-03 22:47:22 +02:00
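In Perl terms the workaround amounts to something like this hedged sketch (resultset and accessor names are illustrative, not the exact code in `Hydra/Helper/Nix.pm`):

```perl
use strict;
use warnings;

# Filter out undefined output paths (allowed for CA derivations) before
# they end up in an SQL "IN (...)" clause, which DBIx::Class::SQLMaker
# rejects with "NULL-within-IN not implemented".
sub find_steps_for_outputs {
    my ($buildStepsRs, @outputs) = @_;   # illustrative resultset + output rows

    my @outPaths = grep { defined } map { $_->path } @outputs;
    return () unless @outPaths;

    # Only query the database for the paths we actually have.
    return $buildStepsRs->search({ path => { -in => \@outPaths } });
}
```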
Rick van Schijndel
1665aed5e3 t: content-addressed: add test for caDependingOnFailingCA
This uncovers an issue with the front-end.
2024-04-03 22:45:53 +02:00
John Ericson
b676b08fac Merge pull request #1368 from Ma27/login-submit-btn
Use `submit` event in login form
2024-03-26 11:23:51 -04:00
John Ericson
d614163e9c Merge pull request #1370 from Ma27/reconnect-db
hydra-queue-runner: drop broken connections from pool
2024-03-26 11:21:35 -04:00
Maximilian Bosch
99afff03b0 hydra-queue-runner: drop broken connections from pool
Closes #1336

When restarting postgresql, the connections are still reused in
`hydra-queue-runner` causing errors like this

    main thread: Lost connection to the database server.
    queue monitor: Lost connection to the database server.

and no more builds being processed.

`hydra-evaluator` doesn't have that issue since it crashes right away.
We could let it retry indefinitely as well (see below), but I don't
want to change too much.

If the DB is still unreachable 10s later, the process will stop with a
non-zero exit code because of a missing DB connection. This however
isn't such a big deal because it will be immediately restarted
afterwards. With the current configuration, Hydra will never give up,
but restart (and retry) infinitely. To me that seems reasonable, i.e. to
retry DB connections on a long-running process. If this doesn't work
out, the monitoring should fire anyways because the queue fills up, but
I'm open to discuss that.

Please note that this isn't reproducible with the DB and the queue
runner on the same machine when using `services.hydra-dev`, because of
the `Requires=` dependency `hydra-queue-runner.service` ->
`hydra-init.service` -> `postgresql.service` that causes the queue
runner to be restarted on `systemctl restart postgresql`.

Internally, Hydra uses Nix's pool data structure: it has N slots (here DB
connections), and whenever a connection is requested, an idle slot is
handed out or a new one is created (once all N slots are active, the
caller waits until one becomes free). The issue in the code here is that
whenever an error is encountered the slot is released, but the same
broken connection is reused the next time. By using
`Pool::Handle::markBad`, Nix drops the broken slot. This is now done
whenever `pqxx::broken_connection` is caught.
2024-03-15 14:09:31 +01:00
ajs124
8f56209bd6 Merge pull request #1361 from Ma27/fix-gitea-test
flake: fix gitea integration test
2024-03-08 15:28:07 +01:00
Maximilian Bosch
806c375c33 Don't send gitea status update when build is started
This was the source of a flaky test: sometimes hydra-notify was quick
enough to send out `buildStarted` and sometimes it apparently wasn't,
which was easy to spot with `nix build --rebuild`.

Removing that status update makes no functional difference: gitea
doesn't differentiate between "queued" and "running", so we send the
same status ("pending") on both events, and we even save one avoidable
request.
2024-03-08 11:07:38 +01:00
Maximilian Bosch
669617ab54 Use submit event in login form
It's a pet peeve of mine when logging into my personal Hydra that I
always have to press the button rather than hitting Return after entering
my password.

The reason is that the form doesn't have a "submit" button; so far the
code always listened to the "click" event. Listening for the "submit"
event does the same and also lets you hit Return.
2024-03-07 18:49:52 +01:00
Janne Heß
c45c06509a Merge pull request #1364 from K900/urlencode-logs
urlencode drv names when fetching logs
2024-03-01 10:11:08 +01:00
K900
9db5d0a88d urlencode drv names when fetching logs
Otherwise names with special characters like + break things.
2024-02-26 22:48:16 +03:00
John Ericson
973cb644d3 Merge pull request #1359 from Ma27/nix-perl-bindings
Update perl bindings for nix#9863
2024-02-12 12:55:50 -05:00
Maximilian Bosch
e499509595 Switch to new Nix bindings, update Nix for that
Implements support for Nix's new Perl bindings[1]. The current state
basically does `openStore()`, but always uses `auto` and doesn't support
stores at other URIs.

Even though the stores are cached inside the Perl implementation, I
decided to instantiate those once in the Nix helper module. That way
store openings aren't cluttered across the entire codebase. Also, there
are two stores used later on - MACHINE_LOCAL_STORE for `auto`,
BINARY_CACHE_STORE for the one from `store_uri` in `hydra.conf` - and
using consistent names should make the intent clearer.

This doesn't contain any behavioral changes, i.e. the build product
availability issue from #1352 isn't fixed. This patch only contains the
migration to the new API.

[1] https://github.com/NixOS/nix/pull/9863
2024-02-12 18:50:56 +01:00
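A hedged sketch of that pattern: the two handle names come straight from the message above, while the module layout and constructor call are assumptions about the new bindings rather than the actual helper code.

```perl
package Hydra::Helper::Nix;

use strict;
use warnings;
use Nix::Store;   # assumption: the new bindings keep this module name

# Open each store exactly once at module load and share the handles,
# instead of scattering store openings across the codebase.
our $MACHINE_LOCAL_STORE = Nix::Store->new();   # the "auto" store
our $BINARY_CACHE_STORE  = Nix::Store->new("file:///var/lib/hydra/cache");   # stand-in for hydra.conf's store_uri

1;
```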
Maximilian Bosch
ceff5c5cfe flake: fix gitea integration test
This is an integration test that confirms that jobset definitions from
git repositories are correctly built and status updates pushed to the
gitea instance. The following things needed to be fixed:

* We're still on 23.05 where gitea is marked as insecure. Not going to
  update nixpkgs right now, but going for the quick fix.
* Since gitea 1.19 tokens have scopes that describe what's possible.
  Not specifying the scope in the DB appears to imply that no
  permissions are granted.
* Apparently we have three status updates now (for three status hooks,
  queued/started/finished). No idea why that was broken before, but the
  behavior still looks correct.
2024-02-12 18:30:03 +01:00
John Ericson
878c0f240e Switch (back) to Nix master
Re-creating `nix-next` after using it in #1354.

Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/8df68a213fc52a57b02a57005b0e06cc8de40ce3' (2024-01-25)
  → 'github:NixOS/nix/75ebb90a70f6320c1c7a1fca87a0a8adb0716143' (2024-01-30)
2024-01-30 14:09:38 -05:00
John Ericson
c1bd50a80d Merge pull request #1354 from NixOS/nix-2.20
Nix 2.20
2024-01-30 14:07:49 -05:00
John Ericson
14aabc1cc9 Update to released Nix 2.20
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/8df68a213fc52a57b02a57005b0e06cc8de40ce3' (2024-01-25)
  → 'github:NixOS/nix/8f42912c80c0a03f62f6a3d28a3af05a9762565d' (2024-01-30)
2024-01-30 13:33:20 -05:00
John Ericson
7b826ec5ad Merge branch 'nix-next' into nix-2.20 2024-01-30 13:26:45 -05:00
John Ericson
7a53b866f6 Merge branch 'master' into nix-next
• Updated input 'nix' (merge):
    'github:NixOS/nix/212ba69e6f995992f8b4e4c0656d19c0156c8714'
    'github:NixOS/nix/2c4bb93ba5a97e7078896ebc36385ce172960e4e' (2024-01-25)
  → 'github:NixOS/nix/8df68a213fc52a57b02a57005b0e06cc8de40ce3' (2024-01-25)
2024-01-25 16:26:07 -05:00
John Ericson
449eb2d873 Use more nix::Machine fields
The upstream fields were made to match Hydra, so we can get rid of the
extra fields temporarily added in
70e5469303.
2024-01-24 20:14:31 -05:00
John Ericson
2bdbf51d7d flake.lock: Update
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/b6aee9a93f6646bbffd919d362a5c75c37bb9caa' (2024-01-23)
  → 'github:NixOS/nix/212ba69e6f995992f8b4e4c0656d19c0156c8714' (2024-01-24)
2024-01-24 18:46:56 -05:00
John Ericson
9e7ac58042 Merge branch 'master' into nix-next 2024-01-24 18:36:03 -05:00
John Ericson
9a86da0e7b Merge branch 'master' into nix-next 2024-01-23 15:49:14 -05:00
John Ericson
20b0ad3ba2 Merge pull request #1339 from NixOS/use-nix-ssh
Use Nix's `SSHMaster`
2024-01-23 10:35:02 -05:00
John Ericson
7386caaecf Use Nix's SSHMaster 2024-01-23 10:24:02 -05:00
John Ericson
84c46b6b68 Update to newer Nix
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/74534829f23b668fb9b2f2a14ff6afa4d5e71d4a' (2024-01-22)
  → 'github:NixOS/nix/b6aee9a93f6646bbffd919d362a5c75c37bb9caa' (2024-01-23)
2024-01-23 10:21:48 -05:00
John Ericson
f1d9230f25 Merge remote-tracking branch 'upstream/master' into nix-next 2024-01-23 01:18:13 -05:00
John Ericson
34c51fcea9 Merge pull request #1165 from obsidiansystems/factor-out-proto
Begin factoring out the protocol code
2024-01-22 14:49:07 -05:00
John Ericson
4ac31c89df Use nix::serv_proto::BasicConnection in build_remote.cc
- Use the type itself

  This lays the foundation for being able to dedup the protocol code.

- Use `BasicConnection::handshake`, replacing ours.

- Use `BasicConnection::queryValidPaths`

- Use `BasicConnection::putBuildDerivationRequest`
2024-01-22 14:20:39 -05:00
John Ericson
db7aa01b8d Update to newer Nix master
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/b7e016ab2464ad2e7e2e856ad0f173157135aae0' (2023-12-10)
  → 'github:NixOS/nix/74534829f23b668fb9b2f2a14ff6afa4d5e71d4a' (2024-01-22)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/e9f06adb793d1cca5384907b3b8a4071d5d7cb19' (2023-12-03)
  → 'github:NixOS/nixpkgs/a1982c92d8980a0114372973cbdfe0a307f1bdea' (2024-01-12)
• Removed input 'nix/lowdown-src'
2024-01-22 14:14:59 -05:00
John Ericson
89cfe26533 Merge remote-tracking branch 'upstream/master' into nix-next 2024-01-22 13:01:40 -05:00
John Ericson
f3a760ad9c Merge pull request #1324 from obsidiansystems/serve-proto-build-options-serializer
Use `ServeProto::Serialise<ServeProto::BuildOptions>`
2023-12-11 10:45:33 -05:00
John Ericson
8c10331ee8 Fix totalNarSize summation
I accidentally removed it in d0d3b0a298.
2023-12-10 14:05:26 -05:00
John Ericson
20f5a2120c Use ServeProto::Serialise<ServeProto::BuildOptions> 2023-12-10 13:24:17 -05:00
John Ericson
b56d2383c1 Do not attempt to speak a newer version of the protocol
Both sides need to agree on a version (with `std::min`) for anything to
work. Somehow... we've never done this.

With this commit, the next commit succeeds. Without this commit, the
next commit fails. This is because the next commit exposes serializers
which do different things for proto version 2.7, and we're currently
requesting 2.6.

Opened https://github.com/NixOS/nix/issues/9584 to track this issue
2023-12-10 13:24:17 -05:00
John Ericson
2bd67562b5 Merge pull request #1323 from obsidiansystems/serve-proto-build-options
Use `ServeProto::BuildOption`
2023-12-10 13:23:43 -05:00
John Ericson
69a5b00e60 Use ServeProto::BuildOption
More deduplication with Nix.
2023-12-10 13:01:00 -05:00
John Ericson
1d80b72ffb flake.lock: Update Nix master
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/c8458bd731eb1c74159bebe459ea00165e056b65' (2023-12-09)
  → 'github:NixOS/nix/b7e016ab2464ad2e7e2e856ad0f173157135aae0' (2023-12-10)
2023-12-10 13:00:47 -05:00
John Ericson
105fd18fee Merge pull request #1322 from NixOS/use-UnkeyedValidPathInfo-serializer
Use `ServeProto::Serialise<UnkeyedValidPathInfo>` for `QueryValidPaths`
2023-12-10 11:52:56 -05:00
John Ericson
f6f817926a std::move the info into the path info map 2023-12-09 12:12:00 -05:00
John Ericson
d0d3b0a298 Use ServeProto::Serialise<UnkeyedValidPathInfo> for QueryValidPaths
Companion to already-merged https://github.com/NixOS/nix/pull/9560
2023-12-09 12:08:04 -05:00
John Ericson
3f932a6731 build-remote: Use std::map<StorePath, UnkeyedValidPathInfo>
It is less denormalized
2023-12-09 11:59:09 -05:00
John Ericson
aaa0e128c1 flake.lock: Update Nix master
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/c3827ff6348a4d5199eaddf8dbc2ca2e2ef46ec5' (2023-12-07)
  → 'github:NixOS/nix/c8458bd731eb1c74159bebe459ea00165e056b65' (2023-12-09)
2023-12-09 11:59:02 -05:00
John Ericson
4515b5aa17 Merge pull request #1321 from NixOS/master
Merge `master` into `nix-next`
2023-12-09 11:53:58 -05:00
John Ericson
20c8263e3c Update to Nix master
The point of this branch is to always track Nix master, so we are
proactively ready to upgrade to the next Nix release when it is ready.

Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/50f8f1c8bc019a4c0fd098b9ac674b94cfc6af0d' (2023-11-27)
  → 'github:NixOS/nix/c3827ff6348a4d5199eaddf8dbc2ca2e2ef46ec5' (2023-12-07)
• Added input 'nix/libgit2':
    'github:libgit2/libgit2/45fd9ed7ae1a9b74b957ef4f337bc3c8b3df01b5' (2023-10-18)
2023-12-07 13:11:31 -05:00
Sandro
a81c6a3a80 Match URIs that don't end in .git
Co-authored-by: Charlotte <lotte@chir.rs>
2022-07-01 22:21:32 +02:00
Sandro Jäckel
750978a192 Add gitea push hook 2022-06-18 13:22:42 +02:00
40 changed files with 723 additions and 708 deletions


@@ -140,7 +140,7 @@ You can also interface with Hydra through a JSON API. The API is defined in [hyd
## Additional Resources
- [Hydra User's Guide](https://nixos.org/hydra/manual/)
- [Hydra on the NixOS Wiki](https://nixos.wiki/wiki/Hydra)
- [Hydra on the NixOS Wiki](https://wiki.nixos.org/wiki/Hydra)
- [hydra-cli](https://github.com/nlewo/hydra-cli)
- [Peter Simons - Hydra: Setting up your own build farm (NixOS)](https://www.youtube.com/watch?v=RXV0Y5Bn-QQ)


@@ -30,6 +30,8 @@ foreman:
$ foreman start
```
The Hydra interface will be available on port 63333, with an admin user named "alice" with password "foobar"
You can run just the Hydra web server in your source tree as follows:
```console


@@ -1,9 +1,12 @@
# Webhooks
Hydra can be notified by github's webhook to trigger a new evaluation when a
Hydra can be notified by github or gitea with webhooks to trigger a new evaluation when a
jobset has a github repo in its input.
To set up a github webhook go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab
click on `Add webhook`.
## GitHub
To set up a webhook for a GitHub repository go to `https://github.com/<yourhandle>/<yourrepo>/settings`
and in the `Webhooks` tab click on `Add webhook`.
- In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
- In `Content type` switch to `application/json`.
@@ -11,3 +14,14 @@ click on `Add webhook`.
- For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`.
Then add the hook with `Add webhook`.
## Gitea
To set up a webhook for a Gitea repository go to the settings of the repository in your Gitea instance
and in the `Webhooks` tab click on `Add Webhook` and choose `Gitea` in the drop down.
- In `Target URL` fill in `https://<your-hydra-domain>/api/push-gitea`.
- Keep HTTP method `POST`, POST Content Type `application/json` and Trigger On `Push Events`.
- Change the branch filter to match the git branch hydra builds.
Then add the hook with `Add webhook`.

flake.lock (generated)

@@ -16,74 +16,96 @@
"type": "github"
}
},
"lowdown-src": {
"flake": false,
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1633514407,
"narHash": "sha256-Dw32tiMjdK9t3ETl5fzGrutQTzh2rufgZV4A/BbxuD4=",
"owner": "kristapsdz",
"repo": "lowdown",
"rev": "d2c2b44ff6c27b936ec27358a2653caaef8f73b8",
"lastModified": 1712014858,
"narHash": "sha256-sB4SWl2lX95bExY2gMFG5HIzvva5AVMJd4Igm+GpZNw=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "9126214d0a59633752a136528f5f3b9aa8565b7d",
"type": "github"
},
"original": {
"owner": "kristapsdz",
"repo": "lowdown",
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"flake-utils": {
"locked": {
"lastModified": 1667395993,
"narHash": "sha256-nuEHfE/LcWyuSWnS8t12N1wc105Qtau+/OdUAjtQ0rA=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5aed5285a952e0b949eb3ba02c12fa4fcfef535f",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"libgit2": {
"flake": false,
"locked": {
"lastModified": 1697646580,
"narHash": "sha256-oX4Z3S9WtJlwvj0uH9HlYcWv+x1hqp8mhXl7HsLu2f0=",
"owner": "libgit2",
"repo": "libgit2",
"rev": "45fd9ed7ae1a9b74b957ef4f337bc3c8b3df01b5",
"type": "github"
},
"original": {
"owner": "libgit2",
"repo": "libgit2",
"type": "github"
}
},
"nix": {
"inputs": {
"flake-compat": "flake-compat",
"lowdown-src": "lowdown-src",
"flake-parts": "flake-parts",
"libgit2": "libgit2",
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-regression": "nixpkgs-regression"
"nixpkgs-regression": "nixpkgs-regression",
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1706208340,
"narHash": "sha256-wNyHUEIiKKVs6UXrUzhP7RSJQv0A8jckgcuylzftl8k=",
"lastModified": 1713874370,
"narHash": "sha256-gW1mO/CvsQQ5gvgiwzxsGhPFI/tx30NING+qgF5Do0s=",
"owner": "NixOS",
"repo": "nix",
"rev": "2c4bb93ba5a97e7078896ebc36385ce172960e4e",
"rev": "1c8150ac312b5f9ba1b3f6768ff43b09867e5883",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "2.19-maintenance",
"ref": "2.22-maintenance",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1701615100,
"narHash": "sha256-7VI84NGBvlCTduw2aHLVB62NvCiZUlALLqBe5v684Aw=",
"lastModified": 1712848736,
"narHash": "sha256-CzZwhqyLlebljv1zFS2KWVH/3byHND0LfaO1jKsGuVo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "e9f06adb793d1cca5384907b3b8a4071d5d7cb19",
"rev": "1d6a23f11e44d0fb64b3237569b87658a9eb5643",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-for-fileset": {
"locked": {
"lastModified": 1706098335,
"narHash": "sha256-r3dWjT8P9/Ah5m5ul4WqIWD8muj5F+/gbCdjiNVBKmU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a77ab169a83a4175169d78684ddd2e54486ac651",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"ref": "nixos-23.11-small",
"repo": "nixpkgs",
"type": "github"
}
@@ -104,11 +126,42 @@
"type": "github"
}
},
"pre-commit-hooks": {
"inputs": {
"flake-compat": [
"nix"
],
"flake-utils": "flake-utils",
"gitignore": [
"nix"
],
"nixpkgs": [
"nix",
"nixpkgs"
],
"nixpkgs-stable": [
"nix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1712897695,
"narHash": "sha256-nMirxrGteNAl9sWiOhoN5tIHyjBbVi5e2tgZUgZlK3Y=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "40e6053ecb65fcbf12863338a6dcefb3f55f1bf8",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"type": "github"
}
},
"root": {
"inputs": {
"nix": "nix",
"nixpkgs": "nixpkgs",
"nixpkgs-for-fileset": "nixpkgs-for-fileset"
"nixpkgs": "nixpkgs"
}
}
},

flake.nix

@@ -1,16 +1,11 @@
{
description = "A Nix-based continuous build system";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
inputs.nix.url = "github:NixOS/nix/2.19-maintenance";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11-small";
inputs.nix.url = "github:NixOS/nix/2.22-maintenance";
inputs.nix.inputs.nixpkgs.follows = "nixpkgs";
# TODO get rid of this once https://github.com/NixOS/nix/pull/9546 is
# mered and we upgrade or Nix, so the main `nixpkgs` input is at least
# 23.11 and has `lib.fileset`.
inputs.nixpkgs-for-fileset.url = "github:NixOS/nixpkgs/nixos-23.11";
outputs = { self, nixpkgs, nix, nixpkgs-for-fileset }:
outputs = { self, nixpkgs, nix }:
let
systems = [ "x86_64-linux" "aarch64-linux" ];
forEachSystem = nixpkgs.lib.genAttrs systems;
@@ -22,52 +17,13 @@
overlays = overlayList;
});
# NixOS configuration used for VM tests.
hydraServer =
{ config, pkgs, ... }:
{
imports = [ self.nixosModules.hydraTest ];
virtualisation.memorySize = 1024;
virtualisation.writableStore = true;
environment.systemPackages = [ pkgs.perlPackages.LWP pkgs.perlPackages.JSON ];
nix = {
# Without this nix tries to fetch packages from the default
# cache.nixos.org which is not reachable from this sandboxed NixOS test.
binaryCaches = [ ];
};
};
in
rec {
# A Nixpkgs overlay that provides a 'hydra' package.
overlays.default = final: prev: {
# Add LDAP dependencies that aren't currently found within nixpkgs.
perlPackages = prev.perlPackages // {
PrometheusTiny = final.perlPackages.buildPerlPackage {
pname = "Prometheus-Tiny";
version = "0.007";
src = final.fetchurl {
url = "mirror://cpan/authors/id/R/RO/ROBN/Prometheus-Tiny-0.007.tar.gz";
sha256 = "0ef8b226a2025cdde4df80129dd319aa29e884e653c17dc96f4823d985c028ec";
};
buildInputs = with final.perlPackages; [ HTTPMessage Plack TestException ];
meta = {
homepage = "https://github.com/robn/Prometheus-Tiny";
description = "A tiny Prometheus client";
license = with final.lib.licenses; [ artistic1 gpl1Plus ];
};
};
};
hydra = final.callPackage ./package.nix {
inherit (nixpkgs-for-fileset.lib) fileset;
inherit (nixpkgs.lib) fileset;
rawSrc = self;
};
};
@@ -93,286 +49,9 @@
echo "doc manual $out/share/doc/hydra" >> $out/nix-support/hydra-build-products
'');
tests.install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
''
machine.wait_for_job("hydra-init")
machine.wait_for_job("hydra-server")
machine.wait_for_job("hydra-evaluator")
machine.wait_for_job("hydra-queue-runner")
machine.wait_for_open_port(3000)
machine.succeed("curl --fail http://localhost:3000/")
'';
});
tests.notifications = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-notifications";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<influxdb>
url = http://127.0.0.1:8086
db = hydra
</influxdb>
'';
services.influxdb.enable = true;
};
testScript = ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
machine.succeed(
"""
su - hydra -c "hydra-create-user root --email-address 'alice@example.org' --password foobar --role admin"
mkdir /run/jobset
chmod 755 /run/jobset
cp ${./t/jobs/api-test.nix} /run/jobset/default.nix
chmod 644 /run/jobset/default.nix
chown -R hydra /run/jobset
"""
)
# Wait until InfluxDB can receive web requests
machine.wait_for_job("influxdb")
machine.wait_for_open_port(8086)
# Create an InfluxDB database where hydra will write to
machine.succeed(
"curl -XPOST 'http://127.0.0.1:8086/query' "
+ "--data-urlencode 'q=CREATE DATABASE hydra'"
)
# Wait until hydra-server can receive HTTP requests
machine.wait_for_job("hydra-server")
machine.wait_for_open_port(3000)
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${pkgs.hydra.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
# the InfluxDBNotification plugin uploaded its notification to InfluxDB
machine.wait_until_succeeds(
"curl -s -H 'Accept: application/csv' "
+ "-G 'http://127.0.0.1:8086/query?db=hydra' "
+ "--data-urlencode 'q=SELECT * FROM hydra_build_status' | grep success"
)
'';
});
tests.gitea = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<gitea_authorization>
root=d7f16a3412e01a43a414535b16007c6931d3a9c7
</gitea_authorization>
'';
nix = {
distributedBuilds = true;
buildMachines = [{
hostName = "localhost";
systems = [ system ];
}];
binaryCaches = [ ];
};
services.gitea = {
enable = true;
database.type = "postgres";
disableRegistration = true;
httpPort = 3001;
};
services.openssh.enable = true;
environment.systemPackages = with pkgs; [ gitea git jq gawk ];
networking.firewall.allowedTCPPorts = [ 3000 ];
};
skipLint = true;
testScript =
let
scripts.mktoken = pkgs.writeText "token.sql" ''
INSERT INTO access_token (id, uid, name, created_unix, updated_unix, token_hash, token_salt, token_last_eight) VALUES (1, 1, 'hydra', 1617107360, 1617107360, 'a930f319ca362d7b49a4040ac0af74521c3a3c3303a86f327b01994430672d33b6ec53e4ea774253208686c712495e12a486', 'XRjWE9YW0g', '31d3a9c7');
'';
scripts.git-setup = pkgs.writeShellScript "setup.sh" ''
set -x
mkdir -p /tmp/repo $HOME/.ssh
cat ${snakeoilKeypair.privkey} > $HOME/.ssh/privk
chmod 0400 $HOME/.ssh/privk
git -C /tmp/repo init
cp ${smallDrv} /tmp/repo/jobset.nix
git -C /tmp/repo add .
git config --global user.email test@localhost
git config --global user.name test
git -C /tmp/repo commit -m 'Initial import'
git -C /tmp/repo remote add origin gitea@machine:root/repo
GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no' \
git -C /tmp/repo push origin master
git -C /tmp/repo log >&2
'';
scripts.hydra-setup = pkgs.writeShellScript "hydra.sh" ''
set -x
su -l hydra -c "hydra-create-user root --email-address \
'alice@example.org' --password foobar --role admin"
URL=http://localhost:3000
USERNAME="root"
PASSWORD="foobar"
PROJECT_NAME="trivial"
JOBSET_NAME="trivial"
mycurl() {
curl --referer $URL -H "Accept: application/json" \
-H "Content-Type: application/json" $@
}
cat >data.json <<EOF
{ "username": "$USERNAME", "password": "$PASSWORD" }
EOF
mycurl -X POST -d '@data.json' $URL/login -c hydra-cookie.txt
cat >data.json <<EOF
{
"displayname":"Trivial",
"enabled":"1",
"visible":"1"
}
EOF
mycurl --silent -X PUT $URL/project/$PROJECT_NAME \
-d @data.json -b hydra-cookie.txt
cat >data.json <<EOF
{
"description": "Trivial",
"checkinterval": "60",
"enabled": "1",
"visible": "1",
"keepnr": "1",
"enableemail": true,
"emailoverride": "hydra@localhost",
"type": 0,
"nixexprinput": "git",
"nixexprpath": "jobset.nix",
"inputs": {
"git": {"value": "http://localhost:3001/root/repo.git", "type": "git"},
"gitea_repo_name": {"value": "repo", "type": "string"},
"gitea_repo_owner": {"value": "root", "type": "string"},
"gitea_status_repo": {"value": "git", "type": "string"},
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"}
}
}
EOF
mycurl --silent -X PUT $URL/jobset/$PROJECT_NAME/$JOBSET_NAME \
-d @data.json -b hydra-cookie.txt
'';
api_token = "d7f16a3412e01a43a414535b16007c6931d3a9c7";
snakeoilKeypair = {
privkey = pkgs.writeText "privkey.snakeoil" ''
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIHQf/khLvYrQ8IOika5yqtWvI0oquHlpRLTZiJy5dRJmoAoGCCqGSM49
AwEHoUQDQgAEKF0DYGbBwbj06tA3fd/+yP44cvmwmHBWXZCKbS+RQlAKvLXMWkpN
r1lwMyJZoSGgBHoUahoYjTh9/sJL7XLJtA==
-----END EC PRIVATE KEY-----
'';
pubkey = pkgs.lib.concatStrings [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHA"
"yNTYAAABBBChdA2BmwcG49OrQN33f/sj+OHL5sJhwVl2Qim0vkUJQCry1zFpKTa"
"9ZcDMiWaEhoAR6FGoaGI04ff7CS+1yybQ= sakeoil"
];
};
smallDrv = pkgs.writeText "jobset.nix" ''
{ trivial = builtins.derivation {
name = "trivial";
system = "${system}";
builder = "/bin/sh";
allowSubstitutes = false;
preferLocalBuild = true;
args = ["-c" "echo success > $out; exit 0"];
};
}
'';
in
''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.wait_for_open_port(3000)
machine.wait_for_open_port(3001)
machine.succeed(
"su -l gitea -c 'GITEA_WORK_DIR=/var/lib/gitea gitea admin user create "
+ "--username root --password root --email test@localhost'"
)
machine.succeed("su -l postgres -c 'psql gitea < ${scripts.mktoken}'")
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/repos "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"auto_init":false, "description":"string", "license":"mit", "name":"repo", "private":false}\'''
)
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/keys "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"key":"${snakeoilKeypair.pubkey}","read_only":true,"title":"SSH"}\'''
)
machine.succeed(
"${scripts.git-setup}"
)
machine.succeed(
"${scripts.hydra-setup}"
)
machine.wait_until_succeeds(
'curl -Lf -s http://localhost:3000/build/1 -H "Accept: application/json" '
+ '| jq .buildstatus | xargs test 0 -eq'
)
data = machine.succeed(
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show | head -n1 | awk "{print \\$2}")" '
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
)
response = json.loads(data)
assert len(response) == 2, "Expected exactly two status updates for latest commit!"
assert response[0]['status'] == "success", "Expected latest status to be success!"
assert response[1]['status'] == "pending", "Expected first status to be pending!"
machine.shutdown()
'';
});
tests.validate-openapi = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
pkgs.runCommand "validate-openapi"
{ buildInputs = [ pkgs.openapi-generator-cli ]; }
''
openapi-generator-cli validate -i ${./hydra-api.yaml}
touch $out
'');
tests = import ./nixos-tests.nix {
inherit forEachSystem nixpkgs pkgsBySystem nixosModules;
};
container = nixosConfigurations.container.config.system.build.toplevel;
};
@@ -396,6 +75,8 @@
system = "x86_64-linux";
modules =
[
self.nixosModules.hydra
self.nixosModules.overlayNixpkgsForThisHydra
self.nixosModules.hydraTest
self.nixosModules.hydraProxy
{


@@ -1,14 +1,14 @@
{ overlays }:
rec {
hydra = {
imports = [ ./hydra.nix ];
{
hydra = import ./hydra.nix;
overlayNixpkgsForThisHydra = { pkgs, ... }: {
nixpkgs = { inherit overlays; };
services.hydra.package = pkgs.hydra;
};
hydraTest = { pkgs, ... }: {
imports = [ hydra ];
services.hydra-dev.enable = true;
services.hydra-dev.hydraURL = "http://hydra.example.org";
services.hydra-dev.notificationSender = "admin@hydra.example.org";
@@ -16,7 +16,7 @@ rec {
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_11;
services.postgresql.package = pkgs.postgresql_12;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"


@@ -68,7 +68,7 @@ in
package = mkOption {
type = types.path;
default = pkgs.hydra;
default = pkgs.hydra_unstable;
defaultText = literalExpression "pkgs.hydra";
description = "The Hydra package.";
};
@@ -233,7 +233,7 @@ in
gc-keep-outputs = true;
gc-keep-derivations = true;
};
services.hydra-dev.extraConfig =
''
using_frontend_proxy = 1

nixos-tests.nix (new file)

@@ -0,0 +1,309 @@
{ forEachSystem, nixpkgs, pkgsBySystem, nixosModules }:
let
# NixOS configuration used for VM tests.
hydraServer =
{ config, pkgs, ... }:
{
imports = [
nixosModules.hydra
nixosModules.overlayNixpkgsForThisHydra
nixosModules.hydraTest
];
virtualisation.memorySize = 1024;
virtualisation.writableStore = true;
environment.systemPackages = [ pkgs.perlPackages.LWP pkgs.perlPackages.JSON ];
nix = {
# Without this nix tries to fetch packages from the default
# cache.nixos.org which is not reachable from this sandboxed NixOS test.
settings.substituters = [ ];
};
};
in
{
install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
''
machine.wait_for_job("hydra-init")
machine.wait_for_job("hydra-server")
machine.wait_for_job("hydra-evaluator")
machine.wait_for_job("hydra-queue-runner")
machine.wait_for_open_port(3000)
machine.succeed("curl --fail http://localhost:3000/")
'';
});
notifications = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-notifications";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<influxdb>
url = http://127.0.0.1:8086
db = hydra
</influxdb>
'';
services.influxdb.enable = true;
};
testScript = ''
machine.wait_for_job("hydra-init")
# Create an admin account and some other state.
machine.succeed(
"""
su - hydra -c "hydra-create-user root --email-address 'alice@example.org' --password foobar --role admin"
mkdir /run/jobset
chmod 755 /run/jobset
cp ${./t/jobs/api-test.nix} /run/jobset/default.nix
chmod 644 /run/jobset/default.nix
chown -R hydra /run/jobset
"""
)
# Wait until InfluxDB can receive web requests
machine.wait_for_job("influxdb")
machine.wait_for_open_port(8086)
# Create an InfluxDB database where hydra will write to
machine.succeed(
"curl -XPOST 'http://127.0.0.1:8086/query' "
+ "--data-urlencode 'q=CREATE DATABASE hydra'"
)
# Wait until hydra-server can receive HTTP requests
machine.wait_for_job("hydra-server")
machine.wait_for_open_port(3000)
# Setup the project and jobset
machine.succeed(
"su - hydra -c 'perl -I ${pkgs.hydra.perlDeps}/lib/perl5/site_perl ${./t/setup-notifications-jobset.pl}' >&2"
)
# Wait until hydra has build the job and
# the InfluxDBNotification plugin uploaded its notification to InfluxDB
machine.wait_until_succeeds(
"curl -s -H 'Accept: application/csv' "
+ "-G 'http://127.0.0.1:8086/query?db=hydra' "
+ "--data-urlencode 'q=SELECT * FROM hydra_build_status' | grep success"
)
'';
});
gitea = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
<gitea_authorization>
root=d7f16a3412e01a43a414535b16007c6931d3a9c7
</gitea_authorization>
'';
nixpkgs.config.permittedInsecurePackages = [ "gitea-1.19.4" ];
nix = {
settings.substituters = [ ];
};
services.gitea = {
enable = true;
database.type = "postgres";
settings = {
service.DISABLE_REGISTRATION = true;
server.HTTP_PORT = 3001;
};
};
services.openssh.enable = true;
environment.systemPackages = with pkgs; [ gitea git jq gawk ];
networking.firewall.allowedTCPPorts = [ 3000 ];
};
skipLint = true;
testScript =
let
scripts.mktoken = pkgs.writeText "token.sql" ''
INSERT INTO access_token (id, uid, name, created_unix, updated_unix, token_hash, token_salt, token_last_eight, scope) VALUES (1, 1, 'hydra', 1617107360, 1617107360, 'a930f319ca362d7b49a4040ac0af74521c3a3c3303a86f327b01994430672d33b6ec53e4ea774253208686c712495e12a486', 'XRjWE9YW0g', '31d3a9c7', 'all');
'';
scripts.git-setup = pkgs.writeShellScript "setup.sh" ''
set -x
mkdir -p /tmp/repo $HOME/.ssh
cat ${snakeoilKeypair.privkey} > $HOME/.ssh/privk
chmod 0400 $HOME/.ssh/privk
git -C /tmp/repo init
cp ${smallDrv} /tmp/repo/jobset.nix
git -C /tmp/repo add .
git config --global user.email test@localhost
git config --global user.name test
git -C /tmp/repo commit -m 'Initial import'
git -C /tmp/repo remote add origin gitea@machine:root/repo
GIT_SSH_COMMAND='ssh -i $HOME/.ssh/privk -o StrictHostKeyChecking=no' \
git -C /tmp/repo push origin master
git -C /tmp/repo log >&2
'';
scripts.hydra-setup = pkgs.writeShellScript "hydra.sh" ''
set -x
su -l hydra -c "hydra-create-user root --email-address \
'alice@example.org' --password foobar --role admin"
URL=http://localhost:3000
USERNAME="root"
PASSWORD="foobar"
PROJECT_NAME="trivial"
JOBSET_NAME="trivial"
mycurl() {
curl --referer $URL -H "Accept: application/json" \
-H "Content-Type: application/json" $@
}
cat >data.json <<EOF
{ "username": "$USERNAME", "password": "$PASSWORD" }
EOF
mycurl -X POST -d '@data.json' $URL/login -c hydra-cookie.txt
cat >data.json <<EOF
{
"displayname":"Trivial",
"enabled":"1",
"visible":"1"
}
EOF
mycurl --silent -X PUT $URL/project/$PROJECT_NAME \
-d @data.json -b hydra-cookie.txt
cat >data.json <<EOF
{
"description": "Trivial",
"checkinterval": "60",
"enabled": "1",
"visible": "1",
"keepnr": "1",
"enableemail": true,
"emailoverride": "hydra@localhost",
"type": 0,
"nixexprinput": "git",
"nixexprpath": "jobset.nix",
"inputs": {
"git": {"value": "http://localhost:3001/root/repo.git", "type": "git"},
"gitea_repo_name": {"value": "repo", "type": "string"},
"gitea_repo_owner": {"value": "root", "type": "string"},
"gitea_status_repo": {"value": "git", "type": "string"},
"gitea_http_url": {"value": "http://localhost:3001", "type": "string"}
}
}
EOF
mycurl --silent -X PUT $URL/jobset/$PROJECT_NAME/$JOBSET_NAME \
-d @data.json -b hydra-cookie.txt
'';
api_token = "d7f16a3412e01a43a414535b16007c6931d3a9c7";
snakeoilKeypair = {
privkey = pkgs.writeText "privkey.snakeoil" ''
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEIHQf/khLvYrQ8IOika5yqtWvI0oquHlpRLTZiJy5dRJmoAoGCCqGSM49
AwEHoUQDQgAEKF0DYGbBwbj06tA3fd/+yP44cvmwmHBWXZCKbS+RQlAKvLXMWkpN
r1lwMyJZoSGgBHoUahoYjTh9/sJL7XLJtA==
-----END EC PRIVATE KEY-----
'';
pubkey = pkgs.lib.concatStrings [
"ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHA"
"yNTYAAABBBChdA2BmwcG49OrQN33f/sj+OHL5sJhwVl2Qim0vkUJQCry1zFpKTa"
"9ZcDMiWaEhoAR6FGoaGI04ff7CS+1yybQ= sakeoil"
];
};
smallDrv = pkgs.writeText "jobset.nix" ''
{ trivial = builtins.derivation {
name = "trivial";
system = "${system}";
builder = "/bin/sh";
allowSubstitutes = false;
preferLocalBuild = true;
args = ["-c" "echo success > $out; exit 0"];
};
}
'';
in
''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.wait_for_open_port(3000)
machine.wait_for_open_port(3001)
machine.succeed(
"su -l gitea -c 'GITEA_WORK_DIR=/var/lib/gitea gitea admin user create "
+ "--username root --password root --email test@localhost'"
)
machine.succeed("su -l postgres -c 'psql gitea < ${scripts.mktoken}'")
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/repos "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"auto_init":false, "description":"string", "license":"mit", "name":"repo", "private":false}\'''
)
machine.succeed(
"curl --fail -X POST http://localhost:3001/api/v1/user/keys "
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
+ ' -d \'{"key":"${snakeoilKeypair.pubkey}","read_only":true,"title":"SSH"}\'''
)
machine.succeed(
"${scripts.git-setup}"
)
machine.succeed(
"${scripts.hydra-setup}"
)
machine.wait_until_succeeds(
'curl -Lf -s http://localhost:3000/build/1 -H "Accept: application/json" '
+ '| jq .buildstatus | xargs test 0 -eq'
)
data = machine.succeed(
'curl -Lf -s "http://localhost:3001/api/v1/repos/root/repo/statuses/$(cd /tmp/repo && git show | head -n1 | awk "{print \\$2}")" '
+ "-H 'Accept: application/json' -H 'Content-Type: application/json' "
+ f"-H 'Authorization: token ${api_token}'"
)
response = json.loads(data)
assert len(response) == 2, "Expected exactly two status updates for latest commit (queued, finished)!"
assert response[0]['status'] == "success", "Expected finished status to be success!"
assert response[1]['status'] == "pending", "Expected queued status to be pending!"
machine.shutdown()
'';
});
validate-openapi = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
pkgs.runCommand "validate-openapi"
{ buildInputs = [ pkgs.openapi-generator-cli ]; }
''
openapi-generator-cli validate -i ${./hydra-api.yaml}
touch $out
'');
}


@@ -89,7 +89,7 @@ struct MyArgs : MixEvalArgs, MixCommonArgs, RootArgs
static MyArgs myArgs;
static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std::string & name, const std::string & subAttribute)
static std::string queryMetaStrings(EvalState & state, PackageInfo & drv, const std::string & name, const std::string & subAttribute)
{
Strings res;
std::function<void(Value & v)> rec;
@@ -102,8 +102,8 @@ static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std:
for (unsigned int n = 0; n < v.listSize(); ++n)
rec(*v.listElems()[n]);
else if (v.type() == nAttrs) {
auto a = v.attrs->find(state.symbols.create(subAttribute));
if (a != v.attrs->end())
auto a = v.attrs()->find(state.symbols.create(subAttribute));
if (a != v.attrs()->end())
res.push_back(std::string(state.forceString(*a->value, a->pos, "while evaluating meta attributes")));
}
};
@@ -138,12 +138,12 @@ static void worker(
callFlake(state, lockedFlake, *vFlake);
auto vOutputs = vFlake->attrs->get(state.symbols.create("outputs"))->value;
auto vOutputs = vFlake->attrs()->get(state.symbols.create("outputs"))->value;
state.forceValue(*vOutputs, noPos);
auto aHydraJobs = vOutputs->attrs->get(state.symbols.create("hydraJobs"));
auto aHydraJobs = vOutputs->attrs()->get(state.symbols.create("hydraJobs"));
if (!aHydraJobs)
aHydraJobs = vOutputs->attrs->get(state.symbols.create("checks"));
aHydraJobs = vOutputs->attrs()->get(state.symbols.create("checks"));
if (!aHydraJobs)
throw Error("flake '%s' does not provide any Hydra jobs or checks", flakeRef);
@@ -181,11 +181,11 @@ static void worker(
// CA derivations do not have static output paths, so we
// have to defensively not query output paths in case we
// encounter one.
DrvInfo::Outputs outputs = drv->queryOutputs(
PackageInfo::Outputs outputs = drv->queryOutputs(
!experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
if (drv->querySystem() == "unknown")
throw EvalError("derivation must have a 'system' attribute");
state.error<EvalError>("derivation must have a 'system' attribute").debugThrow();
auto drvPath = state.store->printStorePath(drv->requireDrvPath());
@@ -204,11 +204,11 @@ static void worker(
job["isChannel"] = drv->queryMetaBool("isHydraChannel", false);
/* If this is an aggregate, then get its constituents. */
auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
auto a = v->attrs()->get(state.symbols.create("_hydraAggregate"));
if (a && state.forceBool(*a->value, a->pos, "while evaluating the `_hydraAggregate` attribute")) {
auto a = v->attrs->get(state.symbols.create("constituents"));
auto a = v->attrs()->get(state.symbols.create("constituents"));
if (!a)
throw EvalError("derivation must have a constituents attribute");
state.error<EvalError>("derivation must have a constituents attribute").debugThrow();
NixStringContext context;
state.coerceToString(a->pos, *a->value, context, "while evaluating the `constituents` attribute", true, false);
@@ -260,7 +260,7 @@ static void worker(
else if (v->type() == nAttrs) {
auto attrs = nlohmann::json::array();
StringSet ss;
for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
for (auto & i : v->attrs()->lexicographicOrder(state.symbols)) {
std::string name(state.symbols[i->name]);
if (name.find(' ') != std::string::npos) {
printError("skipping job with illegal name '%s'", name);
@@ -274,7 +274,7 @@ static void worker(
else if (v->type() == nNull)
;
else throw TypeError("attribute '%s' is %s, which is not supported", attrPath, showType(*v));
else state.error<TypeError>("attribute '%s' is %s, which is not supported", attrPath, showType(*v)).debugThrow();
} catch (EvalError & e) {
auto msg = e.msg();
@@ -368,7 +368,7 @@ int main(int argc, char * * argv)
]()
{
try {
EvalState state(myArgs.searchPath, openStore());
EvalState state(myArgs.lookupPath, openStore());
Bindings & autoArgs = *myArgs.getAutoArgs(state);
worker(state, autoArgs, *to, *from);
} catch (Error & e) {


@@ -38,7 +38,7 @@ class JobsetId {
friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs);
std::string display() const {
return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id);
return boost::str(boost::format("%1%:%2% (jobset#%3%)") % project % jobset % id);
}
};
bool operator==(const JobsetId & lhs, const JobsetId & rhs)


@@ -8,6 +8,7 @@
#include "build-result.hh"
#include "path.hh"
#include "serve-protocol.hh"
#include "serve-protocol-impl.hh"
#include "state.hh"
#include "current-process.hh"
#include "processes.hh"
@@ -20,12 +21,6 @@
using namespace nix;
static void append(Strings & dst, const Strings & src)
{
dst.insert(dst.end(), src.begin(), src.end());
}
namespace nix::build_remote {
static Strings extraStoreArgs(std::string & machine)
@@ -48,58 +43,31 @@ static Strings extraStoreArgs(std::string & machine)
return result;
}
static void openConnection(::Machine::ptr machine, Path tmpDir, int stderrFD, SSHMaster::Connection & child)
static std::unique_ptr<SSHMaster::Connection> openConnection(
::Machine::ptr machine, SSHMaster & master)
{
std::string pgmName;
Pipe to, from;
to.create();
from.create();
Strings argv;
Strings command = {"nix-store", "--serve", "--write"};
if (machine->isLocalhost()) {
pgmName = "nix-store";
argv = {"nix-store", "--builders", "", "--serve", "--write"};
command.push_back("--builders");
command.push_back("");
} else {
pgmName = "ssh";
auto sshName = machine->sshName;
Strings extraArgs = extraStoreArgs(sshName);
argv = {"ssh", sshName};
if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
if (machine->sshPublicHostKey != "") {
Path fileName = tmpDir + "/host-key";
auto p = machine->sshName.find("@");
std::string host = p != std::string::npos ? std::string(machine->sshName, p + 1) : machine->sshName;
writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
append(argv, {"-oUserKnownHostsFile=" + fileName});
}
append(argv,
{ "-x", "-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
, "--", "nix-store", "--serve", "--write" });
append(argv, extraArgs);
command.splice(command.end(), extraStoreArgs(machine->sshName));
}
child.sshPid = startProcess([&]() {
restoreProcessContext();
if (dup2(to.readSide.get(), STDIN_FILENO) == -1)
throw SysError("cannot dup input pipe to stdin");
if (dup2(from.writeSide.get(), STDOUT_FILENO) == -1)
throw SysError("cannot dup output pipe to stdout");
if (dup2(stderrFD, STDERR_FILENO) == -1)
throw SysError("cannot dup stderr");
execvp(argv.front().c_str(), (char * *) stringsToCharPtrs(argv).data()); // FIXME: remove cast
throw SysError("cannot start %s", pgmName);
auto ret = master.startCommand(std::move(command), {
"-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
});
to.readSide = -1;
from.writeSide = -1;
// XXX: determine the actual max value we can use from /proc.
child.in = to.writeSide.release();
child.out = from.readSide.release();
// FIXME: Should this be upstreamed into `startCommand` in Nix?
int pipesize = 1024 * 1024;
fcntl(ret->in.get(), F_SETPIPE_SZ, &pipesize);
fcntl(ret->out.get(), F_SETPIPE_SZ, &pipesize);
return ret;
}
@@ -117,13 +85,10 @@ static void copyClosureTo(
garbage-collect paths that are already there. Optionally, ask
the remote host to substitute missing paths. */
// FIXME: substitute output pollutes our build log
conn.to << ServeProto::Command::QueryValidPaths << 1 << useSubstitutes;
ServeProto::write(destStore, conn, closure);
conn.to.flush();
/* Get back the set of paths that are already valid on the remote
host. */
auto present = ServeProto::Serialise<StorePathSet>::read(destStore, conn);
auto present = conn.queryValidPaths(
destStore, true, closure, useSubstitutes);
if (present.size() == closure.size()) return;
@@ -148,7 +113,7 @@ static void copyClosureTo(
// FIXME: use Store::topoSortPaths().
static StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths)
static StorePaths reverseTopoSortPaths(const std::map<StorePath, UnkeyedValidPathInfo> & paths)
{
StorePaths sorted;
StorePathSet visited;
@@ -189,28 +154,6 @@ static std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, cons
return {std::move(logFile), std::move(logFD)};
}
/**
* @param conn is not fully initialized; it is this functions job to set
* the `remoteVersion` field after the handshake is completed.
* Therefore, no `ServeProto::Serialize` functions can be used until
* that field is set.
*/
static void handshake(::Machine::Connection & conn, unsigned int repeats)
{
conn.to << SERVE_MAGIC_1 << 0x206;
conn.to.flush();
unsigned int magic = readInt(conn.from);
if (magic != SERVE_MAGIC_2)
throw Error("protocol mismatch with nix-store --serve on %1%", conn.machine->sshName);
conn.remoteVersion = readInt(conn.from);
// Now `conn` is initialized.
if (GET_PROTOCOL_MAJOR(conn.remoteVersion) != 0x200)
throw Error("unsupported nix-store --serve protocol version on %1%", conn.machine->sshName);
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 3 && repeats > 0)
throw Error("machine %1% does not support repeating a build; please upgrade it to Nix 1.12", conn.machine->sshName);
}
static BasicDerivation sendInputs(
State & state,
Step & step,
@@ -281,21 +224,11 @@ static BuildResult performBuild(
Store & localStore,
StorePath drvPath,
const BasicDerivation & drv,
const State::BuildOptions & options,
const ServeProto::BuildOptions & options,
counter & nrStepsBuilding
)
{
conn.to << ServeProto::Command::BuildDerivation << localStore.printStorePath(drvPath);
writeDerivation(conn.to, localStore, drv);
conn.to << options.maxSilentTime << options.buildTimeout;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 2)
conn.to << options.maxLogSize;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 3) {
conn.to
<< options.repeats // == build-repeat
<< options.enforceDeterminism;
}
conn.to.flush();
conn.putBuildDerivationRequest(localStore, drvPath, drv, options);
BuildResult result;
@@ -345,7 +278,7 @@ static BuildResult performBuild(
return result;
}
static std::map<StorePath, ValidPathInfo> queryPathInfos(
static std::map<StorePath, UnkeyedValidPathInfo> queryPathInfos(
::Machine::Connection & conn,
Store & localStore,
StorePathSet & outputs,
@@ -354,30 +287,18 @@ static std::map<StorePath, ValidPathInfo> queryPathInfos(
{
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
std::map<StorePath, UnkeyedValidPathInfo> infos;
conn.to << ServeProto::Command::QueryPathInfos;
ServeProto::write(localStore, conn, outputs);
conn.to.flush();
while (true) {
auto storePathS = readString(conn.from);
if (storePathS == "") break;
auto deriver = readString(conn.from); // deriver
auto references = ServeProto::Serialise<StorePathSet>::read(localStore, conn);
readLongLong(conn.from); // download size
auto narSize = readLongLong(conn.from);
auto narHash = Hash::parseAny(readString(conn.from), htSHA256);
auto ca = ContentAddress::parseOpt(readString(conn.from));
readStrings<StringSet>(conn.from); // sigs
ValidPathInfo info(localStore.parseStorePath(storePathS), narHash);
assert(outputs.count(info.path));
info.references = references;
info.narSize = narSize;
auto storePath = localStore.parseStorePath(storePathS);
auto info = ServeProto::Serialise<UnkeyedValidPathInfo>::read(localStore, conn);
totalNarSize += info.narSize;
info.narHash = narHash;
info.ca = ca;
if (deriver != "")
info.deriver = localStore.parseStorePath(deriver);
infos.insert_or_assign(info.path, info);
infos.insert_or_assign(std::move(storePath), std::move(info));
}
return infos;
@@ -418,14 +339,16 @@ static void copyPathsFromRemote(
NarMemberDatas & narMembers,
Store & localStore,
Store & destStore,
const std::map<StorePath, ValidPathInfo> & infos
const std::map<StorePath, UnkeyedValidPathInfo> & infos
)
{
auto pathsSorted = reverseTopoSortPaths(infos);
for (auto & path : pathsSorted) {
auto & info = infos.find(path)->second;
copyPathFromRemote(conn, narMembers, localStore, destStore, info);
copyPathFromRemote(
conn, narMembers, localStore, destStore,
ValidPathInfo { path, info });
}
}
@@ -492,7 +415,7 @@ void RemoteResult::updateWithBuildResult(const nix::BuildResult & buildResult)
void State::buildRemote(ref<Store> destStore,
::Machine::ptr machine, Step::ptr step,
const BuildOptions & buildOptions,
const ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers)
@@ -503,21 +426,26 @@ void State::buildRemote(ref<Store> destStore,
AutoDelete logFileDel(logFile, false);
result.logFile = logFile;
nix::Path tmpDir = createTempDir();
AutoDelete tmpDirDel(tmpDir, true);
try {
updateStep(ssConnecting);
SSHMaster master {
machine->sshName,
machine->sshKey,
machine->sshPublicHostKey,
false, // no SSH master yet
false, // no compression yet
logFD.get(),
};
// FIXME: rewrite to use Store.
SSHMaster::Connection child;
build_remote::openConnection(machine, tmpDir, logFD.get(), child);
auto child = build_remote::openConnection(machine, master);
{
auto activeStepState(activeStep->state_.lock());
if (activeStepState->cancelled) throw Error("step cancelled");
activeStepState->pid = child.sshPid;
activeStepState->pid = child->sshPid;
}
Finally clearPid([&]() {
@@ -533,9 +461,13 @@ void State::buildRemote(ref<Store> destStore,
});
::Machine::Connection conn {
.from = child.out.get(),
.to = child.in.get(),
.machine = machine,
{
.to = child->in.get(),
.from = child->out.get(),
/* Handshake. */
.remoteVersion = 0xdadbeef, // FIXME avoid dummy initialize
},
/*.machine =*/ machine,
};
Finally updateStats([&]() {
@@ -543,14 +475,26 @@ void State::buildRemote(ref<Store> destStore,
bytesSent += conn.to.written;
});
constexpr ServeProto::Version our_version = 0x206;
try {
build_remote::handshake(conn, buildOptions.repeats);
conn.remoteVersion = decltype(conn)::handshake(
conn.to,
conn.from,
our_version,
machine->sshName);
} catch (EndOfFile & e) {
child.sshPid.wait();
child->sshPid.wait();
std::string s = chomp(readFile(result.logFile));
throw Error("cannot connect to %1%: %2%", machine->sshName, s);
}
// Do not attempt to speak a newer version of the protocol.
//
// Per https://github.com/NixOS/nix/issues/9584 should be handled as
// part of `handshake` in upstream nix.
conn.remoteVersion = std::min(conn.remoteVersion, our_version);
{
auto info(machine->state->connectInfo.lock());
info->consecutiveFailures = 0;
@@ -653,8 +597,8 @@ void State::buildRemote(ref<Store> destStore,
}
/* Shut down the connection. */
child.in = -1;
child.sshPid.wait();
child->in = -1;
child->sshPid.wait();
} catch (Error & e) {
/* Disable this machine until a certain period of time has

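The hunks above replace the hand-rolled handshake with nix::ServeProto::BasicClientConnection::handshake and then clamp the negotiated version to the newest serve protocol the queue runner itself implements (0x206). A minimal, self-contained sketch of that clamping idea, using stand-in types rather than the real nix API (the real exchange happens over the SSH pipe inside handshake()):

    #include <algorithm>
    #include <cstdint>
    #include <iostream>

    // Stand-in types: the real code exchanges versions over the SSH pipe via
    // BasicClientConnection::handshake(); here the "remote" simply announces one.
    using Version = uint16_t;

    struct FakeRemote {
        Version announced;
        Version handshake(Version /* clientVersion */) const { return announced; }
    };

    int main()
    {
        constexpr Version ourVersion = 0x206;  // same constant as in the hunk above
        FakeRemote remote { 0x300 };           // remote claims a newer protocol

        Version negotiated = remote.handshake(ourVersion);
        // Never speak a newer version than we implement ourselves
        // (cf. https://github.com/NixOS/nix/issues/9584).
        negotiated = std::min<Version>(negotiated, ourVersion);

        std::cout << std::hex << "speaking serve protocol 0x" << negotiated << "\n";
    }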
View File

@@ -98,10 +98,13 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
it). */
BuildID buildId;
std::optional<StorePath> buildDrvPath;
BuildOptions buildOptions;
buildOptions.repeats = step->isDeterministic ? 1 : 0;
buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic;
// Other fields set below
nix::ServeProto::BuildOptions buildOptions {
.maxLogSize = maxLogSize,
.nrRepeats = step->isDeterministic ? 1u : 0u,
.enforceDeterminism = step->isDeterministic,
.keepFailed = false,
};
auto conn(dbPool.get());
@@ -136,7 +139,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto i = jobsetRepeats.find(std::make_pair(build2->projectName, build2->jobsetName));
if (i != jobsetRepeats.end())
buildOptions.repeats = std::max(buildOptions.repeats, i->second);
buildOptions.nrRepeats = std::max(buildOptions.nrRepeats, i->second);
}
}
if (!build) build = *dependents.begin();
@@ -147,7 +150,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
buildOptions.buildTimeout = build->buildTimeout;
printInfo("performing step %s %d times on %s (needed by build %d and %d others)",
localStore->printStorePath(step->drvPath), buildOptions.repeats + 1, machine->sshName, buildId, (dependents.size() - 1));
localStore->printStorePath(step->drvPath), buildOptions.nrRepeats + 1, machine->sshName, buildId, (dependents.size() - 1));
}
if (!buildOneDone)

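The two hunks above switch the step's options from Hydra's own State::BuildOptions to nix::ServeProto::BuildOptions, built with designated initializers and a renamed nrRepeats field. A self-contained sketch of how the fields end up populated, with a stand-in struct mirroring the names shown above (the timeout and jobset values are made up for illustration):

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Stand-in with the same field names as nix::ServeProto::BuildOptions above.
    struct BuildOptionsSketch {
        unsigned maxSilentTime = 0;
        unsigned buildTimeout = 0;
        size_t maxLogSize = 0;
        size_t nrRepeats = 0;
        bool enforceDeterminism = false;
        bool keepFailed = false;
    };

    int main()
    {
        bool isDeterministic = true;   // step->isDeterministic in the real code
        size_t jobsetRepeats = 3;      // hypothetical per-jobset build-repeat setting

        BuildOptionsSketch options {
            .maxLogSize = 64ul * 1024 * 1024,
            .nrRepeats = isDeterministic ? 1u : 0u,
            .enforceDeterminism = isDeterministic,
            .keepFailed = false,
        };

        // The per-build timeouts and the per-jobset repeat count are merged in later:
        options.maxSilentTime = 3600;
        options.buildTimeout = 36000;
        options.nrRepeats = std::max(options.nrRepeats, jobsetRepeats);

        // As the printInfo() call above shows, the step runs nrRepeats + 1 times.
        std::printf("performing the step %zu time(s)\n", options.nrRepeats + 1);
    }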
View File

@@ -231,11 +231,11 @@ system_time State::doDispatch()
sort(machinesSorted.begin(), machinesSorted.end(),
[](const MachineInfo & a, const MachineInfo & b) -> bool
{
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat);
float ta = std::round(a.currentJobs / a.machine->speedFactor);
float tb = std::round(b.currentJobs / b.machine->speedFactor);
return
ta != tb ? ta < tb :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :
a.machine->speedFactor != b.machine->speedFactor ? a.machine->speedFactor > b.machine->speedFactor :
a.currentJobs > b.currentJobs;
});
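
The comparator above now uses the machine's speedFactor (a float since the nix::Machine cleanup further down) directly instead of the old speedFactorFloat shim. Reduced to plain data, the ordering it produces looks like this; the machine names and numbers are made up for illustration:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct MachineInfoSketch {
        const char * name;
        float currentJobs;
        float speedFactor;
    };

    int main()
    {
        std::vector<MachineInfoSketch> machines {
            { "slow-idle", 0, 1.0f },
            { "fast-busy", 8, 4.0f },
            { "fast-idle", 0, 4.0f },
        };

        std::sort(machines.begin(), machines.end(),
            [](const MachineInfoSketch & a, const MachineInfoSketch & b) {
                // Estimated load per unit of speed, rounded as in doDispatch().
                float ta = std::round(a.currentJobs / a.speedFactor);
                float tb = std::round(b.currentJobs / b.speedFactor);
                return
                    ta != tb ? ta < tb :                                             // least-loaded first
                    a.speedFactor != b.speedFactor ? a.speedFactor > b.speedFactor : // then fastest
                    a.currentJobs > b.currentJobs;                                   // final tie-break as above
            });

        for (auto & m : machines)
            std::printf("%s\n", m.name);   // prints: fast-idle, slow-idle, fast-busy
    }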

View File

@@ -155,16 +155,16 @@ void State::parseMachines(const std::string & contents)
auto machine = std::make_shared<::Machine>(nix::Machine {
// `storeUri`, not yet used
"",
// `systemTypes`, not yet used
{},
// `systemTypes`
tokenizeString<StringSet>(tokens[1], ","),
// `sshKey`
tokens[2] == "-" ? "" : tokens[2],
// `maxJobs`
tokens[3] != ""
? string2Int<MaxJobs>(tokens[3]).value()
: 1,
// `speedFactor`, not yet used
1,
// `speedFactor`
atof(tokens[4].c_str()),
// `supportedFeatures`
std::move(supportedFeatures),
// `mandatoryFeatures`
@@ -176,8 +176,6 @@ void State::parseMachines(const std::string & contents)
});
machine->sshName = tokens[0];
machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
machine->speedFactorFloat = atof(tokens[4].c_str());
/* Re-use the State object of the previous machine with the
same name. */
@@ -641,7 +639,7 @@ void State::dumpStatus(Connection & conn)
json machine = {
{"enabled", m->enabled},
{"systemTypes", m->systemTypesSet},
{"systemTypes", m->systemTypes},
{"supportedFeatures", m->supportedFeatures},
{"mandatoryFeatures", m->mandatoryFeatures},
{"nrStepsDone", s->nrStepsDone.load()},
@@ -928,10 +926,17 @@ void State::run(BuildID buildOne)
while (true) {
try {
auto conn(dbPool.get());
receiver dumpStatus_(*conn, "dump_status");
while (true) {
conn->await_notification();
dumpStatus(*conn);
try {
receiver dumpStatus_(*conn, "dump_status");
while (true) {
conn->await_notification();
dumpStatus(*conn);
}
} catch (pqxx::broken_connection & connEx) {
printMsg(lvlError, "main thread: %s", connEx.what());
printMsg(lvlError, "main thread: Reconnecting in 10s");
conn.markBad();
sleep(10);
}
} catch (std::exception & e) {
printMsg(lvlError, "main thread: %s", e.what());

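The new inner try/catch above keeps the status-dump thread alive across database restarts: a pqxx::broken_connection only discards the current pooled connection and sleeps, while the outer loop then obtains a fresh one. A skeleton of that shape; the pqxx calls are real libpqxx API, but dbPool, markBad(), receiver and dumpStatus() are Hydra's own and appear here only as comments:

    #include <pqxx/pqxx>
    #include <cstdio>
    #include <exception>
    #include <unistd.h>

    void statusLoopSketch()
    {
        while (true) {
            try {
                // auto conn(dbPool.get());                       // borrow a pooled connection
                pqxx::connection conn;                             // sketch: connect directly instead
                try {
                    // receiver dumpStatus_(conn, "dump_status"); // LISTEN dump_status
                    while (true) {
                        conn.await_notification();                 // block until a NOTIFY arrives
                        // dumpStatus(conn);
                    }
                } catch (pqxx::broken_connection & e) {
                    std::fprintf(stderr, "main thread: %s\n", e.what());
                    std::fprintf(stderr, "main thread: Reconnecting in 10s\n");
                    // conn.markBad();                             // drop the connection from the pool
                    sleep(10);
                }
            } catch (std::exception & e) {
                std::fprintf(stderr, "main thread: %s\n", e.what());
                sleep(10);
            }
        }
    }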
View File

@@ -6,7 +6,46 @@
using namespace nix;
struct Extractor : ParseSink
struct NarMemberConstructor : CreateRegularFileSink
{
NarMemberData & curMember;
HashSink hashSink = HashSink { HashAlgorithm::SHA256 };
std::optional<uint64_t> expectedSize;
NarMemberConstructor(NarMemberData & curMember)
: curMember(curMember)
{ }
void isExecutable() override
{
}
void preallocateContents(uint64_t size) override
{
expectedSize = size;
}
void operator () (std::string_view data) override
{
assert(expectedSize);
*curMember.fileSize += data.size();
hashSink(data);
if (curMember.contents) {
curMember.contents->append(data);
}
assert(curMember.fileSize <= expectedSize);
if (curMember.fileSize == expectedSize) {
auto [hash, len] = hashSink.finish();
assert(curMember.fileSize == len);
curMember.sha256 = hash;
}
}
};
struct Extractor : FileSystemObjectSink
{
std::unordered_set<Path> filesToKeep {
"/nix-support/hydra-build-products",
@@ -15,7 +54,6 @@ struct Extractor : ParseSink
};
NarMemberDatas & members;
NarMemberData * curMember = nullptr;
Path prefix;
Extractor(NarMemberDatas & members, const Path & prefix)
@@ -27,53 +65,22 @@ struct Extractor : ParseSink
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tDirectory });
}
void createRegularFile(const Path & path) override
void createRegularFile(const Path & path, std::function<void(CreateRegularFileSink &)> func) override
{
curMember = &members.insert_or_assign(prefix + path, NarMemberData {
.type = SourceAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second;
}
std::optional<uint64_t> expectedSize;
std::unique_ptr<HashSink> hashSink;
void preallocateContents(uint64_t size) override
{
expectedSize = size;
hashSink = std::make_unique<HashSink>(htSHA256);
}
void receiveContents(std::string_view data) override
{
assert(expectedSize);
assert(curMember);
assert(hashSink);
*curMember->fileSize += data.size();
(*hashSink)(data);
if (curMember->contents) {
curMember->contents->append(data);
}
assert(curMember->fileSize <= expectedSize);
if (curMember->fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(curMember->fileSize == len);
curMember->sha256 = hash;
hashSink.reset();
}
NarMemberConstructor nmc {
members.insert_or_assign(prefix + path, NarMemberData {
.type = SourceAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second,
};
func(nmc);
}
void createSymlink(const Path & path, const std::string & target) override
{
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tSymlink });
}
void isExecutable() override
{ }
void closeRegularFile() override
{ }
};
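
The rewrite above tracks nix's move from ParseSink to FileSystemObjectSink: createRegularFile() now receives a callback and a per-file sink object (NarMemberConstructor), so the hashing state no longer lives in mutable members of the extractor itself. A self-contained illustration of that callback shape; the class names here are stand-ins, not the real nix interfaces:

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <string_view>

    struct RegularFileSinkSketch {
        virtual ~RegularFileSinkSketch() = default;
        virtual void operator()(std::string_view data) = 0;
    };

    struct FileSystemSinkSketch {
        virtual ~FileSystemSinkSketch() = default;
        // All per-file state lives in the sink handed to `fill`, not in *this.
        virtual void createRegularFile(
            const std::string & path,
            std::function<void(RegularFileSinkSketch &)> fill) = 0;
    };

    // Per-file state, analogous to NarMemberConstructor above.
    struct CountingSink : RegularFileSinkSketch {
        uint64_t size = 0;
        void operator()(std::string_view data) override { size += data.size(); }
    };

    struct ExtractorSketch : FileSystemSinkSketch {
        void createRegularFile(
            const std::string & path,
            std::function<void(RegularFileSinkSketch &)> fill) override
        {
            CountingSink sink;
            fill(sink);          // the NAR parser streams the file's contents here
            std::cout << path << ": " << sink.size << " bytes\n";
        }
    };

    int main()
    {
        ExtractorSketch extractor;
        extractor.createRegularFile("/nix-support/hydra-build-products",
            [](RegularFileSinkSketch & sink) { sink("example contents"); });
    }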

View File

@@ -10,8 +10,14 @@ using namespace nix;
void State::queueMonitor()
{
while (true) {
auto conn(dbPool.get());
try {
queueMonitorLoop();
queueMonitorLoop(*conn);
} catch (pqxx::broken_connection & e) {
printMsg(lvlError, "queue monitor: %s", e.what());
printMsg(lvlError, "queue monitor: Reconnecting in 10s");
conn.markBad();
sleep(10);
} catch (std::exception & e) {
printError("queue monitor: %s", e.what());
sleep(10); // probably a DB problem, so don't retry right away
@@ -20,16 +26,14 @@ void State::queueMonitor()
}
void State::queueMonitorLoop()
void State::queueMonitorLoop(Connection & conn)
{
auto conn(dbPool.get());
receiver buildsAdded(*conn, "builds_added");
receiver buildsRestarted(*conn, "builds_restarted");
receiver buildsCancelled(*conn, "builds_cancelled");
receiver buildsDeleted(*conn, "builds_deleted");
receiver buildsBumped(*conn, "builds_bumped");
receiver jobsetSharesChanged(*conn, "jobset_shares_changed");
receiver buildsAdded(conn, "builds_added");
receiver buildsRestarted(conn, "builds_restarted");
receiver buildsCancelled(conn, "builds_cancelled");
receiver buildsDeleted(conn, "builds_deleted");
receiver buildsBumped(conn, "builds_bumped");
receiver jobsetSharesChanged(conn, "jobset_shares_changed");
auto destStore = getDestStore();
@@ -39,17 +43,17 @@ void State::queueMonitorLoop()
while (!quit) {
localStore->clearPathInfoCache();
bool done = getQueuedBuilds(*conn, destStore, lastBuildId);
bool done = getQueuedBuilds(conn, destStore, lastBuildId);
if (buildOne && buildOneDone) quit = true;
/* Sleep until we get notification from the database about an
event. */
if (done && !quit) {
conn->await_notification();
conn.await_notification();
nrQueueWakeups++;
} else
conn->get_notifs();
conn.get_notifs();
if (auto lowestId = buildsAdded.get()) {
lastBuildId = std::min(lastBuildId, static_cast<unsigned>(std::stoul(*lowestId) - 1));
@@ -61,11 +65,11 @@ void State::queueMonitorLoop()
}
if (buildsCancelled.get() || buildsDeleted.get() || buildsBumped.get()) {
printMsg(lvlTalkative, "got notification: builds cancelled or bumped");
processQueueChange(*conn);
processQueueChange(conn);
}
if (jobsetSharesChanged.get()) {
printMsg(lvlTalkative, "got notification: jobset shares changed");
processJobsetSharesChange(*conn);
processJobsetSharesChange(conn);
}
}
@@ -294,7 +298,7 @@ bool State::getQueuedBuilds(Connection & conn,
try {
createBuild(build);
} catch (Error & e) {
e.addTrace({}, hintfmt("while loading build %d: ", build->id));
e.addTrace({}, HintFmt("while loading build %d: ", build->id));
throw;
}
@@ -696,7 +700,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
product.fileSize = row[2].as<off_t>();
}
if (!row[3].is_null())
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), htSHA256);
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), HashAlgorithm::SHA256);
if (!row[4].is_null())
product.path = row[4].as<std::string>();
product.name = row[5].as<std::string>();

View File

@@ -22,6 +22,7 @@
#include "sync.hh"
#include "nar-extractor.hh"
#include "serve-protocol.hh"
#include "serve-protocol-impl.hh"
#include "machines.hh"
@@ -243,14 +244,6 @@ struct Machine : nix::Machine
we are not yet used to, but once we are, we don't need this. */
std::string sshName;
/* TODO Get rid once `nix::Machine::systemTypes` is a set not
vector. */
std::set<std::string> systemTypesSet;
/* TODO Get rid once `nix::Machine::systemTypes` is a `float` not
an `int`. */
float speedFactorFloat = 1.0;
struct State {
typedef std::shared_ptr<State> ptr;
counter currentJobs{0};
@@ -277,7 +270,7 @@ struct Machine : nix::Machine
{
/* Check that this machine is of the type required by the
step. */
if (!systemTypesSet.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
if (!systemTypes.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
return false;
/* Check that the step requires all mandatory features of this
@@ -307,29 +300,9 @@ struct Machine : nix::Machine
}
// A connection to a machine
struct Connection {
nix::FdSource from;
nix::FdSink to;
nix::ServeProto::Version remoteVersion;
struct Connection : nix::ServeProto::BasicClientConnection {
// Backpointer to the machine
ptr machine;
operator nix::ServeProto::ReadConn ()
{
return {
.from = from,
.version = remoteVersion,
};
}
operator nix::ServeProto::WriteConn ()
{
return {
.to = to,
.version = remoteVersion,
};
}
};
};
@@ -464,7 +437,7 @@ private:
/* How often the build steps of a jobset should be repeated in
order to detect non-determinism. */
std::map<std::pair<std::string, std::string>, unsigned int> jobsetRepeats;
std::map<std::pair<std::string, std::string>, size_t> jobsetRepeats;
bool uploadLogsToBinaryCache;
@@ -493,12 +466,6 @@ private:
public:
State(std::optional<std::string> metricsAddrOpt);
struct BuildOptions {
unsigned int maxSilentTime, buildTimeout, repeats;
size_t maxLogSize;
bool enforceDeterminism;
};
private:
nix::MaintainCount<counter> startDbUpdate();
@@ -531,7 +498,7 @@ private:
void queueMonitor();
void queueMonitorLoop();
void queueMonitorLoop(Connection & conn);
/* Check the queue for new builds. */
bool getQueuedBuilds(Connection & conn,
@@ -583,7 +550,7 @@ private:
void buildRemote(nix::ref<nix::Store> destStore,
Machine::ptr machine, Step::ptr step,
const BuildOptions & buildOptions,
const nix::ServeProto::BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers);

View File

@@ -4,7 +4,6 @@ use strict;
use warnings;
use base 'Hydra::Base::Controller::REST';
use List::SomeUtils qw(any);
use Nix::Store;
use Hydra::Helper::Nix;
use Hydra::Helper::CatalystUtils;
@@ -30,7 +29,7 @@ sub getChannelData {
my $outputs = {};
foreach my $output (@outputs) {
my $outPath = $output->get_column("outpath");
next if $checkValidity && !isValidPath($outPath);
next if $checkValidity && !$MACHINE_LOCAL_STORE->isValidPath($outPath);
$outputs->{$output->get_column("outname")} = $outPath;
push @storePaths, $outPath;
# Put the system type in the manifest (for top-level

View File

@@ -285,6 +285,23 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->response->body("");
}
sub push_gitea : Chained('api') PathPart('push-gitea') Args(0) {
my ($self, $c) = @_;
$c->{stash}->{json}->{jobsetsTriggered} = [];
my $in = $c->request->{data};
my $url = $in->{repository}->{clone_url} or die;
$url =~ s/.git$//;
print STDERR "got push from Gitea repository $url\n";
triggerJobset($self, $c, $_, 0) foreach $c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{ join => 'project'
, where => \ [ 'me.flake like ? or exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value like ?)', [ 'flake', "%$url%"], [ 'value', "%$url%" ] ]
});
$c->response->body("");
}
1;
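
The endpoint added here is reachable at /api/push-gitea; the Root controller hunks further down whitelist it for unauthenticated access and exempt it from the same-origin POST check, mirroring the existing GitHub hook. A jobset is triggered when its flake URI, or any of its input values, contains the repository's clone_url with the trailing ".git" stripped.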

View File

@@ -10,11 +10,10 @@ use File::Basename;
use File::LibMagic;
use File::stat;
use Data::Dump qw(dump);
use Nix::Store;
use Nix::Config;
use List::SomeUtils qw(all);
use Encode;
use JSON::PP;
use WWW::Form::UrlEncoded::PP qw();
use feature 'state';
@@ -82,9 +81,9 @@ sub build_GET {
# false because `$_->path` will be empty
$c->stash->{available} =
$c->stash->{isLocalStore}
? all { $_->path && isValidPath($_->path) } $build->buildoutputs->all
? all { $_->path && $MACHINE_LOCAL_STORE->isValidPath($_->path) } $build->buildoutputs->all
: 1;
$c->stash->{drvAvailable} = isValidPath $build->drvpath;
$c->stash->{drvAvailable} = $MACHINE_LOCAL_STORE->isValidPath($build->drvpath);
if ($build->finished && $build->iscachedbuild) {
my $path = ($build->buildoutputs)[0]->path or undef;
@@ -141,7 +140,7 @@ sub view_nixlog : Chained('buildChain') PathPart('nixlog') {
$c->stash->{step} = $step;
my $drvPath = $step->drvpath;
my $log_uri = $c->uri_for($c->controller('Root')->action_for("log"), [basename($drvPath)]);
my $log_uri = $c->uri_for($c->controller('Root')->action_for("log"), [WWW::Form::UrlEncoded::PP::url_encode(basename($drvPath))]);
showLog($c, $mode, $log_uri);
}
@@ -150,7 +149,7 @@ sub view_log : Chained('buildChain') PathPart('log') {
my ($self, $c, $mode) = @_;
my $drvPath = $c->stash->{build}->drvpath;
my $log_uri = $c->uri_for($c->controller('Root')->action_for("log"), [basename($drvPath)]);
my $log_uri = $c->uri_for($c->controller('Root')->action_for("log"), [WWW::Form::UrlEncoded::PP::url_encode(basename($drvPath))]);
showLog($c, $mode, $log_uri);
}
@@ -235,6 +234,9 @@ sub serveFile {
}
elsif ($ls->{type} eq "regular") {
# Have the hosted data considered its own origin to avoid being a giant
# XSS hole.
$c->response->header('Content-Security-Policy' => 'sandbox allow-scripts');
$c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
"store", "cat", "--store", getStoreUri(), "$path"]) };
@@ -308,7 +310,7 @@ sub output : Chained('buildChain') PathPart Args(1) {
error($c, "This build is not finished yet.") unless $build->finished;
my $output = $build->buildoutputs->find({name => $outputName});
notFound($c, "This build has no output named $outputName") unless defined $output;
gone($c, "Output is no longer available.") unless isValidPath $output->path;
gone($c, "Output is no longer available.") unless $MACHINE_LOCAL_STORE->isValidPath($output->path);
$c->response->header('Content-Disposition', "attachment; filename=\"build-${\$build->id}-${\$outputName}.nar.bz2\"");
$c->stash->{current_view} = 'NixNAR';
@@ -425,7 +427,7 @@ sub getDependencyGraph {
};
$$done{$path} = $node;
my @refs;
foreach my $ref (queryReferences($path)) {
foreach my $ref ($MACHINE_LOCAL_STORE->queryReferences($path)) {
next if $ref eq $path;
next unless $runtime || $ref =~ /\.drv$/;
getDependencyGraph($self, $c, $runtime, $done, $ref);
@@ -433,7 +435,7 @@ sub getDependencyGraph {
}
# Show in reverse topological order to flatten the graph.
# Should probably do a proper BFS.
my @sorted = reverse topoSortPaths(@refs);
my @sorted = reverse $MACHINE_LOCAL_STORE->topoSortPaths(@refs);
$node->{refs} = [map { $$done{$_} } @sorted];
}
@@ -446,7 +448,7 @@ sub build_deps : Chained('buildChain') PathPart('build-deps') {
my $build = $c->stash->{build};
my $drvPath = $build->drvpath;
error($c, "Derivation no longer available.") unless isValidPath $drvPath;
error($c, "Derivation no longer available.") unless $MACHINE_LOCAL_STORE->isValidPath($drvPath);
$c->stash->{buildTimeGraph} = getDependencyGraph($self, $c, 0, {}, $drvPath);
@@ -461,7 +463,7 @@ sub runtime_deps : Chained('buildChain') PathPart('runtime-deps') {
requireLocalStore($c);
error($c, "Build outputs no longer available.") unless all { isValidPath($_) } @outPaths;
error($c, "Build outputs no longer available.") unless all { $MACHINE_LOCAL_STORE->isValidPath($_) } @outPaths;
my $done = {};
$c->stash->{runtimeGraph} = [ map { getDependencyGraph($self, $c, 1, $done, $_) } @outPaths ];
@@ -481,7 +483,7 @@ sub nix : Chained('buildChain') PathPart('nix') CaptureArgs(0) {
if (isLocalStore) {
foreach my $out ($build->buildoutputs) {
notFound($c, "Path " . $out->path . " is no longer available.")
unless isValidPath($out->path);
unless $MACHINE_LOCAL_STORE->isValidPath($out->path);
}
}

View File

@@ -16,6 +16,7 @@ use List::Util qw[min max];
use List::SomeUtils qw{any};
use Net::Prometheus;
use Types::Standard qw/StrMatch/;
use WWW::Form::UrlEncoded::PP qw();
use constant NARINFO_REGEX => qr{^([a-z0-9]{32})\.narinfo$};
# e.g.: https://hydra.example.com/realisations/sha256:a62128132508a3a32eef651d6467695944763602f226ac630543e947d9feb140!out.doi
@@ -34,6 +35,7 @@ sub noLoginNeeded {
return $whitelisted ||
$c->request->path eq "api/push-github" ||
$c->request->path eq "api/push-gitea" ||
$c->request->path eq "google-login" ||
$c->request->path eq "github-redirect" ||
$c->request->path eq "github-login" ||
@@ -79,7 +81,7 @@ sub begin :Private {
$_->supportedInputTypes($c->stash->{inputTypes}) foreach @{$c->hydra_plugins};
# XSRF protection: require POST requests to have the same origin.
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github") {
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github" && $c->req->path ne "api/push-gitea") {
my $referer = $c->req->header('Referer');
$referer //= $c->req->header('Origin');
my $base = $c->req->base;
@@ -366,7 +368,7 @@ sub realisations :Path('realisations') :Args(StrMatch[REALISATIONS_REGEX]) {
else {
my ($rawDrvOutput) = $realisation =~ REALISATIONS_REGEX;
my $rawRealisation = queryRawRealisation($rawDrvOutput);
my $rawRealisation = $MACHINE_LOCAL_STORE->queryRawRealisation($rawDrvOutput);
if (!$rawRealisation) {
$c->response->status(404);
@@ -395,7 +397,7 @@ sub narinfo :Path :Args(StrMatch[NARINFO_REGEX]) {
my ($hash) = $narinfo =~ NARINFO_REGEX;
die("Hash length was not 32") if length($hash) != 32;
my $path = queryPathFromHashPart($hash);
my $path = $MACHINE_LOCAL_STORE->queryPathFromHashPart($hash);
if (!$path) {
$c->response->status(404);
@@ -553,7 +555,7 @@ sub log :Local :Args(1) {
my $logPrefix = $c->config->{log_prefix};
if (defined $logPrefix) {
$c->res->redirect($logPrefix . "log/" . basename($drvPath));
$c->res->redirect($logPrefix . "log/" . WWW::Form::UrlEncoded::PP::url_encode(basename($drvPath)));
} else {
notFound($c, "The build log of $drvPath is not available.");
}

View File

@@ -40,8 +40,11 @@ our @EXPORT = qw(
registerRoot
restartBuilds
run
$MACHINE_LOCAL_STORE
);
our $MACHINE_LOCAL_STORE = Nix::Store->new();
sub getHydraHome {
my $dir = $ENV{"HYDRA_HOME"} or die "The HYDRA_HOME directory does not exist!\n";
@@ -187,6 +190,10 @@ sub findLog {
return undef if scalar @outPaths == 0;
# Filter out any NULLs. Content-addressed derivations
# that haven't built yet or failed to build may have a NULL outPath.
@outPaths = grep {defined} @outPaths;
my @steps = $c->model('DB::BuildSteps')->search(
{ path => { -in => [@outPaths] } },
{ select => ["drvpath"]
@@ -494,7 +501,7 @@ sub restartBuilds {
$builds = $builds->search({ finished => 1 });
foreach my $build ($builds->search({}, { columns => ["drvpath"] })) {
next if !isValidPath($build->drvpath);
next if !$MACHINE_LOCAL_STORE->isValidPath($build->drvpath);
registerRoot $build->drvpath;
}

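This hunk is the source of the pattern used throughout the controller, view, and plugin diffs in this comparison: Hydra::Helper::Nix now constructs and exports a single $MACHINE_LOCAL_STORE handle (Nix::Store->new()), and call sites move from the procedural Nix::Store imports (isValidPath, addTempRoot, addToStore, queryPathHash, queryPathInfo, queryReferences, topoSortPaths, derivationFromPath, queryRawRealisation, queryPathFromHashPart) to method calls on that shared object, which is why `use Nix::Store;` disappears from each of those modules.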
View File

@@ -7,7 +7,6 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -38,9 +37,9 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedBazaarInputs')->search(
{uri => $uri, revision => $revision});
addTempRoot($cachedInput->storepath) if defined $cachedInput;
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
} else {
@@ -58,7 +57,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', $stdout;
# FIXME: time window between nix-prefetch-bzr and addTempRoot.
addTempRoot($storePath);
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedBazaarInputs')->create(

View File

@@ -7,7 +7,6 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -58,7 +57,7 @@ sub fetchInput {
{uri => $uri, revision => $revision},
{rows => 1});
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$revision = $cachedInput->revision;
@@ -75,8 +74,8 @@ sub fetchInput {
die "darcs changes --count failed" if $? != 0;
system "rm", "-rf", "$tmpDir/export/_darcs";
$storePath = addToStore("$tmpDir/export", 1, "sha256");
$sha256 = queryPathHash($storePath);
$storePath = $MACHINE_LOCAL_STORE->addToStore("$tmpDir/export", 1, "sha256");
$sha256 = $MACHINE_LOCAL_STORE->queryPathHash($storePath);
$sha256 =~ s/sha256://;
$self->{db}->txn_do(sub {

View File

@@ -186,9 +186,9 @@ sub fetchInput {
{uri => $uri, branch => $branch, revision => $revision, isdeepclone => defined($deepClone) ? 1 : 0},
{rows => 1});
addTempRoot($cachedInput->storepath) if defined $cachedInput;
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$revision = $cachedInput->revision;
@@ -217,7 +217,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', grab(cmd => ["nix-prefetch-git", $clonePath, $revision], chomp => 1);
# FIXME: time window between nix-prefetch-git and addTempRoot.
addTempRoot($storePath);
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedGitInputs')->update_or_create(

View File

@@ -88,10 +88,6 @@ sub buildQueued {
common(@_, [], 0);
}
sub buildStarted {
common(@_, [], 1);
}
sub buildFinished {
common(@_, 2);
}

View File

@@ -7,7 +7,6 @@ use Digest::SHA qw(sha256_hex);
use File::Path;
use Hydra::Helper::Nix;
use Hydra::Helper::Exec;
use Nix::Store;
use Fcntl qw(:flock);
sub supportedInputTypes {
@@ -68,9 +67,9 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedHgInputs')->search(
{uri => $uri, branch => $branch, revision => $revision});
addTempRoot($cachedInput->storepath) if defined $cachedInput;
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
} else {
@@ -85,7 +84,7 @@ sub fetchInput {
($sha256, $storePath) = split ' ', $stdout;
# FIXME: time window between nix-prefetch-hg and addTempRoot.
addTempRoot($storePath);
$MACHINE_LOCAL_STORE->addTempRoot($storePath);
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedHgInputs')->update_or_create(

View File

@@ -5,7 +5,6 @@ use warnings;
use parent 'Hydra::Plugin';
use POSIX qw(strftime);
use Hydra::Helper::Nix;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -30,7 +29,7 @@ sub fetchInput {
{srcpath => $uri, lastseen => {">", $timestamp - $timeout}},
{rows => 1, order_by => "lastseen DESC"});
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
if (defined $cachedInput && $MACHINE_LOCAL_STORE->isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
$sha256 = $cachedInput->sha256hash;
$timestamp = $cachedInput->timestamp;
@@ -46,7 +45,7 @@ sub fetchInput {
}
chomp $storePath;
$sha256 = (queryPathInfo($storePath, 0))[1] or die;
$sha256 = ($MACHINE_LOCAL_STORE->queryPathInfo($storePath, 0))[1] or die;
($cachedInput) = $self->{db}->resultset('CachedPathInputs')->search(
{srcpath => $uri, sha256hash => $sha256});

View File

@@ -7,7 +7,6 @@ use Digest::SHA qw(sha256_hex);
use Hydra::Helper::Exec;
use Hydra::Helper::Nix;
use IPC::Run;
use Nix::Store;
sub supportedInputTypes {
my ($self, $inputTypes) = @_;
@@ -45,7 +44,7 @@ sub fetchInput {
(my $cachedInput) = $self->{db}->resultset('CachedSubversionInputs')->search(
{uri => $uri, revision => $revision});
addTempRoot($cachedInput->storepath) if defined $cachedInput;
$MACHINE_LOCAL_STORE->addTempRoot($cachedInput->storepath) if defined $cachedInput;
if (defined $cachedInput && isValidPath($cachedInput->storepath)) {
$storePath = $cachedInput->storepath;
@@ -62,16 +61,16 @@ sub fetchInput {
die "error checking out Subversion repo at `$uri':\n$stderr" if $res;
if ($type eq "svn-checkout") {
$storePath = addToStore($wcPath, 1, "sha256");
$storePath = $MACHINE_LOCAL_STORE->addToStore($wcPath, 1, "sha256");
} else {
# Hm, if the Nix Perl bindings supported filters in
# addToStore(), then we wouldn't need to make a copy here.
my $tmpDir = File::Temp->newdir("hydra-svn-export.XXXXXX", CLEANUP => 1, TMPDIR => 1) or die;
(system "svn", "export", $wcPath, "$tmpDir/source", "--quiet") == 0 or die "svn export failed";
$storePath = addToStore("$tmpDir/source", 1, "sha256");
$storePath = $MACHINE_LOCAL_STORE->addToStore("$tmpDir/source", 1, "sha256");
}
$sha256 = queryPathHash($storePath); $sha256 =~ s/sha256://;
$sha256 = $MACHINE_LOCAL_STORE->queryPathHash($storePath); $sha256 =~ s/sha256://;
$self->{db}->txn_do(sub {
$self->{db}->resultset('CachedSubversionInputs')->update_or_create(

View File

@@ -6,8 +6,7 @@ use File::Basename;
use Hydra::Helper::CatalystUtils;
use MIME::Base64;
use Nix::Manifest;
use Nix::Store;
use Nix::Utils;
use Hydra::Helper::Nix;
use base qw/Catalyst::View/;
sub process {
@@ -17,7 +16,7 @@ sub process {
$c->response->content_type('text/x-nix-narinfo'); # !!! check MIME type
my ($deriver, $narHash, $time, $narSize, $refs) = queryPathInfo($storePath, 1);
my ($deriver, $narHash, $time, $narSize, $refs) = $MACHINE_LOCAL_STORE->queryPathInfo($storePath, 1);
my $info;
$info .= "StorePath: $storePath\n";
@@ -28,8 +27,8 @@ sub process {
$info .= "References: " . join(" ", map { basename $_ } @{$refs}) . "\n";
if (defined $deriver) {
$info .= "Deriver: " . basename $deriver . "\n";
if (isValidPath($deriver)) {
my $drv = derivationFromPath($deriver);
if ($MACHINE_LOCAL_STORE->isValidPath($deriver)) {
my $drv = $MACHINE_LOCAL_STORE->derivationFromPath($deriver);
$info .= "System: $drv->{platform}\n";
}
}

View File

@@ -33,7 +33,7 @@
<div id="hydra-signin" class="modal hide fade" tabindex="-1" role="dialog" aria-hidden="true">
<div class="modal-dialog" role="document">
<div class="modal-content">
<form>
<form id="signin-form">
<div class="modal-body">
<div class="form-group">
<label for="username" class="col-form-label">User name</label>
@@ -45,7 +45,7 @@
</div>
</div>
<div class="modal-footer">
<button id="do-signin" type="button" class="btn btn-primary">Sign in</button>
<button type="submit" class="btn btn-primary">Sign in</button>
<button type="button" class="btn btn-secondary" data-dismiss="modal">Cancel</button>
</div>
</form>
@@ -57,10 +57,11 @@
function finishSignOut() { }
$("#do-signin").click(function() {
$("#signin-form").submit(function(e) {
e.preventDefault();
requestJSON({
url: "[% c.uri_for('/login') %]",
data: $(this).parents("form").serialize(),
data: $(this).serialize(),
type: 'POST',
success: function(data) {
window.location.reload();

View File

@@ -85,14 +85,14 @@ sub attrsToSQL {
# Fetch a store path from 'eval_substituter' if not already present.
sub getPath {
my ($path) = @_;
return 1 if isValidPath($path);
return 1 if $MACHINE_LOCAL_STORE->isValidPath($path);
my $substituter = $config->{eval_substituter};
system("nix", "--experimental-features", "nix-command", "copy", "--from", $substituter, "--", $path)
if defined $substituter;
return isValidPath($path);
return $MACHINE_LOCAL_STORE->isValidPath($path);
}
@@ -143,7 +143,7 @@ sub fetchInputBuild {
, version => $version
, outputName => $mainOutput->name
};
if (isValidPath($prevBuild->drvpath)) {
if ($MACHINE_LOCAL_STORE->isValidPath($prevBuild->drvpath)) {
$result->{drvPath} = $prevBuild->drvpath;
}
@@ -233,7 +233,7 @@ sub fetchInputEval {
my $out = $build->buildoutputs->find({ name => "out" });
next unless defined $out;
# FIXME: Should we fail if the path is not valid?
next unless isValidPath($out->path);
next unless $MACHINE_LOCAL_STORE->isValidPath($out->path);
$jobs->{$build->get_column('job')} = $out->path;
}

View File

@@ -5,7 +5,6 @@ use warnings;
use File::Path;
use File::stat;
use File::Basename;
use Nix::Store;
use Hydra::Config;
use Hydra::Schema;
use Hydra::Helper::Nix;
@@ -47,7 +46,7 @@ sub keepBuild {
$build->finished && ($build->buildstatus == 0 || $build->buildstatus == 6))
{
foreach my $path (split / /, $build->get_column('outpaths')) {
if (isValidPath($path)) {
if ($MACHINE_LOCAL_STORE->isValidPath($path)) {
addRoot $path;
} else {
print STDERR " warning: output ", $path, " has disappeared\n" if $build->finished;
@@ -55,7 +54,7 @@ sub keepBuild {
}
}
if (!$build->finished || ($keepFailedDrvs && $build->buildstatus != 0)) {
if (isValidPath($build->drvpath)) {
if ($MACHINE_LOCAL_STORE->isValidPath($build->drvpath)) {
addRoot $build->drvpath;
} else {
print STDERR " warning: derivation ", $build->drvpath, " has disappeared\n";

View File

@@ -54,13 +54,14 @@ subtest "/job/PROJECT/JOBSET/JOB/shield" => sub {
subtest "/job/PROJECT/JOBSET/JOB/prometheus" => sub {
my $response = request(GET '/job/' . $project->name . '/' . $jobset->name . '/' . $build->job . '/prometheus');
ok($response->is_success, "The page showing the job's prometheus data returns 200.");
my $metrics = $response->content;
ok($metrics =~ m/hydra_job_failed\{.*\} 0/);
ok($metrics =~ m/hydra_job_completion_time\{.*\} [\d]+/);
ok($metrics =~ m/hydra_build_closure_size\{.*\} 96/);
ok($metrics =~ m/hydra_build_output_size\{.*\} 96/);
ok($response->is_success, "The page showing the job's prometheus data returns 200.");
my $metrics = $response->content;
like($metrics, qr/hydra_job_failed\{.*\} 0/);
like($metrics, qr/hydra_job_completion_time\{.*\} [\d]+/);
like($metrics, qr/hydra_build_closure_size\{.*\} 96/);
like($metrics, qr/hydra_build_output_size\{.*\} 96/);
};
done_testing;

View File

@@ -186,7 +186,7 @@ subtest 'Update jobset "job" to have an invalid input type' => sub {
})
);
ok(!$jobsetupdate->is_success);
ok($jobsetupdate->content =~ m/Invalid input type.*valid types:/);
like($jobsetupdate->content, qr/Invalid input type.*valid types:/);
};

View File

@@ -24,7 +24,7 @@ my $cookie = $login->header("set-cookie");
my $my_jobs = request(GET '/dashboard/alice/my-jobs-tab', Accept => 'application/json', Cookie => $cookie);
ok($my_jobs->is_success);
my $content = $my_jobs->content();
ok($content =~ /empty_dir/);
like($content, qr/empty_dir/);
ok(!($content =~ /fails/));
ok(!($content =~ /succeed_with_failed/));
done_testing;

View File

@@ -115,7 +115,7 @@ subtest "evaluation" => sub {
my $build = decode_json(request_json({ uri => "/build/" . $evals->[0]->{builds}->[0] })->content());
is($build->{job}, "job", "The build's job name is job");
is($build->{finished}, 0, "The build isn't finished yet");
ok($build->{buildoutputs}->{out}->{path} =~ /\/nix\/store\/[a-zA-Z0-9]{32}-job$/, "The build's outpath is in the Nix store and named 'job'");
like($build->{buildoutputs}->{out}->{path}, qr/\/nix\/store\/[a-zA-Z0-9]{32}-job$/, "The build's outpath is in the Nix store and named 'job'");
subtest "search" => sub {
my $search_project = decode_json(request_json({ uri => "/search/?query=sample" })->content());

View File

@@ -27,13 +27,13 @@ my $project = $db->resultset('Projects')->create({name => "tests", displayname =
my $jobset = createBaseJobset("content-addressed", "content-addressed.nix", $ctx{jobsdir});
ok(evalSucceeds($jobset), "Evaluating jobs/content-addressed.nix should exit with return code 0");
is(nrQueuedBuildsForJobset($jobset), 5, "Evaluating jobs/content-addressed.nix should result in 4 builds");
is(nrQueuedBuildsForJobset($jobset), 6, "Evaluating jobs/content-addressed.nix should result in 6 builds");
for my $build (queuedBuildsForJobset($jobset)) {
ok(runBuild($build), "Build '".$build->job."' from jobs/content-addressed.nix should exit with code 0");
my $newbuild = $db->resultset('Builds')->find($build->id);
is($newbuild->finished, 1, "Build '".$build->job."' from jobs/content-addressed.nix should be finished.");
my $expected = $build->job eq "fails" ? 1 : $build->job =~ /with_failed/ ? 6 : 0;
my $expected = $build->job eq "fails" ? 1 : $build->job =~ /with_failed/ ? 6 : $build->job =~ /FailingCA/ ? 2 : 0;
is($newbuild->buildstatus, $expected, "Build '".$build->job."' from jobs/content-addressed.nix should have buildstatus $expected.");
my $response = request("/build/".$build->id);
@@ -55,6 +55,8 @@ for my $build (queuedBuildsForJobset($jobset)) {
}
# XXX: deststoredir is undefined: Use of uninitialized value $ctx{"deststoredir"} in concatenation (.) or string at t/content-addressed/basic.t line 58.
# XXX: This test seems to not do what it seems to be doing. See documentation: https://metacpan.org/pod/Test2::V0#isnt($got,-$do_not_want,-$name)
isnt(<$ctx{deststoredir}/realisations/*>, "", "The destination store should have the realisations of the built derivations registered");
done_testing;

View File

@@ -25,6 +25,13 @@ rec {
FOO = empty_dir;
};
caDependingOnFailingCA =
cfg.mkContentAddressedDerivation {
name = "ca-depending-on-failing-ca";
builder = ./dir-with-file-builder.sh;
FOO = fails;
};
nonCaDependingOnCA =
cfg.mkDerivation {
name = "non-ca-depending-on-ca";

View File

@@ -3,7 +3,6 @@ use warnings;
use File::Basename;
use Hydra::Model::DB;
use Hydra::Helper::Nix;
use Nix::Store;
use Cwd;
my $db = Hydra::Model::DB->new;