Compare commits


141 Commits

Author SHA1 Message Date
John Ericson
838648c0ce Merge pull request #1349 from NixOS/ca-no-new-col
Allow building content-addressed derivations with hydra, minimally
2024-01-26 17:54:02 -05:00
John Ericson
6ac4292912 Merge pull request #1351 from Ma27/hacking-fixes
Small fixes for the development environment
2024-01-26 17:22:42 -05:00
John Ericson
b503280256 Add migration to drop non-null constraints 2024-01-26 11:53:58 -05:00
Maximilian Bosch
b4c91b5a6a package: move foreman to nativeCheckInputs
In 1bd195a513 strictDeps was set for the
Hydra package. As a result, `checkInputs` aren't available anymore in
the local dev-shell, which defeats the sole purpose of foreman there:
starting services and a database for development.
2024-01-26 17:30:07 +01:00
Maximilian Bosch
8477009310 doc/manual: fix instructions in contribution guidelines
In 5db374cb50 the `bootstrap` script was
removed; however, it's still referenced in the contribution guidelines.
Change that to `autoreconfPhase` as intended by the commit.
2024-01-26 17:28:07 +01:00
John Ericson
c62eaf248f Remove now-unneeded workaround 2024-01-26 01:20:07 -05:00
John Ericson
13b5f007ef Merge branch 'master' into ca-no-new-col 2024-01-26 01:19:45 -05:00
John Ericson
7f5889559e Merge pull request #1350 from NixOS/remove-old-workaround
Remove now-unneeded workaround
2024-01-26 01:13:37 -05:00
John Ericson
5ee0e443e4 Remove now-unneeded workaround 2024-01-26 01:08:11 -05:00
John Ericson
323b556dc8 Minimal CA support
This version has a worse UI, but also changes the schema less: one
non-null constraint is removed, but no new columns are added.

Co-Authored-By: Andrea Ciceri <andrea.ciceri@autistici.org>
Co-Authored-By: regnat <rg@regnat.ovh>
2024-01-26 00:34:58 -05:00
John Ericson
458b9e4242 Merge pull request #1348 from NixOS/ca-prep
More CA derivations prep
2024-01-25 21:53:40 -05:00
John Ericson
fcde5908d8 More CA derivations prep
Again, with care not to change the schema in any way.
2024-01-25 21:32:22 -05:00
John Ericson
083ef46c12 Merge pull request #1344 from delroth/google-popup
web: disable Sign in with Google popup
2024-01-25 16:36:16 -05:00
John Ericson
8a02bb7c36 Merge pull request #1347 from NixOS/simplify-req-features
Simplify `StoreConfig::getDefaultSystemFeatures` call
2024-01-25 16:17:25 -05:00
John Ericson
c64eed7d07 Simplify StoreConfig::getDefaultSystemFeatures call
That method is now static.
2024-01-25 15:58:07 -05:00
John Ericson
aed130cd17 flake.lock: Update
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/03e96b9dc011a16a0f6db9c7cb021ff93f8dcf88' (2024-01-19)
  → 'github:NixOS/nix/2c4bb93ba5a97e7078896ebc36385ce172960e4e' (2024-01-25)
2024-01-25 15:57:39 -05:00
John Ericson
7a6c401d42 Merge pull request #1346 from obsidiansystems/flake-reorg
Clean up the flake/build in a few ways
2024-01-25 15:55:47 -05:00
John Ericson
b5ed0787f7 Replace "not Perl" and "Perl again" with something more self-explanatory 2024-01-25 14:55:10 -05:00
John Ericson
c5f37eca91 Reorganize hydra modules 2024-01-25 14:55:07 -05:00
John Ericson
73b6c1fb11 Filter out (most tests) when !doCheck 2024-01-25 14:55:07 -05:00
John Ericson
4bbc7b8f75 Use the Nixpkgs fileset library to filter source
Now I can change Nix files without causing rebuilds.
2024-01-25 14:55:07 -05:00
John Ericson
d6d6d1b649 flake.nix: Temporarily add a second Nixpkgs for lib.fileset
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/b38e5a665e9d0aa7975beb0ed12e42d13a392e74' (2023-12-13)
  → 'github:NixOS/nix/03e96b9dc011a16a0f6db9c7cb021ff93f8dcf88' (2024-01-19)
• Added input 'nixpkgs-for-fileset':
    'github:NixOS/nixpkgs/a77ab169a83a4175169d78684ddd2e54486ac651' (2024-01-24)
2024-01-25 14:55:07 -05:00
John Ericson
1bd195a513 Clean up deps
- `strictDeps`

- Ensure it builds with and without `doCheck`
2024-01-25 14:55:07 -05:00
John Ericson
1471aacadc Split out a package.nix
Just like we did with Nix.
2024-01-25 14:55:06 -05:00
John Ericson
62ddeb0ff0 Merge pull request #1345 from SuperSandro2000/patch-2
Remove automake, libtool
2024-01-25 14:47:07 -05:00
Sandro
a876e46894 Remove automake, libtool
Those are already part of autoreconfHook
2024-01-25 17:12:40 +01:00
Pierre Bourdon
6df06b089e web: disable Sign in with Google popup 2024-01-25 09:27:46 +01:00
John Ericson
cc50fdff6f Merge pull request #1343 from obsidiansystems/default-machine-file-features
Use `StoreConfig::getDefaultSystemFeatures` for default machine config
2024-01-24 21:44:32 -05:00
John Ericson
b1fa6b3aac Use StoreConfig::getDefaultSystemFeatures for default machine config
Oddly, we have to make a `StoreConfig` subclass to get it, but
https://github.com/NixOS/nix/pull/9848 will fix that.

The purpose of this is to ensure that, absent an explicit config,
`localhost` includes `ca-derivations` and `recursive-nix` if those
experimental features are enabled.

Very much the complement of #1342, the previous PR.
2024-01-24 21:37:13 -05:00
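For illustration, "make a `StoreConfig` subclass to get it" is the usual C++ trick of widening access to a non-public member through a minimal derived type. A hedged sketch, with class and feature names assumed rather than taken from Hydra's actual code:
```cpp
#include <set>
#include <string>

// Stand-in for Nix's StoreConfig: assume the method is protected,
// which is why it cannot be called from the outside directly.
struct StoreConfig {
protected:
    static std::set<std::string> getDefaultSystemFeatures() {
        return {"nixos-test", "benchmark", "big-parallel"}; // illustrative values
    }
};

// The "odd" subclass exists only to re-export the inherited member.
struct ExposedStoreConfig : StoreConfig {
    using StoreConfig::getDefaultSystemFeatures;
};

int main() {
    auto features = ExposedStoreConfig::getDefaultSystemFeatures();
    return features.count("big-parallel") ? 0 : 1;
}
```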
John Ericson
f6a2b7562a Merge pull request #1342 from obsidiansystems/dedup-required-system-features
Use `nix::ParsedDerivation::getRequiredSystemFeatures()`
2024-01-24 21:13:49 -05:00
John Ericson
07cb5d1b7c Use nix::ParsedDerivation::getRequiredSystemFeatures()
A slight dedup, and also ensures that floating CA derivations require a
`ca-derivations` experimental feature. This fixes the scheduling issue
that @SuperSandro2000 found.
2024-01-24 21:04:14 -05:00
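Schematically, the deduplication means Hydra now asks the derivation for its feature set instead of recomputing it. A hedged sketch of the rule being reused (signature and names assumed):
```cpp
#include <set>
#include <string>

// Sketch only: a floating content-addressed derivation implicitly
// requires the "ca-derivations" system feature on top of whatever
// its meta `requiredSystemFeatures` declares.
std::set<std::string> requiredSystemFeatures(
    bool isFloatingContentAddressed,
    std::set<std::string> declared)
{
    if (isFloatingContentAddressed)
        declared.insert("ca-derivations");
    return declared;
}
```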
John Ericson
d45e14fd43 Merge pull request #1316 from NixOS/ca-derivations-prep
Prepare for CA derivation support with lower impact changes
2024-01-24 18:12:42 -05:00
John Ericson
d02e20a4c1 Merge pull request #1341 from NixOS/machine-dedup
Use Nix's `Machine` type in a minimal way
2024-01-23 15:38:19 -05:00
John Ericson
70e5469303 Use Nix's Machine type in a minimal way
This is *just* using the fields from that type, and only where the types
coincide. (There are two fields with different types, `speedFactor` most
interestingly.) No code is reused, so we can be sure that no behavior is
changed.

Once the types are reconciled on the Nix side, then we can start
carefully actually reusing code.

Progress on #1164
2024-01-23 12:18:57 -05:00
John Ericson
2e6ee28f9b Machine -> ::Machine so we don't conflict with Nix's 2024-01-23 11:03:19 -05:00
John Ericson
f5c0efb11e Merge pull request #1340 from NixOS/start-using-nix-ssh
Replace `Child` with `SSHMaster::Connection`
2024-01-23 01:17:26 -05:00
John Ericson
4e8fbaa3d6 Replace Child with SSHMaster::Connection
Nix defines basically an identical struct for the same purpose, so let's
just use that.
2024-01-23 01:11:46 -05:00
John Ericson
588a0c5269 Merge remote-tracking branch 'upstream/master' into ca-derivations-prep 2023-12-23 19:19:54 -05:00
John Ericson
02e453fc8c Merge pull request #1329 from NixOS/small-std-optional-cleanup
Clean up `std::optional` dereferencing in the queue runner
2023-12-23 19:18:41 -05:00
John Ericson
75f26f1fc4 Clean up std::optional dereferencing in the queue runner
Instead of doing this partial operation a number of times, assert (with
a comment), get a reference to the thing inside, and use that just once.
(This refactor was done twice, "just once" for each time.)
2023-12-23 19:10:58 -05:00
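In sketch form, the refactor replaces repeated partial dereferences with a single asserted bind (names illustrative, not the queue runner's actual code):
```cpp
#include <cassert>
#include <optional>
#include <string>

void useStep(const std::optional<std::string> & maybeDrvPath)
{
    // The partial operation happens once, with the invariant spelled out:
    assert(maybeDrvPath); // the scheduler guarantees this is set by now
    const auto & drvPath = *maybeDrvPath;
    // ... every later use goes through the plain reference `drvPath` ...
    (void) drvPath;
}
```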
John Ericson
3c89067f52 Merge pull request #1328 from JackKelly-Bellroy/doc-store-uri
Document the `store_uri` parameter by way of example
2023-12-23 03:57:10 -05:00
Jack Kelly
abd858d3dc Document the store_uri parameter by way of example 2023-12-19 07:54:40 +10:00
John Ericson
163dbf7f54 Merge pull request #1327 from NixOS/latest-2.19
flake.lock: Update Nix
2023-12-14 00:57:19 -05:00
John Ericson
642156372f Merge branch 'latest-2.19' into ca-derivations-prep 2023-12-14 00:33:22 -05:00
John Ericson
7517c134c5 flake.lock: Update Nix
Newer 2.19-maintenance has some `restricted-eval` fixes that benefit
Hydra.

Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/50f8f1c8bc019a4c0fd098b9ac674b94cfc6af0d' (2023-12-11)
  → 'github:NixOS/nix/b38e5a665e9d0aa7975beb0ed12e42d13a392e74' (2023-12-13)
2023-12-14 00:32:15 -05:00
John Ericson
6e67884ff1 One more queryDerivationOutputMap should use the eval store param 2023-12-11 14:05:18 -05:00
John Ericson
a6b6c5a539 Revert query -- those columns don't exist yet! 2023-12-11 12:58:54 -05:00
John Ericson
ebfefb9161 Sync up with some changes done to the main CA branch 2023-12-11 12:46:36 -05:00
John Ericson
8783dd53f6 Merge remote-tracking branch 'upstream/master' into ca-derivations-prep 2023-12-11 12:42:43 -05:00
John Ericson
411e4d0c24 Let tests themselves intentionally leak temp dir (#1320)
* Let tests themselves intentionally leak temp dir

By default Yath will clean up temporary files, so the result is the
same. But `--keep-dirs` can be passed to `yath test`, telling Yath
*not* to clean them up. This is very useful for debugging.

* Update t/lib/HydraTestContext.pm

Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
2023-12-08 16:30:31 +00:00
John Ericson
831021808c Merge pull request #1318 from obsidiansystems/use-build-result-serialiser
Use factored-out `BuildResult` serializer
2023-12-08 11:25:05 -05:00
John Ericson
2ee0068fdc Do not copy for both stores for now
It has a performance cost, and as the comment says, we should be doing
the better solution. We want to land this preparatory change on prod
while the rest is still on staging, so we should just skip it for now.

Skipping it will not affect regular fixed-output and input-addressed
derivations, which are the only ones prod would deal with upon getting
this code.

The main CA derivations support branch will revert this commit so it
still works.
2023-12-07 15:05:03 -05:00
John Ericson
31ea6458ca Merge remote-tracking branch 'upstream/master' into ca-derivations-prep 2023-12-07 15:01:35 -05:00
John Ericson
91bbd5366f Merge pull request #1319 from NixOS/nixpkgs-newer-23.05
flake.lock: Update Nixpkgs
2023-12-07 13:10:08 -05:00
John Ericson
a45a27851b flake.lock: Update Nixpkgs
This will be required for upgrading Nix beyond 2.19.

Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/ef0bc3976340dab9a4e087a0bcff661a8b2e87f3' (2023-06-21)
  → 'github:NixOS/nixpkgs/e9f06adb793d1cca5384907b3b8a4071d5d7cb19' (2023-12-03)
2023-12-07 12:21:18 -05:00
John Ericson
6a54ab24e2 Use factored-out BuildResult serializer
For the record, here is the Nix 2.19 version:
https://github.com/NixOS/nix/blob/2.19-maintenance/src/libstore/serve-protocol.cc,
which is what we would initially use.

It is a more complete version of what Hydra has today except for one
thing: it always unconditionally sets the start/stop times.

I think that is correct, as the other end seems to unconditionally
measure them, but just to be extra careful, I reproduced the old
behavior of falling back on Hydra's own measurements if `startTime` is
0.

The only difference is that the fallback `stopTime` is now measured from
after the entire `BuildResult` is transferred over the wire, but I think
that should be negligible if it is measurable at all. (And remember,
this is a fallback case I already suspect is dead code.)
2023-12-07 02:00:22 -05:00
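The preserved fallback amounts to something like the following; `startTime`/`stopTime` match Nix's `BuildResult` fields, the rest is illustrative:
```cpp
#include <ctime>

struct BuildResult { // trimmed to the relevant fields
    time_t startTime = 0, stopTime = 0;
};

// Keep remote-reported times when present; otherwise fall back to
// Hydra's own measurements taken around the remote build. The fallback
// stopTime is measured after the full transfer, which the message
// above argues is negligible.
void applyTimeFallback(BuildResult & result, time_t measuredStart, time_t measuredStop)
{
    if (result.startTime == 0) {
        result.startTime = measuredStart;
        result.stopTime = measuredStop;
    }
}
```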
John Ericson
58707438ba Merge pull request #1317 from obsidiansystems/substitute-flag
`copyClosureTo`: Use `SubstituteFlag` instead of `bool`
2023-12-07 00:23:54 -05:00
John Ericson
86cd5e9076 copyClosureTo: Use SubstituteFlag instead of bool
This matches Nix (in the same serialization logic in
`src/libstore/legacy-ssh-store.cc`) and adds clarity.
2023-12-07 00:18:50 -05:00
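Nix defines these boolean flag types roughly as below; the gain over a bare `bool` is that call sites document themselves:
```cpp
#include <cstdio>

// Mirrors the shape of Nix's flag enums (SubstituteFlag lives in
// store-api.hh); the exact definition there may differ slightly.
enum SubstituteFlag : bool { NoSubstitute = false, Substitute = true };

static void copyClosureTo(SubstituteFlag useSubstitutes = NoSubstitute)
{
    std::printf(useSubstitutes ? "may substitute\n" : "no substitution\n");
}

int main()
{
    copyClosureTo(Substitute); // intent is readable without checking the signature
    copyClosureTo();           // defaults to NoSubstitute
}
```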
John Ericson
11f8030b0f Add comment from GitHub about adding to store as code comment 2023-12-06 17:59:25 -05:00
John Ericson
3df8feb3a2 Add TODO about setting null instead of empty string in JSON
An empty string is a sneaky way to avoid hard failures --- things that
expect strings still get strings, but it does conversely open the door
up to soft failures (spooky-action-at-a-distance ones because the string
did not have the expected invariants).

"Fail fast" with null will ultimately make the system more robust, but
force us to fix more things up front, and I don't want to change this
without also fixing those things up front, especially as this commit is
for now just part of the preparatory PR for which this is dead code.
2023-12-05 11:31:06 -05:00
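With the `nlohmann::json` objects Hydra passes around, the trade-off looks like this in miniature (the key is illustrative):
```cpp
#include <nlohmann/json.hpp>

int main()
{
    nlohmann::json job;

    // Soft failure: consumers expecting a string still get one, but it
    // silently lacks the invariants of a real value.
    job["drvPath"] = "";

    // Fail fast: consumers must notice and handle the null.
    job["drvPath"] = nullptr;

    return job["drvPath"].is_null() ? 0 : 1;
}
```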
John Ericson
069b7775c5 hydra-eval-jobs: Ensure we have output path if ca-derivations is disabled
Brought up by @thufschmitt in
https://github.com/NixOS/hydra/pull/1316#discussion_r1415111329 . This
makes this closer to what was originally there --- which just dispatched
off the experimental feature rather than the presence/absence of the
output, too.
2023-12-05 11:26:26 -05:00
John Ericson
e3443cd22a Put back nicer copyClosure instead of manual closure + copy
It looks like we accidentally got the old code back, probably after a merge
conflict resolution.
2023-12-04 17:41:11 -05:00
John Ericson
8046ec2668 Remove unused outputHashes variable
This looks like a stray copy paste.
2023-12-04 16:21:56 -05:00
John Ericson
9ba4417940 Prepare for CA derivation support with lower impact changes
This is just C++ changes without any Perl / Frontend / SQL Schema
changes.

The idea is that it should be possible to redeploy Hydra with these
changes with (a) no schema migration and also (b) no regressions. We
should be able to much more safely deploy these to a staging server and
then production `hydra.nixos.org`.

Extracted from #875

Co-Authored-By: Théophane Hufschmitt <theophane.hufschmitt@tweag.io>
Co-Authored-By: Alexander Sosedkin <monk@unboiled.info>
Co-Authored-By: Andrea Ciceri <andrea.ciceri@autistici.org>
Co-Authored-By: Charlotte 🦝 Delenk <lotte@chir.rs>
Co-Authored-By: Sandro Jäckel <sandro.jaeckel@gmail.com>
2023-12-04 16:14:47 -05:00
John Ericson
a5d44b60ea Merge pull request #1313 from obsidiansystems/split-buildRemote
Split the `buildRemote` function, take 2
2023-12-04 11:37:36 -05:00
John Ericson
363604846a Again, use const in for loop
As requested by @teh. Was lost in merge with master, now added back.
2023-12-04 11:31:05 -05:00
John Ericson
162b538912 Remove unused thisArrow variable 2023-12-04 11:27:39 -05:00
John Ericson
104baef503 Document the connection initialization process 2023-12-04 09:42:04 -05:00
John Ericson
3c5636162a Merge remote-tracking branch 'upstream/master' into split-buildRemote 2023-12-04 09:38:43 -05:00
Janne Heß
874fcae1e8 Merge pull request #1301 from delroth/queue-runner-perf
queue-runner: only re-sort runnables by prio once per dispatch cycle
2023-12-04 15:27:14 +01:00
Eelco Dolstra
4dc8fe0b08 Merge pull request #1312 from obsidiansystems/clean-up-deps
Cleanup deps
2023-12-04 15:15:09 +01:00
John Ericson
67eeabd518 Merge remote-tracking branch 'upstream/master' into split-buildRemote 2023-12-04 09:12:58 -05:00
John Ericson
622c25e3c4 Sedding prior to merge 2023-12-04 08:56:06 -05:00
Eelco Dolstra
f216bce0e6 Merge pull request #1314 from obsidiansystems/2.19
Upgrade to Nix 2.19
2023-12-01 17:07:51 +01:00
Eelco Dolstra
4d1c850512 Merge pull request #1308 from chayleaf/2.18
support nix 2.18
2023-12-01 15:10:29 +01:00
John Ericson
c922e73c11 Update to Nix 2.19
Flake lock file updates:

• Updated input 'nix':
    'github:NixOS/nix/f5f4de6a550327b4b1a06123c2e450f1b92c73b6' (2023-10-02)
  → 'github:NixOS/nix/50f8f1c8bc019a4c0fd098b9ac674b94cfc6af0d' (2023-11-27)
2023-11-30 15:26:46 -05:00
John Ericson
e172461e55 Use const in for loop
As requested by @teh
2023-11-30 12:19:20 -05:00
John Ericson
0917145622 Make new functions not in header static 2023-11-30 12:19:05 -05:00
John Ericson
2bda7ca642 Further use Machine::Connection to deduplicate 2023-11-30 11:31:58 -05:00
John Ericson
831a2d9bd5 Merge remote-tracking branch 'upstream/master' into split-buildRemote 2023-11-30 11:27:40 -05:00
John Ericson
5db374cb50 Cleanup deps
- `nativeBuildInputs` vs `buildInputs`

- narrow down `with`s for clarity

- use `autoreconfHook` not `bootstrap` script

These sorts of changes have also been done in the Nix repo.
2023-11-30 10:48:17 -05:00
chayleaf
e9da80fff6 support nix 2.18 2023-11-21 18:41:52 +07:00
Janne Heß
8f48e4ddec Merge pull request #1268 from knedlsepp/fix-mime
Fix MIME types when serving .js and .css to fix rendering of HTML reports
2023-11-17 22:16:27 +01:00
Janne Heß
33f8a36736 Merge pull request #1304 from stigtsp/crypt-passphrase-argon2-output-len
Set output length of C::P::Argon2 hashes to 16
2023-10-20 16:03:18 +02:00
Stig Palmquist
6a5fb9efae Set output length of C::P::Argon2 hashes to 16
Since the default lengths in Crypt::Passphrase::Argon2 changed from 16
to 32 in 0.009, some tests that expected the passphrase to be
unchanged started failing.
2023-10-20 00:09:28 +02:00
Janne Heß
c1a5ff3959 Merge pull request #1258 from nh2/patch-1
`renderInputDiff`: Increase git hash length 6 -> 8
2023-09-09 17:17:43 +02:00
Janne Heß
8520ab1391 Merge pull request #1300 from sternenseemann/doc-eval-builds
Document all_builds (/eval/{eval-id}/builds) endpoint
2023-09-09 17:11:52 +02:00
Janne Heß
8a413ce71a Merge pull request #1293 from arianvp/patch-1
Fix docs for  /eval/{id}  endpoint
2023-09-09 17:10:56 +02:00
Pierre Bourdon
b7c864c515 queue-runner: only re-sort runnables by prio once per dispatch cycle
The previous implementation was O(N² log N) due to sorting the full
runnables priority list once per runnable being scheduled. While not
confirmed, this is suspected to cause performance issues and
bottlenecking with the queue runner when the runnable list gets large
enough.

This commit changes the dispatcher to instead only sort runnables per
priority once per dispatch cycle. This has the drawback of being less
reactive to runnable priority changes: the previous code would react
immediately, while this might end up using "old" priorities until the
next dispatch cycle. However, dispatch cycles are not supposed to take
very long (seconds, not minutes/hours), so this is not expected to have
much or any practical impact.

Ideally runnables would be maintained in a sorted data structure instead
of the current approach of copying + sorting in the scheduler. This
would however be a much more invasive change to implement, and might
have to wait until we can confirm where the queue runner bottlenecks
actually lie.
2023-09-08 23:38:30 +02:00
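In outline, the change hoists the sort out of the per-runnable loop; a hypothetical structure, not the actual queue-runner code:
```cpp
#include <algorithm>
#include <vector>

struct Runnable { int priority = 0; /* ... */ };

void dispatchCycle(std::vector<Runnable> & runnables)
{
    // Sort once per cycle: O(N log N) per cycle rather than per
    // runnable dispatched.
    std::stable_sort(runnables.begin(), runnables.end(),
        [](const Runnable & a, const Runnable & b) {
            return a.priority > b.priority; // highest priority first
        });

    for (auto & runnable : runnables) {
        // Assign to machines in (possibly slightly stale) priority
        // order; staleness lasts at most one dispatch cycle.
        (void) runnable;
    }
}
```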
sternenseemann
e2195c46d1 hydra-api.yaml: document all_builds (/eval/{eval-id}/builds) 2023-08-30 15:08:11 +02:00
sternenseemann
113836ebae hydra-api.yaml: name JobsetEval parameter eval-id
This is more accurate since the id space is not shared between build and
eval ids.
2023-08-30 15:06:48 +02:00
Eelco Dolstra
00d30874da Merge pull request #1296 from DeterminateSystems/nix-2.17
Support Nix 2.17
2023-08-23 17:53:14 +02:00
Eelco Dolstra
35ccc9ebb2 Fix indentation
Co-authored-by: John Ericson <git@JohnEricson.me>
2023-08-23 17:04:45 +02:00
Linus Heckemann
9f0427385f Apply LTO fix suggested by Ericson2314 2023-08-20 14:55:56 +02:00
Linus Heckemann
b23431a657 Support Nix 2.17 2023-08-04 15:53:48 +02:00
Eelco Dolstra
60e2c377d3 Merge pull request #1295 from arianvp/patch-3
Fix documentation of defaultpath in api docs
2023-08-03 17:09:21 +02:00
Arian van Putten
a78664f1b5 Fix documentation of defaultpath in api docs 2023-07-20 14:43:03 +02:00
Arian van Putten
46246dcae3 Fix docs for /eval/{id} endpoint
You need to pass it an eval-id, not a build-id, pretty sure
2023-07-19 15:13:25 +02:00
Janne Heß
d135b123cd Merge pull request #1292 from Ma27/fix-queue-runner-stats
hydra-queue-runner: fix stats
2023-07-17 09:56:05 +02:00
Janne Heß
526e8bd744 Merge pull request #1291 from NixOS/update-nix-nixpkgs 2023-06-25 20:53:50 +02:00
Maximilian Bosch
5c35d1be20 hydra-queue-runner: fix stats 2023-06-25 17:28:15 +02:00
Eelco Dolstra
ce001bb142 Relax time interval checks
I saw one of these failing randomly.
2023-06-23 15:09:09 +02:00
Eelco Dolstra
9f69bb5c2c Fix compilation against Nix 2.16 2023-06-23 15:06:55 +02:00
Eelco Dolstra
a0c8440a5c Update to Nix 2.16 and NixOS 23.05
Flake lock file updates:

• Updated input 'nix':
    'github:nixos/nix/4acc684ef7b3117c6d6ac12837398a0008a53d85' (2023-02-22)
  → 'github:NixOS/nix/84050709ea18f3285a85d729f40c8f8eddf5008e' (2023-06-06)
• Added input 'nix/flake-compat':
    'github:edolstra/flake-compat/35bb57c0c8d8b62bbfd284272c928ceb64ddbde9' (2023-01-17)
• Updated input 'nixpkgs':
    follows 'nix/nixpkgs'
  → 'github:NixOS/nixpkgs/ef0bc3976340dab9a4e087a0bcff661a8b2e87f3' (2023-06-21)
2023-06-23 15:06:46 +02:00
Janne Heß
13ef4e3c5d Merge pull request #1286 from JulienMalka/fix-hydra-eval-jobs-dot
hydra-eval-jobs: fix jobs containing a dot being dropped
2023-05-08 14:48:33 +02:00
Julien Malka
b4099df91e hydra-eval-jobs: fix jobs containing a dot being dropped 2023-04-25 10:37:41 +02:00
Eelco Dolstra
082495e34e Merge pull request #1275 from Ma27/nix-2.13
Nix 2.13 + nixpkgs input update
2023-03-27 13:30:13 +02:00
Graham Christensen
399b61ff67 Merge pull request #1277 from Mindavi/systemd/network-online
systemd: hydra-queue-runner: wait for network-online
2023-03-24 09:56:29 -04:00
Graham Christensen
3da6ef0d6d Merge pull request #1281 from NixOS/rv-google-signin
Use new Google for Web signin
2023-03-24 09:55:22 -04:00
Rob Vermaas
f88bef15ed Use new Google for Web signin, the old way will be deprecated Mar 31st 2023 2023-03-14 09:04:12 +01:00
Rick van Schijndel
a084e204ae systemd: hydra-queue-runner: wait for network.target too
Co-authored-by: Sandro <sandro.jaeckel@gmail.com>
2023-03-07 21:56:20 +01:00
Graham Christensen
ecfa817d30 Merge pull request #1279 from DeterminateSystems/drop-unnecessary-index
Drop unused IndexBuildOutputsOnPath index
2023-03-06 11:06:31 -05:00
Cole Helbling
8d53c3ca11 test: use ubuntu-latest 2023-03-06 07:56:05 -08:00
Cole Helbling
810d2e6b51 Drop unused IndexBuildOutputsOnPath index
Also it's larger than the actual table it's indexing lol.

    -[ RECORD 30 ]----------+-----------------------------------------
    table_name              | buildoutputs
    index_name              | indexbuildoutputsonpath
    index_scans_count       | 0
    index_size              | 31 GB
    table_reads_index_count | 2128699937
    table_reads_seq_count   | 0
    table_reads_count       | 2128699937
    table_writes_count      | 22442976
    table_size              | 28 GB
2023-03-06 07:56:05 -08:00
Maximilian Bosch
f44d3d6ec9 Update Nix to 2.13.3
Includes the following required fixes:

* perl-bindings are correctly initialized: 77d8066e83
* /etc/ must be unwritable in build sandbox: 4acc684ef7
2023-03-04 12:07:34 +01:00
Rick van Schijndel
65c1249227 systemd: hydra-queue-runner: wait for network-online
This prevents eval errors when a machine is just started and the network isn't yet online.
I'm running hydra on a laptop and the network takes a bit of time to come online (WLAN),
so it's nice if the evaluator starts only when the network actually goes online.

Otherwise an error like this can happen on the first eval(s):

```
error fetching latest change from git repo at `https://github.com/nixos/nixpkgs.git':
fatal: unable to access 'https://github.com/nixos/nixpkgs.git/': Could not resolve host: github.com
```
2023-02-16 19:24:53 +01:00
Linus Heckemann
73dff15039 tests: ports are numbers 2023-02-04 20:12:30 +01:00
Linus Heckemann
ddd3ac3a4d name tests 2023-02-04 20:12:30 +01:00
Maximilian Bosch
c7716817a9 Update Nix to 2.13 2023-02-04 20:11:53 +01:00
Linus Heckemann
5b35e13898 hydra-queue-runner: use initializer lists for constructing JSON
And also fix the parts that were broken
2023-02-04 20:08:27 +01:00
Linus Heckemann
96e36201eb hydra-queue-runner: adapt to nlohmann::json 2023-02-04 20:08:09 +01:00
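What "initializer lists for constructing JSON" means with nlohmann::json, as adopted in the two commits above; the keys here are invented for illustration:
```cpp
#include <nlohmann/json.hpp>

int main()
{
    // One declarative expression instead of key-by-key assignment:
    nlohmann::json status = {
        {"status", "up"},
        {"queue", {
            {"runnable", 7},
            {"building", 2},
        }},
    };
    return status["queue"]["runnable"] == 7 ? 0 : 1;
}
```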
Josef Kemetmüller
ad99d3366f Fix MIME types when serving .js and .css
To correctly render HTML reports we make sure to return the following MIME
types instead of "text/plain"

- *.css: "text/css"
- *.js: "application/javascript"

Fixes: #1267
2022-12-29 22:26:59 +01:00
Graham Christensen
f48f00ee6d Merge pull request #1256 from cransom/hydra-evaluator-broken-connection
exit with error if database connectivity lost
2022-12-22 19:28:51 -05:00
Eelco Dolstra
d1fac69c21 Merge pull request #1264 from SuperSandro2000/patch-2
Fix example config
2022-12-05 17:48:46 +01:00
Graham Christensen
01802efc17 Merge pull request #1263 from Ma27/fix-my-jobs-tab
Fix "My Jobs" tab in user dashboard
2022-12-05 01:55:49 +01:00
Sandro
7f816e3237 Fix link 2022-12-05 00:35:05 +01:00
Sandro
213879484d Fix example config 2022-12-05 00:22:35 +01:00
Maximilian Bosch
fd765bc97a Fix "My Jobs" tab in user dashboard
Nowadays `Builds` doesn't reference `Project` directly anymore. This
means that simply resolving both `jobset` and `project` with a single
JOIN from `Builds` doesn't work anymore. Instead we need to resolve the
relation to `jobset` first and then the relation to `project`.

For similar fixes see e.g. c7c4759600.
2022-11-22 20:54:51 +01:00
Niklas Hambüchen
d1d171ee90 renderInputDiff: Increase git hash length 6 -> 8
Nixpkgs has so many commits that length 6 is often ambiguous,
making the use of the shown values with git difficult.
2022-11-02 17:30:32 +01:00
Casey Ransom
70ad3a924a exit with error if database connectivity lost
There's currently no automatic recovery for disconnected databases in
the evaluator. This means if the database is ever temporarily
unavailable, hydra-evaluator will sit and spin with no work
accomplished.

If this condition is caught, the daemon will exit and systemd will be
responsible for resuming the service.
2022-10-26 16:13:40 -04:00
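The shape of the fix, mirroring the hydra-evaluator hunks further down in this diff: catch libpqxx's broken-connection exception and hard-exit so systemd's restart policy brings the service back with a fresh connection:
```cpp
#include <pqxx/pqxx>
#include <cstdio>
#include <cstdlib>
#include <exception>

void runForever()
{
    while (true) {
        try {
            // loop(); // the evaluator's actual work would run here
            break;     // placeholder so this sketch terminates
        } catch (pqxx::broken_connection & e) {
            std::fprintf(stderr, "Database connection broken: %s\n", e.what());
            std::_Exit(1); // systemd resumes the service
        } catch (std::exception & e) {
            std::fprintf(stderr, "exception in main loop: %s\n", e.what());
            // the real code sleeps 30 seconds and retries
        }
    }
}
```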
John Ericson
3526d61ff2 Merge remote-tracking branch 'upstream/master' into split-buildRemote 2022-10-25 11:24:54 -04:00
Théophane Hufschmitt
143c31734f Move all the build remote utils to their namespace
Just don't pollute the global one
2022-10-25 10:04:29 +02:00
Théophane Hufschmitt
6e571e26ff Build the resolved derivation and not the original one 2022-03-29 17:05:30 +02:00
Théophane Hufschmitt
92b627ac1b Remove an accidental re-indenting of a comment
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
2022-03-29 17:04:19 +02:00
Théophane Hufschmitt
b430d41afd Use the BuildOptions more eagerly 2022-03-29 17:04:19 +02:00
Théophane Hufschmitt
fd0ae78eba Factor out the copying from the build store 2022-03-29 17:04:19 +02:00
Théophane Hufschmitt
a778a89f04 Factor out the queryPathInfos part of the build 2022-03-29 17:04:19 +02:00
Théophane Hufschmitt
365776f5d7 Factor out the building part 2022-03-29 17:04:19 +02:00
Théophane Hufschmitt
9f1b911625 Factor more stuff out 2022-03-29 17:04:17 +02:00
Théophane Hufschmitt
2f494b7834 Factor out the creation of the log file 2022-03-29 16:52:59 +02:00
Théophane Hufschmitt
5db8642224 Factor out a struct representing a connection to a machine 2022-03-29 16:52:59 +02:00
57 changed files with 1675 additions and 978 deletions


@@ -4,7 +4,7 @@ on:
push:
jobs:
tests:
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:

.gitignore vendored

@@ -38,6 +38,7 @@ t/jobs/declarative/project.json
hydra-config.h
hydra-config.h.in
result
result-*
outputs
config
stamp-h1


@@ -1,8 +1,12 @@
SUBDIRS = src t doc
SUBDIRS = src doc
if CAN_DO_CHECK
SUBDIRS += t
endif
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
EXTRA_DIST = hydra-module.nix
EXTRA_DIST = nixos-modules/hydra.nix
install-data-local: hydra-module.nix
install-data-local: nixos-modules/hydra.nix
$(INSTALL) -d $(DESTDIR)$(datadir)/nix
$(INSTALL_DATA) hydra-module.nix $(DESTDIR)$(datadir)/nix/
$(INSTALL_DATA) nixos-modules/hydra.nix $(DESTDIR)$(datadir)/nix/hydra-module.nix


@@ -80,7 +80,7 @@ $ nix-build
You can use the provided shell.nix to get a working development environment:
```
$ nix-shell
$ ./bootstrap
$ autoreconfPhase
$ configurePhase # NOTE: not ./configure
$ make
```


@@ -1,2 +0,0 @@
#! /bin/sh -e
exec autoreconf -vfi


@@ -10,8 +10,6 @@ AC_PROG_LN_S
AC_PROG_LIBTOOL
AC_PROG_CXX
CXXFLAGS+=" -std=c++17"
AC_PATH_PROG([XSLTPROC], [xsltproc])
AC_ARG_WITH([docbook-xsl],
@@ -55,9 +53,6 @@ PKG_CHECK_MODULES([NIX], [nix-main nix-expr nix-store])
testPath="$(dirname $(type -p expr))"
AC_SUBST(testPath)
jobsPath="$(realpath ./t/jobs)"
AC_SUBST(jobsPath)
CXXFLAGS+=" -include nix/config.h"
AC_CONFIG_FILES([
@@ -73,11 +68,22 @@ AC_CONFIG_FILES([
src/lib/Makefile
src/root/Makefile
src/script/Makefile
t/Makefile
t/jobs/config.nix
t/jobs/declarative/project.json
])
# Tests might be filtered out
AM_CONDITIONAL([CAN_DO_CHECK], [test -f "$srcdir/t/api-test.t"])
AM_COND_IF(
[CAN_DO_CHECK],
[
jobsPath="$(realpath ./t/jobs)"
AC_SUBST(jobsPath)
AC_CONFIG_FILES([
t/Makefile
t/jobs/config.nix
t/jobs/declarative/project.json
])
])
AC_CONFIG_COMMANDS([executable-scripts], [])
AC_CONFIG_HEADER([hydra-config.h])


@@ -74,6 +74,30 @@ following:
}
}
Populating a Cache
------------------
A common use for Hydra is to pre-build and cache derivations which
take a long time to build. While it is possible to directly access the
Hydra server's store over SSH, a more scalable option is to upload
built derivations to a remote store like an [S3-compatible object
store](https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-help-stores.html#s3-binary-cache-store). Setting
the `store_uri` parameter will cause Hydra to sign and upload
derivations as they are built:
```
store_uri = s3://cache-bucket-name?compression=zstd&parallel-compression=true&write-nar-listing=1&ls-compression=br&log-compression=br&secret-key=/path/to/cache/private/key
```
This example uses [Zstandard](https://github.com/facebook/zstd)
compression on derivations to reduce CPU usage on the server, but
[Brotli](https://brotli.org/) compression for derivation listings and
build logs because it has better browser support.
See [`nix help
stores`](https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-help-stores.html)
for a description of the store URI format.
Statsd Configuration
--------------------
@@ -131,8 +155,8 @@ use LDAP to manage roles and users.
This is configured by defining the `<ldap>` block in the configuration file.
In this block it's possible to configure the authentication plugin in the
`<config>` block. All options are directly passed to `Catalyst::Authentication::Store::LDAP`.
The documentation for the available settings can be found [here]
(https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).
The documentation for the available settings can be found
[here](https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).
Note that the bind password (if needed) should be supplied as an included file to
prevent it from leaking to the Nix store.
@@ -179,6 +203,7 @@ Example configuration:
<role_search_options>
deref = always
</role_search_options>
</store>
</config>
<role_mapping>
# Make all users in the hydra_admin group Hydra admins


@@ -18,7 +18,7 @@ $ nix-shell
To build Hydra, you should then do:
```console
[nix-shell]$ ./bootstrap
[nix-shell]$ autoreconfPhase
[nix-shell]$ configurePhase
[nix-shell]$ make
```


@@ -404,3 +404,10 @@ analogous:
| `String value` | `gitea_status_repo` | *Name of the `Git checkout` input* |
| `String value` | `gitea_http_url` | *Public URL of `gitea`*, optional |
Content-addressed derivations
-----------------------------
Hydra can, to a certain extent, use the [`ca-derivations` experimental Nix feature](https://github.com/NixOS/rfcs/pull/62).
To use it, make sure that the Nix version you use is at least as recent as the one used in Hydra's flake.
Be warned that this support is still highly experimental, and anything beyond the basic functionality might be broken.

flake.lock generated

@@ -1,5 +1,21 @@
{
"nodes": {
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1673956053,
"narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"lowdown-src": {
"flake": false,
"locked": {
@@ -18,37 +34,56 @@
},
"nix": {
"inputs": {
"flake-compat": "flake-compat",
"lowdown-src": "lowdown-src",
"nixpkgs": "nixpkgs",
"nixpkgs": [
"nixpkgs"
],
"nixpkgs-regression": "nixpkgs-regression"
},
"locked": {
"lastModified": 1668607642,
"narHash": "sha256-lNnk5thRq43XPcA+5KwoHgdsKf3urmE4B2xzHokVMbc=",
"owner": "edolstra",
"lastModified": 1706208340,
"narHash": "sha256-wNyHUEIiKKVs6UXrUzhP7RSJQv0A8jckgcuylzftl8k=",
"owner": "NixOS",
"repo": "nix",
"rev": "561440bd6ddebd53d7b42bced22cb78fd607a6de",
"rev": "2c4bb93ba5a97e7078896ebc36385ce172960e4e",
"type": "github"
},
"original": {
"owner": "edolstra",
"ref": "lazy-trees",
"owner": "NixOS",
"ref": "2.19-maintenance",
"repo": "nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1657693803,
"narHash": "sha256-G++2CJ9u0E7NNTAi9n5G8TdDmGJXcIjkJ3NF8cetQB8=",
"lastModified": 1701615100,
"narHash": "sha256-7VI84NGBvlCTduw2aHLVB62NvCiZUlALLqBe5v684Aw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "365e1b3a859281cf11b94f87231adeabbdd878a2",
"rev": "e9f06adb793d1cca5384907b3b8a4071d5d7cb19",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-22.05-small",
"ref": "nixos-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-for-fileset": {
"locked": {
"lastModified": 1706098335,
"narHash": "sha256-r3dWjT8P9/Ah5m5ul4WqIWD8muj5F+/gbCdjiNVBKmU=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a77ab169a83a4175169d78684ddd2e54486ac651",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-23.11",
"repo": "nixpkgs",
"type": "github"
}
@@ -72,10 +107,8 @@
"root": {
"inputs": {
"nix": "nix",
"nixpkgs": [
"nix",
"nixpkgs"
]
"nixpkgs": "nixpkgs",
"nixpkgs-for-fileset": "nixpkgs-for-fileset"
}
}
},

flake.nix

@@ -1,19 +1,25 @@
{
description = "A Nix-based continuous build system";
inputs.nixpkgs.follows = "nix/nixpkgs";
inputs.nix.url = "github:edolstra/nix/lazy-trees";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";
inputs.nix.url = "github:NixOS/nix/2.19-maintenance";
inputs.nix.inputs.nixpkgs.follows = "nixpkgs";
outputs = { self, nixpkgs, nix }:
# TODO get rid of this once https://github.com/NixOS/nix/pull/9546 is
# merged and we upgrade our Nix, so the main `nixpkgs` input is at least
# 23.11 and has `lib.fileset`.
inputs.nixpkgs-for-fileset.url = "github:NixOS/nixpkgs/nixos-23.11";
outputs = { self, nixpkgs, nix, nixpkgs-for-fileset }:
let
version = "${builtins.readFile ./version.txt}.${builtins.substring 0 8 (self.lastModifiedDate or "19700101")}.${self.shortRev or "DIRTY"}";
systems = [ "x86_64-linux" "aarch64-linux" ];
forEachSystem = nixpkgs.lib.genAttrs systems;
overlayList = [ self.overlays.default nix.overlays.default ];
pkgsBySystem = forEachSystem (system: import nixpkgs {
inherit system;
overlays = [ self.overlays.default nix.overlays.default ];
overlays = overlayList;
});
# NixOS configuration used for VM tests.
@@ -60,197 +66,9 @@
};
hydra = with final; let
perlDeps = buildEnv {
name = "hydra-perl-deps";
paths = with perlPackages; lib.closePropagation
[
AuthenSASL
CatalystActionREST
CatalystAuthenticationStoreDBIxClass
CatalystAuthenticationStoreLDAP
CatalystDevel
CatalystPluginAccessLog
CatalystPluginAuthorizationRoles
CatalystPluginCaptcha
CatalystPluginPrometheusTiny
CatalystPluginSessionStateCookie
CatalystPluginSessionStoreFastMmap
CatalystPluginStackTrace
CatalystTraitForRequestProxyBase
CatalystViewDownload
CatalystViewJSON
CatalystViewTT
CatalystXRoleApplicator
CatalystXScriptServerStarman
CryptPassphrase
CryptPassphraseArgon2
CryptRandPasswd
DataDump
DateTime
DBDPg
DBDSQLite
DigestSHA1
EmailMIME
EmailSender
FileLibMagic
FileSlurper
FileWhich
final.nix.perl-bindings
git
IOCompress
IPCRun
IPCRun3
JSON
JSONMaybeXS
JSONXS
ListSomeUtils
LWP
LWPProtocolHttps
ModulePluggable
NetAmazonS3
NetPrometheus
NetStatsd
PadWalker
ParallelForkManager
PerlCriticCommunity
PrometheusTinyShared
ReadonlyX
SetScalar
SQLSplitStatement
Starman
StringCompareConstantTime
SysHostnameLong
TermSizeAny
TermReadKey
Test2Harness
TestPostgreSQL
TextDiff
TextTable
UUID4Tiny
YAML
XMLSimple
];
};
in
stdenv.mkDerivation {
name = "hydra-${version}";
src = self;
buildInputs =
[
makeWrapper
autoconf
automake
libtool
unzip
nukeReferences
pkg-config
libpqxx
top-git
mercurial
darcs
subversion
breezy
openssl
bzip2
libxslt
final.nix
perlDeps
perl
mdbook
pixz
boost
postgresql_13
(if lib.versionAtLeast lib.version "20.03pre"
then nlohmann_json
else nlohmann_json.override { multipleHeaders = true; })
prometheus-cpp
];
checkInputs = [
cacert
foreman
glibcLocales
libressl.nc
openldap
python3
];
hydraPath = lib.makeBinPath (
[
subversion
openssh
final.nix
coreutils
findutils
pixz
gzip
bzip2
xz
gnutar
unzip
git
top-git
mercurial
darcs
gnused
breezy
] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ]
);
OPENLDAP_ROOT = openldap;
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
export HYDRA_DATA="$(pwd)/.hydra-data"
export HYDRA_DBI='dbi:Pg:dbname=hydra;host=localhost;port=64444'
popd >/dev/null
'';
preConfigure = "autoreconf -vfi";
NIX_LDFLAGS = [ "-lpthread" ];
enableParallelBuilding = true;
doCheck = true;
preCheck = ''
patchShebangs .
export LOGNAME=''${LOGNAME:-foo}
# set $HOME for bzr so it can create its trace file
export HOME=$(mktemp -d)
'';
postInstall = ''
mkdir -p $out/nix-support
for i in $out/bin/*; do
read -n 4 chars < $i
if [[ $chars =~ ELF ]]; then continue; fi
wrapProgram $i \
--prefix PERL5LIB ':' $out/libexec/hydra/lib:$PERL5LIB \
--prefix PATH ':' $out/bin:$hydraPath \
--set HYDRA_RELEASE ${version} \
--set HYDRA_HOME $out/libexec/hydra \
--set NIX_RELEASE ${final.nix.name or "unknown"}
done
'';
dontStrip = true;
meta.description = "Build of Hydra on ${final.stdenv.system}";
passthru = { inherit perlDeps; inherit (final) nix; };
hydra = final.callPackage ./package.nix {
inherit (nixpkgs-for-fileset.lib) fileset;
rawSrc = self;
};
};
@@ -258,9 +76,15 @@
build = forEachSystem (system: packages.${system}.hydra);
buildNoTests = forEachSystem (system:
packages.${system}.hydra.overrideAttrs (_: {
doCheck = false;
})
);
manual = forEachSystem (system:
let pkgs = pkgsBySystem.${system}; in
pkgs.runCommand "hydra-manual-${version}" { }
pkgs.runCommand "hydra-manual-${pkgs.hydra.version}" { }
''
mkdir -p $out/share
cp -prvd ${pkgs.hydra}/share/doc $out/share/
@@ -272,6 +96,7 @@
tests.install = forEachSystem (system:
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-install";
nodes.machine = hydraServer;
testScript =
''
@@ -279,7 +104,7 @@
machine.wait_for_job("hydra-server")
machine.wait_for_job("hydra-evaluator")
machine.wait_for_job("hydra-queue-runner")
machine.wait_for_open_port("3000")
machine.wait_for_open_port(3000)
machine.succeed("curl --fail http://localhost:3000/")
'';
});
@@ -288,6 +113,7 @@
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
simpleTest {
name = "hydra-notifications";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
@@ -315,7 +141,7 @@
# Wait until InfluxDB can receive web requests
machine.wait_for_job("influxdb")
machine.wait_for_open_port("8086")
machine.wait_for_open_port(8086)
# Create an InfluxDB database where hydra will write to
machine.succeed(
@@ -325,7 +151,7 @@
# Wait until hydra-server can receive HTTP requests
machine.wait_for_job("hydra-server")
machine.wait_for_open_port("3000")
machine.wait_for_open_port(3000)
# Setup the project and jobset
machine.succeed(
@@ -346,6 +172,7 @@
let pkgs = pkgsBySystem.${system}; in
with import (nixpkgs + "/nixos/lib/testing-python.nix") { inherit system; };
makeTest {
name = "hydra-gitea";
nodes.machine = { pkgs, ... }: {
imports = [ hydraServer ];
services.hydra-dev.extraConfig = ''
@@ -561,50 +388,8 @@
default = pkgsBySystem.${system}.hydra;
});
nixosModules.hydra = {
imports = [ ./hydra-module.nix ];
nixpkgs.overlays = [ self.overlays.default nix.overlays.default ];
};
nixosModules.hydraTest = { pkgs, ... }: {
imports = [ self.nixosModules.hydra ];
services.hydra-dev.enable = true;
services.hydra-dev.hydraURL = "http://hydra.example.org";
services.hydra-dev.notificationSender = "admin@hydra.example.org";
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_11;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"
time.timeZone = "UTC";
nix.extraOptions = ''
allowed-uris = https://github.com/
'';
};
nixosModules.hydraProxy = {
services.httpd = {
enable = true;
adminAddr = "hydra-admin@example.org";
extraConfig = ''
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyRequests Off
ProxyPreserveHost On
ProxyPass /apache-errors !
ErrorDocument 503 /apache-errors/503.html
ProxyPass / http://127.0.0.1:3000/ retry=5 disablereuse=on
ProxyPassReverse / http://127.0.0.1:3000/
'';
};
nixosModules = import ./nixos-modules {
overlays = overlayList;
};
nixosConfigurations.container = nixpkgs.lib.nixosSystem {


@@ -533,13 +533,13 @@ paths:
schema:
$ref: '#/components/schemas/Error'
/eval/{build-id}:
/eval/{eval-id}:
get:
summary: Retrieves evaluations identified by build id
summary: Retrieves evaluations identified by eval id
parameters:
- name: build-id
- name: eval-id
in: path
description: build identifier
description: eval identifier
required: true
schema:
type: integer
@@ -551,6 +551,24 @@ paths:
schema:
$ref: '#/components/schemas/JobsetEval'
/eval/{eval-id}/builds:
get:
summary: Retrieves all builds belonging to an evaluation identified by eval id
parameters:
- name: eval-id
in: path
description: eval identifier
required: true
schema:
type: integer
responses:
'200':
description: builds
content:
application/json:
schema:
$ref: '#/components/schemas/JobsetEvalBuilds'
components:
schemas:
@@ -796,6 +814,13 @@ components:
additionalProperties:
$ref: '#/components/schemas/JobsetEvalInput'
JobsetEvalBuilds:
type: array
items:
type: object
additionalProperties:
$ref: '#/components/schemas/Build'
JobsetOverview:
type: array
items:
@@ -870,7 +895,7 @@ components:
description: Size of the produced file
type: integer
defaultpath:
description: This is a Git/Mercurial commit hash or a Subversion revision number
description: if path is a directory, the default file relative to path to be served
type: string
'type':
description: Types of build product (user defined)

nixos-modules/default.nix Normal file

@@ -0,0 +1,49 @@
{ overlays }:
rec {
hydra = {
imports = [ ./hydra.nix ];
nixpkgs = { inherit overlays; };
};
hydraTest = { pkgs, ... }: {
imports = [ hydra ];
services.hydra-dev.enable = true;
services.hydra-dev.hydraURL = "http://hydra.example.org";
services.hydra-dev.notificationSender = "admin@hydra.example.org";
systemd.services.hydra-send-stats.enable = false;
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_11;
# The following is to work around the following error from hydra-server:
# [error] Caught exception in engine "Cannot determine local time zone"
time.timeZone = "UTC";
nix.extraOptions = ''
allowed-uris = https://github.com/
'';
};
hydraProxy = {
services.httpd = {
enable = true;
adminAddr = "hydra-admin@example.org";
extraConfig = ''
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
ProxyRequests Off
ProxyPreserveHost On
ProxyPass /apache-errors !
ErrorDocument 503 /apache-errors/503.html
ProxyPass / http://127.0.0.1:3000/ retry=5 disablereuse=on
ProxyPassReverse / http://127.0.0.1:3000/
'';
};
};
}


@@ -340,7 +340,7 @@ in
systemd.services.hydra-queue-runner =
{ wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" "network.target" ];
after = [ "hydra-init.service" "network.target" "network-online.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];
environment = env // {

package.nix Normal file

@@ -0,0 +1,272 @@
{ stdenv
, lib
, fileset
, rawSrc
, buildEnv
, perlPackages
, nix
, git
, makeWrapper
, autoreconfHook
, nukeReferences
, pkg-config
, mdbook
, unzip
, libpqxx
, top-git
, mercurial
, darcs
, subversion
, breezy
, openssl
, bzip2
, libxslt
, perl
, pixz
, boost
, postgresql_13
, nlohmann_json
, prometheus-cpp
, cacert
, foreman
, glibcLocales
, libressl
, openldap
, python3
, openssh
, coreutils
, findutils
, gzip
, xz
, gnutar
, gnused
, rpm
, dpkg
, cdrkit
}:
let
perlDeps = buildEnv {
name = "hydra-perl-deps";
paths = lib.closePropagation
([
nix.perl-bindings
git
] ++ (with perlPackages; [
AuthenSASL
CatalystActionREST
CatalystAuthenticationStoreDBIxClass
CatalystAuthenticationStoreLDAP
CatalystDevel
CatalystPluginAccessLog
CatalystPluginAuthorizationRoles
CatalystPluginCaptcha
CatalystPluginPrometheusTiny
CatalystPluginSessionStateCookie
CatalystPluginSessionStoreFastMmap
CatalystPluginStackTrace
CatalystTraitForRequestProxyBase
CatalystViewDownload
CatalystViewJSON
CatalystViewTT
CatalystXRoleApplicator
CatalystXScriptServerStarman
CryptPassphrase
CryptPassphraseArgon2
CryptRandPasswd
DataDump
DateTime
DBDPg
DBDSQLite
DigestSHA1
EmailMIME
EmailSender
FileLibMagic
FileSlurper
FileWhich
IOCompress
IPCRun
IPCRun3
JSON
JSONMaybeXS
JSONXS
ListSomeUtils
LWP
LWPProtocolHttps
ModulePluggable
NetAmazonS3
NetPrometheus
NetStatsd
PadWalker
ParallelForkManager
PerlCriticCommunity
PrometheusTinyShared
ReadonlyX
SetScalar
SQLSplitStatement
Starman
StringCompareConstantTime
SysHostnameLong
TermSizeAny
TermReadKey
Test2Harness
TestPostgreSQL
TextDiff
TextTable
UUID4Tiny
YAML
XMLSimple
]));
};
version = "${builtins.readFile ./version.txt}.${builtins.substring 0 8 (rawSrc.lastModifiedDate or "19700101")}.${rawSrc.shortRev or "DIRTY"}";
in
stdenv.mkDerivation (finalAttrs: {
pname = "hydra";
inherit version;
src = fileset.toSource {
root = ./.;
fileset = fileset.unions ([
./version.txt
./configure.ac
./Makefile.am
./src
./doc
./nixos-modules/hydra.nix
# These are always needed to appease Automake
./t/Makefile.am
./t/jobs/config.nix.in
./t/jobs/declarative/project.json.in
] ++ lib.optionals finalAttrs.doCheck [
./t
./.perlcriticrc
./.yath.rc
]);
};
strictDeps = true;
nativeBuildInputs = [
makeWrapper
autoreconfHook
nukeReferences
pkg-config
mdbook
nix
perlDeps
perl
unzip
];
buildInputs = [
libpqxx
openssl
libxslt
nix
perlDeps
perl
boost
nlohmann_json
prometheus-cpp
];
nativeCheckInputs = [
bzip2
darcs
foreman
top-git
mercurial
subversion
breezy
openldap
postgresql_13
pixz
];
checkInputs = [
cacert
glibcLocales
libressl.nc
python3
];
hydraPath = lib.makeBinPath (
[
subversion
openssh
nix
coreutils
findutils
pixz
gzip
bzip2
xz
gnutar
unzip
git
top-git
mercurial
darcs
gnused
breezy
] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ]
);
OPENLDAP_ROOT = openldap;
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
export HYDRA_DATA="$(pwd)/.hydra-data"
export HYDRA_DBI='dbi:Pg:dbname=hydra;host=localhost;port=64444'
popd >/dev/null
'';
NIX_LDFLAGS = [ "-lpthread" ];
enableParallelBuilding = true;
doCheck = true;
preCheck = ''
patchShebangs .
export LOGNAME=''${LOGNAME:-foo}
# set $HOME for bzr so it can create its trace file
export HOME=$(mktemp -d)
'';
postInstall = ''
mkdir -p $out/nix-support
for i in $out/bin/*; do
read -n 4 chars < $i
if [[ $chars =~ ELF ]]; then continue; fi
wrapProgram $i \
--prefix PERL5LIB ':' $out/libexec/hydra/lib:$PERL5LIB \
--prefix PATH ':' $out/bin:$hydraPath \
--set HYDRA_RELEASE ${version} \
--set HYDRA_HOME $out/libexec/hydra \
--set NIX_RELEASE ${nix.name or "unknown"}
done
'';
dontStrip = true;
meta.description = "Build of Hydra on ${stdenv.system}";
passthru = { inherit perlDeps nix; };
})


@@ -7,6 +7,9 @@
#include "store-api.hh"
#include "eval.hh"
#include "eval-inline.hh"
#include "eval-settings.hh"
#include "signals.hh"
#include "terminal.hh"
#include "util.hh"
#include "get-drvs.hh"
#include "globals.hh"
@@ -25,7 +28,8 @@
#include <nlohmann/json.hpp>
void check_pid_status_nonblocking(pid_t check_pid) {
void check_pid_status_nonblocking(pid_t check_pid)
{
// Only check 'initialized' and known PID's
if (check_pid <= 0) { return; }
@@ -52,7 +56,7 @@ using namespace nix;
static Path gcRootsDir;
static size_t maxMemorySize;
struct MyArgs : MixEvalArgs, MixCommonArgs
struct MyArgs : MixEvalArgs, MixCommonArgs, RootArgs
{
Path releaseExpr;
bool flake = false;
@@ -93,14 +97,14 @@ static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std:
rec = [&](Value & v) {
state.forceValue(v, noPos);
if (v.type() == nString)
res.push_back(v.string.s);
res.emplace_back(v.string_view());
else if (v.isList())
for (unsigned int n = 0; n < v.listSize(); ++n)
rec(*v.listElems()[n]);
else if (v.type() == nAttrs) {
auto a = v.attrs->find(state.symbols.create(subAttribute));
if (a != v.attrs->end())
res.push_back(std::string(state.forceString(*a->value)));
res.push_back(std::string(state.forceString(*a->value, a->pos, "while evaluating meta attributes")));
}
};
@@ -174,7 +178,11 @@ static void worker(
if (auto drv = getDerivation(state, *v, false)) {
DrvInfo::Outputs outputs = drv->queryOutputs();
// CA derivations do not have static output paths, so we
// have to defensively not query output paths in case we
// encounter one.
DrvInfo::Outputs outputs = drv->queryOutputs(
!experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
if (drv->querySystem() == "unknown")
throw EvalError("derivation must have a 'system' attribute");
@@ -197,26 +205,30 @@ static void worker(
/* If this is an aggregate, then get its constituents. */
auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
if (a && state.forceBool(*a->value, a->pos)) {
if (a && state.forceBool(*a->value, a->pos, "while evaluating the `_hydraAggregate` attribute")) {
auto a = v->attrs->get(state.symbols.create("constituents"));
if (!a)
throw EvalError("derivation must have a constituents attribute");
NixStringContext context;
state.coerceToString(a->pos, *a->value, context, "while evaluating the `constituents` attribute", true, false);
for (auto & c : context)
std::visit(overloaded {
[&](const NixStringContextElem::Built & b) {
job["constituents"].push_back(b.drvPath->to_string(*state.store));
},
[&](const NixStringContextElem::Opaque & o) {
},
[&](const NixStringContextElem::DrvDeep & d) {
},
}, c.raw);
PathSet context;
state.coerceToString(a->pos, *a->value, context, true, false);
for (auto & i : context)
if (i.at(0) == '!') {
size_t index = i.find("!", 1);
job["constituents"].push_back(std::string(i, index + 1));
}
state.forceList(*a->value, a->pos);
state.forceList(*a->value, a->pos, "while evaluating the `constituents` attribute");
for (unsigned int n = 0; n < a->value->listSize(); ++n) {
auto v = a->value->listElems()[n];
state.forceValue(*v, noPos);
if (v->type() == nString)
job["namedConstituents"].push_back(state.forceStringNoCtx(*v));
job["namedConstituents"].push_back(v->string_view());
}
}
@@ -231,12 +243,17 @@ static void worker(
}
nlohmann::json out;
for (auto & j : outputs)
// FIXME: handle CA/impure builds.
if (j.second)
out[j.first] = state.store->printStorePath(*j.second);
for (auto & [outputName, optOutputPath] : outputs) {
if (optOutputPath) {
out[outputName] = state.store->printStorePath(*optOutputPath);
} else {
// See the `queryOutputs` call above; we should
// not encounter missing output paths otherwise.
assert(experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
out[outputName] = nullptr;
}
}
job["outputs"] = std::move(out);
reply["job"] = std::move(job);
}
@@ -245,7 +262,7 @@ static void worker(
StringSet ss;
for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
std::string name(state.symbols[i->name]);
if (name.find('.') != std::string::npos || name.find(' ') != std::string::npos) {
if (name.find(' ') != std::string::npos) {
printError("skipping job with illegal name '%s'", name);
continue;
}
@@ -416,7 +433,11 @@ int main(int argc, char * * argv)
if (response.find("attrs") != response.end()) {
for (auto & i : response["attrs"]) {
auto s = (attrPath.empty() ? "" : attrPath + ".") + (std::string) i;
std::string path = i;
if (path.find(".") != std::string::npos){
path = "\"" + path + "\"";
}
auto s = (attrPath.empty() ? "" : attrPath + ".") + (std::string) path;
newAttrs.insert(s);
}
}
@@ -507,7 +528,7 @@ int main(int argc, char * * argv)
auto drvPath2 = store->parseStorePath((std::string) (*job2)["drvPath"]);
auto drv2 = store->readDerivation(drvPath2);
job["constituents"].push_back(store->printStorePath(drvPath2));
drv.inputDrvs[drvPath2] = {drv2.outputs.begin()->first};
drv.inputDrvs.map[drvPath2].value = {drv2.outputs.begin()->first};
}
if (brokenJobs.empty()) {


@@ -2,6 +2,7 @@
#include "hydra-config.hh"
#include "pool.hh"
#include "shared.hh"
#include "signals.hh"
#include <algorithm>
#include <thread>
@@ -366,6 +367,9 @@ struct Evaluator
printInfo("received jobset event");
}
} catch (pqxx::broken_connection & e) {
printError("Database connection broken: %s", e.what());
std::_Exit(1);
} catch (std::exception & e) {
printError("exception in database monitor thread: %s", e.what());
sleep(30);
@@ -473,6 +477,9 @@ struct Evaluator
while (true) {
try {
loop();
} catch (pqxx::broken_connection & e) {
printError("Database connection broken: %s", e.what());
std::_Exit(1);
} catch (std::exception & e) {
printError("exception in main loop: %s", e.what());
sleep(30);


@@ -6,28 +6,28 @@
#include <fcntl.h>
#include "build-result.hh"
#include "path.hh"
#include "serve-protocol.hh"
#include "state.hh"
#include "current-process.hh"
#include "processes.hh"
#include "util.hh"
#include "worker-protocol.hh"
#include "serve-protocol.hh"
#include "serve-protocol-impl.hh"
#include "ssh.hh"
#include "finally.hh"
#include "url.hh"
using namespace nix;
struct Child
{
Pid pid;
AutoCloseFD to, from;
};
static void append(Strings & dst, const Strings & src)
{
dst.insert(dst.end(), src.begin(), src.end());
}
namespace nix::build_remote {
static Strings extraStoreArgs(std::string & machine)
{
Strings result;
@@ -48,7 +48,7 @@ static Strings extraStoreArgs(std::string & machine)
return result;
}
static void openConnection(Machine::ptr machine, Path tmpDir, int stderrFD, Child & child)
static void openConnection(::Machine::ptr machine, Path tmpDir, int stderrFD, SSHMaster::Connection & child)
{
std::string pgmName;
Pipe to, from;
@@ -78,7 +78,7 @@ static void openConnection(Machine::ptr machine, Path tmpDir, int stderrFD, Chil
append(argv, extraArgs);
}
child.pid = startProcess([&]() {
child.sshPid = startProcess([&]() {
restoreProcessContext();
if (dup2(to.readSide.get(), STDIN_FILENO) == -1)
@@ -98,14 +98,16 @@ static void openConnection(Machine::ptr machine, Path tmpDir, int stderrFD, Chil
to.readSide = -1;
from.writeSide = -1;
child.to = to.writeSide.release();
child.from = from.readSide.release();
child.in = to.writeSide.release();
child.out = from.readSide.release();
}
static void copyClosureTo(std::timed_mutex & sendMutex, Store & destStore,
FdSource & from, FdSink & to, const StorePathSet & paths,
bool useSubstitutes = false)
static void copyClosureTo(
::Machine::Connection & conn,
Store & destStore,
const StorePathSet & paths,
SubstituteFlag useSubstitutes = NoSubstitute)
{
StorePathSet closure;
destStore.computeFSClosure(paths, closure);
@@ -115,13 +117,13 @@ static void copyClosureTo(std::timed_mutex & sendMutex, Store & destStore,
garbage-collect paths that are already there. Optionally, ask
the remote host to substitute missing paths. */
// FIXME: substitute output pollutes our build log
to << cmdQueryValidPaths << 1 << useSubstitutes;
worker_proto::write(destStore, to, closure);
to.flush();
conn.to << ServeProto::Command::QueryValidPaths << 1 << useSubstitutes;
ServeProto::write(destStore, conn, closure);
conn.to.flush();
/* Get back the set of paths that are already valid on the remote
host. */
auto present = worker_proto::read(destStore, from, Phantom<StorePathSet> {});
auto present = ServeProto::Serialise<StorePathSet>::read(destStore, conn);
if (present.size() == closure.size()) return;
@@ -133,20 +135,20 @@ static void copyClosureTo(std::timed_mutex & sendMutex, Store & destStore,
printMsg(lvlDebug, "sending %d missing paths", missing.size());
std::unique_lock<std::timed_mutex> sendLock(sendMutex,
std::unique_lock<std::timed_mutex> sendLock(conn.machine->state->sendLock,
std::chrono::seconds(600));
to << cmdImportPaths;
destStore.exportPaths(missing, to);
to.flush();
conn.to << ServeProto::Command::ImportPaths;
destStore.exportPaths(missing, conn.to);
conn.to.flush();
if (readInt(from) != 1)
if (readInt(conn.from) != 1)
throw Error("remote machine failed to import closure");
}
// FIXME: use Store::topoSortPaths().
StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths)
static StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths)
{
StorePaths sorted;
StorePathSet visited;
@@ -174,24 +176,332 @@ StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths
return sorted;
}
static std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, const StorePath & drvPath)
{
std::string base(drvPath.to_string());
auto logFile = logDir + "/" + std::string(base, 0, 2) + "/" + std::string(base, 2);
createDirs(dirOf(logFile));
AutoCloseFD logFD = open(logFile.c_str(), O_CREAT | O_TRUNC | O_WRONLY, 0666);
if (!logFD) throw SysError("creating log file %s", logFile);
return {std::move(logFile), std::move(logFD)};
}
/**
* @param conn is not fully initialized; it is this function's job to set
* the `remoteVersion` field after the handshake is completed.
* Therefore, no `ServeProto::Serialise` functions can be used until
* that field is set.
*/
static void handshake(::Machine::Connection & conn, unsigned int repeats)
{
conn.to << SERVE_MAGIC_1 << 0x206;
conn.to.flush();
unsigned int magic = readInt(conn.from);
if (magic != SERVE_MAGIC_2)
throw Error("protocol mismatch with nix-store --serve on %1%", conn.machine->sshName);
conn.remoteVersion = readInt(conn.from);
// Now `conn` is initialized.
if (GET_PROTOCOL_MAJOR(conn.remoteVersion) != 0x200)
throw Error("unsupported nix-store --serve protocol version on %1%", conn.machine->sshName);
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 3 && repeats > 0)
throw Error("machine %1% does not support repeating a build; please upgrade it to Nix 1.12", conn.machine->sshName);
}
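
The handshake above advertises version 0x206 and later code gates features on GET_PROTOCOL_MINOR(conn.remoteVersion). A minimal, self-contained sketch of the version-word layout this relies on, assuming the usual Nix definitions (major version in the high byte, minor in the low byte):

    #include <cassert>

    // Assumed definitions, matching how the macros are used above.
    #define GET_PROTOCOL_MAJOR(x) ((x) & 0xff00)
    #define GET_PROTOCOL_MINOR(x) ((x) & 0x00ff)

    int main()
    {
        unsigned int ourVersion = 0x206;                 // as sent in handshake()
        assert(GET_PROTOCOL_MAJOR(ourVersion) == 0x200); // protocol generation
        assert(GET_PROTOCOL_MINOR(ourVersion) == 6);     // feature level; e.g. >= 2
                                                         // unlocks maxLogSize and
                                                         // >= 3 unlocks repeats
    }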
static BasicDerivation sendInputs(
State & state,
Step & step,
Store & localStore,
Store & destStore,
::Machine::Connection & conn,
unsigned int & overhead,
counter & nrStepsWaiting,
counter & nrStepsCopyingTo
)
{
/* Replace the input derivations by their output paths to send a
minimal closure to the builder.
`tryResolve` currently does *not* rewrite input addresses, so it
is safe to do this in all cases. (It should probably have a mode
to do that, but we would not use it here.)
*/
BasicDerivation basicDrv = ({
auto maybeBasicDrv = step.drv->tryResolve(destStore, &localStore);
if (!maybeBasicDrv)
throw Error(
"the derivation '%s' cant be resolved. Its probably "
"missing some outputs",
localStore.printStorePath(step.drvPath));
*maybeBasicDrv;
});
/* Ensure that the inputs exist in the destination store. This is
a no-op for regular stores, but for the binary cache store,
this will copy the inputs to the binary cache from the local
store. */
if (&localStore != &destStore) {
copyClosure(localStore, destStore,
step.drv->inputSrcs,
NoRepair, NoCheckSigs, NoSubstitute);
}
{
auto mc1 = std::make_shared<MaintainCount<counter>>(nrStepsWaiting);
mc1.reset();
MaintainCount<counter> mc2(nrStepsCopyingTo);
printMsg(lvlDebug, "sending closure of %s to %s",
localStore.printStorePath(step.drvPath), conn.machine->sshName);
auto now1 = std::chrono::steady_clock::now();
/* Copy the input closure. */
if (conn.machine->isLocalhost()) {
StorePathSet closure;
destStore.computeFSClosure(basicDrv.inputSrcs, closure);
copyPaths(destStore, localStore, closure, NoRepair, NoCheckSigs, NoSubstitute);
} else {
copyClosureTo(conn, destStore, basicDrv.inputSrcs, Substitute);
}
auto now2 = std::chrono::steady_clock::now();
overhead += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
}
return basicDrv;
}
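
sendInputs brackets its phases with MaintainCount guards (mc1/mc2 above). A self-contained model of that RAII counter pattern, assuming MaintainCount does nothing more than increment on construction and decrement on destruction:

    #include <atomic>
    #include <cassert>

    // Assumed MaintainCount semantics: count for the guard's lifetime.
    template<typename T>
    struct MaintainCount
    {
        T & counter;
        explicit MaintainCount(T & c) : counter(c) { ++counter; }
        ~MaintainCount() { --counter; }
    };

    int main()
    {
        std::atomic<unsigned long> nrStepsCopyingTo{0};
        {
            MaintainCount<std::atomic<unsigned long>> mc(nrStepsCopyingTo);
            assert(nrStepsCopyingTo == 1); // counted while the copy is in flight
        }
        assert(nrStepsCopyingTo == 0);     // released on scope exit
    }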
static BuildResult performBuild(
::Machine::Connection & conn,
Store & localStore,
StorePath drvPath,
const BasicDerivation & drv,
const State::BuildOptions & options,
counter & nrStepsBuilding
)
{
conn.to << ServeProto::Command::BuildDerivation << localStore.printStorePath(drvPath);
writeDerivation(conn.to, localStore, drv);
conn.to << options.maxSilentTime << options.buildTimeout;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 2)
conn.to << options.maxLogSize;
if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 3) {
conn.to
<< options.repeats // == build-repeat
<< options.enforceDeterminism;
}
conn.to.flush();
BuildResult result;
time_t startTime, stopTime;
startTime = time(0);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(0);
if (!result.startTime) {
// If the builder gave `startTime = 0`, use our measurements
// instead of the builder's.
//
// Note: this represents the duration of a single round, rather
// than all rounds.
result.startTime = startTime;
result.stopTime = stopTime;
}
// If the protocol was too old to give us `builtOutputs`, initialize
// it manually by introspecting the derivation.
if (GET_PROTOCOL_MINOR(conn.remoteVersion) < 6)
{
// If the remote is too old to handle CA derivations, we can't get this
// far anyway
assert(drv.type().hasKnownOutputPaths());
DerivationOutputsAndOptPaths drvOutputs = drv.outputsAndOptPaths(localStore);
// Since this is a `BasicDerivation`, `staticOutputHashes` will not
// do any real work.
auto outputHashes = staticOutputHashes(localStore, drv);
for (auto & [outputName, output] : drvOutputs) {
auto outputPath = output.second;
// We've just asserted that the output paths of the derivation
// were known
assert(outputPath);
auto outputHash = outputHashes.at(outputName);
auto drvOutput = DrvOutput { outputHash, outputName };
result.builtOutputs.insert_or_assign(
std::move(outputName),
Realisation { drvOutput, *outputPath });
}
}
return result;
}
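
The protocol < 6 fallback above joins two maps keyed by output name: the statically known output paths and the static output hashes. A self-contained model of that join, with plain strings standing in for nix's DrvOutput and Realisation types (all names and values here are illustrative only):

    #include <cassert>
    #include <map>
    #include <string>
    #include <utility>

    int main()
    {
        // Hypothetical inputs; in the code above they come from
        // drv.outputsAndOptPaths() and staticOutputHashes().
        std::map<std::string, std::string> outputPaths{{"out", "path-of-out"}};
        std::map<std::string, std::string> outputHashes{{"out", "hash-of-out"}};

        // builtOutputs: output name -> (output id, output path).
        std::map<std::string, std::pair<std::string, std::string>> builtOutputs;
        for (auto & [name, path] : outputPaths)
            builtOutputs.insert_or_assign(name,
                std::make_pair(outputHashes.at(name), path)); // keys match by construction

        assert(builtOutputs.at("out").second == "path-of-out");
    }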
static std::map<StorePath, ValidPathInfo> queryPathInfos(
::Machine::Connection & conn,
Store & localStore,
StorePathSet & outputs,
size_t & totalNarSize
)
{
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
conn.to << ServeProto::Command::QueryPathInfos;
ServeProto::write(localStore, conn, outputs);
conn.to.flush();
while (true) {
auto storePathS = readString(conn.from);
if (storePathS == "") break;
auto deriver = readString(conn.from); // deriver
auto references = ServeProto::Serialise<StorePathSet>::read(localStore, conn);
readLongLong(conn.from); // download size
auto narSize = readLongLong(conn.from);
auto narHash = Hash::parseAny(readString(conn.from), htSHA256);
auto ca = ContentAddress::parseOpt(readString(conn.from));
readStrings<StringSet>(conn.from); // sigs
ValidPathInfo info(localStore.parseStorePath(storePathS), narHash);
assert(outputs.count(info.path));
info.references = references;
info.narSize = narSize;
totalNarSize += info.narSize;
info.narHash = narHash;
info.ca = ca;
if (deriver != "")
info.deriver = localStore.parseStorePath(deriver);
infos.insert_or_assign(info.path, info);
}
return infos;
}
static void copyPathFromRemote(
::Machine::Connection & conn,
NarMemberDatas & narMembers,
Store & localStore,
Store & destStore,
const ValidPathInfo & info
)
{
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
is already valid on the destination store. Since this
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
}
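
The long comment above leans on sinkToSource being lazy: its lambda runs only once somebody reads from the returned source. A self-contained model of that property (a stand-in, not nix's actual API):

    #include <functional>
    #include <iostream>
    #include <optional>
    #include <string>

    // Deferred producer: `fill` runs on first read, never earlier.
    struct LazySource
    {
        std::function<std::string()> fill;
        std::optional<std::string> buf;

        std::string read()
        {
            if (!buf) buf = fill(); // the "command" goes out here, on demand
            return *buf;
        }
    };

    int main()
    {
        LazySource src{[] {
            std::cout << "sending DumpStorePath\n"; // side effect, like conn.to << ...
            return std::string("nar bytes");
        }};
        // If the destination store never reads (the path is already valid),
        // the lambda never runs and no command is sent.
        std::cout << src.read() << "\n"; // triggers the producer exactly once
    }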
static void copyPathsFromRemote(
::Machine::Connection & conn,
NarMemberDatas & narMembers,
Store & localStore,
Store & destStore,
const std::map<StorePath, ValidPathInfo> & infos
)
{
auto pathsSorted = reverseTopoSortPaths(infos);
for (auto & path : pathsSorted) {
auto & info = infos.find(path)->second;
copyPathFromRemote(conn, narMembers, localStore, destStore, info);
}
}
}
/* using namespace nix::build_remote; */
void RemoteResult::updateWithBuildResult(const nix::BuildResult & buildResult)
{
startTime = buildResult.startTime;
stopTime = buildResult.stopTime;
timesBuilt = buildResult.timesBuilt;
errorMsg = buildResult.errorMsg;
isNonDeterministic = buildResult.isNonDeterministic;
switch ((BuildResult::Status) buildResult.status) {
case BuildResult::Built:
stepStatus = bsSuccess;
break;
case BuildResult::Substituted:
case BuildResult::AlreadyValid:
stepStatus = bsSuccess;
isCached = true;
break;
case BuildResult::PermanentFailure:
stepStatus = bsFailed;
canCache = true;
errorMsg = "";
break;
case BuildResult::InputRejected:
case BuildResult::OutputRejected:
stepStatus = bsFailed;
canCache = true;
break;
case BuildResult::TransientFailure:
stepStatus = bsFailed;
canRetry = true;
errorMsg = "";
break;
case BuildResult::TimedOut:
stepStatus = bsTimedOut;
errorMsg = "";
break;
case BuildResult::MiscFailure:
stepStatus = bsAborted;
canRetry = true;
break;
case BuildResult::LogLimitExceeded:
stepStatus = bsLogLimitExceeded;
break;
case BuildResult::NotDeterministic:
stepStatus = bsNotDeterministic;
canRetry = false;
canCache = true;
break;
default:
stepStatus = bsAborted;
break;
}
}
void State::buildRemote(ref<Store> destStore,
Machine::ptr machine, Step::ptr step,
unsigned int maxSilentTime, unsigned int buildTimeout, unsigned int repeats,
::Machine::ptr machine, Step::ptr step,
const BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers)
{
assert(BuildResult::TimedOut == 8);
std::string base(step->drvPath.to_string());
result.logFile = logDir + "/" + std::string(base, 0, 2) + "/" + std::string(base, 2);
AutoDelete autoDelete(result.logFile, false);
createDirs(dirOf(result.logFile));
AutoCloseFD logFD = open(result.logFile.c_str(), O_CREAT | O_TRUNC | O_WRONLY, 0666);
if (!logFD) throw SysError("creating log file %s", result.logFile);
auto [logFile, logFD] = build_remote::openLogFile(logDir, step->drvPath);
AutoDelete logFileDel(logFile, false);
result.logFile = logFile;
nix::Path tmpDir = createTempDir();
AutoDelete tmpDirDel(tmpDir, true);
@@ -201,13 +511,13 @@ void State::buildRemote(ref<Store> destStore,
updateStep(ssConnecting);
// FIXME: rewrite to use Store.
Child child;
openConnection(machine, tmpDir, logFD.get(), child);
SSHMaster::Connection child;
build_remote::openConnection(machine, tmpDir, logFD.get(), child);
{
auto activeStepState(activeStep->state_.lock());
if (activeStepState->cancelled) throw Error("step cancelled");
activeStepState->pid = child.pid;
activeStepState->pid = child.sshPid;
}
Finally clearPid([&]() {
@@ -222,32 +532,21 @@ void State::buildRemote(ref<Store> destStore,
process. Meh. */
});
FdSource from(child.from.get());
FdSink to(child.to.get());
::Machine::Connection conn {
.from = child.out.get(),
.to = child.in.get(),
.machine = machine,
};
Finally updateStats([&]() {
bytesReceived += from.read;
bytesSent += to.written;
bytesReceived += conn.from.read;
bytesSent += conn.to.written;
});
/* Handshake. */
unsigned int remoteVersion;
try {
to << SERVE_MAGIC_1 << 0x206;
to.flush();
unsigned int magic = readInt(from);
if (magic != SERVE_MAGIC_2)
throw Error("protocol mismatch with nix-store --serve on %1%", machine->sshName);
remoteVersion = readInt(from);
if (GET_PROTOCOL_MAJOR(remoteVersion) != 0x200)
throw Error("unsupported nix-store --serve protocol version on %1%", machine->sshName);
if (GET_PROTOCOL_MINOR(remoteVersion) < 3 && repeats > 0)
throw Error("machine %1% does not support repeating a build; please upgrade it to Nix 1.12", machine->sshName);
build_remote::handshake(conn, buildOptions.repeats);
} catch (EndOfFile & e) {
child.pid.wait();
child.sshPid.wait();
std::string s = chomp(readFile(result.logFile));
throw Error("cannot connect to %1%: %2%", machine->sshName, s);
}
@@ -263,62 +562,12 @@ void State::buildRemote(ref<Store> destStore,
copy the immediate sources of the derivation and the required
outputs of the input derivations. */
updateStep(ssSendingInputs);
BasicDerivation resolvedDrv = build_remote::sendInputs(*this, *step, *localStore, *destStore, conn, result.overhead, nrStepsWaiting, nrStepsCopyingTo);
StorePathSet inputs;
BasicDerivation basicDrv(*step->drv);
for (auto & p : step->drv->inputSrcs)
inputs.insert(p);
for (auto & input : step->drv->inputDrvs) {
auto drv2 = localStore->readDerivation(input.first);
for (auto & name : input.second) {
if (auto i = get(drv2.outputs, name)) {
auto outPath = i->path(*localStore, drv2.name, name);
inputs.insert(*outPath);
basicDrv.inputSrcs.insert(*outPath);
}
}
}
/* Ensure that the inputs exist in the destination store. This is
a no-op for regular stores, but for the binary cache store,
this will copy the inputs to the binary cache from the local
store. */
if (localStore != std::shared_ptr<Store>(destStore)) {
copyClosure(*localStore, *destStore,
step->drv->inputSrcs,
NoRepair, NoCheckSigs, NoSubstitute);
}
{
auto mc1 = std::make_shared<MaintainCount<counter>>(nrStepsWaiting);
mc1.reset();
MaintainCount<counter> mc2(nrStepsCopyingTo);
printMsg(lvlDebug, "sending closure of %s to %s",
localStore->printStorePath(step->drvPath), machine->sshName);
auto now1 = std::chrono::steady_clock::now();
/* Copy the input closure. */
if (machine->isLocalhost()) {
StorePathSet closure;
destStore->computeFSClosure(inputs, closure);
copyPaths(*destStore, *localStore, closure, NoRepair, NoCheckSigs, NoSubstitute);
} else {
copyClosureTo(machine->state->sendLock, *destStore, from, to, inputs, true);
}
auto now2 = std::chrono::steady_clock::now();
result.overhead += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
}
autoDelete.cancel();
logFileDel.cancel();
/* Truncate the log to get rid of messages about substitutions
etc. on the remote system. */
etc. on the remote system. */
if (lseek(logFD.get(), 0, SEEK_SET) != 0)
throw SysError("seeking to the start of log file %s", result.logFile);
@@ -334,85 +583,17 @@ void State::buildRemote(ref<Store> destStore,
updateStep(ssBuilding);
to << cmdBuildDerivation << localStore->printStorePath(step->drvPath);
writeDerivation(to, *localStore, basicDrv);
to << maxSilentTime << buildTimeout;
if (GET_PROTOCOL_MINOR(remoteVersion) >= 2)
to << maxLogSize;
if (GET_PROTOCOL_MINOR(remoteVersion) >= 3) {
to << repeats // == build-repeat
<< step->isDeterministic; // == enforce-determinism
}
to.flush();
BuildResult buildResult = build_remote::performBuild(
conn,
*localStore,
step->drvPath,
resolvedDrv,
buildOptions,
nrStepsBuilding
);
result.startTime = time(0);
int res;
{
MaintainCount<counter> mc(nrStepsBuilding);
res = readInt(from);
}
result.stopTime = time(0);
result.updateWithBuildResult(buildResult);
result.errorMsg = readString(from);
if (GET_PROTOCOL_MINOR(remoteVersion) >= 3) {
result.timesBuilt = readInt(from);
result.isNonDeterministic = readInt(from);
auto start = readInt(from);
auto stop = readInt(from);
if (start && stop) {
/* Note: this represents the duration of a single
round, rather than all rounds. */
result.startTime = start;
result.stopTime = stop;
}
}
if (GET_PROTOCOL_MINOR(remoteVersion) >= 6) {
worker_proto::read(*localStore, from, Phantom<DrvOutputs> {});
}
switch ((BuildResult::Status) res) {
case BuildResult::Built:
result.stepStatus = bsSuccess;
break;
case BuildResult::Substituted:
case BuildResult::AlreadyValid:
result.stepStatus = bsSuccess;
result.isCached = true;
break;
case BuildResult::PermanentFailure:
result.stepStatus = bsFailed;
result.canCache = true;
result.errorMsg = "";
break;
case BuildResult::InputRejected:
case BuildResult::OutputRejected:
result.stepStatus = bsFailed;
result.canCache = true;
break;
case BuildResult::TransientFailure:
result.stepStatus = bsFailed;
result.canRetry = true;
result.errorMsg = "";
break;
case BuildResult::TimedOut:
result.stepStatus = bsTimedOut;
result.errorMsg = "";
break;
case BuildResult::MiscFailure:
result.stepStatus = bsAborted;
result.canRetry = true;
break;
case BuildResult::LogLimitExceeded:
result.stepStatus = bsLogLimitExceeded;
break;
case BuildResult::NotDeterministic:
result.stepStatus = bsNotDeterministic;
result.canRetry = false;
result.canCache = true;
break;
default:
result.stepStatus = bsAborted;
break;
}
if (result.stepStatus != bsSuccess) return;
result.errorMsg = "";
@@ -426,6 +607,10 @@ void State::buildRemote(ref<Store> destStore,
result.logFile = "";
}
StorePathSet outputs;
for (auto & [_, realisation] : buildResult.builtOutputs)
outputs.insert(realisation.outPath);
/* Copy the output paths. */
if (!machine->isLocalhost() || localStore != std::shared_ptr<Store>(destStore)) {
updateStep(ssReceivingOutputs);
@@ -434,39 +619,8 @@ void State::buildRemote(ref<Store> destStore,
auto now1 = std::chrono::steady_clock::now();
StorePathSet outputs;
for (auto & i : step->drv->outputsAndOptPaths(*localStore)) {
if (i.second.second)
outputs.insert(*i.second.second);
}
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
size_t totalNarSize = 0;
to << cmdQueryPathInfos;
worker_proto::write(*localStore, to, outputs);
to.flush();
while (true) {
auto storePathS = readString(from);
if (storePathS == "") break;
auto deriver = readString(from); // deriver
auto references = worker_proto::read(*localStore, from, Phantom<StorePathSet> {});
readLongLong(from); // download size
auto narSize = readLongLong(from);
auto narHash = Hash::parseAny(readString(from), htSHA256);
auto ca = parseContentAddressOpt(readString(from));
readStrings<StringSet>(from); // sigs
ValidPathInfo info(localStore->parseStorePath(storePathS), narHash);
assert(outputs.count(info.path));
info.references = references;
info.narSize = narSize;
totalNarSize += info.narSize;
info.narHash = narHash;
info.ca = ca;
if (deriver != "")
info.deriver = localStore->parseStorePath(deriver);
infos.insert_or_assign(info.path, info);
}
auto infos = build_remote::queryPathInfos(conn, *localStore, outputs, totalNarSize);
if (totalNarSize > maxOutputSize) {
result.stepStatus = bsNarSizeLimitExceeded;
@@ -477,41 +631,30 @@ void State::buildRemote(ref<Store> destStore,
printMsg(lvlDebug, "copying outputs of %s from %s (%d bytes)",
localStore->printStorePath(step->drvPath), machine->sshName, totalNarSize);
auto pathsSorted = reverseTopoSortPaths(infos);
for (auto & path : pathsSorted) {
auto & info = infos.find(path)->second;
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
is already valid on the destination store. Since this
lambda function only gets executed if someone tries to read
from source2, we will send the command from here rather
than outside the lambda. */
to << cmdDumpStorePath << localStore->printStorePath(path);
to.flush();
TeeSource tee(from, sink);
extractNarData(tee, localStore->printStorePath(path), narMembers);
});
destStore->addToStore(info, *source2, NoRepair, NoCheckSigs);
}
build_remote::copyPathsFromRemote(conn, narMembers, *localStore, *destStore, infos);
auto now2 = std::chrono::steady_clock::now();
result.overhead += std::chrono::duration_cast<std::chrono::milliseconds>(now2 - now1).count();
}
/* Register the outputs of the newly built drv */
if (experimentalFeatureSettings.isEnabled(Xp::CaDerivations)) {
auto outputHashes = staticOutputHashes(*localStore, *step->drv);
for (auto & [outputName, realisation] : buildResult.builtOutputs) {
// Register the resolved drv output
destStore->registerDrvOutput(realisation);
// Also register the unresolved one
auto unresolvedRealisation = realisation;
unresolvedRealisation.signatures.clear();
unresolvedRealisation.id.drvHash = outputHashes.at(outputName);
destStore->registerDrvOutput(unresolvedRealisation);
}
}
/* Shut down the connection. */
child.to = -1;
child.pid.wait();
child.in = -1;
child.sshPid.wait();
} catch (Error & e) {
/* Disable this machine until a certain period of time has

View File

@@ -1,7 +1,7 @@
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "util.hh"
#include "fs-accessor.hh"
#include "source-accessor.hh"
#include <regex>
@@ -11,18 +11,18 @@ using namespace nix;
BuildOutput getBuildOutput(
nix::ref<Store> store,
NarMemberDatas & narMembers,
const Derivation & drv)
const OutputPathMap derivationOutputs)
{
BuildOutput res;
/* Compute the closure size. */
StorePathSet outputs;
StorePathSet closure;
for (auto & i : drv.outputsAndOptPaths(*store))
if (i.second.second) {
store->computeFSClosure(*i.second.second, closure);
outputs.insert(*i.second.second);
}
for (auto& [outputName, outputPath] : derivationOutputs) {
store->computeFSClosure(outputPath, closure);
outputs.insert(outputPath);
res.outputs.insert({outputName, outputPath});
}
for (auto & path : closure) {
auto info = store->queryPathInfo(path);
res.closureSize += info->narSize;
@@ -63,7 +63,7 @@ BuildOutput getBuildOutput(
auto productsFile = narMembers.find(outputS + "/nix-support/hydra-build-products");
if (productsFile == narMembers.end() ||
productsFile->second.type != FSAccessor::Type::tRegular)
productsFile->second.type != SourceAccessor::Type::tRegular)
continue;
assert(productsFile->second.contents);
@@ -94,7 +94,7 @@ BuildOutput getBuildOutput(
product.name = product.path == store->printStorePath(output) ? "" : baseNameOf(product.path);
if (file->second.type == FSAccessor::Type::tRegular) {
if (file->second.type == SourceAccessor::Type::tRegular) {
product.isRegular = true;
product.fileSize = file->second.fileSize.value();
product.sha256hash = file->second.sha256.value();
@@ -107,17 +107,16 @@ BuildOutput getBuildOutput(
/* If no build products were explicitly declared, then add all
outputs as a product of type "nix-build". */
if (!explicitProducts) {
for (auto & [name, output] : drv.outputs) {
for (auto & [name, output] : derivationOutputs) {
BuildProduct product;
auto outPath = output.path(*store, drv.name, name);
product.path = store->printStorePath(*outPath);
product.path = store->printStorePath(output);
product.type = "nix-build";
product.subtype = name == "out" ? "" : name;
product.name = outPath->name();
product.name = output.name();
auto file = narMembers.find(product.path);
assert(file != narMembers.end());
if (file->second.type == FSAccessor::Type::tDirectory)
if (file->second.type == SourceAccessor::Type::tDirectory)
res.products.push_back(product);
}
}
@@ -126,7 +125,7 @@ BuildOutput getBuildOutput(
for (auto & output : outputs) {
auto file = narMembers.find(store->printStorePath(output) + "/nix-support/hydra-release-name");
if (file == narMembers.end() ||
file->second.type != FSAccessor::Type::tRegular)
file->second.type != SourceAccessor::Type::tRegular)
continue;
res.releaseName = trim(file->second.contents.value());
// FIXME: validate release name
@@ -136,7 +135,7 @@ BuildOutput getBuildOutput(
for (auto & output : outputs) {
auto file = narMembers.find(store->printStorePath(output) + "/nix-support/hydra-metrics");
if (file == narMembers.end() ||
file->second.type != FSAccessor::Type::tRegular)
file->second.type != SourceAccessor::Type::tRegular)
continue;
for (auto & line : tokenizeString<Strings>(file->second.contents.value(), "\n")) {
auto fields = tokenizeString<std::vector<std::string>>(line);

View File

@@ -98,8 +98,10 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
it). */
BuildID buildId;
std::optional<StorePath> buildDrvPath;
unsigned int maxSilentTime, buildTimeout;
unsigned int repeats = step->isDeterministic ? 1 : 0;
BuildOptions buildOptions;
buildOptions.repeats = step->isDeterministic ? 1 : 0;
buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic;
auto conn(dbPool.get());
@@ -134,18 +136,18 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto i = jobsetRepeats.find(std::make_pair(build2->projectName, build2->jobsetName));
if (i != jobsetRepeats.end())
repeats = std::max(repeats, i->second);
buildOptions.repeats = std::max(buildOptions.repeats, i->second);
}
}
if (!build) build = *dependents.begin();
buildId = build->id;
buildDrvPath = build->drvPath;
maxSilentTime = build->maxSilentTime;
buildTimeout = build->buildTimeout;
buildOptions.maxSilentTime = build->maxSilentTime;
buildOptions.buildTimeout = build->buildTimeout;
printInfo("performing step %s %d times on %s (needed by build %d and %d others)",
localStore->printStorePath(step->drvPath), repeats + 1, machine->sshName, buildId, (dependents.size() - 1));
localStore->printStorePath(step->drvPath), buildOptions.repeats + 1, machine->sshName, buildId, (dependents.size() - 1));
}
if (!buildOneDone)
@@ -206,7 +208,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
try {
/* FIXME: referring builds may have conflicting timeouts. */
buildRemote(destStore, machine, step, maxSilentTime, buildTimeout, repeats, result, activeStep, updateStep, narMembers);
buildRemote(destStore, machine, step, buildOptions, result, activeStep, updateStep, narMembers);
} catch (Error & e) {
if (activeStep->state_.lock()->cancelled) {
printInfo("marking step %d of build %d as cancelled", stepNr, buildId);
@@ -221,7 +223,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
if (result.stepStatus == bsSuccess) {
updateStep(ssPostProcessing);
res = getBuildOutput(destStore, narMembers, *step->drv);
res = getBuildOutput(destStore, narMembers, destStore->queryDerivationOutputMap(step->drvPath, &*localStore));
}
}
@@ -275,9 +277,12 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
assert(stepNr);
for (auto & i : step->drv->outputsAndOptPaths(*localStore)) {
if (i.second.second)
addRoot(*i.second.second);
for (auto & [outputName, optOutputPath] : destStore->queryPartialDerivationOutputMap(step->drvPath, &*localStore)) {
if (!optOutputPath)
throw Error(
"Missing output %s for derivation %d which was supposed to have succeeded",
outputName, localStore->printStorePath(step->drvPath));
addRoot(*optOutputPath);
}
/* Register success in the database for all Build objects that
@@ -323,7 +328,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
pqxx::work txn(*conn);
for (auto & b : direct) {
printMsg(lvlInfo, format("marking build %1% as succeeded") % b->id);
printInfo("marking build %1% as succeeded", b->id);
markSucceededBuild(txn, b, res, buildId != b->id || result.isCached,
result.startTime, result.stopTime);
}
@@ -398,7 +403,7 @@ void State::failStep(
Step::ptr step,
BuildID buildId,
const RemoteResult & result,
Machine::ptr machine,
::Machine::ptr machine,
bool & stepFinished)
{
/* Register failure in the database for all Build objects that
@@ -451,7 +456,7 @@ void State::failStep(
/* Mark all builds that depend on this derivation as failed. */
for (auto & build : indirect) {
if (build->finishedInDB) continue;
printMsg(lvlError, format("marking build %1% as failed") % build->id);
printError("marking build %1% as failed", build->id);
txn.exec_params0
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $4, isCachedBuild = $5, notificationPendingSince = $4 where id = $1 and finished = 0",
build->id,

View File

@@ -52,7 +52,7 @@ void State::dispatcher()
{
auto dispatcherWakeup_(dispatcherWakeup.lock());
if (!*dispatcherWakeup_) {
printMsg(lvlDebug, format("dispatcher sleeping for %1%s") %
debug("dispatcher sleeping for %1%s",
std::chrono::duration_cast<std::chrono::seconds>(sleepUntil - std::chrono::system_clock::now()).count());
dispatcherWakeup_.wait_until(dispatcherWakeupCV, sleepUntil);
}
@@ -60,7 +60,7 @@ void State::dispatcher()
}
} catch (std::exception & e) {
printMsg(lvlError, format("dispatcher: %1%") % e.what());
printError("dispatcher: %s", e.what());
sleep(1);
}
@@ -80,17 +80,118 @@ system_time State::doDispatch()
jobset.second->pruneSteps();
auto s2 = jobset.second->shareUsed();
if (s1 != s2)
printMsg(lvlDebug, format("pruned scheduling window of %1%:%2% from %3% to %4%")
% jobset.first.first % jobset.first.second % s1 % s2);
debug("pruned scheduling window of %1%:%2% from %3% to %4%",
jobset.first.first, jobset.first.second, s1, s2);
}
}
system_time now = std::chrono::system_clock::now();
/* Start steps until we're out of steps or slots. */
auto sleepUntil = system_time::max();
bool keepGoing;
/* Sort the runnable steps by priority. Priority is established
as follows (in order of precedence):
- The global priority of the builds that depend on the
step. This allows admins to bump a build to the front of
the queue.
- The lowest used scheduling share of the jobsets depending
on the step.
- The local priority of the build, as set via the build's
meta.schedulingPriority field. Note that this is not
quite correct: the local priority should only be used to
establish priority between builds in the same jobset, but
here it's used between steps in different jobsets if they
happen to have the same lowest used scheduling share. But
that's not very likely.
- The lowest ID of the builds depending on the step;
i.e. older builds take priority over new ones.
FIXME: O(n lg n); obviously, it would be better to keep a
runnable queue sorted by priority. */
struct StepInfo
{
Step::ptr step;
bool alreadyScheduled = false;
/* The lowest share used of any jobset depending on this
step. */
double lowestShareUsed = 1e9;
/* Info copied from step->state to ensure that the
comparator is a partial ordering (see MachineInfo). */
int highestGlobalPriority;
int highestLocalPriority;
BuildID lowestBuildID;
StepInfo(Step::ptr step, Step::State & step_) : step(step)
{
for (auto & jobset : step_.jobsets)
lowestShareUsed = std::min(lowestShareUsed, jobset->shareUsed());
highestGlobalPriority = step_.highestGlobalPriority;
highestLocalPriority = step_.highestLocalPriority;
lowestBuildID = step_.lowestBuildID;
}
};
std::vector<StepInfo> runnableSorted;
struct RunnablePerType
{
unsigned int count{0};
std::chrono::seconds waitTime{0};
};
std::unordered_map<std::string, RunnablePerType> runnablePerType;
{
auto runnable_(runnable.lock());
runnableSorted.reserve(runnable_->size());
for (auto i = runnable_->begin(); i != runnable_->end(); ) {
auto step = i->lock();
/* Remove dead steps. */
if (!step) {
i = runnable_->erase(i);
continue;
}
++i;
auto & r = runnablePerType[step->systemType];
r.count++;
/* Skip previously failed steps that aren't ready
to be retried. */
auto step_(step->state.lock());
r.waitTime += std::chrono::duration_cast<std::chrono::seconds>(now - step_->runnableSince);
if (step_->tries > 0 && step_->after > now) {
if (step_->after < sleepUntil)
sleepUntil = step_->after;
continue;
}
runnableSorted.emplace_back(step, *step_);
}
}
sort(runnableSorted.begin(), runnableSorted.end(),
[](const StepInfo & a, const StepInfo & b)
{
return
a.highestGlobalPriority != b.highestGlobalPriority ? a.highestGlobalPriority > b.highestGlobalPriority :
a.lowestShareUsed != b.lowestShareUsed ? a.lowestShareUsed < b.lowestShareUsed :
a.highestLocalPriority != b.highestLocalPriority ? a.highestLocalPriority > b.highestLocalPriority :
a.lowestBuildID < b.lowestBuildID;
});
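
The chained-ternary comparator above is a strict weak ordering: it compares by the first field that differs, in decreasing order of precedence. A self-contained demonstration with stand-in fields and values:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    struct Item { int globalPrio; double shareUsed; int localPrio; int buildID; };

    int main()
    {
        std::vector<Item> v{{1, 0.5, 0, 7}, {1, 0.2, 0, 9}, {2, 0.9, 0, 3}};
        std::sort(v.begin(), v.end(), [](const Item & a, const Item & b) {
            return
                a.globalPrio != b.globalPrio ? a.globalPrio > b.globalPrio :
                a.shareUsed  != b.shareUsed  ? a.shareUsed  < b.shareUsed  :
                a.localPrio  != b.localPrio  ? a.localPrio  > b.localPrio  :
                a.buildID < b.buildID; // older builds win the final tie
        });
        assert(v[0].buildID == 3); // highest global priority first
        assert(v[1].buildID == 9); // then the lowest share used
    }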
do {
system_time now = std::chrono::system_clock::now();
now = std::chrono::system_clock::now();
/* Copy the currentJobs field of each machine. This is
necessary to ensure that the sort comparator below is
@@ -98,7 +199,7 @@ system_time State::doDispatch()
filter out temporarily disabled machines. */
struct MachineInfo
{
Machine::ptr machine;
::Machine::ptr machine;
unsigned long currentJobs;
};
std::vector<MachineInfo> machinesSorted;
@@ -130,112 +231,14 @@ system_time State::doDispatch()
sort(machinesSorted.begin(), machinesSorted.end(),
[](const MachineInfo & a, const MachineInfo & b) -> bool
{
float ta = std::round(a.currentJobs / a.machine->speedFactor);
float tb = std::round(b.currentJobs / b.machine->speedFactor);
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat);
return
ta != tb ? ta < tb :
a.machine->speedFactor != b.machine->speedFactor ? a.machine->speedFactor > b.machine->speedFactor :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :
a.currentJobs > b.currentJobs;
});
/* Sort the runnable steps by priority. Priority is established
as follows (in order of precedence):
- The global priority of the builds that depend on the
step. This allows admins to bump a build to the front of
the queue.
- The lowest used scheduling share of the jobsets depending
on the step.
- The local priority of the build, as set via the build's
meta.schedulingPriority field. Note that this is not
quite correct: the local priority should only be used to
establish priority between builds in the same jobset, but
here it's used between steps in different jobsets if they
happen to have the same lowest used scheduling share. But
that's not very likely.
- The lowest ID of the builds depending on the step;
i.e. older builds take priority over new ones.
FIXME: O(n lg n); obviously, it would be better to keep a
runnable queue sorted by priority. */
struct StepInfo
{
Step::ptr step;
/* The lowest share used of any jobset depending on this
step. */
double lowestShareUsed = 1e9;
/* Info copied from step->state to ensure that the
comparator is a partial ordering (see MachineInfo). */
int highestGlobalPriority;
int highestLocalPriority;
BuildID lowestBuildID;
StepInfo(Step::ptr step, Step::State & step_) : step(step)
{
for (auto & jobset : step_.jobsets)
lowestShareUsed = std::min(lowestShareUsed, jobset->shareUsed());
highestGlobalPriority = step_.highestGlobalPriority;
highestLocalPriority = step_.highestLocalPriority;
lowestBuildID = step_.lowestBuildID;
}
};
std::vector<StepInfo> runnableSorted;
struct RunnablePerType
{
unsigned int count{0};
std::chrono::seconds waitTime{0};
};
std::unordered_map<std::string, RunnablePerType> runnablePerType;
{
auto runnable_(runnable.lock());
runnableSorted.reserve(runnable_->size());
for (auto i = runnable_->begin(); i != runnable_->end(); ) {
auto step = i->lock();
/* Remove dead steps. */
if (!step) {
i = runnable_->erase(i);
continue;
}
++i;
auto & r = runnablePerType[step->systemType];
r.count++;
/* Skip previously failed steps that aren't ready
to be retried. */
auto step_(step->state.lock());
r.waitTime += std::chrono::duration_cast<std::chrono::seconds>(now - step_->runnableSince);
if (step_->tries > 0 && step_->after > now) {
if (step_->after < sleepUntil)
sleepUntil = step_->after;
continue;
}
runnableSorted.emplace_back(step, *step_);
}
}
sort(runnableSorted.begin(), runnableSorted.end(),
[](const StepInfo & a, const StepInfo & b)
{
return
a.highestGlobalPriority != b.highestGlobalPriority ? a.highestGlobalPriority > b.highestGlobalPriority :
a.lowestShareUsed != b.lowestShareUsed ? a.lowestShareUsed < b.lowestShareUsed :
a.highestLocalPriority != b.highestLocalPriority ? a.highestLocalPriority > b.highestLocalPriority :
a.lowestBuildID < b.lowestBuildID;
});
/* Find a machine with a free slot and find a step to run
on it. Once we find such a pair, we restart the outer
loop because the machine sorting will have changed. */
@@ -245,6 +248,8 @@ system_time State::doDispatch()
if (mi.machine->state->currentJobs >= mi.machine->maxJobs) continue;
for (auto & stepInfo : runnableSorted) {
if (stepInfo.alreadyScheduled) continue;
auto & step(stepInfo.step);
/* Can this machine do this step? */
@@ -271,6 +276,8 @@ system_time State::doDispatch()
r.count--;
}
stepInfo.alreadyScheduled = true;
/* Make a slot reservation and start a thread to
do the build. */
auto builderThread = std::thread(&State::builder, this,
@@ -428,7 +435,7 @@ void Jobset::pruneSteps()
}
State::MachineReservation::MachineReservation(State & state, Step::ptr step, Machine::ptr machine)
State::MachineReservation::MachineReservation(State & state, Step::ptr step, ::Machine::ptr machine)
: state(state), step(step), machine(machine)
{
machine->state->currentJobs++;

View File

@@ -36,10 +36,12 @@ struct BuildOutput
std::list<BuildProduct> products;
std::map<std::string, nix::StorePath> outputs;
std::map<std::string, BuildMetric> metrics;
};
BuildOutput getBuildOutput(
nix::ref<nix::Store> store,
NarMemberDatas & narMembers,
const nix::Derivation & drv);
const nix::OutputPathMap derivationOutputs);

View File

@@ -1,6 +1,7 @@
#include <iostream>
#include <thread>
#include <optional>
#include <type_traits>
#include <sys/types.h>
#include <sys/stat.h>
@@ -8,6 +9,9 @@
#include <prometheus/exposer.h>
#include <nlohmann/json.hpp>
#include "signals.hh"
#include "state.hh"
#include "hydra-build-result.hh"
#include "store-api.hh"
@@ -15,20 +19,11 @@
#include "globals.hh"
#include "hydra-config.hh"
#include "json.hh"
#include "s3-binary-cache-store.hh"
#include "shared.hh"
using namespace nix;
namespace nix {
template<> void toJSON<std::atomic<long>>(std::ostream & str, const std::atomic<long> & n) { str << n; }
template<> void toJSON<std::atomic<uint64_t>>(std::ostream & str, const std::atomic<uint64_t> & n) { str << n; }
template<> void toJSON<double>(std::ostream & str, const double & n) { str << n; }
}
using nlohmann::json;
std::string getEnvOrDie(const std::string & key)
@@ -146,33 +141,53 @@ void State::parseMachines(const std::string & contents)
if (tokens.size() < 3) continue;
tokens.resize(8);
auto machine = std::make_shared<Machine>();
machine->sshName = tokens[0];
machine->systemTypes = tokenizeString<StringSet>(tokens[1], ",");
machine->sshKey = tokens[2] == "-" ? std::string("") : tokens[2];
if (tokens[3] != "")
machine->maxJobs = string2Int<decltype(machine->maxJobs)>(tokens[3]).value();
else
machine->maxJobs = 1;
machine->speedFactor = atof(tokens[4].c_str());
if (tokens[5] == "-") tokens[5] = "";
machine->supportedFeatures = tokenizeString<StringSet>(tokens[5], ",");
auto supportedFeatures = tokenizeString<StringSet>(tokens[5], ",");
if (tokens[6] == "-") tokens[6] = "";
machine->mandatoryFeatures = tokenizeString<StringSet>(tokens[6], ",");
for (auto & f : machine->mandatoryFeatures)
machine->supportedFeatures.insert(f);
if (tokens[7] != "" && tokens[7] != "-")
machine->sshPublicHostKey = base64Decode(tokens[7]);
auto mandatoryFeatures = tokenizeString<StringSet>(tokens[6], ",");
for (auto & f : mandatoryFeatures)
supportedFeatures.insert(f);
using MaxJobs = std::remove_const<decltype(nix::Machine::maxJobs)>::type;
auto machine = std::make_shared<::Machine>(nix::Machine {
// `storeUri`, not yet used
"",
// `systemTypes`, not yet used
{},
// `sshKey`
tokens[2] == "-" ? "" : tokens[2],
// `maxJobs`
tokens[3] != ""
? string2Int<MaxJobs>(tokens[3]).value()
: 1,
// `speedFactor`, not yet used
1,
// `supportedFeatures`
std::move(supportedFeatures),
// `mandatoryFeatures`
std::move(mandatoryFeatures),
// `sshPublicHostKey`
tokens[7] != "" && tokens[7] != "-"
? base64Decode(tokens[7])
: "",
});
machine->sshName = tokens[0];
machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
machine->speedFactorFloat = atof(tokens[4].c_str());
/* Re-use the State object of the previous machine with the
same name. */
auto i = oldMachines.find(machine->sshName);
if (i == oldMachines.end())
printMsg(lvlChatty, format("adding new machine %1%") % machine->sshName);
printMsg(lvlChatty, "adding new machine %1%", machine->sshName);
else
printMsg(lvlChatty, format("updating machine %1%") % machine->sshName);
printMsg(lvlChatty, "updating machine %1%", machine->sshName);
machine->state = i == oldMachines.end()
? std::make_shared<Machine::State>()
? std::make_shared<::Machine::State>()
: i->second->state;
newMachines[machine->sshName] = machine;
}
@@ -180,10 +195,10 @@ void State::parseMachines(const std::string & contents)
for (auto & m : oldMachines)
if (newMachines.find(m.first) == newMachines.end()) {
if (m.second->enabled)
printMsg(lvlInfo, format("removing machine %1%") % m.first);
/* Add a disabled Machine object to make sure stats are
printInfo("removing machine %1%", m.first);
/* Add a disabled ::Machine object to make sure stats are
maintained. */
auto machine = std::make_shared<Machine>(*(m.second));
auto machine = std::make_shared<::Machine>(*(m.second));
machine->enabled = false;
newMachines[m.first] = machine;
}
@@ -211,7 +226,7 @@ void State::monitorMachinesFile()
parseMachines("localhost " +
(settings.thisSystem == "x86_64-linux" ? "x86_64-linux,i686-linux" : settings.thisSystem.get())
+ " - " + std::to_string(settings.maxBuildJobs) + " 1 "
+ concatStringsSep(",", settings.systemFeatures.get()));
+ concatStringsSep(",", StoreConfig::getDefaultSystemFeatures()));
machinesReadyLock.unlock();
return;
}
@@ -318,10 +333,13 @@ unsigned int State::createBuildStep(pqxx::work & txn, time_t startTime, BuildID
if (r.affected_rows() == 0) goto restart;
for (auto & [name, output] : step->drv->outputs)
for (auto & [name, output] : getDestStore()->queryPartialDerivationOutputMap(step->drvPath, &*localStore))
txn.exec_params0
("insert into BuildStepOutputs (build, stepnr, name, path) values ($1, $2, $3, $4)",
buildId, stepNr, name, localStore->printStorePath(*output.path(*localStore, step->drv->name, name)));
buildId, stepNr, name,
output
? std::optional { localStore->printStorePath(*output)}
: std::nullopt);
if (status == bsBusy)
txn.exec(fmt("notify step_started, '%d\t%d'", buildId, stepNr));
@@ -358,11 +376,23 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,
assert(result.logFile.find('\t') == std::string::npos);
txn.exec(fmt("notify step_finished, '%d\t%d\t%s'",
buildId, stepNr, result.logFile));
if (result.stepStatus == bsSuccess) {
// Update the corresponding `BuildStepOutputs` row to add the output path
auto res = txn.exec_params1("select drvPath from BuildSteps where build = $1 and stepnr = $2", buildId, stepNr);
assert(res.size());
StorePath drvPath = localStore->parseStorePath(res[0].as<std::string>());
// If we've finished building, all the paths should be known
for (auto & [name, output] : getDestStore()->queryDerivationOutputMap(drvPath, &*localStore))
txn.exec_params0
("update BuildStepOutputs set path = $4 where build = $1 and stepnr = $2 and name = $3",
buildId, stepNr, name, localStore->printStorePath(output));
}
}
int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const StorePath & drvPath, const std::string & outputName, const StorePath & storePath)
Build::ptr build, const StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const StorePath & storePath)
{
restart:
auto stepNr = allocBuildStep(txn, build->id);
@@ -463,6 +493,15 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
res.releaseName != "" ? std::make_optional(res.releaseName) : std::nullopt,
isCachedBuild ? 1 : 0);
for (auto & [outputName, outputPath] : res.outputs) {
txn.exec_params0
("update BuildOutputs set path = $3 where build = $1 and name = $2",
build->id,
outputName,
localStore->printStorePath(outputPath)
);
}
txn.exec_params0("delete from BuildProducts where build = $1", build->id);
unsigned int productNr = 1;
@@ -474,7 +513,7 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
product.type,
product.subtype,
product.fileSize ? std::make_optional(*product.fileSize) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(Base16, false)) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(HashFormat::Base16, false)) : std::nullopt,
product.path,
product.name,
product.defaultPath);
@@ -542,181 +581,168 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()
void State::dumpStatus(Connection & conn)
{
std::ostringstream out;
time_t now = time(0);
json statusJson = {
{"status", "up"},
{"time", time(0)},
{"uptime", now - startedAt},
{"pid", getpid()},
{"nrQueuedBuilds", builds.lock()->size()},
{"nrActiveSteps", activeSteps_.lock()->size()},
{"nrStepsBuilding", nrStepsBuilding.load()},
{"nrStepsCopyingTo", nrStepsCopyingTo.load()},
{"nrStepsCopyingFrom", nrStepsCopyingFrom.load()},
{"nrStepsWaiting", nrStepsWaiting.load()},
{"nrUnsupportedSteps", nrUnsupportedSteps.load()},
{"bytesSent", bytesSent.load()},
{"bytesReceived", bytesReceived.load()},
{"nrBuildsRead", nrBuildsRead.load()},
{"buildReadTimeMs", buildReadTimeMs.load()},
{"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / nrBuildsRead},
{"nrBuildsDone", nrBuildsDone.load()},
{"nrStepsStarted", nrStepsStarted.load()},
{"nrStepsDone", nrStepsDone.load()},
{"nrRetries", nrRetries.load()},
{"maxNrRetries", maxNrRetries.load()},
{"nrQueueWakeups", nrQueueWakeups.load()},
{"nrDispatcherWakeups", nrDispatcherWakeups.load()},
{"dispatchTimeMs", dispatchTimeMs.load()},
{"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / nrDispatcherWakeups},
{"nrDbConnections", dbPool.count()},
{"nrActiveDbUpdates", nrActiveDbUpdates.load()},
};
{
JSONObject root(out);
time_t now = time(0);
root.attr("status", "up");
root.attr("time", time(0));
root.attr("uptime", now - startedAt);
root.attr("pid", getpid());
{
auto builds_(builds.lock());
root.attr("nrQueuedBuilds", builds_->size());
}
{
auto steps_(steps.lock());
for (auto i = steps_->begin(); i != steps_->end(); )
if (i->second.lock()) ++i; else i = steps_->erase(i);
root.attr("nrUnfinishedSteps", steps_->size());
statusJson["nrUnfinishedSteps"] = steps_->size();
}
{
auto runnable_(runnable.lock());
for (auto i = runnable_->begin(); i != runnable_->end(); )
if (i->lock()) ++i; else i = runnable_->erase(i);
root.attr("nrRunnableSteps", runnable_->size());
statusJson["nrRunnableSteps"] = runnable_->size();
}
root.attr("nrActiveSteps", activeSteps_.lock()->size());
root.attr("nrStepsBuilding", nrStepsBuilding);
root.attr("nrStepsCopyingTo", nrStepsCopyingTo);
root.attr("nrStepsCopyingFrom", nrStepsCopyingFrom);
root.attr("nrStepsWaiting", nrStepsWaiting);
root.attr("nrUnsupportedSteps", nrUnsupportedSteps);
root.attr("bytesSent", bytesSent);
root.attr("bytesReceived", bytesReceived);
root.attr("nrBuildsRead", nrBuildsRead);
root.attr("buildReadTimeMs", buildReadTimeMs);
root.attr("buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / nrBuildsRead);
root.attr("nrBuildsDone", nrBuildsDone);
root.attr("nrStepsStarted", nrStepsStarted);
root.attr("nrStepsDone", nrStepsDone);
root.attr("nrRetries", nrRetries);
root.attr("maxNrRetries", maxNrRetries);
if (nrStepsDone) {
root.attr("totalStepTime", totalStepTime);
root.attr("totalStepBuildTime", totalStepBuildTime);
root.attr("avgStepTime", (float) totalStepTime / nrStepsDone);
root.attr("avgStepBuildTime", (float) totalStepBuildTime / nrStepsDone);
statusJson["totalStepTime"] = totalStepTime.load();
statusJson["totalStepBuildTime"] = totalStepBuildTime.load();
statusJson["avgStepTime"] = (float) totalStepTime / nrStepsDone;
statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / nrStepsDone;
}
root.attr("nrQueueWakeups", nrQueueWakeups);
root.attr("nrDispatcherWakeups", nrDispatcherWakeups);
root.attr("dispatchTimeMs", dispatchTimeMs);
root.attr("dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / nrDispatcherWakeups);
root.attr("nrDbConnections", dbPool.count());
root.attr("nrActiveDbUpdates", nrActiveDbUpdates);
{
auto nested = root.object("machines");
auto machines_(machines.lock());
for (auto & i : *machines_) {
auto & m(i.second);
auto & s(m->state);
auto nested2 = nested.object(m->sshName);
nested2.attr("enabled", m->enabled);
{
auto list = nested2.list("systemTypes");
for (auto & s : m->systemTypes)
list.elem(s);
}
{
auto list = nested2.list("supportedFeatures");
for (auto & s : m->supportedFeatures)
list.elem(s);
}
{
auto list = nested2.list("mandatoryFeatures");
for (auto & s : m->mandatoryFeatures)
list.elem(s);
}
nested2.attr("currentJobs", s->currentJobs);
if (s->currentJobs == 0)
nested2.attr("idleSince", s->idleSince);
nested2.attr("nrStepsDone", s->nrStepsDone);
if (m->state->nrStepsDone) {
nested2.attr("totalStepTime", s->totalStepTime);
nested2.attr("totalStepBuildTime", s->totalStepBuildTime);
nested2.attr("avgStepTime", (float) s->totalStepTime / s->nrStepsDone);
nested2.attr("avgStepBuildTime", (float) s->totalStepBuildTime / s->nrStepsDone);
}
auto info(m->state->connectInfo.lock());
nested2.attr("disabledUntil", std::chrono::system_clock::to_time_t(info->disabledUntil));
nested2.attr("lastFailure", std::chrono::system_clock::to_time_t(info->lastFailure));
nested2.attr("consecutiveFailures", info->consecutiveFailures);
json machine = {
{"enabled", m->enabled},
{"systemTypes", m->systemTypesSet},
{"supportedFeatures", m->supportedFeatures},
{"mandatoryFeatures", m->mandatoryFeatures},
{"nrStepsDone", s->nrStepsDone.load()},
{"currentJobs", s->currentJobs.load()},
{"disabledUntil", std::chrono::system_clock::to_time_t(info->disabledUntil)},
{"lastFailure", std::chrono::system_clock::to_time_t(info->lastFailure)},
{"consecutiveFailures", info->consecutiveFailures},
};
if (s->currentJobs == 0)
machine["idleSince"] = s->idleSince.load();
if (m->state->nrStepsDone) {
machine["totalStepTime"] = s->totalStepTime.load();
machine["totalStepBuildTime"] = s->totalStepBuildTime.load();
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
}
statusJson["machines"][m->sshName] = machine;
}
}
{
auto nested = root.object("jobsets");
auto jobsets_json = json::object();
auto jobsets_(jobsets.lock());
for (auto & jobset : *jobsets_) {
auto nested2 = nested.object(jobset.first.first + ":" + jobset.first.second);
nested2.attr("shareUsed", jobset.second->shareUsed());
nested2.attr("seconds", jobset.second->getSeconds());
jobsets_json[jobset.first.first + ":" + jobset.first.second] = {
{"shareUsed", jobset.second->shareUsed()},
{"seconds", jobset.second->getSeconds()},
};
}
statusJson["jobsets"] = jobsets_json;
}
{
auto nested = root.object("machineTypes");
auto machineTypesJson = json::object();
auto machineTypes_(machineTypes.lock());
for (auto & i : *machineTypes_) {
auto nested2 = nested.object(i.first);
nested2.attr("runnable", i.second.runnable);
nested2.attr("running", i.second.running);
auto machineTypeJson = machineTypesJson[i.first] = {
{"runnable", i.second.runnable},
{"running", i.second.running},
};
if (i.second.runnable > 0)
nested2.attr("waitTime", i.second.waitTime.count() +
i.second.runnable * (time(0) - lastDispatcherCheck));
machineTypeJson["waitTime"] = i.second.waitTime.count() +
i.second.runnable * (time(0) - lastDispatcherCheck);
if (i.second.running == 0)
nested2.attr("lastActive", std::chrono::system_clock::to_time_t(i.second.lastActive));
machineTypeJson["lastActive"] = std::chrono::system_clock::to_time_t(i.second.lastActive);
}
statusJson["machineTypes"] = machineTypesJson;
}
auto store = getDestStore();
auto nested = root.object("store");
auto & stats = store->getStats();
nested.attr("narInfoRead", stats.narInfoRead);
nested.attr("narInfoReadAverted", stats.narInfoReadAverted);
nested.attr("narInfoMissing", stats.narInfoMissing);
nested.attr("narInfoWrite", stats.narInfoWrite);
nested.attr("narInfoCacheSize", stats.pathInfoCacheSize);
nested.attr("narRead", stats.narRead);
nested.attr("narReadBytes", stats.narReadBytes);
nested.attr("narReadCompressedBytes", stats.narReadCompressedBytes);
nested.attr("narWrite", stats.narWrite);
nested.attr("narWriteAverted", stats.narWriteAverted);
nested.attr("narWriteBytes", stats.narWriteBytes);
nested.attr("narWriteCompressedBytes", stats.narWriteCompressedBytes);
nested.attr("narWriteCompressionTimeMs", stats.narWriteCompressionTimeMs);
nested.attr("narCompressionSavings",
stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / stats.narWriteBytes
: 0.0);
nested.attr("narCompressionSpeed", // MiB/s
statusJson["store"] = {
{"narInfoRead", stats.narInfoRead.load()},
{"narInfoReadAverted", stats.narInfoReadAverted.load()},
{"narInfoMissing", stats.narInfoMissing.load()},
{"narInfoWrite", stats.narInfoWrite.load()},
{"narInfoCacheSize", stats.pathInfoCacheSize.load()},
{"narRead", stats.narRead.load()},
{"narReadBytes", stats.narReadBytes.load()},
{"narReadCompressedBytes", stats.narReadCompressedBytes.load()},
{"narWrite", stats.narWrite.load()},
{"narWriteAverted", stats.narWriteAverted.load()},
{"narWriteBytes", stats.narWriteBytes.load()},
{"narWriteCompressedBytes", stats.narWriteCompressedBytes.load()},
{"narWriteCompressionTimeMs", stats.narWriteCompressionTimeMs.load()},
{"narCompressionSavings",
stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / stats.narWriteBytes
: 0.0},
{"narCompressionSpeed", // MiB/s
stats.narWriteCompressionTimeMs
? (double) stats.narWriteBytes / stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0);
: 0.0},
};
auto s3Store = dynamic_cast<S3BinaryCacheStore *>(&*store);
if (s3Store) {
auto nested2 = nested.object("s3");
auto & s3Stats = s3Store->getS3Stats();
nested2.attr("put", s3Stats.put);
nested2.attr("putBytes", s3Stats.putBytes);
nested2.attr("putTimeMs", s3Stats.putTimeMs);
nested2.attr("putSpeed",
s3Stats.putTimeMs
? (double) s3Stats.putBytes / s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0);
nested2.attr("get", s3Stats.get);
nested2.attr("getBytes", s3Stats.getBytes);
nested2.attr("getTimeMs", s3Stats.getTimeMs);
nested2.attr("getSpeed",
s3Stats.getTimeMs
? (double) s3Stats.getBytes / s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0);
nested2.attr("head", s3Stats.head);
nested2.attr("costDollarApprox",
(s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ s3Stats.put / 1000.0 * 0.005 +
+ s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09);
auto jsonS3 = statusJson["s3"] = {
{"put", s3Stats.put.load()},
{"putBytes", s3Stats.putBytes.load()},
{"putTimeMs", s3Stats.putTimeMs.load()},
{"putSpeed",
s3Stats.putTimeMs
? (double) s3Stats.putBytes / s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"get", s3Stats.get.load()},
{"getBytes", s3Stats.getBytes.load()},
{"getTimeMs", s3Stats.getTimeMs.load()},
{"getSpeed",
s3Stats.getTimeMs
? (double) s3Stats.getBytes / s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"head", s3Stats.head.load()},
{"costDollarApprox",
(s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ s3Stats.put / 1000.0 * 0.005
+ s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
};
}
}
@@ -725,7 +751,7 @@ void State::dumpStatus(Connection & conn)
pqxx::work txn(conn);
// FIXME: use PostgreSQL 9.5 upsert.
txn.exec("delete from SystemStatus where what = 'queue-runner'");
txn.exec_params0("insert into SystemStatus values ('queue-runner', $1)", out.str());
txn.exec_params0("insert into SystemStatus values ('queue-runner', $1)", statusJson.dump());
txn.exec("notify status_dumped");
txn.commit();
}
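
The FIXME above flags the delete-then-insert pair as a pre-9.5 idiom. On PostgreSQL 9.5 and later it collapses into a single statement; a sketch of that, assuming SystemStatus has a unique constraint on `what` (a fragment reusing `txn` and `statusJson` from the code above; the constraint is not shown in this diff):

    // Assumes: create unique index on SystemStatus (what);
    txn.exec_params0(
        "insert into SystemStatus values ('queue-runner', $1) "
        "on conflict (what) do update set status = excluded.status",
        statusJson.dump());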
@@ -950,7 +976,6 @@ int main(int argc, char * * argv)
});
settings.verboseBuild = true;
settings.lockCPU = false;
State state{metricsAddrOpt};
if (status)

View File

@@ -24,13 +24,13 @@ struct Extractor : ParseSink
void createDirectory(const Path & path) override
{
members.insert_or_assign(prefix + path, NarMemberData { .type = FSAccessor::Type::tDirectory });
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tDirectory });
}
void createRegularFile(const Path & path) override
{
curMember = &members.insert_or_assign(prefix + path, NarMemberData {
.type = FSAccessor::Type::tRegular,
.type = SourceAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second;
@@ -66,8 +66,14 @@ struct Extractor : ParseSink
void createSymlink(const Path & path, const std::string & target) override
{
members.insert_or_assign(prefix + path, NarMemberData { .type = FSAccessor::Type::tSymlink });
members.insert_or_assign(prefix + path, NarMemberData { .type = SourceAccessor::Type::tSymlink });
}
void isExecutable() override
{ }
void closeRegularFile() override
{ }
};

View File

@@ -1,13 +1,13 @@
#pragma once
#include "fs-accessor.hh"
#include "source-accessor.hh"
#include "types.hh"
#include "serialise.hh"
#include "hash.hh"
struct NarMemberData
{
nix::FSAccessor::Type type;
nix::SourceAccessor::Type type;
std::optional<uint64_t> fileSize;
std::optional<std::string> contents;
std::optional<nix::Hash> sha256;

View File

@@ -13,7 +13,7 @@ void State::queueMonitor()
try {
queueMonitorLoop();
} catch (std::exception & e) {
printMsg(lvlError, format("queue monitor: %1%") % e.what());
printError("queue monitor: %s", e.what());
sleep(10); // probably a DB problem, so don't retry right away
}
}
@@ -142,13 +142,13 @@ bool State::getQueuedBuilds(Connection & conn,
createBuild = [&](Build::ptr build) {
prom.queue_build_loads.Increment();
printMsg(lvlTalkative, format("loading build %1% (%2%)") % build->id % build->fullJobName());
printMsg(lvlTalkative, "loading build %1% (%2%)", build->id, build->fullJobName());
nrAdded++;
newBuildsByID.erase(build->id);
if (!localStore->isValidPath(build->drvPath)) {
/* Derivation has been GC'ed prematurely. */
printMsg(lvlError, format("aborting GC'ed build %1%") % build->id);
printError("aborting GC'ed build %1%", build->id);
if (!build->finishedInDB) {
auto mc = startDbUpdate();
pqxx::work txn(conn);
@@ -192,15 +192,19 @@ bool State::getQueuedBuilds(Connection & conn,
if (!res[0].is_null()) propagatedFrom = res[0].as<BuildID>();
if (!propagatedFrom) {
for (auto & i : ex.step->drv->outputsAndOptPaths(*localStore)) {
if (i.second.second) {
auto res = txn.exec_params
("select max(s.build) from BuildSteps s join BuildStepOutputs o on s.build = o.build where path = $1 and startTime != 0 and stopTime != 0 and status = 1",
localStore->printStorePath(*i.second.second));
if (!res[0][0].is_null()) {
propagatedFrom = res[0][0].as<BuildID>();
break;
}
for (auto & [outputName, optOutputPath] : destStore->queryPartialDerivationOutputMap(ex.step->drvPath, &*localStore)) {
constexpr std::string_view common = "select max(s.build) from BuildSteps s join BuildStepOutputs o on s.build = o.build where startTime != 0 and stopTime != 0 and status = 1";
auto res = optOutputPath
? txn.exec_params(
std::string { common } + " and path = $1",
localStore->printStorePath(*optOutputPath))
: txn.exec_params(
std::string { common } + " and drvPath = $1 and name = $2",
localStore->printStorePath(ex.step->drvPath),
outputName);
if (!res[0][0].is_null()) {
propagatedFrom = res[0][0].as<BuildID>();
break;
}
}
}
@@ -236,12 +240,10 @@ bool State::getQueuedBuilds(Connection & conn,
/* If we didn't get a step, it means the step's outputs are
all valid. So we mark this as a finished, cached build. */
if (!step) {
auto drv = localStore->readDerivation(build->drvPath);
BuildOutput res = getBuildOutputCached(conn, destStore, drv);
BuildOutput res = getBuildOutputCached(conn, destStore, build->drvPath);
for (auto & i : drv.outputsAndOptPaths(*localStore))
if (i.second.second)
addRoot(*i.second.second);
for (auto & i : destStore->queryDerivationOutputMap(build->drvPath, &*localStore))
addRoot(i.second);
{
auto mc = startDbUpdate();
@@ -302,7 +304,7 @@ bool State::getQueuedBuilds(Connection & conn,
/* Add the new runnable build steps to runnable and wake up
the builder threads. */
printMsg(lvlChatty, format("got %1% new runnable steps from %2% new builds") % newRunnable.size() % nrAdded);
printMsg(lvlChatty, "got %1% new runnable steps from %2% new builds", newRunnable.size(), nrAdded);
for (auto & r : newRunnable)
makeRunnable(r);
@@ -315,7 +317,7 @@ bool State::getQueuedBuilds(Connection & conn,
if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) {
prom.queue_checks_early_exits.Increment();
break;
}
}
}
prom.queue_checks_finished.Increment();
@@ -358,13 +360,13 @@ void State::processQueueChange(Connection & conn)
for (auto i = builds_->begin(); i != builds_->end(); ) {
auto b = currentIds.find(i->first);
if (b == currentIds.end()) {
printMsg(lvlInfo, format("discarding cancelled build %1%") % i->first);
printInfo("discarding cancelled build %1%", i->first);
i = builds_->erase(i);
// FIXME: ideally we would interrupt active build steps here.
continue;
}
if (i->second->globalPriority < b->second) {
printMsg(lvlInfo, format("priority of build %1% increased") % i->first);
printInfo("priority of build %1% increased", i->first);
i->second->globalPriority = b->second;
i->second->propagatePriorities();
}
@@ -464,10 +466,7 @@ Step::ptr State::createStep(ref<Store> destStore,
step->systemType = step->drv->platform;
{
auto i = step->drv->env.find("requiredSystemFeatures");
StringSet features;
if (i != step->drv->env.end())
features = step->requiredSystemFeatures = tokenizeString<std::set<std::string>>(i->second);
StringSet features = step->requiredSystemFeatures = step->parsedDrv->getRequiredSystemFeatures();
if (step->preferLocalBuild)
features.insert("local");
if (!features.empty()) {
@@ -481,26 +480,41 @@ Step::ptr State::createStep(ref<Store> destStore,
throw PreviousFailure{step};
/* Are all outputs valid? */
auto outputHashes = staticOutputHashes(*localStore, *(step->drv));
bool valid = true;
DerivationOutputs missing;
for (auto & i : step->drv->outputs)
if (!destStore->isValidPath(*i.second.path(*localStore, step->drv->name, i.first))) {
valid = false;
missing.insert_or_assign(i.first, i.second);
}
std::map<DrvOutput, std::optional<StorePath>> missing;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
if (maybeOutputPath && destStore->isValidPath(*maybeOutputPath))
continue;
valid = false;
missing.insert({{outputHash, outputName}, maybeOutputPath});
}
/* Try to copy the missing paths from the local store or from
substitutes. */
if (!missing.empty()) {
size_t avail = 0;
for (auto & i : missing) {
auto path = i.second.path(*localStore, step->drv->name, i.first);
if (/* localStore != destStore && */ localStore->isValidPath(*path))
for (auto & [i, pathOpt] : missing) {
// If we don't know the output path from the destination
// store, see if the local store can tell us.
if (/* localStore != destStore && */ !pathOpt && experimentalFeatureSettings.isEnabled(Xp::CaDerivations))
if (auto maybeRealisation = localStore->queryRealisation(i))
pathOpt = maybeRealisation->outPath;
if (!pathOpt) {
// No hope of getting the store object if we don't know
// the path.
continue;
}
auto & path = *pathOpt;
if (/* localStore != destStore && */ localStore->isValidPath(path))
avail++;
else if (useSubstitutes) {
SubstitutablePathInfos infos;
localStore->querySubstitutablePathInfos({{*path, {}}}, infos);
localStore->querySubstitutablePathInfos({{path, {}}}, infos);
if (infos.size() == 1)
avail++;
}
@@ -508,26 +522,29 @@ Step::ptr State::createStep(ref<Store> destStore,
if (missing.size() == avail) {
valid = true;
for (auto & i : missing) {
auto path = i.second.path(*localStore, step->drv->name, i.first);
for (auto & [i, pathOpt] : missing) {
// If we found everything, then we should know the path
// to every missing store object now.
assert(pathOpt);
auto & path = *pathOpt;
try {
time_t startTime = time(0);
if (localStore->isValidPath(*path))
if (localStore->isValidPath(path))
printInfo("copying output %1% of %2% from local store",
localStore->printStorePath(*path),
localStore->printStorePath(path),
localStore->printStorePath(drvPath));
else {
printInfo("substituting output %1% of %2%",
localStore->printStorePath(*path),
localStore->printStorePath(path),
localStore->printStorePath(drvPath));
localStore->ensurePath(*path);
localStore->ensurePath(path);
// FIXME: should copy directly from substituter to destStore.
}
copyClosure(*localStore, *destStore,
StorePathSet { *path },
StorePathSet { path },
NoRepair, CheckSigs, NoSubstitute);
time_t stopTime = time(0);
@@ -535,13 +552,13 @@ Step::ptr State::createStep(ref<Store> destStore,
{
auto mc = startDbUpdate();
pqxx::work txn(conn);
createSubstitutionStep(txn, startTime, stopTime, build, drvPath, "out", *path);
createSubstitutionStep(txn, startTime, stopTime, build, drvPath, *(step->drv), "out", path);
txn.commit();
}
} catch (Error & e) {
printError("while copying/substituting output %s of %s: %s",
localStore->printStorePath(*path),
localStore->printStorePath(path),
localStore->printStorePath(drvPath),
e.what());
valid = false;
@@ -561,7 +578,7 @@ Step::ptr State::createStep(ref<Store> destStore,
printMsg(lvlDebug, "creating build step %1%", localStore->printStorePath(drvPath));
/* Create steps for the dependencies. */
for (auto & i : step->drv->inputDrvs) {
for (auto & i : step->drv->inputDrvs.map) {
auto dep = createStep(destStore, conn, build, i.first, 0, step, finishedDrvs, newSteps, newRunnable);
if (dep) {
auto step_(step->state.lock());
@@ -640,21 +657,23 @@ void State::processJobsetSharesChange(Connection & conn)
}
BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore, const nix::Derivation & drv)
BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore, const nix::StorePath & drvPath)
{
auto derivationOutputs = destStore->queryDerivationOutputMap(drvPath, &*localStore);
{
pqxx::work txn(conn);
for (auto & [name, output] : drv.outputsAndOptPaths(*localStore)) {
for (auto & [name, output] : derivationOutputs) {
auto r = txn.exec_params
("select id, buildStatus, releaseName, closureSize, size from Builds b "
"join BuildOutputs o on b.id = o.build "
"where finished = 1 and (buildStatus = 0 or buildStatus = 6) and path = $1",
localStore->printStorePath(*output.second));
localStore->printStorePath(output));
if (r.empty()) continue;
BuildID id = r[0][0].as<BuildID>();
printMsg(lvlInfo, format("reusing build %d") % id);
printInfo("reusing build %d", id);
BuildOutput res;
res.failed = r[0][1].as<int>() == bsFailedWithOutput;
@@ -704,5 +723,5 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
}
NarMemberDatas narMembers;
return getBuildOutput(destStore, narMembers, drv);
return getBuildOutput(destStore, narMembers, derivationOutputs);
}

View File

@@ -21,6 +21,8 @@
#include "store-api.hh"
#include "sync.hh"
#include "nar-extractor.hh"
#include "serve-protocol.hh"
#include "machines.hh"
typedef unsigned int BuildID;
@@ -78,6 +80,8 @@ struct RemoteResult
{
return stepStatus == bsCachedFailure ? bsFailed : stepStatus;
}
void updateWithBuildResult(const nix::BuildResult &);
};
@@ -231,17 +235,21 @@ void getDependents(Step::ptr step, std::set<Build::ptr> & builds, std::set<Step:
void visitDependencies(std::function<void(Step::ptr)> visitor, Step::ptr step);
struct Machine
struct Machine : nix::Machine
{
typedef std::shared_ptr<Machine> ptr;
bool enabled{true};
/* TODO Get rid of this: `nix::Machine::storeUri` is normalized in a way
we are not yet used to, but once we are, we won't need this. */
std::string sshName;
std::string sshName, sshKey;
std::set<std::string> systemTypes, supportedFeatures, mandatoryFeatures;
unsigned int maxJobs = 1;
float speedFactor = 1.0;
std::string sshPublicHostKey;
/* TODO Get rid once `nix::Machine::systemTypes` is a set not
vector. */
std::set<std::string> systemTypesSet;
/* TODO Get rid once `nix::Machine::speedFactor` is a `float` not
an `int`. */
float speedFactorFloat = 1.0;
struct State {
typedef std::shared_ptr<State> ptr;
@@ -269,7 +277,7 @@ struct Machine
{
/* Check that this machine is of the type required by the
step. */
if (!systemTypes.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
if (!systemTypesSet.count(step->drv->platform == "builtin" ? nix::settings.thisSystem : step->drv->platform))
return false;
/* Check that the step requires all mandatory features of this
@@ -297,6 +305,32 @@ struct Machine
std::regex r("^(ssh://|ssh-ng://)?localhost$");
return std::regex_search(sshName, r);
}
// A connection to a machine
struct Connection {
nix::FdSource from;
nix::FdSink to;
nix::ServeProto::Version remoteVersion;
// Backpointer to the machine
ptr machine;
operator nix::ServeProto::ReadConn ()
{
return {
.from = from,
.version = remoteVersion,
};
}
operator nix::ServeProto::WriteConn ()
{
return {
.to = to,
.version = remoteVersion,
};
}
};
};
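The two conversion operators above let a Machine::Connection be handed straight to the serve-protocol serialisers. A sketch of the intended use, assuming Nix's ServeProto::Serialise interface; queryValidSketch is a hypothetical helper, not part of this diff:

// Hypothetical helper: send a set of store paths and read the reply.
// `conn` converts implicitly to WriteConn, then to ReadConn.
nix::StorePathSet queryValidSketch(nix::Store & store,
                                   Machine::Connection & conn,
                                   const nix::StorePathSet & paths)
{
    nix::ServeProto::write(store, conn, paths);   // uses operator WriteConn
    conn.to.flush();
    return nix::ServeProto::Serialise<nix::StorePathSet>::read(store, conn); // operator ReadConn
}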
@@ -459,6 +493,12 @@ private:
public:
State(std::optional<std::string> metricsAddrOpt);
struct BuildOptions {
unsigned int maxSilentTime, buildTimeout, repeats;
size_t maxLogSize;
bool enforceDeterminism;
};
private:
nix::MaintainCount<counter> startDbUpdate();
@@ -485,7 +525,7 @@ private:
const std::string & machine);
int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const nix::StorePath & drvPath, const std::string & outputName, const nix::StorePath & storePath);
Build::ptr build, const nix::StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const nix::StorePath & storePath);
void updateBuild(pqxx::work & txn, Build::ptr build, BuildStatus status);
@@ -501,7 +541,7 @@ private:
void processQueueChange(Connection & conn);
BuildOutput getBuildOutputCached(Connection & conn, nix::ref<nix::Store> destStore,
const nix::Derivation & drv);
const nix::StorePath & drvPath);
Step::ptr createStep(nix::ref<nix::Store> store,
Connection & conn, Build::ptr build, const nix::StorePath & drvPath,
@@ -543,8 +583,7 @@ private:
void buildRemote(nix::ref<nix::Store> destStore,
Machine::ptr machine, Step::ptr step,
unsigned int maxSilentTime, unsigned int buildTimeout,
unsigned int repeats,
const BuildOptions & buildOptions,
RemoteResult & result, std::shared_ptr<ActiveStep> activeStep,
std::function<void(StepState)> updateStep,
NarMemberDatas & narMembers);

View File

@@ -78,14 +78,16 @@ sub build_GET {
$c->stash->{template} = 'build.tt';
$c->stash->{isLocalStore} = isLocalStore();
# XXX: If the derivation is content-addressed then this will always return
# false because `$_->path` will be empty
$c->stash->{available} =
$c->stash->{isLocalStore}
? all { isValidPath($_->path) } $build->buildoutputs->all
? all { $_->path && isValidPath($_->path) } $build->buildoutputs->all
: 1;
$c->stash->{drvAvailable} = isValidPath $build->drvpath;
if ($build->finished && $build->iscachedbuild) {
my $path = ($build->buildoutputs)[0]->path or die;
my $path = ($build->buildoutputs)[0]->path or undef;
my $cachedBuildStep = findBuildStepByOutPath($self, $c, $path);
if (defined $cachedBuildStep) {
$c->stash->{cachedBuild} = $cachedBuildStep->build;
@@ -238,9 +240,17 @@ sub serveFile {
"store", "cat", "--store", getStoreUri(), "$path"]) };
# Detect MIME type.
state $magic = File::LibMagic->new(follow_symlinks => 1);
my $info = $magic->info_from_filename($path);
my $type = $info->{mime_with_encoding};
my $type = "text/plain";
if ($path =~ /.*\.(\S{1,})$/xms) {
my $ext = $1;
my $mimeTypes = MIME::Types->new(only_complete => 1);
my $t = $mimeTypes->mimeTypeOf($ext);
$type = ref $t ? $t->type : $t if $t;
} else {
state $magic = File::LibMagic->new(follow_symlinks => 1);
my $info = $magic->info_from_filename($path);
$type = $info->{mime_with_encoding};
}
$c->response->content_type($type);
$c->forward('Hydra::View::Plain');
}

View File

@@ -18,6 +18,8 @@ use Net::Prometheus;
use Types::Standard qw/StrMatch/;
use constant NARINFO_REGEX => qr{^([a-z0-9]{32})\.narinfo$};
# e.g.: https://hydra.example.com/realisations/sha256:a62128132508a3a32eef651d6467695944763602f226ac630543e947d9feb140!out.doi
use constant REALISATIONS_REGEX => qr{^(sha256:[a-z0-9]{64}![a-z]+)\.doi$};
# Put this controller at top-level.
__PACKAGE__->config->{namespace} = '';
@@ -355,6 +357,33 @@ sub nix_cache_info :Path('nix-cache-info') :Args(0) {
}
sub realisations :Path('realisations') :Args(StrMatch[REALISATIONS_REGEX]) {
my ($self, $c, $realisation) = @_;
if (!isLocalStore) {
notFound($c, "There is no binary cache here.");
}
else {
my ($rawDrvOutput) = $realisation =~ REALISATIONS_REGEX;
my $rawRealisation = queryRawRealisation($rawDrvOutput);
if (!$rawRealisation) {
$c->response->status(404);
$c->response->content_type('text/plain');
$c->stash->{plain}->{data} = "does not exist\n";
$c->forward('Hydra::View::Plain');
setCacheHeaders($c, 60 * 60);
return;
}
$c->response->content_type('text/plain');
$c->stash->{plain}->{data} = $rawRealisation;
$c->forward('Hydra::View::Plain');
}
}
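For comparison, the same shape check that REALISATIONS_REGEX performs, as a standalone C++ sketch using std::regex and the example DOI name from the comment above:

#include <cassert>
#include <regex>
#include <string>

int main()
{
    // Same pattern as REALISATIONS_REGEX above.
    static const std::regex realisationRe(
        R"(^(sha256:[a-z0-9]{64}![a-z]+)\.doi$)");
    const std::string name =
        "sha256:a62128132508a3a32eef651d6467695944763602f226ac630543e947d9feb140!out.doi";
    std::smatch m;
    assert(std::regex_match(name, m, realisationRe));
    // m[1] is the raw DrvOutput handed to queryRawRealisation.
    assert(m[1] == "sha256:a62128132508a3a32eef651d6467695944763602f226ac630543e947d9feb140!out");
}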
sub narinfo :Path :Args(StrMatch[NARINFO_REGEX]) {
my ($self, $c, $narinfo) = @_;

View File

@@ -463,7 +463,7 @@ sub my_jobs_tab :Chained('dashboard_base') :PathPart('my-jobs-tab') :Args(0) {
, "jobset.enabled" => 1
},
{ order_by => ["project", "jobset", "job"]
, join => ["project", "jobset"]
, join => {"jobset" => "project"}
})];
}

View File

@@ -49,7 +49,7 @@ __PACKAGE__->table("buildoutputs");
=head2 path
data_type: 'text'
is_nullable: 0
is_nullable: 1
=cut
@@ -59,7 +59,7 @@ __PACKAGE__->add_columns(
"name",
{ data_type => "text", is_nullable => 0 },
"path",
{ data_type => "text", is_nullable => 0 },
{ data_type => "text", is_nullable => 1 },
);
=head1 PRIMARY KEY
@@ -94,8 +94,8 @@ __PACKAGE__->belongs_to(
);
# Created by DBIx::Class::Schema::Loader v0.07049 @ 2021-08-26 12:02:36
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:gU+kZ6A0ISKpaXGRGve8mg
# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-06-30 12:02:32
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:Jsabm3YTcI7YvCuNdKP5Ng
my %hint = (
columns => [

View File

@@ -55,7 +55,7 @@ __PACKAGE__->table("buildstepoutputs");
=head2 path
data_type: 'text'
is_nullable: 0
is_nullable: 1
=cut
@@ -67,7 +67,7 @@ __PACKAGE__->add_columns(
"name",
{ data_type => "text", is_nullable => 0 },
"path",
{ data_type => "text", is_nullable => 0 },
{ data_type => "text", is_nullable => 1 },
);
=head1 PRIMARY KEY
@@ -119,8 +119,8 @@ __PACKAGE__->belongs_to(
);
# Created by DBIx::Class::Schema::Loader v0.07049 @ 2021-08-26 12:02:36
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:gxp8rOjpRVen4YbIjomHTw
# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-06-30 12:02:32
# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:Bad70CRTt7zb2GGuRoQ++Q
# You can replace this text with custom code or comments, and it will be preserved on regeneration

View File

@@ -216,7 +216,7 @@ sub json_hint {
sub _authenticator() {
my $authenticator = Crypt::Passphrase->new(
encoder => 'Argon2',
encoder => { module => 'Argon2', output_size => 16 },
validators => [
(sub {
my ($password, $hash) = @_;

View File

@@ -2,6 +2,7 @@
#include <pqxx/pqxx>
#include "environment-variables.hh"
#include "util.hh"

View File

@@ -2,6 +2,7 @@
#include <map>
#include "file-system.hh"
#include "util.hh"
struct HydraConfig

View File

@@ -82,7 +82,7 @@
function onGoogleSignIn(googleUser) {
requestJSON({
url: "[% c.uri_for('/google-login') %]",
data: "id_token=" + googleUser.getAuthResponse().id_token,
data: "id_token=" + googleUser.credential,
type: 'POST',
success: function(data) {
window.location.reload();
@@ -91,9 +91,6 @@
return false;
};
$("#google-signin").click(function() {
$(".g-signin2:first-child > div").click();
});
</script>
[% END %]

View File

@@ -374,7 +374,7 @@ BLOCK renderInputDiff; %]
[% ELSIF bi1.uri == bi2.uri && bi1.revision != bi2.revision %]
[% IF bi1.type == "git" %]
<tr><td>
<b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 6) _ ' to ' _ bi2.revision.substr(0, 6)) %]</tt>
<b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 8) _ ' to ' _ bi2.revision.substr(0, 8)) %]</tt>
</td></tr>
[% ELSE %]
<tr><td>

View File

@@ -133,8 +133,10 @@
[% ELSE %]
[% WRAPPER makeSubMenu title="Sign in" id="sign-in-menu" align="right" %]
[% IF c.config.enable_google_login %]
<div style="display: none" class="g-signin2" data-onsuccess="onGoogleSignIn" data-theme="dark"></div>
<a class="dropdown-item" href="#" id="google-signin">Sign in with Google</a>
<script src="https://accounts.google.com/gsi/client" async defer></script>
<div id="g_id_onload" data-client_id="[% c.config.google_client_id %]" data-auto_prompt="false" data-callback="onGoogleSignIn">
</div>
<div class="g_id_signin" data-type="standard"></div>
<div class="dropdown-divider"></div>
[% END %]
[% IF c.config.github_client_id %]

View File

@@ -438,13 +438,17 @@ sub checkBuild {
# new build to be scheduled if the meta.maintainers field is
# changed?
if (defined $prevEval) {
my $pathOrDrvConstraint = defined $firstOutputPath
? { path => $firstOutputPath }
: { drvPath => $drvPath };
my ($prevBuild) = $prevEval->builds->search(
# The "project" and "jobset" constraints are
# semantically unnecessary (because they're implied by
# the eval), but they give a factor 1000 speedup on
# the Nixpkgs jobset with PostgreSQL.
{ jobset_id => $jobset->get_column('id'), job => $jobName,
name => $firstOutputName, path => $firstOutputPath },
name => $firstOutputName, %$pathOrDrvConstraint },
{ rows => 1, columns => ['id', 'finished'], join => ['buildoutputs'] });
if (defined $prevBuild) {
#print STDERR " already scheduled/built as build ", $prevBuild->id, "\n";

View File

@@ -247,7 +247,7 @@ create trigger BuildBumped after update on Builds for each row
create table BuildOutputs (
build integer not null,
name text not null,
path text not null,
path text,
primary key (build, name),
foreign key (build) references Builds(id) on delete cascade
);
@@ -303,7 +303,7 @@ create table BuildStepOutputs (
build integer not null,
stepnr integer not null,
name text not null,
path text not null,
path text,
primary key (build, stepnr, name),
foreign key (build) references Builds(id) on delete cascade,
foreign key (build, stepnr) references BuildSteps(build, stepnr) on delete cascade

src/sql/upgrade-83.sql Normal file
View File

@@ -0,0 +1,3 @@
-- This index was introduced in a migration but was never recorded in
-- hydra.sql (the source of truth), which is why `if exists` is required.
drop index if exists IndexBuildOutputsOnPath;

src/sql/upgrade-84.sql Normal file
View File

@@ -0,0 +1,4 @@
-- CA derivations do not have statically known output paths. The values
-- are only filled in after the build runs.
ALTER TABLE BuildStepOutputs ALTER COLUMN path DROP NOT NULL;
ALTER TABLE BuildOutputs ALTER COLUMN path DROP NOT NULL;
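A sketch of the flow this relaxed constraint permits, using libpqxx (hypothetical helper; table and column names per hydra.sql): the queue runner can now record a CA output with an unknown path and fill it in once the build's realisation is known.

#include <pqxx/pqxx>
#include <string>

// Hypothetical two-step flow enabled by dropping NOT NULL.
void recordCaOutput(pqxx::connection & conn, int buildId,
                    const std::string & outPathOnceKnown)
{
    pqxx::work txn(conn);
    // At queue time the output path of a CA derivation is unknown.
    txn.exec_params0(
        "insert into BuildOutputs (build, name, path) values ($1, 'out', null)",
        buildId);
    // After the build, the realisation supplies the concrete path.
    txn.exec_params0(
        "update BuildOutputs set path = $2 where build = $1 and name = 'out'",
        buildId, outPathOnceKnown);
    txn.commit();
}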

View File

@@ -0,0 +1,30 @@
use strict;
use warnings;
use Setup;
my $ctx = test_context();
use HTTP::Request::Common;
use Test2::V0;
use Catalyst::Test ();
Catalyst::Test->import('Hydra');
require Hydra::Schema;
require Hydra::Model::DB;
my $db = $ctx->db();
my $user = $db->resultset('Users')->create({ username => 'alice', emailaddress => 'alice@invalid.org', password => '!' });
$user->setPassword('foobar');
my $builds = $ctx->makeAndEvaluateJobset(
expression => "basic.nix",
build => 1
);
my $login = request(POST '/login', Referer => 'http://localhost', Content => {
username => 'alice',
password => 'foobar',
});
is($login->code, 302);
my $cookie = $login->header("set-cookie");
my $my_jobs = request(GET '/dashboard/alice/my-jobs-tab', Accept => 'application/json', Cookie => $cookie);
ok($my_jobs->is_success);
my $content = $my_jobs->content();
ok($content =~ /empty_dir/);
ok(!($content =~ /fails/));
ok(!($content =~ /succeed_with_failed/));
done_testing;

View File

@@ -57,8 +57,8 @@ subtest "Validate a run log was created" => sub {
ok($runlog->did_succeed(), "The process did succeed.");
is($runlog->job_matcher, "*:*:*", "An unspecified job matcher is defaulted to *:*:*");
is($runlog->command, 'cp "$HYDRA_JSON" "$HYDRA_DATA/joboutput.json"', "The executed command is saved.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is also recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is also recent.");
is($runlog->exit_code, 0, "This command should have succeeded.");
subtest "Validate the run log file exists" => sub {

View File

@@ -43,8 +43,8 @@ subtest "Validate a run log was created" => sub {
ok($runlog->did_fail_with_exec_error(), "The process failed to start due to an exec error.");
is($runlog->job_matcher, "*:*:*", "An unspecified job matcher is defaulted to *:*:*");
is($runlog->command, 'invalid-command-this-does-not-exist', "The executed command is saved.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is also recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is also recent.");
is($runlog->exit_code, undef, "This command should not have executed.");
is($runlog->error_number, 2, "This command failed to exec.");
};

View File

@@ -55,7 +55,7 @@ subtest "Starting a process" => sub {
ok($runlog->is_running(), "The process is running.");
ok(!$runlog->did_fail_with_signal(), "The process was not killed by a signal.");
ok(!$runlog->did_fail_with_exec_error(), "The process did not fail to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, undef, "The end time is undefined.");
is($runlog->exit_code, undef, "The exit code is undefined.");
is($runlog->signal, undef, "The signal is undefined.");
@@ -70,8 +70,8 @@ subtest "The process completed (success)" => sub {
ok(!$runlog->is_running(), "The process is not running.");
ok(!$runlog->did_fail_with_signal(), "The process was not killed by a signal.");
ok(!$runlog->did_fail_with_exec_error(), "The process did not fail to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is recent.");
is($runlog->error_number, undef, "The error number is undefined");
is($runlog->exit_code, 0, "The exit code is 0.");
is($runlog->signal, undef, "The signal is undefined.");
@@ -86,8 +86,8 @@ subtest "The process completed (errored)" => sub {
ok(!$runlog->is_running(), "The process is not running.");
ok(!$runlog->did_fail_with_signal(), "The process was not killed by a signal.");
ok(!$runlog->did_fail_with_exec_error(), "The process did not fail to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is recent.");
is($runlog->error_number, undef, "The error number is undefined");
is($runlog->exit_code, 85, "The exit code is 85.");
is($runlog->signal, undef, "The signal is undefined.");
@@ -102,8 +102,8 @@ subtest "The process completed (status 15, child error 0)" => sub {
ok(!$runlog->is_running(), "The process is not running.");
ok($runlog->did_fail_with_signal(), "The process was killed by a signal.");
ok(!$runlog->did_fail_with_exec_error(), "The process did not fail to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is recent.");
is($runlog->error_number, undef, "The error number is undefined");
is($runlog->exit_code, undef, "The exit code is undefined.");
is($runlog->signal, 15, "Signal 15 was sent.");
@@ -118,8 +118,8 @@ subtest "The process completed (signaled)" => sub {
ok(!$runlog->is_running(), "The process is not running.");
ok($runlog->did_fail_with_signal(), "The process was killed by a signal.");
ok(!$runlog->did_fail_with_exec_error(), "The process did not fail to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is recent.");
is($runlog->error_number, undef, "The error number is undefined");
is($runlog->exit_code, undef, "The exit code is undefined.");
is($runlog->signal, 9, "The signal is 9.");
@@ -134,8 +134,8 @@ subtest "The process failed to start" => sub {
ok(!$runlog->is_running(), "The process is running.");
ok(!$runlog->did_fail_with_signal(), "The process was not killed by a signal.");
ok($runlog->did_fail_with_exec_error(), "The process failed to start due to an exec error.");
is($runlog->start_time, within(time() - 1, 2), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 2), "The end time is recent.");
is($runlog->start_time, within(time() - 1, 5), "The start time is recent.");
is($runlog->end_time, within(time() - 1, 5), "The end time is recent.");
is($runlog->error_number, 2, "The error number is saved");
is($runlog->exit_code, undef, "The exit code is undefined.");
is($runlog->signal, undef, "The signal is undefined.");

View File

@@ -25,11 +25,11 @@ subtest "requeue" => sub {
$task->requeue();
is($task->attempts, 2, "We should have stored a second retry");
is($task->retry_at, within(time() + 4, 2), "Delayed two exponential backoff step");
is($task->retry_at, within(time() + 4, 5), "Delayed two exponential backoff step");
$task->requeue();
is($task->attempts, 3, "We should have stored a third retry");
is($task->retry_at, within(time() + 8, 2), "Delayed a third exponential backoff step");
is($task->retry_at, within(time() + 8, 5), "Delayed a third exponential backoff step");
};
done_testing;

View File

@@ -101,7 +101,7 @@ subtest "save_task" => sub {
is($retry->pluginname, "FooPluginName", "Plugin name should match");
is($retry->payload, "1", "Payload should match");
is($retry->attempts, 1, "We've had one attempt");
is($retry->retry_at, within(time() + 1, 2), "The retry at should be approximately one second away");
is($retry->retry_at, within(time() + 1, 5), "The retry at should be approximately one second away");
};
done_testing;

View File

@@ -0,0 +1,61 @@
use feature 'unicode_strings';
use strict;
use warnings;
use Setup;
my %ctx = test_init(
nix_config => qq|
experimental-features = ca-derivations
|,
);
require Hydra::Schema;
require Hydra::Model::DB;
use JSON::MaybeXS;
use HTTP::Request::Common;
use Test2::V0;
require Catalyst::Test;
Catalyst::Test->import('Hydra');
my $db = Hydra::Model::DB->new;
hydra_setup($db);
my $project = $db->resultset('Projects')->create({name => "tests", displayname => "", owner => "root"});
my $jobset = createBaseJobset("content-addressed", "content-addressed.nix", $ctx{jobsdir});
ok(evalSucceeds($jobset), "Evaluating jobs/content-addressed.nix should exit with return code 0");
is(nrQueuedBuildsForJobset($jobset), 5, "Evaluating jobs/content-addressed.nix should result in 5 builds");
for my $build (queuedBuildsForJobset($jobset)) {
ok(runBuild($build), "Build '".$build->job."' from jobs/content-addressed.nix should exit with code 0");
my $newbuild = $db->resultset('Builds')->find($build->id);
is($newbuild->finished, 1, "Build '".$build->job."' from jobs/content-addressed.nix should be finished.");
my $expected = $build->job eq "fails" ? 1 : $build->job =~ /with_failed/ ? 6 : 0;
is($newbuild->buildstatus, $expected, "Build '".$build->job."' from jobs/content-addressed.nix should have buildstatus $expected.");
my $response = request("/build/".$build->id);
ok($response->is_success, "The 'build' page for build '".$build->job."' should load properly");
if ($newbuild->buildstatus == 0) {
my $buildOutputs = $newbuild->buildoutputs;
for my $output ($newbuild->buildoutputs) {
# XXX: This hardcodes /nix/store/.
# It's fine because in practice the nix store for the tests will be of
the form `/some/thing/nix/store/`, but it would be cleaner if there
were a way to query Nix for its store dir.
like(
$output->path, qr|/nix/store/|,
"Output '".$output->name."' of build '".$build->job."' should be a valid store path"
);
}
}
}
isnt(<$ctx{deststoredir}/realisations/*>, "", "The destination store should have the realisations of the built derivations registered");
done_testing;

View File

@@ -0,0 +1,28 @@
use feature 'unicode_strings';
use strict;
use warnings;
use Setup;
my %ctx = test_init();
require Hydra::Schema;
require Hydra::Model::DB;
use JSON::MaybeXS;
use HTTP::Request::Common;
use Test2::V0;
require Catalyst::Test;
Catalyst::Test->import('Hydra');
my $db = Hydra::Model::DB->new;
hydra_setup($db);
my $project = $db->resultset('Projects')->create({name => "tests", displayname => "", owner => "root"});
my $jobset = createBaseJobset("content-addressed", "content-addressed.nix", $ctx{jobsdir});
ok(evalSucceeds($jobset), "Evaluating jobs/content-addressed.nix without the experimental feature should exit with return code 0");
is(nrQueuedBuildsForJobset($jobset), 0, "Evaluating jobs/content-addressed.nix without the experimental Nix feature should result in 0 builds");
done_testing;

View File

@@ -4,6 +4,8 @@ with import ./config.nix;
mkDerivation {
name = "empty-dir";
builder = ./empty-dir-builder.sh;
meta.maintainers = [ "alice@invalid.org" ];
meta.outPath = "${placeholder "out"}";
};
fails =

View File

@@ -6,4 +6,9 @@ rec {
system = builtins.currentSystem;
PATH = path;
} // args);
mkContentAddressedDerivation = args: mkDerivation ({
__contentAddressed = true;
outputHashMode = "recursive";
outputHashAlgo = "sha256";
} // args);
}

View File

@@ -0,0 +1,35 @@
let cfg = import ./config.nix; in
rec {
empty_dir =
cfg.mkContentAddressedDerivation {
name = "empty-dir";
builder = ./empty-dir-builder.sh;
};
fails =
cfg.mkContentAddressedDerivation {
name = "fails";
builder = ./fail.sh;
};
succeed_with_failed =
cfg.mkContentAddressedDerivation {
name = "succeed-with-failed";
builder = ./succeed-with-failed.sh;
};
caDependingOnCA =
cfg.mkContentAddressedDerivation {
name = "ca-depending-on-ca";
builder = ./dir-with-file-builder.sh;
FOO = empty_dir;
};
nonCaDependingOnCA =
cfg.mkDerivation {
name = "non-ca-depending-on-ca";
builder = ./dir-with-file-builder.sh;
FOO = empty_dir;
};
}

View File

@@ -0,0 +1,4 @@
#! /bin/sh
mkdir $out
echo foo > $out/a-file

View File

@@ -1,6 +1,3 @@
#! /bin/sh
# Workaround for https://github.com/NixOS/nix/pull/6051
echo "some output"
mkdir $out

View File

@@ -39,7 +39,11 @@ use Hydra::Helper::Exec;
sub new {
my ($class, %opts) = @_;
my $dir = File::Temp->newdir();
my $deststoredir;
# Cleanup will be managed by yath. By default it will be cleaned
# up, but can be kept to aid in debugging test failures.
my $dir = File::Temp->newdir(CLEANUP => 0);
$ENV{'HYDRA_DATA'} = "$dir/hydra-data";
mkdir $ENV{'HYDRA_DATA'};
@@ -53,6 +57,7 @@ sub new {
my $hydra_config = $opts{'hydra_config'} || "";
$hydra_config = "queue_runner_metrics_address = 127.0.0.1:0\n" . $hydra_config;
if ($opts{'use_external_destination_store'} // 1) {
$deststoredir = "$dir/nix/dest-store";
$hydra_config = "store_uri = file://$dir/nix/dest-store\n" . $hydra_config;
}
@@ -79,7 +84,8 @@ sub new {
nix_state_dir => $nix_state_dir,
nix_log_dir => $nix_log_dir,
testdir => abs_path(dirname(__FILE__) . "/.."),
jobsdir => abs_path(dirname(__FILE__) . "/../jobs")
jobsdir => abs_path(dirname(__FILE__) . "/../jobs"),
deststoredir => $deststoredir,
}, $class;
if ($opts{'before_init'}) {

View File

@@ -8,7 +8,7 @@ my $binarycachedir = File::Temp->newdir();
my $ctx = test_context(
nix_config => qq|
experimental-features = nix-command
experimental-features = nix-command ca-derivations
substituters = file://${binarycachedir}?trusted=1
|,
hydra_config => q|