Compare commits
354 commits · ...main · 2023-04-30
81 changed files with 5059 additions and 2402 deletions
2  .dockerignore  Normal file
@@ -0,0 +1,2 @@
+Dockerfile
+forgejo-runner
16  .editorconfig  Normal file
@@ -0,0 +1,16 @@
+root = true
+
+[*]
+indent_style = space
+indent_size = 2
+tab_width = 2
+end_of_line = lf
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+
+[*.{go}]
+indent_style = tab
+
+[Makefile]
+indent_style = tab
16  .forgejo/cascading-pr-setup-forgejo  Executable file
@@ -0,0 +1,16 @@
+#!/bin/bash
+
+set -ex
+
+setup_forgejo=$1
+setup_forgejo_pr=$2
+runner=$3
+runner_pr=$4
+
+url=$(jq --raw-output .head.repo.html_url < $runner_pr)
+test "$url" != null
+branch=$(jq --raw-output .head.ref < $runner_pr)
+test "$branch" != null
+cd $setup_forgejo
+./utils/upgrade-runner.sh $url @$branch
+date > last-upgrade
24  .forgejo/labelscompare.py  Normal file
@@ -0,0 +1,24 @@
+import json
+
+expectedLabels = {
+    "maintainer": "contact@forgejo.org",
+    "org.opencontainers.image.authors": "Forgejo",
+    "org.opencontainers.image.url": "https://forgejo.org",
+    "org.opencontainers.image.documentation": "https://forgejo.org/docs/latest/admin/actions/#forgejo-runner",
+    "org.opencontainers.image.source": "https://code.forgejo.org/forgejo/runner",
+    "org.opencontainers.image.version": "1.2.3",
+    "org.opencontainers.image.vendor": "Forgejo",
+    "org.opencontainers.image.licenses": "MIT",
+    "org.opencontainers.image.title": "Forgejo Runner",
+    "org.opencontainers.image.description": "A runner for Forgejo Actions.",
+}
+inspect = None
+with open("./labels.json", "r") as f:
+    inspect = json.load(f)
+
+assert inspect
+labels = inspect[0]["Config"]["Labels"]
+
+for k, v in expectedLabels.items():
+    assert k in labels, f"'{k}' is missing from labels"
+    assert labels[k] == v, f"expected {v} in key {k}, found {labels[k]}"
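The script above reads `./labels.json`; in the release integration workflow later in this diff, that file is produced by running `docker inspect` on the freshly published image. A stand-alone check could look roughly like the following step, where the registry host and version are placeholders:

```yaml
# Hypothetical step mirroring .forgejo/workflows/build-release-integration.yml;
# <host:port> and the 1.2.3 tag are placeholders, not values from this diff.
- name: check OCI image labels
  run: |
    docker inspect <host:port>/root/runner:1.2.3 > labels.json
    python3 .forgejo/labelscompare.py
```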
11  .forgejo/testdata/ipv6.yml  vendored  Normal file
@@ -0,0 +1,11 @@
+---
+on: push
+jobs:
+  ipv6:
+    runs-on: docker
+    container:
+      image: code.forgejo.org/oci/debian:bookworm
+    steps:
+      - run: |
+          apt update -qq ; apt --quiet install -qq --yes iputils-ping
+          ping -c 1 -6 ::1
90
.forgejo/workflows/build-release-integration.yml
Normal file
|
@ -0,0 +1,90 @@
|
||||||
|
name: Integration tests for the release process
|
||||||
|
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
paths:
|
||||||
|
- go.mod
|
||||||
|
- Dockerfile
|
||||||
|
- .forgejo/workflows/build-release.yml
|
||||||
|
- .forgejo/workflows/build-release-integration.yml
|
||||||
|
pull_request:
|
||||||
|
paths:
|
||||||
|
- go.mod
|
||||||
|
- Dockerfile
|
||||||
|
- .forgejo/workflows/build-release.yml
|
||||||
|
- .forgejo/workflows/build-release-integration.yml
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
release-simulation:
|
||||||
|
runs-on: self-hosted
|
||||||
|
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-release'
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
|
||||||
|
- id: forgejo
|
||||||
|
uses: https://code.forgejo.org/actions/setup-forgejo@v1
|
||||||
|
with:
|
||||||
|
user: root
|
||||||
|
password: admin1234
|
||||||
|
image-version: 1.20
|
||||||
|
lxc-ip-prefix: 10.0.9
|
||||||
|
|
||||||
|
- name: publish
|
||||||
|
run: |
|
||||||
|
set -x
|
||||||
|
|
||||||
|
version=1.2.3
|
||||||
|
cat > /etc/docker/daemon.json <<EOF
|
||||||
|
{
|
||||||
|
"insecure-registries" : ["${{ steps.forgejo.outputs.host-port }}"]
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
systemctl restart docker
|
||||||
|
|
||||||
|
dir=$(mktemp -d)
|
||||||
|
trap "rm -fr $dir" EXIT
|
||||||
|
|
||||||
|
url=http://root:admin1234@${{ steps.forgejo.outputs.host-port }}
|
||||||
|
export FORGEJO_RUNNER_LOGS="${{ steps.forgejo.outputs.runner-logs }}"
|
||||||
|
|
||||||
|
#
|
||||||
|
# Create a new project with the runner and the release workflow only
|
||||||
|
#
|
||||||
|
rsync -a --exclude .git ./ $dir/
|
||||||
|
rm $(find $dir/.forgejo/workflows/*.yml | grep -v build-release.yml)
|
||||||
|
forgejo-test-helper.sh push $dir $url root runner
|
||||||
|
sha=$(forgejo-test-helper.sh branch_tip $url root/runner main)
|
||||||
|
|
||||||
|
#
|
||||||
|
# Push a tag to trigger the release workflow and wait for it to complete
|
||||||
|
#
|
||||||
|
forgejo-curl.sh api_json --data-raw '{"tag_name": "v'$version'", "target": "'$sha'"}' $url/api/v1/repos/root/runner/tags
|
||||||
|
LOOPS=180 forgejo-test-helper.sh wait_success "$url" root/runner $sha
|
||||||
|
|
||||||
|
#
|
||||||
|
# uncomment to see the logs even when everything is reported to be working ok
|
||||||
|
#
|
||||||
|
#cat $FORGEJO_RUNNER_LOGS
|
||||||
|
|
||||||
|
#
|
||||||
|
# Minimal sanity checks. e2e test is for the setup-forgejo action
|
||||||
|
#
|
||||||
|
for arch in amd64 arm64 ; do
|
||||||
|
binary=forgejo-runner-$version-linux-$arch
|
||||||
|
for suffix in '' '.xz' ; do
|
||||||
|
curl --fail -L -sS $url/root/runner/releases/download/v$version/$binary$suffix > $binary$suffix
|
||||||
|
if test "$suffix" = .xz ; then
|
||||||
|
unxz --keep $binary$suffix
|
||||||
|
fi
|
||||||
|
chmod +x $binary
|
||||||
|
./$binary --version | grep $version
|
||||||
|
curl --fail -L -sS $url/root/runner/releases/download/v$version/$binary$suffix.sha256 > $binary$suffix.sha256
|
||||||
|
shasum -a 256 --check $binary$suffix.sha256
|
||||||
|
rm $binary$suffix
|
||||||
|
done
|
||||||
|
done
|
||||||
|
|
||||||
|
docker pull ${{ steps.forgejo.outputs.host-port }}/root/runner:$version
|
||||||
|
|
||||||
|
docker inspect ${{ steps.forgejo.outputs.host-port}}/root/runner:$version > labels.json
|
||||||
|
python3 .forgejo/labelscompare.py
|
103
.forgejo/workflows/build-release.yml
Normal file
|
@ -0,0 +1,103 @@
|
||||||
|
# SPDX-License-Identifier: MIT
|
||||||
|
#
|
||||||
|
# https://code.forgejo.org/forgejo/runner
|
||||||
|
#
|
||||||
|
# Build the runner binaries and OCI images
|
||||||
|
#
|
||||||
|
# ROLE: forgejo-integration
|
||||||
|
# DOER: forgejo-ci
|
||||||
|
# TOKEN: <generated from https://code.forgejo.org/forgejo-ci>
|
||||||
|
#
|
||||||
|
name: Build release
|
||||||
|
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
tags: 'v*'
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
release:
|
||||||
|
runs-on: self-hosted
|
||||||
|
# root is used for testing, allow it
|
||||||
|
if: secrets.ROLE == 'forgejo-integration' || github.repository_owner == 'root'
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
|
||||||
|
- name: Increase the verbosity when there are no secrets
|
||||||
|
id: verbose
|
||||||
|
run: |
|
||||||
|
if test -z "${{ secrets.TOKEN }}"; then
|
||||||
|
value=true
|
||||||
|
else
|
||||||
|
value=false
|
||||||
|
fi
|
||||||
|
echo "value=$value" >> "$GITHUB_OUTPUT"
|
||||||
|
|
||||||
|
- name: Sanitize the name of the repository
|
||||||
|
id: repository
|
||||||
|
run: |
|
||||||
|
echo "value=${GITHUB_REPOSITORY##*/}" >> "$GITHUB_OUTPUT"
|
||||||
|
|
||||||
|
- name: create test TOKEN
|
||||||
|
id: token
|
||||||
|
if: ${{ secrets.TOKEN == '' }}
|
||||||
|
run: |
|
||||||
|
apt-get -qq install -y jq
|
||||||
|
url="${{ env.GITHUB_SERVER_URL }}"
|
||||||
|
hostport=${url##http*://}
|
||||||
|
hostport=${hostport%%/}
|
||||||
|
doer=root
|
||||||
|
api=http://$doer:admin1234@$hostport/api/v1/users/$doer/tokens
|
||||||
|
curl -sS -X DELETE $api/release
|
||||||
|
token=$(curl -sS -X POST -H 'Content-Type: application/json' --data-raw '{"name": "release", "scopes": ["all"]}' $api | jq --raw-output .sha1)
|
||||||
|
echo "value=${token}" >> "$GITHUB_OUTPUT"
|
||||||
|
|
||||||
|
- name: version from ref_name
|
||||||
|
id: tag-version
|
||||||
|
run: |
|
||||||
|
version=${GITHUB_REF_NAME##*v}
|
||||||
|
echo "value=$version" >> "$GITHUB_OUTPUT"
|
||||||
|
|
||||||
|
- name: release notes
|
||||||
|
id: release-notes
|
||||||
|
run: |
|
||||||
|
anchor=${{ steps.tag-version.outputs.value }}
|
||||||
|
anchor=${anchor//./-}
|
||||||
|
cat >> "$GITHUB_OUTPUT" <<EOF
|
||||||
|
value<<ENDVAR
|
||||||
|
See https://code.forgejo.org/forgejo/runner/src/branch/main/RELEASE-NOTES.md#$anchor
|
||||||
|
ENDVAR
|
||||||
|
EOF
|
||||||
|
|
||||||
|
- name: build without TOKEN
|
||||||
|
if: ${{ secrets.TOKEN == '' }}
|
||||||
|
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/build@v5
|
||||||
|
with:
|
||||||
|
forgejo: "${{ env.GITHUB_SERVER_URL }}"
|
||||||
|
owner: "${{ env.GITHUB_REPOSITORY_OWNER }}"
|
||||||
|
repository: "${{ steps.repository.outputs.value }}"
|
||||||
|
doer: root
|
||||||
|
sha: "${{ github.sha }}"
|
||||||
|
release-version: "${{ steps.tag-version.outputs.value }}"
|
||||||
|
token: ${{ steps.token.outputs.value }}
|
||||||
|
platforms: linux/amd64,linux/arm64
|
||||||
|
release-notes: "${{ steps.release-notes.outputs.value }}"
|
||||||
|
binary-name: forgejo-runner
|
||||||
|
binary-path: /bin/forgejo-runner
|
||||||
|
verbose: ${{ steps.verbose.outputs.value }}
|
||||||
|
|
||||||
|
- name: build with TOKEN
|
||||||
|
if: ${{ secrets.TOKEN != '' }}
|
||||||
|
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/build@v5
|
||||||
|
with:
|
||||||
|
forgejo: "${{ env.GITHUB_SERVER_URL }}"
|
||||||
|
owner: "${{ env.GITHUB_REPOSITORY_OWNER }}"
|
||||||
|
repository: "${{ steps.repository.outputs.value }}"
|
||||||
|
doer: "${{ secrets.DOER }}"
|
||||||
|
sha: "${{ github.sha }}"
|
||||||
|
release-version: "${{ steps.tag-version.outputs.value }}"
|
||||||
|
token: "${{ secrets.TOKEN }}"
|
||||||
|
platforms: linux/amd64,linux/arm64
|
||||||
|
release-notes: "${{ steps.release-notes.outputs.value }}"
|
||||||
|
binary-name: forgejo-runner
|
||||||
|
binary-path: /bin/forgejo-runner
|
||||||
|
verbose: ${{ steps.verbose.outputs.value }}
|
25  .forgejo/workflows/cascade-setup-forgejo.yml  Normal file
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: MIT
+on:
+  pull_request_target:
+    types:
+      - opened
+      - synchronize
+      - closed
+jobs:
+  cascade:
+    runs-on: docker
+    if: vars.CASCADE != 'no'
+    steps:
+      - uses: actions/cascading-pr@v1
+        with:
+          origin-url: ${{ env.GITHUB_SERVER_URL }}
+          origin-repo: forgejo/runner
+          origin-token: ${{ secrets.CASCADING_PR_ORIGIN }}
+          origin-pr: ${{ github.event.pull_request.number }}
+          destination-url: ${{ env.GITHUB_SERVER_URL }}
+          destination-repo: actions/setup-forgejo
+          destination-fork-repo: cascading-pr/setup-forgejo
+          destination-branch: main
+          destination-token: ${{ secrets.CASCADING_PR_DESTINATION }}
+          close-merge: true
+          update: .forgejo/cascading-pr-setup-forgejo
70
.forgejo/workflows/example-docker-compose.yml
Normal file
|
@ -0,0 +1,70 @@
|
||||||
|
# SPDX-License-Identifier: MIT
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
branches:
|
||||||
|
- 'main'
|
||||||
|
pull_request:
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
example-docker-compose:
|
||||||
|
runs-on: self-hosted
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
|
- name: Install docker
|
||||||
|
run: |
|
||||||
|
apt-get update -qq
|
||||||
|
export DEBIAN_FRONTEND=noninteractive
|
||||||
|
apt-get install -qq -y ca-certificates curl gnupg
|
||||||
|
install -m 0755 -d /etc/apt/keyrings
|
||||||
|
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
||||||
|
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
||||||
|
apt-get update -qq
|
||||||
|
apt-get install -qq -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin=2.20.2-1~debian.11~bullseye
|
||||||
|
docker version
|
||||||
|
#
|
||||||
|
# docker compose is prone to non backward compatible changes, pin it
|
||||||
|
#
|
||||||
|
apt-get install -qq -y docker-compose-plugin=2.20.2-1~debian.11~bullseye
|
||||||
|
docker compose version
|
||||||
|
|
||||||
|
- name: run the example
|
||||||
|
run: |
|
||||||
|
set -x
|
||||||
|
cd examples/docker-compose
|
||||||
|
secret=$(openssl rand -hex 20)
|
||||||
|
sed -i -e "s/{SHARED_SECRET}/$secret/" compose-forgejo-and-runner.yml
|
||||||
|
cli="docker compose --progress quiet -f compose-forgejo-and-runner.yml"
|
||||||
|
#
|
||||||
|
# Launch Forgejo & the runner
|
||||||
|
#
|
||||||
|
$cli up -d
|
||||||
|
for delay in $(seq 60) ; do test -f /srv/runner-data/.runner && break ; sleep 30 ; done
|
||||||
|
test -f /srv/runner-data/.runner
|
||||||
|
#
|
||||||
|
# Run the demo workflow
|
||||||
|
#
|
||||||
|
cli="$cli -f compose-demo-workflow.yml"
|
||||||
|
$cli up -d demo-workflow
|
||||||
|
#
|
||||||
|
# Wait for the demo workflow to complete
|
||||||
|
#
|
||||||
|
success='DEMO WORKFLOW SUCCESS'
|
||||||
|
failure='DEMO WORKFLOW FAILURE'
|
||||||
|
for delay in $(seq 60) ; do
|
||||||
|
$cli logs demo-workflow > /tmp/out
|
||||||
|
grep --quiet "$success" /tmp/out && break
|
||||||
|
grep --quiet "$failure" /tmp/out && break
|
||||||
|
$cli ps --all
|
||||||
|
$cli logs --tail=20 runner-daemon demo-workflow
|
||||||
|
sleep 30
|
||||||
|
done
|
||||||
|
grep --quiet "$success" /tmp/out
|
||||||
|
$cli logs runner-daemon > /tmp/runner.log
|
||||||
|
grep --quiet 'Start image=code.forgejo.org/oci/node:20-bookworm' /tmp/runner.log
|
||||||
|
|
||||||
|
- name: full docker compose logs
|
||||||
|
if: always()
|
||||||
|
run: |
|
||||||
|
cd examples/docker-compose
|
||||||
|
docker compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml logs
|
42
.forgejo/workflows/publish-release.yml
Normal file
|
@ -0,0 +1,42 @@
|
||||||
|
# SPDX-License-Identifier: MIT
|
||||||
|
#
|
||||||
|
# https://forgejo.octopuce.forgejo.org/forgejo-release/runner
|
||||||
|
#
|
||||||
|
# Copies & sign a release from code.forgejo.org/forgejo-integration/runner to code.forgejo.org/forgejo/runner
|
||||||
|
#
|
||||||
|
# ROLE: forgejo-release
|
||||||
|
# FORGEJO: https://code.forgejo.org
|
||||||
|
# FROM_OWNER: forgejo-integration
|
||||||
|
# TO_OWNER: forgejo
|
||||||
|
# DOER: release-team
|
||||||
|
# TOKEN: <generated from codeberg.org/release-team>
|
||||||
|
# GPG_PRIVATE_KEY: <XYZ>
|
||||||
|
# GPG_PASSPHRASE: <ABC>
|
||||||
|
#
|
||||||
|
name: publish
|
||||||
|
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
tags: 'v*'
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
publish:
|
||||||
|
runs-on: self-hosted
|
||||||
|
if: secrets.DOER != '' && secrets.FORGEJO != '' && secrets.TO_OWNER != '' && secrets.FROM_OWNER != '' && secrets.TOKEN != ''
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
|
||||||
|
- name: copy & sign
|
||||||
|
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/publish@v1
|
||||||
|
with:
|
||||||
|
forgejo: ${{ secrets.FORGEJO }}
|
||||||
|
from-owner: ${{ secrets.FROM_OWNER }}
|
||||||
|
to-owner: ${{ secrets.TO_OWNER }}
|
||||||
|
repo: "runner"
|
||||||
|
ref-name: ${{ github.ref_name }}
|
||||||
|
container-suffixes: " "
|
||||||
|
doer: ${{ secrets.DOER }}
|
||||||
|
token: ${{ secrets.TOKEN }}
|
||||||
|
gpg-private-key: ${{ secrets.GPG_PRIVATE_KEY }}
|
||||||
|
gpg-passphrase: ${{ secrets.GPG_PASSPHRASE }}
|
||||||
|
verbose: ${{ secrets.VERBOSE }}
|
108
.forgejo/workflows/test.yml
Normal file
|
@ -0,0 +1,108 @@
|
||||||
|
name: checks
|
||||||
|
on:
|
||||||
|
push:
|
||||||
|
branches:
|
||||||
|
- 'main'
|
||||||
|
pull_request:
|
||||||
|
|
||||||
|
env:
|
||||||
|
FORGEJO_HOST_PORT: 'forgejo:3000'
|
||||||
|
FORGEJO_ADMIN_USER: 'root'
|
||||||
|
FORGEJO_ADMIN_PASSWORD: 'admin1234'
|
||||||
|
FORGEJO_RUNNER_SECRET: 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
|
||||||
|
FORGEJO_SCRIPT: |
|
||||||
|
/bin/s6-svscan /etc/s6 & sleep 10 ; su -c "forgejo admin user create --admin --username $FORGEJO_ADMIN_USER --password $FORGEJO_ADMIN_PASSWORD --email root@example.com" git && su -c "forgejo forgejo-cli actions register --labels docker --name therunner --secret $FORGEJO_RUNNER_SECRET" git && sleep infinity
|
||||||
|
GOPROXY: https://goproxy.io,direct
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
build-and-tests:
|
||||||
|
name: build and test
|
||||||
|
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-experimental' && github.repository_owner != 'forgejo-release'
|
||||||
|
runs-on: docker
|
||||||
|
|
||||||
|
services:
|
||||||
|
forgejo:
|
||||||
|
image: codeberg.org/forgejo/forgejo:1.21
|
||||||
|
env:
|
||||||
|
FORGEJO__security__INSTALL_LOCK: "true"
|
||||||
|
FORGEJO__log__LEVEL: "debug"
|
||||||
|
FORGEJO__actions__ENABLED: "true"
|
||||||
|
FORGEJO_ADMIN_USER: ${{ env.FORGEJO_ADMIN_USER }}
|
||||||
|
FORGEJO_ADMIN_PASSWORD: ${{ env.FORGEJO_ADMIN_PASSWORD }}
|
||||||
|
FORGEJO_RUNNER_SECRET: ${{ env.FORGEJO_RUNNER_SECRET }}
|
||||||
|
cmd:
|
||||||
|
- 'bash'
|
||||||
|
- '-c'
|
||||||
|
- ${{ env.FORGEJO_SCRIPT }}
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/setup-go@v3
|
||||||
|
with:
|
||||||
|
go-version: '1.21'
|
||||||
|
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
|
- run: make vet
|
||||||
|
|
||||||
|
- run: make build
|
||||||
|
|
||||||
|
- uses: https://code.forgejo.org/actions/upload-artifact@v3
|
||||||
|
with:
|
||||||
|
name: forgejo-runner
|
||||||
|
path: forgejo-runner
|
||||||
|
|
||||||
|
- name: check the forgejo server is responding
|
||||||
|
run: |
|
||||||
|
apt-get update -qq
|
||||||
|
apt-get install -y -qq jq curl
|
||||||
|
test $FORGEJO_ADMIN_USER = $(curl -sS http://$FORGEJO_ADMIN_USER:$FORGEJO_ADMIN_PASSWORD@$FORGEJO_HOST_PORT/api/v1/user | jq --raw-output .login)
|
||||||
|
|
||||||
|
- run: make FORGEJO_URL=http://$FORGEJO_HOST_PORT test
|
||||||
|
|
||||||
|
runner-exec-tests:
|
||||||
|
needs: [build-and-tests]
|
||||||
|
name: runner exec tests
|
||||||
|
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-experimental' && github.repository_owner != 'forgejo-release'
|
||||||
|
runs-on: self-hosted
|
||||||
|
|
||||||
|
steps:
|
||||||
|
|
||||||
|
- uses: actions/checkout@v4
|
||||||
|
|
||||||
|
- uses: https://code.forgejo.org/actions/download-artifact@v3
|
||||||
|
with:
|
||||||
|
name: forgejo-runner
|
||||||
|
|
||||||
|
- name: install docker
|
||||||
|
run: |
|
||||||
|
mkdir /etc/docker
|
||||||
|
cat > /etc/docker/daemon.json <<EOF
|
||||||
|
{
|
||||||
|
"ipv6": true,
|
||||||
|
"experimental": true,
|
||||||
|
"ip6tables": true,
|
||||||
|
"fixed-cidr-v6": "fd05:d0ca:1::/64",
|
||||||
|
"default-address-pools": [
|
||||||
|
{
|
||||||
|
"base": "172.19.0.0/16",
|
||||||
|
"size": 24
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"base": "fd05:d0ca:2::/104",
|
||||||
|
"size": 112
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
apt --quiet install --yes -qq docker.io
|
||||||
|
|
||||||
|
- name: forgejo-runner exec --enable-ipv6
|
||||||
|
run: |
|
||||||
|
set -x
|
||||||
|
chmod +x forgejo-runner
|
||||||
|
./forgejo-runner exec --enable-ipv6 --workflows .forgejo/testdata/ipv6.yml
|
||||||
|
if ./forgejo-runner exec --workflows .forgejo/testdata/ipv6.yml >& /tmp/out ; then
|
||||||
|
cat /tmp/out
|
||||||
|
echo "IPv6 not enabled, should fail"
|
||||||
|
exit 1
|
||||||
|
fi
|
1  .gitattributes  vendored  Normal file
@@ -0,0 +1 @@
+* text=auto eol=lf
@@ -1,21 +0,0 @@
-name: checks
-on: [push]
-
-env:
-  GOPROXY: https://goproxy.io,direct
-
-jobs:
-  lint:
-    name: lint
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/setup-go@v3
-        with:
-          go-version: 1.17
-      - uses: actions/checkout@v3
-      - uses: Jerome1337/golint-action@v1.0.2
-      #- name: golangci-lint
-      #  uses: golangci/golangci-lint-action@v3
-      #  with:
-          # Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
-          # version: v1.29
13  .gitignore  vendored
@@ -1,3 +1,14 @@
-act_runner
+*~
+
+forgejo-runner
 .env
 .runner
+coverage.txt
+/gitea-vet
+/config.yaml
+
+# MS VSCode
+.vscode
+__debug_bin
+# gorelease binary folder
+dist
@ -1,14 +1,11 @@
|
||||||
linters:
|
linters:
|
||||||
enable:
|
enable:
|
||||||
- gosimple
|
- gosimple
|
||||||
- deadcode
|
|
||||||
- typecheck
|
- typecheck
|
||||||
- govet
|
- govet
|
||||||
- errcheck
|
- errcheck
|
||||||
- staticcheck
|
- staticcheck
|
||||||
- unused
|
- unused
|
||||||
- structcheck
|
|
||||||
- varcheck
|
|
||||||
- dupl
|
- dupl
|
||||||
#- gocyclo # The cyclomatic complexety of a lot of functions is too high, we should refactor those another time.
|
#- gocyclo # The cyclomatic complexety of a lot of functions is too high, we should refactor those another time.
|
||||||
- gofmt
|
- gofmt
|
||||||
|
@ -112,7 +109,6 @@ issues:
|
||||||
- gocritic
|
- gocritic
|
||||||
- linters:
|
- linters:
|
||||||
- unused
|
- unused
|
||||||
- deadcode
|
|
||||||
text: "swagger"
|
text: "swagger"
|
||||||
- path: contrib/pr/checkout.go
|
- path: contrib/pr/checkout.go
|
||||||
linters:
|
linters:
|
||||||
|
@ -154,9 +150,6 @@ issues:
|
||||||
- path: cmd/dump.go
|
- path: cmd/dump.go
|
||||||
linters:
|
linters:
|
||||||
- dupl
|
- dupl
|
||||||
- path: services/webhook/webhook.go
|
|
||||||
linters:
|
|
||||||
- structcheck
|
|
||||||
- text: "commentFormatting: put a space between `//` and comment text"
|
- text: "commentFormatting: put a space between `//` and comment text"
|
||||||
linters:
|
linters:
|
||||||
- gocritic
|
- gocritic
|
||||||
|
|
47
Dockerfile
Normal file
|
@ -0,0 +1,47 @@
|
||||||
|
FROM --platform=$BUILDPLATFORM code.forgejo.org/oci/tonistiigi/xx AS xx
|
||||||
|
|
||||||
|
FROM --platform=$BUILDPLATFORM code.forgejo.org/oci/golang:1.21-alpine3.19 as build-env
|
||||||
|
|
||||||
|
#
|
||||||
|
# Transparently cross compile for the target platform
|
||||||
|
#
|
||||||
|
COPY --from=xx / /
|
||||||
|
ARG TARGETPLATFORM
|
||||||
|
RUN apk --no-cache add clang lld
|
||||||
|
RUN xx-apk --no-cache add gcc musl-dev
|
||||||
|
RUN xx-go --wrap
|
||||||
|
|
||||||
|
# Do not remove `git` here, it is required for getting runner version when executing `make build`
|
||||||
|
RUN apk add --no-cache build-base git
|
||||||
|
|
||||||
|
COPY . /srv
|
||||||
|
WORKDIR /srv
|
||||||
|
|
||||||
|
RUN make clean && make build
|
||||||
|
|
||||||
|
FROM code.forgejo.org/oci/alpine:3.19
|
||||||
|
ARG RELEASE_VERSION
|
||||||
|
RUN apk add --no-cache git bash
|
||||||
|
|
||||||
|
COPY --from=build-env /srv/forgejo-runner /bin/forgejo-runner
|
||||||
|
|
||||||
|
LABEL maintainer="contact@forgejo.org" \
|
||||||
|
org.opencontainers.image.authors="Forgejo" \
|
||||||
|
org.opencontainers.image.url="https://forgejo.org" \
|
||||||
|
org.opencontainers.image.documentation="https://forgejo.org/docs/latest/admin/actions/#forgejo-runner" \
|
||||||
|
org.opencontainers.image.source="https://code.forgejo.org/forgejo/runner" \
|
||||||
|
org.opencontainers.image.version="${RELEASE_VERSION}" \
|
||||||
|
org.opencontainers.image.vendor="Forgejo" \
|
||||||
|
org.opencontainers.image.licenses="MIT" \
|
||||||
|
org.opencontainers.image.title="Forgejo Runner" \
|
||||||
|
org.opencontainers.image.description="A runner for Forgejo Actions."
|
||||||
|
|
||||||
|
ENV HOME=/data
|
||||||
|
|
||||||
|
USER 1000:1000
|
||||||
|
|
||||||
|
WORKDIR /data
|
||||||
|
|
||||||
|
VOLUME ["/data"]
|
||||||
|
|
||||||
|
CMD ["/bin/forgejo-runner"]
|
1  LICENSE
@@ -1,3 +1,4 @@
+Copyright (c) 2023 The Forgejo Authors
 Copyright (c) 2022 The Gitea Authors
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
61
Makefile
|
@ -1,5 +1,5 @@
|
||||||
DIST := dist
|
DIST := dist
|
||||||
EXECUTABLE := act_runner
|
EXECUTABLE := forgejo-runner
|
||||||
GOFMT ?= gofumpt -l
|
GOFMT ?= gofumpt -l
|
||||||
DIST := dist
|
DIST := dist
|
||||||
DIST_DIRS := $(DIST)/binaries $(DIST)/release
|
DIST_DIRS := $(DIST)/binaries $(DIST)/release
|
||||||
|
@ -7,19 +7,21 @@ GO ?= go
|
||||||
SHASUM ?= shasum -a 256
|
SHASUM ?= shasum -a 256
|
||||||
HAS_GO = $(shell hash $(GO) > /dev/null 2>&1 && echo "GO" || echo "NOGO" )
|
HAS_GO = $(shell hash $(GO) > /dev/null 2>&1 && echo "GO" || echo "NOGO" )
|
||||||
XGO_PACKAGE ?= src.techknowlogick.com/xgo@latest
|
XGO_PACKAGE ?= src.techknowlogick.com/xgo@latest
|
||||||
XGO_VERSION := go-1.18.x
|
XGO_VERSION := go-1.21.x
|
||||||
GXZ_PAGAGE ?= github.com/ulikunitz/xz/cmd/gxz@v0.5.10
|
GXZ_PAGAGE ?= github.com/ulikunitz/xz/cmd/gxz@v0.5.10
|
||||||
|
|
||||||
LINUX_ARCHS ?= linux/amd64,linux/arm64
|
LINUX_ARCHS ?= linux/amd64,linux/arm64
|
||||||
DARWIN_ARCHS ?= darwin-12/amd64,darwin-12/arm64
|
DARWIN_ARCHS ?= darwin-12/amd64,darwin-12/arm64
|
||||||
WINDOWS_ARCHS ?= windows/amd64
|
WINDOWS_ARCHS ?= windows/amd64
|
||||||
GOFILES := $(shell find . -type f -name "*.go" ! -name "generated.*")
|
GO_FMT_FILES := $(shell find . -type f -name "*.go" ! -name "generated.*")
|
||||||
|
GOFILES := $(shell find . -type f -name "*.go" -o -name "go.mod" ! -name "generated.*")
|
||||||
|
|
||||||
ifneq ($(shell uname), Darwin)
|
DOCKER_IMAGE ?= gitea/act_runner
|
||||||
EXTLDFLAGS = -extldflags "-static" $(null)
|
DOCKER_TAG ?= nightly
|
||||||
else
|
DOCKER_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)
|
||||||
EXTLDFLAGS =
|
DOCKER_ROOTLESS_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)-dind-rootless
|
||||||
endif
|
|
||||||
|
EXTLDFLAGS = -extldflags "-static" $(null)
|
||||||
|
|
||||||
ifeq ($(HAS_GO), GO)
|
ifeq ($(HAS_GO), GO)
|
||||||
GOPATH ?= $(shell $(GO) env GOPATH)
|
GOPATH ?= $(shell $(GO) env GOPATH)
|
||||||
|
@ -49,7 +51,7 @@ else
|
||||||
ifneq ($(DRONE_BRANCH),)
|
ifneq ($(DRONE_BRANCH),)
|
||||||
VERSION ?= $(subst release/v,,$(DRONE_BRANCH))
|
VERSION ?= $(subst release/v,,$(DRONE_BRANCH))
|
||||||
else
|
else
|
||||||
VERSION ?= master
|
VERSION ?= main
|
||||||
endif
|
endif
|
||||||
|
|
||||||
STORED_VERSION=$(shell cat $(STORED_VERSION_FILE) 2>/dev/null)
|
STORED_VERSION=$(shell cat $(STORED_VERSION_FILE) 2>/dev/null)
|
||||||
|
@ -60,26 +62,36 @@ else
|
||||||
endif
|
endif
|
||||||
endif
|
endif
|
||||||
|
|
||||||
|
GO_PACKAGES_TO_VET ?= $(filter-out gitea.com/gitea/act_runner/internal/pkg/client/mocks,$(shell $(GO) list ./...))
|
||||||
|
|
||||||
|
|
||||||
TAGS ?=
|
TAGS ?=
|
||||||
LDFLAGS ?= -X 'main.Version=$(VERSION)'
|
LDFLAGS ?= -X "gitea.com/gitea/act_runner/internal/pkg/ver.version=v$(RELASE_VERSION)"
|
||||||
|
|
||||||
all: build
|
all: build
|
||||||
|
|
||||||
fmt:
|
fmt:
|
||||||
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
|
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
|
||||||
$(GO) install -u mvdan.cc/gofumpt; \
|
$(GO) install mvdan.cc/gofumpt@latest; \
|
||||||
fi
|
fi
|
||||||
$(GOFMT) -w $(GOFILES)
|
$(GOFMT) -w $(GO_FMT_FILES)
|
||||||
|
|
||||||
vet:
|
.PHONY: go-check
|
||||||
$(GO) vet ./...
|
go-check:
|
||||||
|
$(eval MIN_GO_VERSION_STR := $(shell grep -Eo '^go\s+[0-9]+\.[0-9]+' go.mod | cut -d' ' -f2))
|
||||||
|
$(eval MIN_GO_VERSION := $(shell printf "%03d%03d" $(shell echo '$(MIN_GO_VERSION_STR)' | tr '.' ' ')))
|
||||||
|
$(eval GO_VERSION := $(shell printf "%03d%03d" $(shell $(GO) version | grep -Eo '[0-9]+\.[0-9]+' | tr '.' ' ');))
|
||||||
|
@if [ "$(GO_VERSION)" -lt "$(MIN_GO_VERSION)" ]; then \
|
||||||
|
echo "Act Runner requires Go $(MIN_GO_VERSION_STR) or greater to build. You can get it at https://go.dev/dl/"; \
|
||||||
|
exit 1; \
|
||||||
|
fi
|
||||||
|
|
||||||
.PHONY: fmt-check
|
.PHONY: fmt-check
|
||||||
fmt-check:
|
fmt-check:
|
||||||
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
|
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
|
||||||
$(GO) install -u mvdan.cc/gofumpt; \
|
$(GO) install mvdan.cc/gofumpt@latest; \
|
||||||
fi
|
fi
|
||||||
@diff=$$($(GOFMT) -d $(GOFILES)); \
|
@diff=$$($(GOFMT) -d $(GO_FMT_FILES)); \
|
||||||
if [ -n "$$diff" ]; then \
|
if [ -n "$$diff" ]; then \
|
||||||
echo "Please run 'make fmt' and commit the result:"; \
|
echo "Please run 'make fmt' and commit the result:"; \
|
||||||
echo "$${diff}"; \
|
echo "$${diff}"; \
|
||||||
|
@ -89,13 +101,18 @@ fmt-check:
|
||||||
test: fmt-check
|
test: fmt-check
|
||||||
@$(GO) test -v -cover -coverprofile coverage.txt ./... && echo "\n==>\033[32m Ok\033[m\n" || exit 1
|
@$(GO) test -v -cover -coverprofile coverage.txt ./... && echo "\n==>\033[32m Ok\033[m\n" || exit 1
|
||||||
|
|
||||||
|
.PHONY: vet
|
||||||
|
vet:
|
||||||
|
@echo "Running go vet..."
|
||||||
|
@$(GO) vet $(GO_PACKAGES_TO_VET)
|
||||||
|
|
||||||
install: $(GOFILES)
|
install: $(GOFILES)
|
||||||
$(GO) install -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)'
|
$(GO) install -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)'
|
||||||
|
|
||||||
build: $(EXECUTABLE)
|
build: go-check $(EXECUTABLE)
|
||||||
|
|
||||||
$(EXECUTABLE): $(GOFILES)
|
$(EXECUTABLE): $(GOFILES)
|
||||||
$(GO) build -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)' -o $@
|
$(GO) build -v -tags 'netgo osusergo $(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)' -o $@
|
||||||
|
|
||||||
.PHONY: deps-backend
|
.PHONY: deps-backend
|
||||||
deps-backend:
|
deps-backend:
|
||||||
|
@ -142,6 +159,14 @@ release-check: | $(DIST_DIRS)
|
||||||
release-compress: | $(DIST_DIRS)
|
release-compress: | $(DIST_DIRS)
|
||||||
cd $(DIST)/release/; for file in `find . -type f -name "*"`; do echo "compressing $${file}" && $(GO) run $(GXZ_PAGAGE) -k -9 $${file}; done;
|
cd $(DIST)/release/; for file in `find . -type f -name "*"`; do echo "compressing $${file}" && $(GO) run $(GXZ_PAGAGE) -k -9 $${file}; done;
|
||||||
|
|
||||||
|
.PHONY: docker
|
||||||
|
docker:
|
||||||
|
if ! docker buildx version >/dev/null 2>&1; then \
|
||||||
|
ARG_DISABLE_CONTENT_TRUST=--disable-content-trust=false; \
|
||||||
|
fi; \
|
||||||
|
docker build $${ARG_DISABLE_CONTENT_TRUST} -t $(DOCKER_REF) .
|
||||||
|
docker build $${ARG_DISABLE_CONTENT_TRUST} -t $(DOCKER_ROOTLESS_REF) -f Dockerfile.rootless .
|
||||||
|
|
||||||
clean:
|
clean:
|
||||||
$(GO) clean -x -i ./...
|
$(GO) clean -x -i ./...
|
||||||
rm -rf coverage.txt $(EXECUTABLE) $(DIST)
|
rm -rf coverage.txt $(EXECUTABLE) $(DIST)
|
||||||
|
|
115
README.md
|
@ -1,61 +1,94 @@
|
||||||
# act runner
|
# Forgejo Runner
|
||||||
|
|
||||||
Act runner is a runner for Gitea based on [act](https://gitea.com/gitea/act).
|
**WARNING:** this is [alpha release quality](https://en.wikipedia.org/wiki/Software_release_life_cycle#Alpha) code and should not be considered secure enough to deploy in production.
|
||||||
|
|
||||||
## Prerequisites
|
A daemon that connects to a Forgejo instance and runs jobs for continous integration. The [installation and usage instructions](https://forgejo.org/docs/next/admin/actions/) are part of the Forgejo documentation.
|
||||||
|
|
||||||
Docker Engine Community version is required. To install Docker CE, follow the official [install instructions](https://docs.docker.com/engine/install/).
|
# Reporting bugs
|
||||||
|
|
||||||
## Quickstart
|
When filing a bug in [the issue tracker](https://code.forgejo.org/forgejo/runner/issues), it is very helpful to propose a pull request [in the end-to-end tests](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions) repository that adds a reproducer. It will fail the CI and unambiguously demonstrate that the problem exists. In most cases it is enough to add a workflow ([see the echo example](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-echo)). For more complicated cases it is also possible to add a runner config file as well as shell scripts to setup and teardown the test case ([see the service example](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-service)).
|
||||||
|
|
||||||
### Build
|
# Hacking
|
||||||
|
|
||||||
```bash
|
The Forgejo runner depends on [a fork of ACT](https://code.forgejo.org/forgejo/act) and is a dependency of the [setup-forgejo action](https://code.forgejo.org/actions/setup-forgejo). See [the full dependency graph](https://code.forgejo.org/actions/cascading-pr/#forgejo-dependencies) for a global view.
|
||||||
make build
|
|
||||||
|
## Local debug
|
||||||
|
|
||||||
|
The repositories are checked out in the same directory:
|
||||||
|
|
||||||
|
- **runner**: [Forgejo runner](https://code.forgejo.org/forgejo/runner)
|
||||||
|
- **act**: [ACT](https://code.forgejo.org/forgejo/act)
|
||||||
|
- **setup-forgejo**: [setup-forgejo](https://code.forgejo.org/actions/setup-forgejo)
|
||||||
|
|
||||||
|
### Install dependencies
|
||||||
|
|
||||||
|
The dependencies are installed manually or with:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
setup-forgejo/forgejo-dependencies.sh
|
||||||
```
|
```
|
||||||
|
|
||||||
### Register
|
### Build the Forgejo runner with the local ACT
|
||||||
|
|
||||||
```bash
|
The Forgejo runner is rebuilt with the ACT directory by changing the `runner/go.mod` file to:
|
||||||
./act_runner register
|
|
||||||
|
```
|
||||||
|
replace github.com/nektos/act => ../act
|
||||||
```
|
```
|
||||||
|
|
||||||
And you will be asked to input:
|
Running:
|
||||||
|
|
||||||
1. Gitea instance URL, like `http://192.168.8.8:3000/`. You should use your gitea instance ROOT_URL as the instance argument
|
```
|
||||||
and you should not use `localhost` or `127.0.0.1` as instance IP;
|
cd runner ; go mod tidy
|
||||||
2. Runner token, you can get it from `http://192.168.8.8:3000/admin/runners`;
|
|
||||||
3. Runner name, you can just leave it blank;
|
|
||||||
4. Runner labels, you can just leave it blank.
|
|
||||||
|
|
||||||
The process looks like:
|
|
||||||
|
|
||||||
```text
|
|
||||||
INFO Registering runner, arch=amd64, os=darwin, version=0.1.5.
|
|
||||||
WARN Runner in user-mode.
|
|
||||||
INFO Enter the Gitea instance URL (for example, https://gitea.com/):
|
|
||||||
http://192.168.8.8:3000/
|
|
||||||
INFO Enter the runner token:
|
|
||||||
fe884e8027dc292970d4e0303fe82b14xxxxxxxx
|
|
||||||
INFO Enter the runner name (if set empty, use hostname:Test.local ):
|
|
||||||
|
|
||||||
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, self-hosted,ubuntu-20.04:docker://node:16-bullseye,ubuntu-18.04:docker://node:16-buster):
|
|
||||||
|
|
||||||
INFO Registering runner, name=Test.local, instance=http://192.168.8.8:3000/, labels=[ubuntu-latest:docker://node:16-bullseye ubuntu-22.04:docker://node:16-bullseye ubuntu-20.04:docker://node:16-bullseye ubuntu-18.04:docker://node:16-buster].
|
|
||||||
DEBU Successfully pinged the Gitea instance server
|
|
||||||
INFO Runner registered successfully.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
You can also register with command line arguments.
|
Building:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
./act_runner register --instance http://192.168.8.8:3000 --token <my_runner_token> --no-interactive
|
cd runner ; rm -f forgejo-runner ; make forgejo-runner
|
||||||
```
|
```
|
||||||
|
|
||||||
If the registry succeed, it will run immediately. Next time, you could run the runner directly.
|
### Launch Forgejo and the runner
|
||||||
|
|
||||||
### Run
|
A Forgejo instance is launched with:
|
||||||
|
|
||||||
```bash
|
```shell
|
||||||
./act_runner daemon
|
cd setup-forgejo
|
||||||
|
./forgejo.sh setup
|
||||||
|
firefox $(cat forgejo-url)
|
||||||
|
```
|
||||||
|
|
||||||
|
The user is `root` with password `admin1234`. The runner is registered with:
|
||||||
|
|
||||||
|
```
|
||||||
|
cd setup-forgejo
|
||||||
|
docker exec --user 1000 forgejo forgejo actions generate-runner-token > forgejo-runner-token
|
||||||
|
../runner/forgejo-runner register --no-interactive --instance "$(cat forgejo-url)" --name runner --token $(cat forgejo-runner-token) --labels docker:docker://node:20-bullseye,self-hosted:host://-self-hosted,lxc:lxc://debian:bullseye
|
||||||
|
```
|
||||||
|
|
||||||
|
And launched with:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
cd setup-forgejo ; ../runner/forgejo-runner --config runner-config.yml daemon
|
||||||
|
```
|
||||||
|
|
||||||
|
Note that the `runner-config.yml` is required in that particular case
|
||||||
|
to configure the network in `bridge` mode, otherwise the runner will
|
||||||
|
create a network that cannot reach the forgejo instance.
|
||||||
|
|
||||||
|
### Try a sample workflow
|
||||||
|
|
||||||
|
From the Forgejo web interface, create a repository and add the
|
||||||
|
following to `.forgejo/workflows/try.yaml`. It will launch the job and
|
||||||
|
the result can be observed from the `actions` tab.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
on: [push]
|
||||||
|
jobs:
|
||||||
|
ls:
|
||||||
|
runs-on: docker
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
- run: |
|
||||||
|
ls ${{ github.workspace }}
|
||||||
```
|
```
|
102
RELEASE-NOTES.md
Normal file
|
@ -0,0 +1,102 @@
|
||||||
|
# Release Notes
|
||||||
|
|
||||||
|
## 3.5.2
|
||||||
|
|
||||||
|
* Fix [crash in some cases when the YAML structure is not as expected](https://code.forgejo.org/forgejo/runner/issues/267).
|
||||||
|
|
||||||
|
## 3.5.1
|
||||||
|
|
||||||
|
* Fix [CVE-2024-24557](https://nvd.nist.gov/vuln/detail/CVE-2024-24557)
|
||||||
|
* [Add report_interval option to config](https://code.forgejo.org/forgejo/runner/pulls/220) to allow setting the interval of status and log reports
|
||||||
|
|
||||||
|
## 3.5.0
|
||||||
|
|
||||||
|
* [Allow graceful shutdowns](https://code.forgejo.org/forgejo/runner/pulls/202): when receiving a signal (INT or TERM) wait for running jobs to complete (up to shutdown_timeout).
|
||||||
|
* [Fix label declaration](https://code.forgejo.org/forgejo/runner/pulls/176): Runner in daemon mode now takes labels found in config.yml into account when declaration was successful.
|
||||||
|
* [Fix the docker compose example](https://code.forgejo.org/forgejo/runner/pulls/175) to workaround the race on labels.
|
||||||
|
* [Fix the kubernetes dind example](https://code.forgejo.org/forgejo/runner/pulls/169).
|
||||||
|
* [Rewrite ::group:: and ::endgroup:: commands like github](https://code.forgejo.org/forgejo/runner/pulls/183).
|
||||||
|
* [Added opencontainers labels to the image](https://code.forgejo.org/forgejo/runner/pulls/195)
|
||||||
|
* [Upgrade the default container to node:20](https://code.forgejo.org/forgejo/runner/pulls/203)
|
||||||
|
|
||||||
|
## 3.4.1
|
||||||
|
|
||||||
|
* Fixes a regression introduced in 3.4.0 by which a job with no image explicitly set would
|
||||||
|
[be bound to the host](https://code.forgejo.org/forgejo/runner/issues/165)
|
||||||
|
network instead of a custom network (empty string in the configuration file).
|
||||||
|
|
||||||
|
## 3.4.0
|
||||||
|
|
||||||
|
Although this version is able to run [actions/upload-artifact@v4](https://code.forgejo.org/actions/upload-artifact/src/tag/v4) and [actions/download-artifact@v4](https://code.forgejo.org/actions/download-artifact/src/tag/v4), these actions will fail because it does not run against GitHub.com. A fork of those two actions with this check disabled is made available at:
|
||||||
|
|
||||||
|
* https://code.forgejo.org/forgejo/upload-artifact/src/tag/v4
|
||||||
|
* https://code.forgejo.org/forgejo/download-artifact/src/tag/v4
|
||||||
|
|
||||||
|
and they can be used as shown in [an example from the end-to-end test suite](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-artifacts-v4/.forgejo/workflows/test.yml).
|
||||||
|
|
||||||
|
* When running against codeberg.org, the default poll frequency is 30s instead of 2s.
|
||||||
|
* Fix compatibility issue with actions/{upload,download}-artifact@v4.
|
||||||
|
* Upgrade ACT v1.20.0 which brings:
|
||||||
|
* `[container].options` from the config file is exposed in containers created by the workflows
|
||||||
|
* the expressions in the value of `jobs.<job-id>.runs-on` are evaluated
|
||||||
|
* fix a bug causing the evaluated expression of `jobs.<job-id>.runs-on` to fail if it was an array
|
||||||
|
* mount `act-toolcache:/opt/hostedtoolcache` instead of `act-toolcache:/toolcache`
|
||||||
|
* a few improvements to the readability of the error messages displayed in the logs
|
||||||
|
* `amd64` can be used instead of `x86_64` and `arm64` intead of `aarch64` when specifying the architecture
|
||||||
|
* fixed YAML parsing bugs preventing dispatch workflows to be parsed correctly
|
||||||
|
* add support for `runs-on.labels` which is equivalent to `runs-on` followed by a list of labels
|
||||||
|
* the expressions in the service `ports` and `volumes` values are evaluated
|
||||||
|
* network aliases are only supported when the network is user specified, not when it is provided by the runner
|
||||||
|
* If `[runner].insecure` is true in the configuration, insecure cloning actions is allowed
|
||||||
|
|
||||||
|
## 3.3.0
|
||||||
|
|
||||||
|
* Support IPv6 with addresses from a private range and NAT for
|
||||||
|
docker:// with --enable-ipv6 and [container].enable_ipv6
|
||||||
|
lxc:// always
|
||||||
|
|
||||||
|
## 3.2.0
|
||||||
|
|
||||||
|
* Support LXC container capabilities via `lxc:lxc://debian:bookworm:k8s` or `lxc:lxc://debian:bookworm:docker lxc k8s`
|
||||||
|
* Update ACT v1.16.0 to resolve a [race condition when bootstraping LXC templates](https://code.forgejo.org/forgejo/act/pulls/23)
|
||||||
|
|
||||||
|
## 3.1.0
|
||||||
|
|
||||||
|
The `self-hosted` label that was hardwired to be a LXC container
|
||||||
|
running `debian:bullseye` was reworked and documented ([user guide](https://forgejo.org/docs/next/user/actions/#jobsjob_idruns-on) and [admin guide](https://forgejo.org/docs/next/admin/actions/#labels-and-runs-on)).
|
||||||
|
|
||||||
|
There now are two different schemes: `lxc://` for LXC containers and
|
||||||
|
`host://` for running directly on the host.
|
||||||
|
|
||||||
|
* Support the `host://` scheme for running directly on the host.
|
||||||
|
* Support the `lxc://` scheme in labels
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.14.0](https://code.forgejo.org/forgejo/act/pulls/19) to implement both self-hosted and LXC schemes
|
||||||
|
|
||||||
|
## 3.0.3
|
||||||
|
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.13.0](https://code.forgejo.org/forgejo/runner/pulls/106) to keep up with github.com/nektos/act
|
||||||
|
|
||||||
|
## 3.0.2
|
||||||
|
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.12.0](https://code.forgejo.org/forgejo/runner/pulls/106) to upgrade the node installed in the LXC container to node20
|
||||||
|
|
||||||
|
## 3.0.1
|
||||||
|
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.11.0](https://code.forgejo.org/forgejo/runner/pulls/86) to resolve a bug preventing actions based on node20 from running, such as [checkout@v4](https://code.forgejo.org/actions/checkout/src/tag/v4).
|
||||||
|
|
||||||
|
## 3.0.0
|
||||||
|
|
||||||
|
* Publish a rootless OCI image
|
||||||
|
* Refactor the release process
|
||||||
|
|
||||||
|
## 2.5.0
|
||||||
|
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.10.0](https://code.forgejo.org/forgejo/runner/pulls/71)
|
||||||
|
|
||||||
|
## 2.4.0
|
||||||
|
|
||||||
|
* Update [code.forgejo.org/forgejo/act v1.9.0](https://code.forgejo.org/forgejo/runner/pulls/64)
|
||||||
|
|
||||||
|
## 2.3.0
|
||||||
|
|
||||||
|
* Add support for [offline registration](https://forgejo.org/docs/next/admin/actions/#offline-registration).
|
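The release notes above mention several options that live in the runner's YAML configuration file, such as `shutdown_timeout` (3.5.0), `report_interval` (3.5.1) and `[container].enable_ipv6` (3.3.0), and the README section notes that the container network must be set to `bridge` so that job containers can reach the Forgejo instance. A minimal, hypothetical sketch of such a configuration is shown below; the key names are assumptions based on those notes, and the authoritative file is the one generated by the runner itself.

```yaml
# Hypothetical config.yml excerpt; key names and defaults are assumptions,
# generate the real file with the runner and adjust it instead.
log:
  level: info
runner:
  file: .runner
  capacity: 1
  timeout: 3h
  shutdown_timeout: 1h   # 3.5.0: wait for running jobs on SIGINT/SIGTERM
  report_interval: 3s    # 3.5.1: interval of status and log reports
container:
  network: bridge        # see the README note about reaching the Forgejo instance
  enable_ipv6: false     # 3.3.0: IPv6 for docker:// jobs
```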
11  build.go  Normal file
@@ -0,0 +1,11 @@
+// Copyright 2023 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+//go:build vendor
+
+package main
+
+import (
+	// for vet
+	_ "code.gitea.io/gitea-vet"
+)
63
cmd/cmd.go
|
@ -1,63 +0,0 @@
|
||||||
package cmd
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"os"
|
|
||||||
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
const version = "0.1.5"
|
|
||||||
|
|
||||||
type globalArgs struct {
|
|
||||||
EnvFile string
|
|
||||||
}
|
|
||||||
|
|
||||||
func Execute(ctx context.Context) {
|
|
||||||
// task := runtime.NewTask("gitea", 0, nil, nil)
|
|
||||||
|
|
||||||
var gArgs globalArgs
|
|
||||||
|
|
||||||
// ./act_runner
|
|
||||||
rootCmd := &cobra.Command{
|
|
||||||
Use: "act [event name to run]\nIf no event name passed, will default to \"on: push\"",
|
|
||||||
Short: "Run GitHub actions locally by specifying the event name (e.g. `push`) or an action name directly.",
|
|
||||||
Args: cobra.MaximumNArgs(1),
|
|
||||||
Version: version,
|
|
||||||
SilenceUsage: true,
|
|
||||||
}
|
|
||||||
rootCmd.PersistentFlags().StringVarP(&gArgs.EnvFile, "env-file", "", ".env", "Read in a file of environment variables.")
|
|
||||||
|
|
||||||
// ./act_runner register
|
|
||||||
var regArgs registerArgs
|
|
||||||
registerCmd := &cobra.Command{
|
|
||||||
Use: "register",
|
|
||||||
Short: "Register a runner to the server",
|
|
||||||
Args: cobra.MaximumNArgs(0),
|
|
||||||
RunE: runRegister(ctx, ®Args, gArgs.EnvFile), // must use a pointer to regArgs
|
|
||||||
}
|
|
||||||
registerCmd.Flags().BoolVar(®Args.NoInteractive, "no-interactive", false, "Disable interactive mode")
|
|
||||||
registerCmd.Flags().StringVar(®Args.InstanceAddr, "instance", "", "Gitea instance address")
|
|
||||||
registerCmd.Flags().BoolVar(®Args.Insecure, "insecure", false, "If check server's certificate if it's https protocol")
|
|
||||||
registerCmd.Flags().StringVar(®Args.Token, "token", "", "Runner token")
|
|
||||||
registerCmd.Flags().StringVar(®Args.RunnerName, "name", "", "Runner name")
|
|
||||||
registerCmd.Flags().StringVar(®Args.Labels, "labels", "", "Runner tags, comma separated")
|
|
||||||
rootCmd.AddCommand(registerCmd)
|
|
||||||
|
|
||||||
// ./act_runner daemon
|
|
||||||
daemonCmd := &cobra.Command{
|
|
||||||
Use: "daemon",
|
|
||||||
Short: "Run as a runner daemon",
|
|
||||||
Args: cobra.MaximumNArgs(1),
|
|
||||||
RunE: runDaemon(ctx, gArgs.EnvFile),
|
|
||||||
}
|
|
||||||
// add all command
|
|
||||||
rootCmd.AddCommand(daemonCmd)
|
|
||||||
|
|
||||||
// hide completion command
|
|
||||||
rootCmd.CompletionOptions.HiddenDefaultCmd = true
|
|
||||||
|
|
||||||
if err := rootCmd.Execute(); err != nil {
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
}
|
|
112
cmd/daemon.go
|
@ -1,112 +0,0 @@
|
||||||
package cmd
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"os"
|
|
||||||
"strings"
|
|
||||||
|
|
||||||
"gitea.com/gitea/act_runner/client"
|
|
||||||
"gitea.com/gitea/act_runner/config"
|
|
||||||
"gitea.com/gitea/act_runner/engine"
|
|
||||||
"gitea.com/gitea/act_runner/poller"
|
|
||||||
"gitea.com/gitea/act_runner/runtime"
|
|
||||||
|
|
||||||
"github.com/joho/godotenv"
|
|
||||||
"github.com/mattn/go-isatty"
|
|
||||||
log "github.com/sirupsen/logrus"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
"golang.org/x/sync/errgroup"
|
|
||||||
)
|
|
||||||
|
|
||||||
func runDaemon(ctx context.Context, envFile string) func(cmd *cobra.Command, args []string) error {
|
|
||||||
return func(cmd *cobra.Command, args []string) error {
|
|
||||||
log.Infoln("Starting runner daemon")
|
|
||||||
|
|
||||||
_ = godotenv.Load(envFile)
|
|
||||||
cfg, err := config.FromEnviron()
|
|
||||||
if err != nil {
|
|
||||||
log.WithError(err).
|
|
||||||
Fatalln("invalid configuration")
|
|
||||||
}
|
|
||||||
|
|
||||||
initLogging(cfg)
|
|
||||||
|
|
||||||
// require docker if a runner label uses a docker backend
|
|
||||||
needsDocker := false
|
|
||||||
for _, l := range cfg.Runner.Labels {
|
|
||||||
splits := strings.SplitN(l, ":", 2)
|
|
||||||
if len(splits) == 2 && strings.HasPrefix(splits[1], "docker://") {
|
|
||||||
needsDocker = true
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if needsDocker {
|
|
||||||
// try to connect to docker daemon
|
|
||||||
// if failed, exit with error
|
|
||||||
if err := engine.Start(ctx); err != nil {
|
|
||||||
log.WithError(err).Fatalln("failed to connect docker daemon engine")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var g errgroup.Group
|
|
||||||
|
|
||||||
cli := client.New(
|
|
||||||
cfg.Client.Address,
|
|
||||||
cfg.Client.Insecure,
|
|
||||||
cfg.Runner.UUID,
|
|
||||||
cfg.Runner.Token,
|
|
||||||
)
|
|
||||||
|
|
||||||
runner := &runtime.Runner{
|
|
||||||
Client: cli,
|
|
||||||
Machine: cfg.Runner.Name,
|
|
||||||
ForgeInstance: cfg.Client.Address,
|
|
||||||
Environ: cfg.Runner.Environ,
|
|
||||||
Labels: cfg.Runner.Labels,
|
|
||||||
}
|
|
||||||
|
|
||||||
poller := poller.New(
|
|
||||||
cli,
|
|
||||||
runner.Run,
|
|
||||||
cfg.Runner.Capacity,
|
|
||||||
)
|
|
||||||
|
|
||||||
g.Go(func() error {
|
|
||||||
l := log.WithField("capacity", cfg.Runner.Capacity).
|
|
||||||
WithField("endpoint", cfg.Client.Address).
|
|
||||||
WithField("os", cfg.Platform.OS).
|
|
||||||
WithField("arch", cfg.Platform.Arch)
|
|
||||||
l.Infoln("polling the remote server")
|
|
||||||
|
|
||||||
if err := poller.Poll(ctx); err != nil {
|
|
||||||
l.Errorf("poller error: %v", err)
|
|
||||||
}
|
|
||||||
poller.Wait()
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
|
|
||||||
err = g.Wait()
|
|
||||||
if err != nil {
|
|
||||||
log.WithError(err).
|
|
||||||
Errorln("shutting down the server")
|
|
||||||
}
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// initLogging sets up the global logrus logger.
|
|
||||||
func initLogging(cfg config.Config) {
|
|
||||||
isTerm := isatty.IsTerminal(os.Stdout.Fd())
|
|
||||||
log.SetFormatter(&log.TextFormatter{
|
|
||||||
DisableColors: !isTerm,
|
|
||||||
FullTimestamp: true,
|
|
||||||
})
|
|
||||||
|
|
||||||
if cfg.Debug {
|
|
||||||
log.SetLevel(log.DebugLevel)
|
|
||||||
}
|
|
||||||
if cfg.Trace {
|
|
||||||
log.SetLevel(log.TraceLevel)
|
|
||||||
}
|
|
||||||
}
|
|
|
@ -1,10 +0,0 @@
|
||||||
package cmd
|
|
||||||
|
|
||||||
import "testing"
|
|
||||||
|
|
||||||
func TestValidateLabels(t *testing.T) {
|
|
||||||
labels := []string{"ubuntu-latest:docker://node:16-buster", "self-hosted"}
|
|
||||||
if err := validateLabels(labels); err != nil {
|
|
||||||
t.Errorf("validateLabels() error = %v", err)
|
|
||||||
}
|
|
||||||
}
|
|
114
config/config.go
|
@ -1,114 +0,0 @@
|
||||||
package config
|
|
||||||
|
|
||||||
import (
|
|
||||||
"encoding/json"
|
|
||||||
"io"
|
|
||||||
"os"
|
|
||||||
"runtime"
|
|
||||||
"strconv"
|
|
||||||
|
|
||||||
"gitea.com/gitea/act_runner/core"
|
|
||||||
|
|
||||||
"github.com/joho/godotenv"
|
|
||||||
"github.com/kelseyhightower/envconfig"
|
|
||||||
)
|
|
||||||
|
|
||||||
type (
|
|
||||||
// Config provides the system configuration.
|
|
||||||
Config struct {
|
|
||||||
Debug bool `envconfig:"GITEA_DEBUG"`
|
|
||||||
Trace bool `envconfig:"GITEA_TRACE"`
|
|
||||||
Client Client
|
|
||||||
Runner Runner
|
|
||||||
Platform Platform
|
|
||||||
}
|
|
||||||
|
|
||||||
Client struct {
|
|
||||||
Address string `ignored:"true"`
|
|
||||||
Insecure bool
|
|
||||||
}
|
|
||||||
|
|
||||||
Runner struct {
|
|
||||||
UUID string `ignored:"true"`
|
|
||||||
Name string `envconfig:"GITEA_RUNNER_NAME"`
|
|
||||||
Token string `ignored:"true"`
|
|
||||||
Capacity int `envconfig:"GITEA_RUNNER_CAPACITY" default:"1"`
|
|
||||||
File string `envconfig:"GITEA_RUNNER_FILE" default:".runner"`
|
|
||||||
Environ map[string]string `envconfig:"GITEA_RUNNER_ENVIRON"`
|
|
||||||
EnvFile string `envconfig:"GITEA_RUNNER_ENV_FILE"`
|
|
||||||
Labels []string `envconfig:"GITEA_RUNNER_LABELS"`
|
|
||||||
}
|
|
||||||
|
|
||||||
Platform struct {
|
|
||||||
OS string `envconfig:"GITEA_PLATFORM_OS"`
|
|
||||||
Arch string `envconfig:"GITEA_PLATFORM_ARCH"`
|
|
||||||
}
|
|
||||||
)
|
|
||||||
|
|
||||||
// FromEnviron returns the settings from the environment.
|
|
||||||
func FromEnviron() (Config, error) {
|
|
||||||
cfg := Config{}
|
|
||||||
if err := envconfig.Process("", &cfg); err != nil {
|
|
||||||
return cfg, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// check runner config exist
|
|
||||||
f, err := os.Stat(cfg.Runner.File)
|
|
||||||
if err == nil && !f.IsDir() {
|
|
||||||
jsonFile, _ := os.Open(cfg.Runner.File)
|
|
||||||
defer jsonFile.Close()
|
|
||||||
byteValue, _ := io.ReadAll(jsonFile)
|
|
||||||
var runner core.Runner
|
|
||||||
if err := json.Unmarshal(byteValue, &runner); err != nil {
|
|
||||||
return cfg, err
|
|
||||||
}
|
|
||||||
if runner.UUID != "" {
|
|
||||||
cfg.Runner.UUID = runner.UUID
|
|
||||||
}
|
|
||||||
if runner.Token != "" {
|
|
||||||
cfg.Runner.Token = runner.Token
|
|
||||||
}
|
|
||||||
if len(runner.Labels) != 0 {
|
|
||||||
cfg.Runner.Labels = runner.Labels
|
|
||||||
}
|
|
||||||
if runner.Address != "" {
|
|
||||||
cfg.Client.Address = runner.Address
|
|
||||||
}
|
|
||||||
if runner.Insecure != "" {
|
|
||||||
cfg.Client.Insecure, _ = strconv.ParseBool(runner.Insecure)
|
|
||||||
}
|
|
||||||
} else if err != nil {
|
|
||||||
return cfg, err
|
|
||||||
}
|
|
||||||
|
|
||||||
// runner config
|
|
||||||
if cfg.Runner.Environ == nil {
|
|
||||||
cfg.Runner.Environ = map[string]string{
|
|
||||||
"GITHUB_API_URL": cfg.Client.Address + "/api/v1",
|
|
||||||
"GITHUB_SERVER_URL": cfg.Client.Address,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if cfg.Runner.Name == "" {
|
|
||||||
cfg.Runner.Name, _ = os.Hostname()
|
|
||||||
}
|
|
||||||
|
|
||||||
// platform config
|
|
||||||
if cfg.Platform.OS == "" {
|
|
||||||
cfg.Platform.OS = runtime.GOOS
|
|
||||||
}
|
|
||||||
if cfg.Platform.Arch == "" {
|
|
||||||
cfg.Platform.Arch = runtime.GOARCH
|
|
||||||
}
|
|
||||||
|
|
||||||
if file := cfg.Runner.EnvFile; file != "" {
|
|
||||||
envs, err := godotenv.Read(file)
|
|
||||||
if err != nil {
|
|
||||||
return cfg, err
|
|
||||||
}
|
|
||||||
for k, v := range envs {
|
|
||||||
cfg.Runner.Environ[k] = v
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return cfg, nil
|
|
||||||
}
|
|
18
contrib/forgejo-runner.service
Normal file
|
@ -0,0 +1,18 @@
|
||||||
|
[Unit]
|
||||||
|
Description=Forgejo Runner
|
||||||
|
Documentation=https://forgejo.org/docs/latest/admin/actions/
|
||||||
|
After=docker.service
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
ExecStart=forgejo-runner daemon
|
||||||
|
ExecReload=/bin/kill -s HUP $MAINPID
|
||||||
|
|
||||||
|
# This user and working directory must already exist
|
||||||
|
User=runner
|
||||||
|
WorkingDirectory=/home/runner
|
||||||
|
Restart=on-failure
|
||||||
|
TimeoutSec=0
|
||||||
|
RestartSec=10
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
|
@ -1,17 +0,0 @@
|
||||||
package core
|
|
||||||
|
|
||||||
const (
|
|
||||||
UUIDHeader = "x-runner-uuid"
|
|
||||||
TokenHeader = "x-runner-token"
|
|
||||||
)
|
|
||||||
|
|
||||||
// Runner struct
|
|
||||||
type Runner struct {
|
|
||||||
ID int64 `json:"id"`
|
|
||||||
UUID string `json:"uuid"`
|
|
||||||
Name string `json:"name"`
|
|
||||||
Token string `json:"token"`
|
|
||||||
Address string `json:"address"`
|
|
||||||
Insecure string `json:"insecure"`
|
|
||||||
Labels []string `json:"labels"`
|
|
||||||
}
|
|
|
@ -1,37 +0,0 @@
|
||||||
package engine
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
|
|
||||||
"github.com/docker/docker/client"
|
|
||||||
)
|
|
||||||
|
|
||||||
type Docker struct {
|
|
||||||
client client.APIClient
|
|
||||||
hidePull bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func New(opts ...Option) (*Docker, error) {
|
|
||||||
cli, err := client.NewClientWithOpts(client.FromEnv)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
srv := &Docker{
|
|
||||||
client: cli,
|
|
||||||
}
|
|
||||||
|
|
||||||
// Loop through each option
|
|
||||||
for _, opt := range opts {
|
|
||||||
// Call the option, giving it the instantiated *Docker as the argument
|
|
||||||
opt.Apply(srv)
|
|
||||||
}
|
|
||||||
|
|
||||||
return srv, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Ping pings the Docker daemon.
|
|
||||||
func (e *Docker) Ping(ctx context.Context) error {
|
|
||||||
_, err := e.client.Ping(ctx)
|
|
||||||
return err
|
|
||||||
}
|
|
|
@ -1,43 +0,0 @@
|
||||||
package engine
|
|
||||||
|
|
||||||
import (
|
|
||||||
"context"
|
|
||||||
"fmt"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
log "github.com/sirupsen/logrus"
|
|
||||||
)
|
|
||||||
|
|
||||||
// Start starts the docker engine API ping loop
|
|
||||||
func Start(ctx context.Context) error {
|
|
||||||
engine, err := New()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
count := 0
|
|
||||||
for {
|
|
||||||
err := engine.Ping(ctx)
|
|
||||||
if err == context.Canceled {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
select {
|
|
||||||
case <-ctx.Done():
|
|
||||||
return ctx.Err()
|
|
||||||
default:
|
|
||||||
}
|
|
||||||
if err != nil {
|
|
||||||
log.WithError(err).
|
|
||||||
Errorln("cannot ping the docker daemon")
|
|
||||||
count++
|
|
||||||
if count == 5 {
|
|
||||||
return fmt.Errorf("retry connect to docker daemon failed: %d times", count)
|
|
||||||
}
|
|
||||||
time.Sleep(time.Second)
|
|
||||||
} else {
|
|
||||||
log.Infoln("successfully ping the docker daemon")
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
|
@ -1,30 +0,0 @@
|
||||||
package engine
|
|
||||||
|
|
||||||
import "github.com/docker/docker/client"
|
|
||||||
|
|
||||||
// An Option configures a Docker engine.
|
|
||||||
type Option interface {
|
|
||||||
Apply(*Docker)
|
|
||||||
}
|
|
||||||
|
|
||||||
// OptionFunc is a function that configures a value.
|
|
||||||
type OptionFunc func(*Docker)
|
|
||||||
|
|
||||||
// Apply calls f(option)
|
|
||||||
func (f OptionFunc) Apply(docker *Docker) {
|
|
||||||
f(docker)
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithClient sets a custom docker client.
|
|
||||||
func WithClient(c client.APIClient) Option {
|
|
||||||
return OptionFunc(func(q *Docker) {
|
|
||||||
q.client = c
|
|
||||||
})
|
|
||||||
}
|
|
||||||
|
|
||||||
// WithHidePull hides pull events.
|
|
||||||
func WithHidePull(v bool) Option {
|
|
||||||
return OptionFunc(func(q *Docker) {
|
|
||||||
q.hidePull = v
|
|
||||||
})
|
|
||||||
}
|
|
10
examples/README.md
Normal file
|
@ -0,0 +1,10 @@
|
||||||
|
This directory contains a collection of usage and deployment examples.
|
||||||
|
|
||||||
|
Workflow examples can be found [in the documentation](https://forgejo.org/docs/next/user/actions/)
|
||||||
|
and in the [sources of the setup-forgejo](https://code.forgejo.org/actions/setup-forgejo/src/branch/main/testdata) action.
|
||||||
|
|
||||||
|
| Section | Description |
|
||||||
|
|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||||
|
| [`docker`](docker) | using the host docker server by mounting the socket |
|
||||||
|
| [`docker-compose`](docker-compose) | all in one docker-compose with the Forgejo server, the runner and docker in docker |
|
||||||
|
| [`kubernetes`](kubernetes) | a sample deployment for the Forgejo runner |
|
113
examples/docker-compose/README.md
Normal file
|
@ -0,0 +1,113 @@
|
||||||
|
## Docker compose with docker-in-docker
|
||||||
|
|
||||||
|
The `compose-forgejo-and-runner.yml` compose file runs a Forgejo
|
||||||
|
instance and registers a `Forgejo runner`. A docker server is also
|
||||||
|
launched within a container (using
|
||||||
|
[dind](https://hub.docker.com/_/docker/tags?name=dind)) and will be
|
||||||
|
used by the `Forgejo runner` to execute the workflows.
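
The runner reaches that docker server the same way the docker CLI does, through the `DOCKER_HOST`, `DOCKER_TLS_VERIFY` and `DOCKER_CERT_PATH` environment variables set in the compose file. The Go sketch below only illustrates that lookup; it is not part of the compose setup and assumes those variables point at the dind container:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// client.FromEnv reads DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH,
	// which compose-forgejo-and-runner.yml points at the dind container
	// (tcp://docker:2376, with the certificates from the shared volume).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	if _, err := cli.Ping(context.Background()); err != nil {
		panic(err)
	}
	fmt.Println("docker-in-docker daemon is reachable")
}
```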
|
||||||
|
|
||||||
|
### Quick start
|
||||||
|
|
||||||
|
```sh
|
||||||
|
rm -fr /srv/runner-data /srv/forgejo-data
|
||||||
|
secret=$(openssl rand -hex 20)
|
||||||
|
sed -i -e "s/{SHARED_SECRET}/$secret/" compose-forgejo-and-runner.yml
|
||||||
|
docker compose -f compose-forgejo-and-runner.yml up -d
|
||||||
|
```
|
||||||
|
|
||||||
|
Visit http://0.0.0.0:8080/admin/actions/runners with login `root` and password `{ROOT_PASSWORD}` and verify that the runner is registered with the label `docker`.
|
||||||
|
|
||||||
|
> NOTE: the `Your ROOT_URL in app.ini is "http://localhost:3000/", it's unlikely matching the site you are visiting.` message is a warning that can be ignored in the context of this example.
|
||||||
|
|
||||||
|
```sh
|
||||||
|
docker compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml up demo-workflow
|
||||||
|
```
|
||||||
|
|
||||||
|
Visit http://0.0.0.0:8080/root/test/actions/runs/1 and see that the job ran.
|
||||||
|
|
||||||
|
|
||||||
|
### Running
|
||||||
|
|
||||||
|
Create a shared secret with:
|
||||||
|
|
||||||
|
```sh
|
||||||
|
openssl rand -hex 20
|
||||||
|
```
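
If `openssl` is not available, an equivalent 40-character hexadecimal secret can be produced with a short Go program; this is only a sketch, not something shipped in this repository:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

func main() {
	// 20 random bytes encode to 40 hexadecimal characters,
	// matching the format expected for {SHARED_SECRET}.
	buf := make([]byte, 20)
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(buf))
}
```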
|
||||||
|
|
||||||
|
Replace all occurrences of {SHARED_SECRET} in
|
||||||
|
[compose-forgejo-and-runner.yml](compose-forgejo-and-runner.yml).
|
||||||
|
|
||||||
|
> **NOTE:** a token obtained from the Forgejo web interface cannot be used as a shared secret.
|
||||||
|
|
||||||
|
Replace {ROOT_PASSWORD} with a secure password in
|
||||||
|
[compose-forgejo-and-runner.yml](compose-forgejo-and-runner.yml).
|
||||||
|
|
||||||
|
```sh
|
||||||
|
docker compose -f compose-forgejo-and-runner.yml up
|
||||||
|
Creating docker-compose_docker-in-docker_1 ... done
|
||||||
|
Creating docker-compose_forgejo_1 ... done
|
||||||
|
Creating docker-compose_runner-register_1 ... done
|
||||||
|
...
|
||||||
|
docker-in-docker_1 | time="2023-08-24T10:22:15.023338461Z" level=warning msg="WARNING: API is accessible on http://0.0.0.0:2376
|
||||||
|
...
|
||||||
|
forgejo_1 | 2023/08/24 10:22:14 ...s/graceful/server.go:75:func1() [D] Starting server on tcp:0.0.0.0:3000 (PID: 19)
|
||||||
|
...
|
||||||
|
runner-daemon_1 | time="2023-08-24T10:22:16Z" level=info msg="Starting runner daemon"
|
||||||
|
```
|
||||||
|
|
||||||
|
### Manual testing
|
||||||
|
|
||||||
|
To log in to the Forgejo instance:
|
||||||
|
|
||||||
|
* URL: http://0.0.0.0:8080
|
||||||
|
* user: `root`
|
||||||
|
* password: `{ROOT_PASSWORD}`
|
||||||
|
|
||||||
|
`Forgejo Actions` is enabled by default when creating a repository.
|
||||||
|
|
||||||
|
## Tests workflow
|
||||||
|
|
||||||
|
The `compose-demo-workflow.yml` compose file runs two demo workflows:
|
||||||
|
* one to verify the `Forgejo runner` can pick up a task from the Forgejo instance
|
||||||
|
and run it to completion.
|
||||||
|
* one to verify docker can be run inside the `Forgejo runner` container.
|
||||||
|
|
||||||
|
A new repository is created in root/test with the following workflows:
|
||||||
|
|
||||||
|
#### `.forgejo/workflows/demo.yml`:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
on: [push]
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: docker
|
||||||
|
steps:
|
||||||
|
- run: echo All Good
|
||||||
|
```
|
||||||
|
|
||||||
|
#### `.forgejo/workflows/demo_docker.yml`
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
on: [push]
|
||||||
|
jobs:
|
||||||
|
test_docker:
|
||||||
|
runs-on: ubuntu-22.04
|
||||||
|
steps:
|
||||||
|
- run: docker info
|
||||||
|
```
|
||||||
|
|
||||||
|
A wait loop polls the status of the check associated with the
|
||||||
|
commit in Forgejo until it shows "success", asserting that the workflow ran.
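
The demo does that with `curl` and `jq` (see `compose-demo-workflow.yml`); the same polling can be sketched in Go against the Forgejo commit status API. The instance URL, repository and commit SHA below are placeholders taken from the demo output, not fixed values:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Combined commit status endpoint; "state" becomes "success" or
	// "failure" once all checks attached to the commit have finished.
	url := "http://forgejo:3000/api/v1/repos/root/test/commits/261cc79/status"
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			var status struct {
				State string `json:"state"`
			}
			_ = json.NewDecoder(resp.Body).Decode(&status)
			resp.Body.Close()
			fmt.Println(status.State)
			if status.State == "success" || status.State == "failure" {
				return
			}
		}
		time.Sleep(10 * time.Second)
	}
}
```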
|
||||||
|
|
||||||
|
### Running
|
||||||
|
|
||||||
|
```sh
|
||||||
|
$ docker-compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml up demo-workflow
|
||||||
|
...
|
||||||
|
demo-workflow_1 | To http://forgejo:3000/root/test
|
||||||
|
demo-workflow_1 | + 5ce134e...261cc79 main -> main (forced update)
|
||||||
|
demo-workflow_1 | branch 'main' set up to track 'http://root:admin1234@forgejo:3000/root/test/main'.
|
||||||
|
...
|
||||||
|
demo-workflow_1 | running
|
||||||
|
...
|
||||||
|
```
|
35
examples/docker-compose/compose-demo-workflow.yml
Normal file
|
@ -0,0 +1,35 @@
|
||||||
|
# Copyright 2024 The Forgejo Authors.
|
||||||
|
# SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
services:
|
||||||
|
|
||||||
|
demo-workflow:
|
||||||
|
image: code.forgejo.org/oci/alpine:3.19
|
||||||
|
links:
|
||||||
|
- forgejo
|
||||||
|
command: >-
|
||||||
|
sh -ec '
|
||||||
|
apk add --quiet git curl jq ;
|
||||||
|
mkdir -p /srv/demo ;
|
||||||
|
cd /srv/demo ;
|
||||||
|
git init --initial-branch=main ;
|
||||||
|
mkdir -p .forgejo/workflows ;
|
||||||
|
echo "{ on: [push], jobs: { test: { runs-on: docker, steps: [ {uses: actions/checkout@v4}, { run: echo All Good } ] } } }" > .forgejo/workflows/demo.yml ;
|
||||||
|
echo "{ on: [push], jobs: { test_docker: { runs-on: ubuntu-22.04, steps: [ { run: docker info } ] } } }" > .forgejo/workflows/demo_docker.yml ;
|
||||||
|
git add . ;
|
||||||
|
git config user.email root@example.com ;
|
||||||
|
git config user.name username ;
|
||||||
|
git commit -m demo ;
|
||||||
|
while : ; do
|
||||||
|
git push --set-upstream --force http://root:{ROOT_PASSWORD}@forgejo:3000/root/test main && break ;
|
||||||
|
sleep 5 ;
|
||||||
|
done ;
|
||||||
|
sha=`git rev-parse HEAD` ;
|
||||||
|
for delay in 1 1 1 1 2 5 5 10 10 10 15 30 30 30 30 30 30 30 ; do
|
||||||
|
curl -sS -f http://forgejo:3000/api/v1/repos/root/test/commits/$$sha/status | jq --raw-output .state | tee status ;
|
||||||
|
if grep success status ; then echo DEMO WORKFLOW SUCCESS && break ; fi ;
|
||||||
|
if grep failure status ; then echo DEMO WORKFLOW FAILURE && break ; fi ;
|
||||||
|
sleep $$delay ;
|
||||||
|
done ;
|
||||||
|
grep success status || echo DEMO WORKFLOW FAILURE
|
||||||
|
'
|
93
examples/docker-compose/compose-forgejo-and-runner.yml
Normal file
|
@ -0,0 +1,93 @@
|
||||||
|
# Copyright 2024 The Forgejo Authors.
|
||||||
|
# SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
#
|
||||||
|
# Create a secret with:
|
||||||
|
#
|
||||||
|
# openssl rand -hex 20
|
||||||
|
#
|
||||||
|
# Replace all occurrences of {SHARED_SECRET} below with the output.
|
||||||
|
#
|
||||||
|
# NOTE: a token obtained from the Forgejo web interface cannot be used
|
||||||
|
# as a shared secret.
|
||||||
|
#
|
||||||
|
# Replace {ROOT_PASSWORD} with a secure password
|
||||||
|
#
|
||||||
|
|
||||||
|
volumes:
|
||||||
|
docker_certs:
|
||||||
|
|
||||||
|
services:
|
||||||
|
|
||||||
|
docker-in-docker:
|
||||||
|
image: code.forgejo.org/oci/docker:dind
|
||||||
|
hostname: docker # Must set hostname as TLS certificates are only valid for docker or localhost
|
||||||
|
privileged: true
|
||||||
|
environment:
|
||||||
|
DOCKER_TLS_CERTDIR: /certs
|
||||||
|
DOCKER_HOST: docker-in-docker
|
||||||
|
volumes:
|
||||||
|
- docker_certs:/certs
|
||||||
|
|
||||||
|
forgejo:
|
||||||
|
image: codeberg.org/forgejo/forgejo:1.21
|
||||||
|
command: >-
|
||||||
|
bash -c '
|
||||||
|
/bin/s6-svscan /etc/s6 &
|
||||||
|
sleep 10 ;
|
||||||
|
su -c "forgejo forgejo-cli actions register --secret {SHARED_SECRET}" git ;
|
||||||
|
su -c "forgejo admin user create --admin --username root --password {ROOT_PASSWORD} --email root@example.com" git ;
|
||||||
|
sleep infinity
|
||||||
|
'
|
||||||
|
environment:
|
||||||
|
FORGEJO__security__INSTALL_LOCK: "true"
|
||||||
|
FORGEJO__log__LEVEL: "debug"
|
||||||
|
FORGEJO__repository__ENABLE_PUSH_CREATE_USER: "true"
|
||||||
|
FORGEJO__repository__DEFAULT_PUSH_CREATE_PRIVATE: "false"
|
||||||
|
FORGEJO__repository__DEFAULT_REPO_UNITS: "repo.code,repo.actions"
|
||||||
|
volumes:
|
||||||
|
- /srv/forgejo-data:/data
|
||||||
|
ports:
|
||||||
|
- 8080:3000
|
||||||
|
|
||||||
|
runner-register:
|
||||||
|
image: code.forgejo.org/forgejo/runner:3.4.1
|
||||||
|
links:
|
||||||
|
- docker-in-docker
|
||||||
|
- forgejo
|
||||||
|
environment:
|
||||||
|
DOCKER_HOST: tcp://docker-in-docker:2376
|
||||||
|
volumes:
|
||||||
|
- /srv/runner-data:/data
|
||||||
|
user: 0:0
|
||||||
|
command: >-
|
||||||
|
bash -ec '
|
||||||
|
while : ; do
|
||||||
|
forgejo-runner create-runner-file --connect --instance http://forgejo:3000 --name runner --secret {SHARED_SECRET} && break ;
|
||||||
|
sleep 1 ;
|
||||||
|
done ;
|
||||||
|
sed -i -e "s|\"labels\": null|\"labels\": [\"docker:docker://code.forgejo.org/oci/node:20-bookworm\", \"ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04\"]|" .runner ;
|
||||||
|
forgejo-runner generate-config > config.yml ;
|
||||||
|
sed -i -e "s|network: .*|network: host|" config.yml ;
|
||||||
|
sed -i -e "s|^ envs:$$| envs:\n DOCKER_HOST: tcp://docker:2376\n DOCKER_TLS_VERIFY: 1\n DOCKER_CERT_PATH: /certs/client|" config.yml ;
|
||||||
|
sed -i -e "s|^ options:| options: -v /certs/client:/certs/client|" config.yml ;
|
||||||
|
sed -i -e "s| valid_volumes: \[\]$$| valid_volumes:\n - /certs/client|" config.yml ;
|
||||||
|
chown -R 1000:1000 /data
|
||||||
|
'
|
||||||
|
|
||||||
|
runner-daemon:
|
||||||
|
image: code.forgejo.org/forgejo/runner:3.4.1
|
||||||
|
links:
|
||||||
|
- docker-in-docker
|
||||||
|
- forgejo
|
||||||
|
environment:
|
||||||
|
DOCKER_HOST: tcp://docker:2376
|
||||||
|
DOCKER_CERT_PATH: /certs/client
|
||||||
|
DOCKER_TLS_VERIFY: "1"
|
||||||
|
volumes:
|
||||||
|
- /srv/runner-data:/data
|
||||||
|
- docker_certs:/certs
|
||||||
|
command: >-
|
||||||
|
bash -c '
|
||||||
|
while : ; do test -w .runner && forgejo-runner --config config.yml daemon ; sleep 1 ; done
|
||||||
|
'
|
12
examples/docker/README.md
Normal file
|
@ -0,0 +1,12 @@
|
||||||
|
The following assumes:
|
||||||
|
|
||||||
|
* a docker server runs on the host
|
||||||
|
* the docker group of the host is GID 133
|
||||||
|
* a `.runner` file exists in /tmp/data
|
||||||
|
* a `runner-config.yaml` file exists in /tmp/data (passed to `--config` below)
|
||||||
|
|
||||||
|
```sh
|
||||||
|
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/data:/data --user 1000:133 --rm code.forgejo.org/forgejo/runner:3.0.0 forgejo-runner --config runner-config.yaml daemon
|
||||||
|
```
|
||||||
|
|
||||||
|
The workflows will run using the host docker server.
|
7
examples/kubernetes/README.md
Normal file
|
@ -0,0 +1,7 @@
|
||||||
|
## Kubernetes Docker in Docker Deployment
|
||||||
|
|
||||||
|
Registers Kubernetes pod runners using [offline registration](https://forgejo.org/docs/v1.21/admin/actions/#offline-registration), allowing runners to be scaled as needed.
|
||||||
|
|
||||||
|
NOTE: Docker in Docker (dind) requires elevated privileges on Kubernetes. The current way to achieve this is to set the pod `SecurityContext` to `privileged`. Keep in mind that this is a security risk: it gives a malicious application the potential to break out of the container context.
|
||||||
|
|
||||||
|
[`dind-docker.yaml`](dind-docker.yaml) creates a deployment and secret for Kubernetes to act as a runner. The Docker credentials are re-generated each time the pod connects and do not need to be persisted.
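
Offline registration works because the runner and the Forgejo instance both derive the runner UUID deterministically from the shared secret. The sketch below mirrors the `uuidFromSecret` helper added later in this diff; the 40-character secret is the example value from its tests, not a real token:

```go
package main

import (
	"fmt"

	gouuid "github.com/google/uuid"
)

func main() {
	secret := "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA" // 40 hex characters
	// The UUID is built from the first 16 bytes of the secret, matching
	// forgejo/models/actions and the runner's uuidFromSecret helper.
	uuid, err := gouuid.FromBytes([]byte(secret[:16]))
	if err != nil {
		panic(err)
	}
	fmt.Println(uuid.String()) // 41414141-4141-4141-4141-414141414141
}
```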
|
87
examples/kubernetes/dind-docker.yaml
Normal file
|
@ -0,0 +1,87 @@
|
||||||
|
# Secret data.
|
||||||
|
# You will need to retrieve this from the web UI, and your Forgejo instance must be running v1.21+
|
||||||
|
# Alternatively, create this with
|
||||||
|
# kubectl create secret generic runner-secret --from-literal=token=your_offline_token_here
|
||||||
|
apiVersion: v1
|
||||||
|
stringData:
|
||||||
|
token: your_offline_secret_here
|
||||||
|
kind: Secret
|
||||||
|
metadata:
|
||||||
|
name: runner-secret
|
||||||
|
---
|
||||||
|
apiVersion: apps/v1
|
||||||
|
kind: Deployment
|
||||||
|
metadata:
|
||||||
|
labels:
|
||||||
|
app: forgejo-runner
|
||||||
|
name: forgejo-runner
|
||||||
|
spec:
|
||||||
|
# Two replicas means that if one is busy, the other can pick up jobs.
|
||||||
|
replicas: 2
|
||||||
|
selector:
|
||||||
|
matchLabels:
|
||||||
|
app: forgejo-runner
|
||||||
|
strategy: {}
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
creationTimestamp: null
|
||||||
|
labels:
|
||||||
|
app: forgejo-runner
|
||||||
|
spec:
|
||||||
|
restartPolicy: Always
|
||||||
|
volumes:
|
||||||
|
- name: docker-certs
|
||||||
|
emptyDir: {}
|
||||||
|
- name: runner-data
|
||||||
|
emptyDir: {}
|
||||||
|
# Initialise our configuration file using offline registration
|
||||||
|
# https://forgejo.org/docs/v1.21/admin/actions/#offline-registration
|
||||||
|
initContainers:
|
||||||
|
- name: runner-register
|
||||||
|
image: code.forgejo.org/forgejo/runner:3.2.0
|
||||||
|
command: ["forgejo-runner", "register", "--no-interactive", "--token", $(RUNNER_SECRET), "--name", $(RUNNER_NAME), "--instance", $(FORGEJO_INSTANCE_URL)]
|
||||||
|
env:
|
||||||
|
- name: RUNNER_NAME
|
||||||
|
valueFrom:
|
||||||
|
fieldRef:
|
||||||
|
fieldPath: metadata.name
|
||||||
|
- name: RUNNER_SECRET
|
||||||
|
valueFrom:
|
||||||
|
secretKeyRef:
|
||||||
|
name: runner-secret
|
||||||
|
key: token
|
||||||
|
- name: FORGEJO_INSTANCE_URL
|
||||||
|
value: http://forgejo-http.forgejo.svc.cluster.local:3000
|
||||||
|
resources:
|
||||||
|
limits:
|
||||||
|
cpu: "0.50"
|
||||||
|
memory: "64Mi"
|
||||||
|
volumeMounts:
|
||||||
|
- name: runner-data
|
||||||
|
mountPath: /data
|
||||||
|
containers:
|
||||||
|
- name: runner
|
||||||
|
image: code.forgejo.org/forgejo/runner:3.0.0
|
||||||
|
command: ["sh", "-c", "while ! nc -z localhost 2376 </dev/null; do echo 'waiting for docker daemon...'; sleep 5; done; forgejo-runner daemon"]
|
||||||
|
env:
|
||||||
|
- name: DOCKER_HOST
|
||||||
|
value: tcp://localhost:2376
|
||||||
|
- name: DOCKER_CERT_PATH
|
||||||
|
value: /certs/client
|
||||||
|
- name: DOCKER_TLS_VERIFY
|
||||||
|
value: "1"
|
||||||
|
volumeMounts:
|
||||||
|
- name: docker-certs
|
||||||
|
mountPath: /certs
|
||||||
|
- name: runner-data
|
||||||
|
mountPath: /data
|
||||||
|
- name: daemon
|
||||||
|
image: docker:23.0.6-dind
|
||||||
|
env:
|
||||||
|
- name: DOCKER_TLS_CERTDIR
|
||||||
|
value: /certs
|
||||||
|
securityContext:
|
||||||
|
privileged: true
|
||||||
|
volumeMounts:
|
||||||
|
- name: docker-certs
|
||||||
|
mountPath: /certs
|
137
go.mod
|
@ -1,80 +1,105 @@
|
||||||
module gitea.com/gitea/act_runner
|
module gitea.com/gitea/act_runner
|
||||||
|
|
||||||
go 1.18
|
go 1.21.13
|
||||||
|
|
||||||
|
toolchain go1.23.1
|
||||||
|
|
||||||
require (
|
require (
|
||||||
code.gitea.io/actions-proto-go v0.2.0
|
code.gitea.io/actions-proto-go v0.4.0
|
||||||
github.com/avast/retry-go/v4 v4.3.1
|
code.gitea.io/gitea-vet v0.2.3
|
||||||
github.com/bufbuild/connect-go v1.3.1
|
connectrpc.com/connect v1.17.0
|
||||||
github.com/docker/docker v20.10.21+incompatible
|
github.com/avast/retry-go/v4 v4.6.0
|
||||||
github.com/joho/godotenv v1.4.0
|
github.com/docker/docker v25.0.6+incompatible
|
||||||
github.com/kelseyhightower/envconfig v1.4.0
|
github.com/google/uuid v1.6.0
|
||||||
github.com/mattn/go-isatty v0.0.16
|
github.com/joho/godotenv v1.5.1
|
||||||
github.com/nektos/act v0.0.0
|
github.com/mattn/go-isatty v0.0.20
|
||||||
github.com/sirupsen/logrus v1.9.0
|
github.com/nektos/act v0.2.49
|
||||||
github.com/spf13/cobra v1.6.1
|
github.com/sirupsen/logrus v1.9.3
|
||||||
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde
|
github.com/spf13/cobra v1.8.1
|
||||||
google.golang.org/protobuf v1.28.1
|
github.com/stretchr/testify v1.9.0
|
||||||
|
golang.org/x/term v0.24.0
|
||||||
|
golang.org/x/time v0.6.0
|
||||||
|
google.golang.org/protobuf v1.34.2
|
||||||
|
gopkg.in/yaml.v3 v3.0.1
|
||||||
|
gotest.tools/v3 v3.5.1
|
||||||
)
|
)
|
||||||
|
|
||||||
require (
|
require (
|
||||||
|
dario.cat/mergo v1.0.0 // indirect
|
||||||
|
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
|
||||||
github.com/Masterminds/semver v1.5.0 // indirect
|
github.com/Masterminds/semver v1.5.0 // indirect
|
||||||
github.com/Microsoft/go-winio v0.5.2 // indirect
|
github.com/Microsoft/go-winio v0.6.1 // indirect
|
||||||
github.com/Microsoft/hcsshim v0.9.3 // indirect
|
github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 // indirect
|
||||||
github.com/ProtonMail/go-crypto v0.0.0-20220404123522-616f957b79ad // indirect
|
github.com/cloudflare/circl v1.3.7 // indirect
|
||||||
github.com/acomagu/bufpipe v1.0.3 // indirect
|
github.com/containerd/containerd v1.7.13 // indirect
|
||||||
github.com/containerd/cgroups v1.0.3 // indirect
|
github.com/containerd/log v0.1.0 // indirect
|
||||||
github.com/containerd/containerd v1.6.6 // indirect
|
github.com/creack/pty v1.1.21 // indirect
|
||||||
github.com/creack/pty v1.1.18 // indirect
|
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
|
||||||
github.com/docker/cli v20.10.21+incompatible // indirect
|
github.com/davecgh/go-spew v1.1.1 // indirect
|
||||||
github.com/docker/distribution v2.8.1+incompatible // indirect
|
github.com/distribution/reference v0.5.0 // indirect
|
||||||
github.com/docker/docker-credential-helpers v0.6.4 // indirect
|
github.com/docker/cli v25.0.3+incompatible // indirect
|
||||||
github.com/docker/go-connections v0.4.0 // indirect
|
github.com/docker/distribution v2.8.3+incompatible // indirect
|
||||||
github.com/docker/go-units v0.4.0 // indirect
|
github.com/docker/docker-credential-helpers v0.8.0 // indirect
|
||||||
github.com/emirpasic/gods v1.12.0 // indirect
|
github.com/docker/go-connections v0.5.0 // indirect
|
||||||
github.com/fatih/color v1.13.0 // indirect
|
github.com/docker/go-units v0.5.0 // indirect
|
||||||
github.com/go-git/gcfg v1.5.0 // indirect
|
github.com/emirpasic/gods v1.18.1 // indirect
|
||||||
github.com/go-git/go-billy/v5 v5.3.1 // indirect
|
github.com/fatih/color v1.16.0 // indirect
|
||||||
github.com/go-git/go-git/v5 v5.4.2 // indirect
|
github.com/felixge/httpsnoop v1.0.4 // indirect
|
||||||
github.com/go-ini/ini v1.67.0 // indirect
|
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
|
||||||
|
github.com/go-git/go-billy/v5 v5.5.0 // indirect
|
||||||
|
github.com/go-git/go-git/v5 v5.11.0 // indirect
|
||||||
|
github.com/go-logr/logr v1.3.0 // indirect
|
||||||
|
github.com/go-logr/stdr v1.2.2 // indirect
|
||||||
|
github.com/gobwas/glob v0.2.3 // indirect
|
||||||
github.com/gogo/protobuf v1.3.2 // indirect
|
github.com/gogo/protobuf v1.3.2 // indirect
|
||||||
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
|
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
|
||||||
|
github.com/google/go-cmp v0.6.0 // indirect
|
||||||
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
|
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
|
||||||
github.com/imdario/mergo v0.3.13 // indirect
|
github.com/imdario/mergo v0.3.16 // indirect
|
||||||
github.com/inconshreveable/mousetrap v1.0.1 // indirect
|
github.com/inconshreveable/mousetrap v1.1.0 // indirect
|
||||||
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
|
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
|
||||||
github.com/julienschmidt/httprouter v1.3.0 // indirect
|
github.com/julienschmidt/httprouter v1.3.0 // indirect
|
||||||
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
|
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
|
||||||
github.com/kevinburke/ssh_config v1.2.0 // indirect
|
github.com/kevinburke/ssh_config v1.2.0 // indirect
|
||||||
|
github.com/klauspost/compress v1.17.4 // indirect
|
||||||
github.com/mattn/go-colorable v0.1.13 // indirect
|
github.com/mattn/go-colorable v0.1.13 // indirect
|
||||||
github.com/mattn/go-runewidth v0.0.13 // indirect
|
github.com/mattn/go-runewidth v0.0.15 // indirect
|
||||||
github.com/mitchellh/go-homedir v1.1.0 // indirect
|
github.com/mitchellh/mapstructure v1.5.0 // indirect
|
||||||
github.com/mitchellh/mapstructure v1.1.2 // indirect
|
github.com/moby/buildkit v0.13.2 // indirect
|
||||||
github.com/moby/buildkit v0.10.6 // indirect
|
github.com/moby/patternmatcher v0.6.0 // indirect
|
||||||
github.com/moby/sys/mount v0.3.1 // indirect
|
github.com/moby/sys/sequential v0.5.0 // indirect
|
||||||
github.com/moby/sys/mountinfo v0.6.0 // indirect
|
github.com/moby/sys/user v0.1.0 // indirect
|
||||||
github.com/opencontainers/go-digest v1.0.0 // indirect
|
github.com/opencontainers/go-digest v1.0.0 // indirect
|
||||||
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
|
github.com/opencontainers/image-spec v1.1.0-rc5 // indirect
|
||||||
github.com/opencontainers/runc v1.1.2 // indirect
|
github.com/opencontainers/selinux v1.11.0 // indirect
|
||||||
github.com/opencontainers/selinux v1.10.2 // indirect
|
github.com/pjbgf/sha1cd v0.3.0 // indirect
|
||||||
github.com/pkg/errors v0.9.1 // indirect
|
github.com/pkg/errors v0.9.1 // indirect
|
||||||
github.com/rhysd/actionlint v1.6.22 // indirect
|
github.com/pmezard/go-difflib v1.0.0 // indirect
|
||||||
github.com/rivo/uniseg v0.3.4 // indirect
|
github.com/rhysd/actionlint v1.6.27 // indirect
|
||||||
github.com/robfig/cron v1.2.0 // indirect
|
github.com/rivo/uniseg v0.4.7 // indirect
|
||||||
github.com/sergi/go-diff v1.2.0 // indirect
|
github.com/robfig/cron/v3 v3.0.1 // indirect
|
||||||
|
github.com/sergi/go-diff v1.3.1 // indirect
|
||||||
|
github.com/skeema/knownhosts v1.2.1 // indirect
|
||||||
github.com/spf13/pflag v1.0.5 // indirect
|
github.com/spf13/pflag v1.0.5 // indirect
|
||||||
github.com/xanzy/ssh-agent v0.3.1 // indirect
|
github.com/stretchr/objx v0.5.2 // indirect
|
||||||
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
|
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a // indirect
|
||||||
|
github.com/xanzy/ssh-agent v0.3.3 // indirect
|
||||||
|
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
|
||||||
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
|
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
|
||||||
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f // indirect
|
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
|
||||||
go.opencensus.io v0.23.0 // indirect
|
go.etcd.io/bbolt v1.3.9 // indirect
|
||||||
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 // indirect
|
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
|
||||||
golang.org/x/net v0.0.0-20220906165146-f3363e06e74c // indirect
|
go.opentelemetry.io/otel v1.21.0 // indirect
|
||||||
golang.org/x/sys v0.0.0-20220818161305-2296e01440c6 // indirect
|
go.opentelemetry.io/otel/metric v1.21.0 // indirect
|
||||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
|
go.opentelemetry.io/otel/trace v1.21.0 // indirect
|
||||||
|
golang.org/x/crypto v0.21.0 // indirect
|
||||||
|
golang.org/x/mod v0.13.0 // indirect
|
||||||
|
golang.org/x/net v0.23.0 // indirect
|
||||||
|
golang.org/x/sync v0.6.0 // indirect
|
||||||
|
golang.org/x/sys v0.25.0 // indirect
|
||||||
|
golang.org/x/tools v0.14.0 // indirect
|
||||||
gopkg.in/warnings.v0 v0.1.2 // indirect
|
gopkg.in/warnings.v0 v0.1.2 // indirect
|
||||||
gopkg.in/yaml.v2 v2.4.0 // indirect
|
gopkg.in/yaml.v2 v2.4.0 // indirect
|
||||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
|
||||||
)
|
)
|
||||||
|
|
||||||
replace github.com/nektos/act => gitea.com/gitea/act v0.234.2
|
replace github.com/nektos/act => code.forgejo.org/forgejo/act v1.21.3
|
||||||
|
|
69
internal/app/cmd/cache-server.go
Normal file
|
@ -0,0 +1,69 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package cmd
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
"os/signal"
|
||||||
|
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/config"
|
||||||
|
|
||||||
|
"github.com/nektos/act/pkg/artifactcache"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
"github.com/spf13/cobra"
|
||||||
|
)
|
||||||
|
|
||||||
|
type cacheServerArgs struct {
|
||||||
|
Dir string
|
||||||
|
Host string
|
||||||
|
Port uint16
|
||||||
|
}
|
||||||
|
|
||||||
|
func runCacheServer(ctx context.Context, configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
|
||||||
|
return func(cmd *cobra.Command, args []string) error {
|
||||||
|
cfg, err := config.LoadDefault(*configFile)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("invalid configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
initLogging(cfg)
|
||||||
|
|
||||||
|
var (
|
||||||
|
dir = cfg.Cache.Dir
|
||||||
|
host = cfg.Cache.Host
|
||||||
|
port = cfg.Cache.Port
|
||||||
|
)
|
||||||
|
|
||||||
|
// cacheArgs has higher priority
|
||||||
|
if cacheArgs.Dir != "" {
|
||||||
|
dir = cacheArgs.Dir
|
||||||
|
}
|
||||||
|
if cacheArgs.Host != "" {
|
||||||
|
host = cacheArgs.Host
|
||||||
|
}
|
||||||
|
if cacheArgs.Port != 0 {
|
||||||
|
port = cacheArgs.Port
|
||||||
|
}
|
||||||
|
|
||||||
|
cacheHandler, err := artifactcache.StartHandler(
|
||||||
|
dir,
|
||||||
|
host,
|
||||||
|
port,
|
||||||
|
log.StandardLogger().WithField("module", "cache_request"),
|
||||||
|
)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
log.Infof("cache server is listening on %v", cacheHandler.ExternalURL())
|
||||||
|
|
||||||
|
c := make(chan os.Signal, 1)
|
||||||
|
signal.Notify(c, os.Interrupt)
|
||||||
|
<-c
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
87
internal/app/cmd/cmd.go
Normal file
|
@ -0,0 +1,87 @@
|
||||||
|
// Copyright 2022 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package cmd
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
|
||||||
|
"github.com/spf13/cobra"
|
||||||
|
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/config"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/ver"
|
||||||
|
)
|
||||||
|
|
||||||
|
func Execute(ctx context.Context) {
|
||||||
|
// ./act_runner
|
||||||
|
rootCmd := &cobra.Command{
|
||||||
|
Use: "forgejo-runner [event name to run]\nIf no event name passed, will default to \"on: push\"",
|
||||||
|
Short: "Run Forgejo Actions locally by specifying the event name (e.g. `push`) or an action name directly.",
|
||||||
|
Args: cobra.MaximumNArgs(1),
|
||||||
|
Version: ver.Version(),
|
||||||
|
SilenceUsage: true,
|
||||||
|
}
|
||||||
|
configFile := ""
|
||||||
|
rootCmd.PersistentFlags().StringVarP(&configFile, "config", "c", "", "Config file path")
|
||||||
|
|
||||||
|
// ./act_runner register
|
||||||
|
var regArgs registerArgs
|
||||||
|
registerCmd := &cobra.Command{
|
||||||
|
Use: "register",
|
||||||
|
Short: "Register a runner to the server",
|
||||||
|
Args: cobra.MaximumNArgs(0),
|
||||||
|
RunE: runRegister(ctx, &regArgs, &configFile), // must use a pointer to regArgs
|
||||||
|
}
|
||||||
|
registerCmd.Flags().BoolVar(&regArgs.NoInteractive, "no-interactive", false, "Disable interactive mode")
|
||||||
|
registerCmd.Flags().StringVar(&regArgs.InstanceAddr, "instance", "", "Forgejo instance address")
|
||||||
|
registerCmd.Flags().StringVar(&regArgs.Token, "token", "", "Runner token")
|
||||||
|
registerCmd.Flags().StringVar(&regArgs.RunnerName, "name", "", "Runner name")
|
||||||
|
registerCmd.Flags().StringVar(&regArgs.Labels, "labels", "", "Runner tags, comma separated")
|
||||||
|
rootCmd.AddCommand(registerCmd)
|
||||||
|
|
||||||
|
rootCmd.AddCommand(createRunnerFileCmd(ctx, &configFile))
|
||||||
|
|
||||||
|
// ./act_runner daemon
|
||||||
|
daemonCmd := &cobra.Command{
|
||||||
|
Use: "daemon",
|
||||||
|
Short: "Run as a runner daemon",
|
||||||
|
Args: cobra.MaximumNArgs(1),
|
||||||
|
RunE: runDaemon(ctx, &configFile),
|
||||||
|
}
|
||||||
|
rootCmd.AddCommand(daemonCmd)
|
||||||
|
|
||||||
|
// ./act_runner exec
|
||||||
|
rootCmd.AddCommand(loadExecCmd(ctx))
|
||||||
|
|
||||||
|
// ./act_runner config
|
||||||
|
rootCmd.AddCommand(&cobra.Command{
|
||||||
|
Use: "generate-config",
|
||||||
|
Short: "Generate an example config file",
|
||||||
|
Args: cobra.MaximumNArgs(0),
|
||||||
|
Run: func(_ *cobra.Command, _ []string) {
|
||||||
|
fmt.Printf("%s", config.Example)
|
||||||
|
},
|
||||||
|
})
|
||||||
|
|
||||||
|
// ./act_runner cache-server
|
||||||
|
var cacheArgs cacheServerArgs
|
||||||
|
cacheCmd := &cobra.Command{
|
||||||
|
Use: "cache-server",
|
||||||
|
Short: "Start a cache server for the cache action",
|
||||||
|
Args: cobra.MaximumNArgs(0),
|
||||||
|
RunE: runCacheServer(ctx, &configFile, &cacheArgs),
|
||||||
|
}
|
||||||
|
cacheCmd.Flags().StringVarP(&cacheArgs.Dir, "dir", "d", "", "Cache directory")
|
||||||
|
cacheCmd.Flags().StringVarP(&cacheArgs.Host, "host", "s", "", "Host of the cache server")
|
||||||
|
cacheCmd.Flags().Uint16VarP(&cacheArgs.Port, "port", "p", 0, "Port of the cache server")
|
||||||
|
rootCmd.AddCommand(cacheCmd)
|
||||||
|
|
||||||
|
// hide completion command
|
||||||
|
rootCmd.CompletionOptions.HiddenDefaultCmd = true
|
||||||
|
|
||||||
|
if err := rootCmd.Execute(); err != nil {
|
||||||
|
os.Exit(1)
|
||||||
|
}
|
||||||
|
}
|
164
internal/app/cmd/create-runner-file.go
Normal file
|
@ -0,0 +1,164 @@
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package cmd
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"encoding/hex"
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
|
||||||
|
pingv1 "code.gitea.io/actions-proto-go/ping/v1"
|
||||||
|
"connectrpc.com/connect"
|
||||||
|
gouuid "github.com/google/uuid"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
"github.com/spf13/cobra"
|
||||||
|
|
||||||
|
"gitea.com/gitea/act_runner/internal/app/run"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/client"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/config"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/ver"
|
||||||
|
)
|
||||||
|
|
||||||
|
type createRunnerFileArgs struct {
|
||||||
|
Connect bool
|
||||||
|
InstanceAddr string
|
||||||
|
Secret string
|
||||||
|
Name string
|
||||||
|
}
|
||||||
|
|
||||||
|
func createRunnerFileCmd(ctx context.Context, configFile *string) *cobra.Command {
|
||||||
|
var argsVar createRunnerFileArgs
|
||||||
|
cmd := &cobra.Command{
|
||||||
|
Use: "create-runner-file",
|
||||||
|
Short: "Create a runner file using a shared secret used to pre-register the runner on the Forgejo instance",
|
||||||
|
Args: cobra.MaximumNArgs(0),
|
||||||
|
RunE: runCreateRunnerFile(ctx, &argsVar, configFile),
|
||||||
|
}
|
||||||
|
cmd.Flags().BoolVar(&argsVar.Connect, "connect", false, "tries to connect to the instance using the secret (Forgejo v1.21 instance or greater)")
|
||||||
|
cmd.Flags().StringVar(&argsVar.InstanceAddr, "instance", "", "Forgejo instance address")
|
||||||
|
cmd.MarkFlagRequired("instance")
|
||||||
|
cmd.Flags().StringVar(&argsVar.Secret, "secret", "", "secret shared with the Forgejo instance via forgejo-cli actions register")
|
||||||
|
cmd.MarkFlagRequired("secret")
|
||||||
|
cmd.Flags().StringVar(&argsVar.Name, "name", "", "Runner name")
|
||||||
|
|
||||||
|
return cmd
|
||||||
|
}
|
||||||
|
|
||||||
|
// must be exactly the same as forgejo/models/actions/forgejo.go
|
||||||
|
func uuidFromSecret(secret string) (string, error) {
|
||||||
|
uuid, err := gouuid.FromBytes([]byte(secret[:16]))
|
||||||
|
if err != nil {
|
||||||
|
return "", fmt.Errorf("gouuid.FromBytes %v", err)
|
||||||
|
}
|
||||||
|
return uuid.String(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// should be exactly the same as forgejo/cmd/forgejo/actions.go
|
||||||
|
func validateSecret(secret string) error {
|
||||||
|
secretLen := len(secret)
|
||||||
|
if secretLen != 40 {
|
||||||
|
return fmt.Errorf("the secret must be exactly 40 characters long, not %d", secretLen)
|
||||||
|
}
|
||||||
|
if _, err := hex.DecodeString(secret); err != nil {
|
||||||
|
return fmt.Errorf("the secret must be an hexadecimal string: %w", err)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func ping(cfg *config.Config, reg *config.Registration) error {
|
||||||
|
// initial http client
|
||||||
|
cli := client.New(
|
||||||
|
reg.Address,
|
||||||
|
cfg.Runner.Insecure,
|
||||||
|
"",
|
||||||
|
"",
|
||||||
|
ver.Version(),
|
||||||
|
)
|
||||||
|
|
||||||
|
_, err := cli.Ping(context.Background(), connect.NewRequest(&pingv1.PingRequest{
|
||||||
|
Data: reg.UUID,
|
||||||
|
}))
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("ping %s failed %w", reg.Address, err)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func runCreateRunnerFile(ctx context.Context, args *createRunnerFileArgs, configFile *string) func(cmd *cobra.Command, args []string) error {
|
||||||
|
return func(*cobra.Command, []string) error {
|
||||||
|
log.SetLevel(log.DebugLevel)
|
||||||
|
log.Info("Creating runner file")
|
||||||
|
|
||||||
|
//
|
||||||
|
// Prepare the registration data
|
||||||
|
//
|
||||||
|
cfg, err := config.LoadDefault(*configFile)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("invalid configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := validateSecret(args.Secret); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
uuid, err := uuidFromSecret(args.Secret)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
name := args.Name
|
||||||
|
if name == "" {
|
||||||
|
name, _ = os.Hostname()
|
||||||
|
log.Infof("Runner name is empty, use hostname '%s'.", name)
|
||||||
|
}
|
||||||
|
|
||||||
|
reg := &config.Registration{
|
||||||
|
Name: name,
|
||||||
|
UUID: uuid,
|
||||||
|
Token: args.Secret,
|
||||||
|
Address: args.InstanceAddr,
|
||||||
|
}
|
||||||
|
|
||||||
|
//
|
||||||
|
// Verify the Forgejo instance is reachable
|
||||||
|
//
|
||||||
|
if err := ping(cfg, reg); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
//
|
||||||
|
// Save the registration file
|
||||||
|
//
|
||||||
|
if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
|
||||||
|
return fmt.Errorf("failed to save runner config to %s: %w", cfg.Runner.File, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
//
|
||||||
|
// Verify the secret works
|
||||||
|
//
|
||||||
|
if args.Connect {
|
||||||
|
cli := client.New(
|
||||||
|
reg.Address,
|
||||||
|
cfg.Runner.Insecure,
|
||||||
|
reg.UUID,
|
||||||
|
reg.Token,
|
||||||
|
ver.Version(),
|
||||||
|
)
|
||||||
|
|
||||||
|
runner := run.NewRunner(cfg, reg, cli)
|
||||||
|
resp, err := runner.Declare(ctx, cfg.Runner.Labels)
|
||||||
|
|
||||||
|
if err != nil && connect.CodeOf(err) == connect.CodeUnimplemented {
|
||||||
|
log.Warn("Cannot verify the connection because the Forgejo instance is lower than v1.21")
|
||||||
|
} else if err != nil {
|
||||||
|
log.WithError(err).Error("fail to invoke Declare")
|
||||||
|
return err
|
||||||
|
} else {
|
||||||
|
log.Infof("connection successful: %s, with version: %s, with labels: %v",
|
||||||
|
resp.Msg.Runner.Name, resp.Msg.Runner.Version, resp.Msg.Runner.Labels)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
118
internal/app/cmd/create-runner-file_test.go
Normal file
|
@ -0,0 +1,118 @@
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package cmd
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"context"
|
||||||
|
"os"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
|
||||||
|
"connectrpc.com/connect"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/client"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/config"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/ver"
|
||||||
|
|
||||||
|
"github.com/spf13/cobra"
|
||||||
|
"github.com/stretchr/testify/assert"
|
||||||
|
"gopkg.in/yaml.v3"
|
||||||
|
)
|
||||||
|
|
||||||
|
func executeCommand(ctx context.Context, cmd *cobra.Command, args ...string) (string, error) {
|
||||||
|
buf := new(bytes.Buffer)
|
||||||
|
cmd.SetOut(buf)
|
||||||
|
cmd.SetErr(buf)
|
||||||
|
cmd.SetArgs(args)
|
||||||
|
|
||||||
|
err := cmd.ExecuteContext(ctx)
|
||||||
|
|
||||||
|
return buf.String(), err
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_createRunnerFileCmd(t *testing.T) {
|
||||||
|
configFile := "config.yml"
|
||||||
|
ctx := context.Background()
|
||||||
|
cmd := createRunnerFileCmd(ctx, &configFile)
|
||||||
|
output, err := executeCommand(ctx, cmd)
|
||||||
|
assert.ErrorContains(t, err, `required flag(s) "instance", "secret" not set`)
|
||||||
|
assert.Contains(t, output, "Usage:")
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_validateSecret(t *testing.T) {
|
||||||
|
assert.ErrorContains(t, validateSecret("abc"), "exactly 40 characters")
|
||||||
|
assert.ErrorContains(t, validateSecret("ZAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"), "must be an hexadecimal")
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_uuidFromSecret(t *testing.T) {
|
||||||
|
uuid, err := uuidFromSecret("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.EqualValues(t, uuid, "41414141-4141-4141-4141-414141414141")
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_ping(t *testing.T) {
|
||||||
|
cfg := &config.Config{}
|
||||||
|
address := os.Getenv("FORGEJO_URL")
|
||||||
|
if address == "" {
|
||||||
|
address = "https://code.forgejo.org"
|
||||||
|
}
|
||||||
|
reg := &config.Registration{
|
||||||
|
Address: address,
|
||||||
|
UUID: "create-runner-file_test.go",
|
||||||
|
}
|
||||||
|
assert.NoError(t, ping(cfg, reg))
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_runCreateRunnerFile(t *testing.T) {
|
||||||
|
//
|
||||||
|
// Set the .runner file to be in a temporary directory
|
||||||
|
//
|
||||||
|
dir := t.TempDir()
|
||||||
|
configFile := dir + "/config.yml"
|
||||||
|
runnerFile := dir + "/.runner"
|
||||||
|
cfg, err := config.LoadDefault("")
|
||||||
|
cfg.Runner.File = runnerFile
|
||||||
|
yamlData, err := yaml.Marshal(cfg)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.NoError(t, os.WriteFile(configFile, yamlData, 0o666))
|
||||||
|
|
||||||
|
instance, has := os.LookupEnv("FORGEJO_URL")
|
||||||
|
if !has {
|
||||||
|
instance = "https://code.forgejo.org"
|
||||||
|
}
|
||||||
|
secret, has := os.LookupEnv("FORGEJO_RUNNER_SECRET")
|
||||||
|
assert.True(t, has)
|
||||||
|
name := "testrunner"
|
||||||
|
|
||||||
|
//
|
||||||
|
// Run create-runner-file
|
||||||
|
//
|
||||||
|
ctx := context.Background()
|
||||||
|
cmd := createRunnerFileCmd(ctx, &configFile)
|
||||||
|
output, err := executeCommand(ctx, cmd, "--connect", "--secret", secret, "--instance", instance, "--name", name)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.EqualValues(t, "", output)
|
||||||
|
|
||||||
|
//
|
||||||
|
// Read back the runner file and verify its content
|
||||||
|
//
|
||||||
|
reg, err := config.LoadRegistration(runnerFile)
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.EqualValues(t, secret, reg.Token)
|
||||||
|
assert.EqualValues(t, instance, reg.Address)
|
||||||
|
|
||||||
|
//
|
||||||
|
// Verify that fetching a task successfully returns there is
|
||||||
|
// no task for this runner
|
||||||
|
//
|
||||||
|
cli := client.New(
|
||||||
|
reg.Address,
|
||||||
|
cfg.Runner.Insecure,
|
||||||
|
reg.UUID,
|
||||||
|
reg.Token,
|
||||||
|
ver.Version(),
|
||||||
|
)
|
||||||
|
resp, err := cli.FetchTask(ctx, connect.NewRequest(&runnerv1.FetchTaskRequest{}))
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Nil(t, resp.Msg.Task)
|
||||||
|
}
|
208
internal/app/cmd/daemon.go
Normal file
|
@ -0,0 +1,208 @@
|
||||||
|
// Copyright 2022 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package cmd
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
"path"
|
||||||
|
"path/filepath"
|
||||||
|
"runtime"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"connectrpc.com/connect"
|
||||||
|
"github.com/mattn/go-isatty"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
"github.com/spf13/cobra"
|
||||||
|
|
||||||
|
"gitea.com/gitea/act_runner/internal/app/poll"
|
||||||
|
"gitea.com/gitea/act_runner/internal/app/run"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/client"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/config"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/envcheck"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/labels"
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/ver"
|
||||||
|
)
|
||||||
|
|
||||||
|
func runDaemon(ctx context.Context, configFile *string) func(cmd *cobra.Command, args []string) error {
|
||||||
|
return func(cmd *cobra.Command, args []string) error {
|
||||||
|
cfg, err := config.LoadDefault(*configFile)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("invalid configuration: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
initLogging(cfg)
|
||||||
|
log.Infoln("Starting runner daemon")
|
||||||
|
|
||||||
|
reg, err := config.LoadRegistration(cfg.Runner.File)
|
||||||
|
if os.IsNotExist(err) {
|
||||||
|
log.Error("registration file not found, please register the runner first")
|
||||||
|
return err
|
||||||
|
} else if err != nil {
|
||||||
|
return fmt.Errorf("failed to load registration file: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
cfg.Tune(reg.Address)
|
||||||
|
|
||||||
|
lbls := reg.Labels
|
||||||
|
if len(cfg.Runner.Labels) > 0 {
|
||||||
|
lbls = cfg.Runner.Labels
|
||||||
|
}
|
||||||
|
|
||||||
|
ls := labels.Labels{}
|
||||||
|
for _, l := range lbls {
|
||||||
|
label, err := labels.Parse(l)
|
||||||
|
if err != nil {
|
||||||
|
log.WithError(err).Warnf("ignored invalid label %q", l)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
ls = append(ls, label)
|
||||||
|
}
|
||||||
|
if len(ls) == 0 {
|
||||||
|
log.Warn("no labels configured, runner may not be able to pick up jobs")
|
||||||
|
}
|
||||||
|
|
||||||
|
if ls.RequireDocker() {
|
||||||
|
dockerSocketPath, err := getDockerSocketPath(cfg.Container.DockerHost)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if err := envcheck.CheckIfDockerRunning(ctx, dockerSocketPath); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
// if dockerSocketPath passes the check, override DOCKER_HOST with dockerSocketPath
|
||||||
|
os.Setenv("DOCKER_HOST", dockerSocketPath)
|
||||||
|
// empty cfg.Container.DockerHost means act_runner needs to find an available docker host automatically
|
||||||
|
// and assign the path to cfg.Container.DockerHost
|
||||||
|
if cfg.Container.DockerHost == "" {
|
||||||
|
cfg.Container.DockerHost = dockerSocketPath
|
||||||
|
}
|
||||||
|
// check the scheme, if the scheme is not npipe or unix
|
||||||
|
// set cfg.Container.DockerHost to "-" because it can't be mounted to the job container
|
||||||
|
if protoIndex := strings.Index(cfg.Container.DockerHost, "://"); protoIndex != -1 {
|
||||||
|
scheme := cfg.Container.DockerHost[:protoIndex]
|
||||||
|
if !strings.EqualFold(scheme, "npipe") && !strings.EqualFold(scheme, "unix") {
|
||||||
|
cfg.Container.DockerHost = "-"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
cli := client.New(
|
||||||
|
reg.Address,
|
||||||
|
cfg.Runner.Insecure,
|
||||||
|
reg.UUID,
|
||||||
|
reg.Token,
|
||||||
|
ver.Version(),
|
||||||
|
)
|
||||||
|
|
||||||
|
runner := run.NewRunner(cfg, reg, cli)
|
||||||
|
// declare the labels of the runner before fetching tasks
|
||||||
|
resp, err := runner.Declare(ctx, ls.Names())
|
||||||
|
if err != nil && connect.CodeOf(err) == connect.CodeUnimplemented {
|
||||||
|
// The instance is an older version; skip the declare step.
|
||||||
|
log.Warn("Because the Forgejo instance is an old version, skipping declaring the labels and version.")
|
||||||
|
} else if err != nil {
|
||||||
|
log.WithError(err).Error("fail to invoke Declare")
|
||||||
|
return err
|
||||||
|
} else {
|
||||||
|
log.Infof("runner: %s, with version: %s, with labels: %v, declared successfully",
|
||||||
|
resp.Msg.Runner.Name, resp.Msg.Runner.Version, resp.Msg.Runner.Labels)
|
||||||
|
// if declared successfully, override the labels in the .runner file with valid labels in the config file (if specified)
|
||||||
|
runner.Update(ctx, ls)
|
||||||
|
reg.Labels = ls.ToStrings()
|
||||||
|
if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
|
||||||
|
return fmt.Errorf("failed to save runner config: %w", err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
poller := poll.New(cfg, cli, runner)
|
||||||
|
|
||||||
|
go poller.Poll()
|
||||||
|
|
||||||
|
<-ctx.Done()
|
||||||
|
log.Infof("runner: %s shutdown initiated, waiting [runner].shutdown_timeout=%s for running jobs to complete before shutting down", resp.Msg.Runner.Name, cfg.Runner.ShutdownTimeout)
|
||||||
|
|
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), cfg.Runner.ShutdownTimeout)
|
||||||
|
defer cancel()
|
||||||
|
|
||||||
|
err = poller.Shutdown(ctx)
|
||||||
|
if err != nil {
|
||||||
|
log.Warnf("runner: %s cancelled in progress jobs during shutdown", resp.Msg.Runner.Name)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// initLogging setup the global logrus logger.
|
||||||
|
func initLogging(cfg *config.Config) {
|
||||||
|
isTerm := isatty.IsTerminal(os.Stdout.Fd())
|
||||||
|
format := &log.TextFormatter{
|
||||||
|
DisableColors: !isTerm,
|
||||||
|
FullTimestamp: true,
|
||||||
|
}
|
||||||
|
log.SetFormatter(format)
|
||||||
|
|
||||||
|
if l := cfg.Log.Level; l != "" {
|
||||||
|
level, err := log.ParseLevel(l)
|
||||||
|
if err != nil {
|
||||||
|
log.WithError(err).
|
||||||
|
Errorf("invalid log level: %q", l)
|
||||||
|
}
|
||||||
|
|
||||||
|
// debug level
|
||||||
|
if level == log.DebugLevel {
|
||||||
|
log.SetReportCaller(true)
|
||||||
|
format.CallerPrettyfier = func(f *runtime.Frame) (string, string) {
|
||||||
|
// get function name
|
||||||
|
s := strings.Split(f.Function, ".")
|
||||||
|
funcname := "[" + s[len(s)-1] + "]"
|
||||||
|
// get file name and line number
|
||||||
|
_, filename := path.Split(f.File)
|
||||||
|
filename = "[" + filename + ":" + strconv.Itoa(f.Line) + "]"
|
||||||
|
return funcname, filename
|
||||||
|
}
|
||||||
|
log.SetFormatter(format)
|
||||||
|
}
|
||||||
|
|
||||||
|
if log.GetLevel() != level {
|
||||||
|
log.Infof("log level changed to %v", level)
|
||||||
|
log.SetLevel(level)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
var commonSocketPaths = []string{
|
||||||
|
"/var/run/docker.sock",
|
||||||
|
"/run/podman/podman.sock",
|
||||||
|
"$HOME/.colima/docker.sock",
|
||||||
|
"$XDG_RUNTIME_DIR/docker.sock",
|
||||||
|
"$XDG_RUNTIME_DIR/podman/podman.sock",
|
||||||
|
`\\.\pipe\docker_engine`,
|
||||||
|
"$HOME/.docker/run/docker.sock",
|
||||||
|
}
|
||||||
|
|
||||||
|
func getDockerSocketPath(configDockerHost string) (string, error) {
|
||||||
|
// a `-` means don't mount the docker socket to job containers
|
||||||
|
if configDockerHost != "" && configDockerHost != "-" {
|
||||||
|
return configDockerHost, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
socket, found := os.LookupEnv("DOCKER_HOST")
|
||||||
|
if found {
|
||||||
|
return socket, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, p := range commonSocketPaths {
|
||||||
|
if _, err := os.Lstat(os.ExpandEnv(p)); err == nil {
|
||||||
|
if strings.HasPrefix(p, `\\.\`) {
|
||||||
|
return "npipe://" + filepath.ToSlash(os.ExpandEnv(p)), nil
|
||||||
|
}
|
||||||
|
return "unix://" + filepath.ToSlash(os.ExpandEnv(p)), nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return "", fmt.Errorf("daemon Docker Engine socket not found and docker_host config was invalid")
|
||||||
|
}
|
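For readers unfamiliar with the label syntax the daemon code above parses, here is a minimal sketch of how the labels helpers that appear in this compare view (labels.Parse, Labels.Names, Labels.RequireDocker) fit together. The helper name parseRunnerLabels is hypothetical, the sample label value follows the defaults shown later in this diff, and the snippet would have to live inside the act_runner module itself because internal/pkg/labels is an internal package.

// parseRunnerLabels is a hypothetical helper mirroring the loop above:
// it keeps the labels that parse and reports whether any of them needs
// a Docker host (the "name:docker://image" form, as in the defaults).
func parseRunnerLabels(raw []string) (labels.Labels, bool) {
	ls := labels.Labels{}
	for _, v := range raw {
		l, err := labels.Parse(v) // e.g. "docker:docker://node:20-bullseye"
		if err != nil {
			continue // same policy as above: ignore invalid labels
		}
		ls = append(ls, l)
	}
	return ls, ls.RequireDocker()
}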
internal/app/cmd/exec.go (new file, 495 lines)
@@ -0,0 +1,495 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// Copyright 2019 nektos
// SPDX-License-Identifier: MIT

package cmd

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"time"

	"github.com/docker/docker/api/types/container"
	"github.com/joho/godotenv"
	"github.com/nektos/act/pkg/artifactcache"
	"github.com/nektos/act/pkg/artifacts"
	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/model"
	"github.com/nektos/act/pkg/runner"
	log "github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"golang.org/x/term"
)

type executeArgs struct {
	runList bool
	job string
	event string
	workdir string
	workflowsPath string
	noWorkflowRecurse bool
	autodetectEvent bool
	forcePull bool
	forceRebuild bool
	jsonLogger bool
	envs []string
	envfile string
	secrets []string
	defaultActionsURL string
	insecureSecrets bool
	privileged bool
	usernsMode string
	containerArchitecture string
	containerDaemonSocket string
	useGitIgnore bool
	containerCapAdd []string
	containerCapDrop []string
	containerOptions string
	artifactServerPath string
	artifactServerAddr string
	artifactServerPort string
	noSkipCheckout bool
	debug bool
	dryrun bool
	image string
	cacheHandler *artifactcache.Handler
	network string
	enableIPv6 bool
	githubInstance string
}

// WorkflowsPath returns path to workflow file(s)
func (i *executeArgs) WorkflowsPath() string {
	return i.resolve(i.workflowsPath)
}

// Envfile returns path to .env
func (i *executeArgs) Envfile() string {
	return i.resolve(i.envfile)
}

func (i *executeArgs) LoadSecrets() map[string]string {
	s := make(map[string]string)
	for _, secretPair := range i.secrets {
		secretPairParts := strings.SplitN(secretPair, "=", 2)
		secretPairParts[0] = strings.ToUpper(secretPairParts[0])
		if strings.ToUpper(s[secretPairParts[0]]) == secretPairParts[0] {
			log.Errorf("Secret %s is already defined (secrets are case insensitive)", secretPairParts[0])
		}
		if len(secretPairParts) == 2 {
			s[secretPairParts[0]] = secretPairParts[1]
		} else if env, ok := os.LookupEnv(secretPairParts[0]); ok && env != "" {
			s[secretPairParts[0]] = env
		} else {
			fmt.Printf("Provide value for '%s': ", secretPairParts[0])
			val, err := term.ReadPassword(int(os.Stdin.Fd()))
			fmt.Println()
			if err != nil {
				log.Errorf("failed to read input: %v", err)
				os.Exit(1)
			}
			s[secretPairParts[0]] = string(val)
		}
	}
	return s
}

func readEnvs(path string, envs map[string]string) bool {
	if _, err := os.Stat(path); err == nil {
		env, err := godotenv.Read(path)
		if err != nil {
			log.Fatalf("Error loading from %s: %v", path, err)
		}
		for k, v := range env {
			envs[k] = v
		}
		return true
	}
	return false
}

func (i *executeArgs) LoadEnvs() map[string]string {
	envs := make(map[string]string)
	if i.envs != nil {
		for _, envVar := range i.envs {
			e := strings.SplitN(envVar, `=`, 2)
			if len(e) == 2 {
				envs[e[0]] = e[1]
			} else {
				envs[e[0]] = ""
			}
		}
	}
	_ = readEnvs(i.Envfile(), envs)

	envs["ACTIONS_CACHE_URL"] = i.cacheHandler.ExternalURL() + "/"

	return envs
}

// Workdir returns path to workdir
func (i *executeArgs) Workdir() string {
	return i.resolve(".")
}

func (i *executeArgs) resolve(path string) string {
	basedir, err := filepath.Abs(i.workdir)
	if err != nil {
		log.Fatal(err)
	}
	if path == "" {
		return path
	}
	if !filepath.IsAbs(path) {
		path = filepath.Join(basedir, path)
	}
	return path
}

func printList(plan *model.Plan) error {
	type lineInfoDef struct {
		jobID string
		jobName string
		stage string
		wfName string
		wfFile string
		events string
	}
	lineInfos := []lineInfoDef{}

	header := lineInfoDef{
		jobID: "Job ID",
		jobName: "Job name",
		stage: "Stage",
		wfName: "Workflow name",
		wfFile: "Workflow file",
		events: "Events",
	}

	jobs := map[string]bool{}
	duplicateJobIDs := false

	jobIDMaxWidth := len(header.jobID)
	jobNameMaxWidth := len(header.jobName)
	stageMaxWidth := len(header.stage)
	wfNameMaxWidth := len(header.wfName)
	wfFileMaxWidth := len(header.wfFile)
	eventsMaxWidth := len(header.events)

	for i, stage := range plan.Stages {
		for _, r := range stage.Runs {
			jobID := r.JobID
			line := lineInfoDef{
				jobID: jobID,
				jobName: r.String(),
				stage: strconv.Itoa(i),
				wfName: r.Workflow.Name,
				wfFile: r.Workflow.File,
				events: strings.Join(r.Workflow.On(), `,`),
			}
			if _, ok := jobs[jobID]; ok {
				duplicateJobIDs = true
			} else {
				jobs[jobID] = true
			}
			lineInfos = append(lineInfos, line)
			if jobIDMaxWidth < len(line.jobID) {
				jobIDMaxWidth = len(line.jobID)
			}
			if jobNameMaxWidth < len(line.jobName) {
				jobNameMaxWidth = len(line.jobName)
			}
			if stageMaxWidth < len(line.stage) {
				stageMaxWidth = len(line.stage)
			}
			if wfNameMaxWidth < len(line.wfName) {
				wfNameMaxWidth = len(line.wfName)
			}
			if wfFileMaxWidth < len(line.wfFile) {
				wfFileMaxWidth = len(line.wfFile)
			}
			if eventsMaxWidth < len(line.events) {
				eventsMaxWidth = len(line.events)
			}
		}
	}

	jobIDMaxWidth += 2
	jobNameMaxWidth += 2
	stageMaxWidth += 2
	wfNameMaxWidth += 2
	wfFileMaxWidth += 2

	fmt.Printf("%*s%*s%*s%*s%*s%*s\n",
		-stageMaxWidth, header.stage,
		-jobIDMaxWidth, header.jobID,
		-jobNameMaxWidth, header.jobName,
		-wfNameMaxWidth, header.wfName,
		-wfFileMaxWidth, header.wfFile,
		-eventsMaxWidth, header.events,
	)
	for _, line := range lineInfos {
		fmt.Printf("%*s%*s%*s%*s%*s%*s\n",
			-stageMaxWidth, line.stage,
			-jobIDMaxWidth, line.jobID,
			-jobNameMaxWidth, line.jobName,
			-wfNameMaxWidth, line.wfName,
			-wfFileMaxWidth, line.wfFile,
			-eventsMaxWidth, line.events,
		)
	}
	if duplicateJobIDs {
		fmt.Print("\nDetected multiple jobs with the same job name, use `-W` to specify the path to the specific workflow.\n")
	}
	return nil
}

func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *executeArgs) error {
	// plan with filtered jobs - to be used for filtering only
	var filterPlan *model.Plan

	// Determine the event name to be filtered
	var filterEventName string

	if len(execArgs.event) > 0 {
		log.Infof("Using chosen event for filtering: %s", execArgs.event)
		filterEventName = execArgs.event
	} else if execArgs.autodetectEvent {
		// collect all events from loaded workflows
		events := planner.GetEvents()

		// set default event type to first event from many available
		// this way the user doesn't have to specify the event.
		log.Infof("Using first detected workflow event for filtering: %s", events[0])

		filterEventName = events[0]
	}

	var err error
	if execArgs.job != "" {
		log.Infof("Preparing plan with a job: %s", execArgs.job)
		filterPlan, err = planner.PlanJob(execArgs.job)
		if err != nil {
			return err
		}
	} else if filterEventName != "" {
		log.Infof("Preparing plan for an event: %s", filterEventName)
		filterPlan, err = planner.PlanEvent(filterEventName)
		if err != nil {
			return err
		}
	} else {
		log.Infof("Preparing plan with all jobs")
		filterPlan, err = planner.PlanAll()
		if err != nil {
			return err
		}
	}

	_ = printList(filterPlan)

	return nil
}

func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command, args []string) error {
	return func(cmd *cobra.Command, args []string) error {
		planner, err := model.NewWorkflowPlanner(execArgs.WorkflowsPath(), execArgs.noWorkflowRecurse)
		if err != nil {
			return err
		}

		if execArgs.runList {
			return runExecList(ctx, planner, execArgs)
		}

		// plan with triggered jobs
		var plan *model.Plan

		// Determine the event name to be triggered
		var eventName string

		// collect all events from loaded workflows
		events := planner.GetEvents()

		if len(execArgs.event) > 0 {
			log.Infof("Using chosen event for filtering: %s", execArgs.event)
			eventName = execArgs.event
		} else if len(events) == 1 && len(events[0]) > 0 {
			log.Infof("Using the only detected workflow event: %s", events[0])
			eventName = events[0]
		} else if execArgs.autodetectEvent && len(events) > 0 && len(events[0]) > 0 {
			// set default event type to first event from many available
			// this way the user doesn't have to specify the event.
			log.Infof("Using first detected workflow event: %s", events[0])
			eventName = events[0]
		} else {
			log.Infof("Using default workflow event: push")
			eventName = "push"
		}

		// build the plan for this run
		if execArgs.job != "" {
			log.Infof("Planning job: %s", execArgs.job)
			plan, err = planner.PlanJob(execArgs.job)
			if err != nil {
				return err
			}
		} else {
			log.Infof("Planning jobs for event: %s", eventName)
			plan, err = planner.PlanEvent(eventName)
			if err != nil {
				return err
			}
		}

		maxLifetime := 3 * time.Hour
		if deadline, ok := ctx.Deadline(); ok {
			maxLifetime = time.Until(deadline)
		}

		// init a cache server
		handler, err := artifactcache.StartHandler("", "", 0, log.StandardLogger().WithField("module", "cache_request"))
		if err != nil {
			return err
		}
		log.Infof("cache handler listens on: %v", handler.ExternalURL())
		execArgs.cacheHandler = handler

		if len(execArgs.artifactServerAddr) == 0 {
			ip := common.GetOutboundIP()
			if ip == nil {
				return fmt.Errorf("unable to determine outbound IP address")
			}
			execArgs.artifactServerAddr = ip.String()
		}

		if len(execArgs.artifactServerPath) == 0 {
			tempDir, err := os.MkdirTemp("", "gitea-act-")
			if err != nil {
				fmt.Println(err)
			}
			defer os.RemoveAll(tempDir)

			execArgs.artifactServerPath = tempDir
		}

		// run the plan
		config := &runner.Config{
			Workdir: execArgs.Workdir(),
			BindWorkdir: false,
			ReuseContainers: false,
			ForcePull: execArgs.forcePull,
			ForceRebuild: execArgs.forceRebuild,
			LogOutput: true,
			JSONLogger: execArgs.jsonLogger,
			Env: execArgs.LoadEnvs(),
			Secrets: execArgs.LoadSecrets(),
			InsecureSecrets: execArgs.insecureSecrets,
			Privileged: execArgs.privileged,
			UsernsMode: execArgs.usernsMode,
			ContainerArchitecture: execArgs.containerArchitecture,
			ContainerDaemonSocket: execArgs.containerDaemonSocket,
			UseGitIgnore: execArgs.useGitIgnore,
			GitHubInstance: execArgs.githubInstance,
			ContainerCapAdd: execArgs.containerCapAdd,
			ContainerCapDrop: execArgs.containerCapDrop,
			ContainerOptions: execArgs.containerOptions,
			AutoRemove: true,
			ArtifactServerPath: execArgs.artifactServerPath,
			ArtifactServerPort: execArgs.artifactServerPort,
			ArtifactServerAddr: execArgs.artifactServerAddr,
			NoSkipCheckout: execArgs.noSkipCheckout,
			// PresetGitHubContext: preset,
			// EventJSON: string(eventJSON),
			ContainerNamePrefix: fmt.Sprintf("FORGEJO-ACTIONS-TASK-%s", eventName),
			ContainerMaxLifetime: maxLifetime,
			ContainerNetworkMode: container.NetworkMode(execArgs.network),
			ContainerNetworkEnableIPv6: execArgs.enableIPv6,
			DefaultActionInstance: execArgs.defaultActionsURL,
			PlatformPicker: func(_ []string) string {
				return execArgs.image
			},
			ValidVolumes: []string{"**"}, // All volumes are allowed for `exec` command
		}

		config.Env["ACT_EXEC"] = "true"

		if t := config.Secrets["GITEA_TOKEN"]; t != "" {
			config.Token = t
		} else if t := config.Secrets["GITHUB_TOKEN"]; t != "" {
			config.Token = t
		}

		if !execArgs.debug {
			logLevel := log.InfoLevel
			config.JobLoggerLevel = &logLevel
		}

		r, err := runner.New(config)
		if err != nil {
			return err
		}

		artifactCancel := artifacts.Serve(ctx, execArgs.artifactServerPath, execArgs.artifactServerAddr, execArgs.artifactServerPort)
		log.Debugf("artifacts server started at %s:%s", execArgs.artifactServerPath, execArgs.artifactServerPort)

		ctx = common.WithDryrun(ctx, execArgs.dryrun)
		executor := r.NewPlanExecutor(plan).Finally(func(ctx context.Context) error {
			artifactCancel()
			return nil
		})

		return executor(ctx)
	}
}

func loadExecCmd(ctx context.Context) *cobra.Command {
	execArg := executeArgs{}

	execCmd := &cobra.Command{
		Use: "exec",
		Short: "Run workflow locally.",
		Args: cobra.MaximumNArgs(20),
		RunE: runExec(ctx, &execArg),
	}

	execCmd.Flags().BoolVarP(&execArg.runList, "list", "l", false, "list workflows")
	execCmd.Flags().StringVarP(&execArg.job, "job", "j", "", "run a specific job ID")
	execCmd.Flags().StringVarP(&execArg.event, "event", "E", "", "run an event name")
	execCmd.PersistentFlags().StringVarP(&execArg.workflowsPath, "workflows", "W", "./.forgejo/workflows/", "path to workflow file(s)")
	execCmd.PersistentFlags().StringVarP(&execArg.workdir, "directory", "C", ".", "working directory")
	execCmd.PersistentFlags().BoolVarP(&execArg.noWorkflowRecurse, "no-recurse", "", false, "Flag to disable running workflows from subdirectories of specified path in '--workflows'/'-W' flag")
	execCmd.Flags().BoolVarP(&execArg.autodetectEvent, "detect-event", "", false, "Use first event type from workflow as event that triggered the workflow")
	execCmd.Flags().BoolVarP(&execArg.forcePull, "pull", "p", false, "pull docker image(s) even if already present")
	execCmd.Flags().BoolVarP(&execArg.forceRebuild, "rebuild", "", false, "rebuild local action docker image(s) even if already present")
	execCmd.PersistentFlags().BoolVar(&execArg.jsonLogger, "json", false, "Output logs in json format")
	execCmd.Flags().StringArrayVarP(&execArg.envs, "env", "", []string{}, "env to make available to actions with optional value (e.g. --env myenv=foo or --env myenv)")
	execCmd.PersistentFlags().StringVarP(&execArg.envfile, "env-file", "", ".env", "environment file to read and use as env in the containers")
	execCmd.Flags().StringArrayVarP(&execArg.secrets, "secret", "s", []string{}, "secret to make available to actions with optional value (e.g. -s mysecret=foo or -s mysecret)")
	execCmd.PersistentFlags().BoolVarP(&execArg.insecureSecrets, "insecure-secrets", "", false, "NOT RECOMMENDED! Doesn't hide secrets while printing logs.")
	execCmd.Flags().BoolVar(&execArg.privileged, "privileged", false, "use privileged mode")
	execCmd.Flags().StringVar(&execArg.usernsMode, "userns", "", "user namespace to use")
	execCmd.PersistentFlags().StringVarP(&execArg.containerArchitecture, "container-architecture", "", "", "Architecture which should be used to run containers, e.g.: linux/amd64. If not specified, will use host default architecture. Requires Docker server API Version 1.41+. Ignored on earlier Docker server platforms.")
	execCmd.PersistentFlags().StringVarP(&execArg.containerDaemonSocket, "container-daemon-socket", "", "/var/run/docker.sock", "Path to Docker daemon socket which will be mounted to containers")
	execCmd.Flags().BoolVar(&execArg.useGitIgnore, "use-gitignore", true, "Controls whether paths specified in .gitignore should be copied into container")
	execCmd.Flags().StringArrayVarP(&execArg.containerCapAdd, "container-cap-add", "", []string{}, "kernel capabilities to add to the workflow containers (e.g. --container-cap-add SYS_PTRACE)")
	execCmd.Flags().StringArrayVarP(&execArg.containerCapDrop, "container-cap-drop", "", []string{}, "kernel capabilities to remove from the workflow containers (e.g. --container-cap-drop SYS_PTRACE)")
	execCmd.Flags().StringVarP(&execArg.containerOptions, "container-opts", "", "", "container options")
	execCmd.PersistentFlags().StringVarP(&execArg.artifactServerPath, "artifact-server-path", "", ".", "Defines the path where the artifact server stores uploads and retrieves downloads from. If not specified the artifact server will not start.")
	execCmd.PersistentFlags().StringVarP(&execArg.artifactServerAddr, "artifact-server-addr", "", "", "Defines the address where the artifact server listens")
	execCmd.PersistentFlags().StringVarP(&execArg.artifactServerPort, "artifact-server-port", "", "34567", "Defines the port where the artifact server listens (will only bind to localhost).")
	execCmd.PersistentFlags().StringVarP(&execArg.defaultActionsURL, "default-actions-url", "", "https://code.forgejo.org", "Defines the default base url of the action.")
	execCmd.PersistentFlags().BoolVarP(&execArg.noSkipCheckout, "no-skip-checkout", "", false, "Do not skip actions/checkout")
	execCmd.PersistentFlags().BoolVarP(&execArg.debug, "debug", "d", false, "enable debug log")
	execCmd.PersistentFlags().BoolVarP(&execArg.dryrun, "dryrun", "n", false, "dryrun mode")
	execCmd.PersistentFlags().StringVarP(&execArg.image, "image", "i", "node:20-bullseye", "Docker image to use. Use \"-self-hosted\" to run directly on the host.")
	execCmd.PersistentFlags().StringVarP(&execArg.network, "network", "", "", "Specify the network to which the container will connect")
	execCmd.PersistentFlags().BoolVarP(&execArg.enableIPv6, "enable-ipv6", "6", false, "Create network with IPv6 enabled.")
	execCmd.PersistentFlags().StringVarP(&execArg.githubInstance, "gitea-instance", "", "", "Gitea instance to use.")

	return execCmd
}
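The event-selection fallback in runExec above is easy to lose track of inline, so the following is a distilled sketch of the same rules: an explicit --event wins, then the single detected event, then --detect-event takes the first detected event, otherwise "push". The helper chooseEvent is hypothetical and not part of the repository; it only restates the branches shown above.

// chooseEvent mirrors the event-selection branches in runExec above.
func chooseEvent(explicit string, detected []string, autodetect bool) string {
	switch {
	case explicit != "":
		return explicit
	case len(detected) == 1 && detected[0] != "":
		return detected[0]
	case autodetect && len(detected) > 0 && detected[0] != "":
		return detected[0]
	default:
		return "push"
	}
}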
@@ -1,3 +1,6 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
 package cmd
 
 import (
@@ -6,24 +9,25 @@ import (
 	"fmt"
 	"os"
 	"os/signal"
-	"runtime"
+	goruntime "runtime"
 	"strings"
 	"time"
 
 	pingv1 "code.gitea.io/actions-proto-go/ping/v1"
-	"gitea.com/gitea/act_runner/client"
-	"gitea.com/gitea/act_runner/config"
-	"gitea.com/gitea/act_runner/register"
-
-	"github.com/bufbuild/connect-go"
-	"github.com/joho/godotenv"
+	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
+	"connectrpc.com/connect"
 	"github.com/mattn/go-isatty"
 	log "github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
+
+	"gitea.com/gitea/act_runner/internal/pkg/client"
+	"gitea.com/gitea/act_runner/internal/pkg/config"
+	"gitea.com/gitea/act_runner/internal/pkg/labels"
+	"gitea.com/gitea/act_runner/internal/pkg/ver"
 )
 
 // runRegister registers a runner to the server
-func runRegister(ctx context.Context, regArgs *registerArgs, envFile string) func(*cobra.Command, []string) error {
+func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string) func(*cobra.Command, []string) error {
 	return func(cmd *cobra.Command, args []string) error {
 		log.SetReportCaller(false)
 		isTerm := isatty.IsTerminal(os.Stdout.Fd())
@@ -34,7 +38,7 @@ func runRegister(ctx context.Context, regArgs *registerArgs, envFile string) fun
 		log.SetLevel(log.DebugLevel)
 
 		log.Infof("Registering runner, arch=%s, os=%s, version=%s.",
-			runtime.GOARCH, runtime.GOOS, version)
+			goruntime.GOARCH, goruntime.GOOS, ver.Version())
 
 		// runner always needs root permission
 		if os.Getuid() != 0 {
@@ -43,14 +47,13 @@ func runRegister(ctx context.Context, regArgs *registerArgs, envFile string) fun
 		}
 
 		if regArgs.NoInteractive {
-			if err := registerNoInteractive(envFile, regArgs); err != nil {
+			if err := registerNoInteractive(ctx, *configFile, regArgs); err != nil {
 				return err
 			}
 		} else {
 			go func() {
-				if err := registerInteractive(envFile); err != nil {
-					// log.Errorln(err)
-					os.Exit(2)
+				if err := registerInteractive(ctx, *configFile); err != nil {
+					log.Fatal(err)
 					return
 				}
 				os.Exit(0)
@@ -69,7 +72,6 @@ func runRegister(ctx context.Context, regArgs *registerArgs, envFile string) fun
 type registerArgs struct {
 	NoInteractive bool
 	InstanceAddr string
-	Insecure bool
 	Token string
 	RunnerName string
 	Labels string
@@ -83,24 +85,20 @@ const (
 	StageInputInstance
 	StageInputToken
 	StageInputRunnerName
-	StageInputCustomLabels
+	StageInputLabels
 	StageWaitingForRegistration
 	StageExit
 )
 
 var defaultLabels = []string{
-	"ubuntu-latest:docker://node:16-bullseye",
-	"ubuntu-22.04:docker://node:16-bullseye", // There's no node:16-bookworm yet
-	"ubuntu-20.04:docker://node:16-bullseye",
-	"ubuntu-18.04:docker://node:16-buster",
+	"docker:docker://node:20-bullseye",
 }
 
 type registerInputs struct {
 	InstanceAddr string
-	Insecure bool
 	Token string
 	RunnerName string
-	CustomLabels []string
+	Labels []string
 }
 
 func (r *registerInputs) validate() error {
@@ -110,25 +108,22 @@ func (r *registerInputs) validate() error {
 	if r.Token == "" {
 		return fmt.Errorf("token is empty")
 	}
-	if len(r.CustomLabels) > 0 {
-		return validateLabels(r.CustomLabels)
+	if len(r.Labels) > 0 {
+		return validateLabels(r.Labels)
 	}
 	return nil
 }
 
-func validateLabels(labels []string) error {
-	for _, label := range labels {
-		values := strings.SplitN(label, ":", 2)
-		if len(values) > 2 {
-			return fmt.Errorf("Invalid label: %s", label)
+func validateLabels(ls []string) error {
+	for _, label := range ls {
+		if _, err := labels.Parse(label); err != nil {
+			return err
 		}
-		// len(values) == 1, label for non docker execution environment
-		// TODO: validate value format, like docker://node:16-buster
 	}
 	return nil
 }
 
-func (r *registerInputs) assignToNext(stage registerStage, value string) registerStage {
+func (r *registerInputs) assignToNext(stage registerStage, value string, cfg *config.Config) registerStage {
 	// must set instance address and token.
 	// if empty, keep current stage.
 	if stage == StageInputInstance || stage == StageInputToken {
@@ -156,32 +151,50 @@ func (r *registerInputs) assignToNext(stage registerStage, value string) registe
 		return StageInputRunnerName
 	case StageInputRunnerName:
 		r.RunnerName = value
-		return StageInputCustomLabels
-	case StageInputCustomLabels:
-		r.CustomLabels = defaultLabels
+		// if there are some labels configured in config file, skip input labels stage
+		if len(cfg.Runner.Labels) > 0 {
+			ls := make([]string, 0, len(cfg.Runner.Labels))
+			for _, l := range cfg.Runner.Labels {
+				_, err := labels.Parse(l)
+				if err != nil {
+					log.WithError(err).Warnf("ignored invalid label %q", l)
+					continue
+				}
+				ls = append(ls, l)
+			}
+			if len(ls) == 0 {
+				log.Warn("no valid labels configured in config file, runner may not be able to pick up jobs")
+			}
+			r.Labels = ls
+			return StageWaitingForRegistration
+		}
+		return StageInputLabels
+	case StageInputLabels:
+		r.Labels = defaultLabels
 		if value != "" {
-			r.CustomLabels = strings.Split(value, ",")
+			r.Labels = strings.Split(value, ",")
 		}
 
-		if validateLabels(r.CustomLabels) != nil {
-			log.Infoln("Invalid labels, please input again, leave blank to use the default labels (for example, ubuntu-20.04:docker://node:16-bullseye,ubuntu-18.04:docker://node:16-buster)")
-			return StageInputCustomLabels
+		if validateLabels(r.Labels) != nil {
+			log.Infoln("Invalid labels, please input again, leave blank to use the default labels (for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm)")
+			return StageInputLabels
 		}
 		return StageWaitingForRegistration
 	}
 	return StageUnknown
 }
 
-func registerInteractive(envFile string) error {
+func registerInteractive(ctx context.Context, configFile string) error {
 	var (
 		reader = bufio.NewReader(os.Stdin)
 		stage = StageInputInstance
 		inputs = new(registerInputs)
 	)
 
-	// check if overwrite local config
-	_ = godotenv.Load(envFile)
-	cfg, _ := config.FromEnviron()
+	cfg, err := config.LoadDefault(configFile)
+	if err != nil {
+		return fmt.Errorf("failed to load config: %v", err)
+	}
 	if f, err := os.Stat(cfg.Runner.File); err == nil && !f.IsDir() {
 		stage = StageOverwriteLocalConfig
 	}
@@ -193,15 +206,14 @@ func registerInteractive(envFile string) error {
 		if err != nil {
 			return err
 		}
-		stage = inputs.assignToNext(stage, strings.TrimSpace(cmdString))
+		stage = inputs.assignToNext(stage, strings.TrimSpace(cmdString), cfg)
 
 		if stage == StageWaitingForRegistration {
-			log.Infof("Registering runner, name=%s, instance=%s, labels=%v.", inputs.RunnerName, inputs.InstanceAddr, inputs.CustomLabels)
-			if err := doRegister(&cfg, inputs); err != nil {
-				log.Errorf("Failed to register runner: %v", err)
-			} else {
-				log.Infof("Runner registered successfully.")
+			log.Infof("Registering runner, name=%s, instance=%s, labels=%v.", inputs.RunnerName, inputs.InstanceAddr, inputs.Labels)
+			if err := doRegister(ctx, cfg, inputs); err != nil {
+				return fmt.Errorf("Failed to register runner: %w", err)
 			}
+			log.Infof("Runner registered successfully.")
 			return nil
 		}
@@ -221,33 +233,43 @@ func printStageHelp(stage registerStage) {
 	case StageOverwriteLocalConfig:
 		log.Infoln("Runner is already registered, overwrite local config? [y/N]")
 	case StageInputInstance:
-		log.Infoln("Enter the Gitea instance URL (for example, https://gitea.com/):")
+		log.Infoln("Enter the Forgejo instance URL (for example, https://next.forgejo.org/):")
 	case StageInputToken:
 		log.Infoln("Enter the runner token:")
 	case StageInputRunnerName:
 		hostname, _ := os.Hostname()
-		log.Infof("Enter the runner name (if set empty, use hostname:%s ):\n", hostname)
-	case StageInputCustomLabels:
-		log.Infoln("Enter the runner labels, leave blank to use the default labels (comma-separated, for example, self-hosted,ubuntu-20.04:docker://node:16-bullseye,ubuntu-18.04:docker://node:16-buster):")
+		log.Infof("Enter the runner name (if set empty, use hostname: %s):\n", hostname)
+	case StageInputLabels:
+		log.Infoln("Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm):")
 	case StageWaitingForRegistration:
 		log.Infoln("Waiting for registration...")
 	}
 }
 
-func registerNoInteractive(envFile string, regArgs *registerArgs) error {
-	_ = godotenv.Load(envFile)
-	cfg, _ := config.FromEnviron()
+func registerNoInteractive(ctx context.Context, configFile string, regArgs *registerArgs) error {
+	cfg, err := config.LoadDefault(configFile)
+	if err != nil {
+		return err
+	}
 	inputs := &registerInputs{
 		InstanceAddr: regArgs.InstanceAddr,
-		Insecure: regArgs.Insecure,
 		Token: regArgs.Token,
 		RunnerName: regArgs.RunnerName,
-		CustomLabels: defaultLabels,
+		Labels: defaultLabels,
 	}
 	regArgs.Labels = strings.TrimSpace(regArgs.Labels)
+	// command line flag.
 	if regArgs.Labels != "" {
-		inputs.CustomLabels = strings.Split(regArgs.Labels, ",")
+		inputs.Labels = strings.Split(regArgs.Labels, ",")
 	}
+	// specify labels in config file.
+	if len(cfg.Runner.Labels) > 0 {
+		if regArgs.Labels != "" {
+			log.Warn("Labels from command will be ignored, use labels defined in config file.")
+		}
+		inputs.Labels = cfg.Runner.Labels
+	}
 
 	if inputs.RunnerName == "" {
 		inputs.RunnerName, _ = os.Hostname()
 		log.Infof("Runner name is empty, use hostname '%s'.", inputs.RunnerName)
@@ -256,22 +278,21 @@ func registerNoInteractive(envFile string, regArgs *registerArgs) error {
 		log.WithError(err).Errorf("Invalid input, please re-run act command.")
 		return nil
 	}
-	if err := doRegister(&cfg, inputs); err != nil {
-		log.Errorf("Failed to register runner: %v", err)
-		return nil
+	if err := doRegister(ctx, cfg, inputs); err != nil {
+		return fmt.Errorf("Failed to register runner: %w", err)
 	}
 	log.Infof("Runner registered successfully.")
 	return nil
 }
 
-func doRegister(cfg *config.Config, inputs *registerInputs) error {
-	ctx := context.Background()
-
+func doRegister(ctx context.Context, cfg *config.Config, inputs *registerInputs) error {
 	// initial http client
 	cli := client.New(
 		inputs.InstanceAddr,
-		inputs.Insecure,
-		"", "",
+		cfg.Runner.Insecure,
+		"",
+		"",
+		ver.Version(),
 	)
 
 	for {
@@ -280,7 +301,7 @@ func doRegister(cfg *config.Config, inputs *registerInputs) error {
 		}))
 		select {
 		case <-ctx.Done():
-			return nil
+			return ctx.Err()
 		default:
 		}
 		if ctx.Err() != nil {
@@ -288,18 +309,47 @@
 		}
 		if err != nil {
 			log.WithError(err).
-				Errorln("Cannot ping the Gitea instance server")
+				Errorln("Cannot ping the Forgejo instance server")
 			// TODO: if ping failed, retry or exit
 			time.Sleep(time.Second)
 		} else {
-			log.Debugln("Successfully pinged the Gitea instance server")
+			log.Debugln("Successfully pinged the Forgejo instance server")
 			break
 		}
 	}
 
-	cfg.Runner.Name = inputs.RunnerName
-	cfg.Runner.Token = inputs.Token
-	cfg.Runner.Labels = inputs.CustomLabels
-	_, err := register.New(cli).Register(ctx, cfg.Runner)
+	reg := &config.Registration{
+		Name: inputs.RunnerName,
+		Token: inputs.Token,
+		Address: inputs.InstanceAddr,
+		Labels: inputs.Labels,
+	}
+
+	ls := make([]string, len(reg.Labels))
+	for i, v := range reg.Labels {
+		l, _ := labels.Parse(v)
+		ls[i] = l.Name
+	}
+	// register new runner.
+	resp, err := cli.Register(ctx, connect.NewRequest(&runnerv1.RegisterRequest{
+		Name: reg.Name,
+		Token: reg.Token,
+		Version: ver.Version(),
+		AgentLabels: ls, // Could be removed after Gitea 1.20
+		Labels: ls,
+	}))
+	if err != nil {
+		log.WithError(err).Error("poller: cannot register new runner")
 		return err
+	}
+
+	reg.ID = resp.Msg.Runner.Id
+	reg.UUID = resp.Msg.Runner.Uuid
+	reg.Name = resp.Msg.Runner.Name
+	reg.Token = resp.Msg.Runner.Token
+
+	if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
+		return fmt.Errorf("failed to save runner config: %w", err)
+	}
+	return nil
 }
internal/app/poll/poller.go (new file, 167 lines)
@@ -0,0 +1,167 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package poll

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"sync/atomic"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"connectrpc.com/connect"
	log "github.com/sirupsen/logrus"
	"golang.org/x/time/rate"

	"gitea.com/gitea/act_runner/internal/app/run"
	"gitea.com/gitea/act_runner/internal/pkg/client"
	"gitea.com/gitea/act_runner/internal/pkg/config"
)

const PollerID = "PollerID"

type Poller interface {
	Poll()
	Shutdown(ctx context.Context) error
}

type poller struct {
	client client.Client
	runner run.RunnerInterface
	cfg *config.Config
	tasksVersion atomic.Int64 // tasksVersion used to store the version of the last task fetched from the Gitea instance.

	pollingCtx context.Context
	shutdownPolling context.CancelFunc

	jobsCtx context.Context
	shutdownJobs context.CancelFunc

	done chan any
}

func New(cfg *config.Config, client client.Client, runner run.RunnerInterface) Poller {
	return (&poller{}).init(cfg, client, runner)
}

func (p *poller) init(cfg *config.Config, client client.Client, runner run.RunnerInterface) Poller {
	pollingCtx, shutdownPolling := context.WithCancel(context.Background())

	jobsCtx, shutdownJobs := context.WithCancel(context.Background())

	done := make(chan any)

	p.client = client
	p.runner = runner
	p.cfg = cfg

	p.pollingCtx = pollingCtx
	p.shutdownPolling = shutdownPolling

	p.jobsCtx = jobsCtx
	p.shutdownJobs = shutdownJobs
	p.done = done

	return p
}

func (p *poller) Poll() {
	limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
	wg := &sync.WaitGroup{}
	for i := 0; i < p.cfg.Runner.Capacity; i++ {
		wg.Add(1)
		go p.poll(i, wg, limiter)
	}
	wg.Wait()

	// signal the poller is finished
	close(p.done)
}

func (p *poller) Shutdown(ctx context.Context) error {
	p.shutdownPolling()

	select {
	case <-p.done:
		log.Trace("all jobs are complete")
		return nil

	case <-ctx.Done():
		log.Trace("forcing the jobs to shutdown")
		p.shutdownJobs()
		<-p.done
		log.Trace("all jobs have been shutdown")
		return ctx.Err()
	}
}

func (p *poller) poll(id int, wg *sync.WaitGroup, limiter *rate.Limiter) {
	log.Infof("[poller %d] launched", id)
	defer wg.Done()
	for {
		if err := limiter.Wait(p.pollingCtx); err != nil {
			log.Infof("[poller %d] shutdown", id)
			return
		}
		task, ok := p.fetchTask(p.pollingCtx)
		if !ok {
			continue
		}
		p.runTaskWithRecover(p.jobsCtx, task)
	}
}

func (p *poller) runTaskWithRecover(ctx context.Context, task *runnerv1.Task) {
	defer func() {
		if r := recover(); r != nil {
			err := fmt.Errorf("panic: %v", r)
			log.WithError(err).Error("panic in runTaskWithRecover")
		}
	}()

	if err := p.runner.Run(ctx, task); err != nil {
		log.WithError(err).Error("failed to run task")
	}
}

func (p *poller) fetchTask(ctx context.Context) (*runnerv1.Task, bool) {
	reqCtx, cancel := context.WithTimeout(ctx, p.cfg.Runner.FetchTimeout)
	defer cancel()

	// Load the version value that was in the cache when the request was sent.
	v := p.tasksVersion.Load()
	resp, err := p.client.FetchTask(reqCtx, connect.NewRequest(&runnerv1.FetchTaskRequest{
		TasksVersion: v,
	}))
	if errors.Is(err, context.DeadlineExceeded) {
		log.Trace("deadline exceeded")
		err = nil
	}
	if err != nil {
		if errors.Is(err, context.Canceled) {
			log.WithError(err).Debugf("shutdown, fetch task canceled")
		} else {
			log.WithError(err).Error("failed to fetch task")
		}
		return nil, false
	}

	if resp == nil || resp.Msg == nil {
		return nil, false
	}

	if resp.Msg.TasksVersion > v {
		p.tasksVersion.CompareAndSwap(v, resp.Msg.TasksVersion)
	}

	if resp.Msg.Task == nil {
		return nil, false
	}

	// got a task, set `tasksVersion` to zero to force a db query in the next request.
	p.tasksVersion.CompareAndSwap(resp.Msg.TasksVersion, 0)

	return resp.Msg.Task, true
}
internal/app/poll/poller_test.go (new file, 263 lines)
@@ -0,0 +1,263 @@
// Copyright The Forgejo Authors.
// SPDX-License-Identifier: MIT

package poll

import (
	"context"
	"fmt"
	"testing"
	"time"

	"connectrpc.com/connect"

	"code.gitea.io/actions-proto-go/ping/v1/pingv1connect"
	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"code.gitea.io/actions-proto-go/runner/v1/runnerv1connect"
	"gitea.com/gitea/act_runner/internal/pkg/config"

	log "github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
)

type mockPoller struct {
	poller
}

func (o *mockPoller) Poll() {
	o.poller.Poll()
}

type mockClient struct {
	pingv1connect.PingServiceClient
	runnerv1connect.RunnerServiceClient

	sleep time.Duration
	cancel bool
	err error
	noTask bool
}

func (o mockClient) Address() string {
	return ""
}

func (o mockClient) Insecure() bool {
	return true
}

func (o *mockClient) FetchTask(ctx context.Context, req *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error) {
	if o.sleep > 0 {
		select {
		case <-ctx.Done():
			log.Trace("fetch task done")
			return nil, context.DeadlineExceeded
		case <-time.After(o.sleep):
			log.Trace("slept")
			return nil, fmt.Errorf("unexpected")
		}
	}
	if o.cancel {
		return nil, context.Canceled
	}
	if o.err != nil {
		return nil, o.err
	}
	task := &runnerv1.Task{}
	if o.noTask {
		task = nil
		o.noTask = false
	}

	return connect.NewResponse(&runnerv1.FetchTaskResponse{
		Task: task,
		TasksVersion: int64(1),
	}), nil
}

type mockRunner struct {
	cfg *config.Runner
	log chan string
	panics bool
	err error
}

func (o *mockRunner) Run(ctx context.Context, task *runnerv1.Task) error {
	o.log <- "runner starts"
	if o.panics {
		log.Trace("panics")
		o.log <- "runner panics"
		o.panics = false
		panic("whatever")
	}
	if o.err != nil {
		log.Trace("error")
		o.log <- "runner error"
		err := o.err
		o.err = nil
		return err
	}
	for {
		select {
		case <-ctx.Done():
			log.Trace("shutdown")
			o.log <- "runner shutdown"
			return nil
		case <-time.After(o.cfg.Timeout):
			log.Trace("after")
			o.log <- "runner timeout"
			return nil
		}
	}
}

func setTrace(t *testing.T) {
	t.Helper()
	log.SetReportCaller(true)
	log.SetLevel(log.TraceLevel)
}

func TestPoller_New(t *testing.T) {
	p := New(&config.Config{}, &mockClient{}, &mockRunner{})
	assert.NotNil(t, p)
}

func TestPoller_Runner(t *testing.T) {
	setTrace(t)
	for _, testCase := range []struct {
		name string
		timeout time.Duration
		noTask bool
		panics bool
		err error
		expected string
		contextTimeout time.Duration
	}{
		{
			name: "Simple",
			timeout: 10 * time.Second,
			expected: "runner shutdown",
		},
		{
			name: "Panics",
			timeout: 10 * time.Second,
			panics: true,
			expected: "runner panics",
		},
		{
			name: "Error",
			timeout: 10 * time.Second,
			err: fmt.Errorf("ERROR"),
			expected: "runner error",
		},
		{
			name: "PollTaskError",
			timeout: 10 * time.Second,
			noTask: true,
			expected: "runner shutdown",
		},
		{
			name: "ShutdownTimeout",
			timeout: 1 * time.Second,
			contextTimeout: 1 * time.Minute,
			expected: "runner timeout",
		},
	} {
		t.Run(testCase.name, func(t *testing.T) {
			runnerLog := make(chan string, 3)
			configRunner := config.Runner{
				FetchInterval: 1,
				Capacity: 1,
				Timeout: testCase.timeout,
			}
			p := &mockPoller{}
			p.init(
				&config.Config{
					Runner: configRunner,
				},
				&mockClient{
					noTask: testCase.noTask,
				},
				&mockRunner{
					cfg: &configRunner,
					log: runnerLog,
					panics: testCase.panics,
					err: testCase.err,
				})
			go p.Poll()
			assert.Equal(t, "runner starts", <-runnerLog)
			var ctx context.Context
			var cancel context.CancelFunc
			if testCase.contextTimeout > 0 {
				ctx, cancel = context.WithTimeout(context.Background(), testCase.contextTimeout)
				defer cancel()
			} else {
				ctx, cancel = context.WithCancel(context.Background())
				cancel()
			}
			p.Shutdown(ctx)
			<-p.done
			assert.Equal(t, testCase.expected, <-runnerLog)
		})
	}
}

func TestPoller_Fetch(t *testing.T) {
	setTrace(t)
	for _, testCase := range []struct {
		name string
		noTask bool
		sleep time.Duration
		err error
		cancel bool
		success bool
	}{
		{
			name: "Success",
			success: true,
		},
		{
			name: "Timeout",
			sleep: 100 * time.Millisecond,
		},
		{
			name: "Canceled",
			cancel: true,
		},
		{
			name: "NoTask",
			noTask: true,
		},
		{
			name: "Error",
			err: fmt.Errorf("random error"),
		},
	} {
		t.Run(testCase.name, func(t *testing.T) {
			configRunner := config.Runner{
				FetchTimeout: 1 * time.Millisecond,
			}
			p := &mockPoller{}
			p.init(
				&config.Config{
					Runner: configRunner,
				},
				&mockClient{
					sleep: testCase.sleep,
					cancel: testCase.cancel,
					noTask: testCase.noTask,
					err: testCase.err,
				},
				&mockRunner{},
			)
			task, ok := p.fetchTask(context.Background())
			if testCase.success {
				assert.True(t, ok)
				assert.NotNil(t, task)
			} else {
				assert.False(t, ok)
				assert.Nil(t, task)
			}
		})
	}
}
internal/app/run/runner.go (new file, 260 lines)
@@ -0,0 +1,260 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package run

import (
	"context"
	"encoding/json"
	"fmt"
	"path/filepath"
	"strings"
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"connectrpc.com/connect"
	"github.com/docker/docker/api/types/container"
	"github.com/nektos/act/pkg/artifactcache"
	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/model"
	"github.com/nektos/act/pkg/runner"
	log "github.com/sirupsen/logrus"

	"gitea.com/gitea/act_runner/internal/pkg/client"
	"gitea.com/gitea/act_runner/internal/pkg/config"
	"gitea.com/gitea/act_runner/internal/pkg/labels"
	"gitea.com/gitea/act_runner/internal/pkg/report"
	"gitea.com/gitea/act_runner/internal/pkg/ver"
)

// Runner runs the pipeline.
type Runner struct {
	name string

	cfg *config.Config

	client client.Client
	labels labels.Labels
	envs map[string]string

	runningTasks sync.Map
}

type RunnerInterface interface {
	Run(ctx context.Context, task *runnerv1.Task) error
}

func NewRunner(cfg *config.Config, reg *config.Registration, cli client.Client) *Runner {
	ls := labels.Labels{}
	for _, v := range reg.Labels {
		if l, err := labels.Parse(v); err == nil {
			ls = append(ls, l)
		}
	}

	if cfg.Runner.Envs == nil {
		cfg.Runner.Envs = make(map[string]string, 10)
	}

	cfg.Runner.Envs["GITHUB_SERVER_URL"] = reg.Address

	envs := make(map[string]string, len(cfg.Runner.Envs))
	for k, v := range cfg.Runner.Envs {
		envs[k] = v
	}
	if cfg.Cache.Enabled == nil || *cfg.Cache.Enabled {
		if cfg.Cache.ExternalServer != "" {
			envs["ACTIONS_CACHE_URL"] = cfg.Cache.ExternalServer
		} else {
			cacheHandler, err := artifactcache.StartHandler(
				cfg.Cache.Dir,
				cfg.Cache.Host,
				cfg.Cache.Port,
				log.StandardLogger().WithField("module", "cache_request"),
			)
			if err != nil {
				log.Errorf("cannot init cache server, it will be disabled: %v", err)
				// go on
			} else {
				envs["ACTIONS_CACHE_URL"] = cacheHandler.ExternalURL() + "/"
			}
		}
	}

	// set artifact gitea api
	artifactGiteaAPI := strings.TrimSuffix(cli.Address(), "/") + "/api/actions_pipeline/"
	envs["ACTIONS_RUNTIME_URL"] = artifactGiteaAPI
	envs["ACTIONS_RESULTS_URL"] = strings.TrimSuffix(cli.Address(), "/")

	// Set specific environments to distinguish between Gitea and GitHub
	envs["GITEA_ACTIONS"] = "true"
	envs["GITEA_ACTIONS_RUNNER_VERSION"] = ver.Version()

	return &Runner{
		name: reg.Name,
		cfg: cfg,
		client: cli,
		labels: ls,
		envs: envs,
	}
}

func (r *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
	if _, ok := r.runningTasks.Load(task.Id); ok {
		return fmt.Errorf("task %d is already running", task.Id)
	}
	r.runningTasks.Store(task.Id, struct{}{})
	defer r.runningTasks.Delete(task.Id)

	ctx, cancel := context.WithTimeout(ctx, r.cfg.Runner.Timeout)
	defer cancel()
	reporter := report.NewReporter(ctx, cancel, r.client, task, r.cfg.Runner.ReportInterval)
	var runErr error
	defer func() {
		lastWords := ""
		if runErr != nil {
			lastWords = runErr.Error()
		}
		_ = reporter.Close(lastWords)
	}()
	reporter.RunDaemon()
	runErr = r.run(ctx, task, reporter)

	return nil
}

func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.Reporter) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("panic: %v", r)
		}
	}()

	reporter.Logf("%s(version:%s) received task %v of job %v, be triggered by event: %s", r.name, ver.Version(), task.Id, task.Context.Fields["job"].GetStringValue(), task.Context.Fields["event_name"].GetStringValue())

	workflow, jobID, err := generateWorkflow(task)
	if err != nil {
		return err
	}

	plan, err := model.CombineWorkflowPlanner(workflow).PlanJob(jobID)
	if err != nil {
		return err
	}
	job := workflow.GetJob(jobID)
	reporter.ResetSteps(len(job.Steps))

	taskContext := task.Context.Fields

	log.Infof("task %v repo is %v %v %v", task.Id, taskContext["repository"].GetStringValue(),
		taskContext["gitea_default_actions_url"].GetStringValue(),
		r.client.Address())

	preset := &model.GithubContext{
		Event: taskContext["event"].GetStructValue().AsMap(),
		RunID: taskContext["run_id"].GetStringValue(),
|
||||||
|
RunNumber: taskContext["run_number"].GetStringValue(),
|
||||||
|
Actor: taskContext["actor"].GetStringValue(),
|
||||||
|
Repository: taskContext["repository"].GetStringValue(),
|
||||||
|
EventName: taskContext["event_name"].GetStringValue(),
|
||||||
|
Sha: taskContext["sha"].GetStringValue(),
|
||||||
|
Ref: taskContext["ref"].GetStringValue(),
|
||||||
|
RefName: taskContext["ref_name"].GetStringValue(),
|
||||||
|
RefType: taskContext["ref_type"].GetStringValue(),
|
||||||
|
HeadRef: taskContext["head_ref"].GetStringValue(),
|
||||||
|
BaseRef: taskContext["base_ref"].GetStringValue(),
|
||||||
|
Token: taskContext["token"].GetStringValue(),
|
||||||
|
RepositoryOwner: taskContext["repository_owner"].GetStringValue(),
|
||||||
|
RetentionDays: taskContext["retention_days"].GetStringValue(),
|
||||||
|
}
|
||||||
|
if t := task.Secrets["GITEA_TOKEN"]; t != "" {
|
||||||
|
preset.Token = t
|
||||||
|
} else if t := task.Secrets["GITHUB_TOKEN"]; t != "" {
|
||||||
|
preset.Token = t
|
||||||
|
}
|
||||||
|
|
||||||
|
giteaRuntimeToken := taskContext["gitea_runtime_token"].GetStringValue()
|
||||||
|
if giteaRuntimeToken == "" {
|
||||||
|
// use task token to action api token for previous Gitea Server Versions
|
||||||
|
giteaRuntimeToken = preset.Token
|
||||||
|
}
|
||||||
|
r.envs["ACTIONS_RUNTIME_TOKEN"] = giteaRuntimeToken
|
||||||
|
|
||||||
|
eventJSON, err := json.Marshal(preset.Event)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
maxLifetime := 3 * time.Hour
|
||||||
|
if deadline, ok := ctx.Deadline(); ok {
|
||||||
|
maxLifetime = time.Until(deadline)
|
||||||
|
}
|
||||||
|
|
||||||
|
var inputs map[string]string
|
||||||
|
if preset.EventName == "workflow_dispatch" {
|
||||||
|
if inputsRaw, ok := preset.Event["inputs"]; ok {
|
||||||
|
inputs, _ = inputsRaw.(map[string]string)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
runnerConfig := &runner.Config{
|
||||||
|
// On Linux, Workdir will be like "/<parent_directory>/<owner>/<repo>"
|
||||||
|
// On Windows, Workdir will be like "\<parent_directory>\<owner>\<repo>"
|
||||||
|
Workdir: filepath.FromSlash(filepath.Clean(fmt.Sprintf("/%s/%s", r.cfg.Container.WorkdirParent, preset.Repository))),
|
||||||
|
BindWorkdir: false,
|
||||||
|
ActionCacheDir: filepath.FromSlash(r.cfg.Host.WorkdirParent),
|
||||||
|
|
||||||
|
ReuseContainers: false,
|
||||||
|
ForcePull: r.cfg.Container.ForcePull,
|
||||||
|
ForceRebuild: false,
|
||||||
|
LogOutput: true,
|
||||||
|
JSONLogger: false,
|
||||||
|
Env: r.envs,
|
||||||
|
Secrets: task.Secrets,
|
||||||
|
GitHubInstance: strings.TrimSuffix(r.client.Address(), "/"),
|
||||||
|
AutoRemove: true,
|
||||||
|
NoSkipCheckout: true,
|
||||||
|
PresetGitHubContext: preset,
|
||||||
|
EventJSON: string(eventJSON),
|
||||||
|
ContainerNamePrefix: fmt.Sprintf("GITEA-ACTIONS-TASK-%d", task.Id),
|
||||||
|
ContainerMaxLifetime: maxLifetime,
|
||||||
|
ContainerNetworkMode: container.NetworkMode(r.cfg.Container.Network),
|
||||||
|
ContainerNetworkEnableIPv6: r.cfg.Container.EnableIPv6,
|
||||||
|
ContainerOptions: r.cfg.Container.Options,
|
||||||
|
ContainerDaemonSocket: r.cfg.Container.DockerHost,
|
||||||
|
Privileged: r.cfg.Container.Privileged,
|
||||||
|
DefaultActionInstance: taskContext["gitea_default_actions_url"].GetStringValue(),
|
||||||
|
PlatformPicker: r.labels.PickPlatform,
|
||||||
|
Vars: task.Vars,
|
||||||
|
ValidVolumes: r.cfg.Container.ValidVolumes,
|
||||||
|
InsecureSkipTLS: r.cfg.Runner.Insecure,
|
||||||
|
Inputs: inputs,
|
||||||
|
}
|
||||||
|
|
||||||
|
rr, err := runner.New(runnerConfig)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
executor := rr.NewPlanExecutor(plan)
|
||||||
|
|
||||||
|
reporter.Logf("workflow prepared")
|
||||||
|
|
||||||
|
// add logger recorders
|
||||||
|
ctx = common.WithLoggerHook(ctx, reporter)
|
||||||
|
|
||||||
|
execErr := executor(ctx)
|
||||||
|
reporter.SetOutputs(job.Outputs)
|
||||||
|
return execErr
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *Runner) Declare(ctx context.Context, labels []string) (*connect.Response[runnerv1.DeclareResponse], error) {
|
||||||
|
return r.client.Declare(ctx, connect.NewRequest(&runnerv1.DeclareRequest{
|
||||||
|
Version: ver.Version(),
|
||||||
|
Labels: labels,
|
||||||
|
}))
|
||||||
|
}
|
||||||
|
|
||||||
|
func (r *Runner) Update(ctx context.Context, labels labels.Labels) {
|
||||||
|
r.labels = labels
|
||||||
|
}
|
37
internal/app/run/runner_test.go
Normal file
37
internal/app/run/runner_test.go
Normal file
|
@ -0,0 +1,37 @@
|
||||||
|
package run
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"gitea.com/gitea/act_runner/internal/pkg/labels"
|
||||||
|
"github.com/stretchr/testify/assert"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestLabelUpdate(t *testing.T) {
|
||||||
|
ctx := context.Background()
|
||||||
|
ls := labels.Labels{}
|
||||||
|
|
||||||
|
initialLabel, err := labels.Parse("testlabel:docker://alpine")
|
||||||
|
assert.NoError(t, err)
|
||||||
|
ls = append(ls, initialLabel)
|
||||||
|
|
||||||
|
newLs := labels.Labels{}
|
||||||
|
|
||||||
|
newLabel, err := labels.Parse("next label:host")
|
||||||
|
assert.NoError(t, err)
|
||||||
|
newLs = append(newLs, initialLabel)
|
||||||
|
newLs = append(newLs, newLabel)
|
||||||
|
|
||||||
|
runner := Runner{
|
||||||
|
labels: ls,
|
||||||
|
}
|
||||||
|
|
||||||
|
assert.Contains(t, runner.labels, initialLabel)
|
||||||
|
assert.NotContains(t, runner.labels, newLabel)
|
||||||
|
|
||||||
|
runner.Update(ctx, newLs)
|
||||||
|
|
||||||
|
assert.Contains(t, runner.labels, initialLabel)
|
||||||
|
assert.Contains(t, runner.labels, newLabel)
|
||||||
|
}
|
54
internal/app/run/workflow.go
Normal file
54
internal/app/run/workflow.go
Normal file
|
@ -0,0 +1,54 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package run
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"fmt"
|
||||||
|
"sort"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
|
||||||
|
"github.com/nektos/act/pkg/model"
|
||||||
|
"gopkg.in/yaml.v3"
|
||||||
|
)
|
||||||
|
|
||||||
|
func generateWorkflow(task *runnerv1.Task) (*model.Workflow, string, error) {
|
||||||
|
workflow, err := model.ReadWorkflow(bytes.NewReader(task.WorkflowPayload))
|
||||||
|
if err != nil {
|
||||||
|
return nil, "", err
|
||||||
|
}
|
||||||
|
|
||||||
|
jobIDs := workflow.GetJobIDs()
|
||||||
|
if len(jobIDs) != 1 {
|
||||||
|
return nil, "", fmt.Errorf("multiple jobs found: %v", jobIDs)
|
||||||
|
}
|
||||||
|
jobID := jobIDs[0]
|
||||||
|
|
||||||
|
needJobIDs := make([]string, 0, len(task.Needs))
|
||||||
|
for id, need := range task.Needs {
|
||||||
|
needJobIDs = append(needJobIDs, id)
|
||||||
|
needJob := &model.Job{
|
||||||
|
Outputs: need.Outputs,
|
||||||
|
Result: strings.ToLower(strings.TrimPrefix(need.Result.String(), "RESULT_")),
|
||||||
|
}
|
||||||
|
workflow.Jobs[id] = needJob
|
||||||
|
}
|
||||||
|
sort.Strings(needJobIDs)
|
||||||
|
|
||||||
|
rawNeeds := yaml.Node{
|
||||||
|
Kind: yaml.SequenceNode,
|
||||||
|
Content: make([]*yaml.Node, 0, len(needJobIDs)),
|
||||||
|
}
|
||||||
|
for _, id := range needJobIDs {
|
||||||
|
rawNeeds.Content = append(rawNeeds.Content, &yaml.Node{
|
||||||
|
Kind: yaml.ScalarNode,
|
||||||
|
Value: id,
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
workflow.Jobs[jobID].RawNeeds = rawNeeds
|
||||||
|
|
||||||
|
return workflow, jobID, nil
|
||||||
|
}
|
96
internal/app/run/workflow_test.go
Normal file
96
internal/app/run/workflow_test.go
Normal file
|
@ -0,0 +1,96 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package run
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
|
||||||
|
"github.com/nektos/act/pkg/model"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
"gotest.tools/v3/assert"
|
||||||
|
)
|
||||||
|
|
||||||
|
func Test_generateWorkflow(t *testing.T) {
|
||||||
|
type args struct {
|
||||||
|
task *runnerv1.Task
|
||||||
|
}
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
args args
|
||||||
|
assert func(t *testing.T, wf *model.Workflow, err error)
|
||||||
|
want1 string
|
||||||
|
wantErr bool
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "has needs",
|
||||||
|
args: args{
|
||||||
|
task: &runnerv1.Task{
|
||||||
|
WorkflowPayload: []byte(`
|
||||||
|
name: Build and deploy
|
||||||
|
on: push
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
job9:
|
||||||
|
needs: build
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
- run: ./deploy --build ${{ needs.job1.outputs.output1 }}
|
||||||
|
- run: ./deploy --build ${{ needs.job2.outputs.output2 }}
|
||||||
|
`),
|
||||||
|
Needs: map[string]*runnerv1.TaskNeed{
|
||||||
|
"job1": {
|
||||||
|
Outputs: map[string]string{
|
||||||
|
"output1": "output1 value",
|
||||||
|
},
|
||||||
|
Result: runnerv1.Result_RESULT_SUCCESS,
|
||||||
|
},
|
||||||
|
"job2": {
|
||||||
|
Outputs: map[string]string{
|
||||||
|
"output2": "output2 value",
|
||||||
|
},
|
||||||
|
Result: runnerv1.Result_RESULT_SUCCESS,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
assert: func(t *testing.T, wf *model.Workflow, err error) {
|
||||||
|
assert.DeepEqual(t, wf.GetJob("job9").Needs(), []string{"job1", "job2"})
|
||||||
|
},
|
||||||
|
want1: "job9",
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "valid YAML syntax in top level env but wrong value type",
|
||||||
|
args: args{
|
||||||
|
task: &runnerv1.Task{
|
||||||
|
WorkflowPayload: []byte(`
|
||||||
|
on: push
|
||||||
|
|
||||||
|
env:
|
||||||
|
value: {{ }}
|
||||||
|
`),
|
||||||
|
},
|
||||||
|
},
|
||||||
|
assert: func(t *testing.T, wf *model.Workflow, err error) {
|
||||||
|
require.Nil(t, wf)
|
||||||
|
assert.ErrorContains(t, err, "cannot unmarshal")
|
||||||
|
},
|
||||||
|
wantErr: true,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
got, got1, err := generateWorkflow(tt.args.task)
|
||||||
|
if tt.wantErr {
|
||||||
|
require.Error(t, err)
|
||||||
|
} else {
|
||||||
|
require.NoError(t, err)
|
||||||
|
assert.Equal(t, got1, tt.want1)
|
||||||
|
}
|
||||||
|
tt.assert(t, got, err)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
|
@ -1,3 +1,6 @@
|
||||||
|
// Copyright 2022 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
package client
|
package client
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
@ -6,6 +9,8 @@ import (
|
||||||
)
|
)
|
||||||
|
|
||||||
// A Client manages communication with the runner.
|
// A Client manages communication with the runner.
|
||||||
|
//
|
||||||
|
//go:generate mockery --name Client
|
||||||
type Client interface {
|
type Client interface {
|
||||||
pingv1connect.PingServiceClient
|
pingv1connect.PingServiceClient
|
||||||
runnerv1connect.RunnerServiceClient
|
runnerv1connect.RunnerServiceClient
|
11
internal/pkg/client/header.go
Normal file
11
internal/pkg/client/header.go
Normal file
|
@ -0,0 +1,11 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package client
|
||||||
|
|
||||||
|
const (
|
||||||
|
UUIDHeader = "x-runner-uuid"
|
||||||
|
TokenHeader = "x-runner-token"
|
||||||
|
// Deprecated: could be removed after Gitea 1.20 released
|
||||||
|
VersionHeader = "x-runner-version"
|
||||||
|
)
|
|
@ -1,3 +1,6 @@
|
||||||
|
// Copyright 2022 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
package client
|
package client
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
@ -8,11 +11,10 @@ import (
|
||||||
|
|
||||||
"code.gitea.io/actions-proto-go/ping/v1/pingv1connect"
|
"code.gitea.io/actions-proto-go/ping/v1/pingv1connect"
|
||||||
"code.gitea.io/actions-proto-go/runner/v1/runnerv1connect"
|
"code.gitea.io/actions-proto-go/runner/v1/runnerv1connect"
|
||||||
"gitea.com/gitea/act_runner/core"
|
"connectrpc.com/connect"
|
||||||
"github.com/bufbuild/connect-go"
|
|
||||||
)
|
)
|
||||||
|
|
||||||
func getHttpClient(endpoint string, insecure bool) *http.Client {
|
func getHTTPClient(endpoint string, insecure bool) *http.Client {
|
||||||
if strings.HasPrefix(endpoint, "https://") && insecure {
|
if strings.HasPrefix(endpoint, "https://") && insecure {
|
||||||
return &http.Client{
|
return &http.Client{
|
||||||
Transport: &http.Transport{
|
Transport: &http.Transport{
|
||||||
|
@ -26,16 +28,20 @@ func getHttpClient(endpoint string, insecure bool) *http.Client {
|
||||||
}
|
}
|
||||||
|
|
||||||
// New returns a new runner client.
|
// New returns a new runner client.
|
||||||
func New(endpoint string, insecure bool, uuid, token string, opts ...connect.ClientOption) *HTTPClient {
|
func New(endpoint string, insecure bool, uuid, token, version string, opts ...connect.ClientOption) *HTTPClient {
|
||||||
baseURL := strings.TrimRight(endpoint, "/") + "/api/actions"
|
baseURL := strings.TrimRight(endpoint, "/") + "/api/actions"
|
||||||
|
|
||||||
opts = append(opts, connect.WithInterceptors(connect.UnaryInterceptorFunc(func(next connect.UnaryFunc) connect.UnaryFunc {
|
opts = append(opts, connect.WithInterceptors(connect.UnaryInterceptorFunc(func(next connect.UnaryFunc) connect.UnaryFunc {
|
||||||
return func(ctx context.Context, req connect.AnyRequest) (connect.AnyResponse, error) {
|
return func(ctx context.Context, req connect.AnyRequest) (connect.AnyResponse, error) {
|
||||||
if uuid != "" {
|
if uuid != "" {
|
||||||
req.Header().Set(core.UUIDHeader, uuid)
|
req.Header().Set(UUIDHeader, uuid)
|
||||||
}
|
}
|
||||||
if token != "" {
|
if token != "" {
|
||||||
req.Header().Set(core.TokenHeader, token)
|
req.Header().Set(TokenHeader, token)
|
||||||
|
}
|
||||||
|
// TODO: version will be removed from request header after Gitea 1.20 released.
|
||||||
|
if version != "" {
|
||||||
|
req.Header().Set(VersionHeader, version)
|
||||||
}
|
}
|
||||||
return next(ctx, req)
|
return next(ctx, req)
|
||||||
}
|
}
|
||||||
|
@ -43,12 +49,12 @@ func New(endpoint string, insecure bool, uuid, token string, opts ...connect.Cli
|
||||||
|
|
||||||
return &HTTPClient{
|
return &HTTPClient{
|
||||||
PingServiceClient: pingv1connect.NewPingServiceClient(
|
PingServiceClient: pingv1connect.NewPingServiceClient(
|
||||||
getHttpClient(endpoint, insecure),
|
getHTTPClient(endpoint, insecure),
|
||||||
baseURL,
|
baseURL,
|
||||||
opts...,
|
opts...,
|
||||||
),
|
),
|
||||||
RunnerServiceClient: runnerv1connect.NewRunnerServiceClient(
|
RunnerServiceClient: runnerv1connect.NewRunnerServiceClient(
|
||||||
getHttpClient(endpoint, insecure),
|
getHTTPClient(endpoint, insecure),
|
||||||
baseURL,
|
baseURL,
|
||||||
opts...,
|
opts...,
|
||||||
),
|
),
|
219
internal/pkg/client/mocks/Client.go
Normal file
219
internal/pkg/client/mocks/Client.go
Normal file
|
@ -0,0 +1,219 @@
|
||||||
|
// Code generated by mockery v2.26.1. DO NOT EDIT.
|
||||||
|
|
||||||
|
package mocks
|
||||||
|
|
||||||
|
import (
|
||||||
|
context "context"
|
||||||
|
|
||||||
|
connect "connectrpc.com/connect"
|
||||||
|
|
||||||
|
mock "github.com/stretchr/testify/mock"
|
||||||
|
|
||||||
|
pingv1 "code.gitea.io/actions-proto-go/ping/v1"
|
||||||
|
|
||||||
|
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Client is an autogenerated mock type for the Client type
|
||||||
|
type Client struct {
|
||||||
|
mock.Mock
|
||||||
|
}
|
||||||
|
|
||||||
|
// Address provides a mock function with given fields:
|
||||||
|
func (_m *Client) Address() string {
|
||||||
|
ret := _m.Called()
|
||||||
|
|
||||||
|
var r0 string
|
||||||
|
if rf, ok := ret.Get(0).(func() string); ok {
|
||||||
|
r0 = rf()
|
||||||
|
} else {
|
||||||
|
r0 = ret.Get(0).(string)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0
|
||||||
|
}
|
||||||
|
|
||||||
|
// Declare provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) Declare(_a0 context.Context, _a1 *connect.Request[runnerv1.DeclareRequest]) (*connect.Response[runnerv1.DeclareResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[runnerv1.DeclareResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) (*connect.Response[runnerv1.DeclareResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) *connect.Response[runnerv1.DeclareResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[runnerv1.DeclareResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
// FetchTask provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) FetchTask(_a0 context.Context, _a1 *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[runnerv1.FetchTaskResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) *connect.Response[runnerv1.FetchTaskResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[runnerv1.FetchTaskResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
// Insecure provides a mock function with given fields:
|
||||||
|
func (_m *Client) Insecure() bool {
|
||||||
|
ret := _m.Called()
|
||||||
|
|
||||||
|
var r0 bool
|
||||||
|
if rf, ok := ret.Get(0).(func() bool); ok {
|
||||||
|
r0 = rf()
|
||||||
|
} else {
|
||||||
|
r0 = ret.Get(0).(bool)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0
|
||||||
|
}
|
||||||
|
|
||||||
|
// Ping provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) Ping(_a0 context.Context, _a1 *connect.Request[pingv1.PingRequest]) (*connect.Response[pingv1.PingResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[pingv1.PingResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[pingv1.PingRequest]) (*connect.Response[pingv1.PingResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[pingv1.PingRequest]) *connect.Response[pingv1.PingResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[pingv1.PingResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[pingv1.PingRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
// Register provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) Register(_a0 context.Context, _a1 *connect.Request[runnerv1.RegisterRequest]) (*connect.Response[runnerv1.RegisterResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[runnerv1.RegisterResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) (*connect.Response[runnerv1.RegisterResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) *connect.Response[runnerv1.RegisterResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[runnerv1.RegisterResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdateLog provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) UpdateLog(_a0 context.Context, _a1 *connect.Request[runnerv1.UpdateLogRequest]) (*connect.Response[runnerv1.UpdateLogResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[runnerv1.UpdateLogResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) (*connect.Response[runnerv1.UpdateLogResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) *connect.Response[runnerv1.UpdateLogResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[runnerv1.UpdateLogResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
// UpdateTask provides a mock function with given fields: _a0, _a1
|
||||||
|
func (_m *Client) UpdateTask(_a0 context.Context, _a1 *connect.Request[runnerv1.UpdateTaskRequest]) (*connect.Response[runnerv1.UpdateTaskResponse], error) {
|
||||||
|
ret := _m.Called(_a0, _a1)
|
||||||
|
|
||||||
|
var r0 *connect.Response[runnerv1.UpdateTaskResponse]
|
||||||
|
var r1 error
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) (*connect.Response[runnerv1.UpdateTaskResponse], error)); ok {
|
||||||
|
return rf(_a0, _a1)
|
||||||
|
}
|
||||||
|
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) *connect.Response[runnerv1.UpdateTaskResponse]); ok {
|
||||||
|
r0 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
if ret.Get(0) != nil {
|
||||||
|
r0 = ret.Get(0).(*connect.Response[runnerv1.UpdateTaskResponse])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) error); ok {
|
||||||
|
r1 = rf(_a0, _a1)
|
||||||
|
} else {
|
||||||
|
r1 = ret.Error(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
return r0, r1
|
||||||
|
}
|
||||||
|
|
||||||
|
type mockConstructorTestingTNewClient interface {
|
||||||
|
mock.TestingT
|
||||||
|
Cleanup(func())
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
|
||||||
|
func NewClient(t mockConstructorTestingTNewClient) *Client {
|
||||||
|
mock := &Client{}
|
||||||
|
mock.Mock.Test(t)
|
||||||
|
|
||||||
|
t.Cleanup(func() { mock.AssertExpectations(t) })
|
||||||
|
|
||||||
|
return mock
|
||||||
|
}
|
100
internal/pkg/config/config.example.yaml
Normal file
100
internal/pkg/config/config.example.yaml
Normal file
|
@ -0,0 +1,100 @@
|
||||||
|
# Example configuration file, it's safe to copy this as the default config file without any modification.
|
||||||
|
|
||||||
|
# You don't have to copy this file to your instance,
|
||||||
|
# just run `./act_runner generate-config > config.yaml` to generate a config file.
|
||||||
|
|
||||||
|
log:
|
||||||
|
# The level of logging, can be trace, debug, info, warn, error, fatal
|
||||||
|
level: info
|
||||||
|
|
||||||
|
runner:
|
||||||
|
# Where to store the registration result.
|
||||||
|
file: .runner
|
||||||
|
# Execute how many tasks concurrently at the same time.
|
||||||
|
capacity: 1
|
||||||
|
# Extra environment variables to run jobs.
|
||||||
|
envs:
|
||||||
|
A_TEST_ENV_NAME_1: a_test_env_value_1
|
||||||
|
A_TEST_ENV_NAME_2: a_test_env_value_2
|
||||||
|
# Extra environment variables to run jobs from a file.
|
||||||
|
# It will be ignored if it's empty or the file doesn't exist.
|
||||||
|
env_file: .env
|
||||||
|
# The timeout for a job to be finished.
|
||||||
|
# Please note that the Forgejo instance also has a timeout (3h by default) for the job.
|
||||||
|
# So the job could be stopped by the Forgejo instance if it's timeout is shorter than this.
|
||||||
|
timeout: 3h
|
||||||
|
# The timeout for the runner to wait for running jobs to finish when
|
||||||
|
# shutting down because a TERM or INT signal has been received. Any
|
||||||
|
# running jobs that haven't finished after this timeout will be
|
||||||
|
# cancelled.
|
||||||
|
# If unset or zero the jobs will be cancelled immediately.
|
||||||
|
shutdown_timeout: 3h
|
||||||
|
# Whether skip verifying the TLS certificate of the instance.
|
||||||
|
insecure: false
|
||||||
|
# The timeout for fetching the job from the Forgejo instance.
|
||||||
|
fetch_timeout: 5s
|
||||||
|
# The interval for fetching the job from the Forgejo instance.
|
||||||
|
fetch_interval: 2s
|
||||||
|
# The interval for reporting the job status and logs to the Forgejo instance.
|
||||||
|
report_interval: 1s
|
||||||
|
# The labels of a runner are used to determine which jobs the runner can run, and how to run them.
|
||||||
|
# Like: ["macos-arm64:host", "ubuntu-latest:docker://node:20-bookworm", "ubuntu-22.04:docker://node:20-bookworm"]
|
||||||
|
# If it's empty when registering, it will ask for inputting labels.
|
||||||
|
# If it's empty when execute `deamon`, will use labels in `.runner` file.
|
||||||
|
labels: []
|
||||||
|
|
||||||
|
cache:
|
||||||
|
# Enable cache server to use actions/cache.
|
||||||
|
enabled: true
|
||||||
|
# The directory to store the cache data.
|
||||||
|
# If it's empty, the cache data will be stored in $HOME/.cache/actcache.
|
||||||
|
dir: ""
|
||||||
|
# The host of the cache server.
|
||||||
|
# It's not for the address to listen, but the address to connect from job containers.
|
||||||
|
# So 0.0.0.0 is a bad choice, leave it empty to detect automatically.
|
||||||
|
host: ""
|
||||||
|
# The port of the cache server.
|
||||||
|
# 0 means to use a random available port.
|
||||||
|
port: 0
|
||||||
|
# The external cache server URL. Valid only when enable is true.
|
||||||
|
# If it's specified, act_runner will use this URL as the ACTIONS_CACHE_URL rather than start a server by itself.
|
||||||
|
# The URL should generally end with "/".
|
||||||
|
external_server: ""
|
||||||
|
|
||||||
|
container:
|
||||||
|
# Specifies the network to which the container will connect.
|
||||||
|
# Could be host, bridge or the name of a custom network.
|
||||||
|
# If it's empty, create a network automatically.
|
||||||
|
network: ""
|
||||||
|
# Whether to create networks with IPv6 enabled. Requires the Docker daemon to be set up accordingly.
|
||||||
|
# Only takes effect if "network" is set to "".
|
||||||
|
enable_ipv6: false
|
||||||
|
# Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
|
||||||
|
privileged: false
|
||||||
|
# And other options to be used when the container is started (eg, --add-host=my.forgejo.url:host-gateway).
|
||||||
|
options:
|
||||||
|
# The parent directory of a job's working directory.
|
||||||
|
# If it's empty, /workspace will be used.
|
||||||
|
workdir_parent:
|
||||||
|
# Volumes (including bind mounts) can be mounted to containers. Glob syntax is supported, see https://github.com/gobwas/glob
|
||||||
|
# You can specify multiple volumes. If the sequence is empty, no volumes can be mounted.
|
||||||
|
# For example, if you only allow containers to mount the `data` volume and all the json files in `/src`, you should change the config to:
|
||||||
|
# valid_volumes:
|
||||||
|
# - data
|
||||||
|
# - /src/*.json
|
||||||
|
# If you want to allow any volume, please use the following configuration:
|
||||||
|
# valid_volumes:
|
||||||
|
# - '**'
|
||||||
|
valid_volumes: []
|
||||||
|
# overrides the docker client host with the specified one.
|
||||||
|
# If it's empty, act_runner will find an available docker host automatically.
|
||||||
|
# If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
|
||||||
|
# If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
|
||||||
|
docker_host: ""
|
||||||
|
# Pull docker image(s) even if already present
|
||||||
|
force_pull: false
|
||||||
|
|
||||||
|
host:
|
||||||
|
# The parent directory of a job's working directory.
|
||||||
|
# If it's empty, $HOME/.cache/act/ will be used.
|
||||||
|
workdir_parent:
|
166
internal/pkg/config/config.go
Normal file
166
internal/pkg/config/config.go
Normal file
|
@ -0,0 +1,166 @@
|
||||||
|
// Copyright 2022 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package config
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/joho/godotenv"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
"gopkg.in/yaml.v3"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Log represents the configuration for logging.
|
||||||
|
type Log struct {
|
||||||
|
Level string `yaml:"level"` // Level indicates the logging level.
|
||||||
|
}
|
||||||
|
|
||||||
|
// Runner represents the configuration for the runner.
|
||||||
|
type Runner struct {
|
||||||
|
File string `yaml:"file"` // File specifies the file path for the runner.
|
||||||
|
Capacity int `yaml:"capacity"` // Capacity specifies the capacity of the runner.
|
||||||
|
Envs map[string]string `yaml:"envs"` // Envs stores environment variables for the runner.
|
||||||
|
EnvFile string `yaml:"env_file"` // EnvFile specifies the path to the file containing environment variables for the runner.
|
||||||
|
Timeout time.Duration `yaml:"timeout"` // Timeout specifies the duration for runner timeout.
|
||||||
|
ShutdownTimeout time.Duration `yaml:"shutdown_timeout"` // ShutdownTimeout specifies the duration to wait for running jobs to complete during a shutdown of the runner.
|
||||||
|
Insecure bool `yaml:"insecure"` // Insecure indicates whether the runner operates in an insecure mode.
|
||||||
|
FetchTimeout time.Duration `yaml:"fetch_timeout"` // FetchTimeout specifies the timeout duration for fetching resources.
|
||||||
|
FetchInterval time.Duration `yaml:"fetch_interval"` // FetchInterval specifies the interval duration for fetching resources.
|
||||||
|
ReportInterval time.Duration `yaml:"report_interval"` // ReportInterval specifies the interval duration for reporting status and logs of a running job.
|
||||||
|
Labels []string `yaml:"labels"` // Labels specify the labels of the runner. Labels are declared on each startup
|
||||||
|
}
|
||||||
|
|
||||||
|
// Cache represents the configuration for caching.
|
||||||
|
type Cache struct {
|
||||||
|
Enabled *bool `yaml:"enabled"` // Enabled indicates whether caching is enabled. It is a pointer to distinguish between false and not set. If not set, it will be true.
|
||||||
|
Dir string `yaml:"dir"` // Dir specifies the directory path for caching.
|
||||||
|
Host string `yaml:"host"` // Host specifies the caching host.
|
||||||
|
Port uint16 `yaml:"port"` // Port specifies the caching port.
|
||||||
|
ExternalServer string `yaml:"external_server"` // ExternalServer specifies the URL of external cache server
|
||||||
|
}
|
||||||
|
|
||||||
|
// Container represents the configuration for the container.
|
||||||
|
type Container struct {
|
||||||
|
Network string `yaml:"network"` // Network specifies the network for the container.
|
||||||
|
NetworkMode string `yaml:"network_mode"` // Deprecated: use Network instead. Could be removed after Gitea 1.20
|
||||||
|
EnableIPv6 bool `yaml:"enable_ipv6"` // EnableIPv6 indicates whether the network is created with IPv6 enabled.
|
||||||
|
Privileged bool `yaml:"privileged"` // Privileged indicates whether the container runs in privileged mode.
|
||||||
|
Options string `yaml:"options"` // Options specifies additional options for the container.
|
||||||
|
WorkdirParent string `yaml:"workdir_parent"` // WorkdirParent specifies the parent directory for the container's working directory.
|
||||||
|
ValidVolumes []string `yaml:"valid_volumes"` // ValidVolumes specifies the volumes (including bind mounts) can be mounted to containers.
|
||||||
|
DockerHost string `yaml:"docker_host"` // DockerHost specifies the Docker host. It overrides the value specified in environment variable DOCKER_HOST.
|
||||||
|
ForcePull bool `yaml:"force_pull"` // Pull docker image(s) even if already present
|
||||||
|
}
|
||||||
|
|
||||||
|
// Host represents the configuration for the host.
|
||||||
|
type Host struct {
|
||||||
|
WorkdirParent string `yaml:"workdir_parent"` // WorkdirParent specifies the parent directory for the host's working directory.
|
||||||
|
}
|
||||||
|
|
||||||
|
// Config represents the overall configuration.
|
||||||
|
type Config struct {
|
||||||
|
Log Log `yaml:"log"` // Log represents the configuration for logging.
|
||||||
|
Runner Runner `yaml:"runner"` // Runner represents the configuration for the runner.
|
||||||
|
Cache Cache `yaml:"cache"` // Cache represents the configuration for caching.
|
||||||
|
Container Container `yaml:"container"` // Container represents the configuration for the container.
|
||||||
|
Host Host `yaml:"host"` // Host represents the configuration for the host.
|
||||||
|
}
|
||||||
|
|
||||||
|
// Tune the config settings accordingly to the Forgejo instance that will be used.
|
||||||
|
func (c *Config) Tune(instanceURL string) {
|
||||||
|
if instanceURL == "https://codeberg.org" {
|
||||||
|
if c.Runner.FetchInterval < 30*time.Second {
|
||||||
|
log.Info("The runner is configured to be used by a public instance, fetch interval is set to 30 seconds.")
|
||||||
|
c.Runner.FetchInterval = 30 * time.Second
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// LoadDefault returns the default configuration.
|
||||||
|
// If file is not empty, it will be used to load the configuration.
|
||||||
|
func LoadDefault(file string) (*Config, error) {
|
||||||
|
cfg := &Config{}
|
||||||
|
if file != "" {
|
||||||
|
content, err := os.ReadFile(file)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("open config file %q: %w", file, err)
|
||||||
|
}
|
||||||
|
if err := yaml.Unmarshal(content, cfg); err != nil {
|
||||||
|
return nil, fmt.Errorf("parse config file %q: %w", file, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
compatibleWithOldEnvs(file != "", cfg)
|
||||||
|
|
||||||
|
if cfg.Runner.EnvFile != "" {
|
||||||
|
if stat, err := os.Stat(cfg.Runner.EnvFile); err == nil && !stat.IsDir() {
|
||||||
|
envs, err := godotenv.Read(cfg.Runner.EnvFile)
|
||||||
|
if err != nil {
|
||||||
|
return nil, fmt.Errorf("read env file %q: %w", cfg.Runner.EnvFile, err)
|
||||||
|
}
|
||||||
|
if cfg.Runner.Envs == nil {
|
||||||
|
cfg.Runner.Envs = map[string]string{}
|
||||||
|
}
|
||||||
|
for k, v := range envs {
|
||||||
|
cfg.Runner.Envs[k] = v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if cfg.Log.Level == "" {
|
||||||
|
cfg.Log.Level = "info"
|
||||||
|
}
|
||||||
|
if cfg.Runner.File == "" {
|
||||||
|
cfg.Runner.File = ".runner"
|
||||||
|
}
|
||||||
|
if cfg.Runner.Capacity <= 0 {
|
||||||
|
cfg.Runner.Capacity = 1
|
||||||
|
}
|
||||||
|
if cfg.Runner.Timeout <= 0 {
|
||||||
|
cfg.Runner.Timeout = 3 * time.Hour
|
||||||
|
}
|
||||||
|
if cfg.Cache.Enabled == nil {
|
||||||
|
b := true
|
||||||
|
cfg.Cache.Enabled = &b
|
||||||
|
}
|
||||||
|
if *cfg.Cache.Enabled {
|
||||||
|
if cfg.Cache.Dir == "" {
|
||||||
|
home, _ := os.UserHomeDir()
|
||||||
|
cfg.Cache.Dir = filepath.Join(home, ".cache", "actcache")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if cfg.Container.WorkdirParent == "" {
|
||||||
|
cfg.Container.WorkdirParent = "workspace"
|
||||||
|
}
|
||||||
|
if cfg.Host.WorkdirParent == "" {
|
||||||
|
home, _ := os.UserHomeDir()
|
||||||
|
cfg.Host.WorkdirParent = filepath.Join(home, ".cache", "act")
|
||||||
|
}
|
||||||
|
if cfg.Runner.FetchTimeout <= 0 {
|
||||||
|
cfg.Runner.FetchTimeout = 5 * time.Second
|
||||||
|
}
|
||||||
|
if cfg.Runner.FetchInterval <= 0 {
|
||||||
|
cfg.Runner.FetchInterval = 2 * time.Second
|
||||||
|
}
|
||||||
|
if cfg.Runner.ReportInterval <= 0 {
|
||||||
|
cfg.Runner.ReportInterval = time.Second
|
||||||
|
}
|
||||||
|
|
||||||
|
// although `container.network_mode` will be deprecated, but we have to be compatible with it for now.
|
||||||
|
if cfg.Container.NetworkMode != "" && cfg.Container.Network == "" {
|
||||||
|
log.Warn("You are trying to use deprecated configuration item of `container.network_mode`, please use `container.network` instead.")
|
||||||
|
if cfg.Container.NetworkMode == "bridge" {
|
||||||
|
// Previously, if the value of `container.network_mode` is `bridge`, we will create a new network for job.
|
||||||
|
// But “bridge” is easily confused with the bridge network created by Docker by default.
|
||||||
|
// So we set the value of `container.network` to empty string to make `act_runner` automatically create a new network for job.
|
||||||
|
cfg.Container.Network = ""
|
||||||
|
} else {
|
||||||
|
cfg.Container.Network = cfg.Container.NetworkMode
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return cfg, nil
|
||||||
|
}
|
37
internal/pkg/config/config_test.go
Normal file
37
internal/pkg/config/config_test.go
Normal file
|
@ -0,0 +1,37 @@
|
||||||
|
// Copyright 2024 The Forgejo Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package config
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/stretchr/testify/assert"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestConfigTune(t *testing.T) {
|
||||||
|
c := &Config{
|
||||||
|
Runner: Runner{},
|
||||||
|
}
|
||||||
|
|
||||||
|
t.Run("Public instance tuning", func(t *testing.T) {
|
||||||
|
c.Runner.FetchInterval = 60 * time.Second
|
||||||
|
c.Tune("https://codeberg.org")
|
||||||
|
assert.EqualValues(t, 60*time.Second, c.Runner.FetchInterval)
|
||||||
|
|
||||||
|
c.Runner.FetchInterval = 2 * time.Second
|
||||||
|
c.Tune("https://codeberg.org")
|
||||||
|
assert.EqualValues(t, 30*time.Second, c.Runner.FetchInterval)
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("Non-public instance tuning", func(t *testing.T) {
|
||||||
|
c.Runner.FetchInterval = 60 * time.Second
|
||||||
|
c.Tune("https://example.com")
|
||||||
|
assert.EqualValues(t, 60*time.Second, c.Runner.FetchInterval)
|
||||||
|
|
||||||
|
c.Runner.FetchInterval = 2 * time.Second
|
||||||
|
c.Tune("https://codeberg.com")
|
||||||
|
assert.EqualValues(t, 2*time.Second, c.Runner.FetchInterval)
|
||||||
|
})
|
||||||
|
}
|
62
internal/pkg/config/deprecated.go
Normal file
62
internal/pkg/config/deprecated.go
Normal file
|
@ -0,0 +1,62 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package config
|
||||||
|
|
||||||
|
import (
|
||||||
|
"os"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Deprecated: could be removed in the future. TODO: remove it when Gitea 1.20.0 is released.
|
||||||
|
// Be compatible with old envs.
|
||||||
|
func compatibleWithOldEnvs(fileUsed bool, cfg *Config) {
|
||||||
|
handleEnv := func(key string) (string, bool) {
|
||||||
|
if v, ok := os.LookupEnv(key); ok {
|
||||||
|
if fileUsed {
|
||||||
|
log.Warnf("env %s has been ignored because config file is used", key)
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
log.Warnf("env %s will be deprecated, please use config file instead", key)
|
||||||
|
return v, true
|
||||||
|
}
|
||||||
|
return "", false
|
||||||
|
}
|
||||||
|
|
||||||
|
if v, ok := handleEnv("GITEA_DEBUG"); ok {
|
||||||
|
if b, _ := strconv.ParseBool(v); b {
|
||||||
|
cfg.Log.Level = "debug"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if v, ok := handleEnv("GITEA_TRACE"); ok {
|
||||||
|
if b, _ := strconv.ParseBool(v); b {
|
||||||
|
cfg.Log.Level = "trace"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if v, ok := handleEnv("GITEA_RUNNER_CAPACITY"); ok {
|
||||||
|
if i, _ := strconv.Atoi(v); i > 0 {
|
||||||
|
cfg.Runner.Capacity = i
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if v, ok := handleEnv("GITEA_RUNNER_FILE"); ok {
|
||||||
|
cfg.Runner.File = v
|
||||||
|
}
|
||||||
|
if v, ok := handleEnv("GITEA_RUNNER_ENVIRON"); ok {
|
||||||
|
splits := strings.Split(v, ",")
|
||||||
|
if cfg.Runner.Envs == nil {
|
||||||
|
cfg.Runner.Envs = map[string]string{}
|
||||||
|
}
|
||||||
|
for _, split := range splits {
|
||||||
|
kv := strings.SplitN(split, ":", 2)
|
||||||
|
if len(kv) == 2 && kv[0] != "" {
|
||||||
|
cfg.Runner.Envs[kv[0]] = kv[1]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if v, ok := handleEnv("GITEA_RUNNER_ENV_FILE"); ok {
|
||||||
|
cfg.Runner.EnvFile = v
|
||||||
|
}
|
||||||
|
}
|
9
internal/pkg/config/embed.go
Normal file
9
internal/pkg/config/embed.go
Normal file
|
@ -0,0 +1,9 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package config
|
||||||
|
|
||||||
|
import _ "embed"
|
||||||
|
|
||||||
|
//go:embed config.example.yaml
|
||||||
|
var Example []byte
|
54
internal/pkg/config/registration.go
Normal file
54
internal/pkg/config/registration.go
Normal file
|
@ -0,0 +1,54 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package config
|
||||||
|
|
||||||
|
import (
|
||||||
|
"encoding/json"
|
||||||
|
"os"
|
||||||
|
)
|
||||||
|
|
||||||
|
const registrationWarning = "This file is automatically generated by act-runner. Do not edit it manually unless you know what you are doing. Removing this file will cause act runner to re-register as a new runner."
|
||||||
|
|
||||||
|
// Registration is the registration information for a runner
|
||||||
|
type Registration struct {
|
||||||
|
Warning string `json:"WARNING"` // Warning message to display, it's always the registrationWarning constant
|
||||||
|
|
||||||
|
ID int64 `json:"id"`
|
||||||
|
UUID string `json:"uuid"`
|
||||||
|
Name string `json:"name"`
|
||||||
|
Token string `json:"token"`
|
||||||
|
Address string `json:"address"`
|
||||||
|
Labels []string `json:"labels"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func LoadRegistration(file string) (*Registration, error) {
|
||||||
|
f, err := os.Open(file)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
defer f.Close()
|
||||||
|
|
||||||
|
var reg Registration
|
||||||
|
if err := json.NewDecoder(f).Decode(®); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
reg.Warning = ""
|
||||||
|
|
||||||
|
return ®, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func SaveRegistration(file string, reg *Registration) error {
|
||||||
|
f, err := os.Create(file)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer f.Close()
|
||||||
|
|
||||||
|
reg.Warning = registrationWarning
|
||||||
|
|
||||||
|
enc := json.NewEncoder(f)
|
||||||
|
enc.SetIndent("", " ")
|
||||||
|
return enc.Encode(reg)
|
||||||
|
}
|
5
internal/pkg/envcheck/doc.go
Normal file
5
internal/pkg/envcheck/doc.go
Normal file
|
@ -0,0 +1,5 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
// Package envcheck provides a simple way to check if the environment is ready to run jobs.
|
||||||
|
package envcheck
|
34
internal/pkg/envcheck/docker.go
Normal file
34
internal/pkg/envcheck/docker.go
Normal file
|
@ -0,0 +1,34 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package envcheck
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
|
||||||
|
"github.com/docker/docker/client"
|
||||||
|
)
|
||||||
|
|
||||||
|
func CheckIfDockerRunning(ctx context.Context, configDockerHost string) error {
|
||||||
|
opts := []client.Opt{
|
||||||
|
client.FromEnv,
|
||||||
|
}
|
||||||
|
|
||||||
|
if configDockerHost != "" {
|
||||||
|
opts = append(opts, client.WithHost(configDockerHost))
|
||||||
|
}
|
||||||
|
|
||||||
|
cli, err := client.NewClientWithOpts(opts...)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
defer cli.Close()
|
||||||
|
|
||||||
|
_, err = cli.Ping(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("cannot ping the docker daemon. is it running? %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
109
internal/pkg/labels/labels.go
Normal file
109
internal/pkg/labels/labels.go
Normal file
|
@ -0,0 +1,109 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package labels
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"strings"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
SchemeHost = "host"
|
||||||
|
SchemeDocker = "docker"
|
||||||
|
SchemeLXC = "lxc"
|
||||||
|
)
|
||||||
|
|
||||||
|
type Label struct {
|
||||||
|
Name string
|
||||||
|
Schema string
|
||||||
|
Arg string
|
||||||
|
}
|
||||||
|
|
||||||
|
func Parse(str string) (*Label, error) {
|
||||||
|
splits := strings.SplitN(str, ":", 3)
|
||||||
|
label := &Label{
|
||||||
|
Name: splits[0],
|
||||||
|
Schema: "host",
|
||||||
|
Arg: "",
|
||||||
|
}
|
||||||
|
if len(splits) >= 2 {
|
||||||
|
label.Schema = splits[1]
|
||||||
|
}
|
||||||
|
if len(splits) >= 3 {
|
||||||
|
label.Arg = splits[2]
|
||||||
|
}
|
||||||
|
if label.Schema != SchemeHost && label.Schema != SchemeDocker && label.Schema != SchemeLXC {
|
||||||
|
return nil, fmt.Errorf("unsupported schema: %s", label.Schema)
|
||||||
|
}
|
||||||
|
return label, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
type Labels []*Label
|
||||||
|
|
||||||
|
func (l Labels) RequireDocker() bool {
|
||||||
|
for _, label := range l {
|
||||||
|
if label.Schema == SchemeDocker {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
func (l Labels) PickPlatform(runsOn []string) string {
|
||||||
|
platforms := make(map[string]string, len(l))
|
||||||
|
for _, label := range l {
|
||||||
|
switch label.Schema {
|
||||||
|
case SchemeDocker:
|
||||||
|
// "//" will be ignored
|
||||||
|
platforms[label.Name] = strings.TrimPrefix(label.Arg, "//")
|
||||||
|
case SchemeHost:
|
||||||
|
platforms[label.Name] = "-self-hosted"
|
||||||
|
case SchemeLXC:
|
||||||
|
platforms[label.Name] = "lxc:" + strings.TrimPrefix(label.Arg, "//")
|
||||||
|
default:
|
||||||
|
// It should not happen, because Parse has checked it.
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
}
|
||||||
|
for _, v := range runsOn {
|
||||||
|
if v, ok := platforms[v]; ok {
|
||||||
|
return v
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// TODO: support multiple labels
|
||||||
|
// like:
|
||||||
|
// ["ubuntu-22.04"] => "ubuntu:22.04"
|
||||||
|
// ["with-gpu"] => "linux:with-gpu"
|
||||||
|
// ["ubuntu-22.04", "with-gpu"] => "ubuntu:22.04_with-gpu"
|
||||||
|
|
||||||
|
// return default.
|
||||||
|
// So the runner receives a task with a label that the runner doesn't have,
|
||||||
|
// it happens when the user have edited the label of the runner in the web UI.
|
||||||
|
// TODO: it may be not correct, what if the runner is used as host mode only?
|
||||||
|
return "node:20-bullseye"
|
||||||
|
}
|
||||||
|
|
||||||
|
func (l Labels) Names() []string {
|
||||||
|
names := make([]string, 0, len(l))
|
||||||
|
for _, label := range l {
|
||||||
|
names = append(names, label.Name)
|
||||||
|
}
|
||||||
|
return names
|
||||||
|
}
|
||||||
|
|
||||||
|
func (l Labels) ToStrings() []string {
|
||||||
|
ls := make([]string, 0, len(l))
|
||||||
|
for _, label := range l {
|
||||||
|
lbl := label.Name
|
||||||
|
if label.Schema != "" {
|
||||||
|
lbl += ":" + label.Schema
|
||||||
|
if label.Arg != "" {
|
||||||
|
lbl += ":" + label.Arg
|
||||||
|
}
|
||||||
|
}
|
||||||
|
ls = append(ls, lbl)
|
||||||
|
}
|
||||||
|
return ls
|
||||||
|
}
|
63
internal/pkg/labels/labels_test.go
Normal file
63
internal/pkg/labels/labels_test.go
Normal file
|
@ -0,0 +1,63 @@
|
||||||
|
// Copyright 2023 The Gitea Authors. All rights reserved.
|
||||||
|
// SPDX-License-Identifier: MIT
|
||||||
|
|
||||||
|
package labels
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
"gotest.tools/v3/assert"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestParse(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
args string
|
||||||
|
want *Label
|
||||||
|
wantErr bool
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
args: "ubuntu:docker://node:18",
|
||||||
|
want: &Label{
|
||||||
|
Name: "ubuntu",
|
||||||
|
Schema: "docker",
|
||||||
|
Arg: "//node:18",
|
||||||
|
},
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
args: "ubuntu:host",
|
||||||
|
want: &Label{
|
||||||
|
Name: "ubuntu",
|
||||||
|
Schema: "host",
|
||||||
|
Arg: "",
|
||||||
|
},
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
args: "ubuntu",
|
||||||
|
want: &Label{
|
||||||
|
Name: "ubuntu",
|
||||||
|
Schema: "host",
|
||||||
|
Arg: "",
|
||||||
|
},
|
||||||
|
wantErr: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
args: "ubuntu:vm:ubuntu-18.04",
|
||||||
|
want: nil,
|
||||||
|
wantErr: true,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for _, tt := range tests {
|
||||||
|
t.Run(tt.args, func(t *testing.T) {
|
||||||
|
got, err := Parse(tt.args)
|
||||||
|
if tt.wantErr {
|
||||||
|
require.Error(t, err)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
require.NoError(t, err)
|
||||||
|
assert.DeepEqual(t, got, tt.want)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
|
@@ -1,20 +1,24 @@
-package runtime
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
+package report
+
 import (
 	"context"
 	"fmt"
+	"regexp"
 	"strings"
 	"sync"
 	"time"
 
 	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
-	"gitea.com/gitea/act_runner/client"
+	"connectrpc.com/connect"
 
 	retry "github.com/avast/retry-go/v4"
-	"github.com/bufbuild/connect-go"
 	log "github.com/sirupsen/logrus"
 	"google.golang.org/protobuf/proto"
 	"google.golang.org/protobuf/types/known/timestamppb"
+
+	"gitea.com/gitea/act_runner/internal/pkg/client"
 )
 
 type Reporter struct {
@@ -28,33 +32,51 @@ type Reporter struct {
 	logOffset   int
 	logRows     []*runnerv1.LogRow
 	logReplacer *strings.Replacer
+	oldnew      []string
+	reportInterval time.Duration
 
 	state   *runnerv1.TaskState
-	stateM  sync.RWMutex
+	stateMu sync.RWMutex
+	outputs sync.Map
+
+	debugOutputEnabled  bool
+	stopCommandEndToken string
 }
 
-func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task) *Reporter {
+func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task, reportInterval time.Duration) *Reporter {
 	var oldnew []string
 	if v := task.Context.Fields["token"].GetStringValue(); v != "" {
 		oldnew = append(oldnew, v, "***")
 	}
+	if v := task.Context.Fields["gitea_runtime_token"].GetStringValue(); v != "" {
+		oldnew = append(oldnew, v, "***")
+	}
 	for _, v := range task.Secrets {
 		oldnew = append(oldnew, v, "***")
 	}
 
-	return &Reporter{
+	rv := &Reporter{
 		ctx:    ctx,
 		cancel: cancel,
 		client: client,
+		oldnew: oldnew,
+		reportInterval: reportInterval,
 		logReplacer: strings.NewReplacer(oldnew...),
 		state: &runnerv1.TaskState{
 			Id: task.Id,
 		},
 	}
+
+	if task.Secrets["ACTIONS_STEP_DEBUG"] == "true" {
+		rv.debugOutputEnabled = true
+	}
+
+	return rv
 }
 
 func (r *Reporter) ResetSteps(l int) {
-	r.stateM.Lock()
-	defer r.stateM.Unlock()
+	r.stateMu.Lock()
+	defer r.stateMu.Unlock()
 	for i := 0; i < l; i++ {
 		r.state.Steps = append(r.state.Steps, &runnerv1.StepState{
 			Id: int64(i),
@@ -66,9 +88,16 @@ func (r *Reporter) Levels() []log.Level {
 	return log.AllLevels
 }
 
+func appendIfNotNil[T any](s []*T, v *T) []*T {
+	if v != nil {
+		return append(s, v)
+	}
+	return s
+}
+
 func (r *Reporter) Fire(entry *log.Entry) error {
-	r.stateM.Lock()
-	defer r.stateM.Unlock()
+	r.stateMu.Lock()
+	defer r.stateMu.Unlock()
 
 	log.WithFields(entry.Data).Trace(entry.Message)
 
@@ -87,25 +116,28 @@ func (r *Reporter) Fire(entry *log.Entry) error {
 				for _, s := range r.state.Steps {
 					if s.Result == runnerv1.Result_RESULT_UNSPECIFIED {
 						s.Result = runnerv1.Result_RESULT_CANCELLED
+						if jobResult == runnerv1.Result_RESULT_SKIPPED {
+							s.Result = runnerv1.Result_RESULT_SKIPPED
+						}
 					}
 				}
 			}
 		}
 		if !r.duringSteps() {
-			r.logRows = append(r.logRows, r.parseLogRow(entry))
+			r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
 		}
 		return nil
 	}
 
 	var step *runnerv1.StepState
 	if v, ok := entry.Data["stepNumber"]; ok {
-		if v, ok := v.(int); ok {
+		if v, ok := v.(int); ok && len(r.state.Steps) > v {
 			step = r.state.Steps[v]
 		}
 	}
 	if step == nil {
 		if !r.duringSteps() {
-			r.logRows = append(r.logRows, r.parseLogRow(entry))
+			r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
 		}
 		return nil
 	}
@@ -115,14 +147,16 @@ func (r *Reporter) Fire(entry *log.Entry) error {
 	}
 	if v, ok := entry.Data["raw_output"]; ok {
 		if rawOutput, ok := v.(bool); ok && rawOutput {
+			if row := r.parseLogRow(entry); row != nil {
 				if step.LogLength == 0 {
 					step.LogIndex = int64(r.logOffset + len(r.logRows))
 				}
 				step.LogLength++
-				r.logRows = append(r.logRows, r.parseLogRow(entry))
+				r.logRows = append(r.logRows, row)
+			}
 		}
 	} else if !r.duringSteps() {
-		r.logRows = append(r.logRows, r.parseLogRow(entry))
+		r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
 	}
 	if v, ok := entry.Data["stepResult"]; ok {
 		if stepResult, ok := r.parseResult(v); ok {
@@ -148,13 +182,17 @@ func (r *Reporter) RunDaemon() {
 	_ = r.ReportLog(false)
 	_ = r.ReportState()
 
-	time.AfterFunc(time.Second, r.RunDaemon)
+	time.AfterFunc(r.reportInterval, r.RunDaemon)
 }
 
 func (r *Reporter) Logf(format string, a ...interface{}) {
-	r.stateM.Lock()
-	defer r.stateM.Unlock()
+	r.stateMu.Lock()
+	defer r.stateMu.Unlock()
 
+	r.logf(format, a...)
+}
+
+func (r *Reporter) logf(format string, a ...interface{}) {
 	if !r.duringSteps() {
 		r.logRows = append(r.logRows, &runnerv1.LogRow{
 			Time: timestamppb.Now(),
@@ -163,10 +201,30 @@ func (r *Reporter) Logf(format string, a ...interface{}) {
 	}
 }
 
+func (r *Reporter) SetOutputs(outputs map[string]string) {
+	r.stateMu.Lock()
+	defer r.stateMu.Unlock()
+
+	for k, v := range outputs {
+		if len(k) > 255 {
+			r.logf("ignore output because the key is too long: %q", k)
+			continue
+		}
+		if l := len(v); l > 1024*1024 {
+			log.Println("ignore output because the value is too long:", k, l)
+			r.logf("ignore output because the value %q is too long: %d", k, l)
+		}
+		if _, ok := r.outputs.Load(k); ok {
+			continue
+		}
+		r.outputs.Store(k, v)
+	}
+}
+
 func (r *Reporter) Close(lastWords string) error {
 	r.closed = true
 
-	r.stateM.Lock()
+	r.stateMu.Lock()
 	if r.state.Result == runnerv1.Result_RESULT_UNSPECIFIED {
 		if lastWords == "" {
 			lastWords = "Early termination"
@@ -176,18 +234,19 @@ func (r *Reporter) Close(lastWords string) error {
 				v.Result = runnerv1.Result_RESULT_CANCELLED
 			}
 		}
+		r.state.Result = runnerv1.Result_RESULT_FAILURE
 		r.logRows = append(r.logRows, &runnerv1.LogRow{
 			Time:    timestamppb.Now(),
 			Content: lastWords,
 		})
-		return nil
+		r.state.StoppedAt = timestamppb.Now()
 	} else if lastWords != "" {
 		r.logRows = append(r.logRows, &runnerv1.LogRow{
 			Time:    timestamppb.Now(),
 			Content: lastWords,
 		})
 	}
-	r.stateM.Unlock()
+	r.stateMu.Unlock()
 
 	return retry.Do(func() error {
 		if err := r.ReportLog(true); err != nil {
@@ -201,9 +260,9 @@ func (r *Reporter) ReportLog(noMore bool) error {
 	r.clientM.Lock()
 	defer r.clientM.Unlock()
 
-	r.stateM.RLock()
+	r.stateMu.RLock()
 	rows := r.logRows
-	r.stateM.RUnlock()
+	r.stateMu.RUnlock()
 
 	resp, err := r.client.UpdateLog(r.ctx, connect.NewRequest(&runnerv1.UpdateLogRequest{
 		TaskId: r.state.Id,
@@ -220,10 +279,10 @@ func (r *Reporter) ReportLog(noMore bool) error {
 		return fmt.Errorf("submitted logs are lost")
 	}
 
-	r.stateM.Lock()
+	r.stateMu.Lock()
 	r.logRows = r.logRows[ack-r.logOffset:]
 	r.logOffset = ack
-	r.stateM.Unlock()
+	r.stateMu.Unlock()
 
 	if noMore && ack < r.logOffset+len(rows) {
 		return fmt.Errorf("not all logs are submitted")
@@ -236,21 +295,45 @@ func (r *Reporter) ReportState() error {
 	r.clientM.Lock()
 	defer r.clientM.Unlock()
 
-	r.stateM.RLock()
+	r.stateMu.RLock()
 	state := proto.Clone(r.state).(*runnerv1.TaskState)
-	r.stateM.RUnlock()
+	r.stateMu.RUnlock()
 
+	outputs := make(map[string]string)
+	r.outputs.Range(func(k, v interface{}) bool {
+		if val, ok := v.(string); ok {
+			outputs[k.(string)] = val
+		}
+		return true
+	})
+
 	resp, err := r.client.UpdateTask(r.ctx, connect.NewRequest(&runnerv1.UpdateTaskRequest{
 		State:   state,
+		Outputs: outputs,
 	}))
 	if err != nil {
 		return err
 	}
 
+	for _, k := range resp.Msg.SentOutputs {
+		r.outputs.Store(k, struct{}{})
+	}
+
 	if resp.Msg.State != nil && resp.Msg.State.Result == runnerv1.Result_RESULT_CANCELLED {
 		r.cancel()
 	}
 
+	var noSent []string
+	r.outputs.Range(func(k, v interface{}) bool {
+		if _, ok := v.(string); ok {
+			noSent = append(noSent, k.(string))
+		}
+		return true
+	})
+	if len(noSent) > 0 {
+		return fmt.Errorf("there are still outputs that have not been sent: %v", noSent)
+	}
+
 	return nil
 }
@@ -284,11 +367,71 @@ func (r *Reporter) parseResult(result interface{}) (runnerv1.Result, bool) {
 	return ret, ok
 }
 
+var cmdRegex = regexp.MustCompile(`^::([^ :]+)( .*)?::(.*)$`)
+
+func (r *Reporter) handleCommand(originalContent, command, parameters, value string) *string {
+	if r.stopCommandEndToken != "" && command != r.stopCommandEndToken {
+		return &originalContent
+	}
+
+	switch command {
+	case "add-mask":
+		r.addMask(value)
+		return nil
+	case "debug":
+		if r.debugOutputEnabled {
+			return &value
+		}
+		return nil
+
+	case "notice":
+		// Not implemented yet, so just return the original content.
+		return &originalContent
+	case "warning":
+		// Not implemented yet, so just return the original content.
+		return &originalContent
+	case "error":
+		// Not implemented yet, so just return the original content.
+		return &originalContent
+	case "group":
+		// Rewriting into ##[] syntax which the frontend understands
+		content := "##[group]" + value
+		return &content
+	case "endgroup":
+		// Ditto
+		content := "##[endgroup]"
+		return &content
+	case "stop-commands":
+		r.stopCommandEndToken = value
+		return nil
+	case r.stopCommandEndToken:
+		r.stopCommandEndToken = ""
+		return nil
+	}
+	return &originalContent
+}
+
 func (r *Reporter) parseLogRow(entry *log.Entry) *runnerv1.LogRow {
 	content := strings.TrimRightFunc(entry.Message, func(r rune) bool { return r == '\r' || r == '\n' })
 
+	matches := cmdRegex.FindStringSubmatch(content)
+	if matches != nil {
+		if output := r.handleCommand(content, matches[1], matches[2], matches[3]); output != nil {
+			content = *output
+		} else {
+			return nil
+		}
+	}
+
 	content = r.logReplacer.Replace(content)
 
 	return &runnerv1.LogRow{
 		Time:    timestamppb.New(entry.Time),
-		Content: content,
+		Content: strings.ToValidUTF8(content, "?"),
 	}
 }
+
+func (r *Reporter) addMask(msg string) {
+	r.oldnew = append(r.oldnew, msg, "***")
+	r.logReplacer = strings.NewReplacer(r.oldnew...)
+}
198 internal/pkg/report/reporter_test.go Normal file
@@ -0,0 +1,198 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package report

import (
	"context"
	"strings"
	"testing"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	connect_go "connectrpc.com/connect"
	log "github.com/sirupsen/logrus"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
	"google.golang.org/protobuf/types/known/structpb"

	"gitea.com/gitea/act_runner/internal/pkg/client/mocks"
)

func TestReporter_parseLogRow(t *testing.T) {
	tests := []struct {
		name               string
		debugOutputEnabled bool
		args               []string
		want               []string
	}{
		{
			"No command", false,
			[]string{"Hello, world!"},
			[]string{"Hello, world!"},
		},
		{
			"Add-mask", false,
			[]string{
				"foo mysecret bar",
				"::add-mask::mysecret",
				"foo mysecret bar",
			},
			[]string{
				"foo mysecret bar",
				"<nil>",
				"foo *** bar",
			},
		},
		{
			"Debug enabled", true,
			[]string{
				"::debug::GitHub Actions runtime token access controls",
			},
			[]string{
				"GitHub Actions runtime token access controls",
			},
		},
		{
			"Debug not enabled", false,
			[]string{
				"::debug::GitHub Actions runtime token access controls",
			},
			[]string{
				"<nil>",
			},
		},
		{
			"notice", false,
			[]string{
				"::notice file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
			[]string{
				"::notice file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
		},
		{
			"warning", false,
			[]string{
				"::warning file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
			[]string{
				"::warning file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
		},
		{
			"error", false,
			[]string{
				"::error file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
			[]string{
				"::error file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
			},
		},
		{
			"group", false,
			[]string{
				"::group::",
				"::endgroup::",
			},
			[]string{
				"##[group]",
				"##[endgroup]",
			},
		},
		{
			"stop-commands", false,
			[]string{
				"::add-mask::foo",
				"::stop-commands::myverycoolstoptoken",
				"::add-mask::bar",
				"::debug::Stuff",
				"myverycoolstoptoken",
				"::add-mask::baz",
				"::myverycoolstoptoken::",
				"::add-mask::wibble",
				"foo bar baz wibble",
			},
			[]string{
				"<nil>",
				"<nil>",
				"::add-mask::bar",
				"::debug::Stuff",
				"myverycoolstoptoken",
				"::add-mask::baz",
				"<nil>",
				"<nil>",
				"*** bar baz ***",
			},
		},
		{
			"unknown command", false,
			[]string{
				"::set-mask::foo",
			},
			[]string{
				"::set-mask::foo",
			},
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			r := &Reporter{
				logReplacer:        strings.NewReplacer(),
				debugOutputEnabled: tt.debugOutputEnabled,
			}
			for idx, arg := range tt.args {
				rv := r.parseLogRow(&log.Entry{Message: arg})
				got := "<nil>"

				if rv != nil {
					got = rv.Content
				}

				assert.Equal(t, tt.want[idx], got)
			}
		})
	}
}

func TestReporter_Fire(t *testing.T) {
	t.Run("ignore command lines", func(t *testing.T) {
		client := mocks.NewClient(t)
		client.On("UpdateLog", mock.Anything, mock.Anything).Return(func(_ context.Context, req *connect_go.Request[runnerv1.UpdateLogRequest]) (*connect_go.Response[runnerv1.UpdateLogResponse], error) {
			t.Logf("Received UpdateLog: %s", req.Msg.String())
			return connect_go.NewResponse(&runnerv1.UpdateLogResponse{
				AckIndex: req.Msg.Index + int64(len(req.Msg.Rows)),
			}), nil
		})
		client.On("UpdateTask", mock.Anything, mock.Anything).Return(func(_ context.Context, req *connect_go.Request[runnerv1.UpdateTaskRequest]) (*connect_go.Response[runnerv1.UpdateTaskResponse], error) {
			t.Logf("Received UpdateTask: %s", req.Msg.String())
			return connect_go.NewResponse(&runnerv1.UpdateTaskResponse{}), nil
		})
		ctx, cancel := context.WithCancel(context.Background())
		taskCtx, err := structpb.NewStruct(map[string]interface{}{})
		require.NoError(t, err)
		reporter := NewReporter(ctx, cancel, client, &runnerv1.Task{
			Context: taskCtx,
		}, time.Second)
		defer func() {
			assert.NoError(t, reporter.Close(""))
		}()
		reporter.ResetSteps(5)

		dataStep0 := map[string]interface{}{
			"stage":      "Main",
			"stepNumber": 0,
			"raw_output": true,
		}

		assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
		assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
		assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
		assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
		assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
		assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))

		assert.Equal(t, int64(3), reporter.state.Steps[0].LogLength)
	})
}
11 internal/pkg/ver/version.go Normal file
@@ -0,0 +1,11 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT

package ver

// go build -ldflags "-X gitea.com/gitea/act_runner/internal/pkg/ver.version=1.2.3"
var version = "dev"

func Version() string {
	return version
}
27 main.go
@@ -1,34 +1,19 @@
+// Copyright 2022 The Gitea Authors. All rights reserved.
+// SPDX-License-Identifier: MIT
+
 package main
 
 import (
 	"context"
-	"os"
 	"os/signal"
 	"syscall"
 
-	"gitea.com/gitea/act_runner/cmd"
+	"gitea.com/gitea/act_runner/internal/app/cmd"
 )
 
-func withContextFunc(ctx context.Context, f func()) context.Context {
-	ctx, cancel := context.WithCancel(ctx)
-	go func() {
-		c := make(chan os.Signal, 1)
-		signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
-		defer signal.Stop(c)
-
-		select {
-		case <-ctx.Done():
-		case <-c:
-			cancel()
-			f()
-		}
-	}()
-
-	return ctx
-}
-
 func main() {
-	ctx := withContextFunc(context.Background(), func() {})
+	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
+	defer stop()
 	// run the command
 	cmd.Execute(ctx)
 }
@@ -1,33 +0,0 @@
package poller

import "sync/atomic"

// Metric interface
type Metric interface {
	IncBusyWorker() int64
	DecBusyWorker() int64
	BusyWorkers() int64
}

var _ Metric = (*metric)(nil)

type metric struct {
	busyWorkers int64
}

// NewMetric for default metric structure
func NewMetric() Metric {
	return &metric{}
}

func (m *metric) IncBusyWorker() int64 {
	return atomic.AddInt64(&m.busyWorkers, 1)
}

func (m *metric) DecBusyWorker() int64 {
	return atomic.AddInt64(&m.busyWorkers, -1)
}

func (m *metric) BusyWorkers() int64 {
	return atomic.LoadInt64(&m.busyWorkers)
}
146 poller/poller.go
@@ -1,146 +0,0 @@
package poller

import (
	"context"
	"errors"
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"gitea.com/gitea/act_runner/client"

	"github.com/bufbuild/connect-go"
	log "github.com/sirupsen/logrus"
)

var ErrDataLock = errors.New("Data Lock Error")

func New(cli client.Client, dispatch func(context.Context, *runnerv1.Task) error, workerNum int) *Poller {
	return &Poller{
		Client:       cli,
		Dispatch:     dispatch,
		routineGroup: newRoutineGroup(),
		metric:       &metric{},
		workerNum:    workerNum,
		ready:        make(chan struct{}, 1),
	}
}

type Poller struct {
	Client   client.Client
	Dispatch func(context.Context, *runnerv1.Task) error

	sync.Mutex
	routineGroup *routineGroup
	metric       *metric
	ready        chan struct{}
	workerNum    int
}

func (p *Poller) schedule() {
	p.Lock()
	defer p.Unlock()
	if int(p.metric.BusyWorkers()) >= p.workerNum {
		return
	}

	select {
	case p.ready <- struct{}{}:
	default:
	}
}

func (p *Poller) Wait() {
	p.routineGroup.Wait()
}

func (p *Poller) Poll(ctx context.Context) error {
	l := log.WithField("func", "Poll")

	for {
		// check worker number
		p.schedule()

		select {
		// wait worker ready
		case <-p.ready:
		case <-ctx.Done():
			return nil
		}
	LOOP:
		for {
			select {
			case <-ctx.Done():
				break LOOP
			default:
				task, err := p.pollTask(ctx)
				if task == nil || err != nil {
					if err != nil {
						l.Errorf("can't find the task: %v", err.Error())
					}
					time.Sleep(5 * time.Second)
					break
				}

				p.metric.IncBusyWorker()
				p.routineGroup.Run(func() {
					defer p.schedule()
					defer p.metric.DecBusyWorker()
					if err := p.dispatchTask(ctx, task); err != nil {
						l.Errorf("execute task: %v", err.Error())
					}
				})
				break LOOP
			}
		}
	}
}

func (p *Poller) pollTask(ctx context.Context) (*runnerv1.Task, error) {
	l := log.WithField("func", "pollTask")
	l.Info("poller: request stage from remote server")

	reqCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	// request a new build stage for execution from the central
	// build server.
	resp, err := p.Client.FetchTask(reqCtx, connect.NewRequest(&runnerv1.FetchTaskRequest{}))
	if err == context.Canceled || err == context.DeadlineExceeded {
		l.WithError(err).Trace("poller: no stage returned")
		return nil, nil
	}

	if err != nil && err == ErrDataLock {
		l.WithError(err).Info("task accepted by another runner")
		return nil, nil
	}

	if err != nil {
		l.WithError(err).Error("cannot accept task")
		return nil, err
	}

	// exit if a nil or empty stage is returned from the system
	// and allow the runner to retry.
	if resp.Msg.Task == nil || resp.Msg.Task.Id == 0 {
		return nil, nil
	}

	return resp.Msg.Task, nil
}

func (p *Poller) dispatchTask(ctx context.Context, task *runnerv1.Task) error {
	l := log.WithField("func", "dispatchTask")
	defer func() {
		e := recover()
		if e != nil {
			l.Errorf("panic error: %v", e)
		}
	}()

	runCtx, cancel := context.WithTimeout(ctx, time.Hour)
	defer cancel()

	return p.Dispatch(runCtx, task)
}
@@ -1,24 +0,0 @@
package poller

import "sync"

type routineGroup struct {
	waitGroup sync.WaitGroup
}

func newRoutineGroup() *routineGroup {
	return new(routineGroup)
}

func (g *routineGroup) Run(fn func()) {
	g.waitGroup.Add(1)

	go func() {
		defer g.waitGroup.Done()
		fn()
	}()
}

func (g *routineGroup) Wait() {
	g.waitGroup.Wait()
}
@@ -1,63 +0,0 @@
package register

import (
	"context"
	"encoding/json"
	"os"
	"strconv"
	"strings"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"gitea.com/gitea/act_runner/client"
	"gitea.com/gitea/act_runner/config"
	"gitea.com/gitea/act_runner/core"

	"github.com/bufbuild/connect-go"
	log "github.com/sirupsen/logrus"
)

func New(cli client.Client) *Register {
	return &Register{
		Client: cli,
	}
}

type Register struct {
	Client client.Client
}

func (p *Register) Register(ctx context.Context, cfg config.Runner) (*core.Runner, error) {
	labels := make([]string, len(cfg.Labels))
	for i, v := range cfg.Labels {
		labels[i] = strings.SplitN(v, ":", 2)[0]
	}
	// register new runner.
	resp, err := p.Client.Register(ctx, connect.NewRequest(&runnerv1.RegisterRequest{
		Name:        cfg.Name,
		Token:       cfg.Token,
		AgentLabels: labels,
	}))
	if err != nil {
		log.WithError(err).Error("poller: cannot register new runner")
		return nil, err
	}

	data := &core.Runner{
		ID:       resp.Msg.Runner.Id,
		UUID:     resp.Msg.Runner.Uuid,
		Name:     resp.Msg.Runner.Name,
		Token:    resp.Msg.Runner.Token,
		Address:  p.Client.Address(),
		Insecure: strconv.FormatBool(p.Client.Insecure()),
		Labels:   cfg.Labels,
	}

	file, err := json.MarshalIndent(data, "", " ")
	if err != nil {
		log.WithError(err).Error("poller: cannot marshal the json input")
		return data, err
	}

	// store runner config in .runner file
	return data, os.WriteFile(cfg.File, file, 0o644)
}
11 renovate.json Normal file
@@ -0,0 +1,11 @@
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["local>forgejo/renovate-config"],
  "packageRules": [
    {
      "description": "Disable nektos/act, it's replaced",
      "matchDepNames": ["github.com/nektos/act"],
      "enabled": false
    }
  ]
}
@@ -1,64 +0,0 @@
package runtime

import (
	"context"
	"strings"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"gitea.com/gitea/act_runner/client"
)

// Runner runs the pipeline.
type Runner struct {
	Machine       string
	ForgeInstance string
	Environ       map[string]string
	Client        client.Client
	Labels        []string
}

// Run runs the pipeline stage.
func (s *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
	return NewTask(s.ForgeInstance, task.Id, s.Client, s.Environ, s.platformPicker).Run(ctx, task)
}

func (s *Runner) platformPicker(labels []string) string {
	// "ubuntu-18.04:docker://node:16-buster"
	// "self-hosted"

	platforms := make(map[string]string, len(labels))
	for _, l := range s.Labels {
		// "ubuntu-18.04:docker://node:16-buster"
		splits := strings.SplitN(l, ":", 2)
		if len(splits) == 1 {
			// identifier for non docker execution environment
			platforms[splits[0]] = "-self-hosted"
			continue
		}
		// ["ubuntu-18.04", "docker://node:16-buster"]
		k, v := splits[0], splits[1]

		if prefix := "docker://"; !strings.HasPrefix(v, prefix) {
			continue
		} else {
			v = strings.TrimPrefix(v, prefix)
		}
		// ubuntu-18.04 => node:16-buster
		platforms[k] = v
	}

	for _, label := range labels {
		if v, ok := platforms[label]; ok {
			return v
		}
	}

	// TODO: support multiple labels
	// like:
	//   ["ubuntu-22.04"] => "ubuntu:22.04"
	//   ["with-gpu"] => "linux:with-gpu"
	//   ["ubuntu-22.04", "with-gpu"] => "ubuntu:22.04_with-gpu"

	// return default
	return "node:16-bullseye"
}
265 runtime/task.go
@@ -1,265 +0,0 @@
package runtime

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"time"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"gitea.com/gitea/act_runner/client"

	"github.com/nektos/act/pkg/artifacts"
	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/model"
	"github.com/nektos/act/pkg/runner"
	log "github.com/sirupsen/logrus"
)

var globalTaskMap sync.Map

type TaskInput struct {
	repoDirectory string
	// actor string
	// workdir string
	// workflowsPath string
	// autodetectEvent bool
	// eventPath string
	// reuseContainers bool
	// bindWorkdir bool
	// secrets []string
	envs map[string]string
	// platforms []string
	// dryrun bool
	forcePull    bool
	forceRebuild bool
	// noOutput bool
	// envfile string
	// secretfile string
	insecureSecrets bool
	// defaultBranch string
	privileged            bool
	usernsMode            string
	containerArchitecture string
	containerDaemonSocket string
	// noWorkflowRecurse bool
	useGitIgnore     bool
	containerCapAdd  []string
	containerCapDrop []string
	// autoRemove bool
	artifactServerPath string
	artifactServerPort string
	jsonLogger         bool
	// noSkipCheckout bool
	// remoteName string

	EnvFile string

	containerNetworkMode string
}

type Task struct {
	BuildID int64
	Input   *TaskInput

	client         client.Client
	log            *log.Entry
	platformPicker func([]string) string
}

// NewTask creates a new task
func NewTask(forgeInstance string, buildID int64, client client.Client, runnerEnvs map[string]string, picker func([]string) string) *Task {
	task := &Task{
		Input: &TaskInput{
			envs:                 runnerEnvs,
			containerNetworkMode: "bridge", // TODO should be configurable
		},
		BuildID: buildID,

		client:         client,
		log:            log.WithField("buildID", buildID),
		platformPicker: picker,
	}
	task.Input.repoDirectory, _ = os.Getwd()
	return task
}

// getWorkflowsPath return the workflows directory, it will try .gitea first and then fallback to .github
func getWorkflowsPath(dir string) (string, error) {
	p := filepath.Join(dir, ".gitea/workflows")
	_, err := os.Stat(p)
	if err != nil {
		if !os.IsNotExist(err) {
			return "", err
		}
		return filepath.Join(dir, ".github/workflows"), nil
	}
	return p, nil
}

func getToken(task *runnerv1.Task) string {
	token := task.Secrets["GITHUB_TOKEN"]
	if task.Secrets["GITEA_TOKEN"] != "" {
		token = task.Secrets["GITEA_TOKEN"]
	}
	if task.Context.Fields["token"].GetStringValue() != "" {
		token = task.Context.Fields["token"].GetStringValue()
	}
	return token
}

func (t *Task) Run(ctx context.Context, task *runnerv1.Task) (lastErr error) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	_, exist := globalTaskMap.Load(task.Id)
	if exist {
		return fmt.Errorf("task %d already exists", task.Id)
	}

	// set task ve to global map
	// when task is done or canceled, it will be removed from the map
	globalTaskMap.Store(task.Id, t)
	defer globalTaskMap.Delete(task.Id)

	lastWords := ""
	reporter := NewReporter(ctx, cancel, t.client, task)
	defer func() {
		// set the job to failed on an error return value
		if lastErr != nil {
			reporter.Fire(&log.Entry{
				Data: log.Fields{
					"jobResult": "failure",
				},
			})
		}
		_ = reporter.Close(lastWords)
	}()
	reporter.RunDaemon()

	reporter.Logf("received task %v of job %v", task.Id, task.Context.Fields["job"].GetStringValue())

	workflowsPath, err := getWorkflowsPath(t.Input.repoDirectory)
	if err != nil {
		lastWords = err.Error()
		return err
	}
	t.log.Debugf("workflows path: %s", workflowsPath)

	workflow, err := model.ReadWorkflow(bytes.NewReader(task.WorkflowPayload))
	if err != nil {
		lastWords = err.Error()
		return err
	}

	var plan *model.Plan
	jobIDs := workflow.GetJobIDs()
	if len(jobIDs) != 1 {
		err := fmt.Errorf("multiple jobs found: %v", jobIDs)
		lastWords = err.Error()
		return err
	}
	jobID := jobIDs[0]
	plan = model.CombineWorkflowPlanner(workflow).PlanJob(jobID)
	job := workflow.GetJob(jobID)
	reporter.ResetSteps(len(job.Steps))

	log.Infof("plan: %+v", plan.Stages[0].Runs)

	token := getToken(task)
	dataContext := task.Context.Fields

	log.Infof("task %v repo is %v %v %v", task.Id, dataContext["repository"].GetStringValue(),
		dataContext["gitea_default_actions_url"].GetStringValue(),
		t.client.Address())

	preset := &model.GithubContext{
		Event:           dataContext["event"].GetStructValue().AsMap(),
		RunID:           dataContext["run_id"].GetStringValue(),
		RunNumber:       dataContext["run_number"].GetStringValue(),
		Actor:           dataContext["actor"].GetStringValue(),
		Repository:      dataContext["repository"].GetStringValue(),
		EventName:       dataContext["event_name"].GetStringValue(),
		Sha:             dataContext["sha"].GetStringValue(),
		Ref:             dataContext["ref"].GetStringValue(),
		RefName:         dataContext["ref_name"].GetStringValue(),
		RefType:         dataContext["ref_type"].GetStringValue(),
		HeadRef:         dataContext["head_ref"].GetStringValue(),
		BaseRef:         dataContext["base_ref"].GetStringValue(),
		Token:           token,
		RepositoryOwner: dataContext["repository_owner"].GetStringValue(),
		RetentionDays:   dataContext["retention_days"].GetStringValue(),
	}
	eventJSON, err := json.Marshal(preset.Event)
	if err != nil {
		lastWords = err.Error()
		return err
	}

	maxLifetime := 3 * time.Hour
	if deadline, ok := ctx.Deadline(); ok {
		maxLifetime = time.Until(deadline)
	}

	input := t.Input
	config := &runner.Config{
		Workdir:               "/" + preset.Repository,
		BindWorkdir:           false,
		ReuseContainers:       false,
		ForcePull:             input.forcePull,
		ForceRebuild:          input.forceRebuild,
		LogOutput:             true,
		JSONLogger:            input.jsonLogger,
		Env:                   input.envs,
		Secrets:               task.Secrets,
		InsecureSecrets:       input.insecureSecrets,
		Privileged:            input.privileged,
		UsernsMode:            input.usernsMode,
		ContainerArchitecture: input.containerArchitecture,
		ContainerDaemonSocket: input.containerDaemonSocket,
		UseGitIgnore:          input.useGitIgnore,
		GitHubInstance:        t.client.Address(),
		ContainerCapAdd:       input.containerCapAdd,
		ContainerCapDrop:      input.containerCapDrop,
		AutoRemove:            true,
		ArtifactServerPath:    input.artifactServerPath,
		ArtifactServerPort:    input.artifactServerPort,
		NoSkipCheckout:        true,
		PresetGitHubContext:   preset,
		EventJSON:             string(eventJSON),
		ContainerNamePrefix:   fmt.Sprintf("GITEA-ACTIONS-TASK-%d", task.Id),
		ContainerMaxLifetime:  maxLifetime,
		ContainerNetworkMode:  input.containerNetworkMode,
		DefaultActionInstance: dataContext["gitea_default_actions_url"].GetStringValue(),
		PlatformPicker:        t.platformPicker,
	}
	r, err := runner.New(config)
	if err != nil {
		lastWords = err.Error()
		return err
	}

	artifactCancel := artifacts.Serve(ctx, input.artifactServerPath, input.artifactServerPort)
	t.log.Debugf("artifacts server started at %s:%s", input.artifactServerPath, input.artifactServerPort)

	executor := r.NewPlanExecutor(plan).Finally(func(ctx context.Context) error {
		artifactCancel()
		return nil
	})

	t.log.Infof("workflow prepared")
	reporter.Logf("workflow prepared")

	// add logger recorders
	ctx = common.WithLoggerHook(ctx, reporter)

	if err := executor(ctx); err != nil {
		lastWords = err.Error()
		return err
	}

	return nil
}
9 scripts/rootless.sh Executable file
@@ -0,0 +1,9 @@
#!/usr/bin/env bash

# wait for docker daemon
while ! nc -z localhost 2376 </dev/null; do
    echo 'waiting for docker daemon...'
    sleep 5
done

. /opt/act/run.sh
48 scripts/run.sh Executable file
@@ -0,0 +1,48 @@
#!/usr/bin/env bash

if [[ ! -d /data ]]; then
    mkdir -p /data
fi

cd /data

CONFIG_ARG=""
if [[ ! -z "${CONFIG_FILE}" ]]; then
    CONFIG_ARG="--config ${CONFIG_FILE}"
fi
EXTRA_ARGS=""
if [[ ! -z "${GITEA_RUNNER_LABELS}" ]]; then
    EXTRA_ARGS="${EXTRA_ARGS} --labels ${GITEA_RUNNER_LABELS}"
fi

# Use the same ENV variable names as https://github.com/vegardit/docker-gitea-act-runner

if [[ ! -s .runner ]]; then
    try=$((try + 1))
    success=0

    # The point of this loop is to make it simple, when running both forgejo-runner and gitea in docker,
    # for the forgejo-runner to wait a moment for gitea to become available before erroring out. Within
    # the context of a single docker-compose, something similar could be done via healthchecks, but
    # this is more flexible.
    while [[ $success -eq 0 ]] && [[ $try -lt ${GITEA_MAX_REG_ATTEMPTS:-10} ]]; do
        forgejo-runner register \
            --instance "${GITEA_INSTANCE_URL}" \
            --token "${GITEA_RUNNER_REGISTRATION_TOKEN}" \
            --name "${GITEA_RUNNER_NAME:-`hostname`}" \
            ${CONFIG_ARG} ${EXTRA_ARGS} --no-interactive 2>&1 | tee /tmp/reg.log

        cat /tmp/reg.log | grep 'Runner registered successfully' > /dev/null
        if [[ $? -eq 0 ]]; then
            echo "SUCCESS"
            success=1
        else
            echo "Waiting to retry ..."
            sleep 5
        fi
    done
fi
# Prevent reading the token from the forgejo-runner process
unset GITEA_RUNNER_REGISTRATION_TOKEN

forgejo-runner daemon ${CONFIG_ARG}
13 scripts/supervisord.conf Normal file
@@ -0,0 +1,13 @@
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:dockerd]
command=/usr/local/bin/dockerd-entrypoint.sh

[program:act_runner]
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
command=/opt/act/rootless.sh
67 scripts/systemd.md Normal file
@@ -0,0 +1,67 @@
# Forgejo Runner with systemd User Services

It is possible to use systemd's user services together with
[podman](https://podman.io/) to run `forgejo-runner` using a normal user
account without any privileges and automatically start on boot.

This was last tested on Fedora 39 on 2024-02-19, but should work elsewhere as
well.

Place the `forgejo-runner` binary in `/usr/local/bin/forgejo-runner` and make
sure it can be executed (`chmod +x /usr/local/bin/forgejo-runner`).

Install and enable `podman` as a user service:

```bash
$ sudo dnf -y install podman
```

You *may* need to reboot your system after installing `podman` as it
modifies some system configuration(s) that may need to be activated. Without
rebooting the system my runner errored out when trying to set firewall rules, a
reboot fixed it.

Enable `podman` as a user service:

```
$ systemctl --user start podman.socket
$ systemctl --user enable podman.socket
```

Make sure processes remain after your user account logs out:

```bash
$ loginctl enable-linger
```

Create the file `/etc/systemd/user/forgejo-runner.service` with the following
content:

```
[Unit]
Description=Forgejo Runner

[Service]
Type=simple
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=on-failure

[Install]
WantedBy=default.target
```

Now activate it as a user service:

```bash
$ systemctl --user daemon-reload
$ systemctl --user start forgejo-runner
$ systemctl --user enable forgejo-runner
```

To see/follow the log of `forgejo-runner`:

```bash
$ journalctl -f -t forgejo-runner
```

If you reboot your system, all should come back automatically.