Compare commits


28 commits
main...v1.3.1

Author SHA1 Message Date
earl-warren
b202c62bf7 Merge pull request 'upgrade to codeberg.org/forgejo/act v1.1.0' (#1) from wip-act-upgrade into main
Reviewed-on: https://code.forgejo.org/forgejo/runner/pulls/1
2023-03-11 14:51:11 +00:00
Earl Warren
49f9622eca
upgrade to codeberg.org/forgejo/act v1.1.0 2023-03-11 15:24:54 +01:00
Earl Warren
e160695183 Merge pull request 'run in daemon mode by default' (#8) from earl-warren/runner:wip-daemon into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/8
2023-03-09 22:56:28 +00:00
Earl Warren
b685c432d4 Merge pull request 'define FORGEJO_RUNNER_FILE' (#7) from earl-warren/runner:wip-configfile into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/7
2023-03-09 22:56:17 +00:00
Earl Warren
132b318d8b
run in daemon mode by default 2023-03-09 18:40:19 +01:00
Earl Warren
a02fbdc7af
define FORGEJO_RUNNER_FILE 2023-03-09 18:25:54 +01:00
Loïc Dachary
1bb87d0ebb Merge pull request 'cherry-pick some changes from the gitea repository' (#6) from earl-warren/runner:wip-sync into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/6
2023-03-08 14:15:12 +00:00
HesterG
048b2e630f
Fix wrong last step duration when job failed (#41)
This PR fixes the wrong last step duration reported when a job fails, as shown in the screenshot.
The reason is that when a job fails, the `Fire` function does not pass in a Time, so `r.state.StoppedAt` is left at its default value of `0001-01-01 08:05:43 +0805 LMT`. That value is later reported to Gitea by `UpdateTask`, which calls `UpdateTaskByState` to update `task.Stopped`; `task.Stopped` is then used in `FullSteps`, resulting in a wrong calculation of the last step duration.
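A minimal sketch of the idea (types and function names are illustrative, not taken from the actual patch): when the hook fires for a failed job without a timestamp, fall back to the current time so the stop time is never left at the zero value.

```go
package main

import (
	"fmt"
	"time"
)

// reporterState mimics a reporter that records when a job stopped.
type reporterState struct {
	StoppedAt time.Time
}

// fire simulates the log hook; on failure it may be called with the zero time value.
func (s *reporterState) fire(entryTime time.Time) {
	if entryTime.IsZero() {
		entryTime = time.Now() // fall back instead of keeping the zero value
	}
	s.StoppedAt = entryTime
}

func main() {
	var s reporterState
	s.fire(time.Time{})                                        // a failed job firing without a timestamp
	fmt.Println("stopped-at is zero:", s.StoppedAt.IsZero()) // prints: stopped-at is zero: false
}
```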

Co-authored-by: nickname <test@123.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/41
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: HesterG <hesterg@noreply.gitea.io>
Co-committed-by: HesterG <hesterg@noreply.gitea.io>
2023-03-08 15:07:09 +01:00
Earl Warren
40954c450d
.gitignore forgejo-runner 2023-03-08 15:02:28 +01:00
Earl Warren
4d5007a333
s/gitea/forgejo/ for exec 2023-03-08 15:01:49 +01:00
a1012112796
65d2485f58
Add exec subcommand for runner so that we can run the tasks locally (#39)
Most of the code is copied from https://gitea.com/gitea/act/src/branch/main/cmd
with some small changes to make it run again

examples:

```SHELL
./act_runner exec -l
./act_runner exec -j lint
./act_runner exec -j lint -n
```

Some example results:

![Screenshot 2023-03-06 135735](/attachments/547bd05c-ade2-41f7-ba60-c9937fa32d5f)

![Screenshot 2023-03-06 140643](/attachments/e8f48dba-c7f3-4daa-a163-aa9b36b1dc32)

Signed-off-by: a1012112796 <1012112796@qq.com>

fix #32

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/39
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: a1012112796 <1012112796@qq.com>
Co-committed-by: a1012112796 <1012112796@qq.com>
2023-03-08 14:59:52 +01:00
Zettat123
ba181d4a50
Add runner name to log (#37)
Users can now see the name of the runner that executed a given job in the log.
![image](/attachments/61328f68-7223-4345-85c7-ac08781e81db)

Co-authored-by: Zettat123 <zettat123@gmail.com>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/37
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@noreply.gitea.io>
Co-committed-by: Zettat123 <zettat123@noreply.gitea.io>
2023-03-08 14:58:59 +01:00
ChristopherHX
cbf360f543
fix docker executor on windows and local actions (#34)
If the Workdir field doesn't end with the filepath separator,
bad things happen

Fixes #33
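A minimal sketch of the general approach (the helper name is illustrative, not the actual patch): make sure the working directory ends with the platform's path separator before sub-paths are appended to it.

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// normalizeWorkdir appends the platform's path separator when it is missing,
// so concatenating sub-paths behaves the same on Windows and Linux.
func normalizeWorkdir(workdir string) string {
	sep := string(filepath.Separator)
	if !strings.HasSuffix(workdir, sep) {
		workdir += sep
	}
	return workdir
}

func main() {
	fmt.Println(normalizeWorkdir("/srv/work")) // "/srv/work/" on Linux
}
```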

Sample for host mode on Windows; it needs to be adjusted for Linux, e.g. replace pwsh with bash
Also fixes
```yaml
on: push
jobs:
  _:
    runs-on: self-hosted
    steps:
    - uses: actions/checkout@v3
      with:
        path: subdir/action
    - uses: ./subdir/action
```

with an action.yml in the same repo
```yaml
runs:
  using: composite
  steps:
    - run: |
        echo "Hello World"
      shell: pwsh
```

Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/34
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: ChristopherHX <christopherhx@noreply.gitea.io>
Co-committed-by: ChristopherHX <christopherhx@noreply.gitea.io>
2023-03-08 14:58:04 +01:00
Earl Warren
8833fca093
trigger the build 2023-03-01 00:32:50 +01:00
Earl Warren
7aaa64a648 Merge pull request 'run test from Actions' (#3) from earl-warren/runner:wip-test into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/3
2023-02-28 23:24:31 +00:00
Earl Warren
9213f7dc62
run test from Actions 2023-03-01 00:10:02 +01:00
Earl Warren
af6fea7b97 Merge pull request 'Further rebranding to Forgejo' (#2) from crystal/runner:rebrand into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/2
2023-02-28 23:09:48 +00:00
techknowlogick
1da9c87cac
fix lint error (#30)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/30
Reviewed-by: John Olheiser <john+gitea@jolheiser.com>
2023-02-28 23:53:30 +01:00
Jason Song
c5c4e275ac
Support cache (#25)
See [Caching dependencies to speed up workflows](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows).

Reviewed-on: https://gitea.com/gitea/act_runner/pulls/25
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Zettat123 <zettat123@noreply.gitea.io>
Co-authored-by: Jason Song <i@wolfogre.com>
Co-committed-by: Jason Song <i@wolfogre.com>
2023-02-28 23:48:37 +01:00
crystal
9d3406c31d
Further rebranding to Forgejo 2023-02-28 15:38:58 -07:00
Earl Warren
6b85bb17d1 Merge pull request 'Rebrand runner for Forgejo' (#1) from crystal/runner:rebrand into main
Reviewed-on: https://codeberg.org/forgejo/runner/pulls/1
2023-02-28 22:23:26 +00:00
crystal
fb4284bfae
Rebrand runner for Forgejo 2023-02-28 15:05:33 -07:00
Lunny Xiao
5cb3d6f034
Enable action as CI to test/build/release (#26)
Reviewed-on: https://gitea.com/gitea/act_runner/pulls/26
2023-02-24 17:52:04 +01:00
Loïc Dachary
7e1a0119c0
upgrade act v1.0.1 2023-02-24 08:40:50 +01:00
Loïc Dachary
3b670a1e42
the canonical name is forgejo-runner 2023-02-23 00:09:12 +01:00
Earl Warren
956fe61fb9
Update README 2023-02-22 18:47:42 +01:00
Earl Warren
c81e7d4beb
upgrade to codeberg.org/forgejo/act v1.0.0 2023-02-22 18:21:23 +01:00
Earl Warren
d5798f067a
s|gitea.com/gitea/act_runner|codeberg.org/forgejo/runner| 2023-02-22 17:19:43 +01:00
85 changed files with 3502 additions and 4637 deletions


@ -1,2 +0,0 @@
Dockerfile
forgejo-runner


@ -1,16 +0,0 @@
root = true
[*]
indent_style = space
indent_size = 2
tab_width = 2
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
insert_final_newline = true
[*.{go}]
indent_style = tab
[Makefile]
indent_style = tab


@ -1,16 +0,0 @@
#!/bin/bash
set -ex
setup_forgejo=$1
setup_forgejo_pr=$2
runner=$3
runner_pr=$4
url=$(jq --raw-output .head.repo.html_url < $runner_pr)
test "$url" != null
branch=$(jq --raw-output .head.ref < $runner_pr)
test "$branch" != null
cd $setup_forgejo
./utils/upgrade-runner.sh $url @$branch
date > last-upgrade


@ -1,24 +0,0 @@
import json
expectedLabels = {
"maintainer": "contact@forgejo.org",
"org.opencontainers.image.authors": "Forgejo",
"org.opencontainers.image.url": "https://forgejo.org",
"org.opencontainers.image.documentation": "https://forgejo.org/docs/latest/admin/actions/#forgejo-runner",
"org.opencontainers.image.source": "https://code.forgejo.org/forgejo/runner",
"org.opencontainers.image.version": "1.2.3",
"org.opencontainers.image.vendor": "Forgejo",
"org.opencontainers.image.licenses": "MIT",
"org.opencontainers.image.title": "Forgejo Runner",
"org.opencontainers.image.description": "A runner for Forgejo Actions.",
}
inspect = None
with open("./labels.json", "r") as f:
inspect = json.load(f)
assert inspect
labels = inspect[0]["Config"]["Labels"]
for k, v in expectedLabels.items():
assert k in labels, f"'{k}' is missing from labels"
assert labels[k] == v, f"expected {v} in key {k}, found {labels[k]}"


@ -1,11 +0,0 @@
---
on: push
jobs:
ipv6:
runs-on: docker
container:
image: code.forgejo.org/oci/debian:bookworm
steps:
- run: |
apt update -qq ; apt --quiet install -qq --yes iputils-ping
ping -c 1 -6 ::1


@ -1,90 +0,0 @@
name: Integration tests for the release process
on:
push:
paths:
- go.mod
- Dockerfile
- .forgejo/workflows/build-release.yml
- .forgejo/workflows/build-release-integration.yml
pull_request:
paths:
- go.mod
- Dockerfile
- .forgejo/workflows/build-release.yml
- .forgejo/workflows/build-release-integration.yml
jobs:
release-simulation:
runs-on: self-hosted
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-release'
steps:
- uses: actions/checkout@v3
- id: forgejo
uses: https://code.forgejo.org/actions/setup-forgejo@v1
with:
user: root
password: admin1234
image-version: 1.20
lxc-ip-prefix: 10.0.9
- name: publish
run: |
set -x
version=1.2.3
cat > /etc/docker/daemon.json <<EOF
{
"insecure-registries" : ["${{ steps.forgejo.outputs.host-port }}"]
}
EOF
systemctl restart docker
dir=$(mktemp -d)
trap "rm -fr $dir" EXIT
url=http://root:admin1234@${{ steps.forgejo.outputs.host-port }}
export FORGEJO_RUNNER_LOGS="${{ steps.forgejo.outputs.runner-logs }}"
#
# Create a new project with the runner and the release workflow only
#
rsync -a --exclude .git ./ $dir/
rm $(find $dir/.forgejo/workflows/*.yml | grep -v build-release.yml)
forgejo-test-helper.sh push $dir $url root runner
sha=$(forgejo-test-helper.sh branch_tip $url root/runner main)
#
# Push a tag to trigger the release workflow and wait for it to complete
#
forgejo-curl.sh api_json --data-raw '{"tag_name": "v'$version'", "target": "'$sha'"}' $url/api/v1/repos/root/runner/tags
LOOPS=180 forgejo-test-helper.sh wait_success "$url" root/runner $sha
#
# uncomment to see the logs even when everything is reported to be working ok
#
#cat $FORGEJO_RUNNER_LOGS
#
# Minimal sanity checks. e2e test is for the setup-forgejo action
#
for arch in amd64 arm64 ; do
binary=forgejo-runner-$version-linux-$arch
for suffix in '' '.xz' ; do
curl --fail -L -sS $url/root/runner/releases/download/v$version/$binary$suffix > $binary$suffix
if test "$suffix" = .xz ; then
unxz --keep $binary$suffix
fi
chmod +x $binary
./$binary --version | grep $version
curl --fail -L -sS $url/root/runner/releases/download/v$version/$binary$suffix.sha256 > $binary$suffix.sha256
shasum -a 256 --check $binary$suffix.sha256
rm $binary$suffix
done
done
docker pull ${{ steps.forgejo.outputs.host-port }}/root/runner:$version
docker inspect ${{ steps.forgejo.outputs.host-port}}/root/runner:$version > labels.json
python3 .forgejo/labelscompare.py


@ -1,103 +0,0 @@
# SPDX-License-Identifier: MIT
#
# https://code.forgejo.org/forgejo/runner
#
# Build the runner binaries and OCI images
#
# ROLE: forgejo-integration
# DOER: forgejo-ci
# TOKEN: <generated from https://code.forgejo.org/forgejo-ci>
#
name: Build release
on:
push:
tags: 'v*'
jobs:
release:
runs-on: self-hosted
# root is used for testing, allow it
if: secrets.ROLE == 'forgejo-integration' || github.repository_owner == 'root'
steps:
- uses: actions/checkout@v3
- name: Increase the verbosity when there are no secrets
id: verbose
run: |
if test -z "${{ secrets.TOKEN }}"; then
value=true
else
value=false
fi
echo "value=$value" >> "$GITHUB_OUTPUT"
- name: Sanitize the name of the repository
id: repository
run: |
echo "value=${GITHUB_REPOSITORY##*/}" >> "$GITHUB_OUTPUT"
- name: create test TOKEN
id: token
if: ${{ secrets.TOKEN == '' }}
run: |
apt-get -qq install -y jq
url="${{ env.GITHUB_SERVER_URL }}"
hostport=${url##http*://}
hostport=${hostport%%/}
doer=root
api=http://$doer:admin1234@$hostport/api/v1/users/$doer/tokens
curl -sS -X DELETE $api/release
token=$(curl -sS -X POST -H 'Content-Type: application/json' --data-raw '{"name": "release", "scopes": ["all"]}' $api | jq --raw-output .sha1)
echo "value=${token}" >> "$GITHUB_OUTPUT"
- name: version from ref_name
id: tag-version
run: |
version=${GITHUB_REF_NAME##*v}
echo "value=$version" >> "$GITHUB_OUTPUT"
- name: release notes
id: release-notes
run: |
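# derive the markdown heading anchor for this version in RELEASE-NOTES.md (e.g. 1.2.3 -> 1-2-3)
# and emit a multiline step output using the value<<DELIMITER heredoc syntax of GITHUB_OUTPUT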
anchor=${{ steps.tag-version.outputs.value }}
anchor=${anchor//./-}
cat >> "$GITHUB_OUTPUT" <<EOF
value<<ENDVAR
See https://code.forgejo.org/forgejo/runner/src/branch/main/RELEASE-NOTES.md#$anchor
ENDVAR
EOF
- name: build without TOKEN
if: ${{ secrets.TOKEN == '' }}
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/build@v5
with:
forgejo: "${{ env.GITHUB_SERVER_URL }}"
owner: "${{ env.GITHUB_REPOSITORY_OWNER }}"
repository: "${{ steps.repository.outputs.value }}"
doer: root
sha: "${{ github.sha }}"
release-version: "${{ steps.tag-version.outputs.value }}"
token: ${{ steps.token.outputs.value }}
platforms: linux/amd64,linux/arm64
release-notes: "${{ steps.release-notes.outputs.value }}"
binary-name: forgejo-runner
binary-path: /bin/forgejo-runner
verbose: ${{ steps.verbose.outputs.value }}
- name: build with TOKEN
if: ${{ secrets.TOKEN != '' }}
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/build@v5
with:
forgejo: "${{ env.GITHUB_SERVER_URL }}"
owner: "${{ env.GITHUB_REPOSITORY_OWNER }}"
repository: "${{ steps.repository.outputs.value }}"
doer: "${{ secrets.DOER }}"
sha: "${{ github.sha }}"
release-version: "${{ steps.tag-version.outputs.value }}"
token: "${{ secrets.TOKEN }}"
platforms: linux/amd64,linux/arm64
release-notes: "${{ steps.release-notes.outputs.value }}"
binary-name: forgejo-runner
binary-path: /bin/forgejo-runner
verbose: ${{ steps.verbose.outputs.value }}


@ -1,25 +0,0 @@
# SPDX-License-Identifier: MIT
on:
pull_request_target:
types:
- opened
- synchronize
- closed
jobs:
cascade:
runs-on: docker
if: vars.CASCADE != 'no'
steps:
- uses: actions/cascading-pr@v1
with:
origin-url: ${{ env.GITHUB_SERVER_URL }}
origin-repo: forgejo/runner
origin-token: ${{ secrets.CASCADING_PR_ORIGIN }}
origin-pr: ${{ github.event.pull_request.number }}
destination-url: ${{ env.GITHUB_SERVER_URL }}
destination-repo: actions/setup-forgejo
destination-fork-repo: cascading-pr/setup-forgejo
destination-branch: main
destination-token: ${{ secrets.CASCADING_PR_DESTINATION }}
close-merge: true
update: .forgejo/cascading-pr-setup-forgejo


@ -1,70 +0,0 @@
# SPDX-License-Identifier: MIT
on:
push:
branches:
- 'main'
pull_request:
jobs:
example-docker-compose:
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- name: Install docker
run: |
apt-get update -qq
export DEBIAN_FRONTEND=noninteractive
apt-get install -qq -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update -qq
apt-get install -qq -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin=2.20.2-1~debian.11~bullseye
docker version
#
# docker compose is prone to non backward compatible changes, pin it
#
apt-get install -qq -y docker-compose-plugin=2.20.2-1~debian.11~bullseye
docker compose version
- name: run the example
run: |
set -x
cd examples/docker-compose
secret=$(openssl rand -hex 20)
sed -i -e "s/{SHARED_SECRET}/$secret/" compose-forgejo-and-runner.yml
cli="docker compose --progress quiet -f compose-forgejo-and-runner.yml"
#
# Launch Forgejo & the runner
#
$cli up -d
for delay in $(seq 60) ; do test -f /srv/runner-data/.runner && break ; sleep 30 ; done
test -f /srv/runner-data/.runner
#
# Run the demo workflow
#
cli="$cli -f compose-demo-workflow.yml"
$cli up -d demo-workflow
#
# Wait for the demo workflow to complete
#
success='DEMO WORKFLOW SUCCESS'
failure='DEMO WORKFLOW FAILURE'
for delay in $(seq 60) ; do
$cli logs demo-workflow > /tmp/out
grep --quiet "$success" /tmp/out && break
grep --quiet "$failure" /tmp/out && break
$cli ps --all
$cli logs --tail=20 runner-daemon demo-workflow
sleep 30
done
grep --quiet "$success" /tmp/out
$cli logs runner-daemon > /tmp/runner.log
grep --quiet 'Start image=code.forgejo.org/oci/node:20-bookworm' /tmp/runner.log
- name: full docker compose logs
if: always()
run: |
cd examples/docker-compose
docker compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml logs


@ -1,42 +0,0 @@
# SPDX-License-Identifier: MIT
#
# https://forgejo.octopuce.forgejo.org/forgejo-release/runner
#
# Copies & signs a release from code.forgejo.org/forgejo-integration/runner to code.forgejo.org/forgejo/runner
#
# ROLE: forgejo-release
# FORGEJO: https://code.forgejo.org
# FROM_OWNER: forgejo-integration
# TO_OWNER: forgejo
# DOER: release-team
# TOKEN: <generated from codeberg.org/release-team>
# GPG_PRIVATE_KEY: <XYZ>
# GPG_PASSPHRASE: <ABC>
#
name: publish
on:
push:
tags: 'v*'
jobs:
publish:
runs-on: self-hosted
if: secrets.DOER != '' && secrets.FORGEJO != '' && secrets.TO_OWNER != '' && secrets.FROM_OWNER != '' && secrets.TOKEN != ''
steps:
- uses: actions/checkout@v3
- name: copy & sign
uses: https://code.forgejo.org/forgejo/forgejo-build-publish/publish@v1
with:
forgejo: ${{ secrets.FORGEJO }}
from-owner: ${{ secrets.FROM_OWNER }}
to-owner: ${{ secrets.TO_OWNER }}
repo: "runner"
ref-name: ${{ github.ref_name }}
container-suffixes: " "
doer: ${{ secrets.DOER }}
token: ${{ secrets.TOKEN }}
gpg-private-key: ${{ secrets.GPG_PRIVATE_KEY }}
gpg-passphrase: ${{ secrets.GPG_PASSPHRASE }}
verbose: ${{ secrets.VERBOSE }}


@ -1,108 +1,23 @@
name: checks
on:
push:
branches:
- 'main'
pull_request:
- push
- pull_request
env:
FORGEJO_HOST_PORT: 'forgejo:3000'
FORGEJO_ADMIN_USER: 'root'
FORGEJO_ADMIN_PASSWORD: 'admin1234'
FORGEJO_RUNNER_SECRET: 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
FORGEJO_SCRIPT: |
/bin/s6-svscan /etc/s6 & sleep 10 ; su -c "forgejo admin user create --admin --username $FORGEJO_ADMIN_USER --password $FORGEJO_ADMIN_PASSWORD --email root@example.com" git && su -c "forgejo forgejo-cli actions register --labels docker --name therunner --secret $FORGEJO_RUNNER_SECRET" git && sleep infinity
GOPROXY: https://goproxy.io,direct
jobs:
build-and-tests:
name: build and test
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-experimental' && github.repository_owner != 'forgejo-release'
runs-on: docker
services:
forgejo:
image: codeberg.org/forgejo/forgejo:1.21
env:
FORGEJO__security__INSTALL_LOCK: "true"
FORGEJO__log__LEVEL: "debug"
FORGEJO__actions__ENABLED: "true"
FORGEJO_ADMIN_USER: ${{ env.FORGEJO_ADMIN_USER }}
FORGEJO_ADMIN_PASSWORD: ${{ env.FORGEJO_ADMIN_PASSWORD }}
FORGEJO_RUNNER_SECRET: ${{ env.FORGEJO_RUNNER_SECRET }}
cmd:
- 'bash'
- '-c'
- ${{ env.FORGEJO_SCRIPT }}
lint:
name: check and test
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: '1.21'
- uses: actions/checkout@v4
- run: make vet
- run: make build
- uses: https://code.forgejo.org/actions/upload-artifact@v3
with:
name: forgejo-runner
path: forgejo-runner
- name: check the forgejo server is responding
run: |
apt-get update -qq
apt-get install -y -qq jq curl
test $FORGEJO_ADMIN_USER = $(curl -sS http://$FORGEJO_ADMIN_USER:$FORGEJO_ADMIN_PASSWORD@$FORGEJO_HOST_PORT/api/v1/user | jq --raw-output .login)
- run: make FORGEJO_URL=http://$FORGEJO_HOST_PORT test
runner-exec-tests:
needs: [build-and-tests]
name: runner exec tests
if: github.repository_owner != 'forgejo-integration' && github.repository_owner != 'forgejo-experimental' && github.repository_owner != 'forgejo-release'
runs-on: self-hosted
steps:
- uses: actions/checkout@v4
- uses: https://code.forgejo.org/actions/download-artifact@v3
with:
name: forgejo-runner
- name: install docker
run: |
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"ipv6": true,
"experimental": true,
"ip6tables": true,
"fixed-cidr-v6": "fd05:d0ca:1::/64",
"default-address-pools": [
{
"base": "172.19.0.0/16",
"size": 24
},
{
"base": "fd05:d0ca:2::/104",
"size": 112
}
]
}
EOF
apt --quiet install --yes -qq docker.io
- name: forgejo-runner exec --enable-ipv6
run: |
set -x
chmod +x forgejo-runner
./forgejo-runner exec --enable-ipv6 --workflows .forgejo/testdata/ipv6.yml
if ./forgejo-runner exec --workflows .forgejo/testdata/ipv6.yml >& /tmp/out ; then
cat /tmp/out
echo "IPv6 not enabled, should fail"
exit 1
fi
go-version: 1.20
- uses: actions/checkout@v3
- name: vet checks
run: make vet
- name: build
run: make build
- name: test
run: make test

.gitattributes (vendored, 1 changed line)

@ -1 +0,0 @@
* text=auto eol=lf

.gitea/workflows/test.yml (new file, 23 lines)

@ -0,0 +1,23 @@
name: checks
on:
- push
- pull_request
env:
GOPROXY: https://goproxy.io,direct
jobs:
lint:
name: check and test
runs-on: ubuntu-latest
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.20
- uses: actions/checkout@v3
- name: vet checks
run: make vet
- name: build
run: make build
- name: test
run: make test

.gitignore (vendored, 5 changed lines)

@ -1,14 +1,9 @@
*~
forgejo-runner
.env
.runner
coverage.txt
/gitea-vet
/config.yaml
# MS VSCode
.vscode
__debug_bin
# gorelease binary folder
dist


@ -1,11 +1,14 @@
linters:
enable:
- gosimple
- deadcode
- typecheck
- govet
- errcheck
- staticcheck
- unused
- structcheck
- varcheck
- dupl
#- gocyclo # The cyclomatic complexity of a lot of functions is too high, we should refactor those another time.
- gofmt
@ -109,6 +112,7 @@ issues:
- gocritic
- linters:
- unused
- deadcode
text: "swagger"
- path: contrib/pr/checkout.go
linters:
@ -150,6 +154,9 @@ issues:
- path: cmd/dump.go
linters:
- dupl
- path: services/webhook/webhook.go
linters:
- structcheck
- text: "commentFormatting: put a space between `//` and comment text"
linters:
- gocritic


@ -1,47 +0,0 @@
FROM --platform=$BUILDPLATFORM code.forgejo.org/oci/tonistiigi/xx AS xx
FROM --platform=$BUILDPLATFORM code.forgejo.org/oci/golang:1.21-alpine3.19 as build-env
#
# Transparently cross compile for the target platform
#
COPY --from=xx / /
ARG TARGETPLATFORM
RUN apk --no-cache add clang lld
RUN xx-apk --no-cache add gcc musl-dev
RUN xx-go --wrap
# Do not remove `git` here, it is required for getting runner version when executing `make build`
RUN apk add --no-cache build-base git
COPY . /srv
WORKDIR /srv
RUN make clean && make build
FROM code.forgejo.org/oci/alpine:3.19
ARG RELEASE_VERSION
RUN apk add --no-cache git bash
COPY --from=build-env /srv/forgejo-runner /bin/forgejo-runner
LABEL maintainer="contact@forgejo.org" \
org.opencontainers.image.authors="Forgejo" \
org.opencontainers.image.url="https://forgejo.org" \
org.opencontainers.image.documentation="https://forgejo.org/docs/latest/admin/actions/#forgejo-runner" \
org.opencontainers.image.source="https://code.forgejo.org/forgejo/runner" \
org.opencontainers.image.version="${RELEASE_VERSION}" \
org.opencontainers.image.vendor="Forgejo" \
org.opencontainers.image.licenses="MIT" \
org.opencontainers.image.title="Forgejo Runner" \
org.opencontainers.image.description="A runner for Forgejo Actions."
ENV HOME=/data
USER 1000:1000
WORKDIR /data
VOLUME ["/data"]
CMD ["/bin/forgejo-runner"]


@ -1,4 +1,3 @@
Copyright (c) 2023 The Forgejo Authors
Copyright (c) 2022 The Gitea Authors
Permission is hereby granted, free of charge, to any person obtaining a copy


@ -7,21 +7,19 @@ GO ?= go
SHASUM ?= shasum -a 256
HAS_GO = $(shell hash $(GO) > /dev/null 2>&1 && echo "GO" || echo "NOGO" )
XGO_PACKAGE ?= src.techknowlogick.com/xgo@latest
XGO_VERSION := go-1.21.x
XGO_VERSION := go-1.18.x
GXZ_PAGAGE ?= github.com/ulikunitz/xz/cmd/gxz@v0.5.10
LINUX_ARCHS ?= linux/amd64,linux/arm64
DARWIN_ARCHS ?= darwin-12/amd64,darwin-12/arm64
WINDOWS_ARCHS ?= windows/amd64
GO_FMT_FILES := $(shell find . -type f -name "*.go" ! -name "generated.*")
GOFILES := $(shell find . -type f -name "*.go" -o -name "go.mod" ! -name "generated.*")
GOFILES := $(shell find . -type f -name "*.go" ! -name "generated.*")
DOCKER_IMAGE ?= gitea/act_runner
DOCKER_TAG ?= nightly
DOCKER_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)
DOCKER_ROOTLESS_REF := $(DOCKER_IMAGE):$(DOCKER_TAG)-dind-rootless
EXTLDFLAGS = -extldflags "-static" $(null)
ifneq ($(shell uname), Darwin)
EXTLDFLAGS = -extldflags "-static" $(null)
else
EXTLDFLAGS =
endif
ifeq ($(HAS_GO), GO)
GOPATH ?= $(shell $(GO) env GOPATH)
@ -51,7 +49,7 @@ else
ifneq ($(DRONE_BRANCH),)
VERSION ?= $(subst release/v,,$(DRONE_BRANCH))
else
VERSION ?= main
VERSION ?= master
endif
STORED_VERSION=$(shell cat $(STORED_VERSION_FILE) 2>/dev/null)
@ -62,11 +60,8 @@ else
endif
endif
GO_PACKAGES_TO_VET ?= $(filter-out gitea.com/gitea/act_runner/internal/pkg/client/mocks,$(shell $(GO) list ./...))
TAGS ?=
LDFLAGS ?= -X "gitea.com/gitea/act_runner/internal/pkg/ver.version=v$(RELASE_VERSION)"
LDFLAGS ?= -X 'main.Version=$(VERSION)'
all: build
@ -74,24 +69,17 @@ fmt:
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
$(GO) install mvdan.cc/gofumpt@latest; \
fi
$(GOFMT) -w $(GO_FMT_FILES)
$(GOFMT) -w $(GOFILES)
.PHONY: go-check
go-check:
$(eval MIN_GO_VERSION_STR := $(shell grep -Eo '^go\s+[0-9]+\.[0-9]+' go.mod | cut -d' ' -f2))
$(eval MIN_GO_VERSION := $(shell printf "%03d%03d" $(shell echo '$(MIN_GO_VERSION_STR)' | tr '.' ' ')))
$(eval GO_VERSION := $(shell printf "%03d%03d" $(shell $(GO) version | grep -Eo '[0-9]+\.[0-9]+' | tr '.' ' ');))
@if [ "$(GO_VERSION)" -lt "$(MIN_GO_VERSION)" ]; then \
echo "Act Runner requires Go $(MIN_GO_VERSION_STR) or greater to build. You can get it at https://go.dev/dl/"; \
exit 1; \
fi
vet:
$(GO) vet ./...
.PHONY: fmt-check
fmt-check:
@hash gofumpt > /dev/null 2>&1; if [ $$? -ne 0 ]; then \
$(GO) install mvdan.cc/gofumpt@latest; \
fi
@diff=$$($(GOFMT) -d $(GO_FMT_FILES)); \
@diff=$$($(GOFMT) -d $(GOFILES)); \
if [ -n "$$diff" ]; then \
echo "Please run 'make fmt' and commit the result:"; \
echo "$${diff}"; \
@ -101,18 +89,13 @@ fmt-check:
test: fmt-check
@$(GO) test -v -cover -coverprofile coverage.txt ./... && echo "\n==>\033[32m Ok\033[m\n" || exit 1
.PHONY: vet
vet:
@echo "Running go vet..."
@$(GO) vet $(GO_PACKAGES_TO_VET)
install: $(GOFILES)
$(GO) install -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)'
build: go-check $(EXECUTABLE)
build: $(EXECUTABLE)
$(EXECUTABLE): $(GOFILES)
$(GO) build -v -tags 'netgo osusergo $(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)' -o $@
$(GO) build -v -tags '$(TAGS)' -ldflags '$(EXTLDFLAGS)-s -w $(LDFLAGS)' -o $@
.PHONY: deps-backend
deps-backend:
@ -159,14 +142,6 @@ release-check: | $(DIST_DIRS)
release-compress: | $(DIST_DIRS)
cd $(DIST)/release/; for file in `find . -type f -name "*"`; do echo "compressing $${file}" && $(GO) run $(GXZ_PAGAGE) -k -9 $${file}; done;
.PHONY: docker
docker:
if ! docker buildx version >/dev/null 2>&1; then \
ARG_DISABLE_CONTENT_TRUST=--disable-content-trust=false; \
fi; \
docker build $${ARG_DISABLE_CONTENT_TRUST} -t $(DOCKER_REF) .
docker build $${ARG_DISABLE_CONTENT_TRUST} -t $(DOCKER_ROOTLESS_REF) -f Dockerfile.rootless .
clean:
$(GO) clean -x -i ./...
rm -rf coverage.txt $(EXECUTABLE) $(DIST)


@ -1,94 +1,6 @@
# Forgejo Runner
# Forgejo Actions runner
**WARNING:** this is [alpha release quality](https://en.wikipedia.org/wiki/Software_release_life_cycle#Alpha) code and should not be considered secure enough to deploy in production.
Runs workflows found in `.forgejo/workflows`, using a format similar to GitHub actions but with a Free Software implementation.
A daemon that connects to a Forgejo instance and runs jobs for continuous integration. The [installation and usage instructions](https://forgejo.org/docs/next/admin/actions/) are part of the Forgejo documentation.
It is compatible with Forgejo v1.19.0-0-rc0
# Reporting bugs
When filing a bug in [the issue tracker](https://code.forgejo.org/forgejo/runner/issues), it is very helpful to propose a pull request [in the end-to-end tests](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions) repository that adds a reproducer. It will fail the CI and unambiguously demonstrate that the problem exists. In most cases it is enough to add a workflow ([see the echo example](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-echo)). For more complicated cases it is also possible to add a runner config file as well as shell scripts to setup and teardown the test case ([see the service example](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-service)).
# Hacking
The Forgejo runner depends on [a fork of ACT](https://code.forgejo.org/forgejo/act) and is a dependency of the [setup-forgejo action](https://code.forgejo.org/actions/setup-forgejo). See [the full dependency graph](https://code.forgejo.org/actions/cascading-pr/#forgejo-dependencies) for a global view.
## Local debug
The repositories are checked out in the same directory:
- **runner**: [Forgejo runner](https://code.forgejo.org/forgejo/runner)
- **act**: [ACT](https://code.forgejo.org/forgejo/act)
- **setup-forgejo**: [setup-forgejo](https://code.forgejo.org/actions/setup-forgejo)
### Install dependencies
The dependencies are installed manually or with:
```shell
setup-forgejo/forgejo-dependencies.sh
```
### Build the Forgejo runner with the local ACT
The Forgejo runner is rebuilt against the local ACT directory by changing the `runner/go.mod` file to:
```
replace github.com/nektos/act => ../act
```
Running:
```
cd runner ; go mod tidy
```
Building:
```shell
cd runner ; rm -f forgejo-runner ; make forgejo-runner
```
### Launch Forgejo and the runner
A Forgejo instance is launched with:
```shell
cd setup-forgejo
./forgejo.sh setup
firefox $(cat forgejo-url)
```
The user is `root` with password `admin1234`. The runner is registered with:
```
cd setup-forgejo
docker exec --user 1000 forgejo forgejo actions generate-runner-token > forgejo-runner-token
../runner/forgejo-runner register --no-interactive --instance "$(cat forgejo-url)" --name runner --token $(cat forgejo-runner-token) --labels docker:docker://node:20-bullseye,self-hosted:host://-self-hosted,lxc:lxc://debian:bullseye
```
And launched with:
```shell
cd setup-forgejo ; ../runner/forgejo-runner --config runner-config.yml daemon
```
Note that the `runner-config.yml` is required in that particular case
to configure the network in `bridge` mode, otherwise the runner will
create a network that cannot reach the forgejo instance.
### Try a sample workflow
From the Forgejo web interface, create a repository and add the
following to `.forgejo/workflows/try.yaml`. It will launch the job and
the result can be observed from the `actions` tab.
```yaml
on: [push]
jobs:
ls:
runs-on: docker
steps:
- uses: actions/checkout@v3
- run: |
ls ${{ github.workspace }}
```


@ -1,102 +0,0 @@
# Release Notes
## 3.5.2
* Fix [crash in some cases when the YAML structure is not as expected](https://code.forgejo.org/forgejo/runner/issues/267).
## 3.5.1
* Fix [CVE-2024-24557](https://nvd.nist.gov/vuln/detail/CVE-2024-24557)
* [Add report_interval option to config](https://code.forgejo.org/forgejo/runner/pulls/220) to allow setting the interval of status and log reports
## 3.5.0
* [Allow graceful shutdowns](https://code.forgejo.org/forgejo/runner/pulls/202): when receiving a signal (INT or TERM) wait for running jobs to complete (up to shutdown_timeout).
* [Fix label declaration](https://code.forgejo.org/forgejo/runner/pulls/176): Runner in daemon mode now takes labels found in config.yml into account when declaration was successful.
* [Fix the docker compose example](https://code.forgejo.org/forgejo/runner/pulls/175) to workaround the race on labels.
* [Fix the kubernetes dind example](https://code.forgejo.org/forgejo/runner/pulls/169).
* [Rewrite ::group:: and ::endgroup:: commands like github](https://code.forgejo.org/forgejo/runner/pulls/183).
* [Added opencontainers labels to the image](https://code.forgejo.org/forgejo/runner/pulls/195)
* [Upgrade the default container to node:20](https://code.forgejo.org/forgejo/runner/pulls/203)
## 3.4.1
* Fixes a regression introduced in 3.4.0 by which a job with no image explicitly set would
[be bound to the host](https://code.forgejo.org/forgejo/runner/issues/165)
network instead of a custom network (empty string in the configuration file).
## 3.4.0
Although this version is able to run [actions/upload-artifact@v4](https://code.forgejo.org/actions/upload-artifact/src/tag/v4) and [actions/download-artifact@v4](https://code.forgejo.org/actions/download-artifact/src/tag/v4), these actions will fail because they do not run against GitHub.com. A fork of those two actions with this check disabled is made available at:
* https://code.forgejo.org/forgejo/upload-artifact/src/tag/v4
* https://code.forgejo.org/forgejo/download-artifact/src/tag/v4
and they can be used as shown in [an example from the end-to-end test suite](https://code.forgejo.org/forgejo/end-to-end/src/branch/main/actions/example-artifacts-v4/.forgejo/workflows/test.yml).
* When running against codeberg.org, the default poll frequency is 30s instead of 2s.
* Fix compatibility issue with actions/{upload,download}-artifact@v4.
* Upgrade ACT v1.20.0 which brings:
  * `[container].options` from the config file is exposed in containers created by the workflows
  * the expressions in the value of `jobs.<job-id>.runs-on` are evaluated
  * fix a bug causing the evaluated expression of `jobs.<job-id>.runs-on` to fail if it was an array
  * mount `act-toolcache:/opt/hostedtoolcache` instead of `act-toolcache:/toolcache`
  * a few improvements to the readability of the error messages displayed in the logs
  * `amd64` can be used instead of `x86_64` and `arm64` instead of `aarch64` when specifying the architecture
  * fixed YAML parsing bugs preventing dispatch workflows from being parsed correctly
  * add support for `runs-on.labels` which is equivalent to `runs-on` followed by a list of labels
  * the expressions in the service `ports` and `volumes` values are evaluated
  * network aliases are only supported when the network is user specified, not when it is provided by the runner
* If `[runner].insecure` is true in the configuration, insecure cloning of actions is allowed
## 3.3.0
* Support IPv6 with addresses from a private range and NAT for
  * docker:// with --enable-ipv6 and [container].enable_ipv6
  * lxc:// always
## 3.2.0
* Support LXC container capabilities via `lxc:lxc://debian:bookworm:k8s` or `lxc:lxc://debian:bookworm:docker lxc k8s`
* Update ACT v1.16.0 to resolve a [race condition when bootstrapping LXC templates](https://code.forgejo.org/forgejo/act/pulls/23)
## 3.1.0
The `self-hosted` label that was hardwired to be an LXC container
running `debian:bullseye` was reworked and documented ([user guide](https://forgejo.org/docs/next/user/actions/#jobsjob_idruns-on) and [admin guide](https://forgejo.org/docs/next/admin/actions/#labels-and-runs-on)).
There now are two different schemes: `lxc://` for LXC containers and
`host://` for running directly on the host.
* Support the `host://` scheme for running directly on the host.
* Support the `lxc://` scheme in labels
* Update [code.forgejo.org/forgejo/act v1.14.0](https://code.forgejo.org/forgejo/act/pulls/19) to implement both self-hosted and LXC schemes
## 3.0.3
* Update [code.forgejo.org/forgejo/act v1.13.0](https://code.forgejo.org/forgejo/runner/pulls/106) to keep up with github.com/nektos/act
## 3.0.2
* Update [code.forgejo.org/forgejo/act v1.12.0](https://code.forgejo.org/forgejo/runner/pulls/106) to upgrade the node installed in the LXC container to node20
## 3.0.1
* Update [code.forgejo.org/forgejo/act v1.11.0](https://code.forgejo.org/forgejo/runner/pulls/86) to resolve a bug preventing actions based on node20 from running, such as [checkout@v4](https://code.forgejo.org/actions/checkout/src/tag/v4).
## 3.0.0
* Publish a rootless OCI image
* Refactor the release process
## 2.5.0
* Update [code.forgejo.org/forgejo/act v1.10.0](https://code.forgejo.org/forgejo/runner/pulls/71)
## 2.4.0
* Update [code.forgejo.org/forgejo/act v1.9.0](https://code.forgejo.org/forgejo/runner/pulls/64)
## 2.3.0
* Add support for [offline registration](https://forgejo.org/docs/next/admin/actions/#offline-registration).

artifactcache/README.md (new file, 7 lines)

@ -0,0 +1,7 @@
Inspired by:
https://github.com/sp-ricard-valverde/github-act-cache-server
TODO:
- Authorization
- [Restrictions for accessing a cache](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#restrictions-for-accessing-a-cache)
- [Force deleting cache entries](https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries)

artifactcache/handler.go (new file, 407 lines)

@ -0,0 +1,407 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package artifactcache
import (
"context"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"time"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/go-chi/render"
"github.com/nektos/act/pkg/common"
log "github.com/sirupsen/logrus"
_ "modernc.org/sqlite"
"xorm.io/builder"
"xorm.io/xorm"
)
const (
urlBase = "/_apis/artifactcache"
)
var logger = log.StandardLogger().WithField("module", "cache_request")
type Handler struct {
engine engine
storage *Storage
router *chi.Mux
listener net.Listener
gc atomic.Bool
gcAt time.Time
}
func NewHandler() (*Handler, error) {
h := &Handler{}
dir := "" // TODO: make the dir configurable if necessary
if home, err := os.UserHomeDir(); err != nil {
return nil, err
} else {
dir = filepath.Join(home, ".cache/actcache")
}
if err := os.MkdirAll(dir, 0o755); err != nil {
return nil, err
}
e, err := xorm.NewEngine("sqlite", filepath.Join(dir, "sqlite.db"))
if err != nil {
return nil, err
}
if err := e.Sync(&Cache{}); err != nil {
return nil, err
}
h.engine = engine{e: e}
storage, err := NewStorage(filepath.Join(dir, "cache"))
if err != nil {
return nil, err
}
h.storage = storage
router := chi.NewRouter()
router.Use(middleware.RequestLogger(&middleware.DefaultLogFormatter{Logger: logger}))
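// after each request has been served, opportunistically trigger cache garbage collection in the background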
router.Use(func(handler http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
handler.ServeHTTP(w, r)
go h.gcCache()
})
})
router.Use(middleware.Logger)
router.Route(urlBase, func(r chi.Router) {
r.Get("/cache", h.find)
r.Route("/caches", func(r chi.Router) {
r.Post("/", h.reserve)
r.Route("/{id}", func(r chi.Router) {
r.Patch("/", h.upload)
r.Post("/", h.commit)
})
})
r.Get("/artifacts/{id}", h.get)
r.Post("/clean", h.clean)
})
h.router = router
h.gcCache()
// TODO: make the port configurable if necessary
listener, err := net.Listen("tcp", ":0") // random available port
if err != nil {
return nil, err
}
go func() {
if err := http.Serve(listener, h.router); err != nil {
logger.Errorf("http serve: %v", err)
}
}()
h.listener = listener
return h, nil
}
func (h *Handler) ExternalURL() string {
// TODO: make the external url configurable if necessary
return fmt.Sprintf("http://%s:%d",
common.GetOutboundIP().String(),
h.listener.Addr().(*net.TCPAddr).Port)
}
// GET /_apis/artifactcache/cache
func (h *Handler) find(w http.ResponseWriter, r *http.Request) {
keys := strings.Split(r.URL.Query().Get("keys"), ",")
version := r.URL.Query().Get("version")
cache, err := h.findCache(r.Context(), keys, version)
if err != nil {
responseJson(w, r, 500, err)
return
}
if cache == nil {
responseJson(w, r, 204)
return
}
if ok, err := h.storage.Exist(cache.ID); err != nil {
responseJson(w, r, 500, err)
return
} else if !ok {
_ = h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Delete(cache)
return err
})
responseJson(w, r, 204)
return
}
responseJson(w, r, 200, map[string]any{
"result": "hit",
"archiveLocation": fmt.Sprintf("%s%s/artifacts/%d", h.ExternalURL(), urlBase, cache.ID),
"cacheKey": cache.Key,
})
}
// POST /_apis/artifactcache/caches
func (h *Handler) reserve(w http.ResponseWriter, r *http.Request) {
cache := &Cache{}
if err := render.Bind(r, cache); err != nil {
responseJson(w, r, 400, err)
return
}
if ok, err := h.engine.ExecBool(func(sess *xorm.Session) (bool, error) {
return sess.Where(builder.Eq{"key": cache.Key, "version": cache.Version}).Get(&Cache{})
}); err != nil {
responseJson(w, r, 500, err)
return
} else if ok {
responseJson(w, r, 400, fmt.Errorf("already exist"))
return
}
if err := h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Insert(cache)
return err
}); err != nil {
responseJson(w, r, 500, err)
return
}
responseJson(w, r, 200, map[string]any{
"cacheId": cache.ID,
})
return
}
// PATCH /_apis/artifactcache/caches/:id
func (h *Handler) upload(w http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(chi.URLParam(r, "id"), 10, 64)
if err != nil {
responseJson(w, r, 400, err)
return
}
cache := &Cache{
ID: id,
}
if ok, err := h.engine.ExecBool(func(sess *xorm.Session) (bool, error) {
return sess.Get(cache)
}); err != nil {
responseJson(w, r, 500, err)
return
} else if !ok {
responseJson(w, r, 400, fmt.Errorf("cache %d: not reserved", id))
return
}
if cache.Complete {
responseJson(w, r, 400, fmt.Errorf("cache %v %q: already complete", cache.ID, cache.Key))
return
}
start, _, err := parseContentRange(r.Header.Get("Content-Range"))
if err != nil {
responseJson(w, r, 400, err)
return
}
if err := h.storage.Write(cache.ID, start, r.Body); err != nil {
responseJson(w, r, 500, err)
}
h.useCache(r.Context(), id)
responseJson(w, r, 200)
}
// POST /_apis/artifactcache/caches/:id
func (h *Handler) commit(w http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(chi.URLParam(r, "id"), 10, 64)
if err != nil {
responseJson(w, r, 400, err)
return
}
cache := &Cache{
ID: id,
}
if ok, err := h.engine.ExecBool(func(sess *xorm.Session) (bool, error) {
return sess.Get(cache)
}); err != nil {
responseJson(w, r, 500, err)
return
} else if !ok {
responseJson(w, r, 400, fmt.Errorf("cache %d: not reserved", id))
return
}
if cache.Complete {
responseJson(w, r, 400, fmt.Errorf("cache %v %q: already complete", cache.ID, cache.Key))
return
}
if err := h.storage.Commit(cache.ID, cache.Size); err != nil {
responseJson(w, r, 500, err)
return
}
cache.Complete = true
if err := h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.ID(cache.ID).Cols("complete").Update(cache)
return err
}); err != nil {
responseJson(w, r, 500, err)
return
}
responseJson(w, r, 200)
}
// GET /_apis/artifactcache/artifacts/:id
func (h *Handler) get(w http.ResponseWriter, r *http.Request) {
id, err := strconv.ParseInt(chi.URLParam(r, "id"), 10, 64)
if err != nil {
responseJson(w, r, 400, err)
return
}
h.useCache(r.Context(), id)
h.storage.Serve(w, r, id)
}
// POST /_apis/artifactcache/clean
func (h *Handler) clean(w http.ResponseWriter, r *http.Request) {
// TODO: don't support force deleting cache entries
// see: https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#force-deleting-cache-entries
responseJson(w, r, 200)
}
// if not found, return (nil, nil) instead of an error.
func (h *Handler) findCache(ctx context.Context, keys []string, version string) (*Cache, error) {
if len(keys) == 0 {
return nil, nil
}
key := keys[0] // the first key is for exact match.
cache := &Cache{}
if ok, err := h.engine.ExecBool(func(sess *xorm.Session) (bool, error) {
return sess.Where(builder.Eq{"key": key, "version": version, "complete": true}).Get(cache)
}); err != nil {
return nil, err
} else if ok {
return cache, nil
}
for _, prefix := range keys[1:] {
if ok, err := h.engine.ExecBool(func(sess *xorm.Session) (bool, error) {
return sess.Where(builder.And(
builder.Like{"key", prefix + "%"},
builder.Eq{"version": version, "complete": true},
)).OrderBy("id DESC").Get(cache)
}); err != nil {
return nil, err
} else if ok {
return cache, nil
}
}
return nil, nil
}
func (h *Handler) useCache(ctx context.Context, id int64) {
// keep quiet
_ = h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Context(ctx).Cols("used_at").Update(&Cache{
ID: id,
UsedAt: time.Now().Unix(),
})
return err
})
}
func (h *Handler) gcCache() {
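// the atomic flag ensures only one garbage collection pass runs at a time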
if h.gc.Load() {
return
}
if !h.gc.CompareAndSwap(false, true) {
return
}
defer h.gc.Store(false)
if time.Since(h.gcAt) < time.Hour {
logger.Infof("skip gc: %v", h.gcAt.String())
return
}
h.gcAt = time.Now()
logger.Infof("gc: %v", h.gcAt.String())
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
)
var caches []*Cache
if err := h.engine.Exec(func(sess *xorm.Session) error {
return sess.Where(builder.And(builder.Lt{"used_at": time.Now().Add(-keepTemp).Unix()}, builder.Eq{"complete": false})).
Find(&caches)
}); err != nil {
logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Delete(cache)
return err
}); err != nil {
logger.Warnf("delete cache: %v", err)
continue
}
logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := h.engine.Exec(func(sess *xorm.Session) error {
return sess.Where(builder.Lt{"used_at": time.Now().Add(-keepUnused).Unix()}).
Find(&caches)
}); err != nil {
logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Delete(cache)
return err
}); err != nil {
logger.Warnf("delete cache: %v", err)
continue
}
logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := h.engine.Exec(func(sess *xorm.Session) error {
return sess.Where(builder.Lt{"created_at": time.Now().Add(-keepUsed).Unix()}).
Find(&caches)
}); err != nil {
logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := h.engine.Exec(func(sess *xorm.Session) error {
_, err := sess.Delete(cache)
return err
}); err != nil {
logger.Warnf("delete cache: %v", err)
continue
}
logger.Infof("deleted cache: %+v", cache)
}
}
}

artifactcache/model.go (new file, 30 lines)

@ -0,0 +1,30 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package artifactcache
import (
"fmt"
"net/http"
)
type Cache struct {
ID int64 `xorm:"id pk autoincr" json:"-"`
Key string `xorm:"TEXT index unique(key_version)" json:"key"`
Version string `xorm:"TEXT unique(key_version)" json:"version"`
Size int64 `json:"cacheSize"`
Complete bool `xorm:"index(complete_used_at)" json:"-"`
UsedAt int64 `xorm:"index(complete_used_at) updated" json:"-"`
CreatedAt int64 `xorm:"index created" json:"-"`
}
// Bind implements render.Binder
func (c *Cache) Bind(_ *http.Request) error {
if c.Key == "" {
return fmt.Errorf("missing key")
}
if c.Version == "" {
return fmt.Errorf("missing version")
}
return nil
}

artifactcache/storage.go (new file, 129 lines)

@ -0,0 +1,129 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package artifactcache
import (
"fmt"
"io"
"net/http"
"os"
"path/filepath"
)
type Storage struct {
rootDir string
}
func NewStorage(rootDir string) (*Storage, error) {
if err := os.MkdirAll(rootDir, 0o755); err != nil {
return nil, err
}
return &Storage{
rootDir: rootDir,
}, nil
}
func (s *Storage) Exist(id int64) (bool, error) {
name := s.filename(id)
if _, err := os.Stat(name); os.IsNotExist(err) {
return false, nil
} else if err != nil {
return false, err
}
return true, nil
}
func (s *Storage) Write(id int64, offset int64, reader io.Reader) error {
name := s.tempName(id, offset)
if err := os.MkdirAll(filepath.Dir(name), 0o755); err != nil {
return err
}
file, err := os.Create(name)
if err != nil {
return err
}
defer file.Close()
_, err = io.Copy(file, reader)
return err
}
func (s *Storage) Commit(id int64, size int64) error {
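// concatenate the uploaded chunks (one temporary file per offset) into the final cache file and verify the total size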
defer func() {
_ = os.RemoveAll(s.tempDir(id))
}()
name := s.filename(id)
tempNames, err := s.tempNames(id)
if err != nil {
return err
}
if err := os.MkdirAll(filepath.Dir(name), 0o755); err != nil {
return err
}
file, err := os.Create(name)
if err != nil {
return err
}
defer file.Close()
var written int64
for _, v := range tempNames {
f, err := os.Open(v)
if err != nil {
return err
}
n, err := io.Copy(file, f)
_ = f.Close()
if err != nil {
return err
}
written += n
}
if written != size {
_ = file.Close()
_ = os.Remove(name)
return fmt.Errorf("broken file: %v != %v", written, size)
}
return nil
}
func (s *Storage) Serve(w http.ResponseWriter, r *http.Request, id int64) {
name := s.filename(id)
http.ServeFile(w, r, name)
}
func (s *Storage) Remove(id int64) {
_ = os.Remove(s.filename(id))
_ = os.RemoveAll(s.tempDir(id))
}
func (s *Storage) filename(id int64) string {
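// spread cache entries across sub-directories keyed by the low byte of the id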
return filepath.Join(s.rootDir, fmt.Sprintf("%02x", id%0xff), fmt.Sprint(id))
}
func (s *Storage) tempDir(id int64) string {
return filepath.Join(s.rootDir, "tmp", fmt.Sprint(id))
}
func (s *Storage) tempName(id, offset int64) string {
return filepath.Join(s.tempDir(id), fmt.Sprintf("%016x", offset))
}
func (s *Storage) tempNames(id int64) ([]string, error) {
dir := s.tempDir(id)
files, err := os.ReadDir(dir)
if err != nil {
return nil, err
}
var names []string
for _, v := range files {
if !v.IsDir() {
names = append(names, filepath.Join(dir, v.Name()))
}
}
return names, nil
}

artifactcache/util.go (new file, 72 lines)

@ -0,0 +1,72 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package artifactcache
import (
"fmt"
"net/http"
"strconv"
"strings"
"sync"
"github.com/go-chi/render"
"xorm.io/xorm"
)
func responseJson(w http.ResponseWriter, r *http.Request, code int, v ...any) {
render.Status(r, code)
if len(v) == 0 || v[0] == nil {
render.JSON(w, r, struct{}{})
} else if err, ok := v[0].(error); ok {
logger.Errorf("%v %v: %v", r.Method, r.RequestURI, err)
render.JSON(w, r, map[string]any{
"error": err.Error(),
})
} else {
render.JSON(w, r, v[0])
}
}
func parseContentRange(s string) (int64, int64, error) {
// support the format like "bytes 11-22/*" only
s, _, _ = strings.Cut(strings.TrimPrefix(s, "bytes "), "/")
s1, s2, _ := strings.Cut(s, "-")
start, err := strconv.ParseInt(s1, 10, 64)
if err != nil {
return 0, 0, fmt.Errorf("parse %q: %w", s, err)
}
stop, err := strconv.ParseInt(s2, 10, 64)
if err != nil {
return 0, 0, fmt.Errorf("parse %q: %w", s, err)
}
return start, stop, nil
}
// engine is a wrapper of *xorm.Engine, with a lock.
// To avoid racing of sqlite, we don't care about performance here.
type engine struct {
e *xorm.Engine
m sync.Mutex
}
func (e *engine) Exec(f func(*xorm.Session) error) error {
e.m.Lock()
defer e.m.Unlock()
sess := e.e.NewSession()
defer sess.Close()
return f(sess)
}
func (e *engine) ExecBool(f func(*xorm.Session) (bool, error)) (bool, error) {
e.m.Lock()
defer e.m.Unlock()
sess := e.e.NewSession()
defer sess.Close()
return f(sess)
}


@ -1,6 +1,3 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package client
import (
@ -9,8 +6,6 @@ import (
)
// A Client manages communication with the runner.
//
//go:generate mockery --name Client
type Client interface {
pingv1connect.PingServiceClient
runnerv1connect.RunnerServiceClient


@ -1,6 +1,3 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package client
import (
@ -11,10 +8,11 @@ import (
"code.gitea.io/actions-proto-go/ping/v1/pingv1connect"
"code.gitea.io/actions-proto-go/runner/v1/runnerv1connect"
"connectrpc.com/connect"
"codeberg.org/forgejo/runner/core"
"github.com/bufbuild/connect-go"
)
func getHTTPClient(endpoint string, insecure bool) *http.Client {
func getHttpClient(endpoint string, insecure bool) *http.Client {
if strings.HasPrefix(endpoint, "https://") && insecure {
return &http.Client{
Transport: &http.Transport{
@ -28,20 +26,16 @@ func getHTTPClient(endpoint string, insecure bool) *http.Client {
}
// New returns a new runner client.
func New(endpoint string, insecure bool, uuid, token, version string, opts ...connect.ClientOption) *HTTPClient {
func New(endpoint string, insecure bool, uuid, token string, opts ...connect.ClientOption) *HTTPClient {
baseURL := strings.TrimRight(endpoint, "/") + "/api/actions"
opts = append(opts, connect.WithInterceptors(connect.UnaryInterceptorFunc(func(next connect.UnaryFunc) connect.UnaryFunc {
return func(ctx context.Context, req connect.AnyRequest) (connect.AnyResponse, error) {
if uuid != "" {
req.Header().Set(UUIDHeader, uuid)
req.Header().Set(core.UUIDHeader, uuid)
}
if token != "" {
req.Header().Set(TokenHeader, token)
}
// TODO: version will be removed from request header after Gitea 1.20 released.
if version != "" {
req.Header().Set(VersionHeader, version)
req.Header().Set(core.TokenHeader, token)
}
return next(ctx, req)
}
@ -49,12 +43,12 @@ func New(endpoint string, insecure bool, uuid, token, version string, opts ...co
return &HTTPClient{
PingServiceClient: pingv1connect.NewPingServiceClient(
getHTTPClient(endpoint, insecure),
getHttpClient(endpoint, insecure),
baseURL,
opts...,
),
RunnerServiceClient: runnerv1connect.NewRunnerServiceClient(
getHTTPClient(endpoint, insecure),
getHttpClient(endpoint, insecure),
baseURL,
opts...,
),

cmd/cmd.go (new file, 66 lines)

@ -0,0 +1,66 @@
package cmd
import (
"context"
"os"
"github.com/spf13/cobra"
)
const version = "0.1.5"
type globalArgs struct {
EnvFile string
}
func Execute(ctx context.Context) {
// task := runtime.NewTask("gitea", 0, nil, nil)
var gArgs globalArgs
// ./act_runner
rootCmd := &cobra.Command{
Use: "act [event name to run]\nIf no event name passed, will default to \"on: push\"",
Short: "Run GitHub actions locally by specifying the event name (e.g. `push`) or an action name directly.",
Args: cobra.MaximumNArgs(1),
Version: version,
SilenceUsage: true,
RunE: runDaemon(ctx, gArgs.EnvFile),
}
rootCmd.PersistentFlags().StringVarP(&gArgs.EnvFile, "env-file", "", ".env", "Read in a file of environment variables.")
// ./act_runner register
var regArgs registerArgs
registerCmd := &cobra.Command{
Use: "register",
Short: "Register a runner to the server",
Args: cobra.MaximumNArgs(0),
RunE: runRegister(ctx, &regArgs, gArgs.EnvFile), // must use a pointer to regArgs
}
registerCmd.Flags().BoolVar(&regArgs.NoInteractive, "no-interactive", false, "Disable interactive mode")
registerCmd.Flags().StringVar(&regArgs.InstanceAddr, "instance", "", "Forgejo instance address")
registerCmd.Flags().BoolVar(&regArgs.Insecure, "insecure", false, "Do not verify the server's certificate if the protocol is https")
registerCmd.Flags().StringVar(&regArgs.Token, "token", "", "Runner token")
registerCmd.Flags().StringVar(&regArgs.RunnerName, "name", "", "Runner name")
registerCmd.Flags().StringVar(&regArgs.Labels, "labels", "", "Runner tags, comma separated")
rootCmd.AddCommand(registerCmd)
// ./act_runner daemon
daemonCmd := &cobra.Command{
Use: "daemon",
Short: "Run as a runner daemon",
Args: cobra.MaximumNArgs(1),
RunE: runDaemon(ctx, gArgs.EnvFile),
}
rootCmd.AddCommand(daemonCmd)
// ./act_runner exec
rootCmd.AddCommand(loadExecCmd(ctx))
// hide completion command
rootCmd.CompletionOptions.HiddenDefaultCmd = true
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
}

cmd/daemon.go (new file, 120 lines)

@ -0,0 +1,120 @@
package cmd
import (
"context"
"os"
"strings"
"codeberg.org/forgejo/runner/artifactcache"
"codeberg.org/forgejo/runner/client"
"codeberg.org/forgejo/runner/config"
"codeberg.org/forgejo/runner/engine"
"codeberg.org/forgejo/runner/poller"
"codeberg.org/forgejo/runner/runtime"
"github.com/joho/godotenv"
"github.com/mattn/go-isatty"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
func runDaemon(ctx context.Context, envFile string) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
log.Infoln("Starting runner daemon")
_ = godotenv.Load(envFile)
cfg, err := config.FromEnviron()
if err != nil {
log.WithError(err).
Fatalln("invalid configuration")
}
initLogging(cfg)
// require docker if a runner label uses a docker backend
needsDocker := false
for _, l := range cfg.Runner.Labels {
splits := strings.SplitN(l, ":", 2)
if len(splits) == 2 && strings.HasPrefix(splits[1], "docker://") {
needsDocker = true
break
}
}
if needsDocker {
// try to connect to docker daemon
// if failed, exit with error
if err := engine.Start(ctx); err != nil {
log.WithError(err).Fatalln("failed to connect docker daemon engine")
}
}
handler, err := artifactcache.NewHandler()
if err != nil {
return err
}
log.Infof("cache handler listens on: %v", handler.ExternalURL())
var g errgroup.Group
cli := client.New(
cfg.Client.Address,
cfg.Client.Insecure,
cfg.Runner.UUID,
cfg.Runner.Token,
)
runner := &runtime.Runner{
Client: cli,
Machine: cfg.Runner.Name,
ForgeInstance: cfg.Client.Address,
Environ: cfg.Runner.Environ,
Labels: cfg.Runner.Labels,
CacheHandler: handler,
}
poller := poller.New(
cli,
runner.Run,
cfg.Runner.Capacity,
)
g.Go(func() error {
l := log.WithField("capacity", cfg.Runner.Capacity).
WithField("endpoint", cfg.Client.Address).
WithField("os", cfg.Platform.OS).
WithField("arch", cfg.Platform.Arch)
l.Infoln("polling the remote server")
if err := poller.Poll(ctx); err != nil {
l.Errorf("poller error: %v", err)
}
poller.Wait()
return nil
})
err = g.Wait()
if err != nil {
log.WithError(err).
Errorln("shutting down the server")
}
return err
}
}
// initLogging sets up the global logrus logger.
func initLogging(cfg config.Config) {
isTerm := isatty.IsTerminal(os.Stdout.Fd())
log.SetFormatter(&log.TextFormatter{
DisableColors: !isTerm,
FullTimestamp: true,
})
if cfg.Debug {
log.SetLevel(log.DebugLevel)
}
if cfg.Trace {
log.SetLevel(log.TraceLevel)
}
}


@ -13,9 +13,8 @@ import (
"strings"
"time"
"github.com/docker/docker/api/types/container"
"codeberg.org/forgejo/runner/artifactcache"
"github.com/joho/godotenv"
"github.com/nektos/act/pkg/artifactcache"
"github.com/nektos/act/pkg/artifacts"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/model"
@ -39,7 +38,7 @@ type executeArgs struct {
envs []string
envfile string
secrets []string
defaultActionsURL string
defaultActionsUrl string
insecureSecrets bool
privileged bool
usernsMode string
@ -48,18 +47,12 @@ type executeArgs struct {
useGitIgnore bool
containerCapAdd []string
containerCapDrop []string
containerOptions string
artifactServerPath string
artifactServerAddr string
artifactServerPort string
noSkipCheckout bool
debug bool
dryrun bool
image string
cacheHandler *artifactcache.Handler
network string
enableIPv6 bool
githubInstance string
}
// WorkflowsPath returns path to workflow file(s)
@ -253,7 +246,7 @@ func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *e
var filterPlan *model.Plan
// Determine the event name to be filtered
var filterEventName string
var filterEventName string = ""
if len(execArgs.event) > 0 {
log.Infof("Using chosen event for filtering: %s", execArgs.event)
@ -269,28 +262,18 @@ func runExecList(ctx context.Context, planner model.WorkflowPlanner, execArgs *e
filterEventName = events[0]
}
var err error
if execArgs.job != "" {
log.Infof("Preparing plan with a job: %s", execArgs.job)
filterPlan, err = planner.PlanJob(execArgs.job)
if err != nil {
return err
}
filterPlan = planner.PlanJob(execArgs.job)
} else if filterEventName != "" {
log.Infof("Preparing plan for a event: %s", filterEventName)
filterPlan, err = planner.PlanEvent(filterEventName)
if err != nil {
return err
}
filterPlan = planner.PlanEvent(filterEventName)
} else {
log.Infof("Preparing plan with all jobs")
filterPlan, err = planner.PlanAll()
if err != nil {
return err
}
filterPlan = planner.PlanAll()
}
_ = printList(filterPlan)
printList(filterPlan)
return nil
}
@ -317,7 +300,7 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
if len(execArgs.event) > 0 {
log.Infof("Using chosen event for filtering: %s", execArgs.event)
eventName = execArgs.event
eventName = args[0]
} else if len(events) == 1 && len(events[0]) > 0 {
log.Infof("Using the only detected workflow event: %s", events[0])
eventName = events[0]
@ -334,16 +317,10 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
// build the plan for this run
if execArgs.job != "" {
log.Infof("Planning job: %s", execArgs.job)
plan, err = planner.PlanJob(execArgs.job)
if err != nil {
return err
}
plan = planner.PlanJob(execArgs.job)
} else {
log.Infof("Planning jobs for event: %s", eventName)
plan, err = planner.PlanEvent(eventName)
if err != nil {
return err
}
plan = planner.PlanEvent(eventName)
}
maxLifetime := 3 * time.Hour
@ -352,31 +329,13 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
}
// init a cache server
handler, err := artifactcache.StartHandler("", "", 0, log.StandardLogger().WithField("module", "cache_request"))
handler, err := artifactcache.NewHandler()
if err != nil {
return err
}
log.Infof("cache handler listens on: %v", handler.ExternalURL())
execArgs.cacheHandler = handler
if len(execArgs.artifactServerAddr) == 0 {
ip := common.GetOutboundIP()
if ip == nil {
return fmt.Errorf("unable to determine outbound IP address")
}
execArgs.artifactServerAddr = ip.String()
}
if len(execArgs.artifactServerPath) == 0 {
tempDir, err := os.MkdirTemp("", "gitea-act-")
if err != nil {
fmt.Println(err)
}
defer os.RemoveAll(tempDir)
execArgs.artifactServerPath = tempDir
}
// run the plan
config := &runner.Config{
Workdir: execArgs.Workdir(),
@ -394,47 +353,47 @@ func runExec(ctx context.Context, execArgs *executeArgs) func(cmd *cobra.Command
ContainerArchitecture: execArgs.containerArchitecture,
ContainerDaemonSocket: execArgs.containerDaemonSocket,
UseGitIgnore: execArgs.useGitIgnore,
GitHubInstance: execArgs.githubInstance,
ContainerCapAdd: execArgs.containerCapAdd,
ContainerCapDrop: execArgs.containerCapDrop,
ContainerOptions: execArgs.containerOptions,
AutoRemove: true,
ArtifactServerPath: execArgs.artifactServerPath,
ArtifactServerPort: execArgs.artifactServerPort,
ArtifactServerAddr: execArgs.artifactServerAddr,
NoSkipCheckout: execArgs.noSkipCheckout,
// PresetGitHubContext: preset,
// EventJSON: string(eventJSON),
ContainerNamePrefix: fmt.Sprintf("FORGEJO-ACTIONS-TASK-%s", eventName),
ContainerMaxLifetime: maxLifetime,
ContainerNetworkMode: container.NetworkMode(execArgs.network),
ContainerNetworkEnableIPv6: execArgs.enableIPv6,
DefaultActionInstance: execArgs.defaultActionsURL,
// GitHubInstance: t.client.Address(),
ContainerCapAdd: execArgs.containerCapAdd,
ContainerCapDrop: execArgs.containerCapDrop,
AutoRemove: true,
ArtifactServerPath: execArgs.artifactServerPath,
ArtifactServerPort: execArgs.artifactServerPort,
NoSkipCheckout: execArgs.noSkipCheckout,
// PresetGitHubContext: preset,
// EventJSON: string(eventJSON),
ContainerNamePrefix: fmt.Sprintf("GITEA-ACTIONS-TASK-%s", eventName),
ContainerMaxLifetime: maxLifetime,
ContainerNetworkMode: "bridge",
DefaultActionInstance: execArgs.defaultActionsUrl,
PlatformPicker: func(_ []string) string {
return execArgs.image
return "node:16-bullseye"
},
ValidVolumes: []string{"**"}, // All volumes are allowed for `exec` command
}
config.Env["ACT_EXEC"] = "true"
if t := config.Secrets["GITEA_TOKEN"]; t != "" {
config.Token = t
} else if t := config.Secrets["GITHUB_TOKEN"]; t != "" {
config.Token = t
}
if !execArgs.debug {
logLevel := log.InfoLevel
config.JobLoggerLevel = &logLevel
}
// TODO: handle log level config
// waiting https://gitea.com/gitea/act/pulls/19
// if !execArgs.debug {
// logLevel := log.Level(log.InfoLevel)
// config.JobLoggerLevel = &logLevel
// }
r, err := runner.New(config)
if err != nil {
return err
}
artifactCancel := artifacts.Serve(ctx, execArgs.artifactServerPath, execArgs.artifactServerAddr, execArgs.artifactServerPort)
if len(execArgs.artifactServerPath) == 0 {
tempDir, err := os.MkdirTemp("", "gitea-act-")
if err != nil {
fmt.Println(err)
}
defer os.RemoveAll(tempDir)
execArgs.artifactServerPath = tempDir
}
artifactCancel := artifacts.Serve(ctx, execArgs.artifactServerPath, execArgs.artifactServerPort)
log.Debugf("artifacts server started at %s:%s", execArgs.artifactServerPath, execArgs.artifactServerPort)
ctx = common.WithDryrun(ctx, execArgs.dryrun)
@ -478,18 +437,12 @@ func loadExecCmd(ctx context.Context) *cobra.Command {
execCmd.Flags().BoolVar(&execArg.useGitIgnore, "use-gitignore", true, "Controls whether paths specified in .gitignore should be copied into container")
execCmd.Flags().StringArrayVarP(&execArg.containerCapAdd, "container-cap-add", "", []string{}, "kernel capabilities to add to the workflow containers (e.g. --container-cap-add SYS_PTRACE)")
execCmd.Flags().StringArrayVarP(&execArg.containerCapDrop, "container-cap-drop", "", []string{}, "kernel capabilities to remove from the workflow containers (e.g. --container-cap-drop SYS_PTRACE)")
execCmd.Flags().StringVarP(&execArg.containerOptions, "container-opts", "", "", "container options")
execCmd.PersistentFlags().StringVarP(&execArg.artifactServerPath, "artifact-server-path", "", ".", "Defines the path where the artifact server stores uploads and retrieves downloads from. If not specified the artifact server will not start.")
execCmd.PersistentFlags().StringVarP(&execArg.artifactServerAddr, "artifact-server-addr", "", "", "Defines the address where the artifact server listens")
execCmd.PersistentFlags().StringVarP(&execArg.artifactServerPort, "artifact-server-port", "", "34567", "Defines the port where the artifact server listens (will only bind to localhost).")
execCmd.PersistentFlags().StringVarP(&execArg.defaultActionsURL, "default-actions-url", "", "https://code.forgejo.org", "Defines the default base url of the action.")
execCmd.PersistentFlags().StringVarP(&execArg.defaultActionsUrl, "default-actions-url", "", "https://codeberg.org", "Defines the default url of action instance.")
execCmd.PersistentFlags().BoolVarP(&execArg.noSkipCheckout, "no-skip-checkout", "", false, "Do not skip actions/checkout")
execCmd.PersistentFlags().BoolVarP(&execArg.debug, "debug", "d", false, "enable debug log")
execCmd.PersistentFlags().BoolVarP(&execArg.dryrun, "dryrun", "n", false, "dryrun mode")
execCmd.PersistentFlags().StringVarP(&execArg.image, "image", "i", "node:20-bullseye", "Docker image to use. Use \"-self-hosted\" to run directly on the host.")
execCmd.PersistentFlags().StringVarP(&execArg.network, "network", "", "", "Specify the network to which the container will connect")
execCmd.PersistentFlags().BoolVarP(&execArg.enableIPv6, "enable-ipv6", "6", false, "Create network with IPv6 enabled.")
execCmd.PersistentFlags().StringVarP(&execArg.githubInstance, "gitea-instance", "", "", "Gitea instance to use.")
return execCmd
}


@ -1,6 +1,3 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
@ -9,25 +6,24 @@ import (
"fmt"
"os"
"os/signal"
goruntime "runtime"
"runtime"
"strings"
"time"
pingv1 "code.gitea.io/actions-proto-go/ping/v1"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"connectrpc.com/connect"
"codeberg.org/forgejo/runner/client"
"codeberg.org/forgejo/runner/config"
"codeberg.org/forgejo/runner/register"
"github.com/bufbuild/connect-go"
"github.com/joho/godotenv"
"github.com/mattn/go-isatty"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/ver"
)
// runRegister registers a runner to the server
func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string) func(*cobra.Command, []string) error {
func runRegister(ctx context.Context, regArgs *registerArgs, envFile string) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
log.SetReportCaller(false)
isTerm := isatty.IsTerminal(os.Stdout.Fd())
@ -38,7 +34,7 @@ func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string)
log.SetLevel(log.DebugLevel)
log.Infof("Registering runner, arch=%s, os=%s, version=%s.",
goruntime.GOARCH, goruntime.GOOS, ver.Version())
runtime.GOARCH, runtime.GOOS, version)
// runner always needs root permission
if os.Getuid() != 0 {
@ -47,13 +43,14 @@ func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string)
}
if regArgs.NoInteractive {
if err := registerNoInteractive(ctx, *configFile, regArgs); err != nil {
if err := registerNoInteractive(envFile, regArgs); err != nil {
return err
}
} else {
go func() {
if err := registerInteractive(ctx, *configFile); err != nil {
log.Fatal(err)
if err := registerInteractive(envFile); err != nil {
// log.Errorln(err)
os.Exit(2)
return
}
os.Exit(0)
@ -72,6 +69,7 @@ func runRegister(ctx context.Context, regArgs *registerArgs, configFile *string)
type registerArgs struct {
NoInteractive bool
InstanceAddr string
Insecure bool
Token string
RunnerName string
Labels string
@ -85,20 +83,24 @@ const (
StageInputInstance
StageInputToken
StageInputRunnerName
StageInputLabels
StageInputCustomLabels
StageWaitingForRegistration
StageExit
)
var defaultLabels = []string{
"docker:docker://node:20-bullseye",
"ubuntu-latest:docker://node:16-bullseye",
"ubuntu-22.04:docker://node:16-bullseye", // There's no node:16-bookworm yet
"ubuntu-20.04:docker://node:16-bullseye",
"ubuntu-18.04:docker://node:16-buster",
}
type registerInputs struct {
InstanceAddr string
Insecure bool
Token string
RunnerName string
Labels []string
CustomLabels []string
}
func (r *registerInputs) validate() error {
@ -108,22 +110,25 @@ func (r *registerInputs) validate() error {
if r.Token == "" {
return fmt.Errorf("token is empty")
}
if len(r.Labels) > 0 {
return validateLabels(r.Labels)
if len(r.CustomLabels) > 0 {
return validateLabels(r.CustomLabels)
}
return nil
}
func validateLabels(ls []string) error {
for _, label := range ls {
if _, err := labels.Parse(label); err != nil {
return err
func validateLabels(labels []string) error {
for _, label := range labels {
values := strings.SplitN(label, ":", 2)
if len(values) > 2 {
return fmt.Errorf("Invalid label: %s", label)
}
// len(values) == 1, label for non docker execution environment
// TODO: validate value format, like docker://node:16-buster
}
return nil
}
func (r *registerInputs) assignToNext(stage registerStage, value string, cfg *config.Config) registerStage {
func (r *registerInputs) assignToNext(stage registerStage, value string) registerStage {
// must set instance address and token.
// if empty, keep current stage.
if stage == StageInputInstance || stage == StageInputToken {
@ -151,50 +156,32 @@ func (r *registerInputs) assignToNext(stage registerStage, value string, cfg *co
return StageInputRunnerName
case StageInputRunnerName:
r.RunnerName = value
// if there are some labels configured in config file, skip input labels stage
if len(cfg.Runner.Labels) > 0 {
ls := make([]string, 0, len(cfg.Runner.Labels))
for _, l := range cfg.Runner.Labels {
_, err := labels.Parse(l)
if err != nil {
log.WithError(err).Warnf("ignored invalid label %q", l)
continue
}
ls = append(ls, l)
}
if len(ls) == 0 {
log.Warn("no valid labels configured in config file, runner may not be able to pick up jobs")
}
r.Labels = ls
return StageWaitingForRegistration
}
return StageInputLabels
case StageInputLabels:
r.Labels = defaultLabels
return StageInputCustomLabels
case StageInputCustomLabels:
r.CustomLabels = defaultLabels
if value != "" {
r.Labels = strings.Split(value, ",")
r.CustomLabels = strings.Split(value, ",")
}
if validateLabels(r.Labels) != nil {
log.Infoln("Invalid labels, please input again, leave blank to use the default labels (for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm)")
return StageInputLabels
if validateLabels(r.CustomLabels) != nil {
log.Infoln("Invalid labels, please input again, leave blank to use the default labels (for example, ubuntu-20.04:docker://node:16-bullseye,ubuntu-18.04:docker://node:16-buster)")
return StageInputCustomLabels
}
return StageWaitingForRegistration
}
return StageUnknown
}
func registerInteractive(ctx context.Context, configFile string) error {
func registerInteractive(envFile string) error {
var (
reader = bufio.NewReader(os.Stdin)
stage = StageInputInstance
inputs = new(registerInputs)
)
cfg, err := config.LoadDefault(configFile)
if err != nil {
return fmt.Errorf("failed to load config: %v", err)
}
// check if overwrite local config
_ = godotenv.Load(envFile)
cfg, _ := config.FromEnviron()
if f, err := os.Stat(cfg.Runner.File); err == nil && !f.IsDir() {
stage = StageOverwriteLocalConfig
}
@ -206,14 +193,15 @@ func registerInteractive(ctx context.Context, configFile string) error {
if err != nil {
return err
}
stage = inputs.assignToNext(stage, strings.TrimSpace(cmdString), cfg)
stage = inputs.assignToNext(stage, strings.TrimSpace(cmdString))
if stage == StageWaitingForRegistration {
log.Infof("Registering runner, name=%s, instance=%s, labels=%v.", inputs.RunnerName, inputs.InstanceAddr, inputs.Labels)
if err := doRegister(ctx, cfg, inputs); err != nil {
return fmt.Errorf("Failed to register runner: %w", err)
log.Infof("Registering runner, name=%s, instance=%s, labels=%v.", inputs.RunnerName, inputs.InstanceAddr, inputs.CustomLabels)
if err := doRegister(&cfg, inputs); err != nil {
log.Errorf("Failed to register runner: %v", err)
} else {
log.Infof("Runner registered successfully.")
}
log.Infof("Runner registered successfully.")
return nil
}
@ -233,43 +221,33 @@ func printStageHelp(stage registerStage) {
case StageOverwriteLocalConfig:
log.Infoln("Runner is already registered, overwrite local config? [y/N]")
case StageInputInstance:
log.Infoln("Enter the Forgejo instance URL (for example, https://next.forgejo.org/):")
log.Infoln("Enter the Forgejo instance URL (for example, https://codeberg.org/):")
case StageInputToken:
log.Infoln("Enter the runner token:")
case StageInputRunnerName:
hostname, _ := os.Hostname()
log.Infof("Enter the runner name (if set empty, use hostname: %s):\n", hostname)
case StageInputLabels:
log.Infoln("Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm):")
log.Infof("Enter the runner name (if set empty, use hostname:%s ):\n", hostname)
case StageInputCustomLabels:
log.Infoln("Enter the runner labels, leave blank to use the default labels (comma-separated, for example, self-hosted,ubuntu-20.04:docker://node:16-bullseye,ubuntu-18.04:docker://node:16-buster):")
case StageWaitingForRegistration:
log.Infoln("Waiting for registration...")
}
}
func registerNoInteractive(ctx context.Context, configFile string, regArgs *registerArgs) error {
cfg, err := config.LoadDefault(configFile)
if err != nil {
return err
}
func registerNoInteractive(envFile string, regArgs *registerArgs) error {
_ = godotenv.Load(envFile)
cfg, _ := config.FromEnviron()
inputs := &registerInputs{
InstanceAddr: regArgs.InstanceAddr,
Insecure: regArgs.Insecure,
Token: regArgs.Token,
RunnerName: regArgs.RunnerName,
Labels: defaultLabels,
CustomLabels: defaultLabels,
}
regArgs.Labels = strings.TrimSpace(regArgs.Labels)
// command line flag.
if regArgs.Labels != "" {
inputs.Labels = strings.Split(regArgs.Labels, ",")
inputs.CustomLabels = strings.Split(regArgs.Labels, ",")
}
// specify labels in config file.
if len(cfg.Runner.Labels) > 0 {
if regArgs.Labels != "" {
log.Warn("Labels from command will be ignored, use labels defined in config file.")
}
inputs.Labels = cfg.Runner.Labels
}
if inputs.RunnerName == "" {
inputs.RunnerName, _ = os.Hostname()
log.Infof("Runner name is empty, use hostname '%s'.", inputs.RunnerName)
@ -278,21 +256,22 @@ func registerNoInteractive(ctx context.Context, configFile string, regArgs *regi
log.WithError(err).Errorf("Invalid input, please re-run act command.")
return nil
}
if err := doRegister(ctx, cfg, inputs); err != nil {
return fmt.Errorf("Failed to register runner: %w", err)
if err := doRegister(&cfg, inputs); err != nil {
log.Errorf("Failed to register runner: %v", err)
return nil
}
log.Infof("Runner registered successfully.")
return nil
}
func doRegister(ctx context.Context, cfg *config.Config, inputs *registerInputs) error {
func doRegister(cfg *config.Config, inputs *registerInputs) error {
ctx := context.Background()
// initial http client
cli := client.New(
inputs.InstanceAddr,
cfg.Runner.Insecure,
"",
"",
ver.Version(),
inputs.Insecure,
"", "",
)
for {
@ -301,7 +280,7 @@ func doRegister(ctx context.Context, cfg *config.Config, inputs *registerInputs)
}))
select {
case <-ctx.Done():
return ctx.Err()
return nil
default:
}
if ctx.Err() != nil {
@ -318,38 +297,9 @@ func doRegister(ctx context.Context, cfg *config.Config, inputs *registerInputs)
}
}
reg := &config.Registration{
Name: inputs.RunnerName,
Token: inputs.Token,
Address: inputs.InstanceAddr,
Labels: inputs.Labels,
}
ls := make([]string, len(reg.Labels))
for i, v := range reg.Labels {
l, _ := labels.Parse(v)
ls[i] = l.Name
}
// register new runner.
resp, err := cli.Register(ctx, connect.NewRequest(&runnerv1.RegisterRequest{
Name: reg.Name,
Token: reg.Token,
Version: ver.Version(),
AgentLabels: ls, // Could be removed after Gitea 1.20
Labels: ls,
}))
if err != nil {
log.WithError(err).Error("poller: cannot register new runner")
return err
}
reg.ID = resp.Msg.Runner.Id
reg.UUID = resp.Msg.Runner.Uuid
reg.Name = resp.Msg.Runner.Name
reg.Token = resp.Msg.Runner.Token
if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
return fmt.Errorf("failed to save runner config: %w", err)
}
return nil
cfg.Runner.Name = inputs.RunnerName
cfg.Runner.Token = inputs.Token
cfg.Runner.Labels = inputs.CustomLabels
_, err := register.New(cli).Register(ctx, cfg.Runner)
return err
}

cmd/register_test.go (new file, 10 lines)

@ -0,0 +1,10 @@
package cmd
import "testing"
func TestValidateLabels(t *testing.T) {
labels := []string{"ubuntu-latest:docker://node:16-buster", "self-hosted"}
if err := validateLabels(labels); err != nil {
t.Errorf("validateLabels() error = %v", err)
}
}

config/config.go (new file, 117 lines)

@ -0,0 +1,117 @@
package config
import (
"encoding/json"
"io"
"os"
"runtime"
"strconv"
"codeberg.org/forgejo/runner/core"
"github.com/joho/godotenv"
"github.com/kelseyhightower/envconfig"
)
type (
// Config provides the system configuration.
Config struct {
Debug bool `envconfig:"GITEA_DEBUG"`
Trace bool `envconfig:"GITEA_TRACE"`
Client Client
Runner Runner
Platform Platform
}
Client struct {
Address string `ignored:"true"`
Insecure bool
}
Runner struct {
UUID string `ignored:"true"`
Name string `envconfig:"GITEA_RUNNER_NAME"`
Token string `ignored:"true"`
Capacity int `envconfig:"GITEA_RUNNER_CAPACITY" default:"1"`
File string `envconfig:"FORGEJO_RUNNER_FILE" default:".runner"`
Environ map[string]string `envconfig:"GITEA_RUNNER_ENVIRON"`
EnvFile string `envconfig:"GITEA_RUNNER_ENV_FILE"`
Labels []string `envconfig:"GITEA_RUNNER_LABELS"`
}
Platform struct {
OS string `envconfig:"GITEA_PLATFORM_OS"`
Arch string `envconfig:"GITEA_PLATFORM_ARCH"`
}
)
// FromEnviron returns the settings from the environment.
func FromEnviron() (Config, error) {
cfg := Config{}
if err := envconfig.Process("", &cfg); err != nil {
return cfg, err
}
// check whether the runner config file exists
f, err := os.Stat(cfg.Runner.File)
if err == nil && !f.IsDir() {
jsonFile, _ := os.Open(cfg.Runner.File)
defer jsonFile.Close()
byteValue, _ := io.ReadAll(jsonFile)
var runner core.Runner
if err := json.Unmarshal(byteValue, &runner); err != nil {
return cfg, err
}
if runner.UUID != "" {
cfg.Runner.UUID = runner.UUID
}
if runner.Name != "" {
cfg.Runner.Name = runner.Name
}
if runner.Token != "" {
cfg.Runner.Token = runner.Token
}
if len(runner.Labels) != 0 {
cfg.Runner.Labels = runner.Labels
}
if runner.Address != "" {
cfg.Client.Address = runner.Address
}
if runner.Insecure != "" {
cfg.Client.Insecure, _ = strconv.ParseBool(runner.Insecure)
}
} else if err != nil {
return cfg, err
}
// runner config
if cfg.Runner.Environ == nil {
cfg.Runner.Environ = map[string]string{
"GITHUB_API_URL": cfg.Client.Address + "/api/v1",
"GITHUB_SERVER_URL": cfg.Client.Address,
}
}
if cfg.Runner.Name == "" {
cfg.Runner.Name, _ = os.Hostname()
}
// platform config
if cfg.Platform.OS == "" {
cfg.Platform.OS = runtime.GOOS
}
if cfg.Platform.Arch == "" {
cfg.Platform.Arch = runtime.GOARCH
}
if file := cfg.Runner.EnvFile; file != "" {
envs, err := godotenv.Read(file)
if err != nil {
return cfg, err
}
for k, v := range envs {
cfg.Runner.Environ[k] = v
}
}
return cfg, nil
}
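
As a sketch of how this is consumed, the daemon can be configured entirely through these environment variables (values below are illustrative); the instance address and token themselves are read from the registration file referenced by `FORGEJO_RUNNER_FILE`, not from the environment.

```sh
# Illustrative values; the variable names match the envconfig tags above.
export GITEA_DEBUG=true
export GITEA_RUNNER_NAME=my-runner
export GITEA_RUNNER_CAPACITY=2
export GITEA_RUNNER_LABELS=ubuntu-22.04:docker://node:16-bullseye
export FORGEJO_RUNNER_FILE=/etc/forgejo-runner/.runner

./act_runner daemon
```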


@ -1,18 +0,0 @@
[Unit]
Description=Forgejo Runner
Documentation=https://forgejo.org/docs/latest/admin/actions/
After=docker.service
[Service]
ExecStart=forgejo-runner daemon
ExecReload=/bin/kill -s HUP $MAINPID
# This user and working directory must already exist
User=runner
WorkingDirectory=/home/runner
Restart=on-failure
TimeoutSec=0
RestartSec=10
[Install]
WantedBy=multi-user.target

core/runner.go (new file, 17 lines)

@ -0,0 +1,17 @@
package core
const (
UUIDHeader = "x-runner-uuid"
TokenHeader = "x-runner-token"
)
// Runner struct
type Runner struct {
ID int64 `json:"id"`
UUID string `json:"uuid"`
Name string `json:"name"`
Token string `json:"token"`
Address string `json:"address"`
Insecure string `json:"insecure"`
Labels []string `json:"labels"`
}

engine/docker.go (new file, 37 lines)

@ -0,0 +1,37 @@
package engine
import (
"context"
"github.com/docker/docker/client"
)
type Docker struct {
client client.APIClient
hidePull bool
}
func New(opts ...Option) (*Docker, error) {
cli, err := client.NewClientWithOpts(client.FromEnv)
if err != nil {
return nil, err
}
srv := &Docker{
client: cli,
}
// Loop through each option
for _, opt := range opts {
// Call each option, passing the instantiated Docker engine
opt.Apply(srv)
}
return srv, nil
}
// Ping pings the Docker daemon.
func (e *Docker) Ping(ctx context.Context) error {
_, err := e.client.Ping(ctx)
return err
}

engine/engine.go (new file, 43 lines)

@ -0,0 +1,43 @@
package engine
import (
"context"
"fmt"
"time"
log "github.com/sirupsen/logrus"
)
// Start pings the docker daemon, retrying until it responds or the retry limit is reached.
func Start(ctx context.Context) error {
engine, err := New()
if err != nil {
return err
}
count := 0
for {
err := engine.Ping(ctx)
if err == context.Canceled {
break
}
select {
case <-ctx.Done():
return ctx.Err()
default:
}
if err != nil {
log.WithError(err).
Errorln("cannot ping the docker daemon")
count++
if count == 5 {
return fmt.Errorf("failed to connect to the docker daemon after %d attempts", count)
}
time.Sleep(time.Second)
} else {
log.Infoln("successfully pinged the docker daemon")
break
}
}
return nil
}

engine/options.go (new file, 30 lines)

@ -0,0 +1,30 @@
package engine
import "github.com/docker/docker/client"
// An Option configures a Docker engine.
type Option interface {
Apply(*Docker)
}
// OptionFunc is a function that configures a value.
type OptionFunc func(*Docker)
// Apply calls f(option)
func (f OptionFunc) Apply(docker *Docker) {
f(docker)
}
// WithClient sets a custom Docker client.
func WithClient(c client.APIClient) Option {
return OptionFunc(func(q *Docker) {
q.client = c
})
}
// WithHidePull hides pull events.
func WithHidePull(v bool) Option {
return OptionFunc(func(q *Docker) {
q.hidePull = v
})
}


@ -1,10 +0,0 @@
This directory contains a collection of usage and deployment examples.
Workflow examples can be found [in the documentation](https://forgejo.org/docs/next/user/actions/)
and in the [sources of the setup-forgejo](https://code.forgejo.org/actions/setup-forgejo/src/branch/main/testdata) action.
| Section | Description |
|-----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [`docker`](docker) | using the host docker server by mounting the socket |
| [`docker-compose`](docker-compose) | all in one docker-compose with the Forgejo server, the runner and docker in docker |
| [`kubernetes`](kubernetes) | a sample deployment for the Forgejo runner |


@ -1,113 +0,0 @@
## Docker compose with docker-in-docker
The `compose-forgejo-and-runner.yml` compose file runs a Forgejo
instance and registers a `Forgejo runner`. A docker server is also
launched within a container (using
[dind](https://hub.docker.com/_/docker/tags?name=dind)) and will be
used by the `Forgejo runner` to execute the workflows.
### Quick start
```sh
rm -fr /srv/runner-data /srv/forgejo-data
secret=$(openssl rand -hex 20)
sed -i -e "s/{SHARED_SECRET}/$secret/" compose-forgejo-and-runner.yml
docker compose -f compose-forgejo-and-runner.yml up -d
```
Visit http://0.0.0.0:8080/admin/actions/runners with login `root` and password `{ROOT_PASSWORD}` and verify that the runner is registered with the label `docker`.
> NOTE: the `Your ROOT_URL in app.ini is "http://localhost:3000/", it's unlikely matching the site you are visiting.` message is a warning that can be ignored in the context of this example.
```sh
docker compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml up demo-workflow
```
Visit http://0.0.0.0:8080/root/test/actions/runs/1 and see that the job ran.
### Running
Create a shared secret with:
```sh
openssl rand -hex 20
```
Replace all occurrences of {SHARED_SECRET} in
[compose-forgejo-and-runner.yml](compose-forgejo-and-runner.yml).
> **NOTE:** a token obtained from the Forgejo web interface cannot be used as a shared secret.
Replace {ROOT_PASSWORD} with a secure password in
[compose-forgejo-and-runner.yml](compose-forgejo-and-runner.yml).
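For instance, both placeholders can be substituted in place before bringing the stack up (the generated secret and the password value are illustrative):
```sh
secret=$(openssl rand -hex 20)
sed -i -e "s/{SHARED_SECRET}/$secret/" compose-forgejo-and-runner.yml
sed -i -e "s/{ROOT_PASSWORD}/change-me-to-a-secure-password/" compose-forgejo-and-runner.yml
```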
```sh
docker compose -f compose-forgejo-and-runner.yml up
Creating docker-compose_docker-in-docker_1 ... done
Creating docker-compose_forgejo_1 ... done
Creating docker-compose_runner-register_1 ... done
...
docker-in-docker_1 | time="2023-08-24T10:22:15.023338461Z" level=warning msg="WARNING: API is accessible on http://0.0.0.0:2376
...
forgejo_1 | 2023/08/24 10:22:14 ...s/graceful/server.go:75:func1() [D] Starting server on tcp:0.0.0.0:3000 (PID: 19)
...
runner-daemon_1 | time="2023-08-24T10:22:16Z" level=info msg="Starting runner daemon"
```
### Manual testing
To login the Forgejo instance:
* URL: http://0.0.0.0:8080
* user: `root`
* password: `{ROOT_PASSWORD}`
`Forgejo Actions` is enabled by default when creating a repository.
## Tests workflow
The `compose-demo-workflow.yml` compose file runs two demo workflows:
* one to verify the `Forgejo runner` can pick up a task from the Forgejo instance
and run it to completion.
* one to verify docker can be run inside the `Forgejo runner` container.
A new repository is created in root/test with the following workflows:
#### `.forgejo/workflows/demo.yml`:
```yaml
on: [push]
jobs:
test:
runs-on: docker
steps:
- run: echo All Good
```
#### `.forgejo/workflows/demo_docker.yml`
```yaml
on: [push]
jobs:
test_docker:
runs-on: ubuntu-22.04
steps:
- run: docker info
```
A wait loop polls the status of the check associated with the
commit in Forgejo until it reports "success", asserting that the workflow ran.
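A minimal sketch of that check, assuming the repository is `root/test` and the Forgejo instance is reachable at `http://forgejo:3000` as in the compose files:
```sh
sha=$(git rev-parse HEAD)
# The combined commit status becomes "success" once the workflow has completed.
curl -sS http://forgejo:3000/api/v1/repos/root/test/commits/$sha/status | jq --raw-output .state
```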
### Running
```sh
$ docker-compose -f compose-forgejo-and-runner.yml -f compose-demo-workflow.yml up demo-workflow
...
demo-workflow_1 | To http://forgejo:3000/root/test
demo-workflow_1 | + 5ce134e...261cc79 main -> main (forced update)
demo-workflow_1 | branch 'main' set up to track 'http://root:admin1234@forgejo:3000/root/test/main'.
...
demo-workflow_1 | running
...
```


@ -1,35 +0,0 @@
# Copyright 2024 The Forgejo Authors.
# SPDX-License-Identifier: MIT
services:
demo-workflow:
image: code.forgejo.org/oci/alpine:3.19
links:
- forgejo
command: >-
sh -ec '
apk add --quiet git curl jq ;
mkdir -p /srv/demo ;
cd /srv/demo ;
git init --initial-branch=main ;
mkdir -p .forgejo/workflows ;
echo "{ on: [push], jobs: { test: { runs-on: docker, steps: [ {uses: actions/checkout@v4}, { run: echo All Good } ] } } }" > .forgejo/workflows/demo.yml ;
echo "{ on: [push], jobs: { test_docker: { runs-on: ubuntu-22.04, steps: [ { run: docker info } ] } } }" > .forgejo/workflows/demo_docker.yml ;
git add . ;
git config user.email root@example.com ;
git config user.name username ;
git commit -m demo ;
while : ; do
git push --set-upstream --force http://root:{ROOT_PASSWORD}@forgejo:3000/root/test main && break ;
sleep 5 ;
done ;
sha=`git rev-parse HEAD` ;
for delay in 1 1 1 1 2 5 5 10 10 10 15 30 30 30 30 30 30 30 ; do
curl -sS -f http://forgejo:3000/api/v1/repos/root/test/commits/$$sha/status | jq --raw-output .state | tee status ;
if grep success status ; then echo DEMO WORKFLOW SUCCESS && break ; fi ;
if grep failure status ; then echo DEMO WORKFLOW FAILURE && break ; fi ;
sleep $$delay ;
done ;
grep success status || echo DEMO WORKFLOW FAILURE
'


@ -1,93 +0,0 @@
# Copyright 2024 The Forgejo Authors.
# SPDX-License-Identifier: MIT
#
# Create a secret with:
#
# openssl rand -hex 20
#
# Replace all occurrences of {SHARED_SECRET} below with the output.
#
# NOTE: a token obtained from the Forgejo web interface cannot be used
# as a shared secret.
#
# Replace {ROOT_PASSWORD} with a secure password
#
volumes:
docker_certs:
services:
docker-in-docker:
image: code.forgejo.org/oci/docker:dind
hostname: docker # Must set hostname as TLS certificates are only valid for docker or localhost
privileged: true
environment:
DOCKER_TLS_CERTDIR: /certs
DOCKER_HOST: docker-in-docker
volumes:
- docker_certs:/certs
forgejo:
image: codeberg.org/forgejo/forgejo:1.21
command: >-
bash -c '
/bin/s6-svscan /etc/s6 &
sleep 10 ;
su -c "forgejo forgejo-cli actions register --secret {SHARED_SECRET}" git ;
su -c "forgejo admin user create --admin --username root --password {ROOT_PASSWORD} --email root@example.com" git ;
sleep infinity
'
environment:
FORGEJO__security__INSTALL_LOCK: "true"
FORGEJO__log__LEVEL: "debug"
FORGEJO__repository__ENABLE_PUSH_CREATE_USER: "true"
FORGEJO__repository__DEFAULT_PUSH_CREATE_PRIVATE: "false"
FORGEJO__repository__DEFAULT_REPO_UNITS: "repo.code,repo.actions"
volumes:
- /srv/forgejo-data:/data
ports:
- 8080:3000
runner-register:
image: code.forgejo.org/forgejo/runner:3.4.1
links:
- docker-in-docker
- forgejo
environment:
DOCKER_HOST: tcp://docker-in-docker:2376
volumes:
- /srv/runner-data:/data
user: 0:0
command: >-
bash -ec '
while : ; do
forgejo-runner create-runner-file --connect --instance http://forgejo:3000 --name runner --secret {SHARED_SECRET} && break ;
sleep 1 ;
done ;
sed -i -e "s|\"labels\": null|\"labels\": [\"docker:docker://code.forgejo.org/oci/node:20-bookworm\", \"ubuntu-22.04:docker://catthehacker/ubuntu:act-22.04\"]|" .runner ;
forgejo-runner generate-config > config.yml ;
sed -i -e "s|network: .*|network: host|" config.yml ;
sed -i -e "s|^ envs:$$| envs:\n DOCKER_HOST: tcp://docker:2376\n DOCKER_TLS_VERIFY: 1\n DOCKER_CERT_PATH: /certs/client|" config.yml ;
sed -i -e "s|^ options:| options: -v /certs/client:/certs/client|" config.yml ;
sed -i -e "s| valid_volumes: \[\]$$| valid_volumes:\n - /certs/client|" config.yml ;
chown -R 1000:1000 /data
'
runner-daemon:
image: code.forgejo.org/forgejo/runner:3.4.1
links:
- docker-in-docker
- forgejo
environment:
DOCKER_HOST: tcp://docker:2376
DOCKER_CERT_PATH: /certs/client
DOCKER_TLS_VERIFY: "1"
volumes:
- /srv/runner-data:/data
- docker_certs:/certs
command: >-
bash -c '
while : ; do test -w .runner && forgejo-runner --config config.yml daemon ; sleep 1 ; done
'


@ -1,12 +0,0 @@
The following assumes:
* a docker server runs on the host
* the docker group of the host is GID 133
* a `.runner` file exists in /tmp/data
* a `runner-config.yml` file exists in /tmp/data
```sh
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/data:/data --user 1000:133 --rm code.forgejo.org/forgejo/runner:3.0.0 forgejo-runner --config runner-config.yaml daemon
```
The workflows will run using the host docker server.
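The docker group GID varies between hosts; a sketch of looking it up (assuming the group is named `docker`) and passing it to the container:
```sh
DOCKER_GID=$(getent group docker | cut -d: -f3)
docker run -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/data:/data \
  --user 1000:$DOCKER_GID --rm code.forgejo.org/forgejo/runner:3.0.0 \
  forgejo-runner --config runner-config.yaml daemon
```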


@ -1,7 +0,0 @@
## Kubernetes Docker in Docker Deployment
Registers Kubernetes pod runners using [offline registration](https://forgejo.org/docs/v1.21/admin/actions/#offline-registration), allowing runners to be scaled as needed.
NOTE: Docker in Docker (dind) requires elevated privileges on Kubernetes. The current way to achieve this is to set the pod `SecurityContext` to `privileged`. Keep in mind that this is a security risk that could allow a malicious application to break out of the container context.
[`dind-docker.yaml`](dind-docker.yaml) creates a deployment and secret for Kubernetes to act as a runner. The Docker credentials are re-generated each time the pod connects and do not need to be persisted.
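A minimal deployment sketch, assuming `kubectl` points at the target cluster and the offline registration token has already been generated on the Forgejo instance:
```sh
# Put the token into the Secret defined in dind-docker.yaml (or create it
# separately as noted in the file), then deploy and watch the runner pods.
kubectl apply -f dind-docker.yaml
kubectl get pods -l app=forgejo-runner
```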


@ -1,87 +0,0 @@
# Secret data.
# You will need to retrieve this from the web UI, and your Forgejo instance must be running v1.21+
# Alternatively, create this with
# kubectl create secret generic runner-secret --from-literal=token=your_offline_token_here
apiVersion: v1
stringData:
token: your_offline_secret_here
kind: Secret
metadata:
name: runner-secret
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: forgejo-runner
name: forgejo-runner
spec:
# Two replicas means that if one is busy, the other can pick up jobs.
replicas: 2
selector:
matchLabels:
app: forgejo-runner
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: forgejo-runner
spec:
restartPolicy: Always
volumes:
- name: docker-certs
emptyDir: {}
- name: runner-data
emptyDir: {}
# Initialise our configuration file using offline registration
# https://forgejo.org/docs/v1.21/admin/actions/#offline-registration
initContainers:
- name: runner-register
image: code.forgejo.org/forgejo/runner:3.2.0
command: ["forgejo-runner", "register", "--no-interactive", "--token", $(RUNNER_SECRET), "--name", $(RUNNER_NAME), "--instance", $(FORGEJO_INSTANCE_URL)]
env:
- name: RUNNER_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: RUNNER_SECRET
valueFrom:
secretKeyRef:
name: runner-secret
key: token
- name: FORGEJO_INSTANCE_URL
value: http://forgejo-http.forgejo.svc.cluster.local:3000
resources:
limits:
cpu: "0.50"
memory: "64Mi"
volumeMounts:
- name: runner-data
mountPath: /data
containers:
- name: runner
image: code.forgejo.org/forgejo/runner:3.0.0
command: ["sh", "-c", "while ! nc -z localhost 2376 </dev/null; do echo 'waiting for docker daemon...'; sleep 5; done; forgejo-runner daemon"]
env:
- name: DOCKER_HOST
value: tcp://localhost:2376
- name: DOCKER_CERT_PATH
value: /certs/client
- name: DOCKER_TLS_VERIFY
value: "1"
volumeMounts:
- name: docker-certs
mountPath: /certs
- name: runner-data
mountPath: /data
- name: daemon
image: docker:23.0.6-dind
env:
- name: DOCKER_TLS_CERTDIR
value: /certs
securityContext:
privileged: true
volumeMounts:
- name: docker-certs
mountPath: /certs

go.mod (166 changed lines)

@ -1,105 +1,107 @@
module gitea.com/gitea/act_runner
module codeberg.org/forgejo/runner
go 1.21.13
toolchain go1.23.1
go 1.18
require (
code.gitea.io/actions-proto-go v0.4.0
code.gitea.io/gitea-vet v0.2.3
connectrpc.com/connect v1.17.0
github.com/avast/retry-go/v4 v4.6.0
github.com/docker/docker v25.0.6+incompatible
github.com/google/uuid v1.6.0
github.com/joho/godotenv v1.5.1
github.com/mattn/go-isatty v0.0.20
github.com/nektos/act v0.2.49
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.1
github.com/stretchr/testify v1.9.0
golang.org/x/term v0.24.0
golang.org/x/time v0.6.0
google.golang.org/protobuf v1.34.2
gopkg.in/yaml.v3 v3.0.1
gotest.tools/v3 v3.5.1
code.gitea.io/actions-proto-go v0.2.0
code.gitea.io/gitea-vet v0.2.3-0.20230113022436-2b1561217fa5
github.com/avast/retry-go/v4 v4.3.1
github.com/bufbuild/connect-go v1.3.1
github.com/docker/docker v20.10.21+incompatible
github.com/go-chi/chi/v5 v5.0.8
github.com/go-chi/render v1.0.2
github.com/joho/godotenv v1.4.0
github.com/kelseyhightower/envconfig v1.4.0
github.com/mattn/go-isatty v0.0.16
github.com/nektos/act v0.0.0
github.com/sirupsen/logrus v1.9.0
github.com/spf13/cobra v1.6.1
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211
google.golang.org/protobuf v1.28.1
modernc.org/sqlite v1.14.2
xorm.io/builder v0.3.11-0.20220531020008-1bd24a7dc978
xorm.io/xorm v1.3.2
)
require (
dario.cat/mergo v1.0.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 // indirect
github.com/Masterminds/semver v1.5.0 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20230828082145-3c4c8a2d2371 // indirect
github.com/cloudflare/circl v1.3.7 // indirect
github.com/containerd/containerd v1.7.13 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/creack/pty v1.1.21 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/cli v25.0.3+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.8.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/emirpasic/gods v1.18.1 // indirect
github.com/fatih/color v1.16.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
github.com/go-git/go-billy/v5 v5.5.0 // indirect
github.com/go-git/go-git/v5 v5.11.0 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/gobwas/glob v0.2.3 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.3 // indirect
github.com/ProtonMail/go-crypto v0.0.0-20220404123522-616f957b79ad // indirect
github.com/acomagu/bufpipe v1.0.3 // indirect
github.com/ajg/form v1.5.1 // indirect
github.com/containerd/cgroups v1.0.3 // indirect
github.com/containerd/containerd v1.6.6 // indirect
github.com/creack/pty v1.1.18 // indirect
github.com/docker/cli v20.10.21+incompatible // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/docker-credential-helpers v0.6.4 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/emirpasic/gods v1.12.0 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/go-git/gcfg v1.5.0 // indirect
github.com/go-git/go-billy/v5 v5.3.1 // indirect
github.com/go-git/go-git/v5 v5.4.2 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/goccy/go-json v0.8.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/google/uuid v1.3.0 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/julienschmidt/httprouter v1.3.0 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/kevinburke/ssh_config v1.2.0 // indirect
github.com/klauspost/compress v1.17.4 // indirect
github.com/mattn/go-colorable v0.1.13 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/moby/buildkit v0.13.2 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/user v0.1.0 // indirect
github.com/mattn/go-runewidth v0.0.13 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
github.com/mitchellh/mapstructure v1.1.2 // indirect
github.com/moby/buildkit v0.10.6 // indirect
github.com/moby/sys/mount v0.3.1 // indirect
github.com/moby/sys/mountinfo v0.6.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc5 // indirect
github.com/opencontainers/selinux v1.11.0 // indirect
github.com/pjbgf/sha1cd v0.3.0 // indirect
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
github.com/opencontainers/runc v1.1.2 // indirect
github.com/opencontainers/selinux v1.10.2 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/rhysd/actionlint v1.6.27 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/sergi/go-diff v1.3.1 // indirect
github.com/skeema/knownhosts v1.2.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0 // indirect
github.com/rhysd/actionlint v1.6.22 // indirect
github.com/rivo/uniseg v0.3.4 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/syndtr/goleveldb v1.0.0 // indirect
github.com/xanzy/ssh-agent v0.3.1 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
github.com/xeipuuv/gojsonschema v1.2.0 // indirect
go.etcd.io/bbolt v1.3.9 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
go.opentelemetry.io/otel v1.21.0 // indirect
go.opentelemetry.io/otel/metric v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.21.0 // indirect
golang.org/x/crypto v0.21.0 // indirect
golang.org/x/mod v0.13.0 // indirect
golang.org/x/net v0.23.0 // indirect
golang.org/x/sync v0.6.0 // indirect
golang.org/x/sys v0.25.0 // indirect
golang.org/x/tools v0.14.0 // indirect
github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f // indirect
go.opencensus.io v0.23.0 // indirect
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 // indirect
golang.org/x/mod v0.4.2 // indirect
golang.org/x/net v0.0.0-20220906165146-f3363e06e74c // indirect
golang.org/x/sys v0.0.0-20220818161305-2296e01440c6 // indirect
golang.org/x/tools v0.1.5 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
lukechampine.com/uint128 v1.1.1 // indirect
modernc.org/cc/v3 v3.35.18 // indirect
modernc.org/ccgo/v3 v3.12.82 // indirect
modernc.org/libc v1.11.87 // indirect
modernc.org/mathutil v1.4.1 // indirect
modernc.org/memory v1.0.5 // indirect
modernc.org/opt v0.1.1 // indirect
modernc.org/strutil v1.1.1 // indirect
modernc.org/token v1.0.0 // indirect
)
replace github.com/nektos/act => code.forgejo.org/forgejo/act v1.21.3
replace github.com/nektos/act => codeberg.org/forgejo/act v1.1.0

go.sum (1708 changed lines; file diff suppressed because it is too large)


@ -1,69 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"context"
"fmt"
"os"
"os/signal"
"gitea.com/gitea/act_runner/internal/pkg/config"
"github.com/nektos/act/pkg/artifactcache"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type cacheServerArgs struct {
Dir string
Host string
Port uint16
}
func runCacheServer(ctx context.Context, configFile *string, cacheArgs *cacheServerArgs) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
cfg, err := config.LoadDefault(*configFile)
if err != nil {
return fmt.Errorf("invalid configuration: %w", err)
}
initLogging(cfg)
var (
dir = cfg.Cache.Dir
host = cfg.Cache.Host
port = cfg.Cache.Port
)
// cacheArgs has higher priority
if cacheArgs.Dir != "" {
dir = cacheArgs.Dir
}
if cacheArgs.Host != "" {
host = cacheArgs.Host
}
if cacheArgs.Port != 0 {
port = cacheArgs.Port
}
cacheHandler, err := artifactcache.StartHandler(
dir,
host,
port,
log.StandardLogger().WithField("module", "cache_request"),
)
if err != nil {
return err
}
log.Infof("cache server is listening on %v", cacheHandler.ExternalURL())
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
<-c
return nil
}
}


@ -1,87 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"context"
"fmt"
"os"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/ver"
)
func Execute(ctx context.Context) {
// ./act_runner
rootCmd := &cobra.Command{
Use: "forgejo-runner [event name to run]\nIf no event name passed, will default to \"on: push\"",
Short: "Run Forgejo Actions locally by specifying the event name (e.g. `push`) or an action name directly.",
Args: cobra.MaximumNArgs(1),
Version: ver.Version(),
SilenceUsage: true,
}
configFile := ""
rootCmd.PersistentFlags().StringVarP(&configFile, "config", "c", "", "Config file path")
// ./act_runner register
var regArgs registerArgs
registerCmd := &cobra.Command{
Use: "register",
Short: "Register a runner to the server",
Args: cobra.MaximumNArgs(0),
RunE: runRegister(ctx, &regArgs, &configFile), // must use a pointer to regArgs
}
registerCmd.Flags().BoolVar(&regArgs.NoInteractive, "no-interactive", false, "Disable interactive mode")
registerCmd.Flags().StringVar(&regArgs.InstanceAddr, "instance", "", "Forgejo instance address")
registerCmd.Flags().StringVar(&regArgs.Token, "token", "", "Runner token")
registerCmd.Flags().StringVar(&regArgs.RunnerName, "name", "", "Runner name")
registerCmd.Flags().StringVar(&regArgs.Labels, "labels", "", "Runner tags, comma separated")
rootCmd.AddCommand(registerCmd)
rootCmd.AddCommand(createRunnerFileCmd(ctx, &configFile))
// ./act_runner daemon
daemonCmd := &cobra.Command{
Use: "daemon",
Short: "Run as a runner daemon",
Args: cobra.MaximumNArgs(1),
RunE: runDaemon(ctx, &configFile),
}
rootCmd.AddCommand(daemonCmd)
// ./act_runner exec
rootCmd.AddCommand(loadExecCmd(ctx))
// ./act_runner config
rootCmd.AddCommand(&cobra.Command{
Use: "generate-config",
Short: "Generate an example config file",
Args: cobra.MaximumNArgs(0),
Run: func(_ *cobra.Command, _ []string) {
fmt.Printf("%s", config.Example)
},
})
// ./act_runner cache-server
var cacheArgs cacheServerArgs
cacheCmd := &cobra.Command{
Use: "cache-server",
Short: "Start a cache server for the cache action",
Args: cobra.MaximumNArgs(0),
RunE: runCacheServer(ctx, &configFile, &cacheArgs),
}
cacheCmd.Flags().StringVarP(&cacheArgs.Dir, "dir", "d", "", "Cache directory")
cacheCmd.Flags().StringVarP(&cacheArgs.Host, "host", "s", "", "Host of the cache server")
cacheCmd.Flags().Uint16VarP(&cacheArgs.Port, "port", "p", 0, "Port of the cache server")
rootCmd.AddCommand(cacheCmd)
// hide completion command
rootCmd.CompletionOptions.HiddenDefaultCmd = true
if err := rootCmd.Execute(); err != nil {
os.Exit(1)
}
}


@ -1,164 +0,0 @@
// SPDX-License-Identifier: MIT
package cmd
import (
"context"
"encoding/hex"
"fmt"
"os"
pingv1 "code.gitea.io/actions-proto-go/ping/v1"
"connectrpc.com/connect"
gouuid "github.com/google/uuid"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/app/run"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/ver"
)
type createRunnerFileArgs struct {
Connect bool
InstanceAddr string
Secret string
Name string
}
func createRunnerFileCmd(ctx context.Context, configFile *string) *cobra.Command {
var argsVar createRunnerFileArgs
cmd := &cobra.Command{
Use: "create-runner-file",
Short: "Create a runner file using a shared secret used to pre-register the runner on the Forgejo instance",
Args: cobra.MaximumNArgs(0),
RunE: runCreateRunnerFile(ctx, &argsVar, configFile),
}
cmd.Flags().BoolVar(&argsVar.Connect, "connect", false, "tries to connect to the instance using the secret (Forgejo v1.21 instance or greater)")
cmd.Flags().StringVar(&argsVar.InstanceAddr, "instance", "", "Forgejo instance address")
cmd.MarkFlagRequired("instance")
cmd.Flags().StringVar(&argsVar.Secret, "secret", "", "secret shared with the Forgejo instance via forgejo-cli actions register")
cmd.MarkFlagRequired("secret")
cmd.Flags().StringVar(&argsVar.Name, "name", "", "Runner name")
return cmd
}
// must be exactly the same as forgejo/models/actions/forgejo.go
func uuidFromSecret(secret string) (string, error) {
uuid, err := gouuid.FromBytes([]byte(secret[:16]))
if err != nil {
return "", fmt.Errorf("gouuid.FromBytes %v", err)
}
return uuid.String(), nil
}
// should be exactly the same as forgejo/cmd/forgejo/actions.go
func validateSecret(secret string) error {
secretLen := len(secret)
if secretLen != 40 {
return fmt.Errorf("the secret must be exactly 40 characters long, not %d", secretLen)
}
if _, err := hex.DecodeString(secret); err != nil {
return fmt.Errorf("the secret must be an hexadecimal string: %w", err)
}
return nil
}
func ping(cfg *config.Config, reg *config.Registration) error {
// initial http client
cli := client.New(
reg.Address,
cfg.Runner.Insecure,
"",
"",
ver.Version(),
)
_, err := cli.Ping(context.Background(), connect.NewRequest(&pingv1.PingRequest{
Data: reg.UUID,
}))
if err != nil {
return fmt.Errorf("ping %s failed %w", reg.Address, err)
}
return nil
}
func runCreateRunnerFile(ctx context.Context, args *createRunnerFileArgs, configFile *string) func(cmd *cobra.Command, args []string) error {
return func(*cobra.Command, []string) error {
log.SetLevel(log.DebugLevel)
log.Info("Creating runner file")
//
// Prepare the registration data
//
cfg, err := config.LoadDefault(*configFile)
if err != nil {
return fmt.Errorf("invalid configuration: %w", err)
}
if err := validateSecret(args.Secret); err != nil {
return err
}
uuid, err := uuidFromSecret(args.Secret)
if err != nil {
return err
}
name := args.Name
if name == "" {
name, _ = os.Hostname()
log.Infof("Runner name is empty, use hostname '%s'.", name)
}
reg := &config.Registration{
Name: name,
UUID: uuid,
Token: args.Secret,
Address: args.InstanceAddr,
}
//
// Verify the Forgejo instance is reachable
//
if err := ping(cfg, reg); err != nil {
return err
}
//
// Save the registration file
//
if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
return fmt.Errorf("failed to save runner config to %s: %w", cfg.Runner.File, err)
}
//
// Verify the secret works
//
if args.Connect {
cli := client.New(
reg.Address,
cfg.Runner.Insecure,
reg.UUID,
reg.Token,
ver.Version(),
)
runner := run.NewRunner(cfg, reg, cli)
resp, err := runner.Declare(ctx, cfg.Runner.Labels)
if err != nil && connect.CodeOf(err) == connect.CodeUnimplemented {
log.Warn("Cannot verify the connection because the Forgejo instance is lower than v1.21")
} else if err != nil {
log.WithError(err).Error("fail to invoke Declare")
return err
} else {
log.Infof("connection successful: %s, with version: %s, with labels: %v",
resp.Msg.Runner.Name, resp.Msg.Runner.Version, resp.Msg.Runner.Labels)
}
}
return nil
}
}


@ -1,118 +0,0 @@
// SPDX-License-Identifier: MIT
package cmd
import (
"bytes"
"context"
"os"
"testing"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"connectrpc.com/connect"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/ver"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"gopkg.in/yaml.v3"
)
func executeCommand(ctx context.Context, cmd *cobra.Command, args ...string) (string, error) {
buf := new(bytes.Buffer)
cmd.SetOut(buf)
cmd.SetErr(buf)
cmd.SetArgs(args)
err := cmd.ExecuteContext(ctx)
return buf.String(), err
}
func Test_createRunnerFileCmd(t *testing.T) {
configFile := "config.yml"
ctx := context.Background()
cmd := createRunnerFileCmd(ctx, &configFile)
output, err := executeCommand(ctx, cmd)
assert.ErrorContains(t, err, `required flag(s) "instance", "secret" not set`)
assert.Contains(t, output, "Usage:")
}
func Test_validateSecret(t *testing.T) {
assert.ErrorContains(t, validateSecret("abc"), "exactly 40 characters")
assert.ErrorContains(t, validateSecret("ZAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"), "must be an hexadecimal")
}
func Test_uuidFromSecret(t *testing.T) {
uuid, err := uuidFromSecret("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
assert.NoError(t, err)
assert.EqualValues(t, uuid, "41414141-4141-4141-4141-414141414141")
}
func Test_ping(t *testing.T) {
cfg := &config.Config{}
address := os.Getenv("FORGEJO_URL")
if address == "" {
address = "https://code.forgejo.org"
}
reg := &config.Registration{
Address: address,
UUID: "create-runner-file_test.go",
}
assert.NoError(t, ping(cfg, reg))
}
func Test_runCreateRunnerFile(t *testing.T) {
//
// Set the .runner file to be in a temporary directory
//
dir := t.TempDir()
configFile := dir + "/config.yml"
runnerFile := dir + "/.runner"
cfg, err := config.LoadDefault("")
cfg.Runner.File = runnerFile
yamlData, err := yaml.Marshal(cfg)
assert.NoError(t, err)
assert.NoError(t, os.WriteFile(configFile, yamlData, 0o666))
instance, has := os.LookupEnv("FORGEJO_URL")
if !has {
instance = "https://code.forgejo.org"
}
secret, has := os.LookupEnv("FORGEJO_RUNNER_SECRET")
assert.True(t, has)
name := "testrunner"
//
// Run create-runner-file
//
ctx := context.Background()
cmd := createRunnerFileCmd(ctx, &configFile)
output, err := executeCommand(ctx, cmd, "--connect", "--secret", secret, "--instance", instance, "--name", name)
assert.NoError(t, err)
assert.EqualValues(t, "", output)
//
// Read back the runner file and verify its content
//
reg, err := config.LoadRegistration(runnerFile)
assert.NoError(t, err)
assert.EqualValues(t, secret, reg.Token)
assert.EqualValues(t, instance, reg.Address)
//
// Verify that fetching a task succeeds and reports that there is
// no task for this runner
//
cli := client.New(
reg.Address,
cfg.Runner.Insecure,
reg.UUID,
reg.Token,
ver.Version(),
)
resp, err := cli.FetchTask(ctx, connect.NewRequest(&runnerv1.FetchTaskRequest{}))
assert.NoError(t, err)
assert.Nil(t, resp.Msg.Task)
}

View file

@ -1,208 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package cmd
import (
"context"
"fmt"
"os"
"path"
"path/filepath"
"runtime"
"strconv"
"strings"
"connectrpc.com/connect"
"github.com/mattn/go-isatty"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gitea.com/gitea/act_runner/internal/app/poll"
"gitea.com/gitea/act_runner/internal/app/run"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/envcheck"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/ver"
)
func runDaemon(ctx context.Context, configFile *string) func(cmd *cobra.Command, args []string) error {
return func(cmd *cobra.Command, args []string) error {
cfg, err := config.LoadDefault(*configFile)
if err != nil {
return fmt.Errorf("invalid configuration: %w", err)
}
initLogging(cfg)
log.Infoln("Starting runner daemon")
reg, err := config.LoadRegistration(cfg.Runner.File)
if os.IsNotExist(err) {
log.Error("registration file not found, please register the runner first")
return err
} else if err != nil {
return fmt.Errorf("failed to load registration file: %w", err)
}
cfg.Tune(reg.Address)
lbls := reg.Labels
if len(cfg.Runner.Labels) > 0 {
lbls = cfg.Runner.Labels
}
ls := labels.Labels{}
for _, l := range lbls {
label, err := labels.Parse(l)
if err != nil {
log.WithError(err).Warnf("ignored invalid label %q", l)
continue
}
ls = append(ls, label)
}
if len(ls) == 0 {
log.Warn("no labels configured, runner may not be able to pick up jobs")
}
if ls.RequireDocker() {
dockerSocketPath, err := getDockerSocketPath(cfg.Container.DockerHost)
if err != nil {
return err
}
if err := envcheck.CheckIfDockerRunning(ctx, dockerSocketPath); err != nil {
return err
}
// if dockerSocketPath passes the check, override DOCKER_HOST with dockerSocketPath
os.Setenv("DOCKER_HOST", dockerSocketPath)
// an empty cfg.Container.DockerHost means act_runner needs to find an available docker host automatically
// and assign the path to cfg.Container.DockerHost
if cfg.Container.DockerHost == "" {
cfg.Container.DockerHost = dockerSocketPath
}
// check the scheme: if it is neither npipe nor unix,
// set cfg.Container.DockerHost to "-" because the socket can't be mounted into the job container
if protoIndex := strings.Index(cfg.Container.DockerHost, "://"); protoIndex != -1 {
scheme := cfg.Container.DockerHost[:protoIndex]
if !strings.EqualFold(scheme, "npipe") && !strings.EqualFold(scheme, "unix") {
cfg.Container.DockerHost = "-"
}
}
}
cli := client.New(
reg.Address,
cfg.Runner.Insecure,
reg.UUID,
reg.Token,
ver.Version(),
)
runner := run.NewRunner(cfg, reg, cli)
// declare the labels of the runner before fetching tasks
resp, err := runner.Declare(ctx, ls.Names())
if err != nil && connect.CodeOf(err) == connect.CodeUnimplemented {
// the Forgejo instance is an older version, skip the declare step
log.Warn("The Forgejo instance is an older version, skipping declaration of the labels and version.")
} else if err != nil {
log.WithError(err).Error("fail to invoke Declare")
return err
} else {
log.Infof("runner: %s, with version: %s, with labels: %v, declared successfully",
resp.Msg.Runner.Name, resp.Msg.Runner.Version, resp.Msg.Runner.Labels)
// if declared successfully, override the labels in the .runner file with valid labels in the config file (if specified)
runner.Update(ctx, ls)
reg.Labels = ls.ToStrings()
if err := config.SaveRegistration(cfg.Runner.File, reg); err != nil {
return fmt.Errorf("failed to save runner config: %w", err)
}
}
poller := poll.New(cfg, cli, runner)
go poller.Poll()
<-ctx.Done()
log.Infof("runner: %s shutdown initiated, waiting [runner].shutdown_timeout=%s for running jobs to complete before shutting down", resp.Msg.Runner.Name, cfg.Runner.ShutdownTimeout)
ctx, cancel := context.WithTimeout(context.Background(), cfg.Runner.ShutdownTimeout)
defer cancel()
err = poller.Shutdown(ctx)
if err != nil {
log.Warnf("runner: %s cancelled in progress jobs during shutdown", resp.Msg.Runner.Name)
}
return nil
}
}
// initLogging sets up the global logrus logger.
func initLogging(cfg *config.Config) {
isTerm := isatty.IsTerminal(os.Stdout.Fd())
format := &log.TextFormatter{
DisableColors: !isTerm,
FullTimestamp: true,
}
log.SetFormatter(format)
if l := cfg.Log.Level; l != "" {
level, err := log.ParseLevel(l)
if err != nil {
log.WithError(err).
Errorf("invalid log level: %q", l)
}
// debug level
if level == log.DebugLevel {
log.SetReportCaller(true)
format.CallerPrettyfier = func(f *runtime.Frame) (string, string) {
// get function name
s := strings.Split(f.Function, ".")
funcname := "[" + s[len(s)-1] + "]"
// get file name and line number
_, filename := path.Split(f.File)
filename = "[" + filename + ":" + strconv.Itoa(f.Line) + "]"
return funcname, filename
}
log.SetFormatter(format)
}
if log.GetLevel() != level {
log.Infof("log level changed to %v", level)
log.SetLevel(level)
}
}
}
var commonSocketPaths = []string{
"/var/run/docker.sock",
"/run/podman/podman.sock",
"$HOME/.colima/docker.sock",
"$XDG_RUNTIME_DIR/docker.sock",
"$XDG_RUNTIME_DIR/podman/podman.sock",
`\\.\pipe\docker_engine`,
"$HOME/.docker/run/docker.sock",
}
func getDockerSocketPath(configDockerHost string) (string, error) {
// a `-` means don't mount the docker socket to job containers
if configDockerHost != "" && configDockerHost != "-" {
return configDockerHost, nil
}
socket, found := os.LookupEnv("DOCKER_HOST")
if found {
return socket, nil
}
for _, p := range commonSocketPaths {
if _, err := os.Lstat(os.ExpandEnv(p)); err == nil {
if strings.HasPrefix(p, `\\.\`) {
return "npipe://" + filepath.ToSlash(os.ExpandEnv(p)), nil
}
return "unix://" + filepath.ToSlash(os.ExpandEnv(p)), nil
}
}
return "", fmt.Errorf("daemon Docker Engine socket not found and docker_host config was invalid")
}
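// A sketch of the precedence implemented by getDockerSocketPath above; the
// actual results depend on the environment and on which sockets exist on the
// host. The values are illustrative only and this helper is not part of the change.
func exampleDockerHostResolution() {
	for _, cfgValue := range []string{"tcp://10.0.0.5:2376", "", "-"} {
		// an explicit docker_host is returned unchanged; "" and "-" fall back to
		// $DOCKER_HOST and then to the common socket paths listed above
		host, err := getDockerSocketPath(cfgValue)
		fmt.Printf("docker_host=%q -> %q (err: %v)\n", cfgValue, host, err)
	}
}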

View file

@ -1,167 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package poll
import (
"context"
"errors"
"fmt"
"sync"
"sync/atomic"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"connectrpc.com/connect"
log "github.com/sirupsen/logrus"
"golang.org/x/time/rate"
"gitea.com/gitea/act_runner/internal/app/run"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
)
const PollerID = "PollerID"
type Poller interface {
Poll()
Shutdown(ctx context.Context) error
}
type poller struct {
client client.Client
runner run.RunnerInterface
cfg *config.Config
tasksVersion atomic.Int64 // tasksVersion stores the version of the last task fetched from the Gitea instance.
pollingCtx context.Context
shutdownPolling context.CancelFunc
jobsCtx context.Context
shutdownJobs context.CancelFunc
done chan any
}
func New(cfg *config.Config, client client.Client, runner run.RunnerInterface) Poller {
return (&poller{}).init(cfg, client, runner)
}
func (p *poller) init(cfg *config.Config, client client.Client, runner run.RunnerInterface) Poller {
pollingCtx, shutdownPolling := context.WithCancel(context.Background())
jobsCtx, shutdownJobs := context.WithCancel(context.Background())
done := make(chan any)
p.client = client
p.runner = runner
p.cfg = cfg
p.pollingCtx = pollingCtx
p.shutdownPolling = shutdownPolling
p.jobsCtx = jobsCtx
p.shutdownJobs = shutdownJobs
p.done = done
return p
}
func (p *poller) Poll() {
limiter := rate.NewLimiter(rate.Every(p.cfg.Runner.FetchInterval), 1)
wg := &sync.WaitGroup{}
for i := 0; i < p.cfg.Runner.Capacity; i++ {
wg.Add(1)
go p.poll(i, wg, limiter)
}
wg.Wait()
// signal the poller is finished
close(p.done)
}
func (p *poller) Shutdown(ctx context.Context) error {
p.shutdownPolling()
select {
case <-p.done:
log.Trace("all jobs are complete")
return nil
case <-ctx.Done():
log.Trace("forcing the jobs to shutdown")
p.shutdownJobs()
<-p.done
log.Trace("all jobs have been shutdown")
return ctx.Err()
}
}
func (p *poller) poll(id int, wg *sync.WaitGroup, limiter *rate.Limiter) {
log.Infof("[poller %d] launched", id)
defer wg.Done()
for {
if err := limiter.Wait(p.pollingCtx); err != nil {
log.Infof("[poller %d] shutdown", id)
return
}
task, ok := p.fetchTask(p.pollingCtx)
if !ok {
continue
}
p.runTaskWithRecover(p.jobsCtx, task)
}
}
func (p *poller) runTaskWithRecover(ctx context.Context, task *runnerv1.Task) {
defer func() {
if r := recover(); r != nil {
err := fmt.Errorf("panic: %v", r)
log.WithError(err).Error("panic in runTaskWithRecover")
}
}()
if err := p.runner.Run(ctx, task); err != nil {
log.WithError(err).Error("failed to run task")
}
}
func (p *poller) fetchTask(ctx context.Context) (*runnerv1.Task, bool) {
reqCtx, cancel := context.WithTimeout(ctx, p.cfg.Runner.FetchTimeout)
defer cancel()
// Load the version value that was in the cache when the request was sent.
v := p.tasksVersion.Load()
resp, err := p.client.FetchTask(reqCtx, connect.NewRequest(&runnerv1.FetchTaskRequest{
TasksVersion: v,
}))
if errors.Is(err, context.DeadlineExceeded) {
log.Trace("deadline exceeded")
err = nil
}
if err != nil {
if errors.Is(err, context.Canceled) {
log.WithError(err).Debugf("shutdown, fetch task canceled")
} else {
log.WithError(err).Error("failed to fetch task")
}
return nil, false
}
if resp == nil || resp.Msg == nil {
return nil, false
}
if resp.Msg.TasksVersion > v {
p.tasksVersion.CompareAndSwap(v, resp.Msg.TasksVersion)
}
if resp.Msg.Task == nil {
return nil, false
}
// got a task, set `tasksVersion` to zero to force a db query in the next request.
p.tasksVersion.CompareAndSwap(resp.Msg.TasksVersion, 0)
return resp.Msg.Task, true
}

View file

@ -1,263 +0,0 @@
// Copyright The Forgejo Authors.
// SPDX-License-Identifier: MIT
package poll
import (
"context"
"fmt"
"testing"
"time"
"connectrpc.com/connect"
"code.gitea.io/actions-proto-go/ping/v1/pingv1connect"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"code.gitea.io/actions-proto-go/runner/v1/runnerv1connect"
"gitea.com/gitea/act_runner/internal/pkg/config"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
)
type mockPoller struct {
poller
}
func (o *mockPoller) Poll() {
o.poller.Poll()
}
type mockClient struct {
pingv1connect.PingServiceClient
runnerv1connect.RunnerServiceClient
sleep time.Duration
cancel bool
err error
noTask bool
}
func (o mockClient) Address() string {
return ""
}
func (o mockClient) Insecure() bool {
return true
}
func (o *mockClient) FetchTask(ctx context.Context, req *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error) {
if o.sleep > 0 {
select {
case <-ctx.Done():
log.Trace("fetch task done")
return nil, context.DeadlineExceeded
case <-time.After(o.sleep):
log.Trace("slept")
return nil, fmt.Errorf("unexpected")
}
}
if o.cancel {
return nil, context.Canceled
}
if o.err != nil {
return nil, o.err
}
task := &runnerv1.Task{}
if o.noTask {
task = nil
o.noTask = false
}
return connect.NewResponse(&runnerv1.FetchTaskResponse{
Task: task,
TasksVersion: int64(1),
}), nil
}
type mockRunner struct {
cfg *config.Runner
log chan string
panics bool
err error
}
func (o *mockRunner) Run(ctx context.Context, task *runnerv1.Task) error {
o.log <- "runner starts"
if o.panics {
log.Trace("panics")
o.log <- "runner panics"
o.panics = false
panic("whatever")
}
if o.err != nil {
log.Trace("error")
o.log <- "runner error"
err := o.err
o.err = nil
return err
}
for {
select {
case <-ctx.Done():
log.Trace("shutdown")
o.log <- "runner shutdown"
return nil
case <-time.After(o.cfg.Timeout):
log.Trace("after")
o.log <- "runner timeout"
return nil
}
}
}
func setTrace(t *testing.T) {
t.Helper()
log.SetReportCaller(true)
log.SetLevel(log.TraceLevel)
}
func TestPoller_New(t *testing.T) {
p := New(&config.Config{}, &mockClient{}, &mockRunner{})
assert.NotNil(t, p)
}
func TestPoller_Runner(t *testing.T) {
setTrace(t)
for _, testCase := range []struct {
name string
timeout time.Duration
noTask bool
panics bool
err error
expected string
contextTimeout time.Duration
}{
{
name: "Simple",
timeout: 10 * time.Second,
expected: "runner shutdown",
},
{
name: "Panics",
timeout: 10 * time.Second,
panics: true,
expected: "runner panics",
},
{
name: "Error",
timeout: 10 * time.Second,
err: fmt.Errorf("ERROR"),
expected: "runner error",
},
{
name: "PollTaskError",
timeout: 10 * time.Second,
noTask: true,
expected: "runner shutdown",
},
{
name: "ShutdownTimeout",
timeout: 1 * time.Second,
contextTimeout: 1 * time.Minute,
expected: "runner timeout",
},
} {
t.Run(testCase.name, func(t *testing.T) {
runnerLog := make(chan string, 3)
configRunner := config.Runner{
FetchInterval: 1,
Capacity: 1,
Timeout: testCase.timeout,
}
p := &mockPoller{}
p.init(
&config.Config{
Runner: configRunner,
},
&mockClient{
noTask: testCase.noTask,
},
&mockRunner{
cfg: &configRunner,
log: runnerLog,
panics: testCase.panics,
err: testCase.err,
})
go p.Poll()
assert.Equal(t, "runner starts", <-runnerLog)
var ctx context.Context
var cancel context.CancelFunc
if testCase.contextTimeout > 0 {
ctx, cancel = context.WithTimeout(context.Background(), testCase.contextTimeout)
defer cancel()
} else {
ctx, cancel = context.WithCancel(context.Background())
cancel()
}
p.Shutdown(ctx)
<-p.done
assert.Equal(t, testCase.expected, <-runnerLog)
})
}
}
func TestPoller_Fetch(t *testing.T) {
setTrace(t)
for _, testCase := range []struct {
name string
noTask bool
sleep time.Duration
err error
cancel bool
success bool
}{
{
name: "Success",
success: true,
},
{
name: "Timeout",
sleep: 100 * time.Millisecond,
},
{
name: "Canceled",
cancel: true,
},
{
name: "NoTask",
noTask: true,
},
{
name: "Error",
err: fmt.Errorf("random error"),
},
} {
t.Run(testCase.name, func(t *testing.T) {
configRunner := config.Runner{
FetchTimeout: 1 * time.Millisecond,
}
p := &mockPoller{}
p.init(
&config.Config{
Runner: configRunner,
},
&mockClient{
sleep: testCase.sleep,
cancel: testCase.cancel,
noTask: testCase.noTask,
err: testCase.err,
},
&mockRunner{},
)
task, ok := p.fetchTask(context.Background())
if testCase.success {
assert.True(t, ok)
assert.NotNil(t, task)
} else {
assert.False(t, ok)
assert.Nil(t, task)
}
})
}
}

View file

@ -1,260 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package run
import (
"context"
"encoding/json"
"fmt"
"path/filepath"
"strings"
"sync"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"connectrpc.com/connect"
"github.com/docker/docker/api/types/container"
"github.com/nektos/act/pkg/artifactcache"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/model"
"github.com/nektos/act/pkg/runner"
log "github.com/sirupsen/logrus"
"gitea.com/gitea/act_runner/internal/pkg/client"
"gitea.com/gitea/act_runner/internal/pkg/config"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"gitea.com/gitea/act_runner/internal/pkg/report"
"gitea.com/gitea/act_runner/internal/pkg/ver"
)
// Runner runs the pipeline.
type Runner struct {
name string
cfg *config.Config
client client.Client
labels labels.Labels
envs map[string]string
runningTasks sync.Map
}
type RunnerInterface interface {
Run(ctx context.Context, task *runnerv1.Task) error
}
func NewRunner(cfg *config.Config, reg *config.Registration, cli client.Client) *Runner {
ls := labels.Labels{}
for _, v := range reg.Labels {
if l, err := labels.Parse(v); err == nil {
ls = append(ls, l)
}
}
if cfg.Runner.Envs == nil {
cfg.Runner.Envs = make(map[string]string, 10)
}
cfg.Runner.Envs["GITHUB_SERVER_URL"] = reg.Address
envs := make(map[string]string, len(cfg.Runner.Envs))
for k, v := range cfg.Runner.Envs {
envs[k] = v
}
if cfg.Cache.Enabled == nil || *cfg.Cache.Enabled {
if cfg.Cache.ExternalServer != "" {
envs["ACTIONS_CACHE_URL"] = cfg.Cache.ExternalServer
} else {
cacheHandler, err := artifactcache.StartHandler(
cfg.Cache.Dir,
cfg.Cache.Host,
cfg.Cache.Port,
log.StandardLogger().WithField("module", "cache_request"),
)
if err != nil {
log.Errorf("cannot init cache server, it will be disabled: %v", err)
// go on
} else {
envs["ACTIONS_CACHE_URL"] = cacheHandler.ExternalURL() + "/"
}
}
}
// set the Gitea artifact API endpoint
artifactGiteaAPI := strings.TrimSuffix(cli.Address(), "/") + "/api/actions_pipeline/"
envs["ACTIONS_RUNTIME_URL"] = artifactGiteaAPI
envs["ACTIONS_RESULTS_URL"] = strings.TrimSuffix(cli.Address(), "/")
// Set specific environments to distinguish between Gitea and GitHub
envs["GITEA_ACTIONS"] = "true"
envs["GITEA_ACTIONS_RUNNER_VERSION"] = ver.Version()
return &Runner{
name: reg.Name,
cfg: cfg,
client: cli,
labels: ls,
envs: envs,
}
}
func (r *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
if _, ok := r.runningTasks.Load(task.Id); ok {
return fmt.Errorf("task %d is already running", task.Id)
}
r.runningTasks.Store(task.Id, struct{}{})
defer r.runningTasks.Delete(task.Id)
ctx, cancel := context.WithTimeout(ctx, r.cfg.Runner.Timeout)
defer cancel()
reporter := report.NewReporter(ctx, cancel, r.client, task, r.cfg.Runner.ReportInterval)
var runErr error
defer func() {
lastWords := ""
if runErr != nil {
lastWords = runErr.Error()
}
_ = reporter.Close(lastWords)
}()
reporter.RunDaemon()
runErr = r.run(ctx, task, reporter)
return nil
}
func (r *Runner) run(ctx context.Context, task *runnerv1.Task, reporter *report.Reporter) (err error) {
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("panic: %v", r)
}
}()
reporter.Logf("%s(version:%s) received task %v of job %v, be triggered by event: %s", r.name, ver.Version(), task.Id, task.Context.Fields["job"].GetStringValue(), task.Context.Fields["event_name"].GetStringValue())
workflow, jobID, err := generateWorkflow(task)
if err != nil {
return err
}
plan, err := model.CombineWorkflowPlanner(workflow).PlanJob(jobID)
if err != nil {
return err
}
job := workflow.GetJob(jobID)
reporter.ResetSteps(len(job.Steps))
taskContext := task.Context.Fields
log.Infof("task %v repo is %v %v %v", task.Id, taskContext["repository"].GetStringValue(),
taskContext["gitea_default_actions_url"].GetStringValue(),
r.client.Address())
preset := &model.GithubContext{
Event: taskContext["event"].GetStructValue().AsMap(),
RunID: taskContext["run_id"].GetStringValue(),
RunNumber: taskContext["run_number"].GetStringValue(),
Actor: taskContext["actor"].GetStringValue(),
Repository: taskContext["repository"].GetStringValue(),
EventName: taskContext["event_name"].GetStringValue(),
Sha: taskContext["sha"].GetStringValue(),
Ref: taskContext["ref"].GetStringValue(),
RefName: taskContext["ref_name"].GetStringValue(),
RefType: taskContext["ref_type"].GetStringValue(),
HeadRef: taskContext["head_ref"].GetStringValue(),
BaseRef: taskContext["base_ref"].GetStringValue(),
Token: taskContext["token"].GetStringValue(),
RepositoryOwner: taskContext["repository_owner"].GetStringValue(),
RetentionDays: taskContext["retention_days"].GetStringValue(),
}
if t := task.Secrets["GITEA_TOKEN"]; t != "" {
preset.Token = t
} else if t := task.Secrets["GITHUB_TOKEN"]; t != "" {
preset.Token = t
}
giteaRuntimeToken := taskContext["gitea_runtime_token"].GetStringValue()
if giteaRuntimeToken == "" {
// use the task token as the actions API token for older Gitea server versions
giteaRuntimeToken = preset.Token
}
r.envs["ACTIONS_RUNTIME_TOKEN"] = giteaRuntimeToken
eventJSON, err := json.Marshal(preset.Event)
if err != nil {
return err
}
maxLifetime := 3 * time.Hour
if deadline, ok := ctx.Deadline(); ok {
maxLifetime = time.Until(deadline)
}
var inputs map[string]string
if preset.EventName == "workflow_dispatch" {
if inputsRaw, ok := preset.Event["inputs"]; ok {
inputs, _ = inputsRaw.(map[string]string)
}
}
runnerConfig := &runner.Config{
// On Linux, Workdir will be like "/<parent_directory>/<owner>/<repo>"
// On Windows, Workdir will be like "\<parent_directory>\<owner>\<repo>"
Workdir: filepath.FromSlash(filepath.Clean(fmt.Sprintf("/%s/%s", r.cfg.Container.WorkdirParent, preset.Repository))),
BindWorkdir: false,
ActionCacheDir: filepath.FromSlash(r.cfg.Host.WorkdirParent),
ReuseContainers: false,
ForcePull: r.cfg.Container.ForcePull,
ForceRebuild: false,
LogOutput: true,
JSONLogger: false,
Env: r.envs,
Secrets: task.Secrets,
GitHubInstance: strings.TrimSuffix(r.client.Address(), "/"),
AutoRemove: true,
NoSkipCheckout: true,
PresetGitHubContext: preset,
EventJSON: string(eventJSON),
ContainerNamePrefix: fmt.Sprintf("GITEA-ACTIONS-TASK-%d", task.Id),
ContainerMaxLifetime: maxLifetime,
ContainerNetworkMode: container.NetworkMode(r.cfg.Container.Network),
ContainerNetworkEnableIPv6: r.cfg.Container.EnableIPv6,
ContainerOptions: r.cfg.Container.Options,
ContainerDaemonSocket: r.cfg.Container.DockerHost,
Privileged: r.cfg.Container.Privileged,
DefaultActionInstance: taskContext["gitea_default_actions_url"].GetStringValue(),
PlatformPicker: r.labels.PickPlatform,
Vars: task.Vars,
ValidVolumes: r.cfg.Container.ValidVolumes,
InsecureSkipTLS: r.cfg.Runner.Insecure,
Inputs: inputs,
}
rr, err := runner.New(runnerConfig)
if err != nil {
return err
}
executor := rr.NewPlanExecutor(plan)
reporter.Logf("workflow prepared")
// add logger recorders
ctx = common.WithLoggerHook(ctx, reporter)
execErr := executor(ctx)
reporter.SetOutputs(job.Outputs)
return execErr
}
func (r *Runner) Declare(ctx context.Context, labels []string) (*connect.Response[runnerv1.DeclareResponse], error) {
return r.client.Declare(ctx, connect.NewRequest(&runnerv1.DeclareRequest{
Version: ver.Version(),
Labels: labels,
}))
}
func (r *Runner) Update(ctx context.Context, labels labels.Labels) {
r.labels = labels
}

View file

@ -1,37 +0,0 @@
package run
import (
"context"
"testing"
"gitea.com/gitea/act_runner/internal/pkg/labels"
"github.com/stretchr/testify/assert"
)
func TestLabelUpdate(t *testing.T) {
ctx := context.Background()
ls := labels.Labels{}
initialLabel, err := labels.Parse("testlabel:docker://alpine")
assert.NoError(t, err)
ls = append(ls, initialLabel)
newLs := labels.Labels{}
newLabel, err := labels.Parse("next label:host")
assert.NoError(t, err)
newLs = append(newLs, initialLabel)
newLs = append(newLs, newLabel)
runner := Runner{
labels: ls,
}
assert.Contains(t, runner.labels, initialLabel)
assert.NotContains(t, runner.labels, newLabel)
runner.Update(ctx, newLs)
assert.Contains(t, runner.labels, initialLabel)
assert.Contains(t, runner.labels, newLabel)
}

View file

@ -1,54 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package run
import (
"bytes"
"fmt"
"sort"
"strings"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"github.com/nektos/act/pkg/model"
"gopkg.in/yaml.v3"
)
func generateWorkflow(task *runnerv1.Task) (*model.Workflow, string, error) {
workflow, err := model.ReadWorkflow(bytes.NewReader(task.WorkflowPayload))
if err != nil {
return nil, "", err
}
jobIDs := workflow.GetJobIDs()
if len(jobIDs) != 1 {
return nil, "", fmt.Errorf("multiple jobs found: %v", jobIDs)
}
jobID := jobIDs[0]
needJobIDs := make([]string, 0, len(task.Needs))
for id, need := range task.Needs {
needJobIDs = append(needJobIDs, id)
needJob := &model.Job{
Outputs: need.Outputs,
Result: strings.ToLower(strings.TrimPrefix(need.Result.String(), "RESULT_")),
}
workflow.Jobs[id] = needJob
}
sort.Strings(needJobIDs)
rawNeeds := yaml.Node{
Kind: yaml.SequenceNode,
Content: make([]*yaml.Node, 0, len(needJobIDs)),
}
for _, id := range needJobIDs {
rawNeeds.Content = append(rawNeeds.Content, &yaml.Node{
Kind: yaml.ScalarNode,
Value: id,
})
}
workflow.Jobs[jobID].RawNeeds = rawNeeds
return workflow, jobID, nil
}

View file

@ -1,96 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package run
import (
"testing"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"github.com/nektos/act/pkg/model"
"github.com/stretchr/testify/require"
"gotest.tools/v3/assert"
)
func Test_generateWorkflow(t *testing.T) {
type args struct {
task *runnerv1.Task
}
tests := []struct {
name string
args args
assert func(t *testing.T, wf *model.Workflow, err error)
want1 string
wantErr bool
}{
{
name: "has needs",
args: args{
task: &runnerv1.Task{
WorkflowPayload: []byte(`
name: Build and deploy
on: push
jobs:
job9:
needs: build
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- run: ./deploy --build ${{ needs.job1.outputs.output1 }}
- run: ./deploy --build ${{ needs.job2.outputs.output2 }}
`),
Needs: map[string]*runnerv1.TaskNeed{
"job1": {
Outputs: map[string]string{
"output1": "output1 value",
},
Result: runnerv1.Result_RESULT_SUCCESS,
},
"job2": {
Outputs: map[string]string{
"output2": "output2 value",
},
Result: runnerv1.Result_RESULT_SUCCESS,
},
},
},
},
assert: func(t *testing.T, wf *model.Workflow, err error) {
assert.DeepEqual(t, wf.GetJob("job9").Needs(), []string{"job1", "job2"})
},
want1: "job9",
wantErr: false,
},
{
name: "valid YAML syntax in top level env but wrong value type",
args: args{
task: &runnerv1.Task{
WorkflowPayload: []byte(`
on: push
env:
value: {{ }}
`),
},
},
assert: func(t *testing.T, wf *model.Workflow, err error) {
require.Nil(t, wf)
assert.ErrorContains(t, err, "cannot unmarshal")
},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, got1, err := generateWorkflow(tt.args.task)
if tt.wantErr {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, got1, tt.want1)
}
tt.assert(t, got, err)
})
}
}

View file

@ -1,11 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package client
const (
UUIDHeader = "x-runner-uuid"
TokenHeader = "x-runner-token"
// Deprecated: could be removed after Gitea 1.20 is released
VersionHeader = "x-runner-version"
)

View file

@ -1,219 +0,0 @@
// Code generated by mockery v2.26.1. DO NOT EDIT.
package mocks
import (
context "context"
connect "connectrpc.com/connect"
mock "github.com/stretchr/testify/mock"
pingv1 "code.gitea.io/actions-proto-go/ping/v1"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
)
// Client is an autogenerated mock type for the Client type
type Client struct {
mock.Mock
}
// Address provides a mock function with given fields:
func (_m *Client) Address() string {
ret := _m.Called()
var r0 string
if rf, ok := ret.Get(0).(func() string); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(string)
}
return r0
}
// Declare provides a mock function with given fields: _a0, _a1
func (_m *Client) Declare(_a0 context.Context, _a1 *connect.Request[runnerv1.DeclareRequest]) (*connect.Response[runnerv1.DeclareResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[runnerv1.DeclareResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) (*connect.Response[runnerv1.DeclareResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) *connect.Response[runnerv1.DeclareResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[runnerv1.DeclareResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.DeclareRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// FetchTask provides a mock function with given fields: _a0, _a1
func (_m *Client) FetchTask(_a0 context.Context, _a1 *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[runnerv1.FetchTaskResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) (*connect.Response[runnerv1.FetchTaskResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) *connect.Response[runnerv1.FetchTaskResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[runnerv1.FetchTaskResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.FetchTaskRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Insecure provides a mock function with given fields:
func (_m *Client) Insecure() bool {
ret := _m.Called()
var r0 bool
if rf, ok := ret.Get(0).(func() bool); ok {
r0 = rf()
} else {
r0 = ret.Get(0).(bool)
}
return r0
}
// Ping provides a mock function with given fields: _a0, _a1
func (_m *Client) Ping(_a0 context.Context, _a1 *connect.Request[pingv1.PingRequest]) (*connect.Response[pingv1.PingResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[pingv1.PingResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[pingv1.PingRequest]) (*connect.Response[pingv1.PingResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[pingv1.PingRequest]) *connect.Response[pingv1.PingResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[pingv1.PingResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[pingv1.PingRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// Register provides a mock function with given fields: _a0, _a1
func (_m *Client) Register(_a0 context.Context, _a1 *connect.Request[runnerv1.RegisterRequest]) (*connect.Response[runnerv1.RegisterResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[runnerv1.RegisterResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) (*connect.Response[runnerv1.RegisterResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) *connect.Response[runnerv1.RegisterResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[runnerv1.RegisterResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.RegisterRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// UpdateLog provides a mock function with given fields: _a0, _a1
func (_m *Client) UpdateLog(_a0 context.Context, _a1 *connect.Request[runnerv1.UpdateLogRequest]) (*connect.Response[runnerv1.UpdateLogResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[runnerv1.UpdateLogResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) (*connect.Response[runnerv1.UpdateLogResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) *connect.Response[runnerv1.UpdateLogResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[runnerv1.UpdateLogResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.UpdateLogRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// UpdateTask provides a mock function with given fields: _a0, _a1
func (_m *Client) UpdateTask(_a0 context.Context, _a1 *connect.Request[runnerv1.UpdateTaskRequest]) (*connect.Response[runnerv1.UpdateTaskResponse], error) {
ret := _m.Called(_a0, _a1)
var r0 *connect.Response[runnerv1.UpdateTaskResponse]
var r1 error
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) (*connect.Response[runnerv1.UpdateTaskResponse], error)); ok {
return rf(_a0, _a1)
}
if rf, ok := ret.Get(0).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) *connect.Response[runnerv1.UpdateTaskResponse]); ok {
r0 = rf(_a0, _a1)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*connect.Response[runnerv1.UpdateTaskResponse])
}
}
if rf, ok := ret.Get(1).(func(context.Context, *connect.Request[runnerv1.UpdateTaskRequest]) error); ok {
r1 = rf(_a0, _a1)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
type mockConstructorTestingTNewClient interface {
mock.TestingT
Cleanup(func())
}
// NewClient creates a new instance of Client. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
func NewClient(t mockConstructorTestingTNewClient) *Client {
mock := &Client{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

View file

@ -1,100 +0,0 @@
# Example configuration file, it's safe to copy this as the default config file without any modification.
# You don't have to copy this file to your instance,
# just run `./act_runner generate-config > config.yaml` to generate a config file.
log:
# The level of logging, can be trace, debug, info, warn, error, fatal
level: info
runner:
# Where to store the registration result.
file: .runner
# Execute how many tasks concurrently at the same time.
capacity: 1
# Extra environment variables to run jobs.
envs:
A_TEST_ENV_NAME_1: a_test_env_value_1
A_TEST_ENV_NAME_2: a_test_env_value_2
# Extra environment variables to run jobs from a file.
# It will be ignored if it's empty or the file doesn't exist.
env_file: .env
# The timeout for a job to be finished.
# Please note that the Forgejo instance also has a timeout (3h by default) for the job.
# So the job could be stopped by the Forgejo instance if its timeout is shorter than this one.
timeout: 3h
# The timeout for the runner to wait for running jobs to finish when
# shutting down because a TERM or INT signal has been received. Any
# running jobs that haven't finished after this timeout will be
# cancelled.
# If unset or zero the jobs will be cancelled immediately.
shutdown_timeout: 3h
# Whether to skip verifying the TLS certificate of the instance.
insecure: false
# The timeout for fetching the job from the Forgejo instance.
fetch_timeout: 5s
# The interval for fetching the job from the Forgejo instance.
fetch_interval: 2s
# The interval for reporting the job status and logs to the Forgejo instance.
report_interval: 1s
# The labels of a runner are used to determine which jobs the runner can run, and how to run them.
# Like: ["macos-arm64:host", "ubuntu-latest:docker://node:20-bookworm", "ubuntu-22.04:docker://node:20-bookworm"]
# If it's empty when registering, it will ask for inputting labels.
# If it's empty when executing `daemon`, the labels in the `.runner` file will be used.
labels: []
cache:
# Enable cache server to use actions/cache.
enabled: true
# The directory to store the cache data.
# If it's empty, the cache data will be stored in $HOME/.cache/actcache.
dir: ""
# The host of the cache server.
# It's not for the address to listen, but the address to connect from job containers.
# So 0.0.0.0 is a bad choice, leave it empty to detect automatically.
host: ""
# The port of the cache server.
# 0 means to use a random available port.
port: 0
# The external cache server URL. Valid only when enabled is true.
# If it's specified, act_runner will use this URL as the ACTIONS_CACHE_URL rather than start a server by itself.
# The URL should generally end with "/".
external_server: ""
container:
# Specifies the network to which the container will connect.
# Could be host, bridge or the name of a custom network.
# If it's empty, create a network automatically.
network: ""
# Whether to create networks with IPv6 enabled. Requires the Docker daemon to be set up accordingly.
# Only takes effect if "network" is set to "".
enable_ipv6: false
# Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
privileged: false
# Other options to be used when the container is started (e.g., --add-host=my.forgejo.url:host-gateway).
options:
# The parent directory of a job's working directory.
# If it's empty, /workspace will be used.
workdir_parent:
# Volumes (including bind mounts) can be mounted to containers. Glob syntax is supported, see https://github.com/gobwas/glob
# You can specify multiple volumes. If the sequence is empty, no volumes can be mounted.
# For example, if you only allow containers to mount the `data` volume and all the json files in `/src`, you should change the config to:
# valid_volumes:
# - data
# - /src/*.json
# If you want to allow any volume, please use the following configuration:
# valid_volumes:
# - '**'
valid_volumes: []
# overrides the docker client host with the specified one.
# If it's empty, act_runner will find an available docker host automatically.
# If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
# If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
docker_host: ""
# Pull docker image(s) even if already present
force_pull: false
host:
# The parent directory of a job's working directory.
# If it's empty, $HOME/.cache/act/ will be used.
workdir_parent:

View file

@ -1,166 +0,0 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package config
import (
"fmt"
"os"
"path/filepath"
"time"
"github.com/joho/godotenv"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
)
// Log represents the configuration for logging.
type Log struct {
Level string `yaml:"level"` // Level indicates the logging level.
}
// Runner represents the configuration for the runner.
type Runner struct {
File string `yaml:"file"` // File specifies the file path for the runner.
Capacity int `yaml:"capacity"` // Capacity specifies the capacity of the runner.
Envs map[string]string `yaml:"envs"` // Envs stores environment variables for the runner.
EnvFile string `yaml:"env_file"` // EnvFile specifies the path to the file containing environment variables for the runner.
Timeout time.Duration `yaml:"timeout"` // Timeout specifies the duration for runner timeout.
ShutdownTimeout time.Duration `yaml:"shutdown_timeout"` // ShutdownTimeout specifies the duration to wait for running jobs to complete during a shutdown of the runner.
Insecure bool `yaml:"insecure"` // Insecure indicates whether the runner operates in an insecure mode.
FetchTimeout time.Duration `yaml:"fetch_timeout"` // FetchTimeout specifies the timeout duration for fetching resources.
FetchInterval time.Duration `yaml:"fetch_interval"` // FetchInterval specifies the interval duration for fetching resources.
ReportInterval time.Duration `yaml:"report_interval"` // ReportInterval specifies the interval duration for reporting status and logs of a running job.
Labels []string `yaml:"labels"` // Labels specify the labels of the runner. Labels are declared on each startup
}
// Cache represents the configuration for caching.
type Cache struct {
Enabled *bool `yaml:"enabled"` // Enabled indicates whether caching is enabled. It is a pointer to distinguish between false and not set. If not set, it will be true.
Dir string `yaml:"dir"` // Dir specifies the directory path for caching.
Host string `yaml:"host"` // Host specifies the caching host.
Port uint16 `yaml:"port"` // Port specifies the caching port.
ExternalServer string `yaml:"external_server"` // ExternalServer specifies the URL of external cache server
}
// Container represents the configuration for the container.
type Container struct {
Network string `yaml:"network"` // Network specifies the network for the container.
NetworkMode string `yaml:"network_mode"` // Deprecated: use Network instead. Could be removed after Gitea 1.20
EnableIPv6 bool `yaml:"enable_ipv6"` // EnableIPv6 indicates whether the network is created with IPv6 enabled.
Privileged bool `yaml:"privileged"` // Privileged indicates whether the container runs in privileged mode.
Options string `yaml:"options"` // Options specifies additional options for the container.
WorkdirParent string `yaml:"workdir_parent"` // WorkdirParent specifies the parent directory for the container's working directory.
ValidVolumes []string `yaml:"valid_volumes"` // ValidVolumes specifies the volumes (including bind mounts) can be mounted to containers.
DockerHost string `yaml:"docker_host"` // DockerHost specifies the Docker host. It overrides the value specified in environment variable DOCKER_HOST.
ForcePull bool `yaml:"force_pull"` // Pull docker image(s) even if already present
}
// Host represents the configuration for the host.
type Host struct {
WorkdirParent string `yaml:"workdir_parent"` // WorkdirParent specifies the parent directory for the host's working directory.
}
// Config represents the overall configuration.
type Config struct {
Log Log `yaml:"log"` // Log represents the configuration for logging.
Runner Runner `yaml:"runner"` // Runner represents the configuration for the runner.
Cache Cache `yaml:"cache"` // Cache represents the configuration for caching.
Container Container `yaml:"container"` // Container represents the configuration for the container.
Host Host `yaml:"host"` // Host represents the configuration for the host.
}
// Tune adjusts the config settings according to the Forgejo instance that will be used.
func (c *Config) Tune(instanceURL string) {
if instanceURL == "https://codeberg.org" {
if c.Runner.FetchInterval < 30*time.Second {
log.Info("The runner is configured to be used by a public instance, fetch interval is set to 30 seconds.")
c.Runner.FetchInterval = 30 * time.Second
}
}
}
// LoadDefault returns the default configuration.
// If file is not empty, it will be used to load the configuration.
func LoadDefault(file string) (*Config, error) {
cfg := &Config{}
if file != "" {
content, err := os.ReadFile(file)
if err != nil {
return nil, fmt.Errorf("open config file %q: %w", file, err)
}
if err := yaml.Unmarshal(content, cfg); err != nil {
return nil, fmt.Errorf("parse config file %q: %w", file, err)
}
}
compatibleWithOldEnvs(file != "", cfg)
if cfg.Runner.EnvFile != "" {
if stat, err := os.Stat(cfg.Runner.EnvFile); err == nil && !stat.IsDir() {
envs, err := godotenv.Read(cfg.Runner.EnvFile)
if err != nil {
return nil, fmt.Errorf("read env file %q: %w", cfg.Runner.EnvFile, err)
}
if cfg.Runner.Envs == nil {
cfg.Runner.Envs = map[string]string{}
}
for k, v := range envs {
cfg.Runner.Envs[k] = v
}
}
}
if cfg.Log.Level == "" {
cfg.Log.Level = "info"
}
if cfg.Runner.File == "" {
cfg.Runner.File = ".runner"
}
if cfg.Runner.Capacity <= 0 {
cfg.Runner.Capacity = 1
}
if cfg.Runner.Timeout <= 0 {
cfg.Runner.Timeout = 3 * time.Hour
}
if cfg.Cache.Enabled == nil {
b := true
cfg.Cache.Enabled = &b
}
if *cfg.Cache.Enabled {
if cfg.Cache.Dir == "" {
home, _ := os.UserHomeDir()
cfg.Cache.Dir = filepath.Join(home, ".cache", "actcache")
}
}
if cfg.Container.WorkdirParent == "" {
cfg.Container.WorkdirParent = "workspace"
}
if cfg.Host.WorkdirParent == "" {
home, _ := os.UserHomeDir()
cfg.Host.WorkdirParent = filepath.Join(home, ".cache", "act")
}
if cfg.Runner.FetchTimeout <= 0 {
cfg.Runner.FetchTimeout = 5 * time.Second
}
if cfg.Runner.FetchInterval <= 0 {
cfg.Runner.FetchInterval = 2 * time.Second
}
if cfg.Runner.ReportInterval <= 0 {
cfg.Runner.ReportInterval = time.Second
}
// although `container.network_mode` is deprecated, we have to stay compatible with it for now.
if cfg.Container.NetworkMode != "" && cfg.Container.Network == "" {
log.Warn("You are trying to use deprecated configuration item of `container.network_mode`, please use `container.network` instead.")
if cfg.Container.NetworkMode == "bridge" {
// Previously, if the value of `container.network_mode` was `bridge`, we would create a new network for the job.
// But "bridge" is easily confused with the bridge network created by Docker by default.
// So we set `container.network` to an empty string to make `act_runner` automatically create a new network for the job.
cfg.Container.Network = ""
} else {
cfg.Container.Network = cfg.Container.NetworkMode
}
}
return cfg, nil
}

View file

@ -1,37 +0,0 @@
// Copyright 2024 The Forgejo Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package config
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestConfigTune(t *testing.T) {
c := &Config{
Runner: Runner{},
}
t.Run("Public instance tuning", func(t *testing.T) {
c.Runner.FetchInterval = 60 * time.Second
c.Tune("https://codeberg.org")
assert.EqualValues(t, 60*time.Second, c.Runner.FetchInterval)
c.Runner.FetchInterval = 2 * time.Second
c.Tune("https://codeberg.org")
assert.EqualValues(t, 30*time.Second, c.Runner.FetchInterval)
})
t.Run("Non-public instance tuning", func(t *testing.T) {
c.Runner.FetchInterval = 60 * time.Second
c.Tune("https://example.com")
assert.EqualValues(t, 60*time.Second, c.Runner.FetchInterval)
c.Runner.FetchInterval = 2 * time.Second
c.Tune("https://codeberg.com")
assert.EqualValues(t, 2*time.Second, c.Runner.FetchInterval)
})
}

View file

@ -1,62 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package config
import (
"os"
"strconv"
"strings"
log "github.com/sirupsen/logrus"
)
// Deprecated: could be removed in the future. TODO: remove it when Gitea 1.20.0 is released.
// Be compatible with old envs.
func compatibleWithOldEnvs(fileUsed bool, cfg *Config) {
handleEnv := func(key string) (string, bool) {
if v, ok := os.LookupEnv(key); ok {
if fileUsed {
log.Warnf("env %s has been ignored because config file is used", key)
return "", false
}
log.Warnf("env %s will be deprecated, please use config file instead", key)
return v, true
}
return "", false
}
if v, ok := handleEnv("GITEA_DEBUG"); ok {
if b, _ := strconv.ParseBool(v); b {
cfg.Log.Level = "debug"
}
}
if v, ok := handleEnv("GITEA_TRACE"); ok {
if b, _ := strconv.ParseBool(v); b {
cfg.Log.Level = "trace"
}
}
if v, ok := handleEnv("GITEA_RUNNER_CAPACITY"); ok {
if i, _ := strconv.Atoi(v); i > 0 {
cfg.Runner.Capacity = i
}
}
if v, ok := handleEnv("GITEA_RUNNER_FILE"); ok {
cfg.Runner.File = v
}
if v, ok := handleEnv("GITEA_RUNNER_ENVIRON"); ok {
splits := strings.Split(v, ",")
if cfg.Runner.Envs == nil {
cfg.Runner.Envs = map[string]string{}
}
for _, split := range splits {
kv := strings.SplitN(split, ":", 2)
if len(kv) == 2 && kv[0] != "" {
cfg.Runner.Envs[kv[0]] = kv[1]
}
}
}
if v, ok := handleEnv("GITEA_RUNNER_ENV_FILE"); ok {
cfg.Runner.EnvFile = v
}
}
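// For illustration, GITEA_RUNNER_ENVIRON is a comma-separated list of
// key:value pairs, as handled by the loop above, e.g.
//   GITEA_RUNNER_ENVIRON="A_TEST_ENV_NAME_1:a_test_env_value_1,A_TEST_ENV_NAME_2:a_test_env_value_2"
// It is folded into cfg.Runner.Envs only when no config file is used.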

View file

@ -1,9 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package config
import _ "embed"
//go:embed config.example.yaml
var Example []byte

View file

@ -1,54 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package config
import (
"encoding/json"
"os"
)
const registrationWarning = "This file is automatically generated by act-runner. Do not edit it manually unless you know what you are doing. Removing this file will cause act runner to re-register as a new runner."
// Registration is the registration information for a runner
type Registration struct {
Warning string `json:"WARNING"` // Warning message to display, it's always the registrationWarning constant
ID int64 `json:"id"`
UUID string `json:"uuid"`
Name string `json:"name"`
Token string `json:"token"`
Address string `json:"address"`
Labels []string `json:"labels"`
}
func LoadRegistration(file string) (*Registration, error) {
f, err := os.Open(file)
if err != nil {
return nil, err
}
defer f.Close()
var reg Registration
if err := json.NewDecoder(f).Decode(&reg); err != nil {
return nil, err
}
reg.Warning = ""
return &reg, nil
}
func SaveRegistration(file string, reg *Registration) error {
f, err := os.Create(file)
if err != nil {
return err
}
defer f.Close()
reg.Warning = registrationWarning
enc := json.NewEncoder(f)
enc.SetIndent("", " ")
return enc.Encode(reg)
}
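For reference, a .runner file produced by SaveRegistration has roughly the following shape. The field names come from the json tags above; every value below is a placeholder:

{
  "WARNING": "<the registrationWarning text>",
  "id": 42,
  "uuid": "41414141-4141-4141-4141-414141414141",
  "name": "myrunner",
  "token": "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
  "address": "https://code.forgejo.org",
  "labels": ["ubuntu-22.04:docker://node:20-bookworm"]
}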

View file

@ -1,5 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
// Package envcheck provides a simple way to check if the environment is ready to run jobs.
package envcheck

View file

@ -1,34 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package envcheck
import (
"context"
"fmt"
"github.com/docker/docker/client"
)
func CheckIfDockerRunning(ctx context.Context, configDockerHost string) error {
opts := []client.Opt{
client.FromEnv,
}
if configDockerHost != "" {
opts = append(opts, client.WithHost(configDockerHost))
}
cli, err := client.NewClientWithOpts(opts...)
if err != nil {
return err
}
defer cli.Close()
_, err = cli.Ping(ctx)
if err != nil {
return fmt.Errorf("cannot ping the docker daemon. is it running? %w", err)
}
return nil
}

View file

@ -1,109 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package labels
import (
"fmt"
"strings"
)
const (
SchemeHost = "host"
SchemeDocker = "docker"
SchemeLXC = "lxc"
)
type Label struct {
Name string
Schema string
Arg string
}
func Parse(str string) (*Label, error) {
splits := strings.SplitN(str, ":", 3)
label := &Label{
Name: splits[0],
Schema: "host",
Arg: "",
}
if len(splits) >= 2 {
label.Schema = splits[1]
}
if len(splits) >= 3 {
label.Arg = splits[2]
}
if label.Schema != SchemeHost && label.Schema != SchemeDocker && label.Schema != SchemeLXC {
return nil, fmt.Errorf("unsupported schema: %s", label.Schema)
}
return label, nil
}
type Labels []*Label
func (l Labels) RequireDocker() bool {
for _, label := range l {
if label.Schema == SchemeDocker {
return true
}
}
return false
}
func (l Labels) PickPlatform(runsOn []string) string {
platforms := make(map[string]string, len(l))
for _, label := range l {
switch label.Schema {
case SchemeDocker:
// "//" will be ignored
platforms[label.Name] = strings.TrimPrefix(label.Arg, "//")
case SchemeHost:
platforms[label.Name] = "-self-hosted"
case SchemeLXC:
platforms[label.Name] = "lxc:" + strings.TrimPrefix(label.Arg, "//")
default:
// It should not happen, because Parse has checked it.
continue
}
}
for _, v := range runsOn {
if v, ok := platforms[v]; ok {
return v
}
}
// TODO: support multiple labels
// like:
// ["ubuntu-22.04"] => "ubuntu:22.04"
// ["with-gpu"] => "linux:with-gpu"
// ["ubuntu-22.04", "with-gpu"] => "ubuntu:22.04_with-gpu"
// return default.
// The runner may still receive a task with a label it doesn't have;
// this happens when the user has edited the runner's labels in the web UI.
// TODO: this may not be correct, what if the runner is used in host mode only?
return "node:20-bullseye"
}
func (l Labels) Names() []string {
names := make([]string, 0, len(l))
for _, label := range l {
names = append(names, label.Name)
}
return names
}
func (l Labels) ToStrings() []string {
ls := make([]string, 0, len(l))
for _, label := range l {
lbl := label.Name
if label.Schema != "" {
lbl += ":" + label.Schema
if label.Arg != "" {
lbl += ":" + label.Arg
}
}
ls = append(ls, lbl)
}
return ls
}
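// A minimal sketch (as if added to the labels tests) showing how Parse and
// PickPlatform combine; the label string is an illustrative value and this
// test is not part of the change.
func TestPickPlatformSketch(t *testing.T) {
	lbl, err := Parse("ubuntu-22.04:docker://node:20-bookworm")
	if err != nil {
		t.Fatalf("Parse: %v", err)
	}
	ls := Labels{lbl}
	if !ls.RequireDocker() {
		t.Fatal("expected the docker scheme to require docker")
	}
	// the leading "//" of the argument is stripped when picking the platform
	if got := ls.PickPlatform([]string{"ubuntu-22.04"}); got != "node:20-bookworm" {
		t.Fatalf("PickPlatform: got %q", got)
	}
}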

View file

@ -1,63 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package labels
import (
"testing"
"github.com/stretchr/testify/require"
"gotest.tools/v3/assert"
)
func TestParse(t *testing.T) {
tests := []struct {
args string
want *Label
wantErr bool
}{
{
args: "ubuntu:docker://node:18",
want: &Label{
Name: "ubuntu",
Schema: "docker",
Arg: "//node:18",
},
wantErr: false,
},
{
args: "ubuntu:host",
want: &Label{
Name: "ubuntu",
Schema: "host",
Arg: "",
},
wantErr: false,
},
{
args: "ubuntu",
want: &Label{
Name: "ubuntu",
Schema: "host",
Arg: "",
},
wantErr: false,
},
{
args: "ubuntu:vm:ubuntu-18.04",
want: nil,
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.args, func(t *testing.T) {
got, err := Parse(tt.args)
if tt.wantErr {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.DeepEqual(t, got, tt.want)
})
}
}

View file

@ -1,198 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package report
import (
"context"
"strings"
"testing"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
connect_go "connectrpc.com/connect"
log "github.com/sirupsen/logrus"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"google.golang.org/protobuf/types/known/structpb"
"gitea.com/gitea/act_runner/internal/pkg/client/mocks"
)
func TestReporter_parseLogRow(t *testing.T) {
tests := []struct {
name string
debugOutputEnabled bool
args []string
want []string
}{
{
"No command", false,
[]string{"Hello, world!"},
[]string{"Hello, world!"},
},
{
"Add-mask", false,
[]string{
"foo mysecret bar",
"::add-mask::mysecret",
"foo mysecret bar",
},
[]string{
"foo mysecret bar",
"<nil>",
"foo *** bar",
},
},
{
"Debug enabled", true,
[]string{
"::debug::GitHub Actions runtime token access controls",
},
[]string{
"GitHub Actions runtime token access controls",
},
},
{
"Debug not enabled", false,
[]string{
"::debug::GitHub Actions runtime token access controls",
},
[]string{
"<nil>",
},
},
{
"notice", false,
[]string{
"::notice file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
[]string{
"::notice file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
},
{
"warning", false,
[]string{
"::warning file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
[]string{
"::warning file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
},
{
"error", false,
[]string{
"::error file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
[]string{
"::error file=file.name,line=42,endLine=48,title=Cool Title::Gosh, that's not going to work",
},
},
{
"group", false,
[]string{
"::group::",
"::endgroup::",
},
[]string{
"##[group]",
"##[endgroup]",
},
},
{
"stop-commands", false,
[]string{
"::add-mask::foo",
"::stop-commands::myverycoolstoptoken",
"::add-mask::bar",
"::debug::Stuff",
"myverycoolstoptoken",
"::add-mask::baz",
"::myverycoolstoptoken::",
"::add-mask::wibble",
"foo bar baz wibble",
},
[]string{
"<nil>",
"<nil>",
"::add-mask::bar",
"::debug::Stuff",
"myverycoolstoptoken",
"::add-mask::baz",
"<nil>",
"<nil>",
"*** bar baz ***",
},
},
{
"unknown command", false,
[]string{
"::set-mask::foo",
},
[]string{
"::set-mask::foo",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
r := &Reporter{
logReplacer: strings.NewReplacer(),
debugOutputEnabled: tt.debugOutputEnabled,
}
for idx, arg := range tt.args {
rv := r.parseLogRow(&log.Entry{Message: arg})
got := "<nil>"
if rv != nil {
got = rv.Content
}
assert.Equal(t, tt.want[idx], got)
}
})
}
}
func TestReporter_Fire(t *testing.T) {
t.Run("ignore command lines", func(t *testing.T) {
client := mocks.NewClient(t)
client.On("UpdateLog", mock.Anything, mock.Anything).Return(func(_ context.Context, req *connect_go.Request[runnerv1.UpdateLogRequest]) (*connect_go.Response[runnerv1.UpdateLogResponse], error) {
t.Logf("Received UpdateLog: %s", req.Msg.String())
return connect_go.NewResponse(&runnerv1.UpdateLogResponse{
AckIndex: req.Msg.Index + int64(len(req.Msg.Rows)),
}), nil
})
client.On("UpdateTask", mock.Anything, mock.Anything).Return(func(_ context.Context, req *connect_go.Request[runnerv1.UpdateTaskRequest]) (*connect_go.Response[runnerv1.UpdateTaskResponse], error) {
t.Logf("Received UpdateTask: %s", req.Msg.String())
return connect_go.NewResponse(&runnerv1.UpdateTaskResponse{}), nil
})
ctx, cancel := context.WithCancel(context.Background())
taskCtx, err := structpb.NewStruct(map[string]interface{}{})
require.NoError(t, err)
reporter := NewReporter(ctx, cancel, client, &runnerv1.Task{
Context: taskCtx,
}, time.Second)
defer func() {
assert.NoError(t, reporter.Close(""))
}()
reporter.ResetSteps(5)
dataStep0 := map[string]interface{}{
"stage": "Main",
"stepNumber": 0,
"raw_output": true,
}
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "::debug::debug log line", Data: dataStep0}))
assert.NoError(t, reporter.Fire(&log.Entry{Message: "regular log line", Data: dataStep0}))
assert.Equal(t, int64(3), reporter.state.Steps[0].LogLength)
})
}
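
As a point of reference for the table above, the `::command parameters::value` lines are matched with the same regular expression the reporter uses (`cmdRegex`, visible in the reporter diff further down). The following standalone sketch is illustrative only and is not part of this diff:

```go
package main

import (
	"fmt"
	"regexp"
)

// cmdRegex mirrors the expression the reporter uses to detect workflow commands.
var cmdRegex = regexp.MustCompile(`^::([^ :]+)( .*)?::(.*)$`)

func main() {
	for _, line := range []string{
		"::debug::Stuff",
		"::notice file=file.name,line=42::Gosh, that's not going to work",
		"a plain log line",
	} {
		if m := cmdRegex.FindStringSubmatch(line); m != nil {
			// m[1] is the command, m[2] the optional " parameters", m[3] the value.
			fmt.Printf("command=%q parameters=%q value=%q\n", m[1], m[2], m[3])
		} else {
			fmt.Printf("raw log row: %q\n", line)
		}
	}
}
```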


@@ -1,11 +0,0 @@
// Copyright 2023 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package ver
// go build -ldflags "-X gitea.com/gitea/act_runner/internal/pkg/ver.version=1.2.3"
var version = "dev"
func Version() string {
return version
}

27
main.go

@@ -1,19 +1,34 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package main
import (
"context"
"os"
"os/signal"
"syscall"
"gitea.com/gitea/act_runner/internal/app/cmd"
"codeberg.org/forgejo/runner/cmd"
)
func withContextFunc(ctx context.Context, f func()) context.Context {
ctx, cancel := context.WithCancel(ctx)
go func() {
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
defer signal.Stop(c)
select {
case <-ctx.Done():
case <-c:
cancel()
f()
}
}()
return ctx
}
func main() {
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer stop()
ctx := withContextFunc(context.Background(), func() {})
// run the command
cmd.Execute(ctx)
}

33
poller/metric.go Normal file

@@ -0,0 +1,33 @@
package poller
import "sync/atomic"
// Metric interface
type Metric interface {
IncBusyWorker() int64
DecBusyWorker() int64
BusyWorkers() int64
}
var _ Metric = (*metric)(nil)
type metric struct {
busyWorkers int64
}
// NewMetric for default metric structure
func NewMetric() Metric {
return &metric{}
}
func (m *metric) IncBusyWorker() int64 {
return atomic.AddInt64(&m.busyWorkers, 1)
}
func (m *metric) DecBusyWorker() int64 {
return atomic.AddInt64(&m.busyWorkers, -1)
}
func (m *metric) BusyWorkers() int64 {
return atomic.LoadInt64(&m.busyWorkers)
}
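
A rough usage sketch (not part of the diff) of the busy-worker gauge above; the `codeberg.org/forgejo/runner/poller` import path and the `workerNum` value are assumptions for illustration:

```go
package main

import (
	"fmt"

	"codeberg.org/forgejo/runner/poller" // assumed import path, matching the other packages in this tree
)

func main() {
	m := poller.NewMetric()

	const workerNum = 2 // hypothetical configured worker count

	m.IncBusyWorker() // a task was dispatched
	m.IncBusyWorker() // a second task was dispatched

	if int(m.BusyWorkers()) >= workerNum {
		fmt.Println("all workers busy; do not fetch another task yet")
	}

	m.DecBusyWorker() // a task finished, freeing a worker slot
}
```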

146
poller/poller.go Normal file

@@ -0,0 +1,146 @@
package poller
import (
"context"
"errors"
"sync"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"codeberg.org/forgejo/runner/client"
"github.com/bufbuild/connect-go"
log "github.com/sirupsen/logrus"
)
var ErrDataLock = errors.New("Data Lock Error")
func New(cli client.Client, dispatch func(context.Context, *runnerv1.Task) error, workerNum int) *Poller {
return &Poller{
Client: cli,
Dispatch: dispatch,
routineGroup: newRoutineGroup(),
metric: &metric{},
workerNum: workerNum,
ready: make(chan struct{}, 1),
}
}
type Poller struct {
Client client.Client
Dispatch func(context.Context, *runnerv1.Task) error
sync.Mutex
routineGroup *routineGroup
metric *metric
ready chan struct{}
workerNum int
}
func (p *Poller) schedule() {
p.Lock()
defer p.Unlock()
if int(p.metric.BusyWorkers()) >= p.workerNum {
return
}
select {
case p.ready <- struct{}{}:
default:
}
}
func (p *Poller) Wait() {
p.routineGroup.Wait()
}
func (p *Poller) Poll(ctx context.Context) error {
l := log.WithField("func", "Poll")
for {
// check worker number
p.schedule()
select {
// wait for a worker to become ready
case <-p.ready:
case <-ctx.Done():
return nil
}
LOOP:
for {
select {
case <-ctx.Done():
break LOOP
default:
task, err := p.pollTask(ctx)
if task == nil || err != nil {
if err != nil {
l.Errorf("can't find the task: %v", err.Error())
}
time.Sleep(5 * time.Second)
break
}
p.metric.IncBusyWorker()
p.routineGroup.Run(func() {
defer p.schedule()
defer p.metric.DecBusyWorker()
if err := p.dispatchTask(ctx, task); err != nil {
l.Errorf("execute task: %v", err.Error())
}
})
break LOOP
}
}
}
}
func (p *Poller) pollTask(ctx context.Context) (*runnerv1.Task, error) {
l := log.WithField("func", "pollTask")
l.Info("poller: request stage from remote server")
reqCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
// request a new build stage for execution from the central
// build server.
resp, err := p.Client.FetchTask(reqCtx, connect.NewRequest(&runnerv1.FetchTaskRequest{}))
if err == context.Canceled || err == context.DeadlineExceeded {
l.WithError(err).Trace("poller: no stage returned")
return nil, nil
}
if err != nil && err == ErrDataLock {
l.WithError(err).Info("task accepted by another runner")
return nil, nil
}
if err != nil {
l.WithError(err).Error("cannot accept task")
return nil, err
}
// exit if a nil or empty stage is returned from the system
// and allow the runner to retry.
if resp.Msg.Task == nil || resp.Msg.Task.Id == 0 {
return nil, nil
}
return resp.Msg.Task, nil
}
func (p *Poller) dispatchTask(ctx context.Context, task *runnerv1.Task) error {
l := log.WithField("func", "dispatchTask")
defer func() {
e := recover()
if e != nil {
l.Errorf("panic error: %v", e)
}
}()
runCtx, cancel := context.WithTimeout(ctx, time.Hour)
defer cancel()
return p.Dispatch(runCtx, task)
}
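
A minimal sketch of how a daemon could wire the `Poller` together; the `runPoller` helper, the example package, and the import path are hypothetical, and only the `New`/`Poll`/`Wait` API comes from the code above:

```go
// Hypothetical example package; not part of the diff.
package example

import (
	"context"

	runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
	"codeberg.org/forgejo/runner/client"
	"codeberg.org/forgejo/runner/poller" // assumed import path
	log "github.com/sirupsen/logrus"
)

// runPoller blocks in Poll until ctx is cancelled, then drains in-flight tasks.
func runPoller(ctx context.Context, cli client.Client, workerNum int) {
	dispatch := func(ctx context.Context, task *runnerv1.Task) error {
		log.Infof("would execute task %d here", task.Id)
		return nil
	}

	p := poller.New(cli, dispatch, workerNum)
	if err := p.Poll(ctx); err != nil {
		log.WithError(err).Error("poller stopped")
	}
	p.Wait()
}
```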

24
poller/thread.go Normal file

@@ -0,0 +1,24 @@
package poller
import "sync"
type routineGroup struct {
waitGroup sync.WaitGroup
}
func newRoutineGroup() *routineGroup {
return new(routineGroup)
}
func (g *routineGroup) Run(fn func()) {
g.waitGroup.Add(1)
go func() {
defer g.waitGroup.Done()
fn()
}()
}
func (g *routineGroup) Wait() {
g.waitGroup.Wait()
}

63
register/register.go Normal file

@@ -0,0 +1,63 @@
package register
import (
"context"
"encoding/json"
"os"
"strconv"
"strings"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"codeberg.org/forgejo/runner/client"
"codeberg.org/forgejo/runner/config"
"codeberg.org/forgejo/runner/core"
"github.com/bufbuild/connect-go"
log "github.com/sirupsen/logrus"
)
func New(cli client.Client) *Register {
return &Register{
Client: cli,
}
}
type Register struct {
Client client.Client
}
func (p *Register) Register(ctx context.Context, cfg config.Runner) (*core.Runner, error) {
labels := make([]string, len(cfg.Labels))
for i, v := range cfg.Labels {
labels[i] = strings.SplitN(v, ":", 2)[0]
}
// register new runner.
resp, err := p.Client.Register(ctx, connect.NewRequest(&runnerv1.RegisterRequest{
Name: cfg.Name,
Token: cfg.Token,
AgentLabels: labels,
}))
if err != nil {
log.WithError(err).Error("poller: cannot register new runner")
return nil, err
}
data := &core.Runner{
ID: resp.Msg.Runner.Id,
UUID: resp.Msg.Runner.Uuid,
Name: resp.Msg.Runner.Name,
Token: resp.Msg.Runner.Token,
Address: p.Client.Address(),
Insecure: strconv.FormatBool(p.Client.Insecure()),
Labels: cfg.Labels,
}
file, err := json.MarshalIndent(data, "", " ")
if err != nil {
log.WithError(err).Error("poller: cannot marshal the json input")
return data, err
}
// store runner config in .runner file
return data, os.WriteFile(cfg.File, file, 0o644)
}
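
A hedged sketch of how registration could be invoked; the field values are placeholders, the example package and import paths are assumed to follow the other packages in this tree, and only the `Register` API comes from the code above:

```go
// Hypothetical example package; not part of the diff.
package example

import (
	"context"

	"codeberg.org/forgejo/runner/client"
	"codeberg.org/forgejo/runner/config"
	"codeberg.org/forgejo/runner/register" // assumed import paths
	log "github.com/sirupsen/logrus"
)

func registerRunner(ctx context.Context, cli client.Client) {
	cfg := config.Runner{
		Name:   "my-runner",          // placeholder values
		Token:  "REGISTRATION_TOKEN", // placeholder registration token
		Labels: []string{"ubuntu-22.04:docker://node:16-bullseye"},
		File:   ".runner", // where the marshalled runner JSON is written
	}

	if _, err := register.New(cli).Register(ctx, cfg); err != nil {
		log.WithError(err).Error("registration failed")
	}
}
```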


@@ -1,11 +0,0 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": ["local>forgejo/renovate-config"],
"packageRules": [
{
"description": "Disable nektos/act, it's replaced",
"matchDepNames": ["github.com/nektos/act"],
"enabled": false
}
]
}


@@ -1,24 +1,20 @@
// Copyright 2022 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package report
package runtime
import (
"context"
"fmt"
"regexp"
"strings"
"sync"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"connectrpc.com/connect"
"codeberg.org/forgejo/runner/client"
retry "github.com/avast/retry-go/v4"
"github.com/bufbuild/connect-go"
log "github.com/sirupsen/logrus"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/types/known/timestamppb"
"gitea.com/gitea/act_runner/internal/pkg/client"
)
type Reporter struct {
@@ -29,54 +25,36 @@ type Reporter struct {
client client.Client
clientM sync.Mutex
logOffset int
logRows []*runnerv1.LogRow
logReplacer *strings.Replacer
oldnew []string
reportInterval time.Duration
state *runnerv1.TaskState
stateMu sync.RWMutex
outputs sync.Map
debugOutputEnabled bool
stopCommandEndToken string
logOffset int
logRows []*runnerv1.LogRow
logReplacer *strings.Replacer
state *runnerv1.TaskState
stateM sync.RWMutex
}
func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task, reportInterval time.Duration) *Reporter {
func NewReporter(ctx context.Context, cancel context.CancelFunc, client client.Client, task *runnerv1.Task) *Reporter {
var oldnew []string
if v := task.Context.Fields["token"].GetStringValue(); v != "" {
oldnew = append(oldnew, v, "***")
}
if v := task.Context.Fields["gitea_runtime_token"].GetStringValue(); v != "" {
oldnew = append(oldnew, v, "***")
}
for _, v := range task.Secrets {
oldnew = append(oldnew, v, "***")
}
rv := &Reporter{
ctx: ctx,
cancel: cancel,
client: client,
oldnew: oldnew,
reportInterval: reportInterval,
logReplacer: strings.NewReplacer(oldnew...),
return &Reporter{
ctx: ctx,
cancel: cancel,
client: client,
logReplacer: strings.NewReplacer(oldnew...),
state: &runnerv1.TaskState{
Id: task.Id,
},
}
if task.Secrets["ACTIONS_STEP_DEBUG"] == "true" {
rv.debugOutputEnabled = true
}
return rv
}
func (r *Reporter) ResetSteps(l int) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
r.stateM.Lock()
defer r.stateM.Unlock()
for i := 0; i < l; i++ {
r.state.Steps = append(r.state.Steps, &runnerv1.StepState{
Id: int64(i),
@@ -88,16 +66,9 @@ func (r *Reporter) Levels() []log.Level {
return log.AllLevels
}
func appendIfNotNil[T any](s []*T, v *T) []*T {
if v != nil {
return append(s, v)
}
return s
}
func (r *Reporter) Fire(entry *log.Entry) error {
r.stateMu.Lock()
defer r.stateMu.Unlock()
r.stateM.Lock()
defer r.stateM.Unlock()
log.WithFields(entry.Data).Trace(entry.Message)
@@ -116,28 +87,25 @@ func (r *Reporter) Fire(entry *log.Entry) error {
for _, s := range r.state.Steps {
if s.Result == runnerv1.Result_RESULT_UNSPECIFIED {
s.Result = runnerv1.Result_RESULT_CANCELLED
if jobResult == runnerv1.Result_RESULT_SKIPPED {
s.Result = runnerv1.Result_RESULT_SKIPPED
}
}
}
}
}
if !r.duringSteps() {
r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
r.logRows = append(r.logRows, r.parseLogRow(entry))
}
return nil
}
var step *runnerv1.StepState
if v, ok := entry.Data["stepNumber"]; ok {
if v, ok := v.(int); ok && len(r.state.Steps) > v {
if v, ok := v.(int); ok {
step = r.state.Steps[v]
}
}
if step == nil {
if !r.duringSteps() {
r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
r.logRows = append(r.logRows, r.parseLogRow(entry))
}
return nil
}
@@ -147,16 +115,14 @@ func (r *Reporter) Fire(entry *log.Entry) error {
}
if v, ok := entry.Data["raw_output"]; ok {
if rawOutput, ok := v.(bool); ok && rawOutput {
if row := r.parseLogRow(entry); row != nil {
if step.LogLength == 0 {
step.LogIndex = int64(r.logOffset + len(r.logRows))
}
step.LogLength++
r.logRows = append(r.logRows, row)
if step.LogLength == 0 {
step.LogIndex = int64(r.logOffset + len(r.logRows))
}
step.LogLength++
r.logRows = append(r.logRows, r.parseLogRow(entry))
}
} else if !r.duringSteps() {
r.logRows = appendIfNotNil(r.logRows, r.parseLogRow(entry))
r.logRows = append(r.logRows, r.parseLogRow(entry))
}
if v, ok := entry.Data["stepResult"]; ok {
if stepResult, ok := r.parseResult(v); ok {
@@ -182,17 +148,13 @@ func (r *Reporter) RunDaemon() {
_ = r.ReportLog(false)
_ = r.ReportState()
time.AfterFunc(r.reportInterval, r.RunDaemon)
time.AfterFunc(time.Second, r.RunDaemon)
}
func (r *Reporter) Logf(format string, a ...interface{}) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
r.stateM.Lock()
defer r.stateM.Unlock()
r.logf(format, a...)
}
func (r *Reporter) logf(format string, a ...interface{}) {
if !r.duringSteps() {
r.logRows = append(r.logRows, &runnerv1.LogRow{
Time: timestamppb.Now(),
@@ -201,30 +163,10 @@ func (r *Reporter) logf(format string, a ...interface{}) {
}
}
func (r *Reporter) SetOutputs(outputs map[string]string) {
r.stateMu.Lock()
defer r.stateMu.Unlock()
for k, v := range outputs {
if len(k) > 255 {
r.logf("ignore output because the key is too long: %q", k)
continue
}
if l := len(v); l > 1024*1024 {
log.Println("ignore output because the value is too long:", k, l)
r.logf("ignore output because the value %q is too long: %d", k, l)
}
if _, ok := r.outputs.Load(k); ok {
continue
}
r.outputs.Store(k, v)
}
}
func (r *Reporter) Close(lastWords string) error {
r.closed = true
r.stateMu.Lock()
r.stateM.Lock()
if r.state.Result == runnerv1.Result_RESULT_UNSPECIFIED {
if lastWords == "" {
lastWords = "Early termination"
@@ -234,19 +176,18 @@ func (r *Reporter) Close(lastWords string) error {
v.Result = runnerv1.Result_RESULT_CANCELLED
}
}
r.state.Result = runnerv1.Result_RESULT_FAILURE
r.logRows = append(r.logRows, &runnerv1.LogRow{
Time: timestamppb.Now(),
Content: lastWords,
})
r.state.StoppedAt = timestamppb.Now()
return nil
} else if lastWords != "" {
r.logRows = append(r.logRows, &runnerv1.LogRow{
Time: timestamppb.Now(),
Content: lastWords,
})
}
r.stateMu.Unlock()
r.stateM.Unlock()
return retry.Do(func() error {
if err := r.ReportLog(true); err != nil {
@@ -260,9 +201,9 @@ func (r *Reporter) ReportLog(noMore bool) error {
r.clientM.Lock()
defer r.clientM.Unlock()
r.stateMu.RLock()
r.stateM.RLock()
rows := r.logRows
r.stateMu.RUnlock()
r.stateM.RUnlock()
resp, err := r.client.UpdateLog(r.ctx, connect.NewRequest(&runnerv1.UpdateLogRequest{
TaskId: r.state.Id,
@@ -279,10 +220,10 @@ func (r *Reporter) ReportLog(noMore bool) error {
return fmt.Errorf("submitted logs are lost")
}
r.stateMu.Lock()
r.stateM.Lock()
r.logRows = r.logRows[ack-r.logOffset:]
r.logOffset = ack
r.stateMu.Unlock()
r.stateM.Unlock()
if noMore && ack < r.logOffset+len(rows) {
return fmt.Errorf("not all logs are submitted")
@@ -295,45 +236,21 @@ func (r *Reporter) ReportState() error {
r.clientM.Lock()
defer r.clientM.Unlock()
r.stateMu.RLock()
r.stateM.RLock()
state := proto.Clone(r.state).(*runnerv1.TaskState)
r.stateMu.RUnlock()
outputs := make(map[string]string)
r.outputs.Range(func(k, v interface{}) bool {
if val, ok := v.(string); ok {
outputs[k.(string)] = val
}
return true
})
r.stateM.RUnlock()
resp, err := r.client.UpdateTask(r.ctx, connect.NewRequest(&runnerv1.UpdateTaskRequest{
State: state,
Outputs: outputs,
State: state,
}))
if err != nil {
return err
}
for _, k := range resp.Msg.SentOutputs {
r.outputs.Store(k, struct{}{})
}
if resp.Msg.State != nil && resp.Msg.State.Result == runnerv1.Result_RESULT_CANCELLED {
r.cancel()
}
var noSent []string
r.outputs.Range(func(k, v interface{}) bool {
if _, ok := v.(string); ok {
noSent = append(noSent, k.(string))
}
return true
})
if len(noSent) > 0 {
return fmt.Errorf("there are still outputs that have not been sent: %v", noSent)
}
return nil
}
@@ -367,71 +284,11 @@ func (r *Reporter) parseResult(result interface{}) (runnerv1.Result, bool) {
return ret, ok
}
var cmdRegex = regexp.MustCompile(`^::([^ :]+)( .*)?::(.*)$`)
func (r *Reporter) handleCommand(originalContent, command, parameters, value string) *string {
if r.stopCommandEndToken != "" && command != r.stopCommandEndToken {
return &originalContent
}
switch command {
case "add-mask":
r.addMask(value)
return nil
case "debug":
if r.debugOutputEnabled {
return &value
}
return nil
case "notice":
// Not implemented yet, so just return the original content.
return &originalContent
case "warning":
// Not implemented yet, so just return the original content.
return &originalContent
case "error":
// Not implemented yet, so just return the original content.
return &originalContent
case "group":
// Rewriting into ##[] syntax which the frontend understands
content := "##[group]" + value
return &content
case "endgroup":
// Ditto
content := "##[endgroup]"
return &content
case "stop-commands":
r.stopCommandEndToken = value
return nil
case r.stopCommandEndToken:
r.stopCommandEndToken = ""
return nil
}
return &originalContent
}
func (r *Reporter) parseLogRow(entry *log.Entry) *runnerv1.LogRow {
content := strings.TrimRightFunc(entry.Message, func(r rune) bool { return r == '\r' || r == '\n' })
matches := cmdRegex.FindStringSubmatch(content)
if matches != nil {
if output := r.handleCommand(content, matches[1], matches[2], matches[3]); output != nil {
content = *output
} else {
return nil
}
}
content = r.logReplacer.Replace(content)
return &runnerv1.LogRow{
Time: timestamppb.New(entry.Time),
Content: strings.ToValidUTF8(content, "?"),
Content: content,
}
}
func (r *Reporter) addMask(msg string) {
r.oldnew = append(r.oldnew, msg, "***")
r.logReplacer = strings.NewReplacer(r.oldnew...)
}
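
For clarity, the masking behaviour exercised by the tests earlier boils down to a `strings.Replacer` that is rebuilt whenever `::add-mask::` appends another `(secret, "***")` pair to `oldnew`. A standalone sketch, not part of the diff; the secret values are placeholders:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Secrets and tokens collected when the reporter is created.
	oldnew := []string{"s3cr3t", "***"}

	// Effect of a "::add-mask::hunter2" command: append the pair and rebuild.
	oldnew = append(oldnew, "hunter2", "***")
	logReplacer := strings.NewReplacer(oldnew...)

	fmt.Println(logReplacer.Replace("login with s3cr3t and hunter2"))
	// prints: login with *** and ***
}
```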

71
runtime/runtime.go Normal file

@@ -0,0 +1,71 @@
package runtime
import (
"context"
"strings"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"codeberg.org/forgejo/runner/artifactcache"
"codeberg.org/forgejo/runner/client"
)
// Runner runs the pipeline.
type Runner struct {
Machine string
ForgeInstance string
Environ map[string]string
Client client.Client
Labels []string
CacheHandler *artifactcache.Handler
}
// Run runs the pipeline stage.
func (s *Runner) Run(ctx context.Context, task *runnerv1.Task) error {
env := map[string]string{}
for k, v := range s.Environ {
env[k] = v
}
env["ACTIONS_CACHE_URL"] = s.CacheHandler.ExternalURL() + "/"
return NewTask(s.ForgeInstance, task.Id, s.Client, env, s.platformPicker).Run(ctx, task, s.Machine)
}
func (s *Runner) platformPicker(labels []string) string {
// "ubuntu-18.04:docker://node:16-buster"
// "self-hosted"
platforms := make(map[string]string, len(labels))
for _, l := range s.Labels {
// "ubuntu-18.04:docker://node:16-buster"
splits := strings.SplitN(l, ":", 2)
if len(splits) == 1 {
// identifier for a non-docker execution environment
platforms[splits[0]] = "-self-hosted"
continue
}
// ["ubuntu-18.04", "docker://node:16-buster"]
k, v := splits[0], splits[1]
if prefix := "docker://"; !strings.HasPrefix(v, prefix) {
continue
} else {
v = strings.TrimPrefix(v, prefix)
}
// ubuntu-18.04 => node:16-buster
platforms[k] = v
}
for _, label := range labels {
if v, ok := platforms[label]; ok {
return v
}
}
// TODO: support multiple labels
// like:
// ["ubuntu-22.04"] => "ubuntu:22.04"
// ["with-gpu"] => "linux:with-gpu"
// ["ubuntu-22.04", "with-gpu"] => "ubuntu:22.04_with-gpu"
// return default
return "node:16-bullseye"
}
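
Since `platformPicker` is unexported, a quick way to see the label-to-image mapping in action is a test in the same package; the expected values follow the comments in the code above, and the test itself is an illustrative sketch rather than part of the diff:

```go
package runtime

import "testing"

func TestPlatformPickerSketch(t *testing.T) {
	r := &Runner{Labels: []string{
		"ubuntu-18.04:docker://node:16-buster",
		"self-hosted",
	}}

	// docker:// labels resolve to their image.
	if got := r.platformPicker([]string{"ubuntu-18.04"}); got != "node:16-buster" {
		t.Fatalf("unexpected image: %s", got)
	}
	// labels without docker:// select the self-hosted marker.
	if got := r.platformPicker([]string{"self-hosted"}); got != "-self-hosted" {
		t.Fatalf("unexpected platform: %s", got)
	}
	// unknown labels fall back to the default image.
	if got := r.platformPicker([]string{"with-gpu"}); got != "node:16-bullseye" {
		t.Fatalf("unexpected default: %s", got)
	}
}
```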

266
runtime/task.go Normal file

@@ -0,0 +1,266 @@
package runtime
import (
"bytes"
"context"
"encoding/json"
"fmt"
"os"
"path/filepath"
"sync"
"time"
runnerv1 "code.gitea.io/actions-proto-go/runner/v1"
"github.com/nektos/act/pkg/artifacts"
"github.com/nektos/act/pkg/common"
"github.com/nektos/act/pkg/model"
"github.com/nektos/act/pkg/runner"
log "github.com/sirupsen/logrus"
"codeberg.org/forgejo/runner/client"
)
var globalTaskMap sync.Map
type TaskInput struct {
repoDirectory string
// actor string
// workdir string
// workflowsPath string
// autodetectEvent bool
// eventPath string
// reuseContainers bool
// bindWorkdir bool
// secrets []string
envs map[string]string
// platforms []string
// dryrun bool
forcePull bool
forceRebuild bool
// noOutput bool
// envfile string
// secretfile string
insecureSecrets bool
// defaultBranch string
privileged bool
usernsMode string
containerArchitecture string
containerDaemonSocket string
// noWorkflowRecurse bool
useGitIgnore bool
containerCapAdd []string
containerCapDrop []string
// autoRemove bool
artifactServerPath string
artifactServerPort string
jsonLogger bool
// noSkipCheckout bool
// remoteName string
EnvFile string
containerNetworkMode string
}
type Task struct {
BuildID int64
Input *TaskInput
client client.Client
log *log.Entry
platformPicker func([]string) string
}
// NewTask creates a new task
func NewTask(forgeInstance string, buildID int64, client client.Client, runnerEnvs map[string]string, picker func([]string) string) *Task {
task := &Task{
Input: &TaskInput{
envs: runnerEnvs,
containerNetworkMode: "bridge", // TODO should be configurable
},
BuildID: buildID,
client: client,
log: log.WithField("buildID", buildID),
platformPicker: picker,
}
task.Input.repoDirectory, _ = os.Getwd()
return task
}
// getWorkflowsPath returns the workflows directory; it tries .gitea first and then falls back to .github
func getWorkflowsPath(dir string) (string, error) {
p := filepath.Join(dir, ".gitea/workflows")
_, err := os.Stat(p)
if err != nil {
if !os.IsNotExist(err) {
return "", err
}
return filepath.Join(dir, ".github/workflows"), nil
}
return p, nil
}
func getToken(task *runnerv1.Task) string {
token := task.Secrets["GITHUB_TOKEN"]
if task.Secrets["GITEA_TOKEN"] != "" {
token = task.Secrets["GITEA_TOKEN"]
}
if task.Context.Fields["token"].GetStringValue() != "" {
token = task.Context.Fields["token"].GetStringValue()
}
return token
}
func (t *Task) Run(ctx context.Context, task *runnerv1.Task, runnerName string) (lastErr error) {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
_, exist := globalTaskMap.Load(task.Id)
if exist {
return fmt.Errorf("task %d already exists", task.Id)
}
// add the task to the global map
// when task is done or canceled, it will be removed from the map
globalTaskMap.Store(task.Id, t)
defer globalTaskMap.Delete(task.Id)
lastWords := ""
reporter := NewReporter(ctx, cancel, t.client, task)
defer func() {
// set the job to failed on an error return value
if lastErr != nil {
reporter.Fire(&log.Entry{
Data: log.Fields{
"jobResult": "failure",
},
Time: time.Now(),
})
}
_ = reporter.Close(lastWords)
}()
reporter.RunDaemon()
reporter.Logf("%s received task %v of job %v", runnerName, task.Id, task.Context.Fields["job"].GetStringValue())
workflowsPath, err := getWorkflowsPath(t.Input.repoDirectory)
if err != nil {
lastWords = err.Error()
return err
}
t.log.Debugf("workflows path: %s", workflowsPath)
workflow, err := model.ReadWorkflow(bytes.NewReader(task.WorkflowPayload))
if err != nil {
lastWords = err.Error()
return err
}
var plan *model.Plan
jobIDs := workflow.GetJobIDs()
if len(jobIDs) != 1 {
err := fmt.Errorf("multiple jobs found: %v", jobIDs)
lastWords = err.Error()
return err
}
jobID := jobIDs[0]
plan = model.CombineWorkflowPlanner(workflow).PlanJob(jobID)
job := workflow.GetJob(jobID)
reporter.ResetSteps(len(job.Steps))
log.Infof("plan: %+v", plan.Stages[0].Runs)
token := getToken(task)
dataContext := task.Context.Fields
log.Infof("task %v repo is %v %v %v", task.Id, dataContext["repository"].GetStringValue(),
dataContext["gitea_default_actions_url"].GetStringValue(),
t.client.Address())
preset := &model.GithubContext{
Event: dataContext["event"].GetStructValue().AsMap(),
RunID: dataContext["run_id"].GetStringValue(),
RunNumber: dataContext["run_number"].GetStringValue(),
Actor: dataContext["actor"].GetStringValue(),
Repository: dataContext["repository"].GetStringValue(),
EventName: dataContext["event_name"].GetStringValue(),
Sha: dataContext["sha"].GetStringValue(),
Ref: dataContext["ref"].GetStringValue(),
RefName: dataContext["ref_name"].GetStringValue(),
RefType: dataContext["ref_type"].GetStringValue(),
HeadRef: dataContext["head_ref"].GetStringValue(),
BaseRef: dataContext["base_ref"].GetStringValue(),
Token: token,
RepositoryOwner: dataContext["repository_owner"].GetStringValue(),
RetentionDays: dataContext["retention_days"].GetStringValue(),
}
eventJSON, err := json.Marshal(preset.Event)
if err != nil {
lastWords = err.Error()
return err
}
maxLifetime := 3 * time.Hour
if deadline, ok := ctx.Deadline(); ok {
maxLifetime = time.Until(deadline)
}
input := t.Input
config := &runner.Config{
Workdir: "." + string(filepath.Separator),
BindWorkdir: false,
ReuseContainers: false,
ForcePull: input.forcePull,
ForceRebuild: input.forceRebuild,
LogOutput: true,
JSONLogger: input.jsonLogger,
Env: input.envs,
Secrets: task.Secrets,
InsecureSecrets: input.insecureSecrets,
Privileged: input.privileged,
UsernsMode: input.usernsMode,
ContainerArchitecture: input.containerArchitecture,
ContainerDaemonSocket: input.containerDaemonSocket,
UseGitIgnore: input.useGitIgnore,
GitHubInstance: t.client.Address(),
ContainerCapAdd: input.containerCapAdd,
ContainerCapDrop: input.containerCapDrop,
AutoRemove: true,
ArtifactServerPath: input.artifactServerPath,
ArtifactServerPort: input.artifactServerPort,
NoSkipCheckout: true,
PresetGitHubContext: preset,
EventJSON: string(eventJSON),
ContainerNamePrefix: fmt.Sprintf("GITEA-ACTIONS-TASK-%d", task.Id),
ContainerMaxLifetime: maxLifetime,
ContainerNetworkMode: input.containerNetworkMode,
DefaultActionInstance: dataContext["gitea_default_actions_url"].GetStringValue(),
PlatformPicker: t.platformPicker,
}
r, err := runner.New(config)
if err != nil {
lastWords = err.Error()
return err
}
artifactCancel := artifacts.Serve(ctx, input.artifactServerPath, input.artifactServerPort)
t.log.Debugf("artifacts server started at %s:%s", input.artifactServerPath, input.artifactServerPort)
executor := r.NewPlanExecutor(plan).Finally(func(ctx context.Context) error {
artifactCancel()
return nil
})
t.log.Infof("workflow prepared")
reporter.Logf("workflow prepared")
// add logger recorders
ctx = common.WithLoggerHook(ctx, reporter)
if err := executor(ctx); err != nil {
lastWords = err.Error()
return err
}
return nil
}


@@ -1,9 +0,0 @@
#!/usr/bin/env bash
# wait for docker daemon
while ! nc -z localhost 2376 </dev/null; do
echo 'waiting for docker daemon...'
sleep 5
done
. /opt/act/run.sh


@@ -1,48 +0,0 @@
#!/usr/bin/env bash
if [[ ! -d /data ]]; then
mkdir -p /data
fi
cd /data
CONFIG_ARG=""
if [[ ! -z "${CONFIG_FILE}" ]]; then
CONFIG_ARG="--config ${CONFIG_FILE}"
fi
EXTRA_ARGS=""
if [[ ! -z "${GITEA_RUNNER_LABELS}" ]]; then
EXTRA_ARGS="${EXTRA_ARGS} --labels ${GITEA_RUNNER_LABELS}"
fi
# Use the same ENV variable names as https://github.com/vegardit/docker-gitea-act-runner
if [[ ! -s .runner ]]; then
try=0
success=0
# The point of this loop is to make it simple, when running both forgejo-runner and gitea in docker,
# for the forgejo-runner to wait a moment for gitea to become available before erroring out. Within
# the context of a single docker-compose, something similar could be done via healthchecks, but
# this is more flexible.
while [[ $success -eq 0 ]] && [[ $try -lt ${GITEA_MAX_REG_ATTEMPTS:-10} ]]; do
try=$((try + 1))
forgejo-runner register \
--instance "${GITEA_INSTANCE_URL}" \
--token "${GITEA_RUNNER_REGISTRATION_TOKEN}" \
--name "${GITEA_RUNNER_NAME:-`hostname`}" \
${CONFIG_ARG} ${EXTRA_ARGS} --no-interactive 2>&1 | tee /tmp/reg.log
cat /tmp/reg.log | grep 'Runner registered successfully' > /dev/null
if [[ $? -eq 0 ]]; then
echo "SUCCESS"
success=1
else
echo "Waiting to retry ..."
sleep 5
fi
done
fi
# Prevent reading the token from the forgejo-runner process
unset GITEA_RUNNER_REGISTRATION_TOKEN
forgejo-runner daemon ${CONFIG_ARG}


@@ -1,13 +0,0 @@
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0
[program:dockerd]
command=/usr/local/bin/dockerd-entrypoint.sh
[program:act_runner]
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
command=/opt/act/rootless.sh


@@ -1,67 +0,0 @@
# Forgejo Runner with systemd User Services
It is possible to use systemd's user services together with
[podman](https://podman.io/) to run `forgejo-runner` under a normal,
unprivileged user account and have it start automatically on boot.
This was last tested on Fedora 39 on 2024-02-19, but should work elsewhere as
well.
Place the `forgejo-runner` binary in `/usr/local/bin/forgejo-runner` and make
sure it can be executed (`chmod +x /usr/local/bin/forgejo-runner`).
Install `podman`:
```bash
$ sudo dnf -y install podman
```
You *may* need to reboot your system after installing `podman`, as it modifies
some system configuration that may need to be activated. Without rebooting, my
runner errored out when trying to set firewall rules; a reboot fixed it.
Enable `podman` as a user service:
```bash
$ systemctl --user start podman.socket
$ systemctl --user enable podman.socket
```
Make sure processes remain after your user account logs out:
```bash
$ loginctl enable-linger
```
Create the file `/etc/systemd/user/forgejo-runner.service` with the following
content:
```
[Unit]
Description=Forgejo Runner
[Service]
Type=simple
ExecStart=/usr/local/bin/forgejo-runner daemon
Restart=on-failure
[Install]
WantedBy=default.target
```
Now activate it as a user service:
```bash
$ systemctl --user daemon-reload
$ systemctl --user start forgejo-runner
$ systemctl --user enable forgejo-runner
```
To see/follow the log of `forgejo-runner`:
```bash
$ journalctl -f -t forgejo-runner
```
If you reboot your system, everything should come back up automatically.