Moving to buildx and Gitea Actions from *-docker-compose repos

I’m deprecating the entire build setup used for tt-rss Docker Hub (and my internal registry) images: that unholy abomination with a separate scripts repo, a Jenkins pipeline, multi-repo polling, and, the ugliest part, a two-step build process which involves checking out the repo again while the container is being built.

The whole thing sucks, partly because it organically evolved from running docker-compose build manually into a CI pipeline, without any improvements to the build process itself.

Now that Gitea sorta-kinda supports GHA, I’m planning to throw the whole thing away and replace it with a much less complicated setup, which is going to look similar to this (using Epube as an example):

the-epube/build.yml at master - the-epube - Tiny Tiny RSS - this is the main pipeline that runs on push to master; it lints, builds, and pushes the images using docker buildx.
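
For reference, the overall shape of such a pipeline is roughly this (a sketch only; the action versions, secret names, and lint step are illustrative, not the actual build.yml):

# build.yml sketch: lint, then build and push multiarch images via buildx
name: build
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: make lint                       # hypothetical lint step
      - uses: docker/setup-qemu-action@v2    # emulation for non-native platforms
      - uses: docker/setup-buildx-action@v2
      - uses: docker/login-action@v2
        with:
          registry: registry.fakecake.org    # internal registry
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASS }}
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          platforms: linux/amd64,linux/arm64,linux/arm/v7
          tags: registry.fakecake.org/the-epube/app:latest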

This only uses my internal registry but it’s easy to publish images to Docker Hub in a similar fashion.

Docker stuff is now embedded into the main repository: the-epube/Dockerfile at master - the-epube - Tiny Tiny RSS

Note how, instead of doing the whole song-and-dance with git clone, we just pass the application source dir as a separate build context, because buildx allows us to do just that. Using buildx also allows for cross-platform images, i.e. arm.
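
Concretely, it boils down to something like this (a sketch; the paths and image name are illustrative):

# Dockerfile side: copy the app from the extra 'app-src' build context
# instead of git-cloning it during the build (needs buildx / Dockerfile 1.4+)
FROM alpine:3.13
COPY --from=app-src . /var/www/html/

# build side: point 'app-src' at the already checked-out source tree
docker buildx build . --build-context app-src=/path/to/the-epube \
    --platform linux/amd64,linux/arm64,linux/arm/v7 -t epube:latest --push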

The separate docker scripts repo is no longer going to do anything with static images.

The dynamic setup (i.e. we git pull the source code on re/start) stays as-is.

e: here’s how the pipeline output looks: main/the-epube - the-epube - Tiny Tiny RSS

https://dev.tt-rss.org/tt-rss/tt-rss/actions/runs/7

HEADS UP: tt-rss images are now handled by a similar simplified build pipeline.

if something got broken, you know who to blame. :slight_smile:

multiarch images (finally!):

note that i have no way to verify that those actually work.

Something is messed up, but I’m not exactly sure what.

I pull the images, and running
docker image inspect cthulhoo/ttrss-fpm-pgsql-static | grep Architecture
shows
"Architecture": "arm64",
but it fails to run with exec /bin/sh: exec format error, and sure enough, looking in the image shows
bin/busybox: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, stripped
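
(for what it’s worth, buildx can show what’s actually behind a tag, i.e. whether it’s a proper manifest list and which digest each platform maps to:

docker buildx imagetools inspect cthulhoo/ttrss-fpm-pgsql-static

if the tag turns out to be a single-arch manifest rather than a list, the problem is on the push/registry side rather than at runtime.)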

@Jeff looks like my internal registry is messed up; if I build using Docker Hub in the Dockerfile, everything seems normal:

(ansible-2.12) dev-debian:app (master):$ grep -i targetp Dockerfile 
FROM --platform=$TARGETPLATFORM alpine:3.13
ARG TARGETPLATFORM
RUN echo building for $TARGETPLATFORM $(file /bin/busybox) $(ldd /bin/busybox)
(ansible-2.12) dev-debian:app (master):$ docker buildx build . --build-context=app-src=. --platform linux/amd64,linux/arm64,linux/arm/v7 -t cthulhoo/multiarch-test:latest --progress plain --no-cache 2>&1 | grep buildi
#17 [linux/amd64 stage-0 4/7] RUN echo building for linux/amd64 $(file /bin/busybox) $(ldd /bin/busybox)
#0 0.061 building for linux/amd64 /bin/busybox: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, stripped /lib/ld-musl-x86_64.so.1 (0x7fceee9d0000) libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7fceee9d0000)
#22 [linux/arm64 stage-0 4/7] RUN echo building for linux/arm64 $(file /bin/busybox) $(ldd /bin/busybox)
#0 0.130 building for linux/arm64 /bin/busybox: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-aarch64.so.1, stripped /lib/ld-musl-aarch64.so.1 (0x5500000000) libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0x5500000000)
#23 [linux/arm/v7 stage-0 4/7] RUN echo building for linux/arm/v7 $(file /bin/busybox) $(ldd /bin/busybox)
#0 0.137 building for linux/arm/v7 /bin/busybox: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-armhf.so.1, stripped /lib/ld-musl-armhf.so.1 (0x40000000) libc.musl-armv7.so.1 => /lib/ld-musl-armhf.so.1 (0x40000000)

if i’m using my registry:2 instance, things break:

(ansible-2.12) dev-debian:app (master):$ grep -i targetp Dockerfile 
FROM --platform=$TARGETPLATFORM registry.fakecake.org/alpine:3.13
ARG TARGETPLATFORM
RUN echo building for $TARGETPLATFORM $(file /bin/busybox) $(ldd /bin/busybox)
(ansible-2.12) dev-debian:app (master):$ docker buildx build . --build-context=app-src=. --platform linux/amd64,linux/arm64,linux/arm/v7 -t cthulhoo/multiarch-test:latest --progress plain --no-cache 2>&1 | grep buildi
#17 [linux/arm64 stage-0 4/7] RUN echo building for linux/arm64 $(file /bin/busybox) $(ldd /bin/busybox)
#0 0.063 building for linux/arm64 /bin/busybox: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, stripped /lib/ld-musl-x86_64.so.1 (0x7f0d2e42b000) libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f0d2e42b000)
#22 [linux/amd64 stage-0 4/7] RUN echo building for linux/amd64 $(file /bin/busybox) $(ldd /bin/busybox)
#22 0.061 building for linux/amd64 /bin/busybox: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, stripped /lib/ld-musl-x86_64.so.1 (0x7f2397776000) libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f2397776000)
#25 [linux/arm/v7 stage-0 4/7] RUN echo building for linux/arm/v7 $(file /bin/busybox) $(ldd /bin/busybox)
#0 0.047 building for linux/arm/v7 /bin/busybox: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-x86_64.so.1, stripped /lib/ld-musl-x86_64.so.1 (0x7ff828930000) libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7ff828930000)
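
a quick way to confirm where the manifest list gets lost is to compare what each registry serves for the same tag:

# docker hub copy: should be a manifest list with amd64/arm64/arm/v7 entries
docker buildx imagetools inspect alpine:3.13

# internal copy: if this comes back as a single amd64 manifest instead of a
# list, the mirror/registry is dropping the multiarch metadata
docker buildx imagetools inspect registry.fakecake.org/alpine:3.13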

i’ll try to take a look at this later today, thanks for reporting. :+1:

i should probably switch to Harbor; registry:2 is somewhat opaque in its workings. or maybe it’s just me.

well, i’ve managed to deploy harbor. even if it doesn’t fix multiarch images, it was still worth it. :thinking: registry:2 sucks.

btw

trivy vulnerability scan results for the latest tt-rss fpm image. mostly javascript npm dependency crap (used for building only).

why would those even be in the resulting image? :thinking:


much better without useless node_modules crap:

also reduced the image size by ~20 megs.
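
the general idea, for anyone doing something similar (a sketch with hypothetical stage names and paths, not the actual Dockerfile): run npm in a throwaway stage and copy only the compiled assets into the final image:

FROM node:alpine AS js-builder
COPY --from=app-src . /src
WORKDIR /src
RUN npm install && npm run build    # node_modules stays behind in this stage

FROM alpine:3.13
COPY --from=app-src . /var/www/html/
# only the build output crosses over; node_modules never hits the final image
COPY --from=js-builder /src/dist /var/www/html/dist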

wouldn’t have noticed this if not for harbor and trivy. cool stuff.

It seems to be working, at least for arm64. Pulled down the new images, and it started up right away.

the more i’m screwing around with gitea/act CI, the more i’m thinking of migrating back to gitlab, where everything is already finished and just works.

the CI system gitlab uses is also way better than GHA.

:thinking:

i wonder how much of a PITA maintaining a semi-public instance of gitlab would be.

Alpine does it: alpine · GitLab

imagine doing this
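
(roughly; a sketch of a typical GHA workflow, step names illustrative)

on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3   # a node-based action just to get your own repo
      - run: make lint
      - run: make build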

when you can do this
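
(the .gitlab-ci.yml equivalent, same caveats; the job starts with the tree already checked out)

build:
  script:
    - make lint
    - make build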

and yes, i know all about custom actions and reusable workflows. they suck. GHA syntax and the overall concept are garbage.

the whole idea that you need fucking NODEJS to check out your source code before even doing anything on the pipeline is absolutely idiotic.

gitlab starts the pipeline with the source directory already prepared (and you can check out other stuff if you need it).

p.s. gitea/act has no artifact support either (literally none).