
Leverage GitHub action arm64 runner #2422

Open

tenzen-y opened this issue Feb 6, 2025 · 10 comments

@tenzen-y (Member) commented Feb 6, 2025

What would you like to be added?

I would like to restructure our container image-building mechanism by leveraging the arm64 runner.

Currently, we build the multi-arch container image with QEMU, and the emulation gives us painfully long build times.
I expect that we can shorten the CI duration once we build the multi-arch image on multiple GitHub Actions runners (amd64 and arm64 natively).
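
As a rough sketch of the idea (runner labels and action versions are illustrative assumptions, not a committed design), each architecture would be built natively in a job matrix instead of under QEMU:

    # Sketch only: build each architecture on its native GitHub-hosted runner.
    jobs:
      build:
        strategy:
          matrix:
            include:
              - platform: linux/amd64
                runner: ubuntu-24.04
              - platform: linux/arm64
                runner: ubuntu-24.04-arm   # assumed arm64 runner label
        runs-on: ${{ matrix.runner }}
        steps:
          - uses: actions/checkout@v4
          - uses: docker/setup-buildx-action@v3
          - name: Build image natively for ${{ matrix.platform }}
            uses: docker/build-push-action@v6
            with:
              platforms: ${{ matrix.platform }}
              push: false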

Why is this needed?

This should allow us to reduce image-building and CI time.

Love this feature?

Give it a 👍 We prioritize the features with the most 👍

@tenzen-y (Member, Author) commented Feb 6, 2025

/remove-label lifecycle/needs-triage

@mahdikhashan (Member)

@tenzen-y Can I help with this? I worked on a CI issue on Katib as well.

@tenzen-y (Member, Author)

> @tenzen-y Can I help with this? I worked on a CI issue on Katib as well.

If you can take this, I would really appreciate it.

@manojks1999

.take-issue

@mahdikhashan (Member)

/assign

@thesuperzapper (Member)

I am trying to figure out how you build the same image tag on ARM64 and AMD64 runners, then push under the same multi-arch tag manifest.

I guess you might be able to set up a "remote buildx builder" in one runner and call it from the other, because I think you need to push at the same time; otherwise you overwrite the tag.

Or I might be missing something obvious.
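
One pattern that avoids overwriting the tag (a sketch only, with a hypothetical image name; the per-arch digests would be passed between jobs via outputs or artifacts) is to have each native runner push by digest, then merge the digests into a single multi-arch tag in a final job:

    # Per-arch job: push by digest only, so no tag is ever overwritten.
    - name: Build and push by digest
      uses: docker/build-push-action@v6
      with:
        platforms: ${{ matrix.platform }}
        outputs: type=image,name=example.io/org/app,push-by-digest=true,name-canonical=true,push=true

    # Separate merge job (needs: build): combine the digests under one tag.
    - name: Create multi-arch manifest
      run: |
        docker buildx imagetools create -t example.io/org/app:latest \
          example.io/org/app@${AMD64_DIGEST} \
          example.io/org/app@${ARM64_DIGEST}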

@thesuperzapper (Member)

Related kubeflow/notebooks#216

Also, I was a bit confused about the ability to use "remote buildx builders", because that would require jobs to be able to talk to each other, which is not currently possible in GHA.

I did ask upstream about the possibility of running a service (a persistent container for the life of a job) on the ARM nodes, and then accessing it from an x86 node job.
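
For reference, registering a remote buildx builder looks roughly like this (the endpoint is hypothetical; as noted above, one GHA job cannot reach another job's network, so the BuildKit daemon would have to run somewhere externally reachable):

    - name: Register a remote BuildKit daemon as a buildx builder
      run: |
        # The "remote" driver attaches buildx to an already-running BuildKit instance.
        docker buildx create --name arm64-remote --driver remote tcp://buildkitd.example.com:1234
        docker buildx use arm64-remote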

@astefanutti (Contributor) commented Feb 18, 2025

For Golang binaries, the fastest and easiest way to build multi-architecture images is to rely on the Go cross-platform build support and use docker/build-push-action, all within the same hosted runner, e.g.:

    - name: Build linux/amd64 binary
      env:
        GOOS: linux
        GOARCH: amd64
      run: |
        GOOS=$GOOS GOARCH=$GOARCH go build -a -o manager-$GOARCH main.go
      working-directory: ${{env.working-directory}}

    - name: Build linux/arm64 binary
      env:
        GOOS: linux
        GOARCH: arm64
      run: |
        GOOS=$GOOS GOARCH=$GOARCH go build -a -o manager-$GOARCH main.go
      working-directory: ${{env.working-directory}}

    - name: Build Multi-arch Image
      uses: docker/build-push-action@v5
      with:
        platforms: linux/amd64,linux/arm64
        context: ${{env.working-directory}}
        file: ${{env.working-directory}}/Dockerfile.buildx
        push: ${{env.PUSH}}
        tags: |
          quay.io/${{env.REPO_ORG}}/${{env.REPO_NAME}}:${{ steps.vars.outputs.sha_short }}
          quay.io/${{env.REPO_ORG}}/${{env.REPO_NAME}}:${{ github.event.inputs.tag }}

This even works with CGO.

This solves the initial performance issue with QEMU, without introducing too much complexity.
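
For completeness, a minimal sketch of what a Dockerfile.buildx along these lines might look like (the actual file is not shown in this thread; it assumes the per-arch manager-amd64 / manager-arm64 binaries were built into the context as in the steps above):

    # Hypothetical Dockerfile.buildx: buildx sets TARGETARCH per platform,
    # so each platform image copies the matching pre-built binary.
    FROM gcr.io/distroless/static:nonroot
    ARG TARGETARCH
    WORKDIR /
    COPY manager-$TARGETARCH /manager
    USER 65532:65532
    ENTRYPOINT ["/manager"]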

@mahdikhashan (Member)

Thank you all for the information, @astefanutti @thesuperzapper.

@tenzen-y (Member, Author)

> For Golang binaries, the fastest and easiest way to build multi-architecture images is to rely on the Go cross-platform build support and use docker/build-push-action, all within the same hosted runner. [...] This solves the initial performance issue with QEMU, without introducing too much complexity.

Good point. Actually, I considered that solution, but we also have Python images. So I proposed creating a GitHub Actions workflow template that leverages the arm64 runner, and then using that template for all image builds.
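
A minimal sketch of what such a reusable template could look like (the file name, inputs, and runner labels are assumptions for illustration, not the actual template):

    # .github/workflows/build-image-template.yaml (hypothetical)
    name: build-image-template
    on:
      workflow_call:
        inputs:
          image:
            required: true
            type: string
          dockerfile:
            required: true
            type: string
    jobs:
      build:
        strategy:
          matrix:
            include:
              - { platform: linux/amd64, runner: ubuntu-24.04 }
              - { platform: linux/arm64, runner: ubuntu-24.04-arm }  # assumed arm64 label
        runs-on: ${{ matrix.runner }}
        steps:
          - uses: actions/checkout@v4
          - uses: docker/setup-buildx-action@v3
          - name: Build ${{ inputs.image }} for ${{ matrix.platform }}
            uses: docker/build-push-action@v6
            with:
              file: ${{ inputs.dockerfile }}
              platforms: ${{ matrix.platform }}
              outputs: type=image,name=${{ inputs.image }},push-by-digest=true,name-canonical=true,push=true
    # A follow-up merge job would combine the per-arch digests under one tag,
    # as in the imagetools sketch earlier in the thread.

Each image workflow (Go or Python) would then call it with `uses: ./.github/workflows/build-image-template.yaml` and its own image and Dockerfile inputs.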
