Until he actually had to use it.
Took 2 hours of reading through examples just to deploy the site.
Turns out, it is hard to do even just the bash
stuff when you can’t see the container.
Instead of using up time/$$ on GitHub Actions, you should try running the workflow locally to make sure everything works before committing: https://github.com/nektos/act
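Rough usage, assuming Docker is running and your workflows live under .github/workflows/ (the job name and secret variable here are made up):

```shell
act -l                          # list the jobs act detected in your workflows
act push                        # run the workflow as if a push just happened
act pull_request -j deploy      # run only one job ("deploy" is a made-up name)
act -s GITHUB_TOKEN="$MY_PAT"   # pass a secret when steps need the GitHub API
```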
GitHub CI still feels like an alpha project sometimes. Certain stacks look like they are supported, but it can be difficult to do the same things as other CI tools (like GitLab/CircleCI/etc…), such as running things locally. Their tool will get you 95% of the way there. Other tools (GitLab/CircleCI) will also allow you to SSH into the box itself, which is extremely useful when debugging scripts/processes.
My personal opinion is that GitHub Actions is a work in progress, given the state of much of the community tooling. GitLab has much better tools. But this is a great learning experience for sure. And the more projects that use CI/CD, the better!
allow you to ssh into the box itself (Gitlab/Circleci)
In that case, things just get way easier. I can just check it out like a normal system.
Yep! After doing CI/CD for close to 10 years, it’s one of the things Travis/CircleCI/GitLab have done that makes it soooo much easier to debug. Saves time and sanity. Because as much as we hate it, sometimes the only way to debug is to actually dig into the system you’re working under.
Docker helps as well.
How did you find one of my GitHub repos?
I just used Google to search “zangoose github” and one of your github.io sites popped up.
That’s how I found your GitHub.
I don’t have any GitHub.io sites, but I appreciate the joke :)
zangoose github
Oh, I might have mistaken a GitHub site talking about you for your site.
So, I guess I haven’t found your GitHub.
Zangoose is a Pokémon; there are probably hundreds of sites with it.
I get it. There are probably hundreds of sites with you on them.
Time for the yearly barrage of “Setup CI”…“Fix CI” commits.
That is my experience with basically every CI service out there.
Normally, you don’t want to commit code unless it’s been at least minimally tested, and preferably more than that.
All the CIs, however, force a workflow where you can only test by committing the code and seeing if it works. I’m not sure how to fix that, but I see the problem.
If you can test it on a feature branch then at least you can squash or tidy the commits after you’ve got them working. If you can only test by committing to main though, curse whoever designed that.
Well, it does have triggers for other branches:
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
So, most probably would have a way to run it on other branches.
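Right — e.g. a hypothetical tweak like this (the glob is an assumption about your branch naming) runs the same workflow on feature branches too, so it can be tested before anything reaches main:

```shell
# Add a branch glob to the push trigger so feature branches run CI as well.
mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
on:
  push:
    branches: [ "main", "feature/**" ]   # ** matches nested branch names too
  pull_request:
    branches: [ "main" ]
EOF
```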
You can also use the workflow_dispatch execution pattern with some data input params and execute through the portal interface. However, do be careful about trusting input params without sanitizing them (GH has docs around this).
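A rough sketch of what that can look like (file name and input are made up); passing the input through an env var, rather than splicing ${{ }} straight into the run line, is the pattern GH’s docs suggest against script injection:

```shell
# A manually-triggered workflow with one typed input.
mkdir -p .github/workflows
cat > .github/workflows/manual.yml <<'EOF'
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Deploy target"
        required: true
        default: "staging"

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to $TARGET"
        env:
          TARGET: ${{ inputs.environment }}   # passed via env, not inlined
EOF
```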
Thanks, I’ll look into that.
While trying this time (as you can see in one of the commits), I added workflow_dispatch in the wrong place, causing a problem. Later realised that it is part of the on section.
Like the other commenter said, there’s nothing wrong with committing temp/untested code to a feature branch as long as you clean it up before the PR.
There are issues that come up in niche cases. If you’re using git bisect to track down a bug, a non-working commit can throw that off.
You might have misunderstood what I meant by “clean up before the PR.” None of the temp commits should end up in the main branch, where people would be bisecting.
Here’s my hot tip! (ok, maybe lukewarm)
Write as much of your CI/CD in a scripting language like bash/python/whatever. You’ll be able to test it locally, and then the CI/CD side is just setting up the environment so it has the right git branches cloned, permissions, etc.
You won’t need to do 30 commits now, only like 7! And you’ll cry for only like 20 minutes instead of a whole afternoon!
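A minimal sketch of that split (the script name and steps are invented): the script carries all the logic and runs anywhere, so the workflow yaml shrinks to a single line that calls it:

```shell
# All CI logic lives in one plain script that can be run locally first.
cat > ci.sh <<'EOF'
#!/bin/sh
set -eu                  # fail fast, treat unset variables as errors
echo "==> lint"
# shellcheck ./*.sh      # substitute your real lint step
echo "==> test"
# ./run_tests.sh         # substitute your real test step
echo "==> build"
echo "all steps passed"
EOF
chmod +x ci.sh
./ci.sh                  # the exact invocation CI would use
```

The workflow step then becomes just `run: ./ci.sh`, with nothing in the yaml worth debugging remotely.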
Aggressively seconding this. If you can just do a step in a bash command, do that, don’t use the stupid yaml wrapper they provide that actually just turns around and runs the same bash command but with extra abstraction to learn, break, fix, and maintain for stupid, meaningless upgrades. It will save you time because you’ll be using better-tested, more widely-used tools and approaches.
Yeah, I think that’s the best that can be done right now.
It also leads to a different question: do we really need these fancy systems, or do we need a bunch of bash scripts with a cronjob or monitors to trigger the build?
In my last workplace, I was responsible for building whatever automation I wanted (others just did everything manually), and I just appended a bunch of bash scripts to the Qt Creator Build and Run commands. It worked pretty well.
I guess the fancy systems are, again, just another layer of abstraction, since everything runs on their containers instead of ours.
We have all of our build and CI in make, so, theoretically, all the CI system needs to do is run a single command. Then I try to run the command on a CI server: it is missing an OS package (and their package manager version is a major version behind, so I need to download a pre-built binary from the project site). Then the tests get killed for using too much memory. Then, after I reduce the resource usage, the tests time out…
I am grateful that we use CircleCI as our SaaS CI/CD, and they let me SSH onto a test container so I can see what is going on.
We test our code locally, but we cannot test the workflow. By definition, testing the workflow has to be done on a CI-like system.
There is nektos/act for running GitHub Actions locally; it works for simple cases, but there are still many differences between act and GitHub Actions.
It might be possible for a CI to define workflow steps using Containerfile/Dockerfile. Such workflows would be reproducible locally.
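A hypothetical sketch of that idea: put the job’s steps in a Containerfile, so building the image locally and on the CI runner executes identical layers (the base image and the entry-point script name are assumptions):

```shell
# The whole CI job expressed as container build layers.
cat > Containerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache bash git
WORKDIR /src
COPY . .
RUN ./ci.sh            # whatever single entry point the project exposes
EOF
# Locally: podman build -f Containerfile .   (or docker build)
```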
Every time I create a new repo, haha. I usually just delete the runs and squash the commits so it looks like I got it right the first time.
Missing a few “.” or “please work” commit messages.
In those cases, I just use amend.
It’s a new website after all; nobody is pulling that.
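A throwaway demo of that trick: the fix gets folded into the previous commit, so history never shows a “Fix CI” commit at all:

```shell
# Toy repo: commit once, then amend instead of adding a second commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'old: wrong' > ci.yml
git add ci.yml
git commit -qm "Setup CI"
echo 'new: fixed' > ci.yml            # the change that would have been "Fix CI"
git add ci.yml
git commit -q --amend --no-edit       # rewrites the previous commit in place
git rev-list --count HEAD             # still just one commit
```

If the old commit was already pushed, a `git push --force-with-lease` is needed afterwards — fine when nobody else is pulling the branch.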
Also ‘iwghrfuiowqg’ if it’s 6 am, higher brain function has been fried, plus you’re angry.
This also used to happen to me.
What I recommend is to create a private repo with the same content, create and test the workflow file there, and copy it back into the main repo when you get it to work.
There’s a vscode extension, I think called GHA, which validates your workflow yaml inline so you can avoid a lot of that trial and error.
GitHub CI is great. Too great. Some devs have taken it upon themselves to attempt to wield an unwieldy power.
and unfortunately at my work it is my job to fix that unwieldy power
Guess GitHub can now claim to have created a lot more jobs.
Next, for me to check out GitLab CI.
And then keep a minimalist git serving solution for my own use.
Unless I’m doing a simple bash or pwsh script, I prefer to use GHA Script because of how things are translated down: missing quotes/slashes/etc. can cause massive headaches.
I’ve been meaning on spending a morning getting Nektos/ACT running.
I was just going to say I need to find a way to run it all on my system to learn it. If this can do it without actually having to push to GitHub, it would be really good for practice.
Act works out pretty well, but you need to pass it a token and stuff so the actual GitHub CLI bits can work, which is kind of a hassle. It took me much too long to discover you need a classic token; the one from the GitHub CLI app (gh auth token) won’t work.
Edit: Ah! Also, getting act set up involved getting Docker set up, which involved me enabling virtualization in my BIOS for what I swear is like the 4th time I’ve done so. Also, because I’m on Windows (iirc at least), I had to set up WSL or just make a Windows container ಠ_ಠ
You also need to know what the internal GitHub event JSON looks like. Using act was such a pain I just gave up. I have tried several times now, and it’s just easier to create a second repo just for testing and overwrite it with your current repo any time you need to make major workflow changes.
Docker issues are always fun. I’ve repeatedly run into Docker/Kubernetes SSL certs being blocked by my ISP because they are dumb. Recently switched to an ISP that actually lets me have that control.
Feel you bro. Been there (and probably “about to be there soon”) too.