Edgecase 2022: Kubernetes at the edge
Finally, after the pandemic abruptly changed every meetup and conference into a video conference, in-person events are possible again! The first conference I attended since early 2020 was Edgecase 2022, organized by Fullstaq and focusing on running Kubernetes at the edge.
This is not directly what we are doing or intend to do, since we use AWS. Nevertheless, the underlying techniques might still be applicable. Furthermore, it never hurts to look at things from a different perspective, and it’s great to interact with peers who are equally enthusiastic about the technology they use.
Characteristics of running ‘at the edge’ include:
- Slow or flaky network connectivity with centralized or cloud-based systems
- Low compute/memory specs
- Need to avoid having to transfer large amounts of data generated at the edge
I won’t go over the individual talks but instead address some of the topics that stood out to me.
ArgoCD
ArgoCD is a well-established name in the Kubernetes ecosystem. It’s a tool that enables managing applications via GitOps, and I’ve heard quite a few positive things about it.
Now GitOps is a bit of a buzzword that might mean different things to different people. My take, at the risk of cutting corners: “Adhering to the best practices of Infra-as-Code, Config-as-Code and automation, leveraging the power of Git for collaboration and context via pull requests, history and commit messages”. Something like that.
Within GitOps there’s the push-based variant: adding pipelines that apply changes from Git to the target environment. Another approach is pull-based GitOps: a component running inside the target environment continuously monitors the Git repo and ensures the environment always reflects what’s in Git. This is what ArgoCD offers by running inside a cluster, or multiple clusters.
Rephrased in the context of Kubernetes:
- Kubernetes components continuously keep the state of the cluster in accordance with the manifests that are applied.
- Similarly, ArgoCD continuously keeps the manifests applied in the cluster in accordance with what is in Git.
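To make this a bit more tangible: the unit ArgoCD manages is an Application, itself just a Kubernetes resource. A minimal sketch (the repo URL, names and paths are made up for illustration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/org/my-app-config.git  # hypothetical repo
    targetRevision: main
    path: k8s                 # directory in the repo holding the manifests
  destination:
    server: https://kubernetes.default.svc  # the cluster ArgoCD itself runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true             # delete resources that were removed from Git
      selfHeal: true          # revert manual changes made outside of Git
```

With `selfHeal` enabled even a manual `kubectl edit` gets reverted, which is exactly the reconciliation behavior described above.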
Some of the downsides of pipelines that ArgoCD remediates:
- Risk of the actual state drifting from the Git state: the “where is the truth” challenge. This depends on the scope covered by the pipeline, and on the access engineers (or systems) have to the infrastructure. An aggravating factor: no Git commits means no applying, which means no reconciliation.
- Granting a CI/CD pipeline read/write access to the target environment is arguably harder than providing a system read-only access to a Git repository.
- Scalability. An increasing amount of clusters or applications does not result in a similar increase in number of deployment pipelines.
Now it’s good to acknowledge that pipelines won’t become redundant: there are still tests to run and artifacts to build. That stays the same. However, deploys can change from issuing a batch of API calls to updating the desired state in Git.
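To sketch what such a deploy could look like, assuming the manifests are managed with Kustomize (image name and tag are made up): CI builds and pushes the image, then commits a one-line change like the `newTag` below, and ArgoCD takes care of the rest.

```yaml
# kustomization.yaml — the deploy is now a Git commit bumping newTag
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
images:
  - name: registry.example.com/my-app  # hypothetical image name
    newTag: "1.4.2"                    # CI commits the new version here
```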
Another observation is that it benefits Kubernetes deploys only. So if other platforms such as AWS Lambda get thrown in the mix, the advantage becomes less clear-cut.
Specific to Kubernetes at the edge, some interesting techniques were illustrated that showcase how ArgoCD is a perfect fit for such cases.
One can set up a primary cluster with good connectivity to the center of command (office, cloud). In the remote locations (the spokes), a local ArgoCD installation keeps applications in sync using a local copy of the Git repo. This way the remote ArgoCD instance always has a reliable Git repo available and can tolerate unreliable internet connectivity.
So, centralized Git commits get synced to the remote locations and ArgoCD takes it from there. Metrics and logs follow the reverse direction: both Prometheus and Loki running at the edge store data in files that span a couple of hours. Once persisted, these files can be shipped to a centralized location where, using Thanos, dashboards can provide info about the state of all the edge locations, albeit with some delay.
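The shipping typically goes through object storage: a Thanos sidecar next to each edge Prometheus uploads the finished (roughly two-hour) TSDB blocks using an object store config. A minimal sketch, assuming an S3-compatible store (bucket and endpoint are made up):

```yaml
# objstore.yml for the Thanos sidecar at the edge
type: S3
config:
  bucket: edge-metrics       # hypothetical central bucket
  endpoint: s3.example.com   # hypothetical S3-compatible endpoint
  access_key: "<ACCESS_KEY>"
  secret_key: "<SECRET_KEY>"
```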
K3S
Getting a lot of attention at Edgecase was K3S, a lightweight Kubernetes distribution. It’s certified (meaning: fully compatible with its big brother Kubernetes) yet simpler, packing all of the moving parts of Kubernetes into a single binary.
As the docs put it:
Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB.
Great for:
* Edge
* IoT
* CI
* Development
* ARM
* Embedding K8s
* Situations where a PhD in K8s clusterology is infeasible
Some of these advantages are reduced when comparing K3S with Kubernetes managed by cloud vendors, such as EKS, AKS and GKE. Regardless, being able to throw the yaml we love and loathe at a Raspberry Pi in pretty much the same way as at a big cluster in the cloud showcases the versatility of Kubernetes.
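To stress that point: there is nothing K3S-specific about the manifests. A vanilla Deployment like the sketch below applies unchanged to a K3S cluster on a Raspberry Pi and to a managed cluster in the cloud.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.25  # any container image; nothing edge-specific here
```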
One of the cases presented illustrated K3S running on small servers right next to greenhouses. Data collected in the greenhouses is processed on location, allowing a much smaller set of data to be moved around. K3S provides the means for an easy-to-maintain cluster at the edge.
Leafcloud
Leafcloud focuses on running compute in a more energy-efficient way. They do so by running compute in ‘Leaf sites’, where the heat generated by the servers can be used to replace fossil fuels. For security reasons, data is stored in a more traditional datacenter, and fiber-optic lines connect the data to the Leaf sites.
Leafcloud is based on OpenStack, so with tools like Terraform it can be provisioned in a similar way to more traditional clouds.
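Staying with yaml for a moment: besides Terraform, OpenStack also ships its own orchestration service, Heat, which declares infrastructure in templates. A minimal sketch booting a single server (the image, flavor and network names are made up):

```yaml
heat_template_version: 2018-08-31
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22.04   # hypothetical image name
      flavor: m1.small      # hypothetical flavor
      networks:
        - network: private  # hypothetical network name
```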
It’s unlikely one would migrate an entire infrastructure built on the 200+ services provided by AWS to run on Leafcloud. However, for specific cloud-agnostic workloads (if there is such a thing) it could be relatively easy.
Regardless, it’s an interesting, innovative concept that addresses the important topic of sustainability¹.
Even without moving to Leafcloud, there are still plenty of things we can do to avoid wasting energy, such as:
- Right-sizing, with serverless offerings like AWS Lambda as the prime example (see the sketch below for the Kubernetes angle)
- Turning off things we don’t use
- Using efficient programming languages
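On the Kubernetes side, right-sizing starts with resource requests and limits, so the scheduler can pack nodes efficiently instead of reserving capacity that sits idle. A minimal sketch (the numbers are made up; measure before picking them):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: right-sized
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: 100m        # what the scheduler reserves for this container
          memory: 128Mi
        limits:
          memory: 256Mi    # hard cap; exceeding it means an OOM-kill
```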
Concluding
One of the take-aways is that the ‘pets vs. cattle’ paradigm applies to Kubernetes clusters in a similar way as to virtual machines: operating one or many clusters, short- or long-lived, should hardly make a difference. Another is the versatility of the platform: same API, totally different environments.
I really like this scale of event, which in a lot of ways reminded me of the September 2019 Kubernetes Community Days: small scale (~200 people), nice venue, single day, single track (no FOMO), not too expensive (in this case even free) and well organized.
Simply a lot to like. So thanks to all the people who put hard work into organizing it. Till the next one!
¹ Even if one is a ‘climate-sceptic’, recent events illustrate the strategic value of needing to buy less energy.