KubeDR Going Strong – Enhanced with New Features
It has been slightly more than a month since Catalogic released KubeDR. Since then, we have been busy adding features and making improvements to the project inspired by all the feedback we’ve received from the community. We are very excited to share all the changes that went into KubeDR since its release on January 15.
In the first release, we only supported restore in a disaster recovery scenario, using a separate Python utility. With the latest changes, it is now possible to restore etcd snapshots and certificate files in a running cluster. Regular Restore allows you to restore certificates and an etcd snapshot simply by creating a custom resource. The assumption is that the cluster is up and running but you need access to this data for one reason or another.
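As a rough sketch of what triggering a restore this way might look like, consider a custom resource along the following lines. The kind and field names here are illustrative assumptions, not the exact KubeDR schema; please consult the KubeDR documentation for the real resource definitions.

```yaml
# Hypothetical restore custom resource: creating it asks the KubeDR
# controller to restore an etcd snapshot and certificate files from a
# previously configured backup location into a running cluster.
apiVersion: kubedr.catalogicsoftware.com/v1alpha1
kind: MetadataRestore
metadata:
  name: restore-sample
spec:
  backupName: backup-2020-02-20   # which backup session to restore from (assumed field)
  destDir: /restore               # where restored files should be placed (assumed field)
```

Once applied with `kubectl apply -f`, the controller would pick up the resource and run the restore, with progress reported back through the resource itself.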
Another advancement we’ve been able to make is in the area of metrics. KubeDR exposes several metrics that can be scraped with Prometheus and visualized using Grafana. Most of the metrics deal with internal implementation details, but the RED metrics provide very useful information to users. These include:
- kubedr_backup_size_bytes (Gauge)
- kubedr_num_backups (Counter)
- kubedr_num_successful_backups (Counter)
- kubedr_num_failed_backups (Counter)
- kubedr_backup_duration_seconds (Histogram)
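To show how these metrics can be put to work, here is a minimal Prometheus scrape configuration. The job name, service name, and port below are assumptions for illustration; point the target at wherever the KubeDR controller actually exposes its metrics endpoint.

```yaml
# prometheus.yml fragment - scrape the KubeDR metrics endpoint.
scrape_configs:
  - job_name: kubedr              # arbitrary label for this target set
    scrape_interval: 30s
    static_configs:
      - targets:
          # Assumed metrics service and port for the KubeDR controller.
          - kubedr-controller-manager-metrics-service.kubedr-system:8443
# Example PromQL once the metrics are flowing:
#   backup failure ratio:
#     kubedr_num_failed_backups / kubedr_num_backups
#   95th percentile backup duration over the last hour:
#     histogram_quantile(0.95, rate(kubedr_backup_duration_seconds_bucket[1h]))
```

The counters lend themselves to alerting (e.g., on a rising failure ratio), while the duration histogram and size gauge are natural candidates for Grafana panels.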
Some other features added in this release include:
- KubeDR now sets the “status” section of all its resources with detailed information about operations such as repo initialization, backup, and restore. All Kubernetes objects have a “spec” and a “status”: “spec” is meant to convey user intent, while “status” is supposed to reflect the actual state of the resource in the cluster. It is typically used by cluster components to convey information to the user. Please see the status update documentation for details.
- KubeDR now generates Kubernetes events after every operation (such as backup and restore). A description of all the events generated by KubeDR can be found in the events documentation.
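To make the status feature concrete, here is a sketch of what the “status” section of a backup resource might look like after a run. The field names are illustrative assumptions; the actual fields are listed in the status update documentation.

```yaml
# Hypothetical "status" stanza populated by the KubeDR controller
# after a backup attempt; all field names below are assumptions.
status:
  backupStatus: Completed            # outcome of the most recent backup
  backupTime: "2020-02-20T10:15:00Z" # when the backup finished
  observedGeneration: 2              # spec generation this status reflects
```

The corresponding events for the same operations can be inspected with `kubectl get events` or in the output of `kubectl describe` on the resource.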
In addition to all the above features, we have also fixed a few bugs and made many robustness improvements. Apart from adding features to KubeDR, we are also actively experimenting with the best way to implement custom resources in Kubernetes. KubeDR is built using the kubebuilder project, which simplifies the implementation of custom resources.
However, there are still quite a few challenges in implementing Kubernetes controllers. Specifically, how should controllers communicate with the pods that they spawn? The pods are the ones that do the real work, such as repo initialization, backup, and restore. Exactly how we do this and the design choices we make will be explained in a future blog post, but we are very excited to be working with this cutting-edge technology.
We always welcome and encourage feedback, so please give it a try and let us know what you think.