Enhance Harbor Helm Chart: Granular Persistence Configuration for Individual Components #1912
You can granularly disable persistent volumes with the existing Helm chart. Here are examples of how to do that for PostgreSQL, Jobservice, and Registry; following a similar approach, you can also disable the PVCs for Trivy and Redis. Note, however, that if you disable the creation of a PVC you will need to run that service externally. For PostgreSQL, if you look at the StatefulSet template, you can see that the StatefulSet is created only if you are using the internal database.
This means that you can configure the database parameters and use an external database. The configuration looks something like this:

```yaml
database:
  type: external
  external:
    host: <postgres-server-fqdn>
    port: 5432
    username: <postgres-user>
    existingSecret: <secret-name-with-credentials>
    sslmode: <sslmode>
    coreDatabase: <db-name>
```

Jobservice can store logs in three different ways: file, database, and stdout. By default it is configured to store logs to a file, which requires an additional PVC for the jobservice pods. Similar to PostgreSQL, the PVC only gets created when jobservice is configured to store logs on the filesystem.
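Since stdout is one of the three supported log destinations, it is also worth noting that selecting it avoids the jobservice PVC entirely. A minimal values fragment for that (a sketch, using the same `jobLogger` key as the database example below):

```yaml
jobservice:
  jobLogger:
    - stdout
```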
Alternatively, you can persist jobservice logs in the database, or simply output them to stdout. If you want to store them in the database, set your values like this:

```yaml
jobservice:
  jobLogger:
    - database
```

Registry can store data on the filesystem or in several types of blob storage. Again, if you look at the templates, the PVC only gets created if the registry is configured to use filesystem storage.
If you want to use S3 or an S3-compatible storage such as MinIO, you can use a configuration similar to this:

```yaml
persistence:
  imageChartStorage:
    type: s3
    s3:
      region: "minio"
      existingSecret: <secret-name-with-credentials>
      regionendpoint: https://<minio-server-fqdn>
      bucket: harbor
      secure: true
      skipverify: false
      disableredirect: false
```
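For completeness, the `existingSecret` referenced above holds the S3 credentials. A sketch of such a Secret follows; the key names are an assumption based on the registry's S3 environment variables, so check the chart's documentation for the exact keys it expects:

```yaml
# Hypothetical Secret for the S3 credentials referenced by existingSecret.
# Key names assumed from the registry's S3 driver env vars; verify against the chart docs.
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name-with-credentials>
type: Opaque
stringData:
  REGISTRY_STORAGE_S3_ACCESSKEY: <access-key>
  REGISTRY_STORAGE_S3_SECRETKEY: <secret-key>
```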
Hi, thank you for your detailed explanation. I appreciate the time you took to provide examples for PostgreSQL, Jobservice, and Registry. The primary intent behind my feature request is to simplify and enhance the user experience when configuring persistence for individual components. While it is true that the current Helm chart allows disabling persistence by using external services or changing specific configurations, these approaches can be non-intuitive and require additional external infrastructure or adjustments.

My proposal aims to provide a more user-friendly and granular configuration option directly in the Helm chart, allowing persistence to be toggled independently for each component without resorting to external services or additional steps. This would make the chart more flexible. Thank you again for your response.
Hi, I agree that the current approach to granular configuration is not the most user-friendly, but is there even an alternative? If I understand you correctly, you are proposing a feature that would allow the user to toggle the PVC of individual components while also opting out of the external service that is otherwise supported by the chart.

As I see it, this feature could lead to system inconsistency, because some components would store data on persistent storage while others would store it on ephemeral storage. For example, if you disabled persistence for PostgreSQL and opted out of using an external PostgreSQL server, all your database state would be ephemeral. Now let's say you push an image to Harbor. The image metadata would be stored in PostgreSQL and the image blob would be stored on the registry's persistent volume. While the pods continue to run, this configuration would work fine, but what would happen if your pods restarted? The data stored in PostgreSQL would be lost, meaning you would not be able to pull the image or see it in the GUI, while the actual image blob would still exist. If your use case is testing, in my opinion you should just disable persistence for all components.
Hello, In our particular case, we use external services for most of the components:
The reason for wanting more granularity in the persistence configuration stems from our workflow with Trivy. We have a very large volume of images in Harbor, but scans are only performed once a month. When the monthly scan is triggered, the Harbor deployment scales up multiple Trivy replicas, which in turn generate multiple PersistentVolumeClaims (PVCs) just to store the Trivy database and cache during the scan. The problem: about 20 PVs are created when Trivy scales.
Is your feature request related to a problem? Please describe.
Currently, the Harbor Helm chart offers limited configurability when enabling persistence. Users can only enable persistence globally, which applies to all components of Harbor. This limitation can be frustrating when users want to enable persistence for specific components, such as the registry, Trivy, Jobservice, database (PostgreSQL), or Redis, without applying it to the entire stack. This lack of granularity makes it difficult to optimize storage usage and costs according to specific requirements.
Describe the solution you'd like
I propose that the Harbor Helm chart introduce the ability to configure persistence on a per-component basis. This means that in the values.yaml file, users should be able to enable or disable persistence independently for each component, for example:

- registry
- Trivy
- Jobservice
- database (PostgreSQL)
- Redis
This granular approach would allow users to better align storage configuration with their operational needs and resources.
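To illustrate the shape of the proposal (the per-component keys below are hypothetical and do not exist in the current chart), the values.yaml could look something like:

```yaml
# Hypothetical proposed structure -- not part of the current Harbor chart.
persistence:
  enabled: true  # global default
  persistentVolumeClaim:
    registry:
      enabled: true
    jobservice:
      enabled: true
    database:
      enabled: false  # e.g. when running the database ephemerally
    redis:
      enabled: false
    trivy:
      enabled: false  # avoid one PVC per replica during scheduled scans
```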