In One-Click you can create a new project or manage existing ones. Each project gets a unique ID that identifies it inside the Kubernetes cluster, and a dedicated namespace with the same name as the project ID.
To create a new project, log in to the One-Click platform and click the New Project button in the top right corner. Fill in the project name and description, select a blueprint to use for the project, and click the Create project button to create the project.
You can also set some labels for the project. Labels are used to group projects together and can be used to filter projects in the project list.
Each project has its own overview page where you can see the details of the project and its resources.
Each One-Click project gets a Kubernetes namespace with the following labels:
In the project overview, you can see all the deployments in this project. You can also navigate to the Blueprints, create a new Deployment, or adjust the project settings. For each listed deployment you can see its status, the ID of the current rollout, the number of running replicas in the cluster, and the currently deployed image.
In the project settings, you can change the project name, avatar and labels. You can also delete the project from the project settings which will also delete the namespace in the Kubernetes cluster.
In the deployment overview, you can see the counts of rollouts, instances, interfaces, volumes, envs, and secrets. You can also see the current container image. There is also a CPU and memory usage graph for the project (keep in mind this is a live usage view; for tracking usage over time, use a dedicated monitoring solution).
one-click.dev/displayName: the user's display name in Pocketbase
one-click.dev/projectId: the project ID (which is also the project's name in the cluster)
one-click.dev/userId: the ID of the user who owns the project
one-click.dev/username: the username of the user who owns the project




All resources are created with the corresponding projectId and deploymentId labels set, so you can find and debug them in your cluster if something breaks.
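For example, you can query those labels directly with kubectl. A minimal sketch, assuming the label keys follow the one-click.dev/projectId convention listed above (the exact key used for the deployment ID is an assumption, so check the resource metadata first):

# find the namespace of a project by its label
kubectl get namespace -l one-click.dev/projectId=<project-id>
# list everything running inside the project namespace
kubectl get all -n <project-id>
# narrow down to a single deployment when debugging
# (one-click.dev/deploymentId is an assumed label key)
kubectl get all -n <project-id> -l one-click.dev/deploymentId=<deployment-id>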
In the project settings, you can change the project name, description, and labels. You can also delete the project from the project settings. There is also an option to use advanced editing mode for the project. Head over to the CRD section to learn more about the Operator CRD. You can also directly create a new blueprint from the selected project.
With advanced editing, you can add settings to the project that are not available in the UI. You can also see the raw CRD of the deployment.
With a new blueprint from the deployment, you can create a new blueprint with the same settings as the selected deployment. The only settings that are not copied are the ingress settings.

🚀 Get started 🚀 with deploying One-Click to your Kubernetes cluster!
🏗️ Architecture 🏗️ Learn more about the One-Click architecture!
On the scale page you can configure the two scaling options: horizontal and vertical. Horizontal scaling is the number of instances (replicas); vertical scaling is the CPU and memory request and limit.
You can define the minimum and maximum number of replicas. The target CPU defines the autoscaling behaviour: if the CPU usage is above the target CPU, the replicas get scaled up. To see the current CPU usage, head over to the deployment overview page.
Vertical scaling covers the CPU and memory request and limit. The request is the minimum amount of CPU and memory the pod is guaranteed; the limit is the maximum it can use. If the pod exceeds its memory limit it gets terminated (CPU above the limit is throttled). Request and limit are defined in millicores and megabytes, and the limit should be equal to or higher than the request.
Make sure to know your application requirements to set the right values. If you don't know the requirements you can start with the default values and monitor the behaviour. If you see that the pod gets terminated because of the limit you can increase the limit.
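If you are unsure which values fit, you can check the live consumption with standard Kubernetes tooling before adjusting requests and limits. A minimal sketch, assuming the metrics-server is installed and using placeholder names:

# live CPU and memory usage of the pods in the project namespace
kubectl top pods -n <project-id>
# current state of the HorizontalPodAutoscaler managing the replicas
kubectl get hpa -n <project-id>
# check whether a pod was killed for exceeding its memory limit (OOMKilled)
kubectl describe pod <pod-name> -n <project-id>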
One-Click can run inside or outside of a Kubernetes cluster.
You will need the following to run One-Click:
Kubernetes cluster
Docker daemon
Node.js v18.16.0 or higher
v9.5.1 or higher
Install the Operator: follow the installation instructions provided in the one-click-operator repository or install it via the Helm chart:
helm repo add one-click https://charts.one-click.dev
helm upgrade --install one-click one-click/one-click
Install the UI & Backend: check out the deployment folder, change the values for your environment, and then run the following commands:
cd deployment
kubectl apply -k .
# if you are using an ingress
kubectl get ingress -n one-click
# if you want to use port-forwarding
kubectl port-forward -n one-click svc/one-click-ui 8080:80
Access the UI
Access Pocketbase on your URL or localhost:8080 with /_ as the path, for example localhost:8080/_
Head over to the Pocketbase section of the docs for more information on why we use Pocketbase and how you can administer it.
Blueprints are stored configurations of a One-Click CRD. With these blueprints you can easily bootstrap new projects with no effort. A typical use case is to predefine some enhanced configuration in your Rollout CRD YAML, which then gets automatically applied when creating a new project from this blueprint.
After you are logged into your One-Click account you can see the Blueprints button. When you click on this button you will get navigated to the blueprints overview page.
On this overview page you see your blueprints and also the ones from your community. These are blueprints other users in the same One-Click instance shared with you through a link.
When you click on the 3 dots of a blueprint you can "edit", "share" or "delete" the selected blueprint.
When clicking on "share" in the action menu of a certain blueprint you will get a custom link:
Another user will see the following action when visiting the link:
They can also "unsubscribe" from the blueprint again:
The user who shared the blueprint now sees the Shared count increasing, and when hovering over the number they see the users who added the blueprint:
You can easily create new blueprints by clicking on the "New Blueprint" button.
Only the owner of the blueprint can edit it again:
For the network, you have a few options to configure. You can define as many services and ingress interfaces as you want. The services are the internal Kubernetes services and the ingress is the external access to your services. To create a new network interface you can click the New Interface button.
You can define the following options:
name: the name of the interface, must be unique in the deployment
port: the port of the interface (take your application port, defined in the Dockerfile)
ingress class: the ingress class of the interface (the dropdown will show you the available ingress classes inside the Kubernetes cluster)
host: the host of the interface (e.g. example.com)
path: the path of the interface (e.g. /api, /)
tls: if you want to use TLS you can toggle the switch and define the secret name
tls secret name: the secret name of the TLS certificate; if not defined, it defaults to the host name. Use case: auto-generating the TLS certificate with a cert-manager annotation.
The DNS name is the actual Kubernetes Service name in the cluster. You can use this name in other deployments to connect to the service via DNS lookup (click the copy icon to copy the name).
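As a quick sketch of how another deployment in the same project would use that DNS name (service name and port are placeholders copied from the interface settings):

# from inside another pod in the same namespace the short service name resolves
nslookup <service-name>   # if nslookup is available in the image
# e.g. call an HTTP interface exposed on port 80
curl http://<service-name>:80/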

The map feature in a deployment uses svelte-flow to graphically show the resources of the current rollout in the selected deployment. Everything gets updated in real time via a websocket endpoint (see the Pocketbase docs).
You can move the components with your mouse, zoom in and out, and dig into a component's configuration, logs, and events by clicking on it.
When clicking on a component you can see its currently applied in-cluster manifest. This gives you some more information about the component itself.
When selecting a pod you can also see its logs streamed directly:
For the pod you can also see the events that happen inside the cluster, which helps you troubleshoot your rollouts.
You also have the ability to delete a selected pod by clicking on the red trash icon.
Under images you can manage the image of the deployment. You can configure the registry (e.g. ghcr.io, docker.io), your username and password if it's private and also the repository/image. Last but not least you can define the image tag. If you need to debug something you can copy the current rollout ID to search the components inside the Kubernetes cluster.
If you don't want to manually update your image tag each time you push a new version to the registry, you can activate the Auto Update feature. There you specify an interval (1m, 5m, 10m), a pattern, and a policy that define how the registry is checked and how the image gets updated.
Interval: the cron ticker defined in the CronTick environment variable checks the minutes modulo this interval and queries the registry accordingly. It is therefore crucial to leave the cron tick interval at its default of 1m.
Pattern: a regex pattern which parses the image tag. The default is the semver notation x.x.x.
Policy: the policy (semver or timestamp) defines the sorting. The timestamp policy only works if the image tag is a unix timestamp.
The behaviour and concept are similar to fluxcd's image update automation.
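As a rough illustration of how pattern and policy interact (this is not the actual implementation, and the tag list is made up): with the semver policy, the tags matching the pattern are sorted as versions and the highest one wins.

tags="1.2.0 1.2.1 dev-1.3.0 latest 1.10.0"
# keep only tags matching the default semver pattern, sort them as versions,
# and pick the highest one -> 1.10.0 is the tag the deployment would move to
echo $tags | tr ' ' '\n' | grep -E '^[0-9]+\.[0-9]+\.[0-9]+$' | sort -V | tail -n 1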
Each time you edit and change something in a deployment, a new rollout gets created. This is like a snapshot of your configuration and gives you the power to undo any change you made to your deployment configuration, like changing the port of an interface or updating your image tag. You can see every rollout in the rollouts table. Through the frontend you cannot delete a rollout, you can only hide it; this is so One-Click can keep statistics about your rollouts. If you need to delete a rollout completely, go to the Pocketbase backend and delete the record in the "rollouts" collection.
When selecting a previous rollout you can click on "rollback", and a diff of the CRD files shows up that shows you exactly what will change:
On the envs & secrets page you can configure the environment variables and secrets for your application. The environment variables are key-value pairs that are injected into the container. The secrets are sensitive data that are stored as Kubernetes secrets. The secrets are base64 encoded and can be used as environment variables or mounted as files. You can simply copy and paste the content of your .env file or secret file into the text area.
Both the environment variables and secrets are available in the container as environment variables.
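To verify what actually ends up in the cluster, you can inspect the generated resources directly. A minimal sketch with placeholder names (the naming scheme of the generated secret is an assumption, so list the secrets first):

# show the environment variables visible inside the running container
kubectl exec -n <project-id> <pod-name> -- env
# list the secrets in the project namespace and decode a single key
kubectl get secrets -n <project-id>
kubectl get secret <secret-name> -n <project-id> -o jsonpath='{.data.MY_KEY}' | base64 -d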







^\d+.\d+.\d+$ (policy: semver): the default x.x.x semver pattern, e.g. 1.2.0 will get updated to 1.2.1
dev-\d+.\d+.\d+$ (policy: semver): the default x.x.x semver pattern with a dev- prefix
.* (policy: timestamp): any tag will get updated based on a unix timestamp
preview-* (policy: timestamp): a pattern with the preview- prefix which will get updated based on a unix timestamp


You can hide your rollouts to keep the table organized. You can also delete a rollout from the table, but then it will also affect your stats page on the overview page.
When you have hidden rollouts you can show them again by toggling the "Show hidden" slider:

The following diagram shows how and what the One-Click operator manages.
In red you see everything responsible for the frontend / pocketbase backend. In blue you see everything which handles Kubernetes natively. In green you see what the One-Click operator manages and controls.
Every Kubernetes resource gets created and managed within a Kubernetes namespace named after the project ID.
The architecture of the solution is designed to operate within a Kubernetes ecosystem, focusing on simplicity and manageability for deploying containers. Here is a high-level view of its main components:
Frontend Component: Developed with Svelte, the frontend provides the user interface. Its primary role is to facilitate user interaction and input, which it relays to the backend for processing.
Backend System: The backend, powered by Pocketbase, acts as the central processing unit. It interprets requests from the frontend, managing the necessary API calls and interactions within the Kubernetes environment.
Kubernetes Cluster Interaction: The backend is responsible for orchestrating various elements within the Kubernetes cluster. A key function includes the creation and management of namespaces, segregating projects to maintain orderly and isolated operational environments.
Custom Resource Management (Rollouts): Rollouts, defined as Custom Resource Definitions (CRDs) within Kubernetes, are managed by the backend. These are central to the deployment and operational processes, serving as bespoke objects tailored to the system's requirements.
One-Click Kubernetes Operator: This component simplifies interactions with Kubernetes. It automates the handling of several Kubernetes native objects and processes, including deployments, scaling, and resource allocation. The operator is crucial for streamlining complex tasks and ensuring efficient system operations.
System Scalability: The architecture is designed with scalability in mind, using Kubernetes' capabilities to handle a range of workloads and adapting as necessary for different project sizes and requirements.
This architecture aims to streamline the deployment process for OSS containers, offering an efficient and manageable system that leverages the strengths of Svelte, Pocketbase, and Kubernetes.
For persistent storage, you can define volumes. The volumes are the persistent storage for your application. You can define as many volumes as you want. To create a new volume you can click the New Volume button.
You can define the following options:
name: the name of the volume, must be unique in the deployment
mount path: the mount path of the volume (e.g. /data, /var/lib/mysql)
size: the size of the volume in GiB (gibibyte)
storage class: the storage class of the volume (the dropdown will show you the available storage classes inside the Kubernetes cluster)
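Each volume is backed by a PersistentVolumeClaim in the project namespace (Persistent Volume Claims are among the objects the operator manages), so you can verify size, storage class, and binding with kubectl. A sketch with placeholder names:

# list the persistent volume claims created for the project
kubectl get pvc -n <project-id>
# inspect capacity, storage class and binding status of a single volume
kubectl describe pvc <volume-name> -n <project-id>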
One-Click uses the open source backend Pocketbase to handle things like authentication and storing data. It also serves the frontend of One-Click. In the release process, the Pocketbase and frontend code get compiled, put into a single container image, and pushed to the GitHub container registry.
Pocketbase offers the ability to extend it with your own Go code. You can listen on certain events and then execute code. We use that to make the Kubernetes API calls and manage the Kubernetes resources created via the frontend interface.
You can find the code under the following link:
Pocketbase uses JWT tokens for authentication. The frontend sends a request to the pocketbase backend with the user credentials. The backend then checks if the user exists and if the password is correct. If everything is correct, the backend will return a JWT token. The frontend will then store this token in the local storage and use it for every request to the backend.
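A minimal sketch of that login flow with curl, assuming the regular Pocketbase users collection and a One-Click instance reachable on localhost:8080:

# exchange user credentials for a JWT token via the Pocketbase auth API
curl -s -X POST http://localhost:8080/api/collections/users/auth-with-password \
  -H "Content-Type: application/json" \
  -d '{"identity": "user@example.com", "password": "your-password"}'
# the JSON response contains a "token" field which the frontend stores
# and sends with every subsequent request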




For installing the One-Click operator in your Kubernetes cluster, head to the config directory inside the one-click-operator repository: https://github.com/janlauber/one-click-operator/tree/main/config
We also support the ability to use the following authentication providers:
Github
Microsoft
The frontend will automatically display the login buttons for these providers if they are enabled in the pocketbase backend.
Generated with the PocketBaseUML tool.
Pocketbase allows you to create custom endpoints. These endpoints can be used to execute custom code, and we use this feature to serve everything to the frontend. The custom endpoints are written in Go; you can find them in the main.go file, registered inside the app.OnBeforeServe().Add(func(e *core.ServeEvent) error { ... }) hook.
/ (*): Serves the frontend
/api (*): Serves the backend
/_ (*): Serves the backend
/rollouts/:projectId/:rolloutId/status (GET): Get rollout status
/rollouts/:projectId/:rolloutId/metrics (GET): Get rollout metrics
/rollouts/:projectId/:rolloutId/events (GET): Get rollout events
/rollouts/:projectId/:podName/logs (GET): Get pod logs
/pb/blueprints/:blueprintId (GET): Get blueprint
/pb/blueprints/shared/:blueprintId (POST): Share blueprint
/auto-update/:autoUpdateId (POST): Auto update
/cluster-info (GET): Get cluster info
/rollouts/:projectId/:podName (DELETE): Delete pod
/ws/k8s/rollouts (GET): Websocket to get resource updates of a rollout
/ws/k8s/logs (GET): Websocket to get pod logs
/ws/k8s/events (GET): Websocket to get pod events
All endpoints are protected by the JWT authentication, except the websocket endpoints. The frontend will send the JWT token in the header of the request.
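For example, calling one of the protected endpoints from the command line (host, IDs, and token are placeholders; the path follows the request examples further below):

# TOKEN is the JWT obtained from the Pocketbase authentication endpoint
curl -s http://localhost:8080/api/pb/<projectId>/<deploymentId>/status \
  -H "Authorization: Bearer $TOKEN"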
LOCAL (default: false): set to true if you're running One-Click locally. It will take your local kubeconfig under .kube/config
LOCAL_KUBECONFIG_FILE (default: ~/.kube/config): set to the path of your kubeconfig file if you're running One-Click locally. It will take your local kubeconfig under the specified path
CronTick (default: */1 * * * *): the tick in cron notation at which the auto image update will check for new updates in the registry. Do not change this under 1min
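For local development these variables can simply be exported before starting the backend. A sketch, assuming the standard Pocketbase serve command is used to start the extended backend:

# run the backend against your local kubeconfig instead of the in-cluster config
export LOCAL=true
export LOCAL_KUBECONFIG_FILE=~/.kube/config
export CronTick="*/1 * * * *"
go run main.go serve   # assumption: the Pocketbase app is started with "serve"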
For more information about pocketbase, please visit the official documentation. Also dig into the source code of our implementations and try to understand how we use pocketbase in our project.


The development of the Kubernetes operator, housed in the "one-click-operator" repository, involves multiple key components and processes. Central to this is the rollout_types.go file, where the structures for the Rollout Custom Resource Definition (CRD) are defined. These structures are crucial because they dictate the configuration and capabilities of the Rollout CRD. The operator-sdk, known for its user-friendliness, is then used to generate corresponding YAML files. These files are stored in the "config" folder, signifying their role in configuring the operator.
The operator is designed with a specific domain, "one-click.dev", and uses the API CRD version "v1alpha1". This versioning indicates that the operator is in its early stages of development and is not yet considered production-ready—a common practice in the Kubernetes community. For guidance on best practices in developing Kubernetes operators using the operator-sdk, resources are available at https://sdk.operatorframework.io/docs/best-practices/best-practices/.
Following the CRD definition, the next step involves the implementation of the operator's logic. This is where the concept of abstraction becomes pivotal. The operator aims to simplify the management and creation of various Kubernetes objects by consolidating them into a single CRD— the Rollout CRD. This abstraction is particularly beneficial for the backend of the One-Click platform, as it reduces the complexity of managing multiple Kubernetes resources. Instead, the platform can focus on managing just the Rollout CRD.
The Kubernetes objects included in this abstraction are:
Deployment
Environment Variables
Volumes
Image Pull Secret
Horizontal Pod Autoscaler
Secret
Persistent Volume Claims
Services
Ingresses
CronJobs
Service Account
Each of these components plays a vital role in the Kubernetes ecosystem, contributing to aspects like deployment management, security, scaling, and connectivity.
In the "controllers" folder, you'll find the rollout_controller.go file, which is integral to the operator's functionality. This file contains the Reconcile() function, a critical component that interacts with the Kubernetes API. The Reconcile function acts as the heart of the operator, responding to changes and ensuring that the desired state of the Kubernetes objects is maintained. For each Kubernetes object listed above, separate Go files are created. These files encapsulate the specific logic required to manage each object, thereby modularizing the code and making it more manageable and maintainable.
To further elaborate on the Kubernetes operator development, let's delve deeper into the Reconcile() function and the concept of the owner functionality.
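One practical effect of the owner functionality, assuming the operator sets controller references on the objects it creates (the standard operator-sdk approach), is that every generated object points back to its Rollout, so deleting the Rollout garbage-collects its children. You can verify this on any managed object, for example:

# show which Rollout owns a generated Deployment (names are placeholders)
kubectl get deployment <deployment-name> -n <project-id> \
  -o jsonpath='{.metadata.ownerReferences}'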
apiVersion: one-click.dev/v1alpha1
kind: Rollout
metadata:
name: nginx
namespace: test
spec:
args: ["nginx", "-g", "daemon off;"]
command: ["nginx"]
rolloutStrategy: rollingUpdate # or "recreate" (if not specified then "rollingUpdate" is used)
nodeSelector:
kubernetes.io/hostname: minikube
tolerations:
- key: "storage"
operator: "Equal"
value: "ssd"
effect: "NoSchedule"
hostAliases:
- ip: "10.10.10.10"
hostnames:
- "foo.local"
- "bar.local"
image:
registry: "docker.io"
repository: "nginx"
tag: "latest"
username: "test"
password: "test3"
securityContext: # can also be set to {}
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
privileged: false
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
horizontalScale:
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: 80
resources:
requests:
cpu: "100m"
memory: "128Mi"
limits:
cpu: "200m"
memory: "256Mi"
env:
- name: "REFLEX_USERNAME"
value: "admin"
- name: DEBUG
value: "true"
secrets:
- name: "REFLEX_PASSWORD"
value: "admin"
- name: "ANOTHER_SECRET"
value: "122"
volumes:
- name: "data"
mountPath: "/data"
size: "2Gi"
storageClass: "standard"
interfaces:
- name: "http"
port: 80
- name: "https"
port: 443
ingress:
ingressClass: "nginx"
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
rules:
- host: "reflex.oneclickapps.dev"
path: "/"
tls: true
tlsSecretName: "wildcard-tls-secret"
- host: "reflex.oneclickapps.dev"
path: "/test"
tls: false
cronjobs:
- name: some-bash-job
suspend: false
image:
password: ""
registry: docker.io
repository: library/busybox
tag: latest
username: ""
schedule: "*/1 * * * *"
command: ["echo", "hello"]
maxRetries: 3
backoffLimit: 2
env:
- name: SOME_ENV
value: "some-value"
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
serviceAccountName: "nginx"name: Docker Image Build & Push
on:
release:
types: [created]
env:
REGISTRY: ghcr.io
IMAGE_NAME: ${{ github.repository }}
jobs:
build-and-push-image:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to the Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
- name: Build and push Docker image
uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
with:
context: .
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
stages:
- build
- push
variables:
DOCKER_VERSION: 25.0.2
before_script:
- export IMAGE_TAG="$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
services:
- docker:$DOCKER_VERSION-dind
build_push:
stage: build
image: docker:$DOCKER_VERSION
tags:
- ti
script:
- docker build -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- tags
ARG POETRY_VERSION=1.4
FROM python:3.12-slim as base
WORKDIR /streamlit
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONFAULTHANDLER=1 \
PYTHONUNBUFFERED=1
EXPOSE 8501
FROM base as poetry
ARG POETRY_VERSION
ENV POETRY_CACHE_DIR=/opt/.poetry-cache \
PIP_DEFAULT_TIMEOUT=100 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_IGNORE_INSTALLED=1 \
PIP_NO_CACHE_DIR=1
# hadolint ignore=DL3013
RUN pip install --upgrade pip setuptools \
&& pip install poetry=="${POETRY_VERSION}"
COPY pyproject.toml poetry.lock* ./
# hadolint ignore=SC1091
RUN python -m venv /venv \
&& . /venv/bin/activate \
&& poetry install --only main \
--no-root --no-interaction --no-ansi
COPY <<-EOT /entrypoint.sh
#!/usr/bin/env sh
set -e
. /venv/bin/activate
exec "\$@"
EOT
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
FROM poetry as dev
# hadolint ignore=SC1091
RUN . /venv/bin/activate \
&& poetry install \
--no-root --no-interaction --no-ansi
COPY . .
CMD ["streamlit", "run", "./app/🏠_Home.py"] # replace with your streamlit app
FROM base as prod
COPY --from=poetry /venv /venv
ENV PATH="/venv/bin:${PATH}"
COPY . .
CMD ["streamlit", "run", "./app/🏠_Home.py", "--server.port=8501", "--server.address=0.0.0.0"] # replace with your streamlit appFROM golang:1.22-alpine AS builder
WORKDIR /build
COPY go.mod go.sum main.go ./
RUN go mod tidy \
&& CGO_ENABLED=0 go build
FROM alpine:3.19 as runtime
RUN addgroup -S app \
&& adduser -S -G app app
WORKDIR /home/app
COPY --from=builder /build/app .
RUN chown -R app:app ./
USER app
CMD ["./app"]FROM node:20.2.0-alpine3.18 as base
FROM base as deps
WORKDIR /app
COPY package*.json ./
RUN npm install
FROM deps AS builder
WORKDIR /app
COPY . .
RUN npm run build
FROM deps AS prod-deps
WORKDIR /app
RUN npm install --production
FROM base as runner
WORKDIR /app
RUN addgroup --system --gid 1001 remix
RUN adduser --system --uid 1001 remix
USER remix
COPY --from=prod-deps --chown=remix:remix /app/package*.json ./
COPY --from=prod-deps --chown=remix:remix /app/node_modules ./node_modules
COPY --from=builder --chown=remix:remix /app/build ./build
COPY --from=builder --chown=remix:remix /app/public ./public
EXPOSE 3000
ENTRYPOINT [ "node", "node_modules/.bin/remix-serve", "build/index.js"]Deployment status
Deployment not found
Deployment metrics
Deployment not found
Events list for the deployment
Deployment not found
Pod logs
Pod not found
Blueprint details
Blueprint not found
Blueprint shared successfully
Blueprint not found
Update initiated
Update initiated
Pod deleted successfully
Pod not found
No content
WebSocket connection established
WebSocket connection established
No content
WebSocket connection established
WebSocket connection established
No content
WebSocket connection established
WebSocket connection established
No content
GET /api/pb/{projectId}/{deploymentId}/status HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"replicas": 1,
"podNames": [
"text"
],
"resources": {
"requestSum": {
"cpu": "text",
"memory": "text"
},
"limitSum": {
"cpu": "text",
"memory": "text"
}
},
"status": "text"
}
GET /api/pb/{projectId}/{deploymentId}/metrics HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"metrics": [
{
"name": "text",
"cpu": "text",
"memory": "text"
}
]
}
GET /api/pb/{projectId}/{deploymentId}/events HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"events": [
{
"reason": "text",
"message": "text",
"typus": "text"
}
]
}
GET /api/pb/{projectId}/{podName}/logs HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"logs": "text"
}
GET /api/pb/blueprints/{blueprintId} HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"id": "text",
"details": "text"
}
POST /api/pb/blueprints/shared/{blueprintId} HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"message": "text"
}
POST /api/pb/auto-update/{autoUpdateId} HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"status": "text"
}
GET /api/pb/cluster-info HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
{
"clusterId": "text",
"status": "text"
}
DELETE /api/pb/{projectId}/{podName} HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
GET /api/ws/k8s/deployments HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
GET /api/ws/k8s/logs HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
GET /api/ws/k8s/events HTTP/1.1
Host: example.com
Authorization: Bearer YOUR_OAUTH2_TOKEN
Accept: */*
Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.
Pocketbase is an open-source, self-hosted firebase alternative.
Ghost is a professional publishing platform. In production, you should use a mysql database. This is an example of a multi-deployment setup with MySQL and Ghost.
apiVersion: one-click.dev/v1alpha1
kind: Rollout
spec:
env: []
horizontalScale:
maxReplicas: 1
minReplicas: 1
targetCPUUtilizationPercentage: 80
image:
password: ''
registry: docker.io
repository: nodered/node-red
tag: latest
username: ''
interfaces:
- name: http
port: 1880
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 300m
memory: 256Mi
secrets: []
serviceAccountName: one-click
volumes:
- mountPath: /data
name: data
size: 1Gi
storageClass: '' # replace with your storage class
apiVersion: one-click.dev/v1alpha1
kind: Rollout
spec:
env: []
horizontalScale:
maxReplicas: 1
minReplicas: 1
targetCPUUtilizationPercentage: 80
image:
password: ''
registry: ghcr.io
repository: muchobien/pocketbase
tag: latest
username: ''
interfaces:
- name: http
port: 8090
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 300m
memory: 256Mi
secrets: []
serviceAccountName: one-click
volumes:
- mountPath: /pb_data
name: pb-data
size: 1Gi
storageClass: '' # replace with your storage class
apiVersion: one-click.dev/v1alpha1
kind: Rollout
spec:
env:
- name: MYSQL_DATABASE
value: ghost
- name: MYSQL_USER
value: admin
horizontalScale:
maxReplicas: 1
minReplicas: 1
targetCPUUtilizationPercentage: 80
image:
password: ''
registry: docker.io
repository: library/mysql
tag: latest
username: ''
interfaces:
- name: mysql
port: 3306
resources:
limits:
cpu: 500m
memory: 1024Mi
requests:
cpu: 300m
memory: 512Mi
secrets:
- name: MYSQL_ROOT_PASSWORD
value: password
- name: MYSQL_PASSWORD
value: password
serviceAccountName: one-click
volumes:
- mountPath: /var/lib/mysql
name: data
size: 1Gi
storageClass: '' # replace with your storage class
apiVersion: one-click.dev/v1alpha1
kind: Rollout
spec:
env:
- name: database__client
value: mysql
- name: database__connection__host
value: mysql-12pr47a4cgcr9rx-svc # replace with your mysql service name (printed in the interface section of the mysql deployment)
- name: database__connection__user
value: admin
- name: database__connection__database
value: ghost
- name: url
value: https://ghost.one-click.dev
horizontalScale:
maxReplicas: 1
minReplicas: 1
targetCPUUtilizationPercentage: 80
image:
password: ''
registry: docker.io
repository: library/ghost
tag: latest
username: ''
interfaces:
- ingress:
ingressClass: nginx-external # replace with your ingress class
rules:
- host: ghost.one-click.dev
path: /
tls: true
tlsSecretName: wildcard-cert # replace with your tls secret name
name: http
port: 2368
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 300m
memory: 256Mi
secrets:
- name: database__connection__password
value: password
serviceAccountName: one-click
volumes:
- mountPath: /var/lib/ghost/content
name: data
size: 1Gi
storageClass: '' # replace with your storage class