Compare commits: 8f1d8ef784 ... 864f58ff01

10 commits (newest first; author and date columns did not survive extraction):
864f58ff01, 2b21d746fd, 1e97e0253c, 9e62b870a1, 37c2e479c5, 7b31deb65b, 34a1b2b567, ae63ff6645, 93584bd449, fa379fbcd1
@@ -1 +1 @@
-secrets/
+/secrets/
@@ -38,11 +38,10 @@ Fedora:
 - Really fast for something stable...
 - Cockpit is nice
 - Fedora minimal can't be installed on
-  cockpit.
+  cockpit without hitting tab a lot.
 
-Decision: Stream
+Decision: Fedora Server
 
+Put var/rancher on a separate partition.
 
 ### K3S Distro
@@ -164,8 +163,14 @@ istio
 - Traffic I'm interested in is mainly not L7.
 - blessed by Air Force
 
-Decision: flannel vxlan
-not worth the extra complexity of cilium.
+multus:
+- tried on fedora and didn't get very far, I think
+  because of something with k3s.
+
+Decision: cilium
+want network policies and hubble observability.
+It is a risk, but this is supposed to be a learning
+experience.
 
 ## What goes on each cluster/VM?
@@ -227,31 +232,26 @@ Lightsail:
 - Leave alone
 
 Infra Cluster:
 - On Host:
   1. CoreDNS
   2. Wireguard
 - On Cluster:
   1. Keycloak
   2. Kanboard
   3. OneDev
   4. Harbor
 - RAM 4 GiB total
 - 2 CPUs
 - 120 GiB hard drive
 
 Main Cluster:
 - On Host:
   1. Wireguard
 - On Cluster:
   1. Tekton
   2. MQTT Broker
   3. Squid
   4. j7s-os-deployment
 - RAM 4 GiB total
 - 2 CPUs
 - 120 GiB hard drive
 
-## Stuff to experiment with
-[ ] Manually placing keycloak image in k3s through k3s thing
-    and/or through cri.
-[ ] Keycloak ssl passthrough.
-[ ] fedora 37 server install with k3s.
+## Secrets
 
+Options:
+Mozilla SOPS
+Bitnami Sealed Secrets
 
+Both work with Flux.
+Sealed Secrets seems more integrated with k8s when not using
+Flux.
 
+Decision: Bitnami Sealed Secrets
 
 ## Experiments
@@ -271,7 +271,371 @@ Install nginx with:
```
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.ingressClassResource.default=true
```
### k3s with nginx on fedora server
```
sudo systemctl disable firewalld --now
export INSTALL_K3S_EXEC="server --disable traefik --selinux"
curl -sfL https://get.k3s.io | sh -s -
sudo chown jimmy:jimmy /etc/rancher/k3s/k3s.yaml
sudo dnf install helm
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
```
Import simple-ros2.
Laptop:
```
podman save -o simple-ros2.tar simple-ros2:latest
scp simple-ros2.tar 192.168.1.106:~/.
```
On server:
```
sudo ctr images import ./simple-ros2.tar
# wait forever....
```
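To confirm containerd actually picked the image up (a quick sanity check; k3s bundles crictl, so nothing extra to install):
```
sudo k3s crictl images | grep simple-ros2
```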
Test yaml:
```
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: simple-ros2
      image: localhost/simple-ros2:latest
      imagePullPolicy: Never
      args: [ros2, launch, j7s-simple, j7s_publisher_launch.py]
```
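Assuming the YAML above is saved as `test-pod.yaml` (the file name is arbitrary), a quick smoke test:
```
kubectl apply -f test-pod.yaml
kubectl logs test-pod -f
```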
### VM Host set up

I **think** I ran something like this when I set up the VM host.
I don't remember exactly, and I didn't document it...

This should be carefully looked at before running.

```
nmcli connection add ifname br0 type bridge con-name br0 connection.zone trusted
nmcli connection add type bridge-slave ifname enp4s0 master br0
nmcli connection modify br0 bridge.stp no
nmcli connection modify enp4s0 autoconnect no
nmcli connection down enp4s0
nmcli connection up id br0
```
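If recreating this, it's worth checking that br0 came up and took over the host's address before trusting it (a sanity check, not from the original setup):
```
nmcli connection show --active
ip addr show br0
```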
### Kubeseal Use
```
cat secret.yaml | kubeseal --format yaml > sealedsecret.yaml
```
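A variant that never writes the plaintext secret to disk (`my-secret` and the literal are placeholders, not names from this repo):
```
kubectl create secret generic my-secret \
  --from-literal=password=changeme \
  --dry-run=client -o yaml | kubeseal --format yaml > my-secret-sealed.yaml
```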
# Actual Install Notes

## To Do List

Infra Cluster:
- On Host:
  1. CoreDNS [x]
  2. Wireguard [x]
- On Cluster:
  1. Keycloak
  2. Kanboard
  3. OneDev
  4. Harbor [x]

Main Cluster:
- On Host:
  1. Wireguard [x]
- On Cluster:
  1. Tekton
  2. MQTT Broker
  3. Squid
  4. j7s-os-deployment
  5. Flux

[x] Give accounts on Harbor to clusters.
[ ] Push images to Harbor.
[ ] Hubble.
## Regularly Scheduled Programming

Fedora Server 37, keep defaults.

Infra:
On VM:
```
sudo hostnamectl set-hostname infra-cluster
sudo systemctl disable firewalld --now
sudo su
export INSTALL_K3S_EXEC="server --disable traefik --flannel-backend=none --disable-network-policy --cluster-cidr 10.44.0.0/16 --service-cidr 10.45.0.0/16 --cluster-dns 10.45.0.10 --selinux"
curl -sfL https://get.k3s.io | sh -s -
exit
sudo cp /etc/rancher/k3s/k3s.yaml ~/infra.yaml
sudo chown jimmy:jimmy ~/infra.yaml
exit
```
On laptop:
```
scp jimmy@192.168.1.112:~/infra.yaml /home/jimmy/.kube/.
export KUBECONFIG=~/.kube/infra.yaml
# Edit $KUBECONFIG and change the server address from 127.0.0.1 to the VM's IP.
```
Install the cilium CLI.

On laptop:
```
cilium install
```
wait...
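Rather than guessing how long to wait, the CLI can block until the agent reports ready:
```
cilium status --wait
```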
```
helm upgrade --debug --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
```
Main:
On VM:
```
sudo hostnamectl set-hostname j7s-cluster
sudo systemctl disable firewalld --now
sudo su
export INSTALL_K3S_EXEC="server --disable traefik --flannel-backend=none --disable-network-policy --cluster-cidr 10.46.0.0/16 --service-cidr 10.47.0.0/16 --cluster-dns 10.47.0.10 --selinux"
curl -sfL https://get.k3s.io | sh -s -
exit
sudo cp /etc/rancher/k3s/k3s.yaml ~/j7s-cluster.yaml
sudo chown jimmy:jimmy ~/j7s-cluster.yaml
exit
```
On laptop:
```
scp jimmy@192.168.1.103:~/j7s-cluster.yaml /home/jimmy/.kube/.
export KUBECONFIG=~/.kube/j7s-cluster.yaml
# Edit $KUBECONFIG and change the server address from 127.0.0.1 to the VM's IP.
```
On laptop:
```
cilium install
```
wait...
```
helm upgrade --debug --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
```
Install Sealed Secrets:

Main:
```
export KUBECONFIG=~/.kube/j7s-cluster.yaml
wget https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.5/controller.yaml
kubectl apply -f controller.yaml
```
Infra:
```
export KUBECONFIG=~/.kube/infra.yaml
kubectl apply -f controller.yaml
rm controller.yaml
```
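To confirm the controller came up on each cluster (the stock controller.yaml deploys into kube-system):
```
kubectl get pods -n kube-system -l name=sealed-secrets-controller
```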
Install kubeseal.

Merge kube config files:

1. Manually modify each config file and rename all the default names
   to something unique for that file.
   ( I have k3s for the original cluster, j7s for the new main cluster, and infra
   for the new infra cluster. )
2. Do some magic:
```
cp ~/.kube/config ~/.kube/config.back.<date>
export KUBECONFIG=~/.kube/config:~/.kube/infra.yaml:~/.kube/j7s-cluster.yaml
kubectl config view --flatten > new-config
mv new-config ~/.kube/config
export KUBECONFIG=~/.kube/config
chmod 600 ~/.kube/config
```
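Worth sanity-checking the merge before deleting anything; contexts from all three files should be listed:
```
kubectl config get-contexts
```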
Use kubeseal to encrypt secrets for harbor.

Install harbor.
```
cd infra-cluster/harbor
kubectl apply -f namespace
kubectl apply -f secrets
cd helm
./install.bash
```
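Harbor brings up a pile of pods; watching them settle is the easiest way to see whether the install worked:
```
kubectl get pods -n harbor -w
```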
Build coredns rpm following instructions in coredns folder.
scp to infra:
```
scp redhat/RPMS/x86_64/coredns-1.8.4-1.fc37.x86_64.rpm jimmy@192.168.1.112:~/.
ssh jimmy@192.168.1.112
sudo dnf install ./coredns-1.8.4-1.fc37.x86_64.rpm
exit
```
Copy over Corefile from coredns folder.
```
scp Corefile jimmy@192.168.1.112:~/.
ssh jimmy@192.168.1.112
sudo cp Corefile /etc/coredns/Corefile
sudo systemctl start coredns
sudo systemctl enable coredns

sudo dnf install policycoreutils-devel rpm-build
sepolicy generate --application /bin/coredns
./coredns.sh
# Until it works....
sudo su
ausearch -c '(coredns)' --raw | audit2allow -M my-coredns
semodule -i my-coredns.pp
# Also:
sudo setsebool -P domain_can_mmap_files 1
# Turn off the systemd-resolved stub resolver.
sudo vim /etc/systemd/resolved.conf
DNSStubListener=no
```
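The Corefile itself isn't reproduced in these notes. Something shaped like this would match how it's used here (the internal zone, zone file path, and upstream are assumptions, not the real file):
```
internal.jpace121.net {
    file /etc/coredns/db.internal.jpace121.net
}

. {
    forward . 1.1.1.1
    cache
}
```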
Wound up turning off SELinux...
```
sudo vi /etc/selinux/config
# SELINUX=permissive
sudo grubby --update-kernel ALL --args selinux=0
```
Wound up reverting back.

Add:
```
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
```

under `[Service]` in
```
sudo vim /usr/lib/systemd/system/coredns.service
```
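Editing the unit under /usr/lib gets clobbered on package updates; a systemd drop-in carries the same two lines and survives upgrades (alternative sketch):
```
sudo systemctl edit coredns
# In the editor, add:
#   [Service]
#   CapabilityBoundingSet=CAP_NET_BIND_SERVICE
#   AmbientCapabilities=CAP_NET_BIND_SERVICE
sudo systemctl restart coredns
```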
Wireguard:

```
sudo dnf install wireguard-tools
wg genkey | tee wg.key | wg pubkey > wg.pub
vim wg0.conf
<<<
[Interface]
Address = 10.100.100.?/24
PrivateKey = <Contents from file.>

[Peer]
PublicKey = zgcRWY3MAwKGokyRs9dR4E5smoeFy1Hh4MfDcDM3iSc=
AllowedIPs = 10.100.100.0/24
Endpoint = vpn.jpace121.net:51902
PersistentKeepAlive = 25
<<<
```
Add to server:
```
# Infra k3s node
[Peer]
PublicKey = <>
AllowedIPs = 10.100.100.7/32
```

Add to systemd:
```
sudo systemctl enable wg-quick@wg0.service
sudo systemctl daemon-reload
sudo systemctl start wg-quick@wg0
```
Tried using nm below, moved to wg-quick for consistency.
```
nmcli con import type wireguard file /etc/wireguard/wg0.conf
```

Better:
```
sudo cp wg0.conf /etc/wireguard/wg0.conf
sudo chown root:root /etc/wireguard/wg0.conf
wg-quick up wg0
```
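Once both ends have each other's peer entries, a recent handshake in wg's status output is the quickest health check:
```
sudo wg show wg0
```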
Harbor Login:

```
scp harbor_tls.crt jimmy@10.100.100.7:.
ssh jimmy@10.100.100.7
sudo cp harbor_tls.crt /etc/rancher/k3s/.
```
`/etc/rancher/k3s/registries.yaml`
```
configs:
  "harbor.internal.jpace121.net":
    auth:
      username: robot$k8s+infra-cluster
      password: <from harbor>
    tls:
      ca_file: /etc/rancher/k3s/harbor_tls.crt
```
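k3s only reads registries.yaml at startup, so restart it; then a pull straight through the CRI proves the robot account and CA actually work (image path assumed from the kanboard manifest later in these notes):
```
sudo systemctl restart k3s
sudo k3s crictl pull harbor.internal.jpace121.net/k8s/kanboard:latest
```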
Kanboard:

Get PV name:
```
kubectl describe pvc kanboard-pvc --context k3s
```
Use PV name to locate directory:
```
kubectl describe pv pvc-89a4265c-b39c-4628-9e6b-df091fae4fd8 --context k3s
```
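The lookup can also be collapsed into one scriptable line; jsonpath pulls the bound volume name straight off the PVC:
```
kubectl get pvc kanboard-pvc --context k3s -o jsonpath='{.spec.volumeName}'
```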
Can tell it's on `k3s-node1` at `/var/lib/rancher/k3s/storage/pvc-89a4265c-b39c-4628-9e6b-df091fae4fd8_default_kanboard-pvc`


```
ssh jimmy@192.168.1.135
sudo su
cd /var/lib/rancher/k3s/storage/pvc-89a4265c-b39c-4628-9e6b-df091fae4fd8_default_kanboard-pvc
tar cvpzf /home/jimmy/kanboard-pvc.tar.gz .
exit
cd ~
sudo chown jimmy:jimmy kanboard-pvc.tar.gz
exit
scp jimmy@192.168.1.135:~/kanboard-pvc.tar.gz /tmp/kanboard-pvc.tar.gz
```
Apply PVC.
Want: `volumeBindingMode: Immediate`
```
kubectl apply -f manifests/ --context infra
<wait til pvc exists>
<delete everything but the pvc>
kubectl describe pvc kanboard-pvc --context infra --namespace kanboard
kubectl describe pv pvc-fe710c38-52ce-495b-bb8d-bea48222a21b --context infra
```
```
scp /tmp/kanboard-pvc.tar.gz jimmy@192.168.1.112:.
ssh jimmy@192.168.1.112
sudo su
chown root:root ./kanboard-pvc.tar.gz
cd /var/lib/rancher/k3s/storage/pvc-fe710c38-52ce-495b-bb8d-bea48222a21b_kanboard_kanboard-pvc
rm -rf *
tar xpvzf /home/jimmy/kanboard-pvc.tar.gz
exit
exit
kubectl apply -f manifests/
```
Make secret:
```
cat kanboard-cookie.yaml | kubeseal --format yaml > kanboard-cookie-sealed.yaml
```
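oauth2-proxy wants a 16-, 24-, or 32-byte cookie secret; one way to mint a value for the plaintext kanboard-cookie.yaml before sealing it:
```
openssl rand -base64 32 | tr -- '+/' '-_'
```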
Where should I proxy to?
```
kubectl -n ingress-nginx get svc
ingress-nginx-controller   LoadBalancer   10.45.94.103   192.168.1.112   80:31566/TCP,443:32594/TCP   23d
```
> 10.100.100.7:31566
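Before writing any proxy config on the Lightsail box, this should already answer over the tunnel; the Host header selects the ingress rule (a check I'd run, not from the original notes):
```
curl -v -H 'Host: kanboard.jpace121.net' http://10.100.100.7:31566/
```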
@@ -1,2 +0,0 @@
-#helm repo add harbor https://helm.goharbor.io
-helm upgrade harbor -f values.yaml harbor/harbor -n harbor
@@ -226,9 +226,9 @@ spec:
         - name: k8s_service
           value: onedev
         - name: ingress_host
-          value: git.jpace121.net
+          value: onedev.internal.jpace121.net
         - name: ingress_tls
-          value: "true"
+          value: "false"
         - name: hibernate_dialect
           value: org.hibernate.dialect.MySQL5InnoDBDialect
         - name: hibernate_connection_driver_class
@@ -328,7 +328,7 @@ metadata:
   name: onedev
 spec:
   rules:
-    - host: git.jpace121.net
+    - host: onedev.internal.jpace121.net
       http:
         paths:
           - path: /
@@ -0,0 +1,27 @@
```
---
# Replace app.ini settings with env variables in the form GITEA__SECTION_NAME__KEY_NAME
# "_0X2E_" for "." in section name.
# "_0x2A_" for *
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitea-config
  namespace: gitea
data:
  GITEA____APP_NAME: "Forge"
  GITEA__server__DOMAIN: "git.jpace121.net"
  GITEA__server__SSH_DOMAIN: "git.jpace121.net"
  GITEA__server__HTTP_PORT: "3000"
  GITEA__server__ROOT_URL: https://git.jpace121.net/
  GITEA__server__START_SSH_SERVER: "true"
  GITEA__server__SSH_LISTEN_PORT: "2222"
  GITEA__server__LFS_START_SERVER: "true"
  GITEA__server__LFS_OFFLINE_MODE: "true"
  GITEA__server__LANDING_PAGE: "explore"
  GITEA__database__PATH: "/data/gitea/gitea.db"
  GITEA__database__DB_TYPE: "sqlite3"
  GITEA__service__ALLOW_ONLY_EXTERNAL_REGISTRATION: "true"
  GITEA__openid__ENABLE_OPENID_SIGNIN: "false"
  GITEA__openid__ENABLE_OPENID_SIGNUP: "false"
  GITEA__webhook__ALLOWED_HOST_LIST: "0x2A"
```
@@ -0,0 +1,34 @@
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-deployment
  namespace: gitea
  labels:
    app: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea-app
          image: docker.io/gitea/gitea:1.19.0
          ports:
            - containerPort: 3000
            - containerPort: 2222
          volumeMounts:
            - name: storage
              mountPath: "/data"
          envFrom:
            - configMapRef:
                name: gitea-config
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: gitea-pvc
```
@@ -0,0 +1,18 @@
```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
  namespace: gitea
spec:
  rules:
    - host: git.jpace121.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea-http
                port:
                  number: 3000
```
@@ -0,0 +1,12 @@
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-pvc
  namespace: gitea
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
```
@@ -0,0 +1,26 @@
```
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh
  namespace: gitea
spec:
  type: NodePort
  selector:
    app: gitea
  ports:
    - protocol: TCP
      port: 2222
---
apiVersion: v1
kind: Service
metadata:
  name: gitea-http
  namespace: gitea
spec:
  type: ClusterIP
  selector:
    app: gitea
  ports:
    - protocol: TCP
      port: 3000
```
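As committed, gitea-ssh gets a random high nodePort, so SSH remotes would break if the service is ever recreated; pinning it is one option (30022 is an arbitrary pick, not from the repo):
```
apiVersion: v1
kind: Service
metadata:
  name: gitea-ssh
  namespace: gitea
spec:
  type: NodePort
  selector:
    app: gitea
  ports:
    - protocol: TCP
      port: 2222
      nodePort: 30022
```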
@@ -0,0 +1,4 @@
```
apiVersion: v1
kind: Namespace
metadata:
  name: gitea
```

deployments/harbor/install.bash → infra-cluster/harbor/helm/install.bash (Normal file → Executable file)
@@ -0,0 +1,2 @@
```
#helm repo add harbor https://helm.goharbor.io
helm upgrade --debug --install harbor -f values.yaml harbor/harbor -n harbor --create-namespace
```
@@ -44,7 +44,7 @@ expose:
     controller: default
     ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
     kubeVersionOverride: ""
-    className: ""
+    className: "nginx"
     annotations:
       # note different ingress controllers may require a different ssl-redirect annotation
       # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
@@ -215,14 +215,14 @@ persistence:
     # Specify the "storageClass" used to provision the volume. Or the default
     # StorageClass will be used (the default).
     # Set it to "-" to disable dynamic provisioning
-    storageClass: "nfs-client"
+    storageClass: ""
     subPath: ""
     accessMode: ReadWriteOnce
     size: 50Gi
     annotations: {}
   chartmuseum:
     existingClaim: ""
-    storageClass: "nfs-client"
+    storageClass: ""
     subPath: ""
     accessMode: ReadWriteOnce
     size: 5Gi
@@ -230,14 +230,14 @@ persistence:
   jobservice:
     jobLog:
       existingClaim: ""
-      storageClass: "nfs-client"
+      storageClass: ""
       subPath: ""
       accessMode: ReadWriteOnce
       size: 1Gi
       annotations: {}
   scanDataExports:
     existingClaim: ""
-    storageClass: "nfs-client"
+    storageClass: ""
     subPath: ""
     accessMode: ReadWriteOnce
     size: 1Gi
@@ -246,7 +246,7 @@ persistence:
   # be ignored
   database:
     existingClaim: ""
-    storageClass: "nfs-client"
+    storageClass: ""
     subPath: ""
     accessMode: ReadWriteOnce
     size: 1Gi
@@ -0,0 +1,4 @@
```
apiVersion: v1
kind: Namespace
metadata:
  name: harbor
```
@@ -0,0 +1,17 @@
```
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: harbor-tls-secret
  namespace: harbor
spec:
  encryptedData:
    tls.crt: AgBEpaISgzD53Qg6EcZiYBQjyiglHMVpiK+QF2HRHQ+0ooWCFL8BzC3kI65Y6Es9CCENK8XAFbJwO0fCNXNcZnKoSKnpMwcR0PSPiaKRgv8PaQtqKy8dSqdbphL3OeEbqbFt8a+EEry+rFXIpzF+UTaLKnqsZnwRb/2Px7uMVLyMbYzEZnhMPyTRjAYmYfoGceBvN8JHZV6tbuyJBtG6wRvX/otWYlDs02OtLX+ll54rSaK+l1BAV/OTURA/7k/agaD8C9wmggCibyGpakzYL2DcnpUZiNqmubjqtQCJ5u+A3qPQacF0OcjDMTjxsoxEkG5w+SeJBDW7EowPqp6jYMeS1auw1GCXjLIwU/4V/CyRAcboqNz3v+z4Iu8SIyMhPRjdOGQ93A52DukEHYZ6ediQDgBWDvIM+iIM7zyDewNVoUxmqDx5pqkCfk1DerDIQFCS6yxgTEWtg2E+qNVtxMT+Oeq1KTnFh64KKqzYdD/d9sFwkRGbrjfPju8n5ZBb+pHQHzkAQQFB079OsLyTIIC7rYAKmH+yylBi2ObZRCVwXqMXDKJKlT5dsC1yR03FaUCDfwyJ08XafdkyqJzARBsy+zxuQ/wlJduMduKGbKuthhPo91e/tX49jd934LmcveKJrlTTegdVS20+lXy/jubhoT48d6UunhM+5Idou4IkltrfZYzBI+y9kCKhLsHAgN4jix4+8jvdAD56r5ibwfI1RaKeiNFslFdHNWCWpY1XRHmNGKBuDQiDGPLHcgogT1hP0C0wcihmfD7k843OjZDvTbnENnM7+RaYhmf8hx/MJh/2JCPpKnzDB35VuICoFjHIabquqBmT6lLzcDWrrsiymomg1YFys/UBbJWQ2iarI9L0nhQGnP+8NIrr3MA+iAm0M85Ih+SIWgYr0wHmW3SK+dTsUoADNvfDyabEl5bhvZvnBf8yAXfK3mBVryMHko0djNVBxR+REseWgwSJFgDVk0Hywh0uQiGt8hUvyvrhjJvrOx6/39ZRnuG6Cly6v+hYXkiOl+rDlCh/vAsfbw1L+kb1lnHUQ7fOiwa0+63px+c1Kh91eKCfc8hGBPbWovxSxkb4CSRlJX30XAbni+Oqt6b/KE8Rab06PlrJM4oUAqTPgmzzhbaScfCsxQLY5ig/Mub0wVBByKT+ffQy/EZ25SGlJO+Poi2cn02Oiab8Ftwksbn1uBJ+UPpnKxJmPoP47wD7Z6pi2wCGzZNizREUKEoALbYxyNPUQkpoQL8+TEvdvPipOTQa9+uFm045o2GhrCWkm0T303bIXmwzHuscB5Eg5qx1KNzCtIIaSkoUdetEt1Ulk1WKlEUNiC1Jd8jImHT/1xLyjTyIbWKV1vAbW5EybU0xMFO7ySu7hK5d9GBcfoN/KbRlaW9Bcw3am5aKOGRIKeGxlvSNBeUfvzLtylbLW48Pk2mvslKTNJbGP3+d4MLX5gVRGsmxT+1ngQKwkDrobYvCsVm2EC1scpGDszaUb2xETm4vGSfkztByCkDPwofXFAJ9PdIxIC6lsz+eRhdj59uWufhYVbTN4kl/5inhlu03baGRNGNd0tX9Y79v4wI25yepYiCl/3nLgiSzNDQswYQcWCHFX20QQo5RPl3oU5RAbMpT5u9Ta9hD76vfOIjo+tzmPR4o58IKoyYsiUqupkrShLFlplaFp6o8TyrIOybsNH8/rjTzY+PVwiocK5br0qFdKeffePPo/h3Z
    tls.key: AgA6k4GUBe6ZOWPTuI8nEblV/Gt2DzWuvZRUGLpBhzioi+C1m7Pged/LkhE7eIka1+utyI3GogzF5L00e9jpwnTYV3I6X3u4DIJ2+EAhI0RkrP+xAbVibY+dN+ye6lwdcpiMGNkTakQ2ab/hMXUhqLA9klzED2/hmMbsMJ09CkbFGU3vRX9ejG2xP7rq5pRqgTdZOPydnMRLoTxq5DyeEdXF/lxkyzKMDlo54hSX7I4JLQprGgbjzsaCd7SNHNJaLUEFMVCwkI4Ns2VxG6AaqZ2gc5iQaUhxDOgu5sSSHSZ3XQUfwopeDO/xJCNuDJ20HwbpvaklrR/VQ7wpiWTbyIdYzQCfe7aph7WgG0yfdsi9REsmg+xW9JfE6QdB3jChrVZPz/DLKIsf+BflARhIjGNlAGBXc5ylzm4zWrzy7lme6gjY3cWlgQLoGIgBVYt0a5gcgISajVCAjb2WOT+gJmoLstl+BR5TWn3S8Fq97NvpQcYCWrV9T33vAbJQdFdayYGmAvhrZSdpCs8mwU3fwa1ulRsQFkMVxGwp9/AaLC0mcaqHdSxSAhtqlOpETzhA7GL1AkBTeQlgGdf25o9t/ZUZ6ZOS9c5eJXYjBiHYZ3TYBO+WgXlk6LXQmPcCeuO7IDZJ/3GVBTQCl712YUJ1ivpCMxlbt0j+EvlIOA8YBapCFxnw5wU2WvzObJAWJVpG0doqnQdupcG3AJQUytGcDeYfDfIjv+A2N5864zjtQFJ7aQmFqGDInwqcTkW4xYrXfST+kpBaWrfu5I9J+Ea5zp1TFnWzYSpPMN6OTfo26f6OAh0RO4DRPnb6HEMNcP90NbBKzFHv+sykCpK3XO9vVgx20lH2QdjB+BDKhXnUBBq+IFygmBAY2aa1LNygoyXLGjt3LjlEZO1R0Vc5ViP8+zglt4GmIdP+fqqFfMUnACdUC2O6RJ8ZZ5Owx4s5DHWWqRxNxkCvjcjNbNHCih5KzuhVsYcy9d0UepjNhj2aNcBQORbELsdFkk6XRfglMPAxlaxs
  template:
    metadata:
      creationTimestamp: null
      name: harbor-tls-secret
      namespace: harbor
    type: kubernetes.io/tls
```
@@ -0,0 +1,17 @@
```
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: harbor-token
  namespace: harbor
spec:
  encryptedData:
    tls.crt: AgAvwazdKDS15lrLi8kBslPiZwjkRGcvN3np4UXji4Ck7oKoN2pHIDGLby3mRy0H6udwQB5LfG4qhzBB/gs1p07lsweIWYq3vnJhKrTave0oUxhjNBEtCbaVxWt6I7nAeNLz0EugxK9rvPqDokDA+xVTHGnkXDPUZX2Bg33vJuaW9ZywNGSLAzZtMrfHEkOM+m3LIsY8GT5i3K3adIjeejNmpmF6tAx7mWibmiveOvbEkTcaPpWxaMn/IgYykOa1xVVyZujBv6u0YqpPX2m9traysSypOkzdGWF/c7ebC2esZCI8u4/OrOYmbTJyt3m/9v4MyxB5Dso9EkxM9WJDGmyAHYivj6zKFjmSK5ItsZrqaVI3OAO1Tvf0Q1rI/nYyhGtwOd4mgGr4XZCTNMvPt5JXYYdC9+2miFFS/c6LdSxdjIs+YecoRvWBrcakKqRDDppEPZM3+3zGx8IbvGOBzqVqzp6Pk1p1kY9hY+7fRLAJF4MZPEmCQfYdHGoZ7zlYCysgFRpjMXHRUbwMvTqQLqh979ej11X+LzzVu4x825JZY9Ds/gqkKdvOP4FjMoVXAIB5U63rrOSJOMu4S8K96DwVvm4NQ+cSF6lGPp02SgUsNBLS2Z6B9EdV/g2tv5KAIHJBF7x4KbwKgEcIRGtKjp8ntA1ONmK0AbMxdSl5/R4cwjMqSFnDb/Zy7jpsRigBIT2cOGAUTHz3TW8yoH9+nGSjokNSactr8oWeGjzaMP15QMCypacNdqj6Epbc5grT5TNIrE3oTiYOsWMPxy2bH11YonnwCIiqsxWqucp+N7PQCTdzUzf4QmCF82HxCJ5CuMRotrMy8CG1P/P3FQJxoT2Gr6RMdccymbZ9RD/v2R/vE3McCdpCxapiX8FwVXJmYJVSS2vxA8KqJHit5+eLMzBqVlJ5rTVonK2RuCXs5ywOGj/Wv8gaz5dHIBJTeefwXITesq5cCKWclUftwD6YiPod3o0vxhxYKU6s4z7pGGuzDGbJ2UQ7I7LbHbt42yGmCN9QZk54QhOYMZekCn10+RO6Q+kNkANeKRXQJS79276TMSkGWyRqvYhTYdnTWo7EBvmMm3DerAC5Ey4Tt5tExRwebC8ZjgwlnEu37Hdqixvy4j39ihXwKfQ+IC1+es0IalJQUzoqPxcrQgXGlV4XAZoVKoAikswJkrhl2tIxSweb1xcCYxyV7XP1BSc+26sq1p9RX3WH2FA9ogvKRvLqx5r0/01r5rqXLzHhnCIJVBYee/OJM1wt5Bm+WBNVTOr4cgpc3N9eZq+bl6gMBMg5r7v43BdbkCHoA7QqBU3wIG+1b2prXj8Wc3Y0oHXkDedETeTUhmAp+d3yIZvOQtTCYVw3pBerB2d0QCCUiljfx9VSQ1KHE8gTm3WBtZ0zaBMvzLsClqGYeoQOTA8ayQr+rZrXC6uU+SnN24C9YRDXbaZniolMEOfdeaPZR7iHLXuxl9L2TzPEUhoXjVipFb2MJkWrYbpClOKbnoq1IdX/HoM1TE4YRe6jcWTKpJi0WG5wU/j6QadEmscNnBs1opF1RTS8iSuxRZMFb5MKb7mxJbAaHhrI7iehL5rsAyDXKWsaJIBXHDNZyOiBWwBEHrxsANvdVmVneWHs3PSEzycsI4N31NgXYoEFPcZrLJD4iVJhpbBMcqrFkKbh3Bf9DbyNBjR4VeZbC/rrN54eZpp/z1hBUzU7k/yUJw27F74Igq+Aw0J/dgmx6SeVgUR9HXRn4Qefjzi6Efdgzx53BUeeN0fDqbeb8yEdG7lFu6Fc3whHcWsZGlD0lHwmVevCprI6e5UKvuyWBu36b4QmZHeWo0AtBh0t4gcL9bLxrCirp0LmjCTldaOopUaqrHFBHmvXK6gLl+DhJJ6hA4XlMFbdP+DAkoykYfzgdIbcN4GTgoSHihtxjXlZz7tzDKe9mG9zvB7/O2Q6okWCroNAMwRDg0xZvw==
    tls.key: AgCm+k6tYejqHId4QlboyC+9/0C1jfPlZEdoudVC1SAwheNOzpZBSD2g7ClN0olWyp7RxMEH3LUZzmaGsY+GKa1lqqH/BEapBCfFq4cf/jJ+K51Ta0v87dpWns3ox0UDgTeS0wqo8Yn3nDL7EGrwNdiPkF0WgOLCFV6TDCSTFzaE4ViPlkqFO64dNjCxFqpKKIhKj4+CAIc6syKdzx6ZV977pa8OL8p4gz+pjXplmbj/rALO1Hym7juLsx2T1LPbNuIa8hxB7mL7eQFd4k9Y9qIUt5bYZYm0GX6g7yLESwokxkI1oV9bTRmQWAtyyGmgv4yuQYNgpJgNFKuCxJMwqfh4JHiEQ+0Sa38I80cMs2mlxBzP9KroY1UGt1/Zj2Y7d9fclYZfT8mxALkweNYwyjPpb2Yc4vSYOV4ofHNoDaMSCJlYwGTDzqyNBUlkG2kURzB26Fq7qifmn4S5OomP4bhoxsLdCqHlB4NnzQcee4yjB4pOTnorQIM34T1F9aZPkIkZ54VZoQgQS23nU94WZy27V+KTeTNY7zlr43OGg9yg1s4dCx8LBgehB6uYOYxH+23Xcv8Ee8wpjZZu6LN3zZipf+sPFgGX7rYoEYad+yc3v0jc1vxTGew2tQOdJxtzcjTgYGFmv9bYkMVl6z/xVYT6rvqUES6Brxc2XOk/eBz71L1FRCqSf7Bgb/vP4vAVkr5tt8sLm3vtsBontvVXHW93lfTTfHxYGAFMgqMAY28adzDe+Dp34+CVO2SsFu3LiJBZwIZFRtTJ6Uu0kWbjIn0g7sc+ilipv6z33rNLaEvz7Mh5HQnRQl58iZ8w5VFXnPyYqRn4vh5ajsLgiy18HBfZNeHrajAutcJsu2gmFhBvpNygOofX1ivZ9LulfsgMwMEwFDMUWvxD2zoi2p3ALDlQDmKVPinQOZpr2AGQ+i3oLEiAcH2lBzJYMpaJOdBUaLAagGWtL2GctpYz4PNZCcDfZOBgjJnk4cDZKbUVSaqB1YuJcCX6NSv8+jTwbYRftBtrvQv8NzKg+WKxyiTH6r0foVgN0ubIQz7r80ChvkeOeluGbxAolZQLnd9wqoR6YxHzPvKfuTnrQLzqbJpRruGnm14nY0w/yvd8PAESHM/NaH6ygYhQvFUzng6CUOnviuFcDPeqqDaWRGa+KR+KRAggMG323g8s0a1tDKw5QTDI38geWtiAs+27f8qEkHY2odvGBSPz8jQTa0+ajv0xm8v+/nkCEyMmxasbCfUYoK8L1OLoR851FlEZHQT0LwKrDbiIJS3LJIDGFr+Ur/p9LaPQPDFPJF0Nd6STF1BdY52SlS1Edydr8y2asb1GXODZ/l0f7V+3z5I3H/F7v30564fQ8GqrdP2PnwyXaLfeD0cqE+Bgf7rboszYoVSYRpRnPobCVNhkFg/U4qTHgCStSezLo0IT+UDIvrd3EP0TNxhxUc6tjXvSYklBuxOVo3BZGkVzc3MzhgV0/O0f48jdQZDeE7enKY+zbkRnJGTlNMB4UFInXBrFHA07obZZ1MaX5OPTZkJaGZELSB37YhiPVSoJG4PNGNddL3sSJKXGM0mY+7RmHc8jcTuK77+D2EUO4XoFPWZ32zxuTVj4UGEKf7DORIQtgytUrxP+U7kzXb+stZOPuxY3iDbHK0gcKoAFZzvT+cZu0ESwhRAeAmgx2pEeznve9ZDolHFwDn1PKpBjA1nKM6bZCK7YWJOhwf0pdYA6eX7j7nrZB/jNfuvodHOl/R7/G76C8mFZSEKUnIss/9VxNkQU7O6S3RIqBmXdRdRMEHSKmFk9W029qRRUFFdOO2T40ZxIuk+dt7Z/VcNV9vLrHbfC8q6YVD1LGYTudKcwSsMNCl/KpSdtaD75nmuMOkESC01q9A5n35g=
  template:
    metadata:
      creationTimestamp: null
      name: harbor-token
      namespace: harbor
    type: Opaque
```
@@ -0,0 +1,13 @@
```
-----BEGIN CERTIFICATE-----
MIICCjCCAa+gAwIBAgIUF5nJK6l4Y+oDycbHEtgByz0DTrIwCgYIKoZIzj0EAwIw
bjELMAkGA1UEBhMCVVMxFTATBgNVBAgMDFBlbm5zeWx2YW5pYTETMBEGA1UEBwwK
UGl0dHNidXJnaDEMMAoGA1UECgwDajdzMSUwIwYDVQQDDBxoYXJib3IuaW50ZXJu
YWwuanBhY2UxMjEubmV0MB4XDTIzMDIxMjIyMDY1NFoXDTMzMDEzMDIyMDY1NFow
bjELMAkGA1UEBhMCVVMxFTATBgNVBAgMDFBlbm5zeWx2YW5pYTETMBEGA1UEBwwK
UGl0dHNidXJnaDEMMAoGA1UECgwDajdzMSUwIwYDVQQDDBxoYXJib3IuaW50ZXJu
YWwuanBhY2UxMjEubmV0MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEg0jRtGpv
l8llYG7xnop0VswmLVZIfJwZrrvOZA/leJ7P3VO2kxmZG2QEGODPIRtkc+WibQKm
s5hDRLFl/xxKpKMrMCkwJwYDVR0RBCAwHoIcaGFyYm9yLmludGVybmFsLmpwYWNl
MTIxLm5ldDAKBggqhkjOPQQDAgNJADBGAiEAlxZfpVU2Db1xD9F+Fk5ArYneUZS0
U1ddr2qUvNrX+IMCIQDipHJ+MbWHzxzDHeHmvRWA6g6xq47VHGuF71FBkZRcxg==
-----END CERTIFICATE-----
```
@@ -0,0 +1,4 @@
```
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEg0jRtGpvl8llYG7xnop0VswmLVZI
fJwZrrvOZA/leJ7P3VO2kxmZG2QEGODPIRtkc+WibQKms5hDRLFl/xxKpA==
-----END PUBLIC KEY-----
```
@@ -0,0 +1,63 @@
```
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kanboard-deployment
  namespace: kanboard
  labels:
    app: kanboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kanboard
  template:
    metadata:
      labels:
        app: kanboard
    spec:
      containers:
        - name: oauth-proxy
          image: quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
          args:
            - --cookie-secret=$(COOKIE_SECRET)
            - --cookie-secure=false
            - --email-domain=*
            - --provider=keycloak-oidc
            - --client-id=kanboard
            - --client-secret=oT6dMBS87jc385utLumMoffJ9MqLEGRY
            - --redirect-url=https://kanboard.jpace121.net
            - --oidc-issuer-url=https://auth.jpace121.net/realms/jpace121-main
            - --reverse-proxy=true
            - --upstream=http://localhost:80/
            - --http-address=0.0.0.0:8080
          ports:
            - containerPort: 8080
          env:
            - name: COOKIE_SECRET
              valueFrom:
                secretKeyRef:
                  name: kanboard-cookie
                  key: cookie-secret
        - name: kanboard-app
          image: harbor.internal.jpace121.net/k8s/kanboard:latest
          ports:
            - containerPort: 80
            - containerPort: 443
          env:
            - name: DATABASE_URL
              value: "postgres://postgres:jdsjkksksklw@localhost/kanboard"
        - name: kanboard-db
          image: docker.io/library/postgres:bullseye
          env:
            - name: POSTGRES_DB
              value: "kanboard"
            - name: POSTGRES_PASSWORD
              value: "jdsjkksksklw"
          volumeMounts:
            - name: db-storage
              mountPath: "/var/lib/postgresql/data"
      volumes:
        - name: db-storage
          persistentVolumeClaim:
            claimName: kanboard-pvc
```
@@ -0,0 +1,21 @@
```
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kanboard-ingress
  namespace: kanboard
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "on"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "512k"
spec:
  rules:
    - host: kanboard.jpace121.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kanboard-service
                port:
                  number: 80
```
@@ -0,0 +1,12 @@
```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kanboard-pvc
  namespace: kanboard
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
@@ -0,0 +1,13 @@
```
---
apiVersion: v1
kind: Service
metadata:
  name: kanboard-service
  namespace: kanboard
spec:
  selector:
    app: kanboard
  ports:
    - protocol: TCP
      targetPort: 8080
      port: 80
```
@@ -0,0 +1,4 @@
```
apiVersion: v1
kind: Namespace
metadata:
  name: kanboard
```
@@ -0,0 +1,16 @@
```
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: kanboard-cookie
  namespace: kanboard
spec:
  encryptedData:
    cookie-secret: AgAV5rJbI4+pdtEBSOXZOI77dov8AZH0ejLbP+3FZt0B4ebYcpytB5oLYpHL7n9d/cvUdIn1MmM44CQ4Fc7nlrspIdFm9P2U9t1ElIt6LeqcE1A65lRm2xHvKQ+HIBSRE8uhwJZQqX+4i88/zhc1z+jTjr9bBwxpm402sFRND4a/3X8UWtcj2/dQUMPGe84Q23SjzGg9pYFEphWZ4Pt6dVBpKy7pPenHKk/F2zhHpVy+4GfC61Ho8enbj7avBVWpj5zoW0R1bphRClbYTRKyHyIqEvgqgW2N8Wo9AVvQ0GzZw5nyIFW3vIkivNzVTjufXIQV5RdPECaCJfnK06WIe3STMTHsJ1m+igacAXNSui6L6g37dO0DpMTcMQCBWL4c0cCgOfBhhIQy6zCcpI/MJBcWOI55c1E1zrxMDJxlskQjh9A01wwWGQfT5qm2PXO5j1hARe1aUmBJvhzzQnoVJB+2RxFnZjJTzdCOXkr1a4gQB03xXOZlrxGy3HpIPDFQCOWfxEn8pKjzl1dufkhpH14pyKfyWheSpufQNtjcZ9WrcDQvifmaIjCLkDZ4QiuQXCKlPuNLHoYVeTR7et8RO3DFm292t2PXQ115wIZ57vR+PqInOu0X33cK3kr7bSyTbJsZSJRWj7UkiHhGKs5L69ohfFfam539jtlU69XVKvTW05oeDDh2CtIYGbCDUvLIzAUAY5BJYTueAaBTp6o4KczQVbWW7AZSJy6XqaTZte3CDRh+7SOaJa/eNIEkJQ==
  template:
    metadata:
      creationTimestamp: null
      name: kanboard-cookie
      namespace: kanboard
    type: Opaque
```
notes.md:
@@ -211,3 +211,8 @@ If we later want to do this on an overlay network:
    `INSTALL_K3S_EXEC="server --node-ip '10.100.100.5' --advertise-address '10.100.100.5' --flannel-iface 'wg0'" ./k3s.sh`
 4. For node:
    `INSTALL_K3S_EXEC="agent --server 'https://10.100.100.5:6443' --token 'K3S_TOKEN' --node-ip '10.100.100.?' --advertise-address '10.100.100.?' --flannel-iface 'wg0'" ./k3s.sh`
+
+# Bad Ideas
+
+1. Longhorn -> wonky performance issues on cluster after installing
+2. Multus -> CNI version does not seem compatible with k3s.