Add notes on nfs setup.

James Pace 2023-02-01 18:29:42 -05:00
parent b373fca18c
commit 89bd3d102e
2 changed files with 72 additions and 27 deletions


@@ -69,41 +69,53 @@ Port forward locally:
kubectl port-forward -n tekton-pipelines service/tekton-dashboard 9097:9097
```
# Bad Ideas
# NFS
Ambassador: (for Knative)
Server: CentOS 9
Set up:
```
sudo dnf install nfs-utils vim
sudo mkdir /srv/nfs
sudo chown jimmy:jimmy /srv/nfs
sudo chmod 777 /srv/nfs/
```
Put into `/etc/exports`:
```
/srv/nfs 192.168.1.0/24(rw,root_squash)
```
Start everything:
```
sudo systemctl enable --now rpcbind
sudo systemctl enable --now nfs-server
sudo firewall-cmd --permanent --add-service nfs
sudo firewall-cmd --reload
sudo systemctl restart nfs-server
```
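Quick check that the share is actually exported after the restart (run on the server):
```
sudo exportfs -v
```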
Start with these instructions to disable traefik:
https://www.suse.com/c/rancher_blog/deploy-an-ingress-controller-on-k3s/
Use `--disable=traefik` in the systemd unit.
The equals sign is important...
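Roughly what that looks like in the unit file (hypothetical excerpt, assuming the default k3s install path):
```
# /etc/systemd/system/k3s.service (excerpt)
ExecStart=/usr/local/bin/k3s \
    server \
    --disable=traefik
```
Then `sudo systemctl daemon-reload && sudo systemctl restart k3s`.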
Test on Debian:
```
sudo apt install nfs-common
sudo mkdir -p /mnt/nfs
sudo mount 192.168.1.149:/srv/nfs /mnt/nfs
```
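Sanity check that the mount is writable (assuming the mount above worked):
```
touch /mnt/nfs/client-test && ls -l /mnt/nfs
rm /mnt/nfs/client-test
```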
Follow the instructions at https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/ to install Ambassador.
I used the file in ./ambassador/listener.yaml to set up the listener.
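The listener file isn't reproduced here; a minimal Listener along the lines of what that file presumably contains (hypothetical sketch based on the Edge Stack docs, not the actual ./ambassador/listener.yaml):
```
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: http-listener        # hypothetical name
  namespace: ambassador
spec:
  port: 8080                 # container port on the Ambassador pods
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
```
The stock Edge Stack Service typically maps external port 80 to container port 8080, which may be behind the 80-vs-8080 confusion below.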
I'm not sure why Ambassador is listening on 80 instead of 8080 given the settings I applied, or why changing from 8080 to 80 in the setting breaks it.
I removed Ambassador and put traefik back.
On the k3s nodes:
```
sudo apt install nfs-common
```
Install to the cluster:
```
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=192.168.1.149 \
--set nfs.path=/srv/nfs
```
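The chart creates a StorageClass named `nfs-client` by default. To confirm the provisioner works, there's the test manifest added in this commit:
```
kubectl get storageclass nfs-client
kubectl apply -f runs/nfs-test.yaml
kubectl get pvc test-claim
```
Once the pod runs, a directory for the claim (containing a SUCCESS file) should show up under /srv/nfs on the server.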
# Future Ideas
If we later want to do this on an overlay network:
3. For master:
`INSTALL_K3S_EXEC="server --node-ip '10.100.100.5' --advertise-address '10.100.100.5' --flannel-iface 'wg0'" ./k3s.sh`
4. For node:
`INSTALL_K3S_EXEC="agent --server 'https://10.100.100.5:6443' --token 'K3S_TOKEN' --node-ip '10.100.100.?' --advertise-address '10.100.100.?' --flannel-iface 'wg0'" ./k3s.sh`
For now sticking to single node...
Set up a namespace:
```
kubectl create -f j7s-dev-namspace.json
```
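The JSON file isn't shown here; a minimal Namespace definition would be roughly this (hypothetical contents, assuming it just declares the j7s-dev namespace):
```
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": { "name": "j7s-dev" }
}
```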
```
kubectl config set-context j7s-dev --namespace=j7s-dev \
--cluster=j7s-dev \
--user=default
```
I'm not sure the above command works...
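To see what actually got written (note `set-context` just records names; in a stock k3s kubeconfig the cluster entry is named `default`, so `--cluster=j7s-dev` would point at a cluster entry that doesn't exist, which might be why it doesn't seem to work):
```
kubectl config get-contexts
```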

runs/nfs-test.yaml Normal file

@@ -0,0 +1,33 @@
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: busybox:stable
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim