Inside the service folder, create a metadata.yaml file containing the application details, following the example below. This information is displayed on the catalog page.
YAML
displayName: Anything LLM
description: all-in-one AI application with built-in RAG, AI agents, and more.
category:
  - AI
type: catalog
allowMultipleInstances: false
scope:
  - workspace
licensing:
  - Ultimate
  - Enterprise
certifications:
overview: |-
  ## Product Overview
  AnythingLLM is a full-stack application where you can use commercial off-the-shelf LLMs or popular open source LLMs and vectorDB solutions to build a private ChatGPT with no compromises that you can run locally as well as host remotely and be able to chat intelligently with any documents you provide it.
  AnythingLLM divides your documents into objects called `workspaces`. A Workspace functions a lot like a thread, but with the addition of containerization of your documents. Workspaces can share documents, but they do not talk to each other so you can keep your context for each workspace clean.
  ### Cool features of AnythingLLM
  - 🆕 [**Custom AI Agents**](https://docs.anythingllm.com/agent/custom/introduction)
  - 🖼️ **Multi-modal support (both closed and open-source LLMs!)**
  - 👤 Multi-user instance support and permissioning _Docker version only_
  - 🦾 Agents inside your workspace (browse the web, run code, etc)
  - 💬 [Custom Embeddable Chat widget for your website](./embed/README.md) _Docker version only_
  - 📖 Multiple document type support (PDF, TXT, DOCX, etc)
  - Simple chat UI with Drag-n-Drop functionality and clear citations.
  - 100% Cloud deployment ready.
  - Works with all popular [closed and open-source LLM providers](#supported-llms-embedder-models-speech-models-and-vector-databases).
  - Built-in cost & time-saving measures for managing very large documents compared to any other chat UI.
[nutanix@harbor ~]$ curl -o ndk-1.2.0.tar "https://download.nutanix.com/downloads/ndk/1.2.0/ndk-1.2.0.tar?Expires=1752582997&Key-Pair-Id=APKAJTTNCWPEI42QKMSA&Signature=CGOEgIDQHcJ1fTI8nIMbB5mcrM~5jPFcfS~5PyKDFGQyeNGlfBHyookKrzTTearX6L1aLLyEL6psYlkYIZdDlGIghHQuyb5qQBcxVGqiJ2ENuJD2MZKJkBFb6gnJ5s0JynyfkReAwPU5Ls4Vwb9yZXhzROm25Adezn-noLnkQUpLYQkyNl3~3n3X6xWR7qQQhHbo~QH3GEmYylmfsAcfx78WrH6-t9q3AV-2vhOGFNFz8k4gueqbRAjcLiKJBe7pJ-MlQ1KCo2ZYQMg3OACgqx7epi-2t0ImmpKs0I3rGa0lBqccOggX3n0tfaSED1cwTyjFRRpgFGYWZRX0qdrCvQ__"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 526M 100 526M 0 0 22.5M 0 0:00:23 0:00:23 --:--:-- 30.1M
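The downloaded bundle is a plain tar archive. The transcript stops here; a typical next step (an assumption, since it is not shown above) is to inspect and unpack it before installing NDK:
# List the bundle contents, then extract into the current directory
tar -tf ndk-1.2.0.tar | head
tar -xvf ndk-1.2.0.tar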
export APP_NAME=<<application name>>
export APP_NAMESPACE=<<application namespace>>
export TGT_NAMESPACE=<<target namespace>>
***NOTE: target namespace must exist on target cluster (create if necessary)***
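Since the note above requires the target namespace to exist, a small idempotent sketch to create it if missing (assumes kubectl currently points at the cluster where the restore will run):
# Create the target namespace only if it does not already exist
kubectl get ns ${TGT_NAMESPACE} || kubectl create ns ${TGT_NAMESPACE}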
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cat << EOF > ${APP_NAME}-snap-rg.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: ${APP_NAME}-snap-rg
  namespace: ${APP_NAMESPACE}
spec:
  from:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshotRestore
    namespace: ${TGT_NAMESPACE}
  to:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshot
    name: ${APP_NAME}-snap
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kubectl apply -f ${APP_NAME}-snap-rg.yaml
- verify reference grant exists:
kubectl get referencegrant -n ${APP_NAMESPACE}
--> ${APP_NAME}-snap-rg is in the list of reference grants
Example values used to create the reference grant:
[nutanix@harbor ~]$ k create ns newapplication
namespace/newapplication created
[nutanix@harbor ~]$ cat << EOF > wordpress-snap-rg.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: wordpress-snap-rg
  namespace: application
spec:
  from:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshotRestore
    namespace: newapplication
  to:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshot
    name: wordpress-snap
EOF
[nutanix@harbor ~]$ k apply -f wordpress-snap-rg.yaml
referencegrant.gateway.networking.k8s.io/wordpress-snap-rg created
[nutanix@harbor ~]$ kubectl get referencegrant -n application
NAME AGE
wordpress-snap-rg 35s
Create the application restore:
cat << EOF > ${APP_NAME}-rg-restore.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotRestore
metadata:
  name: ${APP_NAME}-rg-restore
  namespace: ${TGT_NAMESPACE}
spec:
  applicationSnapshotName: ${APP_NAME}-snap
  applicationSnapshotNamespace: ${APP_NAMESPACE}
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kubectl apply -f ${APP_NAME}-rg-restore.yaml
- verify restore operation completes successfully (can take a few minutes)
kubectl get applicationsnapshotrestore -n ${TGT_NAMESPACE}
--> ${APP_NAME}-rg-restore shows a COMPLETED status of 'true'
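Rather than re-running the get command by hand, the wait can be scripted. A sketch that polls until COMPLETED flips to true; the '.status.completed' jsonpath is an assumption about the CR's status field, so confirm it first with 'kubectl get ... -o yaml':
# Poll the restore until it reports completion (status field name assumed)
until [ "$(kubectl get applicationsnapshotrestore ${APP_NAME}-rg-restore -n ${TGT_NAMESPACE} -o jsonpath='{.status.completed}')" = "true" ]; do
  echo "waiting for restore to complete..."
  sleep 10
done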
- verify all artifacts referenced in snapshot describe operation have been restored on the target namespace:
***For example***
kubectl get all -n ${TGT_NAMESPACE}
kubectl get pvc -n ${TGT_NAMESPACE}
Example of restoring an application:
[nutanix@harbor ~]$ cat << EOF > wordpress-rg-restore.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotRestore
metadata:
  name: wordpress-rg-restore
  namespace: newapplication
spec:
  applicationSnapshotName: wordpress-snap
  applicationSnapshotNamespace: application
EOF
[nutanix@harbor ~]$ k apply -f wordpress-rg-restore.yaml
applicationsnapshotrestore.dataservices.nutanix.com/wordpress-rg-restore created
[nutanix@harbor ~]$ kubectl get applicationsnapshotrestore -n newapplication
NAME SNAPSHOT-NAME COMPLETED
wordpress-rg-restore wordpress-snap false
[nutanix@harbor ~]$ kubectl get applicationsnapshotrestore -n newapplication
NAME SNAPSHOT-NAME COMPLETED
wordpress-rg-restore wordpress-snap true
[nutanix@harbor ~]$ k get all -n newapplication
NAME READY STATUS RESTARTS AGE
pod/wordpress-84f858d9fd-8mjdp 1/1 Running 0 2m24s
pod/wordpress-mysql-556f6f65cc-bjd8q 1/1 Running 0 2m24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/wordpress ClusterIP 10.99.164.74 <none> 80/TCP 2m24s
service/wordpress-mysql ClusterIP None <none> 3306/TCP 2m24s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 2m24s
deployment.apps/wordpress-mysql 1/1 1 1 2m24s
NAME DESIRED CURRENT READY AGE
replicaset.apps/wordpress-84f858d9fd 1 1 1 2m24s
replicaset.apps/wordpress-mysql-556f6f65cc 1 1 1 2m24s
- verify ndk intercom service is working on both source & target clusters:
cd ~/nkp-v2.15.0/cli
export SOURCE_NAME=<<NKP source cluster name>>
export TARGET_NAME=<<target cluster name>>
export APP_NAME=<<application name>>
export APP_NAMESPACE=<<application namespace>>
export KUBECONFIG=~/nkp-v2.15.0/cli/${SOURCE_NAME}.conf
kubectl get svc -n ntnx-system
--> load balancer service 'ndk-intercom-service' should exist and have an assigned external IP
kubectl get svc -n ntnx-system --kubeconfig ${TARGET_NAME}.conf
--> load balancer service 'ndk-intercom-service' should exist and have an assigned external IP
--> cache the external IP (needed for the Remote CR); one way to capture it is shown below
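One way to cache that external IP into a variable for the Remote CR below; the jsonpath assumes the load balancer publishes an IP rather than a hostname:
# Capture the target cluster's ndk-intercom-service external IP
export NDK_SERVICE_IP=$(kubectl get svc ndk-intercom-service -n ntnx-system --kubeconfig ${TARGET_NAME}.conf -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${NDK_SERVICE_IP}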
- verify an application snapshot exists and is ready:
kubectl get applicationsnapshot -n ${APP_NAMESPACE}
--> snapshot status for <<app name>>-snap should be "true"
- create remote custom resource:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cat << EOF > ndk-${TARGET_NAME}-remote.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: Remote
metadata:
  name: ndk-${TARGET_NAME}-remote
spec:
  clusterName: ${TARGET_NAME}
  ndkServiceIp: <<EXTERNAL IP of target cluster's 'ndk-intercom-service' LB service>>
  ndkServicePort: 2021
  tlsConfig:
    skipTLSVerify: true
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--> the 'tlsConfig' section may not be needed, or may need to be modified, depending on how NDK was installed; see the following link: https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Data-Services-for-Kubernetes-v1_2:top-remote-cr-create-cli-k8s.html
kubectl apply -f ndk-${TARGET_NAME}-remote.yaml
- verify remote cr is ready:
kubectl get remote
--> ndk-${TARGET_NAME}-remote should be listed and AVAILABLE status set to 'True'
Example of creating a Remote cluster:
[nutanix@harbor ~]$ cat << EOF > ndk-cluster2-remote.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: Remote
metadata:
  name: ndk-cluster2-remote
spec:
  clusterName: cluster2
  ndkServiceIp: 10.38.53.17
  ndkServicePort: 2021
  tlsConfig:
    skipTLSVerify: true
EOF
[nutanix@harbor ~]$ k apply -f ndk-cluster2-remote.yaml
remote.dataservices.nutanix.com/ndk-cluster2-remote created
[nutanix@harbor ~]$ k get remote
NAME ADDRESS PORT AVAILABLE
ndk-cluster2-remote 10.38.53.17 2021 True
Create an application snapshot replication:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cat << EOF > ${APP_NAME}-replicate.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotReplication
metadata:
  name: ${APP_NAME}-replicate
  namespace: ${APP_NAMESPACE}
spec:
  applicationSnapshotName: ${APP_NAME}-snap
  replicationTargetName: ${TARGET_NAME}
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kubectl apply -f ${APP_NAME}-replicate.yaml
- monitor an application snapshot replication:
kubectl get applicationsnapshotreplication -n ${APP_NAMESPACE}
--> within a few minutes the AVAILABLE status for <<app name>>-replicate should be "True"
kubectl get applicationsnapshot -n ntnx-system ${APP_NAME}-snap --kubeconfig ${TARGET_NAME}.conf
--> should list snapshot on target cluster with a READY-TO-USE status of "true"
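To follow the replication as it progresses instead of re-running the command, kubectl's watch flag streams updates until interrupted:
# Watch the replication until AVAILABLE flips to "True" (ctrl-c to stop)
kubectl get applicationsnapshotreplication -n ${APP_NAMESPACE} -w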
Example of creating an application snapshot replication:
[nutanix@harbor ~]$ cat << EOF > wordpress-replicate.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotReplication
metadata:
  name: wordpress-replicate
  namespace: application
spec:
  applicationSnapshotName: wordpress-snap
  replicationTargetName: cluster2
EOF
[nutanix@harbor ~]$ k apply -f wordpress-replicate.yaml
applicationsnapshotreplication.dataservices.nutanix.com/wordpress-replicate created
[nutanix@harbor ~]$ k get applicationsnapshotreplication -A
NAMESPACE NAME AVAILABLE APPLICATIONSNAPSHOT REPLICATIONTARGET AGE
application wordpress-replicate False wordpress-snap cluster2 12s
[nutanix@harbor ~]$ k get applicationsnapshotreplication -A
NAMESPACE NAME AVAILABLE APPLICATIONSNAPSHOT REPLICATIONTARGET AGE
application wordpress-replicate True wordpress-snap cluster2 6m57s
[nutanix@harbor ~]$ kubectl get applicationsnapshot -n ntnx-system
NAME AGE READY-TO-USE BOUND-SNAPSHOTCONTENT SNAPSHOT-AGE
wordpress-snap 9m46s true asc-531ae7ca-7afb-40a6-a4f6-403c8e732cfa-1980ca1b120 6m58s
Create a reference grant for the restore; it must be created on the destination (target) cluster.
***NOTE: This enables application snapshots saved in one cluster to be restored to their original namespace on another cluster***
- verify the NDK reference grant CRD exists on the 'target' cluster:
cd ~/nkp-v2.15.0/cli
export TARGET_NAME=<<NKP target cluster name>>
export KUBECONFIG=~/nkp-v2.15.0/cli/${TARGET_NAME}.conf
kubectl get crd | grep 'referencegrants'
--> the 'referencegrants.gateway.networking.k8s.io' CRD should be listed
- create a reference grant:
export APP_NAME=<<application name>>
export TGT_NAMESPACE=<<application's namespace on source cluster>>
***NOTE: target namespace must exist on target cluster (create if necessary)***
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cat << EOF > ${APP_NAME}-snap-rg-${TARGET_NAME}.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: ${APP_NAME}-snap-rg-${TARGET_NAME}
  namespace: ntnx-system
spec:
  from:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshotRestore
    namespace: ${TGT_NAMESPACE}
  to:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshot
    name: ${APP_NAME}-snap
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kubectl apply -f ${APP_NAME}-snap-rg-${TARGET_NAME}.yaml
- verify reference grant exists:
kubectl get referencegrant -n ntnx-system
--> ${APP_NAME}-snap-rg-${TARGET_NAME} is in the list of reference grants
Example of creating the reference grant:
[nutanix@harbor ~]$ cat << EOF > wordpress-snap-rg-cluster2.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: wordpress-snap-rg-cluster2
  namespace: ntnx-system
spec:
  from:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshotRestore
    namespace: application
  to:
  - group: dataservices.nutanix.com
    kind: ApplicationSnapshot
    name: wordpress-snap
EOF
[nutanix@harbor ~]$ k apply -f wordpress-snap-rg-cluster2.yaml
referencegrant.gateway.networking.k8s.io/wordpress-snap-rg-cluster2 created
[nutanix@harbor ~]$ k get referencegrant -A
NAMESPACE NAME AGE
ntnx-system wordpress-snap-rg-cluster2 9s
Restore the application on the destination cluster:
cat << EOF > ${APP_NAME}-rg-restore-${TARGET_NAME}.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotRestore
metadata:
  name: ${APP_NAME}-rg-restore-${TARGET_NAME}
  namespace: ${TGT_NAMESPACE}
spec:
  applicationSnapshotName: ${APP_NAME}-snap
  applicationSnapshotNamespace: ntnx-system
EOF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kubectl apply -f ${APP_NAME}-rg-restore-${TARGET_NAME}.yaml
- verify restore operation completes successfully (can take a few minutes)
kubectl get applicationsnapshotrestore -n ${TGT_NAMESPACE}
--> ${APP_NAME}-rg-restore-${TARGET_NAME} shows a COMPLETED status of 'true'
- verify all artifacts referenced in snapshot describe operation have been restored on the target namespace:
***For example***
kubectl get all -n ${TGT_NAMESPACE}
kubectl get pvc -n ${TGT_NAMESPACE}
Example of restoring the application:
[nutanix@harbor ~]$ cat << EOF > wordpress-rg-restore-cluster2.yaml
apiVersion: dataservices.nutanix.com/v1alpha1
kind: ApplicationSnapshotRestore
metadata:
  name: wordpress-rg-restore-cluster2
  namespace: application
spec:
  applicationSnapshotName: wordpress-snap
  applicationSnapshotNamespace: ntnx-system
EOF
[nutanix@harbor ~]$ k apply -f wordpress-rg-restore-cluster2.yaml
applicationsnapshotrestore.dataservices.nutanix.com/wordpress-rg-restore-cluster2 created
[nutanix@harbor ~]$ k get applicationsnapshotrestore -A
NAMESPACE NAME SNAPSHOT-NAME COMPLETED
application wordpress-rg-restore-cluster2 wordpress-snap false
[nutanix@harbor ~]$ k get applicationsnapshotrestore -A
NAMESPACE NAME SNAPSHOT-NAME COMPLETED
application wordpress-rg-restore-cluster2 wordpress-snap true
[nutanix@harbor ~]$ k get pod -n application
NAME READY STATUS RESTARTS AGE
wordpress-84f858d9fd-zqtrt 1/1 Running 0 78s
wordpress-mysql-556f6f65cc-s2h8m 1/1 Running 0 78s
[nutanix@harbor ~]$
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...........
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe wordpress-backup-1` and `velero backup logs wordpress-backup-1`.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe wordpress-restore-1` and `velero restore logs wordpress-restore-1`.
[nutanix@harbor nkp]$ docker login 10.38.252.79
Username: admin
Password:
WARNING! Your credentials are stored unencrypted in '/home/nutanix/.docker/config.json'.
Configure a credential helper to remove this warning. See
https://docs.docker.com/go/credential-store/
Login Succeeded
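The warning above can be resolved by pointing Docker at a credential helper. A minimal sketch, assuming the docker-credential-pass helper (plus pass and gpg) is already installed, which this lab does not otherwise require:
# Replace ~/.docker/config.json (note: this discards any existing auth entries)
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "pass"
}
EOF
# Log in again; credentials now go to the helper instead of the JSON file
docker login 10.38.252.79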
Install kubectl for accessing Kubernetes:
[nutanix@harbor nkp-v2.17.0]$ sudo curl -Lo /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 53.7M 100 53.7M 0 0 74.2M 0 --:--:-- --:--:-- --:--:-- 74.3M
[nutanix@harbor nkp-v2.17.0]$ sudo chmod +x /usr/local/bin/kubectl
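A quick check that the binary is installed and executable:
kubectl version --client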
[nutanix@harbor ~]$ curl -o nkp-air-gapped-bundle_v2.17.0_linux_amd64.tar.gz "https://download.nutanix.com/downloads/nkp/v2.17.0/nkp-air-gapped-bundle_v2.17.0_linux_amd64.tar.gz?Expires=1768402002&Key-Pair-Id=APKAJTTNCWPEI42QKMSA&Signature=DINg8wm1mGZR8fgXilavLmDe81UR126bHPdhLddcLuz8BQrXOPWSzHW29kVrNNUXP~CMS-cLArguoPGDwljnmBDaFACTsA9n~ooyA~Ff~9TEyjHiaWaNjez9gVawoyvHszti7Mfad7Bj4btFIsK9xGVYhJSuWAOryx2ieltb3GOEKrjjgZY4ykw7i3EzZrd9hXRga0DjbE3Lfy-YX~2h0~uhH~XiF08tvOI~LfTMi7AZDt2hJyZYgcXPqeVDJuFjureFdb4J4NNRu-lwbbrqipAQoMIYIOwUPLdV0oRFad4MAOUqqMhZIikSQSoYznkLb5WiFeVGxaagjcPFjflS1w__"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 20.8G 100 20.8G 0 0 135M 0 0:02:37 0:02:37 --:--:-- 139M
[nutanix@harbor ~]$ tar -xvf nkp-air-gapped-bundle_v2.17.0_linux_amd64.tar.gz
[nutanix@harbor nkp]$ cd nkp-v2.17.0/
[nutanix@harbor nkp-v2.17.0]$ docker load -i konvoy-bootstrap-image-v2.17.0.tar
[nutanix@harbor nkp-v2.17.0]$ docker image ls | grep konvoy-bootstrap
WARNING: This output is designed for human readability. For machine-readable output, please use --format.
mesosphere/konvoy-bootstrap:v2.17.0 9f13ef224cd1 5.86GB 2.92GB
[nutanix@harbor nkp-v2.17.0]$ nkp push bundle --bundle ./container-images/konvoy-image-bundle-v2.17.0.tar --to-registry=10.38.252.79/nkp --to-registry-username=admin --to-registry-password=Harbor12345 --to-registry-insecure-skip-tls-verify
[nutanix@harbor nkp-v2.17.0]$ nkp push bundle --bundle ./container-images/kommander-image-bundle-v2.17.0.tar --to-registry=10.38.252.79/nkp --to-registry-username=admin --to-registry-password=Harbor12345 --to-registry-insecure-skip-tls-verify
Create an SSH key for accessing the VMs created by the system:
[nutanix@harbor ~]$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/nutanix/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/nutanix/.ssh/id_ed25519
Your public key has been saved in /home/nutanix/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256://XGZ4k3rrFq2BC+QWmPk3NJNt972vQ5vHrnSuOyTm0 nutanix@harbor.local
The key's randomart image is:
+--[ED25519 256]--+
| |
| |
| . |
| = + |
| S B + . |
| X + o . |
| @ ..E.+|
| o =.*o&X|
| .o=*XX%|
+----[SHA256]-----+
Performance: Ed25519 is significantly faster for both signing and verification operations. It uses elliptic curve cryptography which requires less computational overhead than RSA’s integer factorization approach.
Key size: Ed25519 uses 256-bit keys that provide security equivalent to 3072-bit RSA keys. This means much smaller key sizes for the same security level, reducing storage and transmission overhead.
Security design: Ed25519 was designed from the ground up to avoid many implementation pitfalls that have plagued RSA. It’s resistant to timing attacks, doesn’t require careful random number generation during signing (unlike RSA), and uses deterministic signatures.
Simplicity: The algorithm has fewer parameters and configuration options, reducing the chance of implementation errors. RSA requires choosing padding schemes, key sizes, and other parameters that can introduce vulnerabilities if done incorrectly.
Side-channel resistance: Ed25519 is designed to be resistant to side-channel attacks like timing and power analysis attacks, whereas RSA implementations often leak information through timing variations.
Future-proofing: While both are considered secure today, Ed25519’s elliptic curve foundation generally scales better as security requirements increase over time.
The main trade-off is that RSA is more widely supported in legacy systems, but for new applications, Ed25519 is generally the better choice due to its superior performance and security characteristics.
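These trade-offs can be seen directly on the key generated above; ssh-keygen prints the bit length, fingerprint, and key type:
# 256-bit ED25519 key; compare with the 3072-bit RSA equivalent mentioned above
ssh-keygen -lf ~/.ssh/id_ed25519.pub
# For comparison only (not needed for this lab): generate the RSA equivalent
# ssh-keygen -t rsa -b 3072 -f /tmp/id_rsa_test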
Install NKP with the CLI. The user and password variables must be exported before running the nkp command.
If the installation host can access the internet, the --airgapped option is not needed. Any value not specified on the CLI falls back to a default, including the VM size and VM count (for example, 3 control plane VMs and 4 worker VMs). The other options are shown in the example below:
ENVIRONMENT VARIABLES
export CLUSTER_NAME="wskn-mgmt-ag" # Name of the Kubernetes cluster
export NUTANIX_PC_FQDN_ENDPOINT_WITH_PORT="https://10.168.100.4:9440" # Nutanix Prism Central endpoint URL with port
export CONTROL_PLANE_IP="10.168.102.30" # IP address for the Kubernetes control plane
export IMAGE_NAME="nkp-rocky-9.4-release-1.29.6-20240816215147" # Name of the VM image to use for cluster nodes
export PRISM_ELEMENT_CLUSTER_NAME="wskn-nongpu" # Name of the Nutanix Prism Element cluster
export SUBNET_NAME="non-gpu-airgap" # Name of the subnet to use for cluster nodes
export PROJECT_NAME="default" # Name of the Nutanix project
export CONTROL_PLANE_REPLICAS="3" # Number of control plane replicas
export CONTROL_PLANE_VCPUS="4" # Number of vCPUs for control plane nodes
export CONTROL_PLANE_CORES_PER_VCPU="1" # Number of cores per vCPU for control plane nodes
export CONTROL_PLANE_MEMORY_GIB="16" # Memory in GiB for control plane nodes
export WORKER_REPLICAS="3" # Number of worker node replicas
export WORKER_VCPUS="8" # Number of vCPUs for worker nodes
export WORKER_CORES_PER_VCPU="1" # Number of cores per vCPU for worker nodes
export WORKER_MEMORY_GIB="32" # Memory in GiB for worker nodes
export NUTANIX_STORAGE_CONTAINER_NAME="default-container-xxx" # Name of the Nutanix storage container
export CSI_FILESYSTEM="ext4" # Filesystem type for CSI volumes
export CSI_HYPERVISOR_ATTACHED="true" # Whether to use hypervisor-attached volumes for CSI
export LB_IP_RANGE="10.168.102.31-10.168.102.31" # IP range for load balancer services
export SSH_KEY_FILE="/root/.ssh/id_rsa.pub" # Path to the SSH public key file
export NUTANIX_USER="admin" # Nutanix Prism Central username
export NUTANIX_PASSWORD="" # Nutanix Prism Central password (left blank for security)
export REGISTRY_URL="https://registry.wskn-ag.local/library" # URL for the private container registry
export REGISTRY_USERNAME="admin" # Username for authenticating with the private registry
export REGISTRY_PASSWORD="" # Password for authenticating with the private registry (left blank for security)
export REGISTRY_CA="/root/wskn-ag-certs/server.crt" # Path to the CA certificate for the private registry
INSTALLATION COMMAND
nkp create cluster nutanix --cluster-name $CLUSTER_NAME \
--endpoint $NUTANIX_PC_FQDN_ENDPOINT_WITH_PORT \
--control-plane-endpoint-ip $CONTROL_PLANE_IP \
--control-plane-vm-image $IMAGE_NAME \
--control-plane-prism-element-cluster $PRISM_ELEMENT_CLUSTER_NAME \
--control-plane-subnets $SUBNET_NAME \
--control-plane-pc-project $PROJECT_NAME \
--control-plane-replicas $CONTROL_PLANE_REPLICAS \
--control-plane-vcpus $CONTROL_PLANE_VCPUS \
--control-plane-cores-per-vcpu $CONTROL_PLANE_CORES_PER_VCPU \
--control-plane-memory $CONTROL_PLANE_MEMORY_GIB \
--worker-vm-image $IMAGE_NAME \
--worker-prism-element-cluster $PRISM_ELEMENT_CLUSTER_NAME \
--worker-subnets $SUBNET_NAME \
--worker-pc-project $PROJECT_NAME \
--worker-replicas $WORKER_REPLICAS \
--worker-vcpus $WORKER_VCPUS \
--worker-cores-per-vcpu $WORKER_CORES_PER_VCPU \
--worker-memory $WORKER_MEMORY_GIB \
--ssh-public-key-file $SSH_KEY_FILE \
--csi-storage-container $NUTANIX_STORAGE_CONTAINER_NAME \
--csi-file-system $CSI_FILESYSTEM \
--csi-hypervisor-attached-volumes=$CSI_HYPERVISOR_ATTACHED \
--kubernetes-service-load-balancer-ip-range $LB_IP_RANGE \
--insecure \
--self-managed \
--airgapped \
--registry-mirror-url $REGISTRY_URL \
--registry-mirror-cacert $REGISTRY_CA \
--registry-mirror-username=$REGISTRY_USERNAME \
--registry-mirror-password=$REGISTRY_PASSWORD
Cluster default/nkp-mgmt kubeconfig was written to the filesystem.
You can now view resources in the new cluster by using the --kubeconfig flag with kubectl.
For example: kubectl --kubeconfig="/home/nutanix/nkp-mgmt.conf" get nodes
Starting Kommander installation
✓ Deploying Flux
✓ Deploying Ingress certificate
✓ Creating kommander-overrides ConfigMap
✓ Deploying Git Operator
✓ Creating GitClaim for management GitRepository
✓ Creating GitClaimUser for accessing management GitRepository
✓ Deploying Flux configuration
✓ Deploying Kommander Operator
✓ Creating KommanderCore resource
✓ Cleaning up Kommander bootstrap resources
✓ Deploying Gatekeeper
✓ Creating PlatformVersionArtifact
✓ Deploying Kommander AppManagement
✓ 4 out of 14 core applications have been installed (waiting for dex, dex-k8s-authenticator and 8 more)
✓ 5 out of 14 core applications have been installed (waiting for dex, dex-k8s-authenticator and 7 more)
✓ 10 out of 14 core applications have been installed (waiting for dex-k8s-authenticator, kommander and 2 more)
✓ 11 out of 14 core applications have been installed (waiting for dex-k8s-authenticator, kommander-ui and 1 more)
✓ 13 out of 14 core applications have been installed (waiting for traefik-forward-auth-mgmt)
Cluster was created successfully! Get the dashboard details with:
nkp get dashboard --kubeconfig="/home/nutanix/nkp-mgmt.conf"
# Generate a private CA and a server certificate for Harbor
[nutanix@harbor harbor]$ cd generate-cert
# 1. Generate CA (Root Authority)
[nutanix@harbor generate-cert]$ openssl genrsa -out ca.key 4096
[nutanix@harbor generate-cert]$ openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=TH/ST=BKK/L=BKK/O=ntnxlab/OU=demo/CN=ntnxlab-Internal-CA Root CA" \
-key ca.key -out ca.crt
# 2. Generate Server Key
[nutanix@harbor generate-cert]$ openssl genrsa -out harbor.local.key 2048
# 3. Generate CSR (Certificate Signing Request)
# Use the actual address you will type into your browser/CLI as the CN
[nutanix@harbor generate-cert]$ openssl req -sha512 -new \
-subj "/C=TH/ST=BKK/L=BKK/O=ntnxlab/OU=demo/CN=ntnxlab.local" \
-key harbor.local.key -out harbor.local.csr
# 4. Create v3.ext (Crucial fix: CA:FALSE and proper SAN)
[nutanix@harbor generate-cert]$ cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1=ntnxlab.local
DNS.2=*.ntnxlab.local
DNS.3=harbor.ntnxlab.local
IP.1=10.38.252.79 # REPLACE with your actual Harbor VM IP
EOF
# 5. Generate Server Certificate (Signed by the CA)
[nutanix@harbor generate-cert]$ openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in harbor.local.csr \
-out harbor.local.crt
# Verify the new CA; expect the correct subject in the output
[nutanix@harbor generate-cert]$ openssl x509 -in ca.crt -nameopt sep_multiline -subject -noout
# Verify the server certificate against the CA; should return OK
[nutanix@harbor generate-cert]$ openssl verify -CAfile ca.crt harbor.local.crt
# Verify the SAN entries; should show your DNS names and IP
[nutanix@harbor generate-cert]$ openssl x509 -in harbor.local.crt -text -noout | grep -A 1 "Subject Alternative Name"
To make your Kubernetes nodes or Docker clients trust this certificate,
you must copy the ca.crt (not the harbor.local.crt) to the OS trust store:
RHEL/CentOS: Copy to /etc/pki/ca-trust/source/anchors/ and run update-ca-trust.
Ubuntu/Debian: Copy to /usr/local/share/ca-certificates/ and run update-ca-certificates.
Docker specific: Docker also requires the certs in
/etc/docker/certs.d/harbor.ntnxlab.local/ca.crt
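As commands, those three cases look like this (the Ubuntu filename ntnxlab-ca.crt is arbitrary; only the .crt extension matters there):
# RHEL/CentOS
sudo cp ca.crt /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
# Ubuntu/Debian
sudo cp ca.crt /usr/local/share/ca-certificates/ntnxlab-ca.crt
sudo update-ca-certificates
# Docker: per-registry trust directory
sudo mkdir -p /etc/docker/certs.d/harbor.ntnxlab.local
sudo cp ca.crt /etc/docker/certs.d/harbor.ntnxlab.local/ca.crt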
# Optional - Copy the certificate to a .cert file for Docker (Docker treats .crt files as CA certificates and .cert files as server certificates)
[nutanix@harbor generate-cert]$ openssl x509 -inform PEM -in harbor.local.crt -out harbor.local.cert
#copy harbor.local.crt and harbor.local.key for harbor server
[nutanix@harbor generate-cert]$ cp harbor.local.crt ../certs
[nutanix@harbor generate-cert]$ cp harbor.local.key ../certs
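After the certificate and key are copied into ../certs, Harbor has to be reconfigured to pick them up. A sketch assuming the standard docker-compose based Harbor installer one directory up; the prepare step and compose files are assumptions about a typical Harbor layout, not something shown in this lab:
cd ..
sudo docker compose down
sudo ./prepare               # regenerates Harbor's nginx config with the new certs
sudo docker compose up -d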