Welcome to your complete guide on getting started with the Hyperledger Fabric Operator! In this tutorial, we’ll walk you through the essential steps required to deploy your own network using Kubernetes. But before we dive in, let’s take a quick look at some of the astounding features this operator offers.
Key Features
- Create Certificate Authorities (CA)
- Create Peers
- Create Ordering Services
- Create resources without manual provisioning of cryptographic material
- Domain routing with SNI using Istio
- Run chaincode as external chaincode in Kubernetes
- Support for Hyperledger Fabric 2.3+
- Managed genesis for Ordering services
- End-to-End testing including the execution of chaincodes in KIND
- Renewal of certificates
Creating a Kubernetes Cluster
First things first! In order to deploy your network, you need a Kubernetes Cluster. For this, we’ll utilize KinD (Kubernetes in Docker). Ensure that ports 80 and 443 are available before you create the cluster.
kind-config.yaml:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.25.8
    extraPortMappings:
      - containerPort: 30949
        hostPort: 80
      - containerPort: 30950
        hostPort: 443
```

Create the cluster from this configuration:

```bash
kind create cluster --config=kind-config.yaml
```
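Before moving on, it is worth confirming that the cluster came up correctly. The default KinD cluster name is `kind`, so a quick sanity check looks like this:

```bash
# The control-plane node should report a Ready status
kubectl get nodes
kubectl cluster-info --context kind-kind
```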
Think of it as setting up a stage for a play. You must make sure that the venue (Kubernetes cluster) is ready and the necessary equipment (ports) is in place to showcase your amazing performance (deployment).
Installing the Kubernetes Operator
Now that the Kubernetes cluster is up and running, the next step is to install the Kubernetes operator for Fabric. This sets the stage for deploying Fabric peers, orderers, and certificate authorities (CAs).
You’ll first need to install Helm; installation instructions are available in the official Helm documentation (https://helm.sh/docs/intro/install/).
```bash
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts --force-update
helm install hlf-operator --version=1.9.2 kfs/hlf-operator
```
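The deployment steps later in this guide rely on the `kubectl hlf` plugin. If it isn’t installed yet, it can be added through Krew (the plugin name below follows the hlf-operator documentation; install Krew first if you don’t have it):

```bash
# Install the kubectl-hlf plugin via the Krew plugin manager
kubectl krew install hlf
# Verify that the plugin responds
kubectl hlf --help
```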
Installing Istio
Next, we’ll need to install Istio, which will help in managing traffic and services for our application.
```bash
# Download Istio; the script extracts it into a versioned directory (istio-<version>)
curl -L https://istio.io/downloadIstio | sh -
# Add istioctl to the PATH (adjust if you have more than one Istio release downloaded)
export PATH="$PWD/$(ls -d istio-* | head -n 1)/bin:$PATH"
kubectl create namespace istio-system
istioctl operator init
```
```bash
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  addonComponents:
    grafana:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
    tracing:
      enabled: false
  components:
    ingressGateways:
      - enabled: true
        k8s:
          hpaSpec:
            minReplicas: 1
          service:
            ports:
              - name: http
                port: 80
                targetPort: 8080
                nodePort: 30949
              - name: https
                port: 443
                targetPort: 8443
                nodePort: 30950
            type: NodePort
        name: istio-ingressgateway
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 1
        resources:
          limits:
            cpu: 300m
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 128Mi
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: false
    outboundTrafficPolicy:
      mode: ALLOW_ANY
  profile: default
EOF
```
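Before continuing, you can verify that the operator rolled out the ingress gateway and that the NodePort service exposes the ports mapped in the KinD configuration:

```bash
# istiod and the ingress gateway pods should reach the Running state
kubectl get pods -n istio-system
# The istio-ingressgateway service should list NodePorts 30949 and 30950
kubectl get svc istio-ingressgateway -n istio-system
```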
Imagine Istio as a traffic cop directing vehicles at a busy intersection, ensuring each service (or “vehicle”) flows smoothly to its intended destination without collisions.
Deploying a Peer Organization
With everything in place, we can start deploying our Peer Organizations. This involves creating a certificate authority and registering users.
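The command below references the `$CA_IMAGE` and `$CA_VERSION` environment variables, so export them first. The values shown are only a sketch: `hyperledger/fabric-ca` is the upstream CA image, and the tag should be pinned to the Fabric CA release you actually want to run.

```bash
# Example values only; adjust the tag to your target Fabric CA release
export CA_IMAGE=hyperledger/fabric-ca
export CA_VERSION=1.5.7
```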
```bash
kubectl hlf ca create --image=$CA_IMAGE --version=$CA_VERSION --storage-class=standard \
  --capacity=1Gi --name=org1-ca --enroll-id=enroll --enroll-pw=enrollpw \
  --hosts=org1-ca.localho.st --istio-port=443

kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all
```
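With the CA running, the next part of deploying a peer organization is registering an identity for the peer against that CA. The example below is a sketch based on typical `kubectl hlf` usage; the user name, secret, and MSP ID (`peer`, `peerpw`, `Org1MSP`) are placeholders to adapt to your own setup.

```bash
# Register a peer identity with the org1 CA (placeholder credentials)
kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw \
  --type=peer --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP
```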
Remember, each peer is like a performer, and the certificate authority acts as their manager, ensuring they have all the necessary credentials to perform on stage.
Troubleshooting
If something goes wrong or doesn’t work as expected, here are some troubleshooting tips:
- Chaincode Installation Build Error: Sometimes, the installation might fail due to an unsupported local Kubernetes version. If you’re using something like Minikube, consider switching to KinD which is tested and supported.
- Connection Issues: Ensure that your Istio configuration is correctly set up, and double-check that host ports 80 and 443 map to NodePorts 30949 and 30950 as configured earlier; the checks after this list can help narrow things down.
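When digging into either problem, a few generic checks against the resources created in this guide usually reveal what is wrong. The operator deployment name below is an assumption based on the Helm chart defaults; verify it with `kubectl get deploy` if the command finds nothing.

```bash
# Inspect the operator-managed CA resource and its reported conditions
kubectl get fabriccas.hlf.kungfusoftware.es -A
kubectl describe fabriccas.hlf.kungfusoftware.es org1-ca
# Operator logs (deployment name assumed from the Helm chart defaults)
kubectl logs deploy/hlf-operator-controller-manager
# Istio ingress gateway logs for routing or TLS issues
kubectl logs -n istio-system deploy/istio-ingressgateway
```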