Fabric Admin

Key Points


References


Key Concepts



Get started with Fabric 2.1

https://www.youtube.com/watch?v=IC1zO5oXqXg


CA key management



Public / Private key management

Private keys are never sent anywhere; only public keys are included with transactions.
If you are using the fabric-ca-client or any of the SDKs, by default private keys are created on the local file system of the host on which you enroll. You can also choose to use the PKCS11 provider to have the private key generated and stored in an HSM.
If you do generate it on the local file system, then you should set the permissions to 0400 on *nix based OSs. You should also encrypt the file system (especially when running in a public cloud).

Gari Singh
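A minimal sketch of the permission hardening described above, assuming the default fabric-ca-client layout (key files under $FABRIC_CA_CLIENT_HOME/msp/keystore, with names ending in _sk):

```shell
# Assumption: fabric-ca-client enroll wrote the private key into the default
# keystore directory. The demo key is created only so the snippet runs standalone.
KEYSTORE="${FABRIC_CA_CLIENT_HOME:-$HOME/.fabric-ca-client}/msp/keystore"
mkdir -p "$KEYSTORE"
ls "$KEYSTORE"/*_sk >/dev/null 2>&1 || touch "$KEYSTORE/demo_sk"
# owner read-only; no write, no group/other access
chmod 400 "$KEYSTORE"/*_sk
```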


Fabric Key Rotation Video - 2021

https://www.youtube.com/watch?v=UgZWMRiYXMQ


Hyperledger Aries, Ursa. Learn more about Hyperledger:
  • Projects: https://www.hyperledger.org/use
  • Case Studies: https://www.hyperledger.org/learn/cas...
  • Training & Certification: https://www.hyperledger.org/learn/tra...
  • Tutorials: https://www.hyperledger.org/use/tutor...
  • Webinars: https://www.hyperledger.org/learn/web...
  • Events: https://www.hyperledger.org/events
  • Vendor Directory: https://www.hyperledger.org/use/vendo...
  • Newsletter: https://www.hyperledger.org/newsletter




Kubernetes toolsets for a Fabric network

https://github.com/APGGroeiFabriek/PIVT











Deprecated >> Fabric v1.x >> Kafka and Zookeeper nodes need to be persisted

https://stackoverflow.com/questions/50287088/when-using-hyperledger-fabric-with-kafka-consensus-is-persistent-storage-requir/50289394#50289394

You do need to persist the storage for the Kafka and Zookeeper nodes.

For Kafka, you can set the KAFKA_LOG_DIRS env variable and then make sure you attach an external volume to that directory.

For Zookeeper, the default data directory is /data, so just attach an external volume to that directory.
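As a sketch of the two volume attachments (the container names, host paths, and image tags are placeholders, not taken from the answer above):

```shell
# Zookeeper: default data directory is /data, so bind an external volume there
docker run -d --name zookeeper0 \
  -v /mnt/zookeeper0:/data \
  hyperledger/fabric-zookeeper

# Kafka: point KAFKA_LOG_DIRS at a directory backed by an external volume
docker run -d --name kafka0 \
  -e KAFKA_LOG_DIRS=/var/kafkadata \
  -v /mnt/kafka0:/var/kafkadata \
  hyperledger/fabric-kafka
```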


Fabric Admin to Manage Networks using Kubectl







Fabric Test Network on Kubernetes



For a production network, you will have a successful outcome with the combination of Kubernetes and fabric-operator.







The operator can also be supplemented by the fabric-operations-console for cases where you would prefer to administer the networks with a GUI, or automate deployment with Ansible or REST SDKs.  For CLI integration, fabric-operator works directly with the raw Kube APIs and Fabric CLI binaries, and can be integrated with your CI/CD/automation infrastructure as necessary.  Both the operator and console are designed to support the "remote management" of Fabric networks, in cases where you do not have access to the underlying infrastructure but can establish network visibility to peers and orderers via gRPC service URLs.







fabric-operator has been designed for easy integration with the new Gateway Client (> 2.4), Chaincode-as-a-Service (> 2.4.1), and the new fabric-builder-k8s, providing near-instantaneous deployments of production images and interactive step-level debugging in the IDE of your choice.  In addition, the operator provides some key certificate enrollment and renewal functions which will set your network off to a healthy start and long-term stability.







We have a sample-network available which will "just work" on a local development environment (KIND, Rancher/k3s, minikube), and extends naturally to kube clusters running in the wild at EKS, IKS, and hybrid cloud environments.  The sample network uses a combination of kubectl / kustomization to apply a network, and some light shell scripting to illustrate the configuration of Fabric services using the native CLI binaries.  In general, we've found the combination of KIND, operator, and Chaincode-as-a-Service to provide a development environment that is superior to the fabric-samples "test-network.sh", providing clear alignment with production operations.
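A sketch of bringing the sample network up on KIND; the repo URL and script entry points below are based on the fabric-operator sample-network at the time of writing and may have changed, so verify against the repo's README:

```shell
git clone https://github.com/hyperledger-labs/fabric-operator.git
cd fabric-operator/sample-network

# the sample scripts wrap kubectl / kustomize plus the Fabric CLI binaries
./network kind              # create a local KIND cluster
./network up                # apply the operator, CAs, peers, and orderers
./network channel create    # configure a channel with the native CLI tools
```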







Regarding the network setup, there are good opportunities to align some of the higher-order systems, such as Minifabric, BAF, Cello, K8s operators, and the like, into a community-supported, "really easy to use" and "really easy to live with" solution that allows us to just write some chaincode...

Please feel free to reach out, either on the mailing list or Discord #fabric-kubernetes for additional guidance, feedback, and general banter.


Planning Considerations for Fabric Solution

https://medium.com/zeeve/crucial-considerations-before-deploying-hyperledger-fabric-in-the-blockchain-network-in-2023-5ac2cdad1be9

Fabric network concept

While Peers store the ledger that contains the transaction log and the world state, Orderers maintain the consistency of the state and create blocks of transactions. CAs connect the entire network using the certificate’s chain of trust.

The channel in the chart depicts a key abstraction in Hyperledger Fabric: a subnet of the network that isolates state and smart contracts. All the peers on the channel have access to the very same data and smart contracts.


Primary questions to consider are:

  1. How many nodes do you need to establish high availability?
  2. Which company provides the best globally available nodes?
  3. On what cloud platforms/data centers would you wish to deploy?
  4. Does this data center satisfy your requirements for uptime, disaster recovery, monitoring, and alerts?
  5. How to ensure the security of your private keys and roots of trust?

Key engineering requirements based on trust engineering - what proofs do you need for DLT?

Key engineering requirements based on trust engineering - what ledger data needs to be shared and when? why?


Fabric Consensus Method for Ordering Service

Currently orderers support either CFT (Raft) or BFT (still in v3 preview stage). The ordering service is set up to run one and only one of those two protocols for block consensus.

Chaincode transactions will be ordered according to the channel configuration setup for the ordering service, meaning there is no way of changing consensus on the fly.

Addressing the three ideas:
  1. Two orderer organizations in the same channel will share the same ordering rules, as they are part of the same ordering service.
  2. Not possible for the reasons I outlined above.
  3. You may have two channels that follow two different ordering protocols (Raft and BFT for v3.0.0-preview) as long as they are served by different orderers. The orderer runtime will run only one of the protocols after initial setup. That would also mean that your chaincode logic would be separate in two channels (two chaincodes) and that information would be registered in separate ledgers.
Beware that ordering logic is decoupled from smart contract logic.

CTO @ GoLedger

Here are some of the network configuration decisions you need to make before deployment:

  • Certificate Authority (CA) configuration: A CA issues digital certificates and authenticates the digital identities of systems. It also certifies a key’s ownership via the certificate’s named subject. That said, it is recommended to use a separate CA for every organization. You may decide to use this CA as the root or to operate it as an intermediate associated with the root. It is also recommended that the production environment use Transport Layer Security (TLS), which requires setting up a TLS CA to generate TLS certificates for the nodes. You can then deploy the TLS CA before the enrollment CA.
  • Database type: Hyperledger Fabric can utilize either LevelDB or CouchDB. Of the two, a few channels in a network may prefer LevelDB when speed is a priority, and the rest may lean towards CouchDB for its rich query operations. That said, channels must ensure that their peers don’t mix LevelDB and CouchDB, as CouchDB imposes restrictions on keys & values: keys & values that are valid for LevelDB may not be valid for CouchDB.
  • Peers and Ordering nodes: Peers are among the fundamental elements of the Fabric network; a peer is the type of node that hosts the chaincode and the ledger. Channels connect peers to each other, and peers can be grouped based on ledger and contract management needs. Orderers, on the other hand, implement the consensus protocol that orders the transactions submitted by members. How Peers, Orderers, and CAs fit the Fabric network topology chart is explained further down the article.
  • Channels for Organizations: An identical copy of the ledger is shared with the organizations on a channel. These organizations can not only collaborate with each other to manage said ledger but can also create collections of private data. This enables a subgroup of organizations to commit, enforce, and question private data without the necessity of creating a different channel. Based on your use case, you might choose Channels as the best way forward to ensure the privacy and isolation of some transactions or take a call if having private data collections works better.
  • Container orchestration: There are various containers you can choose from to run your Hyperledger Fabric network. Based on your use case, you may either create separate containers for CouchDB, orderer, peer, gRPC, logging, communication, chaincode, and others or take a call to combine some of these in a single container.
  • Chaincode deployment: Hyperledger Fabric offers the option to use either the built-in ‘build & run’ or a custom ‘build & run’ to deploy chaincodes. If you require the latter, you can opt to use External Builders and Launchers, and you also have the option of running chaincode as an external service.
  • Importance of Firewalls: Just like in a traditional system setting, a solid firewall is a necessity in a production deployment environment. As elements that belong to one organization may often need access to elements of other organizations, advanced network configurations and firewalls play a vital role during the access.
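For the database-type decision above, the peer-side CouchDB settings can be supplied as core.yaml environment overrides; a sketch (the CouchDB address and credentials are placeholders):

```shell
# select CouchDB as the state database and point the peer at it
export CORE_LEDGER_STATE_STATEDATABASE=CouchDB
export CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
export CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
export CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
# then start the peer process (not run here): peer node start
```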

Every organization will need expertise in management systems and its own production environment to operate the network efficiently. That said, based on your requirement and use case, you may own & manage the appropriate blockchain network components.


Kubernetes Deployment Tips


Leverage Fabric Operator for easy network runtime management of clusters, pods, namespaces, processes


Leverage free sample Helm charts as templates for your own

You may find the versions of each ‘hlf’ chart mentioned below:

  1. hlf-ca (2.1.0): is the short form of Hyperledger Fabric ‘Certificate Authority.’ Essentially, it is a CA node for the Fabric permissioned blockchain framework, which can either be installed as a Root CA or an intermediate CA (by pointing to a parent CA, which can itself be a Root CA or an intermediate).
  2. hlf-couchdb (2.1.0): is the CouchDB instance for Hyperledger Fabric, and it is the node holding the blockchain ledger of each peer for the Fabric permissioned blockchain framework.
  3. hlf-ord (3.1.0): is the Hyperledger Fabric Orderer node type that is responsible for the consensus for the Fabric permissioned blockchain framework.
  4. hlf-peer (5.1.0): is the Hyperledger Fabric Peer node type responsible for endorsing transactions and maintaining a record on the blockchain for the Fabric permissioned blockchain framework.
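A sketch of installing one of these charts. The hlf-* charts lived in the now-archived "stable" Helm repository, so the repo URL below points at the archive and may need to be replaced with wherever you host the charts today:

```shell
# add the archived "stable" repo (read-only archive of the old charts)
helm repo add stable https://charts.helm.sh/stable
helm repo update

# install a CA using the chart version listed above
helm install org1-ca stable/hlf-ca --version 2.1.0
```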


Deploy Fabric Network on Kubernetes with Bevel Operators

https://www.hyperledger.org/blog/2023/04/27/introducing-kubernetes-operators-in-hyperledger-bevel-with-bevel-operator-fabric

Adding Bevel-Operator-Fabric is an important milestone for Hyperledger Bevel and a step towards the redesign of Bevel with support for Kubernetes operators to achieve DLT (distributed ledger technology) network deployments and automation.

What is Bevel-Operator-Fabric

Bevel-Operator-Fabric is the first of the different sets of Kubernetes operators that Hyperledger Bevel will ultimately support. As the name suggests, it is a Kubernetes operator for managing Hyperledger Fabric networks. Bevel-operator-fabric enables developers to define Hyperledger Fabric components in the form of custom resources and associated controllers to manage the lifecycle of those components. 

Bevel-operator-fabric has two components to it.

  1. Operator
  2. Kubectl-hlf plugin

Operator runs on Kubernetes and tries to bring the current state to the desired state, a process called reconciliation. The kubectl-hlf plugin is used to create the custom resource and send the commands to the operator. It also provides an abstraction to create CRDs.
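As an illustration of the plugin, a CA and a peer could be created roughly as follows; the flags are taken from the bevel-operator-fabric README at the time of writing and may differ in current releases:

```shell
# create a Fabric CA as a custom resource; the operator reconciles it into pods
kubectl hlf ca create --storage-class=standard --capacity=1Gi \
  --name=org1-ca --enroll-id=enroll --enroll-pw=enrollpw

# create a peer for Org1MSP backed by CouchDB
kubectl hlf peer create --statedb=couchdb --storage-class=standard \
  --capacity=5Gi --name=org1-peer0 --mspid=Org1MSP \
  --enroll-id=peer --enroll-pw=peerpw --ca-name=org1-ca.default
```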

Users needing to rapidly and consistently set up production-ready DLT networks should consider Hyperledger Bevel, whereas users needing to deploy Hyperledger Fabric components quickly with the operator pattern in place should consider Bevel-Operator-Fabric.


Potential Value Opportunities


HLF Training programs

CHFA - Certified Hyperledger Fabric Administrator

https://training.linuxfoundation.org/certification/certified-hyperledger-fabric-administrator-chfa/

HLF Administration course (LFS272) - $299

https://training.linuxfoundation.org/training/hyperledger-fabric-administration-lfs272/



Potential Challenges



Open Issues List


CA tutorials work for SOLO and Kafka, NOT Raft

Here is some more background info and some update to this issue.







Background:



I started out this exercise by following the Fabric CA Operations tutorial (https://hyperledger-fabric-ca.readthedocs.io/en/latest/operations_guide.html) so it is a standard 4 CAs 2 Orgs setup. It worked with SOLO or KAFKA without any problem. The Orderer TLS setup is pretty standard too:







fabric-ca-client register -d --id.name orderer1-org0 --id.secret ordererPW --id.type orderer -u https://0.0.0.0:6052
fabric-ca-client enroll -d -u https://orderer1-org0:ordererPW@0.0.0.0:6052 --enrollment.profile tls --csr.hosts orderer1-org0
 

Update:



When the orderers first started from the genesis block, they complained that "certificate is valid for orderer[X]-org0, not orderer[Y]-org0". So I tried to put all 5 orderer hostnames into the csr hosts (which I know is probably not how it is supposed to be done). That ended up with another complaint, "certificate presented by orderer[X]-org0:7050 doesn't match any authorized certificate", which confused me because the first complaint said the certificate was valid.
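A likely fix (a sketch extending the register/enroll commands above, not a confirmed resolution from the thread): give each orderer its own TLS certificate whose --csr.hosts matches only that orderer's hostname, rather than packing every hostname into one certificate:

```shell
# assumes orderer1-org0 .. orderer5-org0 were each registered with the TLS CA
for i in 1 2 3 4 5; do
  fabric-ca-client enroll -d \
    -u https://orderer${i}-org0:ordererPW@0.0.0.0:6052 \
    --enrollment.profile tls \
    --csr.hosts orderer${i}-org0 \
    -M "tls-msp/orderer${i}-org0"   # separate MSP output dir per orderer
done
```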




Deploying System Chaincode Debug


TLS Issues Debug

https://hyperledger-fabric.readthedocs.io/en/latest/enable_tls.html#debugging-tls-issues

Dave.Enyeart >>

It looks like you have successfully deployed the system chaincode and can connect to the peer, but not to the orderer.

The working invoke includes the orderer connection flags:

--tls --cafile "${PWD}"/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt

Your non-working invoke doesn’t include these flags.

The flags are required on invoke (but not query) so that the client can submit the endorsed transaction to the orderer. These flags provide the TLS level authorization when connecting to orderer.


Reported.error>>

2023-04-11 14:04:13.292 KST [nodeCmd] serve -> INFO 069 Deployed system chaincodes

I do “peer chaincode query -o 127.0.0.1:6050 -C "mychannel" -n xscc -c '{"Args":["XGetBlockByNumber","mychannel","0"]}'” and I get a response as expected. (that is, result identical to GetBlockByNumber on qscc)

However, I do “peer chaincode invoke -o 127.0.0.1:6050 -C "mychannel" -n xscc -c '{"Args":["XPutState"]}'” then I get this:

Error: error getting broadcast client: orderer client failed to connect to 127.0.0.1:6050: failed to create new connection: context deadline exceeded

A regular invoke call like "peer chaincode invoke -o 127.0.0.1:6050 -C mychannel -n basic -c '{"Args":["CreateAsset","1","blue","35","tom","1000"]}' --tls --cafile "${PWD}"/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt" works:

2023-04-11 15:39:45.264 KST [chaincodeCmd] chaincodeInvokeOrQuery -> INFO 001 Chaincode invoke successful. result: status:200

Obviously my orderer1 at 6050 is up and listening fine; I was able to create a channel via it, and a regular chaincode seems to work. I wonder if this is not a matter of connectivity but something else that I don't know. Could you hint me in the right direction?


Thanks. 



Candidate Solutions



Hyperledger #Sessions: Smart BFT dev & deploy with Bevel, Kubernetes


If you missed any of these but want to see the recordings, they'll all be posted to the Hyperledger YouTube channel at: https://www.youtube.com/Hyperledger


Hyperledger Certified Service Providers

https://www.hyperledger.org/use/hcsp

Fabric Enterprise Service Providers - Leaders - 2022




Fabric Monitoring and Alerting


Fabric supports Prometheus and Grafana for runtime metrics
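A sketch of enabling this: the operations endpoint and Prometheus metrics provider are configured per node, shown here as core.yaml environment overrides for a peer (the listen address is a placeholder; orderers use the equivalent ORDERER_* variables):

```shell
# expose the operations endpoint and publish metrics in Prometheus format
export CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9443
export CORE_METRICS_PROVIDER=prometheus
# once the peer is running, Prometheus can scrape http://<peer-host>:9443/metrics
```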

Fabric can provide alerting to IBM QRadar

https://github.com/guikarai/HLF-QRADAR

This repo illustrates the use of channel-based events in Hyperledger Fabric and redirects captured events/incidents to a QRadar SIEM.



Java SDK not loading on Fabric 2.3 (need LTS configs from 2.2)


 
The 2.3 release was an interim/development release; 2.2 is the LTS. There wasn't a docker image for fabric-javaenv:2.3 released. The quick way around this is to pull down fabric-javaenv:2.2 and retag it as 2.3.
 
Hope that helps.
 
Regards, Matthew White
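The workaround above as commands (a sketch; adjust the tags to the exact versions in your environment):

```shell
docker pull hyperledger/fabric-javaenv:2.2
docker tag hyperledger/fabric-javaenv:2.2 hyperledger/fabric-javaenv:2.3
docker images hyperledger/fabric-javaenv   # should now list both tags
```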

Debug Fabric instances in Docker Compose

https://medium.com/@rsripathi781/docker-cheat-sheet-for-hyperledger-fabric-128e89f2f36b

Docker Cheat Sheet for Hyperledger Fabric.pdf


1. List all containers — shows both running and stopped containers, with their ids and status.

docker ps -a

2. Check logs of containers — you might need these to check the logs of a peer / orderer when you invoke/query chaincodes, join peers to channels, etc.

docker logs containerid

3. Get into a docker container — you may need to go into a container to explore volumes you might have mounted during container creation. You can get hold of blocks being routed by the orderer or explore the ledger stored in the peer.

docker exec -it containerid bash

4. Get into the Fabric CLI — if you defined a CLI service in docker-compose, this command takes you into it to act against the specified peer.

docker exec -it cli bash

5. Restart Container

docker restart containerid

6. Run all services defined in docker-compose

docker-compose -f yourdockercompose.yaml up -d

7. Run specific service defined in docker-compose

docker-compose -f yourdockercompose.yaml up -d servicename

8. Tear down container volumes

docker-compose -f yourdockercompose.yaml down --volumes

9. Force remove all running containers

docker rm -f $(docker ps -aq)

10. Remove all images which were utilized by containers

docker rmi -f $(docker images -q )

11. Remove images by name pattern

docker rmi -f $(docker images "dev-*" -q)

"dev-*" — removes all images whose name starts with dev (quote the pattern so the shell doesn't expand it before docker sees it)





Recommended Next Steps