- Caliper useful for performance analysis
- Explorer useful for viewing transactions and blocks
- CouchDB Mango makes querying Fabric world state easy
- LevelDB can be used for the world state DB
- Other Fabric tools and utilities listed here
- Carbon Accounting project
- Business Partner Agent - exchange master data between partners
https://github.com/kfsoftware/hlf-sync
Hyperledger Fabric stores information in blocks, but this information is unstructured and lacks the search and processing capabilities of modern databases.
This project aims to store all of that information in an off-chain database, making the blockchain data accessible for other purposes such as validation, dashboards, statistics, etc.
Prerequisites:
- A running Hyperledger Fabric network
- A running supported database
You can download the binary from the releases page
hlf-sync --network=./hlf.yaml --config=config.yaml --channel=mychannelname
The configuration file for a PostgreSQL backend:

```yaml
database:
  type: sql
  driver: postgres
  dataSource: host=localhost port=5432 user=postgres password=postgres dbname=hlf sslmode=disable
```
The configuration file for a MySQL backend:

```yaml
database:
  type: sql
  driver: mysql
  dataSource: root:my-secret-pw@tcp(127.0.0.1:3306)/hlf?charset=utf8mb4&parseTime=True&loc=Local
```
The configuration file for an Elasticsearch backend:

```yaml
database:
  type: elasticsearch
  urls:
    - http://localhost:9200
  user:
  password:
```
Private keys are never sent anywhere. Only public keys are included with transactions.
If you are using the fabric-ca-client or any of the SDKs, by default private keys are created on the local file system of the host on which you enroll. You can also choose to use the PKCS11 provider to have the private key generated and stored in an HSM.
If you do generate it on the local file system, you should set the permissions to 0400 on *nix-based OSes. You should also encrypt the file system (especially when running in a public cloud).
https://github.com/APGGroeiFabriek/PIVT
https://stackoverflow.com/questions/50287088/when-using-hyperledger-fabric-with-kafka-consensus-is-persistent-storage-requir/50289394#50289394
You do need to persist the storage for the Kafka and Zookeeper nodes.
For Kafka, you can set the KAFKA_LOG_DIRS env variable and then make sure you attach an external volume to that directory.
For Zookeeper, the default data directory is /data, so just attach an external volume to that directory.
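For illustration, a docker-compose fragment along those lines might look like this — service names, image tags, and host paths are placeholders, not taken from any particular Fabric sample:

```yaml
services:
  kafka0:
    image: hyperledger/fabric-kafka
    environment:
      - KAFKA_LOG_DIRS=/var/kafka-logs   # point Kafka's log dir at the mounted volume
    volumes:
      - /backup/kafka0:/var/kafka-logs   # external volume so the log survives restarts
  zookeeper0:
    image: hyperledger/fabric-zookeeper
    volumes:
      - /backup/zookeeper0:/data         # Zookeeper's default data directory
```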
https://www.linkedin.com/posts/movee97_hyperledger-fabric-project-accelerator-activity-6780446612964569088-JUuA
https://github.com/hyperledger-labs/fabex
https://vadiminshakov.medium.com/fabex-tutorial-an-introduction-to-the-right-hyperledger-fabric-explorer-cd9ee1848cd9
https://github.com/hyperledger-labs/business-partner-agent
The Business Partner Agent allows organizations to manage and exchange master data. Exchange of master data should not happen via telephone, Excel, e-mail, or various supplier portals. Organizations should be able to publish documents like addresses, locations, contacts, bank accounts, and certifications publicly, or exchange them privately with their business partners in a machine-readable and tamper-proof format. Furthermore, verified documents issued by trusted institutions can streamline the process of onboarding new business partners.
The Business Partner Agent is built on top of the Hyperledger Self-Sovereign Identity Stack, in particular Hyperledger Indy and Hyperledger Cloud Agent Python.
- Attach a public organizational profile to your public DID (either did:indy/sov or did:web)
- Add business partners by their public DID and view their public profile.
- Add documents based on Indy schemas and request verifications from business partners
- Share and request verified documents with/from your business partners
https://github.com/hyperledger-labs/blockchain-carbon-accounting
https://wiki.hyperledger.org/display/CASIG/Carbon+Accounting+and+Certification+Working+Group
The mission of this working group is to
- identify how blockchain or distributed ledger technologies (DLTs) could improve corporate or personal carbon accounting
- make carbon accounting and certifications more open, transparent, and credible
- build collaboration between consumers, businesses, investors, and offset developers across industries and national boundaries.
We're here to help
- Businesses and organizations take action on climate change by making the process easier and less costly.
- Certifying entities scale by streamlining the verification of corporate climate action.
- General public and consumers trust corporate climate action.
- Lenders and investors align their capital decisions with climate goals.
- Offset buyers and developers connect with each other with greater trust and transparency.
https://www.offsetguide.org/understanding-carbon-offsets/what-is-a-carbon-offset/
A carbon offset credit is a transferable instrument certified by governments or independent certification bodies to represent an emission reduction of one metric tonne of CO2, or an equivalent amount of other GHGs. The purchaser of an offset credit can “retire” it to claim the underlying reduction towards their own GHG reduction goals.
https://github.com/hyperledger-labs/blockchain-carbon-accounting/blob/main/utility-emissions-channel/README.md
This project implements the Utility Emissions Channel Hyperledger Fabric network in a docker-compose setup and provides a REST API for interacting with the blockchain. To see how it works, check out this video.
To calculate emissions, we need data on the emissions from electricity usage. We're currently using the U.S. Environmental Protection Agency's eGRID data, the U.S. Energy Information Administration's Utility Identifiers, and the European Environment Agency's Renewable Energy Share and CO2 Emissions Intensity data. The Node.js script egrid-data-loader.js in utility-emissions-channel/docker-compose-setup/ imports this data into the Fabric network.
https://github.com/hyperledger-labs/blockchain-carbon-accounting/blob/main/net-emissions-token-network/README.md
The (net) emissions tokens network is a blockchain network for recording and trading the emissions from different channels, such as the utility emissions channel, plus offsetting Renewable Energy Certificates and carbon offsets. Each token represents either an emissions debt, which you incur through activities that emit greenhouse gases, or an emissions credit, which offsets the debt by removing emissions from the atmosphere.
Read more on the Hyperledger Emissions Tokens Network Wiki page.
To see a demo of how it works, check out this video.
See the documentation for more information and instructions.
https://labs.hyperledger.org/labs/convector-framework.html
Convector is a Model/Controller full-stack JavaScript framework designed to improve and speed up the development of clean, scalable, and robust smart contract systems. The developer focuses on the EDApps (Enterprise Decentralized Applications) and contractual relationships of participants rather than on lower-level blockchain details.
It currently supports Hyperledger Fabric and provides tools to build full-stack TypeScript smart contract systems made up of native JavaScript chaincodes, backend layers (Node.js), and front-end modules (such as AngularJS and React).
Rather than creating new models for chaincode development, it improves the existing development lifecycle on top of Fabric's models, Node.js backends, and front-end libraries and frameworks by abstracting logic into Models and Controllers, as well as providing useful developer tools such as local development blockchain network creation and testing frameworks. The framework also comes with pre-built storage and adapter layers to support the basic flow of communication from front end to back end to blockchain, as well as CouchDB querying.
Its modular approach aims to make Convector a cross-blockchain framework, making it possible to plug in third-party and custom data layers (blockchain, HTTP libraries, etc.) and adapters (Fabric's SDK, CouchDB drivers).
https://github.com/worldsibu/convector
https://medium.com/@rsripathi781/docker-cheat-sheet-for-hyperledger-fabric-128e89f2f36b
1. List all containers (running and stopped)
docker ps -a
2. Check logs of containers — you might need to check the logs of a peer or orderer when invoking/querying chaincodes, joining peers to channels, etc.
docker logs containerid
3. Get into a docker container — you may need to go into a container to explore volumes you might have mounted during container creation. You can get hold of blocks being routed by the orderer or explore the ledger stored in the peer.
docker exec -it containerid bash
4. Get into the Fabric CLI — if you have defined a CLI in docker-compose, this command can take you to the specified peer.
docker exec -it cli bash
5. Restart Container
docker restart containerid
6. Run all services defined in docker-compose
docker-compose -f yourdockercompose.yaml up -d
7. Run specific service defined in docker-compose
docker-compose -f yourdockercompose.yaml up -d servicename
8. Tear down container volumes
docker-compose -f yourdockercompose.yaml down --volumes
9. Force remove all running containers
docker rm -f $(docker ps -aq)
10. Remove all images which were utilized by containers
docker rmi -f $(docker images -q )
11. Remove images which were utilized by containers, filtered by name
docker rmi -f $(docker images "dev-*" -q)
The "dev-*" filter removes all images whose repository name starts with dev (the quotes stop the shell from expanding the glob before docker sees it).