Week 1

  •  Exploration. Set up a Hyperledger Fabric network and connect Hyperledger Explorer to Fabric. Use Filebeat to ship data to Elasticsearch for visualization in Kibana.

Week 2

  •  Exploration. Extend the Fabric network (Fabric CA, binary-data and JSON chaincode). Dump ledger data from Hyperledger Explorer and visualize it in Kibana.

Week 3

  •  Write a Beats agent, with its configuration, that sends data to Elasticsearch.
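A minimal sketch of the shipping step such an agent could perform, using only the Go standard library. The `BlockEvent` shape, index name, and `esURL` are illustrative assumptions, not part of the plan, and a real agent would build on the Beats framework rather than raw `net/http`; the Elasticsearch endpoint used here is the standard document-index API (`POST <es>/<index>/_doc`).

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// BlockEvent is a hypothetical event shape the agent could emit
// for each Fabric block it observes.
type BlockEvent struct {
	Channel     string `json:"channel"`
	BlockNumber uint64 `json:"block_number"`
	TxCount     int    `json:"tx_count"`
	DataHash    string `json:"data_hash"`
}

// encodeEvent renders the event as the JSON body of an
// Elasticsearch index request.
func encodeEvent(e BlockEvent) ([]byte, error) {
	return json.Marshal(e)
}

// shipEvent posts one event to Elasticsearch; esURL (e.g.
// "http://localhost:9200") is an assumed deployment detail.
func shipEvent(esURL, index string, e BlockEvent) error {
	body, err := encodeEvent(e)
	if err != nil {
		return err
	}
	resp, err := http.Post(esURL+"/"+index+"/_doc", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("elasticsearch returned %s", resp.Status)
	}
	return nil
}

func main() {
	e := BlockEvent{Channel: "mychannel", BlockNumber: 7, TxCount: 2, DataHash: "ab12"}
	body, _ := encodeEvent(e)
	// prints {"channel":"mychannel","block_number":7,"tx_count":2,"data_hash":"ab12"}
	fmt.Println(string(body))
}
```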

Week 4

  •  Create operational dashboards similar to Hyperledger Explorer. Create data query dashboards.

Week 5

  •  Refine the data flow: send every block and all transaction data to Elasticsearch. Make keys indexable.
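One way to make keys indexable is to flatten each transaction's write-set into an explicit list of key/value objects, so Elasticsearch can index the `key` field directly. The `TxDoc` and `KeyWrite` shapes below are assumptions for illustration, not the project's actual schema.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// KeyWrite is a hypothetical record for one key modified by a transaction.
type KeyWrite struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

// TxDoc is the per-transaction document sent to Elasticsearch; keeping
// writes as a slice of objects makes the "key" field queryable.
type TxDoc struct {
	Channel string     `json:"channel"`
	TxID    string     `json:"tx_id"`
	Writes  []KeyWrite `json:"writes"`
}

// flattenWrites turns a write-set map into the indexable slice form,
// sorted by key for deterministic output (Go map order is random).
func flattenWrites(ws map[string]string) []KeyWrite {
	keys := make([]string, 0, len(ws))
	for k := range ws {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	out := make([]KeyWrite, 0, len(keys))
	for _, k := range keys {
		out = append(out, KeyWrite{Key: k, Value: ws[k]})
	}
	return out
}

func main() {
	doc := TxDoc{
		Channel: "mychannel",
		TxID:    "tx1",
		Writes:  flattenWrites(map[string]string{"asset1": "red", "asset2": "blue"}),
	}
	b, _ := json.Marshal(doc)
	fmt.Println(string(b))
}
```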

Week 6

  •  Refactor the code; prepare the system to receive data from multiple peers in separate or shared indices. Add peer-selection functionality. Record which user issued each query, both in the Beats agent and in the dashboard. Modify the application so that it: 1) writes only the most recently added keys to the ledger; 2) adds a previous-key field to the data schema, which can be stored alongside the key value in addition to the hash.

Week 7

  •  Test the agent with multiple peers and multiple channels. Each channel may have zero or more chaincodes, each with its own data schema. It should be possible to specify a per-channel chaincode data schema in the Beats agent.
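The per-channel chaincode schema requirement could be modeled as a two-level configuration map, channel → chaincode → schema. The `Schema` type and the example channel/chaincode names are illustrative assumptions.

```go
package main

import "fmt"

// Schema names the fields the agent expects in a chaincode's JSON
// values; the concrete shape is an assumption for illustration.
type Schema struct {
	Fields []string
}

// Config maps channel -> chaincode -> schema, so each channel can
// carry zero or more chaincodes with their own data schemas.
type Config map[string]map[string]Schema

// lookup returns the schema configured for a channel/chaincode pair,
// reporting whether one exists.
func (c Config) lookup(channel, chaincode string) (Schema, bool) {
	ccs, ok := c[channel]
	if !ok {
		return Schema{}, false
	}
	s, ok := ccs[chaincode]
	return s, ok
}

func main() {
	cfg := Config{
		"supplychain": {"shipment": {Fields: []string{"id", "owner", "status"}}},
	}
	if s, ok := cfg.lookup("supplychain", "shipment"); ok {
		// prints [id owner status]
		fmt.Println(s.Fields)
	}
}
```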

Week 8

  •  Create example Hyperledger Fabric network setups and dashboards for different topics and use cases (supply chain, medicine provenance, etc.).

Week 9

  •  Refine the examples and prepare for submission as a Hyperledger Lab. Evaluate how to read data directly from the ledger file instead of using peer APIs.

Week 10

  •  Submit the project as a Hyperledger Lab.

Week 11

  •  Create a program that dumps data to a custom output (the default implementation is JSON, but it can be implemented for any database) for exploring analysis possibilities beyond Elasticsearch.
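The pluggable-output idea maps naturally onto a small Go interface with the JSON writer as the default implementation; database-specific implementations would satisfy the same interface. The names below (`Output`, `JSONOutput`) are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Output abstracts where dumped ledger records go; implementations
// for specific databases can replace the JSON default.
type Output interface {
	Write(record map[string]any) (string, error)
}

// JSONOutput is the default implementation: it serializes each record
// to a JSON string (a real version would write to a file or stdout).
type JSONOutput struct{}

func (JSONOutput) Write(record map[string]any) (string, error) {
	b, err := json.Marshal(record)
	return string(b), err
}

func main() {
	var out Output = JSONOutput{}
	s, _ := out.Write(map[string]any{"key": "asset1"})
	// prints {"key":"asset1"}
	fmt.Println(s)
}
```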

Week 12

  •  Refine the documentation; evaluate how to replace the ledger file with a custom database (CouchDB, MongoDB).