...

To handle a maximum-size proposal payload approaching 50 MB (which would result in a 100 MB transaction, including the payload read-write set in the response, plus typically 4 to 10 KB for identity/signature certs on the block) in our k8s cluster network tests, the only increase required to the default configuration settings is for the AbsoluteMaxBytes. If you are using fast disks, then set the WAL SnapshotIntervalSize (required for Raft orderers only) to be big enough for multiple blocks, to avoid individual snapshot writes for every block. Of course, be sure to allocate adequate pod memory (GB of RAM) on the k8s hosts. Below are the relevant config settings for a test in a k8s cluster, as used in a sample network specification file with a launcher tool in automation tests (an illustrative configtx.yaml sketch follows the list):

Increase orderer.batchsize.absolutemaxbytes: increase up to 100 MB (Refer to gRPC for the exact payload size limitation.)
Increase orderer.batchsize.preferredmaxbytes: increase to 10 MB (optional)
Increase orderer.etcdraft_options.SnapshotIntervalSize: increase up to 100 MB (optional)
Increase the pod memory to a minimum of 4Gi for the peers, cc containers, and kafkas
Increase the pod memory to a minimum of 2Gi for the orderers and couchdb
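For illustration only, here is a minimal sketch of how these settings might map to a standard Fabric configtx.yaml; the exact keys in a launcher tool's network specification file may differ, and every value below other than AbsoluteMaxBytes, PreferredMaxBytes, and SnapshotIntervalSize is an assumed sample default rather than a recommendation from this page:

    # configtx.yaml (Orderer section) - illustrative sketch only
    Orderer: &OrdererDefaults
      OrdererType: etcdraft
      BatchTimeout: 2s
      BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 100 MB      # hard cap per block; large enough for a ~50 MB payload plus its read-write set
        PreferredMaxBytes: 10 MB      # optional: cut blocks early for normal-sized traffic
      EtcdRaft:
        Options:
          TickInterval: 500ms
          ElectionTick: 10
          HeartbeatTick: 1
          MaxInflightBlocks: 5
          SnapshotIntervalSize: 100 MB  # optional, Raft only: snapshot every ~100 MB instead of every block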

Note: to process multiple huge messages, it is recommended to allocate more memory (GB) for the pods, to avoid the peers or orderers panicking due to resource limitations.
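As a sketch of how the memory minimums above could be expressed in the k8s container specs (setting requests equal to limits here is an illustrative choice, not a requirement from this page):

    # Peer, chaincode, and kafka containers: at least 4Gi
    resources:
      requests:
        memory: "4Gi"
      limits:
        memory: "4Gi"

    # Orderer and couchdb containers: at least 2Gi
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"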

Note: in Raft networks, the orderers update the write-ahead log (WAL) periodically. If disk-write speeds are too slow (e.g., slower than 10 IOPS/GB), then you may experience seconds of delay when processing huge messages, and possibly even dropped transactions if a leadership change is triggered by the slow WAL disk write (which may block the Raft channel leader from sending heartbeats to the other orderers).
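One way to reduce WAL write latency (an illustration, not a requirement stated here) is to place the orderer's Raft WAL and snapshot directories on a volume backed by faster disks. A minimal sketch, assuming the default orderer.yaml paths and a hypothetical fast-ssd storage class:

    # orderer.yaml: default etcdraft WAL and snapshot locations
    Consensus:
      WALDir: /var/hyperledger/production/orderer/etcdraft/wal
      SnapDir: /var/hyperledger/production/orderer/etcdraft/snapshot

    # Hypothetical PVC backing those paths with a faster storage class
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: orderer-wal
    spec:
      storageClassName: fast-ssd        # assumed name; use whatever fast-disk class the cluster provides
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi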

...