Background

The whitepaper gave some information about Triggers and initial requirements for this functionality, but many open questions came into play once implementation details were discussed.

The design of Iroha Triggers mainly takes inspiration from:

  1. Database triggers - Oracle example
  2. Smart contracts in other blockchains
  3. Event driven architecture

Problem

Currently we can submit transactions that modify global state if they pass validation. These transactions can contain `Query`s inside of them, giving them a limited capability to execute logic on-chain.
Transactions can include:

  1. Iroha Special Instructions
  2. (Unclear if needed and not yet implemented) WASM blobs

Regardless of their content type, they have limited capability, as they can only execute the contained logic once.


However, as established in the blockchain space, current use cases require more sophisticated functionality to efficiently support DeFi:

Use cases and their needs*:

  • Swaps: Triggered by user; does several additional state validations; manages liquidity.
  • Periodic Reward Distribution: Executed periodically; distributes to all users satisfying a certain condition.
  • Customizable and modifiable transaction fees: Executed for each transaction; has the ability to disallow transaction execution.
  • Stable coin contracts: Executed periodically or triggered by user; possibly checks external conditions provided by oracle contracts.
  • Oracle smart contracts: Triggered by user; aggregate data from the external world; provide data to other contracts.

*- Algorithms for all of these use cases need to be on-chain for the transparency and safety guarantees provided by consensus.


Other use cases for permissioned blockchains might include (these use cases are proposed to be handled outside of the context of triggers, see `Outside of Triggers Scope` section):

Use cases, their needs, and comments:

  • Transaction Metadata Validation: Executed for each transaction; has the ability to disallow transaction execution.
  • Transaction `Set` Instructions Validation for Account, Domain and Asset Definition metadata: Executed for each transaction; has the ability to disallow transaction execution; has access to transaction ISI contents.
  • Permission Management: Executed for each transaction; has the ability to disallow transaction execution; has access to transaction ISI contents. Comment: we can let all permissions be customizable and upgradeable at runtime if we use Triggers.


Solution

To execute arbitrary logic on-chain an unrestricted number of times, other popular blockchains have Smart Contracts. Our solution for introducing smart contract functionality is called Triggers.

This is how triggers are described in the whitepaper:

Triggers are Iroha Special Instructions that may be invoked based on condition. This is very powerful feature and enables Turing complete computation for Iroha's Special Instructions.

  • Trigger at timestamp
  • Trigger at blockchain
  • Trigger at condition

Events

Before discussing Triggers in detail it is important to consider Events, which might play a big role in Trigger execution.

Events can originate from 2 places:

  1. Triggers can decide to emit events
  2. Events will be automatically emitted for WSV changes
    1. The events are based on WSV diff after particular transaction execution.
    2. Examples: AccountCreated(_), AccountMetadataFieldSet(_), AccountBalanceChanged(_), etc.

Events emitted during execution of entities in the current block are added to the block. However, Triggers can react to them only in the next block.
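The WSV-diff events above can be sketched as a small Rust model. The event names and the diff function here are illustrative assumptions for this document, not the actual Iroha data model:

```rust
// Hypothetical sketch of WSV change events; names are illustrative.
#[derive(Debug, Clone, PartialEq)]
enum WsvEvent {
    AccountCreated(String),
    AccountMetadataFieldSet { account: String, key: String },
    AccountBalanceChanged { account: String, delta: i64 },
}

// Derive events from a before/after balance diff for one account,
// mirroring how events would be computed from the WSV diff after
// a particular transaction's execution.
fn diff_balance(account: &str, before: i64, after: i64) -> Vec<WsvEvent> {
    if before == after {
        Vec::new()
    } else {
        vec![WsvEvent::AccountBalanceChanged {
            account: account.to_string(),
            delta: after - before,
        }]
    }
}

fn main() {
    // A 115-coin deposit produces exactly one balance-change event.
    let events = diff_balance("alice@wonderland", 100, 215);
    assert_eq!(events.len(), 1);
    // No diff, no event.
    assert!(diff_balance("bob@wonderland", 5, 5).is_empty());
    println!("{:?}", events);
}
```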

Representation

As you can see in the quote from the whitepaper, Triggers were initially planned to be represented as ISI.
But with the recent decision to introduce WASM, it is suggested to use it for Triggers, so that Triggers can be written in Rust* and compiled into WASM.
This way Triggers will benefit from the high-level features of an already established language and will be able to solve all the listed use cases with ease.
As discussed previously, solving the listed use cases with ISI was impossible or very difficult and required significant work on the language design side.

*- Generally speaking, any language that compiles to WASM could be used. But since Rust is our primary development language, it is suggested to focus on it.
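To make this concrete, a trigger written in Rust might look roughly like the sketch below. The `TriggerContext` type and `trigger_main` entrypoint are hypothetical, since the actual Iroha WASM ABI is not defined in this document; a real build would export a `#[no_mangle] extern "C"` entrypoint for the wasm32 target:

```rust
// Illustrative sketch only: the real Iroha WASM trigger ABI is not
// specified here. A trigger is an ordinary Rust function compiled to
// WASM; the host calls a well-known entrypoint with the trigger context.

#[derive(Debug)]
struct TriggerContext {
    block_height: u64,
    caller: Option<String>, // set when invoked via ExecuteTrigger
}

// Hypothetical entrypoint the runtime would invoke.
fn trigger_main(ctx: &TriggerContext) -> Result<(), String> {
    if ctx.block_height % 100 == 0 {
        // e.g. emit reward-distribution instructions here
        Ok(())
    } else {
        Err("not a reward block".to_string())
    }
}

fn main() {
    let ctx = TriggerContext { block_height: 200, caller: None };
    assert!(trigger_main(&ctx).is_ok());
    let ctx2 = TriggerContext { block_height: 201, caller: None };
    assert!(trigger_main(&ctx2).is_err());
}
```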

Execution

1 - Leader creates a block out of transactions that it got from the queue

2 - Leader adds trigger execution recommendations to the created block

The leader should also add events emitted by transaction and trigger execution to the block.

3 - Validators receive the block and check if they agree with the leader's trigger execution recommendations

  1. Yes - if other checks also pass then validator votes for the block

  2. No - validator does not vote for the block

4 - If block got enough votes - all peers execute block transactions and triggers in their order at block commit.

Triggers may influence other transactions only in the sense that they can be executed before them; they do not directly modify other transactions.
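The recommendation check in step 3 can be sketched as follows. All types here (`Recommendation`, `Outcome`) are hypothetical illustrations of the scheme, not Iroha APIs:

```rust
// Sketch: the leader ships a list of (trigger id, outcome) pairs in the
// block; each validator recomputes the list locally and votes only if
// both lists match exactly.

#[derive(Debug, Clone, PartialEq)]
enum Outcome {
    Executed,
    // A failed trigger is still recommended, with its error code, so that
    // peers need not retry it at commit (see Trigger Failure Case).
    Failed { error_code: u32 },
}

#[derive(Debug, Clone, PartialEq)]
struct Recommendation {
    trigger_id: String,
    outcome: Outcome,
}

fn validator_agrees(leader: &[Recommendation], local: &[Recommendation]) -> bool {
    // Order matters: triggers are executed in the recommended order at commit.
    leader == local
}

fn main() {
    let leader = vec![Recommendation {
        trigger_id: "reward_distribution".into(),
        outcome: Outcome::Executed,
    }];
    assert!(validator_agrees(&leader, &leader.clone()));

    let divergent = vec![Recommendation {
        trigger_id: "reward_distribution".into(),
        outcome: Outcome::Failed { error_code: 1 },
    }];
    // A mismatch means the validator does not vote for the block.
    assert!(!validator_agrees(&leader, &divergent));
}
```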

Trigger Failure Case

If at Stage 2 the leader notices a Trigger failure, it should indicate in the execution recommendation that this Trigger failed, along with the Trigger's error code.

Validators will then check that they also have this trigger failing.

On block commit, peers will not need to try to execute this trigger, as it was proven to fail.

Place in the WSV hierarchy

Corresponding to their meaning, Triggers can be stored in 3 ways:

  • Top Level Triggers - in the World entity
    • They can operate on any data in WSV
  • Domain Level Triggers - in the corresponding Domain
    • Their access is restricted only to their Domain
  • Asset Definition Level Triggers
    • Their access is restricted only to assets of this definition

The restricted-scope options might be interesting for permissioned blockchains.
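The three storage levels above imply an access-scope check. A minimal sketch, with hypothetical names, might look like this:

```rust
// Hypothetical scope model for where a trigger lives in the WSV hierarchy
// and what it may touch; names are illustrative, not Iroha APIs.

#[derive(Debug, Clone, PartialEq)]
enum TriggerScope {
    World,                   // Top Level: any data in the WSV
    Domain(String),          // restricted to one domain
    AssetDefinition(String), // restricted to assets of one definition
}

// Can a trigger with `scope` modify arbitrary data in `domain`?
fn may_access_domain(scope: &TriggerScope, domain: &str) -> bool {
    match scope {
        TriggerScope::World => true,
        TriggerScope::Domain(d) => d == domain,
        // Asset-definition triggers are narrower than a whole domain.
        TriggerScope::AssetDefinition(_) => false,
    }
}

fn main() {
    assert!(may_access_domain(&TriggerScope::World, "wonderland"));
    assert!(may_access_domain(
        &TriggerScope::Domain("wonderland".into()),
        "wonderland"
    ));
    assert!(!may_access_domain(
        &TriggerScope::Domain("wonderland".into()),
        "looking_glass"
    ));
}
```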

Categories

By purpose:

  • System Level - Influence the whole blockchain system rules and features
    • Do not pay fees for execution*
    • Permissions are not checked for them (can freely modify WSV state)
    • In comparison with Substrate based chains: Similar to runtime in Substrate
  • User Level - Provide services to users and other apps
    • Pay fees for execution*
    • Act on behalf of their technical accounts and their permissions
    • In comparison with Substrate based chains: Similar to smart contracts in Substrate

*- for more on this see Execution Limits

The registration process is also different for the two levels.

By wake up condition:

  • Time-based - (System Level and User Level )
    • At timestamp
    • Each `duration`
    • For complex use cases Trigger might register itself again to execute at another timestamp
    • We should limit the granularity of possible timestamps, as the network has a limit on how fast it can produce and commit blocks
    • Time-based Triggers seem to be a difficult task for synchronization between peers. The following approach is suggested:
      • Recommendations to execute them should be added by the leader to the block (even if there are no transactions, the leader should generate a block with trigger execution recommendations for them)

      • They are best effort: Iroha does not guarantee execution at the exact time, but it guarantees execution at the closest possible time if the network is operational.

      • If a leader does not produce a block with the appropriate time-based triggers within the defined time period after those triggers' selected timestamps, a view change happens, similar to the case where the leader does not produce a block for ordinary transactions

      • The next leader will then need to include the previous time-based triggers in a block (when a view change happens it will be given a new time window to do this)

    • Example: Periodic rewards, scheduled batch processing
  • Block number-based  - (System Level and User Level )
    • At block
    • Each `n_blocks`
    • For complex use cases Trigger might register itself again to execute at another height
    • Might not be needed as there are already time based triggers, and block time is not fixed in Iroha v2.
    • Example: Per block rewards, scheduled batch processing
  • Triggers that are triggered by specific ISI call - ExecuteTrigger(Params) - (System Level and User Level )
    • Similar to smart contracts as they are in Ethereum blockchain
    • Execute as part of the transaction instructions execution
    • For System Level - only admin can call the Trigger or through democracy pallet
    • Example: Swap with liquidity pool, locking funds (for bridges or stable coins for example)
  • Event-based - (System Level and User Level )
    • Triggered by events emitted from other triggers or transaction execution.
    • The triggers react to the events emitted in the previous block, as shown in the `Execution` section.
    • If the leader does not supply the needed trigger execution recommendations based on events from the previous block, the block will be rejected and the next leader will have to do this.
    • The leader has to produce a block with event-based triggers in time, even if it contains no transactions. Otherwise a view change will happen and the scenario proceeds similarly to time-based triggers.
    • Example: Reacting to currency flow, reacting to system events (events from system level triggers), keeping metadata in sync
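The wake-up conditions above can be sketched as a Rust enum, together with the per-block check the leader could run to collect due triggers. All names and the windowing logic are illustrative assumptions:

```rust
// Sketch of the wake-up conditions; hypothetical names, not Iroha APIs.
enum WakeUp {
    AtTimestamp(u64),                 // fires once, best effort
    EveryDuration { period_ms: u64 }, // periodic, time-based
    AtBlock(u64),
    EveryNBlocks(u64),
    OnExecuteTrigger,                 // fired by the ExecuteTrigger ISI
    OnEvent,                          // reacts to events from the previous block
}

// Is this trigger due in the block whose time window is
// (prev_time_ms, block_time_ms]?
fn is_due(cond: &WakeUp, block_height: u64, block_time_ms: u64, prev_time_ms: u64) -> bool {
    match cond {
        // Best effort: due if the timestamp fell inside this block's window.
        WakeUp::AtTimestamp(t) => prev_time_ms < *t && *t <= block_time_ms,
        WakeUp::EveryDuration { period_ms } => {
            block_time_ms / period_ms > prev_time_ms / period_ms
        }
        WakeUp::AtBlock(h) => block_height == *h,
        WakeUp::EveryNBlocks(n) => block_height % n == 0,
        // These are driven by ISI calls / events, not by the schedule.
        WakeUp::OnExecuteTrigger | WakeUp::OnEvent => false,
    }
}

fn main() {
    // Timestamp 1000 ms falls inside the window (900, 1500], so it is due.
    assert!(is_due(&WakeUp::AtTimestamp(1_000), 7, 1_500, 900));
    // Already past in the previous block: not due again.
    assert!(!is_due(&WakeUp::AtTimestamp(1_000), 8, 2_000, 1_500));
    assert!(is_due(&WakeUp::EveryNBlocks(10), 20, 0, 0));
}
```

The windowed `AtTimestamp` check captures the best-effort guarantee: the trigger runs in the first block whose time window covers its timestamp, not at the exact instant.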

Persistent State

Complex Triggers (such as Swap triggers managing Liquidity Pools) might need to store data between their invocations. For this purpose Triggers will have the following persistent storage:

  • Key value Store (Similar to Account, Asset Definition, Domain and Tx metadata)
  • On-demand technical accounts where they might keep transferred funds.
    • While technical accounts can of course be emulated with the key-value store, this would complicate usage of Transfer, Mint and other currency-related features from Triggers. It would also loosen the stricter definition of Currency and therefore be more error-prone. Therefore it is suggested to keep the technical accounts option.
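A minimal sketch of this per-trigger storage, mirroring the metadata key-value stores Iroha already has for accounts and domains, might look like the following. The API is a hypothetical illustration:

```rust
use std::collections::HashMap;

// Hypothetical per-trigger persistent state: a key-value store plus an
// on-demand technical account balance for transferred funds.
struct TriggerStorage {
    kv: HashMap<String, String>,
    technical_account_balance: u64,
}

impl TriggerStorage {
    fn new() -> Self {
        Self { kv: HashMap::new(), technical_account_balance: 0 }
    }
    fn set(&mut self, key: &str, value: &str) {
        self.kv.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<&String> {
        self.kv.get(key)
    }
}

fn main() {
    let mut store = TriggerStorage::new();
    // e.g. a swap trigger persisting its pool reserves between invocations
    store.set("pool_reserve_xor", "1000000");
    assert_eq!(
        store.get("pool_reserve_xor").map(String::as_str),
        Some("1000000")
    );
    assert!(store.get("missing").is_none());
    // Funds transferred to the trigger land on its technical account.
    store.technical_account_balance += 500;
    assert_eq!(store.technical_account_balance, 500);
}
```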

Registration

A Trigger is registered by the Register<Trigger> instruction, submitted inside it as a WASM blob.
We have to consider adding appropriate permissions for this instruction.

For private blockchains we might start with a simple permission: only users with the corresponding Permission Token can register Triggers.

For public blockchain we need to consider two distinct cases for Trigger registration based on their purpose:

  1. System Level Triggers - should be registered with democracy voting module*
  2. User Level Triggers - users can freely register their own triggers if they can pay trigger registration fee (which might depend on WASM blob size)

*- democracy voting module is assumed to be similar to the one used in Substrate based chains for runtime upgrades.

For public blockchain use cases we can also introduce parachain-like behavior, where Triggers are only registered for a fixed amount of time, after which they will need to be registered again.
This will prevent clogging the blockchain with "dead" triggers, i.e. triggers that no longer have users.
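The time-limited registration idea can be sketched as follows; the field names and the size-based fee formula are illustrative assumptions, not a concrete proposal:

```rust
// Sketch of parachain-like, time-limited trigger registration: a trigger
// is registered up to an expiry height and must be re-registered after.

struct RegisteredTrigger {
    wasm_len: usize,       // size of the submitted WASM blob
    expires_at_block: u64, // registration lease end
}

// Hypothetical registration fee that grows with the WASM blob size.
fn registration_fee(wasm_len: usize, fee_per_byte: u64) -> u64 {
    wasm_len as u64 * fee_per_byte
}

fn is_alive(t: &RegisteredTrigger, current_block: u64) -> bool {
    current_block < t.expires_at_block
}

fn main() {
    let t = RegisteredTrigger { wasm_len: 4096, expires_at_block: 10_000 };
    assert_eq!(registration_fee(t.wasm_len, 2), 8192);
    assert!(is_alive(&t, 9_999));
    // Past the expiry height the trigger is "dead" and can be pruned.
    assert!(!is_alive(&t, 10_000));
}
```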

Communication with other Triggers

Generally Triggers can communicate with other Triggers by changing the WSV state and also by calling other triggers through ExecuteTrigger ISI.

ExecuteTrigger ISI might be designed to return some value, therefore introducing a class of `Library Triggers` that are used by other Triggers for various features.

Triggers can also communicate by defining and emitting events at their execution.
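The `Library Triggers` idea, where ExecuteTrigger returns a value to its caller, can be sketched like this; the registry, the function-pointer representation, and the `sum` trigger are all hypothetical illustrations:

```rust
use std::collections::HashMap;

// Sketch: a registry of library triggers that other triggers can call
// through an ExecuteTrigger-like mechanism and get a return value back.
type LibraryTrigger = fn(&[u64]) -> u64;

// A library trigger other triggers might call, e.g. an aggregation helper.
fn sum_trigger(args: &[u64]) -> u64 {
    args.iter().sum()
}

fn execute_trigger(
    registry: &HashMap<&str, LibraryTrigger>,
    id: &str,
    args: &[u64],
) -> Option<u64> {
    registry.get(id).map(|f| f(args))
}

fn main() {
    let mut registry: HashMap<&str, LibraryTrigger> = HashMap::new();
    registry.insert("sum", sum_trigger);

    assert_eq!(execute_trigger(&registry, "sum", &[1, 2, 3]), Some(6));
    // Calling an unknown trigger yields no value.
    assert_eq!(execute_trigger(&registry, "missing", &[]), None);
}
```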

Execution Limits

Trigger execution uses validators' processing resources and therefore has to be limited for the network to stay operational.
Limits would mainly apply to User Level triggers as they are registered freely and are not part of the system itself.

The following is proposed:

  1. Limit max number of WASM instructions that can be executed per Trigger
  2. Take network fees for Trigger execution:
    1. In general case fees can be taken from the Trigger's technical account balance.
    2. Fees can be taken for each WASM instruction that is executed - the concept is known as `fuel` in WASM runtimes.
    3. If technical account does not have enough funds - changes made by Trigger are reverted.
    4. For Triggers called by ExecuteTrigger(_) we might consider the same approach that is used in Ethereum - take funds from the account that called it.

There is also the question of how to limit `condition` execution if the condition is also in WASM. Possibly fees will also be taken for condition checks, but in a lesser amount than for the actual execution.
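The `fuel` concept can be modeled in a few lines of pure Rust. This is a sketch of the idea only; real WASM runtimes such as wasmtime provide built-in fuel accounting, and the balance-to-fuel conversion here is an assumption:

```rust
// Sketch of fuel metering: each executed WASM instruction burns fuel
// bought with the trigger's technical account balance; running out traps
// and the trigger's changes are reverted.

struct FuelMeter {
    remaining: u64,
}

#[derive(Debug, PartialEq)]
enum TrapReason {
    OutOfFuel,
}

impl FuelMeter {
    // Convert the technical account balance into a fuel budget.
    fn new(balance: u64, fuel_per_unit: u64) -> Self {
        Self { remaining: balance * fuel_per_unit }
    }

    // Called once per executed WASM instruction.
    fn consume(&mut self, cost: u64) -> Result<(), TrapReason> {
        if self.remaining < cost {
            // Out of fuel: execution traps, changes are reverted.
            Err(TrapReason::OutOfFuel)
        } else {
            self.remaining -= cost;
            Ok(())
        }
    }
}

fn main() {
    let mut meter = FuelMeter::new(3, 10); // 3 currency units, 10 fuel each
    for _ in 0..30 {
        assert!(meter.consume(1).is_ok());
    }
    // The 31st instruction exceeds the budget and traps.
    assert_eq!(meter.consume(1), Err(TrapReason::OutOfFuel));
}
```

Cheaper condition checks could reuse the same meter with a smaller per-instruction cost.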

Misc

It might be good to provide Triggers with a way to `Unregister` themselves. Especially for public blockchain use cases.

Outside of Triggers Scope

The following topics are proposed not to be handled with Triggers but instead considered separately. Both of the topics need a separate RFC, but they are mentioned here in the context of their relation to Triggers.

Initially we wanted to handle both of these as part of Transaction-Based Triggers (After/Before Transaction); this proposal effectively removes that type of Trigger from the design.

Instruction and Query Permissions

We considered moving our permission checks into Triggers (so they could be upgraded at runtime), but this does not seem like a good idea for the following reason.

If Triggers are used to check instructions' permissions during transaction execution, then how should the Triggers' own permissions be checked?
If it is decided to:

  1. Handle transaction permission checks and trigger permission checks separately - this will create inconsistency.
  2. Use triggers to check execution of other triggers - this introduces a lot of complexity into the design.

Instead it is proposed to have Permissions as a WASM plugin which is not a Trigger, but a separate part of the system. This will allow us to upgrade them at runtime, without the complexity that Triggers might bring.

It is possible to either migrate the current permission system to WASM as-is and provide instructions to update it, or to work on diff-based permissioning, which would check WSV diffs after instruction/transaction/trigger execution.

Network Fees

We also considered using Triggers for handling network fees (transaction and trigger fees), but again there is the following issue.

If transaction-based Triggers are used to calculate transaction fees, how should we calculate Trigger execution fees?
Here again either consistency or complexity problems might arise.

Instead it is proposed to either have a highly configurable system where fee configuration can be done at runtime, or again do it through a separate WASM plugin (if the fee handling logic must be changeable at runtime).

Additional Information

Comments

  1. Nikita Puzankov Can you give an example of an impure Trigger? And what requirements we have for it?

    I understand that the terminology is from FP. Just curious how it is seen in this context.

    1. Yes - I hope we do not have such requirements, but it's an Instruction that may use I/O in any form - make a call to a 3rd-party service or read something from outside the blockstore.

      Makoto Takemiya Salakhiev Kamil do we have these options on the radar, or can we assume that our Iroha Special Instructions will always be pure (except for dependence on system time)?

  2. Iurii Vinogradov comments from previous PR:

    The question is - should we execute them here or send a new transaction to the ledger?

    Such trigger executions should be a part of the parent transaction called by the account, to identify the signature and account of the callee. It is also possible to create nested transactions inside this original transaction, especially if a retry might happen or state updates independently for the original action and the trigger action. Ethereum uses nested transactions, but only the original transaction has a signature. Also, the transaction should be reverted if the trigger was not executed successfully, so the trigger should be part of the original transaction; otherwise the original transaction might be committed before the trigger result.

    Also, trigger execution should depend on the result of the original transaction, i.e. whether it failed or not.

    > Second one - how should we spread results of its execution between peers and synchronize I/O actions like notifications, etc.?

    One of the approaches is that we share only the original transaction between peers, but every peer adds triggers by itself.

    First question - how should the listener understand whether it was triggered or not? It has only the current state, not the delta of changes.

    One possible approach is that there should be some parameters of a transaction whose values can call a trigger. If the transaction has the values this trigger targets, then the trigger will be executed. For example, if a transaction has the command "transfer asset A to account B", then the trigger should be executed. The disadvantage of such an approach is that it requires additional checks for transactions, which may slow the system.

    The Smart Contract approach doesn't have this disadvantage.

    > Third one - how should we check that listener execution will not break the next transactions? Generally, it can break the next transactions; that is how it should be. If the order is always deterministic and we know that listener execution finishes before the next transaction, then it is the right behavior. A problem would arise if transactions and listeners were executed concurrently or in a nondeterministic order.

    Listeners should be executed after all of the block's transactions have executed.


    It is difficult to agree with this. Changes which the listener adds to the state on a business level are part of the original transaction. So it looks more appropriate that a trigger breaks the next transactions than that the next transactions break a trigger called by the previous transaction.

  3. Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:40]
    [In reply to Iurii Vin]
    Event listeners should be stored on each validator and after every block commit, they should execute

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:40]
    I wrote a diagram of this a few months back

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:42]
    [ Photo ]

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:42]
    when a leader creates a block, the leader should then go through all events and collect transactions that would be executed after this block is confirmed

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:43]
    For example, if you have a Event: if account A receives > 100 XOR, send 10 XOR to account B

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:43]
    Then the block tx is: 115 XOR to account A from account C

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:44]
    and the validators see the event that was previously registered and generate the tx, 10 XOR to account B from account A

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:44]
    and this is just executed along with the block

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:46]
    [In reply to Makoto Takemiya 武宮誠 (Sora.org - Soramitsu)]
    This I mean w.r.t. on-chain events

    Makoto Takemiya 武宮誠 (Sora.org - Soramitsu), [19.06.20 14:46]
    If you mean something related to client-side, then this is not related

  4. Egor Ivkov :

    For the ASAP Execution I see a potential problem with the OnTimestamp trigger that we have. Newly joined peers, when synchronizing, will already be in a different time period and will not execute this trigger.

  5. Comments from Andrei Lebedev about triggers:

    it looks like an opportunity for decoupling, as triggers can be a part of iroha2-core, and permissions as a part of iroha2-isi.

    https://sawtooth.hyperledger.org/docs/core/releases/latest/architecture/events_and_transactions_receipts.html#events

    We can look at this as an example of such decoupling.

  6. Andrei Lebedev - please provide details on Impurity.

    1. We can support impurity (in the sense of pure functions) in two ways:

      • If we include the state Merkle root in the block, then we will have agreement on whether the nodes have the same state. If nodes end up with different states, the block will be reverted.
      • If we want to avoid reverted blocks, the nodes can generate multisignature transactions with the data from an impure source; if a generated transaction gets enough signatures to commit, then the data will end up in the state.
  7. The need to check and validate conditions for trigger execution may cause performance degradation. There is another hybrid option mixing traditional smart contracts with our triggers approach. I call it passive triggers.

    It means that by default the trigger does not listen to transactions and does not check conditions. Instead, the Trigger should be notified that it should check its conditions and be executed. To implement this we need an ISI like noticeTrigger, or the ability to add a trigger_id to any instruction call (and something like trigger_id itself, to be an argument of this instruction). This gives complex or simple ISI the ability to add a trigger notification, or to call the trigger directly. After such a notification the trigger becomes alive and checks whether its conditions are satisfied. This gives us the ability not to check triggers on every transaction, which can slow down performance significantly when there are many triggers, while we still keep the advantages of our triggers design.

    Also, there is a specific use case for such a passive triggers approach. For example, it is very easy to implement a store using it. Because all transactions for the store are just payments, and different goods can have the same price, it is impossible to differentiate which good to send in exchange for the payment by any conditions with only the Transfer ISI. But if the UI can attach a direct trigger call to the transfer, it will be obvious what the user wants to buy, and the specific trigger can easily be executed.

    1. Iurii Vinogradov thank you, I think I will add this comment to Set of OOB ISIs because it's more about that than about triggers themselves.

      What about performance degradation - it should be the same: we will check the presence of this ISI inside transactions as with any other ISI-based trigger.

      1. The way you see it there will be no performance gains, obviously. But it is strange to stick to this design even when it has an obvious disadvantage. I am speaking about the possibility of making a trigger call part of a specific ISI's execution. There are ways to design it without breaking the current triggers model.

        1. What disadvantage? 

    2. Great idea. It is like taking out ineffective frequent checks and regrouping these conditions into an optimal scheme that is well optimized in conjunction with consensus and blocks.

  8. > How to execute them?

    > ...

    > 3. As a new transaction

    Here the question would be: who will sign this transaction? All transactions are signed by an account, so there are several options:

    1. Create an account for each trigger with proper permissions that owns this trigger
    2. Use the account that sent the CreateTrigger transaction

    But with both of these options there is a problem: every peer in the network will then need to know the private key of this account, as the peer may be the leader when the trigger is activated and may need to sign this new transaction. So this introduces a potentially big security flaw, which I am not sure how we can mitigate if we choose this option of trigger execution.

    1. Thanks Egor Ivkov - looks like this option is not as good as it seemed. I will remove it and point out the need to prevent cycles in trigger execution.

  9. The execution of triggers as `a set of instructions outside transaction` (2nd option) makes the most sense to me.


    I already mentioned the flaws of the 3rd option. As for the 1st option (as an additional set of instructions inside the transaction), it would be very bad UX to my mind to fail the transaction if some of the triggers fail, as triggers might be completely unrelated to the person who sent this transaction and to what he wants to do.

    Example

    1. A person transfers some amount of coins to some other person
    2. Some DApp developer has set a faulty trigger to execute on block height
    3. This transaction will fail, and all the following transactions will fail, as the faulty trigger will always prevent tx execution since it is part of the tx.
  10. > When to check Trigger execution conditions?

    To my mind triggers should be checked every block, when the block is committed; otherwise we might overcomplicate the design of triggers.

    > Where to store them?

    If we store them as assets, they can be safely moved into an external module and detached from the main Iroha. We already have this design for permissions, so we might as well take it for triggers for consistency.

    1. Unfortunately I don't see an ability to detach triggers from the Iroha app as a separate module, because their execution on the WSV is tightly coupled. Maybe the Data Model will help in the future.

    1. At first glance, transaction events look more like a candidate for a CloudEvents implementation.

  11. > Peer can be a place to store Iroha Triggers. We can register new Triggers by Register<Peer, Trigger> Iroha Special Instruction. 

    Right now the peer is just an entity attached to the WSV. We shouldn't actually need any peer id to register a trigger. So I guess the ISI should be just Register<Trigger>

    1. Yes; as with domain and asset definition registration, we do not actually have peer identification, just a virtual concept of the "current peer".

  12. How time will be synchronized between peers for time-dependent Triggers?

    1. Great question. I do not think we can have total time sync between peers, and we should define some non-functional requirements for this functionality.

      For triggers we assume that time on different peers is in sync with delays less than or equal to the block build time.

      I will add this to assumptions section, thank you.

    2. There is such a thing as a logical clock, where the emphasis is on checking the conditions of what has actually happened and what should follow, or a set of conditions that can then follow.

      For example, for a set of events this is called a partially ordered multiset, or POMSET. Between some elements the order conditions are fixed; others remain free and dependent on the situation.

      Then it comes down to proving what has already happened, not when it happened. The release of a block and the time of its release is something that has already been fixed in time and certified by consensus.

      https://en.wikipedia.org/wiki/Lamport_timestamp
      Conceptually, this logical clock can be thought of as a clock that only has meaning in relation to messages moving between processes. When a process receives a message, it re-synchronizes its logical clock with that sender. The above-mentioned vector clock is a generalization of the idea into the context of an arbitrary number of parallel, independent processes.

  13. Makoto Takemiya - question about Iroha "one-time" Triggers:

    • Use cases
    • Possible option - to unregister trigger after execution (If(account.asset1.qty>10), [TakeFee, UnregisterItself])
    • Evgeni Zhukov - provide use cases for additional parameter and GC

    Makoto Takemiya - schema of Trigger:

    struct Trigger {
      condition: Instruction,
      // execution is an Instruction, e.g. Mint<Account, Asset>,
      // Sequence<Vec<Instruction>>, If<Instruction, Instruction, Instruction>, ...
      execution: Instruction,
    }

    Makoto Takemiya - triggers on triggers, do we need to trigger on triggers?

    Iurii Vinogradov - we had a trigger that should transfer some amount of assets on the condition Mint<Account, Asset> (distribution of new assets), for example to accounts with some marker asset or to 1000 accounts (all users of Iroha, all participants of a liquidity pool, etc.)

    1. Let's consider the case where it is more efficient to block the subsequent execution of a trigger than to delete the trigger.

      For example, there is a logical trigger system that encodes the business logic of a situation. This system assumes mobile execution of events, so that the trigger is triggered once.

      Mobile execution means either a fresh event, the details of which are known, for example, only to one of the parties, or the dynamic choice of a case. In this case, it is effective to keep a blank of triggers that is triggered once, possibly per case.

      For example, adding an additional condition to the trigger's firing allows you to make logical dependencies, including one-shot execution or execution only after or before, including in combination with a single-shot, which can also be encoded in the condition.

      When the logical stub is no longer needed and triggers are no longer needed, they can be removed. And in the process of operation, the logic and the one-time execution can be verified as Boolean logic.

      It is also possible to use ready-made trigger templates as templates for cases, since creating or deleting a trigger and allocating unique places for them is not required.

      If we use an additional parameter for firing triggers as a condition to control before or after an event and check this order, then this can replace the triggering of triggers. That is, calling triggers from a trigger is replaced by logical control of the correct execution order via activation and deactivation of the ability to fire.

      This allows you to keep all triggers ready and immediately in advance, while knowing the possible cases of mobile execution, counting the number and completely not depending on any cycles to get the necessary options for the event scenario.

      1. Introduction of new trigger types, additional parameters like counters, and additional functionality like "fire" and "deactivate" brings additional complexity to our implementation.

        I will insist on a lightweight solution as long as it covers all use cases.

        1. I would suggest a trigger to have its own state - a key-value storage - and an execution context - data provided by the event that triggered it.

          1. It's good to have context as input parameter, but why do we need storage here?

            1. It is one of the possible solutions to keep track of how many times the trigger was executed, so that it can deregister itself when it has been called two times, for example. We can consider it as a smart contract.

              1. Thank you Andrei Lebedev,

                yes - one of the options to implement this tracking is to store it somewhere. Some questions arise when we talk about "n-times triggers", so I would appreciate it if you or the people you ask give us more use cases. I bet most of them can be solved without even needing to unregister the trigger.

              2. Maybe it is possible to set a boolean variable to False to make a "quick delete"; a complete delete is not necessary. For example, just quickly disable it and then clean it up later in the background without priority. A boolean variable can simply give an instant guarantee.

                1. I think implementation details can be left to the developer. Here we need to define use cases and requirements.

    2. Nobuko Yoshida's talk will present a summary of recent papers on multiparty session types for verifying distributed, parallel and concurrent programs, illustrating how theoretical work is grounded on collaborations with industry partners. This research led to the development of the protocol description language
      https://www.youtube.com/watch?v=HEg088cW528
      https://speakerdeck.com/paperswelove/multiparty-session-types-and-their-applications-to-concurrent-and-distributed-systems?slide=21

      The classic case is a bookstore. An illustration and presentation can be found here.
      The meaning is two stubs, for the buyer and the seller, and actions that are called according to the possible branching of events.
      The meaning of mobility is that there are many books and actions are not known in advance, but there is a verified logic that covers a set of bookstore cases and allows them to be verified in advance.

      1. Thanks Evgeni Zhukov - here is the thing. I will move this comment to a new RFC about a Peer reputation system (or will name it another way), but these ideas are not related to Triggers.

        Trigger and Instruction execution is not related to distributed computation; the World State View and changes to it (by ISI) are executed on each Peer separately and are not synchronized in any way.

        What we synchronize between Peers is their BlockStores:

        PlantUML diagram

        @startuml
        node leader
        node peer1
        node peer2
        node peer3
        node peer4
        leader -- peer1 : VoteBlock
        leader -- peer2 : SyncBlock
        leader -- peer3 : VoteBlock
        leader -- peer4
        @enduml

        This synchronization is a part of:

        1. Consensus when leader peer distributes newly up-voted block between participants
        2. Block synchronization, when a peer checks the state of previous blocks to find missing parts of our blockchain

        And here message-passing and eventual consistency between peers take place. So I think ideas from Yoshida's talk can be applied here.

    1. Vadim Reutskiy can you add them to requirements?


  14. References on the topic of formal verification of processes and events, and response to events (triggers)

    http://mrg.doc.ic.ac.uk/people/nobuko-yoshida/
    Nobuko Yoshida is Professor of Computing at Imperial College London. For the last 10 years, her main research interests have been theories and applications of protocol specifications and verifications.

    Strong Normalisation in the π-Calculus
    https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.9784&rep=rep1&type=pdf

    Compositional Event Structure Semantics for the π-Calculus
    https://pdfs.semanticscholar.org/8d83/40aadd22aa60f468b93ef1c84d08b577a8d7.pdf?_ga=2.93109145.561525550.1598357506-1470973304.1598357506

    https://katalog.uu.se/profile/?id=XX3795
    Björn Victor - Uppsala University, Sweden
    Professor at the Department of Information Technology, Division of Computing Science

    Acyclic Solos and Differential Interaction Nets
    https://perso.ens-lyon.fr/olivier.laurent/solos.pdf

    https://en.wikipedia.org/wiki/%CE%A0-calculus

    Higher category models of the pi-calculus
    Mike Stay, Lucius Gregory Meredith
    https://arxiv.org/abs/1504.04311#paper

    Safety Verification of Software Using Structured Petri Nets
    https://www.researchgate.net/publication/221147642_Safety_Verification_of_Software_Using_Structured_Petri_Nets

    Towards Verifying Petri Nets: A Model Checking Approach
    http://eprints.sim.ucm.es/11488/1/M._Rosa_Martos-Master_2010.pdf

  15. Note that some of the comments here might be outdated and only distantly related to the current design.