Status

DECIDED

Stakeholders
Outcome
This functionality moves out of scope of 2.0.0
Due date
Owner

Background

The Iroha Modular Data Model is a concept of a "low-level" API for configuring Iroha peer processes. It should be compared with the Iroha DSL, and a decision should be made about including this functionality in the 2.0.x versions.

More details on initial research: 

Problem

The idea behind DM (or HL Transact) is to decouple business logic from the blockchain. It would allow running Iroha with a subset of ISIs required for a specific deployment, or solely with the Burrow EVM or WASM as the business logic layer, for example.

Solution

A data model (further referred to as DM) is a business model abstraction. It may provide interfaces to execute some commands and query some state. A DM implementation is a module that can be attached to an Iroha node. In that case, the set of commands delivered to a DM module is strictly determined by the ledger, which makes it possible to build extensible blockchain applications.

The Data Model approach makes it possible to extend basic peer-side (server-side) functionality with general-purpose languages (Python, C++, etc.) and environments.

Iroha users can deploy payload schemas for commands and queries:

message Set {
  string key = 1;
  string value = 2;
}

message Nuke {}

message Command {
  // extends CallModel
  message Payload {
    oneof command {
      Set set = 1;
      Nuke nuke = 2;
    }
  }
  Payload payload = 1;
  .iroha.protocol.DataModelId dm_id = 2;
}
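
For illustration, a client could construct and serialize such a command via the Python module generated from this schema. This is only a sketch: the kv_schema_pb2 module name and the DataModelId fields are assumptions (the RFC itself only shows the (name, version) pair returned by get_supported_data_model_ids below).

import kv_schema_pb2  # assumed name of the module generated from the schema above

cmd = kv_schema_pb2.Command()
cmd.payload.set.key = 'greeting'      # assigning selects the 'set' branch of the oneof
cmd.payload.set.value = 'hello'
cmd.dm_id.name = 'test_kv'            # DataModelId fields are assumed here, mirroring
cmd.dm_id.version = '0.1.0'           # the pair returned by get_supported_data_model_ids()
serialized = cmd.SerializeToString()  # bytes delivered to the peer's DM module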

And their implementations:

import json
import os

import kv_schema_pb2  # module generated from the protobuf schema above

MAX_SIZE = 1024  # maximum number of stored keys (illustrative limit)

_save_file_path = str()
_persistent_kv_storage = dict()
_block_kv_storage = dict()
_tx_kv_storage = dict()


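# Each DM module advertises the (name, version) pairs it implements.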
def get_supported_data_model_ids():
    return [('test_kv', '0.1.0')]


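# Command entry point: parse the serialized payload and apply it to the
# transaction-level storage. Returns None on success or (code, message) on error.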
def execute(cmd_serialized: memoryview):
    cmd = kv_schema_pb2.Command()
    cmd.ParseFromString(cmd_serialized)
    which = cmd.payload.WhichOneof('command')
    global _tx_kv_storage
    if which == 'set':
        key = cmd.payload.set.key
        val = cmd.payload.set.value
        if key not in _tx_kv_storage:
            if len(_tx_kv_storage) >= MAX_SIZE:
                return (3, "storage limit exceeded")
        _tx_kv_storage[key] = val
        print(f'storage[{key}] is set to {val}')
        return None
    elif which == 'nuke':
        _tx_kv_storage.clear()
        print('storage cleared')
        return None


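# Promote transaction-level state to block level.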
def commit_transaction():
    global _tx_kv_storage
    global _block_kv_storage
    _block_kv_storage = _tx_kv_storage.copy()


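# Commit the block: promote tx-level state, then persist a snapshot to disk.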
def commit_block():
    commit_transaction()
    global _block_kv_storage
    global _persistent_kv_storage
    _persistent_kv_storage = _block_kv_storage.copy()
    _save_persistent()


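# Discard uncommitted changes: reset tx-level state to the last block-level state.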
def rollback_transaction():
    global _tx_kv_storage
    global _block_kv_storage
    _tx_kv_storage = _block_kv_storage.copy()


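# Discard the whole block: restore block-level (and tx-level) state from the
# persisted snapshot.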
def rollback_block():
    global _block_kv_storage
    global _persistent_kv_storage
    _block_kv_storage = _persistent_kv_storage.copy()
    rollback_transaction()


def _save_persistent():
    global _save_file_path
    global _persistent_kv_storage
    with open(_save_file_path, 'wt') as out:
        json.dump(_persistent_kv_storage, out)
    print(f'saved persistent data to {_save_file_path}')


def _load_persistent():
    global _save_file_path
    global _tx_kv_storage
    global _block_kv_storage
    global _persistent_kv_storage
    if os.path.isfile(_save_file_path):
        with open(_save_file_path, 'rt') as inp:
            _persistent_kv_storage = json.load(inp)
        _block_kv_storage = _persistent_kv_storage.copy()
        _tx_kv_storage = _persistent_kv_storage.copy()


def initialize(save_file_path: str):
    global _save_file_path
    _save_file_path = save_file_path
    _load_persistent()

The implementation has no strict API, so it can work with disk and network I/O, use available libraries, global static memory, etc.

It also has no Iroha API inside, but one can be added if needed.
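
To illustrate the lifecycle a peer would drive against this module, here is a minimal sketch (the call order is an assumption inferred from the function names above; the file path is arbitrary):

initialize('/tmp/test_kv.json')

cmd = kv_schema_pb2.Command()
cmd.payload.set.key = 'greeting'
cmd.payload.set.value = 'hello'
execute(memoryview(cmd.SerializeToString()))

commit_transaction()  # tx-level state becomes block-level state
commit_block()        # block-level state is persisted to disk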

Decisions

  • Use some schema for Commands and Queries descriptions (the earlier decision, to use the protobuf schema for Commands and Queries payloads, applies to Iroha 1 only, as it already uses protobuf)
  • Commands and Queries implementations should conform to appropriate interfaces (rollback_block, load_persistent, etc.) in the available SDKs? One possible shape of such an interface is sketched below.
  • The executor decides whether to proceed with a command/query or not
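
A minimal sketch of what such an interface could look like in a Python SDK (an assumption drawn from the example implementation above, not a defined API):

import abc

class DataModel(abc.ABC):
    """Hypothetical base class a DM implementation would conform to."""

    @abc.abstractmethod
    def get_supported_data_model_ids(self) -> list:
        """Return the (name, version) pairs this module implements."""

    @abc.abstractmethod
    def execute(self, cmd_serialized: memoryview):
        """Apply a command; None on success, or an (error code, message) tuple."""

    @abc.abstractmethod
    def commit_transaction(self): ...

    @abc.abstractmethod
    def commit_block(self): ...

    @abc.abstractmethod
    def rollback_transaction(self): ...

    @abc.abstractmethod
    def rollback_block(self): ...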

Alternatives

  • Use client-side transformers and adapters to support third-party tools via the client-side Iroha API
  • Use existing WASM runtimes as a backend

Concerns

  • Having no strict requirements on the environment is a powerful yet dangerous solution
  • The solution does not align well with Iroha 2.0 in its current form

Assumptions

  • Clients and Iroha use gRPC or another protocol with Protobuf
  • Custom commands and queries have no obvious API for Iroha

Risks

  • Protobuf will not be enough for 3rd party applications and users `[9;7]`
  • Powerful environments will require a lot of resources `[9;5]`

Additional Information

19 Comments

  1. Questions

    Is protobuf needed?

    Mikhail Boldyrev:

    Iroha 1 clients use protobuf, so the clients are bound to it. It seems reasonable to reuse it as we already have it, and it also provides zero-copy possibilities, so we did not consider other protocols. If we need another protocol, we can put it in a protobuf field, but it is better to avoid such nesting.


    Is the model an extension to the Iroha domain or a replacement?

    Looks like it's just additional types of `Command` and `Query` entities.

    Who can use the DM module API?

    Looks like clients can build their own command and query bodies, while developers can deploy implementations.

    What is the API for the command executor?

    In the example, no API was used (except the Python standard library).

    How does Query work with model-related entities internally?

    Do implementations have direct access to static mutable memory?

    Like `kv_storage` here.

    1. > Is protobuf needed?

      No, it is only used as an example.

      > Is the model an extension to the Iroha domain or a replacement?

      Replacement. The document described the easiest way to integrate it into Iroha 1. In general, the ability to deploy only the required models is a plus.

      > Who can use the DM module API?

      Correct. Clients can build messages which can be parsed by a specific DM.

      > What is the API for the command executor?

      For Python, it was execute(cmd_serialized: memoryview), plus the commit/rollback methods. We need to discuss which methods have to be present in order to support events.

      > How does Query work with model-related entities internally?

      Could you elaborate? As I understand from the question, it works exactly the same as a command: it parses the user payload and performs some logic based on it. From an API perspective, the query method may return an iterator so that Iroha can write responses to the client.
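
      For example, a query handler for this module could look like this (a sketch; the Query message is hypothetical and not defined in this RFC):

      def execute_query(query_serialized: memoryview):
          query = kv_schema_pb2.Query()  # hypothetical Query message
          query.ParseFromString(query_serialized)
          # Yield results one by one so Iroha can stream them to the client.
          # Which storage tier a query should read from is a design choice.
          for key, value in _tx_kv_storage.items():
              yield (key, value)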

      > Do implementations have direct access to static mutable memory?

      This is implementation-specific, as I understand. Depending on how DM is implemented, it can perform the actions it requires.

      1. Thank you, Andrei Lebedev.

        So additional questions:

        1. So there will be no model for world state view - just raw data?
        2. About queries, I asked how to retrieve information from the chain or WSV, but it looks like it will not be stored there (see the question above)? Only transactions with commands with encoded payloads?


  2. > Use protobuf schema for Commands and Queries payloads

    > Clients and Iroha use gRPC or another protocol with Protobuf

    > Protobuf will not be enough for 3rd party applications and users `[9;7]`

    Decision for Iroha 1, as it already uses protobuf. Not applicable in this RFC.

    > Use already existing WASM runtimes as a backend

    It is still required to wrap it under some API, as with other runtimes.

  3. > Integrations with different tools from the blockchain ecosystem required additional changes in Iroha 1.0.

    This is too vague; can we describe the problem better? Which tools from the blockchain ecosystem do we need to integrate with, and how does DM help with this? Yes, there is an example of the HL Burrow integration in the provided PDF document, but this is not the key goal of the DM solution.

    1. Andrei Lebedev, Mikhail Boldyrev - please provide more information here.

      Makoto Takemiya - it would be great to hear your general opinion as Product Owner.

    2. The idea behind DM (or HL Transact) is to decouple business logic from the blockchain. It would allow running Iroha with a subset of ISIs required for a specific deployment, making the system more secure, as such a deployment would have a smaller and simpler business logic module. Another option is to run solely with the Burrow EVM or WASM as the business logic layer, for example.

      1. In my understanding, the aim of the DM is not to support integration with different tools from the blockchain ecosystem, but to decouple business logic from the blockchain part and to allow deployment of the business logic to the Iroha node. We should probably change this in the original RFC description.

  4. > The solution does not align well with Iroha 2.0 in its current form.

    Why not? Can we be more specific about why it does not align with Iroha 2.0? With which concepts does it not align, and why?

    1. Iroha 2.0 concepts are more high-level in general. Modular Data Models would require rebuilding the whole implementation using this approach; to build all Iroha 2.0 concepts:

      • Iroha Special Instructions
      • Iroha Queries
      • Iroha Triggers
      • P2P protocol
      • Consensus (Sumeragi)
      • World State View

      on top of it, we would first need to implement Modular Data Models.

      In Iroha 2.0 we provide these concepts as an API, while Modular Data Models work with Transactions, Blocks, and general-purpose environments.

      1. It seems that the current concepts are more low-level, on the contrary. The DM API (or Transact API) describes general logic execution, while ISIs and Triggers are a specific implementation of that API.

        1. Transactions and Blocks are lower-level than the logic on top of them (like a domain model with ISIs and Triggers). Do you agree?


        2. Andrei Lebedev, do we really need to rebuild everything to be able to add DM to Iroha 2.0? Can you maybe explain, for every concept in Iroha 2.0 (special instructions, queries, triggers, P2P protocol, and consensus), why it has to be changed to support DM, and how? To me it looks more like the DM can be added as another concept in Iroha 2.0.

          1. Observations from the codebase:

            • Instructions and queries are not related to P2P protocol and consensus, therefore these two components are not related to this RFC.
            • Triggers are executed with transactions, so we can combine instructions and triggers for this discussion
            • Queries use World State View, so we can say that queries encapsulate WSV in a way

            Changes to transaction implementation:

            The Torii endpoint will be decoupled from the concrete World State View and will only interact with a DM "registry", which will reroute the transaction to the specified DM. The current implementation of ISIs can be used, so we only need to implement one layer of abstraction, and the execution pipeline will stay the same.

            Changes to query implementation:

            Similar to transactions, the current implementation will be wrapped under the DM "registry". This adds one indirection layer, and the current implementation will be reused under it as well.
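
            A rough sketch of this "registry" indirection (names are illustrative, not an actual Iroha 2.0 API):

            _registry = {}  # (name, version) -> DM module

            def register(module):
                for dm_id in module.get_supported_data_model_ids():
                    _registry[dm_id] = module

            def route_command(dm_name, dm_version, cmd_serialized):
                # Torii hands the payload to the registry, which reroutes it
                # to the DM specified by the command's dm_id.
                return _registry[(dm_name, dm_version)].execute(cmd_serialized)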

  5. Can we add, under the solution section, how DM complements the existing Iroha 2.0 concept of Iroha Special Instructions? In the background section it is written "It should be compared with the Iroha DSL"; however, in the remainder of the document there is no detailed comparison with examples of when one approach works better than the other and how both can work together.

    1. OK, I will try to add it later.