Background
The Iroha Modular Data Model is a "low-level" API concept for configuring Iroha Peer processes. It should be compared with the Iroha DSL, and a decision about including this functionality in the 2.0.x versions should be made.
More details on initial research:
Problem
The idea behind DM or HL Transact is to decouple business logic from the blockchain. It allows running Iroha with a subset of ISIs required for a specific deployment, or solely with Burrow EVM, or with WASM as the business logic layer, for example.
Solution
A data model (further referred to as DM) is a business model abstraction. It may provide interfaces to execute commands and query state. A DM implementation is a module that can be attached to an Iroha node. In that case, the set of commands delivered to a DM module is strictly determined by the ledger, which enables building extensible blockchain applications.
The Data Model approach makes it possible to extend basic Peer-side (server-side) functionality with general-purpose languages (Python, C++, etc.) and environments.
Iroha Users can deploy payload schemes for commands and queries:
```protobuf
message Set {
  string key = 1;
  string value = 2;
}

message Nuke {}

message Command { // extends CallModel
  message Payload {
    oneof command {
      Set set = 1;
      Nuke nuke = 2;
    }
  }
  Payload payload = 1;
  .iroha.protocol.DataModelId dm_id = 2;
}
```
And their implementations:
The implementation has no strict API, so it can work with disk and network I/O, use available libraries, global static memory, etc.
It also has no Iroha API inside, but it can be added if needed.
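As an illustration, here is a minimal Python sketch of what such a module implementation could look like. All names are hypothetical and not part of any real Iroha API; the `set`/`nuke` commands follow the schema above, and the global `kv_storage` dict stands in for the module's static mutable memory.

```python
# Hypothetical DM module sketch (illustrative names only).
# The module keeps its own state and exposes execute/commit/rollback hooks,
# as discussed for the Python runtime in this document.

kv_storage = {}   # global static memory the module is free to use
_pending = {}     # changes staged by execute(), applied on commit()

def execute(cmd):
    # `cmd` stands in for the decoded Command payload: (kind, fields).
    kind, payload = cmd
    if kind == "set":
        _pending[payload["key"]] = payload["value"]
    elif kind == "nuke":
        # Stage deletion of every currently committed key.
        _pending.clear()
        _pending.update({key: None for key in kv_storage})

def commit():
    # Apply staged changes; a value of None marks a deletion.
    for key, value in _pending.items():
        if value is None:
            kv_storage.pop(key, None)
        else:
            kv_storage[key] = value
    _pending.clear()

def rollback():
    # Discard everything staged since the last commit.
    _pending.clear()
```

Staging changes in `_pending` and applying them only on `commit()` makes `rollback()` a simple discard of the staged state.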
Decisions
- Use a protobuf schema for Command and Query payloads
- Use some schema for Command and Query descriptions
- Command and Query implementations should align to appropriate interfaces (rollback_block, load_persistent, etc.) in available SDKs
- The Executor decides whether to proceed with a command/query or not
Alternatives
- Use client-side transformers and adapters to support 3rd party tools via Client-side Iroha API
- Use already existing WASM runtimes as a backend
Concerns
- Having no strict requirements on the environment is a powerful yet dangerous solution
- The solution does not align well with Iroha 2.0 in its current form
Assumptions
- Clients and Iroha use gRPC or another protocol with Protobuf
- Custom commands and queries have no obvious API for Iroha
Risks
- Protobuf will not be enough for 3rd party applications and users `[9;7]`
- Powerful environments will require a lot of resources `[9;5]`
19 Comments
Nikita Puzankov
Questions
Protobuf needed?
Mikhail Boldyrev:
Model is an extension to Iroha domain or a replacement?
Looks like it's just additional types of `Command` and `Query` entities.
Who can use DM module API?
Looks like clients can build their own command and query bodies, while developers can deploy implementations.
What is an API for command executor?
In the example, no API was used (except the Python standard library).
How does Query work internally with model-related entities?
Do implementations have direct access to static mutable memory?
Like `kv_storage` here.
Andrei Lebedev
> Protobuf needed?
No, it is only used as an example.
> Model is an extension to Iroha domain or a replacement?
Replacement. The document described the easiest way to integrate it into Iroha 1. In general, the ability to deploy only required models is a plus.
> Who can use DM module API?
Correct. Clients can build messages which can be parsed by a specific DM.
> What is an API for command executor?
For Python, it was `execute(cmd_serialized: memoryview)` plus commit/rollback methods. We need to discuss which methods have to be present in order to support events.
> How Query works inside with model related entities?
Could you elaborate? As I understand the question, it works exactly the same as a command: it parses the user payload and performs some logic based on it. From the API perspective, the query method may return an iterator, so that Iroha can write responses to the client.
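For example, the query side could be sketched in Python as follows (all names and the storage contents are illustrative, not a real Iroha API):

```python
# Hypothetical DM query handler: parses the user payload and returns an
# iterator, so the node can stream responses to the client lazily.

def handle_query(query):
    # `query` stands in for the decoded Query payload, e.g. a key prefix.
    prefix = query.get("prefix", "")
    # Made-up state; a real module would read its own storage here.
    storage = {"account/alice": "100", "account/bob": "7", "asset/xor": "1000"}
    # A generator lets Iroha write each response out as it is produced.
    return (f"{key}={value}"
            for key, value in sorted(storage.items())
            if key.startswith(prefix))
```

Returning a generator rather than a fully built list is what lets Iroha write responses to the client incrementally, as described above.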
> Implementations has direct access to static mutable memory?
This is implementation-specific, as I understand. Depending on how DM is implemented, it can perform the actions it requires.
Nikita Puzankov
Thank you, Andrei Lebedev.
So additional questions:
Andrei Lebedev
https://docs.rs/transact/0.2.4/transact/#modules should be included in the RFC
Andrei Lebedev
Decision for Iroha 1, as it already uses protobuf. Not applicable in this RFC.
It is still required to wrap it under some API, as with other runtimes.
Ales Zivkovic
Integrations with different tools from Blockchain ecosystem required additional changes in Iroha 1.0.
This is too vague; can we describe the problem better? Which tools from the blockchain ecosystem do we need to integrate with, and how does DM help with this? Yes, there is an example of the HL Burrow integration in the provided PDF document, but this is not the key goal of the DM solution.
Nikita Puzankov
Andrei Lebedev Mikhail Boldyrev - please provide more information here.
Makoto Takemiya - it will be great to hear your general opinion as Product Owner.
Andrei Lebedev
The idea behind DM or HL Transact is to decouple business logic from the blockchain. It allows running Iroha with a subset of ISIs required for a specific deployment, making the system more secure, as such a deployment will have a smaller and simpler business logic module. Another option is to run solely with Burrow EVM, or with WASM as the business logic layer, for example.
Nikita Puzankov
Ales Zivkovic is it enough for you?
Ales Zivkovic
In my understanding, the aim of the DM is not to support integration with different tools from the blockchain ecosystem, but to decouple business logic from the blockchain part and allow deployment of the business logic to the Iroha node. We should probably change this in the original RFC description.
Nikita Puzankov
updated
Ales Zivkovic
Solution did not align well with Iroha 2.0 in its current form.
Why not? Can we be more specific about why it does not align with Iroha 2.0? With which concepts does it not align, and why?
Nikita Puzankov
Iroha 2.0 concepts are more high-level in general. The Modular Data Models approach would require rebuilding the whole implementation: to build all Iroha 2.0 concepts on top of it, we would need to implement Modular Data Models first.
In Iroha 2.0 we provide these concepts as an API, while Modular Data Models work with Transactions, Blocks, and general-purpose environments.
Andrei Lebedev
It seems that the current concepts are more low-level, on the contrary. The DM API or Transact API describes general logic execution, while ISIs and Triggers are a specific implementation of that API.
Nikita Puzankov
Transactions and Blocks are lower-level than the logic on top of them (like the Domain model with ISIs and Triggers). Do you agree?
Ales Zivkovic
Andrei Lebedev, do we really need to rebuild everything to be able to add DM to Iroha 2.0? Can you maybe explain, for every concept in Iroha 2.0 (special instructions, queries, triggers, P2P protocol, and consensus), why it has to be changed to support DM, and how? To me it looks more like DM can be added as another concept in Iroha 2.0.
Andrei Lebedev
Observations from the codebase:
Changes to transaction implementation:
The Torii endpoint will be decoupled from the concrete World State View and will only interact with a DM "registry", which will reroute the transaction to the specified DM. The current implementation of ISIs can be used, so we only need to implement one layer of abstraction, and the execution pipeline will stay the same.
Changes to query implementation:
Similar to transactions, the current implementation will be wrapped under the DM "registry". This is one additional indirection layer, and the current implementation will be reused under it as well.
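A minimal Python sketch of such a DM "registry" indirection layer (all names hypothetical; a real registry would dispatch on the `dm_id` field from the Command schema):

```python
# Hypothetical DM "registry": the Torii endpoint hands the payload to the
# registry, which reroutes it to the module registered under dm_id.

class DmRegistry:
    def __init__(self):
        self._modules = {}

    def register(self, dm_id, module):
        # `module` is anything callable that accepts the raw payload.
        self._modules[dm_id] = module

    def route(self, dm_id, payload):
        module = self._modules.get(dm_id)
        if module is None:
            raise KeyError(f"no data model registered for {dm_id!r}")
        return module(payload)

# The existing ISI executor would be registered as just another module:
registry = DmRegistry()
registry.register("kv@1", lambda payload: ("kv handled", payload))
```

This shows why only one abstraction layer is needed: the current ISI implementation becomes one registered module among others, and the execution pipeline is unchanged.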
Ales Zivkovic
Can we add, under the solution section, how DM complements the existing Iroha 2.0 concept of Iroha Special Instructions? In the background section it is written "It should be compared with Iroha DSL", however in the remainder of the document there is no detailed comparison with examples of when one approach works better than the other and how both can work together.
Nikita Puzankov
Ok, I will try to add it later.