
Join our Peer Programming Call-In

We have a bi-weekly Peer Programming Zoom call for developers on Mondays at 9 AM US Pacific time (UTC-07:00, America/Los_Angeles).

We're currently implementing the Utility Emissions Channel use case together. 

If you're interested, please check the calendar for the next call.

The Idea

Writing software requires getting data from external sources, which usually means accessing an API by sending a GET or POST request to a URL with authentication keys and a request message and parsing the XML or JSON that comes back.  While this approach is fully automated, it comes with a lot of potential problems:

  • You have to build a different API integration for every source.
  • The API is periodically unavailable, so you're down as well.
  • The API could change and break your app.
  • The API could simply be taken away from you, to "upgrade to a better API", to "improve security", etc., leaving you scrambling for an alternative.
  • You have to store a local copy of the data retrieved from the API, in case it goes away.
  • You have to work out a procedure for audit and verification of the data obtained from the API.
  • You worry about security because now you have a big app with lots of data and authentication keys.

As a result, these API integrations are expensive and time consuming to build and maintain, so they don't scale well in scope.  When we started thinking about building a Hyperledger-based blockchain app for the Carbon Accounting and Certification Working Group, which could require integrating data from a large number of sources, we came upon a better way to do it with the blockchain itself.

Instead of each source providing us with an API, they could put their data on the blockchain.  This could be a Hyperledger permissioned channel that is only open to their customers and other trusted parties.  The data could be the carbon emissions of a product or a particular invoice, for example.

Instead of a big app consuming data from all these sources, an app could simply traverse through multiple channels.  Instead of having a single access key from each data source, it could request that the user of the app pass it their authentication key or token.  Then it would pass that token through to the data source's channel and obtain the data it needs.  It would then perform the calculation with this data.  Finally, it could open a channel of its own and put the results on this channel.  This channel could then be open to its customers, so they could in turn use the data for their own carbon emissions calculations.
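The pass-through flow described above can be sketched with a small in-memory stand-in. Everything here is hypothetical (channel names, tokens, record shapes); a real implementation would use Hyperledger Fabric channels and access control, not Python objects:

```python
# In-memory stand-in for the pass-through flow: the app receives the
# user's token, passes it through to the source's channel, computes,
# and publishes the result on its own channel.

class Channel:
    """A permissioned data channel: reads require a token issued by the source."""
    def __init__(self, authorized_tokens):
        self.authorized_tokens = set(authorized_tokens)
        self.records = {}

    def put(self, key, value):
        self.records[key] = value

    def get(self, key, token):
        if token not in self.authorized_tokens:
            raise PermissionError("token not authorized for this channel")
        return self.records[key]

# The utility's channel, open to its customers via tokens it issued.
utility_channel = Channel(authorized_tokens={"customer-token-123"})
utility_channel.put("acme/2020-06", {"kwh": 1200, "co2_kg": 510.0})

def calculate_and_publish(user_token, results_channel):
    """Pass the user's token through, compute, publish, return the total."""
    emissions = utility_channel.get("acme/2020-06", user_token)
    total = emissions["co2_kg"]          # a real app would sum many sources
    results_channel.put("acme/2020-06/total", {"total_co2_kg": total})
    return total

# The app's own channel, open to its downstream customers.
results_channel = Channel(authorized_tokens={"downstream-token-456"})
calculate_and_publish("customer-token-123", results_channel)
```

Note that the app never holds its own long-lived key to the utility's data; it only forwards the token its user supplied, which is the point of the design.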

An additional advantage is that the sources contributing the data simply have to store the data in the channel.  They do not have to be responsible for its long-term keeping and availability.  This allows specialty companies, such as CO2 emissions auditing companies, to audit the data from a larger entity, such as a utility, shipping company, airline, or manufacturer, and then store it in the channel for everyone else to use.  This is similar to the offline scenario where a building engineer files a report for a building developer with the city's building department: Once the report has been filed, the building engineer is no longer responsible for making it available.  The city planning department is responsible for making it available to the building developer and his sub-contractors and customers, as well as other government agencies.

There could be many channels of data from electric utilities, shipping companies, airlines, and suppliers of industrial commodities such as plastics or steel, each placed on the channel by a trusted CO2 emissions auditing company.  Our app is a carbon calculator.  The user of our app would pass in his keys to access the channels, for example from his utility emissions channel, based on his status as their customer.  We would pass those keys to the channel and get the emissions from the utility, which we could then use for our calculations.  The entire process is run as a serverless micro service.   

The advantages of this approach are:

  • The auditing company doesn't have to worry about keeping an API up and running.  Once it has done its analysis, it's written to the blockchain channel and no longer requires a server to provide it, just like web content deployed on a CDN.  
  • The format is standardized by agreement between the provider and users of the data.
  • The content of the data cannot be altered once it is written to the blockchain, removing concerns about audit.
  • Even if the auditing company changes the format or stops providing data in the future, data from the past will always be available.
  • The app consuming the data can be much smaller.  For example, instead of storing an authentication key for many sources and data from many users, it can use keys provided by a customer's wallet.  Once it's finished with the data obtained from those keys, it can put the results on its channel and then delete the data obtained from the blockchain.  
  • Both the data source and data consumer could then be run as serverless micro services.

The channels could be maintained by a third party entity, which charges a small fee for each access through a token.  The data source provider could give out these tokens when it allows a data consumer permission to access its channel.  In return, the third party entity serves as a neutral party tasked with keeping up the operations of the data channel.

This general architecture could be used for a variety of business data integrations, specifically those from the Carbon Accounting and Certification Minimum Viable Product (MVP).

See slides illustrating this.

A Physical Analogy

A physical world analogy is engineering reports filed with city building departments.  There are many engineering companies who would draw up building plans according to building codes.  These are reviewed, approved, and filed by the city building department, and then they are available to owners of the building as well as government agencies.  Here we are proposing to replace the equivalent of the centralized city building department with a blockchain hosted and run by the engineering companies themselves.



24 Comments

  1. Have you evaluated using Kafka directly?  It supports many different architectures for data sharing, including the pattern here. It's fully open-source. It would not have some of the performance challenges that blockchain potentially has.

    1. If there are performance issues (probably not in the PoC), we should take a look at Hyperledger Avalon.
      "Hyperledger Avalon is a ledger independent implementation of the Trusted Compute Specifications published by the Enterprise Ethereum Alliance. It aims to enable the secure movement of blockchain processing off the main chain to dedicated computing resources. Avalon is designed to help developers gain the benefits of computational trust and mitigate its drawbacks."

      David Boswell Maybe we could invite someone from the Avalon community to one of our meetings if our need for outsourcing processing increases.

      1. Hi Robin Klemens I agree with you.  Actually vaneet also had a question about scalability.  Has anybody had problems scaling Hyperledger Fabric?  Is there something in particular we should be aware of?

        I would like to learn more about Avalon and also Aries for security key storage, as Martin Wainstein suggested during our call.  David Boswell Do you think we could get somebody from those projects to talk to our SIG, as well as somebody from Fabric to talk about scalability issues/questions?

        1. Let me look into Aries, Avalon and Fabric speakers and will get back to you, Martin and Tom about scheduling for those.

      2. Fabric has very good scalability in standard configurations (e.g. the ones used for benchmarks) when compared to any other major blockchain platform. That said, the Fabric team has some additional improvements that have not yet been fully implemented. Separately, when I look at specific use cases, there is a great opportunity in some of those to scale Fabric well beyond the standard configuration benchmarks.

        1. That makes sense, and I've heard the same elsewhere as well.  You're right in that Kafka on its own is going to be a lot faster, but I think that Fabric will be more than adequate for our needs of storing what are essentially mirrors of financial accounting transactions.

        2. Jim Mason  – Do you have recommendations for who from the Fabric project would be best to ask to speak to the CA2 SIG group about Fabric scalability?  Thanks for any suggestions.

  2. Thanks for your feedback.  Yes, Kafka is similar, but the difference is using a blockchain gives us an immutable ledger that preserves the data, which is of a financial and audit nature.  In contrast, Kafka is designed more for data that's not meant for long-term storage.  If you look at their use cases, for example, there's a lot of logging.  Maybe you could build Kafka to do this (distributed, encrypted, secure, immutable, verifiable, etc.), but I suspect you'd end up with a homegrown blockchain by then.

  3. Si Chen,

    Thank you for giving me the opportunity for open discussion. Before I can suggest anything related to architecture, I need to understand the different actors of the system and their boundaries.

    My first thoughts on reading this page are below.

    The above could also be done with a server-based system, for instance by creating an in-house ESB (Enterprise Service Bus) using the Pipes and Filters pattern.

    Consuming data from external APIs → an inbound gateway takes in the request and transforms the payload, does the business logic (i.e. calling core services or calculations), and sends the response to an outbound gateway. This can be done using channels and can be asynchronous.  The middle layer can also use gRPC with protocol buffers for data exchange, while the inbound and outbound gateways can use HTTP.  Here, caching of the data set can be done using Redis or similar, so that if the data source is not available, the data can be fetched from the cache.

    The app would then access data from the outbound gateway using a simple call. All processing is done by the middleware, the results are aggregated, and the response is sent to the client over HTTP from the outbound gateway.

    I think that putting data on blockchain node could lead to bottlenecks in transaction management, performance and scalability.

    Also, the data access layer, data authentication layer, and business service layer should be designed in a way that they do not cross too many boundaries; otherwise any new changes in business entities/layers will be seen in data access and data authentication, and it would be hard to maintain and scale the ever-changing platform.  In fact, I would design these layers so that they each follow the single responsibility principle.

    I would use a hybrid architecture: a scalable backend with simple microservices using channels to get data, aggregate it, and send it to the client, and around that core, use blockchain (i.e. Hyperledger) for what it serves best.

    As the number of nodes/networks increases, adding or accessing data via the blockchain can have other problems with the data: data consistency, transactions, etc.

    As I said, I am very new to understanding the requirements of the system, but with the little I understood, I gave my point of view.


    1. Thanks vaneet.  Yes, it could certainly be done that way.  So what do you see Hyperledger or blockchain being used for?

  4. Si Chen

    Well, blockchain can be used for the part of the system where it fits best and does not compromise the non-functional requirements of the solution.

    I ask myself the following questions:

    1) Do we need to store state?

    2) Do we have multiple writers?

    3) Can we use an online third party who is trusted?

    4) Are all writers known? If yes, are they trusted?

    With the above exercise, we can work out how to create a better platform, using blockchain technologies to solve the problem without impacting the design constraints of the solution.

    So I would use blockchain for audit and verification, and let APIs handle the complex processing; storing just the part of the data required for writes/verification should be sufficient.

    What do you think ?

    Thank you


  5. vaneet How about we think through a use case: figuring out the CO2 emissions from a company's electricity usage.

    Let's assume that the company buys all its electricity from a utility.  So the approach I had in mind is:

    Part 1

    The utility (or a service provider for the utility) takes the company's utility bill, which has the kWh of electricity used during a billing period, and compares it to the Emissions & Generation Resource Integrated Database (eGRID) to calculate the CO2 emissions of the electricity used.  It then writes this CO2 emissions figure to a permissioned ledger.
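Part 1 boils down to multiplying billed usage by the grid's emission factor. A sketch, where the factor value and subregion code are placeholders (real figures come from the eGRID subregion tables):

```python
# Sketch of the Part 1 calculation. The emission factor below is a
# placeholder; real values come from the eGRID tables for the
# utility's subregion and year.

EGRID_LBS_CO2_PER_MWH = {"CAMX": 496.54}   # hypothetical lookup table
LBS_PER_KG = 2.20462

def utility_emissions_kg(kwh, subregion):
    """CO2 emissions (kg) for one billing period's electricity usage."""
    mwh = kwh / 1000.0
    lbs = mwh * EGRID_LBS_CO2_PER_MWH[subregion]
    return lbs / LBS_PER_KG

# A 1,500 kWh bill in the hypothetical CAMX subregion:
emissions = utility_emissions_kg(1500, "CAMX")
```

The resulting figure is what the utility (or its service provider) would write to the permissioned ledger.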

    Part 2

    The company goes to the permissioned ledger and presents an authorization key issued by the utility.  It gets its CO2 emissions data from the ledger.

    So would these be the right answers to your questions?

    1. Yes, we need to store the output, which is CO2 emissions of the company from its electricity purchases.
    2. Maybe.  It could be just a ledger for the utility, or several utilities could participate on it.
    3. We could have a governance entity for the ledger which all the participants agree to, and this entity runs the ledger for a small fee.
    4. Yes, all writers are known and trusted.  They are the utility or authorized service providers of the utility.
  6. Si Chen  Sorry, I was busy with family.  It's holiday time here.

     The use case is good.

    1) Writing CO2 emissions to the permissioned ledger.  There could also be multiple utility providers serving the company, e.g. solar installations and the traditional grid.

    How often will writes happen?

    Can we have a second use case where the ledger works with X providers and X times Y clients/companies, and what other pieces of data would need to be stored?

    What do you think of this?

    1) Utility companies can write the data to a traditional database, and some important attributes required for verification and authentication should be put on the ledger.

    Another service writes data to the ledger; here we can use any protocol instead of depending on client-specific ones (i.e. the utility suppliers), and we can add filters etc. for data validation before it is written to the ledger. Writing to a ledger is a costly affair.

    2) The company can read from the ledger or the database.  Why do we need the ledger?

    Please don't get me wrong.  I am just trying to understand the system in more detail so that we can analyze parts of the system for better definition.


    To ease the decision making process, we provide a flow chart below. We consider one or multiple parties that write the system state, i.e. a writer corresponds to an entity with write access in a typical database system or to a consensus participant in a blockchain system.

    If no data needs to be stored, no database is required at all, i.e. a blockchain, as a form of database, is of no use. Similarly, if only one writer exists, a blockchain does not provide additional guarantees and a regular database is better suited, because it provides better performance in terms of throughput and latency. If a trusted third party (TTP) is available, there are two options. First, if the TTP is always online, write operations can be delegated to it and it can function as verifier for state transitions. Second, if the TTP is usually offline, it can function as a certificate authority in the setting of a permissioned blockchain, i.e. where all writers of the system are known.

    If the writers all mutually trust each other, i.e. they assume that no participant is malicious, a database with shared write access is likely the best solution. If they do not trust each other, using a permissioned blockchain makes sense. Depending on whether public verifiability is required, anyone can be allowed to read the state (public permissioned blockchain) or the set of readers may also be restricted (private permissioned blockchain). If the set of writers is not fixed and known to the participants, as is the case for many cryptocurrencies such as Bitcoin, a permissionless blockchain is a suitable solution.
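The flow chart described in the passage above can be transcribed into a small decision function; this is just one reading of the quoted text, with `ttp` taking "online", "offline", or None:

```python
def storage_recommendation(stores_state, multiple_writers, ttp,
                           writers_known, writers_trusted,
                           public_verifiability):
    """Transcription of the 'do you need a blockchain?' flow chart above.

    ttp is "online", "offline", or None (no trusted third party).
    """
    if not stores_state:
        return "no database needed"
    if not multiple_writers:
        return "regular database"
    if ttp == "online":
        return "delegate writes to the trusted third party"
    if not writers_known:
        return "permissionless blockchain"
    if writers_trusted:
        return "database with shared write access"
    # Untrusted but known writers; an offline TTP can act as the CA here.
    if public_verifiability:
        return "public permissioned blockchain"
    return "private permissioned blockchain"
```

For example, many known writers who do not fully trust each other, with a restricted set of readers, lands on a private permissioned blockchain, which matches the utility emissions discussion in this thread.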

    I like the diagram below, which helps in defining a blockchain solution.


    1. Thanks so much for doing this! 

      In answer to your questions:

      The frequency would coincide with billing cycles.  In reality it would not be too often, once a month or so.

      Not sure I understand what you mean by "a second use case where we can have ledger working with x number of providers and x times y number of clients/ companies and what other pieces of data required to be stored"?


      I've also had some more time and discussions about this, and I think I was missing an important element: a neutral third-party auditor.  When you think about it, of course the CO2 emissions figures should come from a neutral third party, not the company that produced them.

      So, the point of the channel: It should actually be a place to store audited CO2 emissions published by a neutral third party based on utility bills to the customer.  So it is not a database.  It should really be an immutable audit trail of objective records.  

      The channel can be accessed by the utility's customer, as well as other trusted parties (government agencies, NGO's, investors).

      The flows I see are:

      1.  Customer authorizes the third party auditor to access his data from the utility (I've implemented this already using UtilityAPI and GreenButton XML.)

      2.  Auditor gets the utility bill and applies government carbon emissions data to calculate emissions and puts them on the channel.

      3.  Auditor grants access to the channel, initially to the customer and the utility, but also to other trusted parties.

      4.  Customer gets data from the channel and uses it for his own total CO2 emissions calculations.
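A toy version of the four steps, with hypothetical party names (in Fabric, channel membership and endorsement policy would enforce what the permission checks below simulate):

```python
# In-memory sketch of the four-step auditor flow. Party names are
# hypothetical; in a real deployment this would be Fabric channel
# membership and write policy, not a Python class.

class EmissionsChannel:
    def __init__(self, auditor):
        self.auditor = auditor
        self.readers = set()
        self.records = []                  # append-only audit trail

    def publish(self, caller, record):
        if caller != self.auditor:
            raise PermissionError("only the auditor writes to the channel")
        self.records.append(record)

    def grant_access(self, caller, party):
        if caller != self.auditor:
            raise PermissionError("only the auditor grants access")
        self.readers.add(party)

    def read(self, caller):
        if caller not in self.readers:
            raise PermissionError("caller lacks channel access")
        return list(self.records)

# 1. Customer authorizes the auditor to pull its utility data (out of band).
bill = {"customer": "acme", "period": "2020-06", "kwh": 1500}
# 2. Auditor computes emissions and puts them on the channel.
channel = EmissionsChannel(auditor="green-audit-co")
channel.publish("green-audit-co", {**bill, "co2_kg": 338.0})
# 3. Auditor grants access, initially to the customer and the utility.
channel.grant_access("green-audit-co", "acme")
channel.grant_access("green-audit-co", "city-utility")
# 4. Customer reads its audited emissions for its own calculations.
records = channel.read("acme")
```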

      Eventually there could be many channels, created by possibly different auditors that work with data in each area -- shipping, travel, etc.

      So coming back to your criteria:

      1.  Yes, the data needs to be stored – auditors need to store the results of the CO2 emissions calculated.
      2. Yes, there could be multiple writers – multiple auditors could use the same utility channel.
      3. No, they cannot always be trusted to be online – even if auditors leave the business, their results must still be available, and there can be no question of them being altered.
      4. Yes, all writers are known – they are the auditors.
      5. Are all the writers trusted - Yes.  But why do you think this does not require a blockchain?  It seems a lot of Hyperledger use cases involve trusted parties writing to a ledger together.
      6. Does it need to be verifiable – Yes, but maybe not to all parties.  Sometimes companies will publicly announce their emissions.  Other times, they only want their customers, investors, and government agencies to see the data.

      The offline analog, by the way, is a set of engineering reports for a building.  It's drawn up by a professional engineer (not the builder himself), filed with the municipal building department, which approves and certifies it.  It's available to the builder, who can also make it available to tenants, other contractors.  It's also available to regulatory agencies and tax authorities.

      A permissioned ledger just provides a digital alternative to this set up.  In this case, for CO2 emissions.

  7. Si Chen

    Thank you for giving me the opportunity to share (smile)

    Yes, a neutral third-party auditor.  That was what I was looking for: which actor/service can be a good fit for blockchain.

    That is where the blockchain part fits, and the rest of the actors could use a traditional approach, getting the best of both worlds to maintain latency and keep scalability.

    A neutral third-party auditor can be a blockchain-compliant framework instead of one managed by humans, companies, or clients. Of course, for that we assume that utility companies will provide us reliable data at the source.

    So, what you are saying is: CO2 emissions data is created or updated monthly on the ledger per customer.

    So auditors write data to the channels, i.e. multiple auditors storing/writing to the same utility channel.  What do you think of throughput?

    I meant let's take another, more complex use case where more actors are involved, simulating a real-world scenario for creating value from point A to point B: "a second use case where we can have ledger working with X number of auditors and X times Y number of clients/companies, and what other type of data needs to be stored". The objective is to verify that we need blockchain at each part and eliminate the parts where it's not required.

    Best Regards

    1. vaneet Appreciate the questions – they're very helpful so keep them coming!

      Throughput: Not the highest out there in terms of frequency, since bills are issued monthly.  This is not like IoT data or tracking shipments in a supply chain.  The only issue is there are many utility customers out there.  They solve this problem by staggering the billing cycles, though, so not everybody gets billed at the same time.  I think we can try to solve this when the solution reaches that level of adoption.

      1. Hi Si Chen and vaneet , thanks for this great discussion and I'd like to hop in.
        The questions vaneet raised are great to reflect critically why we should use blockchain technology in the first place and second which kind of blockchain (private, public, permissioned, permissionless) suits best. The problem decides the technology and it is not the technology looking for an application.
        I want to extend the answers given by Si Chen and widen our view. Concerning the problem of Carbon Accounting and Certification, blockchain can do a lot more than be an "immutable audit trail of objective records".
        Do we have multiple writers?
        - Yes, we do. There has to be a set of writers, e.g. auditors, trusted institutional organizations, governments, independent NGOs, etc. The most important part is that there is a critical mass of parties authorized to write data to the ledger and thus reach consensus by the majority.
        Are all the writers trusted?
        - Is there such a thing as trust in a system that is ruled by monetary factors? This is where blockchain brings one of its superpowers into play (as vaneet has mentioned already). Blockchain has the ability to "minimize the amount of trust required from any single actor in the system" by distributing trust among different (independent) actors. Hence, a blockchain architecture should always try to avoid non-redundancy, i.e. a single point of failure/trust or just a bottleneck. (I'll try to reflect this in the architecture proposed for the Utility Emissions Channel.) By the use of redundantly executed smart contracts provided with the validation logic an auditor would apply, blockchain technology can minimize the probability of fraud, save a lot of money due to automation, speed up processes, and make all decisions transparent as well as immutably stored on the ledger. Further, as long as there are humans involved in the process of auditing or accounting, there will be random mistakes made by humans (errare humanum est).

        If I go along the diagram proposed by vaneet, I come to the point where I have to decide if public verifiability is required. I'd love to see all the data transparently stored on a publicly readable ledger, as the emission of greenhouse gases concerns the same planet we all live on. However, as long as this is not enforced by law or there arises a "forcing" demand in society, most companies would prefer the permissioned way.

        I'd love to get feedback, and I recommend continuing the discussion of blockchain technology with a look at the Utility Emissions Channel example, as this is more concrete than talking about blockchain and climate accounting in general.

        /Robin


        1. Hi Si Chen Robin Klemens vaneet ,

          Thanks for the great discussion! Here are my thoughts, I'm very interested to know whether you guys agree with them:

          1. If the auditor is a trusted third party (i.e. government agencies, customers and investors all trust its calculation result of CO2 emission), it can cryptographically sign the CO2 emission data so that the data is verifiable and tamper-evident. Therefore, blockchain is not needed to provide immutability; we can just rely on digital signatures.
          2. If government agencies, customers and investors don't want to trust a third party as the auditor, blockchain can serve as the auditor. As Robin Klemens mentioned, the trust will be distributed among a network of independent auditors. However, I think there are two things we have to consider:
            1. there needs to be a large number of independent auditors in order to make the ledger data trustworthy. Imagine if the bitcoin network only had 2 nodes; people wouldn't believe their consensus, because it's easy to bribe the two nodes. Then the question is where do we find enough independent auditors for CO2 emission calculation, without sacrificing the privacy of a customer's electricity usage data?
            2. where does the incentive of the auditors come from? In the bitcoin network, the incentive is not controlled by anyone (distributing new bitcoins is not controlled by anyone). In our case, who will pay those auditors? If it's a single third party, again the network is not trustworthy.
          3. I think without a good answer to 2.a and 2.b, a blockchain based system is not that convincing and we'd better go back to option 1 and rely on a single trusted auditor, which doesn't require blockchain.
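Point 1 can be made concrete with a tiny sign/verify sketch. Python's standard library has no public-key signatures, so this uses an HMAC with the auditor's secret key as a stand-in; a real system would use an asymmetric scheme (e.g. ECDSA) so that verifiers need only the auditor's public key:

```python
import hashlib
import hmac
import json

AUDITOR_KEY = b"hypothetical-auditor-secret"   # stand-in for a private key

def sign(record):
    """Sign a canonical JSON encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(AUDITOR_KEY, payload, hashlib.sha256).hexdigest()

def verify(record, signature):
    """Check the record against the auditor's signature."""
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(AUDITOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

record = {"customer": "acme", "period": "2020-06", "co2_kg": 338.0}
sig = sign(record)
assert verify(record, sig)                          # untampered data verifies
assert not verify({**record, "co2_kg": 1.0}, sig)   # tampering is detected
```

This gives verifiability and tamper-evidence, but, as the rest of the thread discusses, it says nothing about where the signed data is stored or how it is kept available.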

          What do you guys think?



          1. Hello Yilin Sai vaneet Robin Klemens and anybody else reading (smile)

            You ask some good questions!  Here are some attempts at answers:

            1. Yes, you can prove the identity of the auditor with the cryptographic signature.  But then the CO2 emissions data would still need to be stored somewhere and made available to all the potential people who need it – customers down the supply chain, government agencies, investors, general public, etc.   Let's assume that the auditors are independent engineering or environmental companies and not big software companies, so they can perform the audit but not provide an uptime SLA.  How would you make this data available at all times, trusted and verifiable, to so many parties?  Of course you can build out redundant nodes to serve the data and provide verification of signatures, but then wouldn't you have built your own blockchain?
            2. This is an interesting question of how we will scale up.  I'm sure it'll be different when we get to that point, but here are some preliminary thoughts based on studying some public blockchain services:
              1. the incentive could come from a fee for use of data, which would be very low but enough to pay for the software, the infrastructure, and the governance network.  For example, the utility could pay a certain amount and then issue tokens to their customers so that their customers could get their utility emissions audited on the network.  Or the customers could pay a small fee as part of their ESG audit budget.
              2. with this incentive we could pay for more nodes to host the blockchain.  These nodes could be other audit companies or just independent data centers or cloud providers who would run the smart contracts of the audit companies.
              3. the data could be kept private as part of the Private Data Collection on the channel, so only the peer which is authorized to see the data would have access to it.  The other peers hosting the data would only see the hash of a secret key.
              4. A physical world analogy is engineering reports filed with city building departments.  There are many engineering companies who would draw up building plans according to building codes.  These are reviewed, approved, and filed by the city building department, and then they are available to owners of the building as well as government agencies.  Here we are proposing to replace the equivalent of the centralized city building department with a blockchain hosted and run by the engineering companies themselves.
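The hash-on-chain idea in point 3 can be sketched as a salted commitment: peers outside the collection see only the hash, while authorized peers hold the payload and can check it against the chain. (This illustrates the concept; it is not Fabric's actual private data collection mechanism.)

```python
import hashlib
import json
import secrets

def commit(private_record):
    """Return (on_chain_hash, off_chain_payload); only the hash is shared."""
    payload = {"salt": secrets.token_hex(16), "data": private_record}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return digest, payload

def matches(on_chain_hash, off_chain_payload):
    """An authorized peer proves its private copy matches the on-chain hash."""
    digest = hashlib.sha256(
        json.dumps(off_chain_payload, sort_keys=True).encode()).hexdigest()
    return digest == on_chain_hash

on_chain, private = commit({"customer": "acme", "kwh": 1500})
assert matches(on_chain, private)   # authorized peer's copy checks out
```

The random salt keeps peers outside the collection from brute-forcing low-entropy values (like a kWh reading) from the hash alone.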
            1. Hi Si Chen,

              Thank you very much for the reply! Your comments inspire me to think a lot deeper. I want to add some comments:

              For point 1, thank you for pointing out the need for high availability. With that in mind, I came up with the following data flow:

              third party auditors/cloud computers do the emission calculation and sign the result → the result is stored in a cluster of nodes with redundancy → customers/government agencies/etc connect to one of the node, pull the result of interest, and verify the signature to ensure the source is correct and the result hasn't been tampered with.

              Some notes about the system above:

                a. the cluster provides high availability. In other words, it uses a crash fault tolerance consensus algorithm.

                b. the third party auditors/cloud computers are the source of trust

                c. the cluster doesn't need to verify the signatures, because the cluster is not the source of trust

                d. the serverless micro service run by customers/government agencies does the signature verification

              Now let's compare the system above with the following one:

              third party auditors/cloud computers do the emission calculation and sign the result → the result is stored in a cluster of nodes which verifies the signatures and reach a consensus on the validity of the result → customers/government agencies/etc connect to one of the node, pull the result of interest, and assume the result is from the correct source and the result hasn't been tampered with.

              Compared with the first system, in the second system:

                a. the cluster provides high availability and trust. In other words, it uses a Byzantine fault tolerance consensus algorithm.

                b. the third party auditors/cloud computers plus the cluster are the source of trust

                c. the cluster needs to verify the signatures, because the cluster is part of the source of trust

                d. the serverless micro service run by customers/government agencies does not verify signatures, because they trust the cluster has done the job.

              I think we can call both systems blockchain-based, because they both involve a distributed ledger of signed records. However, the two systems have different assumptions/requirements and operate differently. I think what you have in mind is more in line with the second system?

              In my opinion, the first system loosens the requirements of the cluster so that we don't need to worry about distributing trust among the cluster and BFT consensus, which means it's easier to implement. Also it's closer to the real world analogy you mentioned, so the industry might be more likely to adopt. Finally the privacy of the customer is better preserved, because the customer does not need to share its usage data with as many auditors/cloud computers as system 2. What do you think?

              --------------------------------------------

              For point 2, a fee for use of the data makes sense, but I think something is missing if we want to truly distribute the trust. Please allow me to compare with bitcoin again: in bitcoin, the person submitting a transaction doesn't get to choose which node processes it. The fee is given to a random node (in a POW network, the node that solves the hashcash problem). In our case, if the customer is the one who gets to choose, it could choose a node (or organisation) that favours it. We don't want a third party to choose either. So I think in order to truly distribute the trust, we need a mechanism to both

                a. determine which node(s) will calculate the CO2 emissions for the current auditing request

                b. distribute the incentive to the node(s) that do the job

              Edit: just to make this clearer. Without this mechanism, the endorser nodes that do the emission calculation have an incentive to act in favour of the request submitter, because the submitter holds the money and can decide who to give it to!

              I guess this could be out of scope for the MVP, but I think some work is needed later.
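              One possible shape for such a mechanism, sketched below with invented names: derive the auditing node deterministically from the request id plus an unpredictable beacon that no single party controls (for example a recent block hash). The selection is then effectively random to the submitter, yet anyone can re-run it and verify the choice.

```python
import hashlib

def select_auditor(request_id: str, block_hash: str, auditors: list) -> str:
    """Hash the request id together with a beacon value (e.g. a recent
    block hash) and use the digest to index into the sorted auditor pool."""
    pool = sorted(auditors)  # canonical order so every node agrees
    digest = hashlib.sha256(f"{request_id}:{block_hash}".encode()).digest()
    return pool[int.from_bytes(digest[:8], "big") % len(pool)]

auditors = ["auditor-a", "auditor-b", "auditor-c"]
chosen = select_auditor("req-42", "00ab17...", auditors)
assert chosen in auditors
# Deterministic: anyone can re-run the selection and get the same answer.
assert chosen == select_auditor("req-42", "00ab17...", auditors)
```

              The fee could then be routed to whichever node this function names, removing the submitter's ability to steer the job toward a friendly auditor.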

              Sorry for the long message, I'm really interested in this topic because I believe it will inform the design of many solutions. Looking forward to your and everyone else's thoughts!

              1. Thank you Yilin Sai for your deep comments.  They're also inspiring me to think a lot deeper.

                In the first part, I prefer Option 2 (third party auditors/cloud computers plus the cluster are the source of trust):

                • It doesn't seem harder to implement (at least right now.  Maybe I'll change my mind once we start really writing code (smile))
                • It's better that the users of the data don't have to verify the signatures.  Some of those users could be small entities, such as a small company which buys from a utility or an airline, and it's easier if they don't have to verify the signatures.
                • It's also more similar to a real world scenario.  When you go to the city planning office to get a set of plans, if you see it's stamped "Approved", you'd assume that the city did approve it.  Though you could verify the approval stamp, most people would not.  

                In the second part, we'd have to assume that the customer would choose which auditor to work with.  In a real market with different engineers/architects/consultants/service providers, that's the normal business practice.  They offer different services, specialization within a category, personal relationships, different pricing, etc.  As long as they're all "accredited" and thus trusted, we usually trust they will do the right thing. 

                The only interesting exception is mortgage financing: after the mortgage market problems of 2008, it was determined that appraisers must not be hired by the mortgage brokers, so a system was put in place to route appraisals through a pool.  This, however, created its own problems: a randomly chosen appraiser from the pool of "qualified" appraisers may not be familiar with the specific needs of the property.  For example, in a pool of all appraisers in a big city, not everyone will be equally familiar with each neighborhood and its quirks.

                1. Thanks Si Chen for addressing my concerns! Really happy to see how the scenario's demands naturally lead to the choice of a blockchain-based solution!

  8. hello there, I was looking for help with Hyperledger, so I stumbled onto this forum, but I don't really know if it's for learners like me.  So I have to be sure before I start asking questions, please.  Thanks all

    1. Sylvester Akinbohun  – I'm glad to hear that you're interested in learning more about Hyperledger.  A good place to start is to go through the free online training courses.  You can find links to those at:

      https://www.hyperledger.org/learn/training

      If you have questions after checking those out, feel free to let us know.