mirror of
https://github.com/fluencelabs/gitbook-docs
synced 2025-06-12 06:21:33 +00:00
# Building The Reward Block Application

Our project aims to

* retrieve the latest block height from the Ethereum mainnet,
* use that result to retrieve the associated reward block data and
* use the reward block data to retrieve the block miner's rewards.
In the previous sections we obtained block reward data by discovering the latest Ethereum block created. Since Ethereum produces a new block about every 13 seconds or so, it would be nice to automate the data acquisition process. One way would be to cron or otherwise daemonize our frontend application. But where's the fun in that? We'd rather hand the task to the p2p network.

As we have seen in our AIR workflows, particles travel the path, trigger execution, and update their data. So far, we have only seen services consume previous outputs as \(complete\) inputs, which means that a service at workflow sequence s needs to be fairly tightly coupled to the service at sequence s-1, which is less than ideal. Luckily, Fluence provides a solution to access certain types of results as _json paths_.

## Peer-Based Script Storage And Execution
In order to upload the periodic "block to db poll", we can use parts of the _ethqlite\_block\_committer.clj_ script:

```bash
# script file to string variable
AIR=`cat air-scripts/ethqlite_block_committer.clj`
```
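In isolation, that file-to-variable step can be sketched end to end with a throwaway script file. The path and the one-line script body below are made up purely for illustration; the real script lives in `air-scripts/`:

```bash
# create a throwaway AIR script purely for illustration
mkdir -p /tmp/air-demo
printf '(call relay ("op" "identity") [])\n' > /tmp/air-demo/demo.clj

# read the script file into a shell variable, as done above for
# ethqlite_block_committer.clj
AIR=$(cat /tmp/air-demo/demo.clj)
echo "$AIR"
```

The `AIR` variable can then be handed to `fldist` as part of its data payload.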
And we are golden. Give it some time and start checking Ethqlite for the latest block data!

{% hint style="warning" %}
Unfortunately, our daemonized service won't work just yet, as the current implementation cannot take the \(client\) seed we need in order to get our SQLite write working. It's on the to-do list, but if you need it, please contact us and we'll see about juggling priorities.
{% endhint %}

For completeness' sake, the stored script can be removed again with a dedicated AIR script.

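A sketch of what such a removal call looks like, assuming the node exposes the builtin _script_ service with a _remove_ function, and assuming `script_id` was captured when the script was added:

```
(call node ("script" "remove") [script_id] removed)
```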
## Advanced Service Output Access
As Aquamarine advances a particle's journey through the network, the output from a service method at workflow sequence s-1 tends to be the input for a service method at sequence s. For example, the _hex\_to\_int_ method, as used earlier, takes the output from the _get\_latest\_block_ method. With single-parameter outputs, this is a pretty straightforward and inherently decoupled dependency relation. However, when result parameters become more complex, such as structs, we still would like to keep services as decoupled as possible.

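To make the single-parameter hand-off concrete, a _hex\_to\_int_-style conversion boils down to the following sketch. The function name and signature here are illustrative; the actual service interface in the examples repo may differ:

```rust
// Convert a hex-encoded block height, e.g. "0xb5a372", into an integer.
// This mirrors what a hex_to_int service method has to do with the
// get_latest_block output before it can be used for lookups.
fn hex_to_int(hex: &str) -> Option<u64> {
    u64::from_str_radix(hex.trim_start_matches("0x"), 16).ok()
}

fn main() {
    // the latest-block method returns a hex string along these lines
    println!("{:?}", hex_to_int("0xb5a372"));
}
```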
Fluence provides this capability by facilitating the conversion of \(Rust\) struct returns into [json values](https://github.com/fluencelabs/aquamarine/blob/master/interpreter-lib/src/execution/boxed_value/jvaluable.rs#L30). This allows json-type key-value access to a desired subset of return values. If you go back to the _ethqlite.clj_ script, you may notice some fancy `$` and `!` operators tucked away in the deep recesses of parenthesis stacking. Below is the pertinent snippet:

Consider the _RewardBlock_ struct returned by the _get\_reward\_block_ method and the input expectations of _get\_miner\_rewards_, also an ethqlite service method, with the following [function](https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L177) signature: `pub fn get_miner_rewards(miner_address: String) -> MinerRewards`.

Basically, _get\_miner\_rewards_ wants an Ethereum address as a `String`, and in the context of our AIR script we want to get that value from the _get\_reward\_block_ result. Rather than tightly coupling _get\_miner\_rewards_ to _get\_reward\_block_ in terms of, say, the _RewardBlock_ input parameter, we take advantage of the Fluence capability to turn structs into json strings and then supply the relevant key to extract the desired value. Specifically, we use the `$` operator to access the json representation at the desired index and the `!` operator to flatten the value, if desired.
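Conceptually, the `$`/`!` access can be pictured as turning the struct into a key-value map and indexing into it. The sketch below models that with plain Rust; the _RewardBlock_ field names are assumptions for illustration, not the actual struct layout in the examples repo:

```rust
use std::collections::HashMap;

// Illustrative stand-in for the service's return struct; the real
// RewardBlock may carry different fields.
struct RewardBlock {
    block_number: String,
    block_miner: String,
}

impl RewardBlock {
    // analogous to the struct-to-json conversion Aquamarine performs
    fn to_json(&self) -> HashMap<String, String> {
        HashMap::from([
            ("block_number".to_string(), self.block_number.clone()),
            ("block_miner".to_string(), self.block_miner.clone()),
        ])
    }
}

fn main() {
    let block = RewardBlock {
        block_number: "11905911".to_string(),
        block_miner: "0x99c85bb64564d9ef9a99621301f22c9993cb89e3".to_string(),
    };
    // $-style access selects a key; !-style flattening unwraps the value,
    // which is what lets get_miner_rewards receive a plain String address
    let miner_address = block.to_json()["block_miner"].clone();
    println!("{miner_address}");
}
```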
```
modules_dir = "artifacts/"
name = "block_getter"
```

If you haven't done so already, run `./scripts/build.sh` to compile the projects. Once we have the _wasm_ files and the service configuration, we can check out our accomplishments with the REPL:

```bash
fce-repl Block-Getter-Config.toml
```

```bash
Particle id: 930ea13f-1474-4501-862a-ca5fad22ee42. Waiting for results... Press Ctrl+C to stop the script.
===================
```

Right on! Our two services coordinate into the intended application, returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [dashboard](https://dash.fluence.dev/), update your command line with the appropriate service and node ids, and run the same AIR script. Congratulations, you just ran an app coordinated by distributed services!

The script extends our previous incarnation by adding only one more method:

```bash
"node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", \
"sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}'
```

and run the AIR script with the revised `fldist` command:
```bash
Particle id: 5ce2dcf0-2d4d-40ec-8cef-d5a0cea4f0e7. Waiting for results... Press Ctrl+C to stop the script.
===================
```

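As a local sanity check, the `-d` payload is plain JSON and can be validated before it ever reaches `fldist`. A sketch, assuming `python3` is available for the validation step (the scratch-file path is arbitrary):

```bash
# write the fldist -d payload to a scratch file
cat > /tmp/fldist_payload.json <<'EOF'
{"node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17",
 "sqlite_service": "470fcaba-6834-4ccf-ac0c-4f6494e9e77b",
 "sqlite_node": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17",
 "api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}
EOF

# fail fast on malformed JSON before spending a network round trip
python3 -m json.tool < /tmp/fldist_payload.json > /dev/null && echo "payload ok"
```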
And that's a wrap!

In summary, we have developed and deployed multiple Fluence services to store Ethereum reward block data in a SQLite-as-a-service database and used Aquamarine to coordinate those services into applications. See Figure 2 below.

To get SQLite as a service, we build our service from two modules: the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) and the [Fluence sqlite](https://github.com/fluencelabs/sqlite) Wasm module, which we can build or pick up as a wasm file from the [releases](https://github.com/fluencelabs/sqlite/releases). This largely, but not entirely, mirrors what we did with the cUrl service: build the service by providing an adapter to the binary. Unlike the cUrl binary, we are bringing our own sqlite binary, i.e., _sqlite3.wasm_, with us.

This leaves us to code our _ethqlite_ module with respect to the desired CRUD interfaces and security. As [previously](../../quick_start/quick_start_add_persistence/quick_start_persistence_setup.md) discussed, we want writes to the sqlite services to be privileged, which implies that we need to own the service and have the client seed to manage authentication and ambient authorization. Specifically, we can implement a rudimentary authorization system where authentication implies authorization \(to write\). The `is_owner` function in the _ethqlite_ repo does exactly that: if the caller can prove ownership by providing a valid client seed, then we have a true condition equating write-privileged ownership with the caller identity:

```rust
// auth.rs -- the ownership check boils down to comparing the caller
// identity against the service creator (sketch; see the ethqlite repo
// for the exact implementation)
fn is_owner() -> bool {
    let meta = fluence::get_call_parameters();
    let caller = meta.init_peer_id;
    let owner = meta.service_creator_peer_id;
    caller == owner
}
```

```bash
wget https://github.com/fluencelabs/sqlite/releases/download/v0.10.0_w/sqlite3.wasm
mv sqlite3.wasm artifacts/
```

Run `./build.sh` and check the artifacts directory for the expected wasm files.

Like all Fluence services, Ethqlite needs a [service configuration](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/Config.toml) file, which looks a little more involved than what we have seen so far.
```
name = "ethqlite"
mapped_dirs = { "tmp" = "/tmp" }
```

Let's break it down:

* the first \[\[module\]\] section
  * specifies the _sqlite3.wasm_ module we pulled from the repo,

```bash
Particle id: 2fb4a366-6f40-46c1-9329-d77c6d03dfad. Waiting for results... Press Ctrl+C to stop the script.
===================
```

If you run the init script again, you will receive an error, _"Service already initiated"_, so we can be reasonably confident our code is working, and it looks like our Ethqlite service is up and running on the local node.

Given the security concerns for our database, it is not advisable, or even possible, to use an already deployed Sqlite service from the Fluence Dashboard. Instead, we deploy our own instance with our own \(secret\) client seed. To determine which network nodes are available, run: