# Developing Modules And Services
# Overview
|
||||
|
||||
In the Quick Start section we incrementally created a distributed, database-backed request processing application using existing services with Aquamarine. Of course, we left a lot of detail uncovered, including where the services we used came from in the first place. In this section, we tackle the development and deployment of service components.
|
||||
|
||||
Before we proceed, please make sure your Fluence environment is [set up](../recipes_recipes/recipes_setting_up.md) and ready to go. Moreover, we are going to run our own Fluence node to test our services in a network environment. Please refer to the [Running a Local Fluence Node](../tutorials_tutorials/tutorial_run_local_node.md) tutorial if you need support.
# Building The Reward Block Application
|
||||
|
||||
Our project aims to
|
||||
|
||||
* retrieve the latest block height from the Ethereum mainnet,
|
||||
* use that result to retrieve the associated reward block data and
|
||||
* store the result in an SQLite database
|
||||
|
||||
In order to simplify Ethereum data access, we will be using the [Etherscan API](https://etherscan.io/apis). Make sure you have your API key ready, as we are using two Etherscan endpoints:
|
||||
|
||||
* [eth\_blockNumber](https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey=YourApiKeyToken), which returns the most recent block height as a hex string and
|
||||
* [getBlockReward](https://api.etherscan.io/api?module=block&action=getblockreward&blockno=2165403&apikey=YourApiKeyToken), which returns the block and uncle reward by block height
|
||||
|
||||
Both _eth\_blockNumber_ and _getBlockReward_ need access to remote endpoints and, for our purposes, a cUrl service will do just fine. Hence, we need to implement a curl adapter to access the curl binary of the node. Moreover, as _eth\_blockNumber_ returns a hex string and _getBlockReward_ requires an integer, we need a hex-to-int conversion, which we are going to implement as a stand-alone service. Finally, an SQLite adapter is also required, although the sqlite3.wasm module is [readily available](https://github.com/fluencelabs/sqlite/releases) from the Fluence repo.
|
||||
|
||||
The high-level workflow of our application is depicted in Figure 1.
|
||||
|
||||

# Additional Concepts
|
||||
|
||||
In the previous sections we obtained block reward data by discovering the latest Ethereum block created. Ethereum produces a new block about every 13 seconds or so, and it would be nice to automate the data acquisition process. One way, of course, would be to cron or otherwise daemonize our frontend application. But where's the fun in that? We'd rather hand that task to the p2p network.
|
||||
|
||||
As we have seen in our AIR workflows, particles travel the path, trigger execution, and update their data. So far, we have only seen services consume previous outputs as \(complete\) inputs, which means that a service at workflow sequence s needs to be fairly tightly coupled to the service at sequence s-1, which is less than ideal. Luckily, Fluence provides a solution to access certain types of results as _json paths_.
|
||||
|
||||
## Peer-Based Script Storage And Execution
|
||||
|
||||
As discussed previously, a peer-based ability to "poll" is a valuable feature to some applications. Fluence nodes come with a set of built-in services including the ability to store scripts on a peer with the intent of periodic execution.
|
||||
|
||||
This service, just as all distributed services, is managed by Aquamarine. The AIR script looks like:
|
||||
|
||||
```text
|
||||
; add a script to
|
||||
(call node ("script" "add") [script interval] id)
|
||||
```
|
||||
|
||||
where:
|
||||
|
||||
* _node_ -- takes the peer id parameter
|
||||
* _"script"_ -- is the \(hard-coded\) service id
|
||||
* _script_ -- takes the AIR script as a **string**
|
||||
* _interval_ -- the execution interval in seconds, optional, default is three \(3\) seconds; provide as **string**, e.g. five seconds are expressed as "5"
|
||||
* _id_ -- is the return value
|
||||
|
||||
In addition to the service "add" method, there are also service "list" and service "remove" methods available:
|
||||
|
||||
```text
|
||||
; list
|
||||
(call node ("script" "list") [] list)
|
||||
|
||||
; remove
|
||||
(call node ("script" "remove") [script_id] result)
|
||||
```
|
||||
|
||||
where remove takes the id \(returned by "add"\) and returns a boolean.
|
||||
|
||||
Let's check for any stored scripts on our local node \(make sure you use your node id\); as expected, no scripts have been uploaded for storage and execution.
|
||||
|
||||
```text
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/list_stored_services.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
|
||||
client seed: 5ydZWdJAzMHAGQ2hCVJCa5ByYq7obp2yc9gRD43ajXrZ
|
||||
client peerId: 12D3KooWBgzuiNn5mz1DwqDbqapBf3NSF8mRjSJV1KC3VphjAyWL
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 17986fb7-36e7-4f10-b311-d2512f5fe2e5. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
[]
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'script',
|
||||
function_name: 'list',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
In order to upload the periodic "block to db poll", we can use parts of the _ethqlite\_roundtrip.clj_ script and hard-code the parameters since currently there is no option to separately upload the script and data. Make sure you replace the `node_*`, `service_*` and `api_key` placeholders with your actual values in the file!
|
||||
|
||||
```text
|
||||
; air-scripts/ethqlite_block_committer.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_block_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [int_block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "update_reward_blocks") [block_result] insert_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [insert_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
```bash
|
||||
# script file to string variable
|
||||
AIR=`cat air-scripts/ethqlite_block_committer.clj`
|
||||
# interval variable in seconds to string variable
|
||||
INT="10"
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/add_stored_service.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "interval":"'"$INT"'", "script":"'"$AIR"'"}'
|
||||
client seed: Cwhf8VuyqPCUPi8keyZAcRVBkaGNLWviHMRwDL2hG8D4
|
||||
client peerId: 12D3KooWJgFCCeHpcEoVyxT5Fmg47ok43MPU7hfT9cNv5R3KeDEw
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: dd3ad854-b10d-4664-846d-42c59c59335f. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"a1791c0f-084e-4b4d-a85c-a3eb65a18d57" # <= Take note of the storage id !
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'script',
|
||||
function_name: 'add',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
Checking once more for listed services hits pay dirt:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/list_stored_services.clj -d '{"node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
|
||||
client seed: HpHQc1as9zGdiHaMQzyPDaPWrdMVEvAA8DwdJiAvczWS
|
||||
client peerId: 12D3KooWFiiS7FMo18EbrtWZi38Nwe1SiYCRqcsJNEPtYh28zHNm
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 5fb0af87-310f-4b12-8c73-e044cfd8ef6e. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
[
|
||||
{
|
||||
"failures": 0,
|
||||
"id": "a1791c0f-084e-4b4d-a85c-a3eb65a18d57",
|
||||
"interval": "10s",
|
||||
"owner": "12D3KooWJgFCCeHpcEoVyxT5Fmg47ok43MPU7hfT9cNv5R3KeDEw",
|
||||
"src": "$AIR"
|
||||
}
|
||||
]
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'script',
|
||||
function_name: 'list',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
And we are golden. Give it some time and start checking Ethqlite for the latest block and reward info!
|
||||
|
||||
TODO: this isn't working since we can't upload a key with the script.
|
||||
|
||||
For completeness' sake, let's remove the stored script with the following AIR script:
|
||||
|
||||
```text
; remove a stored script
(call node ("script" "remove") [script_id] result)
```
|
||||
|
||||
TODO: finalize or delete for now.
|
||||
|
||||
## Advanced Service Output Access
|
||||
|
||||
As Aquamarine advances a particle's journey through the network, the output from a service method at workflow sequence s-1 tends to be the input for a service method at sequence s. For example, the _hex\_to\_int_ method, as used earlier, takes the output from the _get\_latest\_block_ method. With single-parameter outputs, this is a pretty straightforward and inherently decoupled dependency relation. However, when result parameters become more complex, such as structs, we still would like to keep services as decoupled as possible.
|
||||
|
||||
Fluence provides this capability by facilitating the conversion of \(Rust\) struct returns into [json values](https://github.com/fluencelabs/aquamarine/blob/master/interpreter-lib/src/execution/boxed_value/jvaluable.rs#L30). This allows json-style key-value access to a desired subset of return values. If you go back to the _ethqlite\_roundtrip.clj_ script, you may notice some fancy `$` and `!` operators tucked away in the deep recesses of parenthesis stacking. Below is the pertinent snippet:
|
||||
|
||||
```text
|
||||
; ethqlite_roundtrip.clj
|
||||
; <snip>
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "get_reward_block") [int_block_result] select_result_2)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [select_result_2])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") []) .; coming up next line !!
|
||||
(call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3) ; <= Here it is !!
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [select_result_3])
|
||||
)
|
||||
)
|
||||
)
|
||||
; <snip>
|
||||
```
|
||||
|
||||
Before we dive in, let's review the output of the _get\_reward\_block_ method, which is part of the ethqlite service:
|
||||
|
||||
```rust
|
||||
// https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L139
|
||||
#[fce]
|
||||
#[derive(Debug)]
|
||||
// https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L89
|
||||
pub struct RewardBlock {
|
||||
pub block_number: i64,
|
||||
pub timestamp: i64,
|
||||
pub block_miner: String,
|
||||
pub block_reward: String,
|
||||
}
|
||||
```
|
||||
|
||||
and the input expectations of _get\_miner\_rewards_, also an ethqlite service method, with the following [function](https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L177) signature: `pub fn get_miner_rewards(miner_address: String) -> MinerRewards`.
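For reference, _MinerRewards_ is another small FCE return struct. Judging by the `fce-repl` interface listing shown in the SQLite Service section, it has roughly the following shape; this is a sketch for orientation only, and the authoritative definition lives in the repo's _crud.rs_:

```rust
// Sketch of the MinerRewards return struct, inferred from the fce-repl
// interface listing (miner_address: String, rewards: Array<String>).
// The derive attribute is an assumption for illustration.
#[fce]
#[derive(Debug)]
pub struct MinerRewards {
    pub miner_address: String,
    pub rewards: Vec<String>,
}
```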
|
||||
|
||||
Basically, _get\_miner\_rewards_ wants an Ethereum address as a `String` and in the context of our AIR script we want to get the value from the _get\_reward\_block_ result. Rather than tightly coupling _get\_miner\_rewards_ to _get\_reward\_block_ in terms of, say, the _RewardBlock_ input parameter, we take advantage of the Fluence capability to turn structs into json strings and then supply the relevant key to extract the desired value. Specifically, we use the `$` operator to access the json representation at the desired index and the `!` operator to flatten the value, if desired.
|
||||
|
||||
For example,
|
||||
|
||||
```text
|
||||
(call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!]
|
||||
```
|
||||
|
||||
uses the _block\_miner_ key to retrieve the miner address for subsequent consumption. In order to take full advantage of this important feature, developers should return more complex results as FCE structs to prevent tight service coupling. This approach allows for a significant reduction of service dependencies, re-writes and re-deployments caused by even minor changes in upstream dependencies.
# Ethereum Request Service
|
||||
|
||||
The source code for this section can be found [here](https://github.com/fluencelabs/examples/tree/main/multi-service) and is pretty straightforward, with the two Etherscan API endpoints wrapped as public, FCE-marked functions:
|
||||
|
||||
```rust
|
||||
use crate::curl_request;
|
||||
use fluence::fce;
|
||||
use fluence::MountedBinaryResult;
|
||||
|
||||
|
||||
fn result_to_string(result:MountedBinaryResult) -> String {
|
||||
if result.is_success() {
|
||||
return String::from_utf8(result.stdout).expect("Found invalid UTF-8");
|
||||
}
|
||||
String::from_utf8(result.stderr).expect("Found invalid UTF-8")
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn get_latest_block(api_key: String) -> String {
|
||||
let url = f!("https://api.etherscan.io/api?module=proxy&action=eth_blockNumber&apikey={api_key}");
|
||||
let header = "-d \"\"";
|
||||
|
||||
let curl_cmd:Vec<String> = vec![header.into(), url.into()];
|
||||
let response = unsafe { curl_request(curl_cmd) };
|
||||
let res = result_to_string(response);
|
||||
let obj = serde_json::from_str::<serde_json::Value>(&res).unwrap();
|
||||
serde_json::from_value(obj["result"].clone()).unwrap()
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn get_block(api_key: String, block_number: u32) -> String {
|
||||
let url = f!("https://api.etherscan.io/api?module=block&action=getblockreward&blockno={block_number}&apikey={api_key}");
|
||||
let header = "-d \"\"";
|
||||
|
||||
let curl_cmd:Vec<String> = vec![header.into(), url];
|
||||
let response = unsafe { curl_request(curl_cmd) };
|
||||
result_to_string(response)
|
||||
}
|
||||
```
|
||||
|
||||
Of course, both functions need to be able to make HTTPS calls, which is accomplished by calling the \(unsafe\) `curl_request` function:
|
||||
|
||||
```rust
|
||||
// main.rs
|
||||
#[macro_use]
|
||||
extern crate fstrings;
|
||||
|
||||
use fluence::{fce, WasmLoggerBuilder};
|
||||
use fluence::MountedBinaryResult as Result;
|
||||
|
||||
mod eth_block_getters;
|
||||
|
||||
fn main() {
|
||||
WasmLoggerBuilder::new().build().ok();
|
||||
}
|
||||
|
||||
#[fce]
|
||||
#[link(wasm_import_module = "curl_adapter")]
|
||||
extern "C" {
|
||||
pub fn curl_request(curl_cmd: Vec<String>) -> Result;
|
||||
}
|
||||
```
|
||||
|
||||
Since we are dealing with Wasm modules, we don't have access to sockets at the module level but may be permissioned to call cUrl at the node level. In order to do that, we need to provide an adapter module. The code from the [cUrl adapter](https://github.com/fluencelabs/examples/tree/main/multi-service/curl_adapter) project illustrates how we mount the binary and expose the FCE-marked interface for consumption, as seen above.
|
||||
|
||||
```rust
|
||||
// main.rs
|
||||
#![allow(improper_ctypes)]
|
||||
|
||||
use fluence::fce;
|
||||
use fluence::MountedBinaryResult as Result;
|
||||
|
||||
fn main() {}
|
||||
|
||||
#[fce]
|
||||
pub fn curl_request(curl_cmd: Vec<String>) -> Result {
|
||||
let response = unsafe { curl(curl_cmd.clone()) };
|
||||
log::info!("curl response for {:?} : {:?}", curl_cmd, response);
|
||||
response
|
||||
}
|
||||
|
||||
// mounted_binaries are available to import like this:
|
||||
#[fce]
|
||||
#[link(wasm_import_module = "host")]
|
||||
extern "C" {
|
||||
pub fn curl(cmd: Vec<String>) -> Result;
|
||||
}
|
||||
```
|
||||
|
||||
From both modules, we can now create a service configuration, which specifies the name of each module and the permissions for the mounted binaries:
|
||||
|
||||
```text
|
||||
// Block-Getter-Config.toml
|
||||
modules_dir = "artifacts/"
|
||||
|
||||
[[module]]
|
||||
name = "curl_adapter"
|
||||
|
||||
[module.mounted_binaries]
|
||||
curl = "/usr/bin/curl"
|
||||
|
||||
|
||||
[[module]]
|
||||
name = "block_getter"
|
||||
```
|
||||
|
||||
If you haven't done so already, run `./scripts/build.sh` to compile the projects. Once we have _wasm_ files and the service configuration, we can check out our accomplishments with the REPL:
|
||||
|
||||
```bash
|
||||
fce-repl Block-Getter-Config.toml
|
||||
```
|
||||
|
||||
which drops us into the REPL, where we call the _interface_ command:
|
||||
|
||||
```bash
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 15b9c3ee-ffbc-4464-bb7f-675a41acf81a
|
||||
elapsed time 111.573048ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
Result {
|
||||
ret_code: S32
|
||||
error: String
|
||||
stdout: Array<U8>
|
||||
stderr: Array<U8>
|
||||
}
|
||||
|
||||
curl_adapter:
|
||||
fn curl_request(curl_cmd: Array<String>) -> Result
|
||||
|
||||
block_getter:
|
||||
fn get_block(api_key: String, block_number: U32) -> String
|
||||
fn get_latest_block(api_key: String) -> String
|
||||
|
||||
2>
|
||||
```
|
||||
|
||||
Checking the available interfaces shows the **public** interfaces of our respective Wasm modules, which are ready to be called:
|
||||
|
||||
```bash
|
||||
> call curl_adapter curl_request [["-sS", "https://google.com"]]
|
||||
result: Object({"error": String(""), "ret_code": Number(0), "stderr": Array([]), "stdout": Array([Number(60), Number(72), Number(84), Number(77), Number(76), Number(62), Number(60), Number(72), Number(69), Number(65), N
|
||||
<snip>
|
||||
, Number(72), Number(84), Number(77), Number(76), Number(62), Number(13), Number(10)])})
|
||||
elapsed time: 328.965523ms
|
||||
```
|
||||
|
||||
As implemented, the raw cUrl call returns a [MountedBinaryResult](https://github.com/fluencelabs/rust-sdk/blob/c2fec5939fc17dcc227a78c7c8030549a6ff193f/crates/main/src/mounted_binary.rs) and we can see the corresponding _struct_ at the top of our `fce-repl` interfaces output. Looking through the return object, we see the standard pipe approach in place and find our query result in the stdout pipe. Of course, we are mostly interested in using cUrl from other modules as part of our service, such as getting the most recently produced block and its corresponding data:
|
||||
|
||||
```bash
|
||||
3> call block_getter get_latest_block ["MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"]
|
||||
result: String("0xb7eeb3")
|
||||
elapsed time: 559.991486ms
|
||||
```
|
||||
|
||||
and with some cognitive gymnastics we convert 0xb7eeb3 to 12054195:
|
||||
|
||||
```bash
|
||||
4> call block_getter get_block ["MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH", 12054195]
|
||||
result: String("{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12054195\",\"timeStamp\":\"1615957734\",\"blockMiner\":\"0x99c85bb64564d9ef9a99621301f22c9993cb89e3\",\"blockReward\":\"2000000000000000000\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}")
|
||||
elapsed time: 578.485579ms
|
||||
```
|
||||
|
||||
All good, but please note that your latest block data is going to be significantly different from what's used here. Regardless, manual conversions are not all that productive, which is why we implemented a [hex\_converter](https://github.com/fluencelabs/examples/tree/main/multi-service/hex_converter) module. Let's update our service config to:
|
||||
|
||||
```text
|
||||
// Block-Getter-With-Converter-Config.toml
|
||||
modules_dir = "artifacts/"
|
||||
|
||||
[[module]]
|
||||
name = "curl_adapter"
|
||||
|
||||
[module.mounted_binaries]
|
||||
curl = "/usr/bin/curl"
|
||||
|
||||
|
||||
[[module]]
|
||||
name = "block_getter"
|
||||
|
||||
[[module]]
|
||||
name = "hex_converter"
|
||||
```
|
||||
|
||||
and running `fce-repl` with _Block-Getter-With-Converter-Config.toml_ lists the interface for the _hex\_converter_ module. So far, so good. Using the previously generated hex string yields the expected conversion result:
|
||||
|
||||
```bash
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 09bfcff0-67dd-44c2-a677-de5a7a0c6383
|
||||
elapsed time 176.472631ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
Result {
|
||||
ret_code: S32
|
||||
error: String
|
||||
stdout: Array<U8>
|
||||
stderr: Array<U8>
|
||||
}
|
||||
|
||||
hex_converter:
|
||||
fn hex_to_int(data: String) -> U64
|
||||
|
||||
block_getter:
|
||||
fn get_latest_block(api_key: String) -> String
|
||||
fn get_block(api_key: String, block_number: U32) -> String
|
||||
|
||||
curl_adapter:
|
||||
fn curl_request(curl_cmd: Array<String>) -> Result
|
||||
|
||||
2> call hex_converter hex_to_int ["0xb7eeb3"]
|
||||
result: Number(12054195)
|
||||
elapsed time: 120.34µs
|
||||
|
||||
3>
|
||||
```
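The _hex\_converter_ module itself is tiny. Its actual implementation lives in the linked repo; a minimal Rust sketch consistent with the `hex_to_int(data: String) -> U64` interface could look like this \(illustrative only\):

```rust
// main.rs -- illustrative sketch of a hex_converter-style module
use fluence::fce;

fn main() {}

#[fce]
pub fn hex_to_int(data: String) -> u64 {
    // strip an optional "0x"/"0X" prefix and parse the remainder as base-16;
    // fall back to 0 if the input is not valid hex
    let hex = data.trim_start_matches("0x").trim_start_matches("0X");
    u64::from_str_radix(hex, 16).unwrap_or(0)
}
```

As a sanity check, parsing "0xb7eeb3" this way yields the value shown above: `0xb7eeb3 = 11*16^5 + 7*16^4 + 14*16^3 + 14*16^2 + 11*16 + 3 = 12054195`.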
|
||||
|
||||
Before we review the SQLite code, let's deploy our two services to the local node with the `fldist` tool. Make sure you use the node id and address of **your** local Fluence node:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/curl_adapter.wasm:config/curl_cfg.json artifacts/block_getter.wasm:config/block_getter_cfg.json --name EthGetters
|
||||
client seed: 4mp3sXX5FR9heeuqFtfRkq5GRqNJFQ8TvGCZ94PoSvQr
|
||||
client peerId: 12D3KooWBdvur9HwahxMaGN2yrYDiofVD4GDBHivLtJwxwBuyzcr
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint EthGetters to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWBdvur9HwahxMaGN2yrYDiofVD4GDBHivLtJwxwBuyzcr
|
||||
service id: ca0eceb3-871f-440e-aff1-0a186321437d
|
||||
service created successfully
|
||||
```
|
||||
|
||||
and for the hex conversion service:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/hex_converter.wasm:config/hex_converter_cfg.json --name HexConverter
|
||||
client seed: BGvUGBvYifJf8oHS6rA7UmBc7Cs8EeaJxie8eFyP7YmY
|
||||
client peerId: 12D3KooWJLXYiXwmmWPEv7kdQ8nYb646L96XyyTgkrrMAXen3FQy
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint HexConverter to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWJLXYiXwmmWPEv7kdQ8nYb646L96XyyTgkrrMAXen3FQy
|
||||
service id: 36043704-4d40-4c74-a1bd-3abbde28305d
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Our first service, _EthGetters_, consists of two modules and the second service, _HexConverter_, of one module. With those two services available, we have everything we need to get the block reward information for the most recent block. In order to get us there, we write a small AIR script to coordinate the services into an app:
|
||||
|
||||
```text
|
||||
; latest_block_reward.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_latest_block") [api_key] hex_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_2 (service_2 "hex_to_int") [hex_result] int_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [int_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_block") [api_key int_result] block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
As always, we use the `fldist` _run\_air_ command:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/latest_reward_block.clj -d '{"service_1": "ca0eceb3-871f-440e-aff1-0a186321437d", "service_2": "36043704-4d40-4c74-a1bd-3abbde28305d", "node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "api_key":"your-api-key"}'
|
||||
client seed: 9xfs3P1r5QmBxCohcA4xmpE448Q64c14jmYn4XNJZEiz
|
||||
client peerId: 12D3KooWNfA3Za3bvfHutWhvtZxC5NWdbaujoFZkR8bh2WVTZzw3
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 930ea13f-1474-4501-862a-ca5fad22ee42. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"0xb7fe13"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
|
||||
function_name: 'get_latest_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
12058131
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '36043704-4d40-4c74-a1bd-3abbde28305d',
|
||||
function_name: 'hex_to_int',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12058131\",\"timeStamp\":\"1616010177\",\"blockMiner\":\"0x829bd824b016326a401d083b33d092293333a830\",\"blockReward\":\"6159144598411626490\",\"uncles\":[{\"miner\":\"0xe72f79190bc8f92067c6a62008656c6a9077f6aa\",\"unclePosition\":\"0\",\"blockreward\":\"500000000000000000\"}],\"uncleInclusionReward\":\"62500000000000000\"}}"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
|
||||
function_name: 'get_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
Right on! Our two services coordinate into the intended application, returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [dashboard](https://dash.fluence.dev/), update your command line with the appropriate service and node ids, and run the same AIR script. Congratulations, you just ran an app coordinated by distributed services!
# A Little More AIR, Please
|
||||
|
||||
Before you go off becoming a prominent Fluence p2p application developer gazillionaire, there are a couple more AIR functions you should be aware of: _par_ and _fold_.
|
||||
|
||||
## Distributed Workflow Parallelization
|
||||
|
||||
By now you may have come to realize that building distributed "applications" is a tad different than
|
||||
|
||||
## Distributed List Processing with fold
# Blocks To Database
|
||||
|
||||
It's been a long time coming, but finally we are ready to save data in SQLite by simply coordinating the various services we already deployed into one big-ass AIR script:
|
||||
|
||||
```text
|
||||
; ethqlite_roundtrip.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_latest_block") [api_key] hex_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [hex_block_result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_2 (service_2 "hex_to_int") [hex_block_result] int_block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [int_block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service_1 "get_block") [api_key int_block_result] block_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [block_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "update_reward_blocks") [block_result] insert_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [insert_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "get_latest_reward_block") [] select_result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [select_result])
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "get_reward_block") [int_block_result] select_result_2)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [select_result_2])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call sqlite_node (sqlite_service "get_miner_rewards") [select_result_2.$.["block_miner"]!] select_result_3)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [select_result_3])
|
||||
)
|
||||
)
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
The script extends our previous incarnation by adding only one more method, `update_reward_blocks`, plus a few testing calls, i.e., queries against the table. We need to gather our node and service ids \(which are different for you!\) to update the json data argument for the `fldist` call:
|
||||
|
||||
```bash
|
||||
-d '{"service_1":"ca0eceb3-871f-440e-aff1-0a186321437d", \
|
||||
"node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
|
||||
"service_2":"36043704-4d40-4c74-a1bd-3abbde28305d", \
|
||||
"node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
|
||||
"sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", \
|
||||
"sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
|
||||
"api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}'
|
||||
```
|
||||
|
||||
and run the AIR script with the revised `fldist` command:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/ethqlite_roundtrip.clj -d '{"service_1":"ca0eceb3-871f-440e-aff1-0a186321437d", "node_1":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17","service_2":"36043704-4d40-4c74-a1bd-3abbde28305d", "node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", "api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 5ce2dcf0-2d4d-40ec-8cef-d5a0cea4f0e7. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
"0xb807a1"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
|
||||
function_name: 'get_latest_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
12060577
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '36043704-4d40-4c74-a1bd-3abbde28305d',
|
||||
function_name: 'hex_to_int',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12060577\",\"timeStamp\":\"1616042932\",\"blockMiner\":\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\",\"blockReward\":\"3622523288217263710\",\"uncles\":[],\"uncleInclusionReward\":\"0\"}}"
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'ca0eceb3-871f-440e-aff1-0a186321437d',
|
||||
function_name: 'get_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
{
|
||||
"err_str": "",
|
||||
"success": 1
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'update_reward_blocks',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
{
|
||||
"block_miner": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
|
||||
"block_number": 12060577,
|
||||
"block_reward": "3622523288217263710",
|
||||
"timestamp": 1616042932
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'get_latest_reward_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
{
|
||||
"block_miner": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
|
||||
"block_number": 12060577,
|
||||
"block_reward": "3622523288217263710",
|
||||
"timestamp": 1616042932
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'get_reward_block',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
===================
|
||||
[
|
||||
{
|
||||
"miner_address": "\"0x2f731c3e8cd264371ffdb635d07c14a6303df52a\"",
|
||||
"rewards": [
|
||||
"3622523288217263710"
|
||||
]
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'get_miner_rewards',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
And that's a wrap!
|
||||
|
||||
In summary, we have developed and deployed multiple Fluence services to store Ethereum reward block data in an SQLite-as-a-service database and used Aquamarine to coordinate those services into applications. See Figure 2 below.
|
||||
|
||||
Figure 2: Aquamarine Application Creation From Modules And Services
|
||||
|
||||

|
||||
|
||||
Working through this project hopefully made it quite clear that the combination of distributed network services and Aquamarine makes for the easy and expedient creation of powerful applications by composition and coordination. Moreover, it showcases the power of reusability and hints at the \(economic\) rent available to developers. Presumably not entirely unexpectedly, there is a bit more to discover and a little more power to be unleashed. In the next section we touch upon two additional concepts to extend our capabilities: how to incorporate peer-based script execution into our workflow and how to utilize advanced, in-flow \(or in-transit\) results processing.
# SQLite Service
|
||||
|
||||
All our work so far has been about gathering block reward information for the latest block:
|
||||
|
||||
```javascript
|
||||
// Block reward info on Wednesday, March 17, at 2021 7:42:57 PM GMT
|
||||
// for block 12058131:
|
||||
"{\"status\":\"1\",\"message\":\"OK\",\"result\":{\"blockNumber\":\"12058131\",
|
||||
\"timeStamp\":\"1616010177\",\"blockMiner\":\"0x829bd824b016326a401d083b33d092293333a830\",
|
||||
\"blockReward\":\"6159144598411626490\",\"uncles\":[
|
||||
{\"miner\":\"0xe72f79190bc8f92067c6a62008656c6a9077f6aa\",\"unclePosition\":\"0\",
|
||||
\"blockreward\":\"500000000000000000\"}],
|
||||
\"uncleInclusionReward\":\"62500000000000000\"}}"
|
||||
```
|
||||
|
||||
which [happens about every 13 seconds or so on mainnet](https://etherscan.io/chart/blocktime) and every four seconds on Kovan. Rather than stashing the block reward results in a frontend-based storage solution, we deploy an SQLite service as our peer-to-peer hosted _Ethqlite_ service. Please see the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) for the code.
|
||||
|
||||
To get SQLite as a service, we build our service from two modules: the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) and the [Fluence sqlite](https://github.com/fluencelabs/sqlite) Wasm module, which we can build or pick up as a wasm file from the [releases](https://github.com/fluencelabs/sqlite/releases). This largely, but not entirely, mirrors what we did with the cUrl service: build the service by providing an adapter to the binary. Unlike the cUrl binary, we are bringing our own sqlite binary, i.e., _sqlite3.wasm_, with us.
|
||||
|
||||
This leaves us to code our _ethqlite_ module with respect to the desired CRUD interfaces and security. As [previously](../../quick_start/quick_start_add_persistence/quick_start_persistence_setup.md) discussed, we want writes to the sqlite service to be privileged, which implies that we need to own the service and have the client seed to manage authentication and ambient authorization. Specifically, we can implement a rudimentary authorization system where authentication implies authorization \(to write\). The `is_owner` function in the _ethqlite_ repo does exactly that: if the caller can prove ownership by providing a valid client seed, then we have a true condition equating write-privileged ownership with the caller identity:
|
||||
|
||||
```rust
|
||||
// auth.rs
|
||||
use fluence::{fce, CallParameters};
|
||||
use::fluence;
|
||||
use crate::get_connection;
|
||||
|
||||
pub fn is_owner() -> bool {
|
||||
let meta = fluence::get_call_parameters();
|
||||
let caller = meta.init_peer_id;
|
||||
let owner = meta.service_creator_peer_id;
|
||||
|
||||
caller == owner
|
||||
}
|
||||
|
||||
#[fce]
|
||||
pub fn am_i_owner() -> bool {
|
||||
is_owner()
|
||||
}
|
||||
```
|
||||
|
||||
where `fluence::get_call_parameters` is an FCE function returning the populated _CallParameters_ struct defined in the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk/blob/71591f412cb65879d74e8c38838e827ab74d8802/crates/main/src/call_parameters.rs), which provides us with the salient creator and caller parameters at runtime.
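For orientation, the two fields used by `is_owner` are sketched below; the actual _CallParameters_ struct in the SDK carries additional fields, which are omitted here, so treat this as an abridged assumption rather than the full definition:

```rust
// Abridged sketch of CallParameters; only the two fields used by is_owner()
// above are grounded in the example code, the rest of the struct is omitted.
pub struct CallParameters {
    pub init_peer_id: String,            // peer id of the client that launched the particle
    pub service_creator_peer_id: String, // peer id of the client that created the service
    // ... further fields omitted; see the Fluence Rust SDK for the full definition
}
```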
|
||||
|
||||
While the majority of the CRUD operations in _crud.rs_ are standard fare, the authentication and authorization check appears in _update\_reward\_blocks_:
|
||||
|
||||
```rust
|
||||
// crud.rs
|
||||
#[fce]
|
||||
pub fn update_reward_blocks(data_string: String) -> UpdateResult {
|
||||
if !is_owner() { // <= auth & auth check !!
|
||||
return UpdateResult { success:false, err_str: "You are not the owner".into()};
|
||||
}
|
||||
|
||||
let obj:serde_json::Value = serde_json::from_str(&data_string).unwrap();
|
||||
let obj = obj["result"].clone();
|
||||
|
||||
if obj["blockNumber"] == serde_json::Value::Null {
|
||||
return UpdateResult { success:false, err_str: "Empty reward block string".into()};
|
||||
}
|
||||
|
||||
let conn = get_connection();
|
||||
|
||||
let insert = "insert or ignore into reward_blocks values(?, ?, ?, ?)";
|
||||
let mut ins_cur = conn.prepare(insert).unwrap().cursor();
|
||||
<snip>
|
||||
```
|
||||
|
||||
That is, any non-permissioned call is prevented from write operations and an error message is returned. Please note that in [main.rs](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/src/main.rs) we have a few admin convenience functions that are also protected by the `is_owner` guard.
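The same early-return guard generalizes to any owner-only endpoint. Below is a hypothetical sketch that reuses only the `is_owner` guard and the `UpdateResult` type from above; the function itself is made up for illustration and is not part of the ethqlite service:

```rust
// Hypothetical owner-only endpoint: the guard pattern is taken from the repo,
// the function name and behavior are illustrative assumptions.
#[fce]
pub fn owner_ping() -> UpdateResult {
    if !is_owner() {
        return UpdateResult { success: false, err_str: "You are not the owner".into() };
    }
    UpdateResult { success: true, err_str: String::new() }
}
```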
|
||||
|
||||
## Building and Deploying Ethqlite
|
||||
|
||||
Our _build.sh_ script should look quite familiar, with the possible exception of downloading the already-built sqlite3.wasm file:
|
||||
|
||||
```bash
|
||||
#!/bin/sh
# build.sh
|
||||
|
||||
fce build --release
|
||||
|
||||
rm artifacts/*
|
||||
cp target/wasm32-wasi/release/ethqlite.wasm artifacts/
|
||||
wget https://github.com/fluencelabs/sqlite/releases/download/v0.10.0_w/sqlite3.wasm
|
||||
mv sqlite3.wasm artifacts/
|
||||
```
|
||||
|
||||
Run `./build.sh` and check the artifacts directory for the expected wasm files.
|
||||
|
||||
Like all Fluence services, Ethqlite needs a [service configuration](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/Config.toml) file, which looks a little more involved than what we have seen so far.
|
||||
|
||||
```text
|
||||
modules_dir = "artifacts/"
|
||||
|
||||
[[module]]
|
||||
name = "sqlite3"
|
||||
mem_pages_count = 100
|
||||
logger_enabled = false
|
||||
|
||||
[module.wasi]
|
||||
preopened_files = ["/tmp"]
|
||||
mapped_dirs = { "tmp" = "/tmp" }
|
||||
|
||||
|
||||
|
||||
[[module]]
|
||||
name = "ethqlite"
|
||||
mem_pages_count = 1
|
||||
logger_enabled = false
|
||||
|
||||
[module.wasi]
|
||||
preopened_files = ["/tmp"]
|
||||
mapped_dirs = { "tmp" = "/tmp" }
|
||||
```
|
||||
|
||||
Let's break it down:
|
||||
|
||||
* the first \[\[module\]\] section
  * specifies the _sqlite3.wasm_ module we pulled from the repo,
  * allocates memory, where each Wasm page is 64 KiB, so `mem_pages_count = 100` corresponds to roughly 6.4 MB, and
  * sets permissions for and maps node file access
* the second section is for our business logic \(CRUD\) adapter module where, again, we allocate memory and permission and map file access.
|
||||
|
||||
We can now fire up `fce-repl`:
|
||||
|
||||
```bash
|
||||
fce-repl Config.toml
|
||||
Welcome to the FCE REPL (version 0.5.2)
|
||||
app service was created with service id = 9b923db7-3747-41ab-b1fd-66bd0ccd9f68
|
||||
elapsed time 916.210305ms
|
||||
|
||||
1> interface
|
||||
Loaded modules interface:
|
||||
UpdateResult {
|
||||
success: I32
|
||||
err_str: String
|
||||
}
|
||||
RewardBlock {
|
||||
block_number: S64
|
||||
timestamp: S64
|
||||
block_miner: String
|
||||
block_reward: String
|
||||
}
|
||||
InitResult {
|
||||
success: I32
|
||||
err_msg: String
|
||||
}
|
||||
MinerRewards {
|
||||
miner_address: String
|
||||
rewards: Array<String>
|
||||
}
|
||||
DBOpenDescriptor {
|
||||
ret_code: S32
|
||||
db_handle: U32
|
||||
}
|
||||
DBPrepareDescriptor {
|
||||
ret_code: S32
|
||||
stmt_handle: U32
|
||||
tail: U32
|
||||
}
|
||||
DBExecDescriptor {
|
||||
ret_code: S32
|
||||
err_msg: String
|
||||
}
|
||||
|
||||
ethqlite:
|
||||
fn init_service() -> InitResult
|
||||
fn get_miner_rewards(miner_address: String) -> MinerRewards
|
||||
fn owner_nuclear_reset() -> I32
|
||||
fn get_reward_block(block_number: U32) -> RewardBlock
|
||||
fn update_reward_blocks(data_string: String) -> UpdateResult
|
||||
fn get_latest_reward_block() -> RewardBlock
|
||||
fn am_i_owner() -> I32
|
||||
|
||||
sqlite3:
|
||||
fn sqlite3_reset(stmt_handle: U32) -> S32
|
||||
<snip>
|
||||
fn sqlite3_column_blob(stmt_handle: U32, icol: S32) -> Array<U8>
|
||||
```
|
||||
|
||||
and see all the public Fluence interfaces, including the ones from the _sqlite3.wasm_ module. Let's upload the service to the local network:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms ethqlite/artifacts/sqlite3.wasm:ethqlite/sqlite3_cfg.json ethqlite/artifacts/ethqlite.wasm:ethqlite/ethqlite_cfg.json --name EthQlite
|
||||
client seed: 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client peerId: 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint EthQlite to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
service id: fb9ba691-c0fc-4500-88cc-b74f3b281088
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Now that we created the service on our local node, let's make sure that we have the necessary owner privileges. First, we create a little AIR script that calls the _am\_i\_owner_ function from the ethqlite service:
|
||||
|
||||
```text
|
||||
; am_i_owner.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "am_i_owner") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
and run it with the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"fb9ba691-c0fc-4500-88cc-b74f3b281088", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}'
|
||||
client seed: 3J8BqpGTQ1Ujbr8dvnpTxfr5EUneHf9ZwW84ru9sNmj7
|
||||
client peerId: 12D3KooW9z5hBDY6cXnkEGraiPFn6hJ3VstqAkVaAM7oThTiWVjL
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: efa37779-e3aa-4353-b63d-12b444b6366b. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
0
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'fb9ba691-c0fc-4500-88cc-b74f3b281088',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
As discussed earlier, the service needs some proof that we have owner privileges, which we can provide by adding the client seed, `-s`, to our call parameters:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"fb9ba691-c0fc-4500-88cc-b74f3b281088", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client seed: 7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv
|
||||
client peerId: 12D3KooWCzWm4xBv7nApuK8vNLSbKKYV36kvkz3ywqj5xcjscnz9
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: f0371615-7d75-4971-84a9-3111b8263de7. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
1
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: 'fb9ba691-c0fc-4500-88cc-b74f3b281088',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
and all is well. So where does that client seed _7VqRt2kXWZ15HABKh1hS4kvGfRcBA69cYuzV1Rwm3kHv_ come from? The easy answer is that we copied it from the service creation return values -- line 2 above. But that doesn't really answer the question. The more involved answer is that every developer should have one or more cryptographic key pairs from which the client seed is derived. Moreover, when creating a new service, the client seed should be specified; if it is not, the system creates one for us, as it did above.
|
||||
|
||||
The easiest way to get a keypair and seed is from the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist create_keypair
|
||||
client seed: 8LKYUmsWkMSiHBxo8deXyNJD3wXutq265TSTcmmtgQTJ
|
||||
client peerId: 12D3KooWRtrFyYjis4qQpC4kHcJWbtpM4mZgLYBoDn93eXJEGtVH
|
||||
relay peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
|
||||
{
|
||||
id: '12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww',
|
||||
privKey: 'CAESQO/TcX2DkTukK6XxJUc/2U6gqOLVza5PRWM2FhXfJ1qilKtA6qsHx0Rdibwxsg4Vh7JjTfRfMXSlLJphGCOb7zI=',
|
||||
pubKey: 'CAESIJSrQOqrB8dEXYm8MbIOFYeyY030XzF0pSyaYRgjm+8y',
|
||||
seed: 'H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V'
|
||||
}
|
||||
```
|
||||
|
||||
So let's re-deploy the Ethqlite service and specify the client seed at creation time:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms ethqlite/artifacts/sqlite3.wasm:ethqlite/sqlite3_cfg.json ethqlite/artifacts/ethqlite.wasm:ethqlite/ethqlite_cfg.json --name EthQliteSecure -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
uploading blueprint EthQliteSecure to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
service id: 470fcaba-6834-4ccf-ac0c-4f6494e9e77b
|
||||
service created successfully
|
||||
```
|
||||
|
||||
Updating the call parameters to reflect the new service id and client seed confirms our ownership over the service:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/am_i_owner.clj -d '{"service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 6d8c158b-d998-44ca-9d4c-255ce4b9cd21. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
1
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'am_i_owner',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
Back to our task at hand: persisting reward block data to our sqlite as a service. Looking over the source code, we know that in order to accomplish persistence, we need to:
|
||||
|
||||
* init the database: `pub fn init_service() -> InitResult`
|
||||
* provide reward data : `pub fn update_reward_blocks(data_string: String) -> UpdateResult`
|
||||
|
||||
Initializing Ethqlite is, for the most part, a one-time event, so we'll do it right now and outside of our recurring block discovery and commit workflow with another small AIR script:
|
||||
|
||||
```text
|
||||
; ethqlite_init.clj
|
||||
(xor
|
||||
(seq
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call node_1 (service "init_service") [] result)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") [result])
|
||||
)
|
||||
)
|
||||
(seq
|
||||
(call relay ("op" "identity") [])
|
||||
(call %init_peer_id% (returnService "run") ["XOR FAILED" %last_error%])
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
which we run against the node with the `fldist` tool:
|
||||
|
||||
```bash
|
||||
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p air-scripts/ethqlite_init.clj -d '{"service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", "node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17"}' -s H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client seed: H9BSbZwKmFs93462xbAyfEdGdMXb5LZuXL7GSA4uPK4V
|
||||
client peerId: 12D3KooWKphxxaXofYzC2TsN79RHZVubjmutKVdPUxVMHY3ZsVww
|
||||
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
|
||||
Particle id: 2fb4a366-6f40-46c1-9329-d77c6d03dfad. Waiting for results... Press Ctrl+C to stop the script.
|
||||
===================
|
||||
[
|
||||
{
|
||||
"err_msg": "",
|
||||
"success": 1
|
||||
}
|
||||
]
|
||||
[
|
||||
[
|
||||
{
|
||||
peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
|
||||
service_id: '470fcaba-6834-4ccf-ac0c-4f6494e9e77b',
|
||||
function_name: 'init_service',
|
||||
json_path: ''
|
||||
}
|
||||
]
|
||||
]
|
||||
===================
|
||||
```
|
||||
|
||||
If you run the init script again, you will receive an error _"Service already initiated"_, so we can be reasonably confident our code is working and it looks like our Ethqlite service is up and running on the local node.
|
||||
|
||||
Given the security concerns for our database, it is not advisable, or even possible, to use an already deployed SQLite service from the Fluence Dashboard. Instead, we deploy our own instance with our own \(secret\) client seed. To determine which network nodes are available, run:
|
||||
|
||||
```bash
fldist --env testnet env
client seed: Cj4Wpy5y955o2N3T8Hs5myRoFGhBaBhytCdsYeyFLQPw
client peerId: 12D3KooWQg8cyj4z8Bv4rGq1PeXL1XKEQd6Z2CCFguy9D4NnLaKm
relay peerId: 12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
/dns4/net01.fluence.dev/tcp/19001/wss/p2p/12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9
/dns4/net01.fluence.dev/tcp/19990/wss/p2p/12D3KooWMhVpgfQxBLkQkJed8VFNvgN4iE6MD7xCybb1ZYWW2Gtz
/dns4/net02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHk9BjDQBUqnavciRPhAYFvqKBe4ZiPPvde7vDaqgn5er
/dns4/net03.fluence.dev/tcp/19001/wss/p2p/12D3KooWBUJifCTgaxAUrcM9JysqCcS4CS8tiYH5hExbdWCAoNwb
/dns4/net04.fluence.dev/tcp/19001/wss/p2p/12D3KooWJbJFaZ3k5sNd8DjQgg3aERoKtBAnirEvPV8yp76kEXHB
/dns4/net05.fluence.dev/tcp/19001/wss/p2p/12D3KooWCKCeqLPSgMnDjyFsJuWqREDtKNHx1JEBiwaMXhCLNTRb
/dns4/net06.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH
/dns4/net07.fluence.dev/tcp/19001/wss/p2p/12D3KooWBSdm6TkqnEFrgBuSkpVE3dR1kr6952DsWQRNwJZjFZBv
/dns4/net08.fluence.dev/tcp/19001/wss/p2p/12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H
/dns4/net09.fluence.dev/tcp/19001/wss/p2p/12D3KooWF7gjXhQ4LaKj6j7ntxsPpGk34psdQicN2KNfBi9bFKXg
/dns4/net10.fluence.dev/tcp/19001/wss/p2p/12D3KooWB9P1xmV3c7ZPpBemovbwCiRRTKd3Kq2jsVPQN4ZukDfy
```

which lists the available testnet peers. Pick one, update the node-id parameter, drop the node-addr parameter from your deployment command line, upload the new Ethqlite service and initiate it. Congrats, you are now the proud maker of a Fluence testnet Ethqlite service!

Now it is time to get block data into the database.

# Recap

# From Module To Service

In Fluence, a service is based on one or more [Wasm](https://webassembly.org/) modules suitable to be deployed to the Fluence Compute Engine \(FCE\). In order to develop our modules, we use Rust and the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk).

## Preliminaries

The general process to create a Fluence \(module\) project is to:

```bash
cargo +nightly new your_module_name
```

and add the [binary target](https://doc.rust-lang.org/cargo/reference/cargo-targets.html#binaries) and [Fluence Rust SDK](https://crates.io/crates/fce) to the Cargo.toml:

```text
<snip>

[[bin]] # <- binary target
name = "<your_module_name>"
path = "src/main.rs"

[dependencies]
fluence = { version = "=0.5.0", features = ["logger"] }
log = "0.4.14"
```

## Developing A Simple Wasm Module

Let's build a simple greeting module to verify our setup and quickly go through the steps we need to complete to build a simple service.

```bash
cargo +nightly new greeting
cd greeting
```

and update _main.rs_:

```rust
use fluence::fce;             // 1
use fluence::module_manifest; // 2

module_manifest!();           // 3

pub fn main() {}              // 4

#[fce]                        // 5
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```

Let's go line by line:

1. Import the [fce](https://github.com/fluencelabs/fce/tree/5effdcba7215cd378f138ab77f27016024720c0e) module from the [Fluence crate](https://crates.io/crates/fluence), which allows us to compile our code to the [wasm32-wasi](https://docs.rs/crate/wasi/0.6.0) target
2. Import the [module\_manifest](https://github.com/fluencelabs/rust-sdk/blob/master/crates/main/src/module_manifest.rs), which allows us to embed the SDK version in our module
3. Initiate the module\_manifest macro
4. Initiate the main function, which generally stays empty or is used to instantiate a logger \(see the sketch after this list\)
5. Mark up the public function we want to expose with the FCE macro which, among other things, checks that only Wasm IT types are used

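As an aside on point 4, here is a minimal sketch of a `main` that wires up logging, assuming the `WasmLoggerBuilder` exposed by the SDK's `logger` feature we enabled in the Cargo.toml; it is illustrative only and not required for the greeting module we build here:

```rust
use fluence::fce;
use fluence::module_manifest;
use fluence::WasmLoggerBuilder;

module_manifest!();

pub fn main() {
    // forward log::info!/log::warn! calls from module functions to the host
    WasmLoggerBuilder::new().build().unwrap();
}

#[fce]
pub fn greeting(name: String) -> String {
    log::info!("greeting called with name: {}", name);
    format!("Hi, {}", name)
}
```
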
Once we compile our code, we generate the wasm32-wasi file, which can be found in the `target/wasm32-wasi` directory of your project. The `greeting.wasm` file is what we need for testing and eventual upload to the peer-to-peer network.

To make things a little easier on us, let's create a build script, _build.sh_:

```bash
#!/bin/sh
# This script builds all sub-projects and puts our Wasm module(s) in a high-level dir

fce build --release                                        # 1

mkdir -p artifacts                                         # 2
rm artifacts/*
cp target/wasm32-wasi/release/greeting.wasm artifacts/     # 3
```

Our script executes the following steps in one handy executable:

1. Compile the FCE annotated Rust code to the wasm32-wasi target generating the wasm module we so very much desire
2. Make a higher-level artifacts directory to hold wasm file\(s\) in a more convenient location
3. Copy the wasm build to the artifacts directory

Before we can run the script we need to `chmod +x build.sh` to make the file executable. Now we can run it:

```bash
./build.sh
```

which starts the build and compilation of the project; eventually, you should see the `greeting.wasm` file in the artifacts directory.

```bash
ll artifacts
-rwxr-xr-x  1 bebo  staff    81K Mar 15 19:41 greeting.wasm
```

Before we can actually create a service from our module, one more file needs to be added to our project: the service configuration file. Service config files control the order in which the modules are instantiated, their permissions, maximum memory limits and some other parameters. In general, a service configuration file contains:

* modules\_dir -- the path to the directory with all the Wasm modules
* \[\[module\]\] -- a list of modules comprising the service
* name -- the \(file\) name of the corresponding Wasm file in the modules\_dir

For our greeting service, we add the following _Config.toml_ file:

```text
# Config.toml
modules_dir = "artifacts/"

[[module]]
name = "greeting"
```

## Taking The Greeting Module For A Spin

Now that we have a Wasm module and service configuration, we can explore and test our achievements locally with the Fluence REPL tool `fce-repl`. Load the service for inspection and testing:

```bash
fce-repl Config.toml

Welcome to the FCE REPL (version 0.5.2)
app service was created with service id = 10afa1aa-22e6-4c8a-b668-6be95d2d3530
elapsed time 54.290336ms

1> interface
Loaded modules interface:

greeting:
  fn greeting(name: String) -> String

2>
```

Using our service config file, we loaded the module and associated config info into the `fce-repl` tool and, with the `interface` command, we obtain a listing of module name\(s\) and associated interfaces, which we can then execute in the tool:

```bash
2> call greeting greeting ["Fluence"]
result: String("Hi, Fluence")
 elapsed time: 98.02µs

3>
```

The _interface_ command lists the available interfaces by module, i.e., the functions we designated as public and marked up with the _FCE_ macro in our source code. For more command info, use the _help_ command:

```text
1> help
Commands:

n/new [config_path]                       create a new service (current will be removed)
l/load <module_name> <module_path>        load a new Wasm module
u/unload <module_name>                    unload a Wasm module
c/call <module_name> <func_name> [args]   call function with given name from given module
i/interface                               print public interface of all loaded modules
e/envs <module_name>                      print environment variables of a module
f/fs <module_name>                        print filesystem state of a module
h/help                                    print this message
q/quit/Ctrl-C                             exit
```

The command we'll be using the most is the _call_ command, which executes module functions locally and is an effective way to test services, for example by calling a function with an incorrect argument type:

```bash
3> call greeting greeting [5]
call failed with: JsonArgumentsDeserializationError: error Error("invalid type: integer `5`, expected a string", line: 0, column: 0) occurred while deserialize output result to a json value

4> call greeting greeting ["5"]
result: String("Hi, 5")
 elapsed time: 61.505µs

5>
```

The interface `fn greeting(name: String) -> String` specifies a string input, so an integer input causes the call to fail. Looks like all is working as designed and expected.

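Because the FCE macro only admits Wasm IT types, such as numbers, strings, vectors and records, richer return values are typically modelled as records rather than arbitrary Rust types. As a purely illustrative sketch, not part of this project, a record-returning variant of the greeting function could look like this; the `GreetingResult` record and its fields are assumptions for illustration:

```rust
use fluence::fce;

// Hypothetical record; records let an exposed function return structured
// data while staying within the Wasm IT type set.
#[fce]
pub struct GreetingResult {
    pub msg: String,
    pub name_length: u32,
}

#[fce]
pub fn greeting_with_stats(name: String) -> GreetingResult {
    GreetingResult {
        msg: format!("Hi, {}", name),
        name_length: name.len() as u32,
    }
}
```
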
## Deploying The Greeting Module To A Local Node

Now that we are reasonably satisfied that our greeting service works, it is time to deploy it to the local network and test it with an AIR script. Before we do that, however, we need configuration files for each of the modules comprising our service. In our greeting service case, we only have one module and our configuration reads as follows:

```javascript
// greeting_cfg.json
{
    "name" : "greeting"
}
```

The configuration files are the per-module equivalents of the service configuration file we've seen earlier. They allow nodes to establish the meta-data and permission requirements, per module, before modules are linked to a service. The resulting configuration \(json\) object for a service over the underlying modules is called a _blueprint_:

```text
{
    "id": "uuid-1234-...",
    "name": "some name",
    "dependencies": [ "module_a", "module_b", "facade_module" ]
}
```

Back to our use case at hand: our config file only needs a name specifier and we are ready to deploy our service to the network or local development node. For detailed information with respect to running a local node, see the [tutorial](https://github.com/boneyard93501/docs/tree/575ff7b260d1014bdaf4d26e791f0ce8f2841d0d/src/tutorials/tutorial_run_local_node.md).

To run the local node:

```bash
# start the local dev node
docker run --rm --name fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 18080 fluencelabs/fluence
```

and pull the node id, _12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17_ in this case, from the log:

```bash
docker run --rm --name my_fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 18080 fluencelabs/fluence:latest
[2021-03-16T21:01:01.347081Z INFO particle_node]
    +-------------------------------------------------+
    | Hello from the Fluence Team. If you encounter   |
    | any troubles with node operation, please update |
    | the node via                                    |
    |     docker pull fluencelabs/fluence:latest      |
    |                                                 |
    | or contact us at                                |
    | github.com/fluencelabs/fluence/discussions      |
    +-------------------------------------------------+

[2021-03-16T21:01:01.347925Z INFO server_config::fluence_config] Loading config from "/.fluence/Config.toml"
[2021-03-16T21:01:01.348061Z INFO server_config::keys] generating a new key pair
[2021-03-16T21:01:01.348410Z WARN server_config::defaults] New management key generated. private in base64 = SDB6bW/9Vwwy8KvLONkqPwPzaRnb51MzoNkm18fJ790=; peer_id = 12D3KooWCArczSKMzpnyfxKradjE25NEzcfQghkKrtDNuPbsvSU9
[2021-03-16T21:01:01.348455Z INFO particle_node] AIR interpreter: "./aquamarine_0.7.5.wasm"
[2021-03-16T21:01:01.348608Z INFO particle_node::config::certificates] storing new certificate for the key pair
[2021-03-16T21:01:01.348862Z INFO particle_node] public key = FbBMwyYsRvutVSaPNhLYzUyghzHZFewXvmE7SdowNPHB
[2021-03-16T21:01:01.350296Z INFO particle_node::node] server peer id = 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
[2021-03-16T21:01:01.353939Z INFO particle_node::node] Fluence listening on ["/ip4/0.0.0.0/tcp/7777", "/ip4/0.0.0.0/tcp/9999/ws"]
[2021-03-16T21:01:01.356075Z INFO particle_node] Fluence has been successfully started.
[2021-03-16T21:01:01.356098Z INFO particle_node] Waiting for Ctrl-C to exit...
[2021-03-16T21:01:01.358364Z INFO tide::server] Server listening on http://0.0.0.0:18080
[2021-03-16T21:01:02.067989Z INFO particle_node::network_api] Connected bootstrap 12D3KooWB9P1xmV3c7ZPpBemovbwCiRRTKd3Kq2jsVPQN4ZukDfy @ [/dns4/net10.fluence.dev/tcp/7001]
[2021-03-16T21:01:02.068067Z INFO particle_node::network_api] Connected bootstrap 12D3KooWEXNUbCXooUwHrHBbrmjsrpHXoEphPwbjQXEGyzbqKnE9 @ [/dns4/net01.fluence.dev/tcp/7001]
<snip>
```

Now we are in a position to deploy our service using the `fldist` tool to the local node. In your project directory, run:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 new_service --ms artifacts/greeting.wasm:greeting_cfg.json -n MyGreeting
```

And if all went well, you should see output similar to:

```text
client seed: 3XUwhqLs7yLHqwE4xnh2C7LitvmT3dFq6Tj1shSRWw1A
client peerId: 12D3KooWH2tx7ywW8nvZuGztJMFHhFh16g9fR63BkEQS6QYbG95o
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
uploading blueprint MyGreeting to node 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 via client 12D3KooWH2tx7ywW8nvZuGztJMFHhFh16g9fR63BkEQS6QYbG95o
service id: 9712f9ca-7dfd-4ff5-817d-aef9e1e92e03
service created successfully
```

Which not only confirms the success of our operation but also gives us the _service id_, _9712f9ca-7dfd-4ff5-817d-aef9e1e92e03_ in this case. We can further check the success of our operation by listing the installed modules on our local node with `fldist get_modules`:

```text
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 get_modules
client seed: AgZjbuMvZmCWbqZBABXXtv3cjGTqYFfiVj7aqg8dm2fA
client peerId: 12D3KooWFhUMisVC2VtXAertXt5oQQ7Xj1qppFZRM4mvEQ1iUaBP
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
[{"config":{"logger_enabled":true,"logging_mask":null,"mem_pages_count":100,"mounted_binaries":null,"wasi":{"envs":null,"mapped_dirs":null,"preopened_files":[]}},"hash":"c8aec6cbbc0a9632bf532b9553092ae6f66d2e3a5f71e11d1fe65e423c2204e2","name":"greeting"},{"config":{"logger_enabled":true,"logging_mask":null,"mem_pages_count":100,"mounted_binaries":null,"wasi":{"envs":null,"mapped_dirs":null,"preopened_files":[]}},"hash":"915d7487d4ae99f6136a7fe053c4ebd52cde1650c47492a315287117cedd0d3a","name":"greeting"}]
```

Which confirms our recent upload!!

Now that we have a service on our local node, we need to construct our AIR script to build our frontend.

```text
(xor
    (seq
        (call relay (service "greeting") [name] result)
        (call %init_peer_id% (returnService "run") [result])
    )
    (call %init_peer_id% (returnService "run") [%last_error%])
)
```

As we've seen in the Quick Start section, we call the service _"greeting"_ with service id _service_ and the method parameter _name_. As usual, we use the `fldist` tool to execute the AIR script:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p greeting.clj -d '{"service":"9712f9ca-7dfd-4ff5-817d-aef9e1e92e03", "name": "Fluence"}'
```

Giving us the expected response:

```bash
client seed: EV3bFK7mnqk58HrssTfCdXeYSzrVeiTzxWmh2B7k2g6R
client peerId: 12D3KooWLYtUhCj392W8XMhToCVrrsjowVdLirBzNHkEqCDmpe17
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
Particle id: 3dbbdfa6-7401-438d-89b9-53413b0022e4. Waiting for results... Press Ctrl+C to stop the script.
===================
[
  "Hi, Fluence"
]
[
  [
    {
      peer_pk: '12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17',
      service_id: '9712f9ca-7dfd-4ff5-817d-aef9e1e92e03',
      function_name: 'greeting',
      json_path: ''
    }
  ]
]
===================
```

And that's a wrap.

## Summary

In this section we worked through the various requisites and requirements to develop modules and services. To recap:

1. Create a Rust bin project and update the Cargo.toml to reflect our binary target
2. Mark public module functions with the _FCE_ macro
3. Build and compile the project with the `fce` tool
4. Create a service config toml file to specify wasm file location, included modules, module permissions and more
5. Use `fce-repl` to inspect and test modules and services
6. Create a deployment json config file for each module for service deployment
7. Deploy a service with the `fldist` tool
8. Execute the service with an AIR script from the `fldist` command-line tool