Mirror of https://github.com/fluencelabs/gitbook-docs (synced 2025-04-24 23:42:15 +00:00)

Commit 90e84a9772: Merge branch 'main' into docs
@ -12,7 +12,7 @@ Additional resources and support are available:
* [Youtube](https://www.youtube.com/channel/UC3b5eFyKRFlEMwSJ1BTjpbw)
* [Github](https://github.com/fluencelabs)
* [Discord](https://discord.gg/whbNmxD)
* [Telegram](https://t.me/fluence_project)
* [Twitter](https://twitter.com/fluence_project)
@ -244,4 +244,6 @@ Fluence JS SDK gives options to register own handlers for aqua vm service calls
## References

- For the list of compiler options see: https://github.com/fluencelabs/aqua
- Repository with additional examples: https://github.com/fluencelabs/aqua-playground

# Building A Frontend with JS SDK
@ -1,6 +1,8 @@
# Overview

In the Quick Start section we incrementally created a distributed, database-backed request processing application using existing services with Aquamarine. Of course, we left a lot of detail uncovered, including where the services we used came from in the first place. In this section, we tackle the development and deployment of service components.

Before we proceed, please make sure your Fluence environment is [setup](../recipes_recipes/recipes_setting_up.md) and ready to go. Moreover, we are going to run our own Fluence node to test our services in a network environment. Please refer to the [Running a Local Fluence Node](../tutorials_tutorials/tutorial_run_local_node.md) tutorial if you need support.
@ -1,6 +1,6 @@
# Building The Reward Block Application

Our project aims to:

* retrieve the latest block height from the Ethereum mainnet,
* use that result to retrieve the associated reward block data and
@ -2,7 +2,8 @@
In the previous sections we obtained block reward data by discovering the latest Ethereum block created. Of course, Ethereum produces a new block about every 13 seconds or so, and it would be nice to automate the data acquisition process. One way, of course, would be to cron or otherwise daemonize our frontend application. But where's the fun in that? We'd rather hand the task to the p2p network.

As we have seen in our AIR workflows, particles travel the path, trigger execution, and update their data. So far, we have only seen services consume previous outputs as \(complete\) inputs, which means that the service at workflow sequence s needs to be fairly tightly coupled to the service at sequence s-1, which is less than ideal. Luckily, Fluence provides a solution to access certain types of results as _json paths_.
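For illustration, the sketch below shows the idea; the service, node, and variable names are hypothetical, and the exact json-path syntax may vary across AIR versions. The `$` operator indexes into the json representation of a struct result, and `!` flattens the selected value:

```text
(seq
    (call sqlite_node (sqlite_service "get_reward_block") [block_number] blockdata)
    (call sqlite_node (sqlite_service "get_miner_rewards") [blockdata.$.["block_miner"]!] rewards)
)
```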
## Peer-Based Script Storage And Execution
@ -120,7 +121,6 @@ In order to upload the periodic "block to db poll", we can use parts of the _eth
```

```bash
# script file to string variable
AIR=`cat air-scripts/ethqlite_block_committer.clj`
@ -187,6 +187,7 @@ And we are golden. Give it some time and start checking Ethqlite for latest bloc
Unfortunately, our daemonized service won't work just yet, as the current implementation cannot take the \(client\) seed we need in order to get our SQLite write working. It's on the to-do list, but if you need it, please contact us and we'll see about juggling priorities.
{% endhint %}

For completeness' sake, let's remove the stored service with the following AIR script:

```bash
@ -196,7 +197,8 @@ For completeness sake, let's remove the stored service with the following AIR sc
## Advanced Service Output Access

As Aquamarine advances a particle's journey through the network, the output from a service method at workflow sequence s-1 tends to be the input for a service method at sequence s. For example, the _hex\_to\_int_ method, as used earlier, takes the output from the _get\_latest\_block_ method. With single-parameter outputs, this is a pretty straightforward and inherently decoupled dependency relation. However, when result parameters become more complex, such as structs, we still would like to keep services as decoupled as possible.

Fluence provides this capability by facilitating the conversion of \(Rust\) struct returns into [json values](https://github.com/fluencelabs/aquamarine/blob/master/interpreter-lib/src/execution/boxed_value/jvaluable.rs#L30). This allows json-style key-value access to a desired subset of return values. If you go back to the _ethqlite.clj_ script, you may notice some fancy `$` and `!` operators tucked away in the deep recesses of parenthesis stacking. Below is the pertinent snippet:
@ -245,7 +247,8 @@ pub struct RewardBlock {
and the input expectations of _get\_miner\_rewards_, also an ethqlite service method, with the following [function](https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L177) signature: `pub fn get_miner_rewards(miner_address: String) -> MinerRewards`.

Basically, _get\_miner\_rewards_ wants an Ethereum address as a `String`, and in the context of our AIR script we want to get the value from the _get\_reward\_block_ result. Rather than tightly coupling _get\_miner\_rewards_ to _get\_reward\_block_ in terms of, say, the _RewardBlock_ input parameter, we take advantage of the Fluence capability to turn structs into json strings and then supply the relevant key to extract the desired value. Specifically, we use the `$` operator to access the json representation at the desired index and the `!` operator to flatten the value, if desired.

For example,
@ -104,7 +104,8 @@ modules_dir = "artifacts/"
name = "block_getter"
```

If you haven't done so already, run `./scripts/build.sh` to compile the projects. Once we have _wasm_ files and the service configuration, we can check out our accomplishments with the REPL:

```bash
fce-repl Block-Getter-Config.toml
@ -341,5 +342,7 @@ Particle id: 930ea13f-1474-4501-862a-ca5fad22ee42. Waiting for results... Press
|
||||
===================
|
||||
```
|
||||
|
||||
Right on! Our two services coordinate into the intended application returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [ dashboard](https://dash.fluence.dev/), update your command-line with the appropriate service and node ids and run the same AIR script. Congratulations, you just run an app coordinated by distributed services!
|
||||
|
||||
Right on! Our two services coordinate into the intended application returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [ dashboard](https://dash.fluence.dev/), update your command-line with the appropriate service and node ids and run the same AIR script. Congratulations, you just run an app coordinated by distributed services!
|
||||
|
||||
|
||||
|
@ -103,10 +103,10 @@ The script extends our previous incarnation by adding only one more method: `upd
"node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", \
"sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}'
```

and run the AIR script with the revised `fldist` command:

```bash
@ -240,7 +240,7 @@ Particle id: 5ce2dcf0-2d4d-40ec-8cef-d5a0cea4f0e7. Waiting for results... Press
===================
```

And that's a wrap!

In summary, we have developed and deployed multiple Fluence services to store Ethereum reward block data in a SQLite-as-a-service database and used Aquamarine to coordinate those services into applications. See Figure 2 below.
@ -17,7 +17,7 @@ which [happens about every 13 seconds or so on mainnet](https://etherscan.io/cha
To get SQLite as a service, we build our service from two modules: the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) and the [Fluence sqlite](https://github.com/fluencelabs/sqlite) Wasm module, which we can build or pick up as a wasm file from the [releases](https://github.com/fluencelabs/sqlite/releases). This largely, but not entirely, mirrors what we did with the cUrl service: build the service by providing an adapter to the binary. Unlike the cUrl binary, we are bringing our own sqlite binary, i.e., _sqlite3.wasm_, with us.

This leaves us to code our _ethqlite_ module with respect to the desired CRUD interfaces and security. As [previously](../../quick_start/quick_start_add_persistence/quick_start_persistence_setup.md) discussed, we want writes to the sqlite services to be privileged, which implies that we need to own the service and have the client seed to manage authentication and ambient authorization. Specifically, we can implement a rudimentary authorization system where authentication implies authorization \(to write\). The `is_owner` function in the _ethqlite_ repo does exactly that: if the caller can prove ownership by providing a valid client seed, then we have a true condition equating write-privileged ownership with the caller identity:

```rust
// auth.rs
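// A minimal sketch of the check described above, not the verbatim repo code
// (assumption): the caller proves ownership when the particle's init peer id,
// derived from the client seed, matches the service creator's peer id.
pub fn is_owner() -> bool {
    let meta = fluence::get_call_parameters();
    meta.init_peer_id == meta.service_creator_peer_id
}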
@ -83,7 +83,8 @@ wget https://github.com/fluencelabs/sqlite/releases/download/v0.10.0_w/sqlite3.w
mv sqlite3.wasm artifacts/
```

Run `./build.sh` and check the artifacts directory for the expected wasm files.

Like all Fluence services, Ethqlite needs a [service configuration](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/Config.toml) file, which looks a little more involved than what we have seen so far.
@ -111,7 +112,8 @@ name = "ethqlite"
mapped_dirs = { "tmp" = "/tmp" }
```

Let's break it down:

* the first \[\[module\]\] section
  * specifies the _sqlite3.wasm_ module we pulled from the repo,
@ -369,7 +371,8 @@ Particle id: 2fb4a366-6f40-46c1-9329-d77c6d03dfad. Waiting for results... Press
===================
```

If you run the init script again, you will receive an error _"Service already initiated"_, so we can be reasonably confident our code is working, and it looks like our Ethqlite service is up and running on the local node.

Due to the security concerns for our database, it is not advisable, or even possible, to use an already deployed Sqlite service from the Fluence Dashboard. Instead, we deploy our own instance with our own \(secret\) client seed. To determine which network nodes are available, run:
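Given the `fldist` tooling used throughout these tutorials, listing the available network nodes presumably looks like the following (command assumed):

```bash
fldist env
```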
@ -1,6 +1,7 @@
# From Module To Service

In Fluence, a service is based on one or more [Wasm](https://webassembly.org/) modules suitable to be deployed to the Fluence Compute Engine \(FCE\). In order to develop our modules, we use Rust and the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk).

## Preliminaries
@ -10,7 +11,8 @@ The general process to create a Fluence \(module\) project is to:
cargo +nightly new your_module_name
```

and add the [binary target](https://doc.rust-lang.org/cargo/reference/cargo-targets.html#binaries) and [Fluence Rust SDK](https://crates.io/crates/fce) to the Cargo.toml:

```text
<snip>
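# A sketch of the additions described above (assumption: the binary target
# name, path, and SDK version are illustrative, not the original file):
[[bin]]
name = "main"
path = "src/main.rs"

[dependencies]
fluence = "0.2"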
@ -49,16 +51,12 @@ pub fn greeting(name: String) -> String {
}
```

Let's go line by line:

1. Import the [fce](https://github.com/fluencelabs/fce/tree/5effdcba7215cd378f138ab77f27016024720c0e) module from the [Fluence crate](https://crates.io/crates/fluence), which allows us to compile our code to the [wasm32-wasi](https://docs.rs/crate/wasi/0.6.0) target
2. Import the [module\_manifest](https://github.com/fluencelabs/rust-sdk/blob/master/crates/main/src/module_manifest.rs), which allows us to embed the SDK version in our module
3. Initiate the module\_manifest macro
4. Initiate the main function, which generally stays empty or is used to instantiate a logger
5. Mark up the public function we want to expose with the FCE macro which, among other things, checks that only Wasm IT types are used
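Putting those five steps together, a minimal sketch of the module might look like this (the greeting body itself is illustrative):

```rust
use fluence::fce;             // 1. FCE macro; enables the wasm32-wasi target
use fluence::module_manifest; // 2. manifest macro

module_manifest!();           // 3. embed the SDK version in the module

pub fn main() {}              // 4. empty main, or the place to instantiate a logger

#[fce]                        // 5. expose the function; Wasm IT types only
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}
```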
Once we compile our code, we generate the wasm32-wasi file, which can be found in the `target/wasm32-wasi` path of your directory. The `greeting.wasm` file is what we need for testing and eventual upload to the peer-to-peer network.
@ -113,6 +111,7 @@ modules_dir = "artifacts/"
The source code for the module can be found in the [examples repo](https://github.com/fluencelabs/examples/tree/main/greeting).

## Taking The Greeting Module For A Spin

Now that we have a Wasm module and service configuration, we can explore and test our achievements locally with the Fluence REPL tool `fce-repl`. Load the service for inspection and testing:
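Following the invocation pattern shown earlier with _Block-Getter-Config.toml_, and assuming the greeting service configuration is named _Config.toml_, that would be:

```bash
fce-repl Config.toml
```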
@ -264,9 +263,9 @@ relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
```

Which confirms our recent upload!!

Now that we have a service on our local node, we need to construct our AIR script to build our frontend.

```text
(xor
@ -278,7 +277,8 @@ Now that we have a service on our local node, we need to construct our AIR scrip
)
```

As we've seen in the Quick Start section, we call the service _"greeting"_ with service id _service_ and the method parameter _name_. As usual, we use the `fldist` tool to execute the AIR script:

```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p greeting.clj -d '{"service":"9712f9ca-7dfd-4ff5-817d-aef9e1e92e03", "name": "Fluence"}'
|
@ -6,12 +6,13 @@ At the core of Aquamarine is the design ideal and idea to pair concurrent system
|
||||
|
||||
## Background
|
||||
|
||||
When we build systems, we need to be able to model, specify, analyze and verify them and this is especially important to concurrent systems such as parallel and multi-threaded systems. [Formal specifications](https://en.wikipedia.org/wiki/Formal_specification) are a family of formal approaches to design, model, and verify system. In the context of concurrent systems, there are two distinct formal specification techniques available. The state oriented approach is concerned with modeling verifying a systems state and state transitions and is often accomplished with [TLA+](https://en.wikipedia.org/wiki/TLA%2B). Modern blockchain design, modeling, and verification tend to rely on a state-based specification.
|
||||
When we build systems, we need to be able to model, specify, analyze and verify them and this is especially important to concurrent systems such as parallel and multi-threaded systems. [Formal specification](https://en.wikipedia.org/wiki/Formal_specification) are a family of formal approaches to design, model, and verify system. In the context of concurrent systems, there are two distinct formal specification techniques available. The state oriented approach is concerned with modeling verifying a systems state and state transitions and is often accomplished with [TLA+](https://en.wikipedia.org/wiki/TLA%2B). Modern blockchain design, modeling, and verification tend to rely on a state-based specification.
|
||||
|
||||
An alternative, complementary approach is based on [Process calculus](https://en.wikipedia.org/wiki/Process_calculus) to model and verify the sequence of communications operations of a system at any given time. [π-Calculs](https://en.wikipedia.org/wiki/%CE%A0-calculus) is a modern process calculus employed in a wide range of applications ranging from biology to games and business processes.
|
||||
|
||||
Aquamarine, Fluence's distributed composition language and runtime, is based on π-calculus and provides a solid theoretical basis toward the design, modeling, implementation, and verification of a wide class of distributed, peer-to-peer networks, applications and backends.
|
||||
|
||||
|
||||
## Language
|
||||
|
||||
[Aquamarine Intermediate Representation](https://github.com/boneyard93501/docs/tree/a512080f81137fb575a5b96d3f3e83fa3044fd1c/src/knowledge-base/knowledge_aquamarine__air.md) \(AIR\) is a low-level language modeled after the [WebAssembly text format](https://developer.mozilla.org/en-US/docs/WebAssembly/Understanding_the_text_format) and allows developers to manage network peers as well as services and backends. AIR, while intended as a compile target, is currently the only Aquamarine language implementation although a high level language \(HLL\) is currently under active development.
|
||||
|
@ -1,10 +1,6 @@
# HLL

## Aquamarine High Level Language

Since parenthesis management is a bit of a downer, we are very soon providing a high-level language with AIR as the compile target. Aquamarine users, rejoice!

_**Stay Tuned -- Coming Soon To A Repo Near You**_
|
@ -53,3 +53,4 @@ This instruction is intended for organizing branches in the flow of execution as
|
||||
|
||||
This is an empty instruction: it takes no arguments and does nothing. The _**null**_ instruction is useful for generating code.
|
||||
|
||||
|
||||
|
@ -24,12 +24,11 @@ h/help print this message
q/quit/Ctrl-C exit
```

## Fluence Proto Distributor: FLDIST

[`fldist`](https://github.com/fluencelabs/proto-distributor) is a command line interface \(CLI\) to Fluence peers that allows for the lifecycle management of services and offers the fastest and most effective route to service deployment.

```text
mbp16~(:|✔) % fldist --help
Usage: fldist <cmd> [options]
@ -14,10 +14,12 @@ Each Fluence peer is equipped with a set of "built-in" services that can be call
Please note that the [`fldist`](../knowledge_tools.md#fluence-proto-distributor-fldist) CLI tool, as well as the [JS SDK](../knowledge_tools.md#fluence-js-sdk), provide access to node-based services.

## API

### peer is\_connected

Checks if there is a direct connection to the peer identified by a given PeerId

* **Arguments**:
@ -30,8 +32,6 @@ Example of a service call:
(call node ("peer" "is_connected") ["123D..."] ok)
```

### peer connect

Initiates a connection to the specified peer

* **Arguments**
@ -89,6 +89,7 @@ Example of service call:
### peer timestamp\_ms

Get Unix timestamp in milliseconds

* **Arguments**: None
@ -102,6 +103,7 @@ Example of service call:
### peer timestamp\_sec

Get Unix timestamp in seconds

* **Arguments**: None
@ -365,6 +367,7 @@ Example of service call:
Used in service aliasing. Stores the specified service provider \(provider\) in the internal storage of the node indicated in the service call and associates it with the given key \(key\). After executing add\_provider, the provider can be accessed via the get\_providers service using this key.

* Arguments:
  * key – a string; usually, it is a human-readable service alias.
@ -391,6 +394,7 @@ Used in service aliasing to retrieve providers for a given key.
* Returns: an array of objects of the following structure:

```javascript
{
    "peer": "123D...", // required field
    "service_id": "uuid-1234-..." // optional field
@ -400,6 +404,7 @@ Used in service aliasing to retrieve providers for a given key.
Example of service call:

```scheme
(call node ("deprecated" "get_providers") [key] providers)
```
@ -4,7 +4,7 @@ Building and operating distributed networks, backends and applications are non-t
Consider a workflow tasked with calling multiple REST endpoints, in sequence, where the response of the previous call is the input to the current call. As illustrated in Figure 1, the application is the focal point and data relay.

In the Fluence peer-to-peer solution, an application is not a workflow intermediary but merely the initiator of a workflow, as workflow logic and data traverse the network from service to service. See Figure 2 for an illustration, and please note that services may be deployed to different nodes as well as to more than one node.
@ -6,5 +6,6 @@ Without diving too deep into the Fluence security framework, you should be aware
For the purposes of this tutorial, there is a caveat you need to keep in mind: every reader of this document inevitably ends up using the same sample service with the same ownership control. In the highly, highly unlikely event you're getting funky results, it's most likely due to someone else doing the very same tutorial at the very same time. [Jinx](https://en.wikipedia.org/wiki/Jinx_%28game%29)! Buy me a Coke, drink the Coke, slowly, try again and you should be fine.

The next sections explore both the setup and the use of our database: Sqlite as a Service.
@ -106,6 +106,7 @@ The new service components called are:
* _get\_reward\_block_, which takes a miner address and in this case the one produced by `get_block`, and finally
* _get\_miner\_rewards_, which returns a list of miner rewards for a particular miner address; in this case, the one provided by the `get_reward_block` result. Note the `$` operator to access the `block_miner` field in the return struct and the `!` operator to flatten the response

From the previous section we know that

* service\_1: 74d5c5da-4c83-4af9-9371-2ab5d31f8019 , node\_1: 12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H
@ -1,6 +1,6 @@
# Building An Application From Multiple Services

In this section, we compose multiple services into an application to catalog block miner addresses and block rewards for the latest block created on the [Ethereum](https://ethereum.org/en/) mainnet. This block reward data is useful to track miner and pool dominance as well as ETH supply and related indexes. For convenience purposes, we use the [Etherscan API](https://etherscan.io/apis) for this portion of the tutorial, and in order to proceed, you should have an Etherscan API Key or get one from [Etherscan](https://etherscan.io/apis).

Since we are composing our application from first principles, we use the following services to compose our app:
@ -53,7 +53,7 @@ _call_ is the _execution_ instruction to launch distributed service methods and
**\(**_call_ **node-id \(service-id service-method\) \[input parameters\] result\)**
{% endhint %}

As with the previous AIR script, the _xor_ takes care of capturing errors in case things don't pan out the way we've planned. Other than that, we are calling the `hex_to_int` method, and we need to supply the service and node ids as well as the hex value. Save the above script to a local file called _hex2int.clj_ and use `fldist` to deploy the script:

```bash
fldist run_air -p hex2int.clj -d '{"hex_service":"285e2a5e-e505-475f-a99d-15c16c7253f9", "hex_node": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "hex_string":"0xF"}'
@ -85,7 +85,7 @@ Particle id: c0f44da7-3bfb-445b-896a-537c10143392. Waiting for results... Press
We input the hex string 0xF and, as expected, got 15, radix 10, back. Whoever implemented the hex conversion service seemingly got it right. So let's keep using it as we coordinate an application from multiple services.

Beware but do not fear the nesting and parenthesis!! As we're building a more complex application, our script of course grows a bit. Next, we use the _get\_latest\_block_ function and feed the result, a hex string, into the _hex\_to\_int_ conversion function and feed its output, an integer, to the _get\_block_ function to arrive at the reward block data. Of course, we wrap it all into the trusty XOR just in case something goes wrong.

```text
(xor
@ -132,7 +132,7 @@ Beware but do not fear the nesting and parenthesis!! As we're building a more co
Before we deploy the script, notice that we made explicit provisions for the service and node id associated with each method and used the output, i.e., result, as input parameters for subsequent method calls. This further illustrates how Aquamarine allows developers to efficiently write applications from distributed network services.

Save the script locally to a file named _block\_getter.clj_ and deploy it with `fldist`:

```bash
fldist run_air -p block_getter.clj -d '{"service_1":"74d5c5da-4c83-4af9-9371-2ab5d31f8019", "service_2":"285e2a5e-e505-475f-a99d-15c16c7253f9", "node_1": "12D3KooWGzNvhSDsgFoHwpWHAyPf1kcTYCGeRBPfznL8J6qdyu2H","node_2": "12D3KooWKnRcsTpYx9axkJ6d69LPfpPXrkVLe96skuPTAo76LLVH", "api_key":"<your api key>"}'
@ -194,7 +194,7 @@ Particle id: 50f54bad-03f3-41ba-9950-9f18b47fbdee. Waiting for results... Press
Very cool. Our coordinated service flow generates the expected latest block hex string, which serves as an input to the hex conversion, and the resulting integer value is used as an input in the get\_block method, which returns the associated reward block information. Just as planned. Beautiful.

Of course, that leaves us wanting, as our goal was to get the reward miner address. Not to worry, we incorporate the missing _extract\_miner\_address_ service call into our AIR script:

```text
(xor
@ -1,6 +1,6 @@
# Using a Service

Let's dive right into peer-to-peer awesomeness by harnessing a distributed curl service, which pretty much keeps with its namesake: pass it a url and collect the response. Instead of developing our service from scratch, we reuse one already deployed to the [Fluence testnet](https://dash.fluence.dev/nodes).

The [Fluence Dashboard](https://dash.fluence.dev/) facilitates the discovery of available services, such as the [Curl Adapter](https://dash.fluence.dev/blueprint/b7d2454e-2a75-408c-a23a-fe35de3beeb9) service, which allows us to harness http\(s\) requests as a service. Drilling down on the metadata provides a few useful parameters such as _service id_, _node id_ and _ip address_, which we need to execute our distributed curl service.
@ -29,13 +29,13 @@ The "magic" happens by handing the script to the `fldist` CLI tool, which then s
Throughout the document, we utilize service and node ids, which in most cases may be different for you.
{% endhint %}

With the service id parameter obtained from the dashboard lookup above, e.g., `"f92ce98b-1ed6-4ce3-9864-11f4e93a478f"`, and some Fluence goodness at both the local and remote levels, we can:

1. find the p2p node hosting the curl service with the above service id,
2. execute the service and
3. collect the response

In your directory of choice, save the above script as `curl_request.clj` and run:

```bash
$ fldist run_air -p curl_request.clj -d '{"service_id": "f92ce98b-1ed6-4ce3-9864-11f4e93a478f", "url":"https://api.duckduckgo.com/?q=homotopy&format=json"}'
@ -82,7 +82,7 @@ To recap, we:
* executed the \(remote\) curl service request, and
* collected the result

With essentially a two-line script and a couple of parameters, we executed a search request as a service on a peer-to-peer network. Even this small example should impress the ease afforded by Aquamarine to compose applications from portable, reusable and distributed services, not only taking serverless to the next level by greatly reducing devops requirements but also empowering developers with a composition and coordination medium second to none.

In the next section, we build an Ethereum block getter application by coordinating multiple services into an application.
@ -1,5 +1,6 @@
|
||||
# cUrl as a Service
|
||||
|
||||
|
||||
## Overview
|
||||
|
||||
[Curl](https://curl.se/) is a widely available and used command-line tool to receive or send data using URL syntax. Chances are, you probably just used it when you set up your Fluence development environment. For Fluence services to be able to interact with the world, cUrl is one option to facilitate https calls. Since Fluence modules are Wasm IT modules, cUrl cannot not be a service intrinsic. Instead, the curl command-line tool needs to be made available and accessible at the node level. And for Fluence services to be able to interact with Curl, we need to code a cUrl adapter taking care of the mounted \(cUrl\) binary.
|
||||
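The shape of such an adapter is sketched below. This is an assumption-laden illustration, not the repo's verbatim code: the function and import names are hypothetical, and the adapter simply forwards the url to the host-mounted curl binary declared in the service configuration:

```rust
use fluence::fce;

// Exposed service method: forwards the url to the mounted binary.
#[fce]
pub fn request(url: String) -> String {
    unsafe { curl(url) }
}

// Import of the host-mounted curl binary (hypothetical names).
#[fce]
#[link(wasm_import_module = "host")]
extern "C" {
    fn curl(url: String) -> String;
}
```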
@ -47,6 +48,7 @@ We are basically linking the [external](https://doc.rust-lang.org/std/keyword.ex
* [Mounted binaries](https://github.com/fluencelabs/fce/blob/c559f3f2266b924398c203a45863ebf2fb9252ec/fluence-faas/src/host_imports/mounted_binaries.rs)
* [cUrl](https://github.com/curl/curl)

### Service Construction

In order to create a valid Fluence service, a service configuration is required.
@ -4,4 +4,3 @@
* [Fluence Protocol](https://github.com/fluencelabs/rfcs/blob/main/0-overview.md)
@ -1,2 +1 @@
# Tutorials