Substrate client
This document explains the structure of the Substrate client and its functionality using a basic example.
The code examples are sourced from the repository:
https://github.com/paritytech/polkadot-sdk (branch: release-polkadot-v1.7.0)
General
Substrate Docs: Client outer node services
Polkadot-SDK Docs
Substrate client directory structure
- dev-project
  - ...
  - ...
  - node
    - src
      - benchmarking.rs
      - chain_spec.rs
      - cli.rs
      - command.rs
      - main.rs
      - rpc.rs
      - service.rs
    - build.rs
    - Cargo.toml
  - src
    - ...
    - ...
Substrate client file structure
- Cargo.toml: The Cargo.toml file in a Rust project is the manifest for Cargo, Rust's package manager and build system. It defines dependencies, package information, and build configuration. In a Substrate client, node/Cargo.toml provides the information and configuration needed to compile and manage the client (node) component of the blockchain.
- build.rs: A build script that Cargo executes before the project itself is compiled. Build scripts perform tasks such as code generation, compilation of non-Rust dependencies, or automatic generation of resources. Here, build.rs uses helper functions from the substrate_build_script_utils library to prepare the build of a Substrate-based project.
- benchmarking.rs: Contains the code for runtime performance benchmarks. These benchmarks help measure and optimize the performance of different parts of the blockchain under simulated conditions.
- chain_spec.rs: Defines the specification of the chain, including the genesis configuration, network protocol settings, and other parameters that determine the initial state and behavior of the blockchain.
- cli.rs: Defines the command-line interface (CLI) arguments and options. It lets users configure and operate the node from the command line, for example to start a validator or connect to a specific network.
- command.rs: Implements the logic that executes the CLI commands defined in cli.rs. It orchestrates how the node is started and run based on the commands and options specified by the user.
- main.rs: The entry point of the node project. It reads the CLI arguments and starts the blockchain node with the specified configuration and parameters.
- rpc.rs: Defines Remote Procedure Call (RPC) interfaces and methods. RPCs allow external users and services to interact with the blockchain, for example to send transactions or retrieve block information.
- service.rs: Implements the node service logic that configures and manages the node's components (such as the consensus mechanism, P2P networking, and block processing). It sets up the functional environment in which the blockchain runs.
Explore the code structure
The Cargo.toml file is not analyzed further, as it is intuitively understandable. Cargo.toml example
It is important to note that the following section is a code example and that the client may look different in another environment.
The substrate client (in this case basic example) is defined in the following files:
build.rs example
benchmarking.rs example
chain_spec.rs example
cli.rs example
command.rs example
main.rs example
rpc.rs example
service.rs example
build.rs
The build script uses helper functions from the substrate_build_script_utils library to automate two main tasks during the build process of a Substrate node:
use substrate_build_script_utils::{generate_cargo_keys, rerun_if_git_head_changed};
fn main() {
generate_cargo_keys();
rerun_if_git_head_changed();
}
- generate_cargo_keys(): Generates Cargo keys from environment variables that are needed to compile and package the node. Cargo keys are environment variables used during the build to pass configuration, such as the project version, paths to important resources, or configuration options. Calling this function in build.rs automates the key generation so that the keys are produced consistently and correctly for every build.
- rerun_if_git_head_changed(): Instructs Cargo to re-run the build script if the HEAD commit of the Git repository has changed since the last build. This ensures that code changes made since the last build are detected and taken into account, preventing stale or inconsistent code from ending up in the final node.
The build.rs script thus cleanly integrates environment variables into the build process and ensures that changes to the codebase are reflected in every build. By using substrate_build_script_utils, it abstracts away some of the repetitive preparation tasks and helps ensure an efficient, error-free compilation.
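Conceptually, a Cargo build script communicates with Cargo by printing specially formatted directive lines to stdout. The following stdlib-only sketch shows what functions like generate_cargo_keys() and rerun_if_git_head_changed() do under the hood; the key name MY_IMPL_VERSION and its value are hypothetical, and the real helpers derive their values from git and Cargo metadata.

```rust
// Hypothetical helper producing the directive lines a build script emits.
// `cargo:rustc-env=...` sets an env var readable in the crate via env!();
// `cargo:rerun-if-changed=...` makes Cargo re-run the script when a file changes.
fn cargo_directives(impl_version: &str) -> Vec<String> {
    vec![
        format!("cargo:rustc-env=MY_IMPL_VERSION={}", impl_version),
        // Simplified: watch the git ref file so the script re-runs when HEAD moves.
        "cargo:rerun-if-changed=.git/HEAD".to_string(),
    ]
}

fn main() {
    // In a real build.rs, printing these lines is how the script talks to Cargo.
    for line in cargo_directives("1.7.0-unknown") {
        println!("{}", line);
    }
}
```

The crate can then read the generated key at compile time, e.g. with `env!("MY_IMPL_VERSION")`.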
benchmarking.rs
The benchmarking code can be divided into the following sections:
- Import and setup: Initializes the environment by importing and configuring the libraries and modules needed for benchmarking.
- Benchmark structure: Defines structures that serve as building blocks for benchmarks. Each structure implements the ExtrinsicBuilder trait to create specific extrinsics.
- Specific and external benchmark implementations: Implements the ExtrinsicBuilder trait for the specific use cases and integrates the benchmark logic into the larger benchmarking framework and the command-line interface (CLI) of the Substrate node.
- Support functions for benchmarking: Provides functions for generating extrinsics and for supplying the inherent data required to run the benchmarks.
Import and setup
This section imports the Rust crates and modules and sets up the basic configuration needed to benchmark a Substrate blockchain.
use crate::service::FullClient;
use dev_node_runtime as runtime;
use runtime::{AccountId, Balance, BalancesCall, SystemCall};
use sc_cli::Result;
use sc_client_api::BlockBackend;
use sp_core::{Encode, Pair};
use sp_inherents::{InherentData, InherentDataProvider};
use sp_keyring::Sr25519Keyring;
use sp_runtime::{OpaqueExtrinsic, SaturatedConversion};
use std::{sync::Arc, time::Duration};
- FullClient: Imports FullClient from the local service module. FullClient is a structure that encapsulates the blockchain client functionality, including access to the blockchain state and the submission of transactions.
- dev_node_runtime: Binds the local runtime crate under the alias runtime for easier referencing. The runtime contains the specific logic of the blockchain, including the available pallets and their configuration.
- runtime: Imports specific types and call enums from the runtime that are required for creating and testing transactions (extrinsics) during benchmarking.
- sc_cli: Imports the Result type from the sc_cli crate, which is used for error handling and return-type conventions in CLI commands.
- sc_client_api: Imports the BlockBackend trait, which provides functions for accessing blocks and their headers in the blockchain backend.
- sp_core: Imports functionality from the sp_core crate, including the Encode trait for encoding data and the Pair type for cryptographic key pairs.
- sp_inherents: Imports types for working with inherent data, which is needed to provide certain predefined information in blocks, such as timestamps.
- sp_keyring: Imports a ready-made set of key pairs (Sr25519Keyring), often used in tests and benchmarks to enable fast signing and authentication.
- sp_runtime: Imports the OpaqueExtrinsic type, used to process transactions as opaque data, and SaturatedConversion, a utility for saturating type conversions.
- std: Imports the standard-library types Arc, for atomic reference counting (shared ownership), and Duration, for time spans.
This section provides the necessary tools and types for running the benchmark tests. The imported modules and types allow benchmark transactions to be created, encoded, and submitted to the blockchain in order to measure execution time and other performance indicators.
Benchmark structure
The two benchmark structures RemarkBuilder and TransferKeepAliveBuilder are part of the benchmarking setup of a Substrate-based blockchain. They are used to construct specific extrinsics (transactions), which are then used for performance tests of the blockchain.
pub struct RemarkBuilder {
client: Arc<FullClient>,
}
pub struct TransferKeepAliveBuilder {
client: Arc<FullClient>,
dest: AccountId,
value: Balance,
}
- RemarkBuilder: A structure for creating system::remark extrinsics. A remark transaction is essentially a no-operation: it makes no changes to the blockchain state but can carry an arbitrary blob of data. This makes it useful for benchmarking, since it measures the baseline overhead of the system without complex logic distorting the result.
  - client: An Arc<FullClient>, meaning RemarkBuilder holds a shared, thread-safe reference to a FullClient instance. FullClient is the client implementation for the blockchain, including functions for reading state and submitting transactions.
- TransferKeepAliveBuilder: A structure specifically for creating balances::transfer_keep_alive extrinsics. This type of transaction transfers tokens from one account to another while ensuring that the sending account retains enough balance to stay above the existential deposit and thus remain "alive".
  - client: Like RemarkBuilder, holds a shared reference to the FullClient to access blockchain functionality.
  - dest: Of type AccountId; the target account of the token transfer.
  - value: Of type Balance; the amount of tokens to transfer.
Both structures use Arc<FullClient> to access the blockchain client, which is needed to submit the constructed extrinsics to the chain. Using Arc allows the client instance to be shared safely between multiple threads, which is essential in an asynchronous or parallel benchmarking environment.
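The thread-safe sharing that Arc provides can be illustrated with a small standalone sketch; the FullClient struct here is a hypothetical stand-in for the real client type, not the actual Substrate implementation.

```rust
use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for the real FullClient.
struct FullClient {
    chain_name: &'static str,
}

// Builds a label for a worker; stands in for "work done against the client".
fn label(i: usize, client: &FullClient) -> String {
    format!("worker {} uses chain {}", i, client.chain_name)
}

fn main() {
    let client = Arc::new(FullClient { chain_name: "dev" });

    // Each worker clones the Arc: a cheap pointer copy that shares
    // ownership of the single FullClient instance across threads.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let client = Arc::clone(&client);
            thread::spawn(move || label(i, &client))
        })
        .collect();

    for handle in handles {
        println!("{}", handle.join().unwrap());
    }
    // The client is dropped only after the last Arc goes out of scope.
    assert_eq!(Arc::strong_count(&client), 1);
}
```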
To summarize, RemarkBuilder and TransferKeepAliveBuilder provide targeted means of generating specific types of extrinsics for benchmarking purposes. RemarkBuilder is used to measure baseline system costs, while TransferKeepAliveBuilder generates transactions for performance-testing the balances pallet.
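The shared pattern behind the two builders can be sketched with a minimal, stdlib-only mock. The trait below is a simplified stand-in for frame_benchmarking_cli::ExtrinsicBuilder (the real trait also has a build method returning an extrinsic); the point is that a benchmarking driver can treat all builders uniformly through the trait.

```rust
// Simplified mock of the ExtrinsicBuilder pattern: each builder names
// its target pallet and extrinsic.
trait ExtrinsicBuilder {
    fn pallet(&self) -> &str;
    fn extrinsic(&self) -> &str;
}

struct RemarkBuilder;
struct TransferKeepAliveBuilder;

impl ExtrinsicBuilder for RemarkBuilder {
    fn pallet(&self) -> &str { "system" }
    fn extrinsic(&self) -> &str { "remark" }
}

impl ExtrinsicBuilder for TransferKeepAliveBuilder {
    fn pallet(&self) -> &str { "balances" }
    fn extrinsic(&self) -> &str { "transfer_keep_alive" }
}

// A driver can describe (or benchmark) any builder via the trait object.
fn describe(b: &dyn ExtrinsicBuilder) -> String {
    format!("{}::{}", b.pallet(), b.extrinsic())
}

fn main() {
    let builders: Vec<Box<dyn ExtrinsicBuilder>> =
        vec![Box::new(RemarkBuilder), Box::new(TransferKeepAliveBuilder)];
    for b in &builders {
        println!("{}", describe(b.as_ref()));
    }
}
```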
Specific and external benchmark implementations
// SPECIFIC BENCHMARK IMPLEMENTATION
impl RemarkBuilder {
pub fn new(client: Arc<FullClient>) -> Self {
Self { client }
}
}
// EXTERNAL BENCHMARK IMPLEMENTATION
impl frame_benchmarking_cli::ExtrinsicBuilder for RemarkBuilder {
fn pallet(&self) -> &str {
"system"
}
fn extrinsic(&self) -> &str {
"remark"
}
fn build(&self, nonce: u32) -> std::result::Result<OpaqueExtrinsic, &'static str> {
let acc = Sr25519Keyring::Bob.pair();
let extrinsic: OpaqueExtrinsic = create_benchmark_extrinsic(
self.client.as_ref(),
acc,
SystemCall::remark { remark: vec![] }.into(),
nonce,
)
.into();
Ok(extrinsic)
}
}
...
- RemarkBuilder:
  - new method: Creates a new RemarkBuilder instance holding a reference to a FullClient. This allows RemarkBuilder to perform operations on the blockchain by using the client to submit extrinsics.
  - pallet and extrinsic methods: Return the names of the pallet and of the specific extrinsic to be built; here, the system pallet and the remark extrinsic.
  - build method: Creates a remark extrinsic with an empty remark (vec![]) and returns it as an opaque extrinsic. This includes signing the extrinsic with a predefined key (Sr25519Keyring::Bob), applying the given nonce, and converting it into a format the blockchain understands.
// SPECIFIC BENCHMARK IMPLEMENTATION
impl TransferKeepAliveBuilder {
pub fn new(client: Arc<FullClient>, dest: AccountId, value: Balance) -> Self {
Self { client, dest, value }
}
}
// EXTERNAL BENCHMARK IMPLEMENTATION
impl frame_benchmarking_cli::ExtrinsicBuilder for TransferKeepAliveBuilder {
fn pallet(&self) -> &str {
"balances"
}
fn extrinsic(&self) -> &str {
"transfer_keep_alive"
}
fn build(&self, nonce: u32) -> std::result::Result<OpaqueExtrinsic, &'static str> {
let acc = Sr25519Keyring::Bob.pair();
let extrinsic: OpaqueExtrinsic = create_benchmark_extrinsic(
self.client.as_ref(),
acc,
BalancesCall::transfer_keep_alive { dest: self.dest.clone().into(), value: self.value }
.into(),
nonce,
)
.into();
Ok(extrinsic)
}
}
...
- TransferKeepAliveBuilder:
  - new method: Initializes a new instance with a FullClient for accessing blockchain operations, a target account (dest), and a value to be transferred. These parameters are needed to create a transfer_keep_alive extrinsic, which transfers tokens from one account to another while the sender account remains active.
  - pallet and extrinsic methods: Specify the balances pallet and the transfer_keep_alive extrinsic as the targets of this builder.
  - build method: Generates the extrinsic by constructing a BalancesCall::transfer_keep_alive call with the target account and the transfer amount. As with RemarkBuilder, the extrinsic is signed and prepared for submission to the blockchain as an OpaqueExtrinsic.
Both implementations follow a similar pattern: they define which pallet and which extrinsic they target, use the create_benchmark_extrinsic helper function to create the corresponding extrinsic, sign it with a test key pair, and prepare it for benchmarking. Together, these implementations enable accurate measurement of blockchain performance and load by performing typical operations (adding an empty remark, or transferring tokens with transfer_keep_alive) and measuring the time and resources required.
Support functions for benchmarking
The create_benchmark_extrinsic and inherent_benchmark_data functions are key elements of the benchmarking process in a Substrate-based blockchain.
pub fn create_benchmark_extrinsic(
client: &FullClient,
sender: sp_core::sr25519::Pair,
call: runtime::RuntimeCall,
nonce: u32,
) -> runtime::UncheckedExtrinsic {
let genesis_hash = client.block_hash(0).ok().flatten().expect("Genesis block exists; qed");
let best_hash = client.chain_info().best_hash;
let best_block = client.chain_info().best_number;
let period = runtime::BlockHashCount::get()
.checked_next_power_of_two()
.map(|c| c / 2)
.unwrap_or(2) as u64;
let extra: runtime::SignedExtra = (
frame_system::CheckNonZeroSender::<runtime::Runtime>::new(),
frame_system::CheckSpecVersion::<runtime::Runtime>::new(),
frame_system::CheckTxVersion::<runtime::Runtime>::new(),
frame_system::CheckGenesis::<runtime::Runtime>::new(),
frame_system::CheckEra::<runtime::Runtime>::from(sp_runtime::generic::Era::mortal(
period,
best_block.saturated_into(),
)),
frame_system::CheckNonce::<runtime::Runtime>::from(nonce),
frame_system::CheckWeight::<runtime::Runtime>::new(),
pallet_transaction_payment::ChargeTransactionPayment::<runtime::Runtime>::from(0),
);
let raw_payload = runtime::SignedPayload::from_raw(
call.clone(),
extra.clone(),
(
(),
runtime::VERSION.spec_version,
runtime::VERSION.transaction_version,
genesis_hash,
best_hash,
(),
(),
(),
),
);
let signature = raw_payload.using_encoded(|e| sender.sign(e));
runtime::UncheckedExtrinsic::new_signed(
call,
sp_runtime::AccountId32::from(sender.public()).into(),
runtime::Signature::Sr25519(signature),
extra,
)
}
- create_benchmark_extrinsic: Creates a signed extrinsic for benchmarking purposes by embedding a specific runtime call. It signs the extrinsic with the given key pair and prepares it for submission.
  - client: A reference to the FullClient, giving access to the current blockchain state, such as the genesis block hash and the best block hash.
  - sender: The sender's key pair, used to sign the extrinsic.
  - call: The specific runtime call to execute, e.g. a remark or a transfer_keep_alive call.
  - nonce: The sender's transaction counter (nonce), which ensures the uniqueness of the extrinsic.
  - genesis_hash, best_hash, best_block: Retrieve essential chain information (the genesis block hash, and the best, i.e. most recent, block hash and number) required to construct the extrinsic.
  - period: Calculates the time window (the "period") for the transaction era:
    - runtime::BlockHashCount::get(): Retrieves BlockHashCount from the runtime configuration, which specifies how many recent block hashes are kept in storage.
    - .checked_next_power_of_two(): Rounds this number up to the next power of two, which is used to calculate the era in which the extrinsic is valid. This keeps the validity range efficient to encode while ensuring the extrinsic does not become too old to be included.
    - .map(|c| c / 2): Halves that power of two to limit the lifetime of the extrinsic. This is part of defining a "mortal" era, meaning the extrinsic is only valid within a certain window of blocks.
    - .unwrap_or(2) as u64: Falls back to a default value if the previous operations fail, ensuring there is always a valid period.
  - extra: The extra data is a collection of checks (SignedExtra) used by the blockchain to verify the validity of the extrinsic before it is executed. Each check represents a specific validation rule:
    - CheckNonZeroSender: Ensures that the sender of the extrinsic is not the zero account.
    - CheckSpecVersion: Checks that the specification version of the extrinsic matches that of the runtime.
    - CheckTxVersion: Like CheckSpecVersion, but for the transaction version.
    - CheckGenesis: Confirms that the extrinsic references the chain's genesis block hash.
    - CheckEra: Checks the era of the extrinsic against the current block height, based on the calculated period, to ensure that the extrinsic has not expired.
    - CheckNonce: Ensures that the nonce of the extrinsic is correct, which prevents transactions from being replayed.
    - CheckWeight: Checks that the extrinsic does not consume more computing time or memory than is available in the current block.
    - ChargeTransactionPayment: Deducts transaction fees; a tip of 0 is used here since this is a benchmarking extrinsic.
In a nutshell, this section defines the temporal validity conditions of the extrinsic and adds several security and consistency checks to validate its execution on the network.
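The era-period calculation can be extracted into a small standalone sketch. The value 2400 for BlockHashCount is an assumed example (a common template default), not read from the runtime here.

```rust
// The mortal-era period computation from create_benchmark_extrinsic,
// as a standalone function.
fn mortal_period(block_hash_count: u32) -> u64 {
    block_hash_count
        .checked_next_power_of_two() // round up to a power of two
        .map(|c| c / 2)              // halve it to bound the extrinsic's lifetime
        .unwrap_or(2) as u64         // fall back if the rounding overflows
}

fn main() {
    // 2400 rounds up to 4096; halved, the era spans 2048 blocks.
    assert_eq!(mortal_period(2400), 2048);
    // An extrinsic built at best block N is then valid for roughly the
    // next 2048 blocks (Era::mortal(period, N)).
    println!("period = {}", mortal_period(2400));
}
```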
pub fn inherent_benchmark_data() -> Result<InherentData> {
let mut inherent_data = InherentData::new();
let d = Duration::from_millis(0);
let timestamp = sp_timestamp::InherentDataProvider::new(d.into());
futures::executor::block_on(timestamp.provide_inherent_data(&mut inherent_data))
.map_err(|e| format!("creating inherent data: {:?}", e))?;
Ok(inherent_data)
}
This section describes the creation of inherent data, which is necessary for benchmarking tests of a blockchain. Inherent data is information that must be contained in each block but does not originate directly from transactions. Typically, it contains critical system information such as timestamps.
- inherent_benchmark_data:
  - let mut inherent_data: Creates a new InherentData object, used to collect all the inherent data that will later be added to the block. Inherent data is essential for the blockchain to function correctly and provides information that is not supplied by user transactions.
  - let d: Creates a Duration of 0 milliseconds, used to pin the timestamp to a specific point. For benchmarking, time 0 is deliberately chosen to create a controlled environment.
  - let timestamp: Initializes an InherentDataProvider for the timestamp with the previously defined time. This provider is responsible for making the timestamp available as inherent data.
  - futures::executor: Calls the provide_inherent_data method of the timestamp inherent provider to insert the timestamp data into the inherent_data object. This happens asynchronously, and futures::executor::block_on is used to drive the asynchronous call to completion. This step is critical because it ensures the block contains the information it needs to be valid.
  - inherent_data: Finally, the inherent_data object is returned. It now contains the required inherent data (here, the timestamp) needed to run benchmarking tests or to simulate block production.
In short, this process is crucial for setting up a controlled test environment in which benchmarks can be carried out. The precise setting and provision of inherent data ensures that the tests run under standardized conditions, resulting in meaningful and reproducible performance measurements.
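Conceptually, InherentData is a map from an 8-byte inherent identifier to encoded bytes. The sketch below models that with a plain HashMap; b"timstap0" is the real timestamp inherent identifier, but the "encoding" here is a plain little-endian u64 rather than SCALE, for illustration only.

```rust
use std::collections::HashMap;

// Simplified model of sp_inherents::InherentData.
type InherentIdentifier = [u8; 8];

// Mock of inherent_benchmark_data(): registers a fixed timestamp of 0
// under the timestamp inherent identifier.
fn inherent_benchmark_data() -> HashMap<InherentIdentifier, Vec<u8>> {
    let mut inherent_data = HashMap::new();
    let timestamp_ms: u64 = 0; // fixed timestamp for a controlled benchmark run
    inherent_data.insert(*b"timstap0", timestamp_ms.to_le_bytes().to_vec());
    inherent_data
}

fn main() {
    let data = inherent_benchmark_data();
    // The block-building code later looks the data up by identifier.
    assert!(data.contains_key(b"timstap0"));
    println!("{} inherent entries", data.len());
}
```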
chain_spec.rs
This file contains the chain specifications used to initialize the blockchain when creating and launching nodes. It is crucial for defining the configuration of the genesis block, including initial authorities, pre-funded accounts, and other important network settings.
The chain specification can be divided into the following sections:
- Import and type definition: Imports all external libraries and modules required to define the chain specification.
- Support functions: Helper functions for generating cryptographic keys and account IDs from a seed. They facilitate the setup of predefined accounts and authorities for the genesis block.
- Chain specification: Uses the ChainSpec::builder method to define basic network information, such as the name, ID, and type of the chain, and adds specific genesis configuration such as pre-funded accounts and initial consensus authorities.
- Local testnet configuration: Similar to development_config, but creates a local testnet specification whose configuration is closer to a real testnet or mainnet, including a larger number of pre-funded accounts and authorities.
- Genesis configuration: Defines the initial configuration of the genesis block. This includes setting up the initial authorities for the Aura and Grandpa consensus mechanisms, defining the Sudo account (root key), and pre-funding certain accounts. The return value is a JSON value that is inserted directly into the genesis block configuration.
Import and type definition
The imports and type definitions form the foundation for the creation of blockchain specifications in Substrate. They provide the building blocks with which specific network configurations, genesis states and consensus rules can be defined.
use dev_node_runtime::{AccountId, RuntimeGenesisConfig, Signature, WASM_BINARY};
use sc_service::ChainType;
use sp_consensus_aura::sr25519::AuthorityId as AuraId;
use sp_consensus_grandpa::AuthorityId as GrandpaId;
use sp_core::{sr25519, Pair, Public};
use sp_runtime::traits::{IdentifyAccount, Verify};
pub type ChainSpec = sc_service::GenericChainSpec<RuntimeGenesisConfig>;
...
- Imports:
  - dev_node_runtime: Imports components from the blockchain runtime, such as AccountId, RuntimeGenesisConfig, and Signature. These elements are central to defining the genesis configuration and the identity of accounts within the blockchain.
  - sc_service: Imports ChainType, which determines the type of blockchain (e.g. development, local testnet, or live network) and thus how the chain should be initialized.
  - sp_consensus_aura and sp_consensus_grandpa: Import the type definitions for the authority IDs of the Aura and Grandpa consensus mechanisms. These are used to define the initial consensus authorities in the genesis block.
  - sp_core and sp_runtime: Provide basic cryptographic functionality (Pair, Public) and traits (IdentifyAccount, Verify) needed to create account IDs and verify signatures.
- Type definition:
  - ChainSpec: Defines the type ChainSpec as a specialization of the generic GenericChainSpec<RuntimeGenesisConfig> from the sc_service crate. This specialization is used to create a custom chain specification with the runtime's specific genesis configuration.
Support functions
These three helper functions play an important role in initializing the blockchain and configuring the genesis block. They provide a method to generate cryptographic keys and account IDs from a seed, which facilitates the establishment of consensus mechanisms and the allocation of initial balances.
...
pub fn get_from_seed<TPublic: Public>(seed: &str) -> <TPublic::Pair as Pair>::Public {
TPublic::Pair::from_string(&format!("//{}", seed), None)
.expect("static values are valid; qed")
.public()
}
type AccountPublic = <Signature as Verify>::Signer;
...
pub fn get_account_id_from_seed<TPublic: Public>(seed: &str) -> AccountId
where
AccountPublic: From<<TPublic::Pair as Pair>::Public>,
{
AccountPublic::from(get_from_seed::<TPublic>(seed)).into_account()
}
...
pub fn authority_keys_from_seed(s: &str) -> (AuraId, GrandpaId) {
(get_from_seed::<AuraId>(s), get_from_seed::<GrandpaId>(s))
}
...
- get_from_seed: Generates a cryptographic key pair from a seed and returns its public part. It is used to generate keys for the consensus authorities (Aura and Grandpa) and other security mechanisms within the blockchain.
  - from_string method: Uses the from_string method of the Pair trait to derive a key pair from the given seed. The seed is prefixed with //, which in Substrate denotes hard key derivation. The result is the public part of the key pair.
- get_account_id_from_seed: Generates an AccountId from a seed, which makes it easy to create the pre-funded accounts of the genesis block. Similar to the previous function, but with the additional step of converting the generated public key into an AccountId. This is achieved via the From implementation for the public key type.
- authority_keys_from_seed: Generates the key pairs for the Aura and Grandpa consensus mechanisms. It uses the get_from_seed function to derive an Aura and a Grandpa authority key from the same seed and returns them as a tuple. This is important for initializing the consensus mechanisms in the genesis block: by providing these keys, the initial validators (authorities) that may author blocks and authorize transactions are determined.
These helper functions are essential for configuring and initializing the blockchain, especially for setting up the genesis block. They simplify the process of generating key pairs and account IDs from seeds, which facilitates the creation of consistent and secure configurations for development, test and production environments. The ability to dynamically generate these elements from seeds promotes best practices in cryptographic key handling and facilitates the management of network authorities and pre-funded accounts.
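The string manipulation behind get_from_seed can be shown in isolation. The sketch below only constructs the derivation-path string; the real function passes this string to Pair::from_string to derive an actual key pair, which requires sp_core and is omitted here.

```rust
// Builds the derivation path handed to Pair::from_string.
// "//" marks a hard derivation junction in Substrate.
fn derivation_path(seed: &str) -> String {
    format!("//{}", seed)
}

fn main() {
    // "Alice" becomes the well-known dev path "//Alice" ...
    assert_eq!(derivation_path("Alice"), "//Alice");
    // ... and seeds may themselves contain further junctions,
    // as with the stash accounts in the genesis configuration.
    assert_eq!(derivation_path("Alice//stash"), "//Alice//stash");
    println!("{}", derivation_path("Alice"));
}
```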
Chain specification
The development_config function configures a chain specification for a development network. This is one of several possible configurations that can be defined in the chain_spec.rs file of a Substrate-based blockchain. A special network setup is created here for development and test purposes.
...
pub fn development_config() -> Result<ChainSpec, String> {
Ok(ChainSpec::builder(
WASM_BINARY.ok_or_else(|| "Development wasm not available".to_string())?,
None,
)
.with_name("Development")
.with_id("dev")
.with_chain_type(ChainType::Development)
.with_genesis_config_patch(testnet_genesis(
// Initial PoA authorities
vec![authority_keys_from_seed("Alice")],
// Sudo account
get_account_id_from_seed::<sr25519::Public>("Alice"),
// Pre-funded accounts
vec![
get_account_id_from_seed::<sr25519::Public>("Alice"),
get_account_id_from_seed::<sr25519::Public>("Bob"),
get_account_id_from_seed::<sr25519::Public>("Alice//stash"),
get_account_id_from_seed::<sr25519::Public>("Bob//stash"),
],
true,
))
.build())
}
...
- ChainSpec builder: The configuration starts with ChainSpec::builder, which creates a new builder instance for a ChainSpec object. It allows the various aspects of the chain specification to be configured step by step.
- WASM binary: The builder receives the WASM binary of the blockchain runtime as its first parameter. WASM_BINARY.ok_or_else(|| "Development wasm not available".to_string())? checks whether the binary is available and returns an error if not. This ensures that the network is initialized with the current runtime version.
- Network name and ID: .with_name("Development") and .with_id("dev") define the human-readable name and the unique identifier of the network. This information is important for distinguishing between different network configurations.
- Chain type: .with_chain_type(ChainType::Development) specifies that this is a development network. Substrate distinguishes between chain types such as Development, Local, and Live to support different runtime environments and use cases.
- Genesis configuration: .with_genesis_config_patch(testnet_genesis(...)) applies a specific genesis configuration to the network. The testnet_genesis function defines the initial settings of the genesis block, including:
  - Initial PoA authorities: Defined by vec![authority_keys_from_seed("Alice")]; determines which keys act as validators (block authors) when the network starts.
  - Sudo account: get_account_id_from_seed::<sr25519::Public>("Alice") assigns administrative rights within the blockchain to a specific account.
  - Pre-funded accounts: A list of accounts funded in the genesis block; here, the accounts of Alice and Bob as well as their stash accounts.
- Build: .build() completes the configuration and creates the ChainSpec object.
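The step-by-step construction used here is Rust's builder pattern. The following stdlib-only mock illustrates it with simplified stand-in types; the real builder lives in sc_service and additionally takes the WASM binary and a genesis-config patch.

```rust
// Simplified stand-ins for the sc_service types.
#[derive(Debug)]
enum ChainType { Development, Local, Live }

struct ChainSpec { name: String, id: String, chain_type: ChainType }

struct ChainSpecBuilder { name: String, id: String, chain_type: ChainType }

impl ChainSpecBuilder {
    fn new() -> Self {
        Self { name: String::new(), id: String::new(), chain_type: ChainType::Development }
    }
    // Each with_* method consumes the builder, sets one field, and
    // returns it, enabling the chained call style seen above.
    fn with_name(mut self, name: &str) -> Self { self.name = name.into(); self }
    fn with_id(mut self, id: &str) -> Self { self.id = id.into(); self }
    fn with_chain_type(mut self, t: ChainType) -> Self { self.chain_type = t; self }
    fn build(self) -> ChainSpec {
        ChainSpec { name: self.name, id: self.id, chain_type: self.chain_type }
    }
}

fn main() {
    let spec = ChainSpecBuilder::new()
        .with_name("Development")
        .with_id("dev")
        .with_chain_type(ChainType::Development)
        .build();
    println!("{} ({}), type {:?}", spec.name, spec.id, spec.chain_type);
    let _ = (ChainType::Local, ChainType::Live); // other supported chain types
}
```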
Local testnet configuration
This configuration is more complex than the development configuration and aims to create an environment that is closer to a real deployment scenario, with several predefined actors and a more detailed initial configuration.
```rust
...
pub fn local_testnet_config() -> Result<ChainSpec, String> {
    Ok(ChainSpec::builder(
        WASM_BINARY.ok_or_else(|| "Development wasm not available".to_string())?,
        None,
    )
    .with_name("Local Testnet")
    .with_id("local_testnet")
    .with_chain_type(ChainType::Local)
    .with_genesis_config_patch(testnet_genesis(
        // Initial PoA authorities
        vec![authority_keys_from_seed("Alice"), authority_keys_from_seed("Bob")],
        // Sudo account
        get_account_id_from_seed::<sr25519::Public>("Alice"),
        // Pre-funded accounts
        vec![
            get_account_id_from_seed::<sr25519::Public>("Alice"),
            get_account_id_from_seed::<sr25519::Public>("Bob"),
            get_account_id_from_seed::<sr25519::Public>("Charlie"),
            ...
            get_account_id_from_seed::<sr25519::Public>("Alice//stash"),
            get_account_id_from_seed::<sr25519::Public>("Bob//stash"),
            get_account_id_from_seed::<sr25519::Public>("Charlie//stash"),
            ...
        ],
        true,
    ))
    .build())
}
...
```
- WASM Binary: As in the development configuration, the WASM binary of the runtime is checked to ensure that it is available for the network.
- Configuration name and ID: The network is identified by a specific name ("Local Testnet") and a unique ID ("local_testnet") in order to distinguish it from other networks.
- Chain type: Specifying `ChainType::Local` (as opposed to `Development` or `Live`) signals that this is a local network.
- Genesis configuration: The `testnet_genesis` function is used to define the specific settings of the genesis block, including:
  - Initial PoA authorities: Unlike the development configuration, two authorities (Alice and Bob) are defined here, simulating a more realistic consensus environment.
  - Sudo account: The administrative account is assigned to Alice, as in the development configuration.
  - Pre-funded accounts: A more extensive list of accounts (Alice, Bob, Charlie, etc.) is pre-funded.
- Build: The `.build()` method completes the chain specification.
In summary, the local test network
configuration enables a deeper and more comprehensive understanding of how the blockchain application would work under more realistic conditions and serves as an important bridge between development testing and deployment on a live network.
Genesis configuration
The `testnet_genesis` function plays a central role in defining the initial state of blockchain storage for FRAME modules in the genesis block.
The configuration of the genesis block is crucial, as it defines the initial state of the network on which all future transactions and block production are based.
```rust
...
fn testnet_genesis(
    initial_authorities: Vec<(AuraId, GrandpaId)>,
    root_key: AccountId,
    endowed_accounts: Vec<AccountId>,
    _enable_println: bool,
) -> serde_json::Value {
    serde_json::json!({
        "balances": {
            // Configure endowed accounts with initial balance of 1 << 60.
            "balances": endowed_accounts.iter().cloned().map(|k| (k, 1u64 << 60)).collect::<Vec<_>>(),
        },
        "aura": {
            "authorities": initial_authorities.iter().map(|x| (x.0.clone())).collect::<Vec<_>>(),
        },
        "grandpa": {
            "authorities": initial_authorities.iter().map(|x| (x.1.clone(), 1)).collect::<Vec<_>>(),
        },
        "sudo": {
            // Assign network admin rights.
            "key": Some(root_key),
        },
    })
}
...
```
- Parameters of the function:
  - initial_authorities: A list of tuples that define the initial authorities for the Aura and Grandpa consensus mechanisms.
  - root_key: The account that acts as the `sudo` (superuser) of the blockchain, with the ability to make administrative changes to the network.
  - endowed_accounts: A list of accounts that are endowed with an initial balance in the genesis block.
  - _enable_println: A boolean value used to enable or disable debug output.
- Balances module: Defines the initial balances for the accounts listed in endowed_accounts. Each account is given an amount of `1 << 60`, a generous amount of tokens for testing and running the network in its initial phase.
- Aura and Grandpa modules: Determine the initial set of consensus authorities based on the key pairs provided via initial_authorities. These authorities are responsible for block production (Aura) and finalization of blocks (Grandpa).
- sudo module: Assigns administrative rights in the network to the account specified by root_key. This account can make far-reaching changes to the network, including updating the runtime logic.
The Genesis configuration is crucial for the start and initial security of the network. It ensures that the network starts with a clear and secure allocation of roles and resources. By defining initial authorities and pre-funded accounts, it enables an orderly start of the network and provides the necessary means to take the first steps in the network, such as staking tokens or performing transactions.
cli.rs
The `cli.rs` file defines the command line interface (CLI) for a Substrate-based blockchain.
It uses the `clap` crate, a Rust library for handling command line arguments, to configure subcommands and options for the blockchain node executable.
Here, the CLI for the node is structured and defined, providing a rich set of functions and utilities through subcommands.
```rust
#[derive(Debug, clap::Parser)]
pub struct Cli {
    #[command(subcommand)]
    pub subcommand: Option<Subcommand>,

    #[clap(flatten)]
    pub run: RunCmd,
}

#[derive(Debug, clap::Subcommand)]
#[allow(clippy::large_enum_variant)]
pub enum Subcommand {
    #[command(subcommand)]
    Key(sc_cli::KeySubcommand),
    BuildSpec(sc_cli::BuildSpecCmd),
    CheckBlock(sc_cli::CheckBlockCmd),
    ExportBlocks(sc_cli::ExportBlocksCmd),
    ExportState(sc_cli::ExportStateCmd),
    ImportBlocks(sc_cli::ImportBlocksCmd),
    PurgeChain(sc_cli::PurgeChainCmd),
    Revert(sc_cli::RevertCmd),
    #[command(subcommand)]
    Benchmark(frame_benchmarking_cli::BenchmarkCmd),
    #[cfg(feature = "try-runtime")]
    TryRuntime(try_runtime_cli::TryRuntimeCmd),
    #[cfg(not(feature = "try-runtime"))]
    TryRuntime,
    ChainInfo(sc_cli::ChainInfoCmd),
}
```
- CLI struct: This struct represents the main structure for the command line interface.
It defines the various options and subcommands supported by the CLI.
- subcommand: An optional field that stores the specific subcommand to be executed. Each subcommand corresponds to a specific action or utility that can be executed on the blockchain node.
- run: This field of type `RunCmd` contains the general runtime options for the node, such as configuration paths, network options, logging settings, etc.
- Subcommand enum: This enum lists all available subcommands.
Each member of the enum represents a specific command that can be executed.
- Key; BuildSpec; CheckBlock; ExportBlocks; ExportState; ImportBlocks; PurgeChain; Revert; ChainInfo: These are standard subcommands provided by `sc_cli`. They cover basic functions such as key management, spec creation, block verification, data export/import, chain purging, and resetting the blockchain to a previous state.
- Benchmark: A specific subcommand for performing benchmarks on different parts of the runtime, enabled by the `frame_benchmarking_cli` crate. This command is particularly useful for measuring the performance of the runtime.
- TryRuntime: An optional feature that makes it possible to test certain runtime operations in an isolated environment, e.g. to simulate upcoming upgrades. It is provided by the `try_runtime_cli` crate and is only available if the `try-runtime` feature is activated.
command.rs
The `command.rs` file contains the entry point and central logic for handling the various command line commands supported by a Substrate-based node.
The file is divided into several sections that cover different aspects of command processing and node initialization.
- Imports and type definition: This section imports the modules, functions and types needed in the rest of the file. These include basic structures for benchmarking, chain specifications, CLI configurations and service functions, as well as specific runtime types and auxiliary functions.
- Substrate CLI implementation: Defines how the CLI (Command Line Interface) structure is specifically adapted for this node. It sets parameters such as implementation name, version, description, author, support URL and copyright. In addition, the `load_spec` method is implemented to load specific chain specifications based on the passed identifier.
- Main function `run` and its components: The main function `run` is the central entry point and orchestrates the execution of the program based on the CLI arguments and subcommands specified by the user.
Imports and type definition
The basic building blocks for the node commands and configuration are defined in the first section of `command.rs`.
```rust
use crate::{
    benchmarking::{inherent_benchmark_data, RemarkBuilder, TransferKeepAliveBuilder},
    chain_spec,
    cli::{Cli, Subcommand},
    service,
};
use frame_benchmarking_cli::{BenchmarkCmd, ExtrinsicFactory, SUBSTRATE_REFERENCE_HARDWARE};
use dev_node_runtime::{Block, EXISTENTIAL_DEPOSIT};
use sc_cli::SubstrateCli;
use sc_service::PartialComponents;
use sp_keyring::Sr25519Keyring;

#[cfg(feature = "try-runtime")]
use try_runtime_cli::block_building_info::timestamp_with_aura_info;
```
- Imports from own crate:
  - benchmarking::inherent_benchmark_data, RemarkBuilder, TransferKeepAliveBuilder: Provide functions and structures required for benchmarking specific extrinsics. `inherent_benchmark_data` provides data for inherent extrinsics, while `RemarkBuilder` and `TransferKeepAliveBuilder` are helper constructs for creating specific benchmarking extrinsics.
  - chain_spec: Defines the specification of the blockchain, including the initial configuration such as the genesis block and network parameters.
  - cli::Cli, Subcommand: Defines the command line interface (CLI) structure and the available subcommands that can be executed by the user.
  - service: Contains functions and types for creating and managing the blockchain service.
- Imports from external crates:
  - frame_benchmarking_cli::BenchmarkCmd, ExtrinsicFactory, SUBSTRATE_REFERENCE_HARDWARE: Provides tools for running benchmarks, including `BenchmarkCmd` for benchmark commands, `ExtrinsicFactory` for creating extrinsics, and a reference hardware configuration.
  - dev_node_runtime::Block, EXISTENTIAL_DEPOSIT: Imports types from the node's runtime module, including the block type and the minimum balance (`EXISTENTIAL_DEPOSIT`) that an account must hold to be considered existing.
  - sc_cli::SubstrateCli: Imports the Substrate CLI framework, which provides basic functionality for the CLI implementation.
  - sc_service::PartialComponents: Provides a structure containing subcomponents of a Substrate service that can be initialized before the service is fully started.
  - sp_keyring::Sr25519Keyring: Enables access to a predefined set of `Sr25519` key pairs for testing and development purposes.
- Feature-specific import:
  - #[cfg(feature = "try-runtime")]: Only imported if the `try-runtime` feature is activated. Provides functions required for the `try-runtime` feature, in particular timestamp information for blocks under `Aura` consensus.
Substrate CLI implementation
This section implements the `SubstrateCli` trait for the `Cli` struct, which defines the command line arguments for the Substrate node.
The implementation of this trait determines how the node describes itself and how it reacts to various command line arguments.
```rust
impl SubstrateCli for Cli {
    fn impl_name() -> String {
        "Substrate Node".into()
    }

    fn impl_version() -> String {
        env!("SUBSTRATE_CLI_IMPL_VERSION").into()
    }

    fn description() -> String {
        env!("CARGO_PKG_DESCRIPTION").into()
    }

    fn author() -> String {
        env!("CARGO_PKG_AUTHORS").into()
    }

    fn support_url() -> String {
        "support.anonymous.an".into()
    }

    fn copyright_start_year() -> i32 {
        2017
    }

    fn load_spec(&self, id: &str) -> Result<Box<dyn sc_service::ChainSpec>, String> {
        Ok(match id {
            "dev" => Box::new(chain_spec::development_config()?),
            "" | "local" => Box::new(chain_spec::local_testnet_config()?),
            path =>
                Box::new(chain_spec::ChainSpec::from_json_file(std::path::PathBuf::from(path))?),
        })
    }
}
```
- impl_name: Returns the name of the implementation.
- impl_version: Returns the version of the implementation. This version is loaded from the environment variable `SUBSTRATE_CLI_IMPL_VERSION`, which is normally defined in `Cargo.toml` or during the build process.
- description: Provides a description of the node. This description is loaded from the `CARGO_PKG_DESCRIPTION` environment variable, which is defined in the project's `Cargo.toml`.
- author: Returns the author(s) of the node. This information is loaded from the `CARGO_PKG_AUTHORS` environment variable.
- support_url: Provides a URL for support. In this example a generic URL is used, but normally this would point to a forum, GitHub issue page or other support resource.
- copyright_start_year: Specifies the year in which the copyright for the node began.
- load_spec: Loads the specification of the blockchain based on the given identifier. This function allows the node to load different configurations, e.g. a development config (`dev`), a local testnet config (`local`) or a specific configuration from a file. It uses the functions defined in the `chain_spec` module to create the corresponding `ChainSpec` objects.
Main function `run` and its components
Function `run`
The `run` function is the entry point for the node's CLI. It starts processing the command line arguments and initiates various actions depending on the specified subcommand.
```rust
pub fn run() -> sc_cli::Result<()> {
    let cli = Cli::from_args();
```
- Parse CLI arguments: First, `Cli::from_args()` is called to parse the arguments entered on the command line and convert them into a `Cli` structure. This structure contains all specified options and flags as well as the selected subcommand.
Subcommand handling
The subcommand handler in the `run` function shows how the different commands are handled and executed. Each command requires a different procedure, depending on its nature and requirements.
```rust
// SUB COMMAND HANDLER
match &cli.subcommand {
    Some(Subcommand::Key(cmd)) => cmd.run(&cli),
    Some(Subcommand::BuildSpec(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.sync_run(|config| cmd.run(config.chain_spec, config.network))
    },
    Some(Subcommand::CheckBlock(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.async_run(|config| {
            let PartialComponents { client, task_manager, import_queue, .. } =
                service::new_partial(&config)?;
            Ok((cmd.run(client, import_queue), task_manager))
        })
    },
    Some(Subcommand::ExportBlocks(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.async_run(|config| {
            let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?;
            Ok((cmd.run(client, config.database), task_manager))
        })
    },
    Some(Subcommand::ExportState(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.async_run(|config| {
            let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?;
            Ok((cmd.run(client, config.chain_spec), task_manager))
        })
    },
    Some(Subcommand::ImportBlocks(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.async_run(|config| {
            let PartialComponents { client, task_manager, import_queue, .. } =
                service::new_partial(&config)?;
            Ok((cmd.run(client, import_queue), task_manager))
        })
    },
    Some(Subcommand::PurgeChain(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.sync_run(|config| cmd.run(config.database))
    },
    Some(Subcommand::Revert(cmd)) => {
        let runner = cli.create_runner(cmd)?;
        runner.async_run(|config| {
            let PartialComponents { client, task_manager, backend, .. } =
                service::new_partial(&config)?;
            let aux_revert = Box::new(|client, _, blocks| {
                sc_consensus_grandpa::revert(client, blocks)?;
                Ok(())
            });
            Ok((cmd.run(client, backend, Some(aux_revert)), task_manager))
        })
    },
```
- Key: Direct execution of the command without additional asynchronous operations. Manages crypto keys.
- BuildSpec: Creates a configuration for the blockchain. Uses a runner to execute the command synchronously. The run method of this command requires the chain specification and network information.
- CheckBlock: Checks the validity of a block without importing it. Uses a runner for asynchronous execution, initializes the required components (`PartialComponents`) and executes the command with the provided client and import queue.
- ExportBlocks: Exports blockchain blocks to a file. Here, too, the work is asynchronous, and the required parts of the node infrastructure are initialized before the command is executed with these components.
- ExportState: Exports the state of the blockchain at a specific block. Similar to the export of blocks, the command is executed asynchronously.
- ImportBlocks: Imports blocks from a file into the blockchain. This process is also asynchronous and uses the import queue for processing.
- PurgeChain: Deletes all blockchain data. This command is executed synchronously and only requires access to the database configuration.
- Revert: Resets the blockchain to a specific block. Execution is asynchronous, and an auxiliary function (`aux_revert`) is provided to perform specific actions when resetting.
Each of these subcommands is prepared via the `create_runner` mechanism, which provides the execution context and ensures that all necessary components are available.
Depending on whether a command runs synchronously or asynchronously, the appropriate runner method (`sync_run` or `async_run`) is used to handle its execution.
Benchmarking and try-runtime
```rust
Some(Subcommand::Benchmark(cmd)) => {
    let runner = cli.create_runner(cmd)?;
    runner.sync_run(|config| {
        // This switch needs to be in the client, since the client decides
        // which sub-commands it wants to support.
        match cmd {
            BenchmarkCmd::Pallet(cmd) => {
                if !cfg!(feature = "runtime-benchmarks") {
                    return Err(
                        "Runtime benchmarking wasn't enabled when building the node. \
                         You can enable it with `--features runtime-benchmarks`."
                            .into(),
                    )
                }
                cmd.run::<Block, ()>(config)
            },
            BenchmarkCmd::Block(cmd) => {
                let PartialComponents { client, .. } = service::new_partial(&config)?;
                cmd.run(client)
            },
            #[cfg(not(feature = "runtime-benchmarks"))]
            BenchmarkCmd::Storage(_) => Err(
                "Storage benchmarking can be enabled with `--features runtime-benchmarks`."
                    .into(),
            ),
            #[cfg(feature = "runtime-benchmarks")]
            BenchmarkCmd::Storage(cmd) => {
                let PartialComponents { client, backend, .. } =
                    service::new_partial(&config)?;
                let db = backend.expose_db();
                let storage = backend.expose_storage();
                cmd.run(config, client, db, storage)
            },
            BenchmarkCmd::Overhead(cmd) => {
                let PartialComponents { client, .. } = service::new_partial(&config)?;
                let ext_builder = RemarkBuilder::new(client.clone());
                cmd.run(
                    config,
                    client,
                    inherent_benchmark_data()?,
                    Vec::new(),
                    &ext_builder,
                )
            },
            BenchmarkCmd::Extrinsic(cmd) => {
                let PartialComponents { client, .. } = service::new_partial(&config)?;
                // Register the *Remark* and *TKA* builders.
                let ext_factory = ExtrinsicFactory(vec![
                    Box::new(RemarkBuilder::new(client.clone())),
                    Box::new(TransferKeepAliveBuilder::new(
                        client.clone(),
                        Sr25519Keyring::Alice.to_account_id(),
                        EXISTENTIAL_DEPOSIT,
                    )),
                ]);
                cmd.run(client, inherent_benchmark_data()?, Vec::new(), &ext_factory)
            },
            BenchmarkCmd::Machine(cmd) =>
                cmd.run(&config, SUBSTRATE_REFERENCE_HARDWARE.clone()),
        }
    })
},
#[cfg(feature = "try-runtime")]
Some(Subcommand::TryRuntime(cmd)) => {
    use crate::service::ExecutorDispatch;
    use sc_executor::{sp_wasm_interface::ExtendedHostFunctions, NativeExecutionDispatch};
    let runner = cli.create_runner(cmd)?;
    runner.async_run(|config| {
        // we don't need any of the components of new_partial, just a runtime, or a task
        // manager to do `async_run`.
        let registry = config.prometheus_config.as_ref().map(|cfg| &cfg.registry);
        let task_manager =
            sc_service::TaskManager::new(config.tokio_handle.clone(), registry)
                .map_err(|e| sc_cli::Error::Service(sc_service::Error::Prometheus(e)))?;
        let info_provider = timestamp_with_aura_info(6000);
        Ok((
            cmd.run::<Block, ExtendedHostFunctions<
                sp_io::SubstrateHostFunctions,
                <ExecutorDispatch as NativeExecutionDispatch>::ExtendHostFunctions,
            >, _>(Some(info_provider)),
            task_manager,
        ))
    })
},
#[cfg(not(feature = "try-runtime"))]
Some(Subcommand::TryRuntime) => Err("TryRuntime wasn't enabled when building the node. \
    You can enable it with `--features try-runtime`."
    .into()),
Some(Subcommand::ChainInfo(cmd)) => {
    let runner = cli.create_runner(cmd)?;
    runner.sync_run(|config| cmd.run::<Block>(&config))
},
```
- Benchmark: Various types of benchmarks are supported in the benchmark section.
  - Pallet: Performs benchmarks for specific pallets. It is checked whether the `runtime-benchmarks` feature was activated when the node was built; if not, an error is returned.
  - Block: Performs benchmarks at the block level. The necessary subcomponents are initialized and the benchmark runs against the client.
  - Storage: Performs benchmarks at the storage level, provided the `runtime-benchmarks` feature is activated. This includes access to the database and the storage backend.
  - Overhead: Evaluates the overhead of transactions by using specifically created extrinsics.
  - Extrinsic: Performs benchmarks for specific extrinsics. Custom extrinsic builders are used to measure the performance of different transaction types.
  - Machine: Evaluates the performance of the hardware or virtual machine on which the node is running.

A runner is used to execute the benchmarks; it runs synchronously and provides the configuration of the blockchain. Depending on the benchmark type, different configurations and dependencies are required.
- Try-runtime: The try-runtime part allows testing runtime upgrades and migrations in an off-chain environment. This is particularly useful for identifying potential problems before deploying to the live network. If the `try-runtime` feature was activated when the node was built, users can execute try-runtime commands. Execution is asynchronous and uses a simplified set of node components to perform the tests. If the feature has not been activated, an error indicates that try-runtime is not available.
Fallback and node start
The fallback section of the code handles the case in which no subcommand was specified when the node was started.
In this case, the standard action is executed, which is the start of the complete node.
```rust
None => {
    let runner = cli.create_runner(&cli.run)?;
    runner.run_node_until_exit(|config| async move {
        service::new_full(config).map_err(sc_cli::Error::Service)
    })
},
```
- Fallback operation:
  - None: This match arm is selected if the user does not specify any subcommand. This means the node should be started normally, without special operations such as benchmarking, try-runtime or blockchain checks.
- Node start:
  - create_runner(&cli.run): A runner is created with the general configuration of the node, as determined by the command line arguments. This runner instance enables the node to be executed with the specified configuration.
  - run_node_until_exit: This method starts the node and keeps it running until an external signal to terminate is received (e.g. pressing CTRL+C on the command line). The method receives a function that is executed asynchronously and initializes the full configuration of the node.
  - async move |config|: An asynchronous block that receives the full configuration of the node (config). Within this block, the function `service::new_full(config)` is called, which initializes the complete node with all required components and services.
  - service::new_full(config): This function is responsible for initializing all parts of the node, including the network, consensus mechanisms, transaction pool management, RPC endpoints and other necessary components. The exact composition and configuration may vary depending on the node implementation.
  - map_err(sc_cli::Error::Service): If an error occurs during initialization, it is converted into a CLI-specific error so that it can be handled properly and displayed to the user.
main.rs
The `main.rs` file is the entry point for a Substrate-based blockchain node.
Its main task is to initialize and start the program.
The file is simple, as the more complex configurations and functionalities are defined in the modular components (such as in the benchmarking, chain_spec, cli, command, rpc, and service modules).
```rust
mod benchmarking;
mod chain_spec;
mod cli;
mod command;
mod rpc;
mod service;

fn main() -> sc_cli::Result<()> {
    command::run()
}
```
- Modules: Several modules (benchmarking.rs, chain_spec.rs, cli.rs, ...) are declared here that define and implement various aspects of the blockchain node.
- fn main() -> sc_cli::Result<()>: This is the main function of the program and its starting point. It returns a result type that contains either `Ok` if the run was successful or an `Err` in the event of a problem.
  - command::run(): The `run` function from the `command` module is called within the `main` function. It is responsible for starting the blockchain node based on the specified CLI arguments and configurations: it takes care of initializing the system, setting up the network, starting the consensus mechanism, and much more. Essentially, `command::run` orchestrates the entire process of getting the node up and running.
The `main.rs` file in a Substrate node focuses on starting the program with a clearly defined structure.
It uses Rust's modularity to split the various functions and configurations into separate modules, resulting in better organization and maintainability of the code.
The `main` function simply delegates to `command::run`, which contains most of the initialization and startup logic for the blockchain node.
rpc.rs
The `rpc.rs` file is responsible for defining RPC (Remote Procedure Call) methods that are specific to the runtime configuration of the project.
It builds on the core RPC definitions provided by Substrate and extends these with additional functionalities.
The file is organized into several sections:
- Import and definition: Imports the necessary Rust and Substrate-specific libraries and modules.
- Structure and types: Defines structures required for configuring and providing the RPC services, such as `FullDeps`, which encapsulates dependencies such as the client and the transaction pool.
- RPC extension function: The `create_full` function is the main part of the file. It takes the dependencies and uses them to create and configure an `RpcModule` that provides the full RPC functionality for the node. Within this function, specific Substrate RPC extensions such as `System` and `TransactionPayment` are added, which expose basic system information and transaction payment functions via RPC.
- Additional RPC methods: In this section, developers can define additional custom RPC methods and add them to the `RpcModule`.
- Custom APIs: In this section, custom APIs can be implemented.
- Chain specification and metadata: This part can define how information about the chain specification and metadata is made available via RPC, which can be useful for front-end applications and wallets.
Import and type definition
This section of the `rpc.rs` file is responsible for integrating the necessary dependencies and defining types used to configure and extend the RPC functionality of the Substrate node.
```rust
use std::sync::Arc;

use jsonrpsee::RpcModule;
use dev_node_runtime::{opaque::Block, AccountId, Balance, Nonce};
use sc_transaction_pool_api::TransactionPool;
use sp_api::ProvideRuntimeApi;
use sp_block_builder::BlockBuilder;
use sp_blockchain::{Error as BlockChainError, HeaderBackend, HeaderMetadata};

pub use sc_rpc_api::DenyUnsafe;
```
- Imports:
  - std::sync::Arc: A thread-safe reference counter that allows multiple owners of a value. `Arc` is used to ensure that data can be shared between multiple threads without causing data races or other synchronization problems. It is often used to share client instances or database connections.
  - jsonrpsee::RpcModule: Part of the `jsonrpsee` library, which provides an efficient JSON-RPC implementation for Rust. `RpcModule` enables the creation of modular RPC APIs, which can then be integrated into the RPC server of the Substrate node.
  - dev_node_runtime::opaque::Block, AccountId, Balance, Nonce: Imports specific types from the runtime definition of the node. These types are essential for building the RPC API, as they make it possible to interact directly with the blockchain and query information such as block details, account IDs, balances and nonces.
  - sc_transaction_pool_api::TransactionPool: Enables access to the transaction pool of the node. This is necessary to request information about pending transactions or to submit new transactions to the pool.
  - sp_api::ProvideRuntimeApi: A trait provided by the client implementation to gain access to the specific API functions of the runtime. This is necessary for operations that query or manipulate data directly from the running blockchain instance.
  - sp_block_builder and sp_blockchain: These modules and traits enable functions related to creating blocks and querying blockchain information such as block headers or blockchain metadata.
- Type definition:
  - sc_rpc_api::DenyUnsafe: A re-export of the `DenyUnsafe` type, which is used to control whether certain RPC calls should be allowed or denied due to security concerns. This is particularly important for publicly accessible nodes, where certain operations should not be exposed via the RPC interface.
Structure and types
The structure and types section defines structures and types that are necessary for the configuration and functioning of the RPC extensions.
In this particular case, the structure `FullDeps` is defined, which encapsulates the dependencies for a "full" client.
This is a central part of the RPC configuration as it brings together the components required to make the RPC interface fully functional.
```rust
pub struct FullDeps<C, P> {
    /// The client instance to use.
    pub client: Arc<C>,
    /// Transaction pool instance.
    pub pool: Arc<P>,
    /// Whether to deny unsafe calls
    pub deny_unsafe: DenyUnsafe,
}
```
- pub client: Arc&lt;C&gt;: An instance of the client, wrapped in an `Arc` (atomic reference counter). `C` is a generic type that stands for the blockchain client. The use of `Arc` allows this client instance to be safely shared between multiple threads, which is important for the asynchronous and parallel nature of blockchain technology.
- pub pool: Arc&lt;P&gt;: Similar to `client`, `pool` refers to the transaction pool. The type `P` is also generic and stands for the specific implementation of the transaction pool. By using `Arc`, this component can also be safely shared between threads, which is essential for processing transactions within the node.
- pub deny_unsafe: DenyUnsafe: This field adds a layer of security by controlling whether certain RPC calls classified as "unsafe" should be allowed or denied. The `DenyUnsafe` type is a simple enumeration or structure (depending on the implementation) that essentially serves as a flag indicating whether or not unsafe calls are allowed.
The `FullDeps` structure is a core part of the RPC setup, as it cleanly encapsulates and manages all dependencies needed for the RPC endpoints to work.
When a new RPC extension is created, it can easily gain access to the client, transaction pool and security settings by being passed an instance of `FullDeps`.
RPC extension function
The `create_full` function is responsible for creating the complete RPC extension structure for a Substrate node.
This structure integrates various RPC modules that provide specific blockchain functions and data via an HTTP or WebSocket interface.
The extension allows developers and users to interact with the blockchain without having to know the internal details of the blockchain.
pub fn create_full<C, P>(
deps: FullDeps<C, P>,
) -> Result<RpcModule<()>, Box<dyn std::error::Error + Send + Sync>>
where
C: ProvideRuntimeApi<Block>,
C: HeaderBackend<Block> + HeaderMetadata<Block, Error = BlockChainError> + 'static,
C: Send + Sync + 'static,
C::Api: substrate_frame_rpc_system::AccountNonceApi<Block, AccountId, Nonce>,
C::Api: pallet_transaction_payment_rpc::TransactionPaymentRuntimeApi<Block, Balance>,
C::Api: BlockBuilder<Block>,
P: TransactionPool + 'static,
{
use pallet_transaction_payment_rpc::{TransactionPayment, TransactionPaymentApiServer};
use substrate_frame_rpc_system::{System, SystemApiServer};
let mut module = RpcModule::new(());
let FullDeps { client, pool, deny_unsafe } = deps;
// ADDITIONAL RPC METHODS
// ...
module.merge(System::new(client.clone(), pool, deny_unsafe).into_rpc())?;
module.merge(TransactionPayment::new(client).into_rpc())?;
// Custom APIs ...
// CHAIN SPECIFICATION AND METADATA ...
Ok(module)
}
- Function declaration and type restrictions: The function takes `deps` as input, an instance of `FullDeps` that contains the necessary dependencies (client, transaction pool, security settings). It returns a result that contains either a successfully initialized `RpcModule` or an error. The type parameters `C` and `P` represent the blockchain client and the transaction pool, with specific constraints and capabilities defined by the `where` clause.
- Core logic:
  - RpcModule: A new `RpcModule` is initialized. This module serves as a container for all RPC endpoints.
  - FullDeps: The dependencies are extracted from the `FullDeps` object. This enables access to the client, the transaction pool and the security settings.
  - System core rpc module: Provides basic system information and functionality, such as querying the current block number or an account nonce.
  - TransactionPayment core rpc module: Allows querying transaction fee information required for sending transactions.
  - .merge method: Inserts the specific RPC endpoints of the two modules into the main RPC module.
At the end, the function returns the assembled RpcModule, which is now ready to be addressed via the network.
The `create_full` function is crucial for the configuration of the RPC interface of a Substrate node.
It makes it possible to combine various RPC modules into a coherent package, which is then made available to external clients to interact with the blockchain.
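As a loose standalone analogue of how `module.merge(...)` composes endpoints, the sketch below (not the actual jsonrpsee `RpcModule`; all names and the error type are illustrative) collects named handlers and rejects duplicate method names, mirroring why `merge` returns a `Result`:

```rust
use std::collections::HashMap;

// Highly simplified analogue of an RPC module (illustrative only):
// each sub-module contributes named methods, and merging rejects duplicates.
#[derive(Default)]
struct RpcModule {
    methods: HashMap<String, fn() -> String>,
}

impl RpcModule {
    fn register(&mut self, name: &str, handler: fn() -> String) {
        self.methods.insert(name.to_string(), handler);
    }

    fn merge(&mut self, other: RpcModule) -> Result<(), String> {
        for (name, handler) in other.methods {
            if self.methods.contains_key(&name) {
                return Err(format!("method `{name}` already registered"));
            }
            self.methods.insert(name, handler);
        }
        Ok(())
    }
}

fn main() {
    let mut root = RpcModule::default();

    let mut system = RpcModule::default();
    system.register("system_name", || "dev-node".to_string());

    let mut payment = RpcModule::default();
    payment.register("payment_queryFeeDetails", || "{ .. }".to_string());

    root.merge(system).unwrap();
    root.merge(payment).unwrap();
    assert_eq!(root.methods.len(), 2);
}
```

The duplicate check is the interesting part: it is what allows many independently developed RPC modules to be combined into one endpoint table without silently shadowing each other.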
service.rs
This file contains the logic for creating and configuring the service instance for the Substrate Node, which is required for the execution of the blockchain. It can be organized into the following sections:
- Imports and definition: This is where the necessary external libraries and modules required to build the service are imported. In addition, specific type definitions are created that are used in later sections, such as `FullClient`, `FullBackend` and `FullSelectChain`.
- Constants: Defines constants that are used throughout the service, such as `GRANDPA_JUSTIFICATION_PERIOD`, which specifies how often justifications are imported and generated.
- Service type definition: Defines a service type that contains a partial configuration of the components required to set up the complete node service.
- new_partial function: This function creates a partial service configuration. It initializes essential components such as the client, backend, transaction pool and import queue, which are necessary for the execution of the blockchain. Optionally, telemetry can also be configured here.
- new_full function: Creates the complete service for a full client. This function calls `new_partial` to obtain the partial components and then adds components that are specifically required for a full client, such as the network configuration, the GRANDPA voter and possible RPC extensions.
  - RPC extension: Defines a function (within `new_full`) for adding user-defined RPC extensions to the service. This enables the node to offer additional RPC calls beyond the standard Substrate RPCs.
  - Network and consensus configuration: The network is configured here (within `new_full`), including peer-set configurations for `GRANDPA` and possibly warp sync. The `AURA` consensus algorithm and the `GRANDPA` finalization process are also initialized.
  - Task manager and service start: Initializes the task manager and adds all necessary tasks that need to be executed in the background. Finally, the network is started, making the node ready for operation.
Imports and definition
This section of service.rs defines the basic building blocks for creating and operating a Substrate service.
use futures::FutureExt;
use dev_node_runtime::{opaque::Block, RuntimeApi};
use sc_client_api::{Backend, BlockBackend};
use sc_consensus_aura::{ImportQueueParams, SlotProportion, StartAuraParams};
use sc_consensus_grandpa::SharedVoterState;
use sc_service::{error::Error as ServiceError, Configuration, TaskManager, WarpSyncParams};
use sc_telemetry::{Telemetry, TelemetryWorker};
use sc_transaction_pool_api::OffchainTransactionPoolFactory;
use sp_consensus_aura::sr25519::AuthorityPair as AuraPair;
use std::{sync::Arc, time::Duration};
pub(crate) type FullClient = sc_service::TFullClient<
Block,
RuntimeApi,
sc_executor::WasmExecutor<sp_io::SubstrateHostFunctions>,
>;
type FullBackend = sc_service::TFullBackend<Block>;
type FullSelectChain = sc_consensus::LongestChain<FullBackend, Block>;
- Imports:
  - futures: Imports extension methods for futures, which enable additional methods such as `boxed` or `map`. This is required for asynchronous programming.
  - dev_node_runtime: Imports the specific block type and runtime API of the node. `opaque::Block` is an opaque representation of a block that can be used across network boundaries, and `RuntimeApi` defines the specific functions provided by the runtime.
  - sc_client_api: Provides interfaces for interacting with the blockchain database. `Backend` is a generic interface for blockchain database systems, while `BlockBackend` provides specific methods for working with blocks.
  - sc_consensus_aura: Defines types and parameters for the `AURA` consensus algorithm. This includes configurations for the import queue, slot proportions and parameters for starting AURA.
  - sc_consensus_grandpa: Refers to the state of the `GRANDPA` voting process that is shared between multiple parts of the system.
  - sc_service: Imports central service components, including error definitions, configuration structures and the `TaskManager`, which is responsible for managing asynchronous tasks.
  - sc_telemetry: Provides functionality for telemetry, i.e. collecting and transmitting runtime information of the node.
  - sc_transaction_pool_api: Provides `OffchainTransactionPoolFactory`, a factory for creating a transaction pool for offchain transactions.
  - sp_consensus_aura: Defines the key pair type for `AURA` authorities, specific to the `sr25519` cryptographic algorithm.
  - std: Standard Rust imports for atomic reference counting (`Arc`) and time durations (`Duration`).
- Type definitions:
  - FullClient: A specific client type that uses the block type, the runtime API and the Wasm executor. This client is fully equipped to interact with the blockchain.
  - FullBackend and FullSelectChain: Define the backend type used for data storage and the method for selecting the longest chain under the blockchain's consensus rules.
Constants
const GRANDPA_JUSTIFICATION_PERIOD: u32 = 512;
This constant defines the minimum period, measured in blocks, over which justifications are imported and generated.
A justification is a proof that a certain block has been accepted by the majority of validator nodes in the network.
The value `512` means that after every sequence of 512 blocks, justifications must be generated and taken into account during import.
The importance of this constant lies in ensuring the efficiency and security of the network: justifications are generated regularly without overusing network resources.
In the context of `GRANDPA`, this helps to coordinate and secure the final approval of blocks, which is particularly important in a fork situation.
By setting a specific period, a balance between performance and security is achieved.
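As a simplified illustration of this period logic (the actual import rules in `sc-consensus-grandpa` are more involved), one can flag blocks on the 512-block boundary like this:

```rust
// The constant as defined in the node; the surrounding logic is a
// simplified illustration, not the sc-consensus-grandpa implementation.
const GRANDPA_JUSTIFICATION_PERIOD: u32 = 512;

/// A block that falls on the period boundary must carry a justification.
fn requires_justification(block_number: u32) -> bool {
    block_number % GRANDPA_JUSTIFICATION_PERIOD == 0
}

fn main() {
    assert!(requires_justification(512));
    assert!(requires_justification(1024));
    assert!(!requires_justification(513));
}
```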
Service type definition
In this section of the `service.rs` file, a service type is defined as a collection of components required to build and operate the node service.
This definition uses the `PartialComponents` structure from the `sc_service` crate to enable a modular configuration of the service.
pub type Service = sc_service::PartialComponents<
FullClient,
FullBackend,
FullSelectChain,
sc_consensus::DefaultImportQueue<Block>,
sc_transaction_pool::FullPool<Block, FullClient>,
(
sc_consensus_grandpa::GrandpaBlockImport<FullBackend, Block, FullClient, FullSelectChain>,
sc_consensus_grandpa::LinkHalf<Block, FullClient, FullSelectChain>,
Option<Telemetry>,
),
>;
- FullClient: This is the full client type used to access blockchain data. It includes API access points that allow it to interact with the runtime logic.
- FullBackend: This represents the backend of the client, i.e. the part responsible for storing blockchain data.
- FullSelectChain: This is the implementation of the logic for selecting the "best" or "longest" chain used for consensus decisions.
- sc_consensus::DefaultImportQueue: The default import queue for incoming blocks.
- sc_transaction_pool::FullPool: The full transaction pool that stores transactions waiting to be included in future blocks.
- GrandpaBlockImport and LinkHalf: The tuple elements (`sc_consensus_grandpa::GrandpaBlockImport` and `sc_consensus_grandpa::LinkHalf`) are specific to the `GRANDPA` consensus mechanism. They enable the integration of `GRANDPA` into the block import process and provide the link between block import and the `GRANDPA` logic.
- Telemetry: An optional telemetry component that can be used to transmit network and performance data to a telemetry server.
new_partial function
The `new_partial` function is crucial for initializing the node service by configuring and initializing basic components such as the client, the backend, the transaction pool and the import queue.
This function is specifically designed to create the parts of the service that are necessary for the operation of the node before the full service with all its functions is started.
pub fn new_partial(config: &Configuration) -> Result<Service, ServiceError> {
// telemetry
let telemetry = config
.telemetry_endpoints
.clone()
.filter(|x| !x.is_empty())
.map(|endpoints| -> Result<_, sc_telemetry::Error> {
let worker = TelemetryWorker::new(16)?;
let telemetry = worker.handle().new_telemetry(endpoints);
Ok((worker, telemetry))
})
.transpose()?;
// executor
let executor = sc_service::new_wasm_executor::<sp_io::SubstrateHostFunctions>(config);
// client, backend, keystore_container, task_manager
let (client, backend, keystore_container, task_manager) =
sc_service::new_full_parts::<Block, RuntimeApi, _>(
config,
telemetry.as_ref().map(|(_, telemetry)| telemetry.handle()),
executor,
)?;
// client Arc
let client = Arc::new(client);
// telemetry setup
let telemetry = telemetry.map(|(worker, telemetry)| {
task_manager.spawn_handle().spawn("telemetry", None, worker.run());
telemetry
});
let select_chain = sc_consensus::LongestChain::new(backend.clone());
// transaction_pool
let transaction_pool = sc_transaction_pool::BasicPool::new_full(
config.transaction_pool.clone(),
config.role.is_authority().into(),
config.prometheus_registry(),
task_manager.spawn_essential_handle(),
client.clone(),
);
// grandpa_block_import, grandpa_link
let (grandpa_block_import, grandpa_link) = sc_consensus_grandpa::block_import(
client.clone(),
GRANDPA_JUSTIFICATION_PERIOD,
&client,
select_chain.clone(),
telemetry.as_ref().map(|x| x.handle()),
)?;
// slot_duration
let slot_duration = sc_consensus_aura::slot_duration(&*client)?;
// import_queue
let import_queue =
sc_consensus_aura::import_queue::<AuraPair, _, _, _, _, _>(ImportQueueParams {
block_import: grandpa_block_import.clone(),
justification_import: Some(Box::new(grandpa_block_import.clone())),
client: client.clone(),
create_inherent_data_providers: move |_, ()| async move {
let timestamp = sp_timestamp::InherentDataProvider::from_system_time();
let slot =
sp_consensus_aura::inherents::InherentDataProvider::from_timestamp_and_slot_duration(
*timestamp,
slot_duration,
);
Ok((slot, timestamp))
},
spawner: &task_manager.spawn_essential_handle(),
registry: config.prometheus_registry(),
check_for_equivocation: Default::default(),
telemetry: telemetry.as_ref().map(|x| x.handle()),
compatibility_mode: Default::default(),
})?;
// return
Ok(sc_service::PartialComponents {
client,
backend,
task_manager,
import_queue,
keystore_container,
select_chain,
transaction_pool,
other: (grandpa_block_import, grandpa_link, telemetry),
})
}
- telemetry: It first checks whether telemetry endpoints are configured and, if so, creates a `TelemetryWorker` to send telemetry data to these endpoints. The telemetry component allows the node to transmit performance and behavioral data that can be used to monitor and analyze the network.
- executor: A WASM executor is created to enable execution of the runtime in WASM format. This step is crucial for the flexibility and security of the Substrate framework, as it allows the blockchain logic to run in an isolated environment.
- client, backend, keystore_container, task_manager: Core components of the service are initialized, including the client for accessing blockchain data, the backend for storing this data, a keystore container for cryptographic keys and a task manager for managing asynchronous tasks.
- client Arc: The created client, which provides core access to the blockchain data and APIs, is wrapped in an `Arc` (Atomic Reference Counted) pointer. `Arc` provides thread-safe shared access to the client across multiple parts of the system without having to worry about its lifetime. This is particularly important in an asynchronous, multi-threaded environment such as a Substrate node, where different parts of the system access the client simultaneously.
- telemetry setup: The telemetry configuration is processed further by starting a new asynchronous task for each telemetry worker, which processes and sends the telemetry data. This is done via the `task_manager`, which manages and executes background tasks in the system. The line `task_manager.spawn_handle().spawn("telemetry", None, worker.run());` effectively starts the telemetry worker as a separate task that sends the collected telemetry data to the configured endpoints.
- transaction_pool: A transaction pool is set up to manage transactions waiting to be included in blocks. This pool enables the node to efficiently process and prioritize incoming transactions.
- grandpa_block_import, grandpa_link: Components for the `GRANDPA` consensus mechanism are initialized, including the logic for importing blocks and the link to the `GRANDPA` logic. These steps are essential for integrating the consensus mechanism into block processing.
- slot_duration and import_queue: The slot duration for the AURA consensus mechanism is determined, and an `ImportQueue` for incoming blocks is configured based on it. The import queue is crucial for processing and validating incoming blocks before they are added to the blockchain.
- Return: Finally, all initialized components are combined in a `PartialComponents` object and returned. These `PartialComponents` provide the basis for the further initialization of the complete service.
This function represents a central point in the configuration and initialization of the node service by providing the necessary components for starting the node.
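One detail worth highlighting from the telemetry setup is the `Option<Result<...>>::transpose` idiom: telemetry is optional, its construction is fallible, and `transpose` converts the combination into a single value that `?` can handle. A minimal standalone sketch of the same pattern (using `Vec<String>` as a stand-in for the real worker and endpoint types):

```rust
// Mirrors the telemetry initialization: optionally configured, fallible to
// build, collapsed into one Result via `transpose`.
fn init_telemetry(endpoints: Option<Vec<String>>) -> Result<Option<Vec<String>>, String> {
    endpoints
        // An empty endpoint list counts as "not configured", as in new_partial.
        .filter(|e| !e.is_empty())
        .map(|e| -> Result<Vec<String>, String> {
            if e.iter().any(|url| url.is_empty()) {
                return Err("empty telemetry endpoint".to_string());
            }
            Ok(e) // stands in for constructing the TelemetryWorker
        })
        .transpose() // Option<Result<T, E>> -> Result<Option<T>, E>
}

fn main() {
    // No endpoints configured: telemetry is simply skipped, not an error.
    assert_eq!(init_telemetry(None), Ok(None));
    assert_eq!(init_telemetry(Some(vec![])), Ok(None));
    // A configured endpoint is passed through.
    let configured = init_telemetry(Some(vec!["wss://telemetry.example".into()]));
    assert_eq!(configured, Ok(Some(vec!["wss://telemetry.example".to_string()])));
}
```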
new_full function
In this section of `service.rs`, the `new_full` function is defined, which performs the full configuration and initialization of a Substrate-based blockchain node.
This function complements the partial components created by the `new_partial` function to create a fully functional node.
General configuration and initialization
This code section from the `new_full` function configures and initializes the complete node service based on the passed-in configuration.
pub fn new_full(config: Configuration) -> Result<TaskManager, ServiceError> {
// Basic configuration and partial components
let sc_service::PartialComponents {
client,
backend,
mut task_manager,
import_queue,
keystore_container,
select_chain,
transaction_pool,
other: (block_import, grandpa_link, mut telemetry),
} = new_partial(&config)?;
// General network configuration
let mut net_config = sc_network::config::FullNetworkConfiguration::new(&config.network);
// General GRANDPA consensus and Warp-Sync configuration necessary for network initialization
let grandpa_protocol_name = sc_consensus_grandpa::protocol_standard_name(
&client.block_hash(0).ok().flatten().expect("Genesis block exists; qed"),
&config.chain_spec,
);
let (grandpa_protocol_config, grandpa_notification_service) =
sc_consensus_grandpa::grandpa_peers_set_config(grandpa_protocol_name.clone());
net_config.add_notification_protocol(grandpa_protocol_config);
let warp_sync = Arc::new(sc_consensus_grandpa::warp_proof::NetworkProvider::new(
backend.clone(),
grandpa_link.shared_authority_set().clone(),
Vec::default(),
));
// Network initialization
let (network, system_rpc_tx, tx_handler_controller, network_starter, sync_service) =
sc_service::build_network(sc_service::BuildNetworkParams {
config: &config,
net_config,
client: client.clone(),
transaction_pool: transaction_pool.clone(),
spawn_handle: task_manager.spawn_handle(),
import_queue,
block_announce_validator_builder: None,
warp_sync_params: Some(WarpSyncParams::WithProvider(warp_sync)),
block_relay: None,
})?;
// General node configuration
let role = config.role.clone();
let force_authoring = config.force_authoring;
let backoff_authoring_blocks: Option<()> = None;
let name = config.network.node_name.clone();
let enable_grandpa = !config.disable_grandpa;
let prometheus_registry = config.prometheus_registry().cloned();
- Basic configuration and partial components: The first step is the creation of the partial components via `new_partial(&config)?`. This function constructs the basic components of the service, such as the blockchain client, the backend (for storing block data), the `TaskManager` (for executing asynchronous tasks), the import queue (for processing incoming blocks), the `TransactionPool` (for managing transactions in the mempool) and additional components such as `BlockImport` and `GrandpaLink` for the consensus logic. These components are necessary to initialize and operate the network service and the consensus mechanisms.
- General network configuration: A network configuration is created here with `sc_network::config::FullNetworkConfiguration`, based on the passed-in configuration. It includes various network parameters such as peering settings, protocol details and more that are required for communication in the Substrate network.
- GRANDPA consensus and warp-sync configuration: These sections configure the GRANDPA consensus mechanism and the warp-sync functionality. GRANDPA (GHOST-based Recursive Ancestor Deriving Prefix Agreement) is responsible for the finalization of blocks. Warp sync allows new nodes to quickly synchronize to the current state of the blockchain by using snapshot information about the chain. The GRANDPA protocol is added to the network configuration so that nodes can exchange finalization information; it contains the protocol name and the notification service configuration for GRANDPA.
- Network initialization: In this step, the network is initialized with the previously defined configuration. This includes the creation of network services, the management of RPC transactions, the handling of transactions and the start of the network. This process links the network component with the blockchain client, the `TransactionPool` and other necessary services to enable network communication and synchronization.
- General node configuration: The last group of settings refers to the general configuration of the node, including its role in the network (e.g. validator, full node, light client), whether forced block authoring is enabled, and the configuration of GRANDPA finalization. It also contains the node name and the configuration for the telemetry system.
Offchain worker
The offchain worker section configures and activates offchain workers for the node, if specified in the configuration. Offchain workers allow certain tasks to be performed outside of the blockchain while still being able to interact with the blockchain.
These can be used for a variety of purposes:
- Retrieving external data and feeding it into the blockchain.
- Performing complex calculations that are not intended to be performed as part of a transaction.
- Interacting with other services or blockchains outside of the current blockchain.
- ...
// Offchain worker
if config.offchain_worker.enabled {
task_manager.spawn_handle().spawn(
"offchain-workers-runner",
"offchain-worker",
sc_offchain::OffchainWorkers::new(sc_offchain::OffchainWorkerOptions {
runtime_api_provider: client.clone(),
is_validator: config.role.is_authority(),
keystore: Some(keystore_container.keystore()),
offchain_db: backend.offchain_storage(),
transaction_pool: Some(OffchainTransactionPoolFactory::new(
transaction_pool.clone(),
)),
network_provider: network.clone(),
enable_http_requests: true,
custom_extensions: |_| vec![],
})
.run(client.clone(), task_manager.spawn_handle())
.boxed(),
);
}
- config.offchain_worker.enabled: Checks whether offchain workers are enabled in the node configuration. If yes, the following code block is executed.
- task_manager.spawn_handle().spawn(...): Starts a new asynchronous task for the execution of offchain worker tasks. This task runs independently of block processing and can perform asynchronous operations.
- sc_offchain: Initializes a new instance of the offchain workers with specific options:
- runtime_api_provider: The blockchain client that gives the offchain workers access to the runtime API.
- is_validator: A boolean value that indicates whether the node has a validator role in the network. This can influence the type of tasks that the offchain worker performs.
- keystore: The keystore used to create signed transactions or messages.
- offchain_db: The storage area for offchain data that allows offchain workers to store data between sessions.
- transaction_pool: The transaction pool where offchain workers can submit transactions for onchain processing.
- network_provider: Access to the network to communicate with other nodes or retrieve data.
- enable_http_requests: Specifies whether offchain workers are allowed to send HTTP requests to external servers.
- custom_extensions: Enables the definition of user-defined extensions for specific offchain worker tasks.
The purpose of this configuration is to provide offchain workers with the necessary tools and information to perform their tasks effectively. Offchain workers play an important role in extending the functionality of blockchain applications by enabling offchain tasks that would otherwise be too expensive or technically impossible to perform directly on the blockchain.
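The idea can be reduced to a toy sketch: work runs off the main block-processing path, and its results re-enter the chain through the transaction pool. All names below are stand-ins, not Substrate APIs:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Toy transaction pool: a thread-safe list of pending transaction strings.
type TxPool = Arc<Mutex<Vec<String>>>;

// Toy offchain worker: runs on its own thread, does work the runtime could
// not do cheaply (e.g. fetching external data), and submits the result.
fn spawn_offchain_worker(pool: TxPool) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        // Stands in for fetching external data or an expensive computation.
        let price = 42u64;
        // Feed the result back on-chain as a transaction.
        pool.lock().unwrap().push(format!("submit_price({price})"));
    })
}

fn main() {
    let pool: TxPool = Arc::new(Mutex::new(Vec::new()));
    spawn_offchain_worker(pool.clone()).join().unwrap();

    let txs = pool.lock().unwrap();
    assert_eq!(txs.len(), 1);
    assert_eq!(txs[0], "submit_price(42)");
}
```

This mirrors the shape of the real configuration above: the worker gets a handle to the transaction pool (`OffchainTransactionPoolFactory` in the actual code) precisely so that offchain results can be turned into onchain transactions.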
RPC extension
The RPC section in the `new_full` function configures the Remote Procedure Call (RPC) extensions for the Substrate node.
RPCs allow external clients and applications to interact with the node by requesting data or performing actions.
This section ensures that the node is equipped with the necessary RPC methods to support these interactions.
// RPC extension
let rpc_extensions_builder = {
let client = client.clone();
let pool = transaction_pool.clone();
Box::new(move |deny_unsafe, _| {
let deps =
crate::rpc::FullDeps { client: client.clone(), pool: pool.clone(), deny_unsafe };
crate::rpc::create_full(deps).map_err(Into::into)
})
};
let _rpc_handlers = sc_service::spawn_tasks(sc_service::SpawnTasksParams {
network: network.clone(),
client: client.clone(),
keystore: keystore_container.keystore(),
task_manager: &mut task_manager,
transaction_pool: transaction_pool.clone(),
rpc_builder: rpc_extensions_builder,
backend,
system_rpc_tx,
tx_handler_controller,
sync_service: sync_service.clone(),
config,
telemetry: telemetry.as_mut(),
})?;
- rpc_extensions_builder: This is a builder function that uses a closure to define the RPC dependencies and create the full RPC extensions. These extensions allow the node to process specific requests beyond the standard set of Substrate RPCs.
  - client: A cloned instance of the Substrate client that provides access to blockchain data and functions.
  - pool: The transaction pool used for managing and sending transactions to the network.
  - deny_unsafe: A flag that specifies whether unsafe RPC calls (those that could potentially jeopardize the security or privacy of the node) should be prohibited.
  - deps: The closure within `rpc_extensions_builder` assembles the full RPC dependencies and calls `crate::rpc::create_full(deps)` to initialize the specific RPC extensions available on this node. This may include custom RPCs defined in `crate::rpc`.
- rpc_handlers: This variable captures the RPC handlers returned by the `sc_service::spawn_tasks` function, which takes the RPC extensions and integrates them into the service so that they are accessible via the network.
  - spawn_tasks: The `spawn_tasks` function also configures other services such as network management, synchronization and telemetry, which together with the RPC extensions constitute the wider service ecosystem.
Providing RPC extensions is critical to the functionality of the node as it determines how to interact with the node. By offering RPCs, a blockchain node can perform a wide range of functions, from simply querying chain data to initiating transactions or executing smart contract functions.
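The builder pattern above can be reduced to a standalone sketch: a boxed closure captures cloned `Arc` handles so that every invocation can hand out fresh clones. `Client`, `Pool` and `Deps` below are stand-ins for the real node types:

```rust
use std::sync::Arc;

// Stand-ins for the node's client and transaction pool types.
struct Client;
struct Pool;

// Stand-in for crate::rpc::FullDeps.
struct Deps {
    client: Arc<Client>,
    pool: Arc<Pool>,
    deny_unsafe: bool,
}

// A boxed builder closure, analogous to rpc_extensions_builder.
type RpcBuilder = Box<dyn Fn(bool) -> Deps + Send>;

fn make_rpc_builder(client: Arc<Client>, pool: Arc<Pool>) -> RpcBuilder {
    // `move` transfers the Arcs into the closure; each call clones them again,
    // mirroring `client.clone()` / `pool.clone()` in the real builder.
    Box::new(move |deny_unsafe| Deps {
        client: client.clone(),
        pool: pool.clone(),
        deny_unsafe,
    })
}

fn main() {
    let builder = make_rpc_builder(Arc::new(Client), Arc::new(Pool));
    // The builder can be invoked repeatedly; every Deps shares the same client.
    let deps = builder(true);
    let deps2 = builder(false);
    assert!(deps.deny_unsafe);
    assert!(!deps2.deny_unsafe);
    assert_eq!(Arc::strong_count(&deps.client), 3); // closure + deps + deps2
}
```

The design choice is the same as in `service.rs`: the service layer only needs a factory it can call per connection, while the closure keeps cheap, thread-safe handles to the heavyweight components alive.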
AURA and GRANDPA consensus configuration
The `if` statements for `AURA` and `GRANDPA` within the `new_full` function set up the consensus mechanisms and start the relevant background processes based on the role of the node in the network.
// Start AURA authoring task (if Validator)
if role.is_authority() {
let proposer_factory = sc_basic_authorship::ProposerFactory::new(
task_manager.spawn_handle(),
client.clone(),
transaction_pool.clone(),
prometheus_registry.as_ref(),
telemetry.as_ref().map(|x| x.handle()),
);
// slot_duration
let slot_duration = sc_consensus_aura::slot_duration(&*client)?;
// AURA consensus configuration
let aura = sc_consensus_aura::start_aura::<AuraPair, _, _, _, _, _, _, _, _, _, _>(
StartAuraParams {
slot_duration,
client,
select_chain,
block_import,
proposer_factory,
create_inherent_data_providers: move |_, ()| async move {
let timestamp = sp_timestamp::InherentDataProvider::from_system_time();
let slot =
sp_consensus_aura::inherents::InherentDataProvider::from_timestamp_and_slot_duration(
*timestamp,
slot_duration,
);
Ok((slot, timestamp))
},
force_authoring,
backoff_authoring_blocks,
keystore: keystore_container.keystore(),
sync_oracle: sync_service.clone(),
justification_sync_link: sync_service.clone(),
block_proposal_slot_portion: SlotProportion::new(2f32 / 3f32),
max_block_proposal_slot_portion: None,
telemetry: telemetry.as_ref().map(|x| x.handle()),
compatibility_mode: Default::default(),
},
)?;
task_manager
.spawn_essential_handle()
.spawn_blocking("aura", Some("block-authoring"), aura);
}
// Start GRANDPA Voter (if Validator or FullNode or ArchiveNode or special case)
if enable_grandpa {
let keystore = if role.is_authority() { Some(keystore_container.keystore()) } else { None };
// Full GRANDPA consensus and Warp-Sync configuration
let grandpa_config = sc_consensus_grandpa::Config {
gossip_duration: Duration::from_millis(333),
justification_generation_period: GRANDPA_JUSTIFICATION_PERIOD,
name: Some(name),
observer_enabled: false,
keystore,
local_role: role,
telemetry: telemetry.as_ref().map(|x| x.handle()),
protocol_name: grandpa_protocol_name,
};
// Start the full GRANDPA voter
let grandpa_config = sc_consensus_grandpa::GrandpaParams {
config: grandpa_config,
link: grandpa_link,
network,
sync: Arc::new(sync_service),
notification_service: grandpa_notification_service,
voting_rule: sc_consensus_grandpa::VotingRulesBuilder::default().build(),
prometheus_registry,
shared_voter_state: SharedVoterState::empty(),
telemetry: telemetry.as_ref().map(|x| x.handle()),
offchain_tx_pool_factory: OffchainTransactionPoolFactory::new(transaction_pool),
};
// the GRANDPA voter task is considered infallible, i.e.
// if it fails we take down the service with it.
task_manager.spawn_essential_handle().spawn_blocking(
"grandpa-voter",
None,
sc_consensus_grandpa::run_grandpa_voter(grandpa_config)?,
);
}
- AURA:
  - proposer_factory: Creates a factory used to propose new blocks during the consensus process. This factory uses components such as the transaction pool and the Prometheus registry for metrics.
  - Slot duration: Determines the duration of a slot, a fundamental parameter of the `AURA` consensus protocol. The slot duration determines how often blocks are created.
  - AURA start: Initiates the `AURA` consensus process with several configurations, including the slot duration, the client, the block import function and the way in which inherent data is provided. It also defines the conditions for block proposal and synchronization with the network.
  - Essential task: The `AURA` block-authoring task is considered essential. If this task fails, the service is shut down, as it is critical to the operation of the network.
- GRANDPA:
  - Keystore: Checks whether the node acts as an authority and provides a keystore if so. This is necessary for participation in the consensus process.
  - Configuration: Sets the configuration for the `GRANDPA` consensus, including the gossip duration and the period for generating justifications. Observer mode and the protocol name are also configured here.
  - GRANDPA voter: Starts the full `GRANDPA` voter when the node acts as a validator, full node, archive node or in a special case. This component is crucial for participating in final block confirmation and ensures network finality.
  - Infallible task: As with `AURA`, the `GRANDPA` voter task is considered infallible. Failure of this component leads to the shutdown of the service, as it is essential for the security and finality of the network.
These sections configure and start the consensus mechanisms `AURA` and `GRANDPA` based on the role of the node and the network configuration.
`AURA` is used for block generation, while `GRANDPA` is responsible for block finalization.
Both mechanisms are essential for the operation of the Substrate-based blockchain network and are treated as critical background processes whose failure would affect the operation of the entire node.
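One concrete parameter from the AURA setup is `block_proposal_slot_portion: SlotProportion::new(2f32 / 3f32)`, which caps how much of each slot may be spent authoring a block. A back-of-the-envelope sketch of that budget, using integer milliseconds and a hypothetical 6-second slot (the real `SlotProportion` lives in the consensus-slots crate):

```rust
// Compute the authoring budget as a fixed fraction of the slot duration.
// Integer milliseconds avoid float rounding; values here are illustrative.
fn proposal_budget_ms(slot_ms: u64, numerator: u64, denominator: u64) -> u64 {
    slot_ms * numerator / denominator
}

fn main() {
    let slot_ms = 6_000; // hypothetical slot duration
    let budget = proposal_budget_ms(slot_ms, 2, 3);
    // 4 s for authoring; the remainder is left for import and propagation.
    assert_eq!(budget, 4_000);
}
```

Reserving the final third of the slot gives peers time to import and verify the block before the next slot begins, which is why the proposal portion is deliberately less than 1.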
Task manager and service start
The last section of the `new_full` function in the service module of a Substrate-based node takes care of starting the network and returning the task manager.
This part is crucial for initializing and starting the node service.
network_starter.start_network();
Ok(task_manager)
- network_starter.start_network(): This line starts the network. The `network_starter` is a structure that was prepared during network configuration and initialization. This call initiates connection setup with other nodes in the blockchain network, enabling the node to become part of the peer-to-peer (P2P) network, synchronize blocks, exchange transactions and perform other network-specific activities.
- Ok(task_manager): The `TaskManager` is returned at the end of the function. It is a central component responsible for managing background tasks within the node, including executing the consensus logic, processing transactions, running offchain workers and handling network requests. By returning the `TaskManager`, the function gives the calling code control over these background processes, so that it can start them properly and shut them down if necessary.
To summarize, this section marks the transition from the setup phase, in which the node and its components are configured, to the execution phase, in which the node actively participates in the network.
The successful execution of this function means that the node is fully initialized and ready to fulfill its role in the blockchain network.
The `network_starter` activates the network capabilities of the node, while the `task_manager` takes over the management of all asynchronous tasks required for the operation of the node.