TON Metrics: Measure to Improve

15 Dec 2019
5 min read
guide
Telegram

PoS-based blockchain networks consist of a plethora of independent agents, each running its own infrastructure, and TON is no exception. The ability of such agents to sustain healthy cash flows depends primarily on the performance of their validating nodes, as rewards are distributed according to the number of generated blocks. Furthermore, businesses that rely on interaction with the blockchain ledger in their operating activity maintain infrastructure similar to that of validating teams, consisting primarily of nodes. Hence, the performance of those nodes is important to the company's overall success.

As a top block producer in many PoS-based blockchain networks, we understand which parameters help monitor and manage nodes effectively. We believe the most essential metrics for analysis are as follows:

  • blocks per round
  • block meantime
  • rewards per round/block/timeframe
  • correlation between blocks and rewards
  • server CPU and RAM utilization
  • outages and interrupts in service operation
  • latency-related errors
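For instance, block meantime, the average interval between consecutive blocks, can be sketched with a one-line awk pass over a list of block-creation timestamps like the one we extract in Step 1 below. The sample values here are hypothetical:

```shell
# Hypothetical block-creation timestamps (Unix seconds, one per line)
printf '%s\n' 1573636135 1573636146 1573636148 1573636149 1573636163 > /tmp/blocks_sample.log

# Block meantime = average interval between consecutive blocks
awk 'NR > 1 { sum += $1 - prev; n++ } { prev = $1 } END { if (n) printf "%.1f\n", sum / n }' /tmp/blocks_sample.log
# -> 7.0 (intervals of 11, 2, 1 and 14 seconds average to 7)
```

The same pass can be pointed at a real blocks_parsed.log once you have collected it.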

In our previous post we presented a helpful script that automates the application process for a TON validator and helps maintain continuity of work on the TON blockchain. In this post we share a set of tools that gather relevant node performance metrics. You can find the source code in our GitHub repository: https://github.com/everstake/ton-helpers. Please note that the project is experimental: use it at your own risk.

Step 1: Get Information for Charts and Tables

Please refer to the ton-validation repository.

Get rewards in Grams using grep and jq:

cat db.json | jq -r '._default' | jq '[.[]]' | grep "reward" | grep -v '"reward": -1,' | awk 'BEGIN{FS=":"} {print ($2/1000000000) }'
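The same filtering can also be expressed in jq alone. Below is a sketch against a minimal, hypothetical db.json of the same shape (the field names and values are assumptions for illustration): drop the placeholder reward of -1 and convert nanograms to Grams (1 Gram = 10^9 nanograms).

```shell
# Minimal, hypothetical db.json mimicking the validator script's database
cat > /tmp/db_sample.json <<'EOF'
{"_default": {"1": {"election_id": 1573576660, "reward": 5000000000},
              "2": {"election_id": 1573662196, "reward": -1},
              "3": {"election_id": 1573747732, "reward": 2500000000}}}
EOF

# Select real rewards and convert nanograms to Grams
jq -r '._default[].reward | select(. != -1) | . / 1000000000' /tmp/db_sample.json
# -> 5
#    2.5
```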

Convert JSON to CSV using jq:

a. Install jq, copy db.json from the validator node, and create a file filter.jq containing:

def tocsv:
  if length == 0 then empty
  else
    (.[0] | keys_unsorted) as $keys
    | (map(keys) | add | unique) as $allkeys
    | ($keys + ($allkeys - $keys)) as $cols
    | ($cols, (.[] as $row | $cols | map($row[.])))
    | @csv
  end;

tocsv

b. Then you can convert db.json to CSV and import it into LibreOffice Calc:

cat db.json | jq '._default' | jq '.[]' | jq -r -s -f filter.jq

Credits to Stack Overflow ;)
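To sanity-check the filter without a real db.json, you can feed it a couple of hypothetical records; the -s flag slurps the stream of objects into an array before tocsv runs:

```shell
# Write the filter from step (a) to a temporary file
cat > /tmp/filter.jq <<'EOF'
def tocsv:
  if length == 0 then empty
  else
    (.[0] | keys_unsorted) as $keys
    | (map(keys) | add | unique) as $allkeys
    | ($keys + ($allkeys - $keys)) as $cols
    | ($cols, (.[] as $row | $cols | map($row[.])))
    | @csv
  end;

tocsv
EOF

# Two hypothetical records in the same shape as the database entries
echo '{"election_id": 1573576660, "reward": 5000000000}
{"election_id": 1573662196, "reward": -1}' | jq -r -s -f /tmp/filter.jq
# -> "election_id","reward"
#    1573576660,5000000000
#    1573662196,-1
```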

Parse logs from the validating node to get your blocks

a. Run the following command in the background with nohup or tmux to collect logs while your node validates:

tail -F /TON/dir/with/logs/* | grep --line-buffered "new Block created" >> blocks.log

b. You'll get records like these in blocks.log:

[ 3][t 4][1573576660.370741129][collator.cpp:3695][!collate(-1,8000000000000000):1195270] new Block created
[ 3][t 6][1573576663.622580290][collator.cpp:3695][!collate(-1,8000000000000000):1195271] new Block created
[ 3][t 5][1573576733.900147438][collator.cpp:3695][!collate(-1,8000000000000000):1195292] new Block created
[ 3][t 3][1573576738.687150002][collator.cpp:3695][!collate(-1,8000000000000000):1195293] new Block created
[ 3][t 7][1573576741.845279932][collator.cpp:3695][!collate(-1,8000000000000000):1195294] new Block created
[ 3][t 4][1573576819.818135500][collator.cpp:3695][!collate(-1,8000000000000000):1195317] new Block created
[ 3][t 6][1573576824.916352749][collator.cpp:3695][!collate(-1,8000000000000000):1195318] new Block created
[ 3][t 3][1573576894.696839094][collator.cpp:3695][!collate(-1,8000000000000000):1195338] new Block created
[ 3][t 2][1573576897.733234882][collator.cpp:3695][!collate(-1,8000000000000000):1195339] new Block created
[ 3][t 1][1573576967.855431080][collator.cpp:3695][!collate(-1,8000000000000000):1195358] new Block created
[ 3][t 2][1573576970.334105492][collator.cpp:3695][!collate(-1,8000000000000000):1195359] new Block created
[ 3][t 4][1573576972.634427786][collator.cpp:3695][!collate(-1,8000000000000000):1195360] new Block created
c. Then parse them with awk and write the output to another file, e.g. blocks_parsed.log:

cat blocks.log | awk '{print $3}' | awk -F '[' '{print $2}' | awk -F '.' '{print $1}' > blocks_parsed.log

d. You'll get records like these:
1573636135
1573636146
1573636148
1573636149
1573636163
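You can verify the awk pipeline against a single hypothetical log line from step (b): field 3 carries the bracketed timestamp, and the two extra awk passes strip the bracket prefix and the fractional seconds.

```shell
# One log line in the format shown above
echo '[ 3][t 4][1573576660.370741129][collator.cpp:3695][!collate(-1,8000000000000000):1195270] new Block created' > /tmp/one_block.log

# Extract the whole-second timestamp, exactly as step (c) does
cat /tmp/one_block.log | awk '{print $3}' | awk -F '[' '{print $2}' | awk -F '.' '{print $1}'
# -> 1573576660
```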

Step 2: Use db.json and blocks_parsed.log to Create Charts

Install:

sudo apt install python3-pip
sudo apt install python3-venv
python3 -m venv env
# if you use bash:
source env/bin/activate
# after that your prompt will change
# set the env variable in your .bashrc file using export
export BETTER_EXCEPTIONS=1
pip install -r requirements.txt
# work with your data
# to exit the virtualenv, run
deactivate

Run python count_parse.py db.json blocks_parsed.log chart.html:

Usage: count_parse.py [SWITCHES] db block out_html [validators_elected_for=65536]

You will get output in the console plus an HTML file containing a chart with some basic info.
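As a rough illustration of the kind of bucketing count_parse.py performs, the sketch below groups block timestamps into fixed 65536-second windows and counts blocks per window. This is only an approximation with hypothetical values: the real script works from the election_id boundaries recorded in db.json rather than epoch-aligned windows.

```shell
# Hypothetical parsed timestamps, as produced in Step 1
printf '%s\n' 1573576660 1573576663 1573650000 > /tmp/blocks_parsed_sample.log

# Bucket timestamps into 65536-second windows and count blocks per window
awk -v period=65536 '{ bucket = int($1 / period) * period; n[bucket]++ }
                     END { for (b in n) print b, n[b] }' /tmp/blocks_parsed_sample.log | sort
# -> 1573519360 2
#    1573584896 1
```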


Notes

As a result of the operations described above, we created a chart showing the number of new blocks a validator created per validation period, along with the rewards received. The tooltip and annotation show the election_id in numeric and human-readable datetime formats.

Please keep in mind that there may be inconsistencies or errors in the algorithm!

***

Follow news and updates from Everstake by subscribing to the newsletter on our website and join the discussion on our social channels through the links below.

Website: everstake.one
Twitter: @Everstake_pool
Telegram: @Everstake_chat
Facebook: fb.me/everstake.one
Reddit: /r/Everstake/
Medium: medium.com/everstake

Everstake
Content Manager
Everstake is one of the most reliable PoS validators on the market, with current volumes of customer-staked funds exceeding $2B and over 735K delegators as of March 2023.
