Remote deployment

In the previous section, we initialized the subgraph with the following command:

graph init subgraphdemo/heco --network heco --from-contract 0x0bb480582ecae1d22bbaeaccfbb849b441450026

The project scaffolding files are generated successfully.

This sample code has also been pushed to GitHub; see:

https://github.com/HGDotNetwork/candy-subgraph

Let's make some simple adjustments to the code, and then test the deployment and run some queries.

Let's first understand the structure of the code:

./abis
./abis/Contract.json # The contract's ABI (interface definition), fetched automatically by graph-cli based on the specified contract address and network
./schema.graphql # The data entity definitions; the entities and attributes to be stored and queried are defined here
./subgraph.yaml # The subgraph manifest: data sources, including the contract, start block (not set by default), events, handlers, etc. (see the manifest sketch after this list)
./yarn.lock # yarn dependency lock file
./package.json # Project dependencies and script definitions
./src # The AssemblyScript analysis logic code
./src/mapping.ts # Event handler file; for each contract event, the corresponding handler that aggregates the data can be written here
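
For reference, the generated subgraph.yaml looks roughly like the sketch below. The specVersion and apiVersion values and the exact layout are assumptions here, so check the file that graph init actually produced; when we rename the entities later on, the entities list should be updated to match.

specVersion: 0.0.2
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum/contract
    name: Contract
    network: heco
    source:
      address: "0x0bb480582ecae1d22bbaeaccfbb849b441450026"
      abi: Contract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.4
      language: wasm/assemblyscript
      entities:
        - ExampleEntity
      abis:
        - name: Contract
          file: ./abis/Contract.json
      eventHandlers:
        - event: Packetstarted(uint256,address)
          handler: handlePacketstarted
      file: ./src/mapping.ts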

Let's adjust the code:

  1. The contents of the abis directory do not need to be changed; only if we want to index an additional contract do we put its ABI file here.

  2. The contract in this example is an on-chain red packet (lucky money) contract. The scaffolding has automatically generated an example entity for us, corresponding to the contract's first event and a matching handler.

We need the contract's event definition, which is easy to find: look up the contract address on the block explorer:

event Packetstarted(uint256 total, address tokenAddress);

This event is emitted every time a red packet is sent and carries two parameters: the first is the total amount of the red packet, and the second is the token (currency) the red packet is denominated in.

Correspondingly, a sample entity has been generated for this event in schema.graphql:

type ExampleEntity @entity {
  id: ID!
  count: BigInt!
  total: BigInt! # uint256
  tokenAddress: Bytes! # address
}

Following the red packet logic, we make a simple renaming and add an entity that tracks the total amount of red packets issued per token; schema.graphql is modified as follows:

type PackageEntity @entity {
  id: ID!
  count: BigInt!
  total: BigInt! # uint256
  tokenAddress: Bytes! # address
}
type PackageToken @entity {
  id: ID!
  total: BigInt! # uint256
}

To keep the logic as simple as possible, other entities are not handled for now.

After modifying the schema definition, mapping.ts has to be adjusted accordingly. This is also very simple: each red packet send record is saved as a PackageEntity, and a PackageToken accumulates the per-token statistics. Adjust handlePacketstarted as follows:

// Imports: BigInt from the Graph TypeScript library, the event class generated
// from the ABI, and the entity classes generated from schema.graphql.
// (The "../generated/Contract/Contract" path assumes the scaffold's default
// data source name "Contract"; adjust it to match your generated code.)
import { BigInt } from "@graphprotocol/graph-ts"
import { Packetstarted } from "../generated/Contract/Contract"
import { PackageEntity, PackageToken } from "../generated/schema"

export function handlePacketstarted(event: Packetstarted): void {
  // Entities can be loaded from the store using a string ID; this ID
  // needs to be unique across all entities of the same type
  let entity = PackageEntity.load(event.transaction.hash.toHex())
  // Entities only exist after they have been saved to the store;
  // `null` checks allow to create entities on demand
  if (entity == null) {
    entity = new PackageEntity(event.transaction.hash.toHex())
    // Entity fields can be set using simple assignments
    entity.count = BigInt.fromI32(0)
  }
  // Load (or create) the per-token statistics entity, keyed by token address
  let tokenEntity = PackageToken.load(event.params.tokenAddress.toHex())
  if (tokenEntity == null) {
    tokenEntity = new PackageToken(event.params.tokenAddress.toHex())
    tokenEntity.total = BigInt.fromI32(0)
  }
  // Accumulate the total amount issued in this token
  tokenEntity.total = tokenEntity.total.plus(event.params.total)
  // BigInt and BigDecimal math are supported
  entity.count = entity.count + BigInt.fromI32(1)
  // Entity fields can be set based on event parameters
  entity.total = event.params.total
  entity.tokenAddress = event.params.tokenAddress
  // Entities can be written to the store with `.save()`
  entity.save()
  tokenEntity.save()
}

With this, saving each red packet and keeping simple per-token statistics of the amounts are both implemented. Let's test the deployment. For convenience, local deployment is not demonstrated here; if you are interested, you can set up a local environment and test the deployment there.

The deployment process is divided into three steps:

1. First, generate the typed code for accessing the contract data from the ABI

2. Create the subgraph online, following the prompts in the HyperGraph console

3. Deploy the subgraph remotely, following the prompts in the HyperGraph console

Enter the sample project directory (the directory containing package.json); the command to generate the code is:

yarn codegen

Or manually execute:

npx graph codegen

Or, if graph-cli is installed globally, you can run it directly:

graph codegen

If the command succeeds, graph codegen writes the generated entity classes and contract bindings into the generated directory.

Find the deployment prompt in the HyperGraph console to get the command for creating the subgraph, as shown in the figure below.

You can open the command-line instructions:

Use the command shown in the red box above:

graph create subgraphdemo/heco \
--node https://deploy.hg.network \
--access-token <AuthToken>
Created subgraph: subgraphdemo/heco

Please note that the access-token parameter must be supplied when creating a subgraph, otherwise the creation will fail. The AuthToken value for the access-token parameter can be found on the subgraph's detail page in the console.

With this, the subgraph is created; the output shows that it was created successfully.

After creation, the subgraph can be formally deployed. Note that the deployment command must also include the access-token parameter; the exact command to use is given in the console's command-line prompt.
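
Based on the create command above and standard graph-cli options, the deploy command typically looks like the following sketch (the --ipfs endpoint is a placeholder here, not a real value; use whatever the console prompt shows):

graph deploy subgraphdemo/heco \
--node https://deploy.hg.network \
--ipfs <IpfsEndpoint> \
--access-token <AuthToken>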

Through the HyperGraph console, you can also see the updated content of the subgraph, and the Logs column shows the indexing log output.

As you can see, the above HTTP query link is:

https://q.hg.network/subgraphs/name/hecograph/heco

Open this link and enter a simple query to get results; you can also see the schema definition in the documentation panel on the right.
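
For example, a simple query like the sketch below returns the saved red packet records and the per-token totals. The field names follow The Graph's auto-generated plural forms (e.g. packageEntities for PackageEntity), so verify them against the schema shown in the documentation panel:

{
  packageEntities(first: 5) {
    id
    count
    total
    tokenAddress
  }
  packageTokens(first: 5) {
    id
    total
  }
}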

So far, we have completed the whole process: adding a subgraph in the back-end console, setting up a development environment with graph-cli, generating a scaffold from the contract, and adjusting the code step by step until the deployment succeeded.
