🦓 Module Zoo

Welcome to our module zoo, the place where we discuss all pre-defined modules and outline how they are mapped to logic programs.

All modules listed here are defined in the neuralogic.nn.module package, and their usage closely mirrors the usage of regular rules. You can add them to your template via the += operator or the add_module method, e.g.:

from neuralogic.nn.module import GCNConv

template += GCNConv(...)
# or
template.add_module(GCNConv(...))

Right after a module is added to a template, it is expanded into its logic form, i.e., rules. This allows you to build upon pre-defined modules and create new variations by adding your own custom rules or simply mixing modules together.
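
For instance, a hedged sketch of stacking two GCNConv layers and mixing in a custom output rule might look like this (all predicate names, sizes, and the final rule are illustrative):

from neuralogic.core import Template, Transformation, R, V
from neuralogic.nn.module import GCNConv

template = Template()

# Two stacked GCN layers: the first reads node features from "feature" and
# writes its output to "h0"; the second reads "h0" and writes to "h1".
template += GCNConv(in_channels=16, out_channels=16, output_name="h0", feature_name="feature", edge_name="edge")
template += GCNConv(in_channels=16, out_channels=2, output_name="h1", feature_name="h0", edge_name="edge")

# A custom rule mixed in on top of the pre-defined modules.
template += (R.prediction <= R.h1(V.X)) | [Transformation.IDENTITY]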

Pre-defined Modules

Module              Edge formats
------------------  ---------------------------------------------------------------------------------
GCNConv             R.<edge_name>(<source>, <target>)
SAGEConv            R.<edge_name>(<source>, <target>)
GINConv             R.<edge_name>(<source>, <target>)
RGCNConv            R.<edge_name>(<source>, <relation>, <target>) or R.<relation>(<source>, <target>)
TAGConv             R.<edge_name>(<source>, <target>)
GATv2Conv           R.<edge_name>(<source>, <target>)
SGConv              R.<edge_name>(<source>, <target>)
APPNPConv           R.<edge_name>(<source>, <target>)
ResGatedGraphConv   R.<edge_name>(<source>, <target>)
GINEConv            R.<edge_name>(<source>, <target>)
GENConv             R.<edge_name>(<source>, <target>)
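
As a hedged sketch, a graph encoded for the modules expecting the R.<edge_name>(<source>, <target>) format might look as follows (the predicate names "feature" and "edge" and all values are illustrative):

from neuralogic.core import R

# Three nodes with one-dimensional features and three directed edges
# in the R.<edge_name>(<source>, <target>) format.
example = [
    R.feature(0)[[1.0]], R.feature(1)[[0.5]], R.feature(2)[[0.0]],
    R.edge(0, 1), R.edge(1, 2), R.edge(2, 0),
]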


GNN Modules

class GCNConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)

Graph Convolutional layer from “Semi-supervised Classification with Graph Convolutional Networks”.

Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.SUM

class SAGEConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.AVG)

GraphSAGE layer from “Inductive Representation Learning on Large Graphs”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\mathbf{W}_1 \mathbf{x}_i + \mathbf{W}_2 \cdot {agg}_{j \in \mathcal{N}(i)}(\mathbf{x}_j))\]

Where act is an activation function, agg an aggregation function, and the W’s are learnable parameters. This equation is translated into the logic form as:

(R.<output_name>(V.I)[<W1>] <= (R.<feature_name>(V.J), R.<edge_name>(V.J, V.I))) | [<aggregation>, Transformation.IDENTITY]
(R.<output_name>(V.I)[<W2>] <= R.<feature_name>(V.I)) | [Transformation.IDENTITY]
R.<output_name> / 1 | [<activation>]
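
Examples

Following the logic form above, a module parametrized as SAGEConv(1, 2, "h1", "h0", "_edge") with the default functions would expand to roughly:

(R.h1(V.I)[2, 1] <= (R.h0(V.J), R._edge(V.J, V.I))) | [Aggregation.AVG, Transformation.IDENTITY]
(R.h1(V.I)[2, 1] <= R.h0(V.I)) | [Transformation.IDENTITY]
R.h1 / 1 | [Transformation.IDENTITY]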
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.AVG

class GINConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)
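
Graph Isomorphism layer from “How Powerful are Graph Neural Networks?”.
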
class RGCNConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str | None, relations: List[str], activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.AVG)

Relational Graph Convolutional layer from “Modeling Relational Data with Graph Convolutional Networks”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\mathbf{W_0} \cdot \mathbf{x}_i + \sum_{r \in \mathcal{R}} {agg}_{j \in \mathcal{N}_r(i)}(\mathbf{W_r} \cdot \mathbf{x}_j))\]

Where act is an activation function, agg an aggregation function (average by default), \(W_0\) is a learnable root parameter, and \(W_r\) is a learnable parameter for each relation.

The first part of the equation, “\(\mathbf{W_0} \cdot \mathbf{x}_i\)”, can be expressed in the logic form as:

R.<output_name>(V.I) <= R.<feature_name>(V.I)[<W0>]

The other part of the equation, “\({agg}_{j \in \mathcal{N}_r(i)}(\mathbf{W_r} \cdot \mathbf{x}_j)\)”, can be expressed either as:

R.<output_name>(V.I) <= (R.<feature_name>(V.J)[<Wr>], R.<edge_name>(V.J, relation, V.I))

where “relation” is a constant name, or as:

R.<output_name>(V.I) <= (R.<feature_name>(V.J)[<Wr>], R.<relation>(V.J, V.I))

The outer summation, together with the addition of the first part, is handled by the aggregation of all rules with the same head (and substitution).

Examples

The whole computation of this module (parametrized as RGCNConv(1, 2, "h1", "h0", "_edge", ["sibling", "parent"])) is as follows:

metadata = Metadata(activation=Transformation.IDENTITY, aggregation=Aggregation.AVG)

(R.h1(V.I) <= R.h0(V.I)[2, 1]) | metadata
(R.h1(V.I) <= (R.h0(V.J)[2, 1], R._edge(V.J, sibling, V.I))) | metadata
(R.h1(V.I) <= (R.h0(V.J)[2, 1], R._edge(V.J, parent, V.I))) | metadata
R.h1 / 1 | [Transformation.IDENTITY]

Module parametrized as RGCNConv(1, 2, "h1", "h0", None, ["sibling", "parent"]) translates into:

metadata = Metadata(activation=Transformation.IDENTITY, aggregation=Aggregation.AVG)

(R.h1(V.I) <= R.h0(V.I)[2, 1]) | metadata
(R.h1(V.I) <= (R.h0(V.J)[2, 1], R.sibling(V.J, V.I))) | metadata
(R.h1(V.I) <= (R.h0(V.J)[2, 1], R.parent(V.J, V.I))) | metadata
R.h1 / 1 | [Transformation.IDENTITY]
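
As a hedged sketch, input data for the two parametrizations above could be encoded as follows (node ids, values, and the use of string terms for the relation constants are illustrative):

from neuralogic.core import R

# For RGCNConv(1, 2, "h1", "h0", "_edge", ["sibling", "parent"]): relation
# names appear as constant terms inside the shared edge predicate.
example_a = [R.h0(0)[[1.0]], R.h0(1)[[2.0]], R._edge(0, "sibling", 1)]

# For RGCNConv(1, 2, "h1", "h0", None, ["sibling", "parent"]): one predicate
# per relation.
example_b = [R.h0(0)[[1.0]], R.h0(1)[[2.0]], R.sibling(0, 1)]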
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (Optional[str]) – Edge predicate name to use for neighborhood relations. When None, elements from relations are used instead.

  • relations (List[str]) – List of relation names.

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.AVG

class TAGConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, k: int = 2, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)

Topology Adaptive Graph Convolutional layer from “Topology Adaptive Graph Convolutional Networks”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\sum_{k=0}^K \mathbf{W}_k \cdot {agg}_{j \in \mathcal{N}^k(i)}(\mathbf{x}_j))\]

Where act is an activation function, agg an aggregation function, \(W_k\) are learnable parameters, and \(\mathcal{N}^k(i)\) denotes nodes that are k hops away from the node i. This equation is translated into the logic form as:

(R.<output_name>(V.I0)[<W0>] <= R.<feature_name>(V.I0)) | [<aggregation>, Transformation.IDENTITY]
(R.<output_name>(V.I0)[<W1>] <= (R.<feature_name>(V.I1), R.<edge_name>(V.I1, V.I0))) | [<aggregation>, Transformation.IDENTITY]
(R.<output_name>(V.I0)[<W2>] <= (R.<feature_name>(V.I2), R.<edge_name>(V.I1, V.I0), R.<edge_name>(V.I2, V.I1))) | [<aggregation>, Transformation.IDENTITY]
...
(R.<output_name>(V.I0)[<Wk>] <= (R.<feature_name>(V.I<k>), R.<edge_name>(V.I1, V.I0), ..., R.<edge_name>(V.I<k>, V.I<k-1>))) | [<aggregation>, Transformation.IDENTITY]
R.<output_name> / 1 | [<activation>]

Examples

The whole computation of this module (parametrized as TAGConv(1, 2, "h1", "h0", "_edge")) is as follows:

(R.h1(V.I0)[2, 1] <= R.h0(V.I0)) | [Aggregation.SUM, Transformation.IDENTITY]
(R.h1(V.I0)[2, 1] <= (R.h0(V.I1), R._edge(V.I1, V.I0))) | [Aggregation.SUM, Transformation.IDENTITY]
(R.h1(V.I0)[2, 1] <= (R.h0(V.I2), R._edge(V.I1, V.I0), R._edge(V.I2, V.I1))) | [Aggregation.SUM, Transformation.IDENTITY]
R.h1 / 1 | [Transformation.IDENTITY]

Module parametrized as TAGConv(1, 2, "h1", "h0", "_edge", 1) translates into:

(R.h1(V.I0)[2, 1] <= R.h0(V.I0)) | [Aggregation.SUM, Transformation.IDENTITY]
(R.h1(V.I0)[2, 1] <= (R.h0(V.I1), R._edge(V.I1, V.I0))) | [Aggregation.SUM, Transformation.IDENTITY]
R.h1 / 1 | [Transformation.IDENTITY]
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • k (int) – Number of hops. Default: 2

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.SUM

class GATv2Conv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, share_weights: bool = False, activation: Transformation = Transformation.IDENTITY)

GATv2 layer from “How Attentive are Graph Attention Networks?”.

Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • share_weights (bool) – Share weights in attention. Default: False

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

class SGConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, k: int = 1, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)

Simple Graph Convolutional layer from “Simplifying Graph Convolutional Networks”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\mathbf{W} \cdot {agg}_{j \in \mathcal{N}^k(i)}(\mathbf{x}_j))\]

Where act is an activation function, agg an aggregation function, W is a learnable parameter, and \(\mathcal{N}^k(i)\) denotes nodes that are k hops away from the node i. This equation is translated into the logic form as:

(R.<output_name>(V.I)[<W>] <= (
    R.<feature_name>(V.I<k>),
    R.<edge_name>(V.I<1>, V.I<0>), R.<edge_name>(V.I<2>, V.I<1>), ..., R.<edge_name>(V.I<k>, V.I<k-1>),
)) | [<aggregation>, Transformation.IDENTITY]

R.<output_name> / 1 | [<activation>]

Examples

The whole computation of this module (parametrized as SGConv(2, 3, "h1", "h0", "_edge", 2)) is as follows:

(R.h1(V.I0)[3, 2] <= (R.h0(V.I2), R._edge(V.I1, V.I0), R._edge(V.I2, V.I1))) | [Transformation.IDENTITY, Aggregation.SUM]
R.h1 / 1 | [Transformation.IDENTITY]

Module parametrized as SGConv(2, 3, "h1", "h0", "_edge", 1) translates into:

(R.h1(V.I0)[3, 2] <= (R.h0(V.I1), R._edge(V.I1, V.I0))) | [Transformation.IDENTITY, Aggregation.SUM]
R.h1 / 1 | [Transformation.IDENTITY]
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • k (int) – Number of hops. Default: 1

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.SUM

class APPNPConv(output_name: str, feature_name: str, edge_name: str, k: int, alpha: float, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)

Approximate Personalized Propagation of Neural Predictions layer from “Predict then Propagate: Graph Neural Networks meet Personalized PageRank”, which can be expressed as:

\[\mathbf{x}^{0}_i = \mathbf{x}_i\]
\[\mathbf{x}^{k}_i = \alpha \cdot \mathbf{x}^0_i + (1 - \alpha) \cdot {agg}_{j \in \mathcal{N}(i)}(\mathbf{x}^{k - 1}_j)\]
\[\mathbf{x}^{\prime}_i = act(\mathbf{x}^{K}_i)\]

Where act is an activation function and agg an aggregation function.

The first part of the second equation, “\(\alpha \cdot \mathbf{x}^0_i\)”, is expressed in the logic form as:

R.<output_name>__<k>(V.I) <= R.<feature_name>(V.I)[<alpha>].fixed()

The second part of the second equation, “\((1 - \alpha) \cdot {agg}_{j \in \mathcal{N}(i)}(\mathbf{x}^{k - 1}_j)\)”, is expressed as:

R.<output_name>__<k>(V.I) <= (R.<output_name>__<k-1>(V.J)[1 - <alpha>].fixed(), R.<edge_name>(V.J, V.I))

Examples

The whole computation of this module (parametrized as APPNPConv("h1", "h0", "_edge", 3, 0.1, Transformation.SIGMOID)) is as follows:

metadata = Metadata(transformation=Transformation.IDENTITY, aggregation=Aggregation.SUM)

(R.h1__1(V.I) <= R.h0(V.I)[0.1].fixed()) | metadata
(R.h1__1(V.I) <= (R.h0(V.J)[0.9].fixed(), R._edge(V.J, V.I))) | metadata
R.h1__1 / 1 | [Transformation.IDENTITY]

(R.h1__2(V.I) <= R.h0(V.I)[0.1].fixed()) | metadata
(R.h1__2(V.I) <= (R.h1__1(V.J)[0.9].fixed(), R._edge(V.J, V.I))) | metadata
R.h1__2 / 1 | [Transformation.IDENTITY]

(R.h1(V.I) <= R.h0(V.I)[0.1].fixed()) | metadata
(R.h1(V.I) <= (R.h1__2(V.J)[0.9].fixed(), R._edge(V.J, V.I))) | metadata
R.h1 / 1 | [Transformation.SIGMOID]
Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • k (int) – Number of iterations

  • alpha (float) – Teleport probability

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.SUM

class ResGatedGraphConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, gating_activation: Transformation = Transformation.SIGMOID, activation: Transformation = Transformation.IDENTITY, aggregation: Aggregation = Aggregation.SUM)

Residual Gated Graph Convolutional layer from “Residual Gated Graph ConvNets”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\mathbf{W}_1 \mathbf{x}_i + {agg}_{j \in \mathcal{N}(i)}(\eta_{i,j} \odot \mathbf{W}_2 \mathbf{x}_j))\]
\[\mathbf{\eta}_{i,j} = gating\_act(\mathbf{W}_3 \mathbf{x}_i + \mathbf{W}_4 \mathbf{x}_j)\]

Where act is an activation function, agg an aggregation function, gating_act is a gating activation function, and \(W_n\) are learnable parameters. These equations are translated into the logic form as:

(R.<output_name>__gate(V.I, V.J) <= (R.<feature_name>(V.I)[<W3>], R.<feature_name>(V.J)[<W4>])) | [Transformation.IDENTITY]
R.<output_name>__gate / 2 | [<gating_activation>]

(R.<output_name>(V.I) <= R.<feature_name>(V.I)[<W1>]) | [Transformation.IDENTITY]
(R.<output_name>(V.I) <= (
    R.<output_name>__gate(V.I, V.J), R.<feature_name>(V.J)[<W2>], R.<edge_name>(V.J, V.I))
) | Metadata(activation="elementproduct-identity", aggregation=<aggregation>)

R.<output_name> / 1 | [<activation>]

Examples

The whole computation of this module (parametrized as ResGatedGraphConv(1, 2, "h1", "h0", "_edge")) is as follows:

metadata = Metadata(activation="elementproduct-identity", aggregation=Aggregation.SUM)

(R.h1__gate(V.I, V.J) <= (R.h0(V.I)[2, 1], R.h0(V.J)[2, 1])) | [Transformation.IDENTITY]
R.h1__gate / 2 | [Transformation.SIGMOID]

(R.h1(V.I) <= R.h0(V.I)[2, 1]) | [Transformation.IDENTITY]
(R.h1(V.I) <= (R.h1__gate(V.I, V.J), R.h0(V.J)[2, 1], R._edge(V.J, V.I))) | metadata
R.h1 / 1 | [Transformation.IDENTITY]
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • gating_activation (Transformation) – Gating activation function. Default: Transformation.SIGMOID

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.SUM

class GINEConv(in_channels: int, feature_name: str, edge_name: str, nn_name: str, eps: float = 0.0, train_eps: bool = False, edge_dim: int | None = None)

GINEConv layer from “Strategies for Pre-training Graph Neural Networks”.

Parameters:
  • in_channels (int) – Input feature size.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • nn_name (str) – Neural network predicate name.

  • eps (float) – \(\epsilon\)-value. Default: 0.0

  • train_eps (bool) – Whether eps is a trainable parameter. Default: False

  • edge_dim (Optional[int]) – Dimension of edge features (None if the projection to in_channels is not needed). Default: None

class GENConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, edge_name: str, aggregation: Aggregation = Aggregation.SOFTMAX, num_layers: int = 2, expansion: int = 2, eps: float = 1e-07, train_eps: bool = False, edge_dim: int | None = None)

GENConv layer from “DeeperGCN: All You Need to Train Deeper GCNs”.

Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • edge_name (str) – Edge predicate name to use for neighborhood relations.

  • aggregation (Aggregation) – The aggregation function. Default: Aggregation.SOFTMAX

  • num_layers (int) – The number of MLP layers. Default: 2

  • expansion (int) – The expansion factor of hidden channels in MLP. Default: 2

  • eps (float) – \(\epsilon\)-value. Default: 1e-07

  • train_eps (bool) – Whether eps is a trainable parameter. Default: False

  • edge_dim (Optional[int]) – Dimension of edge features (None if the projection to in_channels is not needed). Default: None


General Block Modules

class Linear(in_channels: int, out_channels: int, output_name: str, input_name: str, activation: Transformation = Transformation.IDENTITY, arity: int = 1)

Apply a linear transformation to the input. Can be expressed as:

\[h_{i_0, .., i_{n}} = W \cdot x_{i_0, .., i_{n}}\]

Where \(x\) is the input, \(W \in \mathbb{R}^{out\_channels \times in\_channels}\) is a learnable parameter, and \(n\) is the arity of the input and output.

It is also possible to attach non-linearity via the activation parameter and compute:

\[h_{i_0, .., i_{n}} = act(W \cdot x_{i_0, .., i_{n}})\]

Examples

The whole computation of this module (parametrized as Linear(1, 2, "h1", "h0")) is as follows:

(R.h1(V.X0)[2, 1] <= R.h0(V.X0)) | [Transformation.IDENTITY]
R.h1 / 1 | [Transformation.IDENTITY]

Module parametrized as Linear(1, 2, "h1", "h0", Transformation.SIGMOID, 2) translates into:

(R.h1(V.X0, V.X1)[2, 1] <= R.h0(V.X0, V.X1)) | [Transformation.IDENTITY]
R.h1 / 2 | [Transformation.SIGMOID]
Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • activation (Transformation) – Activation function of the output. Default: Transformation.IDENTITY

  • arity (int) – Arity of the input and output predicate. Default: 1

class MLP(units: List[int], output_name: str, input_name: str, activation: Transformation | List[Transformation] = Transformation.RELU, arity: int = 1)
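
A multilayer perceptron module: a stack of linear layers with the sizes given by units, each followed by an activation function.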
Parameters:
  • units (List[int]) – List of layer sizes.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • activation (Union[Transformation, List[Transformation]]) – Activation function of all layers or list of activations for each layer. Default: Transformation.RELU

  • arity (int) – Arity of the input and output predicate. Default: 1
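
A hedged usage sketch (layer sizes 1 → 8 → 2; the predicate names "h1" and "h0" are illustrative):

from neuralogic.core import Template, Transformation
from neuralogic.nn.module import MLP

template = Template()

# A two-layer MLP reading from "h0" and writing to "h1", with a ReLU on the
# hidden layer and a sigmoid on the output layer.
template += MLP([1, 8, 2], "h1", "h0", [Transformation.RELU, Transformation.SIGMOID])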

class Transformer(input_dim: int, num_heads: int, dim_feedforward: int, output_name: str, src_name: str, tgt_name: str, src_mask_name: str | None = None, tgt_mask_name: str | None = None, memory_mask_name: str | None = None, arity: int = 1)

A transformer module based on “Attention Is All You Need”.

Parameters:
  • input_dim (int) – The number of expected features.

  • num_heads (int) – The number of heads in the multi-head attention module.

  • dim_feedforward (int) – The dimension of the feedforward network.

  • output_name (str) – Output (head) predicate name of the module.

  • src_name (str) – The name of the predicate of the input to the encoder.

  • tgt_name (str) – The name of the predicate of the input to the decoder.

  • src_mask_name (str, optional) – The name of the predicate of the encoder input mask. Default: None

  • tgt_mask_name (str, optional) – The name of the predicate of the decoder input mask. Default: None

  • memory_mask_name (str, optional) – The name of the predicate of the encoder output mask. Default: None

  • arity (int) – Arity of the input and output predicate. Default: 1
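
A hedged usage sketch, instantiating the module per the signature above (the predicate names "out", "src", and "tgt" and all sizes are illustrative):

from neuralogic.core import Template
from neuralogic.nn.module import Transformer

template = Template()

# An encoder-decoder transformer over sequences stored in the "src" and
# "tgt" predicates; the result is exposed through the "out" predicate.
template += Transformer(
    input_dim=64, num_heads=4, dim_feedforward=256,
    output_name="out", src_name="src", tgt_name="tgt",
)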

class TransformerEncoder(input_dim: int, num_heads: int, dim_feedforward: int, output_name: str, input_name: str, mask_name: str | None = None, arity: int = 1)

A transformer encoder module based on “Attention Is All You Need”.

Parameters:
  • input_dim (int) – The number of expected features.

  • num_heads (int) – The number of heads in the multi-head attention module.

  • dim_feedforward (int) – The dimension of the feedforward network.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – The name of the predicate of the input sequence.

  • mask_name (str, optional) – The name of the predicate of the input sequence mask. Default: None

  • arity (int) – Arity of the input and output predicate. Default: 1

class TransformerDecoder(input_dim: int, num_heads: int, dim_feedforward: int, output_name: str, input_name: str, encoder_name: str, mask_name: str | None = None, memory_mask_name: str | None = None, arity: int = 1)

A transformer decoder module based on “Attention Is All You Need”.

Parameters:
  • input_dim (int) – The number of expected features.

  • num_heads (int) – The number of heads in the multi-head attention module.

  • dim_feedforward (int) – The dimension of the feedforward network.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – The name of the predicate of the input sequence.

  • encoder_name (str) – The name of the predicate of the encoder output.

  • mask_name (str, optional) – The name of the predicate of the decoder input sequence mask. Default: None

  • memory_mask_name (str, optional) – The name of the predicate of the encoder output mask. Default: None

  • arity (int) – Arity of the input and output predicate. Default: 1

class RvNN(input_size: int, output_name: str, input_name: str, parent_map_name: str, max_children: int = 2, activation: Transformation = Transformation.TANH, aggregation: Aggregation = Aggregation.SUM, arity: int = 1)

Recursive Neural Network (RvNN) module, which is computed as:

\[\mathbf{h}_i = act(agg_{j \in \mathcal{Ch(i)}}(\mathbf{W_{id(j)}} \mathbf{h}_j))\]

Where \(act\) is an activation function, \(agg\) an aggregation function, and the \(\mathbf{W}\)’s are learnable parameters. \(\mathcal{Ch(i)}\) represents the ordered list of children of node \(i\). The \(id(j)\) function maps node \(j\) to its index (position) in its parent’s children list.

Parameters:
  • input_size (int) – Input feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input feature predicate name to get leaf features from.

  • parent_map_name (str) – Name of the predicate that maps a parent to its children.

  • max_children (int) – Maximum number of children (specify which <max_children>-ary tree will be considered). Default: 2

  • activation (Transformation) – Activation function of all layers. Default: Transformation.TANH

  • aggregation (Aggregation) – Aggregation function of a layer. Default: Aggregation.SUM

  • arity (int) – Arity of the input and output predicate (doesn’t include the node id term). Default: 1

class RNN(input_size: int, hidden_size: int, output_name: str, input_name: str, hidden_0_name: str, activation: Transformation = Transformation.TANH, arity: int = 1)

One-layer Recurrent Neural Network (RNN) module, which is computed as:

\[h_t = act(\mathbf{W}_{ih} \mathbf{x}_t + \mathbf{W}_{hh} \mathbf{h}_{t-1})\]

where \(t \in (1, sequence\_length + 1)\) is a time step. In the template, \(t\) is referred to as V.T, and \(t - 1\) as V.Z. This module expresses the equation as:

(R.<output_name>(<...terms>, V.T) <= (
    R.<input_name>(<...terms>, V.T)[<hidden_size>, <input_size>],
    R.<hidden_input_name>(<...terms>, V.Z)[<hidden_size>, <hidden_size>],
    R.special.next(V.Z, V.T),
)) | [<activation>]

R.<output_name> / (<arity> + 1) | [Transformation.IDENTITY]

Additionally, we define a rule for the “stop condition”, that is:

(R.<output_name>(<...terms>, 0) <= R.<hidden_0_name>(<...terms>)) | [Transformation.IDENTITY]
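
As a hedged sketch, a sequence for a module parametrized as RNN(2, 3, "h", "x", "h0") could be encoded as follows (the last term of the input and output predicates is the time step, as described above; all names and values are illustrative):

from neuralogic.core import R

# One sequence (id 0) of length two: the initial hidden state plus one
# two-dimensional input vector per time step.
example = [
    R.h0(0)[[0.0, 0.0, 0.0]],  # initial hidden state (hidden_size = 3)
    R.x(0, 1)[[1.0, 0.0]],     # x_1
    R.x(0, 2)[[0.0, 1.0]],     # x_2
]

# The final hidden state can then be queried as R.h(0, 2).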
Parameters:
  • input_size (int) – Input feature size.

  • hidden_size (int) – Output and hidden feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input feature predicate name to get features from.

  • hidden_0_name (str) – Predicate name to get initial hidden state from.

  • activation (Transformation) – Activation function. Default: Transformation.TANH

  • arity (int) – Arity of the input and output predicate. Default: 1

class GRU(input_size: int, hidden_size: int, output_name: str, input_name: str, hidden_0_name: str, arity: int = 1, next_name: str = '_next__positive')

One-layer Gated Recurrent Unit (GRU) module, which is computed as:

\[r_t = \sigma(\mathbf{W}_{xr} \mathbf{x}_t + \mathbf{W}_{hr} \mathbf{h}_{t-1})\]
\[z_t = \sigma(\mathbf{W}_{xz} \mathbf{x}_t + \mathbf{W}_{hz} \mathbf{h}_{t-1})\]
\[n_t = \tanh(\mathbf{W}_{xn} \mathbf{x}_t + r_t \odot (\mathbf{W}_{hn} \mathbf{h}_{t-1}))\]
\[h_t = (1 - z_t) \odot n_t + z_t \odot h_{t-1}\]

where \(t \in (1, sequence\_length + 1)\) is a time step. In the template, \(t\) is referred to as V.T, and \(t - 1\) as V.Z. This module expresses the first equation as:

(R.<output_name>__r(<...terms>, V.T) <= (
    R.<input_name>(<...terms>, V.T)[<hidden_size>, <input_size>],
    R.<hidden_input_name>(<...terms>, V.Z)[<hidden_size>, <hidden_size>],
    R.<next_name>(V.Z, V.T),
)) | [Transformation.SIGMOID]

R.<output_name>__r / (<arity> + 1) | [Transformation.IDENTITY]

The second equation is expressed in the same way, except for a different head predicate name. The third equation is split into three rules; the first two compute the element-wise product \(r_t \odot (\mathbf{W}_{hn} \mathbf{h}_{t-1})\):

(R.<output_name>__n_helper_weighted(<...terms>, V.T) <= (
    R.<hidden_input_name>(<...terms>, V.Z)[<hidden_size>, <hidden_size>], R.<next_name>(V.Z, V.T),
)) | [Transformation.IDENTITY]

R.<output_name>__n_helper_weighted / (<arity> + 1) | [Transformation.IDENTITY]

(R.<output_name>__n_helper(<...terms>, V.T) <= (
    R.<output_name>__r(<...terms>, V.T), R.<output_name>__n_helper_weighted(<...terms>, V.T)
)) | [Transformation.IDENTITY, Combination.ELPRODUCT]

R.<output_name>__n_helper / (<arity> + 1) | [Transformation.IDENTITY]

The third rule computes the sum and applies the \(\tanh\) activation function.

(R.<output_name>__n(<...terms>, V.T) <= (
    R.<input_name>(<...terms>, V.T)[<hidden_size>, <input_size>],
    R.<output_name>__n_helper(<...terms>, V.T)
)) | [Transformation.TANH]
R.<output_name>__n / (<arity> + 1) | [Transformation.IDENTITY]

The last equation is computed via three rules. The first two rules compute the element-wise products. That is:

(R.<output_name>__left(<...terms>, V.T) <= (
    R.<output_name>__z(<...terms>, V.T), R.<output_name>__n(<...terms>, V.T)
)) | [Transformation.IDENTITY, Combination.ELPRODUCT]

(R.<output_name>__right(<...terms>, V.T) <= (
    R.<output_name>__z(<...terms>, V.T), R.<hidden_input_name>(<...terms>, V.Z), R.<next_name>(V.Z, V.T),
)) | [Transformation.IDENTITY, Combination.ELPRODUCT]

R.<output_name>__left / (<arity> + 1) | [Transformation.IDENTITY]
R.<output_name>__right / (<arity> + 1) | [Transformation.IDENTITY]

The last output rule sums up the element-wise products.

(R.<output_name>(<...terms>, V.T) <= (
    R.<output_name>__left(<...terms>, V.T), R.<output_name>__right(<...terms>, V.T)
)) | [Transformation.IDENTITY]
R.<output_name> / (<arity> + 1) | [Transformation.IDENTITY]

Additionally, we define a rule for the “stop condition”, that is:

(R.<output_name>(<...terms>, 0) <= R.<hidden_0_name>(<...terms>)) | [Transformation.IDENTITY]
Parameters:
  • input_size (int) – Input feature size.

  • hidden_size (int) – Output and hidden feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input feature predicate name to get features from.

  • hidden_0_name (str) – Predicate name to get initial hidden state from.

  • arity (int) – Arity of the input and output predicate. Default: 1

class LSTM(input_size: int, hidden_size: int, output_name: str, input_name: str, hidden_0_name: str, cell_state_0_name: str, arity: int = 1)

One-layer Long Short-Term Memory (LSTM) RNN module, which is computed as:

\[i_t = \sigma(\mathbf{W}_{xi} \mathbf{x}_t + \mathbf{W}_{hi} \mathbf{h}_{t-1})\]
\[f_t = \sigma(\mathbf{W}_{xf} \mathbf{x}_t + \mathbf{W}_{hf} \mathbf{h}_{t-1})\]
\[o_t = \sigma(\mathbf{W}_{xo} \mathbf{x}_t + \mathbf{W}_{ho} \mathbf{h}_{t-1})\]
\[g_t = \tanh(\mathbf{W}_{xg} \mathbf{x}_t + \mathbf{W}_{hg} \mathbf{h}_{t-1})\]
\[c_t = f_t \odot c_{t-1} + i_t \odot g_t\]
\[h_t = o_t \odot \tanh(c_t)\]
Parameters:
  • input_size (int) – Input feature size.

  • hidden_size (int) – Output and hidden feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input feature predicate name to get features from.

  • hidden_0_name (str) – Predicate name to get initial hidden state from.

  • cell_state_0_name (str) – Predicate name to get initial cell state from.

  • arity (int) – Arity of the input and output predicate. Default: 1

class Attention(embed_dim: int, output_name: str, query_name: str, key_name: str, value_name: str, mask_name: str | None = None, arity: int = 1)

A single-head attention module based on “Attention Is All You Need”.

Parameters:
  • embed_dim (int) – The number of expected features.

  • output_name (str) – Output (head) predicate name of the module.

  • query_name (str) – The name of the queries predicate.

  • key_name (str) – The name of the keys predicate.

  • value_name (str) – The name of the values predicate.

  • mask_name (str, optional) – The name of the input mask predicate. Default: None

  • arity (int) – Arity of the input and output predicates. Default: 1

class MultiheadAttention(embed_dim: int, num_heads: int, output_name: str, query_name: str, key_name: str, value_name: str, vdim: int | None = None, kdim: int | None = None, mask_name: str | None = None, arity: int = 1)

A multi-head attention module based on “Attention Is All You Need”.

Parameters:
  • embed_dim (int) – The number of expected features.

  • num_heads (int) – The number of heads.

  • output_name (str) – Output (head) predicate name of the module.

  • query_name (str) – The name of the queries predicate.

  • key_name (str) – The name of the keys predicate.

  • value_name (str) – The name of the values predicate.

  • vdim (Optional[int]) – Total number of features for values. Default: None

  • kdim (Optional[int]) – Total number of features for keys. Default: None

  • mask_name (str, optional) – The name of the input mask predicate. Default: None

  • arity (int) – Arity of the input and output predicates. Default: 1

class Pooling(output_name: str, input_name: str, aggregation: Aggregation, input_arity: int = 1)

Apply generic pooling over the input specified by the input_name and input_arity parameters. Can be expressed as:

\[h = agg_{i_{0}, .., i_{n} \in N}(x_{(i_{0}, .., i_{n})})\]

Where \(N\) is a set of tuples of length \(n\) (specified by the input_arity parameter) that are valid arguments for the input predicate.

For example, a classic pooling over graph nodes represented by relations of arity 1 (node id) would be calculated as:

\[h = agg_{i \in N}(x_{(i)})\]

Here \(N\) refers to the set of all node ids. Lifting the restriction on the input arity via the input_arity parameter allows for pooling not only nodes but also edges (input_arity=2) and other objects (hyperedges, etc.).

Examples

The whole computation of this module (parametrized as Pooling("h1", "h0", Aggregation.AVG)) is as follows:

(R.h1 <= R.h0(V.X0)) | [Aggregation.AVG, Transformation.IDENTITY]
R.h1 / 0 | [Transformation.IDENTITY]

Module parametrized as Pooling("h1", "h0", Aggregation.MAX, 2) translates into:

(R.h1 <= R.h0(V.X0, V.X1)) | [Aggregation.MAX, Transformation.IDENTITY]
R.h1 / 0 | [Transformation.IDENTITY]
Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • aggregation (Aggregation) – Aggregation function.

  • input_arity (int) – Arity of the input predicate input_name. Default: 1
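
As a hedged sketch, a graph-level readout on top of a GNN layer can be built by combining modules (all predicate names and sizes are illustrative):

from neuralogic.core import Template, Aggregation
from neuralogic.nn.module import GCNConv, Pooling

template = Template()

# Node embeddings via one GCN layer, followed by average pooling of all
# node embeddings into a single graph-level embedding.
template += GCNConv(4, 8, "h1", "node_feature", "edge")
template += Pooling("graph_embedding", "h1", Aggregation.AVG)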

class SumPooling(output_name: str, input_name: str, input_arity: int = 1)

Apply sum pooling over the input specified by the input_name and input_arity parameters. Can be expressed as:

\[h = \sum_{i_{0}, .., i_{n} \in N} x_{(i_{0}, .., i_{n})}\]

Where \(N\) is a set of tuples of length \(n\) (specified by the input_arity parameter) that are valid arguments for the input predicate.

This module extends the generic pooling Pooling.

Examples

The whole computation of this module (parametrized as SumPooling("h1", "h0")) is as follows:

(R.h1 <= R.h0(V.X0)) | [Aggregation.SUM, Transformation.IDENTITY]
R.h1 / 0 | [Transformation.IDENTITY]
Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • input_arity (int) – Arity of the input predicate input_name. Default: 1

class AvgPooling(output_name: str, input_name: str, input_arity: int = 1)

Apply average pooling over the input specified by the input_name and input_arity parameters. Can be expressed as:

\[h = \frac{1}{|N|}\sum_{i_{0}, .., i_{n} \in N} x_{(i_{0}, .., i_{n})}\]

Where \(N\) is a set of tuples of length \(n\) (specified by the input_arity parameter) that are valid arguments for the input predicate.

This module extends the generic pooling Pooling.

Examples

The whole computation of this module (parametrized as AvgPooling("h1", "h0")) is as follows:

(R.h1 <= R.h0(V.X0)) | [Aggregation.AVG, Transformation.IDENTITY]
R.h1 / 0 | [Transformation.IDENTITY]
Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • input_arity (int) – Arity of the input predicate input_name. Default: 1

class MaxPooling(output_name: str, input_name: str, input_arity: int = 1)

Apply max pooling over the input specified by the input_name and input_arity parameters. Can be expressed as:

\[h = \max_{i_{0}, .., i_{n} \in N}(x_{(i_{0}, .., i_{n})})\]

Where \(N\) is a set of tuples of length \(n\) (specified by the input_arity parameter) that are valid arguments for the input predicate.

This module extends the generic pooling Pooling.

Examples

The whole computation of this module (parametrized as MaxPooling("h1", "h0")) is as follows:

(R.h1 <= R.h0(V.X0)) | [Aggregation.MAX, Transformation.IDENTITY]
R.h1 / 0 | [Transformation.IDENTITY]
Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • input_name (str) – Input name.

  • input_arity (int) – Arity of the input predicate input_name. Default: 1

class PositionalEncoding(embed_dim: int, max_len: int, output_name: str, input_name: str, arity: int = 1, learnable: bool = False)
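
Positional encoding module for sequence inputs of up to max_len positions; with learnable=True, the encodings are learned rather than fixed.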

Meta Modules

class MetaConv(in_channels: int, out_channels: int, output_name: str, feature_name: str, role_name: str | None, roles: List[str], activation: Transformation = Transformation.SIGMOID, aggregation: Aggregation = Aggregation.AVG)

Metagraph Convolutional Unit layer from “Meta-GNN: Metagraph Neural Network for Semi-supervised Learning in Attributed Heterogeneous Information Networks”, which can be expressed as:

\[\mathbf{x}^{\prime}_i = act(\mathbf{W_0} \cdot \mathbf{x}_i + {agg}_{j \in \mathcal{N}_r(i)} \sum_{k \in \mathcal{K}} (\mathbf{W_k} \cdot \mathbf{x}_j))\]

Where act is an activation function, agg an aggregation function (average by default), \(W_0\) is a learnable root parameter, and \(W_k\) is a learnable parameter for each role.

Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • role_name (Optional[str]) – Role predicate name to use for role relations. When None, elements from roles are used instead.

  • roles (List[str]) – List of role names.

  • activation (Transformation) – Activation function of the output. Default: Transformation.SIGMOID

  • aggregation (Aggregation) – Aggregation function of nodes’ neighbors. Default: Aggregation.AVG

class MAGNNMean(output_name: str, feature_name: str, relation_name: str, type_name: str | None, meta_paths: List[str], activation: Transformation = Transformation.SIGMOID, aggregation: Aggregation = Aggregation.SUM)

Intra-metapath Aggregation module with Mean encoder from “MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding”, which can be expressed as:

\[\mathbf{h}_{P(v,u)} = MEAN(\{\mathbf{x}_t | \forall t \in P(v,u) \})\]
\[\mathbf{h}^P_{v} = act(\sum_{u \in N^P_v} \mathbf{h}_{P(v,u)})\]

Where act is an activation function, \(P(v,u)\) is a single metapath instance, and \(N^P_v\) is the set of metapath-based neighbors.

Parameters:
  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • relation_name (str) – Relation predicate name for connectivity checks between entities.

  • type_name (Optional[str]) – Metapath type predicate name. If None, meta_paths will be used instead.

  • meta_paths (List[str]) – Names of the types forming a single metapath.

  • activation (Transformation) – Activation function of the output. Default: Transformation.SIGMOID
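
A hedged usage sketch with a metapath author → paper → author (all predicate and type names are illustrative):

from neuralogic.core import Template
from neuralogic.nn.module import MAGNNMean

template = Template()

# The types forming the metapath are listed in meta_paths; connectivity
# between entities is checked through the "link" relation predicate.
template += MAGNNMean(
    output_name="h1", feature_name="feature", relation_name="link",
    type_name=None, meta_paths=["author", "paper", "author"],
)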

class MAGNNLinear(in_channels: int, out_channels: int, output_name: str, feature_name: str, relation_name: str, type_name: str | None, meta_paths: List[str], activation: Transformation = Transformation.SIGMOID, aggregation: Aggregation = Aggregation.SUM)

Intra-metapath Aggregation module with Linear encoder from “MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding”, which can be expressed as:

\[\mathbf{h}_{P(v,u)} = \mathbf{W}_p \cdot MEAN(\{\mathbf{x}_t | \forall t \in P(v,u) \})\]
\[\mathbf{h}^P_{v} = act(\sum_{u \in N^P_v} \mathbf{h}_{P(v,u)})\]

Where act is an activation function, \(P(v,u)\) is a single metapath instance, and \(N^P_v\) is the set of metapath-based neighbors.

Parameters:
  • in_channels (int) – Input feature size.

  • out_channels (int) – Output feature size.

  • output_name (str) – Output (head) predicate name of the module.

  • feature_name (str) – Feature predicate name to get features from.

  • relation_name (str) – Relation predicate name for connectivity checks between entities.

  • type_name (Optional[str]) – Metapath type predicate name. If None, meta_paths will be used instead.

  • meta_paths (List[str]) – Names of the types forming a single metapath.

  • activation (Transformation) – Activation function of the output. Default: Transformation.SIGMOID