Model Evaluation

Model Building

When we have the template, examples, and queries ready, we need to 'compile' them together to obtain a model that can be trained and evaluated.

The 'compilation' is done in two steps. First, we retrieve a model instance for the specified backend.

from neuralogic.core import Settings

settings = Settings()
model = template.build(settings)

Then we can 'build' the examples and queries (the dataset), yielding the computational graphs (one per sample) to be trained.

built_dataset = model.build_dataset(dataset)

Saving and Loading the Model

When our model is trained, or we want to persist the model's state (e.g., to make a checkpoint), we can utilize the model instance method state_dict() (or parameters()). The method puts all parameters' values into a dictionary that can later be saved (e.g., in JSON or in binary) or otherwise manipulated.

When we want to load a state into our model, we can simply pass the state into the load_state_dict() method.
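A minimal checkpointing round trip, as a sketch using only the state_dict()/load_state_dict() methods described above; the file name and the use of pickle for the binary serialization are illustrative choices, not part of the library's API:

import pickle

# grab the current parameter values as a dictionary
state = model.state_dict()

# persist the state in binary form (pickle is just one option)
with open("checkpoint.bin", "wb") as f:
    pickle.dump(state, f)

# later: restore the persisted state into a compatible model
with open("checkpoint.bin", "rb") as f:
    model.load_state_dict(pickle.load(f))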

Note

Evaluators offer the same interface for saving/loading the model.

Utilizing Evaluators

Writing custom training loops and handling different backends can be cumbersome and repetitive. The library offers 'evaluators' that encapsulate the training loop and testing evaluation. Evaluators also handle other responsibilities, such as building datasets.

from neuralogic.nn import get_evaluator


evaluator = get_evaluator(template, settings)

Once you have an evaluator, you can evaluate or train the model on a dataset. The dataset doesn't have to be pre-built, as in the manual approach above; the evaluator handles that for you.

Note

If the dataset is used more than once, it is more efficient to pass a pre-built dataset into the evaluator, preventing redundant dataset building.
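A sketch of such reuse, assuming the evaluator exposes a build_dataset method mirroring model.build_dataset (otherwise, the dataset can be built through the model as shown above):

# build the computational graphs only once
built_dataset = evaluator.build_dataset(dataset)  # assumed helper, see above

# reuse the pre-built dataset across repeated calls
evaluator.train(built_dataset, generator=False)
evaluator.test(built_dataset, generator=False)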

Settings Instance

The Settings instance contains all the settings used to customize the behavior of different parts of the library.

Most importantly, it affects the behavior of model building (e.g., specifying default rule/relation transformation functions), evaluators (e.g., the error function, number of epochs, learning rate, optimizer), and the model itself (e.g., the initialization of learnable parameters).

from neuralogic.core import Settings
from neuralogic.nn.init import Uniform
from neuralogic.optim import SGD


settings = Settings(
    initializer=Uniform(),
    optimizer=SGD(lr=0.1),
    epochs=100,
)

In the example above, we define settings ensuring that the initial values of the learnable parameters (of whichever model these settings are used for) are sampled from a uniform distribution. We also set properties utilized by evaluators: the number of epochs (\(100\)) and the optimizer, which is set to stochastic gradient descent (SGD) with a learning rate of \(0.1\).
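These settings can then be passed to the model building and evaluator construction steps shown earlier:

model = template.build(settings)
evaluator = get_evaluator(template, settings)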

Evaluator Training/Testing Interface

The evaluator's basic interface consists of two methods, train and test, for training and evaluating on a dataset, respectively. Both methods have the same interface and are implemented in two modes: generator and non-generator.

The generator mode (the default) yields a tuple of two elements (the total loss and the number of instances/samples) for each epoch. This mode can be useful when we want to, for example, visualize, log, or otherwise process the progress in real time during training (or testing).

for total_loss, seen_instances in neuralogic_evaluator.train(dataset):
    print(total_loss / seen_instances)  # e.g., log the average sample loss per epoch

The non-generator mode, on the other hand, returns only a tuple of metrics from the last epoch.

results = neuralogic_evaluator.train(dataset, generator=False)
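
Since test shares the same interface, evaluating on a dataset is analogous, e.g., in the non-generator mode:

results = neuralogic_evaluator.test(dataset, generator=False)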