API

This section documents the ApplicationDrivenLearning API.

Constructors

ApplicationDrivenLearning.Model (Type)
Model <: JuMP.AbstractModel

Create an empty ApplicationDrivenLearning.Model with empty plan and assess models, no forecast model, and default settings.
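
A minimal construction sketch; attaching a solver through JuMP.set_optimizer is an assumption based on Model being a JuMP.AbstractModel, not something confirmed by this page:

import ApplicationDrivenLearning
import HiGHS
using JuMP

# Empty model: the plan and assess subproblems start empty and the
# forecast model is attached later (e.g. as a PredictiveModel).
model = ApplicationDrivenLearning.Model()

# Assumed: the usual JuMP call for attaching a solver.
JuMP.set_optimizer(model, HiGHS.Optimizer)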

ApplicationDrivenLearning.PredictiveModel (Type)
PredictiveModel(networks, input_output_map, input_size, output_size)

Creates a predictive (forecast) model for the ApplicationDrivenLearning module from Flux models and input/output information.

...

Arguments

  • networks: array of Flux models to be used.
  • input_output_map::Vector{Dict{Vector{Int}, Vector{Int}}}: array, ordered like networks, of mappings from the input indices each model reads to the output indices its predictions fill.
  • input_size::Int: size of the input vector.
  • output_size::Int: size of the output vector. ...

Example

julia> pred_model = PredictiveModel(
        [Flux.Dense(1 => 1), Flux.Dense(3 => 2)],
        [
            Dict([1] => [1], [2] => [2]),
            Dict([1,2,3] => [3,4], [1,4,5] => [5,6])
        ],
        5,
        6
    );
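
Here the first network (Dense(1 => 1)) is applied twice, mapping input index 1 to output 1 and input index 2 to output 2, while the second network (Dense(3 => 2)) maps inputs [1,2,3] to outputs [3,4] and inputs [1,4,5] to outputs [5,6]; together the two maps fill every position of the 6-element output vector from the 5 input features.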

JuMP variable types

Structs

ApplicationDrivenLearning.Options (Type)
Options(mode; params...)

Options struct to hold optimization mode and mode parameters.

...

Example

options = Options(
    GradientMode;
    rule = Flux.RMSProp(0.01),
    epochs = 100,
    batch_size = 10,
)

...
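
Once built, the options object is handed to the training routine. A hedged sketch, assuming a train!(model, X, Y, options) entry point; the name and argument order are assumptions, not confirmed by this page:

# Hypothetical training call using the options built above.
ApplicationDrivenLearning.train!(model, X, Y, options)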


Modes

ApplicationDrivenLearning.NelderMeadMode (Type)
NelderMeadMode <: AbstractOptimizationMode

Used to solve the application-driven learning training problem with the Nelder-Mead method implementation from the Optim.jl package.

...

Parameters

  • initial_simplex is the initial simplex of solutions from which the search starts.
  • parameters are the parameters passed to the Nelder-Mead optimization method. ...
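
A hedged configuration sketch; passing an Optim.Options object as parameters is an assumption about what the underlying Optim.jl call accepts:

import Optim
import ApplicationDrivenLearning

options = ApplicationDrivenLearning.Options(
    ApplicationDrivenLearning.NelderMeadMode;
    parameters = Optim.Options(iterations = 200),  # assumed: forwarded to Optim.optimize
)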
ApplicationDrivenLearning.GradientMode (Type)
GradientMode <: AbstractOptimizationMode

Used to solve the application-driven learning training problem using a gradient-based optimization method.

...

Parameters

  • rule is the optimiser object used in the gradient optimization process.
  • epochs is the number of epochs to run.
  • batch_size is the batch size used in each update.
  • verbose is a flag controlling whether training progress is printed.
  • computecostevery is the epoch frequency at which the cost is computed and the best solution is evaluated.
  • time_limit is the time limit for the training process. ...
ApplicationDrivenLearning.BilevelMode (Type)
BilevelMode <: AbstractOptimizationMode

Used to solve the application-driven learning training problem as a bilevel optimization problem using the BilevelJuMP.jl package.

...

Parameters

  • optimizer::Function is equivalent to solver in BilevelJuMP.BilevelModel.
  • silent::Bool is equivalent to silent in BilevelJuMP.BilevelModel.
  • mode::Union{Nothing, BilevelJuMP.BilevelMode} is equivalent to mode in BilevelJuMP.BilevelModel. ...
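
A hedged configuration sketch; the solver factory and BilevelJuMP mode shown are illustrative assumptions:

import ApplicationDrivenLearning
import BilevelJuMP, HiGHS

options = ApplicationDrivenLearning.Options(
    ApplicationDrivenLearning.BilevelMode;
    optimizer = () -> HiGHS.Optimizer(),  # a Function, used as the BilevelModel solver
    silent = true,
    mode = BilevelJuMP.SOS1Mode(),        # assumed reformulation mode
)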

Attribute getters and setters

Flux attribute getters and setters

ApplicationDrivenLearning.apply_gradient! (Function)
apply_gradient!(model, dCdy, X, rule)

Apply a gradient vector to the model parameters.

...

Arguments

  • model::PredictiveModel: model to be updated.
  • dCdy::Vector{<:Real}: gradient vector.
  • X::Matrix{<:Real}: input data.
  • rule: Optimisation rule. ...
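
A hedged call sketch; the array shapes and layout are assumptions, with dCdy read as the gradient of the application cost with respect to the model output:

using Flux

dCdy = randn(Float32, 6)   # assumed: one entry per model output
X = randn(Float32, 5, 32)  # assumed layout: features by observations
ApplicationDrivenLearning.apply_gradient!(pred_model, dCdy, X, Flux.Descent(0.1))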

Other functions

ApplicationDrivenLearning.compute_cost (Function)
compute_cost(model, X, Y, with_gradients=false)

Compute the cost function (C) based on the model predictions and the true values.

...

Arguments

  • model::ApplicationDrivenLearning.Model: model to evaluate.
  • X::Matrix{<:Real}: input data.
  • Y::Matrix{<:Real}: true values.
  • with_gradients::Bool=false: flag to compute and return gradients. ...
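
A hedged call sketch; the two-element return under with_gradients=true follows from the flag's description, but its exact structure is an assumption:

# Cost of the model's decisions on data (X, Y).
c = ApplicationDrivenLearning.compute_cost(model, X, Y)

# Assumed: cost returned together with the gradients.
c, dCdy = ApplicationDrivenLearning.compute_cost(model, X, Y, true)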