optimizers

This module contains a collection of optimizers for training neuroptica models to fit labeled data. All optimizers whose names begin with “InSitu” use the on-chip interferometric gradient calculation routine described in Hughes et al. (2018), “Training of photonic neural networks through in situ backpropagation and gradient measurement”.

class neuroptica.optimizers.InSituAdam(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss], step_size=0.01, beta1=0.9, beta2=0.99, epsilon=1e-08)[source]

Bases: neuroptica.optimizers.Optimizer

On-chip training with in-situ backpropagation, using the adjoint field method and the Adam optimizer
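For reference, the constructor's step_size, beta1, beta2, and epsilon map onto the standard Adam update rule (Kingma and Ba, 2014). The sketch below is the textbook formulation, with g_t standing for the gradient measured in situ; it is not a transcription of the neuroptica source:

    m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
    \hat{m}_t = m_t / (1 - \beta_1^t)
    \hat{v}_t = v_t / (1 - \beta_2^t)
    \theta_t = \theta_{t-1} - \mathrm{step\_size} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

where \theta are the phase-shifter parameters being trained.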

__init__(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss], step_size=0.01, beta1=0.9, beta2=0.99, epsilon=1e-08)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(data: np.ndarray, labels: np.ndarray, epochs=1000, batch_size=32, show_progress=True, cache_fields=False, use_partial_vectors=False)[source]

Fit the model to the labeled data

Parameters:
    data: features vector, shape: (n_features, n_samples)
    labels: labels vector, shape: (n_label_dim, n_samples)
    epochs: number of passes over the training data
    batch_size: number of samples per minibatch
    show_progress: if set to True, displays training progress
    cache_fields: if set to True, will cache fields at the phase shifters on the forward and backward pass
    use_partial_vectors: if set to True, the MZI partial matrices will be stored as Nx2 vectors
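A minimal usage sketch follows. The Sequential model, the loss type, and the InSituAdam and fit() signatures are as documented on this page; the ClementsLayer, Activation, and Abs names and the MeanSquaredError loss are assumptions about the rest of the neuroptica API and may differ in your version.

    import numpy as np
    import neuroptica as neu

    N = 4  # number of waveguides (feature dimension)

    # Assumed layer/nonlinearity/loss names; only Sequential, InSituAdam,
    # and fit() are documented on this page.
    model = neu.Sequential([
        neu.ClementsLayer(N),
        neu.Activation(neu.Abs(N)),
    ])

    # Synthetic complex input fields; columns are samples, matching the
    # (n_features, n_samples) convention above.
    data = np.random.randn(N, 100) + 0j
    labels = np.abs(data)  # placeholder targets, shape (n_label_dim, n_samples)

    optimizer = neu.InSituAdam(model, neu.MeanSquaredError, step_size=0.01)
    optimizer.fit(data, labels, epochs=200, batch_size=32,
                  cache_fields=True, use_partial_vectors=True)

Note that the loss is passed as a class (Type[Loss]), not an instance.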

class neuroptica.optimizers.InSituGradientDescent(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss], learning_rate=0.01)[source]

Bases: neuroptica.optimizers.Optimizer

On-chip training with in-situ backpropagation, using the adjoint field method and standard gradient descent

__init__(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss], learning_rate=0.01)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(data: np.ndarray, labels: np.ndarray, epochs=1000, batch_size=32, show_progress=True)[source]

Fit the model to the labeled data. Note that the learning rate is set in the optimizer's constructor, not passed to fit().

Parameters:
    data: features vector, shape: (n_features, n_samples)
    labels: labels vector, shape: (n_label_dim, n_samples)
    epochs: number of passes over the training data
    batch_size: number of samples per minibatch
    show_progress: if set to True, displays training progress
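Because the learning rate is fixed at construction, a usage sketch differs from InSituAdam only in how the optimizer is built (model, data, and labels as in the InSituAdam example above; the MeanSquaredError loss name is again an assumption):

    # Plain gradient descent: theta <- theta - learning_rate * g
    optimizer = neu.InSituGradientDescent(model, neu.MeanSquaredError,
                                          learning_rate=0.01)
    optimizer.fit(data, labels, epochs=1000, batch_size=32, show_progress=True)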

class neuroptica.optimizers.Optimizer(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss])[source]

Bases: object

Base class for an optimizer

__init__(model: neuroptica.models.Sequential, loss: Type[neuroptica.losses.Loss])[source]

Initialize self. See help(type(self)) for accurate signature.

__weakref__

list of weak references to the object (if defined)

static make_batches(data: np.ndarray, labels: np.ndarray, batch_size: int, shuffle=True) → Tuple[np.ndarray, np.ndarray][source]

Prepare batches of a given size from data and labels

Parameters:
    data: features vector, shape: (n_features, n_samples)
    labels: labels vector, shape: (n_label_dim, n_samples)
    batch_size: size of the batch
    shuffle: if True, batches will be randomized

Returns: yields a tuple (data_batch, label_batch) per batch
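The docstring indicates that make_batches yields batches despite the Tuple return annotation, so it is consumed by iteration. A self-contained sketch using only plain numpy arrays (no photonic model required):

    import numpy as np
    from neuroptica.optimizers import Optimizer

    data = np.random.randn(8, 100)    # (n_features, n_samples)
    labels = np.random.randn(3, 100)  # (n_label_dim, n_samples)

    # Batches are drawn along the samples axis; with shuffle=True the sample
    # order is randomized before batching.
    for data_batch, label_batch in Optimizer.make_batches(data, labels, batch_size=32):
        print(data_batch.shape, label_batch.shape)  # e.g. (8, 32) (3, 32)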