Module connectome.models.E2E_conv
The custom Edge-to-Edge (E2E) convolutional layer developed by Kawahara et al. for BrainNetCNN.
Classes
class E2E_conv (rank, filters, kernel_size, strides=1, padding='valid', data_format=None, dilation_rate=1, activation=None, use_bias=False, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
This is the class from which all layers inherit.
A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the call() method, and a state (weight variables). State can be created in various places, at the convenience of the subclass implementer:
- in __init__();
- in the optional build() method, which is invoked by the first __call__() to the layer, and supplies the shape(s) of the input(s), which may not have been known at initialization time;
- in the first invocation of call(), with some caveats discussed below.
Users will just instantiate a layer and then treat it as a callable.
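For this module, that means constructing an E2E_conv instance and calling it on a connectome tensor. The snippet below is a minimal sketch, not taken from the package: the node count, filter count, kernel_size convention, and input layout are all assumptions and should be checked against the layer's source before use.

    import tensorflow as tf
    from connectome.models.E2E_conv import E2E_conv

    n_nodes = 90   # assumed number of brain regions (connectome is n_nodes x n_nodes)

    # Assumed cross-filter convention: a kernel spanning a full row/column pair.
    e2e = E2E_conv(rank=2,
                   filters=32,
                   kernel_size=(2, n_nodes),
                   activation='relu',
                   data_format='channels_last')

    # One connectivity matrix with a single channel; the first call builds the weights.
    x = tf.random.normal((1, n_nodes, n_nodes, 1))
    y = e2e(x)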
Args
trainable - Boolean, whether the layer's variables should be trainable.
name - String name of the layer.
dtype - The dtype of the layer's computations and weights. Can also be a tf.keras.mixed_precision.Policy, which allows the computation and weight dtype to differ. Default of None means to use tf.keras.mixed_precision.global_policy(), which is a float32 policy unless set to a different value.
dynamic - Set this to True if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph.
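These base-class arguments are accepted through **kwargs by any Layer subclass. A minimal sketch with a standard Keras layer (assuming, as is usual, that E2E_conv forwards the same keywords to the base Layer via its **kwargs):

    import tensorflow as tf

    # A frozen, explicitly named layer running in float32.
    frozen = tf.keras.layers.Dense(8, trainable=False, name='frozen_dense',
                                   dtype='float32')
    assert frozen.trainable is False
    assert frozen.name == 'frozen_dense'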
Attributes
name - The name of the layer (string).
dtype - The dtype of the layer's weights.
variable_dtype - Alias of dtype.
compute_dtype - The dtype of the layer's computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a tf.keras.mixed_precision.Policy, this will be different than variable_dtype.
dtype_policy - The layer's dtype policy. See the tf.keras.mixed_precision.Policy documentation for details.
trainable_weights - List of variables to be included in backprop.
non_trainable_weights - List of variables that should not be included in backprop.
weights - The concatenation of the lists trainable_weights and non_trainable_weights (in this order).
trainable - Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of layer.trainable_weights.
input_spec - Optional (list of) InputSpec object(s) specifying the constraints on inputs that can be accepted by the layer.
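A short sketch of how these attributes look on a freshly built layer (using a generic Dense layer for illustration; E2E_conv exposes the same inherited attributes):

    import tensorflow as tf

    layer = tf.keras.layers.Dense(4)
    layer.build((None, 8))                   # create the kernel and bias variables

    print(layer.dtype)                       # 'float32' under the default policy
    print(layer.compute_dtype)               # matches dtype unless mixed precision is used
    print(len(layer.trainable_weights))      # 2: kernel and bias
    print(len(layer.non_trainable_weights))  # 0
    print(len(layer.weights))                # 2: trainable + non-trainable, in that order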
We recommend that descendants of Layer implement the following methods:
- __init__(): Defines custom layer attributes, and creates layer weights that do not depend on input shapes, using add_weight(), or other state.
- build(self, input_shape): This method can be used to create weights that depend on the shape(s) of the input(s), using add_weight(), or other state. __call__() will automatically build the layer (if it has not been built yet) by calling build().
- call(self, inputs, *args, **kwargs): Called in __call__ after making sure build() has been called. call() performs the logic of applying the layer to the inputs. The first invocation may additionally create state that could not be conveniently created in build(); see its docstring for details. Two reserved keyword arguments you can optionally use in call() are: training (boolean, whether the call is in inference mode or training mode; see more details in the layer/model subclassing guide) and mask (boolean tensor encoding masked timesteps in the input, used in RNN layers; see more details in the layer/model subclassing guide). A typical signature for this method is call(self, inputs), and the user can optionally add training and mask if the layer needs them. *args and **kwargs are only useful for future extension when more input parameters are planned to be added.
- get_config(self): Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in __init__, then override from_config(self) as well. This method is used when saving the layer or a model that contains this layer.
Examples:
Here's a basic example: a layer with two variables, w and b, that returns y = w . x + b. It shows how to implement build() and call(). Variables set as attributes of a layer are tracked as weights of the layers (in layer.weights).

    class SimpleDense(Layer):

      def __init__(self, units=32):
          super(SimpleDense, self).__init__()
          self.units = units

      def build(self, input_shape):  # Create the state of the layer (weights)
          w_init = tf.random_normal_initializer()
          self.w = tf.Variable(
              initial_value=w_init(shape=(input_shape[-1], self.units),
                                   dtype='float32'),
              trainable=True)
          b_init = tf.zeros_initializer()
          self.b = tf.Variable(
              initial_value=b_init(shape=(self.units,), dtype='float32'),
              trainable=True)

      def call(self, inputs):  # Defines the computation from inputs to outputs
          return tf.matmul(inputs, self.w) + self.b

    # Instantiates the layer.
    linear_layer = SimpleDense(4)

    # This will also call `build(input_shape)` and create the weights.
    y = linear_layer(tf.ones((2, 2)))
    assert len(linear_layer.weights) == 2

    # These weights are trainable, so they're listed in `trainable_weights`:
    assert len(linear_layer.trainable_weights) == 2

Note that the method add_weight() offers a shortcut to create weights:

    class SimpleDense(Layer):

      def __init__(self, units=32):
          super(SimpleDense, self).__init__()
          self.units = units

      def build(self, input_shape):
          self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                   initializer='random_normal',
                                   trainable=True)
          self.b = self.add_weight(shape=(self.units,),
                                   initializer='random_normal',
                                   trainable=True)

      def call(self, inputs):
          return tf.matmul(inputs, self.w) + self.b

Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during call(). Here's an example layer that computes the running sum of its inputs:

    class ComputeSum(Layer):

      def __init__(self, input_dim):
          super(ComputeSum, self).__init__()
          # Create a non-trainable weight.
          self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                                   trainable=False)

      def call(self, inputs):
          self.total.assign_add(tf.reduce_sum(inputs, axis=0))
          return self.total

    my_sum = ComputeSum(2)
    x = tf.ones((2, 2))
    y = my_sum(x)
    print(y.numpy())  # [2. 2.]
    y = my_sum(x)
    print(y.numpy())  # [4. 4.]

    assert my_sum.weights == [my_sum.total]
    assert my_sum.non_trainable_weights == [my_sum.total]
    assert my_sum.trainable_weights == []

For more information about creating layers, see the guide Making new Layers and Models via subclassing.
Ancestors
- keras.engine.base_layer.Layer
- tensorflow.python.module.module.Module
- tensorflow.python.training.tracking.autotrackable.AutoTrackable
- tensorflow.python.training.tracking.base.Trackable
- keras.utils.version_utils.LayerVersionSelector
Methods
def build(self, input_shape)
Creates the variables of the layer (optional, for subclass implementers).
This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call. It is invoked automatically before the first execution of call().

This is typically used to create the weights of Layer subclasses (at the discretion of the subclass implementer).

Args

input_shape - Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
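For illustration, build() can also be invoked directly when you want the weights to exist before the first call. A brief sketch with a generic layer; the input shape is an assumption:

    import tensorflow as tf

    layer = tf.keras.layers.Dense(4)
    assert not layer.built
    layer.build(tf.TensorShape([None, 16]))   # create weights for 16 input features
    assert layer.built and len(layer.weights) == 2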
def call(self, inputs)
This is where the layer's logic lives.
The call() method may not create state (except in its first invocation, wrapping the creation of variables or other resources in tf.init_scope()). It is recommended to create state in __init__(), or the build() method that is called automatically before call() executes the first time.

Args
inputs - Input tensor, or dict/list/tuple of input tensors. The first positional inputs argument is subject to special rules:
- inputs must be explicitly passed. A layer cannot have zero arguments, and inputs cannot be provided via the default value of a keyword argument.
- NumPy array or Python scalar values in inputs get cast as tensors.
- Keras mask metadata is only collected from inputs.
- Layers are built (build(input_shape) method) using shape info from inputs only.
- input_spec compatibility is only checked against inputs.
- Mixed precision input casting is only applied to inputs. If a layer has tensor arguments in *args or **kwargs, their casting behavior in mixed precision should be handled manually.
- The SavedModel input specification is generated using inputs only.
- Integration with various ecosystem packages like TFMOT, TFLite, TF.js, etc. is only supported for inputs and not for tensors in positional and keyword arguments.
*args - Additional positional arguments. May contain tensors, although this is not recommended, for the reasons above.
**kwargs - Additional keyword arguments. May contain tensors, although this is not recommended, for the reasons above. The following optional keyword arguments are reserved:
- training: Boolean scalar tensor or Python boolean indicating whether the call is meant for training or inference.
- mask: Boolean input mask. If the layer's call() method takes a mask argument, its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
Returns
A tensor or list/tuple of tensors.
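The training flag matters for layers whose forward pass differs between training and inference, e.g. dropout. A brief sketch with standard Keras layers, not specific to E2E_conv:

    import tensorflow as tf

    drop = tf.keras.layers.Dropout(0.5)
    x = tf.ones((1, 4))

    y_train = drop(x, training=True)    # some units are zeroed (and the rest rescaled)
    y_infer = drop(x, training=False)   # identity pass-through at inference

    print(y_train.numpy())
    print(y_infer.numpy())              # [[1. 1. 1. 1.]]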
def compute_output_shape(self, input_shape)
Computes the output shape of the layer.
This method will cause the layer's state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.
Args
input_shape - Shape tuple (tuple of integers) or list of shape tuples (one per input tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns
An output shape tuple.
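A quick sketch with a generic layer (the exact output shape of E2E_conv depends on its kernel convention, so it is not shown here):

    import tensorflow as tf

    dense = tf.keras.layers.Dense(4)
    print(dense.compute_output_shape((None, 8)))   # -> (None, 4)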
def get_config(self)
Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Returns

Python dictionary.
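A minimal round-trip sketch with a generic layer; whether E2E_conv serializes all of its own constructor arguments in its get_config() is not verified here:

    import tensorflow as tf

    layer = tf.keras.layers.Dense(4, activation='relu')
    config = dict(layer.get_config())           # copy before modifying, as noted above
    clone = tf.keras.layers.Dense.from_config(config)

    assert clone.units == 4                     # same configuration, fresh (untrained) weights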