AEDCNNNetwork
- class AEDCNNNetwork(latent_space_dim=128, temporal_latent_space=False, n_layers=4, kernel_size=3, activation='relu', n_filters=None, dilation_rate=1, padding_encoder='same', padding_decoder='same')
Establish the auto-encoder structure for a Dilated Convolutional Neural (DCN) Network.
A DCN-based model for learning low-rank embeddings of time series.
- Parameters:
- latent_space_dim: int, default=128
Dimension of the model's latent space.
- temporal_latent_space: bool, default=False
Flag choosing whether the latent space is a multivariate time series (MTS) or a flat Euclidean space.
- n_layers: int, default=4
Number of convolution layers in the autoencoder.
- kernel_size: Union[int, List[int]], default=3
Size of the 1D convolutional kernels of the encoder. A single int is expanded to a list of length n_layers with the same value for every layer; see the construction sketch after this parameter list.
- activation: Union[str, List[str]], default="relu"
The activation function used by the encoder's convolution layers. A single string is applied to all n_layers layers; defaults to "relu" for every layer.
- n_filters: Union[int, List[int]], default=None
Number of filters used in the encoder's convolution layers. If None, defaults to a list of multiples of 32, one per layer; a single int is applied to all n_layers layers.
- dilation_rate: Union[int, List[int]], default=1
The dilation rate of the encoder's convolutions, expanded to a list of n_layers elements (a list of powers of 2 by default). Note that a dilation_rate greater than 1 is not supported by Conv1DTranspose on some devices/OSes.
- padding_encoder: Union[str, List[str]], default="same"
The padding string for the encoder layers. Defaults to a list of "same" for n_layers elements. Valid strings are "causal", "valid", "same", or any other Keras-compatible padding string.
- padding_decoder: Union[str, List[str]], default="same"
The padding string for the decoder layers. Defaults to a list of "same" for n_layers elements.
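As an illustration of the scalar-versus-list handling described above, the following is a minimal construction sketch. The import path aeon.networks is an assumption based on the library's usual module layout, and the two constructions are intended to be equivalent.

```python
# Construction sketch; the import path is an assumption based on the
# library's usual module layout.
from aeon.networks import AEDCNNNetwork

# A scalar argument is applied to every one of the n_layers layers...
net_scalar = AEDCNNNetwork(n_layers=3, kernel_size=3, activation="relu")

# ...which is intended to be equivalent to spelling the lists out.
net_lists = AEDCNNNetwork(
    n_layers=3,
    kernel_size=[3, 3, 3],
    activation=["relu", "relu", "relu"],
)
```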
References
[1] Franceschi, J. Y., Dieuleveut, A., & Jaggi, M. (2019). Unsupervised scalable representation learning for multivariate time series. Advances in Neural Information Processing Systems, 32.
Methods
- build_network(input_shape): Construct a network and return its input and output layers.
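A hedged sketch of calling this method follows; the input shape of the form (n_timepoints, n_channels) and its values are illustrative assumptions, not part of the documented signature.

```python
# Sketch of building the layers; the input shape is an illustrative
# assumption of the form (n_timepoints, n_channels).
from aeon.networks import AEDCNNNetwork

network = AEDCNNNetwork(latent_space_dim=64, n_layers=3)

# Per the method summary above, build_network constructs the network
# and returns its input and output layers.
input_layer, output_layer = network.build_network(input_shape=(152, 2))
```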