AEAttentionBiGRUNetwork
- class AEAttentionBiGRUNetwork(latent_space_dim=None, temporal_latent_space=False, n_layers_encoder=1, n_layers_decoder=1, activation_encoder='relu', activation_decoder='relu')[source]
A class to implement an Auto-Encoder based on Attention Bidirectional GRUs.
- Parameters:
- latent_space_dim : int, default=None
Dimension of the latent space. If None, a dimension of 128 is used.
- temporal_latent_space : bool, default=False
Flag to choose whether the latent space is an MTS (multivariate time series) or a Euclidean space.
- n_layers_encoder : int, default=1
Number of Attention BiGRU layers in the encoder.
- n_layers_decoder : int, default=1
Number of Attention BiGRU layers in the decoder.
- activation_encoder : str or list of str, default="relu"
Activation function(s) to use in each layer of the encoder. Can be a single string, or a list with one entry per encoder layer.
- activation_decoder : str or list of str, default="relu"
Activation function(s) to use in each layer of the decoder. Can be a single string, or a list with one entry per decoder layer.
References
[1] Ienco, D., & Interdonato, R. (2020). Deep multivariate time series embedding clustering via attentive-gated autoencoder. In Advances in Knowledge Discovery and Data Mining: 24th Pacific-Asia Conference, PAKDD 2020, Singapore, May 11-14, 2020, Proceedings, Part I 24 (pp. 318-329). Springer International Publishing.
Methods
build_network(input_shape, **kwargs)
Construct a network and return its input and output layers.