In the link below, I present the use of autoencoders on the MNIST dataset.
I have experimented with different numbers of epochs during training. Increasing the number of epochs can lead to a smaller validation loss, which in turn leads to better reconstruction of the digits, as shown in the example. Of course, at some point increasing the number of epochs no longer brings notable improvements, as the validation loss plateaus over further training epochs.
https://colab.research.google.com/drive/1SGgOgsK1H1TN5Mxvoq7Fb1G_mUxgNi_0?usp=sharing
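
For reference, here is a minimal sketch of the kind of setup I mean, assuming a Keras/TensorFlow workflow on MNIST with a simple dense autoencoder; the architecture, bottleneck size, and epoch count are illustrative and not necessarily the exact values used in the notebook:

```python
# Minimal sketch of a dense autoencoder on MNIST (illustrative values,
# not the exact configuration from the linked Colab notebook).
from tensorflow import keras
from tensorflow.keras import layers

# Load and flatten MNIST, scaling pixels to [0, 1]
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Simple symmetric encoder/decoder with a 32-dimensional bottleneck
autoencoder = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),   # bottleneck
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train and monitor the validation loss: it drops quickly at first,
# then flattens out once additional epochs stop helping.
history = autoencoder.fit(
    x_train, x_train,
    epochs=50,
    batch_size=256,
    validation_data=(x_test, x_test),
    verbose=2,
)

# Reconstructions improve as the validation loss decreases
reconstructions = autoencoder.predict(x_test[:10])
```

Comparing `history.history["val_loss"]` across runs with different epoch counts makes the plateau easy to see, and inspecting `reconstructions` side by side with the originals shows how reconstruction quality tracks the validation loss.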