plot_number = 225

# Shift the latent means up (scale_means=5) while keeping unit standard deviation
generated_images = generate_images(generate_latent_points(number=plot_number,
                                                          scale_means=5,
                                                          scale_stds=1))
plot_result(generated_images, number=plot_number)

# Shift the latent means down (scale_means=-5) to explore the opposite direction
generated_images = generate_images(generate_latent_points(number=plot_number,
                                                          scale_means=-5,
                                                          scale_stds=1))
plot_result(generated_images, number=plot_number)

# Keep the means fixed and widen the standard deviations (scale_stds=5) instead
generated_images = generate_images(generate_latent_points(number=plot_number,
                                                          scale_means=1,
                                                          scale_stds=5))
plot_result(generated_images, number=plot_number)
Again, we have found something interesting. Shifting the latent means takes us from digit to digit, while widening the standard deviations seems to increase the number of different digits in a single plot! In the last image above, we can barely make out every MNIST digit. Let us make one last plot using this insight by further increasing the standard deviation of our Gaussian noise.
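To make these two knobs concrete, here is a minimal sketch of what a generate_latent_points helper along these lines could look like. The latent_dim default of 100 and the use of numpy are assumptions for illustration, not necessarily the implementation used earlier in this post.

import numpy as np

def generate_latent_points(number, scale_means=1, scale_stds=1, latent_dim=100):
    """Sample `number` latent vectors from N(scale_means, scale_stds^2).

    scale_means shifts the Gaussian the generator samples from, while
    scale_stds widens (or narrows) it. latent_dim=100 is an assumed default.
    """
    return np.random.normal(loc=scale_means, scale=scale_stds,
                            size=(number, latent_dim))

Under this reading, scale_means=5 and scale_means=-5 slide the whole cloud of latent points to different regions of the latent space (hence different digits), while scale_stds=5 spreads the points out so that a single grid covers many regions at once.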
plot_number = 400

# Widen the standard deviations even further (scale_stds=10)
generated_images = generate_images(generate_latent_points(number=plot_number,
                                                          scale_means=1,
                                                          scale_stds=10))
plot_result(generated_images, number=plot_number)
A pretty cool result! Our generator has indeed learned a distribution that qualitatively looks a whole lot like the MNIST dataset.
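To eyeball that claim, one could plot real MNIST digits with the same grid helper and compare the two grids side by side. A minimal sketch, assuming plot_result accepts any array of 28x28 images, that keras is available, and that scaling pixels to [0, 1] matches what the helper expects:

from tensorflow.keras.datasets import mnist

(train_images, _), _ = mnist.load_data()
# Take the first plot_number real digits and scale to [0, 1] (assumed input range)
real_images = train_images[:plot_number].astype("float32") / 255.0
plot_result(real_images, number=plot_number)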