The objective of this project is to synthesize images from text descriptions. Powerful recurrent architectures have been proposed to learn generalizable and discriminative representations of text, and concurrently, deep convolutional generative adversarial networks (DCGANs) have begun to generate convincing images of specific categories. We combine the advances in both fields in a single model built on the GAN formulation. Sample results of our model for the caption "this flower has petals that are yellow, white and purple and has dark lines" can be found in the image below.
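The core idea of conditioning the generator on text can be sketched as follows: the caption is encoded into an embedding, which is concatenated with a random noise vector before being fed to the generator network. This is a minimal illustrative sketch, not the project's actual code; the dimensions and function names are assumptions made for the example.

```python
import numpy as np

# Illustrative dimensions (assumptions, not the project's real values).
NOISE_DIM = 100       # z sampled from a standard normal, as in DCGAN
TEXT_EMBED_DIM = 128  # compressed text embedding phi(t)

def generator_input(text_embedding, rng):
    """Build the conditional generator input by concatenating a noise
    vector z with the text embedding phi(t) of the caption."""
    z = rng.standard_normal(NOISE_DIM)
    return np.concatenate([z, text_embedding])

rng = np.random.default_rng(0)
phi_t = rng.standard_normal(TEXT_EMBED_DIM)  # stand-in for an encoded caption
x = generator_input(phi_t, rng)
print(x.shape)  # (228,)
```

The generator then upsamples this combined vector into an image, while the discriminator receives both the image and the same text embedding so it can penalize images that do not match the caption.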
The project report can be found here.