

This project would not have been possible without the algorithms and approaches developed by the people and teams below. If there is anyone I should have included but have missed, please drop me an email so I can update this page accordingly. Through the success of this project I hope to dedicate some time to giving back to the AI-generated art community by developing the codebase and sharing new approaches.



VQGAN

Used for image generation

Thanks to the VQGAN team for releasing their code under the MIT license. 


      @misc{esser2020taming,
            title={Taming Transformers for High-Resolution Image Synthesis},
            author={Patrick Esser and Robin Rombach and Björn Ommer},
            year={2020},
            eprint={2012.09841},
            archivePrefix={arXiv}
      }







CLIP

Used for image discrimination

Thanks to OpenAI for releasing the CLIP algorithm under the MIT license.


    @misc{radford2021clip,
        title  = {CLIP: Connecting Text and Images},
        author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
        year   = {2021}
    }


VQGAN - Python Code

Basis of current implementation

Thanks to @NerdyRodent for reworking the VQGAN+CLIP code into a Python-based executable.

VQGAN - Original implementation

Basis of current implementation

Thanks to Katherine Crowson (@RiversHaveWings) for the original implementation of VQGAN+CLIP that @NerdyRodent's implementation is based on.


VQGAN-CLIP precursor

Thanks to @advadnoun for developing the BigSleep Colab notebook that originated the VQGAN+CLIP approach.

Analogue photos

Inspiration for the project and basis of heritage series

Thanks to my grandfather for his love of photography and for the beautiful catalogue of images he created and archived on 35mm slides.
