
Credits:

This project would not have been possible without the development of the algorithms and approaches by the people and teams below. If there is anyone I should have included but have missed, please drop me an email so I can update the list accordingly. Through the success of this project I hope to dedicate some time to giving back to the AI-generated art community by developing the codebase and sharing new approaches.


VQGAN

Used for image generation

Thanks to the VQGAN team for releasing their code under the MIT license.

@misc{esser2020taming,
      title={Taming Transformers for High-Resolution Image Synthesis},
      author={Patrick Esser and Robin Rombach and Björn Ommer},
      year={2020},
      eprint={2012.09841},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

CLIP

Used for image discrimination

Thanks to OpenAI for releasing the CLIP algorithm under the MIT license.

@misc{unpublished2021clip,
    title  = {CLIP: Connecting Text and Images},
    author = {Alec Radford and Ilya Sutskever and Jong Wook Kim and Gretchen Krueger and Sandhini Agarwal},
    year   = {2021}
}

VQGAN - Python Code

Basis of current implementation

Thanks to @NerdyRodent for reworking the VQGAN+CLIP code into a Python-based executable.


VQGAN - Original implementation

Basis of current implementation

Thanks to Katherine Crowson (@RiversHaveWings) for the original implementation of VQGAN+CLIP that @NerdyRodent's implementation is based on.

BigSleep

VQGAN+CLIP precursor

Thanks to @advadnoun for developing the BigSleep Colab notebook that originated the VQGAN+CLIP approach.

Analogue photos

Inspiration for the project and basis of heritage series

Thanks to my grandfather for his love of photography and the beautiful catalogue of images he created and archived on 35mm slides.
