Lindsey Carr
Artist/Programmer & Illustrator
Source References
My exhibition paintings are built from compositions that draw on historical artworks, my own photography, clay maquettes, and scripted visuals I program in the Processing language.
Certain elements sometimes use generative computation, run locally on a single GPU (older GANs I trained on historical/still-life imagery, or Stable Diffusion 1.5 for guided imagery), to develop the reference material further. Starting from my own maquettes and source images, the generative step is used to introduce distortion or ambiguity. These tools are part of the research and compositional process; the finished works are painted by hand.
No commercial projects involve any generative methods, due to the uncertainty surrounding copyright and authorship.
Processing Examples
Below is a basic outline of one process: starting on the left with a single frame output from 3D shapes scripted in Processing, then adding progressive img2img generation towards the right.

An example video of progressive generation
Example of the 'buds' stage of generation
Example of the 'bloom' stage of generation
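As a rough illustration of the feedback loop described above (not the actual pipeline), the sketch below uses `toy_img2img`, a hypothetical stand-in for a Stable Diffusion 1.5 img2img call, to show how each generated frame is fed back in as the input for the next step:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_img2img(image, strength):
    # Hypothetical stand-in for a Stable Diffusion img2img call:
    # blends the input with noise in proportion to `strength`.
    noise = rng.normal(0.5, 0.2, image.shape)
    return np.clip((1 - strength) * image + strength * noise, 0.0, 1.0)

def progressive_generation(seed_frame, steps=6, strength=0.35):
    # Feed each output back in as the next input, as in the
    # buds-to-bloom sequence above.
    frames = [seed_frame]
    for _ in range(steps):
        frames.append(toy_img2img(frames[-1], strength))
    return frames

# Seed frame standing in for a single rendered Processing frame
# (grayscale, values in 0..1, containing one simple shape).
seed = np.zeros((64, 64))
seed[16:48, 16:48] = 1.0
frames = progressive_generation(seed)
```

In the real process the seed frame is a render from the Processing sketch, and each step would call the diffusion model with a chosen denoising strength: the higher the strength, the further each generation drifts from the source geometry.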
Maquette Examples




Historical References

Huybert Van Westhoven's
'Still life with cup, fruits and vine branches on a red tablecloth'

Paws (Off the Table)
Oil on Linen
Training GAN models
GANs, unlike diffusion models, have to be trained from scratch on a chosen set of images; I train mine on historical art references (you can learn more about the technicalities of this here). Through this process the models learn the visual language of those works. Where diffusion models interpret a prompt, GANs are more like dreamers under pressure: one network invents an image while another network critiques it, until a kind of visual truth emerges between them. They have inherent problems and a propensity towards 'hallucinations', which suits my process.
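The invent-and-critique dynamic can be sketched in miniature. The toy example below is illustrative only (it is not my training code, and the parameter values are arbitrary): a tiny linear "generator" tries to produce numbers that a logistic "discriminator" cannot tell apart from real samples, with both updated by hand-derived gradients.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_toy_gan(steps=5000, lr=0.02, batch=32, seed=0):
    # Real data: samples from a Gaussian around 3.0.
    # Generator G(z) = a*z + b tries to mimic it; discriminator
    # D(x) = sigmoid(w*x + c) tries to tell real from generated.
    rng = np.random.default_rng(seed)
    w, c = 0.1, 0.0
    a, b = 1.0, 0.0
    real_mean, real_std = 3.0, 0.5
    for _ in range(steps):
        # Discriminator step: push D(real) up, D(fake) down.
        real = rng.normal(real_mean, real_std, batch)
        fake = a * rng.normal(size=batch) + b
        s_real, s_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
        gw = (-(1 - s_real) * real + s_fake * fake).mean()
        gc = (-(1 - s_real) + s_fake).mean()
        w, c = w - lr * gw, c - lr * gc
        # Generator step: move fakes toward where D says "real".
        z = rng.normal(size=batch)
        s = sigmoid(w * (a * z + b) + c)
        ga = (-(1 - s) * w * z).mean()
        gb = (-(1 - s) * w).mean()
        a, b = a - lr * ga, b - lr * gb
    return a, b

a, b = train_toy_gan()
```

After training, the generator's offset `b` drifts from 0 towards the real mean of 3.0: the "dreamer" has been pressured into the critic's idea of truth. Real image GANs do exactly this with convolutional networks over pixels instead of two numbers, which is where the instability and hallucination come from.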




Photography Experiments
