Subtitles prepared by human
Hi, I'm Tobias Revell. I'm an artist and academic from London, who also spends a lot of time looking at computer graphics and rendering and machine learning and data and things like that, and I have been a long-time Alan Warburton stan. It was my enormous pleasure, in the heady salad days of October 2020, to get to see RGBFAQ in person at arebyte Gallery. And I wish we hadn't gone into lockdown so quickly, as I really wanted to go back and see it again. As with all of Alan's work, I adored it. His approach to narrativising a complex and tangled technical and political history - one with implications that go back hundreds of years and stretch hundreds of years into the future - is always completely and utterly inspiring. To actually see one of his stories exploded over the space of the arebyte Gallery, much like his exploded image, felt like crawling inside his head a bit - it felt like being inside Alan Warburton worldspace. So, as an artist and researcher and curator myself, both as an independent practitioner,
as well as under my research and curatorial outfit with my colleague Natalie Kane, our global super brand Haunted Machines, I spend a lot of time looking at and thinking about the relationship between realism and expediency in computer graphics, data simulations, synthetic images and things like that. How do creatives, artists and designers, as well as computer scientists and people working on commercial projects, make decisions about how best to represent reality in a way that is expedient to the computational and technical limitations of the time? And what do those decisions imply about the politics of computer graphics? So I always find Alan's work inspiring, because he offers an interesting insight into how the technical and the sort of conditional politics of the time intersect to result in certain decisions that then have ramifications decades on. And one of the things that becomes a really clear parallel in RGBFAQ
to me is the story of computer graphics at this point in its relationship to a story that's really as old as the state, organisations and the human viewpoint on natural phenomena, which I think has been put best by James C. Scott in his 1998 book 'Seeing Like a State'. In 'Seeing Like a State', James C. Scott explores the idea that the state, or any other large organisation, goes through a project of three stages when trying to optimise its relationship with the natural world. First, it observes and measures natural phenomena, and records those observations. Secondly, it simulates those observations: it builds simulations or models of the natural phenomena and tries to optimise those models for the conditions it wants to get to. And then finally, it reenacts the simulation on the world, so it remodels the world in line with the simulation that is most effective. James C. Scott looks at German forestry in his book. He looks at the forestry scientists of the 1870s,
who were trying to optimise lumber yield, which was one of Germany's main exports at the time. They spent a lot of time measuring and observing the way that the forest worked and the way that trees grew and things like that. Then they built various models and diagrammatic representations of the forest and its relationships in order to try and maximise the yield they would get. And then they remodelled the forest based on those observations, which resulted in this sort of industrial production of lumber, where you have trees in perfectly ordered rows, spaced the optimum distance apart while being close enough to get as many in as possible, to optimise growth speed and stuff like that. And of course, ultimately that project was a failure, because the model failed to take into account the emergent complexity of the natural world and the ecological relationships that forests depend on, which are sort of immeasurable and inconceivable to humans. And we see the same thing all the time. Alan explores this particular journey in three steps through computer graphics, from the early history of computer graphics
with the attempts of scientists, physicists, mathematicians and rocket scientists to measure and observe complex natural phenomena, in fields like the cybernetic sciences as well as meteorology and things like that. Then comes the attempt to simulate them, so to build models, and those models are enacted through computer graphics. That's where the exploded image comes in: the representation of natural phenomena in layers or elements of measurable data that are comprehensible to a computer but look realistic to a human being. This is where we get into this trade-off between expediency and realism. And then, in the final part of RGBFAQ, there's this feedback loop. The synthetic image starts to become the de facto image that is then used to train so-called artificial intelligence, machine learning systems, and so on and so forth. At this stage the synthetic image is what Bijker would probably call normalised, right? It becomes the standard form of image production
and consumption, apart from the photographic image. So at the very end, we see a vintage photographic image being turned into something that is legible to computation, and that becomes the default mode of image production and consumption. The story of RGBFAQ is much more in-depth than that; it takes into account a lot of things. And there are a couple of elements that jumped out at me as quite significant. One is this idea of the exploded image: the idea that, in the quest for expediency and the best way to represent, control and simulate reality - and this is not the abstract simulation of forestry modelling or, early in RGBFAQ, the movement of satellites; this is trying to create a deceptive interpretation of reality that is believable to human beings - the image is broken down into layers that are meaningful to a computer but aren't in themselves meaningful to human experience. So things like z-depth, which is the distance of elements in an image away from the camera perspective,
face IDs and object IDs, which allow individual objects to be identified in a scene - which I suppose is a little bit like human object recognition - and normal mapping, so the direction that a face is pointing relative to the position of the camera or the world. And finally, certain types of light interaction, and particularly the interesting utility layers, which record how difficult a certain light interaction - let's say something involving a lot of transparency or reflection - is for a computer. That is often really interesting, because that's where decisions are made about what to render and what is realistic. We've seen a lot of advances in the last year or two in ray tracing, which is an exceptionally computationally intense form of rendering that results in very realistic lighting. It deals with things like global illumination: the way that light moves around a room, bounces around in corners and subtly illuminates different elements of an object, which is usually very hard to capture. But as ray tracing starts to become more and more feasible in household, domestic sort of computing devices like games consoles and desktop computers,
we start to see the way that decisions are made about how reality is represented. So how does a software designer say: okay, actually, in order for this game or this virtual world to be realistic, I don't need to do ray tracing, I don't need to worry about global illumination, or about mirrors - which Alan references as well, another artefact that ray tracing can do, weirdly, more cheaply in some ways than other forms of rendering. How do I then edit my version of the world in order to meet computational efficiency? Okay, well, I won't have any shiny surfaces, I won't have any transparencies, I won't have any mirrors - and therefore you're starting to construct reality. And we've seen stories in the last few weeks, as I talk now in February 2021, where a lot of the tech giants have really been talking about trying to set up the proprietary mechanisms, protocols and standards by which virtual worlds will operate, and they will make decisions about whether virtual worlds should be allowed mirrors, whether virtual worlds should be allowed glass, and things like that. And that doesn't seem, altogether, to be all that impactful on reality. I mean, who cares, you know? But then
when you think about the applications of virtual worlds in contexts beyond entertainment, it does become meaningful. So Simone Niquille, for example, has looked at the rendering engines and the rendering processes that were used to create the simulations of the killing of Trayvon Martin by George Zimmerman. In that case, both the prosecution and the defence produced CGI representations of the event and what happened, and both of them made aesthetic decisions that were informed by the technical processes available to them in producing the render. That then has an impact on the perception of the jury, and then on the execution of justice. So, in the case of the defence of George Zimmerman, they made the scene particularly gloomy: they used a really kind of intense mist, which made it look much darker, gloomier and spookier than the scene probably actually was, and they had rain in it to occlude issues with detail and distance and things like that. So these technical decisions start to result in aesthetics that then start to result in changes to perception.
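The z-depth and object-ID layers of the exploded image described earlier can be sketched as a tiny z-buffer renderer. This is a minimal illustration, not anything from RGBFAQ itself: the two-rectangle scene and all the numbers are invented, and real renderers work on projected 3D geometry rather than flat rectangles.

```python
# A toy sketch of the "exploded image": a renderer producing not one picture
# but separate data layers - a z-depth pass and an object-ID pass - that are
# meaningful to a computer rather than to a human viewer.

W, H = 16, 12
INF = float("inf")

# Each object: (id, x0, y0, x1, y1, depth-from-camera). Object 2 sits
# further away and is partly hidden behind object 1 where they overlap.
objects = [
    (1, 2, 2, 10, 8, 5.0),
    (2, 6, 4, 14, 10, 9.0),
]

# Utility layers, initialised to "nothing visible here".
z_depth = [[INF] * W for _ in range(H)]    # distance of the nearest surface
object_id = [[0] * W for _ in range(H)]    # which object owns that surface

for oid, x0, y0, x1, y1, depth in objects:
    for y in range(y0, y1):
        for x in range(x0, x1):
            if depth < z_depth[y][x]:      # classic z-buffer visibility test
                z_depth[y][x] = depth
                object_id[y][x] = oid

# Where the rectangles overlap, the nearer object (1) wins the pixel.
print(object_id[7][7], z_depth[7][7])    # prints: 1 5.0
print(object_id[9][12], z_depth[9][12])  # prints: 2 9.0
```

In production renderers, layers like these are emitted alongside the finished "beauty" image as render passes, precisely so that compositors and other software can work with the machine-legible data rather than the human-legible picture.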
And the other big thing in the work, of course, is the idea of standardisation. That is a hugely impactful thing that we see in all forms of technology. From the very beginning of computer graphics, we start to see the development of standard algorithms and processes that make future computer graphics work easier, but that also privilege certain ways of doing stuff. This is something Alan has looked at a lot in his other work as well, particularly 'Fairytales of Motion'. And that has significant implications, because we're still at a stage, I think, where computer graphics aren't quite fully standardised - unlike, for instance, desktop software, or apps and the sort of interoperability of apps. We're not yet at a point where, for the average consumer or user, the standard formats, interoperations, protocols and methods of working with, in or around computer graphics are sort of locked down. There are a lot of competing standards, some of them more popular than others, but it hasn't been what Gilbert Simondon would probably call concretised.
What Bijker would call normalised. We're not at that stage yet, but the decisions people make about what they consider to be more important - is it about the qualities of light, is it about certain types of motion, is it about certain bodies being represented over others - are going to be really important. For another example, Ted Kim has done some really great work looking at the privilege in research, and the attention given to accurately and expediently simulating white skin - young white skin in particular - as opposed to other skin colours, as well as, again, the emphasis put on straight hair as opposed to curly or kinky hair in the science of hair simulation. Those decisions about where we put the research attention, and what kinds of protocols and algorithms are baked in and made most important, have significant implications, because they make it easier for someone further down the line - a new user, say, or a student who's messing around with CGI - to default to rendering a white person, because those protocols,
those algorithms and those systems have been standardised and normalised into the software. And we're very close to getting to that point, the point where it's too late to change it. So Alan's work becomes increasingly important there, because he's unpicking the rationale for how we got here. How did we get here? And the next question is: what do we do next? You know? That's where critical practice starts to come in. That's where, I like to think, I start to maybe have an impact - where we start not just to question things, but actually to propose alternatives and to speculate on alternative ways that computer graphics might be done. So yes, RGBFAQ. Bloody fantastic. Just like all of Alan's work: a brilliant narrative exploration of a series of complex entanglements that are often very difficult for us to grasp in our everyday experience of computer graphics. Most of the world is now computer graphics. Most images are computer graphics. Alan references Deborah Levitt at the beginning. She speculates that the photographic image, the cinematic image, was only a temporary blip in the history of image making
by humans; that we've most often worked with synthetic images, and that computer graphics are just an extension of painting into a new form, as old as that. So these are really important questions. Go and see it. Go and bloody see it. Enjoy it. It's fantastic. I'm very jealous.