Designers are cutting their marks on what will seem like an insane sentient garment, one which lives on and in the surfaces of our future ruins. This clothing combines different kinds of artificial intelligence, embedded industrial sensors, very noisy data, tens of millions of metal and cement machines in motion or at rest, billions of handheld glass-slab computers, billions more sapient hominids, and a tangle of interweaving model abstractions of inputs gleaned from the above. A furtive orchestra of automation is amalgamated from this uneven landscape, one capable of unexpected creativity and cruelty: an inside-out cave we may call, after Stanislaw Lem’s ocean of Solaris, the plasmic city. (The “Smart City” is a different prospect. It employs similar tools, but dreams of municipal omniscience and utility optimization. Within this new garment, modern urban programs drawn by the cycles of residence, work, and entertainment of earlier eras are re-sorted; for the Smart City, however, they are reified and reinforced, misrecognized as controls when they are actually variables.)

An interstellar ocean of subconscious fear-exploiting goo in Solaris by Andrei Tarkovsky, 1972.

I would like to consider the wearability of this garment as a kind of skin. Given that so much urban-scale machine sensing extends or allegorizes vision, it may seem odd to focus on the skin, but it is our largest sensory organ. We have extended synthetic vision and synthetic audition, but modern media has done far less to augment epidermal sensation (though much more of late).1 Still, technologies of skin are part of what humans are. Instead of evolving new skins as we migrated, we honed techniques for making special-purpose temporary skins, suited for heat, cold, or underwater, for ritual dramas and camouflage, or to signal roles. The presentation of self and the sexual selection dynamics that ensue rely heavily on the local semiotics of how we interpret these artificial skins, and so we have a global fashion and textile merchandising industry. On a more functional level, synthetic skins modulate our environments, tuning them toward the well-tempered.2 But it is not just us. Urban sensing lets the surfaces of the city sense its environment as well (who, what, where, when, how?). In turn, urban-scale artificial intelligence depends less on “AI in a Petri dish” than on AI in the wild, feeling, reacting to, and indexing its world.3 As a different and more literal connotation of “distributed cognition” takes form in this way, the already contested line between world-sensing and information-processing gets blurrier.

Tracing what is a prosthesis of whom is open to more perspectives than master-control chains of command.4 As we wear our skins on our bodies and as our buildings, held under an atmospheric skin in waves of foam (as Sloterdijk would have it), that naturalized arrangement is disturbed by how urban sensing seems to approach proto-sentience. A person is not only a Vitruvian actor at some phenomenological core who wears the city; he or she is worn as well. We are also the skin of what we wear. The garment being cut and sewn is not only for us to wear; the city also wears us.

Urban Sensing and Sensibility

I mean to be descriptive, not predictive, so before considering any sensing and sensation to come, we map the sensing we have. What is most easily called artificial intelligence is based not on an accumulation of raw inputs but on patterned impressions drawn from those inputs. Any functional intelligence, however, is defined by its ability to act upon its world, and its ability to act is construed by what and how it can sense that world and itself within it. There is a particular and perhaps peculiar affect theory for machines to be unwound over the coming years.

For example, driverless cars are emblematic of big heavy machines sensing/learning in the streets. Their proprioceptive sensors include wheel speed sensors, altimeters, gyroscopes, tachometers, and touch sensors, while their exteroceptive sensors include multiple visible light cameras, LiDAR range finding, short- and long-range RADAR, ultrasonic sensors on the wheels, global positioning satellite systems/geolocation aerials, etc. Several systems overlap between sensing and interpretation, such as road sign and feature detection and interpretation algorithms, model maps of upcoming roads, and inter-car interaction behavior algorithms. Along the gradient from fully to partially autonomous, the humans inside provide another intelligent component that may be variously copilot or cargo, and together they form a composite User ambling through the City layer of The Stack.5
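To make this anatomy concrete, here is a minimal sketch of how such a sensor taxonomy might be organized in software; it is illustrative only, and every class, field, and reading below is hypothetical rather than drawn from any actual vehicle platform.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class SensorKind(Enum):
    """The two broad classes distinguished above."""
    PROPRIOCEPTIVE = auto()  # the machine sensing its own state
    EXTEROCEPTIVE = auto()   # the machine sensing its environment


@dataclass
class Sensor:
    name: str
    kind: SensorKind
    # Latest reading, deliberately untyped: real systems carry point
    # clouds, images, scalars, and covariance estimates through here.
    reading: object = None


@dataclass
class VehicleSensorium:
    """A composite body of sensors, queried as a single organ."""
    sensors: list[Sensor] = field(default_factory=list)

    def of_kind(self, kind: SensorKind) -> list[Sensor]:
        return [s for s in self.sensors if s.kind == kind]


# Populate with a few of the categories named in the text.
car = VehicleSensorium([
    Sensor("wheel_speed", SensorKind.PROPRIOCEPTIVE),
    Sensor("gyroscope", SensorKind.PROPRIOCEPTIVE),
    Sensor("lidar", SensorKind.EXTEROCEPTIVE),
    Sensor("long_range_radar", SensorKind.EXTEROCEPTIVE),
    Sensor("gps", SensorKind.EXTEROCEPTIVE),
])

print([s.name for s in car.of_kind(SensorKind.EXTEROCEPTIVE)])
```

The design choice worth noticing is that the composite, not any single device, is the unit of sensing: the “User” of the paragraph above is this aggregate plus whoever rides inside it.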

But the sensing and thinking systems are located not just in the valuable subjects and objects rolling around; they are built into the fabric of the city in various mosaics. Because how a sentient city thinks is inextricable from how a sentient city senses, a good catalog is less a litany of objects in a flat ontology, or the feature set in a new model technology, than an anatomical index of the interlocking capacities and limitations of an incipient machinic sensate world. The distributed body includes not only automotive sensors, but also digital component sensors, flow sensors, humidity sensors, position sensors, rate and inertial sensors, temperature sensors, relative motion sensors, visible light sensors and recording “cameras,” local area and wide area scanners, vibration sensors, force sensors, torque sensors, water and moisture sensors, piezo film sensors, fluid property sensors, ultrasonic sensors, pressure sensors, liquid level sensors, and so on. From a more panoramic vantage, remote sensing systems in low Earth orbit interlace with terrestrial networks to draw data up and down in turn. Remote geosensing may observe bodies of water, vegetation, human settlements, soils, minerals, and geomorphology with techniques including photogrammetry, aerial photography, multispectral systems, thermal infrared sensing, electromagnetic radiation, active and passive microwave sensing, and LiDAR at different scales.6 While many of these have been part of cities, factories, and geographies for decades, their integration into the landscape by standardized computational protocols and networks (by conventional Internet of Things models, or otherwise) means that domain-specific and more general artificial intelligence has a path out of the laboratory and toward metropolitan-scale evolutionary robotics. How are they to be worn?

LiDAR vision from a Toyota self-driving car using Luminar technology, 2017.

Wearability

Wearable computing, as a domain of consumer electronics, is embryonic at best. Today the term refers to smart watches and sensors that monitor heartbeat or glucose levels in sweat, or blinking lights on clothes triggered by sequencing software. Not very inspiring stuff so far. In time, however, as microelectronics and signal processing layers shrink and become more energy efficient, the expanded sector of “wearables” may become predominant, just as mobile computing took leave of desktop computing. Of more interest to us is how the miniaturization and flattening of system profiles may allow them to cover many different kinds of skins: animal and vegetal skins, architectural skins, machine skins, etc. Any surface is potentially also a skin, and its sensitivity is open to design. The sensor arrays that outfit those driverless cars, for example, will evolve, combine, and specialize further. Descendants of these arrays may cover other machines, in motion or at rest, familiar or unfamiliar. Wearability, then, is not just for human users, or even only bodies in motion, but for any “user” that has a surface.

Just as what counts as a skin changes once the sensory capacities of a surface are made more animate, what counts as “wearability” changes as diverse skins are augmented by shared sensors. That is, the flexibility and ubiquity of these sensors is also a function of the platformization of components and sub-components across applications, and the distribution of the same or similar sensors across unlike surfaces means that very different kinds of bodies share the same sensory systems. A version of a sensor stuck onto a mammal’s skin may be derived genealogically from one on an assembly line, and if we take seriously the implications of technical evolution, then this blurring and blending of sensors across different dermal surfaces stitches cyborgs together as much as the inter-assembly of organs does.
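A minimal sketch of that platformization, assuming nothing beyond a shared calling convention: the class and method names are hypothetical, but the point survives translation, namely that the polling logic cannot tell which kind of body is wearing the sensor.

```python
from typing import Protocol


class SkinSensor(Protocol):
    """One platformized component, worn by very different bodies."""
    def sense(self) -> float: ...


class AssemblyLineMount:
    """The same strain gauge, bolted to a conveyor."""
    def sense(self) -> float:
        return 0.72  # placeholder reading


class MammalSkinPatch:
    """...or adhered to a wrist."""
    def sense(self) -> float:
        return 0.31  # placeholder reading


def poll(surfaces: list[SkinSensor]) -> list[float]:
    # Indifference to the wearing body is the "platformization"
    # described above: unlike dermal surfaces, one sensory system.
    return [s.sense() for s in surfaces]


print(poll([AssemblyLineMount(), MammalSkinPatch()]))
```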

However, today’s recommended uses of wearables are trained on banal key performance indicators and the optimization of functions that may have been derived from waning social contexts. The potential of wearable computation considered widely is not this auto-managerialism, but the flowering of unforeseeable biosemiosis between users now capable of sensing and being sensed by one another in strange ways. These may be one-off experiences, which remain isolated and unthought fragments, or they may cohere into more profound processes around which we decide who we are.

In the meantime, our conventional understanding of our own skin will drive and curtail what the expanded scope of synthetic skins/wearable computing is asked to do. But sooner rather than later we may encounter phenomena for which we do not have sufficient words (just as we have such an incomplete language for pain, the glossary of touch is mute), and the skin we live in now will be made new again by new terms.

To Be Clothed

Clothing is already a synthetic skin, and its functions are not only thermal regulation and protection against abrasion, but also the communication of significant subcultural information about who we are, not only what we are. It is not simply that red clothes mean one thing and blue another; through its incredibly nuanced semantics, fashion produces temporary phenotypes that signal to one another within the twists and turns of hypercontextual references: the seasonal formality of the hem, the size of a collar, the drabness of a green, the obtuseness of a brand/band on a t-shirt, and the volumetric ratio of spheres that comprise a necklace that may or may not also connect to a triangular fold that exposes only so much of a shoulder. Social dynamics are not only represented or performed by this plastic semiotics; they are directly and immanently calculated by it.7

We are far from the only animal to do these sorts of things, and different paths draw in other forms of distributed cognition.8 While we developed synthetic skins, other animals evolved more complex natural skins capable of incredible feats of signification. Cuttlefish, for example, use chromatophores in their skin to dazzle prey, to hide from predators, and to communicate with other cuttlefish. The same reaction may serve different ends depending on the context of presentation. (While crows do seem to have a practical theory of mind, we do not presume that cuttlefish are able to imagine what their skin may look like to another organism, and so to call their shimmering “performance” is probably inaccurate. If so, what then do they see in and as one another?) Importantly, the intelligence is in the skin itself. Cuttlefish chromatophores and iridophores instantaneously modulate to produce dazzlingly complex patterns that correspond to isomorphic neuronal patterns. As skin and brain are bound up into direct circuits, we may say that the membrane’s incredible animations are as much a nervous reaction as a cognitive one. The lesson from cuttlefish for how we should imagine a rich ecology of urban-scale AI is profound. It is not the aloof central processing brain of Godard’s Alphaville; it is something far more distributed and far less Cartesian. The intelligence is in the skin, and the urban sensing regime on whose behalf we design may be something like a topography of post-cuttlefish drawn from a Lucy McRae project.9 But beyond hyperstitional provocation, what about the nuts-and-bolts engineering of sensing and sensation? At what scale does it start?

Squid camouflaging patterns (chromatophores). All rights reserved.

Everything is a Chemical

All economics is ecological economics. It should go without saying that design does not float as some virtual layer on top of a given nature. Some design philosophies understood this long ago, and the history of post-Asilomar biotechnology is adorned with conjectural biodesign concepts, narratives, and diegetic models, and these inform debates by which the ethical, ecological, and political implications of these technologies are considered.10 Biotechnologies are controversial along regular political fault lines, and yet across these, concerns are sometimes possessed by afterimages of creationism. By that term I do not (necessarily) mean the belief that everything in the world was created by a monotheistic agent. It is rather a more diffuse sense that the order of the world is not only a dynamic adaptive system, but a special text in which instantiations of metaphysical essences appear to us. Furthermore, this sensibility holds that the order is best served by not contaminating those forms (the theologically inspired taboo on scientific agriculture evangelized by, for example, Vandana Shiva), or by denying that fundamental perturbations of the system are really even possible (the theologically inspired denial of anthropogenic climate change evangelized by, for example, Sen. James Inhofe).11 These positions are often accompanied by admonitions against humanity’s hubris and overreach. I see it quite differently. What is at stake for biodesign has less to do with control (real or imagined) over nature, and whether that is good or bad, than with demystifying the royal human body back into material churn, and with locating the designing subject as a form of matter acting on the matter it inhabits. In this mode, the limiting foundation of design is chemical.

How so, and how to? Consider the Nanome project that we helped develop at D:GP at the University of California, San Diego. It is a set of VR-based scientific modeling and design tools, including CalcFlow and NanoOne.12 In short, you use virtual math to make virtual physics, which you use to make virtual chemistry, which you use to make virtual biology. Biotech and drug discovery are the first trial applications, but providing easier ways to visualize math as a building block of molecular modeling has more fundamental implications.13 As with many other complex design software packages, we see the integration of machine learning systems to augment and extend form-finding gestures, and in this case we see the accumulation of design queries and solutions also used as training data for biotech research AIs.14 That is, the interface layer for the human user (a means to map, model, and simulate material processes) is the input layer for the AI (a pattern of inquiries, both inductive and deductive, that structure the search space for the machine learning system). In this sense, synthetic biology may be seen as a genre of applied artificial intelligence. Together these may support important breakthroughs (some day: industrial-scale synthetic photosynthesis and individual genome-tailored drug therapies on demand, etc.) and make the “culinary materialism” of biochemistry more available to popular design/hacker initiatives (hopefully a good thing).15 In fact, the former may prove only to be possible because of the latter.
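As a sketch of that doubling, consider how a single design gesture might be recorded so that the human interface event and the machine learning input are literally the same datum. The function and field names below are invented for illustration and reflect nothing of Nanome’s actual telemetry.

```python
import json
from datetime import datetime, timezone


def log_design_query(action: str, params: dict,
                     log_path: str = "queries.jsonl") -> None:
    """Append one human design gesture to a corpus that can later be
    replayed as training data for a research model."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g., "dock_ligand", "fold_segment"
        "params": params,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")


# The same event drives the interface for the human user and
# extends the input corpus for the machine learner.
log_design_query("rotate_molecule", {"molecule_id": "demo", "degrees": 15.0})
```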

We think we know quite a bit about animal intelligence and plant intelligence, but AI at urban scale is for the most part a mineral intelligence. Metals, silica, plastics, and the information carved into them by electromagnetism form the material basis (but not entirely, as I will consider below). In turn, artificial intelligence is a genre of applied inorganic chemistry. Emphasizing the sensory inputs that locate any AI in its own kind of world, we see that this mineral footing does not withdraw it into some arid vacuum away from the wet, hot, thermodynamic flesh of the world; quite the contrary. If, as the Russian Cosmist Nikolai Fyodorov surmised over a century ago, we are the material folded just so, through which the Earth thinks itself, then such folds are available to different sorts of matter as well, including the mixture of organic and inorganic compounds that comprise urban-scale AI sensing/thinking systems.

Solar panels, Neom project, Saudi Arabia, 2017. Excerpt from a promotional video available on discoverneom.com

The Persistence of Models

In trying to pinpoint where artificial intelligence can or cannot be located in this folding, the task of defining practical relationships between sensing and thinking comes to the fore. Durable threads from the debates between Hume and Kant re-enter here: how (and finally whether) the sensorium of empirical observation relates to a “transcendental” frame that gives moral coherence, wider deduction from what is sensed into reflective judgment, and ultimately phenomenological interiority. For purposes of AI urbanism, we may invoke this foundational division in modern European philosophy provisionally and perhaps only analogically, but at what point must the inorganic chemistry project of engineered sensation possess something like a “frame”? Or, could it congeal or graduate into possessing one, and if it did, how would that shift how we draw such frames in the first place?

Alongside Reza Negarestani’s cartographies of inductive and deductive epistemic modalities, we may qualify different species-genres of artificial intelligence according to their relative reliance on either end of this spectrum: input-rich/model-poor (inductive) versus input-poor/model-rich (deductive). Broadly, we may say that older Good Old-Fashioned AI based on symbolic logic relied on more deductive means, through the formal construction of models of a given problem space based on understandings of local and intermediate scales of cause and effect within that space. In principle, if such an AI were to encounter a real-world version of that problem space, it would deduce what to do next by the application of generic logic to the specific instantiation. For many well-known reasons—from insufficient data and processing resources to the adaptive limitations of logical symbolization—these methods have fallen out of favor compared to more inductive approaches. For example, deep learning systems based on artificial neural networks build functional responses to input corpora, limning vectors into recognizable outputs. For such systems, functional response to inputs can be achieved without the system producing anything like a recognizable formal “model” of the problem space.
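The contrast can be made concrete in a few lines. Below, a hand-written rule stands in for the deductive, model-rich style, while a single trained neuron (which, note, needs both weights and a bias term to find its pattern) stands in for the inductive, input-rich style; the braking scenario, thresholds, and corpus are invented for illustration.

```python
# Deductive, model-rich: an explicit rule encodes the problem space.
def symbolic_brake(distance_m: float, speed_ms: float) -> bool:
    return distance_m / max(speed_ms, 0.1) < 2.0  # hand-written model


# Inductive, input-rich: a neuron learns a response from labeled examples.
def train_neuron(data, lr=0.1, epochs=2000):
    w1 = w2 = b = 0.0  # weights and bias, adjusted on each error
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = 1.0 if (w1 * x1 + w2 * x2 + b) > 0 else 0.0
            err = target - pred
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b


# Toy corpus: (distance_m, speed_ms) -> brake? The labels are given,
# not derived; the system never forms an explicit model of "braking".
corpus = [((1.0, 2.0), 1.0), ((50.0, 2.0), 0.0),
          ((5.0, 10.0), 1.0), ((80.0, 5.0), 0.0)]
w1, w2, b = train_neuron(corpus)
print(symbolic_brake(5.0, 10.0))       # True, by the stated rule
print((w1 * 5.0 + w2 * 10.0 + b) > 0)  # True, by learned coefficients
```

The rule is legible but brittle; the neuron answers correctly without containing anything one could point to as a formal “model” of braking.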

However, we cannot only look for such frames in AI systems abstracted from real-world implementation. While the opacity of deep learning processes does suggest interesting and alien forms of “thought,” as practical apparatuses of urban infrastructure our AI systems are not without explicit or implicit human cognitive bias, positive or negative. Drawing on a different connotation of the term, you do need weights and bias in an artificial neural network to find evidence of a particular pattern. But the organization of input data into a useful corpus is itself informed by at least several models, including cultural models, that are necessarily full of apophenic errors and pathologies. By one view of this system, the (cultural) model that would structure input data is external to the deep learning system; by another, the whole apparatus and operation must be seen as at least interconnected and co-constitutive, and more likely as part of a dynamic composite that mixes hominid semiotics with machinic cognition (Turing Test either/or filters do not work here either). The small and large infrastructures that thread through the plasmic city are always a cyborgian cognitive assemblage; they draw upon models of the world that are encoded into one sequence even as they are subtracted from another. Models are mobile, slippery, usually unaccountable even to themselves. That is, even while the beauty of deep learning systems is in how their hyperinductive processes yield results that often do not (or cannot) match our own models of how we think that we think, the “external” composition of what is relevant input data for the desired output is already internalized into their operations. As would be expected, and as has been shown, explicit and implicit bias in training data (“What is risk? Whose face is risky?”) is not only reflected in outputs but is synthesized and amplified, and often then shielded by veneers of false objectivity.16

In the Field

Whether ultimately this garment cloaks urban ruins or a new rationality of wilderness is a matter of composition, not prediction. Even as AI urbanism is a reflection, it is also a departure, and it would be a dire mistake to forestall the latter by preoccupation with the former. Or, more precisely, we should not only see ourselves in the reflection. We may describe ubiquitous computing not only by the introduction of information media onto surfaces, but also by how it draws upon and manipulates information that is already there. In theory and practice, its ubiquity may extend deep into the material substrate of things and across irregular distances. Long before modern computing, or even the appearance of humanlike creatures, evolution was drifting away from primordial entropy and toward biochemical heterogeneity and nested diversity. “Information” has been understood as the calculus of that world-ordering, as seen in patterns of genetic encoding and transmission, organism morphologies, transversal contamination and symbiosis, intraspecies sexual selection, interspecies niche dynamics, displays and camouflages, and various sorts of signaling across shifting boundaries.17 Information, in this sense, may be less the message itself than the measure of the space of possibility by which mediation is possible in a given context.
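The essay names no formula, but Shannon’s entropy is one standard way to make that “measure of the space of possibility” computable, and ecologists already use it as a diversity index; here is a minimal sketch with invented census numbers.

```python
from math import log


def shannon_entropy(counts: list[int]) -> float:
    """H = -sum(p_i * log2(p_i)): how large a space of possibility
    a distribution of forms still spans, measured in bits."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * log(p, 2) for p in probs)


# Toy census: a diverse plot versus one collapsing toward monoculture.
diverse_plot = [25, 25, 25, 25]   # four forms, evenly spread
collapsing_plot = [97, 1, 1, 1]   # one form crowding out the rest

print(shannon_entropy(diverse_plot))     # 2.0 bits: maximal for four forms
print(shannon_entropy(collapsing_plot))  # ~0.24 bits: possibility collapsing
```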

Now, as we stare down the cliff face of a sixth great extinction, information is also a measure of that collapsing diversity. The mad cycle of hydrocarbon extraction, from its instantaneous fabrication into fleetingly ordered form (a plastic this, or a plastic that) to the transfer of these into waste flows that cannot be metabolically reabsorbed quickly enough, is, among many other things, an informational figure (and disfigure).18 That said, any ethics of maximum informational diversity that we would hope to underwrite ecological economies would be qualified by the functional role of standardization that allows encoded signification to become communication. Consider, for example, how the recycling of carbon atoms means that as organic life decays it also lives again in different form, or how the common signatures within secreted enzymes mean that stigmergic communication within an ant colony will sustain its organization, or how a shared range of vision within the light spectrum may make camouflage possible, and how the common semiotic references between sender and receiver set any culturally complex symbolic economy in motion, and so on. Design must include the deliberate introduction of both channels of translation and integration as well as regulatory boundaries that enforce existing differences or even cause new ones. In other words, a design philosophy informed by an ethics of ecological information cannot elevate deterritorialization above territorialization or vice versa.

It is with that serious caveat that we scope the enrollment of augmented environments into programs of AI urbanism. Processes described by formal biosemiotics—relations between parasites and hosts, flowering plants and insects, predators and prey, etc.—are not only things about which AI may know; the organisms involved may also be directly outfitted with technologies of synthetic sensing and algorithmic reason. The presumption that, of all the information-rich entities in the world, the hominid brain should be the primary if not exclusive seat from which prostheses of AI would extend is based in multiple misrecognitions of what and where intelligence is. In such a circumstance, intelligence does not only radiate from us into the world; it already is in the world, and in the form of information (which is form) it is the world.

Environmental monitoring and sensing systems can describe and predict the state of living systems over time but usually cannot act back upon them. They are sensor-rich and effector-poor. By way of a provisional conclusion, I advocate that technologies that augment the capacities of exposed surfaces, whole organisms, or relations between them should extend deeply into the ecological cacophony. Yes: not only training data from plants, but augmented reality for crows, and artificial intelligence for insects. Far from command and control, altering how different species sense, index, calculate, and act upon their world may introduce chaotic results (if some people are concerned about the cascading effects of merely modifying rice to make it rich in Vitamin A, we can assume there will also be pushback on TensorFlow-compatible ants, trees, and octopi). The picture I draw is less one in which the AI supervises those creatures than one in which they themselves inform and pilot diverse forms of AI on their own behalf and in their own inscrutable ways. We should crave to learn what would ensue. The insights of synthetic biology as a genre of AI, and of AI as a genre of inorganic chemistry, mean little if the cycles of cybernetics are monopolized by humans’ own errands. The city will also wear us.