In what follows I specify some aspects concerning the philosophical foundations and systematic structure of a ‘functionalist computationalist’ theory of mind, taking as a point of departure some considerations elaborated by Reza Negarestani in his text Revolution Backwards, Functional Realization and Computational Implementation.1 In the first section, I address some of the methodological quandaries and approaches through which functionalist theories of cognition and computationalism become coordinated, particularly in light of the wide dissemination of information theory across different fields of scientific and philosophical investigation, as an integrative lever for understanding the connections between Matter, Life, and Thought. I briefly focus on two computationalist accounts: the naturalist, evolutionary, genetic computationalism pursued by Eric Baum in What is Thought?2 by way of an application of the principle of Occam’s Razor, and the post-Sellarsian approach pursued by Negarestani’s ‘computational inhumanism’, which rekindles the project of a transcendental investigation into the forms of cognition.
In the second section, I provide a rudimentary sketch of a functionalist theory of sapient cognition, drawing and combining aspects from these different approaches to elucidate some general aspects concerning the pragmatics of Thought in its stratified complexity and development. I build upon three fundamental sources on whose basis the realization of a functionalist theory of mind through a pragmatics of cognition is rendered intelligible:
• The pragmatic foundations for a functionalist account of cognition elaborated by Wilfrid Sellars, particularly in his 1954 paper Some Reflections on Language Games,3 which he gives in the context of an inferentialist account of conceptual and discursive practice for representational systems. I succinctly schematize and expand upon Sellars’ construal of sapience in terms of the functional binding of perception, inference and action.
• The pragmatic-expressivist elaboration of the inferentialist project proposed by Robert Brandom, following particularly his 2008 paper How Analytic Philosophy has Failed Cognitive Science,4 in which he proposes to coordinate a ‘hierarchy of semantic content’ with the development of cognitive capacities defining conceptual-linguistic behavior in its progressive complexity, thereby supplementing the account of syntactic functional complexity offered canonically by Chomsky and his successors.
• The systematic elaboration of a formal account of the practice of theory-formation proposed within Lorenz Puntel’s ‘global systematics’, particularly in his work Structure and Being: A Theoretical Framework for a Systematic Philosophy.5 Puntel’s account of theory construction allows us to supplement the Brandomian semantic and pragmatic hierarchy by distinguishing capacities of formalization and theory-construction, in virtue of which sapient systems not only think a formalized ‘nature in mathematical language’, but represent and abstract the generic functional architecture of Thought in its procedural-conceptual dimension, in abstraction from its causal-material constitution. This enables a synthetic practice of theory which systematically binds different ontological and discursive domains, within which the nature of the relation between Thought and Being becomes epistemically tractable.6
I – Information, Function, Computation
Given the rather varied ways in which functionalism and computationalism can be articulated in a theory of cognition, methodological and philosophical clarity is in order. For before we address the consequences or prospects of implementing intelligence in ‘artificial’ mediums, we must first interrogate the conditions under which cognition can be said to have been theoretically specified, which is at once a question about how we understand cognition and about what cognition is. Put simply, our theoretical understanding of cognition is methodologically prior to an investigation into the consequences of possible interventions into, or the engineering of, cognition.
With this in mind, and in order to define the general conceptual conditions that a functionalist computationalist account of mind entails, Negarestani7 proposes the following distinctions:
• First, a functionalist account of mind is one which explains the behavior of mindful beings in terms of the functional roles expressed by the activity of intentional systems, in at least three senses of varying scope and generality:
1. Metaphysically – In terms of the selection and purpose-attainment routines which holistically express the causal organization of a system’s parts according to specifiable selection criteria relating those parts to ‘the whole’, so as to globally fulfill specifiable aims or functions.
2. Epistemically-semantically – Distinguishing causal-informational relations from the semantic content expressed in the logical-conceptual roles exhibited in the behavior of cognitive systems, particularly the inferential roles that define the modally rich semantic and pragmatic proprieties of conceptual behavior in sapient creatures.
3. Engineering – Interrogating how the functional routines which define the mind could be realized in relation to, or in isolation from, the metaphysical constitution and contingently specified epistemic-semantic proprieties which determine the causal and conceptual functional architecture of existing sentient and sapient systems.
• Second, any such account is computationalist just insofar as the functional organization of mind allows for computational characterization, in one of two senses:
1. Intrinsically – In which the functional routines of the system instantiate computations which are specified irrespective of a characterization of ‘the semantics of utility’, encoding causal information without the output states being overtly instantiated by the system.
2. Logical-Conceptually – In which the system overtly instantiates processes that map its inputs to definable output-states or aims.
It is clear that here the distinction between intrinsic and logical-conceptual (algorithmic) computation is, for Negarestani, coordinated with the distinction between metaphysical and epistemic-semantic functional characterizations of mind, respectively. We shall return to this issue below, but for now we can say that while the functional criterion of cognition tells us that thought is to be defined not in terms of some intrinsic property or ‘mind stuff’, but as the capacity of a system to organize its resources and behavior purposefully, the computational criterion tells us that such functionality is to be understood as a kind of algorithmic process or effective procedure. Within the spectrum of functionalist accounts of mind, one thus discerns a variety of approaches in relation to the possible scopes and aims of the investigation to follow and the account of mind implied thereby:
“Rational or normative functionalism with structural constraints (Sellars […]), strongly mechanistic / causal functionalism (Bechtel […]), rational functionalism with a level of algorithmic decomposability (Brandom […]), normatively constrained functionalism with intrinsic computational elements (Craver […]), strongly logical functionalism with algorithmic computationalism (classical variations of artificial intelligence), causal functionalism with intrinsic computationalism (Crutchfield […]), weak logical functionalism with intrinsic computationalism and strong structural constraints (artificial intelligence programs informed by embodied cognition) and so on.”8
A functionalist theory of mind is continuous with pragmatics, i.e. an account that specifies the roles, “activities and doings” in terms of which it is said that “…the mind is what it does”.9 Whether such functional specification falls under the scope of metaphysical or epistemic investigation presupposes an understanding of the concept of functional explanation in general, not only as a possible way to characterize the mind, but more fundamentally as a way to frame the potentially multifaceted mediations between Being and Thought. The choices separating functionalist paradigms ultimately reveal difficult methodological issues concerning just when cognition is ascribable, and what we mean when we give a functional characterization of cognition or indeed anything else. It is within the scope of these questions that the alignment of functionalism with computationalism becomes intelligible, not only as a pragmatically tractable research program, but as a conceptually coherent theory of mind.
Take the concept of function in its possible application within the scope of metaphysical theories of mind: one is immediately obliged to explain what it means to say that the causal patterns, selection mechanisms, tendencies and dispositions which organize mental activity are carried out in a purpose-oriented manner. In the intentional mode deployed routinely in exposition, one may say that a system organizes its resources or parts as means in relationship to the fulfillment of the ends of the system considered as a whole, e.g. we say that the heart pumps blood in order for circulation to proceed, giving a functional characterization of the circulatory system. This means that in characterizing the function of the circulatory system in terms of purpose-oriented selections in the intentional mode, we understand the system in analogy with practical reasoning, that is, in analogy with the capacities of conscious systems engaging in deliberative practices, i.e. systems whose behavioral outputs are mediated by inferentially correlated states, entering contexts of normative evaluation with regard to themselves and other systems.10 Such use of analogical postulation is also observed when we attribute cognitive capacities to entities for representing or modeling their environments, drawing an analogy to theoretical reasoning, e.g. we say of the frog that it represents the fly as a light-dot within its visual field, in analogy with the way in which we understand the epistemic statuses attributed to conceptually able systems, when the relevant causal inferences and descriptive claims are taken as true.
Tempted by the surface grammar of such analogical explanatory caprice, in a Heideggerean spirit, one might thereby diagnose the implicit endorsement of a teleological metaphysics or panpsychist-vitalist view, according to which purposes, discursive intentionality, and the expression of semantic content are rather ubiquitous phenomena found at all levels of material organization beneath and beyond the mind. If we sufficiently relax our standards for when this method of analogical postulation is warranted, it is not difficult at all to provide arbitrary functional characterizations of any causal system to explain all sorts of natural and social dynamics, from biochemical reactions, to the cosmic movements of galaxies, to markets, etc. But biting the teleological bullet clearly does nothing but blur the line of demarcation between a characterization of non-intentional systems in intentional terms for heuristic-explanatory or predictive purposes, and the literal attribution of intentional states to systems which lack the conscious capacities associated with conceptual and discursive cognition.
Does this mean that metaphysical talk about ‘functions’ at the subpersonal and suprapersonal level is nothing but a metaphorical way to understand bare causal dynamics? As a response to the teleological dangers, a so-called ‘criterological’ solution appears tempting, according to which one seeks to explain away the functional purpose-attainment routines of mindless systems in terms of bare causal regularities, thus making no appeal to teleological, normative, or intentional terms. So the argument goes, to do so one simply paraphrases locutions like “The heart’s function is to circulate blood” into corresponding causal statements like “The circulation of the blood is a consequence of the heart’s action”, eliminating the intentionality-ridden talk of ‘functions’. However, as Jay Rosenberg reminds us, this reductive alternative is equally problematic: since many things result from the action of the heart other than the circulation of blood, the question arises as to which consequences allow us to identify the functional integrity of a ‘system’ apart from ‘accidental’ causal effects.11 In other words, the purported reduction of functional vocabulary into causal explanations immediately faces the epistemological problem of clearly demarcating the ‘surplus content’ that allows one to discern those correlations that instantiate functional proprieties from those regularities whose consequents are not considered expressions of a system’s ‘proper function’.
This line of questioning gives us a sense of a crucial methodological quandary at the heart of functional characterizations which make liberal use of intentionally vested locutions in order to explain what functions are supposed to be. An unrestrained use of the analogy that does not address these methodological quandaries must necessarily obfuscate the distinctiveness of sapient theoretical and practical behavior, either by liberally using deontic vocabulary to model nature writ large, or by failing to explain the precise wedge between causal and semantic function. For even if we might provide a functional characterization of, say, iron rusting in water, the explanatory leverage one gains thereby is dubiously informative, at best. If functional attribution does purport to express something about the organizational dynamics of material systems at levels that extend above and below the capacities of sentient and sapient intentional agency, then the question becomes how to characterize these functional dynamics in a non-teleological and non-intentional way.
From the start, when we ask “what is thought?” we thus face the demarcation question about just where to draw the relevant lines between semantic content and bare physical processes. And this issue is in turn inextricable from an integrative constraint which determines the emergence question about how to explain the development of sapient Thought from sentient Life, and the latter in turn from ubiquitous physical processes that constitute the order of Matter. If the emergence question is bound to the naturalist constraint to explain cognition as an emergent phenomenon in the evolution of organic life from pre-sapient capacities, whatever the future of organic intelligence might be, then the demarcation question asks about the generic functional characterization of cognition, unbound from the specific way in which it is instantiated in a particular organic medium or host. It is this last, abstract characterization of cognition that, for Negarestani, maps the generic properties of sapience which would have to be implemented in any system whatsoever, natural or artificial, for it to count as conveying intelligence proper. Yet the question about how to characterize functional attribution and explanation causally or logically remains a propaedeutic task for any functionalist theory of mind.
In response to the Heideggerean detection of an implicit teleological metaphysics encoded in functionalist explanations of causal systems, one might begin by resisting the compulsion to draw ontological consequences from the surface grammar deployed in exposition. If functionalism about mind is to overcome the metaphysical dangers of teleological dissemination, the first step is thus to understand that attributions of functional organization need not be understood analogically in terms of practical or theoretical ‘reasonings’. This means that attributions of functional propriety are not by default to be parsed in the teleological mode, which takes deontic modal-intentional notions as primitives, nor reduced to linear causal dependencies. In short, as far as our account of function is concerned, one might remain a realist about the functional organization of a causal and conceptual system and about the modal structure of the objective world, without being a realist about intentional-normative states.
To return to our example, the causal dependencies that obtain in ‘biological systems’, for instance, become intelligible as we map the causal dynamics of the organism onto a wider and integral explanatory frame, namely the theory of evolution. As Jay Rosenberg writes:
“We explain the circulation of the blood causally by appealing to the action of the heart, but the existence of the heart is explained in turn not by a further appeal to either synchronic or teleological causality but by embedding that question in a broader, diachronic, theoretical context. We explain the existence (now) of hearts (that is, of the organs) by explaining the emergence (by random mutation) and the persistence (by environmental selection and genetic transmission) of creatures with hearts (that is, of the organisms). It is such a diachronic evolutionary account which is in fact unperspicuously encoded by the teleological vocabulary: “The heart exists in order to circulate the blood”. The sought “surplus content” is an implicit appeal to the contributions of blood circulation to the biological integrity and adaptability of organisms so structured (efficient internal transport of oxygen and nutrients, thermal homeostasis, and so on) an appeal which becomes both explicit and explanatory in the context of an evolutionary account of the origin and proliferation of organisms possessing such cardio-vascular systems. The theory of evolution shows us how we can fund functional explanations without appeal to explanatory principles different in kind from those structuring causal explanations.”12
Functional proprieties can thus be formalized and characterized to represent tendencies, patterns or regularities which encode objective modal invariances and embedded causal dependencies not only in a linear or synchronic manner, but across holistic and diachronic dynamics and levels of organizational complexity, a point to which we shall return in the last section. Corresponding explanatory routes are thus open to different descriptive-scientific and logico-conceptual registers, in accordance with the specific kinds of causal or inferential relations that they natively map, decanting functional attribution from teleological contamination. And it is important to note that this is not necessarily to endorse a reductionist position about intentional attitudes or deontic vocabulary, but simply to adopt a more nuanced and precise separation of the roles that functional explanation plays in metaphysical and epistemic endeavors, answering to the methodological demands imposed by the demarcation and emergence questions. In any case, endorsing a metaphysical functionalism to explain causal dynamisms, wherever these are said to obtain, immediately raises the question of which vocabulary or vocabularies afford such an ‘objective modal realism’, one which avoids taking intentional attitudes as primitives.
(a) Information Theory and Naturalism
It is precisely at this juncture that computationalist-functionalist accounts, derived from a generalized information-theoretic frame, become particularly attractive to explain the relations and continuity between Matter, Life and Thought across its metaphysical, epistemological-semantic and pragmatic dimensions.13 The stipulated continuity between computationalism and naturalism binds the procedural formalism of effective procedures and the informational dynamics of communication to a process-metaphysics, providing a modally robust, widely generalizable meta-theoretic framework for the prospects of scientific unification. Indeed, although the cognitivist paradigm had already been successful in displacing behavioristic accounts of sapient agency by the 1950s, and following downstream from the founding work of W. Ross Ashby and Shannon, the recent developments and applications of informational and computational approaches in the fields of synthetic biology, quantum information theory, and computationalist accounts of cognition, have rekindled the promise of unification in unforeseen ways. As a result, information-theoretic accounts have become a lever for inter-theoretical syntheses of varying scope, in virtue of their dynamic explanatory range, to explain causal processes across different empirical domains of study, bridging modally rich natural and social scientific dynamics within a more comprehensive ontological frame.14
On this account, not without some reservation about the somewhat liberal philosophical appropriation of information theory across the board, James Ladyman and Don Ross write:
“Extravagant claims have been made about information because the concept is now apparently indispensable in so many fields of inquiry. For example, statistical mechanics and thermodynamic entropy is often explained in information theoretic terms (Jaynes 1957; Brillouin 1956), genes are often characterized as entities that code for particular proteins, and mathematical logic can be understood in terms of information-processing (Chaitin)… The world is not made of anything, and information is a fundamental concept for understanding the objective modality of the world, for example laws, causation and kinds.”
Given its principled formal extensibility, information theoretic accounts provide something like a new integrative lever with which to understand natural processes, within a new metaphysical paradigm amenable to naturalism. Ladyman and Ross explain this descriptive and synthetic power as part of an ‘ontic structural realism’, in which the formal distillation of structure through logical-mathematical vocabulary replaces the fundamental assumptions of traditional substance-ontologies. Accordingly, they postulate a general correspondence or homomorphism between the modal relations specified by mathematical-logical structures, and the objective modal relations of ‘real patterns’ in material nature. Thought gains traction on Being insofar as theories and their embedded empirical substructures-models represent objective modal relations between real patterns. But empirical substructures must themselves be understood as embedding data models which more primitively represent measured phenomena and the non-modal extensional relations between them. Following Cussins (1990), this representational indexing of measured phenomena into cohesive data models is said to constitute the pre-discursive representational capacities through which systems carry out fundamental ordering-filtering ‘…operations of fixing, stabilizing and maintaining salience of some data from some measurement to another’.15 Functionally specific locators actively filter and sort inputs into a ‘coordinate system’ according to a ‘specific address system’, e.g. the basic operational syntheses which spatially distribute phenomena into an integral simulated space for perceptual-sentient systems. These are, as it were, the ‘forms of intuition’ of a cognitive system.
Information-theoretic language then couches the formal coordination between structures and data models, corresponding to the cognitive integration of conceptual representations of objective modal structures with base pre-conceptual representations of pure patterns.
This is because information theory maps the mathematical-logical functional structure of effective procedures, which would correspond to real process-patterns, as they may generically be instantiated across different empirical domains and data models, in different forms of computational dynamic complexity. This includes not only the logico-conceptual processes of rational deliberation in sapient systems, but also the characterization of the locators and indexing operations by which a cognitive system represents itself and its environment pre-conceptually and pre-theoretically. The malleability of informational construals of processes and functional dynamics stems from their formal austerity and explanatory flexibility: information is simply characterized as correlation, in the sense in which to say that…
“[…] A transfers information about B is to say that there is a correlation between the state of A and the state of B such that the probability of A being in a certain state is not equal to the probability of it being so conditional on the state of B… Computation is a kind of flow of information, in at least the most general sense of the latter idea: any computation takes as input some distribution of possibilities in a state space and determines, by physical processes, some other distribution consistent with that input.”16
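Read literally, this correlational definition of information is directly checkable. The following toy simulation is an illustrative sketch of my own, not drawn from the quoted text; the two-state channel and its noise rate are hypothetical. It verifies that, for two correlated states, the unconditional probability of A’s state comes apart from its probability conditional on B’s state:

```python
import random

random.seed(0)

# Toy channel (hypothetical): B copies A's state with probability 0.9,
# and flips it otherwise (noise).
samples = []
for _ in range(10_000):
    a = random.choice([0, 1])
    b = a if random.random() < 0.9 else 1 - a
    samples.append((a, b))

# Unconditional probability that A is in state 1...
p_a1 = sum(1 for a, _ in samples if a == 1) / len(samples)

# ...versus the probability that A is in state 1 given B is in state 1.
b1 = [(a, b) for a, b in samples if b == 1]
p_a1_given_b1 = sum(1 for a, _ in b1 if a == 1) / len(b1)

# The two come apart: in the quoted sense, A carries information about B.
print(round(p_a1, 2), round(p_a1_given_b1, 2))
```

If the noise rate is raised to 0.5, the two probabilities coincide and, on this definition, no information flows between A and B.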
In any case, once information is thereby used to interpret the fundamental processes of physical Matter and the progressive elaboration of genetic-organic Life, but also the higher-order socially distributed capacities of sapient Thought, informational characterization becomes a medium not only for local-empirical description, but for metaphysical synthetic-systematic unification. Ascriptions of computational function then become, while not arbitrarily licensed as in the teleological mode, at least widely applicable across domains and scales. Following the general lines of such a synthetic, naturalist account, and fleshing out the diachronic-evolutionary dynamics of the physical processes leading to the genesis of Life, George Church has recently proposed to distinguish between different levels of ‘problem solving’ functionality, separating the capacities of ‘rudimentary’ artificial Turing machines from the dynamics of organic systems, viewing the latter as higher-level ‘engines of creation’ evolved from barren biophysical processes along the course of natural history. Just as universal Turing machines can be made to encode the behavior of any other machine, Church argues, organisms can be seen as ‘universal production machines’, in the sense that they can be programmed and reprogrammed to produce anything else. The mapping of these generic productive dynamisms and their implementation constitute the binding of computer science and synthetic biology:
“Just as computers were universal machines in the sense that given the appropriate programming they could simulate the activities of any other machine, so biological organisms approached the condition of being universal constructors in that with the appropriate changes to their genetic programming, they could be made to produce practically any imaginable artifact. A living organism, after all, was a ready-made prefabricated production system that, like a computer, was governed by a program, its genome.”17
In broad continuity with this evolutionary account, Eric Baum has argued that biophysical-genetic information (DNA) can be seen as a ‘source code’ instantiating a (highly compressed) program whose algorithmic subroutines already express a ‘semantics’ to the extent that they instantiate the problem-solving and representational tasks of exploiting and abstracting the “…underlying, compact structure of the world”.18 Thus, Baum extends the classical thesis according to which “mind is a computer program” to claim that computational description is continuous with the chemical machinery of Life, which encodes an effective procedure for efficiently exploiting semantic structure through compression.
“The answer to the mystery of life and the answer to the mystery of mind (thought) are one and the same. It is given by the information flow, which in each case is provided (largely) by the genome. In the case relevant here, understanding thought, the genome encodes a compact expression that gives rise to understanding in the mind. This genome is quite a compact expression, which grows out (interacting with the world) into an immense flowering—the mind—much as the genome grows out (interacting with the world) into the body.”19
As Baum emphasizes, this is not to say that the biochemical machinery which encodes and expresses genetic information resembles in its architecture a Turing machine’s ‘read-write head’, or that its selection procedures refer to ‘lookup tables’, even if it necessarily fulfills the function of being a model of universal computation (MUC) which can be instantiated-mapped onto a universal Turing machine. For, as Baum reminds us, it is not only true that any effective procedure can be computed by a universal Turing machine; any Turing machine must also be functionally equivalent, and thus isomorphic, to any other model of universal computation. The abstraction of computational function from the precise structure of Turing machines thereby enjoins the identification of variegated MUCs across a variety of vocabularies and formal structures:
“The machinery is not exactly identical to the read-write head in a Turing machine, but as I’ve said, any algorithmic machine can be logically mapped into a universal Turing machine, and conversely, a Turing machine program is logically isomorphic to any other model of universal computation… Thus, we can consider Turing machines, parallel processors, Lambda calculi, Post machines, and many other models to all equivalently be computers.”20
The question about which model most adequately characterizes a system which functions as a MUC thus pertains to the specific way in which the generic function of computation is implemented at the empirical level, which entails that this generic functional integrity must be in itself abstract or transcendental with respect to its particular instantiation in a given model-structure. As it so happens, Baum argues, the “Program of Life” which underlies the dynamics of sentient and sapient cognition most closely exemplifies the model-structure of Post machines, whose architecture consists of a ‘collection of productions’ generated in processes of pattern-identification-matching and substitution:
“Life is reasonably well described as a giant Post production system. Again and again in life, computation proceeds by matching a pattern and thus invoking the next computational step, which is typically similar to invoking a Post production by posting some new sequence for later patterns to match.”21
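The pattern-matching dynamic Baum describes can be made concrete with a minimal Post-style production system. The sketch below is an illustrative toy, not Baum’s own formalism, and its rule set is hypothetical: each production matches a prefix of the current string and ‘posts’ a new suffix for later productions to match, in the manner of a tag system.

```python
def run_post_system(word, productions, max_steps=100):
    """Repeatedly match the word's prefix against the productions; on a
    match, delete the matched prefix and append the production's output."""
    for _ in range(max_steps):
        for prefix, output in productions:
            if word.startswith(prefix):
                # The match 'invokes' the next step by posting new
                # material for subsequent patterns to match.
                word = word[len(prefix):] + output
                break
        else:
            return word  # no production matches: the computation halts
    return word

# Hypothetical productions: each matched pattern posts a new sequence.
rules = [("a", "bc"), ("b", "a"), ("c", "")]
print(run_post_system("aaa", rules, max_steps=6))
```

Despite this austerity, suitably generalized Post production systems are models of universal computation, equivalent to Turing machines, which is what licenses treating them as one more member of the family of MUCs.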
Insofar as genetic processes already encode the base-level representational heuristics of organic systems that simulate their environments, Thought is to be explained as but a late development in the process of functional adaptation which organizes genetic information all the way up to higher-order cognitive capacities. The continuity between the genetic and the cognitive essentially concerns computational efficiency: according to Baum’s account, a strikingly efficient form of informational compression characterizes the ‘Program of Life’ so as to instantiate the structure-exploiting, representational functions of organic systems. To say that cognitive representation is but a mechanism for a system to ‘exploit the underlying structure of the world’ is thereby to say that a cognitive system’s ‘search patterns’ and selection procedures function to produce highly compact ‘model-structures’ that allow for its self-modulating and generalizing problem-solving capacities across diverging circumstances. And since cognition ‘exploits’ general structure to deal with an indefinite number of new cases through highly compressed modules, biophysical computation expresses a non-conceptual kind of ‘semantic content’ intrinsic to the causal dynamics of genetic information. These highly dynamic and efficient functional mechanisms adapt and train complex cognitive systems to operate modularly, instantiating a series of functional subroutines which constitute Thought, in virtue of which it models the compact structure of its environment and itself, sparing it the insurmountable computational demands of an immeasurably complex world. Both Life and Thought are thus to be understood…
“…as complex, evolved computations largely programmed in the DNA, both exploiting semantics in related ways. In any case, it provides worthwhile background to review another example of evolved, natural computation… The DNA is information: a sequence of bits that is read by chemical machinery and that causes a sequence of mechanical interactions to transpire, processing the information in the DNA.”22
This information-theoretic genetic inscription of semantic content – grounded in an evolutionary application of Occam’s Razor – is at once a naturalization of representational computational functionalism and a computational characterization of the structure of genetic material, extending even to those operations canonically associated with higher-order discursive cognition: objective identification, counting, causal-subjunctive reasoning, etc. Baum continues:
“The reason we learn so fast, the reason our learning is guided by semantics, is that the compact DNA code has already extracted the semantics and constrains our reasoning and learning to deal only with meaningful quantities….[S]emantics arises from the principle, roughly speaking, that a sufficiently compact program explaining and exploiting a complex world essentially captures reality. The point is that the only way one can find an extremely short computer program that makes a huge number of decisions correctly in a vast and complex world is if the world actually has a compact underlying structure and the program essentially captures that structure… Once one makes the ansatz that every thought is simply the execution of computer code, and understands how that code is evolved to deal with semantics, a self-consistent, compact, and meaningful picture of consciousness and soul will follow as naturally as thoughts follow from the constraints of meaning.”23
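Baum’s claim that a sufficiently compact program capturing a complex world “essentially captures reality” can be illustrated, very loosely, with a general-purpose compressor standing in as a crude proxy for minimum description length (Kolmogorov complexity itself being uncomputable). The sketch below is an illustrative assumption of mine, not Baum’s own method: a sequence generated by a very short “program” compresses far below a structureless one of the same length.

```python
import random
import zlib

# Two byte-strings of equal length: one with a compact underlying
# structure, one (pseudo-)random and essentially incompressible.
structured = b"ab" * 500  # generated by a very short "program"
random.seed(0)
incompressible = bytes(random.randrange(256) for _ in range(1000))

# Compressed size as a rough stand-in for description length:
# the structured string's regularity is "captured" by a short code.
len_structured = len(zlib.compress(structured))
len_random = len(zlib.compress(incompressible))

print(len_structured, len_random)  # the structured string compresses drastically more
```

The point of the analogy is only that detectable compressibility witnesses underlying structure; nothing in it decides whether such compression amounts to semantic content, which is precisely what the transcendental approach below disputes.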
The strangeness of this position is apparent, for it directly folds transcendental conditions of possibility into empirical conditions intrinsic to causal processes, while nevertheless recognizing the generality or transcendence of function with regard to concrete machinic implementation, i.e. in abstraction from the neural circuitry and concrete MUC-implementation of organic sapient systems:
“It seems very likely that the kinds of neural circuits studied provide a good model of certain brain functions such as early vision, but it seems unlikely that they are a good model of higher mental processes. Although it is true that more general classes of neural nets could simulate the actual neural circuitry of the brain, it does not follow that this is a fruitful way to talk about thought. To talk about thought fruitfully, at least along the line of attack in this book, one must be able to discuss the compactness in the algorithm. The compactness in the program of mind lies in the DNA and in the process by which the neural circuitry is constructed. The neural circuitry is, in my view, akin to an executable. The DNA is more like the source code. Looking at the neural circuitry is not, I suggest, the best way to intuit how the program works any more than we would look at the executable of an application like Microsoft Word.”24
But if the practice of ‘synthetic biology’ cannot be straightforwardly equated with the production of something like ‘artificial life’, it is because it operates on the basis of naturally evolved biophysical genetic material, however subject to repurposing or re-engineering through ‘editing’. It is not difficult to see that if information then becomes the very ontological currency to explain everything in nature, from physical process to sapient-deliberative cognition, then both the engineering and production of a Life and Thought which did not evolve naturally becomes trivially part of the ‘synthetic’ reorganization of physical matter in its functional constitution, by outsourcing the dynamics of mind to other, potentially inorganic mediums. For what could an ‘artificial’ production of mind be, other than the capacity of cognitive information-processing systems to organize, identify and repurpose the world that’s given, but only understood in its structure once the synthetic endowments of information-theoretic knowledge weaves the vistas of nature to, for and beyond us?
(b) The Transcendental Approach
In contrast with Baum’s naturalist characterization, those wary of transposing semantic function onto material constitution have insisted that the procedural routines which may be computationally generalized to provide a functionalist theory of cognition should not be conflated with the causal-mechanistic dynamics at the level of material organization. On this account, and as we noted in the first section, Negarestani argues that while one ought to recognize that the mind is always constrained by material structure, the characterization of mind must nevertheless be couched not in metaphysical, but in pragmatic terms, that is, it must be “…described in the functional vocabulary of activities and doings”.25 In particular, those activities and doings of systems endowed with inferential awareness. Such systems evince a qualitatively integrated, generative social model for what Negarestani calls a general intelligence, i.e. the kind of inferentially mediated behavior by virtue of which a system dynamically updates itself, modifies its abilities and constitutions, abstracting cognition from the ‘here-and-now’ into a dialogically articulated, socially distributed agency or multimodal system. The History of Mind (Geist) is then nothing but the procedural unfolding of sapient thought as it realizes its generative function through the synthetic powers of Reason, to sublate and transform itself and the world of which it is part.
The obverse of this claim is that, whatever such a pragmatic vocabulary may involve, it distinctively specifies a functional structure that is not an empirical relational/causal one. Which is to say that pragmatic specification abstracts the functional organization and operations of mind from its material basis, so that semantic computations are not intelligible in terms of causal properties described in a metaphysical or empirical register. This pragmatic elaboration of the abstract mind will be cogent insofar as it specifies the functional core of specifically discursive-conceptual practices which encode conceptual-logical relations expressive of semantic content in a socially distributed space, irreducible to biophysical function in the material mode. This marriage of a functionalist pragmatics of sapient cognition with computationalism thus involves, more generally, a computational theory of practice in general and of conceptual agency in particular. Negarestani describes the challenge:
“…in order to find and develop the appropriate computational models and algorithms of concept-formation and meaning-use, first we have to determine what sorts of activities a group of agents—be they animals or artifacts—have to perform in order to count as engaging in linguistic discursive practices.”
The project in question here predictably leads to a fastidious roster of methodological questions, some of which encode iterations of the general problems associated with functionalist theories of mind: how is the “pragmatics of Thought” that specifies linguistic-discursive practices to be expressed in strict computational terms? Is the ‘pragmatic idiom’ which captures the structure of linguistic-discursive cognition necessarily laden with the kind of deontic modal notions that imply intentional actions and normative statuses as explanatory primitives? How are the relations between the causal and logical-pragmatic levels of explanation to be understood if we are not to relapse into a teleological metaphysics, and how does this understanding answer the emergence question? And if the distinction between material and conceptual functional properties is to be understood in terms of different kinds of computational behavior, have we not presupposed from the start the equation of material relation with information-computational function? To what epistemological standard does such an information-theoretic metaphysics answer, under the computational characterization of the division between Being and Thought, so as to avoid dogmatic postulation?
Fastidious as they may be, these questions point to a clear exigency facing any functionalism that genuinely answers the demarcation and emergence questions while avoiding the overzealous compulsion to disseminate semantic content characteristic of naturalist accounts. Noting thus the dual dangers of an uncritical merger and a restrictive dogmatism, Negarestani reminds us that the marriage between functionalism and computationalism can only obtain by distinguishing the realizability conditions proper to specifiable functional roles, and the corresponding classes/types of computation, which may be incommensurable with each other. Put differently, computationalism must distinguish between causal-informational and conceptual-semantic functioning by distinguishing both the kinds of doings and activities involved at different levels of organizational complexity (pragmatics), and the kinds of computation which provide their perspicuous expression.
Here the distinction between the semantic-epistemic and metaphysical levels of functional description reappears in its computational form, as the distinction between intrinsic and logical computation. For the computational retrieval of the pragmatics of discursive cognition entails that the computational kinds that determine causal relations – which Negarestani characterizes in terms of intrinsic computation – are incommensurable with the logical-conceptual relations exhibited by linguistic functioning in sapient activity, which is rather to be characterized in terms of logical-symbolic computations, and which involves the overt specification of output states in regulating the system’s behavior:
“Combining functionalism with computationalism requires a carefully controlled merger. If by computationalism, we mean a general view of computation in which computation at the level of causal mechanisms and computation at the level of logico-conceptual functions are indiscriminately joined together and there is no distinction between different classes of computational function or computational models with their appropriate criteria of applicability to algorithmic and non-algorithmic (interactive) behaviors, then nothing except a naïve bias-riddled computational culture comes out of the marriage between functionalism and computationalism.”26
At the same time, Negarestani recognizes that invoking symbolic computations remains necessary but insufficient to retrieve proper semantic function, in the sense required to characterize the discursive, ‘concept-mongering’ behavior of the kind sapient systems exhibit. The great task for a theory of mind that goes beyond the limitations of the symbolic-computationalist frame set by the Church-Turing paradigm – looking to the so-called ‘interactive’ approach to computation advocated by Peter Wegner and Samson Abramsky – is to account for the computational routines expressed at the semantic and properly socio-discursive levels of cognitive function.
In this regard, Negarestani notes that while the Church-Turing paradigm construes interaction between the system and its environment in terms of sequential algorithmic representations, the interactive model represents the adaptive dynamics of concurrent processes and synchronous-asynchronous actions in terms of distributed parallel systems. Within the scope of such interactive cognition one encodes the dialogical dimension of inference implying an intersubjective interactive space, irreducible to sequential procedures in which “…openness to implementation suggests a functional evolution that is no longer biological or determined by an essential structure.”27 This radical unbinding of Thought from the material-causal dynamics of Life and Matter resists the blunt naturalization of semantic computation which threatens to blur the lines between the informational-computational and the semantic proper, even if it appears to accept the basis of an information-theoretic metaphysical paradigm within which the distinction between the causal and the conceptual is gauged.
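The contrast between the two paradigms can be sketched in miniature. The following is my own illustrative toy, not a model drawn from Wegner or Abramsky: a classical algorithm computes a function of a pre-given input, whereas an interactive process (here a Python coroutine) produces each output in the course of an ongoing exchange, so that later stimuli can depend on earlier replies and no complete input string exists in advance.

```python
def sequential(inputs):
    # Church-Turing picture: the whole input is fixed in advance,
    # and the output is simply a function of it.
    return [x * 2 for x in inputs]

def interactive():
    # Interactive picture: each reply depends on the history of an
    # open-ended dialogue with the environment; the "input" is never
    # given as a completed totality.
    history = []
    reply = 0
    while True:
        stimulus = yield reply
        history.append(stimulus)
        reply = sum(history)  # state accumulates across the exchange

agent = interactive()
next(agent)          # prime the coroutine
print(agent.send(1)) # reply reflects the dialogue so far: 1
print(agent.send(2)) # now 1 + 2 = 3
print(agent.send(3)) # now 1 + 2 + 3 = 6
```

The toy obviously captures only statefulness and open-endedness, not concurrency or the dialogical normativity at issue; it is meant only to make the sequential/interactive contrast concrete.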
In the same spirit, forcefully integrating the scope of functionalist computationalism into the systematic aims of German Idealist philosophy, Pete Wolfendale has endorsed the prospects of a ‘Kantian computationalism’ that separates material and transcendental description, “…explaining how the normative structure of reason can be autonomous and nevertheless be implemented by the causal structure of homo sapiens and its techno-linguistic infrastructure”.28 Transcendental computational accounts of this sort tend to share the view endorsed by ‘strong normativists’ such as Robert Brandom that, even when so constrained by causal factors, the functional routines of cognitive behavior can in principle be procedurally abstracted from the causal relations tracked by empirical science, including the sciences of synthetic biology and neurophysiology. This is because the dynamics of sapient cognition are inherently social and collectively instantiated in the discursive space of instituted theoretical and practical norms-rules; i.e. to say that meanings “ain’t in the head” is just to say that semantic operations are intelligible not in terms of the internal states of any given thinker or their material-causal makeup, but only in the way in which conceptual capacities integrate a system into a normatively regulated space of socio-cognitive practices that institute a kind of collective agency. In approximating this social dimension of cognition, Wolfendale tells us, one may thus study the protocol of cognition without studying how it is implemented under given causal-material constraints, and thereby endorse a kind of semantic operationalism.
Abiding by the constraints set by the emergence question, however, Wolfendale’s account proposes a distinct evolutionary genealogy of ‘information-processing systems’, tracking the development of mind from the ‘problem-solving’ capabilities exhibited by sentient systems that behave according to a multiplicity of autonomous drives, to the capacities for simulation instantiated by the selection-enabling functional integration of these drives into a controlled information ‘storage medium’, to the higher-order discursively mediated doings that confer semantic content through the conceptual protocols of theoretical and practical inferential reasoning. As it turns out, it is precisely the generalizing capacity of conceptual functioning which allows a system’s problem-solving capabilities to reach a new level of modulation, revision and integration, allowing it to unbind itself from whatever parochial information storage medium supports it as it assesses the consequences of concepts in relation to socially distributed-instantiated theoretical and practical ends:
“What may initially seem like an unsurpassable problem for linguistic accounts of intelligence actually reveals the distinctive feature of language, namely, that insofar as its meaning consists in the functional role that sentences play in reasoning, or in the whole social economy of perception, inference, and action, there is nothing in principle constraining the extent of their possible theoretical consequences, or their potential practical relevance.”29
Only in reckoning with the operational abstraction of socio-conceptual practices does one appreciate the distinctive generalizing powers of sapient cognition, which enjoin rational systems to extend, ‘un-frame’ and revise the socially instituted norms that define their theoretical and practical tasks. Sapient systems are thus operationally-functionally characterized as instantiating an absolutely general problem-solving protocol, defined by the power to unbind themselves, modulating perceptual, inferential and agential capacities to represent, deliberate, and act upon the world or themselves in ever more precise, diverse, and transformative ways:
“[T]he in principle generality of theoretical and practical reason derives from the in principle extensibility of the social norms which encode the content of its representations. The real significance of language is the capacity it grants us to make explicit and selectively modify the heuristic frames implicitly embedded in adapted cognitive heuristics. This means that the distinctive feature of rational cognition is un-framing…[T]here is no reason to think that the institution of rationality is irrevocably tied to these specific morphological and computational forms. The inhuman system that ensouls our bodies – transforming us into subjects responsible for our thoughts, agents responsible for our actions, and selves responsible for our own cultivation – can ensoul entirely alien somatic forms.”30
In any case, Wolfendale identifies the Kantian account of the forms of cognition as coeval with a kind of transcendental psychology, opening the way for a generic mathematical mapping of the sapient mind, apt for thinking its rational scaffolding beyond the resources of pure information-theory by drawing on a category-theoretic understanding of the operational transformations or ‘morphisms’ that articulate the integrity of sapient cognitive systems across their syntactic, semantic, and pragmatic levels. This account thereby involves compartmentalizing the generic subsystems of theoretical and practical cognition by decanting residual metaphysical morsels from the original transcendental project, revising the Kantian account of facultative ‘synthesis’ to track the functional architecture of conceptual and sensory systems, at once updating its parochial adherence to Euclidean geometry on the side of ‘the forms of intuition’ and the categorical account of intellection derived from Aristotelian-scholastic metaphysics.
Insofar as the functional integration of non-conceptual with conceptual factors instantiates cognition as an integrated system of ‘apperceptive consciousness’ that binds perception, inference and action, the pragmatic routines of sapient cognition are thus specified as generically abstract in relation to empirically constrained causal relations (i.e. irrespective of the Post-machine-resembling empirical structure which instantiates ‘The Program of Life’). Only in doing so may one answer the demarcation question in a rigorous way, avoiding both the danger of conflating semantic and informational processes and the equally pernicious danger of eliminating semantic processes altogether, while discerning the various functional kinds that organize the sapient mind across a variety of practices and vocabularies.
In the second part of this essay, I want to flesh out just what the functional-pragmatic specification of the ‘general problem solving’ propriety of sapience is said to entail, integrating linguistically mediated capacities for theoretical and practical reasoning with the sensorial differential responsiveness found already in sentient systems, and before it is said to obtain under the constraints of any empirical model; in short, spelling out what invariant conditions obtain for the instantiation of sapience, what any system would have to do, in order to count as expressing sapient intelligence, irrespective of what it is or what causal material conditions enable it to do what it does.
II – The Pragmatic Basis of Functional Cognition – Circumspection, Perception, Inference, and Action
In a thoroughly minimalist tenor, Wilfrid Sellars famously proposes a speculative, top-down pragmatic theory of mind which captures the ‘minimal core’ on whose basis we may distinguish between concept-mongering systems that convey semantic content and those that merely process bare information. To do this, Sellars proposes to separate sapient from non-sapient behavior by distinguishing the way in which conceptual-discursive behavior functionally instantiates routines in relation to sensory inputs and behavioral outputs, integrally binding perception, inference, and action. Amplifying Sellars’ frame, the basic functional characterization of a cognitive system then requires distinct levels of integrated processing, in which conceptual and non-conceptual states are coordinated and understood as instantiating four generic functional kinds of ‘transitions’31:
1. Language-entry transitions (perception) – the system non-inferentially transitions from a non-conceptual state x to a conceptual state p. Paradigmatically, perceptual responsiveness, in which sensory-psychosomatic inputs yield perceptual reports as outputs, e.g. John sees a red cat and non-inferentially responds “There’s a red cat!”
2. Language-language transitions (inference) – the system goes from conceptual state p to conceptual state q. John goes from “There’s a red cat on the street” to “There are street animals in this area.”
3. Language-exit transitions (action) – the system goes from conceptual state p to non-conceptual state y. John goes from “There are street cats in the area” to buying some cat treats in the nearby shop.
4. Non-language-non-language transitions (circumspection) – the system goes from non-conceptual state x to non-conceptual state y.32
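The four transition kinds above form an exhaustive classification over pairs of conceptual and non-conceptual states, which can be made explicit in a short sketch. The naming and representation here are my own illustrative assumptions, simply restating Sellars’ scheme as a lookup table:

```python
from enum import Enum

class State(Enum):
    NON_CONCEPTUAL = "non-conceptual"
    CONCEPTUAL = "conceptual"

# Sellars' four generic transition kinds, indexed by the
# (input-state, output-state) pair they relate.
TRANSITIONS = {
    (State.NON_CONCEPTUAL, State.CONCEPTUAL): "language-entry (perception)",
    (State.CONCEPTUAL, State.CONCEPTUAL): "language-language (inference)",
    (State.CONCEPTUAL, State.NON_CONCEPTUAL): "language-exit (action)",
    (State.NON_CONCEPTUAL, State.NON_CONCEPTUAL): "non-language (circumspection)",
}

def classify(src: State, dst: State) -> str:
    """Name the Sellarsian transition from src to dst."""
    return TRANSITIONS[(src, dst)]

# John sees a red cat and non-inferentially reports it:
print(classify(State.NON_CONCEPTUAL, State.CONCEPTUAL))  # language-entry (perception)
```

That the table has exactly four entries, one per ordered pair, is the point: the scheme is minimal and exhaustive, which is also why (as discussed below) it must be amplified to capture finer distinctions within each kind.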
The centrality of language concerns the way in which inferential behavior mediates and integrates perception, action, and circumspection. Semantic-conceptual content gets conferred by inferential practices as they specify the proprieties of use of discursive tokens in relation to others, in response to non-discursive inputs and as mediating agential outputs. States that are called linguistic are thus not to be understood merely as pertaining to systems relaying syntactic analogs of sentences or ‘propositional states’ endowed with subject-predicate form in natural languages, or in the mediums proper to human speech or writing, but as any functional-symbolic economy capable of instituting the same functional integration. For example, in construing a non-human conceptual frame, Sellars famously imagines the fictional ‘Jumblese’, a functional analog of a descriptive language endowed with only nominal sign-designs and style modifications, dispensing with predicates and thereby subtracting propositional functionality from strict sentential form. More generally, computational characterization in its transcendental scope may be understood as a way to subtract symbolic-conceptual function in its inferential makeup from narrowly conceived ‘propositional’ structure, understanding linguistic-conceptual behavior as ascribable to any system that relays tokens in a discursive economy by entering into inferential relations of incompatibility and consequence with other such states, in relation to non-discursive capacities for observational knowledge and practical agency.
Here the conceptual, explanatory role of normative vocabulary within a pragmatics of Thought becomes salient: following Robert Brandom, we can say that a cognitive system is one that exhibits normative sensitivity and is capable of re-cognitive practices of epistemic assessment by instituting normative statuses. In essence, this means that sapient systems engage in the kind of interactive dynamics that realize epistemic roles within ‘the game of giving and asking for reasons’, by virtue of conferring, assuming and assessing the normative statuses of entitlement and commitment33: p is a conceptual state if and only if it enables an inference to q, and there is some further state r which enables the inference to p. In becoming liable to such assessment and normative-conceptual binding, a system becomes a candidate for knowledge by undertaking beliefs for which reasons may be given.
This is why the characterization of linguistic practice as that of a ‘game’ (indeed, in a fundamental sense the game by which sapient systems constitute themselves as social beings) is more than a cursory analogy. In computational terms, this means that a system conveys semantic content insofar as it becomes capable of instantiating dialogical processes of dual-interactivity between itself and another computational agent/interlocutor,34 of the sort amenable to the pragmatic functional characterization of the reason-asking-and-giving practices that Brandom calls ‘deontic scorekeeping’. This inferential articulation of Reason as it apperceptively responds to and intervenes on perception and action serves thus as a general protocol for organizing a system’s resources and behavior conforming to specific theoretical and practical aims-tasks. This means that a system enters contexts of evaluation by entering an inferential interactive space that is Reason, insofar as it binds the system’s perceptual, discursive and agential capacities. Being able to transit in and out of language in perception or action thus integrates the behavioral capacities of a discursive system in such a way so as to enable it to attribute, assume and become bound to epistemic attitudes within dynamic interactive decision spaces, across ever-evolving languages and vocabularies involving multiply-realizable functions across indefinitely modifiable structures for material implementation.
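The idea of deontic scorekeeping can be given a minimal computational gloss. The sketch below is a loose illustrative assumption of mine and in no way Brandom’s own formalism: a scorekeeper tracks, per interlocutor, the commitments undertaken by assertion and the entitlements it attributes, withholding entitlement when a new claim clashes with commitments already on the speaker’s score.

```python
from collections import defaultdict

class Scorekeeper:
    """Toy deontic scorekeeper: tracks attributed commitments and
    entitlements per speaker (an illustrative sketch, not Brandom's
    formalism)."""

    def __init__(self, name: str):
        self.name = name
        self.commitments = defaultdict(set)   # claims each speaker is committed to
        self.entitlements = defaultdict(set)  # claims each speaker is entitled to

    def assert_claim(self, speaker: str, claim: str, incompatible: set):
        # An assertion undertakes a commitment...
        self.commitments[speaker].add(claim)
        # ...and is granted default entitlement, unless it is
        # incompatible with commitments already on the score.
        if self.commitments[speaker] & incompatible:
            self.entitlements[speaker].discard(claim)
        else:
            self.entitlements[speaker].add(claim)

keeper = Scorekeeper("A")
keeper.assert_claim("B", "that's red", incompatible={"that's green"})
keeper.assert_claim("B", "that's green", incompatible={"that's red"})
print(keeper.commitments["B"])   # both commitments stand on B's score
print(keeper.entitlements["B"])  # entitlement to the clashing claim is withheld
```

Even this crude model exhibits the dual-interactivity structure: score is something one interlocutor keeps on another, and the normative statuses it records are not internal states of the speaker but positions in a social practice.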
The basic picture resulting from this account yields the following diagram, in which the different functional relations between non-linguistic (NL) and linguistic states (L) are specified, relative to the role they play as inputs (IN) and outputs (OUT) of a process, and the levels of integration which accordingly allow us to distinguish levels of functional organizational complexity in a system, from inorganic systems to sapient cognitive systems:
There are three essential points to build upon from this schematic account:
1. The basic Sellarsian picture, even when amplified to account for circumspect behavior, is quite minimal in its construal of cognitively mediated function. To clarify and go beyond such a parochial level of generality, we must be able to spell out the different kinds of non-linguistic states and the variety of entry and exit roles that they play, as well as the nature and kinds of inferential relations that obtain within ‘the space of reasons’ and which mediate our perceptual responsiveness to environmental inputs and agential possibilities, i.e. the way in which conceptual function allows a system to relate to the world.
For instance, one may amplify one’s account of perceptual consciousness by distinguishing different kinds of non-linguistic inputs, besides sensory signals or direct somatic stimulation, which function to yield perceptual reports as outputs: states of eidetic-photographic memory or imaginative projection, through which a non-conceptual mental event triggers a non-inferential report, are obvious examples. It is also clear that language-exit transitions are not limited to cases of overt action, if by the latter we mean corporeal movement: one may transit from linguistic states to other kinds of non-linguistic cognitive states of the sort that can, under specific conditions, themselves play an entry role into language, e.g. the transition from language to instances of non-discursive memory or imagination.
2. In addition, and this is an extension of the Heideggerean inclusion of non-discursive ‘copings’, we may recognize that most of the activity we undertake happens between non-linguistic states: most of what we do, we do circumspectly. In a way, this level of process involves the functional basis for bare sentience, in reflex patterns which take psycho-somatic inputs and yield non-somatic outputs, including the kind of sensory operations that do not function as language-entry transitions, i.e. bare sensory function that is not yet perceptual function. This degree of reliable differential responsiveness associated with circumspection functionally and genetically precedes and conditions discursively mediated observational, inferential, and agential capacities.
However, at the same time, non-linguistic transitions which embody ‘know-how’ are not only transparent routines which, upon malfunction, trigger conceptual states, functioning as inputs in language-entry transitions. For we also go from inferential knowing-that to the internalization of these functional roles into circumspect know-how: thinking of the way in which somatic memory relates to discursive cognition allows one to understand, for instance, how one can go from reading an instruction manual and overtly following inferential rules to build an Ikea bed, to being able to do it without even thinking about it. We not only learn to make explicit our inferential patterns through the acquisition of logical vocabulary in the form of rules; we also learn to make overt cognitive inferential practices implicit, as non-inferential routines in circumspect behavior.
3. The inferential moves that may obtain within a language are of different sorts, presupposing different abilities and conceptual resources. And one might worry: doesn’t the inferentialist account, including the ‘algorithmic structuration’ of cognitive function proposed by computationalism, suppose that thinking beings are able to follow rules, which in turn supposes the capacity to use logical vocabulary, paradigmatically subjunctive conditionals? Are the inferences in question formal inferences, which always involve a deductive procedure of the sort that supposes the capacity for syllogistic reasoning? Haven’t we learned the horrors that follow from the coruscating regresses of regulism, i.e. the view that conceptual articulation requires overt rule-following to function inferentially?
In order to spell out the nature of inference and its role in a functionally integrated picture of cognition more perspicuously, in the next section I introduce the Brandomian semantic hierarchy, which clarifies the nature of discursivity and inference.
III – The Semantic and Pragmatic Hierarchy: Labeling, Material Inference, Logical Inference, Theory Formation
Developing the functionalist Sellarsian picture outlined above, Robert Brandom has proposed an answer to the demarcation question by tracking the different degrees of functional complexity conveyed by a cognitive system, taking us from pre-conceptual basic discriminatory capacities that are necessary but not sufficient for sapience, to full-blooded conceptual reasoning and semantic expression, to overt logical reasoning.35 Amplifying and schematizing the Brandomian scheme, we can trace at least seven levels of semantic and pragmatic complexity:
(1) – Labeling – Beyond the bare causal regularities that characterize any differentially responsive system, a representational system must have the capacity to classify its environmental inputs by sorting stimuli-types reliably in given circumstances, issuing a labeling output. This kind of informational sorting relation between a system and its environment is something that sentient organisms and many inorganic machines can clearly already perform, e.g. a parrot may reliably respond by uttering “Daniel!” when it sees me pass by; a scouting robot may scan its environment, printing “Green object” when its sensors register a green object nearby, etc. In this sense, the system’s output states are considered as labels that classify the items it scans from its environment, producing a representational analog of what it registers, i.e. the ‘locators’ which functionally enable a cognitive system to organize inputs into an integrated ‘coordinate system’ in integral simulation. It is clear that the capacity of such a system to respond to and classify given sorts of inputs is contingent on the architecture of the system in question, the kinds of signals that it is liable to register while passively excluding others, as well as the operational mechanisms by which its address system actively filters and sorts registered inputs into classes and modules – the ‘data models’ that Ladyman and Ross place at the basis of representational capacities. For the moment, we may remain neutral about the metaphysical status of the ‘phenomena’ thus indexed or filtered.
(2) – Material Inference / Description – Second, a system must be capable of integrating its differential responses into relations of incompatibility and consequence, entering contexts of evaluation: it must not only be able to respond to its environment as per the right circumstances of application, but also to assess the consequences of application of its classifying responses. This means that a system must be able not only to label its environment by responding to it reliably, but to describe its environment by assessing what follows from so classifying it.
There are several essential and related traits about this level of inferential competence:
. The key to understanding how syntactical primitives issued as outputs by a system must be organized with each other so as to yield semantic content lies in the difference between acting in accordance with explicit rules and following implicit norms. Norm-following, understood as the ability to normatively assess the semantic and pragmatic consequences of application through inferential practices, is therefore also the bedrock of a system’s capacity to assume and attribute normative statuses of commitment and entitlement, expressed in practices of assertion, description, and explanation which constitute the holistic operational structure of an epistemic-conceptual system endowed with intentional attitudes.
This is what the parrot is missing: it cannot infer that if there goes Daniel it follows that there goes a Peruvian biped animal, that if it had been an Italian chimp by the name of Fabio it would not be Daniel, or that if a system identifies Daniel with an Italian chimp it is incorrectly applying the concept ‘Daniel’ to what it registers, etc. This is why the parrot’s coordinated and reliable response “There goes Daniel!” does not count as an assertion that would express a belief for which we would say the parrot is epistemically responsible: for it cannot assess the (counterfactually robust) consequences of having undertaken a commitment, nor integrate its utterance into a justificatory process. To be able to assume a position in the game of giving and asking for reasons is what it means for a system to be subject to practices of epistemic appraisal. And since it is beliefs that are possibly true or false that make a system a candidate for knowledge, it follows that merely sentient creatures, incapable of engaging in inferential behavior and assuming normative statuses, are not capable of having properly epistemic states. Brandom summarizes this point:
“The ‘parrot’ does not treat “That’s red” as incompatible with “That’s green”, nor as following from “That’s scarlet” and entailing “That’s colored”. Insofar as the repeatable response is not, in the parrot, caught up in the practical proprieties of inference and justification, and so of the making of further judgments, it is not a conceptual or a cognitive matter at all.”
. To ‘count states as reasons for others’ by distinguishing the primitive deontic statuses of commitment and entitlement is to yield four basic kinds of pragmatic inferential relations assumed and conferred in contexts of evaluation:
– (Commitment-Preserving Inferences) – p is commitment-preserving with respect to q iff commitment to p entails commitment to q, e.g. if one is committed to the claim that a beer has brettanomyces, one is committed to the beer’s being a wild ale, since it follows from a beer’s having brettanomyces that it is a wild ale. These provide a pragmatic generalization of deductive inferences.
– (Entitlement-Preserving Inferences) – p is entitlement-preserving with respect to q iff entitlement to p entitles one to (assert) q, e.g. if one is entitled to assert that the beer is massively hopped, one is entitled to assert that the beer will be bitter, since a beer’s being massively hopped provides good reason (but does not oblige one) to think the beer will be bitter. These provide generalizations of inductive inference.
– (Explanation-Enabling Inferences) – p is explanation-enabling with respect to q iff commitment to q entitles one to the conditional p → q, which, together with commitment to p, entitles one to q. In such a case, we will say one infers p as an explanation of q. These provide generalizations of abductive inference and hypothetical reasoning, e.g. if one believes that the beer is bitter, one is entitled to say that this is because it is hoppy.
– (Incompatibility Inferences) – p is incompatible with q iff commitment to p precludes entitlement to q, and vice versa, e.g. the assertion “the beer tastes hoppy” is incompatible with the assertion “the beer has no bitter qualities” since although one can practically commit to both, commitment to each precludes entitlement to the other, so that one cannot be entitled to both at once.
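These four relation-kinds lend themselves to a minimal computational gloss. The following sketch, a toy ‘scorekeeper’ whose rule-tables, names and propagation logic are illustrative assumptions rather than Brandom’s own formalism, tracks the commitments and entitlements a system undertakes as it asserts claims, reusing the beer examples above:

```python
# Toy deontic scorekeeper: tracks which claims a subject is committed and
# entitled to, and propagates three of the four relation-kinds.
# All rule tables below are illustrative assumptions, not Brandom's formalism.

COMMITMENT_PRESERVING = {"has_brett": ["wild_ale"]}       # deductive-style
ENTITLEMENT_PRESERVING = {"massively_hopped": ["bitter"]}  # inductive-style
INCOMPATIBLE = {("hoppy", "no_bitter_qualities")}          # precludes entitlement

class Scorekeeper:
    def __init__(self):
        self.commitments, self.entitlements = set(), set()

    def assert_claim(self, p):
        """Undertake commitment to p and propagate its consequences."""
        self.commitments.add(p)
        self.entitlements.add(p)  # default entitlement to one's own assertion
        # commitment-preserving: commitment to p entails commitment to q
        for q in COMMITMENT_PRESERVING.get(p, []):
            self.commitments.add(q)
        # entitlement-preserving: entitlement to p entitles one to q
        for q in ENTITLEMENT_PRESERVING.get(p, []):
            self.entitlements.add(q)
        # incompatibility: commitment to either claim precludes entitlement
        # to the other (one can commit to both, but not be entitled to both)
        for a, b in INCOMPATIBLE:
            if a in self.commitments:
                self.entitlements.discard(b)
            if b in self.commitments:
                self.entitlements.discard(a)

sk = Scorekeeper()
sk.assert_claim("has_brett")
sk.assert_claim("massively_hopped")
sk.assert_claim("hoppy")
print("wild_ale" in sk.commitments)              # True
print("bitter" in sk.entitlements)               # True
print("no_bitter_qualities" in sk.entitlements)  # False
```

Note that commitment and entitlement are tracked as separate scores: asserting “hoppy” does not remove “no_bitter_qualities” from the commitments a subject could undertake, but does preclude entitlement to it.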
– As Brandom makes clear, the inferences that encode implicit norms of conceptual use are thus not formal but material: they do not suppose a syllogistic procedure in which conclusions detach from premises through subjunctive rules that are overtly expressed with logical vocabulary. The inferential moves to and from reasons are, in the first instance, transitions which are endorsed implicitly in practice, before the system has the capacity to make these inferential connections explicit in the form of a rule, e.g. a system goes from “There goes a man” to “There goes a biped animal” without the mediation of the conditional rule “If something is a man, then it is a biped animal”. To say that the proprieties of inference are undertaken implicitly entails that the system reliably transitions from one state to another, in accordance with a rule of inference which the system itself is not in a position to make explicit through the use of logical vocabulary.
– The question of how such inferential-linguistic relations are to be understood, however, is not immediately transparent: is this inferential functionality ultimately to be reduced by recourse to the causal or mechanical properties of a system? Or are the semantic and pragmatic proprieties of use for concepts, on the contrary, transcendentally specified in independence from mechanistic determinations? This takes us back to the problem of the ontological status of intentional attitudes and normative status ascriptions in relation to the causal relations said to obtain at the material level.
The Sellarsian solution, as we already suggested above, is to endorse a methodological dualism between deontic and alethic modal relations, and between causal and transcendental discourse, that does not result in metaphysical dualism, i.e. what James O’Shea has identified as the causal reducibility cum normative irreducibility of thought-episodes with respect to material constitution.36 The normative characterization of cognition provides a paradigm of such ‘transcendental’ description: for a system’s states to be able to play a conceptual role through inferential articulation entails that the states of the system are organized in relations of incompatibility and consequence, which make them liable to play a justificatory role in relation to others. For, as Sellars famously put it, to characterize an episode or state as one of ‘knowing’ is not merely to give an empirical description of that state, but to place it ‘in the space of reasons’, of saying and justifying what one says. If so, it becomes possible to endorse an operational or methodological dualism without compromising ontological univocity, that is, following the Sellarsian ambition, without unwittingly endorsing a metaphysical ‘norm-nature’ dualism that amputates thought from its material basis.
With this said, even if norms and intentional locutions should prove ultimately reducible or eliminable by means of alternative functional descriptions (mechanistic or transcendental), the latter would have to retrieve at least the essential operational structure of a system that engages in contexts of evaluation to explain and describe itself and its environment. One may thereby retrieve the functional core of rationality in its theoretical and practical dimensions without endorsing a metaphysical realism about deontic modality or normative statuses.
– Finally, material inferences are by and large non-monotonic: they are not exceptionless but highly context-sensitive, in the sense that their goodness is not guaranteed to hold under the addition of arbitrary premises. This means that material inferences across vocabularies operate over local rather than absolute spaces of possibilities, such that the inferences are also usually implicitly qualified by a (generally open and indefinite) set of defeaters: ‘p, then q…unless r, or s…’.37
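The non-monotonic shape ‘p, then q…unless r, or s’ can be rendered as a toy defeasible-inference check; the rule format and the match-lighting example below are illustrative assumptions, not part of the Sellarsian or Brandomian apparatus:

```python
# Sketch of a non-monotonic material inference: 'p, then q ... unless r, or s'.
# The rule's goodness is not preserved under arbitrary added premises:
# adding a defeater to the premise set withdraws the conclusion.

def defeasibly_follows(conclusion, premises, rules):
    """rules: list of (antecedent, conclusion, set_of_defeaters) triples."""
    for antecedent, concl, defeaters in rules:
        if concl == conclusion and antecedent in premises:
            if not (defeaters & premises):  # no defeater among the premises
                return True
    return False

# Toy rule: 'struck_match, then lights ... unless wet, or no_oxygen'
rules = [("struck_match", "lights", {"wet", "no_oxygen"})]

print(defeasibly_follows("lights", {"struck_match"}, rules))         # True
print(defeasibly_follows("lights", {"struck_match", "wet"}, rules))  # False
```

The second call exhibits the failure of monotonicity: strengthening the premise set defeats, rather than preserves, the conclusion.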
(3) – Logical Systems – One level up, systems become capable of making formal inferences, which make explicit their implicit material inferential normative commitments as rules, through the use of logical vocabulary, forming compound sentential units, paradigmatically subjunctive conditionals and negative judgments. Insofar as conditionals are asserted compound sentential units in which free-standing, non-asserted antecedent clauses function as ingredient conditions for satisfying consequents, it is at this level that cognitive systems become capable of assessing and contemplating premises and arguments, precisely by the synthetic-logical integration of sentential units as unasserted components of asserted compounds. These systems then acquire the capacity to distinguish between semantic and pragmatic consequences of application, between what follows from something being thus and so and what follows from someone saying that something is thus and so, between content and force: “A creature that can understand a claim like “If the red light is on, then there is a biscuit in the drawer” without disagreeing when the light is not on and no biscuit is present, or immediately looking for the biscuit regardless of how it is with the light, has learned to distinguish between the content of descriptive concepts and the force of applying them, and as a result can entertain and explore those concepts and their connections with each other without necessarily applying them in the sense of endorsing their applicability to anything present. The capacity in this way to free oneself from the bonds of the here-and-now is a distinctive kind of conceptual achievement.”38
This means that the capacity merely to ‘entertain’ premises presupposes and builds upon the capacity of systems to assert and infer by adhering to implicit norms; logical vocabulary allows us to abstract, as it were, predication from assertion, reasoning from belief. In this sense, paraphrasing Brandom, Logic is the organon of our semantic and pragmatic self-consciousness.
(4) – Complex Predicate Formation – Distinguishing between pragmatic and semantic inferences, a system may map embedded conceptual contents of varying generality and scope to formulate complex concepts, forming equivalence classes from sentences which behave invariantly under substitution in corresponding inferential patterns. This substitutional ability allows a system to produce new complex concepts and predicates, in an ever-extensible process of semantic stratification and expressive enrichment in which sense or meaning becomes formalized and constructed as we learn to operationalize inference holistically.
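The formation of equivalence classes under substitution admits a minimal computational rendering. In the sketch below the toy ‘consequence’ table, which stands in for the full inferential patterns of a language, and the sample sentences are illustrative assumptions:

```python
# Sketch: sentences grouped into equivalence classes by inferential role.
# Two sentences count as substitutionally equivalent here iff they license
# exactly the same consequences under a toy consequence relation.

from collections import defaultdict

CONSEQUENCES = {
    "bachelor(x)": frozenset({"unmarried(x)", "man(x)"}),
    "unmarried_man(x)": frozenset({"unmarried(x)", "man(x)"}),
    "vixen(x)": frozenset({"fox(x)", "female(x)"}),
}

def equivalence_classes(cons):
    classes = defaultdict(set)
    for sentence, role in cons.items():
        classes[role].add(sentence)  # same inferential role -> same class
    return list(classes.values())

for cls in equivalence_classes(CONSEQUENCES):
    print(sorted(cls))
```

Here “bachelor(x)” and “unmarried_man(x)” fall into one class because they behave invariantly under substitution in every listed inference, while “vixen(x)” forms a class of its own.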
(5) – Theory Building – Amplifying this Brandomian scheme, and drawing from the work of Lorenz Puntel on the philosophical prospect of a ‘global systematics’, we can take the semantic ladder three steps further.39 For beyond the capacity to make inferential rules explicit through logical vocabulary and the making of formal inferences, a cognitive system eventually becomes capable of organizing its formal inferential resources so as to abstract from the specific material content of descriptive terms and their logical-conceptual relations, through the assignation of variables and constants for names, and operators/functions for relations, organized structurally in the formation of theories.
To be able to engage in formalization, and not only in theoretical and practical reasoning, a system must be able to abstract from the material inferential organization of names and predicates in empirical languages and to form artificial languages in which the syntactic and semantic proprieties are defined by integrated rules of formation, axioms and theorems, such that every well-formed sentence in the language follows from explicit rules which define the admissible operational manipulation of the syntactical basis. This procedure is thus at once enabling in the sense of an increase in operational control – insofar as it explicitly regulates the admissible inferential processes, including the specification of the spaces of possibility that operate for non-monotonic vocabularies – and liberating – insofar as it instantiates a power of generalizing abstraction from the concrete proprieties of given languages to encode generic inferential forms.
Following Nicholas Rescher’s work on pragmatics and the practice of theory, we can identify four stages in the process of integral formalization qua theorization40:
. Informal theorization: In this stage, a system identifies syntactical primitives or terms that count as basic data structures seeking maximal coherence and ordering the data according to the principle of what Puntel calls “inference to the best systematization”.41 This process essentially involves minimally ordering the data so as to attain maximal explanatory coherency, where a ‘datum’ is defined purely and simply functionally as what is to be brought into a coherence-nexus within the scope of theorization.
. Formal theorization: At this level, the coherently articulated sentences compiled in the first stage are put into proper theoretical form, i.e. a theory is structurally simplified as the doublet consisting of the relations between a language-structure S and a universe of discourse U: T = <S, U>. This entails plugging informal theories into either proper axiomatic theories or holistic networks. The difference between the two kinds accordingly concerns the method of formalization at stake and the inference-kinds involved in the theory’s admissible operations.
. Axiomatic theories are minimally composed of a language, a logic (rules of inference), a set of axioms, and theorems. These have a hierarchical-linear structure, by which a system moves deductively from basal theses (axioms) to theorems. In short: T is an axiomatic theory if T is a set of formulas such that there is a subset A of T whose elements are the axioms of T, and, for every formula X, if X belongs to T then X is provable or derivable on the basis of A.
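This definition of a theory as the set of formulas derivable from its axioms admits a toy computational gloss. The sketch below is an illustrative assumption that restricts the rule set to modus ponens over atomic conditionals; a real calculus would have a richer language and rule set:

```python
# Sketch of 'T is the set of formulas derivable from the axioms A':
# compute the closure of a finite axiom set under a single toy rule
# (modus ponens over atomic conditionals).

def closure(axioms, conditionals):
    """conditionals: set of (p, q) pairs standing for 'p -> q'."""
    theory = set(axioms)
    changed = True
    while changed:
        changed = False
        for p, q in conditionals:
            if p in theory and q not in theory:  # modus ponens fires
                theory.add(q)
                changed = True
    return theory

A = {"p"}
R = {("p", "q"), ("q", "r"), ("s", "t")}  # s -> t never fires: s underivable
print(sorted(closure(A, R)))  # ['p', 'q', 'r']
```

The hierarchical-linear character of the axiomatic form is visible in the fixed point computed here: every theorem is reached by a finite deductive path from the basal theses, and nothing else belongs to T.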
Following Wolfgang Stegmüller, we can discern five corresponding stages in the constitution of axiomatic theories42:
. Euclidean axiomatics – Which relied on the idea that the extensive, geometrical properties of objects could be deduced from ‘basic axioms’, considered to be self-evident intuited principles (‘principia per se nota’) expressed as underived sentences held as universally true. This incipient usage would continue until the late 19th Century.
. Informal Hilbertian axiomatics – Which amplifies the scope of the ‘intuitive’ concept of axiomatic assumptions (holding for geometrical structures as establishing relations between points, lines, and surfaces) to an abstract level, enabling the construction of formal frameworks whose sentence-forms in turn do not refer to these geometrical primitives and relations, but operate as generic structures unbound from intuitive postulates. This is the birth of what can properly be called a ‘modern axiomatics’, following the publication of Hilbert’s Foundations of Geometry in 1902.
. Formal Hilbertian axiomatics – In which an axiom-system E functions as a meta-language for a language L, such that (decidable) well-formed formulas (rather than sentence-forms) of L allow the identification of a subset A whose elements are the well-formed formulas that constitute axioms of E. Then, a set of derivation rules R is given to specify the derivation of other formulas from the axioms. The axiom structure is then that of a calculus, which can be expressed in the form of the triad E = <S, A, R>.
It is at this level of axiomatization that model structures may be assigned to the formulas of the axiomatized theory, as providing a domain of interpretation of its syntax, which endow these with semantic content. The consistency of the formal theory, at this point, is bound among other things to the transparently determinable criteria of non-contradictoriness, decidability and completeness. A model is then an interpretation of S in which all of the axioms specified by E are true.
. Informal set-theoretical axiomatics – In which axioms are no longer sentences, sentence-forms, or formulas, but become the elements and parts of a set-theoretical statement, and achieve an informal axiomatization by means of the definition of a set-theoretical predicate. For, as Suppes argues, to axiomatize a theory is to define a predicate in terms of the notions of set theory, e.g. an interpretation of the set-theoretical predicate “is a group” for group theory. Accordingly, models are at this stage simply the entities that are said to satisfy a set-theoretical predicate.43
. Formal set-theoretical axiomatics – In which one defines an explicit concept of an axiom structure as the counterpart of its informal axiomatization. While informal set-theoretical axiomatic theories use the concepts of naïve set theory as the basic means to construct the predicates of a syntactic structure, in a formal axiomatics the predicate is defined within a formal structure of set theory, i.e. according to the axioms and principles of the Zermelo-Fraenkel axiomatization of set theory.
. Network-Coherence theories, in contrast, encode inferences which do not behave in a linear-hierarchical way. As Puntel argues, the more comprehensive a theory is, the less probable it is that it can be articulated in accordance with the axiomatic-theory form. Network-coherence theoretical approaches are thus used, among other things, for the inter-theoretical organization of theories, which may themselves be axiomatic or not.
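The Suppes-style procedure of axiomatizing a theory by defining a set-theoretical predicate can be illustrated computationally. The following sketch, an illustrative assumption restricted to finite carriers rather than Suppes’ own formulation, defines the predicate “is a group” and checks candidate structures against it; a model of group theory is then any pair <G, op> satisfying the predicate:

```python
# Sketch of axiomatization as set-theoretical predicate definition:
# 'is a group' checked extensionally over a finite carrier set.

from itertools import product

def is_group(G, op):
    """G: finite set; op: dict mapping (a, b) -> a*b."""
    # closure: the operation stays within the carrier
    if any(op[(a, b)] not in G for a, b in product(G, G)):
        return False
    # associativity: (a*b)*c == a*(b*c)
    if any(op[(op[(a, b)], c)] != op[(a, op[(b, c)])]
           for a, b, c in product(G, G, G)):
        return False
    # identity element: e*a == a == a*e for all a
    identities = [e for e in G if all(op[(e, a)] == a == op[(a, e)] for a in G)]
    if not identities:
        return False
    e = identities[0]
    # inverses: every a has some b with a*b == e
    return all(any(op[(a, b)] == e for b in G) for a in G)

# Z/2Z under addition mod 2 satisfies the predicate...
Z2 = {0, 1}
add = {(a, b): (a + b) % 2 for a, b in product(Z2, Z2)}
print(is_group(Z2, add))  # True
# ...while the same carrier under 'max' does not (1 has no inverse)
print(is_group(Z2, {(a, b): max(a, b) for a, b in product(Z2, Z2)}))  # False
```

The contrast between the two candidate structures makes the set-theoretical point vivid: the predicate, not any particular sentence-form, carries the axiomatization, and models are just whatever satisfies it.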
It is important to understand how theoretical practice, in both its axiomatic and coherence expressions, plugs back into the Sellarsian understanding of the functional integration between inferential reasoning and observation reports: statements within formalized or rule-governed languages not only work to make explicit already existing norms of material inference, but can themselves be constructed only eventually to become materially-inferentially solicited. As Sellars noted, this means that the distinction between theory and observation is fundamentally epistemic and not ontological, so that theoretical statements that have only inferential uses may acquire observational, non-inferential roles; e.g. we go from postulating the existence of Pluto theoretically to explain perturbations in the orbits of the outer planets, to being able to make direct non-inferential observational reports about Pluto. This means that formal inference does not only make already endorsed material inferences explicit, but furthermore makes them possible, as part of the articulation of formal theories and the descriptive-predictive practices intrinsic to the labor of empirical science.
Theory does not derive from but shapes perceptual capacities; inferential practices affect not only our non-discursive agential possibilities through deliberation and circumspect action, but also our capacities to respond non-inferentially to sensory inputs. The required amplification of the Sellarsian account of perception, inference, action and circumspection allows us to better grasp the way in which theory not only obtains upon the malfunction of our ‘circumspect’ doings, but furthermore becomes the means by virtue of which we amplify and modify our capacities to perceive, reason, and act upon the world in turn, so that knowing-that also inculcates new forms of know-how, e.g. just as we go from reading instructions to assembling Ikea beds, we go from making inferences about Pluto to making empirical reports about it. To say that inference amplifies and determines what is possible in action and perception is also to notice that our capacity to make perceptual reports is relative to the inferential norms and roles that an agent undertakes.
It is clear that, like any kind of ‘transition’ in the pragmatic functional schema derived from Sellars, theory is itself a practice, the practice of theory. And it is likewise clear that the elaboration of a rigorous concept of practice itself, which thematizes the practical structure of theoretical activity, is a kind of theoretical practice itself, i.e. pragmatics is a theory of practice, including the practice of theory. The functional characterization of rationality and intelligence in terms of distinct forms of inferentially mediated practical abilities becomes thus available quite late, not as a kind of transparent insight into our phenomenological givenness, but as part of a meta-theoretical vocabulary within which one discerns and articulates relations between discursive, perceptual and practical abilities or functions. Through meta-theoretic systematization, we associate and operationalize formal systems and articulate divergent vocabularies, bridging theories to substructural domains ranging over various data, including the binding of representational capacities from indexical pre-conceptual representations to the high-theoretical mapping of objective modal structures, i.e. we may reach the point where Thought as a whole makes itself explicit in its relation to Being.
With this said, in Puntel’s account it is the coherence methodology which allows for theoretical articulation into systematic networks of theories, by uniting frameworks which are not thus liable, however provisionally, to hierarchical-linear inter-theoretical organization. In general, only axiomatic theories within the domains of ‘pure mathematics’ or fundamental physics behave monotonically, so that axiomatization, where possible, pragmatically allows for maximal predictive purchase through monotonic inferential control, while coherence and network approaches provide a lever to encode more complex and context-sensitive inferential relations. So, at the next level of semantic ascent, we obtain…
(6) – The Systematization of Theories: Interrelation of component theories into increasingly comprehensive theories (holistic theoretical networks). Formal dialectical integration is but the production of a theoretical network which weaves domains, vocabularies, and generic structures into a comprehensive system. It is evident that these techniques of theoretical synthesis, capable of providing us with a meta-theoretic apparatus to render the mediations between theories explicit and to render generic epistemic goals tractable, are continuous with the project of formulating a dialectical conceptual framework to understand the relations between natural and formal vocabularies, and the distinct domains of being encoded thereby. We have already seen the way in which information-theoretic accounts become amenable to different forms of integration, but the process of systematization encompasses investigations of varying scopes, making use of different formal-conceptual tools: Ladyman and Ross’ ontic structural realist metaphysics of pure patterns as a naturalist reading of information theory that binds fundamental physics to the special sciences; Brandom’s formal semantic ‘pragmatic meta-vocabulary’ to map relations of semantic and pragmatic dependence between vocabularies; Puntel’s coherence-network ‘global systematics’ as a theoretical framework that thinks the ‘universe of discourse’, etc.
At the same time, such meta-theoretic techniques of synthesis map the intricate relations between the formal mathematical tools and registers which constitute the ‘language of theories’ as such – as is the case of Uwe Petersen’s dialectical logic, or Fernando Zalamea’s Peircean approach to pragmatics for a ‘synthetic philosophy of contemporary mathematics’ in which one sees an “…integration of diagrams, correlations, modalities, contexts and frontiers between the world and its various interpretants… Far from being the mere study of utilitarian correlations in practical contexts of action-reaction…pragmatics aims to reintegrate the differential fibers of the world, explicitly inserting the broad relational and modal spectrum of fibers into the investigation as a whole…”44
Similarly, in pursuing a phenomenology of worlds, Badiou characterizes the ‘topological’ investigation into distinct mathematical ‘universes’ as an investigation into the logic of ‘possible worlds’, where the latter will now come to stand as not only mathematical discursive domains, but the objective structuration of all sorts of non-discursive artistic, political, amorous and scientific ‘situations’:
“What topos theory offers is a description of possible mathematical universes. Its method employs definitions and schemas, and a geometric synopsis of its resources. It is tantamount to an inspection of Leibniz’s God: a categorical journey through thinkable worlds, their kinds and distinctive features. It ascertains that each universe bears its own internal logic. The theory establishes the general correlations between ontological features of these universes and the characterization of their logic. But it does not decide on a particular universe. Unlike Leibniz’s God, we do not have any reason to consider some such mathematical universe as the best of possible universes.”45
(7) – Universal Assessment of Systems: Evaluation of a comprehensive system or network of theories. A global systematic philosophical system allows us to develop an appraisal not only of individual theories intrinsically, with regard to internal coherence, but to assess meta-theoretically the organization of theories in their systematic placement. The scope of such integral assessment of course depends on the scope of the aims regulating the meta-theoretic enterprise. For instance, one might assess how formalized empirical scientific vocabularies stand in diachronic placement, in accordance with prospective normative regulative criteria of epistemic success (e.g. representational isomorphy for descriptive-empirical theories) and retrospective assessment, evaluating structure-preservation and measurement-refinement between theories and successor theories.46 The following diagram provides a schematic representation of the terrain:
The choice of a base meta-vocabulary in any such synthetic approach used to characterize the various relations between vocabularies and structures leads to predictable iterations of well-known epistemological questions about methodological priority and expediency. Regardless, the capacity of thought to undertake a systematic exploration of itself and the world, through ever more exacting procedures for mapping the mediations and relations between theoretical structures and their corresponding empirical substructures, becomes the essential lever by virtue of which representational function frees itself not only from the ‘here and now’, through the explicitation of distinct pragmatic-theoretic modalities encoded as implicit norms, but also constructively aspires to capture its own operational unfolding and structure, mapping the objective modal structure of the world, including the structural dynamics of a synthetic intelligence which eventually promises to go beyond its contingently evolved, materially constrained forms of intuition as well as its operationally constrained forms of cognition.
In this process of increasing articulation and abstraction of theoretical and practical rationality through a systematizing process of conceptual synthesis, a sapient system extends its capacities for operating between theories and models, using set-theoretical structures as domains of interpretation for syntactical bases. Insofar as it amplifies the navigational scope of a cognitive system, and insofar as it transforms the material basis to which it is causally bound, conceptual labor is the instrument for the production of a new aesthetic as well as a new world. Concepts are causally anchored in, if not derivable from, sensible experience; but the latter is both represented and transformed by the former. So, even though both sensation and conception are autonomous facultative powers, our conceptual capacities to perceive, infer and judge that things are thus-and-so enable intervention into the ontological fabric of intuition so as to amplify what we can sense of things, as well as to intervene in the world so as to change what things there are to be sensed.