# Currying

In mathematics and computer science, currying is the technique of translating the evaluation of a function that takes multiple arguments (or a tuple of arguments) into evaluating a sequence of functions, each with a single argument. Currying is related to, but not the same as, partial application.

Currying is useful in both practical and theoretical settings. In functional programming languages, and many others, it provides a way of automatically managing how arguments are passed to functions and exceptions. In theoretical computer science, it provides a way to study functions with multiple arguments in simpler theoretical models which provide only one argument. The most general setting for the strict notion of currying and uncurrying is in the closed monoidal categories; this is interesting because it underpins a vast generalization of the Curry-Howard correspondence of proofs and programs to a correspondence with many other structures, including quantum mechanics, cobordisms and string theory.[1] It was introduced by Gottlob Frege,[2] developed by Moses Schönfinkel,[3][4][5] and further developed by Haskell Curry.[6][7]

Uncurrying is the dual transformation to currying, and can be seen as a form of defunctionalization. It takes a function f whose return value is another function g, and yields a new function f′ that takes as parameters the arguments for both f and g, and returns, as a result, the application of f and subsequently, g, to those arguments. The process can be iterated.

## Motivation

Currying provides a way for working with functions that take multiple arguments, and using them in frameworks where functions might take only one argument. For example, some analytical techniques can only be applied to functions with a single argument. Practical functions frequently take more arguments than this. Frege showed that it was sufficient to provide solutions for the single argument case, as it was possible to transform a function with multiple arguments into a chain of single-argument functions instead. This transformation is the process now known as currying.[8] All "ordinary" functions that might typically be encountered in mathematical analysis or in computer programming can be curried. However, there are categories in which currying is not possible; the most general category which does allow currying is the closed monoidal category.

Some programming languages almost always use curried functions to achieve multiple arguments; notable examples are ML and Haskell, where in both cases all functions have exactly one argument. This property is inherited from lambda calculus, where multi-argument functions are usually represented in curried form.

Currying is related to, but not the same as partial application. In practice, the programming technique of closures can be used to perform partial application and a kind of currying, by hiding arguments in an environment that travels with the curried function.

### Partial application

Currying resembles the process of evaluating a function of multiple variables, when done by hand, on paper, being careful to show all of the steps.

For example, given the function ${\displaystyle f(x,y)=y/x}$:

1. To evaluate ${\displaystyle f(2,3)}$, first replace ${\displaystyle x}$ with ${\displaystyle 2}$.
2. Since the result is a function of ${\displaystyle y}$, this new function ${\displaystyle g(y)}$ can be defined as ${\displaystyle g(y)=f(2,y)=y/2}$.
3. Next, replace the ${\displaystyle y}$ argument with ${\displaystyle 3}$, producing ${\displaystyle g(3)=f(2,3)=3/2}$.

On paper, using classical notation, this is usually done all in one step. However, each argument can be replaced sequentially as well. Each replacement results in a function taking exactly one argument.

This example is somewhat flawed, in that currying, while similar to partial function application, is not the same (see below).
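The by-hand steps above can be sketched in Python (a minimal illustration; the names f and g follow the example):

```python
# The example function: f(x, y) = y / x.
def f(x, y):
    return y / x

# Step 1: fix x = 2, leaving a function of y alone.
def g(y):
    return f(2, y)   # g(y) = y / 2

# Step 2: supply y = 3.
assert g(3) == f(2, 3) == 1.5
```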

### Illustration

Currying is a method for producing a chain of functions, each taking exactly one argument. The construction is achieved by "hiding" all but one argument in another, new, curried function, whose job it is to return functions of the remaining arguments. This is explicitly (but informally) illustrated next.

Given a function f that takes two arguments x and y, that is,

${\displaystyle f(x,y)}$

one may then construct a new function hx, related to the original f. This form takes only one argument, y, and, given that argument, hx(y) returns f(x,y). That is,

${\displaystyle h_{x}(y)=f(x,y)}$.

Here, it should be clear that the subscript x on h is a notational device used to hide or salt away one of the arguments, putting it to the side, so that one gets to work with a function having only one argument. Currying formulates a functional abstraction, for such a notational trick.

To complete such a trick, a map or function constructor must then be devised using the common notation ${\displaystyle \mapsto }$, that signifies actual abstract action. Notation such as ${\displaystyle y\mapsto z}$ then denotes a function, mapping an argument y to a result z.

Now consider erasing the subscript x from the notation hx. This yields the curried function h: given an x, it returns as its "value" the function hx, which is itself a function constructor mapping an argument y to the result f(x,y), notationally ${\displaystyle y\mapsto f(x,y)}$. That is,

${\displaystyle h_{x}=(y\mapsto f(x,y))}$,

seen with different (but equivalent) representative notation,

${\displaystyle h(x)=(y\mapsto f(x,y))}$

The function h itself can now be represented in a fashion notationally similar to hx, and written as

${\displaystyle h=(x\mapsto (y\mapsto f(x,y)))}$

This expression accounts for all of the originally stated functional arguments. Given the above, the act of currying can itself be recognized as a function that, given an arbitrary f, produces the corresponding h. That is,

${\displaystyle {\text{curry}}=(f\mapsto h)}$

or, equivalently

${\displaystyle {\text{curry}}(f)=h}$

This illustrates the fundamental, essential nature of currying: it is a mechanism for setting aside an argument, by bundling it into a function that returns another function involving that argument. That is, given a function f that returns a "value", one "constructs" a new function h that returns a function involving f. A different way of understanding currying is to realize that it is just an algebraic game, a syntactic rearranging of symbols. One does not ask what the "meaning" of the symbols is; one only agrees to the rules for their rearrangement. To see this, note that the original function f itself may be written as

${\displaystyle f=((x,y)\mapsto f(x,y))}$

Comparing to the function h above, one sees that the two forms involve a rearrangement of parentheses, together with the conversion of a comma into an arrow.
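This rearrangement can be sketched in Python, where curry builds h from f as described above (an illustrative sketch, not a library function):

```python
# The running example: f(x, y) = y / x.
def f(x, y):
    return y / x

# curry(f) = h, where h(x) returns the one-argument function y ↦ f(x, y).
def curry(f):
    def h(x):
        return lambda y: f(x, y)
    return h

h = curry(f)
assert h(2)(3) == f(2, 3)   # both evaluate to 3/2
```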

Returning to the earlier example,

${\displaystyle f(x,y)={\frac {y}{x}}}$

one then has that,

${\displaystyle g=h_{2}=h(2)=(y\mapsto f(2,y))}$

as the curried equivalent of the example above. Adding an argument to g then gives

${\displaystyle g(y)=h_{2}(y)=h(2)(y)=f(2,y)={\frac {y}{2}}}$

and

${\displaystyle g(3)=h_{2}(3)=h(2)(3)=f(2,3)={\frac {3}{2}}.}$

The peeling away of arguments might be better understood by considering a function of, say, four arguments:

${\displaystyle f(x,y,z,w)}$

Proceeding as above, one is led to the form

${\displaystyle h_{x}=((y,z,w)\mapsto f(x,y,z,w))}$

which can be applied to a triple to get

${\displaystyle h_{x}(y,z,w)=f(x,y,z,w)}$.

The curried form is then properly written as

${\displaystyle h=(x\mapsto ((y,z,w)\mapsto f(x,y,z,w)))}$

By continuing to play the algebraic game of re-arranging symbols, one is eventually led to the fully curried form

${\displaystyle x\mapsto (y\mapsto (z\mapsto (w\mapsto f(x,y,z,w))))}$

It is commonly understood that the arrow operator is right-associative, and so most of the parentheses above are superfluous and can be removed without altering the meaning. Thus, it is common to write

${\displaystyle x\mapsto y\mapsto z\mapsto w\mapsto f(x,y,z,w)}$

for the fully curried form of f.
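In a language with first-class functions, the fully curried form is just a chain of nested single-argument functions; a minimal Python sketch:

```python
# A four-argument function, returning its arguments as a tuple for illustration.
def f(x, y, z, w):
    return (x, y, z, w)

# x ↦ (y ↦ (z ↦ (w ↦ f(x, y, z, w)))): each function consumes one argument.
f_curried = lambda x: lambda y: lambda z: lambda w: f(x, y, z, w)

assert f_curried(1)(2)(3)(4) == f(1, 2, 3, 4) == (1, 2, 3, 4)
```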

## History

The name "currying", coined by Christopher Strachey in 1967, is a reference to logician Haskell Curry. The alternative name "Schönfinkelisation" has been proposed as a reference to Moses Schönfinkel.[9] In the mathematical context, the principle can be traced back to work in 1893 by Frege.

## Definition

Currying is most easily understood by starting with an informal definition, which can then be molded to fit many different domains. First, there is some notation to be established. Let ${\displaystyle f\colon X\to Y}$ denote a function ${\displaystyle f}$ from ${\displaystyle X}$ to ${\displaystyle Y}$. The notation ${\displaystyle X\to Y}$ then denotes all functions from ${\displaystyle X}$ to ${\displaystyle Y}$. Here, ${\displaystyle X}$ and ${\displaystyle Y}$ may be sets, or they may be types, or they may be other kinds of objects, as explored below. Let ${\displaystyle X\times Y}$ denote ordered pairs, that is, the Cartesian product.

Given a function ${\displaystyle f}$ of type ${\displaystyle f\colon (X\times Y)\to Z}$, currying constructs or makes a new function ${\displaystyle {\text{curry}}(f)\colon X\to (Y\to Z)}$. That is, ${\displaystyle {\text{curry}}(f)}$ takes an argument of type ${\displaystyle X}$ and returns a function of type ${\displaystyle Y\to Z}$. Uncurrying is the reverse transformation, and is most easily understood in terms of its right adjoint, apply.
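As a sketch in Python, using closures to stand in for the typed functions (the two-argument add is an illustrative example):

```python
def curry(f):
    """(X × Y) → Z  becomes  X → (Y → Z)."""
    return lambda x: lambda y: f(x, y)

def uncurry(g):
    """X → (Y → Z)  becomes  (X × Y) → Z; the reverse transformation."""
    return lambda x, y: g(x)(y)

add = lambda x, y: x + y
assert curry(add)(1)(2) == 3
assert uncurry(curry(add))(1, 2) == 3   # uncurry undoes curry
```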

### Set theory

In set theory, the notation ${\displaystyle Y^{X}}$ is used to denote the set of functions from the set ${\displaystyle X}$ to the set ${\displaystyle Y}$. Currying is the natural correspondence between the set ${\displaystyle A^{B\times C}}$ of functions from ${\displaystyle B\times C}$ to ${\displaystyle A}$, and the set ${\displaystyle \left(A^{C}\right)^{B}}$ of functions from ${\displaystyle B}$ to the set of functions from ${\displaystyle C}$ to ${\displaystyle A}$. Indeed, it is this natural correspondence that justifies the exponential notation for the set of functions. In the category of sets, the object ${\displaystyle Y^{X}}$ is called the exponential object.
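For finite sets the correspondence can be verified by counting, since the number of functions from S to T is |T| raised to the power |S|. A small Python check (the set sizes are chosen arbitrarily):

```python
# |A| = 2, |B| = 3, |C| = 2.
size_A, size_B, size_C = 2, 3, 2

# Number of functions S → T is |T| ** |S|.
n_left  = size_A ** (size_B * size_C)        # |A^(B×C)|
n_right = (size_A ** size_C) ** size_B       # |(A^C)^B|
assert n_left == n_right == 64
```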

### Function spaces

In the theory of function spaces, such as in functional analysis or homotopy theory, one is commonly interested in continuous functions between topological spaces. One writes ${\displaystyle {\text{Hom}}(X,Y)}$ (the Hom functor) for the set of all functions from ${\displaystyle X}$ to ${\displaystyle Y}$, and uses the notation ${\displaystyle Y^{X}}$ to denote the subset of continuous functions. Here, ${\displaystyle {\text{curry}}}$ is the bijection

${\displaystyle {\text{curry}}:{\text{Hom}}(X\times Y,Z)\to {\text{Hom}}(X,{\text{Hom}}(Y,Z)),}$

while uncurrying is the inverse map. If the set ${\displaystyle Y^{X}}$ of continuous functions from ${\displaystyle X}$ to ${\displaystyle Y}$ is given the compact-open topology, and if the space ${\displaystyle Y}$ is locally compact Hausdorff, then ${\displaystyle {\text{curry}}}$ is a continuous function, and is a homeomorphism. This is also the case when ${\displaystyle X}$, ${\displaystyle Y}$ and ${\displaystyle Y^{X}}$ are compactly generated,[10](chapter 5)[11] although there are more cases.[12][13]

This result motivates the exponential notation

${\displaystyle {\text{curry}}:Z^{X\times Y}\to (Z^{Y})^{X}}$

which is sometimes called the exponential law.[14] One useful corollary is that a function is continuous if and only if its curried form is continuous. Another important result is that the application map, usually called "evaluation" in this context, is continuous (note that eval is a strictly different concept in computer science.) That is,

${\displaystyle {\begin{aligned}{\text{eval}}\colon Y^{X}\times X&\to Y\\(f,x)&\mapsto f(x)\end{aligned}}}$

is continuous when ${\displaystyle Y^{X}}$ is given the compact-open topology and ${\displaystyle X}$ is locally compact Hausdorff.[15] These two results are central for establishing the continuity of homotopy, i.e. when ${\displaystyle X}$ is the unit interval ${\displaystyle I}$, so that ${\displaystyle Z^{I\times Y}\cong (Z^{Y})^{I}}$ can be thought of as either a homotopy of two functions from ${\displaystyle Y}$ to ${\displaystyle Z}$, or, equivalently, a single (continuous) path in ${\displaystyle Z^{Y}}$.

### Algebraic topology

In algebraic topology, currying serves as an example of Eckmann-Hilton duality, and, as such, plays an important role in a variety of different settings. For example, loop space is adjoint to reduced suspensions; this is commonly written as

${\displaystyle [\Sigma X,Z]\approxeq [X,\Omega Z]}$

where ${\displaystyle [A,B]}$ is the set of homotopy classes of maps ${\displaystyle A\rightarrow B}$, and ${\displaystyle \Sigma A}$ is the suspension of A, and ${\displaystyle \Omega A}$ is the loop space of A. In essence, the suspension ${\displaystyle \Sigma X}$ can be seen as the cartesian product of ${\displaystyle X}$ with the unit interval, modulo an equivalence relation to turn the interval into a loop. The curried form then maps the space ${\displaystyle X}$ to the space of functions from loops into ${\displaystyle Z}$, that is, from ${\displaystyle X}$ into ${\displaystyle \Omega Z}$.[15] Then ${\displaystyle {\text{curry}}}$ is the adjoint functor that maps suspensions to loop spaces, and uncurrying is the dual.[15]

The duality between the mapping cone and the mapping fiber (cofibration and fibration)[10](chapters 6,7) can be understood as a form of currying, which in turn leads to the duality of the long exact and coexact Puppe sequences.

In homological algebra, the relationship between currying and uncurrying is known as tensor-hom adjunction. Here, an interesting twist arises: the Hom functor and the tensor product functor might not lift to an exact sequence; this leads to the definition of the Ext functor and the Tor functor.

### Domain theory

In order theory, that is, the theory of lattices of partially ordered sets, ${\displaystyle {\text{curry}}}$ is a continuous function when the lattice is given the Scott topology.[16] Scott-continuous functions were first investigated in the attempt to provide a semantics for lambda calculus (as ordinary set theory is inadequate to do this). More generally, Scott-continuous functions are now studied in domain theory, which encompasses the study of denotational semantics of computer algorithms. Note that the Scott topology is quite different from many common topologies one might encounter in the category of topological spaces; the Scott topology is typically finer, and is not sober.

The notion of continuity makes its appearance in homotopy type theory, where, roughly speaking, two computer programs can be considered to be homotopic, i.e. compute the same results, if they can be "continuously" refactored from one to the other.

### Lambda calculi

In theoretical computer science, currying provides a way to study functions with multiple arguments in very simple theoretical models, such as the lambda calculus, in which functions only take a single argument. Consider a function ${\displaystyle f(x,y)}$ taking two arguments, and having the type ${\displaystyle (X\times Y)\to Z}$, which should be understood to mean that x must have the type ${\displaystyle X}$, y must have the type ${\displaystyle Y}$, and the function itself returns the type ${\displaystyle Z}$. The curried form of f is defined as

${\displaystyle {\text{curry}}(f)=\lambda x.(\lambda y.(f(x,y)))}$

where ${\displaystyle \lambda }$ is the abstractor of lambda calculus. Since curry takes, as input, functions with the type ${\displaystyle (X\times Y)\to Z}$, one concludes that the type of curry itself is

${\displaystyle {\text{curry}}:((X\times Y)\to Z)\to (X\to (Y\to Z))}$

The ${\displaystyle \to }$ operator is often considered right-associative, so the curried function type ${\displaystyle X\to (Y\to Z)}$ is often written as ${\displaystyle X\to Y\to Z}$. Conversely, function application is considered to be left-associative, so that ${\displaystyle f(x,y)}$ is equivalent to

${\displaystyle (({\text{curry}}(f)\;x)\;y)={\text{curry}}(f)\;x\;y}$.

That is, the parentheses are not required to disambiguate the order of application.

Curried functions may be used in any programming language that supports closures; however, uncurried functions are generally preferred for efficiency reasons, since the overhead of partial application and closure creation can then be avoided for most function calls.

### Type theory

In type theory, the general idea of a type system in computer science is formalized into a specific algebra of types. For example, when writing ${\displaystyle f\colon X\to Y}$, the intent is that ${\displaystyle X}$ and ${\displaystyle Y}$ are types, while the arrow ${\displaystyle \to }$ is a type constructor, specifically, the function type or arrow type. Similarly, the Cartesian product ${\displaystyle X\times Y}$ of types is constructed by the product type constructor ${\displaystyle \times }$.

The type-theoretical approach is expressed in programming languages such as ML and the languages derived from and inspired by it: CaML, Haskell and F#.

The type-theoretical approach provides a natural complement to the language of category theory, as discussed below. This is because categories, and specifically, monoidal categories, have an internal language, with simply-typed lambda calculus being the most prominent example of such a language. It is important in this context, because it can be built from a single type constructor, the arrow type. Currying then endows the language with a natural product type. The correspondence between objects in categories and types then allows programming languages to be re-interpreted as logics (via Curry-Howard correspondence), and as other types of mathematical systems, as explored further, below.

### Logic

Under the Curry-Howard correspondence, the existence of currying and uncurrying is equivalent to the logical theorem ${\displaystyle (A\land B)\to C\Leftrightarrow A\to (B\to C)}$, as tuples (product type) corresponds to conjunction in logic, and function type corresponds to implication.
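Under the ordinary truth-table semantics, this equivalence (sometimes called exportation) can be checked exhaustively; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    """Material implication: p → q is ¬p ∨ q."""
    return (not p) or q

# (A ∧ B) → C  ⇔  A → (B → C), checked over all eight truth assignments.
for a, b, c in product([False, True], repeat=3):
    assert implies(a and b, c) == implies(a, implies(b, c))
```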

The exponential object ${\displaystyle Q^{P}}$ in the category of Heyting algebras is normally written as material implication ${\displaystyle P\to Q}$. Distributive Heyting algebras are Boolean algebras, and the exponential object has the explicit form ${\displaystyle \neg P\lor Q}$, thus making it clear that the exponential object really is material implication.[17]

### Category theory

The above notions of currying and uncurrying find their most general, abstract statement in category theory. Currying is a universal property of an exponential object, and gives rise to an adjunction in cartesian closed categories. That is, there is a natural isomorphism between the morphisms from a binary product ${\displaystyle f\colon (X\times Y)\to Z}$ and the morphisms to an exponential object ${\displaystyle g\colon X\to Z^{Y}}$. This generalizes to a broader result that the tensor product and the Hom functor are adjoint in closed monoidal categories.[18] Currying is the statement that the tensor product and the internal Hom are adjoint functors; that is, there is a natural transformation:

${\displaystyle \hom(A\otimes B,C)\cong \hom(A,B\Rightarrow C).}$

Here, hom is written lower case, to indicate that it is the internal hom functor, distinguishing it from the external (upper-case) Hom. For the category of sets, the two are the same. When the product is the cartesian product, then the internal hom ${\displaystyle B\Rightarrow C}$ becomes the exponential object ${\displaystyle C^{B}}$.

Currying can break down in one of two ways. One is if a category is not closed, and thus lacks an internal hom functor (possibly because there is more than one choice for such a functor). Another way is if it is not monoidal, and thus lacks a product (that is, lacks a way of writing down pairs of objects). Categories that do have both products and internal homs are exactly the closed monoidal categories.

The setting of cartesian closed categories is sufficient for the discussion of classical logic; the more general setting of closed monoidal categories is suitable for quantum computation.[19]

The difference between these two is that the product for cartesian categories (such as the category of sets, complete partial orders or Heyting algebras) is just the Cartesian product; it is interpreted as an ordered pair of items (or a list). Simply typed lambda calculus is the internal language of cartesian closed categories; and it is for this reason that pairs and lists are the primary types in the type theory of Lisp, Scheme and many functional programming languages.

By contrast, the product for monoidal categories (such as Hilbert space and the vector spaces of functional analysis) is the tensor product. The internal language of such categories is linear logic, a form of quantum logic; the corresponding type system is the linear type system. Such categories are suitable for describing entangled quantum states, and, more generally, allow a vast generalization of the Curry-Howard correspondence to quantum mechanics, to cobordisms in algebraic topology, and to string theory.[1] The linear type system, and linear logic are useful for describing synchronization primitives, such as mutual exclusion locks, and the operation of vending machines.

## Contrast with partial function application

Currying and partial function application are often conflated.[20] One of the significant differences between the two is that a call to a partially applied function returns the result right away, not another function down the currying chain; this distinction can be illustrated clearly for functions whose arity is greater than two.[21]

Given a function of type ${\displaystyle f\colon (X\times Y\times Z)\to N}$, currying produces ${\displaystyle {\text{curry}}(f)\colon X\to (Y\to (Z\to N))}$. That is, while an evaluation of the first function might be represented as ${\displaystyle f(1,2,3)}$, evaluation of the curried function would be represented as ${\displaystyle f_{\text{curried}}(1)(2)(3)}$, applying each argument in turn to a single-argument function returned by the previous invocation. Note that after calling ${\displaystyle f_{\text{curried}}(1)}$, we are left with a function that takes a single argument and returns another function, not a function that takes two arguments.

In contrast, partial function application refers to the process of fixing a number of arguments to a function, producing another function of smaller arity. Given the definition of ${\displaystyle f}$ above, we might fix (or 'bind') the first argument, producing a function of type ${\displaystyle {\text{partial}}(f)\colon (Y\times Z)\to N}$. Evaluation of this function might be represented as ${\displaystyle f_{\text{partial}}(2,3)}$. Note that the result of partial function application in this case is a function that takes two arguments.
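The contrast can be made concrete with a short Python sketch (the three-argument function f here is illustrative):

```python
from functools import partial

def f(x, y, z):
    return (x, y, z)

# Curried: each call consumes ONE argument and returns another function.
f_curried = lambda x: lambda y: lambda z: f(x, y, z)
g = f_curried(1)          # g takes a single argument and returns a function
assert g(2)(3) == (1, 2, 3)

# Partially applied: fixing x yields a function of the remaining TWO arguments.
f_partial = partial(f, 1)
assert f_partial(2, 3) == (1, 2, 3)
```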

Intuitively, partial function application says "if you fix the first argument of the function, you get a function of the remaining arguments". For example, if function div stands for the division operation x/y, then div with the parameter x fixed at 1 (i.e., div 1) is another function: the same as the function inv that returns the multiplicative inverse of its argument, defined by inv(y) = 1/y.

The practical motivation for partial application is that very often the functions obtained by supplying some but not all of the arguments to a function are useful; for example, many languages have a function or operator similar to plus_one. Partial application makes it easy to define these functions, for example by creating a function that represents the addition operator with 1 bound as its first argument.

Partial application can be seen as evaluating a curried function at a fixed point, e.g. given ${\displaystyle f\colon (X\times Y\times Z)\to N}$ and ${\displaystyle a\in X}$ then ${\displaystyle {\text{curry}}({\text{partial}}(f)_{a})(y)(z)={\text{curry}}(f)(a)(y)(z)}$ or simply ${\displaystyle {\text{partial}}(f)_{a}={\text{curry}}_{1}(f)(a)}$ where ${\displaystyle {\text{curry}}_{1}}$ curries f's first parameter.
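This identity can be checked with a small Python sketch, where curry_1 (an illustrative helper, not a library function) curries only the first parameter:

```python
from functools import partial

def f(x, y, z):
    return x * 100 + y * 10 + z

# curry_1: f : (X×Y×Z) → N  becomes  X → ((Y×Z) → N).
curry_1 = lambda f: lambda x: lambda y, z: f(x, y, z)

a = 7
for y, z in [(1, 2), (3, 4)]:
    # partial(f)_a agrees with curry_1(f)(a) on every remaining argument pair.
    assert partial(f, a)(y, z) == curry_1(f)(a)(y, z)
```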

Thus, partial application can be reduced to evaluating a curried function at a fixed point; conversely, evaluating a curried function at a fixed point is (trivially) a partial application. Further, note that, given any function ${\displaystyle f(x,y)}$, a function ${\displaystyle g(y,x)}$ may be defined such that ${\displaystyle g(y,x)=f(x,y)}$, so any argument can be moved into the first position. Thus, any partial application may be reduced to a single curry operation. As such, curry is more suitably defined as an operation which, in many theoretical cases, is applied recursively, but which is theoretically indistinguishable (when considered as an operation) from partial application.

So, a partial application can be defined as the objective result of a single application of the curry operator on some ordering of the inputs of some function.

## Notes

1. ^ a b John C. Baez and Mike Stay, "Physics, Topology, Logic and Computation: A Rosetta Stone", (2009) ArXiv 0903.0340 in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.
2. ^ Willard Van Orman Quine, introduction to "Bausteine der mathematischen Logik", pp. 305-316. Translated by Stefan Bauer-Mengelberg as "On the building blocks of mathematical logic" in Jean van Heijenoort (1967), A Source Book in Mathematical Logic, 1879-1931. Harvard University Press, pp. 355-66.
3. ^ Strachey, Christopher (2000). "Fundamental Concepts in Programming Languages". Higher-Order and Symbolic Computation. 13: 11-49. doi:10.1023/A:1010000313106. There is a device originated by Schönfinkel, for reducing operators with several operands to the successive application of single operand operators. (Reprinted lecture notes from 1967.)
4. ^ Reynolds, John C. (1998). "Definitional Interpreters for Higher-Order Programming Languages". Higher-Order and Symbolic Computation. 11 (4): 374. doi:10.1023/A:1010027404223. In the last line we have used a trick called Currying (after the logician H. Curry) to solve the problem of introducing a binary operation into a language where all functions must accept a single argument. (The referee comments that although "Currying" is tastier, "Schönfinkeling" might be more accurate.)
5. ^ Kenneth Slonneger and Barry L. Kurtz. Formal Syntax and Semantics of Programming Languages. 1995. p. 144.
6. ^ Henk Barendregt, Erik Barendsen, "Introduction to Lambda Calculus", March 2000, page 8.
7. ^ Curry, Haskell; Feys, Robert (1958). Combinatory logic. I (2 ed.). Amsterdam, Netherlands: North-Holland Publishing Company.
8. ^ Graham Hutton. "Frequently Asked Questions for comp.lang.functional: Currying". nott.ac.uk.
9. ^ I. Heim and A. Kratzer (1998). Semantics in Generative Grammar. Blackwell.
10. ^ a b J.P. May, A Concise Course in Algebraic Topology, (1999) Chicago Lectures in Mathematics ISBN 0-226-51183-9
11. ^
12. ^ P. I. Booth and J. Tillotson, "Monoidal closed, Cartesian closed and convenient categories of topological spaces", Pacific Journal of Mathematics, 88 (1980) pp.33-53.
13. ^
14. ^
15. ^ a b c Joseph J. Rotman, An Introduction to Algebraic Topology (1988) Springer-Verlag ISBN 0-387-96678-1 (See Chapter 11 for proof.)
16. ^ Barendregt, H.P. (1984). The Lambda Calculus. North-Holland. ISBN 0-444-87508-5. (See theorems 1.2.13, 1.2.14)
17. ^ Saunders Mac Lane and Ieke Moerdijk, Sheaves in Geometry and Logic (1992) Springer ISBN 0-387-97710-4 (See Chapter 1, pp.48-57)
18. ^
19. ^ Samson Abramsky and Bob Coecke, "A Categorical Semantics for Quantum Protocols."
20. ^
21. ^

## References

• Schönfinkel, Moses (1924). "Über die Bausteine der mathematischen Logik". Math. Ann. 92 (3-4): 305-316. doi:10.1007/BF01448013.
• Heim, Irene; Kratzer, Angelika (1998). Semantics in Generative Grammar. Malden: Blackwell Publishers.
