{"id":547,"date":"2007-11-15T10:55:34","date_gmt":"2007-11-15T10:55:34","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/11\/15\/tiptoeing-into-type-theory\/"},"modified":"2014-08-14T14:51:44","modified_gmt":"2014-08-14T18:51:44","slug":"tiptoeing-into-type-theory","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/11\/15\/tiptoeing-into-type-theory\/","title":{"rendered":"Tiptoeing into Type Theory"},"content":{"rendered":"<p> When Cantor&#8217;s set theory &#8211; what we now call naive set theory &#8211; was shown to have problems in the form of Russell&#8217;s paradox, there were many different attempts to salvage the theory. In addition to the axiomatic approaches that we&#8217;ve looked at (ZFC and NBG), there were attempts by changing the basis of set theory &#8211; discarding sets in favor of something similar, but with restrictions that avoid the problems of naive set theory. Today, I&#8217;m going to talk about an example of the latter approach, called <em>type theory<\/em>. Type theory is a very different approach from what we&#8217;ve seen before, and one which is particularly useful and interesting to people in my neck of the woods.<\/p>\n<p><!--more--><\/p>\n<p> What type theory does is create a new kind of collection object, called a <em>type<\/em>. The idea behind types is to create a stratification, similar to the idea behind ordered logics. In first order logic, you can only reason about primitive atoms, not about predicates. In second order logic, you can reason about atoms and first-order predicates, but not second-order predicates. And so on.<\/p>\n<p> In type theory, you do roughly the same kind of thing, but you do it even more strictly. A first-order type can only contain atoms &#8211; primitive values that are not types. (Some versions of type theory call these ur-elements.) A second order type can only contain first-order types as elements.  And so on. 
So types are strictly stratified: each level, or stratum, of types has as members only objects from the next lower stratum.

Using this idea of stratified types, we can define the simplest version of type theory, called **ST**, with a collection of axiom schemas. There's a convention for how the schemas are written. Each axiom makes statements about types at two levels: a base level and its successor. Variables that range over types of the base level are written as lower-case letters; variables that range over the next level up are written as lower-case letters followed by a prime. So if *a* is a variable ranging over level *n* types, *b′* is a variable ranging over level *n+1* types.

**ST** has four axiom schemas:

1. **Axiom of Identity**: *x = y ⇔ ∀z′[x ∈ z′ ⇔ y ∈ z′]*. Two values x and y are the same if and only if every type that contains x also contains y. It's interesting to compare this to set theory: it's exactly inverted. In set theory, two values are the same if they have the same members; in type theory, two values are the same if they're members of the same types.
2. **Axiom of Extensionality**: *∀x[x ∈ y′ ⇔ x ∈ z′] ⇒ y′ = z′*. This is very much the same idea as equality in set theory: two types are equal as long as they have the same members. So in type theory, we get both the upward and downward characterizations of equality.
3. **Axiom of Comprehension**: if φ is a first-order predicate, then *∃z′: ∀x[x ∈ z′ ⇔ φ(x)]*. This is basically the comprehension axiom of set theory, modified to include the stratification of types.
   It says that we can define a level *n+1* type using a predicate over level *n* values.
4. **Axiom of Infinity**: there exists a relation "<", ranging over pairs of atoms, which is *total*, *irreflexive*, *transitive*, and *strongly connected*, such that *∀x,y[x ≠ y ⇒ [x < y ∨ y < x]]*. The version of the axiom of infinity in type theory is a bear. Instead of being a direct construction, as it is in ZF, it's an indirect statement. What it says is that "<" is a total relation which imposes an unbounded total order on the atoms. Moreover, it requires the domain and codomain of "<" to be the same: if a value can appear on the *right* side of "<" (saying that there's something smaller than it), it must also appear on the *left* side of "<" (saying that there's something larger than it). For this relation over the atoms to be possible, there must be infinitely many atoms, and hence at least one infinite first-order type. So this *does* really assert the existence of an infinite type, but it does so indirectly, by requiring a relation which can only be satisfied by an infinite type. *(Note: originally I left "total" out of this definition; without totality, it doesn't necessarily imply the existence of an infinite set! Thanks to the commenters for pointing this out.)*

If you look at this structure (atoms, stratified types, and the axioms), it should be clear that most of the things we can do using axiomatic set theory can also be done in **ST**. You can't mix things like x and {x} the way you could in set theory, which does place some limits on you, but the fact that you're allowed to have non-type primitive atoms goes a long way towards making up for that.
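The first three axioms are easy to play with in a small finite model. The sketch below is my own illustration, not part of ST: it models atoms as strings and first-order types as frozensets over a tiny hypothetical universe `ATOMS` (real comprehension in ST is not restricted to a finite universe, of course; the axiom of infinity forbids exactly that).

```python
from itertools import combinations

# A finite toy model of ST's first two strata (illustration only):
# atoms are strings (level 0); a first-order type is a frozenset of atoms.
ATOMS = {"a", "b", "c"}

# Axiom of Comprehension: a predicate over level-n values defines a
# level n+1 type -- here, a predicate over atoms defines a first-order type.
def comprehend(phi):
    return frozenset(x for x in ATOMS if phi(x))

# Every first-order type over this universe (every subset of the atoms).
LEVEL1 = [frozenset(s) for n in range(len(ATOMS) + 1)
          for s in combinations(sorted(ATOMS), n)]

# Axiom of Identity: x = y iff every type containing x also contains y.
def identical(x, y):
    return all((x in t) == (y in t) for t in LEVEL1)

# Axiom of Extensionality: two types are equal iff they have the same
# members; frozenset equality is exactly that membership test.
vowels = comprehend(lambda x: x in "aeiou")
also_vowels = comprehend(lambda x: x == "a")
print(identical("a", "a"))    # True
print(identical("a", "b"))    # False: the type {"a"} separates them
print(vowels == also_vowels)  # True: both predicates carve out {"a"}
```

The last line shows extensionality doing its job: two different predicates that happen to pick out the same members define the same type.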
The downside is clear: while you can generally say the same things in set theory and in type theory, the statements are often a lot more complex in type theory.

Type theory is clearly more complicated than good old set theory. So what does that complexity buy you? Most importantly, it gets rid of Russell's paradox. Since no type can contain members of its own level, you can't construct anything like "the type of all types that don't contain themselves as members"; there's just no way of saying it. Axiomatic set theory does much the same thing, but it does so by, essentially, creating only two levels, proper classes and sets, and sharply limiting your ability to reason about proper classes. Type theory restricts the structures you can build at any level, but by doing so, it allows you to create a hierarchy of types, each of which is first class: capable of membership, and fully amenable to formal reasoning.

In addition, logic using type theory can be quite elegant. A lot of work in logic is done in terms of stratified logics, and type theory fits quite naturally with stratified logic. Plus, type theory and logic together are the basis of an awful lot of theoretical computer science. **ST** and its more powerful children form the basis of types in the λ-calculus, which in turn is the source of the whole idea of types in programming languages. In fact, the parametric type system that was added to Java not too long ago has its semantics formally specified using a relative of **ST**.
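To see the λ-calculus connection, here's a minimal type checker for the simply typed λ-calculus, the stratified-type idea carried over to programs. This is my own sketch (the term and type encodings are ad hoc), not ST itself and not any particular formal system's presentation.

```python
# A minimal type checker for the simply typed lambda-calculus (sketch).

def check(term, env):
    """Return the type of `term` in environment `env`, or raise TypeError.
    Terms: ("var", name) | ("lam", name, arg_type, body) | ("app", f, x).
    Types: "atom" for the base type, or ("->", t1, t2) for functions."""
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        _, name, arg_ty, body = term
        body_ty = check(body, {**env, name: arg_ty})
        return ("->", arg_ty, body_ty)
    if kind == "app":
        f_ty = check(term[1], env)
        x_ty = check(term[2], env)
        if f_ty[0] != "->" or f_ty[1] != x_ty:
            raise TypeError("ill-typed application")
        return f_ty[2]

# lam x:atom. x  has type  atom -> atom
identity = ("lam", "x", "atom", ("var", "x"))
print(check(identity, {}))  # ('->', 'atom', 'atom')

# Self-application (lam x. x x) cannot be typed: the same stratification
# that blocks Russell's paradox blocks it here.
self_app = ("lam", "x", "atom", ("app", ("var", "x"), ("var", "x")))
try:
    check(self_app, {})
except TypeError:
    print("self-application is ill-typed")
```

The rejection of self-application is the programming-language echo of the set-theoretic story above: a term can't be applied to itself for the same structural reason a type can't be a member of itself.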