{"id":768,"date":"2009-04-28T19:34:39","date_gmt":"2009-04-28T19:34:39","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2009\/04\/28\/a-quick-bit-of-temporal-logic-introducing-ctl\/"},"modified":"2009-04-28T19:34:39","modified_gmt":"2009-04-28T19:34:39","slug":"a-quick-bit-of-temporal-logic-introducing-ctl","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2009\/04\/28\/a-quick-bit-of-temporal-logic-introducing-ctl\/","title":{"rendered":"A Quick Bit of Temporal Logic: Introducing CTL"},"content":{"rendered":"<p>This post is something that I&#8217;m thinking of including in my<br \/>\nbook. I haven&#8217;t decided whether I want to spend this much time on<br \/>\nlogics; they&#8217;re really interesting and fun &#8211; but there&#8217;s lots of<br \/>\nother interesting and fun stuff, and there&#8217;s only so much space.<\/p>\n<p> The topic of the moment is temporal logic &#8211; that is, a logic which<br \/>\nis built for reasoning about things that change over time.<\/p>\n<p> To begin with, why do we want temporal logic? Most of the time,<br \/>\nwhen we want to use logical reasoning, we use predicate logic. So why<br \/>\ndo we need another logic?<\/p>\n<p><!--more--><\/p>\n<h3>Why Temporal Logic?<\/h3>\n<p> One of the basic properties of predicate logic is that<br \/>\na true statement is <em>always<\/em> true, and a false statement is<br \/>\n<em>always<\/em> false. Predicate logic doesn&#8217;t allow things to change.<br \/>\nBut the most fundamental fact about time &#8211; what<br \/>\nreally <em>defines<\/em> the passage of time &#8211; is that<br \/>\nthings <em>do<\/em> change. When I&#8217;m writing this, it&#8217;s monday. But<br \/>\nwhen you read it, it might be tuesday or wednesday. Today, my<br \/>\nchildren are in elementary school.  Five years from now, they won&#8217;t<br \/>\nbe. <\/p>\n<p> Of course, you can always find a way to work around things if<br \/>\nyou&#8217;re clever. 
There are ways to work around the problem of change<br \/>\nover time using standard predicate logic. For example, one way is to<br \/>\nadd a time parameter to every predicate. So instead of<br \/>\nsaying <code>GoesToSchool(Elementary,Rebecca)<\/code>, I could<br \/>\nsay <code>GoesToSchool(Elementary, Rebecca, 20090420)<\/code>.  But<br \/>\nthen for all of the typical, non-temporal statements in predicate<br \/>\nlogic, I would need to add universal statements: &#8220;&forall;t:<br \/>\nFather(Mark,Rebecca,t)&#8221;. It gets very cumbersome very quickly, and<br \/>\nworse, it makes using the logic for reasoning incredibly awkward.<\/p>\n<p> There&#8217;s another problem with predicate logic: there are lots<br \/>\nof temporal statements I&#8217;d like to make which have a common form,<br \/>\nwhich I can&#8217;t express in first-order logic.  I&#8217;d like to be able to<br \/>\nsay things like &#8220;Eventually I&#8217;ll be hungry&#8221;; or &#8220;I&#8217;ll be tired until I<br \/>\nget some sleep&#8221;. Those are two typical temporal statements, which have<br \/>\ncommon forms, and it would be really useful to be able to take<br \/>\nadvantage of those common forms in temporal inference &#8211; but in first-order<br \/>\npredicate logic, I can&#8217;t define a form like &#8220;eventually&#8221; &#8211; in<br \/>\npredicate logic with time parameters, eventually is a statement about<br \/>\nanother statement &#8211; that is, it&#8217;s a second-order logical<br \/>\nconstruction.<\/p>\n<h3>Computation Tree Logic<\/h3>\n<p> So if predicate logic is so awkward for reasoning about time, what<br \/>\ndo we do? We create a new logic. That may sound silly, but it&#8217;s<br \/>\nsomething we do <em>all the time<\/em> in math. After all, a logic is a<br \/>\npretty simple formal system &#8211; we can define new ones whenever we want<br \/>\n&#8211; and mathematicians do it all the time. 
So we&#8217;ll just create a new<br \/>\nlogic, a <em>temporal<\/em> logic, which will make it easy for us to<br \/>\nreason about how things change over time. In fact, there isn&#8217;t just<br \/>\none temporal logic &#8211; mathematicians have designed many temporal logics<br \/>\n&#8211; CTL, ATL, CTL*, LTL, and &mu;TL to name just a few.  I&#8217;m going to<br \/>\ndescribe the one I&#8217;m most familiar with, which is<br \/>\ncalled <em>Computation Tree Logic<\/em>, or CTL. CTL is designed for<br \/>\nreasoning about computations with mutable states. It&#8217;s a very<br \/>\nsimple logic &#8211; it may seem unreasonably simple when you see it. But<br \/>\nit&#8217;s really not; it&#8217;s actually extremely useful. <\/p>\n<p> CTL may be simple, but it&#8217;s a fairly typical example of the way<br \/>\nthat you can look at time in a logic. The semantics &#8211; that is, the<br \/>\nmeaning or <em>model<\/em> of the logic &#8211; is based on a general idea<br \/>\ncalled <em>Kripke semantics<\/em>, which is very useful for describing<br \/>\nchange. I&#8217;ll describe the specific model for temporal logic in this<br \/>\narticle, but if you want to know more about the idea of Kripke<br \/>\nsemantics, I wrote about it when I was talking about intuitionistic<br \/>\nlogic <a href=\"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/03\/kripke-semantics-and-models-for-intuitionistic-logic\">here<\/a>.<\/p>\n<p> The starting point for CTL is propositional logic. So we start with<br \/>\na basic propositional logic: a finite set of atomic statements (propositions)<br \/>\nthat we can use. (There are predicate extensions to CTL, but they make<br \/>\nit vastly more complicated, so we&#8217;ll stick to the simple, basic propositional<br \/>\nversion.) We can combine the propositions using the standard propositional<br \/>\nlogical operators &#8211; and, or, implication, and negation. 
<\/p>\n<p> Where it gets interesting is that we also have a set<br \/>\nof <em>temporal quantifiers<\/em>, which are used to specify the<br \/>\ntemporal properties of propositional statements. Every statement in<br \/>\nCTL has at least two temporal quantifiers. But before we get into them<br \/>\nin detail, we need to talk about the basic model of time in CTL.<\/p>\n<p> The idea of the model of CTL, as I said above, is based on Kripke<br \/>\nsemantics. Kripke semantics defines a changing system by using<br \/>\na collection of worlds. Each world consists of an assignment of truth<br \/>\nvalues to each of the basic propositions, and a set of<br \/>\n<em>successor worlds<\/em>. In Kripke semantics, you follow a path<br \/>\nthrough the worlds &#8211; in each step, you move from a<br \/>\nworld to one of its successors. In CTL, a world represents a moment in<br \/>\ntime; the successors to a world represent the possible moments of time<br \/>\nthat immediately follow it.<\/p>\n<p> The Kripke semantics of CTL effectively gives us a<br \/>\n<em>non-deterministic<\/em> model of time. From a given moment, there can be<br \/>\nmore than one possible future &#8211; and we have<br \/>\n<em>no way<\/em> of determining which possible future will come true<br \/>\nuntil we reach it. So time becomes a tree of possibilities &#8211; from each<br \/>\nmoment, you could go to any of its successors, so each moment spawns a<br \/>\nbranch for each of its successors &#8211; and each path through the tree represents<br \/>\na timeline for a possible future.<\/p>\n<p> CTL gives us two different ways of talking about time in that tree<br \/>\nof possible futures; to make a meaningful temporal statement, we need<br \/>\nto combine them. 
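The tree-of-worlds model is easy to make concrete. Here's a minimal sketch in Python of a Kripke structure as described above; the world names (w0, w1, w2) and the propositions (hungry, tired) are invented purely for illustration. Each world carries the set of propositions true at that moment, plus its possible successor worlds; any walk through successors is one possible timeline.

```python
# A minimal Kripke structure: each world is labeled with the atomic
# propositions that are true there, plus a set of successor worlds.
worlds = {
    "w0": {"props": {"hungry"}, "succ": ["w1", "w2"]},    # two possible futures
    "w1": {"props": {"hungry", "tired"}, "succ": ["w1"]}, # loops on itself forever
    "w2": {"props": set(), "succ": ["w0"]},
}

def holds(world, prop):
    """Truth of an atomic proposition at a single moment (world)."""
    return prop in worlds[world]["props"]

def successors(world):
    """The possible moments of time that immediately follow this one."""
    return worlds[world]["succ"]

# A path is a walk through successors; here we take the first few steps
# of one possible timeline from w0 (always choosing the first successor).
path, w = [], "w0"
for _ in range(4):
    path.append(w)
    w = successors(w)[0]

print(path)                    # ['w0', 'w1', 'w1', 'w1']
print(holds("w0", "hungry"))   # True
```

Because w0 has two successors, a different choice at the first step would have produced a different timeline; that branching is exactly the tree of possible futures.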
First, if you look at time from any particular<br \/>\nmoment, there&#8217;s a collection of possible paths into the future &#8211; so<br \/>\nyou can talk about things in terms of the space of possible futures &#8211;<br \/>\nmaking statements that begin with things like &#8220;In all possible futures &#8230;&#8221;;<br \/>\nor &#8220;In some possible future &#8230;&#8221;.  Second, you can talk about the steps<br \/>\nalong a particular path into the future &#8211; a sequence<br \/>\nof worlds that define one specific future. You can make statements<br \/>\nabout paths like &#8220;&#8230; will eventually be true&#8221;. By putting them<br \/>\ntogether, you can produce meaningful temporal statements:<br \/>\n&#8220;In all possible futures, X will always be true&#8221;; or<br \/>\n&#8220;In at least one possible future, X will eventually become true&#8221;.<\/p>\n<p> Following on this idea, there are two kinds of <em>temporal<br \/>\nquantifiers<\/em> that are used to construct temporal<br \/>\nstatements: <em>universe<\/em> quantifiers and <em>path<\/em><br \/>\nquantifiers. Every CTL statement uses a pair of quantifiers:<br \/>\none universe quantifier, and one path quantifier.<\/p>\n<p> Universe quantifiers are used to make statements that range over<br \/>\nall paths forward from a particular moment in time. Path quantifiers<br \/>\nare used to make statements that range over all moments of time on a<br \/>\nparticular timeline-path. As I said above, in CTL statements, the<br \/>\nquantifiers always appear in pairs: one universe quantifier which<br \/>\nspecifies what set of potential futures you&#8217;re talking about, and one<br \/>\npath quantifier that describes the properties of paths within the set<br \/>\nquantified by the universe.<\/p>\n<p> There are two universe quantifiers, which correspond to the<br \/>\nuniversal and existential quantifiers from predicate logic. 
There&#8217;s<br \/>\n&#8220;A&#8221;, which is used to say that some statement is true <em>in all<br \/>\npossible futures<\/em>; and there&#8217;s &#8220;E&#8221;, which is<br \/>\nused to say that there exists at least one possible future where<br \/>\nsomething is true.<\/p>\n<p> Next, there are path quantifiers. Path quantifiers are similar to<br \/>\nuniverse quantifiers, except that instead of ranging over a set of<br \/>\npossible timelines, they range over the time-worlds on a specific<br \/>\ntimeline-path. There are five path quantifiers, which can<br \/>\nbe divided into three groups:<\/p>\n<ol>\n<li> The simplest path quantifier is the immediate quantifier, X (next).<br \/>\nX is used to make a statement about the <em>very next<\/em> time-world on this<br \/>\npath.<\/li>\n<li> There are universal and existential path quantifiers:<br \/>\n&#8220;G&#8221; (global), which is used to state a fact about every world-moment on the path,<br \/>\nand &#8220;F&#8221; (finally), which is used to state a fact about at<br \/>\nleast one world-moment along a particular timeline-path.<\/li>\n<li> There are temporal relationship quantifiers, which are used to<br \/>\ndescribe relationships between facts: U (strong until) and W (weak<br \/>\nuntil). U and W both join statements, like &#8220;xUy&#8221; or &#8220;xWy&#8221;. What<br \/>\nthey mean is that the first statement (x) is true <em>until<\/em><br \/>\nthe second statement (y) becomes true. 
The strong form also means<br \/>\nthat the second statement <em>must<\/em> eventually become true;<br \/>\nthe weak form says that x is true until y becomes true, but y may<br \/>\nnever become true &#8211; in which case x will be true forever.<\/li>\n<\/ol>\n<p> It&#8217;s hard to see quite what all that means in the abstract,<br \/>\nbut once you see a few examples, it&#8217;s actually quite simple.<\/p>\n<ul>\n<li> <code>AG.(I have a big nose)<\/code>: No matter what happens,<br \/>\nat every point in time, I will always have a big nose.<\/li>\n<li> <code>EF.(I lost my job)<\/code>: It&#8217;s possible that in some<br \/>\nfuture, I will be laid off. (Formally, there exists a possible<br \/>\ntimeline where the statement &#8220;I lost my job&#8221; will eventually<br \/>\nbecome true.)<\/li>\n<li> <code>A.(I do my job well)W(I deserve to get fired)<\/code>: For<br \/>\nall possible futures, I do my job well until I deserve to be fired &#8211;<br \/>\nbut I won&#8217;t necessarily eventually deserve to be fired &#8211; it&#8217;s possible (and<br \/>\nI hope very probable) that I&#8217;ll always do my job well, and so &#8220;I deserve to be fired&#8221;<br \/>\nwill never become true.<\/li>\n<li> <code>A.(I am alive)U(I am dead)<\/code>: No matter what happens, I<br \/>\nwill be alive until I die &#8211; and my eventual death is absolutely inevitable.<\/li>\n<li> <code>AG.(EF.(I am sick))<\/code>: It&#8217;s always possible that I&#8217;ll eventually get sick.<\/li>\n<li> <code>AG.(EF.(My house is painted blue) &or; AG.(My house is painted brown))<\/code>: In<br \/>\nall possible futures, either my house will eventually be painted blue, or it will stay<br \/>\nbrown. It will never be any color other than blue or brown.<\/li>\n<\/ul>\n<h3>What&#8217;s it good for?<\/h3>\n<p> Up above, I said that CTL, despite its simplicity, is actually very<br \/>\nuseful. 
What&#8217;s it really good for?<\/p>\n<p> I&#8217;m not going to go into detail &#8211; but one of the main uses of it<br \/>\nis in something called <em>model checking<\/em>. Model checking is a<br \/>\ntechnique used by both hardware and software engineers to check the<br \/>\ncorrectness of certain temporal aspects of their system. They write a<br \/>\nspecification of their system in terms of CTL; and then they use an<br \/>\nautomated tool that compares an actual implementation of a piece of<br \/>\nhardware or software to the specification. The tool can then verify<br \/>\nwhether or not the system does what it&#8217;s supposed to; and if not, it<br \/>\ncan provide a specific counter-example demonstrating when it will<br \/>\ndo the wrong thing.<\/p>\n<p> In hardware model checking, you&#8217;ve got a simple piece of hardware<br \/>\n&#8211; like a single functional unit from a microprocessor. That hardware<br \/>\nis, basically, a complex finite state machine. The way that you can think of<br \/>\nit is that the hardware has some set of points where it can have a<br \/>\nzero or a one. Each of those points can be represented by a<br \/>\nproposition. Then you can describe operations in terms of how outputs<br \/>\nare produced from inputs.<\/p>\n<p> So, for example, if you were looking at a functional unit that<br \/>\nimplements division, one of the propositions would be &#8220;The divide by<br \/>\nzero flag is set&#8221;. Then your specification would include statements<br \/>\nlike <code>AG.((DivisorIsZero) &rArr; AF.(DivideByZeroFlag))<\/code>. That<br \/>\nspecification says that if the divisor is zero, then eventually<br \/>\nthe divide by zero flag will be set. It does <em>not<\/em> specify<br \/>\nhow long it will take &#8211; it could take one step, or it could take 100. 
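A model checker makes this kind of check mechanical: working over the finite state machine, it computes the set of states satisfying each subformula, with the temporal quantifiers computed as fixpoints over the transition relation. Here is a minimal sketch in Python of that labeling approach; the four-state machine and its proposition labeling are invented for illustration and are not a real divider. It checks a property of the same shape, AG.((DivisorIsZero) ⇒ AF.(DivideByZeroFlag)), at the initial state.

```python
# Fixpoint-style CTL labeling over a tiny, invented state machine.
succ = {0: {1, 2}, 1: {3}, 2: {3}, 3: {3}}             # transitions; state 3 loops
labels = {"DivisorIsZero": {1}, "DivideByZeroFlag": {3}}
states = set(succ)

def sat_AF(target):
    """AF p: states from which EVERY path eventually reaches `target`.
    Least fixpoint: keep adding states whose successors ALL satisfy AF p."""
    result = set(target)
    changed = True
    while changed:
        changed = False
        for s in states - result:
            if succ[s] <= result:          # all successors already in the set
                result.add(s)
                changed = True
    return result

def sat_EF(target):
    """EF p: states from which SOME path eventually reaches `target`."""
    result = set(target)
    changed = True
    while changed:
        changed = False
        for s in states - result:
            if succ[s] & result:           # at least one successor in the set
                result.add(s)
                changed = True
    return result

def sat_AG(good):
    """AG p is equivalent to not-EF(not p): `good` holds everywhere reachable."""
    return states - sat_EF(states - good)

# Check AG((DivisorIsZero) => AF(DivideByZeroFlag)) at initial state 0.
af_flag = sat_AF(labels["DivideByZeroFlag"])
implies = (states - labels["DivisorIsZero"]) | af_flag  # p => q  ==  (not p) or q
print(0 in sat_AG(implies))                             # True in this little machine
```

Real model checkers such as SMV and NuSMV work on the same principle but represent the state sets symbolically (with BDDs) rather than as explicit sets, which is what lets them handle hardware-scale state spaces.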
But<br \/>\nsince the behavior that we care about with that specification should<br \/>\nbe true regardless of the details of how the divider is implemented,<br \/>\nwe don&#8217;t want to specify how many steps &#8211; the important behavior is that<br \/>\nif you try to divide by zero, the appropriate flag bit is set.<\/p>\n<p> Obviously, the real specifications are a lot more complex than that,<br \/>\nbut that gives you the general sense.<\/p>\n<p> It&#8217;s also used in software. A friend of mine from when I worked at<br \/>\nIBM did some really cool work on model-checking software. Lots of people<br \/>\nhave looked at that, but it doesn&#8217;t appear to be too useful in general &#8211;<br \/>\nwriting the specifications is hard, and checking them given the amount<br \/>\nof state in a typical program is a nightmare. What my friend did is<br \/>\nrealize that when you look at a multithreaded system, the desired<br \/>\nsynchronization behavior is generally pretty simple &#8211; and is almost<br \/>\nideally suited for description in a language like CTL. So he worked<br \/>\nout a way of using model checking to verify the correctness of<br \/>\nthe synchronization behaviors of software systems. <\/p>\n<p> So that&#8217;s the basics of CTL &#8211; an extremely simple, but extremely<br \/>\nuseful logic for describing time-based behaviors.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This post is something that I&#8217;m thinking of including in my book. I haven&#8217;t decided whether I want to spend this much time on logics; they&#8217;re really interesting and fun &#8211; but there&#8217;s lots of other interesting and fun stuff, and there&#8217;s only so much space. 
The topic of the moment is temporal logic &#8211; [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[10,33],"tags":[],"class_list":["post-768","post","type-post","status-publish","format-standard","hentry","category-book","category-logic"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-co","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/768","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=768"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/768\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=768"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=768"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/
v2\/tags?post=768"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}