{"id":395,"date":"2007-04-23T08:30:00","date_gmt":"2007-04-23T08:30:00","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/04\/23\/process-equivalence-in-calculus\/"},"modified":"2007-04-23T08:30:00","modified_gmt":"2007-04-23T08:30:00","slug":"process-equivalence-in-calculus","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/04\/23\/process-equivalence-in-calculus\/","title":{"rendered":"Process Equivalence in &#960;-calculus"},"content":{"rendered":"<p> Before moving on, there&#8217;s one question that I thought was important enough to promote up to the front page, instead of just answering it in the comments. A commenter asked about process replication, !P, wanting to know if it terminates.<\/p>\n<p>The answer comes in two parts.<\/p>\n<ul>\n<li> !P does process creation <em>on demand<\/em>. It&#8217;s not an infinite parallel group of processes; it&#8217;s just a way of writing that there are as many of P as you need. So you can think of it like tape cells in a Turing machine: we never say that a Turing machine tape is infinitely long; but there&#8217;s always <em>enough<\/em> cells. !P isn&#8217;t an infinite replication of P; it&#8217;s a notation for &#8220;enough copies of P&#8221;. So when you&#8217;re done using replicas of P, you can think of !P disappearing.<\/li>\n<li> Often in &pi;-calculus, we actually don&#8217;t worry about termination of every process. You can think of it in terms of garbage collection: we&#8217;re running some group of processes to perform a computation. When they&#8217;re done, they terminate. Any process which has no ports that are reachable by any other active process can simply be collected.<\/li>\n<\/ul>\n<hr \/>\n<p> The next place to go in our exploration of &pi;-calculus is considering just what it really means for two processes to be equivalent. The fundamental equivalence relation in<br \/>\n&pi;-calculus is called <em>bisimulation<\/em>. 
The idea of bisimulation is roughly a kind of observational equivalence: two processes are equivalent under bisimulation if they exhibit the same visible behavior under all tests by an external observer.<\/p>\n<p><!--more--><\/p>\n<p> Consider two processes, P and Q. The <em>only<\/em> things that an external observer can see are the <em>communication channels<\/em> that are not localized to the processes. So the first requirement for P and Q to be equivalent is that they have equivalent externally visible communication channels.<\/p>\n<p> We need to look at that first requirement in a bit more detail. What does it mean for communication channels to be equivalent? Given two channels x and y, they are equivalent if they can be used to send exactly the same messages. So far, in our examination of &pi;-calculus, we&#8217;ve used a fully general notion of channels &#8211; any channel can be used to send or receive any message. But as we move on, we&#8217;ll be starting to use typing. So two channels will be equivalent if they have the same <em>type<\/em>; and the definition of the type of a channel will be a list of the kinds of messages that can be sent and received using the channel. (To keep things simple, we&#8217;ll assume that channels are <em>symmetric<\/em>, meaning that any process that can use a channel can use it to send or receive (or both) any message supported by the channel.)<\/p>\n<p> So now we get to the difficult part.<\/p>\n<p> From the outside of the process, observing the process, we can view it as a kind of state transition system. What that really means is that we can send it a message, and then watch what happens after it receives it: to be specific, what messages it sends in response. <\/p>\n<p> So given a process P with <em>n<\/em> communication channels<br \/>\nC<sub>P<\/sub>=c<sub>p1<\/sub>&#8230;c<sub>pn<\/sub>, we&#8217;ll model it as a state machine. 
So, when the process<br \/>\nis first created, we&#8217;ll say it&#8217;s in state S<sub>P0<\/sub>. Each time it receives a message,<br \/>\nit will take a transition to a new state depending on which channel received the message,<br \/>\nand what the content of the message was. When it makes the transition, it can also<br \/>\n<em>generate<\/em> a sequence of messages. So for each transition, we need to specify an<br \/>\ninitial state and a message as input, and a target state and a sequence of messages as output.<br \/>\nSo for a process P, we&#8217;ll define the following types:<\/p>\n<ul>\n<li> S<sub>P<\/sub> is the set of states of P<\/li>\n<li> C<sub>P<\/sub> is the set of channels of P<\/li>\n<li> M<sub>P<\/sub> is the set of pairs (c,m) where c&isin;C<sub>P<\/sub>, and m is a message that can be sent using c<\/li>\n<\/ul>\n<p> With that set of type definitions, we can say that the <em>transition relation<\/em> for<br \/>\nP is a relation T : (S<sub>P<\/sub>&times;M<sub>P<\/sub>) &rarr; (S<sub>P<\/sub>&times;Sequence(M<sub>P<\/sub>)) &#8211; that is, a <em>relation<\/em> from a state of P and an input message to a new state of P and a sequence of messages generated by the transition. The fact that it&#8217;s a <em>relation<\/em> is very important: there can be more than one possible transition from P on a given message.<\/p>\n<p> Suppose we have two processes P, with transition relation T<sub>P<\/sub> and message set<br \/>\nM<sub>P<\/sub>, and Q with transition relation T<sub>Q<\/sub> and message set M<sub>Q<\/sub>.<br \/>\nWe&#8217;ll say that P in state P<sub>i<\/sub> <em>simulates<\/em> Q in state Q<sub>j<\/sub> if<br \/>\nM<sub>Q<\/sub>&sube;M<sub>P<\/sub>, <em>and<\/em> for every message m &isin; M<sub>Q<\/sub>,<br \/>\nT<sub>Q<\/sub>(Q<sub>j<\/sub>,m) and T<sub>P<\/sub>(P<sub>i<\/sub>,m) generate the <em>same<\/em><br \/>\nsequence of messages on corresponding channels. 
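The single-state simulation check just defined can be sketched in a few lines of Python. This is a toy encoding of my own devising, not from any π-calculus tool: a transition relation is a set of ((state, input), (new-state, outputs)) pairs, where an input is a (channel, message) pair.

```python
# Toy encoding of a process as a state-transition system. The transition
# relation T is a set of ((state, input), (new_state, outputs)) pairs;
# an input is a (channel, message) pair, and outputs is a tuple of such
# pairs. Because T is a relation, one (state, input) can have several
# distinct targets.

def outputs_for(T, state, inp):
    """All output sequences T allows from `state` on input `inp`."""
    return {out for ((s, i), (_, out)) in T if s == state and i == inp}

def simulates_state(T_P, p, T_Q, q, M_Q):
    """State p of P simulates state q of Q: on every message in Q's
    message set, P can produce every output sequence Q can produce."""
    return all(outputs_for(T_Q, q, m) <= outputs_for(T_P, p, m) for m in M_Q)

# Q answers "ping" on channel a with "pong"; P does the same, and also
# accepts a message on channel b that Q knows nothing about.
M_Q = {("a", "ping")}
T_Q = {(("q0", ("a", "ping")), ("q1", (("a", "pong"),)))}
T_P = {(("p0", ("a", "ping")), ("p1", (("a", "pong"),))),
       (("p0", ("b", "hi")), ("p2", ()))}

print(simulates_state(T_P, "p0", T_Q, "q0", M_Q))  # True
```

Note that the subset test, rather than equality, is what makes this a simulation check and not an equivalence check: P is allowed extra behavior that Q never exercises.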
(I&#8217;ll gloss over the formality of defining corresponding channels; I&#8217;m sure you can use your intuition to figure out what it means.)<\/p>\n<p> Now, here&#8217;s where it gets tricky.<\/p>\n<p> Suppose I have a state Q<sub>x<\/sub>. Given a message m, there is some <em>set<\/em> of<br \/>\nstates Q<sub>x<\/sub>(m) that it could transition to. We can take the <em>union<\/em> of the states in Q<sub>x<\/sub>(m) &#8211; Q<sub>x<\/sub><sup>&cup;<\/sup>(m). Now, T<sub>Q<\/sub>(Q<sub>x<\/sub><sup>&cup;<\/sup>(m), n) &#8211; that is, the transition relation applied, with message n, to the set of possible target states of m &#8211; is the union of T<sub>Q<\/sub>(q,n) for all q&isin;Q<sub>x<\/sub><sup>&cup;<\/sup>(m). A <em>set of states<\/em> of P simulates a set of states of Q if the simulation relation as defined for single states works for the union-states when their transition sets are unioned.<\/p>\n<p> If the initial state of P is P<sub>0<\/sub> and the initial state of Q is Q<sub>0<\/sub>,<br \/>\nthen <em>P simulates Q<\/em> if P<sub>0<\/sub> simulates Q<sub>0<\/sub>, and for all possible transitions from <em>the set of states<\/em> Q<sub>i<\/sub> to the <em>set of states<\/em> Q<sub>j<\/sub> triggered by message m, there is some pair of <em>sets of<\/em> states P<sub>k<\/sub> and P<sub>l<\/sub> such that P<sub>k<\/sub> can transition to P<sub>l<\/sub> on message m, <em>and<\/em> P<sub>k<\/sub> simulates Q<sub>i<\/sub> and P<sub>l<\/sub> simulates Q<sub>j<\/sub>. <\/p>\n<p> Whew. That&#8217;s a mouthful. (Or a keyboardful, or whatever&#8230;) All of that is a<br \/>\nformal way of saying that for all <em>sequences<\/em> of messages that can be received by Q, P and Q will make transitions that generate the same outputs. Note that this is not yet symmetric: P <em>can<\/em> have messages not supported by Q, and it can do anything it wants when it receives one of those extra messages. 
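The union-of-target-states construction described above can be sketched concretely (again a toy encoding, with all names of my own invention): a nondeterministic transition relation takes one message to a set of possible states, and a follow-up message is then applied to every member of that set.

```python
# Sketch of the union construction: with a nondeterministic transition
# relation, message m can take state q0 to a *set* of states, and a
# second message n is applied to every state in that set, with the
# resulting target sets unioned together.

def targets(T, state, inp):
    """All states reachable from `state` by a transition on input `inp`."""
    return {s2 for ((s, i), (s2, _)) in T if s == state and i == inp}

def targets_from_set(T, states, inp):
    """Union of the transitions taken from every state in `states`."""
    result = set()
    for s in states:
        result |= targets(T, s, inp)
    return result

m = ("a", "go")
n = ("a", "next")
# Two distinct transitions from q0 on m: a genuine relation, not a function.
T_Q = {(("q0", m), ("q1", ())),
       (("q0", m), ("q2", ())),
       (("q1", n), ("q3", ())),
       (("q2", n), ("q4", ()))}

union_after_m = targets(T_Q, "q0", m)  # the set {q1, q2}
after_n = targets_from_set(T_Q, union_after_m, n)
print(sorted(after_n))  # ['q3', 'q4']
```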
But if someone has processes P and Q, then they cannot possibly distinguish them by sending messages that both processes can understand.<\/p>\n<p> Now that that&#8217;s out of the way, the next step is easy. P and Q are equivalent under bisimulation if P simulates Q and Q simulates P.<\/p>\n<p> So let&#8217;s look at one or two of the things that we&#8217;ve seen before, and see if they&#8217;re equivalent under bisimulation.<\/p>\n<ol>\n<li> P={?a(x).F(x)+?a(x).G(x)} and Q={?a(x).{F(x)+G(x)}}. Are these equivalent? Yes. Even though P commits earlier than Q does, there is no way for an external observer to determine when it makes a commitment to F or G. <\/li>\n<li> P={?a(x).F(x)+?a(x).G(x)}, Q={?a(x).(F(x)+G(x))}. These are equivalent, for the same reason as above: there&#8217;s no way to tell when the commitment to F(x) or G(x) happens.<\/li>\n<li> P={?a(x).(F(x)|G(x))}, Q={?a(x).F(x)|?a(x).G(x)}. Are these equivalent? In general, no, these are <em>not<\/em> equivalent. In P, sending one message to a will start both F(x) and G(x); in Q, sending one message to a will start F(x) <em>or<\/em> G(x).<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>Before moving on, there&#8217;s one question that I thought was important enough to promote up to the front page, instead of just answering it in the comments. A commenter asked about process replication, !P, wanting to know if it terminates. The answer comes in two parts. !P does process creation on demand. 
It&#8217;s not an [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[50],"tags":[],"class_list":["post-395","post","type-post","status-publish","format-standard","hentry","category-pi-calculus"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-6n","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/395","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=395"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/395\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=395"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=395"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=395"}],"curies":[{"name":"wp","
href":"https:\/\/api.w.org\/{rel}","templated":true}]}}