{"id":313,"date":"2007-02-17T12:46:04","date_gmt":"2007-02-17T12:46:04","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/2007\/02\/17\/basics-optimization\/"},"modified":"2007-02-17T12:46:04","modified_gmt":"2007-02-17T12:46:04","slug":"basics-optimization","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2007\/02\/17\/basics-optimization\/","title":{"rendered":"Basics: Optimization"},"content":{"rendered":"<p> Yet another term that we frequently hear, but which is often not properly understood, is <em>optimization<\/em>. What is optimization? And how does it work?<\/p>\n<p> The idea of optimization is quite simple. You have some complex situation, where<br \/>\nsome variable of interest (called the <em>target<\/em>) depends, through some complex<br \/>\nrelationship, on some other variables. Optimization is the process of trying to find<br \/>\nan assignment of values to those other variables (called <em>parameters<\/em>) that produces a <em>maximum<\/em> or <em>minimum<\/em> value of the target variable, called<br \/>\nthe <em>optimum<\/em> or <em>optimal value<\/em>.<\/p>\n<p> The <em>practice<\/em> of optimization is quite a bit harder. It depends greatly<br \/>\non the relationships between the variables. The general process of finding<br \/>\nan optimum is called <em>programming<\/em> &#8211; not in the sense of programming a computer; the term predates the invention of computers. <\/p>\n<p><!--more--><\/p>\n<p> To make things a bit more formal, we can describe an optimization problem in terms of geometry. An optimization problem can be described using a function<br \/>\nfrom a space S to the real numbers. The space S is some subset of a Euclidean space &real;<sup>n<\/sup> for some n. 
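<\/p>\n<p> To make that concrete, here&#8217;s a minimal sketch (the objective function and the discretized space are made-up examples, not anything from a real problem): we take S to be a finite grid of points, and find the optimum by brute-force search.<\/p>

```python
# Brute-force optimization: S is a finite set of feasible points, and the
# optimum is the point p in S where the objective f(p) is largest.

def f(x):
    # A made-up objective function, maximized at x = 3.
    return -(x - 3) ** 2 + 5

# S: a crude discretization of the interval [0, 6] into 61 grid points.
S = [i / 10 for i in range(61)]

# The optimum p maximizes f over all points of S.
p = max(S, key=f)
print(p, f(p))  # prints the optimal point and the optimal value
```

<p> Brute force like this only works when S is tiny; the point of the formal machinery below is to exploit structure (like linearity) so that we don&#8217;t have to examine every point.<\/p>\n<p> 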
So we have a function, f:S&rarr;&real;, (that is, a function from the space S to a real number) which we call<br \/>\nthe <em>objective function<\/em> if we&#8217;re looking for a maximum, or the <em>cost function<\/em> if we&#8217;re looking for a minimum. Each point in S is called a <em>potential<\/em> or <em>feasible<\/em> solution for the objective function. The <em>optimum<\/em> of the objective function f is the point p in S where f(p) is at a maximum &#8211; that is, the point p&isin;S such that &forall;x&isin;S: f(p)&ge;f(x). (If f is a cost function, just replace the &#8220;&ge;&#8221; in the previous with &#8220;&le;&#8221;.) <\/p>\n<p> The space S is generally defined by a set of <em>constraints<\/em>, which are relationships between the variables\/dimensions of S. If all of the constraints are<br \/>\nlinear, and the objective\/cost function is linear in terms of all of the variables,<br \/>\nthen the solution process is called <em>linear programming<\/em>. A linear programming problem where all of the values of the parameters in the solution must be integers is called an <em>integer programming<\/em> problem.<\/p>\n<p> General linear programming &#8211; where the values of the variables in the solution can be any real number &#8211; is a very tractable problem, which can be solved very efficiently. Integer linear programming, on the other hand, is <em>NP-hard<\/em> &#8211; which is a fancy way of saying that nobody knows an algorithm that is guaranteed to produce an optimal solution in a reasonable (that is, polynomial) amount of time.<\/p>\n<p> Linear programming turns out to be an extremely useful technique, which solves a large number of very common problems. Most problems of scheduling and resource distribution are ultimately reducible to linear programming. 
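<\/p>\n<p> As an illustration of why linear programming is so tractable, here&#8217;s a toy two-variable linear program (the numbers are made up), solved using the fact that a linear objective over a region bounded by linear constraints always attains its optimum at a vertex of that region &#8211; so for a tiny problem we can just enumerate the vertices. (Real solvers use the simplex method or interior-point methods, not brute-force enumeration.)<\/p>

```python
from itertools import combinations

# Toy linear program (made-up numbers): maximize 3x + 2y subject to
#   x + y <= 4,   x + 3y <= 6,   x >= 0,   y >= 0.
# Each constraint (a, b, r) means a*x + b*y <= r.
constraints = [
    (1, 1, 4),
    (1, 3, 6),
    (-1, 0, 0),  # x >= 0, rewritten as -x <= 0
    (0, -1, 0),  # y >= 0, rewritten as -y <= 0
]

def objective(x, y):
    return 3 * x + 2 * y

def intersect(c1, c2):
    # Solve the 2x2 linear system where both constraint boundaries hold
    # with equality (Cramer's rule); None if the boundaries are parallel.
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    # A point is a feasible solution if it satisfies every constraint.
    return all(a * pt[0] + b * pt[1] <= r + 1e-9 for a, b, r in constraints)

# Candidate vertices: feasible intersections of pairs of constraint boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

# The optimum is the vertex with the best objective value.
best = max(vertices, key=lambda v: objective(*v))
print(best, objective(*best))  # prints the optimal vertex and its value
```

<p> With dozens of constraints this enumeration blows up combinatorially, which is exactly the gap that serious linear programming algorithms close.<\/p>\n<p> 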
There&#8217;s an entire field of applied mathematical research called <em>operations research (OR)<\/em> which is devoted to optimization problems in general, and most work in OR is focused on finding solutions to linear programming problems, or on finding ways of characterizing different problems in terms of the simplest possible linear programming problem.<\/p>\n<p> There&#8217;s also a form of optimization problem of particular interest to computer scientists. When an optimization problem has the property that finding its optimal solution involves combining the optimal solutions to multiple overlapping<br \/>\nsubproblems, the technique for solving it is called <em>dynamic programming<\/em>. Dynamic programming works by computing and remembering the solutions to the subproblems, and then finding a way of combining a selection of those solutions to<br \/>\ncreate a solution to the full problem.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Yet another term that we frequently hear, but which is often not properly understood, is the concept of optimization. What is optimization? And how does it work? The idea of optimization is quite simple. 
You have some complex situation, where some variable of interest (called the target) is based on a complex relationship with some [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[74],"tags":[],"class_list":["post-313","post","type-post","status-publish","format-standard","hentry","category-basics"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-53","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/313","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=313"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/313\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=313"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=313"},{"taxonomy":"post_tag","embeddabl
e":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/tags?post=313"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}