{"id":2308,"date":"2014-02-04T12:08:47","date_gmt":"2014-02-04T17:08:47","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/?p=2308"},"modified":"2014-02-04T12:08:47","modified_gmt":"2014-02-04T17:08:47","slug":"everyone-stop-implementing-programming-languages-right-now-its-been-solved","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2014\/02\/04\/everyone-stop-implementing-programming-languages-right-now-its-been-solved\/","title":{"rendered":"Everyone stop implementing programming languages, right now! It&#039;s been solved!"},"content":{"rendered":"<p> Back when I was a student working on my PhD, I specialized in programming languages. Lucky for me I did it a long time ago! According to <a href=\"http:\/\/www.wired.com\/wiredenterprise\/2014\/02\/julia\">Wired<\/a>, if I was working on it now, I&#8217;d be out of luck &#8211; the problem is already solved!<\/p>\n<p> See, these guys built a new programming language which solves all the problems! I mean, just look how daft all of us programming language implementors are!<\/p>\n<blockquote><p>\nToday\u2019s languages were each designed with different goals in mind. Matlab was built for matrix calculations, and it\u2019s great at linear algebra. The R language is meant for statistics. Ruby and Python are good general purpose languages, beloved by web developers because they make coding faster and easier. But they don\u2019t run as quickly as languages like C and Java. 
What we need, Karpinski realized after struggling to build his network simulation tool, is a single language that does everything well.\n<\/p><\/blockquote>\n<p> See, we&#8217;ve been wasting our time, working on languages that are only good for one thing, when if only we&#8217;d had a clue, we would have just been smart, and built one perfect language which was good for everything!<\/p>\n<p> How did they accomplish this miraculous task?<\/p>\n<blockquote>\n<p> Together they fashioned a general purpose programming language that was also suited to advanced mathematics and statistics and could run at speeds rivaling C, the granddaddy of the programming world.<\/p>\n<p> Programmers often use tools that translate slower languages like Ruby and Python into faster languages like Java or C. But that faster code must also be translated \u2014 or compiled, in programmer lingo \u2014 into code that the machine can understand. That adds more complexity and room for error.<\/p>\n<p> Julia is different in that it doesn\u2019t need an intermediary step. Using LLVM, a compiler developed by University of Illinois at Urbana-Champaign and enhanced by the likes of Apple and Google, Karpinski and company built the language so that it compiles straight to machine code on the fly, as it runs.<\/p>\n<\/blockquote>\n<p> Ye bloody gods, but it&#8217;s hard to know just where to start ripping that apart.<\/p>\n<p> Let&#8217;s start with that last paragraph. Apparently, the guys who designed Julia are geniuses, because they used the LLVM backend for their compiler, eliminating the need for an intermediate language.<\/p>\n<p> That&#8217;s clearly a revolutionary idea. I mean, no one has <em>ever<\/em> tried to do that before &#8211; no programming languages except C and C++ (the original targets of LLVM). Except for Ada. And D. And Fortran. And Pure. And Objective-C. And Haskell. And Java. And plenty of others.<\/p>\n<p> And those are just the languages that specifically use the LLVM backend. 
There are others that use different code generators to generate true binary code.<\/p>\n<p> But hey, let&#8217;s ignore that bit, and step back.<\/p>\n<p> Let&#8217;s look at what they say about how other people implement programming languages, shall we? The problem with other languages, they allege, is that their implementations don&#8217;t actually generate machine code. They translate from a slower language into a faster language. Let&#8217;s leave aside the fact that speed is an attribute of an implementation, not a language. (I can show you a CommonLisp interpreter that&#8217;s slow as a dog, and I can show you a CommonLisp interpreter that&#8217;ll knock your socks off.)<\/p>\n<p> What do the Julia guys actually do? They write a front-end that generates LLVM intermediate code. That is, they <em>don&#8217;t<\/em> generate machine code directly. They translate code written in their programming language into code for an abstract virtual machine. And then they take the virtual machine code, and pass it to the LLVM backend, which translates from virtual code to actual true machine code.<\/p>\n<p> In other words, they&#8217;re not doing anything different from pretty much any other compiled language. It&#8217;s incredibly rare to see a compiler that actually doesn&#8217;t do intermediate code generation. The only example I can think of at the moment is one of the compilers for Go &#8211; and even it uses some intermediates internally.<\/p>\n<blockquote>\n<p> Even if Julia never displaces the more popular languages \u2014 or if something better comes along \u2014 the team believes it\u2019s changing the way people think about language design. It\u2019s showing the world that one language can give you everything.<\/p>\n<p> That said, it isn\u2019t for everyone. 
Bezanson says it\u2019s not exactly ideal for building desktop applications or operating systems, and though you can use it for web programming, it\u2019s better suited to technical computing. But it\u2019s still evolving, and according to Jonah Bloch-Johnson, a climate scientist at the University of Chicago who has been experimenting with Julia, it\u2019s more robust than he expected. He says most of what he needs is already available in the language, and some of the code libraries, he adds, are better than what he can get from a seasoned language like Python.<\/p>\n<\/blockquote>\n<p> So, our intrepid reporter tells us, the glorious thing about Julia is that it&#8217;s one language that can give you everything! This should completely change the whole world of programming language design &#8211; because we idiots who&#8217;ve worked on languages weren&#8217;t smart enough to realize that there should be one language that does everything!<\/p>\n<p> And then, in the very next paragraph, he points out that Julia, the great glorious language that&#8217;s going to change the world of programming language design by being good at everything, isn&#8217;t good at everything!<\/p>\n<p> Jeebus. Just shoot me now.<\/p>\n<p> I&#8217;ll finish with a quote that pretty much sums up the idiocy of these guys.<\/p>\n<blockquote>\n<p> \u201cPeople have assumed that we need both fast and slow languages,\u201d Bezanson says. \u201cI happen to believe that we don\u2019t need slow languages.\u201d<\/p>\n<\/blockquote>\n<p> This sums up just about everything that I hate about what happens when idiots who don&#8217;t understand programming languages pontificate about how languages should be designed\/implemented.<\/p>\n<p> At the moment, in my day job, I&#8217;m doing almost all of my programming in Python. Now, I&#8217;m not exactly a huge fan of Python. There&#8217;s an awful lot of slapdash and magic about it that drives me crazy. 
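<\/p>
<p> A side note worth making concrete: even Python, the supposedly slow language in this story, compiles source to an intermediate bytecode before executing a single line &#8211; the same multi-stage pipeline that the Wired piece treats as a defect. Here&#8217;s a minimal sketch (my own example, using nothing but the standard library&#8217;s <code>dis<\/code> module):<\/p>

```python
import dis

def add(a, b):
    return a + b

# By the time we can call `add`, CPython has already compiled it to
# bytecode -- an intermediate representation. The interpreter executes
# these opcodes, not the source text.
print([ins.opname for ins in dis.get_instructions(add)])
```

<p> The opcode names vary across CPython versions (older releases emit BINARY_ADD, newer ones BINARY_OP), but the pipeline is always the same: source, then an intermediate form, then execution.<\/p>
<p> 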
But I can&#8217;t really dispute the decision to use it for my project, because it&#8217;s a very good choice.<\/p>\n<p> What makes it a good choice? A certain kind of flexibility and dynamism. It&#8217;s a great language for splicing together different pieces that come from different places. It&#8217;s not the fastest language in the world. But for my purposes, that&#8217;s completely irrelevant. If you took a super-duper brilliant, uber-fast language with a compiler that could generate perfectly optimal code every time, the resulting program wouldn&#8217;t be any faster than my Python program. How can that be?<\/p>\n<p> Because my Python program spends most of its time idle, waiting for something to happen. It&#8217;s talking to a server out on a datacenter cluster, sending it requests, and then waiting for them to complete. When they&#8217;re done, it looks at the results, and then generates output on a local console. If I had a fast compiler, the only effect it would have is that my program would spend more time idle. If I were pushing my CPU anywhere close to its limits, using less CPU before going idle might be helpful. But it&#8217;s not.<\/p>\n<p> The speed of the language doesn&#8217;t matter. But by making my job easier &#8211; making it easier to write the code &#8211; it saves something much more valuable than CPU time. It saves <em>human<\/em> time. And a human programmer is vastly more expensive than another 100 CPUs. <\/p>\n<p> We don&#8217;t specifically need <em>slow<\/em> languages. But no one sets out to implement a slow language. People implement <em>useful<\/em> languages. And they make intelligent decisions about where to spend their time. You could implement a machine code generator for Python. It would be an extremely complicated thing to do &#8211; but you could do it. (In fact, someone is working on an LLVM front-end for Python! 
It&#8217;s not for Python code like my system, but there&#8217;s a whole community of people who use Python for implementing numeric processing code with <a href=\"http:\/\/www.numpy.org\/\">NumPy<\/a>.) But what&#8217;s the benefit? For most applications, absolutely nothing.\n<\/p>\n<p> According to the Julia guys, the perfectly rational decision to <em>not<\/em> dedicate effort to optimization when optimization won&#8217;t actually pay off is a bad, stupid idea. And that should tell you all that you need to know about their opinions.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","categories":[7],"tags":[]}