{"id":2273,"date":"2014-01-02T20:03:32","date_gmt":"2014-01-03T01:03:32","guid":{"rendered":"http:\/\/scientopia.org\/blogs\/goodmath\/?p=2273"},"modified":"2014-01-02T20:03:32","modified_gmt":"2014-01-03T01:03:32","slug":"leading-in-to-machine-code-why","status":"publish","type":"post","link":"http:\/\/www.goodmath.org\/blog\/2014\/01\/02\/leading-in-to-machine-code-why\/","title":{"rendered":"Leading in to Machine Code: Why?"},"content":{"rendered":"<p> I&#8217;m going to write a few posts about programming in machine language. It seems that many more people are interested in learning about the ARM processor, so that&#8217;s what I&#8217;ll be writing about. In particular, I&#8217;m going to be working with the <a href=\"http:\/\/www.raspberrypi.org\/\">Raspberry Pi<\/a> running Raspbian linux. For those who aren&#8217;t familiar with it, the Pi is a super-inexpensive computer that&#8217;s very easy to program, and very easy to interface with the outside world. It&#8217;s a delightful little machine, and you can get one for around $50!<\/p>\n<p> Anyway, before getting started, I wanted to talk about a few things. First of all, why learn machine language? And then, just what the heck is the ARM thing anyway?<\/p>\n<h2>Why learn machine code?<\/h2>\n<p> My answer might surprise you. Or, if you&#8217;ve been reading this blog for a while, it might not.<\/p>\n<p> Let&#8217;s start with the wrong reason. Most of the time, people say that you should learn machine language for speed: programming at the machine code level gets you right down to the hardware, eliminating any layers of junk that would slow you down. 
For example, one of the books that I bought to learn ARM assembly (<a href=\"http:\/\/www.amazon.com\/gp\/product\/1492135283\/ref=as_li_qf_sp_asin_tl?ie=UTF8&#038;camp=1789&#038;creative=9325&#038;creativeASIN=1492135283&#038;linkCode=as2&#038;tag=goodmathbadma-20\">Raspberry Pi Assembly Language RASPBIAN Beginners: Hands On Guide<\/a><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/ir-na.amazon-adsystem.com\/e\/ir?t=goodmathbadma-20&#038;l=as2&#038;o=1&#038;a=1492135283\" width=\"1\" height=\"1\" border=\"0\" alt=\"\" style=\"border:none !important; margin:0px !important;\" \/>) said:<\/p>\n<blockquote>\n<p> even the most efficient languages can be over 30 times<br \/>\nslower than their machine code equivalent, and that\u2019s on a good<br \/>\nday!<\/p>\n<\/blockquote>\n<p> This is pure, utter rubbish. I have no idea where he came up with that 30x figure, but it&#8217;s got no relationship to reality. (It&#8217;s a decent book, if a bit elementary in approach; this silly statement isn&#8217;t representative of the book as a whole!)<\/p>\n<p> In modern CPUs &#8211; and the ARM definitely does count as modern! &#8211; the fact is, for real-world programs, writing code by hand in machine language will probably result in <em>slower<\/em> code!<\/p>\n<p> If you&#8217;re talking about writing a single small routine, humans can be very good at that, and they often <em>do<\/em> beat compilers. But once you get beyond that, and start looking at whole programs, any human advantage in machine language goes out the window. The constraints that actually affect performance have become incredibly complex &#8211; too complex for us to juggle effectively. We&#8217;ll look at some of these in more detail later, but I&#8217;ll explain one example now.<\/p>\n<p> The CPU needs to fetch instructions from memory. But memory is <em>dead slow<\/em> compared to the CPU! In the best case, your CPU can execute a couple of instructions in the time it takes to fetch a single value from memory. 
This leads to an obvious problem: it can execute (or at least start executing) one instruction for each clock tick, but it takes several ticks to fetch an instruction!<\/p>\n<p> To get around this, CPUs play a couple of tricks. Basically, they don&#8217;t fetch single instructions, but instead grab entire blocks of instructions; and they start retrieving instructions before they&#8217;re needed, so that by the time the CPU is ready to execute an instruction, it&#8217;s already been fetched.<\/p>\n<p> So the instruction-fetching hardware is constantly looking ahead, and fetching instructions so that they&#8217;ll be ready when the CPU needs them. What happens when your code contains a conditional branch instruction?<\/p>\n<p> The fetch hardware doesn&#8217;t know whether the branch will be taken or not. It can make an educated guess by a process called branch prediction. But if it guesses wrong, then the CPU is stalled until the correct instructions can be fetched! So you want to write your code so that the CPU&#8217;s branch prediction hardware is more likely to guess correctly. Many of the tricks that humans use to hand-optimize code actually have the effect of confusing branch prediction! They shave off a couple of instructions, but by doing so, they also force the CPU to sit idle while it waits for instructions to be fetched. The branch misprediction penalty frequently outweighs the cycles they saved!<\/p>\n<p> That&#8217;s one simple example. There are many more, and they&#8217;re much more complicated. To write efficient code, you need to keep all of those constraints in mind, and fully understand every tradeoff. That&#8217;s incredibly hard, and no matter how smart you are, you&#8217;ll probably blow it for large programs.<\/p>\n<p> If not for efficiency, then why learn machine code? Because it&#8217;s how your computer really works! 
You might never actually use it, but it&#8217;s interesting and valuable to know what&#8217;s happening under the covers. Think of it like your car: most of us will never actually modify the engine, but it&#8217;s still good to understand how the engine and transmission work.<\/p>\n<p> Your computer is an amazingly complex machine. It&#8217;s literally got billions of tiny little parts, all working together in an intricate dance to do what you tell it to. Learning machine code gives you an idea of just how it does that. When you&#8217;re programming in another language, understanding machine code lets you understand what your program is <em>really<\/em> doing under the covers. That&#8217;s a useful and fascinating thing to know!<\/p>\n<h2>What is this ARM thing?<\/h2>\n<p> As I said, we&#8217;re going to look at machine language coding on the ARM processor. What is this ARM beast anyway?<\/p>\n<p>It&#8217;s probably not the CPU in your laptop. Most desktop and laptop computers today are based on a direct descendant of the first microprocessor: the Intel 4004.<\/p>\n<p> Yes, seriously: the Intel CPUs that drive most PCs are, really, direct descendants of the first CPU designed for desktop calculators! That&#8217;s not an insult to the Intel CPUs, but rather a testament to the value of a good design: they&#8217;ve just kept on growing and enhancing. It&#8217;s hard to see the resemblance unless you follow the design path, where each step builds directly on its predecessors.<\/p>\n<p> The Intel 4004, released in 1971, was a 4-bit processor designed for use in calculators. Nifty chip, state of the art in 1971, but not exactly what we&#8217;d call flexible by modern standards. Even by the standards of the day, they recognized its limits. So following on its success, they created an 8-bit version, which they called the 8008. And then they extended the instruction set, and called the result the 8080. 
The 8080, in turn, yielded successors in the 8086 and 8088 (and the Z80, from a rival chipmaker). <\/p>\n<p> The 8088 was the processor chosen by IBM for its newfangled personal computer. Chip designers kept making it better, producing the 80286, 386, Pentium, and so on &#8211; up to today&#8217;s CPUs, like the Core i7 that drives my MacBook.<\/p>\n<p> The ARM comes from a different design path. At the time that Intel was producing the 8008 and 8080, other companies were getting into the same game. From the PC perspective, the most important was the 6502, which was used by the original Apple, Commodore, and BBC microcomputers. The 6502 was, incidentally, the first CPU that I learned to program!<\/p>\n<p> The ARM isn&#8217;t a descendant of the 6502, but it is a product of the 6502-based family of computers. In the early 1980s, the BBC decided to create an educational computer to promote computer literacy. They hired a company called Acorn to develop a computer for their program. Acorn developed a beautiful little system that they called the BBC Micro. <\/p>\n<p> The BBC Micro was a huge success. Acorn wanted to capitalize on its success, and try to move it from the educational market to the business market. But the 6502 was underpowered for what they wanted to do. So they decided to add a companion processor: they&#8217;d have a computer which could still run all of the BBC Micro programs, but which could do fancy graphics and fast computation with this other processor.<\/p>\n<p> In a typical tech-industry NIH (Not Invented Here) moment, they decided that none of the other commercially available CPUs were good enough, so they set out to design their own. They were impressed by the work done by the Berkeley RISC (Reduced Instruction Set Computer) project, and so they adopted the RISC principles, and designed their own CPU, which they called the Acorn RISC Machine, or ARM.<\/p>\n<p> The ARM design was absolutely gorgeous. 
It was simple but flexible and powerful, able to run on very little power while generating very little heat. It had lots of registers and an extremely simple instruction set, which made it a pleasure to program. Acorn built a lovely computer with a great operating system called <a href=\"https:\/\/www.riscosopen.org\/content\/\">RiscOS<\/a> around the ARM, but it never really caught on. (If you&#8217;d like to try RiscOS, you can run it on your Raspberry Pi!)<\/p>\n<p> But the ARM didn&#8217;t disappear. It didn&#8217;t catch on in the desktop computing world, but it rapidly took over the world of embedded devices. Everything from your cellphone to your dishwasher to your iPad runs on an ARM CPU. <\/p>\n<p> Just like the Intel family, the ARM has continued to evolve: the ARM family has gone through 8 major design changes, and dozens of smaller variations. They&#8217;re no longer just produced by Acorn &#8211; the ARM design is now maintained by a dedicated company, ARM Holdings, which licenses it to dozens of different manufacturers &#8211; Motorola, Apple, Samsung, and many others.\n<\/p>\n<p> Recently, ARM has even started to expand beyond embedded platforms: some Chromebook laptops are ARM-based, and several companies are starting to market ARM-based server boxes for datacenters! I&#8217;m looking forward to the day when I can buy a nice high-powered ARM laptop.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I&#8217;m going to write a few posts about programming in machine language. It seems that many more people are interested in learning about the ARM processor, so that&#8217;s what I&#8217;ll be writing about. In particular, I&#8217;m going to be working with the Raspberry Pi running Raspbian linux. 
For those who aren&#8217;t familiar with it, the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[91],"tags":[],"class_list":["post-2273","post","type-post","status-publish","format-standard","hentry","category-machine-language"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_shortlink":"https:\/\/wp.me\/p4lzZS-AF","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"_links":{"self":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/2273","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/comments?post=2273"}],"version-history":[{"count":0,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/posts\/2273\/revisions"}],"wp:attachment":[{"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/media?parent=2273"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/categories?post=2273"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.goodmath.org\/blog\/wp-json\/wp\/v2\/
tags?post=2273"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}