Introduction
If you're a member of this or any other technology-based forum, odds are that you've noticed the several versions of Microsoft's latest offering, Windows Vista. If you haven't, well... please come out from under that rock and get with the programming!

One of the biggest changes has been the clear offering of - and even a gentle push towards - the 64-bit version of the OS. Indubitably, this extra option becomes fodder for forum discussion, usually along the lines of:
Forumite 1: "Hi, I am building a new system and I wanted to know what your thoughts were on whether I should use 64-bit or 32-bit Vista? I've heard varying things around the net regarding compatibility, and was hoping someone could help."
Forumite 2: "Hi! I just read your post. You should definitely go with the 32-bit version. There's tons of compatibility problems with 64b (Just look at XP-64), and it's going to die a long, drawn-out death. Besides, the only actual difference between them is that 64-bit can make proper use of 4GB of RAM."
Forumite 1: "Oh, ok! Thanks!"
Now, what's wrong with this picture? The answer: quite a lot. Time and time again, self-proclaimed gurus decide that the only real difference between 32-bit computing and 64-bit computing is the memory limit. Are they right that RAM is part of it? Definitely - but that misses about 99 percent of the true differences. By that logic, the only major difference between your old 8-bit Nintendo console and your Xbox 360 is processor speed. I think we can all agree that's just wrong.
So, if memory is only one of the myriad changes in 64-bit computing, what does it actually do? And how? More importantly, why do we even care? We'll get to each of these questions in turn; but first, let's get some definitions straight and take a little trip down memory lane.
The 64 million dollar question
So, what is all this 64-bit stuff, anyway? Well, for a moment we'll keep this relatively simple, as some much deeper definitions will follow on the next page. For now, if you're not aware, 64-bit deals with how your computer works with the data it is given, in chunks called words.

Words (which are composed of a piece of data, instructions to manipulate that data, or both) are the key to computing. As such, the word size a computer's architecture allows can greatly alter how quickly it can process data and how accurate that data is - the larger the word, the more information that can be passed through the processor in every clock cycle, or the more transformations that can be performed on that data.
The concept of words is a tricky thing that we'll get into much more deeply on the next page, but for now what's important to know is that 64-bit computers can theoretically process double the information per cycle. However, that is very different from running twice as fast!
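To put a number on it, here's a minimal C sketch - my own illustration, not tied to any particular program - that reports the word size of whatever machine it's built for. The width of a pointer tracks the native word size of the build target:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The native word size shows up in the width of a pointer:
       4 bytes (32 bits) on a 32-bit build, 8 bytes (64 bits) on
       a 64-bit build. */
    printf("word size: %zu bits\n", sizeof(void *) * 8);

    /* A 64-bit word can hold values far beyond the 32-bit limit of
       4,294,967,295 in a single register-sized unit. */
    uint64_t big = 4294967296ULL;  /* 2^32, one past the 32-bit maximum */
    printf("2^32 = %llu\n", (unsigned long long)big);
    return 0;
}
```

Compiled for a 64-bit target, this prints a 64-bit word size and handles the value 2^32 in one register; a 32-bit build would need two registers for the same number.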
It's all about the Pentiums...or not
Though 64-bit computing seems like a relatively new invention, it's actually been around since long before the desktop computer. The first proper 64-bit system was IBM's 7030 Stretch computer, in 1961. Since that time, 64-bit computing has enjoyed a niche but fruitful life in high-end servers, processing clusters and super-computers.

It took many years for 64-bit computing to filter down to even common server levels. In 1994, Intel announced its first plans to move its servers to 64-bit via a joint venture with HP. Less than a year later, Sun released its natively 64-bit SPARC systems for enterprise-level workstations. It seemed like things were finally turning to 64-bit.
It would be another six years before the launch of Intel's first offering in 2001. The Itanium line, affectionately dubbed the "Itanic" due to its commercial flop, was the culmination of the research started with HP in 1994. However, it was still targeted at the server market, leaving desktop users out in the cold. Finally, 64-bit hit the desktop with AMD's launch of the Opteron and Athlon 64 chips in 2003. Though these chips were not based on a "pure" 64-bit architecture (as they had to maintain compatibility with existing operating systems and software), they could use an expansive group of 64-bit instructions known as x86-64.
The problem is, no matter whether it's pure 64-bit or x86-64, nothing will work without an operating system that can make use of it. And aside from 'nix operating systems, nothing the average consumer used supported 64-bit computing until Microsoft's lacklustre attempt at re-doing Windows XP in 2005. Contrary to popular belief, OS X did not support native 64-bit extensions until Tiger - the PowerPC processors used in Macs before the G5 were not 64-bit processors, and the G5 was the first that could execute 64-bit instructions.
But before we get too far into that, let's take a look at some of the more basic ideas behind 64-bit.
"Word up, dawg"
Before we can get into what computer programmers refer to as "higher-level" differences (the programming of software), we need to understand some very basic "low-level" differences. Yep, that's right, we're going back to the words, guys.

As explained before, a word is a group of information that is processed together. It can be some data from your program, or it might be an instruction that helps alter that data. Sometimes, instructions or data are too big and end up being spread over several words.
So if some instructions are too small and others too large, what decides the right size for a word? Well, some of it is just based on history, while another part of it is based on a very important factor called a processor register.
A register is a small storage slot inside the CPU, and its width defines the largest chunk of data the processor can work on at once (a portion of which may be flagged to denote its contents). Since a register bounds how big any piece of data can be and still be processed in one go, it makes sense to make the size of a word match. This way, the CPU can always handle the instructions that come at it.
If we have such a neat and tidy explanation there, where does the history come in? Well, historically, the x86 architecture was finalised with a 16-bit word length for the 8086. This meant that every instruction, every piece of data and every memory address the processor handled was built around a 16-bit unit.
Unlike today's computers, these more rudimentary processors did not allow data to span multiple words, limiting what could be done at the already sluggish clock speeds. Any data larger than 16 bits would end up truncated, while anything smaller had to be "padded" out to a full word before it could be processed.
Though processor speed has increased exponentially and the complexity of "bridged" words has come into being, processors that comply with the x86 architecture are still required to understand 16-bit words. So, in order to cope with our growing need for data space, those nifty computer engineers developed the concept of "segmenting" the pipeline, sending multiple 16-bit words through at once.
It is through this concept that we arrive at the terminology we use today: 32- and 64-bit. A 32-bit processor handles "dwords" (short for double-words) - two 16-bit words treated as a single unit. x86-64 processors work with "qwords," or quad-words - you guessed it, four 16-bit words at a time.
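For the programmers in the audience, these x86 size names map straight onto C's fixed-width types. A quick sketch (the typedef names are my own, purely for illustration):

```c
#include <stdio.h>
#include <stdint.h>

/* x86 size terminology mapped onto C's fixed-width types:
   WORD = 16 bits, DWORD ("double word") = 32, QWORD ("quad word") = 64. */
typedef uint16_t word_t;
typedef uint32_t dword_t;
typedef uint64_t qword_t;

/* One qword carries the same payload as four 16-bit words. */
static qword_t pack_words(word_t w0, word_t w1, word_t w2, word_t w3) {
    return (qword_t)w0
         | ((qword_t)w1 << 16)
         | ((qword_t)w2 << 32)
         | ((qword_t)w3 << 48);
}

int main(void) {
    qword_t q = pack_words(0x1111, 0x2222, 0x3333, 0x4444);
    printf("qword: 0x%016llx\n", (unsigned long long)q);  /* 0x4444333322221111 */
    return 0;
}
```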
So why maintain such an archaic principle when no processor today really has use for a 16-bit word? It's simple - backwards compatibility. The entire purpose of the x86 architecture was to create a standard that would be adhered to. A true 64-bit processor (such as the Itanium or SPARC) would no more know how to deal with a 16-bit word than a 16-bit processor would with a 64-bit one. The two are simply not compatible.
Ah, the memories
Now that we've touched on CPU registers, we can move on to what everyone thinks of when "64-bit" is spoken aloud: memory. See, one of the most important CPU register types is the memory address - a single word (note that it cannot be bridged) that gives the last chunk of data a place to go outside of the CPU.

Sound confusing? It's really not - think of it like going over to someone's house via the train, tube, subway, whatever public transport you fancy. You travel to the station via roads to board the train. But the train you're on rarely gets you exactly where you want to go - sometimes you have to switch. How do you know which station? Or what time? Where do you go while at the station?
These things are what a memory address solves - it tells your data where to be held outside of the CPU until it's time to use it again. Data travels through the bus to the CPU where it gets manipulated, transformed, etc. But one trip through the CPU is rarely all that is necessary - so the data has to get moved to the RAM for future use.
Obviously, this is an over-simplification, but it points to where memory addresses come in. A register is passed along behind the data that tells the CPU where to shove it for a while until it's needed again. The bigger the word, the more possible combinations, or addresses, there can be.
It is this limitation people run into when wondering why all four gigs of RAM aren't showing up. A 32-bit operating system can only address exactly four gigs, and that space must be shared between the kernel, processes and the address ranges reserved for hardware. Because of that, Windows XP users can never see the full 4GB of memory - it will typically be around 3.2GB with Service Pack 2.
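The arithmetic behind that limit is simple: n address bits can name 2^n distinct bytes. A quick C sketch of the two address spaces:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* An n-bit address can name 2^n distinct bytes. */
    uint64_t space32 = 1ULL << 32;   /* 4,294,967,296 bytes          */
    uint64_t space64 = UINT64_MAX;   /* 2^64 - 1, roughly 16 exabytes */

    printf("32-bit address space: %llu bytes (%llu GB)\n",
           (unsigned long long)space32,
           (unsigned long long)(space32 >> 30));
    printf("64-bit address space: ~%llu bytes\n",
           (unsigned long long)space64);
    return 0;
}
```

That 4,294,967,296-byte ceiling is the whole pot - RAM, video memory apertures and device ranges all have to fit inside it, which is where the missing ~0.8GB goes.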
Instructions? We don't need no stinkin' instructions!
So, we've covered the memory addressing issues. We're done, right? Not even close.

The memory issue is by far the most visible aspect of 64-bit computing, which is probably why it gets touted as the only difference. However, being able to use qwords instead of dwords means a lot more than just a longer address for where to stick the data - it means a longer string of instructions explaining what to do with that data, and a bigger chunk of the data itself.
Instructions in the 64-bit desktop world come in the flavour of the x86-64 group, known as AMD64 and (in Intel's version) EM64T. It is worth noting that Intel copied almost all of EM64T from AMD64, so the two sets are pretty much the same. However, neither should be confused with IA-64, the Intel Itanium instruction set - the only true 64-bit instruction set in general use today. AMD, for its part, has no true 64-bit processors.
Though AMD had been using x86-64 instructions since 2003, Intel did not bring its first x86-64 compatible chips to life until the Prescott series in the middle of 2004. Since the release of the Core microarchitecture, Intel has renamed its version to Intel 64, and it is supported on all Core chips.
AMD64/Intel 64 brings quite a benefit to the x86 world. For starters, x86-64 contains a huge step forward for programmers with the introduction of "relative pointers." Without going into an entire programming explanation, pointers are reference points in code (in this case the translated machine code, or assembly) that tell a program where to go next or where to look for its next piece of data. These pointers previously needed to be absolute, meaning that you needed to know the exact memory address or register you wanted to access.
This type of programming can be very inefficient, as it requires an absolute understanding of which addresses and registers are free at the time. If an address was already in use or otherwise couldn't be written, the program would crash with a general protection fault. It also meant that little program pieces were strewn about free memory addresses and registers rather than intelligently organised.
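To make that concrete, here's a rough C sketch of the idea. It's purely conceptual - the real mechanism in x86-64 is RIP-relative addressing, which lives in the machine code the compiler emits, not in C source - but the contrast between "hard-coded location" and "base plus offset" is the heart of it:

```c
#include <stdio.h>

static const char table[] = "ABCDEFGH";

const char *absolute_style(void) {
    /* Old style: the generated code bakes in the full address of
       'table', so the program must sit at a known location in
       memory (or be patched at load time). */
    return &table[3];
}

const char *relative_style(const char *base, long offset) {
    /* Relative style: only an offset is encoded; the base is
       wherever the loader happened to place the program. The code
       is position-independent. */
    return base + offset;
}

int main(void) {
    printf("%c %c\n", *absolute_style(), *relative_style(table, 3));
    return 0;
}
```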
The AMD64 architecture was the first of its kind.
One of the most controversial features of the x86-64 instruction set has been the addition of the NX bit. This is a kernel security feature, and is short for "No eXecute". By using the NX bit to flag regions of memory, an operating system can prevent the code held there from ever being executed - any attempt triggers a fault. Think of it as a "write protect" switch for memory, only guarding execution instead of writing.
The NX bit was implemented to help prevent one of the weaknesses of the x86 architecture ever since the 286 - buffer overruns. Of course, there have been various other theorised uses for the NX bit, including (but not limited to) its possible use as a hardware-level DRM.
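To see the NX idea from the software side, here's a minimal POSIX sketch - an illustration of the concept, assuming a Linux or BSD system - that requests a page of memory with no execute permission:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Ask for one page that is readable and writable - but NOT
       executable. Data copied here can never be run as code. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(page, "\xC3", 1);  /* x86 'ret' instruction - harmless, but code */

    /* Jumping into this page would trigger a segmentation fault on
       an NX-capable system, which is exactly the point:
       ((void (*)(void))page)();   // would crash - left commented out */

    munmap(page, 4096);
    return 0;
}
```

This is how a buffer-overrun exploit gets stopped: the attacker can still write malicious bytes into a data buffer, but the NX flag means the CPU will refuse to execute them.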
On top of this, there are even some benefits for 32-bit code executed on 64-bit systems. Both Intel and AMD processors have the ability to "double up" certain 32-bit instructions, running two commands at once instead of one command per clock. Though this isn't universally functional for all instructions or data, it can provide some nice little speed enhancements over the course of a running program.
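The exact mechanism is down to the hardware, but the flavour of "two 32-bit operations inside one 64-bit operation" can be sketched in software with a classic SIMD-within-a-register (SWAR) trick - purely an illustration, not what the CPU literally does:

```c
#include <stdio.h>
#include <stdint.h>

/* Add two pairs of 32-bit values with one 64-bit addition. The top
   bit of each 32-bit lane is masked off before the add so no carry
   can leak between lanes, then restored with an xor. */
static uint64_t add_two_lanes(uint64_t a, uint64_t b) {
    const uint64_t HI = 0x8000000080000000ULL;  /* top bit of each lane */
    uint64_t sum = (a & ~HI) + (b & ~HI);       /* low 31 bits per lane */
    return sum ^ ((a ^ b) & HI);                /* recombine top bits   */
}

int main(void) {
    uint64_t a = ((uint64_t)7 << 32) | 10;  /* lanes: 7 and 10 */
    uint64_t b = ((uint64_t)5 << 32) | 20;  /* lanes: 5 and 20 */
    uint64_t r = add_two_lanes(a, b);
    printf("%u %u\n", (unsigned)(r >> 32), (unsigned)(r & 0xFFFFFFFFu));  /* 12 30 */
    return 0;
}
```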
Feed me, Seymour!
All of the instructions, increased maximum data sizes and memory addresses won't do any good without the ability for the rest of the system to transport the same size chunks of information. In particular, data has to be able to flow between the northbridge, RAM and CPU with at least the same data width in order for 64-bit extensions to function well.

Fortunately, bus width is something that seems to largely stay a step ahead of the curve. All northbridge chipsets on the Intel side since the 915 have supported a full 64-bit bus, as have most Athlon 64 chipsets since Nvidia's nForce3.
For Intel, memory bandwidth has been more of a problem due to the lack of an integrated memory controller, causing a greater slowdown as data and instructions get routed through the northbridge before being stored in memory.
However, AMD's HyperTransport system has come with its own weaknesses - in order to stay compatible with 32-bit execution, HyperTransport sends a pair of 32-bit words. To link the two words together, a few bits from each word are used as a common identifier. By the time the system has flagged each word as part of a memory address, marked whether the address is NX space or not and added the linking bits, the 64-bit bus can only carry 40-bit memory addresses.
The drop from 64-bit down to 40-bit addressing sounds tremendous, but what remains is assuredly well beyond what any user is likely to encounter. We're talking a decrease from 16 exabytes to the neighbourhood of 1TB - still far more RAM than modern desktops will have in the foreseeable future, and likely well in excess of what will be readily available even by the turn of 128-bit processing.
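For the curious, the arithmetic in a quick sketch:

```c
#include <stdio.h>

int main(void) {
    /* An n-bit address reaches 2^n bytes. */
    unsigned long long b40 = 1ULL << 40;  /* 1,099,511,627,776 bytes = 1TB */
    unsigned long long b64 = ~0ULL;       /* 2^64 - 1: ~16 exabytes        */
    printf("40-bit space: %llu bytes (1TB)\n", b40);
    printf("64-bit space: %llu bytes (~16EB)\n", b64);
    return 0;
}
```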
Program! Get yer program!
Of course, as with every massive step forward, there are some weak points to 64-bit computing. Probably the greatest weakness of all is that in order to gain any of its benefits, the software you use must be made for it.

As we mentioned earlier, 64-bit programming is not compatible with 32-bit, and vice-versa. Therefore, you can't run a 32-bit OS and expect to use any features of 64-bit execution - the code won't be compiled to use any of the new features, even ones that seem like they could be implemented in any programming method (such as relative pointers). The same goes for software - if it's programmed in 32-bit, it will run as a 32-bit program from start to finish.
Therefore, the first place to look when you're interested in delving into 64-bit computing is your OS. At its most basic, the OS isn't there to just be a pretty little shell around your programs - it's there to help your software interact with your hardware. So if your OS isn't 64-bit, you can't expect it to use the x86-64 extensions, or even execute 64-bit software.
All major operating systems these days come in a 64-bit version. It's worth noting that Windows Vista is considerably more advanced in its 64-bit features than Windows XP x64, but both are what are known as hybrids - and you can add Mac OS X (Tiger and Leopard) to that list. Each of these OSes provides both a 32-bit and a 64-bit execution path, allowing both kinds of instructions simultaneously and thus not requiring specially coded 64-bit versions of all programs.
Because they offer the ability to run 64-bit programs, these hybrid operating systems make use of 64-bit memory addresses. In order to allow 32-bit programs to access the upper memory addresses and certain bits of the 64-bit instructions (mostly ones designed to simplify repetitive 32-bit commands and piping), programs are run in protected space similar to emulation. This allows the OS to get the "best of both worlds" with 32-bit code and not sacrifice any performance for 64-bit code.
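On Windows, that protected space is the WOW64 compatibility layer, and a 32-bit program can ask whether it's running inside it. Here's a minimal sketch using the IsWow64Process API (available since XP SP2; on older systems you'd need to look the function up dynamically via GetProcAddress):

```c
#include <stdio.h>
#include <windows.h>

/* Compile as a 32-bit program and run it on 64-bit Windows to see
   it report "yes" - proof it's being hosted by the WOW64 layer. */
int main(void) {
    BOOL underWow64 = FALSE;
    if (IsWow64Process(GetCurrentProcess(), &underWow64)) {
        printf("Running under WOW64: %s\n", underWow64 ? "yes" : "no");
    } else {
        printf("IsWow64Process failed: %lu\n", GetLastError());
    }
    return 0;
}
```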
Linux/Unix/BSD users are given a whole different choice. The 'nix universe has developed kernels designed specifically to run only 64-bit, and the vast group of coders in the OSS underground have created programs to suit. This leaves Linux as a natural choice for people looking to see exactly how vastly improved 64-bit computing can be. Users have found speed boosts, security improvements, compiling benefits and a myriad of other little gains.
Once into the applications, users of Mac and Windows systems will find themselves a little shorter of options, but far from barren. The latest offerings from Adobe all feature 64-bit instructions, as do some of the newer modelling/rendering software. You can even find 64-bit "enhancements" in games all the way back to Far Cry, and Valve's Source engine has some significant speed improvements for those running a 64-bit OS.
Driving me crazy
No step forward is without at least one little step back, of course, and 64-bit computing is absolutely no exception. If there is one thing slowing the uptake of 64-bit, it's support from manufacturers in the form of drivers (or modules, for those in the 'nix community). A trip around even some big-name websites like Linksys and Creative will leave you with a sour taste from the word "go," wondering where all the love has gone.

However, many of the driver issues that seem to plague 64-bit are actually little more than lack of publicity and slow market adoption. Drivers must be completely re-coded for a 64-bit OS, and moving to one means a full reinstall - something many home users aren't willing to undertake. Further, most businesses are running on systems that can be years old, many of which can't support 64-bit due to hardware limitations.
Because of this, "big-name" hardware manufacturers have largely been slow to develop proper drivers for 64-bit operating systems. However, many of these same companies are simply board partners, especially in the networking sector. For instance, most Linksys cards nowadays use Broadcom and Ralink chips, which have had 64-bit drivers for quite some time. A quick search on some forums will bring you to wrapped drivers for your particular hardware.
These same trials have been worked through by Linux lovers for a few years now, so most common technologies have readily discoverable drivers if (for whatever reason) something doesn't work out of the box. For what it's worth, this writer has installed Vista 64 on five setups with completely different hardware, and never found a piece that didn't work unless it was tremendously obscure to begin with.
It's not all beer and skittles
Aside from driver support, 64-bit computing is pretty much a step forward in nearly every way. However, you can't expect all that improvement for nothing - there are some further costs on the hardware side.

First of all, for 64-bit computers, more memory isn't just a perk, it's more of a necessity. Each instruction, memory address and so on takes up more room in your RAM and cache due to the increased length of pointers, protected space and the rest. Where a 32-bit OS may be able to execute a program in 100MB, the same program on a 64-bit OS can take 105-110MB. Often the difference is negligible, but it's an important consideration.
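A quick sketch of where that growth comes from - wider pointers. The same pointer-heavy structure takes roughly twice the pointer overhead on a 64-bit build:

```c
#include <stdio.h>

/* A pointer-heavy structure, like a linked-list or tree node.
   On a 32-bit build each pointer costs 4 bytes; on a 64-bit
   build, 8 bytes - so the identical data structure eats more RAM. */
struct node {
    struct node *prev;
    struct node *next;
    int          value;
};

int main(void) {
    /* Typically 12 bytes on a 32-bit build, 24 bytes on a 64-bit
       build (two 8-byte pointers plus alignment padding). */
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));
    return 0;
}
```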
Also, due to the lack of current commercial penetration of 64-bit operating systems and computers, most software is not making use of it just yet. However, as the old 32-bit computers begin to phase out and become generally obsolete, more software writers will begin to take advantage of the new systems.
Conclusion
Intel's Conroe CPU comes complete with Intel 64 technology.
Until that time, those who use 64-bit systems can enjoy the perks - slightly sped-up processes on doubled-up 32-bit code, a full 4GB of RAM and beyond, protected execution, more stable and accurate code thanks to the increased data size, and enhanced overall system efficiency thanks to relative pointers, just to name a few. And with the software that has already arrived - the likes of Adobe Photoshop, Maya, Vue Infinite, some CAD programs and quite a few games - you'll have plenty of toys that get a little boost right now.
With that being said, it's still not quite a perfect system - the steeper hardware requirements in processing power and memory versus the lack of current software leave it more a case of "If you have it, you should use it" rather than "I need this now!" But whether you are intrigued enough to take the plunge into Vista, a new 'nix build or stick with your good old Windows XP, one thing is for certain - 64-bit is here to stay, it's going to grow, and it's more than just the RAM.
Thanks for reading!