
Explanation of the bus system


I don't know about you, but I personally find my head spinning whenever I think about the bus setup of a modern computer. CPU, memory and AGP bandwidth, plus newer technologies like HyperTransport, always leave me confused, especially when talking to someone who can exploit any scepticism in your own knowledge.

So I thought I would write this up for reference, and hopefully it will serve as an understandable guide for anyone who wants to learn more about the topic.

Throughout this article we will try to build an understanding of all the buses that tie a computer system together, and hopefully learn to see past the marketing that preys on our lack of understanding.

The aim

Computers are not being marketed these days from a purely technical standpoint. All retailers or manufacturers will try to give their product an edge over very similar products in their category. Graphics cards and motherboards are an excellent example of this now. Different names, the same technology.

Marketing even goes so far as to deviate from the correct technical terms. Kilo, mega and giga do not mean quite the same thing when it comes to making the numbers "easy" for Joe Public.

Technically and correctly:

A bit is a single unit of information, represented as a 1 or a 0.

There are 8 bits in a byte

There are 1024 bytes in a kilobyte (KB)

There are 1024KB in a megabyte (MB)

There are 1024MB in a gigabyte (GB)

Incidentally, although not used in this article …

There are 1024GB in a terabyte (TB)

Multiplying through by 1024 * 1024 * 1024 is awkward and does not produce nice results for marketing.

Instead, they move to multiples of 1,000. 1000 bytes per kilobyte, 1,000 kilobytes per megabyte, and so on. This provides nice round numbers.

Take this for example (we'll cover the calculations later):

Technically:

PC2100 DDR memory / DDR266 memory

64 (bits) * 266,000,000 (Hz) = 17,024,000,000 bits/sec

(17,024,000,000 / 8) / (1024 * 1024) = 2029.4MB/s

Marketing:

PC2100 DDR memory / DDR266 memory

64 (bits) * 266,000,000 (Hz) = 17,024,000,000 bits/sec

(17,024,000,000 / 8) / (1000 * 1000) = 2128MB/s

Convenient, don't you think? Not only does it conjure up an extra 100MB/s of magic bandwidth, it also gives a nice round number (no decimal places, etc.).
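To make the difference concrete, here is a minimal Python sketch of the two calculations (an illustration added for this write-up, not something from the original article):

def bandwidth_mb_per_s(width_bits, transfers_per_s, divisor):
    # Bandwidth in "MB/s", where the meaning of a megabyte depends on the divisor used.
    bytes_per_s = width_bits * transfers_per_s / 8
    return bytes_per_s / divisor
technical = bandwidth_mb_per_s(64, 266_000_000, 1024 * 1024)  # binary megabytes
marketing = bandwidth_mb_per_s(64, 266_000_000, 1000 * 1000)  # decimal megabytes
print(f"Technical: {technical:.1f} MB/s")  # ~2029.4 MB/s
print(f"Marketing: {marketing:.1f} MB/s")  # 2128.0 MB/s

Same memory, same clock, roughly 100MB/s of difference purely from the choice of divisor.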

Latency

The problem with highly clocked modern CPUs is latency. The processor clock speed (we'll use 1.73GHz as an example) is far ahead of the comparatively modest speeds of the memory bus, AGP bus and so on. The CPU frequently finds itself forced to wait until the rest of the system can catch up.

An example illustrates this best:

A processor on a 133MHz bus running at 1.73GHz has a clock multiplier of 13 (13 * 133 = 1733).

# The CPU sends a request to system memory for information, then waits for one cycle (usually known as the command rate, 1T)
# The memory goes through what is known as the RAS-to-CAS delay
# The memory then takes further time to locate the data, known as the CAS latency

So while the CPU waited for one CPU cycle and then roughly four memory bus cycles, in its own terms it had to wait 1 + (4 * multiplier) CPU cycles to get the data it was after. For every cycle the memory bus completes, the CPU has ticked through 13 of its own. Not much when you consider that a 1.73GHz CPU has 1.73 billion cycles per second, but how often does the CPU go out to main memory? Very often, and it all adds up.
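As a rough sketch of that arithmetic (illustrative figures only, not a precise model of any particular memory controller):

multiplier = 13                # CPU clock = 13 * 133MHz, roughly 1.73GHz
command_rate_cpu_cycles = 1    # initial wait, in CPU cycles
memory_bus_cycles = 4          # assumed RAS-to-CAS plus CAS latency, in memory bus cycles
stall_cpu_cycles = command_rate_cpu_cycles + memory_bus_cycles * multiplier
print(stall_cpu_cycles)        # 53 CPU cycles lost per access under these assumptions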

Memory

We will look at 3 different types of computer memory in this article.

# SDR-SDRAM (Single Data Rate Synchronous Dynamic Random Access Memory) – SDR-SDRAM was the dominant memory of the late 1990s. It was commonly available at 66/100/133MHz as standard. This type of memory was used by both Intel and AMD platforms, lingering on even with the i845/845G chipsets for the Pentium 4 processor. Later on we will show why that pairing is a distinct waste of the CPU's potential.

# DDR-SDRAM (Double Data Rate Synchronous DRAM) – DDR-SDRAM took over where SDR left off. DDR memory became synonymous with AMD (Thunderbird/XP/Thoroughbred) systems, and it looks set to be the mainstream memory of the foreseeable future, with DDR-II on the horizon.

# RDRAM (RAMBUS Dynamic Random Access Memory) – Although it only became popular on the mainstream PC market via the Intel Pentium 4 processor, RDRAM technology actually predates DDR memory.

Bandwidth calculations

To avoid confusion later, here is the quick reference again for bits, bytes, kilo, mega and giga…

A bit is a single unit of information, represented as a 1 or a 0.

There are 8 bits in a byte

There are 1024 bytes in a kilobyte (KB)

There are 1024KB in a megabyte (MB)

There are 1024MB in a gigabyte (GB)

Incidentally, although not used in this article …

There are 1024GB in a terabyte (TB)

SDR-SDRAM

To calculate memory bandwidth we need to know two things: the data width and the operating frequency. The latter is the easier of the two to find, because it is usually part of the marketing/retail name.

We usually see SDR at 100 or 133MHz. Taking 133MHz as an example, this means the memory can perform 133 million transfers every second.

The data width is simply something you have to look up. SDR has a 64-bit (8-byte) data width.

PC100 SDR memory

The calculation is as follows: data width * operating frequency = bandwidth (in bits/sec)

To convert this to more familiar and manageable figures, divide the result by 8 to give bytes/sec, then divide by 1024 to get kilobytes/sec, and by 1024 again to get megabytes/sec.

Thus: 64 (bits) * 100,000,000 (Hz) = 6,400,000,000 bits/sec

(6,400,000,000 / 8) / (1024 * 1024) = 762.9MB/s of memory bandwidth.

PC133 SDR memory

By using the same formula as we did for PC100 SDR memory, we can easily calculate the theoretical bandwidth of PC133 SDR memory.

64 (bits) * 133,000,000 (Hz) = 8,512,000,000 bits/sec

(8,512,000,000 / 8) / (1024 * 1024) = 1014.7MB/s, or about 1GB/s of memory bandwidth.
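Since the same formula keeps reappearing, here is a small Python helper (an addition for this write-up, not part of the original article) that reproduces the SDR figures:

def mem_bandwidth_mb(width_bits, frequency_hz):
    # Theoretical bandwidth in binary MB/s: bits per second, over 8, over 1024 * 1024.
    bits_per_s = width_bits * frequency_hz
    return (bits_per_s / 8) / (1024 * 1024)
print(f"PC100: {mem_bandwidth_mb(64, 100_000_000):.1f} MB/s")  # ~762.9
print(f"PC133: {mem_bandwidth_mb(64, 133_000_000):.1f} MB/s")  # ~1014.7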

DDR-SDRAM

DDR memory is a little more complicated to understand, for two reasons. First, DDR memory can transfer data on both the rising and the falling edge of the clock cycle, which means DDR theoretically doubles the memory bandwidth of a system able to use it.

Second, as a marketing ploy to compete with the rival technology of the day, RAMBUS, DDR was sold according to its approximate theoretical bandwidth. Much like AMD's PR ratings on today's XP processors, people buy numbers, and DDR looks faster sold as PC1600 and PC2100 rather than PC200 and PC266.

PC1600 DDR memory / DDR200 memory

DDR memory has the same data width as SDR: 64 bits.

We use the same calculation to work out bandwidth, just with the doubled effective frequency.

64 (bits) * 200,000,000 (Hz) = 12,800,000,000 bits/sec

(12,800,000,000 / 8) / (1024 * 1024) = 1525.9MB/s.

Note that the bandwidth is twice that of the PC100 SDR.

PC2100 DDR memory / DDR266 memory

64 (bits) * 266,000,000 (Hz) = 17,024,000,000 bits/sec

(17,024,000,000 / 8) / (1024 * 1024) = 2029.4MB/s, or about 2GB/s of memory bandwidth.

As memory yields improve, modules capable of running at ever higher clock speeds are released onto the market. PC2700 finally gets official support with the AMD XP2700+/2800+ and Intel's i845PE chipset.

Here are some bandwidths for the latest available memory:

PC2700 DDR memory / DDR333 memory

64 (bits) * 333,000,000 (Hz) = 21,312,000,000 bits/sec

(21,312,000,000 / 8) / (1024 * 1024) = 2540.6MB/s.

PC3200 DDR memory / DDR400 memory

64 (bits) * 400,000,000 (Hz) = 25,600,000,000 bits/sec

(25,600,000,000 / 8) / (1024 * 1024) = 3051.8MB/s.

PC3500 DDR memory / DDR434 memory

64 (bits) * 434,000,000 (Hz) = 27,776,000,000 bits/sec

(27,776,000,000 / 8) / (1024 * 1024) = 3311.2MB/s.
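Using the same idea, a short loop (purely illustrative, added for this write-up) reproduces all the DDR figures above; note that the DDR ratings already quote the doubled effective transfer rate:

ddr_speeds = {"PC1600/DDR200": 200, "PC2100/DDR266": 266, "PC2700/DDR333": 333,
              "PC3200/DDR400": 400, "PC3500/DDR434": 434}
for name, mts in ddr_speeds.items():
    bits_per_s = 64 * mts * 1_000_000          # 64-bit width * effective transfers per second
    mb_per_s = (bits_per_s / 8) / (1024 * 1024)
    print(f"{name}: {mb_per_s:.1f} MB/s")
# 1525.9, 2029.4, 2540.6, 3051.8 and 3311.2 MB/s respectively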

RDRAM

RDRAM is a little more complicated in that the bus runs double-pumped like DDR, but the memory is organised as narrow 16-bit channels used in pairs. What does this mean in practice? Currently, two sticks of RDRAM must be installed in the system at a time. DDR's advantage (usually from a cost perspective) is the ability to use a single DIMM.

The calculation is basically the same; we just need to account for the extra channel and the higher effective memory speed.

PC800

16 (bits) * 800,000,000 (Hz) = 12,800,000,000 bits/sec

(12,800,000,000 / 8) / (1024 * 1024) = 1525.9MB/s. Times two for the dual-channel configuration – 3051.8MB/s

PC1066

16 (bits) * 1,066,000,000 (Hz) = 17,056,000,000 bits/sec

(17,056,000,000 / 8) / (1024 * 1024) = 2033.2MB/s. Times two for the dual-channel configuration – 4066.4MB/s
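The RDRAM figures follow the same pattern, just with a 16-bit channel width and a dual-channel multiplier. A quick sketch of the arithmetic above (again, an illustration added here, not from the original):

def rdram_bandwidth_mb(effective_mhz, channels=2, width_bits=16):
    # Theoretical RDRAM bandwidth in binary MB/s at the given effective clock.
    bits_per_s = width_bits * effective_mhz * 1_000_000 * channels
    return (bits_per_s / 8) / (1024 * 1024)
print(f"PC800 dual channel:  {rdram_bandwidth_mb(800):.1f} MB/s")   # ~3051.8
print(f"PC1066 dual channel: {rdram_bandwidth_mb(1066):.1f} MB/s")  # ~4066.4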

nForce

nForce is special because it heralds the future of memory interfaces, at least for DDR. Its dual-channel DDR technology provides two 64-bit channels instead of one, making an effective 128-bit memory bus. This allows twice the bus bandwidth.

Although dual-channel DDR never made a huge impact on nForce memory bandwidth (or so the benchmarks tell us), it holds great potential for one recent convert to DDR.

Intel, once the great champion of RAMBUS/RDRAM on the Pentium 4, has vowed to move away from the serial memory technology and embrace DDR. Unfortunately, as the memory bandwidth calculations above showed, DDR in its current form has neither the bandwidth nor the headroom to scale up to what RDRAM already delivers.

Dual-channel DDR will make a big difference on Pentium 4 chipsets. The P4, with its quad-pumped (QDR) front side bus, can consume up to around 4GB/s of bandwidth, which dual-channel PC1066 RDRAM matches almost exactly. The fastest DDR currently available, on the other hand, PC3500, offers only about 3.1GB/s. The P4 is starved by current single-channel DDR chipsets.
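To see why, here is a rough comparison (illustrative figures, assuming a 133MHz quad-pumped front side bus) of what the P4 can consume against what single and dual-channel DDR can supply:

def bus_mb(width_bits, clock_hz, transfers_per_clock=1, channels=1):
    # Theoretical bandwidth in binary MB/s.
    return (width_bits * clock_hz * transfers_per_clock * channels / 8) / (1024 * 1024)
p4_fsb        = bus_mb(64, 133_000_000, transfers_per_clock=4)  # quad-pumped FSB, ~4058.8
single_pc3500 = bus_mb(64, 434_000_000)                         # fastest single-channel DDR, ~3311.2
dual_pc2100   = bus_mb(64, 266_000_000, channels=2)             # dual-channel DDR266, ~4058.8
print(f"{p4_fsb:.1f} {single_pc3500:.1f} {dual_pc2100:.1f}")

Under these assumptions, dual-channel DDR266 lines up with the P4's front side bus almost perfectly, while the fastest single-channel DDR falls well short.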

Doubling the available memory bandwidth is therefore exactly what Intel is looking for.

PCI Bus

The PCI bus is one of the oldest buses in a modern system. It is the link that connects all the expansion cards in the system to the chipset, along with IDE and USB traffic.

The PCI bus is 32 bits wide and operates at 33MHz. With our now familiar calculation, we can easily work out its maximum bandwidth.

32 (bits) * 33,000,000 (Hz) = 1,056,000,000 bits/sec

(1,056,000,000 / 8) / (1024 * 1024) = 125.9MB/s, usually quoted as 133MB/s

It is easy to imagine that with modern ATA133 hard drives, PCI network adapters, sound cards and the like, the PCI bus can quickly become saturated. There are three ways around this problem, two of which have already been implemented, as the calculations and the short sketch below show.

# Expanding the bus – Server motherboards, especially those with SCSI hard drives that demand more bandwidth than the standard PCI bus can deliver, have moved to a 66MHz bus with 64-bit slots. This gives four times the available bandwidth.

64 (bits) * 66,000,000 (Hz) = 4,224,000,000 bits/sec

(4,224,000,000 / 8) / (1024 * 1024) = 503.5MB/s, usually quoted as 533MB/s

# Moving to a dedicated bus – the obvious example here is graphics cards. With the ever increasing speeds required to handle complex games, the old PCI bus simply could not move the huge amount of information between the graphics card and the north bridge fast enough. Thus the AGP bus was born. A direct 32-bit link from the AGP slot to the chipset at 66MHz provides a maximum bandwidth of:

32 (bits) * 66,000,000 (Hz) = 2,112,000,000 bits/sec

(2,112,000,000 / 8) / (1024 * 1024) = 251.8MB/s, usually quoted as 266MB/s
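Here is a short sketch tying the three bus variants together (again just reproducing the arithmetic above, added for this write-up):

def bus_bandwidth_mb(width_bits, clock_hz):
    # Theoretical bandwidth in binary MB/s.
    return (width_bits * clock_hz / 8) / (1024 * 1024)
print(f"PCI 32-bit/33MHz: {bus_bandwidth_mb(32, 33_000_000):.1f} MB/s")  # ~125.9
print(f"PCI 64-bit/66MHz: {bus_bandwidth_mb(64, 66_000_000):.1f} MB/s")  # ~503.5
print(f"AGP 32-bit/66MHz: {bus_bandwidth_mb(32, 66_000_000):.1f} MB/s")  # ~251.8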

IDE

IDE hard drives transfer data to the CPU and back via the PCI bus. Of course, this means any transfers are limited to the PCI bus ceiling of roughly 133MB/s, which is why ATA133 is as fast as parallel IDE will ever get (not that current drives actually saturate it anyway).

Recent innovations have attempted to take IDE transfers off the PCI bus. VIA's V-Link technology, for example, is a dedicated 266MB/s bus between the south bridge and the north bridge.

Serial ATA

The successor to IDE. Why is this in the PCI section? Well, despite all the fuss, current Serial ATA controllers still use the PCI bus to move their data. SATA150, with a theoretical maximum transfer rate of 150MB/s, is throttled by the measly 133MB/s of the PCI bus. Future chipsets with native Serial ATA support will take the load off the PCI bus and give the drives direct access to the chipset, possibly over a dedicated bus. That will be essential for the next generations of SATA devices, which are intended to run at 300 and 600MB/s.

AGP Bus

As touched on in the PCI section above, the AGP bus was born to accommodate the ever increasing bandwidth needs of graphics cards. Simply put, the 133MB/s the PCI bus offers could not handle anything much faster than the Voodoo 3, one of the last PCI graphics cards.

The AGP bus is 32 bits wide like the PCI bus, but runs at 66MHz, giving it a maximum bandwidth of 266MB/s. This was known as AGP 1x.

Much like the quad-pumped bus of the Intel Pentium 4, the AGP bus was redesigned to allow two, then four data transfers per clock cycle. These are known as AGP 2x and 4x, and AGP 8x has recently been introduced.

Each revision of AGP doubles the bandwidth of the previous standard, as the list and the short sketch below show:

# AGP1x = 266MB/s
# AGP2x = 533MB/s
# AGP4x = 1066MB/s
# AGP8x = 2132MB/s
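A tiny illustrative loop (added here, not from the original) generating the same progression:

agp_1x_mb = 266  # AGP 1x: 32-bit bus at 66MHz, ~266MB/s as commonly quoted
for n, name in enumerate(["AGP 1x", "AGP 2x", "AGP 4x", "AGP 8x"]):
    print(f"{name}: ~{agp_1x_mb * 2 ** n} MB/s")
# ~266, ~532, ~1064, ~2128 MB/s; the quoted 266/533/1066/2132 come from the true 66.6MHz clock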

HyperTransport

In all walks of life, things move on. Standards drawn up ten or more years ago cannot hope to scale to today's needs.

Just as the old 8-bit ISA bus was replaced by the PCI bus, so the ageing PCI bus needs to be phased out and a new interconnect protocol defined. The main contender for the throne at the moment is HyperTransport.

A consortium led by AMD hopes to make HyperTransport the de facto interconnect protocol for the foreseeable future.

What is HyperTransport?

HyperTransport is primarily a point-to-point link designed for speed, scalability and the standardisation of the various system buses we have today. The same kind of link could be used to fetch data from a network card or from a bank of DDR memory.


HyperTransport should eliminate most of the bottlenecks in today's systems. The PCI bus, as described earlier, is easily saturated by today's high-bandwidth peripherals.

As far as speed goes, HyperTransport is (at present) able to provide a throughput of up to 51.2Gbit/s.

Using a 2-bit link at a 500MHz clock rate as an example:

2 (bits) * 500,000,000 (Hz) = 1,000,000,000 bits/sec

(1,000,000,000 / 8) / (1024 * 1024) = 119.2MB/s – with HyperTransport's DDR signalling, this doubles to 238.4MB/s.

Or, to use gigabits (mainly because it looks more impressive):

1,000,000,000 / (1024 * 1024 * 1024) = 0.93Gbit/s (roughly 1Gbit/s). With DDR signalling this becomes roughly 2Gbit/s.
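The same arithmetic, parameterised by link width and clock (just a sketch of the example above; real HyperTransport links range from 2 to 32 bits wide):

def ht_throughput(width_bits, clock_hz, ddr=True):
    # One-way throughput as (binary MB/s, binary Gbit/s); DDR signalling doubles transfers per clock.
    transfers_per_s = clock_hz * (2 if ddr else 1)
    bits_per_s = width_bits * transfers_per_s
    return (bits_per_s / 8) / (1024 * 1024), bits_per_s / (1024 ** 3)
mb, gbit = ht_throughput(2, 500_000_000)
print(f"2-bit link at 500MHz with DDR: {mb:.1f} MB/s, {gbit:.2f} Gbit/s")  # ~238.4 MB/s, ~1.86 Gbit/s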

We can already see HyperTransport in today's technology thanks to one company adopting the standard. NVIDIA's nForce (and, of course, nForce2) uses HyperTransport as its primary interconnect, providing 800MB/s (nForce) and 1600MB/s (nForce2). These HyperTransport links are not the fastest possible, but they are more than enough for today's components.

VIA has also licensed HyperTransport for use in its upcoming chipsets for AMD's K8 Hammer, so the protocol will certainly see wider adoption in the future.

Round-up

Before we talk about what is coming, let us briefly cover where things stand at the moment.

There are many pitfalls when deciding on a new computer system, for home and corporate users alike. As always, the technical details are buried under a large pile of marketing. Minor advances in technology that in reality achieve very little are heralded as "the next big thing", but a quick glance below the surface often shows that this is not the case.

It pains me to see users asking whether they should upgrade a VIA KT266A-based motherboard to a VIA KT333 chipset because "it should be faster". Bigger numbers mean faster, right? Wrong. A balanced system is what lets you squeeze the most out of your setup, be it for games, CAD or other intensive work. No one wants to spend money unnecessarily, so read this article again, get to know the numbers involved and come to your own conclusions.

The future

We have briefly covered aspects of the future of I/O buses. HyperTransport and PCI-Express are on the horizon, or in some cases already here. Now we need peripherals and components that take advantage of the additional bandwidth. For the moment, there seems to be a bottleneck wherever you look.

Hopefully manufacturers will eventually settle on fewer buses, which would be less confusing for the consumer and would also make computers less complicated. Take USB 2.0 and FireWire (not covered in this article), two competing protocols that do essentially the same thing: hot-pluggable, expandable, high-bandwidth connections. Why not settle on one and stick to it?

Anyway, rant over. We hope you enjoyed this article. It will be updated as new technologies emerge in this ever-changing industry.

At the end of the day, this is a reference for all of us.
