Veteran Member
|
Broadcom has new single chip iSCSI, TOE, RDMA
Broadcom is demonstrating its industry-leading NetXtreme II™ C-NIC (converged network interface controller) technology at this week's Windows Hardware Engineering Conference in Booth #203. This single-chip C-NIC demonstration simultaneously runs three functions over Ethernet -- accelerated TOE (TCP Offload Engine) for data networking, accelerated iSCSI for block storage networking, and accelerated RDMA (Remote Direct Memory Access) for high-performance server clustering. By converging disparate network traffic over Ethernet, Broadcom's NetXtreme II C-NICs enable a lower total cost of ownership (TCO) versus configuring and running three separate and disparate networks.

OK, so what the hell are these technologies, you ask?

RDMA

Remote Direct Memory Access is a concept whereby two or more computers communicate via Direct Memory Access, directly from the main memory of one system to the main memory of another. Since no CPU, cache, or context-switching overhead is needed to perform the transfer, and transfers can continue in parallel with other system operations, this is particularly useful in applications that need high-throughput, low-latency networking, such as massively parallel Linux clusters. Pretty cool, huh? :smokey: It gets better. RDMA is often used with InfiniBand, but an alternate proposal is RDMA over TCP/IP, in which the TCP/IP protocol is used to move the data over a commodity networking technology such as Gigabit Ethernet. Unlike conventional TCP/IP implementations, the RDMA implementation would have its TCP/IP stack implemented on the network adapter card, which would thus act as an I/O processor, taking up the load of RDMA processing. Imagine a cluster of Xserves that can transfer data between them without having to pass the data through the TCP/IP stack and kernel. Speeeeeeed.

iSCSI

The iSCSI protocol uses TCP for its data transfer.
Unlike other network storage protocols, such as Fibre Channel (the foundation of most SANs), it requires only the simple and ubiquitous Ethernet interface (or any other TCP/IP-capable network) to operate. This enables low-cost centralization of storage without all of the expense and incompatibility normally associated with Fibre Channel storage area networks. Critics of iSCSI expect worse performance than Fibre Channel due to the protocol overhead TCP/IP adds to the communication between client and storage. However, new techniques like the TCP Offload Engine (TOE) help reduce this overhead, and tests have shown excellent performance from iSCSI SANs whether TOEs or plain Gigabit Ethernet NICs were used.

iSCSI is cool. Currently, with a Fibre SAN you have to consolidate storage behind your Fibre switch. How would you like to use your Gigabit network for easy placement of storage resources while maintaining excellent performance? iSCSI allows this if your network is up to snuff. How does 200 MB/s duplex sound to you? It works by transferring the (small) SCSI protocol commands over the network, so it works across LANs, MANs, WANs, or whatever you have that's IP. No, this isn't NAS -- NAS only works at the file level, so it never appears as direct-attached storage.

TCP Offload Engine

Networking speeds have risen steadily over the years. What started as a protocol built for unreliable, low-speed networks (a few kilobytes per second) is now required to run at 1 gigabit per second. TCP software implementations on host systems require extensive computing power. Gigabit TCP communication alone can eat up 100% of a 2.4 GHz Pentium processor. See? We're choking off computer performance with Gigabit.

Broadcom now has a single-chip solution (read: more affordable) that can handle three hot new technologies. I can see this hitting future Xserves and then trickling down to Power Macs. Xgrid might be able to hop right on RDMA to speed up clustered performance.
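To put that "100% of a 2.4 GHz Pentium" figure in perspective, here's a rough cycle-budget sketch. These are my own illustrative numbers (full-size Ethernet frames, the whole CPU dedicated to TCP), not measurements:

```python
# Back-of-the-envelope look at why line-rate TCP strains a host CPU.
# Assumptions: full 1500-byte frames, CPU doing nothing but TCP work.

LINK_BPS = 1_000_000_000      # 1 Gb/s Ethernet
MTU_BYTES = 1500              # standard Ethernet frame payload
CPU_HZ = 2_400_000_000        # the 2.4 GHz Pentium from the post

packets_per_sec = LINK_BPS / (MTU_BYTES * 8)   # ~83,000 full-size frames/s
cycles_per_packet = CPU_HZ / packets_per_sec   # cycle budget per frame

print(f"{packets_per_sec:,.0f} frames/s -> {cycles_per_packet:,.0f} cycles per frame")
```

Roughly 29,000 cycles per frame to do checksums, copies, interrupts, and protocol bookkeeping -- which is exactly the work a TOE pulls off the host.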
Imagine if we one day get quad 3 GHz chips that can blast out Gigabit Ethernet with barely any CPU intervention. I want it... and you should too. omgwtfbbq |
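The RDMA idea described earlier can be sketched as a toy model: the "NIC" moves bytes straight between two hosts' registered memory regions, with no per-transfer work by the remote "CPU". This is purely illustrative -- real RDMA needs hardware support (InfiniBand, or an RDMA-over-TCP adapter like the Broadcom part above):

```python
# Toy model of RDMA: one-sided writes between registered memory regions.
# Illustrative only; real RDMA is done by the network adapter in hardware.

class Host:
    def __init__(self, size):
        self.memory = bytearray(size)   # stand-in for main memory

    def register(self, offset, length):
        # RDMA requires pinning/registering a region so the NIC may DMA into it;
        # a memoryview gives us a zero-copy window into that region.
        return memoryview(self.memory)[offset:offset + length]

def rdma_write(src_region, dst_region):
    # One-sided write: the adapter copies src -> dst directly.
    # The remote host's CPU and kernel never touch the data.
    dst_region[:] = src_region

a, b = Host(64), Host(64)
a.memory[0:5] = b"hello"
rdma_write(a.register(0, 5), b.register(16, 5))
print(bytes(b.memory[16:21]))  # b'hello'
```

The point of the model: the data lands in the destination's memory without ever passing through a socket buffer or the kernel's TCP/IP stack.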
quote |
Selfish Heathen
Join Date: May 2004
Location: Zone of Pain
|
I'm fixing your thread title. It's Xserve. </pedant>
|
quote |
Less than Stellar Member
|
Wow. There are times I forget I spend a *lot* of time among geeks.
Then a thread like this comes along and all is good again. |
quote |
Veteran Member
|
Quote:
I'm ready for the next-gen Xserve. Now that Tiger is here with ACLs, I can actually sell right into Windows domains without scaring off the IT director. LOL. omgwtfbbq |
|
quote |
Veteran Member
Join Date: Jun 2004
Location: Mile 1
|
And just think, with 10 Gb ethernet just around the corner, things are going to get very interesting.
Still pisses me off that the whole of Apple's product line does not ship with 1 Gb Ethernet yet. On the PC side, you can buy a huge variety of motherboards with this built in. The board I use in my [cough]PC[/cough] has it, and it was a requirement of mine when I had this PC built. That was 2.5 years ago, and the whole reason I did it was Apple pushing it out. I believe that PCI-e will finally allow 1 Gb Ethernet to actually reach its potential. Granted, nothing else in my house supports 1 Gb Ethernet yet. I can't wait till I can add several other systems and a switch to support it. Mile 1 |
quote |
I shot the sherrif.
|
Unless you have a RAID setup, you're not likely to saturate the bandwidth of 100Mb anyway.
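For context, the raw numbers behind that claim -- theoretical ceilings, before TCP/IP and Ethernet framing overhead shave some off:

```python
# Converting link speeds into disk-transfer terms (theoretical maxima).
fast_ethernet_MBps = 100 / 8     # 100 Mb/s  -> 12.5 MB/s
gigabit_MBps = 1000 / 8          # 1000 Mb/s -> 125 MB/s

print(fast_ethernet_MBps, gigabit_MBps)  # 12.5 125.0
```

So 100 Mb Ethernet tops out around 12.5 MB/s of payload, while Gigabit's 125 MB/s ceiling is high enough that a fast disk array (or RAID) is what it takes to fill the pipe.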
|
quote |
Veteran Member
Join Date: Jun 2004
Location: Mile 1
|
Quote:
My supervisor thanked me the next day, because she can call IT and give them shit for their shit network. Yes, part of my job is to figure out how to break things (and break them) so others can fix them. Mile 1 |
|
quote |
Veteran Member
|
Yes, Gigabit is getting faster, and many motherboards are moving Gigabit off the PCI bus to run it faster. Nvidia's nForce has a decent Gigabit architecture, and Intel has CSA for its Gigabit. The next logical step is to reduce the CPU's need to handle the I/O transactions.

I liken it to back in the days when the CPU still handled the transform and lighting for 3D games. Once that became part of the GPU, games instantly became faster. We're going to need to depend on our networks a bit more in the future. Faster networking is a must. Microsoft is here as well with their TOE software support, codenamed "Chimney," and Apple can't be too far behind. Host bus adapters (HBAs) are expensive now. What we need is for them to take off, and in a few years they'll be commonplace on midrange to high-end PCs. Nvidia has TOE in their new nForce Pro motherboards now.

I agree that Gigabit should be in every computer. I doubt that the cost difference between 10/100 and 10/100/1000 is significant at all. I assume Gigabit is just being used as a differentiator of product lines. Gigabit to the desktop is big going forward and the key to opening up some of these new technologies. Can't wait to see what Apple does with them. omgwtfbbq |
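A back-of-the-envelope sketch of why moving the NIC off the shared PCI bus matters, using my own rough figures for classic desktop PCI:

```python
# Why a gigabit NIC on the shared PCI bus is a bottleneck (theoretical peaks).
pci_MBps = 32 * 33.33e6 / 8 / 1e6   # 32-bit @ 33 MHz PCI: ~133 MB/s, shared by ALL PCI devices
gige_duplex_MBps = 2 * 1000 / 8     # 1 Gb/s each direction -> 250 MB/s total full duplex

print(f"PCI peak ~{pci_MBps:.0f} MB/s vs full-duplex GigE {gige_duplex_MBps:.0f} MB/s")
```

A single full-duplex gigabit link can demand more bandwidth than the entire legacy PCI bus offers, which is why CSA, nForce's integrated MAC, and PCI-e attachments all bypass it.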
quote |
Member
|
OK, nVidia was actually the first to market a TCP Offload Engine. Their Opteron 2200/2050 chipset supports a 1 GbE port with a hardware firewall and TCP offload engine on each chip. There is also an available board that runs two of these chips (one 2200 and one 2050) with a single processor for $225 on Newegg. Just food for thought. Like I have said before, Apple should copy that chipset, or pay nVidia to convert it over to the G5.

Oh, one other thing: the GbE on nVidia's chipsets is directly connected, so no worries about bus speeds or bandwidth saturation on the mobo. |
quote |
Veteran Member
|
Quote:
I think we'll see Apple move in the TOE direction as well. Apple's Gigabit is also off the PCI bus. That's like standard fare now for chipset design, thank God. omgwtfbbq |
|
quote |
Member
|
nVidia has the TCP offload engine integrated into the chipset. There is no separate ASIC. This is the primary difference between the nForce4 chipset and the 2200/2050 server/workstation chipsets.

edit: yes, the NIC chip is not integrated, but the TCP offload engine is. |
quote |
Veteran Member
|
New twist on network storage coming from some of the founders of Western Digital.
Zetera IP-based storage

Zetera aims to help the networked home consolidate its storage over IP without the high cost of NAS, SAN, and iSCSI. Their technology allows multiple computers to access central storage hooked into the wireless/wired router. Each stripe is assigned an IP address, and each stripe contains its volume plus a mirror of an adjacent drive's volume. As long as you don't lose two adjacent drives, the dead drive's data remains safely mirrored. No controller is required, and it doesn't use TCP but rather UDP, for lower overhead and faster performance.

Zetera claims it will achieve performance that exceeds today's NAS at a cheaper price because it is able to use IP as a bridge for its proprietary protocols. Their device can stripe not only the volumes but also the communication channels. The cost savings from obviating the need for a specialized controller mean that Zetera IP storage should be cheaper. The only downside seems to be the proprietary nature of the solution. I hear it requires client-side software, so Mac users will probably be waiting. omgwtfbbq |
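Here's a sketch of the adjacent-drive mirroring scheme as I read the description above -- this is a hypothetical model of the layout, not Zetera's actual (proprietary) protocol:

```python
# Hypothetical model: drive i holds its own stripe, and that stripe is also
# mirrored on the previous drive in a ring. Any single failure is survivable;
# only losing two ADJACENT drives destroys a stripe's both copies.

def surviving_data(num_drives, failed):
    """Return the set of stripe indices still recoverable after `failed` drives die."""
    ok = set()
    for stripe in range(num_drives):
        primary = stripe                       # stripe lives on its own drive...
        mirror = (stripe - 1) % num_drives     # ...and is mirrored on the drive before it
        if primary not in failed or mirror not in failed:
            ok.add(stripe)
    return ok

# Lose any single drive: every stripe survives via its mirror.
assert surviving_data(4, {2}) == {0, 1, 2, 3}
# Lose two NON-adjacent drives: still fine.
assert surviving_data(5, {0, 2}) == {0, 1, 2, 3, 4}
# Lose two adjacent drives: one stripe loses both copies.
assert surviving_data(4, {1, 2}) == {0, 1, 3}
```

The ring layout is what gives the "no more than one adjacent failure" rule: each stripe has exactly two copies, and they always sit on neighboring drives.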
quote |
Veteran Member
|
This helps illustrate the point.
|
quote |