Blog on June 25, 2010
At the latest meeting of the T11.3 standards organization (FC-BB-6 Ad Hoc Working Group), the concept of an FCoE Direct End-to-End protocol was accepted for input into the Working Group's next standard. It is also known as FCoE VN_Port to VN_Port (FCoE VN2VN). This new function permits FCoE adapters that are interconnected within the same Layer 2 Lossless Ethernet network to discover and connect to compatible FCoE adapters -- those with the appropriate Virtual N_Ports -- and then transmit Fibre Channel commands and data via the standard FCoE protocol.
This is all done on a Lossless Ethernet network without any assistance from a Fibre Channel switch or an FCoE switch (called an FCoE Forwarder -- FCF). All that is needed is the appropriate VN2VN FCoE adapters and a Lossless Ethernet Layer 2 network.
There also exists, today, some Open Source FCoE software that requires only a normal Ethernet NIC to operate the standard FCoE protocols (a special Converged Network Adapter -- CNA -- is not required). It is expected that this Open Source software will be updated to also support the new VN2VN function.
The VN2VN (Direct End-to-End) function will support two types of direct connections:
1. Connections through Lossless Ethernet switches
2. One to One Connection via a single cable (point to point)
The FCoE protocol is made up of two types of Ethernet frames (each with its own unique Ethertype):
1. FCoE Initialization Protocol (FIP) frames
2. Fibre Channel over Ethernet (FCoE) frames
The FIP packets are used only for discovery and connection setup, whereas the FCoE packets carry the actual FC commands and data. The new VN2VN function adds only additional FIP packets and leaves the rest of the protocol unchanged. The new VN2VN FIP packets were needed because, in this mode, there is no FCF to provide connection services.
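The split between the two frame types can be illustrated by inspecting the Ethertype field of an incoming Ethernet frame. The Ethertype values below (0x8906 for FCoE, 0x8914 for FIP) are the registered values; the classifier itself is only a minimal sketch, not a real frame parser.

```python
import struct

# Registered Ethertypes for the two FCoE frame families
ETHERTYPE_FCOE = 0x8906  # FCoE data frames: carry the actual FC commands and data
ETHERTYPE_FIP = 0x8914   # FCoE Initialization Protocol: discovery and connection setup

def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet frame by its Ethertype field (bytes 12-13)."""
    if len(frame) < 14:
        raise ValueError("truncated Ethernet header")
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype == ETHERTYPE_FIP:
        return "FIP"
    if ethertype == ETHERTYPE_FCOE:
        return "FCoE"
    return "other"

# Example: 12 bytes of MAC addresses followed by the FIP Ethertype
frame = bytes(12) + struct.pack("!H", ETHERTYPE_FIP)
print(classify_frame(frame))  # FIP
```

In VN2VN mode only the FIP side of this split grows new message types; anything classified as an FCoE data frame is handled exactly as before.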
The transfer of FC commands and data via the FCoE protocol -- which was developed in the T11.3 standards organization (FC-BB-5 Ad Hoc Working Group) -- continues to operate as currently specified and remains unchanged in this new VN2VN environment.
The upper layers of the protocol remain FC, which means there continues to be complete compatibility with existing FC and FCoE device drivers. Vendors are, of course, adding management capabilities to exploit the new features of FCoE, but the command and data protocols do not require any modification. Likewise, as adapters are updated to support VN2VN mode, the upper layers will retain their current FC compatibility even as management capabilities are added to permit ease (and flexibility) of use.
This new VN2VN capability will permit the FC protocol to move "down market" into entry-level and mid-range environments. Yet, as an installation grows, FCF switches can be added to obtain the additional functions of an FC network without having to change the server or storage connections.
The new VN2VN capability will be competitive with iSCSI within the Data Center environment. And I fully expect the Lossless Ethernet standards, which were focused on 10 Gb/s Ethernet, to be offered by various vendors on 1 Gb/s networks and switches as well. This means that FCoE VN2VN will operate very well with the Open Source FCoE code and 1 Gb/s NICs, without the overhead of TCP/IP. This should make the FCoE VN2VN capability very performance-competitive with iSCSI.
Stay tuned to this Blog to see how the capability unfolds.
Tuesday, April 14, 2009
iSCSI vs. NAS
There seem to be continuing discussions about the value of iSCSI vs. NAS (Network Attached Storage). NAS, in general, has two incarnations: NFS (Network File System), seen mostly in UNIX-type systems, and CIFS (Common Internet File System), seen mostly in Microsoft systems.
The discussion seems to center around treating the iSCSI and NAS technologies as if they were interchangeable. It is true that both can be used for reading and writing storage, and it is also true that NAS filers (or storage controllers) can do everything an iSCSI storage controller can do, plus more. However, they are fundamentally different in structure and, as a result, differ significantly in the hardware processing capabilities (CPU, memory, etc.) required to support them.
The iSCSI structure is based on the SCSI block protocol, which is created as a result of application file system calls for reads or writes. The NAS (NFS/CIFS) structure is based on special "client-server" protocols, which are also created as a result of application file system calls.
In the case of NAS, the file system work is not really done in the client system; instead, the NAS (NFS/CIFS) protocol invokes various functions in the NAS server's file system. The file system in the NAS server must then convert these file system functions into the SCSI block protocol that will in turn access the actual storage device. In other words, NAS moves the function of the physical file system from the client into the NAS server appliance. The same physical file system work needs to be done whether it is done in the client or in the NAS appliance.
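The structural difference described above can be sketched in a few lines of code. The class names, the file table, and the trivial file-to-block mapping here are all hypothetical, meant only to show *where* the file system work runs in each model: an iSCSI target understands only block addresses, while a NAS server runs the file system itself and issues the block I/O internally.

```python
BLOCK = 512  # bytes per block

class BlockDevice:
    """Stands in for an iSCSI target: it understands only block addresses."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK)] * num_blocks

    def read_blocks(self, lba, count):
        # An iSCSI initiator would send a SCSI READ carrying (LBA, count);
        # any file-to-block mapping has already been done by the CLIENT's
        # file system before this request is issued.
        return b"".join(self.blocks[lba:lba + count])

class NasServer:
    """Stands in for an NFS/CIFS server: the file system runs here,
    and the server issues block I/O to its own backing device."""
    def __init__(self, device):
        self.device = device
        self.files = {}  # filename -> (start_lba, length_in_bytes)

    def read_file(self, name, offset, length):
        # File-to-block translation happens on the SERVER in the NAS model.
        start_lba, file_len = self.files[name]
        length = min(length, file_len - offset)
        first = start_lba + offset // BLOCK
        last = start_lba + (offset + length - 1) // BLOCK
        data = self.device.read_blocks(first, last - first + 1)
        skew = offset % BLOCK
        return data[skew:skew + length]

dev = BlockDevice(num_blocks=1024)
nas = NasServer(dev)
nas.files["log.txt"] = (100, 4096)  # hypothetical file at LBA 100, 4 KiB long
print(len(nas.read_file("log.txt", 600, 1000)))  # 1000
```

Either way, the same mapping and block I/O get done; the two protocols just place that CPU work on different machines, which is exactly the overhead difference the next paragraphs measure.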
Years ago (in 1998), when I first got involved with iSCSI, there was a lot of discussion about whether there was even a need for the iSCSI protocol. After all, the discussion went, we have NAS (NFS/CIFS), so why does the world need yet another TCP/IP-based storage access protocol? (At that time the name iSCSI had not even been coined; we called it SCSI over (GE) TCP/IP.) So, to fully understand the value (if any) of this potentially new protocol, we set out to measure SCSI over Gigabit Ethernet (GE) TCP/IP vs. NFS over (GE) TCP/IP. The results were startling to us at the time, but were key to our decision to continue with the effort to standardize what came to be known as iSCSI.
At that time there was also a lot of talk about offloading the TCP/IP functions onto an adapter card, along with various other TCP/IP optimizations that would be useful for transporting not only SCSI but also the NAS protocols. To fully understand the potential, we looked at three different implementations of the TCP/IP part of the equation: normal TCP/IP implementations (which generally used two buffer-to-buffer copies during processing in the host system), versions that had only one buffer copy in the host system, and a version that had zero buffer copies in the host system (data was fetched/placed from/into the application memory location directly by the adapter). This last approach became known as a TOE (TCP/IP Offload Engine). A graph of the results of the analysis can be seen in the slide shown below.
The results of this analysis showed that iSCSI (SCSI over GE TCP/IP) transmit would take 26% of the processing time of NFS, and iSCSI receive about 32%. So, as a rough general statement, iSCSI used about one third of the processing power needed by NFS. (That can be seen by comparing the blue columns with the yellow columns.) The analysis also showed that if a TOE approach (zero copies) was used for iSCSI and NFS, the results were even more dramatic: iSCSI transmit became about 8% (1/12th) of the NFS processing time, and iSCSI receive about 6% (1/15th). (All of these measurements were based on the same processors, NICs, storage, and Gigabit Ethernet links, and the same amount of file data was transferred. It should also be noted that we measured the client-side overhead in the same way but could not find significant differences in the processing time of iSCSI clients vs. NFS clients.)
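Those percentages translate directly into CPU-cost multiples. A quick back-of-the-envelope check, using only the (rounded) figures quoted above:

```python
# iSCSI server-side processing time as a percentage of NFS processing time,
# taken from the figures quoted in the text.
conventional = {"transmit": 26, "receive": 32}  # normal two-copy TCP/IP stacks
toe          = {"transmit": 8,  "receive": 6}   # zero-copy / TOE stacks

for name, results in [("conventional", conventional), ("TOE", toe)]:
    for direction, pct in results.items():
        # If iSCSI needs pct% of the NFS time, NFS needs ~100/pct times
        # the processing power of iSCSI for the same transfer.
        print(f"{name} {direction}: NFS uses ~{100 / pct:.1f}x the CPU of iSCSI")
```

Note the ratios come out at roughly 3.8x and 3.1x for the conventional stacks (hence the "about one third" summary), and 12.5x and 16.7x for the TOE case, consistent with the rounded 1/12th and 1/15th figures in the text.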
Nowadays the storage controller has replaced the target server that was measured above; however, the comparison between the processor needs of iSCSI vs. NAS (NFS/CIFS) is probably still valid.
Now, at first glance one might think that this was a complete repudiation of NAS by iSCSI; however, that is just not correct. We were not really comparing apples to apples here, because NFS provides a capability that is not available with iSCSI: the ability to share files between different client systems. NAS also permits whole-file management capabilities, which often simplifies the management of storage.
We did not take measurements with CIFS, since we felt the point had been made and the additional protocol elements of CIFS would add even more processing time into the equation. However, like NFS, CIFS provides sharing capabilities -- and includes a built-in locking capability to manage dynamic file updates while sharing.
So the general rule seems to be: if you do not have a data sharing requirement between your clients, then iSCSI is probably the most effective approach. But if you do have data sharing requirements, then an iSCSI approach is probably not appropriate, and a NAS protocol (NFS/CIFS) probably is.
Applying the above to practical scale-out situations, the slide shown below depicts the issues and nets them down.

This means that an installation that has requirements for some file sharing should probably have some NAS servers/controllers. But since the majority of data on a client is not shared, having an iSCSI storage controller probably makes sense as well.
One important point to consider is that when it comes to scaling, one always needs to look for the possible bottlenecks and apply the best approach to reducing the bottleneck's effect. Clearly the overhead in a NAS controller is significantly more than in an iSCSI controller, so the iSCSI controller should be able to scale better than the NAS approach. But since they each provide different capabilities, an installation should use both NAS and iSCSI, where only the shared data goes to the NAS controller.
Since iSCSI and NFS/CIFS are both IP-based protocols, the same physical Ethernet connection can be used to carry both. Therefore, some vendors have implemented what I call dual-dialect storage controllers: storage controllers that can accept either iSCSI or NAS (NFS/CIFS) protocols. In this case one can see that it is possible to balance the low overhead of iSCSI with the functionality of the NFS/CIFS protocols.
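Because all of these protocols ride on TCP/IP, a dual-dialect controller can tell them apart simply by the well-known TCP port a client connects to. The port numbers below are the standard IANA assignments; the dispatch function itself is a simplified sketch of the front end such a controller might use.

```python
# Standard IANA well-known TCP ports for each protocol dialect.
PROTOCOL_BY_PORT = {
    3260: "iSCSI",     # SCSI block commands over TCP
    2049: "NFS",       # file-level client-server protocol
    445:  "CIFS/SMB",  # file-level protocol with built-in locking
}

def dispatch(dest_port: int) -> str:
    """Route an incoming connection to the matching protocol engine."""
    return PROTOCOL_BY_PORT.get(dest_port, "unknown")

print(dispatch(3260))  # iSCSI
print(dispatch(2049))  # NFS
```

One physical Ethernet port on the controller can therefore serve block traffic for unshared data and file traffic for shared data at the same time, which is exactly the balance described above.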
…………. John L. Hufferd
Thursday, April 2, 2009
iSCSI vs. FCoE
Blog on -- 2 April 2009
I continue to be amused by the people who try to position iSCSI (Internet Small Computer Systems Interface) and FCoE (Fibre Channel over Ethernet) by placing them in conflict with each other. One group might say iSCSI is better than FCoE because ... Another group will say FCoE is better than iSCSI because ... In truth they are both wrong and both right. The appropriate answer lies entirely in the circumstances in which the customer finds themselves.
If a customer has an IT shop with a small number of servers and a minimal amount of external storage, they should very definitely consider iSCSI and deploy a SAN (Storage Area Network) on normal Ethernet. An iSCSI network is easily set up and will often be all that is needed for this type of environment. And the cost is usually significantly less than would be the case with an FC fabric.
In my opinion, if the customer has not had a SAN before, they should consider one, especially if they would like to have easy failover or to use some of the new consolidation capabilities of the various server virtualization products. In server virtualization environments, the ability to move applications (virtual machines) quickly and dynamically between physical servers is very valuable, but it requires a SAN that connects the physical servers with external storage controllers. Many customers that desire this type of consolidation environment are not familiar with storage networking -- and iSCSI operating on a 1 Gigabit Ethernet network is not only simple to set up and use, but is usually all that is needed and meets their requirements very well. There is a caution here, and that is in regard to the total bandwidth that might be needed after the consolidation of multiple systems/applications into a single physical server. In some cases the consolidation will require more storage bandwidth than can be handled by a single 1GE network. That means one will need to multiply the number of 1GE attachments and increase the bandwidth capability to/from the physical servers. Depending on the approach, this will drive either a significant increase in processor cycles (in the case of a software iSCSI driver) or an increase in the number or capabilities of the iSCSI adapters (which will drive up the cost). So it is possible that with server virtualization, the cost of an iSCSI solution, in terms of processor cycles or adapter cost, will approach that of an FC or FCoE solution. But if the installation is not familiar with storage networking, then only if it foresees dramatic growth should anything other than iSCSI be considered the right initial solution.
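The bandwidth caution above is easy to quantify. The VM count, per-VM throughput, and utilization headroom below are hypothetical numbers chosen purely for illustration, not figures from the text; the point is only how quickly consolidation outgrows a single 1GE link.

```python
import math

# Hypothetical consolidation scenario (illustrative numbers only)
vms_per_server = 12    # applications consolidated onto one physical server
mbps_per_vm = 150      # assumed average storage throughput per VM, in Mb/s
link_mbps = 1000       # capacity of one 1GE link
usable = 0.8           # keep headroom: plan for ~80% practical utilization

aggregate = vms_per_server * mbps_per_vm              # total storage demand
links_needed = math.ceil(aggregate / (link_mbps * usable))
print(f"aggregate demand: {aggregate} Mb/s -> {links_needed} x 1GE links")
```

Under these assumptions a single server already needs three 1GE attachments for storage alone, which is the point at which the extra NICs, switch ports, and processor cycles start eroding iSCSI's cost advantage over FC or FCoE.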
Customers that already have a large server and storage network have probably already established a Fibre Channel (FC) network and are committed to the high bandwidth and low latency that FC provides. These types of IT organizations often have an in-depth knowledge of FC configurations and all that comes with an FC fabric. It is also not unusual to find FC networks that contain storage functions within the fabric itself (such as storage virtualization, encryption at rest, etc.). That said, many of these organizations still find value in the idea that they might save money by having a common network that carries not only storage access but also the IP messaging that occurs between their servers and clients, whether transported across the data center or across the Intranet or Internet. FC over Ethernet (FCoE) is the type of protocol that permits FC to flow over a new type of Ethernet (a Lossless Ethernet within the Data Center) that also permits the use of other protocols such as TCP/IP. The goal of this type of connection is to permit FC protocols and procedures to work alongside other network protocols. Of course, this only makes sense in an FC environment if the speed of the new (lossless) Ethernet fabric is fast enough to carry the required storage bandwidth plus the interactive messaging bandwidth associated with the installation's IP network. This means that since much of FC operates at 4 Gb/s (or 8 Gb/s), the addition of the IP network will often require an Ethernet fabric with speeds of 10 Gb/s (or more). Hence the FCoE Lossless Ethernet has initially been assumed to be a 10 Gb/s fabric.
I expect many FC installations to continue to use normal FC and keep their storage and IP networks separate; however, I also expect a large number of installations to move toward FCoE. Even though most of these FC-to-FCoE installations will at first convert only the "server edge" (the server-side connection to the network), some may, over time, extend the Lossless Ethernet throughout their Data Center for both IP and storage networks. But whether or not they continue to evolve their FC fabric into an FCoE fabric, the point is that they are quite a different community of customers than those that would be operating an iSCSI network. These customers see FCoE as the simplest way to evolve to an Ethernet-based fabric while keeping the speed and sophistication of their current FC storage network.
So you see, it is not iSCSI vs. FCoE; each protocol meets the needs of a different community of customers. Yes, they can both do similar things, but until iSCSI is shown to perform cost-effectively at the high speeds and with the low latency of FCoE, in very complex configurations -- which might also have storage-related functions within the fabric -- iSCSI will not quickly move (if ever) into the high-end environment. Likewise, FCoE will not move into the low-to-mid-size environment to displace iSCSI unless it can be shown to be as easy to set up and use while maintaining a cost profile at least as low as iSCSI's.
So the bottom line is: iSCSI and FCoE are two different tools that can be used to connect and manage external storage, depending on the customer's needs. One tool does not meet all needs, so let's not even ask which is better, iSCSI or FCoE, since it depends on the environment of the IT organization.
…………. John L. Hufferd