
Thursday, November 17, 2011

"CLOUD" Infrastructure as a Service (IaaS) and FCoE VN2VN

When the new FCoE (Fibre Channel over Ethernet) VN2VN (also known as Direct End Node to End Node) protocol was defined in the T11.3 FC-BB-6 Ad Hoc Working Group, it was assumed that it would find a niche in small to medium IT organizations that wanted compatibility with Fibre Channel (FC). Though that is still valid, it now looks as though it may also be important to some of the new "Cloud" services that provide Infrastructure as a Service (IaaS).

FCoE VN2VN is an additional FCoE protocol that permits FCoE End Nodes such as servers acting as "Initiators" and FCoE End Nodes such as storage controllers acting as "Targets" to either attach directly to each other or attach with only lossless Ethernet switches between them (perhaps as few as one switch between the End Nodes). This form of FCoE does not require any FC/FCoE networking equipment.
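
Because there is no FCF to hand out addresses, VN2VN end nodes use FIP (FCoE Initialization Protocol) exchanges to discover each other and to probe and claim locally unique N_Port_IDs on the segment. The following Python sketch simulates only the probe-and-claim idea, not the actual wire protocol; the class and method names are illustrative:

```python
import random

class VN2VNSegment:
    """Toy model of a lossless-Ethernet segment shared by VN2VN end nodes."""
    def __init__(self):
        self.claimed = {}  # proposed N_Port_ID -> node name

    def probe_and_claim(self, node_name):
        # Each end node proposes an N_Port_ID and probes the segment;
        # if another node has already claimed it, the node retries.
        while True:
            proposed = random.randint(1, 0xFFFF)
            if proposed not in self.claimed:  # no node objects to the probe
                self.claimed[proposed] = node_name
                return proposed
            # collision: another end node objected; propose a new ID

segment = VN2VNSegment()
server_id = segment.probe_and_claim("server-initiator")
storage_id = segment.probe_and_claim("storage-target")
assert server_id != storage_id  # IDs end up locally unique with no FCF involved
```

The point of the simulation is simply that address uniqueness is negotiated among the end nodes themselves, which is what removes the FCF from the picture.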

FCoE VN2VN permits the IaaS organization to equip its installation to provide storage interconnectivity with FC and/or FCoE. FCoE VN2VN capability can be used to give a customer an FCoE VN2VN connection between the servers and the storage supplied by the IaaS provider. This VN2VN interconnect can provide the fastest end-to-end connection with as few "hops" as possible. That is, the data path between the server and the storage unit may pass through as few as one lossless Ethernet switch. No FCF (Fibre Channel Forwarder) is required, which means that no additional FC switching processes and overhead are involved in the data path. In addition, the lossless Ethernet switch can be provided by a great number of vendors, permitting the lowest possible cost data path. This means that the IaaS provider can give a customer the fastest interconnect at the lowest possible cost.

Enabling this type of capability has certain implications for the configuration of the "Cloud" installation. For example, if a customer would like to purchase infrastructure where the required servers and storage fit into a single rack (or even a 2-3 rack side-by-side configuration), they are a candidate for FCoE VN2VN interconnection. In such a configuration a lossless Ethernet switch can be placed at the top of the rack (or rack set), with Ethernet connections running from the servers to the switch and then to the storage units. For total installation flexibility, the Top-of-Rack (ToR) switches may also be physically interconnected to an End-of-Row (EoR) Director-class FCoE switch that may have full FCF capabilities. However, the EoR Director would have no direct involvement with the data path for this IaaS rack set. It is also possible to have a ToR switch at the top of each rack and have them interconnected with each other. In that case, the data path may go through two ToR switches but would still not need to go through the EoR FCoE Director.
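
The hop counts described above can be checked with a small graph model: with one ToR switch the server-to-storage path crosses a single switch, with interconnected ToR switches it crosses two, and in neither case does the shortest path touch the EoR Director. This is only an illustration; the topology and node names are made up for the sketch:

```python
from collections import deque

def switches_on_path(links, src, dst):
    """Breadth-first search returning the switches on the shortest
    path between two end nodes (the end nodes themselves excluded)."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, [])])
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return [n for n in path if n.startswith(("tor", "eor"))]
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

# Single-rack case: server and storage share one ToR switch; the EoR
# Director is attached for flexibility but stays out of the data path.
single_rack = [("server", "tor1"), ("tor1", "storage"), ("tor1", "eor")]
print(switches_on_path(single_rack, "server", "storage"))  # -> ['tor1']

# Two-rack case: interconnected ToR switches; still no EoR hop.
two_rack = [("server", "tor1"), ("tor1", "tor2"), ("tor2", "storage"),
            ("tor1", "eor"), ("tor2", "eor")]
print(switches_on_path(two_rack, "server", "storage"))  # -> ['tor1', 'tor2']
```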

So, depending on the needs of the customer and the physical configuration required by the provider, it is possible to obtain the minimum switch/"hop" count and the lowest-latency interconnect. This means that the IaaS provider can "carve out" a rack or set of racks dedicated to a specific IaaS customer and give them isolated service. Then, when that customer grows and has a much larger requirement, or leaves the provider's installation, the provider can easily re-task the servers and storage, or expand to other racks of servers and storage, without needing to physically re-cable the network configuration.
 
In this example, the IaaS systems and storage are given their own VLANs, which FCoE VN2VN can use to permit "direct" connection between the IaaS customer's servers and storage without involving other systems within the IaaS provider's installation. It should be noted that when the customer either leaves the installation or expands, the provider can re-task the equipment and remove the VLAN specification, and in the case of expansion utilize a regular FCoE interconnect (via the EoR Director FCoE switches).
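
The per-customer isolation described here rests on VN2VN discovery being scoped to a VLAN: an end node only discovers peers carrying the same VLAN tag, even when two customers share the same physical switches. A minimal sketch of that scoping (a simulation; the node names and VLAN numbers are illustrative):

```python
def discoverable_peers(nodes, self_name):
    """Return the peers a VN2VN end node can discover:
    only those configured on the same VLAN as itself."""
    vlan = nodes[self_name]
    return sorted(n for n, v in nodes.items() if v == vlan and n != self_name)

# Two IaaS customers sharing the same physical rack switches,
# each carved out onto its own VLAN.
nodes = {
    "cust-a-server": 100, "cust-a-storage": 100,
    "cust-b-server": 200, "cust-b-storage": 200,
}
print(discoverable_peers(nodes, "cust-a-server"))  # -> ['cust-a-storage']
print(discoverable_peers(nodes, "cust-b-server"))  # -> ['cust-b-storage']
```

Re-tasking the equipment for another customer then amounts to changing a VLAN assignment rather than re-cabling, which is the operational point of the paragraph above.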

Likewise, a company often needs to provide IaaS-like services to internal departments that, for various technical or "political" reasons, must be given dedicated server and storage rack(s) functioning as isolated environments for particular departments and projects. This becomes an internal IaaS "Cloud" environment in which FCoE VN2VN can often be an appropriate solution to the configuration requirement.

But independent of internal or external "Cloud" IaaS environments, FCoE VN2VN is still appropriate for smaller computing environments such as "Big Box" stores, "Disaster Recovery Trailers", and small to medium IT installations.

Smaller organizations such as local "Big Box" stores can have their whole data center located in a single rack containing all the appropriate servers and storage. In this type of configuration the various server vendors can be asked to bid on a "total rack" that includes FCoE VN2VN, often yielding a "total solution" at minimum cost. I was once associated with an organization that wanted to sell such configurations to the big box stores but was deterred by the cost of the Fibre Channel connections and switches. That concern is no longer relevant when FCoE VN2VN connections, within the rack, are utilized.

I also understand that various "disaster recovery trailers" can utilize such configurations when they are used to provide temporary IT service to big box stores (and others) after a disaster.

And, of course, when it comes to small to medium IT installations (ones that fit within a single rack or a few racks), FCoE VN2VN configurations seem to offer a high-performing, low-cost storage interconnect solution that is compatible with future growth into a full FCoE or FC installation. These types of installations may also be seen as a valuable asset that can easily be integrated during a merger or buy-out with larger organizations that probably have an FC and/or FCoE installation.

Thursday, April 2, 2009

iSCSI vs. FCoE

Blog on -- 2 April 2009

I continue to be amused by the people who try to position iSCSI (Internet Small Computer Systems Interface) and FCoE (Fibre Channel over Ethernet) by placing them in conflict with each other. One group might say iSCSI is better than FCoE because... Another group will say FCoE is better than iSCSI because... In truth they are both wrong and both right. The appropriate answer depends entirely on the circumstances in which the customer finds themselves.
If a customer has an IT shop with a small number of servers and a minimal amount of external storage, they should very definitely consider iSCSI and build a SAN (Storage Area Network) with normal Ethernet. An iSCSI network is easily set up and will often be all that is needed for this type of environment. And the cost is usually significantly less than would be the case with an FC fabric.
In my opinion, if the customer has not had a SAN before, they should consider it, especially if they would like easy failover or want to use some of the new consolidation capabilities of the various server virtualization products. In server virtualization environments, moving applications (Virtual Machines) quickly and dynamically between physical servers is very valuable, but it requires a SAN that connects the physical servers with external storage controllers. Many customers that desire this type of consolidation environment are not familiar with storage networking -- and iSCSI operating on a 1 Gigabit Ethernet (1GE) network is not only simple to set up and use, but is usually all that is needed and meets their requirements very well. There is a caution here regarding the total bandwidth that might be needed after the consolidation of multiple systems/applications into a single physical server. In some cases the consolidation will require more storage bandwidth than a simple 1GE network can handle. That means one will need to multiply the number of 1GE attachments and increase the bandwidth capability to/from the physical servers. Depending on the approach, this will either significantly increase the consumption of processor cycles (in the case of a software iSCSI driver) or increase the number or capabilities of the iSCSI adapters (which will drive up the cost). So it is possible that with server virtualization, the cost of an iSCSI solution, in terms of processor cycles or adapter cost, will approach that of an FC or FCoE solution. But if the installation is not familiar with storage networking, then only if the installation sees dramatic growth in its future should anything other than iSCSI be seen as the right initial solution.
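
The bandwidth caution above is easy to quantify: the aggregate storage bandwidth of the VMs consolidated onto one physical server, divided by the usable capacity of a 1GE link, gives the number of links (or the faster fabric) required. A rough sketch, where the per-VM rates and the link-utilization figure are assumptions for illustration:

```python
import math

def links_needed(vm_storage_mbps, link_gbps=1.0, utilization=0.8):
    """Number of Ethernet links needed to carry the combined storage
    traffic of the VMs consolidated onto one physical server.
    'utilization' is the fraction of raw link speed treated as usable."""
    total_mbps = sum(vm_storage_mbps)
    usable_mbps = link_gbps * 1000 * utilization
    return math.ceil(total_mbps / usable_mbps)

# Ten consolidated VMs, each averaging 200 Mb/s of storage traffic:
print(links_needed([200] * 10))                # -> 3 (1GE links)
print(links_needed([200] * 10, link_gbps=10))  # -> 1 (a single 10GE link)
```

This is the trade-off the paragraph describes: multiplying 1GE attachments (and the adapters or CPU cycles behind them) versus moving to a faster fabric.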
Customers that already have a large server and storage network have probably already established a Fibre Channel (FC) network and are committed to the high bandwidth and low latency that FC provides. These types of IT organizations often have an in-depth knowledge of FC configurations and all that comes with an FC fabric. It is also not unusual to find FC networks that contain storage functions within the fabric itself (such as storage virtualization, encryption at rest, etc.). That said, many of these organizations still find value in the idea that they might save money by having a common network that carries not only storage access but also the IP messaging that occurs between their servers and clients, whether transported across the data center, the intranet, or the Internet. FC over Ethernet (FCoE) is a protocol that permits FC to flow over a new type of Ethernet (a lossless Ethernet within the data center) that also permits the use of other protocols such as TCP/IP. The goal of this type of connection is to permit FC protocols and procedures to work alongside other network protocols. Of course this only makes sense in an FC environment if the speed of the new (lossless) Ethernet fabric is fast enough to carry the required storage bandwidth plus the interactive messaging bandwidth associated with the installation's IP network. This means that since much of FC operates at 4Gb/s (or 8Gb/s), the addition of the IP network will often require an Ethernet fabric with speeds of 10Gb/s or more. Hence the FCoE lossless Ethernet has initially been assumed to be a 10Gb/s fabric.
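
That sizing argument reduces to a simple headroom check: a converged link only makes sense if it can carry the existing FC storage rate plus the IP messaging rate. A trivial sketch, with the traffic figures chosen only for illustration:

```python
def converged_link_ok(fc_gbps, ip_gbps, ethernet_gbps):
    """True if a single converged Ethernet link has enough capacity
    for both the FC storage traffic and the IP messaging traffic."""
    return fc_gbps + ip_gbps <= ethernet_gbps

# 4Gb/s FC plus 2Gb/s of IP messaging fits on a 10Gb/s lossless link:
print(converged_link_ok(fc_gbps=4, ip_gbps=2, ethernet_gbps=10))  # -> True
# 8Gb/s FC plus 3Gb/s of IP does not:
print(converged_link_ok(fc_gbps=8, ip_gbps=3, ethernet_gbps=10))  # -> False
```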
I expect many FC installations to continue to use normal FC and keep their storage and IP networks separate; however, I also expect a large number of installations to move toward FCoE. Even though most of these FC-to-FCoE installations will at first only convert the "server edge" (the server-side connection to the network), some may over time extend the lossless Ethernet throughout their data center for both the IP and storage networks. But whether or not they continue to evolve their FC fabric into an FCoE fabric, the point is that they are quite a different community of customers from those that would be operating an iSCSI network. And these customers see FCoE as the simplest way to evolve to an Ethernet-based fabric while keeping the speed and sophistication of their current FC storage network.
So you see it is not iSCSI vs. FCoE; each protocol meets the needs of a different community of customers. Yes, they can both do similar things, but until iSCSI is shown to perform cost-effectively at the high speeds and low latency of FCoE, in very complex configurations -- which might also have storage-related functions within the fabric -- iSCSI will not quickly move (if ever) into the high-end environment. Likewise, FCoE will not move into the low-to-mid-size environment to displace iSCSI unless it can be shown to be as easy to set up and use while maintaining a cost profile at least as low as iSCSI's.
So the bottom line is: iSCSI and FCoE are two different tools that can be used to connect and manage external storage, depending on the customer's needs. One tool does not meet all needs, so let's not even ask which is better, iSCSI or FCoE, since it depends on the environment of the IT organization.
…………. John L. Hufferd