Citrix or VPN?

This is a quick thought on the Citrix/VPN comparison question…
I would favor a product such as Citrix, in which the end user works within a secured browser session protected by SSL. VPN clients are still in use for remote encrypted access to the organization, but Citrix-type solutions are becoming more popular because there is no need to install a VPN client on the remote machine; this reduces the risk of vulnerability due to a misconfiguration of the VPN software on the client end. Besides SSL, digital certificates should also be used with browser-based access to ensure the authenticity of the target site.
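
The protections a browser-based session relies on can be sketched with Python's standard-library ssl module. This is a minimal illustration of the defaults, not a Citrix configuration: a default context requires a valid certificate chain and a matching hostname, which is what gives the user assurance of the target site's authenticity.

```python
import ssl

# A default context enables the checks a browser-style HTTPS session
# depends on: the server certificate must validate against trusted CAs,
# and the certificate's name must match the host being contacted.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # True: cert must validate
print(context.check_hostname)                    # True: name must match
```

Wrapping a socket with this context (via `context.wrap_socket`) refuses the connection outright if either check fails, rather than leaving the decision to the user.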

In addition to the Citrix-type HTTP-over-SSL technology, the remote devices should have encryption enabled on their storage devices to protect data that is stored or transferred. A portable encryption device, such as a handheld USB device that encrypts data and communications, would be ideal.

If only a VPN were to be used, the VPN clients should have split tunneling disabled to prevent any communications other than the encrypted connection to the organization's intranet. If split tunneling were enabled, a vulnerability would appear: a second channel would be open to the outside internet, creating an "open hole" alongside the secure encrypted channel. In addition to the VPN solution, RSA SecurID token authentication would bring another layer of security to remote access.
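
The split-tunneling risk can be made concrete with a toy routing decision. This is a simplified model, not a real VPN client, and the 10.0.0.0/8 intranet range is an assumption for illustration: with split tunneling disabled, every destination is forced through the tunnel; with it enabled, non-intranet traffic bypasses the tunnel entirely.

```python
import ipaddress

# Assumed intranet range for this toy model.
INTRANET = ipaddress.ip_network("10.0.0.0/8")

def route(dst: str, split_tunneling: bool) -> str:
    """Decide where a packet to `dst` goes under this toy policy."""
    if not split_tunneling:
        return "tunnel"                  # all traffic stays encrypted
    if ipaddress.ip_address(dst) in INTRANET:
        return "tunnel"
    return "internet"                    # the unprotected side channel

print(route("10.1.2.3", split_tunneling=False))       # tunnel
print(route("93.184.216.34", split_tunneling=False))  # tunnel
print(route("93.184.216.34", split_tunneling=True))   # internet
```

The last line is the "open hole": the same client that holds an authenticated tunnel into the intranet is simultaneously reachable from the open internet.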


The Historical Development of Storage Networks

In the 1960s, computing was in the hands of government and scientific organizations, with a few large business enterprises using rudimentary data processing technology. In these early days of organizational computing, storage was centralized with the mainframe computers that used it. This was a very secure way to store data, and administration of that data was more streamlined than it was about to become with departmentalization, as we will see shortly. In the mainframe era, an application would consume all resources while it ran, and the system would sit idle when no processes were assigned. This created the need for timesharing, in which idle system time was spent on other tasks: a tiered and more efficient approach to data processing that improved ROI and production numbers.

Any organization is segmented into numerous departments, such as finance, research and development, technology, and marketing; and in the '70s we saw the departmentalization of data, in which each department stored its own data. This developed with the installation of microcomputers within the various departments in place of the traditional terminal, which had accessed the back-end mainframe directly. While terminals were (and still are) in use, the microcomputer had started to create a segregated storage architecture within the organization.

Later, file sharing began to consolidate data for departments, with that data residing on departmental small or midrange servers. This was the first step toward storage consolidation under a storage network. Server farms developed from this approach, but data was still departmentalized and often resided on separate servers, which required more administration. With advancements in data networking, applications were now being developed that incorporated data from numerous locations. Software was evolving to support the wide area network, which could transfer data across vast distances, spreading data storage over a wider footprint and making administration of that data more complex.

Client-server computing, already in use at this time within the aforementioned departmentalized computing, was another significant step toward the present storage network; however, this still meant department data residing on separate hosting servers, each with its own DAS (Direct Attached Storage). This method required separate backup tape drives for each department, along with their administration, increasing administrative overhead.

With the proliferation of globalized data, the need for a centralized data store was once again realized. Centralization would ease the administration of data storage and backups, and especially streamline disaster recovery planning and execution. The SAN was the answer to an organization's need for data storage consolidation and streamlined administration. Failure-tolerant external storage subsystems made it possible to share access to storage devices (Barker & Massiglia, 2002). With a SAN, administrators could back up and restore data more quickly because of a less complex, less segregated architecture, and applications could access databases from a centralized repository.

Reference:
Barker, R., & Massiglia, P. (2002). External storage subsystems: Software. In Storage Area Network Essentials. John Wiley & Sons.

iSCSI vs. Fibre Channel

From SearchSMBStorage: Data storage management is a multifaceted process that is always changing. In this tutorial on SMB storage management, learn about iSCSI versus Fibre Channel storage area networks (SANs), managing your data backups, and the biggest SAN storage management trends for the year ahead. Read this article at SearchSMBStorage:
http://searchsmbstorage.techtarget.com/generic/0,295582,sid188_gci1346117,00.html#

DWDM: Dense Wavelength Division Multiplexing

Dense wavelength division multiplexing is used to split one fiber into numerous channels (lambdas), or networks. This enhances the value and utilization of dark fiber by producing multiple channels on a single strand. Within Fibre Channel communications, whether WAN or MAN, DWDM provides benefits such as increased performance and capacity, support for numerous interfaces, improved ROI, a protocol-independent physical interface, and more control and security for the client organization that leases the fiber (Massiglia & Marcus, 2002). DWDM increases the number of available channels through this virtual provisioning of channels from one dark fiber, giving the client 16 to 64 ports (Massiglia & Marcus, 2002). Another advantage of DWDM in Fibre Channel is that it can be configured in a ring, point-to-point, or multi-drop topology, operating at rates from 2 Gb/s up to 10 Gb/s.
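
The value proposition is easy to quantify. Using only the figures cited above (16 to 64 lambdas, each carrying 2 to 10 Gb/s), a back-of-the-envelope sketch of the aggregate capacity one dark fiber can yield:

```python
# Aggregate capacity of one DWDM-provisioned dark fiber:
# number of lambdas multiplied by the per-channel line rate.
def aggregate_gbps(lambdas: int, per_channel_gbps: float) -> float:
    return lambdas * per_channel_gbps

print(aggregate_gbps(16, 2))    # 32 Gb/s at the low end
print(aggregate_gbps(64, 10))   # 640 Gb/s fully provisioned
```

So the same physical strand that carried one channel can, at the figures in the text, carry anywhere from roughly 32 to 640 Gb/s in aggregate, which is where the improved ROI on leased dark fiber comes from.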

References:
Massiglia, P., & Marcus, E. (2002). Enterprise resiliency. In The Resilient Enterprise (pp. 332, 396). Veritas.

Back up and restore a Brocade 4900 series Fibre Channel switch

Creating a backup of a configuration file

Keep a backup copy of the configuration file in case the configuration is lost or unintentional changes are made. You should keep individual backup files for all switches in the fabric, and you should avoid copying configurations from one switch to another.

To back up a configuration file:
1. Open the Switch Administration window as described on page 59.
2. Click the Configure tab.
3. Click the Upload/Download subtab (see Figure 28 on page 102).
4. Click the Config Upload radio button.
5. Choose whether the backup destination is located on the network or a USB device.
• If you select the USB radio button, you can specify a file path. The USB radio button is available only if a USB device is present on the switch.
• If you selected the network as the destination, type the host IP, user name, file name, and password. You can enter the IP address in either IPv4 or IPv6 format.
6. Type the configuration file name with a fully qualified path.
7. Select a protocol to use to transfer the file.
8. Click Apply.
You can monitor the progress with the Upload/Download progress bar.
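
The inputs the upload form asks for can be sanity-checked before you click Apply. This toy validator is not Brocade's software; the protocol set is an assumption for illustration, and the check simply mirrors the form's requirements: a host IP in IPv4 or IPv6 format, a known transfer protocol, and a fully qualified path.

```python
import ipaddress

# Assumed set of transfer protocols, for illustration only.
PROTOCOLS = {"ftp", "scp", "sftp"}

def valid_upload(host_ip: str, protocol: str, path: str) -> bool:
    """Mirror the form's requirements: valid IP, known protocol, absolute path."""
    try:
        ipaddress.ip_address(host_ip)    # accepts IPv4 or IPv6 strings
    except ValueError:
        return False
    return protocol in PROTOCOLS and path.startswith("/")

print(valid_upload("192.0.2.10", "scp", "/backups/switch1.cfg"))   # True
print(valid_upload("2001:db8::1", "ftp", "/backups/switch1.cfg"))  # True
print(valid_upload("not-an-ip", "scp", "/backups/switch1.cfg"))    # False
```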

Restoring a configuration
Restoring a configuration involves overwriting the configuration on the switch by downloading a previously saved backup configuration file. Perform this procedure during planned downtime. Make sure that the configuration file you are downloading is compatible with your switch model; configuration files from other switch models might cause your switch to fail.
To download a configuration to the switch:
1. Open the Switch Administration window as described on page 59.
2. Disable the switch, as described in "Enabling and disabling a switch" on page 71. You can download configurations only to a disabled (offline) switch, and you can disable the switch only if the Admin Domain you are logged into owns the switch.
3. Click the Configure tab.
4. Click the Upload/Download subtab (see Figure 28 on page 102).
5. Click the Config Download to Switch radio button.
6. Choose whether the download source is located on the network or a USB device.
• If you select the USB radio button, you can specify a file path. The USB radio button is available only if a USB device is present on the switch.
• If you selected the network as the configuration file source, type the host IP, user name, file name, and password. You can enter the IP address in either IPv4 or IPv6 format.
7. Type the configuration file name with a fully qualified path.
8. Select a protocol to use to transfer the file.
9. Click Apply.
You can monitor the progress with the Upload/Download progress bar.
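
The critical precondition in step 2 (the switch must be offline before a configuration download) can be modeled as a tiny state machine. This is a toy sketch, not Brocade's API; it just encodes why the procedure orders disable before download.

```python
# Toy model of the restore precondition: a configuration download
# is only accepted by a disabled (offline) switch.
class Switch:
    def __init__(self) -> None:
        self.enabled = True
        self.config = "running-config"

    def disable(self) -> None:
        self.enabled = False

    def enable(self) -> None:
        self.enabled = True

    def config_download(self, backup: str) -> None:
        if self.enabled:
            raise RuntimeError("switch must be disabled before a config download")
        self.config = backup

sw = Switch()
try:
    sw.config_download("saved-config")   # rejected: switch is still online
except RuntimeError as err:
    print(err)

sw.disable()                             # step 2: take the switch offline
sw.config_download("saved-config")       # now the download is accepted
sw.enable()
print(sw.config)                         # saved-config
```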

EMC on Using the iSCSI Wizard for Celerra

A short how-to on adding iSCSI storage with EMC Celerra.

http://www.youtube.com/watch?v=zPqWWxnTwdI

About InfiniBand

A high-throughput I/O technology, InfiniBand is based on a switched fabric of serial data streams. Attaining bandwidths of 2.5 to 30 Gb/s, it exceeds the fault tolerance and scalability limits of shared-bus architectures "through the use of switches and routers in the construction of its fabric" (Pentakalos, 2002). Starting out as two separate initiatives, Future I/O (founded by Compaq, IBM, and HP) and the Next Generation I/O Initiative (Dell, Hitachi, Intel, NEC, Siemens, and Sun), a single entity was eventually formed: the InfiniBand Trade Association (IBTA). The main reason the InfiniBand Architecture (IBA) was developed was that processing power was quickly "outstripping the capabilities of industry-standard I/O subsystems using busses" (Pfister, undated). Part of the architecture's speed is achieved through serial connections: four lanes, as opposed to the wide parallel path of a PCI bus. Within the switched fabric (which provides multiple paths to a storage target), InfiniBand also provides the capability of sharing storage targets among multiple servers and the "ability to perform third party I/O" (Pentakalos, 2002). This means that the storage devices can complete an I/O transaction without the involvement of a host, freeing CPU cycles on the host and speeding up application response times. Fundamentally, InfiniBand is a point-to-point connection, not a bus. This provides fault isolation, avoids arbitration within the data flow, and scales well because of the switched network element. Implemented within high performance computing (HPC) clusters, InfiniBand provides "high bandwidth and low message latency attributes to inter-processor communication systems" (Lugones et al., 2008).
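
The 2.5 to 30 Gb/s range quoted above falls out of the lane arithmetic: an InfiniBand link aggregates one or more 2.5 Gb/s serial lanes, in 1x, 4x, and 12x widths. A quick sketch of that multiplication:

```python
# InfiniBand link bandwidth: number of serial lanes multiplied by the
# per-lane signaling rate (2.5 Gb/s for the original SDR generation).
def link_gbps(lanes: int, lane_gbps: float = 2.5) -> float:
    return lanes * lane_gbps

print(link_gbps(1))    # 2.5 Gb/s -- 1x link, the low end of the range
print(link_gbps(4))    # 10.0 Gb/s -- the 4x link the text's "four connections" describes
print(link_gbps(12))   # 30.0 Gb/s -- 12x link, the top figure cited
```

This is also why the fabric scales so cleanly: widening a link means adding lanes, while adding hosts means adding switched point-to-point links rather than contending for a shared bus.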

Reference:

Pentakalos, O. (2002). An introduction to the InfiniBand architecture. Windows 2000 Performance Guide. O'Reilly. Retrieved August 14, 2009, from http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html

Pfister, G. (undated). An introduction to the InfiniBand architecture. IBM Enterprise Server Group, Server Technology and Architecture. p. 617.

Lugones, D., Franco, D., & Luque, E. (2008). Dynamic routing balancing on InfiniBand networks. Journal of Computer Science & Technology, 8(2). Academic OneFile, Gale, BCR Regis University. Retrieved August 14, 2009.