Citrix or VPN?

This is a quick thought on the Citrix/VPN comparison question…
I would use a product such as Citrix, in which the end user works within a secured browser session over SSL. VPN clients are still widely used for encrypted remote access to the organization, but Citrix-type solutions are becoming more popular because there is no need to install a VPN client on the remote machine; this reduces the risk of a vulnerability arising from mis-configured VPN software on the client end. In addition to SSL, digital certificates should be used with browser-based access to verify the authenticity of the target site.

In addition to the Citrix-type HTTP/SSL technology, remote devices should have encryption enabled on their storage devices to protect data that is stored or transferred. A portable encryption device, such as a handheld USB device that encrypts data and communications, would be ideal.

If only a VPN were used, the VPN clients should have split tunneling disabled so that no communications occur other than the encrypted connection to the organization's intranet. With split tunneling enabled, a vulnerability arises: a second channel is opened to the outside internet, producing an "open hole" alongside the secure encrypted channel. In addition to the VPN solution, RSA SecurID token authentication would add another layer of security to remote access.
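As an illustration, assuming an OpenVPN deployment (the directives below are OpenVPN server options; other VPN products expose equivalent settings), split tunneling can be disabled from the server side by pushing a default-route redirect to every client:

```
# Hypothetical server.conf excerpt.
# redirect-gateway forces all client traffic through the tunnel,
# closing the split-tunnel "open hole" described above.
push "redirect-gateway def1 bypass-dhcp"
# Push an internal DNS server so name resolution also stays inside the tunnel.
push "dhcp-option DNS 10.0.0.53"
```

With these directives in place, the client has no second channel to the outside internet while the tunnel is up.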


The importance of an organizational security policy and an overview of its components

Steve Johns


Organizational security policy is a paramount subject in today's organizations. Since the advent of electronic communications and networks, private, public, government, and non-profit organizations alike have needed firm information security practices to maintain a high level of continuity for business processes and information systems. Business continuity involves protecting information systems from routine hardware and software failures and, vitally, ensuring those systems are secure from inside and outside hazards.

Organizations must examine liability within electronic systems and recognize the need for legal counsel. The variable that most often poses the greatest risk to security is the end user. All users of electronic systems within an organization must complete training in the proper use of, and liability within, technology. At the conclusion of training, each user signs a statement of completion acknowledging his or her participation, comprehension, and agreement to comply with policy. Organizational policy will dictate, according to the business model, the type and frequency of training required to comply with relevant law. If an employee violates the security policy, the organization can be held liable under relevant law; with policy training complete, however, the employee is liable and can be disciplined or terminated according to organizational rules. Mandating employee participation in policy training constitutes due care on behalf of the organization, and due diligence, as the organization has made "…a valid effort to protect others and continually maintain this level of effort" (Bowles, 2009).

Policy that defines security parameters within information systems must be implemented at the start of systems and network design and "…should extend from the core to all valid remote access sites" (Massiglia & Marcus, 2002). Effective security policy not only spans hardware and software interfaces but reaches every entity within the organization:

Software

All software is to be at the latest revision to ensure system compatibility and vulnerability resolution, with operating systems especially kept on a regular patching schedule and monitored with an enterprise suite such as Microsoft Systems Management Server (SMS). Antivirus servers that control and distribute antivirus updates to all client nodes within the enterprise will ensure that each node is compliant with the latest antivirus definitions, protecting servers and end-user nodes alike against viruses, worms, and other malware.
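A minimal sketch of the compliance check such a patching regime implies, assuming inventory data has already been collected by a tool such as SMS (the package names and version baselines are illustrative):

```python
# Illustrative version baselines a node must meet to be considered patched.
REQUIRED = {"openssl": (3, 0, 13), "antivirus-defs": (2024, 6, 1)}

def is_compliant(package: str, installed: tuple) -> bool:
    """A package is compliant when its installed version is at least
    the required baseline for that package."""
    return installed >= REQUIRED.get(package, (0,))

def audit(node_inventory: dict) -> list:
    """Return the packages on a node that fall below baseline."""
    return [pkg for pkg, ver in node_inventory.items()
            if not is_compliant(pkg, ver)]

# One node's inventory: openssl is stale, antivirus definitions are current.
stale = audit({"openssl": (1, 1, 1), "antivirus-defs": (2024, 6, 2)})
```

An enterprise suite would feed real inventory into a check like this and flag the stale nodes for remediation.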

Core network

At the core network level, it is essential to implement a hardware-based firewall with network address translation (NAT) and port forwarding to keep internal IP addresses undisclosed. VLAN tagging should be implemented on switches to segregate IP traffic on the LAN, and zoning should be configured within Fibre Channel storage networks.

Data access

At the end-user level, security can be realized through policy that defines the types of data users can access and the software they can install. This is often done through departmental policy, user organizational-unit policy definitions, and VLAN tagging.

End users

Often the greatest security risk, all end users are to be educated on organizational security policy, usually on a semi-annual or annual basis, with a sign-off at the completion of training. The training is to be simple enough for all end users to understand, and human resources will record attendance.

Remote access

Means of data access are to be controlled. For example, remote telecommuters should reach internal resources through a single sanctioned means, such as Citrix or a VPN client, and all remote access is to be logged.

Routing

Routing can be used to segregate internal networks into subnets, and access lists can be used to filter traffic from specific networks, whether internal or external.
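The access-list idea can be sketched with Python's standard ipaddress module; the subnets and the permit-only policy below are illustrative, not a production filter:

```python
import ipaddress

# Illustrative access list: traffic is permitted only from these subnets.
PERMITTED = [
    ipaddress.ip_network("10.10.0.0/16"),    # internal engineering VLAN
    ipaddress.ip_network("192.168.5.0/24"),  # management subnet
]

def permit(source_ip: str) -> bool:
    """Return True when the source address falls inside a permitted subnet,
    mirroring a router access list that filters traffic by network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in PERMITTED)

internal_allowed = permit("10.10.4.7")    # host on the engineering VLAN
external_blocked = permit("203.0.113.9")  # host outside all permitted subnets
```

A real access list would live on the router or firewall, but the matching logic is the same: compare the source network of each packet against a list of permitted prefixes.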

Organizational security policy should be constructed in the manner of the security systems development life cycle, or SSDLC. Following the same methodology as the SDLC, the SSDLC focuses on the security of systems within the organization. It must be stressed that "implementing information security involves identifying specific threats and creating specific controls to counter those threats" (Whitman & Mattord, 2005). The SSDLC is designed to do this within the planning, design, installation, and administration of information systems. In effect, the SSDLC is to be part of the overall enterprise architecture and business plan from inception. The SSDLC consists of multiple steps:

Investigation

Directed from upper management, this phase consists of an initial feasibility analysis to determine the need for security policy and whether the organization has the resources to conduct the project. After this assessment, problems, needs, and goals are examined.

Analysis

Upon acquiring data from the investigation phase, a study of that data is undertaken. Existing security procedures are examined, and legal issues and relevant law are reviewed by the organization's legal counsel to determine the plan's compliance with law and how this will affect the design of the developing security policy. Risk management is a large part of this stage, in which an assessment of the risks facing the organization is put into action.

Logical design
Based upon the previously acquired artifacts, the logical design is a blueprint of the security regulations and implementation of "key policies that influence later decisions" (Whitman & Mattord, 2005). Response to incidents is also planned at this stage for continuity and disaster recovery. The decision to outsource is usually made at this stage.

Physical design
The implementation of the physical information systems and appliances is based upon the previously architected logical design. Many system architectures can be presented at this stage and a final architecture agreed upon. Outside vendor support and consulting can be a positive element in determining the right systems to fit the desired security policy.

Implementation

As with the traditional SDLC, this stage includes purchasing the information systems and installing, configuring, and testing them, as well as administrator and operator training.

Maintenance and change
Within an environment where threats are continuous and new threats emerge frequently, it is essential that security systems are diligently updated and monitored. Updates should include anti-virus definitions, router and switch firmware, and storage system firmware, along with wireless access point monitoring and auditing of all access logs. This is an endless process that will ensure the organizational security of information systems and data. Disaster recovery testing can be a part of this phase, and should be executed on a regular schedule.

Security policy is the means of ensuring organizational compliance with local, state, and federal law. It is also due diligence on behalf of the organization in preventing breaches of security-related practices by employees and contractors. With the globalization of business and electronic commerce and the ever-present threat to data, a well-planned and rigorously executed security policy will enable an organization to remain compliant with the law and ensure business continuity.

Massiglia, P., & Marcus, E. (2002). Information Technologies for Disaster Recovery. The Resilient Enterprise. p. 229. Veritas.

Whitman, M., Mattord, H. (2005). The Security Systems Development Life Cycle. Principles of Information Security. p.23. Thomson.

Bowles, B. (2009). Legal, Ethical, & Professional Issues in Information Security. Chapter 3. [Lecture notes] [PowerPoint]. Denver, Colorado: Regis University. Enterprise Information Assurance.

Key principles of risk management

Risk management is an important element of organizational security and business continuity. It includes identifying vulnerabilities in the business, then planning and executing the steps that ensure the security and continuity of the organization's information systems.

To protect an organization from internal and external threats, knowledge of those threats must be acquired. One must know one's enemy when it comes to defense: the more that is known about an opponent, the better prepared one is against it. Within an organization, threat assessment must be a constant topic of research. To formulate and implement a risk assessment strategy, an organized plan must be produced that first defines the classes of information assets, then defines threats, and finally defines control strategies.

Identifying and classifying information assets

This phase includes identifying people and groups, procedures, data, software, and hardware and networking entities, with asset identification applied to each of these objects.

People include both employees and non-employees, such as outside contractors and visitors. Each employee should have an accurate record of security-level definitions and training, and records of outside contractors and vendors should be kept up to date.

Procedures, or courses of action, within each group should be reviewed and enforced. Procedures are defined by policy, and policy should always be enforced, as policy is only as effective as its level of enforcement.

Classifying data is a task for information systems administrators, and includes prioritizing information systems and determining the level of risk in each. Of course, there is no such thing as a system with no risk: any information system can be compromised, and due diligence must be applied to every system, no matter its importance. If it is part of the organization, it must be secured, from smartphone to mainframe. Data is the lifeblood of an organization and must not only be secured but also kept from the danger of corruption by fragmentation, snooping, dropped packets during transport, and exposure to outside entities. Data can be classified by "owner, creator, and manager…size of data structure; location" (Whitman & Mattord, 2005).

Software must be inventoried and its licensing kept accurate. This includes operating systems and applications on servers as well as end-user PCs. Monitoring software that queries software installations and configurations on each server and user workstation is essential to software inventory and to the control of software policy and security.

Finally, hardware and network equipment should be accurately inventoried and scanned for compliance with the latest component and firmware updates.
Network hardware inventory must include IP and MAC addresses, serial numbers, and software version (many attacks exploit known vulnerabilities in a particular firmware revision), as well as the controlling entity. Determining who controls each element is important, as an organized administrative strategy will ensure that no network device remains unmanaged.
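A sketch of what such an inventory record might look like, and of the check for unmanaged devices; the field names and values below are illustrative:

```python
# Illustrative network-hardware inventory records.
devices = [
    {"ip": "10.0.0.1", "mac": "00:1a:2b:3c:4d:5e", "serial": "SW-1001",
     "firmware": "15.2(4)", "owner": "network-team"},
    {"ip": "10.0.0.2", "mac": "00:1a:2b:3c:4d:5f", "serial": "SW-1002",
     "firmware": "15.2(4)", "owner": None},  # no controlling entity recorded
]

def unmanaged(inventory: list) -> list:
    """Return the serial numbers of devices with no controlling entity,
    so that no network device remains unmanaged."""
    return [d["serial"] for d in inventory if not d["owner"]]

orphans = unmanaged(devices)
```

Running a report like this against the full inventory makes gaps in administrative ownership immediately visible.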

Defining threat

Threat assessment includes many variables. Threats can originate within every entity in the organization as well as from outside. Human error is common, especially regarding data integrity: data can accidentally be deleted, moved, or corrupted. Another threat is copyright infringement, which can be the result of piracy or a lack of licensing compliance. Deliberate acts of electronic trespass, vandalism, or extortion are a threat, as is theft in its various forms, such as stealing data or hardware. Worms, viruses, and other malware are a constant threat to system integrity, and are often at the hands of end users who breach policy by downloading and installing software that has not been authorized by systems administration. Forces of nature are another threat: any kind of natural disaster is possible, and this requires another plan, the disaster recovery plan. Many threats originate from the outside, such as the network providers that grant WAN connectivity to data and telecommunications networks; sometimes these core connections are interrupted, diminishing quality of service (QoS). Finally, systems hardware and software, as well as obsolete systems, can fail, resulting in corrupted and lost data and the infamous systems downtime: the bane of any CIO.

Control strategies

Risk within information security is a competitive disadvantage. Once this is realized, four major control strategies can be undertaken after the initial vulnerability assessment.


Avoidance

Prevention is the first step to security. This includes eradicating vulnerabilities within systems and implementing policy that restricts access to assets. Management mandates policy and ensures that "certain procedures are always followed" (Whitman & Mattord, 2005). Avoidance also includes educating employees on new and existing technologies, enabling safer and more experienced use of information systems. On the systems administration side, applying technology should include a strong password policy to control access.
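A minimal sketch of such a password policy check; the specific rules (length and character classes) are illustrative, and a real policy would be set by the organization:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Hypothetical strong-password rule: minimum length plus at least one
    lowercase letter, one uppercase letter, one digit, and one symbol."""
    return (len(password) >= min_length
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

weak_rejected = meets_policy("password")        # too short, no mixed classes
strong_accepted = meets_policy("Tr0ub4dor&33x") # meets every rule
```

In practice the check would be enforced at the directory-service level rather than in application code, but the rules themselves look like this.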


Transference

The saying that there is always someone better at it applies here. Transference is the process of shifting the risk to other organizations or processes. For example, outsourcing security management puts its administration into the hands of more experienced people, which can free a business of these duties so that it can focus on its core operations: the product.


Mitigation

Through preparation and diligent pre-planning, the impact "caused by the exploitation of vulnerability" (Whitman & Mattord, 2005) can be minimized. This includes the disaster recovery plan, the business continuity plan, and the incident response plan. These plans require substantial time and resources to construct and are a large part of any organization's enterprise architecture. This is the stage where the constant auditing and project management within information security planning happen.


Acceptance

This strategy is chosen when the cost of defending an asset is not justifiable. Sometimes it is more economical to replace something than to bear the expense of protecting it.

Selecting a risk control strategy relies on the initial feasibility study, which includes a cost-benefit analysis that determines the worth of protecting the information assets. The costs to be reviewed are purchase, service, maintenance, and training expenses. Information security is "required because the technology applied to information creates risk" (Blakley, McDermott & Geer, 2001). Risk management is the core of today's business continuity.
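One common way to frame that cost-benefit analysis is annualized loss expectancy (ALE); the dollar figures below are hypothetical, chosen only to make the arithmetic concrete:

```python
def ale(single_loss_expectancy: float, annualized_rate: float) -> float:
    """Annualized loss expectancy: expected loss per incident times
    expected incidents per year."""
    return single_loss_expectancy * annualized_rate

def cba(ale_prior: float, ale_post: float, annual_safeguard_cost: float) -> float:
    """Cost-benefit of a safeguard: risk reduced minus what the safeguard
    costs per year. A positive result suggests the control is worth it."""
    return ale_prior - ale_post - annual_safeguard_cost

# Hypothetical case: a $50,000 incident expected twice a year, reduced to
# once every five years by a control costing $20,000 per year.
benefit = cba(ale(50_000, 2.0), ale(50_000, 0.2), 20_000)
```

Here the control reduces annual expected loss from $100,000 to $10,000 at a cost of $20,000, a net benefit of $70,000 per year, so protection is justified; when the result goes negative, acceptance or replacement becomes the rational choice.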

Whitman, M., Mattord, H. (2005). Risk Management. Principles of Information Security. p.116. Thomson.

Blakley, B., McDermott, E., & Geer, D. (2001). Information Security is Information Risk Management. Proceedings of the 2001 Workshop on New Security Paradigms. ACM.

Types of Encryption

There are numerous kinds of encryption used to protect electronic data. Using various algorithms, encryption supports numerous communication methodologies. Examined here are a few popular means of encryption in use today.


Advanced Encryption Standard (AES)

This cryptographic algorithm, a federal information processing standard, is used within government for protecting data in non-classified environments. Designed to replace legacy encryption methods such as DES and 3DES, AES has been approved for use by "the Secretary of Commerce as the official federal governmental standard", and its selection involved "…the U.S. government, private industry, and academia" (Whitman & Mattord, 2005). Experts tout that compromising AES by brute force would take over 4 quintillion years. The cipher works by converting a 128-bit block of text into 128 bits of encrypted output, otherwise known as ciphertext, using one of three key strengths: 128, 192, or 256-bit keys. The algorithm behaves differently at each key size, so "…the increasing key sizes not only offer a larger number of bits with which you can scramble the data, but also increase the complexity of the cipher algorithm" (Allman, 2002).

Unlike its predecessor DES, AES is not a Feistel cipher; it repeats its core transformation a number of times that depends on the key size. Known as rounds, these loop repetitions within the cipher "complete pre-round and post-round operations" (Allman, 2002).
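The relation between key size and round count is fixed by the AES specification (FIPS-197), and can be stated directly:

```python
# Key size (bits) -> number of rounds, as fixed by the FIPS-197 AES standard.
AES_ROUNDS = {128: 10, 192: 12, 256: 14}

def rounds_for(key_bits: int) -> int:
    """Return how many rounds AES performs for a given key size."""
    if key_bits not in AES_ROUNDS:
        raise ValueError("AES keys are 128, 192, or 256 bits")
    return AES_ROUNDS[key_bits]
```

So a 256-bit key not only enlarges the key space but also runs four more rounds of the cipher than a 128-bit key does.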

Blowfish

A symmetric-cipher form of encryption, Blowfish is popular for protecting electronic documents, PDFs, and compressed archives. Used for electronic transfer over the internet or locally on a workstation, Blowfish uses a passphrase-derived key for encryption and decryption of data. It is a 64-bit block cipher, encrypting and decrypting data in 64-bit chunks. Blowfish can be used to verify the sender of a message, "…or that the message is unaltered; however, you cannot prove these things to anyone else without revealing your key." (McBride, 2004).

A free form of encryption, Blowfish is unpatented and license-free. Used in numerous business applications and operating systems such as Linux, as well as the popular TiVo DVR product, this symmetric cipher has not yet been cracked, according to most cryptographers.
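Because Blowfish operates on 64-bit (8-byte) blocks, plaintext must be padded to a block boundary and split into chunks before encryption. A sketch of that chunking (the PKCS#5-style padding shown is a common choice, used here for illustration; the actual cipher rounds are omitted):

```python
BLOCK = 8  # Blowfish's 64-bit block size, in bytes

def to_blocks(data: bytes) -> list:
    """Pad data PKCS#5-style to a multiple of 8 bytes, then split it into
    the 64-bit chunks a Blowfish implementation would process one at a time."""
    pad = BLOCK - (len(data) % BLOCK)  # always 1..8 bytes of padding
    padded = data + bytes([pad]) * pad
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

chunks = to_blocks(b"attack at dawn")  # 14 bytes -> two padded 8-byte blocks
```

Each 8-byte chunk would then be fed through the cipher; the receiver strips the padding after decrypting the final block.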

Digital Certificate

A mainstay of hypertext transfer security, the digital certificate is an electronic document that identifies an entity, such as a web site, by storing a key value about that entity's identity. Often registered with a third party known as a certificate authority, such as the digital certificate provider VeriSign, the certificate provides a means of proving the identity of the entity, or site, to the requestor. According to PC Magazine, there are four general uses for digital certificates: secure (SSL and HTTPS) web connections, web client authentication, signing and encrypting email, and software publishing (Pleas, 1999).

The digital certificate contains a digital signature that is used for verification. The verifier contacts the certificate authority (CA) database, or repository, in which the certificate is registered, and from that database the site is verified. Two such types of certificates are in use today: PGP (Pretty Good Privacy) and X.509v3 from the International Telecommunication Union (ITU-T).


Secure Sockets Layer (SSL)

The Secure Sockets Layer was developed by Netscape to provide secure channels for browser communication over the internet. Within a client-server connection, the server controls the secure connection by signaling to the browser client that a secure connection is necessary. The server then sends its certificate, which contains its public key, for the client's authentication; the client verifies the certificate against a trusted certificate authority and uses the public key to negotiate a shared session key. Once verification succeeds, the SSL connection is established.
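Python's standard ssl module mirrors this handshake from the client side; a default client context refuses to establish the session unless the server's certificate verifies and matches the requested hostname:

```python
import ssl

# Default client-side settings enforce the verification step described above.
context = ssl.create_default_context()

cert_required = context.verify_mode == ssl.CERT_REQUIRED  # server must present a valid certificate
hostname_checked = context.check_hostname                 # certificate must match the host requested

# A real connection would wrap a socket, e.g.:
#   with socket.create_connection(("example.org", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.org") as tls:
#           ...  # handshake completes only if the certificate verifies
```

If either check fails, the library raises an error and no secure channel is established, which is exactly the behavior the browser-based Citrix access discussed earlier depends on.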


McBride, M. (2004). Securing Communications and Files. Searcher, Vol. 12, Issue 5, p. 46.

Allman, S. (2002). Encryption and security: the Advanced Encryption Standard. How It Works, Vol. 47, p. 26.

Pleas, K. (1999). Certificates, Keys, and Security. PC Magazine, Vol. 18, Issue 8.